What is the most lucid, intuitive explanation for the various FTs - CFT, DFT, DTFT and the Fourier Series? Even after having studied these for quite sometime, I tend to forget [if I'm out of touch for a while] how they are related to each other and what each stands for [since they have such similar sounding names]. I'm hoping you'd come up with an explanation that is so intuitive and mathematically beautiful that they'll get embedded into my memory for ever and this thread will serve as a super quick refresher whenever I [or anyone else] needs it. fourier-transform Vighnesh VighneshVighnesh $\begingroup$ Probably should start with the Fourier series $\endgroup$ – endolith Nov 14 '11 at 17:36 $\begingroup$ Are you familiar with Pontryagin duality? $\endgroup$ – Lorem Ipsum Nov 14 '11 at 18:29 $\begingroup$ @yoda - No. Could you please elaborate or point me to some good references? [I'll of course google it out.] $\endgroup$ – Vighnesh Nov 14 '11 at 18:56 $\begingroup$ "Steve on Image Processing": Fourier transforms addresses exactly this question. $\endgroup$ – nobar Apr 18 '14 at 5:11 $\begingroup$ I don't when to rewrite an answer here (unless required to). Yet, a possible answer is given in Can I study continuous time Fourier Transform and treat the rest as special cases following the Pontryagin duality track proposed by @LoremIpsum $\endgroup$ – Laurent Duval Sep 6 '17 at 22:19 I wrote this handout as a complement to Oppenheim and Willsky. Please take a look at Table 4.1 on page 14, reproduced below. (Click for larger image.) I wrote that table specifically to answer questions such as yours. Note the similarities and differences among the four operations: "Series": periodic in time, discrete in frequency "Transform": aperiodic in time, continuous in frequency "Continuous Time": continuous in time, aperiodic in frequency "Discrete Time": discrete in time, periodic in frequency I hope you find these notes helpful! Please feel free to distribute as you wish. Steve TjoaSteve Tjoa $\begingroup$ Good summary. Note that the "Discrete Time Fourier Series" referenced in the table above is typically referred to as the discrete Fourier transform (DFT). $\endgroup$ – Jason R Nov 15 '11 at 12:35 $\begingroup$ To nitpick a little, this answer is indeed a good summary as Jason R says, and something that is worth having permanently on dsp.SE so that everyone can link to it for future reference, but it is not really responsive to the question which asked for an intuitive explanation of these issues (lucidity presumably being an added bonus and not absolutely reqjuired since it is mentioned in the title but not in the text of the question). $\endgroup$ – Dilip Sarwate Nov 15 '11 at 13:32 $\begingroup$ A great response Steve - I believe this is what the OP is looking for. Short, sweet, and to the point. $\endgroup$ – Spacey Nov 15 '11 at 16:49 $\begingroup$ Is it a misprint at the vary bottom of the page 2 of your handout? It's stated: $x(t)b(t-t_0)=x(t_0)b(t-t_0)$. Wasn't it meant $\int_{-\infty}^{\infty}x(t)b(t-t_0)dt = x(t_0)$? $\endgroup$ – mbaitoff Nov 17 '12 at 8:18 $\begingroup$ Not a misprint. Both of your statements are true, but I intended to write the first one because that section of the guide describes the basic, axiomatic definitions of the unit impulse. The second statement is then derived from those definitions: $\int_{\infty}^{\infty} x(t)\delta(t-t_0) dt = \int_{\infty}^{\infty} x(t_0)\delta(t-t_0) dt = x(t_0) \int_{\infty}^{\infty} \delta(t-t_0) dt = x(t_0)$. 
$\endgroup$ – Steve Tjoa Nov 20 '12 at 19:19 For a lucid and correct explanation of these concepts, you would have to go through some of the standard textbooks (Oppenheim-Schafer, Proakis-Manolakis or "Understanding Digital Signal Processing" by Richard Lyons which is a very good but relatively less popular book). But assuming a coffee-table discussion, I will be making some extremely loose statements in what follows. :) For a general continuous time signal, you wouldn't expect any particular frequency to be absent, so its Fourier Transform (or the Continuous Fourier Transform) would be a continuous curve with support possibly -inf to +inf. For a periodic continuous signal (period T), Fourier expressed the signal as a combination of sines and cosines having the same period (T, T/2, T/3, T/4, ...). Effectively, the spectrum of this signal is a series of spikes at locations 1/T, 2/T, 3/T, 4/T, ... This is called the Fourier Series representation. There is a theorem that says that the Fourier series representation of any periodic continuous time signal converges to the signal as you include more and more sines and cosines (or complex exponentials) in the mean square sense. Moral so far: periodicity in time => spiky spectrum On to discrete time... What happens if you sample a continuous time signal? It should be clear that for a sufficiently high signal, you wouldn't be able to reconstruct the signal. If you make no assumption about the frequencies in the signal, then given the sampled signal, there is no way you can say what the true signal is. In other words, different frequencies are represented equivalently in the discrete-time signal. Going through some math tells you that you can obtain the spectrum of the sampled signal from the original continuous signal. How? You shift the spectrum of the continuous time signal by amounts +-1/T, +-2/T, ... and add all the shifted copies (with some scaling). This gives you a continuous spectrum that's periodic with period 1/T. (note: the spectrum is periodic as a result of sampling in time, the time signal doesn't have to be periodic) Since the spectrum is continuous, you can as well represent it with just one of its periods. This is the DTFT ("Discrete-Time" Fourier Transform). In the case where your original continuous time signal has frequencies no higher than +-1/2T, the shifted copies of the spectrum don't overlap and hence, you can recover the original continuous-time signal by selecting one period of the spectrum (the Nyquist sampling theorem). Another way to remember: spiky time signal => periodicity in spectrum What happens if you sample a continuous-time periodic signal with sampling period T/k for some k? Well, the spectrum of the continuous-time signal was spiky to being with, and sampling it by some divisor of T means that the spikes in the shifted copies fall exactly on multiples of 1/T, so the resulting spectrum is a spiky periodic spectrum. spiky periodic time signal <=> spiky periodic spectrum (assuming that the period and sampling frequency are "nicely related" as above.) This is what is known as the DFT (Discrete Fourier Transform). FFT (Fast Fourier Transform) is a class of algorithms to compute the DFT efficiently. The way DFT is invoked is as follows: Say you want to analyze a sequence of N samples in time. 
You could take DTFT and deal with one of its periods, but if you assume that your signal is periodic with period N, then DTFT reduces to DFT and you have just N samples of one period of DTFT which completely characterize the signal. You can zero-pad the signal in time to get a finer sampling of the spectrum and (many more such properties). All of the above is useful only if accompanied by a study of DSP. The above are just some very rough guidelines. rk2rk2 Let $x(t)$ denote a bounded function with period $T$, that is, for all real numbers $t$, $x(t+T) = x(t)$. As a particular example, $\cos(2\pi t/T)$ is such a function. We want to find the "best" approximation $a_n\cos(2\pi nt/T)$ for this function where we wish to choose the coefficient $a_n$ so that $$\int_0^T (x(t) - a_n\cos(2\pi nt/T))^2\,\mathrm dt,$$ the squared error is as small as possible. Expanding out the integrand, we have $$\text{squared error} = \int_0^T x^2(t)\,\mathrm dt - 2a_n \int_0^T x(t)\cos(2\pi nt/T)\,\mathrm dt +(a_n)^2\int_0^T \cos^2(2\pi nt/T)\,\mathrm dt.$$ The leftmost integral is the energy $E$ delivered by one period of $x(t)$ while rightmost integral has value $T/2$ and so we see that $$\text{squared error} = E - 2a_n \int_0^T x(t)\cos(2\pi nt/T)\,\mathrm dt +(a_n)^2\frac{T}{2}.$$ Now. for $a > 0$, the quadratic function $az^2 + bz + c$ has a minimum at $z = -b/2a$ (midway between the roots $(-b/2a) \pm \sqrt{b^2 - 4ac}/2a$ !!) and so, since we have expressed the squared error as a quadratic function of $a_n$, the choice of $a_n$ that minimizes the squared error is $$a_n = \frac{2}{T}\int_0^T x(t)\cos(2\pi nt/T)\,\mathrm dt.$$ Similarly, choosing $b_n$ as $$b_n = \frac{2}{T}\int_0^T x(t)\sin(2\pi nt/T)\,\mathrm dt$$ minimizes the squared error between $x(t)$ and $b_n\sin(2\pi nt/T)$. Thus we see that Fourier series is nothing but a cheap trick to find the minimum squared error approximation to a periodic function $x(t)$ in terms of the sine and cosine signals of the same period and harmonics thereof. Dilip SarwateDilip Sarwate Endolith is correct in that, if you actually start with the Fourier series, and see how it is extended to the Fourier transform, then things start beginning to make a lot of sense. I give a brief explanation for this in the first half of this answer. A good (perhaps not simple) way to look at the Fourier transform family (by which I mean the 4 you've listed above), is through the Pontryagin duality goggles. It gives you a nice way to remember the different transforms by the original and transformed domains. For a complex valued function on $\mathbb{R}$ (assuming other necessary conditions for the F.T. to exist), its Fourier transform is also a complex valued function on $\mathbb{R}$. The space $\mathbb{R}$ is a Pontryagin self-dual and you can say that if a transform in the entire family has $\mathbb{R}$ as both the original and transformed domain, then it is the Fourier transform (or CFT, as you called it). A complex valued sequence of $n$ numbers can be viewed as a periodic complex valued function on $\mathbb{Z}/n\mathbb{Z}$, which is a cyclic integer modulo $n$ group (see finite abelian groups for more info). The transform for this sequence also has the domain $\mathbb{Z}/n\mathbb{Z}$ (self-dual) and this is the discrete Fourier transform. The domain of the unit circle, $\mathbb{T}$ (all complex numbers with absolute value 1; also see circle group) and the set of integers $\mathbb{Z}$ are Pontryagin duals of each other. 
Similar to the first two, a transform between $\mathbb{Z}$ to $\mathbb{T}$ exists and is what we call the discrete-time Fourier transform and the other way round is the Fourier series, from which everything started. This answer isn't fully complete and I'll perhaps build on this answer to make a few points clear when I have the time, but until then, this might be something to chew on until you get a more intuitive explanation from someone else. Also try reading variants of Fourier analysis on Wikipedia. I think the foremost thing is to fundamentally understand why do we need fourier transforms. They are one of many possible signal transforms, but also one of the most useful ones. A transform basically transforms a signal into another domain which may give us insight about the signal in that domain, or or it may be that the domain is mathematically easy to work. Once we are done working in that domain, we can take inverse transform to get to the desired result more easily. The most basic building block in fourier theory are monotones (sines and cosines). We can decompose a signal into its frequency components(monotones) using the fourier math. So, fourier transform basically transforms a signal from time domain to frequecy domain. The coefficient of each of the monotones in the fourier series tells us about the strength of that frequency component in the signal. Fourier transforms(CFT, DFT) explicitly gives us a frequency domain view of the signal. In nature, sines and cosines are the prominent waveforms. Synthetic signals like square wave, or signals having sharp fluctuations are less likely to occur naturally and not surprisingly compose of infinite range of frequencies as very clearly explained by fourier transforms. People had doubts whether any signal can be wretten as sum of sines/cosines. Fourier showed square waveform(which is far away from sines/cosines) can indeed be. White noise contains all the frequencies with equal strength. Also, if you are working with fourier series, then the coefficients along with the phase term can be seen as that required to properly superimpose the constituent sinosoidal waveforms so that the superposition is indeed the required signal of which you are taking the transform. When working with fourier transforms, the complex numbers implicitly have the phase terms and the required magnitude of each of the monotones. (integration is roughly like summation. continuous=>integration, discrete=> summation) I think once you have the understanding of the theme of a concept, rest all are just details that you will yourself have to understand by reading books. Reading about the application of fourier transforms to various fields will give you better perceptive. abhishekabhishek A DFT is a transform of a vector of numbers pairs from one orthogonal space to another. Very commonly done as a numerical computation. For some reason, when taking one bunch of numbers from the real world, the 2nd bunch of numbers often turns out to be close enough to something quite useful. I am reminded of the The Unreasonable Effectiveness of Mathematics in the Natural Sciences, especially regarding applying the DFT to many systems the seem to be approximated by various kinds of 2nd degree differential equation, even the sound of the coffee spoon I just dropped. The other 3 XYZ-FTs make assumptions about the existence of some mythical infinite entities to help symbolic solutions fit on the whiteboard before the coffee gets too cold. They are the "spherical cows" of signal processing. 
The DTFT and Fourier Series pretend that one vector can be extended infinitely at the cost of infinite density of the other entity. The Fourier Series pretends that both entities can be infinite continuous functions. Take enough math courses and one might even determine all the definitions and assumptions required to make these fictional entities exact and complete duals in some sense. $\begingroup$ What is meant by "orthogonal space" in your first sentence? What is the space orthogonal to, or what special property does the space have that you are distinguishing it from other run-of-the-mill spaces by bestowing on it the adjective "orthogonal"? $\endgroup$ – Dilip Sarwate Nov 15 '11 at 3:40 $\begingroup$ Maybe "orthonormal" the more correct term for the vector spaces? $\endgroup$ – hotpaw2 Nov 15 '11 at 4:00 $\begingroup$ I have generally seen "rthogonal" and "orthonormal" applied as adjectives to small collections of vectors or matrices. $\mathbf x$ and $\mathbf y$ are orthogonal if $\langle\mathbf{x},\mathbf{y}\rangle=0$ and orthonormality requires in addition that the vectors have unit length. A matrix $A$ is called orthogonal if $AA^T$ is a diagonal matrix and orthonormal if $AA^T$ is the identity matrix. Does orthogonal or orthonormal space mean that all the vectors in the space are orthogonal to each other or are orthogonal and have unit length too? If so, can you give an example of such a space? $\endgroup$ – Dilip Sarwate Nov 15 '11 at 12:42 $\begingroup$ The dot product between all sines or cosines that are exactly periodic in a DFT aperture length is zero, except for identical frequency functions. Even if N is larger than the number of coffee beans in the bag. Make them unit amplitude for orthonormal. $\endgroup$ – hotpaw2 Nov 15 '11 at 21:21 $\begingroup$ Your space is the space of $N$-vectors of complex numbers (since you said "vectors of pairs of numbers"). There are no sines and cosines in the space, only $N$-tuples of complex numbers, and any orthogonal or orthonormal set of such $N$-vectors can contain at most $N$ such $N$-tuples. I would recommend deleting your comment above, and possibly even your whole answer. $\endgroup$ – Dilip Sarwate Nov 15 '11 at 22:44 Not the answer you're looking for? Browse other questions tagged fourier-transform or ask your own question. Relation between Fourier Series & Fourier transform Intuitively, what is fourier series representation of a signal? Also intuitively what is frequency response? Why is the Fourier transform so important? What is the meaning of the DFT? What is the basic concept of Fourier transform Benefit to know Fourier series for image processing? Can I study continuous time Fourier Transform and treat the rest as special cases What is the meaning of $Ta_k$ of fourier series or transform? When to use the DTFT vs the DFT (and their inverses) in analysis? Fourier Transforms and Series for the NON mathematically inclined. Intuitive explanation of the Fourier Transform for some of the functions Why Fourier series if Fourier transform can be calculated for both periodic and aperiodic? What is an Intuitive Explanation of the Phase of a Signal Maximum Magnitude Deviation between DFT and DTFT Is possible reach the DFT if I have the DTFT? Difference between the DTFT and DFT The Fourier Transform of a periodic function and it's series
Extremal absorbing sets in low-density parity-check codes AMC Home New nonexistence results on perfect permutation codes under the hamming metric doi: 10.3934/amc.2021010 Online First articles are published articles within a journal that have not yet been assigned to a formal issue. This means they do not yet have a volume number, issue number, or page numbers assigned to them, however, they can still be found and cited using their DOI (Digital Object Identifier). Online First publication benefits the research community by making new scientific discoveries known as quickly as possible. Readers can access Online First articles via the "Online First" tab for the selected journal. Partial direct product difference sets and almost quaternary sequences Büşra Özden 1,, and Oǧuz Yayla 2, Hacettepe University, Graduate School of Science and Engineering, Beytepe, Ankara, Turkey Institute of Applied Mathematics, Middle East Technical University, 06800, Ankara, Turkey This paper is a part of Büşra Özden's PhD thesis Received May 2020 Revised January 2021 Early access May 2021 Table(2) In this paper, we study the $ m $-ary sequences with (non-consecutive) two zero-symbols and at most two distinct autocorrelation coefficients, which are known as almost $ m $-ary nearly perfect sequences. We show that these sequences are equivalent to $ \ell $-partial direct product difference sets (PDPDS), then we extend known results on the sequences with two consecutive zero-symbols to non-consecutive case. Next, we study the notion of multipliers and orbit combination for $ \ell $-PDPDS. Finally, we present two construction methods for a family of almost quaternary sequences with at most two out-of-phase autocorrelation coefficients. Keywords: Nearly perfect sequence, partial direct product difference set, cyclotomic classes, quaternary sequence, autocorrelation. Mathematics Subject Classification: 05B10and94A55. Citation: Büşra Özden, Oǧuz Yayla. Partial direct product difference sets and almost quaternary sequences. Advances in Mathematics of Communications, doi: 10.3934/amc.2021010 K. T. Arasu, J. F. Dillon and K. J. Player, Character sum factorizations yield sequences with ideal two-level autocorrelation, IEEE Transactions on Information Theory, 61 (2015), 3276-3304. doi: 10.1109/TIT.2015.2418204. Google Scholar K. T. Arasu, C. Ding, T. Helleseth, P. V. Kumar and H. M. Martinsen, Almost difference sets and their sequences with optimal autocorrelation, IEEE Transactions on Information Theory, 47 (2001), 2934-2943. doi: 10.1109/18.959271. Google Scholar [3] T. Beth, D. Jungnickel and H. Lenz, Design Theory: Volume 1, Cambridge University Press, 1999. Google Scholar Y. Cai and C. Ding, Binary sequences with optimal autocorrelation, Theoretical Computer Science, 410 (2009), 2316-2322. doi: 10.1016/j.tcs.2009.02.021. Google Scholar A. Çeșmelioǧlu and O. Olmez, Graphs of vectorial plateaued functions as difference sets, Finite Fields and Their Applications, 71 (2021), 101795. doi: 10.1016/j.ffa.2020.101795. Google Scholar Y. M. Chee, Y. Tan and Y. Zhou, Almost p-ary perfect sequences, in International Conference on Sequences and Their Applications doi: 10.1007/978-3-642-15874-2_34. Google Scholar I. Chih-Lin and R. D. Gitlin, Multi-code CDMA wireless personal communications networks, Proceedings IEEE International Conference on Communications ICC'95, 2 (1995), 1060-1064. doi: 10.1109/ICC.1995.524263. Google Scholar [8] C. J. Colbourn and J. H. Dinitz, Handbook of Combinatorial Designs, CRC press, 2006. 
doi: 10.1201/9781420049954. Google Scholar L. E. Dickson, Cyclotomy, higher congruences, and Waring's problem, American Journal of Mathematics, 57 (1935), 391-424. doi: 10.2307/2371217. Google Scholar J. F. Dillon and H. Dobbertin, New cyclic difference sets with singer parameters, Finite Fields and Their Applications, 10 (2004), 342-389. doi: 10.1016/j.ffa.2003.09.003. Google Scholar C. Ding, T. Helleseth and K. Y. Lam, Several classes of binary sequences with three-level autocorrelation, IEEE Transactions on Information Theory, 45 (1999), 2606-2612. doi: 10.1109/18.796414. Google Scholar C. Ding, T. Helleseth and H. Martinsen, New families of binary sequences with optimal three-level autocorrelation, IEEE Transactions on Information Theory, 47 (2001), 428-433. doi: 10.1109/18.904555. Google Scholar V. E. Gantmakher and M. V. Zaleshin, Almost six-phase sequences with perfect periodic autocorrelation function, in International Conference on Sequences and Their Applications doi: 10.1007/978-3-319-12325-7_8. Google Scholar [14] S. W. Golomb and G. Gong, Signal Design for Good Correlation: For Wireless Communication, Cryptography, and Radar, Cambridge University Press, 2005. doi: 10.1017/CBO9780511546907. Google Scholar B. Gordon, W. Mills and L. Welch, Some new difference sets, Canadian Journal of Mathematics, 14 (1962), 614-625. doi: 10.4153/CJM-1962-052-2. Google Scholar M. Hall, A survey of difference sets, Proceedings of the American Mathematical Society, 7 (1956), 975-986. doi: 10.1090/S0002-9939-1956-0082502-7. Google Scholar T. Helleseth and G. Gong, New nonbinary sequences with ideal two-level autocorrelation, IEEE Transactions on Information Theory, 48 (2002), 2868-2872. doi: 10.1109/TIT.2002.804052. Google Scholar T. Helleseth and P. V. Kumar, Sequences with low correlation, in Handbook of coding theory, Vol. I, II Google Scholar T. Helleseth, P. V. Kumar and H. Martinsen, A new family of ternary sequences with ideal two-level autocorrelation function, Des. Codes Cryptogr., 23 (2001), 157-166. doi: 10.1023/A:1011208514883. Google Scholar J. R. Hollon, M. Rangaswamy and P. Setlur, New families of optimal high-energy ternary sequences having good correlation properties, Journal of Algebraic Combinatorics, 50 (2019), 1-38. doi: 10.1007/s10801-018-0835-1. Google Scholar H. Hu, S. Shao, G. Gong and T. Helleseth, The proof of Lin's conjecture via the decimation-Hadamard transform, IEEE Transactions on Information Theory, 60 (2014), 5054-5064. doi: 10.1109/TIT.2014.2327625. Google Scholar D. Jungnickel and A. Pott, Perfect and almost perfect sequences, Discrete Applied Mathematics, 95 (1999), 331-359. doi: 10.1016/S0166-218X(99)00085-2. Google Scholar Y.-S. Kim, J.-S. Chung, J.-S. No and H. Chung, On the autocorrelation distributions of Sidel'nikov sequences, IEEE Transactions on Information Theory, 51 (2005), 3303-3307. doi: 10.1109/TIT.2005.853310. Google Scholar Y.-S. Kim, J.-W. Jang, S.-H. Kim and J.-S. No, New quaternary sequences with optimal autocorrelation, ISIT, (2009), 286–289. Google Scholar Y.-S. Kim, J.-W. Jang, S.-H. Kim and J.-S. No, New quaternary sequences with ideal autocorrelation constructed from Legendre sequences, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 96 (2013), 1872-1882. doi: 10.1587/transfun.E96.A.1872. Google Scholar P. V. Kumar, R. A. Scholtz and L. R. Welch, Generalized bent functions and their properties, Journal of Combinatorial Theory, Series A, 40 (1985), 90-107. doi: 10.1016/0097-3165(85)90049-4. 
Google Scholar A. Lempel, M. Cohn and W. Eastman, A class of balanced binary sequences with optimal autocorrelation properties, IEEE Transactions on Information Theory, 23 (1977), 38-42. doi: 10.1109/tit.1977.1055672. Google Scholar S. L. Ma and W. S. Ng, On non-existence of perfect and nearly perfect sequences, International Journal of Information and Coding Theory, 1 (2009), 15-38. doi: 10.1504/IJICOT.2009.024045. Google Scholar S. L. Ma and B. Schmidt, On $(p^a, p, p^a, p^a-1)$-relative difference sets, Designs, Codes and Cryptography, 6 (1995), 57-71. doi: 10.1007/BF01390771. Google Scholar A. Maschietti, Difference sets and hyperovals, Designs, Codes and Cryptography, 14 (1998), 89-98. doi: 10.1023/A:1008264606494. Google Scholar J. Michel and Q. Wang, Some new balanced and almost balanced quaternary sequences with low autocorrelation, Cryptography and Communications, 11 (2019), 191-206. doi: 10.1007/s12095-018-0281-x. Google Scholar J.-S. No, H. Chung and M.-S. Yun, Binary pseudorandom sequences of period $2^n-1$ with ideal autocorrelation generated by the polynomial $z^d+(z+ 1)^d$, IEEE Transactions on Information Theory, 44 (1998), 1278-1282. doi: 10.1109/18.669400. Google Scholar B. Özden and O. Yayla, Cryptographic functions and bit-error-rate analysis with almost $ p $-ary sequences, International Journal of Information Security Science, 8.3 (2019), 44-52. doi: 10.1007/s12095-020-00423-5. Google Scholar B. Özden and O. Yayla, Almost p-ary sequences, Cryptography and Communications, 12 (2020), 1057-1069. doi: 10.1007/s12095-020-00423-5. Google Scholar R. E. Paley, On orthogonal matrices, Journal of Mathematics and Physics, 12 (1933), 311-320. doi: 10.1002/sapm1933121311. Google Scholar A. Pott, Finite Geometry and Character Theory, Lecture Notes in Mathematics, 1601, Springer-Verlag, Berlin, 1995. doi: 10.1007/BFb0094449. Google Scholar K.-U. Schmidt, Quaternary constant-amplitude codes for multicode CDMA, IEEE Trans. Information Theory, 55 (2009), 1824-1832. doi: 10.1109/TIT.2009.2013041. Google Scholar X. Shi, X. Zhu, X. Huang and Q. Yue, A family of $m$-ary $\sigma$-sequences with good autocorrelation, IEEE Communications Letters, 23 (2019), 1132-1135. doi: 10.1109/LCOMM.2019.2915234. Google Scholar G. L. Sicuranza and A. Carini, Nonlinear system identification using quasi-perfect periodic sequences, Signal Processing, 120 (2016), 174-184. doi: 10.1016/j.sigpro.2015.08.018. Google Scholar V. M. Sidel'nikov, Some k-valued pseudo-random sequences and nearly equidistant codes, Problemy Peredachi Informatsii, 5 (1969), 16-22. Google Scholar J. Singer, A theorem in finite projective geometry and some applications to number theory, Transactions of the American Mathematical Society, 43 (1938), 377-385. doi: 10.1090/S0002-9947-1938-1501951-4. Google Scholar R. G. Stanton and D. Sprott, A family of difference sets, Canadian Journal of Mathematics, 10 (1958), 73-77. doi: 10.4153/CJM-1958-008-5. Google Scholar T. Storer, Cyclotomy and Difference Sets, Lectures in Advanced Mathematics, 2, Markham Publishing Co., Chicago, IL, 1967. Google Scholar X. Tang and C. Ding, New classes of balanced quaternary and almost balanced binary sequences with optimal autocorrelation value, IEEE Transactions on Information Theory, 56 (2010), 6398-6405. doi: 10.1109/TIT.2010.2081170. Google Scholar X. Tang and G. Gong, New constructions of binary sequences with optimal autocorrelation value/magnitude, IEEE Transactions on Information Theory, 56 (2010), 1278-1286. doi: 10.1109/TIT.2009.2039159. 
Google Scholar X. Tang and J. Lindner, Almost quadriphase sequence with ideal autocorrelation property, IEEE Signal Processing Letters, 16 (2008), 38-40. Google Scholar A. Tirkel and T. Hall, New quasi-perfect and perfect sequences of roots of unity and zero, in International Conference on Sequences and Their Applications Google Scholar Q. Wang, W. Kong, Y. Yan, C. Wu and M. Yang, Autocorrelation of a class of quaternary sequences of period $2 p^m$, preprint, arXiv: 2002.00375. Google Scholar O. Yayla, Nearly perfect sequences with arbitrary out-of-phase autocorrelation, Advances in Mathematics of Communications, 10 (2016), 401-411. doi: 10.3934/amc.2016014. Google Scholar N. Y. Yu and G. Gong, New binary sequences with optimal autocorrelation magnitude, IEEE Transactions on Information Theory, 54 (2008), 4771-4779. doi: 10.1109/TIT.2008.928999. Google Scholar Table 1. Orbits of $ G = {\mathbb Z}_{10} \times {\mathbb Z}_3 $ under $ x \rightarrow 19x $ orbits of length 1 (0, 0)}, {(0, 1)}, {(0, 2)}, {(5, 0)}, {(5, 1)}, {(5, 2) (4, 0), (6, 0)}, {(1, 1), (9, 1)}, {(7, 0), (3, 0)}, {(8, 2), (2, 2)}, (9, 0), (1, 0)} {(7, 2), (3, 2)}, {(7, 1), (3, 1)}, {(1, 2), (9, 2)}, (8, 1), (2, 1)}, {(6, 2), (4, 2)}, {(6, 1), (4, 1)}, {(2, 0), (8, 0) Table 2. Sequences, their autocorrelation and alphabet Construction Out-of-phase autocorrelation Alphabet [29], [36] 0 $ p $-ary [17], [18] -1 $ p $-ary [3], [10], [15], [16], [30], [32], [35], [41], [42], [43], [44] -1 binary [12], [27], [40], [44] $ \pm 2 $ binary [2], [40], [27], [44] $ (0,-4) $ binary [45], [50] $ (0,\pm 4) $ binary [4], [11], [44] $ (1,-3) $ binary [25] $ (2p,-2) $ or $ (\pm 2p, \pm 2) $ binary [1], [19], [21] -1 ternary [23] $ (0,-3,3\zeta_3,3\zeta_3^2) $ ternary $ (0,\pm 2i,-4,-2,-2\pm 2i) $ or $ (0,\pm 2i,\pm2,-2\pm 2i) $ quaternary [24] $ (-2,\pm 2i) $ quaternary [25], [44] $ (0,-2) $ quaternary [31] $ (-1,\pm 3) $ quaternary [46] $ (-1,\pm(1+2i)) $ or $ (\pm 1,-3) $ quaternary [48] $ (\frac{p^{n-1}(p-7)}{2},\frac{p^{n-1}(p-3)}{2},p^n) $ quaternary [6] $ 0 $ $ p $-ary with one zero $ -1 $ $ p $-ary with one zero [38] $ -1 $ $ m $-ary with one zero [13] $ (0,3q^{n-1}) $ $ 6 $-ary with one zero [47] $ (0,p^{(k-1)n}) $ $ p^{kn} $-ary with zeros $ (0,p^{(k-1)n}) $ $ \frac{p^{n-1}}{\gcd(t,p^{n-1})} $-ary with zeros $ 0 $ $ m $-ary with one zero Theorem 3.7 $ \frac{q-3}{2} $ quaternary with one zero Proposition 7 $ (-1,0) $ $ m $-ary with $ \frac{q}{2}+1 $ zeros Richard Hofer, Arne Winterhof. On the arithmetic autocorrelation of the Legendre sequence. Advances in Mathematics of Communications, 2017, 11 (1) : 237-244. doi: 10.3934/amc.2017015 Ji-Woong Jang, Young-Sik Kim, Sang-Hyo Kim. New design of quaternary LCZ and ZCZ sequence set from binary LCZ and ZCZ sequence set. Advances in Mathematics of Communications, 2009, 3 (2) : 115-124. doi: 10.3934/amc.2009.3.115 Oǧuz Yayla. Nearly perfect sequences with arbitrary out-of-phase autocorrelation. Advances in Mathematics of Communications, 2016, 10 (2) : 401-411. doi: 10.3934/amc.2016014 Pinhui Ke, Yueqin Jiang, Zhixiong Chen. On the linear complexities of two classes of quaternary sequences of even length with optimal autocorrelation. Advances in Mathematics of Communications, 2018, 12 (3) : 525-539. doi: 10.3934/amc.2018031 Ji-Woong Jang, Young-Sik Kim, Sang-Hyo Kim, Dae-Woon Lim. New construction methods of quaternary periodic complementary sequence sets. Advances in Mathematics of Communications, 2010, 4 (1) : 61-68. doi: 10.3934/amc.2010.4.61 Pinhui Ke, Panpan Qiao, Yang Yang. 
On the equivalence of several classes of quaternary sequences with optimal autocorrelation and length $ 2p$. Advances in Mathematics of Communications, 2020 doi: 10.3934/amc.2020112 Fanxin Zeng, Xiaoping Zeng, Zhenyu Zhang, Guixin Xuan. Quaternary periodic complementary/Z-complementary sequence sets based on interleaving technique and Gray mapping. Advances in Mathematics of Communications, 2012, 6 (2) : 237-247. doi: 10.3934/amc.2012.6.237 Hongyu Han, Sheng Zhang. New classes of strictly optimal low hit zone frequency hopping sequence sets. Advances in Mathematics of Communications, 2020, 14 (4) : 579-589. doi: 10.3934/amc.2020031 Xiaohui Liu, Jinhua Wang, Dianhua Wu. Two new classes of binary sequence pairs with three-level cross-correlation. Advances in Mathematics of Communications, 2015, 9 (1) : 117-128. doi: 10.3934/amc.2015.9.117 Longye Wang, Gaoyuan Zhang, Hong Wen, Xiaoli Zeng. An asymmetric ZCZ sequence set with inter-subset uncorrelated property and flexible ZCZ length. Advances in Mathematics of Communications, 2018, 12 (3) : 541-552. doi: 10.3934/amc.2018032 Zhenyu Zhang, Lijia Ge, Fanxin Zeng, Guixin Xuan. Zero correlation zone sequence set with inter-group orthogonal and inter-subgroup complementary properties. Advances in Mathematics of Communications, 2015, 9 (1) : 9-21. doi: 10.3934/amc.2015.9.9 Limengnan Zhou, Daiyuan Peng, Hongyu Han, Hongbin Liang, Zheng Ma. Construction of optimal low-hit-zone frequency hopping sequence sets under periodic partial Hamming correlation. Advances in Mathematics of Communications, 2018, 12 (1) : 67-79. doi: 10.3934/amc.2018004 Yuanfen Xiao. Mean Li-Yorke chaotic set along polynomial sequence with full Hausdorff dimension for $ \beta $-transformation. Discrete & Continuous Dynamical Systems, 2021, 41 (2) : 525-536. doi: 10.3934/dcds.2020267 Yixiao Qiao, Xiaoyao Zhou. Zero sequence entropy and entropy dimension. Discrete & Continuous Dynamical Systems, 2017, 37 (1) : 435-448. doi: 10.3934/dcds.2017018 Walter Briec, Bernardin Solonandrasana. Some remarks on a successive projection sequence. Journal of Industrial & Management Optimization, 2006, 2 (4) : 451-466. doi: 10.3934/jimo.2006.2.451 Olof Heden. The partial order of perfect codes associated to a perfect code. Advances in Mathematics of Communications, 2007, 1 (4) : 399-412. doi: 10.3934/amc.2007.1.399 Lin Yi, Xiangyong Zeng, Zhimin Sun, Shasha Zhang. On the linear complexity and autocorrelation of generalized cyclotomic binary sequences with period $ 4p^n $. Advances in Mathematics of Communications, 2021 doi: 10.3934/amc.2021019 Yang Yang, Guang Gong, Xiaohu Tang. On $\omega$-cyclic-conjugated-perfect quaternary GDJ sequences. Advances in Mathematics of Communications, 2016, 10 (2) : 321-331. doi: 10.3934/amc.2016008 Kai-Uwe Schmidt, Jonathan Jedwab, Matthew G. Parker. Two binary sequence families with large merit factor. Advances in Mathematics of Communications, 2009, 3 (2) : 135-156. doi: 10.3934/amc.2009.3.135 Matthew Macauley, Henning S. Mortveit. Update sequence stability in graph dynamical systems. Discrete & Continuous Dynamical Systems - S, 2011, 4 (6) : 1533-1541. doi: 10.3934/dcdss.2011.4.1533 Büşra Özden Oǧuz Yayla
Demonstration of intracellular real-time molecular quantification via FRET-enhanced optical microcavity
Yaping Wang, Marion C. Lang, Jinsong Lu, Mingqian Suo, Mengcong Du, Yubin Hou, Xiu-Hong Wang (ORCID: orcid.org/0000-0003-2586-7084) & Pu Wang
Subjects: Imaging and sensing; Optical sensors
Single cell analysis is crucial for elucidating cellular diversity and heterogeneity as well as for medical diagnostics operating at the ultimate detection limit. Although superbly sensitive biosensors have been developed using the strongly enhanced evanescent fields provided by optical microcavities, real-time quantification of intracellular molecules remains challenging due to the extremely low quantities involved and the limitations of current techniques. Here, we introduce an active-mode optical microcavity sensing stage with enhanced sensitivity that operates via a Förster resonance energy transfer (FRET) mechanism. The mutual effects of the optical microcavity and FRET enhance the sensing performance by four orders of magnitude compared to a pure whispering gallery mode (WGM) microcavity sensing system. We demonstrate that the sensing mechanism of FRET-WGM is distinct from that of pure WGM. The lasing wavelengths of both donor and acceptor predicted by theoretical calculations are in perfect agreement with the experimental data. The proposed sensor enables quantitative molecular analysis at single cell resolution and real-time monitoring of intracellular molecules over extended periods while maintaining cell viability. By achieving high sensitivity at the single cell level, our approach provides a path toward FRET-enhanced real-time quantitative analysis of intracellular molecules.
Molecular studies at single cell resolution have received growing attention in the last decade due to the increasing awareness of intrinsic cellular heterogeneity of gene and protein expression as well as metabolite production1,2,3. The big challenge of single cell analysis, however, is the extremely low number of molecules, i.e., an average of only 1 × 10^5 molecules (nM level) for proteins.
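As a rough sanity check of the quoted "nM level" figure, the copy number can be converted to a concentration; a minimal sketch in Python, where the ~2 pL cell volume is an assumed typical value not given in the text:

    # Back-of-the-envelope check: 1e5 protein copies per cell expressed as a concentration.
    N_A = 6.022e23            # Avogadro's number, molecules per mole
    copies = 1e5              # average protein copy number per cell (value quoted above)
    volume_L = 2e-12          # assumed cell volume of ~2 pL, in litres (not from the paper)
    conc_nM = copies / (N_A * volume_L) * 1e9
    print(f"~{conc_nM:.0f} nM")   # ~83 nM, i.e. tens of nanomolar

With a 1-2 pL cytoplasmic volume this lands in the tens-to-hundreds of nM range, consistent with the "nM level" statement.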
Current single cell techniques, such as laser scanning confocal imaging, surface-enhanced Raman spectroscopy, and the sequencing techniques for "omics", although playing crucial roles in obtaining substantial molecular information, are limited with respect to real-time and quantitative analysis. Thus, the combination of efficient sample manipulation and highly sensitive detection is urgently desired. Optical microcavities, such as microspheres4,5,6, microrings7,8,9, microdisks10,11, and similar configurations12,13, which confine light within a small cavity, generating whispering gallery modes (WGMs) due to total internal reflection, have demonstrated great capability in bioanalysis due to their high quality factor, ultra-low loss, ultra-long photon lifetime and ultra-high intracavity power and intensity14,15,16. Compared to most single-pass optical devices, such as waveguides17 and optical interferometers18, where light interacts with the analyte only once, in WGM-based approaches light can interact with the analyte as often as 10^5 times16, greatly enhancing the sensing performance. In order to achieve ultrahigh sensitivity or ultrafast sensing, several mechanisms have been explored, including evanescent coupling of the microcavity resonance to a plasmonic nanoantenna19,20,21, laser-frequency locking22, the exceptional point technique23 and others. Microcavity-based optical sensors can resolve single molecules or particles, showing promise in the detection and manipulation of viruses, proteins and antibodies for clinical diagnostics and environmental monitoring15,24,25,26,27,28. Recently, the reach of evanescent and plasmonic techniques has extended the detection limit down to single biomolecules with dimensions in the single-nanometer range15,25. However, these studies with optical WGM microcavity biosensors are technically not applicable for intracellular applications due to the adoption of fiber or prism couplers29,30,31. A promising solution is the active-mode WGM microcavity with gain material, such as a dye-doped or intrinsically luminescent optical micro-resonator. While the small volume enables embedding into a cell, it also allows free-space excitation with the pump light without the adoption of a coupler. It has been shown that WGMs can be successfully created from small dye-doped fluorescent beads (with sizes ranging from sub-micrometer to micrometer, depending on the refractive index of the bead material) that are taken up by individual cells, as well as from oil droplets stained with dye and injected into a cell32,33,34. Lasers embedded in the cytoplasm of a cell or tissue have recently been employed to record transient cardiac contraction profiles with cellular resolution as well as for high-density optical barcoding of cells35,36,37. The intracellular microcavity interacts with molecules in the cytosol via evanescent mode coupling and can serve as an optical sensor to detect molecules of interest. However, intracellular sensing of a molecule-of-interest has not been feasible so far due to the extremely low quantity of cytosolic molecules at the single cell level. Here, we demonstrate that incorporating Förster resonance energy transfer (FRET) into an active-mode WGM microcavity platform can greatly increase the sensitivity, enabling intracellular quantitative sensing of small molecules at single cell resolution.
FRET-assisted dual lasing of a micro-resonator
First, lasing action from a dye-doped microsphere was evaluated before introducing FRET to the lasing line.
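Before the experimental details, the expected mode spacing of such a microsphere resonator can be estimated from the free spectral range, FSR ≈ λ²/(nπD); a minimal sketch using the bead parameters of the experiments described below:

    import math
    wavelength = 534e-9    # DG lasing centre wavelength in water (m)
    n_sphere   = 1.59      # refractive index of the polystyrene bead
    diameter   = 15e-6     # bead diameter (m)
    fsr = wavelength**2 / (n_sphere * math.pi * diameter)
    print(f"FSR ≈ {fsr*1e9:.2f} nm")   # ≈ 3.81 nm, matching the theoretical estimate quoted below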
A dragon green (DG, MW 369 Da) doped polystyrene (PS) microspheres (1% DG doping, n = 1.59) with diameter of 15 µm (D) was ingrained in ultrapure water (ddH2O) and excited by a 473 nm pulsed diode-pumped laser (pulse duration 2.5 ns, repetition rate 100 Hz, diameter of pump spot on sample 30 µm). A typical WGM emission pattern was observed (Fig. S1a) with a center wavelength (CWL) at 534 nm. The center peak consistently remains the maximum height in the spectral envelope under optimized excitation conditions, so it can be easily identified without ambiguousness. The lasing threshold was 26.3 nJ. Above this threshold, the emission intensity increases linearly with the pumping energy (Fig. S1b). The free spectral range (FSR) of DG obtained from the spectrum was 3.79 nm, which is in good agreement with the theoretical estimation of 3.81 nm (Fig. S1c). Subsequently, a DG microbead was embedded in 50 µM rhodamine 6 G (R6G) solution and pumped with the same laser. The optical setup is shown in Fig. 1a. After overcoming the lasing threshold, two subsets of lasing peaks were observed: one group of peaks with CWL around 520 nm, attributed to DG emission; and a second group of peaks with CWL around 565 nm, attributed to R6G emission (Fig. 1b). The lasing outputs of the two sets of peaks depend on the pumping power. As shown in Fig. 1c, the relationship between output intensity and pump energy exhibits a characteristic s-shaped curve, similar to Fig. S1b. The lasing thresholds are 209 nJ and 309 nJ for DG and R6G, respectively. During 10 min of continuous laser operation, or 1 × 103 pump pulses, the lasing frequency remained unshifted (Fig. 1d, e), determined by peak fitting to the lasing spectra (n = 3), although the output intensities of DG and R6G decreased over time due to photo bleaching (Fig. S2a, b). Fig. 1: Dual-lasing of DG and R6G via an optical microcavity. a Schematic representation of the optical set-up (for details please see Methods section). b Typical lasing spectra of a DG microsphere in 50 µM R6G under varied pumping energy from 60.3nJ to 1262nJ. c Lasing thresholds of DG (green) and R6G (pink) are at 209nJ and 309nJ, respectively. d, e Lasing frequencies of DG (green) and R6G (pink) are independent of pumping duration (d) and number of pulses (e). As a control experiment, a sodalime glass microsphere (n = 1.59, D = 15 µm)) without fluorophore-doping was embedded in 50 µM R6G and excited under the same conditions. In this case, we only observed broad fluorescence emission of R6G, with a maximum @558 nm (Fig. S2c). Increasing the R6G concentration to 500 µM and pump energy to as high as 10,000 nJ resulted in intensified fluorescence (Fig. S2c, d), however, did not generate lasing output. This control experiment indicates that R6G lasing depends on the DG emission. The lasing action of R6G can be a consequence of two possible mechanisms. First, the DG lasing output functions as a pumping source to excite the R6G molecules; or secondly, Förster resonance energy transfer (FRET) between DG and R6G takes place38. To test the first hypothesis, a 15 µm sodalime glass microsphere was embedded in the same R6G solution, and a 532 nm nanosecond-laser was used as a pumping source, which provides a similar wavelength to DG lasing but with much higher single-pulse energy. However, only spontaneous emission rather than lasing of R6G was observed (Fig. 
S3a), ruling out the first hypothesis (Here we would like to clarify that with the described setup, R6G lasing can be realized, however, much higher R6G concentration (>5 mM) and pumping energy (>mJ) are required, please see Fig. S3b, c). The second option, FRET, requires a strong overlap between the emission spectrum of the donor, DG, and the absorption spectrum of the acceptor, R6G, to ensure energy conservation, which is clearly fulfilled for DG and R6G (Fig. 2a). A second prerequisite for FRET is the distance between donor and acceptor, meaning that if the distance between DG and R6G is within 1–10 nm, efficient energy transfer via FRET can occur. As the polystyrene microsphere is negatively charged, the cationic dye R6G would be readily adsorbed on the anionic surface, thus meeting the distance requirement for FRET (c.f. SI Fig. S4 for charging properties of the microspheres and molecules). Therefore, presumably, the two fluorophores form an ideal FRET pair, in which DG acts as donor (D) and R6G acts as acceptor (A). Fig. 2: FRET determines R6G lasing. a Absorption and emission spectra of DG and R6G show a strong overlap of DG emission and R6G absorption spectra, indicating good energy conservation. b Time-resolved fluorescence decay of DG in the absence (green) and presence (pink) of R6G. The shorter fluorescence lifetime of DG in the presence of R6G indicates FRET from DG to R6G. c Schematic illustration of energy transfer via FRET at the interface of the microcavity. To investigate this hypothesis further, we analyzed the fluorescence decay curves of DG in the presence of R6G, as the FRET process would affect the fluorescence decay of the confined donor DG in the presence of a suitable acceptor. A time-correlated single-photon counting (TCSPC) was applied to evaluate the fluorescence decay curve after excitation with a short pulse of light. The fluorescence decay of DG in the absence of R6G (τD) was determined to be 4.12 ns (mean value) regardless of the location of ROI (region of interest), Fig. S5 (DG-H2O). In contrast, in the presence of R6G, the decay time (τDA) of DG located on the surface of the microsphere was reduced to 2.14 ns (Fig. 2b, and Fig. S5 DG-R6G), whereas the decay time for the interior DG of the microsphere remains unchanged (4.10 ns, Fig. S5). Considering the high spectral overlap between the dyes, clearly, the energy transfer mechanism from surface DG to R6G was further evidenced as Förster type via dipole-dipole coupling. The energy transfer process is illustrated in Fig. 2c. The un-altered τ value of interior DG molecules (Fig. S5) indicates no energy transfer occurred in this situation. FRET efficiency (EFRET) was determined to be 48% (EFRET = (τD-τDA)/τD)39, which is high compared with the published data40,41,42. The ultra-high FRET efficiency is very likely owing to the evanescent field of WGM, as evidenced in the literature that an evanescent field would enhance the fluorescent energy transfer43,44. Due to the adjustment in spatial distance between donor and acceptor molecules and the variation of R6G molecular orientation, FRET efficiency may vary. Indeed, we obtained different τDA values (Fig. S5) for different ROIs on the microsphere surface, corresponding to EFRET of 52%, 47% and 42%, 43% and 59%, respectively. The data indicate that R6G lasing is a joint result of FRET and WGM. By measuring the FSRs of DG and R6G on the observed spectra at different R6G concentrations, we found FSRDG is always smaller than FSRR6G (c.f. Table S1). 
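The lifetime-derived FRET efficiency quoted above follows directly from E_FRET = 1 − τ_DA/τ_D; a minimal numerical check using the measured lifetimes (values taken from the text):

    tau_D  = 4.12   # DG fluorescence lifetime without acceptor (ns)
    tau_DA = 2.14   # DG lifetime with surface-bound R6G acceptor (ns)
    E = 1 - tau_DA / tau_D
    print(f"FRET efficiency ≈ {E:.0%}")   # ≈ 48%, as reported
    # The ROI-to-ROI efficiencies of 42-59% quoted above correspond, via
    # tau_DA = tau_D * (1 - E), to donor lifetimes of roughly 1.7-2.4 ns.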
We also calculated the cavity sizes for DG and R6G using FSR = λ2/nπD. The resonant cavity for R6G is constantly larger than that of DG (c.f. Table S2). The decreased FSR of DG can be contributed to the increased refractive index near the sphere surface. Due to the layer of R6G, the DG observes a little bit refractive index increase. Whereas for R6G, since, somehow, the gain position is a little outside, its WGM might be pulled outward slightly. As described previously16,45, for a reactive WGM sensor, by first order perturbation theory, the frequency shift (Δω) can be estimated by Eq. (1): $$\frac{\triangle \omega }{\omega }=-\frac{{\alpha }_{{ex}}{\left|{{{{{\bf{E}}}}}}({r}_{0})\right|}^{2}}{2\int \varepsilon {\left|{{{{{\bf{E}}}}}}(r)\right|}^{2}{dV}}$$ where ε is the permittivity of the medium and αex is the polarizability of the particles (molecules) bound to the microcavity; \({{{{{\bf{E}}}}}}({r}_{0})\) and \({{{{{\bf{E}}}}}}(r)\) are the modal field amplitudes at the binding site r0 and throughout the mode, respectively. The frequency shift produced by a molecule binding to the microcavity is proportional to the intensity ∼E2(r0) encountered at the binding site r0. Therefore, once a molecule is bound on the surface of a WGM microcavity where the evanescent field strength E(r) is high, the molecule will become polarized at the optical frequency ω. The energy that is needed to polarize the molecule and induce the dipole moment is ½αexE(r0)2. Since FRET enables effective energy transfer from DG molecules to R6G molecules, located on the binding site r0, which contributes extra energy Ex(r0) on top of the evanescent field. Any mechanism that can amplify the field intensity at the binding site will dramatically increase the sensitivity in molecule detection, as evidenced by the plasmonic effect in the field46,47. Therefore, we hypothesize that FRET-coupled WGM (FRET-WGM) would exhibit higher sensing performance than its non-FRET counterpart (non-FRET-WGM). Resonant energy transfer greatly enhances WGM sensing performance To test this hypothesis, we compared the sensing performance of FRET-WGM and non-FRET-WGM. First, a DG microsphere was embedded in various concentrations of R6G solution and pumped with a 473 nm laser. Depending on the concentration of R6G, different resonance shifts were observed (Fig. 3a, only the center peaks are shown here. for the full spectra please refer to SI figure S6a, b). As the concentration of R6G increased, the CWL of DG lasing shifted toward shorter wavelengths (blue shift), i.e., from 534 nm in pure water to 520 nm in 50 µM R6G solution. Correspondingly, the CWL of R6G lasing shifted toward longer wavelengths (red shift) (Fig. 3a). Fig. 3: The FRET-WGM sensing system. a Dose-dependent lasing spectra (only showing the CWL) of DG and R6G displaying wavelength blue-shift of DG and red-shift of R6G with increase of R6G concentration. b Mean wavelength-concentration plot of DG (green) and R6G (pink) of five experiments. Data are presented as mean values ± SD (n = 5). c log-scale Δλ-concentration curve showing that the wavelength gap (Δλ) exponentially increases as the R6G concentration increases (for a mathematical model ref. SI table S3, Δλ = λ2(R6G)-λ1(DG)). Data are presented as mean values ± SD (n = 5). 
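The readout used throughout is the gap Δλ between the DG and R6G centre wavelengths. A minimal sketch of how the two lasing combs could be separated and their centre peaks located from a recorded spectrum is shown below; the file name, the 550 nm split point and the peak-finding parameters are illustrative assumptions, since the paper's exact peak-fitting routine is not described here:

    import numpy as np
    from scipy.signal import find_peaks

    # wavelength (nm) and intensity columns of a recorded emission spectrum (hypothetical file)
    wl, intensity = np.loadtxt("spectrum.csv", delimiter=",", unpack=True)
    peaks, _ = find_peaks(intensity, prominence=0.1 * intensity.max())
    # split the comb near 550 nm, between the DG (~520-540 nm) and R6G (~560 nm and above) bands
    dg_peaks  = peaks[wl[peaks] < 550]
    r6g_peaks = peaks[wl[peaks] >= 550]
    # the centre peak is the tallest peak of each spectral envelope, as noted earlier in the text
    cwl_dg  = wl[dg_peaks[np.argmax(intensity[dg_peaks])]]
    cwl_r6g = wl[r6g_peaks[np.argmax(intensity[r6g_peaks])]]
    print(f"CWL(DG) = {cwl_dg:.2f} nm, CWL(R6G) = {cwl_r6g:.2f} nm, Δλ = {cwl_r6g - cwl_dg:.2f} nm")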
By measuring eleven different concentrations (0, 10 nM, 50 nM, 100 nM, 500 nM, 1 μM, 5 μM, 10 μM, 50 μM, 100 μM and 500 μM) and repeating the measurement for five times, the mean value of resonance wavelengths, λDG and λR6G, were taken to draw the concentration-dependent curve (Fig. 3b). The average wavelength distances between CWLS of DG and R6G (Δλ = λR6G-λDG) at different R6G concentrations show an exponential increase with increase of R6G concentration. The log-scale curves are shown in Fig. 3c (for a mathematical model please see Table S3). The inset shows linear fit to the curve. For a R6G concentration of 500 µM, the wavelength distance Δλ is 53.5 nm; a considerable distance of 12.8 nm is already observable at a R6G concentration of 1 µM; and for 10 nM R6G, the Δλ value is 6.2 nm. Thus, the observed changes in frequency shift could be a sensitive means to quantify the extra-cavity acceptor molecules in solution. The calculated limit of detection (LOD) for R6G is 15.2 pM (we define the LOD as equal to the linewidth FWHM (δλ), i.e., ΔλLOD = δλ), opening up the potential to detect very small concentration changes of the acceptor dye with high sensitivity. In the second experiment, we addressed the FRET effect on WGM sensing. Therefore, a mirror-study of "non-FRET-WGM" was performed. To minimize errors caused by non-FRET factors, such as molecular size and dipole moment, a non-fluorescent R6G analog, R6G hydrazide (R6GH), was synthesized48, utilizing the unique property of rhodamine transformation from the fluorescent ring-opened form (R6G) to the non-fluorescent spirolactam (R6GH) (Fig. 4a). While R6GH and R6G both are positively charged (Fig. S4) and have similar molecular size (MW 457 vs 479 Da, confirmed by ESI-MS, Fig. 4a right panel, for the full mass spectrum refer to Fig. S7) and polarity, R6GH molecules do not absorb or emit energy in the visible region (Fig. 4b). Therefore, there is no energy transfer between DG and R6GH. Time-resolved fluorescence decay also confirmed this (τ values of DG in the presence of R6GH remains same as in H2O regardless of the location of the measurement, Fig. S5 DG-R6GH). A DG microsphere was ingrained in various dosages of R6GH solution and excited with a 473 nm laser. In clear contrast to the aforementioned FRET-WGM results, the presence of R6GH molecules resulted in a dose-dependent red-shift of the DG resonance wavelength (Fig. 4c, for full spectra please refer to SI fig. S6c). From curve fitting of the Δλ-concentration plot (Fig. 4d), the LOD for R6GH was calculated to be 747.4 nM (again using ΔλLOD = δλ, see table S3 for detailed mathematical model). The comparison of DG-R6G with DG-R6GH shows that the evaluated LODFRET-WGM is around 5 × 104 times lower than that of the non-FRET-WGM system. Clearly, introducing FRET to the active-mode WGM sensing system greatly increases the sensing performance. Fig. 4: The non-FRET-WGM sensing system. a Structural formula of R6G and R6GH. R6GH is a R6G analog with similar structure and molecular size. Right panel shows the molecular weight of R6GH determined by ESI mass spectrometer (cf. Fig S8 for full mass spectrum). b Absorption and emission spectra of DG and R6GH. Unlike R6G, R6GH does not absorb and emit light in the visible region. c Dose-dependent lasing spectra of DG in the presence of various concentrations of R6GH (solvent is 90%H2O + 10%EtOH. Only center peaks are shown, full spectra in SI Fig. S6c). d Wavelength shift (Δλ)-concentration curve of DG in the presence of R6GH. 
Δλ = λDG-in-R6GH-λDG-in-solvent. Data are presented as mean values ± SD (n = 3). e Threshold of DG in the presence of R6G (pink) or R6GH (red). Due to FRET, the lasing threshold of DG in R6G is much higher than that of R6GH. f Lasing thresholds of DG and R6G. As the R6G concentration increases, the thresholds of DG rise, indicating an increased energy loss of DG molecules. Comparing the obtained spectra at the same concentration of R6G and R6GH additionally shows that the lasing threshold of DGFRET-WGM is much higher than that of DGnon-FRET-WGM (Fig. 4e). This finding further indicates that energy in fact is transferred from DG to R6G, increasing the losses for the DG WGM system. An increased concentration of R6G also leads to a higher lasing threshold of DG (Fig. 4f), since more energy is transferred from DG to R6G, causing more losses for the DG system. It is interesting to note the switch of the resonance wavelength shift of DG (λDG) when the concentrations of R6GH and R6G increase, respectively, from a red-shift in the R6GH-case to a blue-shift in the R6G-case, indicating different underlying mechanisms. In the non-FRET-WGM system, as documented in many publications, the shift to longer resonance wavelength occurs because the bound R6GH molecules will effectively "pull" part of the optical field to the outside of the microsphere by Δl 49, thereby increasing the roundtrip path length by 2πΔl. This increase in path length produces the shift (Δλ) to lower frequencies (red-shift). The shift of the resonance wavelength mainly depends on the refractive index change. For the investigated R6G concentrations, the change of the refractive index (Δn) is small (c.f. SI table S4), corresponding to a small shift of the resonance wavelength Δλ. This explains the much lower sensitivity of a pure active WGM. In the case of FRET-WGM, the refractive index change of R6G (Δn) is equally small to that of non-FRET-WGM (c.f. SI table S4). However, the exceptionally high FRET efficiency suggests FRET plays the dominating role in the sensing process, overcompensating the effect by the refractive index change, although the mechanism is not yet fully understood at the moment. For more comprehensive insights into energy transfer of the microcavity, we applied rate equations to simulate the intensity for every single mode and thereby obtain the efficiency of cavity energy transfer, please refer to SI section 3. Analyzing each mode of the rate equations reveals that FRET effect is sufficient to achieve population inversion, that is to say, FRET dominates the process for R6G lasing. With FRET being the dominating factor, the mechanisms of the active-mode WGM system investigated here are fundamentally different from the mechanisms underlying a non-FRET-WGM system. The described FRET-WGM configuration allows energy transfer from inner cavity to out cavity, distinguishing from intra-cavity FRET reported previously50,51,52,53. To validate the theoretical model, we altered the surface charge of the microsphere to abolish the distance requirement for FRET to occur. To this end, we used PAH molecule (Poly (allylamine hydrochloride), MW 10,000–20,000 Da) to coat the microsphere to achieve a positively charged surface. TEM image confirmed a ~5 nm layer formed around the microsphere (Fig. S8a, b). The measured zeta potential of the DG microspheres after PAH modification was 15.9 mV (Fig. S8c). The positively charged microsphere was embedded in 50 μM R6G solution and pumped with 473 nm pulsed laser. 
We randomly selected 5 microspheres and the data are shown in Fig. S8d. We observed solely DG lasing instead of DG and R6G dual lasing. The CWL of DG lasing spectrum is 535.8 nm, which is comparable with that in pure H2O (red-shifted by 1.8 nm due to the increased refractive index caused by the PAH modification). It is assumed that the positively charged DG microsphere repels R6G molecules and thus R6G molecules are distanced from the microsphere, interfering with the energy transfer via FRET, as a result, R6G fails to lase. The data further support the mechanism of FRET playing a leading role in the FRET-WGM sensing system. In the FRET-WGM format, the shift of DG toward shorter wavelengths indicates that with increased R6G concentration, the energy transfer from DG to R6G becomes more effective, leading to higher losses in the DG lasing process and a higher gain in the R6G lasing process. The FRET-WGM is analogous to a quasi-3-level lasing system, where increased losses lead to a frequency shift toward shorter wavelengths54. This is because increased losses require a higher gain and subsequently a higher excitation level. This in turn means the maximum gain of the system is at a shorter wavelength. This is what can be observed for the DG lasing: higher R6G concentration increases the loss for DG lasing, because more of the energy is transferred to the R6G transition, leading to a shift toward shorter excitation wavelengths. The R6G emission, on the other hand, shifts to longer wavelengths, because here the gain is increased at higher concentrations. Supporting this explanation is that with increasing R6G concentration also a higher lasing threshold for DG lasing was observed (Fig. 4f). The relationship between wavelength gap (Δλ) and acceptor concentration can be further quantitatively simulated via absorption/emission cross section analysis (please refer to SI section 4). The initial concentration of DG will determine the absolute value of CWL and sensitivity to R6G concentration. Increasing DG concentration in the microsphere to 2.0% (w/w), similar lasing performance was observed. The CWL of 2% DG microsphere in H2O is 539 nm, red-shifted by 5 nm due to higher gain molecule quantity in the resonator. Embedding the microsphere in various concentrations of R6G and pumping with a 473 nm laser, similarly, we observed dose-dependent blue- and red-shift of DG and R6G CWLs, respectively (Fig. S9a, b). At the same R6G concentration, the wavelength distance between DG and R6G (Δλ = λR6G-λDG, Fig. S9c) is larger for 2% DG-R6G than that of 1% DG-R6G, as shown in Table 1. The LODFRET-WGM of the 2% DG microcavity sensor, which is calculated to be 3.2 pM, is lower than that of 1% counterpart (15.2 pM). The data, on the one hand, further confirmed the aforementioned results; more importantly, also provide a valid path to optimize the sensitivity of the FRET-WGM platform. In order to rule out detrimental effects by the ambient temperature, the frequency difference between DG and R6G lasing at higher temperature (30 °C) was investigated. Increasing temperature resulted in an overall blue shift of both DG and R6G resonant peaks, however Δλ (λR6G-λDG) value remained unchanged (Fig. S10), compared to the room temperature (20 °C) data. Table 1 Δλ (λR6G-λDG) values at same R6G concentration with different DG-doping The magnitude of the resonant wavelength shift Δλ (Δλ/λ = Δω/ω) is inversely proportional to the mode volume Vmode given by the denominator in Eq. 
(2)55: $$\frac{\triangle {\lambda }_{r}}{{\lambda }_{r}}=\frac{{\alpha }_{{ex}}\sigma }{{\varepsilon }_{0}\left({n}_{s}^{2}-{n}_{m}^{2}\right)R}$$ where ns and nm are the refractive indices of the sphere and exterior medium, respectively, and σ is the surface density of bound biomolecules. Therefore, reducing the size (modal volume) of the optical resonator would further boost the sensing capability. Meanwhile, usage of a high Q microcavity would also increase the sensitivity of a FRET-WGM microlaser. FRET-WGM microlasers for intracellular molecule sensing For intracellular operation of the FRET-WGM microlasers, a DG bead was first internalized into a cell (T47D, human breast cancer cell line) via simple co-culture and endocytosis. The 3D confocal image of the cell confirms the DG microbead was truly inside the cell rather than sitting on top of the cell (Fig. S11a and supplementary video). Most beads enter cells following 4 h of incubation. Cells harboring beads grow and divide as usual (Fig. S11b, c). A single cell harboring one DG bead was excited with 473 nm pulsed laser. We observed typical WGM emission of DG with CWL @ 537 nm (Fig. S12a), which is red-shifted by 3 nm compared to that in H2O due to the higher refractive index of cytoplasm (ncytoplasm 1.37 vs \({{{{{\mathrm{n}}}}}}_{{{{{{\mathrm{H}}}}}}_2{{{{{\mathrm{O}}}}}}}\) 1.33). The intracellular lasing threshold (60.3 nJ, Fig. S12b) is also higher than in ddH2O (Fig. S1b). For the measurement of intracellular R6G concentration, 100 μM R6G solution was co-cultured with T47D cells harboring DG microspheres for 30 mins to allow R6G molecules to enter the cells. R6G molecules passively penetrated the plasma membrane and dispersed in the cytoplasm (Fig. 5a). A single cell harboring a microsphere was pumped with 473 nm laser. In the presence of intracellular R6G, two subsets of WGM lasing peaks could be observed (Fig. 5b, c), which is similar to the data obtained in the extracellular setup. During 10 mins of continuous laser operation or 1 × 103 pump pulses, the lasing frequency remained un-shifted (Fig. 5d, e), however, the lasing intensities of DG and R6G decreased over time due to photo bleaching (Fig S12c, d). The lasing thresholds of DG and R6G are higher than the non-cellular setup (Fig. 5f vs Fig. 1c) due to more energy loss in cell. Fig. 5: Real-time intracellular R6G quantification. a Image of A T47D cell harboring a DG (microsphere (green) following incubation with R6G (red) for 5 mins. The cell membrane (blue) was stained with CellMaskTM. The experiment was repeated independently three times with similar results. Scale bar 10 μm. b Dual lasing of DG and R6G within a T47D cell. c Schematic illustration of intracellular dual lasing via FRET from a cell harboring a FRET-donor doped optical microcavity and implanted in cytoplasm containing a FRET-acceptor. d, e Lasing wavelengths of DG and R6G remain un-shifted following 10 mins of continuous pumping (d) and 1000 pulses (e). f The intracellular lasing thresholds of DG and R6G are 309 nJ and 447 nJ, respectively, higher than that of the extracellular setup (Fig. 1c). g Real-time measurement of Δλ. Intracellular lasing spectra of DG and R6G are recorded following 5 mins incubation with 10 μM, 50 μM and 100 μM R6G solutions, respectively. The experiments were conducted 3 times with similar results, as-shown is a representative. h Intracellular R6G quantification using Δλ-concentration standard curve. 
The determined intracellular R6G concentrations following 5 mins incubation of R6G are 1.54, 18.55 and 46.12 μM, respectively. Data are presented as mean values ± SD (n = 3). This finding demonstrates an advantage of using the wavelength rather than lasing intensity as a measure to quantify the concentration (although in fact, data acquisition is completed within seconds, far before severe photo bleaching occurs). Fluorescence intensity often is a measure for cross-examining cellular events and pathophysiologic conditions in small animal models of human diseases. While intensity measurements are convenient in the laboratory, they are often inadequate and sometimes imprecise in real-world situations. It is impossible to know the probe concentration at each point of the image. In addition, fluorescence intensity changes may be due to photo bleaching, photo-transformation and/or diffusive processes and so forth. On the contrary, molecular quantification using lasing wavelength can provide unique or complementary information, which is more accurate since it is not influenced by photo bleaching. By measuring the wavelength gap between DG and R6G (Δλ) (Fig. 5g), the intracellular concentration of R6G after 5 mins incubation can be determined via the standard concentration-Δλ curve shown in Fig. 3c (Although the intracellular refractive index is slightly higher than pure water, Δλ would not be affected. Therefore, the standard curve is also applicable for the intracellular study). The measured Δλ value is 40.74 nm, corresponding to an intracellular R6G concentration of 46.12 μM (Fig. 5g, h). The discrepancy between the intracellular and the extracellular R6G concentration is due to the barrier function of the cell membrane. Similarly, 10 μM and 50 μM of R6G solution were incubated with T47D cells harboring DG microspheres, the intracellular concentrations of R6G after 5 mins incubation were determined to be 1.54 μM and 18.55 μM, respectively (Fig. 5 g, h). The cells remained alive after pumping experiments were completed (cells on the slide grow and proliferate normally). In summary, we presented a FRET-enhanced active-mode WGM sensing platform, which allows free space excitation and enables sensitive intracellular detection of R6G at single cell resolution, providing a method for fast (no amplification is needed) and cost-effective real-time molecular analysis at single cell resolution. FRET has been used extensively throughout biology, materials science, chemistry, suggesting a great potential for studying molecular interactions, energy transfer and conservation. WGM lasers offer multiple options via altering the gain medium configuration, including doped in the resonator and cross-linked on the surface of the resonator. The presented approach combining FRET and WGM could be used for studying real-time molecular interactions or sensitive detection, either extracellularly or intracellularly, by means of a suitable acceptor-donor combination and WGM format. Fig. S13 gives an example for potential intracellular sensing applications. Instead of the DG-R6G combination, more cell-friendly FRET pairs, e.g., CFP-YFP, which can be co-expressed with the host protein in cytoplasm, could be considered in the future. To increase the sensitivity and reduce the mode volume, microcavities fabricated with higher refractive index materials, such as aqueous-stable perovskites, luminescent semi-conductor materials, OLED and so forth, could be employed. 
Intracellular sensing to reveal the real-time information at single cell resolution, such as protein/protein (or protein/DNA, DNA/DNA) interactions, regulations of signaling molecules upon stimuli and aberrant expression under pathological conditions, can bring fundamental information and understanding of biological processes in health and disease. It also enables novel diagnostics and precise interventions for treating diseases like cancers and diabetes. The FRET assisted WGM platform provides an operative approach for realization of non-destructive intracellular sensing at single cell resolution. Further optimizing the detection limit and smart designs of FRET-WGM sensing probe to provide real-time intracellular dynamic information will be embarked in future. Optical set-up The setup is based on a 473 nm microchip laser (BrightSolutions Co. Model FP2-473-10-0.1, 473 nm 10μJ pulse energy, pulse duration 2.5 ns, repetition rate 100 Hz) for exciting WGM. The beam shape is elliptical, so a plastic prism (Thorlabs Inc.) was used to expand the smaller axis and get a better illumination of the back aperture of the objective. The laser was then focused on the sample via an objective lens (Thorlabs Inc., 40×, NA = 0.6) (Fig. 1a). The pump laser was focused to a 30 µm large spot and a maximum pulse energy of 1–50 nJ was used depending on resonator size and tissue scattering. Emission from the microlaser was collected by the same objective, separated from the pump light by a dichroic mirror (Thorlabs Inc. Transmission band: 505–800 nm, Reflection band: 380–475 nm) and passed to the camera port of the microscope. Subsequently the fluorescence was collected via a second objective (Thorlabs Inc. 60×, NA = 0.85), the excitation light is blocked by a long pass filter (Thorlabs Inc., 490 nm) and the signal was detected by a spectrometer (Zolix Instruments Co., Omni-λ500i, resolution = 0.4 nm, 100 ms acquisition time). In the backward direction, the microbead is imaged onto a webcam to control the position of the microbead and make sure that there is only one microbead in focus for the measurements. Laser threshold characteristics were acquired on the same set-up by varying the pump power with a set of neutral density filters. Spectra were integrated over 800 pump pulses below the threshold, whereas between 100 and 200 pump pulses were used above the threshold. 100 spectra were analyzed for each pump energy. Cell culture and assays Breast cancer cell lines MCF7 and T47D were purchased from the American Type Culture Collection (ATCC). All cells were cultured in DMEM medium (Thermo Scientific) supplemented with 10% FBS, 1% Glutamax, 1% penicillin and 1% streptomycin in an incubator with 5% CO2 and 80% humidity at 37 °C. Cells in the logarithmic growth phase were used for all experiments. To count the number of cells, a Countesst automated cellcounter (Invitrogen, USA) was used. For viability assay, cells were incubated in a 96-well plate (5000 cells per well, in triplicate) with appropriate culture medium for 24 h. Subsequently, the initial medium was replaced with fresh medium containing the various concentrations of microbeads and incubated for another 24 h. The culture medium was gently removed and the cells were washed twice with sterile PBS. Then 10 μl cell counting kit-8 (CCK-8) solution was added to each well, the absorption (OD) at 450 nm was measured with the microplate reader after a 3 h co-cultivation. The following formula was used to evaluate cell viability: Cell viability (%) = (mean of Abs. 
value of treatment group/mean Abs. value of control) × 100%. Sample preparation for optical experiments Preparation of Poly-d-lysine coverslips Glass coverslips were wipe-cleaned with ethanol and left until dry. 20 µl of Poly-d-lysine (PDL, Sigma-Aldrich) was added on the top of the coverslip; the liquid was allowed to spread out to cover the entire coverslip. The samples were left at room temperature for 10 mins to ensure PDL molecules were fully adhered on the coverslip surface. Then the excessive PDL was removed and the coverslips were washed 3× with ddH2O. The coverslips were stored at room temperature on a dry place over night. Preparation of DG microbeads samples On the top of the PDL coverslip, 20 µl dragon green (DG) microbeads suspension (1:20 dilution in ddH2O) was added, after 10 mins, which allowed the microbeads to settle down, the sample was covered with a clean coverslip, and excess liquid was carefully removed. The sample was mounted on a glass slide and sealed with nail polish. After the nail polish was dry, the glass slide was mounted on a holder and the sample was fixed in the measurement setup. Preparation of cells harboring DG microbeads A clean 18 × 18 mm coverslip was placed in a 35 mm cell culture petri dish. A mixture of 2 μL DG microbeads (103 beads μl−1) and 300 μL log-phase growing cells (102 cell μl−1) was gently dropped on the coverslip to ensure microbeads and cells evenly cover the slip. The coverslip was stored in the CO2 incubator for 4 h to allow the cells adhere on the coverslip and the microbeads to be internalized into the cells. Then the coverslip was washed three times with phosphate buffered saline (PBS) to remove un-internalized microbeads. For intracellular real-time R6G quantification, two coverslips with live cells harboring DG microbeads were prepared as aforementioned. On the top of the coverslip, 15 μL 50 μM R6G solution was added and the samples were incubated at 37 °C for 5 mins. One coverslip was washed with PBS and the cell membrane was stained with DP for confocal imaging (please see SI for details). The other coverslip was washed and covered with a clean piece of coverslip, then mounted on a glass slide as described above and sealed with nail polish. Additional experimental methods please refer to SI. Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. The authors declare that the main data supporting the findings of this study are available within the paper and its Supplementary Information files. Raw data used in this study are available from the corresponding author upon request. Pennisi, E. Chronicling embryos, cell by cell, gene by gene. Science 360, 367 (2018). Camp, J. G., Wollny, D. & Treutlein, B. Single-cell genomics to guide human stem cell and tissure engineering. Nat. Methods 15, 661–667 (2018). Stuart, T. & Satija, R. Integrative single-cell analysis. Nat. Rev. Genet. 20, 257–272 (2019). Chen, Y. J., Schoeler, U., Huang, C. H. B. & Vollmer, F. Combining whispering-gallery mode optical biosensors with microfluidics for real-time detection of protein secretion from living cells in complex media. Small 14, e1703705 (2018). Yang, L. & Vahala, K. J. Gain functionalization of silica microresonators. Opt. Lett. 28, 592–594 (2003). Xiao, Y. F. et al. Low-threshold microlaser in a high-q asymmetrical microcavity. Opt. Lett. 34, 509–511 (2009). Shopova, S. I., Zhou, H., Fan, X. & Zhang, P. Optofluidic ring resonator based dye laser. Appl. Phys. Lett. 
90, 221101–221103 (2007). Ta, V. D., Chen, R. & Sun, H. D. Lasers: Coupled Polymer Microfiber Lasers for Single Mode Operation and Enhanced Refractive Index Sensing. Adv. Optical Mater. 2, 220–225 (2014). Jiang, X. F., Zou, C. L., Wang, L., Gong, Q. & Xiao, Y. F. Whispering-gallery microcavities with unidirectional laser emission. Laser Photon. Rev. 10, 40–61 (2016). Guo, Z., Qin, Y., Chen, P., Hu, J. & Wu, X. Hyperboloid-drum microdisk laser biosensors for ultrasensitive detection of human igg. Small 16, 2000239 (2020). Tamboli, A. C. et al. Room-temperature continuous-wave lasing in GaN/InGaN microdisks. Nat. Photon. 1, 61–64 (2007). Shi, C., Soltani, S. & Armani, A. M. Gold nanorod plasmonic upconversion microlaser. Nano Lett. 13, 5827–5831 (2013). Mehrabani, S. & Armani, A. M. Blue upconversion laser based on thulium-doped silica microcavity. Opt. Lett. 38, 4346–4349 (2013). Subramanian, S., Wu, H. Y., Constant, T., Xavier, J. & Vollmer, F. Label‐free optical single‐molecule micro‐ and nanosensors. Adv. Mater. 30, 1801246.1–1801246.21 (2018). Zhi, Y., Yu, X. C., Gong, Q., Yang, L. & Xiao, Y. F. Single nanoparticle detection using optical microcavities. Adv. Mater. 29, 1604920 (2017). Vollmer, F. & Yang, L. Review label-free detection with high-q microcavities: a review of biosensing mechanisms for integrated devices. Nanophotonics 1, 267–291 (2012). Suter, J. D., Lee, W., Howard, D. J., Hoppmann, E. & Fan, X. D. Demonstration of the coupling of optofluidic ring resonator lasers with liquid waveguides. Opt. Lett. 35, 2997–2999 (2010). Fan, X. D. et al. Sensitive optical biosensors for unlabeled targets: a review. Analytica Chim. Acta 620, 8–26 (2008). Baaske, M. & Vollmer, F. Optical observation of single atomic ions interacting with plasmonic nanorods in aqueous solution. Nat. Photon. 10, 733–739 (2016). Dantham, V. R., Holler, S., Barbre, C., Keng, D. & Arnold, S. Label-free detection of single protein using a nanoplasmonic-photonic hybrid microcavity. Nano Lett. 13, 3347–3351 (2013). Santiago-Cordoba, M. A., Cetinkaya, M., Boriskina, S. V., Vollmer, F. & Demirel, M. C. Ultrasensitive detection of a protein by optical trapping in a photonic-plasmonic microcavity. J. Biophoton. 5, 629–638 (2012). Su, J., Goldberg, A. F. & Stoltz, B. M. Label-free detection of single nanoparticles and biological molecules using microtoroid optical resonators. Light.: Sci. Appl. 5, e16001 (2016). Chen, W., Ozdemir, S. K., Zhao, G., Wiersig, J. & Yang, L. Exceptional points enhance sensing in an optical microcavity. Nature 548, 192–196 (2017). Vollmer, F., Arnold, S. & Keng, D. Single virus detection from the reactive shift of a whispering-gallery mode. Proc. Natl Acad. Sci. 105, 20701–20704 (2009). Baaske, M. D., Foreman, M. R. & Vollmer, F. Single-molecule nucleic acid interactions monitored on a label-free microcavity biosensor platform. Nat. Nanotechnol. 9, 933–939 (2014). Zijlstra, P., Paulo, P. M. R. & Orrit, M. Optical detection of single non-absorbing molecules using the surface plasmon resonance of a gold nanorod. Nat. Nanotechnol. 7, 379–382 (2012). Pang, Y. & Gordon, R. Optical trapping of a single protein. Nano Lett. 12, 402–406 (2012). Kim, E., Baaske, M. D., Schuldes, I., Wilsch, P. S. & Vollmer, F. Label-free optical detection of single enzyme-reactant reactions and associated conformational changes. Sci. Adv. 3, e1603044 (2017). Reynolds, T. et al. Fluorescent and lasing whispering gallery mode microresonators for sensing applications. Laser Photon. Rev. 11, 1600265 (2017). Armani, A. 
M., Kulkarni, R. P., Fraser, S. E., Flagan, R. C. & Vahala, K. J. Label-free, single-molecule detection with optical microcavities. Science 317, 783–787 (2007). Santamaría-Botello, G. A. et al. Maximization of the optical intra-cavity power of whispering-gallery mode resonators via coupling prism. Opt. Express 24, 26503–26514 (2016). Humar, M. & Hyun, Y. S. Intracellular microlasers. Nat. Photon. 9, 572–576 (2015). Fikouras, A. H., Schubert, M., Karl, M., Kumar, J. D. & Gather, M. C. Non-obstructive intracellular nanolasers. Nat. Commun. 9, 4817 (2018). Schubert, M. et al. Lasing within live cells containing intracellular optical microresonators for barcode-type cell tagging and tracking. Nano Lett. 15, 5647–5652 (2015). Matja, H. & Yun, S. H. Whispering-gallery-mode emission from biological luminescent protein microcavity assemblies. Optica 4, 222–228 (2017). Schubert, M., Woolfson, L., Barnard, I. R. M., Dorward, A. M. & Gather, M. C. Monitoring contractility in cardiac tissue with cellular resolution using biointegrated microlasers. Nat. Photon. 14, 1–7 (2020). Martino, N., Kwok, S. J. J., Liapis, A. C., Forward, S. & Yun, S. H. Wavelength-encoded laser particles for massively multiplexed cell tagging. Nat. Photon. 13, 720–727 (2019). Chen, Q., Kiraz, A. & Fan, X. Optofluidic FRET lasers using aqueous quantum dots as donors. Lab a Chip 16, 353–359 (2016). Shopova, S. I., Cupps, J. M., Zhang, P., Henderson, E. P. & Fan, X. D. Opto-fluidic ring resonator lasers based on highly efficient resonant energy transfer. Opt. Express 15, 12735–12742 (2007). Zambrana-Puyalto, X. & Ponzellini, P. Nicolò Maccaferri, et al. Förster-resonance energy transfer between diffusing molecules and a functionalized plasmonic nanopore. Phys. Rev. Appl. 14, 054065 (2020). Pramanik, A. et al. Forster resonance energy transfer assisted white light generation and luminescence tuning in a colloidal graphene quantum dot-dye system. J. Coll. Interfac. Sci. 565, 326–336 (2020). Chakraborty S., & Arshad Hussain S. Fluorescence resonance energy transfer (FRET) between acriflavine and CdTe quantum dot. Mater Today: Proc. 46, 6087–6090 (2021). Andrew, P. & Barnes, W. L. Forster Energy Transfer in an Optical Microcavity. Science 290, 785 (2000). Jana, S. et al. Microcavity-Enhanced Fluorescence Energy Transfer from Quantum Dot-Excited Whispering Gallery Modes to Acceptor Dye Nanoparticles. ACS Nano 15, 1445–1453 (2020). Arnold, S., Khoshsima, M., Teraoka, I., Holler, S. & Vollmer, F. Shift of whispering-gallery modes in microspheres by protein adsorption. Opt. Lett. 28, 272–274 (2003). Min, B. et al. High-Q surface-plasmon-polariton whispering-gallery microcavity. Nature 457, 455–458 (2009). Xiao, Y. F., Zou, R. L. & Li, B. B. High-Q exterior whispering gallery modes in a metal-coated microresonator. Phys. Rev. Lett. 105, 153902 (2010). Zhang, Z., Zheng, Y., Hang, W., Yan, X. & Zhao, Y. Sensitive and selective off-on rhodamine hydrazide fluorescent chemosensor for hypochlorous acid detection and bioimaging. Talanta 85, 779–786 (2011). Vollmer, F. & Arnold, S. Whispering-gallery-mode biosensing: label-free detection down to single molecules. Nat. Methods 5, 591–596 (2008). Sun, Y., Shopova, S. I., Wu, C. S. & Arnold, S. Bioinspired optofluidic FRET lasers via DNA scaffolds. Proc. Natl Acad. Sci. USA 107, 16039–16042 (2010). Sun, Y. & Fan, X. Distinguishing dna by analog-to-digital-like conversion by using optofluidic lasers. Angew. Chem. 124, 1262–1265 (2012). Chen, Y.C., Chen Q., & Fan X. Optofluidic chlorophyll lasers. Lab. 
Chip. 16, 2228–2235 (2016). Chen, Q. et al. Self-assembled DNA tetrahedral optofluidic lasers with precise and tunable gain control. Lab. Chip 13, 3351 (2013). Pask, H. M. & Carman, R. J. Ytterbium-doped silica fiber lasers: versatile sources for the 1-1.2 μm region. IEEE J. Sel. Top. Quantum Electron. 1, 2–13 (1995). Braginsky, V. B., Gorodetsky, M. L. & Ilchenko, V. S. Quality-factor and nonlinear properties of optical whispering-gallery modes. Phys. Lett. A 137, 393–397 (1989). This research was supported in part by the National Natural Science Foundation of China (No: 92053116 X.H.W., 62035002 P.W., 61905006 Y.H.), the National Research and Development Program of China (No. 2017YFB0405200 P.W.), and Beijing Natural Science Foundations (L182011 X.H.W., 4192013, X.H.W.). The authors thank Dr. Yan Yinzhou at BJUT and Dr. Yao Haizi at Fuzhou University for valuable comments on the manuscript and BJUT Core Facilities for technical support. Part of this work was performed at the Technology Centre for Protein Sciences (TCPS) in Tsinghua University and Health Science Centre in Peking University. Marion C. Lang Present address: Carl Zeiss Microscopy GmbH ZEISS Group, Kistlerhofstr.75, 81379, Munich, Germany These authors contributed equally: Yaping Wang, Marion C Lang, Jinsong Lu. Laboratory for Biomedical Photonics, Institute of Laser Engineering, Faculty of Materials and Manufacturing, Beijing University of Technology, 100124, Beijing, China Yaping Wang, Marion C. Lang, Jinsong Lu, Mingqian Suo, Mengcong Du & Xiu-Hong Wang Laboratory for Advanced Laser Technology and Applications, Faculty of Materials and Manufacturing, Beijing University of Technology, 100124, Beijing, China Yubin Hou & Pu Wang Key Laboratory of Trans-scale Laser Manufacturing Technology, Ministry of Education, Beijing, China Yubin Hou, Xiu-Hong Wang & Pu Wang Beijing Engineering Research Center of Laser Technology, Beijing, China Beijing Colleges and Universities Engineering Research Center of Advanced Laser Manufacturing, Beijing, China Yaping Wang Jinsong Lu Mingqian Suo Mengcong Du Yubin Hou Xiu-Hong Wang Pu Wang X.H.W. and M.C.L. proposed the idea and designed the experiments. M.C.L., Y.W., J.L. conducted the measurements. M.C.L, Y.W., J.L., and X.H.W. wrote the paper. X.H.W. performed modifications of the paper. X.H.W. and J.L. answered the questions raised by the reviewers. M.S., M.D., and Y.H. participated in the data analysis and discussions. X.H.W. and P.W. supervised the project. Correspondence to Xiu-Hong Wang. Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Wang, Y., Lang, M.C., Lu, J. et al. Demonstration of intracellular real-time molecular quantification via FRET-enhanced optical microcavity. Nat Commun 13, 6685 (2022). https://doi.org/10.1038/s41467-022-34547-4
Quantifying dissipation using fluctuating currents
Junang Li1, Jordan M. Horowitz1,2,3, Todd R. Gingrich1,4 & Nikta Fakhri1
Nature Communications volume 10, Article number: 1666 (2019)
Systems coupled to multiple thermodynamic reservoirs can exhibit nonequilibrium dynamics, breaking detailed balance to generate currents. To power these currents, the entropy of the reservoirs increases. The rate of entropy production, or dissipation, is a measure of the statistical irreversibility of the nonequilibrium process. By measuring this irreversibility in several biological systems, recent experiments have detected that particular systems are not in equilibrium. Here we discuss three strategies to replace binary classification (equilibrium versus nonequilibrium) with a quantification of the entropy production rate. To illustrate, we generate time-series data for the evolution of an analytically tractable bead-spring model. Probability currents can be inferred and utilized to indirectly quantify the entropy production rate, but this approach requires prohibitive amounts of data in high-dimensional systems. This curse of dimensionality can be partially mitigated by using the thermodynamic uncertainty relation to bound the entropy production rate using statistical fluctuations in the probability currents.
Nonequilibrium dynamics is an essential physical feature of biological and active matter systems1,2,3. By harvesting a fuel—in the form of solar energy, a redox potential, or a metabolic sugar—the molecular dynamics in these systems differs profoundly from the equilibrium case. Some of the fuel's free energy is utilized to perform work or is stored in an alternative form, but the remainder is dissipated into the environment, often in the form of heat1,4. The energetic loss can alternatively be cast as an increase in entropy of the environment, and the entropy production is associated with broken time-reversal symmetry in the system's dynamics5,6,7. This connection has been leveraged to experimentally classify particular biophysical processes as thermal or active8,9 based on the existence of probability currents10,11. There is great interest in going beyond this binary classification—thermal versus active—to experimentally quantify how active, or how nonequilibrium, a process is12,13,14.
Such a quantification could, for example, provide insight into how efficiently molecular motors are able to work together to drive large-scale motions15,16,17,18,19. One way to quantify this nonequilibrium activity is to measure the dissipation rate, or how much free energy is lost per unit time. In a biophysical setting, a direct local calorimetric measurement is challenging, but signatures of the dissipation are encoded in stochastic fluctuations of the system20, even far-from-equilibrium21,22,23,24,25,26,27,28,29. We set out to develop and explore strategies for inferring the dissipation rate from these experimentally-accessible nonequilibrium fluctuations. In a system of interacting driven colloids, where all degrees of freedom are tracked, Lander et al. have indirectly measured dissipation from fluctuations27. However, it should also be possible to bound dissipation on the basis of nonequilibrium fluctuations in a subset of the relevant degrees of freedom. As a tangible example of our motivation, consider the experiment of Battle et al., which tracked cilia shape fluctuations to determine that the cilia dynamics were driven out of equilibrium9. With suitable analysis of those shape fluctuations, might one determine, or at least constrain, the free energetic cost to sustain the cilia's motion? Though our ultimate motivations pertain to these complex systems, here we present an exhaustive analytical and numerical study of a tractable model30. Using a model consisting of beads coupled by linear springs, we demonstrate how the statistical properties of trajectories provides information about the dissipation rate. The bead-spring model furthermore allows us to address various practical considerations that will be important for future experimental applications of the inference techniques: how much data is required, what is the role of coarse graining, and what can be done about the curse of dimensionality. We show that fluctuations in nonequilibrium currents can provide a route to bound the dissipation rate, even in high-dimensional dynamical systems operating outside a linear-response regime. Crucially, we anticipate many of these insights will support the data analysis of experimentally accessible biological and active matter systems. Bead-spring model As one of the simplest possible nonequilibrium models, we consider two coupled beads, each allowed to fluctuate in one dimension. The beads are connected to each other and to the boundary walls by linear springs with stiffness k (see Fig. 1). We imagine that the beads are embedded in two different viscous fluids, one hot with temperature Th and the other cold with temperature Tc. These fluids exert a friction γ on each bead, absorbing energy from the beads' motion. In the absence of coupling between the beads, the average rate at which each thermal bath injects energy exactly balances with the rate it absorbs energy due to frictional drag. By coupling the beads, however, there is a net steady-state rate of heat flow \(\dot Q_{{\mathrm{ss}}}\) from the hot reservoir into the system and out to the cold reservoir. The hot reservoir's entropy changes with rate \(\dot S_{\mathrm{h}} = - \dot Q_{{\mathrm{ss}}}/T_{\mathrm{h}}\) while the cold reservoir's entropy increases with rate \(\dot S_{\mathrm{c}} = \dot Q_{{\mathrm{ss}}}/T_{\mathrm{c}}\). 
In total, the steady-state entropy production rate can therefore be written as $$\dot S_{{\mathrm{ss}}} = \dot S_{\mathrm{h}} + \dot S_{\mathrm{c}} = \dot Q_{{\mathrm{ss}}}\left( {T_{\mathrm{c}}^{ - 1} - T_{\mathrm{h}}^{ - 1}} \right).$$ Two coupled beads at different temperatures. a An illustration of the model with the red bead immersed in a hot temperature bath Th and the blue bead immersed in a cold temperature bath Tc. Three springs with equal spring constant k connected the beads and the walls. Displacements away from the equilibrium position of the hot and cold beads are denoted by x1 and x2, respectively. b The steady-state probability density and current as a function of bead displacements for spring constant k = 1, friction γ = 1, and thermal energy scales kBTc = 25 and kBTh = 250. c The local entropy production rate calculated from Eq. (7) of the system as a function of bead displacements for the same parameters This equation expresses the entropy production rate as the product of a flux \(\dot Q_{{\mathrm{ss}}}\) and the conjugate thermodynamic driving force \((T_{\mathrm{c}}^{ - 1} - T_{\mathrm{h}}^{ - 1})\). The typical situation is that the driving force may be tuned in the lab and the flux is measured as a response. Suppose, however, that it is not simple to measure the heat flux. Rather, we imagine directly observing the bead positions as a function of time. Those measurements are sufficient to extract the entropy production rate, but to do so we must go beyond the thermodynamics and explicitly consider the system's dynamics, an approach known as stochastic thermodynamics1,31,32. The starting point is to mathematically describe the bead-spring dynamics with a coupled overdamped Langevin equation \(\mathop {{\bf{x}}}\limits^. = A{\mathbf{x}} + F{\mathbf{\xi }}\), where x = (x1, x2)T is the vector consisting of each bead's displacement from its equilibrium position, ξ = (ξ1, ξ2)T is a vector of independent Gaussian white noises, and $$A = \left( {\begin{array}{*{20}{c}} { - 2k/\gamma } & {k/\gamma } \\ {k/\gamma } & { - 2k/\gamma } \end{array}} \right), \ F = \left( {\begin{array}{*{20}{c}} {\sqrt {2k_{\mathrm{B}}T_{\mathrm{h}}/\gamma } } & 0 \\ 0 & {\sqrt {2k_{\mathrm{B}}T_{\mathrm{c}}/\gamma } } \end{array}} \right).$$ The matrix A captures deterministic forces acting on the beads due to the springs, while F describes the random forces imparted by the medium. The strength of these random forces depends on the temperature and the Boltzmann constant kB, consistent with the fluctuation-dissipation theorem33. It is useful to cast the Langevin equation as a corresponding Fokker-Planck equation for the probability of observing the system in configuration x at time t, ρ(x, t): $$\frac{{\partial \rho ({\mathbf{x}},t)}}{{\partial t}} = - \nabla \cdot (A{\mathbf{x}}\rho ({\mathbf{x}},t) - D\nabla \rho ({\mathbf{x}},t)) \equiv - \nabla \cdot {\mathbf{j}}({\mathbf{x}},t),$$ with D = FFT/2. Though we are modeling a two-particle system, it can be helpful to think of the entire system as being a single point diffusing through x space with diffusion tensor D and with deterministic force γAx. The second equality in Eq. (3) defines the probability current j(x, t). These probability currents (and their fluctuations) will play a central role in our strategies for inferring the rate of entropy production. Due to its analytic and experimental tractability, this bead-spring system and related variants have been extensively studied as models for nonequilibrium dynamics34,35,36,37,38,39. 
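The dynamics just described can be reproduced numerically by a simple Euler-Maruyama integration of the coupled Langevin equation. The following is a minimal sketch (not the authors' code), assuming Python with numpy; the spring constant, friction, temperatures and timestep match the values quoted for Figs. 1 and 2 (k = γ = 1, kBTc = 25, kBTh = 250, timestep 10^-3), while the trajectory length and random seed are illustrative assumptions. The closed-form steady-state entropy production rate derived below in Eq. (7) is printed for comparison.

    import numpy as np

    # Parameters as in Fig. 1 (thermal energies given as kB*T, so rates come out in units of kB)
    k, gamma = 1.0, 1.0
    kB_Tc, kB_Th = 25.0, 250.0
    dt, n_steps = 1e-3, 1_000_000          # timestep 10^-3; trajectory length is an assumption

    # Deterministic force matrix A and noise amplitude matrix F of Eq. (2)
    A = np.array([[-2.0 * k / gamma,  k / gamma],
                  [ k / gamma, -2.0 * k / gamma]])
    F = np.diag([np.sqrt(2.0 * kB_Th / gamma),
                 np.sqrt(2.0 * kB_Tc / gamma)])

    rng = np.random.default_rng(0)
    x = np.zeros(2)                        # displacements (x1: hot bead, x2: cold bead)
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        xi = rng.standard_normal(2)                        # independent Gaussian white noise
        x = x + (A @ x) * dt + (F @ xi) * np.sqrt(dt)      # Euler-Maruyama update
        traj[i] = x

    # Closed-form steady-state entropy production rate, Eq. (7), in units of kB per unit time
    S_dot_ss = k * (kB_Th - kB_Tc) ** 2 / (4.0 * gamma * kB_Th * kB_Tc)
    print(S_dot_ss)                        # 2.025 for these parameters

The array traj plays the role of the discretely sampled timeseries {xΔt, x2Δt, ...} used by the estimators discussed below.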
In particular, the steady-state properties are well-known. Correlations between the position of bead i at time 0 and that of bead j at time t are given by Cij(t) = 〈xi(0)xj(t)〉. The expectation value is taken over realizations of the Gaussian noise to give $$C(t) = {\int_{ - \infty }^t} d s\;e^{A(t - s)}FF^Te^{A^T(t - s)}.$$ The steady-state density and current are expressed simply as $$\begin{array}{*{20}{l}} {\rho _{{\mathrm{ss}}}({\mathbf{x}})} \hfill & = \hfill & {(2\pi \sqrt {{\mathrm{det}}{\cal{C}}} )^{ - 1}e^{ - \frac{1}{2}{\mathbf{x}}^T{\cal{C}}^{ - 1}{\mathbf{x}}}} \hfill \\ {{\mathrm{j}}_{{\mathrm{ss}}}({\mathbf{x}})} \hfill & = \hfill & {(A{\mathbf{x}} + D{\cal{C}}^{ - 1}{\mathbf{x}})\rho _{{\mathrm{ss}}}({\mathbf{x}})} \hfill \end{array}$$ in terms of the long-time limit of the correlation matrix $${\cal{C}} \equiv \mathop {{\lim }}\limits_{t \to \infty } C(t) = \frac{{k_{\mathrm{B}}}}{{12k}}\left( {\begin{array}{*{20}{c}} {7T_{\mathrm{h}} + T_{\mathrm{c}}} & {2(T_{\mathrm{c}} + T_{\mathrm{h}})} \\ {2(T_{\mathrm{c}} + T_{\mathrm{h}})} & {T_{\mathrm{h}} + 7T_{\mathrm{c}}} \end{array}} \right).$$ The steady-state current jss(x) is a vector field that specifies the probability current conditioned upon the system being in configuration x. Associated with this current is a local conjugate thermodynamic force \({\mathbf{F}}({\mathbf{x}}) = k_{\mathrm{B}}{\mathbf{j}}_{{\mathrm{ss}}}^T({\mathbf{x}})D^{ - 1}/\rho _{{\mathrm{ss}}}({\mathbf{x}})\)40,41. The product of the microscopic current and force is the local entropy production rate at configuration x: \(\dot \sigma _{{\mathrm{ss}}}({\mathbf{x}}) = {\mathbf{F}}({\mathbf{x}}) \cdot {\mathbf{j}}_{{\mathrm{ss}}}({\mathbf{x}})\). Upon integrating over all configurations, we obtain the total entropy production rate $$\begin{array}{*{20}{l}} {\dot S_{{\mathrm{ss}}}} \hfill & = \hfill & {{\int} d {\mathbf{x}}\,\dot \sigma _{{\mathrm{ss}}}({\mathbf{x}}) = k_{\mathrm{B}}{\mathrm{Tr}}\left\{ {AD^{ - 1}A{\cal{C}} - {\cal{C}}^{ - 1}D} \right\}} \hfill \\ {} \hfill & = \hfill & {k_{\mathrm{B}}\frac{{k(T_{\mathrm{h}} - T_{\mathrm{c}})^2}}{{4\gamma T_{\mathrm{h}}T_{\mathrm{c}}}}.} \hfill \end{array}$$ Comparing with Eq. (1), we see that the rate of net heat flow is \(\dot Q_{{\mathrm{ss}}} = k_{\mathrm{B}}k(T_{\mathrm{h}} - T_{\mathrm{c}})/4\gamma\). Our ability to analytically compute the heat flow derives from the linear coupling between beads, yet we are ultimately interested in experimental scenarios in which linear coupling could not be assumed. In those more complicated systems, there is no simple analytical expression for the local entropy production rate, but we could still estimate \(\dot \sigma _{{\mathrm{ss}}}\) by sampling trajectories from the steady-state distributions—either in a computer or in the lab. We now consider strategies for this estimation by sampling the bead-spring dynamics and comparing with the analytical expression, Eq. (7). Estimating the steady state from sampled trajectories We first seek estimates of jss(x) and ρss(x) from a long trajectory x(t) of bead positions over an observation time τobs. We estimate the steady-state density by the empirical density, the fraction of time the trajectory spends in state x: $$\rho ({\mathbf{x}}) = \frac{1}{{\tau _{{\mathrm{obs}}}}}{\int_{0}^{\tau _{{\mathrm{obs}}}}} \delta \left( {{\mathbf{x}}\left( t \right) - {\mathbf{x}}} \right)dt,$$ where δ is a Dirac delta function. 
The empirical density is an unbiased estimate of the steady-state density, meaning the fluctuating density ρ(x) tends to ρss(x) in the long-time limit. Similarly, an unbiased estimate for the steady-state currents is the empirical current $${\mathbf{j}}({\mathbf{x}}) = \frac{1}{{\tau _{{\mathrm{obs}}}}}{\int_0^{\tau _{{\mathrm{obs}}}}} \delta \left( {{\mathbf{x}}\left( t \right) - {\mathbf{x}}} \right)\circ d{\mathbf{x}}(t).$$ This Stratonovich integral can be colloquially read as the time-average of all displacement vectors that were observed when the system occupied configuration x. In practice, experiments typically record the configuration x at discrete-time intervals Δt such that the trajectory is given by the timeseries {xΔt, x2Δt,...}. Consequently we work with estimates of the density and currents42: $$\hat \rho ({\mathbf{x}}) = \frac{{\Delta t}}{{\tau _{{\mathrm{obs}}}}}\mathop {\sum}\limits_{i = 1}^{\tau _{{\mathrm{obs}}}/\Delta t} K \left( {{\mathbf{x}}_{i\Delta t},{\mathbf{x}}} \right)$$ $$\hat {\mathbf{{\jmath}}}({\mathbf{x}}) = \frac{{\hat \rho ({\mathbf{x}})}}{{2\Delta t}}\frac{{\mathop {\sum}\limits_{i = 2}^{\tau _{{\mathrm{obs}}}/\Delta t - 1} L \left( {{\mathbf{x}}_{i\Delta t},{\mathbf{x}}} \right)\left[ {{\mathbf{x}}_{\left( {i + 1} \right)\Delta t} - {\mathbf{x}}_{\left( {i - 1} \right)\Delta t}} \right]}}{{\mathop {\sum}\limits_{i = 2}^{\tau _{{\mathrm{obs}}}/\Delta t - 1} L ({\mathbf{x}}_{i\Delta t},{\mathbf{x}})}},$$ where K and L are kernel functions43. The kernel functions make it natural to spatially coarse grain the data, a necessity because experiments have a limited resolution and because most microscopic configurations will never be sampled by a finite-length trajectory. The function K(xiΔt, x) controls how observing the ith data point at position xiΔt impacts the estimate of \(\hat \rho\) at a nearby position x. Similarly, L controls how currents are estimated in the neighborhood of the observed data points. Specific choices for K and L are discussed in the Methods section. Using \(\hat \rho\) and \(\hat {\mathbf{ \jmath}}\) we can now construct direct estimates of the entropy production rate. Direct strategies for entropy production inference In computing Eq. (7), we integrated the local entropy production rate F(x) ⋅ jss(x) over all configurations x. When jss(x) and F(x) are not known, it is natural to replace them by the estimators \(\hat {\mathbf{ \jmath}}({\mathbf{x}})\) and \(\hat {\mathbf{F}}({\mathbf{x}}) \equiv k_{\mathrm{B}}\hat {\mathbf{ \jmath}}^T({\mathbf{x}})D^{ - 1}/\hat \rho ({\mathbf{x}})\). Though \(\hat {\mathbf{F}}\) is constructed from the unbiased estimators \(\hat {\mathbf{ \jmath}}\) and \(\hat \rho\), \(\hat {\mathbf{F}}\) is only asymptotically unbiased, necessitating sufficiently long trajectories for the bias to become negligible. 
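As an illustration of Eqs. (10) and (11), the density, current and thermodynamic force at a query point x can be estimated from the sampled timeseries by kernel smoothing. The sketch below assumes, for simplicity, that K and L are the same isotropic Gaussian kernel of bandwidth h written for the two-dimensional configuration space of Fig. 1; the paper's specific kernel choices are given in its Methods section and are not reproduced here, and the bandwidth is an illustrative assumption.

    import numpy as np

    def gaussian_kernel(samples, x0, h):
        """Isotropic 2D Gaussian kernel of bandwidth h evaluated at each sample point."""
        d2 = np.sum((samples - x0) ** 2, axis=1)
        return np.exp(-0.5 * d2 / h ** 2) / (2.0 * np.pi * h ** 2)

    def density_current_force(traj, x0, dt, D_inv, h=1.0, kB=1.0):
        """Kernel estimates of rho-hat(x0), j-hat(x0) and F-hat(x0), following
        Eqs. (10)-(11), with K = L taken to be the same Gaussian kernel (an assumption)."""
        w = gaussian_kernel(traj, x0, h)
        rho = w.mean()                                 # Eq. (10): time-average of the kernel weights
        disp = traj[2:] - traj[:-2]                    # centred displacements x_{(i+1)dt} - x_{(i-1)dt}
        wL = w[1:-1]                                   # kernel weights at the interior points
        j = rho * (wL[:, None] * disp).sum(axis=0) / (2.0 * dt * wL.sum())   # Eq. (11)
        Fhat = kB * (D_inv @ j) / rho                  # F-hat = kB j-hat^T D^{-1} / rho-hat
        return rho, j, Fhat

Here D_inv is the inverse of the diffusion tensor D = FF^T/2 defined above; points x0 far from any sampled configuration will return poorly conditioned estimates, which is precisely the sampling issue discussed next.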
Utilizing \(\hat {\mathbf{F}}\), we approximate \(\dot S_{{\mathrm{ss}}}\) by either a spatial or a temporal average: $$\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{spat}}} \equiv {\int} d {\mathbf{x}}\,\hat {\mathbf{F}}({\mathbf{x}}) \cdot \hat {\mathbf{ \jmath}}({\mathbf{x}})$$ $$\begin{array}{*{20}{l}} {\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{temp}}}} \hfill & \equiv \hfill & {\frac{1}{{\tau _{{\mathrm{obs}}}}}{\int_0^{\tau _{{\mathrm{obs}}}}} dt\,\hat {\mathbf{F}}({\mathbf{x}}(t)) \cdot \circ d{\mathbf{x}}(t)} \hfill \\ {} \hfill & = \hfill & {\frac{1}{{\tau _{{\mathrm{obs}}}}}\mathop {\sum}\limits_{i = 2}^{\tau _{{\mathrm{obs}}}/\Delta t} \hat {\mathbf{F}}\left( {\frac{{{\mathbf{x}}_{i{\mathrm{\Delta }}t} + {\mathbf{x}}_{(i - 1){\mathrm{\Delta }}t}}}{2}} \right) \cdot \left[ {{\mathbf{x}}_{i{\mathrm{\Delta }}t} - {\mathbf{x}}_{(i - 1){\mathrm{\Delta }}t}} \right].} \hfill \end{array}$$ The performance of these estimators is assessed using data sampled from numerical simulations of the Langevin equation, described further in Methods. As illustrated in Fig. 2, the estimators are biased for any finite trajectory length, but they converge to the analytical result, Eq. (7), with sufficiently long sampling times. Convergence of dissipation estimates. The spatial (blue solid circles) and temporal (red solid squares) dissipation rate estimates converge to the steady-state value \(\dot S_{{\mathrm{ss}}}\) (red line) of Eq. (7). Estimates of the total dissipation rate, calculated from Eqs. (12) and (13), are extracted from Langevin trajectories simulated for time τobs with timestep 10−3 using k = γ = 1, kBTc = 25, and kBTh = 250 as in Fig. 1. Error bars are the standard error of the mean, computed from 10 independent trajectories. Estimates of the local dissipation rate from the spatial estimator with different trajectory lengths are plotted in the inset At first glance \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{spat}}}\) and \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{temp}}}\) may appear equivalent due to ergodicity. Indeed, with an infinite amount of sampling, both schemes must yield the same result, \(\dot S_{{\mathrm{ss}}}\), but the temporal estimator converges significantly faster with finite sampling. Plots of the estimated local dissipation rate (Fig. 2 inset) hint at the reason \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{spat}}}\) converges more slowly: \(\dot \sigma _{{\mathrm{ss}}}({\mathbf{x}})\) must be accurately estimated by \(\hat {\dot \sigma }_{{\mathrm{ss}}}({\mathbf{x}}) = \hat {\mathbf{F}}({\mathbf{x}}) \cdot \hat {\mathbf{ \jmath}}({\mathbf{x}})\) throughout the entire configuration space. The integral in Eq. (12) equally weights \(\hat {\dot \sigma }_{{\mathrm{ss}}}({\mathbf{x}})\) at all x, even those points which have been infrequently (or never) visited by the stochastic trajectory. Our x has dimension two, but we will also consider higher-dimensional configuration spaces, for example by coupling more than two beads in a linear chain. If that configuration space has dimension greater than three or four, it becomes impractical to estimate \(\dot \sigma _{{\mathrm{ss}}}\) across the entire space. Furthermore, estimating Eq. (12) for high-dimensional x confronts the classic problem of performing numerical quadrature on a high-dimensional grid, where it is well-known that Monte Carlo integration becomes a superior method. 
The temporal integral can be thought of as a convenient way to implement such a Monte Carlo integration, with sampled x's coming from the configurations of the stochastic trajectory. Notably, Eq. (13) is computed from estimates of the thermodynamic force near the sampled configurations xiΔt, precisely where the finite trajectory has been most reliably sampled. In contrast, Eq. (12) requires spurious extrapolation of the kernel density estimates (\(\hat \rho\) and \(\hat{\mathbf{ \jmath}}\)) to points which are far from the any sampled configurations. The advantage of the temporal estimator over the spatial one becomes even more pronounced as dimensionality increases. Nevertheless, even \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{temp}}}\) becomes harder to estimate when x grows in dimensionality. Getting accurate estimates of F around the xiΔt requires observing several trajectories which have cut through that part of configuration space while traveling in each direction. But when the dimensionality is large, recurrence to the same configuration-space neighborhood takes a long time. Consequently, we turn to a complementary method which can be informative even when x is too high-dimensional to accurately estimate F. Indirect strategy for entropy production inference Thus far our estimators have been based on detailed microscopic information, but as the dimensionality of x increases, estimating the microscopic steady-state properties requires exponentially more data. To combat this curse of dimensionality, it is standard to project high-dimensional dynamics onto a few preferred degrees of freedom9,44,45,46. For example, the projected coordinates could be two principle components from a principle component analysis. Such projected dynamics have been used to detect broken detailed balance9, however, these reduced dynamics overlook hidden dissipation from the discarded degrees of freedom. An alternative strategy that retains contributions from all degrees of freedom is provided by recent theoretical results relating entropy production and current fluctuations in general nonequilibrium steady-state dynamics28,29,47,48,49,50,51,52. To this end, we introduce a single projected macroscopic current, constructed as a linear combination of the microscopic currents: $$j_{\mathbf{d}} = {\int} d {\mathbf{x}}\,{\mathbf{d}}({\mathbf{x}}) \cdot {\mathbf{j}}({\mathbf{x}}),$$ where d(x) is a vector field that weights how much a microscopic current at x contributes to the macroscopic current jd. Any physically measurable current—electrons flowing through a wire, heat passing from one bead to the other, or the production of a chemical species in a reaction network—can be cast as such a linear superposition of microscopic currents. Figure 3 illustrates one particular example by applying the weighting field d(x) = F(x) to project microscopic currents onto the single macroscopic current jF. Each step of the trajectory is weighted by the value of d associated with the observed transition, and this weighted average, accumulated as a function of time, is the fluctuating macroscopic current (fluctuating because it depends on the particular stochastic trajectory). Each trajectory observed for a time τobs yields a measurement jd of the fluctuating current, and many such trajectories give a distribution P(jd) characterized by mean 〈jd〉 and variance Var(jd). 
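The accumulation sketched in Fig. 3 translates directly into a short computation: each displacement of the discretely sampled trajectory is weighted by d evaluated at the midpoint, and the sum is divided by the observation time. The following is a minimal sketch using the numpy conventions of the earlier snippets; splitting one long trajectory into shorter segments is one simple (assumed) way to sample the distribution P(jd), and the number of segments is an illustrative choice.

    import numpy as np

    def empirical_current(traj, d_field, dt):
        """Fluctuating macroscopic current j_d over one trajectory: the discrete,
        midpoint (Stratonovich-like) analogue of Eq. (14), divided by tau_obs."""
        mid = 0.5 * (traj[1:] + traj[:-1])             # midpoints of each step
        disp = traj[1:] - traj[:-1]                    # displacements
        weights = np.array([d_field(x) for x in mid])  # d evaluated along the path
        tau_obs = len(traj) * dt
        return np.einsum('ij,ij->', weights, disp) / tau_obs

    def current_statistics(traj, d_field, dt, n_segments=100):
        """Split one long trajectory into shorter realizations to sample P(j_d);
        returns the empirical mean and variance of j_d."""
        js = np.array([empirical_current(seg, d_field, dt)
                       for seg in np.array_split(traj, n_segments)])
        return js.mean(), js.var(ddof=1)

With d_field set to the estimated thermodynamic force from the previous sketch, the mean returned by current_statistics is the temporal estimator of Eq. (13); the mean and variance together are exactly the ingredients of the uncertainty-relation bound introduced next.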
The thermodynamic uncertainty relation (TUR)28,29,48,49,50 then constrains the entropy production rate in terms of the dynamical fluctuations of this macroscopic current: $$\dot S_{{\mathrm{ss}}} \ge \frac{{2k_{\mathrm{B}}\left\langle {j_{\mathbf{d}}} \right\rangle ^2}}{{\tau _{{\mathrm{obs}}}{\mathrm{Var}}(j_{\mathbf{d}})}} \equiv \dot S_{{\mathrm{TUR}}}^{({\mathbf{d}})}.$$ Extracting current fluctuations from trajectories. a A realization of a long trajectory diffusing through configuration space. The macroscopic current is computed by choosing a vector field d(x), here taken to be the thermodynamic force field F(x). b On the microscopic scale, the trajectory may be modeled as discrete jumps between neighboring lattice sites (here labeled with symbols: lozenge, star, square, …). The continuous-space vector field is decomposed into components along the direction of possible jumps, i.e., d evaluated at the black triangle site can be expressed in terms of the weight d◊,▲ associated with a jump from the black triangle to the white diamond. c A realization of the trajectory gives a single value of the empirical current jd. By recording many realizations, the empirical current distribution P(jd) is sampled to give 〈jd〉 and Var(jd) (inset). In the case that d = F, the mean slope of this accumulated current is the average entropy production rate. d The empirical current for a single realization is constructed as the sum of the d weights for each microscopic transition of the jump process. Note that we have used Var(jd) to denote the variance of the macroscopic empirical current distribution, but some prior work29,48 used this notation to denote the way the variance scaled with observation time. The difference between these notations is the factor of τobs in the denominator of the right hand side of Eq. (15). Unlike the field of microscopic currents, j(x), the macroscopic current jd is a single scalar quantity, allowing estimates of its cumulants—particularly the mean \(\widehat {\left\langle {j_{\mathbf{d}}} \right\rangle }\) and variance \(\widehat {{\mathrm{Var}}(j_{\mathbf{d}})}\)—to be extracted from a modest amount of experimental data. Indeed, measurements of kinesin fluctuations have recently been used to infer constraints on the efficiency of these molecular motors18,53. Importantly, the TUR is valid for any choice of d, granting freedom to consider fluctuations of arbitrary macroscopic currents, some of which will yield tighter bounds than others. In a later section, we use Monte Carlo sampling to seek a choice for d which yields the tightest possible bound, but first we consider an important physically motivated choice, d = F. In this case, the macroscopic current jF is the fluctuating entropy production rate (cf. Eqs. (7) and (14)), so \(\left\langle {j_{\mathbf{F}}} \right\rangle = \dot S_{{\mathrm{ss}}}\). With access to F, we can thus compute the entropy production rate by simply taking the mean of the generalized current (the average slope in Fig. 3), or we could use the fluctuations from repeated realizations of jF to get a bound on \(\dot S_{{\mathrm{ss}}}\) via Eq. (15). It perhaps seems foolish to settle for a bound if one could compute the actual entropy production rate, but in practice one would not typically have access to F. More likely, it would only be possible to estimate F from data as \(\hat {\mathbf{F}}\).
With sufficient data, \(\hat {\mathbf{F}}\) converges to F such that a temporal estimate of the entropy production rate would eventually become accurate, but this convergence is slow in high dimensions. Alternatively, by choosing \({\mathbf{d}} = \hat {\mathbf{F}}\), we obtain a TUR lower bound estimate $$\widehat {\dot S}_{{\mathrm{TUR}}}^{(\hat {\mathbf{F}})} = \frac{{2k_{\mathrm{B}}\widehat {\left\langle {j_{\hat {\mathbf{F}}}} \right\rangle }^2}}{{\tau _{{\mathrm{obs}}}\widehat {{\mathrm{Var}}(j_{\hat {\mathbf{F}}})}}}.$$ A key advantage of this estimate is that it is less sensitive to whether \(\hat {\mathbf{F}}\) has converged than either \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{spat}}}\) and \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{temp}}}\). When \(\hat {\mathbf{F}}\) is noisily estimated due to little data or high dimensionality, the TUR estimate can nevertheless provide an accessible route to constraining the entropy production rate from experimental data. Convergence of the entropy production rate estimates To assess the costs and benefits of the various estimation schemes, we numerically sampled trajectories for the two-bead model of Fig. 1 and for a variant with five beads coupled along a one-dimensional chain with spring constant k, the five beads being embedded in thermal baths whose temperatures linearly ramp from Tc to Th. Equation (7) gives the entropy production rate for the two-bead model as a function of the bath temperatures. An analogous expression is derived in Supplementary Note 1 for the model with five beads, and both expressions are plotted with a solid red line in Fig. 4. The temporal and spatial estimators both converge to these analytical expressions in the long trajectory limit, while the TUR estimate tends to the lower bound \(\dot S_{{\mathrm{TUR}}}^{{\mathbf{(d)}}}\). We performed a series of calculations to assess: (1) how close is this lower bound to the true dissipation rate and (2) how long of a trajectory is needed to converge all three estimates. Performance of dissipation rate estimators. Data are shown both for the model with two beads (a) and the higher-dimensional model with five beads (b). The TUR bound with d = F (dashed black line) becomes tighter to the actual dissipation rate (solid red line) when the dynamics is closer to equilibrium (Tc/Th → 1) and in the limit of many beads. Inset plots show the estimator convergence rates for temperature ratios of 0.1 and 0.5, with error bars reporting standard error, computed from 10 independent samples. The blue dashed line in a is the TUR bound resulting from a Monte Carlo optimization scheme, as described further in Fig. 5. The bottom right inset of b reflects that the TUR estimator may be useful as a practical proxy for the entropy production rate for high-dimensional systems when the dynamics is weakly driven We discuss the convergence results first, plotted as insets in Fig. 4. Using a trajectory of length τobs, \(\hat {\mathbf{F}}\) was estimated, and this estimated thermodynamic force field was used to plot how quickly \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{spat}}}\) and \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{temp}}}\) converged to their expected value of \(\dot S_{{\mathrm{ss}}}\). To compare convergence of the TUR bound on an equal footing, we recognize that the τobs → ∞ limit of a long trajectory with perfect sampling will not yield \(\dot S_{{\mathrm{ss}}}\) but rather the bound \(\dot S_{{\mathrm{TUR}}}^{({\mathbf{F}})}\). 
In all three cases we scale the estimate by its appropriate infinite-sampling limit and observe how quickly this ratio decays to one. The superiority of the temporal estimator over the spatial one is clear in the two-bead model, and the inadequacy of the spatial estimator is so stark in the higher-dimensional five-bead model that it was prohibitive to compute. The TUR estimator performance is comparable to the temporal average estimator when F can be estimated well (low dimensionality and large thermodynamic driving). In the more challenging situation in which the phase space is high dimensional and the statistical irreversibility is more subtle, the TUR estimator appears to offer some advantage. It converges with roughly an order of magnitude fewer samples than are required for \(\widehat {\dot S}_{{\mathrm{ss}}}^{{\mathrm{temp}}}\) (see bottom right inset of Fig. 4b).

To understand how well one can estimate the entropy production rate from current fluctuations, we must also address how close the TUR lower bound is to \(\dot S_{{\mathrm{ss}}}\). The dashed black line of Fig. 4 shows that the TUR lower bound equals the actual entropy production rate in the near-equilibrium limit Tc → Th. Far from equilibrium, the TUR lower bound remains the same order of magnitude as the entropy production rate, with the deviation increasing with the size of the temperature difference. Comparing the dashed black lines in two different dimensions, we can see that as more beads are added to the model, this deviation between \(\dot S_{{\mathrm{TUR}}}^{({\mathbf{F}})}\) and \(\dot S_{{\mathrm{ss}}}\) decreases. Hence the TUR bound more closely approximates the actual entropy production rate with increasing dimensionality and decreasing thermodynamic force, precisely the conditions under which the TUR estimate converges more rapidly.

Optimizing the macroscopic current

Thus far we have focused on measuring the statistics of a particular macroscopic empirical current, the fluctuating entropy production, constructed by choosing d = F. This choice was a natural starting point since the fluctuations are known to saturate Eq. (15) in the equilibrium limit Tc → Th29. However, when working with time-series data we had to replace F by the estimate \(\hat {\mathbf{F}}\), and this estimated thermodynamic force is error-prone in high dimensions. In the previous section we saw that the TUR estimator is sufficiently robust that a tight bound for \(\dot S_{{\mathrm{ss}}}\) may be inferred even when \(\hat {\mathbf{F}}\) has not fully converged to F. This robustness derives from the validity of Eq. (15) for all possible choices of d. The generality of the TUR can be further leveraged by optimizing over d: $$\dot S_{{\mathrm{ss}}} \ge \frac{{2k_{\mathrm{B}}}}{{\tau _{{\mathrm{obs}}}}}\mathop {{{\mathrm{max}}}}\limits_{{\mathbf{d}}({\mathbf{x}})} \frac{{\left\langle {j_{\mathbf{d}}} \right\rangle ^2}}{{{\mathrm{Var}}(j_{\mathbf{d}})}}.$$ We are not aware of methods to explicitly compute the optimal choice of d, but a vector field d*(x) which outperforms F(x) can be found readily by Monte Carlo (MC) sampling with a preference for macroscopic currents with a large TUR ratio 〈jd〉2/Var(jd). Each step of the MC algorithm requires 〈jd〉 and Var(jd), which could be estimated with trajectory sampling, as illustrated in Fig. 3a, c.
In fact, one could collect a single long trajectory—from an experiment or from simulation—then sample d* based on mean and variance estimates \(\widehat {\left\langle {j_{{\mathbf{d}}^ \ast }} \right\rangle }\) and \(\widehat {{\mathrm{Var}}(j_{{\mathbf{d}}^ \ast })}\) for that fixed trajectory. Such a scheme is enticing, but we warn that the procedure is susceptible to over-optimization of the TUR ratio, since optimizing to maximize the ratio \(\widehat {\left\langle {j_{{\mathbf{d}}^ \ast }} \right\rangle }^2/\widehat {{\mathrm{Var}}(j_{{\mathbf{d}}^ \ast })}\) is not the same as optimizing the ratio \(\left\langle {j_{{\mathbf{d}}^ \ast }} \right\rangle ^2/{\mathrm{Var}}(j_{{\mathbf{d}}^ \ast })\). The former can yield a large value just because the trajectory happens to return anomalous estimates for the mean and variance of the generalized current. The latter ratio does not depend on any one trajectory but is rather averaged over all trajectories. Avoiding over-optimization requires appropriate cross-validation. For example, d* could be selected based on one sampled trajectory and then the dissipation bound inferred from an independently sampled trajectory.

Rather than implementing such a cross-validation scheme, we avoided the over-optimization problem for this model system by putting the dynamics on a grid to compute the means and variances exactly. As described in Methods, we construct a continuous-time Markov jump process on a square lattice with grid spacing h = {h1, h2} such that the h → 0 jump process limits to the same Fokker-Planck description, Eq. (3), as the continuous-space Langevin dynamics48. The vector field d(x) is also discretized as a set of weights dx+h,x associated with the transition from x to the neighboring microstate at x + h (see Fig. 3b, d). In place of trajectory sampling, the mean and variance can be extracted from a standard computation of the current's scaled cumulant generating function as a maximum eigenvalue of a tilted rate matrix54,55,56.

The MC sampling returns an ensemble of nearly-optimal choices for d* such that \(\dot S_{{\mathrm{ss}}} \ge \dot S_{{\mathrm{TUR}}}^{({\mathbf{d}}^ \ast )} \ge \dot S_{{\mathrm{TUR}}}^{({\mathbf{F}})}\). Each d* from the ensemble yields a similar TUR ratio, but the near-optimal vector fields are qualitatively distinct (see Fig. 5). We lack a physical understanding of the differences between the various near-optimal choices d* and the thermodynamic force field F. Even without a clear physical interpretation, we have a straightforward numerical procedure for extracting as tight an entropy production bound as can be obtained from macroscopic current fluctuations.

Fig. 5 (caption): Monte Carlo sampling for maximally informative currents. We seek a weighting vector field d such that the TUR bound is as close to the true entropy production rate as possible. Starting either with d = F (blue curve) or with a random vector field (red curve), a Markov chain Monte Carlo procedure was used to change d in search of a higher \(\dot S_{{\mathrm{TUR}}}^{({\mathbf{d}})}/\dot S_{{\mathrm{ss}}}\) ratio. The Monte Carlo sampling discovers diverse ways to yield a similar maximal value of the TUR ratio, suggesting that while the optimization problem is not dominated by a single basin, competitive near-optimal solutions can be discovered from a variety of starting points.

Discussion

Physical systems in contact with multiple thermodynamic reservoirs support nonequilibrium dynamics that manifest as probability currents in phase space.
Detection of these currents has been used in a biophysical context to differentiate between dissipative and equilibrium processes. In this paper, we have explored how the currents can be further utilized to infer the rate of entropy production. Using a solvable toy model, we demonstrated three inference strategies: one based on a spatial average, one based on a temporal average, and one based on fluctuations in the currents. Regardless of strategy, the entropy production inference becomes more challenging and requires more data as the thermodynamic drive decreases. This challenge results from the fact that weakly driven systems produce trajectories which look very similar when played forward or backward in time. The weaker the drive, the more data are required to confidently detect the statistical irreversibility. It is in this weak-driving limit that we see the starkest difference between the performance of the three studied estimators. As we move to higher-dimensional but weakly driven systems, it requires too much data to detect the statistical irreversibility at every point in phase space, so performing spatial averages is out of the question. The temporal average can still be taken, but for a fixed amount of data, estimates of F become systematically more error-prone with increased dimensionality. In that limit we find it useful to measure not just the average current, but also the variance. By leveraging the TUR we circumvent the need to accurately estimate F and achieve more rapid convergence.

The TUR-inspired estimator is not without pitfalls. Most prominently, it only returns a bound on the entropy production rate, and it is not simple to understand how tight this bound will be. That tightness, characterized by \(\eta \equiv \dot S_{{\mathrm{TUR}}}^{({\mathbf{F}})}/\dot S_{{\mathrm{ss}}}\), does not, for example, depend solely on the strength of the thermodynamic drive. In Supplementary Note 2 and Supplementary Figure 2, we make this point by separately tuning the various spring constants to show how η depends on properties of the system in addition to the ratio of reservoir temperatures. Though our modestly sized toy systems (no more than five coupled beads) always produce η of order unity, there is little reason to believe that the TUR bound will remain a good proxy for the entropy production rate in the limit of a high-dimensional system in which only a few degrees of freedom are visible. Future experiments are needed to elucidate whether these inference strategies can be usefully applied to the complex biophysical dynamics that has motivated our study.

Methods

Numerically generating the bead-spring dynamics

We simulate the bead-spring dynamics in two complementary ways: as discrete-time trajectories in continuous space and as continuous-time trajectories in discrete space. The results presented in Figs. 2 and 4 stem from continuous-space calculations. Trajectories are generated by numerically integrating the overdamped Langevin equation using the stochastic Euler integrator with timestep Δt according to \({\mathbf{x}}_{(i+1){\mathrm{\Delta }}t} = {\mathbf{x}}_{i{\mathrm{\Delta }}t} + A{\mathbf{x}}_{i{\mathrm{\Delta }}t}{\mathrm{\Delta }}t + F{\boldsymbol{\eta }}\), where η is a vector of random numbers drawn from the normal distribution with variance Δt for each timestep. Setting k = γ = 1, we numerically integrate the equation of motion with timestep Δt = 0.001. The initial condition x0 is effectively drawn from the steady state by starting the clock after integrating the dynamics for a long time from a random initial configuration.
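As an illustration of this integration step, here is a minimal Python sketch (not the authors' code) of the Euler update just described. The drift matrix A and noise amplitude matrix F are assumed to be supplied for the model at hand, and the function name is hypothetical.

```python
import numpy as np

def simulate_langevin(A, F, x0, dt=0.001, n_steps=1_000_000, rng=None):
    """Euler update x_{(i+1)dt} = x_{i dt} + A x_{i dt} dt + F eta,
    with eta a Gaussian vector of variance dt (std sqrt(dt)) at each step."""
    rng = np.random.default_rng() if rng is None else rng
    dim = len(x0)
    traj = np.empty((n_steps + 1, dim))
    traj[0] = x0
    for i in range(n_steps):
        eta = rng.normal(scale=np.sqrt(dt), size=dim)
        traj[i + 1] = traj[i] + A @ traj[i] * dt + F @ eta
    return traj
```

As in the text, an initial segment of the trajectory would be discarded so that the retained portion is effectively sampled from the steady state.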
In addition to the discrete-time simulations, continuous-time jump trajectories were simulated in discrete space with a rate $${\Bbb W}_{{\mathbf{x}} + {\mathbf{h}},{\mathbf{x}}} = \left[ {\left( {A{\mathbf{x}}/2} \right) + {\mathbf{h}}^TD} \right] \cdot {\mathbf{h}}/{\mathbf{h}}^T{\mathbf{h}}$$ for transitioning from a lattice site at position x to a neighboring site at position x + h48. This discrete-space trajectory was generated by first discretizing the phase space on a 200 by 200 grid with x1 ranging from −50 to 50 and x2 ranging from −20 to 20 as shown in Fig. 3a. The Markov jump process is simulated using the Gillespie algorithm57.

Estimating density and current

To form histogrammed estimates, we bin the data into a 100 by 100 grid with x1 ranging between ±50 and x2 ranging between ±20. We can write the kernel functions as \(K({\mathbf{x}}_{i{\mathrm{\Delta }}t},{\mathbf{x}}) = L({\mathbf{x}}_{i{\mathrm{\Delta }}t},{\mathbf{x}}) = \mathop {\sum}\nolimits_{m,n} \chi _{mn}({\mathbf{x}})\chi _{mn}({\mathbf{x}}_{i{\mathrm{\Delta }}t})\), where χmn is the indicator function taking the value 1 only if the argument lies in the bin with row and column indices m and n. Alternatively, a continuous estimate of the density and current can be constructed using smooth non-negative functions for K and L, each of which integrates to one. For our kernel density estimates, we place a Gaussian at each data point by choosing K(xiΔt, x) ∝ exp[−(x − xiΔt)TΣ−1(x − xiΔt)]. The breadth of the ith Gaussian, bi, known as the bandwidth, sets the diagonal matrix Σ−1 via \(\Sigma _{ii} = b_i^2\). The estimation of currents proceeds similarly using kernel regression with the Epanechnikov kernel58 $$L({\mathbf{x}}_{i{\mathrm{\Delta }}t},{\mathbf{x}}) \propto \left( {\begin{array}{*{20}{l}} {\mathop {\prod}\limits_{j = 1}^d \left( {1 - \frac{{(x_{i{\mathrm{\Delta }}t;j} - x_j)^2}}{{b_j^2}}} \right),} \hfill & {|{\mathbf{x}}_{i{\mathrm{\Delta }}t} - {\mathbf{x}}| < {\mathbf{b}}} \hfill \\ {0,} \hfill & {{\mathrm{otherwise,}}} \hfill \end{array}} \right.$$ where d is the spatial dimension and xiΔt;j is the jth component of the configuration xiΔt at discrete time i. The bandwidths for both the Gaussian and Epanechnikov kernels are chosen using the rule of thumb suggested by Bowman and Azzalini58, specifically $${\mathbf{b}} = \left( {\frac{4}{{N(d + 2)}}} \right)^{1/(d + 4)}\frac{{\tilde {\boldsymbol{\sigma }}}}{{0.6745}}.$$ Here N denotes the length of the data, and \(\tilde {\boldsymbol{\sigma }}\) is the median absolute deviation estimator computed by \(\tilde {\boldsymbol{\sigma }} = \sqrt {{\mathrm{median}}\{ |v - {\mathrm{median}}(v)|\} {\mathrm{median}}\{ |{\mathbf{x}} - {\mathrm{median}}({\mathbf{x}})|\} }\), where v is the magnitude of the velocities. In general the bandwidth will go to zero with increasing data length, so the kernel estimator should be asymptotically unbiased. In that limit of infinite data, the differences between histogram and kernel density estimates are insignificant. When data is limited, we find the fastest convergence by using kernel density estimates with a multivariate Gaussian for K and the Epanechnikov kernel for L.
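The following is a minimal Python sketch, not the paper's implementation, of these kernel choices and the rule-of-thumb bandwidth. The function names are hypothetical, and the way the position and velocity median absolute deviations are combined follows the expression above only loosely.

```python
import numpy as np

def rule_of_thumb_bandwidth(x, v_mag):
    """b = (4 / (N (d + 2)))^(1/(d+4)) * sigma_tilde / 0.6745, with sigma_tilde
    built from median absolute deviations of positions x (N x d) and of the
    speed magnitudes v_mag (length N), one bandwidth per coordinate."""
    N, d = x.shape
    mad_x = np.median(np.abs(x - np.median(x, axis=0)), axis=0)   # per coordinate
    mad_v = np.median(np.abs(v_mag - np.median(v_mag)))
    sigma_tilde = np.sqrt(mad_v * mad_x)
    return (4.0 / (N * (d + 2))) ** (1.0 / (d + 4)) * sigma_tilde / 0.6745

def gaussian_kde(x, samples, b):
    """Density estimate with K ~ exp[-(x - x_i)^T Sigma^{-1} (x - x_i)],
    Sigma_jj = b_j^2, normalized so each kernel integrates to one."""
    z = (x - samples) / b
    w = np.exp(-np.sum(z**2, axis=1))
    return w.sum() / (len(samples) * np.prod(b * np.sqrt(np.pi)))

def epanechnikov_weights(x, samples, b):
    """Unnormalized Epanechnikov weights L(x_i, x): the product form above
    inside the bandwidth box, zero outside (used for kernel regression of j)."""
    u = (samples - x) / b
    inside = np.all(np.abs(u) < 1.0, axis=1)
    return np.where(inside, np.prod(1.0 - u**2, axis=1), 0.0)
```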
To optimally handle limited data, the bandwidth is typically chosen to minimize the mean squared error (MSE) of the estimated function59,60,61: $${\mathrm{MSE}}_{\dot{S}_{{\mathrm{ss}}}} = \left\langle {\left( {\widehat {\dot S}_{{\mathrm{ss}}} - \dot S_{{\mathrm{ss}}}} \right)^2} \right\rangle \ \,{\mathrm{and}}\, \ {\mathrm{MSE}}_{{\mathrm{TUR}}} = \left\langle {\left( {\widehat {\dot S}_{{\mathrm{TUR}}} - \dot S_{{\mathrm{TUR}}}} \right)^2} \right\rangle ,$$ where the expectation value is taken over realizations of trajectories. The MSE is naturally a function of the bandwidth since the value of the estimator depends on b. Supplementary Figure 1 shows this bandwidth-dependence of the MSE estimated from the five-bead model temporal estimator and TUR lower bound with τobs = 1200 and Tc/Th = 0.1. Notice that the TUR lower bound tends to be less sensitive to the choice of bandwidth.

Estimation of the TUR lower bound

To get estimates for the current's mean and variance, \(\widehat {\left\langle {j_{\mathbf{d}}} \right\rangle }\) and \(\widehat {{\mathrm{Var}}(j_{\mathbf{d}})}\), from a single realization of length τobs, we first divide the trajectory into τobs/Δτ subtrajectories of length Δτ. For the continuous-time Markov jump process as shown in Fig. 3b, the vector field d(x) is discretized as a set of weights dx+h,x associated with the edges of the lattice and the trajectory is a series of lattice sites occupied over time. The accumulated current, as illustrated in Fig. 3d, is computed as the sum of weights along the subtrajectory k: \(J_{\mathbf{d}}^{(k)} = \mathop {\sum}\nolimits_i {{\mathbf{d}}_{{\mathbf{x}}_i,{\mathbf{x}}_{i + 1}}} .\) For the continuous-space Langevin dynamics, the accumulated current for subtrajectory k is given by \(J_{\mathbf{d}}^{(k)} = \mathop {\sum}\nolimits_i {\mathbf{d}} \left( {\frac{{{\mathbf{x}}_{i{\mathrm{\Delta }}t} + {\mathbf{x}}_{(i - 1){\mathrm{\Delta }}t}}}{2}} \right) \cdot \left( {{\mathbf{x}}_{i{\mathrm{\Delta }}t} - {\mathbf{x}}_{(i - 1){\mathrm{\Delta }}t}} \right)\). This accumulated current is scaled by the trajectory length to get the fluctuating macroscopic current for subtrajectory k: \(j_{\mathbf{d}}^{(k)} = J_{\mathbf{d}}^{(k)}/\Delta \tau\). The sample mean and variance of \(\left\{ j_{\mathbf{d}}^{(1)},j_{\mathbf{d}}^{(2)},\ldots \right\}\) give \(\widehat {\left\langle {j_{\mathbf{d}}} \right\rangle }\) and \(\widehat {{\mathrm{Var}}(j_{\mathbf{d}})}\), respectively.

Computing the mean and variance by tilting

It is useful to conceptualize 〈jd〉 and Var(jd) in terms of sampled trajectories, but finite trajectory sampling will result in statistical errors. We may alternatively compute the mean and variance as the first two derivatives of the scaled cumulant generating function \(\phi (\lambda ) = {\mathrm{lim}}_{\tau _{{\mathrm{obs}}} \to \infty }\frac{1}{{\tau _{{\mathrm{obs}}}}}{\mathrm{ln}}\left\langle {e^{\lambda j_{\mathbf{d}}\tau _{{\mathrm{obs}}}}} \right\rangle\), evaluated at λ = 0. The expectation value averages over all trajectories of length τobs, and in the long-time limit, ϕ(λ) coincides with the maximum eigenvalue of the tilted operator with matrix elements \({\Bbb W}(\lambda )_{{\mathbf{x}} + {\mathbf{h}},{\mathbf{x}}} = {\Bbb W}_{{\mathbf{x}} + {\mathbf{h}},{\mathbf{x}}}e^{\lambda d_{{\mathbf{x}} + {\mathbf{h}},{\mathbf{x}}}}\)54,55,56. By discretizing space, we computed ϕ(λ) around λ = 0 as the maximal eigenvalue of the tilted operator.
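As a concrete illustration of this eigenvalue computation, the following Python sketch assembles a tilted generator from an edge list and extracts its leading eigenvalue. This is only a stand-in: the paper itself used Mathematica's Arnoldi routine (noted below), the edge-list data layout is an assumption, and SciPy's sparse eigensolver is used here purely for illustration. The finite-difference formulas that convert ϕ(λ) into the mean and variance are given immediately after.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

def tilted_generator(n_states, edges, rates, d_weights, lam):
    """Assemble W(lambda).  `edges` lists (y, x) pairs for allowed jumps x -> y,
    with jump rates `rates` and current weights `d_weights` on the same edges.
    Off-diagonal entries are W_{y,x} exp(lambda * d_{y,x}); the diagonal holds
    minus the untilted exit rate of each state."""
    y, x = (np.array(e) for e in zip(*edges))
    rates = np.asarray(rates, dtype=float)
    data = rates * np.exp(lam * np.asarray(d_weights, dtype=float))
    W = sp.coo_matrix((data, (y, x)), shape=(n_states, n_states)).tocsr()
    exit_rate = np.bincount(x, weights=rates, minlength=n_states)
    return W - sp.diags(exit_rate)

def scgf(n_states, edges, rates, d_weights, lam):
    """phi(lambda): eigenvalue of the tilted generator with largest real part."""
    W_lam = tilted_generator(n_states, edges, rates, d_weights, lam)
    val = eigs(W_lam, k=1, which='LR', return_eigenvectors=False)
    return float(val[0].real)
```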
Using numerical derivatives, we estimate $$\left\langle {j_{\mathbf{d}}} \right\rangle = \phi ' (0) \approx \frac{{\phi (\delta \lambda ) - \phi ( - \delta \lambda )}}{{2\delta \lambda }}$$ $${\mathrm{Var}}(j_{\mathbf{d}}) = \phi '' (0) \approx \frac{{\phi (\delta \lambda ) + \phi ( - \delta \lambda )}}{{\delta \lambda ^2}}$$ with δλ = 0.00001. (The second-difference formula takes this simple form because ϕ(0) = 0 for the scaled cumulant generating function.)

MC optimization

We seek a vector field d(x) such that the TUR bound is as large as possible. To identify such a choice of d, we first decompose it into a basis of M = 100 Gaussians: $${\mathbf{d}}({\mathbf{x}}) = \mathop {\sum}\limits_{i = 1}^M {w^{(i)}} {\mathrm{exp}}\left[ { - ({\mathbf{x}} - {\mathbf{x}}^{(i)})^TB^{ - 1}({\mathbf{x}} - {\mathbf{x}}^{(i)})} \right].$$ The ith Gaussian, centered at position x(i), carries a weight w(i). The centers for the first 50 Gaussians are uniformly sampled with x1 ranging from −50 to 50 and x2 from −20 to 20. The breadth of the Gaussians along the i direction, Bii, is set to 10% of the length of the interval from which uniform samples are drawn. Only the weights for these 50 Gaussians will be allowed to freely vary. The remaining 50 Gaussians are paired with the first 50 to impose the antisymmetry d(x) = −d(−x). Practically, this antisymmetry constraint is achieved by placing a second Gaussian at −x with the opposite weight as the Gaussian positioned at x. With this regularization, we replace the optimization of d with a sampling problem. We sample the first 50 weights w in proportion to \({\mathrm{exp}}(\beta \dot S_{{\mathrm{TUR}}}^{({\mathbf{d}})})\), where β is an effective inverse temperature and \(\dot S_{{\mathrm{TUR}}}^{({\mathbf{d}})}\) depends on the weights since d depends on w. By choosing β = 5000, the sampling is strongly biased toward weights that give a near-optimal value of the TUR bound.

After initializing the weights with uniform random numbers from [−1, 1], Monte Carlo moves w → w′ were proposed by perturbing the wi's by random uniform numbers drawn from [−0.5, 0.5]. The d′ corresponding to these new weights was computed according to Eq. (24), and the TUR bound for that proposed macroscopic current was computed using numerical derivatives of the tilted operator \({\Bbb W}(\lambda )\) around λ = 0 as described above. The maximum eigenvalue calculations made use of Mathematica's implementation of the Arnoldi method, performed using sparse matrices. Each proposed move to w′ was accepted with the Metropolis criterion \({\mathrm{min}}[1,{\mathrm{exp}}( - \beta (\dot S_{{\mathrm{TUR}}}^{({\mathbf{d}})} - \dot S_{{\mathrm{TUR}}}^{({\mathbf{d}}' )}))]\). In addition to starting from a random choice of d, we performed MC sampling about the thermodynamic force by expressing d as $${\mathbf{d}}({\mathbf{x}}) = {\mathbf{F}}({\mathbf{x}}) +\mathop {\sum}\limits_{i = 1}^M {w^{(i)}} {\mathrm{exp}}\left[ { - ({\mathbf{x}} - {\mathbf{x}}^{(i)})^TB^{ - 1}({\mathbf{x}} - {\mathbf{x}}^{(i)})} \right].$$ Again, we have 100 Gaussians, half of them uniformly placed throughout the space and the rest positioned to make the perturbation antisymmetric. We stochastically update the weights by adding a uniform random number drawn from [−0.05, 0.05], and conditionally accept the update with the same Metropolis factor as before. The resulting TUR lower bound tends toward higher values until it hits a plateau (Fig. 5, blue line).
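A minimal Python sketch of this Metropolis sampling over the free weights is given below. It is not the authors' code: the function tur_bound_of_weights is a hypothetical stand-in that maps a weight vector to \(\dot S_{{\mathrm{TUR}}}^{({\mathbf{d}})}\) (for example via the tilted-operator derivatives above), and the step size corresponds to the [−0.5, 0.5] proposal used for the random initialization (0.05 would be used for the perturbation about F).

```python
import numpy as np

def metropolis_optimize(w0, tur_bound_of_weights, beta=5000.0, step=0.5,
                        n_steps=500, rng=None):
    """Sample weights w in proportion to exp(beta * S_TUR(d(w))) with a
    symmetric uniform proposal, keeping track of the best bound seen."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    s = tur_bound_of_weights(w)
    best_w, best_s = w.copy(), s
    for _ in range(n_steps):
        w_new = w + rng.uniform(-step, step, size=w.shape)       # propose w -> w'
        s_new = tur_bound_of_weights(w_new)
        if rng.random() < min(1.0, np.exp(beta * (s_new - s))):  # Metropolis rule
            w, s = w_new, s_new
            if s > best_s:
                best_w, best_s = w.copy(), s
    return best_w, best_s
```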
For each temperature ratio in Fig. 4a, the MC sampling was run for 500 steps, after which the TUR bound achieved a plateau and further optimization is either impossible or at least significantly more challenging.

Representative data generated from sampling trajectories with the aforementioned codes can be accessed online at https://doi.org/10.5281/zenodo.2576526. Computer codes implementing all simulations and analyses described in this manuscript are available for download at https://doi.org/10.5281/zenodo.2576526.

References

Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 75, 126001 (2012). Marchetti, M. C. et al. Hydrodynamics of soft active matter. Rev. Mod. Phys. 85, 1143 (2013). Gnesotto, F. S., Mura, F., Gladrow, J. & Broedersz, C. P. Broken detailed balance and non-equilibrium dynamics in living systems: a review. Rep. Prog. Phys. 81, 066601 (2018). Qian, H., Kjelstrup, S., Kolomeisky, A. B. & Bedeaux, D. Entropy production in mesoscopic stochastic thermodynamics: nonequilibrium kinetic cycles driven by chemical potentials, temperatures, and mechanical forces. J. Phys.: Condens. Matter 28, 153004 (2016). Feng, E. H. & Crooks, G. E. Length of time's arrow. Phys. Rev. Lett. 101, 090602 (2008). Roldán, É., Martinez, I. A., Parrondo, J. M. R. & Petrov, D. Universal features in the energetics of symmetry breaking. Nat. Phys. 10, 457 (2014). Roldán É. Irreversibility and Dissipation in Microscopic Systems (Springer, 2014). Martin, P., Hudspeth, A. J. & Jülicher, F. Comparison of a hair bundle's spontaneous oscillations with its response to mechanical stimulation reveals the underlying active process. Proc. Natl Acad. Sci. USA 98, 14380–14385 (2001). Battle, C., et al. Broken detailed balance at mesoscopic scales in active biological systems. Science 352, 604–607 (2016). Qian, H. Vector field formalism and analysis for a class of thermal ratchets. Phys. Rev. Lett. 81, 3063 (1998). Zia, R. K. P. & Schmittmann, B. Probability currents as principal characteristics in the statistical mechanics of non-equilibrium steady states. J. Stat. Mech.: Theory Exp. 2007, P07012 (2007). Fodor, É. et al. How far from equilibrium is active matter? Phys. Rev. Lett. 117, 038103 (2016). Fodor, É. et al. Nonequilibrium dissipation in living oocytes. EPL (Europhys. Lett.) 116, 30008 (2016). Ghanta, A., Neu, J. C. & Teitsworth, S. Fluctuation loops in noise-driven linear dynamical systems. Phys. Rev. E 95, 032128 (2017). Mizuno, D., Tardin, C., Schmidt, C. F. & MacKintosh, F. C. Nonequilibrium mechanics of active cytoskeletal networks. Science 315, 370–373 (2007). Lau, A. W. C., Lacoste, D. & Mallick, K. Nonequilibrium fluctuations and mechanochemical couplings of a molecular motor. Phys. Rev. Lett. 99, 158102 (2007). Fakhri, N. et al. High-resolution mapping of intracellular fluctuations using carbon nanotubes. Science 344, 1031–1035 (2014). Pietzonka, P., Barato, A. C. & Seifert, U. Universal bound on the efficiency of molecular motors. J. Stat. Mech.: Theory Exp. 2016, 124004 (2016). Brown, A. I. & Sivak, D. A. Allocating dissipation across a molecular machine cycle to maximize flux. Proc. Natl Acad. Sci. USA 114, 11057–11062 (2017). Kubo, R. The fluctuation-dissipation theorem. Rep. Prog. Phys. 29, 255 (1966). Kurchan, J. Fluctuation theorem for stochastic dynamics. J. Phys. A. Math. Gen. 31, 3719 (1998). Crooks, G. E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E 60, 2721 (1999). Harada, T.
& Sasa, Si Equality connecting energy dissipation with a violation of the fluctuation-response relation. Phys. Rev. Lett. 95, 130602 (2005). Collin, D. et al. Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies. Nature 437, 231 (2005). Seifert, U. & Speck, T. Fluctuation-dissipation theorem in nonequilibrium steady states. EPL (Europhys. Lett.) 89, 10007 (2010). Jarzynski, C. Equalities and inequalities: irreversibility and the second law of thermodynamics at the nanoscale. Annu. Rev. Condens. Matter Phys. 2, 329–351 (2011). Lander, B., Mehl, J., Blickle, V., Bechinger, C. & Seifert, U. Noninvasive measurement of dissipation in colloidal systems. Phys. Rev. E 86, 030401 (2012). Barato, A. C. & Seifert, U. Thermodynamic uncertainty relation for biomolecular processes. Phys. Rev. Lett. 114, 158101 (2015). Gingrich, T. R., Horowitz, J. M., Perunov, N. & England, J. L. Dissipation bounds all steady-state current fluctuations. Phys. Rev. Lett. 116, 120601 (2016). Gladrow, J. Broken Detailed Balance in Active Matter—Theory, Simulation and Experiment. Master's thesis, Georg-August-Universität Göttingen (2015). Sekimoto, K. Kinetic characterization of heat bath and the energetics of thermal ratchet models. J. Phys. Soc. Jpn. 66, 1234–1237 (1997). Sekimoto, K. Langevin equation and thermodynamics. Prog. Theor. Phys. Suppl. 130, 17–27 (1998). Van Kampen, N. G. Stochastic Processes in Physics and Chemistry, Vol. 1 (Elsevier, 1992). Rieder, Z., Lebowitz, J. L. & Lieb, E. Properties of a harmonic crystal in a stationary nonequilibrium state. J. Math. Phys. 8, 1073–1078 (1967). Bonetto, F., Lebowitz, J. L. & Lukkarinen, J. Fourier's law for a harmonic crystal with self-consistent stochastic reservoirs. J. Stat. Phys. 116, 783–813 (2004). Ciliberto, S., Imparato, A., Naert, A. & Tanase, M. Heat flux and entropy produced by thermal fluctuations. Phys. Rev. Lett. 110, 180601 (2013). Falasco, G., Baiesi, M., Molinaro, L., Conti, L. & Baldovin, F. Energy repartition for a harmonic chain with local reservoirs. Phys. Rev. E 92, 022129 (2015). Chun, H.-Myung & Noh, J. D. Hidden entropy production by fast variables. Phys. Rev. E 91, 052128 (2015). Mura, F., Gradziuk, G. & Broedersz, C. P. Non-equilibrium scaling behaviour in driven soft biological assemblies. Phys. Rev. Lett. 121, 038002 (2018). Qian, H. Mesoscopic nonequilibrium thermodynamics of single macromolecules and dynamic entropy-energy compensation. Phys. Rev. E 65, 016102 (2001). Van den Broeck, C. & Esposito, M. Three faces of the second law. II. Fokker-Planck formulation. Phys. Rev. E 82, 011144 (2010). Just, W., Kantz, H., Ragwitz, M. & Schmüser, F. Nonequilibrium physics meets time series analysis: Measuring probability currents from data. EPL (Europhys. Lett.) 62, 28 (2003). Lamouroux, D. & Lehnertz, K. Kernel-based regression of drift and diffusion coefficients of stochastic processes. Phys. Lett. A 373, 3507–3512 (2009). Weiss, J. B. Fluctuation properties of steady-state Langevin systems. Phys. Rev. E 76, 061128 (2007). Gladrow, J., Fakhri, N., MacKintosh, F. C., Schmidt, C. F. & Broedersz, C. P. Broken detailed balance of filament dynamics in active networks. Phys. Rev. Lett. 116, 248301 (2016). Gladrow, J., Broedersz, C. P. & Schmidt, C. F. Nonequilibrium dynamics of probe filaments in actin-myosin networks. Phys. Rev. E 96, 022408 (2017). Pietzonka, P., Barato, A. C. & Seifert, U. Universal bounds on current fluctuations. Phys. Rev. E 93, 052145 (2016). Gingrich, T. R., Rotskoff, G. M. & Horowitz, J. M. 
Inferring dissipation from current fluctuations. J. Phys. A: Math. Theor. 50, 184004 (2017). Pietzonka, P., Ritort, F. & Seifert, U. Finite-time generalization of the thermodynamic uncertainty relation. Phys. Rev. E 96, 012101 (2017). Horowitz, J. M. & Gingrich, T. R. Proof of the finite-time thermodynamic uncertainty relation for steady-state currents. Phys. Rev. E 96, 020103 (2017). Proesmans, K. & Van den Broeck, C. Discrete-time thermodynamic uncertainty relation. EPL (Europhys. Lett.) 119, 20001 (2017). Chiuchiù, D. & Pigolotti, S. Mapping of uncertainty relations between continuous and discrete time. Phys. Rev. E 97, 032109 (2018). Seifert, U. Stochastic thermodynamics: from principles to the cost of precision. Phys. A: Stat. Mech. its Appl. 504, 176–191 (2018). Lebowitz, J. L. & Spohn, H. A Gallavotti–Cohen-type symmetry in the large deviation functional for stochastic dynamics. J. Stat. Phys. 95, 333–365 (1999). Lecomte, V., Appert-Rolland, C. & van Wijland, F. Thermodynamic formalism for systems with Markov dynamics. J. Stat. Phys. 127, 51–106 (2007). Touchette, H. The large deviation approach to statistical mechanics. Phys. Rep. 478, 1–69 (2009). Gillespie, D. T. Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem. 81, 2340–2361 (1977). Bowman, A. W. & Azzalini, A. Applied Smoothing Techniques for Data Analysis: the Kernel Approach with S-Plus Illustrations, Vol. 18 (OUP Oxford, 1997). Sheather, S. J. & Jones, M. C. A reliable data-based bandwidth selection method for kernel density estimation. J. R. Stat. Soc. Series B Stat. Methodol. 53, 683–690 (1991). Sheather, S. J. Density estimation. Stat. Sci. 588–597 (2004). Turlach, B. A. Bandwidth selection in kernel density estimation: a review. CORE Inst. de. Stat. 19, 1–33 (1993).

Acknowledgements

We gratefully acknowledge the Gordon and Betty Moore Foundation for supporting T.R.G. and J.M.H. as Physics of Living Systems Fellows through Grant GBMF4513. This research was supported by a Sloan Research Fellowship (to N.F.), the J.H. and E.V. Wade Fund Award (to N.F.), and the Human Frontier Science Program Career Development Award (to N.F.).

Author information

Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, 02139, USA: Junang Li, Jordan M. Horowitz, Todd R. Gingrich & Nikta Fakhri. Department of Biophysics, University of Michigan, Ann Arbor, MI, 48109, USA: Jordan M. Horowitz. Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI, 48104, USA. Department of Chemistry, Northwestern University, Evanston, IL, 60208, USA: Todd R. Gingrich.

Author contributions: J.M.H., T.R.G. and N.F. designed research, J.L. and T.R.G. performed research and analyzed data, J.L., J.M.H., T.R.G. and N.F. wrote the paper. Correspondence to Todd R. Gingrich or Nikta Fakhri.

Li, J., Horowitz, J.M., Gingrich, T.R. et al. Quantifying dissipation using fluctuating currents. Nat Commun 10, 1666 (2019). https://doi.org/10.1038/s41467-019-09631-x
Applied Informatics

Sparsity preserving score for feature selection

Hui Yan1

Applied Informatics volume 2, Article number: 8 (2015)

Abstract

Compared with supervised feature selection, selecting features in unsupervised learning scenarios is a much harder problem due to the lack of label information. In this paper, we propose sparsity preserving score (SPS) for unsupervised feature selection based on recent advances in the sparse representation technique. SPS evaluates the importance of a feature by its power of sparse reconstructive relationship preserving. Specifically, SPS selects features that minimize the reconstruction residual based on sparse representation in the space of selected features. SPS aims to jointly select features by transforming data from a high-dimensional space of original features to a low-dimensional space of selected features through a special binary feature selection matrix. When the sparse representation is fixed, our searching strategy is an essentially discrete optimization, and our theoretical analysis guarantees that our objective function can be easily solved with a closed-form solution. The experimental results on two face data sets demonstrate the effectiveness and efficiency of our algorithm.

In many areas, such as text processing, biological information analysis, and combinatorial chemistry, data are often represented as high-dimensional feature vectors, but often only a small subset of features is necessary for subsequent learning and classification tasks. Thus, dimensionality reduction to a low-dimensional space is preferred, which can be achieved by either feature selection or feature extraction (Guyon & Elisseeff 2003). In contrast to feature extraction, feature selection aims at finding the most representative or discriminative subset of the original feature space according to some criteria and maintains the original representation of features. During recent years, feature selection has attracted much research attention and has been widely used in a variety of applications (Yu et al. 2014; Ma et al. 2012b). According to the availability of labels of training data, feature selection can be classified into supervised feature selection (Kira et al. 1992; Nie et al. 2008; Zhao et al. 2010) and unsupervised feature selection (He et al. 2005; Zhao & Liu 2007), (Yang et al. 2011; Peng et al. 2005). Supervised feature selection selects features according to the label information of each training sample. Unsupervised methods, however, are not able to obtain label information directly, and they frequently select the features which best preserve the data similarity or the manifold structure of the data.

Feature selection mainly focuses on search strategies and measurement criteria. The search strategies for feature selection can be divided into three categories: exhaustive search, sequential search, and random search. The exhaustive search aims to find the optimal solution among all possible subsets. However, it is NP-hard and thus impractical to run. Sequential search methods, such as sequential forward selection and sequential backward elimination (Kohavi & John 1997), start from an empty set or the set of all candidates as the initial subset and successively add features to the selected subset or eliminate features from it one by one. The major drawback of the traditional sequential search methods is that they rely heavily on search routes.
Although the sequential methods do not guarantee the global optimality of the selected subset, they have been widely used because of their simplicity and relatively low computational cost even for large-scale data. Plus-l-minus-r (l-r) (Devijver 1982), a slightly more reliable sequential search method, considers deleting features that were previously selected and selecting features that were previously deleted. However, it only partially solves the limitation of search routes and brings in additional parameters. The random search methods, such as random hill climbing and its extension sequential floating search (Jain & Zongker 1997), take advantage of randomized steps of the search and select features from all candidates with a chance probability per feature.

Measurement criterion is also an important research direction in feature selection. Data variance (Duda et al. 2001) ranks the score of each feature by the variance along a dimension. The measurement criterion of data variance finds features that are useful for representing data; however, these features may not be useful for preserving discriminative information. Laplacian score (He et al. 2005) is a recent locality graph-based unsupervised feature selection algorithm. Laplacian score reflects the locality-preserving power of each feature. Recently, Wright et al. presented a Sparse Representation-based Classification (SRC) (Wright et al. 2009) method. Afterwards, sparse representation-based feature extraction became an active research direction. Qiao et al. (2010) present a Sparsity Preserving Projections (SPP) method, which aims to preserve the sparse reconstructive relationship of the data. Zhang et al. (2012) recently present a graph optimization for dimensionality reduction with sparsity constraints, which can be viewed as an extension of SPP. Clemmensen et al. (2011) provide a sparse linear discriminant analysis with a sparseness constraint on projection vectors. To our knowledge, feature selection with a direct connection to SRC has not yet emerged. In this paper, we use SRC as a measurement criterion to design an unsupervised feature selection algorithm called sparsity preserving score (SPS). The formulated objective function, which is an essentially discrete optimization, aims to seek a binary linear transformation such that in a low-dimensional space the sparse representation coefficients are preserved. When the sparse representation is fixed, our theoretical analysis guarantees that our objective function can be easily solved with a closed-form solution, which is optimal. SPS simply ranks the score of each feature by the Frobenius norm of the sparse linear reconstruction residual in the space of selected features.

Unsupervised feature selection criterion

Let \(x_i \in R^{m \times 1}\) be the \(i\)th training sample and \(X = [x_1, x_2, \dots, x_N] \in R^{m \times N}\) be a matrix composed of all training samples. The unsupervised criterion to select \(m'\) (\(m' < m\)) features is defined as $$ \min_A \mathrm{loss}\left(X, XU^A\right) + \mu\,\varOmega\left(U^A\right) $$ where A is the set of the indices of selected features, \(U^A\) is the corresponding m × m-sized feature selection matrix, and \(XU^A\) is the reconstruction of the reduced space in \( {R}^{m^{\prime}\times N} \) to the original space in \(R^{m \times N}\). loss(⋅) is the loss function, and \(\mu\varOmega(U^A)\) is the regularization with μ as its parameter.

Sparse representation

Given a test sample y, we represent y in an overcomplete dictionary whose basis vectors are the training samples themselves, i.e., y = Xβ.
If the system of linear equations is underdetermined, this representation is naturally sparse. The sparsest solution can be sought by solving the following \(l_1\) optimization problem (Donoho 2006; Candès et al. 2006): $$ \widehat{\beta} = \arg\min_{\beta} \|\beta\|_1, \quad s.t.\; y = X\beta $$ This problem can be solved in polynomial time by standard linear programming algorithms (Chen et al. 2001).

We formulate our strategy to select \(n\) (\(n < m\)) features as follows: given a set of unlabeled training samples \(x_i \in R^{m \times 1}\), \(i = 1, \dots, N\), learn a feature selection matrix \(P \in R^{m \times n}\) such that P is optimal according to our objective function. For the task of feature selection, P is required to be a special 0–1 binary matrix which satisfies two constraints: (1) each row of P has one and only one non-zero entry of 1 and (2) each column of P has at most one non-zero entry. Accordingly, the sum of entries in each row equals 1 and the sum of entries in each column is less than or equal to 1. For test, \( {x}_i^{\prime}={U}^T{x}_i \) is the new representation of \(x_i\), where \(x_i^{\prime}(k) = x_i(k)\) if the kth feature is selected, and otherwise \( {x}_i^{\prime}(k)=0 \). We define the following objective function to minimize the sparse linear reconstruction residual and measure the sparsity by the \(l_1\)-norm of the coefficients. $$ \min_{P,\{\beta_i,\,i=1,\dots,N\}} J(P,\beta_i) := \sum_{i=1}^{N} \|Px_i - PD_i\beta_i\|_F^2 + \lambda\|\beta_i\|_1 \quad s.t.\quad \sum_{j=1}^{m} P(i,j) = 1,\;\; \sum_{i=1}^{n} P(i,j) \le 1,\;\; P(i,j) = 0 \ \mathrm{or}\ 1 $$ Here, \(D_i = [x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_N] \in R^{m \times (N-1)}\) is the collection of training samples without the ith sample, \(\beta_i\) is the sparse representation coefficient vector of \(x_i\) over \(D_i\), and λ is a scalar parameter. The items in line 1 of (2) are the approximation and sparsity constraints in the selected-feature space, respectively. (2) is a joint optimization of P and \(\beta_i\) (\(i = 1, \dots, N\)). Since P and \(\beta_i\) (\(i = 1, \dots, N\)) are dependent on each other, this problem cannot be solved directly. We update the variables alternately with the others fixed. By fixing \(\beta_i\) (\(i = 1, \dots, N\)), removing terms irrelevant to P and rewriting the first term in (2) in matrix form, the optimization problem (2) is reduced to $$ \min_P \mathrm{trace}\left\{P\varGamma\varGamma^TP^T\right\} \quad s.t.\quad \sum_{j=1}^{m} P(i,j) = 1,\;\; \sum_{i=1}^{n} P(i,j) \le 1,\;\; P(i,j) = 0 \ \mathrm{or}\ 1 $$ where \(\varGamma = [\gamma_1, \dots, \gamma_N]\), and \(\gamma_i = x_i - D_i\beta_i\).
Under the constraints in (3), we suppose \(P(i, k_i) = 1\), then $$ \begin{array}{rl} \mathrm{trace}\left\{P\varGamma\varGamma^TP^T\right\} &= \sum_{i=1}^{m} P(i,:)\,\varGamma\varGamma^T\,P(i,:)^T \\ &= \sum_{i=1}^{m} \left\{P(i,:)\varGamma\right\}\left\{P(i,:)\varGamma\right\}^T \\ &= \sum_{i=1}^{m}\sum_{j=1}^{N} \left\{\varGamma(k_i,j)\right\}^2 \end{array} $$ The optimization problem in (3) is converted into computing the sparsity preserving score of each feature, which is defined as $$ \mathrm{Score}(i) = \sum_{j=1}^{N} \left\{\varGamma(k_i,j)\right\}^2, \quad i = 1, \dots, m $$ We then rank the features and select the n with the smallest Score(i), i = 1, …, m. Without loss of generality, suppose the n selected features are indexed by \( {k}_i^{*},i=1,\dots, n \). We can construct the matrix P as $$ P\left(i,j\right)=\left\{\begin{array}{c}\hfill 1,\kern2.25em j={k}_i^{*}\hfill \\ {}\hfill 0,\ \mathrm{otherwise}\hfill \end{array}\right. $$ By fixing P and removing terms irrelevant to \(\beta_i\) (\(i = 1, \dots, N\)), the optimization problem (3) is reduced to the following \(l_1\) optimization problem $$ \min_{\{\beta_i,\,i=1,\dots,N\}} \sum_{i=1}^{N} \|Px_i - PD_i\beta_i\|_F^2 + \lambda\|\beta_i\|_1 $$ The iterative procedure is given in Algorithm 1. The initial solution of \(\beta_i\) can be calculated directly in the original space of selected features, and it can be used as a good initial solution of the iterative algorithm (Yang et al. 2013). Note that since the P obtained via the first iteration is a 0–1 matrix, some values of features (corresponding to \( j\ne {k}_i^{*} \)) are equal to zero in the second iteration. Thus, it is meaningless to compute the coefficient vector \(\beta_i\) for features whose values are equal to zero. In other words, P remains stable after the first iteration. Thus, we give a non-iterative version of Algorithm 1, i.e., Algorithm 2, where we compute \(\beta_i\) in the original space as $$ \min_{\beta_i} \|x_i - D_i\beta_i\|_F^2 + \lambda\|\beta_i\|_1 $$ Some standard convex optimization techniques or the TNIPM in (Kim et al. 2007) can be used to solve for \(\beta_i\). In our experiments, we directly use the source code provided by the authors in (Kim et al. 2007).

Algorithm 1: Iterative procedure for sparsity preserving score

Algorithm 2: Non-iterative procedure for sparsity preserving score
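The following is a minimal Python sketch of the non-iterative procedure (Algorithm 2), not the authors' released code. Scikit-learn's Lasso is used here as a stand-in for the TNIPM l1 solver, and its regularization parameter is scaled differently from the λ in the equation above; the function names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso   # stand-in for the TNIPM l1 solver

def sparsity_preserving_score(X, n_select, lam=0.01):
    """X is m x N (features x samples).  Returns the indices of the n_select
    features with the smallest sparse-reconstruction residual scores."""
    m, N = X.shape
    Gamma = np.empty((m, N))
    for i in range(N):
        D_i = np.delete(X, i, axis=1)                # dictionary without sample i
        # l1-regularized reconstruction of x_i over the remaining samples;
        # note that sklearn's Lasso scales the quadratic term by 1/(2 n_samples),
        # so `lam` is not identical to the lambda of the paper's objective.
        beta = Lasso(alpha=lam, fit_intercept=False,
                     max_iter=10000).fit(D_i, X[:, i]).coef_
        Gamma[:, i] = X[:, i] - D_i @ beta           # residual gamma_i = x_i - D_i beta_i
    scores = np.sum(Gamma**2, axis=1)                # Score(k) = sum_j Gamma(k, j)^2
    return np.argsort(scores)[:n_select]             # keep the n smallest scores
```

The selected feature indices can then be fed to any downstream classifier, such as the 1-nearest-neighbor classifier used in the experiments below.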
Experiments

Several experiments on the Yale and ORL face datasets are carried out to demonstrate the efficiency and effectiveness of our algorithm. In our experiments, the samples are not pre-processed. Our algorithm is an unsupervised method, and thus we compare our Algorithm 2 with four other representative unsupervised feature selection algorithms: data variance, Laplacian score, feature selection for multi-cluster data (MCFS) (Cai et al. 2010), and spectral feature selection (SPEC) (Zhao & Liu 2007) with all the eigenvectors of the graph Laplacian. In all the tests, the number of nearest neighbors in Laplacian score, MCFS, and SPEC is taken to be half of the number of training images per person. For both datasets, we choose the first five and six images, respectively, per person for training and the rest for testing. After feature selection, the recognition is performed by the L2-distance based 1-nearest-neighbor classifier. Table 1 reports the top performance as well as the corresponding number of features selected, and Fig. 1 illustrates the recognition rate as a function of the number of features selected. As shown in Table 1, our algorithm reaches the highest or comparable recognition rate at the lowest dimension of the selected feature space. From Fig. 1, we can see that with only a very small number of features, SPS can achieve significantly better recognition rates than the other methods. This can be interpreted from two aspects: (1) SPS jointly selects features and obtains the optimal solution of a binary transformation matrix, while the other methods only add features one by one; thus, SPS considers the interaction and dependency among features. (2) Features selected with sparse reconstructive relationship preserving are capable of enhancing recognition performance.

Table 1: The comparison of the top recognition rates and the corresponding number of features selected

Fig. 1 (caption): Recognition results of the feature selection methods with respect to the number of selected features on (a-1, a-2) Yale and (b-1, b-2) ORL.

We randomly choose five and six images, respectively, per person for training and the rest for testing. Since the training set is randomly chosen, we repeat this experiment ten times and calculate the average result. The average top performances obtained are reported in Table 2. The results further verify that SPS can select a more informative feature subset.

Table 2: The comparison of average top recognition rates

Conclusions

This paper addresses the problem of how to select features with the power of sparse reconstructive relationship preserving. In theory, we prove our feature subset is the optimal solution in closed form if the sparse representation vectors are fixed. Experiments are done on the ORL and Yale face image databases, and the results demonstrate that our proposed sparsity preserving score is more effective than data variance, Laplacian score, MCFS, and SPEC.

References

Cai, D., Zhang, C., He, X.: Unsupervised feature selection for multi-cluster data. In: International Conference on Knowledge Discovery and Data Mining. ACM, Washington, DC, USA (2010) Candès E, Romberg J, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59(8):1207–1223 Chen S, Donoho D, Saunders M (2001) Atomic decomposition by basis pursuit. SIAM Rev 43(1):129–159 Clemmensen L, Hastie T, Witten D, Ersboll B (2011) Sparse discriminant analysis. Technometrics 53(4):406–413 Devijver, P. A., Kittler, J (1982) Pattern recognition: a statistical approach. Prentice-Hall, Englewood Cliffs, London Donoho D (2006) For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Commun Pure Appl Math 59(6):797–829 Duda R, Hart P, Stork D (2001) Pattern classification. John Wiley & Sons, New York Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. J Mach Learn Res 3:1157–1182 He, X., Cai, D., Niyogi, P.: Laplacian score for feature selection. In: Advances in neural information processing systems. MIT Press, Cambridge, MA (2005) Jain A, Zongker D (1997) Feature selection: evaluation, application, and small sample performance.
IEEE Trans Pattern Anal Mach Intell 19:153–158 Kim SJ, Koh K, Lustig M, Boyd S, Gorinevsky D (2007) A method for large-scale l1-regularized least squares. IEEE J Sel Topics Signal Process 1(4):606–617 Kira K, Rendell L (1992) A practical approach to feature selection. In: 9th International Workshop on Machine Learning, San Francisco, Morgan Kaufmann 249-256. Kohavi R, John GH (1997) Wrappers for feature subset selection. Artif Intell 92(12):273–324 Ma Z, Nie F, Yang Y, Sebe N (2012b) Web image annotation via subspace-sparsity collaborated feature selection. IEEE Trans Multimedia 14(4):1021–1030 Nie, F. P., Huang, H., Cai, X., Ding, C.: Efficient and robust feature selection via joint \( {l}_{2,1} \)-norms minimization. In: Advances in neural information processing systems, Vancouver, BC, Canada (2010) Peng H, Long F, Ding C (2005) Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans Pattern Anal Mach Intell 27(8):1226–1238 Qiao LS, Chen SC, Tan XY (2010) Sparsity preserving projections with applications to face recognition. Pattern Recogn 43(1):331–341 Wright J, Yang A, Ganesh A, Sastry S, Ma Y (2009) Robust face recognition via sparse representation. IEEE Trans Pattern Anal Mach Intell 31:210–227 Yang, Y., Shen, H., Ma, Z., Huang, Z., Zhou, X.: \( {l}_{2,1} \)-Norm regularized discriminative feature selection for unsupervised learning. In: International Joint Conferences on Artificial Intelligence. Morgan Kaufmann, San Francisco, USA (2011) Yang J, Chu D, Zhang L, Xu Y, Yang JY (2013) Sparse representation classifier steered discriminative projection with applications to face recognition. IEEE Trans Neural Netw Learn Syst 24(7):1023–1035 Yu D, Hu J, Yan H, Yang X, Yang J, Shen H (2014) Enhancing protein-vitamin binding residues prediction by multiple heterogeneous subspace SVMs ensemble. BMC Bioinformatics 15:297 Zhang LM, Chen S, Qiao L (2012) Graph optimization for dimensionality reduction with sparsity constraints. Pattern Recogn 45(3):1205–1210 Zhao Z, Liu H (2007) Spectral feature selection for supervised and unsupervised learning. In: Proceedings of international conference on machine learning. ACM, New York Zhao, Z., Wang, L., Liu, H.: Efficient spectral feature selection with minimum redundancy. In: International Joint Conferences on Artificial Intelligence. Morgan Kaufmann, Georgia, USA (2010)

Acknowledgements

The authors would like to thank the anonymous reviewers for their constructive advice. This work is supported by the National Natural Science Foundation of China (Grant No. 61202134), Jiangsu Planned Projects for Postdoctoral Research Funds, China Planned Projects for Postdoctoral Research Funds, and National Science Fund for Distinguished Young Scholars (Grant No. 61125305).

School of Computer Science and Engineering, Nanjing University of Science and Technology, 210094, China: Hui Yan. Correspondence to Hui Yan.

Yan, H. Sparsity preserving score for feature selection. Appl Inform 2, 8 (2015).
https://doi.org/10.1186/s40535-015-0009-3

Keywords: Feature selection; Binary matrix
Maximal Poisson disk sampling. An improved version of Bridson's Algorithm

Bridson's Algorithm (2007) is a very popular method to produce maximal 'blue noise' point distributions such that no two points are closer than a specified distance apart. In this brief post we show how a minor modification to this algorithm can make it 20x faster and allows it to produce much higher density point distributions.

Figure 1. Poisson disc sampling based on a modified version of Bridson's algorithm. This modified algorithm runs in linear time and is about 20x faster than the original algorithm.

In many applications in graphics, particularly rendering, generating samples from a blue noise distribution is important. Poisson-disc sampling produces points that are tightly packed, but no closer to each other than a specified minimum distance $r_c$, resulting in a more natural-looking pattern (see above figure). To create such maximal Poisson disk distributions, many people use Mitchell's best candidate; however, Bridson's algorithm (2007) is much faster, as it runs in $O(n)$ time rather than $O(n^2)$ for Mitchell's. In this brief post, I show how, by changing only a few lines of its implementation code, we can make Bridson's algorithm even more efficient. This new version is not only ~20x faster, but it also produces higher quality point distributions, as it allows for more tightly packed and consistent point distributions.

Summary of Bridson's algorithm.

Mike Bostock has written two very elegant explanations of how Bridson's algorithm works. In his latest tutorial, he explains how candidate points are generated by sampling from inside an annulus of inner radius $r_c$ and outer radius $2r_c$. Furthermore, he explains how the underlying grid allows extremely fast checking of the validity of each of these candidates. His earlier visualization / tutorial shows the time evolution of the candidates as the point distribution is constructed.

The core of the algorithm depends on uniformly selecting random points (candidates) from an annulus. This can be achieved in the standard way of first selecting a uniform point $(u,v)$ from $[0,1)^2$, and then mapping this point to the annulus. That is, for each of the k candidates around a parent point:

    rOuter = 2 * rInner;
    for (let j = 0; j < k; ++j) {
        u = Math.random();
        v = Math.random();
        theta = 2 * Math.PI * u;
        r = Math.sqrt( rInner**2 + v*(rOuter**2 - rInner**2) );
        x = parent[0] + r * Math.cos(theta);
        y = parent[1] + r * Math.sin(theta);
    }

An improved sampling method

However, my proposed improvement comes directly from the premise that in this very particular and surprisingly unique situation we do not need to uniformly sample from the annulus. Rather, we would prefer to select points closer to the inner radius, as this results in neighboring points closer to $r_c$. It turns out that the following simple change to the sampler provides an extraordinary improvement to its efficiency. (Also, instead of stochastically sampling from an annulus, we are simply progressively sampling from the outside perimeter of the threshold circle.)

    epsilon = 0.0000001;
    seed = Math.random();
    for (let j = 0; j < k; ++j) {
        theta = 2 * Math.PI * (seed + j/k);
        r = rInner + epsilon;
        x = parent[0] + r * Math.cos(theta);
        y = parent[1] + r * Math.sin(theta);
    }

(I suspect that we could eliminate the need for epsilon if we carefully modified some of the inequality signs and/or floating point operations throughout the code implementation.)
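To show how candidates produced this way are accepted or rejected, here is a short Python sketch of the background-grid bookkeeping that makes each validity check effectively constant time. The post's own snippets are JavaScript; this is just a language-agnostic illustration with hypothetical names, assuming each grid cell of side $r_c/\sqrt{2}$ stores at most one accepted point, so only a small neighborhood of cells needs to be examined.

```python
import math, random

def candidates(parent, r, k, eps=1e-7):
    """The modified generator: k points spread evenly around the circle of
    radius r + eps about the parent, with a random rotation per parent."""
    seed = random.random()
    for j in range(k):
        theta = 2 * math.pi * (seed + j / k)
        yield (parent[0] + (r + eps) * math.cos(theta),
               parent[1] + (r + eps) * math.sin(theta))

def point_is_valid(p, r, cell, grid, nx, ny):
    """Accept p only if no previously accepted point lies within r.
    `grid` is an nx-by-ny array of cells (side length cell = r / sqrt(2)),
    each holding either None or one accepted point."""
    gx, gy = int(p[0] / cell), int(p[1] / cell)
    for ix in range(max(gx - 2, 0), min(gx + 3, nx)):
        for iy in range(max(gy - 2, 0), min(gy + 3, ny)):
            q = grid[ix][iy]
            if q is not None and (p[0] - q[0])**2 + (p[1] - q[1])**2 < r * r:
                return False
    return True
```

A full sampler would push each accepted point onto the frontier (active) list and retire a frontier point once all k of its candidates fail, exactly as in Bridson's original method.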
We can again use Bostock's technique to visualize the effect this modification has on the evolution of the frontier of candidates. The left image is based on Bridson's original algorithm. Note the wide and dispersed region of red candidates. Also note how much the grey lines intersect each other. Compare this to the right image, which is based on the new algorithm. In this case, note the much narrower and well-defined region of red candidates. Also note how the grey lines are far more regularly structured and that none of them are intersecting! (Don't forget that these visualizations artificially slow down the calculations in order to make the animations nicer!)

Figure 2. A comparison of the frontier horizons of candidate points between the original Bridson Algorithm (left) and the new modified version (right).

Bridson's algorithm requires setting a configuration parameter $k$, which is the maximum number of candidates to consider for each frontier point. It is generally set to a value around 30 (but sometimes up to 100).

Figure 3. A comparison of output from the original algorithm (left) and the new modified version (right), both using $k=30$.

Intuitively, increasing $k$ increases the potential point density and thus results in a tighter packing. The following table gives an indication of how the total number of output points varies as this parameter $k$ varies. Also, the total number of candidates is included, as it is a useful proxy for CPU time. (Note that these numbers are based on the default dimensions of the implementation by Jason Davies. The intent of showing these numbers is not to focus on the individual numbers per se, but rather their relative values.)

Results for Bridson's Algorithm

$$\begin{array}{|r|rr|} \hline \text{k} & \text{Candidates} &\text{Points} \\ \hline 3 & 43\text{k} & \bf{9,282} \\ \hline 10 & 140\text{k} & \bf{10,862} \\ \hline 100 & 1.4\text{M} &\bf{12,369} \\ \hline 300 & 4.3\text{M} & \bf{12,765} \\ \hline 1000 & 13\text{M} & \bf{13,014} \\ \hline \end{array}$$

For context, all of these numbers are far less than the theoretically maximal number of 21,906 points which could be achieved if the points were distributed in a hexagonal lattice arrangement.

Below are the same data points, but this time for the modified algorithm.

Results for the new Modified Algorithm

$$\begin{array}{|r|rr|} \hline \text{k} & \text{Candidates} & \text{Points} \\ \hline 3 & 66\text{k} & \bf{13,791} \\ \hline \end{array}$$

Thus, for a given value of $k$, the new algorithm results in between 38% and 48% more points than the original algorithm. Said another way, using the smallest value of $k=3$ with the new algorithm will still produce more points (i.e., higher point density) than using the largest value of $k=1000$ with the original algorithm! Furthermore, this can be achieved by testing just 66 thousand candidates rather than 13 million candidates. This is a reduction by a factor of 20!
By Martin Roberts.
Mon, 30 Sep 2019 23:51:28 GMT 3.4: Solving Linear Systems with Three Variables [ "article:topic", "inconsistent system", "license:ccbyncsa", "showtoc:no", "dependent systems" ] Book: Advanced Algebra (Redden) 3: Solving Linear Systems Solutions to Linear Systems with Three Variables Solve Linear Systems with Three Variables by Elimination Dependent and Inconsistent Systems Applications Involving Three Unknowns Skills to Develop Check solutions to linear systems with three variables. Solve linear systems with three variables by elimination. Identify dependent and inconsistent systems. Solve applications involving three unknowns. Real-world applications are often modeled using more than one variable and more than one equation. In this section, we will study linear systems consisting of three linear equations each with three variables. For example, \(\left\{ \begin{array} { l l } { 3 x + 2 y - z = - 7 } & { \color{Cerulean} { (1) } } \\ { 6 x - y + 3 z = - 4 } & { \color{Cerulean} { (2) } } \\ { x + 10 y - 2 z = 2 } & { \color{Cerulean} { (3) } } \end{array} \right.\) A solution to such a linear system is an ordered triple19 \((x, y, z)\) that solves all of the equations. In this case, \((−2, 1, 3)\) is the only solution. To check that an ordered triple is a solution, substitute in the corresponding \(x\)-, \(y\)-, and \(z\)-values and then simplify to see if you obtain a true statement from all three equations. \(\color{Cerulean}{Check:}\color{Black}{(-2,1,3)}\) \(\begin{array} { r } { \text { Equation } \color{Cerulean}{( 1 ) :} } \\ { 3 x + 2 y + z = - 7 } \\ { 3 ( \color{Cerulean}{- 2}\color{Black}{ )} + 2 ( \color{Cerulean}{1}\color{Black}{ )} - ( \color{Cerulean}{3}\color{Black}{ )} = - 7 } \\ { - 6 + 2 - 3 = - 7 } \\ { - 7 = - 7\:\:\color{Cerulean}{✓} } \end{array}\) \(\begin{array} { r } { \text { Equation } \color{Cerulean}{( 2 ) : }} \\ { 6 x -y + 3z = -4 } \\ { 6 ( \color{Cerulean}{- 2}\color{Black}{ )} - (\color{Cerulean}{ 1}\color{Black}{ )} -3 (\color{Cerulean}{ 3}\color{Black}{ )} = - 4 } \\ { - 12 -1 -9 = - 4 } \\ { - 4 = - 4 } \:\:\color{Cerulean}{✓}\end{array}\) \(\begin{array} { r } { \text { Equation }\color{Cerulean}{ ( 3 ) :} } \\ { x +10y -2z = 2 } \\ { ( \color{Cerulean}{- 2}\color{Black}{ )} +10 ( \color{Cerulean}{1}\color{Black}{ )} -2 ( \color{Cerulean}{3}\color{Black}{ )} = 2 } \\ { - 2+10 -6 = 2 } \\ { 2 = 2 } \:\:\color{Cerulean}{✓} \end{array}\) Table 3.4.1 Because the ordered triple satisfies all three equations we conclude that it is indeed a solution. 
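This check is also easy to automate. Here is a purely illustrative sketch in JavaScript (not part of the exercises), where each equation is stored as [a, b, c, d], meaning \(ax + by + cz = d\):

// Illustrative: verify an ordered triple against a 3x3 linear system.
// Each equation is stored as [a, b, c, d], representing a*x + b*y + c*z = d.
// Exact comparison is fine here because all coefficients and coordinates are integers.
function isSolution(system, [x, y, z]) {
  return system.every(([a, b, c, d]) => a * x + b * y + c * z === d);
}

// The system checked above: 3x + 2y - z = -7, 6x - y + 3z = -4, x + 10y - 2z = 2.
const system = [
  [3, 2, -1, -7],
  [6, -1, 3, -4],
  [1, 10, -2, 2],
];
console.log(isSolution(system, [-2, 1, 3]));   // true
console.log(isSolution(system, [0, 0, 0]));    // false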
Example \(\PageIndex{1}\): Determine whether or not \((1, 4, \frac{4}{3}\) is a solution to the following linear system: \(\left\{ \begin{aligned} 9 x + y - 6 z & = 5 \\ - 6 x - 3 y + 3 z & = - 14 \\ 3 x + 2 y - 7 z & = 15 \end{aligned} \right.\) \(\color{Cerulean}{Check:}\color{Black}{(1, 4, \frac{4}{3})}\) \(\begin{array} { r } { \text { Equation } \color{Cerulean}{( 1 ) :} } \\ { 9 x + y - 6 z = 5 } \\ { 9 ( \color{Cerulean}{1}\color{Black}{ )} + ( \color{Cerulean}{4}\color{Black}{ )} - 6 \left( \color{Cerulean}{\frac { 4 } { 3 }} \right) = 5 } \\ { 9 + 4 - 8 = 5 } \\ { 5 = 5 } \:\:\color{Cerulean}{✓}\end{array}\) \(\begin{array} { r } { \text { Equation } \color{Cerulean}{( 2 ) :} } \\ { -6x -3y +3z = -14 } \\ { 6 ( \color{Cerulean}{1}\color{Black}{ )} - 3( \color{Cerulean}{4}\color{Black}{ )}+ 3 \left( \color{Cerulean}{\frac { 4 } { 3 }} \right) = -14 } \\ { -6-12+4 = -14 } \\ { -14= -14 } \:\:\color{Cerulean}{✓}\end{array}\) \(\begin{array} { r } { \text { Equation } \color{Cerulean}{( 3 ) :} } \\ { 3x+2y-7z = 15 } \\ { 3 ( \color{Cerulean}{1}\color{Black}{ )} +2( \color{Cerulean}{4}\color{Black}{ )}-7 \left( \color{Cerulean}{\frac { 4 } { 3 }} \right) = 15 } \\ { 3+8-\frac{28}{3} = 15 } \\ { \frac{5}{3}= 15 } \:\:\color{red}{X}\end{array}\) The point does not satisfy all of the equations and thus is not a solution. An ordered triple such as \((2, 4, 5)\) can be graphed in three-dimensional space as follows: Figure 3.4.1 The ordered triple indicates position relative to the origin \((0, 0, 0)\), in this case, \(2\) units along the \(x\)-axis, \(4\) units parallel to the \(y\)-axis, and \(5\) units parallel to the \(z\)-axis. A linear equation with three variables20 is in standard form if \(ax+by+cz=d\) where \(a, b, c\), and \(d\) are real numbers. For example, \(6x + y + 2z = 26\) is in standard form. Solving for \(z\), we obtain \(z = −3x − \frac{1}{2} y + 13\) and can consider both \(x\) and \(y\) to be the independent variables. When graphed in three-dimensional space, its graph will form a straight flat surface called a plane21. Therefore, the graph of a system of three linear equations and three unknowns will consist of three planes in space. If there is a simultaneous solution, the system is consistent and the solution corresponds to a point where the three planes intersect. Graphing planes in three-dimensional space is not within the scope of this textbook. However, it is always important to understand the geometric interpretation. Exercise \(\PageIndex{1}\) Determine whether or not \((3, −1, 2)\) a solution to the system: \(\left\{ \begin{array} { l } { 2 x - 3 y - z = 7 } \\ { 3 x + 5 y - 3 z = - 2 } \\ { 4 x - y + 2 z = 17 } \end{array} \right.\) Yes, it is a solution. http://www.youtube.com/v/2UET4LzXoYg In this section, the elimination method is used to solve systems of three linear equations with three variables. The idea is to eliminate one of the variables and resolve the original system into a system of two linear equations, after which we can then solve as usual. The steps are outlined in the following example. Solve: \(\left\{ \begin{array} { l l } { 3 x + 2 y - z = - 7 } & { \color{Cerulean}{(1)} } \\ { 6 x - y + 3 z = - 4 } & { \color{Cerulean} { (2) } } \\ { x + 10 y - 2 z = 2 } & { \color{Cerulean} { (3) } } \end{array} \right.\) All three equations are in standard form. If this were not the case, it would be a best practice to rewrite the equations in standard form before beginning this process. Step 1: Choose any two of the equations and eliminate a variable. 
In this case, we can line up the variable \(z\) to eliminate if we group \(3\) times the first equation with the second equation. Next, add the equations together. \(\begin{aligned} 9 x + 6 y \color{red}{- 3 z}&\color{black}{ =} 21 \\ \pm 6 x - y \color{red}{+ 3 z}&\color{black}{ =} - 4 \\ \hline \\ 15x + 5y &= -25 \color{OliveGreen}{✓} \end{aligned}\) Step 2: Choose any other two equations and eliminate the same variable. We can line up \(z\) to eliminate again if we group \(−2\) times the first equation with the third equation. And then add, \(\begin{aligned} - 6 x - 4 y \color{red}{+ 2 z}&\color{black}{ =} 14 \\ \pm x + 10 y \color{red}{- 2 z} &\color{black}{=} 2 \\ \hline \\-5x + 6y& = 16 \color{OliveGreen}{✓}\end{aligned}\) Step 3: Solve the resulting system of two equations with two unknowns. Here we solve by elimination. Multiply the second equation by \(3\) to line up the variable \(x\) to eliminate. \(\begin{aligned} \color{red}{15 x}\color{black}{ +} 5 y &= - 25 \\ \pm \color{red}{- 15 x}\color{black}{ +} 18 y &= 48 \\ \hline\\23y&=23\\y&=1 \end{aligned}\) Step 4: Back substitute and determine all of the coordinates. To find x use the following, \(\begin{aligned} 15 x + 5 y & = - 25 \\ 15 x + 5 ( \color{OliveGreen}{1}\color{black}{ )} & = - 25 \\ 15 x & = - 30 \\ x & = - 2 \end{aligned}\) Now choose one of the original equations to find \(z\), \(\begin{aligned} 3 x + 2 y - z & = - 7 \quad\color{Cerulean}{(1)} \\ 3 ( \color{OliveGreen}{- 2}\color{Black}{ )} + 2 ( \color{OliveGreen}{1}\color{Black}{ )} - z & = - 7 \\ - 6 + 2 - z & = - 7 \\ - 4 - z & = - 7 \\ - z & = - 3 \\ z & = 3 \end{aligned}\) Hence the solution, presented as an ordered triple \((x, y, z)\), is \((−2, 1, 3)\). This is the same system that we checked in the beginning of this section. \((-2,1,3)\) It does not matter which variable we initially choose to eliminate, as long as we eliminate it twice with two different sets of equations. Solve: \(\left\{ \begin{array} { c } { - 6 x - 3 y + 3 z = - 14 } \\ { 9 x + y - 6 z = 5 } \\ { 3 x + 2 y - 7 z = 15 } \end{array} \right.\). Because \(y\) has coefficient \(1\) in the second equation, choose to eliminate this variable. Use equations \(1\) and \(2\) to eliminate \(y\). Next use the equations \(2\) and \(3\) to eliminate \(y\) again. This leaves a system of two equations with two variables \(x\) and \(z\), \(\left\{ \begin{array} { l } { 21 x - 15 z = 1 } \\ { - 15 x + 5 z = 5 } \end{array} \right.\) Multiply the second equation by \(3\) and eliminate the variable \(z\). Now back substitute to find \(z\). \(\begin{aligned} 21 x - 15 z & = 1 \\ 21 \left( \color{Cerulean}{- \frac { 2 } { 3} } \right) - 15 z & = 1 \\ - 14 - 15 z & = 1 \\ - 15 z & = 15 \\ z & = - 1 \end{aligned}\) Finally, choose one of the original equations to find \(y\). \(\begin{aligned} - 6 x - 3 y + 3 z & = - 14 \\ - 6 \left( \color{Cerulean}{- \frac { 2 } { 3} } \right) - 3 y + 3 ( \color{Cerulean}{- 1}\color{Black}{ )} & = - 14 \\ 4 - 3 y - 3 & = - 14 \\ 1 - 3 y & = - 14 \\ - 3 y & = - 15 \\ y & = 5 \end{aligned}\) \(\left( - \frac { 2 } { 3 } , 5 , - 1 \right)\) Solve: \(\left\{ \begin{array} { l } { 2 x + 6 y + 7 z = 4 } \\ { - 3 x - 4 y + 5 z = 12 } \\ { 5 x + 10 y - 3 z = - 13 } \end{array} \right.\). In this example, there is no obvious choice of variable to eliminate. We choose to eliminate \(x\). 
\(\begin{array} { l } { \color{Cerulean}{ (1) } } \\ { \color{Cerulean}{ (2) } } \end{array}\)\(\left \{ \begin{array} { l l } { 2 x + 6y + 7z = 4 } & { \stackrel{ \times3 } { \Rightarrow } } \\ { -3 x -4y +5z = 12 } & { \underset { \times 2 } { \Rightarrow }} \end{array} \right. \left\{ \begin{array} { l } { 6 x + 18 y + 21z = 12 } \\ { -6x -8y +10z = 24 } \end{array} \\\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad10y + 31z=36 \color{OliveGreen}{✓} \right.\) Next use equations \(2\) and \(3\) to eliminate \(x\) again. \(\begin{array} { l } { \color{Cerulean}{ (2) } } \\ { \color{Cerulean}{ (3) } } \end{array}\)\(\left \{ \begin{array} { l l } { -3 x -4y + 5z = 12 } & { \stackrel{ \times5 } { \Rightarrow } } \\ { 5 x +10y -3z = -13 } & { \underset { \times 3 } { \Rightarrow }} \end{array} \right. \left\{ \begin{array} { l } { -15 x -20 y + 25z = 60 } \\ { 15x +30y -9z = -39 }\end{array} \\\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad10y + 16z=21 \color{OliveGreen}{✓} \right.\) This leaves a system of two equations with two variables \(y\) and \(z\), \(\left\{ \begin{array} { l } { 10 y + 31 z = 36 } \\ { 10 y + 16 z = 21 } \end{array} \right.\) Multiply the first equation by \(−1\) as a means to eliminate the variable \(y\). Figure 3.4.10 Now back substitute to find \(y\). \(\begin{aligned} 10 y + 31 z & = 36 \\ 10 y + 31 ( \color{OliveGreen}{1}\color{Black}{ )} & = 36 \\ 10 y + 31 & = 36 \\ 10 y & = 5 \\ y & = \frac { 5 } { 10 } \\ y & = \frac { 1 } { 2 } \end{aligned}\) Choose any one of the original equations to find \(x\). \(\begin{aligned} 2 x + 6 y + 7 z & = 4 \\ 2 x + 6 \left( \color{OliveGreen}{\frac { 1 } { 2 }} \right) + 7 ( \color{OliveGreen}{1}\color{Black}{ )} & = 4 \\ 2 x + 3 + 7 & = 4 \\ 2 x + 10 & = 4 \\ 2 x & = - 6 \\ x & = - 3 \end{aligned}\) \(\left( - 3 , \frac { 1 } { 2 } , 1 \right)\) Solve: \(\left\{ \begin{array} { l } { 2 x - 3 y - z = 7 } \\ { 3 x + 5 y - 3 z = - 2 } \\ { 4 x - y + 2 z = 17 } \end{array} \right.\) \((3, -1, 2)\) http://www.youtube.com/v/CjSv8D3g2Ic Just as with linear systems with two variables, not all linear systems with three variables have a single solution. Sometimes there are no simultaneous solutions. Solve the system: \(\left\{ \begin{aligned} 4 x - y + 3 z & = 5 \\ 21 x - 4 y + 18 z & = 7 \\ - 9 x + y - 9 z & = - 8 \end{aligned} \right.\). In this case we choose to eliminate the variable \(y\). \(\begin{array} { l } { \color{Cerulean}{ (1) } } \\ { \color{Cerulean}{ (3) } } \end{array}\)\(\left \{ \begin{array} { l l } { 4 x -y + 3z = 5 } & \\ { -9 x +y -9z = -8 } & \end{array} \right. \\\quad-5x -6z=-3 \color{OliveGreen}{✓} \) Next use equations \(2\) and \(3\) to eliminate \(y\) again. \(\left\{ \begin{array} { l } { - 5 x - 6 z = - 3 } \\ { - 15 x - 18 z = - 25 } \end{array} \right.\) Multiply the first equation by \(-3\) and eliminate the variable \(z\). Adding the resulting equations together leads to a false statement, which indicates that the system is inconsistent. There is no simultaneous solution. \(\varnothing\) Just as with linear systems with two variables, some linear systems with three variables have infinitely many solutions. Such systems are called dependent systems. Solve the system: \(\left\{ \begin{array} { c } { 7 x - 4 y + z = - 15 } \\ { 3 x + 2 y - z = - 5 } \\ { 5 x + 12 y - 5 z = - 5 } \end{array} \right.\). Eliminate \(z\) by adding the first and second equations together. 
\(\begin{array} { l } { \color{Cerulean}{ (1) } } \\ { \color{Cerulean}{ (3) } } \end{array}\)\(\left \{ \begin{array} { l l } { 7 x -4y + z = -15 } & \\ { 3 x +2y -z = -5 } & \end{array} \right. \\\quad10x -2y=-20 \color{OliveGreen}{✓} \) Next use equations \(1\) and \(3\) to eliminate \(z\) again. This leaves a system of two equations with two variables \(x\) and \(y\), \(\left\{ \begin{array} { l } { 10 x - 2 y = - 20 } \\ { 40 x - 8 y = - 80 } \end{array} \right.\) Line up the variable \(y\) to eliminate by dividing the first equation by \(2\) and the second equation by \(−8\). \(\left \{ \begin{array} { l l } { 10 x -2y = -20 } & { \stackrel{ \div 2} { \Longrightarrow } } \\ { 40 x -8y = -80 } & { \underset { \div (-8) } { \Longrightarrow }} \end{array} \right. \left\{ \begin{array} { l } { 5 x - y = -10 } \\ { -5x +y = 10 }\end{array} \\\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad 0=0 \:\:\color{Cerulean}{True} \right.\) A true statement indicates that the system is dependent. To express the infinite number of solutions \((x, y,z)\) in terms of one variable, we solve for \(y\) and \(z\) both in terms of \(x\). \(\begin{aligned} 10 x - 2 y & = - 20 \\ - 2 y & = - 10 x - 20 \\ \frac { - 2 y } { - 2 } & = \frac { - 10 x - 20 } { - 2 } \\ y & = 5 x + 10 \end{aligned}\) Once we have \(y\) in terms of \(x\), we can solve for \(z\) in terms of \(x\) by back substituting into one of the original equations. \(\begin{aligned} 7 x - 4 y + z & = - 15 \\ 7 x - 4 ( \color{OliveGreen}{5 x + 10}\color{black}{ )} + z & = - 15 \\ 7 x - 20 x - 40 + z & = - 15 \\ - 13 x - 40 + z & = - 15 \\ z & = 13 x + 25 \end{aligned}\) \(( x , 5 x + 10,13 x + 25 )\) A consistent system with infinitely many solutions is a dependent system. Given three planes, infinitely many simultaneous solutions can occur in a number of ways. Solve: \(\left\{ \begin{aligned} 7 x + y - 2 z & = - 4 \\ - 21 x - 7 y + 8 z & = 4 \\ 7 x + 3 y - 3 z & = 0 \end{aligned} \right.\) \(\left( x , \frac { 7 } { 3 } x + 4 , \frac { 14 } { 3 } x + 4 \right)\) http://www.youtube.com/v/WgGAtNiJMlI Many real-world applications involve more than two unknowns. When an application requires three variables, we look for relationships between the variables that allow us to write three equations. A community theater sold \(63\) tickets to the afternoon performance for a total of \($444\). An adult ticket cost \($8\), a child ticket cost \($4\), and a senior ticket cost \($6\). If twice as many tickets were sold to adults as to children and seniors combined, how many of each ticket were sold? Begin by identifying three variables. Let \(x\) represent the number of adult tickets sold. Let \(y\) represent the number of child tickets sold. Let \(z\) represent the number of senior tickets sold. The first equation comes from the statement that \(63\) tickets were sold. \(\color{Cerulean}{(1)}\quad \color{black}{x}+y+z=63\) The second equation comes from total ticket sales. \(\color{Cerulean}{(2)}\quad \color{black}{8}x+4y+6z=444\) The third equation comes from the statement that twice as many adult tickets were sold as child and senior tickets combined. \(\begin{aligned} x & = 2 ( y + z ) \\ x & = 2 y + 2 z \\ \color{Cerulean}{( 3 )} \quad\color{black}{ x} - 2 y - 2 z & = 0 \end{aligned}\) Therefore, the problem is modeled by the following linear system. \(\left\{ \begin{array} { c } { x + y + z = 63 } \\ { 8 x + 4 y + 6 z = 444 } \\ { x - 2 y - 2 z = 0 } \end{array} \right.\) Solving this system is left as an exercise. 
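For reference, one possible elimination route is sketched below; the full details are left to the reader.

\(\begin{aligned} x & = 2(y+z) && \color{Cerulean}{(3)} \\ x + (y+z) & = 63 \Rightarrow x + \tfrac{1}{2}x = 63 \Rightarrow x = 42, \quad y + z = 21 && \color{Cerulean}{(1)} \\ 8(42) + 4y + 6z & = 444 \Rightarrow 4y + 6z = 108 \Rightarrow 2y + 3z = 54 && \color{Cerulean}{(2)} \\ 2(21 - z) + 3z & = 54 \Rightarrow 42 + z = 54 \Rightarrow z = 12, \quad y = 9 \end{aligned}\)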
The solution is \((42, 9, 12)\). The theater sold \(42\) adult tickets, \(9\) child tickets, and \(12\) senior tickets. A simultaneous solution to a linear system with three equations and three variables is an ordered triple \((x, y, z)\) that satisfies all of the equations. If it does not solve each equation, then it is not a solution. We can solve systems of three linear equations with three unknowns by elimination. Choose any two of the equations and eliminate a variable. Next choose any other two equations and eliminate the same variable. This will result in a system of two equations with two variables that can be solved by any method learned previously. If the process of solving a system leads to a false statement, then the system is inconsistent and has no solution. If the process of solving a system leads to a true statement, then the system is dependent and has infinitely many solutions. To solve applications that require three variables, look for relationships between the variables that allow you to write three linear equations. Determine whether or not the given ordered triple is a solution to the given system. 1. \((3, -2, -1)\); \(\left\{ \begin{array} { c } { x + y - z = 2 } \\ { 2 x - 3 y + 2 z = 10 } \\ { x + 2 y + z = - 3 } \end{array} \right.\) 2. \((-8, -1, 5)\); \(\left\{ \begin{array} { c } { x + 2 y - z = - 15 } \\ { 2 x - 6 y + 2 z = 0 } \\ { 3 x - 9 y + 4 z = 5 } \end{array} \right.\) 3. \((1, -9, 2)\); \( \left\{ \begin{array} { c } { 8 x + y - z = - 3 } \\ { 7 x - 2 y - 3 z = 19 } \\ { x - y + 9 z = 28 } \end{array} \right.\) 4. \((-4, 1, -3)\); \(\left\{ \begin{array} { l } { 3 x + 2 y - z = - 7 } \\ { x - 5 y + 2 z = 3 } \\ { 2 x + y + 3 z = - 16 } \end{array} \right.\) 5. \(\left( 6 , \frac { 2 } { 3 } , - \frac { 1 } { 2 } \right)\); \(\left\{ \begin{array} { c } { x + 6 y - 4 z = 12 } \\ { - x + 3 y - 2 z = - 3 } \\ { x - 9 y + 8 z = - 4 } \end{array} \right.\) 6. \(\left( \frac { 1 } { 4 } , - 1 , - \frac { 3 } { 4 } \right)\); \(\left\{ \begin{array} { r } { 2 x - y - 2 z = 3 } \\ { 4 x + 5 y - 8 z = 2 } \\ { x - 2 y - z = 3 } \end{array} \right.\) \(\left\{ \begin{array} { c } { 4 x - 5 y = 22 } \\ { 2 y - z = 8 } \\ { - 5 x + 2 z = - 13 } \end{array} \right.\) \(\left\{ \begin{aligned} 2 y - 6 z & = 8 \\ 3 x - 4 z & = 5 \\ 18 z & = - 9 \end{aligned} \right.\) 9. \(\left( \frac { 1 } { 2 } , - 2,6 \right)\); \(\left\{ \begin{array} { c } { a - b + c = 9 } \\ { 4 a - 2 b + c = 14 } \\ { 2 a + b + \frac { 1 } { 2 } c = 3 } \end{array} \right.\) 10. 
\(( - 1,5 , - 7 )\); \(\left\{ \begin{array} { l } { 3 a + b + \frac { 1 } { 3 } c = - \frac { 1 } { 3 } } \\ { 8 a + 2 b + \frac { 1 } { 2 } c = - \frac { 3 } { 2 } } \\ { 25 a + 5 b + c = - 7 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 2 x - 3 y + z = 4 } \\ { 5 x + 2 y + 2 z = 2 } \\ { x + 4 y - 3 z = 7 } \end{array} \right.\) \(\left\{ \begin{array} { c } { 5 x - 2 y + z = - 9 } \\ { 2 x + y - 3 z = - 5 } \\ { 7 x + 3 y + 2 z = 6 } \end{array} \right.\) \(\left\{ \begin{array} { c } { x + 5 y - 2 z = 15 } \\ { 3 x - 7 y + 4 z = - 7 } \\ { 2 x + 4 y - 3 z = 21 } \end{array} \right.\) \(\left\{ \begin{array} { c } { x - 4 y + 2 z = 3 } \\ { 2 x + 3 y - 3 z = 9 } \\ { 3 x + 2 y + 4 z = - 1 } \end{array} \right.\) \(\left\{ \begin{array} { c } { 5 x + 4 y - 2 z = - 5 } \\ { 4 x - y + 3 z = 14 } \\ { 6 x + 3 y - 5 z = - 12 } \end{array} \right.\) \(\left\{ \begin{array} { c } { 2 x + 3 y - 2 z = - 4 } \\ { 3 x + 5 y + 3 z = 17 } \\ { 2 x + y - 4 z = - 8 } \end{array} \right.\) \(\left\{ \begin{array} { c } { x + y - 4 z = 1 } \\ { 9 x - 3 y + 6 z = 2 } \\ { - 6 x + 2 y - 4 z = - 2 } \end{array} \right.\) \(\left\{ \begin{aligned} 5 x - 8 y + z & = 5 \\ - 3 x + 5 y - z & = - 3 \\ - 11 x + 18 y - 3 z & = - 5 \end{aligned} \right.\) \(\left\{ \begin{aligned} x - y + 2 z & = 3 \\ 2 x - y + 3 z & = 2 \\ - x - 3 y + 4 z & = 1 \end{aligned} \right.\) \(\left\{ \begin{array} { c } { x + y + z = 8 } \\ { x - y + 4 z = - 7 } \\ { - x - y + 2 z = 1 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 4 x - y + 2 z = 3 } \\ { 6 x + 3 y - 4 z = - 1 } \\ { 3 x - 2 y + 3 z = 4 } \end{array} \right.\) \(\left\{ \begin{array} { c } { x - 4 y + 6 z = - 1 } \\ { 3 x + 8 y - 2 z = 2 } \\ { 5 x + 2 y - 3 z = - 5 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 3 x - 4 y - z = 7 } \\ { 5 x - 8 y + 3 z = 11 } \\ { 2 x + 6 y + z = 9 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 3 x + y - 4 z = 6 } \\ { 6 x - 5 y + 3 z = 1 } \\ { 9 x + 3 y - 4 z = 10 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 7 x - 6 y + z = 8 } \\ { - x + 2 y - z = 4 } \\ { x + 2 y - 2 z = 14 } \end{array} \right.\) \(\left\{ \begin{array} { l } { - 9 x + 3 y + z = 3 } \\ { 12 x - 4 y - z = 2 } \\ { - 6 x + 2 y + z = 8 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 3 x - 5 y - 4 z = - 5 } \\ { 4 x - 6 y + 3 z = - 22 } \\ { 6 x + 8 y - 5 z = 20 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 7 x + 4 y - 2 z = 8 } \\ { 2 x + 2 y + 3 z = - 4 } \\ { 3 x - 6 y - 7 z = 8 } \end{array} \right.\) \(\left\{ \begin{array} { c } { 9 x + 7 y + 4 z = 8 } \\ { 4 x - 5 y - 6 z = - 11 } \\ { - 5 x + 2 y + 3 z = 4 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 3 x + 7 y + 2 z = - 7 } \\ { 5 x + 4 y + 3 z = 5 } \\ { 2 x - 3 y + 5 z = - 4 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 4 x - 3 y = 1 } \\ { 2 y - 3 z = 2 } \\ { 3 x + 2 z = 3 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 5 y - 3 z = - 28 } \\ { 3 x + 2 y = 8 } \\ { 4 y - 7 z = - 27 } \end{array} \right.\) \(\left\{ \begin{aligned} 2 x + 3 y + z & = 1 \\ 6 y + z & = 4 \\ 2 z & = - 4 \end{aligned} \right.\) \(\left\{ \begin{aligned} x - 3 y - 2 z & = 5 \\ 2 y + 6 z & = - 1 \\ 4 z & = - 6 \end{aligned} \right.\) \(\left\{ \begin{aligned} 2 x & = 10 \\ 6 x - 5 y & = 30 \\ 3 x - 4 y - 2 z & = 3 \end{aligned} \right.\) \(\left\{ \begin{array} { c } { 2 x + 7 z = 2 } \\ { - 4 y = 6 } \\ { 8 y + 3 z = 0 } \end{array} \right.\) \(\left\{ \begin{array} { c } { 5 x + 7 y + 2 z = 4 } \\ { 12 x + 16 y + 4 z = 15 } \\ { 10 x + 
13 y + 3 z = 14 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 8 x + 12 y - 8 z = 5 } \\ { 2 x + 3 y - 2 z = 2 } \\ { 4 x - 2 y + 5 z = - 1 } \end{array} \right.\) \(\left\{ \begin{array} { c } { 17 x - 4 y - 3 z = - 2 } \\ { 5 x + \frac { 1 } { 2 } y - 2 z = - \frac { 9 } { 2 } } \\ { 2 x + 5 y - 4 z = - 13 } \end{array} \right.\) \(\left\{ \begin{aligned} 3 x - 5 y - \frac { 1 } { 2 } z & = \frac { 7 } { 2 } \\ x - y - \frac { 1 } { 2 } z & = - \frac { 1 } { 2 } \\ 3 x - 8 y + z & = 11 \end{aligned} \right.\) \(\left\{ \begin{array} { l } { 4 a - 2 b + 3 c = 9 } \\ { 3 a + 3 b - 5 c = - 6 } \\ { 10 a - 6 b + 5 c = 13 } \end{array} \right.\) \(\left\{ \begin{array} { l } { 6 a - 2 b + 5 c = - 2 } \\ { 4 a + 3 b - 3 c = - 1 } \\ { 3 a + 5 b + 6 c = 24 } \end{array} \right.\) 1. \((2, -1, -3)\) 3. \((4, 1, -3)\) 5. \((1, -1, 3)\) 7. \(\varnothing\) 9. \((5, -10, -6)\) 11. \(\left( \frac { 1 } { 2 } , - 2 , - \frac { 1 } { 2 } \right)\) 13. \(\left( 3 , \frac { 1 } { 2 } , 0 \right)\) 15. \(\left( x , \frac { 3 } { 2 } x - 3,2 x - 10 \right)\) 17. \((1, -2, 6)\) 19. \((-1, 2, -2)\) 23. \((1, 1, 0)\) 25. \((0, 1, -2)\) 29. \(\varnothing\) 31. \(( x , 2 x - 1,3 x + 2 )\) Set up a system of equations and use it to solve the following. The sum of three integers is \(38\). Two less than \(4\) times the smaller integer is equal to the sum of the others. The sum of the smaller and larger integer is equal to \(2\) more than twice that of the other. Find the integers. The sum of three integers is \(40\). Three times the smaller integer is equal to the sum of the others. Twice the larger is equal to \(8\) more than the sum of the others. Find the integers. The sum of the angles \(A, B\), and \(C\) of a triangle is \(180°\). The larger angle \(C\) is equal to twice the sum of the other two. Four times the smallest angle \(A\) is equal to the difference of angle \(C\) and \(B\). Find the angles. The sum of the angles \(A, B\), and \(C\) of a triangle is \(180°\). Angle \(C\) is equal to the sum of the other two angles. Five times angle \(A\) is equal to the sum of angle \(C\) and \(B\). Find the angles. A total of \($12,000\) was invested in three interest earning accounts. The interest rates were \(2\)%, \(4\)%, and \(5\)%. If the total simple interest for one year was \($400\) and the amount invested at \(2\)% was equal to the sum of the amounts in the other two accounts, then how much was invested in each account? Joe invested his \($6,000\) bonus in three accounts earning \(4 \frac{1}{2}\)% interest. He invested twice as much in the account earning \(4 \frac{1}{2}\)% as he did in the other two accounts combined. If the total simple interest for the year was \($234\), how much did Joe invest in each account? A jar contains nickels, dimes, and quarters. There are \(105\) coins with a total value of \($8.40\). If there are \(3\) more than twice as many dimes as quarters, find how many of each coin are in the jar. A billfold holds one-dollar, five-dollar, and ten-dollar bills and has a value of \($210\). There are \(50\) bills total where the number of one-dollar bills is one less than twice the number of five-dollar bills. How many of each bill are there? A nurse wishes to prepare a \(15\)-ounce topical antiseptic solution containing \(3\)% hydrogen peroxide. To obtain this mixture, purified water is to be added to the existing \(1.5\)% and \(10\)% hydrogen peroxide products. 
If only \(3\) ounces of the \(10\)% hydrogen peroxide solution is available, how much of the \(1.5\)% hydrogen peroxide solution and water is needed? A chemist needs to produce a \(32\)-ounce solution consisting of \(8 \frac{3}{4}\)% acid. He has three concentrates with \(5\)%, \(10\)%, and \(40\)% acid. If he is to use twice as much of the \(5\)% acid solution as the \(10\)% solution, then how many ounces of the \(40\)% solution will he need? A community theater sold \(128\) tickets to the evening performance for a total of \($1,132\). An adult ticket cost \($10\), a child ticket cost \($5\), and a senior ticket cost \($6\). If three times as many tickets were sold to adults as to children and seniors combined, how many of each ticket were sold? James sold \(82\) items at the swap meet for a total of \($504\). He sold packages of socks for \($6\), printed t-shirts for \($12\), and hats for \($5\). If he sold \(5\) times as many hats as he did t-shirts, how many of each item did he sell? A parabola passes through three points \((−1, 7), (1, −1)\) and \((2, −2)\). Use these points and \(y = a x ^ { 2 } + b x + c\) to construct a system of three linear equations in terms of \(a, b\), and \(c\) and then solve the system. A parabola passes through three points \((−2, 11), (−1, 4)\) and \((1, 2)\). Use these points and \(y = a x ^ { 2 } + b x + c\) to construct a system of three linear equations in terms of \(a, b\), and \(c\) and solve it. 1. \(8, 12, 18\) 3. \(A = 20 ^ { \circ } , B = 40 ^ { \circ } , \text { and } C = 120 ^ { \circ }\) 5. The amount invested at \(2\)% was \($6,000\), the amount invested at \(4\)% was \($2,000\), and the amount invested at \(5\)% was \($4,000\). 7. \(72\) nickels, \(23\) dimes, and \(10\) quarters 9. \(10\) ounces of the \(1.5\)% hydrogen peroxide solution and \(2\) ounces of water 11. \(96\) adult tickets, \(20\) child tickets, and \(12\) senior tickets were sold. 13. \(a = 1, b = −4\), and \(c = 2\) On a note card, write down the steps for solving a system of three linear equations with three variables using elimination. Use your notes to explain to a friend how to solve one of the exercises in this section. Research and discuss curve fitting. Why is curve fitting an important topic? 1. Answer may vary 19Triples \((x, y, z)\) that identify position relative to the origin in three-dimensional space. 20An equation that can be written in the standard form \(ax + by + cz = d\) where \(a, b, c\), and \(d\) are real numbers. 21Any flat two-dimensional surface. 3.3: Applications of Linear Systems with Two Variables 3.5: Matrices and Gaussian Elimination dependent systems inconsistent system
History of Science and Mathematics Stack Exchange is a question and answer site for people interested in the history and origins of science and mathematics. Electromagnetics and vector calculus A friend of mine claims that vector calculus was invented to do electrodynamics. I'm dubious. I know that Maxwell first wrote down the so-called Maxwell's equations in scalar form and only later converted them into their vector forms. Because all of classical electrodynamics is derivable from Maxwell's equations, I don't see the need to develop a branch of mathematics simply to condense the equations of a theory that was pretty much done. Wikipedia oddly does not have a "History" section for vector calculus. So can anyone either confirm or deny this? What is the history of vector calculus? Why was it invented (/discovered)? EDIT: I specifically want to know how E&M may (or may not) have affected the development of vector calculus. An example of something that might be addressed: Apparently Heaviside's The Electrician was written a couple of decades before Gibbs' Vector Analysis -- which apparently is the book that codified the modern notations. How influential was The Electrician on Gibbs's work? mathematics physics calculus VicAche $\begingroup$ possible duplicate of How were vector quantities developed? $\endgroup$ – Alexandre Eremenko $\begingroup$ hsm.stackexchange.com/questions/1925/… $\endgroup$ $\begingroup$ hsm.stackexchange.com/questions/773/… $\endgroup$ $\begingroup$ I read each of those you suggest as a duplicate @Alexandre. The first one is closest, but it does not completely answer my question. In fact it just leads me to more questions, like: How influential was Heaviside's The Electrician on modern vector analysis? Was vector calculus written in its entirely modern form in Gibbs' Vector Analysis -- or was it just mostly complete? If the latter, what was missing? What did Hamilton use quaternions for? Was it solving E&M problems? What are the earliest uses of vector/quaternion calculus for E&M? (Is there such a thing as "quaternion calculus"?) $\endgroup$ $\begingroup$ You can obtain the answers to most of these questions by reading the references given in those answers. $\endgroup$
Wessel and Gauss in 1799 wrote independently on how to represent direction analytically, Buéé and Argand follow up in 1806 with geometrical interpretations for complex numbers, and Warren and Mourey both published in 1828 extensive books describing such representations. Going 3-D: the quaternion Hamilton (we're getting closer to electromagnetism) published a philosophical-ish paper in 1837 in which he expresses his hope to come up with a "theory of triplets" to describe 3-D geometry. In 1843 he finally comes up with a multiplication operation on what is now known as quaternions. He was very happy with that and proceeded to spend the rest of his life writing on quaternions. Hamilton (him again) introduced in a subsequent paper (1846) the terms scalar and vector, to describe the real and imaginary parts of his quaternions. The vector part of the quaternion product of two purely vectorial quaternions is equal to the opposite of what is know the vector/dot product, and the scalar part is what is now known as the cross product. Hamilton died but left a successor to his cause, Tait. Tait published numerous papers on quaternions, including extensive description of the use of operator Nabla which earned him Maxwell's praise as the "Chief Musician upon Nabla". Maxwell, or answering the original question In 1873, Maxwell published his Treatise on Electricity and Magnetism, a paper that had a huge impact on 19th century science. In this paper, Maxwell presents many of his results not only in the then-usual cartesian form, but also in their quaternionic forms. Maxwell defended and advertise the use of quaternions, not only as a practical tool (he seemed to be more at ease with cartesian geometry), but as a more effective way to think space-related quantities. For what I know, Maxwell did not contribute anything to vector calculus, but his endorsement of the then praised, but not used, quaternions in a breakthrough paper allowed vector calculus to become a widespread object in physics. Modern (post-Hamilton) Vector Analysis is mostly based on Gibbs and Heaviside work at the turn of the 20th century, but Maxwell sure contributed a lot to vector calculus by using them in what is, perhaps, the most read paper of the 19th century. This answer is based both for structure and content on a talk that I invite you to read as it's a very entertaining/informative piece of writing. vonbrand VicAcheVicAche $\begingroup$ The second paragraph on quaternions mixes up dot and cross products, e.g., "the scalar part is now what is known as the cross product." $\endgroup$ – KCd According to Thomas Hankin's biography of William Rowan Hamilton, Michael Faraday met with Hamilton in Dublin in 1834. Faraday is famous for his electromagnetic experiments and read about the field ideas of Boscovich. Faraday's experiments suggested to him that they could best be explained by the idea of fields. Faraday realized that field treatment must involve geometry and numerical analysis. Hamilton was known as a mathematical genius and Faraday convinced him of the need for a geometric Algebra. It took Hamilton 9 years, in 1843 he came up with Quaternions and in 1844 biquaternions (complex quaternions)[proceedings of Royal Irish Academy]. Maxwell used quaternions learning from his boyhood friend Peter Guthrie Tait. 
Maxwell wrote to Tait that he wanted to let quaternions "leaven electromagnetism",[Michael Crowe History of Vector Analysis] Unfortunately, Maxwell died young of cancer and Electromagnetism was kidnapped by telegraph operator Oliver Heaviside who hated quaternions, titling one of his book's sections "On the abstrusity of quaternions and the advantages gained in ignoring them". Heaviside with the Help J. Willard Gibbs, truncated the quaternion to make vector analysis. This lead Physics down a wrong path that still exists today in spite of a small number of people championing quaternions since Hamilton's time. Ludwik Silberstein was first to publish the (bi)quaternion version of Maxwell's Equation where all 4 Maxwell's are written as one biquaternion equation in the same paper he gave the quaternion Lorentz transformation [L.Silberstein "The Quaternion form of Relativity" , Philosophical Magazine,1913] The quaternion form of Dirac's equation was discovered by Cornelius Lanzcos 1n 1929. Danu♦ The original 1864 paper of James Clerk Maxwell, "A dynamical theory of the electromagnetic field" contained the whole electrodynamics in the form of 20 nonvectorial equations in 20 variables. Following this paper, efforts were made for casting these equations into a more systematic form. At those times, algebraic quaternions were the unique mathematical instrument available to that aim. J.C. Maxwell tried to work with quaternions, but was not very successful. Anyway, such efforts were merely considered for the beauty of the gest, because the complete theory was already available from Maxwell's original paper. Eventually, Josiah Gibbs and Oliver Heavyside came up with vector analysis, and what today is called "Maxwell's Equations" is in fact a reformulation by Oliver Heavyside of the original equations proposed by J.C. Maxwell. So far for the engineering part of the story. On the other hand, it has rarely been noticed that quaternions are the "square root" of an algebraic number identity: Leonhard Euler's 4-squares identity: $$\begin{align} (a_0 + ia_1 + ja_2 + ka_3)\times(b_0 + ib_1 + jb_2 + kb_3) &= (a_0b_0 - a_1b_1 - a_2b_2 - a_3b_3)\\ &+ i(a_0b_1 + a_1b_0 + a_2b_3 - a_3b_2)\\ &+ j(a_0b_2 - a_1b_3 + a_2b_0 + a_3b_1)\\ &+ k(a_0b_3 + a_1b_2 - a_2b_1 + a_3b_0) \end{align}$$ The square of this product is, $$(a\times b)\times (a\times b)'' = a\times(b\times b'')\times a''$$ and with the following $$q\times q'' = q''\times q = (q_0^2 + q_1^2 + q_2^2 + q_3^2)$$ We have Leonhard Euler's 4-squares-identity: $$\begin{align} (v_0^2 + v_1^2 + v_2^2 + v_3^2)(w_0^2 + w_1^2 + w_2^2 + w_3^2) &= (v_0w_0 - v_1w_1 - v_2w_2 - v_3w_3)^2\\ &+ (v_0w_1 + v_1w_0 + v_2w_3 - v_3w_2)^2\\ &+ (v_0w_2 - v_1w_3 + v_2w_0 + v_3w_1)^2\\ &+ (v_0w_3 + v_1w_2 - v_2w_1 + v_3w_0)^2 \end{align}$$ Electrodynamics using quaternions has thus the advantage of being metaphysically backed by an algebraic number identity: everything is conserved under such transformation. A further advantage is the easy and transparent formulation. 
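A quick numerical spot-check of the quaternion product and the four-square identity quoted above can be done in a few lines; this is an illustrative sketch only, and the helper names are arbitrary.

// Illustrative sketch: the quaternion product as written above, plus a numeric
// check that the norm is multiplicative (Euler's four-square identity).
function qmul(a, b) {                   // a, b are arrays [a0, a1, a2, a3]
  return [
    a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
    a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
    a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
    a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0],
  ];
}
const norm2 = q => q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3];

const v = [2, -1, 3, 5], w = [1, 4, -2, 0.5];
const vw = qmul(v, w);
console.log(norm2(v) * norm2(w));   // 828.75
console.log(norm2(vw));             // 828.75 as well, as the identity predicts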
The gradient, $\delta\times A$, of the vector potential $A$ is not a 2-form, but a vector, as expected for a physical quantity supposed to represent a force or a movement, having a direction in space: $$\begin{align} (\delta_0 + i\delta_1 + j\delta_2 + k\delta_3)\times(A_0 + iA_1 + jA_2 + kA_3) &= (\delta_0A_0 - \delta_1A_1 - \delta_2A_2 - \delta_3A_3)\\ &+ i(\delta_0A_1 + \delta_1A_0 + \delta_2A_3 - \delta_3A_2)\\ &+ j(\delta_0A_2 + \delta_2A_0 - \delta_1A_3 + \delta_3A_1)\\ &+ k(\delta_0A_3 + \delta_3A_0 + \delta_1A_2 - \delta_2A_1) \end{align}$$ Under Lorenz gauge, the first line vanishes identically, and the remaining lines yield the electromagnetic field in source $(E/c)$ and $\operatorname{curl} (B)$: $$\delta\times A = i(E_1/c + B_1) + j(E_2/c + B_2) + k(E_3/c+ B_3)$$ Source and curl are the two parts of a 4-dimensional isoclinic double-rotation around a point; a quaternion being an operator for performing general isoclinic double-rotation & stretching operations in 4-dimensional space. The square $(\delta\times A) \times (\delta\times A)''$ is equal to $(E⁄c)^2 + B^2$, i.e. there are no mixed terms involving $E$ and $B$, because both parts of an isoclinic double rotation are orthogonal to each other. Elements in Space Edgar MuellerEdgar Mueller William Rowan Hamilton invented the Del or Nabla operator when working with quaternions. ee https://books.google.com/books?id=iuoZSkSOBQsC&pg=PA142&dq=%22William+Rowan+Hamilton%22+and+Nabla&hl=en&sa=X&ei=u78_VaGAGsWEsAWevoGoCg&ved=0CDkQ6AEwBA RabiRabi $\begingroup$ Could you perhaps expand your answer a bit, making it more self-contained? Perhaps try to summarize what can be found in the books you linked? On Stack Exchange sites we strive to avoid link-only answers, and currently this answer is not very far away from being one. $\endgroup$ – Danu ♦ Thanks for contributing an answer to History of Science and Mathematics Stack Exchange! Not the answer you're looking for? Browse other questions tagged mathematics physics calculus or ask your own question. How were vector quantities developed? When was the vector notation in physics and other sciences first introduced? How were vector calculus nabla ∇ identities first derived? First paper introducing the concept of four-vectors Is there a 'lost calculus'? Who first used the word "calculus", and what did it describe? Did Benjamin Franklin know calculus?
Truly random number generator: Turing computable? I am seeking a definitive answer to whether or not generation of "truly random" numbers is Turing computable. I don't know how to phrase this precisely. This StackExchange question on "efficient algorithms for random number generation" comes close to answering my question. Charles Stewart says in his answer, "it [Martin-Löf randomness] cannot be generated by a machine." Ross Snider says, "any deterministic process (such as Turing/Register Machines) can not produce 'philosophical' or 'true' random numbers." Is there a clear and accepted notion of what constitutes a truly random number generator? And if so, is it known that it cannot be computed by a Turing Machine? Perhaps pointing me to the relevant literature would suffice. Thanks for any help you can provide! Edit. Thanks to Ian and Aaron for the knowledgeable answers! I am relatively unschooled in this area, and I am grateful for the assistance. If I may extend the question a bit in this addendum: Is it the case that a TM with access to a pure source of randomness (an oracle?), can compute a function that a classical TM cannot? reference-request computability turing-machines randomness Joseph O'RourkeJoseph O'Rourke $\begingroup$ It helps if you consider the definition of "truly random" first. $\endgroup$ – M.S. Dousti Sep 13 '10 at 17:49 I am joining the discussion fairly late, but I will try to address several questions that were asked earlier. First, as observed by Aaron Sterling, it is important to first decide what we mean by "truly random" numbers, and especially if we are looking at things from a computational complexity or computability perspective. Let me argue however that in complexity theory, people are mainly interested in pseudo-randomness, and pseudo-random generators, i.e. functions from strings to strings such that the distribution of the output sequences cannot be told apart from the uniform distribution by some efficient process (where several meanings of efficient can be considered, e.g. polytime computable, polynomial-size circuits etc). It is a beautiful and very active research area, but I think most people would agree that the objects it studies are not truly random, it is enough that they just look random (hence the term "pseudo"). In computability theory, a concensus has emerged to what should be a good notion of "true randomness", and it is indeed the notion of Martin-Löf randomness which prevailed (other ones have been proposed and are interesting to study but do not bare all the nice properties Martin-Löf randomness has). To simplify matters, we will consider randomness for infinite binary sequences (other objects such as functions from strings to strings can easily be encoded by such sequence). An infinite binary sequence $\alpha$ is Martin-Löf random if no computable process (even if we allow this process to be computable in triple exponential time or higher) can detect a randomness flaw. (1) What do we mean by "randomness flaw"? That part is easy: it is a set of measure 0, i.e. a property that almost all sequences do not have (here we talk about Lebesgue measure i.e. the measure where each bit has a $1/2$ probability to be $0$ independently of all the other bits). An example of such a flaw is "having asymptotically 1/3 of zeroes and 2/3 of ones", which violates the law of large numbers. Another example is "for every n, the first 2n bits of $\alpha$ are perfectly distributed (as many zeroes as ones)". 
In this case the law of large numbers is satified, but not the central limit theorem. Etc etc. (2) How can a computable process test that a sequence does not belong to a particular set of measure 0? In other words, what sets of measure 0 can be computably described? This is precisely what Martin-Löf tests are about. A Martin-Löf test is a computable procedure which, given an input k, computably (i.e., via a Turing machine with input $k$) generates a sequence of strings $w_{k,0}$, $w_{k,1}$, ... such that the set $U_k$ of infinite sequences starting by one of those $w_{k,i}$ has measure at most $2^{-k}$ (if you like topology, notice that this is an open set in the product topology for the set of infinite binary sequences). Then the set $G=\bigcap_k U_k$ has measure $0$ and is referred to as Martin-Löf nullset. We can now define Martin-Löf randomness by saying that an infinite binary sequence $\alpha$ is Martin-Löf random if it does not belong to any Martin-Löf nullset. This definition might seem technical but it is widely accepted as being the right one for several reasons: it is effective enough, i.e. its definition involves computable processes it is strong enough: any "almost sure" property you may find in a probability theory textbook (law of large numbers, law of iterated logarithm, etc) can be tested by a Martin-Löf test (although this is sometimes hard to prove) it has been independently proposed by several people using different definitions (in particular the Levin-Chaitin definition using Kolmogorov complexity); and the fact that they all lead to the same concept is a hint that it should be the right notion (a little bit like the notion of computable function, which can be defined via Turing machines, recursive functions, lambda-calculus, etc.) the mathematical theory behind it is very nice! see the three excellent books An Introduction to Kolmogorov Complexity and Its Applications (Li and Vitanyi), Algorithmic randomness and complexity (Downey and Hirschfeldt) Computability and Randomness (Nies). What does a Martin-Löf random sequence look like? Well, take a perfectly balanced coin and start flipping it. At each flip, write a 0 for heads and a 1 for tails. Continue until the end of time. That's what a Martin-Löf sequence looks like :-) Now back to the initial question: is there a computable way to generate a Martin-Löf random sequence? Intuitively the answer should be NO, because if we can use a computable process to generate a sequence $\alpha$, then we can certainly use a computable process to describe the singleton {$\alpha$}, so $\alpha$ is not random. Formally this is done as follows. Suppose a sequence $\alpha$ is computable. Consider the following Martin-Löf test: for all $k$, just output the prefix $a_k$ of $\alpha$ of length $k$, and nothing else. This has measure at most (in fact, exactly) $2^{-k}$, and the intersection of the sets $U_k$ as in the definition is exactly {${\alpha}$}. QED!! In fact a Martin-Löf random sequence $\alpha$ is incomputable in a much stronger sense: if some oracle computation with oracle $\beta$ (which itself is an infinite binary sequence) can compute $\alpha$, then for all $n$, $n-O(1)$ bits of $\beta$ are needed to compute the first $n$ bits of $\alpha$ (this is in fact a characterization of Martin-Löf randomness, which unfortunately is rarely stated as is in the literature). 
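To make the test in that proof concrete, here is a small schematic sketch in JavaScript. It is only an illustration, since the sequences involved are infinite; alphaBit is an assumed function computing the bits of $\alpha$.

// Schematic sketch of the Martin-Löf test used in the proof above: for a computable
// sequence alpha, level k of the test consists of a single string, the first k bits
// of alpha. The open set U_k it describes therefore has measure exactly 2^{-k}, and
// the intersection over all k is the singleton {alpha}.
function martinLofTestLevel(alphaBit, k) {   // alphaBit: index -> 0 or 1 (assumed computable)
  let w = "";
  for (let i = 0; i < k; ++i) w += alphaBit(i);
  return [w];                                // the finite list of strings generated at level k
}

// Example with the (trivially computable) sequence 010101...
const alternating = i => i % 2;
console.log(martinLofTestLevel(alternating, 5));   // [ '01010' ]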
Ok, now the "edit" part of Joseph's question: Is it the case that a TM with access to a pure source of randomness (an oracle?), can compute a function that a classical TM cannot? From a computability perspective, the answer is "yes and no". If you are given access to a random source as an oracle (where the output is presented as an infinite binary sequence), with probability 1 you will get a Martin-Löf random oracle, and as we saw earlier Martin-Löf random implies non-computable, so it suffices to output the oracle itself! Or if you want a function $f: \mathbb{N} \rightarrow \mathbb{N}$, you can consider the function $f$ which for all $n$ tells you how many zeroes there are among the first $n$ bits of your oracle. If the oracle is Martin-Löf random, this function will be non-computable. But of course you might argue that this is cheating: indeed, for a different oracle we might get a different function, so there is a non-reproducibility problem. Hence another way to understand your question is the following: is there a function $f$ which is non-computable, but which can be "computed with positive probability", in the sense that there is an Turing machine with access to a random oracle which, with positive probability (over the oracle), computes $f$. The answer is no, due to a theorem of Sacks whose proof is quite simple. Actually it has mainly been answered by Robin Kothari: if the probability for the TM to be correct is greater than 1/2, then one can look for all $n$ at all the possible oracle computations with input $n$ and find the output which gets the "majority vote", i.e. which is produced by a set of oracles of measure more than 1/2 (this can be done effectively). The argument even extend to smaller probabilities: suppose the TM outputs $f$ with probability $\epsilon >0$. By Lebesgue's density theorem, there exists a finite string $\sigma$ such that if we fix the first bits of the oracle to be exactly $\sigma$, and then get the other bits at random, then we compute $f$ with probability at least 0.99. By taking such a $\sigma$, we can apply the above argument again. LaurentBienvenuLaurentBienvenu $\begingroup$ what a beautiful answer. $\endgroup$ – Suresh Venkat Sep 17 '10 at 16:57 $\begingroup$ I am very appreciative of the clarity of your detailed response on this (to me!) tangled question. Thanks! $\endgroup$ – Joseph O'Rourke Sep 18 '10 at 1:16 There is (perhaps) a distinction to be made between "Turing computable" and "effectively computable" in order to answer your question. If one defines "random process" as "a process that cannot be predicted, no matter what resources we have," and one defines "deterministic process" as "predictable process, given the input and access to (maybe a lot of) resources," then no Turing computable function can be random, because if we knew the Turing machine and simulated it, we could always predict the outcome of the next "experiment" of the process. In this framework, a Martin-Lof test can be seen as a deterministic process, and the definition of a random sequence is precisely a sequence whose behavior is not predicted by any Martin-Lof test/Turing computable/deterministic process. This, however, begs the question: "Is a random sequence effectively calculable, in real life?" There is, in fact, an industry here. There are published CD's with billions of random (?) bits on them that are used to perform computer simulations of physical systems, etc. These CD's guarantee that their sequences of bits pass a bunch of Martin-Lof tests. 
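The "majority vote" derandomization mentioned in the answer above can be sketched as follows. This is illustrative only and of course takes time exponential in m; it assumes the machine M halts on every input and reads only its first m random bits.

// Illustrative sketch of the majority-vote derandomization: enumerate every random
// string of length m, run M on each, and output whatever answer occurs most often.
// This is correct whenever M gives the right answer on more than half of the strings.
function majorityOutput(M, x, m) {
  const counts = new Map();
  for (let i = 0; i < 2 ** m; ++i) {
    const r = i.toString(2).padStart(m, "0");   // one possible random string
    const out = M(x, r);
    counts.set(out, (counts.get(out) || 0) + 1);
  }
  let best, bestCount = -1;
  for (const [out, c] of counts) {
    if (c > bestCount) { best = out; bestCount = c; }
  }
  return best;
}

// Example: a toy "machine" that errs only when its first two random bits are "11" and x is even.
const M = (x, r) => (r.startsWith("11") && x % 2 === 0) ? "wrong" : "right";
console.log(majorityOutput(M, 4, 3));   // "right" (6 of the 8 random strings give "right")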
The book The Drunkard's Walk: How Randomness Rules our Lives gives a pop-sci explanation of this issue, in greater detail. Irrelevant point: I enjoy your column. :-) Aaron SterlingAaron Sterling Intuitively, "random" means "unpredictable", and any sequence generated by a Turing machine can be predicted by running the machine, so Turing machines cannot produce "truly random" numbers. There are a number of formal definitions of random sequences (randomness only really makes sense as the length of a string goes to infinity), all of which are essentially equivalent. Perhaps the most natural of these are Martin-Lof randomness, which means that a sequence passes all possible computable statistical tests for stochasticity, and Chaitin random which means that all initial subsequences are incompressible (more specifically, have high Kolmogorov complexity). In both of these definitions it is incomputable to both generate random sequences and to recognize them. See the book "Information and Randomness: An Algorithmic Perspective" by Calude for a thorough treatment of this topic. IanIan $\begingroup$ Link to book here: amazon.com/… $\endgroup$ – Suresh Venkat Sep 13 '10 at 21:26 $\begingroup$ Thanks, Ian & Suresh, I am retrieving that book from our library! $\endgroup$ – Joseph O'Rourke Sep 13 '10 at 22:02 $\begingroup$ Another great book is Nies's "Computability and randomness". $\endgroup$ – Diego de Estrada Sep 14 '10 at 23:49 Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number — there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method. — John von Neumann $\begingroup$ Ha! Great quote, Jeff! And with a substantive point. $\endgroup$ – Joseph O'Rourke Sep 15 '10 at 12:21 It seems like no one has answered your addendum, so I'll take a shot at it: If I may extend the question a bit in this addendum: Is it the case that a TM with access to a pure source of randomness (an oracle?), can compute a function that a classical TM cannot? I'm going to try to make the question more precise, and then answer it. (My version might not be what you had in mind though, so let me know if it isn't.) We have a deterministic TM with access to a random number generator. This TM now computes some function (an actual function, i.e., a deterministic map from an input space to an output space) making use of the random number generator in some way. So is the TM with access to randomness allowed to make error? If not, then the DTM must give the correct answer no matter what random bits it was supplied. In this case the random bits are unnecessary, as you could just take the random string to be 00000... If the DTM is allowed to make error, but should get the right answer more often than random guessing, then we can still do without the randomness source. This DTM computes the function $f_i(x,r)$, where x is the input, r is the random string it got from the oracle, and the $f_i$s are the bits of the output. The DTM can now loop over all possible strings $r$ and see what the majority output is, and output that. Robin KothariRobin Kothari $\begingroup$ I find this insightful: "If not, then the DTM must give the correct answer no matter what random bits it was supplied." Thanks! $\endgroup$ – Joseph O'Rourke Sep 15 '10 at 12:20 $\begingroup$ Actually I don't get this. 
You seem to be suggesting that P = ZPP or that a randomized algorithm with zero error (for example a las Vegas algorithm) must be deterministic ? $\endgroup$ – Suresh Venkat Sep 15 '10 at 15:00 $\begingroup$ By a DTM with oracle access deciding a language, I assumed that the DTM halts after a finite amount of time. In this case, we can get rid of the oracle. For zero-error, we just replace it with 0000..., and for any other purpose one can brute force over all finite length random strings. (I'm sure someone probably holds the opinion that Las Vegas algorithms are not really algorithms since they don't necessarily terminate.) $\endgroup$ – Robin Kothari Sep 15 '10 at 15:53 Regarding your "edit question": it makes a big difference if you are asking about computability or complexity. If there are complexity bounds on the TM, then you obtain the so-called random oracle model. If the TM can use arbitrarily large-but-finite resources, then you're in the world of relative randomness: there are randomness hierarchies of oracles, much as there are Turing degrees. (Side point: one of the (in)famous critiques by Koblitz and Menzes was about the use of the random oracle model, so your meta-question is touching on recent academic debates.) $\begingroup$ Just to clarify though: did Joe want a random oracle (which is essentially a random hash function) or merely a source of randomness ? these are not the same thing, are they ? $\endgroup$ – Suresh Venkat Sep 15 '10 at 4:11 $\begingroup$ Thanks, Aaron, the mention of randomness oracle hierarchies is useful. $\endgroup$ – Joseph O'Rourke Sep 15 '10 at 12:22 $\begingroup$ @Suresh: I meant a source of randomness. $\endgroup$ – Joseph O'Rourke Sep 15 '10 at 13:46 $\begingroup$ You both are probably way ahead of me here, but I was trying to say that randomness needs to be defined relative to a "frame of reference," i.e., the resources available to make predictions. A "source of randomness" might be random with respect to a Turing machine, but not with respect to the Halting Oracle. I agree with Robin Kothari's answer; my point only was that a "pure source of randomness" seems not to exist under current definitions, because we could always diagonalize against it and obtain something random-er. $\endgroup$ – Aaron Sterling Sep 15 '10 at 14:16 I'm still trying to understand your modified question, especially what limits you place on the TM. So while this answer might not get at exactly what you want, maybe it will help narrow things a bit. We know that there is an unconditional impossibility result for approximating to with a subexponential factor the volume of a convex body deterministically (this is an old result by Bárány and Füredi). In contrast, we can get an FPRAS for this problem using sampling. Is this an example of the separation you are looking for ? Suresh VenkatSuresh Venkat $\begingroup$ This result is for polynomial time algorithms, right? I interpreted the OP's question as one about computability theory, not complexity theory. By which I mean I interpreted it to mean "Is the set of problems solved by a DTM + source of randomness larger than those solved by a DTM?" $\endgroup$ – Robin Kothari Sep 15 '10 at 17:17 $\begingroup$ this is possible. Hence my attempt to flesh it out in more detail. At the computability level, a discrepancy would to me invalidate the Church-Turing thesis though. $\endgroup$ – Suresh Venkat Sep 15 '10 at 17:29 $\begingroup$ I like that volume example! 
Although I asked specifically about computability theory, I am also interested in complexity differences. I don't see how this could invalidate C-T, because the previous answers established that a pure source of true randomness is not computable...? $\endgroup$ – Joseph O'Rourke Sep 15 '10 at 17:43 $\begingroup$ I think once we formalize what we mean by a DTM with access to a source of randomness (what its acceptance criteria, halting probability, etc. are), we should be able to show that this model also computes exactly the recursive languages. $\endgroup$ – Robin Kothari Sep 15 '10 at 18:22 $\begingroup$ True (in the computable realm). But now I wonder: suppose we construct a string whose ith bit is the outcome of running the ith Turing machine on an encoding of itself. Would being able to predict this string correspond to solving the Halting problem, and is this string random in the Martin-Löf sense? $\endgroup$ – Suresh Venkat Sep 16 '10 at 4:55
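As a rough, purely illustrative companion to Robin Kothari's brute-force argument above: a deterministic machine can simulate a bounded-error randomized decider by enumerating every random string of a fixed length and taking the majority answer. The sketch below uses a toy predicate standing in for the randomized algorithm; the function names, error pattern, and string length are invented for the example and are not taken from the discussion.

```python
from itertools import product

def randomized_alg(x, r):
    # Toy bounded-error "algorithm": decides whether x is even, but is wired to
    # answer incorrectly whenever the random bits r happen to start with "11".
    correct = (x % 2 == 0)
    return (not correct) if r.startswith("11") else correct

def derandomize_by_majority(alg, x, num_random_bits):
    # Deterministically enumerate all 2**num_random_bits random strings and
    # return the majority answer, i.e. the exponential-time simulation discussed above.
    yes_votes, total = 0, 0
    for bits in product("01", repeat=num_random_bits):
        if alg(x, "".join(bits)):
            yes_votes += 1
        total += 1
    return yes_votes * 2 > total

if __name__ == "__main__":
    for x in (4, 7):
        print(x, derandomize_by_majority(randomized_alg, x, num_random_bits=4))
```

The enumeration is exponential in the number of random bits, which is why this settles only the computability question (no new languages become decidable) and says nothing about complexity-level separations such as P versus ZPP or BPP.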
Molecular features that predict the response to antimetabolite chemotherapies Mahya Mehrmohamadi1,2,3,4, Seong Ho Jeong4 & Jason W. Locasale ORCID: orcid.org/0000-0002-7766-35021,2,3 Antimetabolite chemotherapeutic agents that target cellular metabolism are widely used in the clinic and are thought to exert their anti-cancer effects mainly through non-specific cytotoxic effects. However, patients vary dramatically with respect to treatment outcome, and the sources of heterogeneity remain largely unknown. Here, we introduce a computational method for identifying gene expression signatures of response to chemotherapies and apply it to human tumors and cancer cell lines. Furthermore, we characterize a set of 17 antimetabolite agents in various contexts to investigate determinants of sensitivity to these agents. We identify distinct favorable and unfavorable metabolic expression signatures for 5-FU and Gemcitabine. Importantly, we find that metabolic pathways targeted by each of these antimetabolites are specifically enriched in its expression signatures. We provide evidence against the common notion about non-specific cytotoxic functions of antimetabolite drugs. This study demonstrates through unbiased analyses that the activities of metabolic pathways likely contribute to therapeutic response. Cancer cells adapt their metabolism to meet the requirements of inappropriate growth, survival, and proliferation [1,2,3]. Since these demands are often not present in normal cells to the same extent, there is considerable interest in exploiting metabolic alterations for therapeutic advances [4, 5]. Antimetabolite chemotherapies are one of the most commonly used therapeutic strategies for the treatment of neoplastic disease [6]. Historically, some of the first successful chemotherapeutic agents were derived from intermediates in the synthesis of folates [7, 8]. Subsequently, there are now at least 17 agents approved in the USA that target a specific metabolic enzyme [9]. These agents can often be tolerated and can achieve remarkable responses in advanced-stage cancers leading to complete remission in many cases. However, the clinical responses to these agents are heterogeneous with patients exhibiting varying degrees of sensitivity or resistance. To date, there is little molecular information that is used clinically for prognostication for these agents. For instance, 5-fluorouracil (5-FU) is a widely used antimetabolite chemotherapy that interferes with pyrimidine biosynthesis by targeting the enzyme thymidylate synthetase (TYMS). Previous studies that have associated the expression levels of TYMS and tumor response to 5-FU have been controversial, and currently, TYMS expression is not used as a biomarker in clinical decision-making [10]. Other studies have found TP53 mutational status a predictor of 5-FU therapy [11, 12]. However, it remains unclear whether the activities of specific pathways that are targeted by 5-FU associate with anti-tumor responses. Notably, a recent metabolomics study provided evidence that pyrimidine homeostasis is disrupted in response to 5-FU suggesting metabolic specificity in determinants of response to this drug [13]. A recent study used a large panel of cell lines from the catalog of somatic mutations in cancer (COSMIC) collection and characterized molecular markers of response to hundreds of different drugs [11]. This drug panel included a number of antimetabolite chemotherapies together with a number of other agents grouped as "cytotoxic drugs." 
This study comprehensively evaluated thousands of molecular features in their ability to act as predictive markers of sensitivity and found the TP53 mutational status as the most dominant marker for antimetabolite agents such as 5-FU and Gemcitabine. For 5-FU, a handful of copy number variants (CNVs) was also found to be predictive of cell line resistance [11]. However, this study did not explore gene expression beyond only 11 common pathways, which found no significant predictors. It remains to be investigated whether any differences among antimetabolite agents can be captured in gene expression signatures of response and whether such gene expression signatures can add to our power of distinguishing subtypes with heterogeneous therapeutic outcome. Previous assessments of molecular markers of response to chemotherapy have mostly been carried out in cancer cell lines. The wealth of genomic information on annotated human tumors now publically available through the cancer genome atlas (TCGA) allows for these questions to be addressed in patients in a more systematic way than previously possible. We and others have successfully utilized the TCGA to decipher novel aspects of cancer metabolism using computational approaches that integrate genomic information on thousands of human tumors [14,15,16,17,18]. A previous study applied an unbiased investigation of genomic data on ovarian cancer tumors from the TCGA and specifically looked for prognostic markers of response to Cisplatin using progression-free survival of recipients [19]. Despite difficulties in studying drug response in human patients in the presence of numerous confounding factors and heterogeneity in therapeutic regimens, the unbiased framework introduced in that study provided useful insights on novel genetic and epigenetic subgroups with variable outcome [19]. This motivated us to apply a similar approach to identify gene expression subgroups of response to antimetabolite chemotherapies. Here, we carry out an investigation of a set of antimetabolite chemotherapies that target metabolic enzymes. These agents target different pathways including folate synthesis, nucleotide metabolism, and glutathione biosynthesis. Instead of analyzing target enzyme expressions, we develop an unbiased approach to identify gene expression signatures of response. Subsequently, we assess specificity and heterogeneity in cell line sensitivities to various antimetabolite agents. Together, our results introduce specific metabolic determinants of response to these agents. Discretizing gene expressions and defining favorability scores We considered TCGA's COAD and PAAD cohorts. Level-3 RNA-seq RSEM gene-normalized counts were downloaded for each tumor through the GDC portal (https://gdc.cancer.gov/). The values were log2 normalized, and in each data set, genes with a count of 2 or smaller in over 80% of the samples were removed as low-count genes. We used the following criteria to discretize the signature gene expression matrix and label expressions "favorable" or "unfavorable" based on their relationship with progression-free survival (PFS; time-zero is date of diagnosis in the corresponding plots). 
A gene was assigned a value of 1 and was considered favorable if its high expression (higher than the median plus half of the standard deviation for that gene) co-occurred with better prognosis (i.e., the patient exhibited both high expression and good prognosis based on a Cox survival test on the expression values of that gene), and a value of − 1 (unfavorable) if its high expression co-occurred with poor prognosis in univariate Cox regression:
$$ F_{ij}=\begin{cases} 1, & \text{if } E_{ij}\ge \mathrm{med}_i+s_i/2 \ \text{and}\ j\in \text{good survival}\\ -1, & \text{if } E_{ij}\ge \mathrm{med}_i+s_i/2 \ \text{and}\ j\in \text{poor survival}\\ 0, & \text{otherwise} \end{cases} $$
where $E_{ij}$ represents the expression of gene "i" in individual tumor "j", and $\mathrm{med}_i$ and $s_i$ are the median and standard deviation of that gene's expression. For discretizing cell line expression data, the following modified scheme was used, where cell lines were labeled either "sensitive" or "resistant" to a drug if their IC-50 value was at either extreme of the distribution of IC-50 values for that given drug across all cell lines:
$$ F_{ij}=\begin{cases} 1, & \text{if } E_{ij}\ge \mathrm{med}_i+s_i/2 \ \text{and}\ j\in \text{sensitive}\\ -1, & \text{if } E_{ij}\ge \mathrm{med}_i+s_i/2 \ \text{and}\ j\in \text{resistant}\\ 0, & \text{otherwise} \end{cases} $$
where $E_{ij}$ represents the expression of gene "i" in cell line "j." Genome-wide identification of survival-associated expression Progression-free survival times for TCGA's COAD and PAAD cohorts were obtained through the cBioPortal for cancer genomics. We used cancer progression or patient death as "events" in Cox models and used the last day of follow-up to right censor the data in cases where no event was documented. The R package "survival" was used for univariate survival analyses independently for all genes (Fig. 1a and Fig. 3a). Combined gene expression signatures of response to 5-FU in colon cancer identify novel subgroups. a Schematic of the step-wise filtering used for gene selection in colon cancer (TCGA COAD). b Hierarchical clustering of heatmap of the discretized gene favorability scores. Columns represent genes and rows represent individuals. Favorable scores are shown by the color red (F = 1), unfavorable by blue (F = − 1), and neutral by yellow (F = 0) (see the "Methods" section). c Pathways enriched in the unfavorable gene set. Enrichment p values are calculated using Fisher's exact test (see the "Methods" section) Survival analysis using gene signatures When considering survival analysis for subgroups identified by our favorability scoring method (described in the following), we used the subgroup assignments based on the k-means clustering of the favorability matrix in each case to label samples as "favorable signature group" and "unfavorable signature group." Subsequently, Cox regression was performed to assess the significance of the difference between the PFS of the two groups, as shown in Fig. 2a and Fig. 3d. Relationship between target enzyme expression and response to 5-FU in colon cancer. a Kaplan-Meier plot showing progression free survival in the two tumor subgroups identified in Fig. 1b. b Kaplan-Meier plot compares progression free survival in high-TYMS expression vs. low-TYMS expression subgroups of TCGA COAD patients. c Kaplan-Meier plot compares progression free survival in high-TYMS expression vs.
low-TYMS expression subgroups of stage III TCGA COAD patients Combined gene expression signatures of response to Gemcitabine in pancreatic cancer identify novel subgroups. a Schematic of the step-wise filtering used for gene selection in pancreatic cancer (TCGA PAAD). b Hierarchical clustering of heatmap of the discretized gene favorability scores. Columns represent genes and rows represent individuals. Favorable scores are shown by the color red (F = 1), unfavorable by blue (F = − 1), and neutral by yellow (F = 0) (see the "Methods" section). c Pathways enriched in the unfavorable gene set. Enrichment p values are calculated using Fisher's exact test. d Kaplan-Meier plot showing the progression free survival in the two tumor subgroups identified in part (b) Cross validation To assess potential over-fitting of our approach for stratifying response subsets, we repeated the favorability scoring and the subsequent clustering using 5-fold cross validation as follows: we divided the cohort of COAD tumors into five independent test subsets. For each round of cross validation, we left one of the test subsets out and performed the survival analysis as described above only on the remaining four subsets (the training set). We next performed the survival analysis on the test subset using the training set gene expression data to determine "high" and "low" expression thresholds for each gene. The median log likelihood test p value for the significance of the difference between survival rates of the two subsets was p = 5.706671e-06 (with standard deviation of 0.003) on the training and p = 0.019 (with standard deviation of 0.017) on the test sets. Cell line sensitivity analyses For the COSMIC cell lines, RMA-normalized gene expressions were obtained through the Sanger Institute (http://cancer.sanger.ac.uk/cosmic). Genes with a coefficient of variation of 0.05 or smaller were removed. To test association with drug response, inhibitory concentration (IC-50) values were correlated with gene expression values and a Kendal tau was calculated. Genes with a correlation of over 0.2 and an associated p value of 0.01 or less were selected for subsequent discretization step (Fig. 4a and Additional file 1: Figure S2A). Combined gene expression signatures of response to 5-FU across colon cancer cell lines identify novel subgroups. a Schematic of the step-wise filtering used for gene selection in colon cancer (COSMIC COAD-READ). b Hierarchical clustering of heatmap of the discretized gene favorability scores. Columns represent genes and rows represent individuals. Favorable scores are shown by the color red (F = 1), unfavorable by blue (F = − 1), and neutral by yellow (F = 0) (see the "Methods" section). c Box-plots comparing the resistance to 5-FU (log IC-50 values) between the two cell line subgroups identified in part (b) (error bars show the range of the data points in each group) Gene selection approach Genes that passed our first filter, i.e., showed a significant association with PFS (Cox p value < 0.05), were subsequently evaluated by additional clinical and genetic attributes. To eliminate genes whose expression levels were significantly affected by TP53 mutational status, we compared expression levels in TP53 mutant with TP53 wild-type samples, and a Wilcoxon non-parametric test was used to assess statistical difference. This test allowed filtering out genes significantly associated with TP53 mutation. 
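A minimal sketch of the TP53 confounder filter just described, assuming a hypothetical pandas expression matrix (genes by tumors) and a Boolean TP53-mutation vector; the Wilcoxon rank-sum comparison is done here with SciPy's Mann-Whitney U implementation. The analogous filters for the clinical covariates described next can be added with scipy.stats.spearmanr inside the same loop.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def drop_tp53_associated_genes(expr, tp53_mutant, alpha=0.05):
    """Remove genes whose expression differs between TP53-mutant and wild-type tumors.

    expr        : DataFrame, rows = genes, columns = tumor sample IDs (log2 expression)
    tp53_mutant : Boolean Series over the same samples, in the same column order
    """
    keep = []
    for gene, values in expr.iterrows():
        mutant = values[tp53_mutant.values]
        wild_type = values[~tp53_mutant.values]
        _, p = mannwhitneyu(mutant, wild_type, alternative="two-sided")
        if p >= alpha:  # keep only genes NOT significantly tied to TP53 status
            keep.append(gene)
    return expr.loc[keep]
```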
For other clinical attributes, such as cancer stage, patient age, tumor grade, and nodal status, the Spearman correlation was used to test associations between gene expression and these clinical factors across samples. Finally, genes that passed all of the above filters were used for subsequent discretization analyses. Survival analysis using expression of target enzymes To assess the strength of the direct target enzymes of 5-FU and Gemcitabine as markers of PFS, we considered expression levels of TYMS and RRM1 (RRM2), respectively. We first used the function "cutp" in the R package "survMisc" to find the best cutting point in the continuous gene expression. We then used this cutting point as a threshold to divide the samples into two groups of "low" and "high" expression for samples below and above the cut point, respectively. Independent cross validation for pancreatic cancer To validate the clinical significance of the gene signature comprised of 665 genes in pancreatic cancer, we looked at a publicly available dataset (Accession: GSE17891) of a pancreas cohort comprised of 27 patients. First, we clustered the patients based on their 665-gene signatures by Spearman rank correlation clustering, and there were two distinct clusters. We performed Kaplan-Meier survival analysis based on the clustering, and we found that the gene signature was able to stratify the cohort into two groups with distinct survival outcomes despite the small cohort size (n = 27). To compare this result to those obtained using single gene expression levels, we performed Kaplan-Meier analysis for RRM1 and RRM2. We divided the cohort in half according to RRM1 and RRM2 gene expression levels (n = 14 for high gene expression and n = 13 for low gene expression). Pathway enrichment analyses Pathway enrichment analysis was performed on the resulting gene list for each cancer type using Enrichr [20]. P values from Fisher's exact test are reported for significant (p < 0.05) KEGG pathways (and HumanCyc (https://humancyc.org/) pathways for potential metabolic signatures not defined by KEGG pathways in detail). Analyses of non-gene expression cell attributes We obtained IC-50 values for the 17 antimetabolite compounds across a panel of 60 cell lines from the National Cancer Institute (NCI-60) [21]. To complement our gene expression analyses, we took advantage of the NCI-60 cell line panel where, in addition to the comprehensive annotation of cell lines, a previous study has quantified the consumption and release rates (CORE) of hundreds of metabolites by each of these cell lines. We obtained cell volumes, proliferation rates, CORE values, and dose-response sensitivity information (IC-50 values) for 17 antimetabolite drugs across this cell line panel (https://dtp.cancer.gov/discovery_development/nci-60/). CORE values are positive if a metabolite is released into the media by cancer cells and negative if the metabolite is consumed. The list of these antimetabolite agents is as follows: Gemcitabine, Methotrexate, Pemetrexed, Thioguanine, Thiopurine, Fluorouracil, 5-Fluorouracil deoxyriboside, Hydroxyurea, Ara-C, Azacytidine, Cladribine, Decitabine, Pentostatin, Cytarabine, Fludarabine phosphate, Clofarabine, and Capecitabine. Growth rate calculations We obtained the growth rate by correcting proliferation rates for cell volumes. At time zero, right after cell division, the cell volume ($V_0$) is at its minimum. At time $T_1$, the cell has grown to volume $V_1$.
If we define the growth rate ($k_g$) as the increase in cell volume per unit time, we obtain the equation below:
$$ V_1 = V_0 + T_1 k_g $$
At the doubling time ($T_d$), the cell will divide into two, and we assume the two divided cells have the same volume as the initial volume, $V_0$:
$$ V_2 = 2V_0 = V_0 + T_d k_g $$
$$ V_0 = T_d k_g $$
$$ T_d = \frac{\ln 2}{k_p} $$
$$ V_0 = \left(\frac{\ln 2}{k_p}\right) k_g $$
We then solved the above equation to obtain the following expression for the growth rate:
$$ k_g = \frac{V_0 k_p}{\ln 2} $$
Gene expression signatures of patient response to antimetabolite chemotherapies are enriched for metabolic pathways To identify gene expression signatures associated with patients' response to chemotherapies, we undertook an unbiased genome-wide selection approach adapted and modified from a previous framework [19] (Fig. 1a). We used the TCGA as the source of our clinically annotated genomic data on human tumors [22]. Progression-free survival (PFS), a readily available metric of clinical outcome, was used as a measure of patient response to chemotherapy. TCGA cancer types in which patients were treated with a common antimetabolite agent were considered if both RNA-seq gene expression and follow-up data were available for a large enough cohort of patients (N > 50) that would allow quantitative analysis. Since our goal was to identify subtypes of cancer patients with "good response" and "poor response," we considered each cancer type separately. These criteria limited our analyses of human data to 5-FU treatment in colorectal cancers and Gemcitabine treatment in pancreatic cancers (see the "Methods" section). Both of these agents target one-carbon metabolism, a metabolic pathway that has previously been shown to play diverse critical roles in cancer initiation, progression, and pathogenesis [4, 14,15,16, 23, 24]. A total of 109 colon cancer patients were considered who received adjuvant 5-FU therapy as part of their chemotherapy regimen [22]. For this genome-wide study, we considered all of the genes in the genome after filtering out low-count mRNA expressions (see the "Methods" section). We first calculated the association between the expression of each gene and PFS using univariate Cox regression (see the "Methods" section) and excluded genes that did not show a significant (p < 0.05) association (Fig. 1a). Next, we considered the remaining 446 genes and further filtered out stage-, age-, TP53 mutation-, and nodal status-associated genes to eliminate confounding factors that might affect the association of genes with 5-FU response (see the "Methods" section). This filtering led to a set of 299 genes that were each individually significantly associated with patient response to 5-FU in colon cancer, and their relationship to PFS was independent of stage, age, TP53 mutation, and nodal status of the tumors (Fig. 1a). Notably, this set included TYMS—the direct target enzyme of 5-FU. We next set out to assess the combined power of the 299 genes in separating response subgroups. For this, we used a scheme previously proposed by Hsu et al. for DNA methylation [19] and modified the method to apply to gene expression analysis (see the "Methods" section; a brief computational sketch of the screening and scoring steps is given below).
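A compact sketch of the per-gene Cox screen and the favorability scoring and clustering steps referenced above, assuming the lifelines and scikit-learn packages and a hypothetical patient data frame (PFS time in days, a progression/death event flag, and log2 expression columns); the column names are invented for illustration. The paper labels genes by co-occurrence of high expression with the good- or poor-survival group; here the sign of the univariate Cox coefficient is used as a stand-in for that direction, which is a simplification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test
from sklearn.cluster import KMeans

def cox_screen(df, genes, time_col="pfs_days", event_col="progressed", alpha=0.05):
    # Univariate Cox regression per gene; keep genes with p < alpha and record
    # the direction of their association with hazard.
    hits = {}
    for g in genes:
        cph = CoxPHFitter()
        cph.fit(df[[time_col, event_col, g]], duration_col=time_col, event_col=event_col)
        row = cph.summary.loc[g]
        if row["p"] < alpha:
            hits[g] = np.sign(row["coef"])  # +1: high expression tracks with events
    return hits

def favorability_matrix(df, hits):
    # Discretize expression into {-1, 0, +1} using the median + SD/2 threshold.
    F = pd.DataFrame(0, index=df.index, columns=list(hits))
    for g, hazard_sign in hits.items():
        high = df[g] >= df[g].median() + df[g].std() / 2
        F.loc[high, g] = -1 if hazard_sign > 0 else 1  # high expression + worse survival = unfavorable
    return F

def compare_signature_groups(df, F, time_col="pfs_days", event_col="progressed"):
    # Two-group k-means on the favorability matrix, then a log-rank test on PFS.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(F.values)
    a, b = df[labels == 0], df[labels == 1]
    result = logrank_test(a[time_col], b[time_col],
                          event_observed_A=a[event_col], event_observed_B=b[event_col])
    return labels, result.p_value
```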
First, we converted the gene expression matrix into a discretized matrix of "favorability scores," where a gene with high expression in a patient in the better prognosis subgroup was assigned a score of 1 ("favorable"), a gene with high expression co-occurring with the poorer prognosis subgroup was assigned a score of − 1 ("unfavorable"), and all other cases were assigned a score of 0 ("neutral") (see the "Methods" section). The clustered heatmap of the favorability scores revealed distinct subsets of genes (favorable vs. unfavorable) as well as distinct subgroups of patients (Fig. 1b). To assess the functional relevance of the favorable and unfavorable gene signatures, we performed gene set enrichment analysis based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The unfavorable gene set was enriched for the following KEGG pathways: circadian entrainment (p = 7e-03); nucleotide sugar metabolism (p = 7e-03); Notch signaling (p = 7e-03); and one-carbon metabolism (p = 1e-02) (Fig. 1c). TYMS, SHMT2, GALT, RENBP, and AMDHD2 were among the metabolic genes that had an unfavorable expression in colon cancer, meaning that their high expression in patients treated with 5-FU was associated with poorer prognosis. Consistent with our results, one-carbon metabolic fluxes have previously been shown to correlate with sensitivity to 5-FU in vitro and in mice [13]. These observations illustrate the importance of specific metabolic target pathways of 5-FU in explaining part of the variability in patient response to this drug. Enrichment analysis on the favorable gene cluster showed enrichment of lipid metabolic KEGG pathways (synthesis of unsaturated fatty acids (p = 4e-04) and fatty acid metabolism (p = 2e-03)), with SCD and ACOX1 fatty acid de-saturases being among the metabolic genes in this group. Lipid synthesis has long been known to increase upon carcinogenesis, producing cellular membrane subunits for rapidly proliferating cells [25]. However, lipidome analyses have shown that the role of fatty acids in cancers is more complex, with an enrichment of saturated fatty acids causing a loss of membrane fluidity, an increase in drug resistance, and an increase in the malignancy of cancer cells [26]. Our results confirm previous studies by identifying the fatty acid oxidases and de-saturases SCD and ACOX1 as favorable enzymes, suggesting a role for fatty acid metabolism. To compare the two patient subgroups identified by our approach, we performed k-means clustering on the matrix of favorability scores and identified a distinct subgroup enriched with favorable genes (group 1 in Fig. 1b) and a second subgroup enriched with unfavorable gene expression (group 2 in Fig. 1b) (see "Methods" section). When PFS was compared between these two subgroups, we found a highly significant difference (Cox p = 3.46e-07, hazard ratio (HR) = 6.7; Fig. 2a). Interestingly, when limiting the gene expression signatures to the 17 metabolic genes among the 299, we could still see a significant separation (Cox p = 1.3e-03), suggesting that the metabolic genes alone are predictive of outcome. To control for potential bias, we repeated this procedure using 5-fold cross validation. We found the difference between the two subgroups to be significant in all five testing subsets (see "Methods" section). Next, we assessed the power of TYMS expression alone in distinguishing response subgroups. For this, we divided tumors into two groups based on their TYMS expression level: "low-TYMS" and "high-TYMS" (see "Methods" section; a sketch of this single-gene split is given below).
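For the single-gene comparison, a rough sketch of splitting tumors on one gene's expression, again assuming lifelines. Rather than the survMisc "cutp" routine used in the paper, the cut point is chosen here by scanning candidate thresholds for the strongest log-rank separation, which is one simple way to approximate that step (and makes the resulting p value optimistic, since many cuts are tried).

```python
import numpy as np
from lifelines.statistics import logrank_test

def best_cutpoint_split(expression, time, event, n_grid=50, min_group=10):
    # Scan candidate thresholds between the 20th and 80th percentiles and return
    # the cut giving the smallest log-rank p value between "low" and "high" groups.
    candidates = np.quantile(expression, np.linspace(0.2, 0.8, n_grid))
    best_threshold, best_p = None, 1.0
    for threshold in candidates:
        high = expression >= threshold
        if high.sum() < min_group or (~high).sum() < min_group:
            continue  # keep both groups reasonably sized
        res = logrank_test(time[high], time[~high],
                           event_observed_A=event[high], event_observed_B=event[~high])
        if res.p_value < best_p:
            best_threshold, best_p = threshold, res.p_value
    return best_threshold, best_p
```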
We then compared PFS between the two groups using Cox regression and found a modestly significant difference in response between the low-TYMS and high-TYMS groups (p = 4.9e-02; Fig. 2b). Given that adjuvant 5-FU therapy is usually administered in stage III colon cancer, we repeated this analysis in stage III tumors only (N = 59) and found a slightly stronger association (p = 6e-03; Fig. 2c). In both analyses, we found that higher expression of TYMS is associated with poorer response to 5-FU therapy, consistent with previous reports [27, 28], possibly explained by the larger doses of the drug needed to achieve TYMS inhibition in high-expressing tumors. These results show that our scheme of discretizing combined gene expression signatures followed by favorability scoring and clustering is able to identify prognosis subgroups that are significantly more distinct than the subgroups identified based on TYMS expression alone, despite TYMS being the direct target of 5-FU and a gene strongly correlated with drug response. Importantly, our gene expression signatures are not associated with other prominent clinical predictors of prognosis (e.g., age, stage, nodal status, and TP53 mutation), as we controlled for these confounding factors in the gene selection step (see the "Methods" section; Fig. 1a). This suggests that the gene expression signatures identified here offer additional information about prognosis beyond what is already captured by commonly used clinical metrics. Metabolism is an interconnected network of reactions that work in concert; thus, the combined activity of multiple connected genes and pathways provides a better reflection of the biological state of a tumor than the activity of individual enzymes. We next set out to apply our gene expression analysis method to an independent TCGA cohort consisting of pancreatic cancer patients (N = 100) who were treated with adjuvant Gemcitabine chemotherapy as part of their chemotherapy regimen. Gemcitabine is another chemotherapeutic agent that targets nucleotide and glutathione metabolism. Gene selection and filtering steps resulted in a set of 665 genes associated with PFS in this cohort after controlling for patient age, tumor grade, and TP53 mutational status (Fig. 3a). Visualization of a discretized expression heatmap made apparent subsets of favorable and unfavorable genes (Fig. 3b). Pathway analysis of the favorable gene set showed the Glycerophospholipid metabolism pathway (p = 1e-04) being enriched, while the following KEGG pathways were enriched in the unfavorable expression signature: mitotic cell cycle and nuclear division (p < 10e-9), viral carcinogenesis (p = 2e-04), mismatch repair (p = 2e-04), apoptosis (p = 8e-03), and Pyrimidine metabolism (p = 1e-02) (Fig. 3c). Notably, the unfavorable gene set included the ribonucleotide reductases RRM1 and RRM2—direct targets of Gemcitabine—as well as DTYMK and TK1 in thymidine metabolism and NT5E in purine degradation pathways, demonstrating a role for specific target pathways of Gemcitabine in explaining the response to this agent. The favorable gene signature included the following metabolic genes: the PLA2G2D, PLA2G4A, PLA2G4C, and PLD2 phospholipases, LPGAT1, PNPLA6, AGPAT1, and AGPAT4. This observation further supports previous cancer profiling studies that have established important structural and signaling roles for phospholipids in the pathogenesis and malignancy of cancer cells [25].
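The pathway enrichment p values quoted throughout (computed with Enrichr in the paper) come down to a one-sided 2x2 Fisher's exact test per gene set; a minimal sketch with hypothetical gene lists is given below.

```python
from scipy.stats import fisher_exact

def enrichment_p_value(signature_genes, pathway_genes, background_genes):
    # Over-representation of a pathway within a signature, against a background universe.
    background = set(background_genes)
    sig = set(signature_genes) & background
    path = set(pathway_genes) & background
    in_both = len(sig & path)
    sig_only = len(sig - path)
    path_only = len(path - sig)
    neither = len(background) - in_both - sig_only - path_only
    _, p = fisher_exact([[in_both, sig_only], [path_only, neither]], alternative="greater")
    return p

# Example call (all three gene lists are hypothetical placeholders):
# enrichment_p_value(unfavorable_genes, kegg_pathways["One carbon pool by folate"], all_tested_genes)
```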
We next performed k-means clustering on the matrix of favorability scores across these 665 genes and identified clear subgroups of patients. Comparison of the subgroup enriched with unfavorable gene expression with that of the favorable subgroup showed a significant difference in PFS (Cox p = 1.8e-04, HR = 3.5; Fig. 3d). When limited to the 39 metabolic genes among the 665, we still observed a significant separation of response subgroups (Cox p = 1.3e-04). Notably, when considered individually, RRM1 and RRM2 each had far less distinctive power (Cox p = 6e-03 for RRM1 and p = 5e-03 for RRM2; Additional file 1: Figure S1A, B) than the combined gene sets, further confirming the advantage of considering pathways rather than individual genes. Together, these results show the relevance of metabolic states of tumors in predicting drug response and also confirm the generalizability of this approach in identifying clinically distinct subgroups of cancer patients using gene expression signatures. Finally, our signature of 665 genes was used in a cross validation test from an independent study on 27 pancreatic cancer patients (see "Methods" section) [29]. In this cohort as well, while RRM1 and RRM2 expression were not capable of subdividing patients with respect to survival on Gemcitabine (Additional file 1: Figure S2A, B), our gene signature identified two survival subgroups significantly different in response (Likelihood ratio test = 4.57 on 1 df, p = 0.0326; Additional file 1: Figure S2C; see the "Methods" section). Analysis of gene expression signatures of response to antimetabolites in cell lines confirms metabolic specificity Due to limitations in the availability of sufficiently annotated human data with gene expression and follow-up information, we next turned to cancer cell line collections to further test the applicability of our method. We used the catalog of somatic mutations in cancer (COSMIC) cell line set as the largest collection of annotated cancer cell lines and obtained microarray gene expression data as well as drug sensitivity information in the form of the concentration giving 50% of maximal inhibition of cell proliferation (IC-50) for the same agents we had previously tested in human samples (i.e., 5-FU and Gemcitabine). In the case of cell lines, we considered a gene favorable if its high expression co-occurred with higher sensitivity to drug treatment (lower IC-50) and unfavorable if its high expression co-occurred with lower sensitivity (higher IC-50) (see the "Methods" section). A set of 44 cell lines of colorectal origin was considered. For the gene selection step, we calculated the correlation between the expression of every gene in the genome and the IC-50 value for 5-FU and selected genes with a Kendall's tau value of 0.2 or larger and a corresponding p value of 0.01 or smaller. A total of 364 genes passed this filter (Fig. 4a). Subsequently, the discretization and favorability scoring approach as described in the previous section was applied to this matrix and the clustering heatmap was visualized (Fig. 4b). Distinct subsets were immediately obvious, with favorable genes enriched in protein processing (p = 4e-05), arginine and proline metabolism (p = 7e-03), and glutathione metabolism (p = 8e-03), while the unfavorable genes were not significantly enriched in any of the KEGG pathways.
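A sketch of the cell-line gene selection step described above, assuming a hypothetical expression matrix (genes by cell lines) and a matching vector of log IC-50 values; the tau and p-value cut-offs follow the Methods, and reading the correlation threshold as an absolute value is an assumption.

```python
from scipy.stats import kendalltau

def ic50_correlated_genes(expr, log_ic50, min_tau=0.2, max_p=0.01):
    # Kendall correlation of each gene's expression with log IC-50 across cell lines.
    # Positive tau: higher expression goes with resistance; negative tau: with sensitivity.
    resistance_markers, sensitivity_markers = [], []
    for gene, values in expr.iterrows():  # expr is assumed to be a pandas DataFrame
        tau, p = kendalltau(values, log_ic50)
        if abs(tau) >= min_tau and p <= max_p:
            (resistance_markers if tau > 0 else sensitivity_markers).append(gene)
    return sensitivity_markers, resistance_markers
```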
Notably, Dihydropyrimidine dehydrogenase (DPYD) was the only metabolic gene identified in the unfavorable set, consistent with its biological function [23] and previous reports of its predictive power in 5-FU treated rectal cancers [30]. Next, we compared response to 5-FU between the two subgroups of cell lines identified by k-means clustering of the favorability matrix. The subgroup of cells enriched with the unfavorable gene expression signature had a significantly higher IC-50 for 5-FU (higher resistance) than the subgroup enriched with the favorable signature (Wilcoxon test p = 1.96e-11; Fig. 4c). Together, these results confirm the generalizability of this method for the identification of novel subgroups with distinct response to 5-FU and also find a specific metabolic target (DPYD) as a marker of cell line sensitivity. We next considered all COSMIC cell lines derived from pancreatic origins regarding their sensitivity to Gemcitabine. This set included only 17 cell lines, limiting the statistical power of this analysis. Only 201 genes passed our initial filtering (Additional file 1: Figure S3A). A visualization of the favorability heatmap illustrated two distinct clusters of genes, one with mostly favorable expression scores and a second with heterogeneous scores across the cell lines (Additional file 1: Figure S3B). Pathway analysis of the favorable set identified the chemical carcinogenesis (p = 7e-03), glutathione metabolism (p = 2e-02), and drug metabolism (p = 4e-02) KEGG pathways as significantly enriched, while the unfavorable set was enriched in adherens junctions (p = 5e-03), bacterial invasion (p = 6e-03), and glycophospholipid synthesis (p = 7e-03). Finally, comparison of sensitivity to Gemcitabine between the two cell line subgroups with distinct signatures revealed a significant difference in IC-50 (Wilcoxon p value = 8e-04; Additional file 1: Figure S3C), showing the power of this approach even when applied to very small data sets. Overall, our analyses of response to 5-FU and Gemcitabine in cell lines also confirmed the relevance of metabolic determinants of response; however, we did not observe a perfect correspondence between the markers identified in human studies and those identified in cell lines. This result is important given that the majority of experiments aimed at drug response are typically performed in cell line settings. Our results suggest that cell line IC-50 values do not perfectly mimic cancer outcome in response to chemotherapies in patients. This is perhaps partly due to culture conditions and other limitations of using cell lines as models for cancer and partly explained by the fact that, unlike in controlled experimental settings, the majority of patients underwent combination chemotherapies that could partially confound statistical analyses. Signatures of response to antimetabolite agents exhibit specificity and variability So far, our results have shown a considerable contribution from the metabolic gene expression network in distinguishing drug response subsets within human tumors as well as cancer cell lines. Careful consideration of two nucleotide metabolism inhibitors—5-FU in colon and Gemcitabine in pancreatic cancers—revealed subtle differences in gene expression signatures associated with favorable and unfavorable response in each case, suggesting antimetabolite agents exert their function through different cellular pathways in these tissues and may therefore be associated with different clinical markers.
Our approach utilized gene expression levels of metabolic enzymes as surrogates for metabolic fluxes or enzyme activities in tumors. Next, we attempted to complement our results by taking advantage of direct metabolite measurements across a panel of 60 cancer cell lines (NCI-60). We calculated correlation between the metabolic activities in the form of consumption or release rates (CORE) as previously reported [31], and IC-50 values of 17 antimetabolite compounds (see the "Methods" section; Fig. 5a). Interestingly, the release rate of phosphocholine showed a strong negative correlation with sensitivity to six of the antimetabolite agents tested (Fig. 5a). This result suggests that cells that have a higher rate of phosphocholine production are less sensitive to drug treatments, consistent with our gene expression results showing the enrichment of phospholipid metabolic genes in response signatures. Previous studies have shown that an increase in phosphatidylcholine affects cancer cell membrane dynamics and correlates with higher tumor malignancy and poorer overall survival [25]. Our results agree with previous reports suggesting high activity of enzymes that degrade phosphatidylcholine renders cells more sensitive to drug treatments, potentially contributing to a more favorable outcome for chemotherapy [25]. An example of a specific interaction that was detected at the level of metabolite consumption and release was the case of Fludarabine—a purine analog—that was significantly associated with CORE of 2-deoxycytidine (Fig. 5a). Together, these results identify relationships between directly measured metabolic signatures of cancer cells and their sensitivity to antimetabolite chemotherapies, and also demonstrate variability among the 17 antimetabolites tested regarding their interaction with cellular metabolism. Analysis of additional determinants of sensitivity to antimetabolite agents demonstrates variability among these agents. a The significance of association between metabolic profiles (consumption and release rates (CORE)) and sensitivity to drugs (− log (IC-50)) was assessed using Spearman correlations (SC) across the NCI-60 cell line panel. The y-axis shows negative log-10 of the corresponding correlation p values for only the significant associations found (q value < 0.05). b Hierarchical clustering of the Pearson similarity matrix between the IC-50 values of 17 antimetabolite agents across the NCI-60 panel. The diagonal shows correlation of each drug with itself (= 1). The yellow boxes show three distinct clusters of drugs. c Spearman correlation coefficient (SCC) between proliferation rate (kp) and sensitivity to each drug (− log (IC-50)) is shown. Solid bars show significant correlations (FDR-corrected q value < 0.05). d Spearman correlation coefficient (SCC) between cell volume (V) and sensitivity to each drug (− log (IC-50)) is shown. Solid bars show significant correlations (FDR-corrected q value < 0.05). e Spearman correlation coefficient (SCC) between growth rate (kg) and sensitivity to each drug (− log (IC-50)) is shown. Solid bars show significant correlations (FDR-corrected q-value < 0.05) The gene expression results suggest that despite common cytotoxic effects of antimetabolite agents, they might have distinct biological markers in cells that are specific to their functions. Furthermore, the analysis of metabolic CORE profiles in cell lines suggested that markers of sensitivity to antimetabolite agents might be more variable than previously appreciated. 
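A sketch of the CORE-versus-sensitivity screen behind these observations, assuming hypothetical pandas data frames of CORE values (metabolites by cell lines) and -log IC-50 values (drugs by cell lines), with Benjamini-Hochberg correction from statsmodels (a tooling assumption; the paper only states that FDR-corrected q values were used).

```python
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def core_vs_sensitivity(core, neg_log_ic50, q_cut=0.05):
    # Spearman-correlate every (metabolite, drug) pair across the shared cell lines
    # and keep the pairs passing an FDR threshold.
    lines = core.columns.intersection(neg_log_ic50.columns)
    records = []
    for metabolite in core.index:
        for drug in neg_log_ic50.index:
            rho, p = spearmanr(core.loc[metabolite, lines], neg_log_ic50.loc[drug, lines])
            records.append((metabolite, drug, rho, p))
    table = pd.DataFrame(records, columns=["metabolite", "drug", "rho", "p"])
    table["q"] = multipletests(table["p"], method="fdr_bh")[1]
    return table[table["q"] < q_cut].sort_values("q")
```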
This motivated us to further assess the specificity of determinants of response across a large set of antimetabolite agents. We considered a set of 17 antimetabolite chemotherapeutic compounds (see the "Methods" section). These agents target enzymes involved in a number of metabolic pathways including de novo nucleotide metabolism, amino acid metabolism, and glutathione metabolism. To assess the extent of correlation in the sensitivities of cell lines to these compounds, we computed a similarity matrix of pairwise Pearson correlations between the IC-50 values of the antimetabolites. Three distinct clusters were identified by hierarchical clustering: a cluster including Thiopurine and Thioguanine, a cluster for the anti-folate Methotrexate (MTX) and the pyrimidine analogs (5-FU and 5-FUDR), and a cluster for other purine analogs (Fig. 5b). The antimetabolite compounds in the second cluster shared TYMS as a target enzyme. This analysis suggests that, in general, compounds with common mechanisms of action tend to have similar sensitivity profiles across cell lines, suggesting some degree of specificity in response to antimetabolites. A common notion is that the cytotoxicity of antimetabolite chemotherapies occurs in all rapidly dividing cells and thus lacks specificity. It has also been proposed that cell size, cell proliferation, and cellular metabolism are invariably coupled [21]. Given that data on proliferation rate, cell size, and metabolic profiles are readily available for the NCI-60 cell lines, we sought to re-investigate these relationships in the context of association with cell line sensitivities to antimetabolite agents. Spearman rank correlations between IC-50 and proliferation rate were computed and revealed significant positive correlations (q values < 0.05 in all cases except Capecitabine and Fludarabine phosphate; Fig. 5c). When the cell volumes were correlated with responses to antimetabolites, all compounds except for Capecitabine showed a negative correlation (four compounds had q value < 0.05) (Fig. 5d). Together, these results confirm that cytotoxicity, defined as the concentration of drug needed to achieve toxic dosages, is lower with smaller cells that also tend to divide more rapidly due to their size [21]. The significant negative correlation between proliferation rate and cell volume suggested that, to obtain an overall growth rate corresponding to the rate of synthesis of macromolecules, the proliferation rate should be corrected for cell volume (see the "Methods" section). We next correlated dose responses with the volume-corrected proliferation rate, referred to hereinafter as the "growth rate" (Fig. 5e). The strong correlations that were observed between IC-50 values and proliferation rate were absent when considering the growth rates (Fig. 5e). This suggests that although the cytotoxicity of antimetabolite agents appears highly non-specific with selectivity pertaining only to proliferation rate, these effects are completely removed when considering an overall growth rate. Importantly, a recent study independently demonstrated that growth rate inhibition normalizations correct for confounders in measuring cell line sensitivity to cancer drugs [32]. Together, our results provide evidence that, contrary to the common notion, variation in response to antimetabolite agents is not explained solely by differences in the rates of production of macromolecules in cells (i.e., growth rate), but is also explained by specific factors related to the functions of these agents in cells.
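The volume correction underlying this conclusion is small enough to state directly: k_g = V_0 * k_p / ln 2, after which the same Spearman correlations can be recomputed against k_p, V_0, and k_g. A short sketch with hypothetical per-cell-line vectors:

```python
import numpy as np
from scipy.stats import spearmanr

def growth_rate(volume, proliferation_rate):
    # Volume-corrected growth rate k_g = V_0 * k_p / ln(2), as derived in the Methods.
    return volume * proliferation_rate / np.log(2)

def sensitivity_correlates(neg_log_ic50, volume, proliferation_rate):
    # Spearman correlation (rho, p) of drug sensitivity with k_p, V_0, and k_g.
    k_g = growth_rate(volume, proliferation_rate)
    return {
        "proliferation_rate": spearmanr(neg_log_ic50, proliferation_rate),
        "cell_volume": spearmanr(neg_log_ic50, volume),
        "growth_rate": spearmanr(neg_log_ic50, k_g),
    }
```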
The specificity of antimetabolite chemotherapeutic agents has remained unclear, and previous reports have been controversial regarding the prognostic value of the expression levels of target enzymes for most of these agents. Given that the metabolic network is composed of complex interactions between multiple enzymes and pathways, we hypothesized that perhaps by defining gene signatures instead of individual enzyme markers, we would gain power in distinguishing subgroups of tumors with differential response to therapy. Here, we introduced an unbiased approach for the assessment of the combined prognostic power of the expression of multiple genes and used this platform to define favorable and unfavorable signatures. Notably, we showed that these signatures allow for distinguishing novel "poor prognosis" (high progression rate) from "good prognosis" (low progression rate) subgroups far more robustly than individual target genes. Importantly, since the gene selection steps control for expression differences related to other important clinical and genetic attributes of response, we are assured that the gene signature analysis captures information about response subgroups beyond the already established markers. In both studied cases of 5-FU in colon cancer and Gemcitabine in pancreatic cancer, we found that expression of metabolic pathways related to direct targets of the drugs is enriched in the unfavorable gene set. This confirmed that tumors with higher activity of target pathways require higher doses of drug to elicit the inhibitory response and are therefore more resistant to treatment. However, our results show that the metabolic state of cells is not fully reflected in the expression levels of individual target enzymes but is rather captured more robustly by the collection of functionally and chemically linked enzymes in pathways. Although we were only able to illustrate the applicability of our method in two independent cohorts of human tumors due to data limitations, the results suggest generalizability of this method to other antimetabolite agents as well. Gene signatures associated with favorable and unfavorable response to 5-FU and Gemcitabine exhibited functional similarities overall, but distinct markers for each drug were also discovered. In both cases of 5-FU and Gemcitabine, high expression of the target metabolic pathways (i.e., nucleotide metabolism) was associated with unfavorable outcome, while high expression of lipid metabolizing pathways was associated with favorable outcome. These results point to common general mechanisms of cellular response to these drugs. However, a deeper look into specific genes and pathways within the signatures for 5-FU and Gemcitabine identified some differences. For instance, while "One-carbon metabolism" and "Nucleotide sugar metabolism" were identified as the unfavorable signature for 5-FU, "Pyrimidine metabolism" was discovered in the case of Gemcitabine. Furthermore, TYMS was among the unfavorable genes for 5-FU, while RRM1 and RRM2 were among the unfavorable genes for Gemcitabine. Together, these results suggest that despite similarities in overall mechanisms of action, antimetabolite agents have specific biological markers that have not been well characterized and appreciated in the past. Our complementary analyses of cancer cell line sensitivities to the same chemotherapeutic agents also proved useful in identifying distinct subgroups using the gene signature approach.
Other than lipid metabolic genes, the gene sets identified as favorable and unfavorable signatures in cell lines did not completely match those identified from the analysis of response in patients. The main sensitivity predictor in vitro seemed to be "Glutathione metabolism" and "Drug metabolism" that were found in cases of 5-FU and Gemcitabine to be associated with favorable outcome (i.e., higher sensitivity of cells to drug treatment). This observation is consistent with previous reports showing a critical role for glutathione metabolism in detoxification and protection against drugs in vitro [33]. These results illustrated that despite the availability and convenience of using cell lines as models of human tumors for drug response studies, analysis of patient tumors is advantageous in that it provides insights that are not fully reflected in cancer cell lines, potentially due to unwanted effects of culture media. This lack of concordance between in vitro and in vivo gene signatures can be interpreted as either differences in resistance mechanisms or differences in the gene expression correlates of resistance in vivo and in vitro. Together, our analyses of human tumors and cancer cell lines elucidated considerable variability among different antimetabolite agents, as well as specificity in metabolic markers of sensitivity to them. These demonstrate that despite the common notion, different classes of antimetabolite agents vary according to their distinct cellular functions. Our results suggest that potentially important biological markers of response to antimetabolite compounds exist, and a better understanding of these factors will provide useful insights for clinical decision-making. Notably, we showed that gene expression signatures have significant power to capture part of the previously unexplained variation in patients' responses to 5-FU and Gemcitabine in colon and pancreatic cancers, respectively. Future studies using larger cohorts of human tumors with well-annotated patient follow-up information can provide valuable additional insights about antimetabolite response signatures. Importantly, metabolism can not only be targeted with new drugs, but also by repurposing approved metabolic drugs for cancer therapy [34]. In general, drugs that target cellular metabolism are of new clinical interest [35], and future studies similar to this work are needed to shed light on identification of patient subgroups that are likely to benefit from antimetabolite therapies. This study demonstrates through unbiased analyses of multiple independent datasets that the activity of metabolic pathways likely contributes to the therapeutic response to antimetabolite chemotherapeutic agents that target these pathways. Importantly, we show that information captured by the metabolic network has the potential of stratifying patients beyond the ability of common markers currently used in the clinic such as tumor grade and cancer stage. Areas of translational relevance of these findings include novel biomarker design based on the metabolic network, and also identification of patients who are likely to benefit from antimetabolite chemotherapies. Together, results presented in this manuscript are of significant interest to the cancer and metabolism research communities and have important and immediate clinical implications for treatment decision-making. 
5-FU: 5-Fluorouracil; CNVs: Copy number variants; CORE: Consumption and release rates; COSMIC: Catalog of somatic mutations in cancer; IC-50: Inhibitory concentration; MTX: Methotrexate; PFS: Progression-free survival; TCGA: The cancer genome atlas; TYMS: Thymidylate synthetase
Pavlova NN, Thompson CB. The emerging hallmarks of cancer metabolism. Cell Metab. 2016;23:27–47.
DeBerardinis RJ, Chandel NS. Fundamentals of cancer metabolism. Sci Adv. 2016;2:e1600200.
Locasale JW. Serine, glycine and one-carbon units: cancer metabolism in full circle. Nat Rev Cancer. 2013;13:572–83.
Hirschey MD, et al. Dysregulated metabolism contributes to oncogenesis. Semin Cancer Biol. 2015;35(Suppl):S129–50.
Chabner BA, Roberts TG Jr. Timeline: chemotherapy and the war on cancer. Nat Rev Cancer. 2005;5:65–72.
Farber S, Diamond LK. Temporary remissions in acute leukemia in children produced by folic acid antagonist, 4-aminopteroyl-glutamic acid. N Engl J Med. 1948;238:787–93.
Farber S. Some observations on the effect of folic acid antagonists on acute leukemia and other forms of incurable cancer. Blood. 1949;4:160–7.
Cheung-Ong K, Giaever G, Nislow C. DNA-damaging agents in cancer chemotherapy: serendipity and chemical biology. Chem Biol. 2013;20:648–59.
Showalter SL, et al. Evaluating the drug-target relationship between thymidylate synthase expression and tumor response to 5-fluorouracil. Is it time to move forward? Cancer Biol Ther. 2008;7:986–94.
Iorio F, et al. A landscape of pharmacogenomic interactions in cancer. Cell. 2016;166:740–54.
Kandioler D, et al. TP53 mutational status and prediction of benefit from adjuvant 5-fluorouracil in stage III colon cancer patients. EBioMedicine. 2015;2:825–30.
Ser Z, et al. Targeting one carbon metabolism with an antimetabolite disrupts pyrimidine homeostasis and induces nucleotide overflow. Cell Rep. 2016;15:2367–76.
Nilsson R, et al. Metabolic enzyme expression highlights a key role for MTHFD2 and the mitochondrial folate pathway in cancer. Nat Commun. 2014;5:3128.
Mehrmohamadi M, Mentch LK, Clark AG, Locasale JW. Integrative modelling of tumour DNA methylation quantifies the contribution of metabolism. Nat Commun. 2016;7:13666.
Mehrmohamadi M, Liu X, Shestov AA, Locasale JW. Characterization of the usage of the serine metabolic network in human cancer. Cell Rep. 2014;9:1507–19.
Madhukar NS, Warmoes MO, Locasale JW. Organization of enzyme concentration across the metabolic network in cancer cells. PLoS One. 2015;10:e0117131.
Hu J, et al. Heterogeneity of tumor-induced gene expression changes in the human metabolic network. Nat Biotechnol. 2013;31:522–9.
Hsu FH, et al. Reducing confounding and suppression effects in TCGA data: an integrated analysis of chemotherapy response in ovarian cancer. BMC Genomics. 2012;13(Suppl 6):S13.
Chen EY, et al. Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool. BMC Bioinformatics. 2013;14:128.
Dolfi SC, et al. The metabolic demands of cancer cells are coupled to their size and protein synthesis rates. Cancer Metab. 2013;1:20.
Cancer Genome Atlas Network. Comprehensive molecular characterization of human colon and rectal cancer. Nature. 2012;487:330–7.
Shaul YD, et al. Dihydropyrimidine accumulation is required for the epithelial-mesenchymal transition. Cell. 2014;158:1094–109.
Kottakis F, et al. LKB1 loss links serine metabolism to DNA methylation and tumorigenesis. Nature. 2016;539:390–5.
Beloribi-Djefaflia S, Vasseur S, Guillaumond F. Lipid metabolic reprogramming in cancer cells. Oncogene. 2016;5:e189.
Rysman E, et al. De novo lipogenesis protects cancer cells from free radicals and chemotherapeutics by promoting membrane lipid saturation. Cancer Res. 2010;70:8117–26.
Hu YC, et al. Thymidylate synthase expression predicts the response to 5-fluorouracil-based adjuvant therapy in pancreatic cancer. Clin Cancer Res. 2003;9:4165–71.
Wakasa K, et al. Dynamic modulation of thymidylate synthase gene expression and fluorouracil sensitivity in human colorectal cancer cells. PLoS One. 2015;10:e0123076.
Collisson EA, et al. Subtypes of pancreatic ductal adenocarcinoma and their differing responses to therapy. Nat Med. 2011;17:500–3.
Huang MY, et al. DPYD, TYMS, TYMP, TK1, and TK2 genetic expressions as response markers in locally advanced rectal cancer patients treated with fluoropyrimidine-based chemoradiotherapy. Biomed Res Int. 2013;2013:931028.
Jain M, et al. Metabolite profiling identifies a key role for glycine in rapid cancer cell proliferation. Science. 2012;336:1040–4.
Hafner M, Niepel M, Chung M, Sorger PK. Growth rate inhibition metrics correct for confounders in measuring sensitivity to cancer drugs. Nat Methods. 2016;13:521–7.
Traverso N, et al. Role of glutathione in cancer progression and chemoresistance. Oxidative Med Cell Longev. 2013;2013:972913.
Liu X, Romero IL, Litchfield LM, Lengyel E, Locasale JW. Metformin targets central carbon metabolism and reveals mitochondrial requirements in human cancers. Cell Metab. 2016;24:728–39.
Chan AT. Metformin for cancer prevention: a reason for optimism. Lancet Oncol. 2016;17:407–9.
This work was supported by grants R01CA193256 and P30CA014236 to J.W.L. from the National Institutes of Health (NIH). MM was supported by F99 CA212457 from the NCI and a Graduate Fellowship from the Duke University School of Medicine. All data and materials are available as supplementary information, on our laboratory website (jlocasale.duke.edu), or by request.
Duke Cancer Institute, Duke University School of Medicine, Durham, NC, 27710, USA: Mahya Mehrmohamadi & Jason W. Locasale
Duke Molecular Physiology Institute, Duke University School of Medicine, Durham, NC, 27710, USA
Department of Pharmacology and Cancer Biology, Duke University School of Medicine, Durham, NC, 27710, USA
Department of Molecular Biology and Genetics, Field of Genetics, Genomics and Development, Cornell University, Ithaca, NY, 14853, USA: Mahya Mehrmohamadi & Seong Ho Jeong
JWL and MM designed the study. MM performed all human analyses and wrote the manuscript. SJ performed the analyses of drug response in cell lines. JWL supervised the project and revised the manuscript. All authors read and approved the final manuscript. Correspondence to Jason W. Locasale. All analyses were performed on publicly available data. All authors approve of publication of this study. Supplementary Figures. Figure S1. Relationship between target enzyme expression and response to Gemcitabine in TCGA pancreatic cancer. A) Kaplan-Meier plot compares progression free survival in high-RRM1 expression vs. low-RRM1 expression subgroups of TCGA PAAD patients. B) Kaplan-Meier plot compares progression free survival in high-RRM2 expression vs. low-RRM2 expression subgroups of TCGA PAAD patients. Figure S2. Relationship between target enzyme expression and survival in an independent pancreatic cancer cohort. A) Kaplan-Meier plot compares overall survival in high-RRM1 expression vs. low-RRM1 expression subgroups of patients.
B) Kaplan-Meier plot compares overall survival in high-RRM2 expression vs. low-RRM2 expression subgroups of patients. C) Kaplan-Meier plot compares overall survival in subgroups of patients divided based on our gene signature (see Methods). Figure S3. Identifying gene expression signatures of sensitivity to Gemcitabine in pancreatic cancer cell lines. A) Schematic of the step-wise filtering used for gene selection in pancreatic cancer (COSMIC PAAD). B) Hierarchical clustering heatmap of the discretized gene favorability scores. Columns represent genes and rows represent individuals. Favorable scores are shown by the color red (F=1), unfavorable by blue (F= -1), and neutral by yellow (F=0) (see Methods). C) Box-plots comparing the resistance to Gemcitabine (log IC-50 values) between the two cell line subgroups identified in part B (error bars show the range of the data points in each group). (DOCX 225 kb) Mehrmohamadi, M., Jeong, S.H. & Locasale, J.W. Molecular features that predict the response to antimetabolite chemotherapies. Cancer Metab 5, 8 (2017) doi:10.1186/s40170-017-0170-3 Antimetabolite chemotherapies Molecular determinants of response to chemotherapy
CommonCrawl
Special functions
Leon Ehrenpreis, Department of Mathematics, Temple University, Philadelphia, PA 19122, United States
Inverse Problems & Imaging, November 2010, 4(4): 639-647. doi: 10.3934/ipi.2010.4.639
Received March 2009; Published September 2010

Special functions are functions that show up in several contexts. The most classical special functions are the monomials and the exponential functions. On the next level we find the hypergeometric functions, which appear in such varied contexts as partial differential equations, number theory, and group representations. The standard hypergeometric functions have power series which satisfy 2-term recursion relations. This leads to the usual expressions for the power series coefficients as quotients of rational and factorial-like expressions. We have developed a "hierarchy" of special functions which satisfy higher-order recursion relations. They generalize the classical Mathieu and Lamé functions. These classical functions satisfy 3-term recursion relations, and our theory produces "Lamé-like" functions which satisfy recursions of any order.

Keywords: Special functions.
Mathematics Subject Classification: 22Exx, 42Cxx, 42x.
Citation: Leon Ehrenpreis. Special functions. Inverse Problems & Imaging, 2010, 4(4): 639-647. doi: 10.3934/ipi.2010.4.639
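The "2-term recursion" mentioned in the abstract can be made concrete for the classical Gauss hypergeometric series ${}_2F_1(a,b;c;z)=\sum_n c_n z^n$, whose coefficients satisfy $c_0 = 1$ and $c_{n+1}/c_n = (a+n)(b+n)/\big((c+n)(n+1)\big)$. The sketch below sums the series directly from this recursion and checks it against the elementary special case ${}_2F_1(1,1;2;z) = -\ln(1-z)/z$. It only illustrates the classical two-term level of the hierarchy; the Mathieu- and Lamé-type functions the paper generalizes obey three-term (or longer) recursions and are not implemented here.

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    """Sum the Gauss hypergeometric series from its two-term coefficient recursion.

    Coefficients obey c_{n+1} = c_n * (a+n)(b+n) / ((c+n)(n+1)) with c_0 = 1,
    which is what makes 2F1 "hypergeometric" in the classical sense.
    The series converges for |z| < 1.
    """
    coeff, total = 1.0, 1.0
    for n in range(terms):
        coeff *= (a + n) * (b + n) / ((c + n) * (n + 1))
        total += coeff * z ** (n + 1)
    return total

# Check against the elementary special case 2F1(1, 1; 2; z) = -ln(1 - z) / z
z = 0.3
print(hyp2f1(1, 1, 2, z))        # value summed from the recursion
print(-math.log(1.0 - z) / z)    # closed form; the two agree to high precision
```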
CommonCrawl
Which causal structures are absent from any "nice" patch of Minkowski space?

Which "causal separation structures" (or "interval structures") cannot be found among the events in "any nice patch ($P$) of Minkowski space"? Here a "causal separation structure" ($s$) should be understood as a function from the $n(n-1)/2$ distinct pairs (formed from $n$ elements/events of some set $E$) to the set of three possible assignments of "causal separation" (namely either "timelike", or "lightlike", or else "spacelike").

Of particular interest: what is the smallest applicable number $n$ for which a corresponding "causal separation structure" can be expressed that cannot be found among the events in patch $P$, i.e. such that $E \nsubseteq P$?

Finally: please indicate under which conditions (on which sort of patches, necessarily different from "any nice patch of Minkowski space") the proposed structure could be found instead; or otherwise, whether it is "impossible" in general.

special-relativity spacetime geometry causality

It seems you don't want us to differentiate between past time-like (null) and future time-like (null). Perhaps you have in mind more general spacetimes, but I'm having a hard time imagining a situation where one could distinguish time-like and space-like separated points but not past time-like and future time-like. If one was allowed to differentiate future and past, then one would have a simple example with three points. Without past and future, the minimal impossible configuration can be made with four points. Since any three points which are all lightlike to each other determine a null ray, one cannot have four points ABCD such that: A, B and C are lightlike separated from each other, B, C and D are all lightlike separated from each other, but A and D are spacelike or timelike separated.

BebopButUnsteady

$\begingroup$ I should add I don't know off the top of my head what the sufficient condition for a set of relations to be allowable is. $\endgroup$ – BebopButUnsteady

$\begingroup$ You gave a clever solution (I voted it up), making me realize that I was negligent in stating my question: demanding "absence in Minkowski space" should (of course) be moderated by requiring "presence under certain more general conditions", such as, perhaps, at least in some manifold of dimension "7+1". Clearly, your solution doesn't match any "reasonable" additional requirement (which I may still add in editing my question), but it's "plainly impossible". (I had one "clearly less impossible solution" already prepared with "n = 15". But now what? ...) $\endgroup$

$\begingroup$ BebopButUnsteady: "It seems you don't want us to differentiate between past time-like (null) and future time-like (null)." Well, I didn't bother (yet) to do so explicitly (because I'd dismiss any corresponding attempt at a solution as "plainly impossible"). I suppose I would have to look at en.wikipedia.org/wiki/Causality_conditions to spell out suitable "general conditions" under which the "solution(s)" I'm trying to ask for should be "readily present". $\endgroup$

$\begingroup$ On second thoughts I regret the phrase "plainly impossible"; sorry. Instead we might ask whether there is any assignment of "past or future direction" to (lightlike, timelike) intervals of the structure you suggested (or any other) such that the resulting "causal structure" is "perfectly nice" in terms of the hierarchy of en.wikipedia.org/wiki/Causality_conditions.
I doubt that your proposal is "nice" in this sense; I wish I could make the phrase "nice patch" of my question more precise in this sense; and I wonder what to call the mere "structure" without directional assignments. $\endgroup$

One applicable "causal structure" involving 15 events can be illustrated as a subset of all events attributable to "five participants (conveniently called ${\mathcal A}, {\mathcal F}, {\mathcal J}, {\mathcal N}$ and ${\mathcal U}$), each finding coincident pings from the four others". Of the 15 events to be considered, each of the five participants shares in three events: ${\mathcal A}$ takes part in events A, B and C, which are (obviously supposed to be) pairwise timelike to each other, and with a consistent (causal, "nice") assignment of past or future direction; similarly ${\mathcal F}$ takes part in events F, G and H, ${\mathcal J}$ takes part in events J, K and L, ${\mathcal N}$ takes part in events N, P and Q, and ${\mathcal U}$ takes part in events U, V and W; further:
AG, GC, AK, KC, AP, PC, AV, and VC are lightlike,
FB, BH, FK, KH, FP, PH, FV, and VH are lightlike,
JB, BL, JG, GL, JP, PL, JV, and VL are lightlike,
NB, BQ, NG, GQ, NK, KQ, NV, and VQ are lightlike, and
UB, BW, UG, GW, UK, KW, UP, and PW are lightlike;
the separations of all ten pairs among the events A, F, J, N, U are spacelike,
the separations of all ten pairs among the events B, G, K, P, V are spacelike,
the separations of all ten pairs among the events C, H, L, Q, W are spacelike, and finally
the separations of all twenty remaining event pairs are timelike;
all together with consistent/causal "direction assignments".

Here a (sketch of a) proof that this structure cannot be found in a patch of Minkowski space (including its "nice/obvious direction assignments"): (1) In a suitable "projection into 3D-flat (Euclidean) space", but without loss of generality, events G, K, P, V are (supposed to be) situated on the surface of a sphere, at whose center are (in coincidence) events A and C, and with event B inside this sphere (but not necessarily coinciding with A and C). Further (2) events B, G, K, P are situated on an ellipsoid with focal points U and W; and moreover, at equal distance from U (and also from W). Consequently B, G, K, P are situated on a circle on a plane perpendicular to the ellipsoid axis UW, while event V is inside this ellipsoid. (Similarly for events B, G, K, V with respect to the ellipsoid axis NQ, and so on.)

However: if G, K, P are situated on a sphere, and B, G, K, P are situated on a circle, then B is situated on that sphere as well; in contradiction to (1), q.e.d. In turn, as far as this argument would fail in a (or even in any) non-Minkowski case, without the possibility of "a suitable projection", the described "structure" is perhaps not ruled out, but may instead be found/present.

p.s. Since the sketch of the proof (as I noticed only after having it written down and submitted) doesn't even explicitly mention the six events F, H, J, L and N, Q at all, the "structure" between the nine remaining events (explicitly mentioned in the sketch of the proof) appears sufficient to carry this particular proof that this "structure" could not be found in Minkowski space; therefore apparently $n \le 9$.

$\begingroup$ Could you make clearer what advantages this argument has over Bebop's? $\endgroup$ – Emilio Pisanty

$\begingroup$ @Emilio Pisanty: "what advantages this argument has over Bebop's?"
-- IIUC, "Bebop's structure" (above) would be typical of a gravitational lens (en.wikipedia.org/wiki/Gravitational_lens). Does such a "structure" have any ranking in the hierarchy of en.wikipedia.org/wiki/Causality_conditions? If so, and if the corresponding ranking of "this argument"/[my structure] were lower, then I'd consider this an advantage of "this argument"/[my structure] in the intended sense of my question: which causal structures are absent from any "nice" patch of Minkowski space but not from any other "nice patch"? $\endgroup$
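To make the notion of a "causal separation structure" used in the question concrete: given coordinates for $n$ events in flat Minkowski space, the separation of each pair follows from the sign of the interval $s^2 = -\,\Delta t^2 + |\Delta\vec{x}|^2$ (here with signature $(-,+,+,+)$ and $c = 1$, one possible convention). The sketch below classifies all $n(n-1)/2$ pairs as timelike, lightlike, or spacelike for a given set of coordinates; it only evaluates a structure that is already realized by coordinates, and does not decide whether an abstract assignment such as the nine- or fifteen-event structure above can be embedded at all, which is exactly what the question asks.

```python
from itertools import combinations

import numpy as np

def separation(e1, e2, eps=1e-9):
    """Classify the interval between two events (t, x, y, z), signature (-,+,+,+), c = 1.

    A small tolerance eps is needed because exact lightlike separation is a
    measure-zero condition in floating-point arithmetic.
    """
    d = np.subtract(e2, e1)
    s2 = -d[0] ** 2 + np.dot(d[1:], d[1:])
    if s2 < -eps:
        return "timelike"
    if s2 > eps:
        return "spacelike"
    return "lightlike"

def causal_structure(events):
    """Return the map {(i, j): separation} over all n(n-1)/2 pairs of labelled events."""
    return {(i, j): separation(events[i], events[j])
            for i, j in combinations(sorted(events), 2)}

# Three events: B lies on A's light cone, C is simultaneous with A at spatial distance 2,
# so C is spacelike to both A and B.
events = {
    "A": (0.0, 0.0, 0.0, 0.0),
    "B": (1.0, 1.0, 0.0, 0.0),   # lightlike to A
    "C": (0.0, 0.0, 2.0, 0.0),   # spacelike to A and to B
}
for pair, kind in causal_structure(events).items():
    print(pair, kind)
```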
CommonCrawl
Probing the evolution of the EAS muon content in the atmosphere with KASCADE-Grande (1801.05513) KASCADE-Grande Collaboration: W.D. Apel, J.C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, D. Fuhrmann, A. Gherghel-Lascu, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, P. Łuczak, H.J. Mathes, H.J. Mayer, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski Jan. 17, 2018 astro-ph.HE The evolution of the muon content of very high energy air showers (EAS) in the atmosphere is investigated with data of the KASCADE-Grande observatory. For this purpose, the muon attenuation length in the atmosphere is obtained as $\Lambda_\mu = 1256 \, \pm 85 \, ^{+229}_{-232}(\mbox{syst})\, \mbox{g/cm}^2$ from the experimental data for shower energies between $10^{16.3}$ and $10^{17.0} \, \mbox{eV}$. Comparison of this quantity with predictions of the high-energy hadronic interaction models QGSJET-II-02, SIBYLL 2.1, QGSJET-II-04 and EPOS-LHC reveals that the attenuation of the muon content of measured EAS in the atmosphere is lower than predicted. Deviations are, however, less significant with the post-LHC models. The presence of such deviations seems to be related to a difference between the simulated and the measured zenith angle evolutions of the lateral muon density distributions of EAS, which also causes a discrepancy between the measured absorption lengths of the density of shower muons and the predicted ones at large distances from the EAS core. The studied deficiencies show that all four considered hadronic interaction models fail to describe consistently the zenith angle evolution of the muon content of EAS in the aforesaid energy regime. KASCADE-Grande Limits on the Isotropic Diffuse Gamma-Ray Flux between 100 TeV and 1 EeV (1710.02889) KASCADE-Grande Collaboration: W. D. Apel, J. C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I. M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, Z. Feng, D. Fuhrmann, A. Gherghel-Lascu, H. J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J. R. Hörandel, T. Huege, K.-H. Kampert, D. Kang, H. O. Klages, K. Link, P. Łuczak, H. J. Mathes, H. J. Mayer, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F. G. Schröder, O. Sima, G. Toma, G. C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski Oct. 8, 2017 astro-ph.HE KASCADE and KASCADE-Grande were multi-detector installations to measure individual air showers of cosmic rays at ultra-high energy. Based on data sets measured by KASCADE and KASCADE-Grande, 90% C.L. upper limits to the flux of gamma-rays in the primary cosmic ray flux are determined in an energy range of ${10}^{14} - {10}^{18}$ eV. The analysis is performed by selecting air showers with a low muon content as expected for gamma-ray-induced showers compared to air showers induced by energetic nuclei.
The best upper limit of the fraction of gamma-rays to the total cosmic ray flux is obtained at $3.7 \times {10}^{15}$ eV with $1.1 \times {10}^{-5}$. Translated to an absolute gamma-ray flux this sets constraints on some fundamental astrophysical models, such as the distance of sources for at least one of the IceCube neutrino excess models. A comparison of the cosmic-ray energy scales of Tunka-133 and KASCADE-Grande via their radio extensions Tunka-Rex and LOPES (1610.08343) W.D. Apel, J.C. Arteaga-Velázquez, L. Bähren, P.A. Bezyazeekov, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, N.M. Budnev, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, O. Fedorov, B. Fuchs, H. Gemmeke, O. A. Gress, C. Grupen, A. Haungs, D. Heck, R. Hiller, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, Y. Kazarina, M. Kleifges, E.E. Korosteleva, D. Kostunin, O. Krömer, J. Kuijpers, L.A. Kuzmichev, K. Link, N. Lubsandorzhiev, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, R.R. Mirgazov, R. Monkhoev, C. Morello, J. Oehlschläger, E.A. Osipova, A. Pakhorukov, N. Palmieri, L. Pankov, T. Pierog, V.V. Prosin, J. Rautenberg, H. Rebel, M. Roth, G.I. Rubtsov, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weind, R. Wischnewski, J. Wochele, J. Zabierowski, A. Zagorodnikov, J.A. Zensus Oct. 27, 2016 astro-ph.IM, astro-ph.HE The radio technique is a promising method for detection of cosmic-ray air showers of energies around $100\,$PeV and higher with an array of radio antennas. Since the amplitude of the radio signal can be measured absolutely and increases with the shower energy, radio measurements can be used to determine the air-shower energy on an absolute scale. We show that calibrated measurements of radio detectors operated in coincidence with host experiments measuring air showers based on other techniques can be used for comparing the energy scales of these host experiments. Using two approaches, first via direct amplitude measurements, and second via comparison of measurements with air shower simulations, we compare the energy scales of the air-shower experiments Tunka-133 and KASCADE-Grande, using their radio extensions, Tunka-Rex and LOPES, respectively. Due to the consistent amplitude calibration for Tunka-Rex and LOPES achieved by using the same reference source, this comparison reaches an accuracy of approximately $10\,\%$ - limited by some shortcomings of LOPES, which was a prototype experiment for the digital radio technique for air showers. In particular we show that the energy scales of cosmic-ray measurements by the independently calibrated experiments KASCADE-Grande and Tunka-133 are consistent with each other on this level. Improved absolute calibration of LOPES measurements and its impact on the comparison with REAS 3.11 and CoREAS simulations (1507.07389) W.D. Apel, J.C. Arteaga-Velazquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, R. Hiller, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, S. Nehls, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. 
Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Dec. 18, 2015 astro-ph.IM, astro-ph.HE LOPES was a digital antenna array detecting the radio emission of cosmic-ray air showers. The calibration of the absolute amplitude scale of the measurements was done using an external, commercial reference source, which emits a frequency comb with defined amplitudes. Recently, we obtained improved reference values by the manufacturer of the reference source, which significantly changed the absolute calibration of LOPES. We reanalyzed previously published LOPES measurements, studying the impact of the changed calibration. The main effect is an overall decrease of the LOPES amplitude scale by a factor of $2.6 \pm 0.2$, affecting all previously published values for measurements of the electric-field strength. This results in a major change in the conclusion of the paper 'Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations' published in Astroparticle Physics 50-52 (2013) 76-91: With the revised calibration, LOPES measurements now are compatible with CoREAS simulations, but in tension with REAS 3.11 simulations. Since CoREAS is the latest version of the simulation code incorporating the current state of knowledge on the radio emission of air showers, this new result indicates that the absolute amplitude prediction of current simulations now is in agreement with experimental data. Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations (1309.5920) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Dec. 18, 2015 astro-ph.HE Cosmic ray air showers emit radio pulses at MHz frequencies, which can be measured with radio antenna arrays - like LOPES at the Karlsruhe Institute of Technology in Germany. To improve the understanding of the radio emission, we test theoretical descriptions with measured data. The observables used for these tests are the absolute amplitude of the radio signal, and the shape of the radio lateral distribution. We compare lateral distributions of more than 500 LOPES events with two recent and public Monte Carlo simulation codes, REAS 3.11 and CoREAS (v 1.0). The absolute radio amplitudes predicted by REAS 3.11 are in good agreement with the LOPES measurements. The amplitudes predicted by CoREAS are lower by a factor of two, and marginally compatible with the LOPES measurements within the systematic scale uncertainties. In contrast to any previous versions of REAS, REAS 3.11 and CoREAS now reproduce the shape of the measured lateral distributions correctly. This reflects a remarkable progress compared to the situation a few years ago, and it seems that the main processes for the radio emission of air showers are now understood: The emission is mainly due to the geomagnetic deflection of the electrons and positrons in the shower. 
Less important but not negligible is the Askaryan effect (net charge variation). Moreover, we confirm that the refractive index of the air plays an important role, since it changes the coherence conditions for the emission: Only the new simulations including the refractive index can reproduce rising lateral distributions which we observe in a few LOPES events. Finally, we show that the lateral distribution is sensitive to the energy and the mass of the primary cosmic ray particles. Revised absolute amplitude calibration of the LOPES experiment (1508.03471) K. Link, T. Huege, W.D. Apel, J.C. Arteaga-Velázquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, R. Hiller, J.R. Hörandel, A. Horneffer, D. Huber, P.G. Isar, K-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Aug. 14, 2015 hep-ex, physics.ins-det, astro-ph.IM, astro-ph.HE One of the main aims of the LOPES experiment was the evaluation of the absolute amplitude of the radio signal of air showers. This is of special interest since the radio technique offers the possibility for an independent and highly precise determination of the energy scale of cosmic rays on the basis of signal predictions from Monte Carlo simulations. For the calibration of the amplitude measured by LOPES we used an external source. Previous comparisons of LOPES measurements and simulations of the radio signal amplitude predicted by CoREAS revealed a discrepancy of the order of a factor of two. A re-measurement of the reference calibration source, now performed for the free field, was recently performed by the manufacturer. The updated calibration values lead to a lowering of the reconstructed electric field measured by LOPES by a factor of $2.6 \pm 0.2$ and therefore to a significantly better agreement with CoREAS simulations. We discuss the updated calibration and its impact on the LOPES analysis results. Investigation of the radio wavefront of air showers with LOPES measurements and CoREAS simulations (ARENA 2014) (1507.07753) F.G. Schröder, W.D. Apel, J.C. Arteaga-Velazquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus July 28, 2015 astro-ph.IM, astro-ph.HE We investigated the radio wavefront of cosmic-ray air showers with LOPES measurements and CoREAS simulations: the wavefront is of approximately hyperbolic shape and its steepness is sensitive to the shower maximum. For this study we used 316 events with an energy above 0.1 EeV and zenith angles below $45^\circ$ measured by the LOPES experiment. 
LOPES was a digital radio interferometer consisting of up to 30 antennas on an area of approximately 200 m x 200 m at an altitude of 110 m above sea level. Triggered by KASCADE-Grande, LOPES measured the radio emission between 43 and 74 MHz, and our analysis might strictly hold only for such conditions. Moreover, we used CoREAS simulations made for each event, which show much clearer results than the measurements suffering from high background. A detailed description of our result is available in our recent paper published in JCAP09(2014)025. The present proceeding contains a summary and focuses on some additional aspects, e.g., the asymmetry of the wavefront: According to the CoREAS simulations the wavefront is slightly asymmetric, but on a much weaker level than the lateral distribution of the radio amplitude. Reconstruction of the energy and depth of maximum of cosmic-ray air-showers from LOPES radio measurements (1408.2346) W. D. Apel, J. C. Arteaga-Velazquez, L. Bähren, K. Bekk, M. Bertaina, P. L. Biermann, J. Blümer, H. Bozdog, I. M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J. R. Hörandel, A. Horneffer, D. Huber, T. Huege, P. G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H. J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F. G. Schröder, O. Sima, G. Toma, G. C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J. A. Zensus Aug. 11, 2014 hep-ex, astro-ph.IM, astro-ph.HE LOPES is a digital radio interferometer located at Karlsruhe Institute of Technology (KIT), Germany, which measures radio emission from extensive air showers at MHz frequencies in coincidence with KASCADE-Grande. In this article, we explore a method (slope method) which leverages the slope of the measured radio lateral distribution to reconstruct crucial attributes of primary cosmic rays. First, we present an investigation of the method on the basis of pure simulations. Second, we directly apply the slope method to LOPES measurements. Applying the slope method to simulations, we obtain uncertainties on the reconstruction of energy and depth of shower maximum Xmax of 13% and 50 g/cm^2, respectively. Applying it to LOPES measurements, we are able to reconstruct energy and Xmax of individual events with upper limits on the precision of 20-25% for the primary energy and 95 g/cm^2 for Xmax, despite strong human-made noise at the LOPES site. The wavefront of the radio signal emitted by cosmic ray air showers (1404.3283) W.D. Apel, J.C. Arteaga-Velázquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Aug. 
7, 2014 hep-ex, astro-ph.IM, astro-ph.HE Analyzing measurements of the LOPES antenna array together with corresponding CoREAS simulations for more than 300 measured events with energy above $10^{17}\,$eV and zenith angles smaller than $45^\circ$, we find that the radio wavefront of cosmic-ray air showers is of approximately hyperbolic shape. The simulations predict a slightly steeper wavefront towards East than towards West, but this asymmetry is negligible against the measurement uncertainties of LOPES. At axis distances $\gtrsim 50\,$m, the wavefront can be approximated by a simple cone. According to the simulations, the cone angle is clearly correlated with the shower maximum. Thus, we confirm earlier predictions that arrival time measurements can be used to study the longitudinal shower development, but now using a realistic wavefront. Moreover, we show that the hyperbolic wavefront is compatible with our measurement, and we present several experimental indications that the cone angle is indeed sensitive to the shower development. Consequently, the wavefront can be used to statistically study the primary composition of ultra-high energy cosmic rays. At LOPES, the experimentally achieved precision for the shower maximum is limited by measurement uncertainties to approximately $140\,$g/cm$^2$. But the simulations indicate that under better conditions this method might yield an accuracy for the atmospheric depth of the shower maximum, $X_\mathrm{max}$, better than $30\,$g/cm$^2$. This would be competitive with the established air-fluorescence and air-Cherenkov techniques, where the radio technique offers the advantage of a significantly higher duty-cycle. Finally, the hyperbolic wavefront can be used to reconstruct the shower geometry more accurately, which potentially allows a better reconstruction of all other shower parameters, too. Highlights from the Pierre Auger Observatory (1310.4620) Antoine Letessier-Selvon, A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muniz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, T. Antivcic, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, R. Bardenet, J. Baeuml, C. Baus, J.J. Beatty, K.H. Becker, A. Belletoile, J.A. Bellido, S. BenZvi, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blumer, M. Bohacova, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, R.E. Burton, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceicao, F. Contreras, H. Cook, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, G. De La Vega, W.J.M. de Mello Junior, J.R.T. de Mello Neto, I. De Mitri, V. de Souza, K.D. de Vries, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, J.C. Diaz, M.L. Diaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, J.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. 
Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipcic, N. Foerster, B.D. Fox, C.E. Fracchiolla, E.D. Fraenkel, O. Fratu, U. Frohlich, B. Fuchs, R. Gaior, R.F. Gamarra, S. Gambetta, B. Garcia, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, H. Gemmeke, P.L. Ghia, M. Giller, J. Gitto, C. Glaser, H. Glass, F. Gomez Albarracin, M. Gomez Berisso, P.F. Gomez Vitale, P. Goncalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, P. Homola, J.R. Hoerandel, P. Horvath, M. Hrabovsky, D. Huber, T. Huege, A. Insolia, P.G. Isar, S. Jansen, C. Jarne, M. Josebachuili, K. Kadija, O. Kambeitz, K.H. Kampert, P. Karhan, P. Kasper, I. Katkov, B. Kegl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, J. Knapp d, R. Krause, N. Krohm, O. Kroemer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leao, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, I. Lhenry-Yvon, K. Link, R. Lopez, A. Lopez Aguera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, C. Macolino, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Maris, H.R. Marquez Falcon, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martinez Bravo, D. Martraire, J.J. Masias Meza, H.J. Mathes, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Micanovic, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, B. Morales, C. Morello, J.C. Moreno, M. Mostafa, C.A. Moura, M.A. Muller, G. Muller, M. Munchmeyer, R. Mussa, G. Navarra, J.L. Navarro, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.T. Nhung, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, L. Novzka, J. Oehlschlager, A. Olinto, M. Oliveira, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, G. Parente, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pekala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, M. Pontz, A. Porcelli, T. Preda, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, H. Rivera, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Martino, J. Rodriguez Rojo, M.D. Rodriguez-Frias, G. Ros, J. Rosado, T. Rossler, M. Roth, B. Rouille-d'Orfeuil, E. Roulet, A.C. Rovero, C. Ruhle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sanchez, P. Sanchez-Lucas, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovanek, F.G. Schroeder, A. Schulz, J. Schulz, S.J. Sciutto, M. Scuderi, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Smialkowski, R. 
Smida, G.R. Snow, P. Sommers, J. Sorokin, H. Spinka, R. Squartini, Y.N. Srivastava, S. Stanic, J. Stapleton, J. Stasielak, M. Stephan, M. Straub, A. Stutz, F. Suarez, T. Suomijarvi, A.D. Supanitsky, T. Susa, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, A. Tapia, M. Tartare, O. Tacscuau, R. Tcaciuc, N.T. Thao, J. Tiffenberg, C. Timmermans, W. Tkaczyk, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tome, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, D.B. Tridapalli, E. Trovato, M. Tueros, R. Ulrich, M. Unger, J.F. Valdes Galicia, I. Valino, L. Valore, G. van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cardenas, G. Varner, J.R. Vazquez, R.A. Vazquez, D. Veberic, V. Verzi, J. Vicha, M. Videla, L. Villasenor, H. Wahlberg, P. Wahrlich, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, S. Westerhoff, B.J. Whelan, A. Widom, G. Wieczorek, L. Wiencke, B. Wilczynska, H. Wilczynski, M. Will, C. Williams, T. Winchen, B. Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski (the Pierre Auger Collaboration) Oct. 19, 2013 astro-ph.HE The Pierre Auger Observatory is the world's largest cosmic ray observatory. Our current exposure reaches nearly 40,000 km$^2$ str and provides us with an unprecedented quality data set. The performance and stability of the detectors and their enhancements are described. Data analyses have led to a number of major breakthroughs. Among these we discuss the energy spectrum and the searches for large-scale anisotropies. We present analyses of our X$_{max}$ data and show how it can be interpreted in terms of mass composition. We also describe some new analyses that extract mass sensitive parameters from the 100% duty cycle SD data. A coherent interpretation of all these recent results opens new directions. The consequences regarding the cosmic ray composition and the properties of UHECR sources are briefly discussed. Pierre Auger Observatory and Telescope Array: Joint Contributions to the 33rd International Cosmic Ray Conference (ICRC 2013) (1310.0647) The Telescope Array, Pierre Auger Collaborations: T. Abu-Zayyad, M. Allen, R. Anderson, R. Azuma, E. Barcikowski, J. W Belz, D. R. Bergman, S. A. Blake, R. Cady, M. J. Chae, B. G. Cheon, J. Chiba, M. Chikawa, W. R. Cho, T. Fujii, M. Fukushima, K. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. Hibino, K. Honda, D. Ikeda, N. Inoue, T. Ishii, R. Ishimori, H. Ito, D. Ivanov, C. C. H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H. B. Kim, J. H. Kim, J. H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y. J. Kwon, J. Lan, J. P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J. N. Matthews, M. Minamino, K. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. Nakamura, H. Nanpei, T. Nonaka, A. Nozato, S. Ogio, S. Oh, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I. H. Park, M. S. Pshirkov, D. C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, A. L. Sampson, L. M. Scott, P. D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B. K. Shin, T. Shirahama, J. D. Smith, P. Sokolsky, R. W. Springer, B. T. Stokes, S. R. Stratton, T. A. Stroman, M. Takamura, M. Takeda, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S. B. Thomas, G. B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. 
Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, Y. Wada, T. Wong, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel, A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muniz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, T. Antivcic, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, R. Bardenet, J. Baeuml, C. Baus, J.J. Beatty, K.H. Becker, A. Belletoile, J.A. Bellido, S. BenZvi, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blumer, M. Bohacova, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, R.E. Burton, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceicao, F. Contreras, H. Cook, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, G. De La Vega, W.J.M. de Mello Junior, J.R.T. de Mello Neto, I. De Mitri, V. de Souza, K.D. de Vries, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, J.C. Diaz, M.L. Diaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, J.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipcic, N. Foerster, B.D. Fox, C.E. Fracchiolla, E.D. Fraenkel, O. Fratu, U. Frohlich, B. Fuchs, R. Gaior, R.F. Gamarra, S. Gambetta, B. Garcia, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, H. Gemmeke, P.L. Ghia, M. Giller, J. Gitto, C. Glaser, H. Glass, F. Gomez Albarracin, M. Gomez Berisso, P.F. Gomez Vitale, P. Goncalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, P. Homola, J.R. Hoerandel, P. Horvath, M. Hrabovsky, D. Huber, T. Huege, A. Insolia, P.G. Isar, S. Jansen, C. Jarne, M. Josebachuili, K. Kadija, O. Kambeitz, K.H. Kampert, P. Karhan, P. Kasper, I. Katkov, B. Kegl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, J. Knapp, R. Krause, N. Krohm, O. Kroemer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leao, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, R. Lopez, A. Lopez Aguera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, C. Macolino, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Maris, H.R. Marquez Falcon, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martinez Bravo, D. Martraire, J.J. Masias Meza, H.J. Mathes, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. 
Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Micanovic, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, B. Morales, C. Morello, J.C. Moreno, M. Mostafa, C.A. Moura, M.A. Muller, G. Muller, M. Munchmeyer, R. Mussa, G. Navarra, J.L. Navarro, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.T. Nhung, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, L. Novzka, J. Oehlschlager, A. Olinto, M. Oliveira, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, G. Parente, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pekala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, M. Pontz, A. Porcelli, T. Preda, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, H. Rivera, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Martino, J. Rodriguez Rojo, M.D. Rodriguez-Frias, G. Ros, J. Rosado, T. Rossler, M. Roth, B. Rouille-d'Orfeuil, E. Roulet, A.C. Rovero, C. Ruhle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sanchez, P. Sanchez-Lucas, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovanek, F.G. Schroeder, A. Schulz, J. Schulz, S.J. Sciutto, M. Scuderi, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Smialkowski, R. Smida, G.R. Snow, P. Sommers, J. Sorokin, H. Spinka, R. Squartini, Y.N. Srivastava, S. Stanic, J. Stapleton, J. Stasielak, M. Stephan, M. Straub, A. Stutz, F. Suarez, T. Suomijarvi, A.D. Supanitsky, T. Susa, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, A. Tapia, M. Tartare, O. Tacscuau, R. Tcaciuc, N.T. Thao, J. Tiffenberg, C. Timmermans, W. Tkaczyk, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tome, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, D.B. Tridapalli, E. Trovato, M. Tueros, R. Ulrich, M. Unger, J.F. Valdes Galicia, I. Valino, L. Valore, G. van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cardenas, G. Varner, J.R. Vazquez, R.A. Vazquez, D. Veberic, V. Verzi, J. Vicha, M. Videla, L. Villasenor, H. Wahlberg, P. Wahrlich, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, S. Westerhoff, B.J. Whelan, A. Widom, G. Wieczorek, L. Wiencke, B. Wilczynska, H. Wilczynski, M. Will, C. Williams, T. Winchen, B. Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski (The Pierre Auger Collaboration) Oct. 2, 2013 astro-ph.IM, astro-ph.HE Joint contributions of the Pierre Auger and Telescope Array Collaborations to the 33rd International Cosmic Ray Conference, Rio de Janeiro, Brazil, July 2013: cross-calibration of the fluorescence telescopes, large scale anisotropies and mass composition. Investigation on the energy and mass composition of cosmic rays using LOPES radio data (1309.2410) N. Palmieri, W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. 
Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmid, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus - LOPES Collaboration - Sept. 10, 2013 astro-ph.HE The sensitivity to the mass composition as well as the reconstruction of the energy of the primary particle are explored here by leveraging the features of the radio lateral distribution function. For the purpose of this analysis, a set of events measured with the LOPES experiment is reproduced with the latest CoREAS radio simulation code. Based on simulation predictions, a method which exploits the slope of the radio lateral distribution function is developed (Slope Method) and directly applied on measurements. As a result, the possibility to reconstruct both the energy and the depth of the shower maximum of the cosmic ray air shower using radio data and achieving relatively small uncertainties is presented. Vectorial Radio Interferometry with LOPES 3D (1308.2512) D. Huber, W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, T. Huege, P.G. Isar, K.H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmid, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus Aug. 12, 2013 astro-ph.IM, astro-ph.HE One successful detection technique for high-energy cosmic rays is based on the radio signal emitted by the charged particles in an air shower. The LOPES experiment at Karlsruhe Institute of Technology, Germany, has made major contributions to the evolution of this technique. LOPES was reconfigured several times to improve and further develop the radio detection technique. In the latest setup LOPES consisted of 10 tripole antennas. With this, LOPES 3D was the first cosmic ray experiment measuring all three vectorial field components at once and thereby gaining the full information about the electric field vector. We present an analysis based on the data taken with special focus on the benefits of a direct measurement of the vertical polarization component. We demonstrate that by measuring all polarization components the detection and reconstruction efficiency is increased and noisy single channel data can be reconstructed by utilising the information from the other two channels of one antenna station. Comparison of LOPES data and CoREAS simulations using a full detector simulation (ICRC2013) (1308.2523) K. Link, W.D. Apel, J.C. Arteaga-VelÁzquez, L. BÄhren, K. Bekk, M. Bertaina, P.L. Biermann, J. BlÜmer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. De Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. HÖrandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K-H. 
Kampert, D. Kang, O. KrÖmer, J. Kuijpers, P. ŁUczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. OehlschlÄger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. RÜhle, A. Saftoiu, H. Schieler, A. Schmidt, F.G. SchrÖder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus The LOPES experiment at the Karlsruhe Institute of Technology, Germany, has been measuring radio emission of air showers for almost 10 years. For a better understanding of the emission process a detailed comparison of data with simulations is necessary. This is possible using a newly developed detector simulation including all LOPES detector components. After propagating a simulated event through this full detector simulation a standard LOPES like event file is written. LOPES data and CoREAS simulations can then be treated equally and the same analysis software can be applied to both. This gives the opportunity to compare data and simulations directly. Furthermore, the standard analysis software can be used with simulations which provide the possibility to check the accuracy regarding reconstruction of air shower parameters. We point out the advantages and present first results using such a full LOPES detector simulation. A comparison of LOPES data and the Monte Carlo code CoREAS based on an analysis using this detector simulation is shown. The <lnA> study in the primary energy range 10^{16} - 10^{17} eV with the Muon Tracking Detector in the KASCADE-Grande experiment (1308.2059) P. Łuczak, W.D. Apel, J.C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, C. Curcio, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, J. Engler, B. Fuchs, D. Fuhrmann, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, D. Huber, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, M. Ludwig, H.J. Mathes, H.J. Mayer, M. Melissas, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, N. Palmieri, M. Petcu, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski Aug. 9, 2013 astro-ph.HE The KASCADE-Grande Muon Tracking Detector enables with high accuracy the measurement of directions of EAS muons with energy above 0.8 GeV and up to 700 m distance from the shower centre. Reconstructed muon tracks are used to investigate muon pseudorapidity (eta) distributions. These distributions are nearly identical to the pseudorapidity distributions of their parent mesons produced in hadronic interactions. Comparison of the eta distributions from measured and simulated showers can be used to test the quality of the high energy hadronic interaction models. In this context a comparison of the QGSJet-II-2 and QGSJet-II-4 model will be shown. The pseudorapidity distributions reflect the longitudinal development of EAS and, as such, are sensitive to the mass of the cosmic rays primary particles. With various parameters of the eta distribution, obtained from the MTD data, it is possible to calculate the mean logarithmic mass of CRs. The results of the <lnA> analysis in the primary energy range 10^{16} eV - 10^{17} eV with the 1st quartile (Q1) of eta distribution will be presented. Mass sensitivity in the radio lateral distribution function (1308.0046) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, P. Buchholz, E. Cantoni, A. Chiavassa, K. 
Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, M. Finger, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmid, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus Measuring the mass composition of ultra-high energy cosmic rays is one of the main tasks in the cosmic rays field. Here we are exploring the composition signature in the coherent electromagnetic emission from extensive air showers, detected in the MHz frequency range. One of the experiments that successfully detects radio events in the frequency band of 40-80 MHz is the LOPES experiment at KIT. It is a digital interferometric antenna array and has the important advantage of taking data in coincidence with the particle detector array KASCADE-Grande. A possible method to look at the composition signature in the radio data, predicted by simulations, concerns the radio lateral distribution function, since its slope is strongly correlated with Xmax. Recent comparison between REAS3 simulations and LOPES data showed a significantly improved agreement in the lateral distribution function and for this reason an analysis on a possible LOPES mass signature through the slope method is promising. Trying to reproduce a realistic case, proton and iron showers are simulated with REAS3 using the LOPES selection information as input parameters. The obtained radio lateral distribution slope is analyzed in detail. The lateral slope method to look at the composition signature in the radio data is shown here and a possible signature of mass composition in the LOPES data is discussed. Reconstructing energy and Xmax of cosmic ray air showers using the radio lateral distribution measured with LOPES (1308.0053) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmid, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus The LOPES experiment, a digital radio interferometer located at KIT (Karlsruhe Institute of Technology), obtained remarkable results for the detection of radio emission from extensive air showers at MHz frequencies. Features of the radio lateral distribution function (LDF) measured by LOPES are explored in this work for a precise reconstruction of two fundamental air shower parameters: the primary energy and the shower Xmax. The method presented here has been developed on (REAS3-)simulations, and is applied to LOPES measurements. Despite the high human-made noise at the LOPES site, it is possible to reconstruct both the energy and Xmax for individual events. On the one hand, the energy resolution is promising and comparable to the one of the co-located KASCADE-Grande experiment. 
On the other hand, Xmax values are reconstructed with the LOPES measurements with a resolution of 90 g/cm2 . A precision on Xmax better than 30 g/cm2 is predicted and achievable in a region with a lower human-made noise level. KASCADE-Grande measurements of energy spectra for elemental groups of cosmic rays (1306.6283) The KASCADE-Grande Collaboration: W.D. Apel, J.C. Arteaga-Velàzquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, J. Engler, M. Finger, B. Fuchs, D. Fuhrmann, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, D. Huber, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, H.J. Mayer, M. Melissas, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, N. Palmieri, M. Petcu, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski June 26, 2013 astro-ph.HE The KASCADE-Grande air shower experiment [W. Apel, et al. (KASCADE-Grande collaboration), Nucl. Instrum. Methods A 620 (2010) 202] consists of, among others, a large scintillator array for measurements of charged particles, Nch, and of an array of shielded scintillation counters used for muon counting, Nmu. KASCADE-Grande is optimized for cosmic ray measurements in the energy range 10 PeV to about 2000 PeV, where exploring the composition is of fundamental importance for understanding the transition from galactic to extragalactic origin of cosmic rays. Following earlier studies of the all-particle and the elemental spectra reconstructed in the knee energy range from KASCADE data [T. Antoni, et al. (KASCADE collaboration), Astropart. Phys. 24 (2005) 1], we have now extended these measurements to beyond 200 PeV. By analysing the two-dimensional shower size spectrum Nch vs. Nmu for nearly vertical events, we reconstruct the energy spectra of different mass groups by means of unfolding methods over an energy range where the detector is fully efficient. The procedure and its results, which are derived based on the hadronic interaction model QGSJET-II-02 and which yield a strong indication for a dominance of heavy mass groups in the covered energy range and for a knee-like structure in the iron spectrum at around 80 PeV, are presented. This confirms and further refines the results obtained by other analyses of KASCADE-Grande data, which already gave evidence for a knee-like structure in the heavy component of cosmic rays at about 80 PeV [W. Apel, et al. (KASCADE-Grande collaboration), Phys. Rev. Lett. 107 (2011) 171104]. Ankle-like Feature in the Energy Spectrum of Light Elements of Cosmic Rays Observed with KASCADE-Grande (1304.7114) W.D. Apel, J.C. Arteaga-Velàzquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, J. Engler, M. Finger, B. Fuchs, D. Fuhrmann, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, D. Huber, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, H.J. Mayer, M. Melissas, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, N. Palmieri, M. Petcu, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, M. Wommer, J. 
Zabierowski April 26, 2013 astro-ph.HE Recent results of the KASCADE-Grande experiment provided evidence for a mild knee-like structure in the all-particle spectrum of cosmic rays at $E = 10^{16.92 \pm 0.10} \, \mathrm{eV}$, which was found to be due to a steepening in the flux of heavy primary particles. The spectrum of the combined components of light and intermediate masses was found to be compatible with a single power law in the energy range from $10^{16.3} \, \mathrm{eV}$ to $10^{18} \, \mathrm{eV}$. In this paper, we present an update of this analysis by using data with increased statistics, originating both from a larger data set including more recent measurements and by using a larger fiducial area. In addition, optimized selection criteria for enhancing light primaries are applied. We find a spectral feature for light elements, namely a hardening at $E = 10^{17.08 \pm 0.08} \, \mathrm{eV}$ with a change of the power law index from $-3.25 \pm 0.05$ to $-2.79 \pm 0.08$. Thunderstorm Observations by Air-Shower Radio Antenna Arrays (1303.7068) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, P. Buchholz, S. Buitink, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, P. Doll, M. Ender, R. Engel, H. Falcke, M. Finger, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, S. Nehls, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus March 28, 2013 astro-ph.IM, astro-ph.HE Relativistic, charged particles present in extensive air showers lead to a coherent emission of radio pulses which are measured to identify the shower initiating high-energy cosmic rays. Especially during thunderstorms, there are additional strong electric fields in the atmosphere, which can lead to further multiplication and acceleration of the charged particles and thus have influence on the form and strength of the radio emission. For a reliable energy reconstruction of the primary cosmic ray by means of the measured radio signal it is very important to understand how electric fields affect the radio emission. In addition, lightning strikes are a prominent source of broadband radio emissions that are visible over very long distances. This, on the one hand, causes difficulties in the detection of the much lower signal of the air shower. On the other hand the recorded signals can be used to study features of the lightning development. The detection of cosmic rays via the radio emission and the influence of strong electric fields on this detection technique is investigated with the LOPES experiment in Karlsruhe, Germany. The important question if a lightning is initiated by the high electron density given at the maximum of a high-energy cosmic-ray air shower is also investigated, but could not be answered by LOPES. But, these investigations exhibit the capabilities of EAS radio antenna arrays for lightning studies. We report about the studies of LOPES measured radio signals of air showers taken during thunderstorms and give a short outlook to new measurements dedicated to search for correlations of lightning and cosmic rays. LOPES 3D reconfiguration and first measurements (1303.7070) D. Huber, W.D. 
Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, P. Buchholz, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, M. Finger, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus The Radio detection technique of high-energy cosmic rays is based on the radio signal emitted by the charged particles in an air shower due to their deflection in the Earth's magnetic field. The LOPES experiment at Karlsruhe Institute of Technology, Germany with its simple dipoles made major contributions to the revival of this technique. LOPES is working in the frequency range from 40 to 80 MHz and was reconfigured several times to improve and further develop the radio detection technique. In the current setup LOPES consists of 10 tripole antennas which measure the complete electric field vector of the radio emission from cosmic rays. LOPES is the first experiment measuring all three vectorial components at once and thereby gaining the full information about the electric field vector and not only a two-dimensional projection. Such a setup including also measurements of the vertical electric field component is expected to increase the sensitivity to inclined showers and help to advance the understanding of the emission mechanism. We present the reconfiguration and calibration procedure of LOPES 3D and discuss first measurements. LOPES 3D - vectorial measurements of radio emission from cosmic ray induced air showers (1303.7080) March 28, 2013 astro-ph.HE LOPES 3D is able to measure all three components of the electric field vector of the radio emission from air showers. This allows a better comparison with emission models. The measurement of the vertical component increases the sensitivity to inclined showers. By measuring all three components of the electric field vector LOPES 3D demonstrates by how much the reconstruction accuracy of primary cosmic ray parameters increases. Thus LOPES 3D evaluates the usefulness of vectorial measurements for large scale applications. LOPES-3D, an antenna array for full signal detection of air-shower radio emission (1303.6808) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, P. Buchholz, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, M. Finger, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H. J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F. G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus To better understand the radio signal emitted by extensive air-showers and to further develop the radio detection technique of high-energy cosmic rays, the LOPES experiment was reconfigured to LOPES-3D. 
LOPES-3D is able to measure all three vectorial components of the electric field of radio emission from cosmic ray air showers. The additional measurement of the vertical component ought to increase the reconstruction accuracy of primary cosmic ray parameters like direction and energy, provides an improved sensitivity to inclined showers, and will help to validate simulation of the emission mechanisms in the atmosphere. LOPES-3D will evaluate the feasibility of vectorial measurements for large scale applications. In order to measure all three electric field components directly, a tailor-made antenna type (tripoles) was deployed. The change of the antenna type necessitated new pre-amplifiers and an overall recalibration. The reconfiguration and the recalibration procedure are presented and the operationality of LOPES-3D is demonstrated. Cosmic Ray Measurements with LOPES: Status and Recent Results (ARENA 2012) (1301.2557) F.G. Schröder, W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Jan. 11, 2013 astro-ph.IM, astro-ph.HE LOPES is a digital antenna array at the Karlsruhe Institute of Technology, Germany, for cosmic-ray air-shower measurements. Triggered by the co-located KASCADE-Grande air-shower array, LOPES detects the radio emission of air showers via digital radio interferometry. We summarize the status of LOPES and recent results. In particular, we present an update on the reconstruction of the primary-particle properties based on almost 500 events above 100 PeV. With LOPES, the arrival direction can be reconstructed with a precision of at least 0.65{\deg}, and the energy with a precision of at least 20 %, which, however, does not include systematic uncertainties on the absolute energy scale. For many particle and astrophysics questions the reconstruction of the atmospheric depth of the shower maximum, Xmax, is important, since it yields information on the type of the primary particle and its interaction with the atmosphere. Recently, we found experimental evidence that the slope of the radio lateral distribution is indeed sensitive to the longitudinal development of the air shower, but unfortunately, the Xmax precision at LOPES is limited by the high level of anthropogenic radio background. Nevertheless, the developed methods can be transferred to next generation experiments with lower background, which should provide an Xmax precision competitive to other detection technologies. Antennas for the Detection of Radio Emission Pulses from Cosmic-Ray induced Air Showers at the Pierre Auger Observatory (1209.3840) P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I.F.M. Albuquerque, D. Allard, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, T. Antičić, C. Aramo, E. Arganda, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, M. Balzer, K.B. Barber, A.F. Barbosa, R. 
Bardenet, S.L.C. Barroso, B. Baughman, J. Bäuml, C. Baus, J.J. Beatty, K.H. Becker, A. Bellétoile, J.A. Bellido, S. BenZvi, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blümer, M. M. Boháčová, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, R. Bruijn, P. Buchholz, A. Bueno, L. Buroker, R.E. Burton, K.S. Caballero-Mora, B. Caccianiga, L. Caramete, R. Caruso, A. Castellina, O. Catalano, G. Cataldi, L. Cazon, R. Cester, J. Chauvin, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chirinos Diaz, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, H. Cook, M.J. Cooper, J. Coppens, A. Cordier, S. Coutu, C.E. Covault, A. Creusot, A. Criss, J. Cronin, A. Curutiu, S. Dagoret-Campagne, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, C. De Donato, S.J. de Jong, G. De La Vega, W.J.M. de Mello Junior, J.R.T. de Mello Neto, I. De Mitri, V. de Souza, K.D. de Vries, L. del Peral, M. del Río, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, M.L. Díaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, J.C. dos Anjos, M.T. Dova, D. D'Urso, I. Dutan, J. Ebr, R. Engel, M. Erdmann, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, S. Fliescher, C.E. Fracchiolla, E.D. Fraenkel, O. Fratu, U. Fröhlich, B. Fuchs, R. Gaior, R.F. Gamarra, S. Gambetta, B. García, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, A. Gascon Bravo, H. Gemmeke, P.L. Ghia, M. Giller, J. Gitto, H. Glass, M.S. Gold, G. Golup, F. Gomez Albarracin, M. Gómez Berisso, P.F. Gómez Vitale, P. Gonçalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gouffon, E. Grashorn, S. Grebe, N. Griffith, M. Grigat, A.F. Grillo, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, A.E. Herve, C. Hojvat, N. Hollon, V.C. Holmes, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, F. Ionita, A. Italiano, S. Jansen, C. Jarne, S. Jiraskova, M. Josebachuili, K. Kadija, K.H. Kampert, P. Karhan, P. Kasper, I. Katkov, B. Kégl, B. Keilhauer, A. Keivani, J.L. Kelley, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, J. Knapp, D.-H. Koang, K. Kotera, N. Krohm, O. Krömer, D. Kruppke-Hansen, D. Kuempel, J.K. Kulbartz, N. Kunka, G. La Rosa, C. Lachaud, D. LaHurd, L. Latronico, R. Lauer, P. Lautridou, S. Le Coz, M.S.A.B. Leão, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, R. López, A. Lopez Agüera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, C. Macolino, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, J. Marin, V. Marin, I.C. Maris, H.R. Marquez Falcon, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, P.O. Mazur, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, P. Mertsch, C. Meurer, R. Meyhandan, S. Mićanović, M.I. Micheletti, I.A. Minaya, L. Miramonti, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, B. Morales, C. Morello, E. Moreno, J.C. Moreno, M. Mostafá, C.A. Moura, M.A. Muller, G. Müller, M. Münchmeyer, R. Mussa, G. Navarra, J.L. 
Navarro, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.T. Nhung, M. Niechciol, L. Niemietz, N. Nierstenhoefer, D. Nitz, D. Nosek, L. Nožka, J. Oehlschläger, A. Olinto, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, G. Parente, E. Parizot, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pȩkala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, S. Petrera, A. Petrolini, Y. Petrov, C. Pfendner, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, V.H. Ponce, M. Pontz, A. Porcelli, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, H. Rivera, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, G. Rodriguez, I. Rodriguez Cabo, J. Rodriguez Martino, J. Rodriguez Rojo, M.D. Rodríguez-Frías, G. Ros, J. Rosado, T. Rossler, M. Roth, B. Rouillé-d'Orfeuil, E. Roulet, A.C. Rovero, C. Rühle, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sánchez, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, S. Sarkar, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, J. Schovancova, P. Schovánek, F. Schröder, S. Schulte, D. Schuster, S.J. Sciutto, M. Scuderi, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, H.H. Silva Lopez, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, J. Sorokin, H. Spinka, R. Squartini, Y.N. Srivastava, S. Stanic, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, T. Suomijärvi, A.D. Supanitsky, T. Šuša, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, A. Tapia, M. Tartare, O. Taşcău, R. Tcaciuc, N.T. Thao, D. Thomas, J. Tiffenberg, C. Timmermans, W. Tkaczyk, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, P. Travnicek, D.B. Tridapalli, G. Tristram, E. Trovato, M. Tueros, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, H. Wahlberg, P. Wahrlich, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, S. Westerhoff, B.J. Whelan, A. Widom, G. Wieczorek, L. Wiencke, B. Wilczyńska, H. Wilczyński, M. Will, C. Williams, T. Winchen, M. Wommer, B. Wundheiler, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano Garcia, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski (The Pierre Auger Collaboration), D. Charrier, L. Denis, G. Hilgers, L. Mohrmann, B. Philipps, O. Seeger Sept. 18, 2012 astro-ph.IM The Pierre Auger Observatory is exploring the potential of the radio detection technique to study extensive air showers induced by ultra-high energy cosmic rays. The Auger Engineering Radio Array (AERA) addresses both technological and scientific aspects of the radio technique. A first phase of AERA has been operating since September 2010 with detector stations observing radio signals at frequencies between 30 and 80 MHz. In this paper we present comparative studies to identify and optimize the antenna design for the final configuration of AERA consisting of 160 individual radio detector stations. The transient nature of the air shower signal requires a detailed description of the antenna sensor. 
As the ultra-wideband reception of pulses is not widely discussed in antenna literature, we review the relevant antenna characteristics and enhance theoretical considerations towards the impulse response of antennas including polarization effects and multiple signal reflections. On the basis of the vector effective length we study the transient response characteristics of three candidate antennas in the time domain. Observing the variation of the continuous galactic background intensity we rank the antennas with respect to the noise level added to the galactic signal.
Derive binomial series

1 The Binomial Theorem

The simplest binomial expression is (x + y): a binomial is a polynomial with two terms, and in elementary algebra the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, for a positive integer power n it is possible to expand (x + y)^n into a sum involving terms of the form a x^b y^c, where the exponents b and c are nonnegative integers with b + c = n, and the coefficient a of each term is a specific positive integer depending on n and b. Written out,

(x + y)^n = C(n,0) x^n y^0 + C(n,1) x^{n-1} y^1 + C(n,2) x^{n-2} y^2 + ... + C(n,n-1) x y^{n-1} + C(n,n) x^0 y^n = sum_{r=0}^{n} C(n,r) x^{n-r} y^r,

where 0 <= r <= n. The expansion has n + 1 terms: the first term is x^n, the final term is y^n, and progressing from the first term to the last the exponent of x decreases by one while the exponent of y increases by one. The coefficients C(n,r) = n!/(r!(n-r)!) are the binomial coefficients, and trivially C(n,0) = C(n,n) = 1. The theorem can be derived by mathematical induction (Step 1: prove the formula for n = 1; Step 2: assume the formula is true for n = k and deduce it for n = k + 1), or from one of the standard counting problems: the coefficient of x^{n-r} y^r counts the number of ways of choosing which r of the n factors contribute a y. The theorem can also be expressed in terms of the derivatives of x^n instead of the use of combinations, since the r-th derivative of x^n is n(n-1)...(n-r+1) x^{n-r} = r! C(n,r) x^{n-r}.

The smallest cases are familiar identities: (x + y)^0 = 1, (x + y)^1 = x + y, (x + y)^2 = x^2 + 2xy + y^2, (x + y)^3 = x^3 + 3x^2 y + 3xy^2 + y^3. For higher powers the expansion gets very tedious by hand, and the binomial theorem is a quick way of expanding a binomial raised to a generally large power in minutes. Typical exercises: expand (x + 3)^5, (x^2 - y)^8, (x + a)^{10}, (2x + 5)^3 or (x - 1/x)^4 (for the last one, a common question is how to find the constant term); find the coefficient of x^5 in the expansion of (2x + 3)(x + 1)^8, or of x^6 in (2x + 3)^{10}; expand (4 + 2x)^6 in ascending powers of x up to the term in x^3, which means using the binomial theorem to expand the brackets but only keeping terms as high as x^3. A short computer-algebra check of such expansions is sketched below.
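The following is an added sketch (not part of the original text) showing how such exercises can be checked with SymPy; the particular expressions come from the exercises above, and the variable names are our own.

    # Check some of the binomial-theorem exercises with SymPy.
    import sympy as sp

    x = sp.symbols('x')

    # Expand (x + 3)^5 directly.
    print(sp.expand((x + 3)**5))      # x**5 + 15*x**4 + 90*x**3 + 270*x**2 + 405*x + 243

    # Coefficient of x^5 in (2x + 3)(x + 1)^8.
    p = sp.expand((2*x + 3) * (x + 1)**8)
    print(p.coeff(x, 5))              # 308 = 2*C(8,4) + 3*C(8,5)

    # (4 + 2x)^6 in ascending powers of x, keeping only terms up to x^3.
    q = sp.expand((4 + 2*x)**6)
    print(sum(q.coeff(x, k) * x**k for k in range(4)))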
2 The Binomial Series

The binomial theorem covers positive integer powers only, but it extends to arbitrary exponents. In mathematics, the binomial series is the Taylor series at x = 0 (that is, the Maclaurin series) of the function f(x) = (1 + x)^a, where a is an arbitrary complex number; in practice it is usually a real number such as 1/2 or -2. Recall the Taylor theorem: if f is infinitely differentiable, then f(x) = T_n(x) + R_n(x), where T_n is the Taylor polynomial and R_n the remainder, and a Maclaurin series is the special case of a Taylor series centred at a = 0:

f(x) = f(0) + x f'(0) + (x^2/2!) f''(0) + (x^3/3!) f'''(0) + ...,

where f', f'' and f^{(n)} denote derivatives with respect to x. To derive the binomial series, use this definition of the Maclaurin series with f(x) = (1 + x)^k for any real (or complex) number k: the function (1 + x)^k may be expressed as a Maclaurin series by evaluating its successive derivatives at 0. The derivation is fairly standard, but it is included below for the benefit of anyone who has not seen it previously.
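A compact reconstruction of that standard derivation (added here; it is the usual textbook computation, not a quotation from the original page): differentiate f(x) = (1 + x)^k repeatedly and evaluate at 0,

$$\begin{aligned} f(x) &= (1+x)^{k}, & f(0) &= 1,\\ f'(x) &= k\,(1+x)^{k-1}, & f'(0) &= k,\\ f''(x) &= k(k-1)\,(1+x)^{k-2}, & f''(0) &= k(k-1),\\ f^{(n)}(x) &= k(k-1)\cdots(k-n+1)\,(1+x)^{k-n}, & f^{(n)}(0) &= k(k-1)\cdots(k-n+1). \end{aligned}$$

Substituting these values into the Maclaurin series gives

$$(1+x)^{k} = \sum_{n=0}^{\infty} \frac{k(k-1)\cdots(k-n+1)}{n!}\, x^{n} = \sum_{n=0}^{\infty} \binom{k}{n} x^{n}, \qquad \binom{k}{n} = \frac{k(k-1)\cdots(k-n+1)}{n!},$$

where \binom{k}{n} is the generalized binomial coefficient, generalizing the familiar notation when k is a nonnegative integer. This might look the same as the finite expansion given by the binomial theorem, and for nonnegative integer k it is; for other exponents the sum does not terminate.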
3 Special Cases and Convergence

If the exponent is a nonnegative integer n, then the (n + 2)-th term and all later terms in the series are 0, since each contains a factor (n - n); equivalently, the generalized coefficient C(n, l) is zero for l > n, so the binomial series is a polynomial of degree n which, by the binomial theorem, is equal to (1 + x)^n. In this special case the series is finite and gives back the algebraic binomial formula. The general series is therefore sometimes referred to as Newton's binomial theorem: Newton wrote

(1 + x)^n = 1 + nx + n(n-1)/2! x^2 + n(n-1)(n-2)/3! x^3 + ...

for arbitrary rational values of n, and with this formula he was able to find infinite series for many algebraic functions (functions y of x that satisfy a polynomial equation). The binomial series is also a special case of a hypergeometric series.

For a non-integer exponent the series is genuinely infinite, and the theory of power series from first-year calculus yields the following information: (i) the series converges absolutely if |x| < 1 and diverges if |x| > 1, by the ratio test; (ii) term-by-term differentiation yields the identity P_a'(x) = a P_{a-1}(x) for all a and all x in the interval of convergence, where P_a(x) denotes the series for exponent a. It is not hard to see that the series is the Maclaurin series for (1 + x)^r and that it converges for -1 < x < 1; it is rather more difficult to prove that the series is actually equal to (1 + x)^r there, and the proof may be found in many introductory real analysis books. On the boundary |x| = 1 the behaviour depends on the real exponent a: if a > 0 the series converges absolutely on -1 <= x <= 1; if a <= -1 it converges absolutely only in -1 < x < 1 and diverges at x = 1 and x = -1; and if -1 < a < 0 it converges (conditionally) at x = 1 but diverges at x = -1. Provided x lies within these limits the series converges (the terms eventually become smaller as we move from left to right), and what is surprising is just how quickly the partial sums settle down when |x| is small.
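A quick numerical illustration of that convergence (an added sketch; the helper name binomial_partial_sum is ours):

    # Compare partial sums of the binomial series with (1 + x)**alpha for |x| < 1.
    def binomial_partial_sum(alpha, x, n_terms):
        total, coeff = 0.0, 1.0              # coeff holds the generalized C(alpha, n), starting at n = 0
        for n in range(n_terms):
            total += coeff * x**n
            coeff *= (alpha - n) / (n + 1)   # C(alpha, n+1) = C(alpha, n) * (alpha - n) / (n + 1)
        return total

    alpha, x = 0.5, 0.2
    for n_terms in (2, 4, 8, 16):
        approx = binomial_partial_sum(alpha, x, n_terms)
        print(n_terms, approx, abs((1 + x)**alpha - approx))   # the error shrinks rapidly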
4 Worked Examples

Example 1. Derive the binomial series for (1 + x)^{1/2}, i.e. use the binomial series to expand sqrt(1 + x). Here k = 1/2, and the general formula gives (1 + x)^{1/2} = 1 + x/2 - x^2/8 + x^3/16 - ...; since k > 0 the expansion is valid for |x| <= 1. As a consistency check, in the case k = 2 the series terminates and reproduces the known identity (a + b)^2 = a^2 + 2ab + b^2 (after writing a + b = a(1 + b/a)), and it is just as easy to recover the identity for k = 3.

Example 2. Write down the first four terms in the binomial series for sqrt(9 - x). In this case k = 1/2, and we need to rewrite the term a little to put it into the required form (1 + x)^k: sqrt(9 - x) = 3 (1 - x/9)^{1/2}, so the series gives 3 - x/6 - x^2/216 - x^3/3888 - ..., valid for |x| < 9. The same trick handles a square root in a denominator: rewrite the root as a power (to -1/2) and use the formula with the rewritten function (1 + x)^{-1/2}.

More generally, the binomial series can be combined with the Taylor series expansions of common functions to derive Taylor series for other functions, to solve differential equations by power series, and to evaluate nonelementary integrals. For instance, using the known series for e^{-t^2} and integrating term by term gives the series for the error function, erf(z) = (2/sqrt(pi)) sum_{n=0}^{infinity} (-1)^n z^{2n+1} / ((2n+1) n!).
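These expansions can be verified with a computer algebra system; the following added lines (our own sketch, again assuming SymPy is available) reproduce them:

    import sympy as sp

    x, z, t = sp.symbols('x z t')

    print(sp.series(sp.sqrt(1 + x), x, 0, 4))   # 1 + x/2 - x**2/8 + x**3/16 + O(x**4)
    print(sp.series(sp.sqrt(9 - x), x, 0, 4))   # 3 - x/6 - x**2/216 - x**3/3888 + O(x**4)

    # erf(z): integrate the series of exp(-t**2) term by term and scale by 2/sqrt(pi).
    partial = sp.integrate(sp.series(sp.exp(-t**2), t, 0, 8).removeO(), (t, 0, z))
    print(sp.expand(2 / sp.sqrt(sp.pi) * partial))   # first four terms of the erf series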
5 The Binomial Distribution

The same coefficients appear in probability, which is where the binomial distribution gets its name. In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes-no question and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 - p). A single success/failure experiment is also called a Bernoulli trial. Some basic examples of binomial random variables: define a "success" as getting heads on a coin flip, so that if you flip 10 coins and let X be the number of heads, then X is binomial with n = 10 and p = 0.5; or define a "success" as rolling a 5 on a six-sided die and count the successes when you roll the die 20 times.

Derivation of the binomial probability formula (probability for Bernoulli experiments). If p is the probability of a win, then p^k is the probability of winning k trials in a row; if you have n trials and win only k of them, you lose the remaining n - k of the trials, which happens with probability (1 - p)^{n-k}. The k wins can be placed among the n trials in C(n,k) ways, so the probability mass function of the binomial distribution is P(X = k) = C(n,k) p^k (1 - p)^{n-k}.

Mean of the binomial distribution (proof). We start by plugging the binomial PMF into the general formula for the mean of a discrete probability distribution, E[X] = sum_k k P(X = k). Then we use the identity k C(n,k) = n C(n-1,k-1) to rewrite it, and finally we use the variable substitutions m = n - 1 and j = k - 1 and simplify: the remaining sum is the total probability of a Binomial(m, p) distribution, which equals 1, so E[X] = np. Q.E.D.

Estimating p. Suppose you have x = 45 successes out of n = 50 Bernoulli trials and you wish to test H_0: p = 0.5 against H_a: p != 0.5. The natural point estimate of p is the maximum likelihood estimate; we can derive it by taking the log of the likelihood function, ln(C(n,x) p^x (1-p)^{n-x}) = ln C(n,x) + x ln(p) + (n-x) ln(1-p), and finding where its derivative with respect to p is zero, which gives p-hat = x/n (here 45/50 = 0.9).
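The formulas above are easy to confirm numerically; the following added check (ours, standard library only) verifies that the PMF sums to one, that the mean equals n*p, and that the likelihood for 45 successes in 50 trials peaks at p = 0.9:

    from math import comb, log

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, p = 10, 0.5
    print(sum(binom_pmf(k, n, p) for k in range(n + 1)))            # ~1.0: the PMF sums to one
    print(sum(k * binom_pmf(k, n, p) for k in range(n + 1)), n * p) # both ~5.0: the mean equals n*p

    # Maximum likelihood for x = 45 successes in n = 50 trials: scan the log-likelihood
    # x*log(p) + (n - x)*log(1 - p) on a grid; the maximum sits at p = x/n = 0.9.
    x_obs, n_obs = 45, 50
    p_hat = max((i / 1000 for i in range(1, 1000)),
                key=lambda q: x_obs * log(q) + (n_obs - x_obs) * log(1 - q))
    print(p_hat)                                                    # 0.9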
6 Approximating the Binomial Distribution

Two classical approximations are useful when exact binomial computations become awkward. The normal approximation to the binomial works fine if n is large enough and p is sufficiently near 1/2 (roughly speaking, so that np and n(1 - p) both exceed 5). The Poisson approximation to the binomial covers the opposite regime: from the derivation above it is clear that as n approaches infinity and p approaches zero with np fixed, a Binomial(n, p) distribution will be approximated by a Poisson(np) distribution, and the approximation works very well already for n values as low as n = 100 and p values as high as 0.02. A typical modelling example: let x be the number of bacteria you consume in n units of water, with p the probability that a random unit of water contains a bacterium; in the Poisson limit we replace p with the Poisson intensity (bacteria per ml) and the number of trials n with the amount of water. Modern treatments of discrete distributions also derive them via stochastic processes and random walks, and place increasing emphasis on the relevance of Bayesian inference, especially with regard to the binomial and Poisson distributions. (The topic is of pedagogical interest too: one study explored students' mental constructions of the binomial series expansion in a class of 159 students, with data collected through a written assessment task.)
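A short added comparison (our own sketch, standard library only) of the exact binomial PMF against its Poisson and normal approximations in the two regimes just described:

    from math import comb, exp, factorial, sqrt, pi

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # Poisson regime: n = 100, p = 0.02, so lambda = n*p = 2.
    n, p = 100, 0.02
    lam = n * p
    for k in range(5):
        print(k, binom_pmf(k, n, p), exp(-lam) * lam**k / factorial(k))

    # Normal regime: n = 100, p = 0.5 (n*p and n*(1-p) are both well above 5).
    n, p = 100, 0.5
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    for k in (45, 50, 55):
        density = exp(-(k - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
        print(k, binom_pmf(k, n, p), density)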
International Workshop on Computer Algebra in Scientific Computing CASC 2017: Computer Algebra in Scientific Computing pp 93–108

Symbolic Versus Numerical Computation and Visualization of Parameter Regions for Multistationarity of Biological Networks

Matthew England (ORCID: orcid.org/0000-0001-5729-3420), Hassan Errami, Dima Grigoriev, Ovidiu Radulescu (ORCID: orcid.org/0000-0001-6453-5707), Thomas Sturm (ORCID: orcid.org/0000-0002-8088-340X) & Andreas Weber (ORCID: orcid.org/0000-0001-5624-3368) First Online: 30 August 2017 Part of the Lecture Notes in Computer Science book series (LNTCS, volume 10490)

We investigate models of the mitogen-activated protein kinases (MAPK) network, with the aim of determining where in parameter space there exist multiple positive steady states. We build on recent progress which combines various symbolic computation methods for mixed systems of equalities and inequalities. We demonstrate that those techniques benefit tremendously from a newly implemented graph theoretical symbolic preprocessing method. We compare computation times and quality of results of numerical continuation methods with our symbolic approach before and after the application of our preprocessing.

1 Introduction

The mathematical modelling of intra-cellular biological processes has been using nonlinear ordinary differential equations since the early ages of mathematical biophysics in the 1940s and 50s [28]. A standard modelling choice for cellular circuitry is to use chemical reactions with mass action law kinetics, leading to polynomial differential equations. Rational functions kinetics (for instance the Michaelis-Menten kinetics) can generally be decomposed into several mass action steps. An important property of biological systems is their multistationarity, which means having multiple stable steady states. Multistationarity is instrumental to cellular memory and cell differentiation during development or regeneration of multicellular organisms and is also used by micro-organisms in survival strategies. It is thus important to determine the parameter values for which a biochemical model is multistationary. With mass action reactions, testing for multiple steady states boils down to counting real positive solutions of algebraic systems. The models benchmarked in this paper concern intracellular signaling pathways. These pathways transmit information about the cell environment by inducing cascades of protein modifications (phosphorylation) all the way from the plasma membrane via the cytosol to genes in the cell nucleus. Multistationarity of signaling usually occurs as a result of activation of upstream signaling proteins by downstream components [2]. A different mechanism for producing multistationarity in signaling pathways was proposed by Kholodenko [26]. In this mechanism the cause of multistationarity is the presence of multiple phosphorylation/dephosphorylation cycles that share enzymes. A simple, two-step phosphorylation/dephosphorylation cycle is capable of ultrasensitivity, a form of all-or-nothing response with no multiple steady states (Goldbeter–Koshland mechanism). In multiple phosphorylation/dephosphorylation cycles, enzyme sharing provides competitive interactions and positive feedback that ultimately leads to multistationarity [23, 26]. Our study is complementary to works applying numerical methods to ordinary differential equations models used for biology applications. Gross et al.
[18] used polynomial homotopy continuation methods for global parameter estimation of mass action models. Bifurcations and multistationarity of signaling cascades was studied with numerical methods based on the Jacobian matrix [30]. Other symbolic approaches to multistationarity either propose necessary conditions or work for particular networks [8, 9, 20, 27]. Our work here follows [5], where it was demonstrated that determination of multistationarity of an 11-dimensional model of a mitogen-activated protein kinases (MAPK) cascade can be achieved by currently available symbolic methods when numeric values are known for all but potentially one parameter. We show that the symbolic methods used in [5], viz. real triangularization and cylindrical algebraic decomposition, and also polynomial homotopy continuation methods, benefit tremendously from a graph theoretical symbolic preprocessing method. This method has been sketched by Grigoriev et al. [17] and has been used for a "hand computation," but had not been implemented before. For our experiments we use the model already investigated in [5] and a higher dimensional model of the MAPK cascade. 2 The Systems for the Case Studies For our investigations we use models of the MAPK cascade that can be found in the Biomodels databaseFootnote 1 as numbers 26 and 28 [24]. We refer to those models as Biomod-26 and Biomod-28, respectively. 2.1 Biomod-26 Biomod-26, which we have studied also in [5], is given by the following set of differential equations. We have renamed the species names as \(x_1, \ldots , x_{11}\) and the rate constants as \(k_1, \ldots , k_{16}\) to facilitate reading: $$\begin{aligned} \dot{x}_1= & {} k_{2} x_{6} + k_{15} x_{11} - k_{1} x_{1} x_{4} - k_{16} x_{1} x_{5} \nonumber \\ \dot{x}_2= & {} k_{3} x_{6} + k_{5} x_{7} + k_{10} x_{9} + k_{13} x_{10} - x_{2} x_{5} (k_{11} + k_{12}) - k_{4} x_{2} x_{4}\nonumber \\ \dot{x}_3= & {} k_{6} x_{7} + k_{8} x_{8} - k_{7} x_{3} x_{5}\nonumber \\ \dot{x}_4= & {} x_{6} (k_{2} + k_{3}) + x_{7} (k_{5} + k_{6}) - k_{1} x_{1} x_{4} - k_{4} x_{2} x_{4}\nonumber \\ \dot{x}_5= & {} k_{8} x_{8} + k_{10} x_{9} + k_{13} x_{10} + k_{15} x_{11} - x_{2} x_{5} (k_{11} + k_{12}) - k_{7} x_{3} x_{5} - k_{16} x_{1} x_{5}\nonumber \\ \dot{x}_6= & {} k_{1} x_{1} x_{4} - x_{6} (k_{2} + k_{3})\nonumber \\ \dot{x}_7= & {} k_{4} x_{2} x_{4} - x_{7} (k_{5} + k_{6})\nonumber \\ \dot{x}_8= & {} k_{7} x_{3} x_{5} - x_{8} (k_{8} + k_{9})\nonumber \\ \dot{x}_9= & {} k_{9} x_{8} - k_{10} x_{9} + k_{11} x_{2} x_{5}\nonumber \\ \dot{x}_{10}= & {} k_{12} x_{2} x_{5} - x_{10} (k_{13} + k_{14})\nonumber \\ \dot{x}_{11}= & {} k_{14} x_{10} - k_{15} x_{11} + k_{16} x_{1} x_{5} \end{aligned}$$ The Biomodels database also gives us meaningful values for the rate constants, which we generally substitute into the corresponding systems for our purposes here: $$\begin{aligned} k_{1}&= 0.02,&k_{2}&= 1,&k_{3}&= 0.01,&k_{4}&= 0.032,\nonumber \\ k_{5}&= 1,&k_{6}&= 15,&k_{7}&= 0.045,&k_{8}&= 1,\nonumber \\ k_{9}&= 0.092,&k_{10}&= 1,&k_{11}&= 0.01,&k_{12}&= 0.01,\nonumber \\ k_{13}&= 1,&k_{14}&= 0.5,&k_{15}&= 0.086,&k_{16}&= 0.0011. 
\end{aligned}$$ Using the left-null space of the stoichiometric matrix under positive conditions as a conservation constraint [14] we obtain three linear conservation laws: $$\begin{aligned} x_{5} + x_{8} + x_{9} + x_{10} + x_{11}= & {} k_{17}, \nonumber \\ x_{4} + x_{6} + x_{7}= & {} k_{18},\nonumber \\ x_{1} + x_{2} + x_{3} + x_{6} + x_{7} + x_{8} + x_{9} + x_{10} + x_{11}= & {} k_{19}, \end{aligned}$$ where \(k_{17}\), \(k_{18}\), \(k_{19}\) are new constants computed from the initial data. Those constants are the parameters that we are interested in here. The steady state problem for the MAPK cascade can now be formulated as a real algebraic problem as follows. We replace the left hand sides of all equations in (1) with 0 and substitute the values from (2). This together with (3) yields a system of parametric polynomial equations with polynomials in \(\mathbb {Z}[k_{17},k_{18},k_{19}][x_1,\dots ,x_{11}]\). Since all entities in our model are strictly positive, we add to our system positivity conditions \(k_{17}>0\), \(k_{18}>0\), \(k_{19}>0\) and \(x_1>0\), ..., \(x_{11}>0\). In terms of first-order logic the conjunction over our equations and inequalities yields a quantifier-free Tarski formula. The system with number 28 in the Biomodels database is given by the following set of differential equations. Again, we have renamed the species names into \(x_1, \ldots , x_{16}\) and the rate constants into \(k_1, \ldots , k_{27}\) to facilitate reading: $$\begin{aligned} \dot{x}_{1}= & {} k_2 x_9 + k_8 x_{10} + k_{21} x_{15} + k_{26} x_{16} - k_1 x_1 x_5 - k_7 x_1 x_5 - k_{22} x_1 x_6 - k_{27} x_1 x_6 \nonumber \\ \dot{x}_{2}= & {} k_3 x_9 + k_5 x_7 + k_{24} x_{12} - k_4 x_2 x_5 - k_{23} x_2 x_6 \nonumber \\ \dot{x}_{3}= & {} k_9 x_{10} + k_{11} x_8 + k_{16} x_{13} + k_{19} x_{14} - k_{10} x_3 x_5 - k_{17} x_3 x_6 - k_{18} x_3 x_6 \nonumber \\ \dot{x}_{4}= & {} k_6 x_7 + k_{12} x_8 + k_{14} x_{11} - k_{13} x_4 x_6 \nonumber \\ \dot{x}_{5}= & {} k_2 x_9 + k_3 x_9 + k_5 x_7 + k_6 x_7 + k_8 x_{10} + k_9 x_{10} + k_{11} x_8 + k_{12} x_8 -\nonumber \\&\quad k_1 x_1 x_5 - k_4 x_2 x_5 - k_7 x_1 x_5 - k_{10} x_3 x_5 \nonumber \\ \dot{x}_{6}= & {} k_{14} x_{11} + k_{16} x_{13} + k_{19} x_{14} + k_{21} x_{15} + k_{24} x_{12} + k_{26} x_{16} - \nonumber \\&\quad k_{13} x_4 x_6 - k_{17} x_3 x_6 - k_{18} x_3 x_6 - k_{22} x_1 x_6 - k_{23} x_2 x_6 - k_{27} x_1 x_6 \nonumber \\ \dot{x}_{7}= & {} k_4 x_2 x_5 - k_6 x_7 - k_5 x_7 \nonumber \\ \dot{x}_{8}= & {} k_{10} x_3 x_5 - k_{12} x_8 - k_{11} x_8 \nonumber \\ \dot{x}_{9}= & {} k_1 x_1 x_5 - k_3 x_9 - k_2 x_9 \nonumber \\ \dot{x}_{10}= & {} k_7 x_1 x_5 - k_9 x_{10} - k_8 x_{10} \nonumber \\ \dot{x}_{11}= & {} k_{13} x_4 x_6 - k_{15} x_{11} - k_{14} x_{11} \nonumber \\ \dot{x}_{12}= & {} k_{23} x_2 x_6 - k_{25} x_{12} - k_{24} x_{12} \nonumber \\ \dot{x}_{13}= & {} k_{15} x_{11} - k_{16} x_{13} + k_{17} x_3 x_6 \nonumber \\ \dot{x}_{14}= & {} k_{18} x_3 x_6 - k_{20} x_{14} - k_{19} x_{14} \nonumber \\ \dot{x}_{15}= & {} k_{20} x_{14} - k_{21} x_{15} + k_{22} x_1 x_6 \nonumber \\ \dot{x}_{16}= & {} k_{25} x_{12} - k_{26} x_{16} + k_{27} x_1 x_6 \end{aligned}$$ The estimates of the rate constants given in the Biomodels database are: $$\begin{aligned} k_{1}&= 0.005,&k_{2}&= 1,&k_{3}&= 1.08,&k_{4}&= 0.025,\nonumber \\ k_{5}&= 1,&k_{6}&= 0.007,&k_{7}&= 0.05,&k_{8}&= 1,\nonumber \\ k_{9}&= 0.008,&k_{10}&= 0.005,&k_{11}&= 1,&k_{12}&= 0.45,\nonumber \\ k_{13}&= 0.045,&k_{14}&= 1,&k_{15}&= 0.092,&k_{16}&= 1,&\nonumber \\ k_{17}&= 0.01,&k_{18}&= 0.01,&k_{19}&= 
1,&k_{20}&= 0.5,&\nonumber \\ k_{21}&= 0.086,&k_{22}&= 0.0011,&k_{23}&= 0.01,&k_{24}&= 1,&\nonumber \\ k_{25}&= 0.47,&k_{26}&= 0.14,&k_{27}&= 0.0018. \end{aligned}$$ Again, using the left-null space of the stoichiometric matrix under positive conditions as a conservation constraint [14] we obtain the following: $$\begin{aligned} x_6 + x_{11} + x_{12} + x_{13}+ x_{14} + x_{15} + x_{16}= & {} k_{28}, \nonumber \\ x_5 + x_7 + x_8 + x_9 + x_{10}= & {} k_{29} ,\nonumber \\ x_1 + x_2 + x_3 + x_4 + x_7 + x_8 + x_9 + x_{10} + x_{11} + {}&\nonumber \\ \quad x_{12} + x_{13} + x_{14} + x_{15} + x_{16}= & {} k_{30}, \end{aligned}$$ where \(k_{28}\), \(k_{29}\), \(k_{30}\) are new constants computed from the initial data. We formulate the real algebraic problem as described at the end of Sect. 2.1. In particular, note that we need positivity conditions for all variables and parameters. 3 Graph-Theoretical Symbolic Preprocessing The complexity, primarily in terms of dimension, of polynomial systems obtained with steady-state approximations of biological models plus conservation laws is comparatively high for the application of symbolic methods. It is therefore highly relevant for the success of such methods to identify and exploit particular structural properties of the input. Our models have remarkably low total degrees with many linear monomials after some substitutions for rate constants. This suggests to preprocess with essentially Gaussian elimination in the sense of solving single suitable equations with respect to some variable and substituting the corresponding solution into the system. Generalizing this idea to situations where linear variables have parametric coefficients in the other variables requires, in general, a parametric variant of Gaussian elimination, which replaces the input system with a finite case distinction with respect to the vanishing of certain coefficients and one reduced system for each case. With Biomod-26 and Biomod-28 considered here it turns out that the positivity assumptions on the variables are strong enough to effectively guarantee the non-vanishing of all relevant coefficients so that case distinctions are never necessary. On the other hand, those positivity conditions establish an apparent obstacle, because we are formally not dealing with a parametric system of linear equations but with a parametric linear programming problem. However, here the theory of real quantifier elimination by virtual substitution tells us that it is sufficient that the inequality constraints play a passive role. Those constraints must be considered when substituting Gauss solutions from the equations, but otherwise can be ignored [22, 25]. Parametric Gaussian elimination can increase the degrees of variables in the parametric coefficient, in particular destroying their linearity and suitability to be used for further reductions. As an example consider the steady-state approximation, i.e., all left hand sides replaced with 0, of the system in (1), solving the last equation for \(x_5\), and substituting into the first equation. The natural question for an optimal strategy to Gauss-eliminate a maximal number of variables has been answered positively only recently [17]: draw a graph, where vertices are variables and edges indicate multiplication between variables within some monomial. Then one can Gauss-eliminate a maximum independent set, which is the complement of a minimum vertex cover. 
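As an added illustration (not part of the paper), the minimum vertex cover for Biomod-26 can be recomputed by brute force directly from system (1): the only products of distinct variables occurring there are x1*x4, x1*x5, x2*x4, x2*x5 and x3*x5, so the variable graph has exactly those five edges.

    from itertools import combinations

    # Variable graph of system (1): vertices x1..x11; an edge joins two variables
    # that are multiplied together in some monomial.
    edges = [(1, 4), (1, 5), (2, 4), (2, 5), (3, 5)]
    variables = list(range(1, 12))

    def covers(subset):
        return all(u in subset or v in subset for u, v in edges)

    # Smallest vertex cover by brute force; the graphs arising from these models are tiny.
    cover = next(set(c) for k in range(len(variables) + 1)
                 for c in combinations(variables, k) if covers(set(c)))
    print(cover)   # {4, 5}; its complement is a maximum independent set, eliminable linearly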
Figure 1 shows that graph for Biomod-26, where \(\{x_4,x_5\}\) is a minimal vertex cover, and all other variables can be linearly eliminated. Similarly, for Biomod-28 we find \(\{x_5,x_6\}\) as a minimum vertex cover. Recall that minimum vertex cover is one of Karp's 21 classical NP complete problems [21]. However, our instances considered here and instances to be expected from other biological models are so small that the use of existing approximation algorithms [16] appears unnecessary. We have used real quantifier elimination, which did not consume measurable CPU time; alternatively one could use integer linear programming or SAT-solving.

Fig. 1. The graph for Biomod-26 is loosely connected. Its minimum vertex cover \(\{x_4,x_5\}\) is small. All other variables form a maximum independent set, which can be eliminated with linear methods.

It is a most remarkable fact that a significant number of biological models in the databases have that property of loosely connected variables. This phenomenon resembles the well-known community structure of propositional satisfiability problems, which has been identified as one of the key structural reasons for the impressive success of state-of-the-art CDCL-based SAT solvers [15]. We conclude this section with the reduced systems as computed with our implementation in Redlog [11]. For Biomod-26 we obtain \(x_{5} >0\), \(x_{4} > 0\), \(k_{19} > 0\), \(k_{18}> 0\), \(k_{17} > 0\) and $$\begin{aligned} 1062444 k_{18} x_{4}^{2} x_{5} + 23478000 k_{18} x_{4}^{2} + 1153450 k_{18} x_{4} x_{5}^{2} + 2967000 k_{18} x_{4} x_{5} + 638825 k_{18} x_{5}^{3} + 49944500 k_{18} x_{5}^{2} - 5934 k_{19} x_{4}^{2} x_{5} - 989000 k_{19} x_{4} x_{5}^{2} - 1062444 x_{4}^{3} x_{5} - 23478000 x_{4}^{3} - 1153450 x_{4}^{2} x_{5}^{2} - 2967000 x_{4}^{2} x_{5} - 638825 x_{4} x_{5}^{3} - 49944500 x_{4} x_{5}^{2} &= 0,\\ 1062444 k_{17} x_{4}^{2} x_{5} + 23478000 k_{17} x_{4}^{2} + 1153450 k_{17} x_{4} x_{5}^{2} + 2967000 k_{17} x_{4} x_{5} + 638825 k_{17} x_{5}^{3} + 49944500 k_{17} x_{5}^{2} - 1056510 k_{19} x_{4}^{2} x_{5} - 164450 k_{19} x_{4} x_{5}^{2} - 638825 k_{19} x_{5}^{3} - 1062444 x_{4}^{2} x_{5}^{2} - 23478000 x_{4}^{2} x_{5} - 1153450 x_{4} x_{5}^{3} - 2967000 x_{4} x_{5}^{2} - 638825 x_{5}^{4} - 49944500 x_{5}^{3} &= 0.
\end{aligned}$$ For Biomod-28 we obtain \(x_{6} >0\), \(x_{5} > 0\), \(k_{30} > 0\), \(k_{29}> 0\), \(k_{28} > 0\) and $$\begin{aligned} 3796549898085 k_{29} x_{5}^{3} x_{6} + 71063292573000 k_{29} x_{5}^{3} + 106615407090630 k_{29} x_{5}^{2} x_{6}^{2}&\\ {}+ 479383905861000 k_{29} x_{5}^{2} x_{6} + 299076127852260 k_{29} x_{5} x_{6}^{3}&\\ {}+ 3505609439955600 k_{29} x_{5} x_{6}^{2} + 91244417457024 k_{29} x_{6}^{4}&\\ {}+ 3557586742819200 k_{29} x_{6}^{3} - 598701732300 k_{30} x_{5}^{3} x_{6}&\\ {} - 83232870778950 k_{30} x_{5}^{2} x_{6}^{2} - 185019487578700 k_{30} x_{5}x_{6}^{3}&\\ - 3796549898085 x_{5}^{4} x_{6} - 71063292573000 x_{5}^{4} - 106615407090630 x_{5}^{3} x_{6}^{2}&\\ {} - 479383905861000 x_{5}^{3} x_{6} - 299076127852260 x_{5}^{2} x_{6}^{3} - 3505609439955600 x_{5}^{2} x_{6}^{2}&\\ {}- 91244417457024 x_{5} x_{6}^{4} - 3557586742819200 x_{5} x_{6}^{3}= & {} 0, \\ 3796549898085 k_{28} x_{5}^{3} x_{6} + 71063292573000 k_{28} x_{5}^{3} + 106615407090630 k_{28} x_{5}^{2} x_{6}^{2}&\\ {}+ 479383905861000 k_{28} x_{5}^{2} x_{6} + 299076127852260 k_{28} x_{5} x_{6}^{3}&\\ {}+ 3505609439955600 k_{28} x_{5} x_{6}^{2} + 91244417457024 k_{28} x_{6}^{4}&\\ {}+ 3557586742819200 k_{28} x_{6}^{3} - 3197848165785 k_{30} x_{5}^{3} x_{6}&\\ {} - 23382536311680 k_{30} x_{5}^{2} x_{6}^{2} - 114056640273560 k_{30} x_{5} x_{6}^{3}&\\ {}- 91244417457024 k_{30} x_{6}^{4} - 3796549898085 x_{5}^{3} x_{6}^{2} - 71063292573000 x_{5}^{3} x_{6}&\\ {}- 106615407090630 x_{5}^{2} x_{6}^{3} - 479383905861000 x_{5}^{2} x_{6}^{2} - 299076127852260 x_{5} x_{6}^{4}&\\ {} - 3505609439955600 x_{5} x_{6}^{3} - 91244417457024 x_{6}^{5} - 3557586742819200 x_{6}^{4}= & {} 0. \end{aligned}$$ Notice that no complex positivity constraints come into existence with these examples. All corresponding substitution results are entailed by the other constraints, which is implicitly discovered by using the standard simplifier from [12] during preprocessing. 4 Determination of Multiple Steady States We aim to identify via grid sampling regions of parameter space where multistationarity occurs. Our focus is on the identification of regions with multiple positive real solutions for the parameters introduced with the conservation laws. We will encounter one or three such solutions and allow ourselves for biological reasons to assume monostability or bistability, respectively. Furthermore, a change in the number of solutions between one and three is indicative of a saddle-node bifurcation between a monostable and a bistable case. A mathematically rigorous treatment of stability would, possibly symbolically, analyze the eigenvalues of the Jacobian of the respective polynomial vector field. We consider two different approaches: first a polynomial homotopy continuation method implemented in Bertini, and second a combination of symbolic computation methods implemented in Maple. We compare the approaches with respect to performance and quality of results for both the reduced and the unreduced systems. 4.1 Numerical Approach We use the homotopy solver Bertini [1] in its standard configuration to compute complex roots. We parse the output of Bertini using Python, and determine numerically, which of the complex roots are real and positive using a threshold of \(10^{-6}\) for positivity. Computations are done in Python with Bertini embedded. For System Biomod-26 we produced the two plots in Fig. 2 using the original system and the two in Fig. 3 using the reduced system. The sampling range for \(k_{19}\) was from 200 to 1000 by 50. 
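The post-processing of each sample point therefore reduces to filtering and counting roots. The following minimal Python sketch shows that classification step under the stated \(10^{-6}\) threshold; it assumes the solver's solutions have already been parsed into lists of complex coordinates (the parsing itself and the variable names here are our own illustrative choices, not Bertini's interface).

```python
def count_positive_real(solutions, tol=1e-6):
    """Count solutions (lists of complex coordinates) that are real and
    positive: imaginary parts below tol in magnitude, real parts above tol."""
    count = 0
    for sol in solutions:
        if all(abs(z.imag) < tol and z.real > tol for z in sol):
            count += 1
    return count

# Hypothetical classification of one grid sample point:
solutions = [[0.31 + 1e-9j, 2.7 - 3e-10j], [0.5 + 0.2j, 1.1 + 0.1j]]
n = count_positive_real(solutions)
label = {1: "monostable", 3: "bistable"}.get(n, "numerical issue")  # 0 or 2 flag errors
print(n, label)
```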
In the left plots the sampling range for \(k_{17}\) is from 80 to 200 in steps of 10 with \(k_{18}\) fixed at 50. In the right plots the sampling range for \(k_{18}\) is from 5 to 75 in steps of 5 with \(k_{17}\) fixed at 100. We see two regions forming according to the number of fixed points: yellow discs indicate one fixed point and blue boxes three. The diamonds indicate numerical errors where zero (red) or two (green) fixed states were identified. We analyse these further in Sect. 4.3. For Biomod-28 we produced the two plots in Fig. 5 using the original system. The sampling range for \(k_{30}\) was from 100 to 1600 in steps of 100. In the left plots the sampling range for \(k_{28}\) is from 40 to 160 in steps of 10 with \(k_{29}\) fixed at 180. In the right plots the sampling range for \(k_{29}\) is from 120 to 240 in steps of 10 with \(k_{28}\) fixed at 100. The colours and shapes indicate the number of fixed points as before. For the reduced system Bertini (wrongly) could not find any roots (not even complex ones) for any of the parameter settings. The situation did not change when going from adaptive precision to a very high fixed precision. However, we have not attempted more sophisticated techniques such as providing user homotopies. We analyse these results further in Sect. 4.3.

4.2 Symbolic Approach

Our next approach will still use grid sampling, but each sample point will undergo a symbolic computation. The result will still be an approximate identification of the region (since the sampling is finite), but the results at those sample points will be guaranteed free of numerical errors. The computations follow the strategy introduced in [5, Sect. 2.1.2], which combined tools from the Regular Chains Library available for use in Maple. Regular chains are the triangular decompositions of systems of polynomial equations (triangular in terms of the variables in each polynomial). Highly efficient methods for working in complex space have been developed based on these (see [29] for a survey). We make use of recent work by Chen et al. [6] which adapts these tools to the real analogue: semi-algebraic systems. They describe algorithms to decompose any real polynomial system into finitely many regular semi-algebraic systems, both directly and by computation of components by dimension. The latter (the so-called lazy variant) was key to solving the 1-parameter MAPK problem in [5]. However, for the zero-dimensional computations of this paper there is only one solution component and so no savings from lazy computations. For a given system and sample point we apply the real triangularization (RT) to the quantifier-free formula (as described at the end of Sect. 2.1: a quantifier-free conjunction of equalities and inequalities) evaluated with the parameter estimates and sample point values. This produces a simplified system in several senses. First, as guaranteed by the algorithm, the output is triangular according to a variable ordering: there is a univariate component, then a bivariate component introducing one more variable, and so on. Secondly, for all the MAPK models we have studied so far, all but the final (univariate) equation have been linear in their main variable, which allows for easy back substitution. Thirdly, most of the positivity conditions are implied by the output rather than being an explicit part of it, in which case a simpler sub-system can be solved and back substitution performed instantly.

Biomod-26.
For the original version of Biomod-26 the output of RT was a component consisting of 11 equations and a single inequality. The equations were in ascending main variable according to the provided ordering (the same as the labelling). All but the final equation are linear in their main variable, with the final equation being univariate and of degree 6 in \(x_1\). The output of the triangularization requires that this variable be positive, \(x_1>0\), with the positivity of the other variables implied by solutions to the system. So to proceed we must find the positive real roots of the degree 6 univariate polynomial in \(x_1\): counting these gives the number of real positive solutions of the parent system. We do this using the root isolation tools in the Regular Chains Library. This whole process was performed iteratively for the same sampling regime as Bertini used, to produce Fig. 4. We repeated the process on the reduced version of the system. The triangularization again reduced the problem to univariate real root isolation, this time with only one back substitution step needed. As is to be expected from a fully symbolic computation, the output is identical and so again represented by Fig. 4. However, the computation was significantly quicker with this reduced system. More details are given in the comparison in Sect. 4.3.

Biomod-28.

The same process was conducted on Biomod-28. As with Biomod-26, the system was triangular with all but the final equation linear in its main variable; this time the final equation is of degree 8. However, unlike Biomod-26, two positivity conditions were returned in the output, meaning we must solve a bivariate problem before we can back substitute to the full system. Rather than just perform univariate real root isolation, we must build a Cylindrical Algebraic Decomposition (CAD) (see, e.g., [4] and the references within) sign-invariant for the final two equations and interrogate its cells to find those where the equations are satisfied and the variables positive. Counting these, we always find 1 or 3 cells, with the latter indicating bistability. This is similar to the approach used in [5], although in that case the 2D CAD was for one variable and one parameter. We used the implementation of CAD in the Regular Chains Library [3, 7], with the results producing the plots in Fig. 6. For the reduced system we proceeded similarly. A 2D CAD still needed to be produced after triangularization, and so in this case there was no reduction in the number of equations to study with CAD via back substitution. However, it was still beneficial to pre-process the CAD with real triangularization: the average time per sample point with pre-processing (including the time taken to pre-process) was 0.485 s, while without it it was 3.577 s.

Fig. 2. Bertini grid sampling on the original version of Biomod-26 (see Sect. 4.1).

Figures 2, 3, and 4 all refer to Biomod-26. The latter, produced using the symbolic techniques in Maple, is guaranteed free of numerical error. We see that computing with the reduced system rather than the original system allowed Bertini to avoid such errors: the rogue red and green diamonds in Fig. 2. However, in the case of Biomod-28 the reduction led to catastrophic effects for Bertini: built-in heuristics quickly (and wrongly) concluded that there are no zero-dimensional solutions for the system, and when switching to a positive-dimensional run no solutions could be found either.
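Returning to the symbolic pipeline described above: for Biomod-26 the decision at each sample point reduces to counting positive real roots of a single univariate polynomial. The following sympy sketch illustrates that counting step only; the polynomial is a stand-in chosen for illustration, not the actual output of the triangular decomposition, and the root isolation in the paper is done with the Regular Chains tools rather than sympy.

```python
import sympy as sp

x1 = sp.symbols("x1")

def positive_real_root_count(poly_expr, var, tol=0):
    """Count positive real roots of a univariate polynomial, the step
    applied after real triangularization and back substitution."""
    return sum(1 for r in sp.real_roots(sp.Poly(poly_expr, var)) if r.evalf() > tol)

# Toy degree-6 polynomial standing in for the final univariate equation:
p = x1**6 - 6*x1**4 + 9*x1**2 - 1
print(positive_real_root_count(p, x1))   # prints 3 for this toy case
```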
Fig. 3. Bertini grid sampling on the reduced version of Biomod-26 (see Sect. 4.1).

Fig. 4. Maple grid sampling on Biomod-26 (see Sect. 4.2).

Fig. 5. Bertini grid sampling on the original version of Biomod-28 (see Sect. 4.1).

Table 1. Timing data (in seconds) of the grid samplings described in Sect. 4. Numerical computation is using Bertini; symbolic computation is using Maple Regular Chains.

As Fig. 6 but with a higher sampling rate.

Bertini computations (v1.5.1) were carried out on a Linux 64-bit desktop PC with an Intel i7. Maple computations (v2016 with April 2017 Regular Chains) were carried out on a Windows 7 64-bit desktop PC with an Intel i5. For Biomod-26 the pairs of plots together contain 476 sample points. Table 1 shows timing data. We see that both Bertini and Maple benefited from the reduced system: Bertini took a third of the original time, while the speedup for Maple was even greater: a tenth of the original. Also, perhaps surprisingly, the symbolic methods were quicker than the numerical ones here. For Biomod-28 the speed-up enjoyed by the symbolic methods was even greater (almost 100-fold). However, for this system Bertini was significantly faster. The symbolic methods used are well known for their doubly exponential computational complexity (in the number of variables), so it is not surprising that the balance of the comparison shifts as the system size increases. We see some other statistical data for the timings in Maple: the standard deviation of the timings is fairly modest, but in each row there are outliers many multiples of the mean value, and so the median is always a little less than the mean.

4.4 Going Further

Of course, we could increase the sampling density to get an improved idea of the bistability region, as in Figs. 7 and 8. However, a greater understanding comes with 3D sampling. We have performed this using the symbolic approach described above, at a cost that grows linearly with the increased number of sample points. This was completed for Biomod-26: the region in question is bounded on both sides in the \(k_{17}\) and \(k_{18}\) directions but extends infinitely above in \(k_{19}\). With the \(k_{19}\) range bounded at 1000, the region is bounded by extending \(k_{17}\) to 800 and \(k_{18}\) to 600. For obtaining exact bounds (in one parameter) see [5]. Sampling in steps of 20 for \(k_{17}\) and \(k_{18}\) and steps of 50 for \(k_{19}\) produced a Maple point plot of 20,400 points in 18 min. Figure 9 shows 2D captures of the 3D bistable points and Fig. 10 the convex hull of these, produced using the convex package. We note that the lens shape seen in the orientation of the left plots is comparable with the image in the original paper of Markevich et al. [26, Fig. S7].

Fig. 9. 3D Maple point plot produced by grid sampling on Biomod-26 (see Sect. 4.4).

Fig. 10. Convex hull of the bistable points in Fig. 9.

5 Conclusion and Future Work

We described a new graph-theoretical symbolic preprocessing method to reduce problems from the MAPK network. We experimented with two systems and found the reduction offered computation savings to both numerical and symbolic approaches for the determination of multistationarity regions of parameter space. In addition, the reduction avoided instability from rounding errors in the numerical approach to one system, but uncovered major problems in that approach for the other.
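As a small companion to the 3D sampling of Sect. 4.4, the sketch below shows one way the convex hull of bistable sample points (as in Fig. 10) could be computed in Python with scipy; the random points are stand-ins for the actual \((k_{17}, k_{18}, k_{19})\) samples, and the paper itself used the Maple convex package rather than scipy.

```python
import numpy as np
from scipy.spatial import ConvexHull

# bistable_pts: N x 3 array of (k17, k18, k19) samples classified as
# bistable (3 positive steady states); random stand-in values here.
rng = np.random.default_rng(0)
bistable_pts = rng.uniform([80, 5, 200], [800, 600, 1000], size=(500, 3))

hull = ConvexHull(bistable_pts)
print("hull vertices:", len(hull.vertices), "volume:", hull.volume)
```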
An interesting side result is that, at least for the smaller system, the symbolic approach can compete with and even outperform the numerical one, demonstrating how far such methods have progressed in recent years. In future work we intend to combine the results of the present paper and our recent publication [5] to generate symbolic descriptions of the bistability region beyond the 1-parameter case. Other possible routes to achieve this are to consider the effect of the various degrees of freedom within the algorithms used. For example, we have a free choice of variable ordering: Biomod-26 has 11 variables, corresponding to 39 916 800 possible orderings, while Biomod-28 has 16 variables, corresponding to more than \(10^{13}\) orderings. Heuristics exist to help with this choice [10] and machine learning may be applicable [19]. Also, since MAPK problems contain many equational constraints, an approach as described in [13] may be applicable when higher-dimensional CADs are needed.

Footnotes: 1. http://www.ebi.ac.uk/biomodels-main/ 2. http://www.regularchains.org/ 3. http://www-home.math.uwo.ca/~mfranz/convex/

References

1. Bates, D.J., Hauenstein, J.D., Sommese, A.J., Wampler, C.W.: Bertini: software for numerical algebraic geometry. doi:10.7274/R0H41PB5
2. Bhalla, U.S., Iyengar, R.: Emergent properties of networks of biological signaling pathways. Science 283(5400), 381–387 (1999)
3. Bradford, R., Chen, C., Davenport, J.H., England, M., Moreno Maza, M., Wilson, D.: Truth table invariant cylindrical algebraic decomposition by regular chains. In: Gerdt, V.P., Koepf, W., Seiler, W.M., Vorozhtsov, E.V. (eds.) CASC 2014. LNCS, vol. 8660, pp. 44–58. Springer, Cham (2014). doi:10.1007/978-3-319-10515-4_4
4. Bradford, R., Davenport, J., England, M., McCallum, S., Wilson, D.: Truth table invariant cylindrical algebraic decomposition. J. Symb. Comput. 76, 1–35 (2016)
5. Bradford, R., Davenport, J., England, M., Errami, H., Gerdt, V., Grigoriev, D., Hoyt, C., Kosta, M., Radulescu, O., Sturm, T., Weber, A.: A case study on the parametric occurrence of multiple steady states. In: Proceedings of ISSAC 2017, pp. 45–52. ACM (2017)
6. Chen, C., Davenport, J., May, J., Moreno Maza, M., Xia, B., Xiao, R.: Triangular decomposition of semi-algebraic systems. J. Symb. Comput. 49, 3–26 (2013)
7. Chen, C., Moreno Maza, M., Xia, B., Yang, L.: Computing cylindrical algebraic decomposition via triangular decomposition. In: Proceedings of ISSAC 2009, pp. 95–102. ACM (2009)
8. Conradi, C., Mincheva, M.: Catalytic constants enable the emergence of bistability in dual phosphorylation. J. Roy. Soc. Interface 11(95) (2014)
9. Conradi, C., Flockerzi, D., Raisch, J.: Multistationarity in the activation of a MAPK: parametrizing the relevant region in parameter space. Math. Biosci. 211(1), 105–131 (2008)
10. Dolzmann, A., Seidl, A., Sturm, T.: Efficient projection orders for CAD. In: Proceedings of ISSAC 2004, pp. 111–118. ACM (2004)
11. Dolzmann, A., Sturm, T.: Redlog: computer algebra meets computer logic. ACM SIGSAM Bull. 31(2), 2–9 (1997)
12. Dolzmann, A., Sturm, T.: Simplification of quantifier-free formulae over ordered fields. J. Symb. Comput. 24(2), 209–231 (1997)
13. England, M., Bradford, R., Davenport, J.: Improving the use of equational constraints in cylindrical algebraic decomposition. In: Proceedings of ISSAC 2015, pp. 165–172. ACM (2015)
14. Famili, I., Palsson, B.Ø.: The convex basis of the left null space of the stoichiometric matrix leads to the definition of metabolically meaningful pools. Biophys. J. 85(1), 16–26 (2003)
15. Girvan, M., Newman, M.E.J.: Community structure in social and biological networks. Proc. Natl. Acad. Sci. USA 99(12), 7821–7826 (2002)
16. Grandoni, F., Könemann, J., Panconesi, A.: Distributed weighted vertex cover via maximal matchings. ACM Trans. Algorithms 5(1), 1–12 (2008)
17. Grigoriev, D., Samal, S.S., Vakulenko, S., Weber, A.: Algorithms to study large metabolic network dynamics. Math. Model. Nat. Phenom. 10(5), 100–118 (2015)
18. Gross, E., Davis, B., Ho, K.L., Bates, D.J., Harrington, H.A.: Numerical algebraic geometry for model selection and its application to the life sciences. J. Roy. Soc. Interface 13(123) (2016)
19. Huang, Z., England, M., Wilson, D., Davenport, J.H., Paulson, L.C., Bridge, J.: Applying machine learning to the problem of choosing a heuristic to select the variable ordering for cylindrical algebraic decomposition. In: Watt, S.M., Davenport, J.H., Sexton, A.P., Sojka, P., Urban, J. (eds.) CICM 2014. LNCS, vol. 8543, pp. 92–107. Springer, Cham (2014). doi:10.1007/978-3-319-08434-3_8
20. Joshi, B., Shiu, A.: A survey of methods for deciding whether a reaction network is multistationary. Math. Model. Nat. Phenom. 10(5), 47–67 (2015)
21. Karp, R.M.: Reducibility among combinatorial problems. In: Complexity of Computer Computations, pp. 85–103. Plenum Press, New York (1972)
22. Košta, M.: New concepts for real quantifier elimination by virtual substitution. Doctoral dissertation, Saarland University, Germany, December 2016
23. Legewie, S., Schoeberl, B., Blüthgen, N., Herzel, H.: Competing docking interactions can bring about bistability in the MAPK cascade. Biophys. J. 93(7), 2279–2288 (2007)
24. Li, C., Donizelli, M., Rodriguez, N., Dharuri, H., Endler, L., Chelliah, V., Li, L., He, E., Henry, A., Stefan, M.I., Snoep, J.L., Hucka, M., Le Novère, N., Laibe, C.: BioModels database: an enhanced, curated and annotated resource for published quantitative kinetic models. BMC Syst. Biol. 4, 92 (2010)
25. Loos, R., Weispfenning, V.: Applying linear quantifier elimination. Comput. J. 36(5), 450–462 (1993)
26. Markevich, N.I., Hoek, J.B., Kholodenko, B.N.: Signaling switches and bistability arising from multisite phosphorylation in protein kinase cascades. J. Cell Biol. 164(3), 353–359 (2004)
27. Pérez Millán, M., Turjanski, A.G.: MAPK's networks and their capacity for multistationarity due to toric steady states. Math. Biosci. 262, 125–137 (2015)
28. Rashevsky, N.: Mathematical Biophysics: Physico-Mathematical Foundations of Biology. Dover, New York (1960)
29. Wang, D.: Elimination Methods. Springer, Heidelberg (2000)
30. Zumsande, M., Gross, T.: Bifurcations and chaos in the MAPK signaling cascade. J. Theoret. Biol. 265(3), 481–491 (2010)

Acknowledgments. D. Grigoriev is grateful to the grant RSF 16-11-10075. H. Errami, O. Radulescu, and A. Weber thank the French-German Procope-DAAD program for partial support of this research. M. England and T. Sturm are grateful to EU H2020-FETOPEN-2015-CSA 712689 SC\(^{2}\). Research Data Statement: Data supporting the research in this paper is available from doi:10.5281/zenodo.807678.
Author information: Matthew England (Fac. Engineering, Environment & Computing, Coventry University, Coventry, UK); Hassan Errami and Andreas Weber (Institut für Informatik II, Universität Bonn, Bonn, Germany); Dima Grigoriev (CNRS, Mathématiques, Université de Lille, Villeneuve d'Ascq, France); Ovidiu Radulescu (DIMNP UMR CNRS/UM 5235, University of Montpellier, Montpellier, France); Thomas Sturm (University of Lorraine, CNRS, Inria, and LORIA, Nancy, France; MPI Informatics and Saarland University, Saarbrücken, Germany). Correspondence to Thomas Sturm.

Cite this paper as: England, M., Errami, H., Grigoriev, D., Radulescu, O., Sturm, T., Weber, A. (2017). Symbolic Versus Numerical Computation and Visualization of Parameter Regions for Multistationarity of Biological Networks. In: Gerdt, V., Koepf, W., Seiler, W., Vorozhtsov, E. (eds) Computer Algebra in Scientific Computing. CASC 2017. Lecture Notes in Computer Science, vol. 10490. Springer, Cham. https://doi.org/10.1007/978-3-319-66320-3_8
Published: 02 May 2018

Wideband partial response CPM demodulation via multirate frequency transformations and decision feedback equalization

Wenjing Liu & Balu Santhanam

Continuous phase modulation (CPM) is a popular frequency modulation technique used in mobile communications due to its power efficiency and constant modulus properties. Conventional narrowband CPM demodulation employs the Viterbi algorithm after phase demodulation and requires that the phase states be rational and contain additive white noise. The complexity of the Viterbi approach further increases with the number of phase states. Frequency discrimination approaches that estimate the instantaneous frequency provide a simpler suboptimal approach but are primarily for full response CPM and are not well known for wideband partial response CPM. In this paper, we investigate an approach that combines multirate frequency transformations for wideband CPM demodulation with decision feedback equalization for memory removal. This combined approach avoids the problems of complexity and restrictive requirements of the Viterbi approach. Simulation results are used to demonstrate the validity of the combined approach.

Continuous phase modulation (CPM) [1–3] is a popular form of frequency modulation employed in mobile communications [4] and has desirable spectral efficiency [5] and constant modulus properties that facilitate the use of class-C amplifiers. Gaussian Minimum Shift Keying (GMSK), a specific form of CPM, is the main ingredient in the Global System for Mobile communications (GSM) [2, 3] used in GPS applications. Pragmatic CPM modulation schemes have recently been studied as capacity-attaining, low-complexity alternatives to serially concatenated CPM [6]. The conventional demodulation technique used for narrowband signals is phase demodulation [7] followed by unwrapping and maximum likelihood sequence estimation (MLSE) using the Viterbi algorithm [2, 3, 8]. This approach has a complexity that grows exponentially with the number of phase states, and it imposes restrictions on the modulation index, which needs to be a ratio of relatively prime integers. Other frequency discrimination approaches [9] rely upon instantaneous frequency (IF) estimation and are not subject to the restrictions required by the Viterbi algorithm, but they have not been investigated for wideband CPM with memory. In prior work [10, 11], it was shown that frequency discrimination for full response CPM demodulation has the same performance as that of binary phase-shift keying (BPSK) detection in additive white Gaussian noise (AWGN). Further, in recent work [12], frequency tracking-based wideband FM demodulation was extended to large wideband to narrowband conversion factors using multirate frequency transformations (MFT). Frequency estimation-based approaches have the added advantage that they are immune to phase distortions introduced by the channel, which would adversely affect the Viterbi approach. In addition, these approaches do not require prior knowledge of the carrier frequency. In this paper, we investigate an approach that combines the MFT and energy separation algorithm (ESA) [13, 14] with decision feedback equalization (DFE) to effectively demodulate wideband CPM with memory. The wideband CPM signal is first converted into narrowband via the MFT and then demodulated by the ESA to obtain the IF estimates.
The DFE is eventually applied to equalize the partial response channel generated from the IF estimates in order to remove the CPM memory introduced by partial response signaling. Simulation results are used to verify the efficacy of the combined approach for both wideband binary CPM and multilevel CPM.

Wideband CPM demodulation

AM-FM signal model

Monocomponent amplitude-modulation frequency-modulation (AM-FM) signals are expressed in the form of time-varying sinusoids by

$$ s(t) = a(t)\cos\left(\int_{-\infty}^{t}\omega_{i}(\tau)\, d\tau + \theta_{1}\right), $$

where the instantaneous amplitude (IA) is denoted by a(t) and the instantaneous frequency (IF) is given by

$$ \omega_{i}(t) = \omega_{c} + \omega_{m} q_{i}(t). $$

Note that $\omega_{c}$ is the carrier (or mean) frequency, and $q_{i}(t)$ is the normalized baseband-modulated signal. Specifically, for sinusoidal FM, where a(t) remains a constant A and $q_{i}(t)$ becomes a sinusoid, the IF can be further expressed as

$$ \omega_{i}(t) = \omega_{c} + \omega_{m}\cos(\omega_{f}t + \theta_{2}). $$

Sinusoidal FM signals can be expressed via:

$$ s(t) = A\sum_{n=-\infty}^{+\infty} J_{n}(\beta)\cos(\omega_{c}t + n\omega_{m}t), $$

where $J_{n}$ is the nth order cylindrical Bessel function of the first kind. The modulation index of sinusoidal FM is defined as the ratio $\beta = \omega_{m}/\omega_{f}$ and the associated Carson bandwidth is given by

$$ B = 2(\beta + 1)\omega_{f}. $$

If $\beta \gg 1$, the signal corresponds to wideband FM according to the literature on FM communication systems. In addition, the carrier-to-information-bandwidth ratio (CR/IB) and the carrier-to-frequency-deviation ratio (CR/FD) are defined respectively as:

$$ \frac{\text{CR}}{\text{IB}} = \frac{\omega_{c}}{\omega_{f}},~~~ \frac{\text{CR}}{\text{FD}} = \frac{\omega_{c}}{\omega_{m}}. $$

Wideband FM is a popular modulation technique for satellite communications due to its ability to deal with trans-ionospheric distortion. A particular form of digital FM, multilevel Gaussian frequency-shift keying (FSK), has been proposed as a solution for high bandwidth satellite communications [15–17].

Continuous phase modulation

CPM can be viewed as a specific form of FM. The standard CPM model depends on its pulse shaping function p(t), with a duration of L symbol periods, and on the modulating symbols, e.g., binary PAM symbols $a[k] \in \{-1,1\}$. The IF signal takes the form [2, 3]:

$$ \omega_{i}(t) = \omega_{c} + 2\pi h \sum_{k=-\infty}^{\infty} a[k]\, p(t - kT_{b}), $$

where $\omega_{c}$ is the carrier frequency and h is the modulation index of CPM. Note that in the Viterbi algorithm, the modulation index must be rational, i.e., restricted to the form:

$$ h = \frac{m}{p}, $$

where m and p are relatively prime positive integers. The deviation of the phase from the carrier phase is given by:

$$ \phi_{\text{dev}}(t;a) = 2\pi h \sum_{k=-\infty}^{\infty} a[k]\, q(t - kT_{b}), $$

where p(t) denotes the pulse shaping function, which is usually normalized and defined on the interval $[0, LT_{b}]$, and q(t) is the corresponding phase pulse shaping function defined by

$$ q(t) = \int_{0}^{t} p(\tau)\, d\tau. $$

In general, the following conditions are satisfied by the choice of p(t):

$$ p(t) = p(LT_{b} - t), $$

$$ q(t) = \int_{0}^{t} p(\tau)\, d\tau = \frac{1}{2},~~t \geq LT_{b}. $$

If p(t) is a rectangular pulse, then this form of CPM is referred to as (L-REC) CPM, and if p(t) is a raised cosine pulse, then it is referred to as (L-RAC) CPM.
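As a concrete illustration of the IF model above, the following numpy sketch generates the instantaneous frequency of an L-REC CPM signal by superposing shifted, symbol-weighted rectangular pulses normalized so that $q(LT_b) = 1/2$. The function name and the discretization are our own choices; the example parameters ($T_b = 1$ s, $f_s = 50$ Hz, $f_c = 12$ Hz) follow the simulation settings quoted later in the paper.

```python
import numpy as np

def lrec_cpm_if(symbols, h, fc, L, Tb, fs):
    """Instantaneous frequency of L-REC CPM: w_i(t) = 2*pi*fc + 2*pi*h * sum_k a[k] p(t - k*Tb),
    with the rectangular pulse p(t) = 1/(2*L*Tb) on [0, L*Tb]."""
    spb = int(Tb * fs)                       # samples per symbol
    t = np.arange(len(symbols) * spb) / fs
    p = np.ones(L * spb) / (2 * L * Tb)      # normalized rectangular pulse
    f_dev = np.zeros_like(t)
    for k, a in enumerate(symbols):          # superpose shifted pulses
        start = k * spb
        stop = min(start + L * spb, len(t))
        f_dev[start:stop] += a * p[: stop - start]
    return 2 * np.pi * fc + 2 * np.pi * h * f_dev, t

symbols = np.random.choice([-1, 1], size=20)          # binary CPM symbols
omega_i, t = lrec_cpm_if(symbols, h=4, fc=12.0, L=3, Tb=1.0, fs=50.0)
```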
The CPM signal is then obtained via frequency modulation:

$$ r(t) = A\cos\left(\int_{-\infty}^{t}\omega_{i}(\tau)\, d\tau + \theta \right). $$

Usually, the modulation index of a wideband CPM signal is large, such that the frequency deviation of its IF is comparable to the carrier frequency. Using a pulse shaping function with duration larger than a symbol period (L>1), i.e., partial response signaling, introduces memory into the modulation scheme but results in a significant increase in complexity for demodulation and detection. The memory introduced by CPM depends on the type of its pulse shaping function p(t) and the corresponding duration length L. Since p(t) is defined over the interval $[0, LT_{b}]$, for $nT_{b} < t \leq (n+1)T_{b}$, Eq. 7 can be rewritten as

$$ \omega_{i}(t) = \omega_{c} + 2\pi h \sum_{k=n-L+1}^{n} a[k]\, p(t - kT_{b}). $$

Given the carrier frequency and modulation index, the information conveyed in the IF within the current symbol period depends on the most recent L symbols and the waveform of p(t).

Energy separation algorithm

The energy separation algorithm (ESA), as summarized in [13, 14], is based on the Teager-Kaiser energy operator $\Psi[x(t)] = \dot x^{2}(t) - x(t)\ddot x(t)$ and is widely used for monocomponent AM-FM demodulation, for example, to analyze the oscillation of signals with time-varying amplitude and frequency. The IA a(t) and the IF $\omega_{i}(t)$ of an AM-FM signal x(t) can be estimated via the continuous ESA (CESA) summarized by

$$ \frac{\Psi[x(t)]}{\sqrt{\Psi[\dot x(t)]}} \approx |a(t)|, $$

$$ \sqrt{\frac{\Psi[\dot x(t)]}{\Psi[x(t)]}} \approx \omega_{i}(t), $$

where we assume that the IA a(t) and the IF $\omega_{i}(t)$ do not vary too fast or too greatly in value compared to the carrier frequency $\omega_{c}$.

Carrier frequency and amplitude estimation

The IF estimate of the MFT-ESA in the specific case of CPM takes the form of:

$$ \hat{\omega}_{i}(t) = \omega_{c} + 2\pi h\sum_{k=-\infty}^{\infty} a[k]\, h_{f}(t - kT_{b}) + \epsilon_{\omega}(t), $$

where $h_{f}(t)$ corresponds to the pulse shaping function and $\epsilon_{\omega}(t)$ corresponds to zero-mean IF noise, which, unlike the observation noise, is not white. Assuming equiprobable symbols and taking expectations on both sides yields:

$$ E\{\hat{\omega}_{i}(t)\} = \omega_{c}. $$

The carrier frequency and the amplitude of the AM-FM signal can then be estimated from the IF and IA estimates from either algorithm by simple averaging:

$$\begin{aligned} \hat{\omega}_{c} &= \frac{1}{T}\int_{0}^{T}\hat{\omega}_{i}(t)\, dt, \\ \hat{A} &= \frac{1}{T}\int_{0}^{T}\hat{a}_{i}(t)\, dt. \end{aligned}$$

This is a consequence of the fact that these approaches are bandpass estimation approaches, whereas traditional in-phase and quadrature demodulation, employed in narrowband communication systems, is a baseband estimation approach requiring prior knowledge of the carrier frequency.

Multirate frequency transformations

The performance of the ESA or any other demodulation technique directly applied to wideband FM or CPM signals is poor due to the narrowband constraint, as demonstrated in prior work [12]. In recent work of the authors, frequency transformations enacted via multirate signal processing as shown in Fig. 1 were used for wideband FM to narrowband FM conversion to enable a wider range of wideband FM signals [10, 12] and were also extended to two-dimensional images [18] via a multidimensional energy operator [19].
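Before turning to the details of the MFT, here is a minimal numerical rendering of the CESA equations above, approximating the derivatives with finite differences; this is our own sketch, not the authors' exact discrete implementation. The sanity check deliberately oversamples the carrier so that the finite-difference derivatives are accurate.

```python
import numpy as np

def teager(x, dt):
    # Continuous-form Teager-Kaiser operator Psi[x] = x'^2 - x*x'',
    # approximated with numerical derivatives.
    dx = np.gradient(x, dt)
    ddx = np.gradient(dx, dt)
    return dx ** 2 - x * ddx

def esa(x, fs):
    # CESA estimates: |a(t)| and omega_i(t) in rad/s.
    dt = 1.0 / fs
    psi_x = teager(x, dt)
    psi_dx = teager(np.gradient(x, dt), dt)
    eps = 1e-12                                   # guard against division by zero
    ia = psi_x / np.sqrt(np.abs(psi_dx) + eps)
    omega = np.sqrt(np.abs(psi_dx) / (np.abs(psi_x) + eps))
    return ia, omega

# Sanity check on a pure carrier (oversampled for derivative accuracy):
fs, fc = 2000.0, 12.0
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * fc * t)
ia, omega = esa(x, fs)
print(ia[100:105], omega[100:105] / (2 * np.pi))  # approx. 1 and approx. 12 Hz away from the edges
```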
The goal of the multirate processing module is to compress the bandwidth of the FM signal; however, this is accompanied by a reduction in the carrier frequency of the FM signal. To compensate, a heterodyning module that translates the FM signal in frequency is introduced. After the multirate-heterodyne combination, the CR/IB and CR/FD of the transformed signal are constrained to a range where standard narrowband monocomponent FM demodulation algorithms work optimally. The MFT framework can be combined with a variety of demodulation techniques, such as the Hilbert transform demodulation algorithm (HTDA) [20] or the ESA, to improve the demodulation performance for wideband FM signals. In particular, the ESA combined with the MFT approach will be employed in this paper.

Fig. 1. Block diagram of the basic MFT framework. The wideband signal is first sampled above the Nyquist rate, interpolated by a factor R, and then heterodyned by multiplication with cos($\omega_{d} n$), followed by a discrete FIR bandpass filter with a passband gain to achieve the MFT. Then, it goes through a demodulation block to generate IF estimates of the compressed heterodyned signal. To obtain the IF of the original signal, the compressed heterodyned IF is then shifted back by subtracting $\omega_{d}$, decimated by R, and scaled back appropriately, followed by the DAC module.

The MFT framework specifically allows for demodulation of wideband CPM signals with a large modulation depth, as in the examples shown in subsequent sections. Since the approach is based on IF estimation, it does not encounter the complexity problems or restrictions related to rational modulation depth seen in the Viterbi algorithm.

Memory removal of partial response CPM

Partial response channel

The memory introduced by partial response signaling linearly distorts the transmitted signal and results in intersymbol interference (ISI) for the IF of the CPM signal within each symbol period. If precise IF estimates of the CPM signal are accessible, the recovery of the original transmitted sequence is similar to equalization of an ISI channel. A discrete time-invariant channel is generally expressed as

$$ y(t) = \sum^{\infty}_{m=-\infty} h_{m}\, x(t-m) + n(t), $$

where $h_{m}$ is the impulse response, x(t) is the input sequence of the channel, and n(t) is the noise of the channel. In terms of the normalized IF within each symbol period, the partial response channel is a special case of the ISI channel that has a finite-length impulse response and is also causal and monic with $h_{0}=1$, according to Eq. 14. For example, the discrete-time partial response channel using L-REC CPM can be modeled as

$$ m(t) = \sum_{k=0}^{L-1} s(t-k) + z(t), $$

where s(t) is the original transmitted symbol sequence and z(t) is the noise term averaged from the noise of the IF estimates within each symbol period. Unlike the baseband channel assuming additive white Gaussian noise (AWGN), the noise term z(t) for this partial response channel of CPM is not AWGN, since the noise present in the IF estimates cannot be guaranteed to be AWGN by any IF demodulation approach. The output sequence m(t) of the partial response channel can be obtained by demodulating the CPM signal and making decisions on the normalized demodulated IF within each symbol period. Therefore, our goal becomes recovering the original transmitted sequence from the output of the partial response channel.
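A small sketch of the L-REC channel model in Eq. (21) may help fix ideas: the noiseless output is a length-L moving sum of the transmitted symbols, so the "mixed symbols" live on an enlarged alphabet. The helper names and the noise stand-in below are our own illustrative choices.

```python
import numpy as np
from itertools import product

def mixed_symbol_alphabet(alphabet, L):
    """All values the noiseless output m(t) = s(t) + ... + s(t-L+1)
    can take for L-REC partial response signaling (Eq. 21)."""
    return sorted({sum(c) for c in product(alphabet, repeat=L)})

def lrec_channel(s, L, noise_std=0.0, rng=None):
    """Simulate m(t) of Eq. (21): a length-L moving sum of the symbols plus
    additive noise standing in for the IF-estimation error z(t)."""
    rng = rng or np.random.default_rng()
    m = np.convolve(s, np.ones(L))[: len(s)]
    return m + noise_std * rng.standard_normal(len(s))

print(mixed_symbol_alphabet([-1, 1], 3))         # binary 3-REC: [-3, -1, 1, 3]
print(mixed_symbol_alphabet([-3, -1, 1, 3], 3))  # multilevel 3-REC mixed symbols
```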
Partial response channel estimation via the recursive prediction error method

The channel response is required by general equalization approaches in order to remove the memory of the partial response CPM. In this paper, we perform the channel estimation via the recursive prediction error method based on the ARMAX model, as described in [21]. The partial response channel, depicted for example in Eq. (21), is actually a monic moving-average (MA) process. We are capable of estimating the channel response by fitting the output of the partial response channel to an ARMAX model. In general, the structure of the ARMAX model is described by

$$ \sum_{k=0}^{n_{a}} a_{k}\, y[t-k] = \sum_{k=1}^{n_{b}} b_{k}\, u[t-k] + \sum_{k=0}^{n_{c}} c_{k}\, e[t-k], $$

where $n_{a}$, $n_{b}$, and $n_{c}$ are the number of coefficients for the auto-regressive (AR) part, the system input, and the moving-average (MA) part, respectively. It can also be written as

$$ A(q)\, y(t) = B(q)\, u(t) + C(q)\, e(t), $$

where q is the backward shift operator. Specifically,

$$\begin{aligned} A(q) &= 1 + a_{1}q^{-1} + \ldots + a_{n_{a}}q^{-n_{a}}, \\ B(q) &= b_{1}q^{-1} + \ldots + b_{n_{b}}q^{-n_{b}}, \\ C(q) &= 1 + c_{1}q^{-1} + \ldots + c_{n_{c}}q^{-n_{c}}. \end{aligned}$$

By assuming A(q)=1 and B(q)=0, the ARMAX model can be simplified to the MA model that exactly fits Eq. (21), as described by

$$ y(t) = C(q)\, e(t) = \sum_{k=0}^{n_{c}} c_{k}\, q^{-k} e(t) = \sum_{k=0}^{n_{c}} c_{k}\, e[t-k]. $$

Under certain conditions, C(q) is invertible, that is, e(t) can be calculated via an inverse operator $\tilde{C}(q)$ via

$$ e(t) = \tilde{C}(q)\, y(t) = \sum_{k=0}^{\infty}\tilde{c}_{k}\, y(t-k). $$

The coefficient estimation for the ARMAX model can be achieved via an iterative search algorithm that minimizes a more robust quadratic prediction error criterion. The parameter vector $\vec{\theta}$ can be formed by grouping the coefficients of the ARMAX model as

$$ \vec{\theta} = [a_{1},\ldots,a_{n_{a}},\, b_{1},\ldots,b_{n_{b}},\, c_{1},\ldots,c_{n_{c}}]. $$

The predictor for the ARMAX model is given by

$$ C(q)\, \hat{y}(t|\vec{\theta}) = B(q)\, u(t) + [C(q)-A(q)]\, y(t). $$

It can be rewritten as

$$ \hat{y}(t|\vec{\theta}) = B(q)\, u(t) + [1-A(q)]\, y(t) + [C(q)-1]\, \epsilon(t,\vec{\theta}), $$

where $\epsilon(t,\vec{\theta})$ is defined as the prediction error given by

$$ \epsilon(t,\vec{\theta}) = y(t) - \hat{y}(t|\vec{\theta}). $$

Therefore, we can express the predictor in the form of a pseudolinear regression via

$$ \hat{y}(t|\vec{\theta}) = \vec{\varphi}^{T}(t,\vec{\theta})\, \vec{\theta}, $$

where we define the data vector $\vec{\varphi}(t,\vec{\theta})$ as

$$\begin{aligned} \vec{\varphi}(t,\vec{\theta}) = [&-y(t-1),\ldots,-y(t-n_{a}),\, u(t-1),\ldots, u(t-n_{b}),\\ &\epsilon(t-1,\vec{\theta}),\ldots,\epsilon(t-n_{c},\vec{\theta})]. \end{aligned}$$

According to Eq. (31), the gradient of the predictor $\vec{\psi}(t,\vec{\theta})$ with respect to $\vec{\theta}$ can be computed via

$$ C(q)\, \vec{\psi}(t,\vec{\theta}) = \vec{\varphi}(t,\vec{\theta}). $$

The gradient $\vec{\psi}(t,\vec{\theta})$ can thus be obtained by filtering the data vector $\vec{\varphi}(t,\vec{\theta})$ through an inverse filter of C(q).
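For orientation, the sketch below recursively estimates the two MA coefficients of a 3-REC-style channel using a pseudolinear-regression (extended least-squares) update, i.e., a simplified stand-in that omits the 1/C(q) gradient filtering; the full recursive criterion and update equations of the method actually used are given below. The synthetic data, forgetting factor, and function names are our own assumptions.

```python
import numpy as np

def estimate_ma2(y, lam=0.99):
    """Recursively estimate c1, c2 in y(t) = e(t) + c1*e(t-1) + c2*e(t-2)
    by extended least squares with forgetting factor lam."""
    theta = np.zeros(2)                  # [c1_hat, c2_hat]
    P = 1e3 * np.eye(2)                  # inverse information matrix
    eps = np.zeros(len(y))               # prediction errors
    for t in range(2, len(y)):
        phi = np.array([eps[t - 1], eps[t - 2]])
        eps[t] = y[t] - phi @ theta
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * eps[t]
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

# Synthetic 3-REC-style channel: m(t) = s(t) + s(t-1) + s(t-2) + noise
rng = np.random.default_rng(1)
s = rng.choice([-3, -1, 1, 3], size=5000)
m = s + np.concatenate(([0], s[:-1])) + np.concatenate(([0, 0], s[:-2]))
m = m + 0.1 * rng.standard_normal(len(m))
print(estimate_ma2(m - m.mean()))        # the true coefficients for 3-REC are c1 = c2 = 1
```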
The cost function for the recursive prediction error method is defined as

$$ V_{t}(\vec{\theta},\vec{Y}^{t}) = \gamma(t)\,\frac{1}{2}\sum_{k=1}^{t}\beta(t,k)\,\epsilon^{2}(k,\vec{\theta}), $$

where $\beta(t,k)$ and $\gamma(t)$ satisfy the following conditions:

$$ \beta(t,k) = \prod_{j=k+1}^{t}\lambda(j), ~~\beta(t,t) = 1, $$

$$ \sum_{k=1}^{t}\gamma(t)\,\beta(t,k) = 1. $$

Note that $\lambda(j)$ is the forgetting factor, which is often set to a constant less than 1. It controls the convergence rate and leads to a compromise between misadjustment and tracking, similar to the recursive least squares (RLS) approach in the context of adaptive filtering [22]. The algorithm of the recursive prediction error method is then summarized via

$$ \epsilon(t) = y(t) - \hat{y}(t), $$

$$ \hat{\vec{\theta}}(t) = \hat{\vec{\theta}}(t-1) + \gamma(t)\, R^{-1}(t)\,\vec{\psi}(t)\,\epsilon(t), $$

$$ R(t) = R(t-1) + \gamma(t)\left[\vec{\psi}(t)\,\vec{\psi}^{T}(t) - R(t-1)\right], $$

where $\vec{\psi}(t)$ and $\hat{y}(t)$ are short for the resulting approximations of $\vec{\psi}\left(t,\hat{\vec{\theta}}(t-1)\right)$ and $\hat{y}\left(t|\vec{\theta}(t-1)\right)$, respectively.

Memory removal via decision feedback equalization

Similar to eliminating ISI, decision feedback equalization (DFE) [23] can be applied to the output sequence of the partial response channel to remove the CPM memory. The block diagram of the general decision feedback equalizer is illustrated in Fig. 2. Depending on the choice of the feedforward filter and feedback filter, a variety of decision feedback equalizers can be implemented, such as the zero-forcing DFE, which directly inverts the channel, and the MMSE-DFE, which employs a minimum mean-square error criterion.

Fig. 2. Block diagram of the general decision feedback equalizer. The partial response channel outputs in this case are the mixed symbols due to the memory introduced by partial response signaling. The mixed symbols can be obtained from the estimated IF of the MFT-ESA demodulation block.

ZF-DFE solution

By inverting the partial response channel using the estimated channel response, the original symbol sequence can be recovered by applying the inverse filter $H_{\text{inv}}(q)$ to the channel output sequence m(t) and classifying the filter output according to the optimal region of each symbol. For the case of 3-REC multilevel CPM, the expression for the inverse filter is given by

$$ H_{\text{inv}}(q) = \frac{1}{1 + \hat{c}_{1}q^{-1} + \hat{c}_{2}q^{-2}}. $$

Note that the inverse filter $H_{\text{inv}}(q)$ is an IIR all-pole filter, which can be implemented simply via direct recursion of its difference equation:

$$ \hat{s}(t) = m(t) - \hat{c}_{1}\hat{s}(t-1) - \hat{c}_{2}\hat{s}(t-2), $$

where $\hat{s}$ represents the estimated symbol sequence. This requires the input, i.e., in our case the partial response channel output sequence m(t), to be nearly perfect; otherwise, the error incurred by inaccurate input symbols can be significant. This in turn requires the MFT-ESA demodulation section to produce a sufficiently accurate demodulation result. By incorporating the slicer, i.e., the symbol-by-symbol detection device, into the inverse filter recursion, we can implement the decision feedback version of the zero-forcing equalizer (ZF-DFE) to eliminate memory induced in the partial response CPM signals.
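A minimal sketch of that ZF-DFE recursion with a hard-decision slicer in the feedback path is given below; the function names and the noiseless test channel (with $c_1 = c_2 = 1$, as for 3-REC) are our own illustrative choices.

```python
import numpy as np

def zf_dfe(m, c_hat, alphabet=(-3, -1, 1, 3)):
    """Zero-forcing DFE: invert 1 + c1*q^-1 + c2*q^-2 by direct recursion,
    feeding back sliced (hard-decision) symbols instead of raw filter outputs."""
    alphabet = np.asarray(alphabet, dtype=float)
    c1, c2 = c_hat
    s_hat = np.zeros(len(m))
    for t in range(len(m)):
        s1 = s_hat[t - 1] if t >= 1 else 0.0
        s2 = s_hat[t - 2] if t >= 2 else 0.0
        z = m[t] - c1 * s1 - c2 * s2                          # inverse-filter output
        s_hat[t] = alphabet[np.argmin(np.abs(alphabet - z))]  # slicer
    return s_hat

# Hypothetical use on a noiseless 3-REC channel (c1 = c2 = 1):
s = np.random.default_rng(2).choice([-3, -1, 1, 3], size=1000)
m = s + np.concatenate(([0], s[:-1])) + np.concatenate(([0, 0], s[:-2]))
print(np.mean(zf_dfe(m, (1.0, 1.0)) != s))    # symbol error rate (0 in this noiseless case)
```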
MMSE-DFE solution

Instead of focusing only on removal of the channel between the IF input and the information symbols, if we further incorporate an MMSE cost function that balances the task of eliminating memory while simultaneously reducing symbol distortion, we obtain the linear MMSE equalizer, which can provide further improvement in the symbol error performance in low signal-to-noise ratio (SNR) environments. The corresponding decision feedback version of the linear MMSE equalizer (MMSE-DFE) incorporates both pre-cursor and post-cursor taps. The feedforward filter coefficients of the MMSE-DFE are obtained from the Wiener solution and then used to solve for the feedback filter coefficients. A detailed description of the MMSE-DFE solution is presented in [24].

Performance of carrier frequency estimation

In practice, since prior knowledge of distributions is not available, we replace the expectation with a simple time average as in Eq. 19. In the discrete-time case, we replace the integral with a time-average sum:

$$\hat{\Omega}_{c} = \frac{1}{L}\sum_{k=0}^{L-1}\Omega_{i}[k]. $$

Figure 3 depicts the carrier frequency estimation error of the MFT-ESA versus SNR for the case where we have equiprobable symbols. For larger SNR values, the carrier frequency estimation error approaches zero, indicating that the IF yields a reliable carrier frequency estimate.

Fig. 3. Performance of carrier frequency estimation for the MFT-ESA approach.

Performance of wideband partial response multilevel CPM demodulation

A wideband 3-REC multilevel CPM signal with symbols taking values in the alphabet {−3,−1,1,3} is used for the performance test. The frequency deviation of the IF is equal to the carrier frequency in this extreme wideband case with modulation index h=4. The symbol error probability for the proposed MFT-ESA approach is depicted in Fig. 4. Since the 3-REC CPM signal has memory, the symbols here refer to the mixed symbols due to the memory effect of partial response signaling, as described in Eq. 21. As the SNR increases, the MFT and ESA combination reduces the error dramatically. The error eventually drops to zero when the SNR passes a certain threshold, while the error performances of the Hilbert transform (HTDA) and the ESA gradually saturate at certain levels due to carry-over effects from incomplete demodulation induced by narrowband constraints.

Fig. 4. Symbol error probability associated with the mixed symbols for wideband 3-REC multilevel CPM. Note that the SEP will drop to zero around 16 dB for the MFT-ESA approach, which is not shown due to limitations of the log scale.

Unlike the common AWGN channel, the AWGN imposed on the CPM signal is not linearly added to the modulation signals (or symbols) in the partial response channel, since the modulation signals (or symbols) are conveyed in the IF of the CPM signal. Therefore, the symbol error performance for the CPM signal is different from what is observed with modulation schemes that fit into the AWGN channel analysis, such as the classic Q curve for BPSK modulation. The CPM format provides robustness to noise when the SNR exceeds a certain threshold and if the CPM signal is sufficiently sampled.

Performance of partial response channel estimation via the recursive error method

The simulation result of the partial response channel estimation via the recursive error method for wideband 3-REC multilevel CPM is illustrated by Fig. 5.
With a memory length L=3, the partial response channel for the 3-REC multilevel CPM can be expressed via

$$ m[t] = s[t] + c_{1}s[t-1] + c_{2}s[t-2] + z(t), $$

where $c_{1}=c_{2}=1$.

Fig. 5. Partial response channel estimation for 3-REC multilevel CPM via the recursive error method. Note that the multilevel symbols take values in {−3,−1,1,3}. The red line indicates the convergence of the estimated coefficient $\hat{c}_{1}$, and the black line indicates the convergence of the estimated coefficient $\hat{c}_{2}$.

As we can observe from Fig. 5, the estimated coefficients converge close to the true value 1 in the case of REC CPM. They serve as useful estimates when other forms of pulse shaping, such as RAC-CPM or SRAC-CPM, are employed.

Performance of partial response CPM memory removal via decision feedback equalization

The MFT-ESA demodulation module described in the prior sections is then combined with decision feedback equalization for memory removal to obtain estimates of the original information symbols. The channel response required by equalization has been estimated via the recursive error method. The symbol error probability performance of the proposed MFT-ESA demodulation combined with ZF-DFE and MMSE-DFE for memory removal is compared in Fig. 6 for both the wideband binary and the multilevel 3-REC CPM scenarios. Note that the modulation indices of the wideband CPM signals in this example are deliberately chosen such that the implementation of the Viterbi algorithm is not practical due to the complexity of its phase states. For the binary case, the MMSE-DFE performs slightly better than the zero-forcing DFE in the low SNR region, as shown in Fig. 6a. For the multilevel case, the performances of both approaches are almost the same, as in Fig. 6b. Since multilevel signaling compresses the decision regions for symbol detection, the resolution of the symbol-by-symbol detector (or slicer) is reduced, resulting in degraded performance of both approaches at the same level in the extreme wideband case where the IF deviation is close to the carrier frequency.

Fig. 6. Memory removal via decision feedback equalization: a symbol error probability of the MFT-ESA approach for binary CPFSK with zero-forcing and MMSE decision feedback equalization to remove memory induced due to partial response signaling and b symbol error probability for multilevel CPFSK with zero-forcing and MMSE decision feedback equalization. In both cases, 3-REC CPM with parameters $T_b = 1$ s, $f_s = 50$ Hz, and $f_c = 12$ Hz was employed. The MFT conversion factor was R=16, and the modulation indices for the binary and multilevel cases were h=97/21 and h=19/15, respectively. For equalization purposes, 50 pilot symbols were used. Note that the SEP of a and b will drop to zero around 11 and 16 dB, respectively, which is not shown due to limitations of the log scale.

From our previous analysis, we know that the error associated with the mixed symbols (or the output of the partial response channel) obtained from the demodulated IF in a low SNR environment is significant due to error propagation. The recovery of the original symbols in a low SNR environment is hence significantly affected, since the proposed memory removal approach is very sensitive to its input. However, as the SNR increases, the output of the partial response channel determined by the MFT-ESA demodulation module becomes more accurate, leading to a significant improvement in the ability to recover the transmitted symbols, as evident in the symbol error probability.
Above an SNR threshold of around 10 dB for the binary case and 12 dB for the multilevel case, the symbol error probability becomes negligible, attributable to the fact that the inverse filtering solution becomes nearly perfect after that threshold; this has been verified but is not shown in Fig. 6 due to limitations of the log scale.

In this paper, we have presented an approach towards wideband CPM demodulation by extending the MFT-ESA approach developed by the authors. The characteristic features of the proposed MFT-ESA approach are: (1) unlike the Viterbi algorithm, whose complexity increases with the number of phase states induced by m and p as in Eq. 8, the complexity of the proposed approach is independent of the modulation index; (2) the proposed approach does not require prior knowledge of the carrier frequency, and this parameter can be extracted from the IF estimates; and (3) the proposed approach accommodates large modulation indices and multilevel signaling, making it conducive to the large bandwidth requirements proposed in the M-ary FSK system for satellite communications [15]. The proposed MFT-ESA approach was then applied to the demodulation of wideband CPM signals with partial response signaling, where memory is introduced into the estimated IF. Subsequent to the MFT-ESA demodulation stage, a recursive prediction approach based on MA signal modeling of the estimated IF, together with decision feedback equalization, was presented to address the problem of removing the memory introduced by partial response signaling. Both the zero-forcing solution based on direct inversion of the memory channel, its corresponding decision feedback version, and the MMSE-DFE solution to memory removal were investigated and shown to produce a significant reduction in the symbol error probability over no equalization.

Abbreviations: AWGN: Additive white Gaussian noise; AM-FM: Amplitude-modulation frequency-modulation; BPSK: Binary phase-shift keying; CPM: Continuous phase modulation; CR/FD: Carrier-to-frequency-deviation ratio; CR/IB: Carrier-to-information-bandwidth ratio; DFE: Decision feedback equalization; ESA: Energy separation algorithm; FM: Frequency modulation; FSK: Frequency-shift keying; GMSK: Gaussian Minimum Shift Keying; HTDA: Hilbert transform demodulation algorithm; IA: Instantaneous amplitude; IF: Instantaneous frequency; ISI: Intersymbol interference; MLSE: Maximum likelihood sequence estimation; MFT: Multirate frequency transformations; MMSE: Minimum mean-square error; RLS: Recursive least squares; SNR: Signal-to-noise ratio; ZF: Zero-forcing.

References

1. MJ Gertsman, JH Lodge, Symbol-by-symbol MAP demodulation of CPM and PSK signals on Rayleigh flat-fading channels. IEEE Trans. Commun. 45(7), 788–799 (1997)
2. T Aulin, JB Anderson, C-EW Sundberg, Digital Phase Modulation (Plenum, New York, 1986)
3. CE Sundberg, Continuous phase modulation. IEEE Commun. Mag. 24, 25–38 (1986)
4. S Li, N Zhang, S Lin, L Kong, A Katangur, MK Khan, M Ni, G Zhu, Joint admission control and resource allocation in edge computing for internet of things. IEEE Netw. 32(1), 72–79 (2018)
5. S Lin, L Kong, Q Gao, MK Khan, Z Zhong, X Jin, P Zeng, Advanced dynamic channel access strategy in spectrum sharing 5G systems. IEEE Wireless Commun. 24(5), 74–80 (2017)
6. A Perotti, A Tarable, S Benedetto, G Montorsi, Capacity-achieving CPM schemes. IEEE Trans. Inf. Theory 56(4), 1521–1541 (2010)
7. W Xue, W Shang, SB Makarov, Y Xu, A phase trajectories optimization method for CPM signal based on Pan-function model. EURASIP J. Adv. Signal Process. 2016(1), 55 (2016)
8. JG Proakis, M Salehi, Digital Communications, 5th edn. (McGraw-Hill, New York, 1995)
9. SS Abayesekara, in 2015 IEEE International Conference on Digital Signal Processing (DSP). Robust full response M-ary raised-cosine CPM receiver design via frequency estimation (Singapore, 2015), pp. 935–939. https://doi.org/10.1109/ICDSP.2015.7252014
10. B Santhanam, Generalized energy demodulation for large frequency deviations and wideband signals. IEEE Signal Process. Lett. 11(1), 341–344 (2004)
11. M Gupta, B Santhanam, in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 1. Adaptive linear predictive frequency tracking and CPM demodulation (2003), pp. 202–206. https://doi.org/10.1109/ACSSC.2003.1291897
12. W Liu, B Santhanam, in 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE). Wideband-FM demodulation for large wideband to narrowband conversion factors via multirate frequency transformations (Salt Lake City, 2015), pp. 7–12. https://doi.org/10.1109/DSP-SPE.2015.7369519
13. P Maragos, JF Kaiser, TF Quatieri, Energy separation in signal modulations with application to speech analysis. IEEE Trans. Signal Process. 41(10), 3024–3051 (1993)
14. A Potamianos, P Maragos, A comparison of the energy operator and Hilbert transform approach to signal and speech demodulation. Signal Process. 37(1), 95–120 (1994)
15. M Fitch, K Briggs, Gaussian multilevel FM for high-bandwidth satellite communications. University College London (2004)
16. D Christopoulos, S Chatzinotas, G Zheng, J Grotz, B Ottersten, Linear and nonlinear techniques for multibeam joint processing in satellite communications. EURASIP J. Wireless Commun. Netw. 2012(1), 162 (2012)
17. SK Chronopoulos, C Koliopanos, CT Angelis, in Proceedings of the 3rd International Conference on Mobile Multimedia Communications (MobiMedia '07). Satellite multibeam signaling for multimedia services (ICST, Brussels, 2007), p. 4
18. W Liu, B Santhanam, Wideband image demodulation via bi-dimensional multirate frequency transformations. J. Opt. Soc. Am. A 33, 1668–1678 (2016)
19. F Salzenstein, AO Boudraa, Multi-dimensional higher order differential operators derived from the Teager-Kaiser energy-tracking function. Signal Process. 89(4), 623–640 (2009)
20. FW King, Hilbert Transforms, vol. 2 (Cambridge University Press, Cambridge, 2009)
21. L Ljung, System Identification: Theory for the User, 2nd edn. (Prentice Hall, New Jersey, 1999)
22. SS Haykin, Adaptive Filter Theory, 4th edn. (Prentice Hall Press, Upper Saddle River, 2005)
23. R Fischer, J Huber, C Windpassinger, Signal processing in decision feedback equalization of intersymbol-interference and multiple-input/multiple-output channels: a unified view. Signal Process. 83(8), 1633–1642 (2003)
24. V Kavitha, V Sharma, Optimal MSE solution for a decision feedback equalizer. EURASIP J. Adv. Signal Process. 2012(1), 172 (2012)

Acknowledgements. This paper is based on research sponsored by the Air Force Research Labs (AFRL) under the agreement FA9453-16-1-0067.

Author information. Wenjing Liu and Balu Santhanam, Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87106, USA. Both authors contributed to the design of the system and the development of the algorithm. Both authors read and approved the submitted version of the manuscript. Correspondence to Wenjing Liu.
Effects of sediment flushing operations versus natural floods on Chinook salmon survival

Manisha Panthi, Aaron A. Lee, Sudesh Dahal, Amgad Omer, Mário J. Franca & Alessandra Crosato

Scientific Reports volume 12, Article number: 15354 (2022)

Flushing is a common measure to manage and reduce the amount of sediment stored in reservoirs. However, the sudden release of large volumes of sediment abruptly increases the suspended solids concentration and alters the riverbed composition. Similar effects can be produced also by natural flood events. Do flushing operations have more detrimental impacts than natural floods? To answer this question, we investigated the impact of flushing on the survival of the Chinook salmon (Oncorhynchus tshawytscha) in the Sandy River (OR, USA), assuming that sediment is flushed from hypothetical bottom gates of the, now decommissioned, Marmot Dam. The effects of several flushing scenarios are analyzed with a 2D morphodynamic model, together with habitat suitability curves and stress indicators. The results show that attention has to be paid to duration: the shorter the flushing operation, the lesser the stresses on fish survival and spawning habitats. Flushing causes high stress to salmon eggs and larvae, due to unbearable levels of suspended sediment concentrations. It also decreases the areas usable for spawning due to fine-sediment deposition, with up to 95% loss at peak flow. Without the dam, the corresponding natural flood event would produce similar effects, with up to 93% loss. The study shows that well-planned flushing operations could mimic a natural impact, but only partly. In the long-term, larger losses of spawning grounds can be expected, since the removal of fine sediment with the release of clear water from the reservoir is a lengthy process that may be undesirable due to water storage reduction.
Reservoirs trap sediment transported by rivers, with coarser material settling first, near the reservoir entrance, and finer sediment travelling further and settling closer to the dam1. Progressive deposition of sediment reduces the water storage capacity and impacts reservoir and dam operations2,3,4. Flushing operations are a way to remove part of the deposited sediment, particularly the finest component that accumulates near the dam5. However, the rapid release of large quantities of water and sediment produces sudden alterations in flow velocity, water depth6 and temperature7, together with an abrupt increase of the concentration of suspended solids2,8,9,10,11,12 in the downstream river. These sudden alterations strongly impact the river ecology13,14,15,16, also because they often produce persisting changes in bed composition and morphology17,18,19. The release of fine sediment affects fish and macroinvertebrates in several ways. High concentrations of suspended sediment have acute/short-term effects on respiratory organs, which can be lethal20, whereas the attenuation of light penetration negatively affects fish and other forms of aquatic life, and hence their growth and development21. The release of reservoir water often results in a change in water temperature7, which can lead to behavioral drift and alter the distribution of benthos, as well as increase the mortality of invertebrates22. In addition, the deposition of fine sediment on the riverbed may: (i) bury and suffocate benthos, laid eggs, larvae and fry (acute/short-term effect)23,24; (ii) deteriorate spawning grounds (mid-to-long-term effect)25,26,27; and (iii) clog the hyporheic region of the bed, hindering fluxes of oxygen and metabolic waste, as well as groundwater-surface water exchanges (mid-to-long-term effect)28. Quantifying the effects of flushing operations on biota requires the assessment of both acute and long-term impacts29. However, most studies focus either on the effects of high suspended solids concentrations10,30,31 or on riverine habitat deterioration32,33. Empirical relations have been developed to quantify the effects of high levels of suspended sediment concentration on fish20,34,35, whereas habitat suitability curves can be used to quantify the impact of fine sediment on spawning grounds25,29,36,37. Natural floods might have similar acute and long-term effects, because they also present a sudden increase in flow velocity and sediment transport, particularly in mountain streams38,39,40,41 and in the vicinity of urban areas, due to heavy rainfall42,43. However, in general, instream biota have adapted to the specific hydrology and sediment transport regimes of their river system, so natural floods may not threaten the long-term survival of fish and invertebrate species38,40,41,44,45,46,47. What is then the difference between natural floods and flushing operations? Can reservoir-flushing operations be designed with reduced acute and long-term effects on fish? To answer these questions, this research evaluates and compares the effects of both flushing operations and the corresponding natural flood on the survival and spawning habitats of Chinook salmon (Oncorhynchus tshawytscha) in the Sandy River (OR, USA). The study considers both the severity of stress caused by high sediment concentrations (acute, short-term effects) and the decrease of spawning habitat due to fine sediment deposition (medium- to long-term effects).
A key assumption is that the Marmot Dam, built on the Sandy River in 1913 and decommissioned in 2007, is operational and equipped with bottom gates for sediment flushing. The releases of water and sediment volumes from the reservoir are derived by simulating the opening of the hypothetical bottom gates of the dam using a two-dimensional (2D) morphodynamic model, here called the Reservoir model. The Marmot Dam is assumed absent for the quantification of the flow discharges and the sediment transport rates during the natural flood. The changes in hydraulic and sediment characteristics in the downstream river reach are computed with another 2D model, here called the River model. The results are post-processed to estimate the potential impacts of the considered events on Chinook salmon.
Case study area
In 1913, Portland General Electric (PGE) constructed the Marmot Dam across the Sandy River, 48 km upstream of the confluence with the Columbia River, as a wooden crib dam for hydropower generation (Fig. 1). The dam was later upgraded to a 15 m high concrete structure with an overflow spillway crest48 without bottom gates, which restricted bedload transport downstream. It was finally removed in 2007, when the anticipated cost of upgrading to meet the licensing requirements had become higher than the benefits.
Location of Marmot Dam and study area showing the tributaries and gauging stations (adapted from Major et al.9, credit: Department of Interior/USGS), created using Adobe Illustrator 2021 (https://www.adobe.com/products/illustrator.html). U/S indicates "upstream" and D/S "downstream".
At the time of dam removal, the reservoir was filled with 750,000 m3 of sediment, ranging from silt to boulders, subdivided into layers of different composition9,49. The upper layers comprised coarser sediment, mainly gravel and cobbles, while the lower layers contained fine sediment ranging from clay to sand, dominated by sand. A coarser layer underneath was the original riverbed. The dam removal was facilitated by the construction of a temporary cofferdam which diverted the flow of the Sandy River, bypassing the concrete dam; this cofferdam was later breached in a controlled way. The Sandy River flows from the western part of Mt. Hood to the Columbia River. Its major tributaries are the Zigzag River, the Salmon River and the Little Sandy River, draining from Mt. Hood, and the Bull Run River, draining from the Bull Run Lake. The combined basin area is 1300 km2, with an altitude ranging from 3428 m at Mt. Hood to 3 m a.s.l.9 at the confluence with the Columbia River (Fig. 1). The gradient of the Sandy River gradually reduces from its steep mountain parts, where it transports cobbles and boulders, to its confluence, where, as a low-gradient sand-bed river, it forms an inland delta. The river is fed by rainfall and spring snowmelt50. Three-fourths of the annual precipitation falls in the period October–March, with the highest amounts occurring in November, December and January51. The study area extends from 3.5 km upstream to 18 km downstream of the (ex) Marmot Dam, where the Sandy River meets the Bull Run River. At the Marmot Dam location, the river has a mild slope and presents a wider floodplain, but 5 km downstream of the dam the river is confined and steep. Here the river presents rock outcrops and its bed is only partly alluvial. This river reach, known as the Sandy River Gorge, extends for 6.5 km. At the exit of the gorge, the river becomes unconfined with a mild slope, forming meanders for another 9.4 km.
These river reaches have different characteristics, altering the flow depth, velocity and bed material composition; hence, the use of these areas by the different life stages of Chinook salmon varies over the year, making the consequences both spatially and temporally variable. In 1964, after a flood with a discharge peak of 2400 m3/s, the highest since 1910, parts of the Sandy River and its tributaries were artificially straightened, their banks were protected by rock berms, and large obstructions caused by boulders and woody debris were removed52. After the removal of the Marmot Dam (2007) the river runs freely without any discharge regulation, providing extended habitat for the existing biota. The abundance of hydrodynamic and morphological data before and after the removal of the Marmot Dam is the reason behind the choice of this case study, since it allowed the construction of two modeling tools simulating the erosion of the reservoir deposit during flushing operations5 and the transport and deposition of the released sediment along the Sandy River32,53.
Chinook salmon (Oncorhynchus tshawytscha)
Two Pacific salmon species find their spawning habitats in the Sandy River52: Chinook salmon, both spring-run and fall-run, and coho salmon. Chinook is the largest Pacific salmon species, reaching a weight between 6 and 23 kg54. As with most Pacific salmon, the adults return to their natal gravel-bed streams from the ocean to spawn. The alevins emerge from the eggs and live within the gravel until they are large enough (fry) and start their migration towards deeper waters to finally reach the ocean, where they grow into adults55. At a water temperature of 11 °C, Chinook salmon eggs hatch in roughly 47 days, whereas the alevins need 84 days to absorb their yolk sacs and become fry56,57. The spawning periods of fall and spring Chinook salmon in the Sandy River are September–December and September–October, respectively58. This means that fry appear in December–March, approximately four months (47 + 84 ≈ 131 days) after eggs are laid. The spawning grounds are especially found upstream of the ex-Marmot Dam and downstream of Dodge Park (Fig. 1)59. The requirements regarding the physical properties (flow, velocity, depth) and water quality (temperature, dissolved oxygen, etc.60,61) must be met for successful salmon spawning37,62. Other important requirements are:
Adult fish must arrive healthy at the spawning grounds. This requires river continuity and that sub-lethal or lethal stress conditions are not met during migration and spawning.
Spawning fish must be able to construct a nest. This requires appropriate gravel size at the spawning areas.
The nest must not be scoured or suffocated during egg incubation. This requires a strongly limited presence of fine sediment among the gravel.
General approach
Our investigation focuses on Chinook salmon, considering two main effects of reservoir flushing and natural flood events: the effects of the acute stress conditions imposed during these extreme occurrences and the effects of fine sediment deposition in the spawning areas. Two distinct two-dimensional morphodynamic models are used: the first one, the Reservoir Model, is used to route the water and sediment through the reservoir, generating water and sediment fluxes to the river downstream of the dam. It is an extended version of the model of Dahal et al.5. The second one, the River Model, represents the Sandy River from downstream of the Marmot Dam location to the confluence with the Bull Run River. It is an extension of the model developed by Lee32.
The two models were developed using the open-source Delft3D code version 4.03 (https://www.deltares.nl/en/software/delft3d-4-suite/) considering sediment ranging from sand to cobbles, thus neglecting the finest components of the deposited material that are transported in suspension: fine sand, silt, and clay. Considering that this study focuses on fine sediment processes, both models had to be extended. The extended Reservoir model is calibrated and validated, whereas the lack of data on fine sediment deposition and transport rates along the Sandy River makes calibration and validation of the extended River model impossible. For this reason, the runs include a sensitivity analysis on the effects of changing the size of the suspended sediment, represented by its fall velocity, with the idea of covering a reasonable range of plausible scenarios. The results of the River model, indicating the effects of the flushing scenarios and of the natural flood on water flow and sediment, are then analyzed by means of stress and severity indices and suitability curves, distinguishing the different life stages of salmon and its spawning habitat. Figure 2 schematizes the approach and the workflow.
General approach of the study and workflow direction, developed using Adobe Illustrator 2021 (https://www.adobe.com/products/illustrator.html).
Delft3D is an open-source software package used to simulate non-steady flow and transport phenomena in rivers, coasts and estuaries. It allows simulating 2D and 3D hydrodynamic and transport processes of sediment mixtures5,63,64,65. The use of both cohesive and non-cohesive sediment of different sizes is possible, which makes this tool useful for investigating sediment transport processes in different contexts66. The extended models used in this study are two-dimensional (2D) and solve the depth-averaged shallow-water equations. The transport of suspended sediment is computed by means of 2D advection–diffusion equations coupled with a sediment entrainment and a sediment deposition equation. The sediment entrainment rate is assumed to be proportional to the difference between the local bed shear stress and its critical value for bed erosion, following the Krone and Partheniades approach67,68, multiplied by a coefficient, here named the erosion coefficient. The deposition rate is obtained by multiplying the fall velocity of the sediment particles by the local depth-averaged sediment concentration69. Transport capacity formulas are used for the computation of bedload rates. The presence of several grain sizes requires considering the hiding of the smaller particles by the larger particles and the exposure of the latter, being surrounded by smaller particles. However, the effect of hiding and exposure is not accounted for in Delft3D when bedload and suspended load are combined. This might lead to overestimating the transport rates of the smallest bedload particles and underestimating those of the largest particles. The computation of bed level change is based on sediment balances. For fine sediment travelling in suspension, bed level changes are given by the difference between sediment deposition and sediment entrainment rates. For bedload, the sediment balances follow Exner's approach70. The effects of the transverse slope on the bedload direction, important for 2D bed topography changes, are accounted for according to Ikeda71, and the effects of the longitudinal slope according to Bagnold72. The domains of the models and their computational grids are shown in Fig. 3.
Domain of the Reservoir and River models.
The boundaries are indicated by cross-sections identified by letters. Along the Sandy River the letters are in alphabetical order from upstream to downstream. Note that boundary B defines the junction between the two models. The inflow boundary of the Bull Run River is indicated by the letter "D". The numbered red dots indicate the USGS gauging stations. Names and identifiers are listed in the lower-right corner. The figure is developed using QGIS 3.16.16 (https://qgis.org/en/site/index.html) and edited in Adobe Illustrator 2021 (https://www.adobe.com/products/illustrator.html).
Model 1: Reservoir model
The Reservoir model comprises a spatial domain that covers the Marmot Dam and its reservoir area (3.5 km long), with the boundaries located 1 km upstream of the reservoir influence zone and 1.4 km downstream of the Marmot Dam, schematized with a curvilinear grid (Fig. 3). In the Delft3D model, the strata in the reservoir sediment deposit can be distinguished by sediment composition, extension, and thickness. Based on the granulometric analyses of the sediment collected from the different strata49 in the reservoir (see Fig. 10 of the report by Stillwater Sciences73), the extended model includes three sediment fractions: fine sediment (several sizes from sand to clay, depending on the scenario), gravel (D50 = 20 mm), and cobbles (D50 = 100 mm). The distinction in fractions is needed to correctly describe the processes of sediment transport, bed erosion and deposition along the river. The compositions of the units5,73 are selected as follows: Unit 1 is imposed with a fine sediment:gravel:cobbles ratio of 25:35:40, whereas Unit 2 and the pre-dam riverbed have ratios of 85:15:0 and 25:30:45, respectively. The use of different units and strata compositions is needed to correctly simulate sediment removal from the reservoir during the flushing operation. A cofferdam and a wash load layer are also added to facilitate breaching. These percentages of sediment composition are derived from a range of percentages, since the sediment composition depends on location (data from Squier Associates49). Bedload is computed using the capacity formula of Ashida and Michiue74. The time series of the discharge measured at the Marmot Dam station (located 400 m upstream of the Marmot Dam before dam removal and shifted downstream of the dam after removal) allows constructing the inflow hydrograph, whereas the sediment input at the upstream boundary is generated using a regression analysis of the available sediment measurement data at the Brightwood station. To properly reproduce the fine sediment input to the downstream river, a wash load layer is added in the last 500 m of the reservoir to represent the most recent suspended sediment deposits, located in the downstream part of the reservoir at the time of cofferdam breaching. The extended Reservoir model is calibrated and validated for sediment erosion after dam removal to establish the properties of the fine sediment. During the calibration process, the fall velocity and the erosion coefficient, among other parameters, are tuned to reproduce the field data of suspended solids concentration measured during and after the Marmot Dam removal, assuming this operation to be an extreme case of sediment flushing. The calibration runs cover a period of 5 days, which involves the dynamic process of cofferdam breaching. The cofferdam breaching process is modeled as a notch that is progressively eroded by the flowing water.
Since the outflow to the downstream river reach is a function of the cofferdam breach process, the measured water discharge record at the Marmot Dam station is used as a calibration metric. The concentration of the wash load layer is calibrated on the suspended sediment concentration measurements, peaking at 49 g/m3, taken during dam removal9. Validation of the extended model is based on measured data of sediment concentration and reservoir bed erosion. These data were collected in the three months that followed dam removal, during which 45% of the reservoir sediment was eroded. Information on the erosion process is acquired from the USGS report9 (https://pubs.usgs.gov/pp/1792/) and from the surveys conducted by USGS, David Evans and PGE. Detailed information on the extended model is provided by Panthi75. For the flushing scenarios, a virtual Marmot Dam consisting of bottom gates is introduced at the old dam location (Fig. 4). A real-time control (RTC) mechanism is used in combination with the model to simulate the openings of the gates5.
Reservoir model at the location of the Marmot Dam with hypothetical gate openings, with three gates for flushing and one for water diversion use.
Model 2: River model
The Sandy River model covers 18 km from the Marmot Dam to the confluence with the Bull Run River (Fig. 3). The model grid comprises 1133 × 61 curvilinear cells. Lee32 derived the initial riverbed topography from the 2007 LiDAR survey32, which does not cover the submerged portions of the channel bed. An initial run (spin-up) was thus executed to compute a complete, realistic 2D riverbed topography and sediment composition. The spin-up simulation started with a transversally plane bed in the submerged portion, composed of sand, gravel, and cobbles32 with sizes of 0.3, 22 and 100 mm, respectively. The morphological evolution of the riverbed was obtained by imposing a schematic discharge time series with no sediment input, due to the presence of the dam, with a time step of 0.05 min. The sediment transport rate was computed with the transport formula of Meyer-Peter and Müller76, applicable to bedload transport in gravel-bed streams. The result of the spin-up run was then compared with the morphological features that are visible from aerial imagery, such as the extension and location of sediment deposits and the deep areas that are evident at low-flow conditions. Model calibration is based on this comparison, since no measured data on bed topography are available for the submerged part of the river channel. Information regarding the sediment composition of the Sandy River before dam removal is limited to pebble counts. Field investigations found bed armoring downstream of the Marmot Dam with an average grain size of the bed material of 100 mm8,73,77,78, whereas the percentage of fines was estimated from visual evaluation32. This means that there is no way to quantitatively check the model results in terms of sediment composition. The River model32 is here extended and the spin-up run redone to include the suspended-sediment processes. The transport of coarse material is computed with the transport formula of Meyer-Peter and Müller76, whereas the transport of fine sediment is computed by means of advection–diffusion equations coupled to sediment entrainment and deposition formulations. The characteristics of the sediment-related variables in the extended model are listed in Table 1. Note that the adopted sediment characteristics are those obtained from the calibration of the extended Reservoir model.
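As a concrete illustration of the fine-sediment formulations shared by the two models, the sketch below evaluates Krone–Partheniades-type entrainment and deposition fluxes for a single computational cell. It is a simplified stand-alone example, not Delft3D source code; the parameter values are those obtained from the calibration of the Reservoir model for the sand fraction (fall velocity 0.036 m/s, critical bed shear stress 1 N/m2, erosion coefficient 4.17 × 10−3 kg/m2/s, reported in the Results), and the example inputs are arbitrary.

```python
# Minimal sketch of the Krone-Partheniades-type fluxes used for the
# suspended (fine) sediment fraction. Parameter values are the calibrated
# ones reported for the sand fraction; this illustrates the formulation
# only and is not Delft3D code.

W_S = 0.036        # fall velocity [m/s]
TAU_CR = 1.0       # critical bed shear stress for erosion [N/m^2]
M_ERO = 4.17e-3    # erosion coefficient [kg/m^2/s]


def erosion_flux(tau_b, tau_cr=TAU_CR, m=M_ERO):
    """Entrainment rate [kg/m^2/s], proportional to the excess bed shear stress."""
    return m * max(tau_b / tau_cr - 1.0, 0.0)


def deposition_flux(concentration, w_s=W_S):
    """Deposition rate [kg/m^2/s]: fall velocity times depth-averaged concentration [kg/m^3]."""
    return w_s * concentration


if __name__ == "__main__":
    tau_b = 2.5   # example bed shear stress [N/m^2] (arbitrary)
    c = 5.0       # example depth-averaged SSC [kg/m^3] (= 5000 mg/l, arbitrary)
    net = erosion_flux(tau_b) - deposition_flux(c)
    print(f"erosion {erosion_flux(tau_b):.4f}, deposition {deposition_flux(c):.4f}, "
          f"net bed exchange {net:.4f} kg/m2/s")
```

The sign of the net exchange determines whether the fine-sediment layer of the bed grows (deposition exceeds entrainment) or is depleted, which is how bed level changes for the suspended fraction are obtained in the models.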
The initial bed composition is derived from the characteristics of the strata provided by Squier Associates49. Morphological calibration of the extended River model is not possible due to the lack of measured data on bed topography, bed composition and deposition of fine sediment in the river downstream of the Marmot Dam. This means that some scenarios have to include different sediment characteristics to cover a wide range of possibilities.
Table 1 Initial bed composition, sediment characteristics and sediment transport formula in the spin-up run of the River model computing the initial state of the river.
A natural flood described by the discharge hydrograph of December 2007, with a sharp rising limb and a gentler falling limb, typical of Sandy River floods, is selected as a reference for the flushing and the natural flood scenarios (see Supplementary Fig. A1). Based on the flow frequency statistics provided by Major et al.9, this flood has a return period of less than two years. The boundary conditions of the Reservoir model are the measured daily discharge time series and the sediment inflow derived from the data measured at the Brightwood station. The input of suspended solids is derived using a relation between discharge and sediment load generated from a regression analysis of the available sediment concentration data from the same station. The flushing scenarios are chosen based on the timing of the gate opening relative to the peak of the flood wave, with only Gate 2 (width: 9.72 m; height: 3 m) in operation (Fig. 4). This pattern of gate opening is considered the most effective one5, since it allows flushing higher volumes of sediment in a shorter period. In the corresponding natural flood scenario, the Sandy River is assumed to be freely flowing without the dam. This means that in this case the Reservoir model is not used to generate the discharge and sediment inputs to the River model. The input of fine sediment to the river is computed from the regression law derived for the Brightwood station. The input of coarse sediment is computed using the selected transport formula. All scenarios are listed in Table 2.
Table 2 Model scenarios. The River model also includes the additional natural flood scenario.
The effects of different suspended sediment sizes are studied separately considering three fall velocities representing the characteristics of clay, silt, and medium sand (see Supplementary Table A1). The input of water and sediment to the river for these three scenarios corresponds to reservoir flushing with gate opening at 80% of the peak flow (the moment of gate opening is indicated in Supplementary Fig. A1).
The impact of suspended solids concentration: stress and severity on salmonids
High suspended sediment concentration in the water column causes acute effects on aquatic biota. These short-term effects are not accounted for when defining the suitability of habitats, but they can create an unbearable environment and cause behavioral problems, physical damage and even death to fish and invertebrates20,24,30,31,34,35. The exposure to suspended solids is detrimental to salmon species too and depends on the concentration and duration of exposure34,79. The severity scale developed by Newcombe and Jensen35 is here used to study the acute impact on salmonids caused by high suspended sediment concentrations during the flushing operations or by the flood event. The severity scale is given as a function of the suspended sediment concentration (SSC) and the duration of exposure (D).
The severity scale of Newcombe and Jensen is based on regression lines fitting literature data on various salmon species, considering suspended sediment with particle sizes up to 250 µm35. The scale relates to the effects of SSC on individual fish and does not account for the fish's precondition, nor for other factors causing an unsuitable environment, e.g. a rise in water temperature or a decrease in dissolved oxygen. Considering the high level of uncertainty, also related to the fact that the data come from differing river environments, the severity index should be used to compare situations rather than to quantitatively predict the effect of SSC on a specific fish species in a specific river. Notwithstanding this, the choice of using this severity index lies in its simplicity of application and in the scope of the work, which aims at comparing the effects of flushing operations and flood events. Newcombe and Jensen35 expressed severity on a scale of 0 to 14, distinguishing the effects as behavioral (0–4), sub-lethal (5–8) and lethal (9–14) for each life stage.
Adult salmonids:
$$SEV = 1.6814 + 0.4769\,\log_{e}(D) + 0.7565\,\log_{e}(SSC) \quad (1)$$
Analogous regression relations, with life-stage-specific coefficients, hold for juvenile salmonids (Eq. 2) and for eggs and larvae of salmonids (Eq. 3)35. Here, D is the duration of exposure in hours and SSC is the suspended sediment concentration in mg/l. The severity indexes are here calculated using three different metrics:
1. The concentration of suspended sediment is plotted against the duration for which it is exceeded, which creates a concentration-duration curve of the flushing operation. The value of the concentration that is exceeded for 50% of the time is then combined with the total duration of the flushing operation in Eqs. (1)–(3) to calculate the representative value of severity30.
2. The time series of severity is computed for the different life stages based on the duration curves of suspended sediment concentration at different locations along the river reach.
3. The acceptable duration of exposure is compared with the actual duration for different concentration values35.
Fine sediment deposition: impacts on spawning areas
Deposition of fine sediment on spawning grounds with laid eggs can cause mortality and sub-lethal effects to eggs and emerging larvae, creating an acute impact on salmon80. Moreover, the deposition alters the composition of the riverbed, which may further render the area unsuitable as a spawning ground, a persistent and thus long-term effect. Habitat suitability index (HSI) models are developed to categorize and quantify the ability of defined areas to meet the physical requirements of specific habitats. They relate the values of each physical variable describing the local aquatic environment81,82, e.g. flow velocity, water depth, bed composition and water temperature, to its suitability for a specific species or for populations of fish or invertebrates25,28,29,36,37,83. Considering the strongly differing requirements, distinct indices are derived for eggs, larvae, juveniles and adults of the same species35. Each HSI is normally based on the statistical analysis of scarce field data84, for instance local fish counts versus the value of a specific physical variable. It may thus present a large degree of uncertainty due to the weak or even inconsistent statistical relations that are often found85.
Nevertheless, habitat suitability indexes are commonly used to assess the distribution of specific species, as well as to objectively translate the physical alterations caused by external factors into effects on habitats86. This study aims at highlighting the changes in spawning habitat suitability caused by flushing operations or by the corresponding flood event. The study adopts the HSI model developed specifically for Chinook salmon by Raleigh et al.37, based on a large number of field and laboratory data. The analysis is based on three variables: water depth, flow velocity and percentage of fines in the riverbed. The index is scaled as a real number from 0 to 1, where 0 means unsuitable and 1 is an optimal condition. HSI is computed for different life stages. The HSI is calculated as the product of the suitability indexes of the three variables:
$$HSI = SI_{1} \cdot SI_{2} \cdot SI_{3} \quad (4)$$
The total suitable area for spawning in the considered river reach is finally calculated by applying the total weighted usable area (WUA) approach29, in which the suitable area is expressed as a percentage of the total surface area (wet area) of the considered reach:
$$WUA = \sum_{i} A_{i}\,HSI_{i} \quad (5)$$
where Ai is the area of grid cell i and HSIi is the HSI value for the same cell.
Reservoir model: calibration and validation
The Reservoir model is updated and re-calibrated based on the measured sediment transport data collected by USGS (see appendix in Major et al.9). These data were collected during the first five days after dam removal, during the major erosive process of the cofferdam breaching. The results are: a fall velocity of 0.036 m/s, a critical bed shear stress for erosion of 1 N/m2 and an erosion coefficient of 4.17 × 10−3 kg/m2/s for the sand fraction. The insertion of a wash load layer with a concentration of 0.5 kg/m2, along a distance of 500 m upstream of the dam on the surface of the reservoir deposit, provides the best representation of the suspended sediment flux at the time of breaching. The computed suspended sediment flux reached the value of 0.6 m3/s; nevertheless, the computed values remain lower than the measured ones (Fig. 5a). Further details on this re-calibration are given by Panthi75.
Modelled and measured (a) suspended sediment transport rates in the Sandy River at the Marmot Dam Station during cofferdam breaching (results of model calibration and validation) and (b) percentage of sediment eroded from the reservoir during the validation period. The blue line indicates the incoming discharge.
Validation is based on the cumulative sediment volumes eroded from the reservoir that were measured by USGS9 after cofferdam breaching. The computed values are in good agreement with the measured ones (Fig. 5b): after three months, 45% of the total reservoir sediment was eroded, whereas the modelled erosion is 40%.
Suspended sediment outputs during flushing
The sudden opening of the bottom gates results in elevated levels of SSC, but for a relatively short duration (minutes to hours). The highest flushing discharge is 513 m3/s, obtained for gate opening at 80% of the peak flow (Fig. 6). The highest sand transport rates are 4.75 m3/s, 3.68 m3/s and 2.00 m3/s for gate opening at 80%, 50% and 20% of the peak flow, respectively. The highest concentrations, however, are rather similar, with 39,000 mg/l, 40,000 mg/l and 33,000 mg/l, respectively (Fig. 6).
Computed discharge and suspended sediment concentration (sand) for different flushing scenarios corresponding to gate opening at 20%, 50% and 80% of the peak flow, 200 m downstream of the Marmot Dam location.
The results of the sensitivity analysis, considering different sizes of the fine sediment fraction, show that deposition of silt and clay occurs sparsely in the study area (see Supplementary Fig. A2). This is due to the low fall velocity of these sediment types and the high velocity of the water flow. Instead, sand settles at several locations, mostly between the location of the dam and the entrance of the Sandy River gorge (upper reach, Fig. 1), but also in the pools within the gorge. Due to deposition, sand presents a decrease in concentration in the downstream direction, whereas silt and clay present an increase due to the entrainment of particles from the riverbed and banks. These fine particles are found to only alter the suspended sediment concentration and hardly change the bed composition. Therefore, the analysis of the effects of flushing on salmon spawning habitats concentrates on sand. The sand that immediately settles in the upper reach, between the dam location and the gorge, is later entrained again and transported away by the flow. This results in persistent high sand concentrations in the downstream reaches. The river banks were eroded in the first few hours of flushing, but bank erosion later stopped as the discharge decreased, reducing this proximal sediment source.
Impact of suspended sediment concentration on Chinook salmon
The results of the three flushing scenarios and of the corresponding natural flood are post-processed to determine the corresponding value of severity (Eqs. 1–3), with the aim of quantifying the impact of SSC on Chinook salmon, distinguishing the different life stages. The sediment concentration distribution and its duration at different locations along the Sandy River were computed by the River model. The upstream inputs were computed by the Reservoir model, but only for the flushing scenarios. The reference value of the severity index is obtained considering the value of SSC that is exceeded for 50% of the time in the duration curve, combined with the duration of the entire flushing period (Table 3). Table 3 shows lethal conditions for eggs and larvae, with values ranging from 10 to 12, indicating up to 40% mortality35 for all scenarios and along the entire Sandy River. Severity increases in the downstream direction. The severity is comparatively higher for the flushing scenario with gate opening at 20% of the incoming flood peak, whereas the lowest severity is found for the corresponding natural flood. Note that this analysis is performed for sand. The computed values of SSC for silt and clay are even higher, resulting in greater severity indexes.
Table 3 Reference value of severity relative to three life stages of salmon for the considered flushing scenarios and the natural flood scenario at different locations along the Sandy River downstream of the Marmot Dam.
The time series of severity for the three life stages of salmon is derived from the duration curves (Fig. 7). The results show lethal conditions for eggs and larvae for all scenarios, including the corresponding natural flood. Low concentrations are present for longer periods of time and exceed their acceptable duration limit. Instead, the adult and juvenile stages of salmon do not reach the mortality level (SEV > 10).
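To illustrate how this severity post-processing can be carried out, the following sketch builds a concentration–duration curve from an SSC time series and evaluates Eq. (1) for adult salmonids, assuming that the concentration exceeded 50% of the time and the total event duration enter the regression together. It is a hypothetical example: the time series is invented, and the life-stage-specific coefficients of Eqs. (2)–(3) for juveniles and for eggs and larvae are not reproduced here and would have to be taken from Newcombe and Jensen35.

```python
import math

# Sketch of the severity post-processing: build a concentration-duration
# curve from an SSC time series and evaluate Eq. (1) (adult salmonids).
# Coefficients for juveniles and for eggs/larvae (Eqs. 2-3) are not
# reproduced in the text and are therefore omitted here.

def severity_adult(ssc_mg_l, duration_h):
    """Eq. (1): severity for adult salmonids (0-4 behavioral, 5-8 sub-lethal, 9-14 lethal)."""
    return 1.6814 + 0.4769 * math.log(duration_h) + 0.7565 * math.log(ssc_mg_l)


def reference_severity(ssc_series_mg_l, dt_h):
    """Reference metric: the SSC exceeded ~50% of the time, used together
    with the total event duration in the severity relation."""
    ranked = sorted(ssc_series_mg_l, reverse=True)
    ssc_50 = ranked[len(ranked) // 2]              # concentration exceeded ~50% of the time
    total_duration = dt_h * len(ssc_series_mg_l)   # total event duration [h]
    return severity_adult(ssc_50, total_duration)


if __name__ == "__main__":
    # Hypothetical hourly SSC record (mg/l) during a short flushing event.
    ssc = [500, 4000, 40000, 25000, 12000, 6000, 2500, 1200, 800, 600]
    print(f"reference adult severity: {reference_severity(ssc, dt_h=1.0):.1f}")
```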
Instantaneous severity for three flushing scenarios at different locations: just downstream of the dam (200 m), at the end of the river gorge (10,500 m) and just before the confluence of the Bull Run River (18,000 m): (a) during the corresponding natural flood event; (b) gate opening at 80% of the incoming peak, (c) gate opening at 50% of the incoming peak and (d) gate opening at 20% of the incoming peak.
For the natural flood, the SSC does not exceed the value of 2500 mg/l, whereas for the flushing operations the concentration is as high as 40,000 mg/l. For the flushing scenario starting at 20% of the flood peak, the suspended sediment concentrations have the longest duration of exposure and produce the highest values of severity. The scenarios with gate opening at 50% and 80% of the flood peak have similar impacts. However, the scenario with the operation starting at 80% of the flood peak releases higher sediment volumes from the reservoir with similar or less acute stress to salmon, and is therefore regarded as the most effective flushing operation.
Impact of sediment deposition on Chinook salmon spawning habitat
For the study of the impact on spawning grounds and other habitat areas, we only consider the most effective flushing scenario, i.e. the operation that starts at 80% of the flood peak. The changes in habitat suitability caused by this operation and by the corresponding natural flood at different times are shown in Fig. 8a, b. The corresponding weighted usable areas (Eq. 5) for spawning in the Sandy River are indicated too, for the sake of comparison.
Temporal variation of the percentage of river surface that is suitable for the different life stages of Chinook salmon after the start of (a) the natural flood event and (b) the flushing operation starting at 80% of the flood peak, and the percentage of area suitable for eggs according to water depth, flow velocity and fine sediment, and the composite Habitat Suitability Index (HSI), at different times after the start of (c) the natural flood event and (d) the flushing operation starting at 80% of the flood peak.
Figure 8c, d shows the contribution of water depth, flow velocity and percentage of fines in the substrate through their specific suitability indices for eggs (Eq. 4). The initial condition at time t = 0 h presents a relatively low flow velocity and a pre-flushing riverbed composition. This corresponds to a higher suitability for adults, with optimum flow velocity. The initial riverbed composition appears less suitable, especially for juveniles, fry and eggs. After the events, the habitat suitability for salmon in the study area is reduced by both the natural flood and the flushing operation. The reduction in suitable area for adult salmon is due to the flow velocity, which depends on the discharge and is thus not permanent, whereas for the eggs it is due to the deposition of fines. The highest content of fines in the riverbed occurs 12 h after the discharge peak (Fig. 8c, d). Part of the deposited fine sediment is eroded when the incoming suspended sediment concentration is reduced. At the end of the simulation, when the discharge reaches baseflow, the usable area for spawning is reduced to 8.3% and 5.6% of the total wet area by the natural flood and the flushing operation, respectively, compared to an initial 15%. This corresponds to a loss of 44% and 62%, respectively. The comparison between the natural flood and the flushing operation (Fig. 8c, d) shows that after the natural flood the substrate presents a slightly higher recovery, with less fine sediment content in the riverbed.
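The weighted usable areas compared here follow from Eqs. (4)–(5). Purely as an illustration of that post-processing, the sketch below computes the composite HSI and the WUA for a few cells; the piecewise-linear suitability curves it uses are placeholders, not the Chinook salmon spawning curves of Raleigh et al.37, and the cell values are invented.

```python
# Sketch of the WUA post-processing of Eqs. (4)-(5). The suitability curves
# below are simple placeholders; the study uses the Chinook salmon spawning
# curves of Raleigh et al. (1986) for depth, velocity and percentage of fines.

def ramp(x, x0, x1, rising=True):
    """Piecewise-linear suitability between 0 and 1 (placeholder curve)."""
    s = min(max((x - x0) / (x1 - x0), 0.0), 1.0)
    return s if rising else 1.0 - s


def hsi_spawning(depth_m, velocity_m_s, fines_pct):
    """Eq. (4): composite HSI as the product of the three suitability indices."""
    si_depth = ramp(depth_m, 0.1, 0.4)                         # placeholder depth curve
    si_velocity = ramp(velocity_m_s, 1.2, 2.0, rising=False)   # placeholder velocity curve
    si_fines = ramp(fines_pct, 5.0, 30.0, rising=False)        # placeholder fines curve
    return si_depth * si_velocity * si_fines


def weighted_usable_area(cells):
    """Eq. (5): WUA = sum(A_i * HSI_i), also returned as % of the total wet area."""
    wua = sum(a * hsi_spawning(d, v, f) for a, d, v, f in cells)
    wet_area = sum(a for a, _, _, _ in cells)
    return wua, 100.0 * wua / wet_area


if __name__ == "__main__":
    # (cell area [m^2], depth [m], velocity [m/s], fines [%]) before/after deposition
    before = [(50, 0.6, 0.8, 5), (50, 0.5, 1.0, 10), (50, 0.9, 1.5, 8)]
    after = [(50, 0.6, 0.8, 35), (50, 0.5, 1.0, 40), (50, 0.9, 1.5, 30)]
    for label, cells in (("before", before), ("after", after)):
        wua, pct = weighted_usable_area(cells)
        print(f"{label}: WUA = {wua:.0f} m2 ({pct:.0f}% of wet area)")
```

In the study, this computation is applied to every cell of the River model grid at each output time, which is how the percentages of suitable spawning area reported above are obtained.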
To investigate the possibility of cleaning the riverbed of the sand deposited during the flushing operation, an additional run is carried out in which the flushing scenario with gate opening at 80% of the flood peak is followed by bottom gate closure and the release of clear water to the river at a rate of 30 m3/s, which represents the base-flow of the hydrograph (see Supplementary Fig. A1). As clear water flows, fine sediment is progressively removed from the riverbed. After nine days of clear water release, however, the riverbed is still only slightly cleaned (Fig. 9). This shows that at the chosen flow rate the riverbed takes weeks to be cleaned up and indicates that flushing would cause a long-term disturbance to the spawning habitats.
Temporal variation of the spawning Weighted Usable Area (WUA) for Chinook salmon (black bars) and of the suitability in terms of % of fines deposited (grey bars) in the Sandy River with clear water release at a rate of 30 m3/s after the flushing operation. The WUA is based on the percentage of fines in the riverbed. "Initial" refers to the conditions just before the flushing operation.
The concentration of suspended sediment is key to salmon survival. However, the results indicate that the magnitude of the suspended sediment release may be underestimated by the Reservoir model. This model represents the overall phenomena well, with rapid channel excavation within the reservoir deposit and retrogressive bed erosion. The widening of the excavated channel, though, is small compared to the measured one and might be the reason for the underestimation of the sediment release. However, this might also be related to the uncertainty posed by the composition of the deposit layers and by horizontal sediment sorting in the reservoir. The possible underestimation of the sediment releases from the reservoir should always be considered when analyzing the results of the model, particularly if the predicted concentrations fall just under the sub-lethal threshold. Regarding the downstream river, the availability of fine sediment in the bed is important for the computation of both SSC and deposition rates by the River model. The initial bed composition was obtained in a straightforward way through the spin-up run, but without the possibility to calibrate the morphodynamic part of the model. The lack of field data meant uncertainty in the identification of the initial spawning ground suitability, i.e. before the flushing operations and the natural event, affecting the assessment of their impacts. The difficulty of establishing the initial riverbed composition affected the impact assessment even more, considering that the results of the River model show that sediment entrainment from the riverbed and eroding banks87 is important for the assessment of both stress and habitat losses (Fig. 7). Regarding the stress caused by the exposure to suspended solids, the results show that the differences in severity between the flushing scenarios and the natural flood are small. Flushing operations produced higher suspended sediment concentrations, but for shorter periods of time compared to the natural flood, and therefore resulted in similar severity levels. For instance, the conditions become lethal for eggs and larvae in all cases, and the natural flood conditions result only in slightly lower severity for adults and juveniles as compared to the flushing operations.
The deposition of fine sediment on the riverbed caused a similar reduction of suitable spawning habitat, but the recovery was higher and faster at the end of the flood event compared to the flushing. The severity was found to increase in the downstream direction, due to the rise in suspended sediment concentration caused by the entrainment of fines from the riverbed and the eroding banks. The severity for eggs and larvae mainly depends on the duration of exposure to suspended solids. The analysis is based on the severity scale of Newcombe and Jensen35, but this might not represent the effects well. This scale does not consider the stage or condition of the fish before the exposure, and the severity, calculated with a single value of the concentration, does not consider the cumulative effect of exposure. Moreover, the derived severity scale is based on field data analysis, whereas the response to suspended solids is highly site-specific10. For instance, fish that are frequently exposed to extreme events can be more tolerant or able to escape to neighboring tributaries with lower SSC88. Next to stress severity, the deposition of fines on the riverbed during the events is found to produce an important loss of spawning habitats. The results of the River model indicate that up to 95% of the eggs laid in the riverbed would be buried. The meta-analysis by Jensen et al.24 showed a decrease in egg and fry survival of 16.9% for every 1% increase in the content of fines (< 0.85 mm) in the substrate. It is important to note that the highest natural floods of the Sandy River normally occur in November, December and January, followed by lower flow peaks in February–March50,51. In this river, salmon spawn in September–December. This means that some natural floods occur during the critical spawning and egg incubation periods. A decline in live fish and redd counts was reported after the rain storms of October 2016, when the water was highly turbid58, qualitatively confirming the results of this study. The comparison of spawning area suitability shows that flushing causes a more persistent disturbance to spawning habitats than the natural flood, even if it is followed by riverbed cleaning through the constant release of clear water at a rate equal to the river base-flow (Fig. 10). The impact is present for almost a year if the SSC of the flow is high89, slowing the river recovery process and any restoration plans90. A more irregular clear water release, presenting short but higher discharge peaks, could increase the riverbed flushing rate and reduce the habitat restoration times. In any case, releasing large amounts of clear water might reduce the water storage in the reservoir to unacceptable levels, which limits the applicability of this type of operation. This means that riverbed flushing requires a specific study leading to the optimization of the release of valuable clear water from the reservoir.
Comparison of suitable spawning areas of the riverbed with the natural flood, flushing only, and flushing followed by clear water release at a constant rate equal to the river base-flow. "Suitability" assumes that gravel of the right size is present in the riverbed.
Finally, it is necessary to consider that the analysis of habitat suitability is based only on flow velocity, water depth and percentage of fines. The study considered a specific flood event, i.e. the same event that was also used for the design of the flushing operation.
Considering another flood event for both the flushing operations and the natural flood would lead to different physical conditions and habitat alterations. It is important to consider that other factors which might affect the results, such as temperature, dissolved oxygen, turbidity, season and water quality, as well as the occurrence of multiple events, are not considered in this study. Extreme events change not only the sediment composition, but also the riverbed topography and the river course, which might alter the connectivity of the floodplains and hence the riverine habitats90,91. This study does not consider this aspect. Being based on models (severity and HSI) with high degrees of uncertainty, the results of this study should be interpreted in terms of comparison between scenarios. Finally, it should be considered that the results of this study are case-specific, with a specific availability and type of sediment in the reservoir, flushing discharge, and reservoir characteristics, all factors that govern the quantity and quality of the sediment released from the reservoir and its impact on salmon. This work investigates the short-term (acute stress) and the long-term (river habitat alterations) effects of sediment flushing from a reservoir on Chinook salmon. Considering that flushing operations and natural floods present many similarities, the aim of this study is to assess the difference in terms of impacts between reservoir flushing and natural floods. Can the negative effects be reduced by proper planning/design of flushing operations? To meet this goal, the effects of reservoir flushing, conveniently starting during a flood event, are compared to the effects of the corresponding natural flood wave without the reservoir. The Sandy River, Oregon, USA, was chosen as a case study due to the availability of measured data from the period around the Marmot Dam removal in 2007, considering dam removal as an extreme case of sediment flushing. A 2D morphological model was used to simulate the propagation and deposition of fine sediment along the river downstream of the dam. The sediment inputs generated by the flushing operations were simulated by means of another 2D model covering the Marmot Dam reservoir and the dam, assumed to be in operation with appropriate bottom gates. The model results allowed assessing the stress caused by the exposure to suspended solids (short-term effect) through the analysis of severity indices derived for salmonids, and the habitat loss (long-term effect) through the analysis of the Habitat Suitability Index for Chinook salmon. The results indicate that excessive exposure to suspended sediment during either the natural or the artificial (flushing) high-flow events is lethal to salmon eggs and alevins. Spawning habitat losses for the flushing and flood events are found to be very similar, with 95% and 93% losses, respectively. The eggs already laid would suffer severe physiological and lethal effects during a flushing operation, as well as during a natural flood. Although the short-term impacts are similar, the long-term impact on spawning grounds due to fine sediment deposition is found to be higher in the case of flushing. Cleaning of the riverbed by releasing sediment-free water at a rate equal to the river base-flow appears ineffective in restoring suitable spawning beds and reaching the conditions that are present after the natural flood.
The effectiveness of riverbed cleaning with different clear-water flow releases, possibly including peak discharges of short duration mimicking natural flows, should be further investigated. Reservoir flushing with gate opening at 80% of an incoming flood peak is advisable, not only considering the efficiency of the operation in terms of sediment release and duration, but also because this is the flushing operation with the smallest impact, although only slightly smaller. In the study area the highest flows normally occur in November–January, whereas the critical spawning and egg incubation period of salmon is September to December. Fry appear approximately four months after eggs are laid, i.e. in December–March. This means that most natural floods harm salmon eggs and alevins, as the results of this study show, especially if they occur in the last months of the year. Considering this, the best moment for sediment flushing would be after March. However, the operation should start at the peak of a flood wave, and subsequent riverbed cleaning operations ideally need natural floods too. Looking at the typical yearly hydrograph of the Sandy River, carrying out this operation in the second half of January could offer an acceptable compromise between losing eggs and recently hatched fish and the need to flush the reservoir and clean the riverbed. If flushing starts at the peak of a flood wave with a duration comparable to that of the natural event, the timing of the operation would fall within the temporal range of natural floods, to which the river biota has adapted. By showing that the effects of floods and flushing are comparable, at least with regard to salmon, the results of this study indicate that a well-planned timing of flushing, followed by effective riverbed cleaning, if achievable, would substantially minimize the effects of dam operation.
All data, model inputs and files are downloadable upon request to the corresponding author from: http://www.hydroshare.org/resource/7674e96fa50b436fba80ba59d566a767. Information on discharge data and sediment data can be obtained from: https://pubs.usgs.gov/pp/1792/ and USGS stations.
Morris, G. L. & Fan, J. Reservoir Sedimentation Handbook: Design and Management of Dams, Reservoirs, and Watersheds for Sustainable Use (McGraw Hill Professional, 1998). White, R. Evacuation of Sediments from Reservoirs, HR Wallingford, http://www.thomastelford.com (Thomas Telford Publishing, 2001). Kondolf, G. M. et al. Sustainable sediment management in reservoirs and regulated rivers: Experiences from five continents. Earth's Future 2, 256–280 (2014). Schleiss, A. J., Franca, M. J., Juez, C. & De Cesare, G. Reservoir sedimentation. J. Hydraul. Res. 54, 595–614 (2016). Dahal, S., Crosato, A., Omer, A. Y. A. & Lee, A. A. Validation of model-based optimization of reservoir sediment releases by dam removal. J. Water Resour. Plan. Manag. 147, 04021033 (2021). Williams, G. P. & Wolman, M. G. Effects of dams and reservoirs on surface water hydrology—Changes in rivers downstream from dams. Natl. Water Summ. Hydrol. Events Surf. Water Resour. 2300, 83 (1986). Toffolon, M., Siviglia, A. & Zolezzi, G. Thermal wave dynamics in rivers affected by hydropeaking. Water Resour. Res. https://doi.org/10.1029/2009WR008234 (2010). Stewart, G. B. Patterns and Processes of Sediment Transport Following Sediment-Filled Dam Removal in Gravel Bed Rivers. (PhD Thesis, Oregon State University, Oregon USA, 2006). Major, J. J. et al. Geomorphic Response of the Sandy River, Oregon, to Removal of Marmot Dam. U.S.
Use of convolutional neural network for image classification
jcgonzalez, 20 November 2017 | Posted in: Convolutional Neural Network, classification task

In this article we introduce the main concepts behind convolutional neural networks (CNNs) and their application to the image classification task. Before describing the architecture of CNNs in detail, let us get acquainted with some definitions that will make it easier to understand how CNNs work. When we refer to a CNN, it is implicit that we are referring to deep learning too.

What is deep learning?

Deep learning is a set of machine learning algorithms that attempt to model high-level abstractions from data, using architectures composed of multiple non-linear transformations (see references 1, 23). Deep learning is part of a broader family of machine learning methods based on learning data representations. For example, in an image recognition task the image can be represented in many forms, e.g. as a matrix of pixels or as a vector of bytes. However, some representations make the learning easier for a particular task of interest. The goal of research in this area is to determine which representations are better and how to create models capable of learning from these representations through multiple transformations, and thus obtain high performance in the tasks assigned to these models (see references 2, 3). While there is no single definition of deep learning, several publications focus on different characteristics, such as:

- the use of a cascade of levels (usually called layers) with non-linear processing units to extract and transform variables, where each layer uses the output of the previous layer as input; the algorithms can use supervised or unsupervised learning, and applications include data modeling and pattern recognition;
- learning of multiple levels of features or representations of the data, where higher-level features are derived from lower-level features to form a hierarchical representation;
- learning multiple levels of representation that correspond to different levels of abstraction, these levels forming a hierarchy of concepts.

All these ways of defining deep learning have the following aspects in common: multiple levels of non-linear processing (usually called layers), and supervised or unsupervised learning of feature representations at each level. The levels form a hierarchy of features from a lower level of abstraction to a higher one. Deep learning algorithms contrast with other learning algorithms in the number of transformations applied to the input data as it propagates from the first non-linear transformation (input layer) to the last non-linear transformation (output layer). Each of these transformations includes parameters that can be trained, such as weights and thresholds (see references 2, 3).
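To make the idea of a cascade of trainable non-linear layers a little more concrete, here is a minimal sketch in NumPy (our own illustration, not taken from the cited references; the layer sizes, the random weights and the choice of the tanh non-linearity are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # one processing level: an affine map with trainable weights W and thresholds b,
    # followed by a non-linear transformation
    return np.tanh(W @ x + b)

x = rng.normal(size=4)                          # input representation (raw features)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # parameters of the first level
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # parameters of the second level

h = layer(x, W1, b1)        # first level extracts lower-level features
y = layer(h, W2, b2)        # second level uses the output of the first as its input
print(h.shape, y.shape)     # (8,) (3,)
```

Stacking more such levels, and learning the weights from data, is what turns this toy cascade into a deep network.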
However, there is no standard rule for the number of transformations (or layers) that makes an algorithm deep, but most researchers in the field agree that deep learning involves more than two intermediate transformations. Commonly, the multiple non-linear transformations in deep learning are included as hidden layers of deep neural networks (see reference 4). These architectures have been applied in different fields such as computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics and drug design (see reference 5), where they have produced results comparable to, and in some cases superior to (see reference 6), human experts (see 7). Now that the general aspects of deep learning have been discussed, we will formally introduce CNNs.

Convolutional Neural Networks (CNNs)

Convolutional neural networks are a category of neural networks that have proven very effective in areas such as image recognition and classification. CNNs have been successful in identifying faces, objects and traffic signs, apart from powering vision in robots and self-driving cars. CNNs, therefore, are an important tool for most machine learning researchers today. However, understanding CNNs and learning to use them for the first time can sometimes be an intimidating experience. The main purpose of this article is to develop an understanding of how convolutional neural networks work on images. If you are a newcomer to neural networks, we would recommend reading some tutorials on multilayer perceptrons before proceeding, for a better understanding of how CNNs work.

The LeNet Architecture (1990s)

In the last years several new architectures of convolutional neural networks (8, 9) have been proposed. However, many of them use the main concepts from LeNet. LeNet was one of the very first convolutional neural networks and helped propel the field of deep learning. This pioneering work by Yann LeCun was named LeNet5 after many previous successful iterations since the year 1998 (see 10). At that time the LeNet architecture was used mainly for character recognition tasks such as reading zip codes, digits, etc. Below, we develop an intuitive description of how the LeNet architecture learns to recognize images.

Figure 1: A simple ConvNet. Source 11

The convolutional neural network in Figure 1 is similar in architecture to the original LeNet and classifies an input image into four categories: dog, cat, boat or bird (the original LeNet was used mainly for character recognition tasks). As can be observed from Figure 1, the network receives a boat image as input and correctly assigns the highest probability to boat (0.94) among all four categories. For the recognition of the image, the network relies on four main operations:

- Convolution
- Non-linearity (ReLU)
- Pooling or sub-sampling
- Classification (fully connected layer)

These operations are the basic building blocks of every convolutional neural network, so understanding how they work is an important step towards developing a solid understanding of CNNs. We will try to understand what lies behind each of these operations.

An Image as a matrix of pixel values

Essentially, every image can be represented as a matrix of pixel values.

Figure 2: Every image is a matrix of pixel values. Source 12
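As a small, hypothetical illustration of this point (the numbers below are made up and are not the images shown in the figures), a grayscale image is just one matrix of intensities, and a color image is several such matrices stacked into channels, a notion explained next:

```python
import numpy as np

# a tiny 4 x 4 grayscale "image": one matrix of intensities in 0..255
gray = np.array([[  0,  50, 100, 150],
                 [ 25,  75, 125, 175],
                 [ 50, 100, 150, 200],
                 [ 75, 125, 175, 255]], dtype=np.uint8)

# a color image of the same size: three such matrices stacked along a last axis
rgb = np.stack([gray, gray // 2, 255 - gray], axis=-1)

print(gray.shape)   # (4, 4)
print(rgb.shape)    # (4, 4, 3) -> height x width x channels
```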
Channel is a conventional term used to refer to a certain component of an image. For example, an image from a standard digital camera has three channels (red, green and blue), so you can imagine those as three 2D matrices stacked over each other (one for each color), each having pixel values in the range 0 to 255. These kinds of objects (the three stacked 2D matrices) are called tensors in a mathematical context.

Convolution Step

CNNs derive their name from the "convolution" operator. The primary purpose of convolution in a CNN is to extract features from the input image. Convolution preserves the spatial relationship between pixels by learning image features using small squares of input data. We will not go into the mathematical details of convolution here, but will try to understand how it works over images. As we discussed above, every image can be considered as a matrix of pixel values. Consider a 5 x 5 image whose pixel values are only 0 and 1 (note that for a grayscale image, pixel values range from 0 to 255; the green matrix below is a special case where pixel values are only 0 and 1). Also, consider another 3 x 3 matrix as shown below. Then the convolution of the 5 x 5 image and the 3 x 3 matrix can be computed as shown in the animation in Figure 5 below.

Figure 5: The Convolution operation. The output matrix is called the Convolved Feature or Feature Map. Source 13

Let us take a moment to understand how the computation above is done. We slide the orange matrix over our original image (green) by 1 pixel (also called the 'stride') and, for every position, we compute an element-wise multiplication (between the two matrices) and add the multiplication outputs to get the final integer which forms a single element of the output matrix (pink). Note that the 3×3 matrix "sees" only a part of the input image in each stride. In CNN terminology, the 3×3 matrix is called a 'filter', 'kernel' or 'feature detector', and the matrix formed by sliding the filter over the image and computing the dot product is called the 'Convolved Feature', 'Activation Map' or 'Feature Map'. It is important to note that filters act as feature detectors on the original input image. It is evident from the animation above that different values of the filter matrix will produce different feature maps for the same input image. As an example, consider the following input image: in the table below, we can see the effects of convolving the above image with different filters. As shown, we can perform operations such as edge detection, sharpening and blurring just by changing the numeric values of our filter matrix before the convolution operation (for more details see reference 14); this means that different filters can detect different features from an image, for example edges, curves etc. Another example that illustrates the convolution operation is the animation in Figure 8 below.

Figure 8: The Convolution Operation. Source 15

A filter (with red outline) slides over the input image (convolution operation) to produce a feature map. The convolution of another filter (with the green outline) over the same image gives a different feature map, as shown. It is important to note that the convolution operation captures the local dependencies in the original image. Also note how these two different filters generate different feature maps from the same original image. Remember that the image and the two filters above are just numeric matrices, as we have discussed above.
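The sliding-window computation described above can be written in a few lines of plain NumPy. This is only a sketch of a 'valid' convolution with stride 1; the 5 x 5 binary image and the 3 x 3 filter below are example values in the spirit of the green and orange matrices, not necessarily the exact ones from the animation:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide `kernel` over `image`, summing element-wise products (a feature map)."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)   # element-wise multiply, then add
    return out

image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])

kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

print(conv2d(image, kernel))   # 3 x 3 convolved feature / feature map
```

With zero-padding or a different stride the same loop structure applies; only the output size changes.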
In practice, a CNN learns the values of these filters on its own during the training process (although we still need to specify parameters such as the number of filters, the filter size and the architecture of the network before training). The more filters we have, the more image features get extracted and the better our network becomes at recognizing patterns in unseen images. The size of the feature map (convolved feature) is controlled by three parameters 16 that we need to decide before the convolution step is performed:

- Depth: Depth corresponds to the number of filters we use for the convolution operation (the multiple non-linear transformations discussed above). In the network shown in Figure 9, we perform convolution of the original boat image using three distinct filters, thus producing three different feature maps as shown. You can think of these three feature maps as stacked 2D matrices, so the 'depth' of the feature map would be three.
- Stride: Stride is the number of pixels by which we slide our filter matrix over the input matrix. When the stride is 1, we move the filters one pixel at a time. When the stride is 2, the filters jump 2 pixels at a time as we slide them around. A larger stride will produce smaller feature maps.
- Zero-padding: Sometimes it is convenient to pad the input matrix with zeros around the border, so that we can apply the filter to the bordering elements of our input image matrix. A nice feature of zero-padding is that it allows us to control the size of the feature maps. Adding zero-padding is also called wide convolution, and not using zero-padding would be a narrow convolution. This has been explained clearly in 17.

Introducing Non-Linearity (ReLU)

An additional operation called ReLU is used after every convolution operation in the network of Figure 9 above. ReLU stands for Rectified Linear Unit and is a non-linear operation. Its output is given by Output = max(0, Input).

Figure 10: the ReLU operation

ReLU is an element-wise operation (applied per pixel) and replaces all negative pixel values in the feature map by zero. The ReLU operation can be understood clearly from Figure 11 below, which shows the ReLU operation applied to one of the feature maps obtained in Figure 6 above. The output feature map here is also referred to as the 'rectified' feature map.

Figure 11: ReLU operation (see 18)

Pooling Step

Spatial pooling (also called sub-sampling or down-sampling) reduces the dimensionality of each feature map but retains the most important information. Spatial pooling can be of different types: max, average, sum etc. In the case of max pooling, we define a spatial neighborhood (for example, a 2×2 window) and take the largest element from the rectified feature map within that window. Instead of taking the largest element we could also take the average (average pooling) or the sum of all elements in that window. In practice, max pooling has been shown to work better. Figure 12 shows an example of the max pooling operation on a rectified feature map (obtained after the convolution + ReLU operation) using a 2×2 window.

Figure 12: Max Pooling Operation (Rectified Feature Map). 16

We slide our 2 x 2 window by 2 cells (also called the 'stride') and take the maximum value in each region. As shown in Figure 12, this reduces the dimensionality of our feature map.
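Both operations just described are straightforward to express in code. The following sketch (our own, using an arbitrary 4 x 4 example feature map) applies ReLU and then 2 x 2 max pooling with stride 2:

```python
import numpy as np

def relu(feature_map):
    # replace every negative value by zero, keep positive values unchanged
    return np.maximum(feature_map, 0)

def max_pool(feature_map, size=2, stride=2):
    h = (feature_map.shape[0] - size) // stride + 1
    w = (feature_map.shape[1] - size) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = feature_map[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = window.max()     # keep only the largest value in each window
    return out

fmap = np.array([[ 1., -2.,  3., -4.],
                 [-1.,  6., -7.,  8.],
                 [ 2., -3.,  4., -5.],
                 [-6.,  7., -8.,  9.]])

rectified = relu(fmap)        # 4 x 4 'rectified' feature map
pooled = max_pool(rectified)  # 2 x 2 output: [[6., 8.], [7., 9.]]
print(pooled)
```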
In the network shown in Figure 13, the pooling operation is applied separately to each feature map (notice that, due to this, we get three output maps from three input maps).

Figure 13: Pooling applied to Rectified Feature Maps

Figure 14 shows the effect of pooling on the rectified feature map we obtained after the ReLU operation in Figure 11 above.

Figure 14: Pooling. Source 18

The function of pooling is to progressively reduce the spatial size of the input representation 16. In particular, pooling:

- makes the input representations (feature dimension) smaller and more manageable;
- reduces the number of parameters and computations in the network, therefore controlling overfitting 16;
- makes the network invariant to small transformations, distortions and translations in the input image (a small distortion in the input will not change the output of pooling, since we take the maximum / average value in a local neighborhood);
- helps us arrive at an almost scale invariant representation of our image (the exact term is "equivariant"). This is very powerful, since we can detect objects in an image no matter where they are located (read 18 for details).

So far we have seen how the convolution, ReLU and pooling layers work. It is important to understand that these layers are the basic building blocks of any CNN. As shown in Figure 15, we have two sets of convolution, ReLU and pooling layers: the second convolution layer performs convolution on the output of the first pooling layer using six filters to produce a total of six feature maps. ReLU is then applied individually to all of these six feature maps. We then perform the max pooling operation separately on each of the six rectified feature maps. Together these layers extract the useful features from the images, introduce non-linearity in our network and reduce the feature dimension, while aiming to make the features somewhat equivariant to scale and translation 19. The output of the second pooling layer acts as an input to the fully connected layer, which we discuss in the next part.

Fully Connected Layer

The fully connected layer is a traditional multilayer perceptron (20) that uses a softmax activation function in the output layer (other classifiers like an SVM can also be used, but we will stick to softmax in this post). The term "fully connected" implies that every neuron in the previous layer is connected to every neuron in the next layer. The outputs from the convolutional and pooling layers represent high-level features of the input image. The purpose of the fully connected layer is to use these features for classifying the input image into various classes based on the training dataset. For example, the image classification task we set out to perform has four possible outputs, as shown in Figure 16 below (note that Figure 16 does not show the connections between the nodes in the fully connected layer).

Figure 16: Fully Connected Layer - each node is connected to every other node in the adjacent layer

Adding a fully connected layer is a cheap way of learning non-linear combinations of the features obtained through the convolution process. Most of the features from the convolutional and pooling layers may be good for the classification task, but combinations of those features might be even better. The sum of the output probabilities from the fully connected layer is 1. This is ensured by using softmax as the activation function in the output layer of the fully connected layer. The softmax function takes a vector of arbitrary real-valued scores and squashes it to a vector of values between zero and one that sum to one.
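As a quick illustration of that last point, here is a generic softmax sketch (the score vector is arbitrary, not the actual output of the network in the figure):

```python
import numpy as np

def softmax(scores):
    # subtract the maximum score for numerical stability; the result is unchanged
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1, -1.0])   # arbitrary real-valued class scores
probs = softmax(scores)
print(probs)          # values strictly between 0 and 1 ...
print(probs.sum())    # ... that sum to 1.0
```

Exponentiating and normalizing is exactly what guarantees that the outputs lie in (0, 1) and sum to one.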
Combining all the processes explained above, the convolution + pooling layers act as feature extractors from the input image, while the fully connected layer acts as a classifier. Thus, let us go back to the initial example in which we want to classify the input image (see Figure 17 below).

Figure 17: Training the ConvNet

Since the input image is a boat, the target probability is 1 for the Boat class and 0 for the other three classes, i.e.

Input Image = Boat
Output: Target vector [0, 0, 1, 0] ([prob. to be Dog, prob. to be Cat, prob. to be Boat, prob. to be Bird])

In addition to the architecture of the neural network, another important aspect is the optimization of all its parameters. This includes the values of the filters that we have to choose, as well as the weights of the connections in the layers of the multilayer perceptron. The optimization of all these parameters is called the training process of the network. The overall training process of the convolutional network may be summarized as follows:

Step 1: We initialize all filters and parameters / weights with random values.
Step 2: The network takes a training image as input, goes through the forward propagation step (convolution, ReLU and pooling operations along with forward propagation in the fully connected layer) and finds the output probabilities for each class. Let's say the output probabilities for the boat image above are [0.2, 0.4, 0.1, 0.3]. Since the weights are randomly assigned for the first training example, the output probabilities are also random.
Step 3: Calculate the total error at the output layer (summation over all 4 classes): Total Error $= \sum \tfrac{1}{2}(\text{target} - \text{output})^2$.
Step 4: Use backpropagation to calculate the gradients of the error with respect to all weights in the network, and use gradient descent to update all filter values / weights and parameter values to minimize the output error (see references 21 and 22 for details). The weights are adjusted in proportion to their contribution to the total error. When the same image is input again, the output probabilities might now be [0.1, 0.1, 0.7, 0.1], which is closer to the target vector [0, 0, 1, 0]. This means that the network has learned to classify this particular image correctly by adjusting its weights / filters such that the output error is reduced. Parameters like the number of filters, the filter sizes and the architecture of the network have all been fixed before Step 1 and do not change during the training process; only the values of the filter matrices and connection weights get updated.
Step 5: Repeat steps 2-4 with all images in the training set.

The above steps train the ConvNet; this essentially means that all the weights and parameters of the ConvNet have now been optimized to correctly classify images from the training set. When a new (unseen) image is input into the ConvNet, the network goes through the forward propagation step and outputs a probability for each class (for a new image, the output probabilities are calculated using the weights which have been optimized to correctly classify all the previous training examples). If our training set is large enough, the network will (hopefully) generalize well to new images and classify them into the correct categories. The steps above have been oversimplified and mathematical details have been avoided to provide intuition into the training process; see 16 for a mathematical formulation and thorough understanding. A small numerical illustration of Steps 3 and 4 is sketched below.
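To make Steps 3 and 4 slightly more tangible, the toy sketch below computes the total error for the boat example and performs one schematic gradient-descent update on a single made-up parameter; it is not the actual backpropagation through the whole ConvNet:

```python
import numpy as np

target = np.array([0., 0., 1., 0.])          # one-hot target: the image is a boat
output = np.array([0.2, 0.4, 0.1, 0.3])      # network output after a random first pass

# Step 3: total error, summed over the 4 classes
total_error = np.sum(0.5 * (target - output) ** 2)
print(total_error)   # 0.5 * (0.04 + 0.16 + 0.81 + 0.09) = 0.55

# Step 4 (schematic): gradient descent on one parameter w.
# In a real network, dE/dw is obtained by backpropagation through all layers;
# here we simply pretend backpropagation returned a gradient value.
learning_rate = 0.1
w = 0.5                        # some filter value or connection weight
dE_dw = -0.3                   # placeholder gradient of the error w.r.t. w
w = w - learning_rate * dE_dw  # move against the gradient to reduce the error
print(w)                       # 0.53
```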
In the example above we used two sets of alternating convolution and pooling layers. Please note, however, that these operations can be repeated any number of times in a single ConvNet. In fact, some of the best performing ConvNets today have tens of convolution and pooling layers! Also, it is not necessary to have a pooling layer after every convolutional layer.

Convolutional neural networks have been used since the early 1990s. We discussed LeNet above, which was one of the very first convolutional neural networks, in order to give the reader an intuition of how CNNs work. Some other influential architectures are listed below.

- LeNet (1990s): Already covered in this article.
- 1990s to 2012: In the years from the late 1990s to the early 2010s convolutional neural networks were in incubation. As more and more data and computing power became available, the tasks that convolutional neural networks could tackle became more and more interesting.
- AlexNet (2012): In 2012, Alex Krizhevsky (and others) released AlexNet, which was a deeper and much wider version of LeNet and won the difficult ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 by a large margin. It was a significant breakthrough with respect to the previous approaches, and the current widespread application of CNNs can be attributed to this work.
- ZF Net (2013): The ILSVRC 2013 winner was a convolutional network from Matthew Zeiler and Rob Fergus. It became known as ZFNet (short for Zeiler & Fergus Net). It was an improvement on AlexNet obtained by tweaking the architecture hyperparameters.
- GoogLeNet (2014): The ILSVRC 2014 winner was a convolutional network from Szegedy et al. from Google. Its main contribution was the development of an Inception module that dramatically reduced the number of parameters in the network (4M, compared to AlexNet with 60M).
- VGGNet (2014): The runner-up in ILSVRC 2014 was the network that became known as VGGNet. Its main contribution was showing that the depth of the network (number of layers) is a critical component for good performance.
- ResNets (2015): The Residual Network developed by Kaiming He (and others) was the winner of ILSVRC 2015. ResNets are currently by far the state-of-the-art convolutional neural network models and are the default choice for using ConvNets in practice (as of May 2016).
- DenseNet (August 2016): Recently published by Gao Huang (and others), the Densely Connected Convolutional Network has each layer directly connected to every other layer in a feed-forward fashion. The DenseNet has been shown to obtain significant improvements over previous state-of-the-art architectures on five highly competitive object recognition benchmark tasks. Check out the Torch implementation here.

In this article, we have explained the main concepts behind convolutional neural networks in simple terms. There are several details that we have oversimplified or skipped, but hopefully this post gave you some intuition about how they work. All images and animations used in this post belong to their respective authors, as listed in the References section below.

References

1 Y. Bengio, A. Courville, and P. Vincent, "Representation Learning: A Review and New Perspectives," IEEE Trans. PAMI, special issue Learning Deep Architectures, 2013
2 Jürgen Schmidhuber, "Deep Learning in Neural Networks: An Overview," Neural Networks, Volume 61, January 2015, Pages 85-117
3 Deng, L.; Yu, D. (2014). "Deep Learning: Methods and Applications" (PDF). Foundations and Trends in Signal Processing. 7 (3–4): 1–199. doi:10.1561/2000000039
4 Bengio, Yoshua (2009). "Learning Deep Architectures for AI" (PDF). Foundations and Trends in Machine Learning. 2 (1): 1–127.
doi:10.1561/2200000006.
5 Ghasemi, F.; Mehridehnavi, A. R.; Fassihi, A.; Perez-Sanchez, H. (2017). "Deep Neural Network in Biological Activity Prediction using Deep Belief Network". Applied Soft Computing.
6 Ciresan, Dan; Meier, U.; Schmidhuber, J. (June 2012). "Multi-column deep neural networks for image classification". 2012 IEEE Conference on Computer Vision and Pattern Recognition: 3642–3649. doi:10.1109/cvpr.2012.6248110.
7 Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey (2012). "ImageNet Classification with Deep Convolutional Neural Networks". NIPS 2012: Neural Information Processing Systems, Lake Tahoe, Nevada.
8 LeCun, Yann; Bottou, Léon; Bengio, Yoshua and Haffner, Patrick (1998). "Gradient-Based Learning Applied to Document Recognition". Proc. of the IEEE.
11 Clarifai / Technology
12 Machine Learning is Fun! Part 3: Deep Learning and Convolutional Neural Networks
13 Feature extraction using convolution, Stanford
14 Wikipedia article on Kernel (image processing)
15 Deep Learning Methods for Vision, CVPR 2012 Tutorial
16 CS231n Convolutional Neural Networks for Visual Recognition, Stanford
17 Understanding Convolutional Neural Networks for NLP
18 Neural Networks by Rob Fergus, Machine Learning Summer School 2015
19 What is the difference between deep learning and usual machine learning?
20 Introduction to Multi-Layer Perceptrons
21 How the backpropagation algorithm works
22 Artificial Neural Networks: Mathematics of Backpropagation
23 Deep Learning with Neural Networks and TensorFlow Introduction
Prove Standard deviation greater than or equal to Mean deviation

How do we prove that the standard deviation is greater than or equal to the mean deviation about the arithmetic mean?
$$ \sqrt\frac{\sum_{i=1}^{n}(x_i-\bar{x})^2}{n}\geq\frac{\sum_{i=1}^{n}|x_i-\bar{x}|}{n} $$
And under what conditions do we get equality? I think I understand that it is because of the squaring in the standard deviation, which tends to give more weight to the data far from the central tendency.
statistics standard-deviation
ss1729

$\begingroup$ The Cauchy–Schwarz inequality will do. $\endgroup$ – Chappers Jul 20 '17 at 18:18
$\begingroup$ @Chappers could you help me with how to proceed? I'm unable to find where to start in order to prove it. $\endgroup$ – ss1729 Jul 20 '17 at 18:41

Let $v=\left|\vec{x} - \bar{x}\vec{1}\right|$, where $|\cdot|$ is applied component-wise. Then:
\begin{align} \frac{1}{n^2}\left( \sum_i |x_i - \bar{x}| \right)^2 &= \frac{ \left(v\cdot\vec{1}\right)^2}{n^2}\\ &\leq \frac{(\vec{1}\cdot\vec{1})(v\cdot v)}{n^2}\\ &= \frac{1}{n}\sum_i|x_i - \bar{x}|^2 \end{align}
where we used the Cauchy–Schwarz inequality for the second step. Now take the square root of the first and last terms:
$$ \sqrt{\frac{1}{n}\sum_i|x_i - \bar{x}|^2\;} \,\geq \frac{1}{n}\sum_i |x_i - \bar{x}| $$

$\begingroup$ How does $\left|\vec{x} - \bar{x}\vec{1}\right|= \sum_i \left|x_i - \bar{x} \right|$ ? $\endgroup$ – ThePassenger Jul 21 '17 at 8:22

The Cauchy–Schwarz inequality
$$\bigg(\sum_{i=1}^{n}a_{i}^2\bigg)\cdot\bigg(\sum_{i=1}^{n}b_{i}^2\bigg)\ge \bigg(\sum_{i=1}^{n}a_ib_i\bigg)^2 $$
with $a_i=|x_i-\bar{x}|$ and $b_i=1/n$ gives
$$ \bigg(\sum_{i=1}^{n}|x_i-\bar{x}|^2\bigg)\cdot\bigg(\sum_{i=1}^{n}\tfrac{1}{n^2}\bigg)\ge \bigg(\sum_{i=1}^{n}|x_i-\bar{x}|\cdot\tfrac{1}{n}\bigg)^2\\ \bigg(\sum_{i=1}^{n}(x_i-\bar{x})^2\bigg)\cdot\bigg(n\cdot\tfrac{1}{n^2}\bigg)\ge \bigg(\frac{\sum_{i=1}^{n}|x_i-\bar{x}|}{n}\bigg)^2\\ \frac{\sum_{i=1}^n(x_i-\bar{x})^2}{n}\ge \bigg(\frac{\sum_{i=1}^n|x_i-\bar{x}|}{n}\bigg)^2\\ \sqrt\frac{\sum_{i=1}^n(x_i-\bar{x})^2}{n}\ge\frac{\sum_{i=1}^n|x_i-\bar{x}|}{n}\\ \text{S.D.}\ge \text{M.D.} $$
Equality in the Cauchy–Schwarz inequality holds exactly when the sequences $(a_i)$ and $(b_i)$ are proportional; since $b_i=1/n$ is constant, equality occurs precisely when all deviations $|x_i-\bar{x}|$ are equal.
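As a quick numerical sanity check of the inequality (this is an addition to the answers above; the sample values are arbitrary):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
dev = x - x.mean()

sd = np.sqrt(np.mean(dev ** 2))   # (population) standard deviation
md = np.mean(np.abs(dev))         # mean absolute deviation about the mean

print(sd, md, sd >= md)           # 2.0 1.5 True

# Equality holds exactly when all |x_i - mean| are equal,
# e.g. for data taking only two values with equal frequency:
y = np.array([1.0, 3.0, 1.0, 3.0])
dy = y - y.mean()
print(np.sqrt(np.mean(dy ** 2)), np.mean(np.abs(dy)))   # 1.0 1.0
```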
1. Magnetic Particle Imaging and reconstruction on Lissajous curves: Reduced models and reconstruction schemes for Magnetic Particle Imaging (MPI) based on efficient data sampling along Lissajous curves.
2. Mathematical models in Magnetic Particle Imaging: Description and analytic study of the imaging operator in Magnetic Particle Imaging within the scientific network MathMPI.
3. Polynomial interpolation on Lissajous-Chebyshev nodes: Analysis of optimal interpolation nodes on Lissajous-Chebyshev varieties. Efficient implementation based on the fast Fourier transform.
4. Spectral interpolation in polar and spherical coordinates: Analysis and numerical implementation of spectral interpolation methods for sampling nodes along rhodonea curves in the unit disk and spherical Lissajous curves.
5. Shape-driven interpolation with discontinuous kernels: Study of kernels with a discontinuous scaling function for a shape-preserving interpolation of an image with known and unknown edges.
6. Iterative solvers for linear inverse problems: Accelerated Landweber methods based on the usage of co-dilated orthogonal polynomials.
7. Space-frequency analysis for orthogonal expansions: An analogue of the Landau-Pollak-Slepian time-frequency analysis for orthogonal polynomials on the real line and for spherical harmonics.
8. Uncertainty principles on manifolds and in spectral graph theory: Uncertainty principles on manifolds and on graphs based on the mutual correlation between a space and a frequency localization operator.
9. Anisotropic approximation with Gaussian mixtures: N-term approximation based on anisotropic Gaussians has the same approximation power as N-term approximation with curvelets.
10. Graph interpolation and classification with Graph Basis Functions (GBF's): Kernel-based methods for interpolation, regression and classification on graphs. The kernels are generated by the generalized shifts of a principal graph basis function.

1. Magnetic Particle Imaging (MPI) and reconstruction on Lissajous curves

Magnetic Particle Imaging (MPI) is a novel biomedical imaging modality in which the spatial distribution of superparamagnetic nanoparticles is determined by the non-linear magnetization response of the particles to an applied magnetic field. These applied magnetic fields are usually generated in such a way that the signal acquisition is carried out along a closed Lissajous curve. This raises the natural question of how a continuous function or discretized data values along Lissajous trajectories can be extended into the surrounding region. Further, in MPI usually only a finite number of Fourier coefficients of the signal on the acquisition trajectory are measured. This leads directly to the fascinating mathematical question of how a multivariate density function can be approximately recovered from limited spectral data on a curve. In 2014, I initiated with several colleagues from different universities and industry the DFG-funded interdisciplinary scientific network "Development, analysis and application of mathematical methods for Magnetic Particle Imaging (MathMPI)", in which mathematical questions related to the imaging modality MPI are tackled.

Fig. 1: Reconstruction of magnetic particle distributions based on Chebyshev spectral methods for data measured along Lissajous curves. Left: Particle density reconstructed on Lissajous node points.
Middle left: Chebyshev reconstruction without filtering. Middle right: Chebyshev reconstruction with classical spectral filtering. Right: Chebyshev reconstruction with adaptive spectral filtering.

In the framework of this network we systematically studied mathematical questions related to the structure of Lissajous curves, their discretizations and the interpolation and approximation of data along these trajectories. As in the theory of the Padua points, we developed a Chebyshev spectral method for an efficient polynomial interpolation on the node points of two- and higher-dimensional Lissajous curves and applied it to the reconstruction in MPI. In particular, we studied Lissajous node points as a possibility to reduce MPI measurements and to obtain efficient and fast reconstructions at the same time. A second focus of research is the development of spectral basis elements such that the MPI system matrix can be represented and stored in a sparse way. A third goal is the construction of adaptive spectral filters in order to improve the reconstruction quality of the Chebyshev spectral methods. This research was done in collaboration with the Institute of Medical Engineering at the University of Lübeck and the Institute of Biomedical Imaging at the UKE in Hamburg. The adaptive spectral filters were developed in collaboration with the Padova-Verona research group on Constructive Approximation and Applications.

Erb, W., Kaethner, C., Ahlborg, M. and Buzug, T.M. Bivariate Lagrange interpolation at the node points of non-degenerate Lissajous curves. Numer. Math. 133, 4 (2016), 685-705 (Preprint)
Erb, W., Kaethner, C., Dencker, P. and Ahlborg, M. A survey on bivariate Lagrange interpolation on Lissajous nodes. Dolomites Research Notes on Approximation 8 (Special issue) (2015), 23-36 (Publication)
Kaethner, C., Erb, W., Ahlborg, M., Szwargulski, P., Knopp, T. and Buzug, T.M. Non-Equispaced System Matrix Acquisition for Magnetic Particle Imaging based on Lissajous Node Points. IEEE Transactions on Medical Imaging 35, 11 (2016), 2476-2485
Schmiester, L., Moddel, M., Erb, W. and Knopp, T. Direct Image Reconstruction of Lissajous Type Magnetic Particle Imaging Data using Chebyshev-based Matrix Compression. IEEE Transactions on Computational Imaging 3, 4 (2017), 671-681
De Marchi, S., Erb, W. and Marchetti, F. Spectral filtering for the reduction of the Gibbs phenomenon for polynomial approximation methods on Lissajous curves with applications in MPI. Dolomites Res. Notes Approx. 10 (2017), 128-137 (Publication)

2. Mathematical models in magnetic particle imaging

The modeling and the analysis of the imaging process in magnetic particle imaging is an ongoing research project within the research network MathMPI. In this project, we systematically investigate the continuous MPI imaging operator in different mathematical setups. As underlying function spaces for the analysis we introduce suitable Hilbert spaces and decompose the MPI imaging operator into simple building blocks. These suboperations are then analyzed with respect to their mathematical properties. In this way, we could obtain a complete analysis of the continuous 1D-MPI forward operator and, in particular, a mathematical description of its ill-posedness. Further, we could show an exponential decay of the singular values of the 1D-MPI operator. In the 3D imaging setting, we developed a model-based reconstruction approach that incorporates realistic magnetic field topologies in terms of expansions in spherical harmonics.
Formula 1: Simulation of the MPI signal generation process in 1D. The concentration c(x) of particles given here is a delta sample indicated by a red dot. The additional small arrow displays the magnetization of the particle in the external magnetic field. The moving low-field region (the field-free point) of the external magnetic field is illustrated with a black dot. The generated voltage signal u(t) is shown as a red curve. On the right-hand side the corresponding MPI imaging equation is formulated as an integral equation. More information on this topic can be found in the following slides.

Bringout, G., Erb, W. and Frikel, J. A new 3D model for magnetic particle imaging using realistic magnetic field topologies for algebraic reconstruction. Inverse Problems 36 (2020), 124002 (Preprint)
Erb, W., Weinmann, A., Ahlborg, M., Brandt, C., Bringout, G., Buzug, T.M., Frikel, J., Kaethner, C., Knopp, T., März, T., Möddel, M., Storath, M. and Weber, A. Mathematical Analysis of the 1D Model and Reconstruction Schemes for Magnetic Particle Imaging. Inverse Problems 34 (2018), 055012 (Preprint)

3. Polynomial interpolation schemes on Lissajous curves

Fig. 2: Two Lissajous figures with the respective Lissajous-Chebyshev interpolation node points.

Starting from the applications in Magnetic Particle Imaging, I developed together with P. Dencker a comprehensive theory of multivariate polynomial interpolation related to Lissajous-Chebyshev points. This theory synthesizes and generalizes interpolation results for several well-known interpolation node sets such as the Padua points, the Morrow-Patterson-Xu points or node points generated by a single Lissajous curve. We could show that these points give rise to a multivariate Chebyshev spectral interpolation scheme with some remarkable properties that can be implemented as efficiently as a tensor-product Chebyshev interpolant. Furthermore, the numerical condition and the convergence rates of the developed scheme are equivalent to those of a corresponding tensor-product Chebyshev scheme.

Fig. 3: Two polynomial interpolants of function values on Lissajous-Chebyshev points.

Erb, W. Bivariate Lagrange interpolation at the node points of Lissajous curves - the degenerate case. Appl. Math. Comput. 289 (2016), 409-425 (Preprint)
Erb, W., Kaethner, C., Dencker, P. and Ahlborg, M. A survey on bivariate Lagrange interpolation on Lissajous nodes. Dolomites Research Notes on Approximation 8 (Special issue) (2015), 23-36 (Publication)
Dencker, P. and Erb, W. Multivariate polynomial interpolation on Lissajous-Chebyshev nodes. J. Approx. Theory 219 (2017), 15-45 (Preprint)
Dencker, P., Erb, W., Kolomoitsev, Y. and Lomako, T. Lebesgue constants for polyhedral sets and polynomial interpolation on Lissajous-Chebyshev nodes. Journal of Complexity 43 (2017), 1-27 (Preprint)
Dencker, P. and Erb, W. A unifying theory for multivariate polynomial interpolation on general Lissajous-Chebyshev nodes. arXiv:1711.00557 (2017) (Preprint)

Fig. 4: The apps LC2Dfevalapp and LC2Dplotapp to test polynomial interpolation schemes on Lissajous-Chebyshev nodes.

For polynomial interpolation on general Lissajous-Chebyshev points:
LC2Ditp: A software package that contains a Matlab implementation for 2D polynomial interpolation on general Lissajous-Chebyshev points. It also contains two apps to test the interpolation schemes (LC2Dfevalapp) and to display (LC2Dplotapp) the Lissajous-Chebyshev node points.
LC3Ditp: A software package that contains a Matlab implementation for 3D polynomial interpolation on general Lissajous-Chebyshev points.

For polynomial interpolation along particular Lissajous curves:
LS2Ditp: This software package contains a Matlab implementation for bivariate polynomial interpolation on the node points of degenerate and non-degenerate 2D-Lissajous curves.
LD3Ditp: A small software package that contains a Matlab implementation for 3D polynomial interpolation on the node points of degenerate 3D-Lissajous curves.

4. Spectral interpolation in spherical and polar geometries

Fig. 5: The interpolation nodes of spherical Lissajous curves on the unit sphere (left) and of rhodonea curves on the disk (right).

Similar to the theory of polynomial interpolation on Lissajous-Chebyshev points, it is possible to derive interpolation schemes in spherical and polar coordinates. The generating curves in these geometries are spherical Lissajous curves on the unit sphere and rhodonea curves on the unit disk. The corresponding interpolation spaces are not spanned by polynomials, but by parity-modified double Fourier series adapted to the underlying geometries. The resulting interpolation schemes can be implemented very efficiently by fast Fourier methods.

LSphere: This software package contains a Matlab implementation for spectral interpolation on the nodes of spherical Lissajous curves.
RDisk: A software package that contains a Matlab implementation for spectral interpolation on the disk at the nodes of rhodonea curves.

Erb, W. A spectral interpolation scheme on the unit sphere based on the nodes of spherical Lissajous curves. IMA J. Numer. Anal. (accepted for publication) (2019) (Preprint)
Erb, W. Rhodonea curves as sampling trajectories for spectral interpolation on the unit disk. arXiv:1812.00437 (2018) (Preprint)

5. Shape-driven interpolation with discontinuous kernels

If an image with sharp edges is interpolated by smooth basis functions, as for instance radial basis functions (RBFs) or polynomials, ringing artifacts appear close to the edges. These effects are commonly referred to as the Gibbs phenomenon. In order to improve the resolution of discontinuities in the interpolation of scattered data sets, we studied the usage of variably scaled discontinuous kernels (VSDKs). For these discontinuous kernels, we obtained characterizations and theoretical Sobolev-type error estimates for the native spaces. Further, we could show in experiments that in the presence of edges convergence is faster and Gibbs artifacts can be avoided if the VSDK interpolation scheme is used. By estimating unknown edges of an image with machine learning tools, this adaptive meshless method can also be applied as an interpolation tool in Magnetic Particle Imaging.

Fig. 6: Interpolation (upper right) with variably scaled discontinuous kernels of a painting of P. Mondrian (upper left) using image samples along a Lissajous curve (upper middle) and additional edge information for the three color channels (below).

De Marchi, S., Erb, W., Marchetti, F., Perracchione, E. and Rossini, M. Shape-Driven Interpolation with Discontinuous Kernels: Error Analysis, Edge Extraction and Applications in MPI. SIAM J. Sci. Comput. 42:2 (2020), B472-B491 (Preprint) (Poster)

6. Iterative solvers for linear inverse problems

Linked to an interdisciplinary project in magnetic resonance elastography (MRE), I studied mathematical problems related to inverse problems in medical imaging.
We introduced and investigated accelerated Landweber methods for linear ill-posed problems obtained by an alteration of the coefficients in the three-term recurrence relation of the semi-iterative methods. The residual polynomials of the methods under consideration are linked to a family of co-dilated ultraspherical polynomials. This connection makes it possible to control the decay of the residual polynomials at the origin by means of a dilation parameter. Depending on the data, the approximation error of the semi-iterative methods can be improved by altering this dilation parameter. The asymptotic convergence order of the new semi-iterative methods turns out to be the same as for the original methods. We tested these new algorithms and developed a simple adaptive scheme in which the dilation parameter is optimized in every iteration step. In collaboration with Dr. Semenova these semi-iterative solvers were combined with adaptive discretization methods. We showed that such an adaptive discretization approach in combination with a stopping criterion such as the discrepancy principle or the balancing principle yields an order-optimal regularization scheme and makes it possible to reduce the computational costs.

Erb, W. Accelerated Landweber methods based on co-dilated orthogonal polynomials. Numer. Alg. 68, 2 (2015), 229-260
Erb, W. and Semenova, E.V. On adaptive discretization schemes for the solution of ill-posed problems with semiiterative methods. Appl. Anal. 94, 10 (2015), 2057-2076

7. Space-frequency analysis for orthogonal expansions

Based on the uncertainty principles constructed in my Ph.D. thesis, we developed a time-frequency theory for orthogonal polynomials that runs parallel to the time-frequency analysis of bandlimited functions developed by Landau, Pollak and Slepian. For this purpose, the spectral decomposition of a particular compact time-frequency operator is studied. This decomposition and its eigenvalues are closely related to the theory of orthogonal polynomials. Results from both theories, the theory of orthogonal polynomials and the Landau-Pollak-Slepian theory, can be used to prove localization and approximation properties of the corresponding eigenfunctions. Furthermore, an uncertainty principle was proven that reflects the limitation of coupled time and frequency locatability. In a second work, this theory was extended successfully to spherical polynomials on the sphere. Using weak limits related to the structure of spherical harmonics provided us with information on the proportion of basis functions needed to approximate localized functions. One big advantage of the developed theory is the fact that the localized basis functions can be represented and computed efficiently with orthogonal polynomials.

Fig. 7: Decomposition of a function on the unit sphere with localized polynomial basis functions.

Erb, W. An orthogonal polynomial analogue of the Landau-Pollak-Slepian time-frequency analysis. J. Approx. Theory 166 (2013), 56-77
Erb, W. and Mathias, S. An alternative to Slepian functions on the unit sphere - A space-frequency analysis based on localized spherical polynomials. Appl. Comput. Harmon. Anal. 38, 2 (2015), 222-241

8. Uncertainty principles on manifolds and graphs

Uncertainty principles form important theoretical cornerstones in signal analysis. They describe the inherent impossibility of a signal to be localized in space and frequency at the same time. Uncertainty relations can be formulated in a multitude of ways.
The first uncertainty principle, discovered by Heisenberg, can for instance be written in terms of a commutator relation of a position with a momentum operator. In other contexts, uncertainty relations are described in terms of inequalities, by the space-frequency support or the smoothness of functions, or in the form of boundary curves for an uncertainty region.

8.1. Uncertainty principles on Riemannian manifolds

As a PhD student, I was mainly interested in uncertainty principles on Riemannian manifolds. I showed that the product of a space and a frequency variance for functions on a Riemannian manifold is always larger than a particular constant. The key ingredient for the proof of this uncertainty principle is a commutator relation for particular Dunkl operators. In particular, these uncertainties turn out to be sharp. For 2-point homogeneous spaces, the space variance of the uncertainty product can be used to construct optimally space localized polynomials on symmetric spaces such as the unit sphere.

Fig. 8: Illustrations of optimally space localized polynomials on the unit sphere in different spaces of spherical harmonics.

8.2. Uncertainty principles on graphs

The discrete harmonic structure in spectral graph theory makes it possible to formulate uncertainty relations on graphs as well. The space and frequency localization of a graph signal in our general framework is measured in terms of suitable space and frequency filters. The resulting uncertainty relations can be characterized as the boundaries of an admissibility region and can be computed as the convex numerical range of a matrix.

Fig. 9: Illustration of optimally space-frequency localized signals on a graph for different space-frequency filters generated with (GUPPY).

Erb, W. Uncertainty principles on compact Riemannian manifolds. Appl. Comput. Harmon. Anal. 29, 2 (2010), 182-197
Erb, W. Uncertainty principles on Riemannian manifolds. Logos Verlag Berlin (2010), Dissertation TU München
Erb, W. Optimally space localized polynomials with applications in signal processing. J. Fourier Anal. Appl. 18, 1 (2012), 45-66
Erb, W. Shapes of Uncertainty in Spectral Graph Theory. arXiv:1909.10865 (2019) (Preprint) (Software GUPPY)

9. Anisotropic approximation with Gaussian mixtures

In this joint work with T. Hangelbroek and A. Ron, we showed that a representation system based on translated, dilated and rotated Gaussians yields essentially the same N-term approximation rate for cartoon-class functions as a standard curvelet or shearlet system. The common construction principle for all these representation systems is to use one (or several) basic generating functions from which the entire system is generated by applying affine operations, as for instance the mentioned rotations, translations and dilations, to the generators. The idea of this work is to use adaptive Gaussian approximations of a given generator curvelet as new generating functions for a system of anisotropic Gaussian mixtures.

Erb, W., Hangelbroek, T. and Ron, A. Anisotropic Gaussian approximation in $L_2(\mathbb{R}^2)$. arXiv:1910.10319 (2019) (Preprint)

10. Graph interpolation and classification with Graph Basis Functions (GBF's)

Graph basis functions (GBF's) are graph analogs of radial basis functions in the Euclidean space and of spherical basis functions on the unit sphere. Linear combinations of generalized GBF shifts make it possible to approximate any signal on a graph. In this way, interpolation, regression and classification problems on graphs can be solved in a simple and efficient way.
GBF's allow to include geometric information of the graph structure into these solutions and give simple characterizations of the involved approximation spaces in terms of the graph Fourier transform. Further, for kernel-based classification, GBF's offer a surprisingly simple possibility to construct feature-augmented kernels for semi-supervised learning on graphs. A more detailed introduction to graph basis functions can be found in this tutorial, or in the two publications below. Fig. 10: Supervised classification of a two-moon data set (left) using a GBF regularized least squares method with 2 (middle) and 8 (right) labeled nodes (see implementation in GBFlearn). Erb, W. Graph Signal Interpolation with Positive Definite Graph Basis Functions. arXiv:1912.02069 (2019) (Preprint) Erb, W. Semi-Supervised Learning on Graphs with Feature-Augmented Graph Basis Functions. arXiv:2003.07646 (2020) (Preprint) GBFlearn: A software package for the interpolation and classification of data on graphs with graph basis functions. CAA: Padova-Verona research group on "Constructive Approximation and Applications" for further information and references on Padua points and polynomial approximation on Lissajous curves Rete ITaliana di Approssimazione (RITA), an italian research network on approximation theory Gruppo di lavoro UMI: Teoria dell'Approssimazione e Applicazioni - T.A.A., a working group on approximation theory and its applications. GNCS-INdAM, a working group on scientific computing at INdAM. The Institute of Biomedical Imaging at the UKE in Hamburg Mathematical methods for Magnetic Particle Imaging (MathMPI) for further information on the DFG-funded interdisciplanary scientific network MathMPI International Workshop on Mathematical Imaging and Emerging Modalities (MIEM2016), 27-30/06/2016 in Osnabrück, Germany © 2016 - Wolfgang Erb
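To make the graph basis function (GBF) approach summarized above a little more concrete, the following small Python sketch (my own illustration, not code from GBFlearn or the cited papers) interpolates a graph signal from a few labeled nodes. The tiny path graph and the diffusion-type decay profile $e^{-t\lambda}$ are arbitrary choices made here for the example; the point is only that the interpolant is a linear combination of kernel translates, with the kernel defined through the graph Fourier transform, i.e. the eigendecomposition of the graph Laplacian.

import numpy as np

# a small path graph on 6 nodes: adjacency, degree and Laplacian
A = np.diag(np.ones(5), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

# graph Fourier transform = eigendecomposition of L; kernel K = U f(Lambda) U^T
lam, U = np.linalg.eigh(L)
K = U @ np.diag(np.exp(-2.0 * lam)) @ U.T        # diffusion-type kernel, positive definite

# interpolate a signal known only on a few "labeled" nodes
labeled = [0, 3, 5]
y = np.array([1.0, -1.0, 0.5])                   # observed values on the labeled nodes
c = np.linalg.solve(K[np.ix_(labeled, labeled)], y)
s = K[:, labeled] @ c                            # GBF-type interpolant on all nodes
print(np.round(s, 3), s[labeled])                # s reproduces y on the labeled nodes

Replacing the decay profile $e^{-t\lambda}$ by another positive function of the Laplacian eigenvalues gives other positive definite basis functions of the kind discussed in the preprints above.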
Search Results: 1 - 10 of 593720 matches for " M. J. Carson " Page 1 /593720 Crossing Language Barriers: Using Crossed Random Effects Modelling in Psycholinguistics Research Robyn J. Carson,Christina M. L. Beeson Tutorials in Quantitative Methods for Psychology , 2013, Abstract: The purpose of this paper is to provide a brief review of multilevel modelling (MLM), also called hierarchical linear modelling (HLM), and to present a step-by-step tutorial on how to perform a crossed random effects model (CREM) analysis. The first part provides an overview of how hierarchical data have been analyzed in the past and how they are being analyzed presently. It then focuses on how these types of data have been dealt with in psycholinguistic research. It concludes with an overview of the steps involved in CREM, a form of MLM used for psycholinguistics data. The second part includes a tutorial demonstrating how to conduct a CREM analysis in SPSS, using the following steps: 1) clarify your research question, 2) determine if CREM is necessary, 3) choose an estimation method, 4) build your model, and 5) estimate the model s effect size. A short example on how to report CREM results in a scholarly article is also included. Short-timescale Variability in the Broadband Emission of the Blazars Mkn421 and Mkn501 M. J. Carson,B. McKernan,T. Yaqoob,D. J. Fegan Physics , 1999, Abstract: We analyse ASCA x-ray data and Whipple \gamma-ray data from the blazars Mkn421 and Mkn501 for short-timescale variability. We find no evidence for statistically significant (>3\sigma) variability in these data, in either source, on timescales of less than \sim 10 minutes. Reduction of Coincident Photomultiplier Noise Relevant to Astroparticle Physics Experiment M. Robinson,P. K. Lightfoot,M. J. Carson,V. A. Kudryavtsev,N. J. C. Spooner Physics , 2005, DOI: 10.1016/j.nima.2005.01.319 Abstract: In low background and low threshold particle astrophysics experiments using observation of Cherenkov or scintillation light it is common to use pairs or arrays of photomultipliers operated in coincidence. In such circumstances, for instance in dark matter and neutrino experiments, unexpected PMT noise events have been observed, probably arising from generation of light from one PMT being detected by one or more other PMTs. We describe here experimental investigation of such coincident noise events and development of new techniques to remove them using novel pulse shape discrimination procedures. When applied to data from a low background NaI detector with facing PMTs the new procedures are found to improve noise rejection by a factor of 20 over conventional techniques, with significantly reduced loss of signal events. Physical Activity and Brain Function in Older Adults at Increased Risk for Alzheimer's Disease J. Carson Smith,Kristy A. Nielson,John L. Woodard,Michael Seidenberg,Stephen M. Rao Brain Sciences , 2013, DOI: 10.3390/brainsci3010054 Abstract: Leisure-time physical activity (PA) and exercise training are known to help maintain cognitive function in healthy older adults. However, relatively little is known about the effects of PA on cognitive function or brain function in those at increased risk for Alzheimer's disease through the presence of the apolipoproteinE epsilon4 (APOE-ε4) allele, diagnosis of mild cognitive impairment (MCI), or the presence of metabolic disease. 
Here, we examine the question of whether PA and exercise interventions may differentially impact cognitive trajectory, clinical outcomes, and brain structure and function among individuals at the greatest risk for AD. The literature suggests that the protective effects of PA on risk for future dementia appear to be larger in those at increased genetic risk for AD. Exercise training is also effective at helping to promote stable cognitive function in MCI patients, and greater cardiorespiratory fitness is associated with greater brain volume in early-stage AD patients. In APOE-ε4 allele carriers compared to non-carriers, greater levels of PA may be more effective in reducing amyloid burden and are associated with greater activation of semantic memory-related neural circuits. A greater research emphasis should be placed on randomized clinical trials for exercise, with clinical, behavioral, and neuroimaging outcomes in people at increased risk for AD. Evaluation of rK39 Rapid Diagnostic Tests for Canine Visceral Leishmaniasis: Longitudinal Study and Meta-Analysis Rupert J. Quinnell ,Connor Carson,Richard Reithinger,Lourdes M. Garcez,Orin Courtenay PLOS Neglected Tropical Diseases , 2013, DOI: 10.1371/journal.pntd.0001992 Abstract: Background There is a need for sensitive and specific rapid diagnostic tests (RDT) for canine visceral leishmaniasis. The aims of this study were to evaluate the diagnostic performance of immunochromatographic dipstick RDTs using rK39 antigen for canine visceral leishmaniasis by (i) investigating the sensitivity of RDTs to detect infection, disease and infectiousness in a longitudinal cohort study of natural infection in Brazil, and (ii) using meta-analysis to estimate the sensitivity and specificity of RDTs from published studies. Methodology We used a rK39 RDT (Kalazar Detect Canine Rapid Test; Inbios) to test sera collected from 54 sentinel dogs exposed to natural infection in an endemic area of Brazil. Dogs were sampled bimonthly for up to 27 months, and rK39 results compared to those of crude antigen ELISA, PCR, clinical status and infectiousness to sandflies. We then searched MEDLINE and Web of Knowledge (1993–2011) for original studies evaluating the performance of rK39 RDTs in dogs. Meta-analysis of sensitivity and specificity was performed using bivariate mixed effects models. Principal Findings The sensitivity of the rK39 RDT in Brazil to detect infection, disease and infectiousness was 46%, 77% and 78% respectively. Sensitivity increased with time since infection, antibody titre, parasite load, clinical score and infectiousness. Sixteen studies met the inclusion criteria for meta-analysis. The combined sensitivity of rK39 RDTs was 86.7% (95% CI: 76.9–92.8%) to detect clinical disease and 59.3% (37.9–77.6%) to detect infection. Combined specificity was 98.7% (89.5–99.9%). Both sensitivity and specificity varied considerably between studies. Conclusion The diagnostic performance of rK39 RDTs is reasonable for confirmation of infection in suspected clinical cases, but the sensitivity to detect infected dogs is too low for large-scale epidemiological studies and operational control programmes. Direct Imaging and Spectroscopy of a Young Extrasolar Kuiper Belt in the Nearest OB Association Thayne Currie,Carey M. Lisse,Marc J. Kuchner,Nikku Madhusudhan,Scott J. Kenyon,Christian Thalmann,Joseph Carson,John H. 
Debes Abstract: We describe the discovery of a bright, young Kuiper belt-like debris disk around HD 115600, a $\sim$ 1.4--1.5 M$_\mathrm{\odot}$, $\sim$ 15 Myr old member of the Sco-Cen OB Association. Our H-band coronagraphy/integral field spectroscopy from the \textit{Gemini Planet Imager} shows the ring has a (luminosity scaled) semi major axis of ($\sim$ 22 AU) $\sim$ 48 AU, similar to the current Kuiper belt. The disk appears to have neutral scattering dust, is eccentric (e $\sim$ 0.1--0.2), and could be sculpted by analogues to the outer solar system planets. Spectroscopy of the disk ansae reveal a slightly blue to gray disk color, consistent with major Kuiper belt chemical constituents, where water-ice is a very plausible dominant constituent. Besides being the first object discovered with the next generation of extreme adaptive optics systems (i.e. SCExAO, GPI, SPHERE), HD 115600's debris ring and planetary system provides a key reference point for the early evolution of the solar system, the structure and composition of the Kuiper belt, and the interaction between debris disks and planets. Second-generation PLINK: rising to the challenge of larger and richer datasets Christopher C. Chang,Carson C. Chow,Laurent C. A. M. Tellier,Shashaank Vattikuti,Shaun M. Purcell,James J. Lee Quantitative Biology , 2014, DOI: 10.1186/s13742-015-0047-8 Abstract: PLINK 1 is a widely used open-source C/C++ toolset for genome-wide association studies (GWAS) and research in population genetics. However, the steady accumulation of data from imputation and whole-genome sequencing studies has exposed a strong need for even faster and more scalable implementations of key functions. In addition, GWAS and population-genetic data now frequently contain probabilistic calls, phase information, and/or multiallelic variants, none of which can be represented by PLINK 1's primary data format. To address these issues, we are developing a second-generation codebase for PLINK. The first major release from this codebase, PLINK 1.9, introduces extensive use of bit-level parallelism, O(sqrt(n))-time/constant-space Hardy-Weinberg equilibrium and Fisher's exact tests, and many other algorithmic improvements. In combination, these changes accelerate most operations by 1-4 orders of magnitude, and allow the program to handle datasets too large to fit in RAM. This will be followed by PLINK 2.0, which will introduce (a) a new data format capable of efficiently representing probabilities, phase, and multiallelic variants, and (b) extensions of many functions to account for the new types of information. The second-generation versions of PLINK will offer dramatic improvements in performance and compatibility. For the first time, users without access to high-end computing resources can perform several essential analyses of the feature-rich and very large genetic datasets coming into use. 
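The PLINK 1.9 summary above mentions a compact genotype representation and bit-level parallelism. As a toy illustration only (this is not PLINK's actual .bed code table or implementation), packing 2-bit genotype codes four to a byte and unpacking them with shifts and masks looks like this:

import numpy as np

# toy 2-bit packing of genotype codes (0/1/2 = alt-allele count); sample count assumed divisible by 4
geno = np.array([0, 1, 2, 0, 2, 1, 0, 0], dtype=np.uint8)
packed = np.zeros(len(geno) // 4, dtype=np.uint8)
for i, g in enumerate(geno):
    packed[i // 4] |= g << (2 * (i % 4))         # four samples per byte

# unpack all samples at once with shifts and masks (the bit-parallel flavour)
codes = (packed[:, None] >> (2 * np.arange(4))) & 0b11
print(int(codes.sum()) == int(geno.sum()))       # allele totals agree with the unpacked data

Real implementations work on machine words and population-count instructions rather than byte loops, but the packing idea is the same.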
Lipoprotein particle distribution and skeletal muscle lipoprotein lipase activity after acute exercise Michael Harrison, Niall M Moyna, Theodore W Zderic, Donal J O'Gorman, Noel McCaffrey, Brian P Carson, Marc T Hamilton Lipids in Health and Disease , 2012, DOI: 10.1186/1476-511x-11-64 Abstract: Using a randomized cross-over design, very low density lipoprotein (VLDL) responses were evaluated in eight men on the morning after i) an inactive control trial (CON), ii) exercising vigorously on the prior evening for 100?min followed by fasting overnight to maintain an energy and carbohydrate deficit (EX-DEF), and iii) after the same exercise session followed by carbohydrate intake to restore muscle glycogen and carbohydrate balance (EX-BAL).The intermediate, low and high density lipoprotein particle concentrations did not differ between trials. Fasting triglyceride (TG) determined biochemically, and mean VLDL size were lower in EX-DEF but not in EX-BAL compared to CON, primarily due to a reduction in VLDL-TG in the 70–120?nm (large) particle range. In contrast, VLDL-TG was lower in both EX-DEF and EX-BAL compared to CON in the 43–55?nm (medium) particle range. VLDL-TG in smaller particles (29–43?nm) was unaffected by exercise. Because the majority of VLDL particles were in this smallest size range and resistant to change, total VLDL particle concentration was not different between any of these conditions. Skeletal muscle lipoprotein lipase (LPL) activity was also not different across these 3 trials. However, in CON only, the inter-individual differences in LPL activity were inversely correlated with fasting TG, VLDL-TG, total, large and small VLDL particle concentration and VLDL size, indicating a regulatory role for LPL in the non-exercised state.These findings reveal a high level of differential regulation between different sized triglyceride-rich lipoproteins following exercise and feeding, in the absence of changes in LPL activity.Single sessions of exercise transiently reduce serum triglycerides (TG). This exercise effect is not always apparent immediately post-exercise, it can occur after a delay of hours and is generally maximal on the day following intense and prolonged exercise [1,2]. Reductions in serum triglycerides of 18 – 22% are typically observed Heterogeneities in Leishmania infantum Infection: Using Skin Parasite Burdens to Identify Highly Infectious Dogs Orin Courtenay equal contributor ,Connor Carson equal contributor,Leo Calvo-Bado,Lourdes M. Garcez,Rupert J. Quinnell Abstract: Background The relationships between heterogeneities in host infection and infectiousness (transmission to arthropod vectors) can provide important insights for disease management. Here, we quantify heterogeneities in Leishmania infantum parasite numbers in reservoir and non-reservoir host populations, and relate this to their infectiousness during natural infection. Tissue parasite number was evaluated as a potential surrogate marker of host transmission potential. Methods Parasite numbers were measured by qPCR in bone marrow and ear skin biopsies of 82 dogs and 34 crab-eating foxes collected during a longitudinal study in Amazon Brazil, for which previous data was available on infectiousness (by xenodiagnosis) and severity of infection. Results Parasite numbers were highly aggregated both between samples and between individuals. In dogs, total parasite abundance and relative numbers in ear skin compared to bone marrow increased with the duration and severity of infection. 
Infectiousness to the sandfly vector was associated with high parasite numbers; parasite number in skin was the best predictor of being infectious. Crab-eating foxes, which typically present asymptomatic infection and are non-infectious, had parasite numbers comparable to those of non-infectious dogs. Conclusions Skin parasite number provides an indirect marker of infectiousness, and could allow targeted control particularly of highly infectious dogs. Sex Differences in Wild Chimpanzee Behavior Emerge during Infancy Elizabeth V. Lonsdorf, A. Catherine Markham, Matthew R. Heintz, Karen E. Anderson, David J. Ciuk, Jane Goodall, Carson M. Murray Abstract: The role of biological and social influences on sex differences in human child development is a persistent topic of discussion and debate. Given their many similarities to humans, chimpanzees are an important study species for understanding the biological and evolutionary roots of sex differences in human development. In this study, we present the most detailed analyses of wild chimpanzee infant development to date, encompassing data from 40 infants from the long-term study of chimpanzees at Gombe National Park, Tanzania. Our goal was to characterize age-related changes, from birth to five years of age, in the percent of observation time spent performing behaviors that represent important benchmarks in nutritional, motor, and social development, and to determine whether and in which behaviors sex differences occur. Sex differences were found for indicators of social behavior, motor development and spatial independence with males being more physically precocious and peaking in play earlier than females. These results demonstrate early sex differentiation that may reflect adult reproductive strategies. Our findings also resemble those found in humans, which suggests that biologically-based sex differences may have been present in the common ancestor and operated independently from the influences of modern sex-biased parental behavior and gender socialization.
Nosé Hoover Thermostat (bio-physics-wiki)
Molecular Dynamics (MD) simulations can be used to calculate the positions and momenta of a many-particle system. The initial conditions for the particles in such simulations are often taken from a Maxwell distribution. However, since such systems have constant energy, particle number and volume, they represent microcanonical ensembles. Under experimental conditions it is easier to control the temperature of a system. In the so-called Nosé Hoover Thermostat the Hamiltonian does not only contain terms arising from the particles, but also an additional term. The additional term simulates the energy transfer, thus keeping the temperature constant. This is why such equations are called thermostatted, or in this case the Nosé Hoover Thermostat. It can be shown that such a system represents a canonical ensemble$^{1}$ ($T,N,V$ const.). The Hamiltonian of the system reads \begin{align} H=\sum_i \frac{\mathbf{p}_i^2}{2ms^2}+\frac{1}{2} \sum_{ij,i \not = j} U(\mathbf{r}_i-\mathbf{r}_j) + \frac{p_s^2}{2Q}+gk_BT \ln(s) \end{align} From the Hamiltonian we can derive the ODE system \begin{align} \frac{d\mathbf{r}_i}{dt}&=\frac{\partial H}{\partial \mathbf{p}_i}=\frac{\mathbf{p}_i}{ms^2}\\ \frac{ds}{dt}&=\frac{\partial H}{\partial p_s}=\frac{p_s}{Q}\\ \frac{d \mathbf{p}_i}{dt}&=-\frac{\partial H}{\partial \mathbf{r}_i}=- \nabla _i U(R)\\ \frac{dp_s}{dt}&=-\frac{\partial H}{\partial s}=\left( \frac{\sum_i \mathbf{p}_i^2}{ms^2}-gk_BT \right)/s \end{align} According to Hoover$^{2}$ the equations for a one-dimensional harmonic oscillator can be written as \begin{align} \dot{q}&=p\\ \dot{p}&=-q-\xi p\\ \dot{\xi}&=(p^2-1)/Q \end{align} qpzx model: $\dot{\mathbf{x}}=\mathbf{G}(\mathbf{T})$ \begin{align} \dot{q}&=p\\ \dot{p}&=-q-z p\\ \dot{z}&=p^2 - T(q) - xz\\ \dot{T}&=1+ \varepsilon \arctan(q) \end{align} Fixed points: setting $\mathbf{G}(\mathbf{T})$ to zero, $\dot{q}=0$ forces $p=0$, and $\dot{p}=0$ then forces $q=0$; but then $\dot{T}=1+\varepsilon \arctan(0)=1\neq 0$, so the system has no genuine fixed point.
Linearising the system$^3$ about a fixed point $\mathbf{T}_0$ gives \begin{align} \mathbf{G}(\mathbf{T}) \approx \underbrace{\mathbf{G}(\mathbf{T}_0)}_{=0} + \underbrace{\frac{\partial G_i(\mathbf{T}_0)}{\partial T_j}}_{\mathbf{J}} \cdot \delta T_j + o(\| \delta T_j \|) \end{align} The Jacobian of the linearised system reads \begin{align} \mathbf{J}=\begin{pmatrix} 0 & 1 & 0 &0 \\ -1 & -z & -p & 0 \\ 0 & 2p & -x & -1 \\ \frac{\varepsilon}{q^2+1} &0 &0 &0 \end{pmatrix} \end{align} At the fixed point $(q, p, z, T)=(xxxx)$ we get the linear system \begin{align} \delta \dot{T}_j=\mathbf{J} \cdot \delta T_j =\begin{pmatrix} \dot q\\ \dot p \\ \dot z\\ \dot T\\ \end{pmatrix}=\begin{pmatrix} 0 & 1 & 0 &0 \\ -1 & -z & -p & 0 \\ 0 & 2p & -x & -1 \\ \frac{\varepsilon}{q^2+1} &0 &0 &0 \end{pmatrix}\begin{pmatrix} q\\ p \\ z\\ T\\ \end{pmatrix} \end{align} For initial conditions $q(0)=z(0)=x(0)=0$ and $p(0)=1$ we integrate the system numerically.
Further Reading:
[1] J.M. Thijssen - Computational Physics
[2] William G. Hoover - Canonical dynamics: Equilibrium phase-space distributions
[3] Denis J. Evans and Gary P. Morriss - Statistical Mechanics of NonEquilibrium Liquids (Page 69)
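The page stops at "we integrate the system numerically" without showing that step. As a minimal sketch (my own addition, not taken from this wiki), the three-variable Hoover oscillator quoted above from reference [2] can be integrated with SciPy; the thermostat mass $Q=1$, the time span and the step bound are arbitrary choices here, and the extended qpzx model would need its own right-hand side with the extra $T$ equation.

import numpy as np
from scipy.integrate import solve_ivp

def hoover(t, y, Q=1.0):
    # Hoover's thermostatted 1-D harmonic oscillator: dq/dt, dp/dt, dxi/dt
    q, p, xi = y
    return [p, -q - xi * p, (p**2 - 1.0) / Q]

# initial conditions analogous to the ones above: q(0) = xi(0) = 0, p(0) = 1
sol = solve_ivp(hoover, (0.0, 200.0), [0.0, 1.0, 0.0], max_step=0.01)
q, p, xi = sol.y          # trajectories, e.g. for plotting the (q, p) projection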
What IS Color Charge? This question has been asked twice already, with very detailed answers. After reading those answers, I am left with one more question: what is color charge? It has nothing to do with colored light, it's a property possessed by quarks and gluons in analogy to electric charge, relates to mediation of strong force through gluon exchange, has to be confined, is necessary for quarks to satisfy Heisenberg principle, and one of the answers provided a great colored Feynman diagram of its interaction, clearly detailing how gluon-exchane leads to the inter-nucleon force. But what is it? To see where I'm coming from, in Newton's equation for gravity, the "charge" is mass, and is always positive, hence the interaction between masses is always attractive. In electric fields the "charge" is electric charge, and is positive or negative. (++)=+, (--)=+, so like charges repel (+-)=(-+)=-, so opposite charges attract. In dipole fields, the "charge" is the dipole moment, which is a vector. It interacts with other dipole moments through dot and cross products, resulting in attraction, repulsion, and torque. In General Relativity, the "charge" is the stress-energy tensor that induces a curved metric field, in turn felt by objects with stress-energy through a more complicated process. So what is color charge? The closest that I've gotten is describing it through quaternions ($\mbox{red}\to i$, $\mbox{blue}\to j$, $\mbox{green}\to-k$, $\mbox{white}\to1$ , "anti"s negative), but that leads to weird results that don't entirely make sense (to me), being non-abelian. Since $SU(3)$ is implicated, what part of $SU(3)$ corresponds to, for instance, "red" or "antigreen"? (Like "positive charge" is $+e$, "negative charge" is $-e$). What is the mathematical interaction of red and antired (like positive and negative is $(+e)(-e)=-e^2$), and what happens when you apply that interaction to red and antiblue? (Like how electric charges interact with magnetic dipoles through their relative velocities). If I had to point to a thing on paper and say "this here represents the red color charge", what would that thing be? Does such a thing even exist? In short, what is color charge? I've had abstract algebra and group theory and some intro courses on field theory and QED, but I don't know a lot of jargon, or really a lot of algebra. Sorry the question's so long. Thanks for the future clarification! forces quantum-chromodynamics quarks color-charge rbostonrboston $\begingroup$ Scratch my earlier comment, perhaps I see where you are going with this, but I don't think that you have expressed it very well. Are you asking about the mathematical structure of the strong force? $\endgroup$ – dmckee♦ Jan 8 '13 at 2:59 $\begingroup$ It may not pay to press to far for an analogy similar to charge in EnM and mass in gravity since in QCD (or any nonabelian gauge theory really) gluon number is not conserved. This means that nearby gluons can change the representation of a state. I don't claim to fully understand the ramifications of this, but see users.ictp.it/~pub_off/lectures/lns007/Strassler/Strassler.pdf. $\endgroup$ – DJBunk Jan 8 '13 at 17:54 $\begingroup$ Also note the gluon propagator just differs from the photon propagator by an identity matrix in color space. SO at high energies, or for a large number of flavors so that the theory doesn't become confining, where the tree level Feynman diagram dominates (a single gluon exchange)-in this limit the theory reduces to just N copies of Coulomb's law, one for each color. 
$\endgroup$ – DJBunk Jan 8 '13 at 18:03 $\begingroup$ Photon number isn't conserved either in QED, though. So the fact that gluon number isn't conserved doesn't represent a radical departure from QED. Unless you meant how gluon number can change even without interactions with the charges? $\endgroup$ – David Z♦ Jan 8 '13 at 18:44 $\begingroup$ @DavidZaslavsky Photons don't carry charge though. Gluons carry color charge themselves (in the adjoint rep) so they can change the representations of the state. $\endgroup$ – DJBunk Jan 8 '13 at 21:04 I asked this question a few weeks ago and was dissatisfied with most of the answers I found on the internet, so I eventually managed to procure a copy of Griffiths' excellent text on elementary particles (really, all of his texts are excellent) which includes a section exactly answering my question with what I was looking for. I decided then to answer it myself, in case some other curious person reads this and wants to know. This is just a very cursory explanation, intended to answer my own question to my own satisfaction. Griffiths starts by introducing what are basically three copies of EM charge called color charge, and proposes these to be three-element column vectors: $$c_{red} = \left(\begin{array}{c}1\\0\\0\end{array}\right), c_{blue} = \left(\begin{array}{c}0\\1\\0\end{array}\right), c_{green}=\left(\begin{array}{c}0\\0\\1\end{array}\right).$$ These could in principle could take any vector value whatsoever, except for effects of symmetry in the theory and color confinement. To figure out how these vector charges interact, we turn to the Gell-Mann $\lambda$-matrices, which are to $SU(3)$ what the Pauli matrices are to $SU(2)$. These are listed by Griffiths, but writing matrices would be a pain; you can look them up on Wikipedia. Griffiths then takes Feynman scattering amplitudes in lowest order for the chromodynamic interaction, and from these develops potentials for various interactions. For quark-anti-quark, he has $$V_{q\bar{q}}(r) = -f\frac{\alpha_s\hbar c}{r}.$$ This is a long-range force in principle, but it is made short-range due to confinement. It takes the same form as the Coulomb potential. The important thing here is the $f$, which Griffiths calls the "color factor". This color factor is like $q_1q_2$ in electrostatics or $\mathbf{p}_1\circ\mathbf{p}_2$ for dipole-dipole forces, and will depend on the color state of the interacting particles in question. It is calculated by $$f = \frac{1}{4} (c_3^\dagger\lambda^\alpha c_1)(c_2^\dagger\lambda^\alpha c_4),$$ where summation is implied over $\alpha$. Here $c_1$ is charge of incoming quark, $c_3$ charge of outgoing quark, and $c_2,c_4$ charges of incoming and outgoing antiquark. As an example, Griffiths calculates the interaction between red and anti-blue. $$c_1=c_3=\left(\begin{array}{c}1\\0\\0\end{array}\right), c_2=c_4=\left(\begin{array}{c}0\\1\\0\end{array}\right).$$ Hence $$f = \frac{1}{4}\left[(1,0,0)\lambda^\alpha\left(\begin{array}{c}1\\0\\0\end{array}\right) \right]\left[ (0,1,0)\lambda^\alpha \left(\begin{array}{c}0\\1\\0\end{array}\right)\right] = \frac{1}{4}\lambda^\alpha_{11}\lambda^\alpha_{22}.$$ That is, it involves a sum over products of the 1st diagonal element and 2nd diagonal element o each of the Gell-Mann matrices. By looking at their form, the only matrices with both these elements non-zero are the ones labeled by Griffiths $\lambda^3$ and $\lambda^8$. 
These lead to $$f = \frac{1}{4}[(1)(-1)+(1/\sqrt{3})(1/\sqrt{3})] = -\frac{1}{6},$$ $$V_{r\bar{b}} = \frac{1}{6}\frac{\alpha_s \hbar c}{r},$$ which is evidently a repulsive force. Griffiths also calculates other interactions. For instance, quark-antiquark singlet interactions, $(1/\sqrt{3})(r\bar{r}+b\bar{b}+g\bar{g})$, which have color factor $f=\frac{4}{3}$ and thus are attractive, explaining confinement of quarks to color-singlet states and the lack of observation for colored states. He also calculates quark-quark interactions, which have a slightly different potential, $$V_{qq}=f\frac{\alpha_s \hbar c}{r}.$$ As an example, he calculates red-red interaction; it has factor 1/3, hence is repulsive. There is a lot of this in this very wonderful book, but that's enough to satisfy my curiosity of what color charge is and how it works. Hopefully it is helpful to anyone else. Of course, this was highly simplified for the sake of my own simplified brain and no doubt infuriating to pedants in the field, but if you would like a better explanation and understanding, this was all taken from Chapter 8.4 of Introduction to Elementary Particles by David Griffiths, published by Wiley-VCH, Second Revised Edition -- just to cite sources. Try this on for size: color charge is a name for a set of three related charges, which are arbitrarily labeled red, green, and blue. Each of the individual charges works kind of like electromagnetic charge, in that you have positive and negative values: red and antired, green and antigreen, blue and antiblue. So it's kind of like a three-dimensional space of charge, with three independent charges on the axes. Thus the color charge of a particle would be represented by a vector $(c_1, c_2, c_3)$. The one major difference that makes this not a regular 3D charge space is that an equal combination of all three charges is equivalent to no color charge at all. So you can imagine taking that 3D space of color charge and projecting it on to the 2D plane orthogonal to the neutral-color axis. That is, if you have a particle whose color charge is $(c_1, c_2, c_3)$, that's equivalent to the projection of that vector on to the plane perpendicular to $(1,1,1)$. Now on to the details. I don't know offhand what the formula equivalent to e.g. Coulomb's law for the strong force would be; there is some very complicated math involved. But qualitatively, I can tell you that color charges always try to stay in neutral groups. (Singlets, in the language of group theory) For example, if you have red, green, and blue particles together, they will be very difficult to break apart. Similarly, red and antired will be difficult to break apart, so you could say they attract each other. If you put a red and an antiblue particle together, they don't form a color neutral pair, so I think there will be a bit of a repulsion, unless the two of them come together with a third particle that has the right color charge to make them group color-neutral (which would actually just be a composite of an antired and a blue). For certain, it's not as simple as just multiplying two numbers. David Z♦David Z $\begingroup$ David, I think you should clarify that "color" is a quantized measure, the way spin is, and elementary particle charge is. You do refer to charge but do not stress the quantized nature of color at the particle level. 
$\endgroup$ – anna v Jan 8 '13 at 6:39 $\begingroup$ Your model then would make, for instance, red=$(1,0)$, blue=$(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2})$, green=$(-\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2})$, white=$(0,0,0)$, and the anti's negative. This seems to work. At least it produces the right results for the known color interactions. It kind of bothers me that it doesn't produce directly a factor like $q_1q_2$ or $\vec{p}_1\circ\vec{p}_2$ that gives the interaction strength, but at least it's abelian. It also bothers me that it doesn't form a group and isn't closed. But it does work. Thanks for answering. $\endgroup$ – rboston Jan 8 '13 at 18:33 $\begingroup$ @rboston Remember that this is just an analogy, and you can't expect to get specific quantitative expressions from analogies. Plus, there's no reason to expect the charges to form a group. It's the transformations they are subject to that form the group $SU(3)$. $\endgroup$ – David Z♦ Jan 8 '13 at 18:42 $\begingroup$ @annav I don't know about that. As far as I know, quantization of color charge is something that is observed but not theoretically required for the theory to make sense. It's like EM charge, but unlike spin, in that sense. $\endgroup$ – David Z♦ Jan 8 '13 at 18:45 $\begingroup$ It is the unfortunate choice of the concept "color" ,. In contrast with strangeness and charm it gives the impression that one might have a half blue half red quark, for example.whereas we know the charge is either +/-1/3 or+/-2/3 or +/-1 for elementary particles. anyway it is just my opinion. $\endgroup$ – anna v Jan 8 '13 at 18:54 protected by Qmechanic♦ Jul 27 '13 at 17:13 Not the answer you're looking for? Browse other questions tagged forces quantum-chromodynamics quarks color-charge or ask your own question. What exactly is the color charge in QCD? How much color charge does a quark carry? What is the definition of colour (the quantum state)? Group theoretical reason that Gluons carry color-charge and anti-colorcharge Mathematically, what is color charge? Could the fractional model of Quarks electric charge turn out to be false? What is the role of the color-anticolor gluons? Color confinement and integer electric charge? "Color charge" of the adjoint fermion? Is color charge quantized? How to think about color charged objects How do we know that gluons have no electric charge?
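As a supplement to the thread above (my own check, not part of any answer): the color factor $f=\frac{1}{4}\sum_\alpha (c_3^\dagger\lambda^\alpha c_1)(c_2^\dagger\lambda^\alpha c_4)$ quoted from Griffiths can be evaluated directly by typing the eight Gell-Mann matrices into NumPy. The script below reproduces the values discussed in the answers, $f=-1/6$ for red/anti-blue and $f=4/3$ for the color singlet.

import numpy as np

i = 1j
lam = [np.array(m, dtype=complex) for m in (
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -i, 0], [i, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -i], [0, 0, 0], [i, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -i], [0, i, 0]],
)] + [np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)]

r, b, g = np.eye(3, dtype=complex)   # color unit vectors red, blue, green

def f(c3, c1, c2, c4):
    # Griffiths' color factor f = (1/4) sum_a (c3^dagger lam^a c1)(c2^dagger lam^a c4)
    return 0.25 * sum((c3.conj() @ L @ c1) * (c2.conj() @ L @ c4) for L in lam)

print(f(r, r, b, b).real)                                                        # red/anti-blue: -1/6
print(sum(f(ck, cj, cj, ck) for cj in (r, b, g) for ck in (r, b, g)).real / 3)   # color singlet: 4/3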
ALEX DEHNERT, MIT alum working at Akamai Major: VI-3 College/Employer: Akamai I'm an MIT alum (B.S. in computer science and math in 2012, MEng in 2013) and former ESP adminstrator. I'm now working at Akamai, a Cambridge-based Content Delivery Network. CDNs: The Hidden Companies Handling 40% of the Internet in Splash 2015 (Nov. 21 - 22, 2015) What companies handle the most web traffic? You'd likely guess Google, Netflix, Facebook... household names. However, about 15-30% of the world's web traffic is handled by Akamai, a company that the vast majority of Internet users have probably never heard of. Akamai is what's known as a content delivery network -- a company that offloads bulk traffic (like images or videos) from servers run by household names like Facebook, providing higher performance and availability and letting those companies concentrate on their product. We'll be talking about how CDNs like Akamai work: What makes a request fast or slow? What's the relevance of the cache hit rate? How do CDNs decide which servers should handle which requests? What can a CDN do to accelerate dynamic content? Regular expressions and finite automata in Spark! 2012 (Mar. 10, 2012) The field of computability theory covers what sort of functions computers can compute. Two of the simplest --- and most restrictive --- formal definitions of "computation" are regular expressions and finite automata. We'll discuss what sorts of functions they can compute, and which is more powerful. Splash Contra Dance in Splash! 2011 (Nov. 19 - 20, 2011) Ever see how they dance in Jane Austin movies? Replace "stately" with "wild," and the baroque violin with a ragtag string band, and double the tempo and you have contra. Contra is easy to learn and fun to do. Come give it a try with us! Beginners and experienced dancers welcome. It looks something like this: How Not To Run A Website in Splash! 2011 (Nov. 19 - 20, 2011) It's 2011, and it's really easy for anyone to set up a website. It's much harder to set up a website that hackers aren't going to take over within a day. We'll look at many of the popular attacks on websites (including buzzwords like "SQL injection" and "clickjacking"), why these problems came about, and exactly how hard it is to avoid these problems. Copyright: Laws and Implications in Splash! 2010 (Nov. 20 - 21, 2010) We often hear scare stories about kids who download songs from the Internet and then gets sued for millions. Downloading music and other media is considered by many to be equivalent to stealing. But what is it that the kid steals when he downloads a song, and from whom does he steal it? We would like to think that it is the music itself, but the downloaded file just contains a bunch of numbers that the computer uses to make sound. And why is the fine so high? Surely, the song doesn't cost thousands of dollars, especially when a CD with a dozen of them costs just a few bucks. In this class, we will discuss the theory behind copyright laws, and what the court cases and battles that go into them are. We will also discuss some of the interesting implications of these laws (such as the fact that 80-year-old Mickey Mouse cartoons are still under copyright). Tricks with the Memory Management Unit in Splash! 2010 (Nov. 20 - 21, 2010) The Memory Management Unit of a CPU has the seemingly boring role of converting linear addresses, used in software, to the physical addresses used by the actual memory chips. The MMU is key to much of the functionality of the operating system, though. 
The MMU makes possible swap and direct memory access to files; helps enforce the user-mode/kernel-mode distinction; and keeps processes separated from each other. We'll go over what exactly the MMU does, and how it is used to implement this sort of functionality. Scheme in Spark! 2010 (Mar. 13, 2010) Ever wanted to learn Scheme? Have you heard of functional programming, but never learned any functional languages? Come to our class, and we'll teach you the basics of Scheme, and how to learn more. The Unicode Standard 5.0: A Dramatic Reading in Spark! 2010 (Mar. 13, 2010) In the history of alphabets and languages can be traced the history of man and the development of nations, and in the history of computer encodings of characters can be traced the history of computing and telecommunications. Join us for a journey through the Unicode standard, the standard for computer representation of text, encoding over 100,000 characters from around the world. In the process, we'll also learn the historical context behind the development of various languages, scripts, and pre-Unicode electronic encodings. We'll discuss such topics as Saints Cyril and Methodius, the "loopy" phi, whether Chinese, Japanese, and Korean should use the same character set, the English spelling reforms of George Bernard Shaw and of the Mormons, the letter G, handling bidirectional text, newlines, and the SMALL HIGH DOTLESS HEAD OF KHAH. Scheme in Splash! 2009 (Nov. 21 - 22, 2009) Copyright: Laws and Implication in Spark! Spring 2009 (Mar. 07, 2009) We often hear scare stories about kids who download songs from the Internet and then get sued for millions. Downloading music and other media is considered by many to be equivalent to stealing. But what is it that the kid steals when he downloads a song, and from whom does he steal it? We would like to think that it is the music itself, but the downloaded file just contains a bunch of numbers that the computer uses to make sound. And why is the fine so high? Surely, the song doesn't cost thousands of dollars, especially when a CD with a dozen of them costs just a few bucks. In this class, we will discuss the theory behind copyright laws, and what the court cases and battles that go into them are. We will also discuss some of the interesting implications of these laws (such as the fact that 80-year-old Mickey Mouse cartoons are still under copyright). (Note: This class will be a re-run of the Splash 2008 class. If you took that one, you probably will be bored). Scheme in Spark! Spring 2009 (Mar. 07, 2009) Ever wanted to learn to Scheme? Want to take over the world? We recommend a class in the social studies category. Ever wanted to learn Scheme? Have you heard of functional programming, but never learned any functional languages? Come to our class, and we'll teach you the basics of Scheme, and how to learn more. Ever wanted to learn to Scheme? Want to take over the world? We recommend a class in the social studies category. Ever wanted to learn Scheme? Have you heard of functional programming, but never learned any functional languages? Come to our class, and we'll teach you the basics of Scheme, and how to learn more. LaTeX in Splash! 2008 (Nov. 22 - 23, 2008) Ever tried to type up math, and found that Microsoft Word really is not up to the task? Want to learn a Turing-complete markup language? Liked the look of some of the textbooks you've read, and want to know how they typeset it? 
The tool most mathematicians use for typesetting math is $$\LaTeX$$, and we'll try to teach you the basics. We'll look at * Writing a basic document without any math * Basic math * Defining simple commands * Finding out more $$ \begin{align} \langle a, b \rangle &= \sum_{i=1}^n a_i\cdot b_i\\ (a+b)^n&=\sum_{k=0}^{n}{n \choose k}a^k b^{n-k} \end{align} $$ We often hear scare stories about kids who download songs from the Internet and then gets sued for millions. Downloading music and other media is considered by many to be equivalent to stealing. But what is it that the kid steals when he downloads a song, and from whom does he steal it? We would like to think that it is the music itself, but the downloaded file just contains a bunch of numbers that the computer uses to make sound. And why is the fine so high? Surely, the song doesn't cost thousands of dollars, especially when a CD with a dozen of them costs just a few bucks. In this class, we will discuss the theory behind copyright laws, and what the court cases and battles that go into them are. We will also discuss some of the interesting implications of these laws (such as the fact that 80-year-old Mickey Mouse cartoons are still under copyright). foo in HSSP (2011) Testing in SPARK (2011) Test The prerequisites for this class were: Foo Please don't approve this either in HSSP (2010) But don't delete it either. I just spent half an hour trying to figure out why mail wasn't working, before ...
Problems in Mathematics

Find a Nonsingular Matrix $A$ satisfying $3A=A^2+AB$ Problem 699 (a) Find a $3\times 3$ nonsingular matrix $A$ satisfying $3A=A^2+AB$, where \[B=\begin{bmatrix} 2 & 0 & -1 \\ 0 &2 &-1 \\ -1 & 0 & 1 \end{bmatrix}.\] (b) Find the inverse matrix of $A$.

Determine whether the Matrix is Nonsingular from the Given Relation Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If \[A\begin{bmatrix} 1 \\ \end{bmatrix}=B\begin{bmatrix} \end{bmatrix},\] then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.

Find All Symmetric Matrices satisfying the Equation Find all $2\times 2$ symmetric matrices $A$ satisfying $A\begin{bmatrix} \end{bmatrix} = \begin{bmatrix} \end{bmatrix}$? Express your solution using free variable(s).

Compute $A^5\mathbf{u}$ Using Linear Combination Let \[A=\begin{bmatrix} -4 & -6 & -12 \\ -2 &-1 &-4 \\ 2 & 3 & 6 \end{bmatrix}, \quad \mathbf{u}=\begin{bmatrix} \end{bmatrix}, \quad \mathbf{v}=\begin{bmatrix} -2 \\ \end{bmatrix}, \quad \text{ and } \mathbf{w}=\begin{bmatrix} \end{bmatrix}.\] (a) Express the vector $\mathbf{u}$ as a linear combination of $\mathbf{v}$ and $\mathbf{w}$. (b) Compute $A^5\mathbf{v}$. (c) Compute $A^5\mathbf{w}$. (d) Compute $A^5\mathbf{u}$.

If the Augmented Matrix is Row-Equivalent to the Identity Matrix, is the System Consistent? Consider the following system of linear equations: \begin{align*} ax_1+bx_2 &=c\\ dx_1+ex_2 &=f\\ gx_1+hx_2 &=i. \end{align*} (a) Write down the augmented matrix. (b) Suppose that the augmented matrix is row equivalent to the identity matrix. Is the system consistent? Justify your answer.

Using Properties of Inverse Matrices, Simplify the Expression Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression \[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\] which matrix do you get? (a) $A$ (b) $C^{-1}A^{-1}BC^{-1}AC^2$ (c) $B$ (d) $C^2$ (e) $C^{-1}BC$ (f) $C$

Elementary Questions about a Matrix Let \[A=\begin{bmatrix} -5 & 0 & 1 & 2 \\ 3 & 8 & -3 & 7 \\ 0 & 11 & 13 & 28 \end{bmatrix}.\] (a) What is the size of the matrix $A$? (b) What is the third column of $A$? (c) Let $a_{ij}$ be the $(i,j)$-entry of $A$. Calculate $a_{23}-a_{31}$.

Are these vectors in the Nullspace of the Matrix? Let $A=\begin{bmatrix} 1 & 0 & 3 & -2 \\ 0 &3 & 1 & 1 \\ 1 & 3 & 4 & -1 \end{bmatrix}$. For each of the following vectors, determine whether the vector is in the nullspace $\calN(A)$. (a) $\begin{bmatrix} \end{bmatrix}$ (b) $\begin{bmatrix} \end{bmatrix}$ (c) $\begin{bmatrix} \end{bmatrix}$ (d) $\begin{bmatrix} \end{bmatrix}$ Then, describe the nullspace $\calN(A)$ of the matrix $A$.

Spanning Sets for $\R^2$ or its Subspaces In this problem, we use the following vectors in $\R^2$. \[\mathbf{a}=\begin{bmatrix} \end{bmatrix}, \mathbf{b}=\begin{bmatrix} \end{bmatrix}, \mathbf{c}=\begin{bmatrix} \end{bmatrix}, \mathbf{d}=\begin{bmatrix} \end{bmatrix}, \mathbf{e}=\begin{bmatrix} \end{bmatrix}, \mathbf{f}=\begin{bmatrix} \end{bmatrix}.\] For each set $S$, determine whether $\Span(S)=\R^2$. If $\Span(S)\neq \R^2$, then give an algebraic description for $\Span(S)$ and explain the geometric shape of $\Span(S)$.
(a) $S=\{\mathbf{a}, \mathbf{b}\}$ (b) $S=\{\mathbf{a}, \mathbf{c}\}$ (c) $S=\{\mathbf{c}, \mathbf{d}\}$ (d) $S=\{\mathbf{a}, \mathbf{f}\}$ (e) $S=\{\mathbf{e}, \mathbf{f}\}$ (f) $S=\{\mathbf{a}, \mathbf{b}, \mathbf{c}\}$ (g) $S=\{\mathbf{e}\}$ Is the Derivative Linear Transformation Diagonalizable? Let $\mathrm{P}_2$ denote the vector space of polynomials of degree $2$ or less, and let $T : \mathrm{P}_2 \rightarrow \mathrm{P}_2$ be the derivative linear transformation, defined by \[ T( ax^2 + bx + c ) = 2ax + b . \] Is $T$ diagonalizable? If so, find a diagonal matrix which represents $T$. If not, explain why not. Dot Product, Lengths, and Distances of Complex Vectors For this problem, use the complex vectors \[ \mathbf{w}_1 = \begin{bmatrix} 1 + i \\ 1 – i \\ 0 \end{bmatrix} , \, \mathbf{w}_2 = \begin{bmatrix} -i \\ 0 \\ 2 – i \end{bmatrix} , \, \mathbf{w}_3 = \begin{bmatrix} 2+i \\ 1 – 3i \\ 2i \end{bmatrix} . \] Suppose $\mathbf{w}_4$ is another complex vector which is orthogonal to both $\mathbf{w}_2$ and $\mathbf{w}_3$, and satisfies $\mathbf{w}_1 \cdot \mathbf{w}_4 = 2i$ and $\| \mathbf{w}_4 \| = 3$. Calculate the following expressions: (a) $ \mathbf{w}_1 \cdot \mathbf{w}_2 $. (b) $ \mathbf{w}_1 \cdot \mathbf{w}_3 $. (c) $((2+i)\mathbf{w}_1 – (1+i)\mathbf{w}_2 ) \cdot \mathbf{w}_4$. (d) $\| \mathbf{w}_1 \| , \| \mathbf{w}_2 \|$, and $\| \mathbf{w}_3 \|$. (e) $\| 3 \mathbf{w}_4 \|$. (f) What is the distance between $\mathbf{w}_2$ and $\mathbf{w}_3$? How to Obtain Information of a Vector if Information of Other Vectors are Given Let $A$ be a $3\times 3$ matrix and let \[\mathbf{v}=\begin{bmatrix} \end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix} \end{bmatrix}.\] Suppose that $A\mathbf{v}=-\mathbf{v}$ and $A\mathbf{w}=2\mathbf{w}$. Then find the vector \[A^5\begin{bmatrix} Inner Products, Lengths, and Distances of 3-Dimensional Real Vectors For this problem, use the real vectors \[ \mathbf{v}_1 = \begin{bmatrix} -1 \\ 0 \\ 2 \end{bmatrix} , \mathbf{v}_2 = \begin{bmatrix} 0 \\ 2 \\ -3 \end{bmatrix} , \mathbf{v}_3 = \begin{bmatrix} 2 \\ 2 \\ 3 \end{bmatrix} . \] Suppose that $\mathbf{v}_4$ is another vector which is orthogonal to $\mathbf{v}_1$ and $\mathbf{v}_3$, and satisfying \[ \mathbf{v}_2 \cdot \mathbf{v}_4 = -3 . \] (a) $\mathbf{v}_1 \cdot \mathbf{v}_2 $. (b) $\mathbf{v}_3 \cdot \mathbf{v}_4$. (c) $( 2 \mathbf{v}_1 + 3 \mathbf{v}_2 – \mathbf{v}_3 ) \cdot \mathbf{v}_4 $. (d) $\| \mathbf{v}_1 \| , \, \| \mathbf{v}_2 \| , \, \| \mathbf{v}_3 \| $. (e) What is the distance between $\mathbf{v}_1$ and $\mathbf{v}_2$? Given the Data of Eigenvalues, Determine if the Matrix is Invertible In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not. (a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$. (b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$. A Recursive Relationship for a Power of a Matrix Suppose that the $2 \times 2$ matrix $A$ has eigenvalues $4$ and $-2$. For each integer $n \geq 1$, there are real numbers $b_n , c_n$ which satisfy the relation \[ A^{n} = b_n A + c_n I , \] where $I$ is the identity matrix. Find $b_n$ and $c_n$ for $2 \leq n \leq 5$, and then find a recursive relationship to find $b_n, c_n$ for every $n \geq 1$. 
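For a problem like "Using Properties of Inverse Matrices, Simplify the Expression" above, a quick numerical experiment with random invertible matrices is a handy way to see which candidate the expression collapses to before proving it algebraically. The sketch below is my own addition, not part of the site; generic random matrices are invertible with probability one, so they are fine for such a sanity check.

import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
inv = np.linalg.inv

expr = inv(C) @ inv(A @ inv(B)) @ inv(C @ inv(A)) @ C @ C   # C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2
candidates = {
    "A": A, "B": B, "C": C, "C^2": C @ C,
    "C^{-1}BC": inv(C) @ B @ C,
    "C^{-1}A^{-1}BC^{-1}AC^2": inv(C) @ inv(A) @ B @ inv(C) @ A @ C @ C,
}
for name, M in candidates.items():
    print(name, np.allclose(expr, M))   # exactly one candidate should match (up to round-off)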
The Rotation Matrix is an Orthogonal Transformation Let $\mathbb{R}^2$ be the vector space of size-2 column vectors. This vector space has an inner product defined by $ \langle \mathbf{v} , \mathbf{w} \rangle = \mathbf{v}^\trans \mathbf{w}$. A linear transformation $T : \R^2 \rightarrow \R^2$ is called an orthogonal transformation if for all $\mathbf{v} , \mathbf{w} \in \R^2$, \[\langle T(\mathbf{v}) , T(\mathbf{w}) \rangle = \langle \mathbf{v} , \mathbf{w} \rangle.\] For a fixed angle $\theta \in [0, 2 \pi )$, define the matrix \[ [T] = \begin{bmatrix} \cos (\theta) & - \sin ( \theta ) \\ \sin ( \theta ) & \cos ( \theta ) \end{bmatrix} \] and the linear transformation $T : \R^2 \rightarrow \R^2$ by \[T( \mathbf{v} ) = [T] \mathbf{v}.\] Prove that $T$ is an orthogonal transformation.

The Coordinate Vector for a Polynomial with respect to the Given Basis Let $\mathrm{P}_3$ denote the set of polynomials of degree $3$ or less with real coefficients. Consider the ordered basis \[B = \left\{ 1+x , 1+x^2 , x - x^2 + 2x^3 , 1 - x - x^2 \right\}.\] Write the coordinate vector for the polynomial $f(x) = -3 + 2x^3$ in terms of the basis $B$.

Find a Basis for the Range of a Linear Transformation of Vector Spaces of Matrices Let $V$ denote the vector space of $2 \times 2$ matrices, and $W$ the vector space of $3 \times 2$ matrices. Define the linear transformation $T : V \rightarrow W$ by \[T \left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) = \begin{bmatrix} a+b & 2d \\ 2b - d & -3c \\ 2b - c & -3a \end{bmatrix}.\] Find a basis for the range of $T$.

The Matrix Exponential of a Diagonal Matrix For a square matrix $M$, its matrix exponential is defined by \[e^M = \sum_{k=0}^\infty \frac{M^k}{k!}.\] Suppose that $M$ is a diagonal matrix \[ M = \begin{bmatrix} m_{1 1} & 0 & 0 & \cdots & 0 \\ 0 & m_{2 2} & 0 & \cdots & 0 \\ 0 & 0 & m_{3 3} & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & m_{n n} \end{bmatrix}.\] Find the matrix exponential $e^M$.

Find the Nullspace and Range of the Linear Transformation $T(f)(x) = f(x)-f(0)$ Let $C([-1, 1])$ denote the vector space of real-valued functions on the interval $[-1, 1]$. Define the vector subspace \[W = \{ f \in C([-1, 1]) \mid f(0) = 0 \}.\] Define the map $T : C([-1, 1]) \rightarrow W$ by $T(f)(x) = f(x) - f(0)$. Determine if $T$ is a linear map. If it is, determine its nullspace and range.
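For the rotation-matrix problem above, the whole proof reduces to the identity $[T]^{\trans}[T]=I$, since then $\langle T(\mathbf{v}), T(\mathbf{w})\rangle=\mathbf{v}^{\trans}[T]^{\trans}[T]\mathbf{w}=\langle \mathbf{v},\mathbf{w}\rangle$. A short SymPy check (my own sketch, not the site's posted solution) confirms the trigonometric simplification before one writes it up by hand:

import sympy as sp

theta = sp.symbols('theta', real=True)
R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])
print(sp.simplify(R.T * R))   # the 2x2 identity, via cos(theta)^2 + sin(theta)^2 = 1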
Differentiation is the ability to see or comprehend differences. It is the ability to distinguish between things that differ, and to generalize about such differences, making separations, discriminations, or distinctions. In psychology, differentiation is a human ability to organize and differentiate thoughts, to distinguish or classify them. It is one of the fundamental building blocks of human intelligence. 190 Practice Problems 21st Century Astronomy When viewed by radio telescopes, Jupiter is the second-brightest object in the sky. What is the source of its radiation? Worlds of Gas and Liquid-The Giant Planets Calculus for Scientists and Engineers: Early Transcendental Calculating limits exactly Use the definition of the derivative to evaluate the following limits. $$\lim _{h \rightarrow 0} \frac{(3+h)^{3+h}-27}{h}$$ Derivatives of Logarithmic and Exponential Functions Graph the functions $f(x)=x^{3}, g(x)=3^{x}$ and $h(x)=x^{x}$ and find their common intersection point (exactly). Product and Quotient Rules Calculus: Early Transcendental Functions Find an antiderivative by reversing the chain rule, product rule or quotient rule. $$\int\left(x \sin 2 x+x^{2} \cos 2 x\right) d x$$ Antiderivatives Use the product rule to show that if $g(x)=[f(x)]^{2}$ and $f(x)$ is differentiable, then $g^{\prime}(x)=2 f(x) f^{\prime}(x) .$ This is an example of the chain rule, to be discussed in section 2.5 Find the derivative of each function. $$f(x)=\frac{x^{2}+2 x+5}{x^{2}-5 x+1}$$ Derivatives of Trigonometric Functions Identifying derivatives from limits The following limits equal the derivative of a function $f$ at a point $a$. a. Find one possible $f$ and $a$. b. Evaluate the limit. $$\lim _{h \rightarrow 0} \frac{\sin \left(\frac{\pi}{6}+h\right)-\frac{1}{2}}{h}$$ Continuity of a piecewise function Let $$f(x)=\left\{\begin{array}{cc}\frac{3 \sin x}{x} & \text { if } x \neq 0 \\ a & \text { if } x=0\end{array}\right.$$ For what values of $a$ is $f$ continuous? Proof of $\lim _{x \rightarrow 0} \frac{\cos x-1}{x}=0$ Use the trigonometric identity $\cos ^{2} x+\sin ^{2} x=1$ to prove that $\lim _{x \rightarrow 0} \frac{\cos x-1}{x}=0 .$ (Hint: Begin by multiplying the numerator and denominator by $\cos x+1 .$) The Chain Rule Let $f(x, y)=0$ define $y$ as a twice differentiable function of $x$. a. Show that $y^{\prime \prime}(x)=-\frac{f_{x x} f_{y}^{2}-2 f_{x} f_{y} f_{x y}+f_{y y} f_{x}^{2}}{f_{y}^{3}}$. b. Verify part (a) using the function $f(x, y)=x y-1$. Functions of Several Variables Consider the following surfaces specified in the form $z=f(x, y)$ and the curve $C$ in the $x y$-plane given parametrically in the form $x=g(t), y=h(t)$. a. In each case, find $z^{\prime}(t)$. b. Imagine that you are walking on the surface directly above the curve $C$ in the direction of increasing $t$. Find the values of $t$ for which you are walking uphill (that is, $z$ is increasing).
$$z=2 x^{2}+y^{2}+1, C: x=1+\cos t, y=\sin t ; 0 \leq t \leq 2 \pi$$ Assume that $F(x, y, z(x, y))=0$ implicitly defines $z$ as a differentiable function of $x$ and $y .$ Extend Theorem 13.9 to show that $$\frac{\partial z}{\partial x}=-\frac{F_{x}}{F_{z}} \text { and } \frac{\partial z}{\partial y}=-\frac{F_{y}}{F_{z}}$$ Implicit Differentiation A challenging derivative Find $\frac{d y}{d x},$ where $\sqrt{3 x^{7}+y^{2}}=\sin ^{2} y+100 x y$ Orthogonal trajectories Two curves are orthogonal to each other if their tangent lines are perpendicular at each point of intersection (recall that two lines are perpendicular to each other if their slopes are negative reciprocals. . A family of curves forms orthogonal trajectories with another family of curves if each curve in one family is orthogonal to each curve in the other family. For example, the parabolas $y=c x^{2}$ form orthogonal trajectories with the family of ellipses $x^{2}+2 y^{2}=k,$ where $c$ and $k$ are constants (see figure). Use implicit differentiation if needed to find dy/dx for each equation of the following pairs. Use the derivatives to explain why the families of curves form orthogonal trajectories. CANT COPY THE GRAPH. $y=m x ; x^{2}+y^{2}=a^{2},$ where $m$ and $a$ are constants Normal lines $A$ normal line on a curve passes through a point P on the curve perpendicular to the line tangent to the curve at $P(\text {see figure}) .$ Use the following equations and graphs to determine an equation of the normal line at the given point. Illustrate your work by graphing the curve with the normal line. CANT COPY THE GRAPH Exercise 28 Derivatives of Logarithmic Functions Logarithmic differentiation Use logarithmic differentiation to evaluate $f^{\prime}(x)$. $$f(x)=x^{10 x}$$ Find the derivative of the following functions. $$y=\ln \left|x^{2}-1\right|$$ $$y=\ln 2 x^{8}$$ Exponential Growth and Decay Introductory and Intermediate Algebra for College Students Use this information to determine whether each statement is true or false. If the statement is false, make the necessary change(s) to produce a true statement. In $2006,$ Canada's population exceeded Uganda's by 4.9 million. Exponential Growth and Decay; Modeling Data Determine whether each statement "makes sense" or "does not make sense" and explain your reasoning. After 100 years, a population whose growth rate is $3 \%$ will have three times as many people as a population whose growth rate is $1 \%$ We used two data points and an exponential function to model the population of the United States from 1970 through 2009. The data are shown again in the table. Use all five data points. $$\begin{array}{c|c} \hline x, \text { Number of Years after } 1969 & y, \text { U.S. Population (millions) } \\ \hline 1(1970) & 203.3 \\ \hline 11(1980) & 226.5 \\ \end{array}$$ Use the values of $r$ in Exercises $45-48$ to select the two models of best fit. Use each of these models to predict by which year the U.S. population will reach 352 million. How do these answers compare to the year we found in Example $1,$ namely $2020 ?$ If you obtained different years, how do you account for this difference? Related Rates Draw a reaction coordinate diagram for the following reaction in which $C$ is the most stable and $B$ the least stable of the three species and the transition state going from $A$ to $B$ is more stable than the transition state going from B to C: $$A \stackrel{k_{1}}{E_{-1}} \quad B \quad \frac{k_{2}}{\overline{k_{-2}}} C$$ a. How many intermediates are there? b. 
How many transition states are there? c. Which step has the greater rate constant in the forward direction? d. Which step has the greater rate constant in the reverse direction? e. Of the four steps, which has the greatest rate constant? f. Which is the rate-determining step in the forward direction? g. Which is the rate-determining step in the reverse direction? Alkenes: Structure, Nomenclature, and an Introduction to Reactivity • Thermodynamics and Kinetics University Physics How does the self-inductance per unit length near the center of a solenoid (away from the ends) compare with its value near the end of the solenoid? The Sun emits electromagnetic waves (including light) equally in all directions. The intensity of the waves at Earth's upper atmosphere is $1.4 \mathrm{kW} / \mathrm{m}^{2} .$ At what rate does the Sun emit electromagnetic waves? (In other words, what is the power output?) Linear Approximation and Differentials Differentials Consider the following functions and express the relationship between a small change in $x$ and the corresponding change in $y$ in the form $d y=f^{\prime}(x) d x$. $$f(x)=3 x^{3}-4 x$$ Applications of the Derivative Approximate the change in the magnitude of the electrostatic force between two charges when the distance between them increases from $r=20 \mathrm{m}$ to $r=21 \mathrm{m}\left(F(r)=0.01 / r^{2}\right)$. Estimations with linear approximation Use linear approximations to estimate the following quantities. Choose a value of a to produce a small error. $$1 / \sqrt[3]{510}$$ Mathematical Statistics with Applications Applet Exercise In Exercise 16.15, we determined that the posterior density for $p$, the proportion of responders to the new treatment for a virulent disease, is a beta density with parameters $\alpha^{*}=5$ and $\beta^{*}=24 .$ What is the conclusion of a Bayesian test for $H_{0}: p<.3$ versus $H_{a}:$ $p \geq .3 ?$ [Use the applet Beta Probabilities and Quantiles at https://college.cengage.com/nextbook/statistics/wackerly 966371/student/html/index.html. Alternatively, if $W$ is a beta-distributed random variable with parameters $\alpha$ and $\beta$, the $R$ or $S$ -Plus command pbeta $(w, \alpha, \beta) \text { gives } P(W \leq w) .]$ Introduction to Bayesian Methods for Inference Bayesian Tests of Hypotheses Carry out the following steps to derive the formula $\left.\int \operatorname{csch} x \, d x=\ln |\tanh (x / 2)|+C \text { (Theorem } 6.9\right)$ a. Change variables with the substitution $u=x / 2$ to show that $$\int \operatorname{csch} x \, d x=\int \frac{2 d u}{\sinh 2 u}$$ b. Use the identity for sinh $2 u$ to show that $\frac{2}{\sinh 2 u}=\frac{\operatorname{sech}^{2} u}{\tanh u}$ c. Change variables again to determine $\int \frac{\operatorname{sech}^{2} u}{\tanh u} d u,$ and then express your answer in terms of $x$ Applications of Integration Verify the following identities. $$\sinh \left(\cosh ^{-1} x\right)=\sqrt{x^{2}-1}, \text { for } x \geq 1$$ Maximum and Minimum Values a. Find the critical points of the following functions on the given interval. b. Use a graphing device to determine whether the critical points correspond to local maxima, local minima, or neither. c. Find the absolute maximum and minimum values on the given interval when they exist. $$f(x)=x^{2 / 3}\left(4-x^{2}\right) ;[-3,4]$$ Maxima and Minima a. Find the critical points of $f$ on the given interval. b. Determine the absolute extreme values of $f$ on the given interval. c. Use a graphing utility to confirm your conclusions. 
$$f(x)=x \ln (x / 5) ;[0.1,5]$$ All rectangles with an area of 64 have a perimeter given by $P(x)=2 x+128 / x,$ where $x$ is the length of one side of the rectangle. Find the absolute minimum value of the perimeter function. What are the dimensions of the rectangle with minimum perimeter? The Mean Value Theorem 82 Practice Problems Statistics Informed Decisions Using Data What is the mean square due to treatment estimate of $\sigma^{2} ?$ What is the mean square due to error estimate of $\sigma^{2} ?$ Comparing Three or More Means Running pace Explain why if a runner completes a 6.2 -mi $(10-\mathrm{km})$ race in 32 min, then he must have been running at exactly $11 \mathrm{mi} / \mathrm{hr}$ at least twice in the race. Assume the runner's speed at the finish line is zero. Mean Value Theorem Mean Value Theorem and graphs By visual inspection, locate all points on the graph at which the slope of the tangent line equals the average rate of change of the function on the interval [-4,4] (GRAPH CAN'T COPY) How Derivatives Affect the Shape of a Graph Estimate the intervals where the function is concave up and concave down. (Hint: Estimate where the slope is increasing and decreasing.) Applications of Differentiation Concavity and The Second Derivative Test The "family of functions" contains a parameter $c .$ The value of $c$ affects the properties of the functions. Determine what differences, if any, there are for $c$ being zero, positive or negative. Then determine what the graph would look like for very large positive $c$ 's and for very large negative $c$ 's. $$f(x)=x^{4}+c x^{2}$$ Overview of Curve Sketching Determine all significant features (approximately if necessary) and sketch a graph. $$f(x)=\sin x-\frac{1}{2} \sin 2 x$$ Indeterminate Forms Find the indicated limits. $$\lim _{x \rightarrow \infty}\left(1+\frac{1}{x}\right)^{x}$$ Indeterminate Forms and L'Hopitals Rule $$\lim _{x \rightarrow 0} \frac{x \cos x-\sin x}{x \sin ^{2} x}$$ $$\lim _{x \rightarrow-2} \frac{x+2}{x^{2}-4}$$ l'Hospital's Rule $$\lim _{x \rightarrow \infty} x \sin (1 / x)$$ Limits Evaluate the following limits. Use l'Hópital's Rule when it is comvenient and applicable. $$\lim _{\theta \rightarrow \pi / 2^{-}}(\tan \theta)^{\cos \theta}$$ L'Hôpital's Rule $$\lim _{x \rightarrow \infty} \frac{x^{2}-\ln (2 / x)}{3 x^{2}+2 x}$$ A challenging pen problem Two triangular pens are built against a barn. Two hundred meters of fencing are to be used for the three sides and the diagonal dividing fence (see figure). What dimensions maximize the area of the pen? (FIGURE CAN'T COPY) Turning a corner with a pole a. What is the length of the longest pole that can be carried horizontally around a corner at which a 3 -ft corridor and a 4 -ft corridor meet at right angles? b. What is the length of the longest pole that can be carried horizontally around a corner at which a corridor that is $a$ feet wide and a corridor that is $b$ feet wide meet at right angles? c. What is the length of the longest pole that can be carried horizontally around a corner at which a corridor that is $a=5 \mathrm{ft}$ wide and a corridor that is $b=5$ ft wide meet at an angle of $120^{\circ} ?$ d. What is the length of the longest pole that can be carried around a corner at which a corridor that is $a$ feet wide and a corridor that is $b$ feet wide meet at right angles, assuming there is an 8 -foot ceiling and that you may tilt the pole at any angle? 
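The corner-turning pole problem above (parts a and b) reduces to minimizing the usual corridor model $L(\theta)=a/\sin \theta+b/\cos \theta$. The sketch below is an added numerical check, not part of the original exercise set; it uses the $a=3$ ft, $b=4$ ft widths from part (a) and compares the result against the standard closed form $(a^{2/3}+b^{2/3})^{3/2}$.

```python
# Longest pole around a right-angle corner: minimize L(theta) = a/sin(theta) + b/cos(theta).
# Illustrative sketch only; a = 3 ft and b = 4 ft are the corridor widths from part (a).
import numpy as np
from scipy.optimize import minimize_scalar

def pole_length(theta, a, b):
    return a / np.sin(theta) + b / np.cos(theta)

a, b = 3.0, 4.0
res = minimize_scalar(pole_length, bounds=(1e-6, np.pi / 2 - 1e-6),
                      args=(a, b), method='bounded')
closed_form = (a**(2/3) + b**(2/3))**1.5
print(res.fun, closed_form)   # both ~9.87 ft
```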
Another pen problem A rancher is building a horse pen on the corner of her property using $1000 \mathrm{ft}$ of fencing. Because of the unusual shape of her property, the pen must be built in the shape of a trapezoid (see figure). a. Determine the lengths of the sides that maximize the area of the pen. b. Suppose there is already a fence along the side of the property opposite the side of length $y .$ Find the lengths of the sides that maximize the area of the pen, using 1000 ft of fencing. (FIGURE CAN'T COPY) Newton's Method Physics: A Conceptual World View Explain how Newton's idea of light particles predicts that the speed of light in a transparent material will be faster than in a vacuum. A Model for Light How does Newton's idea of light particles explain the law of refraction? An eigenvalue problem A certain kind of differential equation (see Chapter 8 ) leads to the root-finding problem tan $\pi \lambda=\lambda$. where the roots $\lambda$ are called eigenvalues. Find the first three positive eigenvalues of this problem. Verify the following indefinite integrals by differentiation. These integrals are derived in later chapters. $$\int \frac{x}{\sqrt{x^{2}+1}} d x=\sqrt{x^{2}+1}+C$$ Determine the following indefinite integrals. Check your work by differentiation. $$\int \sqrt{x}\left(2 x^{6}-4 \sqrt[3]{x}\right) d x$$ Explain why or why not Determine whether the following statements are true and give an explanation or counterexample. a. $F(x)=x^{3}-4 x+100$ and $G(x)=x^{3}-4 x-100$ are antiderivatives of the same function. b. If $F^{\prime}(x)=f(x),$ then $f$ is an antiderivative of $F$ c. If $F^{\prime}(x)=f(x),$ then $\int f(x) d x=F(x)+C$ d. $f(x)=x^{3}+3$ and $g(x)=x^{3}-4$ are derivatives of the same function. e. If $F^{\prime}(x)=G^{\prime}(x),$ then $F(x)=G(x)$ Rolle's Theorem For $$f(x)=\left\{\begin{array}{ll} 2 x & \text { if } x \leq 0 \\ 2 x-4 & \text { if } x>0 \end{array}\right.$$ show that $f$ is continuous on the interval $(0,2),$ differentiable on the interval (0,2) and has $f(0)=f(2) .$ Show that there does not exist a value of $c$ such that $f^{\prime}(c)=0 .$ Which hypothesis of Rolle's Theorem is not satisfied? Check the hypotheses of Rolle's Theorem and the Mean Value Theorem and find a value of $c$ that makes the appropriate conclusion true. Illustrate the conclusion with a graph. $$f(x)=x^{3}+x^{2},[0,1]$$ Determine whether Rolle's Theorem applies to the following functions on the given interval. If so, find the point(s) guaranteed to exist by Rolle's Theorem. $$f(x)=1-x^{2 / 3}:[-1,1]$$ Derivatives of Inverse Trigonmetric Functions Identity proofs Prove the following identities and give the values of $x$ for which they are true. $$\sin \left(2 \sin ^{-1} x\right)=2 x \sqrt{1-x^{2}}$$ Derivatives of Inverse Trigonometric Functions Derivatives of inverse functions Consider the following functions (on the given interval, if specified). Find the inverse function, express it as a function of $x,$ and find the derivative of the inverse function. $$f(x)=\sqrt{x+2}, \text { for } x \geq-2$$ Graphing $f$ and $f^{\prime}$. a. Graph $f$ with a graphing utility. b. Compute and graph $f^{\prime}$ c. Verify that the zeros of $f^{\prime}$ correspond to points at which $f$ has a horizontal tangent line. $$f(x)=(x-1) \sin ^{-1} x \text { on }[-1,1]$$ Average Rates of Change and Secant Lines Use a CAS or graphing calculator. Animate the secant lines in exercise $9,$ parts $(\mathrm{b}),(\mathrm{d})$ and $(\mathrm{f})$ converging to the tangent line in part (g). 
Tangent Lines and Velocity Explain the difference between the average rate of change and the instantancous rate of change of a function $f$ Derivatives as Rates of Change Determine whether the following statements are true and give an explanation or counterexample. a. For linear functions, the slope of any secant line always equals the slope of any tangent line. b. The slope of the secant line passing through the points $P$ and $Q$ is less than the slope of the tangent line at $P$. c. Consider the graph of the parabola $f(x)=x^{2} .$ For $a>0$ and $h>0,$ the secant line through $(a, f(a))$ and $(a+h, f(a+h))$ always has a greater slope than the tangent line at $(a, f(a))$ Introducing the Derivative Instantaneous Rates of Change and Tangent Lines Applied Calculus Figure 2.11 shows $N=f(t),$ the number of farms in the US $^{2}$ between 1930 and 2000 as a function of year, $t$ (a) Is $f^{\prime}(1950)$ positive or negative? What does this tell you about the number of farms? (b) Which is more negative: $f^{\prime}(1960)$ or $f^{\prime}(1980) ?$ Explain. (Check your book to see figure) Rate of Change: The Derivative Instantaneous Rate of Change Find the limit of the difference quotient of the given function to obtain a function that represents the slope of a line drawn tangent to the curve at $x.$ $$f(x)=\frac{2}{x-1}$$ Bridges to Calculus: An Introduction to Limits Applications of Limits: Instantaneous Rates of Change and the Area under a Curve A Graphical Approach to Precalculus with Limits Find the equation of the tangent line to the function $f$ at the given point. Then graph the function and the tangent line together. $$f(x)=x-x^{2} \text { at }(-1,-2)$$ Limits, Derivatives, and Definite Integrals Tangent Lines and Derivatives Instantaneous Rates of Change and Tangent Lines: Estimating using Average Rate of Change A function $f$ has $f(5)=20, f^{\prime}(5)=2,$ and $f^{\prime \prime}(x)<0$ for $x \geq 5 .$ Which of the following are possible values for $f(7)$ and which are impossible? The Second Derivative Values of $f(t)$ are given in the following table. (a) Does this function appear to have a positive or negative first derivative? Second derivative? Explain. (b) Estimate $f^{\prime}(2)$ and $f^{\prime}(8)$ $$\begin{array}{c|c|c|c|c|c|c}\hline t & 0 & 2 & 4 & 6 & 8 & 10 \\\hline f(t) & 150 & 145 & 137 & 122 & 98 & 56 \\\hline\end{array}$$ For the function $g(x)$ graphed in Figure $2.39,$ are the following nonzero quantities positive or negative? (a) $g^{\prime}(0)$ (b) $g^{\prime \prime}(0)$ Limit Definition of Derivative Graph $f(x)=|x|+|x-2|$ and identify all $x$ -values at which $f(x)$ is not differentiable. Compute $f^{\prime}(a)$ using the limits (2.1) and (2.2). $$f(x)=\frac{3}{x+1}, a=2$$ Vertical tangent lines If a function $f$ is continuous at a and $\lim _{x \rightarrow a}\left|f^{\prime}(x)\right|=\infty,$ then the curve $y=f(x)$ has a vertical tangent line at $a,$ and the equation of the tangent line is $x=a$. If $a$ is an endpoint of a domain, then the appropriate one-sided derivative (Exercises $71-72$ ) is used. Use this information to answer the following questions. Graph the following curves and determine the location of any vertical tangent lines. a. $x^{2}+y^{2}=9$ b. $x^{2}+y^{2}+2 x=0$ The Derivative as a Function Derivative Rules Repeat example 5.1 by first substituting $x=t^{2}-1$ and $y=\sin t$ and then computing $g^{\prime}(t)$. 
Functions of Several Variables and Partial Differentiation Find the derivative of the expression for an unspecified differentiable function $f$. $$\frac{\sqrt{x}}{f(x)}$$ For $f(x)=\sin x,$ find $f^{(05)}(x)$ and $f^{(150)}(x)$ Derivative Rules: Constant Multiple Rule In $2009,$ the population of Mexico was 111 million and growing $1.13 \%$ annually, while the population of the US was 307 million and growing $0.975 \%$ annually. $^{6}$ If we measure growth rates in people/year, which population was growing faster in 2009 ? Shortcuts to Differentation With a yearly inflation rate of $5 \%,$ prices are given by $$P=P_{0}(1.05)^{t}$$ where $P_{0}$ is the price in dollars when $t=0$ and $t$ is time in years. Suppose $P_{0}=1 .$ How fast (in cents/year) are prices rising when $t=10 ?$ Find the equation of the tangent line to $f(x)=10 e^{-0.2 x}$ at $x=4.$ Derivative Rules: Power Rule Calculus and Its Applications Find $d y / d x .$ Each function can be differentiated using the rules developed in this section, but some algebra may be required beforehand. $$y=\frac{x^{5}+x}{x^{2}}$$ Differentiation Techniques: The Power and Sum-Difference Rules Find an equation of the tangent line to the graph of $f(x)=\frac{1}{x^{2}}$ a) $\operatorname{at}(1,1)$ b) $\operatorname{at}\left(3, \frac{1}{9}\right)$ c) $\operatorname{at}\left(-2, \frac{1}{4}\right)$ Find $f^{\prime}(x)$. $$f(x)=4 x-7$$ Derivative Rules: Sum/Difference rule For each of the following, graph $f$ and $f^{\prime}$ and then determine $f^{\prime}(1) .$ For Exercises use Deriv on the $T I-83$. $$f(x)=\frac{4 x}{x^{2}+1}$$ The yield, $Y$, of an apple orchard (measured in bushels of apples per acre) is a function of the amount $x$ of fertilizer in pounds used per acre. Suppose $$Y=f(x)=320+140 x-10 x^{2}$$ (a) What is the yield if 5 pounds of fertilizer is used per acre? (b) Find $f^{\prime}(5) .$ Give units with your answer and interpret it in terms of apples and fertilizer. (c) Given your answer to part (b), should more or less fertilizer be used? Explain. Derivative Formulas for Powers and Polynomials For each function, find the interval(s) for which $f^{\prime}(x)$ is positive. $$f(x)=x^{2}-4 x+1$$ Derivative Rules: Product/Quotient Rule Use the quotient rule to show that the derivative of $[g(x)]^{-1}$ is $-g^{\prime}(x)[g(x)]^{-2} .$ Then use the product rule to compute the derivative of $f(x)[g(x)]^{-1}$. Derivatives Find and simplify the derivative of the following functions. $y=\frac{x-a}{\sqrt{x}-\sqrt{a}},$ where $a$ is a positive constant $$h(x)=\frac{x+1}{x^{2} e^{x}}$$ Derivative Rules: Exponential/ Logarithm Rule Use logarithmic differentiation to find the derivative. $$f(x)=x^{\sqrt{x}}$$ Derivatives of Exponential and Logarithmic Functions Derivative of $u(x)^{\prime(x)}$ Use logarithmic differentiation to prove that \frac{d}{d x}\left(u(x)^{v(x)}\right)=u(x)^{v(x)}\left(\frac{d v}{d x} \ln u(x)+\frac{v(x)}{u(x)} \frac{d u}{d x}\right) $$f(x)=\frac{(x+1)^{3 / 2}(x-4)^{5 / 2}}{(5 x+3)^{2 / 3}}$$ Derivative Rules: Trigonometric/Hyperbolic Trigonometric Rule 9 Practice Problems Calculus of a Single Variable In Exercises $119-124,$ verify the differentiation formula. $$\frac{d}{d x}[\cosh x]=\sinh x$$ Logarithmic, Exponential, and Other Transcendental Functions In Exercises $65-74,$ find the derivative of the function. 
$$y=\sinh ^{-1}(\tan x)$$ $$y=x \cosh x-\sinh x$$ Increasing vs Decreasing Intervals Precalculus : Building Concepts and Connections Suppose $f$ is constant on an interval $[a, b] .$ Show that the average rate of change of $f$ on $[a, b]$ is zero. More About Functions and Equations Symmetry and Other Properties of Functions Use a graphing utility to decide if the function is odd, even, or neither. $$f(x)=x^{4}-5 x^{2}+4$$ Find the average rate of change of each ficnetion on the given interval. $$f(x)=2 x^{2}+3 x-1 ; \text { interval: }[-2,-1]$$
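For the final exercise above, the average rate of change is simply a difference quotient over the stated interval. The following minimal sketch is added for illustration only and uses exactly the function and interval given in the problem.

```python
# Average rate of change of f(x) = 2x^2 + 3x - 1 on the interval [-2, -1].
def f(x):
    return 2 * x**2 + 3 * x - 1

a, b = -2, -1
print((f(b) - f(a)) / (b - a))   # (-2 - 1) / 1 = -3
```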
Comparing the Tribological Properties of Chloride-Based and Tetra Fluoroborate-Based Ionic Liquids Shirke Saurabh Dyaneshwar | Shah Aditya Manoj | Dulange Amit Gangadhar | Kamlesh Rudreshwar Balinge | Anil Payyappalli Mana* | Pundlik Rambhau Bhagat School of Mechanical Engineering, Vellore Institute of Technology, Vellore 632014, India Department of Science and Humanities, Saveetha School of Engineering, Chennai 602107, India Department of Chemistry, School of Advanced Sciences, Vellore Institute of Technology, Vellore 632014, India [email protected] Methyl-imidazolium based ionic liquids have good tribological properties because of their ability to form a chemically reacted film on the surface. This paper compares the tribological properties of two ionic liquids, namely the chloride-based imidazolium ionic liquid (IL1) and the tetrafluoroborate-based methyl-imidazolium ionic liquid (IL2). The two ionic liquids were synthesized in the lab and blended with mineral base oil at various proportions, i.e. 0.5 %, 1 % and 1.5 % by weight. The tribological properties of the oil samples were tested for 1 hr on a reciprocating wear testing machine using a ball-on-flat configuration. The flat is made of hardened AISI 52100 steel, while a bearing ball of the same material serves as the counterface. The results show that IL2 exhibited a nearly 40 % smaller friction coefficient than IL1 at 100 °C; IL1 was found to corrode the steel specimens at 100 °C; the AFM images showed the formation of chemically reacted tribofilms on the surface of samples tested with IL2; and SEM and EDS results proved the presence of chlorine, boron, and fluorine on the respective wear tracks. Keywords: tribological properties, friction, lubricant, ionic liquids, surface characterization 1. Introduction A limitation of conventional lubricants is that a formulation suited to one particular pair of materials may not work for another pair. Also, the various kinds of additives used in fully formulated oils may not be compatible with all types of materials [1]. Ionic liquids, when added as additives, do not suffer from this restriction to the same extent. Ionic liquids are salts composed of positively and negatively charged ions. They are neither acidic nor basic. Most of these compounds are in the liquid state, although semi-solid ionic liquids also exist. The major characteristics of ionic liquids are low volatility, low melting point, non-flammability, thermal stability and good thermal conductivity. The major disadvantage of phosphonium- and tungstate-based ILs is that they corrode the metal at high concentrations, so the amount of ionic liquid in the lubricating oil should be controlled in such a way that it does not harm the surfaces in contact [1]. The first article that brought about the use of ionic liquids as lubricants was published in the year 1961 [2]. A mixed salt of LiF, BeF2, and UF4, melted at 460 °C, was tested at 650-815 °C. For a long time afterwards, however, ionic liquids were reported mainly as a nonconventional class of solvents [3-4]. Room-temperature ionic liquids were not explored as lubricants until 2001, when Ye et al. [5] reported that alkyl imidazolium tetrafluoroborates have good antiwear properties and exhibit a low coefficient of friction when tested with various material pairs such as steel/steel and steel/aluminum [5].
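As a small illustration of the blend concentrations quoted above (0.5 %, 1 % and 1.5 % IL by weight), the sketch below computes how much ionic liquid must be added to a given mass of base oil. The 100 g base-oil batch size is an assumed example quantity, not a value taken from the paper.

```python
# Mass of ionic liquid needed to reach a target weight fraction in base oil.
# The 100 g base-oil batch is an illustrative assumption.

def il_mass_for_blend(base_oil_mass_g: float, target_wt_percent: float) -> float:
    """Return grams of IL so that IL/(IL + oil) equals the target weight %."""
    w = target_wt_percent / 100.0
    return base_oil_mass_g * w / (1.0 - w)

for pct in (0.5, 1.0, 1.5):
    print(f"{pct:4.1f} wt%: {il_mass_for_blend(100.0, pct):.3f} g IL per 100 g oil")
# 0.5 wt% -> 0.503 g, 1.0 wt% -> 1.010 g, 1.5 wt% -> 1.523 g
```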
Imidazolium-based ionic liquids have been the most investigated for their lubricating properties [5-12]. The superior tribological performance of ionic liquids is attributed to the dipolar structure and their ability in adsorbing on to the surface of the materials and subsequent formation of an antiwear film [5]. Even though some of the hydrophilic ionic liquids have been proved to be corrosive against steel [13-19], ionic liquids can be subjected to very severe load conditions [5, 7] and exhibit very high thermal decomposition temperatures [7]. A comparative study was reported between tribological properties of imidazolium-based ionic liquids under steadily advancing loads in alloys that were applied with ionic liquids versus uncoated alloys. The results proved that the alloys not coated with ILs did not show good wear resistance. The SEM analysis also showed that there was the formation of a phosphate-based reaction film which reduced the friction [9]. Ionic liquids applied with two layers on a silicon substrate; among them, one layer acting as an anchor layer proved to have better tribological properties than applied as a single layer [10]. Furthermore, carbon chain length on the imidazole ring of certain halogen-free ionic liquids was reported to influence the tribological properties when used in steel-copper contacts [11]. Load carrying capacity has been proved to be much better than conventional ZDDP based lubricants [7]. Environmental friendly ionic liquids that are free from halogens, phosphorous and sulfur have also been proved to have good tribological properties. These bis borate anion based and imidazolium/ammonium cation ionic liquids exhibited high viscosity, hydrophobicity and good miscibility with base stocks [20]. But Pyridinium based ionic liquids with methyl sulfate anions have been found to be highly corrosive. Dicationic structures exhibited better performance compared to mono cationic structures [21]. Much interest was observed among the researchers in phosphonium based ionic liquids in the last decade [22-31]. Good thermal stability and non-corrosiveness along with friction reduction properties were demonstrated by phosphonium based ionic liquids. Oil solubility of these ionic liquids was much influenced by the alkyl chain length and hydrogen bonding between the anion and cation [22]. Thicker tribofilms were observed in the case of ceramic-steel contacts compared to steel-steel contacts when tested with phosphonium based ILs. Higher thickness is attributed to higher contact pressures due to small contact zones [23]. While ZDDP failed to prevent scuffing at the early stages of the interaction, Trihexyl tetradecyl phosphonium bis (2-ethylhexyl) phosphate IL showed much satisfactory boundary lubrication properties with much lower wear rates. This has been attributed to the formation of a two-layer structured tribofilm on the surface [24]. DLC surfaces gave the higher coefficient of friction when compared with boride coatings while interacting in the presence of IL-based lubricants. Wear debris digestion has been reported to be responsible for the formation of the wear resistant tribofilm [25]. Better tribological properties were achieved when used engine oils containing depleted additives were added with phosphonium based ILs. Used engine oils can get enhanced service life with the proper addition of ILs [26]. 
Few researchers reported that better tribological properties and high-temperature performance were exhibited by phosphonium based ILs, especially the two phosphorous species when blended with base oil and polyol esters [27-29]. Zhu Lili et al. [29] used ionic liquids which can be easily mixed with oil. These liquids showed excellent friction reduction and also the SEM analysis confirmed the formation of phosphate and sulfate-based surface films. Good synergy was reported between phosphonium cation ILs and oxide nanoparticles when mixed together resulting in a substantial decrease in friction and wear [30]. Recently, the compatibility of room temperature ILs such as phosphonium and imidazolium with avocado oil was tested and found its effectiveness in reducing friction and wear [31]. Gearbox oils showed a different behavior when blended with ILs. Quality of the oil deteriorated over a period of time when the test severity increased. Tribological properties did not improve even with the addition of ILs [32]. Anil and Rajamohan [33] varied the surface texture of the base metal and used ZDDP as an additive and its effectiveness was tested. The results showed that if pressure (perpendicular to contact) is increased the wear also increases. Nano-Tribology tests were conducted by Tiago et al. [34] on methyl-pyridinium based ionic liquids. The results of the experiment showed that the place where a methyl group is located on the pyridine ring affects the film formation and leads to varying coefficient of friction. Inés et al. [35] used phosphate-contingent ionic liquids as additives in the lubricating oil. Phosphate-based ionic liquid showed less wear and friction than the others. Studies by Jian et al. [36] showed that polymer ionic liquids have the capacity to lubricate steel-steel contacts at high load. These ionic liquids are more superior than existing lubricant as they have the ability to sustain high temperature and high sliding velocity. Guowei et al. [37] used guanidinium ILs which proved to be a better lubricant than the esters. At high temperatures of around 300 °C, these liquids are stable and have exemplary anti-wear properties. Huaping et al. [38] studied the addition of amine-based ionic liquid to the lubricating oil. It showed remarkable results as compared to base oil. In this research, nano-intercalates were used for improving the anti-wear properties of the base oil. Ionic liquids used in the past have shown significant evidence of decreasing the friction and wear between two sliding surfaces. Various phosphonium and tungstate based liquids have been instrumental in decreasing the friction to a greater extent. However, the literature shows that most of the research on the methyl-imidazolium ionic liquid is conducted at one particular temperature and a comprehensive study on the effect of varying the temperature is lacking. The present work is focused on studying the influence of temperature on the tribological behavior of two lab-synthesized ILs when blended with mineral base oil at various concentrations. 2.1 Synthesis of ionic liquid-1(IL1) The ionic liquid required for preparing the lubricants were synthesized in the lab. All the solvents used were of analytical grade quality. A round bottom flask was filled with 10 mmol of methylimidazole liquid (0.82 gm) and 10 mmol of [3-MCPD] 3-Chloro-1, 2-propanediol (1.10 gm) in toluene as a reaction medium. The reacting agents were then kept at 80 °C for 48 hr with constant stirring. 
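Before the work-up described next, a quick sanity check of the reagent quantities quoted above (10 mmol each of methylimidazole and 3-MCPD) can be scripted. The molar masses below are standard literature values assumed for this illustration, not figures taken from the paper.

```python
# Reagent masses for the IL1 quaternization step (10 mmol scale).
# Molar masses are standard values, used here only as a cross-check (assumption).
MOLAR_MASS = {
    "1-methylimidazole": 82.10,                    # g/mol
    "3-chloro-1,2-propanediol (3-MCPD)": 110.54,   # g/mol
}

n_mmol = 10.0
for reagent, M in MOLAR_MASS.items():
    mass_g = n_mmol * 1e-3 * M
    print(f"{reagent}: {mass_g:.2f} g for {n_mmol:.0f} mmol")
# 1-methylimidazole: 0.82 g, 3-MCPD: 1.11 g -- consistent with the quoted 0.82 g and 1.10 g.
```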
When the reaction was completed, the ionic liquid formed was separated and washed with diethyl ether to get pure IL1. The formation of ionic liquid was confirmed by 1H-NMR analysis. The molecular weight of the developed liquid is 212.72 gm/mole. 1H-NMR analysis provided the results as; 1H NMR (400 MHz, CDCl3) δ = 8.9 (s, 1H), 7.7 (d, 2H), 5.9 (s, 2H) 4.6 (d, 2H), 3.6 (m. 4H), 3.5 (d, 2H), 3.6 (s, 3H). A 100 mL round bottom flask was filled with 10 mmol of IL1 (1.92 gm) in water, in which 12 mmol aqueous solution of NaBF4 (1.308 gm) was added. The reaction mixture was refluxed for about 12 hr. Figure 1. Molecular structure of ionic liquids After the completion of the reaction, the water was evaporated by Rota-evaporator to get the IL2 ionic liquid. The 1H-NMR analysis for IL2 is same as that of IL1. The molecular weight is 264.0651 gm/mole. 2.3 Friction and wear test The tribological tests were performed using a ball-on-flat configuration. The flat base sample was machined to the size of 30×30×10 mm from heat treated AISI 52100 bearing steel plates. Surface grinding was done to obtain a surface roughness of 0.2 µm. The 10 mm bearing ball was used as received. The chemical composition of AISI 52100 steel was ensured through spectroscopic analysis as per Table 1. The synthesized ionic liquids were blended with mineral base oil SAE 30 at various proportions viz. 0.5 %, 1 % and 1.5 %. The friction and wear tests were conducted using a reciprocating wear testing machine (TR-285-M9 Ducom, Bangalore). The details of the test setup are mentioned elsewhere [33]. A schematic of the contact is depicted in Figure 2. Table 2 presents the test parameters. Figure 2. Schematic diagram of reciprocating contact Table 1. The chemical composition of AISI 52100 steel Chemical composition (Weight %) 2.4 Wear rate estimation Wear scar diameter on the 10 mm steel ball formed during the reciprocating wear tests were determined with the help of Dino-Lite Digital microscope for determining the wear rate. A representative image of the wear scar is shown in Figure 3, where two diameters are also marked. Table 2. Test parameters Concentration of IL (%) Lubricant notation Temperature (℃) Load (N) Base oil + IL1 Figure 3. Wear scar image on steel ball used in 1.5 % IL+ Base oil test at 50 °C The wear scar dimension was measured as shown in Figure 4. Figure 4. Wear scar diameter measurement of steel ball Now, d1 and d2 are the diameters of the wear scar taken from the image of the steel. $\mathrm{d}=\sqrt{d_{1} * d_{2}}$ (1) where, d = diameter of the wear scar Wear volume is calculated from wear scar diameter (d) by considering the geometrical features [39]. So, the formula for wear volume (V) is given as: $\mathrm{v}=\frac{\pi \mathrm{h}}{6}\left(\frac{3 \mathrm{d}^{2}}{4}+\mathrm{h}^{2}\right)$ (2) where, d = diameter of wear scar; h = depth of wear scar Also, the Wear scar depth (h) is given as: $\mathrm{h}=\mathrm{r}-\sqrt{r^{2}-\frac{d^{2}}{4}}$ (3) where, r = Radius of steel ball = 5 mm; d = Diameter of the wear scar The wear rate was calculated as follows: $\mathrm{Q}=\frac{\mathrm{V}}{\mathrm{x}}$ (4) where, X = Sliding distance (m) Now, the sliding distance in m is calculated by the formula given below: X = 0.002×t×f×L (5) where, t = Test duration = 3600 s f = Reciprocating frequency = 10 Hz L = Stroke length = 15 mm After completion of all the experiments, the surface characterization was done with the help of Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM). 
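The wear-rate procedure of Eqs. (1)-(5) is straightforward to script. The sketch below mirrors those equations using the stated test parameters (ball radius r = 5 mm, t = 3600 s, f = 10 Hz, stroke L = 15 mm); the two scar diameters in the example call are placeholder numbers for illustration only, not measured values from this study.

```python
import math

def wear_rate(d1_mm: float, d2_mm: float, r_mm: float = 5.0,
              t_s: float = 3600.0, f_hz: float = 10.0, stroke_mm: float = 15.0):
    """Wear rate Q (mm^3/m) from two measured scar diameters, following Eqs. (1)-(5)."""
    d = math.sqrt(d1_mm * d2_mm)                          # Eq. (1): mean scar diameter, mm
    h = r_mm - math.sqrt(r_mm**2 - d**2 / 4.0)            # Eq. (3): scar depth, mm
    v = (math.pi * h / 6.0) * (3.0 * d**2 / 4.0 + h**2)   # Eq. (2): wear volume, mm^3
    x = 0.002 * t_s * f_hz * stroke_mm                    # Eq. (5): sliding distance, m
    return v / x                                          # Eq. (4): wear rate, mm^3/m

# Placeholder scar diameters (mm), for illustration only:
print(f"Q = {wear_rate(0.45, 0.50):.3e} mm^3/m over {0.002*3600*10*15:.0f} m of sliding")
```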
Also, the elemental composition of the reactive films formed on the surface is analyzed by Energy Dispersive X-ray Spectroscopy (EDS). 3.1 Analysis of coefficient of friction when tested with oil sample 1 Tribological reciprocating wear tests were carried out with base oil at two different temperatures, 50 °C and 100 °C. The results of the base oil tests are not presented here. As shown in Figure 5, the lubricant L1 is more effective at higher temperatures. But, from Figure 6 and Figure 7, it is observed that the coefficient of friction is higher at 100 °C for the lubricants M1 and H1. This means that blends containing 1 % or more of IL1 show better lubrication at lower temperatures, which is attributed to the decomposition of oil sample 1 at 100 °C [40]. Figure 5. Variation of the coefficient of friction for the sample tested with lubricant L1 Figure 6. Variation of the coefficient of friction for the sample tested with lubricant M1 Figure 7. Variation of the coefficient of friction for the sample tested with lubricant H1 The comparison graphs presented in Figure 8 and Figure 9 show the variation of the coefficient of friction at temperatures of 50 °C and 100 °C. From Figure 8, at 50 °C, the coefficient of friction is the highest when tested with lubricant B when compared with the lubricants L1, M1, and H1. Also, the coefficient of friction decreases as the concentration of ionic liquid (IL) in the base oil increases. But at 100 °C, as shown in Figure 9, contrary to expectation, the coefficient of friction is lowest for lubricant B when compared to the ionic liquid blends. Figure 8. Comparison of the coefficient of friction for lubricants B, L1, M1, and H1 at 50 °C Figure 9. Comparison of the coefficient of friction for lubricants B, L1, M1 and H1 at 100 °C When the concentration of ionic liquid in the base oil increases at 100 °C, the coefficient of friction also increases, which shows that this ionic liquid is more suitable at the lower temperature. As the concentration of IL1 in the base oil increases, the chlorine concentration increases, thereby increasing decomposition; this breaks the surface reactive film and in turn leads to an increase in the coefficient of friction. 3.2 Analysis of coefficient of friction when tested with oil sample 2 In a similar way, the coefficient of friction was analyzed for oil sample 2, based on methyl-imidazolium tetrafluoroborate (BF4- anion). Figure 10. Variation of the coefficient of friction for the sample tested with lubricant L2 Figure 11. Variation of the coefficient of friction for the sample tested with lubricant M2 Figure 12. Variation of the coefficient of friction for the sample tested with lubricant H2 From Figures 10, 11 and 12, the lubricants L2 and M2 show better friction reduction at 100 °C than at 50 °C, whereas for lubricant H2 the coefficient of friction is almost the same at both temperatures. This shows that lubricant 2 (IL2+Base oil) provides better lubrication at higher temperatures. Comparison graphs were also plotted in order to understand the variation of the friction coefficient at constant temperature (50 °C and 100 °C) for the various concentrations of IL2 in base oil and for 100 % base oil. The plots clearly show that the friction coefficient between the steel ball and the base sample decreases as the concentration of IL2 in the base oil increases at 50 °C. Also, the addition of IL2 to the base oil decreases the friction coefficient from 0.8 to approximately 0.15, which is an 80 % reduction in friction at 50 °C.
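The 80 % figure quoted above can be verified with a one-line calculation. In the minimal sketch below, the 0.8 and 0.15 coefficients are the approximate values quoted in the text, not tabulated measurements.

```python
# Percent reduction in coefficient of friction relative to the base-oil reference.
def friction_reduction_percent(cof_base: float, cof_blend: float) -> float:
    return 100.0 * (cof_base - cof_blend) / cof_base

# Approximate values quoted in the text for 50 degC: base oil ~0.8, IL2 blend ~0.15.
print(f"{friction_reduction_percent(0.8, 0.15):.0f} % reduction")  # ~81 %
```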
At 100 °C, however even though the addition of IL2 decreases the friction, there we can't see any definite trend between the concentration of IL2 and friction coefficient. Still, for 0.5 % concentration of IL2 in base oil seems to be the best choice of lubrication at higher temperatures. The graphs are plotted to understand the effectiveness of various oil samples at various concentrations and temperatures to reduce the friction between reciprocating sliding surfaces. Similar graphs are plotted for other temperatures and concentrations. From these, it can be confirmed that addition of ionic liquid to base oil decreases friction between surfaces for almost all cases. Also, it is clearly interpreted that lubricant 2 is more suitable for functional applications than lubricant 1at all concentrations and temperatures. 3.3 Analysis of wear rate Figure 13 and 14 presents the wear rate values for all the tribological tests carried out with lubricants 1 and 2 at 50 °C and 100 °C respectively. At 50 °C, the wear rate of all concentrations is less than base oil and B1 at 50 °C showed the lowest wear rate. At 100 °C, some deviations were seen in the graph that M1 at 100 °C showed almost same wear rate as that of base oil whereas the L1 at 100 °C has the lowest wear rate. The wear rate of IL+ base oil at 50 °C is less as compared to when at 100 °C. So, the lubricant L1 showed the best results compared to other concentrations of ionic liquids and base oil. Figure 13. Wear rate of samples tested with lubricants B, L1, M1 and H1 at temperatures of 50 °C and 100 °C 3.4 Surface analysis The Atomic Force Microscopy (AFM) of the samples tested with IL+ base oil was done in order to analyze the wear track observed on the sample. The AFM was done for 1 % IL+ base oil at 50 °C and for all concentrations of IL+ base oil at 100 °C. The remaining two concentrations at 50 °C were left out because the surface of the lower flat samples was found to be corroded. For the sample tested with 1 % IL+base Oil at 50°C, Figure 15 shows smooth surfaces in AFM with no evidence of peaks whatsoever. This smooth surface with roughness 303 nm indicates the presence of reactive films which confirms further friction reduction. Also, from Figure 16, AFM image of the sample with 0.5 % IL+base Oil at 100 °C has very small peaks but the surface seems unaffected by wear. No presence of the reactive films on the surface is confirmed. As shown in Figure 17, the sample with 1 % IL+Base oil tested at 100 °C clearly shows the presence of deep grooves. The absence of white patches further confirms high friction and wear. In Figure 18, the AFM image of the sample with 1.5 % IL+Base oil at 100 °C shows intermediate discontinuous tribo-film formation. Traces of reactive films are observed on the surface indicating that minor corrosive wear has occurred. Also, it shows intermediate deep grooves thereby increasing the surface roughness to about 884 nm. Figure 15. AFM image obtained of wear track of sample tested with lubricant M1 at 50 °C Figure 16. AFM image obtained of wear track of sample tested with lubricant L1 at 100 °C Figure 17. AFM image obtained of wear track of sample tested with M1 at 100 °C Figure 18. AFM image obtained of wear track of sample tested with H1 at 100 °C Figures 19-23 presents the results of the SEM/EDS analysis carried out on the surface of the samples. All the images were taken at a magnification of 2000 X. Figure 19 (a) presents the SEM image of the flat sample tested at 100 °C with lubricant B. 
Wear particles are seen along the sliding direction. As there are no additives added in lubricant B, wear will be high compared to the other lubricants. When tested at 50 °C, worn particles were not seen, but only the sliding marks were seen which proves that the surface was not severely affected. In Figure 20 (a), for the sample tested with 1 % IL1+Base oil at 100 °C, black marks are observed which show that some surface reactions have occurred. Also, from EDS in Figure 20 (a), it is seen that 0.02 weight % of chlorine was present from the ionic liquid and the remaining Ferrous and carbon contents were from the base metal AISI 52100 steel sample used. When tested with H1 lubricant at 50 °C, fine parallel wear marks which are surrounded by deep parallel wear marks were observed. A significant amount of chlorine (around 0.04 weight %) was from the IL1. Figure 19. (a) SEM image and (b) EDS image of wear track of flat sample tested with lubricant B at 100 °C Figure 20. (a) SEM image and (b) EDS spectrum of wear track of flat sample tested with lubricant M1 at 100 °C Figure 21. (a) SEM image (b) EDS spectrum of wear track of flat sample tested with lubricant M2 at 100 °C Figure 22. (a) SEM image (b) EDS spectrum of wear track of flat sample tested with lubricant H2 at 50 °C Figure 23. (a) SEM image (b) EDS spectrum of wear track of flat sample tested with lubricant H2 at 100 °C Table 3 presents the elemental results of the EDS analysis. Table 3. Results of the elemental analysis Weight% Atomic% B tested at 100 ˚C M1 tested at H2 tested at When tested with 1% IL2+Base oil at 50°C, fine wear marks were observed. EDS analysis showed, 3.71 weight % of boron in the specimen. In Figure 21, wear marks were not seen clearly, but the EDS results showed that 3.67 weight % of boron and 0.85 weight % of fluorine present in the sample. In the sample tested with 1.5% IL2+Base oil as presented in Figure 22, wear tracks were clearly seen and boron of 2.64 weight % was found. From Figure 23, it may be seen that the surface has been damaged due to wear and 24.29 weight % of boron was found which is the highest of all the IL2 samples indicating that higher concentration of boron present on the surface. Two different ionic liquids IL1 (chloride based methyl-imidazolium ionic liquid) and IL2 (tetrafluoroborate based methyl-imidazolium ionic liquid) were mixed with base oil at three weight percentages 0.5 %, 1 % and 1.5 %, were then tested in reciprocating wear testing machine at two temperatures of 50 °C and 100 °C. Overall, it can be observed that the results of IL2+Base Oil were found to be better than IL1+Base Oil. This proves that the tetrafluoroborate ion is more reactive than chloride ion to reduce the friction between the sliding surfaces by forming tribological reaction films. The coefficient of friction was found to be low for IL2+Base Oil for all concentrations and temperatures. The addition of chloride-based methyl-imidazolium ionic liquid (IL1) to base oil has decreased the friction at 50°C. At higher temperature of 100°C, it has been observed that the addition of IL1 to base oil has instead increased the coefficient of friction. This proves that the IL1 is more suitable for lubrication applications at lower temperatures. Whereas for IL2 is added to base oil the coefficient of friction has decreased at both 50°C and 100°C. The wear rate is determined for all the wear tested sample in addition to the friction testing. 
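Since Table 3 lists both weight % and atomic %, it may be useful to recall how EDS software converts between them: the atomic fraction of element $i$ is $(w_i/A_i)/\sum_j (w_j/A_j)$, where $w$ is the weight percent and $A$ the atomic mass. The sketch below applies this relation with standard atomic masses; the example composition is purely illustrative and is not taken from Table 3.

```python
# Convert EDS weight % to atomic % using standard atomic masses (g/mol).
ATOMIC_MASS = {"B": 10.81, "C": 12.011, "O": 15.999, "F": 18.998,
               "Cl": 35.453, "Fe": 55.845}

def weight_to_atomic_percent(wt: dict) -> dict:
    moles = {el: w / ATOMIC_MASS[el] for el, w in wt.items()}
    total = sum(moles.values())
    return {el: 100.0 * n / total for el, n in moles.items()}

# Illustrative composition only (not the measured values from Table 3):
example = {"Fe": 90.0, "C": 5.0, "B": 3.7, "F": 0.9, "Cl": 0.4}
for el, at in weight_to_atomic_percent(example).items():
    print(f"{el}: {at:.1f} at%")
```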
Wear rate for all oil samples including base oil, oil sample 1 and oil sample 2 at low temperatures was less as compared to high temperatures. The wear rate for samples tested with IL1+Base Oil was found to increase as the temperature was increased. Whereas, the samples tested with IL2+Base Oil showed results better than IL1 at both high and low temperatures. AFM, SEM and EDAX analysis are used for surface characterization of all the tribologically wear tested samples. The AFM shows the formation chemically reacted of tribo-films on the surface which proves the reduction of friction and wear. The SEM images are used to analyze the wear tracks of the samples at microscopic level showing worn particles and wear marks. EDAX analysis indicates the deposition of chlorine, boron and fluorine on the wear tracks of the respective samples. [1] Minami, I. (2009). Ionic liquids in tribology. Molecules, 14(6): 2286-2305. https://doi.org/10.3390/molecules14062286 [2] Smith, P.G. (1961). High-temperature molten-salt lubricated hydrodynamic journal bearings. ASLE Transactions, 4(2): 263-274. https://doi.org/10.1007/978-0-387-92897-5_955 [3] Welton, T. (1999). Room-temperature ionic liquids. Solvents for synthesis and catalysis. Chemical Reviews, 99(8): 2071-2084. https://doi.org/10.1021/cr980032t [4] Earle, M.J., Seddon, K.R. (2000). Ionic liquids. Green solvents for the future. Pure and Applied Chemistry, 72(7): 1391-1398. http://dx.doi.org/10.1351/pac20007207139 [5] Ye, C.F., Liu, W.M., Chen, Y.X., Yu, L.G. (2001). Room-temperature ionic liquids: A novel versatile lubricant. Chemical Communications, 21: 2244-2245. https://doi.org/10.1039/B106935G [6] Reich, R.A., Stewart, P.A., Bohaychick, J., Urbanski, J.A. (2003). Base oil properties of ionic liquids (C). Tribology & Lubrication Technology, 59(7), 16. https://doi.org/10.3390/lubricants5030031 [7] Wang, H.Z., Lu, Q.M., Ye, C.F., Liu, W.M., Cui, Z.J. (2004). Friction and wear behaviors of ionic liquid of alkyl imidazolium hexafluorophosphates as lubricants for steel/steel contact. Wear, 256(1-2): 44-48. https://doi.org/10.1016/S0043-1648(03)00255-2 [8] Sanes, J., Carrión, F.J., Jiménez, A.E., Bermúdez, M.D. (2007). Influence of temperature on PA 6-steel contacts in the presence of an ionic liquid lubricant. Wear, 263(1-6): 658-662. https://doi.org/10.1016/j.wear.2006.11.034 [9] Espinosa, T., Jiménez, A.E., Martínez-Nicolás, G., Sanes, J., Bermúdez, M.D. (2014). Abrasion resistance of magnesium alloys with surface films generated from phosphonate imidazolium ionic liquids. Applied Surface Science, (320): 267-273. https://doi..org/10.1016/j.apsusc.2014.09.077 [10] Pu, J., Huang, D., Wang, L., Xue, Q. (2010). Tribology study of dual-layer ultrathin ionic liquid films with bonded phase: Influences of the self-assembled underlayer. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 372(1-3): 155-164. https://doi.org/10.1016/j.colsurfa.2010.10.017 [11] Fan, M., Wang, X., Yang, D., Wang, D., Yan, Y., Zhang, C., Liu, X. (2015). New ionic liquid lubricants derived from nonnutritive sweeteners. Tribology International, 92: 344-352. https://doi.org/10.1016/j.triboint.2015.07.020 [12] Han, Y.Y., Qiao, D., Zhang, L., Feng, D.P. (2015). Study of tribological performance and mechanism of phosphonate ionic liquids for steel/aluminum contact. Tribology International, 84: 71-80. https://doi.org/10.1016/j.triboint.2014.11.013 [13] Jiménez, A.E., Bermudez, M.D., Iglesias, P., Carrión, F.J., Martínez-Nicolás, G. (2006). 
Astronomy Research Group
BYU Astronomy Research Group Joins the Astrophysical Research Consortium (ARC)
As of January 2021 BYU will be a member of the ARC Consortium, with access to the ARC 3.5-m telescope and the 0.5-m ARCSAT telescope. The primary use of the ARC 3.5-m telescope time is for graduate student projects. This provides a wide array of instrumentation that is currently being used to study objects in the solar system all the way to studies of the large-scale structure of the Universe.
Other BYU Astronomy Facilities
In addition to our telescope time from the ARC consortium, we operate a number of our own astronomical facilities.
West Mountain Observatory
This is our mountain observatory at about 6600 ft above sea level. It consists of three telescopes: 0.9-m, 0.5-m, and 0.32-m. It is a 40-minute drive that ends in a 5-mile drive up a dirt road. The mountain itself can be seen from campus. We don't provide any tours of this facility.
Orson Pratt Observatory
The Orson Pratt Observatory is named for an early apostle of the Church of Jesus Christ of Latter-Day Saints. It is our campus telescope facility and contains a wide variety of telescopes for student research and public outreach. We operate a 24" PlaneWave telescope in the main campus dome, plus a 16", two 12", one 8", and a 6" telescope on our observation deck. The telescopes are all fully robotic. Beyond this we have a large selection of telescopes used on public nights.
Royden G. Derrick Planetarium
This is a 119-seat, 39-foot dome planetarium with acoustically treated walls to allow its use as a lecture room. Recently we upgraded to an E&S Digistar7 operating system with 4K projectors. The planetarium is used for teaching classes, public outreach, and astronomy education research projects.
A collisional family of icy objects in the Kuiper belt
BYU Authors: Darin Ragozzine, published in Nature
The small bodies in the Solar System are thought to have been highly affected by collisions and erosion. In the asteroid belt, direct evidence of the effects of large collisions can be seen in the existence of separate families of asteroids - a family consists of many asteroids with similar orbits and, frequently, similar surface properties, with each family being the remnant of a single catastrophic impact (1). In the region beyond Neptune, in contrast, no collisionally created families have hitherto been found (2). The third largest known Kuiper belt object, 2003 EL61, however, is thought to have experienced a giant impact that created its multiple satellite system, stripped away much of an overlying ice mantle, and left it with a rapid rotation (3-5). Here we report the discovery of a family of Kuiper belt objects with surface properties and orbits that are nearly identical to those of 2003 EL61. This family appears to be fragments of the ejected ice mantle of 2003 EL61.
Modeling the normal modes and acoustics of a jet engine
BYU Authors: Laralee Ireland and Scott D. Sommerfeldt, published in INTER-NOISE and NOISE-CON Congress and Conference Proceedings, NoiseCon00, pp. 71-76 (Newport Beach, CA, December 2000).
An ALMA Gas-dynamical Mass Measurement of the Supermassive Black Hole in the Local Compact Galaxy UGC 2698
BYU Authors: Benjamin D. Boizelle, published in Astrophys. J.
We present 0.″14-resolution Atacama Large Millimeter/submillimeter Array (ALMA) CO(2-1) observations of the circumnuclear gas disk in UGC 2698, a local compact galaxy.
The disk exhibits regular rotation with projected velocities rising to 450 km s−1 near the galaxy center. We fit gas-dynamical models to the ALMA data cube, assuming the CO emission originates from a dynamically cold, thin disk, and measured the mass of the supermassive black hole (BH) in UGC 2698 to be M_BH = (2.46 ± 0.07 [1σ statistical] −0.78/+0.70 [systematic]) × 10^9 M_⊙. UGC 2698 is part of a sample of nearby early-type galaxies that are plausible z ∼ 2 red nugget relics. Previous stellar-dynamical modeling for three galaxies in the sample found BH masses consistent with the BH mass–stellar velocity dispersion (M_BH–σ⋆) relation but over-massive relative to the BH mass–bulge luminosity (M_BH–L_bul) correlation, suggesting that BHs may gain the majority of their mass before their host galaxies. However, UGC 2698 is consistent with both M_BH–σ⋆ and M_BH–L_bul. As UGC 2698 has the largest stellar mass and effective radius in the local compact galaxy sample, it may have undergone more recent mergers that brought it in line with the BH scaling relations. Alternatively, given that the three previously measured compact galaxies are outliers from M_BH–L_bul, while UGC 2698 is not, there may be significant scatter at the poorly sampled high-mass end of the relation. Additional gas-dynamical M_BH measurements for the compact galaxy sample will improve our understanding of BH–galaxy co-evolution.
Black Hole Mass Measurements of Radio Galaxies NGC 315 and NGC 4261 Using ALMA CO Observations
We present Atacama Large Millimeter/submillimeter Array (ALMA) Cycle 5 and Cycle 6 observations of CO(2−1) and CO(3−2) emission at 0.″2−0.″3 resolution in two radio-bright, brightest group/cluster early-type galaxies, NGC 315 and NGC 4261. The data resolve CO emission that extends within their black hole (BH) spheres of influence (r_g), tracing regular Keplerian rotation down to just tens of parsecs from the BHs. The projected molecular gas speeds in the highly inclined (i ≳ 60°) disks rise at least to 500 km s−1 near their galaxy centers. We fit dynamical models of thin-disk rotation directly to the ALMA data cubes and account for the extended stellar mass distributions by constructing galaxy surface brightness profiles corrected for a range of plausible dust extinction values. The best-fit models yield BH masses for both NGC 315 and NGC 4261, the latter of which is larger than previous estimates by a factor of ∼3. The BH masses are broadly consistent with the relations between BH masses and host galaxy properties. These are among the first ALMA observations to map dynamically cold gas kinematics well within the BH-dominated regions of radio galaxies, resolving the respective r_g by factors of ∼5−10. The observations demonstrate ALMA's ability to precisely measure BH masses in active galaxies, which will enable more confident probes of accretion physics for the most massive galaxies.
A Precision Measurement of the Mass of the Black Hole in NGC 3258 from High-resolution ALMA Observations of Its Circumnuclear Disk
We present ~0.″10 resolution Atacama Large Millimeter/submillimeter Array (ALMA) CO(2−1) imaging of the arcsecond-scale (r ≈ 150 pc) dusty molecular disk in the giant elliptical galaxy NGC 3258.
The data provide unprecedented resolution of the cold gas disk kinematics within the dynamical sphere of influence of a supermassive black hole (BH), revealing a quasi-Keplerian central increase in projected rotation speed rising from 280 km s−1 at the disk's outer edge to >400 km s−1 near the disk center. We construct dynamical models for the rotating disk and fit beam-smeared model CO line profiles directly to the ALMA data cube. Our models incorporate both flat and tilted-ring disks that provide a better fit to the mildly warped structure in NGC 3258. We show that the exceptional angular resolution of the ALMA data makes it possible to infer the host galaxy's mass profile within r = 150 pc solely from the ALMA CO kinematics, without relying on optical or near-infrared imaging data to determine the stellar mass profile. Our model therefore circumvents any uncertainty in the BH mass that would result from the substantial dust extinction in the galaxy's central region. The best model fit yields M_BH = 2.249 × 10^9 M_⊙, with a statistical model-fitting uncertainty of just 0.18% and systematic uncertainties of 0.62% from various aspects of the model construction and 12% from uncertainty in the distance to NGC 3258. This observation demonstrates the full potential of ALMA for carrying out highly precise measurements of M_BH in early-type galaxies containing circumnuclear gas disks.
Precision Gas-dynamical Mass Measurement of Supermassive Black Holes with the ngVLA
BYU Authors: Benjamin D. Boizelle, published in ASP Conference Series
Emission line observations of circumnuclear gas disks in the ALMA era have begun to resolve molecular gas tracer kinematics near supermassive black holes (BHs), enabling highly precise mass determination in the best cases. The ngVLA is capable of extremely high spatial resolution imaging of the CO(1–0) transition at 115 GHz for nearby galaxies. Furthermore, its high (anticipated) emission line sensitivity suggests this array can produce benchmark BH mass measurements. We discuss lessons learned from gas-dynamical modeling of recent ALMA data sets and also compare ALMA and ngVLA CO simulations of a dynamically cold disk. While only a fraction of all local galaxies likely possess sufficiently bright, regularly-rotating nuclear molecular gas, in such cases the ngVLA is expected to more efficiently resolve such emission arising at a projected 50–100 mas from the central BH.
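All of the gas-dynamical measurements summarized above rest on the same relation: inside the BH's sphere of influence, the circular speed of the cold gas traces the enclosed BH plus stellar mass, and the observed line-of-sight velocities are this circular speed projected by the disk inclination. The short Python sketch below illustrates only that relation; the enclosed stellar-mass profile, inclination and numerical values are invented placeholders, not parameters from the papers above.

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def enclosed_stellar_mass(r_pc, m_star=1.0e10, a=500.0):
    """Placeholder Plummer-like enclosed stellar mass; the real analyses derive
    this from dust-corrected surface-brightness profiles."""
    return m_star * r_pc**3 / (r_pc**2 + a**2)**1.5

def projected_speed(r_pc, m_bh, inclination_deg):
    """Line-of-sight speed of a thin, dynamically cold disk in circular rotation."""
    v_circ = np.sqrt(G * (m_bh + enclosed_stellar_mass(r_pc)) / r_pc)
    return v_circ * np.sin(np.radians(inclination_deg))

r = np.logspace(0.5, 3.0, 200)                       # ~3 pc to 1 kpc
v = projected_speed(r, m_bh=2.2e9, inclination_deg=70.0)
print(f"projected speed at 20 pc:  {np.interp(20.0, r, v):.0f} km/s")
print(f"projected speed at 500 pc: {np.interp(500.0, r, v):.0f} km/s")
```

Fitting such a model to an observed data cube additionally requires beam smearing and line-profile modelling, which is what the studies cited above actually do.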
Around the clock: gradient shape and noise impact the evolution of oscillatory segmentation dynamics Renske M. A. Vroomans ORCID: orcid.org/0000-0002-1353-797X1,2, Paulien Hogeweg2 & Kirsten H. W. J. ten Tusscher2 Segmentation, the subdivision of the major body axis into repeated elements, is considered one of the major evolutionary innovations in bilaterian animals. In all three segmented animal clades, the predominant segmentation mechanism is sequential segmentation, where segments are generated one by one in anterior–posterior order from a posterior undifferentiated zone. In vertebrates and arthropods, sequential segmentation is thought to arise from a clock-and-wavefront-type mechanism, where oscillations in the posterior growth zone are transformed into a segmental prepattern in the anterior by a receding wavefront. Previous evo-devo simulation studies have demonstrated that this segmentation type repeatedly arises, supporting the idea of parallel evolutionary origins in these animal clades. Sequential segmentation has been studied most extensively in vertebrates, where travelling waves have been observed that reflect the slowing down of oscillations prior to their cessation and where these oscillations involve a highly complex regulatory network. It is currently unclear under which conditions this oscillator complexity and slowing should be expected to evolve, how they are related and to what extent similar properties should be expected for sequential segmentation in other animal species. To investigate these questions, we extend a previously developed computational model for the evolution of segmentation. We vary the slope of the posterior morphogen gradient and the strength of gene expression noise. We find that compared to a shallow gradient, a steep morphogen gradient allows for faster evolution and evolved oscillator networks are simpler. Furthermore, under steep gradients, damped oscillators often evolve, whereas shallow gradients appear to require persistent oscillators which are regularly accompanied by travelling waves, indicative of a frequency gradient. We show that gene expression noise increases the likelihood of evolving persistent oscillators under steep gradients and of evolving frequency gradients under shallow gradients. Surprisingly, we find that the evolutions of oscillator complexity and travelling waves are not correlated, suggesting that these properties may have evolved separately. Based on our findings, we suggest that travelling waves may have evolved in response to shallow morphogen gradients and gene expression noise. These two factors may thus also be responsible for the observed differences between different species within both the arthropod and chordate phyla. Evolutionary developmental biology aims to understand how the developmental patterning mechanisms evolved that shape complex organisms. It also seeks to answer why evolution favours certain patterning mechanisms over alternative, theoretically possible, mechanisms, and whether and how these mechanisms can change into one another. Segmentation, the division of the body axis into repeated units, is considered a major evolutionary innovation and has been intensely studied on the level of the developmental mechanism and from an evolutionary perspective. Within the animal clade, there are three lineages with a clearly segmented organization: annelid worms, arthropods and chordates [1, 2]. 
There are both striking similarities and differences in the segmentation mechanism used by different species both between and within clades, making segmentation an ideal subject for evo-devo questions. In most segmented animals, segments are generated from a posterior growth zone and laid down in a regular anterior–posterior sequence. Sequential segmentation has been studied in most detail in vertebrates, where somites emanate sequentially from a posterior undifferentiated zone, the presomitic mesoderm (PSM), in which oscillatory gene expression occurs. A wavefront retreating across the PSM transforms this oscillatory gene expression into a spatially repeated pattern of segments (for review, see, e.g. [3]). Most arthropods appear to deploy a similar sequential segmentation mode although the molecular details underlying oscillations and the transformation to segments are still incompletely understood [4]. In addition to sequentially segmenting arthropods, amongst which the so-called short germband insects, also intermediate and long germband insects exist. These two types of insects pattern, respectively, their anterior segments or all their segments simultaneously, using a different developmental mechanism. While the segmentation process in annelids is also sequential, cell lineages with a different future fate are specified before segmentation through stereotyped divisions and appear to undergo distinct parallel sequential segmentation processes before fusing into segments [5]. Previous evo-devo simulation studies demonstrated that oscillation-driven sequential segmentation readily evolves out of an initial random gene regulatory network (not structured by prior evolution). References [6,7,8,9,10], provided that a posterior signalling centre has previously evolved [10]. These studies also showed that this type of segmentation mechanism should be expected to evolve due to its higher robustness and its greater ability to flexibly adjust segment numbers relative to alternative strategies. However, thus far, the potential conditions and selective pressures that cause differences in the more detailed aspects of sequential segmentation have remained unresolved. In vertebrates, the oscillatory nature of segment patterning was originally discovered from the observation of waves of gene expression traversing the unsegmented tissue [11]. These gene expression waves were shown to arise independently of cell–cell contact [11] and instead result from the gradual slowing down of oscillations before they arrest into segments [11,12,13,14]. Apart from these so-called kinematic waves, vertebrate segmentation is characterized by a complex regulatory network consisting of three coupled oscillator motifs involving the FGF, Wnt and Delta-Notch signalling pathways [15,16,17]. Kinematic waves have also been observed in sequentially segmenting arthropods, for example the centipede Strigamia [18, 19]. It has been suggested that oscillator slowing is a crucial part of the mechanism underlying the transition from oscillatory gene expression to segments [20, 21] or instead that it is an emergent property (a "side effect") of cell–cell signalling [22]. Additionally, it has been hypothesized that travelling waves enhance the robustness of the segmentation process [23]. Intriguingly, in both the chordate and arthropod lineages, variation exists in the extent of these travelling waves and the length of the undifferentiated region between the growth zone proper and the last-formed segment. 
For instance, in Amphioxus (a non-vertebrate chordate), segments are formed directly anterior to a small posterior zone, and no travelling wave dynamics have been reported thus far [24]. On a similar note, in the short-germ beetle Tribolium, travelling waves have been reported but the relative distance they travel before halting appears to be shorter than, for example, in Strigamia [4, 18]. When considering the genetic composition of the oscillator, Amphioxus does not seem to require FGF and also RA appears to be less involved than in vertebrates [25, 26]. This could potentially indicate a simpler oscillator architecture. Similarly, in Tribolium, so far only a simple negative feedback loop of pair-rule genes has been shown to underlie segment oscillations in the trunk [27], while in other sequentially segmenting insects this loop has not been identified, and possibly more complex mechanisms are at play [28]. One tempting possibility could thus be that more complex oscillators are correlated with and potentially responsible for more extensive kinematic waves. Alternatively, oscillator complexity may be related to mutational and developmental robustness and occur independent of kinematic waves. Finally, apparent oscillator simplicity in, for example, Amphioxus and Tribolium may merely reflect a lack of available data, and as a consequence the relation between kinematic waves and oscillator complexity is currently unclear. Here, we applied an evo-devo modelling framework to investigate under which conditions complex oscillator networks and travelling oscillator waves are likely to evolve, and to what extent they co-occur. Based on the observations outlined above, we speculate that differences in travelling wave dynamics could arise from the difference in relative size of the non-segmented zone between species, which are likely caused by differences in morphogen gradient lengths and slopes. We therefore vary the rate of morphogen decay to test the impact of gradient length scale and slope on the type of oscillatory segmentation that evolves. Since it is unclear to what extent oscillator complexity is necessary for either kinematic waves or developmental robustness, we also investigate the influence of gene expression noise on the phenotype resulting from evolution. To analyse large numbers of simulations more efficiently, we build an automated analysis pipeline to assess oscillator complexity and the occurrence of travelling waves. We find that shallow, long morphogen gradients often lead to the evolution of persistent oscillations, travelling waves and complex networks. In contrast, simulations with steep, short morphogen gradients resulted in slightly simpler networks and more often produced damped oscillators, while sequential segmentation evolved faster. Damped oscillators are more sensitive to perturbations and less easily allow for evolution of longer body axes containing more segments. Interestingly, gene expression noise increased the fraction of persistent oscillators under a steep gradient and also increased the fraction of travelling wave oscillators for both shallow and steep gradients. This suggests that in our model, evolution of oscillator slowing is enhanced by (indirect) selection for robustness. Surprisingly, we found that gene regulatory network complexity and oscillator slowing, both typical for vertebrate somitogenesis, did not evolve in a strongly correlated manner in our model. This implies that these properties may evolve separately. 
General set-up We use an individual-based model of a population of organisms evolving on a lattice, as has been applied before to evolution of segmentation and domains [8, 10] (Fig. 1a). Each organism has a so-called pearls-on-a-string genome consisting of genes (encoding transcription factors) and upstream regulatory regions with transcription factor binding sites (TFBS) [29]. Organisms also have a highly simplified multicellular body consisting of a one-dimensional row of cells. Instead of starting at full length as in previous models (for review, see [9]), organisms start out small and grow during the course of their development. The organisms reproduce in a fitness-dependent fashion, with fitness dependent on the number of segments pre-patterned by the final gene expression pattern in the row of cells. Importantly, since we explicitly select for segments, our modelling approach can not help answer why body axis segmentation evolved. However, no selective pressure is exerted on how segments should be generated, so evolution is free to evolve any mechanism capable of generating segments. Therefore, we can use our model to investigate how certain conditions influence what types of segmentation mechanisms evolve. Genome, network and genes The genome codes for a gene regulatory network. The genes in the genome form the nodes of the network; the set of TFBS upstream of each gene in the genome dictate the incoming regulatory edges of the GRN (Fig. 1a). Outgoing edges follow from genes matching the type of the TFBS in front of another gene. The regulatory interactions between genes can be repressive (strength \(-1\)) or activating (strength 1). The network governs gene expression dynamics and subsequent protein levels. Gene expression is modelled with ordinary differential equations as shown in Eq. 1: $$\begin{aligned} \frac{{\mathrm{d}}G_i}{{\mathrm{d}}t}={\mathrm{Max}}_{j=1}\left( \frac{A_j^n}{A_j^n+H^n}\right) *\Pi _{k=1}\left( \frac{H^n}{I_k^n+H^n} \right) *E-\delta *G_i \end{aligned}$$ Transcription of gene i is determined by the activating genes \(A_j\) (\(j=1\ldots l\)), where the activator with the highest activating input (as given by \(\frac{A_j^n}{A_j^n+H^n}\)) determines the overall activation, resulting in a so-called activating OR gate. Repressive inputs \(I_k\) (\(k=1\ldots m\)) are multiplied, resulting in a repressive AND gate (l and m are the total number of activating and repressing inputs for gene i). It should be noted that these choices are somewhat arbitrary, as for both activating and repressive TFs, AND as well as OR or even different types of integration have been reported. The main goal here is to incorporate at least partially the highly complex, nonlinear integration of TF inputs into gene expression levels. E is the maximum expression level; \(\delta\) is the degradation rate; H is a Hill constant, the transcription factor concentration level at which half-maximal activation or repression occurs; and n is the Hill coefficient governing the steepness of the transition from low to high gene expression depending on transcription factor concentrations. Overview of the model. a The developing organisms live on a 2D lattice. Each individual organism consists of a row of cells, of which the posterior-most cell divides at regular intervals. Within the growth zone, the morphogen (in blue) is maintained at a high concentration; it decays in cells outside of this zone. 
The genome of the organism codes for a network of regulatory interactions, which determines the spatio-temporal dynamics of the proteins within each cell (see d). b The gradients resulting from the different morphogen decay rates (d) used in our simulations. The lambda indicates the position (or time) at which the morphogen concentration is half-maximal, i.e. 50: \(\lambda ={\mathrm{ln}}(2)/d\). c The initial conditions for each new individual at the start of its development. There is a growth zone with high morphogen, and a "head" region without morphogen. d At the end of development, the expression of the segmentation gene is averaged over a number of time steps, and from this the segment boundaries are determined. e The mutational operators acting on the genome There are 16 types of genes, indicated with a number from 0 to 15. Gene 0 encodes the morphogen It is not regulated by any of the other gene products, but instead is set to high expression in the cells of the growth zone, while decaying with a predefined rate in the rest of the embryo (Fig. 1b). We run simulations with either a large or a small morphogen decay rate, yielding a steep or a shallow morphogen gradient, respectively. Gene 5 encodes the segmentation protein, whose final expression pattern after development determines the number of segments formed and hence the fitness of the organism. Gene expression noise In a subset of simulations, we implemented gene expression noise as follows. First, we computed the expected gene expression rates based on the first part of Eq. 1. Next, we computed the actual gene expression rate by sampling from a Gaussian distribution around the expected gene expression rate. Specifically, we assume a Gaussian distribution with a mean equal to the computed expected gene expression rate \(R_{\mathrm{expr}}\) (\(\mu =R_{\mathrm{expr}}\), \(\sigma =l*R_{\mathrm{epxr}}\)), where l in the standard deviation \(\sigma\) determines the overall level of noise (low: \(l=0.07\), medium: \(l=0.14\), high: \(l=0.21\)). Note that by scaling the standard deviation with the mean, the noise which is defined as the standard deviation divided by the mean, is kept constant independent of the mean gene expression rate. We avoid negative gene expression rates by capping any negative gene expression rates due to noise to zero: \(R_{\mathrm{actual}}={\mathrm{Max}}(0,R_{\mathrm{expr}}+{\hbox {noise}})\). Developmental dynamics Individuals start their development with a short row of 14 cells, where five cells form the primordial "growth zone" in which the morphogen concentration is high; in the remaining nine cells (the "head"), the morphogen is absent (Fig. 1c). The other genes have an expression level of 0 in all cells. This means that no gene expression will occur in the anterior-most nine cells. We ignore the developmental processes generating the head part of the body and their evolution and focus solely on the developmental processes governing formation of more posterior body parts and their evolutionary history. The posterior-most cell of the growth zone divides at regular intervals, pushing the other cells forward so that they eventually move out of this zone. Once a cell leaves the growth zone, the morphogen protein starts decaying. As a result, a gradient of the morphogen is formed due to the age difference of the cells (Fig. 1a, b). (The four cells in the growth zone that do not divide are there for cosmetic reasons; it makes it easier to see the dynamics in the growth zone on a time–space plot.) 
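As a concrete illustration of the regulatory update described above, the sketch below implements the production term of Eq. 1 (an OR gate over activators, an AND gate over repressors) together with the multiplicative Gaussian noise scheme, for a single cell advanced with a simple explicit-Euler step. The wiring, parameter values and integration scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def hill_act(x, H=0.5, n=4):
    return x**n / (x**n + H**n)

def hill_rep(x, H=0.5, n=4):
    return H**n / (x**n + H**n)

def expression_rate(G, activators, repressors, E=1.0, noise_level=0.0):
    """Production term of Eq. 1: OR gate over activators, AND gate over repressors."""
    act = max((hill_act(G[j]) for j in activators), default=0.0)
    rep = 1.0
    for k in repressors:
        rep *= hill_rep(G[k])
    rate = E * act * rep
    if noise_level > 0.0:
        # multiplicative Gaussian noise: sigma = l * mean, negative rates capped at 0
        rate = max(0.0, rng.normal(rate, noise_level * rate))
    return rate

def euler_step(G, wiring, delta=0.2, dt=0.05, noise_level=0.07):
    """One explicit-Euler update of all proteins in a single cell."""
    new = G.copy()
    for gene, (acts, reps) in wiring.items():
        new[gene] = G[gene] + dt * (expression_rate(G, acts, reps, noise_level=noise_level)
                                    - delta * G[gene])
    return new

# hypothetical 4-node wiring: morphogen (index 0, clamped high) activates gene 1,
# 1 activates 2, 2 activates 3, and 3 represses 1 -- a delayed negative feedback loop
wiring = {1: ([0], [3]), 2: ([1], []), 3: ([2], [])}
state = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}
for _ in range(2000):
    state = euler_step(state, wiring)
    state[0] = 1.0          # morphogen held high, as in the growth zone
print({g: round(v, 3) for g, v in state.items()})
```

In the full model this update runs in every cell of the growing row, with gene 0 clamped to a high value inside the growth zone and decaying with the chosen rate once a cell has left it.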
Throughout development, the concentrations of the other proteins (i.e. all except the morphogen protein) are updated according to the genetically specified network interactions (Eq. 1). The posterior cells stop dividing after 120 divisions (600 steps), after which developmental dynamics continue for another 600 time steps (see also Table 1) so that also the youngest cells reach a low morphogen concentration and can converge on a stable gene expression pattern. Table 1 parameter values Explanation of the Fourier analysis procedure. a We run the evolved network for 1800 steps with several, fixed concentrations of the morphogen. For every gene, we take the Fourier transform of the temporal gene expression dynamics to find the gene's oscillation frequency for that particular morphogen concentration. We plot the Fourier transform data of all concentrations together in one heat map, where the colour intensity represents the amplitude at every frequency for every concentration. See also c for a "real-life" example. b For the network run at the highest morphogen concentration (representing the growth zone), we also perform a sliding-window analysis: here, we take subsets of the time series generated as in a and apply the Fourier transform to every window to visualize the change in frequency and amplitude over time in the growth zone. The rest of the procedure is the same as in a. c Examples of frequency profiles from real simulations. The plots in the left column are generated as explained in a, and those on the right as in b Fitness evaluation By the end of development, the expression pattern of the segmentation gene is evaluated to determine the number of segments formed outside the growth zone (Fig. 1d). Segments should be at least seven cells wide, and boundaries between segments should consist of a clear transition of the expression of the segmentation gene from a high to a low level, or vice versa, within five cells (similar to earlier definitions [6, 8]). Given that the tissue grows out to be 134 cells, of which nine form the head segment and five form the growth zone, the maximum number of segments that can be formed is 18. The number of well-formed segments (i.e. fulfilling the above requirements) determines an individual's fitness. In addition, some penalties are applied. First, we require that at least one gene of each type is present in the genome; if this requirement is not met, the individual is not allowed to reproduce. Second, too-narrow segments are penalized. Third, small fitness penalties are used for gene and TFBS numbers in order to prevent excessive genome growth. Finally, when determining the number of segments, rather than considering the expression of the segmentation gene at the last time step of development, we average expression of the segmentation gene over the last 100 developmental steps. This averaging helps ensure temporally stable segmental patterning, as it will not reward oscillatory segmentation that fails to converge on a constant spatial pattern. To further ensure stability of the final developmental pattern, we apply an additional fitness penalty on the number of cells which have high variance in their gene expression over time, indicating pattern instability within these final 100 developmental steps. 
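As an illustration of the segment-scoring step just described, the sketch below thresholds the time-averaged segmentation-gene profile and counts contiguous blocks, crediting only blocks at least seven cells wide and tallying narrower blocks for the penalty term. The ON/OFF threshold is an assumed value and the five-cell sharpness test on boundaries is omitted for brevity; the fitness function that combines these counts with the other penalties follows directly below in the text.

```python
import numpy as np

def count_segments(mean_expr, min_width=7, high=0.5):
    """Count wide and too-narrow blocks in the time-averaged segmentation-gene pattern.

    mean_expr: 1D array of expression averaged over the last developmental steps,
    excluding head and growth-zone cells. 'high' is an assumed ON/OFF threshold.
    """
    state = mean_expr > high                      # binarize the pattern
    good, narrow, start = 0, 0, 0
    for i in range(1, len(state) + 1):
        if i == len(state) or state[i] != state[start]:
            width = i - start
            if width >= min_width:
                good += 1
            else:
                narrow += 1
            start = i
    return good, narrow

# toy pattern: four wide blocks and one too-narrow block
pattern = np.concatenate([np.ones(8), np.zeros(9), np.ones(10), np.zeros(3), np.ones(8)])
print(count_segments(pattern))   # -> (4, 1)
```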
The fitness then becomes \({\mathrm{e}}^{{\mathrm{max}}(0,F)}-1\), where F is: $$\begin{aligned} \begin{aligned} F&={\text {nr good segments}}\\&\quad - {\text {nr narrow segments}}\\&\quad - G*{\text {gene nr}}\\&\quad - T*{\text {TFBS nr}}\\&\quad - U*{\text {nr unstable cells}} \end{aligned} \end{aligned}$$ See Table 1 for parameter values. Summary of simulation results. a Examples of the resulting space–time plots from an individual at the end of a simulation. The posterior growth zone on the right is anchored and the other cells shift position when the tissue grows. The colour reflects the cell type, which is determined by the precise combination of expression levels of all genes within a cell. Note the regular alternation of gene expression in the posterior growth zone. b Left: a simplified representation of the gene regulatory networks that evolve in our simulations; right; an example of an evolved network (pruned, see "Methods"). The clock that generates gene expression oscillations is indicated in blue, the bistable switch in red. c Frequency profile of the segmentation gene (wave and constant frequency profiles) or the strongest oscillating gene (damped profile) in three simulations with a shallow gradient. d Snapshots of the tissue-level gene expression, corresponding to the profiles in c (blue is high expression, white is low). The anterior ends (indicated by the black bars) are aligned for greater clarity. The pictures are taken 12 steps apart Initial conditions, mutations and simulations The population is initialized with 50 identical individuals. The population resides on a lattice of size \(30\times 30\), imposing an upper boundary of 900 individuals to the population size. The genome of the initial individuals contains a single copy of each gene, in randomized order and with an average of two TFBS of random type upstream. Individuals compete in a local \(7\times 7\) neighbourhood for the opportunity to reproduce into an empty spot. As mentioned before, local competition is more computationally efficient than all-against-all fitness comparisons and better reflects the natural situation. An individual's chance to reproduce is proportional to its fitness divided by the sum over the fitness values of itself and the other individuals neighbouring the empty position: \(P_i=\frac{f_i}{\sum _{j=1}^{nb}f_j}\). Death occurs with a constant probability d, and individuals move on the lattice via Margolus diffusion (two diffusion steps, one of each partition, per update step). Upon reproduction, the genome is mutated via duplications and deletions of TFBS and genes (including upstream TFBS), with a per-element probability (Fig. 1e). TFBS may also mutate their type (which gene product binds) and weight (activating or repressing), and new TFBS may appear de novo as an innovation. Gene duplication results in multiple genes of the same type that together determine the concentration of a single protein. Note that since we do not include mutations that change gene type, gene duplication cannot be followed by subsequent divergence. In order to simplify our model and decrease the number of different mutation rates in our simulations, we do not evolve maximum gene expression rates, protein decay rates or TF activation and deactivation thresholds (parameters E, D and H in Eq. 1) similar to the approach taken in [8]. Analysis pipeline It is highly non-trivial to derive the patterning strategy of an evolved network merely by looking at network architecture. 
Even for small networks evolved to the simple task of patterning a single stripe along the body axis, identical network architectures may lead to different patterning dynamics for different regulatory interaction strengths [30]. Additionally, patterning outcomes will depend on details of how transcription factor input is integrated, for example whether multiple activating transcription factors need to be simultaneously present (a logical AND gate), or rather that a single one suffices (a logical OR gate) to induce the downstream gene. Thus, to identify the patterning strategy, one needs to simulate the dynamics of gene expression resulting from the network, parameter settings and transcription factor integration. For small networks, it may still be feasible to determine the patterning strategy by examining the expression dynamics of individual genes; this strategy, however, will not provide a solution for larger networks evolved towards more complex patterning tasks, such as the one considered here. Previously, mostly individual case studies (selected from larger sets of evolutionary outcomes) were used to unravel the evolved developmental mechanism, analysing only a few network architectures and their gene expression dynamics in detail [6, 8, 10, 20, 31, 32]. However, if we aim to study the circumstances that drive evolution of complex oscillator networks and/or of sloped oscillatory frequency gradients, large numbers of simulation outcomes need to be assessed. Detailed manual analysis of each individual simulation outcome would be prohibitively slow. Furthermore, a different type of approach is needed to determine the nature of the evolved segmentation oscillator, i.e. whether it generates damped or persistent oscillations, and whether oscillation amplitude or period changes gradually or abruptly as a function of morphogen concentration. Therefore, we developed an automated analysis pipeline that can determine measures of network complexity and oscillatory frequency profiles for large numbers of simulations. This pipeline assesses for each individual simulation the size of the genome and complexity of the gene regulatory network: the genome is pruned beforehand to remove redundant elements and obtain the core network responsible for patterning. The evolved gene expression dynamics are assessed with Fourier analysis, to reveal the oscillatory dynamics at various points in the tissue. Our pipeline starts by extracting from each simulation the genome of a single fit individual present in the population at the end of evolution. Because an evolved genome consists partly of redundant interactions, we first prune the genomes via a repeated process of trying to remove genes and binding sites in the genome, while keeping the final spatial expression pattern of the segmentation gene the same [8]. We will refer to these pruned genomes and networks as core genomes and networks, as they embody the essential core necessary to generate the segmentation pattern. To obtain measures for the complexity of the evolved networks, we determine genome size (number of genes and TFBS), the number of regulatory loops present in the network encoded by the genome, the size (nr of genes) of these loops and the number of positive and negative feedback loops. All measures are obtained for the core genomes and networks. 
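Once a pruned core network is available as a signed directed graph, the loop statistics described above can be collected with standard graph tooling. The sketch below uses networkx on a small made-up network; the gene names and wiring are hypothetical, not an evolved genome.

```python
import networkx as nx
import numpy as np

def feedback_loops(edges):
    """edges: list of (source_gene, target_gene, sign) with sign = +1 or -1.

    Returns the positive and negative feedback loops as tuples of gene names.
    """
    g = nx.DiGraph()
    for src, tgt, sign in edges:
        g.add_edge(src, tgt, sign=sign)
    pos, neg = [], []
    for cycle in nx.simple_cycles(g):
        signs = [g[cycle[i]][cycle[(i + 1) % len(cycle)]]["sign"] for i in range(len(cycle))]
        (pos if np.prod(signs) > 0 else neg).append(tuple(cycle))
    return pos, neg

# hypothetical pruned core: a 3-gene negative loop plus a self-activating switch gene
edges = [("g1", "g2", +1), ("g2", "g3", +1), ("g3", "g1", -1), ("g5", "g5", +1)]
pos, neg = feedback_loops(edges)
print("positive loops:", pos)                       # the self-activation on g5
print("negative loops:", neg)                       # the g1 -> g2 -> g3 -| g1 loop
print("loop sizes:", sorted(len(c) for c in pos + neg))
```

The loop-size histogram (cf. Fig. 4d) can be built from the same cycle list.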
Fourier frequency profile analysis Since the model incorporates posterior growth, we expect a significant part of the evolutionary runs to evolve sequential segmentation, where temporal gene expression oscillations are translated into a spatial segment pattern [10]. To determine the precise nature of the oscillations, we apply a fast Fourier transform (FFT, C library fftw3.h) to the gene expression dynamics and quantify how the amplitude and frequency of oscillations change as a function of morphogen concentration. Since each cell leaving the posterior growth zone experiences the same morphogen decay, such an analysis will reveal both the temporal oscillation dynamics of an individual cell and the spatial oscillation profile across the tissue at a single time point. This method will therefore allow us to determine whether, in case of persistent oscillations, a sloped frequency profile is present and kinematic oscillation waves are to be expected. In principle, one could apply Fourier analysis directly to the gene expression dynamics of a cell as it leaves the growth zone and experiences morphogen decay. However, cells leaving the growth zone undergo only few oscillations in a short amount of time, and there are only a limited number of timepoints per individual morphogen concentration level. This makes it hard to extract the precise oscillatory dynamics as a function of morphogen concentration, especially when the morphogen decays rapidly. Furthermore, such an analysis would not be able to distinguish whether, at any given morphogen concentration, oscillations are stable or damped. Therefore, we decided to obtain longer time series of gene expression by running the evolved networks multiple times, each time with a different but constant morphogen concentration, using a linear set of concentration levels occurring along the morphogen gradient (Fig. 2a). This ensures that the same amount of data and detail is available for oscillators evolved under fast and slow morphogen decay. After developing this series of gene expression dynamics for different morphogen concentrations, we apply a Fourier analysis for each individual gene for each of these different time series (Fig. 2a). Subsequently, we select the gene oscillating with the largest amplitude. For this gene, we then plot the frequency distributions (amplitude per frequency) for each morphogen concentration next to each other in a 2D heat map, creating the so-called frequency profile (Fig. 2a). We give examples of the resulting plots in Fig. 2c, first column. Note how the frequency of the oscillations may or may not change with the morphogen concentration. A side effect of using this Fourier analysis is that, in addition to detecting the frequency of the genetic oscillator as the dominant mode, it also detects one or more so-called eigenmodes of this frequency, as can be clearly seen in Fig. 2c, second row. These eigenmodes have no particular biological meaning. We also investigate whether the frequency or the amplitude of oscillations changes within the growth zone, and whether oscillations are damped or persistent. To do so, we apply Fourier analysis to different subsections of the time series for the high morphogen concentration occurring in the growth zone (Fig. 2b). The procedure for making the frequency profile heat map remains the same, but now the x-axis represents developmental time rather than morphogen concentration. Examples can be found in Fig. 2c, second column. 
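The procedure above maps naturally onto a few lines of NumPy: simulate the network at each fixed morphogen level, take the FFT of the resulting time series, and stack the spectra into a (morphogen level × frequency) array; a second pass applies the FFT to sliding windows of the growth-zone series. In the hedged sketch below, `simulate_gene` is a stand-in toy signal rather than an evolved network, and the window sizes and other settings are arbitrary choices.

```python
import numpy as np

def simulate_gene(morphogen, n_steps=1800, dt=1.0):
    """Stand-in for running an evolved network at a fixed morphogen level:
    a damped sine whose frequency and damping depend on the morphogen,
    purely to illustrate the analysis."""
    t = np.arange(n_steps) * dt
    freq = 0.02 + 0.03 * morphogen                    # oscillations speed up with morphogen
    damping = np.exp(-t * 0.002 * (1.0 - morphogen))  # stronger damping at low morphogen
    return damping * np.sin(2 * np.pi * freq * t)

def frequency_profile(morphogen_levels, n_steps=1800):
    """Rows: morphogen levels (low to high), columns: FFT amplitude per frequency bin."""
    spectra = []
    for m in morphogen_levels:
        x = simulate_gene(m, n_steps)
        spectra.append(np.abs(np.fft.rfft(x - x.mean())))
    freqs = np.fft.rfftfreq(n_steps, d=1.0)
    return freqs, np.array(spectra)

def sliding_window_profile(x, window=300, step=100):
    """Growth-zone analysis: FFT of successive windows of one long time series."""
    windows = [x[i:i + window] for i in range(0, len(x) - window + 1, step)]
    return np.array([np.abs(np.fft.rfft(w - w.mean())) for w in windows])

levels = np.linspace(0.05, 1.0, 20)
freqs, profile = frequency_profile(levels)
dominant = freqs[profile.argmax(axis=1)]
print("dominant frequency vs morphogen:", np.round(dominant, 3))
```

Plotting `profile` (and the sliding-window array) as heat maps gives frequency-profile figures of the kind shown in Fig. 2c.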
Oscillator classification To compare the evolutionary outcomes under different morphogen decay rates, gene expression noise levels and cell–cell signalling, we would like to classify the obtained frequency profiles into the three different categories illustrated in Fig. 2c. First, we distinguish between damped and persistent oscillators depending on the Fourier profile obtained from the growth zone. This is done by simple visual inspection of the profile, determining whether or not oscillations of nonzero amplitude persist throughout the time window. Next, within the category of persistent oscillators, we determine whether a frequency profile is constant across the morphogen gradient or rather has a sloped appearance, which is indicative of oscillations slowing down as morphogen levels decrease. This classification was formalized as follows: we measure the maximum oscillatory frequency occurring for the high morphogen concentrations in the posterior as well as the minimum frequency of the oscillations just prior to the ceasing of oscillations. Next, we determine the difference between these oscillation frequencies, indicating the extent of oscillator slowing across the morphogen gradient. We choose a particular threshold value for this frequency difference (0.02). For frequency differences larger than this threshold, we classify the oscillator as one with a sloped frequency profile, and for smaller frequency differences, we denote it as an oscillator with an approximately constant frequency profile. General evolutionary outcomes We started with two sets of 60 simulations: one with a low and one with a high morphogen decay rate, leading to shallow and steep gradients, respectively. Nearly all simulations resulted in the evolution of a tissue pattern with ten or more segments, where ten is the threshold we use to classify a simulation as successful (59 of 60 simulations with a shallow gradient, 60 out of 60 simulations with a steep gradient were successful). Of these successful simulations, the maximum number of 18 segments evolved in ten shallow-gradient and 11 steep-gradient simulations. Typical space–time plots for both kinds of gradient are shown in Fig. 3a. All mechanisms that evolved in our simulations use gene expression oscillations (a 'clock') coupled to a bistable switch to generate segments sequentially, which is in line with our previous studies [8, 10]. Due to the nonlinearity of gene expression regulation in our model, a positive feedback on the segmentation gene allows for bistability to stably maintain either high or low expression of this gene. In contrast, negative feedback loops can generate the oscillations of the clock, provided that there is sufficient delay between upregulation and inhibition [33]. Because we did not include the evolution of protein decay rates or expression levels, in our model evolution generates the necessary delays by connecting a series of genes into a negative feedback loop. The networks evolved in our simulation typically contain multiple interconnected negative feedback loops, with one or more of them connected to the bistability motif. Typically, one or more of the negative feedback loops are regulated by the morphogen (Fig. 3b). While the morphogen concentration is high, the network keeps oscillating between two regions that form the future basins of attraction of two unstable states formed by the bistable switch (Additional file 1: Fig. S1A). 
When morphogen concentrations drop, oscillations terminate, the two states become stable and the network converges to either the high- or low-segmentation gene expression state, depending on the phase of the cycle at which oscillations stopped. Thus, the bistability allows for a translation of oscillations into a stable segmented gene expression pattern. This structure is similar to the mechanisms that evolved in [8, 10], although the pruned networks tend to remain somewhat larger in our current model. Variations on this general theme do occur; for example, the inhibition by the morphogen may be indirect, or the segmentation gene and the genes in the positive feedback loop may be part of a negative feedback loop of the oscillator (Additional file 2: Fig. S2). Still, the overall mechanism always seems to use morphogen-dependent oscillations and translates them into a stable segmentation pattern with a bistable switch. Classifying evolved gene expression dynamics with Fourier analysis We next assessed whether Fourier analysis would allow us to distinguish differences in the evolved gene expression dynamics of individuals from different simulations—despite the similar gene network structure. In short, we assessed how the frequency of oscillations changes when cells exit the growth zone. We found that the gene expression dynamics could be classified into roughly three different categories, which display qualitatively different frequency profiles (Fig. 3c). In the first column, the computed frequency profile clearly shows a slope, implying the occurrence of slower oscillations for lower morphogen concentrations (we call this a sloped frequency profile). In the snapshots of the segmentation gene expression that occurs during in silico development (Fig. 3d), we indeed see that every segment starts as a travelling wave from the posterior and becomes narrower and more strongly expressed as it arrives at the anterior. Thus, a sloped frequency profile corresponds to travelling waves across the tissue, much like those observed in vertebrate development. In contrast, the individual used as an example in the middle column of Fig. 3c has a constant frequency profile, implying that oscillations have a constant frequency for a range of morphogen concentrations and then suddenly cease for lower morphogen concentrations (a constant frequency profile). The corresponding snapshots in Fig. 3d (centre) show that indeed, most of the tissue oscillates synchronously and that only the anterior end shows a minor deviation of these dynamics immediately prior to segment stabilization. Based on our frequency plot, we can deduce that in this small region, the cells are already in a non-oscillatory regime, converging towards one of the two stable states that allow for a segmented pattern. Note that this is different from the individual with travelling waves in the left column, where the anterior tissue that is out of sync with the posterior end is in a regime of sustained but slower oscillations. Finally, in the right column of Fig. 3c, we display an individual whose frequency profile only shows oscillatory dynamics for the high morphogen concentrations that occur in the posterior growth zone. An analysis of the temporal dynamics of these growth zone oscillations (Fig. 3c) reveals that they are damped, reducing their amplitude over time (a damped frequency profile). This is confirmed by the snapshots of gene expression dynamics, which show a clear decrease in oscillation amplitude in the growth zone (Fig. 3d). 
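The three categories just illustrated can be assigned automatically from the two Fourier products of the analysis pipeline: the growth-zone sliding-window spectra decide damped versus persistent, and the frequency change across morphogen levels (with the 0.02 threshold given in the Methods) separates sloped from constant profiles. A sketch, continuing the arrays from the earlier Fourier example; the amplitude criterion used for "damped" is an assumption rather than the authors' exact rule.

```python
import numpy as np

def classify_profile(freqs, profile, growth_zone_windows,
                     freq_threshold=0.02, amp_threshold=0.1):
    """Classify an oscillator as 'damped', 'constant' or 'sloped'.

    profile: 2D array (morphogen level x frequency bin), rows ordered low to high;
    growth_zone_windows: 2D array (time window x frequency bin) from the
    sliding-window analysis at the growth-zone morphogen level.
    """
    peak_amp = growth_zone_windows.max(axis=1)
    if peak_amp[-1] < amp_threshold * peak_amp[0]:   # amplitude dies out over time
        return "damped"
    dominant = freqs[profile.argmax(axis=1)]
    oscillating = profile.max(axis=1) > amp_threshold * profile.max()
    f_high = dominant[oscillating][-1]   # highest morphogen level
    f_low = dominant[oscillating][0]     # lowest morphogen level that still oscillates
    return "sloped" if abs(f_high - f_low) > freq_threshold else "constant"

# e.g., continuing the earlier Fourier sketch:
#   gz = sliding_window_profile(simulate_gene(1.0))
#   print(classify_profile(freqs, profile, gz))
```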
In all three cases illustrated above, there is a clear correspondence between the developmental gene expression dynamics as suggested by the computed Fourier frequency profile and the actual observed developmental dynamics. We therefore conclude that the Fourier frequency analysis is a useful tool for distinguishing differences in the oscillatory dynamics produced by evolved networks. Note that while the above examples are easily distinguishable, clear-cut cases, not all evolved mechanisms generate frequency profiles that are easy to interpret or fall into these three clear categories. Some profiles have a very modest slope; in other cases, oscillations extend beyond the growth zone but for only a limited part of the entire morphogen concentration range; and in yet other cases, oscillations may be damped for the high morphogen concentrations in the growth zone yet persistent for a range of lower concentrations (Additional file 3: Fig. S3). Still, also for these more complicated cases, the frequency profile reliably reflects the actual oscillatory developmental dynamics. Shallow gradients promote sustained oscillations and travelling waves To test how the length and slope of the morphogen gradient influence the evolution of segmentation, we next compared segmentation mechanisms evolved under high versus low morphogen decay rates. The two space–time plots shown earlier in Fig. 3a illustrate that the spatio-temporal transient during which cells are outside the growth zone but have not yet formed a segment, is considerably longer for shallow morphogen gradients than for steep gradients. We measured at which morphogen concentration oscillations cease and a stable stripe pattern is formed, the so-called freeze point, and found that under a shallow gradient, higher freeze points evolve (Fig. 4a). However, the position of this higher freeze point in the tissues with a shallow gradient is still further away from the growth zone than the position of the near-zero freeze point in the tissues with a steep gradient. The higher freeze point therefore only partially compensates for the longer time and distance required for morphogen decay. The question is whether this spatio-temporally extended transient—and the accompanying freeze point shift—has evolutionary consequences in terms of network complexity and the types of oscillatory dynamics that evolve. Comparison of genome, network and oscillatory dynamics properties. a Boxplot of the morphogen level at which individuals reach a stable expression (after the transition from the oscillatory to the non-oscillatory regime). b Violin plots (vertical histogram) of the number of genes and transcription factor binding sites (TFBS) in the pruned genomes of shallow-gradient (dark) and steep-gradient (light) simulations. Dots indicate the median value. Mann–Whitney U test between shallow and steep: genes, \(p=0.006\); TFBS, \(p=0.0007\). After removing the 14 largest genomes from both sets: genes, \(p=0.003\); TFBS, \(p=0.0001\) (corrected for ties with jitter). c Violin plots of the number of positive and negative feedback loops in the pruned networks of the shallow- and steep-gradient simulations. (MW test: pos.FBL, \(p=0.005\); neg.FBL, \(p=0.0003\). After removing 14 genomes with most loops: pos.FBL, \(p=0.008\); neg.FBL, \(p=0.0001\)). d Histogram of the number of loops (FFL and FBL) of a certain size. All histograms of individual simulations have been summed for this average histogram. 
e Histogram displaying for all successful individuals their frequency difference between oscillations in the growth zone and at the end of the profile, before sustained oscillations cease. (see indication in the profile on the left: a nice example of a strongly sloped frequency profile with a large difference). Profiles to the right of the red line are classified as "sloped" in Table 2. Note that the damped oscillators are grouped in the bin with 0.0 frequency difference. Bin size: 0.01 Table 2 Prevalence of frequency profiles To investigate this, we deployed our analysis pipeline to dissect genome and network complexity and details of the oscillation dynamics. We found that under shallow gradients, individuals evolve that have somewhat larger core genomes and networks (a small but significant difference), especially because of a larger number of TFBS (Fig. 4b). The networks evolved under shallow gradients also contain significantly more feedback loops, in particular the negative FBLs needed to construct an oscillator (Fig. 4c), and these loops tend to be larger (Fig. 4d). The variability between individual evolutionary trajectories with a shallow gradient is large: the increase in average loop number and size for the simulations with a shallow gradient is exacerbated by a subset of 14 simulations (out of a total of 59) which have more than 20 negative feedback loops. These simulations also have the largest genomes (Additional file 4: Fig. S4). Still, differences in genome size and feedback loops remained significant when we compared the two sets after removing the 14 simulations from both (see legend Fig. 4). When we classify the oscillatory dynamics of all simulations into the three broad categories of Fig. 3, the simulation set with a shallow gradient has a lower fraction of profiles with damped oscillations (shallow: 0.05 vs. steep: 0.23) and a higher fraction of sloped frequency profiles (0.27 vs. 0.12, Table 2), while the two sets contain a similar number of simulations with a constant frequency profile (0.61 vs. 0.60). To test the robustness of these results, we also measured the frequency difference within a profile (Fig. 4e), rather than categorizing the profiles using somewhat-arbitrary cut-offs to distinguish sloped from constant profiles. The distribution of these frequency differences makes it clear that not only do shallower gradients more often lead to the evolution of a sloped profile, but they also tend to evolve a slightly higher frequency difference across their profile (Fig. 4e). Gradient steepness influences evolutionary innovation speed We established that the steepness of the morphogen gradient influences both the type of oscillations that evolves and the network that generates these oscillations. Next, we investigated whether this difference in final evolutionary outcome is reflected by differences in the evolutionary trajectories leading up to these outcomes. We find that under a steep gradient, individuals with more than ten segments arise very early in evolution (Fig. 5a). In contrast, with a shallow gradient, the evolution of individuals with ten or more segments frequently required a much longer evolutionary time span. Much of this time, these evolutionary trajectories are either searching for or stuck in a primitive, two-segment stage, where the entire tissue that is generated by the growth zone expresses the segmentation gene while the head does not (Fig. 5b, c). 
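The between-condition comparisons reported here and in the Fig. 4 legend (profile-type fractions per condition, and Mann–Whitney U tests on genome size with tiny jitter added to break ties between integer counts) can be computed from per-simulation summary tables roughly as follows; the numbers below are invented for illustration and are not the actual simulation results.

```python
import numpy as np
from collections import Counter
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# invented per-simulation summaries (core gene counts and profile classes)
genes_shallow = np.array([14, 16, 13, 18, 15, 17, 16, 14, 19, 15])
genes_steep   = np.array([12, 13, 11, 14, 12, 15, 13, 12, 14, 11])
classes_shallow = ["sloped"] * 3 + ["constant"] * 6 + ["damped"] * 1
classes_steep   = ["sloped"] * 1 + ["constant"] * 6 + ["damped"] * 3

# fractions of each profile type per condition (cf. Table 2)
for name, cls in [("shallow", classes_shallow), ("steep", classes_steep)]:
    counts = Counter(cls)
    print(name, {k: round(v / len(cls), 2) for k, v in counts.items()})

# Mann-Whitney U test on genome size; tiny jitter breaks ties between integer counts
jitter = lambda x: x + rng.uniform(-1e-6, 1e-6, size=len(x))
stat, p = mannwhitneyu(jitter(genes_shallow), jitter(genes_steep), alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```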
These data indicate that it can be considerably harder for evolution to discover a segmentation pattern under a shallow gradient. A shallow gradient takes longer to find a solution. a Histogram of the number of generations it took for simulations to make ten or more stripes. Bin \(\hbox {size}=100\). b The waiting time until individuals with two or more stripes appear in the simulation. Bin \(\hbox {size}=25\). c The number of generations each simulation spent with only two stripes (see space–time plot). Note that the first bin includes those individuals which immediately find more than two stripes. Bin \(\hbox {size}=50\) To further investigate this difference, we removed the "head" from the initial tissue (see Fig. 1c). As discussed in the Methods section, the head region is the part of the tissue in which the morphogen gradient is absent and no gene expression occurs. As a segment boundary is defined as the transition from low to high expression of the segmentation gene or vice versa, simply expressing the segmentation gene in the non-head part of the tissue thus suffices to generate the first segment. Removing the head region will make it harder for evolution to discover the first segment and may therefore in some cases make it impossible to evolve segments. The rate of success of evolutionary simulations indeed decreases significantly in the absence of a head region and considerably more so for shallow than steep morphogen gradients. Only 28 out of 60 simulations find a solution for a shallow gradient, while 51 out of 60 simulations evolve a segmented pattern with at least ten segments for a steep gradient. This further supports our observation that a segmented body pattern evolves more easily for steep morphogen gradients. Evolved segmentation mechanisms adapt easily to a different morphogen gradient Having established that both final properties and evolutionary trajectories differ for segmentation mechanisms evolved under shallow or steep gradients, we next asked whether these differences are functionally relevant. To assess this, we extracted successful individuals evolved under a steep or shallow morphogen gradient and let them continue evolution in the presence of a morphogen gradient of the opposite steepness. For a transition from a shallow to a steep morphogen gradient, 22 out of 59 simulations are immediately able to generate more than three segments (Fig. 6a). In contrast, for the transition from a steep to a shallow gradient, only six out of 60 simulations can directly generate more than three segments (Fig. 6a). Still, in both cases evolution generally needs fewer than 30 generations to come to a new solution with a similar number of segments as before the transition. For the steep to shallow transition, three simulations needed more than 1000 time steps to restore their prior segmentation pattern. Switching to another decay rate reveals functionality of evolved differences. a Heat map of the number of segments (ratio original number of the transplanted individual/current nr of segments maximum fit individual) after switching the decay rate of all individuals. Dots indicate average ratio. b Violin plot of the difference in the number of genes and TFBS in the pruned genome between the start and end of the simulation. c Scatterplot with the frequency difference (see Fig. 4d) of the Fourier profile at the start and the end of the decay-switch run. 
Light is from steep to shallow, dark from shallow to steep We conclude that a segmentation strategy evolved under one type of morphogen gradient is not automatically fully functional under the other type of morphogen gradient but requires evolutionary adaptation. Although this evolutionary adaptation occurs rapidly and readily, the need for it suggests that functional differences exist between segmentation mechanisms evolved under different morphogen gradient types. To investigate this, we looked at the difference in (pruned) genome size between the original individuals and an individuals at the end of the evolutionary transition simulation. We observe that for a transition from a shallow to a steep gradient, genome size is more likely to decrease, while for a transition from a steep to a shallow gradient, genome size is more likely to increase (Fig. 6b). Although the observed differences are small, they are in line with the differences in genome size we showed in Fig. 4. Additionally, in Fig. 6c, we illustrate that the frequency profile also changes in accordance with our earlier results. For the evolutionary transition from a shallow to a steep gradient, the slope of the frequency profile is slightly more likely to decrease (27 decrease, 20 increase) and the number of damped oscillators increases (from 4 to 18). For the opposite evolutionary transition, the slope of the frequency profile is more likely to increase (32 increase, 19 decrease), and the number of damped oscillators decreases (16–7). Together this further supports the idea that differences between segmentation mechanisms evolved under shallow and steep gradients are functionally relevant. Finally, the results of our transition experiments also imply that the evolutionary transition from shallow to steep is easier than that from steep to shallow. This agrees with our earlier findings on the difference in speed with which segmentation patterns evolve under shallow and steep gradients. Gene expression noise promotes sustained oscillations and travelling waves Next, we aimed to find the functional differences between the types of segmentation mechanism evolving under steep or shallow morphogen gradients. We focused on the difference in the number of damped and travelling wave oscillators that evolve for steep and shallow morphogen gradients. In case of persistent oscillators, gene expression dynamics follow a stable limit cycle spanning the basins of attraction of the future two stable segmentation states (Additional file 1: Fig. S1A). Persistent oscillations thus allow a stable memorization of the initial oscillation phase at the cell's birth, right until the moment morphogen levels drop and the phase is translated into one of two segmentation states. In contrast, for damped oscillations, the gene expression dynamics are spiralling inward to the equilibrium inside the unstable limit cycle, which necessarily resides in only one of the basins of attraction of the segmentation states (Additional file 1: Fig. S1B). This causes cells to gradually lose their memory of their original oscillation phase, ultimately causing convergence to a single differentiated state irrespective of initial phase. We hypothesize that steep morphogen gradients suffer less from this memory loss as segmentation occurs rapidly, when damping has only just begun, and that this explains the higher likelihood of damped oscillators evolving under these conditions. 
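To make the phase-memory argument concrete, the toy oscillator below contrasts the two regimes sketched in Additional file 1. This is a minimal Python sketch and a stand-in for the evolved gene networks, not the model itself: it assumes a generic Stuart-Landau-type oscillator, simple Euler integration and arbitrary parameter values.

import math

def simulate(mu, phase0, omega=2.0 * math.pi, dt=0.001, t_end=10.0):
    # Stuart-Landau-type oscillator: a stable limit cycle for mu > 0,
    # a stable spiral (damped oscillations) for mu < 0.
    #   dx/dt = mu*x - omega*y - (x^2 + y^2)*x
    #   dy/dt = omega*x + mu*y - (x^2 + y^2)*y
    x, y = math.cos(phase0), math.sin(phase0)
    for _ in range(int(t_end / dt)):
        r2 = x * x + y * y
        dx = mu * x - omega * y - r2 * x
        dy = omega * x + mu * y - r2 * y
        x, y = x + dx * dt, y + dy * dt
    return x, y

for mu, label in [(1.0, "persistent (limit cycle)"), (-0.3, "damped (stable spiral)")]:
    # Two cells born a quarter-cycle apart in oscillation phase.
    xa, ya = simulate(mu, phase0=0.0)
    xb, yb = simulate(mu, phase0=math.pi / 2.0)
    print(label, "- separation of the two states after t_end:",
          round(math.hypot(xa - xb, ya - yb), 3))

With mu > 0 the two trajectories stay a quarter-cycle apart on the limit cycle, so their phase difference is still readable when morphogen levels finally drop; with mu < 0 both trajectories collapse towards the same equilibrium, so by the time oscillations are arrested the two states are practically indistinguishable and the initial phase can no longer be translated into different segmentation states.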
Following this logic, we speculate that adding noise on gene expression could increase the sensitivity to phase memory loss: it might bring the cell faster to the single stable state by accident. Thus, we expect that noise decreases the fraction of simulations in which damped oscillators evolve, especially for steep gradient where damped oscillators are common. Under shallow gradients instead, sloped frequency profiles and travelling waves commonly evolve while damped oscillators are rare. If we assume that there is no inherent difference in functionality between having a constant or a sloped frequency profile, the higher number of sloped profiles could simply be due to the more general need for sustained oscillations when the gradient is shallow. In that case, a sloped profile represents just one of two ways of achieving persistent oscillations. On the other hand, if a sloped frequency profile were to have any additional functionality, such as its suggested larger robustness [23], it may have more space and time to exert this functionality under a shallow, more spread out morphogen gradient. If this is the case, increasing selection for robustness should increase the likelihood of evolving segmentation mechanisms with travelling waves under a shallow gradient. To test the above ideas, we added different levels of gene expression noise to our model, thereby inducing implicit selection for robustness. We found that the higher the noise, the lower the number of successful simulations; especially, the simulations with a shallow gradient were affected (Table 2). Furthermore, with higher noise, the size of the evolved genomes increases, mostly due to an increase in the number of TFBS, and again particularly noticeable for shallow gradients. These facts suggest that gene expression noise combined with a shallow morphogen gradient requires a more complex segmentation mechanism (Fig. 7a). We find that adding noise greatly increases the fraction of simulations with a steep gradient that yield persistent oscillations (Table 2). This confirms our hypothesis that damped oscillators are only tolerated if limited memorization of oscillator phase is required. Strikingly, for simulations with a shallow gradient, all levels of gene expression noise yield an increase in the fraction of sloped frequency profiles that evolve (Fig. 7b). For medium and high noise levels, the fraction of sloped frequency profiles also increases in simulations with steep gradients. Together this confirms the hypothesis that a sloped profile increases robustness against noise. Genome size and oscillatory dynamics for different levels of gene expression noise. a Violin plots (vertical histogram) of the number of genes and transcription factor binding sites (TFBS) in the pruned genomes of shallow-gradient (dark) and steep-gradient (light) simulations. b Histogram of the frequency difference between oscillations in the growth zone and at the end of the profile Given that shallow gradients and noise enhance both genome size and the occurrence of sloped frequency profiles, we investigated the correlation between these two properties: perhaps the increase in genome size observed with higher noise reflects the requirement for travelling waves. In Fig. 8, we plot the frequency difference (a measure for the slopedness of the frequency profile) against the genome size of simulations with different noise levels. (See also Additional file 5: Fig. S5 for plots separated by simulation condition, and Additional file 6: Fig. 
S6 for correlation with nr of loops.) From this, we conclude that no correlation exists between these two properties for individual evolutionary outcomes and that they likely evolved independently. Relation between genome size and frequency difference. The x-axis represents the difference in oscillation frequency between the growth zone and the point before oscillations cease (see also Fig. 4d). The y-axis shows genome size as the sum of # genes and TFBS. No clear correlation between genome size and frequency difference is apparent. See Additional file 6: Fig. S6 for separate scatterplots for each condition (noise level and gradient steepness) Segmentation is a major evolutionary innovation exhibited by the vertebrate, arthropod and annelid clades [1, 2]. In vertebrates, annelids and most arthropods, segments are generated in an anterior–posterior sequence and originate from a localized posterior growth zone. In vertebrates and arthropods, this sequential segmentation arises from oscillatory gene expression in the posterior growth zone, where morphogen levels are high. As cells are pushed out of this zone and morphogen levels drop, oscillations cease and a temporally stable gene expression pattern arises that prepatterns the segments. Despite this common clock-and-wavefront mechanism, intriguing species differences exist. While vertebrates and, for example, the arthropod Strigamia appear to have a long unsegmented zone and extensive kinematic waves, the cephalochordate Amphioxus and the beetle Tribolium appear to have shorter unsegmented regions and no or less extensive travelling of gene expression waves [4, 24]. Additionally, in Amphioxus and Tribolium, the oscillator clock appears to be less complex than in vertebrates and other arthropods [25, 26], although this may reflect merely a lack of data. It is currently unclear to what extent size of the posterior growth zone, oscillator slowing and oscillator complexity are related. Additionally, while both oscillator slowing and oscillator complexity have been suggested to contribute to developmental robustness, this has not been explicitly investigated. To investigate these matters, we extended previous evo-devo models for the evolution of body axis segmentation by incorporating growth from a posterior growth zone, with a posteriorly expressed morphogen that forms a gradient through decay. We have previously shown how this biases evolution towards oscillatory sequential segmentation [10]. In addition, we developed an analysis pipeline that allows us to compute parameters describing network complexity and oscillatory dynamics. With this, we investigated the effect of different morphogen decay rates, resulting in differently sloped morphogen gradients and hence differently sized unsegmented zones. In addition, we also investigated the influence of gene expression noise, resulting in different levels of selection for robustness. We showed that in our new model, different types of oscillators can evolve, with either damped oscillators or oscillators with a constant period frequently evolving. In a subset of simulations, we also observed the spontaneous evolution of oscillators with a sloped frequency profile resulting in a slowing down of oscillations and generation of travelling waves towards the anterior [14, 34], similar to those seen during, for example, vertebrate somitogenesis or Strigamia segmentation. 
Furthermore, for these sloped frequency profiles, we find that oscillation frequencies typically decrease by 50–60% before oscillations cease rather than decreasing all the way to zero, in agreement with experimental measurements of vertebrate somitogenesis [35]. We found that a steep morphogen gradient more often leads to the evolution of a damped oscillator. Under a shallow morphogen gradient, cells go through a prolonged transient before oscillations cease, so we hypothesize that sustained oscillations may be needed to maintain a robust dynamic memory of the oscillator phase with which the cell left the growth zone. We also show that in the presence of gene expression noise, the number of evolved persistent oscillators increases for steep morphogen gradients, supporting the notion that persistent oscillators contribute to robust patterning. In addition to differences in the occurrence of damped oscillators, shallow gradients also more often yield a sloped frequency profile. The likelihood of evolving travelling waves increases in the presence of gene expression noise, particularly for shallow gradients but also for steep gradients when noise levels are high. Our study thus confirms the hypothesis that sloped frequency gradients enhance the robustness of sequential segmentation. As to the mechanism of this enhanced robustness, we speculate that the slowing down of oscillations causes cell dynamics to spend more time inside the basins of attraction of the two segmentation states and relatively less time "in limbo" in between these two basins where it is less clear what to do when oscillations stop. As a consequence, the vulnerability to noise decreases. Finally, we found that genomes evolved under a shallow gradient tend to be larger and that networks have more and larger feedback loops, with noise contributing to this effect. In a switch experiment, we let evolved individuals continue evolution in the presence of a gradient of the opposite steepness. The results from these simulations suggest that the observed differences in genome size and frequency profiles, while small, are functionally significant, since simulations switched from a shallow to a steep morphogen gradient tend to decrease their genome size and slope of the frequency profile, and vice versa. Our results suggest a potentially important role for morphogen gradient length in causing the differences in segmentation processes found between species within both the arthropod and chordate clades. For instance, in both vertebrates and the centipede Strigamia, segmentation is preceded by a long spatio-temporal transient that is accompanied by extensive kinematic waves of gene expression [11, 18, 19, 35,36,37]. This is reminiscent of the outcomes we observed for a shallow morphogen gradient. Additionally, at least for vertebrates, the segmentation network is known to be highly complex and consists of an entanglement of three signalling pathways: FGF, Wnt and Notch [16], again similar to simulation outcomes under a shallow morphogen gradient. In contrast, the cephalochordate Amphioxus lays down its segments very close to the posterior growth zone, and no travelling waves have (thus far) been observed. Additionally, the FGF pathway does not seem to be involved in segmentation [24,25,26], suggesting a simpler oscillator network architecture. Based on the currently available data, it thus appears that Amphioxus segmentation more closely resembles the in silico mechanisms evolved under a steep gradient. 
On a similar note, in the beetle Tribolium, segment formation occurs relatively close to the posterior growth region, and both the travelled distance and contraction of kinematic waves are modest, indicating only a slightly sloped frequency profile [4]. Additionally, the currently available data suggest a relatively simple oscillator network [27]. Importantly, our switch experiments demonstrate that evolution easily adapts a short gradient mechanism into a long gradient mechanism and vice versa. This supports the generally accepted notion that at least within a single clade segmentation evolved once and that within-clade differences arose through subsequent divergence of the segmentation mechanism. Based on our finding that simulations with steep and shallow gradients differ in the ease with which segments evolve, we speculate that the initial evolution of segmentation within a clade was of the steep-gradient type. Recent studies have suggested that network complexity may reflect the need for two distinct oscillators, one with a constant frequency and one slowing down according to a decreasing frequency profile, with the resulting phase difference patterning somite boundaries and polarity [20, 36]. Intriguingly, in our simulations we did not observe a clear correlation between the evolution of high network complexity and travelling waves, despite the fact that the evolution of both these properties becomes more likely under shallow morphogen gradients and gene expression noise. These results demonstrate that (further) network complexity is not required for a sloped frequency profile. Instead, we speculate that network complexity is required for oscillator robustness and persistence. Together this suggests that network complexity and travelling waves could have evolved separately rather than simultaneously and may in fact play subtly differing roles. Obviously, in order to simulate developmental processes in many individuals and over many generations in a computationally tractable manner, the developmental process in our model was highly simplified. Important simplifications are the restriction to a one-dimensional tissue architecture and the absence of cell motility. These would be highly interesting extensions for future studies, as two-dimensional tissue architecture likely increases the impact of gene expression and morphogen gradient noise on segment formation, while cell motility instead has been shown to contribute to patterning robustness [38]. Importantly, although simplified, our current model did contain the necessary ingredients that enabled us to investigate the evolution of kinematic waves, in contrast to earlier models in which morphogen gradient shapes were superimposed and kept constant. In summary, we have shown that gradient slope and length influence the evolution of travelling waves in segmentation. First, we showed that shallow gradients lead to the evolution of slightly larger genomes and networks with more and larger loops as compared to steep gradients, and more often to persistent oscillations with travelling waves. We also showed that these differences are likely to be functional. Finally, we showed that gene expression noise increases the likelihood of evolving persistent oscillators, and, especially in the presence of shallow gradients, of evolving travelling waves. We therefore propose that gradient length and noise may play a role in creating the differences observed both between species within the chordate and arthropod clades. 
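As a concrete illustration of the profile metrics used in Figs. 4e, 7b and 8, the sketch below computes the posterior-to-anterior frequency difference and applies a crude three-way classification. This is a minimal Python sketch under stated assumptions, not the actual analysis pipeline: the input layout (one frequency value per position, ordered from the growth zone towards the anterior) and the 0.01 cutoff separating "constant" from "sloped" profiles are illustrative choices.

def classify_profile(freqs, sustained, sloped_cutoff=0.01):
    # freqs: oscillation frequency at successive positions, starting in the growth
    #        zone and ending at the last position with detectable oscillations.
    # sustained: False if oscillations damp out on their own before the wavefront passes.
    if not sustained:
        return 0.0, "damped"
    # Growth-zone frequency minus the frequency just before oscillations cease.
    diff = freqs[0] - freqs[-1]
    return diff, ("sloped" if diff > sloped_cutoff else "constant")

# A profile slowing from 0.20 to 0.09 (roughly the 50-60% drop reported above) is "sloped";
# a flat profile comes out "constant"; a damped oscillator is binned at 0.0.
print(classify_profile([0.20, 0.17, 0.13, 0.09], sustained=True))
print(classify_profile([0.20, 0.20, 0.20, 0.20], sustained=True))
print(classify_profile([0.20, 0.15], sustained=False))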
TF: TFBS: transcription factor binding site FBL: Davis GK, Patel NH. The origin and evolution of segmentation. Trends Genet. 1999;15(12):68–72. https://doi.org/10.1016/S0168-9525(99)01875-2. Peel A, Akam M. Evolution of segmentation: rolling back the clock. Curr Biol. 2003;13(18):708–10. https://doi.org/10.1016/j.cub.2003.08.045. Hubaud A, Pourquié O. Signalling dynamics in vertebrate segmentation. Nat Rev Mol Cell Biol. 2014;15:709–21. https://doi.org/10.1038/nrm3891. El-Sherif E, Averof M, Brown SJ. A segmentation clock operating in blastoderm and germband stages of Tribolium development. Development. 2012;139(23):4341–6. https://doi.org/10.1242/dev.085126. Shankland M, Seaver EC. Evolution of the bilaterian body plan: what have we learned from annelids? Proc Natl Acad Sci. 2000;97(9):4434–7. https://doi.org/10.1073/pnas.97.9.4434. François P, Hakim V, Siggia ED. Deriving structure from evolution: metazoan segmentation. Mol Syst Biol. 2007;3(1):154. https://doi.org/10.1038/msb4100192. François P. Evolving phenotypic networks in silico. Semin Cell Dev Biol. 2014;35:90–7. https://doi.org/10.1016/j.semcdb.2014.06.012. ten Tusscher KH, Hogeweg P. Evolution of networks for body plan patterning; interplay of modularity, robustness and evolvability. PLoS Comput Biol. 2011;7(10):1002208. https://doi.org/10.1371/journal.pcbi.1002208. ten Tusscher KHWJ. Mechanisms and constraints shaping the evolution of body plan segmentation. Eur Phys J E. 2013;36(5):1–12. https://doi.org/10.1140/epje/i2013-13054-7. Vroomans RMA, Hogeweg P, ten Tusscher KHWJ. In silico evo-devo: reconstructing stages in the evolution of animal segmentation. EvoDevo. 2016;7(1):14. https://doi.org/10.1186/s13227-016-0052-8. Palmeirim I, Henrique D, Ish-Horowicz D, Pourquié O. Avian hairy gene expression identifies a molecular clock linked to vertebrate segmentation and somitogenesis. Cell. 1997;91(5):639–48. https://doi.org/10.1016/S0092-8674(00)80451-1. Kaern M, Menzinger M, Hunding A. Segmentation and somitogenesis derived from phase dynamics in growing oscillatory media. J Theor Biol. 2000;207(4):473–93. https://doi.org/10.1006/jtbi.2000.2183. Jaeger J, Goodwin BC. A cellular oscillator model for periodic pattern formation. J Theor Biol. 2001;213(2):171–81. https://doi.org/10.1006/jtbi.2001.2414. Dequéant M-L, Pourquié O. Segmental patterning of the vertebrate embryonic axis. Nat Rev Genet. 2008;9:370–82. https://doi.org/10.1038/nrg2320. Dequéant M-L, Glynn E, Gaudenz K, Wahl M, Chen J, Mushegian A, Pourquié O. A complex oscillating network of signaling genes underlies the mouse segmentation clock. Science. 2006;314(5805):1595–8. https://doi.org/10.1126/science.1133141. Goldbeter A, Pourquié O. Modeling the segmentation clock as a network of coupled oscillations in the Notch, Wnt and FGF signaling pathways. J Theor Biol. 2008;252(3):574–85. https://doi.org/10.1016/j.jtbi.2008.01.006. Aulehla A, Pourquié O. Oscillating signaling pathways during embryonic development. Curr Opin Cell Biol. 2008;20(6):632–7. https://doi.org/10.1016/j.ceb.2008.09.002. Chipman AD, Arthur W, Akam M. A double segment periodicity underlies segment generation in centipede development. Curr Biol. 2004;14(14):1250–5. https://doi.org/10.1016/j.cub.2004.07.026. Brena C, Akam M. An analysis of segmentation dynamics throughout embryogenesis in the centipede Strigamia maritima. BMC Biol. 2013;11(1):112. https://doi.org/10.1186/1741-7007-11-112. Beaupeux M, François P. Positional information from oscillatory phase shifts: insights from in silico evolution. 
Phys Biol. 2016;13(3):036009. Boareto M, Tomka T, Iber D. Positional information encoded in the dynamic differences between neighbouring oscillators during vertebrate segmentation. bioRxiv. 2018. https://doi.org/10.1101/286328 Murray PJ, Maini PK, Baker RE. The clock and wavefront model revisited. J Theor Biol. 2011;283(1):227–38. https://doi.org/10.1016/j.jtbi.2011.05.004. El-Sherif E, Zhu X, Fu J, Brown SJ. Caudal regulates the spatiotemporal dynamics of pair-rule waves in Tribolium. PLoS Genet. 2014;10(10):1004677. https://doi.org/10.1371/journal.pgen.1004677. Schubert M, Holland LZ, Stokes MD, Holland ND. Three amphioxus Wnt genes (AmphiWnt3, AmphiWnt5, and AmphiWnt6) associated with the tail bud: the evolution of somitogenesis in chordates. Dev Biol. 2001;240(1):262–73. https://doi.org/10.1006/dbio.2001.0460. Bertrand S, Camasses A, Somorjai I, Belgacem MR, Chabrol O, Escande M-L, Pontarotti P, Escriva H. Amphioxus FGF signaling predicts the acquisition of vertebrate morphological traits. PNAS. 2011;108(22):9160–5. https://doi.org/10.1073/pnas.1014235108. Bertrand S, Aldea D, Oulion S, Subirana L, de Lera AR, Somorjai I, Escriva H. Evolution of the role of RA and FGF signals in the control of somitogenesis in chordates. PLoS ONE. 2015;10(9):0136587. https://doi.org/10.1371/journal.pone.0136587. Choe CP, Miller SC, Brown SJ. A pair-rule gene circuit defines segments sequentially in the short-germ insect Tribolium castaneum. Proc Natl Acad Sci USA. 2006;103(17):6560–4. https://doi.org/10.1073/pnas.0510440103. Chipman AD, Akam M. The segmentation cascade in the centipede Strigamia maritima: involvement of the notch pathway and pair-rule gene homologues. Dev Biol. 2008;319(1):160–9. https://doi.org/10.1016/j.ydbio.2008.02.038. Crombach A, Hogeweg P. Evolution of evolvability in gene regulatory networks. PLoS Comput Biol. 2008;4(7):1–13. https://doi.org/10.1371/journal.pcbi.1000112. Schaerli Y, Munteanu A, Gili M, Cotterell J, Sharpe J, Isalan M. A unified design space of synthetic stripe-forming networks. Nat Commun. 2014;5(4905):1–10. https://doi.org/10.1038/ncomms5905. Salazar-Ciudad I, Newman SA, Solé RV. Phenotypic and dynamical transitions in model genetic networks i. Emergence of patterns and genotype–phenotype relationships. Evol Dev. 2001;3(2):84–94. https://doi.org/10.1046/j.1525-142x.2001.003002084.x. Salazar-Ciudad I, Solé RV, Newman SA. Phenotypic and dynamical transitions in model genetic networks ii. Application to the evolution of segmentation mechanisms. Evol Dev. 2001;3(2):95–103. https://doi.org/10.1046/j.1525-142x.2001.003002095.x. Lewis J. Autoinhibition with transcriptional delay: a simple mechanism for the zebrafish somitogenesis oscillator. Curr Biol. 2003;13(16):1398–408. https://doi.org/10.1016/S0960-9822(03)00534-7. Morelli LG, Ares S, Herrgen L, Schröter C, Jülicher F, Oates AC. Delayed coupling theory of vertebrate segmentation. HFSP J. 2009;3(1):55–66. https://doi.org/10.2976/1.3027088. Shih NP, François P, Delaune EA, Amacher SL. Dynamics of the slowing segmentation clock reveal alternating two-segment periodicity. Development. 2015;142(10):1785–93. https://doi.org/10.1242/dev.119057. Lauschke VM, Tsiairis CD, François P, Aulehla A. Scaling of embryonic patterning based on phase-gradient encoding. Nature. 2013;493:101–5. https://doi.org/10.1038/nature11804. Soroldoni D, Jörg DJ, Morelli LG, Richmond DL, Schindelin J, Jülicher F, Oates AC. A Doppler effect in embryonic pattern formation. Science. 2014;345(6193):222–5. https://doi.org/10.1126/science.1253089. 
Uriu K, Morishita Y, Iwasa Y. Random cell movement promotes synchronization of the segmentation clock. Proc Natl Acad Sci. 2010;107(11):4979–84. https://doi.org/10.1073/pnas.0907122107.

RMAV and KHWJT designed the study, RMAV performed the simulations, RMAV, PH and KHWJT analysed the results and wrote the manuscript. All authors read and approved the manuscript. We thank Enrico Sandro Colizzi for helpful suggestions and discussion of the manuscript. The source code used to generate the data is available upon request. RMAV was supported by the "Focus en Massa" program from Utrecht University. The funding body had no role in the design of the study, collection and analysis of the data, or writing of the manuscript.

Centre of Excellence in Experimental and Computational Developmental Biology, Institute of Biotechnology, University of Helsinki, Viikinkaari 5, 00790, Helsinki, Finland: Renske M. A. Vroomans. Theoretical Biology, Utrecht University, Padualaan 8, 3584CH, Utrecht, Netherlands: Paulien Hogeweg & Kirsten H. W. J. ten Tusscher. Correspondence to Renske M. A. Vroomans.

Additional file 1. Networks with persistent and damped oscillations have different origins. A) Persistent oscillations are the result of a stable limit cycle around an unstable equilibrium (open blue dot). As long as conditions (e.g. morphogen concentration) stay constant, these oscillations continue indefinitely. When the morphogen concentration decreases, the system will reach either of the two stable states (red dots), depending on the basin of attraction (red zones) in which it finds itself. B) Damped oscillations are caused by a stable spiral. Even if all else stays constant, the oscillations lose amplitude over time, and the system will end up with fixed gene expression. Such a system "loses" the memory of the oscillations and thus of the phase with which it started.

Additional file 2. Networks with different structures. A) In this network, the genes constituting the bistable switch are also part of the oscillator. B) The segmentation gene can itself also be part of the oscillator. In this case, the genes responsible for generating a bistable switch are hard to identify, also due to the size of the network. Both networks are pruned, with the requirement that the number of segments should stay the same.

Additional file 3. Examples of profiles that are harder to classify.

Additional file 4. Larger genomes generate networks with more loops. Scatterplot of the number of loops in the network versus genome size. The two are clearly correlated, but note that particularly simulations with a shallow gradient (red dots) lead to larger genomes and networks with more loops.

Additional file 5. The type of frequency profile is not correlated with genome size. Scatterplots of the posterior to anterior frequency difference in the profile versus genome size, separated by simulation condition (gradient steepness and noise level).

Additional file 6. The type of frequency profile is not correlated with the number of loops. Scatterplot of the posterior to anterior frequency difference in the profile versus the number of loops in the network.

Vroomans, R.M.A., Hogeweg, P. & ten Tusscher, K.H.W.J. Around the clock: gradient shape and noise impact the evolution of oscillatory segmentation dynamics. EvoDevo 9, 24 (2018). doi:10.1186/s13227-018-0113-2
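Additional files 4 and 6 summarize network complexity through the number of (negative) feedback loops. The sketch below shows one way such loops could be counted in a signed regulatory network; it is a minimal Python sketch using the networkx library, and the toy network, its signs and the attribute name "sign" are illustrative assumptions rather than the paper's evolved genomes.

import networkx as nx

# Toy signed gene regulatory network: +1 for activation, -1 for repression.
grn = nx.DiGraph()
grn.add_edge("A", "B", sign=+1)
grn.add_edge("B", "C", sign=+1)
grn.add_edge("C", "A", sign=-1)   # A -> B -> C -| A closes a negative feedback loop
grn.add_edge("B", "B", sign=-1)   # negative autoregulation: a one-gene loop

negative_loops = []
for cycle in nx.simple_cycles(grn):
    sign = 1
    # Multiply edge signs around the cycle, including the edge that closes it.
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= grn[u][v]["sign"]
    if sign < 0:
        negative_loops.append(cycle)

print("number of negative feedback loops:", len(negative_loops))
print("loop sizes:", sorted(len(c) for c in negative_loops))

Loop counts and sizes obtained in this way are the kind of quantities that are plotted against genome size and frequency difference in Additional files 4 to 6.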
Bolzano-Weierstrass selection principle

From Encyclopedia of Mathematics

A method of proof which is frequently employed in mathematical analysis and which is based on successive subdivision of a segment into halves, after which the segment having some property is chosen as the new, initial segment. This method may be employed if the nature of the segments is such that the fact that the property is present in some segment implies that at least one segment obtained by halving the original segment will also have this property. For instance, if the segment contains infinitely-many points of some set, or if some function is not bounded on the segment, or if a non-zero function assumes values of opposite sign at the two ends of the segment — all these are properties of this type. The Bolzano–Weierstrass selection principle can be used to prove the Bolzano–Weierstrass theorem and a number of other theorems in analysis.

Depending on the criterion according to which the segments are chosen in applying the Bolzano–Weierstrass selection principle, the process obtained is effective or ineffective. An example of the former case is the application of the principle to prove that for a continuous real function that assumes values of opposite sign at the ends of a given segment, this segment contains a point at which the function vanishes (cf. Cauchy theorem on intermediate values of continuous functions). In this case the criterion chosen for the successive choice of the segments is that the function assumes values of different sign at the two ends of the chosen segment. If there is a way of computing the value of the function at every point, then, after performing a sufficient number of steps, it is possible to obtain the coordinates of the point at which the function vanishes, to within a given degree of accuracy. Thus, in addition to proving that a root of the equation $f(x)=0$ exists on a segment at the ends of which the values of the function are of different sign, one also has a method of approximately solving this equation.

An example of an ineffective process is the use of the Bolzano–Weierstrass selection principle to prove that a continuous real function on a segment attains a maximum on the segment. Here, the segment chosen in the successive subdivisions is the one on which the maximum of the values of the function is not less than that on the other one. If, as in the former case, it is possible to calculate the value of the function at any point, this is still not sufficient for an effective choice of the required segment. Accordingly, the Bolzano–Weierstrass selection principle can be used in this case only to prove an existence theorem which says that the function assumes its maximum at some point, but not to specify this point within a given degree of accuracy.

There exist various generalizations of the Bolzano–Weierstrass selection principle, e.g. to apply it in the $n$-dimensional Euclidean space ($n=2,3,\ldots$) to $n$-dimensional cubes, which are successively subdivided into congruent cubes with side-lengths of one-half that of the original cube.

How to Cite This Entry: Bolzano-Weierstrass selection principle. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bolzano-Weierstrass_selection_principle&oldid=32748
This article was adapted from an original article by L.D. Kudryavtsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
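The "effective process" described above is what numerical analysis now calls the bisection method. The following is a minimal Python sketch, not part of the encyclopedia entry: it assumes only that f is continuous on [a, b] and takes values of opposite sign at the endpoints, and the example function and tolerance are illustrative choices.

def bisect(f, a, b, tol=1e-8):
    """Approximate a root of a continuous f on [a, b], given f(a) and f(b) of opposite sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must take values of opposite sign at the ends of [a, b]")
    while b - a > tol:
        m = (a + b) / 2.0           # halve the segment
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:             # keep the half on which the sign change persists
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2.0

# Example: the equation x^3 - 2 = 0 has a root in [1, 2].
print(bisect(lambda x: x ** 3 - 2.0, 1.0, 2.0))  # approximately 1.2599210

Each iteration halves the segment, so the error shrinks by a factor of two per step; as the article notes, the analogous argument for the maximum of a continuous function yields only an existence proof, since evaluating the function at finitely many points does not reveal which half contains the maximizer.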
The sweeping second hand on your wall clock is 20 cm long. What is (a) the rotational speed of the second hand, (b) the translational speed of the tip of the second hand, and (c) the rotational acceleration of the second hand? You find an old record player in your attic. The turntable has two readings: 33 rpm and 45 rpm. What do they mean? Express these quantities in different units. Consider again the turntable described in the last problem. Determine the magnitudes of the rotational acceleration in each of the following situations. Indicate the assumptions you made for each case. (a) When on and rotating at 33 rpm, it is turned off and slows and stops in 60 s. (b) When off and you push the play button, the turntable attains a speed of 33 rpm in 15 s. (c) You switch the turntable from 33 rpm to 45 rpm, and it takes about 2.0 s for the speed to change. (d) In the situation in part (c), what is the magnitude of the average tangential acceleration of a point on the turntable that is 15 cm from the axis of rotation? You step on the gas pedal in your car, and the car engine's rotational speed changes from 1200 $\mathrm{rpm}$ to 3000 $\mathrm{rpm}$ in 3.0 $\mathrm{s}$. What is the engine's average rotational acceleration? You pull your car into your driveway and stop. The drive shaft of your car engine, initially rotating at $2400 \mathrm{rpm},$ slows with a constant rotational acceleration of magnitude 30 $\mathrm{rad} / \mathrm{s}^{2} .$ How long does it take for the drive shaft to stop turning? An old wheat-grinding wheel in a museum actually works. The sign on the wall says that the wheel has a rotational acceleration of 190 $\mathrm{rad} / \mathrm{s}^{2}$ as its spinning rotational speed increases from zero to 1800 $\mathrm{rpm} .$ How long does it take the wheel to attain this rotational speed? Centrifuge A centrifuge at the same museum is used to separate seeds of different sizes. The average rotational acceleration of the centrifuge according to a sign is 30 $\mathrm{rad} / \mathrm{s}^{2}$ . If starting at rest, what is the rotational velocity of the centrifuge after 10 $\mathrm{s}$? Potter's wheel A fly sits on a potter's wheel 0.30 $\mathrm{m}$ from its axle. The wheel's rotational speed decreases from 4.0 $\mathrm{rad} / \mathrm{s}$ to 2.0 $\mathrm{rad} / \mathrm{s}$ in 5.0 $\mathrm{s}$ . Determine (a) the wheel's average rotational acceleration, (b) the angle through which the fly turns during the $5.0 \mathrm{s},$ and (c) the distance traveled by the fly during that time interval. During your tennis serve, your racket and arm move in an approximately rigid arc with the top of the racket 1.5 m from your shoulder joint. The top accelerates from rest to a speed of 20 m/s in a time interval of 0.10 s. Determine (a) the magnitude of the average tangential acceleration of the top of the racket and (b) the magnitude of the rotational acceleration of your arm and racket. An ant clings to the outside edge of the tire of an exercise bicycle. When you start pedaling, the ant's speed increases from zero to 10 $\mathrm{m} / \mathrm{s}$ in 2.5 $\mathrm{s}$ . The wheel's rotational acceleration is 13 $\mathrm{rad} / \mathrm{s}^{2}$ . Determine everything you can about the motion of the wheel and the ant. The speedometer on a bicycle indicates that you travel 60 $\mathrm{m}$ while your speed increases from 0 to 10 $\mathrm{m} / \mathrm{s}$ . The radius of the wheel is 0.30 $\mathrm{m}$ . Determine three physical quantities relevant to this motion.
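As one hedged illustration of how these kinematic quantities tie together, here is a possible route through the bicycle problem directly above, assuming the wheel rolls without slipping and the acceleration is constant (the choice of which three quantities to report is ours):

$$\theta=\frac{d}{r}=\frac{60 \mathrm{m}}{0.30 \mathrm{m}}=200 \mathrm{rad}, \quad \omega_{\mathrm{f}}=\frac{v_{\mathrm{f}}}{r}=\frac{10 \mathrm{m/s}}{0.30 \mathrm{m}} \approx 33 \mathrm{rad/s}, \quad \alpha=\frac{\omega_{\mathrm{f}}^{2}}{2 \theta} \approx 2.8 \mathrm{rad/s^{2}}$$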
You peddle your bicycle so that its wheel's rotational speed changes from 5.0 $\mathrm{rad} / \mathrm{s}$ to 8.0 $\mathrm{rad} / \mathrm{s}$ in 2.0 $\mathrm{s}$ . Determine (a) the wheel's average rotational acceleration, (b) the angle through which it turns during the $2.0 \mathrm{s},$ and $(\mathrm{c})$ the distance that a point 0.60 $\mathrm{m}$ from the axle travels. Mileage gauge The odometer on an automobile actually counts axle turns and converts the number of turns to miles based on knowledge that the diameter of the tires is 0.62 m. How many turns does the axle make when traveling 10 miles? Speedometer The speedometer on an automobile measures the rotational speed of the axle and converts that to a linear speed of the car, assuming the car has 0.62 -m-diameter tires. What is the rotational speed of the axle when the car is traveling at 20 $\mathrm{m} / \mathrm{s}(45 \mathrm{mph})$ ? Ferris wheel A Ferris wheel starts at rest, acquires a rotational velocity of $\omega$ rad/s after completing one revolution and continues to accelerate. Write an expression for (a) the magnitude of the wheel's rotational acceleration (assumed constant), (b) the time interval needed for the first revolution, (c) the time interval required for the second revolution, and (d) the distance a person travels in two revolutions if he is seated a distance $l$ from the axis of rotation. You push a disk-shaped platform on its edge 2.0 $\mathrm{m}$ from the axle. The platform starts at rest and has a rotational acceleration of 0.30 $\mathrm{rad} / \mathrm{s}^{2} .$ Determine the distance you must run while pushing the platform to increase its speed at the edge to 7.0 $\mathrm{m} / \mathrm{s}$ . Estimate what Earth's rotational acceleration would be in rad/s' if the length of a day increased from 24 $\mathrm{h}$ to 48 $\mathrm{h}$ during the next 100 years. A turntable turning at rotational speed 33 $\mathrm{rpm}$ stops in 50 $\mathrm{s}$ when turned off. The turntable's rotational inertia is $1.0 \times 10^{-2} \mathrm{kg} \cdot \mathrm{m}^{2} .$ How large is the resistive torque that slows the turntable? A 0.30 -kg ball is attached at the end of a 0.90 -m-long stick. The ball and stick rotate in a horizontal circle. Because of air resistance and to keep the ball moving at constant speed, a continual push must be exerted on the stick, causing a $0.036-\mathrm{N} \cdot \mathrm{m}$ torque. Determine the magnitude of the resistive force that the air exerts on the ball opposing its motion. What assumptions did you make? Centrifuge A centrifuge with a $0.40-\mathrm{kg} \cdot \mathrm{m}^{2}$ rotational inertia has a rotational acceleration of 100 $\mathrm{rad} / \mathrm{s}^{2}$ when the power is turned on. (a) Determine the minimum torque that the motor supplies. (b) What time interval is needed for the centrifuge's rotational velocity to increase from zero to 5000 $\mathrm{rad} / \mathrm{s} ?$ Airplane turbine What is the average torque needed to accelerate the turbine of a jet engine from rest to a rotational velocity of 160 $\mathrm{rad} / \mathrm{s}$ in 25 $\mathrm{s} ?$ The turbine's rotating parts have a $32-\mathrm{kg} \cdot \mathrm{m}^{2}$ rotational inertia. The solid two part pulley in Figure $\mathrm{P} 8.22$ initially rotates counterclock wise. Two ropes pull on the pulley as shown. The inner part has a radius of 1.5a, and the outer part has a radius of 2.0a. (a) Construct a force diagram for the pulley with the origin of the coordinate system at the center of the pulley. 
(b) Determine the torque produced by each force (including the sign) and the resultant torque exerted on the pulley. (c) Based on the results of part (b), decide on the signs of the rotational velocity and the rotational acceleration. The flywheel shown in Figure P8.22 is initially rotating clockwise. Determine the relative force that the rope on the right needs to exert on the wheel compared to the force that the left rope exerts on the wheel in order for the wheel's rotational velocity to (a) remain constant, (b) increase in magnitude, and (c) decrease in magnitude. The outer radius is 2.0$a$ compared to 1.5$a$ for the inner radius. The flywheel shown in Figure $\mathrm{P} 8.22$ is initially rotating in the clockwise direction. The force that the rope on the right exerts on it is 1.5$T$ and the force that the rope on the left exerts on it is $T$ . Determine the ratio of the maximum radius of the inner circle compared to that of the outer circle in order for the wheel's rotational speed to decrease. A pulley such as that shown in Figure $P 8.25$ has rotational inertia 10 $\mathrm{kg} \cdot \mathrm{m}^{2}$ . Three ropes wind around different parts of the pulley and exert forces $T_{1 \mathrm{on} \mathrm{w}}=80 \mathrm{N},$ $T_{2 \mathrm{on} \mathrm{w}}=100 \mathrm{N},$ and $T_{3 \mathrm{on} \mathrm{w}}=50 \mathrm{N}$ . Determine (a) the rotational acceleration of the pulley and (b) its rotational velocity after 4.0 $\mathrm{s}$ . It starts at rest. Equation Jeopardy 1 The equation below describes a rotational dynamics situation. Draw a sketch of a situation that is consistent with the equation and construct a word problem for which the equation might be a solution. There are many possibilities. $$-(2.2 \mathrm{N})(0.12 \mathrm{m})=\left[(1.0 \mathrm{kg})(0.12 \mathrm{m})^{2}\right] \alpha$$ $$\begin{array}{l}{-(2.0 \mathrm{N})(0.12 \mathrm{m})+(6.0 \mathrm{N})(0.06 \mathrm{m})} \\ {\quad=\left[(1.0 \mathrm{kg})(0.12 \mathrm{m})^{2}\right] \alpha}\end{array}$$ Determine the rotational inertia of the four balls shown in Figure $P 8.28$ about an axis perpendicular to the paper and passing through point A. The mass of each ball is $m$ . Ignore the mass of the rods to which the balls are attached. Repeat the previous problem for an axis perpendicular to the paper through point B. Repeat the previous problem for axis BC, which passes through two of the balls. Merry-go-round A mechanic needs to replace the motor for a merry-go-round. What torque specifications must the new motor satisfy if the merry-go-round should accelerate from rest to 1.5 $\mathrm{rad} / \mathrm{s}$ in 8.0 $\mathrm{s}$ ? You can consider the merry-go-round to be a uniform disk of radius 5.0 $\mathrm{m}$ and mass $25,000 \mathrm{kg}$ . A small $0.80-\mathrm{kg}$ train propelled by a fan engine starts at rest and goes around a circular track with a $0.80-\mathrm{m}$ radius. The fan air exerts a $2.0-\mathrm{N}$ force on the train. Determine (a) the rotational acceleration of the train and (b) the time interval needed for it to acquire a speed of 3.0 $\mathrm{m} / \mathrm{s}$ . Indicate any assumptions you made. The train from the previous problem is moving along the rails at a constant rotational speed of 5.4 $\mathrm{rad} / \mathrm{s}$ (the fan has stopped). Determine the time interval that is needed to stop the train if the wheels lock and the rails exert a 1.8 -N friction force on the train.
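For the last problem above, a hedged route to the answer, treating the 0.80-kg train from the previous problem as a point mass a distance 0.80 m from the axis and neglecting any other resistive effects:

$$I=m r^{2} \approx 0.51 \mathrm{kg} \cdot \mathrm{m}^{2}, \quad \alpha=\frac{\tau}{I}=\frac{(1.8 \mathrm{N})(0.80 \mathrm{m})}{0.51 \mathrm{kg} \cdot \mathrm{m}^{2}} \approx 2.8 \mathrm{rad/s^{2}}, \quad t=\frac{\omega_{0}}{\alpha}=\frac{5.4 \mathrm{rad/s}}{2.8 \mathrm{rad/s^{2}}} \approx 1.9 \mathrm{s}$$

The same result follows from the purely translational route $a=F/m=2.25 \mathrm{m/s^{2}}$ and $t=v_{0}/a$ with $v_{0}=\omega_{0} r=4.3 \mathrm{m/s}$.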
Motor You wish to buy a motor that will be used to lift a 20 -kg bundle of shingles from the ground to the roof of a house. The shingles are to have a $1.5-\mathrm{m} / \mathrm{s}^{2}$ upward acceleration at the start of the lift. The very light pulley on the motor has a radius of 0.12 $\mathrm{m}$ . Determine the minimum torque that the motor must be able to provide. A thin cord is wrapped around a grindstone of radius 0.30 $\mathrm{m}$ and mass 25 $\mathrm{kg}$ supported by bearings that produce negligible friction torque. The cord exerts a steady $20-\mathrm{N}$ tension force on the grindstone, causing it to accelerate from rest to 60 $\mathrm{rad} / \mathrm{s}$ in 12 $\mathrm{s}$ . Determine the rotational inertia of the grindstone. A string wraps around a $6.0-\mathrm{kg}$ wheel of radius 0.20 m. The wheel is mounted on a frictionless horizontal axle at the top of an inclined plane tilted $37^{\circ}$ below the horizontal. The free end of the string is attached to a $2.0-\mathrm{kg}$ block that slides down the incline without friction. The block's acceleration while sliding down the incline is 2.0 $\mathrm{m} / \mathrm{s}^{2} .$ (a) Draw separate force diagrams for the wheel and for the block. (b) Apply Newton's second law (either the translational form or the rotational form) for the wheel and for the block. (c) Determine the rotational inertia for the wheel about its axis of rotation. Elena, a black belt in tae kwon do, is experienced in breaking boards with her fist. A high-speed video indicates that her forearm is moving with a rotational speed of 40 rad/s when it reaches the board. The board breaks in 0.0040 s and her arm is moving at 20 rad/s just after breaking the board. Her fist is 0.32 $\mathrm{m}$ from her elbow joint and the rotational inertia of her forearm is 0.050 $\mathrm{kg} \cdot \mathrm{m}^{2} .$ Determine the average force that the board exerts on her fist while breaking the board (equal in magnitude to the force that her fist exerts on the board). Ignore the gravitational force that Earth exerts on her arm and the force that her triceps muscle exerts on her arm during the break. Like a yo-yo Sam wraps a string around the outside of a 0.040 -m-radius 0.20 -kg solid cylinder and uses it like a yoyo (Figure $P 8.38 ) .$ When released, the cylinder accelerates downward at $(2 / 3) g$ . (a) Draw a force diagram for the cylinder and apply the translational form of Newton's second law to the cylinder in order to determine the force that the string exerts on the cylinder.(b) Determine the rotational inertia of the solid cylinder. (c) Apply the rotational form of Newton's second law and determine the cylinder's rotational acceleration. (d) Is your answer to part (c) consistent with the application of $a=r \alpha$ , which relates the cylinder's linear acceleration and its rotational acceleration? Explain. Ajay S. Fire escape A unique fire escape for a three-story house is shown in Figure P8.39. A $30 \mathrm{kg}$ child grabs a rope wrapped around a heavy flywheel outside a bedroom window. The flywheel is a 0.40 -m-radius uniform disk with a mass of 120 $\mathrm{kg}$ . (a) Make a force diagram for the child as he moves downward at increasing speed and another for the flywheel as it turns faster and faster. (b) Use Newton's second law for translational motion and the child force diagram to obtain an expression relating the force that the rope exerts on him and his acceleration. 
(c) Use Newton's second law for rotational motion and the flywheel force diagram to obtain an expression relating the force the rope exerts on the flywheel and the rotational acceleration of the fly wheel. (d) The child's acceleration $a$ and the flywheel's rotational acceleration $\alpha$ are related by the equation $a=r \alpha,$ where $r$ is the flywheel's radius. Combine this with your equations in parts (b) and (c) to determine the child's acceleration and the force that the rope exerts on the wheel and on the child. An Atwood machine is shown in Example 8.5. Use $m_{1}=0.20 \mathrm{kg}, m_{2}=0.16 \mathrm{kg}, M=0.50 \mathrm{kg},$ and $R=0.10 \mathrm{m}$ (a) Construct separate force diagrams for block 1, for block 2, and for the solid cylindrical pulley. (b) Determine the rotational inertia of the pulley. (c) Use the force diagrams for blocks 1 and 2 and Newton's second law to write expressions relating the unknown accelerations of the blocks. (d) Use the pulley force diagram and the rotational form of Newton's second law to write an expression for the rotational acceleration of the pulley. (e) Noting that $a=R \alpha$ for the pulley, use the three equations from parts $(c)$ and $(d)$ to determine the magnitude of the acceleration of the hanging blocks. A physics problem involves a massive pulley, a bucket filled with sand, a toy truck, and an incline (see Figure P8.41). You push lightly on the truck so it moves down the incline. When you stop pushing, it moves down the incline at constant speed and the bucket moves up at constant speed. (a) Construct separate force diagrams for the pulley, the bucket, and the truck. (b) Use the truck force diagram and the bucket force diagram to help write expressions in terms of quantities shown in the figure for the forces $T_{1 \text { on Truck and }} T_{2 \text { on Bucket that the }}$rope exerts on the truck and that the rope exerts on the bucket. (c) Use the rotational form of Newton's second law to determine if the tension force $T_{1 \text { on pulley that the rope on the right }}$ side exerts on the pulley is the same, greater than, or less than the force $T_{2 \text { on pulley that the rope exerts on the left side. }}$ Khoobchandra A. (a) Determine the rotational momentum of a $10-\mathrm{kg}$ diskshaped flywheel of radius 9.0 $\mathrm{cm}$ rotating with a rotational speed of 320 $\mathrm{rad} / \mathrm{s} .(\mathrm{b})$ With what magnitude rotational speed must a $10-\mathrm{kg}$ solid sphere of 9.0 $\mathrm{cm}$ radius rotate to have the same rotational momentum as the flywheel? Ballet A ballet student with her arms and a leg extended spins with an initial rotational speed of 1.0 $\mathrm{rev} / \mathrm{s}$ . As she draws her arms and leg in toward her body, her rotational inertia becomes 0.80 $\mathrm{kg} \cdot \mathrm{m}^{2}$ and her rotational velocity is 4.0 $\mathrm{rev} / \mathrm{s}$ . Determine her initial rotational inertia. A 0.20 -kg block moves at the end of a $0.50-\mathrm{m}$ string along a circular path on a frictionless air table. The block's initial rotational speed is 2.0 $\mathrm{rad} / \mathrm{s}$ . As the block moves in the circle, the string is pulled down through a hole in the air table at the axis of rotation. Determine the rotational speed and tangential speed of the block when the string is 0.20 $\mathrm{m}$ from the axis. Equation Jeopardy 3 The equation below describes a process. 
Draw a sketch representing the initial and final states of the process and construct a word problem for which the equation could be a solution. $$\left(\frac{2}{5} m R^{2}\right)\left(\frac{2 \pi}{30 \text { days }}\right)=\left[\frac{2}{5} m\left(\frac{R}{100}\right)^{2}\right]\left(\frac{2 \pi}{T_{\mathrm{f}}}\right)$$ A student sits motionless on a stool that can turn friction free about its vertical axis (total rotational inertia $I ) .$ The student is handed a spinning bicycle wheel, with rotational inertia $I_{\text { wheel }},$ that is spinning about a vertical axis with a counterclockwise rotational velocity $\omega_{0}$ . The student then turns the bicycle wheel over (that is, through $180^{\circ}$ ). Estimate, in terms of $\omega_{0}$ , the final rotational velocity acquired by the student. Neutron star An extremely dense neutron star with mass equal to that of the Sun has a radius of about 10 km—about the size of Manhattan Island. These stars are thought to rotate once about their axis every 0.03 to 4 s, depending on their size and mass. Suppose that the neutron star described in the first sentence rotates once every 0.040 s. If its volume then expanded to occupy a uniform sphere of radius $1.4 \times 10^{8} \mathrm{m}$ (most of the Sun's mass is in a sphere of this size) with no change in mass or rotational momentum, what time interval would be required for one rotation? By comparison, the Sun rotates once about its axis each month. Determine the change in rotational kinetic energy when the rotational velocity of the turntable of a stereo system increases from 0 to 33 $\mathrm{rpm}$ . Its rotational inertia is $6.0 \times 10^{-3} \mathrm{kg} \cdot \mathrm{m}^{2} .$ A grinding wheel with rotational inertia I gains rotational kinetic energy $K$ after starting from rest. Determine an expression for the wheel's final rotational speed. Flywheel energy for car The U.S. Department of Energy had plans for a 1500 -kg automobile to be powered completely by the rotational kinetic energy of a flywheel. (a) If the 300 -kg flywheel (included in the 1500 -kg mass of the automobile) had a $6.0-\mathrm{kg} \cdot \mathrm{m}^{2}$ rotational inertia and could turn at a maximum rotational speed of 3600 $\mathrm{rad} / \mathrm{s}$ , determine the energy stored in the flywheel. (b) How many accelerations from a speed of zero to 15 $\mathrm{m} / \mathrm{s}$ could the car make before the fly- wheel's energy was dissipated, assuming 100$\%$ energy transfer and no flywheel regeneration during braking? The rotational speed of a flywheel increases by 40$\% . \mathrm{By}$ what percent does its rotational kinetic energy increase? Explain your answer. Rotating student A student sitting on a chair on a circular platform of negligible mass rotates freely on an air table at initial rotational speed 2.0 $\mathrm{rad} / \mathrm{s}$ . The student's arms are initially extended with 6.0 -kg dumbbells in each hand. As the student pulls her arms in toward her body, the dumbbells move from a distance of 0.80 $\mathrm{m}$ to 0.10 $\mathrm{m}$ from the axis of rotation. The initial rotational inertia of the student's body (not including the dumbbells) with arms extended is 6.0 $\mathrm{kg} \cdot \mathrm{m}^{2}$ , and her final rotational inertia is 5.0 $\mathrm{kg} \cdot \mathrm{m}^{2} .$ (a) Determine the student's final rotational speed. (b) Determine the change of kinetic energy of the system consisting of the student together with the two dumbbells. 
(c) Determine the change in the kinetic energy of the system consisting of the two dumbbells alone without the student. (d) Determine the change of kinetic energy of the system consisting of student alone without the dumbbells. (e) Compare the kinetic energy changes in parts (b) through (d). A turntable whose rotational inertia is $1.0 \times 10^{-3} \mathrm{kg} \cdot \mathrm{m}^{2}$ rotates on a frictionless air cushion at a rotational speed of 2.0 $\mathrm{rev} / \mathrm{s}$ . A 1.0 $\mathrm{-g}$ beetle falls to the center of the turntable and then walks 0.15 $\mathrm{m}$ to its edge. (a) Determine the rotational speed of the turntable with the beetle at the edge. (b) Determine the kinetic energy change of the system consisting of the turntable and the beetle. (c) Account for this energy change. Repeat the previous problem, only assume that the beetle initially falls on the edge of the turntable and stays there. Water turbine A Verdant Power water turbine (a "windmill" in water) turns in the East River near New York City. Its propeller is 2.5 m in radius and spins at 32 rpm when in water that is moving at 2.0 $\mathrm{m} / \mathrm{s} .$ The rotational inertia of the propeller is approximately 3.0 $\mathrm{kg} \cdot \mathrm{m}^{2} .$ Determine the kinetic energy of the turbine and the electric energy in joules that it could provide in 1 day if it is 100$\%$ efficient at converting its kinetic energy into electric energy. Flywheel energy Engineers at the University of Texas at Austin are developing an Advanced Locomotive Propulsion System that uses a gas turbine and perhaps the largest high speed flywheel in the world in terms of the energy it can store. The flywheel can store $4.8 \times 10^{8} \mathrm{J}$ of energy when operating at its maximum rotational speed of $15,000 \mathrm{rpm} .$ At that rate, the perimeter of the rotor moves at approximately $1,000 \mathrm{m} / \mathrm{s}$ . Determine the radius of the flywheel and its rotational inertia. Equation Jeopardy 4 The equations below represent the initial and final states of a process (plus some ancillary information). Construct a sketch of a process that is consistent with the equations and write a word problem for which the equations could be a solution. \begin{aligned}(80 \mathrm{kg})(9.8 \mathrm{N} / \mathrm{kg})(16 \mathrm{m}) &=\frac{1}{2}(80 \mathrm{kg}) v_{\mathrm{f}}^{2}+\frac{1}{2}\left(240 \mathrm{kg} \cdot \mathrm{m}^{2}\right) \omega_{\mathrm{f}}^{2} \\ v_{\mathrm{f}} &=(0.40 \mathrm{m}) \omega_{\mathrm{f}} \end{aligned} A bug of a known mass $m$ stands at a distance $d \mathrm{cm}$ from the axis of a spinning disk (mass $m_{\mathrm{d}}$ and radius $r_{\mathrm{d}} )$ that is ro- tating at $f_{1}$ revolutions per second. After the bug walks out to the edge of the disk and stands there, the disk rotates at $f_{\mathrm{f}}$ revolutions per second. (a) Use the information above to write an expression for the rotational inertia of the disk. (b) Determine the change of kinetic energy in going from the initial to the final situation for the total bug-disk system. Merry-go-round A 40 -kg boy running at 4.0 $\mathrm{m} / \mathrm{s}$ jumps tangentially onto a small stationary circular merry-go-round of radius 2.0 m and rotational inertia 80 kg # m2 pivoting on a frictionless bearing on its central shaft. (a) Determine the rotational velocity of the merry-go-round after the boy jumps on it. (b) Find the change in kinetic energy of the system consisting of the boy and the merry-go-round. 
(c) Find the change in the boy's kinetic energy. (d) Find the change in the kinetic energy of the merry-go-round. (e) Compare the kinetic energy changes in parts (b) through (d). Repeat the previous problem with the merry-go-round initially rotating at 1.0 $\mathrm{rad} / \mathrm{s}$ in the same direction that the boy is running. Repeat the previous problem with the merry-go-round initially rotating at 1.0 $\mathrm{rad} / \mathrm{s}$ opposite the direction that the boy was running before he jumped on it. Another merry-go-round A carnival merry-go-round has a large disk-shaped platform of mass 120 kg that can rotate about a center axle. $A 60$ -kg student stands at rest at the edge of the platform 4.0 $\mathrm{m}$ from its center. The platform is also atm rest. The student starts running clockwise around the edge of the platform and attains a speed of 2.0 $\mathrm{m} / \mathrm{s}$ relative to the ground. (a) Determine the rotational velocity of the platform. (b) Determine the change of kinetic energy of the system consisting of the platform and the student. Alexander L. A rough-surfaced turntable mounted on frictionless bearings initially rotates at 1.8 rev/s about its vertical axis. The rotational inertia of the turntable is $0.020 \mathrm{kg} \cdot \mathrm{m}^{2} . \mathrm{A} 200-\mathrm{g}$ lump of putty is dropped onto the turntable from 0.0050 $\mathrm{m}$ above the turntable and at a distance of 0.15 $\mathrm{m}$ from its axis of rotation. The putty adheres to the surface of the turntable. (a) Find the initial kinetic energy of the turntable. (b) What is the final rotational speed of the system (the lump of putty and turntable)? (c) What is the final linear speed of the lump of putty? Find the change in kinetic energy of (d) the turntable, (e) the putty, and (f) the putty-turntable combination. How do you account for your answers? Stopping Earth's rotation Suppose that Superman wants to stop Earth so it does not rotate. He exerts a force on Earth $\vec{F}_{\text { son }}$ at Earth's equator tangent to its surface for a time interval of 1 year. What magnitude force must he exert to stop Earth's rotation? Indicate any assumptions you make when completing your estimate. BIO Triceps and darts Your upper arm is horizontal and your forearm is vertical with a $0.010-\mathrm{kg}$ dart in your hand (Figure $\mathbf{P} 8.65$ ). When your triceps muscle contracts, your forearm initially swings forward with a rotational acceleration of 35 rad $/ s^{2}$ . Determine the force that your triceps muscle exerts on your forearm during this initial part of the throw. The rotational inertia of your forearm is 0.12 $\mathrm{kg} \cdot \mathrm{m}^{2}$ and the dart is 0.38 $\mathrm{m}$ from your elbow joint. You triceps muscle attaches 0.03 $\mathrm{m}$ from your elbow joint. BIO Bowling At the start of your throw of a 2.7 -kg bowling ball, your arm is straight behind you and horizontal (Figure P. 66 ). Determine the rotational acceleration of your arm if the muscle is relaxed. Your arm is 0.64 $\mathrm{m}$ long, has a rotational inertia of $0.48 \mathrm{kg} \cdot \mathrm{m}^{2},$ and has a mass of 3.5 $\mathrm{kg}$ with its center of mass 0.28 $\mathrm{m}$ from your shoulder joint. Leg lift You are doing one-leg leg lifts (Figure P8.67) and decide to estimate the force that your iliopsoas muscle exerts on your upper leg bone (the femur) when being lifted (the lifting involves a variety of muscles). 
The mass of your entire leg is 15 $\mathrm{kg}$, its center of mass is 0.45 $\mathrm{m}$ from the hip joint, its rotational inertia is 4.0 $\mathrm{kg} \cdot \mathrm{m}^{2}$, and you estimate that the rotational acceleration of the leg being lifted is 35 $\mathrm{rad/s}^{2}$. For calculation purposes assume that the iliopsoas attaches to the femur 0.10 $\mathrm{m}$ from the hip joint. Also assume that the femur is oriented $15^{\circ}$ above the horizontal and that the muscle is horizontal. Estimate the force that the muscle exerts on the femur. Punting a football Estimate the tangential acceleration of the foot and the rotational acceleration of the leg of a football punter during the time interval that the leg starts to swing forward in an arc until the instant just before the foot hits the ball. Indicate any assumptions that you make and be sure that your method is clear. Estimate the average rotational acceleration of a car tire as you leave an intersection after a light turns green. Discuss the choice of numbers used in your estimate. Door on fingers Estimate the average force that a car door exerts on a person's fingers if the door is closed when the fingers are in the door opening. Justify all assumptions you make. A yo-yo rests on a horizontal table. The yo-yo is free to roll but friction prevents it from sliding. When the string exerts one of the following tension forces on the yo-yo (shown in Figure P8.71), which way does the yo-yo roll? Try the problem for each force: (a) $\vec{T}_{\text{A, S on Y}}$; (b) $\vec{T}_{\text{B, S on Y}}$; and (c) $\vec{T}_{\text{C, S on Y}}$, each being the tension force exerted by the string on the yo-yo. [Hint: Think about torques about a pivot point where the yo-yo touches the table.] Running to change time interval of day At present, the motion of people on Earth is fairly random; the number moving east equals the number moving west, etc. Assume that we could get all of Earth's inhabitants lined up along the land at the equator. If they all started running as fast as possible toward the west, estimate the change in the length of a day. Indicate any assumptions you made. White dwarf A star the size of our Sun runs out of nuclear fuel and, without losing mass, collapses to a white dwarf star the size of our Earth. If the star initially rotates at the same rate as our Sun, which is once every 25 days, determine the rotation rate of the white dwarf. Indicate any assumptions you make. What is the force that provides the torque that causes the toast to rotate? (a) The normal force exerted by the plate on the trailing edge of the toast (b) The force due to air resistance exerted by the air on the toast (c) The gravitational force exerted by Earth on the toast when partly off the plate (d) The centripetal force of the toast's rotation (e) The answer depends on the choice of axis. The toast is more likely to fall on the jelly side if it makes how many revolutions? (a) 0.5 revolutions (b) 0.8 revolutions (c) 0.9 revolutions (d) No revolutions What does the number of revolutions that the toast sliding off the plate will make before it touches the floor depend on? (a) The amount of jelly (b) The height of its starting position (c) The length of the toast Why does toast have a better chance of landing jelly-side up if it is quickly shoved off the plate or table?
(a) It falls faster than if slowly slipping from the plate and does not have time to rotate. (b) It moves in a parabolic path. (c) The torque due to the gravitational force about the trailing edge of the toast as it leaves the plate has very little time to change the toast's rotational momentum. (d) The hand probably gives it an extra twist and the toast makes a full rotation instead of a half rotation. The length of the toast is about 0.10 $\mathrm{m}$ and the mass is about 0.050 $\mathrm{kg}$. Which answer below is closest to the torque about the trailing edge of the toast due to the force that Earth exerts on the toast when its trailing edge is just barely on the plate and the rest is off the plate? (a) Zero (b) 0.0025 $\mathrm{N} \cdot \mathrm{m}$ (c) 0.005 $\mathrm{N} \cdot \mathrm{m}$ (d) 0.025 $\mathrm{N} \cdot \mathrm{m}$ (e) 0.05 $\mathrm{N} \cdot \mathrm{m}$ If the La Rance tidal basin station in France could produce power 24 hours a day, which answer below is closest to the daily amount of energy in joules that it could produce? (a) 240 J (b) $240 \times 10^{6} \mathrm{J}$ (c) $6 \times 10^{9} \mathrm{J}$ (d) $2.5 \times 10^{10} \mathrm{J}$ (e) $2 \times 10^{13} \mathrm{J}$ Suppose a tidal basin is 5 $\mathrm{m}$ above the ocean at low tide and that the area of the basin is $4 \times 10^{7} \mathrm{m}^{2}$ (about 4 miles by 4 miles). Which answer below is closest to the gravitational potential energy change if the water is released from the tidal basin to the low-tide ocean level? The density of water is 1000 $\mathrm{kg}/\mathrm{m}^{3}$. [Hint: The level does not change by 5 m for all of the water; a short numerical estimate based on this hint is sketched after this question set.] (a) $5 \times 10^{8} \mathrm{J}$ (b) $5 \times 10^{11} \mathrm{J}$ (c) $1 \times 10^{12} \mathrm{J}$ (d) $5 \times 10^{12} \mathrm{J}$ (e) $1 \times 10^{13} \mathrm{J}$ The La Rance tidal basin can only produce electricity when what is occurring? (a) Water is moving into the estuary from the ocean. (b) Water is moving into the ocean from the estuary. (c) Water is moving in either direction. (d) The moon is full. (e) The moon is full and directly overhead. Why do water turbines seem more promising than tidal basins for producing electric energy? (a) Turbines are less expensive to build. (b) Turbines have less impact on the environment. (c) There are many more locations for turbines than for tidal basins. (d) Turbines can operate 24 hours/day versus only 10 hours/day for tidal basins. (e) All of the above Why do water turbines have an advantage over air turbines (windmills)? (a) Air moves faster than water. (b) The energy density of moving water is much greater than that of moving air. (c) Water turbines can float from one place to another, whereas air turbines are fixed. (d) All of the above (e) None of the above Which of the following is a correct statement about water turbines? (a) Water turbines can operate only in moving tidal water. (b) Water turbines can produce only a small amount of electricity. (c) Water turbines have not had a proof of concept. (d) Water turbines cause significant ocean warming. (e) None of the above are correct statements.
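A quick numerical check of the tidal-basin energy question above, as promised in its hint: this is a minimal, illustrative sketch only. It assumes the basin drains uniformly, so the released water's center of mass falls by h/2 rather than the full 5 m, and it uses g = 9.8 N/kg.

```python
# Rough estimate of the gravitational potential energy released when a tidal
# basin (area 4e7 m^2, water level 5 m above the low-tide ocean) drains out.
# Because the basin empties gradually, the average drop of the water's center
# of mass is only h/2, not the full h (this is what the hint points to).

rho = 1000.0      # density of water, kg/m^3
area = 4.0e7      # basin area, m^2
h = 5.0           # height of the basin water above low tide, m
g = 9.8           # gravitational field strength, N/kg

mass = rho * area * h            # total mass of released water, kg
delta_U = mass * g * (h / 2.0)   # the center of mass falls by h/2 on average

print(f"mass of water   = {mass:.2e} kg")
print(f"energy released = {delta_U:.2e} J")   # about 4.9e12 J
```

The result, roughly $4.9 \times 10^{12}$ J, is closest to choice (d).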
A wire of length 28 m is to be cut into two pieces. One of the pieces is to be made into a square and the other into a circle. What should be the length of the two pieces so that the combined area of the square and the circle is minimum? Let a piece of length $l$ be cut from the given wire to make a square. Then, the other piece of wire, to be made into a circle, is of length $(28-l)$ m. Now, side of square $=\frac{l}{4}$. Let $r$ be the radius of the circle. Then, $2 \pi r=28-l \Rightarrow r=\frac{1}{2 \pi}(28-l)$. The combined area of the square and the circle ($A$) is given by $A=(\text{side of the square})^{2}+\pi r^{2}$ $=\frac{l^{2}}{16}+\pi\left[\frac{1}{2 \pi}(28-l)\right]^{2}$ $=\frac{l^{2}}{16}+\frac{1}{4 \pi}(28-l)^{2}$ $\therefore \frac{d A}{d l}=\frac{2 l}{16}+\frac{2}{4 \pi}(28-l)(-1)=\frac{l}{8}-\frac{1}{2 \pi}(28-l)$ and $\frac{d^{2} A}{d l^{2}}=\frac{1}{8}+\frac{1}{2 \pi}>0$. Now, $\frac{d A}{d l}=0 \Rightarrow \frac{l}{8}-\frac{1}{2 \pi}(28-l)=0$ $\Rightarrow \frac{\pi l-4(28-l)}{8 \pi}=0$ $\Rightarrow (\pi+4) l-112=0$ $\Rightarrow l=\frac{112}{\pi+4}$. Thus, when $l=\frac{112}{\pi+4}$, $\frac{d^{2} A}{d l^{2}}>0$. $\therefore$ By the second derivative test, the area $A$ is minimum when $l=\frac{112}{\pi+4}$. Hence, the combined area is minimum when the length of the wire used in making the square is $\frac{112}{\pi+4} \mathrm{m}$, while the length of the wire used in making the circle is $28-\frac{112}{\pi+4}=\frac{28 \pi}{\pi+4} \mathrm{m}$.
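As a minimal numerical cross-check of the solution above (an illustrative sketch, not part of the original answer), the combined area $A(l)$ can simply be scanned over $0 \le l \le 28$ to confirm that the minimum falls at $l=\frac{112}{\pi+4} \approx 15.7$ m.

```python
import math

def combined_area(l, total=28.0):
    """Area of a square of perimeter l plus a circle of circumference total - l."""
    square = (l / 4.0) ** 2
    r = (total - l) / (2.0 * math.pi)
    circle = math.pi * r ** 2
    return square + circle

# Brute-force scan of the interval [0, 28] m with a fine step.
ls = [i * 28.0 / 100000 for i in range(100001)]
l_min = min(ls, key=combined_area)

print(f"numerical minimiser  l = {l_min:.4f} m")
print(f"analytical minimiser l = {112 / (math.pi + 4):.4f} m")   # 112/(pi + 4) is about 15.68 m
```

Both values agree to within the scan resolution, which supports the second-derivative-test conclusion above.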
Wafer-Scale Fabrication of Sub-10 nm TiO2-Ga2O3 n-p Heterojunctions with Efficient Photocatalytic Activity by Atomic Layer Deposition Hongyan Xu1, Feng Han1, Chengkai Xia1, Siyan Wang1, Ranish M. Ramachandran2, Christophe Detavernier2, Minsong Wei3, Liwei Lin3 and Serge Zhuiykov1, 4 Received: 19 February 2019 A correction to this article has been published in Nanoscale Research Letters 2019, 14:173. Wafer-scale, conformal, two-dimensional (2D) TiO2-Ga2O3 n-p heterostructures with a thickness of less than 10 nm were fabricated on Si/SiO2 substrates by the atomic layer deposition (ALD) technique for the first time, with subsequent post-deposition annealing at a temperature of 250 °C. The best deposition parameters were established. The structure and morphology of the 2D TiO2-Ga2O3 n-p heterostructures were characterized by scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), electrochemical impedance spectroscopy (EIS), etc. The 2D TiO2-Ga2O3 n-p heterostructures demonstrated efficient photocatalytic activity towards methyl orange (MO) degradation under UV light (λ = 254 nm) irradiation. The improvement of the TiO2-Ga2O3 n-p heterostructure capabilities is due to the development of defects at the Ga2O3-TiO2 interface, which were able to trap electrons faster. Keywords: TiO2-Ga2O3 n-p heterostructures; atomic layer deposition; 2D semiconductors. Fabrication of 2D p-n heterojunctions of semiconductor oxides is one of the key directions of future development of nanostructures with unique, distinguishable properties, as they are able to combine various outstanding features of both semiconductors at the nanoscale [1–4]. However, it is extremely challenging to fabricate them defect-free over the wafer area, particularly when the thickness of each oxide is only a few nanometers [2]. In order to overcome numerous manufacturing challenges, ALD technology has already established clear and unprecedented advantages in the development of conformal nano- and monolayers of semiconductor oxides and their 2D heterostructures with a thickness of less than 10 nm on the wafer scale with high aspect ratio [3, 5–7]. In addition, various new approaches were also initiated recently for the development of 2D heterostructures with enhanced functional capabilities [8–11]. They specifically targeted both the oxygen evolution reaction (OER) and the hydrogen evolution reaction (HER), as core processes for various renewable energy systems [12]. However, in comparison to HER, OER, a multistep four-electron process, is severely constrained by its sluggish kinetics [13]. More efforts have therefore been devoted to improving the conductivity of heterostructures and controlling the electronic structures of their surface active sites through the modulation of their morphology, constituent compositions, and/or dopants [8, 12]. Moreover, regulating the surface-adsorbed species may also provide an alternative, valuable approach to fine-tuning the interfacial properties, particularly at nanostructured heterojunctions, and the electronic structures of active materials [14]. More importantly, it was demonstrated that decreasing the free energy of the OER intermediates at the nano-interface would remarkably enhance the inherent electrochemical performance of the catalyst [13]. In this regard, surface engineering is well illustrated to improve the accessibility of the reactants and to alter the electrochemical activity of the catalysts [14].
To achieve such enhancements of the electrochemical properties of nanostructured heterostructures, various technological approaches have been utilized. Among them, the ALD technique can be used to deposit wafer-scaled nanomaterials while controlling their deposition rate at the Ångstrom scale. An additional vital advantage of ALD is its self-limited nature, depositing materials in an atomic layer-by-layer fashion [5, 6]. An alternative approach is the development of 2D C-MOFs via the combination of "through-space" and "through-bond" strategies [14]. In particular, hexahydroxytriphenylene ligand-based 2D C-MOFs possess M-O4 (M = transition metals) as their secondary building units and provide discrete metal-replicable layers as promising reactive sites for OER [14]. Moreover, these C-MOFs can remain stable in high-pH solution, which is quite important for OER. Thus, all the above-mentioned recent advancements indirectly confirmed that no other technologies for making 2D nanostructures, including sol-gel, chemical vapor deposition (CVD), RF sputtering, etc., are capable of delivering uniform deposition at the Ångstrom level over the large areas of Si/SiO2 wafers with precise control of the deposition rate and thickness. Therefore, most of the developed recipes for ALD of 2D nanostructures using specific precursors possess valuable know-how and represent a highly repeatable process on the semi-industrial scale [5, 15, 16]. One of the main 2D semiconductors successfully utilized in different photovoltaic applications is titanium dioxide (TiO2), which is a typical n-type semiconductor with a wide bandgap Eg = ~3.2 eV [5, 15–18]. There are numerous scientific reports focused on different approaches for the improvement of its properties, such as changing the thickness of nanostructured 2D TiO2 down to a monolayer [15, 16], doping TiO2 with other nanostructured semiconductors [5, 17], surface functionalization of 2D TiO2 [18] and making n-p heterojunctions [19]. In addition, rapid electron/hole recombination is blamed for the low quantum yields, which is still a big obstacle for the improvement of photocatalytic activity. Therefore, fabrication of efficient n-p heterojunctions has been proposed and attempted with different levels of success during the last few years [4, 17, 20–22]. Specifically, it was found that the fabricated n-p heterojunctions could significantly reduce the recombination rate of the photo-generated electron/hole pairs with a consequent enhancement of the overall photocatalytic activity [1, 23, 24]. Thus, the combination of p- and n-type semiconductor oxides has paved the way for the further development of n-p heterojunctions and optimization of their photocatalytic capabilities [25]. In this regard, surface functionalization of 2D n-type TiO2 by ALD of another p-type semiconductor on top of the TiO2 represents a unique strategy for making n-p heterojunctions and combining various outstanding properties of both semiconductors [5]. On the other hand, semiconductor oxides with a d10 electron configuration have recently attracted considerable attention for their superior activities as potential dopants. This is mainly owing to their conduction bands being formed by hybridized sp orbitals with a large dispersion, which enables them to generate electrons with large mobility [26]. Gallium oxide (Ga2O3), as a typical representative of such d10 semiconductor oxides, belongs to the group of transparent semiconducting oxides with a wide band gap and electrical conductivity.
It exhibits the largest band gap with Eg = 4.8 eV and thus a unique transparency from the visible into the UV region and good luminescence properties [27]. β-Ga2O3 is reported to be the most stable polymorph among five existing polymorphs of Ga2O3 within the high-temperature range [28]. Moreover, nontoxic β-Ga2O3 displayed significant potential for photocatalytic air purification, particularly for the elimination of toxic aromatic compounds [29]. Therefore, all these distinguishable properties of β-Ga2O3 [30] substantiated a lot of efforts for the best suitable technologies of Ga2O3 deposition at the nanoscale [31–33]. Notwithstanding the great attempts dedicated to the ALD of 2D semiconductor oxides during last few years, authors wish to stress that so far 2D TiO2-Ga2O3 n-p heterostructures with the thickness less than 10 nm have not yet been reported. In this work, 2D TiO2-Ga2O3 n-p heterostructures were ALD-fabricated on wafer-scale for the first time using Ti(N(CH3)2)4 and C33H57GaO6 as TiO2 and Ga2O3 precursors, respectively. Their optimal deposition parameters were established and structural and photocatalytic properties were investigated. Figure 1 illustrates the fabrication process of 2D TiO2-Ga2O3 n-p heterostructures on the Si/SiO2 substrate. Figure 2 schematically depicts the details of ALD depositions. After depositions, wafers were diced on 1.0 × 1.0 cm segments (Fig. 2a) for further testing. For ALD 2D TiO2-Ga2O3 n-p heterostructures Ti(N(CH3)2)4 and C33H57GaO6 (Strem Chemicals Inc., USA) were used as TiO2 and Ga2O3 precursors, respectively. Their graphical interpretations are given in Fig. 2b, c. The growth per cycle (GPC) yielded from the slopes of growth curves shown in Fig. 2d, e was calculated to be around 0.7 Å/cycle and 0.16 Å/cycle for TiO2 and Ga2O3, respectively. The growth curves were linear without any nucleation delay for both TiO2 and Ga2O3 samples, implying that the self-limited property of ALD growth process and the film thickness could be developed precisely by varying the number of ALD cycles. The lower growth rate of 2D Ga2O3 nano-films makes its applications on the doping and modification possible [34]. Noteworthy, the optimal ALD deposition parameters for each precursor are usually established after several initial trials [6]. After each deposition cycle the variable angle in situ ellipsometry measurements (J.A. Woollam M2000 DI) were carried to monitor the uniformity and to measure the thickness of films. For example, Fig. 2f illustrates the in situ ellipsometry measurements for 2D TiO2 with the average thickness of ~ 6.45 nm. Since the thickness measurements were found difficult on heterostructure, the Ga2O3 film growth was followed, using in situ ellipsometry measurement, on SiO2/Si substrate that was placed on the heater block, together with the sample. After the deposition, the Ga2O3 film thickness on heterostructure was confirmed by comparing the amount of material deposited on it and the reference SiO2/Si using X-ray fluorescence measurements [19]. 2D Ga2O3 films had an average thickness of ~ 1.5 nm, which resulted in the total thickness of 2D TiO2-Ga2O3 heterostructures to be ~ 8.0 nm. All fabricated samples were subsequently annealed in the air for 1 h at 250 °C with a heating rate of 0.5 °C/min. Schematic fabrication process of 2D TiO2-Ga2O3 n-p heterostructures a The optical image of wafer-scale ALD-deposited TiO2-Ga2O3 n-p heterostructures films, insert—an individual 1 cm2 electrode. 
b, c Graphical scheme of chemical formula of Ti(N(CH3)2)4 and C33H57GaO6 precursors, respectively. d, e The graph of thickness versus ALD cycle number of TiO2 and graph of thickness versus ALD cycle of Ga2O3 films, respectively. f The spectroscopic ellipsometry mapping of thickness of 2D TiO2 film Figure 3 shows SEM surface morphology images for both ALD-fabricated 2D TiO2 (thickness ~ 6.5 nm) and Ga2O3 (thickness ~ 1.5 nm) nano-films. It is noteworthy that the TiO2 nano-grains in the fabricated films were uniformly distributed over Si/SiO2 wafer and varied in size from approximately ~ 30 to ~ 70 nm prior to Ga2O3 deposition. Figure 3a depicts surface morphology of TiO2 nano-film consisting of the flat nano-particles. Then, the ALD-developed ~ 1.5-nm-thick Ga2O3 nano-films were fabricated on the top of ~ 6.5-nm-thick TiO2 nano-films. The ALD-developed sub-10 nm Ga2O3-TiO2 heterostructures were subsequently annealed at 250 °C. Thus, Fig. 3b depicts crystalline surface morphology of the Ga2O3 in heterostructure after annealing. The Ga2O3 nano-film consists of uniformly distributed Ga2O3 nano-grains with the average size from ~ 80 to ~ 110 nm. Owing to the extremely thin nature of the ALD-fabricated nano-films, employment of the X-ray diffraction technique for crystallinity investigation of these films was not possible. SEM images of the ALD-deposited 2D (a) TiO2 and (b) TiO2-Ga2O3 heterostructure nano-films Chemical composition and bonding states of 2D TiO2-Ga2O3 heterostructures were studied by XPS with Fig. 4a representing the TiO2-Ga2O3 heterostructure scan survey. The charge shift spectrum was calibrated for C1s peak at 284.8 eV. Three main elements of Ti, O, and Ga are clearly observed. In addition, C1s peak was also detected as it was originated from the reference to calibrate the binding energies of the peaks. Figure 4b depicts high-resolution two quasi-symmetrical Ga 2p1/2 and Ga 2p3/2 peaks for Ga-O bonding at 1145.2 eV and 1118.4 eV with a separation distance of 26.8 eV, which is consistent with the binding energy of Ga 2p for doped β-Ga2O3 [35, 36]. The weak energy peak for Ga 3d is centered at 21.1 eV, which is caused by the presence of Ga-O bond reported for p-type β-Ga2O3 films [37], but not observed for the n-type β-Ga2O3 structures [38]. The Ga 3d peak is asymmetrical, which was ascribed to the hybridization of Ga 3d and O 2s states near the valence band [39]. Figure 4c displayed the high-resolution scan of Ti 2p. The doublet peaks demonstrated in Fig. 4c correspond to Ti 2p3/2 and Ti 2p1/2 with the spin-orbital splitting of 6.2 eV, which were attributed to Ti+4 oxidation state. It should be noted that the obtained XPS results in this investigation are slightly different from our previous report on the development of TiO2 monolayer [15] and bi-layer [3] grown by ALD. This difference is reasonable considering the amount of Ti in the samples. XPS spectra of 2D TiO2-Ga2O3 n-p heterostructures. a Full survey scan spectrum. b Ga 2p region. c Ti 2p region. d O 1s region. and e Ga 3d region The O 1s peak in the XPS spectrum (Fig. 4d) could be deconvoluted into two major peaks. The main binding energy component centered at 531.53 eV is attributed to oxygen vacancies or OH-1 adsorbed species on the surface [38]. The second binding energy peak at 530.01 eV can be the characteristic of the lattice oxygen in the TiO2-Ga2O3 heterostructure. 
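The O 1s deconvolution described above is, in essence, a two-component peak fit. The short sketch below illustrates the idea on synthetic data; only the two peak positions (~530.0 eV and ~531.5 eV) are taken from the text, while the Gaussian line shapes, amplitudes, widths, and the absence of any background subtraction are illustrative assumptions rather than the authors' actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def two_peaks(x, a1, c1, s1, a2, c2, s2):
    # Sum of two Gaussian components: lattice oxygen (~530.0 eV) and
    # oxygen vacancies / adsorbed OH species (~531.5 eV).
    return gaussian(x, a1, c1, s1) + gaussian(x, a2, c2, s2)

# Synthetic O 1s spectrum for illustration only (not measured data).
energy = np.linspace(526.0, 536.0, 400)            # binding energy axis, eV
rng = np.random.default_rng(0)
signal = two_peaks(energy, 1.0, 530.0, 0.6, 0.7, 531.5, 0.8)
signal += rng.normal(0.0, 0.01, energy.size)       # a little measurement noise

# Initial guesses placed near the reported peak positions.
p0 = [1.0, 530.0, 0.5, 0.5, 531.5, 0.5]
popt, _ = curve_fit(two_peaks, energy, signal, p0=p0)

print("fitted peak centers (eV):", round(popt[1], 2), round(popt[4], 2))
```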
Very relevant to this investigation was our previous study on the ALD TiO2 bi-layer confirming the influence of the SiO2 substrate, where the bottom oxygen of TiO2 is shared with SiO2, making 2D TiO2 slightly non-stoichiometric [3]. Thus, this non-stoichiometry plays a critical role in the 2D TiO2-Ga2O3 heterostructure, as the thickness of the Ga2O3 ALD layer on top of TiO2 is only ~ 1.5 nm. The enlarged energy peak for Ga 3d is presented in Fig. 4e. The presence of the Ga 3d peak in the spectrum is a confirmation of the p-type conductivity of Ga2O3 in the heterostructure, as reported previously [37]. For further investigation of the conductivity type of 2D β-Ga2O3, additional 4.8-nm-thick Ga2O3 samples were subjected to Hall coefficient measurements at T = 25 °C. The measured Hall coefficient value of $8.292 \times 10^{4} \mathrm{cm}^{3}/\mathrm{C}$ independently confirmed the stable p-type performance of 2D Ga2O3. Figure 5 presents the plotted EIS spectra for 2D TiO2 (~ 3.5 nm), Ga2O3 (~ 3.5 nm), and 2D TiO2-Ga2O3 heterostructures (~ 8.0 nm), respectively. EIS measurements were carried out in air at a temperature of 25 °C and the impedance results were obtained using the Randles equivalent circuit. It is noteworthy that the fitted Nyquist plots in Fig. 5 revealed that the charge-transfer resistance (Rct = 4.5 kΩ) of the 2D TiO2-Ga2O3 heterostructures with a thickness of ~ 8.0 nm is about 2.7-fold lower than that of the ALD-developed 2D TiO2 (Rct ≈ 12.5 kΩ) and even slightly lower than that of 2D Ga2O3 (Rct ≈ 6.0 kΩ). This further indicates that the 2D TiO2-Ga2O3 heterostructures possess much faster charge-transfer characteristics than those of 2D TiO2 and Ga2O3. Although the measured impedance value for Ga2O3 was slightly higher than the reported value for 2D ALD-fabricated Ga2O3 [40], this was partially due to the sub-nanometer thickness of the Ga2O3 film in [40] compared to the ~ 3.5-nm-thick Ga2O3 in our experiments, and was also partially owing to the fact that the developed 2D Ga2O3 was not fully crystallized at the annealing temperature of 250 °C. Nyquist plots of the 2D TiO2, Ga2O3, and TiO2-Ga2O3 heterostructures tested in air at a temperature of 25 °C. All FTIR spectra of 2D Ga2O3, TiO2 and TiO2-Ga2O3 n-p heterostructures are summarized in Fig. 6. As the spectra for 2D TiO2 and Ga2O3 nearly overlap each other, they were therefore presented separately in Fig. 6a and Fig. 6b, respectively, in comparison with the spectrum of the 2D TiO2-Ga2O3 heterostructures. The peaks centered at about 1594 cm−1 are attributed to the O-H stretching and bending modes of the hydrated oxide surface and the adsorbed water [41]. Moreover, the adsorption of atmospheric CO2 on the surface of gallium oxide is characterized by the detection of bands at 1519 cm−1 and 1646 cm−1, which resulted from preparation and processing of the samples in ambient air [42]. More interesting results were observed in the perturbation area, presented as inserts in Fig. 7a and Fig. 7b, respectively. The IR band at 607.9 cm−1 is due to vibration of the Ga-O bond of GaO6 octahedra in the Ga2O3 lattice [43]. Its intensity has its maximum in the FTIR spectrum of 2D Ga2O3 and decreases in the FTIR spectrum of the 2D TiO2-Ga2O3 n-p heterostructure. Compared with the FTIR spectrum of the 2D Ga2O3 nano-film, a new peak at 464 cm−1 appeared in the FTIR spectrum of the ALD-fabricated 2D TiO2-Ga2O3 n-p heterostructures. This peak nearly overlaps the typical characteristic peak at 470 cm−1 for TiO2 [15].
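For context on the Randles equivalent circuit used to fit the Nyquist plots above, the sketch below computes the ideal impedance of a simplified Randles cell (a series resistance Rs with Rct in parallel with a double-layer capacitance Cdl; the Warburg element is omitted). The Rct values are the ones quoted in the text, whereas Rs and Cdl are purely illustrative assumptions.

```python
import numpy as np

def randles_impedance(freq_hz, r_s, r_ct, c_dl):
    """Ideal Randles cell without a Warburg element: Z = Rs + Rct / (1 + j*w*Rct*Cdl)."""
    w = 2.0 * np.pi * freq_hz
    return r_s + r_ct / (1.0 + 1j * w * r_ct * c_dl)

freqs = np.logspace(-2, 5, 200)      # 0.01 Hz to 100 kHz
r_s, c_dl = 100.0, 1e-6              # assumed series resistance (ohm) and capacitance (F)

for label, r_ct in [("TiO2", 12.5e3), ("Ga2O3", 6.0e3), ("TiO2-Ga2O3", 4.5e3)]:
    z = randles_impedance(freqs, r_s, r_ct, c_dl)
    # On a Nyquist plot (-Im(Z) vs Re(Z)) this model traces a semicircle
    # whose diameter equals Rct.
    print(f"{label:12s} semicircle diameter = {z.real.max() - r_s:8.0f} ohm (Rct = {r_ct:.0f})")
```

The smaller semicircle diameter of the heterostructure is what the text interprets as faster charge transfer.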
FTIR spectra of ALD-fabricated 2D TiO2 and TiO2-Ga2O3 n-p heterostructures (a) and 2D Ga2O3 and TiO2-Ga2O3 n-p heterostructures (b). PL spectrum of 2D TiO2-Ga2O3 n-p heterostructures at room temperature with bandgaps for TiO2 and Ga2O3. The photoluminescence (PL) technique is usually employed to investigate the migration, transfer and recombination rate of photo-induced electron-hole pairs in semiconductors. Figure 7 shows the room temperature (25 °C) PL spectra of ALD-fabricated 2D TiO2-Ga2O3 n-p heterostructures annealed at 250 °C with the details of the measured bandgaps for TiO2 and Ga2O3, respectively. There are two peaks in the PL spectra for the 2D TiO2-Ga2O3 n-p heterostructures (presented in the insert in Fig. 7): one is called near band edge emission (NBE), which is in the UV region due to the recombination of free excitons through an exciton–exciton collision process; and the second one is called deep level emission (DPE), which is caused by the impurities and/or structural defects in the crystal [41]. The DPE intensity in the 2D TiO2-Ga2O3 n-p heterostructures is lower than that in Ga2O3 [44], which indicates more efficient transfer and separation of the charge carriers owing to the electron-hole transfer in the heterojunctions between TiO2 and Ga2O3. Notably, the DPE of the 2D TiO2-Ga2O3 n-p heterostructures is shifted towards the UV region whereas the DPE of Ga2O3 is within the visible light region [44]. In addition, the selected annealing temperature of 250 °C did not allow full crystallization of the Ga2O3 nano-film in the heterostructure, which was reflected by the unchanged value of its bandgap (4.8 eV). However, in our previous investigation, it was found that a further increase of the annealing temperature (above 250 °C) of such extremely thin films causes their disintegration with subsequent agglomeration of their nano-grains into an island-like nanostructure [6]. On the contrary, the bandgap for TiO2 slightly changed to ~ 3.14 eV compared to its microstructural counterpart. Consequently, all the above material characterization experiments clearly confirmed the successful development of conformal and uniform sub-10 nm TiO2-Ga2O3 n-p heterostructures. Thus, these 2D TiO2-Ga2O3 n-p heterostructures were ALD-fabricated impurity-free on the wafer scale and subsequently annealed at 250 °C for the establishment of the developed n-p nano-interface. The photocatalytic degradation of MO under UV light irradiation (λ = 254 nm) was carried out at room temperature (25 °C) to evaluate the photocatalytic activity of the ALD-fabricated 2D TiO2, Ga2O3 and 2D TiO2-Ga2O3 n-p heterostructures. As presented in Fig. 8a, the 2D TiO2-Ga2O3 n-p heterostructure demonstrated higher photocatalytic activity compared to both 2D TiO2 and Ga2O3 under the same UV irradiation. Specifically, using the 2D TiO2-Ga2O3 n-p heterostructure as the catalyst, the MO degradation efficiency reached ~90% within 70 h, while the values for 2D Ga2O3 and TiO2 were approximately ~70% and ~65%, respectively, at the same time. Considering the fact that 2D Ga2O3 has not been fully crystallized at the annealing temperature of 250 °C, it is assumed that the weak chemical bond developed between 2D TiO2 and Ga2O3 is good enough to ensure the successful role of the n-p heterojunction in the photocatalytic activity. MO degradation efficiency for 2D TiO2, Ga2O3, and TiO2-Ga2O3 n-p heterostructures under λ = 254 nm UV light (a).
Schematic photocatalytic reaction process and charge separation transfer of 2D TiO2-Ga2O3 under UV light irradiation (b). The photocatalytic degradation mechanism of the 2D TiO2-Ga2O3 n-p heterostructure under λ = 254 nm UV light irradiation is proposed in Fig. 8b. It is common knowledge that the photocatalytic degradation of dyes mainly involves several active radical species such as hydroxyl radicals (·OH), holes (h+) and electrons (e−) [45]. The direct contact between 2D Ga2O3 and TiO2 induced the development of a heterojunction owing to the different energy levels. Under λ = 254 nm UV light irradiation, both Ga2O3 and TiO2 were excited to generate electrons and holes simultaneously. Large numbers of defects, constituting robust acceptor states in the bandgap, trap holes and prevent recombination. The various defect bands promote the electron-hole pair separation rate. The enhanced photocatalytic performance is mainly derived from the large number of acceptor states accompanying the Ga2O3 defects, especially in its not fully crystallized phase. The acceptor states not only expand the UV light absorption edge but also retard the rate of electron-hole pair recombination. In this regard, both the large number of defects and the acceptor states are responsible for enhancing the photocatalytic performance of the 2D TiO2-Ga2O3 n-p heterostructure. At the same time, holes in the VB of TiO2 can migrate into the VB of Ga2O3. Thus, the concentration of photo-generated holes on the Ga2O3 surface increases. The photo-generated holes play a vital role in the photo-degradation process of the 2D TiO2-Ga2O3 n-p heterostructures. Therefore, the increasing concentration of the photo-generated holes in the VB of Ga2O3 could also lead to its high photocatalytic activity. Moreover, the higher specific surface area obtained after annealing may additionally improve the overall photocatalytic activity of the 2D TiO2-Ga2O3 n-p heterostructures. The absorption and desorption of molecules on the surface of the catalyst is the first step in the degradation process [46, 47]. Thus, the higher surface-to-volume ratio in the surface morphology of the TiO2-Ga2O3 n-p heterostructures provides more unsaturated surface coordination sites. Therefore, the annealed 2D TiO2-Ga2O3 n-p heterostructures possess a higher specific surface area owing to the numerous ultrathin nano-grains, as presented in the SEM characterization. Consequently, the high surface-to-volume ratio combined with the suitable nano-interfaces obtained for the 2D TiO2-Ga2O3 n-p heterostructures resulted in their great photocatalytic activity towards efficient MO degradation. In this work, wafer-scale 2D TiO2-Ga2O3 n-p heterostructures with an average thickness of ~ 8.0 nm were successfully fabricated for the first time via a two-step ALD process by using Ti(N(CH3)2)4 and C33H57GaO6 as TiO2 and Ga2O3 precursors, respectively. Their optimal deposition parameters were established. The 2D TiO2-Ga2O3 n-p heterostructures were annealed at 250 °C for structural stabilization and development of the n-p nano-interface. Subsequently, the 2D TiO2-Ga2O3 n-p heterostructures were utilized for efficient MO degradation at room temperature under UV light (λ = 254 nm) irradiation. The 2D TiO2-Ga2O3 n-p heterostructures have clearly demonstrated unique capabilities and higher photocatalytic activity than that of pure 2D TiO2 and Ga2O3 for MO degradation.
Specifically, the effect of n-p heterojunction between n-type TiO2 and p-type Ga2O3 enabled a higher concentration of the photo-generated holes and larger-specific surface area, which ultimately led to its higher photocatalytic activity. Therefore, sub-10 nm, 2D n-p heterostructures can be potentially exploited as promising nano-materials for the practical photocatalytic devices. Synthesis 2D n-p Heterostructure All reagents and precursors were purchased from the commercial sources and represented analytical grade. They were used as received without further purification. The 4-in. Si/SiO2 wafers (12 Ω/cm) were utilized as substrates for ALD depositions, where the thickness of the native oxide was ~ 1.78–1.9 nm. 2D TiO2-Ga2O3 n-p heterostructures were prepared by a two-step fabrication method. Prior to ALD depositions, in order to reduce the influence of Si wafer on electrical measurements, an additional ~ 100-nm-thick SiO2 insulating layer was applied by CVD, (Oxford Instruments Plasmalab 100). After that 150-nm-thick Au/Cr films were deposited on SiO2/Si by the Electron Beam Evaporator method (Nanochrome II (Intivac, USA)) to develop electrodes for subsequent investigations. All ALD fabrications were carried out on Savannah S100 (Ultratech/Cambridge Nanotech). A pulse time of 5 s was used for both the Ga(TMHD)3 and O2 plasma, at a pressure of 3 × 10−3 mbar. The surface morphology and elemental analysis of ALD-fabricated sub-10 nm TiO2-Ga2O3 heterostructures were characterized by scanning electron microscopy (SEM, SU-500) and energy dispersive X-ray (EDX) spectroscopy (EDS, JEOL). Fourier transform infrared (FTIR) spectra were taken using a NEXUS Thermo Nicolet IR-spectrometer in the range 4000–400 cm−1 with a spectral resolution 2 cm−1. In order to investigate the surface chemistries of the developed samples, X-ray photoelectron spectroscopy (XPS) was employed in the ESCALAB system with AlK X-ray radiation at 15 kV. All XPS spectra were accurately calibrated by the C1s peak at 284.6 eV for the compensation of the charge effect. Hall effect measurement system (HMS3000) was employed at the room temperature to measure the Hall coefficient of Ga2O3 thin films by using a 0.55T magnet. EIS and all electrical measurements for 2D TiO2, Ga2O3, and TiO2-Ga2O3 heterostructures were carried out on AutoLab PGSTAT204 (Metrohm Autolab, B.V., Netherlands). Room temperature photoluminescence (PL) spectra of ALD 2D TiO2-Ga2O3 heterostructures were performed on an F-4600 fluorescent spectrophotometer (Hitachi Corp., Tokyo, Japan), and the maximal excitation wavelength was λ = 200 nm, and the filter was λ = 300 nm. The photocatalytic activity of 2D TiO2, Ga2O3 and 2D TiO2-Ga2O3 heterostructures for the MO (C14H14N3NaO3S) degradation in aqueous solution under the UV light was evaluated by measuring the absorbance of the irradiated solution. For this study, 2D TiO2-Ga2O3 heterostructures were placed into 100 mL of MO solutions with a concentration of 6 mg/L and a pH of 6.5. The solutions were continuously stirred in the dark for 2 h before illumination in order to reach the absorption-desorption equilibrium between MO and the 2D TiO2-Ga2O3 heterostructures. Then the solutions were irradiated by a 30 W low-pressure UV lamp (λ = 254 nm), which was located at the distance of 50 cm above the top of the dye solution. During the process, 5 mL solutions were pipetted every 12 h for the absorbance determination by a UNIC UV-2800A spectrophotometer using the maximum absorbance at 465 nm. 
All experiments were performed under the ambient condition and room temperature. The degradation efficiency of MO was defined as $$ D=\left[\left({\mathrm{A}}_0-{A}_t\right)/{\mathrm{A}}_0\right]\times 100\%, $$ where D is degradation efficiency, A0 is the initial absorbance of MO solution, and At is the absorbance of MO solution after UV irradiation within the elapsed time t. A correction to this article is available online at https://doi.org/10.1186/s11671-019-3028-5. EDS: Energy dispersive spectroscopy FTIR: Methyl orange Photoluminescence UV-vis: Ultraviolet-visible XPS: X-ray photoelectron spectroscopy S.Z. acknowledges the support from the "100 Talents Program" of Shanxi province, P.R. China. The work was performed in part at the Melbourne Center for Nanofabrication (MCN) in the Victoria Node of the Australian National Fabrication Facility (ANFF) and Ghent University, Belgium. C.D. acknowledges the Flemish Research Foundation (FWO) and the Special Research Fund BOF of Ghent University (GOA01G01513). R.K.R. is a postdoctoral fellow of the FWO. Authors acknowledged the help of Dr. M. Karbalai Akbari in some material characterization experiments. This study was supported and funded by the National Natural Science Foundation of China (Grant No. 61501408), the Shanxi Province International Cooperation Project (Grant No. 201703D421008) and the Research Project Supported by Shanxi Scholarship Council of China (No. 2017-094). The crystal structure and chemical bonding structure of the as-prepared samples were characterized by XPS (Fig. 4), electrochemical impedance spectroscopy (Fig. 5), FTIR (Fig. 6), and FL (Fig. 7) measurements. Surface morphology of the samples was investigated by SEM (Fig. 3). Photocatalytic tests were examined by UV light (λ = 254 nm) irradiation (Fig. 8). HX and SZ conceived the idea and designed the growth experiment and investigation process. HX, FH, and SW performed the growth experiments and photocatalytic tests. RMR and CD fabricated heterojunction samples by ALD. SW and HF performed FL and Raman tests. CX and SW performed FTIR, SEM, and XPS tests. HX, MW, LL, and SZ discussed all the results. HX, FH, CX, and SZ wrote the manuscript. All authors read, discussed, and corrected the manuscript, and approved the final manuscript. School of Materials Science and Engineering, North University of China, Taiyuan, 030051, People's Republic of China Department of Solid State Science, Ghent University, Krijgslaan 281/S1, B-9000 Ghent, Belgium Berkeley Sensor and Actuator Center, Department of Mechanical Engineering, University of California, Berkeley, CA 94720, USA Ghent University Global Campus, 119 Songdomunhwa-ro, Yeonsu-gu, Incheon, 21985, South Korea Xu H, Liang C, Wang S et al (2018) Effect of zinc acetate concentration on optimization of photocatalytic activity of p-Co3O4/n-ZnO heterostructures. Nanoscale Res Lett 13:195.View ArticleGoogle Scholar Zhuiykov S, Kats E, Carey B et al (2014) Proton intercalated two-dimensional WO3 nano-flakes with enhanced charge-carrier mobility at room temperature. Nanoscale 6:15029–15036.View ArticleGoogle Scholar Akbari MK, Hai Z, Wei Z et al (2017) Wafer-scale two-dimensional Au-TiO2 bilayer films for photocatalytic degradation of Palmitic acid under UV and visible light illumination. Mater Res Bull 95:380–391.View ArticleGoogle Scholar Lee JY, Jo WK (2016) Heterojunction-based two-dimensional N-doped TiO2/WO3 composite architectures for photocatalytic treatment of hazardous organic vapor. 
J Hazard Mater 314:22–31.View ArticleGoogle Scholar Hai Z, Akbari MK, Wei Z et al (2017) TiO2 nanoparticles-functionalized two-dimensional WO3 for high-performance supercapacitors developed by facile two-step ALD process. Mat Today Comm 12:55–62.View ArticleGoogle Scholar Zhuiykov S, Hyde L, Hai Z et al (2017) Atomic layer deposition-enabled single layer of tungsten trioxide across a large area. Appl Mater Today 6:44–53.View ArticleGoogle Scholar Zhuiykov S, Hai Z, Kawaguchi T et al (2017) Interfacial engineering of nanostructured materials by atomic layer deposition. Appl Surf Scien 392:231–243.View ArticleGoogle Scholar Peng L, Hu L, Fang X (2014) Energy harvesting for nanostructured self-powered photodetectors. Adv Func Mat 24:2591–2610.View ArticleGoogle Scholar Das S, Hosain MJ, Leung SF, Lenox A et al (2019) A leaf-inspired photon management scheme using optically tuned bilayer nanoparticles for ultra-thin and highly efficient photovoltaic devices. Nano Energy 58:47–56.View ArticleGoogle Scholar Yang W, Chen J, Zhang Y, Zhang Y, He JH, Fang X (2019) Silicon-compatible photodetectors: Trends to monolithically integrate photosensors with chip technology. Adv Func Mat. 1808182.View ArticleGoogle Scholar Alarawi A, Ramalingam V, He JH (2019) Recent advances in emerging single atom confined two-dimensional materials for water splitting applications. Mat Today Ener 11:1–23.View ArticleGoogle Scholar Gao W, Gou W, Zhou X, Ho JC et al (2018) Amine-modulated/engineered interfaces of NiMo electrocatalysts for improved hydrogen evolution reaction in alkaline solutions. ACS Appl Mater Interf 10:1728–1733.View ArticleGoogle Scholar Wei R, Fang M, Dong G, Lan C et al (2018) High-index faceted porous Co3O4 nanosheets with oxygen vacancies for highly efficient water oxidation. ACS Appl Mater Interf 10:7079–7086.View ArticleGoogle Scholar Li WH, Lv J, Li Q, Xie J et al (2019) Conductive metal-organic framework nanowire arrays for electrocatalytic oxygen evolution. J Mat Chem A 7:5069–5075.View ArticleGoogle Scholar Zhuiykov S, Akbari MK, Hai Z et al (2017) Data set for fabrication of conformal two-dimensional TiO2 by atomic layer deposition using tetrakis (dimethylamino) titanium (DTMAT) and H2O precursors. Mat Design 120:99–108.View ArticleGoogle Scholar Pozan GS, Isleyen M, Gokcen S (2013) Transition metal coated TiO2 nanoparticles: Synthesis, characterization and their photocatalytic activity. Appl Catal B Environ 140-141:537–545.View ArticleGoogle Scholar Hao C, Wang W, Zhang R et al (2018) Enhanced photoelectrochemical water splitting with TiO2@Ag2O nanowire arrays via p-n heterojunction formation. Sol Energy Mater Sol Cells 174:132–139.View ArticleGoogle Scholar Subramonian W, Wu TY, Chai SP (2017) Photocatalytic degradation of industrial pulp and paper mill effluent using synthesized magnetic Fe2O3-TiO2: Treatment efficiency and characterizations of reused photocatalyst. J Environ Manage 187:298–310.View ArticleGoogle Scholar Ramachandran RK, Dendooven J, Filez M et al (2016) Atomic layer deposition route to tailor nanoalloys of noble and non-noble metals. ACS Nano 10:8770–8777.View ArticleGoogle Scholar Hai Z, Du J, Akbari MK et al (2017) Carbon-doped MoS2 nanosheet photocatalysts for efficient degradation of methyl orange. Ionics 23:1921–1925.View ArticleGoogle Scholar Choi S, Bonyani M, Sun GJ et al (2018) Cr2O3 nanoparticle-functionalized WO3 nanorods for ethanol gas sensors. Appl Surf Sci. 
432:241–249.View ArticleGoogle Scholar Prabhu RR, Saritha AC, Shijeesh MR et al (2017) Fabrication of p-CuO/n-ZnO heterojunction diode via sol-gel spin coating technique. Mater Sci Eng B Solid-State Mater Adv Technol 220:82–90.View ArticleGoogle Scholar Hoa NT, Van Cuong V, Lam ND (2018) Mechanism of the photocatalytic activity of p-Si(100)/n-ZnO nanorods heterojunction. Mater Chem Phys 204:397–402.View ArticleGoogle Scholar Wang H, Zhao L, Liu X et al (2017) Novel hydrogen bonding composite based on copper phthalocyanine/perylene diimide derivatives p-n heterojunction with improved photocatalytic activity. Dye Pigment 137:322–328.View ArticleGoogle Scholar Li S, Hu S, Xu K et al (2017) Construction of fiber-shaped silver oxide/tantalum nitride p-n heterojunctions as highly efficient visible-light-driven photocatalysts. J Colloid Interface Sci 504:561–569.View ArticleGoogle Scholar Hou X, Wang I, Wu Z et al (2006) Efficient decomposition of benzene over a β-Ga2O3 photocatalyst under ambient conditions. Environ Sci Technol 40:5799–5803.View ArticleGoogle Scholar Zatsepin DA, Boukhvalov DW, Zatsepin AF et al (2018) Atomic structure, electronic states, and optical properties of epitaxially grown β-Ga2O3 layers. Superlatt Microstr. 120:90–100.View ArticleGoogle Scholar He H, Oriando R, Bianko MA et al (2006) First-principle study of the structural, electronic and optical properties of Ga2O3 in its monoclinic and hexagonal phases. Phys Rev B Condens Matter Mater Phys. 74:195123.View ArticleGoogle Scholar Hou Y, Wu L, Wang X et al (2007) Photocatalytic performance of α-, β-, and γ-Ga2O3 for the destruction of volatile aromatic pollutants in air. J Catal 250:12–18.View ArticleGoogle Scholar Ismail AA, Abdelfattah I, Faisal M, Helal A (2018) Efficient photodecomposition of herbicide imazapyr over mesoporous Ga2O3-TiO2 nanocomposites. J Hazard Mater. 342:519–526.View ArticleGoogle Scholar Zhao W, Yang W, Hao R et al (2011) Synthesis of mesoporous β-Ga2O3 nanorods using PEG as template: Preparation, characterization and photocatalytic properties. J Hazard Mat 192:1548–1554.View ArticleGoogle Scholar Wang J, Zhuang H, Zhang X et al (2011) Synthesis and properties of β-Ga2O3 nanostructures. Vacuum 85:802–805.View ArticleGoogle Scholar Krehula S, Ristić M, Kubuki S, Iida Y, Musić S (2015) Synthesis and microstructural properties of mixed iron–gallium oxides. J Alloys Compd. 634:130–141.View ArticleGoogle Scholar Ramachandran RK, Dendooven L, Botterman J et al (2014) Plasma enhanced atomic layer deposition of Ga2O3 thin films. J Mater Chem A 2:19232–19238.View ArticleGoogle Scholar Mi W, Li Z, Luan C et al (2015) Transparent conducting tin-doped Ga2O3 films deposited on MgAl2O4 (1 0 0) substrates by MOCVD. Ceram Int 41:2572–2575.View ArticleGoogle Scholar Li WH, Peng YK, Wang C et al (2017) Structural, optical and photoluminescence properties of Pr-doped β-Ga2O3 thin films. J Alloys Comp. 697:388–391.View ArticleGoogle Scholar Qian YP, Cuo DY, Chu XL et al (2017) Mg-doped p-type β-Ga2O3 thin film for solar-blind ultraviolet photodetector. Mat Lett 209:558–561.View ArticleGoogle Scholar Chikoidze E, Fellous A, Perz-Tomas A (2017) P-type β–gallium oxide: A new perspective for power and optoelectronic devices. Mat Today Phys 3:118–126.View ArticleGoogle Scholar Navarro-Quezada A, Alame S, Esser N et al (2015) Near valence-band electronic properties of semiconducting β-Ga2O3 (100) single crystals. 
Phys Rev B 92:195306.View ArticleGoogle Scholar Chandiran AK, Tetreault N, Humphry-Baker R, Kessler F (2012) Subnanometer Ga2O3 tunnelling layer by atomic layer deposition to achieve 1.1 V open-circuit potential in dye-sensitized solar cells. Nano Lett. 12:3941–3947.View ArticleGoogle Scholar Girija K, Thirumalairajan S, Mastelaro VR, Mangalaraj D (2015) Photocatalytic degradation of organic pollutants by shape selective synthesis of β-Ga2O3 microspheres constituted by nanospheres for environmental remediation. J Mater Chem A 3:2617–2627.View ArticleGoogle Scholar Liu X, Qiu G, Zhao Y, Zhang N, Yi R (2007) Gallium oxide nanorods by the conversion of gallium oxide hydroxide nanorods. J Alloys Compd. 439:275–278.View ArticleGoogle Scholar Ghazali NM, Mahmood MR, Yasui K, Hashim AM (2014) Electrochemically deposited gallium oxide nanostructures on silicon substrates. Nanoscale Res. Lett. 9:120.Google Scholar Liu Q, Yu Z, Li M et al (2017) Fabrication of Ag/AgBr/Ga2O3 heterojunction composite with efficient photocatalytic activity. Mol Catal. 432:57–63.View ArticleGoogle Scholar Rao R, Rao AM (2005) Blueshifted Raman scattering and its correlation with the [110] growth direction in gallium oxide nanowires. J Appl Phys. 98:094312.View ArticleGoogle Scholar Saksornchai E, Kavinchan J, Thongtem S, Thongtem T (2018) Simple wet-chemical synthesis of superparamagnetic CTAB-modified magnetite nanoparticles using as adsorbents for anionic dye Congo red removal. Mater Lett 213:138–142.View ArticleGoogle Scholar Saksornchai E, Kavinchan J, Thongtem S, Thongtem T (2017) The Photocatalytic application of semiconductor stibnite nanostructure synthesized via a simple microwave-assisted approach in propylene glycol for degradation of dye pollutants and its optical property. Nanoscale Res Lett 12:589–598.View ArticleGoogle Scholar
In vitro and in silico antioxidant and antiproliferative activity of rhizospheric fungus Talaromyces purpureogenus isolate-ABRF2 Mahendra Kumar Sahu1, Komal Kaushik2,3, Amitava Das2,3 & Harit Jha1 The present study evaluated the potential biological activities of rhizospheric fungi isolated from the Achanakmar Biosphere Reserve, India. Fungus, Talaromyces purpureogenus isolate-ABRF2 from the soil of the Achanakmar biosphere was characterized by using morphological, biochemical and molecular techniques. Fungus was screened for the production of secondary metabolites using a specific medium. The metabolites were extracted using a suitable solvent and each fraction was subsequently evaluated for their antioxidant, antimicrobial, antiproliferative and anti-aging properties. The ethanolic extract depicted the highest antioxidant activity with 83%, 79%, 80% and 74% as assessed by ferric reducing power, 2,2-diphenyl 1-picrylhydrazyl, 2,2′-azino-bis3-ethylbenzthiazoline-6-sulfonic and phosphomolybdenum assays, respectively. Similarly, ethanolic extracts depicted marked antimicrobial activity as compared with standard antibiotics and antifungal agents as well as demonstrated significant antiproliferative property against a panel of mammalian cancer cell lines. Furthermore, different fractions of the purified ethanolic extract obtained using adsorption column chromatography were evaluated for antiproliferative property and identification of an active metabolite in the purified fraction using gas chromatography–mass spectroscopy and nuclear magnetic resonance techniques yielded 3-methyl-4-oxo-pentanoic acid. Thus, the present study suggests that the active metabolite 3-methyl-4-oxo-pentanoic acid extracted from Talaromyces purpureogenus isolate-ABRF2 has a potential antiproliferative, anti-aging, and antimicrobial therapeutic properties that will be further evaluated using in vivo studies in future. Fungi are a major source of metabolites with a wide range of biological and therapeutic activities (Baker and Alvi 2004). The fungal secondary metabolites have been reported as antibiotics, therapeutic agents as well as undesirable immunosuppressant and toxic substances (Miranda et al. 2010). They also show antimicrobial (erythromycin and bacitracin), antiproliferative, anti-aging, anti-inflammatory, anticancer (Maheshwari et al. 2017), hypocholesterolemic (Kwon et al. 2002), antifungal (Nikoletti et al. 2007) antiviral (Nishihara et al. 2000) and antioxidant (Gangadevi and Muthumary 2008) activities. The natural antioxidant activity of fungal secondary metabolites thus has a plausible major role in developing therapeutic interventions against cancer and myocardial infarction (Maritim et al. 2003). The fungal metabolites have been also reported to regulate different metabolic pathways and cellular activities due to pleiotropic action (Badri et al. 2009). Rhizospheric fungi are a major untapped source of novel metabolites which have not yet been explored and screened to assess their therapeutic potential. Thus, in the present study, fungi isolated from pristine soil of the Achanakmar Biosphere region (located in central India) were evaluated for their potent biological activities, extraction, and identification of novel secondary metabolites with putative therapeutic potentials using standard biochemical and cell biological approaches. 
Briefly, the extract of the strain with potent activity was selected and the secondary metabolites were purified by adsorption column chromatography, gas chromatography–mass spectroscopy (GC–MS) and nuclear magnetic resonance (NMR) techniques which were subsequently evaluated for antioxidant, antimicrobial and antiproliferative activities using in vitro analysis that corroborated well with in silico molecular docking analysis. Potato dextrose agar, potato dextrose broth, and Czapek Dox, malt extract, yeast extract were procured from Hi-Media, India. 2,2-diphenyl-1-picrylhydrazyl (DPPH), 2,2′ azinobis (3-ethyl benzthiazoline-6-sulphonic acid) (ABTS+), ascorbic acid, potassium persulfate, streptomycin fluconazole were purchased from Sigma-Aldrich, USA. All the reagents and chemicals used were of analytical reagent grade. Isolation and identification of the fungus Different fungal strains were isolated from the rhizospheric soil of Achanakmar Biosphere, Bilaspur, Chhattisgarh, India, by the method as described earlier (Radhika and Rodrigues 2010). The potent fungal isolate was characterized and identified based on morphological characterization using a compound microscope (AXIO SCOPE.A1 HBO 50, Zeiss, Germany) and scanning electron microscope (FEI Nova NanoSEM450, Thermo Fisher USA). Molecular characterization of the identified fungal isolate was performed by partial gene sequencing of the internal transcribed spacer (ITS) regions at a commercial center (Chromous Biotech Pvt. Ltd., Bangalore, India). Briefly, fungal DNA was isolated using a DNA isolation kit (Invitrogen, USA). PCR was performed for amplification of the ITS region with fungal ITS specific degenerate primer (forward primer 5′-TCMGTAGGTGADCCWBCGS-3′ and reverse primer 5′-TCCTNCGYTKATKGVTADGH-3′) followed by amplified ITS sequence alignment with similar sequences of other fungi using the BLASTN program (NCBI, USA). Mega 6 software (Pennsylvania State University, USA) was used for the construction of the phylogenetic tree with 26 aligned sequences of fungi using maximum likelihood analysis and Tamura 3-parameter nucleotide substitution methods (Aharwar and Parihar 2019). Microbial source and growth Five bacterial strains were used for evaluating the antibacterial activity of fungal metabolites. These bacterial strains namely, Bacillus circulans (Gram-positive, rods MTCC-7906), Bacillus subtilis (Gram-positive, rods MTCC-441), Escherichia coli (Gram-negative rods, MTCC-739), Ralstonia eutropha (Gram-positive Rhodococcus, MTCC-2487), Staphylococcus aureus (Gram-positive cocci, MTCC-96) and fungal cultures of Candida albicans (MTCC-3017), Saccharomyces cerevisiae specific mutant strain BY4742 (MTCC-3157) were procured from the microbial-type culture collection (MTCC, CSIR-IMTECH, Chandigarh, India) and used in the investigation of antimicrobial properties of the secondary fungal metabolites. All five bacterial cultures were grown overnight on Luria–Bertani agar (LB) slants and maintained at 4 °C for further experiments. Purification of ethanolic extract Ethanolic extract was purified to obtain active metabolites for further investigation. In brief, 1 g of dry extract was mixed with ethanol (1:1 w/v) and subjected to adsorption chromatography on a glass column packed with silica gel (60–120 mesh size) in toluene. Elution was carried out by standard method with increasing polarity of toluene, chloroform, ethyl acetate, methanol, and acetonitrile. 
Fractions obtained from each solvent were collected and subjected to spectrophotometric evaluation, and selected fractions were further concentrated using a rotary evaporator. High-performance liquid chromatography (HPLC) analysis The collected extracts and fractions were used for initial screening on TLC plates. The screening sample was dissolved (1:1 w/v) in HPLC-grade methanol. The purity of the compound was confirmed by HPLC (Shimadzu Liquid Chromatography LC10A; Shimadzu Corp., Kyoto, Japan) using a C18 analytical column. The injected sample (20 µL) was eluted at a flow rate of 1 mL/min under an isocratic mobile phase consisting of acetonitrile:water:acetic acid (18:80:2, v/v), followed by analysis of the elution profile at 280 nm (Shen et al. 2007). Identification of compound using spectroscopy techniques UV–visible spectroscopy The sample was dissolved in ethanol. The absorption maximum (λmax) of the purified compound was determined (UV-1800 Shimadzu spectrophotometer, Shimadzu, Kyoto, Japan) by scanning over the 200–800 nm range to record the spectrum. Fourier transform infrared (FTIR) spectroscopy FTIR was used for functional group analysis of the sample. An infrared spectrum of the purified compound was recorded on an FTIR spectrometer (I05 Nicolet Avatar 370, Thermo Scientific, USA) at room temperature. Briefly, the purified compound (5 mg) was mixed with spectroscopic-grade KBr (95 mg) for pellet preparation. The IR spectrum was recorded in transmission mode over the frequency range of 4000–400 cm−1. A KBr pellet without sample was used as the control. Gas chromatography–mass spectroscopy (GC–MS) GC–MS analysis of the samples was performed (Shimadzu GC-MS-QP2020; Kyoto, Japan) for qualitative and quantitative analysis using the electron impact ionization (70 eV) method and mass spectra. The components were identified based on their relative indices and by comparison to the mass spectra of standards available in the GC–MS library of the National Institute of Standards and Technology (NIST; Gaithersburg, Maryland, United States). Further, the percentage of constituents was measured based on the peak area. 1H NMR of the sample was performed on a Bruker Avance III 400 MHz NMR spectrophotometer. In brief, 5 mg of the sample was dissolved in DMSO and centrifuged at 8000 rpm for 10 min and then analyzed by 1H NMR as described earlier (Morcombe and Zilm 2003). Chemical shifts were expressed in terms of parts per million (δ scale) and elemental analysis was carried out at the Centre for Bio-separation Technology (CBST), Vellore Institute of Technology, Vellore, India. The data were used for the putative prediction of the molecular formula and structural characteristics of the active compound by comparison with the standards. Bioactive properties of isolated compound The antioxidant and antimicrobial functional characteristics of the isolated pure compound from T. purpureogenus and the extracts were evaluated in triplicate with appropriate blanks and controls (ascorbic acid for antioxidant and streptomycin for antibacterial activity) in all the experiments. 2,2-Diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity The DPPH (0.2 mM) radical scavenging activity was measured using a modified method as described previously (Dhale and Vijay-Raj 2009).
The activity was evaluated using various concentrations (31.25–125 µg/mL) of the purified compound at 517 nm and calculated according to the formula:
$$\text{DPPH scavenging activity}\ (\%) = \frac{A_{0} - A_{30}}{A_{0}} \times 100,$$
where A0 was the absorbance of the control (DPPH without sample) and A30 was the absorbance of the sample with DPPH (Barapatre et al. 2015).

Ferric reducing antioxidant power (FRAP) assay
The ferric reducing antioxidant power (FRAP) assay was performed according to the method of Benzie and Strain (1996). Briefly, when a ferric tripyridyltriazine (Fe III-TPTZ) complex gets reduced to the ferrous (Fe II) form at low pH, an intense blue color appears. Low pH is responsible for maintaining iron solubility and for a decrease in the ionization potential that drives electron transfer and increases the redox potential. The FRAP activity of the sample was determined at various concentrations (66.7–166.7 µg/mL) by observing the increase in absorbance at 595 nm.

2,2′-Azino-bis(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS) antioxidant assay
The ABTS radical cation decolorization assay is based on the inhibition by antioxidants of the absorbance imparted by the radical cation 2,2′-azinobis-(3-ethylbenzothiazoline-6-sulphonate) (ABTS•+). Test samples were mixed with the ABTS•+ solution and incubated for 2 h in the dark, followed by observation of the absorbance at 734 nm (Aadil et al. 2014).

Phosphomolybdenum assay
The phosphomolybdenum method was utilized for spectrophotometric quantitation of total antioxidant capacity by combining the sample with 1 mL of reagent solution (0.6 M sulfuric acid, 28 mM sodium phosphate and 4 mM ammonium molybdate), followed by incubation at 95 °C for 90 min. The samples were cooled to room temperature, and the absorbance of the test solution was measured at 695 nm against a blank (Sowndhararajan and Kang 2013).

The antibacterial activity assay was carried out using the disc diffusion method (Balachandran et al. 2016) against Bacillus circulans (MTCC-7906), Bacillus subtilis (MTCC-441), Escherichia coli (MTCC-739), Ralstonia eutropha (MTCC-2487) and Candida albicans (MTCC-3017) as test organisms. Samples impregnated onto Whatman No. 1 filter paper discs were used to determine antibacterial activity. Plates were incubated for 24 h at 37 °C, followed by measurement of the zones of inhibition. Ethanol was used as the vehicle control and streptomycin (1 mg/mL) as the positive control.

In silico analysis by molecular docking
Molecular docking of the isolated and characterized fungal metabolite, 3-methyl-4-oxo-pentanoic acid, together with standard anticancer and anti-aging drugs, was carried out against selected target proteins to study the binding affinity, binding energy, binding mode and scoring functions. The structures of the targets were retrieved from the Protein Data Bank (PDB). The molecular docking and ligand–receptor interaction study was carried out using Molecular Operating Environment 2008 (moe.2008) software (Chemical Computing Group, Montreal, Canada) to investigate possible binding conformations of the receptor–ligand complex (Naik et al. 2011). For anti-aging potential, the targets chosen were yeast Taf14 containing the YEATS domain at the N-terminus (Schulze et al. 2010) (RCSB PDB ID: 2L7E); the yeast Hsp90 chaperone N-terminal domain (Huai et al.
2005) (RCSB PDB ID: 1AH8); and the Fe–S-containing yeast protein Dre2 with an S-adenosyl methionine methyltransferase-like domain (Soler et al. 2012) (RCSB PDB ID: 2KM1). Similarly, for anticancer activity, the c-MYC promoter of DU-145, a human prostate cancer cell line (Luoto et al. 2010) (RCSB PDB ID: 6AU4); Focal Adhesion Kinase (FAK), an important mediator of cell adhesion, growth, proliferation, survival, angiogenesis, and migration, which is often disrupted in cancer cells (RCSB PDB ID: 1MP8, MCF-7) (Golubovskaya 2010); the vimentin coil 1A/1B fragment, which together with actin filaments and microtubules forms the intermediate filaments (IFs), the basic cytoskeletal components of metazoan cells (Chernyatina et al. 2012) (RCSB PDB ID: 3SSU, MDA-MB-231); and the VHS domain of the TOM1 protein of Homo sapiens, found at the N-termini of selected proteins involved in intracellular membrane trafficking (RCSB PDB ID: 1ELK, MCF-7) (Misra et al. 2000) were chosen for the docking study.

Anti-aging analysis
A traditional spot assay was employed for evaluation of the anti-aging capacity of the identified compounds in the eukaryotic haploid organism (yeast) Saccharomyces cerevisiae specific mutant strain BY4742 (MTCC-3157), as described earlier (Zhao et al. 2017). The BY4742 strain of S. cerevisiae was inoculated in 5 mL YPD (yeast peptone dextrose) broth (Hi-Media, Mumbai, MH, India) and incubated overnight at 28 ± 2 °C till the exponential phase. 40 µL of fungal extract was spotted on the YPD agar plate; fluconazole (nystatin) was used as negative control, while culture without extract, acarbose, and rapamycin were treated as positive controls. Growth of the yeast strain was observed after incubation for 72 h at 28 ± 2 °C. Further, yeast growth curve determination was also performed using the methods of Wei et al. (2017), Delaney et al. (2013) and Tauk-Tornisielo et al. (2007) with slight modification. S. cerevisiae BY4742 inoculum was prepared using yeast dextrose peptone nutrient medium at 28 °C. 100 µL of extract (10 mg) was taken and 100 µL of inoculum was added. Acarbose and rapamycin were taken as controls. Absorbance was measured at 600 nm using an ELISA reader at different time points.

Antiproliferative activity
The antiproliferative activity of the extracts and fractions was determined in various tissue-specific cancer cell lines such as breast cancer (MDA-MB-231, MDA-MB-468, and MCF-7), liver cancer (HepG2), lung cancer (A-549), prostate cancer (DU-145) and a primary control cell line (HEK-293) by the sulforhodamine B (SRB) assay as described previously (Manupati et al. 2017). Briefly, each cell line was trypsinized and plated in 96-well plates at a density of 5 × 10³ cells per well. After 24 h incubation, cells were treated with increasing concentrations (1, 10, 100 and 300 mg/mL) of all the column fractions, viz. toluene, chloroform, ethyl acetate, methanol, and acetonitrile, for 48 h, followed by fixation and SRB staining of the treated cells along with the respective vehicle controls. Doxorubicin was used as a positive control. The optical density at 510 nm was measured using a multimode reader (Perkin Elmer, Germany), and percent inhibition with IC50 was calculated using GraphPad Prism 6.0 as described previously (Manupati et al. 2019). All assays were performed in triplicate and the results were validated statistically using one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. All the tests were considered statistically significant at p < 0.05. The analysis was carried out using GraphPad Prism Software Version 5.0.
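The statistical workflow just described (one-way ANOVA followed by Tukey's multiple-comparison test on triplicate measurements) can be reproduced with standard Python libraries. The sketch below is generic: the group names and percentage values are made up, and the authors actually used GraphPad Prism.

```python
# One-way ANOVA followed by Tukey's HSD on made-up triplicate measurements.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control":    [78.2, 80.1, 79.4],   # illustrative % activity values
    "fraction_A": [62.5, 64.0, 61.8],
    "fraction_C": [70.3, 71.9, 69.5],
}

# Global test: is there any difference among group means?
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons at alpha = 0.05
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```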
Results were represented as the mean ± standard deviation (SD). In the present work, isolated fungal strain, Talaromyces purpureogenus isolate-ABRF2 from the soil sample of Achanakmar Biosphere Reserve of Chhattisgarh forest, India, was screened for potential therapeutic secondary metabolites. Initially, we cultured the fungus in different nutrient media to identify the optimum growth conditions. The ethanolic extract of the isolated fungus Talaromyces purpureogenus isolate-ABRF2 was subjected to preliminary screening based on antioxidant and antibacterial activity. Potent crude extract (brown sticky, Fig. S1A) was further column purified based on the polarity of the solvent. Purified bioactive secondary metabolites were identified using analytical techniques such as UV–visible spectrophotometry, TLC, HPLC, GC–MS, and NMR. The identified bioactive metabolite, 3-methyl-4-oxo-pentanoic acid (Additional file 1: Fig. S1B) was evaluated for antimicrobial, anticancer and anti-aging properties through in vitro and in silico studies. Molecular taxonomic characterization and phylogenetic analysis of fungal isolate The isolate, ABRF2 was morphologically characterized by cotton blue staining (Additional file 1: Fig. S2A, S2B), bright field microscopy (Additional file 1: Fig. S2C) and scanning electron microscopy (Additional file 1: Fig. S2D). It displayed white green color mycelia, with ellipsoidal conidia, thick-walled and dark red coloration, dense sporulation and plane surface colony. Colonies produce red soluble pigments on nutrient media. Molecular characterization of fungi was performed using the PCR technique, agarose gel electrophoresis and sequencing reaction with the ITS rDNA sequence. Agarose gel electrophoresis was used for the analysis of isolated genomic DNA and the PCR amplified product of the ITS region of the fungal isolate. The amplicon size of the PCR was observed to be 422 base pairs. The amplicon was sequenced and submitted to the GenBank database (NCBI accession number, MG905442). It was further analyzed for sequence similarity using the BLASTN program of NCBI. The sequence showed a maximum of 72% similarity with Talaromyces purpureogenus strain NFML_X. A phylogenetic tree was prepared by 'maximum likelihood'—a statistical method using Mega 6 software with the Tamura 3-parameter model (Substitution Model) and bootstrap method (phylogeny test). The phylogenetic tree of selected taxa was formed by the neighbor-joining method. The model used was Tamura 3-parameter with 1000 Boost strap replications. The total numbers of sites were 3555. Structural characterization by spectroscopic methods The potent crude ethanolic extract and column purified fraction of isolate ABRF2 was further spotted on a silica gel TLC sheet. The components of the sample were separated by TLC and observed by visualization at UV-366 nm. Crude ethanolic extract (spot A and B) and column fractions of methanolic extract, on TLC separation, resolved to give fluorescent spots under UV light (spot D). The Rf value for spot A was observed to be 0.15 while that for spot B and C was found to be 0.36. The Rf values of spot D was observed to be 0.41 (Additional file 1: Fig. S3). The HPLC analysis of Talaromyces purpureogenus isolate-ABRF2 fungus with gradient elution depicted nine peaks in its ethanol extract spectra (Fig. 1a). The peaks with the following retention time (min) 2.35, 2.55, 2.8167, 3.4167, 3.8, 3.9167, 4.6, 4.85 and 7.0167 were resolved. The compound of interest was eluted at Rt 2.8167. 
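Peak picking of the kind described for the HPLC trace above (several peaks reported by their retention times in minutes) is often automated. Below is a minimal sketch on a synthetic chromatogram using scipy's find_peaks; the Gaussian peak positions loosely echo three of the reported retention times, but none of this reflects the actual Shimadzu data.

```python
# Locate peaks in a synthetic chromatogram and report retention times (illustrative only).
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 2001)                      # time axis, minutes
rng = np.random.default_rng(0)
# Synthetic absorbance trace at 280 nm: three Gaussian peaks plus baseline noise.
signal = (1.0 * np.exp(-((t - 2.8) / 0.08) ** 2)
          + 0.6 * np.exp(-((t - 4.6) / 0.10) ** 2)
          + 0.4 * np.exp(-((t - 7.0) / 0.12) ** 2)
          + 0.01 * rng.standard_normal(t.size))

peaks, props = find_peaks(signal, height=0.1, distance=50)
for idx, h in zip(peaks, props["peak_heights"]):
    print(f"retention time {t[idx]:.2f} min, height {h:.2f} AU")
```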
It was soluble in organic solvents including methanol, chloroform, acetone and ethanol (Fig. 1a). The structure of the selected compound was analyzed using UV–visible spectroscopy. The ethanolic fraction of the fungal extract depicted an absorption spectrum overlapping with that of standard pentanoic acid, suggesting the isolated compound to be pentanoic acid (C5H10O2), with absorption between 300 and 400 nm (Fig. 1b). The purified compound, when scanned over a range of 200–800 nm, exhibited peaks between 300 and 400 nm, suggesting the compound to be a benzoic acid derivative (Fig. 1b). GC–MS analysis of the ethanolic fungal extract and purified fraction showed the presence of six major peaks (Fig. 1c). The components corresponding to the peaks were determined and represented in tabular form (Additional file 1: Table S1). Pentanoic acid, 3-methyl-4-oxo-, was one of the metabolites (molecular weight, 103 g/mol) selected to determine its biological significance, while the remaining detected peaks/compounds corresponded to solvents such as methyl alcohol, cyclopropane, and propanenitrile, which were considered out of scope for therapeutic relevance in the present study.

Spectroscopic analysis of the purified compounds. a Purification of secondary metabolites by using HPLC. b Structural elucidation performed by UV–visible spectrophotometry. c GC–MS analysis of natural products of Talaromyces purpureogenus isolate-ABRF2

The column-fractionated samples were subjected to FTIR analysis for structural elucidation of the compound. A broad range of bands (3910–3660 cm−1) was observed in the chloroform and ethyl acetate fractions, which can be attributed to –OH group stretching in phenolic and aliphatic structures and oscillation of the hydroxyl group. A characteristic peak that appeared at 3155 cm−1 represented the stretching of the –OH group, molecular hydrogen bonding and vibration of molecules. The IR spectrum depicted the presence of an aromatic ring substituted with an ester bond, thereby predicting the probable functional group present in the compound. Putative prediction identified the isolated compound to be a pyrone derivative (Fig. 2 and Additional file 1: Table S2). The NMR spectrum of the column fraction of the compound was further evaluated for structural identification using the peaks obtained at various δ values of the 1H protons (Fig. 3). The data obtained have led us to the presumption that the antibacterial compound contains methyl, ketone, and hydroxyl functionalities. The NMR spectrum showed that the antibacterial compound has CH2, CH3, OH, and C–H proton functionalities, while the elemental data showed the presence of carbon, hydrogen and hydroxyl groups, suggesting the compound to be an ester. The 1H NMR spectrum of the isolated compound, 3-methyl-4-oxo-pentanoic acid, corroborated well with the same number of protons as pentanoic acid (Fig. 3). A comparative analysis of the 1H NMR spectra has been described in tabular form (Additional file 1: Table S3). This shows the presence of primary, secondary and tertiary aliphatic groups as broad peaks at 0.9, 1.3 and 1.5 δ, respectively. The carbonyl and hydroxyl groups were intact at 2.2 and 3.5 δ, respectively, although some changes in chemical shifts were noticed.

FTIR analysis of the column fraction of Talaromyces purpureogenus isolate-ABRF2. a Graph represents the FTIR pattern of chloroform fraction of selected sample.
b Graph represents the FTIR pattern of ethyl acetate fraction of selected sample NMR spectrum of column fraction of Talaromyces purpureogenus isolate-ABRF2 compared with standard. an-Valeric acid or pentanoic acid as a standard. b Column fraction of ABRF-2 fungus strain Antioxidants provide cellular defense through entrapping free radical generated by toxic metals and series of mechanisms get initiated by terminating chain reaction or chelating metal ions and reactive oxygen species or by maintaining the redox potential to stop or minimize reduction of molecular oxygen. Our findings indicate the presence of a higher concentration of potential reductones, 83% scavenging activity in YESB crude extract as compared with other extracts (Fig. 4). Hence, the FRAP reaction was correlated with the varying concentrations of the antioxidants that were observed to be reproducible (Fig. 4). The DPPH radical scavenging activity of secondary metabolites in the different medium was observed to be in the range of 18–79%. However, crude extract in YESB media demonstrated the highest DPPH radical scavenging activity at 79%. The activity of the crude extract was comparable to that of control, ascorbic acid. Ascorbic acid and un-inoculated media were used as positive and negative controls, respectively. Free radicals often affect the cellular macromolecules and signaling mechanisms while antioxidants protect it. We observed that YESB extract, with 80% scavenging capacity was very strong in the cation-free radical quenching compared to other extracts, thereby suggesting the presence of relative hydrophobic reductones in the latter. Formation of a bluish-green colored complex at acidic pH signifies the reduction of phosphomolybdate (VI) to phosphomolybdate (V) during the phosphomolybdenum assay. The highest percentage of scavenging activity of the crude extract was observed in the YESB medium (74%), although a significant difference in scavenging activity was observed between the positive control, ascorbic acid, and crude extract (Fig. 4). Thus, these data suggest YESB extract has potent free-radical scavenging activity. Antioxidant activity of crude extract of T. purpureogenus isolate-ABRF2 in YESB (yeast extract sucrose broth) nutrient medium. Ascorbic acid was used as positive control. One-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. Sample with p < 0.05 were considered significant p < 0.05 and ≠ p < 0.01 Column-fractionated relatively pure compounds, tested for antibacterial activity in the concentration range of 20–100 µg/mL, inhibited the growth of B. circulans (MTCC-7906), B. subtilis (MTCC-441), Ralstonia eutropha (MTCC-2487), S. aureus (MTCC-96), and E. coli (MTCC-739) at a maximum concentration of 100 µg/mL in a qualitative assay (Fig. 5). Maximum activity was found in ethyl acetate column fraction with clearance zone of 19.17 ± 1.5 mm against Gram-positive bacteria B. subtilis whereas 16.16 ± 1.5 mm was observed against Gram-negative bacteria R. eutropha. The results were comparable to the positive control, streptomycin (Additional file 1: Table S4). Antimicrobial activity with zone of inhibition of natural products of Talaromyces purpureogenus isolate-ABRF2 against pathogenic bacterial and fungal strain. a Column fractions against Bacillus circulans.b Column fractions against Bacillus subtilis.c Column fractions against E. coli.d Column fractions against S. aureus.e Column fractions against R. 
eutropha. f Column fractions against Candida albicans

The crude fungal extract and the column-fractionated samples obtained were evaluated for antifungal activity against C. albicans (MTCC-3017) (Additional file 1: Table S5). The plate depicted a zone of inhibition (10.11 ± 0.89 mm) for the acetonitrile column fraction of the ethanolic extract, comparable to the standard antifungal drug fluconazole tested against C. albicans (MTCC-3017) (Fig. 5).

Anti-aging activity
Prediction of anti-aging potential by molecular docking
A multi-functional fungal extract regulating central cellular metabolism and metabolic pathways may serve as an effective anti-aging candidate. Therefore, molecular docking analysis was undertaken as described in the methods to dock 3-methyl-4-oxo-pentanoic acid against the chosen yeast and human anti-aging targets. The results suggested a plausible role for the fungal metabolite 3-methyl-4-oxo-pentanoic acid in supporting the growth of eukaryotic cells, as demonstrated in yeast. Slowed aging may also delay carcinogenesis; reports suggest that calorie restriction affects aging by neutralizing the mammalian target of rapamycin (mTOR) (Karunadharma et al. 2015). Interestingly, our observations from the in silico analysis, followed by in vitro validation assays using the standard drug sirolimus (rapamycin), suggested that 3-methyl-4-oxo-pentanoic acid has a potent anti-aging property (Tables 1 and 2).

Table 1 In silico therapeutic studies of the compound with different targets in respect of binding energy and no. of direct contacts (all polar, non-polar interactions)
Table 2 Comparison between binding energies for anti-aging and anticancer targets to identify the probable mechanism of action

The binding energy of 3-methyl-4-oxo-pentanoic acid for site 1 of the target 1AH8, obtained from the MDA-MB-231 cell line, was observed to be − 14.0109 kCal/mol, which was lower than that of the standard drug sirolimus (rapamycin, − 12.7754 kCal/mol), thereby suggesting a better interaction and stability of 3-methyl-4-oxo-pentanoic acid as compared with sirolimus. Seven hydrogen bonds with the residues Lys A73, Val A74, Arg A75, Ile A54, Phe A63, Gln A206, and Glu A76 were observed. Val A74 and Val A208 were also involved in interaction through the water of hydration (Fig. 6a).

Molecular docking of the compound pentanoic acid with anti-aging targets. a 2-D diagram of 1AH8 residues interacting with compound 3-methyl-4-oxo-pentanoic acid, forming polar contacts with the ligands; residues shown in lines participate in other interactions. b 2-D diagram of 2KM1: protein binding interacting with compound 3-methyl-4-oxo-pentanoic acid. c 2L7E: transcription interacting with compound 3-methyl-4-oxo-pentanoic acid

The binding energy of 3-methyl-4-oxo-pentanoic acid was observed to be − 9.9277 kCal/mol for site 8 of the target 2KM1, the yeast protein Dre2 containing Fe–S with an S-adenosyl methionine methyltransferase-like domain (RCSB PDB ID: 2KM1). However, for sirolimus (rapamycin) the binding energy was observed to be − 13.4799 kCal/mol, lower than that of the isolated compound. Only two residues, Leu 63 and Phe 73, showed direct binding (Fig. 6b). In yeast, Taf14 containing the YEATS domain at the N-terminus (RCSB PDB ID: 2L7E) is involved in transcription (Shanle et al. 2015). The binding energy of 3-methyl-4-oxo-pentanoic acid was observed to be − 9.6630 kCal/mol for site 2 of the target 2L7E, which was comparable to that of sirolimus (− 9.5644 kCal/mol).
Four hydrogen bonds were formed between residues Leu 86, Pro 102, Leu 108, and Gly83 (Fig. 6c). This binding analysis with different amino acids of the target sites suggested that the compound 3-methyl-4-oxo-pentanoic acid to be a potential regulator of different anti-aging pathways. Spot assay and yeast growth curve To validate the anti-aging activity, spot assay was performed with Saccharomyces cerevisiae specific mutant strain BY4742 (MTCC-3157) (Zhao et al. 2017). Negative control, nystatin and positive control, acarbose and/or rapamycin (1 mg/mL) were used in the experiment. The fungal extract showed a spot diameter of 12 ± 0.46 mm while the positive controls, acarbose, and rapamycin depicted 15 ± 0.87 and 17 ± 0.94 mm, respectively, in the study using S. cerevisiae BY4742 (Fig. 7a and Additional file 1: Table S6). The growth signifies the putative anti-aging effect of 3-methyl-4-oxo-pentanoic acid through the calorie restriction pathway. Furthermore, yeast growth curve depicted an increased exponential phase in the culture having ABRF2 extract as compared with the positive controls, acarbose, and rapamycin as well as and negative control, nystatin (Fig. 7b). In vitro spot assay of anti-aging activity of the crude extract of secondary metabolites of Talaromyces purpureogenus isolate-ABRF2. S. cerevisiae BY4742 cells spotted onto YPD plates. Growth curve for the S. cerevisiae BY4742 strain were measured in YPD medium. p < 0.05 indicated a significant difference. Fungal extract and their growth spots generated during spot assay along with control systems using S. cerevisiae BY4742. C—culture only (S. cerevisiae BY4742); C + E2—culture and extract of isolate ABRF2; C+N—culture and fluconazole (nystatin) as negative control; C+VA—culture and n-valeric acid; C+Ac—culture and acarbose and C+Ra—culture and rapamycin as positive control Prediction of anticancer potential by molecular docking The docking interaction of the isolated compound, 3-methyl-4-oxo-pentanoic acid against, c-MYC was performed as described in the methods to identify and predict the binding site residues. The docked ligand molecules were selected based on the interaction with the active site residues and lowest docking energy. The presence of a total of five hydrogen bonds with binding sites of 3-methyl-4-oxo-pentanoic acid, including two hydrogen bonds formed by the Lys 62 were observed. The other three hydrogen bonds were observed to be formed by the residues Asn 108, Thr 102 and Ile 103, respectively (Fig. 8). The compound also interacted with Asn 61, Glu 44, Arg 84 and Asn 41 through the water of hydration. The binding energy was observed to be − 13.5751 kCal/mol for site 6 of 6AU4 from the c-MYC of DU-145. Compound, 3-methyl-4-oxo-pentanoic acid was most potent with a higher docking score of − 11.4592 kCal/mol, compared to that of the standard drug, doxorubicin (− 10.088 kCal/mol). These findings confirm the plausible role of 3-methyl-4-oxo-pentanoic acid as an anticancer agent that can be evaluated in the future with the mechanism of action for potential drug development. The in vitro cytotoxic assay further confirmed the observed in silico activity of 3-methyl-4-oxo-pentanoic acid, with IC50 value at a low micromolar range of the 3-methyl-4-oxo-pentanoic acid fraction against DU-145 of 2.36 ± 0.156, thereby suggesting potential antiproliferative/anticancer activity of the isolated compound (3-methyl-4-oxo-pentanoic acid) (Table 4). 
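Returning to the growth-curve readout described earlier in this section (OD600 measured at successive time points), such curves are commonly summarized by the exponential-phase growth rate and the corresponding doubling time. A minimal sketch on made-up OD600 readings (not the reported BY4742 data) is given below.

```python
# Exponential growth rate and doubling time from OD600 readings (made-up values).
import numpy as np

hours = np.array([0, 2, 4, 6, 8, 10], dtype=float)
od600 = np.array([0.05, 0.09, 0.17, 0.33, 0.60, 0.95])   # illustrative readings

# Restrict to the roughly exponential window and fit ln(OD) against time.
mask = (hours >= 2) & (hours <= 8)
slope, intercept = np.polyfit(hours[mask], np.log(od600[mask]), deg=1)

doubling_time = np.log(2) / slope
print(f"growth rate = {slope:.3f} /h, doubling time = {doubling_time:.2f} h")
```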
Molecular docking and interaction of the compound pentanoic acid with the different anticancer cell line targets. a 3-Methyl-4-oxo-pentanoic acid with 1MP8: transferase; interaction of the structure of the cancer-related Focal Adhesion Kinase (FAK) with the molecule. b 3-Methyl-4-oxo-pentanoic acid with 1ELK; interaction of the 1ELK (MCF-7) VHS domain of the TOM1 (target of myb 1) protein from Homo sapiens with the molecule. c 3-Methyl-4-oxo-pentanoic acid with 6AU4: DNA; interaction of the crystal structure of the major quadruplex formed in the human c-MYC promoter with the molecule. d 3-Methyl-4-oxo-pentanoic acid with 3SSU; interaction between the crystal structure of the vimentin central helical domain, with its implications for intermediate filament assembly, 3SSU (MDA-MB-231) and the obtained molecule

The docking of the compound against the anticancer site 5 on 1MP8, a well-known target of MCF-7, depicted a total of five hydrogen bonds with the residues Glu 506, Glu 500, Cys 502, Lys 454 and Gln 432. The isolated compound also interacted with Glu 430, Ile 428, Leu 553, Lys 454, Phe 433 and Asp 564 through the water of hydration. The binding energy was observed to be − 13.2413 kCal/mol for site 5 of 1MP8: transferase from MCF-7, which was less negative than the docking score of the potent anticancer drug doxorubicin (− 21.29661 kcal/mol) (Fig. 8a). The 1ELK VHS domain of the TOM1 protein, reported as a target in the MCF-7 cancer cell line, is involved in the degradation of growth factor receptor complexes through their translocation to the lysosome. The binding energy of the compound 3-methyl-4-oxo-pentanoic acid was − 15.2661 kCal/mol for site 4 of the target 1ELK on the MCF-7 breast cancer cell line, as compared to the maximum affinity and lowest binding energy score of − 29.5353 kCal/mol for doxorubicin. Direct binding via hydrogen bonds with Val A59, Arg A57, Val A54, Leu A51, and Ala A53 was observed. The compound displayed two hydrogen bonds with Val A59 and Asp A93 (Fig. 8b). Similarly, site 1 of the 3SSU target from the MDA-MB-231 cell line displayed the highest docking with the compound through three hydrogen bonds with Glu A187, Asp A181, and Arg A184. The binding energy of the isolated compound was observed to be − 7.7187 kCal/mol for site 1 of the target 3SSU of the MDA-MB-231 cell line, as compared to − 10.5860 kCal/mol for doxorubicin (Fig. 8c). These binding affinities of the molecule with different amino acids of the target sites suggest that the compound 3-methyl-4-oxo-pentanoic acid may act as a potential therapeutic agent on different anticancer pathways.

In vitro antiproliferative activity
To determine the antiproliferative activity, the crude fungal extracts were further purified and segregated as extracellular and intracellular extracts to localize the selected metabolites. All three types of extracts were tested against tissue-specific cancer cell lines using the SRB assay as described in the methods. Intracellular extracts demonstrated lower IC50 values and thus higher cytotoxicity (Table 3). The extract was further purified by adsorption column chromatography. The fractions were assessed for antiproliferative activity. Fractions A and C (toluene and ethyl acetate fractions) depicted greater antiproliferative potential against the breast cancer (MCF-7) and liver cancer (HepG2) cell lines, with IC50 values of 2.79 and 2.75 µg/mL, as compared with the positive control (doxorubicin) with IC50 values of 5.06 and 1.65 µg/mL, respectively.
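IC50 values like those quoted in this section are obtained by fitting a sigmoidal dose-response model to viability data. A minimal sketch of a four-parameter logistic fit is shown below; this is a generic approach comparable in spirit to (but not a reproduction of) the GraphPad Prism analysis used by the authors, and the concentrations and viability percentages are made up.

```python
# Four-parameter logistic fit to dose-response data to estimate IC50 (made-up data).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Percent viability as a function of concentration (4-parameter logistic)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])           # µg/mL, illustrative
viability = np.array([98.0, 95.0, 82.0, 55.0, 24.0, 8.0])   # % of vehicle control, illustrative

p0 = [0.0, 100.0, 2.0, 1.0]                                  # initial guesses: bottom, top, IC50, Hill
params, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ≈ {ic50:.2f} µg/mL (Hill slope {hill:.2f})")
```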
MDA-MB-468 was observed to be highly sensitive with fraction B (IC50 < 0.35 µg/mL) suggesting these fractions might contain efficacious anticancer lead molecules which can be further isolated, identified and purified as they did not show any comparable toxicity in normal primary cells (HEK-293) (Table 4). However, toluene and ethyl acetate fraction was also observed to be highly potent against liver cancer, but toluene fraction had similar toxicity in cancer as well as non-cancerous (control) cells with IC50 of 2.75 and 2.29 µg/mL, respectively. Similarly, the antiproliferative potential of fraction C (ethyl acetate fraction) containing 3-methyl 4-oxo-pentanoic acid was observed to be comparable with cancer and normal cells. Table 3 IC50 (µg/mL) value of different extracts of isolate ABRF-2 (Talaromyces purpureogenus) Table 4 Assessment of cytotoxic profile of column chromatography fractions against various tissue-specific cancer Identified fungus Talaromyces purpureogenus isolate-ABRF2 from the Achanakmar Biosphere Reserve forest of central India was characterized based on the morphological parameters such as variation in shape and size of fungal spores and hyphae (Wyatt et al. 2013; Gautier et al. 2016). The analysis of phenotypic characteristics and spore structure forms the major identifying principle in fungi. The preliminary identification was corroborated by sequence analysis of the ITS region of the strain. Talaromyces purpureogenus isolate-ABRF2 fungus was grown on Yeast Extract Sucrose Broth media (YESB) under optimized incubation condition. Both the intracellular and extracellular secondary metabolites were extracted and screened for therapeutic potential, the intracellular secondary metabolites depicted higher potential compared to extracellular and hence, the isolated intracellular compound was further evaluated. Successive Soxhlet extraction of 50 g dry biomass resulted in 0.45, 0.67, 1.2, 4.8 and 2.7 g dry extract, respectively, for solvents diethyl ether toluene, chloroform, ethanol, and acetonitrile. The ethanolic extract was selected and subjected to column chromatography, obtaining 1.7 g of purified active component. The isolated secondary metabolites showed various characteristic features. The Rf value of Spots B, C, and D ranged from 0.30 to 0.40 which corresponds to valeric acid, or 3-methyl-4-oxo-pentanoic acid with Rf = 0.49 (Hassan et al. 2008; Singh et al. 2006). 3-methyl-4-oxo-pentanoic acid is a straight-chain alkyl carboxylic acid sesquiterpenoid constituent of the essential oil of the valerian plant. Further, the presence of 3-methyl-4-oxo-pentanoic acid was confirmed using various spectral analysis. UV absorbance at λ max 272 and 328 nm having the typical pattern of 3-methyl-4-oxo-pentanoic acid. The IR spectrum showed absorption bands at 1716.32, 1652.84 and 1181.53 cm−1 revealed the similarity with 3-methyl-4-oxo-pentanoic acid reported earlier in the literature suggesting biotransformation of pyrone to benzoic acid and derivatives by microorganisms (Parshikov et al. 2015). The GC–MS analysis of a crude extract of ABRF2 revealed the presence of ester compound, which when further validated by 1H NMR, showed chemical shift similar to standard n-valeric acid/pentanoic acid. Antioxidants are molecules useful in reducing the free radicals produced by oxidative stress and managing the cellular network. Clinical data suggest a correlation between the pathogenesis of disease with a high level of iron in the body (Siah et al. 2006). 
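For reference, the reaction invoked in the next sentence is the classical Fenton reaction, usually written as
$$\mathrm{Fe^{2+} + H_{2}O_{2} \;\longrightarrow\; Fe^{3+} + {}^{\bullet}OH + OH^{-}},$$
with the hydroxyl radical (•OH) being the species that attacks cellular macromolecules.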
In the Fenton reaction, ferrous ions play an important role, catalyzing the production of hydroxyl radicals and hydroxyl anions from hydrogen peroxide (Liochev and Fridovich 1999). In our present study, FRAP values of YESB crude extract were relatively higher and comparable to standard ascorbic acid. FRAP values are based on reducing ferric ion with antioxidants as the reducing agent, higher FRAP value indicating greater antioxidant capacity (Fernandes et al. 2016). DPPH scavenging activity of YSEB extract from isolate ABRF2 was similar to the reported T. purpureogenus CFRM02 extracts (Pandit et al. 2018). The variation in the structure and different functional groups present in the molecules led to differences in the antioxidant activity across different methods. The metabolic processes, irradiation and oxidative processes may lead to the formation of primary ROS which then react with biomolecules forming secondary ROS (Loganayaki et al. 2013). These free radicals damage macromolecules and cellular components, however, antioxidants protect biomolecules. ABTS+ scavenging capacity represents free-radical scavenging efficiency in a hydrophilic medium (Re et al. 1999). Our data showed that ABTS+ activity was slightly lower than FRAP but higher than DPPH activity. Phosphomolybdate assay is a routine method for estimating the reducing capacity of plant-derived antioxidants (Prieto et al. 1999). Interestingly, we observed the lowest activity of our compound in phosphomolybdate assay as compared with other antioxidant assays that were comparable with control, l-ascorbic acid. The secondary metabolites of Talaromyces sp. predominantly containing esters including, linear polyesters are known to show various biological activities, including antibacterial actions (Zhai et al. 2016). The results obtained in the present study are comparable to the standard antibiotic streptomycin (Sarker et al. 2007). We observed a differential growth inhibition pattern of gram-positive by the ethyl acetate fraction containing the compound, 3-methyl, 4-oxo-pentanoic acid. Similarly, the zone of inhibition against gram-negative bacteria, R. eutropha was observed to be similar to that reported earlier of Alternaria sp (Palanichamy et al. 2018). Interestingly, antifungal activity against candida albicans was not observed in the ethyl acetate fraction. The outcome of the initial pharmacokinetic studies of the isolated compound, 3-methyl-4-oxo-pentanoic acid incited us to further explore the anti-aging and anticancer activity. Literature suggests that 1AH8, 2KM1 and 2L7E are the targets associated with transcription and translational modification in eukaryotic system and are linked to the process of aging. Interestingly, the same signaling molecules have been shown to be involved and targeted for cancer therapy (Blagosklonny 2012). Rapamycin slows down aging, suppresses cellular senescence, and postpone age-related diseases (Blagosklonny 2012). Further to validate the anti-aging activity, we employed specific spot assay with eukaryotic model organism Saccharomyces cerevisiae mutant strain BY4742 (MTCC-3157) as reported earlier (Zhao et al. 2017). Using the method of Wei et al. (2017), we observed that in the presence of ABRF2 crude extract, the formation of growth zone by yeast was higher in size as compared with control suggesting it's potential to enhance the lifespan and delay aging of yeast cell. 
In yeast growth curve determination, the ABRF2 crude extract significantly enhanced the exponential phase of yeast growth and delayed the aging process, similar to the positive control. The anti-aging results suggested that the fungal extracts contain putative compounds responsible for the enhancement of cell life. The c-MYC oncogene is often dysregulated or overexpressed in multiple tumor cell survival pathways (Stump et al. 2018). One of the major targets of anticancer compounds is the DNA quadruplex formed in the NHE III1 region of the c-MYC promoter. Antitumor small molecules tend to stabilize this quadruplex, thereby reducing c-MYC expression (Stump et al. 2018). These observations were further validated using in vitro antiproliferative assays against a variety of tissue-specific cancer cell lines, which demonstrated potent antiproliferative activity specifically against the breast cancer (MCF-7) and liver cancer (HepG2) cell lines. Talaromyces purpureogenus isolate-ABRF2 crude extract and column-fractionated samples demonstrated modest antioxidant activity combined with antibacterial activity. The extract and fractions were subjected to GC–MS and NMR, leading to the identification of 3-methyl-4-oxo-pentanoic acid. In silico molecular docking analyses of the isolated compound against anti-aging and anticancer targets depicted higher binding energy as compared with standard drugs. Thus, the compound 3-methyl-4-oxo-pentanoic acid isolated from Talaromyces purpureogenus isolate-ABRF2, with antioxidant and anticancer (cytotoxic) activities, is a potential candidate for drug development. All data generated or analyzed during this study are included in this research article.
ABRF: Achanakmar Biosphere Reserve Fungus
FRAP: Ferric reducing antioxidant power
DPPH: 2,2-Diphenyl-1-picrylhydrazyl
ABTS: 2,2′-Azino-bis(3-ethylbenzthiazoline-6-sulfonic acid)
Phosphomolybdenum
GC–MS: Gas chromatography–mass spectroscopy
NMR: Nuclear magnetic resonance
LB: Luria–Bertani agar
CDB: Czapek Dox broth
CDYB: Czapek Dox yeast broth
MEB: Malt extract broth
PDB: Potato dextrose broth
HPLC: High-performance liquid chromatography
FTIR: Fourier transform infrared spectroscopy
NIST: National Institute of Standards and Technology
CBST: Centre for Bio-separation Technology
Fe III-TPTZ: Ferric tripyridyltriazine
MOE: Molecular Operating Environment
FAK: Focal Adhesion Kinase
IF: Intermediate filaments
SRB: Sulforhodamine B assay
Aadil KR, Barapatre A, Sahu S, Jha H, Tiwary BN (2014) Free radical scavenging activity and reducing power of Acacia nilotica wood lignin. Int J Biol Macromol 67:220–227 Aharwar A, Parihar DK (2019) Talaromyces verruculosus tannase production, characterization and application in fruit juices detannification. Biocatal Agric Biotechnol 18:101014 Badri DV, Weir TL, Lelie D, Vivanco JM (2009) Rhizosphere chemical dialogues: plant-microbe interactions. Curr Opin Biotechnol 20:642–650 Baker DD, Alvi KA (2004) Small-molecule natural products: new structures, new activities. Curr Opin Biotechnol 15:576–583 Balachandran C, Duraipandiyan V, Arun Y, Sangeetha B, Emi N, Dhabi NA, Ignacimuthu S, Inaguma Y, Okamoto A, Perumal PT (2016) Isolation and characterization of 2-hydroxy-9,10-anthraquinone from Streptomyces olivochromogenes (ERINLG-261) with antimicrobial and antiproliferative properties. Revista Brasileira de Farmacognosia 26:285–295 Barapatre A, Aadil KR, Tiwary BN, Jha H (2015) In vitro antioxidant and antidiabetic activity of biomodified Acacia wood lignin. Int J Biol Macromol 75:81–89 Benzie IF, Strain JJ (1996) The ferric reducing ability of plasma (FRAP) as a measure of "antioxidant power": the FRAP assay.
Anal Biochem 239:70–76 Blagosklonny MV (2012) Rapalogs in cancer prevention: anti-aging or anticancer? Cancer Biol Ther 13:1349–1354 Chernyatina AA, Nicolet S, Aebi U, Herrmann H, Strelkov SV (2012) Atomic structure of the vimentin central α-helical domain and its implications for intermediate filament assembly. PNAS 109:13620–13625 Delaney JR, Ahmed U, Chou A (2013) Stress profiling of longevity mutants identifies Afg3 as a mitochondrial determinant of cytoplasmic mRNA translation and aging. Aging Cell 12:156–166 Dhale MA, Vijay-Raj AS (2009) Pigment and amylase production in Penicillium sp NIOM-02 and its radical scavenging activity. Int J Food Sci Technol 44:2424–2430 Fernandes RPP, Trindade MA, Tonin FG, Lima CG, Pugine SMP, Munekata PES, Lorenzo JM, Melo MP (2016) Evaluation of antioxidant capacity of 13 plant extracts by three different methods: cluster analyses applied for selection of the natural extracts with higher antioxidant capacity to replace synthetic antioxidant in lamb burgers. J Food Sci Technol 53:451–460 Gangadevi V, Muthumary J (2008) Taxol, an anticancer drug produced by an endophytic fungus Bartalinia robillardoides Tassi, isolated from a medicinal plant, Aegle marmelos Correa ex Roxb. World J Microbiol Biotechnol 24:717–724 Gautier M, Normand AC, Ranque S (2016) Previously unknown species of Aspergillus. Clin Microbiol Infect 22:662–669 Golubovskaya VM (2010) Focal adhesion kinase as a cancer therapy target. Anticancer Agents Med Chem 10:735–741 Hassan E, Tayebeh R, Samaneh ET, Zeinalabedin BS, Vahid N, Mehdi Z (2008) Quantification of valeric acid and its derivatives in some species of valeriana L. cantranthus longiflorus stev. Asian J Plant Sci 7:195–200 Huai Q, Wang H, Liu Y, Kim HY, Toft H, Ke H (2005) Structures of the N-terminal and middle domains of E. coli Hsp90 and Conformation Changes upon ADP Binding. Structure 13:579–590 Karunadharma PP, Basisty N, Dai DF, Chiao YA, Quarles EK, Hsieh EJ, Crispin D, Bielas JH, Ericson NG, Beyer RP, MacKay VL, MacCoss MJ, Rabinovitch PS (2015) Subacute calorie restriction and rapamycin discordantly alter mouse liver proteome homeostasis and reverse aging effects. Aging Cell 14:547–557 Kwon BK, Liu J, Messerer C, Kobayashi NR, McGraw J, Oschipok L, Tetzlaff W (2002) Survival and regeneration of rubrospinal neurons 1 year after spinal cord injury. Proc Natl Acad Sci U S A. 99:3246–3251 Liochev SI, Fridovich I (1999) Superoxide and iron: partners in crime. IUBMB Life 48:157–161 Loganayaki N, Siddhuraju P, Manian S (2013) Antioxidant activity and free radical scavenging capacity of phenolic extracts from Helicteres isora L. and Ceiba pentandra L. J Food Sci Technol 50:687–695 Luoto KR, Meng AX, Wasylishen AR, Zhao H, Coakley CL, Penn LZ, Bristow RG (2010) Tumor cell kill by c-MYC Depletion: role of MYC-regulated genes that control DNA double-strand break repair. Cancer Res 21:8748–8759 Maheshwari S, Miller MS, O'Meally R, Cole NR, Amzel LM, Gabelli SB (2017) Kinetic and structural analyses reveal residues in phosphoinositide 3-kinase α that are critical for catalysis and substrate recognition. J Biol Chem 292:13541–13550 Manupati K, Dhoke NR, Debnath T, Yeeravalli R, Guguloth K, Saeidpour S, De UC, Debnath S, Das A (2017) Inhibiting epidermal growth factor receptor signaling potentiates mesenchymal-epithelial transition of breast cancer stem cells and their responsiveness to anticancer drugs. 
FEBS J 284:1830–1854 Manupati K, Debnath S, Goswami K, Bhoj PS, Chandak HS, Bahekar SP, Das A (2019) Glutathione S-transferase omega 1 inhibition activates JNK-mediated apoptotic response in breast cancer stem cells. FEBS J. https://doi.org/10.1111/febs.14813 Maritim AC, Sanders RA, Watkins JB (2003) Diabetes, oxidative stress, and antioxidants: a review. J Biochem Mol Toxicol 17:24–38 Miranda H, Simão R, Santos Vigário P, Salles BF, Pacheco MT, Willardson JM (2010) Exercise order interacts with rest interval during upper-body resistance exercise. J Strength Cond Res 24:1573–1577 Misra S, Beech BM, Hurley JH (2000) Structure of the VHS domain of human Tom1 (target of myb 1): insights into interactions with proteins and membranes. Biochemistry 39:11282–11290 Morcombe CR, Zilm KW (2003) Chemical shift reference in MAS solid-state NMR. J Magn Reson 162:479–486 Naik PK, Santoshi S, Rai A, Joshi HC (2011) Molecular modeling and competition binding study of Br-noscapine and colchicine provide insight into noscapinoid-tubulin binding site. J Mol Graph Model 29:947–955 Nikoletti R, Lopez-Gresa MP, Manzo E, Carella AA, Clavatta ML (2007) Production and fungitoxic activity of Sch 642305, a secondary metabolite of Penicillium canescens. Mycopathologia 163:295–301 Nishihara K, Kanemori M, Yanagi H, Yura T (2000) Overexpression of trigger factor prevents aggregation of recombinant proteins in Escherichia coli. Appl Environ Microbiol 66:884–889 Palanichamy P, Krishnamoorthy G, Kannan S, Marudhamuthu M (2018) Bioactive potential of secondary metabolites derived from medicinal plant endophytes. Egypt J Basic Appl Sci. 5:303–312 Pandit S, Puttananjaih MH, Harohally N, Dhale MA (2018) Functional attributes of a new molecule- 2-hydroxymethyl-benzoic acid 2′-hydroxy-tetradecyl ester isolated from Talaromyces purpureogenus CFRM02. Food Chem 30:89–96 Parshikov IA, Woodling KA, Sutherland JB (2015) Biotransformations of organic compounds mediated by cultures of Aspergillus niger. Appl Microbiol Biotechnol 99:6971–6986 Prieto P, Pineda M, Aguilar M (1999) Spectrophotometric quantitation of antioxidant capacity through the formation of a phosphomolybdenum complex: specific application to the determination of vitamin E. Anal Biochem 269:337–341 Radhika KP, Rodrigues BF (2010) Arbuscular mycorrhizal fungal diversity in some commonly occurring medicinal plants of Western Ghats, Goa region. J Forest Res 21:45–52 Re R, Pellegrini N, Proteggente A, Pannala A, Yang M, Rice-Evans C (1999) Antioxidant activity applying an improved ABTS radical cation decolorization assay. Free Radic Biol Med 26:1231–1237 Sarker SD, Nahar L, Kumarasamy Y (2007) Microtitre plate-based antibacterial assay incorporating resazurin as an indicator of cell growth, and its application in the in vitro antibacterial screening of phytochemicals. Methods 42:321–324 Schulze JM, Kane CM, Ruiz-Manzano A (2010) The YEATS domain of Taf14 in Saccharomyces cerevisiae has a negative impact on cell growth. Mol Genet Genomics 283:365–380 Shanle EK, Andrews FH, Meriesh H, McDaniel SL, Dronamraju R, DiFiore JV, Jha D, Wozniak GG, Bridgers JB, Kerschner JL, Krajewski K, Martín GM, Morrison AJ, Kutateladze TG, Strahl BD (2015) Association of Taf14 with acetylated histone H3 directs gene transcription and the DNA damage response. Genes Dev 29:1795–1800 Shen HY, Jiang HL, Mao HL, Pan G, Zhou L, Cao YF (2007) Simultaneous determination of seven phthalates and four parabens in cosmetic products using HPLC-DAD and GC-MS methods. 
J Sep Sci 30:48–54 Siah CW, Ombiga J, Adams LA, Trinder D, Olynyk JK (2006) Normal iron metabolism and the pathophysiology of iron overload. Clin Biochem Rev. 27:5–16 Singh N, Gupta AP, Singh B, Kaul VK (2006) Quantification of valerenic acid in Valeriana jatamansi and Valeriana officinalis by HPTLC. Chromatographia 63:209–213 Soler N, Craescu CT, Gallay J, Frapart YM, Mansuy D, Raynal B, Baldacci G, Pastore A, Huang ME, Vernis L (2012) An S-adenosylmethionine methyltransferase-like domain within the essential, Fe-S-containing yeast protein Dre2. FEBS J 279:2108–2119 Sowndhararajan K, Kang SC (2013) Free radical scavenging activity from different extracts of leaves of Bauhinia vahlii Wight & Arn. Saudi J Biol Sci 20:319–325 Stump S, Mou T, Sprang SR, Natale NR, Beall HD (2018) Crystal structure of the major quadruplex formed in the promoter region of the human c-MYC oncogene. PLoS ONE 12:13 Wei SC, Levine JH, Cogdill AP, Zhao Y, Anang NAS, Andrews MC, Sharma P, Wang J, Wargo JA, Pe'er D, Allison JP (2017) Distinct cellular mechanisms underlie Anti-CTLA-4 and anti-PD-1 checkpoint blockade. Cell 170(6):1120–1133 Wyatt TT, Wösten HA, Dijksterhuis J (2013) Fungal spores for dispersion in space and time. Adv Appl Microbiol 85:43–91 Zhai MM, Jiang CX, Shi YP, Di DL, Crews P, Wu QX (2016) The bioactive secondary metabolites from talaromyces species Nat. Prod. Bioprospect 6:1–24 Zhao W, Zheng HZ, Zhou T, Hong XS, Cui HJ, Jiang ZW, Chena H, Zhou ZJ, Liu XZ (2017) CTT1 overexpression increases the replicative lifespan of MMS-sensitive Saccharomyces cerevisiae deficient in KSP1. Mech Ageing Dev 164:27–36 The authors are thankful to the Department of Biotechnology, Guru Ghasidas Vishwavidyalaya for providing necessary facilities to carry out the research work. Fellowship provided by UGC-SRF to KK is gratefully acknowledged. Dr. Sanjit Kumar, Centre for Bio-separation Technology, VIT Vellore, for NMR analysis, Prof. Pradeep Naik, Sambalpur University, Odisa and Miss Sharmishtha Pal are acknowledged for help in molecular docking analysis. Manuscript Communication Number: IICT/Pubs./2019/180. The work was supported by UGC SAP project no. F.3-14/2016/DRS-I (SAP-II). AD acknowledges the funding provided by DBT, Government of India, Cancer Pilot Project, and Sanction No. 6242-P65/RGCB/PMD/DBT/AMTD/2015. Department of Biotechnology, Guru Ghasidas Vishwavidyalaya, Bilaspur, Chhattisgarh, 495009, India Mahendra Kumar Sahu & Harit Jha Department of Applied Biology, CSIR-Indian Institute of Chemical Technology, Uppal Road, Tarnaka, Hyderabad, TS, 500 007, India Komal Kaushik & Amitava Das Academy of Scientific and Innovative Research (AcSIR), CSIR-IICT Campus, Hyderabad, India Mahendra Kumar Sahu Komal Kaushik Amitava Das Harit Jha MKS carried out the isolation of the fungus and extraction of the secondary metabolites, performed the antibacterial, antioxidant assay, anti-aging experiment, structural characterization and collaborated in the analysis of the obtained results. MKS also drafted the work. KK performed cytotoxicity and collaborated in the data interpretation. AD contributed to the analysis of different characterization results and also with the writing and revision of the draft. HJ is the corresponding author who provided the idea for the realization of this work. HJ also contributed to the data interpretation and the revision of the draft. All authors read and approved the final manuscript. Correspondence to Amitava Das or Harit Jha. The authors declared no conflict of interest. 
The presented work does not include any studies with human participants or animals. Methods and data on the identification and biological activity profile of the fungus and the isolated active secondary metabolite. Sahu, M.K., Kaushik, K., Das, A. et al. In vitro and in silico antioxidant and antiproliferative activity of rhizospheric fungus Talaromyces purpureogenus isolate-ABRF2. Bioresour. Bioprocess. 7, 14 (2020). DOI: https://doi.org/10.1186/s40643-020-00303-z Talaromyces purpureogenus Rhizospheric Antiproliferative
Straight Line - JEE Advanced Previous Year Questions with Solutions
JEE Advanced Previous Year Questions of Math with Solutions are available at eSaral. Practicing JEE Advanced Previous Year Paper Questions in mathematics will help JEE aspirants in recognizing the question pattern as well as in analyzing their weak and strong areas. eSaral helps students in clearing and understanding each topic in a better way. eSaral also provides complete chapter-wise notes for both Class 11th and 12th for all subjects. Besides this, eSaral also offers NCERT Solutions, previous year questions for JEE Main and Advanced, practice questions, test series for JEE Main, JEE Advanced and NEET, and important questions of Physics, Chemistry, Math, and Biology, and many more. Download the eSaral app for free study material and video tutorials.

Q. Let $\mathrm{P}, \mathrm{Q}, \mathrm{R}$ and $\mathrm{S}$ be the points on the plane with position vectors $-2 \hat{\mathrm{i}}-\hat{\mathrm{j}}, 4 \hat{\mathrm{i}}, 3 \hat{\mathrm{i}}+3 \hat{\mathrm{j}}$ and $-3 \hat{\mathrm{i}}+2 \hat{\mathrm{j}}$ respectively. The quadrilateral PQRS must be a
(A) parallelogram, which is neither a rhombus nor a rectangle (B) square (C) rectangle, but not a square (D) rhombus, but not a square [JEE 2010, 3]
Ans. (A)
$\Rightarrow$ PQRS is a parallelogram but neither a rhombus nor a rectangle.

Q. A straight line $L$ through the point $(3,-2)$ is inclined at an angle $60^{\circ}$ to the line $\sqrt{3} x+y=1$. If $L$ also intersects the x-axis, then the equation of $L$ is
(A) $y+\sqrt{3} x+2-3 \sqrt{3}=0$ (B) $y-\sqrt{3} x+2+3 \sqrt{3}=0$ (C) $\sqrt{3} y-x+3+2 \sqrt{3}=0$ (D) $\sqrt{3} y+x-3+2 \sqrt{3}=0$ [JEE 2011, 3 (–1)]

Q. For $a > b > c > 0$, the distance between $(1, 1)$ and the point of intersection of the lines $a x+b y+c=0$ and $b x+a y+c=0$ is less than $2 \sqrt{2}$. Then
(A) a + b – c > 0 (B) a – b + c < 0 (C) a – b + c > 0 (D) a + b – c < 0 [JEE-Advanced 2013, 2]
Ans. (A or C or A,C)
The point of intersection of both lines is $\left(-\frac{c}{a+b},-\frac{c}{a+b}\right)$. The distance between $\left(-\frac{c}{a+b},-\frac{c}{a+b}\right)$ and $(1,1)$ is $\sqrt{2\,\frac{(a+b+c)^{2}}{(a+b)^{2}}}<2 \sqrt{2}$, so $a+b+c<2(a+b)$, i.e. $a+b-c>0$. According to the given condition ($a>b>c>0$), option (C) is also correct, since $a>b$ and $c>0$ give $a-b+c>0$.

Q. For a point $P$ in the plane, let $d_{1}(P)$ and $d_{2}(P)$ be the distances of the point $P$ from the lines $x-y=0$ and $x+y=0$ respectively. The area of the region $R$ consisting of all points $P$ lying in the first quadrant of the plane and satisfying $2 \leq d_{1}(P)+d_{2}(P) \leq 4$, is [JEE(Advanced)-2014, 3]

arun kumar - July 30, 2022, 7:55 a.m. making fool
sanchit - July 22, 2022, 10:30 p.m. please upload all the questions
kyu - Sept. 15, 2021, 3:28 p.m. why
Shruti Agarwal - Sept. 4, 2021, 8:52 a.m. Please edit all questions of jee advanced ..that will be more useful.....
Keshav - May 11, 2021, 8:14 p.m. I suggest eSaral to add more questions. I am unsatisfied with this 😑.
Lakshay Khandelwal - March 31, 2021, 8:25 p.m. Very few but challenging questions, please add more questions 👍
Samael - Oct. 13, 2020, 1:07 a.m. Btw, the last question is fantastic..
Samael - Oct. 13, 2020, 12:44 a.m. Is that it? ...only this many questions from straight lines?
Radhika - Sept. 24, 2020, 7:59 p.m. Please put up more questions, sir; the solutions are very clear
adarsh - Sept. 22, 2020, 2:33 p.m. Hey, only 4 questions??
Pokhu - Sept. 15, 2020, 5:16 p.m. Post more questions... Only 4....
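The last problem listed above appears on this page without a worked answer; a short derivation (ours, not part of the original post) goes as follows. For a first-quadrant point $P=(x, y)$ with $x, y \ge 0$,
$$d_{1}(P)+d_{2}(P)=\frac{|x-y|}{\sqrt{2}}+\frac{|x+y|}{\sqrt{2}}=\frac{(x+y)+|x-y|}{\sqrt{2}}=\frac{2\max(x,y)}{\sqrt{2}}=\sqrt{2}\,\max(x,y),$$
so the condition $2 \leq d_{1}(P)+d_{2}(P) \leq 4$ becomes $\sqrt{2} \leq \max(x,y) \leq 2\sqrt{2}$. The region is the square $[0, 2\sqrt{2}]^{2}$ with the smaller square $[0, \sqrt{2}]^{2}$ removed, giving area $(2\sqrt{2})^{2}-(\sqrt{2})^{2}=8-2=6$.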
Bayes Comp 2020 Bayes Comp 2020 Sponsors About Bayes Comp Bayes Comp is a biennial conference sponsored by the ISBA section of the same name. The conference and the section both aim to promote original research into computational methods for inference and decision making and to encourage the use of frontier computational tools among practitioners, the development of adapted software, languages, platforms, and dedicated machines, and to translate and disseminate methods developed in other disciplines among statisticians. Bayes Comp is the current incarnation of the popular MCMSki series of conferences, and Bayes Comp 2020 is the second edition of this new conference series. The first edition was Bayes Comp 2018, which was held in Barcelona in March of 2018. Bayes Comp 2020 will take place in the Reitz Union at the University of Florida. It will start in the afternoon on Tuesday, January 7 (2020) and finish in the afternoon on Friday, January 10. Deadline for submission of poster proposals: December 15, 2019. Provide the name and affiliation of the speaker, as well as a title and an abstract for the poster. If the poster is associated with a technical report or publication, please also provide that information. Acceptance is conditional on registration, and decisions will be made on-the-fly, usually within a week of submission. Email your proposal to Christian Robert. Deadline for applications for travel support: September 20, 2019. (Scroll down for details.) Fees (in US$) Early (through Aug 14) Regular (Aug 15 - Oct 14) Late (starting Oct 15) Student Member of ISBA Student Non-member of ISBA Regular Member of ISBA Regular Non-Member of ISBA The fees are structured so that a non-member of ISBA will save money by joining ISBA before registering. The registration fee does not include the conference dinner. (The cost of the conference dinner is $50.) There are funds available for junior travel support. These funds are earmarked for people who are either currently enrolled in a PhD program, or have earned a PhD within the last three years (no earlier than January 1, 2017). To be eligible for funding, you must be presenting (talk or poster), and be registered for the conference. Applicants should email the following two items to Jim Hobert: (1) An up-to-date CV, and (2) proof of current enrollment in a PhD program (in the form of a short letter from PhD advisor), or a PhD certificate featuring the graduation date. The application deadline is September 20, 2019. Also note that, because Bayes Comp 2020 is co-sponsored by IMS, students and young researchers (within 5 years of PhD) attending the conference are eligible for IMS travel grants. The application deadline is February 1, which is after the meeting. Applicants can get reimbursed post meeting, but they will not know whether they have received a grant until the meeting has passed. Blocks of rooms have been reserved at three different hotels: Reitz Union Hotel (Room rate: $89-$119 + tax) To make a reservation, either click here, or call 352.392.2151, and in either case, be sure to use the Group Code "BC2020". Holiday Inn University Center (Room rate: $129 + tax, Distance to conference: 10 minute walk.) To make a reservation, either click here, or call 352.376.1661, and in either case, be sure to use the Group Code "BAY". AC Hotel Gainesville Downtown (Room rate: $149 + tax, Distance to conference: 10 minute walk.) 
To make a reservation click here Here is the program David Blei (Columbia University): Scaling and generalizing approximate Bayesian inference. Abstract & Bio Abstract: A core problem in statistics and machine learning is to approximate difficult-to-compute probability distributions. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation about a conditional distribution. In this talk I review and discuss innovations in variational inference (VI), a method a that approximates probability distributions through optimization. VI has been used in myriad applications in machine learning and Bayesian statistics. It tends to be faster than more traditional methods, such as Markov chain Monte Carlo sampling. After quickly reviewing the basics, I will discuss our recent research on VI. I first describe stochastic variational inference, an approximate inference algorithm for handling massive data sets, and demonstrate its application to probabilistic topic models of millions of articles. Then I discuss black box variational inference, a generic algorithm for approximating the posterior. Black box inference easily applies to many models but requires minimal mathematical work to implement. I will demonstrate black box inference on deep exponential families---a method for Bayesian deep learning---and describe how it enables powerful tools for probabilistic programming. Bio: David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and application. David has received several awards for his research. He received a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), a Guggenheim fellowship (2017), and a Simons Investigator Award (2019). He is the co-editor-in-chief of the Journal of Machine Learning Research. He is a fellow of the ACM and the IMS. Paul Fearnhead (U of Lancaster): Continuous-time MCMC Abstract: Recently, there have been conceptually novel developments in Monte Carlo methods through the introduction of new MCMC algorithms which are based on continuous-time, rather than discrete-time, Markov processes. These show promise for scalable Bayesian Analysis: they naturally have non-reversible dynamics which enable them to mix faster in high-dimensional settings; sometimes they can be implemented in a way that requires access to only a small number of data points at each iteration, and yet still sample from the true posterior; and they automatically take account of sparsity in the dependence structure. This talk will give an overview of the recent work in this area, including applications to latent Gaussian models and model choice for robust regression. (Joint with Joris Bierkens, Augustin Chevalier, Gareth Roberts and Matt Sutton.) Bio: Paul Fearnhead is a Distinguished Professor of Statistics at Lancaster University. His research areas have included computational and Bayesian statistics, changepoint problems and statistical genetics. He has been awarded the Royal Statistical Society's Guy medal in bronze and Cambridge University's Adams Prize. He is currently editor of Biometrika. Emily Fox (U of Washington): Computational approaches for large-scale time series analysis. 
Abstract: We are increasingly faced with the need to analyze complex data streams; for example, sensor measurements from wearable devices. With the advent of such new sensing and measurement technologies, the length of the sequences we seek to analyze has grown tremendously, and traditional algorithms become computationally prohibitive. A popular approach for scaling learning to large datasets is through the use of stochastic gradients, where manageable subsamples of the data are analyzed at each iteration, rather than the full dataset. However, such algorithms implicitly rely on an assumption of i.i.d. data, where it is straightforward to show that the stochastic gradient is an unbiased estimate of the full data gradient. In the time-dependent setting, critical dependencies are broken when subsampling, introducing bias in the gradient estimates. We present a series of algorithms to handle this issue by leveraging the memory decay of the underlying time series, allowing us to act locally while accounting for the temporal dependencies being broken. Our theory then provides approximation guarantees. We first explore these ideas within the context of stochastic gradient MCMC algorithms. We then turn to mitigating stochastic gradient bias in training recurrent neural networks. Throughout the talk, we demonstrate the methods on neuroimaging, genomic, and electrophysiological datasets, as well as a language modeling task. Bio: Emily Fox is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering and Department of Statistics at the University of Washington, and is the Amazon Professor of Machine Learning. Currently, she is also Director of Health AI at Apple. She received her Ph.D. in EECS from MIT, with her dissertation being awarded the Leonard J. Savage Thesis Award in Applied Methodology and MIT EECS Jin-Au Kong Outstanding Doctoral Thesis Prize. She has also been awarded a Presidential Early Career Award for Scientists and Engineers (2017), Sloan Research Fellowship (2015), ONR Young Investigator award (2015), and NSF CAREER award (2014). Her research interests are in large-scale dynamic modeling and computations, with a focus on Bayesian methods and applications in health and computational neuroscience. Invited Sessions Theory & practice of HMC (and its variants) for Bayesian hierarchical models: Tamara Broderick (MIT), George Deligiannidis (U of Oxford), Aaron Smith (U of Ottawa). Titles & Abstracts Tamara Broderick: The kernel interaction trick: Fast Bayesian discovery of multi-way interactions in high dimensions Abstract: Discovering interaction effects on a response of interest is a fundamental problem faced in biology, medicine, economics, and many other scientific disciplines. In theory, Bayesian methods for discovering pairwise interactions enjoy many benefits such as coherent uncertainty quantification, the ability to incorporate background knowledge, and desirable shrinkage properties. In practice, however, Bayesian methods are often computationally intractable for even moderate-dimensional problems. Our key insight is that many hierarchical models of practical interest admit a particular Gaussian process (GP) representation; the GP allows us to capture the posterior with a vector of $O(p)$ kernel hyper-parameters rather than $O(p^2)$ interactions and main effects. With the implicit representation, we can run Markov chain Monte Carlo (MCMC) over model hyper-parameters in time and memory linear in p per iteration. 
We focus on sparsity-inducing models and show on datasets with a variety of covariate behaviors that our method: (1) reduces runtime by orders of magnitude over naive applications of MCMC, (2) provides lower Type I and Type II error relative to state-of-the-art LASSO-based approaches, and (3) offers improved computational scaling in high dimensions relative to existing Bayesian and LASSO-based approaches.
George Deligiannidis: Randomized Hamiltonian Monte Carlo as scaling limit of the bouncy particle sampler and dimension-free convergence rates
Abstract: The Bouncy Particle Sampler is a Markov chain Monte Carlo method based on a nonreversible piecewise deterministic Markov process. In this scheme, a particle explores the state space of interest by evolving according to a linear dynamics which is altered by bouncing on the hyperplane tangent to the gradient of the negative log-target density at the arrival times of an inhomogeneous Poisson Process (PP) and by randomly perturbing its velocity at the arrival times of a homogeneous PP. Under regularity conditions, we show here that the process corresponding to the first component of the particle and its corresponding velocity converges weakly towards a Randomized Hamiltonian Monte Carlo (RHMC) process as the dimension of the ambient space goes to infinity. RHMC is another piecewise deterministic non-reversible Markov process where a Hamiltonian dynamics is altered at the arrival times of a homogeneous PP by randomly perturbing the momentum component. We then establish dimension-free convergence rates for RHMC for strongly log-concave targets with bounded Hessians using coupling ideas and hypocoercivity techniques.
Aaron Smith: Free lunches and subsampling Monte Carlo
Abstract: It is widely known that the performance of MCMC algorithms can degrade quite quickly when targeting computationally expensive posterior distributions, including the posteriors associated with any large dataset. This has motivated the search for MCMC variants that scale well for large datasets. One general approach, taken by several research groups, has been to look at only a subsample of the data at every step. In this talk, we focus on simple "no-free-lunch" results which provide some basic limits on the performance of many such algorithms. We apply these generic results to realistic statistical problems and proposed algorithms, and also discuss some special examples that can avoid our generic results and provide a free (or at least cheap) lunch. (Joint with Patrick Conrad, Andrew Davis, James Johndrow, Youssef Marzouk, Natesh Pillai, and Pengfei Wang.)
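As a common point of reference for this session on HMC and its variants, the basic HMC update that these talks refine or analyze can be sketched in a few lines. The toy Gaussian target, step size, and trajectory length below are illustrative assumptions, not settings used by any of the speakers.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy target: isotropic Gaussian. In a real model, log_prob and its gradient
    # would come from the (possibly high-dimensional) posterior of interest.
    def log_prob(x):
        return -0.5 * np.sum(x ** 2)

    def grad_log_prob(x):
        return -x

    def hmc_step(x, step_size=0.1, n_leapfrog=20):
        p = rng.standard_normal(x.shape)          # refresh the momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new = p_new + 0.5 * step_size * grad_log_prob(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + step_size * p_new
            p_new = p_new + step_size * grad_log_prob(x_new)
        x_new = x_new + step_size * p_new
        p_new = p_new + 0.5 * step_size * grad_log_prob(x_new)
        # Metropolis correction for the integrator's discretization error.
        log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) - (log_prob(x) - 0.5 * p @ p)
        return x_new if np.log(rng.uniform()) < log_accept else x

    x = np.zeros(10)
    samples = []
    for _ in range(2000):
        x = hmc_step(x)
        samples.append(x.copy())
    print("sample mean (should be near 0):", np.round(np.mean(samples, axis=0)[:3], 2))

Everything the session discusses (preconditioning, scaling limits, subsampled gradients) can be read as a modification of one of the ingredients above: the momentum distribution, the integrator, or the accept/reject step.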
Scalable methods for high-dimensional problems: Akihiko Nishimura (UCLA), Anirban Bhattacharya (Texas A&M), Lassi Roininen (LUT U).
Akihiko Nishimura: Scalable Bayesian sparse generalized linear models and survival analysis via curvature-adaptive Hamiltonian Monte Carlo for high-dimensional log-concave distributions
Abstract: Bayesian sparse regression based on shrinkage priors possesses many desirable theoretical properties and yields posterior distributions whose conditionals mostly admit straightforward Gibbs updates. Sampling high-dimensional regression coefficients from their conditional distribution, however, presents a major scalability issue in posterior computation. The conditional distribution generally does not belong to a parametric family and the existing sampling approaches are hopelessly inefficient in high-dimensional settings. Inspired by recent advances in understanding the performance of Hamiltonian Monte Carlo (HMC) on log-concave target distributions, we develop *curvature-adaptive HMC* for scalable posterior inference under sparse regression models with log-concave likelihoods. As is well-known, HMC's performance critically depends on the integrator stepsize and mass matrix. These tuning parameters are typically adjusted over many HMC iterations by collecting statistics on the target distribution --- an impractical approach when employing HMC within a Gibbs sampler since the conditional distribution changes as the other parameters are updated. Instead, we achieve on-the-fly calibration of the key HMC tuning parameters through 1) the recently developed theory of *prior-preconditioning* for sparse regression and 2) a rapid estimation of the curvature of a given log-concave target via *iterative methods* from numerical linear algebra. We demonstrate the scalability of our method on a clinically relevant large-scale observational study involving n >= 80,000 patients and p >= 10,000 predictors, designed to assess the relative efficacy of two alternative hypertension treatments.
Anirban Bhattacharya: Approximate MCMC for high-dimensional estimation
Abstract: We discuss a number of applications of approximate MCMC to complex high-dimensional structured estimation problems. A unified theoretical treatment is provided to understand the impact of introducing approximations to the exact MCMC transition kernel.
Lassi Roininen: Posterior inference for sparse hierarchical non-stationary models
Abstract: Gaussian processes are valuable tools for non-parametric modelling, where typically an assumption of stationarity is employed. While removing this assumption can improve prediction, fitting such models is challenging. In this work, hierarchical models are constructed based on Gaussian Markov random fields with stochastic spatially varying parameters. Importantly, this allows for non-stationarity while also addressing the computational burden through a sparse representation of the precision matrix. The prior field is chosen to be Matérn, and two hyperpriors, for the spatially varying parameters, are considered. One hyperprior is Ornstein-Uhlenbeck, formulated through an autoregressive process. The other corresponds to the widely used squared exponential. In this setting, efficient Markov chain Monte Carlo (MCMC) sampling is challenging due to the strong coupling a posteriori of the parameters and hyperparameters. We develop and compare three MCMC schemes, which are adaptive and therefore free of parameter tuning. Furthermore, a novel extension to higher-dimensional settings is proposed through an additive structure that retains the flexibility and scalability of the model, while also inheriting interpretability from the additive approach. A thorough assessment of the ability of the methods to efficiently explore the posterior distribution and to account for non-stationarity is presented, in both simulated experiments and a real-world computer emulation problem. https://arxiv.org/abs/1804.01431
MCMC and scalable Bayesian computations: Philippe Gagnon (U of Oxford), Florian Maire (U de Montréal), Giacomo Zanella (Bocconi U).
Philippe Gagnon: Nonreversible jump algorithms for nested models
Abstract: It is now well known that nonreversible Markov chain Monte Carlo methods often outperform their reversible counterparts. Lifting the state space (Chen et al.
(1999)) has proved to be a successful technique for constructing such samplers relying on nonreversible Markov chains. The idea is to see the random variables that we wish to generate as position variables to which we associate velocity (or direction) variables, doubling the size of the state space. At each iteration of such samplers, the positions evolve deterministically as a function of the directions, and this is followed by a possible update of the latter. This direction-assisted scheme may induce persistent movements that allow the chain to traverse the state space more quickly, compared with the traditional methods producing chains with diffusive patterns. This explains the gain in efficiency. Because directions play a central role, the technique can only be employed to explore state spaces for which this concept is well defined. In this paper, we introduce samplers that we call nonreversible jump algorithms that can be applied to simultaneously achieve model selection and parameter estimation, in situations where the family of models considered forms a sequence of nested models; there thus exists a natural order among the models, and therefore, directions. These samplers are constructed by modifying reversible jump algorithms after having lifted the part of the state space associated with the model indicator. We demonstrate their correctness and show that they compare favourably to their reversible counterparts using both theoretical arguments and numerical experiments. We address implementation challenges, facilitating application by users.
Florian Maire: Can we improve convergence of MCMC methods by aggregating Markov kernels in a locally informed way?
Abstract: For a given probability distribution $\pi$, there is virtually an infinite number of Markov kernels capable of generating useful Markov chains to infer $\pi$. Hybrid methods refer to algorithms where several Markov kernels are mixed with a fixed probability distribution $\omega$. In this talk, we introduce a dependence between $\omega$ and the current state of the Markov chain, a strategy that we refer to as Locally Informed Hybrid Markov chain, since $\omega$ can be specified so as to reflect the local topology of the state-space. The analysis of this intuitive construction reveals a number of surprises that question some of the usual Markov chain comparison tools, from a statistical learning viewpoint. These include tools based on the spectral analysis of the underlying Markov operator as well as Peskun ordering that give typically pessimistic results for metastable Markov chains, a framework which Locally Informed Hybrid Markov chains fall into. Finally, situations where the statistical efficiency of estimators based on Locally Informed Hybrid Markov chains is superior to that of traditional Hybrid algorithms are discussed.
Giacomo Zanella: On the robustness of gradient-based sampling algorithms
Abstract: We analyze the tension between robustness and efficiency for Markov chain Monte Carlo (MCMC) sampling algorithms. In particular, we focus on the robustness of MCMC algorithms with respect to heterogeneity in the target, an issue of great practical relevance but still understudied theoretically. We show that the spectral gap of the Markov chains induced by classical gradient-based MCMC schemes (e.g. Langevin and Hamiltonian Monte Carlo) decays exponentially fast in the degree of mismatch between the scales of the proposal and target, while for the random walk Metropolis (RWM) the decay is linear.
This result provides theoretical support to the notion that gradient-based MCMC schemes are less robust to heterogeneity and more sensitive to tuning. Motivated by these considerations, we propose a novel and simple-to-implement gradient-based MCMC algorithm, inspired by the classical Barker accept-reject rule, with improved robustness properties. Extensive theoretical results, dealing with robustness to heterogeneity, geometric ergodicity and scaling with dimensionality, show that the novel scheme combines the robustness of RWM with the efficiency of classical gradient-based schemes. The theoretical results are illustrated with simulation studies. (Joint work with Samuel Livingstone.) Scalable methods for posterior inference from big data: Subharup Guha (U of Florida), Zhenyu Zhang (UCLA), David Dahl (Brigham Young U). Subharup Guha: Fast MCMC techniques for fitting Bayesian mixture models to massive multiple-platform cancer data Abstract: Recent advances in array-based and next-generation sequencing technologies have revolutionized biomedical research, especially in cancer. Bayesian mixture models, such as finite mixtures, hidden Markov models, and Dirichlet processes, offer elegant frameworks for inference, especially because they are flexible, avoid making unrealistic assumptions about the data features and the nature of the interactions, and permit nonlinear dependencies. However, existing inference procedures for these models do not scale to multiple-platform Big Data and often stretch computational resources past their limits. An investigation of the theoretical properties of these models offers insight into asymptotics that form the basis of broadly applicable, cost-effective MCMC strategies for large datasets. These MCMC techniques have the advantage of providing inferences from the posterior of interest, rather than an approximation, and are applicable to different Bayesian mixture models. Furthermore, they can be applied to develop massively parallel MCMC algorithms for these data. The versatility and impressive gains of the methodology are demonstrated by simulation studies and by a semiparametric integrative analysis that detects shared biological mechanisms in heterogeneous multi-platform cancer datasets. (Joint with Dongyan Yan and Veera Baladandayuthapani.) Zhenyu Zhang: Bayesian inference for large-scale phylogenetic multivariate probit models Abstract: Inferring correlation among biological features is an important yet challenging problem in evolutionary biology. In addition to adjusting for correlations induced from an uncertain evolutionary history, we also have to deal with features measured in different scales: continuous and binary. We jointly model the two feature types by introducing latent continuous parameters for binary features, giving rise to a phylogenetic multivariate probit model. Posterior computation under this model remains problematic with increasing sample size, requiring repeatedly sampling from a high-dimensional truncated Gaussian distribution. Best current approaches scale quadratically in sample size and suffer from slow-mixing. We develop a new computation approach that exploits 1) the state-of-the-art bouncy particle sampler based on piece-wise deterministic Markov process and 2) a novel dynamic programming approach that reduces the cost of likelihood and gradient evaluations to linear in sample size. 
In an application, we successfully handle a 14,980-dimensional truncated Gaussian, making it possible to estimate correlations among 28 HIV virulence and immunological epitope features across 535 viruses. The proposed approach is of independent interest, being applicable to a broader class of covariance structures beyond comparative biology. (Joint with Akihiko Nishimura, Philippe Lemey, and Marc A. Suchard.)
David Dahl: Summarizing distributions of latent structure
Abstract: In a typical Bayesian analysis, considerable effort is placed on "fitting the model" (e.g., obtaining samples from the posterior distribution) but this is only half of the inference problem. Meaningful inference usually requires summarizing the posterior distribution of the parameters of interest. Posterior summaries can be especially important in communicating the results and conclusions from a Bayesian analysis to a diverse audience. If the parameters of interest live in R^n, common posterior summaries are means, medians, and modes. Summarizing posterior distributions of parameters with complicated structure is a more difficult problem. For example, the "average" network in the posterior distribution on a network is not easily defined. This paper reviews methods for summarizing distributions of latent structure and then proposes a novel search algorithm for posterior summaries. We apply our method to distributions on variable selection indicators, partitions, feature allocations, and networks. We illustrate our approach in a variety of models for both simulated and real datasets. (Joint with Peter Müller.)
Efficient computing strategies for high-dimensional problems: Gareth Roberts (U of Warwick), Veronika Rockova (U of Chicago), Gregor Kastner (Vienna U of Economics and Business).
Gareth Roberts: Bayesian fusion
Abstract: Suppose we can readily access samples from $\pi_i(x)$, $1 \le i \le n$, but we wish to obtain samples from $\pi(x) = \prod_{i=1}^n \pi_i(x)$. The so-called Bayesian Fusion problem comes up within various areas of modern Bayesian analysis, for example in the context of big data or privacy constraints, as well as more traditional areas such as meta-analysis. Many approximate solutions to this problem have been proposed. However, this talk will present an exact solution based on rejection sampling in an extended state space, where the accept/reject decision is carried out by simulating the skeleton of a suitably constructed auxiliary collection of Brownian bridges. (This is joint work with Hongsheng Dai and Murray Pollock.)
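For intuition about what a fusion procedure must recover, it helps to look at the one case where the product of sub-posteriors is available in closed form: Gaussian factors. The sketch below (with made-up sub-posterior means and variances) simply evaluates that closed form and checks it with a naive importance-sampling estimate; the point of the talk is precisely that exact sampling is much harder outside this toy setting.

    import numpy as np

    # Toy illustration of the fusion target pi(x) = prod_i pi_i(x) when every
    # sub-posterior pi_i is Gaussian N(m_i, s_i^2). The normalized product is
    # again Gaussian with
    #   precision = sum_i 1/s_i^2,   mean = (sum_i m_i/s_i^2) / precision.
    m = np.array([0.8, 1.2, 1.0, 1.5])    # illustrative sub-posterior means
    s2 = np.array([0.5, 0.3, 0.4, 0.6])   # illustrative sub-posterior variances

    precision = np.sum(1.0 / s2)
    fusion_mean = np.sum(m / s2) / precision
    fusion_var = 1.0 / precision
    print(f"fused posterior: N({fusion_mean:.3f}, {fusion_var:.3f})")

    # Naive Monte Carlo check: sample one sub-posterior and importance-weight
    # by the remaining factors (workable in 1D, hopeless in realistic problems).
    rng = np.random.default_rng(0)
    x = rng.normal(m[0], np.sqrt(s2[0]), size=200_000)
    log_w = sum(-0.5 * (x - m[i]) ** 2 / s2[i] for i in range(1, 4))
    w = np.exp(log_w - log_w.max())
    print("importance-sampling estimate of fused mean:", np.sum(w * x) / np.sum(w))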
Veronika Rockova: Variable Selection with ABC Bayesian Forests
Abstract: Few problems in statistics are as perplexing as variable selection in the presence of very many redundant covariates. The variable selection problem is most familiar in parametric environments such as the linear model or additive variants thereof. In this work, we abandon the linear model framework, which can be quite detrimental when the covariates impact the outcome in a non-linear way, and turn to tree-based methods for variable selection. Such variable screening is traditionally done by pruning down large trees or by ranking variables based on some importance measure. Despite being heavily used in practice, these ad-hoc selection rules are not yet well understood from a theoretical point of view. In this work, we devise a Bayesian tree-based probabilistic method and show that it is consistent for variable selection when the regression surface is a smooth mix of p>n covariates. These results are the first model selection consistency results for Bayesian forest priors. Probabilistic assessment of variable importance is made feasible by a spike-and-slab wrapper around sum-of-trees priors. Sampling from posterior distributions over trees is inherently very difficult. As an alternative to MCMC, we propose ABC Bayesian Forests, a new ABC sampling method based on data-splitting that achieves a higher ABC acceptance rate. We show that the method is robust and successful at finding variables with high marginal inclusion probabilities. Our ABC algorithm provides a new avenue towards approximating the median probability model in non-parametric setups where the marginal likelihood is intractable. (Joint with Yi Liu and Yuexi Wang.)
Gregor Kastner: Efficient Bayesian computing in many dimensions - applications in economics and finance
Abstract: Statistical inference for dynamic models in high dimensions often comes along with a huge number of parameters that need to be estimated. Thus, to handle the curse of dimensionality, suitable regularization methods are of prime importance, and efficient computational tools are required to make practical estimation feasible. In this talk, we exemplify how these two principles can be implemented for models of importance in macroeconomics and finance. First, we discuss a Bayesian vector autoregressive (VAR) model with time-varying contemporaneous correlations that is capable of handling vast dimensional information sets. Second, we propose a straightforward algorithm to carry out inference in large dynamic regression settings with mixture innovation components for each coefficient in the system.
MCMC methods in high dimension, theory and applications: Christophe Andrieu (U of Bristol), Gabriel Stoltz (Ecole des Ponts ParisTech), Umut Simsekli (Télécom ParisTech).
Christophe Andrieu: All about the Metropolis-Hastings-Green update
Gabriel Stoltz: Removing the mini-batching error in large scale Bayesian sampling
Abstract: The cost of performing one step of a sampling method such as Langevin dynamics scales linearly with the number of data points in Bayesian inference. To alleviate this issue, mini-batching was put forward by Welling and Teh. However, mini-batching introduces some bias in the posterior distribution of the parameters. Adaptive Langevin dynamics were devised to remove this bias. The idea is to consider an inertial Langevin dynamics where the friction is a dynamical variable, updated according to some Nosé-Hoover feedback (inspired by techniques from molecular dynamics). We show here, using techniques from hypocoercivity, that the law of Adaptive Langevin dynamics converges exponentially fast to equilibrium, with a rate which can be quantified in terms of the key parameters of the dynamics (mass of the extra variable and magnitude of the fluctuation in the Langevin dynamics). This allows us in particular to obtain a Central Limit Theorem on time averages along realizations of the dynamics. Currently, this method is however limited to unknown diffusion matrices which do not depend on the parameters (additive noise). I will mention extensions to the case of multiplicative noise.
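To make the idea of a dynamical friction concrete, here is a minimal sketch in the spirit of adaptive Langevin / stochastic gradient Nosé-Hoover thermostats (cf. Ding et al., 2014), applied to a toy Gaussian posterior with minibatched gradients. The step size, injected-noise level, and toy model are illustrative assumptions, not the speaker's exact scheme.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(2.0, 1.0, size=10_000)   # toy data, N(theta, 1) likelihood
    n, batch = data.size, 100

    def minibatch_grad(theta):
        """Unbiased (but noisy) estimate of the gradient of the negative log-posterior."""
        idx = rng.integers(0, n, size=batch)
        # flat prior; gradient of -log likelihood, rescaled to the full data set
        return -(n / batch) * np.sum(data[idx] - theta)

    # Adaptive-Langevin-style updates: the friction xi is itself a dynamical
    # variable driven by the kinetic energy, which lets it absorb the unknown
    # amount of minibatch noise (assumed roughly constant here).
    h, A = 1e-4, 1.0            # step size and nominal friction/noise level
    theta, p, xi = 0.0, 0.0, A
    samples = []
    for it in range(20_000):
        p += (-h * minibatch_grad(theta) - h * xi * p
              + np.sqrt(2 * A * h) * rng.standard_normal())
        theta += h * p
        xi += h * (p * p - 1.0)  # Nosé-Hoover style feedback towards unit "temperature"
        samples.append(theta)

    print("posterior mean estimate:", np.mean(samples[5000:]))
    print("sample mean of the data:", data.mean())

With a plain stochastic gradient Langevin update (no xi feedback) the same minibatch noise would inflate the sampled posterior variance; the thermostat variable is what compensates for it.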
Umut Simsekli: Nonparametric generative modeling via optimal transport and diffusions with provable guarantees
Abstract: By building upon the recent theory that established the connection between implicit generative modeling (IGM) and optimal transport, in this study, we propose a novel parameter-free algorithm for learning the underlying distributions of complicated datasets and sampling from them. The proposed algorithm is based on a functional optimization problem, which aims at finding a measure that is as close to the data distribution as possible and also expressive enough for generative modeling purposes. We formulate the problem as a gradient flow in the space of probability measures. The connections between gradient flows and stochastic differential equations let us develop a computationally efficient algorithm for solving the optimization problem. We provide formal theoretical analysis where we prove finite-time error guarantees for the proposed algorithm. Our experimental results support our theory and show that our algorithm is able to successfully capture the structure of different types of data distributions.
Computational advancements in entity resolution: Brenda Betancourt (U of Florida), Andee Kaplan (Duke U), Rebecca Steorts (Duke U).
Brenda Betancourt: Generalized flexible microclustering models for entity resolution
Abstract: Classical clustering tasks accomplished with Bayesian random partition models seek to divide a given population or data set into a relatively small number of clusters whose size grows with the number of data points. For other clustering applications, such as entity resolution, this assumption is inappropriate. Entity resolution (record linkage or de-duplication) is the process of removing duplicate records from noisy databases often in the absence of a unique identifier. One natural approach to entity resolution is as a clustering problem, where each entity is implicitly associated with one or more records and the inference goal is to recover the latent entities (clusters) that correspond to the observed records (data points). In most entity resolution tasks, the clusters are very small and remain small as the number of records increases. This framework requires models that yield clusters whose sizes grow sublinearly with the total number of data points. We introduce a general class of flexible models suitable for the 'microclustering' problem, and fully characterize their theoretical properties and asymptotic behavior. We also present a partially-collapsed MCMC sampler that, compared to common sampling schemes found in the literature, achieves significantly better mixing by overcoming strong dependencies between some of the parameters in the model. To improve scalability, we combine the sampling algorithm with a common record linkage blocking technique that allows for parallel programming. (Joint with Giacomo Zanella and Rebecca Steorts.)
Andee Kaplan: Life after record linkage: Tackling the downstream task with error propagation
Abstract: Record linkage (entity resolution or de-duplication) is the process of merging noisy databases to remove duplicate entities that often lack a unique identifier. Linking data from multiple databases increases both the size and scope of a dataset, enabling post-processing tasks such as linear regression or capture-recapture to be performed. Any inferential or predictive task performed after linkage can be considered as the "downstream task."
While recent advances have been made to improve the flexibility and accuracy of record linkage, there are limitations in the downstream task due to the passage of errors through this two-step process. In this talk, I present a generalized framework for creating a representative dataset post-record linkage for the downstream task, called prototyping. Given the information about the representative records, I explore two downstream tasks: linear regression and binary classification via logistic regression. In addition, I discuss how error propagation occurs in both of these settings. I provide thorough empirical studies for the proposed methodology, and conclude with a discussion of practical insights into my work. (Joint with Brenda Betancourt and Rebecca Steorts.)
Rebecca Steorts: Scalable end-to-end Bayesian entity resolution
Abstract: Very often information about social entities is scattered across multiple databases. Combining that information into one database can result in enormous benefits for analysis, resulting in richer and more reliable conclusions. In most practical applications, however, analysts cannot simply link records across databases based on unique identifiers, such as social security numbers, either because they are not a part of some databases or are not available due to privacy concerns. In such cases, analysts need to use methods from statistical and computational science known as entity resolution (record linkage or de-duplication) to proceed with analysis. Entity resolution is not only a crucial task for social science and industrial applications, but is a challenging statistical and computational problem itself. One recent development in entity resolution methodology has been the application of Bayesian generative models. These models offer several advantages over conventional methods, namely: (i) they do not require labeled training data; (ii) they treat linkage as a clustering problem which preserves transitivity; (iii) they propagate uncertainty; and (iv) they allow for flexible modeling assumptions. However, due to difficulties in scaling, these models have so far been limited to small data sets of around 1000 records. In this talk, I propose the first scalable Bayesian models for entity resolution. This extension brings together several key ideas, including probabilistic blocking, indexing, and efficient sampling algorithms. The proposed methodology is illustrated on both synthetic and real data. (Joint with Neil Marchant, Benjamin Rubinstein, Andee Kaplan, and Daniel Elazar.)
ABC: Ruth Baker (U of Oxford), David Frazier (Monash U), Umberto Picchini (Chalmers U of Tech & U of Gothenburg).
Ruth Baker: Multifidelity approximate Bayesian computation
Abstract: A vital stage in the mathematical modelling of real-world systems is to calibrate a model's parameters to observed data. Likelihood-free parameter inference methods, such as Approximate Bayesian Computation, build Monte Carlo samples of the uncertain parameter distribution by comparing the data with large numbers of model simulations. However, the computational expense of generating these simulations forms a significant bottleneck in the practical application of such methods. We identify how simulations of cheap, low-fidelity models have been used separately in two complementary ways to reduce the computational expense of building these samples, at the cost of introducing additional variance to the resulting parameter estimates.
We explore how these approaches can be unified so that cost and benefit are optimally balanced, and we characterise the optimal choice of how often to simulate from cheap, low-fidelity models in place of expensive, high-fidelity models in Monte Carlo ABC algorithms. The resulting early accept/reject multifidelity ABC algorithm that we propose is shown to give improved performance over existing multifidelity and high-fidelity approaches.
David Frazier: Robust approximate Bayesian inference with synthetic likelihood
Abstract: Bayesian synthetic likelihood (BSL) is now a well-established method for conducting approximate Bayesian inference in complex models where exact Bayesian approaches are either infeasible, or computationally demanding, due to the intractability of the likelihood function. Similar to other approximate Bayesian methods, such as the method of approximate Bayesian computation, implicit in the application of BSL is the maintained assumption that the data generating process can generate simulated summary statistics that mimic the behaviour of the observed summary statistics. This notion of model compatibility with the observed summaries is critical for the performance of BSL and its variants. We demonstrate theoretically, and through several examples, that if the assumed data generating process (DGP) differs from the true DGP, model compatibility may no longer be satisfied and BSL can give unreliable inferences. To circumvent the issue of incompatibility between the observed and simulated summary statistics, we propose two robust versions of BSL that can deliver reliable performance regardless of whether or not the observed and simulated summaries are compatible. Simulation results and two empirical examples demonstrate the good performance of this robust approach to BSL, and its superiority over standard BSL when model compatibility is not in evidence.
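As background for this session, the basic ABC rejection scheme that these talks build on (with multifidelity simulators, synthetic likelihoods, or resampling) can be stated in a few lines. The toy normal-mean problem, summary statistic, prior, and tolerance below are illustrative choices only, not taken from any of the talks.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy problem: data from N(theta, 1); we pretend the likelihood is
    # unavailable and that only a forward simulator is accessible.
    theta_true = 1.5
    y_obs = rng.normal(theta_true, 1.0, size=50)
    s_obs = y_obs.mean()                  # summary statistic

    def simulator(theta):
        return rng.normal(theta, 1.0, size=50)

    def abc_rejection(n_draws=200_000, eps=0.1):
        accepted = []
        for _ in range(n_draws):
            theta = rng.normal(0.0, 5.0)  # draw from a vague prior
            s_sim = simulator(theta).mean()
            if abs(s_sim - s_obs) < eps:  # keep draws whose summaries are close
                accepted.append(theta)
        return np.array(accepted)

    post = abc_rejection()
    print(f"accepted {post.size} draws; ABC posterior mean {post.mean():.3f}")

The low acceptance rate of this baseline, and the cost of each simulator call, are exactly the bottlenecks that the multifidelity, robust-BSL, and resampling ideas in this session aim to relieve.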
Umberto Picchini: Variance reduction for fast ABC using resampling
Abstract: Approximate Bayesian computation (ABC) is the state-of-the-art methodology for likelihood-free Bayesian inference. Its main feature is the ability to bypass the explicit calculation of the likelihood function, by only requiring access to a model simulator to generate many artificial datasets. In the context of pseudo-marginal ABC-MCMC (Bornn, Pillai, Smith and Woodard, 2017), generating $M > 1$ datasets for each MCMC iteration allows one to construct a kernel-smoothed ABC likelihood which has lower variance, which is beneficial for the mixing of the ABC-MCMC chain, compared to the typical ABC setup which sets $M=1$. However, setting $M>1$ implies a computational bottleneck, and in Bornn, Pillai, Smith and Woodard (2017) it was found that the benefits of using $M>1$ are not worth the increasing computational effort. In Everitt (2017) it was shown that, when the intractable likelihood is replaced by a synthetic likelihood (SL, Wood, 2010), it is possible to use $M=1$ and resample many times from this single simulated dataset, to construct computationally fast SL inference that artificially emulates the case $M>1$. Unfortunately, this approach was found to be ineffective within ABC, as the resampling generates inflated ABC posteriors. In this talk we show how to couple stratified sampling with the resampling idea of Everitt (2017). We construct an ABC-MCMC algorithm that uses a small number of model simulations ($M=1$ or 2) for each MCMC iteration, while substantially reducing the additional variance in the approximate posterior distribution induced by resampling. We therefore enjoy the computational speedup from resampling approaches, and show that our stratified sampling procedure allows us to use a larger than usual ABC threshold, while still obtaining accurate inference. (Joint with Richard Everitt.)
Continuous-time and non-reversible Monte Carlo methods: Yian Ma (U of California, Berkeley), Manon Michel (U Clermont-Auvergne).
Yian Ma: Bridging MCMC and Optimization
Abstract: Rapid growth in data size and model complexity has boosted questions on how computational tools can scale with the problem and data complexity. Optimization algorithms have had tremendous success for convex problems in this regard. MCMC algorithms for mean estimates, on the other hand, are slower than the optimization algorithms in convex unconstrained scenarios. It has even become folklore that the MCMC algorithms are in general computationally more intractable than optimization algorithms. In this talk, I will examine a class of non-convex objective functions arising from mixture models. For that class of objective functions, I discover that the computational complexity of MCMC algorithms scales linearly with the model dimension, while optimization problems are NP-hard. I will then study MCMC algorithms as optimization over the KL-divergence in the space of measures. By incorporating a momentum variable, I will discuss an algorithm which performs accelerated gradient descent over the KL-divergence. Using optimization-like ideas, a suitable Lyapunov function is constructed to prove that an accelerated convergence rate is obtained.
Manon Michel: Accelerations of MCMC methods by non-reversibility and factorization
Abstract: During this talk, I will present the historical development of non-reversible Markov chain Monte Carlo methods, based on piecewise deterministic Markov processes (PDMP). First developed for multiparticle systems, the goal was to emulate the successes of cluster algorithms for spin systems and was achieved through the replacement of the time reversibility by symmetries of the sampled probability distribution itself. These methods have been shown to bring clear accelerations and are now competing with molecular dynamics methods in chemical physics or state-of-the-art sampling schemes, e.g. Hamiltonian Monte Carlo, in statistical inference. I will discuss their successes as well as the remaining open questions. Finally, I will explain how the factorization of the distribution can lead to computational complexity reduction.
Markov chain convergence analysis and Wasserstein distance: Alain Durmus (ENS Paris-Saclay), Jonathan Mattingly (Duke U), Qian Qin (U of Minnesota).
Alain Durmus: TBA
Jonathan Mattingly: TBA
Qian Qin: Geometric convergence bounds for Markov chains in Wasserstein distance based on generalized drift and contraction conditions
Abstract: Quantitative bounds on the convergence rate of a Markov chain with respect to some Wasserstein distance can be derived using a set of drift and contraction conditions. Previous studies focus on the case where the parameters in this type of condition are constant. We propose a method for constructing convergence bounds based on generalized drift and contraction conditions whose parameters may vary across the state space. This can lead to significantly improved bounds.
Our result also extends existing bounds in the literature to the case where the Wasserstein distance is unbounded.
Young researchers' contributions to Bayesian computation: Tommaso Rigon (Bocconi U), Michael Jauch (Duke U), Nicholas Tawn (U of Warwick).
Tommaso Rigon: Bayesian inference for finite-dimensional discrete priors
Abstract: Discrete random probability measures are the main ingredient for addressing Bayesian clustering. The investigation in this area has been very lively, with strong emphasis on nonparametric procedures based either on the Dirichlet process or on more flexible generalizations, such as the Pitman-Yor (PY) process or the normalized random measures with independent increments (NRMI). The literature on finite-dimensional discrete priors, beyond the classic Dirichlet-multinomial model, is much more limited. We aim at filling this gap by introducing novel classes of priors closely related to the PY process and NRMIs, which are recovered as limiting cases. Prior and posterior distributional properties are extensively studied. Specifically, we identify the induced random partitions and determine explicit expressions of the associated urn schemes and of the posterior distributions. A detailed comparison with the (infinite-dimensional) PY and NRMIs is provided. Finally, we employ our proposal for mixture modeling, and we assess its performance against existing methods in the analysis of a real dataset.
Michael Jauch: Bayesian analysis with orthogonal matrix parameters
Abstract: Statistical models for multivariate data are often parametrized by a set of orthogonal matrices. Bayesian analyses of models with orthogonal matrix parameters present two major challenges: posterior simulation on the constrained parameter space and incorporation of prior information such as sparsity or row dependence. We propose methodology to address both of these challenges. To simulate from posterior distributions defined on a set of orthogonal matrices, we propose polar parameter expansion, a parameter expanded Markov chain Monte Carlo approach suitable for routine and flexible posterior inference in standard simulation software. To incorporate prior information, we introduce prior distributions for orthogonal matrix parameters constructed via the polar decomposition of an unconstrained random matrix. Prior distributions constructed in this way satisfy a number of appealing properties and posterior inference can again be carried out in standard simulation software. We illustrate these techniques by fitting Bayesian models for a protein interaction network and gene expression data.
Nicholas Tawn: The Annealed Leap Point Sampler (ALPS) for multimodal target distributions
Abstract: This talk introduces a novel algorithm, ALPS, that is designed to provide a scalable approach to sampling from multimodal target distributions. The ALPS algorithm combines a number of the strengths of the current gold standard approaches for multimodality. It is strongly based around the well-known parallel tempering procedure, but rather than using "hot state" tempering levels, the ALPS algorithm instead appeals to annealing. At annealed temperature levels the modes become even more isolated and the effects of modal skew are less pronounced. Indeed, the more annealed the temperature, the more accurately the local mode is approximated by a Laplace approximation. The idea is to exploit this by utilizing a powerful Gaussian mixture independence sampler at the annealed temperature levels, allowing rapid mixing between modes.
This mixing information is then filtered back to the target of interest using a parallel tempering-like procedure with carefully designed marginal distributions. Approximate Bayesian nonparametrics: Peter Müller (U of Texas), Debdeep Pati (Texas A&M), Jeff Miller (Harvard U). Peter Müller: Consensus Monte Carlo for random subsets using shared anchors Abstract: We present a consensus Monte Carlo algorithm that scales existing Bayesian nonparametric models for clustering and feature allocation to big data. The algorithm is valid for any prior on random subsets such as partitions and latent feature allocation, under essentially any sampling model. Motivated by three case studies, we focus on clustering induced by a Dirichlet process mixture sampling model, inference under an Indian buffet process prior with a binomial sampling model, and with a categorical sampling model. We assess the proposed algorithm with simulation studies and show results for inference with three datasets: an MNIST image dataset, a dataset of pancreatic cancer mutations, and a large set of electronic health records (EHR). Debdeep Pati: Convergence of variational Bayes algorithms Abstract: We develop techniques for analyzing the convergence of variational Bayes algorithms in three classic examples: i) variational lower bound optimization using convex duality in generalized linear models ii) variational boosting and iii) coordinate ascent inference in discrete graphical models. The key idea is to relate the updates with an associated dynamical system and analyze its spectra. In some cases, we provide specific conditions for the algorithm to converge to the solution, exhibit periodicity or become unstable. Jeff Miller: Flexible perturbation models for robustness to misspecification Abstract: In many applications, there are natural statistical models with interpretable parameters that provide insight into questions of interest. While useful, these models are almost always wrong in the sense that they only approximate the true data generating process. In some cases, it is important to account for this model error when quantifying uncertainty in the parameters. We propose to model the distribution of the observed data as a perturbation of an idealized model of interest by using a nonparametric mixture model in which the base distribution is the idealized model. This provides robustness to small departures from the idealized model and, further, enables uncertainty quantification regarding the model error itself. Inference can easily be performed using existing methods for the idealized model in combination with standard methods for mixture models. Remarkably, inference can be even more computationally efficient than in the idealized model alone, because similar points are grouped into clusters that are treated as individual points from the idealized model. We demonstrate with simulations and an application to flow cytometry. Contributed Sessions Novel mixture-based computational approaches to Bayesian learning: Michele Guindani (U of California, Irvine), Antonietta Mira (U della Svizzera Italiana & U of Insubria), Sirio Legramanti (Bocconi U) Michele Guindani: Modeling human microbiome data via latent nested nonparametric priors Abstract: The study of the human microbiome has gained substantial attention in recent years due to its relationship with the regulation of the autoimmune system. During the data-preprocessing pipeline, microbes characterized by similar genome are grouped together in Operational Taxonomic Units (OTUs). 
Since OTU abundances vary widely across individuals within a population, it is of interest to characterize the diversity of the microbiome to study the association between asymmetries in the human microbiota and various diseases. Here, we propose a Bayesian Nonparametric approach to model abundance tables in presence of multiple populations: a common set of parameters (atoms at the observational level) is used to construct, at a higher level, a set of atoms on a distributional space. Using a common set of atoms at the lower level yields an important advantage: our model does not degenerate to the full exchangeable case when there are ties across samples, thus overcoming the crucial problem of the traditional Nested Dirichlet process outlined by Camerlenghi et al. (2018). To perform posterior inference, we propose a novel Nested independent slice-efficient algorithm. Since OTUs tables consist of frequency counts and are known to be sparse, we express the likelihood as a Rounded Mixture of Gaussian Kernels. Simulation studies confirm that our model does not suffer the nDPMM drawback anymore, and first applications to the microbiomes of Bangladesh babies have shown promising results. Antonietta Mira: Adaptive incremental mixture Markov chain Monte Carlo Abstract: We propose Adaptive Incremental Mixture Markov chain Monte Carlo (AIMM), a novel approach to sample from challenging probability distributions defined on a general state-space. While adaptive MCMC methods usually update a parametric proposal kernel with a global rule, AIMM locally adapts a semiparametric kernel. AIMM is based on an independent Metropolis-Hastings proposal distribution which takes the form of a finite mixture of Gaussian distributions. Central to this approach is the idea that the proposal distribution adapts to the target by locally adding a mixture component when the discrepancy between the proposal mixture and the target is deemed to be too large. As a result, the number of components in the mixture proposal is not fixed in advance. Theoretically, we prove that there exists a process that can be made arbitrarily close to AIMM and that converges to the correct target distribution. We also illustrate that it performs well in practice in a variety of challenging situations, including high-dimensional and multimodal target distributions. Sirio Legramanti: Bayesian cumulative shrinkage for infinite factorizations Abstract: There is a wide variety of models in which the dimension of the parameter space is unknown. For example, in factor analysis the number of latent factors is typically not known and has to be inferred from the observed data. Although classical shrinkage priors are useful in these contexts, increasing shrinkage priors can provide a more effective option, which progressively penalizes expansions with growing complexity. We propose a novel increasing shrinkage prior, named the cumulative shrinkage process, for the parameters controlling the dimension in over-complete formulations. Our construction has broad applicability, simple interpretation, and is based on a sequence of spike and slab distributions which assign increasing mass to the spike as model complexity grows. Using factor analysis as an illustrative example, we show that this formulation has theoretical and practical advantages over current competitors, including an improved ability to recover the model dimension. 
An adaptive Markov chain Monte Carlo algorithm is proposed, and the methods are evaluated in simulation studies and applied to personality traits data. (Joint with Daniele Durante and David Dunson.)
Using Bayesian methods to uncover the latent structures in real datasets: Louis Raynal (U of Montpellier & Harvard U), Francesco Denti (U of Milan – Bicocca & U della Svizzera Italiana), Alex Rodriguez (International Center for Theoretical Physics).
Louis Raynal: Reconstructing the evolutionary history of the desert locust by means of ABC random forest
Abstract: The Approximate Bayesian Computation - Random Forest (ABC-RF) methodology was recently developed to perform model choice (Pudlo et al., 2016; Estoup et al., 2018) and parameter inference (Raynal et al., 2019). It proved to achieve good performance, is mostly insensitive to noise variables and requires very little calibration. In this presentation we present recent improvements, with a focus on the computation of error measures with random forests for parameter inference. As a case study, we are interested in the Schistocerca gregaria desert locust species, which is divided into two distinct regions along the north-south axis of Africa. Using ABC-RF on microsatellite data, we reconstruct the evolutionary processes explaining the present geographical distribution and estimate parameters such as the divergence time between the north and south sub-species.
Francesco Denti: Bayesian nonparametric dimensionality reduction via estimation of data intrinsic dimensions
Abstract: Even if they are defined on a space with a large dimension, data points usually lie on hypersurfaces with a much smaller intrinsic dimension (ID). The recent Hidalgo method (Allegra et al., 2019), a Bayesian extension of the TWO-NN model (Facco et al., 2017, Scientific Reports), allows estimating the ID when all points lie on multiple latent manifolds. We consider the data points as a configuration of a Poisson Process (PP) with an intensity proportional to the true density. Hidalgo makes only two weak assumptions: (i) locally, on the scale of the second nearest neighbor, the original PP can be well approximated by a homogeneous one and (ii) points close to each other are more likely to belong to the same manifold. Under (i), the ratio of the distances of a point from its first and second neighbor follows a Pareto distribution that depends parametrically only on the ID. We extend Hidalgo to the nonparametric case, allowing the estimation of the number of latent manifolds via a Dirichlet process mixture model and inducing a clustering among observations characterized by similar ID. We further derive the distributions of the ratios of subsequent distances between neighbors and we prove their independence. This enables us to extract more information from the data without compromising the scalability of our method. While the idea behind the extension is simple, a non-trivial Bayesian scheme is required for estimating the model and assigning each point to the correct manifold. Since the posterior distribution has no closed form, to sample from it we rely on the slice sampler algorithm. From preliminary analyses performed on simulated data, the model provides promising results. Moreover, we were able to uncover a surprising ID variability in several real-world datasets.
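The Pareto fact that Hidalgo builds on is easy to state concretely: under the TWO-NN assumptions the ratio mu = r2/r1 of second- to first-neighbor distances has density d * mu^(-(d+1)), so the maximum-likelihood estimate of a single intrinsic dimension is N / sum(log mu). The sketch below, on a simulated Gaussian cloud, illustrates only that single-manifold estimator; the talk's contribution is the mixture extension over several manifolds, which is not attempted here.

    import numpy as np

    def twonn_dimension(X):
        """Single-manifold TWO-NN intrinsic dimension estimate (Facco et al., 2017 flavour)."""
        n = X.shape[0]
        # pairwise squared distances via the Gram trick; a KD-tree would be used at scale
        sq = np.sum(X ** 2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
        np.fill_diagonal(d2, np.inf)
        order = np.sort(d2, axis=1)
        r1, r2 = np.sqrt(order[:, 0]), np.sqrt(order[:, 1])
        mu = r2 / r1                       # ratios follow a Pareto law with shape d
        return n / np.sum(np.log(mu))      # maximum-likelihood estimate of d

    rng = np.random.default_rng(3)
    # 3-dimensional latent data linearly embedded in a 20-dimensional space
    latent = rng.standard_normal((2000, 3))
    embedding = rng.standard_normal((3, 20))
    X = latent @ embedding
    print("estimated intrinsic dimension:", round(twonn_dimension(X), 2))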
Alex Rodriguez: Mapping the topography of complex datasets
Abstract: Data sets can be considered an ensemble of realizations drawn from a density distribution. Obtaining a synthetic description of this distribution allows rationalizing the underlying generating process and building human-readable models. In simple cases, visualizing the distribution in a suitable low-dimensional projection is enough to capture its main features, but real-world data sets are often embedded in a high-dimensional space. Therefore, I present a procedure that allows obtaining such a synthetic description in an automatic way using only pairwise data distances (or similarities). This methodology is based on a reliable estimation of the intrinsic dimension of the dataset (Facco, et al., 2017) and the probability density function (Rodriguez, et al., 2018) coupled with a modified Density Peaks clustering algorithm (Rodriguez and Laio, 2014). The final outcome of all this machinery working together is a hierarchical tree that summarizes the main features of the data set and a classification of the data that maps to which of these features they belong to (d'Errico, et al., 2018).
MCMC-based Bayesian inference on Hilbert spaces: Nawaf Bou-Rabee (Rutgers U), Nathan Glatt-Holtz (Tulane U), Daniel Sanz-Alonso (U of Chicago).
Nawaf Bou-Rabee: Two-scale coupling for preconditioned Hamiltonian Monte Carlo in infinite dimensions
Abstract: We present non-asymptotic quantitative bounds for convergence to equilibrium of the exact preconditioned Hamiltonian Monte Carlo algorithm (pHMC) on a Hilbert space. As a consequence, we obtain explicit and dimension-free bounds for pHMC applied to high-dimensional distributions arising in transition path sampling and path integral molecular dynamics. Global convexity of the underlying potential energies is not required. Our results are based on a two-scale coupling which is contractive in a carefully designed distance.
Nathan Glatt-Holtz: A Bayesian approach to quantifying uncertainty in divergence-free flows
Abstract: We treat the statistical regularization of the ill-posed inverse problem of estimating a divergence-free flow field $u$ from the partial and noisy observation of a passive scalar $\theta$. Our solution is a Bayesian posterior distribution, a probability measure $\mu$ which precisely quantifies uncertainties in $u$ once one specifies models for measurement error and prior knowledge for $u$. We present some of our recent work which analyzes $\mu$ both analytically and numerically. In particular we discuss some Markov Chain Monte Carlo (MCMC) algorithms which we have developed and refined to effectively sample from $\mu$. (This is joint work with Jeff Borggaard and Justin Krometis.)
Daniel Sanz-Alonso: Scalable MCMC for graph based learning
Abstract: In this talk I will consider two graph-based learning problems. The first one concerns a graph formulation of Bayesian semi-supervised learning, and the second one concerns kernel discretization of Bayesian inverse problems on manifolds. I will show that understanding the continuum limit of these graph-based problems is helpful in designing sampling algorithms whose rate of convergence does not deteriorate in the limit of a large number of graph nodes.
Advances in multiple importance sampling: Art Owen (Stanford U), Victor Elvira (U of Edinburgh), Felipe Medina Aguayo (U of Reading).
Art Owen: Robust deterministic weighting of estimates from adaptive importance sampling
Abstract: This talk presents a simple robust way to weight a sequence of estimates generated by adaptive importance sampling.
Importance sampling is a useful method for estimating rare event probabilities and for sampling posterior distributions. It often generates data that can be used to find an improved sampler, leading to methods of adaptive importance sampling (AIS). Under ideal conditions, AIS can approach a perfect sampler and the mean squared error (MSE) vanishes exponentially fast. Under less ideal conditions, including all nontrivial uses of self-normalized importance sampling, the MSE is bounded below by a positive multiple of $1/n$. That rules out exponential convergence but still allows for steady improvements. If we model steady improvement as yielding a sequence of unbiased and uncorrelated estimates with variance proportional to $k^{-y}$ for $1 \le k \le K < \infty$ and $0 \le y \le 1$, then a simple model weighting the $k$th iterate proportionally to $k^{1/2}$ is nearly optimal. It never raises variance by more than 9/8 over an oracle's variance even though the resulting convergence rate varies with $y$. Numerical investigation shows that these weights are also robust under additional models of gradual improvement. (This is joint work with Yi Zhou.)
Victor Elvira: Multiple importance sampling for rare events estimation with an application in communication systems
Abstract: Digital communications are based on the transmission of symbols that belong to a finite alphabet, each of them carrying one or several bits of information. The receiver estimates the symbol that was transmitted, and in the case of perfect communication without errors, the original sequence of bits is reconstructed. However, real-world communication systems (e.g., in wireless communications) introduce random distortions in the symbols, including additive Gaussian noise, provoking errors in the detected symbols at the receiver. The characterization of the symbol error rate (SER) of the system is of major interest in communications engineering. However, in many systems of interest, the integrals required to evaluate the symbol error rate (SER) in the presence of Gaussian noise are impossible to compute in closed-form, and therefore Monte Carlo simulation is typically used to estimate the SER. Naive Monte Carlo simulation has been traditionally used in the communications literature, even if it can be very inefficient and require very long simulation runs, especially in high signal-to-noise-ratio (SNR) scenarios. At high SNR, the variance of the additive Gaussian noise is small, and hence the rate of errors is very low, which renders raw Monte Carlo impracticable for this rare event estimation problem. In this talk, we start by describing (for non-experts) the problem of SER estimation in communication systems. Then, we adapt a recently proposed multiple importance sampling (MIS) technique, called ALOE (for "At Least One rare Event"), to this problem. Conditioned on a transmitted symbol, an error (or rare event) occurs when the observation falls in a union of half-spaces or, equivalently, outside a given polytope. The proposal distribution for ALOE samples the system conditionally on an error taking place, which makes it more efficient than other importance sampling techniques. ALOE provides unbiased SER estimates with simulation times orders of magnitude shorter than conventional Monte Carlo. Then, we discuss the challenges of SER estimation in multiple-input multiple-output (MIMO) communications, where the rare-event estimation problem requires solving a large number of integrals in a higher-dimensional space.
We propose a novel MIS-based approach exploiting the strengths of the ALOE estimator. Felipe Medina Aguayo: Revisiting balance heuristic with intractable proposals Abstract: Among the different flavours of multiple importance sampling, the celebrated balance heuristic (BH) from Veach and Guibas remains a popular choice for estimating integrals. The basic ingredients in BH are: a set of proposals $q_l$, indexed by some discrete label $l$, and a deterministic set of weights for these labels. However, in some scenarios sampling from $q_l$ is only achieved by sampling jointly with the label $l$; this commonly leads to a joint density whose conditionals and marginals are unavailable or expensive to compute. Despite BH being valid even if the labels are sampled randomly, the intractability of the joint proposal can be problematic, especially when the number of discrete labels is much larger than the number of permitted importance points. In this talk, we first revisit the balance heuristic from an extended-space angle, which allows the introduction of intermediate distributions, as in annealed importance sampling, for variance reduction. We then look at estimating integrals when the proposal is only available in a joint form via a combination of correlated estimators. This idea also fits into the extended-space representation which will, in turn, provide other interesting solutions. (This is joint work with Richard Everitt, U of Reading.) Simulation in path space: Sebastiano Grazzi (TU Delft), Frank van der Meulen (TU Delft), Joris Bierkens (Vrije U Amsterdam). Sebastiano Grazzi: A piecewise deterministic Monte Carlo method for diffusion bridges Abstract: We introduce the use of the Zig-Zag sampler for the problem of sampling diffusion bridges. The Zig-Zag sampler is a rejection-free sampling scheme based on a non-reversible continuous piecewise deterministic Markov process. Similar to the Lévy-Ciesielski construction of a Brownian motion, we expand the diffusion path in a truncated Faber-Schauder basis. The coefficients within the basis are sampled using a Zig-Zag sampler with truncation error that vanishes with increasing truncation level. A key innovation is the use of a local version of the Zig-Zag sampler that allows one to exploit the sparse dependency structure of the coefficients of the Faber-Schauder expansion to reduce the complexity of the algorithm. We illustrate the performance of the proposed methods in a number of examples. In contrast to some other Markov chain Monte Carlo methods, our approach works well in the case of strong nonlinearity in the drift. Frank van der Meulen: Diffusion bridge simulation in geometric statistics Abstract: Recently, various stochastic landmark models have been introduced for shape deformation. The basic model consists of stochastic differential equations. Due to the high dimensionality of the state space of these equations, the statistical analysis is challenging. Moreover, the diffusion process is hypo-elliptic. Novel methods are discussed to tackle this problem based on methods for simulation of conditioned diffusions. Joris Bierkens: Infinite dimensional piecewise deterministic Monte Carlo Abstract: In Bayesian inverse problems one is interested in performing computations with respect to an infinite dimensional probability distribution. A modern computational approach consists of approximating this infinite dimensional probability distribution by running a truncated version of a genuine infinite dimensional Markov chain.
If a well-posed infinite dimensional chain exists, then the truncated, finite-dimensional approximation may be expected to have desirable scaling properties with respect to dimension. In this talk we present some preliminary explorations of this topic in conjunction with recent advances in Piecewise Deterministic Monte Carlo methods such as the Bouncy Particle Sampler and the Zig-Zag Sampler. (Joint with Andrew Duncan and Michela Ottobre.) Sequential Monte Carlo: Recent advances in theory and practice: Richard Everitt (U of Reading), Liangliang Wang (Simon Fraser U), Anthony Lee (U of Bristol). Richard Everitt: Evolution with recombination using state-of-the-art computational methods Abstract: Recombination is a critical process in evolutionary inference, particularly when analysing within-species variation. In bacteria, which reproduce clonally, recombination nonetheless commonly occurs when a donor cell contributes a small segment of its DNA. This process is typically modelled using an ancestral recombination graph (ARG), which is a generalisation of the coalescent. The ClonalOrigin model (Didelot et al., 2010) can be regarded as a good approximation of the aforementioned process, in which recombination events are modelled independently given the clonal genealogy. Inference in the ClonalOrigin model is performed via a reversible-jump MCMC (rjMCMC) algorithm, which attempts to jointly explore: the recombination rate, the number of recombination events, the departure and arrival points on the clonal genealogy for each recombination event, and the sites delimiting the start and end of each recombination event on the genome. However, as is well known to computational statisticians, the rjMCMC algorithm usually performs poorly due to the difficulty of proposing "good" trans-dimensional moves. Recent developments in Bayesian computation methodology provide ways of improving existing methods and code, but are not well-known outside the statistics community. We present a couple of ideas based on sequential Monte Carlo (SMC) methodology that can lead to faster inference when using the ClonalOrigin model. (This is joint work with Felipe Medina Aguayo and Xavier Didelot.) Liangliang Wang: Sequential Monte Carlo methods for Bayesian phylogenetics Abstract: Phylogenetic trees, playing a central role in biology, model evolutionary histories of taxa that range from genes to genomes. The goal of Bayesian phylogenetics is to approximate a posterior distribution of phylogenetic trees based on biological data. Standard Bayesian estimation of phylogenetic trees can handle rich evolutionary models but requires expensive Markov chain Monte Carlo (MCMC) simulations. Our previous work has shown that sequential Monte Carlo (SMC) methods can serve as a good alternative to MCMC in posterior inference over phylogenetic trees. In this talk, I will present our recent work on SMC methods for Bayesian phylogenetics. We illustrate our methods using simulation studies and real data analysis. Anthony Lee: Latent variable models: statistical and computational efficiency for simple likelihood approximations Abstract: A popular statistical modelling technique is to model data as a partial observation of a random process. This allows one, in principle, to fit sophisticated domain-specific models with easily interpretable parameters. However, the likelihood function in such models is typically intractable, and so likelihood-based inference techniques must deal with this intractability in some way.
I will briefly talk about two likelihood-based methodologies, pseudo-marginal Markov chain Monte Carlo and simulated maximum likelihood, and discuss statistical and computational scalability in some example settings. The results are also relevant to the use of sequential Monte Carlo algorithms in high-dimensional general state-space hidden Markov models. Advances in MCMC for high dimensional and functional spaces: Galin Jones (U of Minnesota), Vivekananda Roy (Iowa State U), Radu Herbei (The Ohio State U) Galin Jones: Convergence complexity of Gibbs samplers for Bayesian vector autoregressive processes Abstract: We propose a collapsed Gibbs sampler for Bayesian vector autoregressions with predictors, or exogenous variables, and study the proposed sampler's convergence properties. The Markov chain generated by our algorithm is shown to be geometrically ergodic regardless of whether the number of observations in the underlying vector autoregression is small or large in comparison to its order and dimension. We also establish conditions for when the geometric ergodicity is asymptotically stable as the number of observations tends to infinity. Specifically, the geometric convergence rate is shown to be bounded away from unity asymptotically, either in an almost sure sense or with probability tending to one, depending on what is assumed about the data generating process. (This is joint work with Karl Oskar Ekvall.) Vivekananda Roy: Posterior impropriety of relevance vector machines and a single penalty approach Abstract: Researchers often use sparse Bayesian learning models that take a reproducing kernel Hilbert space approach to carry out the task of prediction for high-dimensional datasets. The popular relevance vector machine (RVM) is one such sparse Bayesian learning model. We show that the RVM with hyperparameter values currently used in the literature leads to improper posteriors. We propose a single penalty RVM (SPRVM) model and analyze it using a semi-Bayesian approach. The necessary and sufficient conditions for posterior propriety of SPRVM are more liberal than those of RVM and allow for several improper priors over the penalty parameter. Additionally, we prove geometric ergodicity of the Gibbs sampler used to analyze the SPRVM model and hence can estimate the asymptotic standard errors associated with the Monte Carlo estimate of the means of the posterior predictive distribution. The predictive performance of RVM and SPRVM is compared by analyzing several datasets. (This is joint work with Anand Dixit.) Radu Herbei: Exact inference in functional regression: Estimating hydrological controls on ecosystem dynamics in an Antarctic lake Abstract: Many modern-day statistical inference problems address the issue of estimating an infinite dimensional parameter (a function or a surface). Given that one can only store a finite representation of these objects on a computer, the typical approach is to employ some dimension-reduction strategy and proceed with a statistical inference procedure in a multivariate setting. We introduce an exact inference procedure for functional parameters in a Bayesian regression setting. By "exact" we mean that the MCMC sampler used to explore the posterior distribution over the functional parameter is unaffected by the fact that only finite dimensional objects are used during the simulation procedure. We use techniques based on randomized acceptance probabilities and Bernoulli factories to ensure that the sampler targets the correct distribution.
We apply our method to the problem of estimating the association between stream discharge and physical, chemical, and biological processes within an Antarctic lake system. Recent advances in Gaussian process computations and theory: Yun Yang (U of Illinois), Joseph Futoma (Harvard U), Michael Zhang (Princeton U). Yun Yang: Frequentist coverage and sup-norm convergence rate in Gaussian process regression Abstract: Gaussian process (GP) regression is a powerful interpolation technique due to its flexibility in capturing non-linearity. In this talk, we provide a general framework for understanding the frequentist coverage of pointwise and simultaneous Bayesian credible sets in random design GP regression. Identifying both the mean and covariance function of the posterior distribution of the Gaussian process as regularized M-estimators, we show that the sampling distribution of the posterior mean function and the centered posterior distribution can be respectively approximated by two population-level GPs. By developing a comparison inequality between two GPs, we provide an exact characterization of frequentist coverage probabilities of Bayesian pointwise credible intervals and simultaneous credible bands of the regression function. Our results show that inference based on GP regression tends to be conservative; when the prior is under-smoothed, the resulting credible intervals and bands have minimax-optimal sizes, with their frequentist coverage converging to a non-degenerate value between their nominal level and one. As a byproduct of our theory, we show that GP regression also yields a minimax-optimal posterior contraction rate relative to the supremum norm, which provides positive evidence on the long-standing problem of the optimal supremum-norm contraction rate in GP regression. Joseph Futoma: Learning to Detect Sepsis with a Multi-output Gaussian Process RNN Classifier (in the Real World!) Abstract: Sepsis is a poorly understood and potentially life-threatening complication that can occur as a result of infection. Early detection and treatment improve patient outcomes, and as such it poses an important challenge in medicine. In this work, we develop a flexible classifier that leverages streaming lab results, vitals, and medications to predict sepsis before it occurs. We model patient clinical time series with multi-output Gaussian processes, maintaining uncertainty about the physiological state of a patient while also imputing missing values. Latent function values from the Gaussian process are then fed into a deep recurrent neural network to classify patient encounters as septic or not, and the overall model is trained end-to-end using back-propagation. We train and validate our model on a large retrospective dataset of 18 months of heterogeneous inpatient stays from the Duke University Health System, and develop a new "real-time" validation scheme for simulating the performance of our model as it will actually be used. We conclude by showing how this model is saving lives as a part of SepsisWatch, an application currently being used at Duke Hospital to screen, monitor, and coordinate treatment of septic patients. Michael Zhang: Embarrassingly parallel inference for Gaussian processes Abstract: Gaussian process-based models typically involve an $O(N^3)$ computational bottleneck due to inverting the covariance matrix. Popular methods for overcoming this matrix inversion problem cannot adequately model all types of latent functions and are often not parallelizable.
However, judicious choice of model structure can ameliorate this problem. A mixture-of-experts model that uses a mixture of $K$ Gaussian processes offers modeling flexibility and opportunities for scalable inference. Our embarrassingly parallel algorithm combines low-dimensional matrix inversions with importance sampling to yield a flexible, scalable mixture-of-experts model that offers comparable performance to Gaussian process regression at a much lower computational cost. Posterior inference with misspecified models: Judith Rousseau (U of Oxford), Ryan Martin (North Carolina State U), Jonathan Huggins (Harvard U) Judith Rousseau: Using asymptotics to understand ABC Abstract: Approximate Bayesian computation is typically used when the model is so complex that the likelihood is intractable but data can be generated from the model. With the initial focus being primarily on the practical import of this algorithm, exploration of its formal statistical properties has begun to attract more attention. In this work, we consider the asymptotic behaviour of the posterior obtained by this method and the ensuing posterior mean. We give general results on: (i) the rate of concentration of the resulting posterior on sets containing the true parameter (vector); (ii) the limiting shape of the posterior; and (iii) the asymptotic distribution of the ensuing posterior mean. These results hold under given rates for the tolerance used within the method, mild regularity conditions on the summary statistics, and a condition linked to identification of the true parameters. I will show in particular that we observe very different behaviours depending on whether the model is well- or mis-specified. I will highlight the practical implications of these results for understanding the behaviour of the algorithm. (Joint work with David Frazier, Gael Martin and Christian Robert.) Ryan Martin: Construction, concentration, and calibration of Gibbs posteriors Abstract: A Bayesian approach, which bases inference on a posterior distribution, has certain advantages, but at the expense of requiring specification of a full statistical model. A Gibbs approach, on the other hand, provides a posterior distribution based on a loss function instead of a likelihood, which has its own advantages, including robustness and computational savings. While the concentration properties of suitably constructed Gibbs posteriors are fairly well understood, mis- or under-specification affects the spread of the Gibbs posterior in subtle ways. In particular, it is not clear how to scale the Gibbs posterior so that the corresponding credible regions are calibrated in the sense that they achieve the nominal coverage probability. In this talk, I will present some generalities about the construction, concentration, and calibration of Gibbs posteriors along with applications, including an image boundary detection problem. Jonathan Huggins: Using bagged posteriors for robust inference and model criticism Abstract: Standard Bayesian inference is known to be sensitive to model misspecification, leading to unreliable uncertainty quantification and poor predictive performance. However, finding generally applicable and computationally feasible methods for robust Bayesian inference under misspecification has proven to be a difficult challenge. An intriguing approach is to use bagging on the Bayesian posterior ("BayesBag"); that is, to use the average of posterior distributions conditioned on bootstrapped datasets.
In this talk, I comprehensively develop the asymptotic theory of BayesBag, propose a model–data mismatch index for model criticism using BayesBag, and empirically validate our theory and methodology on synthetic and real-world data. I find that in the presence of significant misspecification, BayesBag yields more reproducible inferences, has better predictive accuracy, and selects correct models more often than the standard Bayesian posterior; meanwhile, when the model is correctly specified, BayesBag produces superior or equally good results for parameter inference and prediction, while being slightly more conservative for model selection. Overall, my results demonstrate that BayesBag combines the attractive modeling features of standard Bayesian inference with the distributional robustness properties of frequentist methods. Convergence of MCMC in theory and in practice: Christina Knudson (U of St. Thomas, MN), Rui Jin (U of Iowa), Grant Backlund (U of Florida) Christina Knudson: Revisiting the Gelman-Rubin Diagnostic Abstract: Gelman and Rubin's (1992) convergence diagnostic is one of the most popular methods for terminating a Markov chain Monte Carlo (MCMC) sampler. Since the seminal paper, researchers have developed sophisticated methods of variance estimation for Monte Carlo averages. We show that this class of estimators finds immediate use in the Gelman-Rubin statistic, a connection not established in the literature before. We incorporate these estimators to upgrade both the univariate and multivariate Gelman-Rubin statistics, leading to increased stability in MCMC termination time. An immediate advantage is that our new Gelman-Rubin statistic can be calculated for a single chain. In addition, we establish a relationship between the Gelman-Rubin statistic and effective sample size. Leveraging this relationship, we develop a principled cutoff criterion for the Gelman-Rubin statistic. Finally, we demonstrate the utility of our improved diagnostic via an example. Rui Jin: Central limit theorems for Markov chains based on their convergence rates in Wasserstein distance Abstract: Many tools are available to bound the convergence rate of Markov chains in total variation (TV) distance. Such results can be used to establish central limit theorems (CLTs) that enable error evaluations of Monte Carlo estimates in practice. However, convergence analysis based on TV distance is often non-scalable to the increasing dimension of Markov chains (Qin and Hobert, 2018; Rajaratnam and Sparks, 2015). Alternatively, bounding the convergence rate of Markov chains in Wasserstein distance can be more robust to increasing dimension, thanks to a coupling argument. Our work is concerned with the implications of such convergence results; in particular, do they lead to CLTs for the corresponding Markov chains? An indirect and typically non-trivial way is to first convert Wasserstein bounds into total variation bounds. Instead, we attempt to establish CLTs based on convergence rates in Wasserstein distance directly. We establish a CLT for Markov chains that enjoy certain convergence rates (including the geometric rate and some sub-geometric rates) in Wasserstein distance, and the CLT holds for Lipschitz functions under some moment conditions. Applications of the CLT and its variations are demonstrated with examples. (Joint work with Aixin Tan.)
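As a point of reference for the Gelman-Rubin abstract above, the following is a minimal NumPy sketch of the classical multi-chain potential scale reduction factor. It is illustrative only: it does not implement the improved variance estimators, the single-chain version, or the effective-sample-size connection described in that abstract, and the function name and example values are hypothetical.

```python
import numpy as np

def gelman_rubin(chains):
    """Classical potential scale reduction factor (R-hat) for a scalar parameter.

    `chains` is an (m, n) array holding m independent chains of length n.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chain_vars.mean()                  # within-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled estimate of the posterior variance
    return np.sqrt(var_hat / W)

# Toy check: four well-mixed chains targeting N(0, 1) should give R-hat close to 1.
rng = np.random.default_rng(0)
print(gelman_rubin(rng.normal(size=(4, 5000))))
```

Values of the statistic well above 1 indicate that the chains have not yet mixed; the abstract above concerns sharper, more stable versions of exactly this quantity.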
Grant Backlund: A hybrid scan Gibbs sampler for Bayesian models with latent variables Abstract: Gibbs sampling is a widely popular Markov chain Monte Carlo algorithm which is often used to analyze intractable posterior distributions associated with Bayesian hierarchical models. We introduce an alternative to traditional Gibbs sampling that is particularly well suited for Bayesian models which contain latent or missing data. This hybrid scan Gibbs algorithm is often easier to analyze from a theoretical standpoint than the systematic or random scan Gibbs sampler. Several examples, including linear regression with heavy-tailed errors and a Bayesian version of the general linear mixed model, will be presented. Results concerning the convergence rates of the corresponding Markov chains will also be discussed. Robust Markov chain Monte Carlo methods: Kengo Kamatani (Osaka U), Emilia Pompe (U of Oxford), Björn Sprungk (Göttingen U) Kengo Kamatani: Robust Markov chain Monte Carlo methodologies with respect to tail properties Abstract: In this talk, we will discuss Markov chain Monte Carlo (MCMC) methods with heavy-tailed invariant probability distributions. When the invariant distribution is heavy-tailed, the algorithm has difficulty reaching the tail area. We study the ergodic properties of some MCMC methods with position-dependent proposal kernels and apply them to heavy-tailed target distributions. Emilia Pompe: A framework for adaptive MCMC targeting multimodal distributions Abstract: We propose a new Monte Carlo method for sampling from multimodal distributions (Jumping Adaptive Multimodal Sampler). The idea of this technique is based on splitting the task into two: finding the modes of the target distribution and sampling, given the knowledge of the locations of the modes. The sampling algorithm is based on steps of two types: local ones, preserving the mode, and jumps to a region associated with a different mode. In addition, the method learns the optimal parameters while it runs, without requiring user intervention. Our technique should be considered as a flexible framework, in which the design of moves can follow various strategies known from the broad MCMC literature. In order to design an adaptive scheme that facilitates both local and jump moves, we introduce an auxiliary variable representing each mode and we define a new target distribution on an augmented state space. As the algorithm runs and updates its parameters, the target distribution also keeps being modified. This motivates a new class of algorithms, Auxiliary Variable Adaptive MCMC. We prove general ergodic results for the whole class before specialising to the case of our algorithm. The main properties of our method will be discussed and its performance will be illustrated with several examples of multimodal target distributions. Björn Sprungk: Noise level-robust Metropolis-Hastings algorithms for Bayesian inference with concentrated posteriors Abstract: We consider Metropolis-Hastings algorithms for Markov chain Monte Carlo integration w.r.t. a concentrated posterior measure which results from Bayesian inference with small additive observational noise. Proposal kernels based only on prior information show deteriorating efficiency as the noise decays. We propose to use informed proposal kernels, i.e., random walk proposals with a covariance close to the posterior covariance. Here, we use the a priori computable covariance of the Laplace approximation of the posterior.
In addition to numerical evidence, we prove that the resulting informed Metropolis-Hastings algorithm shows a non-degenerating mean acceptance rate and lag-one autocorrelation as the noise decays. Thus, it performs robustly w.r.t. a small noise level in the Bayesian inference problem. The theoretical results are based on the recently established convergence of the Laplace approximation to the posterior measure in total variation norm. Approximate Markov chain Monte Carlo methods: Bamdad Hosseini (California Institute of Technology), James Johndrow (U of Pennsylvania), Daniel Rudolf (Göttingen U) Bamdad Hosseini: Perturbation theory for a function space MCMC algorithm with non-Gaussian priors Abstract: In recent years, a number of function space MCMC algorithms have been introduced in the literature. The goal here is to design an algorithm that is well-defined on an infinite-dimensional Banach space, with the hope that it will be discretization invariant and overcome some issues that are encountered by standard MCMC algorithms in high dimensions. However, most of the focus in the literature has been on algorithms that rely on the assumption that the prior measure is a Gaussian or at least absolutely continuous with respect to a Gaussian measure. In this talk we introduce a new class of prior-aware Metropolis-Hastings algorithms for non-Gaussian priors and discuss their convergence and perturbation properties, such as dimension-independent spectral gaps and various types of approximations beyond standard approximation by discretization or projections. James Johndrow: Metropolizing approximate Gibbs samplers Abstract: There has been much recent work on "approximate" MCMC algorithms, such as Metropolis-Hastings algorithms that rely on minibatches of data, resulting in bias in the invariant measure. Less studied are the various ways in which approximate Gibbs samplers can be designed. We describe a general strategy for using approximate Gibbs samplers as Metropolis-Hastings proposals. Because it is typically less costly to compute the unnormalized posterior density than to take one step of exact Gibbs, and because the Hastings ratio in these algorithms requires only computation of the approximating kernel at pairs of points, one can often achieve reductions in computational complexity per step with no bias in the invariant measure by using approximate Gibbs as a Metropolis-Hastings proposal. We demonstrate the approach with an application to high-dimensional regression. Daniel Rudolf: Time-inhomogeneous approximate Markov chain Monte Carlo Abstract: We discuss the approximation of a time-homogeneous Markov chain by a time-inhomogeneous one. An upper bound on the expected absolute difference between the stationary mean of the Markov chain of interest and the ergodic average based on the approximating Markov chain will be presented. In addition, we provide explicit estimates of the Wasserstein distance between the distributions of the two Markov chains after $n$ steps. Sampling Techniques for High-Dimensional Bayesian Inverse Problems: Qiang Liu (U of Texas), Tan Bui-Thanh (U of Texas), Alex Thiery (National U of Singapore) Qiang Liu: Stein variational gradient descent: Algorithm, theory, applications Abstract: Approximate probabilistic inference is a key computational task in modern machine learning, which allows us to reason with complex, structured, hierarchical (deep) probabilistic models to extract information and quantify uncertainty.
Traditionally, approximate inference is often performed by either Markov chain Monte Carlo (MCMC) or variational inference (VI), both of which, however, have their own critical weaknesses: MCMC is accurate and asymptotically consistent but suffers from slow convergence; VI is typically faster because it formulates the inference problem as gradient-based optimization, but it introduces deterministic errors and lacks theoretical guarantees. Stein variational gradient descent (SVGD) is a new tool for approximate inference that combines the accuracy and flexibility of MCMC with the practical speed of VI and gradient-based optimization. The key idea of SVGD is to directly optimize a non-parametric particle-based representation to fit intractable distributions with fast deterministic gradient-based updates, which is made possible by integrating and generalizing key mathematical tools from Stein's method, optimal transport, and interacting particle systems. SVGD has been found to be a powerful tool in various challenging settings, including Bayesian deep learning and deep generative models, reinforcement learning, and meta learning. This talk will introduce the basic ideas and theories of SVGD, and cover some examples of its application. Tan Bui-Thanh: A data-consistent approach to statistical inverse problems Abstract: Given a hierarchy of reduced-order models to solve the inverse problems for quantities of interest, each model with varying levels of fidelity and computational cost, a machine learning framework is proposed to improve the models by learning the errors between successive levels. Each reduced-order model is a statistical model generating rapid and reasonably accurate solutions to new parameters, and is typically formed using expensive forward solves to find the reduced subspace. These approximate reduced-order models speed up computation but introduce additional uncertainty to the solution. By statistically modeling errors of reduced-order models and using training data involving forward solves of the reduced-order models and the higher fidelity model, we train a deep neural network to learn the error between successive levels of the hierarchy of reduced-order models, thereby improving their error bounds. The training of the deep neural network occurs during the offline phase, and the error bounds can be improved online as new training data is observed. Once the deep-learning-enhanced reduced model is constructed, it is amenable to any sampling method as its cost is a fraction of the cost of the original model. Alex Thiery: Exploiting geometry for walking larger steps in Bayesian inverse problems Abstract: Consider the observation $y = F(x) + \xi$ of a quantity of interest $x$, where the random variable $\xi \sim \mathcal{N}(0, \sigma^2 I)$ is a vector of additive observation noise. In Bayesian inverse problems, the vector $x$ typically represents the high-dimensional discretization of a continuous and unobserved field, while the evaluations of the forward operator $F(\cdot)$ involve solving a system of partial differential equations. In the low-noise regime, i.e. $\sigma \to 0$, the posterior distribution concentrates in the neighbourhood of a nonlinear manifold. As a result, the efficiency of standard MCMC algorithms deteriorates due to the need to take increasingly smaller steps. In this work, we present a constrained HMC algorithm that is robust to small $\sigma$ values, i.e. low noise.
Taking the observations generated by the model to be constraints on the prior, we define a manifold on which the constrained HMC algorithm generates samples. By exploiting the geometry of the manifold, our algorithm is able to take larger step sizes than more standard MCMC methods, resulting in a more efficient sampler. If time permits, we will describe how similar ideas can be leveraged within other non-reversible samplers. Short Courses/Tutorials/Practice Labs The conference will begin with two Short Courses/Tutorials/Practice Labs on Tuesday (January 7, 2020). Introduction to Stan (10:30am-1:30pm) Outline: This half-day workshop will introduce you to the probabilistic programming language Stan and its Hamiltonian Monte Carlo algorithm. Many Bayesian models can be fitted to data more quickly, and with less sensitivity to priors and initial values, than with Gibbs sampler software such as BUGS and JAGS. You will get some hands-on experience of coding for Stan, extracting results and checking for computational problems. This is a very interactive, hands-on workshop and we will use examples of Stan code throughout to give you practical experience. Trainer: Robert Grant is a medical statistician of 21 years' experience, and a professional trainer and coach for people working in data analysis. He developed and maintains the Stata interface for Stan and frequently teaches introductory courses on Bayesian statistics and data visualization. His personal website is robertgrantstats.co.uk and his company's is bayescamp.com. Pre-requisites: Participants should know the basics of model fitting by MCMC simulation. There is no need for experience of Hamiltonian Monte Carlo or Stan, but we will assume understanding of Bayesian analysis, model comparison and diagnosing MCMC problems such as non-convergence. Please bring a laptop with one of the Stan interfaces installed -- it doesn't matter which one, as we will focus on the Stan code, which is common to all. Learning outcomes: (1) Know how to get started with Stan via the various interfaces, including the common functionality of checking your model code for errors, translating it to C++, compiling it, sampling from the posterior, summarising the output and exporting chains. (2) Understand the basics of coding regression models up to multilevel models. (3) Be aware of tricks for more efficient parameterisation. (4) Know how to obtain statistical and graphical diagnostic outputs, recognise problems and set about debugging. (5) Know how to add a new distribution as a Stan function, expose it to R/Python/Julia for debugging, and use it in the log-likelihood and posterior predictive checks. Developing, modifying, and sharing Bayesian algorithms (MCMC samplers, SMC, and more) using the NIMBLE platform in R (2:00-5:00pm) Overview: Do you want to share an algorithm you've developed with other researchers without having to build an entire platform? Do you want to use methods such as MCMC and tailor them for your application without having to implement everything from scratch? NIMBLE is a platform built on top of R that allows methodologists to write algorithms (and modify existing algorithms) in R-like syntax with automatic compilation for fast run-times via C++ that is auto-generated by the system.
NIMBLE gives you access to a variety of tools for ease of implementation: querying of model graphical structure (e.g., parent and child nodes in the model graph), a wide range of mathematical functionality including linear algebra through the Eigen package, calculation of probability density values for nodes in the model graph, simulation of node values, automatic differentiation for gradients, optimization, and storage objects for samples from the model. This tutorial will introduce you to developing algorithms in NIMBLE, including new MCMC samplers and entire new algorithms. We will discuss how developers can build upon NIMBLE's existing algorithms (including a variety of MCMC, Bayesian nonparametric, and SMC methods) to avoid having to reimplement standard methods. Users of methods developed in NIMBLE write their model code in syntax almost identical to that of BUGS and JAGS but can then apply a variety of algorithms (various MCMC samplers, choosing between samplers, parameter blocking, user-defined samplers, various SMC algorithms, etc.) to the same model. The tutorial will demonstrate how algorithms that you write using NIMBLE are then easily available to users, who can try them out at low cost and compare them to other algorithms available in NIMBLE. Learning outcomes: The workshop will focus on live demos and hands-on coding. After the workshop, participants will understand (1) how to use NIMBLE to apply algorithms such as MCMC and SMC to fit hierarchical models, (2) how NIMBLE's built-in algorithms are implemented using nimbleFunctions, (3) how to use nimbleFunctions to extend NIMBLE's algorithms, and (4) how to develop algorithms in NIMBLE. Pre-requisites: Participants should have a basic understanding of Bayesian/hierarchical models and of one or more algorithms such as MCMC or SMC. Some experience with R is also expected. Please bring a laptop; we'll give instructions in advance for installing NIMBLE. Instructor: Chris Paciorek is one of the core developers of NIMBLE (code repository) and an adjunct professor of Statistics at UC Berkeley. He has presented a variety of workshops and courses on NIMBLE and more generally on statistical computing and Bayesian statistics. Info for poster presenters: All poster presenters must bring/print their own posters. Maximum size: 36" x 48". This can be horizontal or vertical; either orientation will work. Pins, dots, and other means of attachment will be provided on location. Please check in at the registration desk for information about where the poster presentations will be, set-up times, etc. Printing in Gainesville: Target Copy, FedEx Print & Ship Center, Office Depot Print and Copy Services Concentration inequalities and performance guarantees for hypocoercive MCMC samplers: Luc Rey-Bellet (U of Massachusetts, Amherst) Abstract: We prove concentration inequalities for ergodic averages of hypocoercive samplers, in particular for the bouncy particle sampler, the zig-zag sampler, and hybrid HMC. This yields two types of performance guarantees: (a) non-asymptotic confidence intervals and (b) uncertainty quantification bounds when using an alternate approximate process. Convergence behaviour and contraction rates of Hamiltonian Monte Carlo in mean-field models: Katharina Schuh (U of Bonn) Abstract: We study the convergence behaviour of a transition step of Hamiltonian Monte Carlo (HMC) for probability distributions with a mean-field potential.
This mean-field potential consists of a confinement potential for all particles and of a pairwise interaction potential for all pairs of particles. More precisely, we require the confinement potential to be strongly convex at infinity but not globally convex, so that multiple-well potentials are allowed. We use a modification of the coupling approach established by Bou-Rabee, Eberle and Zimmer to prove exponential convergence w.r.t. a specifically constructed Wasserstein distance. In particular, we give explicit contraction rates for both the exact and the numerical HMC which are independent of the number of particles in the mean-field particle system. The number of steps until the target probability distribution is approximated by HMC up to a given error $\epsilon$ follows as a direct consequence of contractivity and remains fixed as the number of particles increases. Fast algorithms and theory for high-dimensional Bayesian varying coefficient models: Ray Bai (U of Pennsylvania) Abstract: We introduce the nonparametric varying coefficient spike-and-slab lasso (NVC-SSL) for Bayesian estimation and variable selection in high-dimensional varying coefficient models. The NVC-SSL simultaneously estimates the functionals of the significant time-varying covariates while thresholding out insignificant ones. Our model can be implemented using a highly efficient expectation-maximization (EM) algorithm, thus avoiding the computational intensiveness of Markov chain Monte Carlo (MCMC) in high dimensions. Finally, we prove the first theoretical results for Bayesian varying coefficient models when $p \gg n$. Specifically, we derive posterior contraction rates under the NVC-SSL model. Our method is illustrated through simulation studies and data analysis. Efficient hierarchical Bayesian kernel regression model for grouped count data: Jin-Zhu Yu (Vanderbilt U) Abstract: Various research applications suffer from small data sets and require highly predictive models. For instance, a major challenge in predicting the recovery rate of communities after disasters is that recovery data are often scarce due to the nature of extreme events. To address this challenge, we propose a model called the Hierarchical Bayesian Kernel Model (HBKM). This model integrates the Bayesian property of improving predictive accuracy as data are dynamically accumulated, the kernel function that can make nonlinear data more manageable, and the hierarchical property of borrowing information from different sources in scarce and diverse data samples. Since the inference of HBKM can be highly inefficient as the number of groups increases while the number of data points of each group remains relatively small, we develop an efficient Gibbs sampler in which the conditional distributions have approximate closed-form solutions. The proposed method is illustrated with synthesized grouped count data and the historical power outage data in Shelby County, Tennessee, after the most severe storms since 2007. (Joint with Hiba Baroud.) Delayed-acceptance sequential Monte Carlo: Optimising computational efficiency on the fly: Joshua Bon (Queensland U of Technology) Abstract: Delayed-acceptance is a technique for reducing computational effort for expensive likelihoods within a Metropolis-Hastings (MH) sampler. It uses a surrogate to approximate an expensive likelihood, delaying evaluation of proposals (hence acceptance) until they have passed scrutiny by the surrogate likelihood.
Importantly, delayed-acceptance preserves the correct MH ratio, and hence the target distribution. Within the sequential Monte Carlo (SMC) framework, we adaptively tune the surrogate model to yield better approximations of the expensive likelihood. For example, we can tune linear noise approximations of Markov processes or adapt nonparametric approximations to better match the true likelihood. Overall, we develop a novel algorithm for computationally efficient SMC with expensive likelihood functions. The method is demonstrated on toy and real examples. (Joint with Christopher Drovandi and Anthony Lee.) Geometrically adapted Langevin algorithm for Markov chain Monte Carlo simulations: Mariya Mamajiwala (U College London) Abstract: Markov Chain Monte Carlo (MCMC) is a class of methods to sample from a given probability distribution. Of its myriad variants, the one based on the simulation of Langevin dynamics, which approaches the target distribution asymptotically, has gained prominence. The dynamics is specifically captured through a Stochastic Differential Equation (SDE), with the drift term given by the gradient of the log-density of the target distribution with respect to the parameters. However, the unbounded variation of the noise (i.e. the diffusion term) tends to slow down the convergence, which limits the usefulness of the method. By recognizing that the solution of the Langevin dynamics may be interpreted as evolving on a suitably constructed Riemannian Manifold (RM), considerable improvement in the performance of the method can be realised. Specifically, based on the notion of stochastic development, a concept available in the differential geometric treatment of SDEs, we propose a geometrically adapted variant of MCMC. Unlike the standard Euclidean case, in our setting, the drift term in the modified MCMC dynamics is constrained within the tangent space of an RM defined through the Fisher information metric and the related connection. We show, through extensive numerical simulations, how such a mathematically tenable geometric restriction of the flow enables significantly faster and more accurate convergence of the algorithm. From the Bernoulli factory to a dice enterprise via perfect sampling of Markov chains: Giulio Morina (U of Warwick) Abstract: Given a $p$-coin that lands heads with unknown probability $p$, we wish to produce an $f(p)$-coin for a given function $f: (0,1) \rightarrow (0,1)$. This problem is commonly known as the Bernoulli Factory, and results on its solvability and complexity have been obtained in Keane and O'Brien (1994) and Nacu and Peres (2005). Nevertheless, generic ways to design a practical Bernoulli Factory for a given function $f$ exist only in a few special cases. We present a constructive way to build an efficient Bernoulli Factory when $f(p)$ is a rational function with coefficients in $\mathbb{R}$. Moreover, we extend the Bernoulli Factory problem to a more general setting where we have access to an $m$-sided die and we wish to roll a $v$-sided one, i.e. we consider rational functions $f: \Delta^m \rightarrow \Delta^v$ between open probability simplices. Our construction consists of rephrasing the original problem as simulating from the stationary distribution of a certain class of Markov chains, a task that we show can be achieved using perfect simulation techniques with the original $m$-sided die as the only source of randomness.
The number of $m$-sided die rolls needed by the algorithm has exponential tails and, in the Bernoulli Factory case, can be bounded uniformly in $p$. En route to optimizing the algorithm, we show a fact of independent interest: every finite, integer-valued random variable will eventually become log-concave after convolving with enough Bernoulli trials. (Joint with Krzysztof Latuszynski and Alex Wendland) Sequential Monte Carlo for Fredholm Integral Equations of the First Kind: Francesca R. Crucinio (U of Warwick) Abstract: Fredholm integral equations of the first kind $h(y) = \int g(y \mid x)f(x)\ dx$ describe a wide class of inverse problems in which a signal $f$ has to be reconstructed from a distorted signal $h$ given some knowledge of the distortion $g$ (e.g. image processing, medical imaging, stereology). A popular method to approximate $f$ is an infinite dimensional Expectation-Maximization (EM) algorithm that, given an initial guess for $f$, iteratively refines the approximation by including the information given by $h$ and $g$. The EM recursion is then discretised assuming piecewise constant signals, leading to the Richardson-Lucy algorithm (Richardson, 1972; Lucy, 1974). We use Sequential Monte Carlo (SMC) to develop a stochastic discretisation of the Expectation-Maximization-Smoothing (EMS) algorithm (Silverman et al., 1990), a regularised variant of EM. This stochastic discretisation does not assume piecewise constant signals and can be implemented when only samples from $h$ are available and $g$ can be evaluated pointwise. This leads to a non-standard SMC scheme for which we extend some asymptotic results ($\mathbb{L}_p$-inequality, strong law of large numbers and almost sure convergence in the weak topology). We compare the novel method with alternatives using a simulation study and present results for realistic systems. Towards automatic zig-zag sampling: Alice Corbella (U of Warwick) Abstract: Zig-Zag sampling, introduced by Bierkens et al. (2019), is based on the simulation of a piecewise deterministic Markov process (PDMP) whose switching rate $\lambda(t)$ is governed by the derivative of the log-target density. To our knowledge, Zig-Zag sampling has been used mainly on simple targets for which derivatives can be computed manually in a reasonable time. To expand the applicability of this method, we incorporate Automatic Differentiation (AD) tools in the Zig-Zag algorithm, computing $\lambda(t)$ automatically from the functional form of the log-target density. Moreover, to allow the simulation of the PDMP via thinning, we use standard optimization routines to find a local upper bound for the rate. We present several implementations of our automatic Zig-Zag sampling and we measure the potential loss in computational time caused by AD and optimization routines. Among the examples, we consider the case of data arising from an epidemic which can be approximated by a deterministic system of equations; here manual derivation of the posterior density is practically infeasible due to the recursive relationships contained in the likelihood function. Automatic Zig-Zag sampling successfully explores the parameter space and samples efficiently from the posterior distribution. Lastly, we compare our automatic Zig-Zag sampling against Stan, a well-established software package that combines AD with another gradient-based method (HMC). (Joint work with Gareth O. Roberts and Simon E. F. Spencer)
Markov chain Monte Carlo algorithms with sequential proposals: Joonha Park (Boston U) Abstract: We explore a general framework in Markov chain Monte Carlo (MCMC) sampling where sequential proposals are tried as candidates for the next state of the Markov chain. This sequential-proposal framework can be applied to various existing MCMC methods, including Metropolis-Hastings algorithms using random proposals and methods that use deterministic proposals such as Hamiltonian Monte Carlo (HMC) or the bouncy particle sampler. Sequential-proposal MCMC methods construct the same Markov chains as those constructed by the delayed rejection method under certain circumstances. In the context of HMC, the sequential-proposal approach has been proposed as extra chance generalized hybrid Monte Carlo (XCGHMC). We develop two novel methods in which the trajectories leading to proposals in HMC are automatically tuned to avoid doubling back, as in the No-U-Turn sampler (NUTS). The numerical efficiency of these new methods compares favorably to that of the NUTS. We additionally show that the sequential-proposal bouncy particle sampler enables the constructed Markov chain to pass through regions of low target density and thus facilitates better mixing of the chain when the target density is multimodal. (Joint with Yves Atchadé.) Restore: A continuous-time, rejection-free regenerative sampler: Andi Q. Wang (U of Bristol) Abstract: We introduce the Restore sampler. This is a continuous-time nonreversible sampler, which combines general local dynamics with rebirths from a fixed global rebirth distribution, which occur at a state-dependent rate. Under suitable conditions, this rate can be chosen to enforce stationarity of a given target density, making it suitable for Monte Carlo inference. The resulting sampler has several desirable properties: simplicity, lack of rejections, regenerations and a potential coupling-from-the-past implementation. The Restore sampler can also be used as a recipe for introducing rejection-free moves into existing MCMC samplers in continuous time, or potentially to correct posterior approximations such as INLA. (Joint work with Helen Ogden, Murray Pollock, Gareth Roberts and David Steinsaltz.) A piecewise deterministic Monte Carlo method for diffusion bridges: Sebastiano Grazzi (Delft U of Technology) Abstract: The simulation of a diffusion process conditioned to hit a point at a certain time (diffusion bridge) is an essential tool in Bayesian inference of diffusion models with low-frequency observations. This has been proven to be a challenging problem, as the transition density of the conditioned process is only known in very special cases. Standard techniques rely on reversible Markov chain Monte Carlo methods that propose simpler bridges from which it is possible to sample. These techniques may perform poorly when the diffusion of interest is non-linear. Motivated by this, we explore and apply the Zig-Zag sampler, a rejection-free scheme based on a non-reversible continuous piecewise deterministic Markov process. Starting from the Lévy-Ciesielski construction of a Brownian motion, we expand the infinite dimensional diffusion path in the Faber-Schauder basis. Its finite dimensional projection gives an approximate representation of the diffusion process. In this setting, a bridge is simply obtained by fixing the coefficient of the first Faber-Schauder basis function.
The Zig-Zag sampler is a flexible scheme able to exploit the conditional independence structure induced by this basis and to explore with different velocities the coefficients of the hierarchical basis. Surprisingly, the sampler does not require the evaluation of the integral appearing in the density function given by Girsanov's formula. Owing to its non-reversible nature, it is promising for improving the mixing properties of the process. In the poster session, I will explain in detail how the Zig-Zag sampler scheme works for diffusion bridge simulation and show its performance for some challenging diffusion processes. (Joint with Joris Bierkens, Frank van der Meulen, Moritz Schauer.) Bayesian treed varying coefficient models: Sameer Deshpande (Massachusetts Institute of Technology) Abstract: The linear varying coefficient model generalizes the conventional linear model by allowing the additive effect of each covariate X on the outcome Y to vary as a function of additional effect modifiers Z. While there are many existing procedures for fitting such a model when the effect modifier Z is a scalar (typically time), there has been comparatively less development for settings with multivariate Z. State-of-the-art methods for this latter setting typically assume either complete knowledge of which components of Z modify which covariate effects or a restrictive additive assumption about the unknown covariate effect functions. These procedures are, prima facie, ill-suited for applications in which we might reasonably suspect covariate effects actually vary systematically with respect to interactions between multiple modifiers. In this work, we present an extension of Bayesian Additive Regression Trees (BART) to the varying coefficient model for such applications that does not impose these strong assumptions. We derive a straightforward Gibbs sampler based on the familiar "Bayesian backfitting" procedure of Chipman, George, and McCulloch (2010) that also allows for correlated residual errors. We further build on recent theoretical advances for the varying-coefficient model and BART to derive posterior concentration rates under our model. (Joint with Ray Bai, Cecilia Balocchi and Jennifer Starling.) Efficient Bayesian estimation of the stochastic volatility model with leverage: Darjus Hosszejni (Vienna U of Economics and Business) Abstract: The sampling efficiency of MCMC methods in Bayesian inference for stochastic volatility (SV) models is known to depend strongly on the actual parameter values, and the effectiveness of samplers based on different parameterizations differs significantly. We derive novel samplers for the centered and the non-centered parameterizations of the practically highly relevant SV model with leverage (SVL), where the return process and the innovations of the volatility process are allowed to correlate. Additionally, based on the idea of ancillarity-sufficiency interweaving, we combine the resulting samplers in order to achieve superior sampling efficiency. The method is implemented using R and C++. (Joint work with Gregor Kastner.) Scalable Bayesian sparsity-path analysis with the posterior bootstrap: Brieuc Lehmann (U of Oxford) Abstract: In classical penalised regression, it is common to perform model estimation across a range of regularisation parameter values, typically with the aim of maximising out-of-sample predictive performance.
The analogue in the Bayesian paradigm is to place a sparsity-inducing prior on the regression coefficients and explore a range of prior precisions. This, however, can be computationally challenging due to the need to generate a separate posterior distribution for each precision value. Here, we explore the use of the posterior bootstrap to scalably generate a posterior distribution over sparsity-paths. We develop an embarrassingly parallel method that exploits fast algorithms for computing classical regularisation paths and can thus handle large problems. We demonstrate our method on a sparse logistic regression example using genomic data from the UK Biobank. (Joint work with Chris Holmes, Gil McVean, Edwin Fong & Xilin Jiang) Constrained Bayesian optimization for small area measurement models: Sepideh Mosaferi (Iowa State U) Abstract: Statistical agencies are often asked to produce small area estimates (SAEs) for skewed variables or those containing outliers. When domain sample sizes are too small to support direct estimators, effects of skewness or outliers of the response variable can be large, and appropriately accounting for the distribution of the response variable given available auxiliary information is important. First, in order to stabilize the skewness and achieve normality in the response variable, we propose an area-level multiplicative log-measurement error model on the response variable, contrasting it with the additive measurement error model. In addition, we propose a multiplicative measurement error model on the covariates. Second, under our proposed modeling framework, we derive the empirical Bayes (EB) predictors of positive small area quantities subject to the covariates containing measurement error. Third, under our proposed framework, we explore how this methodology can be utilized more generally in SAE by developing constrained estimation methods for small area problems with measurement error. Fourth, we propose a corresponding mean squared prediction error using a bootstrap method, where we illustrate that the order of the bias is $O(m^{-1})$, under certain regularity conditions. Finally, we illustrate the performance of our methodology in both model-based simulation and design-based simulation studies, where we comment on the computational complexity of each method. (Joint with Rebecca Steorts.) Non-uniform subsampling for stochastic gradient MCMC: Srshti Putcha (Lancaster U) Abstract: Markov chain Monte Carlo (MCMC) scales poorly with dataset size. This is because it requires a full pass through the dataset at least once per iteration. Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional methods, by estimating the gradient of the log-likelihood with a small, uniform subsample of the data at each iteration. While efficient to compute, the resulting gradient estimator may exhibit a relatively high variance. This can adversely affect the convergence of the sampling algorithm to the desired posterior distribution. One way to reduce this variance is to sample the data points from a carefully selected (discrete) non-uniform distribution. The goal of this work is to propose a robust framework to conduct non-uniform subsampling for SGMCMC. To do this, we will draw inspiration from existing methodology proposed in the stochastic optimisation literature. We plan to demonstrate our method on various large-scale applications.
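The specific non-uniform subsampling framework described in the abstract above is not detailed here, so the following is only a generic Python sketch of the underlying idea: stochastic gradient Langevin dynamics with an importance-weighted minibatch gradient estimator on a toy Gaussian model. The choice of subsampling probabilities `probs`, the step size, and all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)   # toy dataset, x_i ~ N(theta, 1)
N = data.size

# Hypothetical non-uniform subsampling probabilities; in practice they would be
# chosen to reduce the variance of the gradient estimator.
probs = np.abs(data) + 0.1
probs /= probs.sum()

def grad_log_post_estimate(theta, idx):
    """Unbiased importance-weighted estimate of the log-posterior gradient."""
    grad_prior = -theta / 10.0                 # prior theta ~ N(0, 10)
    per_point = data[idx] - theta              # d/dtheta log N(x_i | theta, 1)
    weights = 1.0 / (idx.size * probs[idx])    # Horvitz-Thompson style weights
    return grad_prior + np.sum(weights * per_point)

theta, eps, batch = 0.0, 1e-4, 100
samples = []
for _ in range(5000):
    idx = rng.choice(N, size=batch, replace=True, p=probs)
    theta += 0.5 * eps * grad_log_post_estimate(theta, idx) + np.sqrt(eps) * rng.normal()
    samples.append(theta)

print(np.mean(samples[1000:]))   # should be close to the posterior mean (about 2)
```

Because each sampled point is reweighted by one over (batch size times its selection probability), the minibatch estimate is unbiased for the full-data gradient whatever non-uniform distribution is used; the abstract above concerns choosing that distribution so as to reduce the estimator's variance.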
An adaptive scheme for the Zig-Zag sampler: Andrea Bertazzi (Delft U of Technology) Abstract: The Zig-Zag sampler, introduced in Bierkens et al. (2019), is a Monte Carlo method based on a piecewise deterministic Markov process known as the Zig-Zag process. Empirical experiments have shown that the speed of convergence of this continuous time sampler can be affected by the shape of the target distribution, as for instance in the case of elliptical level curves. This issue can be tackled by tuning the parameters of the process, i.e. the vector of velocities. We then consider linearly transforming the state space and running the standard Zig-Zag sampler on this appropriately transformed space. The optimal transformation matrix can be learned on-the-go while the process explores the state space. This fits in the framework of adaptive Markov chain Monte Carlo algorithms after a suitable discretisation of the time variable. We study the ergodicity of the resulting adaptive Zig-Zag sampler by taking advantage of the existing literature on adaptive algorithms such as Roberts and Rosenthal (2007) and Fort et al. (2011). (Joint work with Joris Bierkens). Speeding Up the ZigZag Process: Giorgos Vasdekis (U of Warwick) Abstract: Piecewise Deterministic Markov Processes have recently drawn the attention of the Markov Chain Monte Carlo community. The first reason for this is that, in general, one can simulate exactly the entire path of such a process. The second is that these processes are non-reversible, which sometimes leads to faster mixing. One of the processes used is the ZigZag process, which moves linearly in the space $\mathbb{R}^d$ in specific directions for a random period of time, changing direction one coordinate at a time. An important question related to these samplers is the existence of a Central Limit Theorem which is closely connected to the property of Geometric Ergodicity. It turns out that the ZigZag process is not Geometrically Ergodic when targeting a heavy tail distribution. On this poster we present a way to speed up the ZigZag process to make the algorithm Geometrically Ergodic under heavy tails. Shrinkage in the time-varying parameter model framework using the R package shrinkTVP: Peter Knaus (Vienna U of Economics and Business) Abstract: Time-varying parameter (TVP) models are widely used in time series analysis to flexibly deal with processes which gradually change over time. However, the risk of overfitting in TVP models is well known. This issue can be dealt with using appropriate global-local shrinkage priors, which pull time-varying parameters towards static ones. In this paper, we introduce the R package shrinkTVP, which provides a fully Bayesian implementation of shrinkage priors for TVP models, taking advantage of recent developments in the literature, in particular that of Bitto and Frühwirth-Schnatter (2019). The package shrinkTVP allows for posterior simulation of the parameters through an efficient Markov Chain Monte Carlo scheme. Moreover, summary and visualization methods, as well as the possibility of assessing predictive performance through log predictive density scores, are provided. The computationally intensive tasks have been implemented in C++ and interfaced with R. The paper includes a brief overview of the models and shrinkage priors implemented in the package. Furthermore, core functionalities are illustrated, both with simulated and real data. (Joint with Angela Bitto-Nemling, Annalisa Cadonna, and Sylvia Frühwirth-Schnatter.) 
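Since the preceding abstract (and the triple gamma abstract later in this collection) centres on time-varying parameter models, a small simulation may help fix ideas. The sketch below is not part of the shrinkTVP package; it merely generates data from the kind of model the package fits, with some coefficients drifting and others static, so that a shrinkage prior has something to detect. All dimensions and variances are arbitrary assumptions.

import numpy as np

def simulate_tvp(T=200, p=3, sd_obs=0.5, sd_state=(0.2, 0.0, 0.0), seed=1):
    """Simulate y_t = x_t' beta_t + eps_t with random-walk regression coefficients.

    Coefficients whose state standard deviation is zero stay constant over
    time; shrinkage priors for TVP models aim to recover exactly this.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((T, p))
    beta = np.zeros((T, p))
    beta[0] = rng.standard_normal(p)
    for t in range(1, T):
        beta[t] = beta[t - 1] + rng.normal(0.0, sd_state)   # random-walk step per coefficient
    y = np.einsum("tp,tp->t", X, beta) + rng.normal(0.0, sd_obs, size=T)
    return y, X, beta

y, X, beta = simulate_tvp()
print(beta[0], beta[-1])   # only the first coefficient should have drifted noticeably

In R, the package described in the abstract fits such data through its main shrinkTVP() function; the exact call signature should be taken from the package documentation rather than from this sketch.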
Distributed Bayesian computation for model choice: Alexander Buchholz (U of Cambridge) Abstract: We propose a general method for distributed Bayesian model choice, where each worker has access only to non-overlapping subsets of the data. Our approach approximates the model evidence for the full dataset through Monte Carlo sampling from the posterior on every subset which is produced by any suitable method to return an estimator for the evidence. The model evidences per worker are then consistently combined using a novel approach which corrects for the splitting using summary statistics of the generated samples. This divide-and-conquer approach allows Bayesian model choice in the large data setting, exploiting all available information but limiting communication between workers. Our work thereby complements the work on consensus Monte Carlo (Scott et al., 2016) by explicitly enabling model choice. In addition, we show how the suggested approach can be extended to model choice within a reversible jump setting that explores multiple models within one run. (Joint with D. Ahfock and S. Richardson) Hamiltonian Monte Carlo with boundary reflections, and application to polytope volume calculations: Augustin Chevallier (Lancaster U) Abstract: This poster presents a study of HMC with reflections on the boundary of a domain, providing an enhanced alternative to Hit-and-run (HAR) to sample a target distribution in a bounded domain. We make three contributions. First, we provide a convergence bound, paving the way to more precise mixing time analysis. Second, we present a robust implementation based on multi-precision arithmetic – a mandatory ingredient to guarantee exact predicates and robust constructions. Third, we use our HMC random walk to perform polytope volume calculations, using it as an alternative to HAR within the volume algorithm by Cousins and Vempala. The tests, conducted up to dimension 50, show that the HMC RW outperforms HAR. (Joint work with Frederic Cazals and Sylvain Pion) Developments in Stein-based control variates: Leah South (Lancaster U) Abstract: Stein's method has recently been used to generate control variates which can improve Monte Carlo estimators of expectations when the derivatives of the log target are available. The two most popular Stein-based variance reduction techniques are zero-variance control variates (ZV-CV, a parametric approach) and control functionals (CF, a non-parametric alternative). This poster will describe two recent developments in this area. The first method applies regularisation methods in ZV-CV to give reduced-variance estimators in high-dimensional Monte Carlo integration (South, Oates, Mira, & Drovandi, 2018). A novel kernel-based method motivated by CF and by Sard's method for numerical integration will also be introduced. This kernel-based approach allows for misspecification in the ZV-CV regression problem, and represents a balance between ZV-CV and CF when the number of samples is sufficiently large. The benefits of the proposed variance reduction techniques will be illustrated using several Bayesian inference examples. (Joint with Chris Oates, Chris Nemeth, Toni Karvonen, Antonietta Mira, Mark Girolami and Chris Drovandi.) Hug and Hop: A discrete-time, non-reversible Markov chain Monte Carlo algorithm: Matthew Ludkin (Lancaster U) Abstract: We introduced the Hug and Hop Markov chain Monte Carlo algorithm for estimating expectations with respect to an intractable distribution $\pi$.
The algorithm alternates between two kernels: Hug and Hop. Hug is a non-reversible kernel that uses repeated applications of the bounce mechanism from the recently proposed Bouncy Particle Sampler to produce a proposal point far from the current position, yet on almost the same contour of the target density, leading to a high acceptance probability. Hug is complemented by Hop, which deliberately proposes jumps between contours and has an efficiency that degrades very slowly with increasing dimension. There are many parallels between Hug and Hamiltonian Monte Carlo (HMC) using a leapfrog integrator, including an $\mathcal{O}(\delta^2)$ error in the integration scheme; however, Hug is also able to make use of local Hessian information without requiring implicit numerical integration steps, improving efficiency when the gains in mixing outweigh the additional computational costs. We test Hug and Hop empirically on a variety of toy targets and real statistical models and find that it can, and often does, outperform HMC on the exploration of components of the target. Hierarchical variance shrinkage through the triple gamma prior: Annalisa Cadonna (Vienna U of Economics and Business) Abstract: Time-varying parameter (TVP) models are very flexible in capturing gradual changes in the effect of a predictor on the outcome variable. However, in particular when the number of predictors is large, there is a known risk of overfitting and poor predictive performance, since the effect of some predictors is constant over time. In the present work, a triple gamma prior is proposed for variance shrinkage in TVP models. The triple gamma prior encompasses a number of priors that have been suggested previously, such as the Bayesian Lasso, the double gamma prior and the Horseshoe prior. Interesting properties of the triple gamma prior are outlined and an efficient Markov Chain Monte Carlo algorithm is developed. An extended simulation study is conducted and the proposed modeling approach is applied to real data, both in a univariate and a multivariate framework. The predictive performance of different shrinkage priors is compared in terms of log predictive density scores. (Joint with Peter Knaus & Sylvia Frühwirth-Schnatter.) Scaling Bayesian probabilistic record linkage with post-hoc blocking: An application to the California Great Registers: Brendan McVeigh (Carnegie Mellon U) Abstract: Probabilistic record linkage (PRL) is the process of determining which records in two databases correspond to the same underlying entity in the absence of a unique identifier. Bayesian solutions to this problem provide a powerful mechanism for propagating uncertainty due to uncertain links between records (via the posterior distribution over linkage structures). However, computational considerations severely limit the practical applicability of existing Bayesian methods. We propose a new computational approach that dramatically improves the scalability of posterior inference, scaling Bayesian inference to problems orders of magnitude larger than state-of-the-art algorithms. We demonstrate our method on a subset of an OCR'd dataset, the California Great Registers, containing hundreds of thousands of voter registrations. Despite lacking a high-quality blocking key, our approach allows a posterior distribution to be estimated on a single machine in a matter of hours. Our advances make it possible to perform Bayesian PRL for larger problems, and to assess the sensitivity of results to varying model specifications.
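Because blocking is the computational crux of the record-linkage abstract above, here is a deliberately minimal, generic illustration of why it helps: records are grouped by a blocking key and candidate pairs are formed only within groups. This is not the post-hoc blocking procedure of the abstract, and the toy fields are invented.

from itertools import combinations
from collections import defaultdict

def candidate_pairs(records, key):
    """Group records by a blocking key and only compare within blocks.

    `records` is a list of dicts; `key` maps a record to its blocking key
    (e.g. first letter of surname plus birth year). Without blocking, all
    n*(n-1)/2 pairs would have to be compared.
    """
    blocks = defaultdict(list)
    for i, rec in enumerate(records):
        blocks[key(rec)].append(i)
    pairs = []
    for ids in blocks.values():
        pairs.extend(combinations(ids, 2))
    return pairs

# Hypothetical toy records (names and fields are made up).
records = [
    {"surname": "Smith", "year": 1920},
    {"surname": "Smyth", "year": 1920},
    {"surname": "Jones", "year": 1921},
]
print(candidate_pairs(records, key=lambda r: (r["surname"][0], r["year"])))

With a reasonable key, the number of pairs to score drops from quadratic in the file size to roughly quadratic in the (much smaller) block sizes.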
Finite mixtures are typically inconsistent for the number of components: Diana Cai (Princeton U) Abstract: Scientists and engineers are often interested in learning the number of subpopulations (or clusters) present in a data set. It is common to use a Dirichlet process mixture model (DPMM) for this purpose. But Miller and Harrison (2013) warn that the DPMM posterior is severely inconsistent for the number of clusters when the data are truly generated from a finite mixture; that is, the posterior probability of the true number of clusters goes to zero in the limit of infinite data. A potential alternative is to use a finite mixture model (FMM) with a prior on the number of clusters. Past work has shown the resulting posterior in this case is consistent. But these results crucially depend on the assumption that the cluster likelihoods are perfectly specified. In practice, this assumption is unrealistic, and empirical evidence (Miller and Dunson, 2018) suggests that the posterior on the number of clusters is sensitive to the likelihood choice. In this paper, we prove that under even the slightest model misspecification, the FMM posterior on the number of components is also severely inconsistent. We support our theory with empirical results on simulated and real data sets. (Joint work with Trevor Campbell and Tamara Broderick.) Entity resolution, canonicalization, and the downstream task: Kelsey Lieberman (Duke U) Abstract: Entity resolution (ER) is the process of merging noisy databases to remove duplicate entities, often in the absence of a unique identifier. ER can be thought of as a data cleansing task, where analysts are most concerned about the downstream tasks of inference/prediction. Crucial to this is understanding the uncertainty of errors at each stage in the pipeline, such that these are appropriately passed into the downstream task. In this talk, we consider a three-stage pipeline. First, we consider the ER task, which could be probabilistic or Bayesian. Specifically, we propose a Bayesian graphical model that incorporates training data into the model directly such that the sampler can make faster updates when applied to very large datasets. Second, we propose new methodology for selecting the most representative record from the output of the ER task, which is known as canonicalization. Third, we consider the prediction task of linear and logistic regression in experiments making comparisons to benchmarks in the literature. Finally, we will give a discussion of the proposed work and future directions. (Joint with Rebecca Steorts). Genomic variety estimation via Bayesian nonparametrics: Lorenzo Masoero (Massachusetts Institute of Technology) Abstract: The exponential growth in size of human genomic studies, with tens of thousands of observations, opens up the intriguing possibility of investigating the role of rare genetic variants in biological human evolution. A better understanding of rare genetic variants is crucial for the study of rare genetic diseases, as well as for personalized medicine. A crucial challenge when working with rare variants is to develop a statistical framework to assess if the observed sample is truly representative of the underlying population. In particular, it is important to understand (i) what fraction of the relevant variation present in the human genome is not yet captured by available datasets and (ii) how to design future experiments in order to maximize the number of hitherto unseen genomic variants.
We propose a novel rigorous methodology to address both problems using a nonparametric Bayesian framework. Our contribution is twofold: first, we provide an estimator for the number of hitherto unseen variants that will be observed when additional samples from the same population are collected, and study its theoretical and empirical properties. Moreover, we show how this approach can be used in the context of the optimal design of genomic studies. For this problem, under a fixed budget, one is interested in maximizing the number of genomic discoveries by optimally enlarging a dataset, trading off between the additional number of individuals to be sequenced and the quality of the individual samples. (Joint work with Stefano Favaro, Federico Camerlenghi and Tamara Broderick.) Spike and slab priors for undirected Gaussian graphical model selection: Jack Carter (U of Warwick) Abstract: We introduce a class of prior distributions on the precision matrix of a Gaussian random variable. Priors in this class involve a spike and slab density being set on the partial correlations; this induces sparsity on the related undirected graphical model and aids computational efficiency by leading to an EM algorithm and posterior Gibbs sampler that are easy to derive. We pay particular attention to the use of a non-local MOM density for the slab, which better represents the hypothesis that the partial correlation is 0 by having zero density at 0. For this we suggest default parameter values which ensure interpretability of the prior and control the threshold on the partial correlations at which we include an edge in the model. This has links to causality by ensuring that any edge included in the graphical model is of a certain specified strength - one of the Bradford Hill criteria. We also discuss the computational aspects related to posterior inference. The use of a spike and slab prior removes the need for a model search algorithm over the space of undirected graphical models; however, direct inference on the model is not possible. We propose a number of methods involving an EM algorithm and Gibbs sampling to make posterior inference. (Joint work with David Rossell and Jim Smith). Bayesian analysis for hierarchical models using piecewise-deterministic Markov processes: Matthew Sutton (Lancaster U) Abstract: Piecewise-deterministic Markov processes (PDMPs) are an emerging class of Markov chain Monte Carlo methods for efficient sampling of complex targets. In practice, sampling from a PDMP involves simulating from a non-homogeneous Poisson process. This non-trivial task is usually accomplished through thinning, which requires simulating from an upper bound on the Poisson rate. The efficiency of the sampler is affected by how tight this upper bound is (statistical efficiency) and how quickly it can be simulated (computational efficiency). In this work, we explore the efficiency of PDMPs for sampling in a popular class of Bayesian inference models. Specifically, we focus on latent Gaussian models where there is a non-Gaussian response and the latent field is a Gaussian distribution controlled by a few hyper-parameters. We take advantage of the sparsity in the precision of the Gaussian field to ensure computational efficiency and derive tight upper bounds for the thinning in these models. Finally, we measure the potential of these methods alongside alternatives such as Hamiltonian Monte Carlo and the Metropolis-adjusted Langevin Algorithm.
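The thinning step mentioned at the end of the previous abstract is standard enough to sketch in a few lines: to draw the next event of a non-homogeneous Poisson process whose rate lambda(t) is bounded above by a constant Lambda, propose exponential waiting times at rate Lambda and accept a proposal at time t with probability lambda(t)/Lambda. The bound below is a toy constant, not the model-specific bounds derived in the abstract.

import numpy as np

def first_event_by_thinning(rate, bound, t0=0.0, rng=None):
    """First event time of a Poisson process with rate(t) <= bound, via thinning."""
    rng = np.random.default_rng() if rng is None else rng
    t = t0
    while True:
        t += rng.exponential(1.0 / bound)       # candidate from the dominating process
        if rng.random() < rate(t) / bound:      # accept with probability rate(t)/bound
            return t

# Example: sinusoidal rate, dominated by the constant 2.0 (illustrative).
rng = np.random.default_rng(42)
print(first_event_by_thinning(lambda t: 1.0 + np.sin(t), bound=2.0, rng=rng))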
A Bayesian nonparametric test for conditional independence: Onur Teymur (Imperial College London) Abstract: We present a Bayesian nonparametric method for assessing the dependence or independence of two variables conditional on a third. Our approach uses Polya tree priors on spaces of conditional probability distributions; these random measures are then embedded within a decision-theoretic test for conditional (in)dependence. The setup supports the testing of large datasets while relaxing the linearity assumptions central to classical approaches such as partial correlation. In fact, no assumption whatsoever is made on the form of dependence between the variables. The test is fully Bayesian, meaning both hypotheses can be positively evidenced—this feature is particularly useful for causal discovery and is not employed by any previous procedure of this type. Efficient stochastic optimisation by unadjusted Langevin Monte Carlo. Application to maximum marginal likelihood and empirical Bayesian estimation: Valentin De Bortoli (ENS Paris Saclay) Abstract: Stochastic approximation methods play a central role in maximum likelihood estimation problems involving intractable likelihood functions, such as marginal likelihoods arising in problems with missing or incomplete data, and in parametric empirical Bayesian estimation. Combined with Markov chain Monte Carlo algorithms, these stochastic optimisation methods have been successfully applied to a wide range of problems in science and industry. However, this strategy scales poorly to large problems because of methodological and theoretical difficulties related to using high-dimensional Markov chain Monte Carlo algorithms within a stochastic approximation scheme. This paper proposes to address these difficulties by using unadjusted Langevin algorithms to construct the stochastic approximation. This leads to a highly efficient stochastic optimisation methodology with favourable convergence properties that can be quantified explicitly and easily checked. The proposed methodology is demonstrated with three experiments, including a challenging application to high-dimensional statistical audio analysis and a sparse Bayesian logistic regression with random effects problem. (Joint work with Alain Durmus, Marcelo Pereyra and Ana Fernandez Vidal.) Noise contrastive estimation: Asymptotic properties, formal comparison with MC-MLE: Lionel Riou-Durand (U of Warwick) Abstract: A statistical model is said to be un-normalised when its likelihood function involves an intractable normalising constant. Two popular methods for parameter inference for these models are MC-MLE (Monte Carlo maximum likelihood estimation), and NCE (noise contrastive estimation); both methods rely on simulating artificial data-points to approximate the normalising constant. While the asymptotics of MC-MLE have been established under general hypotheses (Geyer, 1994), this is not so for NCE. We establish consistency and asymptotic normality of NCE estimators under mild assumptions. We compare NCE and MC-MLE under several asymptotic regimes. In particular, we show that, when $m \rightarrow \infty$ while $n$ is fixed ($m$ and $n$ being respectively the number of artificial data-points, and actual data-points), the two estimators are asymptotically equivalent. 
Conversely, we prove that, when the artificial data-points are IID, and when $n \rightarrow \infty$ while $m/n$ converges to a positive constant, the asymptotic variance of an NCE estimator is always smaller than the asymptotic variance of the corresponding MC-MLE estimator. We illustrate the variance reduction brought by NCE through a numerical study. (Joint with N. Chopin.) Alternative tests for financial risk model validation: Elena Goldman (Pace U) Abstract: The current practice of financial risk management is to evaluate models based on ex-post outcome: backtesting. For example, financial institutions need to evaluate Value at Risk (VaR) estimates for setting a bank's economic capital or initial margins for clearing agencies (CCPs). The advantage of Bayesian methods is the ability to obtain the full posterior distribution of risk measures rather than simple point estimates. Furthermore, the Bayesian approach can produce predictive scores and posterior pdfs of loss functions. For CCP margins we introduce a method based on the distribution of a model loss function that captures the trade-off between margin shortfall and procyclicality. We show how this loss function performs for various risk models in measuring tail risk. We perform a test of model selection using the posterior distribution of the differences between CDFs of loss measures introduced in Goldman et al. (2013). (Joint with Xiangjin Shen.) The $f$-Divergence Expectation Iteration scheme: Kamelia Daudel (Institut Polytechnique de Paris) Abstract: We introduce the $f$-EI$(\phi)$ algorithm, a novel iterative algorithm which operates on measures and performs $f$-divergence minimisation in a Bayesian framework. We prove that for a rich family of values of $(f,\phi)$ this algorithm leads at each step to a systematic decrease in the $f$-divergence and show that we achieve an optimum. In the particular case where we consider a weighted sum of Dirac measures and the $\alpha$-divergence, we obtain that the calculations involved in the $f$-EI$(\phi)$ algorithm simplify to gradient-based computations. Empirical results support the claim that the $f$-EI$(\phi)$ algorithm serves as a powerful tool to assist variational methods. (Joint with Randal Douc, Francois Portier and Francois Roueff.) Scalable approximate inference for state space models with normalising flows: Tom Ryder (Newcastle U) Abstract: By exploiting mini-batch stochastic gradient optimisation, variational inference has had great success in scaling up approximate Bayesian inference to big data. To date, however, this strategy has only been applicable to models of independent data. Here we extend mini-batch variational methods to state space models of time series data. To do so we introduce a novel generative model as our variational approximation, a local inverse autoregressive flow. This allows a subsequence to be sampled without sampling the entire distribution. Hence we can perform training iterations using short portions of the time series at low computational cost. We illustrate our method on AR(1), Lotka-Volterra and FitzHugh-Nagumo models, achieving accurate parameter estimation in a short time. Record Linkage and Time Series Regression: Shubhi Sharma (Duke U) Abstract: Entity resolution (ER) (record linkage or de-duplication) is the process of merging noisy datasets and removing duplicate entries, often in the absence of a unique identifier for records.
We propose a novel unsupervised approach for linking records across arbitrarily many files, while simultaneously detecting duplicate records within files, in a temporal setting. We leverage existing work in the literature such that we can represent patterns of links between records as a bipartite graph, in which records are directly linked to latent true individuals, and only indirectly linked to other records. We propose an efficient, linear-time, hybrid Markov chain Monte Carlo algorithm, which overcomes many obstacles, such as low acceptance rates, encountered by previously proposed methods of ER. We assess our results on real and simulated data. (Joint with Jairo Fuquene and Rebecca Steorts.) Exploring alternative likelihoods and priors in geographic profiling with the R package Silverblaze: Michael Stevens (Queen Mary U of London) Abstract: Geographic profiling (GP), a model originally developed in criminology in cases of serial crime such as rape, murder and arson, is used to prioritise large lists of suspects associated with a set of linked crimes. GP is used to identify likely areas containing an offender's home or workplace given where they've committed their crimes. GP now boasts a range of applications in ecology and epidemiology, such as finding nesting locations of invasive species or areas associated with the outbreak of an infectious disease. The model spatially clusters point pattern data in geographical space using a Dirichlet Process Mixture (DPM) model. Despite the wide success of this model, the DPM relies on a Gibbs sampling algorithm (Neal 2000, Verity et al. 2014) that restricts the user to specific forms for the likelihood and prior given the conjugacy between them. Part of my work revolves around introducing a Metropolis-Hastings algorithm alongside a Gibbs sampler to GP. This poster will describe the development of Silverblaze, an R package for running the GP model using different kinds of likelihoods and priors specified by the user. A large proportion of publications following on from Verity et al. (2014) consistently fit a mixture of normal distributions to the data. Silverblaze is the first instance in GP where a user can specify different decaying distributions, as well as infer under a Poisson model when count data have been collected in place of point pattern data, in order to estimate additional parameters of interest such as population density. Transport Monte Carlo: Leo Duan (U of Florida) Abstract: In Bayesian posterior estimation, the transport map finds a deterministic transform from a simple reference distribution to a potentially complicated posterior distribution. Compared to other sampling approaches, it is capable of generating independent samples while exploiting efficient optimization toolboxes. However, a fundamental concern is that the invertible map is challenging to parameterize with sufficient flexibility, and may even fail to exist between the two distributions. To address this issue, we propose Transport Monte Carlo, which models the transform as a random choice from multiple maps. It corresponds to a coupling distribution of the reference and posterior, which is guaranteed to exist under mild conditions. This framework allows us to decompose a sophisticated transform into multiple components; each is now simple to parameterize and estimate. At the same time, it enjoys a direct extension to coupling a continuous reference and a discrete posterior.
We examine its theoretical properties, including the error rate due to the finite training sample size. Compared to existing methods such as Hamiltonian Monte Carlo or neural network-based transport map, our method demonstrates much-improved performances in several common sampling problems, including the multi-modal distribution, high-dimensional sparse regression, and combinatorial sampling of the graph edges. Removing the mini-batching bias for large scale Bayesian inference: Inass Sekkat (Ecole des Ponts) Abstract: The computational cost of usual Monte Carlo methods for sampling a posteriori laws in Bayesian inference scales linearly with the number of data points, which becomes prohibitive in the big data context. One option is to resort to mini-batching in conjunction with unadjusted discretizations of Langevin dynamics, in which case only a random fraction of the data is used to estimate the gradient. However, this leads to an additional noise in the dynamics and hence a bias on the invariant measure which is sampled by the Markov chain. We advocate using the so-called Adaptive Langevin dynamics, which is a modification of standard inertial Langevin dynamics with a dynamical friction which automatically corrects for the increased noise arising from mini-batching. We investigate in particular the practical relevance of the assumptions underpinning Adaptive Langevin (constant covariance for the estimation of the gradient), which are not satisfied in typical models of Bayesian inference; and discuss how to extend the approach to more general situations. Optimal scaling of MALA with Laplace distribution as a target: Pablo Jimenez (Ecole Polytechnique) Abstract: This paper considers the optimal scaling problem for Metropolis adjusted approximations of Langevin dynamics for the Laplace distribution. We obtain, similarly to the results established in [Roberts, Rosenthal 1997], and under the same setting - independent and identically distributed models and at stationarity - the convergence of the first component of the corresponding Markov chain, rescaled in time and space, to a Langevin diffusion process as the dimension d goes to infinity. However, maybe surprisingly, the optimal scaling obtained with respect to the dimension d is 2/3, which is therefore different from the one holding for smooth distributions. As a result, we obtain a new optimal acceptance rate, approximatively 0.360. Bayesian nonparametric models for graph structured data: Florence Forbes (U of Grenoble, INRIA) Abstract: We consider the issue of determining the structure of clustered data, both in terms of finding the appropriate number of clusters and of modelling the right dependence structure between the observations. Bayesian nonparametric (BNP) models, which do not impose an upper limit on the number of clusters, are appropriate to avoid the required guess on the number of clusters but have been mainly developed for independent data. In contrast, Markov random fields (MRF) have been extensively used to model dependencies in a tractable manner but usually reduce to finite cluster numbers when clustering tasks are addressed. Our main contribution is to propose a general scheme to design tractable BNP-MRF priors that combine both features: no commitment to an arbitrary number of clusters and a dependence modelling. 
A key ingredient in this construction is the availability of a stick-breaking representation, which has the threefold advantage of allowing us to extend standard discrete MRFs to an infinite state space, to design a tractable estimation algorithm using variational approximation, and to derive theoretical properties on the predictive distribution and the number of clusters of the proposed model. This approach is illustrated on a challenging natural image segmentation task for which it shows good performance with respect to the literature. Overfitted mixture models to learn the number of chains in Factorial Hidden Markov Models: Applications to stochastic volatility modeling: Jan Greve (Vienna U of Economics and Business) Abstract: When dealing with moderate to high-dimensional Markov switching models, factorial hidden Markov models (FHMMs) present a more parsimonious alternative to the traditional hidden Markov models (HMMs). The more restrictive representation of the overall state space in a distributed manner is especially useful when the state space in question is relatively large, a case in which most HMM-based approaches would encounter computational difficulties when resolving label switches, a typical issue of combinatorial complexity. For the FHMM, which usually restricts the number of states within each distributed Markov chain to be equal, the only model parameter is the number of latent chains that determines the overall size of the state space. We rephrase this model selection problem into the overfitted mixture framework, where the number of latent chains is set to be much larger than the true value and we would like our sampler to learn the number of effective chains. This is achieved through the combination of component-wise shrinkage within each chain and shrinkage applied to the distribution of the persistence probabilities. In this way, it is possible to make redundant chains "inactive", in the sense that they contribute neither to the likelihood nor to the entropy of the joint transition matrix, which is constructed by taking a tensor product of the transition matrices of the individual chains. Finally, the overall framework of this paper will be demonstrated through an application to stochastic volatility models. (Joint work with Sylvia Frühwirth-Schnatter.) LR-GLM: High-dimensional Bayesian inference using low-rank data approximations: Brian Trippe (Massachusetts Institute of Technology) Abstract: Due to the ease of modern data collection, applied statisticians often have access to a large set of covariates that they wish to relate to some observed outcome. Generalized linear models (GLMs) offer a particularly interpretable framework for such an analysis. In these high-dimensional problems, the number of covariates is often large relative to the number of observations, so we face non-trivial inferential uncertainty; a Bayesian approach allows coherent quantification of this uncertainty. Unfortunately, existing methods for Bayesian inference in GLMs require running times roughly cubic in the parameter dimension, and so are limited to settings with at most tens of thousands of parameters. We propose to reduce time and memory costs with a low-rank approximation of the data in an approach we call LR-GLM. When used with the Laplace approximation or Markov chain Monte Carlo, LR-GLM provides a full Bayesian posterior approximation and admits running times reduced by a full factor of the parameter dimension.
We rigorously establish the quality of our approximation and show how the choice of rank allows a tunable computational-statistical trade-off. Experiments support our theory and demonstrate the efficacy of LR-GLM on real large-scale datasets. Scalable inference for agent-based models: Nianqiao Ju (Harvard U) Abstract: Agent-based models (ABMs) represent systems at the level of their constituent units, because in many complex dynamic networks, macro-level phenomena arise from micro-level behaviors. For example, in a susceptible-infected-recovered (SIR) stochastic agent-based model for infectious diseases, agents interact in a (possibly dynamic) network by infecting each other and perform individual-level actions such as birth, death, and recovery. We consider the task of learning ABMs, which can be viewed as a statistical inference task in a large-dimensional hidden Markov model. In this poster, we focus on approximating the marginal likelihood function for the SIR disease process where we observe only a fraction of the total number of infections. We develop two data augmentation schemes that lead to the auxiliary particle filter (APF) proposal, and we present their connections to the Poisson-Binomial and Conditional-Binomial distributions. With informative population observations, the APF avoids particle degeneracy of the bootstrap particle filter. (Joint work with Pierre Jacob and Jeremy Heng). A Gibbs-like integrator for Hamiltonian Monte Carlo: Melissa Malcom (Rutgers U) Abstract: This poster compares the convergence of Gibbs and HMC for Bayesian hierarchical models. The Hamiltonian dynamics in HMC is approximated by a Gibbs-like symplectic integrator adapted to the structure of hierarchical models. This integrator allows larger time step sizes than Verlet, which in turn, accelerates convergence of HMC. Estimating learning coefficients for model evaluation using MCMC simulations: Toru Imai (Kyoto U) Abstract: Evaluation of the marginal likelihood of singular models such as deep learning is a challenging task. The singular Bayesian information criterion (sBIC) gives the state-of-the-art approximation to the log marginal likelihood, which can be applied to both regular and singular models. However, sBIC requires the theoretical values of the learning coefficients, but only few learning coefficients are known. In this presentation, we propose a new estimator of the learning coefficients using MCMC simulations. A method to estimate state space model by spatiotemporal continuity: Tsuyoshi Ishizone (Meiji U) Abstract: Model estimation from time series and/or spatio-temporal data is important topic since it helps us to extract useful information from big data in recent years. In this poster, we introduce an estimation algorithm of the linear Gaussian state space model with focusing on the real-time property. Our algorithm is quicker than and as accurate as existing methods, therefore, it suffices the requirement of the rapid response for the alternation of the fields. Moreover, we introduce localization and spatial uniformity into the algorithm to reduce the number of the parameters. Thanks to this, we obtain stable method to estimate parameters regarding state transition and states. Stochastic scale mixture modeling in Bayesian longitudinal data analysis: Anish Mukherjee (Case Western Reserve U) Abstract: A variety of methods for generalizing standard mixed model to explain time correlated response structure have been developed in the context of longitudinal data analysis. 
Using Gaussian process (GP) to specify an AR structure is a typical way to introduce within subject serial correlation into the model. In order to generalize, Quintana et al 2016 proposed a Dirichlet Process Mixture (DPM) over the covariance parameters of the GP. Here we propose a scale mixture of GPs as an alternative approach that allows for modeling heterogeneous covariance structure in a more flexible way. Different mixing distributions and the associated shrinkage behavior can be utilized to explain different covariance structures present in the data. We discuss the computational challenges associated with our approach and report promising estimation and prediction performances as compared to the DPM based method in different simulation setups and in a real data example. Finite-sample correction for estimators of real log canonical threshold based on Markov chain Monte Carlo: Shiro Tanaka (Kyoto U) Abstract: In singular model selection problems that involve models whose Fisher information matrices may fail to be invertible, the penalty structure in deviance information criteria or Schwarz's Bayesian information criteria (BIC) does not reflect the theoretical large sample behavior under the regularity conditions. Drton and Plummer (2017) presented an extension of BIC, singular BIC, based on a large sample approximation of the marginal likelihood that involves a constant called as real log canonical threshold (Watanabe 2009). Real log canonical threshold is determined by the algebraic geometrical structure of the statistical model and prior distribution and is generally unknown. In this work, we consider generic methods for computing real log canonical threshold and singular Bayesian information criteria based on Markov-chain Monte Carlo. Simulation experiments suggested that an estimator with finite-sample correction outperforms other estimators under normal distribution, normal linear models, logistic regression, and normal mixture. (Joint work with Toru Imai.) Estimating convergence of Markov chains with L-lag couplings: Niloy Biswas (Harvard U) Abstract: Markov chain Monte Carlo (MCMC) methods generate samples that are asymptotically distributed from a target distribution of interest as the number of iterations goes to infinity. Various theoretical results provide upper bounds on the distance between the target and marginal distribution after a fixed number of iterations. These upper bounds are on a case by case basis and typically involve intractable quantities, which limits their use for practitioners. We introduce L-lag couplings to generate computable, non-asymptotic upper bound estimates for the total variation or the Wasserstein distance of general Markov chains. We apply L-lag couplings to the tasks of (i) determining MCMC burn-in, (ii) comparing different MCMC algorithms with the same target, and (iii) comparing exact and approximate MCMC. Lastly, we (iv) assess the bias of sequential Monte Carlo and self-normalized importance samplers. Bayesian adaptive sequential design: Dinko Franceschi (Columbia U) Abstract: We introduce a novel model on how to do better decision making, going beyond the current approach of Bandit or A/B testing in clinical trials. We provide a better bridge between existing literature and real world applications by creating a more realistic framework for clinical drug trials. 
This method integrates adaptive testing, includes estimated costs and benefits into the decisions, and considers a stream of innovations rather than treating decisions one at a time. We present a Bayesian model which shows the value of the proposed framework in the medical treatment world and offer experimental results demonstrating its performance over traditional non-Bayesian approaches. A divide and conquer algorithm of Bayesian density estimation: Ya Su (U of Kentucky) Abstract: Data sets for statistical analysis become extremely large even with some difficulty of being stored on one single machine. Even when the data can be stored in one machine, the computational cost would still be intimidating. We propose a divide and conquer solution to density estimation using Bayesian mixture modeling including the infinite mixture case. The methodology can be generalized to other application problems where a Bayesian mixture model is adopted. The proposed prior on each machine or subsample modifies the original prior on both mixing probabilities as well as on the rest of parameters in the distributions being mixed. The ultimate estimator is obtained by taking the average of the posterior samples corresponding to the proposed prior on each subset. Despite the tremendous reduction in time thanks to data splitting, the posterior contraction rate of the proposed estimator stays the same (up to a log factor) as that of the original prior when the data is analyzed as a whole. Simulation studies also justify the competency of the proposed method compared to the established WASP estimator in the finite dimension case. In addition, one of our simulations is performed in a shape constrained deconvolution context and reveals promising results. The application to a GWAS data set reveals the advantage over a naive method that uses the original prior. Efficient Bayesian synthetic likelihood with whitening transformations: Chris Drovandi (Queensland U of Technology) Abstract: Likelihood-free methods are an established approach for performing approximate Bayesian inference for models with intractable likelihood functions. However, they can be computationally demanding. Bayesian synthetic likelihood (BSL) is a popular such method that approximates the likelihood function of the summary statistic with a known, tractable distribution -- typically Gaussian -- and then performs statistical inference using standard likelihood-based techniques. However, as the number of summary statistics grows, the number of model simulations required to accurately estimate the covariance matrix for this likelihood rapidly increases. This poses a significant challenge for the application of BSL, especially in cases where model simulation is expensive. Here we propose whitening BSL (wBSL) -- an efficient BSL method that uses approximate whitening transformations to decorrelate the summary statistics at each algorithm iteration. We show empirically that this can reduce the number of model simulations required to implement BSL by more than an order of magnitude, without much loss of accuracy. (Joint work with Jacob Priddle and Scott Sisson.) Eric Moulines, Co-chair (Ecole Polytechnique) Christian P. Robert, Co-chair (Université Paris-Dauphine) Mylène Bédard (Université de Montréal) Arnaud Doucet (University of Oxford) Andreas Eberle (University of Bonn) Florence Forbes (INRIA Grenoble) Sylvia Frühwirth-Schnatter (Wirtschaftsuniverstiät - Vienna U. 
of Economics and Business) Kerrie Mengersen (Queensland University of Technology) Sean Meyn (University of Florida) Kevin Murphy (Google) Mario Peruggia (The Ohio State University) David Rossell (Universitat Pompeu Fabra) Aki Vehtari (Aalto University) Jim Hobert, Chair (University of Florida) Brenda Betancourt (University of Florida) Kshitij Khare (University of Florida) George Michailidis (University of Florida) Rohit Kumar Patra (University of Florida) ISBA takes very seriously any form of misconduct, including but not limited to sexual harassment and bullying. All meeting participants are expected to adhere strictly to the official ISBA Code of Conduct. Following the safeISBA motto, we want ISBA meetings to be safe and to be fun. We encourage participants to report any concerns or perceived misconduct to the meeting organizers, Jim Hobert and Christian Robert. Further suggestions can be sent to [email protected].
Symposium - International Astronomical Union (6) Accretion phenomena in nearby star-forming dwarf galaxies F. Annibali, M. Tosi, A. Aloisi, M. Bellazzini, A. Buzzoni, M. Cignoni, L. Ciotti, F. Cusano, C. Nipoti, E. Sacchi, D. Paris, D. Romano Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S321 / March 2016 We present two pilot studies for the search and characterization of accretion events in star-forming dwarf galaxies. Our strategy consists of two complementary approaches: i) the direct search for stellar substructures around dwarf galaxies through deep wide-field imaging, and ii) the characterization of the chemical properties in these systems up to large galacto-centric distances. We show our results for two star-forming dwarf galaxies, the starburst irregular NGC 4449, and the extremely metal-poor dwarf DDO 68. Kinematic structure in the Galactic halo at the North Galactic Pole: RR Lyrae and BHB stars show different kinematics T. D. Kinman, C. Cacciari, A. Bragaglia, A. Buzzoni, A. Spagna Journal: Proceedings of the International Astronomical Union / Volume 2 / Issue S241 / December 2006 Heliocentric (UVW) and galactocentric (VR VΦ VZ) space motions were derived for 38 RR Lyrae (RRL) and 79 blue horizontal branch (BHB) stars in a 200-sq degree area near the North Galactic Pole (NGP). A kinematic analysis of the 26 RRL and 52 BHB stars whose height (Z) above the plane is < 8 kpc shows that the sample is not homogeneous. Our BHB sample shows zero galactic rotation and roughly isotropic velocity dispersions. whereas the RRL sample shows a definite retrograde rotation and non-isotropic velocity dispersions. The combined BHB and RRL sample shows a smaller retrograde rotation that is similar to that found by Majewski et al. (1996) for a sample of subdwarfs in SA 57 at the NGP. There are significantly more RRL with negative W-velocity (streaming down) than positive W-velocity, whereas the numbers of BHB stars are comparable. This indicates the presence near the NGP of an accreted halo component that is rich in RRL (probably Oosterhoff type I) stars. These results are presented in detail in a forthcoming paper (Kinman et al. 2007). The luminosity-specific Planetary Nebulae density in Local Group galaxies R.L.M. Corradi, A. Buzzoni, M. Arnaboldi Journal: Proceedings of the International Astronomical Union / Volume 2 / Issue S234 / April 2006 The value of the $\alpha$ ratio, the number of PNe per unit bolometric luminosity in a galaxy, is computed using stellar population synthesis models covering the whole range of Hubble types of galaxies. Model predictions are compared with the PNe counts in the Local Group, which indicate a fairly constant value of $\alpha$ – between 1 and 6 PNe per 10$^7$ solar luminosities – along the Hubble sequence. The Vertical Structure of the Halo Rotation C. Boily, P. Patsis, S. Portegies, R. Spurzem, C. Theis, T. D. Kinman, C. Cacciari, A. Bragaglia, A. Buzzoni, A. Spagna Published online by Cambridge University Press: 22 December 2003, p. 115 New GSC-II proper motions of RR Lyrae and Blue Horizontal Branch (BHB) stars near the North Galactic Pole are used to show that the Galactic Halo 5 kpc above the Plane has a significantly retrograde galactic rotation. Evolutionary Population Synthesis Models of Primeval Galaxies: A Critical Appraisal A. 
Buzzoni Journal: Symposium - International Astronomical Union / Volume 183 / 1999 A recognized problem when searching for primeval galaxies at cosmological distances is a definition of a firm selection criterion (alternative to a direct but extremely time-demanding measure of z) to single out high-redshift candidates from the plethora of other more or less peculiar objects (i.e. AGN, starburst galaxies etc.) at lower distances. The case of the HST "Deep Field" (HDF) observations (Williams et al. 1996) is especially relevant in this regard as galaxies up to z = 3.2 have been detected in the field (Steidel et al. 1996). Synthetic photometric indices for Galactic globular clusters M.L. Malagnini, L.E. Pasinetti Fracassini, S. Covino, A. Buzzoni Published online by Cambridge University Press: 07 August 2017, pp. 343-344 A grid of synthetic spectral energy distributions, representative of old stellar populations, has been used to derive colors in different photometric systems, and to compare the theoretical predictions with observational data for galactic globular clusters. Testing the World Model Through High Redshift Galaxies in Cluster Alberto Buzzoni, Guido Chincarini, Emilio Molinari We discuss the results of a cosmological test involving mean colors of the early-type galaxy population in distant clusters for tracing the World model. A global approach to the cosmological problem is attempted deriving the allowed combinations for the fundamental parameters (Ho, qo , λ o ), and the redshift of galaxy formation (zf). Metallicity distribution of elliptical galaxies through a quantitative calibration of the Magnesium Mg2 index Alberto Buzzoni, Giorgio Gariboldi, Luciano Mantegazza Published online by Cambridge University Press: 07 August 2017, p. 397 In this contribution we give a progress report for our work intending to approach in a more complete way the problem of a quantitative calibration of the Mg2 index (Faber et al. 1977, A.J., 82, 941; Buzzoni, Gariboldi & Mantegazza 1991 submitted to A.J.). We have first investigated empirically the relationship between the index and the fundamental parameters for a wide set of Galactic standard stars deriving a detailed calibration for dwarfs and giants. This allowed to build up synthetic models for stellar populations exploring Mg2 in the galaxies with varying overall distinctive parameters of the populations. Evolutionary status of galaxy population in clusters at intermediate redshift (z ~ 0.2) Emilio Molinari, Dolores Pedrana, Massimo Banzi, Alberto Buzzoni, Guido Chincarini In this contribution we present observations for a set of three clusters of galaxies at intermediate redshift (z ~ 0.2) selected from the revised Abell catalog (Abell et al. 1989, Ap.J. Suppl., 70, 1) and observed in the Gunn photometric system at the ESO 3.6m telescope in La Silla (Chile) during various runs between 1986 and 1990. This is part of a more extended project addressing the study of the distant clusters, as outlined in more detail in Molinari et al. (1990, M.N.R.A.S., 246, 576). High Precision Photometry of 10,000 Stars in M 3 R. Buonanno, A. Buzzoni, C. E. Corsi, F. Fusi Pecci, A. R. Sandage A new color-magnitude diagram for M 3 is presented. 10,000 stars have been measured down to V = 22 with an internal accuracy better than 0.03 mag to get complete and very accurate samples over well defined areas. More than 10,000 stars have been measured down to V = 22 in two different areas. 
In the first, with 3.5 < r < 6.0 arcmin, photometric completeness has been achieved down to V = 21.5 and an algorithm to correct for losses due to unrecoverable crowding and blending has been experimentally computed. In the second, within a square field of 15 × 15 arcmin, completeness has been extended only to V = 18, well below the horizontal branch. Many tests made on the data guarantee an internal photometric accuracy better than 0.03 mag at V = 21. Therefore, both the total population of each branch and the relative star-number ratios are "bona fide" representatives of the corresponding evolutionary time-scales. Here we simply present: 1) the color-magnitude diagram (see Fig. 1) obtained from the reduction of a wide collection of Palomar plates; 2) a table which presents the contribution of the various branches to the integrated cluster light; 3) the preliminary indication that, within the annulus we have considered, the blue stragglers seem to be slightly less centrally concentrated than the subgiants in the same magnitude interval.
Educational inequalities in premature mortality by region in the Belgian population in the 2000s Françoise Renard ORCID: orcid.org/0000-0002-7184-97791, Brecht Devleesschauwer1, Sylvie Gadeyne2, Jean Tafforeau1 & Patrick Deboosere2 Archives of Public Health volume 75, Article number: 44 (2017) Cite this article In Belgium, socio-economic inequalities in mortality have long been described at country-level. As Belgium is a federal state with many responsibilities in health policies being transferred to the regional levels, regional breakdown of health indicators is becoming increasingly relevant for policy-makers, as a tool for planning and evaluation. We analyzed the educational disparities by region for all-cause and cause-specific premature mortality in the Belgian population. Residents with Belgian nationality at birth registered in the census 2001 aged 25–64 were included, and followed up for 10 years though a linkage with the cause-of-death database. The role of 3 socio-economic variables (education, employment and housing) in explaining the regional mortality difference was explored through a Poisson regression. Age-standardised mortality rates (ASMRs) by educational level (EL), rate differences (RD), rate ratios (RR), and population attributable fractions (PAF) were computed in the 3 regions of Belgium and compared with pairwise regional ratios. The global PAFs were also decomposed into the main causes of death. Regional health gaps are observed within each EL, with ASMRs in Brussels and Wallonia exceeding those of Flanders by about 50% in males and 40% in females among Belgian. Individual SE variables only explained up to half of the regional differences. Educational inequalities were also larger in Brussels and Wallonia than in Flanders, with RDs ratios reaching 1.8 and 1.6 for Brussels versus Flanders, and Wallonia versus Flanders respectively; regional ratios in relative inequalities (RRs and PAFs) were smaller. This pattern was observed for all-cause and most specific causes of premature mortality. Ranking the cause-specific PAFs revealed a higher health impact of inequalities in causes combining high mortality rate and relative inequality, with lung cancer and ischemic heart disease on top for all regions and both sexes. The ranking showed few regional differences. For the first time in Belgium, educational inequalities were studied by region. Among the Belgian, educational inequalities were higher in Brussels, followed by Wallonia and Flanders. The region-specific PAF decomposition, leading to a ranking of causes according to their population-level impact on overall inequality, is useful for regional policy-making processes. Socio-economic (SE) inequalities in health are a well-known fact, and reducing them is a public health priority [1,2,3] requiring careful monitoring [4]. In Belgium, SE inequalities in mortality, life and health expectancies have mostly been studied at country-level [5,6,7,8]. In a previous study [9], we focused on educational inequalities in all-cause and cause-specific premature mortality and their evolutions from the 1990s to the 2000s in Belgium as a whole. However, country averages for health outcomes hide important within-country variations. 
As Belgium is a federal state with more and more responsibilities in health policies transferred to the regional level – i.e., the Flemish, the Brussels Capital, and the Walloon Region – the regional breakdown of health indicators is highly relevant for policy-makers, not only as a possible mirror of different risk factor patterns but also as a tool for planning and evaluation. While geographical disparities in mortality have long been abundantly studied [10,11,12,13,14,15,16,17], up to now only two studies have analyzed both the regional and SE disparities in mortality, and then with the aim of explaining the geographical pattern of all-cause mortality [18, 19]. Building on our previous studies [9, 10, 17] that described the regional and educational premature mortality gaps in the 2000s, this paper aims to assess regional disparities in all-cause and cause-specific premature mortality in the Belgian population. The aim of this study is threefold. First, we want to estimate what proportion of the regional disparities in premature mortality in the 2000s can be explained by individual socio-economic characteristics. Second, we aim to compare the relative, absolute and population-level educational inequalities in mortality by region, and third, to estimate and rank, in each region, the potential population-level impact that would result from reducing inequalities in a selection of twelve important avoidable causes of death. The identification of the causes of death with the largest population-level inequalities is of particular public health relevance, as this information can help to set priorities in policies tackling health inequalities. To achieve this ranking, we perform a decomposition of the population attributable fraction (PAF) by cause of death. The data used in the current study were obtained by linking a) the 2001 Belgian population census, b) the National Population Register, and c) the causes-of-death database for the period 2001–2011 [5, 20]. Although a more recent census was held in 2011, it was not used because the database linkages had not yet been performed. The study population comprised all persons aged 25–64 at census, officially residing in Belgium, and having the Belgian nationality at birth (N = 4,556,830 persons). First generation migrants (operationally defined as not having the Belgian nationality at birth) made up 17.5% of the 2001 census in the age group 25–64, with substantial differences by region (i.e., 9%, 50% and 22% in Flanders, Brussels and Wallonia, respectively). First generation migrants (henceforth referred to as "Migrants") experience lower mortality rates than people with Belgian nationality at birth (henceforth referred to as "Belgians") [21, 22], for each educational level and each region (Additional file 1: Table S1). In this study, we chose to focus on the Belgian population, which has been exposed since birth to the life conditions and health policies prevailing in Belgium. Furthermore, migrants represent a highly inhomogeneous population with various ethnic and socio-economic backgrounds and would therefore deserve a careful study by origin and ethnicity. The follow-up consisted of a 10-year period after the census (Table 1). Table 1 Number of persons, of person-years of follow up and of deaths included in the follow up by region.
Distribution by age, education level housing score and employment status, people of Belgian origin aged 25–64 at census, Belgium, follow up 2001–11 This study included people aged 25–64 at census who were followed up for 10 years, except for the age group 60–64 for which the follow up time was censored at 70 years. Mortality before the age of 70 was defined as "premature mortality", for the sake of simplicity also referred to as "mortality" in the manuscript. To assess the contribution of the individual socioeconomic (SE) status to regional mortality differences, we used a set of three SE variables: educational level (EL), employment status and housing status. EL was categorized according to the highest obtained degree using the International Standard Classification of Education (ISCED), version 1997 [23]. Three categories were created: lower secondary education or less (ISCED 0–2; "low"), higher secondary education (ISCED 3–4; "mid") and tertiary education (ISCED 5–6; "high"). The employment status was classified into 4 classes: "working, including students", "unemployed" (designating people getting unemployment allocations), "retired", and "not working, other". This last group, although quite heterogeneous with respect to the reasons for non-working, contains, in men, a large proportion of people with health problems, which probably reflect a health selection in the labor market (people in good health are more prone to work). The housing status was based on information regarding tenure status and housing quality. This variable consisted of six categories (low-, mid- and high-comfort tenants and low-, mid- and high comfort owners) and was measured at the household level [24, 25]. It can be considered as a good proxy for wealth. To compare SE inequalities by region, we focused on educational level only. Educational attainment is a relatively stable measure of SE position, as usually achieved early in adulthood, and is usually of rather good quality [26]. Causes of death were classified according to the International Classification of Diseases (ICD), version 10 [27]. All-cause premature mortality was divided into two categories (avoidable and non-avoidable mortality) according to the recent UK Office of Statistics definition of avoidable mortality [28] also adopted by Eurostat [29]. In addition, we divided the total premature mortality in four broad groups of causes of deaths (circulatory diseases, cancers, other natural causes of deaths and external causes), and further analyzed 12 avoidable causes of death with a high burden in Belgian society (lung cancer, lip, oral cavity and pharynx cancer, colorectal cancer, liver cancer, ischemic heart diseases (IHD), cerebrovascular diseases that were grouped with hypertension (HTA) as usually recommended [30, 31], alcohol related deaths, diabetes, chronic obstructive pulmonary diseases (COPD), suicide, and transport accidents). In women, breast cancer was analyzed as well. The corresponding ICD10 codes are shown in Additional file 1: Table S2. Analyses were performed separately for men and women as they have very different mortality levels. Premature mortality rates by region and individual socio-economic characteristics We first computed in each region age-standardized mortality rates (ASMR) by EL, sex and cause of death, using the European population as reference population [32]. 
Rates were expressed per 100,000 person-years (PYs); the PYs were calculated as the sum across all people included in the census cohort, of individual times between census date and either date of death, emigration date or last day of the study. People having emigrated were censored at emigration date. To take into account the ageing process during follow-up, age was introduced as a time-varying variable. Standard errors on rates were computed in Stata assuming a binomial distribution. We first focused on regional rates, comparing EL-specific all-cause ASMRs between each pair (i, j) of regions (i.e., Brussels versus Flanders, Wallonia versus Flanders and Wallonia versus Brussels). For each EL (x = 1, … , 4), we calculated between-region rate differences, \( \left( ASM{R}_{i,x}- ASM{R}_{j,x}\right) \), as well as between-region rate excesses, \( \left(\left[{ASMR}_{i,x}/{ASMR}_{j,x}\right]-1\right) \), and used a z-test to assess statistical significance [33, 34]. In order to assess together the regional pattern and the influence of individual SE variables on this pattern, we fitted three different Poisson regression models (Table 2). In the first model, mortality was simply regressed against region, controlling for current age. In three variants of an intermediate model (models 2a, 2b, and 2c), each SE variable – EL, employment status and housing status – was added separately. As all three SE variables revealed to have a significant effect on overall mortality, they were introduced simultaneously in a multivariable model (model 3). This third model allowed assessing to which extent the individual SE level could explain the regional gaps in mortality. Cases with missing information were introduced as specific classes in the analyses. Table 2 Premature mortality: Rate Ratios and p-values for different Poisson regression models including age at inclusion in the cohort, region of residence, and SE variables (education level, employment status, housing score). People of Belgian nationality at birth aged 25–64 at census, Belgium, follow up 2001–11 Calculation of educational inequalities by region Measuring inequalities is a complex issue [35,36,37]. Indeed, health inequality can be measured from several perspectives [38] – for instance a simple comparison of two social groups versus a population-wide perspective, or the measurement of absolute versus relative inequalities. As each inequality measure captures only a partial aspect of inequality, it is recommended to use a set of complementary indices [36, 38, 39], preferably including simple absolute and relative pairwise measures along with measures summarizing inequalities across the whole population. Within each region, three inequality indices were calculated for all-cause mortality, for each broad cause group and for each selected cause of premature mortality: namely, two pairwise inequality indices, i.e., the low-versus-high EL absolute rate difference (RD), calculated as ASMRlow EL – ASMRHigh EL and rate ratio (RR), calculated as ASMRlow EL / ASMRHigh EL, and one composite measure, i.e., the population attributable fraction (PAF), measuring the population impact of inequality on mortality. The global PAF indicates which fraction of all deaths would have been avoided (in people aged 25–64 at baseline) if the mortality of the total population were equal to the one observed in the highest EL. 
The global PAF is calculated as: $$ \mathrm{PAF} = \frac{\mathrm{ASMR}_{\text{total population}}-\mathrm{ASMR}_{\text{highest EL}}}{\mathrm{ASMR}_{\text{total population}}} $$ We also estimated the specific contributions of the 12 main avoidable causes of death to the PAF, by calculating the cause-specific PAFs. This measure indicates which fraction of all deaths would have been avoided (in people aged 25–64 at baseline) if the mortality from this cause in the whole population were equal to the one observed in the highest EL. The cause-specific PAFs are calculated as: $$ \mathrm{PAF}_{\text{cause}} = \frac{\mathrm{ASMR}_{\text{cause, total population}}-\mathrm{ASMR}_{\text{cause, highest EL}}}{\mathrm{ASMR}_{\text{all causes, total population}}} $$ Standard errors on the PAFs were calculated by a Monte Carlo simulation approach [40]. Comparison of the magnitude of inequalities between the regions The relative differences in region-specific educational RDs, RRs and PAFs were measured through three ratios (i.e. one for Brussels-Flanders, for Wallonia-Flanders and for Wallonia-Brussels), each calculated for all-cause and for cause-specific mortality. For instance, the Brussels-Flanders RD ratio was computed by dividing the educational RD in Brussels by the educational RD in Flanders; the Brussels-Flanders RR ratio was computed by dividing the educational RR in Brussels by the educational RR in Flanders. The statistical significance of these comparisons was calculated according to Altman's method [41]. Analyses were performed in Stata version 14, and in R version 3.4.0. Basic characteristics of the population by region Table 1 reveals clear regional differences with respect to individual SE features in our study population. Brussels is characterized by a higher proportion of higher educated individuals compared with the two other regions, whereas Flanders shows a higher housing score than Wallonia and Brussels (with a smaller proportion of owners in Brussels) and a lower rate of unemployment. It is also noteworthy that the proportion of missing values for the SE variables is lowest in Flanders. Premature mortality rates by region and socio-economic characteristics Figure 1 shows the total and the EL-specific ASMRs by region for the total Belgian population (detailed numbers in Additional file 1: Table S1, middle part). In the Walloon region, the ASMR was 54% higher than in Flanders among males and 40% higher among females, while in Brussels the ASMR was respectively 52% and 48% higher compared with Flanders. Similarly, all EL-specific ASMRs were higher in Wallonia and Brussels than in Flanders both among men and women, particularly in the lowest EL. In men, ASMRs were higher in Brussels than in Wallonia for the low and mid ELs, but lower than in the Walloon Region for the highest EL. In women, the ASMRs were higher in Brussels than in Wallonia for all ELs. Cause-specific ASMRs by region and regional differences are displayed in Additional file 1: Table S3. The respective effects of three individual SE variables on the regional effect were explored through several Poisson models and summarized in Table 2.
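As a schematic illustration of this modelling sequence (the paper reports using Stata 14 and R 3.4.0), the R sketch below fits the three types of rate models to person-year data with a log offset; the data frame d and its variable names are hypothetical assumptions, not the authors' code, and the sketch ignores the time-varying treatment of age used in the actual analysis.

# Schematic only: 'd' is a hypothetical data frame with one record per person
# (deaths, person_years, age_group, region, education, employment, housing);
# the real analysis also treated age as time-varying, which is ignored here.
m1  <- glm(deaths ~ region + age_group, offset = log(person_years),
           family = poisson(link = "log"), data = d)            # model 1: region + age
m2a <- update(m1, . ~ . + education)                            # model 2a: + educational level
m2b <- update(m1, . ~ . + employment)                           # model 2b: + employment status
m2c <- update(m1, . ~ . + housing)                              # model 2c: + housing status
m3  <- update(m1, . ~ . + education + employment + housing)     # model 3: all three SE variables
exp(cbind(RR = coef(m3), confint.default(m3)))                  # mortality rate ratios with Wald 95% CIs

Comparing the region coefficients of m1 with those of m2a–m2c and m3 mimics the assessment, reported next, of how much each SE variable accounts for the regional mortality gap.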
Compared to the first basic model, containing region and age only, models 2a, 2b and 2c revealed that all three SE variables had an important and statistically significant effect on the mortality rates (RR between 1.5 and 3.2). In addition, the SE variables had a significant impact on regional mortality differences. This confounding effect varied by SE dimension and gender: 1) after adjusting for education (model 2a), the RR for Brussels versus Flanders increased slightly (which was expected given the higher proportion of the higher educated in Brussels); 2) adjusting for employment status (model 2b) reduced regional differences in males both in Brussels and the Walloon region compared with Flanders (the RR decreased from 1.54 to 1.40 and from 1.53 to 1.40 respectively), but slightly increased the regional RRs in women; 3) introducing the housing score (model 2c) substantially decreased (up to 50% in Brussels) the regional RRs both among men and women. Model 3 showed the combined effect of all three SE variables on regional differences, that all remained significant. Age-standardized all-cause premature mortality rates by region and by educational levels. People of Belgian origin aged 25–64 at census, Belgium, follow up 2001–11 Inequality indices by region Educational rate differences (RDs) and rate ratios (RRs) by region Absolute inequalities (RDs): Tables 3 and 4 (left part) show the pairwise educational RDs by region, for all-cause mortality, broad classes of causes and cause-specific premature mortality for men and women. In men, the all-cause and most cause-specific educational RDs were very important in all three regions; in women, the RDs were somewhat smaller than in men, given the lower ASMRs. Table 3 All cause and cause-specific premature mortality by region: low and mid versus high educational level rate differences and rate ratio, males of Belgian origin aged 25–64 at census, Belgium, follow up 2001–11 Table 4 All-cause and cause-specific premature mortality by region: low and mid versus high education level Rate Differences and Rate Ratio, females of Belgian origin aged 25–64 at census, Belgium, follow up 2001–11 In both sexes, the RDs differed considerably between regions. The highest RDs were observed in Brussels, followed by Wallonia and Flanders. All-cause mortality RDs were equal to 464, 395 and 252 in males and 196, 164 and 107 in females, in Brussels, Wallonia and Flanders, respectively. The ratios between the region-specific educational RDs for all-cause mortality were almost identical for men and women: 1.84, 1.57 and 0.85 in men and 1.83, 1.52 and 0.83 in women for Brussels-Flanders, Wallonia-Flanders and Wallonia-Brussels respectively. In cause-specific mortality, educational RDs were most pronounced in Brussels as well. Particularly, the Brussels-Flanders ratios were elevated with RDs higher than 3 for alcohol-related deaths, liver cancer, diabetes, mental and neurological diseases in men. For most of these causes, the mid versus high Brussels-Flanders RD ratios were very high too. The Wallonia-Flanders RDs ratios were slightly smaller than the Brussels-Flanders RDs ratios; and most of the Brussels-Wallonia RDs ratios were not statistically significant. A notable exception to this general picture was the RDs of transport accidents mortality, a very rare cause of death in Brussels. For prostate cancer, the Brussels versus Flanders RD ratio was reversed but not significant. Furthermore the RDs were slightly higher in Wallonia than in Brussels for COPD. 
In women, the cause-specific regional differences in RDs exceeded 3 also for alcohol-related deaths (Brussels versus Flanders and Wallonia versus Flanders) and lip-oral cavity-pharynx cancers. Relative inequalities (RRs): the ratios between region-specific educational RRs (Tables 3 and 4, right part) were smaller than the above described ratios between educational RDs: for the all-cause mortality and for Brussels-Flanders, Wallonia-Flanders and Wallonia-Brussels they are respectively equal to 1.15, 1.04 and 0.91 in men, and 1.12, 1.07 and 0.96 in women. The Brussels-Flanders RR ratios in men were higher for cancers (highest ratios observed for lip-oral cavity and pharynx and liver cancers) and circulatory diseases than for other natural deaths, with however high RRs ratios for diabetes, mental and neurological diseases and alcohol-related deaths. The Wallonia-Flanders ratios were more moderate, exceeding 1.3 only for COPD. In women, most cause-specific RR-ratios comparing regions were not significant. Population attributable fractions (PAF) and their decomposition into specific causes by region In males, the fraction of all deaths that would have been avoided if the mortality of the total population were equal to the one observed in the highest EL (the total PAF) was respectively 13% and 11% higher in Brussels and Wallonia than in Flanders. In females, the PAFs of Brussels and Wallonia exceeded the one of Flanders with respectively 14 and 20% (Table 5). Table 5 Decomposition of the total population attributable fraction in specific causes of death by region in Belgians aged 25–64 at census 2001, 10 years follow up The analysis of the cause-specific contribution to the PAF by region (Table 5), revealed that among men the contribution of lung cancer to educational inequalities was higher in Flanders (7.0%) than in the other regions (5.3% and 6.0% respectively in Brussels and Wallonia), while that of COPD, alcohol-related deaths, diabetes and mental/neurological diseases was higher in Wallonia and Brussels than in Flanders. In women, the specific contribution to the PAFs of COPD, alcohol-related deaths and mental-neurological diseases was higher in Wallonia than in Flanders, while that of alcohol-related mortality and lip, oral cavity and pharynx cancers were higher in Brussels than in Flanders. Ranking of the causes of death based on their impact on inequalities The ranking of the specific contribution of each detailed cause, by sex and region, is shown in Fig. 2. In men, the ranking of the main contributors was quite similar between regions, with lung cancers and ischemic heart diseases at the top. Alcohol-related deaths and mental/neurological diseases contributed more to the PAFs in Wallonia and Brussels than in Flanders, while transport accidents contributed much less to the PAF in Brussels than in the other regions. Cause-specific population attributable fractions by gender and region. People of Belgian origin aged 25–64 at census, Belgium, follow up 2001–11 In women, ischemic heart diseases ranked first in Wallonia, followed by lung cancer. In Flanders, they ranked equally. In Brussels, lung cancer ranked as first contributor to the PAF. Colorectal cancer ranked lower in Wallonia and Brussels than in Flanders. During the 2000s, Belgium was characterized by a high premature mortality rate compared to the average of the EU15 countries [17]. 
As overall country-level rates can hide important disparities, this paper examined the regional and educational health gaps in premature mortality rates, compared the educational inequalities across the Belgian regions and decomposed the inequality population-level impact into its main causes. The study focused on people of Belgian nationality. Summary of previous work Previous research has documented important mortality differences at the regional or district level in Belgium [10,11,12,13,14,15,16,17] with consistently higher rates in Wallonia (especially in the poorest districts of the Hainaut province) and Brussels as compared to Flanders (with the lowest rates in the eastern districts of the province of Limburg) since World War 2. Until the turn of the century, the link between individual SE characteristics and mortality could only be investigated through ecological studies because of the lack of appropriate data at the individual level. The constitution, in the early 2000s, of a "National Mortality Database" [6, 20], aiming to perform a population-based mortality follow-up, finally allowed for the study of SE mortality differentials in Belgium. Several studies, first based on the 1991 census and later on the 2001 census, assessed the magnitude of the SE health gap (and its change over time) in terms of differences in life expectancy [5], health expectancy [7, 8], all-cause and cause-specific mortality [6, 9] in Belgium. Results from individual studies about inequalities are difficult to compare with each other because they generally present variations in the design of the follow up, the age limits and the standard population. Moreover, by focusing on the Belgian population, our results are not comparable with studies that also included migrant populations. However, a recent European study included Belgium in cross-country comparisons of inequalities in mortality [1, 2]. This study used 2 years of follow up and focused on people aged 30–74 at entry, which is 10 years older than in our study. Major inequalities were revealed in the Eastern European countries (with RDs in men exceeding 1500 per 100,000 PY and RRs situated between 2.6 and 3.3), rather large inequalities in Northern Europe (with RDs in men between 500 and 700 per 100,000 PY and RRs around 2), an inhomogeneous pattern in Western Europe (RDs in men between 300 and 600 per 100,000 PY, RRs between 1.6 and 2.4), and lower inequalities in Southern Europe (RDs in men between 250 and 400 per 100,000 PY and RRs around 1.6). As compared to the other European countries, inequalities in Belgium could be qualified as moderately high, with a RD and RR in men respectively equal to 385 per 100,000 PY and 1.86, while the mean RD for all the countries was 636 (range: 234–1696) and the mean RR was 2.1 (range: 1.51–3.26). Only few studies jointly analyzed the effects of place of residence (region, province, district) and SE characteristics. Deboosere et al. [17] examined the district-level patterns of all-cause mortality in the 1990s with and without adjustment for individual SE variables and concluded that individual SE characteristics accounted for half of the mortality risk excess in the poorer districts of the old industrial belt of the Walloon region. Van Hemelrijck et al. [19] found a weak effect of area-level unemployment and percentage of laborers on the sub-district mortality RR, in addition to the effect of individual SE characteristics. 
However, the analysis of inequalities by region had not yet been performed by specific cause of death. Large premature mortality excesses were observed in Brussels and Wallonia as compared to Flanders, which is in line with previous geographical mortality studies in Belgium. Our focus on people of Belgian origin increased the magnitude of those mortality excesses even further compared to studies including all residents, which is expected given both the mortality advantage in first generation migrants [21, 22, 42] and their higher representation in Wallonia and particularly in Brussels. In line with previous findings of Deboosere et al. in the 1990s [18], our analysis showed significant regional differences for premature mortality at the global level and within each level of education. The results of the Poisson regression also revealed that the educational distributions do not explain the regional differences in mortality at all. The level of employment in men accounts for a small part of the regional differences, while up to half of the differences can be explained by the housing score, which is a proxy for wealth. With respect to inequalities in all-cause mortality, the largest inequalities were observed in Brussels and the lowest in Flanders, independently of the three inequality indices used. Large regional differences in absolute inequalities (RDs) were observed, but the regional differences in relative inequalities (RRs and PAFs) were weak. This pattern was observed for all-cause as well as for most specific causes of premature mortality. The ranking of the cause-specific PAFs revealed a higher health impact of inequalities in causes combining a high mortality rate and a high relative inequality, with lung cancer and ischemic heart disease on top for all regions and both sexes. Suicide, COPD and cerebrovascular diseases also ranked high. The ranking of the contribution of the specific causes of death showed few differences between the regions. Interpretation and policy implication Our results clearly show that the mortality excess in Brussels and Wallonia still persists. While this health gap appears to be difficult to eliminate in the context of the unequal economic backgrounds of the different regions, this study, as well as two previous ones, showed that individual SE variables accounted for only half of the regional differences. The distribution of poverty across the regions (approximated by the housing score) and, to a lesser extent, employment status (in men only) are the main individual SE factors explaining that half of the regional difference, while educational level accounts for none of it. Up to now, the residual regional effect on mortality has not been elucidated; it could involve several factors, such as other macroeconomic variables that could not be captured by the existing data, cultural habits leading to less healthy lifestyles in some regions, effects of indoor and outdoor pollution, and/or differences in health policies or health care management. In a multilevel analysis, Van Hemelrijck et al. [19] showed small additional effects of two aggregated variables, the level of unemployment and the percentage of laborers in the district. This means that an important unexplained residual regional effect remains after adjusting for individual and some macro SE variables. Further studies should try to disentangle the respective roles of those other risk factors in order to support policies oriented towards reducing the health gap.
Larger inequalities were observed in the regions that also had higher mortality rates, particularly in Brussels. The magnitude of the regional differences in inequality differs when it is expressed in absolute (RD) or relative (RR and PAF) terms, which is quite expected since the ASMR varies greatly between the regions. Indeed, when a region has high mortality rates, larger RDs will be observed for the same RR between ELs. In this study, however, we also observed higher relative inequalities in the regions with higher mortality rates, which is less expected. Indeed, since high RRs are more easily obtained when the denominator is small (low mortality rates), a higher RR in a region that also has high rates, as compared to a region with lower rates, can only be observed when inequalities are strong. The more unfavorable situation of Belgians living in Brussels, with respect to both mortality rates and inequality as compared to Flanders and even to Wallonia, should be interpreted in the light of the particular situation of this region. Indeed, in contrast to the other regions, the Brussels region is actually a big city without rural/suburban areas. There is a well-known 'town attraction', whereby people with social problems tend to move towards big cities in search of solutions, leading to some concentration of poverty. The PAF decomposition into specific causes is of major interest for policy-making since it can help set priorities by addressing inequalities in the causes with the highest population-level impact. Causes of death combining high mortality rates and high inequalities rank highest. It is important to note that the causes ranking highest for the population-level inequality impact are related to major risk factors and to some extent amenable to health promotion measures. The ranking varies little by region, with the exception of a higher ranking of the PAF for alcohol-related deaths in Brussels, together with a lower PAF for transport accidents, which is expected given their lower mortality rate in Brussels. Policies addressing inequalities in smoking, obesity and cholesterol level, as well as the medical management of ischemic and cerebrovascular diseases, can be recommended in all three regions; ideally, a comprehensive "national and regional" policy should be implemented. Particular attention should be paid to the health of low-educated people in Brussels. For the first time, inequalities were comprehensively measured and compared at the regional level. In the Belgian situation, where many public health responsibilities are transferred to the regional level, the calculation of inequality indices for each region is highly relevant to support regional policies. Inequalities were analyzed both for all-cause and for cause-specific premature mortality, using various inequality indices, namely the absolute RD, the relative RR and the PAF. By taking into account the EL distribution, the PAF is probably the best population-based measure to estimate the impact of inequalities on total population health. It is also the first time that the cause-specific mortality inequalities have been expressed by region in terms of a contribution to the total PAF. By combining national census data with mortality data, the study covered practically the complete Belgian population. The large size of the cohort ensures good statistical power. It is important to note that the conclusions only apply to the Belgian population. Migrants registered with another nationality at birth were not included.
As they represent a substantial fraction of the population, especially in Brussels, and also experience lower mortality rates than Belgians, our findings should not be extrapolated to the whole population. The choice of focusing on Belgian people only was made in the first instance to keep a certain homogeneity in the population under study, since migrants have various backgrounds and socio-economic levels. Secondly, as the proportion of migrants varies between regions, nationality may represent a confounding factor in regional comparisons. Health inequalities should also be specifically studied in migrants, taking into account their ethnic specificities. By definition, people living in Belgium without being registered were also not part of the studied population. The percentage of missing values for the EL indicator was rather low, with some disparities between regions. We did not impute the missing values as we cannot expect them to be missing at random. Instead, we just treated them as a separate category. Since the study population included people of Belgian origin only, language was probably not a barrier to answering the census questionnaire (as it could be for migrants). People hospitalized or severely sick at the time of the census were very likely not to have answered the questionnaire. In addition, some homeless people could not receive the mailed questionnaire; this led to missing values for all SE variables among the sickest and most deprived people. In the Poisson models estimating RRs for all-cause mortality, the cases with missing SES were treated as a separate category for all three SE variables; as expected, those cases appeared to experience higher mortality rates than all other categories. This poorer health status in non-respondents is in line with previous findings related to the 1991 census [5]. In the subsequent analysis, we used as pairwise measures the low-versus-high and mid-versus-high RR and RD, and did not make use of the missing-value group; this is likely to lead to an underestimation of the inequalities (conservative bias). Some inaccuracy in cause-of-death codification exists, resulting in 8% of ill-defined codes in the considered age group [17] and in an underestimation of cause-specific rates. This could possibly be more the case in lower SE classes (if less effort is made to obtain an accurate diagnosis for less educated than for more educated people), resulting in underestimated cause-specific inequalities (conservative effect). Age limits for 'premature mortality' vary in the literature. Our study covered people aged 25–64 at baseline, allowing them to reach a maximum of 69 years at the end of the follow-up period. Our findings cannot be generalized to other age groups, and any comparison with other studies has to take the age range into consideration. We chose the old European population as the standard population – even if a new one is now proposed by Eurostat – since this had been used in the 2014 and 2016 European comparisons [43, 44]. Of course, the standardized rates are sensitive to the standard population, but the impact on the inequalities will most probably not be substantial.
Explanations for the residual regional effect should be searched in macroeconomic characteristics, differences in lifestyles and in inside/outside air pollution, differences in health policies and in health care management. Regional educational inequalities in premature mortality were studied for the first time and revealed higher absolute inequalities in Brussels and Wallonia compared to Flanders, as well as a weak excess in relative inequalities. The PAF decomposition in specific causes and its ranking according to the highest population-level impact of the inequalities in mortality is important for the policy-making process, since it can help set priorities by addressing inequalities in the causes with the highest population level impact. Causes ranking highest for the population-level inequality impact are lung cancer, ischemic heart disease, suicide, cerebrovascular disease, most of which are related to major risk factors like alcohol or tobacco consumption, and in some extent amenable to health promotion measures. The rankings varied little by region, with the exception of a higher ranking of the PAF for alcohol-related deaths in Brussels, together with a lower PAF for transport accidents. WHO Regional Office for Europe. Health 21: the health for all policy framework for the WHO European region. Copenhagen: World Health Organization; 1999. Marmot M, Friel S, Bell R, Houweling TA, Taylor S. Closing the gap in a generation: health equity through action on the social determinants of health. Lancet. 2008;372(9650):1661–9. Executive Agency for Health and Consumer. Second Programme of Community Action in the Field of Health 2008–2013. European Commission; 2007. Braveman PA. Monitoring equity in health and healthcare: a conceptual framework. J Health Popul Nutr. 2003;21(3):181–92. Deboosere P, Gadeyne S, Van Oyen H. The 1991–2004 evolution in life expectancy by educational level in Belgium based on linked census and population register data. Eur J Popul. 2008;25(2):175–96. Gadeyne S. The ultimate inequality : socio-economic differences in all-cause and cause-specific mortality in Belgium on the first part of the 1990s. Centrum voor Bevolking en Gezinsstudie: Brussels; 2006. Bossuyt N, Gadeyne S, Deboosere P, Van Oyen H. Socio-economic inequalities in health expectancy in Belgium. Public Health. 2004;118(1):3–10. Van Oyen H, Charafeddine R, Deboosere P, Cox B, Lorant V, Nusselder W, et al. Contribution of mortality and disability to the secular trend in health inequality at the turn of century in Belgium. Eur J Publ Health. 2011;21(6):781–7. Renard F, Gadeyne S, Devleesschauwer B, Tafforeau J, Deboosere P. Trends in educational inequalities in premature mortality in Belgium between the 1990s and the 2000s: the contribution of specific causes of death. J Epidemiol Community Health. 2017;71(4):371–80. Renard F, Deboosere P, Tafforeau J. Mapping the cause-specific premature mortality reveals large between-districts disparity in Belgium, 2003–2009. Archives Public Health 2015;73(1):13-doi:10.1186/s13690-015-0060-5. Van Houte-Minet M, Wunsch G. La mortalité Masculine aux âges adultes: causes et déterminants régionaux. Popul Famille. 1978;44(2):19–48. Van Houte-Minet M, Wunsch G. La mortalité masculine aux âges adultes, un essai d'analyse régionale. Popul Famille. 1978;43:37–68. Dooghe D. Gedifferentieerd sterftebeeld: toepassing U.Yule method. Popul Famille. 1965;6:211–29. Humblet P, Lagasse R, Moens G, Van de Voorde H, Wollast E. 
Atlas de la mortalité évitable en Belgique - Atlas van de vermijdbare strefte in België (1974–1978). School voor Maatschappelijke Gezondheidszorg: Brussels; 1986. Lagasse R, Humblet PC, Hooft P, Van de Voorde H, Wollast E. Atlas of avoidable mortality in Belgium 1980–1984. Arch Public Health. 1992;50:1–97. Leveque A, Humblet PC, Lagasse R. Atlas of avoidable mortality in Belgium 1985–1989. Arch Public Health. 1999;57:1–87. Renard F, Tafforeau J, Deboosere P. Premature mortality in Belgium in 1993–2009: leading causes, regional differences and 15 years changes. Arch Pub Health. 2014;72(1):34. doi:10.1186/2049-3258-72-34. Deboosere P, Gadeyne S. Can regional patterns of mortality in Belgium e explained by individual socio-economic characteristics? Reflets et Perspectives de la vie économique. 2002;XII(4):87–103. Van Hemelrijck WM, Willaert D, Gadeyne S. The geographic pattern of Belgian mortality: can socio-economic characteristics explain area differences? Arch Public Health. 2016;74:22. Deboosere P, Gadeyne S. De Nationale databank Mortaliteit. Aanmaak van een databank voor onderzoek van differentiële sterfte naar socio-economische status en leefvorm. Brussel: Steunpunt Demografie, Vakgroep Sociaal Onderzoek, Vrije Universiteit Brussel; 1999. Report No.: 1999–7 Deboosere P, Gadeyne S. La sous-mortalité des immigrés adultes en Belgique: une réalité attestée par les recensements et les registres. Population. 2005;60(5–6):765–812. Anson J. The migrants mortality advantage: a 70 months follow up of the Brussels population. Eur J Popul. 2004;20:191–218. UNESCO. International standard Classification of education, ISCED 1997. 1997. Vandenheede H, Lammens L, Deboosere P, Gadeyne S, De SM. Ethnic differences in diabetes-related mortality in the Brussels-capital region (2001–05): the role of socioeconomic position. Int J Public Health. 2011;56(5):533–9. Deboosere P, Willaert D. Codeboek algemene socio-economic enquête 2001, Working paper: Steunpunt Demografie, Vakgroep Sociaal Ondezoek, Vrije Universiteir, Brussel; 2004. Galobardes B, Shaw M, Lawlor DA, Lynch JW, Davey SG. Indicators of socioeconomic position (part 1). J Epidemiol Community Health. 2006;60(1):7–12. WHO. International statistical Classification of diseases and related health problems: 10th revision. 1st ed. Geneva: World Health Organisation; 1994. Office for National Statistics, UK. Definition of Avoidable Mortality. Final avoidability causes list; http://www.ons.gov.uk/ons/about-ons/get-involved/consultations/archived-consultations/2011/definitions-of-avoidable-mortality/index.html. Office for National Statistics, UK. 2012. EUROSTAT. Amenable and preventable deaths Statistics; data from may 2016. http://ec.europa.eu/eurostat/statistics-explained/index.php/Amenable_and_preventable_deaths_statistics. 2016. Holland WW, Fitzgerald AP, Hildrey SJ, Philips SJ. Heaven can wait. J Public Health Med. 1994;16(3):321–30. Humblet PC, Lagasse R, Moens GFG, Wollast E, Van de Voorde H. La mortalité évitable en Belgique. Soc Sci Med. 1987;25:485–93. Waterhouse J, Muir CS, Correa P, Powell J. Cancer incidence in five continents. IARC: Lyon; 1976. Armitage P, Berry G. Statistical methods in medical research. Oxford: Blackwell Scientific Publications; 1987. Smith P. Comparison between registries: age-standardized rates. In: Muir C, Waterhouse J, Mack T, Powell J, Whelan S, editors. Cancer incidence in five continents, volume V. Lyon: IARC; 1987. p. 790–5. Harper S, Lynch J. 
Methods for measuring cancer disparities:using data relevant to healthy people 2010 cancer-related objectives. Bethesda: National Cancer Institute: NIH Publication No. 05–5777; 2005. Harper S, King NB, Meersman SC, Reichman ME, Breen N, Lynch J. Implicit value judgments in the measurement of health inequalities. Milbank Q. 2010;88(1):4–29. Speybroeck N, Harper S, de Savigny D, Victora C. Inequalities of health indicators for policy makers: six hints. Int J Public Health. 2012;57(5):855–8. Mackenbach JP, Kunst AE. Measuring the magnitude of socio-economic inequalities in health: an overview of available measures illustrated with two examples from Europe. Soc Sci Med. 1997;44(6):757–71. Wagstaff A, Paci P, van Doorslaer E. On the measurement of inequalities in health. Soc Sci Med. 1991;33(5):545–57. Robert C, Casella G. Monte Carlo statistical methods. New York: Springer Science & Business Media; 2004. Altman DG, Bland JM. Interaction revisited: the difference between two estimates. BMJ. 2003;326(7382):219. Vandenheede H, Willaert D, De GH, Simoens S, Vanroelen C. Mortality in adult immigrants in the 2000s in Belgium: a test of the 'healthy-migrant' and the 'migration-as-rapid-health-transition' hypotheses. Tropical Med Int Health. 2015;20(12):1832–45. Mackenbach JP, Kulhanova I, Menvielle G, Bopp M, Borrell C, Costa G, et al. Trends in inequalities in premature mortality: a study of 3.2 million deaths in 13 European countries. J Epidemiol Community Health. 2015;69(3):207–17. Mackenbach JP, Kulhanova I, Artnik B, Bopp M, Borrell C, Clemens T, et al. Changes in mortality inequalities over two decades: register based study of European countries. BMJ. 2016;353:i1732. The authors thank Statistics Belgium for providing us the census data, the mortality data, and the population data. The authors are grateful to Didier Willaert for the preparation of the data files. A request should be addressed to the Privacy commission to ask and justify the secondary access to the data, and to the Vrije Universiteit Brussels, Faculty Sociology, Interface demography, to receive the data. Department of Public Health and Surveillance, Scientific Institute of Public Health (WIV-ISP), Rue Juliette Wytsmanstraat 14, 1050, Brussels, Belgium Françoise Renard, Brecht Devleesschauwer & Jean Tafforeau Interface Demography, Section Social Research, Vrije Universiteit Brussels, Brussels, Belgium Sylvie Gadeyne & Patrick Deboosere Françoise Renard Brecht Devleesschauwer Sylvie Gadeyne Jean Tafforeau Patrick Deboosere FR and PD designed the protocol and led the project. PD collected the necessary data. FR and BD performed the statistical analyses. All authors contributed to the interpretation of results. FR wrote the first draft, with all authors providing critical comments. All authors read and approved the final manuscript. Correspondence to Françoise Renard. Statistical Supervisory Committee of the Commission for the Protection of Privacy: "Beraadslaging STAT nr 20/2014 van 16 september 2014". All authors read and approved the final manuscript. Additional file 1: Table S1a. Age-adjusted all-cause premature mortality rates (ASMR) in males by educational level, region and origin, Belgium 2000s; people aged 25–64 at census, 10 years follow up. Table S1b. Age-adjusted premature mortality rates (ASMR) in females by educational level, region and origin, Belgium 2000s; people aged 25–64 at census, 10 years follow up. Table S2 Codes of deaths selected for the analysis, in ICD9 and ICD10. Table S3a. 
Age-adjusted cause-specific premature mortality rate (ASMR), by region and educational level in Males, Belgium 2000s; Belgian men aged 25–64 at census, 10 years follow up. Table S3b. Age-adjusted cause-specific premature mortality rate (ASMR), by region and educational level in females, Belgium 2000s; Belgian women aged 25–64 at census, 10 years follow up. (PDF 605 kb) Renard, F., Devleesschauwer, B., Gadeyne, S. et al. Educational inequalities in premature mortality by region in the Belgian population in the 2000s. Arch Public Health 75, 44 (2017). https://doi.org/10.1186/s13690-017-0212-x Health inequalities Educational inequalities Premature mortality
CommonCrawl
measurement error book - Rural Solutions Inc, 1512 Morningside Dr, Milbank, SD 57252, http://www.ruralsolutions.com (Corona, South Dakota). Its activities include the development and distribution of educational materials for adult education, nutrition, and other programs targeted at low-literate consumers in the US, and development and provision of business and …
Carroll, Ruppert, Stefanski and Crainiceanu, Measurement Error in Nonlinear Models (2nd edition, CRC Press, July 2006, 488 pages, ISBN 1584886331). This book follows up on the authors' 1995 text on measurement error, but it is completely rewritten. The analysis of nonlinear regression models includes generalized linear models, transform-both-sides models and quasilikelihood and variance function problems; the text concentrates on the general ideas and strategies of estimation and inference rather than being concerned with a specific problem, and every chapter concludes with bibliographic notes. The table of contents covers a guide to notation, the double/triple-whammy of measurement error, classical measurement error, a nutrition example, measurement error examples, radiation epidemiology and Berkson errors, and classical measurement error model extensions. Stefanski has recently published several papers on measurement error models with Carroll. The book will replace the 1995 text as the authoritative review of measurement error and uncertainties in exposures. Reviewers wrote that "It provides a thorough and concise description of … is very impressive" (Hall, University of Georgia, in Journal of the American Statistical Association, March 2008, Vol. 103, No. 481) and that "This book is a successful attempt at collecting, organizing, and presenting the scattered literature". One reader review calls the book outstanding ("The author presents knowledge from the highest shelf") but notes that, while the level does not matter for people familiar with statistics, for engineers it might be a significant barrier to understanding the text.
Buonaccorsi, Measurement Error (CRC Press, March 2, 2010, Mathematics, 464 pages): "Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of …"
Fuller, Measurement Error Models (Wiley, ISBN 978-0-470-09571-3, 440 pages, September 2006). The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal.
A SAGE book on measurement covers types of measures, types of response formats, specific examples of scales from different disciplines, cross-cultural measurement, the role of measurement in science, guidelines for identifying and correcting for error in measure development, generic issues in designing psychometric tests, item-to-total correlations (internal consistency procedures), item means, and test-retest correlations. An author-maintained website, http://www.business.uiuc.edu/~madhuv/msmt.html, features datasets and suggestions for using the book in courses. Reviewers note that "This book provides a useful systematic introduction to an important and neglected area, that of measurement error in the social sciences" (Jackson, University of Windsor, Ontario, Canada) and, among its key features, that it discusses the broader issues of science and measurement, placing measurement in its scientific context (Morrell, Department of Psychology, Texas Tech University).
An author biography fragment lists broad research interests in measurement error models, missing data problems, high dimensional data analysis, survival data and longitudinal data analysis, estimating function and likelihood methods, and medical applications, with … in Statistics from the University of Toronto in 2000.
measurement system evaluation measurement error: SPC charts are used to measure stability, and many software programs have this option. Accuracy and precision. As with statistical process control charts, stability means the absence of "Special Cause Variation", leaving only common cause variation.
measurement error vs measurement uncertainty: Failure to account for a factor (usually systematic): the most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent … It is not possible to correct for random error. Also, the ruler itself may be too short or too long, causing a systematic error. In other words, the next time Maria repeats all five measurements, the average she will get will be between (0.41 s - 0.05 s) and (0.41 s + 0.05 s). The deviations are …; the average deviation is d = 0.086 cm. A general expression for a measurement model is \( h(Y, X_1, \ldots, X_N) = 0 \). Consider estimates \( x_1, \ldots, x_N \), respectively, of the input quantities \( X_1, \ldots, X_N \), obtained from … The random component has a real distribution with repeated measurement and, since thes…
my book error: fragments of support threads about a WD My Book external drive. Neither WD Drive Utility nor Apple's Disk Utility find any issues with the drive. Suggested steps include typing "What's my IP address" into Google, rebooting the NAS device and, like others, uninstalling WD's Drive Utilities. Whatever the reason, the drive appears to be working fine now.
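Returning to the statistical material above: a standard early result in the classical measurement error literature (treated in the Carroll, Ruppert, Stefanski and Crainiceanu and Fuller texts) is that additive error in a covariate biases the naive regression slope towards zero. The short R simulation below only sketches that effect; all variable names and parameter values are invented here and are not taken from any of the books described.

# Illustrative simulation of attenuation under the classical measurement error
# model W = X + U; all names and values below are invented for this sketch.
set.seed(1)
n <- 10000
x <- rnorm(n, mean = 0, sd = 1)            # true covariate X
u <- rnorm(n, mean = 0, sd = 1)            # classical additive error U
w <- x + u                                 # observed, error-prone covariate W
y <- 2 + 1.5 * x + rnorm(n, sd = 1)        # outcome generated from the true X

fit_true  <- lm(y ~ x)                     # slope close to 1.5
fit_naive <- lm(y ~ w)                     # slope biased towards 0

lambda <- var(x) / (var(x) + var(u))       # reliability ratio, about 0.5 here
c(true  = unname(coef(fit_true)["x"]),
  naive = unname(coef(fit_naive)["w"]),
  expected_naive = 1.5 * lambda)

Correction methods such as regression calibration and SIMEX, covered at length in those books, aim to undo exactly this kind of bias.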
CommonCrawl
Comparison of inequity in health-related quality of life among unemployed and employed individuals in China Yaxin Zhao1, Zhongliang Zhou ORCID: orcid.org/0000-0003-1047-30232, Xiaojing Fan2, Rashed Nawaz2, Dantong Zhao2, Tiange Xu2, Min Su3, Dan Cao2, Chi Shen2 & Sha Lai2 In China, achieving health equity has been regarded as a key issue for health reform and development in the current context. It is well known that unemployment has a negative effect on health. However, few studies have addressed the association between unemployment and inequity in health-related quality of life (HRQOL). This study aims to compare the inequality and inequity in HRQOL between the unemployed and employed in China. The material regarding this study was derived from the Chinese National Health Services Survey of Shaanxi Province for 2013. We controlled for confounding factors by utilizing the coarsened exact matching method. Finally, 7524 employed individuals and 283 unemployed individuals who were 15 to 64 years old in urban areas were included in this study. We used HRQOL as the outcome variable, which was evaluated by using the Chinese version of EQ-5D-3L. The health concentration index, decomposition analysis based on the Tobit model, and the horizontal inequity index were employed to compute the socioeconomic-related equity between the unemployed and employed and the contribution of various factors. After matching, unemployed people tended to have poorer EQ-5D utility scores than employed people. There were statistically pro-rich inequalities in HRQOL among both employed and unemployed people, and the pro-rich health inequity of unemployed people was substantially higher than that of employed people. Economic status, age, education, smoking and health insurance were the factors influencing inequality in HRQOL between employed and unemployed individuals. Education status and basic health insurance have reduced the pro-rich inequity in HRQOL for unemployed people. It is suggested that unemployment intensifies inequality and inequity in HRQOL. According to policymakers, basic health insurance is still a critical health policy for improving health equity for the unemployed. Intervention initiatives aiming to tackle long-term unemployment through active labour market programmes, narrow economic gaps, improve educational equity and promote the health status of the unemployed should be considered by the government to achieve health equity. Health equity has gradually become a research hotspot in the field of health system reform [1, 2]. Achieving health equity has been a source of concern with a strong degree of support and response from all countries of the world [3]. China also regards the realization of health equity as the key issue of health reform and development in the current context. Specifically, the planning outline of "Healthy China 2030" has proposed that we should focus on the health problems of vulnerable groups of people to achieve health equity [4]. As an important economic and material basis for people, economic status is an important factor affecting health and health inequity. The widening of the income gap in China has also aroused widespread public concern. Empirical studies about health inequality have commonly used economic status to analyse health inequalities and inequities [5,6,7]. As Wagstaff suggested, in order to analyse socioeconomic health inequities, health-related information must be supplemented by data on socioeconomic status. 
There are many approaches to measuring socioeconomic status such as income, expenditure, or consumption [8]. Health inequalities are not only affected by physiological conditions but also widely determined by socioeconomic characteristics and inequalities may be further widened by unemployment [9]. The World Health Organization proposed that each country should set up health equity monitoring systems to reduce health inequalities by collecting data on key indicators such as employment status, which can be determined by the labour market [10]. Unlike retired people, most unemployed people quit the labour force for non-physiological reasons and cannot sell their labour at a balanced price in the market [11]. In addition, some articles have examined the impact of unemployment and income inequalities on the degree of criminality and mental health [12], as well as the associations between unemployment, income inequality and suicide mortality [13]. It is of great practical significance to compare and measure the income-related health inequality between unemployed and employed individuals in China. There is a body of literature that has explored the association between unemployment and lifestyle behaviours (e.g., alcohol consumption and smoking) [14, 15], the effects of unemployment on mental health (e.g., depression, mental disorder and suicide thoughts) [16,17,18,19], and the effects of unemployment on physical health outcomes (e.g., mortality) and subjective health outcomes (e.g., self-reported health) [20,21,22]. Empirical evidence has demonstrated that unemployment has a severely negative effect on health and that unemployment also significantly raises the risk of mental disorders and suicide [23, 24]. In addition, some international studies have revealed that unemployed people were significantly more likely to have poor self-reported health than employed people [20,21,22]. Unemployment may lead households into a cycle of poverty [25] and households may be disadvantaged in terms of health and access to health care services, which leads to changes in health equity. Therefore, it is logical to start from the key groups and to carry out research on inequity in health-related quality of life among the unemployed. It is highly important to prevent unemployed individuals from falling into long-term health problems and poverty, to improve the precision of poverty alleviation policies and to promote the construction of "Healthy China 2030". Despite many health indicators being used to assess the effect of unemployment on health, health-related quality of life remained remarkably absent from health measurement [26]. Health-related quality of life (HRQOL) is generally considered a key measurement indicator of health care outcomes and is constructed multidimensionally in relation to a person's self-perceived health [27]. The EuroQol 5 dimensions (EQ-5D) is a standardized instrument, and is most commonly used for measuring the quality of life in public health research [28, 29]. Some recent studies have examined the correlates of unemployment and HRQOL by using the MOS 8-item short-form health survey instrument, SF-12 instrument and SF-36 instrument [30,31,32,33], but studies that used the EQ-5D instrument to explore the relation of unemployment and HRQOL are relatively few in number. The EQ-5D instrument is easy to operate and has high applicability, as it has been tested in a large-sample and large-scale Chinese National Health Services Survey. 
Most importantly, the EQ-5D has time trade-off values based on a conversion of Chinese preferences for EQ-5D health states, which can more accurately reflect the HRQOL of Chinese residents [34]. Despite the importance of unemployment in models of social and ecological determinants of health, we know very little about the relationship of unemployment and health inequities in HRQOL. Leaving unemployment and employment out of public health inequity research creates a blind spot. This paper thus contributes to two strands of literature on the empirical evaluation of HRQOL for unemployed individuals and health inequities in China. First, the existing literature on the relation of unemployment and health has focused on mental health and self-assessed health [16,17,18,19,20,21,22], but the literature on the association of unemployment and HRQOL is scarce. In addition, the inequity in HRQOL for the unemployed and employed has not yet been evaluated using the CI and HI. Second, with the instruments for measuring quality of life, studies attempt to investigate the relationship between unemployment and HRQOL by using the EQ-5D instrument are very limited [30,31,32]. Third, in terms of methods, researchers often analysed HRQOL and inequities in HRQOL by using descriptive statistical analysis and linear regression, lacking a scientific method to balance the comparison groups; thus, such approaches cannot reflect the ceiling effect of EQ-5D and measure the inequity of HRQOL quantitatively [33]. In this article, we make an initial contribution to filling what is a rather large gap in the public health inequity research by investigating the relationship between unemployment and health inequities in HRQOL. Based on the abovementioned background, we have attempted to answer three main questions: (1) What is the health utility of the employed and the unemployed in China? Is the health utility value of the unemployed higher than that of the employed? (2) What are the levels of inequality and inequity in HRQOL between the employed and unemployed? Are the concentration index and horizontal equity index of the unemployed higher than those of the employed? (3) How do relevant factors contribute to the health inequalities in HRQOL between the employed and unemployed? In this paper, we have calculated and compared the health utility between the employed and unemployed in China. In addition, we decomposed the inequality and analysed the inequity in HRQOL between the employed and unemployed in China. Careful consideration of unemployment in public health research can allow us to make better progress towards achieving health equity. Data and Sample This study draws upon data from the Chinese National Health Services Survey of Shaanxi Province in 2013, a representative cross-sectional survey of households and individuals (adults and children) launched in 1993 by the National Health Commission of China every 5 years. The 5th wave survey adopted a multi-stage stratified cluster sampling method that was conducted in Shaanxi Province. In the first stage, this survey selected 32 counties (districts); 160 towns (streets) were selected in the next stage, and 320 villages (communities) were selected in the final stage. Finally, 20,700 households (57,529 people) were identified [34, 35]. With this survey, we attached great importance to data quality and implemented a considerable number of quality control measures in the following stages: survey design, training of investigators, field investigation and data collection. 
Based on a series of quality control measures, high response rates (> 85%) were achieved for this survey [36]. The Chinese National Health Services survey also focused on the health status and the health services need and utilization of the Chinese residents, covering a broad range of information on socioeconomic characteristics (e.g., age, gender, education status and economic level), health (e.g., self-assessed health and HRQOL) and health service utilization. In this study, 10,337 employed and 285 unemployed respondents whose ages ranged from 15 to 64 years in urban areas were identified in the final sample before matching. Health-related quality of life variables We used EQ-5D health utility as the outcome variable. HRQOL was measured by the classic 3-level EQ-5D (EQ-5D-3L), which has been widely validated and utilized worldwide [37]. The EQ-5D is a self-report questionnaire, that includes five dimensions: (1) mobility, (2) self-care, (3) usual activities (such as work, studies, housework and leisure activities), (4) pain/ discomfort, and (5) anxiety/depression. The three response alternatives to the five dimensions mentioned above are (1) no problem, (2) some problems, and (3) extreme problems [38]. Finally, we used the conversion for Chinese preferences to generate the score of EQ-5D utility between the unemployed and employed, which ranges from − 0.1490 (stands for the worst health) to 1 (stands for the full health) [39]. By combining one level of each of the five dimensions, a total of 243 possible health states can be defined, and a score for EQ-5D utility for all 243 health states can be calculated based on the results in additional files in Table A. In light of the existing literature, we controlled for variables including socio-demographic characteristics and health behaviour related to inhabitants, such as gender (0 = male, 1 = female), age (in years), per capita annual income (Yuan) (1 = lowest group, 0 = other; 1 = lower group, 0 = other; 1 = medium group, 0 = other; 1 = higher group, 0 = other; 1 = highest group, 0 = other), marital status (1 = single, 0 = other; 1 = marriage, 0 = other; 1 = widowed and divorced, 0 = other), education status (1 = elementary school and below, 0 = other; 1 = middle school, 0 = other; 1 = senior high school, 0 = other; 1 = college degree and above, 0 = other), health insurance (1 = no, 0 = other; 1 = basic medical insurance, 0 = other; 1 = commercial insurance and other insurance, 0 = other;), smoking status (1 = no smoking, 0 = other; 1 = non-daily smoking, 0 = other; 1 = daily smoking, 0 = other;) and drinking status (0 = no drinking, 1 = drinking). Coarsened exact matching A rough comparison of equity in HRQOL between the unemployed and employed would ignore the fact that there may be other potential confounding factors. Therefore, in this article, we adopted the coarsened exact matching method, which is a new technique for improving the assessment of causal inference between two groups by controlling for potentially confounding variables [40, 41]. The purpose of this method is to keep the distribution of covariates between the treatment group and the control group as balanced as possible, thereby improving the comparability between the two groups. The exact matching algorithm was used to accurately match the research objects in each layer according to the empirical distribution of samples. 
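To make the matching step concrete, a minimal sketch of coarsened exact matching is given below. It coarsens each covariate into bins, groups individuals by their bin signature, and keeps only strata that contain both unemployed (treated) and employed (control) units, which is what the weighting described next then adjusts for. This is illustrative Python only; the cut-points and variable names are our own assumptions, not the coarsening actually used in the study.

```python
from collections import defaultdict

def coarsen(person):
    """Map covariates to coarse bins; the cut-points here are illustrative."""
    return (
        person["gender"],
        person["age"] // 10,                 # 10-year age bands
        person["income_quintile"],           # already categorical
        person["education"],                 # categorical levels
    )

def cem(treated, control):
    """Keep only strata (bin signatures) containing units of both groups.
    In full CEM, surviving control units would also receive stratum weights."""
    strata = defaultdict(lambda: {"treated": [], "control": []})
    for p in treated:
        strata[coarsen(p)]["treated"].append(p)
    for p in control:
        strata[coarsen(p)]["control"].append(p)
    return {sig: grp for sig, grp in strata.items()
            if grp["treated"] and grp["control"]}

# Illustrative records (unemployed = treated, employed = control).
treated = [{"gender": 1, "age": 34, "income_quintile": 2, "education": "middle"}]
control = [{"gender": 1, "age": 37, "income_quintile": 2, "education": "middle"},
           {"gender": 0, "age": 61, "income_quintile": 5, "education": "college"}]
print(list(cem(treated, control)))   # only the shared stratum survives
```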
During the matching process, weighted variables were generated to ensure that there were at least one treatment group and one control group in each layer; otherwise, the research objects were deleted. Finally, the matched research objects were retained, and the matched data were employed for the analysis [5]. The multivariate imbalance measure L1 was employed to ensure the balance before and after matching. L1 ranges from 0 to 1, where 1 indicates that the data of two comparison groups are completely unbalanced and a smaller value indicates a better balance between comparison groups. The multivariate imbalance was measured by Eq.1 [41]: $$ {L}_1\left(f,g;H\right)=\frac{1}{2}\sum \limits_{\varepsilon_1\dots {\varepsilon}_K\in H(X)}\left|{f}_{\varepsilon_1\dots {\varepsilon}_k}-{g}_{\varepsilon_1\cdots {\varepsilon}_k}\right| $$ f and g are the relative frequencies for the distributions of the two groups. H(X) represents the Cartesian product of H(X1) × ⋯ × H(Xk). \( {f}_{\varepsilon_1\dots {\varepsilon}_k} \) indicates the relative frequency for samples falling into the cell with coordinates ε1…εk of the multivariate cross-tabulated of the treated units and \( {g}_{\varepsilon_1\dots {\varepsilon}_k} \) for the control units. Analysis of inequity in health-related quality of life Concentration index Health inequality and health inequity appear to be similar, but there is a difference between them. Health inequality is a relative condition that refers to differences in health status or in the distribution of health determinants among different population groups [42]. However, strict equality in health for all would not be a feasible or achievable goal because some determinants of health are unavoidable and are beyond human control [8]. We measured health inequality with the concentration index (CI). CI has been widely accepted as a standard method for measuring the socioeconomic-related inequality of health status [6]. The CI value is between − 1 and 1. A positive CI indicates that health is more concentrated among members with higher per capita household income; by contrast, 0 indicates that there is no inequality [43]. The concentration index was computed with Eq.2: $$ C=2\mathit{\operatorname{cov}}\left(x,h\right)/\mu $$ where C denotes the concentration index, x refers to HRQOL, μ is the average of EQ-5D utility value, and h symbolizes the ranking of per capita household income. Decomposition of the concentration index The decomposition analysis is intended to decompose the concentration index into the contribution of every variable to the inequality in HRQOL. Empirical research has demonstrated that the work, age, education, lifestyle behaviours, health insurance, economic level and resources are the immediate causes of some of these health inequities [44]. Therefore, we selected the contributing variables that include age, gender, education, income, marriage, health behaviours (smoking and drinking) and health insurance to decompose the inequality in HRQOL. These variables were divided into need variables and non-need variables of HRQOL to calculate the horizontal inequity index. Need variables, or x, referred to the unavoidable determinants of health, while the control, or z, non-need variables referred to the avoidable determinants of health. As Wagstaff and a number of studies have suggested, age and gender are commonly used to reflect unavoidable determinants of health in the analysis of health inequity [5, 42, 45]. 
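A direct numerical illustration of Eq.2 may help: the concentration index is twice the covariance between the health variable and the fractional income rank, divided by mean health. The sketch below uses simulated data and our own helper names; it is not the survey data or the authors' code, and ties in income are ignored for simplicity.

```python
import numpy as np

def concentration_index(health, income):
    """C = 2 * cov(health, fractional income rank) / mean(health), cf. Eq.2."""
    n = len(health)
    order = np.argsort(income, kind="stable")      # rank individuals by income
    rank = np.empty(n)
    rank[order] = (np.arange(1, n + 1) - 0.5) / n  # fractional rank in (0, 1)
    return 2.0 * np.cov(health, rank, bias=True)[0, 1] / np.mean(health)

rng = np.random.default_rng(0)
income = rng.lognormal(mean=9.0, sigma=0.6, size=5000)
# Simulated EQ-5D-like utilities that rise slightly with income,
# capped at 1 to mimic the ceiling effect discussed below.
health = np.clip(0.85 + 0.02 * np.log(income / income.mean())
                 + rng.normal(0, 0.05, size=5000), -0.149, 1.0)

print(round(concentration_index(health, income), 4))  # small positive: pro-rich
```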
In addition, the EQ-5D utility value generally has a ceiling effect, with only 3 levels in each dimension; that is, most respondents are in good physical condition, and most health utility values calculated from it are 1. If the effect were ignored in the analysis of health-related quality of life factors, the traditional regression method would inevitably produce false estimates. Therefore, to consider the limited dependent variable, we used the Tobit model to solve the problem. The decomposition analysis based on the Tobit model [42] was the commonly used method shown in Eq.3: $$ {y}_i=\alpha +\sum \limits_j{\beta}_j^m{x}_{ji}+\sum \limits_k{\gamma}_k^n{z}_{ki}+{\varepsilon}_i $$ where yi is the score for EQ-5D utility; x are the need variables of HRQOL (e.g., gender and age); z indicates the non-need variables of HRQOL (e.g., health insurance, education status, marital status, economic level and health behaviour); \( {\beta}_j^m \) and \( {\gamma}_k^n \) indicate the marginal effects (dy/dx) of every variable; and εi refers to the error term. The decomposition of the concentration index C can be written as follows: $$ C=\sum \limits_j\left({\beta}_j^m{\overline{x}}_j/\mu \right){C}_j+{GC}_{\varepsilon }/\mu $$ where μ represents the mean of EQ-5D utility, Cj denotes the concentration index of xj, and \( {\overline{x}}_j \) is the mean for xj. The last term is the concentration index of ε. Horizontal inequity index Inequity implies a state that results from a lack of fairness, which is related to a normative view of social justice and is the most relevant to our discussion about the pursuit of health equity [7]. Health inequities are avoidable, unfair and potentially remediable inequalities in health between groups of people [42]. Most of the literature on health economics has employed horizontal equity as the criterion of equity, stating that people with equal health needs should be treated equally [46]. Thus, we measured health inequity by using the horizontal inequity index (HI). The horizontal inequity (HI) of HRQOL indicates the inequality in HRQOL by eliminating the contribution of need variables. In the present investigation, the horizontal inequity index was generated by subtracting the contribution of the need variables (e.g., gender and age) from the concentration index of HRQOL [43]. The HI is positive if there exists a pro-rich inequity and vice versa. Summary statistics for the employed and unemployed individuals before and after coarsened exact matching are presented in Table 1. The results before matching indicate that the differences in socio-demographic characteristics among the two groups were statistically significant except for drinking alcohol and smoking status. Specifically, results after matching demonstrated that the differences in socio-demographic characteristics between employed and unemployed individuals were statistically insignificant, except for medical health insurance, which was controlled in the health inequity analysis. Additionally, the results of the multivariate imbalance measure L1 are shown in additional files in Table B. The value of L1 (6.17*10–15) between employed and unemployed individuals after matching was obviously lower than that before matching (0.448), which signified that the matching effect was good and that the two groups became more comparable. As presented in Table 1, data for a total of 7857 residents were collected in this study, with data for 7574 employed and 283 unemployed residents after coarsened exact matching. 
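For reference, the decomposition formula above and the horizontal inequity index amount to simple bookkeeping once the marginal effects have been estimated (e.g., from the Tobit model): each regressor contributes its elasticity times its own concentration index, and HI subtracts the contributions of the need variables from the overall CI. The Python sketch below uses made-up inputs rather than the survey estimates.

```python
def decompose_ci(C_total, marginal_effects, means, mean_health, ci_by_var,
                 need_vars=("age", "gender")):
    """Per-variable contributions to C and the horizontal inequity index
    HI = C minus the contributions of the need variables."""
    contributions = {
        var: marginal_effects[var] * means[var] / mean_health * ci_by_var[var]
        for var in marginal_effects
    }
    residual = C_total - sum(contributions.values())   # generalized CI of the error term
    hi = C_total - sum(contributions[v] for v in need_vars if v in contributions)
    return contributions, residual, hi

# Illustrative (made-up) marginal effects dy/dx, variable means, and
# concentration indices of each explanatory variable.
marginal_effects = {"age": -0.002, "gender": -0.01, "income_q5": 0.04, "edu_high": 0.03}
means = {"age": 40.0, "gender": 0.5, "income_q5": 0.2, "edu_high": 0.3}
ci_by_var = {"age": -0.01, "gender": 0.0, "income_q5": 0.5, "edu_high": 0.2}

contrib, resid, hi = decompose_ci(C_total=0.0089, marginal_effects=marginal_effects,
                                  means=means, mean_health=0.95, ci_by_var=ci_by_var)
print(contrib, round(resid, 4), round(hi, 4))
```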
Table 1 Summary statistics and description of independent variables before and after coarsened exact matching

Description of EQ-5D dimensions

The distribution of the three response alternatives for EQ-5D in each dimension for the employed and unemployed residents is shown in Table 2. After matching, the results indicated that the unemployed residents reported a higher proportion of some problems/extreme problems in all five dimensions than the employed residents did, and this result was statistically significant. Table 3 presents the EQ-5D health utility scores, converted with the Chinese preference weights, for the employed and unemployed in China. After matching, the results indicated that the differences in the mean EQ-5D utility scores and in the utility scores for the five dimensions were statistically significant between the employed and unemployed people. Moreover, unemployed residents tended to exhibit significantly lower EQ-5D utility scores than employed residents. Thus, unemployed people are significantly more likely to report health problems in each of the EQ-5D dimensions than employed people. We also analysed the data before matching (additional files, Table C and Table D) and found that the results were almost consistent with those after matching, which indicates that the results are robust.

Table 2 Distribution of the three response alternatives for EQ-5D in each dimension for the employed and unemployed

Table 3 The values for EQ-5D utility and each dimension for the employed and unemployed

Inequity in HRQOL between Employed and Unemployed People

The CIs for the EQ-5D utility scores of the employed and unemployed are presented in Table 4. The overall CIs for the EQ-5D utility values for both employed (0.0028) and unemployed (0.0089) individuals were positive, signifying that there is a statistically significant pro-rich inequality in HRQOL among both employed and unemployed people in Shaanxi Province, China. This indicates that, in both groups, better HRQOL is concentrated among respondents with higher economic levels. Conversely, respondents with lower economic levels had more health problems than those with higher economic levels. Furthermore, the degree of inequality in HRQOL among unemployed people was higher than that among employed people.

Table 4 Decomposition of the concentration index in HRQOL among the employed and unemployed

The overall decomposition analysis of the EQ-5D utility values for the employed and unemployed is presented in Table 4. The marginal effect estimates for the two groups suggested that education status had a positive marginal effect, indicating that a higher level of education was significantly related to higher EQ-5D utility values. However, age had a negative marginal effect, suggesting that being older was associated with a decline in HRQOL. As shown in Table 4, the key contributions were from economic level (38.31%), age (13.61%) and educational status (7.23%) for the employed, whereas the three key contributors were economic status (60.08%), educational status (− 12.11%) and smoking status (8.04%) for the unemployed. Furthermore, the effects of different types of health insurance act in different directions. Basic health insurance had a negative contribution and reduced the pro-rich impact on HRQOL for both the employed and the unemployed. In contrast, commercial insurance and other insurance made a positive contribution to the inequity of HRQOL.
In addition, health behaviours also contributed to increasing pro-rich inequality in HRQOL. As depicted in Fig. 1, the contributions of need variables, economic status, other control variables and the residual to the inequality in HRQOL were above the horizontal equity line, implying that these variables increased the pro-rich inequity for both the employed and the unemployed.

Classification analysis of inequity in HRQoL for the employed and the unemployed

The horizontal inequity index of HRQOL is also presented in Table 5. After deducting the contributions of the need variables in health (e.g., age and gender) from the concentration index of the EQ-5D utility value, the horizontal inequity indexes of HRQOL for employed and unemployed individuals were 0.0024 and 0.0075, respectively, which indicated a pro-rich inequity in HRQOL for both groups. In addition, the horizontal inequity was greater for unemployed individuals than for employed individuals. To test the robustness of the results, we analysed the inequity of EQ-5D scores for the employed and unemployed before matching in the additional files, Table E and Table F. The horizontal inequity indexes of HRQOL for employed and unemployed individuals before matching were consistent with the results after matching, which indicates that the results are robust.

Table 5 Horizontal inequity of EQ-5D scores for the employed and unemployed

In the present research, we have assessed the long-studied topic of HRQOL in the research area of health care and economics. Based on the matched data, our results demonstrated that unemployed people reported lower HRQOL than employed people. In addition, unemployed people had higher levels of pro-rich inequality and horizontal inequity in HRQOL, which was mainly related to economic status, educational status, age, smoking and health insurance. Therefore, three aspects of this study should be discussed. First, the most notable finding was a statistically significantly higher EQ-5D utility for employed individuals compared with unemployed individuals, and this study was the first to assess HRQOL among employed and unemployed individuals by using the EQ-5D-3L instrument in China. This indicated that unemployment was associated with poor HRQOL. This result is consistent with several reports that unemployed people are likely to have poorer HRQOL than employed people [26, 31, 47]. Specifically, this may be because people who experience unemployment are deprived of the benefits of work (e.g., income, social contact, status and activity), face greater financial and mental stress, and have lower health care utilization. Second, the present study verified that the CIs of HRQOL for the employed and the unemployed were both positive, suggesting that higher HRQOL was concentrated among richer individuals in both the employed and unemployed groups in Shaanxi. Additionally, the CI of the EQ-5D utility values among the unemployed was higher than that among the employed, which suggested that the unemployed had a higher pro-rich inequality in HRQOL than the employed. This study fills a gap in the literature by comparing socioeconomic-related inequality between employed and unemployed individuals. Since previous research has not primarily focused on health inequality between employed and unemployed people in China, we can only compare this estimation with previous research on different kinds of people.
Consistent with several previous reports of the different insured populations [5], findings from the marginal effect estimates among employed and unemployed individuals indicated that an advanced level of education was connected to better HRQOL. This might be because highly educated people have a stronger health awareness and better ability to cope with diseases. Moreover, as expected, age had a negative marginal effect, signifying that elderly people tend to have lower health outcomes. Furthermore, our findings indicate that the economic level intensified the pro-rich inequality in HRQOL and that the gap between the rich and poor people remains the key factor influencing inequality in HRQOL between the employed and unemployed, which was in agreement with previous studies of the different populations [5, 34, 42]. Apart from the economic level, age, educational status, health insurance and health behaviour also contributed to inequality in HRQOL. From the government point of view, this research demonstrated that basic health insurance schemes and educational level would reduce the pro-rich inequity in HRQOL for unemployed people. Ensuring basic medical insurance and enhancing education remain important health policies to reduce the inequity in HRQOL [33]. In contrast, commercial insurance and other insurance also increased the pro-rich inequity of HRQOL in unemployed individuals. It seems that commercial insurance has focused on efficiency due to market competition and most of the beneficiaries have been high-income groups. Smoking and drinking also contributed to increasing the pro-rich inequity of HRQOL, which is consistent with several reports that tobacco use and alcohol consumption were adverse health consequences and significant causes of health inequity worldwide [48]. This is because those who experienced social disadvantage, with low incomes or unemployment, were more likely to become regular smokers [49]. The purchase of tobacco products by households of tobacco users with lower socioeconomic status exacerbates poverty and social inequities by reducing the funds available for basic expenditures such as housing, clothing and food [50]. Third, our results regarding the inequity in HRQOL may be attractive to policy makers in regions where unemployment has increased significantly due to the financial crisis. In our research, after subtracting the contribution of the need variables, we found that the horizontal inequity index illustrated not only that there was pro-rich inequity in HRQOL between the two groups but also that this inequity for unemployed individuals was still higher than that for employed individuals, which may be explained by the reduction in income associated with unemployment [25, 33]. People have unequal access to social resources, including health resource, resulting in an increase in horizontal inequity in HRQOL. Specifically, unemployment had a negative effect on health equity and increased the pro-rich inequity in HRQOL. Therefore, when promoting a "Healthy China 2030" to achieve health equity among different groups, such as the unemployed and employed groups, the government should consider the contribution of education and basic health insurance schemes to reduce pro-rich inequity. The current investigation has three key strengths. First, it is the first to compare the HRQOL of unemployed and employed individuals by using the EQ-5D-3L based on a conversion for Chinese preferences. 
Furthermore, we offer well-informed estimates of the associations between unemployment and socioeconomic-related inequality and inequity in Chinese HRQOL. The third key strength is that the findings of this investigation were based on a stronger balance between the unemployed and the employed groups by using the coarsened exact matching method. At the same time, we acknowledge that the present study also has some limitations. First, in the data material, self-reported information regarding socioeconomic variables and EQ-5D scores may contain measurement errors and possibly introduce recall bias. Second, the data derived from Shaanxi Province and our conclusion may not be generalizable to all of China. Third, we must indicate that without valid instrumental variables, causal interpretations are hazardous, and possible endogenous problems could not be omitted in these cross-sectional data. Therefore, we refer to associations between unemployment and HRQOL. Fourth, the present study was subject to possibly unobserved confounding factors, such as disability status, access to healthy food, and social interaction. Finally, in the analytical techniques, the coarsened exact matching may exclude some observations that are very dissimilar in observable characteristics to obtain two groups that are as similar as possible. In conclusion, the unemployed had poorer HRQOL than the employed in this study in China, and the unemployed had higher pro-rich inequity in HRQOL than the employed. Unemployment is linked with health-related quality of life and inequality in HRQOL. It appeared that unemployment intensified the inequality and inequity in HRQOL. The major contributors to inequality in HRQOL were economic status, education status, age, smoking and health insurance for employed and unemployed residents. Education status and basic health insurance have positive effects on decreasing inequity in HRQOL among the unemployed. Intervention initiatives aiming to tackle long-term unemployment through active labour market programmes, narrow economic gaps, improve educational equity and improve the health status of the unemployed should be considered by the government to achieve greater health equity. Additionally, the socialization of health insurance for the unemployed should be improved. These data were drawn from the fifth Chinese National Health Services Survey of Shaanxi Province, which is not open to everyone. Researchers who want to use the data should contact Zhongliang Zhou ([email protected]). EQ-5D: EuroQol 5 dimensions Horizontal inequity Marmot M. Achieving health equity: from root causes to fair outcomes. Lancet. 2007;370(9593):1153–63. Bailey ZD, Krieger N, Agenor M, Graves J, Linos N, Bassett MT. Structural racism and health inequities in the USA: evidence and interventions. Lancet. 2017;389(10077):1453–63. Marmot M, Friel S, Bell R, Houweling TAJ, Taylor S. Closing the gap in a generation: health equity through action on the social determinants of health. Lancet. 2008;372(9650):1661–9. The L. The best science for achieving Healthy China 2030. Lancet (London, England). 2016;388(10054):1851. Su M, Zhou Z, Si Y, Wei X, Xu Y, Fan X, et al. Comparing the effects of China's three basic health insurance schemes on the equity of health-related quality of life: Using the method of coarsened exact matching. Health Qual Life Outcomes. 2018;16(1):41. Xu Y, Zhu S, Zhang T, Wang D, Hu J, Gao J, Zhou Z. 
Explaining Income-Related Inequalities in Dietary Knowledge: Evidence from the China Health and Nutrition Survey. Int J Environ Res Public Health. 2020;17(2):532. PubMed Central Article Google Scholar Ahonen EQ, Fujishiro K, Cunningham T, Flynn M. Work as an Inclusive Part of Population Health Inequities Research and Prevention. Am J Public Health. 2018;108(3):306–11. van Doorslaer E, Wagstaff A, van der Burg H, Christiansen T, De Graeve D, Duchesne I, et al. Equity in the delivery of health care in Europe and the US. J Health Econ. 2000;19(5):553–83. Huang J, Birkenmaier J, Kim Y. Job Loss and Unmet Health Care Needs in the Economic Recession: Different Associations by Family Income. Am J Public Health. 2014;104(11):E178–83. Puig-Barrachina V, Malmusi D, Martínez JM, Benach J. Monitoring social determinants of health inequalities: the impact of unemployment among vulnerable groups. Int J Health Serv. 2011;41(3):459–82. Lundin A, Hemmingsson T. Unemployment and suicide. Lancet. 2009;374(9686):270. Barbalat G, Franck N. Ecological study of the association between mental illness with human development, income inequalities and unemployment across OECD countries. BMJ Open. 2020;10:e035055. Andres AR. Income inequality, unemployment, and suicide: a panel data analysis of 15 European countries. Appl Econ. 2005;37(4):439–51. Jorgensen MB, Pedersen J, Thygesen LC, Lau CJ, Christensen AI, Becker U, et al. Alcohol consumption and labour market participation: a prospective cohort study of transitions between work, unemployment, sickness absence, and social benefits. Eur J Epidemiol. 2019;34(4):397–407. Latif E. The impact of recession on drinking and smoking behaviours in Canada. Econ Model. 2014;42:43–56. Cygan-Rehm K, Kuehnle D, Oberfichtner M. Bounding the causal effect of unemployment on mental health: Nonparametric evidence from four countries. Health Econ. 2017;26(12):1844–61. Stankunas M, Kalediene R, Starkuviene S, Kapustinskiene V. Duration of unemployment and depression: a cross-sectional survey in Lithuania. BMC Public Health. 2006;6(1):174. Frasquilho D, Matos MG, Salonna F, Guerreiro D, Storti CC, Gaspar T, et al. Mental health outcomes in times of economic recession: a systematic literature review. BMC Public Health. 2016;16:115. Chang S-S, Gunnell D, Sterne JA, Lu T-H, Cheng AT. Was the economic crisis 1997–1998 responsible for rising suicide rates in East/Southeast Asia? A time–trend analysis for Japan, Hong Kong, South Korea, Taiwan, Singapore and Thailand. Soc Sci Med. 2009;68(7):1322–31. Kristian H, Ivar EJ. Is it Easier to Be Unemployed When the Experience Is More Widely Shared? Effects of Unemployment on Self-rated Health in 25 European Countries with Diverging Macroeconomic Conditions. Eur Sociol Rev. 2018;34(1):22–39. del Amo Gonzalez MPL, Benitez V, Martin-Martin JJ. Long term unemployment, income, poverty, and social public expenditure, and their relationship with self-perceived health in Spain (2007-2011). BMC Public Health. 2018;18(1):133. Ronchetti J, Terriau A. Impact of unemployment on self-perceived health. Eur J Health Econ. 2019;20(6):879–89. Karanikolos M, Mladovsky P, Cylus J, Thomson S, Basu S, Stuckler D, et al. Financial crisis, austerity, and health in Europe. Lancet. 2013;381(9874):1323–31. Drydakis N. The effect of unemployment on self-reported health and mental health in Greece from 2008 to 2013: a longitudinal study before and during the financial crisis. Soc Sci Med. 2015;128:43–51. Hooghe M, Vanhoutte B, Hardyns W, Bircan T. 
Unemployment, Inequality, Poverty and Crime: Spatial Distribution Patterns of Criminal Acts in Belgium, 2001-06. Br J Criminol. 2011;51(1):1–20. Norstrom F, Waenerlund A-K, Lindholm L, Nygren R, Sahlen K-G, Brydsten A. Does unemployment contribute to poorer health-related quality of life among Swedish adults? BMC Public Health. 2019;19:457. Evaristo OS, Moreira C, Lopes L, Abreu S, Agostinis-Sobrinho C, Oliveira-Santos J, et al. Associations between physical fitness and adherence to the Mediterranean diet with health-related quality of life in adolescents: results from the LabMed Physical Activity Study. Eur J Pub Health. 2018;28(4):631–5. Brooks R. EuroQol: The current state of play. Health Policy. 1996;37(1):53–72. Alcaniz M, Sole-Auro A. Feeling good in old age: factors explaining health-related quality of life. Health Qual Life Outcomes. 2018;16:48. Hirao K, Kobayashi R. Health-related quality of life and sense of coherence among the unemployed with autotelic, average, and non-autotelic personalities: a cross-sectional survey in Hiroshima. Japan PLoS One. 2013;8(9):e73915. Extremera N, Rey L. Health-related quality of life and cognitive emotion regulation strategies in the unemployed: a cross-sectional survey. Health Qual Life Outcomes. 2014;12(1):172. Gonzalez-Chica DA, Adams R, Dal Grande E, Avery J, Hay P, Stocks N. Lower educational level and unemployment increase the impact of cardiometabolic conditions on the quality of life: results of a population-based study in South Australia. Qual Life Res. 2017;26(6):1521–30. Axelsson L, Andersson IH, Edén L, Ejlertsson G. Inequalities of quality of life in unemployed young adults: A population-based questionnaire study. Int J Equity Health. 2007;6(1):1. Xu Y, Yang J, Gao J, Zhou Z, Zhang T, Ren J, et al. Decomposing socioeconomic inequalities in depressive symptoms among the elderly in China. BMC Public Health. 2016;16(1):1214. Fan X, Zhou Z, Dang S, Xu Y, Gao J, Zhou Z, et al. Exploring status and determinants of prenatal and postnatal visits in western China: in the background of the new health system reform. BMC Public Health. 2018;18(1):39. Lai S, Shen C, Xu Y, Yang X, Si Y, Gao J, et al. The distribution of benefits under China's new rural cooperative medical system: evidence from western rural China. Int J Equity Health. 2018;17:137. Mangen M-JJ, Bolkenbaas M, Huijts SM, van Werkhoven CH, Bonten MJM, de Wit GA. Quality of life in community-dwelling Dutch elderly measured by EQ-5D-3L. Health Qual Life Outcomes. 2017;15(1):3. Davison NJ, Thompson AJ, Turner AJ, Longworth L, McElhone K, Griffiths CEM, et al. Generating EQ-5D-3L Utility Scores from the Dermatology Life Quality Index: A Mapping Study in Patients with Psoriasis. Value Health. 2018;21(8):1010–8. Liu GG, Wu H, Li M, Gao C, Luo N. Chinese time trade-off values for EQ-5D health states. Value Health. 2014;17(5):597–604. Blackwell M, Iacus S, King G. Porro G: cem: Coarsened exact matching in Stata. Stata J. 2009;9(4):524–46. Iacus SM, King G, Porro G. Multivariate matching methods that are monotonic imbalance bounding. J Am Stat Assoc. 2011;106(493):345–61. Zhou Z, Fang Y, Zhou Z, Li D, Wang D, Li Y, et al. Assessing income-related health inequality and horizontal inequity in China. Soc Indic Res. 2017;132(1):241–56. Fan X, Xu Y, Stewart M, Zhou Z, Dang S, Wang D, et al. Effect of China's maternal health policy on improving rural hospital delivery: Evidence from two cross-sectional surveys. Sci Rep. 2018;8(1):12326. 
PubMed PubMed Central Article CAS Google Scholar Marmot M, Allen J, Bell R, Bloomer E, Goldblatt P. Consortium European Review S: WHO European review of social determinants of health and the health divide. Lancet. 2012;380(9846):1011–29. Wagstaff A, van Doorslaer E, Watanabe N. On decomposing the causes of health sector inequalities with an application to malnutrition inequalities in Vietnam. J Econ. 2003;112(1):207–23. Garcia-Gomez P, Hernandez-Quevedo C, Jimenez-Rubio D, Oliva-Moreno J. Inequity in long-term care use and unmet need: Two sides of the same coin. J Health Econ. 2015;39:147–58. McKee-Ryan FM, Song ZL, Wanberg CR, Kinicki AJ. Psychological and physical well-being during unemployment: A meta-analytic study. J Appl Psychol. 2005;90(1):53–76. Savage C. Alcohol and tobacco related health inequity: a population health perspective. J Addict Nurs. 2012;23(1):72–4. Hiscock R, Bauld L, Amos A, Platt S. Smoking and socioeconomic status in England: the rise of the never smoker and the disadvantaged smoker. J Public Health. 2012;34(3):390–6. Siahpush M, Borland R, Scollo M. Smoking and financial stress. Tob Control. 2003;12(1):60–6. We want to express our appreciation to the Health Department of Shaanxi Province for providing data. We also express our gratitude to all participants in this study for their participation and cooperation in the data collection. This study was funded by the China Medical Board (15–277 and 16–262), the National Natural Science Foundation of China (71874137), and the Shaanxi Social Science Foundation (2017S024). They had no role in the design of the study and data collection, analysis, and interpretation of data and in writing the manuscript. School of Public Health, Health Science Center, Xi'an Jiaotong University, No.76 West Yanta Road, Xi'an, 710061, Shaanxi, China Yaxin Zhao School of Public Policy and Administration, Xi'an Jiaotong University, No. 28 Xianning West Road, Xi'an, 710049, Shaanxi, China Zhongliang Zhou, Xiaojing Fan, Rashed Nawaz, Dantong Zhao, Tiange Xu, Dan Cao, Chi Shen & Sha Lai School of Public Administration, Inner Mongolia University, No. 235 College Road, Hohhot, 010021, Inner Mongolia, China Min Su Zhongliang Zhou Xiaojing Fan Rashed Nawaz Dantong Zhao Tiange Xu Dan Cao Chi Shen Sha Lai YXZ conceptualized the research idea, performed the analysis and wrote the manuscript. CS, XJF and SL contributed to the analysis and interpretation of the data. ZLZ and MS made substantial contributions to the study design, and critically edited and approved the final manuscript. NR, DTZ, TGX and DC participated sufficiently in providing constructive suggestions and revising the manuscript. All authors read and approved the final manuscript. Correspondence to Zhongliang Zhou. In this study, we obtained verbal informed consent from each participant. The Health Department of Shaanxi Province has issued a document in which the guidance staff of the sample counties will contact each participant who agrees to be interviewed and make an appointment with them. The investigators then went to the participants' homes, which meant that if we had questionnaires for the participants, we had the consent of the participants. This study is classified as a risk-free investigation because the study adopted literature research techniques and methods, with no intervention or intentional modification of the clinical, biometric or social data of the participants participating in the study. 
Informed consent in our study was approved by the Ethics Committee of Xi'an Jiaotong University Health Science Center because the research we conducted is classified as a risk-free investigation (No. 2015–644), and it was conducted under the ethics guidelines of the Declaration of Helsinki. The participants consented to the use of the information for investigative purposes, and the data was anonymized when analyzed and do not contain information that might lead to the identification of them. The authors declare that they have no other competing interests. Additional file 1: Table A. Chinese time trade-off utility values for EQ-5D health states. Table B. The multivariate imbalance measure L1 before and after coarsened exact matching. Table C. Distribution of the three response alternatives for EQ-5D in each dimension for the employed and unemployed before matching. Table D. The values for EQ-5D utility and each dimension for the employed and unemployed before matching. Table E. Decomposition of the concentration index in HRQOL among the employed and unemployed before matching. Table F. Horizontal inequity of EQ-5D scores for the employed and unemployed before matching. Zhao, Y., Zhou, Z., Fan, X. et al. Comparison of inequity in health-related quality of life among unemployed and employed individuals in China. BMC Public Health 21, 52 (2021). https://doi.org/10.1186/s12889-020-10038-3 EQ-5D
Synchronization of application-driven WSN Bruno Marques ORCID: orcid.org/0000-0002-3795-337X1,2 & Manuel Ricardo2 The growth of wireless sensor networks (WSN) has resulted in part from requirements for connecting sensors and advances in radio technologies. WSN nodes may be required to save energy and therefore wake up and sleep in a synchronized way. In this paper, we propose an application-driven WSN node synchronization mechanism which, by making use of cross-layer information such as application ID and duty cycle, and by using the exponentially weighted moving average (EWMA) technique, enables nodes to wake up and sleep without losing synchronization. The results obtained confirm that this mechanism maintains the nodes in a mesh network synchronized according to the applications they run, while maintaining a high packet reception ratio. Recently, there has been an increasing trend towards the deployment of WSN, where a large number of tiny devices interacting with their environments may be inter-networked and accessible through the Internet. For that purpose, several communication protocols have been defined making use of the IEEE 802.15.4 Physical and MAC layers [1]. The 6LoWPAN Network Layer adaptation protocol [2] is also used to enable the interconnection between low-power devices and the IP network. Since its release, the design of routing protocols became increasingly important [3] and RPL [4] emerged as the IETF proposed standard protocol for IPv6-based multi-hop WSN. WSNs are constituted by sensor devices equipped with their own local clock for internal operations [5]. Events related to them, which include sensing, processing, and communication, are normally associated to timing information. In the particular case of WSNs, there are challenges and factors related to node synchronization, which include low-cost clocks, effects of wireless communication, and node failures. Moreover, WSNs are distributed and their nodes have multiple hardware and software constraints such as low processing power, low memory and storage capabilities, and low-power consumption. These characteristics make time synchronization an important part of communication in WSNs, and synchronization protocols are required. In [6], we presented a new paradigm, the application-driven WSN paradigm, as a cross-layer solution aimed to help reducing the energy consumed by a network of sensors executing a set of applications. This paradigm assumes that each application defines its own network and set of nodes so that the exchange of information can be confined to the nodes associated to the application. The nodes share information about the applications they run and their duty cycles. In [7], we proposed an extension to the RPL routing protocol, the RPL-BMARQ, with the purpose of making the network aware of the traffic generated by applications. The main objective of this extension was to construct directed acyclic graphs (DAGs), by using information shared by the application and network layers, allowing the nodes to select parents by considering the applications they run. In that work, we characterized the energy consumption and the energy gain and also end-to-end delay and fairness. For evaluation purposes, we selected four scenarios in which all the nodes joined the network at same time and performed simulations considering regular RPL and RPL-BMARQ. Later, we started to study the behavior of RPL-BMARQ considering that the nodes would not join simultaneously the WSN. 
To this end, we presented a draft of a possible node synchronization mechanism and estimated the energy gains introduced by RPL-BMARQ. In this paper, we consider a more realistic situation in which the nodes join the WSN at unpredictable and different times. To this end, the sensor nodes must share some kind of time reference which allows them to be synchronized with respect to the life cycle of the applications they run. Therefore, in this paper, we propose a novel synchronization mechanism for RPL-BMARQ, which helps the nodes to wake up and go to sleep in a synchronized manner so that they can successfully send, receive, and forward packets while keeping energy consumption low. The major contribution of this paper is therefore a mechanism for WSN which synchronizes the sensor nodes with respect to the life cycles of the applications they run, enabling these nodes to wake up and go to sleep in synchronism while maintaining a high packet reception ratio. The novelty of our contribution comes from (1) the adaptation of the well-known exponentially weighted moving average technique to wireless mesh network scenarios and (2) using this mechanism to control the behavior of sensor nodes so that they become synchronized at relevant time instants defined by their application duty cycles. The paper is organized into six sections. Section 2 presents the related work. Section 3 describes the application-driven WSN concept. Section 4 describes the rationale of the contribution: the synchronization mechanism. Section 5 evaluates the proposed mechanism, describes the methodology adopted for its validation, and discusses the results obtained. Finally, Section 6 draws the conclusions and presents future work.

In this section, we present and discuss related work in three main areas: time synchronization, wake-up mechanisms for WSN, and 6LoWPAN/IPv6/RPL evaluations. WSNs are constituted by sensor devices equipped with their own local clock for internal operations [5]. Events related to them, which include sensing, processing, and communication, are normally associated with timing information. In the particular case of WSN, there are many challenges related to time synchronization because these networks are distributed by nature and because of the constraints of the sensor nodes in terms of hardware and software. Akyildiz and Vuran [5] state that in order for the nodes to synchronize, they must exchange information about their clocks and use this information to synchronize their local clocks. Wireless communication creates challenges for synchronization that result from the error-prone nature of the wireless channel, which may cause packet losses due to low signal-to-noise plus interference ratios, or highly variable, non-deterministic delays caused by MAC access and packet retransmissions. These factors also affect the time synchronization messages. Therefore, some nodes may become unsynchronized. On the other hand, synchronization messages sent by nodes may lead other nodes to adapt to their unsynchronized local clocks. As a consequence, the network may be partitioned into different areas with different times, which prevents synchronization of the entire network. Also, the wireless channel may introduce asymmetric delays between two nodes, which is important for synchronization because some synchronization solutions depend on consecutive message exchanges and round-trip-time delays. Therefore, robust synchronization methods are needed.
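As background for the mechanism announced above, which relies on the exponentially weighted moving average, a generic EWMA update can be sketched as follows. This is a Python illustration with hypothetical names, not the RPL-BMARQ implementation described later; it assumes a node smooths the observed offset between its local wake-up time and the wake-up time implied by a neighbour's application duty cycle.

```python
class EwmaOffsetEstimator:
    """Smooths successive offset observations with an EWMA:
    estimate <- estimate + alpha * (sample - estimate)."""

    def __init__(self, alpha=0.125):
        self.alpha = alpha          # weight of the newest sample (0.125 is a common choice)
        self.estimate = None

    def update(self, sample):
        if self.estimate is None:
            self.estimate = sample  # the first observation seeds the average
        else:
            self.estimate += self.alpha * (sample - self.estimate)
        return self.estimate

# Hypothetical offsets (in ms) between the local wake-up instant and the
# wake-up instant derived from the application duty cycle of a neighbour.
observed_offsets = [12.0, 9.5, 15.2, 11.1, 10.4, 9.8]
ewma = EwmaOffsetEstimator(alpha=0.2)
for sample in observed_offsets:
    corrected = ewma.update(sample)
print(f"smoothed wake-up offset: {corrected:.2f} ms")
```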
We start to identify some factors that influence the synchronization of the nodes and that should be considered in the design of time synchronization mechanisms for WSN. As the Network Time Protocol (NTP) protocol [8] is a synchronization protocol normally used in IP networks, we provide an overview of it and also describe synchronization protocols for WSN related to our work. Factors influencing time synchronization According to [9], some of the factors influencing time synchronization in large systems constituted for example by personals computers, also apply to sensor networks, where temperature, phase noise, frequency noise, asymmetric delays, clock glitches, and sensors constraints are examples of these factors. In the case of the temperature, since sensor nodes are deployed in various places, temperature variations throughout the day may cause the clock to speed up or slow down. In the case of the phase noise factor, some of its causes are due to fluctuations in the hardware interface, response variation of the operating system to interrupts, and jitter in the network. The frequency noise results from the instability of the clock crystal. In the asymmetric delay factor, the delay of the path from one node to another node may be different from the return path which may result in an asymmetric delay and may cause an offset to the clock, which may go undetected. Clock glitches are abrupt jumps in time, caused by hardware or software anomalies such as frequency and time steps. Finally, WSN nodes are constrained by nature because of limited resources (e.g., low in energy consumption, low in processing power, or low in memory). The transmission and reception of packets are the factors that cause more energy consumption in a sensor node. Therefore, a time synchronization protocol for sensor networks should help overcome the synchronization problems introduced by the factors described above, avoid frequent message exchanges, and be self-configurable. Network Time Protocol The NTP [8] is the synchronization protocol more often used in the Internet. This protocol includes several synchronization mechanisms that have been also adapted for developed WSN synchronization protocols. Reference-broadcast synchronization (RBS) [10], timing-sync protocol for sensor networks (TPSN) [11], lightweight tree-based synchronization (LTS) [12], and TSync [13] are some examples of these protocols. NTP is used to adjust the clock of each network node. This synchronization is achieved by using a hierarchical structure of time servers. The root node is synchronized with the Coordinated Universal Time (UTC). In each level of this hierarchy, the time server nodes synchronize the clocks of their subnetwork peers. NTP uses a two-way handshake between two nodes to estimate the delay between these nodes and compute the relative offset accordingly (see Fig. 1, where node s will synchronize himself with node r). However, NTP assumes that the transmission delay between two nodes is the same in both directions. This is reasonable for the Internet, but some of the characteristics of WSN make this assumption inadequate. NTP is useful to discipline the oscillators of the sensor nodes, but using it to connect to time servers may be impossible because of sensor node failures, which are frequent in WSN. Using a single clock reference to synchronize all the nodes could be a problem due to the variations in network delays. 
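For reference, the two-way handshake of Fig. 1 is normally used to estimate the clock offset and round-trip delay from the four timestamps t1 (request sent), t2 (request received), t3 (reply sent) and t4 (reply received), following the usual NTP convention. The short sketch below is our own code, not part of any NTP implementation; it assumes, as NTP does, that path delays are symmetric.

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Offset of the peer's clock relative to the local clock, and the
    round-trip delay, assuming symmetric one-way path delays."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: peer clock ~5 ms ahead, ~10 ms one-way delay, 1 ms processing time.
offset, delay = ntp_offset_delay(t1=100.0, t2=115.0, t3=116.0, t4=121.0)
print(offset, delay)   # -> 5.0 20.0
```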
Moreover, NTP requires intensive computing, requires a precise time server to synchronize the nodes, and does not consider the energy the nodes may spent to synchronize their clocks. All these problems may cause NTP to inaccurately measure delays and inaccurately estimate clock offsets. NTP two-way handshake mechanism Synchronization protocols for WSN WSN poses unique challenges in the design of synchronization protocols, which calls for specific synchronization solutions. An example is the effect of the broadcast wireless channel. However, wireless communication introduce random delays between two nodes. Let us consider Fig. 2, which represents a handshake scheme. The delay between two nodes is characterized by four components: (i) the sending delay (t send), (ii) the access delay (t acc), (iii) the propagation time (t prop), and (iv) the receiving delay (t recv). Synchronization delay between two nodes The handshake is initiated when node s issues a SYNC packet with the timestamp \(t_{1}^{s}\). Between the time the synchronization protocol issues the synchronization command and the time during which the SYNC packet is prepared, there is a delay, t send, resulting from the combination of operating system delays and transceiver delays on the node's hardware; t acc corresponds to the additional delay introduced by the wireless channel after the packet has been prepared and transferred to the transceiver. This delay depends on the MAC protocol when the node waits for accessing the channel; as an example, MAC protocols using CSMA introduce a significant amount of access delay when the channel is very occupied. t prop is the amount of time needed to transmit a SYNC packet to a receiver. Finally, t recv is the time required for the transceiver of the receiver node r to receive the packet and process it. The transmission delay, t tx , is a component of the receiving delay, which is important and characterized by the time needed for the SYNC packet to be completely received (see Fig. 2); it depends on the transmission rate and on the length of the SYNC packet. These components contribute to the overall communication delay, also referred as critical path. Delays are non-deterministic and create challenges when estimating clock offsets using the NTP's methods. Most of the synchronization protocols for WSN tend to minimize the effects of these delays, which are random. In what follows, four related existing synchronization protocols are described. Reference-broadcast synchronization: In Fig. 3, a sender-receiver handshake scheme is shown which introduces a significant amount of non-deterministic delay [10]. The RBS protocol tries to minimize the overall communication delay in the synchronization process. It eliminates the effect of the broadcast node. Instead of synchronizing the receiver with the sender, RBS synchronizes a set of receivers that are within the reference transmission of a sender. Considering that propagation times are negligible on wireless channels, as soon as a packet is transmitted, it is received at all sender's neighbors almost at the same time. Therefore, the synchronization may be improved if only the receivers are synchronized. As shown in Fig. 3, node 1 broadcasts m reference packets and each one of the receivers, within its broadcast range, records the time the packets are received. Then, the receiver nodes communicate with each other to estimate the offsets, just like the traditional synchronization. Figure 4 a shows the critical path for traditional synchronization. 
Sending delays and the access delays should be accurately estimated to improve the synchronization. Reference-broadcast synchronization does not involve node 1 in the synchronization; only the receivers (nodes 2, 3, 4, 5, 6, and 7) synchronize among themselves based on a reference-broadcast message from node 1. As shown in Fig. 4 b, this reduces the critical path duration. In fact, the possible origin of uncertainty in RBS is the time between when a broadcast packet is received and when it is completely processed. Reference broadcast. Node 1 broadcasts m messages which are used by the other nodes for synchronization purposes Critical paths for: a a pair of nodes and b RBS A method used to determine with efficiency the clock offset of each node in relation to its neighbors is the receiver-receiver synchronization method. By exchanging messages with each neighbor, a node fills a table consisting of relative offsets. Therefore, the main goal of RBS is not to correct the clocks of the nodes but, every time a packet is received, to translate its timestamp to the node's clock using the relative offset information. This synchronization method can only provide synchronization in a broadcast area. In order to provide multi-hop synchronization, RBS uses nodes that receive two or more different reference-broadcast messages. These nodes are called translation nodes, and they are used to translate the time between different broadcast domains (see Fig. 5). As it can be observed, nodes A, B, and C are, respectively, the transmitter, the receiver, and the translation nodes. The transmitter node broadcasts its timing messages, the receiver node receives those messages, and then the nodes synchronize with each other. RBS multi-hop synchronization scheme Timing-sync protocol for sensor networks [11]: TPSN uses some of the NTP concepts: it uses a hierarchical structure to synchronize the entire WSN to a single time server. TPSN uses the root node to synchronize all or part of the network, consisting of two phases: (1) the discovery phase, where the structure of TPSN is built, starting from the root node and (2) the synchronization phase, where pairwise synchronization is performed across the network. In (1), the root node is assigned to level 0 and the other nodes in the network are assigned to levels according to their distance to the root node (see Fig. 6). Synchronization architecture of TPSN Firstly, the root node starts to construct the TPSN structure. To this end, it broadcasts a special packet called level_discovery packet. In this structure, the first level is assigned to the number 0, which is the level of the root node. The other nodes that receive this packet are the nodes that belong to level 1. Afterward, these nodes broadcast their level_discovery packet. Then, the neighbor nodes receiving those packets are labeled as level 2 nodes, and the process is repeated until all the nodes in the network are assigned to a level. In (2), each node in the structure is synchronized with a node from a higher level. The root node sends another packet (the time_sync packet) which initializes the time synchronization process. Afterwards, the nodes in the next level start to synchronize with the root node by sending a synchronization_pulse to it, as shown in Fig. 7. In order to avoid collisions with other nodes, each node in level 1 waits for a random amount of time before transmitting the time_sync packet. 
After the reception of this packet, the root node sends an acknowledgment back to finish the synchronization process. In this way, the nodes belonging to level 1 of the structure are synchronized with the root node (see Fig. 7). This time_sync packet also serves as a synchronization_pulse for level 2 nodes. Upon the reception of this packet from a node in level 1, the nodes in level 2 wait for a random amount of time for the level 1 nodes to finish their synchronization. Then, they initialize the synchronization process by transmitting a synchronization_pulse. Acting like the level 0 root node, a level 1 node sends back an acknowledgment; the process continues until the nodes at all levels are synchronized and the entire network becomes synchronized. Two-way message handshake In TPSN, the receiver synchronizes with the local clock of the sender according to the two-way message handshake, as shown in Fig. 7. For this reason, TPSN is based on a sender-receiver synchronization method. The hierarchical structures created by TPSN are similar to the structures created by NTP. As in NTP, nodes may fail, causing other nodes to become unsynchronized. Also, node mobility can make the hierarchy useless, as nodes may move out of their levels; nodes at level n then cannot synchronize with nodes at level n−1 without additional and periodic synchronization. Lightweight tree-based synchronization [12]: LTS is similar to TPSN and follows two design approaches: centralized and distributed. The centralized design is based on the construction of a tree such that each node is synchronized to the root node. After the tree is constructed, the root initiates pairwise synchronization with its child nodes, and the synchronization is propagated along the tree to the leaf nodes. In the distributed design, LTS does not rely on the construction of a tree, and synchronization can be initiated by any node in the network. Each node performs synchronization only when it has a packet to send. To this end, each node is informed about its distance (in number of hops) to the reference node for synchronization, the desired accuracy, the clock drift, and a record of the time that has passed since it was last synchronized. Each node then adjusts its synchronization rate accordingly. Nodes farther from the reference node perform synchronization more frequently because synchronization accuracy is inversely proportional to distance. In general, LTS is based on message exchanges between two nodes to estimate the drift between their clocks. This scheme is named pairwise synchronization, and it is extended to multi-hop synchronization. In contrast to our proposed centralized and asynchronous synchronization mechanism, in [14], a synchronous protocol is proposed that provides a distributed strategy which guarantees convergence for any undirected connected communication graph. This strategy tries to control the nominal clock period and the clock offset based on information received from neighboring nodes in order to achieve synchronization. Moreover, when the underlying communication graph is known, the authors propose an optimal design strategy which can be used to study the effect of noise and external disturbances on the steady-state performance. There are additional works proposing and analyzing time synchronization mechanisms.
In [15], the authors use factor-graph methods for network clock estimation and propose two methods for message passing: belief propagation (BP) and mean field (MF). In [16], two joint synchronization and localization algorithms for both line-of-sight (LOS) and non-line-of-sight (NLOS) environments are proposed. The authors apply Taylor expansions in order to represent factor graphs in closed Gaussian forms, where the means and variances of the beliefs of node estimates can be easily obtained by simple arithmetic operations. In [17], the authors propose a global clock synchronization method by adopting a packet-based synchronization scheme. The proposed distributed algorithm requires communication only between neighboring sensors and computes a set of marginal distributions using BP message passing [18]. The authors observed that the state of the clock offset at any sensor depends directly only on its neighboring sensors and that the algorithm synchronizes clocks to a consistent reference value instead of adjusting clocks to an average value. In [19], WSN time synchronization follows two strategies: (i) maximum time synchronization (MTS) to simultaneously synchronize the skew and offset of each node when the communication delay is negligible and (ii) weighted maximum time synchronization (WMTS) when the communication delay between the nodes is random. In contrast to our work, in which we synchronize a virtual clock, these authors attempt to synchronize the clock skew in order to obtain acceptable synchronization accuracies. The main idea of MTS and WMTS is to drive all clocks to the maximum value in the network. In [19], random communication delays with a normal distribution are considered, while we validated our solutions against Gaussian and exponentially distributed delays. That solution can be classified as a distributed and asynchronous algorithm, whereas ours can be classified as centralized and asynchronous. Since synchronization is a widely studied topic, a survey of clock synchronization for wireless sensor networks is available in [20]. Wake-up mechanisms for WSN WSNs are energy limited, so the nodes typically cannot keep their radios active all the time and have to sleep and wake up periodically [21]. To address this issue, several MAC protocols have been proposed, categorized as synchronous or asynchronous. Although asynchronous protocols are simpler, they tend to consume more energy; in WSNs, where energy must be saved, a different approach may be used, and one possibility is to use synchronous methods. With these protocols, some techniques are adopted to increase node lifetime: (i) duty cycling and (ii) scheduled rendezvous. Duty cycling: This is a mechanism widely used in energy-efficient MAC protocols for WSN. A MAC protocol that implements duty cycling uses appropriate sleep/wake-up mechanisms to conserve energy; in [22], it is demonstrated that when sensor nodes remain in sleep mode, they consume less energy than when in idle mode. When there is no need for communication, the radio is put to sleep; although duty cycling conserves energy, it has some disadvantages. Putting sensors into sleep mode makes it difficult for the whole network, or at least part of it, to function. As shown in [23], a few issues need to be overcome, such as deciding when to switch a device to low-power mode and for how long the device should remain in that mode.
To solve these issues, efficient and flexible duty cycling techniques have been proposed; the S-MAC [24] and T-MAC [25] protocols are examples. These protocols transmit a SYNC packet to notify neighbors about their schedule and to synchronize the clocks of all nodes in the network. The method only compensates for clock offset and does not consider clock drift [21]. Moreover, knowledge of traffic patterns can also help in making wake-up decisions. This method is known as adaptive duty cycling. S-MAC [24] is one of the major energy-efficient MAC protocols that exploits the idea of adaptive duty cycling. It uses a periodic sleep/wake-up mechanism in order to lower power consumption. If a node has no packet to receive, it can waste a large amount of energy by just listening to the channel. Consequently, a node can save a significant amount of energy if it simply goes to sleep mode by switching off its radio [22]. T-MAC is an improvement over S-MAC duty cycling. In T-MAC, the listening period ends when no event has occurred for a time threshold TA. Though it improves on S-MAC, T-MAC has the disadvantage that it can face an early sleeping problem, where a node can go to sleep even though its neighbor may still have messages for it. Synchronization is also an issue in duty cycling MAC protocols. In [26], it is argued that synchronous MACs such as S-MAC have low energy consumption when sending packets but are complicated by the need for synchronization. Conversely, asynchronous MACs, for example WiseMAC [27], are very simple but spend much energy finding the neighbors' wake-up times. Moreover, synchronous methods can be characterized as one-way methods: usually, the senders broadcast a reference message and the receivers, upon the reception of the message, record the arrival time with their own clocks and exchange this information with each other to compensate for the clock offset between them. In [21], a synchronous method is proposed in which the clocks in the network are not modified. Instead, the nodes are synchronized with their own clocks. Since the periodic broadcast event in the network is the same for all nodes, even though each node measures this period independently in its own clock units, the nodes are able to interact with each other at the same physical time. Without complicating the estimation process and without modifying the clock of a node, this synchronization method becomes simpler and more energy-efficient than the traditional one-way synchronization method. Scheduled rendezvous: This type of MAC protocol requires a prescheduled rendezvous time at which neighboring nodes wake up simultaneously. In this method, a node wakes up periodically and sleeps until the next rendezvous time. A scheduled rendezvous scheme is shown in Fig. 8 [22]. A scheduled rendezvous scheme The advantage of this scheme is that when a node is awake, it is guaranteed that all its neighbors are awake as well. Consequently, it is easier to send and receive packets. Broadcasting a message to all neighbors is also simpler in scheduled rendezvous schemes. RI-MAC [28] is a receiver-initiated asynchronous duty cycle MAC protocol for WSN. It uses receiver-initiated data transmission in order to operate efficiently over a wide range of traffic loads. It attempts to minimize the time a sender and a receiver occupy the medium to find a rendezvous time for exchanging data, while still decoupling the sender's and receiver's duty cycle schedules.
A disadvantage of such MAC protocols is the requirement to maintain strict synchronization, because clock drift may deeply affect the rendezvous time. 6LowPAN/IPv6/RPL evaluations In [29], a cross-layering design for RPL which provides enhanced link estimation and efficient management of neighbor tables is proposed. The authors used AMI as a case study and employed the Cooja emulator to evaluate their proposal. They analyzed RPL together with the underlying X-MAC, ContikiMAC, and Nullrdc protocols from the reliability standpoint by considering packet loss, end-to-end delay, and energy consumption, and implemented a testbed using ContikiOS to validate their work. In [30], the performance of RPL used for multi-sink WSNs is evaluated considering the hop-count and/or ETX, packet loss, and energy consumption metrics. To validate the simulation results, the authors repeated the same tests on a real-life testbed. In both works [29, 30], the authors considered networks supporting a single application where the nodes join the network at the same time. The performance metrics they considered were packet loss, end-to-end packet delay, and energy consumption. In our work, the networks deployed support multiple applications and the sensor nodes join the network at different times, which demands a node synchronization mechanism. In order to characterize the performance of our system, we used a set of metrics including end-to-end packet delay, energy consumption, query success ratio, and fairness index. The query success ratio (QSR) quantifies the success of a sink node with respect to the reception of all the expected reply packets upon the transmission of a query packet; this metric allows us to see if all the nodes receive the query packets and if they reply back, and therefore makes it easy to verify packet loss. The fairness index metric is used to investigate whether the nodes have the same opportunity to reply back to the sink. We used ContikiOS/Cooja for the simulations and validated our work by implementing two testbeds. Application-driven WSN The application-driven WSN paradigm [6] assumes that each application defines its own network and set of nodes so that the exchange of information can be confined to the nodes associated with the application. The nodes share information about the applications they run and their duty cycles, and nodes are put to sleep when there is no activity related to their applications. When nodes receive a query packet, they know exactly when they must wake up in the next period. The nodes alternate between awake and sleep states, and the amount of time spent in each state is determined by the application duty cycle. When the wake-up time expires, the node switches to the sleep state, waking up again at the time computed by the synchronization mechanism proposed in Section 4 of this paper. We assume that every node can participate in route discovery and packet forwarding. However, the nodes forwarding a given type of data will be primarily selected from the set of nodes running the application to which the data is associated. For that purpose, each query packet includes information about the associated application (APPID), which is known by the nodes running that application. Our routing scheme tries to ensure that the data of an application is relayed mainly by the nodes running that application. When the sink node queries the other nodes running the same application, routing paths follow the directed acyclic graph (DAG) created.
This DAG is created and maintained by a change to the RPL protocol scheme which uses mainly the nodes running that application; in a first attempt, the nodes not associated with this application will not participate in the routing process. In our proposal, the subset of nodes running the same application forms a "subnetwork" with multi-hop connectivity, and application packets also carry information about the application duty cycle (T CYCLE and T ON) that is used to create and maintain the DAGs, in which not only the nodes running the same application but also the nodes having the same application duty cycle can be "grouped". Figure 9 a shows a network topology supporting two different applications. Figure 9 b shows the DAG created with standard RPL, and Fig. 9 c shows the DAG created by our proposed solution. The wake-up mechanism is based on the application's time cycle information (T CYCLE and T ON), carried by every application query sent by the sink nodes. When a node receives a query packet, it knows exactly when it must wake up in the next period. Application-driven WSN concept. a Network topology. b RPL DAG. c BMARQ-RPL DAG Application-driven synchronization mechanism According to our application-driven concept, synchronization is achieved between the nodes that run the same applications or between the nodes that have the same application duty cycle, by considering their duty cycles. Therefore, the first time a node joins the network, it waits for an application query packet to adjust its virtual clock to the time carried by the query packet. We realize that this corresponds to setting the nodes' time to a value which does not consider network delays but, as demonstrated in the paper, this has no impact on our synchronization mechanism, as the nodes dynamically adjust their sleeping offset (see the β·|δ k,n | component in Eq. 2) and wake up and sleep almost at the same time throughout the network lifetime. As such, the synchronization algorithm takes advantage of the application query packets that are sent by the sink nodes once in every application duty cycle to keep the sensor nodes synchronized. A network may support several applications, but only the nodes running the same application or having the same duty cycle will synchronize among themselves. Therefore, a network supporting different applications may have different sets of nodes with different synchronizations and still be fully functional. Without having to send or receive other types of packets for synchronization purposes, the nodes rely only on the queries received to synchronize. In fact, this algorithm is centralized on a sink node, but its design is simple and adequate for our purposes. A distributed design would be more complex and imply the use of other types of packets for synchronization, often broadcast through the network, which would impact energy consumption due to packet transmission and reception costs. It is unlikely that all the sensor nodes would join a network at the same time. Having the nodes active all the time would deplete their batteries, so the nodes have to go to sleep and wake up periodically. All the nodes have to be awake at almost the same times in order to receive sink queries and to forward them to the other nodes. As a result, the nodes must be synchronized according to the application cycle they run.
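As a minimal illustration of this wake-up rule, the sketch below derives a node's nominal sleep and wake times from the duty-cycle information (T_CYCLE, T_ON) carried by a query. It assumes, for simplicity, that the query timestamp marks the start of the awake period; the sleeping offset that compensates for variable network delays is added by the mechanism described next (Eq. 2).

```python
# Nominal wake/sleep schedule derived from the duty-cycle information carried
# in an application query. Assumes the query timestamp marks the start of the
# awake period; network delays are ignored here.

def nominal_schedule(t_query, t_cycle, t_on):
    t_off = t_cycle - t_on          # time spent sleeping in each cycle
    sleep_at = t_query + t_on       # go to sleep once T_ON has elapsed
    wake_at = sleep_at + t_off      # nominal start of the next duty cycle
    return sleep_at, wake_at

# Example with the 15-min cycle used later in the paper: T_CYCLE = 900 s, T_ON = 60 s.
print(nominal_schedule(t_query=0.0, t_cycle=900.0, t_on=60.0))   # (60.0, 900.0)
```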
In order to synchronize all the nodes in the network, our proposed synchronization mechanism uses a synchronous method which includes two phases: the synchronization setup phase and the synchronization maintenance phase, described below. The synchronization setup phase When a sensor node joins the network, it remains in the awake state and waits for the reception of its first query packet, sent by the sink node and forwarded by other nodes. Upon its reception, the node adjusts a virtual clock to the timestamp carried by the query. As can be observed from Fig. 10, the query packet sent by a sensor node n towards a sensor node n+1 is the same query packet that node n received from the sink node. The timestamp carried by the query is extracted from the query packet. This phase is also used to readjust the virtual clock; the periodicity of this readjustment determines how often the nodes readjust their virtual clocks. It is known that this phase corresponds to setting the nodes' time to a value which does not consider network delays. Synchronization setup phase In the example shown in Fig. 11 a, sink node A issues a query (Q k,j ) before sink node B. The query packet is disseminated through the network as expected using the RPL-BMARQ routing solution [7]. Sensor nodes C and D, which run this sink's application, set their virtual clocks to the timestamp carried by the packet. Sensor node E, not running this application, also sets its virtual clock to the timestamp carried by the query packet, since it is the first query it receives. The same query packet (Q k,j ) is then forwarded to the other sensor nodes (nodes G, H, and K), which will also set their virtual clocks to the same timestamp. Sensor node E will not forward the query packet Q k,j since it does not run this application and does not have neighbors running it. Similarly, node F, upon the reception of the first query packet (Q k,i ) from sink node B, and because it runs this sink's application, adjusts its virtual clock to the time carried by the sink B query packet. As sink B has already adjusted its virtual clock using the sink A timestamp, sensor node F will have the same time as the other nodes. Again, the query Q k,i will be forwarded to the other sensor nodes (nodes I, J, and L), which will perform the same virtual clock adjustment. Figure 11 b shows the same virtual clock adjustments, but in this case, it is sink node B that issues the first query packet and adjusts all the network nodes' virtual clocks. Example of node synchronization The synchronization maintenance phase Since all the nodes know the characteristics of the applications they run, after the reception of the first query packet, they expect to receive the second query packet by \(t^{\prime }_{2} = t_{1} + T_{\text {ON}} + T_{\text {OFF}}\). The time the nodes are sleeping (T OFF) is defined as T Cycle−T ON, where T Cycle is the application duty cycle time and T ON is the time the nodes are awake during each duty cycle. However, because network delays are variable, the nodes will receive this second query packet not at \(t^{\prime }_{2}\) but at t 2, as shown in Fig. 12. There is a difference between the expected value \(t^{\prime }_{2}\) and the real value t 2, δ 2=t′2−t 2. For example, if a node is expected to receive a query packet by \(t^{\prime }_{2} = 100\) and receives it at t 2=102, then δ 2=−2. A negative value means that the query was received late, and a positive value means that the query was received early.
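The next subsection formalizes this bookkeeping as Eqs. 1 and 2; as a preview, the sketch below performs one maintenance-phase update, with the α and β values adopted later in the paper and illustrative reception times.

```python
# One maintenance-phase step: EWMA update of the expected-vs-real reception
# difference (Eq. 1) and the resulting sleep time (Eq. 2). T_ON/T_OFF, alpha,
# and beta are values used later in the paper; the timestamps are invented.

T_ON, T_OFF = 60.0, 840.0
ALPHA, BETA = 0.125, 10.0

def maintenance_step(t_prev, t_recv, delta_prev):
    t_expected = t_prev + T_ON + T_OFF                                  # Eq. 1
    delta = (1 - ALPHA) * delta_prev + ALPHA * (t_expected - t_recv)    # Eq. 1
    t_sleep = T_OFF - BETA * abs(delta)                                 # Eq. 2
    return delta, t_sleep

# A query expected at t' = 1000 s arrives 2 s late (t = 1002 s):
delta, t_sleep = maintenance_step(t_prev=100.0, t_recv=1002.0, delta_prev=0.0)
print(delta, t_sleep)   # -0.25 837.5
```

The node thus sleeps slightly less than T_OFF, waking up a little before the next expected query, which is exactly the role of the sleeping offset β·|δ|.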
Moreover, delays are the sum of all per-hop delays for each sensor query packet reception and characterized by the sum of the processing and queueing delays in intermediate and destination sensor nodes, and the transmission delays and propagation delays in intermediate nodes. An in-depth characterization of these delays may be found in [31]. Synchronization maintenance phase Our proposed mechanism estimates δ k,n by using the exponentially weighted moving average (EWMA) technique (see Appendix). According to Fig. 12, the difference between the expected time to receive the next query and the time it is really received is computed by Eq. 1: $$ \begin{array}{rcl} t'_{k,n} & = & t_{k-1,n} + T_{\text{ON}} + T_{\text{OFF}} \\ \delta_{k,n} & = & (1 - \alpha) \cdot \delta_{k-1,n} + \alpha \cdot (t'_{k,n} - t_{k,n});\\ 0 &<\alpha <& 1 \\ \end{array} $$ where \(t^{\prime }_{k,n}\) is the expected packet reception time and t k,n is the real packet reception time. δ k,n is evaluated according to EWMA as in Eq. 3 with α reflecting the weight of the last observation. The δ k,n value is dynamically adjusted every time a node wakes and receives a query packet, and it is used to control the time the node would sleep in the next cycle, given by Eq. 2. $$ T_{\text{Sleep}_{k,n}} = T_{\text{OFF}} - \beta \cdot|\delta_{k,n}| $$ In Eq. 2, the β factor is used to amplify the δ k,n value to guarantee that the sensor node will wake some time before the next application cycle. β·|δ k,n | is the sleeping offset and represents the time the node will wake up before the start of the next application duty cycle. Algorithm 1 shows the pseudo-code of the application-driven synchronization mechanism with values given to α and β and to the virtual clock adjustment periodicity time (adjust_periodicity_time). In order to validate this mechanism, first we present a study on how the nodes can maintain their synchronization by estimating and evaluating the parameters presented in Eqs. 1 and 2, which corresponds to investigate in depth the synchronization maintenance phase. We also present and discuss results from the proposed synchronization mechanism using different values for α and β parameters, and query success ratio (QSR) results from simulations, and finally present and discuss some of the results obtained from two real testbeds. The QSR metric is defined as the ratio between the number of reply packets received by a sink node in response to a query packet and the number of replies the sink expects to receive. Basic simulation of the synchronization mechanism The node synchronization mechanism was evaluated considering the following probabilistic distribution of network delays: (1) uniform distribution, (2) Gaussian distribution, and (3) exponential distribution. Figure 13 shows one sink node and three sensor nodes. The sink node transmits queries regularly. Each query time reception is affected by those different network delays, and the sensor nodes upon their reception will adjust their sleep time in order to try to wake up at same time on the next application duty cycle. For each node, different mean delays were considered: sensor node 1, 0.5 s; sensor node 2, 1 s; and sensor node 3, 2 s. WSN delay model A Python program was written in order to randomly generate different network delay distributions. The program generates 105 queries, uses Eq. 1 to estimate the new expected query reception time by each node, and uses it to adjust the time each node must sleep (Eq. 
2) in order to wake up on time for the next application cycle. Finally, the program computes how many times the nodes wake up simultaneously. We consider that the nodes are simultaneously awake if the three sensors are awake together for at least Δ=80%·T ON. Let us also define \(T_{\text {Sensors}_{\text {ON}}}\) as a random variable which captures the time during which the three sensors are simultaneously in the ON state, having values \(T_{\text {Sensors}_{\text {ON}}} \in [0, T_{\text {ON}}]\) s (see Fig. 14). An occurrence of \(T_{\text {Sensors}_{\text {ON}}}\) is computed as the time at which the first sensor goes to sleep minus the time at which the last sensor wakes up. Nodes adjustment of T ON simultaneity In a first attempt, for α in Eq. 1, the value was set to 0.125, following current IETF recommendations for managing TCP timers [32], and for Eq. 2, the β value was empirically set to 10. All the sensor nodes wake up every 15 min and remain awake for 1 min (T ON=60 s and T OFF=840 s). Figure 15 shows results for the first situation evaluated—uniformly distributed network delays, with delays varying between ±20%×0.5 s, ±20%×1.0 s, and ±20%×2.0 s. In Fig. 15 a, one can see the histogram of randomly generated delays; Fig. 15 b shows \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram. We can observe that \(P[T_{\text {Sensors}_{\text {ON}}}\geq \Delta ]=1\). In fact, it is verified that \(T_{\text {Sensors}_{\text {ON}}}\in [57.88, 59.66]~\)s and that the mean value \(E[T_{\text {Sensors}_{\text {ON}}}]=58.7~\)s. In this situation, the nodes maintain synchronism in all the cycles. Uniformly distributed network delays. a Delay histogram. b \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram Figure 16 shows results for the second situation evaluated—Gaussian distributed network delays, with delays having a standard deviation equal to 20% of the mean values, which are 0.5, 1.0, and 2.0 s, respectively. In Fig. 16 a, we can observe the histogram of randomly generated delays; Fig. 16 b shows \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram. As can be observed, the δ k,n factor from Eq. 2 also affects the time each node must sleep (\(T_{\text {Sleep}_{k,n}}\)). Similar to the previous case, \(P[T_{\text {Sensors}_{\text {ON}}}\geq \Delta ]=1\), and the mean value is \(E[T_{\text {Sensors}_{\text {ON}}}]=58.77\) s. In this situation, the nodes also maintain synchronism in every application cycle. Gaussian distributed network delays. a Delay histogram. b \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram Figure 17 shows results from the last situation evaluated—exponentially distributed network delays, with mean delays targeting 0.5, 1.0, and 2.0 s, respectively. In Fig. 17 a, the histogram is shown and, as expected, there are variations; Fig. 17 b shows \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram. As can be observed, there are situations where the success condition is not satisfied. In this case, \(E[T_{\text {Sensors}_{\text {ON}}}]=57.52\) s and \(T_{\text {Sensors}_{\text {ON}}}\in [25.06,59.99]\) s, meaning that the nodes maintain synchronism in about 99% of the cycles. Exponentially distributed network delays. a Delay histogram.
b \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram Finally, Fig. 18 shows the box plot for the β·|δ k,n | component, which corresponds to the amount of time the nodes use to adjust their sleep timers in order to wake up in synchronism in the next cycle. The worst mean value of the β·|δ k,n | component is 1.11 s, and it corresponds to the exponential distribution, which means that a node will not sleep for T OFF s but, on average, for T OFF−1.11 s. Moreover, the results showed that the condition \(T_{\text {Sensors}_{\text {ON}}}\geq \Delta\) is satisfied in 99% of the occurrences, which means that all the considered nodes will be active at the same time during at least Δ=80%·T ON in 99% of the application's duty cycles. Box plot for β·|δ k,n |, the sleeping offset represented in Eq. 2 The box plot figures in this paper give the standard metrics: the box spans the 25th to the 75th percentile, and the red line is the median value. The top and bottom of the whiskers show the maximum and minimum values, respectively. Finally, the black dashed line in the box represents the mean value. From this analysis, we may conclude that the synchronization mechanism may be adequate for our purposes. In order to increase the trust in these results, a sensitivity analysis is also carried out, in order to understand how \(T_{\text {Sensors}_{\text {ON}}}\) is affected by different values of α and β. α and β values estimation We performed studies using different values for the synchronization mechanism parameters α and β. We considered four sensor nodes and assumed uniformly distributed delays varying within ±20%×0.5 s, ±20%×1.0 s, and ±20%×2.0 s. Figures 19, 20, 21, 22, 23, 24, 25, 26, and 27 show the results obtained when considering different values for the α and β parameters. Each of these figures presents: (a) the \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram; (b) the box plot for \(T_{\text {Sensors}_{\text {ON}}}\) (in % of T ON); and (c) the box plot for β·|δ k,n |. Uniformly distributed network delays with α=0.125; β=10. a \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram. b Box plot for \(T_{\text {Sensors}_{\text {ON}}}\) (% of T ON). c Box plot for β·|δ k,n | Uniformly distributed network delays with α=0.125; β=100. a \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram. b Box plot for \(T_{\text {Sensors}_{\text {ON}}}\) (% of T ON). c Box plot for β·|δ k,n | Uniformly distributed network delays with α=0.50; β=10. a \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram. b Box plot for \(T_{\text {Sensors}_{\text {ON}}}\) (% of T ON). c Box plot for β·|δ k,n | Uniformly distributed network delays with α=0.50; β=100. a \(T_{\text {Sensors}_{\text {ON}}}\)'s histogram. b Box plot for \(T_{\text {Sensors}_{\text {ON}}}\) (% of T ON). c Box plot for β·|δ k,n | In the sensitivity analysis shown below, we select discrete sets of values for α and β, α∈{0.125, 0.5, 0.875} and β∈{1, 10, 50, 100}. We vary one parameter at a time while maintaining the other constant. α estimation: α is the weight given to the last sample in the calculation of δ.
Therefore, we want to investigate how it affects the synchronization mechanism by giving α different values, namely 0.125, 0.50, and 0.875. β estimation: since the δ k,n value from Eq. 1 is small, we amplify it. The amplifying factor is the β parameter, for which we selected three values, β∈{10, 50, 100}. Figures 19, 20, 21, 22, 23, 24, 25, 26, and 27 show the results obtained for the different combinations of parameter values. Table 1 summarizes them, showing: (a) the α and β values, (b) the \(E[T_{\text {Sensors}_{\text {ON}}}]\) value, (c) the average \(E[T_{\text {Sensors}_{\text {ON}}}]\) time in % of T ON, and (d) the E[β·|δ k,n |] component, the resulting sleeping offset. Table 1 Summary of synchronization mechanism results as a function of α and β For the selection of the α and β values, we considered the values that satisfy at the same time: (i) values of \(T_{\text {Sensors}_{\text {ON}}}\) in % of T ON above 80% and (ii) the lowest β·|δ k,n | component value. Italicized values correspond to the ones that best satisfy our purposes. Results discussion This analysis of the results showed that not all the values chosen for the α and β parameters satisfy our synchronization mechanism requirements. In fact, if we consider α=0.50 and β∈{50, 100}, the mechanism will fail because the probability \(P[T_{\text {Sensors}_{\text {ON}}}\geq \Delta ] < 1\) (see Figs. 23 and 24, respectively), which means that the sensor nodes will not be synchronized in all their duty cycles. The same applies if we consider α=0.875 and β∈{50, 100}, as shown in Figs. 26 and 27. From Figs. 26 c and 27 c, we can observe that there are occurrences of \(T_{\text {Sensors}_{\text {ON}}}\) below 80% of T ON, the threshold established for success, even though the average is equal to 97.82% of T ON. Therefore, those values do not satisfy our selection criteria. From the other values evaluated, we may consider that α=0.125 and β=10 are the values that best satisfy our purposes for the scenarios considered. Compared to the other pairs of values for α and β, these values present at the same time (i) the greatest \(T_{\text {Sensors}_{\text {ON}}}\) value (58.7 s), which is almost the theoretical value of T ON; (ii) all the occurrences of \(T_{\text {Sensors}_{\text {ON}}}\) in terms of % of T ON above 97%, being on average equal to 97.82%; and (iii) the β·|δ k,n | component with, on average, the lowest value (0.149 s), which means that the sensor nodes have to wake up less time before the next duty cycle than in the other cases. This has an impact on energy consumption, since the sensor nodes do not have to stay awake unnecessarily. In [6] and in [7], two different applications were used in three different scenarios, with the nodes distributed as shown in Fig. 28. Simulations ran in Contiki's Cooja simulator [33]. All the nodes are within a distance of 25 m for a transmission range of 30 m and support one of the two applications. Each application runs in eight nodes, and each node runs a single application. In scenario 1, the nodes running App. A were selected in a way that a long path could be obtained; in scenario 2, both applications have the same node distribution; scenario 3 is used to investigate situations where at least one node from the other application is required to relay data. Let us, for example, consider Fig. 28 c.
In this scenario, we can observe that node 9 routes/forwards packets of an application that it does not run. In the scenarios simulated, the sink nodes are always awake, and the sink node running App. B (node 9) was chosen as the network DAG root because of its application duty cycle. For the nodes running application A, T ON=60 s and T OFF=3540 s; for the nodes running application B, T ON=60 s and T OFF=840 s. Scenarios simulated. a Scenario 1. b Scenario 2. c Scenario 3 We simulated two situations: (i) a situation where all the nodes join the network at the same time, so that the proposed synchronization mechanism is not used since, in simulations with COOJA, clock drift is the same for all the sensor nodes, and (ii) a situation where the nodes join the network at different times. The latter implies the use of the synchronization mechanism described in Section 4 in order to keep the nodes synchronized with respect to the applications they run. The nodes join the network at different times, which were randomly generated between 317 and 1102 s. In [34], the authors noticed timing inaccuracies in comparison to experiments performed on TelosB mote hardware. Their simulations showed unexplained delays during packet transmission (TX) over the radio medium that were not observed during similar experiments on physical motes. According to their investigations, they discovered that the problem lies in the emulation of MSP430-powered, radio-enabled WSN motes by the MSPSim software package when loading packet data into the transmission buffer. The emulated mote performs this TX buffer loading at a different speed than the actual hardware. This may result in inexact simulation results. Nevertheless, the authors argue that, for the WSN application studied, time precision is not a key issue since the applications are not real-time critical. The authors selected the TelosB hardware platform and ContikiOS/Cooja because there is no need to write the code twice, since it is the same for physical motes and emulated motes, and because the TelosB platform is the most used platform in academia. This timing inaccuracy has no impact on our synchronization mechanism. The EWMA technique used to control the synchronization of the sensor nodes also accounts for the resulting unexplained delays during packet transmission when estimating the arrival of the next query packet, and the simulation and testbed results show that the synchronization mechanism performs well under different network delays. In our solution, each time a node receives a query, it computes the time it must wake up before the start of the next application cycle in order to be able to receive and forward packets and to successfully reply back to the sink. The synchronization mechanism was configured with α=0.125 and β=10. In the simulations, 16 nodes were used, half of them running each application. Each scenario was simulated ten times, and information was extracted in order to estimate delays, QSR, and the E[β·|δ k,n |] component. The results obtained are the following. We considered the delay as the sum of all per-hop delays for each sensor query packet reception, characterized by the sum of the processing and queueing delays in intermediate and destination sensor nodes and the transmission and propagation delays in intermediate nodes. Per-hop β·|δ k,n | component: From the simulations, we have extracted information about the β·|δ k,n | component on a per-hop basis. Fig. 29 shows the box plot for this component in scenario 1.
We can observe that, except for the first hop, this component presents similar values per hop, and the sensor nodes would have to wake up with an average sleeping offset of about 0.232 s. In the first hop, the sleeping offset has a greater value (0.49 s on average) because in this hop we can observe some congestion, particularly between the sink node (node 1) and sensor node 2. Box plot for β·|δ k,n |, with α=0.125 and β=10, for each hop in scenario 1 Figure 30 shows the box plot for the β·|δ k,n | component in scenario 2. As in scenario 1, we can also see that this component presents similar values per hop, with an average sleeping offset of about 0.176 s. Finally, Fig. 31 shows the box plot for the β·|δ k,n | component in scenario 3. As in the other two scenarios, we observed that this component presents similar values per hop, on average about 0.242 s. The sleeping offset for two-hop nodes has, on average, a greater value (0.35 s). Analyzing this scenario's topology and the traffic that may occur, we can observe some congestion around sensor nodes 3 and 13: sensor node 3 needs to forward replies from sensor nodes 4, 7, and 8, and sensor node 13 forwards replies from sensor nodes 11, 12, 14, 15, and 16. However, this sleeping offset can also be considered negligible, as the additional value is small. In Fig. 32, the box plot for the expected sleeping offset value for each of the three scenarios studied is shown. As can be verified, the nodes do not sleep for the full T OFF time but, on average, for T OFF−0.716 s. Moreover, we observed that, independently of the network topology, this component has almost the same values, which confirms that the synchronization mechanism proposed is adequate for our purposes. Moreover, comparing the results from Figs. 34 and 35, we observe that for scenarios 2 and 3, the maximum and minimum β·|δ k,n | values are different. In scenario 2, the nodes have more neighbors running the same application, which implies that each of them may need more time to access the wireless medium to forward a query. This is also reflected in the network delays and affects the β·|δ k,n | component. Box plot for β·|δ k,n |, with α=0.125 and β=10, for each scenario (in s) QSR: Fig. 33 shows the box plot for QSR. This figure shows (1) the results using the standard RPL routing protocol, (2) the results using the RPL-BMARQ solution proposed in [7] without the synchronization mechanism implemented, and (3) the results using the same RPL-BMARQ solution fully implemented. As can be observed, on average, 98.8% of the queries sent by the sinks are answered by the sensor nodes. With this success ratio, we can argue that the quality of the proposed synchronization mechanism is confirmed. Mean query success ratio—QSR for each scenario (in %) Testbed experiments In order to confirm the results obtained from the theoretical studies and simulations, we also tested our proposed solution in a real environment. For that purpose, two of the scenarios studied were selected (scenarios 1 and 3) and deployed. Since it was not possible to reproduce them at the same scale, the scenarios deployed correspond to a 3×3 square lattice topology, while keeping all the other functionalities. In order to obtain reliable terms of comparison, we have simulated these deployments using the same methods as in Section 5.2 and compared the simulated results with those obtained in the testbeds.
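For reference, the snippet below is a minimal sketch of how the QSR metric defined earlier might be computed from logged queries and replies; the log structure and counts are invented for illustration (seven responding nodes per application, as in the simulated scenarios).

```python
# Query success ratio: replies received by a sink divided by the replies it
# expected. The query ids, reply counts, and node count below are invented.

def query_success_ratio(query_ids, replies_per_query, expected_per_query):
    expected = len(query_ids) * expected_per_query
    received = sum(replies_per_query.get(q, 0) for q in query_ids)
    return received / expected if expected else 0.0

# Three queries to an application with seven responding sensor nodes,
# one of which misses a reply to the third query:
qsr = query_success_ratio([1, 2, 3], {1: 7, 2: 7, 3: 6}, expected_per_query=7)
print(round(100 * qsr, 1))   # 95.2 (%)
```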
Figure 34 shows both topologies deployed and simulated, which were realized using TelosB motes [35] placed at distances of 5 m; the radio transmission power was reduced to −7 dBm in order to reduce the node reception distance. Application A runs in five nodes (1, 2, 3, 4, and 5), and application B runs in four nodes (9, 10, 11, and 12). Node 9 is, at the same time, the root of the DAGs and a sink. Node 1 is the other sink. The nodes ran ContikiOS (2.6) [36], an operating system for WSN that incorporates an implementation of the IPv6 protocol stack and uses RPL as the default routing protocol. Scenarios deployed. a Deployment 1. b Deployment 2 β·|δ k,n |, with α=0.125 and β=10, histogram for each deployment (in %) Each testbed experiment was carried out for 4 h. To log real-time data, two Raspberry Pi [37] platforms were used, connected to the sink nodes via serial connections. A Python program running on each Raspberry Pi was responsible for collecting timestamp data from each sink with respect to the query packets sent and the reply packets received. In order to verify our proposed synchronization mechanism, we considered in this work (i) the synchronization parameter values α=0.125 and β=10; (ii) the packet reception times on the sink node side to estimate the expected reception time and to compute the sleeping offset component (β·|δ k,n |); and (iii) the QSR results. The main results obtained include the following: β·|δ k,n | component: Figure 35 shows the histogram of the β·|δ k,n | component, the sleeping offset represented in Eq. 2, for each deployment. As expected, it presents the same uniform distribution characteristics as the theoretical evaluation and the simulations performed. Moreover, we can see in Fig. 36 that this component presents, on average, a sleeping offset of 0.185 s. Box plot for β·|δ k,n |, with α=0.125 and β=10, for each deployment (in s) Figure 37 shows the simulation and real implementation results. As can be seen, both present the same values (100%), which means that also in the real testbeds the nodes reply to all the queries sent by the sinks, going to sleep and waking up while remaining synchronized. Query success ratio (QSR) for the scenarios deployed (in %) From the above results, we can conclude that there are no major differences between what was observed in the theoretical studies and in the simulation environment and what was observed in the testbed environment. This confirms the usability and the quality of the synchronization mechanism proposed when applied to application-driven WSNs with the characteristics described in this work. This paper proposed an application-driven WSN synchronization mechanism that uses the EWMA technique to maintain the synchronization of all the nodes in WSNs defined by the applications they run. The paper presents and discusses the performance of the synchronization mechanism for sensor devices using IEEE 802.15.4 radios. The work presented reflects our analysis of the mechanism, which assumes that the nodes are affected by different network delay distributions. The mechanism allows the nodes to go to sleep and wake up in synchronism. The mechanism was evaluated by means of simulations using ContikiOS and Cooja, which confirmed its functionality. Finally, real testbed experiments confirmed our simulation results, showing that the mechanism also works in real applications.
Exponentially weighted moving average The exponentially weighted moving average (EWMA) [38] is a technique used for calculating a run-time average characterized by giving less and less weight to data as they get older and older. EWMA is easily plotted and may be also viewed as a forecast for the next observation. The EWMA equals the present predicted value plus lambda times the present observed error of prediction, $$ \text{EWMA} = \hat{y}_{t} + \lambda(y_{t}-\hat{y}_{t}) $$ where \(\hat {y}_{t}\) is the predicted value at time t (the old EWMA), y t is the observed value at time t, \(y_{t}-\hat {y}_{t}\) is the observed error at time t, and λ is a constant (0<λ<1) that determines the depth of memory of the EWMA. Equation 3 can be written as $$ \hat{y}_{t+1} = \lambda y_{t} + (1 - \lambda)\hat{y}_{t} $$ EWMA statistics are currently used, for instance, by TCP to recover from undelivered segments; the mechanism is based on [39] and EWMA is used to estimate the timeout value that depends on the round trip time. IEEE-Computer-Society, IEEE Std 802.15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (WPANs) (2006). Revision of IEEE Std 802.15.4-2003. G Montenegro, N Kushalnagar, D Culler, Transmission of IPv6 Packets over IEEE 802.15.4 Networks (2007). IETF. CM Tang, Y Zhang, YP Wu, in Instrumentation, Measurement, Computer, Communication and Control (IMCCC) 2012 Second International Conference on. The P2P-RPL Routing Protocol Research and Implementation in Contiki Operating System, (2012), pp. 1472–1475. doi:10.1109/IMCCC.2012.345. T Winter, P Thubert, A Brandt, J Hui, R Kelsey, P Levis, K Pister, R Struik, J Vasseur, R Alexander, RPL: IPv6 Routing Protocol for Low-Power and Lossy Networks. RFC 6550 (Proposed Standard) (2012). http://www.ietf.org/rfc/rfc6550.txt. IF Akyildiz, MC Vuran, Time Synchronization (John Wiley and Sons, Ltd, 2010). doi:http://dx.doi.org/10.1002/9780470515181.ch11. BF Marques, MP Ricardo, in Ad Hoc Networking Workshop (MED-HOC-NET) 2014 13th Annual Mediterranean. Improving the energy efficiency of WSN by using application-layer topologies to constrain RPL-defined routing trees, (2014), pp. 126–133. doi:10.1109/MedHocNet.2014.6849114. B Marques, M Ricardo, in Wireless Networks, The Journal of Mobile Communication, Computation and Information. Energy-efficient node selection in application-driven WSN, (2016). doi:10.1007/s11276-016-1194-2 http://link.springer.com/article/10.1007%2Fs11276-016-1194-2. DL Mills. IEEE Trans. Commun. 39:, 1482 (1991). W Su, I Akyildiz, Time-diffusion synchronization protocol for wireless sensor networks. IEEE/ACM Trans. Networking. 13(2), 384 (2005). doi:10.1109/TNET.2004.842228. J Elson, L Girod, D Estrin, Fine-grained network time synchronization using reference broadcasts. SIGOPS Oper. Syst. Rev. 36(SI), 2002. doi:10.1145/844128.844143. http://doi.acm.org/10.1145/844128.844143. S Ganeriwal, R Kumar, MB Srivastava, in Proceedings of the 1st International Conference on Embedded Networked Sensor Systems. SenSys '03. Timing-sync protocol for sensor networks (ACMNew York, 2003), pp. 138–149. doi:10.1145/958491.958508. http://doi.acm.org/10.1145/958491.958508. J van Greunen, J Rabaey, in Proceedings of the 2Nd ACM International Conference on Wireless Sensor Networks and Applications. WSNA '03. Information assurance in sensor networks (ACMNew York, 2003), pp. 11–19. doi:10.1145/941350.941353. http://doi.acm.org/10.1145/941350.941353. 
H Dai, R Han, TSync: a lightweight bidirectional time synchronization service for wireless sensor networks. SIGMOBILE Mob. Comput. Commun. Rev. 8(1), 125 (2004). doi:10.1145/980159.980173. http://doi.acm.org/10.1145/980159.980173. R Carli, A Chiuso, L Schenato, S Zampieri, Optimal Synchronization for Networks of Noisy Double Integrators. IEEE Trans. Autom. Control. 56(5) (2011). doi:10.1109/TAC.2011.2107051. B Etzlinger, H Wymeersch, A Springer, Cooperative Synchronization in Wireless Networks. IEEE Trans. Signal. Process. 62(11), 2837 (2014). doi:10.1109/TSP.2014.2313531. W Yuan, N Wu, B Etzlinger, H Wang, J Kuang, Cooperative Joint Localization and Clock Synchronization Based on Gaussian Message Passing in Asynchronous Wireless Networks. IEEE Trans. Veh. Technol. 65(9), 7258 (2016). doi:10.1109/TVT.2016.2518185. M Leng, YC Wu, Distributed Clock Synchronization for Wireless Sensor Networks Using Belief Propagation. IEEE Trans. Signal Process. 59(11), 5404 (2011). doi:10.1109/TSP.2011.2162832. J Pearl. Artif. Intell. 29(3), 241 (1986). J He, P Cheng, L Shi, J Chen, Y Sun, Time Synchronization in WSNs: a Maximum-Value-Based Consensus Approach. IEEE Trans. Autom. Control. 59(3), 660 (2014). doi:10.1109/TAC.2013.2286893. B Sundararaman, U Buy, AD Kshemkalyani. Ad Hoc Netw. 3(3), 281 (2005). T Ma, Z Xu, M Hempel, D Peng, H Sharif, Performance Analysis of a Novel Low-Complexity High-Precision Timing Synchronization Method for Wireless Sensor Networks. IEEE Trans. Wirel. Commun. 13(9), 4758 (2014). doi:10.1109/TWC.2014.2331286. M Al Ameen, SMR Islam, K Kwak. Int. J. Distrib. Sensor Netw. 2010:, 16 (2010). doi:http://dx.doi.org/10.1155/2010/163413%163413. M Miller, N Vaidya, A MAC protocol to reduce sensor network energy consumption using a wakeup radio. IEEE Trans. Mob. Comput. 4(3), 228 (2005). doi:10.1109/TMC.2005.31. W Ye, J Heidemann, D Estrin, in INFOCOM 2002. Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE, 3. An energy-efficient, MAC protocol for wireless sensor networks, (2002), pp. 1567–1576. doi:10.1109/INFCOM.2002.1019408. T van Dam, K Langendoen, in Proceedings of the 1st International Conference on Embedded Networked Sensor Systems. SenSys '03. An adaptive energy-efficient, MAC protocol for wireless sensor networks (ACM, 2003), pp. 171–180. doi:10.1145/958491.958512. http://doi.acm.org/10.1145/958491.958512. W Pak, KT Cho, J Lee, S Bahk, in Global Telecommunications Conference 2008. IEEE GLOBECOM 2008. W-MAC: Supporting Ultra, Low Duty Cycle in Wireless Sensor Networks (IEEE, 2008), pp. 1–5. doi:10.1109/GLOCOM.2008.ECP.79. A El-hoiydi, Jd Decotignie, in 9th International Symposium on Computers and Communications (ISCC '04), (2004). Y Sun, O Gurewitz, DB Johnson, in Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems. SenSys '08. RI-MAC: a receiver-initiated asynchronous duty cycle, MAC protocol for dynamic traffic loads in wireless sensor networks (ACMNew York, 2008), pp. 1–14. doi:10.1145/1460412.1460414. http://doi.acm.org/10.1145/1460412.1460414. E Ancillotti, R Bruno, M Conti, Reliable Data Delivery with the IETF Routing Protocol for Low-Power and Lossy Networks. IEEE Trans. Ind. Inform. 10(3) (2014). doi:10.1109/TII.2014.2332117. D Carels, N Derdaele, ED Poorter, W Vandenberghe, I Moerman, P Demeester, Support of multiple sinks via a virtual root for the RPL routing protocol. EURASIP J. Wirel. Commun. Netw. 2014(1), 91 (2014). doi:http://dx.doi.org/10.1186/1687-1499-2014-91. 
P Pinto, A Pinto, M Ricardo, in Wireless Days (WD) 2013 IFIP. End-to-end delay estimation using RPL metrics in WSN, (2013), pp. 1–6. doi:10.1109/WD.2013.6686524. V Paxson, M Allman, HJ Chu, M Sargent, Computing TCP's Retransmission Timer (2011). http://tools.ietf.org/html/rfc6298. F Osterlind, A Dunkels, J Eriksson, N Finne, T Voigt, in Local Computer Networks, Proceedings 2006 31st IEEE Conference on. Cross-Level Sensor Network Simulation with COOJA, (2006), pp. 641–648. doi:10.1109/LCN.2006.322172. K Roussel, YQ Song, O Zendra, in EWSN 2016 - NextMote workshop, ed. by K Roemer. ACM EWSN 2016 - NextMote workshop (Junction Publishing, Graz, Austria, 2016), pp. 319–324. https://hal.inria.fr/hal-01240986. Crossbow TelosB. http://www.memsic.com/userfiles/files/Datasheets/WSN/6020-0094-02_B_TELOSB.pdf. A Dunkels, Contiki OS, open source, highly portable, multi-tasking operating system for memory-efficient networked embedded systems and wireless sensor networks (2013). http://www.contiki-os.org. Raspberry Pi (2014). http://www.raspberrypi.org. JS Hunter. Qual. Technol. 18:, 203 (1986). V Jacobson, Congestion avoidance and control. SIGCOMM Comput. Commun. Rev. 18(4), 314 (1988). doi:10.1145/52325.52356. http://doi.acm.org/10.1145/52325.52356. This work was financed by the Project "NORTE-07-0124-FEDER-000056" by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework (NSRF), through the European Regional Development Fund (ERDF), and by national funds, through the Portuguese funding agency, Fundação para a Ciência e a Tecnologia (FCT), within the fellowship "SFRH/BD/36221/2007". The authors would also like to thank the Faculty of Engineering, University of Porto, INESC TEC, and the School of Technology and Management of Viseu for their support. The authors declare that the work does not contain any studies with human participants or animals performed by any of the authors; the work has not been published before (except in the form of an abstract or as part of a published lecture, review, or thesis); the work is not under consideration elsewhere; copyright has not been breached in seeking its publication; and that the publication has been approved by all co-authors and responsible authorities at the institute or organization where the work has been carried out. Bruno Marques received in 2017 a PhD degree in Electrical and Computers Engineering from the University of Porto. He is an Adjunct Professor at the School of Technology and Management of Viseu, where he gives courses in industrial communications and computer networks. He is also an invited collaborator at the Centre for Telecommunications and Multimedia of the INESC TEC research institute. Manuel Ricardo received in 2000 a PhD degree in Electrical and Computers Engineering from Porto University. He is an associate professor at the Faculty of Engineering of the University of Porto, where he gives courses in mobile communications and computer networks. He also leads the Centre for Telecommunications and Multimedia of the INESC TEC research institute. Affiliations: Departamento Engenharia Eletrotécnica, Escola Superior de Tecnologia e Gestão, Instituto Superior Politécnico de Viseu, Viseu, Portugal; INESC TEC, Faculdade de Engenharia, Universidade do Porto, Porto, Portugal. Correspondence to Bruno Marques. Marques, B., Ricardo, M.
Marques, B., Ricardo, M. Synchronization of application-driven WSN. J Wireless Com Network 2017, 37 (2017). doi:10.1186/s13638-017-0821-7

Keywords: Wireless sensor network (WSN), ContikiRPL, Nodes synchronization, EWMA
Symbolic Math + A Probability Monad¶ You can do some fun things if you replace the floats in a probability monad with symbolic representations. As long as the computation is exact over the description, you can operate without any need to resort to a numeric representation (as would be the case for a sampling based approach). This yields something quite interesting that could be of pedagogical or even exploratory interest. In effect, since running the computation builds a mathematical formula, we get to observe how probabilities accumulate down each branch into a node (without explicitly instantiating a tree visualization). The result is something that's more general than a simulation but less general than an equation. Simple Coin Flipping Examples¶ In this example we look at 3 flips of a coin that's heads with probability $p$. In [2]: let rec coinflip flips n = cont{ if n = 0 then return flips let! flip = bernoulliChoice p ("H","T") return! coinflip (flip+flips) (n-1) } Model(coinflip "" 3).Reify() |> latexTable Out[2]: $$\begin{array} \hline Probability & Value \\ \hline {p}^{3} & HHH \\ \hline {p}^{2}\left(1 - p\right) & HHT \\ \hline {p}^{2}\left(1 - p\right) & HTH \\ \hline p{\left(1 - p\right)}^{2} & HTT \\ \hline {p}^{2}\left(1 - p\right) & THH \\ \hline p{\left(1 - p\right)}^{2} & THT \\ \hline p{\left(1 - p\right)}^{2} & TTH \\ \hline {\left(1 - p\right)}^{3} & TTT \\ \hline\end{array}$$ The next example asks: what is the chance of at least n heads in m coin flips where heads has a probability $p$. Sound familiar? It's like the binomial distribution except you have a limited number of trials. let hasHeads count = List.filter ((=) "H") >> List.length >> ((<=) count) let rec atleast_n_in_m_flips count flips n = cont{ if n = 0 then if hasHeads count flips then return true else return false else let! flip = bernoulliChoice p ("H","T") return! 
atleast_n_in_m_flips count (flip::flips) (n-1) } Model(atleast_n_in_m_flips 2 [] 2).Reify() $$\begin{array} \hline Probability & Value \\ \hline 2p\left(1 - p\right) + {\left(1 - p\right)}^{2} & False \\ \hline {p}^{2} & True \\ \hline\end{array}$$ One can try and work out if there is some kind of general pattern or at least gain some idea towards the general behavior of model: $$\begin{array} \hline Probability & Value \\ \hline 3p{\left(1 - p\right)}^{2} + {\left(1 - p\right)}^{3} & False \\ \hline {p}^{3} + 3{p}^{2}\left(1 - p\right) & True \\ \hline\end{array}$$ $$\begin{array} \hline Probability & Value \\ \hline 4p{\left(1 - p\right)}^{3} + {\left(1 - p\right)}^{4} & False \\ \hline {p}^{4} + 4{p}^{3}\left(1 - p\right) + 6{p}^{2}{\left(1 - p\right)}^{2} & True \\ \hline\end{array}$$ $$\begin{array} \hline Probability & Value \\ \hline 3{p}^{2}\left(1 - p\right) + 3p{\left(1 - p\right)}^{2} + {\left(1 - p\right)}^{3} & False \\ \hline {p}^{3} & True \\ \hline\end{array}$$ $$\begin{array} \hline Probability & Value \\ \hline 6{p}^{2}{\left(1 - p\right)}^{2} + 4p{\left(1 - p\right)}^{3} + {\left(1 - p\right)}^{4} & False \\ \hline {p}^{4} + 4{p}^{3}\left(1 - p\right) & True \\ \hline\end{array}$$ $$\begin{array} \hline Probability & Value \\ \hline 10{p}^{2}{\left(1 - p\right)}^{3} + 5p{\left(1 - p\right)}^{4} + {\left(1 - p\right)}^{5} & False \\ \hline {p}^{5} + 5{p}^{4}\left(1 - p\right) + 10{p}^{3}{\left(1 - p\right)}^{2} & True \\ \hline\end{array}$$ $$\begin{array} \hline Probability & Value \\ \hline 15{p}^{2}{\left(1 - p\right)}^{4} + 6p{\left(1 - p\right)}^{5} + {\left(1 - p\right)}^{6} & False \\ \hline {p}^{6} + 6{p}^{5}\left(1 - p\right) + 15{p}^{4}{\left(1 - p\right)}^{2} + 20{p}^{3}{\left(1 - p\right)}^{3} & True \\ \hline\end{array}$$ Notice how probabilities represent paths but unless you are exceptionally good at maths, the generalization might be hard to deduce (such as that the pattern of the numeric coefficients are the same as would be found in Pascal's triangle). Nonetheless, the more concrete description paired with some expanded examples might aid in becoming more comfortable with the more general but quite abstract binomial's equation. It's about the number of paths through a tree like structure, where each plus can be read as an or, to our destination. Next, we will try to get a better understanding by listing the possibilities. In [10]: let rec atleast_n_in_m_flips_list count flips n = cont{ do! (observe (hasHeads count flips)) return (List.rev flips) let flips' = flip :: flips if hasHeads count flips' then return List.rev flips' else return! atleast_n_in_m_flips_list count flips' (n-1) Model(atleast_n_in_m_flips_list 2 [] 4).Reify() Out[10]: $$\begin{array} \hline Probability & Value \\ \hline {p}^{2} & [H,H] \\ \hline {p}^{2}\left(1 - p\right) & [H,T,H] \\ \hline {p}^{2}{\left(1 - p\right)}^{2} & [H,T,T,H] \\ \hline {p}^{2}\left(1 - p\right) & [T,H,H] \\ \hline {p}^{2}{\left(1 - p\right)}^{2} & [T,H,T,H] \\ \hline {p}^{2}{\left(1 - p\right)}^{2} & [T,T,H,H] \\ \hline\end{array}$$ Geometric Distribution¶ The geometric distribution is recursively defined and will not terminate. We therefore limit exploration to depth 10. Model(cont{ return! 
geometric 18 p }).Reify(10) |> latexTable $$\begin{array} \hline Probability & Value \\ \hline p{\left(1 - p\right)}^{11} & thunk \\ \hline {\left(1 - p\right)}^{12} & thunk \\ \hline p & 18 \\ \hline p\left(1 - p\right) & 19 \\ \hline p{\left(1 - p\right)}^{2} & 20 \\ \hline p{\left(1 - p\right)}^{3} & 21 \\ \hline p{\left(1 - p\right)}^{4} & 22 \\ \hline p{\left(1 - p\right)}^{5} & 23 \\ \hline p{\left(1 - p\right)}^{6} & 24 \\ \hline p{\left(1 - p\right)}^{7} & 25 \\ \hline p{\left(1 - p\right)}^{8} & 26 \\ \hline p{\left(1 - p\right)}^{9} & 27 \\ \hline p{\left(1 - p\right)}^{10} & 28 \\ \hline\end{array}$$ It's easy to read the meaning of the paths here compared to the previous: N failures and then a success. Thunk here, signifies unexpanded/unexplored paths. Fair coin¶ John von Neumann is said to have invented a technique to make any coin fair: flip it twice, if both values are the same, try again; otherwise, take the first flip. Notice that this algorithm has a non-zero chance of never terminating. However, to get an intuition, simply exploring a very shallow number of depths can be a large aid for getting an intuitive understanding of the method. Indeed, a too high max depth can be obscuring. The essence of the technique is in how it contrives to have there be an equal number of paths to either a T or an H. One can also notice something about flips that are even numbers; only on odd number of max flips do the possibilities change. let rec faircoin path p = cont { let! flip1 = bernoulliChoice p ("H", "T") if flip1 = flip2 then return! faircoin ((sprintf "%s\&%s\\;" flip1 flip2) + path) p else return (sprintf "[%s]\&%s\\;;\\;" flip1 flip2) + path Model(faircoin "" p).Reify(limit = 5) |> List.filter notThunk $$\begin{array} \hline Probability & Value \\ \hline p\left(1 - p\right) & [H]\&T\;;\; \\ \hline {p}^{3}\left(1 - p\right) & [H]\&T\;;\;H\&H\; \\ \hline {p}^{5}\left(1 - p\right) & [H]\&T\;;\;H\&H\;H\&H\; \\ \hline {p}^{3}{\left(1 - p\right)}^{3} & [H]\&T\;;\;H\&H\;T\&T\; \\ \hline p{\left(1 - p\right)}^{3} & [H]\&T\;;\;T\&T\; \\ \hline {p}^{3}{\left(1 - p\right)}^{3} & [H]\&T\;;\;T\&T\;H\&H\; \\ \hline p{\left(1 - p\right)}^{5} & [H]\&T\;;\;T\&T\;T\&T\; \\ \hline p\left(1 - p\right) & [T]\&H\;;\; \\ \hline {p}^{3}\left(1 - p\right) & [T]\&H\;;\;H\&H\; \\ \hline {p}^{5}\left(1 - p\right) & [T]\&H\;;\;H\&H\;H\&H\; \\ \hline {p}^{3}{\left(1 - p\right)}^{3} & [T]\&H\;;\;H\&H\;T\&T\; \\ \hline p{\left(1 - p\right)}^{3} & [T]\&H\;;\;T\&T\; \\ \hline {p}^{3}{\left(1 - p\right)}^{3} & [T]\&H\;;\;T\&T\;H\&H\; \\ \hline p{\left(1 - p\right)}^{5} & [T]\&H\;;\;T\&T\;T\&T\; \\ \hline\end{array}$$ Boy or Girl Paradox¶ The boy or girl paradox is a seeming paradox of probability where multiple interpretations of a sentence result in ambiguity as to what the correct answer is. The problem as stated by Martin Gardner in 1959 goes as: Mr. Jones has two children. The older child is a girl. What is the probability that both children are girls? Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys? The first problem is straightforward: let firstproblem p = cont{ let! child1 = bernoulliChoice p ("boy", "girl") do! 
observe(child1 = "girl") return (child1,child2) } Model(firstproblem p).Reify() |> normalize $$\begin{array} \hline Probability & Value \\ \hline \frac{p\left(1 - p\right)}{p\left(1 - p\right) + {\left(1 - p\right)}^{2}} & (girl, boy) \\ \hline \frac{{\left(1 - p\right)}^{2}}{p\left(1 - p\right) + {\left(1 - p\right)}^{2}} & (girl, girl) \\ \hline\end{array}$$ Model(firstproblem (1/2Q)).Reify() $$\begin{array} \hline Probability & Value \\ \hline \frac{1}{2} & (girl, boy) \\ \hline \frac{1}{2} & (girl, girl) \\ \hline\end{array}$$ Now, the second problem is trickier because there seem to be two valid interpretations. The below is how I interpreted it and, I was confused by how one might consider it any other way: let paradox p = do! observe(child1 = "boy" || child2 = "boy") return (child1,child2) Model(paradox p).Reify() |> List.map (keepRight Rational.expand) $$\begin{array} \hline Probability & Value \\ \hline \frac{p}{2 - p} & (boy, boy) \\ \hline \frac{1 - p}{2 - p} & (boy, girl) \\ \hline \frac{1 - p}{2 - p} & (girl, boy) \\ \hline\end{array}$$ Model(paradox (1/2Q)).Reify() $$\begin{array} \hline Probability & Value \\ \hline \frac{1}{3} & (boy, boy) \\ \hline \frac{1}{3} & (boy, girl) \\ \hline \frac{1}{3} & (girl, boy) \\ \hline\end{array}$$ With some thought, I was able to realize that the second interpretation was about whether the first of the two children seen was a boy. let paradoxform2 p = cont{ let! seefirst = uniform [child1; child2] do! observe(seefirst = "boy") Model(paradoxform2 p).Reify() $$\begin{array} \hline Probability & Value \\ \hline \frac{{p}^{2}}{{p}^{2} + p\left(1 - p\right)} & (boy, boy) \\ \hline \frac{p\left(1 - p\right)}{2\left({p}^{2} + p\left(1 - p\right)\right)} & (boy, girl) \\ \hline \frac{p\left(1 - p\right)}{2\left({p}^{2} + p\left(1 - p\right)\right)} & (girl, boy) \\ \hline\end{array}$$ Model(paradoxform2 (1Q/2Q)).Reify() It's also noticeable, by comparing the generated equations, that even though the answer to both questions can be $\frac{1}{2}$, the manner or path there is quite different. I'll posit that there is yet another way to approach the problem and that approach is the most common (and incorrect). It goes as: if one of them is a boy then either the other is a boy or is a girl. That is, it's given the same structure as the first problem. latexTable [1Q/2Q, Value("boy","boy"); 1Q/2Q, Value("boy","girl") ] $$\begin{array} \hline Probability & Value \\ \hline \frac{1}{2} & (boy, girl) \\ \hline \frac{1}{2} & (boy, boy) \\ \hline\end{array}$$ This is not a valid inference to make as it does not properly account for the added probability of which child was seen first. I've seen this problem argued over at length on the internet; even the wikipedia treatment is quite long. I think, however, that this treatment is clear on the matter while being compact and manipulable. Streaks¶ The following is a puzzle I found on twitter: Imagine three basketball players & hoops all lined up in a row, with each player attempting a single shot at the exact same time. Assume each player is a 50% shooter and not influenced by the other players. After all three players have taken their shot, tell each player: Put on a red shirt if the player to your left hit their shot Put on a blue shirt if the player to your left missed their shot Put on a grey shirt otherwise You ask for a player with a blue shirt to step forward. What is the probability his/her shot was successful? My straightforward interpretation of how it is written yields 50% as the answer. 
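As a quick numerical cross-check of this naive reading, and of the streak-based reading developed afterwards, the puzzle is small enough to brute-force in Python over the 8 equally likely shot sequences, using exact fractions. This is only an illustrative sketch, not the article's symbolic F# monad; the function names are mine. It reproduces 1/2 for the naive interpretation and 7/12 for the streak-aware one, which agrees with summing the Score rows of the normalized table shown further below (1/2 + 1/12 = 7/12).

```python
# Brute-force enumeration of the basketball puzzle for three independent 50% shooters.
# "Naive" reading: pick one of the three players uniformly and condition on their shirt
# being blue (the player to their left missed); ask whether their own shot scored.
# "Streak" reading: pool every shot that immediately follows a miss, then pick one of
# those pooled shots uniformly (conditioning on the pool being non-empty).
from fractions import Fraction
from itertools import product

def naive():
    num = den = Fraction(0)
    for shots in product([True, False], repeat=3):       # True = scored
        for i in range(1, 3):                             # players 2 and 3 have a left neighbour
            if not shots[i - 1]:                          # left neighbour missed -> blue shirt
                weight = Fraction(1, 8) * Fraction(1, 3)  # sequence prob x uniform player pick
                den += weight
                num += weight * shots[i]
    return num / den

def streak():
    num = den = Fraction(0)
    for shots in product([True, False], repeat=3):
        pool = [shots[i] for i in range(1, 3) if not shots[i - 1]]
        if pool:                                          # condition on a non-empty pool
            weight = Fraction(1, 8)
            den += weight
            num += weight * Fraction(sum(pool), len(pool))
    return num / den

print(naive())   # 1/2
print(streak())  # 7/12
```

The article's own F# specification of the naive reading, followed by the streak-based version, is shown next.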
type Shot = Score | Miss | NoShot let assignShirtWith = function Some Score -> "red" | Some Miss -> "blue" | _ -> "grey" cont { //Sample Shots let! shotResult1 = bernoulliChoice (1Q/2Q) (Score, Miss) //Get Colors let color1 = assignShirtWith None let color2 = assignShirtWith (Some shotResult1) //Sample random person let! shotresult,color = uniform [shotResult1,color1; shotResult2,color2; shotResult3,color3] //condition on blue do! (observe (color = "blue")) return shotresult } |> exact_reify $$\begin{array} \hline Probability & Value \\ \hline \frac{1}{2} & Score \\ \hline \frac{1}{2} & Miss \\ \hline\end{array}$$ This is incorrect however, with the issue being not accounting for streaks. Although, I also think the occluding nature of the problem description is problematic. Divesting of extraneous detail, it's clear that the problem is really about collecting streaks of misses greater than some length. Then, considering all the ways the shots could have gone, what is the chance that a randomly selected shot from the streak of misses is a score? I then normalize, as we've filtered away some paths. In this, I will skip using variables and work directly with $p=0.5$. For a lower number of shots, working with rational numbers is still clearer than with floats (at higher numbers, rational numbers become essentially ungraspable). For clarity, I also show the possible collected shots. let prettify (shot,coll) = sprintf "%A\;\mathrm{from}\;[%s]" shot (Strings.joinToStringWith "\;\&\;" coll) let rec shootStreak streaklen streakcount collected prevShooterResult n = cont { let! shot = uniform collected return shot, collected let! shotResult = bernoulliChoice (1/2Q) (Score, Miss) let streakcount' = if prevShooterResult = Miss then streakcount + 1 else 0 let collected' = if streakcount' >= streaklen then shotResult::collected else collected return! (shootStreak streaklen streakcount' collected' shotResult (n-1)) shootStreak 1 0 [] NoShot 3 |> exact_reify |> ProbVal.map prettify $$\begin{array} \hline Probability & Value \\ \hline \frac{1}{2} & Score\;\mathrm{from}\;[Score] \\ \hline \frac{1}{12} & Score\;\mathrm{from}\;[Score\;\&\;Miss] \\ \hline \frac{1}{12} & Miss\;\mathrm{from}\;[Score\;\&\;Miss] \\ \hline \frac{1}{6} & Miss\;\mathrm{from}\;[Miss] \\ \hline \frac{1}{6} & Miss\;\mathrm{from}\;[Miss\;\&\;Miss] \\ \hline\end{array}$$ Here is an example with a higher number of shots to more clearly show what's going on. $$\begin{array} \hline Probability & Value \\ \hline \frac{1}{2} & Score\;\mathrm{from}\;[Score] \\ \hline \frac{1}{10} & Score\;\mathrm{from}\;[Score\;\&\;Miss] \\ \hline \frac{1}{40} & Score\;\mathrm{from}\;[Score\;\&\;Miss\;\&\;Miss] \\ \hline \frac{1}{160} & Score\;\mathrm{from}\;[Score\;\&\;Miss\;\&\;Miss\;\&\;Miss] \\ \hline \frac{1}{10} & Miss\;\mathrm{from}\;[Score\;\&\;Miss] \\ \hline \frac{1}{20} & Miss\;\mathrm{from}\;[Score\;\&\;Miss\;\&\;Miss] \\ \hline \frac{3}{160} & Miss\;\mathrm{from}\;[Score\;\&\;Miss\;\&\;Miss\;\&\;Miss] \\ \hline \frac{1}{10} & Miss\;\mathrm{from}\;[Miss] \\ \hline \frac{1}{20} & Miss\;\mathrm{from}\;[Miss\;\&\;Miss] \\ \hline \frac{1}{40} & Miss\;\mathrm{from}\;[Miss\;\&\;Miss\;\&\;Miss] \\ \hline \frac{1}{40} & Miss\;\mathrm{from}\;[Miss\;\&\;Miss\;\&\;Miss\;\&\;Miss] \\ \hline\end{array}$$ Differential Privacy¶ This differential privacy example is, I think, the neatest, in that we are not computing a truncated slice of some more general equation. The specification by code also fully generates the corresponding equation. 
It's really neat how writing such a general natural specification of a problem also generates a detailed mathematical description of it. It's rare to run into such an example. By the way, I wrote an intro to Differential Privacy in this article and you can read the wiki article too. The essence of differential privacy is that it is a way of gathering data while offering plausible deniability to the respondent. Model(cont{ let! belief = bernoulliChoice p_antibacon (@"Hate\;bacon🛑🥓",@"Love\;bacon❤🥓") let! flip = bernoulli p_coin if flip then return belief else return "Hate\;bacon🛑🥓" }).Reify() $$\begin{array} \hline Probability & Value \\ \hline {p_{antibacon}}{p_{coin}} + {p_{antibacon}}\left(1 - {p_{coin}}\right) + \left(1 - {p_{antibacon}}\right)\left(1 - {p_{coin}}\right) & Hate\;bacon🛑🥓 \\ \hline \left(1 - {p_{antibacon}}\right){p_{coin}} & Love\;bacon❤🥓 \\ \hline\end{array}$$ In the above example, an unethical bacon loving society polls people on their thoughts on bacon. Differential privacy is used to avoid shaming alleged heretic respondents. Notice that the hate bacon choice is ambiguous due to there being multiple paths to it. A slightly more complex example below: a survey with 3 options for a favorite sandwich type. Notice that this version offers a bit more privacy since all end points now have multiple paths to them. Once again, it's really neat how a declarative specification in a straightforward way yields the mathematical formulas below. let! sandwich = categorical [p_hotdog,"hotdog🌭";p_sw,"Sandwich🥖";p_vburg,@"Vegan\;Hamburger🍔"] if flip then return sandwich else return! uniform ["hotdog🌭";"Sandwich🥖";"Vegan\\;Hamburger🍔"] |> List.map (keepRight Rational.reduce) $$\begin{array} \hline Probability & Value \\ \hline \frac{1}{3}\left(1 - {p_{coin}}\right){p_{hotdog}} + {p_{coin}}{p_{sandwich}} + \frac{1}{3}\left(1 - {p_{coin}}\right){p_{sandwich}} + \frac{1}{3}\left(1 - {p_{coin}}\right){p_{vburger}} & Sandwich🥖 \\ \hline \frac{1}{3}\left(1 - {p_{coin}}\right){p_{hotdog}} + \frac{1}{3}\left(1 - {p_{coin}}\right){p_{sandwich}} + {p_{coin}}{p_{vburger}} + \frac{1}{3}\left(1 - {p_{coin}}\right){p_{vburger}} & Vegan\;Hamburger🍔 \\ \hline {p_{coin}}{p_{hotdog}} + \frac{1}{3}\left(1 - {p_{coin}}\right){p_{hotdog}} + \frac{1}{3}\left(1 - {p_{coin}}\right){p_{sandwich}} + \frac{1}{3}\left(1 - {p_{coin}}\right){p_{vburger}} & hotdog🌭 \\ \hline\end{array}$$ Conclusion¶ In this fun little article, I've combined an exact discrete probability monad with a symbolic algebra system. In so doing, the running/unfolding of the computation also generates a mathematical description. The description, due to limits placed by tractability, is usually (but not always just!) a snapshot. However, it can often be more illuminating and more general than a full simulation. I believe this has high potential as a pedagogical tool or aid, especially when looking at causality examples and phenomena like Berkson's or Simpson's Paradox (which I plan to do someday hopefully soonish). One surprising take away from this is that mathematics is (probably) not declarative. In the descriptions above, the style of thinking that gets you to the equations is much more a detailed investigation of what must/is occurring than it is a specification. This is not the first time I've wondered if everyday math is not a simple imperative assembly language meant to run on a powerful computer capable of incredible deductive inference. 2018-11-04 - Deen Abiola
nLab > Latest Changes: ring

CommentTimeSep 5th 2012

at the beginning of _[[ring]]_ I have spelled out a more explicit definition. Also added the examples of rings on cyclic groups to explain the origin of the word "ring".

CommentAuthorTodd_Trimble

> to explain the origin of the word "ring".

I didn't know that! Do you have a source for that?

(edited Sep 5th 2012)

> Do you have a source for that?

Hm. Let's see. That's what they told me when I was a gullible student. I never checked the originals. The entry [Mathworld -- Ring](http://mathworld.wolfram.com/Ring.html) tells its readers the same story:

> The term was introduced by Hilbert to describe rings like [...] By successively multiplying the new element [...], it eventually loops around to become something already generated, something like a ring,

but apparently it's just a story, not a review of Hilbert's way of introducing the term. Okay, so I went to Google books and read Hilbert's original article

> [[David Hilbert]], _Die Theorie der algebraischen Zahlkörper_, Jahresbericht der Deutschen Mathematiker-Vereinigung 4 (1879)

and there, in section 9.31, indeed no motivation like this is given. Instead it just says:

> [...] ein Zahlring, Ring oder Integritätsbereich genannt

with a footnote that reads

> Nach Dedekind "eine Ordnung".

And that's it. But so that means already that I was wrong, since neither Dedekind nor Hilbert meant to invoke the picture of clock arithmetic. (And Hilbert does not even mention anything as simple as $\mathbb{Z}_n$.) Now I don't have Dedekind's original text. Because also "Ordnung" is ambiguous. One meaning is "order" as in "ordered set". But it is also used in the sense of "a collection of beings of the same nature", as in a taxonomic rank. Dedekind can't have meant "ordered set". So he must have meant "taxonomical order". Maybe thinking of a "taxonomy of numbers"? I don't know. But anyway, I suppose that Hilbert's "ring" is therefore also to be read as meaning "collection of beings", as in "drug-dealer ring". :-) Which, I must say, is too bad.

What is _Dedekind's original text_?

I don't know, I just meant to say that I haven't seen any original text on Dedekind's "Ordnungen". Maybe he didn't even write it up. He is just being credited for the idea (by Hilbert). I have made further notes at _[ring - References - History](http://ncatlab.org/nlab/show/ring#ReferencesHistory)_.

CommentAuthorTim_Porter

A google search did turn up [this](http://www-history.mcs.st-and.ac.uk/HistTopics/Ring_theory.html), but that does not seem to answer the question.

> A google search did turn up this,

Yes, that's already linked to in the entry.

> but that does not seem to answer the question.

I believe I just answered the question in #3. :-)

By the way, I had put some text into the _[Idea-section](http://ncatlab.org/nlab/show/ring#Idea)_. Not meant to be perfect. Please edit as you see the need.

There is a Stackexchange question [here](http://math.stackexchange.com/questions/362/history-of-the-concept-of-a-ring), with another historical reference.

Started at _[[ring]]_ an _[Examples](http://ncatlab.org/nlab/show/ring#Examples)_-section. Just some very basic examples so far.

CommentTimeOct 2nd 2012 (edited Oct 2nd 2012)

I added the following standard observation:

> The structure of an $A\otimes A^{op}$-ring $(R,\mu_R,\eta)$ is determined by the structure of $A$ as a ring, together with the two natural homomorphisms of rings $s = \eta(-\otimes 1_A):A\to R$ and $t=\eta(1_A\otimes -):A^{op}\to R$ which have commuting images ($s(a)t(a')=t(a')s(a)$, for all $a,a'\in A$).

This is very interesting when dualizing the notion of groupoid (algebra of functions/space duality) -- source and target maps in algebraic language sometimes get conveniently packed into $A\otimes A^{op}$-ring language, as in the case of [[bialgebroid]]s.

You should put _something_ around that paragraph, wrapping it, something that allows it to be discerned as a new idea within the text that surrounds it. At least maybe a remark-environment. I knew it had to be there, but even so I only found it after hitting _see changes_.
Derived Metric Formulas Air Density Delta T Feels Like Temperature Heat Index Temperature Pressure Trend Sea Level Pressure Vapor Pressure Wind Chill Temperature $$\frac{P_{stn}\times{}100}{R_{specific}T}$$ \(P_{stn}\) = station pressure in millibars (mb) \(T\) = temperature in Kelvin \(R_{specific}\) = specific gas constant for dry air (287.058 J/(kg·K)) Delta T, \(\Delta T\), is used in agriculture to indicate acceptable conditions for spraying pesticides and fertilizers. It is simply the difference between the air temperature (aka "dry bulb temperature") and the wet bulb temperature: $$\Delta T = T - T_{wb}$$ Source: RSMAS $$T_{d} = \frac{243.04 \bigg[\ln\big(\frac{RH}{100}\big) + \frac{17.625 \times{} T}{243.04 + T}\bigg]}{17.625 - \ln\big(\frac{RH}{100}\big) - \frac{17.625 \times{} T}{243.04 + T}}$$ \(T_{d}\) = dew point in degrees Celsius (&degC) \(T\) = temperature in degrees Celsius (&degC) \(RH\) = relative humidity (%) The Feels Like temperature is equal to the Heat Index if the temperature is at or above 80&degF and the relative humidity is at or above 40%. Alternatively, the Feels Like temperature is equal to Wind Chill if the temperature is at or below 50&degF and wind speeds are above 3mph. If neither condition applies, the Feels Like temperature is equal to the air temperature. Source: Weather.gov Heat Index is calculated for temperatures at or above 80&degF and a relative humidity at or above 40%. $$T_{hi} = -42.379 + (2.04901523\times{}T) \\+ (10.1433127\times{}RH) - (0.22475541\times{}T\times{}RH) \\-(6.83783\times{}10^{-3}\times{}T^2) -(5.481717\times{}10^{-2}\times{}RH^2) \\+(1.22874\times{}10^{-3}\times{}T^2\times{}RH)+(8.5282\times{}10^{-4}\times{}T\times{}RH^2) \\-(1.99\times{}10^{-6}\times{}T^2\times{}RH^2)$$ \(T\) = temperature in degrees Fahrenheit (°F) The Pressure Trend description is determined by the rate of change over the past 3 hours. $$\Delta P = P_{0h} - P_{3h}$$ \(P_{0h}\) = the latest pressure reading in millibars (mb) \(P_{3h}\) = pressure reading 3 hours ago in millibars (mb) Steady \(-1 mb < \Delta P < 1 mb \) Falling \(\Delta P \le -1 mb\) Rising \(\Delta P \ge 1 mb \) The Rain Rate description is set according to the latest one minute accumulation, extrapolated to an hourly rate. 
$$\Delta R = \frac{V_{r} \times{} 60min}{1h}$$ \(V_{r}\) = rain accumulation in millimeters over one minute (mm/min) None \(\Delta R = 0 mm/h\) Very Light \(0 mm/h < \Delta R < 0.25 mm/h\) Light \(0.25 mm/h \le \Delta R < 1.0 mm/h\) Moderate \(1.0 mm/h \le \Delta R < 4.0 mm/h\) Heavy \(4.0mm/h \le \Delta R < 16.0 mm/h\) Very Heavy \(16.0 mm/h \le \Delta R < 50.0 mm/h\) Extreme \(\Delta R \ge 50.0 mm/h\) Source: AMS $$P_{sl} = P_{sta}\Big[1 + \frac{P_{0}}{P_{sta}}^{\frac{R_{d}\gamma_{s}}{g}}\frac{\gamma_{s}(h_{el} + h_{ag})}{T_{0}}\Big]^{\frac{g}{R_{d}\gamma_{s}}}$$ \(P_{sta}\) = station pressure in millibars (mb) \(P_{0}\) = standard sea level pressure (1013.25mb) \(R_{d}\) = gas constant for dry air (\(287.05 \frac{J}{kg \cdot K}\)) \(\gamma_{s}\) = standard atmosphere lapse rate (\(0.0065 \frac{K}{m}\)) \(g\) = gravity (\(9.80665 \frac{m}{s^{2}}\)) \(h_{el}\) = ground elevation in meters (m) \(h_{ag}\) = station height above ground in meters (m) \(T_{0}\) = standard sea level temperature (\(288.15 K\)) Vapor pressure, \(P_{v}\) can be estimated in units of millibar (mb) as follows: $$P_{v} = \frac{RH}{100} \times{} 6.112 \times{} e^{\Big(\frac{17.67 \times{} T}{T + 243.5}\Big)}$$ Wet Bulb Temperature (\(T_{wb}\)), is determined using the following formulas for actual vapor pressure (\(P_{v}\)) and the vapor pressure related to wet bulb temperature (\(P_{v,wb}\)) in millibar (mb): $$P_{v} = P_{v,wb} - P_{stn} \times (T - T_{wb}) \times 0.00066 \times (1 + (0.00115 \times T_{wb}))$$ $$P_{v,wb} = 6.112\times{}e^{\Big(\frac{17.67\times{}T_{wb}}{T_{wb}+243.5}\Big)}$$ \(P_{stn}\) = station pressure in millibar (mb) Note, the above equations can't be solved for \(T_{wb}\) directly, but several iterative methods may be used to determine \(T_{wb}\). Wind Chill is calculated for temperatures at or below 50&degF and wind speeds above 3mph. $$T_{wc} = 35.74 + (0.6215\times{} T) \\- \Big(35.75\times{}V^{0.16}\Big) \\+ \Big(0.4275\times{}T\times{}V^{0.16}\Big)$$ \(T\) = temperature in degrees Fahrenheit (&degF) \(V\) = wind speed in mph
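The formulas above are simple enough to check numerically. The following is a minimal Python sketch that transcribes the dew point, heat index, wind chill, and Feels Like selection rules exactly as given on this page; the function names and the example inputs are illustrative assumptions, not part of the source material.

```python
# Direct transcriptions of the dew point, heat index, wind chill, and Feels Like rules above.
# Units follow the page: dew point works in deg C, the other functions in deg F and mph.
import math

def dew_point_c(temp_c, rh):
    """Dew point (deg C) from air temperature (deg C) and relative humidity (%)."""
    gamma = math.log(rh / 100.0) + (17.625 * temp_c) / (243.04 + temp_c)
    return 243.04 * gamma / (17.625 - gamma)

def heat_index_f(temp_f, rh):
    """Heat index (deg F); intended for temp_f >= 80 and rh >= 40."""
    return (-42.379 + 2.04901523 * temp_f + 10.1433127 * rh
            - 0.22475541 * temp_f * rh
            - 6.83783e-3 * temp_f**2 - 5.481717e-2 * rh**2
            + 1.22874e-3 * temp_f**2 * rh + 8.5282e-4 * temp_f * rh**2
            - 1.99e-6 * temp_f**2 * rh**2)

def wind_chill_f(temp_f, wind_mph):
    """Wind chill (deg F); intended for temp_f <= 50 and wind_mph > 3."""
    return (35.74 + 0.6215 * temp_f
            - 35.75 * wind_mph**0.16
            + 0.4275 * temp_f * wind_mph**0.16)

def feels_like_f(temp_f, rh, wind_mph):
    """Feels Like temperature following the selection rules stated above."""
    if temp_f >= 80 and rh >= 40:
        return heat_index_f(temp_f, rh)
    if temp_f <= 50 and wind_mph > 3:
        return wind_chill_f(temp_f, wind_mph)
    return temp_f

if __name__ == "__main__":
    print(round(dew_point_c(25.0, 60.0), 1))        # roughly 16.7 deg C
    print(round(feels_like_f(90.0, 70.0, 5.0), 1))  # heat index branch
    print(round(feels_like_f(30.0, 50.0, 15.0), 1)) # wind chill branch
```

Spot-checking against standard tables, 25 °C at 60% relative humidity gives a dew point near 16.7 °C, and 30 °F with a 15 mph wind gives a wind chill near 19 °F, which matches the usual published values.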
One-Way Analysis of Variance (ANOVA) ANOVA You should recall from our previous tutorials how to conduct a two sample t-test to compare two means. However, what do we do when we have more than two groups? ANOVA (Analysis of Variance) is how we can make these comparisons. The Logic of ANOVA The total variability we see in our dependent variables can have two sources. Within-groups variance: what is the variability of the dependent variable inside a particular group? The t-Distribution and Conducting t-Tests The t-Distribution According to the central limit theorem, the distribution of means across repeated sampling (the sampling distribution) will be normal, centered on the true population mean, and have a standard error (the standard deviation of the sampling distribution) equal to \[ {\sigma_M}=\frac{\sigma}{\sqrt{n}} \] The numerator \(\sigma\) is the standard deviation of values in the population, calculated as \[ \sigma = \sqrt{ \frac{ \sum (x_i-\mu)^2}{N}} \] We can convert the distribution of means to be unit normal by converting the means from each sample to \(z\)-scores. Running t-Tests in R t-Tests in R All three types of \(t\)-tests can be performed using the same t.test function in R. The primary arguments are the following: x and (optionally) y, or a formula, e.g. y ~ x. These specify the interval-level outcome variable y and the two-level factor variable x. The formula syntax can be used for the independent samples \(t\)-test. If a formula is specified, the data argument can be specified so that it is not necessary to specify the data frame using df$x and df$y notation. Running t-Tests in SPSS t-Tests in SPSS SPSS allows you to conduct one-sample, independent samples, and paired samples \(t\)-tests. This page demonstrates how to perform each using SPSS. The data used in this tutorial can be downloaded from here. The one-sample and independent samples examples will use the iq_long.sav data, and the paired samples example will use iq_wide.sav. One Sample \(t\)-Test Say we have data from 200 subjects who have taken an IQ test. We know in the general population the mean IQ is 100.
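As a quick worked illustration of the standard error formula above, suppose the population standard deviation of IQ is \(\sigma = 15\) and a sample of \(n = 200\) subjects has a mean of 103 (the 15 and the 103 are assumed values for illustration, not results from the tutorial's data). Then

\[
\sigma_M = \frac{\sigma}{\sqrt{n}} = \frac{15}{\sqrt{200}} \approx 1.06, \qquad
z = \frac{\bar{x} - \mu}{\sigma_M} = \frac{103 - 100}{1.06} \approx 2.83
\]

so such a sample mean would lie almost three standard errors above the hypothesized mean of 100. When \(\sigma\) is unknown and estimated by the sample standard deviation \(s\), the same ratio is a \(t\) statistic and is compared against the \(t\) distribution with \(n - 1\) degrees of freedom, which is exactly what the one-sample \(t\)-test does.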
Large Plasma Device The Large Plasma Device (often stylized as LArge Plasma Device or LAPD) is an experimental physics device located at UCLA. It is designed as a general purpose laboratory for experimental plasma physics research. The device began operation in 1991[1] and was upgraded in 2001[2] to its current version. The modern LAPD is operated as the primary device for a national user facility, the Basic Plasma Science Facility (or BaPSF), which is supported by the US Department of Energy, Fusion Energy Sciences and the National Science Foundation.[3] Half of the operation time of the device is available to scientists at other institutions and facilities who can compete for time through a yearly solicitation.[4][5] The first version of the LAPD was a 10 meter long device constructed by a team led by Walter Gekelman in 1991. The construction took 3.5 years to complete and was funded by the Office of Naval Research (ONR). A major upgrade to a 20 meter version was funded by ONR and an NSF Major Research Instrumentation award in 1999.[6] Following the completion of that major upgrade, the award of a $4.8 million grant by the US Department of Energy and the National Science Foundation in 2001 enabled the creation of the Basic Plasma Science Facility and the operation of the LAPD as part of this national user facility. Gekelman was director of the facility until 2016, when Troy Carter became BaPSF director. Machine overview Photo taken through an end port with the plasma off, showing the thermionic cathode. The LAPD is a linear pulsed-discharge device operated at a high (1 Hz) repetition rate, producing a strongly magnetized background plasma which is physically large enough to support Alfvén waves. Plasma is produced from a barium oxide (BaO) cathode-anode discharge at one end of a 20-meter long, 1 meter diameter cylindrical vacuum vessel (diagram). The resulting plasma column is roughly 16.5 meters long and 60 cm in diameter. The background magnetic field, produced by a series of large electromagnets surrounding the chamber, can be varied from 400 gauss to 2.5 kilogauss (40 to 250 mT). Plasma parameters Because the LAPD is a general-purpose research device, the plasma parameters are carefully selected to make diagnostics simple without the problems associated with hotter (e.g. fusion-level) plasmas, while still providing a useful environment in which to do research. The typical operational parameters are: Density: n = 1–4 \( \times \) 1012 cm−3 Temperature: Te = 6 eV, Ti = 1 eV Background field: B = 400–2500 gauss (40–250 mT) In principle, a plasma may be generated from any kind of gas, but inert gases are typically used to prevent the plasma from destroying the coating on the barium oxide cathode. Examples of gases used are helium, argon, nitrogen and neon. Hydrogen is sometimes used for short periods of time. Multiple gases can also be mixed in varying ratios within the chamber to produce multi-species plasmas. At these parameters, the ion Larmor radius is a few millimeters, and the Debye length is tens of micrometres. Importantly, it also implies that the Alfvén wavelength is a few meters, and in fact shear Alfvén waves are routinely observed in the LAPD. This is the main reason for the 20-meter length of the device. Plasma sources The main source of plasma within the LAPD is produced via discharge from the barium oxide (BaO) coated cathode, which emits electrons via thermionic emission. 
The cathode is located near the end of the LAPD and is made from a thin nickel sheet, uniformly heated to roughly 900 °C. The circuit is closed by a molybdenum mesh anode a short distance away. Typical discharge currents are in the range of 3-8 kiloamperes at 60-90 volts, supplied by a custom-designed transistor switch backed by a 4-farad capacitor bank. A secondary cathode source made of lanthanum hexaboride (LaB6) was developed in 2010[7] to provide a hotter and denser plasma when required. It consists of four square tiles joined to form a 20 × {\displaystyle \times } \times 20 cm2 area and is located at the other end of the LAPD. The circuit is also closed by a molybdenum mesh anode, which may be placed further down the machine, and is slightly smaller in size to the one used to close the BaO cathode source. The LaB6 cathode is typically heated to temperatures above 1750 °C by a graphite heater, and produces discharge currents of 2.2 kiloamperes at 150 volts. The plasma in the LAPD is usually pulsed at 1 Hz, with the background BaO source on for 10-20 milliseconds at a time. If the LaB6 source is being utilized, it typically discharges together BaO cathode, but for a shorter period of time (about 5–8 ms) nearing the end of each discharge cycle. The use of an oxide-cathode plasma source, along with a well-designed transistor switch for the discharge, allows for a plasma environment which is extremely reproducible shot-to-shot. One interesting aspect of the BaO plasma source is its ability to act as an "Alfvén Maser", a source of large-amplitude, coherent shear Alfvén waves.[8] The resonant cavity is formed by the highly reflective nickel cathode and the semitransparent grid anode. Since the source is located at the end of the solenoid which generates the main LAPD background field, there is a gradient in the magnetic field within the cavity. As shear waves do not propagate above the ion cyclotron frequency, the practical effect of this is to act as a filter on the modes which may be excited. Maser activity occurs spontaneously at certain combinations of magnetic field strength and discharge current, and in practice may be activated (or avoided) by the machine user. Diagnostic access and probes The main diagnostic is the movable probe. The relatively low electron temperature makes probe construction straightforward and does not require the use of exotic materials. Most probes are constructed in-house within the facility and include magnetic field probes,[9] Langmuir probes, Mach probes (to measure flow), electric dipole probes and many others. Standard probe design also allows external users to bring their own diagnostics with them, if they desire. Each probe is inserted through its own vacuum interlock, which allows probes to be added and removed while the device is in operation. A 1 Hz rep-rate, coupled with the high reproducibility of the background plasma, allows the rapid collection of enormous datasets. An experiment on LAPD is typically designed to be repeated once per second, for as many hours or days as is necessary to assemble a complete set of observations. This makes it possible to diagnose experiments using a small number of movable probes, in contrast to the large probe arrays used in many other devices. The entire length of the device is fitted with "ball joints," vacuum-tight angular couplings (invented by a LAPD staff member) which allow probes to be inserted and rotated, both vertically and horizontally. 
In practice, these are used in conjunction with computer-controlled motorized probe drives to sample "planes" (vertical cross-sections) of the background plasma with whatever probe is desired. Since the only limitation on the amount of data to be taken (number of points in the plane) is the amount of time spent recording shots at 1 Hz, it is possible to assemble large volumetric datasets consisting of many planes at different axial locations. Visualizations composed from such volumetric measurements can be seen at the LAPD gallery. Including the ball joints, there are a total of 450 access ports on the machine, some of which are fitted with windows for optical or microwave observation. Other diagnostics A variety of other diagnostics are also available at the LAPD to complement probe measurements. These include photodiodes, microwave interferometers, a high speed camera (3 ns/frame) and laser-induced fluorescence. Enormous Toroidal Plasma Device (ETPD), a toroidal plasma device housed in the same facility as the LAPD Gekelman, W.; Pfister, H.; Lucky, Z.; Bamber, J.; Leneman, D.; Maggs, J. (1991). "Design, construction, and properties of the large plasma research device−The LAPD at UCLA". Review of Scientific Instruments. 62 (12): 2875–2883. Bibcode:1991RScI...62.2875G. doi:10.1063/1.1142175. ISSN 0034-6748. Gekelman, W.; Pribyl, P.; Lucky, Z.; Drandell, M.; Leneman, D.; Maggs, J.; Vincena, S.; Van Compernolle, B.; Tripathi, S. K. P. (2016). "The upgraded Large Plasma Device, a machine for studying frontier basic plasma physics". Review of Scientific Instruments. 87 (2): 025105. Bibcode:2016RScI...87b5105G. doi:10.1063/1.4941079. ISSN 0034-6748. PMID 26931889. "US NSF - MPS - PHY - Facilities and Centers". www.nsf.gov. Retrieved July 29, 2020. Samuel Reich, Eugenie (2012). "Lab astrophysics aims for the stars". Nature. 491 (7425): 509. Bibcode:2012Natur.491..509R. doi:10.1038/491509a. ISSN 0028-0836. PMID 23172193. Perez, Jean C.; Horton, W.; Bengtson, Roger D.; Carter, Troy (2006). "Study of strong cross-field sheared flow with the vorticity probe in the Large Plasma Device". Physics of Plasmas. 13 (5): 055701. Bibcode:2006PhPl...13e5701P. doi:10.1063/1.2179423. ISSN 1070-664X. "NSF Award Search: Award#9724366 - To Upgrade a Large Plasma Device". www.nsf.gov. Retrieved July 29, 2020. Cooper, C. M.; Gekelman, W.; Pribyl, P.; Lucky, Z. (2010). "A new large area lanthanum hexaboride plasma source". Review of Scientific Instruments. 81 (8): 083503. Bibcode:2010RScI...81h3503C. doi:10.1063/1.3471917. ISSN 0034-6748. PMID 20815604. Maggs, J. E.; Morales, G. J.; Carter, T. A. (2004). "An Alfvén wave maser in the laboratory". Physics of Plasmas. 12 (1): 013103. Bibcode:2005PhPl...12a3103M. doi:10.1063/1.1823413. ISSN 1070-664X. PMID 12906425. Everson, E. T.; Pribyl, P.; Constantin, C. G.; Zylstra, A.; Schaeffer, D.; Kugland, N. L.; Niemann, C. (2009). "Design, construction, and calibration of a three-axis, high-frequency magnetic probe (B-dot probe) as a diagnostic for exploding plasmas". Review of Scientific Instruments. 80 (11): 113505. Bibcode:2009RScI...80k3505E. doi:10.1063/1.3246785. ISSN 0034-6748. PMID 19947729.
MimiFUND.jl 1. Resolution 2. Population and income 3. Emission, abatement and costs 4. Atmosphere and climate 5. Impacts Edit on GitHub FUND 3.9 is defined for 16 regions, specified in Table R. The model runs from 1950 to 3000 in time-steps of a year. Population and per capita income follow exogenous scenarios. There are five standard scenarios, specified in Tables P and Y. The FUND scenario is based on the EMF14 Standardised Scenario, and lies somewhere in between the IS92a and IS92f scenarios (Leggett et al., 1992). The other scenarios follow the SRES A1B, A2, B1 and B2 scenarios (Nakicenovic and Swart, 2001), as implemented in the IMAGE model (IMAGE Team, 2001). We assume that all regions are in a steady state after the year 2300. For the years 2301-3000 per capita income growth rates are constant and equal to the values of the year 2300, while population does not change. 3.1. Carbon dioxide (CO₂) Carbon dioxide emissions are calculated on the basis of the Kaya identity: \[M_{t,r}=\frac{M_{t,r}}{E_{t,r}}\frac{E_{t,r}}{Y_{t,r}}\frac{Y_{t,r}}{P_{t,r}}P_{t,r}=\psi_{t,r}\varphi_{t,r}Y_{t,r} \tag{CO1.1}\] where $M$ denotes emissions, $E$ denote energy use, $Y$ denotes GDP and $P$ denotes population; $t$ is the index for time, $r$ for region. The carbon intensity of energy use, and the energy intensity of production follow from: \[\psi_{t,r} = g_{t - 1,r}^{\psi}\psi_{t - 1,r} - \alpha_{t - 1,r}\tau_{t - 1,r} \tag{CO2.2}\] \[\varphi_{t,r} = g_{t - 1,r}^{\varphi}\varphi_{t - 1,r} - \alpha_{t - 1,r}\tau_{t - 1,r} \tag{CO2.3}\] where $\tau$ is policy intervention and $\alpha$ is a parameter. The exogenous growth rates $g$ are referred to as the Autonomous Energy Efficiency Improvement (AEEI) and the Autonomous Carbon Efficiency Improvement (ACEI). See Tables AEEI and ACEI for the five alternative scenarios (values for the years 2301-3000 again equal the values for the year 2300). Policy also affects emissions via \[M_{t,r} = \left( \psi_{t,r} - \chi_{t,r}^{\psi} \right)\left( \varphi_{t,r} - \chi_{t,r}^{\varphi} \right)Y_{t,r} \tag{CO2.1}\] \[\chi_{t,r}^{\psi} = \kappa_{\psi}\chi_{t - 1,r} + \left( 1 - \alpha_{t - 1,r} \right)\tau_{t - 1,r}^{\psi} \tag{CO2.4}\] \[\chi_{t,r}^{\varphi} = \kappa_{\varphi}\chi_{t - 1,r} + \left( 1 - \alpha_{t - 1,r} \right)\tau_{t - 1,r}^{\varphi} \tag{CO2.5}\] Thus, the variable $0 < \alpha < 1$ governs which part of emission reduction is permanent (reducing carbon and energy intensities at all future times) and which part of emission reduction is temporary (reducing current energy consumptions and carbon emissions), fading at a rate of $0 < \kappa < 1$. In the base case, $\kappa_{\psi} = \kappa_{\varphi} = 0.9$ and \[\alpha_{t,r} = 1 - \frac{\tau_{t,r}/100}{1 + \tau_{t,r}/100} \tag{CO2.6}\] So that $\alpha = 0.5$ if $\tau = \$100/\text{tC}$. One may interpret the difference between permanent and temporary emission reduction as affecting commercial technologies and capital stocks, respectively. The emission reduction module is a reduced form way of modelling that part of the emission reduction fades away after the policy intervention is reversed, but that another part remains through technological lock-in. Learning effects are described below. The parameters of the model are chosen so that FUND roughly resembles the behaviour of other models, particularly those of the Energy Modeling Forum (Weyant, 2004; Weyant et al., 2006). 
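To make the mechanics of equations (CO2.1)-(CO2.6) concrete, here is a minimal single-region Python sketch. It is only an illustration, not the MimiFUND.jl source (which is written in Julia): the growth factors stand in for the AEEI/ACEI multipliers, the policy variable is assumed to be expressed in units commensurable with the intensities, and all default values are placeholders.

```python
# Toy single-region sketch of the CO2 emissions block, eqs. (CO2.1)-(CO2.6).
# Not the MimiFUND.jl implementation; defaults are placeholders for illustration only.
from dataclasses import dataclass

@dataclass
class IntensityState:
    psi: float       # carbon intensity of energy use
    phi: float       # energy intensity of production
    chi_psi: float   # temporary (fading) reduction of carbon intensity
    chi_phi: float   # temporary (fading) reduction of energy intensity

def alpha(tau):
    """Permanent share of abatement, eq. (CO2.6)."""
    return 1.0 - (tau / 100.0) / (1.0 + tau / 100.0)

def step(state, income, tau, g_psi=0.99, g_phi=0.99, kappa_psi=0.9, kappa_phi=0.9):
    """Advance one year: returns (emissions, next_state) for income Y and intervention tau."""
    a = alpha(tau)
    # Emissions for this year, eq. (CO2.1)
    emissions = (state.psi - state.chi_psi) * (state.phi - state.chi_phi) * income
    # The permanent part of abatement lowers the intensities themselves, eqs. (CO2.2)-(CO2.3)
    next_psi = g_psi * state.psi - a * tau
    next_phi = g_phi * state.phi - a * tau
    # The temporary part decays at rate kappa, eqs. (CO2.4)-(CO2.5)
    next_chi_psi = kappa_psi * state.chi_psi + (1.0 - a) * tau
    next_chi_phi = kappa_phi * state.chi_phi + (1.0 - a) * tau
    return emissions, IntensityState(next_psi, next_phi, next_chi_psi, next_chi_phi)

assert abs(alpha(100.0) - 0.5) < 1e-12  # alpha = 0.5 at a $100/tC intervention, as stated above
```

The assert simply restates the calibration point given above, that half of the abatement becomes permanent when the intervention equals $100/tC.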
The costs of emission reduction $C$ are given by \[\frac{C_{t,r}}{Y_{t,r}} = \frac{\beta_{t,r}\tau_{t,r}^{2}}{H_{t,r}H_{t}^{g}} \tag{CO2.7}\] $H$ denotes the stock of knowledge. Equation (CO2.6) gives the costs of emission reduction in a particular year for emission reduction in that year. In combination with Equations (CO2.2)-(CO2.5), emission reduction is cheaper if smeared out over a longer time period. The parameter $\beta$ follows from \[\beta_{t,r} = 0.784 - 0.084\sqrt{\frac{M_{t,r}}{Y_{t,r}} - \operatorname{}\frac{M_{t,s}}{Y_{t,s}}} \tag{CO2.8}\] That is, emission reduction is relatively expensive for the region that has the lowest emission intensity. The calibration is such that a 10% emission reduction cut in 2003 would cost 1.57% (1.38%) of GDP of the least (most) carbon-intensive region; this is calibrated to Hourcade et al. (1996, 2001). An 80% (85%) emission reduction would completely ruin the economy. Later emission reductions are cheaper by Equations (CO2.7) and (CO2.8). Emission reduction is relatively cheap for regions with high emission intensities. The thought is that emission reduction is cheap in countries that use a lot of energy and rely heavily on fossil fuels, while other countries use less energy and less fossil fuels and are therefore closer to the technological frontier of emission abatement. For relatively small emission reduction, the costs in FUND correspond closely to those reported by other top-down models, but for higher emission reduction, FUND finds higher costs, because FUND does not include backstop technologies, that is, a carbon-free energy supply that is available in unlimited quantities at fixed average costs. The regional and global knowledge stocks follow from \[H_{t,r} = H_{t - 1,r}\sqrt{1 + \gamma_{R}\tau_{t - 1,r}} \tag{CO2.9}\] \[H_{t}^{G} = H_{t - 1}^{G}\sqrt{1 + \gamma_{G}\tau_{t,r}} \tag{CO2.10}\] Knowledge accumulates with emission abatement. More knowledge implies lower emission reduction costs. The parameters $\gamma$ determine which part of the knowledge is kept within the region, and which part spills over to other regions as well. In the base case, $\gamma_{R} = 0.9$ and $\gamma_{G} = 0.1$. The model is similar in structure and numbers to that of Goulder and Schneider (1999) and Goulder and Mathai (2000). Emissions from land use change and deforestation are exogenous, and cannot be mitigated. Numbers are found in Tables CO2F, again for five alternative scenarios. 3.2. Methane (CH₄) Methane emissions are exogenous, specified in Table CH4 (emissions for the years 2301-3000 are equal to emissions in the year 2300). There is a single scenario only, based on IS92a (Leggett et al., 1992). The costs of emission reduction are quadratic. Table OC specifies the parameters, which are calibrated to USEPA (2003). 3.3. Nitrous oxide (N₂O) Nitrous oxide emissions are exogenous, specified in Table N2O (emissions for the years 2301-3000 are equal to emissions in the year 2300). There is a single scenario only, based on IS92a (Leggett et al., 1992). The costs of emission reduction are quadratic. Table OC specifies the parameters, which are calibrated to USEPA (2003). 3.4. Sulfurhexafluoride (SF₆) SF₆ emissions are linear in GDP and GDP per capita. Table SF6 gives the parameters. The numbers for 1990 and 1995 are estimated from IEA data (http://data.iea.org/ieastore/product.asp?dept_id=101&pf_id=305). There is no option to reduce SF₆ emissions. 3.5. 
Dynamic Biosphere Emissions from the terrestrial biosphere follow \[E_{t}^{B} = \beta\left( T_{t} - T_{2010} \right)\frac{B_{t}}{B_{\mathrm{\max}}} \tag{DB.1}\] \[B_{t} = B_{t - 1} - E_{t - 1}^{B} \tag{DB.2}\] $E^{B}$ are emissions (in million metric tonnes of carbon); $t$ denotes time; $T$ is the global mean temperature (in degree Celsius); $B_{t}$ is the remaining stock of potential emissions (in million metric tonnes of carbon, GtC); $B_{\mathrm{\max}}$ is the total stock of potential emissions; $B_{\mathrm{\max}} = 1,900\ GtC$; $\beta$ is a parameter; $\beta = 2.6\frac{\text{GtC}}{C}$ (with a gamma distribution with shape=4.9 and scale=662.8). The model is calibrated to the review of (Denman et al. 2007). Emissions from the terrestrial biosphere before the year 2010 are zero. 4.1. Concentrations Methane, nitrous oxide and sulphur hexafluoride are taken up in the atmosphere, and then geometrically depleted: \[C_{t} = C_{t - 1} + \alpha E_{t} - \beta\left( C_{t - 1} - C_{\text{pre}} \right) \tag{C.1}\] where $C$ denotes concentration, $E$ emissions, $t$ year, and $\text{pre}$ pre-industrial. Table C displays the parameters $\alpha$ and $\beta$ for all gases. Parameters are taken from Forster et al. (2007). The atmospheric concentration of carbon dioxide follows from a five-box model: \[Box_{i,t} = \rho_{i}Box_{i,t - 1} + 0.000471\alpha_{i}E_{t} \tag{C.2a}\] \[C_{t} = \sum_{i = 1}^{5}{\alpha_{i}\text{Bo}x_{i,t}} \tag{C.2b}\] where $\alpha_{i}$ denotes the fraction of emissions $E$ (in million metric tonnes of carbon) that is allocated to $Box_{i}$ (0.13, 0.20, 0.32, 0.25 and 0.10, respectively) and $\rho$ the decay-rate of the boxes ($\rho = exp( - \frac{1}{\mathrm{\text{lifetime}}})$, with life-times infinity, 363, 74, 17 and 2 years, respectively). The model is due to Maier-Reimer and Hasselmann (1987), its parameters are due to Hammitt et al. (1992). Thus, 13% of total emissions remains forever in the atmosphere, while 10% is—on average—removed in two years. Carbon dioxide concentrations are measured in parts per million by volume. 4.2. Radiative forcing Radiative forcing is specified as follows: \[\begin{aligned} RF_{t} = &5.35\ln\frac{\text{CO}2_{t}}{275}\\ &+ 0.036 \times 1.4\left( \sqrt{\text{CH}4_{t}} - \sqrt{790} \right) + 0.12\left( \sqrt{N2O_{t}} - \sqrt{285} \right) \\ &-0.47\ln\left( 1 + 2.01 \times 10^{- 5}\text{CH}4_{t}^{0.75}285^{0.75} + 5.31 \times 10^{- 15}\text{CH}4_{t}^{2.52}285^{1.52} \right) \\ &- 0.47\ln\left( 1 + 2.01 + 10^{- 5}790^{0.75}N2O_{t}^{0.75} + 5.31 \times 10^{- 15}790^{2.52}N2O_{t}^{1.52} \right) \\ &+ 2 \times 0.47\ln\left( 1 + 2.01 \times 10^{- 5}790^{0.75}285^{0.75} + 5.31 \times 10^{- 15}790^{2.52}285^{1.52} \right) \\ &+ 0.00052\left( \text{SF}6_{t} - 0.04 \right) + rfSO2_{t} \end{aligned} \tag{C.3}\] Parameters are taken from Ramaswamy et al. (2001) and Forster et al. (2007) for the indirect effect of methane on tropospheric ozone. Radiative forcing from SO2 at time t ($\text{rfSO}2_{t}$) is exogenous; the FUND scenario uses the forcing from RCP85 and the SRES scenarios use the forcing as interpreted by IMAGE 2.2. 4.3. Temperature and sea level rise The global mean temperature $T$ is governed by a geometric build-up to its equilibrium (determined by radiative forcing $RF$). 
In the base case, global mean temperature $T$ rises in equilibrium by 3.0°C for a doubling of carbon dioxide equivalents, so: \[T_{t} = \left( 1 - \frac{1}{\varphi} \right)T_{t - 1} + \frac{1}{\varphi}\frac{CS}{5.35\ln 2}RF_{t} \tag{C.4}\] where $CS$ is climate sensitivity, set to 3.0 (with a gamma distribution with shape=6.48 and scale=0.55). $\varphi$ is the e-folding time and set to \[\varphi = \max\left( \alpha + \beta^{l}CS + \beta^{q}CS^{2},1 \right) \tag{C.5}\] where $\alpha$ is set to -42.7, $\beta^{l}$ is set to 29.1 and $\beta^{q}$ is set to 0.001, such that the best guess e-folding time for a climate sensitvitiy of 3.0 is 44 years. Regional temperature is derived by multiplying the global mean temperature by a fixed factor (see Table RT) which corresponds to the spatial climate change pattern averaged over 14 GCMs (Mendelsohn et al. 2000). Global mean sea level is also geometric, with its equilibrium level determined by the temperature and a life-time of 500 years: \[S_{t} = \left( 1 - \frac{1}{\rho} \right)S_{t - 1} + \gamma\frac{1}{\rho}T_{t} \tag{C.6}\] where $\rho = 500$ (with a triangular distribution bounded by 250 and 1000) is the e-folding time. $\gamma = 2$ (with a gamma distribution with shape=6 and scale=0.4) is sea-level sensitivity to temperature. Temperature and sea level are calibrated to the best guess temperature and sea level for the IS92a scenario of Kattenberg et al. (1996). The code for the climate dynamics component can be found at https://github.com/fund-model/MimiFUND.jl/blob/master/src/components/ClimateDynamicsComponent.jl. The calibration code for the climate dynamics component can be found at https://github.com/fund-model/MimiFUND.jl/blob/master/calibration/climate/fundcscalibration.ipynb. 5.1. Agriculture The impacts of climate change on agriculture at time $t$ in region $r$ are split into three parts: impacts due to the rate of climate change $A_{t,r}^{r}$; impacts due to the level of climate change $A_{t,r}^{l}$; and impacts from carbon dioxide fertilisation $A_{t,r}^{f}$: \[A_{t,r} = A_{t,r}^{r} + A_{t,r}^{l} + A_{t,r}^{f} \tag{A.1}\] The first part (rate) is always negative: As farmers have imperfect foresight and are locked into production practices, climate change implies that farmers are maladapted. Faster climate change means greater damages. The third part (fertilization) is always positive. CO₂ fertilization means that plants grow faster and use less water. The second part (level) can be positive or negative. There is an optimal climate for agriculture. If climate change moves a region closer to (away from) the optimum, impacts are positive (negative); and impacts are smaller nearer to the optimum. 
For the impact of the rate of climate change (i.e., the annual change of climate) on agriculture, the assumed model is: \[A_{t,r}^{r} = \alpha_{r}\left( \frac{\Delta T_{t}}{0.04} \right)^{\beta} + \left( 1 - \frac{1}{\rho} \right)A_{t - 1,r}^{r} \tag{A.2}\] $A^{r}$ denotes damage in agricultural production as a fraction due the rate of climate change by time and region; $r$ denotes region; $\Delta T$ denotes the change in the regional mean temperature (in degrees Celsius) between time $t$ and $t - 1$; $\alpha$ is a parameter, denoting the regional change in agricultural production for an annual warming of 0.04°C (see Table A, column 2-3); $\beta$ = 2.0 (1.5-2.5) is a parameter, equal for all regions, denoting the non-linearity of the reaction to temperature; $\beta$ is an expert guess; $\rho$ = 10 (5-15) is a parameter, equal for all regions, denoting the speed of adaptation; $\rho$ is an expert guess. The model for the impact due to the level of climate change is: \[A_{t,r}^{l} = \delta_{r}^{l}T_{t} + \delta_{r}^{q}T_{t}^{2} \tag{A.3}\] $A^{l}$ denotes the damage in agricultural production as a fraction due to the level of climate change by time and region; $T$ denotes the global mean temperature above pre-industrial (in degree Celsius) at time $t$; $\delta_{r}^{l}$ and $\delta_{r}^{q}$ are parameters (see Table A), that follow from the regional change (in per cent) in agricultural production for a warming of 2.5°C above today or 3.2°C above pre-industrial and the the optimal temperature (in degree Celsius) for agriculture in each region. CO₂ fertilisation has a positive, but saturating effect on agriculture, specified by \[A_{t,r}^{f} = \frac{\gamma_{r}}{\ln2}\ln\frac{\text{CO}2_{t}}{275} \tag{A.4}\] $A^{f}$ denotes damage in agricultural production as a fraction due to the CO2 fertilisation by time and region; $CO2$ denotes the atmospheric concentration of carbon dioxide (in parts per million by volume); 275 ppm is the pre-industrial concentration; $\gamma$ is a parameter that gives the impact of a doubling of CO2 concentrations (see Table A, column 8-9). The parameters in Table A are calibrated, following the procedure described in Tol (2002a), to the results of Kane et al. (1992), Reilly et al. (1994), Morita et al. (1994), Fischer et al. (1996), and Tsigas et al. (1996). These studies all use a global computable general equilibrium model, and report results with and without adaptation, and with and without CO₂ fertilisation. The regional results from these studies are assumed to hold for each country in the respective regions. They are averaged over the studies and the climate scenarios for each country, and aggregated to the FUND regions. The standard deviations in Table A follow from the spread between studies and scenarios. Equation (A.4) follows from the difference in results with and without CO2 fertilization. Equation (A.3) follows from the results with full adaptation. Equation (A.2) follows from the difference in results with and without adaptation. Equations (A.1)-(A.4) express the impact of climate change as a percentage of agricultural production. In order to express this as a percentage of income, we need to know the share of agricultural production in total income. 
Equations (A.1)-(A.4) express the impact of climate change as a percentage of agricultural production. In order to express this as a percentage of income, we need to know the share of agricultural production in total income. This is assumed to fall with per capita income, that is, \[\frac{\text{GAP}_{t,r}}{Y_{t,r}} = \frac{\text{GAP}_{1990,r}}{Y_{1990,r}}\left( \frac{y_{1990,r}}{y_{t,r}} \right)^{\epsilon} \tag{A.5}\] $\text{GAP}$ denotes gross agricultural product (in 1995 US dollar per year) by time and region; $Y$ denotes gross domestic product (in 1995 US dollar per year) by time and region; $y$ denotes gross domestic product per capita (in 1995 US dollar per person per year) by time and region; $\epsilon$ = 0.31 (0.15-0.45) is a parameter; it is the income elasticity of the share of agriculture in the economy; it is taken from Tol (2002b), who regressed the regional share in agriculture on per capita income, using 1995 data from the World Resources Institute (http://earthtrends.wri.org).

The code for the agricultural impacts component can be found at https://github.com/fund-model/MimiFUND.jl/blob/master/src/components/ImpactAgricultureComponent.jl.

5.2. Forestry

The model is: \[F_{t,r} = \alpha_{r}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\epsilon}\left( 0.5\left( \frac{T_{t}}{1.0} \right)^{\beta} + 0.5\gamma\ln\left( \frac{\mathrm{CO2}_{t}}{275} \right) \right) \tag{F.1}\] $F$ denotes the change in forestry consumer and producer surplus (as a share of total income); $y$ denotes per capita income (in 1995 US dollar per person per year); $T$ denotes the global mean temperature (in degree centigrade); $\alpha$ is a parameter that measures the impact of a 1ºC global warming on economic welfare; see Table EFW; $\epsilon$ = 0.31 (0.11-0.51) is a parameter, and equals the income elasticity for agriculture; $\beta$ = 1 (0.5-1.5) is a parameter; this is an expert guess; $\gamma$ = 0.44 (0.29-0.87) is a parameter; $\gamma$ is such that a doubling of the atmospheric concentration of carbon dioxide would lead to a change of forest value of 15% (10-30%); this parameter is taken from Gitay et al. (2001).

The parameter $\alpha$ is estimated as the average of the estimates by Perez-Garcia et al. (1995) and Sohngen et al. (2001). Perez-Garcia et al. (1995) present results for four different climate scenarios and two management scenarios, while Sohngen et al. (2001) use two different climate scenarios and two alternative ecological scenarios. The results are mapped to the FUND regions assuming that the impact is uniform relative to GDP. The impact is averaged within the study results, and then the weighted average between the two studies is computed and shown in Table EFW. The standard deviation follows.
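A compact illustration of Eq. (F.1): half of the forestry impact scales with warming and half with the logarithmic CO₂ term, and the whole expression grows with per capita income. The benchmark αr below is a placeholder rather than a Table EFW value.

```julia
# Sketch of the forestry impact, Eq. (F.1); αr is a placeholder for the Table EFW benchmark.
ϵ, β, γ = 0.31, 1.0, 0.44

forestry_impact(αr, y, y1990, T, co2) =
    αr * (y / y1990)^ϵ * (0.5 * (T / 1.0)^β + 0.5 * γ * log(co2 / 275.0))

# Example: 1.5 °C of warming, 420 ppm CO2, per capita income doubled since 1990.
println(forestry_impact(0.002, 20_000.0, 10_000.0, 1.5, 420.0))
```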
5.3. Water resources

The impact of climate change on water resources follows: \[W_{t,r} = \min\left\{ \alpha_{r}Y_{1990,r}\left( 1 - \tau \right)^{t - 2000}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\beta}\left( \frac{P_{t,r}}{P_{1990,r}} \right)^{\eta}\left( \frac{T_{t}}{1.0} \right)^{\gamma},\frac{Y_{t,r}}{10} \right\} \tag{W.1}\] $W$ denotes the change in water resources (in 1995 US dollar) at time $t$ in region $r$; $y$ denotes per capita income (in 1995 US dollar) at time $t$ in region $r$; $P$ denotes population at time $t$ in region $r$; $\alpha$ is a parameter (in percent of 1990 GDP per degree Celsius) that specifies the benchmark impact; see Table EFW; $\beta$ = 0.85 (0.15, >0) is a parameter that specifies how impacts respond to economic growth; $\eta$ = 0.85 (0.15, >0) is a parameter that specifies how impacts respond to population growth; $\gamma$ = 1 (0.5, >0) is a parameter that determines the response of impact to warming; $\tau$ = 0.005 (0.005, >0) is a parameter that measures technological progress in water supply and demand. These parameters are from calibrating FUND to the results of Downing et al. (1995, 1996).

5.4. Energy consumption

For space heating, the model is: \[SH_{t,r} = \frac{\alpha_{r}Y_{1990,r}\frac{\operatorname{atan}T_{t}}{\operatorname{atan}1.0}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\epsilon}\left( \frac{P_{t,r}}{P_{1990,r}} \right)}{\prod_{s = 1990}^{t}{\text{AEEI}_{s,r}}} \tag{E.1}\] $\text{SH}$ denotes the decrease in expenditure on space heating (in 1995 US dollar) at time $t$ in region $r$; $t$ denotes time; $Y$ denotes income (in 1995 US dollar) at time $t$ in region $r$; $y$ denotes per capita income (in 1995 US dollar per person per year) at time $t$ in region $r$; $P$ denotes population size at time $t$ in region $r$; $\alpha$ is a parameter (in dollar per degree Celsius) that specifies the benchmark impact; see Table EFW, column 6-7; $\epsilon$ is a parameter; it is the income elasticity of space heating demand; $\epsilon$ = 0.8 (0.1, >0, <1); $\text{AEEI}$ is a parameter (cf. Tables AEEI and Equation CO2.3); it is the Autonomous Energy Efficiency Improvement, measuring technological progress in energy provision; the global average value is about 1% per year in 1990, converging to 0.2% in 2200; its standard deviation is set at a quarter of the mean. These parameters are from calibrating FUND to the results of Downing et al. (1995, 1996). Savings on space heating are assumed to saturate. The income elasticity of heating demand is taken from Hodgson and Miller (1995, cited in Downing et al., 1996), and estimated for the UK. Space heating demand is linear in the number of people for want of scenarios of number of households and house sizes. Energy efficiency improvements in space heating are assumed to be equal to the average energy efficiency improvements in the economy.

For space cooling, the model is: \[SC_{t,r} = \frac{\alpha_{r}Y_{1990,r}\left( \frac{T_{t}}{1.0} \right)^{\beta}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\epsilon}\left( \frac{P_{t,r}}{P_{1990,r}} \right)}{\prod_{s = 1990}^{t}{\text{AEEI}_{s,r}}} \tag{E.2}\] $\text{SC}$ denotes the increase in expenditure on space cooling (1995 US dollar) at time $t$ in region $r$; $\alpha$ is a parameter (see Table EFW, column 8-9); $\beta$ is a parameter; $\beta$ = 1.5 (1.0-2.0); $\epsilon$ is a parameter; it is the income elasticity of space cooling demand; $\epsilon$ = 0.8 (0.6-1.0). These parameters are from calibrating FUND to the results of Downing et al. (1995, 1996).
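The two energy equations differ only in how they respond to warming: heating savings saturate through the arctangent, while cooling costs grow faster than linearly. The sketch below makes that contrast explicit; the benchmark values and the cumulative AEEI factor are placeholders, not Table EFW or Table AEEI entries.

```julia
# Sketch of the space heating and cooling terms, Eqs. (E.1)-(E.2). αr and the cumulative
# AEEI factor are placeholders for the Table EFW and Table AEEI inputs.
ϵ, β = 0.8, 1.5

heating(αr, Y1990, T, y, y1990, P, P1990, aeei) =
    αr * Y1990 * (atan(T) / atan(1.0)) * (y / y1990)^ϵ * (P / P1990) / aeei   # Eq. (E.1)

cooling(αr, Y1990, T, y, y1990, P, P1990, aeei) =
    αr * Y1990 * (T / 1.0)^β * (y / y1990)^ϵ * (P / P1990) / aeei             # Eq. (E.2)

# Example: 1.5 °C of warming, income and population 20% above 1990, cumulative AEEI ≈ 1.3.
println("heating savings: ", heating(0.01, 1.0e3, 1.5, 1.2, 1.0, 1.2, 1.0, 1.3))
println("cooling costs:   ", cooling(0.01, 1.0e3, 1.5, 1.2, 1.0, 1.2, 1.0, 1.3))
```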
Space cooling is assumed to be more than linear in temperature because cooling demand accelerates as it gets warmer. The income elasticity of cooling demand is taken from Hodgson and Miller (1995, cited in Downing et al., 1996), and estimated for the UK. Space cooling demand is linear in the number of people for want of scenarios of number of households and house sizes. Energy efficiency improvements in space cooling are assumed to be equal to the average energy efficiency improvements in the economy.

5.5. Sea level rise

Table SLR shows the accumulated loss of drylands and wetlands for a one metre rise in sea level. The data are taken from Hoozemans et al. (1993), supplemented by data from Bijlsma et al. (1995), Leatherman and Nicholls (1995) and Nicholls and Leatherman (1995), following the procedures of Tol (2002a).

Potential cumulative dryland loss without protection is assumed to be a function of sea level rise: \[{\overset{\overline{}}{\text{CD}}}_{t,r} = \min\left\lbrack \delta_{r}S_{t}^{\gamma_{r}},\zeta_{r} \right\rbrack \tag{SLR.1}\] ${\overset{\overline{}}{\text{CD}}}_{t,r}$ is the potential cumulative dryland lost at time $t$ in region $r$ that would occur without protection; $\delta_{r}$ is the dryland loss due to one metre sea level rise (in square kilometre per metre) in region $r$; $S_{t}$ is sea level rise above pre-industrial levels at time $t$; note that it is assumed to be equal for all regions; $\gamma_{r}$ is a parameter, calibrated to a digital elevation model; $\zeta_{r}$ is the maximum dryland loss in region $r$, which is equal to the area in the year 2000.

Potential dryland loss in the current year without protection is given by potential cumulative dryland loss without protection minus actual cumulative dryland lost in previous years: \[{\overset{\overline{}}{D}}_{t,r} = {\overset{\overline{}}{\text{CD}}}_{t,r} - CD_{t - 1,r} \tag{SLR.2}\] ${\overset{\overline{}}{D}}_{t,r}$ is potential dryland loss in year $t$ and region $r$ without protection; $CD_{t,r}$ is the actual cumulative dryland lost at time $t$ in region $r$.

Actual dryland loss in the current year depends on the level of protection: \[D_{t,r} = \left( 1 - P_{t,r} \right){\overset{\overline{}}{D}}_{t,r} \tag{SLR.3}\] $D_{t,r}$ is dryland loss in year $t$ and region $r$; $P_{t,r}$ is the fraction of the coastline protected in year $t$ and region $r$; ${\overset{\overline{}}{D}}_{t,r}$ is potential dryland loss in year $t$ and region $r$ without protection.

Actual cumulative dryland loss is given by: \[\text{CD}_{t,r} = CD_{t - 1,r} + D_{t,r} \tag{SLR.4}\] $CD_{t,r}$ is the actual cumulative dryland lost at time $t$ in region $r$; $D_{t,r}$ is dryland loss in year $t$ and region $r$.

The value of dryland is assumed to be linear in income density ($/km²): \[VD_{t,r} = \varphi\left( \frac{\frac{Y_{t,r}}{A_{t,r}}}{YA_{0}} \right)^{\epsilon} \tag{SLR.5}\] $\text{VD}$ is the unit value of dryland (in million dollar per square kilometre) at time $t$ in region $r$; $Y$ is the total income (in billion dollar) at time $t$ in region $r$; $A$ is the area (in square kilometre) at time $t$ of region $r$; $\varphi$ is a parameter; $\varphi$ = 4 (2, >0) million dollar per square kilometre (Darwin et al., 1995); $YA_{0}$ = 0.635 (million dollar per square kilometre) is a normalisation constant, the average income density of the OECD in 1990; $\epsilon$ is a parameter, the income density elasticity of land value; $\epsilon$ = 1 (0.25).
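Equations (SLR.1)-(SLR.4) form a small bookkeeping loop: sea level rise sets the potential cumulative loss, protection scales down this year's loss, and the remainder accumulates. The sketch below implements that loop for a single region with placeholder parameters.

```julia
# Sketch of the dryland-loss bookkeeping, Eqs. (SLR.1)-(SLR.4), for one region.
# δ, γr and ζ are placeholders for the regional parameters.
δ, γr, ζ = 300.0, 1.0, 5.0e4           # km² per metre, exponent, maximum loss (km²)

potential_cumulative_loss(S) = min(δ * S^γr, ζ)                 # Eq. (SLR.1)

function update_dryland(CD_prev, S, protection)
    D_potential = potential_cumulative_loss(S) - CD_prev        # Eq. (SLR.2)
    D_actual    = (1 - protection) * D_potential                # Eq. (SLR.3)
    return CD_prev + D_actual, D_actual                         # Eq. (SLR.4)
end

# Example: 0.3 m of sea level rise, no prior losses, half of the coastline protected.
CD, D = update_dryland(0.0, 0.3, 0.5)
println("dryland lost this year: ", D, " km²; cumulative: ", CD, " km²")
```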
Wetland loss is assumed to be a linear function of sea level rise: \[W_{t,r} = \omega_{r}^{S}\Delta S_{t} + \omega_{r}^{M}P_{t,r}\Delta S_{t} \tag{SLR.6}\] $W_{t,r}$ is the wetland lost at time $t$ in region $r$; $P_{t,r}$ is the fraction of coast protected against sea level rise at time $t$ in region $r$; $\Delta S_{t}$ is sea level rise at time $t$; note that it is assumed to be equal for all regions; $\omega^{S}$ is a parameter, the annual unit wetland loss due to sea level rise (in square kilometre per metre) in region $r$; note that it is assumed to be constant over time; $\omega^{M}$ is a parameter, the annual unit wetland loss due to coastal squeeze (in square kilometre per metre) in region $r$; note that it is assumed to be constant over time.

Cumulative wetland loss is given by \[W_{t,r}^{C} = \min\left( W_{t - 1,r}^{C} + W_{t - 1,r},W_{r}^{M} \right) \tag{SLR.7}\] $W^{C}$ is cumulative wetland loss (in square kilometre) at time $t$ in region $r$; $W^{M}$ is a parameter, the total amount of wetland that is exposed to sea level rise; this is assumed to be smaller than the total amount of wetlands in 1990. Wetland loss (SLR.6) goes to zero if all wetland threatened by sea-level rise in a region is lost.

Wetland value is assumed to increase with income and population density, and fall with wetland size: \[VW_{t,r} = \alpha\left( \frac{y_{t,r}}{y_{0}} \right)^{\beta}\left( \frac{d_{t,r}}{d_{0}} \right)^{\gamma}\left( \frac{W_{1990,r} - W_{t,r}^{C}}{W_{1990,r}} \right)^{\delta} \tag{SLR.8}\] $\text{VW}$ is the wetland value (in dollar per square kilometre) at time $t$ in region $r$; $y$ is per capita income (in dollar per person per year) at time $t$ in region $r$; $d$ is population density (in person per square kilometre) at time $t$ in region $r$; $W^{C}$ is cumulative wetland loss (in square kilometre) at time $t$ in region $r$; $W_{1990}$ is the total amount of wetlands in 1990 in region $r$; $\alpha$ is a parameter, the net present value of the future stream of wetland services; note that we thus account for present and future wetland values in the year that the wetland is lost; $\alpha = \alpha^{'}\frac{1 + \rho + \eta g_{t,r}}{\rho + \eta g_{t,r}} = \alpha^{'}\frac{1 + 0.03 + 1 \times 0.02}{0.03 + 1 \times 0.02} = 21\alpha^{'}$; $\alpha^{'}$ = 280,000 $/km², with a standard deviation of 187,000 $/km²; $\alpha$ is the average of the meta-analysis of Brander et al. (2006); the standard deviation is based on the coefficient of variation of the intercept in their analysis; $\beta$ is a parameter, the income elasticity of wetland value; $\beta$ = 1.16 (0.46, >0); this value is taken from Brander et al. (2006); $y_{0}$ is a normalisation constant; $y_{0}$ = 25,000 $/p/yr (Brander, personal communication); $d_{0}$ is a normalisation constant; $d_{0}$ = 27.59; $\gamma$ is a parameter, the population density elasticity of wetland value; $\gamma$ = 0.47 (0.12, >0, <1); this value is taken from Brander et al. (2006); $\delta$ is a parameter, the size elasticity of wetland value; $\delta$ = -0.11 (0.05, >-1, <0); this value is taken from Brander et al. (2006).

If dryland gets lost, the people living there are forced to move. The number of forced migrants follows from the amount of land lost and the average population density in the region. The value of this is set at 3 (1.5, >0) times the regional per capita income per migrant (Tol, 1995). In the receiving country, costs equal 40% (20%, >0) of per capita income per migrant (Cline, 1992).
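The wetland value of Eq. (SLR.8) bundles the capitalisation factor α = α′(1 + ρ + ηg)/(ρ + ηg) ≈ 21 α′ derived above into a single unit value. The sketch below reproduces that arithmetic; the income, density and remaining-wetland inputs of the example are placeholders.

```julia
# Sketch of the wetland unit value, Eq. (SLR.8), including the capitalisation factor
# α ≈ 21 α′ worked out above. Example inputs are placeholders.
αprime, β, γ, δ = 280_000.0, 1.16, 0.47, -0.11
y0, d0          = 25_000.0, 27.59
ρ, η, g         = 0.03, 1.0, 0.02

α = αprime * (1 + ρ + η * g) / (ρ + η * g)      # = 21 α′ for the best-guess ρ, η, g

wetland_value(y, d, Wc, W1990) =
    α * (y / y0)^β * (d / d0)^γ * ((W1990 - Wc) / W1990)^δ

# Example: 20,000 $/person/yr income, 50 people/km², 10% of 1990 wetlands already lost.
println(wetland_value(20_000.0, 50.0, 100.0, 1_000.0))
```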
Table SLR displays the annual costs of fully protecting all coasts against a one metre sea level rise in a hundred years' time. If sea level were to rise more slowly, annual costs are assumed to be proportionally lower; that is, costs of coastal protection are linear in sea level rise.

The level of protection, that is, the share of the coastline protected, is based on a cost-benefit analysis: \[P_{t,r} = \max\left\{ 0,1 - \frac{1}{2}\left( \frac{\mathrm{\text{NPV}}VP_{t,r} + \mathrm{\text{NPV}}VW_{t,r}}{\mathrm{\text{NPV}}VD_{t,r}} \right) \right\} \tag{SLR.9}\] $P$ is the fraction of the coastline to be protected; $\mathrm{\text{NPV}}\text{VP}$ is the net present value of the protection if the whole coast is protected (defined below); $\mathrm{\text{NPV}}\text{VW}$ is the net present value of the wetland lost due to coastal squeeze if the whole coast is protected (defined below); $\mathrm{\text{NPV}}\text{VD}$ is the net present value of the land lost without any coastal protection (defined below). Equation (SLR.9) is due to Fankhauser (1994). See below.

Table SLR reports average costs per year over the next century. $\mathrm{\text{NPV}}\text{VP}$ is calculated assuming annual costs to be constant. This is based on the following. Firstly, the coastal protection decision makers anticipate a linear sea level rise. Secondly, coastal protection entails large infrastructural works which last for decades. Thirdly, the considered costs are direct investments only, and technologies for coastal protection are mature. Throughout the analysis, a pure rate of time preference, $\rho$, of 1% per year is used. The actual discount rate lies thus 1% above the growth rate of the economy, $g$. The net present costs of protection $PC$ equal \[\mathrm{\text{NPV}}VP_{t,r} = \sum_{s = t}^{\infty}{\left( \frac{1}{1 + \rho + \eta g_{t,r}} \right)^{s - t}\pi_{r}\Delta S_{t}} = \frac{1 + \rho + \eta g_{t,r}}{\rho + \eta g_{t,r}}\pi_{r}\Delta S_{t} \tag{SLR.10}\] $\mathrm{\text{NPV}}\text{VP}$ is the net present costs of coastal protection at time $t$ in region $r$; $\pi_{r}$ is the annual unit cost of coastal protection (in million dollar per vertical metre) in region $r$; note that it is assumed to be constant over time; $g$ is the growth rate of per capita income at time $t$ in region $r$; $\rho$ is a parameter, the rate of pure time preference; $\rho$ = 0.03; $\eta$ is a parameter, the consumption elasticity of marginal utility; $\eta$ = 1.

$\mathrm{\text{NPV}}\text{VW}$ is the net present value of the wetlands lost due to full coastal protection. Wetland values are assumed to rise in line with Equation (SLR.8). All growth rates and the rate of wetland loss are as in the current year. The net present costs of wetland loss $\text{WL}$ follow from \[\mathrm{\text{NPV}}VW_{t,r} = \sum_{s = t}^{\infty}{W_{t,r}VW_{s,r}\left( \frac{1}{1 + \rho + \eta g_{t,r}} \right)^{s - t}} = W_{t,r}VW_{t,r}\frac{1 + \rho + \eta g_{t,r}}{\rho + \eta g_{t,r} - \beta g_{t,r} - \gamma p_{t,r} - \delta w_{t,r}} \tag{SLR.11}\] $\mathrm{\text{NPV}}\text{VW}$ denotes the net present value of wetland loss at time $t$ in region $r$; $\omega_{r}$ is the annual unit wetland loss due to full coastal protection (in square kilometre per metre sea level rise) in region $r$; note that it is assumed to be constant over time; $p$ is the population growth rate at time $t$ in region $r$; $w$ is the growth rate of wetland at time $t$ in region $r$; note that wetlands shrink, so that $w < 0$.

$\mathrm{\text{NPV}}\text{VD}$ denotes the net present value of the dryland lost if no protection takes place. Land values are assumed to rise at the rate of income growth. All growth rates and the rate of wetland loss are as in the current year. The net present costs of dryland loss are \[\mathrm{\text{NPV}}VD_{t,r} = \sum_{s = t}^{\infty}{{\overset{\overline{}}{D}}_{t,r}VD_{t,r}\left( \frac{1 + \epsilon d_{t,r}}{1 + \rho + \eta g_{t,r}} \right)^{s - t}} = {\overset{\overline{}}{D}}_{t,r}VD_{t,r}\frac{1 + \rho + \eta g_{t,r}}{\rho + \eta g_{t,r} - \epsilon d_{t,r}} \tag{SLR.12}\] $\mathrm{\text{NPV}}\text{VD}$ is the net present value of dryland loss at time $t$ in region $r$; $\overset{\overline{}}{D}$ is the current dryland loss without protection at time $t$ in region $r$; $\text{VD}$ is the current dryland value; $\rho$ is a parameter, the rate of pure time preference; $\rho = 0.03$; $\eta$ is a parameter, the consumption elasticity of marginal utility; $\eta = 1$; $\epsilon$ is a parameter, the income elasticity of dryland value; $\epsilon = 1.0$, with a standard deviation of 0.2; $d$ is the current income density growth rate at time $t$ in region $r$. Protection levels are bounded between 0 and 1.
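Equations (SLR.9)-(SLR.12) boil down to three closed-form net present values and a ratio. The sketch below strings them together; all numerical inputs in the example (unit protection cost, current losses and values, growth rates) are placeholders.

```julia
# Sketch of the cost-benefit protection rule, Eqs. (SLR.9)-(SLR.12). All example
# inputs are placeholders; growth rates are held at their current-year values.
ρ, η = 0.03, 1.0

npv_protection(πr, ΔS, g) = πr * ΔS * (1 + ρ + η * g) / (ρ + η * g)             # Eq. (SLR.10)
npv_wetland(W, VW, g, p, w, β, γ, δ) =
    W * VW * (1 + ρ + η * g) / (ρ + η * g - β * g - γ * p - δ * w)              # Eq. (SLR.11)
npv_dryland(Dbar, VD, g, d, ϵ) = Dbar * VD * (1 + ρ + η * g) / (ρ + η * g - ϵ * d)  # Eq. (SLR.12)

protection_fraction(NPVVP, NPVVW, NPVVD) =
    clamp(1 - 0.5 * (NPVVP + NPVVW) / NPVVD, 0.0, 1.0)                          # Eq. (SLR.9)

# Example: 2% income growth, 1% population growth, wetlands shrinking at 1% per year.
NPVVP = npv_protection(10.0, 0.005, 0.02)
NPVVW = npv_wetland(2.0, 5.0, 0.02, 0.01, -0.01, 1.16, 0.47, -0.11)
NPVVD = npv_dryland(5.0, 4.0, 0.02, 0.02, 1.0)
println("fraction of coastline protected: ", protection_fraction(NPVVP, NPVVW, NPVVD))
```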
5.6. Ecosystems

Tol (2002a) assesses the impact of climate change on ecosystems, biodiversity, species, landscape etcetera based on the "warm-glow" effect. Essentially, the value which people are assumed to place on such impacts is independent of any real change in ecosystems, of the location and time of the presumed change, etcetera – although the probability of detection of impacts by the "general public" is increasing in the rate of warming. This value is specified as \[E_{t,r} = \alpha P_{t,r}\frac{\frac{y_{t,r}}{y_{r}^{b}}}{1 + \frac{y_{t,r}}{y_{r}^{b}}}\frac{\frac{\Delta T_{t}}{\tau}}{1 + \frac{\Delta T_{t}}{\tau}}\left( 1 - \sigma + \sigma\frac{B_{0}}{B_{t}} \right) \tag{E.1}\] $E$ denotes the value of the loss of ecosystems (in 1995 US dollar) at time $t$ in region $r$; $y$ denotes per capita income (in 1995 dollar per person per year) at time $t$ in region $r$; $P$ denotes population size (in millions) at time $t$ in region $r$; $\Delta T$ denotes the change in temperature (in degrees Celsius); $B$ is the number of species, which implies that the value increases as the number of species falls – using Weitzman's (1998) ranking criterion and Weitzman's (1992, 1993) biodiversity index, the scarcity value of biodiversity is inversely proportional to the number of species; $\alpha$ = 50 (0-100, >0) is a parameter such that the value equals $50 per person if per capita income equals the OECD average in 1990 (Pearce and Moran, 1994); $y^{b}$ is a parameter; $y^{b}$ = $30,000, with a standard deviation of $10,000; it is normally distributed, but knotted at zero; $\tau$ = 0.025ºC is a parameter; $\sigma$ = 0.05 (triangular distribution, >0, <1) is a parameter, based on an expert guess; and $B_{0}$ = 14,000,000 is a parameter.
The number of species follows \[B_{t} = \max\left\{ \frac{B_{0}}{100},B_{t - 1}\left( 1 - \rho - \gamma\frac{\Delta T^{2}}{\tau^{2}} \right) \right\} \tag{E.2}\] $\rho$ = 0.003 (0.001-0.005, >0.0) is a parameter; $\gamma$ = 0.001 (0.0-0.002, >0.0) is a parameter. These parameters are expert guesses. The number of species is assumed to be constant until the year 2000 at 14,000,000 species.

5.7. Human health: Diarrhoea

The number of additional diarrhoea deaths $D_{t,r}^{d}$ in region $r$ and time $t$ is given by \[D_{t,r}^{d} = \mu_{r}^{d}P_{t,r}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\epsilon}\left( \frac{T_{t,r}}{T_{\mathrm{pre - industrial},r}} \right)^{\eta} \tag{HD.1}\] $P_{t,r}$ denotes population, $r$ indexes region, $t$ indexes time, $y_{t,r}$ is the per capita income in region $r$ and year $t$ in 1995 US dollars, $T_{t,r}$ is regional temperature in year $t$, in degrees Celsius (C); $\mu_{r}^{d}$ is the rate of mortality from diarrhoea in 2000 in region $r$, taken from the WHO Global Burden of Disease (see Table HD, column 3); $\epsilon$ = -1.58 (0.23) is the income elasticity of diarrhoea mortality; $\eta$ = 1.14 (0.51) is a parameter, the degree of non-linearity of the response of diarrhoea mortality to regional warming. Equation (HD.1), specifically parameters $\epsilon$ and $\eta$, was estimated based on the WHO Global Burden of Diseases data (http://www.who.int/health_topics/global_burden_of_disease/en/). Diarrhoea morbidity has the same equation as mortality, but with $\epsilon$ = -0.42 (0.12) and $\eta$ = 0.70 (0.26); base morbidity is given in Table HD, column 4. Table HD gives impact estimates, ignoring economic and population growth. See section 5.12. for a description of the valuation of mortality and morbidity.

5.8. Human health: Vector-borne diseases

The number of additional deaths from vector-borne diseases, $D_{t,r}^{v}$, is given by: \[D_{t,r}^{v} = D_{1990,r}^{v}\alpha_{r}^{v}T_{t}^{\beta}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\gamma} \tag{HV}\] $D_{t,r}^{v}$ denotes climate-change-induced mortality due to disease $v$ in region $r$ at time $t$; $D_{1990,r}^{v}$ denotes mortality from vector-borne diseases in region $r$ in 1990 (see Table HV, column "base"); $v$ denotes vector-borne disease (malaria, schistosomiasis, dengue fever); $\alpha$ is a parameter, indicating the benchmark impact of climate change on vector-borne diseases (see Table HV, column "impact"); the best guess is the average of Martin and Lefebvre (1995), Martens et al. (1995, 1997) and Morita et al. (1995), while the standard deviation is the spread between models and the scenarios; $y_{t,r}$ denotes per capita income; $T_{t}$ denotes the mean temperature in year $t$, in degrees Celsius (C); $\beta$ = 1.0 (0.5) is a parameter, the degree of non-linearity of mortality in warming; the parameter is calibrated to the results of Martens et al. (1997); $\gamma$ = -2.65 (0.69) is the income elasticity of vector-borne mortality, taken from Link and Tol (2004), who regress malaria mortality on income for the 14 WHO regions. See section 5.12. for a description of the valuation of mortality and morbidity. Morbidity is proportional to mortality, using the factor specified in Table HM.
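Both health relations above are simple multiplicative scalings of a regional baseline. The sketch below evaluates Eq. (HD.1) and Eq. (HV) once; the baseline rates and the benchmark impact are placeholders standing in for Tables HD and HV.

```julia
# Sketch of the diarrhoea and vector-borne mortality relations, Eqs. (HD.1) and (HV).
# μ, D1990 and α are placeholders for the Table HD / Table HV baselines.
ϵd, ηd = -1.58, 1.14      # income and temperature exponents, diarrhoea mortality
βv, γv = 1.0, -2.65       # warming exponent and income elasticity, vector-borne mortality

diarrhoea_deaths(μ, P, y, y1990, T, Tpre) = μ * P * (y / y1990)^ϵd * (T / Tpre)^ηd   # Eq. (HD.1)
vector_deaths(D1990, α, T, y, y1990)      = D1990 * α * T^βv * (y / y1990)^γv        # Eq. (HV)

# Example: 10 million people, income doubled since 1990, regional temperature up from
# 22.0 to 23.0 °C, and 1.2 °C of warming for the vector-borne term.
println(diarrhoea_deaths(1.0e-4, 1.0e7, 20_000.0, 10_000.0, 23.0, 22.0))
println(vector_deaths(5_000.0, 0.05, 1.2, 20_000.0, 10_000.0))
```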
5.9. Human health: Cardiovascular and respiratory mortality

Cardiovascular and respiratory disorders are worsened by both extreme cold and extreme hot weather. Martens (1998) assesses the increase in mortality for 17 countries. Tol (2002a) extrapolates these findings to all other countries, based on formulae of the shape: \[D^{c} = \alpha^{c} + \beta^{c}T_{B} \tag{HC.1}\] $D^{c}$ denotes the change in mortality (in deaths per 100,000 people) due to a one degree global warming; $c$ indexes the disease (heat-related cardiovascular under 65, heat-related cardiovascular over 65, cold-related cardiovascular under 65, cold-related cardiovascular over 65, respiratory); $T_{B}$ is the current temperature of the hottest or coldest month in the country (in degrees Celsius); $\alpha$ and $\beta$ are parameters, specified in Table HC.1. Equation (HC.1) is specified for populations above and below 65 years of age for cardiovascular disorders. Cardiovascular mortality is affected by both heat and cold. In the case of heat, $T_{B}$ denotes the average temperature of the warmest month. In the case of cold, $T_{B}$ denotes the average temperature of the coldest month. Respiratory mortality is not age-specific.

Equation (HC.1) is readily extrapolated. With warming, the baseline temperature $T_{B}$ changes. If this change is proportional to the change in the global mean temperature, the equation becomes quadratic. Summing country-specific quadratic functions results in quadratic functions for the regions: \[D_{t,r}^{c} = \alpha_{r}^{c}T_{t} + \beta_{r}^{c}T_{t}^{2} \tag{HC.2}\] $D_{t,r}^{c}$ denotes climate-change-induced mortality (in deaths per 100,000 people) due to disease $c$ in region $r$ at time $t$; $r$ indexes region; $t$ indexes time; $\alpha$ and $\beta$ are parameters, specified in Tables HC.2-4 (in probabilistic mode all probability distributions are constrained so that only values with the same sign as the mean can be sampled).

One problem with (HC.2) is that it is a non-linear extrapolation based on a data-set that is limited to 17 countries and, more importantly, a single climate change scenario. A global warming of 1°C leads to changes in cardiovascular and respiratory mortality in the order of magnitude of 1% of baseline mortality due to such disorders. Per cause, the total change in mortality is restricted to a maximum of 5% of baseline mortality, an expert guess. This restriction is binding.

Baseline cardiovascular and respiratory mortality derives from the share of the population above 65 in the total population. If the fraction of people over 65 increases by 1%, cardiovascular mortality increases by 0.0259% (0.0096%). For respiratory mortality, the change is 0.0016% (0.0005%). These parameters are estimated from the variation in population above 65 and cardiovascular and respiratory mortality over the nine regions in 1990, using data from http://www.who.int/health_topics/global_burden_of_disease/en/. Mortality as in equations (HC.1) and (HC.2) is expressed as a fraction of population size. Cardiovascular mortality, however, is separately specified for younger and older people. In 1990, the per capita income elasticity of the share of the population over 65 is 0.25 (0.08). This is estimated using data from http://earthtrends.wri.org. Heat-related mortality is assumed to be limited to urban populations.
Urbanisation is a function of per capita income and population density: \[U_{t,r} = \frac{\alpha\sqrt{y_{t,r}} + \beta\sqrt{PD_{t,r}}}{1 + \alpha\sqrt{y_{t,r}} + \beta\sqrt{PD_{t,r}}} \tag{HC.3}\] $U$ is the fraction of people living in cities; $y$ is per capita income (in 1995 $ per person per year); $\text{PD}$ is population density (in people per square kilometre); $t$ is time; $r$ is region; $\alpha$ and $\beta$ are parameters, estimated from a cross-section of countries for the year 1995, using data from http://earthtrends.wri.org; $\alpha$ = 0.031 (0.002) and $\beta$ = -0.011 (0.005); R² = 0.66.

5.10. Extreme weather: Tropical storms

The economic damage $TD$ due to an increase in the intensity of tropical storms (hurricanes, typhoons) follows \[TD_{t,r} = \alpha_{r}Y_{t,r}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\epsilon}\left\lbrack \left( 1 + \delta T_{t,r} \right)^{\gamma} - 1 \right\rbrack \tag{TS.1}\] $r$ denotes region; $\text{TD}$ is the damage due to tropical storms (1995 $ per year) in region $r$ at time $t$; $Y$ is the gross domestic product (in 1995 $ per year) in region $r$ at time $t$; $\alpha$ is the current damage as fraction of GDP, specified in Table TS; the data are from the CRED EM-DAT database (http://www.emdat.be/); $y$ is per capita income (in 1995 $ per person per year) in region $r$ at time $t$; $\epsilon$ is the income elasticity of storm damage; $\epsilon$ = -0.514 (0.027, >-1, <0) after Toya and Skidmore (2007); $\delta$ is a parameter, indicating how much wind speed increases per degree warming; $\delta$ = 0.04/ºC (0.005) after WMO (2006); $T$ is the temperature increase since pre-industrial times (in degrees Celsius) in region $r$ at time $t$; $\gamma$ is a parameter; $\gamma$ = 3 because the power of the wind is the cube of its speed.

The mortality $\text{TM}$ due to an increase in the intensity of tropical storms (hurricanes, typhoons) follows \[TM_{t,r} = \beta_{r}P_{t,r}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\eta}\left\lbrack \left( 1 + \delta T_{t,r} \right)^{\gamma} - 1 \right\rbrack \tag{TS.2}\] $\text{TM}$ is the mortality due to tropical storms (in people per year) in region $r$ at time $t$; $P$ is the population (in people) in region $r$ at time $t$; $\beta$ is the current mortality (as a fraction of population), specified in Table TS; the data are from the CRED EM-DAT database (http://www.emdat.be/); $\eta$ is the income elasticity of storm mortality; $\eta$ = -0.501 (0.051, <0) after Toya and Skidmore (2007); $\delta$ is a parameter, indicating how much wind speed increases per degree warming; $\delta$ = 0.04/ºC (0.005) after WMO (2006).
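The storm damage and mortality terms share the same wind-power kernel, (1 + δT)³ − 1. The sketch below evaluates Eqs. (TS.1)-(TS.2) with the elasticities quoted above; the regional benchmarks αr and βr are placeholders for Table TS.

```julia
# Sketch of the tropical storm damage and mortality terms, Eqs. (TS.1)-(TS.2).
# αr and βr are placeholders for the Table TS benchmarks.
ϵ, η, δ, γ = -0.514, -0.501, 0.04, 3.0

storm_damage(αr, Y, y, y1990, T)    = αr * Y * (y / y1990)^ϵ * ((1 + δ * T)^γ - 1)   # Eq. (TS.1)
storm_mortality(βr, P, y, y1990, T) = βr * P * (y / y1990)^η * ((1 + δ * T)^γ - 1)   # Eq. (TS.2)

# Example: 1.5 °C of regional warming, income doubled since 1990, placeholder benchmarks.
println("damage (1995 \$): ", storm_damage(1.0e-4, 1.0e12, 20_000.0, 10_000.0, 1.5))
println("deaths per year:  ", storm_mortality(1.0e-6, 1.0e7, 20_000.0, 10_000.0, 1.5))
```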
Extreme weather: Extratropical storms The economic damage due to an increase in the intensity of extratropical storms follows the equation below: \[\text{ET}D_{t,r} = \alpha_{r}Y_{t,r}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\epsilon}\delta_{r}\left\lbrack \left( \frac{C_{CO2,t}}{C_{CO2,pre}} \right)^{\gamma} - 1 \right\rbrack \tag{ETS.1}\] $\text{ET}D_{t,r}$ is the damage from extratropical cyclones at time $t$ in region $r$; $Y_{t,r}$ is GDP in region $r$ and time $t$; $\alpha_{r}$ is benchmark damage from extratropical cyclones for region $r$; $y$ is per capita income at time $t$ in region $r$; $\epsilon$=-0.514(0.027,&gt;-1,&lt;0) is the income elasticity of extratropical storm damages (Toya and Skidmore 2007); $\delta_{r}$ is the storm sensitivity to atmospheric CO2 concentrations for region $r$; $C_{CO2,t}$ is atmospheric CO₂ concentrations; $C_{CO2,pre}$ is the CO₂ concentrations in the pre-industrial era; $\gamma$=1 is a parameter. \[\text{ET}M_{t,r} = \beta_{r}P_{t,r}\left( \frac{y_{t,r}}{y_{1990,r}} \right)^{\varphi}\delta_{r}\left\lbrack \left( \frac{C_{CO2,t}}{C_{CO2,pre}} \right)^{\gamma} - 1 \right\rbrack \tag{EST.2}\] $\text{ET}M_{t,r}$ is the mortality from extratropical cyclones at time $t$ in region $r$; $P_{t,r}$ is population in region $r$ and time $t$; $\beta_{r}$ is benchmark mortality from extratropical cyclones for region $r$; $\varphi$=-0.501(0.051,&gt;-1,&lt;0) is the income elasticity of extratropical storm mortality (Toya and Skidmore 2007); 5.12. Mortality and Morbidity The value of a statistical life is given by \[\text{VS}L_{t,r} = \alpha\left( \frac{y_{t,r}}{y_{0}} \right)^{\epsilon} \tag{MM.1}\] $\text{VSL}$ is the value of a statistical life at time $t$ in region $r$; $\alpha$=4992523 (2496261,&gt;0) is a parameter; $y_{0}$ =24963 is a normalisation constant; $\epsilon$=1 (0.2,&gt;0) is the income elasticity of the value of a statistical life; This calibration results in a best guess value of a statistical life that is 200 times per capita income (Cline, 1992). The value of a year of morbidity is given by \[VM_{t,r} = \beta\left( \frac{y_{t,r}}{y_{0}} \right)^{\eta} \tag{MM.2}\] $\text{VM}$ is the value of a statistical life at time $t$ in region $r$; $\beta$= 19970 (29955,&gt;0) is a parameter; $y_{0}$=24963 is a normalisation constant; $\eta$=1 (0.2,&gt;0) is the income elasticity of the value of a year of morbidity; This calibration results in a best guess value of a year of morbidity that is 0.8 times per capita income (Navrud, 2001). We thank Adriana Ciccone for helpful comments on this documentation. Nakicenovic, N. and R.J. Swart (eds.) (2001), IPCC Special Report on Press, . Bijlsma, L., C.N.Ehler, R.J.T.Klein, S.M.Kulshrestha, R.F.McLean, N.Mimura, R.J.Nicholls, L.A.Nurse, H.Perez Nieto, E.Z.Stakhiv, R.K.Turner, and R.A.Warrick (1996), 'Coastal Zones and Small Islands', in Climate Change 1995: Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analyses – Contribution of Working Group II to the Second Assessment Report of the Intergovernmental Panel on Climate Change, 1 edn, R.T. Watson, M.C. Zinyowera, and R.H. Moss (eds.), Cambridge University Press, Cambridge, pp. 289-324. Cline, W.R. (1992), The Economics of Global Warming Institute for International Economics, Darwin, R.F., M.Tsigas, J.Lewandrowski, and A.Raneses (1996), 'Land use and cover in ecological economics', Ecological Economics, 17, 157-181. 
Darwin, R.F., M.Tsigas, J.Lewandrowski, and A.Raneses (1995), World Agriculture and Climate Change - Economic Adaptations, U.S. Department of Agriculture, Washington, D.C., 703.
Downing, T.E., N.Eyre, R.Greener, and D.Blackwell (1996), Full Fuel Cycle Study: Evaluation of the Global Warming Externality for Fossil Fuel Cycles with and without CO₂ Abatement and for Two Reference Scenarios, Environmental Change Unit, University of Oxford, Oxford.
Downing, T.E., R.A.Greener, and N.Eyre (1995), The Economic Impacts of Climate Change: Assessment of Fossil Fuel Cycles for the ExternE Project, Oxford and Lonsdale, Environmental Change Unit and Eyre Energy Environment.
Fankhauser, S. (1994), 'Protection vs. Retreat – The Economic Costs of Sea Level Rise', Environment and Planning A, 27, 299-319.
Fischer, G., K.Frohberg, M.L.Parry, and C.Rosenzweig (1993), 'Climate Change and World Food Supply, Demand and Trade', in Costs, Impacts, and Benefits of CO₂ Mitigation, Y. Kaya et al. (eds.), pp. 133-152.
Fischer, G., K.Frohberg, M.L.Parry, and C.Rosenzweig (1996), 'Impacts of Potential Climate Change on Global and Regional Food Production and Vulnerability', in Climate Change and World Food Security, T.E. Downing (ed.), Springer-Verlag, Berlin, pp. 115-159.
Forster, P., V. Ramaswamy, P. Artaxo, T. Berntsen, R. Betts, D. W. Fahey, J. Haywood, J. Lean, D. C. Lowe, G. Myhre, J. Nganga, R. Prinn, G. Raga, M. Schulz and R. V. Dorland (2007), 'Changes in Atmospheric Constituents and in Radiative Forcing', in Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, S. Solomon, D. Qin, M. Manning et al. (eds.), Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Gitay, H., S.Brown, W.Easterling, B.P.Jallow, J.M.Antle, M.Apps, R.Beamish, T.Chapin, W.Cramer, J.Frangi, J.Laine, E.Lin, J.J.Magnuson, I.Noble, J.Price, T.D.Prowse, T.L.Root, E.-D.Schulze, O.Sitotenko, B.L.Sohngen, and J.-F.Soussana (2001), 'Ecosystems and their Goods and Services', in Climate Change 2001: Impacts, Adaptation and Vulnerability – Contribution of Working Group II to the Third Assessment Report of the Intergovernmental Panel on Climate Change, J.J. McCarthy et al. (eds.), Cambridge University Press, Cambridge, pp. 235-342.
Goulder, L.H. and S.H.Schneider (1999), 'Induced technological change and the attractiveness of CO₂ abatement policies', Resource and Energy Economics, 21, 211-253.
Goulder, L.H. and K.Mathai (2000), 'Optimal CO₂ Abatement in the Presence of Induced Technological Change', Journal of Environmental Economics and Management, 39, 1-38.
Hammitt, J.K., R.J.Lempert, and M.E.Schlesinger (1992), 'A Sequential-Decision Strategy for Abating Climate Change', Nature, 357, 315-318.
Hodgson, D. and K. Miller (1995), 'Modelling UK Energy Demand', in T. Barker, P. Ekins and N. Johnstone (eds.), Global Warming and Energy Demand, Routledge, London.
Hoozemans, F.M.J., M.Marchand, and H.A.Pennekamp (1993), A Global Vulnerability Analysis: Vulnerability Assessment for Population, Coastal Wetlands and Rice Production on a Global Scale (second, revised edition), Delft Hydraulics, Delft.
Hourcade, J.-C., K.Halsneas, M.Jaccard, W.D.Montgomery, R.G.Richels, J.Robinson, P.R.Shukla, and P.Sturm (1996), 'A Review of Mitigation Cost Studies', in Climate Change 1995: Economic and Social Dimensions – Contribution of Working Group III to the Second Assessment Report of the Intergovernmental Panel on Climate Change, J.P. Bruce, H. Lee, and E.F. Haites (eds.), Cambridge University Press, Cambridge, pp. 297-366.
Hourcade, J.-C., P.R.Shukla, L.Cifuentes, D.Davis, J.A.Edmonds, B.S.Fisher, E.Fortin, A.Golub, O.Hohmeyer, A.Krupnick, S.Kverndokk, R.Loulou, R.G.Richels, H.Segenovic, and K.Yamaji (2001), 'Global, Regional and National Costs and Ancillary Benefits of Mitigation', in Climate Change 2001: Mitigation – Contribution of Working Group III to the Third Assessment Report of the Intergovernmental Panel on Climate Change, O.R. Davidson and B. Metz (eds.), Cambridge University Press, Cambridge, pp. 499-559.
IMAGE Team (2001), The IMAGE 2.2 Implementation of the SRES Scenarios: A Comprehensive Analysis of Emissions, Climate Change, and Impacts in the 21st Century, National Institute for Public Health and the Environment, Bilthoven, 481508018.
Kane, S., J.M.Reilly, and J.Tobey (1992), 'An Empirical Study of the Economic Effects of Climate Change on World Agriculture', Climatic Change, 21, 17-35.
Kattenberg, A., F.Giorgi, H.Grassl, G.A.Meehl, J.F.B.Mitchell, R.J.Stouffer, T.Tokioka, A.J.Weaver, and T.M.L.Wigley (1996), 'Climate Models - Projections of Future Climate', in Climate Change 1995: The Science of Climate Change – Contribution of Working Group I to the Second Assessment Report of the Intergovernmental Panel on Climate Change, 1 edn, J.T. Houghton et al. (eds.), Cambridge University Press, Cambridge, pp. 285-357.
Leatherman, S.P. and R.J.Nicholls (1995), 'Accelerated Sea-Level Rise and Developing Countries: An Overview', Journal of Coastal Research, 14, 1-14.
Leggett, J., W.J.Pepper, and R.J.Swart (1992), 'Emissions Scenarios for the IPCC: An Update', in Climate Change 1992 - The Supplementary Report to the IPCC Scientific Assessment, 1 edn, vol. 1, J.T. Houghton, B.A. Callander, and S.K. Varney (eds.), Cambridge University Press, Cambridge, pp. 71-95.
Link, P.M. and R.S.J. Tol (2004), 'Possible Economic Impacts of a Shutdown of the Thermohaline Circulation: An Application of FUND', Portuguese Economic Journal, 3, 99-114.
Maier-Reimer, E. and K.Hasselmann (1987), 'Transport and Storage of Carbon Dioxide in the Ocean: An Inorganic Ocean Circulation Carbon Cycle Model', Climate Dynamics, 2, 63-90.
Martens, W.J.M. (1998), 'Climate Change, Thermal Stress and Mortality Changes', Social Science and Medicine, 46, (3), 331-344.
Martens, W.J.M., T.H. Jetten, J. Rotmans and L.W. Niessen (1995), 'Climate Change and Vector-Borne Diseases – A Global Modelling Perspective', Global Environmental Change, 5, (3), 195-209.
Martens, W.J.M., T.H. Jetten and D.A. Focks (1997), 'Sensitivity of Malaria, Schistosomiasis and Dengue to Global Warming', Climatic Change, 35, 145-156.
Martin, P.H. and M.G. Lefebvre (1995), 'Malaria and Climate: Sensitivity of Malaria Potential Transmission to Climate', Ambio, 24, (4), 200-207.
Mendelsohn, R.O., M.E.Schlesinger, and L.J.Williams (2000), 'Comparing impacts across climate models', Integrated Assessment, 1, 37-48. doi:10.1023/A:1019111327619
Morita, T., M.Kainuma, H.Harasawa, K.Kai, L.Dong-Kun, and Y.Matsuoka (1994), Asian-Pacific Integrated Model for Evaluating Policy Options to Reduce Greenhouse Gas Emissions and Global Warming Impacts, National Institute for Environmental Studies, Tsukuba.
Navrud, S. (2001), 'Valuing Health Impacts from Air Pollution in Europe', Environmental and Resource Economics, 20, (4), 305-329.
Nicholls, R.J. and S.P.Leatherman (1995), 'The Implications of Accelerated Sea-Level Rise for Developing Countries: A Discussion', Journal of Coastal Research, 14, 303-323.
Pearce, D.W. and D.Moran (1994), The Economic Value of Biodiversity, EarthScan, London.
Perez-Garcia, J., L.A.Joyce, C.S.Binkley, and A.D.McGuire (1995), 'Economic Impacts of Climatic Change on the Global Forest Sector: An Integrated Ecological/Economic Assessment', Bergendal.
Ramaswamy, V., O.Boucher, J.Haigh, D.Hauglustaine, J.Haywood, G.Myhre, T.Nakajima, G.Y.Shi, and S.Solomon (2001), 'Radiative Forcing of Climate Change', in Climate Change 2001: The Scientific Basis – Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, J.T. Houghton and Y. Ding (eds.), Cambridge University Press, Cambridge, pp. 349-416.
Reilly, J.M., N.Hohmann, and S.Kane (1994), 'Climate Change and Agricultural Trade: Who Benefits, Who Loses?', Global Environmental Change, 4, (1), 24-36.
Sohngen, B.L., R.O.Mendelsohn, and R.A.Sedjo (2001), 'A Global Model of Climate Change Impacts on Timber Markets', Journal of Agricultural and Resource Economics, 26, (2), 326-343.
Tol, R.S.J. (2002a), 'Estimates of the Damage Costs of Climate Change - Part 1: Benchmark Estimates', Environmental and Resource Economics, 21, 47-73.
Tol, R.S.J. (2002b), 'Estimates of the Damage Costs of Climate Change - Part II: Dynamic Estimates', Environmental and Resource Economics, 21, 135-160.
Tol, R.S.J. (1995), 'The Damage Costs of Climate Change Toward More Comprehensive Calculations', Environmental and Resource Economics, 5, 353-374.
Toya, H. and M. Skidmore (2007), 'Economic Development and the Impact of Natural Disasters', Economics Letters, 94, 20-25.
Tsigas, M.E., G.B.Frisvold, and B.Kuhn (1996), 'Global Climate Change in Agriculture', in Global Trade Analysis: Modelling and Applications, T.W. Hertel (ed.), Cambridge University Press, Cambridge.
USEPA (2003), International Analysis of Methane and Nitrous Oxide Abatement Opportunities: Report to Energy Modeling Forum, Working Group 21, U.S. Environmental Protection Agency, Washington, D.C.
Weitzman, M.L. (1992), 'On Diversity', Quarterly Journal of Economics, 364-405.
Weitzman, M.L. (1998), 'The Noah's Ark Problem', Econometrica, 66, (6), 1279-1298.
Weitzman, M.L. (1993), 'What to preserve? An application of diversity theory to crane conservation', Quarterly Journal of Economics, 157-183.
Weyant, J.P. (2004), 'Introduction and overview', Energy Economics, 26, 501-515.
Weyant, J.P., F.C.de la Chesnaye, and G.J.Blanford (2006), 'Overview of EMF-21: Multigas Mitigation and Climate Policy', Energy Journal (Multi-Greenhouse Gas Mitigation and Climate Policy Special Issue), 1-32.
WMO (2006), Summary Statement on Tropical Cyclones and Climate Change, World Meteorological Organization. http://www.wmo.ch/pages/prog/arep/tmrp/documents/iwtc_summary.pdf
Infinitude of Primes: A Topological Proof

Although topology made away with metric properties of shapes, it was helped very much by algebra in classification of knots. Following is a wonderful example (due to Harry Fürstenberg of the Hebrew University of Jerusalem, Israel) of a returned favor (albeit on a smaller scale): Euclid's theorem - one of the basic statements of arithmetic - is proven by very simple topological means! (Fürstenberg's proof has been recast elsewhere with the verbiage that avoids topological terms.)

We call a set closed if it contains all its near points. A set is open if its complement is closed. There are always two sets (the empty set and the space itself) that are simultaneously open and closed. Open and closed sets are also characterized by complementary properties:

$\begin{array}{ll} \text{Any union of open sets is open.} & \text{Any intersection of closed sets is closed.}\\ \text{A finite intersection of open sets is open.} & \text{A finite union of closed sets is closed.} \end{array}$

Let's prove the right two. The left ones will follow from de Morgan's laws, $(\cup A)^{c} = \cap A^{c}$ and $(\cap A)^{c} = \cup A^{c}.$

Any intersection of closed sets is closed. Let point $p$ be near $\cap F_{t},$ where $F_{t}$'s are closed sets for all values of parameter $t$ from some set $T.$ This means that every neighborhood of $p$ has a nonempty intersection with $\cap F_{t}$, and, therefore, with every $F_{t}.$ Since the latter are closed sets, $p\in F_{t}$ for all $t.$ Thus $p\in \cap F_{t}$.

A finite union of closed sets is closed. Let $p$ be near $\cup F_{t},$ where $t\in T$ a finite set. $p$ must be near at least one of $F_{t}\mbox{'s}.$ For, if it were not the case, for every $t$ there would exist a neighborhood of $p$ that did not intersect $F_{t}.$ From the way the neighborhoods were defined, there would exist a neighborhood (take the smallest of the aforementioned neighborhoods) of $p$ that did not intersect any of $F_{t}\mbox{'s},$ nor would it intersect the union $\cup F_{t}$ in contradiction with nearness of $p$ to $\cup F_{t}.$ Hence $p$ is near one of $F_{t}\mbox{'s}.$ Therefore, $p$ belongs to that $F_{t},$ and, finally, $p\in \cup F_{t}.$

There are various ways to define a topology on a given space. One may start with neighborhoods and deduce from their properties properties of open and closed sets or those of the operation of closure. It's also possible to start with Statements 1 and 2 and the stipulation that the empty set and the whole space are closed, and define neighborhoods and open sets with all the expected properties of the latter. Of course, one can start with open sets as well. And, finally, neighborhoods must not be defined as balls in a metric space. The statements below capture the most important properties of the neighborhoods:

Each neighborhood of a point $p$ contains $p.$ The whole space is a neighborhood of all its points. A superset of a neighborhood of a point is a neighborhood of that point. The intersection of two neighborhoods of a point is a neighborhood of that point. Each neighborhood of a point $p$ contains a neighborhood of $p$ which is also a neighborhood of each of its points.

Any family of sets that satisfy (N1-N4) can be used to define closed and open sets and other attributes of a topology. We now look at Fürstenberg's example.
Let $\mathbb{Z}$ be the set of all integers - positive, negative, and $0.$ For $a, b\in \mathbb{Z}$, $b > 0$ let $N_{a, b} = \{ a + nb: n \in \mathbb{Z} \}.$ Each $N_{a, b}$ is a two-sided arithmetic progression. Note also that $a\in N_{a, b}$ for every $b.$ "$N$" in the definition is supposed to remind us of neighborhoods. Indeed, think of $N_{a, b}$ as those basic neighborhoods that are neighborhoods of each of their points (N4). Add to the collection of neighborhoods all supersets of $N_{a, b}\mbox{'s}$. With this definition we only need to verify property (N3). But note that $N_{a, b}\cap N_{a, c} = N_{a, \text{lcm}(b, c)},$ where $\mbox{lcm}(n,m)$ is the least common multiple of $n$ and $m$. From this N3 follows easily.

Call a set $U$ open if, for every $a\in U,$ there exists $b > 0$ such that $N_{a, b}\subset U.$ We can check that Statements 1 and 2 hold, as do their analogues for the open sets. Two facts are important:

(O1) Any non-empty open set is infinite.

(O2) Besides being open, any set $N_{a, b}$ is also closed!

The first of these follows from the definition, the second from $N_{a, b} = \mathbb{Z}\setminus\cup N_{a+i, b},$ where the union is taken over $i = 1, 2, ..., b-1.$ $N_{a, b}$ is then closed as a complement of a finite union of open sets.

Now the punch line. Except for $\pm 1$ and $0,$ all integers have prime factors. Therefore each is contained in one or more $N_{0, p},$ where $p$ is prime. We thus arrive at the identity: $\mathbb{Z}\setminus\{-1, 1\} = \cup N_{0, p},$ where the union is taken over the set $\{p\}= \mathbf{P}$ of all primes. If the latter were finite, the right hand side would be closed as the union of a finite number of closed sets (O2). The set $\{-1, 1\}$ would then be open as a complement of a closed set. This would contradict (O1).

References

M. Aigner, G. Ziegler, Proofs from THE BOOK, Springer, 2000
H. Fürstenberg, On the Infinitude of Primes, Amer. Math. Monthly 62 (1955), 353
K. Janich, Topology, UTM, Springer-Verlag, 1984
Coupled, Physics-Based Modeling Reveals Earthquake Displacements are Critical to the 2018 Palu, Sulawesi Tsunami

T. Ulrich, S. Vater, E. H. Madden, J. Behrens, Y. van Dinther, I. van Zelst, E. J. Fielding, C. Liang & A.-A. Gabriel

Pure and Applied Geophysics, volume 176, pages 4069–4109 (2019)

The September 2018, \(M_w\) 7.5 Sulawesi earthquake occurring on the Palu-Koro strike-slip fault system was followed by an unexpected localized tsunami. We show that direct earthquake-induced uplift and subsidence could have sourced the observed tsunami within Palu Bay. To this end, we use a physics-based, coupled earthquake–tsunami modeling framework tightly constrained by observations. The model combines rupture dynamics, seismic wave propagation, tsunami propagation and inundation. The earthquake scenario, featuring sustained supershear rupture propagation, matches key observed earthquake characteristics, including the moment magnitude, rupture duration, fault plane solution, teleseismic waveforms and inferred horizontal ground displacements. The remote stress regime reflecting regional transtension applied in the model produces a combination of up to 6 m left-lateral slip and up to 2 m normal slip on the straight fault segment dipping \(65^{\circ }\) East beneath Palu Bay. The time-dependent, 3D seafloor displacements are translated into bathymetry perturbations with a mean vertical offset of 1.5 m across the submarine fault segment. This sources a tsunami with wave amplitudes and periods that match those measured at the Pantoloan wave gauge and inundation that reproduces observations from field surveys. We conclude that a source related to earthquake displacements is probable and that landsliding may not have been the primary source of the tsunami. These results have important implications for submarine strike-slip fault systems worldwide. Physics-based modeling offers rapid response specifically in tectonic settings that are currently underrepresented in operational tsunami hazard assessment.

Tsunamis occur due to abrupt perturbations to the water column, usually caused by the seafloor deforming during earthquakes or submarine landslides. Devastating tsunamis associated with submarine strike-slip earthquakes are rare. While such events may trigger landslides that in turn trigger tsunamis, the associated ground displacements are predominantly horizontal, not vertical, which does not favor tsunami genesis. However, strike-slip fault systems in complex tectonic regions, such as the Palu-Koro fault zone cutting across the island of Sulawesi, may host vertical deformation. For example, a transtensional tectonic regime can favour strike-slip faulting overall, while also inducing normal faulting. Strike-slip systems may also include complicated fault geometries, such as non-vertical faults, bends or en echelon step-over structures. These can host complex rupture dynamics and produce a variety of displacement patterns when ruptured, which may promote tsunami generation (Legg and Borrero 2001; Borrero et al. 2004). To mitigate the commonly under-represented hazard of strike-slip induced tsunamis, it is crucial to fundamentally understand the direct effect of coseismic displacements on tsunami genesis. Globally, geological settings similar to that governing the Sulawesi earthquake–tsunami sequence are not unique.
Large strike-slip faults crossing off-shore and running through narrow gulfs include the elongated Bodega and Tomales bays in northern California, USA, hosting major segments of the right-lateral strike-slip San Andreas fault system, and the left-lateral Anatolian fault system in Turkey, extending beneath the Marmara Sea just south of Istanbul. Indeed, historical data do record local tsunamis generated from earthquakes along these and other strike-slip fault systems, such as in the 1906 San Francisco (California), 1994 Mindoro (Philippines), and 1999 Izmit (Turkey) earthquakes (Legg et al. 2003) and, more recently, the 2016 Kaikōura, New Zealand earthquake (Ulrich et al. 2019; Power et al. 2017). Large magnitude strike-slip earthquakes can also produce tsunamigenic aftershocks (e.g., Geist and Parsons 2005). In most tsunami modelling approaches, the tsunami source is computed according to the approach of Mansinha and Smylie (1971) and subsequently parameterized by the Okada model (Okada 1985), which translates finite fault models into seafloor displacements. Okada's model allows for the analytical computation of static ground displacements generated by a uniform dislocation over a finite rectangular fault assuming a homogeneous elastic half space. Heterogeneous slip can be captured by linking several dislocations in space, and time-dependence is approximated by allowing these dislocations to move in sequence (e.g., Tanioka et al. 2006). While seafloor and coastal topography are ignored, the contribution of horizontal displacements may be additionally accounted for by a filtering approach suggested by Tanioka and Satake (1996), which includes the gradient of local bathymetry. Applying a traditional Okada source to study tsunami genesis is specifically limited for near-field tsunami observations and localized events due to its underlying, simplifying assumptions. Realistic modeling of earthquakes and tsunamis benefits from physics-based approaches. Kinematic models of earthquake slip are the result of solving data-driven inverse problems. Such models aim to closely fit observations with a large number of free parameters. In contrast, dynamic rupture models aim at reproducing the physical processes that govern the way the fault yields and slides, and are therefore often referred to as 'physics-based'. Finite fault models are affected by inherent non-uniqueness, which may spread via the ground displacement fields to the modeled tsunami genesis. Constraining the kinematics of multi-fault rupture is especially challenging, since initial assumptions on fault geometry strongly affect the slip inversion results. Mechanically viable earthquake source descriptions are provided by dynamic rupture modeling combining spontaneous frictional failure and seismic wave propagation. Dynamic rupture simulations fully coupled to the time-dependent response of an overlying water layer have been performed by Lotto et al. (2017a, b, 2018). These have been instrumental in determining the influence of different earthquake parameters and material properties on coupled systems, but are restricted to 2D. Maeda and Furumura (2013) showcase a fully-coupled 3D modeling framework capable of simultaneously modeling seismic and tsunami waves, but not earthquake rupture dynamics. Ryan et al. (2015) couple a 3D dynamic earthquake rupture model to a tsunami model, but these are restricted to using a static snapshot of the seafloor displacement field as the tsunami source. 
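As an aside on the Tanioka and Satake (1996) correction mentioned above, the sketch below shows how horizontal coseismic displacement over a sloping seafloor translates into an effective vertical perturbation of the water column. The slope, displacement values and the synthetic depth profile are illustrative assumptions, not data from the Palu event.

```julia
# Sketch of the Tanioka & Satake (1996) correction: on a sloping seafloor, horizontal
# motion (ux, uy) also displaces the water column. H is the water depth (positive down).
effective_uplift(uz, ux, uy, dHdx, dHdy) = uz + ux * dHdx + uy * dHdy

# Point example: 4 m of horizontal motion across a 5% slope adds 0.2 m to 0.1 m of
# direct uplift.
println(effective_uplift(0.1, 4.0, 0.0, 0.05, 0.0))

# The same correction along a synthetic 1-D depth profile, with the bathymetry gradient
# taken by central differences:
x    = collect(0.0:500.0:10_000.0)                    # distance (m)
H    = 200.0 .+ 0.05 .* x                             # water depth deepening at 5%
ux   = fill(4.0, length(x))                           # uniform horizontal displacement (m)
uz   = fill(0.1, length(x))                           # direct uplift (m)
dHdx = [(H[min(i + 1, end)] - H[max(i - 1, 1)]) / (x[min(i + 1, end)] - x[max(i - 1, 1)]) for i in eachindex(x)]
u_eff = uz .+ ux .* dHdx
println("effective uplift range (m): ", extrema(u_eff))
```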
In order to capture the physics of the interaction between the Palu earthquake and the subsequent tsunami, we utilize a physics-based, coupled earthquake–tsunami model. While the feasibility of formal dynamic rupture inversion approaches has been demonstrated (e.g. Peyrat et al. 2001; Gallovič et al. 2019a, b), these are limited by the computational cost of each forward dynamic rupture model and therefore rely on model simplifications. In this study, we do not perform a formal dynamic rupture inversion, but constrain the earthquake model by static considerations and few trial dynamic simulations. The forward model of the dynamic earthquake rupture incorporates 3D spatial variation in subsurface material properties, spontaneously developing slip on a complex, non-planar system of 3D faults, off-fault plastic deformation, and the non-linear interaction of frictional failure with seismic waves. The coseismic deformation of the crust generates time-dependent seafloor displacements, which we translate into bathymetry perturbations to source the tsunami. The tsunami model solves for non-linear wave propagation and inundation at the coast. Using this coupled approach, we evaluate the influence of coseismic deformation during the strike-slip Sulawesi earthquake on generating the observed tsunami waves. The physics-based model reveals that the rupture of a fault crossing Palu Bay with a moderate, but wide-spread, component of normal fault slip produces vertical deformation, which can explain the observed tsunami wave amplitudes and inundation elevations.

The 2018 Palu, Sulawesi Earthquake and Tsunami

Fig. 1: a Tectonic setting of the September 28, 2018 \(M_w\) 7.5 Sulawesi earthquake (epicenter indicated by yellow star). Black lines indicate plate boundaries based on Bird (2003); Socquet et al. (2006); Argus et al. (2011). BH Bird's Head plate, BS Banda Sea plate, MF Matano fault zone, PKF Palu-Koro fault zone, MS Molucca Sea plate, SSF Sula-Sorong fault zone, TI Timor plate. Arrows indicate the far-field plate velocities with respect to Eurasia (Socquet et al. 2006). The black box corresponds to the region displayed in b. b A zoom of the region of interest. The site of the harbor tide gauge of Pantoloan is indicated as well as the city of Palu. Locations of the GPS stations at which we provide synthetic ground displacement time series (see Appendix 7.2) are indicated by the red triangles. Focal mechanisms and epicenters of the September 28, 2018 Palu earthquake (USGS (2018a), top), October 1, 2018 Palu aftershock (middle), and January 23, 2005 Sulawesi earthquake (bottom) are shown. These later two events provide constraints on the dip angles of individual segments of the fault network. Individual fault segments of the Palu-Koro fault used in the dynamic rupture model are coloured. c, d, e 3D model of the fault network viewed from top, SW and S.

The Indonesian island of Sulawesi is located at the triple junction between the Sunda plate, the Australian plate and the Philippine Sea plate (Bellier et al. 2006; Socquet et al. 2006, 2019) (Fig. 1a). Convergence of the Philippine and Australian plates toward the Sunda plate is accommodated by subduction and rotation of the Molucca Sea, Banda Sea and Timor plates, leading to complicated patterns of faulting. In central Sulawesi, the NNW-striking Palu-Koro fault (PKF) and the WNW-striking Matano faults (MF) (Fig. 1a) comprise the Central Sulawesi Fault System.
The Palu-Koro fault runs off-shore to the north of Sulawesi through the narrow Palu Bay and hosted the earthquake that occurred on 28 September 2018. With a relatively high slip rate inferred from recent geodetic measurements (40 mm/year, Socquet et al. 2006; Walpersdorf et al. 1998) and from geomorphology (upper limit of 58 mm/year, Daryono 2018), and with clear evidence for Quaternary activity (Watkinson and Hall 2017), the Palu-Koro fault was presumed to pose a threat to the region (Watkinson and Hall 2017). In addition, four tsunamis associated with earthquakes on the Palu-Koro fault have struck the northwest coast of Sulawesi in the past century (1927, 1938, 1968 and 1996) (Pelinovsky et al. 1997; Prasetya et al. 2001). The complex regional tectonics subject northwestern Sulawesi to transtensional strain (Socquet et al. 2006). Transtension promotes some component of dip-slip faulting on the predominantly strike-slipping Palu-Koro fault (Bellier et al. 2006; Watkinson and Hall 2017) and leads to more complicated surface deformation than is expected from slip along a fault hosting purely strike-slip motion.

The 2018 Palu, Sulawesi Earthquake

The \(M_w\) 7.5 Sulawesi earthquake that occurred on September 28, 2018 ruptured a 180 km long section of the Palu-Koro fault (Socquet et al. 2019). It nucleated 70 km north of the city of Palu at shallow depth, with inferred hypocentral depths varying between 10 and 22 km (Valkaniotis et al. 2018). The rupture propagated predominantly southward, passing under Palu Bay and the city of Palu. It arrested after a total rupture time of 30–40 s (Socquet et al. 2019; Okuwaki et al. 2018; Bao et al. 2019). The earthquake was well captured by satellite data, and inversions of these data by Socquet et al. (2019) return several locations of dip-slip offset along the rupture, including within Palu Bay. Similarly, Song et al. (2019) reveal predominantly left-lateral, strike-slip faulting on relatively straight, connected fault segments with a component of dip-slip offset. Song et al. (2019) also suggest possible rupture on a secondary normal fault north of Palu Bay.

The earthquake appears to have propagated at a supershear rupture speed, i.e., faster than the shear waves produced by the earthquake are able to travel through the surrounding rock (e.g., Socquet et al. 2019; Bao et al. 2019; Mai 2019). Socquet et al. (2019) note that the characteristics of the relatively straight, clear rupture trace south of the Bay, with few aftershocks, match those for which supershear rupture speeds have been inferred in other earthquakes. Using back-projection analysis, which maps the location and timing of earthquake energy from the waves recorded on distant seismic arrays, Bao et al. (2019) do not resolve any portion of the rupture as traveling at sub-Rayleigh speeds. The authors conclude that this fast rupture velocity began at, or soon after, earthquake nucleation and was sustained for the length of the rupture. Surprisingly, Bao et al. (2019) infer supershear rupture speeds at the lower end of speeds considered theoretically stable, possibly due to the influence of widespread, pre-existing damage around the fault. While the exact speed, point of onset, and underlying mechanics of this event's supershear rupture propagation remain to be studied further, the event will likely prompt re-assessment of the hazard associated with supershear rupture on strike-slip faults worldwide, with respect to the potential intensification of shaking.
The Induced Tsunami

The Palu earthquake triggered a local but powerful tsunami that devastated the coastal area of Palu Bay shortly after the earthquake. Inundation depths of over 6 m and run-up heights of over 9 m were recorded at specific locations (e.g., Yalciner et al. 2018). At the only tide gauge with available data, located at Pantoloan harbor, a trough-to-peak wave amplitude of almost 4 m was recorded just 5 min after the rupture (Muhari et al. 2018). In Ngapa (Wani), on the northeastern shore of Palu Bay, CCTV footage shows the arrival of the tsunami wave after only 3 min.

Coseismic subsidence and uplift, as well as submarine and coastal landsliding, have been suggested as causes of the tsunami in Palu Bay (Heidarzadeh et al. 2018). Both displacements and landsliding are documented on land (Valkaniotis et al. 2018; Løvholt et al. 2018; Sassa and Takagawa 2019) and also at coastal slopes (Yalciner et al. 2018). Early tsunami models of the Sulawesi event performed using Okada's solution in combination with the USGS finite fault model (USGS 2018b) do not generate tsunami amplitudes large enough to agree with observations (Heidarzadeh et al. 2018; Sepulveda et al. 2018; Liu et al. 2018; van Dongeren et al. 2018). Liu et al. (2018) and Sepulveda et al. (2018) perform Okada-based tsunami modeling with earthquake sources generated by inverting satellite data, but also produce wave amplitudes that are too small. Reasonable tsunami waves are produced by combining tectonic and hypothetical landslide sources (van Dongeren et al. 2018; Liu et al. 2018). However, the predominantly short wavelengths associated with the observed small-scale, localized landsliding (Yalciner et al. 2018) appear to be incompatible with the observed long-period tsunami waves (Løvholt et al. 2018).

Physical and Computational Models

Earthquake–Tsunami Coupled Modeling

Since the earthquake and tsunami communities use different vocabulary, we here specify the terminology used throughout this manuscript. We refer to the complete physical setup, including, e.g., the bathymetry data set, fault structure and the governing equations for an earthquake or tsunami, as a 'physical model'. A computer program discretizing the equations and implementing the numerical workflow is termed a 'computational model'. The result of a computation for a specific event achieved with a computational model and according to a specific physical model is called a 'scenario'. We use 'model' where the use of the term as either physical or computational model is unambiguous.

SeisSol, the computational model used to produce the earthquake scenario (e.g., Dumbser and Käser 2006; Pelties et al. 2014; Uphoff et al. 2017), solves the elastodynamic wave equation for spontaneous dynamic rupture and seismic wave propagation. It determines the temporal and spatial evolution of slip on predefined frictional interfaces and the stress and velocity fields throughout the modeling domain. With this approach, the earthquake source is not predetermined, but evolves spontaneously as a consequence of the model's initial conditions and of the time-dependent, non-linear processes occurring during the earthquake. Initial conditions include the geometry and frictional strength of the fault(s), the tectonic stress state, and the regional lithological structure. Fault slip evolves as frictional shear failure according to an assigned friction law that controls how the fault yields and slides.
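As a minimal illustration of what such an assigned friction law does, the sketch below evaluates linear slip-weakening friction, the simplest commonly used formulation, in which frictional strength decreases with accumulated slip. Note that this study itself adopts a fast velocity-weakening formulation (described under Earthquake Nucleation and Fault Friction below); the parameter values here are illustrative only.

```python
def slip_weakening_strength(slip, sigma_n, mu_s=0.6, mu_d=0.3, d_c=0.4):
    """Linear slip-weakening friction (illustrative parameters, not those of
    this study): fault shear strength (Pa) decreases linearly with accumulated
    slip (m) from mu_s * sigma_n to mu_d * sigma_n over the critical slip
    distance d_c (m). sigma_n is the effective normal stress (Pa)."""
    mu = mu_s - (mu_s - mu_d) * min(slip, d_c) / d_c
    return mu * sigma_n

# Example: frictional strength after 0.2 m of slip at 20 MPa effective normal stress.
tau_c = slip_weakening_strength(0.2, 20.0e6)
```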
Model outputs include the spatial and temporal evolution of the earthquake rupture front(s), off-fault plastic strain, surface displacements, and the ground shaking caused by the radiated seismic waves. SeisSol uses the Arbitrary high-order accurate DERivative Discontinuous Galerkin (ADER-DG) method. It employs fully non-uniform, unstructured tetrahedral meshes to combine geometrically complex 3D geological structures, nonlinear rheologies, and high-order accurate propagation of seismic waves. Fast time to solution is achieved thanks to end-to-end computational optimization (Breuer et al. 2014; Heinecke et al. 2014; Rettenberger et al. 2016) and an efficient local time-stepping algorithm (Breuer et al. 2016; Uphoff et al. 2017). As a result, dynamic rupture simulations can reach high spatial and temporal resolution of increasingly complex geometrical and physical modeling components (e.g., Bauer et al. 2017; Wollherr et al. 2019). SeisSol is verified against a wide range of community benchmarks, including dipping and branching fault geometries, laboratory-derived friction laws, and heterogeneous on-fault initial stresses and material properties (de la Puente et al. 2009; Pelties et al. 2012, 2013, 2014; Wollherr et al. 2018), in line with the SCEC/USGS Dynamic Rupture Code Verification exercises (Harris et al. 2011, 2018). SeisSol is freely available (SeisSol website 2019; SeisSol GitHub 2019).

The computational model used to generate the tsunami scenario is StormFlash2D, which solves the nonlinear shallow water equations using an explicit Runge-Kutta discontinuous Galerkin discretization combined with a sophisticated wetting and drying treatment for the inundation at the coast (Vater and Behrens 2014; Vater et al. 2015, 2017). A tsunami is triggered by a (possibly time-dependent) perturbation of the discrete bathymetry. The shallow water approximation does not account for complex 3D effects such as dispersion and non-hydrostatic effects (e.g., compressive waves). Nevertheless, StormFlash2D allows for stable and accurate simulation of large-scale wave propagation in the deep sea, as well as small-scale wave shoaling and inundation at the shore, thanks to a multi-resolution adaptive mesh refinement approach based on a triangular refinement strategy (Behrens et al. 2005; Behrens and Bader 2009). Bottom friction is parameterized through Manning friction by a split-implicit discretization (Liang and Marche 2009). The model's applicability to tsunami events has been validated by a number of test cases (Vater et al. 2019), which are standard for the evaluation of operational tsunami codes (Synolakis et al. 2007).

Coupling between the earthquake and tsunami models is realized through the time-dependent coseismic 3D seafloor displacement field computed in the dynamic earthquake rupture scenario, which is translated into 2D bathymetry perturbations of the tsunami model using the ASCETE framework (Advanced Simulation of Coupled Earthquake and Tsunami Events, Gabriel et al. 2018).

Earthquake Model

The 3D dynamic rupture model of the Sulawesi earthquake requires initial assumptions related to the structure of the Earth, the structure of the fault system, the stress state, and the frictional strength of the faults. These input parameters are constrained by a variety of independent near-source and far-field data sets.
Most importantly, we aim to ensure mechanical viability by a systematic approach integrating the observed regional stress state and frictional parameters and including state-of-the-art earthquake physics and fracture mechanics concepts in the model (Ulrich et al. 2019). The earthquake model incorporates topography and bathymetry data and state-of-the-art information about the subsurface structure in the Palu region. Local topography and bathymetry are honored at a resolution of approximately 900 m (GEBCO 2015; Weatherall et al. 2015). 3D heterogeneous media are included by combining two subsurface velocity data sets at depth (see also Appendix 7.7). A local model by Awaliah et al. (2018), which is built from ambient noise tomography, covers the model domain down to 40 km depth. In this region, we assume a Poisson medium. The Collaborative Seismic Earth Model (Fichtner et al. 2018) is used for the rest of the model domain down to 150 km.

Fault Structure

For this model, we construct a network of non-planar, intersecting crustal faults involved in this earthquake. This includes three major fault segments: the Northern segment, a previously unmapped fault on which the earthquake nucleated, and the Palu and Saluki segments of the Palu-Koro fault (cf. Fig. 1b–e). We map the fault traces from the horizontal ground displacement field inferred from correlation of Sentinel-2 optical images (De Michele 2019) and from synthetic aperture radar (SAR) data (Bao et al. 2019), as discussed further below. Differential north-south offsets clearly delineate the on-land traces of the Palu and Saluki fault segments. The trace of the Northern segment is less well constrained in both data sets. Nevertheless, we produce a robust map by honoring the clearest features in both data sets and smoothing regions of large variance using QGIS v2.14 (Quantum GIS 2013).

Beneath the Bay, we adopt a relatively simple fault geometry motivated by the on-land fault strikes, the homogeneous pattern of horizontal ground deformation east of the Bay (De Michele 2019), which suggests slip on a straight, continuous fault under the Bay, and the absence of direct information available to constrain the rupture's path. We extend the Northern segment southward as a straight line from the point where it enters the Bay to the point where the Palu segment enters the Bay. We extend the Palu segment northward, adopting the same strike that it displays on land to the south of the Bay. This trace deviates a few km from the mapping reported in Bellier et al. (2006, their Fig. 2), both on and off land. South of the Bay, the modeled segment mostly aligns with the fault as mapped by Watkinson and Hall (2017, their Fig. 5).

We constrain the 3D structure of these faults using focal mechanisms and geodetic data. We assume that the Northern and Palu segments both dip \(65^{\circ }\) East, as suggested by the mainshock focal mechanisms (\(67^{\circ }\), USGS (2018a) and \(69^{\circ }\), IPGP (2018), Fig. 1b) and the focal mechanism of the October 1, 2018 \(M_w\) 5.3 aftershock (\(67^{\circ }\), BMKG solution, Fig. 1b). This is also consistent with pronounced asymmetric patterns of ground displacement in both the optical and SAR data, suggesting slip on dipping faults around the city of Palu and the Northern fault segment. In addition, the eastward dip of the Palu segment on land is consistent with the analysis of Bellier et al. (2006).
The southern end of the Palu segment bends towards the Saluki segment and features a dip of \(60^{\circ }\) to the northeast, as constrained by the source mechanism of the 2005 \(M_w\) 6.3 event (see Fig. 1b). In contrast, we assume that the Saluki segment is vertical. The assigned dip of \(90^{\circ }\) is consistent with the inferred ground displacement of comparable amplitude and extent on both sides of this fault segment (De Michele 2019). All faults extend from the surface to a depth of 20 km.

Stress State

The fault system is subject to a laterally homogeneous regional stress field, constrained systematically following Ulrich et al. (2019) from seismo-tectonic observations, knowledge of fault fluid pressurization, and the Mohr-Coulomb theory of frictional failure. This is motivated by the fact that the tractions on and strength of natural faults are difficult to quantify. With this approach, only four parameters must be specified to fully describe the state of stress and strength governing the fault system, as further detailed in Appendix 7.3. This systematic approach facilitates rapid dynamic rupture modeling of an earthquake. Using static considerations and a few trial dynamic simulations, we identify an optimal stress configuration for this scenario that simultaneously (i) maximizes the ratio of shear over normal stress across the fault system; (ii) determines shear traction orientations that predict surface deformation compatible with the measured ground deformation and focal mechanisms; and (iii) allows dynamic rupture across the fault system's geometric complexities. The resulting physical model is characterized by a stress regime acknowledging transtensional strain, high fluid pressure, and relatively well-oriented, apparently weak faults. The effective confining stress increases with depth at a gradient of 5.5 MPa/km. From 11 to 15 km depth, we taper the deviatoric stresses to zero to represent the transition from a brittle to a ductile deformation regime. This depth range is consistent with the 12 km interseismic locking depth estimated by Vigny et al. (2002).

Earthquake Nucleation and Fault Friction

Fault failure is initiated within a highly overstressed circular patch with a radius of 1.5 km situated at the hypocenter location as inferred by the GFZ (\(119.86^\circ\)E, \(0.22^\circ\)S, at 10 km depth). This depth is at the shallow end of the range of inferred hypocentral depths (Valkaniotis et al. 2018) and shallower than the modeled brittle–ductile transition, which marks the lower limit of the seismogenic zone. Slip evolves on the fault according to a fast velocity-weakening friction formulation, which is motivated by laboratory experiments that show strong dynamic weakening at coseismic slip rates (e.g., Di Toro et al. 2011). This formulation reproduces realistic rupture characteristics, such as reactivation and pulse-like behavior, without imposing small-scale heterogeneities (e.g., Dunham et al. 2011; Gabriel et al. 2012). We here use a form of fast velocity-weakening friction as proposed in the community benchmark problem TPV104 of the Southern California Earthquake Center (Harris et al. 2018) and parameterized by Ulrich et al. (2019). Friction drops rapidly from a steady-state, low-velocity friction coefficient, here 0.6, to a fully weakened friction coefficient, here 0.1 (see Appendix 7.4).

Model Resolution

A high-resolution computational model is crucial in order to accurately resolve the full dynamic complexity of the earthquake scenario.
The required high numerical accuracy is achieved by combining a numerical scheme that is accurate to high order with a mesh that is locally refined around the fault network. The earthquake model domain is discretized into an unstructured computational mesh of 8 million tetrahedral elements. The shortest element edge lengths are 200 m close to the faults. The mesh resolution is statically coarsened away from the fault system. Simulating 50 s of this event using fourth-order accuracy in space and time requires about 2.5 h on 560 Haswell cores of phase 2 of the SuperMUC supercomputer of the Leibniz Supercomputing Centre in Garching, Germany. We point out that running hundreds of such simulations is well within the scope of resources available to typical users of supercomputing centres. All data required to reproduce the earthquake scenario are detailed in Appendix 7.11.

Tsunami Model

The bathymetry and topography for the tsunami model are taken from the high-resolution BATNAS data set (v1.0), provided by the Indonesian Geospatial Data Agency (DEMNAS 2018). This data set has a horizontal resolution of 6 arc seconds (or approximately 190 m); it allows for a sufficiently accurate representation of bathymetric features, but remains relatively inaccurate with respect to the inundation treatment. However, we note that the data set is more accurate than data sets for which the vertical 'roof-top' approach is used, such as typical SRTM data (see, e.g., the accuracy analysis in McAdoo et al. 2007; Kolecka and Kozak 2014).

The coupling between the earthquake and tsunami models is enforced by adding a perturbation, derived from the 3D coseismic seafloor displacements in the dynamic rupture scenario, to the initial 2D bathymetry and topography of the tsunami model. These time-dependent displacement fields are given by the three-dimensional vector \((\varDelta x, \varDelta y, \varDelta z)\). In addition to the vertical displacement \(\varDelta z\), we incorporate the east-west and north-south horizontal components, \(\varDelta x\) and \(\varDelta y\), into the tsunami source by applying the method proposed by Tanioka and Satake (1996). This is motivated by the potential influence of Palu Bay's steep seafloor slopes (more than 50%). The ground displacement of the earthquake model is translated into the tsunami-generating bathymetry perturbation by
$$\begin{aligned} \varDelta b = \varDelta z - \varDelta x \frac{\partial b}{\partial x} - \varDelta y \frac{\partial b}{\partial y} , \end{aligned}$$
(1)
where \(b = b(x,y)\) is the bathymetry (increasing in the upward direction). \(\varDelta b\) is time-dependent, since \(\varDelta x\), \(\varDelta y\) and \(\varDelta z\) are time-dependent. The tsunami is sourced by adding \(\varDelta b\) to the initial bathymetry and topography of the tsunami model. It should be noted that a comparative scenario using only \(\varDelta z\) as the bathymetry perturbation (see Appendix 7.5) does not result in large deviations with regard to the preferred model. A minimal numerical sketch of this computation follows below.

Fig. 2 Setup of the tsunami model including high-resolution bathymetry and topography data overlain by the initial adaptive triangular mesh refined near the coast

The domain of the computational tsunami model (latitudes ranging from \(-1^\circ\) to \(0^\circ\), longitudes ranging from \(119^\circ\) to \(120^\circ\), see Fig. 2) encompasses Palu Bay and the nearby surroundings in the Makassar Strait, since we here focus on the wave behavior within Palu Bay.
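To make the source coupling of Eq. (1) concrete, the following minimal sketch evaluates the bathymetry perturbation on a regular grid using finite-difference gradients of the bathymetry. All array contents and grid spacings are hypothetical; in the actual workflow, the displacement fields \(\varDelta x\), \(\varDelta y\), \(\varDelta z\) are provided by the dynamic rupture model at each coupling time.

```python
import numpy as np

def bathymetry_perturbation(dz, dx_disp, dy_disp, b, spacing_x, spacing_y):
    """Tanioka and Satake (1996)-style source term of Eq. (1):
    delta_b = delta_z - delta_x * db/dx - delta_y * db/dy.
    b is the bathymetry (positive upward, as in the text); all 2D arrays share
    one regular grid with spacings spacing_x, spacing_y (m). Rows map to y,
    columns to x."""
    dbdy, dbdx = np.gradient(b, spacing_y, spacing_x)
    return dz - dx_disp * dbdx - dy_disp * dbdy

# Hypothetical coarse-grid example (all fields in metres, ~190 m grid spacing).
ny, nx = 200, 150
xgrid = np.arange(nx) * 190.0
b = -500.0 + 0.01 * np.tile(xgrid, (ny, 1))    # seafloor shallowing eastward (1% slope)
dz = np.zeros((ny, nx)); dz[90:110, :] = -1.5  # band of 1.5 m coseismic subsidence
dx_disp = np.full((ny, nx), 3.0)               # 3 m of eastward horizontal displacement
dy_disp = np.zeros((ny, nx))
delta_b = bathymetry_perturbation(dz, dx_disp, dy_disp, b, 190.0, 190.0)
```

On the gentle slope assumed here, the horizontal term contributes only a few centimetres; over Palu Bay's much steeper slopes (more than 50%), the same mechanism can modulate the source noticeably, which is why the Tanioka and Satake (1996) correction is applied.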
The tsunami model is initialized as an ocean at rest, for which (at \(t=0\)) the initial fluid depth is set in such a manner that the sea surface height (ssh, the deviation from mean sea level) is equal to zero everywhere in the model domain. Additionally, the fluid velocity is set to zero. This initial steady state is then altered by the time-dependent bathymetry perturbation throughout the simulation, which triggers the tsunami. The simulation is run for 40 min (simulation time), which requires 13,487 time steps. The triangle-based computational grid is initially refined near the coast, where the highest resolution within Palu Bay is about 3 arc seconds (or 80 m). This results in an initial mesh of 153,346 cells, which expands to more than 300,000 cells during the dynamically adaptive computation. The refinement strategy is based on the gradient in sea surface height. The parameterization of bottom friction includes Manning's roughness coefficient n. We assume \(n=0.03\), which is a typical value for tsunami simulations (Harig et al. 2008).

In the following, we present the physics-based coupled earthquake and tsunami scenario. We highlight key features and evaluate the model results against seismic and tsunami observations.

The Dynamic Earthquake Rupture Scenario: Sustained Supershear Rupture and Normal Slip Component Within Palu Bay

This earthquake rupture scenario is based on the systematic derivation of initial conditions presented in Sect. 3.2. We evaluate it by comparison of model synthetics with seismological data, geodetic data, and field observations in the near- and far-field.

Fig. 3 a Snapshot of the wavefield (absolute particle velocity in m/s) and the slip rate (in m/s) across the fault network at a rupture time of 15 s. b Overview of the simulated rupture propagation. Snapshots of the absolute slip rate are shown at rupture times of 2, 9, 13, 23 and 28 s. Labels indicate noteworthy features of the rupture

Earthquake Rupture

The dynamic earthquake scenario is characterized by a unilaterally propagating southward rupture (see Fig. 3 and animations in Appendix 7.10). The rupture nucleates at the northern tip of the Northern segment, then transfers to the Palu segment at the southern end of Palu Bay. Additionally, a shallow portion of the Palu-Koro fault beneath the Bay ruptures from North to South (see inset of Fig. 9a). This segment is dynamically unclamped due to a transient reduction of normal tractions while the rupture passes on the Northern segment. The rupture passes from the Palu segment onto the Saluki segment through a restraining bend at a latitude of \(-1.2^{\circ }\). In total, 195 km of faults are ruptured, leading to an \(M_w\) 7.6 earthquake scenario.

Fig. 4 Comparison of modeled (red) and observed (black) teleseismic displacement waveforms. a Full seismograms dominated by surface waves. A 66–450 s band-pass filter is applied to all traces. b Zoom in to body wave arrivals. A 10–450 s band-pass filter is applied to all traces. Synthetics are generated using Instaseis (Krischer et al. 2017) and the PREM model including anisotropic effects and a maximum period of 2 s. For each panel, a misfit value (rRMS) quantifies the agreement between synthetics and observations; rRMS equal to 0 corresponds to a perfect fit. For more details see Appendix 7.8. Waveforms at 10 additional stations are compared in Figs. 28, 29

Fig. 5 Moment-tensor representation of the dynamic rupture scenario and locations at which synthetic data are compared with observed records (red: stations compared in Fig. 4, blue: stations compared in Figs. 28, 29)

Fig. 6 Synthetic moment rate release function compared with those inferred from teleseismic data by Okuwaki et al. (2018), the USGS, and the SCARDEC method (optimal solution, Vallée et al. 2011)

Teleseismic Waves, Focal Mechanism, and Moment Release Rate

The dynamic rupture scenario satisfactorily reproduces the teleseismic surface waves (Figs. 4a, 28) and body waves (Figs. 4b, 29). Synthetics are generated at 15 teleseismic stations around the event (Fig. 5). Note that the data from these teleseismic stations are not used to build our model, as is done in classical kinematic models, but to validate the dynamic rupture scenario a posteriori by comparing the model results to these measurements. Following Ulrich et al. (2019), we translate the dynamic fault slip time histories of the dynamic rupture scenario into a set of 40 double-couple point sources (20 along strike by 2 along depth). From these sources, broadband seismograms are calculated from a Green's function database using Instaseis (Krischer et al. 2017) and the PREM model (Preliminary Reference Earth Model) for a maximum period of 2 s and including anisotropic effects. The synthetics agree well with the observed teleseismic signals in terms of both the dominant, long-period surface waves and the body wave signatures.

The focal mechanism of the modeled source is compatible with the one inferred by the USGS (compare Fig. 1b and Fig. 5). The nodal plane characterizing this model earthquake features strike/dip/rake angles of \(354^{\circ }\)/\(69^{\circ }\)/\(-14^{\circ }\), which are very close to the values of \(350^{\circ }\)/\(67^{\circ }\)/\(-17^{\circ }\) for the focal plane determined by the USGS. The dynamically released moment rate is in agreement with source time functions inferred from teleseismic data (Fig. 6). The scenario yields a relatively smooth, roughly box-car shaped moment release rate spanning the full rupture duration. This is consistent with the source time function from Okuwaki et al. (2018) and also with the smooth fault slip reported by Socquet et al. (2019). The rupture slows down at the Northern segment restraining bend at \(-0.35^{\circ }\) latitude. This slow-down resembles features of the moment rate solutions by the USGS and SCARDEC at \(\approx\) 5 s rupture time. The transfer of the rupture from the Palu segment to the Saluki segment at 23 s also produces a transient decrease in the modeled moment release rate, which is discernible in those inferred from observations as well.

Earthquake Surface Displacements

We use observations from optical and radar satellites, both sensitive to the horizontal coseismic surface displacements, to validate the outcome of the earthquake scenario (Figs. 7, 8). Along most of the rupture, fault displacements are sharp and linear, highlighting smooth and straight fault orientations with a few bends. The patterns and magnitudes of the final horizontal surface displacements (black arrows in Fig. 7) are determined from subpixel correlation of coseismic optical images acquired by the Copernicus Sentinel-2 satellites operated by the European Space Agency (ESA) (De Michele 2019). We use both the east-west and north-south components from optical image correlation. We also infer coseismic surface displacements by incoherent cross correlation of synthetic aperture radar (SAR) images acquired by the Japan Aerospace Exploration Agency (JAXA) Advanced Land Observation Satellite-2 (ALOS-2).
SAR can capture horizontal surface displacements in the along-track direction and a combination of vertical and horizontal displacements in the slant-range direction between the satellite and the ground. Here, we use the along-track horizontal displacements (Fig. 8b), which are nearly parallel to the general strike of the ruptured faults. Further details about the SAR data can be found in Appendix 7.6.

The use of two independent, but partially coinciding, data sets provides insight into data quality. We identify robust features in the imaged surface displacements by projecting the optical data into the along-track direction of the SAR data. The data sets appear to be consistent to first order (\(\pm 1\) m) in a 30 km wide area centered on the fault and south of \(-0.6^{\circ }\) latitude (region identified in Fig. 7). North of the Bay, we find that the optical displacements are large in magnitude relative to the SAR measurements. Such large displacements continue north of the inferred rupture trace, suggesting a bias in the optical data in this region. These large apparent displacements may be due to partial cloud cover in the optical images or to image misalignment. The east-west component seems unaffected by this problem. Significant differences are also observed near the Palu-Saluki bend. Thus, deviations between model synthetics and observational data in these areas are interpreted with caution.

Overall, the earthquake dynamic rupture scenario matches observed ground displacements well. East of the Palu segment, a good agreement between synthetic displacements and observations is achieved. Horizontal surface displacement vectors predicted by the model are well aligned with and of comparable amplitude to optical observations (Fig. 7). West of the Palu segment, the modeled amplitudes are in good agreement with the SAR (Fig. 8a) and optical data; however, the synthetic orientations point to the southwest, whereas the optical data are oriented to the southeast (Fig. 7). While surface displacement orientations around the Saluki segment are reproduced well, amplitudes may be overestimated by about 1 m on the eastern side of the fault (Fig. 8c). North of the Bay, the modeled amplitudes exceed the SAR measurements by about 2 m (Fig. 8c). Nevertheless, the subtle eastward rotation of the horizontal displacement vectors near the Northern segment bend (at \(-0.35^{\circ }\) latitude) is captured well by the scenario (Fig. 7).

Fig. 7 Comparison of the modeled and inferred horizontal surface displacements from subpixel correlation of Sentinel-2 optical images by De Michele (2019). Some parts of large inferred displacements, e.g., north of \(-0.5^{\circ }\) latitude, are probably artifacts, because they are not visible in the SAR data (see Fig. 8). The black polygon highlights where at least first-order agreement between SAR and optical data is achieved

Fig. 8 a Modeled and b measured ground displacements in the SAR satellite along-track direction (see text). c \(\text{Residual} = \text{(b)} - \text{(a)}\)

Fault Slip

The modeled slip distributions and orientations (Fig. 9) are modulated by the geometric complexities of the fault system. On the northern part of the Northern segment, slip is lower than elsewhere along the fault due to a restraining fault bend near \(-0.35^{\circ }\) latitude (Fig. 9a). South of this small bend, the slip magnitude increases and remains mostly homogeneous, ranging between 6 and 8 m. Peak slip occurs on the Palu segment.
Over most of the fault network, the faulting mechanism is predominantly strike-slip, but does include a small to moderate normal slip component (Fig. 9b). This dip-slip component varies as a function of fault orientation with respect to the regional stress field. It increases at the junction between the Northern and Palu segments just south of Palu Bay, and at the big bend between the Palu and Saluki fault segments, where dip-slip reaches a maximum of approximately 4 m. Pure strike-slip faulting is modeled on the southern part of the vertical Saluki segment (Fig. 9b). The dip-slip component along the rupture shown in Fig. 9b produces subsidence on the hanging wall side (east of the fault traces) and uplift on the footwall side (west of the fault traces). The resulting seafloor displacements are further discussed in Sect. 4.2.

Fig. 9 Source properties of the dynamic rupture scenario. a Final slip magnitude. The inset shows the slip magnitude on the main Palu-Koro fault within the Bay. b Dip-slip component. c Final rake angle. b, c both illustrate a moderate normal slip component. d Maximum rupture velocity, indicating pervasive supershear rupture

Earthquake Rupture Speed

The earthquake scenario features an early and persistent supershear rupture velocity (Fig. 9d). This means that the rupture speed exceeds the seismic shear wave velocity (\(V_s\)) of 2.5–3.1 km/s in the vicinity of the fault network from the onset of the event. This agrees with the inferences of supershear rupture by Bao et al. (2019) from back-projection analyses and by Socquet et al. (2019) from satellite data analyses. However, we here infer supershear propagation faster than the Eshelby speed (\(\sqrt{2}V_s\)), thus faster than Bao et al. (2019), and well within the stable supershear rupture regime (Burridge 1973).

Tsunami Propagation and Inundation: An Earthquake-Induced Tsunami

The surface displacements induced by the earthquake result in a bathymetry perturbation \(\varDelta b\) (as defined in Eq. (1)), which is visualized after 50 s of simulation time (20 s after rupture arrest, which is when seismic waves have left Palu Bay) in Fig. 10a. In general, the bathymetry perturbation shows subsidence east of the faults and uplift west of the faults. The additional bathymetry effect incorporated through the approach of Tanioka and Satake (1996) locally modulates the smooth displacement fields from the earthquake rupture scenario (see Appendix 7.5, Figs. 22, 23). Four west-east cross-sections of the final perturbation are shown in Fig. 10b. These capture the area of Palu Bay and clearly show the step induced by the normal component of fault slip. The step varies between 0.8 and 2.8 m, with an average of 1.5 m. Note that in structural geology this step would be termed the fault throw; however, since we here explicitly incorporate the effects of bathymetry, 'step' refers to the total seafloor perturbation. The variation in the step magnitude along the fault is displayed in Fig. 10c.

Fig. 10 a Snapshot of the computed bathymetry perturbation \(\varDelta b\) used as input for the tsunami model. The snapshot corresponds to 50 s of simulation time, at the end of the earthquake scenario. b West-east cross-sections of the bathymetry perturbation at \(-0.85^\circ\) (blue), \(-0.8^\circ\) (orange), \(-0.75^\circ\) (green), \(-0.7^\circ\) (red) latitude showing the induced step in bathymetry perturbation across the fault. c Step in bathymetry perturbation (as indicated in b) as a function of latitude. The grey dashed line shows the average

The tsunami generated in this scenario is mostly localized in Palu Bay, as illustrated in snapshots of the dynamically adaptive tsunami simulation after 20 s and 600 s of simulation time in Fig. 11. This is expected, as the modeled fault system is offshore only within the Bay. At 20 s, the seafloor displacement due to the earthquake is clearly visible in the sea surface height (ssh) within Palu Bay. Additionally, the effect of a small uplift is visible along the coast north of the Bay. The local behavior within Palu Bay is displayed in Fig. 12 at 20 s, 180 s and 300 s (see also the tsunami animation in Appendix 7.10). The local extrema along the coast reveal the complex wave reflections and refractions within the Bay caused by the complex, shallow bathymetry as well as funneling effects.

Fig. 11 Snapshots of the tsunami scenario at 20 s (left) and 600 s (right), showing the dynamic mesh adaptivity of the model

Fig. 12 Snapshots of the tsunami scenario at 20 s, 180 s and 300 s (left to right), showing only the area of Palu Bay. Colors depict the sea surface height (ssh), which is the deviation from mean sea level

We compare the synthetic time series of the Pantoloan harbor tide gauge at (\(119.856155^{\circ }\)E, \(0.71114^{\circ }\)S) to the observational gauge data. Additionally, a wealth of post-event field surveys characterizes the inundation of the Palu tsunami (e.g., Widiyanto et al. 2019; Muhari et al. 2018; Omira et al. 2019; Yalciner et al. 2018; Pribadi et al. 2018). We compare the tsunami modeling results with observational data from a comprehensive overview of run-up data, inundation data, and arrival times of tsunami waves around the shores of Palu Bay compiled by Yalciner et al. (2018) and Pribadi et al. (2018).

The Pantoloan tide gauge is the only tide gauge with available data in Palu Bay. The instrument is installed on a pier in Pantoloan harbor and thus records the change of water height with respect to a pier moving synchronously with the land. It has a 1-min sampling rate, and the observational time series was detided by filtering out wave periods above 2 h. The tsunami arrived 5 min after the earthquake onset time with a leading trough (Fig. 13). The first and highest wave arrived approximately 8 min after the earthquake rupture time. The difference between trough and crest amounts to almost 4 m. A second wave arrived after approximately 13 min, with a preceding trough at 12 min.

Fig. 13 Time series from the wave gauge at Pantoloan port. Blue dashed: measurements, orange: output from the model scenario

The corresponding synthetic time series derived from the tsunami scenario is also shown in Fig. 13. Although a leading wave trough is not present in the scenario results, the magnitude of the wave is well captured. Note that coseismic subsidence produces a negative shift of approximately 80 cm within the first minute of the scenario. This effect is not captured by the tide gauge due to the way the instrument is designed; we detail this issue in Sect. 5.3. It cannot be easily filtered out, due to re-adjustments throughout the computation to the background mean sea level. After 5 min of simulated time, the model mareogram resembles the measured wave behavior, characterized by a dominant wave period of about 4 min. The scenario exhibits a clear resonating wave behavior due to the narrow geometry of the Bay. We note that these wave amplitudes are produced by displacements resulting from the earthquake alone, without any contribution from landsliding.
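The detiding of the Pantoloan record mentioned above can be illustrated as follows: a low-pass filter with a 2 h cut-off period estimates the tidal signal, which is then subtracted from the 1-min samples, thereby eliminating wave periods above 2 h. The record below is synthetic and purely illustrative; only the filtering steps are meant to convey the procedure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detide(sea_level, dt_seconds=60.0, cutoff_period_s=2.0 * 3600.0):
    """Remove periods longer than cutoff_period_s from a gauge record by
    subtracting a low-pass (tidal) estimate of the signal."""
    nyquist_hz = 0.5 / dt_seconds
    wn = (1.0 / cutoff_period_s) / nyquist_hz       # normalized cut-off frequency
    b_coef, a_coef = butter(4, wn, btype="low")
    tide_estimate = filtfilt(b_coef, a_coef, sea_level)
    return sea_level - tide_estimate

# Hypothetical 6 h record sampled every minute: a tidal oscillation plus a short pulse.
t = np.arange(0.0, 6.0 * 3600.0, 60.0)
record = 0.8 * np.sin(2.0 * np.pi * t / (12.42 * 3600.0)) \
         + 1.5 * np.exp(-((t - 3.0 * 3600.0) / 300.0) ** 2)
detided = detide(record)
```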
We conduct a macro-scale comparison between the scenario and the inundation data, rather than a point-wise comparison, in view of the relatively low-resolution topography data available. We adopt the following terminology, which is commonly used in the tsunami community and in the field surveys we reference (Yalciner et al. 2018; Pribadi et al. 2018): the inundation elevation at a given inundated point is obtained by adding the inundation depth to the ground elevation at that point. In distinction, the run-up elevation is the inundation elevation measured at the inundation point that is farthest inland. We consistently report synthetic inundation elevations from the model. In Figs. 14 and 15, we compare model results to run-up elevations that are reported in the field surveys. For practical reasons, we compare the observed run-up elevations to synthetic inundation elevations at the exact measurement locations. In doing so, we consider only those points on land that are reached by water in the tsunami scenario. While inundation and run-up elevations are different observations, observed run-up and simulated inundation elevations can be compared if the run-up site is precisely georeferenced, which is here the case.

Fig. 14 illustrates the distribution of the modeled maximum inundation elevations around the Bay. A quantitative view comparing these same results with observations is shown in Fig. 15. Because of the limited model resolution, the validity of the scenario cannot be analysed site by site, and we only discuss the overall agreement of the simulated inundation elevations with observations. It is remarkable that the model yields inundation elevations similar to those observed, with some overestimation at the northern margins of the Bay and some slight underestimation in the southern part near the Grandmall in Palu City. We conclude that the large misfits in the inundation elevations are more or less randomly distributed, suggesting that they arise from local amplification effects that cannot be captured in the scenario due to insufficient bathymetry/topography resolution. Fig. 16 shows maximum inundation depths computed from the tsunami scenario near Palu City. Qualitatively, the results from the scenario agree quite well with observations, as the largest inundation depths are close to the Grandmall area, where vast damage due to the tsunami is reported.

Fig. 14 Simulated inundation elevations at different locations around Palu Bay where observations have been recorded

Fig. 15 Inundation elevations from observation (blue) and simulation (orange) at different locations around Palu Bay (left to right: around the Bay from the northwest to the south to the northeast, see Fig. 14 for locations)

In summary, the tsunami scenario sourced by coseismic displacements from the dynamic earthquake rupture scenario yields results that are qualitatively comparable to available observations. Wave amplitudes match well, as do the inundation elevations, given the limited quality of the available topography data.

Fig. 16 Maximum inundation depth near Palu City computed from the tsunami scenario

The Palu, Sulawesi tsunami was as unexpected as it was devastating. While the Palu-Koro fault system was known as a very active strike-slip plate boundary, tsunamis from strike-slip events are generally not anticipated. Fears arise that other regions, currently not expected to sustain tsunami-triggering ruptures, are at risk.
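A back-of-the-envelope check, using representative values from the scenario (slip \(u\) of 6–8 m, a rake deviating from pure strike-slip by roughly \(\lambda \approx 15^{\circ}\), and a fault dip of \(\delta \approx 65^{\circ}\); cf. Figs. 9, 10), illustrates why even modest slip obliquity on a submerged strike-slip fault can generate a metre-scale vertical step: the kinematically expected vertical offset across the fault is approximately
$$\begin{aligned} \varDelta z_{\mathrm{offset}} \approx u \, \sin \lambda \, \sin \delta \approx (6\text{--}8\ \mathrm{m}) \times 0.26 \times 0.91 \approx 1.4\text{--}1.9\ \mathrm{m}, \end{aligned}$$
of the same order as the 0.8–2.8 m (1.5 m average) step in the bathymetry perturbation obtained from the full dynamic rupture model (Fig. 10c). This simple estimate neglects elasticity, the free surface, and bathymetry effects, all of which the coupled model accounts for.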
This physics-based, coupled earthquake–tsunami model shows that a submarine strike-slip fault can produce a tsunami if a component of dip-slip faulting occurs. In the following, we discuss advantages and limitations of physics-based models of tsunami genesis, as well as of the individual earthquake and tsunami models. We then focus on the broader implications of rapid coupled scenarios for seismic hazard mitigation and response. Finally, we look ahead to improving the coupled model presented here in light of newly available information and data.

Success and Limitation of the Physics-Based Tsunami Source

We constrain the initial conditions for the coupled model according to the available earthquake data and physical constraints provided by previous studies, including those reporting regional transtensional strain (Walpersdorf et al. 1998; Socquet et al. 2006; Bellier et al. 2006). A stress field characterized by transtension induces a normal component of slip on the dipping faults in the earthquake scenario. The degree of transtension assumed here translates into a fault slip rake of about \(15^{\circ }\) on the \(65^{\circ }\)-dipping modeled faults (Fig. 9c), which is consistent with the earthquake focal mechanism (USGS 2018a). This normal slip component results in widespread uplift and subsidence. The surface rupture generates a throw across the fault of 1.5 m on average in Palu Bay, which translates into a step of similar magnitude in the bathymetry perturbation used to source the tsunami (Fig. 10c). This is sufficient for triggering a realistic tsunami that reproduces the observational data quite well. In particular, it is enough to obtain the observed wave amplitude at the Pantoloan harbor wave gauge and the recorded inundation elevations. However, we point out that transtension is not a necessary condition for generating oblique faulting on such a fault network. From static considerations, we show that specific alternative stress orientations can induce a considerable dip-slip component, particularly near fault bends, in biaxial stress regimes reflecting pure shear (Appendix 7.3, Fig. 20).

The coupled earthquake–tsunami model performs well at reproducing observations from a macroscopic perspective and suggests that additional tsunami sources are not needed to explain the main tsunami. However, it does not constrain the small-scale features of the tsunami source and thus does not completely rule out other, potentially additional, sources, such as those suggested by Carvajal et al. (2018) based on local tsunami waves captured on video. For example, despite the overall consistency of the earthquake scenario results with data, the fault slip scenario has viable alternatives. The fault within Palu Bay may have hosted a different or more complicated slip profile than this scenario produces. Also, the fault geometry underneath the Bay is not known. We choose a simple geometry that honors the information at hand (see Sect. 3.2.2). However, complex faulting may also exist within Palu Bay, as is observed south of the Bay, where slip was partitioned between minor dip-slip fault strands and the primary strike-slip rupture (Socquet et al. 2019). Such complexity would change the seafloor displacements and therefore the tsunami results. Furthermore, a less smooth fault geometry in the Northern region, closely fitting the inferred fault traces, could reduce fault slip locally and therefore produce modeled ground displacements that better fit the observations in the North.
However, the influence on seafloor displacements within Palu Bay is likely to be small. In contrast, a different slip scenario along the Palu-Koro fault within Palu Bay could have a large influence on the seafloor displacements and the modeled tsunami. The earthquake model shows a decrease in normal stress (unclamping) here as the model rupture front passes. Though slip is limited in the current scenario, an alternative fault geometry or a lower assigned static coefficient of friction on the Palu-Koro fault could lead to more triggered slip and thus to alternative earthquake and tsunami scenarios. Finally, incorporating the effect of landslides is likely necessary to capture local features of the tsunami wave and inundation patterns. Constraining these sources is very difficult without pre- and post-event high-resolution bathymetric charts. This study suggests that such sources play a secondary role in explaining the overall tsunami magnitude and wave patterns, since these can be generated by strike-slip faulting with a normal slip component.

The Sulawesi Earthquake Scenario

We review and discuss the dynamic earthquake scenario here and note avenues for additional modeling. For example, the rupture speed of this earthquake is of particular interest, although it does not contribute significantly to the tsunami generation in this scenario. The initial stress state and lithology included in the physical earthquake model are areas that could be improved with more in-depth study and better available data.

The dynamic earthquake model requires supershear rupture velocities to produce results that agree with the teleseismic data and moment rate function. This scenario also provides new perspectives on the possible timing and mechanism of this supershear rupture. Bao et al. (2019) infer an average rupture velocity of about 4 km/s from back-projection. This speed corresponds to a barely stable mechanical regime, which is interpreted as being promoted by a damage zone around the mature Palu-Koro fault that formed during previous earthquakes. In contrast, the earthquake scenario features an early and persistent rupture velocity of 5 km/s on average, close to the P-wave speed. Supershear rupture speed is enabled in the model by a relatively low fault strength and triggered immediately at rupture onset by a highly overstressed nucleation patch. Supershear transition is enabled and enhanced by high background stresses (or, more generally, low ratios of strength excess over stress drop) (Andrews 1976). The transition distance, the rupture propagation distance at which supershear rupture starts to occur, also depends on the nucleation energy (Dunham 2007; Gabriel et al. 2012, 2013). Observational support for the existence of a highly stressed nucleation region arises from the series of foreshocks that occurred nearby in the days before the mainshock, including an \(M_w\) 6.1 event on the same day as the mainshock. We conducted numerical experiments reducing the level of overstress within the nucleation patch, reaching a critical overstress level at which supershear rupture is no longer triggered immediately at rupture onset. These alternative models initiate at subshear rupture speeds and never transition to supershear. Importantly, these slower earthquake scenarios do not match the observational constraints, specifically the teleseismic waveforms and moment release rate.
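For reference, with the near-fault shear wave speeds of 2.5–3.1 km/s quoted above and the Poisson medium assumed in the local velocity model (for which \(V_p = \sqrt{3}\,V_s\)), the relevant reference speeds are approximately
$$\begin{aligned} \sqrt{2}\,V_s \approx 3.5\text{--}4.4\ \mathrm{km/s}, \qquad V_p = \sqrt{3}\,V_s \approx 4.3\text{--}5.4\ \mathrm{km/s}. \end{aligned}$$
The back-projection estimate of about 4 km/s thus lies near the lower, barely stable end of the supershear range, close to the Eshelby speed, whereas the modeled average of about 5 km/s falls between the Eshelby and P-wave speeds, well within the stable supershear regime.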
Stress and/or strength variations due to, for example, variations in tectonic loading, stress changes from previous earthquakes, or local material heterogeneities are expected, but poorly constrained, and are therefore not included in this dynamic rupture model. Accounting for such features in relation to long-term deformation can distinctly influence the stress field and lithological contrasts (e.g., van Dinther et al. 2013; Dal Zilio et al. 2018, 2019; Preuss et al. 2019; D'Acquisto et al. 2018; van Zelst et al. 2019). Realistic initial conditions in terms of stress and lithology are shown to significantly influence the dynamics of individual ruptures (Lotto et al. 2017a; van Zelst et al. 2019). Specifically, different fault stress states for the Palu and the Northern fault segments are possible, since the Palu-Koro fault acts as the regional plate-bounding fault that likely experiences increased tectonic loading (Fig. 1a). Self-consistent, physics-based stress and strength states could be obtained by coupling this earthquake–tsunami framework to geodynamic seismic cycle models (e.g., van Dinther et al. 2013, 2014; van Zelst et al. 2019), as done in Gabriel et al. (2018). However, in the absence of data or models justifying the introduction of such complexity, we here use the simplest option: a laterally homogeneous stress field that honors the regional-scale transtension.

We also note that the earthquake scenario is dependent on the subsurface structural model (e.g., Lotto et al. 2017a; van Zelst et al. 2019). The local velocity model of Awaliah et al. (2018) is of limited resolution within the Palu area, since only one of the stations used illuminates this region. Despite the strong effects of data regularization, this is, to our knowledge, the most detailed data set characterizing the subsurface in the area of study.

The Sulawesi Tsunami Scenario

Overall, the tsunami model shows good agreement with available key observations. Wave amplitudes and periods at the only available tide gauge station in Palu Bay match well. Inundation data from the model show satisfactory agreement with the observations by international survey teams (Yalciner et al. 2018). Apart from the earthquake model limitations discussed in Sect. 5.1 that may influence the tsunami characteristics, the following items may cause deviations between the tsunami model results and observations: (a) insufficiently accurate bathymetry/topography data; (b) the approximation by hydrostatic shallow water wave theory; (c) the simplified coupling between the earthquake rupture and tsunami scenarios. In the following, we briefly discuss these topics.

The limited resolution of the bathymetry and topography data sets may prevent us from properly capturing local effects, which in turn may affect comparisons with site-specific tsunami and inundation observations. This is discussed further and quantified in Appendix 7.9. While the adaptively refined computational mesh, which refines down to 80 m near the shore, allows inundation to be resolved numerically, interpolating the bathymetry data does not increase its resolution. Therefore, in Sect. 4.2, we focus on the overall agreement between model and observation in the distribution of simulated inundation elevations around Palu Bay. This is a relevant result, since it confirms that the modeled tsunami wave behavior is reasonable overall. The accuracy of the tsunami model may also be affected by the simplifications underlying the shallow water equations.
In particular, a near-field tsunami within a narrow bay may be affected by large bathymetry gradients. In the shallow-water framework, not all three spatial components of the ground displacements generated by the earthquake model can be properly accounted for. In fact, a direct application of a horizontal displacement to the hydrostatic (single-layer) shallow water model would lead to unrealistic momentum in the whole water column. Additionally, all bottom movements are immediately and directly transferred to the entire water column, since we model the water wave by (essentially 2D) shallow water theory. In reality, an adjustment process takes place. The large bathymetry gradients may also lead to non-hydrostatic effects in the water column, which cannot be neglected. While fully 3D simulations of tsunami genesis and propagation have been undertaken (e.g., Saito and Furumura 2009), less compute-intensive alternatives are under development (e.g., Jeschke et al. 2017) and should be tested to quantify the influence of such effects in realistic situations such as the Sulawesi event.

We account for the effect of the horizontal seafloor displacements by applying the method proposed by Tanioka and Satake (1996). We observe only minor differences in the modeled water waves when including the effect of the horizontal ground displacements (see Figs. 12, 16, 25, 26). We thus conclude that vertical ground displacements are the primary cause of the tsunami.

Directly after the earthquake, about 80 cm of ground subsidence is imprinted on the synthetic mareogram at the Pantoloan wave gauge, but is not visible in the observed signal (cf. Figs. 10, 13, 18). The tide gauge at Pantoloan is indeed not sensitive to vertical ground displacements, since the instrument and the water surface are displaced jointly during ground subsidence, and therefore their distance remains fixed. Note that we also cannot remove this shift from the synthetic time series, since the tsunami model includes a background mean sea level, to which it re-adjusts throughout the computation.

The tsunami model produces inundation elevations of more than 10 m at several locations in Palu Bay. Similarly large values are also reported in field surveys (e.g., Yalciner et al. 2018). We note that offshore tsunami heights ranging between 0 and 2 m are not inconsistent with large run-up elevations. A moderate tsunami wave can generate a significant run-up elevation if it reaches the shoreline with significant inertia (velocity). Amplification factors of 5–10 from wave height to local run-up height are not uncommon (see, e.g., Okal et al. 2010) and result from shoaling due to local bathymetry features.

Advantages and Outcome of a Physics-Based Coupled Model

A physics-based earthquake model coupled to a tsunami model is well suited to shed light on the mechanisms and competing hypotheses governing earthquake–tsunami sequences as puzzling as the Sulawesi event. By capturing dynamic slip evolution that is consistent with the fault geometry and the regional stress field, the dynamic rupture model produces mechanically consistent ground deformation, even in submarine areas where space-borne imaging techniques are blind. These seafloor displacement time histories, which include the influence of seismic waves, contribute to sourcing the tsunami in nature and are utilized as such in this coupled framework. However, the earthquake–tsunami coupling is not physically seamless.
For example, as noted above, seismic waves cannot be captured using the shallow water approach, but rather require a non-hydrostatic water body (e.g., Lotto et al. 2018). The coupled system nevertheless remains mechanically consistent at the typical spatio-temporal scales governing tsunami modeling.

The use of a dynamic rupture earthquake source makes distinct contributions relative to the standard finite-fault inversion source approach, which is typically used in tsunami models. The latter enables close fitting of observations through the use of a large number of free parameters. Despite recent advances (e.g., Shimizu et al. 2019), kinematic models typically need to pre-define fault geometries. Naive, first-order finite-fault sources are determined automatically and quickly after an earthquake (e.g., by the USGS or the GFZ German Research Centre for Geosciences), which is a great advantage. Such models can be improved later on by including new data and more complexity. However, kinematic models are characterized by inherent non-uniqueness and do not ensure mechanical consistency of the source (e.g., Mai et al. 2016). The physics-based model also suffers from non-uniqueness, but this is reduced, since it excludes scenarios that are not mechanically viable.

These advantages and the demonstrated progress potentially make physics-based, coupled earthquake–tsunami modeling an important tool for seismic hazard mitigation and rapid earthquake response. We facilitate rapid modeling of the earthquake scenario by systematically defining a suitable parameterization for the regional and fault-specific characteristics. We use a pre-established, efficient algorithm, based on physical relationships between parameters, to assign the ill-constrained stress state and strength on the fault using a few trial simulations (Ulrich et al. 2019). This limits the required input parameters to the subsurface structure, the fault structure, and four parameters governing the stress state and fault conditions. This enables rapid response in delivering physics-driven interpretations that can be integrated synergistically with established data-driven efforts within the first days and weeks after an earthquake.

The coupled model presented here produces a realistic scenario that agrees with key characteristics of available earthquake and tsunami data. However, future efforts will be directed toward improving the model as new information on fault structure or displacements within the Bay or additional tide gauge measurements become available. In addition, different earthquake models, varying in their fault geometry or in the physical laws governing on- and off-fault behavior, can be utilized in further studies of the influence of earthquake characteristics on tsunami generation and impact. This model provides high-resolution synthetics of, e.g., ground deformation in space and time. These results can be readily compared to observational data that are yet to be made available to the scientific community. We provide time series of modeled ground displacements in Appendix 7.2. Spatial variations of regional stress and fault strength could be constrained in the future by tectonic seismic cycle modeling capable of handling complex fault geometries. Future dynamic earthquake rupture modeling may additionally explore how varying levels of preexisting and coseismic off-fault damage affect the rupture speed specifically and rupture dynamics in general.
Future research should also be directed towards an even more realistic coupling strategy together with an extended sensitivity analysis on the effects of such coupling. This, e.g., requires the integration of non-hydrostatic extensions for the tsunami modeling part (Jeschke et al. 2017) into the coupling framework. We present a coupled, physics-based scenario of the 2018 Palu, Sulawesi earthquake and tsunami, which is constrained by rapidly available observations. We demonstrate that coseismic oblique-slip on a dipping strike-slip fault produces a vertical step across the submarine fault segment of 1.5 m on average in the tsunami source. This is sufficient to produce reasonable tsunami amplitude and inundation elevations. The critical normal-faulting component results from transtension, prevailing in this region, and the fault system geometry. The fully dynamic earthquake model captures important features, including the timing and speed of the rupture, 3D geometric complexities of the faults, and the influence of seismic waves on the rupture propagation. We find that an early onset of supershear rupture speed, sustained for the duration of the rupture across geometric complexities, is required to match a range of far-field and near-fault observations. The modelled tsunami amplitudes and inundation elevations agree with observations within the range of modeling uncertainties dominated by the available bathymetry and topography data. We conclude that the primary tsunami source may have been coseismically generated vertical displacements. However, in a holistic approach aiming to match high-frequency tsunami features, local effects such as landsliding, non-hydrostatic wave effects, and high resolution topographical features should be included. A physics-based earthquake and coupled tsunami model is specifically useful to assess tsunami hazard in tectonic settings currently underrepresented in operational hazard assessment. We demonstrate that high-performance computing empowered dynamic rupture modeling produces well-constrained studies integrating source observations and earthquake physics very quickly after an event occurs. In the future, such physics-based earthquake–tsunami response can complement both on-going hazard mitigation and the established urgent response tool set. Andrews, D. (1976). Rupture velocity of plane strain shear cracks. Journal of Geophysical Research, 81(32), 5679–5687. https://doi.org/10.1029/JB081i032p05679. Aochi, H., & Madariaga, R. (2003). The 1999 Izmit, Turkey, earthquake: Nonplanar fault structure, dynamic rupture process, and strong ground motion. Bulletin of the Seismological Society of America, 93(3), 1249–1266. https://doi.org/10.1785/0120020167. Aochi, H., Douglas, J., & Ulrich, T. (2017). Stress accumulation in the Marmara Sea estimated through ground-motion simulations from dynamic rupture scenarios: Stress Accumulation in the Marmara Sea. Journal of Geophysical Research: Solid Earth, 122(3), 12219–2235. https://doi.org/10.1002/2016JB013790. Argus, D. F., Gordon, R. G., & DeMets, C. (2011). Geologically current motion of 56 plates relative to the no-net-rotation reference frame. Geochemistry, Geophysics, Geosystems. https://doi.org/10.1029/2011GC003751. Awaliah, W. O., Yudistira, T., & Nugraha, A. D. (2018). Identification of 3-d shear wave velocity structure beneath sulawesi island using ambient noise tomography method. In 10th ACES international workshop. http://quaketm.bosai.go.jp/~shiqing/ACES2018/abstracts/aces_abstract_awaliah.pdf. 
Accessed 7 Aug 2019. Bao, H., Ampuero, J. P., Meng, L., Fielding, E. J., Liang, C., Milliner, C. W. D., et al. (2019). Early and persistent supershear rupture of the 2018 magnitude 7.5 Palu earthquake. Nature Geoscience, 12, 200–205. https://doi.org/10.1038/s41561-018-0297-z. Bauer, A., Scheipl, F., Küchenhoff, H., & Gabriel, A. A. (2017). Modeling spatio-temporal earthquake dynamics using generalized functional additive regression. In Proceedings of the 32nd international workshop on statistical modelling, vol. 2, pp. 146–149. Behrens, J., & Bader, M. (2009). Efficiency considerations in triangular adaptive mesh refinement. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering, 367(1907), 4577–4589. https://doi.org/10.1098/rsta.2009.0175. Behrens, J., Rakowsky, N., Hiller, W., Handorf, D., Läuter, M., Päpke, J., et al. (2005). amatos: Parallel adaptive mesh generator for atmospheric and oceanic simulation. Ocean Modelling, 10(1–2), 171–183. https://doi.org/10.1016/j.ocemod.2004.06.003. Bellier, O., Sébrier, M., Seward, D., Beaudouin, T., Villeneuve, M., & Putranto, E. (2006). Fission track and fault kinematics analyses for new insight into the Late Cenozoic tectonic regime changes in West-Central Sulawesi (Indonesia). Tectonophysics, 413(3–4), 201–220. https://doi.org/10.1016/j.tecto.2005.10.036. Beyreuther, M., Barsch, R., Krischer, L., Megies, T., Behr, Y., & Wassermann, J. (2010). ObsPy: A Python toolbox for seismology. Seismological Research Letters, 81(3), 530–533. https://doi.org/10.1785/gssrl.81.3.530. Bird, P. (2003). An updated digital model of plate boundaries. Geochemistry, Geophysics, Geosystems. https://doi.org/10.1029/2001GC000252. Borrero, J. C., Legg, M. R., & Synolakis, C. E. (2004). Tsunami sources in the southern California bight. Geophysical Research Letters, 31(L13), 211. https://doi.org/10.1029/2004GL020078. Breuer, A., Heinecke, A., & Bader, M. (2016). Petascale local time stepping for the ADER-DG finite element method. In 2016 IEEE international parallel and distributed processing symposium (IPDPS) (pp 854–863). Chicago, IL: IEEE. https://doi.org/10.1109/IPDPS.2016.109 Breuer, A., Heinecke, A., Rettenberger, S., Bader, M., Gabriel, A. A., & Pelties, C. (2014). Sustained Petascale performance of seismic simulations with SeisSol on SuperMUC. In Supercomputing. ISC 2014. Lecture Notes in Computer Science (vol. 8488, pp. 1–18). Cham: Springer. https://doi.org/10.1007/978-3-319-07518-1_1. Burridge, R. (1973). Admissible speeds for plane-strain self-similar shear cracks with friction but lacking cohesion. Geophysical Journal International, 35(4), 439–455. https://doi.org/10.1111/j.1365-246X.1973.tb00608.x. Carvajal, M., Araya-Cornejo, C., Sepúlveda, I., Melnick, D., & Haase, J. S. (2018). Nearly instantaneous tsunamis following the Mw 7.5 2018 palu earthquake. Geophysical Research Letters. https://doi.org/10.1029/2019gl082578. D'Acquisto, M., Dal Zilio, L., van Dinther, Y., Molinari, I., Gerya, T., & Kissling, E. (2018). Modelling tectonics and seismicity due to slab retreat along the northern apennines thrust belt. In AGU fall meeting 2018. https://agu.confex.com/agu/fm18/meetingapp.cgi/Paper/431867. Accessed 7 Aug 2019. Dal Zilio, L., van Dinther, Y., Gerya, T., & Pranger, C. (2018). Seismic behaviour of mountain belts controlled by plate convergence rate. Earth and Planetary Science Letters, 482, 81–92. https://doi.org/10.1016/j.epsl.2017.10.053. Dal Zilio, L., van Dinther, Y., Gerya, T., & Avouac, J. (2019). 
Bimodal seismicity in the himalaya controlled by fault friction and geometry. Nature Communications, 10, 48. https://doi.org/10.1038/s41467-018-07874-8. Daryono, M. R. (2018). Paleoseismologi Tropis Indonesia (Dengan Studi Kasus Di Sesar Sumatra, Sesar Palukoro-Matano, Dan Sesar Lembang). http://docplayer.info/111161004-Paleoseismologi-tropis-indonesia-dengan-studi-kasus-di-sesar-sumatra-sesar-palukoro-matano-dan-sesar-lembang-disertasi.html. Accessed 7 Aug 2019. de la Puente, J., Ampuero, J. P., & Käser, M. (2009). Dynamic rupture modeling on unstructured meshes using a discontinuous Galerkin method. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2008JB006271. De Michele, M. (2019). Subpixel offsets of copernicus sentinel 2 data, related to the displacement field of the sulawesi earthquake (2018, \(M_w\) 7.5). https://doi.org/10.5281/zenodo.2573936. DEMNAS. (2018). Seamless digital elevation model (DEM) dan batimetri nasional. Badan Informasi Geospasial. http://tides.big.go.id/DEMNAS. Accessed 1 Oct 2018. Di Toro, G., Han, R., Hirose, T., De Paola, N., Nielsen, S., Mizoguchi, K., et al. (2011). Fault lubrication during earthquakes. Nature, 471(7339), 494–498. https://doi.org/10.1038/nature09838. Dumbser, M., & Käser, M. (2006). An arbitrary high-order discontinuous Galerkin method for elastic waves on unstructured meshes—II. The three-dimensional isotropic case. Geophysical Journal International, 167(1), 319–336. https://doi.org/10.1111/j.1365-246X.2006.03120.x. Dunham, E. M. (2007). Conditions governing the occurrence of supershear ruptures under slip-weakening friction. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2006JB004717. Dunham, E. M., Belanger, D., Cong, L., & Kozdon, J. E. (2011). Earthquake ruptures with strongly rate-weakening friction and off-fault plasticity, Part 1: Planar faults. Bulletin of the Seismological Society of America, 101(5), 2296–2307. https://doi.org/10.1785/0120100075. Fichtner, A., van Herwaarden, D. P., Afanasiev, M., Simute, S., Krischer, L., Cubuk-Sabuncu, Y., et al. (2018). The collaborative seismic earth model: Generation 1. Geophysical Research Letters, 45(9), 4007–4016. https://doi.org/10.1029/2018GL077338. Gabriel, A. A., Ampuero, J. P., Dalguer, L. A., & Mai, P. M. (2012). The transition of dynamic rupture styles in elastic media under velocity-weakening friction. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2012JB009468. Gabriel, A. A., Ampuero, J. P., Dalguer, L. A., & Mai, P. M. (2013). Source properties of dynamic rupture pulses with off-fault plasticity. Journal of Geophysical Research: Solid Earth, 118(8), 4117–4126. https://doi.org/10.1002/jgrb.50213. Gabriel, A. A., Behrens, J., Bader, M., van Dinther, Y., Gunawan, T., Madden, E. H., et al. (2018). S21E-0492: Coupled seismic cycle—Earthquake dynamic rupture—Tsunami models. In AGU fall meeting 2018, Washington, D.C. https://agu.confex.com/agu/fm18/meetingapp.cgi/Paper/453669. Acceseed 7 Aug 2019. Gallovič, F., Valentová, Ľ., Ampuero, J.‐P., & Gabriel, A.‐A. (2019a). Bayesian Dynamic Finite‐Fault Inversion: 1. Method and Synthetic Test. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2019JB017510. Gallovič, F., Valentová, Ľ., Ampuero, J.‐P., & Gabriel, A.‐A. (2019b). Bayesian Dynamic Finite‐Fault Inversion: 2. Application to the 2016 \(M_w\)6.2 Amatrice, Italy, Earthquake. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2019JB017512. GEBCO. (2015). 
The GEBCO_2014 Grid, version 20150318. Geist, E. L., & Parsons, T. (2005). Triggering of tsunamigenic aftershocks from large strike-slip earthquakes: Analysis of the November 2000 New Ireland earthquake sequence. Geochemistry, Geophysics, Geosystems. https://doi.org/10.1029/2005GC000935. Harig, S., Chaeroni, Pranowo W. S., & Behrens, J. (2008). Tsunami simulations on several scales: Comparison of approaches with unstructured meshes and nested grids. Ocean Dynamics, 58, 429–440. https://doi.org/10.1007/s10236-008-0162-5. Harris, R. A., Barall, M., Andrews, D., Duan, B., Ma, S., Dunham, E., et al. (2011). Verifying a computational method for predicting extreme ground motion. Seismological Research Letters, 82(5), 638–644. https://doi.org/10.1785/gssrl.82.5.638. Harris, R. A., Barall, M., Aagaard, B., Ma, S., Roten, D., Olsen, K., et al. (2018). A suite of exercises for verifying dynamic earthquake rupture codes. Seismological Research Letters, 89(3), 1146–1162. https://doi.org/10.1785/0220170222. Heidarzadeh, M., Muhari, A., & Wijanarto, A. B. (2018). Insights on the source of the 28 september 2018 sulawesi tsunami, Indonesia based on spectral analyses and numerical simulations. Pure and Applied Geophysics. https://doi.org/10.1007/s00024-018-2065-9. Heidbach, O., Rajabi, M., Cui, X., Fuchs, K., Müller, B., Reinecker, J., et al. (2018). The World Stress Map database release 2016: Crustal stress pattern across scales. Tectonophysics, 744, 484–498. https://doi.org/10.1016/J.TECTO.2018.07.007. Heinecke, A., Breuer, A., Rettenberger, S., Bader, M., Gabriel, A. A., Pelties, C., et al. (2014). Petascale high order dynamic rupture earthquake simulations on heterogeneous supercomputers. In SC14: International conference for high performance computing, networking, atorage and analysis (pp. 3–14). IEEE. https://doi.org/10.1109/SC.2014.6. IPGP. (2018). http://geoscope.ipgp.fr/index.php/en/catalog/earthquake-description?seis=us1000h3p4. Accessed 1 Oct 2018. Jeschke, A., Pedersen, G. K., Vater, S., & Behrens, J. (2017). Depth-averaged non-hydrostatic extension for shallow water equations with quadratic vertical pressure profile: Equivalence to Boussinesq-type equations. International Journal for Numerical Methods in Fluids, 84(10), 569–583. https://doi.org/10.1002/fld.4361. Kolecka, N., & Kozak, J. (2014). Assessment of the accuracy of SRTM C- and X-Band high mountain elevation data: A case study of the polish tatra mountains. Pure and Applied Geophysics, 171(6), 897–912. https://doi.org/10.1007/s00024-013-0695-5. Krischer, L., Hutko, A. R., van Driel, M., Stähler, S., Bahavar, M., Trabant, C., et al. (2017). On-demand custom broadband synthetic seismograms. Seismological Research Letters, 88(4), 1127–1140. https://doi.org/10.1785/0220160210. Legg, M. R., & Borrero, J. C. (2001). Tsunami potential of major restraining bends along submarine strike-slip faults. In Proceedings of the international tsunami symposium 2001. NOAA/PMEL, 1, pp. 331–342. Legg, M. R., Borrero, J. C., & Synolakis, C. E. (2003). Tsunami hazards from strike-slip earthquakes. American Geophysical Union, Fall Meeting 2003, abstract id OS21D-06. http://adsabs.harvard.edu/abs/2003AGUFMOS21D..06L. Accessed 7 Aug 2019. Liang, C., & Fielding, E. J. (2017). Interferometry with ALOS-2 full-aperture ScanSAR data. IEEE Transactions on Geoscience and Remote Sensing, 55(5), 2739–2750. Liang, Q., & Marche, F. (2009). Numerical resolution of well-balanced shallow water equations with complex source terms. Advances in Water Resources, 32, 873–884. 
https://doi.org/10.1016/j.advwatres.2009.02.010. Liu, P. L. F., Barranco, I., Fritz, H. M., Haase, J. S., Prasetya, G. S., Qiu, Q., et al. (2018). What we do and don't know about the 2018 Palu Tsunami—A future plan. In AGU fall meeting 2018. https://agu.confex.com/agu/fm18/meetingapp.cgi/Paper/476669. Accessed 7 Aug 2019. Lotto, G. C., Dunham, E. M., Jeppson, T. N., & Tobin, H. J. (2017a). The effect of compliant prisms on subduction zone earthquakes and tsunamis. Earth and Planetary Science Letters, 458, 213–222. https://doi.org/10.1016/j.epsl.2016.10.050. Lotto, G. C., Nava, G., & Dunham, E. M. (2017b). Should tsunami simulations include a nonzero initial horizontal velocity? Earth, Planets and Space, 69(1), 117. https://doi.org/10.1186/s40623-017-0701-8. Lotto, G. C., Jeppson, T. N., & Dunham, E. M. (2018). Fully coupled simulations of megathrust earthquakes and tsunamis in the Japan trench, Nankai Trough, and Cascadia Subduction Zone. Pure and Applied Geophysics, 1, 1–33. https://doi.org/10.1007/s00024-018-1990-y. Løvholt, F., Hasan, H., Lorito, S., Romano, F., Brizuela, B., Piatanesi, A., et al. (2018). Multiple source sensitivity study to model the 28 September Sulawesi tsunami – landslide and strike slip sources. In AGU fall meeting 2018, Washington, DC. https://agu.confex.com/agu/fm18/meetingapp.cgi/Paper/476627. Accessed 7 Aug 2019. Maeda, T., & Furumura, T. (2013). FDM simulation of seismic waves, ocean acoustic waves, and tsunamis based on tsunami-coupled equations of motion. Pure and Applied Geophysics, 170(1–2), 109–127. https://doi.org/10.1007/s00024-011-0430-z. Mai, P. M. (2019). Supershear tsunami disaster. Nature Geoscience, 12, 150–151. https://doi.org/10.1038/s41561-019-0308-8. Mai, P. M., Schorlemmer, D., Page, M., Ampuero, J. P., Asano, K., Causse, M., et al. (2016). The earthquake-source inversion validation (SIV) project. Seismological Research Letters, 87(3), 690–708. https://doi.org/10.1785/0220150231. Mansinha, L., & Smylie, D. E. (1971). The displacement fields of inclined faults. Bulletin of the Seismological Society of America, 61(5), 1433–1440. McAdoo, B. G., Richardson, N., & Borrero, J. (2007). Inundation distances and run-up measurements from ASTER, QuickBird and SRTM data, Aceh coast, Indonesia. International Journal of Remote Sensing, 28(13–14), 2961–2975. https://doi.org/10.1080/01431160601091795. Muhari, A., Imamura, F., Arikawa, T., Hakim, A. R., & Afriyanto, B. (2018) Solving the puzzle of the september 2018 Palu, Indonesia, tsunami mystery: clues from the tsunami waveform and the initial field survey data. Journal of Disaster Research 13(Scientific Communication), sc20181108. https://doi.org/10.20965/jdr.2018.sc20181108. Oeser, J., Bunge, H. P., & Mohr, M. (2006). Cluster design in the earth sciences: Tethys. International conference on high performance computing and communications (pp. 31–40). Berlin: Springer. Okada, Y. (1985). Surface deformation due to shear and tensile faults in a half-space. Bulletin of the Seismological Society of America, 75(4), 1135. Okal, E. A., Fritz, H. M., Synolakis, C. E., Borrero, J. C., Weiss, R., Lynett, P. J., et al. (2010). Field survey of the Samoa Tsunami of 29 September 2009. Seismological Research Letters, 81(4), 577–591. https://doi.org/10.1785/gssrl.81.4.577. Okuwaki, R., Yagi, Y., & Shimizu, K. (2018). rokuwaki/2018paluindonesia: v2.0. https://doi.org/10.5281/zenodo.1469007. Omira, R., Dogan, G. G., Hidayat, R., Husrin, S., Prasetya, G., Annunziato, A., et al. (2019). 
The september 28th, 2018, tsunami In Palu-Sulawesi, Indonesia: a post-event field survey. Pure and Applied Geophysics, 176(4), 1379–1395. https://doi.org/10.1007/s00024-019-02145-z. Pelinovsky, E., Yuliadi, D., Prasetya, G., & Hidayat, R. (1997). The 1996 Sulawesi Tsunami. Natural Hazards, 16(1), 29–38. https://doi.org/10.1023/A:1007904610680. Pelties, C., Puente, J., Ampuero, J. P., Brietzke, G. B., & Käser, M. (2012). Three-dimensional dynamic rupture simulation with a high-order discontinuous Galerkin method on unstructured tetrahedral meshes. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2011JB008857. Pelties, C., Gabriel, A. A., & Ampuero, J. P. (2013). Verification of an ADER-DG method for complex dynamic rupture problems. Geoscientific Model Development Discussions, 6, 5981–6034. https://doi.org/10.5194/gmdd-6-5981-2013. Pelties, C., Gabriel, A. A., & Ampuero, J. P. (2014). Verification of an ADER-DG method for complex dynamic rupture problems. Geoscientific Model Development, 7(3), 847–866. https://doi.org/10.5194/gmd-7-847-2014. Peyrat, S., Olsen, K., & Madariaga, R. (2001). Dynamic modeling of the 1992 Landers earthquake. Journal of Geophysical Research: Solid Earth, 106(B11), 26,467–26,482. https://doi.org/10.1029/2001JB000205. Power, W., Clark, K., King, D. N., Borrero, J., Howarth, J., Lane, E. M., et al. (2017). Tsunami runup and tide-gauge observations from the 14 november 2016 M7.8 Kaikōura earthquake, New Zealand. Pure and Applied Geophysics, 174(7), 2457–2473. https://doi.org/10.1007/s00024-017-1566-2. Prasetya, G. S., De Lange, W. P., & Healy, T. R. (2001). The Makassar Strait Tsunamigenic region, Indonesia. Natural Hazards, 24(3), 295–307. https://doi.org/10.1023/A:1012297413280. Preuss, S., Herrendörfer, R., Gerya, T., Ampuero, J., & van Dinther, Y. (2019). Seismic and aseismic fault growth lead to different fault orientations. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2019JB017324. Pribadi, S., Nugraha, J., Susanto, E., Chandra, Gunawan I., Haryono, T., & Hery, I. (2018). Laporan pendahuluan gempabumi dan tsunami donggala-palu 2018 (Preliminary report on the Donggala-Palu 2018 earthquake and tsunami). Pers. comm. Quantum GIS. (2013). Development team. Quantum GIS geographic information system. Open Source geospatial foundation project. Rettenberger, S., Meister, O., Bader, M., & Gabriel, A. A. (2016). Asagi: A parallel server for adaptive geoinformation. In Proceedings of the Exascale applications and software conference 2016, ACM, New York, NY, USA, EASC '16, pp. 2:1–2:9. https://doi.org/10.1145/2938615.2938618 Rosen, P. A., Gurrola, E., Sacco, G. F., & Zebker, H. (2012). The insar scientific computing environment. In Synthetic aperture radar, 2012. EUSAR. 9th European conference on, VDE, pp. 730–733. Ryan, K. J., Geist, E. L., Barall, M., & Oglesby, D. D. (2015). Dynamic models of an earthquake and tsunami offshore Ventura, California. Geophysical Research Letters, 42(16), 6599–6606. https://doi.org/10.1002/2015GL064507. Saito, T., & Furumura, T. (2009). Three-dimensional simulation of tsunami generation and propagation: Application to intraplate events. Journal of Geophysical Research, 114(B2), B02,307. https://doi.org/10.1029/2007JB005523. Sassa, S., & Takagawa, T. (2019). Liquefied gravity flow-induced tsunami: first evidence and comparison from the 2018 Indonesia sulawesi earthquake and tsunami disasters. Landslides, 16(1), 195–200. https://doi.org/10.1007/s10346-018-1114-x. SeisSol GitHub (2019). 
https://github.com/SeisSol/SeisSol. Accessed 7 Aug 2019. SeisSol website (2019). https://www.seissol.org. Accessed 7 Aug 2019. Sepulveda, I., Haase, J. S., Liu, P. L. F., Xu, X., Carvajal, M. (2018). On the contribution of co-seismic displacements to the 2018 Palu tsunami. In AGU Fall Meeting 2018. https://agu.confex.com/agu/fm18/meetingapp.cgi/Paper/476717. Accessed 7 Aug 2019. Shimizu, K., Yagi, Y., Okuwaki, R., & Fukahata, Y. (2019). Development of an inversion method to extract information on fault geometry from teleseismic data. https://doi.org/10.31223/osf.io/q58t7. Simons, W. J., Riva, R., Pietrzak, J., et al. (2018). Tsunami potential of the 2018 Sulawesi earthquake from GNSS constrained source mechanism. In AGU Fall Meeting 2018, Washington, D.C. https://agu.confex.com/agu/fm18/meetingapp.cgi/Paper/476730. Accessed 7 Aug 2019. Socquet, A., Simons, W., Vigny, C., McCaffrey, R., Subarya, C., Sarsito, D., et al. (2006). Microblock rotations and fault coupling in SE Asia triple junction (Sulawesi, Indonesia) from GPS and earthquake slip vector data. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2005JB003963. Socquet, A., Hollingsworth, J., Pathier, E., & Bouchon, M. (2019). Evidence of supershear during the 2018 magnitude 7.5 Palu earthquake from space geodesy. Nature Geoscience, 12, 192–199. https://doi.org/10.1038/s41561-018-0296-0. Song, X., Zhang, Y., Shan, X., Liu, Y., Gong, W., & Qu, C. (2019). Geodetic observations of the 2018 Mw 7.5 Sulawesi earthquake and its implications for the kinematics of the Palu fault. Geophysical Research Letters, 46(8), 4212–4220. https://doi.org/10.1029/2019GL082045. Synolakis, C. E., Bernard, E. N., Titov, V. V., Kânoğlu, U., & González, F. I. (2007). Standards, criteria, and procedures for NOAA evaluation of tsunami numerical models. Tech. Rep. NOAA Technical Memorandum OAR PMEL-135, NOAA/OAR/PMEL. Tanioka, Y., & Satake, K. (1996). Tsunami generation by horizontal displacement of ocean bottom. Geophysical Research Letters, 23(8), 861–864. https://doi.org/10.1029/96GL00736. Tanioka, Y., Yudhicara, Kususose T., Kathiroli, S., Nishimura, Y., Iwasaki, S. I., & Satake, K. (2006). Rupture process of the 2004 great Sumatra-Andaman earthquake estimated from tsunami waveforms. Earth, Planets and Space, 58(2), 203–209. https://doi.org/10.1186/BF03353379. Ulrich, T., Gabriel, A. A., Ampuero, J. P., & Xu, W. (2019). Dynamic viability of the 2016 Mw 7.8 Kaikōura earthquake cascade on weak crustal faults. Nature Communications, 10(1), 1213. https://doi.org/10.1038/s41467-019-09125-w. Uphoff, C., Rettenberger, S., Bader, M., Madden, E., Ulrich, T., Wollherr, S., & Gabriel, A. A. (2017). Extreme scale multi-physics simulations of the tsunamigenic 2004 sumatra megathrust earthquake. In Proceedings of the international conference for high performance computing, networking, storage and analysis, SC 2017. https://doi.org/10.1145/3126908.3126948 USGS. (2018a). https://earthquake.usgs.gov/earthquakes/eventpage/us1000h3p4/moment-tensor. Accessed 7 Aug 2019. USGS. (2018b). https://earthquake.usgs.gov/earthquakes/eventpage/us1000h3p4/finite-fault. Accessed 7 Aug 2019. Valkaniotis, S., Ganas, A., Tsironi, V., & Barberopoulou, A. (2018). A preliminary report on the M7.5 Palu 2018 earthquake co-seismic ruptures and landslides using image correlation techniques on optical satellite data. https://doi.org/10.5281/zenodo.1467128, report submitted to EMSC. Vallée, M., Charléty, J., Ferreira, A. M. G., Delouis, B., & Vergoz, J. (2011). 
SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body-wave deconvolution. Geophysical Journal International, 184(1), 338–358. https://doi.org/10.1111/j.1365-246X.2010.04836.x. van Dinther, Y., Gerya, T., Dalguer, L., Mai, P., Morra, G., & Giardini, D. (2013). The seismic cycle at subduction thrusts: Insights from seismo-thermo-mechanical models. Journal Geophysical Research, 118, 6183–6202. https://doi.org/10.1002/2013JB010380. van Dinther, Y., Mai, P. M., Dalguer, L. A., & Gerya, T. V. (2014). Modeling the seismic cycle in subduction zones: The role and spatiotemporal occurrence of off-megathrust events. Geophysical Research Letters, 41(4), 1194–1201. https://doi.org/10.1002/2013GL058886. van Dongeren, A., Vatvani, D., & van Ormondt, M. (2018). Simulation of 2018 tsunami along the coastal areas in the palu bay. In AGU fall meeting 2018. https://agu.confex.com/agu/fm18/meetingapp.cgi/Session/66627. Accessed 7 Aug 2019. van Zelst, I., Wollherr, S., Gabriel, A. A., Madden, E., & van Dinther, Y. (2019). Modelling coupled subduction and earthquake dynamics. https://doi.org/10.31223/osf.io/f6ng5. Vater, S., & Behrens, J. (2014). Well-balanced inundation modeling for shallow-water flows with Discontinuous Galerkin schemes. In J. Fuhrmann, M. Ohlberger, M., & Rohde, C. (Eds). Finite volumes for complex applications VII—elliptic, parabolic and hyperbolic problems, Springer Proceedings in mathematics & statistics, Vol. 78, pp. 965–973. https://doi.org/10.1007/978-3-319-05591-6_98. Vater, S., Beisiegel, N., & Behrens, J. (2015). A limiter-based well-balanced discontinuous galerkin method for shallow-water flows with wetting and drying: One-dimensional case. Advances in Water Resources, 85, 1–13. https://doi.org/10.1016/j.advwatres.2015.08.008. Vater, S., Beisiegel, N., & Behrens, J. (2017). Comparison of wetting and drying between a RKDG2 method and classical FV based second-order hydrostatic reconstruction. In C. Cancès, & P. Omnes (Eds.), Finite volumes for complex applications VIII—hyperbolic, elliptic and parabolic problems (pp. 237–245). Springer. https://doi.org/10.1007/978-3-319-57394-6_26. Vater, S., Beisiegel, N., & Behrens, J. (2019). A limiter-based well-balanced discontinuous Galerkin method for shallow-water flows with wetting and drying: Triangular grids. International Journal for Numerical Methods in Fluids. https://doi.org/10.1002/fld.4762. Vigny, C., Perfettini, H., Walpersdorf, A., Lemoine, A., Simons, W., van Loon, D., et al. (2002). Migration of seismicity and earthquake interactions monitored by GPS in SE Asia triple junction: Sulawesi, Indonesia. Journal of Geophysical Research: Solid Earth, 107(B10), ETG-7. https://doi.org/10.1029/2001JB000377. Walpersdorf, A., Rangin, C., & Vigny, C. (1998). GPS compared to long-term geologic motion of the north arm of Sulawesi. Earth and Planetary Science Letters, 159(1), 47–55. https://doi.org/10.1016/S0012-821X(98)00056-9. Watkinson, I. M., & Hall, R. (2017). Fault systems of the eastern Indonesian triple junction: Evaluation of Quaternary activity and implications for seismic hazards. Geological Society, London, Special Publications, 441(1), 71–120. https://doi.org/10.1144/SP441.8. Weatherall, P., Marks, K. M., Jakobsson, M., Schmitt, T., Tani, S., Arndt, J. E., et al. (2015). A new digital bathymetric model of the world's oceans. Earth and Space Science, 2(8), 331–345. https://doi.org/10.1002/2015EA000107. Widiyanto, W., Santoso, P. 
B., Hsiao, S. C., & Imananta, R. T. (2019). Post-event Field Survey of 28 September 2018 Sulawesi Earthquake and Tsunami. Natural Hazards and Earth System Sciences Discussions, 1, 1–23. https://doi.org/10.5194/nhess-2019-91. Wollherr, S., Gabriel, A. A., & Uphoff, C. (2018). Off-fault plasticity in three-dimensional dynamic rupture simulations using a modal Discontinuous Galerkin method on unstructured meshes: implementation, verification and application. Geophysical Journal International, 214(3), 1556–1584. https://doi.org/10.1093/gji/ggy213. Wollherr, S., Gabriel, A. A., & Mai, P. M. (2019). Landers 1992 "reloaded": Integrative dynamic earthquake rupture modeling. Journal of Geophysical Research: Solid Earth. https://doi.org/10.1029/2018JB016355. Yalciner, A. C., Hidayat, R., Husrin, S., Prasetya, G., Annunziato, A., Doǧan, G. G., et al. (2018). The 28th September 2018 Palu earthquake and tsunami ITST 07-11 November 2018 post tsunami field survey report (short). Report, Middle East Technical University (and others), Ankara, Turkey. http://itic.ioc-unesco.org/images/stories/itst_tsunami_survey/itst_palu/ITST-Nov-7-11-Short-Survey-Report-due-on-November-23-2018.pdf. Accessed 7 Aug 2019. We thank Taufiqurrahman for helping us accessing data on Indonesian websites, and for putting us in contact with Indonesian researchers. We thank Dr. T. Yudistira for providing their crustal velocity model of Sulawesi and Dr. Andreas Fichtner for providing us part of the 'Collaborative Seismic Earth Model'. We thank Dr. Marcello de Michele for providing his inferred ground-deformation data and for fruitful discussions. The ALOS-2 original data are copyright JAXA and provided under JAXA RA6 PI projects P3278 and P3360. Dr. Widodo S. Pranowo provided access to very early field survey observations. Furthermore, Dr. Abdul Muhari supported this work by providing 1-min tide gauge data for the Pantoloan tide gauge. We thank two anonymous reviewers and the editor-in-chief Alexander Rabinovich for their constructive comments. Finally, we thank the #geotweeps twitter community and the participants of the AGU special session about the Palu earthquake and tsunami for stimulating discussions. The work presented in this paper was enabled by the Volkswagen Foundation (project "ASCETE", Grant no. 88479). Computing resources were provided by the Institute of Geophysics of LMU Munich (Oeser et al. 2006), the Leibniz Supercomputing Centre (LRZ, Projects no. h019z, pr63qo and pr45fi on SuperMUC), and the Center for Earth System Research and Sustainability (CEN) at University of Hamburg. T.U., E. H. M. and A.-A. G. acknowledge support by the German Research Foundation (DFG) (projects no. KA 2281/4-1, GA 2465/2-1, GA 2465/3-1), by BaCaTec (Project no. A4) and BayLat, by KONWIHR—the Bavarian Competence Network for Technical and Scientific High Performance Computing (Project NewWave), by KAUST-CRG (GAST, Grant no. ORS-2016-CRG5-3027 and FRAGEN, Grant no. ORS-2017-CRG6 3389.02), by the European Union's Horizon 2020 research and innovation program (ExaHyPE, Grant no. 671698 and ChEESE, grant no. 823844). S. V. acknowledges support by Einstein Stiftung Berlin through Grant EVF-2017-358(FU). Part of this research was performed at the Jet Propulsion Laboratory, California Institute of Technology under contract with the National Aeronautics and Space Administration (NASA) by Earth Surface and Interior focus area and NISAR Science Team. 
Department of Earth and Environmental Sciences, Ludwig-Maximilians-Universität München, Munich, Germany T. Ulrich, E. H. Madden & A.-A. Gabriel Institute of Mathematics, Freie Universität Berlin, Berlin, Germany S. Vater Observatório Sismológico, Instituto de Geociências, Universidade de Brasília, Brasilia, Brazil E. H. Madden Numerical Methods in Geosciences, Department of Mathematics, Universität Hamburg, Hamburg, Germany J. Behrens Department of Earth Sciences, Utrecht University, Utrecht, The Netherlands Y. van Dinther Seismology and Wave Physics, Department of Earth Sciences, Institute of Geophysics, ETH Zürich, Zürich, Switzerland I. van Zelst Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA E. J. Fielding Seismological Laboratory, California Institute of Technology, Pasadena, CA, USA C. Liang T. Ulrich A.-A. Gabriel Correspondence to T. Ulrich. Off-Fault Plasticity We account for the possibility of off-fault energy dissipation by assuming a Drucker–Prager elasto-viscoplastic rheology (Wollherr et al. 2018). The model is parameterized following Ulrich et al. (2019). The internal friction coefficient is set equal to the reference fault friction coefficient (0.6). Similarly, off-fault initial stresses are set equal to the depth-dependent initial stresses prescribed on the fault. The relaxation time \(T_v\) is set to 0.05 s. Finally, we assume depth-dependent bulk cohesion (see Fig. 17) to account for the hardening of the rock structure with depth. Depth dependence of bulk cohesion in the off-fault plastic yielding criterion Displacement Time Histories Many high-rate GNSS stations have recorded the Palu event in the near field (Simons et al. 2018). Nevertheless, these data are not yet available. In Fig. 18, we provide the displacements time histories at a few of these sites (see Fig. 19). We hope future access to this data will provide further constraint on the model. Synthetic unfiltered time-dependent ground displacement in meters at selected locations (see Fig. 19) Locations of known geodetic observation sites for which we provide synthetic ground displacement time series (see Fig. 18) Initial Stress In this section, we detail the initial stress parametrization, presented in general terms in Sect. 3.2. The fault system is loaded by a laterally homogeneous regional stress regime. Assuming an Andersonian stress regime, where \(s_1> s_2 > s_3 > 0\) are the principal stresses and \(s_2\) is vertically oriented, the stress state is fully characterized by four parameters: \(S{\!}H_\mathrm {max}\), \(\nu\), \(R_0\) and \(\gamma\). \(S{\!}H_\mathrm {max}\) is the azimuth of the maximum horizontal compressive stress; \(\nu\) is a stress shape ratio balancing the principal stress amplitudes; \(R_0\) is a ratio describing the relative strength of the faults; and \(\gamma\) is the fluid pressure ratio. The World Stress Map (Heidbach et al. 2018) constrains \(S{\!}H_\mathrm {max}\) to the range of \(120 \pm 15^{\circ }\). The stress shape ratio \(\nu = (s_2-s_3)/(s_1-s_2)\) characterizes the stress regime: \(\nu \approx 0.5\) indicates pure shear, \(\nu >0.5\) indicates transtension and \(\nu <0.5\) indicates transpression. A transtensional regime is suggested by geodetic studies (Walpersdorf et al. 1998; Socquet et al. 2006), fault kinematic analyses from field data (Bellier et al. 2006), and by the USGS focal mechanism of the mainshock, which clearly features a normal faulting component. However, the exact value of \(\nu\) is not constrained. 
The fault prestress ratio \(R_0\) describes the closeness to failure of a virtual, optimally oriented plane according to Mohr–Coulomb theory (Aochi and Madariaga 2003). On this virtual plane, the Coulomb stress is maximized. Optimally oriented planes are critically loaded when \(R_0=1\). Faults are typically not optimally oriented in reality. In a dynamic rupture scenario, only a small part of the modeled faults need to reach failure in order to nucleate sustained rupture. Other parts of the fault network can fail and slip progressively, even if well below failure before rupture initiation. The propagating rupture front or traveling seismic waves can raise the local shear tractions to match fault strength locally. Magnitude and rake of prestress resolved on the fault system for a range of plausible \(S{\!}H_\mathrm {max}\) values, assuming a stress shape ratio \(\nu =0.5\) (pure-shear). For each stress state we show the spatial distribution of the pre-stress ratio (left) and the rake angle of the shear traction (right). Here we assume \(R_0=0.7\) on the optimal plane, which results in \(R<R_0\) for all faults, since these are not optimally oriented. In blue, we label the (out-of-scale) minimum rake angle on the Palu-Saluki bend We assume fluid pressure \(P_f\) throughout the crust is proportional to the lithostatic stress: \(P_f = \gamma \sigma _c\), where \(\gamma\) is the fluid-pressure ratio and \(\sigma _c = \rho g z\) is the lithostatic pressure. A fluid pressure of \(\gamma =\rho _\mathrm {water}/\rho = 0.37\) indicates purely hydrostatic pressure. Higher values correspond to overpressurized stress states. Together, \(R_0\) and \(\gamma\) control the average stress drop \(d\tau\) in the dynamic rupture model as: $$\begin{aligned} d\tau \sim (\mu _s - \mu _d)R_0(1-\gamma )\sigma _c . \end{aligned}$$ where μs and μd are the static and dynamic fault friction assigned in the model. \(d\tau\), is a critical characteristic of the earthquake dynamic rupture model, controlling the average fault slip, rupture speed and earthquake size. Following Ulrich et al. (2019), we can evaluate different initial stress and strength settings using purely static considerations. By varying the stress parameters within their observational constraints, we compute the distribution of the relative prestress ratio R and of the shear traction orientation resolved on the fault system for each configuration. R is defined as: $$\begin{aligned} R = (\tau _0 - \mu _s\sigma _n)/((\mu _s-\mu _d)\sigma _n) \ , \end{aligned}$$ where \(\tau _0\) and \(\sigma _n\) are the initial shear and normal tractions resolved on the fault plane. We can characterize the spatially variable fault strength in the model by calculating R (Eq. (3)) at every point on each fault (Figs. 20 and 21). By definition, R is always lower or equal to \(R_0\), since the faults are not necessary optimally oriented. Same as Fig. 20, but assuming a stress shape ratio \(\nu =0.7\) (transtension) We then select the stress configuration that maximizes R across the fault system, especially around rupture transition zones to enable triggering, and that represents a shear stress orientation compatible with the inferred ground deformations and the inferred focal mechanisms. These purely static considerations suggest that a transtensional regime is required to achieve a favourable stress orientation on the fault system. 
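The two scalar relations above, Eq. (3) for the relative prestress ratio R and the stress-drop estimate dτ, can be evaluated directly. The short Python sketch below is illustrative only and is not the authors' SeisSol parameterization: the friction coefficients, R0, γ, crustal density, gravity, example depth and example tractions are assumed illustrative values, and R is implemented exactly as printed in Eq. (3).

```python
# Illustrative sketch only -- not the authors' SeisSol parameterization.
# mu_s/mu_d, R0 and gamma are example values of the kind discussed in this
# appendix; density, gravity, depth and tractions are assumptions made here.

MU_S, MU_D = 0.6, 0.1      # static and dynamic friction coefficients
R0, GAMMA = 0.7, 0.79      # prestress ratio on the optimal plane, fluid-pressure ratio
RHO, G = 2670.0, 9.81      # assumed crustal density (kg/m^3) and gravity (m/s^2)

def relative_prestress_ratio(tau0, sigma_n):
    """Eq. (3) as printed: R = (tau0 - mu_s*sigma_n) / ((mu_s - mu_d)*sigma_n).
    tau0 and sigma_n are the shear and (compression-positive) normal tractions."""
    return (tau0 - MU_S * sigma_n) / ((MU_S - MU_D) * sigma_n)

def stress_drop_proxy(depth_m):
    """d_tau ~ (mu_s - mu_d) * R0 * (1 - gamma) * sigma_c, with sigma_c = rho*g*z."""
    sigma_c = RHO * G * depth_m
    return (MU_S - MU_D) * R0 * (1.0 - GAMMA) * sigma_c

# Effective confining-stress gradient (1 - gamma)*rho*g in MPa/km (~5.5 for these values):
print((1.0 - GAMMA) * RHO * G * 1e3 / 1e6)
print(stress_drop_proxy(10e3) / 1e6)                       # stress-drop proxy at 10 km depth, MPa
print(relative_prestress_ratio(tau0=40e6, sigma_n=60e6))   # hypothetical tractions
```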
In fact, we see that a biaxial stress regime (\(\nu = 0.5\)) does not resolve sufficient shear stress simultaneously on the main north-south striking faults and on the Palu-Saluki bend (see Fig. 20). Dynamic rupture experiments confirm that the Saluki fault could not be triggered under such a stress regime. On the other hand, such optimal configuration can be achieved by a transtensional stress state, for instance by choosing \(\nu = 0.7\) and \(S{\!}H_\mathrm {max}\) in the range \(125^{\circ }\)\(-135^{\circ }\) (see Fig. 21). We choose \(S{\!}H_\mathrm {max}= 135^{\circ }\), which allows for nucleation with less overstress than lower values and generates ruptures with the expected slip orientations and magnitudes. The here-assumed fault system does not feature pronounced geometrical barriers apart from the Palu-Saluki bend. As a consequence, \(R_0\) is actually poorly constrained, and trade-offs between \(R_0\) and \(\gamma\) are expected. The preferred, realistic model is characterized by \(R_0=0.7\) and \(\gamma =0.79\). This results in an effective confining stress \((1-\gamma )\sigma _c\) that increases with depth by a gradient of 5.5 MPa/km. Friction Law We here use a form of fast-velocity weakening friction proposed in the community benchmark problem TPV104 of the Southern California Earthquake Center (Harris et al. 2018) and as parameterized by Ulrich et al. (2019). Friction drops rapidly from a steady-state, low-velocity friction coefficient, here \(f_0 = 0.6\), to a fully weakened friction coefficient, here \(f_w = 0.1\) (see Table 1). Table 1 Fault frictional properties assumed in this study Horizontal Displacements as Additional Tsunami Source Final horizontal surface displacements (\(\varDelta x\) and \(\varDelta y\)) as computed by the earthquake model Final vertical surface displacements (\(\varDelta z\)) as computed by the earthquake model For computing the bathymetry perturbation used as the source for the tsunami model, we apply the method of Tanioka and Satake (1996) to additionally account for horizontal displacements computed in the earthquake model. The final states of the three displacement components \(\varDelta x\),\(\varDelta y\) and \(\varDelta z\) are given in Figs. 22 and 23. Applying the approach of Tanioka and Satake by using Eq. (1), the displacements are transformed into the bathymetry perturbation, \(\varDelta b\) (Fig. 10). The difference between \(\varDelta z\) and \(\varDelta b\) locally is up to 0.6 m, as shown in Fig. 24. Although this difference is quite large, and compared to the overall magnitude more than 30%, it is only very local. The contribution \(\varDelta b- \varDelta z\) of horizontal displacements to the final bathymetry perturbation, following Tanioka and Satake (1996) We have run the same tsunami scenario, but with the computed seafloor displacement \(\varDelta z\) as tsunami source. Snapshots of this scenario in Palu Bay can be seen in Fig. 25. Such new scenario differs from the original scenario only by local effects (Fig. 12), especially at points along the coast. The maximum inundation depths at Palu city are mapped for this alternative scenario in Fig. 26. Again, only minor differences appear (compare with Fig. 16). This illustrates that the method by Tanioka and Satake (1996) might be important to capture some local effects of the tsunami, but is not crucial for the general result, which is also confirmed by other studies (Heidarzadeh et al. 2018). 
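To make the Tanioka and Satake (1996) correction concrete, the following is a minimal sketch of how the bathymetry perturbation Δb can be assembled from gridded displacement fields. It is not the coupling code used in this study; the array names, the grid spacing and the depth-positive-down sign convention are assumptions made here for illustration.

```python
import numpy as np

def effective_uplift(dz, ux, uy, depth, dx, dy):
    """Return db = dz + ux*dH/dx + uy*dH/dy.

    dz, ux, uy : 2-D arrays of vertical and horizontal coseismic displacements (m)
    depth      : 2-D array of pre-event water depth H (m, positive down)
    dx, dy     : grid spacing (m); rows are assumed to run along y, columns along x
    """
    dH_dy, dH_dx = np.gradient(depth, dy, dx)   # derivatives along axis 0 (y), then axis 1 (x)
    return dz + ux * dH_dx + uy * dH_dy

# Hypothetical example: 0.5 m of uplift plus 2 m of eastward slip over an east-dipping slope.
ny, nx = 200, 300
depth = np.tile(np.linspace(5.0, 700.0, nx), (ny, 1))
dz = np.zeros((ny, nx)); dz[:, 100:150] = 0.5
ux = np.full((ny, nx), 2.0)
db = effective_uplift(dz, ux, np.zeros((ny, nx)), depth, dx=100.0, dy=100.0)
print(float(np.abs(db - dz).max()))   # contribution of the horizontal motion, in metres
```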
Snapshots at 20 s, 180 s, and 300 s of the tsunami scenario using only the vertical displacement \(\varDelta z\) from the rupture simulation as the source for the tsunami model Computed maximum inundation at Palu City using only the vertical displacement \(\varDelta z\) from the rupture simulation as the source for the tsunami model Along-Track SAR Measurements We here describe measurements of the final coseismic surface displacements in the along-track direction from SAR images acquired by the Japan Aerospace Exploration Agency (JAXA) Advanced Land Observing Satellite-2 (ALOS-2). We measure along-track pixel offsets by incoherent cross correlation of ALOS-2 stripmap SAR images acquired along ascending path 126 on 2018/08/17 and 2018/10/12 and ascending path 127 on 2018/08/08 and 2018/10/03. We used modules of the InSAR Scientific Computing Environment (ISCE) (Liang and Fielding 2017; Rosen et al. 2012) for ALOS-2 SAR data processing. 3D Subsurface Structure 3D heterogeneous media are included in the earthquake model by combining the local model of Awaliah et al. (2018), which is built from ambient noise tomography and covers the model domain down to 40 km depth, and the Collaborative Seismic Earth Model (Fichtner et al. 2018), which covers the model domain down to 150 km. Figure 27 shows a few cross-sections of the 3D subsurface structure of Awaliah et al. (2018). As this model only defines \(V_s\), we compute the P-wave speed \(V_p\) assuming a Poisson's ratio of 0.25: $$\begin{aligned} V_p = \sqrt{3}\,V_s \end{aligned}$$ The density \(\rho\) is calculated using an empirical relationship (Aochi et al. 2017, and references therein): $$\begin{aligned} \rho = -0.0045V_s^2+0.432V_s+1711~\mathrm{kg/m^3} \end{aligned}$$ S-wave speeds (\(V_s\)) on five cross-sections of the 3D subsurface structure of Awaliah et al. (2018), incorporated into the model Model Validation with Teleseismic Data The teleseismic data used in the manuscript for validation of the earthquake model were downloaded from IRIS using ObsPy (Beyreuther et al. 2010). The instrument response is removed using the remove_response function of ObsPy. Waveform fits are estimated by computing a relative root-mean-square misfit given by: $$\begin{aligned} \mathrm{rRMS} = \frac{1}{\mathrm{RMS}_{obs}} \sqrt{\int _{t_0}^{t_1} \left(d_{syn}(t)-d_{obs}(t)\right)^2 dt} \end{aligned}$$ where \(d_{syn}\) and \(d_{obs}\) are respectively the synthetic and observed displacement waveforms, \(t_0\) and \(t_1\) define the interval over which the misfit is calculated (here we use the same range as that plotted in Fig. 4a, b), and \(\mathrm{RMS}_{obs}\) is given by: $$\begin{aligned} \mathrm{RMS}_{obs} = \sqrt{\int _{t_0}^{t_1} d_{obs}(t)^2 dt} \end{aligned}$$ Comparison of modeled (red) and observed (black) teleseismic displacement waveforms at the 10 stations identified by blue triangles in Fig. 5. Full seismograms are dominated by surface waves. For more information, please refer to the caption of Fig. 4 Comparison of modeled (red) and observed (black) teleseismic displacement waveforms at the 10 stations identified by blue triangles in Fig. 5. Zoom in to body wave arrivals. For more information, please refer to the caption of Fig. 4 Reliability of the BATNAS Data Set in Palu Bay Nearshore Areas BATNAS (v1.0) (DEMNAS 2018) is to our knowledge the highest resolution data set describing the pre-event bathymetry in the area of interest, with a horizontal resolution of approximately 190 m. This allows for a sufficiently accurate representation of bathymetric features.
However, the resolution is relatively inaccurate with respect to inundation treatment. High resolution (8 m) topography (but not bathymetry) is available from DEMNAS (2018). Thus, DEMNAS topography and BATNAS bathymetry could be used conjointly in an effort to improve the local resolution of the modeled inundation. Nevertheless, merging the two data sets is a non-trivial task. To analyze whether this is necessary to support the conclusions of this paper, we here provide a quantitative analysis (Figs. 28, 29). We randomly pick 8 profiles crossing the Bay (Figs. 30, 31) along which we compare BATNAS and DEMNAS data. Within the range of the observed inundation elevation (0–10 m), we observe that BATNAS captures slopes rather realistically (e.g., profiles 2, 4, 8), especially if topography is smooth. At specific locations, however, the topography is clearly smoothed by the BATNAS data set (e.g. profiles 1, 6, 7) and local biases can be expected. We conclude that the amplitude variation of inundation synthetics around the bay based on BATNAS data, and the qualitative comparison to observations, is relevant as discussed in the main text (Sect. 4.2). Despite limited resolution, the qualitative analysis of inundation behavior across the Bay yields valuable insights on the interplay of tsunami waves and (smoothed) nearshore topography. Locations of 8 sections across the shoreline across which the topography of the 8 m resolution DEMNAS data set and the 190 m sampled BATNAS bathymetry and topography data set are compared in Fig. 31 Topography and bathymetry profiles of BATNAS and DEMNAS data sets across the 8 sections of Fig. 30. Profiles are aligned with respect to the shoreline to facilitate comparison Three animations illustrating the earthquake and tsunami scenario are provided. The animations can be downloaded at https://doi.org/10.5281/zenodo.3233885. The earthquake animations show the absolute slip rate (m/s) across the fault network during the model earthquake, with (https://zenodo.org/record/3233885/files/movie_Sulawesi_wavefield-cp.mov) and without (https://zenodo.org/record/3233885/files/movie_Sulawesi_SR-cp.mov) the seismic wavefield (absolute particle velocity in m/s). The tsunami animation (https://zenodo.org/record/3233885/files/SulawesiTanioka.mp4) shows the evolution with time of the sea surface height (m) as predicted by the tsunami scenario. Code and Data Availability For the earthquake modeling, we use the open-source software SeisSol (master branch, version tag 201905_Palu), which is available on GitHub (http://www.github.com/seissol/seissol). The procedure to download, compile, and run the code is described in the documentation (https://seissol.readthedocs.io). All data required to reproduce the earthquake scenario can be downloaded from https://zenodo.org/record/3234664. We use the following projection: DGN95 / Indonesia TM-3 zone 51.1 (EPSG:23839). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Ulrich, T., Vater, S., Madden, E.H. et al. Coupled, Physics-Based Modeling Reveals Earthquake Displacements are Critical to the 2018 Palu, Sulawesi Tsunami. Pure Appl. Geophys. 176, 4069–4109 (2019). 
https://doi.org/10.1007/s00024-019-02290-5 Revised: 25 July 2019. Issue Date: October 2019. Keywords: earthquake dynamics, coupled model, physics-based modeling, strike slip. Part of a collection: Sulawesi/Palu-2018 and Anak/Krakatau-2018.
Mathematical induction and its consequences Part 2 3. M.I. vs. mod In most cases, M.I. is not the preferred choice for proving the divisibility of an expression, because we have another powerful tool: modular arithmetic. I will not put too many technical formulae about mod here because they could be very difficult. Definition 1. If $a=b+kn$ for some integer $k$, where n is a positive integer larger than 1, then we say $a\equiv b\mod n$. Corollary 1. If a is divisible by b, then $a\equiv 0\mod b$. Corollary 2. For every a there is a unique least non-negative integer b such that $a\equiv b\mod n$; we call b the principal value of a mod n, or the remainder when a is divided by n. Corollary 3. $a+bn\equiv a\mod n$; then, by the binomial theorem, $(a+bn)^k\equiv a^k \mod n$. Proof: exercise. With this method we can now prove all previous examples. Example 7. (Examples 3–4 revisited) $6^n\equiv 1\mod 5$ and $6^n(5n-1)\equiv -1\mod 25$. The first one is trivial because $6^n=(1+5)^n\equiv 1\mod 5$. The second statement cannot be proved directly in mod 25. Instead we show that $\frac{6^n(5n-1)+1}{5}\equiv 0\mod 5$: $\frac{6^n(5n-1)+1}{5}=n\cdot 6^n-(1+6+6^2+...+6^{n-1})\equiv n-n=0\mod 5$. For Example 6, we know that $7\pm \sqrt{13}$ are the roots of $x^2-14x+36=0$; by the binomial theorem, $(7+\sqrt{13})^n+(7-\sqrt{13})^n=2(C^n_07^n+C^n_27^{n-2}(13)+...)$, but at this level we cannot find a bridge to show the divisibility through mod; we can, however, prove this by M.I. through this formula. Lemma 1. (Extension of M.I.) The proposition is true for all natural numbers n if: a) The proposition is true for n = 1,2,3,...,i b) Assuming n = k is correct, then n = k + i is correct. From (b), for every pair $a\equiv b\mod i$ where a is smaller than b, if n = a is true then n = b is true; eventually the proposition is true for all natural numbers n. By dividing into the cases of odd n and even n and completing the M.I. separately, we can prove the statement as well. 4. Sequence and inequality Example 8. (HKALE 1994 Pure I2) If $\sum^{n}_{i=1} a_i=(\frac{1+a_n}{2})^2$, prove that $a_n=2n-1$ if all terms are positive numbers. For sequences, we usually prove the n = k+1 part by comparing differences. Define $s_n=\sum^{n}_{i=1}a_i=\frac{1}{4}(1+a_n)^2$. Now $a_{n+1}=s_{n+1}-s_n$. Assuming that $a_n=2n-1$, then $s_n=\frac{1}{4}(1+2n-1)^2=n^2$. Now $a_{n+1}=\frac{1}{4}(1+a_{n+1})^2-n^2$, so $\frac{1}{4}(1-a_{n+1})^2=n^2$ and $a_{n+1}=2n+1$, where the negative root is rejected. Apart from some trivial geometric sequences, some sequences can also be transformed into something like geometric sequences, and we will introduce one of them, with general term of the form $a_n=c+dr^n$. Example 9. Find the general term of the sequence $a_{n+1}=xa_n+y$. Define $b_n=a_{n+1}-a_n$; then $b_{n+1}=a_{n+2}-a_{n+1}=(xa_{n+1}+y)-(xa_n+y)=x(a_{n+1}-a_n)=xb_n$, therefore $b_n=x^{n-1}(a_2-a_1)$ is a geometric sequence. Now $a_n=a_1+b_1+b_2+...+b_{n-1}=a_1+(a_2-a_1)(1+x+...+x^{n-2})$ $=a_1+(a_2-a_1)\frac{1-x^{n-1}}{1-x}$ (for $x\neq 1$). A trivial example is $a_1=1$, $a_n=2a_{n-1}-1$; then $a_2=1$, $a_n=1+(1-1)\frac{1-2^{n-1}}{1-2}=1$, and therefore the sequence {1,1,1,1,...} is an A.S., a G.S., as well as a recurrence sequence. Exercise 4. Given that the sum of a sequence is $S_n=4a_n+3n-4$, find its general term by i) guessing and proving by M.I. ii) sequential analysis. Example 10. (HKALE 1995 Pure I6) For a non-negative integral sequence such that $n\leq \sum^{n}_{i=1}a_i^2\leq n+1+(-1)^n$ for all natural numbers n, prove that the sequence is identically 1.
This inequality is quite trivial. Proof 1. Subtracting case n from case (n+1) gives $1\leq a_{n+1}^2\leq 3$; since $a_{n+1}$ is a non-negative integer, $a_{n+1}=1$. Proof 2. We use another lemma of M.I. Lemma 2. The proposition is true for all natural n if: 1) n = 1 is true. 2) If n = 1,2,...,(k-1) are true, then n = k is true. Assuming that $a_1=a_2=...=a_n=1$, then $n+1\leq n+a_{n+1}^2\leq n+2+(-1)^{n+1}\leq n+3$, which gives the same result. Exercise 5. Try to prove the above statement by separating n into the odd and even cases. Example 11. (BAS ex. 9.4.8, p.84) For positive reals such that $\prod^{n}_{i=1} (1+a_i)=2^n$, show that $\prod^{n}_{i=1} a_i \leq 1$. Assume the case n = k is true. Now suppose one more number $a_{k+1}$ is added such that $\prod^{k+1}_{i=1} (1+a_i)=2^{k+1}$ with $\prod^{k+1}_{i=1} a_i$ greater than 1. Then $a_{k+1}=(a_1a_2...a_k)^{-1}$, which is greater than 1, and then $\prod (1+a_i)=2^k(1+a_{k+1})\neq 2^{k+1}$. The above proof is incomplete and it would be nice for readers to finish it themselves. Read the rest of this passage: Mathematical Induction and its consequences – Finale; Mathematical induction and its consequences – Part 3; Irrationality II
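The examples in this part are easy to sanity-check numerically. The Python snippet below is a numerical spot-check, not a proof: it verifies the congruences of Example 7, the sum formula of Example 8, the closed form of Example 9, and (by random sampling) the inequality of Example 11. All parameter choices are arbitrary illustrative values.

```python
import random

def check_example_7(n_max=200):
    for n in range(1, n_max + 1):
        assert pow(6, n, 5) == 1                                 # 6^n ≡ 1 (mod 5)
        assert (pow(6, n, 25) * (5 * n - 1) + 1) % 25 == 0       # 6^n(5n-1) ≡ -1 (mod 25)

def check_example_8(n_max=100):
    s = 0
    for n in range(1, n_max + 1):
        a_n = 2 * n - 1
        s += a_n
        assert s == ((1 + a_n) // 2) ** 2                        # S_n = ((1+a_n)/2)^2

def check_example_9(x=3, y=4, a1=2, n_max=30):
    a, a2 = a1, x * a1 + y
    for n in range(1, n_max + 1):
        closed = a1 + (a2 - a1) * (1 - x ** (n - 1)) // (1 - x)  # closed form (x != 1)
        assert a == closed
        a = x * a + y                                            # a_{n+1} = x*a_n + y

def check_example_11(trials=10000, n=5):
    # Draw a_1..a_{n-1} > 0, solve for a_n from prod(1+a_i) = 2^n, keep positive
    # solutions, and verify prod(a_i) <= 1.
    for _ in range(trials):
        a = [random.uniform(0.01, 3.0) for _ in range(n - 1)]
        p = 1.0
        for v in a:
            p *= 1.0 + v
        a_n = 2.0 ** n / p - 1.0
        if a_n <= 0:
            continue
        prod = a_n
        for v in a:
            prod *= v
        assert prod <= 1.0 + 1e-9

check_example_7(); check_example_8(); check_example_9(); check_example_11()
print("all spot checks passed")
```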
CircularLogo: A lightweight web application to visualize intra-motif dependencies Zhenqing Ye1, Tao Ma2, Michael T. Kalmbach1, Surendra Dasari1, Jean-Pierre A. Kocher1 & Liguo Wang1,2 (ORCID: orcid.org/0000-0003-2072-4826) The sequence logo has been widely used to represent DNA or RNA motifs for more than three decades. Despite its intelligibility and intuitiveness, the traditional sequence logo is unable to display intra-motif dependencies and is therefore insufficient to fully characterize nucleotide motifs. Many methods have been developed to quantify intra-motif dependencies, but fewer tools are available for visualization. We developed CircularLogo, a web-based interactive application, which is able to not only visualize the position-specific nucleotide consensus and diversity but also display the intra-motif dependencies. Applying CircularLogo to HNF6 binding sites and tRNA sequences demonstrated its ability to show intra-motif dependencies and intuitively reveal biomolecular structure. CircularLogo is implemented in JavaScript and Python based on the Django web framework. The program's source code and user's manual are freely available at http://circularlogo.sourceforge.net. The CircularLogo web server can be accessed from http://bioinformaticstools.mayo.edu/circularlogo/index.html. CircularLogo is an innovative web application that is specifically designed to visualize and interactively explore intra-motif dependencies. Many DNA and RNA binding proteins recognize their binding sites through specific nucleotide patterns called motifs. Motif sites bound by the same protein do not necessarily have the same sequence but typically share consensus sequence patterns. Several methods have been developed to statistically model the position-specific consensus and diversity of nucleotide motifs using the position weight matrix (PWM) or position-specific scoring matrix (PSSM) [1, 2]. These mathematical representations are usually visualized using sequence logos, which depict the consensus and diversity of each motif residue as a stack of nucleotide symbols. The height of each symbol within the stack indicates its relative frequency, and the total height of symbols is scaled to the information content of that position [3, 4]. Traditional PWM and PSSM assume statistical independence between nucleotides of a motif. However, such an assumption is not completely justified, and accumulated evidence indicates the existence of intra-motif dependencies [5,6,7,8]. For example, an analysis of wild-type and mutant Zif268 (EGR-1) zinc fingers, using microarray binding experiments, suggested that the nucleotides within a transcription factor binding site (TFBS) should not be treated independently [5]. In addition, intra-motif dependencies were also revealed by a comprehensive experiment examining the binding specificities of 104 distinct DNA binding proteins in mouse [8]. Intra-motif dependencies, when taken into consideration, could substantially improve the accuracy of de novo motif discovery [9]. Therefore, many statistical methods have been developed to characterize intra-motif dependencies, including the generalized weight matrix model [10], the sparse local inhomogeneous mixture model (Slim) [11], the transcription factor flexible model based on hidden Markov models (TFFMs) [12], the binding energy model (BEM) [13], and the inhomogeneous parsimonious Markov model (PMM) [14].
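For readers unfamiliar with the quantities a standard sequence logo displays, the following minimal sketch shows how per-position nucleotide frequencies and information content can be computed from aligned sites. It is not CircularLogo's implementation; the toy sites, the pseudocount of 0.25 and the uniform background are illustrative assumptions.

```python
import math
from collections import Counter

def position_frequencies(sites, pseudocount=0.25):
    """Return one {base: frequency} dict per motif position."""
    length = len(sites[0])
    pfm = []
    for i in range(length):
        counts = Counter(seq[i] for seq in sites)
        total = sum(counts[b] + pseudocount for b in "ACGT")
        pfm.append({b: (counts[b] + pseudocount) / total for b in "ACGT"})
    return pfm

def information_content(column):
    """Information content (bits) of one column: 2 - Shannon entropy (uniform background)."""
    entropy = -sum(p * math.log2(p) for p in column.values() if p > 0)
    return 2.0 - entropy

sites = ["TGACGTCA", "TGACGTCA", "TGATGTCA", "AGACGTCA"]   # toy aligned binding sites
for i, col in enumerate(position_frequencies(sites), start=1):
    print(i, round(information_content(col), 2))
```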
However, the most commonly used visualization tools such as WebLogo [3] and Seq2Logo [15] are incapable of displaying these intra-motif dependencies. Only a handful of tools like CorreLogo, enoLOGOS, and ELRM are capable of visualizing positional dependencies [16,17,18]. CorreLogo depicts mutual information from DNA or RNA alignment using three-dimensional sequence logos generated via VRML and JVX. However, CorreLogo's three-dimensional graphs are difficult to interpret because of the excessively complex and distorted perspective associated with the third dimension. ELRM generates static graphs to visualize intra-motif dependences. ELRM splits up "base features" and "association features" and fails to comprehensively integrate nucleotide diversities and dependencies. In addition, ELRM is limited to measuring dependence with its own built-in method. Similar to ELRM, enoLOGOS represents the dependency between different positions using a matrix plot underneath the nucleotide logo. While pLogo allows user to visualize correlations to a particular nucleotide position, it fails to provide overall view of intra-motif dependencies [4]. Finally, all of these tools lack the functionality for users to explore and interpret the data in an interactive fashion. In this study, we developed CircularLogo, an interactive web application, which is capable of simultaneously displaying position-specific nucleotide frequencies and intra-motif dependencies. CircularLogo uses an open-standard, human-readable, flexible and programming language independent JSON (JavaScript Object Notation) data format to describe various properties of DNA motifs. Other commonly used motif formats such as MEME, TRANSFAC, and JASPAR can be easily converted into JSON format. JSON-Graph specifications of nucleotide motif representation We used the JSON-Graph format to describe nucleotide motif in order to make it intelligible and malleable. The schema of JSON-Graph format is illustrated as below: The contents within two curly braces describe a DNA or RNA motif. Specifically, the "id" keyword specifies the name of the motif. The "background" keyword designates nucleotides frequencies (in the order of A, T, C and G) of the relevant genomic background. For example, when studying motifs in human genome, these percentages are computed from the human reference genome as background distribution. By default, they are set to 0.25 representing equal frequencies. The "pseudocounts" keyword represents the extra nucleotides added to each position of the motif to avoid zero-division error in small data set; these are set to 0.25 for each nucleotide by default. The "nodes" section describes various properties of motif residues using the following keywords: a) the "index" keyword specifies the sequential order (in anticlockwise) of nucleotide stacks b) the "label" keyword denotes the identity of each nucleotide stack c) the "bit" keyword refers to the information content calculated for each nucleotide stack d) the "base" keyword indicates the four nucleotides sorted incrementally by their corresponding frequencies as designated by the "freq" keyword. The "links" section describes the pairwise dependencies between nucleotide stacks using the following keywords: a) the "source" and "target" keywords denoting the start and the end positions of nucleotide stacks b) the "value" keyword indicates the width of the link that is proportional to the strength of dependence between the two linked positions. 
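Assembled from the keyword descriptions above, a minimal JSON-Graph file might look like the following. This is a hypothetical two-position example with made-up numbers, intended only to show the layout of the fields; it is not an official sample distributed with CircularLogo.

{
  "id": "example_motif",
  "background": [0.25, 0.25, 0.25, 0.25],
  "pseudocounts": [0.25, 0.25, 0.25, 0.25],
  "nodes": [
    {"index": 1, "label": "1", "bit": 1.2,
     "base": ["T", "C", "G", "A"], "freq": [0.05, 0.10, 0.25, 0.60]},
    {"index": 2, "label": "2", "bit": 0.8,
     "base": ["A", "G", "C", "T"], "freq": [0.10, 0.15, 0.30, 0.45]}
  ],
  "links": [
    {"source": 1, "target": 2, "value": 0.35}
  ]
}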
CircularLogo web server The CircularLogo web application uses the NGINX (https://www.nginx.com/) web server with the uWSGI (https://pypi.python.org/pypi/uWSGI) gateway interface to handle multiple concurrent client requests. The application is hosted on Amazon Elastic Compute Cloud (Amazon EC2). Measure intra-motif dependencies using χ2 statistic We implemented two metrics to calculate the dependence between a pair of nucleotide positions: mutual information and the χ2 statistic. The χ2 statistic is widely used to test the independence of two categorical variables, and the corresponding Q score is a natural measure of dependency that quantifies co-occurrence, as follows. Let us assume that a DNA motif is l nucleotides long and is built from N sequences. For two given positions i and j within the motif (1 ≤ i ≤ l, 1 ≤ j ≤ l, i ≠ j), the observed di-nucleotide frequency is denoted as $O_{ij}$, which can be obtained by counting di-nucleotide combinations from the N input sequences. The expected di-nucleotide frequency is represented as $E_{ij}$. The χ2 statistic score is then calculated as: $$ Q=\sum_{k=1}^{m}\frac{\left(O_{ij}^{k}-E_{ij}^{k}\right)^{2}}{E_{ij}^{k}},\qquad Q\sim \chi^{2}\left( m-1\right),\quad m=16,\quad O_{ij}\in \left[ AA, AT, AC, AG,\dots \right] $$ Here, m is the total number of di-nucleotides ($4^2 = 16$). Measure intra-motif dependencies using mutual information The second built-in approach to measure dependence is the mutual information. This metric quantifies the mutual dependence between two discrete random variables X (X = [A, C, G, T]) and Y (Y = [A, C, G, T]) and it is defined as: $$ I\left( X; Y\right)=\sum_{y\in Y}\sum_{x\in X} p\left( x, y\right) \log\left(\frac{p\left( x, y\right)}{p(x)\, p(y)}\right) $$ Here, x (x ∈ [A, C, G, T]) and y (y ∈ [A, C, G, T]) represent nucleotides at the two nucleotide stacks X and Y, respectively. $p(x)$ and $p(y)$ denote the nucleotide frequencies of x and y. $p(x, y)$ denotes the frequencies of dinucleotides (xy) from X and Y. The significance of the dependency between two positions was evaluated using Chebyshev's inequality: for example, if the observed mutual information is K standard deviations larger than that expected from the random background model, then $P \leq 1/K^2$. HNF6 motif analysis HNF6 ChIP-exo data were obtained from ArrayExpress (accession number E-MTAB-2060; http://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-2060/), processed with MACE [19], and HNF6 binding sites were extracted. The 5549 65-nucleotide (upstream 20 nucleotides + 25-nucleotide HNF6 binding site + downstream 20 nucleotides) sequences were deposited at https://sourceforge.net/projects/circularlogo/files/test/. All sequences were aligned by the HNF6 motif, which spans position 29 to position 36. tRNA sequence analysis A total of 1114 tRNA sequences were downloaded from the RFAM database [20] in the form of the RFAM 'seed' alignment format (accession # RF00005; https://correlogo.ncifcrf.gov/ccrnp/trnafull.html). After excluding sequences with gaps in the alignment, 291 sequences were used as the final dataset to generate the circular logo of tRNA (https://sourceforge.net/projects/circularlogo/files/test/). Mutual information was used as the metric to measure intra-motif dependencies. The lower 33% of links were filtered out. Synthesized DNA fragments of splice sites and branch-points for analysis We synthesized DNA fragments by concatenating the 5′ donor site (16 bp), the branch-point region (21 bp) and the 3′ acceptor site (16 bp) to represent the splicing motif.
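Before turning to the data sets, the two dependency measures defined above can be sketched in a few lines of Python (our illustration, not the CircularLogo implementation). For the chi-square version we assume that the expected dinucleotide count comes from the product of the marginal column frequencies, and pseudocounts are omitted for brevity.

from collections import Counter
from math import log2

def mutual_information(seqs, i, j):
    """Mutual information (bits) between alignment columns i and j."""
    n = len(seqs)
    pxy = Counter((s[i], s[j]) for s in seqs)   # joint dinucleotide counts
    px = Counter(s[i] for s in seqs)            # marginal counts, column i
    py = Counter(s[j] for s in seqs)            # marginal counts, column j
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def chi_square(seqs, i, j):
    """Chi-square statistic Q for columns i and j, expected counts from marginals."""
    n = len(seqs)
    pxy = Counter((s[i], s[j]) for s in seqs)
    px = Counter(s[i] for s in seqs)
    py = Counter(s[j] for s in seqs)
    q = 0.0
    for x in "ACGT":
        for y in "ACGT":
            expected = px[x] * py[y] / n        # expected count under independence
            if expected > 0:
                q += (pxy[(x, y)] - expected) ** 2 / expected
    return q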
Briefly, a total of 59,359 predefined, high-confidence human branch-points were downloaded from the supplementary data of the study [21]. We excluded introns with multiple branch-points, small introns (<1 kb) and introns with small gap (≤25 bp) between the branch-point and the acceptor site. For each of the remained introns, we first extracted upstream 6 bp and downstream 10 bp of 5′ donor site. Then we extracted a 21 bp DNA sequence encompassing branch-point by extending 10 bp to both upstream and downstream of the branch-point; thirdly, we extracted upstream 10 bp and downstream 6 bp of 3′ acceptor site. At last, we concatenated these three DNA sequences in the order of "5′ donor site–branch-point–3′ acceptor site" to form a 53 bp DNA fragment. We used a final set of 10,316 DNA fragments to generate circular logo (https://sourceforge.net/projects/circularlogo/files/test/). Circular nucleotide motif Unlike the traditional sequence logos that display motif residues on a two-dimensional Cartesian coordinate system (with the x-axis denoting the position of residue stacks and the y-axis denoting the information contents), CircularLogo visualizes motifs using a polar coordinate system that facilitates the display of pairwise intra-motif dependencies with linked ribbons (Fig. 1). Since traditional PWM or PSSM representations do not preserve intra-motif dependency information, we use the JSON-Graph as the main input format to CircularLogo. When the input file is in JSON-Graph format that has pre-calculated nucleotide frequencies and dependencies, the CircularLogo simply transforms this file into a pictorial representation. In addition, CircularLogo also accepts the FASTA format motif representation as input. In this scenario, CircularLogo transforms the FASTA information into a JSON-Graph format by calculating the intra-motif dependency using the built-in χ2 statistic or mutual information metric, and determine the height of each nucleotide stack in the same way as webLogo [3]. In brief, CircularLogo generates a sector for each motif position and draws nucleotide stack within that sector based on the information content and relative frequencies of nucleotides. All sectors are properly arranged into a circular layout. The width of linked arcs indicates the strength of intra-dependency between each pair of nucleotide positions. a Motif generated from CircularLogo describing the pairwise dependencies between 65 nucleotides (20 upstream nucleotides + 25 HNF6 binding sites defined from ChIP-exo data + 20 downstream nucleotides). b All links related to node 33. c All links related on node 5, representing background level dependencies. d Links related to node 33 after removing spurious, background links CircularLogo allows users to interactively adjust a variety of parameters and explore intra-motif dependencies and fine-tune the appearance of the final output. For example, any nucleotide in the genome has a certain level of dependencies with its immediate neighbors. Such dependencies are considered as the background noise since they are not likely to be biologically meaningful. CircularLogo automatically filters out weak links according to user-specified p-value, and also provides a slider bar to let user to do interactive filtering. Nucleotide dependencies within HNF6 motif HNF6 (also known as ONECUT1) is a transcription factor that regulates expression of genes involved in a variety of cellular processes. 
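The 53 bp fragment construction described at the start of this subsection can be sketched as follows. This is our illustration only: it assumes 0-based coordinates on the plus strand of a single chromosome string, the exact placement of the 6/10 bp windows around the splice sites is a convention we chose, and the real pipeline would also need strand handling and the intron filtering steps described above.

def splice_fragment(chrom_seq, donor_pos, branch_pos, acceptor_pos):
    """Concatenate donor (16 bp), branch-point (21 bp) and acceptor (16 bp)
    windows into a single 53 bp fragment (0-based, plus-strand coordinates)."""
    donor = chrom_seq[donor_pos - 6:donor_pos + 10]           # 6 bp upstream + 10 bp downstream
    branch = chrom_seq[branch_pos - 10:branch_pos + 11]       # 10 bp either side of the branch point
    acceptor = chrom_seq[acceptor_pos - 10:acceptor_pos + 6]  # 10 bp upstream + 6 bp downstream
    return donor + branch + acceptor                          # 16 + 21 + 16 = 53 bp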
The exact protein-DNA binding boundaries of HNF6 in mouse genome were previously defined by our group [19]. A total of 5549 binding sites, each of 25 nucleotides long, were used to explore the intra-motif dependencies. Each binding site was also extended 20 nucleotides up- and downstream in order to estimate the background dependency level. Pair-wise dependencies between all 65 positions were displayed in Fig. 1a. As we expected, dependencies between positions within the HNF6 binding site (i.e. nucleotides within 29th and 36th position) were much higher than those of flanking regions (Fig. 1b). Figure 1c indicated background links relating to node 5 (i.e. the 5th position of input DNA sequence). Figure 1d indicated dependencies related to node 33 within the HNF6 binding site after spurious links were removed. Nucleotide dependencies within tRNAs The transfer RNA (tRNA) is involved in translating message RNA (mRNA) into the amino acid sequence. It's typical cloverleaf secondary structure is composed of D-loop, anticodon loop, variable loop and TΨC loop, as well as four base-paired stems between these loops (Fig. 2a). The nucleotides within stems are less conserved than those of loops, but base pairings within stems are required for structural stability. Thus we expect higher positional dependencies between nucleotides within stems than those within loops. We used CircularLogo, with mutual information as a measurement of dependence, to generate tRNA circular motif. After filtering out weak links (lower 33%), we observed four apparent clusters of connected links corresponding to the four stems (Fig. 2b). Comparing to motif logo generated from enoLOGOS (http://www.benoslab.pitt.edu/cgi-bin/enologos/enologos.cgi) using the same dataset, CircularLogo provided more intuitive view of intra-dependencies within the four stems (Fig. 2c). Figure 2b also shows that nucleotides with three loops (D-loop, Anticodon loop, and TΨC loop) exhibited much higher sequence conservation than that of nucleotides located in stems, suggesting that the loops are main functional domains of tRNA. For example, D-loop is the recognition site of aminoacyl-tRNA synthetase, an enzyme involved in amino-acylation of the tRNA molecule [22, 23], and TΨC loop is the recognition site of the ribosome. a The typical cloverleaf secondary structure of Phe-tRNA in yeast. b tRNA motif represented with the circular motif logo. The width of links indicates the strength of dependency (measured by mutual information). c tRNA motif logo generated from enoLOGOS using the same dataset. The labels ①, ②, ③, ④ indicate acceptor stem, D-stem, anticodon stem, and T-stem, respectively Nucleotide dependencies between splicing sites and branch site in eukaryotic introns Splicing is a critical step during pre-mRNA processing, where introns are removed and exons are joined together by the spliceosome complex. The eukaryotic genes contain three splicing motifs that are essential for successful intron excision: an almost invariant 5′-splice site (donor site), 3′-splice site (acceptor site) and the branch site that is about 20–50 bp upstream of acceptor site. Generally, two successive biochemical reactions are involved in the spliceosomal splicing: First, a specific branch-point nucleotide within the intron, defined during spliceosome assembly, performs a nucleophilic attack on the 5′-splice donor site to form a lariat intermediate. Second, the released 5′-exon attacks 3′-splice acceptor site to excise lariat structure and join the adjacent exons [24]. 
Recently, Mercer et al. identified 59,359 high-confidence human branch-points using high-throughput sequencing technique [21]. These reliable sites provide us a great opportunity to investigate how those elements interact with each other. We extracted the motif DNA sequences (see Implementation section) and explored their nucleotide dependencies using CircularLogo with χ2 statistic approach (Fig. 3). After filtering those weak links, we found strong dependencies among the three sites (donor site, branch-point and acceptor site). In addition, CircularLogo further revealed the interactions between the polypyrimidine tract and the two splice sites (donor site and acceptor site). Motif logo generated from CircularLogo describing the pairwise dependencies among 5′ donor site, branchpoint, polypyrimidine tract and the 3′ acceptor site New statistical models and experimental approaches are being developed for measuring intra-motif dependency. CircularLogo uses a plain text, JSON-Graph formatted, file to describe DNA/RNA motifs, which enables users to generate a customized JSON-Graph file containing positional dependencies that are pre-calculated by their choice methods. When the raw sequences were given to CircularLogo, it provides two approaches (χ2 statistic and mutual information) for measuring the positional dependency. Both of these methods, although commonly used, are biased and unable to quantify dependencies between highly conserved nucleotide stacks (e.g. invariable sites) [6, 25]. This problem could be address by users providing as many sequences as possible in order to capture the low-frequent variants at those highly conserved sites. This is feasible due to genome-wide, high-throughput, screening technologies. For example, researchers usually identify tens of thousands of potential TFBSs using ChIP-seq or other similar technologies. After retrieving the potential TFBSs from ChIP-seq data, a researcher can align them using the predicted DNA motif and give the final alignment file as input for CircularLogo. We recommend that a FASTA input file should contain at least 25 sequences. It is worth noting that the χ2 statistic and mutual information are two different measures of dependence, each suited for use under different conditions. Essentially, the χ2 statistic measures the co-occurrence of nucleotides of two different positions. Hence, χ2 method is suited for measuring dependency between two conserved (i.e. less variable) positions but it has limited power to measure dependency between two highly variable positions wherein the dinucleotide frequencies are close to background (i.e. 1/16) and the χ2 statistic approaches 0. In contrast, mutual information measures the reduction in uncertainty about nucleotide frequencies in one position, given some knowledge of nucleotide frequencies at another position. For a pair of highly conserved positions that are dominated by particular nucleotides, the information content of each position and the mutual information between them approaches to 0 bit. Hence, mutual information is suited for measuring dependency between two highly variable positions. Visualization is key for efficient data exploration and effective communication in scientific research. CircularLogo is an innovative tool offering the panorama of DNA or RNA motifs taking into consideration the intra-site dependencies. 
We demonstrated the utility and practicality of this tool using examples wherein CircularLogo was able to depict complex dependencies within motifs and reveal biomolecular structure (such as stem structures in tRNA) in an effective manner. BEM: the Binding energy model Java script object notation JVX: Java view geometry file MACE: Model-based analysis of ChIP-Exo MEME: Multiple Em for motif elicitation Mutual information PMM: the Inhomogeneous parsimonious Markov model PSSM: Position-specific scoring matrix PWM: Position weight matrix TFBS: Transcription factor binding sites TFFMs: Transcription factor flexible model VRML: Virtual reality modeling language Stormo GD. DNA binding sites: representation and discovery. Bioinformatics. 2000;16:16–23. Boeva V. Analysis of Genomic Sequence Motifs for Deciphering Transcription Factor Binding and Transcriptional Regulation in Eukaryotic Cells. Front Genet. 2016;7:24. Crooks GE, Hon G, Chandonia J-M, Brenner SE. WebLogo: a sequence logo generator. Genome Res. 2004;14:1188–90. O'Shea JP, Chou MF, Quader SA, Ryan JK, Church GM, Schwartz D. pLogo: a probabilistic approach to visualizing sequence motifs. Nat Methods. 2013;10:1211-1212. Bulyk ML, Johnson PLF, Church GM. Nucleotides of transcription factor binding sites exert interdependent effects on the binding affinities of transcription factors. Nucleic Acids Res. 2002;30:1255–61. Eggeling R, Gohr A, Keilwagen J, Mohr M, Posch S, Smith AD, et al. On the value of intra-motif dependencies of human insulator protein CTCF. PLoS ONE. 2014;9, e85629. Man TK, Stormo GD. Non-independence of Mnt repressor-operator interaction determined by a new quantitative multiple fluorescence relative affinity (QuMFRA) assay. Nucleic Acids Res. 2001;29:2471–8. Badis G, Berger MF, Philippakis AA, Talukder S, Gehrke AR, Jaeger SA, et al. Diversity and complexity in DNA recognition by transcription factors. Science. 2009;324:1720–3. Grau J, Posch S, Grosse I, Keilwagen J. A general approach for discriminative de novo motif discovery from high-throughput data. Nucleic Acids Res. 2013;41, e197. Zhou Q, Liu JS. Modeling within-motif dependence for transcription factor binding site predictions. Bioinformatics. 2004;20:909–16. Keilwagen J, Grau J. Varying levels of complexity in transcription factor binding motifs. Nucleic Acids Res. 2015;43, e119. Mathelier A, Wasserman WW. The Next Generation of Transcription Factor Binding Site Prediction. PLoS Comput Biol Public Library of Science. 2013;9:e1003214. Zhao Y, Ruan S, Pandey M, Stormo GD. Improved models for transcription factor binding site identification using nonindependent interactions. Genetics. 2012;191:781–90. Eggeling R, Roos T, Myllymäki P, Grosse I. Inferring intra-motif dependencies of DNA binding sites from ChIP-seq data. BMC bioinformatics. 2015;16:375. Thomsen MCF, Nielsen M. Seq2Logo: a method for construction and visualization of amino acid binding motifs and sequence profiles including sequence weighting, pseudo counts and two-sided representation of amino acid enrichment and depletion. Nucleic Acids Res. 2012;40:W281–7. Bindewald E, Schneider TD, Shapiro BA. CorreLogo: an online server for 3D sequence logos of RNA and DNA alignments. Nucleic Acids Res. 2006;34:W405–11. Yang C, Chang C-H. Exploring comprehensive within-motif dependence of transcription factor binding in Escherichia coli. Sci Rep. 2015;5:17021. Workman CT, Yin Y, Corcoran DL, Ideker T, Stormo GD, Benos PV. enoLOGOS: a versatile web tool for energy normalized sequence logos. Nucleic Acids Res. 
2005;33:W389–92. Wang L, Chen J, Wang C, Uusküla-Reimand L, Chen K, Medina-Rivera A, et al. MACE: model based analysis of ChIP-exo. Nucleic Acids Res. 2014;42:e156. Griffiths-Jones S, Bateman A, Marshall M, Khanna A, Eddy SR. Rfam: an RNA family database. Nucleic Acids Res. 2003;31:439–41. Mercer TR, Clark MB, Andersen SB, Brunck ME, Haerty W, Crawford J, Taft RJ, Nielsen LK, Dinger ME, Mattick JS. Genome-wide discovery of human splicing branchpoints. Genome Res. 2015;25:290–303. Smith D, Yarus M. Transfer RNA structure and coding specificity. I. Evidence that a D-arm mutation reduces tRNA dissociation from the ribosome. J Mol Biol. 1989;206:489–501. Hardt WD, Schlegl J, Erdmann VA, Hartmann RK. Role of the D arm and the anticodon arm in tRNA recognition by eubacterial and eukaryotic RNase P enzymes. Biochemistry. 1993;32:13046–53. Lee Y, Rio DC. Mechanisms and regulation of alternative pre-mRNA splicing. Annu Rev Biochem. 2015;84:291–323. Paninski L. Estimation of entropy and mutual information. Neural Comput. 2003;15:1191-253. This works is partly supported by the Mayo Clinic Center for Individualized Medicine. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Availability and requirements CircularLogo (http://circularlogo.sourceforge.net/) is implemented in Python and Django and is released under the GNU General Public License (GPLv2). CircularLogo web server (http://bioinformaticstools.mayo.edu/circularlogo/index.html) is hosted on Amazon Elastic Compute Cloud and uses NGINX web server with uWSGI gateway interface to handle multiple concurrent client requests. Local installation of CircularLogo on Linux, Mac OS X and Windows systems requires these modules: python2.7.10 (https://www.python.org/downloads/release/python-2710/), Django (https://www.djangoproject.com/), biopython (https://github.com/biopython/biopython.github.io/), numpy (http://www.numpy.org/) and scipy (https://www.scipy.org/). The source codes and datasets analyzed during the current study are available at: https://sourceforge.net/projects/circularlogo/files/. CircularLogo web server can be accessed from http://bioinformaticstools.mayo.edu/circularlogo/index.html. LW and JPK conceived the study. ZY and TM implemented CircularLogo software and performed the analysis. MK built CircularLogo web server. LW, ZY, SD and JPK wrote the manuscript. All authors read and approved the final manuscript. Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, Rochester, MN, USA Zhenqing Ye, Michael T. Kalmbach, Surendra Dasari, Jean-Pierre A. Kocher & Liguo Wang Department of Biochemistry and Molecular Biology, Mayo Clinic, Rochester, MN, USA Tao Ma & Liguo Wang Zhenqing Ye Tao Ma Michael T. Kalmbach Surendra Dasari Jean-Pierre A. Kocher Liguo Wang Correspondence to Liguo Wang. Ye, Z., Ma, T., Kalmbach, M.T. et al. CircularLogo: A lightweight web application to visualize intra-motif dependencies. BMC Bioinformatics 18, 269 (2017). https://doi.org/10.1186/s12859-017-1680-2 Accepted: 11 May 2017 CircularLogo Intra-motif dependency
Artificial Intelligence Blog We're blogging machines! 100 Theorems You are currently browsing the archive for the Programming category. Yann Esposito's Category theory Slides with Haskell August 30, 2013 in Category Theory, Programming by hundalhh | 1 comment I really enjoyed Yann Esposito's slides "Category Theory Presentation" which give a relatively simple and artistic introduction to category theory for Haskell programmers. (I like Haskell too.) His slides present the basic definitions of categories, functors, natural transformations, and monads along with working Haskell code. Here is a fragment of a typical slide. Searching a Game Tree with a GPU July 28, 2013 in Games, Programming, Technology by hundalhh | Permalink One of my friends showed me his new gaming computer and said that the GPU could do 1.3 teraflops (1.3 trillion floating point operations per second) which is about 500 times faster than my home computer, so I thought, "Imagine how quickly we could search a game tree." So I started looking around the internet for a super-great GPU chess engine and found basically nothing!! Turns out that the amount of memory per thread is too low, the size of the L1 cache is too small, and the alpha-beta pruning algorithm is not quite parallel enough for GPUs to play chess well. Here is a nice graphic of the L1 access time for a few CPUs and GPUs. In the paper "Parallel Game Tree Search Using GPU" (2011), L'ubomír Lackovic improved the tree search speed by a factor of two to three by using a GPU instead of the more traditional CPU based tree search for Czech draughts (similar to American Checkers). His tests were based on the ATI Radeon 4890 GPU, the Nvidia GTX460 GPU, and the quad-core processor Intel i5 750 CPU. I had hoped for more speed. In "Large-Scale Parallel State Space Search Utilizing Graphics Processing Units and Solid State Disks" (2011), von Damian Sulewski invented and test several algorithms for search, reviewed game theory algorithms, and applied GPU processing to several games including "Nine Men's Morris". Sulewski used an Intel Core i7 CPU 920 with an NVIDIA GeForce 285 GTX GPU to run his tests. He reported that the GPU was faster by a factor of three to twelve as long as sufficient RAM was available. If the computer ran out of RAM and had to use disk storage, then the GPU performance degraded significantly. He states, "The observed speed-ups of over one order of magnitude have been obtained (plotted in bold font), exceeding the number of cores on most current PCs. Note that this assertion is true for the dual 6-core CPUs available from Intel, but not on a dual Xeon machine with two quad-core CPUs creating 16 logical cores due to multi-threading. Nonetheless, better speed-ups are possible since NVIDIA GPUs can be used in parallel and the Fermi architecture (e.g. located on the GeForce GTX 480 graphics card) is coming out which will go far beyond the 240 GPU cores we had access to. For larger levels, however, we observe that the GPU performance degrades. When profiling the code, we identified I/O access as one limiting factor. For example, reading S8,8 from one HDD required 100 seconds, while the expansion of 8 million states, including ranking and unranking required only about 1 second on the GPU." So GPU's are not the silver bullet for games yet. 
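For readers who have not seen it written down, the serial algorithm these papers are trying to parallelize looks roughly like the minimal negamax-with-alpha-beta sketch below (a generic textbook version in Python, not code from any of the cited papers; the game-specific callbacks moves, apply_move and evaluate are placeholders). The reason it maps poorly onto thousands of GPU threads is visible in the loop: each child's search window depends on the best score found so far among its earlier siblings.

def negamax(state, depth, alpha, beta, evaluate, moves, apply_move):
    """Serial negamax with alpha-beta pruning (score from the side to move)."""
    if depth == 0 or not moves(state):
        return evaluate(state)
    best = float("-inf")
    for m in moves(state):
        child = apply_move(state, m)
        score = -negamax(child, depth - 1, -beta, -alpha, evaluate, moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:        # cutoff: remaining siblings need not be searched
            break
    return best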
What is probabilistic programming and Why it Matters April 18, 2013 in Graphical Models, Programming by hundalhh | Permalink "A probabilistic programming language is a high-level language that makes it easy for a developer to define probability models and then "solve" these models automatically. These languages incorporate random events as primitives and their runtime environment handles inference. Now, it is a matter of programming that enables a clean separation between modeling and inference." writes Beau Cronin in this post about Probabilistic Programming (PP) (see e.g. [1] and [2]). He goes on to informally describe a probabilistic graphical model (PGM) and how PP languages or extensions to existing languages like BLOG, BUGS, Church (Lisp), FACTORIE, Figaro (Scala), HANSEI (Ocaml), Hierarchical Bayesian Compiler, Infer.NET, ProbLog, and Stan make it much easier to set up and solve PGMs. He provides links to a tutorial, a great easy-to-understand video, the NIPS workshop on PP, and several ongoing PP projects. I also enjoyed the more detailed post "Why Probabilistic Programming Matters" by Rob Zinkov. Rob shows how to represent the following machine learning techniques in a PP language: Bayesian Linear Regression Naive Bayes K-Means Clustering Latent Dirichlet Allocation (LDA) Correlated Topic Models (CTM) Autoregressive Integrated Moving Average (ARIMA) Hidden Markov Models (HMM) Matrix Factorization Sparsity and Sparse Bayes Conditional Random Fields (CRF) "Stochastic Superoptimization" and "Programming by Optimization" April 12, 2013 in Multi-Armed Bandit Problem, Optimization, Programming by hundalhh | Permalink John Regehr writes this post about a compiler speed optimization technique called "Stochastic Superoptimization". "Stochastic Superoptimization" systematically searches for algorithmic improvement in code using machine learning algorithms similar to multi-armed bandit strategies. It appears to be related to "Programming by Optimization". "Stochastic Superoptimization" is more like a very good optimization flag on a compiler. "Programming by Optimization" is constructing the program in such a fashion that design options are exposed and easily manipulable by an optimization program trying to maximize some performance metric. The "Programming by Optimization" community seems to mostly use BOA, the Bayesian optimization algorithm (see [1], [2]). I am hoping to read and write more about both of these ideas later. "Poor Man's Agile: Scrum in 5 Simple Steps" April 8, 2013 in Programming by hundalhh | Permalink I enjoyed reading Scott Porad's abbreviated post on Scrum. SCRUM Resources October 12, 2012 in Programming by hundalhh | Permalink My friend Mark K pointed me toward these resources for Scrum agile software development. (See also fueled.com) The Scrum Framework in 10 minutes: http://www.youtube.com/watch?v=_BWbaZs1M_8 Jeff Sutherland videos (the inventor of scrum) Discusses His Vision for Scrum 3:00 http://www.youtube.com/watch?v=LjBN2CjKDcU The Structure of Scrum 5:37 http://www.youtube.com/watch?v=1RmCahV3Tbw&feature=relmfu How does Scrum Really Work? 3:31 http://www.youtube.com/watch?v=eIyaCPcUuyQ&feature=relmfu What Is the Product Backlog Review in Scrum? 1:54 http://www.youtube.com/watch?v=iwkb56GQg9Q&feature=relmfu What does it mean to be Ready-Ready? 3:30 http://www.youtube.com/watch?v=XkhJDbaW0j0&feature=relmfu What Is the Scrum Daily Meeting?
2:09 http://www.youtube.com/watch?v=lXOhfKV6jLQ&feature=relmfu Inside the Sprint Retrospective 2:30 http://www.youtube.com/watch?v=MFLvQXMNrO8&feature=relmfu Understanding the Burndown Chart in Scrum 3:48 http://www.youtube.com/watch?v=HV76WzqpSI0&feature=relmfu Best practices for Scrum 1:47 http://www.youtube.com/watch?v=jQULZDTDG8Q&feature=relmfu The Nokia Test 6:14 http://www.youtube.com/watch?v=1yZ3J8C4MK0&feature=relmfu The 32 most important algorithms October 4, 2012 in Programming by hundalhh | Permalink http://www.risc.jku.at/people/ckoutsch/stuff/e_algorithms.html http://www.uta.edu/faculty/rcli/TopTen/topten.pdf If you use the integer relation detection algorithm, please leave a comment on how you used it. (The Euclidean algorithm does not count!) I think the fast multipole algorithm is used by electrical engineers. If you use it, please leave a comment. Optimal coding July 19, 2012 in Programming by hundalhh | Permalink It occurs to me that when designing an algorithm for a large organization the designer should minimize $$\alpha T + \beta L + \gamma H$$ where $\alpha$, $\beta$, and $\gamma$ are unknown positive constants, $T$ is the time your code needs to run, $L$ is the number of lines of code required, and $H$ is the number of hours required by other engineers to understand your algorithm. PS: The above statement is considerably less funny if you try to estimate $\alpha$, $\beta$, and $\gamma$.
Ice streams Modern studies of the behaviors of glaciers, ice sheets, and ice streams rely heavily on both observations and physically based models. Data acquired via remote sensing provide critical information on geometry and movement of ice over large sections of Antarctica and Greenland. Though these datasets are significant advances in terms of spatial coverage and the variety of processes we can observe, the physical systems to be modeled are nevertheless imperfectly observed. Uncertainties associated with measurement errors are present, and physical models are also subject to uncertainties. Hence, there is a need for combining observations and models in a fashion that incorporates uncertainty and quantifies its impact on conclusions. The goal of combining models and observations is hardly new in glaciology, or in the broad areas of the geosciences (e.g., data assimilation as practiced in numerical weather forecasting). We focus on the development of statistical models with strong reliance on physical modeling, a strategy Berliner (2003) called physical-statistical modeling, and then use Bayes' Theorem to make inference on all unknowns given the data. This is different from traditional physical modeling, perhaps with data-based parameter estimates, and traditional statistical modeling, perhaps relying on vague, qualitative physical reasoning. In the paragraphs that follow, we develop statistically enhanced versions of a simple physical model of driving stress and a familiar model for velocity based on stress. This presentation is based on the preprint, Berliner, Jezek, Cressie, Kim, Lam, and van der Veen (2005). Glaciological motivations Since glaciers flow under the force of gravity, important factors in determining velocities include quantities such as the ice thickness acting in combination with forces acting along the sides and at the base of the glacier and under the constraints of the constitutive relationship. In particular, driving stress is associated with gravitational force acting on the ice. Hence, spatial variations in the stress arise from longitudinal gradients in ice-surface elevation and ice thickness. Based on existing theory (e.g., Paterson 1994), we consider equating driving stress to stresses acting on the sides and base of the glacier. A simple approximation equates driving stress along the flow to basal shear stress as follows: \begin{equation} \tau_{dx} \approx \tau_{bx} = - \rho g H \frac{ds}{dx}, \end{equation} where $s$ is ice-surface elevation, $H$ is the ice thickness, $\rho$ is the density of ice, and $g$ is the gravity constant. Under these assumptions, it is reasonably straightforward to estimate directly driving stress based on observations of $s$ and $H$. However, even though estimation may be relatively straightforward, assessment of uncertainties in such estimates can be difficult. Furthermore, a concern in estimating driving stress from geometry is that the reliance on the slope of the upper ice surface in (1) implies that results are very sensitive to small-scale variations in surface topography, and to small-scale, perhaps unimportant variations in ice thickness.
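As a concrete illustration of estimating the driving stress in (1) from along-flow profiles of surface elevation and ice thickness, here is a minimal Python sketch (ours, not code from the preprint). The optional running mean anticipates the averaging over a few ice thicknesses discussed below, and the constants match those quoted later in the text (ρ = 911 kg/m³, g = 9.81 m/s²).

import numpy as np

RHO_ICE, G = 911.0, 9.81   # kg/m^3, m/s^2

def driving_stress(x, surface, thickness, window_m=None):
    """Driving stress tau_dx = -rho*g*H*ds/dx along a flow line.

    x, surface and thickness are 1-D arrays in metres; window_m optionally
    averages the result over a horizontal distance of a few ice thicknesses."""
    dsdx = np.gradient(surface, x)
    tau = -RHO_ICE * G * thickness * dsdx            # Pa; divide by 1e3 for kPa
    if window_m is not None:
        n = max(1, int(round(window_m / np.mean(np.diff(x)))))
        kernel = np.ones(n) / n
        tau = np.convolve(tau, kernel, mode="same")  # simple running mean
    return tau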
From (1), there is no theoretical requirement that driving stress be spatially averaged, however it is usually calculated over horizontal distances of a few ice thicknesses or so to eliminate small-scale flow features not important to the large-scale flow (e.g., Kamb and Echelmeyer 1986). Indeed, if averaging is not done, the driving stress estimates exhibit unreasonably large variations. We assembled surface topography and ice thickness observations for a portion of the Northeast Ice Stream in Greenland; see Figure 1. The data were gathered as part of the Program for Arctic Climate Regional Assessments (PARCA). Surface topography and ice thickness were sampled every few hundred meters using equipment mounted on the Wallops Flight Facility P-3 aircraft. Surface velocity data were calculated by Ian Joughin and provided as part of the PARCA dataset. The three derived datasets are: ${\bf S}$ (Figure 2), surface topography; ${\bf B}$ (Figure 2), basal topography; and ${\bf U}$ (Figure 3), surface velocities. The primary output of a Bayesian analysis is a posterior distribution, namely, the joint probability distribution for unknown quantities conditional on the observed data. Even in our simple illustration for the Northeast Ice Stream in Greenland, we have on the order of 8,000 unknowns, so explicit presentation of their joint distribution is not feasible. Hence, a key aspect of Bayesian analysis in such high-dimensional settings is the ability to generate realizations (or ensembles) from the posterior distribution; the posterior is then studied through statistical summaries of such ensembles. A separate webpage can be viewed for an introduction to Bayesian Statistics: Tutorial on Bayesian Statistics for Geophysicists Figure 1. NE Ice Stream Showing PARCA Flight Line. Figure 2. Surface and Basal Elevation. Figure 3. Surface Velocities. Physical-statistical modeling of the NE ice stream Recall that our three datasets are: ${\bf S}$, surface observations; ${\bf B}$, basal observations; and ${\bf U}$, velocity data. The corresponding processes of interest are true surface topography $s(x)$, true basal topography $b(x)$, and true velocities $u(x)$, where $x$ indexes a transect down the middle of the ice stream. There are no observations on the stresses acting on the ice, though, as we shall see, physical relations allow us to make inference on modeled stresses. We incorporate three physically based models. First, following the discussion leading to (1), we consider the stress, \begin{equation} \tau = \rho g H \frac{ds}{dx}, \end{equation} where the ice thickness is $H=s-b$, $\rho$ is the density of ice, and $g$ is the gravity constant. The negative sign present in (1) is omitted here because we model $\tau$ and velocity in the negative-$x$ direction. In all computations, we set $\rho = 911 \, kg/m^3$ and $g = 9.81\, m/s^2$ . Second, under a laminar-flow assumption and treating the flow parameter $A$ as a constant, the surface velocity $u$ is given by, \begin{equation} u=u_b + \frac{2A}{n+1} \, H \, \tau^n, \end{equation} where $u_b$ is the sliding velocity and $n$ is a flow parameter (e.g., Paterson 1994, p. 251, eq. 21). Finally, as suggested by the analysis given in Paterson (1994, p. 243, eq. 8), we consider the following basic model for the surface: \begin{equation} s= k \, (L^{1+n^{-1}} - (L-x)^{1+n^{-1}})^{0.50 n/(n+1)}\,. 
\end{equation} Bayesian hierarchical modeling We see from Tutorial on Bayesian Statistics for Geophysicists that our main tasks are the development of the following probability distributions: \begin{eqnarray*} \mbox{ Data Model: } & [{\bf B},{\bf S},{\bf U}\mid b,s,u,\mbox{$\boldsymbol \theta$}] & \\ \mbox{ Process Model: } & [b,s,u \mid \mbox{$\boldsymbol \theta$}] & \\ \mbox{ Parameter Model: } & [\mbox{$\boldsymbol \theta$}] & \end{eqnarray*} where $\mbox{$\boldsymbol \theta$}$ denotes the collection of all model parameters. The specifications of these probability distributions are described in detail in Berliner et al. (2005). Our goal is to obtain the posterior distribution $[b,s,u,\mbox{$\boldsymbol \theta$}\vert{\bf B},{\bf S},{\bf U}]$, which then can be used to obtain the posterior distribution of stresses, $[\tau\vert{\bf B},{\bf S} ,{\bf U}]$. Our main assumption regarding the data model is that we assume it takes the form \begin{equation} [{\bf B},{\bf S},{\bf U}\mid b,s,u,\mbox{$\boldsymbol \theta$}] = [{\bf B}\mid b,\mbox{$\boldsymbol \theta$}_B][{\bf S}\mid s,\mbox{$\boldsymbol \theta$}_S] [{\bf U}\mid u,\mbox{$\boldsymbol \theta$}_U]\,, \end{equation} where notation such as $\mbox{$\boldsymbol \theta$}_B$ is used to indicate those parameters (subsets of $\mbox{$\boldsymbol \theta$}$ explicitly appearing in the indicated models. A possible objection one might make to (5) is that because the basal data ${\bf B}$ is actually computed as the differences of surface and thickness observations, the assumed conditional independence may not hold. We checked our posterior results for indications of degrees of departure from this assumption and found none that would affect our results. See Berliner et al. (2005) for more details. Our process model begins with a probabilistic equality (i.e., this is not an assumption, but a fact): \begin{equation} [b,s,u \mid \mbox{$\boldsymbol \theta$}] = [u \mid b,s,\mbox{$\boldsymbol \theta$}] [b, s \mid \mbox{$\boldsymbol \theta$}]. \end{equation} Then assuming that the base $b$ and the surface $s$ are independent conditional on the model parameters, we obtain: \begin{equation} [b,s,u \mid \mbox{$\boldsymbol \theta$}] = [u \mid b,s,\mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u] [b \mid \mbox{$\boldsymbol \theta$}_b] [s \mid \mbox{$\boldsymbol \theta$}_s]\,, \end{equation} where again notation such as $\mbox{$\boldsymbol \theta$}_b$ is used to indicate appropriate subsets of $\mbox{$\boldsymbol \theta$}$. It is critical to note here that we are not assuming that the base and surface are independent. Our modelling of both the base and surface is conditional upon smooth processes included in definitions of $\mbox{$\boldsymbol \theta$}_b$ and $\mbox{$\boldsymbol \theta$}_s$. Our assumption then is that the small-scale departures from those large-scale processes are independent. Finally, we turn to the conditional model for $u$ in (7). We assume that the velocity profile depends on the base and surface only through their respective smoothed versions $\mbox{$\boldsymbol \theta$}_b$ and $\mbox{$\boldsymbol \theta$}_s$; that is, \begin{equation} [u \mid b,s,\mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u] =[u \mid \mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u]. \end{equation} Once we get this far in the analysis, we assume that the relationship is deterministic (this assumption could be relaxed). 
That is, the probability distribution on the right-hand side of (8) is degenerate and is based on (3). The parameter model (i.e., specification of prior distributions) is given in Berliner et al. (2005). Bed model We choose to use wavelets to model the true basal topography (i) because of their flexibility in representing highly variable processes, and (ii) because we can easily control the smoothness of the fitted wavelets. Wavelets do best for equally spaced data where the number of data points is an integer power of $2$. Hence, we partition the domain of the data into $2^{11} = 2048$ bins of equal length ($189.5 \, m$). Let $\bar{{\bf b}}$ denote the 2048-dimensional vector constructed by averaging $b$ within each bin. Note that $\bar{{\bf b}}$ is not observed; define the associated basal data vector $\bar{{\bf B}}$ of length 2048 with $i^{\mbox{\small th}}$ element given by the simple arithmetic average of those basal observations lying in bin $i$. The data model we propose for $[{\bf B}\vert b,\mbox{$\boldsymbol \theta$}_b]$ in (5) implies that the elements of $\bar{{\bf B}}$ are conditionally independent, each being normally distributed with mean equal the corresponding element of $\bar{{\bf b}}$ and variance determined by the measurement error variability of an individual observation, denoted by $\sigma_{B}^2$, and the number of observations in the corresponding bin (see Berliner et al., 2005). After converting to a discrete wavelet form and fixing the resolution, we obtain a linear model for $[b\vert\mbox{$\boldsymbol \theta$}_b]$ in (7); that is, \begin{equation} \bar{{\bf b}} \mid \mbox{$\boldsymbol \theta$}_b \sim N({\bf W}{\bf C}, \sigma^2 \mbox{$\boldsymbol \Sigma$}(\phi_{1},\phi_{2})), \end{equation} where ${\bf W}$ is the $2048 \times k$ matrix of discretized wavelet basis functions, ${\bf C}$ is the $k \times 1$ vector of wavelet coefficients, $s$ is determined by the chosen resolution, $\mbox{$\boldsymbol \Sigma$}(\phi_{1},\phi_{2})$ is the correlation matrix of an autoregressive process of order two (AR(2)) with variance $\sigma^2$, and $\mbox{$\boldsymbol \theta$}_b = ({\bf C},\sigma^2,\phi_1,\phi_2)$. The selection of an AR(2) error model to account for spatial dependence among these model errors (i.e., local variations in basal topography) was based on preliminary data analysis and practicality; our Bayesian computations require repeated inversion of a $2048 \times 2048$ matrix involving the inverse of $\mbox{$\boldsymbol \Sigma$}(\phi_{1},\phi_{2})$, which is simple for an AR(2) process. Different choices of k lead to different resolutions of the mean basal elevation; we performed analyses for four resolutions, $r=1,\ldots,4$, corresponding to $k=8,16,32$ and $64$ coefficients in (9); see Berliner et al. (2005) for more details. Surface model Our modelling strategy for $[s\vert\mbox{$\boldsymbol \theta$}_s]$ in (7), separates the large-scale and small-scale behaviors of the surface. We suppose \begin{equation} s(x) = s_p(x) + {\cal S}(x), \end{equation} where the large-scale surface is given by a parameterized function $s_p$, assumed known up to a low-dimensional set of parameters, and $\cal S$ is a zero-mean spatial stochastic process, described in Berliner et al. (2005). To model $s_p(\cdot )$, we rely on the physical model (4); that is, we assume that \begin{equation} s_p(x) = \mu + K \, (L^{1+n^{-1}} - (L-x)^{1+n^{-1}})^{0.50 n/(n+1)}, \end{equation} where $\mu, K$, and $L$ are treated as the unknown parameters. 
In the analysis here, we set $n=3$, though we could model $n$ as an unknown as well. We use only the large-scale surface (11) to compute ice thickness, the derivative, and hence the stress in (2). Enhancements that incorporate $\cal S$ will be explored elsewhere. Nevertheless, the presence of $\cal S$ is important when determining the data model in (5). Under the modelling strategy that uses (11) to obtain the stress, we need $[{\bf S}\vert s_p,\mbox{$\boldsymbol \theta$}_s]$, which is given in Berliner et al. (2005). Velocity model Our data model $[{\bf U}\vert u,\mbox{$\boldsymbol \theta$}_U]$ is again a basic measurement-error model; that is, we assume that conditional on the true velocities, the data vector ${\bf U}$ has a Gaussian distribution with mean equal to the vector of velocities at the corresponding locations, common variances $\sigma_U^2$, and are independent (see Berliner et al., 2005). Turning to the process model, recall from (8) that we want $[u \mid \mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u]$ based on (2) and (3). We consider (i) the corresponding smoothed versions of ice thickness, defined as \begin{equation} {\bf H}= {\bf s}_p - {\bf W}{\bf C}\,, \end{equation} where ${\bf s}_p$ is the $2048$-dimensional vector of parameterized surface elevations; and (ii) similarly defined smoothed values of stress $\tau$, defined as \begin{equation} \mbox{$\boldsymbol \tau$}= \rho g ({\bf H}\cdot \frac{d {\bf s}_p}{dx}), \end{equation} where the right-hand side means each coordinate of the $2048$-dimensional vector $\mbox{$\boldsymbol \tau$}$ is obtained as the elementwise product of the corresponding coordinates of the smoothed thickness and the derivative of the surface. From (3), we should model ${\bf u}$, the vector of true velocities at the observation locations, as a linear function of the corresponding coordinates of ${\bf H}$ times the $n^{th}$ powers of coordinates of $\mbox{$\boldsymbol \tau$}$. But, in preliminary data analyses, we noted that at least two models (one for small $x$ and another for large $x$) are needed. Let $x=c$ be an unknown change point, and consider different linear functions above and below the change point. Finally, the model for the velocity data vector ${\bf U}$ is \begin{equation} {\bf U}= \left( \begin{array}{c} u_{b,1} \, {\bf 1}_{1} \\u_{b,2} \, {\bf 1}_{2} \end{array} \right) + \left( \begin{array}{c} 0.50 A_1 \, ({\bf H}\, \cdot \mbox{$\boldsymbol \tau$}^{n})_{1} \\ 0.50 A_2 \, ({\bf H}\, \cdot \mbox{$\boldsymbol \tau$}^{n})_{2} \end{array} \right) + {\bf e}_U, \end{equation} where the subscripts 1 and 2 indicate the varying dimensions of the vectors ${\bf 1}$ (a vector with all elements equal to 1) and ${\bf H}\, \cdot \mbox{$\boldsymbol \tau$}^{n}$, depending on the value of the change point $c$, and ${\bf e}_U$ are errors primarily representing measurement error associated with the velocity data. See Berliner et al. (2005) for more details. A separate webpage can be viewed for an introduction to Bayesian statistics: Tutorial on Bayesian Statistics for Geophysicists. Though we can write down Bayes' Theorem for the posterior distribution of all unknowns conditional on the observations, the result is typically not computable in closed form. We use a Monte Carlo approach that produces an ensemble of realizations from the target posterior distribution. The method relies on the emerging technology of Markov Chain Monte Carlo (MCMC). 
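To see how the pieces (11)-(14) fit together numerically, here is a small Python sketch (our illustration; the arguments mu, K, L, u_b and A are placeholders that a user would supply, not the posterior estimates reported below, and only a single flow-law segment is shown rather than the two segments on either side of the change point).

import numpy as np

RHO, G, N = 911.0, 9.81, 3   # ice density (kg/m^3), gravity (m/s^2), flow exponent

def surface_profile(x, mu, K, L):
    """Large-scale surface s_p(x) of eq. (11)."""
    return mu + K * (L**(1 + 1/N) - (L - x)**(1 + 1/N))**(0.5 * N / (N + 1))

def modeled_velocity(x, smoothed_base, mu, K, L, u_b, A):
    """Velocity from eqs. (12)-(14): u = u_b + 0.5*A*H*tau^N for n = 3."""
    s_p = surface_profile(x, mu, K, L)
    H = s_p - smoothed_base              # smoothed ice thickness, eq. (12)
    dsdx = np.gradient(s_p, x)           # slope of the parameterized surface
    tau = RHO * G * H * dsdx             # smoothed driving stress, eq. (13)
    return u_b + 0.5 * A * H * tau**N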
The idea of MCMC is to simulate a Markov chain that has been carefully designed so that its stationary distribution coincides with the target posterior distribution. It follows that, after a burn-in or transience period, the generated realizations of the chain comprise a simulated sample from the posterior. Data analysis (often known as "output analysis") is performed on this sample to produce the desired inferences. In our case, direct use of MCMC is quite challenging, primarily due to the nonlinearities present in (2) and (3). Hence, we combine MCMC with the technique of Importance Sampling Monte Carlo (ISMC). The basic idea of ISMC begins with a setting in which direct simulation from a target distribution is difficult or inefficient. One generates an ensemble from another, more manageable distribution. The theory of ISMC provides formulas for the calculation of weights that are used to reweight the ensemble, permitting inferences relative to the original target. General introductions to both MCMC and ISMC can be found in Robert and Casella (1999). An illustration of these technologies in a geophysical problem is given in Berliner, Milliff, and Wikle (2003). An outline of the calculations used here goes as follows. We first run separate, independent MCMC algorithms for the basal model and the surface model. These runs produce ensembles from the posterior distributions $[b, \mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_B \mid {\bf B}]$ and $[\mbox{$\boldsymbol \theta$}_s \mid {\bf S}]$. Due to the various conditional-independence assumptions described above, these ensembles are summaries of the posterior distribution of the unknowns conditional on the two datasets ${\bf B}$ and ${\bf S}$. They are then used in conjunction with the velocity model $[u \mid \mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u]$ (recall (8)) to simulate velocities conditional on ${\bf B}$ and ${\bf S}$. To incorporate the velocity data ${\bf U}$, we reweight all of these samples using ISMC results. Posterior results For each of the four resolutions, Figure 4 presents 10 realizations of the smoothed base ${\bf W}{\bf C}$, superimposed on the original data. We see that the posterior distributions of the smoothed base are increasingly faithful to the basal data as the resolution is increased. We tried even higher resolution wavelets, but detected very little difference from the results for r = 4. Figure 4. Posterior Smoothed Basal Topographies at Each Resolution. Shown are basal data and 10 posterior realizations of smoothed basal topographies for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4. For each of the four resolutions, Figure 5 presents 50 realizations and the posterior mean, estimated using ensembles of size 2000, of the smoothed stresses $\mbox{$\boldsymbol \tau$}$ (recall (13)). For each resolution, Figure 6 presents 100 realizations and the posterior means estimated using ensembles of size 2000; the original velocity data is also shown in each plot. Note that the change point at x = 77.5 km is clearly seen in these graphs. Figure 5. Posterior Realizations of Smoothed Stress $\mbox{$\boldsymbol \tau$}$ at Each Resolution. Shown are 50 posterior realizations of smoothed $\mbox{$\boldsymbol \tau$}$ (kPa) and posterior mean of $\mbox{$\boldsymbol \tau$}$ based on 2000 realizations for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4. Figure 6. Summaries of Posterior Distributions for Velocities. 
Shown are 100 posterior realizations of velocity profiles and their posterior means based on 2000 realizations for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4, as well as the original velocity data. The estimates (i.e., posterior means) for the other parameters in the model of the large-scale surface elevation (11) are $\hat{\mu}=-450.53$, $\hat{K}= 4.75$, and $\hat{L}= 444901$. Prior and posterior means for other key model parameters are presented in Berliner et al. (2005). Variance estimation and model selection We estimated $\sigma_U^2$ as follows. For each of our 2000 simulated ensemble members, we compute the variance, say $v_{m}^2$, where the subscript m indicates the ensemble member) of the "residuals", namely the observed velocity data minus the generated velocities from the m-th ensemble. The average then provides a posterior estimate of $\sigma_U^2$ (due to the very large sample sizes, the prior distribution on $\sigma_U^2$ "washes out"). For each resolution, the resulting estimate of $\sigma_U^2$ is about 50, corresponding to a standard deviation of about 7-8 m/yr. This compares fairly well to the suggestion that most measurement errors in velocity data are expected to be less than 10 m/yr (Goldstein, Engelhardt, Kamb, and Frolich 1993). To check on the plausibility of the treatment of $\sigma_U^2$ as a constant, we partitioned the length of our profile into eight subintervals and estimated $\sigma_U^2$ as described above, but from data restricted to the subintervals. The results are summarized in Figure 7. First, we find evidence that the magnitudes of the variations around the model differ substantially on either side of the change point. In particular, we see very large variances to the left of the change point, suggesting a severe (and anticipated) breakdown of the laminar-flow approximation. Indeed, the model predicts increasing velocities when approaching the change point from the left and decreasing velocities to the right, whereas the velocity observations decrease relatively smoothly from left to right through this region. Also, we note differences in $\sigma_U^2$ throughout the profile. This suggests looking more closely at local behavior of the velocities. For example, a model with mutliple change points (i.e., corresponding to multiple, local values of sliding velocity and flow parameter A) could be suggested. Beyond assessments of local model misfit, Figure 7 also contains information regarding comparisons of the resolutions used for basal smoothing. Focusing on the region to the right of the change point, we note that resolution r = 2 would be the preferred choice even though it appears to severely smooth some features of the basal topography (recall Figure 4). Figure 7. Local Estimates of $\sigma_U^2$. Estimates of $\sigma_U^2$ using data restricted to eight subregions for each resolution r = 1,...,4. Model stability and predictive power To assess both model stability and predictive power, we did two simple experiments. First, we re-ran the velocity model using only 20% of the velocity data (every fifth observation). The resulting analyses are reported in Figure 8. In comparing this figure to Figure 6, we find fairly strong similarities, suggesting good stability and interpolation properties. To quantify the behavior, we computed an estimate of the predictive variance of the velocity. Specifically, for the velocity data left out of the analysis, we computed the average squared prediction errors (observed value minus posterior mean). 
We obtained the value of 50, which compares very well with our estimates of $\sigma_U^2$ based on all the data. Figure 8. Posterior Distributions for Velocities for Subsampled Data. Bayesian analysis using only every fifth velocity data point. Shown are 100 posterior realizations of velocity profiles and their posterior means based on 2000 realizations for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4, as well as the original velocity data. Second, we again repeated the analysis leaving out some velocity data, but in this case we omitted all observations occurring between x = 148 km and x = 202 km. The posterior results are shown in Figure 9. Models for resolutions 2,3, and 4 do reasonably well even at predicting velocities in the unobserved region. However, the additional smoothing associated with r = 1 leads to very poor predictions in this region. Note that the spreads in the ensemble members are larger than in Figure 3, reflecting the extra uncertainty. It is also interesting that the models appear to systematically predict slightly larger velocities in the unobserved range compared to both the observed velocities and the Bayesian analyses incorporating that data, though r = 2 again does the best job in this region. Figure 9. Posterior Distributions for Velocities with Region Omitted. Bayesian analysis using no velocity data from the range x = 150 km to x = 200 km; shown are 100 realizations of posterior velocity profiles and posterior means of velocities based on 2000 realizations for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4, as well as the original velocity data. We now address briefly the issue of changing the amount of smoothing of the surface. In a straightforward analysis, where we did no Bayesian or spatial modeling, we fit a very simple class of local smoothing models to the original surface data. We then examined the differences between these fits and a single smoothed function, looking for systematic differences locally in space. The most interesting region corresponds to x ≥ 250 km. Note that from our Bayesian analysis, there are systematic errors in this range: from 250-300 km we overestimate the velocity, and from 300-390 km, we underestimate the velocity. These regions correlate very well with regions in which our smoothed surface derivative underestimates and then overestimates, respectively, the surface derivative obtained from the local fit. However, using such local surface models degrades the velocity model in other regions. Ultimately, we should smooth both the surface and the base interactively. This research was supported by the National Science Foundation, Office of Polar Programs and Probability and Statistics Program, under Grant No. 0229292. Berliner, L.M. (2003). Physical-statistical modeling in geophysics. Journal of Geophysical Research, 108(D24), 8776, doi: 10.1029/2002JD002865. Berliner, L.M., Jezek, K., Cressie, N., Kim, Y., Lam, C.Q., and van der Veen, C.J. (2005). Physical-statistical modeling of ice-stream dynamics. Department of Statistics Preprint No. 759, The Ohio State University. Berliner, L.M., Milliff, R.F., and Wikle, C.K. (2003). Bayesian hierarchical modeling of air-sea interaction. Journal of Geophysical Research, 108(C4), 3104, doi:10.1029/ 2002JC001413. Goldstein, R.M., Engelhardt, H., Kamb, B., and Frolich, R.M. (1993). Satellite radar interferometry for monitoring ice sheet motion: Application to an Antarctic ice stream. Science, 262, 1526-1530. Kamb, B., and Echelmeyer, K.A. (1986). Stress-gradient coupling in glacier flow: I. 
Longitudinal averaging of the influence of ice thickness and surface slope. Journal of Glaciology, 32, 267-284. Paterson, W.S.P. (1994). The Physics of Glaciers, 3rd edn. Butterworth-Heinemann, Woburn, MA. Robert, C.P. and Casella, G. (1999). Monte Carlo Statistical Methods. Springer-Verlag, New York. Byrd Polar Research Center. This page originally appeared at www.stat.osu.edu/~sses/collab_ice.html
Quantization in zero leakage helper data schemes
Joep de Groot1,4, Boris Škorić2, Niels de Vreede3 & Jean-Paul Linnartz1
A helper data scheme (HDS) is a cryptographic primitive that extracts a high-entropy, noise-free string from noisy data. Helper data schemes are used for preserving privacy in biometric databases and for physical unclonable functions. HDSs are known for the guided quantization of continuous-valued sources as well as for repairing errors in discrete-valued (digitized) sources. We refine the theory of helper data schemes with the zero leakage (ZL) property, i.e., the mutual information between the helper data and the extracted secret is zero. We focus on quantization and prove that ZL necessitates particular properties of the helper data generating function: (1) the existence of "sibling points", enrollment values that lead to the same helper data but different secrets, and (2) quantile helper data. We present an optimal reconstruction algorithm for our ZL scheme that not only minimizes the reconstruction error rate but also yields a very efficient implementation of the verification. We compare the error rate to schemes that do not have the ZL property.
Biometric authentication: the noise problem
Biometrics have become a popular solution for authentication or identification, mainly because of their convenience. A biometric feature cannot be forgotten (like a password) or lost (like a token). Nowadays identity documents such as passports nearly always include biometric features extracted from fingerprints, faces, or irises. Governments store biometric data for forensic investigations. Some laptops and smart phones authenticate users by means of biometrics.
Strictly speaking, biometrics are not secret. Fingerprints can be found on many objects, and it is hard to prevent one's face or iris from being photographed. However, storing biometric features in an unprotected, open database introduces both security and privacy risks. Security risks include the production of fake biometrics from the stored data, e.g., rubber fingers [1, 2]. These fake biometrics can be used to obtain unauthorized access to services, to gain confidential information, or to leave fake evidence at crime scenes. We also mention two privacy risks: (1) some biometrics are known to reveal diseases and disorders of the user; (2) unprotected storage allows for cross-matching between databases.
These security and privacy problems cannot be solved by simply encrypting the database, because that would not prevent insider attacks, i.e., attacks or misuse by people who are authorized to access the database. As they legally possess the decryption keys, database encryption does not stop them.
The problem of storing biometrics is very similar to the problem of storing passwords. The standard solution is to store hashed passwords. Cryptographic hash functions are one-way functions, i.e., inverting them to calculate a secret password from a public hash value is computationally infeasible. Even inside attackers who have access to all the hashed passwords cannot deduce the user passwords from them. Straightforward application of this hashing method does not work for biometrics, however. Biometric measurements are noisy, which causes (small) differences between the digital representation of the enrollment measurement and the digitized measurement during verification.
Particularly if the biometric value lies near a quantization boundary, a small amount of noise can flip the discretized value and trigger an avalanche of bit flips at the output of the hash. Helper data schemes The solution to the noise problem is to use a helper data scheme (HDS) [3, 4]. A HDS consists of two algorithms, Gen and Rep. In the enrollment phase, the Gen algorithm takes a noisy (biometric) value as input and generates not only a secret but also public data called helper data. The Rep algorithm is used in the verification phase. It has two inputs: the helper data and a fresh noisy (biometric) value obtained from the same source. The Rep algorithm outputs an estimator for the secret that was generated by Gen. The helper data makes it possible to derive the (discrete) secret reproducibly from noisy measurements, i.e., to perform error correction, while not revealing too much information about the enrollment measurement. The noise-resistant secret can be hashed as in the password protection scheme. A two-stage approach We describe a commonly adopted two-stage approach for real-valued sources, as for instance presented in ([5], Chap. 16). The main idea is as follows. A first-stage HDS performs quantization (discretization) of the real-valued input. Helper data is applied in the "analog" domain, i.e., before quantization. Typically, the helper data consists of a 'pointer' to the center of a quantization interval. The quantization intervals can be chosen at will, which allows for optimizations of various sorts [6–8]. After the first stage, there is typically still some noise in the quantized output. A second-stage HDS employs digital error correction techniques, for instance the code offset method (also known as Fuzzy Commitment) [3, 9] or a variant thereof [10, 11]. Such a two-stage approach is also common practice in communication systems that suffer from unreliable (wireless) channels: the signal conditioning prior to the quantization involves optimization of signal constellations and multidimensional transforms. The discrete mathematical operations, such as error correction decoding, are known to be effective only for sufficiently error-free signals. According to the asymptotic Elias bound ([12], Chap. 17), at bit error probabilities above 10 % one cannot achieve code rates better than 0.5. Similarly, in biometric authentication, optimization of the first stage appears essential to achieve adequate system performance. The design of the first stage is the prime motivation, and key contribution, of this paper. Figure 1 shows the data flow and processing steps in the two-stage helper data scheme. In a preparation phase preceding all enrollments, the population's biometrics are studied and a transform is derived (using well known techniques such as principal component analysis or linear discriminant analysis [13]). The transform splits the biometric vector x̲ into scalar components \((x_{i})_{i=1}^{M}\). We will refer to these components x i as features. The transform ensures that they are mutually independent, or nearly so. Common steps in a privacy-preserving biometric verification scheme At enrollment, a person's biometric x̲ is obtained. The transform is applied, yielding features \((x_{i})_{i=1}^{M}\). The Gen algorithm of the first-stage HDS is applied to each feature independently. This gives continuous helper data \((w_{i})_{i=1}^{M}\) and short secret strings s 1,…,s M which may or may not have equal length, depending on the signal-to-noise ratio of the features. 
All these secrets are combined into one high-entropy secret k, e.g., by concatenating them after Gray-coding. Biometric features are subject to noise, which will lead to some errors in the reproduced secret \(\hat {k}\); hence, a second stage of error correction is done with another HDS. The output of the second-stage Gen algorithm is discrete helper data r and a practically noiseless string c. The hash h(c∥z) is stored in the enrollment database, along with the helper data \((w_{i})_{i=1}^{M}\) and r. Here,z is salt, a random string to prevent easy cross-matching. In the authentication phase, a fresh biometric measurement y̲ is obtained and split into components \((y_{i})_{i=1}^{M}\). For each i independently, the estimator \(\hat {s}_{i}\) is computed from y i and w i . The \(\hat s_{i}\) are combined into an estimator \(\hat {k}\), which is then input into the 2nd-stage HDS reconstruction together with r. The result is an estimator \(\hat {c}\). Finally, \(h(\hat {c}\|z)\) is compared with the stored hash h(c∥z). Fuzzy extractors and secure sketches Special algorithms have been developed for HDSs [4, 6, 8, 9]: Fuzzy extractors (FE) and secure sketches (SS). The FE and SS are special cases of the general HDS concept. They have different requirements, Fuzzy extractor The probability distribution of s given w has to be (nearly) uniform. Secure sketch s given w must have high entropy, but does not have to be uniform. Typically, s is equal to (a discretized version of) x. The FE is typically used for the extraction of cryptographic keys from noisy sources such as physical unclonable functions (PUFs) [14–16]. Some fixed quantization schemes support the use of a fuzzy extractor, provided that the quantization intervals can be chosen such that each secret s is equiprobable, as in [17]. The SS is very well suited to the biometrics scenario described above. In the HDS context, the main privacy question is how much information, and which information, about the biometric x̲ is leaked by the helper data. Ideally, the helper data would contain just enough information to enable the error correction. Roughly speaking, this means that the vector \({\underbar {w}=(w_{i})_{i=1}^{M}}\) consists of the noisy "least significant bits" of x̲, which typically do not reveal sensitive information since they are noisy anyway. In order to make this kind of intuitive statement more precise, one studies the information-theoretic properties of HDSs. In the system as sketched in Fig. 1, the mutual information1 I(C;W̲,R) is of particular interest: it measures the leakage about the string c caused by the fact that the attacker observes w̲ and r. By properly separating the "most significant digits" of x̲ from the "least significant digits", it is possible to achieve I(C;W̲,R)=0. We call this zero secrecy leakage or, more compactly, zero leakage (ZL).2 HDSs with the ZL property are very interesting for quantifying privacy guarantees: if a privacy-sensitive piece of a biometric is fully contained in c, and not in (w̲,r), then a ZL HDS based database reveals absolutely nothing about that piece.3 We will focus in particular on schemes whose first stage has the ZL property for each feature separately: I(S i ;W i )=0. If the transform in Fig. 1 yields independent features, then automatically I(S j ;W i )=0 for all i,j, and the whole first stage has the ZL property. Contributions and organization of this paper In this paper, we zoom in on the first-stage HDS and focus on the ZL property in particular. 
Our aim is to minimize reconstruction errors in ZL HDSs that have scalar input \(x\in \mathbb R\). We treat the helper data as being real-valued, \(w\in \mathbb {R}\), though of course w is in practice stored as a finite-precision value. We show that the ZL constraint for continuous helper data necessitates the existence of "Sibling Points", points x that correspond to different s but give rise to the same helper data w. We prove that the ZL constraint for \(x\in \mathbb R\) implies "quantile" helper data. This holds for uniformly distributed s as well as for non-uniform s. Thus, we identify a simple quantile construction as being the generic ZL scheme for all HDS types, including the FE and SS as special cases. It turns out that the continuum limit of a FE scheme of Verbitskiy et al. [7] precisely corresponds to our quantile HDS. We derive a reconstruction algorithm for the quantile ZL FE that minimizes the reconstruction errors. It amounts to using a set of optimized threshold values, and is very suitable for low-footprint implementation. We analyze, in an all-Gaussian example, the performance (in terms of reconstruction error rate) of our ZL FE combined with the optimal reconstruction algorithm. We compare this scheme to fixed quantization and a likelihood-based classifier. It turns out that our error rate is better than that of fixed quantization, and not much worse than that of the likelihood-based classifier. The organization of this paper is as follows. Section 2 discusses quantization techniques. After some preliminaries (Section 3), the sibling points and the quantile helper data are treated in Section 4. Section 5 discusses the optimal reconstruction thresholds. The performance analysis in the Gaussian model is presented in Section 6. Related work on biometric quantization Many biometric parameters can be converted by a principal component analysis (PCA) into a vector of (near)independent components [18]. For this reason, most papers on helper data in the analog domain can restrict themselves to a one-dimensional quantization, e.g., [4, 6, 18]. Yet, the quantization strategies differ, as we will review below. Figure 2 shows the probability density function (PDF) of the measurement y in the verification phase and how the choice of quantization regions in the verification phase affects the probability of erroneous reconstruction (shaded area) in the various schemes. Examples of adaptating the genuine user PDF in the verification phase. FQ does not translate the PDF; QIM centers the PDF on a quantization interval; LQ uses a likelihood ratio to adjust the quantization regions. a Fixed equiprobable quantization. b Quantization Index Modulation. c Multi-bits based on likelihood ratio [6] Fixed quantization (FQ) The simplest form of quantization applies a uniform, fixed quantization grid during both enrollment and verification. An example for N=4 quantization regions is depicted in Fig. 2a. An unfavorably located genuine user pdf, near a quantization boundary, can cause a high reconstruction error. The inherently large error probability can be mitigated by "reliable component" selection [17]. Only components x i far away from a boundary are selected; the rest are discarded. The indices of the reliable components constitute the helper data. Such a scheme is very inefficient, as it wastes resources: features that are unfavorably located w.r.t. the quantization grid, but nonetheless carry information, are eliminated. 
Furthermore, the helper data leaks information about the biometric, since the intervals have unequal width and therefore unequal probabilities of producing reliable components [19]. Quantization index modulation (QIM) QIM borrows principles from digital watermarking [20] and writing on dirty paper [21]. QIM has quantization intervals alternatingly labeled with 's'0" and "1" as the values for the secret s. The helper data w is constructed as the distance from x to the middle of a quantization interval; adding w to y then offsets the pdf so that the pdf is centered on the interval (Fig. 2b), yielding a significantly lower reconstruction error probability than FQ. The freedom to choose quantization step sizes allows for a trade-off between reconstruction performance and leakage [4]. The alternating labeling was adopted to reduce leakage but sacrifices a large part of the source's entropy. Likelihood-based quantization (LQ) At enrollment, the LQ scheme [6] allocates N quantization regions as follows. The first two boundaries are chosen such that they yield the same probability of y given x, and at the same time enclose a probability mass 1/N on the background distribution (the whole population's distribution). Subsequent quantization intervals are chosen contiguous to the first and again enclose a 1/N probability mass. Finally, the probability mass in the tails of the background distribution is added up as a wrap-around interval, which also holds a probability mass of 1/N. Since the quantization boundaries are at fixed probability mass intervals, it suffices to communicate a single boundary t as helper data to the verification phase. In LQ, the secret s is not equiprobable. The error rates are low, but the revealed t leaks information about s. Dynamic detection-rate-based bit allocation In [22], Lim et al. proposed dynamic genuine interval search (DGIS) as an improvement of the bit allocation scheme of Chen et al. [23]. The resulting scheme has some similarity to our approach in that they both determine discretization intervals per user and store these intervals as helper data. However, their scheme is motivated solely by optimization of the detection rate, whereas in our scheme the optimization is subject to the zero leakage restriction. Applying the DGIS method introduces some additional leakage to the underlying bit allocation scheme. Furthermore, DGIS performs its search for the optimal discretization intervals using a sliding window algorithm, which in general will not succeed in finding the exact optimum. In contrast, in our scheme, we analytically derive the optimal solution from the background distribution. Random variables are denoted with capital letters and their realizations in lowercase. The notation \({\mathbb {E}}\) stands for expectation. Sets are written in calligraphic font. We zoom in on the one-dimensional first-stage HDS in Fig. 1. For brevity of notation the index i∈{1,…,M} on x i ,w i ,s i ,y i and \(\hat {s}_{i}\) will be omitted. The probability density function (PDF) or probability mass function (PMF) of a random variable A is denoted as f A , and the cumulative distribution function (CDF) as F A . We consider \(X\in {\mathbb {R}}\). The helper data is considered continuous, \(W\in \mathcal {W}\subset \mathbb {R}\). Without loss of generality we fix \(\mathcal {W} = [0, 1)\). The secret S is an integer in the range \({\mathcal {S}}=\{0,\ldots,N-1\}\), where N is a system design choice, typically chosen according to the signal to noise ratio of the biometric feature. 
The helper data is computed from X using a function g, i.e., W=g(X). Similarly, we define a quantization function Q such that S=Q(X). The enrollment part of the HDS is given by the pair Q,g. We define quantization regions as follows, $$ A_{s}=\{x\in\mathbb{R}:Q(x)=s\}. $$ ((1)) The quantization regions are non-overlapping and cover the complete feature space, hence form a partitioning: $$ A_{s}\cap A_{t} =\emptyset\quad\text{for}~s\neq t ~;\qquad \bigcup_{s\in\mathcal{S}} A_{s} =\mathbb{R}. $$ We consider only quantization regions that are contiguous, i.e., for all s it holds that A s is a simple interval. In Section 5.3, we will see that many other choices may work equally well, but not better; our preference for contiguous A s regions is tantamount to choosing the simplest element Q out of a whole equivalence class of quantization functions that lead to the same HDS performance. We define quantization boundaries q s = infA s . Without loss of generality, we choose Q to be a monotonically increasing function. This gives supA s =q s+1. An overview of the quantization regions and boundaries is depicted in Fig. 3. Quantization regions A s and boundaries q s . The locations of the quantization boundaries are based on the distribution of x, such that secret s occurs with probability p s In a generic HDS, the probabilities \(\mathbb {P}[S=s]\) can be different for each s. We will use shorthand notation $$ \mathbb{P}[S=s]=p_{s}>0. $$ The quantization boundaries are given by $$ q_{s}=F_{X}^{-1}\left(\sum\limits_{t=0}^{s-1}p_{t}\right), $$ where \(F_{X}^{-1}\) is the inverse CDF. For a Fuzzy extractor, one requires p s =1/N for all s, in which case (4) simplifies to $$ q_{s}^{\text{FE}}=F_{X}^{-1}\left(\frac sN\right). $$ Zero leakage We will work with a definition of the zeroleakage property that is a bit stricter than the usual formulation [7], which pertains to mutual information. This is necessary in order to avoid problems caused by the fact that W is a continuum variable (e.g., pathological cases where some property does not hold on measure-zero subsets of \(\mathcal {W}\)), Definition 3.1. We call a helper data scheme Zero Leakage if and only if $$ \forall_{{\mathcal{V}}\subseteq {\mathcal{W}}}\quad {\mathbb{P}}[S=s|W\in{\mathcal{V}}]= {\mathbb{P}}[S=s]. $$ In words, we define the ZL property as independence between S and W. Knowledge about W has no effect on the adversary's uncertainty about S. ZL implies I(S;W)=0 or, equivalently, H(S|W)=H(S). Here H stands for Shannon entropy, and I for mutual information (see, e.g., ([24], Eq. (2.35)–(2.39)). Noise model It is common to assume a noise model in which the enrollment measurement x and verification measurement y are both derived from a hidden 'true' biometric value z, i.e., X=Z+N e and Y=Z+N v, where N e stands for the noise in the enrollment measurement and N v for the noise in the verification measurement. It is assumed that N e and N v are mutually independent and independent of X and Y. The N e,N v have zero mean and variance \(\sigma _{\mathrm {e}}^{2},\sigma _{\mathrm {v}}^{2}\) respectively. The variance of z is denoted as \({\sigma _{Z}^{2}}\). This is a very generic model. It allows for various special cases such as noiseless enrollment, equal noise at enrollment, and verification, etc. It is readily seen that the variance of X and Y is given by \({\sigma _{X}^{2}}={\sigma _{Z}^{2}}+\sigma _{\mathrm {e}}^{2}\) and \({\sigma _{Y}^{2}}={\sigma _{Z}^{2}}+\sigma _{\mathrm {v}}^{2}\). 
The correlation coefficient ρ between X and Y is defined as \(\rho =({\mathbb {E}}[XY]-{\mathbb {E}}[X]{\mathbb {E}}[Y])/(\sigma _{X} \sigma _{Y})\) and it can be expressed as \(\rho ={\sigma _{Z}^{2}}/(\sigma _{X} \sigma _{Y})\). For zero-mean X,Y, it is possible to write $$ \begin{aligned} Y=\lambda X+R, \quad \text{with }\lambda=\frac{\sigma_{Y}}{\sigma_{X}}\rho \quad\text{and Var}(R)=\sigma^{2}\stackrel{\text{def}}={\sigma_{Y}^{2}}(1-\rho^{2}), \end{aligned} $$ where R is zero-mean noise. This way of expressing Y is motivated by the first and second order statistics, i.e., the variance of Y is \(\lambda ^{2} {\sigma _{X}^{2}}+\sigma ^{2}={\sigma _{Y}^{2}}\), and the correlation between X and Y is \({\mathbb {E}}[XY]/(\sigma _{X}\sigma _{Y})=\lambda {\sigma _{X}^{2}}/(\sigma _{X}\sigma _{Y})=\rho \). In the case of Gaussian Z,N e,N v, the X and Y are Gaussian, and the noise R is Gaussian as well. From (7), it follows that the PDF of y given x (i.e., the noise between enrollment and verification) is centered on λ x and has variance σ 2. The parameter λ is called the attenuation parameter. In the "identical conditions" case σ e=σ v it holds that \(\lambda =\rho ={\sigma _{Z}^{2}}/\left ({\sigma _{Z}^{2}}+\sigma _{\mathrm {v}}^{2}\right)\). In the "noiseless enrollment" case σ e=0 we have λ=1 and \(\rho =\sigma _{Z}/\sqrt {{\sigma _{Z}^{2}}+\sigma _{\mathrm {v}}^{2}}\). We will adopt expression (7) as our noise model. In Section 5.1, we will be considering a class of noise distributions that we call symmetric fading noise. Let X be the enrollment measurement and let Y be the verification measurement, where we adopt be model of Eq. ( 7 ). Let f Y|X denote the probability density function of Y given X. The noise is called symmetric fading noise if for all x,y 1,y 2 it holds that $$ |y_{1}-\lambda x|<|y_{2}-\lambda x| \implies f_{Y|X}(y_{1}|x)>f_{Y|X}(y_{2}|x). $$ Equation (8) reflects the property that small noise excursions are more likely than large ones, and that the sign of the noise is equally likely to be positive or negative. Gaussian noise is an example of symmetric fading noise. Zero leakage: quantile helper data In Section 4.1, we present a chain of arguments from which we conclude that, for ZL helper data, it is sufficient to consider only functions g with the following properties: (1) covering \(\mathcal {W}\) on each quantization interval (surjective); (2) monotonically increasing on each quantization interval. This is then used in Section 4.2 to derive the main result, Theorem 4.8: Zero Leakage is equivalent to having helper data obeying a specific quantile rule. This rule makes it possible to construct a very simple ZL HDS which is entirely generic. Why it is sufficient to consider monotonically increasing surjective functions g The reasoning in this section is as follows. We define sibling points as points x in different quantization intervals but with equal w. We first show that for every w, there must be at least one sibling point in each interval (surjectivity); then, we demonstrate that having more than one is bad for the reconstruction error rate. This establishes that each interval must contain exactly one sibling point for each w. Then, we show that the ordering of sibling points must be the same in each interval, because otherwise the error rate increases. Finally, assuming g to be differentiable yields the monotonicity property. The verifier has to reconstruct x based on y and w. 
In general this is done by first identifying which points \(x\in \mathbb R\) are compatible with w, and then selecting which one is most likely, given y and w. For the first step, we introduce the concept of sibling points. (Sibling points): Two points \(x,x'\in \mathbb {R}\) , with x≠x ′ , are called Sibling Points if g(x)=g(x ′). The verifier determines a set \({\mathcal {X}}_{w}=\{x\in \mathbb {R}| g(x)=w\}\) of sibling points that correspond to helper data value w. We write \({\mathcal {X}}_{w}=\cup _{s\in {\mathcal {S}}}{\mathcal {X}}_{sw}\), with \({\mathcal {X}}_{sw}=\{x\in \mathbb {R}| Q(x)=s\wedge g(x)=w\}\). We derive a number of requirements on the sets \({\mathcal {X}}_{sw}\). Lemma 4.2. ZL implies that $$ \forall_{w\in {\mathcal{W}},s\in{\mathcal{S}}}\quad {\mathcal{X}}_{sw}\neq\emptyset. $$ Proof: see Appendix A.1 Proof of Lemma 4.2. Lemma 4.2 tells us that there is significant leakage if there is not at least one sibling point compatible with w in each interval A s , for all \(w\in \mathcal {W}\). Since we are interested in zero leakage, we will from this point onward consider only functions g such that \({\mathcal {X}}_{sw}\neq \emptyset \) for all s,w. Next, we look at the requirement of low reconstruction error probability. We focus on the minimum distance between sibling points that belong to different quantization intervals. The minimum distance between sibling points in different quantization intervals is defined as $$\begin{array}{*{20}l} D_{\text{min}}(w)&=\min_{s,t\in{\mathcal{S}}:~ s< t} |\min{\mathcal{X}}_{tw}-\max{\mathcal{X}}_{sw}|, \end{array} $$ $$\begin{array}{*{20}l} D_{\text{min}}&=\min_{w\in\mathcal{W}}D_{\text{min}}(w). \end{array} $$ We take the approach of maximizing D min. It is intuitively clear that such an approach yields low error rates given the noise model introduced in Section 3.3. The following lemma gives a constraint that improves the D min. Let \(w\in \mathcal {W}\) and \({\mathcal {X}}_{sw}\neq \emptyset \) for all \(s\in \mathcal {S}\) . The D min(w)is maximized by setting \(|{\mathcal {X}}_{sw}|= 1\) for all \(s\in \mathcal {S}\). Proof: see Appendix A.2 Proof of Lemma 4.4. Lemma 4.4 states that each quantization interval A s should contain exactly one point x compatible with w. From here onward we will only consider functions g with this property. The set \({\mathcal {X}}_{sw}\) consists of a single point which we will denote as x sw . Note that g is then an invertible function on each interval A s . For given \(w\in \mathcal {W}\), we now have a set \({\mathcal {X}}_{w}=\cup _{s\in \mathcal {S}}x_{sw}\) that consists of one sibling point per quantization interval. This vastly simplifies the analysis. Our next step is to put further constraints on g. Let x 1,x 2∈A s and x 3,x 4∈A t ,s≠t , with x 1<x 2<x 3<x 4 and g(x 1)=w 1,g(x 2)=w 2 . Consider two cases, g(x 3)=w 1; g(x 4)=w 2 g(x 4)=w 1; g(x 3)=w 2. Then it holds that $$ \min_{w\in\{w_{1},w_{2}\}}D_{\text{min}}^{\text{case}~2}(w) \leq \min_{w\in\{w_{1},w_{2}\}}D_{\text{min}}^{\text{case}~1}(w). $$ Proof: see Appendix A.3 Proof of Lemma 4.5. Lemma 4.5 tells us that the ordering of sibling points should be the same in each quantization interval, for otherwise the overall minimum distance D min suffers. If, for some s, a point x with helper data w 2 is higher than a point with helper data w 1, then this order has to be the same for all intervals. 
The combination of having a preserved order (Lemma 4.5) together with g being invertible on each interval (Lemma 4.4) points us in the direction of "smooth" functions. If g is piecewise differentiable, then we can formulate a simple constraint as follows. Theorem 4.6. (sign of g ′ equal on each A s ):Let x s ∈A s ,x t ∈A t be sibling points as defined in Def. 4.1 . Let g be differentiable in x s and x t . Then having sign g ′(x s )=sign g ′(x t )leads to a higher D min than sign g ′(x s )≠sign g ′(x t ). Proof: see Appendix A.4 Proof of Theorem 4.5. If we consider a function g that is differentiable on each quantization interval, then (1) its piecewise invertibility implies that it has to be either monotonously increasing or monotonously decreasing on each interval, and (2) Theorem 4.5 then implies that it has to be either increasing on all intervals or decreasing on all intervals. Of course there is no reason to assume that g is piecewise differentiable. For instance, take a piecewise differentiable g and apply a permutation to the w-axis. This procedure yields a function g 2 which, in terms of error probabilities, has exactly the same performance as g, but is not differentiable (nor even continuous). Thus, there exist huge equivalence classes of helper data generating functions that satisfy invertibility (Lemma 4.4) and proper ordering (Lemma 4.5). This brings us to the following conjecture, which allows us to concentrate on functions that are easy to analyze. Conjecture 4.7. Without loss of generality we can choose the function g to be differentiable on each quantization interval \(A_{s},s\in \mathcal {S}\). Based on Conjecture 4.7, we will consider only functions g that are monotonically increasing on each interval. This assumption is in line with all (first stage) HDSs [4, 6, 8] known to us. Quantile helper data We state our main result in the theorem below. (ZL is equivalent to quantile relationship between sibling points): Let g be monotonously increasing on each interval A s , with \(g(A_{0})=\cdots =g(A_{N-1})=\mathcal {W}\) . Let \(s,t\in \mathcal {S}\) . Let x s ∈A s ,x t ∈A t be sibling points as defined in Def. 4.1 . In order to satisfy Zero Leakage we have the following necessary and sufficient condition on the sibling points, $$ \frac{F_{X}(x_{s})-F_{X}(q_{s})}{p_{s}}=\frac{F_{X}(x_{t})-F_{X}(q_{t})}{p_{t}}. $$ Corollary 4.9. (ZL FE sibling point relation): Let g be monotonously increasing on each interval A s , with \(g(A_{0})=\cdots =g(A_{N-1})=\mathcal {W}\) . Let \(s,t\in \mathcal {S}\) . Let x s ∈A s ,x t ∈A t be sibling points as defined in Def. 4.1 . Then for a Fuzzy Extractor we have the following necessary and sufficient condition on the sibling points in order to satisfy Zero Leakage, $$ F_{X}(x_{s})-\frac sN=F_{X}(x_{t})-\frac tN. $$ Immediately follows by combining Eq. (13) with the fact that \(p_{s}=1/N\;\forall s\in \mathcal {S}\) in a FE scheme, and with the FE quantization boundaries given in Eq. (5). Theorem 4.8 allows us to define the enrollment steps in a ZL HDS in a very simple way, $$\begin{array}{*{20}l} s&=Q(x) \\ w&=g(x)= \frac{F_{X}(x)-F_{X}(q_{s})}{p_{s}}. \end{array} $$ Note that w∈[0,1), and \(F_{X}(q_{s})=\sum _{t=0}^{s-1} p_{t}\). The helper data can be interpreted as a quantile distance between x and the quantization boundary q s , normalized with respect to the probability mass p s in the interval A s . An example of such a function is depicted in Fig. 4. 
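As a minimal sketch (not part of the paper's exposition), the enrollment map of Eq. (15) can be computed directly from the feature CDF. The code below assumes a known \(F_X\), illustrated here with a standard Gaussian via SciPy; the function and parameter names are illustrative.

```python
# Minimal sketch of the ZL enrollment (Gen) step of Eq. (15), assuming F_X is known.
# Illustrated with a standard Gaussian feature; names are illustrative, not from the paper.
import numpy as np
from scipy.stats import norm

def gen_zl(x, p, cdf=norm.cdf):
    """Return (s, w) for enrollment value x and interval probabilities p[0..N-1]."""
    cum = np.concatenate(([0.0], np.cumsum(p)))  # F_X(q_0), ..., F_X(q_N)
    u = float(cdf(x))                            # quantile of x
    s = int(np.searchsorted(cum, u, side='right')) - 1
    s = min(max(s, 0), len(p) - 1)               # guard the endpoints u = 0 or 1
    w = (u - cum[s]) / p[s]                      # Eq. (15): normalized quantile distance
    return s, w

# Fuzzy-extractor case (p_s = 1/N): w reduces to N*F_X(x) - s, cf. Eqs. (16)-(17)
s, w = gen_zl(0.37, [0.25, 0.25, 0.25, 0.25])
```

In the FE case (\(p_s = 1/N\)) this reduces to Eq. (17): \(w\) is simply the fractional part of \(N\cdot F_X(x)\) and \(s\) its integer part.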
For a specific distribution, e.g., a standard Gaussian distribution, the helper data generation function is depicted in Fig. 5. In the FE case, Eq. (15) simplifies to $$ F_{X}(x)=\frac{s+w}{N};\qquad w\in\,[0,1) $$ Example of helper data generating function g for N=4 on quantile x, i.e., F X (x). The probabilities of the secrets do not have to be equal; in this case, we have used (p 0,…,p 3)=(0.4,0.2,0.3,0.1) Example of a helper data generating function g for a standard Gaussian distribution, i.e., \(x\sim \mathcal {N}(0,1)\), and N=4. Sibling points x sw are given for s∈{0,…,3} and w=0.3 and the helper data generation function becomes $$ w=g^{\text{FE}}(x)=N\cdot F_{X}(x)-s. $$ Equation (16) coincides with the continuum limit of the Fuzzy extractor construction by Verbitskiy et al. [7]. A similar equation was later independently proposed for uniform key generation from a noisy channel by Ye et al. [25]. Equation (15) is the simplest way to implement an enrollment that satisfies the sibling point relation of Theorem 4.8. However, it is not the only way. For instance, by applying any invertible function to w, a new helper data scheme is obtained that also satisfies the sibling point relation (13) and hence is ZL. Another example is to store the whole set of sibling points \(\{x_{tw}\}_{t\in \mathcal {S}}\); this contains exactly the same information as w. The transformed scheme can be seen as merely a different representation of the "basic" ZL HDS (15). Such a representation may have various advantages over (15), e.g., allowing for a faster reconstruction procedure, while being completely equivalent in terms of the ZL property. We will see such a case in Section 5.3. Optimal reconstruction Maximum likelihood and thresholds The goal of the HDS reconstruction algorithm Rep(y,w) is to reliably reproduce the secret s. The best way to achieve this is to choose the most probable \(\hat {s}\) given y and w, i.e., a maximum likelihood algorithm. We derive optimal decision intervals for the reconstruction phase in a Zero Leakage Fuzzy Extractor. Let Rep(y,w)be the reproduction algorithm of a ZL FE system. Let \(g_{s}^{-1}\) be the inverse of the helper data generation function for a given secret s. Then optimal reconstruction is achieved by $$ \text{Rep}(y,w)=\mathop{\arg\max}_{s\in\mathcal{S}}f_{Y|X}\left(y|g_{s}^{-1}(w)\right). $$ Proof: see Appendix A.6 Proof of Lemma 5.1. To simplify the verification phase we can identify thresholds τ s that denote the lower boundary of a decision region. If τ s ≤y<τ s+1, we reconstruct \(\hat {s}=s\). The τ 0=−∞ and τ N =∞ are fixed, which implies we have to find optimal values only for the N−1 variables τ 1,…,τ N−1 as a function of w. Let f Y|X represent symmetric fading noise. Then optimal reconstruction in a FE scheme is obtained by the following choice of thresholds $$ \tau_{s}=\lambda\frac{g_{s}^{-1}(w)+g_{s-1}^{-1}(w)}{2}. $$ In case of symmetric fading noise we know that $$ f_{Y|X}(y|x)=\varphi(|y-\lambda x|), $$ with φ some monotonic decreasing function. Combining this notion with that of Eq. (18) to find a point y=τ s that gives equal probability for s and s−1 yields $$ \varphi\left(|\tau_{s}-\lambda g_{s-1}^{-1}(w)|\right)=\varphi\left(|\tau_{s}-\lambda g_{s}^{-1}(w)|\right). $$ The left and right hand side of this equation can only be equal for equal arguments, and hence $$ \tau_{s}-\lambda g_{s-1}^{-1}(w)=\pm\left(\tau_{s}-\lambda g_{s}^{-1}(w)\right). $$ Since \(g_{s}^{-1}(w)\neq g_{s-1}^{-1}(w)\) the only viable solution is Eq. (19). 
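As a minimal sketch under the same assumptions, the FE reproduction of Lemma 5.1 and Theorem 5.2 amounts to computing the sibling points from Eq. (16) and thresholding \(y\) against the attenuated midpoints of Eq. (19). The attenuation parameter \(\lambda\) is the one defined in Section 3.3; a standard Gaussian feature is assumed and the names are illustrative.

```python
# Minimal sketch of ZL FE reproduction via the thresholds of Eq. (19).
# Assumes the FE case (p_s = 1/N), a known inverse CDF (standard Gaussian here),
# and the attenuation parameter lam of Section 3.3; names are illustrative.
import numpy as np
from scipy.stats import norm

def rep_zl_fe(y, w, N, lam, inv_cdf=norm.ppf):
    """Return the reconstructed secret s_hat for verification value y and helper data w."""
    # Sibling points g_s^{-1}(w) = F_X^{-1}((s+w)/N), one per quantization interval (Eq. (16))
    siblings = inv_cdf((np.arange(N) + w) / N)
    # Thresholds of Eq. (19): attenuated midpoints of adjacent sibling points
    tau = lam * (siblings[1:] + siblings[:-1]) / 2.0
    # Reconstruct s_hat such that tau_s <= y < tau_{s+1}, with tau_0 = -inf, tau_N = +inf
    return int(np.searchsorted(tau, y, side='right'))

s_hat = rep_zl_fe(y=0.5, w=0.577, N=4, lam=0.9)
```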
Instead of storing the ZL helper data w according to (15), one can also store the set of thresholds τ 1,…,τ N−1. This contains precisely the same information, and allows for quicker reconstruction of s: just a thresholding operation on y and the τ s values, which can be implemented on computationally limited devices. Special case: 1-bit secret In the case of a one-bit secret s, i.e., N=2, the above ZL FE scheme is reduced to storing a single threshold τ 1. It is interesting and somewhat counterintuitive that this yields a threshold for verification that does not leak information about the secret. In case the average of X is zero, one might assume that a positive threshold value implies s=0. However, both s=0 and s=1 allow positive as well as negative τ 1, dependent on the relative location of x in the quantization interval. FE: equivalent choices for the quantization Let us reconsider the quantization function Q(x) in the case of a Fuzzy extractor. Let us fix N and take the g(x) as specified in Eq. (16). Then, it is possible to find an infinite number of different functions Q that will conserve the ZL property and lead to exactly the same error rate as the original scheme. This is seen as follows. For any w∈[0,1), there is an N-tuplet of sibling points. Without any impact on the reconstruction performance, we can permute the s-values of these points; the error rate of the reconstruction procedure depends only on the x-values of the sibling points, not on the s-label they carry. It is allowed to do this permutation for every w independently, resulting in an infinite equivalence class of Q-functions. The choice we made in Section 3 yields the simplest function in an equivalence class. Example: Gaussian features and BCH codes To benchmark the reproduction performance of our scheme, we give an example based on Gaussian-distributed variables. In this example, we will assume all variables to be Gaussian distributed, though we remind the reader that our scheme specifies optimal reconstruction thresholds even for non-Gaussian distributions. We compare the reproduction performance of our ZL quantization scheme with Likelihood-based reproduction (ZLQ-LR) to a scheme with (1) fixed quantization (FQ), see Section 2.1, and (2) likelihood classification (LC). The former is, to our knowledge, the only other scheme sharing the zero secrecy leakage property, since it does not use any helper data. An example with N=4 intervals is depicted in Fig. 6 a. LC is not an actual quantization scheme since it requires the enrollment sample to be stored in-the-clear. However, a likelihood based classifier provides an optimal trade-off between false acceptance and false rejection according to communication theory [24] and should therefore yield the lowest possible error rate. Instead of quantization boundaries, the classifier is characterized by decision boundaries as depicted in Fig. 6 b. Quantization and decision patterns based on the genuine user and impostor PDFs. Ideally the genuine user PDF should be contained in the authentic zone and the impostor PDF should have a large mass outside the authentic zone. Fifty percent probability mass is contained in the genuine user and impostor PDF ellipse and circle. The genuine user PDF is based on a 10 dB SNR. a Fixed equiprobable quantization (FQ). b Likelihood classification (LC). 
c Zero leakage quantization scheme with likelihood based reproduction (ZLQ-LR) A comparison with QIM cannot be made since there the probability for an impostor to guess the enrolled secret cannot be made equal to 1/N. This would result in an unfair comparison since the other schemes are designed to possess this property. Moreover, the QIM scheme allows the reproduction error probability to be made arbitrary small by increasing the quantization width at the cost of leakage. Also, the likelihood based classification can be tuned by setting the decision threshold. However, for this scheme, it is possible to choose a threshold such that an impostor will have a probability of 1/N to be accepted, which corresponds to the 1/N probability of guessing the enrolled secret in a FE scheme. Note that for a likelihood classifier, there is no enrolled secret since this is not a quantization scheme. As can be seen from Fig. 7, the reproduction performance for a ZL scheme with likelihood based reproduction is always better than that of a fixed quantization scheme. However, it is outperformed by the likelihood classifier. Differences are especially apparent for features with a higher signal-to-noise ratio. In these regions, the fixed quantization struggles with a inherent high error probability, while the ZL scheme follows the LC. Reconstruction error probability for Gaussian-distributed features and Gaussian noise for N=4 In a good quantization scheme, the gap between I(X;Y) and \(I(S;\hat S)\) must be small. For a Gaussian channel, standard expressions are known from ([24], Eq. (9.16)). Figure 8 shows that a fixed quantization requires a higher SNR on order to converge to the maximum number of bits, whereas the ZLQ-LR scheme directly reaches this value. Mutual information between S and \(\hat S\) for Gaussian-distributed features and Gaussian noise Finally, we consider the vector case of the two quantization schemes discussed above. We concluded that FQ has a larger error probability, but we now show how this relates to either false rejection or secret length when combined with a code offset method [3]. We assume i.i.d. features and therefore we can calculate false acceptance rate (FAR) and false rejection rate (FRR) based on a binomial distribution. In practice, features can be made (nearly) independent, but they will in general not be identically distributed. However, results will be similar. Furthermore we assume the error correcting code can be applied such that its error correcting properties can be fully exploited. This implies that we have to use a Gray code to label the extracted secrets before concatenation. We used 64 i.i.d. features, each having a SNR of 17 dB, which is a typical average value for biometric features [8, 17]. From these features, we extract 2 bits per feature on which we apply BCH codes with a code length of 127. (We omit one bit). For analysis, we have also included the code (127,127,0), which is not an actual code, but represents the case in which no error correction is applied. Suppose we want to achieve a target FRR of 1·10−3, the topmost dotted line in Fig. 9, then we require a BCH (127,92,5) code for the ZLQ-LR scheme, while a BCH (127,15,27) code is required for the FQ scheme. This implies that we would have a secret key size of 92 bits versus 15 bits. Clearly, the latter is not sufficient for any security application. At the same time, due to the small key size, FQ has an increased FAR. 
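The sketch below is one plausible reading of this binomial calculation, not the paper's exact computation: it assumes i.i.d. bit errors with probability \(\varepsilon\) per code bit for a genuine user, probability 1/2 per bit for an impostor guess, and a BCH\((n,k,t)\) decoder that accepts whenever at most \(t\) of the \(n\) bits are in error.

```python
# Hedged sketch of a binomial FRR/FAR estimate for the second-stage code-offset HDS.
# Assumptions (not taken verbatim from the paper): i.i.d. bit errors, probability eps
# per code bit for a genuine user and 0.5 for an impostor guess; a BCH(n,k,t) decoder
# accepts whenever at most t of the n bits are in error.
from scipy.stats import binom

def frr_far(n, t, eps):
    frr = 1.0 - binom.cdf(t, n, eps)  # genuine user: more than t bit errors -> rejected
    far = binom.cdf(t, n, 0.5)        # impostor: at most t bit errors -> accepted
    return frr, far

# Illustrative numbers only: BCH(127, 92, 5) with a 1% per-bit error rate
print(frr_far(n=127, t=5, eps=0.01))
```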
System performance of ZLQ-LR and FQ In this paper, we have studied a generic helper data scheme (HDS) which comprises the Fuzzy extractor (FE) and the secure sketch (SS) as special cases. In particular, we have looked at the zero leakage (ZL) property of HDSs in the case of a one-dimensional continuous source X and continuous helper data W. We make minimal assumptions, justified by Conjecture 4.7: we consider only monotonic g(x). We have shown that the ZL property implies the existence of sibling points \(\{x_{sw}\}_{s\in \mathcal {S}}\) for every w. These are values of x that have the same helper data w. Furthermore, the ZL requirement is equivalent to a quantile relationship (Theorem 4.8) between the sibling points. This directly leads to Eq. (15) for computing w from x. (Applying any reversible function to this w yields a completely equivalent helper data system.) The special case of a FE (p s =1/N) yields the m→∞ limit of the Verbitskiy et al. [7] construction. We have derived reconstruction thresholds τ s for a ZL FE that minimize the error rate in the reconstruction of s (Theorem 5.2). This result holds under very mild assumptions on the noise: symmetric and fading. Equation (19) contains the attenuation parameter λ, which follows from the noise model as specified in Section 3.3. Finally, we have analyzed reproduction performance in an all-Gaussian example. Fixed quantization struggles with inherent high error probability, while the ZL FE with optimal reproduction follows the performance of the optimal classification algorithm. This results in a larger key size in the protected template compared to the fixed quantization scheme, since an ECC with a larger message length can be applied in the second stage HDS to achieve the same FRR. In this paper, we have focused on arbitrary but known probability densities. Experiments with real data are beyond the scope of this paper, but have been reported in [26, 27]. A key finding there was that modeling the distributions can be problematic, especially due to statistical outliers. Even so, improvements were obtained with respect to earlier ZL schemes. We see modeling refinements as a topic for future research. 1 For information-theoretic concepts such as Shannon entropy and mutual information we refer to e.g. [24]. 2 This concept is not new. Achieving zero mutual information has always been a (sometimes achievable) desideratum in the literature on fuzzy vaults/fuzzy extractors/secure sketches for discrete and continuous sources. 3 An overall Zero-Leakage scheme can be obtained in the final HDS stage even from a leaky HDS by applying privacy amplification as a post-processing step. However, this procedure discards substantial amounts of source entropy, while in many practical applications it is already a challenge to achieve reasonable security levels from biometrics without privacy protection. Appendix A: Proofs A.1 Proof of Lemma 4.2 Pick any \(w\in \mathcal {W}\). The statement \(w\in {\mathcal {W}}\) means that there exists at least one \(s'\in \mathcal {S}\) such that \({\mathcal {X}}_{s'w}\neq \emptyset \). Now suppose there exists some \(s'' \in \mathcal {S}\) with \({\mathcal {X}}_{s''w}=\emptyset \). Then knowledge of w reveals information about S, i.e. \({\mathbb {P}}[S=s|W=w]\neq {\mathbb {P}}[S=s]\), which contradicts ZL. Let g be such that \(|{\mathcal {X}}_{sw}| > 1\) for some s, w. Then choose a point \({\bar x} \in {\mathcal {X}}_{sw}\). 
Construct a function g 2 such that $$ g_{2}(x)\left\{ \begin{array}{ll} = g(x) & \text{if}~x = \bar{x}~\text{or}~x \notin {\mathcal{X}}_{sw}\\ \ne g(x) & \text{otherwise} \end{array} \right.~. $$ The D min(w) for g 2 cannot be smaller than D min(w) for g. We tabulate the D min(w) values for case 1 and 2, The smallest of these distances is x 3−x 2. A.4 Proof of Theorem 4.6 Let 0<ε≪1 and 0<δ≪1. Without loss of generality we consider s<t. We invoke Lemma 4.5 with x 1=x s ,x 2=x s +ε,x 3=x t ,x 4=x t +δ. According to Lemma 4.5 we have to take g(x 2)=g(x 4) in order to obtain a large D min. Applying a first order Taylor expansion, this gives $$ g(x_{s})+\varepsilon g'(x_{s})+{\mathcal{O}}(\varepsilon^{2}) =g(x_{t})+\delta g'(x_{t})+{\mathcal{O}}(\delta^{2}). $$ We use the fact that g(x s )=g(x t ), and that ε and δ are positive. Taking the sign of both sides of (24) and neglecting second order contributions, we get sign g ′(x s )=sign g ′(x t ). The ZL property is equivalent to f W =f W|S , which gives for all \(s\in \mathcal {S}\) $$ f_{W}(w)=f_{W|S}(w|s)=\frac{f_{W,S}(w,s)}{p_{s}}, $$ where f W,S is the joint distribution for W and S. We work under the assumption that w=g(x) is a monotonous function on each interval A s , fully spanning \(\mathcal {W}\). Then for given s and w there exists exactly one point x sw that satisfies Q(x)=s and g(x)=w. Furthermore, conservation of probability then gives f W,S (w,s) dw=f X (x sw ) dx sw . Since the right hand side of (25) is independent of s, we can write \(f_{W}(w)\mathrm {d}w = p_{s}^{-1}f_{X}(x_{sw})\mathrm {d}x_{sw}\) for any \(s\in \mathcal {S}\). Hence for any \(s,t\in \mathcal {S},w\in \mathcal {W}\) it holds that $$ \frac {f_{X}(x_{sw})\mathrm{d}x_{sw}}{p_{s}}=\frac {f_{X}(x_{tw})\mathrm{d}x_{tw}}{p_{t}}, $$ which can be rewritten as $$ \frac{\mathrm{d}F_{X}(x_{sw})}{p_{s}}=\frac{\mathrm{d}F_{X}(x_{tw})}{p_{t}}. $$ The result (13) follows by integration, using the fact that A s has lower boundary q s . Optimal reconstruction can be done by selecting the most likely secret given y,w, $$ \begin{aligned} \text{Rep}(y,w)=\mathop{\arg\max}_{s\in\mathcal{S}}f_{S|Y,W}(s|y,w) =\mathop{\arg\max}_{s\in\mathcal{S}}\frac{f_{Y,S,W}(y,s,w)}{f_{Y,W}(y,w)}. \end{aligned} $$ The denominator does not depend on s, and can hence be omitted. This gives $$\begin{array}{*{20}l} \text{Rep}(y,w)&=\mathop{\arg\max}_{s\in\mathcal{S}}f_{S,Y,W}(s,y,w) \end{array} $$ $$\begin{array}{*{20}l} &=\mathop{\arg\max}_{s\in\mathcal{S}}f_{Y|S,W}(y|s,w) f_{W|S}(w|s)p_{s}. \end{array} $$ We constructed the scheme to be a FE with ZL, and therefore p s =1/N and f W|S (w|s)=f W (w). We see that both p s and f W|S (w|s) do not depend on s, which implies they can be omitted from Eq. (30), yielding \(\text {Rep}({y,w})=\mathop {\arg \max }_{s\in \mathcal {S}}f_{Y|{S,W}}(y|{s,w})\). Finally, knowing S and W is equivalent to knowing X. Hence f Y|S,W (y|s,w) can be replaced by f Y|X (y|x) with x satisfying Q(x)=s and g(x)=w. The unique x value that satisfies these constraints is \(g_{s}^{-1}(w)\). T van der Putte, J Keuning, in Proceedings of the Fourth Working Conference on Smart Card Research and Advanced Applications on Smart Card Research and Advanced Applications. Biometrical fingerprint recognition: don't get your fingers burned (Kluwer Academic PublishersNorwell, MA, USA, 2001), pp. 289–303. T Matsumoto, H Matsumoto, K Yamada, S Hoshino, Impact of artificial "gummy" fingers on fingerprint systems. Opt. Secur. Counterfeit Deterrence Tech.4677:, 275–289 (2002). 
A Juels, M Wattenberg, in CCS '99: Proceedings of the 6th ACM Conf on Comp and Comm Security. A fuzzy commitment scheme (ACMNew York, NY, USA, 1999), pp. 28–36, doi:10.1145/319709.31971410.1145/319709.319714. http://doi.acm.org/10.1145/319709.319714. J-P Linnartz, P Tuyls, in New Shielding Functions to Enhance Privacy and Prevent Misuse of Biometric Templates, ed. by J Kittler, Mark Nixon. Audio- and Video-Based Biometric Person Authentication: 4th International Conference, AVBPA 2003 Guildford, UK, June 9–11, 2003 Proceedings (Springer Berlin HeidelbergBerlin, Heidelberg, 2003), pp. 393–402, doi:10.1007/3-540-44887-X_47. http://dx.doi.org/10.1007/3-540-44887-X_47. P Tuyls, B Škorić, T Kevenaar, Security with Noisy Data: Private Biometrics, Secure Key Storage and Anti-Counterfeiting (Springer, Secaucus, NJ, USA, 2007). C Chen, RNJ Veldhuis, TAM Kevenaar, AHM Akkermans, in Proc. IEEE Int. Conf. on Biometrics: Theory, Applications, and Systems. Multi-bits biometric string generation based on the likelihood ratio (IEEEPiscataway, 2007). EA Verbitskiy, P Tuyls, C Obi, B Schoenmakers, B Škorić, Key extraction from general nondiscrete signals. Inform. Forensics Secur. IEEE Trans.5(2), 269–279 (2010). JA de Groot, J-PMG Linnartz, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. Zero leakage quantization scheme for biometric verification (Piscataway, 2011). Y Dodis, L Reyzin, A Smith, in Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data, ed. by C Cachin, JL Camenisch. Advances in Cryptology - EUROCRYPT 2004: International Conference on the Theory and Applications of Cryptographic Techniques, Interlaken, Switzerland, May 2-6, 2004. Proceedings (Springer Berlin HeidelbergBerlin, Heidelberg, 2004), pp. 523–540, doi:10.1007/978-3-540-24676-3_31. http://dx.doi.org/10.1007/978-3-540-24676-3_31. AV Herrewege, S Katzenbeisser, R Maes, R Peeters, A-R Sadeghi, I Verbauwhede, C Wachsmann, in Reverse Fuzzy Extractors: Enabling Lightweight Mutual Authentication for PUF-Enabled RFIDs, ed. by AD Keromytis. Financial Cryptography and Data Security: 16th International Conference, FC 2012, Kralendijk, Bonaire, Februray 27-March 2, 2012, Revised Selected Papers (Springer Berlin HeidelbergBerlin, Heidelberg, 2012), pp. 374–389, doi:10.1007/978-3-642-32946-3_27. http://dx.doi.org/10.1007/978-3-642-32946-3_27. B Škorić, N de Vreede, The spammed code offset method. IEEE Trans. Inform. Forensics Secur.9(5), 875–884 (2014). F MacWilliams, N Sloane, The Theory of Error Correcting Codes (Elsevier, Amsterdam, 1978). JL Wayman, AK Jain, D Maltoni, D Maio (eds.), Biometric Systems: Technology, Design and Performance Evaluation, 1st edn. (Spring Verlag, London, 2005). B Škorić, P Tuyls, W Ophey, in Robust Key Extraction from Physical Uncloneable Functions, ed. by J Ioannidis, A Keromytis, and M Yung. Applied Cryptography and Network Security: Third International Conference, ACNS 2005, New York, NY, USA, June 7-10, 2005. Proceedings (Springer Berlin HeidelbergBerlin, Heidelberg, 2005), pp. 407–422, doi:10.1007/11496137_28. http://dx.doi.org/10.1007/11496137_28. GE Suh, S Devadas, in Proceedings of the 44th Annual Design Automation Conference. DAC '07. Physical unclonable functions for device authentication and secret key generation (ACMNew York, NY, USA, 2007), pp. 9–14. DE Holcomb, WP Burleson, K Fu, Power-Up SRAM state as an identifying fingerprint and source of true random numbers. Comput. IEEE Trans.58(9), 1198–1210 (2009). 
P Tuyls, A Akkermans, T Kevenaar, G-J Schrijen, A Bazen, R Veldhuis, in Practical Biometric Authentication with Template Protection, ed. by T Kanade, A Jain, and N Ratha. Audio- and Video-Based Biometric Person Authentication: 5th International Conference, AVBPA 2005, Hilton Rye Town, NY, USA, July 20-22, 2005. Proceedings (Springer Berlin HeidelbergBerlin, Heidelberg, 2005), pp. 436–446, doi:10.1007/11527923_45. http://dx.doi.org/10.1007/11527923_45 EJC Kelkboom, GG Molina, J Breebaart, RNJ Veldhuis, TAM Kevenaar, W Jonker, Binary biometrics: an analytic framework to estimate the performance curves under gaussian assumption. Syst. Man Cybernetics, Part A: Syst. Hum. IEEE Trans.40(3), 555–571 (2010). doi:10.1109/TSMCA.2010.2041657. EJC Kelkboom, KTJ de Groot, C Chen, J Breebaart, RNJ Veldhuis, in Biometrics: Theory, Applications, and Systems, 2009. BTAS '09. IEEE 3rd International Conference On. Pitfall of the detection rate optimized bit allocation within template protection and a remedy, (2009), pp. 1–8, doi:10.1109/BTAS.2009.5339046. B Chen, GW Wornell, Quantization index modulation: a class of provably good methods for digital watermarking and information embedding. Inform. Theory IEEE Trans.47(4), 1423–1443 (2001). doi:10.1109/18.923725. MHM Costa, Writing on dirty paper (corresp.)IEEE Trans. Inform. Theory. 29(3), 439–441 (1983). M-H Lim, ABJ Teoh, K-A Toh, Dynamic detection-rate-based bit allocation with genuine interval concealment for binary biometric representation. IEEE Trans. Cybernet., 843–857 (2013). C Chen, RNJ Veldhuis, TAM Kevenaar, AHM Akkermans, Biometric quantization through detection rate optimized bit allocation. EURASIP J. Adv. Signal Process.2009:, 29–12916 (2009). doi:10.1155/2009/784834. TM Cover, JA Thomas, Elements of Information Theory, 2nd edn. (John Wiley & Sons, Inc., 2005). C Ye, S Mathur, A Reznik, Y Shah, W Trappe, NB Mandayam, Information-theoretically secret key generation for fading wireless channels. IEEE Trans. Inform. Forensics Secur.5(2), 240–254 (2010). JA de Groot, J-PMG Linnartz, in Proc. WIC Symposium on Information Theory in the Benelux. Improved privacy protection in authentication by fingerprints (WICThe Netherlands, 2011). JA de Groot, B Škorić, N de Vreede, J-PMG Linnartz, in Security and Cryptography (SECRYPT), 2013 International Conference On. Diagnostic category leakage in helper data schemes for biometric authentication (IEEEPiscataway, 2013), pp. 1–6. Signal Processing Systems group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, 5600 MB, The Netherlands Joep de Groot & Jean-Paul Linnartz Security and Embedded Networked Systems group, Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, 5600 MB, The Netherlands Boris Škorić Discrete Mathematics group, Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, 5600 MB, The Netherlands Niels de Vreede Genkey Solutions B.V., High Tech Campus 69, Eindhoven, 5656 AG, The Netherlands Search for Joep de Groot in: Search for Boris Škorić in: Search for Niels de Vreede in: Search for Jean-Paul Linnartz in: Correspondence to Jean-Paul Linnartz. de Groot, J., Škorić, B., de Vreede, N. et al. Quantization in zero leakage helper data schemes. EURASIP J. Adv. Signal Process. 2016, 54 (2016) doi:10.1186/s13634-016-0353-z Accepted: 20 April 2016 Helper data Secrecy leakage
MIT 2019: Applied Category Theory

Artur Grzesiak

> **Puzzle 4.** List some interesting and important examples of posets that haven't already been listed in other comments in this thread.

A very simple example. Let \(P = \{ (a,b) : a < b \land (a,b) \in \mathbb{R}^2 \}\); then the following relations are posets:

1. \((a,b)R(c,d) \iff a \le c \land b \le d\)
2. \(xRy \iff x \subseteq y\)

Since this is an applied course, one interpretation could be that the first number denotes the start of some process and the second number its end. So in case 1, a process is larger than another if it started after the other started and ended after the other ended, while in case 2, a process is larger than another if the latter started and ended while the first one was active.

Pablo Sanchez Ocal

Daniel@30: A typical way to prove \(A = \cup_{p\in P} A_p\) is by double inclusion. Since the \(A_p\) are all subsets of \(A\), one inclusion is clear. For the other, suppose there is an \(a \in A\) with \(a \not\in \cup_{p\in P} A_p\); what can be said about \(\{a\}\)? SPOILER: \(\{a\}\) is a \(\sim\)-connected and \(\sim\)-closed subset, contradiction.

There can be cycles in a preorder, so it doesn't look ordered. Consider that the complete relation, where \(x \le y\) for all \(x\), \(y\), is a preorder. It's "pre" because further conditions are needed to make an order. P.S. No need to apologize for asking an honest question. Nobody was born knowing about the definitions of preorders.

#56 @David Tanzer:

> It's "pre" because further conditions are needed to make an order.

Thanks a lot. Makes sense to me.

Example of poset: tasks in a feasible schedule, ordered by the dependency (direct or indirect) relationship.
More generally, every DAG induces a poset, where the ordering is by the ancestor relationship (stipulating that X is trivially an ancestor of itself).

Scott Finnie

@Bob Haugen (51): Dataflow as a partially ordered set had also occurred to me (as long as it's an acyclic graph).

Section 1.1 says

> observation is inherently lossy: in order to extract information from something, one must drop the details.

Is that true of observation in general? Or does it conflate observation with the human approach to it, namely abstraction? Abstraction is necessarily a lossy process; it's not clear to me that observation is.

Another question on the lossy nature of observation. Re-stating the final paragraph of section 1.1 from an information-theoretic perspective:

1. Let the initial system \(X\) have information content \(Ix\).
2. Subject \(X\) to a lossy observation \(f\), producing system \(Y\) with information content \(Iy\).
3. By definition, \(Iy < Ix\).

Assuming that's valid, is it reasonable to interpret generative effects as (re)discovery? In other words: we are re-discovering latent properties of \(X\) that were elided by \(f\). An important consequence would seem, therefore, that the information resulting from those generative effects is bounded, pre-determined, and equivalent to \(Ix - Iy\).

Joe Moeller

In a lot of math, the prefix "pre-" is added to a structure you want to talk about but don't quite have, when there is a general construction to get you there. Given a preorder, there is a general construction which turns it into a partial order. Given a presheaf, there is a general construction which turns it into a sheaf (called sheafification :P). That's why it's "pre-" and not "semi-" or some other "half": it's not quite there, but you know it will get there eventually!
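The construction Joe mentions can be tried out on a finite preorder. A rough illustrative Python sketch (an aside; it assumes the preorder is handed to us as an explicit set of pairs that is already reflexive and transitive): merge mutually comparable elements into equivalence classes, then order the classes.

```python
from itertools import product

def posetify(elements, leq):
    """Collapse a finite preorder into a partial order on equivalence classes.

    `leq` is a set of pairs (x, y) meaning x <= y, assumed reflexive and
    transitive.  Elements with x <= y and y <= x are merged into one class
    (a frozenset); the classes inherit the order from any representatives.
    """
    def cls(x):
        # Everything mutually comparable with x.
        return frozenset(y for y in elements
                         if (x, y) in leq and (y, x) in leq)

    classes = {cls(x) for x in elements}
    class_leq = {(a, b) for a, b in product(classes, repeat=2)
                 if (next(iter(a)), next(iter(b))) in leq}
    return classes, class_leq

# Toy preorder: a <= b and b <= a (a two-way cycle), and both sit below c.
elements = {"a", "b", "c"}
leq = {(x, x) for x in elements} | {("a", "b"), ("b", "a"),
                                    ("a", "c"), ("b", "c")}
classes, order = posetify(elements, leq)
print(classes)  # two classes: {a, b} and {c}
print(order)    # the induced order: {a, b} below {c}, plus reflexive pairs
```

On this toy input the classes come out as \(\{a,b\}\) and \(\{c\}\), with \(\{a,b\}\) below \(\{c\}\): the cycle has been collapsed, which is exactly what the "pre-" construction is for.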
Scott Fleischman

What is the motivation for the authors to use a non-standard definition of poset (and, moreover, not call it out as non-standard in the text)? I definitely appreciate the puzzles and responses calling out the discrepancy!

Daniel Cellucci

Pablo @55: is it fair to say, then, that all elements of \(A\) must belong to a \(\sim\)-connected and \(\sim\)-closed subset of \(A\)?

I thought of a few possible examples of posets on the train this morning that are maybe isomorphic to the ones already described but would represent interesting scientific applications:

1. the level of detail in a worldview being generated by a mobile agent performing information/feature fusion on a WSN (section 6.1.4 of Nakamura et al.), where \(\leq\) would be an operator comparing "level of detail" or "breadth of knowledge", used to arbitrate conflicts in decisions drawn from different agents;
2. the dependency graph of spacecraft failures, where \(\leq\) would indicate "leads to" or "increases the likelihood of";
3. circumstantial evidence in a set of experiments toward a conclusion in a life-detection experiment (such as McKay et al.), where \(\leq\) would indicate that a conclusion is "more likely to be caused by life".

References: Nakamura, Eduardo F., Antonio A. F. Loureiro, and Alejandro C. Frery. "Information fusion for wireless sensor networks: methods, models, and classifications." ACM Computing Surveys (CSUR) 39.3 (2007): 9. McKay, David S., et al. "Search for past life on Mars: possible relic biogenic activity in Martian meteorite ALH84001." Science 273.5277 (1996): 924-930.
11, "Contrary to the definition we've chosen, the term poset frequently is used to mean partially ordered set, rather than preordered set", but their wording makes it sound like it's more of a convention that they took one choice on, rather than a complete break with standard definitions. I too am curious why they went this route. If preordered sets are somehow easier to work with, they could use them but still call them the right thing. Comment Source:Scott Fleischman: they do say, at the top of p. 11, "Contrary to the definition we've chosen, the term poset frequently is used to mean partially ordered set, rather than preordered set", but their wording makes it sound like it's more of a convention that they took one choice on, rather than a complete break with standard definitions. I too am curious why they went this route. If preordered sets are somehow easier to work with, they could use them but still call them the right thing. Scott Fleischman, the difference between partial orders and preorders is that partial orders demand equality where preorders only demand equivalence. Equality is sometimes called "evil" in category theory. https://ncatlab.org/nlab/show/principle+of+equivalence. Comment Source:Scott Fleischman, the difference between partial orders and preorders is that partial orders demand equality where preorders only demand equivalence. Equality is sometimes called "evil" in category theory. https://ncatlab.org/nlab/show/principle+of+equivalence. Two more puzzles connected to Lecture 3: Puzzle 6. How do reflexivity and transitivity of \(\le\) follow from the rules of a category, if we have a category with at most one morphism from any object \(x\) to any object \(y\), and we write \(x \le y\) when there exists a morphism from \(x\) to \(y\)? Puzzle 7. Why does any set with a reflexive and transitive relation \(\le\) yield a category with at most one morphism from any object \(x\) to any object \(y\)? That is: why are reflexivity and transitivity enough? Comment Source:Two more puzzles connected to [Lecture 3](https://forum.azimuthproject.org/discussion/1812/lecture-3-chapter-1-posets): **Puzzle 6.** How do reflexivity and transitivity of \\(\le\\) follow from the rules of a category, if we have a category with at most one morphism from any object \\(x\\) to any object \\(y\\), and we write \\(x \le y\\) when there exists a morphism from \\(x\\) to \\(y\\)? **Puzzle 7.** Why does any set with a reflexive and transitive relation \\(\le\\) yield a category with at most one morphism from any object \\(x\\) to any object \\(y\\)? That is: why are reflexivity and transitivity _enough?_ I don't believe that the set of all legal chess positions, with \(\leq\) meaning "can produce by legal moves", is a preorder set, since it does not satisfy the reflexivity condition. For example, if the current position \(x\) is a check in which the only legal response is a pawn move, then \(x\nleq x\). (Maybe this can be fixed by allowing the number of intervening legal moves to be 0, and perhaps that's what you meant anyway.) I hope that's what he meant. Here's what a category theorist would say if they weren't trying to be polite: The most reasonable interpretation of "can produce by legal moves" is "can produce by some natural number of legal moves" - and any mathematician who doesn't count 0 as a natural number ought to be taken out and shot, along with those who don't think the empty set is a set. We take nothing seriously. Very seriously. 
Math works much more smoothly if you treat degenerate cases, like zero and the empty set, with the same respect as nondegenerate ones.

@BobHaugen wrote:

> Question: forgive my ignorance, but why is it called "preorder" and not just "order"? What is the significance of the "pre"?

The original concept of **totally ordered set** or **order**, still dominant today, obeys a bunch of rules:

1. **reflexivity**: \(x \le x\)
2. **transitivity**: \(x \le y\) and \(y \le z\) imply \(x \le z\)
3. **antisymmetry**: if \(x \le y\) and \(y \le x\) then \(x = y\)
4. **trichotomy**: for all \(x, y\) we either have \(x \le y\) or \(y \le x\).

The real numbers with the usual \(\le\) obey all these. Then people discovered many situations where rule 4 does not apply. If only rules 1-3 hold, they called it a **partially ordered set** or **poset**. Then people discovered many situations where rule 3 does not hold either! If only rules 1-2 hold, they called it a **preordered set** or **preorder**.

Category theory teaches us that preorders are the fundamental thing: see Lecture 3. But we backed our way into this concept, so it has an awkward name. Fong and Spivak try to remedy this by calling them posets, but that's gonna confuse everyone even more! If they wanted to save the day they should have made up a beautiful brand new term.
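These four rules are easy to test mechanically on small finite examples. A short illustrative Python sketch (an aside; the test case is the set of subsets of \(\{1,2,3\}\) ordered by inclusion, which satisfies rules 1-3 but not rule 4):

```python
from itertools import chain, combinations, product

def laws(elements, leq):
    """Report which order axioms a finite relation `leq` satisfies.

    `leq` is a set of pairs (x, y) standing for x <= y.
    """
    reflexive = all((x, x) in leq for x in elements)
    transitive = all((x, z) in leq
                     for (x, y1), (y2, z) in product(leq, repeat=2)
                     if y1 == y2)
    antisymmetric = all(x == y for (x, y) in leq if (y, x) in leq)
    total = all((x, y) in leq or (y, x) in leq
                for x, y in product(elements, repeat=2))
    return dict(reflexive=reflexive, transitive=transitive,
                antisymmetric=antisymmetric, total=total)

# Test case: subsets of {1, 2, 3} ordered by inclusion.
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations({1, 2, 3}, r) for r in range(4))]
inclusion = {(a, b) for a, b in product(subsets, repeat=2) if a <= b}

print(laws(subsets, inclusion))
# {'reflexive': True, 'transitive': True, 'antisymmetric': True, 'total': False}
```

So inclusion on a power set is a poset but not a total order: for instance neither \(\{1\} \subseteq \{2\}\) nor \(\{2\} \subseteq \{1\}\).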
@David Tanzer gave some nice answers to Puzzle 4, where I was asking for examples of preorders and posets:

> Example of poset: Any collection of sets, ordered by the inclusion relation.

Yes indeed! Like this:

[Hasse diagram of the power set of a three-element set, ordered by inclusion]

We could even go all out and consider the collection of all sets, ordered by the inclusion relation! This collection is too big to be a set. But we can get around that in various ways, e.g. by considering it as a proper class, or using a universe of "small" sets, which itself is a "large" set.

A closely related example is Ord, the class of all ordinals. Ordinals form a totally ordered class. Ord starts out like this:

$$ 0, 1, 2, 3, \dots, \omega, \omega + 1, \omega + 2, \dots, \omega \cdot 2, \dots, \omega^2, \dots, \omega^3, \dots, \omega^\omega, \dots, \epsilon_0, \dots $$

but it goes on a lot longer. In fact it goes on longer than anything!

Scott Fleischman wrote:

> What is the motivation for the authors to use a non-standard definition of poset (and moreover, not call it out as non-standard in the text)?

They're evil. They're trying to spread confusion and destroy mathematics. They're funded by Putin and they're working for Cambridge Analytica.

Seriously, as Dan Schmidt suggests in 62, they believe - correctly! - that posets are less fundamental than preorders. However, I think it's misguided for them to tackle this by renaming preorders "posets". They should either call them "preorders", like everyone else does, or make up some brand new term if they think "preorder" is too ugly.

I agree about the natural numbers including 0. I'm trying to think of how to justify the strength of my opinion on this, though. Usually I think notation should be adjusted to the situation to facilitate explanation. Why is the definition of the naturals more important, though?
Dan Cellucci gave this answer to Puzzle 4:

> the dependency graph of spacecraft failures, where \(\le\) would indicate "leads to" or "increases the likelihood of".

Indeed, this kind of example is very important in applications! A PERT chart is a way of planning tasks, where the edges indicate dependencies:

[example PERT chart]

There's more to a PERT chart than a mere preorder, as Simon Willerton has explained. We may get into that later. However, any PERT chart gives rise to a preorder.

Scott Finnie: I prefer to tackle these questions of "lossy observation" and "generative effects" a bit later, after we have some machinery built up. Fong and Spivak dangle them as a lure to get people interested in posets. Then they come back and develop the ideas a bit further. But they seem to be drawing on this thesis, which goes much further:

Elie M. Adam, Systems, Generativity and Interactional Effects, Ph.D. thesis, Massachusetts Institute of Technology, July 2017.

So, we may want to dig into that eventually.

Possibly interesting: I didn't see it in Simon Willerton's article, but a PERT chart is usually created from a work breakdown structure by means of https://en.wikipedia.org/wiki/Topological_sorting - just from the name, you would think it should appear in applied cats! Toposorts work on partially ordered sets.
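Since topological sorting came up: Kahn's algorithm takes the dependency relation of a DAG (a PERT chart or work breakdown structure, say) and produces one total order extending the partial order the DAG induces. A short sketch, with a made-up task list:

```python
from collections import defaultdict, deque

def topological_sort(vertices, edges):
    """Kahn's algorithm: return a linear order compatible with `edges`.

    `edges` contains pairs (a, b) meaning "a must come before b".
    Raises ValueError if the graph has a cycle (i.e., is not a DAG).
    """
    indegree = {v: 0 for v in vertices}
    successors = defaultdict(list)
    for a, b in edges:
        successors[a].append(b)
        indegree[b] += 1

    ready = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while ready:
        v = ready.popleft()
        order.append(v)
        for w in successors[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                ready.append(w)

    if len(order) != len(vertices):
        raise ValueError("cycle detected: not a DAG")
    return order

# Hypothetical work breakdown: design before build, build before test, etc.
tasks = ["design", "build", "test", "document"]
deps = [("design", "build"), ("build", "test"), ("design", "document")]
print(topological_sort(tasks, deps))
# e.g. ['design', 'build', 'document', 'test']
```

A toposort is only one of many linear extensions; the partial order itself is what the chart actually encodes.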
Andrew Ballinger

> **Puzzle 4.** List some interesting and important examples of posets that haven't already been listed in other comments in this thread.

git commits appear to form a poset. They don't acquire any of the other properties of ordered sets until after you've resolved all conflicts and are looking at a particular branch on a particular remote. The lack of a total order is one of the biggest complaints newcomers have about git, and the commit log looking at one particular branch is an observation on the commit history (and it is plenty lossy).

MatthewDoty

> They should either call them "preorders", like everyone else does, or make up some brand new term if they think "preorder" is too ugly.

Some people call them quasi orders. In particular, Isabelle/HOL has a quasi_order axiom class in its Orderings theory. Some people working in combinatorics also use this term (here is a short bibliography). Preorder is still far more common. Unlike Isabelle/HOL, Coq uses the conventional term (see Coq.Classes.RelationClasses).

The Agda Standard Library also uses the conventional term(s). See Relation.Binary.

One partial order more, on the set of functions of the natural numbers on themselves. One function is bigger than another, when it is bigger in all natural numbers of the domain except at finite quantity at worst. The order is not total. Plotkin has translated the German papers of Hausdorff on order theory in a beautiful book, and this order appears there nontrivially.

Daniel Michael Cicala

Here are two more examples of preorders. Take your favorite set \(X\).

1. For all \(x\) in \(X\), setting \(x \leq x\) gives you a preorder.
2. For all \(x, y\) in \(X\), setting \(x \leq y\) gives you a preorder.
Vladislav Papayan

> **Puzzle 4.** List some interesting and important examples of posets that haven't already been listed in other comments in this thread.

Here is my take on the question.

1) In an operating system (or a language environment like nodejs): a set of packages. All versions of all packages form a set. There is a dependency relation between some packages, and in other cases there is no dependency relation at all. The dependency relation is uni-directional (in the sense that "package B depends on package A" can be viewed as PackageB < PackageA). Because the relation does not exist across all the entities, this is a partially ordered set.

2) [This example is not correct. See comments below.] In Borrower-Lender business relation. Same thing. There are some borrowers and lenders that can be represented as B1<B2 meaning that B1 borrows from B2. Of course, not everybody borrows from everybody. So just like example 1) above, this binary relation does not exist across all the members. Therefore this is partially ordered, just like the above.

Daniel Michael Cicala wrote:

> Here are two more examples of preorders. Take your favorite set \(X\).
> 1) For all \(x\) in \(X\), setting \(x \leq x\) gives you a preorder.
> 2) For all \(x, y\) in \(X\), setting \(x \leq y\) gives you a preorder.

Nice! This leads to some other puzzles:

**Puzzle 8.** What simple law do Daniel's preorders obey, that does not hold for the real numbers with its usual notion of \(\leq\)?

**Puzzle 9.** What do you call preorders that obey this law?

Clearly Puzzle 9 relies on getting the "right" answer to Puzzle 8, meaning the one that I'm thinking of. There could be other correct answers to Puzzle 8 that I haven't thought of.

Joseph wrote:

> I agree about the natural numbers including 0. I'm trying to think of how to justify the strength of my opinion on this though. Usually I think notation should be adjusted to the situation to facilitate explanation. Why is the definition of the naturals more important though?

This is an interesting question. The natural numbers are the free monoid on one generator. The "bad" natural numbers, without 0, are the free semigroup on one generator. So, this question is related to another: "why are monoids better than semigroups?"
One answer is that, contrary to what you might think, the category of semigroups and semigroup homomorphisms embeds as a subcategory of the category of monoids and monoid homomorphisms in a very nice way. That is, there's a very real sense in which monoids are more general! How does this work? I'll let you ponder this puzzle.

It's a special case of another puzzle: why do we demand that categories have identity morphisms? Shouldn't we focus on more general "semicategories", where we don't demand the existence of identities? When you put the puzzle this way, another answer rises into view: in a semicategory, you can't define what it means for objects to be isomorphic! (Well, okay, you can, since some objects will have identity morphisms, and you can use those when they're available. But then you'll get objects that aren't isomorphic to themselves, which is annoying.)

Let's see if I'm thinking along the same lines as John...

Puzzle 8: \(x \leq y \rightarrow x \cong y\). That is, the only way for elements to be related is to be in the same equivalence class.

Puzzle 9: Partitions.

Jerry Wedekind

Daniel #27: In Exercise 1.12, to prove that the specified union = \(A\) (no more, no less), show that every element in the union is in \(A\) (easy by definition of the sets making up the union), and that every element \(x\) of \(A\) is in the union (by describing the \(\sim\)-closed, \(\sim\)-connected subset of \(A\) which contains \(x\)).

Dan Schmidt wrote:

> Let's see if I'm thinking along the same lines as John...

You're definitely on the right track, but your proposed answer to Puzzle 8 isn't really a law obeyed by the relation \(\le\).
You're writing a law that involves some other concept \(\cong\) which we're supposed to have in mind already. I want a law that you can write down using just the relation \(\le\), sort of like reflexivity and transitivity and those other laws.

Keith Lewis

I thought they were calling lattices posets. Just enjoying following along. Reading a biography of my mathematical hero John von Neumann. What an amazing man. Voevodsky was the closest recent approximation.

> **Puzzle 8.** What simple law do Daniel's preorders obey, that does not hold for the real numbers with its usual notion of \(\leq\)?

Both obey the *symmetry* rule: if \(x \leq y\) then \(y \leq x\).

> **Puzzle 9.** What do you call preorders that obey this law?

They are *equivalence relations*.

Another example of a preorder is the specialization preorder for a topology \(\tau\). This is defined:

$$ x \leq y \Longleftrightarrow x \in \mathbf{cl}(\{y\}) $$

where \(\mathbf{cl}(\cdot)\) is the closure operator for \(\tau\). As Daniel Michael Cicala mentioned:

> Here are two more examples of preorders. Take your favorite set \(X\).
> 1) For all \(x\) in \(X\), setting \(x \leq x\) gives you a preorder.
> 2) For all \(x, y\) in \(X\), setting \(x \leq y\) gives you a preorder.

(1) is the specialization preorder of the discrete topology over \(X\). (2) is the specialization preorder of the trivial topology over \(X\).
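A finite illustration of the specialization preorder (an aside, using a small made-up topology on three points): \(x \le y\) exactly when every open set containing \(x\) also contains \(y\), which is equivalent to \(x \in \mathbf{cl}(\{y\})\).

```python
from itertools import product

def specialization_preorder(points, opens):
    """x <= y iff every open set containing x also contains y.

    `opens` is the collection of open sets of a finite topology,
    each given as a frozenset of points.
    """
    return {(x, y) for x, y in product(points, repeat=2)
            if all(y in U for U in opens if x in U)}

# A small chain topology on {0, 1, 2}: opens are {}, {2}, {1,2}, {0,1,2}.
points = {0, 1, 2}
opens = [frozenset(), frozenset({2}), frozenset({1, 2}), frozenset({0, 1, 2})]
print(sorted(specialization_preorder(points, opens)))
# [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
```

With the discrete topology every singleton is open, so only \(x \le x\) survives; with the trivial topology the only nonempty open set is the whole space, so every pair is related, matching the two examples above.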
Chris Goughnour

Since preorders don't guarantee antisymmetry, I'm a little concerned that meets and joins might not be unique. Is this a case in which category theory only concerns itself with "uniqueness up to isomorphism"? That seems reasonable enough at first, but it's at odds with phrases like "the meet" or "the join", and when considering the proof of Proposition 1.88, \(f\), defined pointwise as a meet, doesn't seem like it's well-defined.

Derrick Lin

John, if you "prefer to tackle these questions of 'lossy observation' and 'generative effects' a bit later", should we skip straight to the poset part? I was able to follow everything up to 1.1.2 and do the exercises, but felt rather unclear about what the author was saying.

Matthew Doty wrote:

> Both obey the *symmetry* rule: if \(x \leq y\) then \(y \leq x\)
>
> They are *equivalence relations*.

Right! This is what I was looking for. But Dan Schmidt was on the right track, because any partition determines an equivalence relation, and vice versa. They are two ways of thinking about the same thing.

This sort of preorder is sort of the extreme opposite case from a total order, since total orders obey the **antisymmetry** rule: if \(x \leq y\) and \(y \leq x\) then \(x = y\). What are all the total orders that are also equivalence relations?

Chris Goughnour wrote:

> Since preorders don't guarantee antisymmetry, I'm a little concerned that meets and joins might not be unique.

You're right, they're not. They're unique when our preorder is a poset. (They may still not exist, but they're unique.)

> Is this a case in which category theory only concerns itself with "uniqueness up to isomorphism"?

Yes. (If we think of preorders as categories, we can even show off and define a poset to be a preorder where isomorphic objects are equal, but we're getting way ahead of ourselves here.)

> That seems reasonable enough at first, but it's at odds with phrases like "the meet" or "the join" and when considering the proof of Proposition 1.88, f, defined pointwise as a meet, doesn't seem like it's well-defined.

I'll have to check this out - thanks!
Category theorists often use the word "the" in a more sophisticated way, where we can talk about "the" object with some property if that property determines the object up to a unique isomorphism. For example, they talk about "the" direct sum of vector spaces, or "the" 1-element set. If we talk this way, we can talk about "the" meet or join of elements in a preorder.

But it would be risky for Fong and Spivak to talk this way without explaining why it's okay. Maybe they just slipped... or maybe they're talking about "the" meet or join in a situation where it's really unique, namely a poset (which they call a "skeletal poset").

Jesus Lopez wrote:

> One partial order more, on the set of functions of the natural numbers on themselves. One function is bigger than another, when it is bigger in all natural numbers of the domain except at finite quantity at worst.

You mean except on a finite set of natural numbers? ("At finite quantity" isn't quite how English-speaking mathematicians talk.)

> The order is not total.

Right: if we take the characteristic function of the even numbers versus the characteristic function of the odd numbers, neither is \(\le\) the other.

> Plotkin has translated the German papers of Hausdorff on order theory in a beautiful book and this order appears there nontrivially.

Hmm, interesting! Gordon Plotkin? I know him, he's a great guy... but I have trouble imagining him translating the German papers of Hausdorff.

There's another great partial order on the set of functions \(f : \mathbb{N} \to \mathbb{N}\) - did anyone here talk about it yet? In this one we say \(f \le g\) if and only if

$$ \mathrm{limsup}_{n \to \infty} \frac{f(n)}{g(n)} \le 1 $$
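A small caveat, assuming the functions take strictly positive values so that the quotients make sense: like the "bigger except at finitely many arguments" relation, this one relates some distinct functions in both directions, so antisymmetry fails and one gets a genuine partial order only after merging mutually comparable functions. For instance, with \(f(n) = n+1\) and \(g(n) = n+2\),

$$ \mathrm{limsup}_{n \to \infty} \frac{f(n)}{g(n)} = \mathrm{limsup}_{n \to \infty} \frac{g(n)}{f(n)} = 1 , $$

so \(f \le g\) and \(g \le f\) although \(f \neq g\).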
David #23: Re: "Exercise: What is the relationship between DAGs and posets?" I think the Hasse diagram of a preordered set is a DAG if the preorder is also antisymmetric, i.e. if the preorder is a poset (a partially ordered set). [additional incorrect statements deleted, though perhaps this is overkill]

David #23 wrote:

> What is the relationship between DAGs and posets?

Jerry just said the Hasse diagram of a poset is a DAG. That sounds correct to me, and it's very nice. He also said that any DAG is the Hasse diagram of a poset. That makes me very nervous.

Sidenote: everyone feel free to edit your comments after you have made them. When the cursor is in your post, a gear will appear at the top right corner of your comment.

> **Puzzle 6.** How do reflexivity and transitivity of \(\le\) follow from the rules of a category, if we have a category with at most one morphism from any object \(x\) to any object \(y\), and we write \(x \le y\) when there exists a morphism from \(x\) to \(y\)?

Answer. Reflexivity follows from the existence of identity morphisms in a category. Transitivity follows from the fact that given morphisms \(f: A \to B\) and \(g: B \to C\), the composite morphism taking \(A \to C\) is provided by the category.

Jerry wrote:

> I no longer believe anything in #92 after the first sentence of my answer.

Good! I think you wrote #93 while I was figuring out how to phrase #94.
Here's a puzzle I considered posing: What is the smallest DAG that is not the Hasse diagram of a poset?

David Tanzer wrote:

> Reflexivity follows from the existence of identity morphisms in a category. Transitivity follows from the fact that given morphisms \(f: A \to B\) and \(g: B \to C\), the composite morphism taking \(A \to C\) is provided by the category.

Right! So reflexivity in a preorder is secretly the identity morphisms in a specially simple sort of category, while transitivity is really just composition! This is one of those nice little "aha!" moments where category theory starts unifying concepts and eliminating redundancy, freeing up neurons for deeper thoughts.

Yes, I meant \( \forall f,g \in \mathbb{N}^\mathbb{N},\ f \leq g \Leftrightarrow \{n \in \mathbb{N} \mid f(n) > g(n)\} \) is finite. And you are right, it happens to be the other Plotkin! But I thank him for the effort no less.

Jules Hedges

Belatedly, yet another preorder: In an \(n\)-player non-cooperative game, outcomes live in \(\mathbb{R}^n\). Player \(i\) has a preference relation \(\geq_i\) on \(\mathbb{R}^n\) given by \(x \geq_i y\) iff \(x_i \geq y_i\), i.e. they prefer to maximise their own payoff and are indifferent about everyone else's. This makes \(\geq_i\) a preorder. Then \(x \cong_i y\) iff \(x_i = y_i\), which means that player \(i\) is indifferent between \(x\) and \(y\).

Vladislav wrote:

> In Borrower-Lender business relation. Same thing. There are some borrowers and lenders that can be represented as B1<B2 meaning that B1 borrows from B2. Of course, not everybody borrows from everybody. So just like example 1) above, this binary relation does not exist across all the members. Therefore this is partially ordered, just like the above.

Is this really a preorder? If you borrow from me and I borrow from Brendan, does that mean you borrow from Brendan? I suppose this could be true in some sense, but do you owe money to Brendan?

This made me realize it's very good to find examples of relations that seem like they might be preorders but aren't.
Well-chosen counterexamples are often just as revealing as examples, since they indicate the limits of a concept.
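The counterexample can also be checked mechanically on a toy relation (made-up names, in the same style of finite check as the earlier sketch): a bare "borrows from" relation is neither reflexive nor transitive, so it is not a preorder as it stands; whether its transitive closure means anything for borrowing is exactly the question raised above.

```python
from itertools import product

people = {"Alice", "Bob", "Brendan"}
borrows_from = {("Alice", "Bob"), ("Bob", "Brendan")}  # Alice owes Bob, Bob owes Brendan

reflexive = all((p, p) in borrows_from for p in people)
transitive = all((x, z) in borrows_from
                 for (x, y1), (y2, z) in product(borrows_from, repeat=2)
                 if y1 == y2)
print(reflexive, transitive)  # False False - not a preorder as given
```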
When should we first teach variables in school math? And how?

From a pedagogical point of view, when is the "right" moment to introduce letters and variables to school children? Let's say we taught arithmetic, the four operations, did computation exercises, or even "disguised" equations, where in place of a variable we use a box and ask what number should go in that box. How should we introduce the practice of using letters, to make the transition from arithmetic to algebra?

The context: I'm a high school teacher. I do my job pretty well and have good results with my high school students, but recently I've been tutoring a fifth grader, and in the future I may also have to teach in middle school (grades 5-8). I find it kind of hard to work with this student I tutor; he has a hard time doing/understanding even simple stuff (simple for me, of course), and I find that variables/letters "intimidate" him. How should I introduce and motivate using letters?

I feel that even high schoolers are sometimes "intimidated" by letters. I once had a student who said, "Teacher, I really loved mathematics in the past, but only when we worked with numbers. Since we were introduced to letters in math, I don't like it anymore." What should we do to avoid this issue? That is: how should we introduce letters/variables so that children understand them better and don't hate algebra?

mathematical-pedagogy reference-request algebra

Benjamin Dickman, amarius8312

- Can you give an example of an equation with letters that intimidates him? – Amy B
- Experienced teachers and textbook writers introduce the use of numbers at an appropriate pace. Done gradually, it can be less intimidating. So (until you are an experienced teacher) I recommend following them.
- Arithmetic is concrete, but algebra is abstract and requires a higher level of intellectual development. That's why California's attempt to mandate 8th-grade algebra was a failure -- many kids just aren't developmentally ready at that age. Was this the approximate age of the kid who said "I don't like it anymore"? My guess is that it's probably a great idea to do simple examples like 2+3=4+___, followed by 2+3=4+x. But keep the complexity minimal until they're older.
- See "How do I teach algebra?" – Jasper
- As one data point, Common Core standards specify this for the 6th grade: corestandards.org/Math/Content/6/EE/B/6 – Daniel R. Collins

You don't need 1-letter variable names to do algebra. Basically, as soon as you start giving story problems to children, you need to start teaching algebra techniques. You can teach them as "easier ways" to solve the problems, which help kids "keep track of things" and "avoid mistakes".

Most computer programming languages (including spreadsheets with named cells) allow using long variable names. Languages like Logo have been successfully taught to primary school children. The key is to clearly distinguish the long variable names from other text that might represent operations. Parentheses can suffice.

After not very long, children will get tired of writing out complete variable names. At this point, you can show them how to:

- Draw a picture if it helps to understand the variables.
- Explicitly abbreviate their variable names.
- At the end of the problem, write a short summary of the answer, and circle it in a cloud.

For example, at the beginning of the problem:

(# apples) = (number of apples)

At the end of the problem: There are 7 apples.

And later on:

(a) = (number of apples)

a = number of apples

Jasper
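Following up on the answer above: one way the long-variable-name idea might look on a computer (a made-up sketch, not something the answer prescribes; Python is used here, though Logo or a spreadsheet with named cells would work just as well) is to write the same story problem first with fully spelled-out names and later with the abbreviations the children choose.

```python
# Fully spelled-out names first ...
apples_at_start = 3
apples_picked = 4
number_of_apples = apples_at_start + apples_picked
print("There are", number_of_apples, "apples.")  # There are 7 apples.

# ... and the same computation once abbreviations feel natural.
a = 3 + 4  # a = number of apples
print("There are", a, "apples.")
```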
I'm not an expert, but I understand the curriculum in Russia has gone back and forth on this point, with a lot of debate.

Generally, grades 1 to 4 are spent on the basic arithmetic operations, while grades 5 and 6 are mainly focused on: familiarizing students with the algebraic properties of numbers in a concrete way, systematizing and extending knowledge of fractions and decimal numbers, and studying signed numbers. Algebra starts in earnest in grade 7. In the earlier grades, letters are used freely to state the general properties of numbers, to state formulas, and to write down simple equations. But they are not used systematically to manipulate expressions and equations and isolate an unknown as is done in algebra.

In the 1970s, there was an effort to introduce algebra in grades 5 and 6 (called grades 4 and 5 at that time, since school then started at age 7). But it was found that by turning the process of solving arithmetic problems into a kind of algorithm, it took away a lot of the opportunity for kids to experiment and develop intuition for numbers. So there was a kind of "back to basics" reaction in the 1980s on this point.

Let me give an example to show what I'm talking about. Your sister had twice as many books as you do until a friend gave her 6 books today, and now she has 24 books. How many books do you have? Before she was given the six books, she had 24 - 6 = 18 books. So you have 18/2 = 9 books.

If you compare this solution with the mathematically equivalent steps in solving $2x + 6 = 24$, the difference is that the steps have a concrete meaning in the first solution, but possibly not in the second. So the first solution helps develop intuition for numbers in a way that the second may not. In fact, the second kind of solution is more likely to make sense to you if you are already proficient in the first kind and can therefore make the connection between the two. There needs to be a phase in which kids are encouraged to think concretely about problems rather than turning them into equations and going on auto-pilot.

However, there is a role for equations where they clarify the meaning of an operation. For example, it's useful to say that subtracting $7$ from $12$ means finding the number $x$ for which $7 + x = 12$. Or that dividing $\frac{4}{7}$ by $\frac{3}{5}$ means finding the number $x$ for which $\frac{3}{5} \times x = \frac{4}{7}$. Using formulas like $d = vt$ or $A = \frac{1}{2}bh$ also accustoms students to the use of variables without turning everything into algebra.
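To spell out the comparison in the book example: the algebraic route performs the same two steps, but on a symbol rather than on concrete quantities,

$$ 2x + 6 = 24 \;\Rightarrow\; 2x = 24 - 6 = 18 \;\Rightarrow\; x = \frac{18}{2} = 9 , $$

i.e. "undo the 6 books she was given, then halve" - exactly the arithmetic solution, now written as operations on an unknown.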
It seems that very young children can solve equations with boxes, question marks, or even letters to represent unknowns, however, they may not be truly engaging with variables in and algebraic way. The question becomes whether they are actually operating on unknown values or using "pure-arithmetic" strategies like counting and working backwards. The authors present several sources claiming to show that young children (at least as young as third grade) can work with symbols for unknowns in an algebraic way, and that algebra and arithmetic are not entirely separate modes of thinking. For anyone wanting answers to the posted question, I'd suggest reading the brief literature review in the above article. The article itself is really focused on using functional relationships to teach arithmetic, rather than variables in particular. The authors seem to feel that there is strong evidence that students can engage with variables in a meaningful way from about age 8 or 9, but that we need to be careful about what we define as "meaningful," since researchers who argue that younger children aren't "developmentally ready" may have somewhat rigid views about meaningful use of variables and the nature of the relationship between arithmetic and algebra. Nat StahlNat Stahl $\begingroup$ Ah, I've looked through this paper (link without paywall) and it seems the best as concerns the +50 bounty. You have only "a quick glance" in this answer now, so probably summarizing the content of the Carraher et al paper more fully would improve matters. [In which case, I will delete this comment...] $\endgroup$ $\begingroup$ Interesting paper, thanks for the link. The thing I tend to be a bit skeptical for findings like this is: While a classroom full of students could communally answer these questions (i.e., perhaps requiring about 1-in-20 to suggest the answer to each part), do most of the students individually have the skill at the end (i.e., does it "scale down" to individuals)? There doesn't seem to be any testing of individuals at the end here to know one way or another. $\endgroup$ The letters intimidate him because he doesn't recognise them or their meaning. Only introduce them when he demonstrates that he is ready. Ask him what his initials are. When he tells you, ask him if he would recognise he was being addressed if you used only his initials. When he (hopefully) says yes, then ask him if distance might have an 'initial' he could suggest so that when you used that letter, he would know you were referring to distance. Hopefully he will suggest 'd'. And so on. You could even introduce the lesson by having every student only address other students using their initials... good fun, but it breaks down the resistance to the dreaded 'letters' :) Then only introduce variables which have immediately obvious 'initials', such as p for price or pizza, h for height, l for length, a for adults, h for hours, etc, depending on the context. Only after he has begun to associate letters with practical measurements and quantities should you mention that 'algebra is just the same... it just uses abbreviations to simplify things, just like his initials. Any talk of x and y and z will re-introduce the fear of symbols. It's better to wait until he gets frustrated with 'all these words' and the idea of an abbreviation is his idea... I often tell students that they have been using algebra 'for ages', only they didn't realise it. Then I tell them a word story like this: Jimmy goes to the bank and gets out enough money for three pizzas. 
On his way to the pizza shop, he finds 8 dollars on the footpath (sidewalk). When he gets to the shop he has 41 dollars. How much was each pizza going to cost him? Almost all students who can add and subtract will get it correct very quickly. Then say "I'm going to write the algebra abbreviation for that story, and write 3p + 8 = 41 on the board. Say "if he hadn't found the 8 dollars, would he have more or less?' They all will say 'less'. Then write 3p = 41 - 8 NB! don't introduce any rules about sides and signs and what you do to one side, etc, same for words like solve and expression and equation. These should only come after the process is understood using things they already know... or you'll turn him off all over again. and ask how much money did he get from the bank? They will say '33' 3p = 33 The say 'So, how much was each pizza going to cost? They will say '11 dollars'. Trust them to refer to what they already do at the shops, not what you have to tell them about dividing, or whatever. Trust that their experience already contains algebraic processes, only they didn't call it that... but they were perfectly good at getting it right. Hint! money and food questions are very familiar to almost every student everywhere. Then say 'That's all algebra is: an abbreviation for words, using a consistent set of rules.' They will get it... and then wait for them to ask about or discover or intuit or teach each other the rules. Teaching is as much about asking and listening and enabling as it is about telling. Then make up similar stories that you read out and they write the abbreviations for. Include things like 'on the way he lost 12 dollars', etc. I have done this for 25+ years and it it has 'worked' almost every time with almost every student, so long as they can add, subtract, multiply and divide competently. If not, then address that first. Only introduce abstract concepts and 'letters' when students are more than comfortable... and you have exhausted most of the other letters and they are ready for some meat in the sandwich. KiwiSteveKiwiSteve I'm a teacher at a small Christian school and so I fill a lot of gaps. I've taught 4th, 5th, 6th, 7th, Alg. 1, and Geometry, as well as tutoring everything in between up to Pre-Calc with great success. My 4th graders just recently got introduced to algebra. I've found that at that age, a lesson dedicated to the subject matter is more than sufficient for mastery of the simple idea that a letter represents some unknown or variable amount. Our curriculum is very good at building from the "boxes" mentioned in other's posts, and lends opportunity for a teacher to get students excited earlier in the year if the class seems ready, by telling them, "What you're really doing is Algebra! The only difference, is that in Algebra, a letter is used instead of a box!" This takes the intimidation factor out of algebra, makes the students feel smart, and excited about furthering their math knowledge. They beg for the lesson on algebra before you get there, and then as a teacher, you have an easy time keeping their attention and making sure correct steps are used in finding the correct answer that will build a foundation in 4th grade that they'll use all the way through Algebra. I've also found while teaching higher grades, that students without this foundation have a much harder time of developing it after they've developed instead, an aversion to algebra in general, but a similar approach can definitely be done in a tutoring scenario. 
Elem-Teach-w-Bach-n-Math-EdElem-Teach-w-Bach-n-Math-Ed $\begingroup$ Let me add this. An answer was given here: "start with algebra the student can do mentally." Although these are the type of problems to start with, students should be required to show work on these. Let them know that they are easy problems intentionally, so that they can know that they did it correctly! In other words, do the problem mentally after showing the correct algebraic steps so that they see that the algebraic steps are valid. Then when they get to $5.7x+7.9=22.7$ where they can't do it mentally, the steps are ingrained, and they know both that the steps work and why. $\endgroup$ – Elem-Teach-w-Bach-n-Math-Ed Many elementary students have problems with the equals sign. They may not be able to solve for the missing number 3+2=__+1. They see = and think "the answer is". Therefore they will fill in the blank with 5, which is of course incorrect. If you add to the difficulty of this problem with a variable 3+2=x+1, many students and teachers will think the problem is that there is an x and not realize the student just doesn't understand how equality works. BTW this is why students often in a multi-step problem may write: 3+2=5-2=3+17=20. I suggest that you introduce variables first where they are the answer to the problem, so that you have 3(77) = x and 3(42-4.1) = x etc. If the student is overwhelmed by a letter, replace it with a box and then show that they are the same. Once the student is comfortable with x as the missing answer, go on to problems where one side of the equality is a single number and the other side is an expression containing a variable. Start with simple problems that the students can do mentally and then proceed to more complex examples. For example 3+x=5 and x-7=2. Proceed to more complex problems such as 3x+2=8. Finally make sure the students understand how equality works, by giving problems such as 3+11 = __*7. If the student understands that, then you can proceed to replacing these examples with variables. Above all be patient. Amy BAmy B $\begingroup$ The issue around the equal sign is discussed in an earlier MESE thread; see, in particular, the answer of D. Hast here. But what does the mathematics education literature say about introducing variables and the age/stage at which it is appropriate? $\endgroup$ $\begingroup$ @BenjaminDickman I was responding to the OP's comment that he is tutoring a 5th grader who is having a hard time understanding variables. He also asked how we should introduce the practice of using letters to make the transition from arithmetic to algebra. I was suggesting that he first make sure that the student understands the equal sign, because without that basis the student will not understand variables. $\endgroup$ I would start early with "boxes" -- just empty squares and playing "what goes in the box?" for simple equations. Works well for kids as young as second grade and definitely by 4th. It makes it much easier to introduce letters as variables later, with the understanding that they act like containers. And, as someone else said, you can bridge the gap by modeling story problems with known quantities (at least that's what my son gets in 4th grade). I should note, I've used boxes for adults in GED class, equally effective. They really liked "what did I start with?" where I'd write [BOX]+7-2+4-9+6=12 or something and have them try to work out a general method.
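To spell out one possible "general method" for that last example (the layout below is just one way a student might organize it, with the same numbers as above):
$$\Box + 7 - 2 + 4 - 9 + 6 = 12 \;\Longrightarrow\; \Box + (7 - 2 + 4 - 9 + 6) = 12 \;\Longrightarrow\; \Box + 6 = 12 \;\Longrightarrow\; \Box = 12 - 6 = 6,$$
and the same steps carry over unchanged once the box is renamed: $x + 6 = 12$, so $x = 6$.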
1. Overview
2. Baseband Operation
2.1. Baseband Transmit (Tx) Operation
Baseband Tx Tuning
Baseband Tx Mechanics
2.2. Baseband Receive (Rx) Operation
Baseband Rx Tuning
Baseband Rx Mechanics
3. Midband Operation
3.1. Midband Transmit (Tx) Operation
Midband Tx Tuning
Midband Tx Mechanics
3.2. Midband Receive (Rx) Operation
Midband Rx Tuning
Midband Rx Mechanics
4. Highband Operation
4.1. High-band Transmit Operation
Highband Tx Tuning
Highband Tx Mechanics
4.2. Highband Receive Operation
Highband Rx Tuning
Highband Rx Mechanics
5. Basic Definitions and Worked Examples
5.1 Rx Tuning
5.2 Tx Tuning
5.3 Example of Tx Tuning on Crimson TNG
6. Glossary of Bandwidth Terminology and Explanations

SDR Frequency Tuning Mechanics

This application note discusses the frequency tuning mechanics associated with Per Vices Software Defined Radio (SDR) products for both Tx and Rx functions. Due to the high instantaneous bandwidth of our products, along with tuning ranges that vary from DC to over 18 GHz, our hardware architecture supports various mechanisms for frequency tuning. The specific mechanisms used for tuning to various frequencies for various device configurations are discussed below, along with the expected results. The diagrams for Rx and Tx mechanics are not to scale, but are simply used to illustrate how Per Vices shifts signals (i.e. up-converts and down-converts signals) within the bandwidths of different components of the radio chain. This application note has been broken down into separate sections for baseband, midband, and highband tuning for both Tx and Rx. The specific frequency range associated with a given tuning methodology is product dependent; we've aimed to provide specific ranges here. Though we've provided rough operational bounds, please refer to the device product page for authoritative information.

A signal may be shifted (i.e. downconverted/upconverted) either by a mixer, by the convertor, or within the FPGA. As each element operates independently of the others, it is possible to shift the frequency at a number of different places, within both the analog and digital domains. This application note discusses the specific tuning mechanisms, along with the relevant frequency mechanics, for each mode of operation (baseband, midband and high-band) for both Tx and Rx.

We use the term Tx mechanics to describe the process by which a low frequency signal is shifted up from baseband, i.e. to an intermediate frequency, that is appropriate for conversion using a digital-to-analog convertor (DAC), before being transmitted via power amplifier and antenna. In this chain, a specific signal path for transmission is selected based on the frequency of the signal (different signal paths are optimized for different frequency ranges). Within each radio chain, the frequency may be mixed by analog or digital mixers, effectively shifting the frequency up to what will be transmitted. Conversely, we use the term Rx mechanics to describe the process by which a high frequency signal is shifted down toward baseband, i.e. to an intermediate frequency, that is appropriate for sampling by an analog-to-digital convertor (ADC), as well as the subsequent frequency translations that may occur once the signal is digitized. Referring to Figure 1.1, we see a generalized overview of the frequency mechanics process.
Figure 1.1: Frequency Mechanics Process Overview The baseband Tx and Rx chains connect the external SMA input or output to/from the FPGA and converter device through a series RFE amplifiers, filters, mixers, switches, and anti-aliasing or anti-imaging filters. The baseband chain is automatically enabled by configuring the SDR to frequencies below those supported by the analog up/down convertor, and within the range shown in Table 1a. The baseband frequency chains do not include any analog mixing stages. Table 1: Baseband Frequency Bands Min Frequency Max Frequency Crimson DC 120 MHz Cyan DC 500 MHz (may be extended to 800 MHz) During baseband Tx tuning, the external data port is connected to the FPGA where re-sampling and digital up conversion occur on the FPGA with a CORDIC mixer. There are no analog up- or down- conversion, or mixing stage facilities used when operating in baseband Tx mode. During Tx operation, data originates from the host user application, passes through the FPGA-based interpolation routine, FPGA-based digital up conversion (using an numerically controlled oscillator (NCO)), before being converted into JESD204B serial data and sent to the DAC. Any user applied frequency shifting occurs after interpolation, prior to sending the JESD serial data to the DAC, as shown in Figure 2.1.1 This also shows the baseband radio front end (RFE) configuration, where the DAC converts the JESD204B data into an analog signal, and passes the signal through anti-imaging filters, before reaching the SMA port which can be connected to a power amplifier and antenna to transmit the signal. Figure 2.1.1: Baseband Transmission (Tx) Signal Flow As the signals progress through the various components/stages of the SDR, the signal's frequencies and bandwidths that we are dealing with change. Now that we have a good understanding of our circuitry, let's look at what is happening to the frequency of the signals at each of the SDR stages. Figure 2.1.2: Baseband Tx Frequency Mechanics Figure 2.1.2. shows what needs to takes place in our radio to enable the transmission of an analog signal: Data Port Generated Samples: the left side of the image shows three waveforms (A, B and C) that we might be looking to transmit. Prior to the samples getting generated, the user defines a sample rate, which we will call user bandwidth (UBW). The sample rate serves to specify the user bandwidth; an interval [-UBW/2 , UBW/2] which is centred around 0 Hz. Since these waveforms will be offset by the NCO frequency at a later stage, the initial sine waves may (in some cases) have a negative frequency (like signal A in the diagram). Once generated, the samples will be sent to the SDR for further processing. However, it is important to note that sometimes not all of the samples in the user bandwidth will get transmitted (this will become more clear at later stages). Interpolation: after we have generated the user samples, the next step is to perform interpolation used to obtain a larger bandwidth. This new bandwidth specifies a larger interval (also centred around 0 Hz) that is defined by the sample rate of the device (325 MSPS for Crimson TNG, 1 GSPS for Cyan). The user bandwidth is always smaller than the conversion bandwidth. Interpolating the samples to a larger bandwidth is crucial for the next stage where the digital up conversion takes place. Digital Mixing(CORDIC): after interpolating the signal to the conversion bandwidth of the device, the FPGA can proceed to upconvert the samples. 
Both Crimson TNG and Cyan have CORDIC digital mixers that are capable of both up-conversion and down-conversion (DDC, DUC). Up-conversion is accomplished by mixing the Tx samples with a local oscillator found in the FPGA (set to what is referred to as the NCO frequency). This causes the frequency of all our signals to increase by the NCO frequency. Using the larger conversion bandwidth that we obtained from interpolation ensures that we can capture more of our mixing products. The signals are then serialized and framed via a JESD204B serial link. In some cases, mixing our generated signal with the NCO frequency results in a frequency that does not lie within the user bandwidth. Here the mixing product will still have an image that is rotated, but outside the analog transmit bandwidth: for baseband signals, we discard the negative frequency component (C), so this image isn't transmitted at the SMA. Digital-to-Analog Converter (DAC): the DAC then converts the serial data to its analog form. The production of images is a fundamental and expected outcome of the Nyquist theorem and so we will still have images of our original signal at each Nyquist zone (i.e. at each multiple of our conversion bandwidth, we are likely to see images of the signal at their corresponding offsets). Anti-Imaging filters: these are used to suppress the images that would typically show up at higher Nyquist zones (multiples of the conversion bandwidth). It has a bandwidth that is ~80% of the DAC bandwidth. RF Gain: the final analog signal now has gain added to it. SMA Edge Launch: the gained analog signal can now be transmitted via power amplifier and antenna. During baseband Rx tuning, the external SMA port is connected to the RFE through a series of amplifiers, filters and switches before reaching the ADC. In addition, re-sampling and digital down conversion occur on the FPGA using an NCO (CORDIC mixer). There are no analog down conversion facilities used when operating in baseband mode. The ADC converts the analog signal to digital, which is then sent to an FPGA which provides framing of the data before being sent over the data port to the host user application. During Rx operation, a signal originates from the SMA, passes through amplifiers, then anti-aliasing filters, followed by digitization by the ADC into digital JESD204B serial data. This digital signal may then be shifted on either the ADC or FPGA, where is it parsed and passed to the host application over the Ethernet ports. Any user applied frequency shifting occurs after digitization, but prior to decimation, as shown in Figure 2.2.1. Figure 2.2.1: Baseband Receiver (Rx) Signal Flow For reception (SDR to user application), the receiver data flow is a mirror of the transmit data-flow we previously discussed. The analog signal incident to the SMA is amplified through a low noise amplifier (LNA), passes through an anti-aliasing filter, and is sampled at the ADC input channel. The digital signal is then passed to the FPGA, where it undergoes digital down conversion using an NCO (CORDIC), and then decimation, prior to being sent to the user application. During receive operation, frequency shifting occurs only prior to decimation. Figure 2.2.2: Baseband Rx Frequency Mechanics Figure 2.2.2 shows what needs to takes place in our radio to receive an analog signal: RF Input/Gain: the first part of the image (see left) shows several pure sine waves (A-E) as they are picked up by the antenna and pass through initial gain amplifier. 
Lowpass Filter: a low pass filter cuts off any frequencies above a threshold (product dependent). Anti-aliasing filter: an analog anti-aliasing filter aims to restrict the incoming signal to only those that fall in the converter's bandwidth (an AAF is ~80% of the converter bandwidth). This is important because the ADC's capture bandwidth is limited by the fixed sampling rate; signals outside the capture bandwith would otherwise be aliased back into the digitized signal, which would be undesirable. The converter bandwidth specifies a large interval centred around 0 Hz that is defined by the sample rate of the device (325 MSPS for Crimson TNG, 1 GSPS for Cyan). We will shrink our reception interval in later stages so that we only capture the signals we are "tuned" to. Analog-to-digital converter (ADC): this converts the incoming signals into a digital form. Digital Mixing (CORDIC): at this point, the converter BW is large. To prepare for decimation, the samples are digitally down-converted. This decreases the frequency of all the signals from the CORDIC mixer on the FPGA. Both Crimson TNG and Cyan have CORDIC digital mixers that are capable of both up-conversion and down-conversion (DDC, DUC). Down-conversion is accomplished by mixing the Rx samples with a local oscillator found in the FPGA (set to what is referred to as the NCO frequency). Note that after this takes place, some of the frequencies might be negative. Decimation: prior to the samples being received, the user defines a sample rate (call it user bandwidth). The sample rate specifies this user bandwidth (UBW), an interval [-UBW/2 , UBW/2] which is centred around 0 Hz. Decimation ensures that all the incoming signals fall within the user bandwidth. Application: the sampled data can now be sent over Ethernet as frames and used for a specific application. The midband Tx and Rx chain connects the external SMA input or output to/from the FPGA and converter device through the RFE containing amplifiers, filters, mixers, switches, and anti-aliasing or anti-imaging filters. The midband chain is automatically enabled by configuring the SDR frequencies within the midband range, shown in Table 2 below. Table 2: Midband Frequency Bands Crimson 120 MHz 6000 MHz Cyan 500 MHz 6000 MHz When compared to the baseband signal flow, there are two notable differences: The converter devices operate in dual channel mode to support complex (IQ) operation. Unlike the baseband case, this avoids discarding negative frequency (or the imaginary component) from samples. The RFE includes a complex, analog, mixing stage. During midband Tx operation, data originates from the host user application, passes through the FPGA-based interpolation routine, FPGA-based digital up conversion (using an NCO), before being converted into JESD serial data and sent to the DAC. Any user applied frequency shifting occurs after interpolation, prior to sending the JESD204B serial data to the DAC, as shown in Figure 3.1.1. This figure also shows the baseband radio front end (RFE) configuration, where the DAC converts the JESD204B data into an analog signal, and passes the signal through anti-imaging filters, IQ upconverter/complex mixer, before reaching the SMA port which can be connected to a power amplifier and antenna to transmit the signal. Figure 3.1.1: Midband Transmission (Tx) Signal Flow Figure 3.1.2 shows what needs to takes place in our radio to enable the transmission of an analog signal in midband. 
Note how negative frequencies are preserved after the interpolation/complex digital mixing. Figure 3.1.2: Midband Tx Frequency Mechanics Data Port Generated Samples: the left side of the image shows three waveforms (A,B and C) that we might be looking to transmit. Prior to the samples getting generated, the user defines a sample rate, which we will call user bandwidth (UBW). The sample rate serves to specify the user bandwidth; an interval [-UBW/2 , UBW/2] which is centred around 0 Hz. Since these waveform will be offset by the NCO frequency at a later stage, the initial sine waves may (in some cases) have a negative frequency (like signal A in the diagram). Once generated, the samples will be sent to the SDR for further processing. However it is important to note that sometimes not all of the samples in the user bandwidth will get transmitted (this will become more clear at later stages). Interpolation: after we have generated the user samples, the next step is to perform interpolation to obtain a larger bandwidth. This new bandwidth specifies a larger interval (also centred around 0 Hz) that is defined by the sample rate of the device (325 MSPS for Crimson TNG, 1 GSPS for Cyan). The user bandwidth is always smaller than the conversion bandwidth. Interpolating the samples to a larger bandwidth is crucial for the next stage where the digital up-conversion takes place. Digital Mixing(CORDIC): after interpolating the signal to the conversion bandwidth of the device, the FPGA can proceed to upconvert the samples. Both Crimson TNG and Cyan have CORDIC digital mixers that are capable of both up-conversion and down-conversion (DDC, DUC). Up-conversion is accomplished by mixing the Tx samples with a local oscillator found in the FPGA (set to what is referred to as the NCO frequency). This causes all the frequency of all our signals to increase. Using the larger conversion bandwidth that we obtained from interpolation ensures that we can capture more of our mixing products. (The squiggley line on the frequency axis indicates that the signals (A,B and C) are actually shifted to a much higher bandwidth, and the diagram is for illustrative purposes only). In some cases, mixing our generated signal with the NCO frequency results in a frequency that does not lie within the user bandwidth. Here the mixing product will still have an image that is rotated to fit within our capture bandwidth. Digital-to-analog Converter (DAC): the DAC then converts the signals to their analog form. The production of images is a fundamental and expected outcome of the Nyquist theorem and so we will still have images of our original signal at each Nyquist zone (i.e. at each multiple of our conversion bandwidth, we are likely to see images of the signal at their corresponding offsets). Analog (IQ) Upconverter/Mixer: this mixer shifts the signals to a higher frequency (upconverts the signal with a frequency synthesizer that generates an LO) at a much greater bandwidth than the converter, and is complex. RF Gain: the final analog signal has gain added to it. (The squiggley line on the frequency axis indicates that the signals (A,B and C) are actually shifted to a much higher bandwidth, and the diagram is for illustrative purposes only). During midband Rx tuning, the external SMA port is connected to the RFE through a series of amplifiers, filters and switches before reaching the ADC. In addition, re-sampling and digital down conversion occur on the FPGA using an CORDIC mixer. 
There is further analog down conversion facilities used when operating in midband mode. The ADC converts the analog signal to digital, which is then sent to an FPGA which provides framing of the data before being sent over the data port to the host user application. During midband Rx operation, a signal originates from the SMA, goes through a low-pass filter, before passing through amplifiers, then anti-aliasing filters, followed by digitization by the ADC into digital IQ pairs and transmitted using JESD204B serial data protocol. This digital signal may then be shifted on either the ADC or FPGA, where is it parsed and passed to the host application over the Ethernet ports. Any user applied frequency shifting occurs after digitization, but prior to decimation, as shown in Figure 3.2.1. Figure 3.2.1: Midband Receiver (Rx) Signal Flow Figure 3.2.2: Midband Rx Frequency Mechanics Figure 3.2.2 shows what needs to takes place in our radio to receive an analog signal. RF Input/Gain: the first part of the image (see left) shows several pure sine waves (A-E) as they are picked up by the antenna and passing through an initial gain amplifier. Low-pass Filter: the LPF then cuts off and attenuates all signals above 6GHz. Analog RF Mixer: the signal is then further downconverted using a frequency synthesizer (generates an LO). (The squiggley line on the frequency axis indicates that the signals (A,B and C) are actually shifted to a much lower bandwidth, and the diagram is for illustrative purposes only). Anti-aliasing filter: an analog anti-aliasing filter aims to restrict the incoming signals to only those that fall in the converter's domain (AAF bandwidth is ~80% that of the ADCs). This is important because the ADC has a finite operating range. The converter bandwidth specifies a large interval centred around 0 Hz that is defined by the sample rate of the device (325 MSPS for Crimson TNG, 1 GSPS for Cyan). We will shrink our reception interval in later stages so that we only capture the signals we are "tuned" to. Digital Mixing (CORDIC): at this point, the converter BW is large. To prepare for decimation, the samples are digitally down-converted. This decreases the frequency of all the signals by the NCO frequency set on the FPGA. Both Crimson TNG and Cyan have CORDIC digital mixers that are capable of both up-conversion and down-conversion (DDC, DUC). Down-conversion is accomplished by mixing the Rx samples with a local oscillator found in the FPGA (set to what is referred to as the NCO frequency). Note that after this takes place, some of the frequencies might be negative. (The squiggley line on the frequency axis indicates that the signals (A,B and C) are actually shifted to a much lower bandwidth, and the diagram is for illustrative purposes only). Application: the sampled data can now be used for a specific application. To transmit and receive high frequency waves, we need to make use of an additional local oscillator and mixer on the LOgen board. This allows for even higher frequency operation, since our architectures implement a superheterodyne architecture, as discussed on this wikipedia page. This effectively means that the analog frequency conversion is performed in two stages, using a complex base-band stage that is mixed, using an IQ convertor, to a known, real valued IF stage, which is subsequently converted to a final RF stage. The IF stage is implemented using the midband mechanics, as previously discussed in midband mechanics. 
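As a rough numerical sketch of this two-stage frequency plan (in Python, with purely hypothetical values: the LO step sizes and the exact split between the high-band LO, the IF LO, and the NCO are product- and configuration-dependent and are not taken from a specific device):

```python
# Two-stage (superheterodyne) high-band plan: the RF centre frequency is reached
# as f_RF = f_LO,HF + f_LO,IF + f_NCO (the relation given in Section 5), assuming
# the user's sample stream is centred at 0 Hz. All values are hypothetical.

def highband_offset(f_lo_hf, f_lo_if, f_nco):
    """Total frequency offset applied in high-band mode, in Hz."""
    return f_lo_hf + f_lo_if + f_nco

f_target_rf = 10.0e9   # hypothetical target carrier: 10 GHz
f_lo_if     = 1.8e9    # hypothetical IF-stage LO (within the IF band-pass window)
f_lo_hf     = 8.0e9    # hypothetical high-band LO setting
f_nco       = f_target_rf - (f_lo_hf + f_lo_if)   # remainder shifted digitally

assert abs(highband_offset(f_lo_hf, f_lo_if, f_nco) - f_target_rf) < 1.0
print(f"NCO shift = {f_nco / 1e6:.1f} MHz")   # -> NCO shift = 200.0 MHz
```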
The highband Tx and Rx chains connect the external SMA input or output to/from the FPGA and converter device through the RFE containing amplifiers, filters, mixers, switches, and anti-aliasing or anti-imaging filters. The highband chain is automatically enabled by configuring the SDR frequencies within the highband range, shown in Table 3 below. Table 3: Highband Frequency Bands — Cyan: 6000 MHz (min) to 18000 MHz (max). Figure 4.1.1: Highband Transmission (Tx) Signal Flow As the samples progress through the various components in the SDR, the frequencies and bandwidths that we are dealing with change. Now that we have a good understanding of our circuitry, let's look at what is happening to the frequency at each of these steps, as shown in the figure below: Figure 4.1.2: Tx Highband Mechanics Data Port Generated Samples: the left side of the figure shows three waveforms (A, B and C) from the data port which we want to transmit. Prior to the samples getting generated, the user defines a sample rate, which we will call user bandwidth (UBW). The sample rate serves to specify the user bandwidth; an interval [-UBW/2 , UBW/2] which is centred around 0 Hz. Once the samples are generated, they are sent to the SDR for further processing. Interpolation: after we have generated the user samples, the next step is to perform interpolation to obtain a larger bandwidth. This new bandwidth specifies a larger interval (also centred around 0 Hz) that is defined by the sample rate of the device (325 MSPS for Crimson TNG, 1 GSPS for Cyan). The user bandwidth is always smaller than the conversion bandwidth. Interpolating the samples to a larger bandwidth is crucial for the next stage where the digital upconversion takes place. CORDIC Mixing: after interpolating the signal to the conversion bandwidth of the device, the FPGA can proceed to upconvert the samples. Both Crimson TNG and Cyan have CORDIC digital mixers that are capable of both up-conversion and down-conversion (DDC, DUC). Up-conversion is accomplished by mixing the Tx samples with a local oscillator found in the FPGA (set to what is referred to as the NCO frequency). This causes the frequency of all our signals to increase. Using the larger conversion bandwidth that we obtained from interpolation ensures that we can capture more of our mixing products. This also produces image signals via image wrap-around (signal C). The production of images is a fundamental and expected outcome of the Nyquist theorem, and so we will still have images of our original signal at each Nyquist zone (i.e. at each multiple of our conversion bandwidth, we are likely to see images of the signal at their corresponding offsets). (The squiggly line on the frequency axis indicates that the signals (A, B and C) are actually shifted to a much higher bandwidth, and the diagram is for illustrative purposes only). Digital-to-Analog Converter (DAC): the DAC then converts this digital signal to analog, which attenuates the image signals significantly. Anti-Aliasing Filter: an AAF is then able to get rid of the image signals, while also decreasing the bandwidth to 80% of the DAC bandwidth. Complex (IQ) Upconverter/Mixer 1: an analog upconverter/mixer is then used to further increase the frequency of each signal, using a midband frequency LO (IF). (The squiggly line on the frequency axis indicates that the signals (A and B) are actually shifted to a much higher bandwidth, and the diagram is for illustrative purposes only).
Band-pass Filter: this signal then goes through a BPF, which only permits signals from 1.4 - 2.3 GHz. Passive (Real) Upconverter/Mixer 2: this further upconverts the signal using a highband frequency LO (and is a real valued mixer), and then through a HP filter which rejects signals below 6GHz. (The squiggley line on the frequency axis indicates that the signals (A and B) are actually shifted to a much higher bandwidth, and the diagram is for illustrative purposes only). RF Gain: the final signals which will be transmitted are amplified. (The squiggley lines are to indicate that the signals (A and B) are actually at a much higher bandwidth, and the diagram is for illustrative purposes only). SMA edgelaunch: the gained final analog signal can now be transmitted via power amplifier and antenna. Figure 4c: Highband Receiver (Rx) Signal Flow Figure 4d: Rx Highband Frequency Mechanics The figure above shows what needs to take place in our radio in order to receive an analog signal. RF Input/Gain: the first part of the figure shows several pure sine waves (A-F) and their images being picked up by an antenna attached the SMA jack and then amplified. High Pass Filter: in this stage, the signals pass through a HPF which attenuates all signals below 6GHz. RF (Real/Passive) Downconverter/Mixer: next the signals reach a mixer (real) which performs frequency down conversion using an high frequency (HF) LO, as shown on the figure above. Note how signal B reflects from the horizontal (0Hz) axis due to this being a real valued mixer. Band Pass Filter: the down converted signals then pass through a BPF, which permits frequencies between 1.4 - 2.3 GHz while attenuating all other frequencies outside this band. Complex (IQ) Downconverter/Mixer: these signals are further downconverted by a mixer (complex), using the IF LO, which then separate the signal into IQ pairs, before each component being filtered further and then amplified. Anti-Aliasing Filter: signals then reach an AAF which are applied to both the I and Q signal, while attenuating signals outside the AAF bandwidth, which is 80% that of the converter bandwidth. Analog-Digital Converter: this converts the signals from analog to digital domain, where the signals are now able to be manipulated on the FPGA. Digital Mixing (CORDIC): this decreases the frequency of all the signals by the NCO frequency set on the FPGA. Both Crimson TNG and Cyan have CORDIC digital mixers that are capable of both up-conversion and down-conversion (DDC, DUC). Down-conversion is accomplished by mixing the Rx samples with a local oscillator found in the FPGA (set to what is referred to as the NCO frequency). Note that after this takes place, some of the frequencies might be negative (signals D and C). Signals that are rotated beyond the convertor boundary, wrap around on the otherside of the bandwidth (signals C and D). This operation does not lose any information, it simply rotates the signal in the frequency domain. (The squiggley line on the frequency axis indicates that the signals (C,D, and E) are actually shifted to a much lower bandwidth, and the diagram is for illustrative purposes only). Decimation: this further downsamples the signals and dictates the final user bandwidth before proceeding to the host computer/application. Below we discuss how one would use the frequency tuning mechanics discussed above in order to transmit or receive a signal. This is done using the mixers, both real/complex analog and complex (CORDIC) digital mixers,as well as decimation. 
First, a few basic definitions: Downconversion is the process of mixing an RF input with an LO to obtain an output IF lower than the RF input: \[ f_{IF}= \mid f_{LO} -f_{RF} \mid \] Upconversion is the process of mixing an IF input with an LO such that the RF output is higher in frequency than the IF input: \[ f_{RF_1}= f_{LO} - f_{IF} \] \[ f_{RF_2}= f_{LO} + f_{IF} \] Figure 5a: Downconversion and Upconversion The total offset by which we wish to shift a signal (i.e. upconvert or downconvert) is defined as follows for each band: Baseband: \(f_{offset} = f_{NCO, FPGA}\) Midband: \(f_{offset} = f_{LO,MB} + f_{NCO, FPGA}\) Highband: \(f_{offset} = f_{LO,HF} + f_{LO,IF} + f_{NCO, FPGA}\) Decimation refers to the process of reducing the sampling rate (i.e. downsampling) by essentially throwing away samples. The decimation factor (M) is defined as the ratio of the input rate to the output rate. In equation form, we have: \[ M = \frac {input (SPS)}{output (SPS)} \] Interpolation refers to the process of increasing the sampling rate (i.e. upsampling) by essentially inserting zero valued samples. The interpolation factor (L) is defined as the ratio of the output rate to the input rate. In equation form, we have: \[ L = \frac {output (SPS)}{input (SPS)} \] Multiple solutions exist for these examples. Baseband Rx Example: Suppose we are in baseband mode. There's a low-pass filter which cuts anything above 120 MHz and an ADC with a sample rate of 200 MSPS. We want to capture a signal at a center frequency of 79 MHz, occupying a 700 kHz band, at a sample rate of 4 MSPS, with the captured band centred at 400 kHz in the user bandwidth. By how much should the \(f_{NCO, FPGA}\) downconvert our signal so that it ends up centred at 400 kHz? What is the decimation factor? Solution: We know that since we are looking for a signal at 79 MHz, it will not be cut off by the LPF. In order for the 700 kHz band to end up centred at 400 kHz, we will need to enable the \(f_{NCO, FPGA}\) (since this is the only mixing stage in baseband) at the following frequency: \(f_{offset}= 79\) MHz \(-400\) kHz \(=78.6\) MHz \(=f_{NCO,FPGA}\) Therefore we need to downconvert the signal by 78.6 MHz. Our decimation factor is simply: $$ M= \frac {200 MSPS}{4 MSPS}=50 $$ Midband Tx Example: Suppose we have a user generated signal at 1000 MHz, with a sample rate of 32.5 MSPS, and we wish to transmit this signal at a center frequency of 5025 MHz. Our \(f_{LO, MB}=N*100\) MHz, where N is a whole number, and the DAC has a sample rate of 325 MSPS. What should be our interpolation factor, value of N, and \(f_{NCO,FPGA}\)? Solution: First off, in order for our DAC to work properly, we will need to bring the signal's sample rate up to the DAC rate using the interpolation factor: $$ L=\frac{325 MSPS}{32.5 MSPS}=10 $$ Second, our total offset value needed for upconversion to 5025 MHz is: \(f_{offset} = 5025\) MHz \(- 1000\) MHz \(= 4025\) MHz \(= f_{LO,MB} + f_{NCO,FPGA}\) Therefore, one possible solution is to upconvert the signal using \(f_{LO,MB}\) with a value of N = 40 and an \(f_{NCO,FPGA} = 25\) MHz. This gives us the 5025 MHz center frequency. The following example refers to the Midband Rx Mechanics. Analog Domain: From the SMA (RF Input), the RF signal is down-converted in the analog domain by the RF (IQ) mixer. At this point, two low pass filters (LPFs), one for I and one for Q, serve to attenuate any other RF signal outside this bandwidth. Within the complex domain, this corresponds to +/- 120 MHz (i.e. 240 MHz about DC), where DC, because this is a direct IQ down converter, corresponds to whatever frequency you set the LO to.
All signals within the pass band of the LPF will be sampled, including any harmonics, spurs, or blockers. The down convertor is subject to some limitations, including image rejection about that 240 MHz band, which is typically around -40dB. As these images are, by definition, within the pass band of the low pass filter, they are present within the bandwidth of the analog signal. So, if injecting a 2.8 GHz signal, and setting the LO to 2.7GHz, after IQ down-conversion, the complex spectrum (don't think about the real valued one for now), would contain your fundamental at +100MHz, say, at 0dBm, and the image at -100MHz, at, say, -40dBm. As both of these signals fall within pass band of +/-120MHz, they actually exist within the analog signal bandwidth and will, by extension, exist in the sampled signal bandwidth (i.e. ADC bandwidth). From there, the digitized signal is shifted by the NCO. Mathematically, this occurs in a discrete and periodic space, whose period corresponds to the sample rate. Again, because this is a complex transform, the Nyquist criteria works out so that the magnitude of the bandwidth corresponds to the sample rate. When using the NCO, it shifts all frequencies within the sample bandwidth of 325MHz. So, continuing the above example, using the NCO to shift the effective IF by -100MHz (thereby shifting the relative frequency of the fundamental from +100MHz to 0Hz in the complex spectrum), also results in the image, which used to be at -100MHz, being shifting to +125MHz. This happens as follows; The total complex bandwidth of 325MHz is centred about 0Hz, with boundaries at +/-162.5MHz. The image is initially located at -100MHz relative to the 0Hz center frequency. When we shift the image by -100MHz (in order to center the fundamental tone), we also need to shift the image by the corresponding amount. In order to shift the image by -100MHz, we first shift the image by -62.5MHz, which brings us the lower bound of -162.5MHz, and results in us wrapping around to +162.5MHz, with an additional -37.5MHz shift to go. Applying the -37.5MHz balance of the -100MHz shift (-62.5MHz + -37.5MHz = -100MHz) from the top moves the image to +125MHz. So now, after conversion and the frequency shift of the NCO, we have our fundamental centred at 0Hz, and the original IQ image at +125MHz. Next, comes decimation. This process is a bit complex, but basically comes down to first applying a low pass filter on both the I and Q streams (which corresponds to a band pass filter centred about 0Hz in the complex domain), and then decimating by the required amount. So if we aim to decimate the bandwidth to, say, 10MHz, and our fundamental is still centred on 0Hz, and the image is at +125MHz, this corresponds to first applying a 5MHz low pass filter on both the I and Q streams, which is equivalent to applying a 10MHz band pass filter centred about 0Hz on the complex stream with a bandwidth of 325MHz. This band pass filter attenuates any signal outside the 10MHz band centered about 0Hz, which happens to include our image at +125MHz. Only after filtering does the actual decimation (i.e. removing dropped samples from the digital stream) occur. As a practical matter, the greater the amount of decimation, the narrower the band pass filters are, and the greater the attenuation of any images. In addition, the further from the stop band frequencies, the greater the attenuation. 
However, for the purposes of this very specific application and discussion, the filters work exceedingly well to attenuate any out of band images, with stop band attenuation close to the 96 dB dynamic range limitation associated with 16-bit integers. Take-home messages: 1) The FPGA NCO chain and the decimation stage do not introduce any (unexpected) artifacts or harmonics into the digital stream. For the frequency ranges we've discussed above, the signals observed for a given NCO shift are actually present in the signal. (This does not necessarily hold when operating close to the ADC sample rate of 162.5 MHz, in which case you will see aliasing, but, given the example above, this should not be an issue.) 2) Although it's convenient to consider the FPGA NCO as operating as a mixer, this is not entirely true. While we describe the operation as mixing, it's actually implemented as a CORDIC, with an internal phase resolution of 32 bits, implemented as a 20th order series, and it includes correction factors to ensure spur reduction. The end result is that the mode of operation of the NCO is fundamentally different from that of an analog mixer. As we've implemented a mathematical transform, it doesn't add spurs. Sampling Bandwidth (ADC/DAC Bandwidth): This is the bandwidth at which the convertors are operating. For a single real-valued signal, this corresponds to half the sample rate (by Nyquist's theorem). For a complex signal (i.e. IQ pairs), this corresponds to the sample rate. This should make sense intuitively; given a fixed sample rate, a complex signal, with its two orthogonal components (i.e. I and Q), carries twice as much information as a real valued signal. Note that when our device operates in baseband mode, we only sample from a single channel (the "I" channel, at 16 bits), and convert it to a complex signal by adding a virtual "zero" to the Q channel within the FPGA. As an example, the default Cyan product has a constant sample rate of 1 GSPS for all convertor devices and channels. This means that, when operating in baseband mode, our sampling bandwidth corresponds to half the convertor sample rate (i.e. 500 MHz for Cyan). However, when operating in Mid- or Highband, we use both channels (transmitting/receiving I and Q, each at 16 bits), and the product therefore has a sampling bandwidth of 1 GHz. This is an important distinction for two reasons: it provides the theoretical bound on the maximum bandwidth we can send and receive at any given time, and it also gives customers who want to bypass the default radio front end an idea of what kind of analog filtering they would require. RF Bandwidth (AAF/AIF bandwidth): The RF bandwidth corresponds to the actual radio bandwidth of the product. This is generally the 3 dB bandwidth at which we have implemented the analog anti-aliasing filters (AAF) and anti-imaging filters (AIF). In the case of our default Cyan product, this comes to around 800 MHz. It's important to note that this is NOT a brick wall filter: we will be able to observe frequencies past 800 MHz, they'll just be attenuated. Moreover, signals immediately beyond the sampling bandwidth of the ADC/DAC will appear to be reflected about that bandwidth. Because the analog domain is continuous, there will exist frequencies beyond the sampling bandwidth. By virtue of Nyquist, we know that, once digitized, any frequencies beyond the sampling bandwidth will reflect back onto the signal (i.e. be aliased, in the case of receiving), or be multiplied on the output (imaging, in the case of transmission).
Imaging and aliasing are both very undesirable, as they entail that there's no way to differentiate between, say, a signal that is at 900 MHz vs. a signal at 1100 MHz. Unless we use analog filtering, the aliasing and imaging can cause serious problems. This is particularly problematic since the bandwidth of the analog components used in the radio chain is often greater than the sampling bandwidth the convertors are running at. It's therefore important to match these in order to reduce the impact of aliasing or images. For example, if you receive a sufficiently strong signal at the baseband input, say at 0 dBm, at 1100 MHz, the RF anti-aliasing filter may attenuate that signal to, say, -40 dBm. However, when sampling that signal at the convertor, you'll actually see it placed at 900 MHz (because the sampling bandwidth is at 1000 MHz, and the signal is 100 MHz above that, it gets reflected back down to 900 MHz). The amplitude of this aliased signal will get increasingly smaller as the input frequency increases, because the analog anti-aliasing filter will have stronger rejection as the input signal frequency increases beyond the passband (which we've set to 800 MHz). As a general rule of thumb, we define the pass band as 80% of the total sampling bandwidth, and the remaining portion is usually reserved for the filter's roll off/transition band. We use 80% because designing analog filters is hard. Ideally, we want to have a flat pass band, and super steep rejection (i.e. a very small "transition band") to the "stop band" value. For example, a flat, say, -2 dB insertion loss within the pass band, but then immediately outside of it, -60 dB of stop band attenuation. The general problem here is that having a steep/small transition bandwidth limits the maximum stop band attenuation. Which is to say, we could have a much steeper transition band, but then the attenuation of signals may be insufficient to prevent aliasing or to meet the SNR/SFDR requirements of the system. We picked an 80% value because it helps ensure that the stop band attenuation is sufficiently high so as to match the overall SNR/SFDR requirements of the system (targeted at 40-60 dB). Application Bandwidth (User Bandwidth): Everything we've mentioned above reflects fixed architectural characteristics of the clocking configuration and radio front end chains. It also impacts the fixed link between the FPGA and the convertor device. However, once the data is on the FPGA, we provide the ability to either interpolate data up to a higher sample rate (in the case of transmitting information from the SDR), or decimate it down to a lower sample rate (in the case of receiving information from the SDR). This is accomplished through Digital Up Conversion (DUC; for sending info from the SDR), or Digital Down Conversion (DDC; to receive data from the SDR). As these filters are implemented digitally, we can use substantially longer taps (i.e. coefficients) than what would commercially and economically be possible using analog components. This means our filters can have more ideal behaviour. It also provides customers with the ability to select more convenient sample rates depending on their actual application. Provided that the input sample rate is an integer multiple (or divisor) of the fixed sample rate of the device (1 GSPS for Cyan / 325 MSPS for Crimson), the SDR is capable of handling the matching between the rate at which the application or user sends/receives information, and the rate at which the FPGA is sending/receiving information to/from the convertor.
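To tie the worked examples in Section 5 and the glossary together, here is a minimal Python sketch of the same tuning arithmetic. It only reproduces the bookkeeping described above (offsets, decimation and interpolation factors, and the modular wrap-around of an NCO shift); it is not a model of the actual CORDIC or filter implementations, and the helper names and layout are illustrative only.

```python
# Minimal sketch of the tuning arithmetic from Section 5, plus the wrap-around
# behaviour described in the Crimson TNG example (325 MSPS complex bandwidth).

def decimation_factor(input_sps, output_sps):
    # M = input rate / output rate
    return input_sps / output_sps

def interpolation_factor(output_sps, input_sps):
    # L = output rate / input rate
    return output_sps / input_sps

def nco_shift(freq_hz, nco_hz, sample_rate_hz):
    """Shift a frequency by the NCO and wrap it back into the complex
    bandwidth [-Fs/2, +Fs/2), mirroring the rotation described above."""
    half = sample_rate_hz / 2.0
    return (freq_hz + nco_hz + half) % sample_rate_hz - half

fs = 325e6  # Crimson TNG converter sample rate

print(decimation_factor(200e6, 4e6))         # 50.0 -> baseband Rx example
print(interpolation_factor(325e6, 32.5e6))   # 10.0 -> midband Tx example
print(nco_shift(+100e6, -100e6, fs) / 1e6)   # 0.0 MHz: fundamental lands at DC
print(nco_shift(-100e6, -100e6, fs) / 1e6)   # 125.0 MHz: the IQ image wraps around
```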
The effect of an antibiotic stewardship program on tigecycline use in a Tertiary Care Hospital, an intervention study Rima Moghnieh1,2, Dania Abdallah3, Lyn Awad3, Marwa Jadayel4, Nicholas Haddad5, Hani Tamim6, Aline Zaiter7 na1, Diana-Caroline Awwad7 na1, Loubna Sinno8, Salam El-Hassan9, Rawad Lakkis10, Rabab Khalil11 & Tamima Jisr12 A drug-oriented antibiotic stewardship intervention targeting tigecycline utilization was launched at Makassed General Hospital, Beirut, Lebanon, in 2016 as a part of a comprehensive Antibiotic Stewardship Program (ASP). In this study, we evaluated the effect of this intervention on changing tigecycline prescription behavior in different types of infections, patient outcome and mortality, along with tigecycline drug use density, when compared to an earlier period before the initiation of ASP. This is a retrospective chart review of all adult inpatients who received tigecycline for more than 72 h between Jan-2012 and Dec-2013 [period (P) 1 before ASP] and between Oct-2016 and Dec-2018 [period (P) 2 during ASP]. Tigecycline was administered to 153 patients during P1 and 116 patients during P2. The proportion of patients suffering from cancer, those requiring mechanical ventilation, and those with hemodynamic failure was significantly reduced between P1 and P2. The proportion of patients who received tigecycline for FDA-approved indications increased from 19% during P1 to 78% during P2 (P < 0.001). On the other hand, its use in off-label indications was restricted, including ventilator-associated pneumonia (26.1% in P1, 3.4% in P2, P < 0.001), hospital-acquired pneumonia (19.6% in P1, 5.2% in P2, P = 0.001), sepsis (9.2% in P1, 3% in P2, P = 0.028), and febrile neutropenia (15.7% in P1, 0.9% in P2, P < 0.001). The clinical success rate of tigecycline therapy showed an overall significant increase from 48.4% during P1 to 65.5% during P2 (P = 0.005) in the entire patient population. All-cause mortality in the tigecycline-treated patients decreased from 45.1% during P1 to 20.7% during P2 (P < 0.0001). In general, mean tigecycline consumption decreased by 55% between P1 and P2 (P < 0.0001). The drug-oriented ASP intervention targeting tigecycline prescriptions improved its use and patient outcomes, where it helped curb the over-optimistic use of this drug in off-label indications where it is not a suitable treatment option. Tigecycline was first introduced to the Lebanese pharmaceutical market in 2006. It has demonstrated promising in vitro activity against antibiotic-resistant Gram-negative bacteria, including extended spectrum beta-lactamase-producing Enterobacteriaceae, carbapenem-resistant Enterobacteriaceae, and extensively drug-resistant (XDR) Acinetobacter baumannii. It is also active against various Gram-positive organisms, including Staphylococcus aureus, streptococci, and enterococci [1,2,3]. Multi-drug-resistant (MDR) and XDR A. baumannii has spread in Lebanese hospitals since 2004, becoming one of the primary nosocomial pathogens that compromises the outcome of hospitalized patients [4,5,6,7,8]. A. baumannii has displayed in vitro resistance to all available antimicrobial classes across different Lebanese hospitals, except for tigecycline and colistimethate sodium [4,5,6,7,8]. Therefore, physicians use tigecycline whenever a MDR or XDR pathogen is suspected or proven to cause a serious infection. A utilization review was performed among inpatients that received tigecycline in our facility between 2012 and 2013 [9]. 
The tigecycline clinical success rate reached 43.4% and total mortality was 45% [9]. Stratifying tigecycline use among different patient subgroups revealed that it was mostly prescribed for indications not approved by the US Food and Drug Administration (FDA) or the European Medicines Agency (EMA) (81%), specifically in critically ill patients [9]. Total mortality was significantly higher in severely ill patients and for off-label indications, such as nosocomial pneumonia, bacteremia, and sepsis [9]. These results indicated a need to improve tigecycline prescription procedures. Therefore, a drug-oriented antibiotic stewardship intervention was launched in our facility in 2016 as a part of a comprehensive "Handshake Antibiotic Stewardship Program" (ASP) [10]. The primary endpoints of the current study were to observe the intervention effects on:
1. Shifting tigecycline use from the clinically vulnerable patient population toward the less critical population by avoiding its use in patients with signs of clinical severity, like hemodynamic failure, and in those on mechanical ventilation.
2. Limiting tigecycline use to complicated intra-abdominal infections (cIAI) and to complicated skin and soft tissue infections (cSSTI), paired with reduced tigecycline prescription for infections like ventilator-associated pneumonia (VAP), hospital-acquired pneumonia (HAP), and bacteremia.
3. Changes in total tigecycline consumption and prescription trends resulting from ASP team oversight and controlling therapy duration, when possible, for patients who received it.
The secondary aims of this study were to assess patient outcome and all-cause mortality in patients who received tigecycline before and during the ASP. We also compared bacterial flora isolated from patients treated with tigecycline before and during the ASP.
Setting and study design
This was a retrospective chart review conducted at Makassed General Hospital, a 186-bed University hospital in Beirut, Lebanon. This study included adult inpatients who received tigecycline for more than 72 h between January 2012 and December 2013 [period 1 (P1), before ASP] and October 2016 to December 2018 [period 2 (P2), during ASP]. The hospital Institutional Review Board approved this study. The ASP was adopted by the hospital starting in September 2016 and was based on the "handshake" strategy of prospective audit and immediate feedback to prescribers. The aims were to decrease high-end antibiotic use, namely antipseudomonal carbapenems and colistimethate sodium, and to control tigecycline use based on its utilization review [9, 10]. Regarding tigecycline intervention, a workshop was conducted in our facility to target broad-spectrum antibiotic prescribers, including infectious disease specialists, pulmonologists, and intensivists. This workshop discussed the results of the previous tigecycline utilization review and evaluated and agreed upon a new tigecycline use protocol. This protocol essentially limited tigecycline prescription to FDA/EMA-approved indications, namely cIAI and cSSTI. The ASP team was given the authority to modify the choice and duration of the prescribed antimicrobial therapy after discussing the patient management plan and corresponding treatment guidelines with prescribers during daily clinical rounds.
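As a concrete reading of the inclusion criteria above, the toy sketch below labels a tigecycline course as belonging to P1 or P2 and applies the "adult inpatient treated for more than 72 h" rule. The field names are invented for illustration and are not the hospital's dataset.

```python
import pandas as pd

# Study windows as stated above (illustrative only).
P1 = (pd.Timestamp("2012-01-01"), pd.Timestamp("2013-12-31"))  # before ASP
P2 = (pd.Timestamp("2016-10-01"), pd.Timestamp("2018-12-31"))  # during ASP

def assign_period(start):
    """Return 'P1', 'P2', or None for a tigecycline course start date."""
    if P1[0] <= start <= P1[1]:
        return "P1"
    if P2[0] <= start <= P2[1]:
        return "P2"
    return None

def eligible(age_years, duration_hours):
    """Adult inpatient who received tigecycline for more than 72 h."""
    return age_years >= 18 and duration_hours > 72

course = {"age_years": 64, "duration_hours": 120, "start": pd.Timestamp("2017-03-05")}
if eligible(course["age_years"], course["duration_hours"]):
    print(assign_period(course["start"]))  # -> P2
```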
Data collection, definitions, and metrics for tigecycline use The following information was collected during both periods: baseline demographic and clinical characteristics, indication for tigecycline therapy, tigecycline treatment strategy (empiric, targeted, monotherapy, or combination), duration of therapy, microbiological findings, clinical and microbiological outcomes, and all-cause mortality. Monthly tigecycline use data were obtained from the hospital pharmacy. The World Health Organization (WHO) and Anatomical Therapeutic Chemical (ATC) classification systems were used to express data as WHO/ATC defined daily doses (DDD). Tigecycline use density was measured as the number of DDD/1000 patient days (PD). The WHO definition was used to define tigecycline DDD as 0.1 g/day. The PD number was the number of patients present in any given location (e.g., hospital or ward) at a single time during a 24-h period [11]. We studied the quarterly change in tigecycline consumption levels and trends at P1 and P2. Primary infections for which tigecycline was prescribed were defined according to clinical diagnostic criteria established by the U.S. Center for Disease Control and Prevention [12,13,14]. FDA- and EMA-approved indications for tigecycline were cSSTI and cIAI [15]. Off-label indications were HAP, VAP, urinary tract infection, diabetic ulcers, sepsis, bacteremia, and febrile neutropenia [15]. Empiric tigecycline use was defined as its administration to a patient with signs and symptoms of infection without a known bacterial isolate [9, 16]. Targeted therapy was defined as tigecycline administration in the presence of an identified organism [9, 16]. Clinical success was defined as an improvement in signs and symptoms of the primary infection treated by tigecycline, without the need to change the antibiotic regimen 72 h after starting tigecycline or without the need to restart other antibiotics within 72 h of discontinuing tigecycline [9, 17, 18]. The clinical success proportion was calculated as [9]: $$({\text{Number}}\;{\text{of}}\;{\text{patients}}\;{\text{with}}\;{\text{clinical}}\;{\text{success}}/{\text{Total}}\;{\text{number of}}\;{\text{patients}}) \, \times \, 100.$$ Microbiological outcome success was defined as the eradication of the organism causing the primary infection during or after tigecycline therapy [9, 18]. Persistent identification of the same organism 72 h after initiating tigecycline therapy was considered a microbiological failure [9, 18]. The response was considered indeterminate when follow-up cultures were not available to verify eradication [9, 18]. The microbiological success proportion was calculated as [9]: $$\left[ {{\text{Number}}\;{\text{of}}\;{\text{patients}}\;{\text{with}}\;{\text{microbiological}}\;{\text{success}}/({\text{Total}}\;{\text{number}}\;{\text{of}}\;{\text{patients }}{-}{\text{ Number}}\;{\text{of}}\;{\text{patients}}\;{\text{with}}\;{\text{undetermined}}\;{\text{microbiological}}\;{\text{response}})} \right] \, \times 100.$$ Mortality was quantified using 28-day all-cause mortality defined as deaths occurring between 72 h after treatment started and 28 days after tigecycline discontinuation [18]. Patient bacterial flora was assessed as a compilation of all bacterial isolates from any cultured site from patients treated with tigecycline. Bacterial identification was performed according to standard microbiological procedures. 
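A quick aside before the microbiological methods continue: the consumption and outcome metrics defined above are simple ratios. The sketch below spells them out; the helper names and the dispensing figures are invented for illustration and are not the study's analysis code, while the outcome counts are the P1 figures reported later in the Results.

```python
TIGECYCLINE_DDD_G = 0.1  # WHO/ATC defined daily dose for tigecycline, grams/day

def use_density(grams_dispensed: float, patient_days: int) -> float:
    """Antibiotic use density in DDD per 1000 patient-days."""
    return (grams_dispensed / TIGECYCLINE_DDD_G) / patient_days * 1000

def clinical_success_pct(n_success: int, n_total: int) -> float:
    return 100.0 * n_success / n_total

def microbiological_success_pct(n_success: int, n_total: int, n_indeterminate: int) -> float:
    # Indeterminate responses (no follow-up culture) are excluded from the denominator.
    return 100.0 * n_success / (n_total - n_indeterminate)

# Illustrative dispensing inputs: 78 g over 30,000 patient-days -> 26 DDD/1000 PD,
# which happens to equal the reported P1 density level.
print(round(use_density(78, 30_000), 1))                    # 26.0
# P1 outcome counts from the Results: 74/153 clinical successes; 26 microbiological
# successes among 153 patients, 93 of whom had no follow-up culture (indeterminate).
print(round(clinical_success_pct(74, 153), 1))              # 48.4
print(round(microbiological_success_pct(26, 153, 93), 1))   # 43.3 (reported as 43%)
```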
Antibiotic susceptibility was performed using the disc diffusion method as recommended by the Clinical and Laboratory Standards Institute (CLSI). All microbiological methods were consistent with CLSI guidelines for the corresponding year and antimicrobial susceptibility was determined using the CLSI breakpoints for the corresponding year [19]. The average turnaround time for bacterial identification and antibiogram results was 3 working days. Rapid diagnostic tests that detect antimicrobial resistance were not available in the hospital laboratory at the time of the study [20]. Patients' characteristics, treatment strategy, overall clinical outcome, microbiological outcome, all-cause mortality, and bacterial flora were quantified in patients during P1 and P2. The clinical outcome and all-cause mortality were also broken down by infectious disease diagnosis. The Statistical Package for Social Sciences program (SPSS Statistics for Windows, Version 23.0, IBM Corp., Armonk, NY, USA) was used for data entry, management, and analyses. Descriptive analysis was broken down by categorical independent variables for outcomes quantified using numbers and percentages. Bivariate analysis for ASP (P1 vs. P2) was conducted using the Chi square test for categorical variables and independent t-test for continuous variables. Parameters with P-value < 0.05 at the univariate level were considered statistically significant. The ASP impact on drug use density was evaluated using a segmented regression analysis of an interrupted time series adjusted for autocorrelation. We calculated the "change in level" as follows: [(mean P2 value – mean P1 value)/mean P1 value] × 100. We defined "change in trend" as the difference between the P1 and P2 change rates. The segmented regression analysis was applied using the newey command (considering Newey-West standard errors) in STATA version 15 (StataCorp LLC., College Station, TX). Statistical significance was defined as P-value < 0.05.

Patient characteristics
Tigecycline was administered to 153 patients during P1 and 116 patients during P2. All patients' demographic and clinical characteristics are detailed in Table 1. The comorbidities showed similar patterns during P1 and P2. However, the proportion of patients suffering from cancer decreased from 32.7% (50/153 patients) in P1 to 19% (22/116 patients) in P2 (P = 0.012). Similarly, the percentage of patients on mechanical ventilation dropped from 22.9% (35/153) during P1 to 11.2% (13/116) during P2 (P = 0.013), as well as those requiring vasopressor use [24.2% (37/153 patients) in P1 vs. 12.9% (15/116 patients) in P2, P = 0.021] (Table 1).
Table 1 Comparison of baseline demographic and clinical characteristics, indications, duration, and treatment strategy in patients who received tigecycline before and after the antibiotic stewardship program intervention

Consumption of tigecycline
The average tigecycline consumption was 26 DDD/1000 PD during P1 compared to 11 DDD/1000 PD during P2; there was a 55% decline in the drug density level (P < 0.0001). The ASP intervention resulted in a change in trend of −14.22 DDD/1000 PD per quarter (P < 0.01) (Fig. 1).
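The two statistical procedures described in the Methods are easy to reproduce outside SPSS and STATA. The sketch below is illustrative only: the chi-square part re-computes one reported bivariate comparison (patients with cancer, 50/153 in P1 vs. 22/116 in P2), while the segmented-regression part uses an invented quarterly series, not the study's data, to show how a Newey-West (HAC) fit recovers level and trend changes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# --- Bivariate comparison (chi-square), e.g. cancer: 50/153 (P1) vs. 22/116 (P2) ---
table = [[50, 153 - 50],
         [22, 116 - 22]]
chi2, p, dof, _ = chi2_contingency(table, correction=False)  # plain Pearson chi-square
print(f"cancer: chi2 = {chi2:.2f}, p = {p:.3f}")             # p close to the reported 0.012

# --- Segmented regression on an ILLUSTRATIVE quarterly DDD/1000 PD series ---
density = [27, 25, 28, 26, 27, 24, 26, 25,       # pre-ASP quarters (P1), invented
           14, 13, 12, 11, 11, 10, 10, 9, 9]     # post-ASP quarters (P2), invented
df = pd.DataFrame({"density": density})
df["time"] = np.arange(1, len(df) + 1)            # quarter index
df["asp"] = (df["time"] > 8).astype(int)          # 1 once the intervention starts
df["time_after"] = np.maximum(0, df["time"] - 8)  # quarters since the intervention

# Coefficient on `asp` = change in level; coefficient on `time_after` = change in trend.
fit = smf.ols("density ~ time + asp + time_after", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 2})      # Newey-West standard errors
print(fit.params)

# The paper's summary "change in level" is simply (mean_P2 - mean_P1) / mean_P1 * 100.
```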
Fig. 1: Quarterly variation of tigecycline consumption (DDD/1000 PD) before and after the antibiotic stewardship program implementation at Makassed General Hospital (ASP: antibiotic stewardship program; DDD: defined daily dose; PD: patient days; ∆: change).

Indications of tigecycline
The proportion of patients who received tigecycline for cSSTI and cIAI, the FDA/EMA-approved indications, significantly increased between P1 and P2 (19% (29/153 patients) vs. 78% (91/116 patients), respectively, P < 0.001) (Table 1). Conversely, there was a significant decline in tigecycline prescription in all the observed off-label indications, including VAP [26.1% (40/153 patients) during P1 and 3.4% (4/116 patients) during P2, P < 0.001], HAP [19.6% (30/153 patients) during P1 and 5.2% (6/116 patients) during P2, P = 0.001], bloodstream infection (BSI) [9.2% (14/153 patients) during P1 and none (0/116 patients) during P2, P = 0.001], sepsis [9.2% (14/153 patients) during P1 and 3% (3/116 patients) during P2, P = 0.028] and febrile neutropenia [15.7% (24/153 patients) during P1 and 0.9% (1/116 patients) in P2, P < 0.001] (Table 1).

Duration of therapy
The median duration of tigecycline therapy was 8 days (IQR, 5–13 days) during P1 and 7 days (IQR, 5–9 days) during P2 (P = 0.973), regardless of the indication (Table 1). In general, the intervention reduced the proportion of patients who received tigecycline for more than 10 days (35.9% during P1, 18.1% during P2, P = 0.001) and more than 15 days (17.6% during P1, 6% during P2, P = 0.005) (Table 1).

Treatment strategy
Empiric use of tigecycline to treat primary infections increased from 43.8% (67/153 patients) during P1 to 54.3% (63/116 patients) during P2 (P = 0.087), whereas its use as a targeted therapy decreased from 56.2% (86/153 patients) during P1 to 45.7% (53/116 patients) during P2 (P = 0.087) (Table 1). Tigecycline prescription for combination therapy with other antibiotics decreased from 83% (127/153 patients) during P1 to 59.6% (69/116 patients) during P2 (P < 0.0001), whereas its use as a monotherapy increased from 17% (26/153 patients) during P1 to 40.5% (47/116 patients) during P2 (P < 0.0001) (Table 1).

Patient outcome and mortality
The clinical success rate of tigecycline therapy showed an overall significant increase from 48.4% (74/153 patients) during P1 to 65.5% (76/116 patients) during P2 (P = 0.005) in the entire patient population (Table 2). Notably, individual clinical success rates for each indication were not significantly different between P1 and P2.
Table 2 Effect of the antibiotic stewardship program intervention on the clinical success, microbiological success and 28-day all-cause mortality rates in patients treated with tigecycline in different subgroups
Follow-up cultures to assess the microbiological success or failure were only available for 39.2% of patients (60/153) in P1 and 22.4% of patients (26/116) in P2. During P1, the microbiological success rate was 43% (26/60 patients), compared to 19% (5/26 patients) during P2 (P = 0.03) (Table 2). All-cause mortality in the entire tigecycline-treated patient population decreased from 45.1% (69/153 patients) during P1 to 20.7% (24/116 patients) during P2 (P < 0.0001) (Table 2). Mortality rates did not change based on the type of infection tigecycline was prescribed for.

Microbiological flora
The microbiological culture results from patients treated with tigecycline during P1 and P2 were very similar, with few exceptions (Table 3). During both periods, the majority of cultured organisms were Gram-negative bacteria [81.3% of isolates (304/374) during P1 and 88.9% of isolates (296/333) during P2], with A. baumannii and Enterobacteriaceae predominating. The proportion of carbapenem-resistant A.
baumannii from all isolated bacteria decreased from 23.3% (87/374 isolates) during P1 to 17.1% (57/333 isolates) during P2 (P = 0.04). Enterobacteriaceae species resistant to third generation cephalosporins constituted 17% of all isolated bacteria (64/374 isolates) during P1 compared to 22% (73/333 isolates) during P2. Specifically, Klebsiella spp. resistant to third generation cephalosporins represented 4.8% (18/374 isolates) of the isolated flora during P1 and increased to 9.3% (31/333 isolates) during P2 (P = 0.02) (Table 3). Carbapenem resistance among Escherichia coli and Klebsiella spp. emerged following the ASP [(0.6% (2/333 isolates) during P1 and 3.9% (13/333 isolates) during P2]. Table 3 Comparison of the bacterial flora isolated from patients treated with tigecycline before and after the antibiotic stewardship program intervention In cultured Gram-positive bacteria, methicillin-resistant S. aureus isolation decreased during the intervention period [1.6% of isolates (6/374) during P1 and 0.6% of isolates (2/333) during P2, P = 0.22]. Additionally, vancomycin-resistant Enterococci were isolated from patients at P2 (0.9%, 3/333 isolates) (Table 3). This study observed the effects of an ASP for tigecycline use among inpatients by comparing the 2 years before the intervention to 2 years during the intervention. Before the intervention, a formulary restriction policy was used to control prescription of broad-spectrum antibiotics in our facility, including tigecycline. During the intervention period, a dedicated ASP team was implemented to prospectively audit prescribed antimicrobials and give immediate feedback during daily ward rounds. The program also included educational activities for prescribers regarding rational use of antibiotics and disseminating guidelines for the management of common infectious diseases in our facility. Patients' characteristics and comorbidities were similar before and during the intervention, including older age and the presence of comorbid illness like cardiovascular disease, diabetes, and respiratory disease. This demonstrates that a similar range and complexity of cases were being treated at our tertiary care facility during the study [Internal Hospital Data]. Yet, we observed a change in tigecycline use and therapy strategy, where tigecycline prescriptions to treat infections were reduced in neutropenic patients with cancer, patients on mechanical ventilation, and patients with hemodynamic failure. In 2010 and again in 2013, the U.S. FDA issued a boxed warning regarding the increased risk of mortality with tigecycline therapy compared to alternatives for approved and unapproved indications, thus cautioning its administration for all cases and advising the use of available alternative antibiotics [21, 22]. At the time of the intervention, XDR A. baumannii and carbapenem-resistant Enterobacteriaceae were on the rise in most Lebanese hospitals and many prescribers were eager to avoid using carbapenem whenever possible [23,24,25,26]. The recently approved antibiotic formulations containing cephalosporins and beta-lactamase inhibitors that show promise against these organisms, such as ceftolozane/tazobatam and ceftazidime/avibactam, were not available in Lebanon during the study. Accordingly, tigecycline use continued during the ASP as part of the carbapenem-sparing strategy, but the intervention succeeded in decreasing tigecycline use in high-risk and immunocompromised patients. The type of infection was also a consideration. 
Tigecycline was primarily restricted to FDA-approved indications per our ASP protocols. It was mainly prescribed to manage acute bacterial cSSTI and cIAI in non-critically ill patients. Its use in severely ill patients and for off-label indications like HAP, VAP, bacteremia, sepsis, and febrile neutropenia was significantly reduced. Shifting the types of infections treated with tigecycline was one of the main priorities of the ASP. The empiric use of tigecycline as a mono- or combination therapy to reduce carbapenem use in non-severely ill patients with cSSTI and cIAI who were at risk for infection with MDR bacteria was supported by national and international guidelines, multicenter studies, and expert opinions [16, 27,28,29,30,31,32]. The ASP significantly decreased tigecycline consumption levels by 55%, which was accompanied by a prominent reduction in its prescription rate. It is well known that unnecessary, increased antibiotic consumption is positively correlated with the emergence of antibiotic resistance [33,34,35]. Shifting tigecycline prescription from off-label to FDA-approved indications, in addition to shortening therapy duration under appropriate conditions based on international guidelines, collectively reduced its consumption and prescription rates. The intervention produced a significant decline in tigecycline prescription for more than 10 days, with the mean therapy duration being 7 days (IQR, 5–9 days), regardless of the indication. An extended duration of antibiotic therapy is usually associated with emergence of resistance because selection of antibiotic-resistant strains increases over the time of antibiotic exposure [36]. The Infectious Disease Society of America (IDSA) and Society for Healthcare Epidemiology of America guidelines for ASP implementation in hospitals strongly recommend strategies that reduce antibiotic therapy to the shortest effective duration [37]. For instance, recently updated international guidelines indicate that the optimal duration of antibiotic therapy is 3–5 days in cIAI cases where patients undergo an adequate source-control procedure [28, 29]. The duration can be extended to 7 days depending on the presence of concomitant bacteremia, rate of fever resolution and other signs of infection, and the presence of comorbidities [27, 38]. For other infections like acute bacterial cSSTI, the latest IDSA guidelines suggest 7 to 10 days of therapy with individualization based on clinical response and factors like comorbidities, etiology, and appropriateness of drug or dosages [39]. ASPs aim to effectively control antibiotic utilization and antimicrobial resistance rates [37]. In Lebanon, carbapenem resistance has been on the rise over the past 10 years in clinically-relevant Gram-negative bacteria, an alarming situation in light of limited resources [23,24,25,26]. Colistin resistance has also recently been detected in Enterobacteriaceae and Acinetobacter species from clinical samples [40, 41]. One potential side effect of drug-oriented ASPs is that decreasing consumption of one antimicrobial can result in increased consumption of another one. Despite similar patient population complexity in our tertiary care facility before and during the intervention, decreased tigecycline use was not compensated for by increased consumption of carbapenems or colistin [10]. 
Implementing the handshake ASP in our facility led to an important reduction in the density and rate of prescribing other broad-spectrum antibiotics, including the antipseudomonal carbapenems imipenem and meropenem and colistimethate sodium [10]. Our ASP protocols were not only based on modifying antibiotics, but on stratifying patients according to the risk of infection or acquisition of Gram-negative organisms resistant to carbapenems and extended-spectrum cephalosporins [10]. This key measure allowed us to properly choose empiric treatment options, thus sparing empiric use of carbapenems and colistin when suitable. Antibiotic therapy was escalated or de-escalated to suit the patient's condition when microbiological culture results were available. Notably, there was a compensatory increase in consumption of other antibiotics, like piperacillin-tazobactam and third and fourth generation cephalosporins, in the hospital as a whole. This was due to the various complicated cases frequently managed in our facility, such as neutropenic patients with cancer, bone marrow transplant recipients, and critically ill patients [10]. Piperacillin-tazobactam is a less potent inducer of antimicrobial resistance in Gram-negative bacteria compared to carbapenems, fluoroquinolones, and extended-spectrum cephalosporins [42, 43]. Similar to our predictions, the clinical outcome of patients treated with tigecycline did not change during the ASP based on the type of infection only. However, the ASP improved patient outcomes across the entire tigecycline-treated cohort and decreased overall mortality. Favorable outcome rates and decreased death were due to prescribing tigecycline for FDA-approved indications and avoiding its use for off-label indications where it is not effective, such as HAP, VAP, bacteremia, and sepsis, and in cases with high mortality risks, as per the FDA boxed warning [21, 22]. We also observed the bacterial flora in patients who received tigecycline before and during the intervention, which does not represent the full hospital ecology. Microbiological culture results were similar during both periods, with mostly Gram-negative bacteria isolated from clinical samples. However, fewer patients with carbapenem-resistant A. baumannii received tigecycline during the intervention because most of the cases infected with this organism were critically ill and suffering from the off-label diseases HAP or VAP [Internal Hospital Data]. Conversely, we observed an increase in Enterobacteriaceae species resistant to third generation cephalosporins and the emergence of carbapenem resistance in these species. The ASP did not induce this issue because the rate of change of susceptibility patterns of nosocomial flora to tigecycline and other available antibiotics lags behind that of antibiotic prescription during the observed intervention period [44]. It may have been induced by the extensive use of carbapenems and other broad-spectrum antibiotics before the ASP. It is important to note that tigecycline does not stimulate cross-resistance to other antibiotic classes [15]. Its induction of resistance does not have the same impact on patient outcomes or altering the microbiome that is observed for other antibiotics after excessive use [45, 46]. Limitations and strengths A main limitation in this study is that it does not consider alternative antibiotics used for non-FDA-approved indications along with the corresponding patient outcome, particularly in the context of increasing carbapenem resistance. 
Its retrospective design, small sample size, and lack of adjusted analyses for results that are subject to confounds are also considerable limitations. The observed microbiological success rate was also subject to surveillance bias because most cases did not have follow-up cultures. Nevertheless, this study highlights the importance of drug utilization reviews and how ASPs can reduce drawbacks when using newly introduced antibiotics. This study details a real-life experience in a developing country where the incidence of nosocomial extensively drug-resistant organisms has steadily increased, creating a significant threat of no antimicrobial options in the therapeutic armamentarium. The ASP targeting tigecycline prescriptions improved its use and patient outcomes. Tigecycline played an important role in managing cIAI and acute bacterial cSSTI during the antibiotic resistance era, when it was crucial to spare carbapenem use. Our targeted intervention helped to curb the over-optimistic use of this drug in off-label indications where it is not a suitable treatment option. The data that support the findings of this study are available from Makassed General Hospital but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of Makassed General Hospital. ASP: Antibiotic stewardship program Anatomical therapeutic chemical cIAIs: Complicated intra-abdominal infections CLSI: Clinical and Laboratory Standards Institute cSSTIs: Complicated skin and soft tissue infections DDD: Defined daily dose EMA: FDA: HAP: Hospital-acquired pneumonia IDSA: Infectious Disease Society of America Multi-drug-resistant PD: Patient days VAP: XDR: Extensive-drug-resistant Mendes RE, Farrell DJ, Sader HS, Jones RN. Comprehensive assessment of tigecycline activity tested against a worldwide collection of Acinetobacter spp. (2005–2009). Diagn Microbiol Infect Dis. 2010;68:307–11. Balode A, Punda-Polic V, Dowzicky MJ. Antimicrobial susceptibility of gram-negative and gram-positive bacteria collected from countries in Eastern Europe: results from the Tigecycline Evaluation and Surveillance Trial (T.E.S.T) 2004–2010. Int J Antimicrob Agents. 2013;41:527–35. Sader HS, Castanheira M, Flamm RK, Mendes RE, Farrell DJ, Jones RN. Tigecycline activity tested against carbapenem-resistant Enterobacteriaceae from 18 European nations: results from the SENTRY surveillance program (2010–2013). Diagn Microbiol Infect Dis. 2015;83:183–6. Zarrilli R, Vitale D, Di Popolo A, Bagattini M, Daoud Z, Khan AU, et al. A plasmid-borne blaOXA-58 gene confers imipenem resistance to Acinetobacter baumannii isolates from a Lebanese hospital. Antimicrob Agents Chemother. 2008;52:4115–20. Hammoudi D, Moubareck CA, Hakime N, Houmani M, Barakat A, Najjar Z, et al. Spread of imipenem-resistant Acinetobacter baumannii co-expressing OXA-23 and GES-11 carbapenemases in Lebanon. Int J Infect Dis. 2015;36(56–61):17. Moghnieh R, Siblani L, Ghadban D, El Mchad H, Zeineddine R, Abdallah D, et al. Extensively drug-resistant Acinetobacter baumannii in a Lebanese intensive care unit: risk factors for acquisition and determination of a colonization score. J Hosp Infect. 2016;92(1):47–53. Hammoudi Halat D, Moubareck CA, Sarkis DK. Heterogeneity of carbapenem resistance mechanisms among gram-negative pathogens in Lebanon: results of the first cross-sectional countrywide study. Microb Drug Resist. 
2017;23(733–43):16. Moghnieh R, Araj GF, Awad L, Daoud Z, Mokhbat JE, Jisr T, et al. A compilation of antimicrobial susceptibility data from a network of 13 Lebanese hospitals reflecting the national situation during 2015–2016. Antimicrob Resist Infect Control. 2019;8(1):41. Moghnieh RA, Abdallah DI, Fawaz IA, Hamandi T, Kassem M, El-Rajab N, et al. Prescription patterns for tigecycline in severely Ill patients for non-FDA approved indications in a developing country: a compromised outcome. Front Microbiol. 2017;27(8):497. Moghnieh R, Awad L, Abdallah D, Jadayel M, Sinno L, Tamim H, Jisr T, et al. Effect of a "handshake" stewardship program versus a formulary restriction policy on High-End antibiotic use, expenditure, antibiotic resistance, and patient outcome. J Chemother. 2020. https://doi.org/10.1080/1120009x.2020.1755589(Epub ahead of print). Ibrahim OM, Polk RE. Antimicrobial use metrics and benchmarking to improve stewardship outcomes: methodology, opportunities, and challenges. Infect Dis Clin N Am. 2014;28(2):195–214. Garner JS, Jarvis WR, Emori TG, Horan TC, Hughes JM. CDC definitions for nosocomial infections. Am J Infect Control. 1998;16:128–40. Centers for Disease Control and Prevention. National Healthcare Safety Network (NHSN) Patient Safety Component Manuel. 2018. Levy MM, Evans LE, Rhodes A. The surviving sepsis campaign bundle: 2018 update. Intensive Care Med. 2018;44(6):925–8. De Rosa FG, Corcione S, Di Perri G, Scaglione F. Re-defining tigecycline therapy. New Microbiol. 2015;38:121–36. Bassetti M, Nicolini L, Repetto E, Righi E, Del Bono V, Viscoli C. Tigecycline use in serious nosocomial infections: a drug use evaluation. BMC Infect Dis. 2010;10:287. Kuo SC, Wang FD, Fung CP, Chen LY, Chen SJ, Chiang MC, et al. Clinical experience with tigecycline as treatment for serious infections in elderly and critically ill patients. J Microbiol Immunol Infect. 2011;44:45–51. Montravers P, Dupont H, Bedos JP, Bret P, The Tigecycline Group. Tigecycline use in critically ill patients: a multicentre prospective observational study in the intensive care setting. Intensive Care Med. 2014;40:988–97. CLSI. Performance standards for antimicrobial susceptibility testing; twenty-eighth informational supplement. In: CLSI document M100-S28. Wayne: Clinical and Laboratory Standards Institute; 2018. Leonard H, Colodner R, Halachmi S, Segal E. Recent advances in the race to design a rapid diagnostic test for antimicrobial resistance. ACS Sens. 2018;3(11):2202–17. US FDA. FDA Drug Safety Communication: increased risk of death with tygacil (tigecycline) compared to other antibiotics used to treat similar infections. 2010. https://www.fda.gov/drugs/drug-safety-and-availability/fda-drug-safety-communication-increased-risk-death-tygacil-tigecycline-compared-other-antibiotics. Accessed 3 Aug 2020. US FDA. FDA Drug Safety Communication: FDA warns of increased risk of death with IV antibacterial Tygacil (tigecycline) and approves new Boxed Warning. 2013. https://www.fda.gov/drugs/drug-safety-and-availability/fda-drug-safety-communication-fda-warns-increased-risk-death-iv-antibacterial-tygacil-tigecycline. Accessed 3 Aug 2020. Hammoudi D, Moubareck CA, Aires J, Adaime A, Barakat A, Fayad N, et al. Countrywide spread of OXA-48 carbapenemase in Lebanon: surveillance and genetic characterization of carbapenem-non-susceptible Enterobacteriaceae in 10 hospitals over a one-year period. Int J Infect Dis. 2014;29:139–44. Daoud Z, Farah J, Sokhn ES, El Kfoury K, Dahdouh E, Masri K, et al. 
Multidrug-resistant Enterobacteriaceae in Lebanese hospital wastewater: implication in the one health concept. Microbial Drug Resist. 2018;24(2):166–74. Moghnieh RA, Kanafani ZA, Tabaja HZ, Sharara SL, Awad LS, Kanj SS. Epidemiology of common resistant bacterial pathogens in the countries of the Arab League. Lancet Infect Dis. 2018;18(12):e379–94. WHO. Global antimicrobial resistance surveillance system (GLASS) report: early implementation 2017–2018. Geneva: World Health Organization; 2018. Haddad N, Kanj SS, Awad LS, Abdallah DI, Moghnieh RA. The 2018 Lebanese Society of Infectious Diseases and Clinical Microbiology Guidelines for the use of antimicrobial therapy in complicated intra-abdominal infections in the era of antimicrobial resistance. BMC Infect Dis. 2019;19(1):293. Sartelli M, Chichom-Mefire A, Labricciosa FM, Hardcastle T, Abu-Zidan FM, Adesunkanmi AK, et al. The management of intra-abdominal infections from a global perspective: 2017 WSES guidelines for management of intraabdominal infections. World J Emerg Surg. 2017;12(1):29. Mazuski JE, Tessier JM, May AK, Sawyer RG, Nadler EP, Rosengart MR, et al. The surgical infection society revised guidelines on the management of intra-abdominal infection. Surg Infect. 2017;18(1):1–76. Heizmann WR, Löschmann PA, Eckmann C, von Eiff C, Bodmann KF, Petrik C. Clinical efficacy of tigecycline used as monotherapy or in combination regimens for complicated infections with documented involvement of multiresistant bacteria. Infection. 2015;43(1):37–43. Montravers P, Bassetti M, Dupont H, Eckmann C, Heizmann WR, Guirao X, et al. Efficacy of tigecycline for the treatment of complicated skin and soft-tissue infections in real-life clinical practice from five European observational studies. J Antimicrob Chemother. 2013;68(Suppl 2):ii15–24. Ni W, Han Y, Liu J, Wei C, Zhao J, Cui J, Wang R, Liu Y. Tigecycline treatment for carbapenem-resistant Enterobacteriaceae infections: a systematic review and meta-analysis. Medicine. 2016;95(11):e3126. Tan CK, Tang HJ, Lai CC, Chen YY, Chang PC, Liu WL. Correlation between antibiotic consumption and carbapenem-resistant Acinetobacter baumannii causing health care-associated infections at a hospital from 2005 to 2010. J Microbiol Immunol Infect. 2015;48(5):540–4. Mascarello M, Simonetti O, Knezevich A, Carniel LI, Monticelli J, Busetti M, et al. Correlation between antibiotic consumption and resistance of bloodstream bacteria in a University Hospital in North Eastern Italy, 2008–2014. Infection. 2017;45(4):459–67. Zhang D, Hu S, Sun J, Zhang L, Dong H, Feng W, et al. Antibiotic consumption versus the prevalence of carbapenem-resistant Gram-negative bacteria at a tertiary hospital in China from 2011 to 2017. J Infect Public Health. 2019;12(2):195–9. Pasquau J, de Jesus ES, Sadyrbaeva S, Aznarte P, Hidalgo-Tenorio C. The reduction in duration of antibiotic therapy as a key element of antibiotic stewardship programs. J Antimicrob Agents. 2015. https://doi.org/10.4172/2472-1212.1000103. Barlam TF, Cosgrove SE, Abbo LM, MacDougall C, Schuetz AN, Septimus EJ, et al. Implementing an Antibiotic Stewardship Program: guidelines by the Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America. Clin Infect Dis. 2016;62(10):e51–77. Solomkin JS, Mazuski JE, Bradley JS, Rodvold KA, Goldstein EJ, Baron EJ, et al. Diagnosis and management of complicated intra-abdominal infection in adults and children: guidelines by the Surgical Infection Society and the Infectious Diseases Society of America. 
Clin Infect Dis. 2010;50:133–64. Stevens DL, Bisno AL, Chambers HF, Dellinger EP, Goldstein EJ, et al. Practice guidelines for the diagnosis and management of skin and soft tissue infections: 2014 update by the Infectious Diseases Society of America. Clin Infect Dis. 2014;59(2):e10–52. Al-Mir H, Osman M, Azar N, Madec JY, Hamze M, Haenni M. Emergence of clinical mcr-1-positive Escherichia coli in Lebanon. J Glob Antimicrob Resist. 2019;19:83–4. Abdallah D, El Mchad H, Moghnieh R. Pandrug-resistant Acinetobacter baumannii infections: case series, contributing factors, outcomes and available treatment options. In: Matar G, editor. Clinical cases in microbiology and infectious diseases. New York: Elsevier; 2017. p. 47–57. Peterson LR. Squeezing the antibiotic balloon: the impact of antimicrobial classes on emerging resistance. Clin Microbiol Infect. 2005;11(Suppl 5):4–16. Lee J, Oh CE, Choi EH, Lee HJ. The impact of the increased use of piperacillin/tazobactam on the selection of antibiotic resistance among invasive Escherichia coli and Klebsiella pneumoniae isolates. Int J Infect Dis. 2013;17(8):e638–43. Chen IL, Lee CH, Su LH, Tang YF, Chang SJ, Liu JW. Antibiotic consumption and healthcare-associated infections caused by multidrug-resistant gram-negative bacilli at a large medical center in Taiwan from 2002 to 2009: implicating the importance of antibiotic stewardship. PLoS ONE. 2013;8(5):e65621. Hawser SP. Global monitoring of cross-resistance between tigecycline and minocycline, 2004–2009. J Infect. 2010;60(5):401–2. Sato T, Suzuki Y, Shiraishi T, Honda H, Shinagawa M, Yamamoto S, et al. Tigecycline non-susceptibility occurs exclusively in fluoroquinolone-resistant Escherichia coli clinical isolates, including the major multidrug-resistant lineages O25b:H4-ST131-H30R and O1-ST648. Antimicrob Agents Chemother. 2017;61(2):e01654-16. Aline Zaiter and Diana-Caroline Awwad contributed equally to this work Division of Infectious Diseases, Department of Internal Medicine, Makassed General Hospital, Beirut, Lebanon Rima Moghnieh Division of Infectious Diseases, Department of Internal Medicine, Hôtel Dieu de France, Beirut, Lebanon Pharmacy Department, Makassed General Hospital, Beirut, Lebanon Dania Abdallah & Lyn Awad School of Pharmacy, Beirut Arab University, Beirut, Lebanon Marwa Jadayel Infectious Disease and Residency Program, Internal Medicine, Central Michigan University, Saginaw, MI, 48602, USA Nicholas Haddad Department of Internal Medicine, American University of Beirut, Beirut, Lebanon Hani Tamim Faculty of Medicine, Lebanese University, Beirut, Lebanon Aline Zaiter & Diana-Caroline Awwad Department of Medical Research, Makassed General Hospital, Beirut, Lebanon Loubna Sinno Nursing Office, Makassed General Hospital, Beirut, Lebanon Salam El-Hassan Faculty of Arts and Sciences, American University of Beirut, Beirut, Lebanon Rawad Lakkis Department of Internal Medicine, Makassed General Hospital, Beirut, Lebanon Rabab Khalil Department of Laboratory Medicine, Makassed General Hospital, Beirut, Lebanon Tamima Jisr Dania Abdallah Lyn Awad Aline Zaiter Diana-Caroline Awwad RM was responsible for study conception, result analysis and drafting of the manuscript. DA performed data analysis and contributed to drafting and reviewing the final version of the manuscript. LA, MJ, AZ, DCA, SH, RL, and RK contributed to data collection and analysis. HT and LS were responsible for results analysis. TJ was responsible for the microbiological analysis. NH edited the final version of the manuscript. 
All authors read and approved the final manuscript. Correspondence to Rima Moghnieh. The institutional review board (IRB) committee of Makassed General Hospital, Beirut, Lebanon, granted this study ethical approval. The IRB committee waived the requirement of informed consent from patients due to the retrospective nature of this study. During the data collection phase, only subject case numbers were included. At a later stage, a different number was assigned to each of our cases to safeguard subject privacy. The contributing authors only performed data entry and analysis as well as the drafting of the manuscript. Moghnieh, R., Abdallah, D., Awad, L. et al. The effect of an antibiotic stewardship program on tigecycline use in a Tertiary Care Hospital, an intervention study. Ann Clin Microbiol Antimicrob 19, 35 (2020). https://doi.org/10.1186/s12941-020-00377-9 Off-label use FDA-approved indication
CommonCrawl
Accepted Manuscript: Turbulent Cosmic Ray–Mediated Shocks in the Hot Ionized Interstellar Medium

Abstract: The structure of shocks and turbulence are strongly modified during the acceleration of cosmic rays (CRs) at a shock wave. The pressure and the collisionless viscous stress decelerate the incoming thermal gas and thus modify the shock structure. A CR streaming instability ahead of the shock generates the turbulence on which CRs scatter. The turbulent magnetic field in turn determines the CR diffusion coefficient and further affects the CR energy spectrum and pressure distribution. The dissipation of turbulence contributes to heating the thermal gas. Within a multicomponent fluid framework, CRs and thermal gas are treated as fluids and are closely coupled to the turbulence. The system equations comprise the gas dynamic equations, the CR pressure evolution equation, and the turbulence transport equations, and we adopt typical parameters for the hot ionized interstellar medium. It is shown that the shock has no discontinuity but possesses a narrow but smooth transition. The self-generated turbulent magnetic field is much stronger than both the large-scale magnetic field and the preexisting turbulent magnetic field. The resulting CR diffusion coefficient is substantially suppressed and is more than three orders smaller near the shock than it is far upstream. The results are qualitatively consistent with certain observations.
Wang, B.-B.; Zank, G. P.; Zhao, L.-L.; Adhikari, L.

Thermal instability of halo gas heated by streaming cosmic rays
https://doi.org/10.1093/mnras/staa385
Kempski, Philipp; Quataert, Eliot (April 2020, Monthly Notices of the Royal Astronomical Society)
ABSTRACT: Heating of virialized gas by streaming cosmic rays (CRs) may be energetically important in galaxy haloes, groups, and clusters. We present a linear thermal stability analysis of plasmas heated by streaming CRs. We separately treat equilibria with and without background gradients, and with and without gravity. We include both CR streaming and diffusion along the magnetic-field direction. Thermal stability depends strongly on the ratio of CR pressure to gas pressure, which determines whether modes are isobaric or isochoric. Modes with $\boldsymbol {k \cdot B }\ne 0$ are strongly affected by CR diffusion. When the streaming time is shorter than the CR diffusion time, thermally unstable modes (with $\boldsymbol {k \cdot B }\ne 0$) are waves propagating at a speed ∝ the Alfvén speed. Halo gas in photoionization equilibrium is thermally stable independent of CR pressure, while gas in collisional ionization equilibrium is unstable for physically realistic parameters. In gravitationally stratified plasmas, the oscillation frequency of thermally overstable modes can be higher in the presence of CR streaming than the buoyancy/free-fall frequency. This may modify the critical t_cool/t_ff at which multiphase gas is present. The criterion for convective instability of a stratified, CR-heated medium can be written in the familiar Schwarzschild form ds_eff/dz < 0, where s_eff is an effective entropy involving the gas and CR pressures.
We discuss the implications of our results for the thermal evolution and multiphase structure of galaxy haloes, groups, and clusters.

Turbulent Reacceleration of Streaming Cosmic Rays
https://doi.org/10.3847/1538-4357/aca021
Bustard, Chad; Oh, S. Peng (December 2022, The Astrophysical Journal)
Subsonic, compressive turbulence transfers energy to cosmic rays (CRs), a process known as nonresonant reacceleration. It is often invoked to explain the observed ratios of primary to secondary CRs at ∼GeV energies, assuming wholly diffusive CR transport. However, such estimates ignore the impact of CR self-confinement and streaming. We study these issues in stirring box magnetohydrodynamic (MHD) simulations using Athena++, with field-aligned diffusive and streaming CR transport. For diffusion only, we find CR reacceleration rates in good agreement with analytic predictions. When streaming is included, reacceleration rates depend on plasma β. Due to streaming-modified phase shifts between CR and gas variables, they are slower than canonical reacceleration rates in low-β environments like the interstellar medium but remain unchanged in high-β environments like the intracluster medium. We also quantify the streaming energy-loss rate in our simulations. For sub-Alfvénic turbulence, it is resolution dependent (hence unconverged in large-scale simulations) and heavily suppressed compared to the isotropic loss rate v_A · ∇P_CR/P_CR ∼ v_A/L_0, due to misalignment between the mean field and isotropic CR gradients. Unlike acceleration efficiencies, CR losses are almost independent of magnetic field strength over β ∼ 1–100 and are, therefore, not the primary factor behind lower acceleration rates when streaming is included. While this paper is primarily concerned with how turbulence affects CRs, in a follow-up paper we consider how CRs affect turbulence by diverting energy from the MHD cascade, altering the pathway to gas heating and steepening the turbulent spectrum.

Acceleration and escape processes of high-energy particles in turbulence inside hot accretion flows
https://doi.org/10.1093/mnras/stz329
Kimura, Shigeo S.; Tomida, Kengo; Murase, Kohta (February 2019, Monthly Notices of the Royal Astronomical Society)
We investigate acceleration and propagation processes of high-energy particles inside hot accretion flows. The magnetorotational instability (MRI) creates turbulence inside accretion flows, which triggers magnetic reconnection and may produce non-thermal particles. They can be further accelerated stochastically by the turbulence. To probe the properties of such relativistic particles, we perform magnetohydrodynamic simulations to obtain the turbulent fields generated by the MRI, and calculate orbits of the high-energy particles using snapshot data of the MRI turbulence. We find that the particle acceleration is described by a diffusion phenomenon in energy space with a diffusion coefficient of the hard-sphere type: D_ε ∝ ε^2, where ε is the particle energy. Eddies in the largest scale of the turbulence play a dominant role in the acceleration process. On the other hand, the stochastic behaviour in configuration space is not usual diffusion but superdiffusion: the radial displacement increases with time faster than that in the normal diffusion. Also, the magnetic field configuration in the hot accretion flow creates outward bulk motion of high-energy particles. This bulk motion is more effective than the diffusive motion for higher energy particles.
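A brief aside on the "hard-sphere" scaling quoted in the Kimura et al. abstract above: if the energy-diffusion coefficient grows as the square of the particle energy, the stochastic acceleration time becomes roughly energy independent. In standard textbook notation (not necessarily the exact formulation of that paper):

```latex
% Hard-sphere-type stochastic acceleration: D_eps ~ eps^2 gives an
% energy-independent acceleration timescale.
\begin{align}
  D_{\varepsilon} &\simeq D_{0}\,\varepsilon^{2}, &
  t_{\mathrm{acc}} &\sim \frac{\varepsilon^{2}}{D_{\varepsilon}} \simeq \frac{1}{D_{0}},
\end{align}
% so particles of all energies are energized on a similar timescale, which is
% consistent with the largest turbulent eddies dominating the acceleration.
```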
Our results imply that typical active galactic nuclei that host hot accretion flows can accelerate CRs up to ε ∼ 0.1−10 PeV.

A consistent reduced-speed-of-light formulation of cosmic ray transport valid in weak- and strong-scattering regimes
https://doi.org/10.1093/mnras/stab2635
Hopkins, Philip F; Squire, Jonathan; Butsky, Iryna S (December 2021, Monthly Notices of the Royal Astronomical Society)
ABSTRACT: We derive a consistent set of moment equations for cosmic ray (CR)-magnetohydrodynamics, assuming a gyrotropic distribution function (DF). Unlike previous efforts, we derive a closure, akin to the M1 closure in radiation hydrodynamics (RHD), that is valid in both the nearly isotropic DF and/or strong-scattering regimes, and the arbitrarily anisotropic DF or free-streaming regimes, as well as allowing for anisotropic scattering and transport/magnetic field structure. We present the appropriate two-moment closure and equations for various choices of evolved variables, including the CR phase space DF f, number density n, total energy e, kinetic energy ϵ, and their fluxes or higher moments, and the appropriate coupling terms to the gas. We show that this naturally includes and generalizes a variety of terms including convection/fluid motion, anisotropic CR pressure, streaming, diffusion, gyro-resonant/streaming losses, and re-acceleration. We discuss how this extends previous treatments of CR transport including diffusion and moment methods and popular forms of the Fokker–Planck equation, as well as how this differs from the analogous M1-RHD equations. We also present two different methods for incorporating a reduced speed of light (RSOL) to reduce time-step limitations: in both, we carefully address where the RSOL (versus true c) must appear for the correct behaviour to be recovered in all interesting limits, and show how current implementations of CRs with an RSOL neglect some additional terms.

But what about...: cosmic rays, magnetic fields, conduction, and viscosity in galaxy formation
https://doi.org/10.1093/mnras/stz3321
Hopkins, Philip F; Chan, T K; Garrison-Kimmel, Shea; Ji, Suoqing; Su, Kung-Yi; Hummels, Cameron B; Kereš, Dušan; Quataert, Eliot; Faucher-Giguère, Claude-André (March 2020, Monthly Notices of the Royal Astronomical Society)
ABSTRACT: We present and study a large suite of high-resolution cosmological zoom-in simulations, using the FIRE-2 treatment of mechanical and radiative feedback from massive stars, together with explicit treatment of magnetic fields, anisotropic conduction and viscosity (accounting for saturation and limitation by plasma instabilities at high β), and cosmic rays (CRs) injected in supernovae shocks (including anisotropic diffusion, streaming, adiabatic, hadronic and Coulomb losses). We survey systems from ultrafaint dwarf ($M_{\ast }\sim 10^{4}\, \mathrm{M}_{\odot }$, $M_{\rm halo}\sim 10^{9}\, \mathrm{M}_{\odot }$) through Milky Way/Local Group (MW/LG) masses, systematically vary uncertain CR parameters (e.g. the diffusion coefficient κ and streaming velocity), and study a broad ensemble of galaxy properties [masses, star formation (SF) histories, mass profiles, phase structure, morphologies, etc.]. We confirm previous conclusions that magnetic fields, conduction, and viscosity on resolved ($\gtrsim 1\,$ pc) scales have only small effects on bulk galaxy properties.
CRs have relatively weak effects on all galaxy properties studied in dwarfs ($M_{\ast } \ll 10^{10}\, \mathrm{M}_{\odot }$, $M_{\rm halo} \lesssim 10^{11}\, \mathrm{M}_{\odot }$), or at high redshifts (z ≳ 1–2), for any physically reasonable parameters. However, at higher masses ($M_{\rm halo} \gtrsim 10^{11}\, \mathrm{M}_{\odot }$) and z ≲ 1–2, CRs can suppress SF and stellar masses by factors ∼2–4, given reasonable injection efficiencies and relatively high effective diffusion coefficients $\kappa \gtrsim 3\times 10^{29}\, {\rm cm^{2}\, s^{-1}}$. At lower κ, CRs take too long to escape dense star-forming gas and lose their energy to collisional hadronic losses, producing negligible effects on galaxies and violating empirical constraints from spallation and γ-ray emission. At much higher κ CRs escape too efficiently to have appreciable effects even in the CGM. But around $\kappa \sim 3\times 10^{29}\, {\rm cm^{2}\, s^{-1}}$, CRs escape the galaxy and build up a CR-pressure-dominated halo which maintains approximate virial equilibrium and supports relatively dense, cool (T ≪ 10^6 K) gas that would otherwise rain on to the galaxy. CR 'heating' (from collisional and streaming losses) is never dominant.

Wang, B.-B., Zank, G. P., Zhao, L.-L., & Adhikari, L. Turbulent Cosmic Ray–Mediated Shocks in the Hot Ionized Interstellar Medium. The Astrophysical Journal, 932(1). doi:10.3847/1538-4357/ac6ddc. Retrieved from https://par.nsf.gov/biblio/10355939.
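As context for the Wang et al. record cited just above (the abstract that opens this entry), the quoted suppression of the CR diffusion coefficient by more than three orders of magnitude near the shock is consistent with a standard quasi-linear scattering estimate. The scaling below is a generic textbook relation, not the paper's actual transport coefficients.

```latex
% Quasi-linear estimate: the parallel CR diffusion coefficient falls as the
% resonant turbulent field delta-B grows relative to the mean field B_0.
\begin{equation}
  \kappa_{\parallel} \sim \tfrac{1}{3}\, v\, r_{g}
  \left( \frac{B_{0}}{\delta B} \right)^{2},
\end{equation}
% so growing delta-B by a factor of ~30 between far upstream and the shock
% is enough, in this rough estimate, to reduce kappa by about 10^3.
```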
CommonCrawl
X. Liu, H. Lin, X. Chen, W. Shen, X. Ye, Y. Lin, Z. Lin, S. Zhou, M. Gao, Y. Ding, N. He Published online by Cambridge University Press: 08 March 2019, e117
Feeding corn grain steeped in citric acid modulates rumen fermentation and inflammatory responses in dairy goats
Y. Z. Shen, L. Y. Ding, L. M. Chen, J. H. Xu, R. Zhao, W. Z. Yang, H. R. Wang, M. Z. Wang
Journal: animal / Volume 13 / Issue 2 / February 2019
Cereal grains treated with organic acids have been shown to increase ruminal resistant starch and can relieve the risk of ruminal acidosis. However, previous studies mainly focussed on acid-treated barley, and the effects of organic acid-treated corn are still unknown. The objectives of this study were to evaluate whether feeding ground corn steeped in citric acid (CA) would affect ruminal pH and fermentation patterns, milk production and innate immunity responses in dairy goats. Eight ruminally cannulated Saanen dairy goats were used in a crossover design experiment. Each experimental period was 21 days long, including 14 days for adaptation to the new diet and 7 days for sampling and data collection. The goats were fed a high-grain diet containing 30% hay and 70% corn-based concentrate. The corn was steeped either in water (control) or in 0.5% (wt/vol) CA solution for 48 h. Goats fed the CA diet showed improved ruminal pH status, with greater mean and minimum ruminal pH, shorter (P<0.05) duration of ruminal pH<5.6, and less area of ruminal pH<5.6, 5.8 and 6.0. The concentration of total volatile fatty acid and the molar proportion of propionate were lower, but the molar proportion of acetate was greater (P<0.05), in goats fed the CA diet than the control diet. The concentration of ruminal lipopolysaccharide (LPS) was lower (P<0.05), and that of lactic acid also tended (P<0.10) to be lower, in goats fed CA than in the control. Although dry matter intake, actual milk yield, and the yield and content of milk protein and lactose were not affected, the milk fat content and 4% fat-corrected milk tended (P<0.10) to be greater in goats fed the CA diet. For the inflammatory responses, peripheral LPS did not differ, whereas the concentrations of LPS binding protein and serum amyloid A tended (P<0.10) to be lower in goats fed the CA diet. Similarly, goats fed the CA diet had lower (P<0.05) concentrations of haptoglobin and tumour necrosis factor. These results indicate that feeding ground corn treated with CA effectively improved ruminal pH status, thus alleviating the risk of ruminal acidosis and reducing the inflammatory response, and tended to improve milk yield and milk fat test.
He+ Irradiation Induced Cracking and Exfoliating on the Surface of Ti3AlC2
H. H. Shen, F. Z. Li, S. M. Peng, H. B. Zhang, K. Sun, X. T. Zu
Yield, water productivity and economic return of dryland wheat in the Loess Plateau in response to conservation tillage practices
Z. LI, Q. ZHANG, Q. YANG, X. YANG, J. LI, S. CUI, Y. SHEN
Journal: The Journal of Agricultural Science / Volume 155 / Issue 8 / October 2017
Winter wheat (Triticum aestivum L.)
production on the Loess Plateau in China has been threatened by water scarcity and climate change during the last decade. Sustainable crop production in this region requires managerial practices that can provide high yield and high water productivity (WP). A 7-year (2001–2008) study at the Loess Plateau Research Station of Lanzhou University investigated the effects of various conservation tillage practices on grain yield, soil water content (SWC), WP and economic return of winter wheat production. Tillage treatments included: conventional tillage (T), conventional tillage followed by stubble retention (TS), no-till (NT) and no-till followed by stubble retention (NTS). Over the entire experimental period, grain yield and WP of winter wheat ranged from 1279 to 4894 kg/ha and 0·32 to 2·41 kg/m3, respectively. Both were significantly affected by tillage treatment and year, while SWC was only affected by year. Grain yield and WP in TS was increased by 4·9, 12·1, 0·9% and 13·7, 20·4 and 3·9% compared with NTS, NT and T, respectively, over seven growing seasons. Additionally, a multiple linear regression analysis indicated that grain yield is mainly limited by SWC during planting. Despite its lower grain yield, the NTS treatment increased economic benefit by US$ 328, US$ 23 and US$ 87/ha compared with TS, NT and T, respectively. Therefore, it is suggested that increasing soil water storage at wheat sowing time and encouraging the use of NTS could improve economic returns in this region. Long-term effectiveness of plasma-derived hepatitis B vaccine 22–28 years after immunization in a hepatitis B virus endemic rural area: is an adult booster dose needed? H. LI, G. J. LI, Q. Y. CHEN, Z. L. FANG, X. Y. WANG, C. TAN, Q. L. YANG, F. Z. WANG, F. WANG, S. ZHANG, S. L. BI, L. P. SHEN Journal: Epidemiology & Infection / Volume 145 / Issue 5 / April 2017 Longan County is considered a highly endemic area for hepatitis B virus (HBV). The plasma-derived vaccine has been used in newborns in this area since 1987. A cross-sectional survey was conducted to evaluate the long-term effectiveness of this vaccine. In total, 1634 participants born during 1987–1993 and who had received a series of plasma-derived HB vaccinations at ages 0, 1, and 6 months were enrolled. Serological HBV markers were detected and compared with previous survey data. Overall the prevalence of hepatitis B surface antigen (HBsAg) in all participants was 3·79%; 3·47% of subjects who had received the first dose within 24 h were HBsAg positive, and 8·41% of subjects who had received a delayed first dose were also HBsAg positive. There were 1527 subjects identified who had received the first dose within 24 h and whose HBsAg and anti-HBc prevalence increased yearly after immunization, while the anti-HBs-positive rate and vaccine effectiveness declined. The geometric mean concentration of antibody in the anti-HB-positive participants was 55·13 mIU/ml and this declined after immunization. Fewer than 2·0% of participants had anti-HB levels ⩾1000 mIU/ml. The data show that the protective efficacy of the plasma-derived vaccinations declined and administration of HB vaccine within 24 h of birth was very important. To reduce the risk of HBV infection in this highly endemic area, a booster dose might be necessary if anti-HBs levels fall below 10 mIU/ml after age 18 years. Furthermore, studies on the immune memory induced by plasma-derived HB vaccine are needed. 
High prevalence of fosfomycin resistance gene fosA3 in bla CTX-M-harbouring Escherichia coli from urine in a Chinese tertiary hospital during 2010–2014 X.-L. CAO, H. SHEN, Y.-Y. XU, X.-J. XU, Z.-F. ZHANG, L. CHENG, J.-H. CHEN, Y. ARAKAWA Journal: Epidemiology & Infection / Volume 145 / Issue 4 / March 2017 Fosfomycin has become a therapeutic option in urinary tract infections. We identified 57 fosfomycin-resistant Escherichia coli from 465 urine-derived extended-spectrum β-lactamase (ESBL)-producing isolates from a Chinese hospital during 2010–2014. Of the 57 fosfomycin-resistant isolates, 51 (89·5%) carried fosA3, and one carried fosA1. Divergent pulsed-field gel electrophoresis profiles and multi-locus sequence typing results revealed high clonal diversity in the fosA3-positive isolates. Conjugation experiments showed that the fosA3 genes from 50 isolates were transferable, with IncFII or IncI1 being the most prevalent types of plasmids. The high prevalence of fosA3 was closely associated with that of bla CTX-M. Horizontal transfer, rather than clonal expansion, might play a central role in dissemination. Such strains may constitute an important reservoir of fosA3 and bla CTX-M, which may well be readily disseminated to other potential human pathogens. Since most ESBL-producing E. coli have acquired resistance to fluoroquinolones worldwide, further spread of fosA3 in such E. coli isolates should be monitored closely. 10Be, 14C Distribution, and Soil Production Rate in a Soil Profile of a Grassland Slope at Heshan Hilly Land, Guangdong CD Shen, J Beer, S Ivy-Ochs, Y Sun, W Yi, P W Kubik, M Suter, Z Li, S Peng, Y Yang Concentrations of organic carbon, carbon isotopes (13C and 14C), atmospheric 10Be in soil, and in situ 10Be in bedrock and weathering rock were determined in a study of a profile of a grassland slope at the Heshan Hilly Land Interdisciplinary Experimental Station, Chinese Academy of Sciences, in Guangdong Province, China. A good linear relationship between depth and the 14C apparent age of the organic carbon demonstrates that the rock weathering process and the accumulation process of organic matter in the slope are relatively stable. Both 14C and 10Be results show that about 34% of soil in the grassland slope has been eroded during the past 3800 yr. The 10Be results for interstitial soil from weathered rocks show that the 90-cm-thick weathering rock layer above the bedrock has evolved over a period of 1.36 Myr. The concentrations of in situ 10Be in the weathered rock and bedrock are 10.7 × 10⁴ atoms/g and 8.31 × 10⁴ atoms/g, respectively. The weathering rate of the bedrock, equivalent to the soil production rate, was estimated at 8.8 × 10⁻⁴ cm/yr, and the exposure ages of the weathered rock and the bedrock were 72 kyr and 230 kyr, respectively. VLBI Observations of a Sample of 15 EGRET-detected AGNs at 5 GHz X. Y. Hong, D.R. Jiang, R. T. Schilizzi, G. Nicolson, Z.-Q. Shen, W. H. Wang We report VLBI observations of 15 EGRET-detected AGNs with European VLBI Network (EVN) at 5 GHz. All sources in the sample display core-jet structures. VLBI of Southern EGRET Identifications D.W. Murphy, S.J. Tingay, R.A. Preston, D.L. Meier, D.L. Jones, P.G. Edwards, M.E. Costa, J.E.J. Lovell, P.M. McCulloch, D.L. Jauncey, J.E. Reynolds, A.K. Tzioumis, E.A. King, G.D. Nicolson, J.F.H. Quick, T.-S. Wan, Z.-Q. Shen Published online by Cambridge University Press: 12 April 2016, pp. 55-56 We have undertaken VLBI observations of 8 Southern Hemisphere EGRET radio sources.
Using our data as well as data obtained from the literature we have examined the difference in radio properties between gamma-ray loud and gamma-ray quiet radio sources. In particular, we find no evidence that gamma-ray loud radio sources lie preferentially in sources with straight radio jets as has been suggested. 5-GHz VLBI Imaging Observations of 7 Equatorial AGNs Z.-Q. Shen, D. R. Jiang, Y.J. Chen, T.-S. Wan Since 1992 we have been conducting a 5-GHz VLBI imaging survey of southern and equatorial radio sources. So far, we have published the results of two observing sessions with 26 southern radio sources imaged in total (Shen et al. 1997; 1998). In this paper, we present the preliminary results of the third session of observations of 7 equatorial sources in the sample. Population genetic structure and migration patterns of Liriomyza sativae in China: moderate subdivision and no Bridgehead effect revealed by microsatellites X.-T. Tang, Y. Ji, Y.-W. Chang, Y. Shen, Z.-H. Tian, W.-R. Gong, Y.-Z. Du Journal: Bulletin of Entomological Research / Volume 106 / Issue 1 / February 2016 While Liriomyza sativae (Diptera: Agromyzidae), an important invasive pest of ornamentals and vegetables, has been found in China for the past two decades, few studies have focused on its genetics or route of invasion. In this study, we collected 288 L. sativae individuals across 12 provinces to explore its population genetic structure and migration patterns in China using seven microsatellites. We found relatively low levels of genetic diversity but moderate population genetic structure (0.05 < F ST < 0.15) in L. sativae from China. All populations deviated significantly from the Hardy–Weinberg equilibrium due to heterozygote deficiency. Molecular variance analysis revealed that more than 89% of variation was among samples within populations. A UPGMA dendrogram revealed that SH and GXNN populations formed one cluster separate from the other populations, which is in accordance with STRUCTURE and GENELAND analyses. A Mantel test indicated that genetic distance was not correlated to geographic distance (r = −0.0814, P = 0.7610), coupled with high levels of gene flow (M = 40.1–817.7), suggesting a possible anthropogenic influence on the spread of L. sativae in China and on the effect of hosts. The trend of asymmetrical gene flow was from southern to northern populations in general and did not exhibit a Bridgehead effect during the course of invasion, as can be seen by the low genetic diversity of southern populations. Herschel/SPIRE colors of galaxies at z>2.5 F.-T. Yuan, V. Buat, D. Burgarella, L. Ciesla, S. Heinis, S. Shen, Z.-Y. Shao, J. -L. Hou Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S319 / August 2015 Published online by Cambridge University Press: 17 August 2016, p. 110 We compiled a sample of 57 galaxies with spectroscopically confirmed redshifts and SPIRE detections in all three bands at z = 2.5-6.4, and compared their SPIRE colors with SED templates from local and high-z libraries. We find that local calibrations are inconsistent with high-z observations. For high-z libraries, the templates with an evolution from z = 0 to 3 can describe the average colors of the observations at high redshift well. Based on the templates, we defined color cuts to divide the SPIRE color-color diagram into different regions with different mean redshifts. We tested this method and two other color cut methods using a larger sample (783 galaxies) with photometric redshifts.
We find that these color cuts can separate the sample into subsamples with different mean redshifts, but the dispersion of redshifts in each subsample is considerable. Additional information is needed for better sampling. Hepatitis C virus infection and risk factors in the general population: a large community-based study in eastern China, 2011–2012 P. HUANG, L. G. ZHU, X. J. ZHAI, Y. F. ZHU, M. YUE, J. SU, J. WANG, H. T. YANG, Y. ZHANG, H. B. SHEN, Z. H. PENG, R. B. YU Journal: Epidemiology & Infection / Volume 143 / Issue 13 / October 2015 Limited information is available on the prevalence of hepatitis C virus (HCV) in the general population in China. A community-based epidemiological study was conducted in three counties in eastern China. A total of 149 175 individuals were investigated in 60 communities in three counties in Jiangsu province, eastern China, of whom 1175 subjects [0·79%, 95% confidence interval (CI) 0·74–0·83] were HCV antibody positive. The prevalence was low in children (0·09%, 95% CI 0·04–0·17), but increased progressively from adolescents (0·20%, 95% CI 0·15–0·28) to adults aged ⩾21 years (95% CI 0·15–1·64). Women had a higher prevalence of HCV infection than men in most age groups. In a multilevel regression analysis, age, sex, education, occupation, blood transfusion [odds ratio (OR) 2·91, 95% CI 1·09–5·37], invasive testing (OR 1·28, 95% CI 1·14–1·61), and dental therapy (OR 2·27, 95% CI 1·41–3·42) were associated with HCV infection. In conclusion, although the prevalence of HCV in this population was lower than reported from national levels, the total reservoir of infection is significant and warrants public health measures, such as health education to limit the magnitude of the problem. Neutron diffraction residual stress measurements of welds made with pulsed tandem gas metal arc welding (PT-GMAW) A.M. Paradowska, N. Larkin, H. Li, Z. Pan, C. Shen, M. Law Journal: Powder Diffraction / Volume 29 / Issue S1 / December 2014 Published online by Cambridge University Press: 10 November 2014, pp. S24-S27 Pulsed tandem gas metal arc welding (PT-GMAW) is being developed to increase productivity and minimise weld-induced distortion in ship-building. The PT-GMAW process was used in pulse–pulse mode to butt-weld two steels of different strength and thickness; the residual stress and hardness profiles of the welds are reported and correlated. Novel Polymer Gel Electrolytes with Poly(oxyethylene)-Amidoacid Microstructures for Highly Efficient Quasi-Solid-State Dye-Sensitized Solar Cells Sheng-Yen Shen, Rui-Xuan Dong, Po-Ta Shih, Kuo-Chuan Ho, Jiang-Jen Lin Published online by Cambridge University Press: 07 October 2014, mrss14-1667-b07-05 A cross-linked copolymer was designed and synthesized by the imidation of poly(oxyethylene)-diamine and 4,4'-oxydiphthalic anhydride, followed by a late-stage curing to generate the cross-linked gels. The copolymers, consisting of crosslinking sites and multiple functionalities such as poly(oxyethylene) segments, amido-acids, imides, and amine termini, were characterized by Fourier Transform Infrared Spectroscopy. After the self-curing at 80 °C, the gel-like material was able to absorb the liquid form of the electrolytes in the medium of propylene carbonate (PC), dimethylformamide (DMF), and N-methyl-2-pyrrolidone (NMP). By using a field emission scanning electron microscope, we observed a 3D interconnected nanochannel microstructure within which the liquid electrolytes were absorbed.
When the novel polymer gel electrolyte (PGE) was fabricated into a dye-sensitized solar cell (DSSC), an extremely high photovoltaic performance was demonstrated. The PGE, which absorbed 76.7 wt% of the liquid electrolyte (soaking in the PC solution) based on the polymer's weight, gave rise to a power conversion efficiency of 8.31%, superior to that (7.89%) of the DSSC with liquid electrolytes. It was further demonstrated that the cell had long-term stability during a 1000 h at-rest test at room temperature, with at most a slight decrease in efficiency of 5%. This is the first demonstration of a PGE exhibiting a higher performance than its liquid-electrolyte counterpart cell. The observation is ascribed to the suppression of the back electron transfer through the unique morphology of the polymer microstructures. High concentrate-induced subacute ruminal acidosis (SARA) increases plasma acute phase proteins (APPs) and cortisol in goats Y. Y. Jia, S. Q. Wang, Y. D. Ni, Y. S. Zhang, S. Zhuang, X. Z. Shen Journal: animal / Volume 8 / Issue 9 / September 2014 The aim of this study was to investigate changes in stress status in dairy goats with induced subacute ruminal acidosis (SARA). The level of acute phase proteins (APPs) including haptoglobin (HP) and serum amyloid A (SAA) in plasma and their mRNA expression in liver, as well as plasma cortisol and gene expression of key factors controlling cortisol synthesis in the adrenal cortex, were compared between SARA and control goats. SARA was induced by feeding a high-concentrate diet (60% concentrate of dry matter) for 3 weeks (SARA, n=6), while control goats (Con, n=6) received a low-concentrate diet (40% concentrate of dry matter) during the experimental period. SARA goats showed ruminal pH below 5.8 for more than 3 h per day, which was significantly lower than in control goats (pH>6.0). SARA goats demonstrated a significant increase in hepatic HP and SAA mRNA expression (P<0.05), and the level of HP but not SAA in plasma was markedly increased compared with controls (P<0.05). The level of cortisol in plasma showed a trend to increase in SARA goats (0.05<P<0.1). In the adrenal cortex, mRNA expression of 17α-hydroxylase cytochrome (P450 17α) (P<0.01) and 3β-hydroxysteroid dehydrogenase (3β-HSD) (P<0.05) was significantly increased in SARA goats. The contents of 3β-HSD and P450 side-chain cleavage protein were increased by 58.6% and 39.4%, respectively, but did not reach statistical significance (P>0.05). These results suggested that SARA goats experienced a degree of stress, exhibiting an increase in HP production and cortisol secretion. Subtypes of major depression: latent class analysis in depressed Han Chinese women Y. Li, S. Aggen, S. Shi, J. Gao, Y. Li, M. Tao, K. Zhang, X. Wang, C. Gao, L. Yang, Y. Liu, K. Li, J. Shi, G. Wang, L. Liu, J. Zhang, B. Du, G. Jiang, J. Shen, Z. Zhang, W. Liang, J. Sun, J. Hu, T. Liu, X. Wang, G. Miao, H. Meng, Y. Li, C. Hu, Y. Li, G. Huang, G. Li, B. Ha, H. Deng, Q. Mei, H. Zhong, S. Gao, H. Sang, Y. Zhang, X. Fang, F. Yu, D. Yang, T. Liu, Y. Chen, X. Hong, W. Wu, G. Chen, M. Cai, Y. Song, J. Pan, J. Dong, R. Pan, W. Zhang, Z. Shen, Z. Liu, D. Gu, X. Wang, X. Liu, Q. Zhang, J. Flint, K. S. Kendler Journal: Psychological Medicine / Volume 44 / Issue 15 / November 2014 Background. Despite substantial research, uncertainty remains about the clinical and etiological heterogeneity of major depression (MD). Can meaningful and valid subtypes be identified and would they be stable cross-culturally?
Symptoms at their lifetime worst depressive episode were assessed at structured psychiatric interview in 6008 women of Han Chinese descent, age ⩾30 years, with recurrent DSM-IV MD. Latent class analysis (LCA) was performed in Mplus. Using the nine DSM-IV MD symptomatic A criteria, the 14 disaggregated DSM-IV criteria and all independently assessed depressive symptoms (n = 27), the best LCA model identified respectively three, four and six classes. A severe and non-suicidal class was seen in all solutions, as was a mild/moderate subtype. An atypical class emerged once bidirectional neurovegetative symptoms were included. The non-suicidal class demonstrated low levels of worthlessness/guilt and hopelessness. Patterns of co-morbidity, family history, personality, environmental precipitants, recurrence and body mass index (BMI) differed meaningfully across subtypes, with the atypical class standing out as particularly distinct. Conclusions. MD is a clinically complex syndrome with several detectable subtypes with distinct clinical and demographic correlates. Three subtypes were most consistently identified in our analyses: severe, atypical and non-suicidal. Severe and atypical MD have been identified in multiple prior studies in samples of European ethnicity. Our non-suicidal subtype, with low levels of guilt and hopelessness, may represent a pathoplastic variant reflecting Chinese cultural influences. Ion motion effects on the generation of short-cycle relativistic laser pulses during radiation pressure acceleration W. P. Wang, X. M. Zhang, X. F. Wang, X. Y. Zhao, J. C. Xu, Y. H. Yu, L. Q. Yi, Y. Shi, L. G. Zhang, T. J. Xu, C. Liu, Z. K. Pei, B. F. Shen Journal: High Power Laser Science and Engineering / Volume 2 / 01 July 2014 Published online by Cambridge University Press: 30 April 2014, e9 Print publication: 01 July 2014 The effects of ion motion on the generation of short-cycle relativistic laser pulses during radiation pressure acceleration are investigated by analytical modeling and particle-in-cell simulations. Studies show that the rear part of the transmitted pulse modulated by ion motion is sharper compared with the case of the electron shutter only. In this study, the ions further modulate the short-cycle pulses transmitted. A 3.9 fs laser pulse with an intensity of $1.33\times 10^{21}\ {\rm W}\ {\rm cm}^{-2}$ is generated by properly controlling the motions of the electron and ion in the simulations. The short-cycle laser pulse source proposed can be applied in the generation of single attosecond pulses and electron acceleration in a small bubble regime. Survivability and molecular variation in Vibrio cholerae from epidemic sites in China X. Q. LI, M. WANG, Z. A. DENG, J. C. SHEN, X. Q. ZHANG, Y. F. LIU, Y. S. CAI, X. W. WU, B. DI Journal: Epidemiology & Infection / Volume 143 / Issue 2 / January 2015 The survival behaviour of Vibrio cholerae in cholera epidemics, together with its attributes of virulence-associated genes and molecular fingerprints, are significant for managing cholera epidemics. Here, we selected five strains representative of V. cholerae O1 and O139 involved in cholera events, examined their survival capacity in large volumes of water sampled from epidemic sites of a 2005 cholera outbreak, and determined virulence-associated genes and molecular subtype changes of the surviving isolates recovered. The five strains exhibited different survival capacities varying from 17 to 38 days. 
The virulence-associated genes of the surviving isolates remained unchanged, while their pulsotypes underwent slight variation. In particular, one waterway-isolated strain maintained virulence-associated genes and evolved to share the same pulsotype as patient strains, highlighting its role in the cholera outbreak. The strong survival capacity and molecular attributes of V. cholerae might account for its persistence in environmental waters and the long duration of the cholera outbreak, allowing effective control measures. Serological survey of a new type of reovirus in humans in China B. BAI, H. SHEN, Y. HU, J. HOU, Z. LIU, R. LI, Y. CHAI, W. HUANG, P. MAO To evaluate the presence of a new type of reovirus (designated R4) in humans, we determined the prevalence of specific antibodies using a neutralization assay and ELISA. The sera from 97 healthy people and 219 patients in our hospital with measles, hand-foot-and-mouth disease, liver diseases, and diarrhoea were investigated. Although the study population was limited, our data suggested that R4 is widespread in the human population. A significantly higher level of R4-specific antibody in patients than in healthy people is worthy of consideration, since it poses a risk for aggravation of the extant illness by the reovirus.
CommonCrawl
Is there any history of the methodology that Balmer used for the spectral line formula? What I am referring to is the Balmer formula as it appears in Wikipedia. To come up with this series and its constants by trial and error seems to be asking a little too much, but I can't understand how one would go about deriving this formula, which involves such exquisite measurements of its constants. Specifically, was there perhaps some experimental methodology that makes it a little easier to see how he may have arrived at the formula? It appears to be an ad hoc, truly amazing leap of intuition. Which may be the case. Certainly doesn't hurt to ask. mathematics atomic-theory nuclear-physics SedumjoySedumjoy Here the German Wikipedia helps better than the English one: Balmer was interested in a very broad range of sciences and "sciences" including numerology and cabbala. For instance he calculated the number of steps of pyramids and the floor plan of Jewish temples, probably from the information supplied by the Bible. Therefore it is not surprising that he was interested in the problem of spectral wavelengths, which Eduard Hagenbach (Prof. of physics at Basel university), knowing Balmer's deciphering skill, had suggested to him. In 1885 Balmer found the well-known formula $\lambda = \frac{m^2h}{m^2-n^2}$ where $n = 2$ and $m = 3, 4, 5, ...$ which in 1888 was generalized by Rydberg. Balmer predicted the line for m = 7, which was confirmed by Ångström. Other lines, i.e. higher quantum states, were found in the spectra of white stars. Perhaps also of interest, but only available in German from the Swiss Mathematical Society: Balmer also lectured as a Privatdozent at the university of Basel. Alas there were no students interested. Of about 70 announced courses 50 did not take place. Only very rarely did he have more than 1 or 2 students. So his main job remained to teach girls how to add and multiply fractions and to express thoughts in written form, 17 hours a week. In his notes of 1891 he speculates about atomic movement, points of force in circular or elliptic motion and even something resembling waves. These (historically) interesting notes are partially available on p. 59. OttoOtto
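To make the quality of Balmer's fit concrete, here is a short worked check of the formula quoted above. The numerical value of the constant is supplied here for illustration only (Balmer's h is roughly 364.6 nm, his 3645.6 Å); it is not stated in the answer itself.
\[ \lambda = h\,\frac{m^2}{m^2 - n^2}, \qquad n = 2, \quad h \approx 364.6\ \mathrm{nm} \]
\[ m = 3:\quad \lambda \approx 364.6 \times \tfrac{9}{5} \approx 656.3\ \mathrm{nm} \]
\[ m = 4:\quad \lambda \approx 364.6 \times \tfrac{16}{12} \approx 486.1\ \mathrm{nm} \]
\[ m = 7:\quad \lambda \approx 364.6 \times \tfrac{49}{45} \approx 397.0\ \mathrm{nm} \]
The m = 3 and m = 4 values agree with the measured hydrogen lines to within a fraction of a nanometre, and the m = 7 value is the then-unobserved line whose confirmation is mentioned above; this is what made the fit look so exquisite despite its trial-and-error origin.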
CommonCrawl
How to properly typeset all forms of punctuation used in English-language documents? English documents tend to use these types of punctuation: brackets of various kinds colons and semicolons dashes and hyphens exclamation and question marks Some software, such as word processors, will attempt to automatically handle and properly typeset punctuation, but with TeX, everything is done manually. One must be careful in placing such punctuation in a document. There are two issues: Some punctuation requires special symbols before and after, depending on the situation, to ensure they are placed correctly in the document. Some punctuation is easily confused with other, similar punctuation. According to Is a period after an abbreviation the same as an end of sentence period? and Abbreviations and spacing, to give proper spacing after a period appearing within a sentence, and not used to mark the end, one should use a \. Mr.\ Green ate lunch at approx.\ 10:45. According to Range of Dates (1999-01 -- 2012-12)?, to display a range of dates, one should use --, rather than -. Mr.\ Green (1950--1999) always ate three fish sandwiches for lunch. Generally, style guides do not consider such small details, however, TeX offers such control which the authors of these guides may have overlooked. In addition to the recommendations above, about the proper typesetting of periods and hyphens, what other special considerations must one take when typesetting the other punctuation in various situations with TeX? Has anyone compiled a comprehensive list which details the best method of typesetting the various forms of punctuation typically found in English-language documents? punctuation typography best-practices VillageVillage ~ is used to provide an unbreakable space. – user11232 May 10 '12 at 9:34 The tilde is unbreakable and should be placed between words which should not be broken, e.g. Dr.~Knuth. You should place a \ after a dot which is not a fullstop, for example after e.g.\ this and that. This is not required if you are using \frenchspacing. – Martin Scharrer♦ May 10 '12 at 9:40 I would urge you to re-post your question in English.SX. A complete list of the type you're interested in will depend crucially on whether you need to follow UK or US English puctuation conventions. Alright, other English-speaking countries may also have their own punctuation conventions, but it's fair to say that they tend to follow either the US or the UK model. The UK model is to omit the period ("full stop") after most abbreviations, whereas the US model takes the more traditional approach. Viz.: a.m., p.m., i.e., e.g., Mr., Mrs., Messrs., etc., etc. – Mico May 10 '12 at 10:16 Just a small correction, the full stop in british english tends to be used after abreviations made of a truncation of the original word and is not used when the abreviation is made of letters from that word. for example "Prof." for Professor but "Dr" for Doctor. – ArTourter May 10 '12 at 12:06 Something English style guides will not mention: abbreviations such as "i.e." and "e.g." look ugly both without a space and with a full space between them. I always typeset them as "e.\,g.\ " which is borrowed from German typography but looks just as good in English. – Christian May 16 '12 at 14:40 To answer your question properly, one must first distinguish style from typesetting. It follows that there are two aspects for which you seek an answer. The first is the Style. 
As you mention, many style guides do not go into great detail about the whithertos and whyfores of the typesetting process. In this way they define the general act of punctuation, in which intonation and logical pauses are indicated (where to place a period, and when to use square brackets, etc.). For answers to such questions I shall direct you to the appropriate style guide, or your general appreciation for structure and locution. But for the second, the typesetting, I shall address your question more directly. Typesetting is an art, pure and simple. It is an act in which you set the above structure–of alphabetic prose and an-alphabetic structure (punctuation)–into an attractive form. This may seem a little high handed, but a typesetting system, such as TeX, provides characters, and it is up to you or, more commonly, macros to define where TeX is to place which character. For a "comprehensive list which details the best method of typesetting the various forms of punctuation" I would direct you to the following: James Felici, The Complete Manual of Typography. Robert Bringhurst, The Elements of Typographic Style. Typesetting is a process in which you are trying to get the right characters separated by the appropriate amount of space, so practically speaking in TeX you simply write which characters you want, and adjust space accordingly (by choosing different "spacing" characters when required). Sometimes this is done for you by the TeX engine. With what you have mentioned you are pretty close to all the basics required for typesetting most documents; the particular examples you give are required because of some default macros in TeX. This process is aided a lot by many modern fonts, which have great kerning and, if you have XeLaTeX, OpenType features which aid a lot in making contextual decisions for spacing on your behalf. What I have listed below is a good-enough list of how to typeset punctuation in TeX. Periods A period is a period, period. However, typesetting a period has more to do with the spaces around it. TeX assumes that a period which is preceded by a lower-case letter is the end of a sentence. To prevent this, say in an abbreviation, use, as you mentioned above, etc.\ with a backslash-space after the period. If an upper-case letter ends a sentence, use THE END\@. Using these macros allows for sentence spacing to be adjusted by your style (i.e. \frenchspacing). However, there may be times when you wish for larger or smaller spaces; thus when writing an initialled name, you write J.\,M.~Smith, which gives a thin space between initials, and a non-breaking space between that and the surname. \, thin space (normally 1/6 of a quad); \> medium space (normally 2/9 of a quad); \; thick space (normally 5/18 of a quad); \! negative thin space (normally 1/6 of a quad); \quad quad space (a quad). Hyphens and Dashes There are three main forms of these. The hyphen is used to indicate a conjoined word, or is used at the end of a line to indicate the word continues on the next line; use the basic hyphen for this. The en-dash is used to indicate a range; this is written in TeX by two consecutive hyphens, -- or \textendash. An em-dash is used to indicate a parenthetical clause/phrase (which, if removed, does not interrupt the sentence); this is written in TeX by three consecutive hyphens, --- or \textemdash. Some fonts include characters for a minus sign (\textminus) and others such as a figure dash. Ellipsis is either written by three periods separated by spaces .~.~.
(some style-guides require larger spaces, ~ enforces non-breaking spaces), or by using the \ldots character which is three periods condensed into one character. Alternatively you may use medium spaces between the periods, .\>.\>.. Quotation Marks There are two types of quotation marks, and both have single and double variants. The basic form is the "typewriter" form, written in TeX with simply the ' and Shift+' key combinations. Though in normal prose you will prefer the "curly" quotation marks, entered by the use of ` and `` for left/open quotes, and ' and '' for right/closing quotes. Special characters often need "escaping" simply because they are used for special purposes: \%, \$, \&, \#, \_, \{, \}. You can use punctuation marks and basic mathematical symbols without restriction. For a backslash, use \textbackslash. Apostrophes, Parentheses, Colons, Semicolons, Commas, Exclamation and Question Marks All of these are pretty basic in TeX, you simply type them in. When and how to use them should be suggested by your style-guide and grammar. (I think TeX automatically converts an ' [apostrophe] into a single right quote?) GhostpsalmGhostpsalm \/ does not escape a slash; it produces "italic correction". – Lover of Structure Feb 8 '13 at 8:04 You raise a few interesting questions, however your statement that other software handles the full breadth of punctuation typesetting automatically is incorrect. This depends on the in-house style used. Has anyone compiled a comprehensive list which details the proper method of typesetting all forms of punctuation typically found in English-language documents? The most comprehensive lists, with a lot of exceptions, can be found in the various Style Guides, such as the Oxford, MLA, Chicago Manual of Style, CBE Manual etc, the European Union Style Guide and many others. Take for example the simple case of an honorific such as PhD or a name with a Jr ending; you get the following recommendations:
    MLA               Chicago           WIT               CBE
    PhD               Ph.D.             Ph.D.             PhD
    John Smith, Jr.   John Smith Jr.    John Smith, Jr.   John Smith Jr
As a matter of interest none of the above guides offers suggestions as to the spacing of the dots (thin spaces etc). Sometime back I tried to use TeX to enforce style, but it became too cumbersome to use. First you create a DB to hold all the styles:
\def\stylesDB{MLA,CM,WIT,CBE,AP,YL,DS}
\def\createifs#1{%
  \@for\next:=#1\do{%
    % create newif
    \expandafter\newif\csname if@\next\endcsname
    \expandafter\edef\csname @\next\endcsname{@\next}%
  }%
}
\createifs{\stylesDB}
Then you define commands for common usages such as \phd
\def\phd{%
  \if@MLA PhD\fi
  \if@CM Ph.D.\fi
  \if@WIT Ph.D.\fi
  \if@CBE PhD\fi
}
In your document you use \@MLAtrue to pick up the style book and use the relevant commands as you go along. Best advice, pick up a Style Book that is widely accepted in your specialty and study it. Consistency is the key rather than particular hard and fast rules. You may find of interest the "rules of composition" which Tschichold implemented at Penguin Books; with millions of Penguin books published (with Tschichold involved in about 500), you cannot go wrong. Here is a summary relating to punctuation: Put thin spaces before question marks, exclamation marks, colons, and semicolons. (TeX does ok here). Between initials and names, as in G. B. Shaw and after all abbreviations where a full point is used, use a smaller (fixed) space than between the other words in the line.
In most cases in LaTeX publications such names are references and are handled correctly by packages such as BibLaTeX, otherwise some manual intervention is necessary, including putting the name in an hbox if you do not want it to break at line endings. Marks of omission should consist of three full points. These should be set without any spaces, but be preceded and followed by word spaces. \ldots is ok here and don't place them at the end of sentences so you do not need to argue with anyone if it should be three dots plus a full stop. Use full points sparingly and omit after these abbreviations: Mr, Mrs, Messrs, Dr, St, WC2, 8vo, and others containing the last letter of the abbreviated word. Use single quotes for a first quotation and double quotes for quotations within quotations. If there is still another quotation within the second, return to single quotes. Punctuation belonging to a quotation comes within the quotes, otherwise outside. Opening quotes should be followed by a hairspace except before A and J. Closing quotes should be preceded by a hairspace except after a comma or a full point. If this cannot be done on the keyboard, omit these hairspaces, but try to get the necessary attachment. (Let TeX handle this for you). When long extracts are set in small type do not use quotes. Use parentheses () for explanation and interpolations; brackets [] for notes. For all other queries on spelling, consult the Rules for Compositors and Readers at the University Press, Oxford, or Collins's Authors' and Printers' Dictionary. As for hyphens let TeX do its job lest you incite another hyphen war. Yiannis LazaridesYiannis Lazarides This question is rather broad, and while I completely agree with Yiannis Lazarides about style related issues, I will try to give some answers to the questions that are not usually dealt with in style books, as well as some generic answers for those who are not required to use a specific style. Abbreviations are usually provided for in style guides, but in case they are not, here are some rules: americans tend to omit the period in all abbreviations, which will result in: USA, Prof, Dr, Sgt (Sergeant); brits usually put a period in abbreviations that were made by truncation, resulting in: U.S.A. and Prof. but Dr and Sgt (because the abbreviation contains the last letter of the word); putting a period after a word that is not truncated, as is sometimes done by americans who do use periods (Dr., Sgt.) is simply wrong, but you should indeed do it if your style requires it; you will find many americans who put periods and many brits who don't, and you will find even more people and styles that recommend a hybrid rule (e.g. using periods for abbreviations but not for acronyms), so this isn't a hard and fast rule; some abbreviations will never take a period, no matter what set of rules you are using: they include measurement units (in for inch, cm for centimetre, kg for kilogram, etc.) and chemical symbols (Ca for calcium, Au for gold). As for periods at the end of abbreviations, I will direct you to Will Robertson's article, which is the best explanation I have found on the subject. In summary, you should use: % '\@' after the period signifies it is not an end of sentence Prof.\@ Jones % '\@' between a capital letter and a period signifies that it is an end of sentence … said Mr X\@. Later, … If you are creating your own macros to ease the work, you should make sure they look ahead for the next character and thus avoid putting double periods. 
The abbreviation dot should further be set in the same style as the abbreviated word. This can be done using a macro such as this one:
\usepackage{xspace}
\newcommand{\xdot}{%
  \@ifnextchar{.}%
    {\@}%
    {.\@\xspace}%
}
Hyphens (-) should generally be used inside compound words (see: Correct use of hyphen, en dash, or em dash in compound words), although it is sometimes possible to use an en-dash (--) to mark a comparison or opposition between the words, or simply to denote a gradation (i.e. when you want to combine a compound word and another word). Hyphens are also used for hyphenation, indeed. Similarly, an en-dash (--) can be used to indicate a range (numbers, dates, pages); in that case, it should not be surrounded by spaces (See: Dashes: - vs. – vs. —). Both the em-dash (---) and the en-dash (--) can be used to indicate a digression, or a break in the sentence (voluntary ellipsis) or a dialogue. Two rules then apply: pick one of them and stick to it; the em-dash should not be surrounded by spaces, while the en-dash should (on that point, see below). Brackets can be used to mark a digression (though dashes are much more elegant) or, more appropriately, to denote a parenthesis (i.e. an explanation of secondary importance) or a citation. Some styles will require nested brackets to alternate between the normal form and the square form. Square brackets ([]) can also be seen in some citation formats, and in order to denote omissions or the author's own incursion while quoting something. Whether to use them or normal brackets depends on styles. Slashes are used to mark the existence of alternatives, as a substitute for "or"; while common in english, they are not necessarily good practice, and not all typographers encourage their use. Their original and proper use is in fractions, or in some ratios that are written with that sign. They do not seem to require any special spacing, however I notice that Bringhurst has put spaces around them (p. 33) when used as a substitute for "or" (but not when they are properly used, e.g. when writing a font size and leading: "11/13"). In the former case, I would think either a thin space (\thinspace or \,) or a normal space would do. The ellipsis sign (...) is used to mark an omission, typically inside a quotation; it is a poor replacement for "etc." in an enumeration and should be avoided in that case. The least that can be done is to use \ldots (see: Ellipsis with a period/fullstop) or typeset the correct UTF-8 character … (XeLaTeX for instance). The ellipsis should be surrounded with spaces, especially if it follows an existing punctuation mark. You may also consider using the ellipsis package or setting your own macro, depending on your style guide's requirements – some styles require the use of brackets or square brackets, for instance. There are no rules that I know of regarding apostrophes, which are mainly used to indicate elision (i.e. the omission of a sound) in english. Note however that, strange as it may be, this sign's spacing varies tremendously across fonts and some will simply look better than others. The spacing of apostrophes is especially important if you intend to write some text in other languages that do not use the apostrophe in the same way (for instance, french uses it between words and hence requires a larger spacing). Because it can be used in macros, it would be unwise to make this sign active in order to correct its spacing, so pick your font well if you are using any language other than english.
The same advice essentially applies to quotation marks (choose the font well if you intend to use any language other than english). There are no special rules there either, except that: americans tend to use double marks ("") for the main quotation, and single marks for nested quotations (''); brits do the opposite. You can typeset the real, curly quotes directly in XeLaTeX, if you do not want to have to type `` and ''. To my knowledge, english has no special rules regarding periods, commas, colons, semicolons, interrogation marks and exclamation marks. Their correct usage would really be off topic here and I won't get into it. They are not preceded by anything, and they are followed by a normal space. However, you should know that: in american english, the punctuation belongs to the quotation, parenthesis or digression, especially if it is a period or a comma, and should hence be placed inside the quotation marks, brackets or dashes; in british english, the opposite rule applies; in both traditions, "high" punctuation marks (:, ;, ? and !) can be placed inside or outside the quotation marks, brackets or dashes depending on the context (e.g., if there is a question in a dialogue, brits will put the question mark inside, and if a quotation appears inside a question, americans will put the interrogation mark outside the quotation marks). The "rule" according to which end of sentence punctuation marks should be followed by a space larger than the inter-word space is outdated and has no reason to be enforced any longer, although some styles do want it. It can be removed by calling: \frenchspacing Another statement could perhaps be made regarding quotations, parentheses and digressions. In english, there is no special rule about their spacing, and you will find that these punctuation signs can start a line, or even a page, in most books (including Bringhurst's). The french have a somewhat different stand on that, and since applying it in english can't hurt, I will state that rule. The basic idea is that a line should never start with a disruptive sign such as: a number, a quotation mark, a bracket or a dash; hence it is recommended to put a tie (~) before an opening quotation mark, an opening bracket and an en-dash indicating a digression (both at the beginning and at the end of the digression). However, I think that if applied at all, this rule should be seen as secondary, i.e. it shouldn't be used if it causes an underfull hbox or a bad line-break (which can happen with long words). There are contradictory recommendations on that matter, especially as regards en-dashes: some typographers recommend to put ties outside the dashes and normal spaces inside, so that the digression is "framed" and stands out more in case of a line break. Hence, this is largely a matter of personal choice and consistency (unless your citation style says anything about it). ienisseiienissei It appears that csquotes + babel can be helpful for dealing with quotation differences mentioned, that "in AmE the punctuation belongs to the quotation...In BrE, the opposite rule applies". – Jodi Schneider Jul 26 '13 at 10:29
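Since several of the conventions collected in these answers are easier to absorb from a compilable snippet than from prose, here is a minimal sketch that pulls the abbreviation, dash, and quotation advice together. It is only an illustration: the preamble is the bare minimum and the sentences are invented for the example.
\documentclass{article}
% \frenchspacing  % uncomment to drop the wider end-of-sentence spaces throughout
\begin{document}

% A period that does not end a sentence: backslash-space or \@ restores
% normal inter-word spacing after the abbreviation.
Mr.\ Green met Prof.\@ Jones at approx.\ 10:45.

% A sentence that ends in a capital letter: \@ before the period tells TeX
% it really is the end of a sentence.
The report was written by NASA\@. It appeared a year later.

% Thin space between initials, tie (non-breaking space) before the surname.
J.\,M.~Smith and Dr~Brown are introduced without risking a bad line break.

% Hyphen for compounds, en-dash for ranges, em-dash for digressions (unspaced).
A well-known study (1950--1999) found---rather surprisingly---little change.

% Curly quotes are typed as `` and ''; the straight keyboard " is not an opening quote.
She said ``it depends on the style guide.''

\end{document}
Compiled with pdflatex, the abbreviation line keeps normal word spacing after each period while the NASA sentence keeps its end-of-sentence space; uncommenting \frenchspacing makes the two kinds of space identical, as described above.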
CommonCrawl
The impact of life-saving interventions on fertility My client GiveWell, which is working closely with the new foundation Good Ventures, commissioned me to review the scientific evidence on the impact of deaths on births. When a life is saved, especially a child's life, do families go on to have fewer additional births than they otherwise would? Is the effect more than one-to-one or less? As I write just below, a criticism sometimes thrown at life-saving interventions is that they do as much harm as good by accelerating population growth. At GiveWell's request, I am posting the full review here in draft, for public exposure and critique. We see public circulation at this stage as an efficient way to get this analysis as good as we can. So if you disagree with any of it, please let me know by e-mail or by commenting below. Please share this with others who might be interested. This is the final version. The review mixes analysis with (I hope) pedagogic explanations of the strengths and weaknesses of different study designs. At the bottom are two tables, one summarizing my interpretations of the studies, one listing sources. The preliminary conclusion is rather intuitive: I think the best interpretation of the available evidence is that the impact of life-saving interventions on fertility and population growth varies by context, above all with total fertility, and is rarely greater than 1:1 [meaning that averting a death rarely causes a net drop in population]. In places where lifetime births/woman has been converging to 2 or lower, family size is largely a conscious choice, made with an ideal family size in mind, and achieved in part by access to modern contraception. In those contexts, saving one child's life should lead parents to avert a birth they would otherwise have. The impact of mortality drops on fertility will be nearly 1:1, so population growth will hardly change. Many interventions in global health save lives. One criticism sometimes lobbed at these interventions invokes Thomas Malthus's famous theory of population. The good done, the charge goes, is offset by the harm of spreading the earth's limited resources more thinly: more people, and more misery per person. To the extent this holds, the net benefit of saving lives is lower than it appears at first. But perhaps Malthus is wrong—at least today, at least in most places. Especially when children's lives are saved, couples may respond by having fewer children. This fertility reduction could partially or fully offset the effect of life-saving interventions on total population. In fact it could more than compensate, because parents may view children in part as investments in old-age security, and make those investments with great aversion to risk. Parents who are surer that each child will survive will become more certain some will be there for them later on, and will feel less need for the safety of "extra" children (Heer and Smith 1968, Pg 106). This document embodies an effort to review what is known about how mortality declines have affected fertility in poorer countries. The starting question was the one just implied: Do parents compensate for the loss of children by having more or—to flip that—do they effectively compensate for additional lives saved by having fewer children? However, the confrontation with the evidence forced two conceptual refinements: The question of the impact of mortality on fertility is abstract. There is no intervention that only saves lives.
Distributing bed nets, for instance, prevents non-fatal as well as fatal cases of malaria. Modern programs for prevention of mother-to-child transmission (PMTCT) of HIV dispense drugs even as they encourage breastfeeding. This makes it hard to statistically distinguish the separate impacts of mortality, morbidity, and health advice. Inability to distinguish channels does make it harder to generalize from the available evidence to other diseases and interventions. But it is not a complete loss, since the practical question is often about the total impacts of a specific intervention such as bed net distribution, for which studies of the impact of malaria eradication are quite relevant. The difficulty of these distinctions also explains why our inquiry is more about the impact of life-saving interventions than that of saving lives. (But for focus, interventions that save few lives, such as deworming, are still excluded.) Broadly, mortality and morbidity affect fertility through biological and volitional channels. A woman free of malaria is more likely to bring a pregnancy to completion in a live birth. A woman who loses an infant and thus stops breastfeeding is apt to begin menstruating again (cessation of lactational amenorrhea). These biological effects are distinct from parental decision-making (volition) in childbearing. This distinction too complicates the study of the impact of mortality on fertility. Some studies manage to make the distinction and some do not. All retain practical relevance. As is typical in the social sciences, the phenomena we study are diverse while the evidence on them is fragmentary and suspect. This combination makes responsible generalization—estimation of the "truth"—quite hard. How couples in Sahelian northern Ghana decide whether to have another child is different from how couples in rural Bangladesh or urban America do. Culture, gender power dynamics, and economic circumstances all figure. 1 The impacts of an intervention on these dynamics depend on the specifics of the disease, the technology of the intervention, and its delivery. But to make decisions, judgments must be made. One fact beyond dispute is that fertility in developing countries, measured as lifetime births per woman, has fallen much faster than mortality since 1950 in developing countries. A "folk regression" of the fertility trend on the mortality trend would conclude that saving one life prevents two births: Malthus was completely wrong. Some studies reviewed here reach essentially that statistical result (though most are cautious about interpreting this correlation as causation). I think the best interpretation of the available evidence is that the impact of life-saving interventions on fertility and population growth varies by context, above all with total fertility, and is rarely greater than 1:1. In places where lifetime births/woman has been converging to 2 or lower, family size is largely a conscious choice, made with an ideal family size in mind, and achieved in part by access to modern contraception. In those contexts, saving one child's life should lead parents to avert a birth they would otherwise have. The impact of mortality drops on fertility will be nearly 1:1, so population growth will hardly change. In the increasingly exceptional locales where couples appear not to limit fertility much, such as Niger and Mali (Bongaarts 2013, Pg 3), the impact of saving a life on total births will be smaller, and may come about mainly through the biological channel of lactational amenorrhea. 
Here, mortality-drop-fertility-drop ratios of 1:0.5 and 1:0.33 appear more plausible. Here, saving a life can be expected to increase population in the short term. In the long term, it would be surprising if these few countries do not join the rest of the world in the transition to lower and more intentionally controlled fertility. After explaining some of the obstacles to the use of statistics for studying the impact of mortality on fertility, the review examines five kinds of evidence: historical, modern cross-country, modern cross-country panels, quasi-experimental, and large-sample microdata studies. A table summarizing my interpretations of the studies is in the conclusion. Mortality and fertility: Trends and causes Since 1950, humanity has progressed remarkably in lowering death and birth rates. The reasons for the declines in mortality are well known; prominent among them are advances in medicine and public health in combating infectious diseases, as well as global campaigns to deliver those advances. This figure shows the trends by continent in child mortality, here defined as death before age 15 (UN Population Division 2013): Fertility has also fallen dramatically. The reasons behind this trend are broadly understood too. Reliable forms of contraception have been developed and made widely available. Economic growth has created opportunities for women to earn more outside the home, thus creating a reward for having fewer children. Earning power has given many women more voice within the household, including in matters of sex and contraception. Women's greater access to education has amplified the effects of economic growth by multiplying their earning potential and empowering them with greater knowledge of contraception. By the same token, rising access to and value of education has probably led parents to have fewer children while investing more in each of them, notably in their schooling. Finally, norms about family size have been shifted by public campaigns—forcibly in China, voluntarily in most other places—and even by soap operas (La Ferrara, Chong, and Duryea 2012). (Note that mortality rates and lifetime births/woman are statistical abstractions. For example, the figure of 5.83 births/woman in Asia for 1950–55 represents what would happen to a woman who spent all her fertile years in that hypothetically fixed context rather than in the actual context of falling fertility over subsequent years. Similarly the mortality figures represent the chance of death for the imaginary child who grows up entirely within a given five-year period, during which the health regime is held fixed.) Mortality and fertility declines have exhibited the following broad patterns, most of which can be glimpsed in the graphs above: Mortality has declined steadily almost everywhere, the most globally significant exceptions being the disruption caused by the Great Leap Forward in China in the early 1960s and the stalling of progress in Africa circa 1990 with the spread of HIV. In contrast, in most countries, fertility initially held steady around traditional levels of 6 or more lifetime births/woman, then bent discernibly downward at a particular historical moment. Demographers speak of "onsets of fertility decline" as distinct phenomena. Today, only a few countries appear not to have reached such an onset. 2 Countries are reaching given mortality and fertility rates at lower GDP/capita levels than was once the case.
Sub-Saharan Africa reached an infant mortality rate (deaths before age 1) of 7.9% in 2005–10 (UN Population Division 2013), when its GDP/capita was about $2,000 on a purchasing power basis (World Bank 2014, series "GDP per capita, PPP (constant 2005 international $)," region "Sub-Saharan Africa (all income levels)," year 2007). The United States achieved that mortality rate in 1921 (Bureau of the Census 1939, Pg 23), by which time its GDP/capita in today's dollars had surpassed $5,000 (Bolt and van Zanden 2013). The same goes for onsets of fertility decline: although they arrive later in poorer countries, the more-recent onsets are occurring at lower levels of GDP/capita (and average education levels). Poor countries haven't had to become as rich (and educated) as they once did for fertility to begin falling (Bongaarts 2013, Pg 7). These correlations are hardly mechanical. For example, some countries have reached the fertility decline onset at especially low levels of development while others entered it at higher levels. One reason appears to be large-scale campaigns to promote family planning. Kenya's fertility rate fell from 7.6 to 5.0 births/woman between 1975–80 and 1995–2000 while nearby Uganda's hardly budged, going from 7.1 to 6.7 (UN Population Division 2013). It is probably not a coincidence that Kenya was one of the first African countries to develop a population policy and that its fertility decline slowed after the mid-1990s, when funding for the program was cut (Bongaarts et al. 2012, Pg 39–40). A similar comparison holds for Bangladesh and Pakistan: Bangladesh's much more rapid fertility drop "can plausibly be attributed" to its stronger family planning program (Bongaarts et al. 2012, Pg 39). From these broad patterns, we can infer that: Modern mortality and fertility declines are historically unprecedented in that never before have such poor societies so limited deaths and births. It is only in some of the very poorest countries, such as Mali, Niger, and Afghanistan, that mortality has fallen while fertility remains high, at 6 or more births/woman. It is there that population growth from reproduction (as distinct from immigration) is fastest. 3 Niger's population has tripled since 1975 for instance, and will triple again by 2038 under the UN's medium-fertility projection. The same scenario has Africa's total population expanding from 1 billion today to 4 billion in 2100 (UN Population Division 2013). While the mortality and fertility declines are broadly related, mortality declines are not the sole reason for fertility declines. One reason for this belief is somewhat evident in the graphs: it would be hard to explain why fertility held level for decades in many countries even as mortality fell. In particular, it appears that fertility onsets happen when the fraction of families engaging in family planning—deliberately limiting fertility—begins to rise. This decision is a function of many factors, of which the probability of losing a child is but one. The last point is relevant to the question of how mortality affects fertility. Where couples do not limit fertility, mortality can only affect fertility through biological, not volitional, channels. Essentially, couples continue to have children as nature allows. At the opposite extreme, where couples aim for a specific number of children, such as two, then the volitional channel will be strong: when a couple loses a child, it is apt to replace it in pursuit of its desired family size. 
All that is rather coarse generalizing, though. How mortality influences fertility almost certainly depends on many factors and has varied over time and place. So the bulk of this document reviews empirical studies meant to give us a sharper understanding of the link.

To help us think about implications for population growth, the next graph presents the mortality and fertility trends in a way that makes them more directly comparable, by estimating the number of child deaths per woman. As shown, in developing countries as a group, lifetime births per woman fell from 6.1 to 2.7. Meanwhile, the chance of a child in a developing country dying before age 1 plummeted from 15% to 5%; before age 5, from 25% to 7%; and before age 15, from 29% to 8%. Multiplying these percentages against total births, we find that the statistically average woman having 6.1 children in 1950–55 lost 1.8 of them before they grew up, taking age 15 as the threshold to adulthood, while her counterpart in 2005–10 lost only 0.21 out of 2.7. Child deaths/woman fell by 1.6 while births fell by 3.4, for a ratio of about 1:2.

If we switch from tracking mortality and fertility over time to taking a snapshot of all countries at a specific time, the ratio of fertility drop to mortality drop appears even larger. The graph below uses data for 1990, a year in or near the time periods of all the cross-country statistical studies examined below. For each country, it plots lifetime births/woman and lifetime under-five child deaths. Moving down the orange best-fit line from 8 to 4 births/woman (marked on the vertical axis) corresponds to a move from about 1.5 to 0.5 lifetime under-five deaths/woman (on the horizontal axis). That's a mortality-drop-fertility-drop ratio of 1:4. This cross-country pattern seems hard to fully explain as mortality leading to reduced fertility: would a family have four extra children if it lost one? But it is easy to understand as fertility affecting mortality, given how mortality is measured here: fewer children born to each woman means fewer who can die. This suggests that cross-country comparisons are particularly prone to overestimate the impact of mortality declines on fertility.
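To make the arithmetic above explicit, here is a minimal Python sketch that recomputes the child-deaths-per-woman figures and the 1:2 ratio from the rounded rates quoted in the text. It uses only the numbers already cited (UN Population Division 2013), not the underlying data, and treats the rates as simple averages.

```python
# Recompute child deaths/woman and the mortality-drop:fertility-drop ratio
# from the rounded figures quoted in the text (UN Population Division 2013).

def child_deaths_per_woman(births_per_woman, prob_death_by_15):
    """Expected child deaths per woman, treating the rates as simple averages."""
    return births_per_woman * prob_death_by_15

# Developing countries, 1950-55 vs. 2005-10
births_1950, births_2010 = 6.1, 2.7
mort15_1950, mort15_2010 = 0.29, 0.08   # chance of death before age 15

deaths_1950 = child_deaths_per_woman(births_1950, mort15_1950)  # ~1.8
deaths_2010 = child_deaths_per_woman(births_2010, mort15_2010)  # ~0.21

mortality_drop = deaths_1950 - deaths_2010   # ~1.6 fewer child deaths/woman
fertility_drop = births_1950 - births_2010   # 3.4 fewer births/woman

print(f"deaths/woman: {deaths_1950:.2f} -> {deaths_2010:.2f}")
print(f"fertility drop per unit of mortality drop: 1:{fertility_drop / mortality_drop:.1f}")
```

The 1:4 figure in the 1990 snapshot comes instead from the slope of the cross-country best-fit line, so it is not reproduced by this arithmetic.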
On the econometric challenges of studying causality

If we naively interpreted the mortality-fertility correlations in the graphs above as proving that mortality decline over the last 60 years caused all the fertility decline, we would conclude that the population increase from saving children's lives was offset 2–4 times over by the consequent drop in fertility. Saving lives reduced population growth. Of course, the story is more complicated than that. As already discussed, many factors are at work behind the two broad trends, some shared, some distinctive. And while mortality may itself affect fertility, causality also runs the other way. In developing countries overall, the chance of a baby surviving is lower the younger the mother (Rutstein and Winter 2014, Pg 38, Table 13). So reducing fertility early in womanhood also reduces mortality. Having fewer children in a family of limited means may also reduce effective competition for food and other necessities, making children less vulnerable to disease. Clearly, fertility and mortality are connected by many causal arrows. It is the job of social science to ferret out the particular causal pathways we are interested in, from mortality to fertility. This section explains why social science has a hard job.

Along the way, it introduces some terms and a visual language that will be used in discussing studies. For concreteness, it refers to Schultz 1997, the first study reviewed below. The study gathers and analyzes data on national mortality and fertility rates for about 70 developing countries in 1972, 1982, and 1988/89 (Schultz 1997, Pg 397). Among the causal relationships analyzed is the one of interest to us:

If this were known to be the only possible connection between mortality and fertility, then we could interpret any statistical correlation between them as measuring the sign and strength of that connection. But Schultz understands that the world works more like this:

"Other factors" includes, for example, female education and GDP/capita, which could simultaneously cut deaths and births. They are "confounders" because their potential importance confounds any naïve attempt to attribute the mortality-fertility correlation to a simple causal story. In the event, Schultz 1997, Pg 398, Table 5, col 3, finds a correlation between mortality and fertility that is hard to ascribe to chance. Three families of theories could explain that, which I indicate by bolding arrows: 4

Schultz 1997 deploys two standard strategies to reduce the plausibility of the second and third theories. First, it "controls for" some of those other factors, such as GDP/capita. Roughly speaking, in instances where fertility and mortality fall just as GDP/capita rises, those drops are discarded as possible examples of the third theory above. In fact, Schultz 1997 controls for GDP/capita; education levels of men and of women, measured as average number of years of schooling in the current population; share of the labor force in agriculture; and the Muslim, Catholic, and Protestant shares of the population. Statistical analyses of this sort are called "regressions." (The controls are entered linearly, so nonlinear relationships between them and the variables of primary interest are not removed.)

But the Schultz 1997 control set does not exhaust the possibilities for hidden third variables that could, behind the statistical scenes, reduce mortality and fertility at once and create the false appearance of causality from one to the other. Countries with better-run governments might have better public health programs, which reduce deaths and births in many ways. This motivates the second strategy, which is to "instrument": Schultz 1997 introduces one more variable, national calorie intake/capita, along with an important assumption about its relationship with other variables. The picture looks like this:

The red X's indicate causal pathways that are assumed not to operate; they are "exclusion restrictions." Calorie intake is said to be "exogenous" because it sits largely outside the causally entangled complex of other variables. One arrow that is assumed to operate certainly makes sense: calorie intake is believed to affect mortality. But according to the red X's, calorie intake does not affect fertility so directly. This assumption "is justified by biological and demographic investigations which conclude the effects of nutrition on reproductive potential or fecundity are negligible" (Schultz 1997, Pg 400). To understand the power of this assumption, consider that Schultz 1997, Pg 398, Table 5, col 4, detects a correlation between calorie intake and fertility. Within the confines of that diagram, only one theory can explain this link, indicated by the bolded arrows:

In particular, mortality affects fertility.
Notice how the introduction of calorie intake into the analysis, along with assumptions about its causal relationship to other variables, lets Schultz 1997 study the impact of mortality on fertility. Calorie intake is said to "instrument for" mortality. It's worth noting that the assumptions required to interpret the Schultz 1997 result as the impact of mortality on fertility are actually stronger than Schultz suggests in citing "biological and demographic investigations." Those investigations only justify one of the three red X's depicted above, the one on the arrow from calorie intake to "factors not controlled for." (This prevents calories from affecting fertility by leapfrogging mortality.) If this were the only exclusion restriction we imposed, the world could work like this:

In words, a factor not controlled for might simultaneously affect fertility and calorie intake, the latter in turn affecting mortality. Perhaps economic inequality or government effectiveness accelerates progress on fertility out of proportion to its effects on mortality. Once again this would create a correlation between the variables of interest without any impact from one to the other. Or conceivably the world works like this:

This shows that the technique of instrumenting, while useful, is not a cure-all—and in fact works less well than many econometricians seem to recognize.

Lest you despair about the capacity of statistics to enlighten, imagine changing the picture in one way. Imagine a government tests a new, potentially lifesaving vaccine for children, but randomizes which villages receive it in order to study effects on mortality and knock-on effects to fertility. That would look like this:

Now it is easy to believe all the red X's. No factor but chance would affect the random assignment of vaccines to villages. And randomly vaccinating some kids should not affect the fertility of their mothers. So if fertility eventually dropped more in villages receiving the vaccine, the case would be strong that lower mortality was an intermediate cause. Absent randomization, interpreting statistical correlations as evidence of causation requires significant assumptions like those in the Schultz 1997 example above. This raises a fundamental question: if we are prepared to make such a broad assumption about how, e.g., calorie intake relates to other variables, why don't we just assume that mortality affects fertility and skip the econometrics? Putting that constructively, for a study to be useful, "the assumptions on which it rests must be more credible than the assumptions that it tests" (Roodman 2009a).

None of the studies in this review is randomized. However, they all incorporate steps to combat the sorts of issues raised above. Some control for lots of third factors. Others rely on time's arrow, arguing that if a mortality drop is followed by a fertility drop, the first is most likely to have caused the second. And some attempt to be "quasi-experimental" by exploiting an arbitrary event such as the sudden eradication of malaria from a country. A major focus of the review will be how well the various studies approximate the ideal of a controlled experiment, requiring only weak assumptions in order to interpret their findings as causation from mortality to fertility.
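For readers who find code clearer than causal diagrams, here is a minimal sketch of the instrumenting ("two-stage least squares") logic described above, run on simulated country-level data. The variable names and the data-generating process are hypothetical, and this is not Schultz 1997's actual specification; the point is only to show how the instrument's variation, and nothing else, is used to identify the effect of mortality.

```python
# Minimal sketch of two-stage least squares on simulated country-level data.
# Variable names and data are hypothetical; this is not Schultz 1997's specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 70                                   # roughly the number of countries in Schultz 1997

gdp = rng.normal(size=n)                 # an observed control
calories = rng.normal(size=n)            # the instrument: assumed to affect fertility only via mortality
mortality = 0.5 * gdp - 0.8 * calories + rng.normal(scale=0.5, size=n)
fertility = 0.3 * gdp + 0.6 * mortality + rng.normal(scale=0.5, size=n)  # simulated "true" effect = 0.6

controls = sm.add_constant(gdp)          # constant plus the control variable

# Stage 1: regress the suspect variable (mortality) on the instrument plus controls.
stage1 = sm.OLS(mortality, np.column_stack([controls, calories])).fit()
mortality_hat = stage1.fittedvalues

# Stage 2: regress fertility on *predicted* mortality plus the same controls.
# Only the calorie-driven variation in mortality is used, which is why the
# exclusion restriction (calories affect fertility only via mortality) matters.
stage2 = sm.OLS(fertility, np.column_stack([controls, mortality_hat])).fit()
print(stage2.params[-1])   # approximately recovers the simulated 0.6
```

In practice one would use a packaged IV estimator, which also produces valid standard errors; the manual two-stage version is shown only because it makes the role of the exclusion restriction visible.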
Historical evidence

Quantitative historical economists ("cliometricians") have harvested data from ancient records in order to estimate birth and death rates in northern Europe over centuries. Galor 2012 summarizes some of this work in a pair of graphs. They show deaths and births per year per thousand of population:

Death rates fell fairly steadily throughout the period, if apparently punctuated by outbreaks of disease and famine. As for fertility, except in France, it mostly declined only starting in the late 19th century. 5 That declining mortality preceded declining fertility suggests that the first caused the second. On the other hand, the lags were so long—more than 150 years in England—that it is hard to put much faith in this interpretation. Galor 2012, Pg 7–8, emphasizes the latter view, citing various quantitative analyses of such historical data.

Cross-country evidence

In the graphs above, the country is the unit of analysis rather than the state, region, or family. There is one number for each country, year, and variable of interest (births or deaths). One popular strategy in the study of the mortality-to-fertility link has been to bring formal statistical methods to such data, mostly focusing on the post–World War II era, for which data are better. In fact, having come into vogue in the 1990s, the cross-country approach is now somewhat in disrepute. 6 Compared to studies that track tens of thousands of families in a homogeneous district, which we will come to, studies of 50–100 developing countries have small and diverse samples. Smallness reduces statistical power, which is the ability to distinguish any patterns found from the products of pure chance. Diversity—Ukraine differs from Bangladesh in dozens of important ways—increases the risk that omitted variables ("factors not controlled for" in the graphs above) are influencing mortality and fertility unbeknownst to the researcher. Such factors can be controlled for to the extent that they have been quantified—researchers have produced indexes of democracy, for example. But adding controls further depletes statistical power. So researchers must limit their control sets; this need increases the incentive to "mine" for the combination of controls that gives the best results, and reduces the credibility of what is published. 7

Schultz 1997, "Demand for Children in Low Income Countries," Handbook of Population and Family Economics

A noted example of this literature is the paper already dissected, Schultz 1997. Regressions that control for several confounders and instrument with calorie intake produce the finding that for every percentage point reduction in the fraction of children who die before their fifth birthday, women had 0.25 fewer births. 8 Taking a representative fertility rate during the study period of 4.5 births/woman (the rate was 4.6 in 1975–80 and 4.2 in 1980–85 for developing countries, UN Population Division 2013), this means that a 1% reduction in child deaths—an average 0.045 children saved per mother—caused a fertility reduction 0.25/0.045 = 5.6 times as large. The estimated effect in this statistical analysis, in other words, is comparable to the 1:4 correspondence in the cross-country graph above. 9

No one can know the exact degree to which the mortality-fertility correspondence in Schultz 1997 truly reflects causality from deaths to births. What is certain is the assumption required to believe that interpretation. As explained earlier, it is: after controlling for GDP/capita, male and female education levels, % of labor force in agriculture, and distribution among major religious denominations, a country's average per-capita calorie intake is causally linked to fertility only via mortality.
In particular, no omitted variables simultaneously influence calorie intake and fertility. Given the general disrepute of cross-country studies, and given the size of the correspondence—saving 1 life prevents more than 5 births—I find the result hard to take at face value. It suggests that other causal stories are at work, which may well be masquerading as the one of interest.

Conley, McCord, and Sachs 2007, "Africa's Lagging Demographic Transition: Evidence from Exogenous Impacts of Malaria Ecology and Agricultural Technology," NBER working paper

A study appearing 10 years later, Conley, McCord, and Sachs 2007, attempts to improve on Schultz 1997 and other earlier work by instrumenting mortality with something more plausibly exogenous—i.e., a factor more surely disconnected from fertility except via its power to affect mortality. One version of this instrument is an index of malaria ecology, which represents the fraction of a country's area with biophysical characteristics such as temperature, elevation, and rainfall that favor malaria endemicity (Kiszewski et al. 2004). This version looks especially exogenous. Temperature and elevation are not affected by female education or GDP/capita. Yet malaria ecology is not randomly distributed around the world. More precisely, its distribution is concentrated in the poorest nations and so is related to national traits that could themselves affect fertility (Kiszewski et al. 2004, Pg 491, Fig 2):

The causal map for this instrument looks like this:

This leaves two stories to link mortality and fertility, only one of which involves the first affecting the second:

To put the upper version into words, the malaria ecology index is highest in tropical areas, where, on average over 1960–2004, fertility was also high. Even after controlling for measurable traits such as GDP/capita, it seems debatable to conclude that malaria ecology is by far the most plausible explanation for the elevated mortality and fertility. One can easily nominate some national trait not controlled for as invisibly correlated with malaria and influencing fertility, creating a false appearance of causal connection. The industrial revolution started in a non-tropical region, perhaps by chance, perhaps not, so tropical regions are poorer and have higher fertility. That doesn't mean, the argument would go, that malaria raises fertility.

The second version of the malaria instrument in Conley, McCord, and Sachs 2007 is a measure of actual risk of contracting malaria as a function of place and time. In addition to ecological factors, this variable depends on societal efforts such as a history of DDT spraying and swamp draining, making it less plausibly exogenous. Its advantage is that it varies over time. Statistically, this allows the authors to shift to viewing their quasi-experiment as playing out over time as well as space. E.g., instead of asking whether countries with worse malaria ecology, thus more deaths, had higher fertility during 1960–2004, they can ask whether, within countries, fertility fell as malaria risk fell. This allows them to introduce "fixed effects," i.e., to control for any trait of a country whose impact on fertility is (nearly) constant over time (see next section for more). Ironically, it also controls away the more-exogenous malaria ecology traits that are the basis of the first version of the malaria instrument.
Using the malaria ecology version of their instrument, Conley, McCord, and Sachs 2007, Pg 48, Table 4, col 2, finds an impact about 40% as large as in Schultz 1997: reducing infant mortality by 1 percentage point—saving 0.0489 infants/woman since average births/woman in the study is 4.89 (Conley, McCord, and Sachs, Pg 41)—cuts lifetime births by 0.1 (only one significant digit is reported), for a 1:2 ratio instead of 1:5. But the impact is still large: taken at face value, the finding implies that living in malaria-prone zones increases population growth. For reasons already given, it is unclear whether that should be taken at face value. The impact based on the second, less credible instrument appears to be bigger, but is not directly comparable because it is expressed per infant (under-1) rather than child (under-5) mortality, and infant mortality is always lower. 10

Lorentzen, McMillan, and Wacziarg 2008, "Death and Development," Journal of Economic Growth

Lorentzen, McMillan, and Wacziarg 2008 includes similar cross-country regressions. It copies the malaria ecology instrument of Conley, McCord, and Sachs 2007. And it adds more instruments: indicators of geography, such as land area, distances from the equator and the nearest coast, and percentages of a country's area in various climatic zones. The paper includes a causal map (Pg 97):

(where "Growth" refers to economic growth). As indicated, the key assumption is that the instruments affect fertility and other outcomes only via adult and infant mortality. Lorentzen, McMillan, and Wacziarg 2008 finds that a percentage point reduction in infant mortality reduced births per woman by 0.15 on average during 1960–2000 (Lorentzen, McMillan, and Wacziarg 2008, Pg 107, Table 9, col 6). Applying this figure to the study's average of 4.183 births/woman produces a ratio of 1%×4.183=0.04183 to 0.15, or about 1:3.6. That impact, being between the results of the previous two studies, again seems large.

Summary: First-generation cross-country evidence

Overall, these cross-country studies produce results similar in magnitude to what one would get from the folk regression of the long-term fertility trend on the long-term mortality trend. It is unclear how well the studies have isolated the component of this broad historical relationship that is causation of fertility by mortality.
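The conversion from each study's reported coefficient to a mortality-drop-fertility-drop ratio follows the same arithmetic in every case: a 1-percentage-point drop in child (or infant) mortality saves 1% of the average number of births per woman, and the reported coefficient is divided by that quantity. A small sketch, using only the figures quoted above:

```python
# Convert a "births averted per percentage point of mortality" coefficient into a
# mortality-drop:fertility-drop ratio, as done in the text for each study above.

def fertility_per_life_saved(births_per_pct_point, mean_births_per_woman):
    # A 1-percentage-point drop in the death rate saves 0.01 * births per woman.
    lives_saved_per_woman = 0.01 * mean_births_per_woman
    return births_per_pct_point / lives_saved_per_woman

print(fertility_per_life_saved(0.25, 4.5))     # Schultz 1997:            ~5.6, i.e. roughly 1:5.6
print(fertility_per_life_saved(0.10, 4.89))    # Conley, McCord, Sachs:   ~2.0, i.e. roughly 1:2
print(fertility_per_life_saved(0.15, 4.183))   # Lorentzen et al.:        ~3.6, i.e. roughly 1:3.6
```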
Cross-country panel studies

The three studies just reviewed share two traits:

- They do not allow for dynamic effects such as mortality affecting fertility after a lag of some years. They compare mortality in a given period to fertility in the same period. But we might expect a lag if, for instance, parents gradually notice that fewer children in their community are dying, become convinced the change is permanent, and then have fewer kids themselves. Similarly, the fertility level today might depend on the fertility level in the past, since if it was already low, it is less likely to fall much more. When left out of the analysis, past levels of fertility and mortality are potential confounders. For example, a low fertility rate five years ago might lead to a lower mortality rate today (as fewer children compete for the food supply) even as it foreshadows a lower fertility rate today (as family planning practices show some inertia). This is yet another story producing a correlation between mortality and fertility today without any impact from the former to the latter.
- They rely mostly or wholly on variation across countries, not over time. Staying with Lorentzen, McMillan, and Wacziarg 2008 as an example, the regressions using the malaria ecology instrument view history as an experiment in which each country is a roll of the dice. Some countries are arbitrarily endowed with conditions favoring malaria, and some not. The paper checks whether countries so endowed have higher fertility. A general fear about such regressions, as explained before, is about the failure to control for national-level confounders.

The alternative statistical strategy has been to collect data on countries for many time periods, say, every 5 or 10 years. Then researchers can ask: within each country, when mortality falls, does fertility fall too, perhaps with a delay? This compares not Brazil to Zimbabwe but Brazil in 1990 to Brazil in 2000. It allows researchers to control for a whole class of potential confounders: those that are fixed over time or, in practice, nearly so. These include climate, culture, legal tradition, and linguistic and religious composition. More precisely, it allows researchers to control for these factors to the extent that their impact is fixed over time—what are called "fixed effects." For if the impact on fertility of having many Catholics is fixed, then that impact washes out when examining changes over time within a country.

One thrust in modern econometrics has been to depart from those two common characteristics, to allow for dynamics and fixed effects. Both steps sharpen the focus on how variables interrelate over time. They require more complicated econometric methods; they pre-process the data to remove fixed effects and/or post-process the regression results to infer the long-term evolution arising from dynamic interactions among variables. The methods are called "panel" methods because they use data collected at regular intervals from a set of families, firms, or countries—in our case, a "panel" of countries followed over time. Two main approaches have developed in the last 20–30 years: dynamic panel data models and vector autoregressions. I will review one leading, recent example of each, explaining the methods in brief.

Murtin 2013, "Long-term Determinants of the Demographic Transition," Review of Economics and Statistics

Murtin 2013 works with decadal data series from 70 countries, some reaching back to 1870. The variables of primary interest to us are the national birth and death rate, as in the Galor 2012 graphs above, and the infant mortality rate. 11 Controls include GDP/capita; the average number of years of primary schooling that adults have completed; and the fractions of the population in their 20s and 30s, respectively, since a younger population will have more children and more child deaths. 12 Most of the fixed-effect regressions—the ones controlling for national traits whose impact on the birth rate stays the same over time—find that a 1% reduction in infant mortality (not 1 percentage point) is associated with a 0.3% (not 0.3 percentage point) drop in the birth rate (Murtin 2013, Pg 624–25, Tables 4–6). Interpreted as causation from mortality to fertility, this statistic again points to a large effect, which has saving lives leading to slower population growth.
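A generic illustration of the fixed-effects idea may help. In the sketch below, a simulated "cultural" trait that never changes raises both infant mortality and the birth rate in some countries; a pooled cross-country regression is misled by it, while the within-country (fixed-effects) regression is not. The data and coefficients are invented; this is not Murtin 2013's dataset or specification.

```python
# Illustration of the "fixed effects" idea: anything constant within a country,
# observed or not, drops out once country indicators are included.
# The panel below is simulated; it is not Murtin 2013's data or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
countries, decades = 70, 10
df = pd.DataFrame({
    "country": np.repeat(np.arange(countries), decades),
    "decade": np.tile(np.arange(decades), countries),
})
culture = rng.normal(size=countries)[df["country"]]        # unobserved trait, fixed over time
df["infant_mort"] = 0.5 * culture + rng.normal(size=len(df))
df["birth_rate"] = 0.3 * df["infant_mort"] + 0.8 * culture + rng.normal(size=len(df))

# Pooled OLS is biased upward because 'culture' moves both variables...
pooled = smf.ols("birth_rate ~ infant_mort", df).fit()
# ...while the fixed-effects (within-country) regression removes it.
within = smf.ols("birth_rate ~ infant_mort + C(country)", df).fit()
print(pooled.params["infant_mort"], within.params["infant_mort"])  # within estimate is near 0.3
```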
We can see the size of this effect by taking values for infant mortality and total fertility from the center of the graph at the beginning of this document (circa 1980) and assuming that the proportional effect on the birth rate (births per population, which is what Murtin studies) is the same as on total fertility (lifetime births/woman). The figures are then that a woman averages 4.5 lifetime births, of which 9% (0.4) result in death before age 1. According to the Murtin 2013 result, reducing the 0.4 infants lost by 1%, or 0.004 children, would cause a reduction in the woman's lifetime births of 0.3%, or 0.3%×4.5=0.0135 children, for a mortality-change-fertility-change ratio of about 1:3, comparable to the earlier studies. However, these regressions do not instrument infant mortality, so they do not try to rule out reverse causation, nor causation by (time-varying) third variables.

Murtin's dynamic panel regressions introduce two major changes and one minor one. First, being "dynamic," they control for the previous decade's birth rate ("lagged birth rate") even as they study predictors of the current decade's rate. Since fertility trends are stable over time—fertility does not jump up and down from year to year—the lagged birth rate is a strong predictor of the current birth rate. Bringing it in as a control creates a powerful competitor for mortality as an explanation of the current period's birth rate. Mortality only wins to the extent that it explains movements in the current birth rate that the lagged birth rate does not. This makes the regressions more conservative in ascribing causality to mortality. The model is called "dynamic" because the birth rate in the present is assumed to depend on the birth rate in the past.

The flip side of this conservatism is a fundamental change in the nature of the interrelationships modeled. Imagine that the invention of antibiotics reduces mortality starting in the 1940s: it is a one-time but permanent drop. Murtin's dynamic model, which is standard, allows the mortality drop in the 1940s to reduce fertility in the 1940s; it then allows this change to ripple through the generations as, say, parents influence through their example the fertility choices of their grown children. One decade's fertility directly affects that of the next. The ripples do decay—after all, the fertility choices of our 18th-century ancestors don't affect us much now. But in addition, since the mortality drop is permanent, from a mathematical point of view, antibiotics separately cut mortality in the 1950s. According to the model, this too sends a ripple through the generations, just like the first but starting one decade later. Yet another wave starts in the 1960s, piling on top of the earlier ones. And so on. The total effect is not infinite because all the ripples are decaying as they age. In the long term, the effect of a permanent mortality drop converges smoothly to a limit (assuming antibiotics don't become ineffective). This long-term effect can be much larger than the short-term (one-decade) effect. 13
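The ripple logic is just a decaying geometric series. Here is a small sketch, assuming a single lag of the birth rate as in a standard dynamic panel model, and using made-up values of the coefficients rather than Murtin's estimates:

```python
# The "ripples" described above: in a model with one lag of the birth rate,
#   birth_rate[t] = rho * birth_rate[t-1] + beta * mortality[t] + ...
# a permanent mortality drop has short-run effect beta and long-run effect
# beta / (1 - rho), the limit of a decaying geometric series of ripples.
# rho and beta below are illustrative values, not Murtin 2013's estimates.

def cumulative_effect(beta, rho, decades):
    """Effect on the birth rate after a given number of decades of a permanent unit drop."""
    return beta * sum(rho ** k for k in range(decades))

beta, rho = 0.1, 0.7
for d in (1, 2, 5, 20):
    print(d, round(cumulative_effect(beta, rho, d), 3))
print("long run:", beta / (1 - rho))   # here ~0.33, much larger than the one-decade effect of 0.1
```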
The second major change is that the regressions instrument most of the variables. This they do in a few different ways, which appear rather arbitrary, are never justified, and ultimately do not convince. The most credible variants for us (Murtin 2013, Pg 626, Table 7, cols V & VI) assume a world essentially like this:

Infant mortality a generation or so ago—or the set of deeper factors it proxies, such as the state of medical practice in the country at the time—is assumed to affect current infant mortality, and only thereby the current birth rate. This practice of instrumenting variables with older observations of themselves is now common. It is easier to think of than a clever instrument like malaria ecology; and freely available software—in this case, a program I wrote (Roodman 2009b)—automates the implementation. The strategy is motivated by the top red X in the diagram above, the idea that current realizations of a variable can't affect past ones. But it doesn't justify the other red X's—a point that is broadly relevant in panel econometrics and broadly underappreciated. There are still ways to explain any correlations found among the variables along the top without mortality affecting fertility:

The third change that Murtin 2013 makes in moving beyond the preliminary fixed-effect regressions is the least important but has the most impressive name: adopting the Generalized Method of Moments (GMM). Notice that the regressions depicted above introduce two instruments for infant mortality: infant mortality 30 years ago and infant mortality 40 years ago. Under Murtin 2013's assumptions, either would suffice alone. But a general principle in econometrics is that the more of the available information that is exploited, the more precise will be the results. This is essentially why Murtin 2013 uses both instruments. The use of more instruments than is strictly needed raises a mathematical question: how much weight should each get? The estimate of the impact of mortality on fertility will vary depending on the weighting, though, one hopes, not by much. GMM is a method for choosing the weights that will maximize the precision of the results, at least as the sample size goes to infinity. 14

Probably because of the first major change embodied in the dynamic panel GMM regressions—controlling for the previous decade's birth rate—the contemporaneous correlation between infant mortality and the birth rate drops by two-thirds in these more complex Murtin 2013 regressions. Now, a 1% reduction in infant mortality is associated with a 0.1% drop in the birth rate, instead of 0.3%. For the representative woman with 4.5 births, this cuts the mortality-change-fertility-change ratio from 1:3 to 1:1 (Murtin 2013, Pg 626, Table 7, cols V & VI). But if the 1% infant mortality drop is permanent, its long-term effect grows large again: to a 0.2–0.4% fertility drop, or a ratio of 1:2 to 1:4. The dynamic panel GMM regressions, however, appear problematic to me in several technical respects. I think the best interpretation is that the ratios just inferred represent real correspondences but that, just as in the studies previously reviewed, causality from mortality to fertility may not be the only mechanism at work. 15

Herzer, Strulik, and Vollmer 2012, "The Long-run Determinants of Fertility: One Century of Demographic Change 1900–1999," Journal of Economic Growth

In the context of this review, the most significant and credible innovation in Murtin 2013 is the addition of the lagged birth rate as a control when studying correlates of the current birth rate. Another stream in econometrics goes much farther in that direction, as manifest in Herzer, Strulik, and Vollmer 2012. The source of that stream is a paper by Clive Granger (Granger 1969) that confronts the question of what it means, as a matter of statistics, for one variable to cause another. He offers one definition: X causes Y if forecasts of Y based only on knowledge of past values of Y can be improved with information about the history of X. Bond investors, for example, use their knowledge of the history of interest rates to predict future interest rates. But they also use other information—say, the fact that the country has just plunged into deep recession—to improve their predictions. We would then say that recessions "Granger-cause" falling interest rates.
Roughly speaking, then, if movements in one variable systematically precede and help predict movements in another, the first Granger-causes the second. In many cases, Granger causality aligns with everyday notions of causality. An investment bank collapse today influences and predicts the stock market drop tomorrow. The death of a child leads a couple to have another. Sometimes, however, causality between observed variables goes backwards in time. The stock market might rise in anticipation of an interest rate cut. Cars slow down at traffic lights before they turn red. But studying Granger causality brings the great virtues of transparency and humility. If researchers report that traffic lights turn red after cars slow down, their conclusion is indisputable and should be understood to mean no more or less than its literal statement. At the same time, as the traffic light example suggests, analysis in the Granger tradition does not eliminate the fundamental difficulty of inferring true causality from non-experimental data.

Granger analysis shifts the analysis from how a large set of variables—female education, GDP/capita, etc.—affect an outcome of interest to how a small set of variables affect each other over time. A Granger analysis might look at how unemployment and short-term interest rates affect each other, and with what lags. A key uncertainty for the U.S. Federal Reserve, for example, is the time profile of the impact of an interest rate change on the unemployment rate. Perhaps the impact begins to show up after 6 months and reaches peak size after 12. Meanwhile, a key uncertainty for investors runs the other way: how will an unemployment drop play out in Fed policy? To trace the time profiles of these effects, typical Granger regressions include many lags of the variables of interest. The current month's unemployment rate, for example, is simultaneously checked for correlation with the short-term interest rate a month ago, the rate two months ago, the rate three months ago, and so on. The results of such regressions depict causal spirals: unemployment in one year affects interest rates in the next, which perturbs unemployment the year after that. Often, researchers are interested in how an unexpected or unprecedented change in one of the variables—an investment bank meltdown, a DDT spraying campaign—will ricochet through such a system of variables that affect each other with time delays. Standard methods and software provide the answer by crunching the regression results and plotting "impulse response functions," graphs that show how one variable deviates from its otherwise expected path in the months and years after a sudden change in itself or another variable.

Herzer, Strulik, and Vollmer 2012 is a Granger-style study based on data from 20 countries for 1900–99, with observations taken every 5 years. Most of the countries are today classed as high-income by the World Bank—Canada, Japan, Western European states, Chile, and Uruguay. Three are considered upper-middle income: Argentina, Colombia, and Venezuela. Only Sri Lanka is considered poorer (World Bank 2014). The regressions in Herzer, Strulik, and Vollmer 2012 that are key for our inquiry examine the correlations among just three variables: the birth rate and death rate per thousand of population, and GDP/capita. Each variable is allowed in the model to be influenced by the values of all three in the previous two periods: the average values in the previous 5 years, and in the 5 before them (Herzer, Strulik, and Vollmer 2012, Pg 371). 16
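As an illustration of what such a model looks like in practice, the sketch below fits a three-variable, two-lag vector autoregression to simulated data and traces an impulse response, using the statsmodels library. The series, their dynamics, and the single-"country" setup are all invented; Herzer, Strulik, and Vollmer 2012 work with a 20-country panel and a more involved estimator.

```python
# Sketch of a three-variable, two-lag vector autoregression of the general kind
# described above, fit to simulated data for a single "country."
# Variable names and dynamics are made up for illustration only.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T = 200
death, birth, gdp = np.zeros(T), np.zeros(T), np.zeros(T)
for t in range(2, T):
    death[t] = 0.6 * death[t-1] + rng.normal(scale=0.1)
    birth[t] = 0.5 * birth[t-1] + 0.3 * death[t-2] + rng.normal(scale=0.1)  # births follow deaths with a lag
    gdp[t]   = 0.7 * gdp[t-1] - 0.2 * death[t-1] + rng.normal(scale=0.1)

data = pd.DataFrame({"death_rate": death, "birth_rate": birth, "gdp": gdp})
results = VAR(data).fit(maxlags=2)   # 3 variables x 3 variables x 2 lags = 18 lag coefficients (plus intercepts)
irf = results.irf(periods=10)        # impulse response functions implied by those coefficients
irf.plot(impulse="death_rate", response="birth_rate")  # how births evolve after a death-rate shock
```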
Since there are three variables potentially influenced by the past, three variables to exercise the influence (the same three), and two periods over which influence may be detected, these regressions produce 3×3×2=18 numbers. Examples: the correlation between the birth rate 5 years ago and the birth rate now; the correlation between mortality 10 years ago and the birth rate now; the correlation between GDP/capita change 10 years ago and mortality change now. 17 Herzer, Strulik, and Vollmer 2012, Pg 373, do not report those numbers, but they graph what happens when the numbers are fed into a simulation of the dynamics. This is how the birth rate evolves after an unexpected jump in the death rate:

Going by the patterns in this data set, a one-time jump in the death rate led over about a generation to an increase in the birth rate. The full Granger effect peaks at 80% of the initial jump in the death rate. Thus, a reduction in the death rate is eventually followed by a fertility reduction 80% as large, meaning a permanent but modest increase in population growth. The 1:0.8 mortality-change-fertility-change ratio from this rich dynamic model is not much lower than the 1:1 ratio from Murtin's simple one, and far lower than the non-dynamic studies in the previous section. To repeat, studies in the Granger tradition tell us what follows what—not what causes what, in the everyday sense of "cause." This one tells us that in the 20th century, in these mostly-wealthy nations, drops in mortality that would not have been expected based on historical trends have been followed, after a generation, by slightly smaller drops in fertility.

Summary: cross-country panel studies

Though emerging from different econometric traditions, these two panel studies share some important traits. Both focus on changes over time within countries and allow for dynamics. Of the two, Murtin 2013 emphasizes using instruments to pin down cause and effect, and does not completely succeed. Herzer, Strulik, and Vollmer 2012 gives up on that task and instead more fully models the dynamic relationships among variables. It estimates the correlation between mortality drops and subsequent, long-term fertility changes at about 1:0.8—notably smaller than the other studies so far scrutinized.

Quasi-experimental studies

Recognizing the difficulty of pinpointing cause and effect through instrumental variables studies like those reviewed above, economists have turned in the last 15 years to performing experiments, as well as searching for and exploiting "quasi-experiments" such as the sudden introduction of antibiotics. I am aware of no experimental studies linking mortality to fertility. This is perhaps unsurprising given the ethical problems in randomly depriving some people of a potentially lifesaving intervention. 18 But several studies have appeared that attempt to take advantage of quasi-experiments affecting mortality in order to quantify the knock-on effects for fertility. It should be said that "quasi-experiment" is only quasi-defined. Arguably any instrumental variables study is a quasi-experiment. Schultz 1997, for example, takes advantage of allegedly arbitrary cross-country differences in the instrument, per-capita calorie intake. What distinguishes the new crop of studies is a reliance on external developments such as the HIV pandemic and the global push for child immunization, which trigger sudden changes in the health regime within states, regions, or countries.
Like the panel studies described earlier, these quasi-experimental studies focus on changes over time rather than purely differences across space. And because of their focus on external events, their claims to quasi-experimental status are more credible. Still, just like other studies, quasi-experimental ones make assumptions—exclusion restrictions—that are not beyond debate.

The convergence problem

Most of the quasi-experimental studies reviewed here focus on episodes in which a sudden improvement in the health regime—DDT spraying for malaria, the arrival of antibiotics or vaccines—occurs over a large area. In the years that follow such events, different geographic regions realize different degrees of decline in their death rates. Researchers analyze whether births also fell where deaths fell. These episodes look "quasi-experimental" in two ways. First, their timing is usually not a function of local events—in particular, not a function of the local fertility rate or third factors such as inequality and GDP/capita. DDT spraying, for example, went global in the 1950s because of a combination of scientific advance and advocacy by international players [cite]. Second, the suddenness can create useful statistical discontinuities. Sri Lanka in 1945 was probably statistically similar to Sri Lanka in 1950 except that a national malaria eradication campaign had been launched in between. A before-after comparison tightly bracketing a major health regime change is more experiment-like than a comparison over a longer period. It is harder to argue that a third factor such as falling inequality obscured the true mortality-fertility link if that third factor could hardly have changed in such a short period.

Studies exploiting such episodes do suffer one weakness: the episodes usually cause convergence. At the end, malaria or antibiotic-treatable diseases are gone both from places where they were once prevalent and from places where they never were. Thus the mortality drops can be statistically indistinguishable from the initial mortality levels, which are not random. A good illustration of the conceptual issue (though not so relevant to this review because it does not cover fertility) is the Bleakley 2010 assessment of the economic impacts of anti-malaria campaigns in Brazil, Colombia, Mexico, and the United States in the 20th century. Consider the case of Brazil. The map on the left shows the Kiszewski et al. 2004, Pg 491, malaria ecology index—the same one used in Conley, McCord, and Sachs 2007 and Lorentzen, McMillan, and Wacziarg 2008—for Brazil. A higher number (toward brown) indicates a zone more favorable to malaria. Brazil's malaria ecology follows a strong pattern, with inland areas more prone than coastal ones, which are themselves more urbanized and industrialized and less poor, as the map on the right shows (Ferreira Filho and Horridge 2006). Those same inland areas are probably marked by different histories, cultures, and economic circumstances. Bleakley 2010 uses this index for Brazil in studying the impacts of a mid-century malaria eradication campaign built around DDT. The geographical pattern enters the left end of the Bleakley 2010 causal graph this way, with the unit of analysis being the Brazilian state:

Bleakley 2010 favors the causal story running along the top: geography influences pre-campaign malaria levels among children, which influence subsequent drops, which influence changes in the earnings gap between natives of malarial and non-malarial regions.
But as shown, other causal pathways could link geography to a narrowing of the earnings gap. One candidate for a hidden factor not controlled for would be a long-term pattern of economic convergence among regions. Maybe the poorer, more malarial states just caught up economically over the decades of the mid-20th century for reasons separate from malaria eradication. However, Bleakley does something clever to attack this competing theory. He collects follow-up data at many points in time (yearly) after the intervention on his outcome of interest: the adult earnings of children who grew up in states with different malaria ecology levels. This is useful because while economic convergence and DDT spraying might both have narrowed the earnings gap, the latter would probably have done so with distinctive timing. The narrowing should commence just as the first children whose malaria exposure was reduced by the spraying campaign enter the workforce—perhaps they were 15 when the spraying campaign started—and continue steadily until all entering the workforce grew up minimally exposed. The data seem to confirm this pattern. This graph shows the gap in future earnings between people from malarial and non-malarial regions by year of birth (Bleakley 2010, Pg 26). The figures start negative, indicating lower earnings for those from malarial regions:

Children born in malarial areas before 1940 reached adulthood without ever benefiting from the national DDT spraying campaign that began in the mid-1950s. Their future earnings were systematically lower according to the left part of the graph. This finding is consistent with the theory that malaria impedes children's cognitive development, for reasons of biology and lower school attendance, and thus reduces their adult earning power. Meanwhile, children born after 1960 in ecologically malaria-prone areas fully benefited from eradication, and for them the earnings gap disappeared, even reversed slightly. One might argue that the kinks are an optical illusion and that a straight line fits the data nearly as well. But Bleakley 2010, Pg 23, Table 4, Panel B, reports statistical tests that suggest otherwise. The point for this review is that Bleakley's graph is much more convincing than it would be if it had only two data points, from before and after eradication. For then there could be no distinctive fingerprint. Quasi-experimental studies that look for fingerprints in time series, rather than just performing before-after comparisons, are more convincing.

Lucas 2013, "The Impact of Malaria Eradication on Fertility," Economic Development and Cultural Change

Among the quasi-experimental studies reviewed here, Lucas 2013 comes closest to the ideal implied above of checking for a temporal signature. Rather like Bleakley 2010, Lucas 2013 studies a malaria eradication episode—this one in Sri Lanka circa 1947. The outcome data come from a survey of 6,810 women conducted in 1975, which gathered detailed birth and child death histories from each respondent. Many of those births occurred before the spraying and many after. This data structure allows Lucas 2013 to examine the evolution of birth and death probabilities year by year, in order to discriminate between general trends and developments more precisely correlated with eradication. Lucas 2013's measure of how malarial a region was before eradication is the "spleen rate," that is, the fraction of children with an enlarged spleen, which indicates a history of exposure to malaria.
In the late 1930s the spleen rate varied by region between 1.5% and 68% (Lucas 2013, Pg 611). By 1960, it was essentially zero throughout Sri Lanka (Lucas 2013, Pg 612). The graph below (Lucas 2013, Pg 619) shows how the probability of a woman having a baby in a given year evolved in the once-malarial areas. A value on the graph of, say, 0.2 in 1950 does not mean 20% of women gave birth in 1950. Rather, it means that if a woman lived in an area that in 1937 had a spleen rate of 50%, her chance of giving birth in 1950 was 0.2 × 50% = 10% higher than it had been back in 1937. It represents, in other words, the impact of the passage of time on fertility in malarial areas relative to non-malarial areas. The solid line in the graph traces the best estimates of this impact year by year. Since such statistics are always measured with uncertainty, the dashed lines indicate confidence intervals, ranges within which Lucas 2013 is 95% sure the true value lies:

Notice that before the nationwide eradication begins in 1947, zero lies well within the 95% confidence intervals: there is no clear change in relative fertility in 1937–47. But soon after, the difference rises to statistical significance, as the confidence intervals then exclude zero. The difference appears fairly stable around 0.2 through the mid-1960s. Its slight decline thereafter might reflect a general compression in differences across Sri Lanka as fertility fell overall. This looks like a fingerprint of malaria eradication.

Lucas 2013 backs the observations above with formal regressions. These control for district fixed effects; individual-level traits such as ethnicity; and, most importantly, district-specific linear (straight-line) trends (Lucas 2013, Pg 613, Eq 1). As in the Bleakley 2010 Brazil analysis, the regressions check whether allowing a particular nonlinear pattern, a jump in 1947, can explain the data better than straight-line fertility trends alone. Consistent with the graph, the probability of a live birth is 0.22 higher after eradication (Lucas 2013, Pg 620, Table 2, col 1). Notice that the rise in fertility is the opposite of what we would expect if we thought that the primary causal story was from eradication to lower mortality to the volitional channel of parents choosing to have fewer children. But it is compatible with a direct biological link from malaria to fertility. Malaria can cause spontaneous abortions and stillbirths, which translate statistically into lower fertility. This effect may be distinguishable in the data because it mostly tells on a woman's first pregnancy (Lucas 2013, Pg 608). When Lucas 2013 splits the data between first births and later births, the clearest impact of malaria eradication is indeed on first births. The estimates of the impact on fertility thereafter are unstable and overall indistinguishable from 0 (Lucas 2013, Pg 625, Table 4). 19 The difference between first and later births is another fingerprint, evidently left by the biological consequences of eradication. The quasi-experiment demonstrates rather convincingly that eradicating malaria increased fertility, at least by increasing the probability of successful completion of first births.
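For concreteness, here is a sketch of the general kind of "fingerprint" regression just described: an interaction between a post-eradication indicator and a district's pre-eradication malaria intensity, with district fixed effects and district-specific linear trends as controls. The data are simulated and the specification is a simplification, not Lucas 2013's equation or dataset.

```python
# Sketch of a "fingerprint" regression: a post-eradication indicator interacted with
# a district's 1937 spleen rate, with district fixed effects and district-specific
# linear time trends. Simulated data; not Lucas 2013's specification or dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
districts, years = 20, np.arange(1935, 1965)
df = pd.DataFrame(
    [(d, y) for d in range(districts) for y in years], columns=["district", "year"]
)
spleen = rng.uniform(0.0, 0.7, size=districts)            # pre-eradication malaria intensity
df["spleen_1937"] = spleen[df["district"]]
df["post"] = (df["year"] >= 1947).astype(int)             # nationwide eradication campaign
# Simulate a fertility jump after eradication proportional to prior malaria burden:
df["birth_prob"] = (0.15 + 0.02 * df["district"] / districts
                    + 0.22 * df["post"] * df["spleen_1937"]
                    + rng.normal(scale=0.02, size=len(df)))

# District fixed effects, district-specific linear trends, and the interaction of interest:
model = smf.ols("birth_prob ~ post:spleen_1937 + C(district) + C(district):year", df).fit()
print(model.params["post:spleen_1937"])   # recovers roughly the simulated post-eradication jump (~0.22)
```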
Technically, such a finding is what this review is after: rigorous evidence of the impact of life-saving interventions on fertility. However, it makes evident that the conceptual line between mortality and fertility is thin. Saving a pre-term baby a month before its due date is increased fertility. Saving it a month after birth is reduced mortality. In spirit, the Lucas 2013 finding is closer to an impact on mortality than the sort of knock-on effect on fertility that we set out to find.

Kumar 2009, "Fertility and birth spacing consequences of childhood immunization program: Evidence from India," Ph.D. dissertation, University of Houston

Kumar 2009 looks at how fertility evolved in India after a national childhood immunization program was rolled out across the states over 1985–90. The study is constructed much like Lucas 2013, from birth histories collected through a national survey after the campaign. Kumar 2009, Pg 25, finds that a woman who started her family—had her first child—after the immunization program launched in her district had births spaced farther apart. The probability that her second birth followed within 2 years of the first fell by 1.4%; within 3 years, by 2.3%; and within 5 years, by 1.5%. The regressions control for a woman's religion, caste, poverty level, age at marriage, and other characteristics. They include district fixed effects, which should ward off the theory that the results are explained merely by the fact that poorer districts, by virtue of being poor, had higher fertility and got the immunization program later. However, unlike the Lucas regressions, the Kumar 2009 ones do not control for time trends, which would allow us to similarly dismiss a theory of general fertility convergence across India. Nor does the study include graphs like those in Bleakley 2010 and Lucas 2013 in order to check this possibility visually. On the evidence in the paper, the correlation between the arrival of immunization and falling fertility could just reflect strong convergence on both fronts across India in the 1980s and 1990s. 20

Wilson 2013, "Child Mortality Risk and Fertility: Evidence from Prevention of Mother-to-Child Transmission of HIV," revised and resubmitted to Journal of Development Economics

According to a source cited in Wilson 2013, Pg 4, approximately a tenth of all HIV infections circa 2000 were caused by mothers transmitting the virus to their babies, whether in utero, during birth, or through breastfeeding. Prevention of mother-to-child transmission (PMTCT) programs arose in many countries to combat this problem. In Zambia, the site of the Wilson study, PMTCT efforts began around 2000, apparently focusing on providing HIV tests and counseling expectant mothers on such matters as breastfeeding. Within a few years, antiretroviral provision entered PMTCT practice in Zambia. "Single-dose nevirapine (NVP) was the main prophylaxis in the early years of the Zambian PMTCT program and zidovudine (ZDV) (also known as azidothymidine (AZT)) and NVP in the later years of the program" (Wilson 2013, Pg 6). Until recently, the ability of the HIV virus to spread through breastfeeding created an excruciating choice for PMTCT programs: discourage breastfeeding in order to reduce infection (25–60% of babies who got HIV from their mothers died before age 2 circa 2000 (Wilson 2013, Pg 4)), or encourage it anyway because of its separate life-saving benefits. The Zambian government appears to have taken its first position on the issue in 2007, which approximated neutrality: "HIV positive mothers were to be 'given enough information about advantages and disadvantages of the available options for them to be able to make an informed choice about what might be best for them' and advised to completely avoid breastfeeding when quality replacement feeding was available" (Wilson 2013, Pg 5).
This ambiguity appears to have reflected the muddle in practice in the preceding years: "Evidence from Ndola city…suggests clinics may have offered advice similar to the 2007 National Protocol Guidelines. However, evidence…suggests that breastfeeding advice varied across health workers within a given clinic" (Wilson 2013, Pg 5–6). After a randomized study in Kenya, Burkina Faso, and South Africa demonstrated that triple-antiretroviral drug prophylaxis for the mother was safe and reduced transmission through breastfeeding (Kesho Bora Study Group 2011), the government of Zambia, like the WHO (WHO 2010), revised its guidelines to favor breastfeeding in tandem with such treatment. The data for the Wilson study, however, were gathered before that, through four surveys fielded in 2001–07.

Wilson 2013 largely follows the quasi-experimental template of the studies just above. It studies an episode during which a particular health intervention was rolled out across a territory—in this case PMTCT across Zambia during 2000–07. In particular, Wilson 2013 studies whether women living within 20 kilometers of an active PMTCT site, such as a district clinic, were more likely to breastfeed or less likely to get pregnant. Wilson 2013 differs from the above studies in not reconstructing birth histories for each woman interviewed by surveyors. At least as far as this analysis goes, women were merely asked whether they had been pregnant any time in the last 12 months; and whether, if they had a child under 2, they were breastfeeding. But since PMTCT programs arrived in different places at different times, Wilson 2013 can still examine the evolution of correlations with pregnancy and breastfeeding. That is, Wilson can compute the average breastfeeding rate among women who have had access to PMTCT for 1 year, 2 years, 3 years, etc.

A strength of Wilson 2013 is its recognition that the rollout of PMTCT was hardly a perfect quasi-experiment. It was not random: people near the programs were better educated on average, closer to major roads, less likely to be married, and more likely to be HIV-positive (Wilson 2013, Table 2 & Fig 2). These correlations open the door to competing theories for any results found. In response, Wilson 2013 introduces, stepwise, an aggressive set of controls. These include individual traits such as age, years of schooling completed, and number of children. They include indicators for each month and year during which interviews took place in order to remove any long-term trends in national-average PMTCT access, breastfeeding, and fertility rates. As in Lucas 2013, the controls also include variables to remove province-level fixed effects and linear trends, such as might occur in general convergence. And they include indicators for the spread of other life-saving interventions: bed nets, piped water, and access to other HIV/AIDS prevention and treatment services.

One peculiarity in this quasi-experiment is the ambiguity of the intervention. We don't know whether women were, on net, advised for or against breastfeeding. And whether more or less breastfeeding saved or cost lives depended in part on another unknown: how many mothers were introduced to or already taking antiretrovirals, without which breastfeeding would have more likely transmitted HIV. This is a major weakness from our point of view, since Wilson 2013 does not look at child mortality, and without data it is hard to be confident even of the sign of the impact of PMTCT on child survival.
When Wilson 2013 finds that access to PMTCT is correlated with lower fertility (see below), it is not clear whether to interpret that as the result of higher or lower child mortality, or neither. Wilson 2013 finds that mothers of under-2s were 3.7–23.6% more likely to be breastfeeding if living within 20 kilometers of a PMTCT site (Wilson 2013, Table 3, Panel B). The high end of that range arises from the most conservative regression, with the fullest set of controls (Col 6). Meanwhile, mothers near PMTCT sites were 2.0–8.9% less likely to be pregnant—although here the smallest value comes from the most conservative regression, and is not statistically significant by conventional standards (Wilson 2013, Table 3, Panel A). 21 Since this pregnancy regression is the one that controls for the rollout of other lifesaving interventions, we cannot, going just by these results, fully rule out the theory that the negative correlation between PMTCT access and pregnancy is caused by temporal and geographic similarities in the patterns of arrival of PMTCT and at least one of the other interventions.

Graphical analysis, however, mostly supports PMTCT as the driver. This graph (Wilson 2013, Fig 3) shows the change in the chance of a woman being pregnant relative to when a PMTCT program first began operating near her. (By construction, that change is 0 at time 0, when the program arrives.) The data include adjustments for the individual and household traits:

The high values at 84 and 72 months (7 and 6 years) appear to be a statistically insignificant aberration caused by very small samples of women living in areas where programs were set up so many years after the surveyor visited. 22 That aside, for the five years before PMTCT, fertility is static. After arrival—month zero—it begins to fall steadily. This favors PMTCT as the true cause of the decline.

As noted, Wilson 2013 presents no data on mortality. As a result, the simplest explanation for its results is that PMTCT programs persuaded women to breastfeed longer, for better and worse, and that lactational amenorrhea reduced fertility, a channel that Wilson recognizes. This story does not require causation of fertility by mortality or even morbidity. Wilson 2013 therefore has limited relevance for our inquiry.

Bhalotra, Hollywood, and Venkataramani 2012, "Fertility Responses to Infant and Maternal Mortality: Quasi-Experimental Evidence from 20th Century America," preliminary and incomplete

Bhalotra, Hollywood, and Venkataramani 2012 sets out to trace the consequences of the introduction of the first (sulfonamide) antibiotics in the United States. Adoption was rapid, and the authors date it to 1937. More than in any other quasi-experimental study reviewed here, this one involves a change in the health regime with implications for many diseases, which affect both mortality and morbidity for both mother and child. The operation of multiple channels creates the risk that those captured in the analysis are merely proxying for ones left out, and that those that are captured offset or reinforce each other in ways hard to disentangle. Bhalotra, Hollywood, and Venkataramani 2012 focus on two diseases that were plausibly the dominant channels of impact for sulfa drugs. One affects babies: infant pneumonia. "Pneumonia was the leading cause of infant morbidity and mortality after death from premature birth and congenital defects and it is similarly the leading cause of infant death in developing countries today" (Bhalotra, Hollywood, and Venkataramani 2012, Pg 5).
The other affects mothers, as mothers: puerperal fever, an infection contracted during birth. "Such infections remain the leading cause of maternal mortality in the developing world and, among those who survive, can cause uterine scarring and infertility." The authors demonstrate through graphs that the trend in both diseases bent downward around 1937. Before 1937, the diseases varied in prevalence across U.S. states, and differed from each other in these patterns of prevalence. With the introduction of the drugs, the declines in infant and maternal mortality therefore differed by state and from each other. This allows Bhalotra, Hollywood, and Venkataramani 2012 to study the correlations of infant and maternal mortality with fertility simultaneously in the years that followed. As usual, a primary concern is that long-term convergence—in this case, among U.S. states—in mortality and fertility will be mistaken for one causing the other. Bhalotra, Hollywood, and Venkataramani 2012 take the appropriate step of introducing controls that allow each state to have its own long-term linear trend in fertility. As in the Bleakley Brazil example, in the regressions including these controls—the most conservative ones—only deviations from straight-line trends, influence the results. However, I believe there is a serious limitation in this preliminary study, or at least in the data it has to work with. The fingerprint of the impact of the arrival of sulfa drugs on fertility appears to be missing. The paper does not search for it graphically. And the statistical results say that the effects of reduced infant and maternal mortality on fertility largely cancel out. "A change in [a state's pre-1937 infant pneumonia prevalence] equivalent to a movement from the 75th percentile to 25th percentile…implies a 0.47% point drop in the probability of a birth after 1937, which is 4.3% of the mean (0.11). A similar shift in [a state's pre-1937 maternal mortality rate] implies a 0.48% point increase in the probability of a birth after 1937, 4.4% of mean. [Emphasis added.]" (Bhalotra, Hollywood, and Venkataramani 2012, Pg 12). The inclusion of controls for linear trends cannot solve this problem, and actually produces results that increase my doubts. It cannot solve the problem because, absent a corner in 1937, it forces the infant and maternal mortality indicators to search for other deviations from straight-line trends to explain. Convergence could well occur with curvature: as declines proceed, they could decelerate, as room for further decline shrinks. Curvature would leave some discrepancies from the linear trend to be falsely attributed to the impact of falling mortality. In fact, controlling for linear trends triples the apparent impact of infant mortality on fertility and quadruples that of maternal mortality (Table 3, cols 5–6). This suggests that the state trends are quite similar in time profile with the effects ascribed to the mortality changes, if of opposite sign (substantially, negatively collinear). Added to the mix is the partial collinearity in the pre-1937 geographic patterns of infant and maternal mortality, and thus of the subsequent drops therein. When the partially collinear variables enter regressions together, as they usually do in this study, they can receive large coefficients but nonetheless have very little collective explanatory power, as their effects are estimated to cancel out. 
On balance, the premise of this study is smart: the arrival of sulfa drugs offers a potentially valuable quasi-experiment. But the apparent lack of a sharp drop in fertility, combined with the multiple disease channels, leaves it somewhat unclear what is driving the results. Juhn, Kalemli-Ozcan, and Turan 2009, "HIV and Fertility in Africa: First Evidence from Population Based Surveys," Leibniz Information Centre for Economics discussion paper; and Fortson 2009, "HIV/AIDS and Fertility," American Economic Journal: Applied Economics The last quasi-experiment to appear in this review is not a sudden improvement in the health regime, but a worsening: the spread of HIV in Sub-Saharan Africa. Many of the studies already reviewed use data from Demographic Health Surveys, a family of household surveys in developing countries that USAID has supported for 30 years. (Some studies use the data directly while others take national averages derived from the surveys.) Timing tends to be quinquennial. 23 Recognizing the poor quality of data on HIV prevalence in developing countries, and taking advantage of cost-saving scientific advances, DHS surveys began in 2001 to include voluntary HIV testing. This has given researchers large datasets that link a person's HIV status with such traits as education, marital status, and fertility. Juhn, Kalemli-Ozcan, and Turan 2009 and Fortson 2009 are among the first economics papers to harvest this data. The two studies substantially overlap in data and methods. Fortson 2009 uses survey data from Cameroon, Côte d'Ivoire, Ethiopia, Ghana, Kenya, Malawi, Mali, Niger, Rwanda, Tanzania, Zambia, and Zimbabwe, gathered between 2001 and 2006. Juhn, Kalemli-Ozcan, and Turan 2009 drops Mali and Zambia and adds Burkina Faso, Guinea, and Senegal. Neither paper claims "quasi-experimental" status, perhaps because the spread of HIV was more gradual than a DDT spraying campaign as well as less exogenous, being modulated by local cultural, social, and economic factors. Still, the studies fit in the quasi-experimental family in analyzing the consequences of a major, novel, and relatively swift shift in the health regime. Both studies recognize that HIV, like malaria, can affect fertility through biological and volitional channels. As a matter of biology, it can increase miscarriages and vulnerability to other infections that can cause infertility (Juhn, Kalemli-Ozcan, and Turan 2009, Pg 5–6). As for volition, knowing that a woman is HIV+ or that HIV is spreading locally could affect a couple's preferences and decisions about having children. Recall that Lucas 2013 distinguished biological and volitional effects of malaria by looking separately at women giving birth for the first time, since they are more susceptible to the disease. HIV is not known to affect first births differently. But researchers can still distinguish the channels by separately studying the impact of the spread of HIV in a community on women who test negative in the survey (but who might not have known their HIV status before); these women are, as it were, immune to the biological channel. In Juhn, Kalemli-Ozcan, and Turan 2009, about 85% of respondents agreed to HIV testing. Among them, being HIV+ reduced the probability that they had given birth within the year leading up to the survey by 3.4%, and by 9.2% for the last 3 years and 13.6% for the last 5 (Juhn, Kalemli-Ozcan, and Turan 2009, Table 5, col 3). HIV seemed to lower fertility.
At least two considerations should impede us from immediately taking this result at face value. First, the women who refused the test may have been systematically more or less HIV+ on average, in which case their absence would throw the results a bit. Juhn, Kalemli-Ozcan, and Turan 2009 finds test refusers to be more educated, wealthy, and urban, making them statistically akin to respondents who tested HIV+ (Juhn, Kalemli-Ozcan, and Turan 2009, Pg 8). Second, and more important, confounders compete to explain the correlation. Education, wealth, and urban location could lead to lower fertility along with higher HIV prevalence. But Juhn, Kalemli-Ozcan, and Turan 2009 control for such demographic factors, along with indicators of sexual behavior such as condom use, and their results stand. Meanwhile, the biological channel alone credibly explains the correlation as causation from HIV to lower fertility. To isolate biology from cognition, Juhn, Kalemli-Ozcan, and Turan 2009 next calculate whether, HIV-negative women in high-HIV regions have fewer babies than HIV-negative women elsewhere. This could indicate that anticipation of mortality affects fertility. It would be relevant to the question of whether couples deliberately adjust family size in response to perceived mortality risks in their community. The answer from Juhn, Kalemli-Ozcan, and Turan 2009 turns out to be "no." The paper confirms the robustness of this result in a few ways, such as by instrumenting HIV prevalence with distance from the Democratic Republic of Congo, where the virus originated (Juhn, Kalemli-Ozcan, and Turan 2009, Tables 9–11). Fortson 2009 reaches the same conclusion independently, despite a somewhat different approach. The major methodological difference is that Fortson 2009, like most of the other quasi-experimental studies, reconstructs full birth histories for each woman surveyed. This results in a data set on fertility by subnational region (there are 108 within the 12 countries) and by year, rather than just by region. Having observations at multiple times within regions allows Fortson 2009 to remove any national trends, linear or otherwise. 24 Paralleling Juhn, Kalemli-Ozcan, and Turan 2009, Fortson 2009, Table 3, Panel A, col 3, computes that HIV+ women have 0.146 fewer lifetime births. 25 But among HIV-negative women, those living in high-HIV regions are not detectably more or less fertile than HIV-negative women in low-HIV regions. Again, there is no support for a volitional pathway from mortality to fertility (Fortson 2009, Pg 180, Table 3, Panel B). Summary: quasi-experimental studies Of the 5 studies in this section, several provide credible evidence of impacts on fertility. Lucas 2013 (on malaria in Sri Lanka) and Wilson 2013 (on PMTCT in Zambia) graph trend breaks that are rather convincingly timed. Juhn, Kalemli-Ozcan, and Turan 2009 and Fortson 2009 correlate HIV status with fertility. But none of the studies turns out to speak directly to our interest. In Sri Lanka, malaria eradication directly raised fertility in a way that is conceptually tantamount to lowering mortality, rather than being a knock-on effect from changed mortality. Similarly, the multi-country Africa HIV studies mostly suggest that HIV biologically reduces fertility. In Zambia, PMTCT may have reduced fertility mainly by encouraging breastfeeding. Large-sample micro-studies The final class of studies reviewed here consists of ones based on data sets similar to those in the previous class. 
Data are collected through interviews with adults in households. Birth histories of women are then reconstructed. These studies stake no claim to quasi-experimental status. But some offer an alternative virtue: very large data sets, which allow the researchers to aggressively control for variables embodying theories they want to rule out, and to use clever techniques to distinguish the various channels by which mortality affects fertility. For a statistician, a large data set is like a powerful telescope. The more independent observations she has, be they of families, trees, or stars, the more confidently she can discern statistical relationships. Flip a coin once and you cannot tell if it is fair. Flip it a million times and you almost certainly can. With large data sets, researchers can control for many third factors without worrying about depleting the statistical power needed to detect the relationships of interest, in this case between mortality and fertility. The results they obtain are still only correlations. But they are ones for which, ideally, many explanations have been ruled out, raising confidence in those that remain. Although these studies draw on similar data to the quasi-experimental studies in the last section, they make a fundamental but subtle shift in perspective. Before, the source of identifying variation—the basis of the quasi-experiment—was regional because diseases arose and disappeared on that scale. Now the family is not only the unit of observation but also the source of variation. The natural focus now is on whether a death in the family changes reproductive behavior in that family. Effects of regional changes in the health regime—a family having fewer kids because it perceives that fewer children are dying in its community—are harder to capture. But the studies below try. Hundreds of representative household surveys have been conducted in developing countries in recent decades, and many of them obtained birth and death histories from respondents. As a result, many mortality-fertility studies have been based on them. Here I review some of the more recent and rigorous ones. Bhalotra and van Soest 2008, "Birth-spacing, Fertility and Neonatal Mortality in India: Dynamics, Frailty, and Fecundity," Journal of Econometrics The most complex study in this group is Bhalotra and van Soest 2008, which works with data on about 30,000 births between 1963 and 1999 in 7,300 households surveyed in Uttar Pradesh state, India, in 1998–99. Uttar Pradesh is one of the poorest states in India, and in one 2004 survey had the country's highest fertility rate, at 4.39 births/woman (Haub 2011, Pg 21). Still, that was down from about 6.5 in 1972, suggesting that family planning was spreading during the years covered by the women's birth histories. The regressions are rather like the Granger-inspired ones in Herzer, Strulik, and Vollmer 2012. The three outcomes of greatest interest are whether a given child dies in its first month of life; whether, either way, the mother has another child after this child; and, if she does, how many months pass between births. Two of these variables, mortality and birth interval, can interact over time. If a child dies, its mother may get pregnant again sooner. If she gets pregnant sooner, the next child will face tougher survival odds (Rutstein and Winter 2014, Pg 38, Table 13).
Research points to several reasons: soon after one pregnancy, the mother's body may be less fit for the next; and greater competition among closely spaced siblings may reduce the sustenance and attention that each receives (Rutstein and Winter 2014, Pg 1). The regressions include many variables. Whether an infant survives its first month is allowed to depend on the survival outcomes and birth intervals for all of the mother's previous children, and vice versa. Also controlled for is a household's caste (scheduled caste, scheduled tribe, other backward caste, other, or unknown) and religion (Hindu, Muslim, or other), mother's and father's education levels, the child's gender, year of birth, and birth order, and the mother's age. All controls that can take a continuous range of values are included in ways that flexibly allow for nonlinear (curved) relationships between them and the three outcome variables. Education levels are quantized into five categories, each allowed to have a different impact. The birth year, birth order, and mother's age enter quadratically, so that they can exhibit "U"-shaped relationships with the outcomes. And they allow for fixed effects for each mother and each community surveyed: some mothers and some communities might experience higher fertility and mortality across their lifetimes because of third factors not picked up in the variable list above. 26 Bhalotra and van Soest 2008 focus on neonatal mortality—death within the first month—in order to simplify their statistical model in one respect and make it more practical for fitting to actual data. If they had studied mortality over 1 or 5 years, they would have had to allow for the potential impact of later births on a given child's survival. If a child gained a younger sibling at 2 and died at 4, then the later arrival might have affected the fate of the earlier one. But survival for the first month can be assumed to be influenced only by earlier births, making the math simpler. After fitting their regression model to the data, Bhalotra and van Soest 2008, Table 3, perform simulations to interpret it. For each mother in the data, the regression results are used to estimate the probability that her first child would survive its first month, considering the mother's age, caste, etc. As an example, the probability could be 90%. A random number is drawn between 0% and 100%. If it is below 90%, the child survives in the simulation. Then, the computer simulates whether the mother becomes pregnant again and, if so, after how many months. The process repeats until the simulated mother does not get pregnant again. Each step is influenced by the mother's full history of births and deaths to that point. Bhalotra and van Soest 2008 also run the simulations with some modifications: they zero out some or all of the links their model allows between mortality and fertility. This lets them partly distinguish the channels that link mortality and fertility, assuming their model is accurate. The first column of the table below assumes neonatal mortality has no impact on fertility. In this alternate-universe Uttar Pradesh, women go 31.39 months between births, and 0.26 of their 3.99 children die in the first month of life. 
If the community- and mother-level fixed effects are accounted for—by which higher mortality in a family or community correlates across all births with higher fertility, for reasons that are not really explained by the model—then months between births falls slightly to 31.09 and number of births and neonatal deaths rise slightly to 4.02 and 0.27 (second column). One explanation for the slight rise in fertility is that families that experience or witness more neonatal deaths have more kids, and have them sooner, to compensate for the expectation that more will die ("hoarding"). However, other explanations are available: fertility and mortality may go hand-in-hand because of badly run clinics. In turn, infant mortality can rise both because more babies are born and because they are born closer together. The third column introduces time's arrow. It allows the fate of the last baby to predict that of the next. Why might the death of one baby make life riskier for the next? One reason, Bhalotra and van Soest 2008 explains, is that when a baby dies right after birth, the mother is more likely to have another soon. Tighter birth spacing is known to be associated with higher risk of death for subsequent children. At any rate, allowing for this link hardly changes the number of children born/family. The final two columns allow for the finding that a mother whose infant has died is more likely to get pregnant again, and sooner. This is called the "replacement effect," on the idea that parents are having more children to replace the ones they lost. On average, simulated women now go 30.59 months between births, and have 4.13 children, of whom 0.31 (7.4%) die in the first month. Since these figures incorporate all the effects modeled, they are closest to the true values in the survey. Impacts of neonatal death rate on simulated fertility and mortality in Uttar Pradesh, Bhalotra and van Soest 2008 Moving from the first to the last column, as all the statistical ripple effects of neonatal mortality are added in, the number of children born per family climbs by 4.13–3.99 = 0.14. If we fully attribute this fertility increase to ongoing mortality—0.26 deaths/family in the base case of the first column—we conclude that each neonatal death caused women to have 0.14/0.26 = 0.52 more children. Since hoarding is not the only viable explanation for the fertility climb between columns 1 and 3, we can conservatively count just the replacement effect manifesting from column 3 to column 5. (Here, the effects are Granger-style, with earlier events allowed to affect later ones, making the causal chain from mortality to fertility more convincing.) The replacement effect is 4.13 – 4.01 = 0.11 extra children per family. Blame that on the deaths of 0.29 neonates/family listed in column 3, and we concluded that families go on to have 0.11/0.29 = 0.37 more babies for each they lose in the first month of life. While we cannot ascribe causal significance to this result as confidently as in a clean experiment, it does have time's arrow in its favor. Possibly the true effect is much larger and some countervailing causal story is counteracting it, such as disease reduction simultaneously reducing deaths and increasing fertility. On the other hand, the calculations above probably overstate the effect in a different way. For they take the ratio of all extra births to a subset of the deaths, just neonatal ones. 
If a high neonatal death rate is proxying for higher under-5 and under-10 death rates, then the true mortality-change-fertility-change causal ratio might be only half the 0.37–0.52 suggested above. On balance, these results make a true average effect as large as 1:1 seem unlikely in Uttar Pradesh across 1963–99. This suggests that saving lives in Uttar Pradesh accelerated population growth on average. Hossain, Phillips, and LeGrand 2005, "The Impact of Childhood Mortality on Fertility in Six Rural Thanas of Bangladesh," Population Council working paper The data set for this study is distinctive in coming from longitudinal surveillance. The source is not a single, randomized survey from which birth and death histories are constructed (based on potentially faulty recollections of events long past). Nor is it a series of randomized surveys in a given country, each interviewing different people. Rather, 8,000 women in the Matlab district of Bangladesh were randomly chosen in 1982, and repeatedly visited through 1993, to gather recollections of household events when they were fresher. Those years proved a remarkable time in Bangladesh. Nationally, total fertility fell from 6.5 to 3.5 births/woman between 1980 and 1995. "The pace of reproductive change in this period ranks among the most rapid ever recorded" (Hossain, Phillips, and LeGrand 2005, Pg 4). Hossain, Phillips, and LeGrand 2005 differs greatly from Bhalotra and van Soest 2008 in method. It starts with the observation that how the death of a child within the family is correlated with or causally connected to the decision to have another depends on who dies and when. Imagine a woman who has just given birth, and ask: on average, how long will it be before she becomes pregnant again, and how does that depend on the family's history of child deaths? Borrowing from the work of other authors, Hossain, Phillips, and LeGrand 2005 enumerate these possibilities:
If having another child sooner after this birth is correlated with having lost a child before this recent birth, that would mainly point to a forward-looking "hoarding" effect: a history of loss leads to anticipation of future loss, which leads to long-term desire for extra children to compensate.
If having another child sooner after this birth is correlated with having lost an older child after this recent birth, that suggests a backward-looking, volitional replacement effect.
If having another child sooner after this birth is correlated with the just-born child dying, that would pick up both the volitional replacement motive above and the biological replacement mechanism of interrupted lactational amenorrhea. We can compare the size of this correlation to the previous one to estimate the additional impact of the biological pathway.
If having another child sooner after this birth is correlated with losing a child well in the future, since the future death couldn't cause the decision to get pregnant sooner now, this would suggest that ongoing third factors such as poverty are causing the family to have more births and deaths. Including future deaths in the regressions becomes a way to control for such third variables.
Hossain, Phillips, and LeGrand 2005, Pg 10, Table 2, col 3, finds strong positive associations in all cases. If the given newborn soon dies, that multiplies the probability per unit time that the mother will get pregnant again by 59.7—from about 1 in 300, for example, to 1 in 5. 27 That is the combination of the biological and volitional replacement effects.
The death of an older child, which would trigger only the volitional replacement effect since the mother could still breastfeed, expands the probability by "only" a factor of 6.2. The death of a child farther in the past, indicating a history of loss and expectation of future loss, has a smaller effect still, at 3.5.
Multiplier on probability per month of getting pregnant again (Hossain, Phillips, and LeGrand 2005, Table 2, col 3):
- Child died before most recent birth (hoarding): 3.5
- Older child died after most recent birth (volitional replacement): 6.2
- Newborn died (volitional + biological replacement): 59.7
- A child dies in year after next pregnancy (third factors raising births and deaths)
Unfortunately, Hossain, Phillips, and LeGrand 2005 does not translate these abstract results into a form that compares total births to total deaths. So it is hard to infer implications for the impact of saving lives on population growth. The results do suggest that the short-term biological link from death to birth is large enough to statistically mask the volitional effect unless controlled for. Binka, Bawah, and Hossain 2004, "The Role of Childhood Mortality in Fertility Transition in a Rural Sahelian District of Northern Ghana," incomplete The Navrongo district in northern Ghana, like Matlab in Bangladesh, has been the site of extensive demographic data collection over the years, which has facilitated research on family planning in the extremely poor Sahelian region of Africa. Binka, Bawah, and Hossain 2004 was presented at a conference in 2004 and was posted as an incomplete 3-page document. The results, based on data from an impressive 43,000 women, are that "the death of a child has no effect on the odds of subsequent parity progression," i.e., having another child. This is tantalizing evidence that in the poorest places, where families have not yet begun to limit fertility, the loss of a child matters little for whether the woman gets pregnant again. If she does get pregnant again, she would have done so anyway. The result makes sense, but unfortunately cannot be evaluated without a full paper. Older microdata studies As mentioned, there are many studies in this genre. This quote from the review in Schultz 1997 (Pg 384–85) suggests that the older ones line up with Bhalotra and van Soest 2008 in finding a mortality-drop-fertility-drop ratio of less than 1:1. Using the 1973 Census sample of Colombia, Olsen (1980) estimated that the replacement response effect was about 0.3, rather than the ordinary least squares [no-instrument] estimate of 0.5, suggesting that for every three child deaths prevented there was one fewer birth. Rosenzweig and Schultz (1982b) estimated, from the same data source using instrumental variables method, the sum of replacement and expectation response rates of between 0.14 and 0.42 for various cohorts of women between the age of 24 and 54. Lee and Schultz (1982) estimated the replacement response in Korea in 1971 as between 0.35 and 0.51….Maglad (1990), using the instrumental variables method, estimated for a small sample in rural Sudan a replacement/expectation ["hoarding"] rate in 1987 of between 0.56 and 0.73. Okojie (1991) obtained significant estimates of replacement/expectation responses for Bendel state of Nigeria from a sample collected in 1985. Benefo and Schultz (1992) estimated by instrumental variables a replacement/expectation rate of about 0.2 from a national sample of Ghana collected in 1987-1989 and obtained a similar value for Côte d'Ivoire in 1985–1988.
Mauskopf and Wallace (1984) estimated the replacement probability for Brazil was nearly 0.6 and found that it increased from 0.44 to 0.98 as the woman's education increased from none to five or more years.…Finally, in a high-income environment, Rosenzweig and Schultz (1983) estimated by instrumental variables a replacement effect of about 0.2 from a 1967–1969 sample of legitimate births for the United States. Not having reviewed these studies, I cannot comment on the strength of their conclusions. GiveWell and I suspect that to delve into them would be to reap diminishing returns. This table summarizes my current interpretations of the studies reviewed above: Study Methodology & setting Effect size (study's unit) Effect size (births/death) Schultz 1997 Cross-country, 80 developing countries, 1972–89 0.25 births/woman per under-5 deaths/100 births Interpreting cross-country correlations as influence of mortality on fertility requires strong, debatable assumptions Conley, McCord, and Sachs 2007 Cross-country, 138 developing countries, 1960–2004 0.1 births/woman per under-1 deaths/100 births Lorentzen, McMillan, and Wacziarg 2008 Cross-country, 85 countries, 1960–2000 0.15 births/woman per under-1 deaths/100 births Murtin 2012 Cross-country panel, 70 countries, 1870–2000 0.2–0.4% change in births/population per 1% change in under-1 deaths/100 births, long-term 2–4 over generations Study of changes over time within countries is closer to a controlled experiment than cross-country comparisons, since many national traits evolve only slowly. But strong assumptions still required, since as fertility and mortality change over decades, so do many other factors that influence them. Herzer, Strulik, and Vollmer 2012 Cross-country panel, 20 countries, 1900–2000 0.8 births/population per all-age deaths/population, long-term 0.8, over a generation Ditto; however, this study has virtue of transparency, in describing how fertility evolves after a change in mortality without making deeper claims as to causality Lucas 2013 Quasi-experiment based on late-1940s near-eradication of malaria, Sri Lankan households, 1939–75 Per pre-1937 % of children with enlarged spleens (indicating malaria prevalence), % of women having live birth per year after campaign up 22%; % of first-borns dying before age 5 down 45%; death rate of later-born unchanged Negative; size hard to infer because relationship between birth probability and total births is complex Causal relationships credibly discerned, thanks to clear fertility jump after eradication. Best explanation for result is not deaths influencing births, but eradication of malaria increasing survival before and after birth. Kumar 2009 Quasi-experiment based on 1980s child immunization campaign; Indian households, ~1973–2003 Women who had first child after the immunization program arrived in district spaced births farther apart: probability of 2nd birth within 2 years of the first fell 1.4%; within 3 years, 2.3%; and within 5 years, 1.5% Not estimable: intervention's correlation with mortality not analyzed Causal relationships not as credibly discerned: does not control for long-terms trends in health & fertility, such as convergence across regions; does not identify quasi-experimental discontinuities. 
Wilson 2013 Quasi-experiment based on 2000s Prevention of Mother-to-Child Transmission (PMTCT) program rollout; Zambian households, 2000–07 Mothers near a PMTCT site 2% less likely to have been pregnant in given year Causal relationships credibly discerned, thanks to clear fertility drop after PMTCT arrival. Best explanation is: the program encouraged breastfeeding, delaying pregnancy via lactational amenorrhea—rather than falling mortality reducing fertility. Bhalotra, Hollywood, and Venkataramani 2012 Quasi-experiment based on introduction of first antibiotics in U.S. circa 1937; U.S. states, 1930–70 Safer birth leads to more births while higher infant survival leads to fewer, to offsetting degrees (for maternal and infant death reductions altogether) Clear drops in mortality starting circa 1937 bolster case for quasi-experiment. But results sensitive to controlling for long-terms trends in health & fertility, such as convergence among states; decomposition of impacts into offsetting maternal and infant death channels less credible than net (zero) effect. Juhn, Kalemli-Ozcan, and Turan 2009 Quasi-experiment based on spread of HIV in 13 African nations in 2000s, 2002–06 HIV+ women 3.4% less likely to have been pregnant in last year, 9.2% in 3 years, 13.6% in last 5; among HIV-negative women, being in a high-prevalence area did not raise probability of being pregnant Not estimable: HIV's correlation with mortality not analyzed Interpretation of results as impact of HIV on fertility is reasonable. But best explanation is biological—HIV and associated infections impeding successful pregnancy—rather than volitional, since among HIV-negative women, witnessing higher mortality locally did not affect fertility. Fortson 2009 Quasi-experiment based on spread of HIV in 12 African nations in 2000s, 1981–2005 HIV+ women have 0.15 fewer lifetime births Bhalotra and van Soest 2008 Non-experimental study based on household data for 1963–99, Uttar Pradesh, India Death of a neonate (under 1 month) is followed in a family by 0.37–0.52 extra births Ascription of causal link not as strong as in good quasi-experiments, but study is transparent like Herzer, Strulik, and Vollmer 2012, estimating how many extra births follow a death on average. Hossain, Phillips, and LeGrand 2005 Non-experimental study based on household data for 1982–93, Matlab, Bangladesh An older sibling dying in family after a recent birth multiplies chance per unit time of getting pregnant again by 6.2. Hard to infer from available data Data drawn from repeated visits to same women, not imperfectly recalled birth histories. Teases apart biological and volitional effects by distinguishing by when a child death occurred in family relative to given birth. Binka, Bawah, and Hossain 2004 Non-experimental study based on household data for 1993–2003, Navrongo, Ghana Child death in family does not raise chance of having another. Study incomplete. Results only suggestive. None of the studies in this review produces evidence that is both relevant to our question and beyond challenge. However, if we combine the best evidence with general knowledge about the spread of family planning, a consistent picture emerges. In my view, The message of the historical evidence is reasonable: the long delays between mortality declines and the onsets of fertility declines suggest that there is more behind the latter than the former. The cross-country studies seem most suspect. They imply very large impacts of mortality decline on fertility. 
I find it hard to reject the hypothesis that their results are driven by the crude 1:4 ratio revealed in the cross-country graph for 1990 in the "Mortality and fertility: Trends and causes" section above. Some of the quasi-experimental studies produce convincing results. The closest they come to answering our empirical question is in suggesting that health interventions for women of child-bearing age make them more fertile, e.g., by helping them bring pregnancies to term. This is worth noting, especially since the effect is opposite in sign of what we were looking for: life-saving interventions reduce deaths and increase births. It would not generalize to interventions aimed at other demographics, such as children. I find the Granger-style studies, the ones that systematically explore the relationships over time between deaths and births, most useful. They do not aspire to measure true causality—only what happens after what, on average. But if that aspiration of measuring true causality is unrealistic, then perhaps the humility is for the best. Working with country-level statistics, mostly from relatively wealthy countries over the 20th century, Herzer, Strulik, and Vollmer 2012 finds that drops in mortality are followed over a generation by fertility drops nearly as large. Looking within families in Uttar Pradesh in the decades up to 1999, a context in which fertility was high but had begun to fall, Bhalotra and van Soest 2008 finds partial replacement, with 0.37–0.52 extra births for each neonatal death. The incomplete Binka, Bawah, and Hossain 2004 hints that in a region where fertility was high and mostly not controlled, the loss of a child did not lead on average to a family having more thereafter. As mentioned at the outset, we should expect that where fertility is most controlled, typically indicated by total fertility of about 2 births/woman or less, that the volitional replacement effect is large—that for every child's life saved, parents avert one birth. That births/woman averaged 2.7 in developing countries as a whole in 2005–10, and that the number has probably fallen more since, suggest that most couples today are engaging in family planning. Meanwhile, where the fertility transition does not yet appear to have occurred the replacement effect is likely much smaller. The studies I find most informative tend to corroborate this theory, indicating near-full replacement among a group of relatively affluent countries; partial replacement in a context where fertility had begun to decline but still had far to go (Uttar Pradesh); and no replacement in an area of continuing high fertility (Northern Ghana). A corollary to this interpretation is that the mortality decline in developing countries during the last 60 years probably caused a minority of the contemporaneous fertility decline. Recall that under-15 deaths fell from 1.8 to 0.21 per woman between 1950–55 and 2005–10, a drop of 1.6. If the impact of mortality declines on fertility ranged by place and time between 1:0 and 1:1 then it could not have caused more than 1.6 of the decline in births/woman, which was from 6.1 to 2.7, a drop of 3.4 children/woman. Other factors, such as female education, economic growth, contraceptive availability and family planning promotion together likely mattered more. 
Bhalotra, Hollywood, and Venkataramani 2012 Source (archive) Bhalotra and van Soest 2008 Source (ungated) Binka, Bawah, and Hossain 2004 Source (archive) Bleakley 2007 Source (ungated) Bolt and van Zanden 2013 Source (data) Bongaarts 2013 Source (archive) Bongaarts et al. 2012 Source (archive) Bryant 2007 Source Bureau of the Census 1939 Source (archive) Cohen and Dupas 2010 Source (ungated) Conley, McCord, and Sachs 2007 Source (archive) D'Souza 1981 Source (archive) Galor 2012 Source (ungated) Ferreira Filho and Horridge 2006 Source Fortson 2009 Source (ungated) Granger 1969 Source (ungated) Haub 2011 Source (archive) Heer and Smith 1968 Source Herzer, Strulik, and Vollmer 2012 Source Hossain, Phillips, and LeGrand 2005 Source (archive) Juhn, Kalemli-Ozcan, and Turan 2009 Source Kesho Bora Study Group 2011 Source Kiszewski et al. 2004 Source (archive) Kumar 2009 Source (archive) La Ferrara, Chong, and Duryea 2012 Source (ungated) Lorentzen, McMillan, and Wacziarg 2008 Source (ungated) Lucas 2013 Source (ungated) Mankiw 1995 Source (ungated) Mishra et al. 2006 Source (archive) Murtin 2013 Source (ungated data & code) Pritchett 1994 Source (ungated) Roodman 2009a Source (archive) Roodman 2009b Source (archive) Roodman 2009c Source (ungated) Rutstein and Winter 2014 Source (archive) Schultz 1997 Source (ungated) Turan 2011 Source (archive) UN Population Division 2013 Fertility Survival rate by age Wilson 2013 Source (archive) World Bank 2014 Source (archive) WHO 2010 Source (archive) Wrigley 1985 Source Thanks to Colin Rust for excellent and much-needed proofreading, as well as substantive commentary. One dimension of difference is how much say the woman has in the decision to try for another child. For concision, I will sometimes speak of couples collectively "deciding" to have another child. But the power differences between man and woman should always be borne in mind. (back) Total fertility and under-five mortality are graphed by country here, using the same UN data source. (back) Qatar, Bahrain, the United Arab Emirates, and Kuwait had four of the six fastest-growing national populations in the 2000's, but mainly because of immigration. The rest of the top 20 were quite poor (UN Population Division 2013). (back) There is some conceptual redundancy in these graphs. Mortality might affect fertility via "other factors." But there are always intermediating factors. Evens neurons firing cause hands to move via intermediating mechanisms. So this possibility is indistinguishable from the first graphed. (back) The main mechanism behind the early French decline appears to have involved reduced probability of marriage per unit time, which led to more women marrying later or not at all (Wrigley 1985, Pg 47). (back) Most of the criticism has centered on regressions in which economic growth is the variable to be explained. E.g.: "Although I applaud the empirical emphasis in recent work on economic growth, I am not sanguine about the future of this work" (Mankiw 1995, Pg 307). (back) Mining can occur in many ways, conscious and unconscious. Journals, for example, may favor papers with seemingly statistically significant results. (back) Schultz 1997, Pg 398, Table 5, col 3. The coefficient on child mortality to age 5 is 0.0251. Since child mortality is measured per 1000 births (Pg 419), a 1% reduction reduces total fertility by .0251 × 10 = .251 births/woman. (back) Regressions in Table 6, cols 2 & 6, produce essentially the same result. 
A regression in Table 7, col 2, restricting to 1988 and controlling for oral contraceptive prices, produces a somewhat lower number of 0.21. Regression in cols 4 & 6 of Table 9 produce much larger estimates, but Schultz 1997, Pg 413, does not find the overall pattern of results from these regressions "plausible." (back) Conley, McCord, and Sachs 2007, Pg 50, Table 6, col 2, puts a coefficient of 0.06 on infant mortality. Infant mortality is used because of a reported lack of child mortality data for many years in the study period. (back) The birth and death rates are "crude" rates expressed per population. This makes them dependent on the distribution of the population across age groups. Populations with more young adults will have higher crude birth rates, even if the populations have the same long-term reproductive rate. This is why studies on modern data usually use sharper measures of fertility (births/woman on current birth probabilities at each age and mortality (death rates for specific age groups). Studies on historical data typically do not have this luxury, for lack of requisite data. (back) All variables except population fractions and schooling are entered in logs. Log GDP/capita enters in cubic form. Decade dummies are also included. (back) In a dynamic model \(y_t=\alpha y_{t-1}+\beta x_{t-1}\), the long-term impact of a permanent increase in \(x\) on \(y\) works out to \(\frac{\beta}{1-\alpha}\). (back) Roodman 2009b, Pg 88–94, formally presents linear GMM. (back) The regressions appear to suffer from instrument proliferation, manifest by Hansen overidentification J test p values near 1. Murtin 2013 cites Roodman 2009c for the rule of thumb that the number of instruments should not exceed the number of countries. However, that text is much more cautionary: "The…results just cited and replications below suggest that keeping the instrument count below N does not safeguard the J test." Separately, the Murtin's unexplained choice to instrument all variables other than time dummies with lags of just one or two probably results in weak instrumentation. Roodman 2009b describes typical use of the xtabond2 program: "most regressors appear twice in a command line, once before the comma for inclusion in X and once after as a source of IV- or GMM-style instruments." I modified the publicly posted code for the two Murtin 2013 regressions of focus in the text by generating lagged instruments from all the regressors other than time dummies, and "collapsing" them to reduce their number. These regressions still put coefficients of about 0.1 on log infant mortality, with statistical significance around p = 0.1. The commands are: "xtabond2 lFert l.lFert fmsoto lInfantMort lDeath lyact lyactsq lyactcub p30joint p40joint d1* if OKfert==1, gmm(fmsoto lFert lInfantMort lyact lyactsq lyactcub p30joint p40joint, lag(3 4) collapse) iv(d1*) two robust" and "xtabond2 lFert l.lFert fmsotop fmsotosh lInfantMort lDeath lyact lyactsq lyactcub p30joint p40joint d1* if OKfert==1, gmm(fmsotop fmsotosh lFert lInfantMort lyact lyactsq lyactcub p30joint p40joint, lag(3 4) collapse) iv(d1*) two robust". Borderline results on the Hansen overidentification test (p = 0.07 and 0.15) undercut the validity of the instruments, thus the causal interpretation. Using just 4th lags as instruments improves the Hansen test somewhat (p = 0.22 and 0.26) but destroys the statistical significance of the results. (back) GDP/capita is taken in logarithms. 
The regressions focused on here also include an "error correction" term, which is the deviation of the birth rate in the previous period from the level predicted by a separate regression determining the overall, long-term correspondence between the three variables. This deviation represents innovations in fertility—changes not predicted by past levels of any of the variables—and it proves a statistically significant correlate of future values of all three variables, meaning that it Granger-causes mortality and GDP/capita. Herzer, Strulik, and Vollmer 2012, note 16, states that specifications with 1 or 3 lags instead of 2 produce qualitatively similar results. (back) By "correlation" here I mean partial correlation. (back) Nevertheless, such an experiment is not inconceivable. Cohen and Dupas 2010 randomize the price of bed nets, which could affect mortality and fertility, although they do not track these outcomes. (back) In the baseline hazard regression for second births, the coefficient on a region's 1937 spleen rate×pre-1947 dummy is –0.136 (standard error 0.060). This indicates that after the eradication campaign, living in a malarial area increased the probability per unit time of a woman becoming pregnant. However, when the sample is restricted to 20 years centered around the eradication campaign, 1938–58, in which the campaign is most plausibly influential, the coefficient flips to 0.075 (standard error 0.056). (back) Turan 2011, Pg 25, documents strong convergence across Indian states in child mortality during the 1980s. The immunization program was probably a major driver. (back) The point estimate is –0.02 and the standard error 0.017, for a two-tailed p value of 0.24.
That is, if the true correlation is 0 and the statistical model correct, the probability of obtaining a coefficient so large in magnitude—less than –0.02 or greater than 0.02—is 0.24. Other regressions with full controls (Cols 6 of Tables 4, 5, 6) produce coefficients at least as weak. E-mail from Nicholas Wilson, April 1, 2014. Data are at http://dhsprogram.com/data/available-datasets.cfm. The preferred regressions (Table 2, col 6; Table 3, Panel B, col 4) have one observation for each region-year combination. They include country-year dummies, and region fixed effects. HIV prevalence is assumed zero through 1990, and constant at the survey values in 2000 and later. Observations for 1991–99 are excluded for lack of data on HIV prevalence. The standard deviation of 0.500 corresponds to a standard error of about 0.500/sqrt(108)=0.048 if the region is taken as the unit of observation, there being 108 regions. This puts 0.146 3 standard errors above 0, for high statistical significance. However, with 24.4 mothers/cluster and 4.1 births/mother, Bhalotra and van Soest 2008 judged the statistical power inadequate for independent fixed effects for each mother. They impose the assumption that the 333 community-level impact values and 7,286 mother-level impact values come from a normal distribution, so they estimate only the parameters of this distribution not the individual draws from it, in a "multilevel random effects" model. These results are from the regression with the largest control set. The dependent variable is the log of the hazard ratio, which is the log of the probability per unit time that an event will occur if it has not yet occurred. The tabulated numbers are antilogarithms of coefficient estimates in Hossain, Phillips, and LeGrand 2005.
Why is push_back in C++ vectors constant amortized?

I am learning C++ and noticed that the running time for the push_back function for vectors is constant "amortized." The documentation further notes that "If a reallocation happens, the reallocation is itself up to linear in the entire size." Shouldn't this mean the push_back function is $O(n)$, where $n$ is the length of the vector? After all, we are interested in worst case analysis, right? I guess, crucially, I don't understand how the adjective "amortized" changes the running time.

algorithms time-complexity amortized-analysis David Faux

$\begingroup$ With a RAM machine, allocating $n$ bytes of memory is not an $O(n)$ operation -- it is considered pretty much constant time. $\endgroup$ – usul

$\begingroup$ The word "amortised" clearly indicates that we are not asking for the worst case or average case of one pushback operation but the amortised case of performing many pushback operations. $\endgroup$ – gnasher729

The important word here is "amortized". Amortized analysis is an analysis technique that examines a sequence of $n$ operations. If the whole sequence runs in $T(n)$ time, then each operation in the sequence runs in $T(n)/n$. The idea is that while a few operations in the sequence might be costly, they can't happen often enough to weigh down the program. It's important to note that this is different from average case analysis over some input distribution or randomized analysis. Amortized analysis establishes a worst case bound for the performance of an algorithm irrespective of the inputs. It's most commonly used to analyse data structures, which have a persistent state throughout the program.

One of the most common examples given is the analysis of a stack with a multipop operation that pops $k$ elements. A naive analysis of multipop would say that in the worst case multipop must take $O(n)$ time since it might have to pop off all the elements of the stack. However, if you look at a sequence of operations, you'll notice that the number of pops cannot exceed the number of pushes. Thus over any sequence of $n$ operations the total number of pops is $O(n)$, and so multipop runs in $O(1)$ amortized time even though occasionally a single call might take more time.

Now how does this relate to C++ vectors? Vectors are implemented with arrays, so to increase the size of a vector you must reallocate memory and copy the whole array over. Obviously we wouldn't want to do this very often. So if you perform a push_back operation and the vector needs to allocate more space, it will increase the size by a factor $m$. Now this takes more memory, which you may not use in full, but the next few push_back operations all run in constant time.

Now if we do the amortized analysis of the push_back operation (which I found here) we'll find that it runs in constant amortized time. Suppose you have $n$ items and your multiplication factor is $m$. Then the number of reallocations is roughly $\log_m(n)$. The $i$th reallocation costs time proportional to $m^i$, about the size of the current array. Thus the total time for $n$ push_back operations is $\sum_{i=1}^{\log_m(n)}m^i \approx \frac{nm}{m-1}$, since it's a geometric series. Divide this by $n$ operations and we get that each operation takes $\frac{m}{m-1}$, a constant. The ideal growth rate varies by application, but I think some implementations use $1.5$.

Marc Khoury
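The bound is easy to observe empirically. Below is a minimal toy sketch (it is not how std::vector is actually implemented; the class name, the growth factor of 2 and the copy counter are assumptions made purely for illustration) that counts how many element copies a long sequence of push_back calls performs:

#include <cstddef>
#include <iostream>

// Toy growable array with growth factor 2. It counts element copies so the
// amortized cost of push_back can be observed. (Illustrative only; real
// std::vector implementations differ in many details.)
class ToyVector {
public:
    ~ToyVector() { delete[] data_; }

    void push_back(int value) {
        if (size_ == capacity_) {                     // out of room: reallocate
            std::size_t new_cap = (capacity_ == 0) ? 1 : 2 * capacity_;
            int* new_data = new int[new_cap];
            for (std::size_t i = 0; i < size_; ++i) {
                new_data[i] = data_[i];               // copy one existing element
                ++total_copies_;
            }
            delete[] data_;
            data_ = new_data;
            capacity_ = new_cap;
        }
        data_[size_++] = value;                       // the cheap, common case
    }

    std::size_t size() const { return size_; }
    std::size_t total_copies() const { return total_copies_; }

private:
    int* data_ = nullptr;
    std::size_t size_ = 0;
    std::size_t capacity_ = 0;
    std::size_t total_copies_ = 0;
};

int main() {
    ToyVector v;
    const int n = 1000000;
    for (int i = 0; i < n; ++i) v.push_back(i);
    // With growth factor 2 the total number of copies stays below 2n.
    std::cout << "copies per push_back: "
              << static_cast<double>(v.total_copies()) / v.size() << '\n';
}

The printed copies-per-push figure settles near a small constant rather than growing with n, which is exactly the amortized-constant behaviour derived above.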
Although @Marc has given (what I think is) an excellent analysis, some people might prefer to consider things from a slightly different angle.

One is to consider a slightly different way of doing a reallocation. Instead of copying all the elements from the old storage to the new storage immediately, consider copying only one element at a time -- i.e., each time you do a push_back, it adds the new element to the new space, and copies exactly one existing element from the old space to the new space. Assuming a growth factor of 2, it's pretty obvious that when the new space is full, we'd have finished copying all the elements from the old space to the new space, and each push_back will have taken exactly constant time. At that point, we'd discard the old space, allocate a new block of memory that was twice as large again, and repeat the process.

Pretty clearly, we can continue this indefinitely (or as long as there's memory available, anyway) and every push_back would involve adding one new element and copying one old element.

A typical implementation still has exactly the same number of copies -- but instead of doing the copies one at a time, it copies all the existing elements at once. On one hand, you're right: that does mean that if you look at individual invocations of push_back, some of them will be substantially slower than others. If we look at a long term average, however, the amount of copying done per invocation of push_back remains constant, regardless of the size of the vector.

Although it's irrelevant to the computational complexity, I think it's worth pointing out why it's advantageous to do things as they do, instead of copying one element per push_back, so the time per push_back remains constant. There are at least three reasons to consider.

The first is simply memory availability. The old memory can be freed for other uses only after the copying is finished. If you only copied one item at a time, the old block of memory would remain allocated much longer. In fact, you'd have one old block and one new block allocated essentially all the time. If you decided on a growth factor smaller than two (which you usually want) you'd need even more memory allocated all the time.

Second, if you only copied one old element at a time, indexing into the array would be a little more tricky -- each indexing operation would need to figure out whether the element at the given index was currently in the old block of memory or the new one. That's not terribly complex by any means, but for an elementary operation like indexing into an array, almost any slow-down could be significant.

Third, by copying all at once, you take much better advantage of caching. Copying all at once, you can expect both the source and destination to be in the cache in most cases, so the cost of a cache miss is amortized over the number of elements that will fit in a cache line. If you copy one element at a time, you might easily have a cache miss for every element you copy. That only changes the constant factor, not the complexity, but it can still be fairly significant -- for a typical machine, you could easily expect a factor of 10 to 20.

It's probably also worth considering the other direction for a moment: if you were designing a system with real-time requirements, it might well make sense to copy only one element at a time instead of all at once.
Although overall speed might (or might not) be lower, you'd still have a hard upper bound on the time taken for a single execution of push_back -- presuming you had a real-time allocator (though of course, many real-time systems simply prohibit dynamic allocation of memory at all, at least in portions with real-time requirements).

Jerry Coffin

$\begingroup$ +1 This is a wonderful Feynman-style explanation. $\endgroup$ – Kuba hasn't forgotten Monica
Transducer Characterization by Impedance Spectroscopy

Example 1: Fourier analysis of RC networks
Example 2: Fourier analysis of RL networks
Equivalent topologies
Manual data fitting of RC circuits
Systematic data fitting: elimination method
Automated model fitting
The Kramers-Kronig test
How to measure impedance?

Transducers (both sensors and actuators) can be modelled as a network of discrete equivalent components by a method called Impedance Spectroscopy. The method uses the fact that a two-port passive linear network is completely defined by its response once we know the phase and magnitude of its impedance over a sufficiently broad bandwidth. As a result, we can find an equivalent network for a transducer by measuring the electrical impedance at several frequencies. The result is a network, both the structure and the values of discrete passive components (R, L and C), that describes the transducer sufficiently. This method is commonly used in the field of electrochemistry, and good tutorials can be found when searching for Electrochemical Impedance Spectroscopy. An example of such a tutorial is published online by Gamry Instruments1). The original method was developed mainly by J. Ross Macdonald2)3) in 1987 and also published by Bernard Boukamp in 19954).

Consider a two-port black box circuit as drawn in figure 1. The only thing we know about the box is that it contains time-invariant, linear, and passive components. In practice, a network of resistors, inductors and capacitors (RLC) satisfies this criterion.

Fig. 1: The time response to a step function applied to a two-port black box

Now imagine we apply a known voltage to the two-port, for example a step \begin{equation} \begin{cases} U(t)=0 & \text{ if } t< 0 \\ U(t)=U_{0} & \text{ if } t\geqslant 0 \end{cases} \end{equation} and we measure the current as a function of time as \begin{equation} I(t)=G\left ( U(t) \right ) \label{eq:TransferAdmittanceForm} \end{equation} with $G$ the transfer function for $U(t)\rightarrow I(t)$. Note that in a similar way, we can apply a current \begin{equation} \begin{cases} I(t)=0 & \text{ if } t< 0 \\ I(t)=I_{0} & \text{ if } t\geqslant 0 \end{cases} \end{equation} and measure the potential as a function of time \begin{equation} U(t)=H\left ( I(t) \right ). \label{eq:TransferImpedanceForm} \end{equation} Now $H$ is the transfer function for $I(t)\rightarrow U(t)$. We call $G$ the admittance function and $H$ the impedance function5).

What is most important is that based on observing the transfer function, we have a strong clue what is inside the box. For example, the response of figure 1 strongly reminds us of an RC-series network. Based on the time constant $\tau$, we can even determine the RC-product. The assumed RC-network is an equivalent circuit: it is not necessarily the real network inside the box. Later on we will see that some responses can be implemented by multiple different equivalent circuits: in most cases there is not a unique topology. However, the shape of the network has to be chosen so that it resembles plausible physical phenomena.
For example, when the response of figure 1 is observed with an electrolytic capacitor, the series network of a resistor and a capacitor makes sense: we have modelled it as an ideal capacitor with a series resistor representing the resistance of the connection wires.

Because a step function contains all frequencies, the response of a network to a step function gives a broad-spectrum fingerprint. Therefore, the transfer function of the black box is completely characterised by applying a step. Both for mathematical and practical reasons it is more convenient to apply a series of harmonic signals (sine waves) to characterise the black box. This requires some knowledge of analysis by Fourier series.

Assume the black box is filled with a series network of a resistor and a capacitor like figure 2. The differential equation is \begin{equation} I(t)=C\frac{\mathrm{d} }{\mathrm{d} t}\left [ U(t)-RI(t) \right ] \end{equation} and for this boundary condition we find the solution \begin{equation} I(t)=\frac{U_{0}}{R}e^{-\frac{t}{RC}}. \label{eq:TransferFunctionTimeRCseries} \end{equation}

Fig. 2: Example of a two-port that is a series network of a resistor and a capacitor

Equation \eqref{eq:TransferFunctionTimeRCseries} is the transfer function $G$ in the admittance form \eqref{eq:TransferAdmittanceForm}. Although this response characterises the circuit completely, it is easier to approach the identification in the frequency domain than in the time domain. For the frequency domain we have to apply the theory of Fourier transforms. Any time dependent function can be written as an infinite series of harmonic terms. This is the Fourier transform where the time signal is represented by the sum of the harmonics \begin{equation} U(t)=\frac{1}{2 \pi}\int_{-\infty }^{\infty } \! \overline{U}(\omega)e^{j \omega t} \ \mathrm{d} \omega \end{equation} with the coefficients $\overline{U}(\omega)$ given by \begin{equation} \overline{U}(\omega)=\int_{-\infty }^{\infty } \! U(t)e^{-j \omega t} \ \mathrm{d} t \end{equation} where the harmonic basis functions are mutually orthogonal. Note that this makes use of Euler's identity \begin{equation} e^{jx}=\cos (x) + j \sin (x). \end{equation}

We can directly describe the impedance of the circuit of figure 2 as \begin{equation} \overline{Z}_{RCs}(j \omega)=R+\frac{1}{j \omega C} \end{equation} and the admittance as \begin{equation} \overline{Y}_{RCs}(j \omega)=\frac{R^{-1}\cdot j \omega C}{R^{-1} + j \omega C}=\frac{j \omega C}{1+j \omega RC} \end{equation} where the convention is to write $H(t) \rightarrow \overline{Z}(j \omega)$ for the impedance and $G(t) \rightarrow \overline{Y}(j \omega)$ for the admittance expressions of the transfer function in the time and frequency domain respectively.

In figure 3 the Bode diagram of the series circuit of figure 2 is drawn. In a Bode diagram, the magnitude $\left | \overline{Z} \right |= \sqrt { Re^{2} + Im^{2} }$ is plotted as a function of the frequency on a log-log scale. The second part of the Bode diagram is the phase of $\overline{Z}$ as a function of frequency on a log-lin scale. As a reference, the Bode diagram for a parallel circuit of a resistor and a capacitor is drawn as well, which results in an impedance of \begin{equation} \overline{Z}_{RCp}(j \omega)= \frac{R\cdot \frac{1}{j \omega C}}{R + \frac{1}{j \omega C}}=\frac{R}{1+j \omega RC } \end{equation} and an admittance of \begin{equation} \overline{Y}_{RCp}(j \omega)=R^{-1} + j \omega C \end{equation}
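These expressions are easy to evaluate numerically. The short sketch below (the component values R = 10 kΩ and C = 0.1 µF are assumptions chosen only for illustration) prints, for a logarithmic frequency sweep, the magnitude and phase used in a Bode diagram and the real and minus-imaginary parts used in a polar plot:

#include <complex>
#include <cstdio>

// Impedance of a series RC network and a parallel RC network over a
// logarithmic frequency sweep. The printed columns are the Bode data
// (|Z| and phase of the series circuit) and the polar-plot data
// (Re and -Im of the parallel circuit).
int main() {
    const double R = 10e3;      // ohm (assumed value)
    const double C = 0.1e-6;    // farad (assumed value)
    const std::complex<double> j(0.0, 1.0);

    std::printf("%12s %14s %10s %14s %14s\n",
                "omega", "|Z| series", "phase", "Re(Z) par", "-Im(Z) par");
    for (double w = 1.0; w <= 1.0e7; w *= 10.0) {
        std::complex<double> Zs = R + 1.0 / (j * w * C);       // series RC
        std::complex<double> Zp = R / (1.0 + j * w * R * C);   // parallel RC
        std::printf("%12.3e %14.3e %10.3f %14.3e %14.3e\n",
                    w, std::abs(Zs), std::arg(Zs), Zp.real(), -Zp.imag());
    }
    return 0;
}

Plotting Re against -Im for the parallel circuit reproduces the semi-circle discussed below; plotting |Z| and the phase against ω reproduces the Bode curves of figure 3.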
Fig. 3: The Bode plot for the impedance of a series and parallel RC network

For electrical engineers, the Bode diagram is a common representation of the transfer function. However, there are other options to plot the same transfer functions. For recognising patterns of networks, the polar plot is more convenient because it represents both phase and magnitude in a single graph. In a polar plot, the horizontal axis is the real part and the vertical axis the imaginary part. Here the convention is used to plot the imaginary part with a minus sign. A vector from the origin to a certain frequency has a length $\left | \overline{Z} \right |$ and an angle $\arg(\overline{Z})$.

Fig. 4: The polar plot for the impedance of a series and parallel RC network

In the series case, we can see a pure resistive element of $10 k\Omega$ on the horizontal real axis for $\omega \rightarrow \infty$. In the same figure, the plotted angle goes to $\pi /2$ for DC, meaning $\omega \rightarrow 0$ (the phase of $\overline{Z}$ itself approaches $-\pi/2$; the line points upward because the imaginary part is plotted with a minus sign). An upward line in the upper-right quadrant represents pure capacitive behaviour. The right-hand picture represents the case where the resistor and capacitor are in parallel. In the impedance plot, this combination becomes a clear semi-circle in the upper-right quadrant. Using polar plots, we can find combinations of resistors and capacitors by pattern recognition.

In case the black-box circuit of figure 2 is not filled with a resistor and a capacitor, but with a resistor and an inductor, the impedance equation for a series circuit becomes \begin{equation} \overline{Z}_{RLs}(j \omega)=R+j \omega L \end{equation} and the admittance is \begin{equation} \overline{Y}_{RLs}(j \omega)=\frac{R^{-1}\cdot \left ( j \omega L \right )^{-1}}{R^{-1} + \left ( j \omega L \right )^{-1}}=\frac{1}{R+j \omega L} \end{equation} The Bode diagram and the polar plots can be seen in figure 5 and figure 6 respectively. It can be seen that in a resistor-inductor configuration, the polar plot goes from infinity in the lower-right quadrant to the real axis at DC for a series circuit. A semi-circle in the lower-right quadrant can be seen for the parallel configuration.

Fig. 5: The Bode plot for the impedance of a series and parallel RL network

Fig. 6: The polar plot for the impedance of a series and parallel RL network

When it is claimed that combinations of capacitors, resistors and inductors result in characteristic patterns in the polar plot, we may think all more complex circuits can be recognised as well. This is not the case. The two circuits in figure 7 result in the same polar plot and Bode diagrams.

Fig. 7: These two basic structures cannot be distinguished from the polar plots or Bode diagrams

The shared polar plot is represented in figure 8 and is equal if \begin{equation} \begin{aligned} R_{1} & = R_{a}\left ( 1+\frac{R_{a}}{R_{c}} \right )\\ C_{2} & = \left ( \frac{R_{c}}{R_{a}+R_{c}} \right )^{2}C_{b}\\ R_{3} & = R_{a}+R_{c}. \end{aligned} \end{equation} In this specific plot, the values $R_{a} = 1k\Omega$, $R_{c} = 10k\Omega$ and $C_{b} = 1\mu F$ are chosen.

Fig. 8: The polar plot for the two equivalent circuits

Fletcher6) describes the equivalences in a more standardized way, and gives many more equivalent circuit topologies.

We have seen that in the polar plot: a series capacitor results in a vertical line in the upper-right quadrant; a series resistor results in a shift along the real axis; and an RC parallel combination results in a semi-circle in the upper-right quadrant.
In fact, a series resistor is an RC parallel combination with an infinitesimally small capacitor, and a series capacitor is an RC parallel combination with an infinitely large resistor. Knowing this, we can assume that any complex circuit of resistors and capacitors consists of a series circuit of parallel combinations as represented in the circuit of figure 9.

Fig. 9: A complex RC circuit can be modelled as a series of R-C parallel combinations

The circuit of figure 9 is described by \begin{equation} \overline{Z}_{fitted}(\omega)=\sum_{i=1}^{N}\frac{R_{i}}{1+j \omega R_{i}C_{i}} \label{eq:TransferFunctionFrequencyRCseries} \end{equation} with $R_{i}$ and $C_{i}$ the values of the $N$ parallel resistor-capacitor combinations. The number $N$ of the series is equal to the number of semi-circles observed in the polar plot, plus one for the optional series resistor and one for the optional series capacitor. Note that this is an empirical model: to obtain a topology that makes physical sense, the structure can be transformed into an equivalent circuit that matches physical phenomena using the method of Fletcher7).

As an example, consider the polar plot of figure 10. In figure 11 the same polar plot is used to fit two circles. Note that the circles must touch, because the real part at $\omega \rightarrow 0$ equals the sum of all resistances, so the diameters of the semi-circles together span the real axis between the high-frequency and low-frequency intercepts.

Fig. 10: An example of a polar plot for an unknown circuit

Fig. 11: The same polar plot of an unknown circuit with fitted circles

Now we can derive: This circuit has at least two parallel resistor-capacitor combinations; There is an offset along the real axis: for high frequencies, the plot becomes completely real at $1k \Omega$. This means there is a series resistor of $R_{0} = 1k \Omega$; For low frequencies, there is a completely imaginary vertical line, indicating a series capacitor. There is one measurement point taken along this line (not indicated in the graph): at $\omega = 1 rad/s$ the imaginary part is $10.105 k \Omega$. This means that the series capacitor is $C_{3}=99.0 \mu F$ because $\overline{Z}=1/(j\omega C)$.

The next question is how to deal with the two semi-circles. In figure 12, it can be seen that the width of the semi-circle is equal to the resistor value $R$. The capacitor value can be found from the frequency where we have $\pi /4$ phase shift, because there $\omega_{\pi /4} = 1/(RC)$.

Fig. 12: How to derive the R and C from a polar plot semi-circle

To go back to figure 10, we find: $R_{1}=20k \Omega$ and $C_{1}=0.01 \mu F$ as a pair and $R_{2}=10.1k \Omega$ and $C_{2}=99.0 \mu F$ as the second pair as shown in figure 13.

Fig. 13: The structure of the empirically fitted circuit
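As a worked check of these numbers (using the values read from the fitted circles; the top-of-circle frequency of about 5012 rad/s is the one used in the elimination example below):

\begin{equation} C_{1}=\frac{1}{\omega_{\pi/4}R_{1}}\approx\frac{1}{5012\,\mathrm{rad/s}\cdot 20\cdot10^{3}\,\Omega}\approx 0.01\,\mathrm{\mu F}, \qquad C_{3}=\frac{1}{\omega\left | \overline{Z}_{Im} \right |}=\frac{1}{1\,\mathrm{rad/s}\cdot 10.105\cdot10^{3}\,\Omega}\approx 99.0\,\mathrm{\mu F}. \end{equation}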
Equation \eqref{eq:TransferFunctionFrequencyRCseries}, which describes the universal structure of figure 9, shows something important: every parallel RC-combination adds a term to the equation. This means that if we find one RC combination, we can subtract it from the response. Next, another subcircuit RC combination will become visible.

Consider the measured response of figure 10. Assume it is measured as $N$ points, each with a real impedance $\overline{Z}_{Re,i}$ and an imaginary impedance $\overline{Z}_{Im,i}$ (that is what impedance analyzers do). For each point, we know the applied radial frequency. So we have three data vectors $\mathbf{\omega}$, $\mathbf{\overline{Z}_{Re}}$, and $\mathbf{\overline{Z}_{Im}}$ with the $N$ elements $\omega_{i}$, $\overline{Z}_{Re,i}$, and $\overline{Z}_{Im,i}$.

First, we observe that the biggest fitted circle (but we can also start with another feature) in figure 11 has a diameter of $20 k \Omega$. This means that there is one RC-combination in the equivalent circuit that has an $R_{1}=20 k \Omega$. With the method of figure 12 and finding the top of the corresponding circle at $\omega = 5012 rad/s$, we can find $C_{1}=0.01 \mu F$. So far, this is the same as in the previous paragraph. But now we subtract the spectrum of the found subcircuit from the datasets $\overline{Z}_{Re,i}$ and $\overline{Z}_{Im,i}$: \begin{equation} \begin{aligned} \overline{Z}^{-R_{1}C_{1}}_{Re,i} & = \overline{Z}_{Re,i}-Re\left ( \overline{Z}_{R_{1}C_{1}} \right )\\ & = \overline{Z}_{Re,i}-Re\left ( \frac{R_{1}}{1+j\omega_{i} R_{1}C_{1}} \right )\\ & = \overline{Z}_{Re,i}- \frac{R_{1}}{1+\left (\omega_{i} R_{1}C_{1}\right )^{2}} \label{eq:EliminateReal} \end{aligned} \end{equation} \begin{equation} \begin{aligned} \overline{Z}^{-R_{1}C_{1}}_{Im,i} & = \overline{Z}_{Im,i}-Im\left ( \overline{Z}_{R_{1}C_{1}} \right )\\ & = \overline{Z}_{Im,i}-Im\left ( \frac{R_{1}}{1+j\omega_{i} R_{1}C_{1}} \right )\\ & = \overline{Z}_{Im,i}+ \frac{\omega_{i}R_{1}^{2}C_{1}}{1+\left (\omega_{i} R_{1}C_{1}\right )^{2}} . \label{eq:EliminateImag} \end{aligned} \end{equation}

Next repeat this elimination to make $\overline{Z}^{-R_{2}C_{2}}_{Re,i}$ from $\overline{Z}^{-R_{1}C_{1}}_{Re,i}$ and $\overline{Z}^{-R_{2}C_{2}}_{Im,i}$ from $\overline{Z}^{-R_{1}C_{1}}_{Im,i}$ until only one small dot at the origin remains. This procedure is applied to the dataset in figure 14.

Fig. 14: Our dataset is reduced in four steps by subtracting the calculated responses of the found RC combinations
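The subtraction of equations \eqref{eq:EliminateReal} and \eqref{eq:EliminateImag} is straightforward to implement. Below is a sketch (the function and variable names are assumptions; the synthetic dataset simply re-uses the component values of the example circuit found above):

#include <complex>
#include <cstddef>
#include <cstdio>
#include <vector>

// Subtract the spectrum of one identified parallel RC combination (R, C)
// from measured real/imaginary impedance data, following the elimination
// equations above. The vectors are modified in place.
void eliminate_rc(const std::vector<double>& omega,
                  std::vector<double>& z_re,
                  std::vector<double>& z_im,
                  double R, double C) {
    for (std::size_t i = 0; i < omega.size(); ++i) {
        double wRC = omega[i] * R * C;
        double denom = 1.0 + wRC * wRC;
        z_re[i] -= R / denom;                        // subtract Re of R/(1 + jwRC)
        z_im[i] += omega[i] * R * R * C / denom;     // subtract (negative) Im of R/(1 + jwRC)
    }
}

int main() {
    // Synthetic spectrum of the example circuit: R0 in series with (R1 || C1),
    // (R2 || C2) and the series capacitor C3.
    const double R0 = 1e3, R1 = 20e3, C1 = 0.01e-6, R2 = 10.1e3, C2 = 99e-6, C3 = 99e-6;
    const std::complex<double> j(0.0, 1.0);
    std::vector<double> omega, z_re, z_im;
    for (double w = 0.01; w <= 1.0e7; w *= 2.0) {
        std::complex<double> Z = R0 + R1 / (1.0 + j * w * R1 * C1)
                                    + R2 / (1.0 + j * w * R2 * C2)
                                    + 1.0 / (j * w * C3);
        omega.push_back(w);
        z_re.push_back(Z.real());
        z_im.push_back(Z.imag());
    }
    eliminate_rc(omega, z_re, z_im, R1, C1);   // remove the first identified pair
    std::printf("points: %zu, first point after elimination: Re = %g, Im = %g\n",
                omega.size(), z_re.front(), z_im.front());
    return 0;
}

Repeating the call for (R2, C2) and then subtracting the series-resistor and series-capacitor terms reduces the dataset to a small dot around the origin, as in figure 14.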
What has happened in fact is illustrated in figure 15: we model the equivalent circuit by equation \eqref{eq:TransferFunctionFrequencyRCseries}, which describes the universal structure of figure 9.

Fig. 15: The overall procedure of recognizing several RC combinations from a polar plot

Now we have a model (equation \eqref{eq:TransferFunctionFrequencyRCseries} and figure 9), and we derived the interpretation that the response is characterised by identifying semi-circles to find $R_{i}$ and $\omega_{i}$ and subsequently $C_{i}$, as illustrated in figure 15. It should not be difficult to fit the semi-circles automatically. However, because equation \eqref{eq:TransferFunctionFrequencyRCseries} is not linear in $R_{i}$ and $C_{i}$, this cannot be done with a simple linear least-squares (LMS) algorithm.

From complex function theory, we know that under certain circumstances there is a deterministic relation between the imaginary and real data (or magnitude and phase) in a single spectrum. Mathematically, the condition is that the system is causal. Practically, for us it means that our systems are passive and stationary: constant resistors, capacitors and inductors. This relation is described by the Kramers-Kronig relations, which state that for causal complex-plane spectral data there is a dependency between magnitude and phase. The real part of a spectrum can be obtained by an integration of the imaginary part and vice versa, as described by the Kramers-Kronig equations: \begin{equation} \begin{aligned} Z_{Re}\left ( \omega \right ) & = Z_{Re}\left ( \infty \right )+\frac{2}{\pi}\int_{0}^{\infty}\frac{xZ_{Im}\left ( x \right )-\omega Z_{Im}\left ( \omega \right )}{x^{2}-\omega^{2}}\,dx \\ Z_{Im}\left ( \omega \right ) & =-\frac{2\omega}{\pi}\int_{0}^{\infty}\frac{Z_{Re}\left ( x \right )- Z_{Re}\left ( \omega \right )}{x^{2}-\omega^{2}}\,dx \end{aligned} \end{equation}

This means that the Kramers-Kronig relations can be used to evaluate data quality. What is done by Boukamp8) is that the universal model of equation \eqref{eq:TransferFunctionFrequencyRCseries} and figure 9 is applied to the dataset. If the fitted model satisfies the Kramers-Kronig relations, we may assume the system is passive, stationary and causal (there is, for example, no drift). The trick of Boukamp is that he fits the model with $N$ combinations of RC circuits when we have $N$ measurements in the dataset. This makes no physical sense, because we want to have RC combinations that represent physical phenomena, but according to Boukamp this linearises the fitting problem, just for this test.

How to measure impedance?

An impedance analyzer (HP4194, HP4294, E4990, E4991), or the 2-port option of a network analyzer9).
A configuration around the Analog Devices AD593310), like the evaluation kit of Analog Devices11) or the PmodIA module of Digilent12). See my explorations of using this board and the LabVIEW code below.
A high-end LCR meter like the Keysight E4980AL. This LCR meter can measure impedances at a range of frequencies. By controlling it with MATLAB or LabVIEW (see code below), complete Bode plots and polar plots can be made.

Keysight_Impedance_Spectroscopy_LabVIEW_v1_0.zip (LabVIEW 2018, v1.0): LabVIEW vi for an impedance sweep (polar or Bode plot) on a Keysight E4980AL
Impedance_Spectroscopy_AD5933_LabVIEW_2018.zip (LabVIEW 2018, v1.0): LabVIEW vi for an impedance sweep (polar or Bode plot) on an AD5933 using a BusPirate for I2C

These are the chapters for the Sensor Technology course:
Chapter 1: Measurement Theory
Chapter 2: Measurement Errors
Chapter 3: Measurement Technology
Chapter 4: Circuits, Graphs, Tables, Pictures and Code
Chapter 5: Basic Sensor Theory
Chapter 6: Sensor-Actuator Systems
Chapter 7: Modelling
Chapter 8: Modelling: The Accelerometer - example of a second order system
Chapter 9: Modelling: Scaling - why small things appear to be stiffer
Chapter 10: Modelling: Lumped Element Models
Chapter 11: Modelling: Finite Element Models
Chapter 12: Modelling: Transducer Characterization by Impedance Spectroscopy
Chapter 13: Modelling: Systems Theory
Chapter 14: Modelling: Numerical Integration
Chapter 15: Signal Conditioning and Sensor Read-out
Chapter 16: Resistive Sensors
Chapter 17: Capacitive Sensors
Chapter 18: Magnetic Sensors
Chapter 19: Optical Sensors
Chapter 20: Actuators - an example of an electrodynamic motor
Chapter 21: Actuator principles for small speakers
Chapter 22: ADC and DAC
Chapter 23: Bus Interfaces - SPI, I2C, IO-Link, Ethernet based
Appendix A: Systematic unit conversion
Appendix B: Common Mode Rejection Ratio (CMRR)
Appendix C: A Schmitt Trigger for sensor level detection
Gamry Instruments, Basics of Electrochemical Impedance Spectroscopy, http://www.gamry.com/application-notes/EIS/basics-of-electrochemical-impedance-spectroscopy/
Macdonald, J. Ross, Impedance Spectroscopy: Emphasizing Solid Materials and Systems (1987).
Macdonald, J. R., and Barsoukov, E. (2005). Impedance Spectroscopy: Theory, Experiment, and Applications.
Boukamp, B. A., J. Electrochem. Soc., 142, 1885 (1995).
Keysight Technologies, Impedance Measurement Handbook: A Guide to Measurement Technology and Techniques, 6th Edition, Application Note, http://literature.cdn.keysight.com/litweb/pdf/5950-3000.pdf
Fletcher, S. (1994). Tables of Degenerate Electrical Networks for Use in the Equivalent-Circuit Analysis of Electrochemical Systems. Journal of The Electrochemical Society, 141(7), 1823-1826.
Analog Devices, 1 MSPS, 12-Bit Impedance Converter, Network Analyzer (AD5933), http://www.analog.com/en/products/rf-microwave/direct-digital-synthesis-modulators/ad5933.html
Analog Devices, AD5933 Evaluation Board, http://www.analog.com/en/design-center/evaluation-hardware-and-software/evaluation-boards-kits/EVAL-AD5933.html
Digilent PmodIA, http://store.digilentinc.com/pmodia-impedance-analyzer/

theory/sensor_technology/st12_impedance_spectroscopy.txt · Last modified: 2019/03/19 07:25 by glangereis
Physical History

First published on May 17, 2019. Last updated on February 13, 2020.

2 Big Bang and the Formation of Our World
2.1 Energy Balance of the Earth
2.2 Energy of Life
2.3 Energy Flows in Ecology
2.4 Formation and Endurance of Life
2.5 Statistical and Evolutionary Intelligence
2.6 Smarter Intelligence
2.7 Development of Agriculture and Civilization
3 Fast Entropy and the eth law
4 Flows and Bubbles
4.1 Resource Bubbles
4.2 Economic Bubbles
4.3 Business Bubbles
4.4 Exponential functions
5 Psychological Reactions
5.1 Psychology Versus Fast Entropy Paradox
5.2 Psychological Reactions to Change
6 Modeling History
6.1 Creating And Using Models
6.2 Fitting, Uncertainty, Significance and Error
6.3 To What Extent Can History Be Quantitatively Modeled?
6.4 Long-Term Trends and the Emergence of Societies
6.5 Emergence of Dynasties
6.6 Modeling A Dynasty Using EDEG
6.7 Comparing Historical Data to Dynastic Models
6.8 Secondary Dynastic Events
6.9 Modeling History as A Series of Dynasties
6.10 Interrelations Between Concurrent Dynasties
6.11 The Colossus Model of World History
6.12 A GIS Approach to World History
6.13 Modern Times and the Near Future
7 The Future: Beyond Our World And Time
7.1 Longterm Trends – Future
7.2 Modeling The Future
7.3 Other Worlds and Societies

This book strives to demonstrate how humanity has developed and progressed as a result of cosmological processes, as well as to explore the development of a science of historical processes. This book goes on to discuss the implications and uses for analyzing societies.

First published on May 16, 2019. Last updated on January 20, 2021.

This section introduces the subject of Physical History and Economics (PHE). Physical History and Economics is a small treatise on the development of a unified science of history. It is chiefly a physical model, in that it deals with physical principles, quantities, tendencies and constraints. It attempts to do so quantitatively where possible. It also delves into other areas such as psychology and traditional history. However, the author has great respect for researchers and their work in those areas, and does not assert expertise in those areas.

This entire publication is a work-in-progress. Substantive research is being conducted behind the scenes and presented in academic settings. Trying to weave everything together is a challenge, and one never knows how much time may be left to improve the content. The author engages in this work in the hope that it may be of great benefit to both the positive advancement and sustainability of humanity, and the seeking of knowledge. So changes will be made in a hard-to-predict manner to the various sections. Therefore, if you do cite this work, please include the date last viewed.

Imagine the hot sun shining brightly upon the Earth situated in cold space. Much light is reflected back from the Earth into space. The remainder of the light is absorbed by the Earth and heats its surface. Nature abhors temperature differences, and tries to rectify the situation as quickly as possible by having the Earth emit heat back into space. Yet the Earth's atmosphere is a good insulator. To bypass that insulation, great blobs of hot air at the surface rise wholesale into the cooler regions of the upper atmosphere, so the escape of heat is greatly increased, and Nature is pleased. Yet the light that gets reflected from the Earth is not heat nor does much to warm the coldness of space. Nature does not gladly tolerate such rogue light.
So living organisms develop upon the Earth that can capture and photosynthesize some of the rogue light. Those organisms release heat or are consumed by other organisms that produce heat. Nature is still not satisfied and demands greater haste. Intelligent organisms form that can release heat faster, and civilizations form that can release heat yet faster, further pleasing Nature. Nature is greedy and demands all that it can seize. Just as great blobs of air form and rise through the atmosphere, dynasties and empires form in succession one after another, releasing heat that is otherwise inaccessible. History is literally a pot of water boiling on a hot stove in a cold kitchen, with dynasties and empires forming and bubbling up to the surface. Is there more that Nature can yet demand? New technologies and untapped sources of energy? New forms of civilization? Or the yet totally unknown?

This book is intended to serve as an introduction and handbook. Rich descriptions as well as much technical detail have been omitted to improve readability and avoid confusion. Additional sources of information are cited for the reader who wishes to know more. In this book, you will envision how humans are linked to the entire universe and how we share its drive and destiny. Unfortunately, PHE does not provide quick, easy answers to society's challenges. Nevertheless, you will discover analytical tools as powerful as the astronomer's telescope and the biologist's microscope to investigate human affairs. This is a tall order to fill. It is best to remember that this book is more of a framework of perspectives and tools to help you get started, rather than an encyclopedia of answers. This is still a pioneering field. There are considerable opportunities for further contributions of the greatest significance.

PHE derives social science primarily from physics, but also from other areas such as cosmology, ecology and psychology. PHE is more fundamental than social science derived merely from the observation of humans, because it views the existence of humans as the result of cosmological trends and physical processes. Likewise, PHE strives to be generic, so that it can be used to describe and analyze any society anywhere and anytime, be it the Carolingian dynasty in medieval France or an extraterrestrial society across the galaxy. Observation strongly suggests that the laws of physics remain invariant across time and space, allowing for the possibility of a truly generic, non-geocentric social science derived from physical principles. Although PHE is based upon the physical sciences, no claim is made for its ability to "produce" a perfectly deterministic science. In fact the approaches of PHE are only practical because people act as individuals and have a wide freedom of action. This seems paradoxical, but that is the way things work out.

Inner Versus Outer Philosophy

In ancient times, natural (outer) and social (inner) philosophy were closely linked. Then, a philosopher's view of the composition of matter might be closely linked to their view of the best type of government for society. This unity of inner and outer philosophy continued in Europe until the Renaissance.[1] However, the heliocentric universe proposed by Copernicus and the findings of imperfect heavens by Galileo were deemed inconsistent with the inner, social philosophy of that time. The resulting severance of inner and outer philosophy began in earnest and has continued to this day.
PHE approaches social science from the perspective of outer philosophy. Both approaches are necessary for the development of a complete and meaningful social science. We are humans who attempt to develop social science. We try to be impartial, but must admit that our ability to do so is inherently limited. Motivation and incentives are always a factor in what gets studied. Why should we develop social science if it does not benefit those of us who endeavor to do so? Even physical scientists are human and have the same sort of needs that other people have. The subject of psychology and how it colors people's reaction to PHE is discussed in a later section.

A Unified Model

The social sciences already utilize some quantitative methods. Economists utilize them perhaps exhaustively and several historians practice cliometrics. Nevertheless, the social sciences have lacked the type of unified model that Newton provided for the physical sciences. Ever since Newton created his three laws to describe the mechanical universe, numerous philosophers and social scientists have tried to create a mechanical model of society without success. Meanwhile, in the early 1900s, Newton's laws of mechanics were shown to be idealizations of a much less deterministic, statistical universe. Ironically, it is the fall of Newtonian mechanics that allows for the achievement of a true "science of society." PHE is not the purely deterministic dream of early "Newtonian" sociologists. Rather PHE uses concepts from modern statistical mechanics to provide a firm foundation for a fundamental understanding of history and economics.

This book provides the skeleton of such a unified model. The Principle of Fast Entropy, an extension of the Second Law of Thermodynamics[1], is suggested as a unifying, driving principle. Just as gravity is the key force in Newton's unified model of the physical universe, Fast Entropy is the key tendency for a unified model of the social universe. Fast Entropy is literally the "gravity" of social science. Fast Entropy applies to both the social and physical sciences. Fast Entropy can be used to analyze, understand and validate other economic and historical methodologies. It is a constraint that can be used to identify other constraints. In science, a known constraint is a valuable piece of knowledge.

The author hopes you will find this text useful. The philosophical implications are glossed over in favor of presenting pragmatic approaches and tools. It is hoped that this work will stimulate you to develop your own ideas and approaches, for one of the fundamental characteristics of science is that it is always unfinished.

Notes & References

[1] H. Scott had previously proposed deriving economic policy from thermodynamics, in particular the works of W. Gibbs, in the 1920s. Source: www.technocracy.org.

First published on . Last updated on January 19, 2021.

Here, we describe the cosmological context of Physical History and Economics. Without the contrasts provided by this context, the rest of this book would be moot.

The Big Bang and the Expansion of the Universe

13.772 billion years ago, the known universe began from a single point in time and space in a tremendous explosion known as the Big Bang. For a brief moment, the universe was filled with pure light containing all of the energy in the universe, a literal swarm of light. The universe was so hot that no matter could exist.
Cosmology from Big Bang to present (credit: NASA)

Growing Darkness and Clumpiness

In the Universe, total energy has essentially remained constant (but see below). As the universe began to expand, that constant amount of energy spread over a larger volume, first rapidly during its inflationary era, then more slowly until relatively recently. Therefore, as the energy density of the universe decreased, the universe cooled down and became darker. At the same time, the universe began to exhibit clumpiness in terms of temperature and density variation.

Cosmic microwave background radiation (photo credit: NASA)

The Contrasted Universe

As the universe progressed in time, contrasting trends have occurred. Overall, the universe expands, cools and dims. Yet, in local areas, the universe heats up and grows brighter. In certain very important ways, the universe has become less homogeneous in some regions over particular periods of time.

The Formation of Matter, Stars and Planets

As the universe continued to further expand, it eventually became sufficiently cool for matter to form.[1] The first matter comprised sub-atomic particles, since the universe was still too hot for atoms and molecules to form. Then, eventually, as the universe continued to further expand and cool, atoms[2] and then molecules formed. Matter gravitationally attracts itself[3], so it pulled itself together into gigantic clouds and structures.

Horsehead nebula (photo credit: NASA)

Within those clouds, some matter condensed into spheres of gas. When gravitational contraction caused many of those spheres to heat up sufficiently so that nuclear fusion[4] occurred in their centers, those spheres became stars[5]. Fusion caused those stars to become much hotter and begin to emit large amounts of light. Disks of dust and gas formed around many of those stars.

Dust disk around protostar (photo credit: NASA)

Some of those dust particles stuck together due to gravity and heat, forming larger and larger clumps. Gravitational attraction between these clumps and gasses resulted in the consolidation of increasingly larger rocky and gaseous spheres. Some of those spheres further violently collided together to form planets.

Collision of planetesimals forms Earth's Moon (photo credit: NASA)

Some of those planets were dominated by rocky components. Eventually, some of them cooled down sufficiently to allow liquid water on their surfaces, but remained close enough to their stars to prevent all of that water from freezing. Planets such as the Earth formed.

Earth as seen from Apollo 17 (credit: NASA)

As the Universe progressed in time, tremendous differentiation of temperature, density and structure has developed. Overall, the Universe has expanded, resulting in a decreasing energy and matter density as manifested by a decreasing mean temperature. Yet, in local regions, density and temperature have increased to the point that complex structures, such as stars, have formed and have triggered energy release mechanisms such as nuclear fusion. In between the coldness of the voids of space and the hot stars are planets, which receive sunlight then expel that energy back into space.

[1] As Einstein's relationship between mass and energy shows, it takes a great deal of energy to form even a small amount of matter, since the speed of light is a large quantity. However, the universe contains a great deal of energy. Energy = mass × (speed of light)², or more familiarly E = mc².
[2] Most of the initial atoms that formed were of the element hydrogen, with a lesser amount of the element helium.

[3] Hydrogen and helium are the least massive of all the elements. Yet they still have mass, and so are gravitationally attracted towards other matter.

[4] When hydrogen gas becomes hot enough, individual hydrogen atoms combine to form helium atoms. This nuclear reaction releases a tremendous amount of energy.

[5] Stars themselves are part of larger structures called star clusters such as the Pleiades, which in turn are part of galaxies. Galaxies themselves are part of clusters and super-clusters of galaxies that weave the fabric of the universe.

First published on May 17, 2019. Last updated on May 16, 2022.

Sources of Energy

Sunlight is the chief source of energy for the Earth. Gravitational contraction provides a tiny amount. Tidal interactions with the Moon provide a small but significant amount at the surface. Radioactive decay provides an important source of energy below the Earth's surface. The burning of fossil fuels can release considerable heat locally (enough to upset ecosystems), but the total amount of heat released is small compared to that from solar and radioactive heating.

Sun photographed in various wavelengths. (Credit: NASA)

Energy in the Atmosphere

The Earth is bathed in sunlight. Some of that sunlight is reflected back into space by the Earth's surface and atmosphere. The reflectivity of the Earth is called its albedo. Some of the remaining sunlight directly heats up the atmosphere. A small amount is absorbed by processes such as photosynthesis. Much of the remaining sunlight heats up the Earth's surface. As the surface temperature becomes raised, the Earth emits increasing amounts of infrared energy. This radiation in turn further heats the atmosphere. Much atmospheric radiation is re-emitted to the Earth's surface. Some of it eventually makes it to the upper atmosphere and is radiated back into space.

The amount of energy entering and leaving the Earth's atmosphere is called its energy balance. If more energy enters the Earth's atmosphere than is emitted, the temperature of the Earth's atmosphere increases. This is the current situation and is called global warming. Climate change results from global warming.

Energy flows to and from the Earth's surface and atmosphere (credit: U.S. Govt.)

NASA Earth Energy Budget poster
NOAA Earth-Atmosphere Energy Balance
NASA Climate and Earth's Energy Budget (more detailed information)

Development of Energy Processes In Life

Energy is essential to the functioning of life. A chief characteristic of life is that it moves, does things and changes. Such activities require energy. Early forms of life on Earth metabolized hydrocarbons in their environment, which were initially abundant. These initial hydrocarbons were limited in quantity and nonrenewable, and as they became consumed, they became scarce. Life required a more sustainable energy source to endure. Sunlight arrived at the Earth in bountiful supply. Plankton and plants formed that could photosynthesize sunlight into sugars, an energy-rich fuel. Animals formed that ate plants or each other for energy.

Living organisms can be viewed as a form of engine. An engine requires a potential to operate across. The relative coolness of the environment (ocean, atmosphere) in contrast to the higher energy of sunlight provides such a potential.
Jungle plants (credit: NASA)

Chemical Processes

Energy from photons in sunlight gets photosynthesized into carbohydrates by plants and phytoplankton. Such molecules are composed of carbon, hydrogen and oxygen. Mitochondria are specialized organelles in both plant and animal cells that can metabolize carbohydrates to produce ATP in a process called aerobic respiration. The cell can then use ATP to power its own processes. Waste energy is given off as heat.

Ball and stick model of organic molecule (credit: US NIH)

Nature Education, Mitochondria.
Aydin Tözeren, Stephen W. Byers, New Biology for Engineers and Computer Scientists. Pearson Prentice Hall, 2004.

First published on . Last updated on February 6, 2021.

Energy flows through ecological networks such as food webs. Generally, sunlight flows into plants that create sugars. Animals eat sugars. Both plants and animals expel heat into their environment.

Marine food web in Alaska (Source: US Govt.)

Food webs are generally energy webs. Energy in the form of high order photons flows to plants and phytoplankton that produce sugars and starches. Other organisms and animals eat plants and phytoplankton to gain energy. Predators eat those animals to gain energy. All plants, animals and organisms give off some heat into the atmosphere.

The Joule is the standard unit of energy, but for food, the calorie and Calorie are often used. A calorie is sufficient energy to raise one gram of water one degree Celsius (1 K). A Calorie (with a capital C) is equal to 1000 calories, and is also known as a kilocalorie.

If the energy leaving the food web is the same as that entering it, then the temperature will generally stay the same (after being adjusted for season and weather). However, a food web will typically store some of the energy as biomass, which contains varying amounts of energy. Organism bodies contain some energy, such as the cellulose that makes up much plant structure, such as cell walls. Proteins also contain energy. Fruits contain considerable energy in the form of sugar, typically 4 Calories per gram. Seeds contain tremendous amounts of energy in the form of oils (typically 9 Calories per gram) and starches. Food webs can also release more energy than is input for relatively short amounts of time, such as during forest fires.

Energy typically enters an ecological system in the form of higher energy photons (visible light) and leaves in the form of lower energy photons (but possibly more of them). Yet, there are alternatives. Some of the energy may leave in the form of "waste" biomass, such as dead plant structures, dead animal bodies and dead bacteria that get stored in the soil or ocean sediments, and eventually become nutrients, minerals or fossil fuels. Alternatively, some energy may enter an ecosystem in the form of high energy molecules, such as near ocean thermal vents. Energy typically travels through an ecosystem in the form of chemical energy, such as sugars, carbohydrates, fats and proteins.

Note: only part of physical energy can be utilized by living organisms and in industrial processes. The useful part is called exergy. A related quantity, emergy, is the amount of energy consumed in these processes.

Emergy Systems, University of Florida
U.S. Geological Survey (USGS), Food Web and its Function

Recall the Contrasted Universe

Recall that due to expansion, the universe has become much cooler and darker over time. In fact, the typical temperature of the space between stars is nearly absolute zero.
We have learned that heat energy tends to flow from warmer places to cooler places, as systems attempt to move towards thermal equilibrium (that is, until their temperatures are the same). Space is much cooler than stars, so energy tends to flow from within stars out into space. This is why we see stars shine.[1]

Planets are typically much cooler than stars. In fact, the temperature of a geologically dead, barren rocky planet would be about the same as space, that is nearly absolute zero. Shaded areas of moons and planets that lack atmospheres quickly drop to near zero. The side of the planet Mercury that faces away from the sun is such an example.

Solar System showing sun and planets (credit: NASA)

Yet planets orbiting around a star receive a significant continuing dose of energy in the form of light emitted from that star that then warms up the planet. The planet then becomes warmer than space, and so the planet must start shedding energy into space. For example, the Earth receives significant amounts of sunlight that warms the Earth. The Earth must then shed some of that energy to attempt to move towards thermal equilibrium with space.[2]

Sun-Earth-Space Potential

Heat Engine Analogy to Life

Recall the heat engine example. A heat engine bridges a temperature difference. Heat flows across that difference through the heat engine. Some of that heat energy is converted to work while the rest is exhausted as waste heat. Entropy is produced while the engine continues to function. Part of the work done by a heat engine can be used to maintain that heat engine. More significantly, part of the work can go to build additional heat engines. These additional heat engines can produce yet more work to produce even more heat engines. The growth of heat engines is then exponential, at least until limiting factors come into play. This is a key point. Because heat engines can beget heat engines, an exponential increase in entropy production can take place. Here, entropy production is proportional to the quantity of heat engines. Fast entropy favors exponential growth in entropy production, so fast entropy favors the "spontaneous" appearance and endurance of heat engines. Under the Second Law alone, the spontaneous appearance of a heat engine is improbable but possible.[3] Fast entropy then utilizes those improbable appearances to create probable, self-sustaining, exponentially growing systems. Some of those systems have developed into what we call life.

Formation of Life

The motion of atoms and small molecules in a liquid or gas is nearly random. The statistics of these particles is known as statistical mechanics, or more traditionally, thermodynamics. The formation of life from this random motion involves several steps. Microscopic structures frequently appear by random chance. For example, atoms can combine to form molecules, and some molecules combine to form larger molecules. Even more complex microscopic structures occasionally appear by random chance. Some very complex microscopic structures form. Some of those forms will be durable. Some of those durable structures will be self-replicating (or they will be replicated by the environment, such as by catalysts). Such structures can be defined as the simplest form of life. Durable, self-replicating structures that degrade energy more quickly than their environment will be more probable (they will be favored under the principle of fast entropy). Free energy will tend to be degraded through these structures. Where frequent chemical reactions can take place, where they can be durable and where there is an available source of free energy (such as from a thermodynamic potential), then the existence of the most basic life forms (as defined above) will approach being a certainty, given the passage of sufficient time.
Where frequent chemical reactions can take place, where they can be durable and where there is an available source of free energy (such as from a thermodynamic potential), then the existence of the most basic life forms (as defined above) will approach being a certainty, given the passage of sufficient time. Life Appears! Once these steps have occurred, life has developed. One can view life as the residue of random action subjected to the principle of fast entropy. The Earth's surface reflects some sunlight into space. Reflected light results in little entropy. However, plants absorb much of that sunlight that would otherwise maintain its high level energy by being reflected into space. Plants store some of that energy in the form of biomass. Animals have evolved to consume biomass. Life As A Faster Path Life itself can be viewed as the process of heat engines begetting heat engines. Bacteria are an easy example. Hence, life represents a mechanism to maximize the rate of entropy production. Therefore, life is not due to pure luck; rather, the formation and evolution of life is favored under the e th Law. Intelligence allows life to produce entropy even faster; thus the formation of increasingly powerful brains and intelligence are favored. [1]That humans should have developed eyes that are particularly sensitive to the peak wavelengths emitted from the star our planet orbits should not be surprising. [2]As long as the sun shines upon the Earth, the Earth will not reach thermal equilibrium with space. This continuing energy flow between the sun and the Earth maintains a continuing potential. [3]I. Prigogine has proposed that dissipative structures can appear that increase entropy production. In his terminology, living organisms can be viewed as dissipative structures. Astrobiologist J. Lunine has paraphrased Prigogine' s finding as follows: "complicated systems that are held away from equilibrium and have access to sufficiently large amounts of free energy exhibit self-organizing, self-complexifying properties." (J. Lunine, Astrobiology, A Multidisciplinary Approach. Peason Addision Wesley, 2005). First published on . Last updated on February 15, 2020. Reproducing molecules are a far cry from the complex genetic machinery of the living cell. this section will explain how Fast Entropy results in the development of a form of random intelligence known as evolution. Random Action Recalled Random action involves a statistically significant amount of actors that are free to behave independently of each other in at least one way. One example of random action would involve the roll of a dice. The results of a large number of rolls should be random. Another example of random action is the movement of molecules in a gas. Even though the gas may have an overall motion, such as in a gust of wind, the individual molecules may be moving in absolutely any direction. Molecules moving about in a liquid may be a reasonable representation of random movement. Steps in the Development of Random Intelligence Random action can "figure out and solve" some problems. Recall the parallel conductor example, where the correct proportion of heat flow through each conductor was channeled through each conductor to maximize free energy degradation. The combination of the random actions of many tiny particles[1]within the conductors effectively figures out how to solve this problem and maximize entropy production. The term "random action intelligence" may seem an oxymoron. 
Perhaps a more appropriate-sounding term would be "dumb luck", or a reference to the proverbial monkey at a typewriter who eventually pounds out Shakespeare. Yet the term "dumb luck" here is not accurate. In reality, random action is not quite random. There are slight asymmetries in the distribution of behavior. It is the combination of these asymmetries along with large numbers of nearly randomly acting actors (such as particles) that produces the intelligent result.

Some of the durable complex structures (see Chapter 5) developed into RNA[2] and (most likely later) DNA,[3] which represent the genetic code and operating instructions for all known living organisms. RNA and DNA mutations may themselves involve a significant component of random chance. (Naturally occurring radiation, itself a random phenomenon, may have played a role in this.) Most RNA and DNA mutations are of no known consequence, and most others will be detrimental or even fatal. Neutral changes will be passed on but not favored. Detrimental changes will be disfavored and less likely to be passed on. Positive changes will be favored and be more likely to be passed on. Therefore, the mutations of RNA and DNA can be viewed as a form of random intelligence. Essentially, nature throws the dice again and again until it solves problems (such as maximizing entropy production), if given enough time. This process is commonly known as evolution. Typically considerable time is required. Fast entropy represents an asymmetry that tilts the random mutations of RNA and DNA in favor of maximizing entropy production. Therefore, the desire to maximize entropy production is essentially the driving desire of each one of our cells. Yet remember, what matters is the maximization of entropy by an entire system. Cells within multi-cellular organisms have specialized. So each such specialized individual cell will act in a manner to maximize entropy production by the organism (or some larger system), and not necessarily in a manner to maximize entropy production as an individual cell.

[1] Typically electrons, if the conductors are metals.

[2] More fully known as ribonucleic acid. RNA is involved in the synthesis of proteins, which in turn form much of the structure and processes of cells.

[3] More fully known as deoxyribonucleic acid. DNA encodes genetic information that is vital for cell and organism reproduction.

Recall how the random action of microscopic particles and energy reactions acts as a "brain" to "figure out and solve" problems such as degrading free energy more quickly. Evolution (the random mutation of RNA and DNA) acts to figure out problems related to the endurance of more complicated life, but typically requires considerable time. If faster energy degradation is favored, then it is conceivable that faster means of problem-solving and intelligence will have developed. This section will discuss how Fast Entropy encourages the formation of more powerful, efficient forms of intelligence.

Random Action Considerations

Random action intelligence uses considerable amounts of time and is relatively inefficient. Evolution is a form of random action intelligence. Nature literally keeps throwing the dice, producing random genetic mutations. Most are unsuitable and can even be fatal. However, all it takes is one successful mutation to solve a problem, as long as the bearer of that gene reproduces. Evolution can take many millions of years to solve problems and can result in incomprehensibly large numbers of wasted mutations.
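As a minimal, hypothetical sketch (not from the original text) of "random intelligence" as described above: repeated random mutation plus selection reaches a target that blind chance alone would essentially never hit. The 40-bit "genome", the scoring function, and the mutation scheme are all illustrative assumptions; the score simply stands in for how effectively a variant degrades free energy.

```python
import random

# Abstract sketch: random mutation plus selection solves a problem that a
# single lucky draw almost never would. All details here are illustrative
# assumptions, not a model of any real genome.

random.seed(1)
N = 40
target = [1] * N                      # the "solution" nature is blindly searching for
genome = [random.randint(0, 1) for _ in range(N)]

def score(g):
    """Count matching bits; stands in for rate of free energy degradation."""
    return sum(1 for a, b in zip(g, target) if a == b)

trials = 0
while score(genome) < N:
    mutant = list(genome)
    i = random.randrange(N)
    mutant[i] ^= 1                    # one random mutation
    if score(mutant) >= score(genome):
        genome = mutant               # neutral or favorable changes are kept
    trials += 1

print(f"solved in {trials} mutation trials; "
      f"blind chance alone would need on the order of 2**{N} draws")
```

Because detrimental changes are discarded and neutral or favorable ones are kept, the search finishes in a few hundred trials rather than the roughly 2^40 draws that pure chance would require, which is the sense in which "dumb luck" understates what random action accomplishes.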
Chemical Signaling

Even simple living organisms have developed chemical signaling that can respond to internal and environmental changes, such as the need for a cell to absorb more oxygen within seconds, as compared to the millions of years required for evolution. Chemical signaling may be sufficiently quick to help an organism decide to move out of the sunlight into shade to keep from overheating. However, chemical signaling may itself be dependent upon evolution to adapt the way it functions, so its short-term abilities apply to only a range of situations, and it cannot easily keep pace with unprecedented environmental changes.

Nervous Systems

Nervous systems are electric networks in more complex, multi-cellular organisms. They can perceive and relay information nearly instantaneously across many cells, and so they can make decisions quickly. Yet their reactions are in the form of reflexes, so their problem-solving is quite limited and inflexible.

The Development of Brains and Bigger Brains

Nervous systems can further develop so that they can be partially controlled and operated by a computing organ known as a brain. Brains have formed within nervous systems that can make decisions quickly. Such brains can change the way in which decisions are made and make more complicated decisions. Further, brains can learn, and so are more quickly adaptable. The formation of such brains is favored to the extent that they improve the endurance of their entropy-producing species. Species with bigger brains displace other groups of less brainy organisms that degrade free energy less quickly, so there is a thermodynamic push for brain size and capability to grow.

Characteristics of Brains

The simplest brains, such as those of a worm or insect, follow regular patterns of decision making that vary relatively little among members of a species (although there is some variation). However, even for the simplest of organisms that possess a brain, changes in environment and physical characteristics will provide a large range of actions. Imagine a fly deciding which direction to fly. Wind direction and the presence of predators can be from any direction, and so the fly may decide to fly in any direction. Yet an individual brain does not appear to make decisions randomly; rather, it tends to act in particular ways with patterns of reaction. These traits are often called habit and stubbornness. The more complex a brain, the greater flexibility it has to vary its decisions from those of other members of its species. Memory becomes more consciously accessible. Processing becomes more sophisticated. For example, a simple nervous system may respond to one-dimensional changes of light intensity. A sudden change in light might cause a jerking reaction, which may be sufficient to escape from a predator. However, a brain may be able to organize sensations of light and recognize images. Predator versus prey can be distinguished visually. Plans for hunting or escape can be devised and improvised. Certain types of problems can be solved more quickly or with greater sophistication. Further, organisms with brains have more complicated social interactions, particularly with members of their own species. Brains allow organisms to differentiate between other members of their species, so that organisms become individuals rather than just another member of their species. Preferences, grudges and hierarchy can be formed, organized and remembered.
So the development of nervous systems allows living organisms (and by inference nature) to solve numerous problems of entropy maximization much more quickly than they could have been solved by mere random intelligence. Therefore the formation of such "smarter" intelligence is favored under the principle of fast entropy. For example, the human brain learned how to make and master fire, which produces entropy from materials such as wood more quickly than mere rotting. The human brain's next type of solution, civilization, would really put entropy maximization into the fast lane.

Important events in the progression of humanity are the development of agriculture and of civilizations. Societies involve collective action between individual organisms, such as individual humans, that tends to increase entropy production by increasing efficiency or by accessing otherwise inaccessible useful energy. Civilization tends to further involve centralization and coordination that increase efficiency even further.

Green field (Credit: US govt.)

From Brains to Civilization

Although brains do not make random decisions, a collection of brains can exhibit nearly random behavior. Recall that even the simplest brains provide a large range of actions in response to environmental factors. Further, the more complex a brain, the greater flexibility it has to vary its decisions from those of other members of its species. Admittedly, diverse action is not necessarily purely random. Certainly, brains of a particular species will tend to exhibit similar responses to certain types of events, to the extent that brains are an artifact of evolution. This can be thought of as evolutionary "inertia". Further, an individual brain does not appear to make decisions randomly, but rather it tends to act in particular ways with patterns of reaction. These traits are often called habit and stubbornness. Nevertheless, despite the particular habits and stubbornness of individual brains, a collection of brains, especially the highly developed brains of humans, acts in many different ways. For some purposes, a large collection of brains produces a roughly random set of reactions. (Although for other purposes, brains make very similar decisions, such as where "swarm logic" applies.) Random types of decisions can be modeled statistically, in some ways even thermodynamically. In fact, the random aspects of brain decision-making can be used to give predictability to social models.

Fast entropy still favors the more rapid degradation of free energy. Although an individual brain can make decisions that are highly unencumbered by the considerations of fast entropy, there will still be the subtle pressure of fast entropy on each deciding brain. Therefore a collection of brains will, everything else being equal, tend to make decisions that are consistent with more rapidly degrading energy. Otherwise, the collection of brains may lack endurance, especially when there are other competing collections of brains. Civilization tends to act to more rapidly degrade free energy. It forms organization and develops technology. In fact, civilization often replicates biological structures that themselves increase entropy production. Roads and railroad lines are analogous to blood vessels. Telephone and internet lines are analogous to nerves. Civilized, more organized groups of people (who degrade free energy more quickly) tend to displace less civilized, less organized groups of people who degrade free energy less quickly.
Thus there is thermodynamic pressure to become more and more civilized. The term civilization here refers to developing a complex, organized, technologically capable society rather than to polite "civilized" behavior. For example, having the complex social structure required to build a nuclear weapon would be considered being civilized here, while merely wiping one's mouth with a napkin after dinner would not, although those traits often do go hand in hand. Civilizations form that consume energy even more quickly. Irrigation allows more areas to be covered by plants. Mining allows depletion of energy trapped in fossil fuel biomass. Cities are complex structures that allow greater concentrations of energy use.

Civilization As An Even Faster Path

Civilization allows for coordinated behavior that lets a society produce entropy even more quickly. The eth Law can be used as the foundation for a unified social science that can be used to describe any society, whether on Earth or elsewhere.

Agriculture patterns (credit: NASA)

"The question is not whether nature abhors a vacuum, but how much nature abhors it."

Here we introduce the eth Law of Thermodynamics, known more descriptively as Fast Entropy. (Here, "e" represents the transcendental number e, which is about 2.718. The number e fits nicely between Laws 2 and 3 of Thermodynamics and expresses the importance of the eth Law in numerous cases of exponential growth.)

Physics is a relatively "pure" subject. Physics is not as pure as mathematics. However, the motions and behaviors of subatomic particles exhibit a beauty and perfection reminiscent of the celestial spheres of the ancient Greeks. Newton's Three Laws likewise brook no ambiguity, and describe a precise ballet of mechanical motion in the vacuum of the planetary heavens. With this purity in mind, physicists tend to consider thermodynamic systems in terms of before and after a change. Thermodynamic processes themselves tend to be "messy". The state of a system in terms of entropy, temperature and other quantities is compared before and after a change, such as heat flow or the performance of work. By doing so, it is possible to neglect the amount of time required for thermodynamic changes to take place. This works well in physics, and the First and Second Laws of Thermodynamics typically suffice. However, much of the world is a mess (involving tremendous complexity and uncertainty) and frequently must be studied in less than ideal conditions. Further, in the fields of Physical History and Economics, time is of the essence. Utopian idealism aside, how long changes take can make all of the difference in societies. For example, people can't wait forever to be fed, and late armies will often lose wars. The element of time must be introduced in order to apply thermodynamics to social science, which is the thrust of this entire book. This chapter will do so.

Fast Entropy As A Unifying Principle

Fast entropy can be used as a unifying principle among both the physical and social sciences. Fast entropy has applications to applied and professional fields as well. A better name for fast entropy could be the eth Law of Thermodynamics.
The eth Law of Thermodynamics states that an isolated system will tend to configure itself to maximize the rate of entropy production.[1]

Heat flow through a thermal conductor example

Most introductory physics textbooks do have an example concerning thermodynamics that involves time.[2] Picture a simple thermal conductor through which energy flows from a hot reservoir to a cold one. For this example, the term reservoir refers to a body whose temperature remains constant regardless of how much heat energy flows into or out of it.[3]

Heat flow through a thermal conductor.

The magnitude of that flow is proportional to both the area of the conductor and its thermal conductivity. More heat will flow through a broad conductor than a narrow one. Also, more heat will flow through a material with a high thermal conductivity, such as aluminum, than through one with low thermal conductivity, such as wood. Heat flow is inversely proportional to the conductor's length. Thus, more heat will flow through a short conductor than a long one. Heat flow is also proportional to the difference in the two temperatures that the thermal conductor bridges. This difference in temperatures has nothing to do with the conductor itself. A greater temperature difference will provide a greater heat flow across a given conductor, regardless of the characteristics of that conductor.

Equation for thermal energy flow through a conductor:

\(\frac{\Delta Q}{\Delta t} = k A \frac{\Delta T}{L}\),

where Q is the flow of thermal energy, t is time, k is a constant dependent upon the conductor material (its thermal conductivity), L is the conductor length, A is the conductor area, and \(\Delta T\) is the temperature difference. This equation states how much heat will flow through a conductor, assuming the temperature difference remains constant. So once again, we face an example that is constant with respect to time, but it provides a reasonable starting point. Electrical engineers will find this equation similar to a rearrangement of Ohm's Law, where electric current is proportional to voltage divided by resistance: \(I = \frac{V}{R}\).

Recalling The Second Law of Thermodynamics

The Second Law of Thermodynamics states that the universe is moving towards greater entropy. Stated another way, the entropy of an isolated system shall tend to increase.[4] A corollary is that a system will approach a state of maximum entropy if given enough time. A system in a state of maximum entropy is analogous to a system in equilibrium. However, neither the law nor the corollary describes the rate at which entropy shall be produced, nor how long it would take a system to produce maximum entropy.

The eth Law—Fast Entropy

The author has proposed[5] that the Second Law can be extended by stating that not only will entropy tend to increase, but also it will tend to do so as quickly as possible.[6] (Others have made similar observations, e.g. A. Annila, R. Swenson.) In other words, entropy increase will not happen in a lazy, casual way. Rather, entropy will increase in a relentless, vigorous manner. The author calls this extension the eth Law of Thermodynamics,[7] or more descriptively, Fast Entropy. A more precise statement of the eth Law is that "entropy increase shall tend to be subject to the principle of least time." The eth Law gives teeth to the Second Law. It will need those teeth in order to be useful for the social sciences. Really, though, the eth Law is already widely practiced by astrophysicists and atmospheric scientists.
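As a minimal numerical sketch of the conduction equation above (not part of the original text; the conductivities, geometry and temperature difference below are illustrative assumptions only):

```python
# Minimal sketch of the conduction equation dQ/dt = k * A * dT / L.
# All numeric values are illustrative assumptions, not data from the text.

def heat_flow_rate(k, area, delta_t, length):
    """Steady-state heat flow (watts) through a uniform conductor.

    k       -- thermal conductivity, W/(m*K)
    area    -- cross-sectional area, m^2
    delta_t -- temperature difference bridged by the conductor, K
    length  -- conductor length, m
    """
    return k * area * delta_t / length

# A highly conductive bar carries far more heat than a wooden bar of
# identical geometry, as the proportionalities above state.
aluminum = heat_flow_rate(k=205.0, area=1e-4, delta_t=80.0, length=0.5)
wood = heat_flow_rate(k=0.12, area=1e-4, delta_t=80.0, length=0.5)
print(f"aluminum: {aluminum:.2f} W, wood: {wood:.4f} W")  # ~3.28 W vs ~0.0019 W
```

Read as current = potential difference divided by resistance, the same function expresses the Ohm's Law analogy mentioned above.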
Whether a stellar or planetary atmosphere tends to convect or radiate depends on which results in the greatest heat flow. The maximization of heat flow results in the maximization of entropy increase, so this scenario represents the eth Law in action. Fast Entropy can be used as a unifying principle among both the physical and social sciences. Fast entropy has applications to applied and professional fields as well.

More Precise Statement of the eth Law

The eth Law needs to be stated more precisely to be of much use. A more precise statement is that "entropy increase shall tend to be subject to the Principle of Least Time." The Principle of Least Time is a general principle in physics that applies to diverse areas such as mechanics and optics. Snell's Law of Refraction is an example.

Physical Examples

Neither the eth Law nor Fast Entropy will be found in a typical physics textbook, although they could be said to fall under the non-equilibrium thermodynamics or transport theory discussed in some texts. Fast Entropy involves an element of change over time that can involve challenging mathematics and measurements. Nevertheless, a few simple examples can be offered to support the validity of Fast Entropy. One example is heat flow through two parallel conductors, each bridging the same two thermal reservoirs (see figure). No matter what area, materials or other characteristics comprise each of the conductors, the percentage of heat that flows through each conductor is always that which maximizes total heat flow. In this case, when total heat flow is maximized, so too is entropy production maximized.

Thermal conductors in parallel

Another example is heat flow through conductors in series between a warmer and a cooler heat reservoir (see figures). This example replicates the classic demonstration of the applicability of the Principle of Least Time in optics (Snell's Law), but using thermal conductors in place of refractive material, and replacing the entrance point of light with a contact point with a warmer reservoir and the exit point of light with a contact point with a cooler reservoir.[9]

Thermal conductors in series

While heat flow tends to be a nebulous affair, the path of maximum heat flow can nevertheless be ascertained. This can be accomplished by noting paths perpendicular to the isotherms indicated by placing temperature-sensitive color indicator film upon the conductors (below). The greatest color change gradient represents the path of maximum heat flow. Observations show that the path of maximum heat flow is consistent mathematically with Snell's Law (which is based upon the principle of least time but usually reserved for light rays). This example is reasonably easy to replicate.

Idealized path of maximum heat flow through conductors in series

A third example is well known to atmospheric scientists. Here, in an atmosphere where heat is flowing from a warm planetary or stellar surface, whether thermal radiation or convection will occur tends to be dependent upon whichever produces the greatest heat flow. Whichever produces the greatest heat flow tends to produce entropy most quickly.

A Heat Engine Begetting Heat Engines

The work done by heat engines can be used for human activities. Part of it can be used to maintain the heat engine. More significantly, part of the work can go to build additional heat engines. These additional heat engines can produce more work to produce even more heat engines. This idea is pictured here (see figure).
The growth of heat engines is then exponential, at least until limiting factors come into play. This is a key point. Because heat engines can beget heat engines, an exponential increase in entropy production can take place.

Heat engines begetting heat engines

Here, entropy production is proportional to the quantity of heat engines. Fast entropy favors exponential growth in entropy production, so fast entropy favors the "spontaneous" appearance and endurance of heat engines. Under the Second Law alone, the spontaneous appearance of a heat engine is possible, but improbable. Fast entropy then utilizes those improbable appearances to create self-sustaining, exponentially growing systems.

Emergence of Complex Dissipative Structures

When systems are far out of equilibrium, there is a tendency for complex structures to form to dissipate potential (Prigogine, ___). Such a process is an example of fast entropy. Atmospheric convection structures are examples of complex dissipative structures. Convective structures tend to form where convection results in greater thermal energy transport from the surface of the Earth to its upper atmosphere than does simple radiation. Storm systems, tornadoes and hurricanes are further examples. The spiral arms of galaxies are similar in appearance to those of hurricanes. This is no coincidence, since the spiral structure of galaxies also results in greater production of entropy (see paper from Naval Observatory astrophysicist ____).

Rising cloud column (credit: NOAA)

Applicability of Fast Entropy to Life and Social Sciences

If Fast Entropy is a fundamental tendency in physics that especially applies to living organisms, life would have evolved to produce entropy in a manner consistent with the Principle of Least Time. Evolution is quite similar to statistical mechanics. It finds the answer it is seeking by rolling the dice an unimaginable number of times. Statistical mechanics, including thermodynamics, operates most reliably upon systems of many components. Evolution likewise requires a sufficiently high population to operate upon. Endangered species are especially at risk, because their populations often become too small to support the evolution of that species, making it especially vulnerable to change. Evolution is whatever survives the "dice throwing" in response to environmental change. Successful mutations outsurvive non-mutants and other mutations to multiply and dominate their environment. In thermodynamics, the Second Law statistically allows small regions of lower entropy. Most of these regions will quickly disappear due to the random motion of molecules. However, a rare few of these regions, by pure statistical chance, will be able to act as heat engines and will increase overall entropy (despite their own lower entropy). If these rare, entropy-creating regions can reproduce, then they will be favored by fast entropy, and will come to dominate their region. Certain chemical reactions are examples, and from chemistry comes life. So then, life can be viewed as a literal express lane from lower to higher entropy. Although living organisms comprise regions of reduced entropy, they can only maintain themselves by producing entropy. Life has produced a diversity of organisms in order to maximize entropy production with respect to time. For example, if one drops a sandwich in a San Francisco park, a dog will rush by to bite off a big piece of the sandwich, then the large seagulls will tear away medium-sized pieces to eat.
Smaller birds will eat smaller pieces, and insects and bacteria will consume smaller pieces yet. If only one or two of those organisms existed, some of the pieces of certain sizes could not easily be consumed. If they couldn't be consumed, they could not be used to increase entropy. Humans are living organisms and do their part to contribute to maximizing entropy production with respect to time. In fact, the more complex, structured and technologically advanced human civilization becomes, the faster it creates entropy. It is true that cities and technology themselves represent regions of lower entropy, but only at the cost of increased overall entropy.

Further Applications

There are both physical and social applications for Fast Entropy.[8] Physically, Fast Entropy might be used to improve heat distribution and removal. Socially, Fast Entropy drives Hubbert curves. Further, Fast Entropy might be used to determine key parameters of Hubbert curves and constraints upon them. Fast Entropy analysis requires that some indication of entropy production with respect to time be determined. An exact determination might prove to be difficult, but comparisons of entropy production are easier. For example, if people consume a known mean number of calories, then the more people a regime has, the more entropy it produces. Most historic regimes have a sufficiently low level of technology that this type of analysis is quite practicable.

Conclusions and Future Research

Fast Entropy can be used in history as a criterion of success for a regime. Was a regime overtaken by another regime that was able to produce more entropy more quickly? In economics, Fast Entropy can be used to study the progress of a regime along its Hubbert curve, and to infer factors such as efficiency, economic centralization and wealth distribution. Fast Entropy can be a powerful tool for the analysis of proposed social policy. However, an important issue to be investigated is whether and how the value of entropy production needs to be weighted with regard to its distance in time.

[1] However, the behavior of systems at the atomic level can vary from that discussed in this chapter.

[2] One can infer the passage of time by multiplying the calculated heat flow by time. However, this example is not really time-dependent. The heat flow remains constant regardless of how much time passes in this idealized example. It is nevertheless a good approximation for many real situations.

[3] Heat flow is also proportional to the difference in the two temperatures that the thermal conductor bridges. This difference has nothing to do with the conductors themselves. Heat flows through a thermal conductor in proportion to the area of the conductor as well as its thermal conductivity. More heat will flow through a broad conductor than a narrow one. Also, more heat will flow through a material with a high thermal conductivity such as aluminum than through a material with low thermal conductivity such as wood. Heat flow is inversely proportional to the conductor's length. More heat will flow through a shorter conductor than a long one. This is known as Fourier's heat conduction law.

[4] A more precise definition is that "any large system in equilibrium will be found in the macrostate with the greatest multiplicity (aside from fluctuations that are normally too small to measure)." D. Schroeder, An Introduction to Thermal Physics. San Francisco: Addison-Wesley, 2002.
[5] This proposed extension was anticipated in a talk given by the author to a COSETI conference (San Jose, CA, Jan. 2001, SPIE Vol. 4273), was presented at a talk entitled Hurting Towards Heat Death (Sept. 2002), and appeared in the Fall 2003 issue of the North American Technocrat. Subsequent to this proposal, the author has observed that a form of this extension is already in use by astrophysicists and meteorologists. When modeling atmospheres, their models will tend to choose the form of energy transfer that maximizes heat flow, such as convection versus conduction or radiation. See B. Carroll and D. Ostlie, An Introduction to Modern Astrophysics, 2nd Ed., Pearson Addison-Wesley, 2007, p. 315.

[6] The Second and A Half Law is not well known and therefore is neither generally accepted nor rejected by most physicists. Although the Second and A Half Law is fairly consistent with standard physics, it is primarily intended for use in the applied physical sciences and the social sciences. There is some possibility that this proposed law is flawed. However, it has some merit and is somewhat better than what we have without it.

[7] As stated above, the e in the eth law refers to the transcendental number e, that is, 2.718.

[8] Psychologist and musician Rod Swenson had proposed some elements of this, perhaps as early as 1989. He suggested that a law of maximum entropy production could apply to economic phenomena.

[9] Mark Ciotola, Olivia Mah, A Colorful Demonstration of Thermal Refraction, arXiv, submitted on 21 May 2014.

Many phenomena in both nature and society can be examined in terms of bubbles and flows. Many can be modeled as a combination of potentials, flows, barriers and bubbles. In the most general sense, a flow is the continuous transport of something from one place to another. In a more abstract sense, it is the continuous change of a quantity. For a short amount of time, a flow can be caused by inertia. For longer periods, something must drive the flow. The consumption of potential can drive a flow. Then the flow can be said to contribute to the achievement of the potential. The flow can continue indefinitely as long as both the potential and that which flows are steadily replenished. For many purposes, a flow can be viewed as the result of a continuous supply of potential. The shining of the Sun on the Earth in cold space is a continuous flow of energy that has lasted billions of years. The current of water down the Nile River is another flow that has lasted thousands of years.

Physical Flows

The current of water down the Nile River is a flow generated by a gravitational force. Let us examine this. Water flows from higher elevations to lower ones, such as via the Nile. Water in highlands represents a higher gravitational potential than water at sea level. Water flowing downhill consumes (achieves) this potential. Yet the Nile has been flowing for many thousands of years. How does the water at the high elevations get replenished? Atmospheric storm systems represent complex structures to dissipate potential. Sunlight delivers powerful amounts of energy to the surface of the oceans and wet land. Storms form to pump this energy more quickly away from the surface into the cold upper atmosphere. The transport of water into the atmosphere and its return as rain to the Earth's surface increases the rate of energy transport. (Condensing water vapor in the upper atmosphere releases prodigious amounts of energy into outer space.)

Resource and Economic Flows

There are also many physical flows in our economy.
The transport of food from farm to city and of minerals from mine to factory represent flows.

Generalizing the Emergence of Structures

We discussed how regimes can emerge from civilizations as dissipative structures to increase entropy production. Here, we generalize the concept of a regime.

Formation of Bubbles

Bubbles emerge when a flow gets blocked. As potential builds up, the force against the blockage increases. Eventually the accumulation and force become so large that the blockage can no longer impede the flow. At this point, the blockage might be partially overcome, or it might be catastrophically destroyed. This is analogous to the formation and popping of a bubble. Another term for a blockage is "logjam".

Emergence of Exponential Structures

In the case of a flow, heat engines will grow exponentially until they reach a limiting efficiency. Heat engine population and entropy production will reach a limit called a carrying capacity.

Thermodynamic Interpretation

Heat engines begetting heat engines results in exponential growth in both the quantity of heat engines and entropy production. Where the magnitude of potential is fixed, the potential decreases as entropy is produced. As potential decreases, the efficiency of the heat engines decreases. This decrease in efficiency comprises a limiting factor. This decreased efficiency decreases the ability of the heat engines to do work. Eventually, the total amount of both work and entropy production will decrease. Less work will be available to beget heat engines. If the heat engines require work to be maintained, the number of functioning heat engines will decline. Irreplaceable potential entropy continues to decrease as it gets consumed. Eventually, the potential entropy will be completely consumed, and both work and entropy production will cease. As this scenario begins, proceeds and ends, a dissipative structure (a literal thermodynamic "bubble") forms, grows, possibly shrinks and eventually disappears. Entropy production versus time can often be graphed as a roughly bell-shaped curve, giving the graphic impression of a rising bubble. (A minimal numerical sketch of this rise-and-fall scenario appears below, after the following examples.)

Bubbles Involving Life

Populations of living organisms can experience thermodynamic bubbles. A bacterial colony placed in a dish full of nutrient media faces a potential of fixed magnitude. Each bacterium fills the role of a heat engine, producing both work and entropy. The bacteria reproduce exponentially, increasing the consumption of potential entropy exponentially. Eventually, it becomes increasingly difficult for the bacteria to locate nutrients,[1] decreasing their efficiency. As efficiency decreases, the bacteria will reproduce at a slower rate and eventually stop functioning.

Ultimate Bubbles

Ultimately, all potentials are fixed in magnitude. Possibly, the entire Big Bang and its progression could be viewed as a bubble. In practice, many potentials are renewable to a limited extent. For example, as long as the Sun shines upon the Earth in cold space, a potential will exist there.

Series of Bubbles

As long as a system maintains the ability to produce new heat engines, then instead of a single bubble, there will be a series of bubbles over time. There are several reasons that systems form bubbles instead of maintaining a single flow. Chaos (in the mathematical sense) provides one reason. Another reason is that a series of bubbles may provide for an overall higher entropy production rate than a more steady, consistent rate of production.
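Here is the minimal iterative sketch promised above (not from the original text): reproducing heat engines consume a fixed potential, efficiency falls as the potential is depleted, and entropy production rises, peaks and collapses in a roughly bell-shaped curve. Every parameter value is an illustrative assumption.

```python
# Minimal sketch of a thermodynamic "bubble": heat engines beget heat engines
# while consuming a fixed potential; efficiency falls as the potential is
# depleted. All parameters are illustrative assumptions, not fitted values.

Q0 = 1000.0       # initial stock of consumable potential (arbitrary units)
engines = 1.0     # initial number of heat engines
growth = 0.15     # reproduction rate per step at full efficiency
upkeep = 0.05     # fraction of engines lost per step to maintenance failure

Q = Q0
history = []
for step in range(120):
    efficiency = Q / Q0                     # efficiency decays as potential is consumed
    production = engines * efficiency       # entropy production this step
    Q = max(Q - production, 0.0)            # deplete the fixed potential
    engines = max(engines + engines * (growth * efficiency - upkeep), 0.0)
    history.append((step, production))

peak_step, peak_production = max(history, key=lambda row: row[1])
print(f"entropy production peaks at step {peak_step} "
      f"({peak_production:.1f} units), then declines")
```

Plotting the production column gives the bell-shaped curve described above; the bacterial colony and the business examples that follow behave the same way under different labels.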
Heat engines may be able to obtain much higher efficiency during a bubble than during a steady state, so that the average production in a series of bubbles may be much higher than during a steady flow, despite the below-average production between bubbles.

Overshoot and the Predator-Prey Cycle

Even in the case of a flow, the rate of replenishment will be limited. Yet the rate of engine reproduction may continue beyond the carrying capacity. This can be called overshoot, a systematic "momentum" in a sense. In this case, even the flow can be treated as a substantially fixed (or "conserved" in the physics sense) quantity. A thermodynamic bubble will form. Predator-prey cycles are another case that can form where overshoot occurs: the population of a predator overshoots the available prey, reducing the populations of both the predators and the prey, so that there are cycles in which the population of the predator is always "reacting" to the population of the prey. Predator-prey cycles can also be expressed in terms of flows, bubbles and efficiencies.

[1] Or escape toxins produced by the colony.

Rise-fall nonrenewable resource consumption functions ("curves") are examples of resource "bubbles". M. King Hubbert's modeling of Peak Oil is the most famous example of a resource bubble. However, that case was inspired by the earlier work of Donnel Foster Hewett regarding regional metal mining. The essence of a bubble is a build-up of potential that then gets relieved. The key is that there is a critical resource that is not renewable. Any amount that gets consumed cannot be replaced. Once that resource is consumed, it is gone forever. So production must eventually end. Deposits of gold are an example that represents built-up potential. Usually achievement (consumption) of the potential begins slowly, but then grows exponentially. Hence production will grow quickly, but intrinsic efficiency begins to drop, impacting production. Eventually efficiency will drop below the level at which achievement can be obtained, or the entire potential will be consumed, and the bubble ends.

Regional Example—San Juan Mining Region

To apply bubble analysis, the region considered must be sufficiently large to initially support many mines. The San Juan region in Colorado is such a region; a suitable example of a single historical regime is the mining society that developed in the San Juan mining region. Since precious metals tend to be a nonrenewable resource, they can be said to be conserved (only a fixed amount of the resource will ever exist) for a given region. In other words: potential consumption + cumulative production = constant at any point in time. Potential consumption equals that constant before exploitation begins. The San Juan mining region of Colorado produced gold and silver from dozens of mines, around which towns and communities eventually developed. Mining began as early as 1765. Its heyday was between about 1889 and 1900. There is again mining of miscellaneous minerals, but not much in gold, which was the primary economic driver of the "great days". The region is now used primarily for recreation and some agriculture. (Smith, 1982) Spanish gold mining of placer deposits (native pieces of nearly pure gold found on the surface) took place between about 1765 and 1776. Some mining took place in 1860, but it was interrupted by the U.S. Civil War. At this point, "only the smaller deposits of high-grade ore could be mined profitably."
Mining slowly started again in 1869. There were 200 miners by 1870. An Indian Treaty was negotiated in 1873, which removed a major obstacle to an increase in mining. (Smith, 1982) By 1880 there was nationally a "surplus of silver; pressures to lower wages; labor troubles." In 1881 railroad service was established, resulting in a "decline in ore shipping rates." By 1889, $1 million[1] in gold and silver was being produced each year (for one particular sub-region). By around 1889, English investors had come to control the major mines. The 1890 production total for the San Juans was $1,120,000 in gold and $5,176,000 in silver. The region produced $4,325,000 in gold and $5,377,000 in silver in 1899. (Smith, 1982) By 1900, the region began to take on more of the characteristics of a settled community. There was a movement for more "God" and less "red lights." By 1909, "the gilt had eroded" (dilapidation set in; decreasing population). In 1914, production fell greatly due to decreased demand from Europe (because of World War I), and the region lost workers. Farming became more important to the local economy than mining. Recreation and tourism revenues became the only bright spot for many mining towns. Silver and gold mining all but ceased by about 1921. (Smith, 1982) Here, the end of mining has a fairly clear cut-off date. However, the beginning of mining seems to have stretched out over a longer period of time, during which mining levels were quite small. [The curve was previously modeled using a Maxwell-Boltzmann distribution, but this was a more empirical approach. Originally, only a few data points were readily available (Smith, 1982), but digitization of sources, even if just scans, has made much more data available.] Deviations shown in the curve occurred due to random events, social, economic and logistic "turbulence", business cycles, and major external events such as the U.S. Civil War.

Colorado San Juans gold production versus model

An EDEG model was created (in 2010) for U.S. domestic petroleum extraction (see below). Actual data exceed the model before and after the peak. Parameters were set to match the peak, but could have been adjusted for less error elsewhere at the expense of greater peak error. This model used an older version of EDEG than the most recent San Juans model, so the overall plot is not as well matched.

EDEG model for US petroleum production up to 2008.

Another example, on a multi-continental basis, was gold and silver production in areas of the Americas controlled by Spain, primarily during the Habsburg dynasty. An EDEG model was produced for that period. This model used a cruder version of EDEG than the most recent San Juans model, so the peak is not as well matched.

Silver and gold exports from the New World versus model (data from Gibson, 1966)

Ciotola, M. 1997. San Juan Mining Region Case Study: Application of Maxwell-Boltzmann Distribution Function. Journal of Physical History and Economics 1.

Ciotola, M. 2001. Factors Affecting Calculation of L, edited by S. Kingsley and R. Bhathal. Conference Proceedings, International Society for Optical Engineering (SPIE) Vol. 4273.

Ciotola, M. 2003. Physical History and Economics. San Francisco: Pavilion Press.

Ciotola, M. 2009. Physical History and Economics, 2nd Edition. San Francisco: Pavilion of Research & Commerce.

Ciotola, M. 2010. Modeling US Petroleum Production Using Standard and Discounted Exponential Growth Approaches.

Gibson, C. 1966. Spain in America. Harper and Row.

Hewett, D. F. 1929.
Cycles in Metal Production, Technical Publication 183. New York: The American Institute of Mining and Metallurgical Engineers.

Hubbert, M. K. 1956. Nuclear Energy and the Fossil Fuels. Houston, TX: Shell Development Company, Publication 95.

Hubbert, M. K. 1980. "Techniques of Prediction as Applied to the Production of Oil and Gas." Presented to a symposium of the U.S. Department of Commerce, Washington, D.C., June 18-20.

Mazour, A. G., and J. M. Peoples. 1975. Men and Nations, A World History, 3rd Ed. New York: Harcourt, Brace, Jovanovich.

Smith, D. A. 1982. Song of the Drill and Hammer: The Colorado San Juans, 1860–1914. Colorado School of Mines Press.

Here we discuss economic regimes, more commonly known as "bubbles".

Economic Flows

Food and mineral flows also represent economic flows. So do transfers from one group to another, such as from parents to children, workers to retirees, or exporter to importer. Flows often work at least two ways. For example, goods and services flow from an exporter while money flows from an importer. Many trade partners engage in both importing and exporting with each other.

Financial Flows

Economic flows can be abstracted into financial flows, such as an annual market demand or income. An example of a steady income flow is an annuity.

Direct Logistic or Gaussian Approach

It is traditional to model growth by one of two types of curves, the pure exponential growth curve or the logistic growth curve. Since most new businesses plan for three to five years, this is a reasonable approach. All things end sooner or later, so it might make more sense to model growth with a Gaussian or Maxwell-Boltzmann distribution. However, most businesses don't like to plan for a downturn. Yet for particular products or businesses in industries where the typical lifetime may only be a few years or a single season, either of these curves may be superior to the pure exponential or logistic approaches.

Beginning Point

None of these approaches has a clear beginning point, mathematically speaking. The pure exponential, logistic and Maxwell-Boltzmann curves can arbitrarily be assigned a beginning point without too much thought. The Gaussian curve can prove more challenging to assign a beginning point to. A fair approach is to initially establish a pure exponential growth curve, then later fit a Gaussian to that curve.

Pure Exponential Growth Phase

Sometimes it is hard to determine the parameters soon enough to make useful forecasts. Yet there are ways to handle this, although they are imperfect. However, if a business has a great product, and there is strong demand for it, the question is how quickly the business can expand to meet that demand. If the expansion cost and resultant speed can be calculated, then a model exponential growth curve can be generated, assuming that the business will expand as quickly as possible. Also assumed is that the growth of the business at a particular time will be proportional to its size at that time. If the business can only expand linearly, then a linear model must be generated.

Leveling or Decline Phases

Eventually, limiting factors will level off growth and even cause a decline in business. A logistic curve is appropriate for a product or business that will have relatively long-term, stable sales, such as a popular soft drink. For products that will have a known or likely decrease, a Gaussian or Maxwell-Boltzmann curve can model both the growth and decline.

Efficiency Approach

A more fundamental approach is to use efficiency data for modeling.
This approach can work better if there are similar cases available for comparison, so that reasonable parameters for reproduction costs and efficiency can be proposed at an early phase and reasonable forecasts might be possible. This approach is similar to modeling a single historical regime.

Two Places to Begin—Relation to Supply and Demand

There are two places to begin using the efficiency approach. One way is to determine the total lifetime sales for the product or business. (Take the raw value, not the Net Present Value-discounted value.) If you can then determine what the peak sales amount will be, and the beginning and end dates of the business, you can treat efficiency as a linearly decreasing quantity (this is not entirely accurate, particularly for the beginning and end of the lifecycle, but can be a reasonable approximation). A perhaps better, but more complicated, way is to first model demand for the product (in terms of a series of classical economic demand curves over time). Then determine a series of classic supply curves over time. This will tell you the sales revenue and volume over time. The trick is to use fast entropy and thermodynamic efficiency to model how the supply and demand curves will change over time. Fast entropy will cause the supply curve to fall: as the business develops, it will likely increase production capacity, so that it can afford to sell more at a lower per-unit price. However, as time goes on, the demand curve will also fall due to growth leveling off or falling as the market becomes saturated. It is also possible that there will be limits to how much production can grow if a required resource becomes scarcer (and thus more expensive), so that the supply curve can only fall so far. These events represent decreasing thermodynamic efficiency. Thermodynamic efficiency should be differentiated from empirical efficiency, which may be due to such factors as economies of scale. In fact, it is often falling thermodynamic efficiency that requires increased economies of scale to meet demand at sufficiently low prices. This is one reason why there is often consolidation in maturing industries.

Modeling Macroeconomic Business Cycles

It is possible to use this approach to model entire macroeconomic business cycles. (Despite their name, these cycles are really thermodynamic bubbles.)

US Adjusted GNP Example

The figure below shows US adjusted GNP for 1993-2013 (the scale is nominal), a "sizzle" plot of the U.S. economy during that period. Long-term trends have been stripped out of the data. The figure shows the dot com bubble peaking around 2001 and the housing bubble peaking around 2006. These bubbles are not just random occurrences, for they share a similar structure. There is an underlying thermodynamic potential. An engine of GNP growth forms to bridge that potential, such as firms that can create or take advantage of new computing technology or a relaxation of banking standards. At the beginning of the bubble, the potential is high, so that exploitation can take place at a high thermodynamic efficiency. However, as potential is consumed, the amount of potential decreases, so efficiency necessarily drops. At the same time, old firms are expanding and new firms are being formed, resulting in an increasing number of "heat engines" to consume potential.

US Economy Sizzle Index 1993-2013

Once formed, these firm heat engines remain "hungry". They need to consume potential to survive, and they very badly want to survive and grow.
So they keep growing, even though potential is decreasing. Eventually, the potential (and therefore thermodynamic efficiency) drops so low, and there are so many heat engines, that most of the heat engines can no longer support themselves. Chances are, industry overshoot has occurred (i.e., the formation of too many hungry heat engines), and a crash occurs. This cycle usually repeats itself for each macroeconomic business cycle bubble, although the chief industries involved may vary among bubbles. The bubble itself apparently increases the overall entropy production of a society, which is consistent with the principle of fast entropy.

Bubbles Involving Business

Businesses are interesting cases to study. There are many businesses, both large and small. Some are long-lived, many are short-lived. They utilize many different types of opportunities. They all involve people. Many involve money, which is quantifiable, and often the figures are recorded. Many businesses can also be represented as bubbles, using the approach of a heat engine (or a collection of heat engines). A business faces a new market opportunity of fixed magnitude. Businesses exploit the market opportunity, producing both work and entropy. The business or its industry reproduces exponentially, increasing the consumption of potential entropy exponentially. Eventually, it becomes increasingly difficult for the business or industry to locate new customers or orders, resulting in increased competition and decreased margins, hence lower efficiency. As efficiency decreases, the business will expand at a slower rate and eventually stop functioning.

Lifecycle of a Business

Some businesses can endure for longer than many dynasties. Yet most businesses go through common sorts of life cycles. They usually start small and are founded by an innovative entrepreneur. They get bigger and become efficient but start getting institutionalized. Eventually the overhead of their bureaucracy becomes more of a drag than a help to overall efficiency. At the same time, the company has a harder time adapting to change. Eventually the opportunity the company originally exploited is gone, management can't adapt, and the company ends. Companies don't exist in a vacuum. They are dependent upon their government for law and security, and upon the population for revenues. So a business can find new opportunities, be bought by another business, be regulated out of business, etc. A business does not progress through the same precise lifecycle as an animal, but there are frequent patterns.

The Opportunity and Conception

The business opportunity (potential) is identified. The opportunity could be one-time in nature, such as the discovery of a deposit of gold. It could be ongoing, such as a new technology that will be adopted for a long period, such as the commercialization of electricity in the 1800s or the internet. A means (engine) to exploit it is identified. The means could be a new mechanical invention, the building of a factory or the construction of a mine. The development of the engine will require some initial "seed" resources, and literally has a "start-up" cost, called an initial investment (fixed cost). The business begins. Revenues are received and marginal costs are incurred. The company embarks upon exponential growth, which often starts slowly and then becomes rapid. Either the engine gets bigger or more engines are built. Often profits are reinvested in growing capacity.
The growth slows as it approaches its peak, then levels off. Also, both the engines and the developing bureaucracy involve maintenance (overhead) costs.

Decline, Acquisition or Transformation

Eventually the business may decline as the original business opportunity declines or ends. The business might be able to take advantage of new opportunities, or it may be bought by another company. It might buy other companies that are better at developing new opportunities.

Modeling Business Growth

Businesses as consumers of limited resources

Businesses can be modeled as consumers of limited resources and therefore as Hubbert curves. A business based upon an oil well or a gold mine is an obvious example. The limited resource can be intangible. Nearly all businesses are ultimately dependent upon a particular business opportunity that is often in turn dependent upon a limited resource. That limited resource might be satiable customer demand for a highly durable product. It might be a technology niche that has a limited lifetime or marketing window in a rapidly transforming marketplace. Other examples of resources include intangibles such as goodwill and patents.

Business Development Stages

Businesses tend to develop through fairly well-defined stages: start-up, growth, stalling, acquisition of or by other businesses (or decline and then termination).

Business Modes of Operation

Businesses tend to operate in one of two modes, depending upon their current development stage. Growing start-ups are in an exponential growth (EG) mode, while established businesses move to an exponential decay (ED) mode. Operation in the EG mode is characterized by an emphasis on revenue growth. Sources of revenue growth include new products, increased sales or the acquisition of other businesses. Operation in the ED mode is characterized by an emphasis on cost cutting. Forms of cost-cutting include consolidating product lines, reduced R&D spending, and layoffs. (The case of a stable major airline striving to raise profits by removing an olive per salad served is such an example.) A firm that is experiencing the plateau of a logistic growth curve will tend to oscillate between the EG and ED modes, depending on short-term events. The transition from the EG mode to the ED mode can be a dangerous time for a business. Sometimes businesses grow too quickly and cannot make a successful transition. Cash shortages and the inability to fulfill customer orders are symptoms. Frequently the founder and the original management will be replaced at this point.

Modeling Business Managers

Some business managers desire growth so much that they don't especially mind the ensuing disorder. Other people prefer order and harmony, even at the expense of growth. Managers who emphasize getting sales, launching truly new products, and even mergers and acquisitions tend to be operating in an exponential growth mode. Managers who attempt to increase profits by focusing on reducing product costs and decreasing workforce size tend to be operating in a plateau or exponential decline mode.

Precautionary Considerations

General statements about a society or a category of people within a society should certainly not be taken to apply to individuals. Individuals tend to have a wide range of freedom to act and don't fit into most generalizations.

Functions relate at least one variable to at least one other number or variable. For example, x is a variable. Its value could be anything.
Yet consider the equation \(x = 1\). Here, x is constrained to equal a single particular number, which is 1. Such a number is known as a constant. Its value in the equation cannot change. Things get more interesting when a variable is related to another variable. For example, let's say that \(y = 2x\). Both x and y are variables. However, the value of either variable can change, depending on the value of the other variable. Below are some sample pairs of values allowed by this equation: if x = 1, then y = 2; if x = 2, then y = 4; if x = 3, then y = 6. Such a function acts as a constraint on the values. If x is 1, then y must be 2. Functions can take many forms, such as \(y = x^3\), \(z = 2x + 5y\), or \(y = \sin x\), where sin is shorthand for the trigonometric function sine, which can be expressed quantitatively as a series of numbers.

What Exponential Functions Are

Exponential functions are functions where a constant is raised to a power. For example, \(y = 7^x\). Here, when x = -2, y = 1/49; when x = -1, y = 1/7; and so on. You can see that exponential functions can increase very quickly. There is a very special constant called e, equal to about 2.71828. e is a very special number in mathematics for many reasons, and it is also a very useful base for exponential functions; e is typically used as the base for exponential functions.

Pure exponential growth

Pure exponential growth is growth that is proportional to the current quantity. In principle, a system that is capable of self-replication can experience pure exponential growth. It can apply to populations of bacteria, fish and even humans. It can apply to chain reactions in nuclear physics as well.

Bacteria (photo credit: CDC US government)

Humans can self-replicate, so human societies can also experience pure exponential growth, in the form of \(y = e^t\), where the value of e is approximately 2.72. Pure exponential growth begins slowly, then literally explodes over time. Sometimes the plot of exponential growth is described as a "hockey stick" because it starts nearly horizontally, then "turns the corner" and grows nearly vertically. It typically concerns population differential equations such as \(\frac{dP}{dt} = kP\), where P is population, t is time and k is a proportionality constant. The solution for this equation is the classic exponential growth function \(P = P_0 e^{kt}\), where \(P_0\) is the initial population. Different growth rates result in different levels of growth at a particular point of time, but growth is still ultimately explosive (see below).

Pure exponential growth for different growth rates

Human social movements can also experience exponential growth. Many philosophies have been subject to exponential growth, because the founder could teach others to teach yet others. Investment pyramid schemes can also experience exponential growth. Certain types of growth not only grow, but cause changes in their environment to enable further growth. Some futurists make dire projections of explosive population growth and resource consumption by extending a pure exponential function into the future.[2] In reality, exponential growth is never seen for long periods of time, due to limiting factors. It has been noticed that systems in both nature and human society often grow exponentially at times (Ciotola 2001; Annila 2010). Exponential growth essentially means that a system's present growth is proportional to its present magnitude. For example, if a doubling of population (say of mice) is involved, then 10 mice will become 20 mice, while 20 mice will become 40 mice.
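As a brief sketch (not part of the original text) of the closed-form solution \(P = P_0 e^{kt}\) given above, the following compares a few growth rates; the starting population and the rates are arbitrary illustrative assumptions.

```python
import math

# Sketch of pure exponential growth, P(t) = P0 * exp(k*t), the solution of
# dP/dt = k*P. The starting population and growth rates are illustrative
# assumptions only.

def population(p0, k, t):
    """Population at time t for initial population p0 and growth rate k."""
    return p0 * math.exp(k * t)

p0 = 10.0                      # e.g. start with 10 mice
for k in (0.1, 0.3, 0.7):      # three assumed growth rates
    values = [population(p0, k, t) for t in (0, 5, 10)]
    print(f"k = {k}: " + ", ".join(f"{v:.1f}" for v in values))
# Larger k "turns the corner" sooner, but every positive k eventually
# explodes, which is why limiting factors matter in practice.
```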
A formula for exponential growth is: \(y = e^{kt}\), where \(y\) is the output, \(k\) is the growth rate, and \(t\) represents time.

Logistic Growth

A resource that is renewable, but limited in the short run, can be modeled with a logistic curve. Examples of such resources are new-growth forests and wild Pacific salmon. They can be nearly totally consumed in the short run, but these resources can restore themselves if they have not been exploited too completely. A logistic curve is not shown here, but is in the shape of an elongated "S" and can be found in many differential equations textbooks. The beginning (and bottom) of the "S" represents the initial exploitation. The forward-sloping "back" of the "S" represents nearly pure exponential growth. The end and top of the "S" represents a leveling off of growth, as consumption of the resource matches its ability to restore itself. In logistic growth, population tends to move towards a particular population level that reflects the carrying capacity of the system or environment. Such functions are also called S-curves. Logistic functions are of the form: \(\frac{dP}{dt}=k\left(1-\frac{P}{N}\right)P\), where P is population, t is time, k is a growth rate coefficient and N is the carrying capacity. N can also be viewed as the periodic replenishment of potential. Achieving a logistic curve is the holy grail of sustainability enthusiasts. Applying concepts of sustainability to an entire dynasty or regime is called Big Sustainability, and involves social and economic sustainability, as well as physical resource sustainability (e.g. sufficient desired resources and ability to avoid toxins).

Exponential Decay

Exponential decay of efficiency is produced by a Carnot engine operating across an exhaustible thermodynamic potential. It is a fundamental form of decay. The following is an equation for an exponential decay function: \(y = e^{-kt}\), where \(k\) is the decay factor. This appears nearly identical to the exponential growth function, except that the exponent is negative.

Efficiency-Discounted Exponential Growth

Efficiency-Discounted Exponential Growth (EDEG) involves the consumption of a non-replenished resource over time by a system of reproducing agents. It can be useful for modeling mineral production of a mining region. The efficiency-discounted exponential growth (EDEG) approach is relatively new, although functions similar in output have existed for over 100 years. A few rough simulations were conducted earlier (Ciotola 2009, 2010), but this approach is being further refined and structured. EDEG produces a model based on two mathematically simple components, but allows the addition of other, more sophisticated components. This author's original efforts at developing the EDEG approach were heavily influenced by M. King Hubbert, a geologist who used such curves to model domestic petroleum production (including peak oil) as well as labor.[7] Hubbert in turn was influenced by D. F. Hewett. It is possible to create differential equations that produce EDEG. It is also possible to empirically synthesize functions that closely replicate EDEG. An analytical solution to EDEG differential equations from fundamental principles has not yet been created. However, the differential equations can be iteratively calculated via a spreadsheet or computer program to produce useful results.
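As a concrete illustration of calculating such a growth equation iteratively with a short program, here is a minimal sketch in Ruby that steps the logistic equation \(\frac{dP}{dt}=k(1-\frac{P}{N})P\) defined above forward in time using simple Euler steps. The parameter values are illustrative assumptions.

```ruby
# Minimal sketch: iterate the logistic equation dP/dt = k*(1 - P/N)*P
# with simple Euler steps. Parameter values are illustrative assumptions.
k   = 0.05    # growth rate coefficient
cap = 1000.0  # carrying capacity N
pop = 10.0    # initial population
dt  = 1.0     # time step

(0..200).each do |t|
  puts "t=#{t}, P=#{pop.round(1)}" if t % 40 == 0
  pop += k * (1.0 - pop / cap) * pop * dt   # Euler update
end
# The population rises nearly exponentially at first, then levels off
# near the carrying capacity, tracing the elongated "S" described above.
```

The same period-by-period iteration pattern is what a spreadsheet or a longer simulation program would use for an EDEG differential equation.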
The author originally used the normal distribution approach (Ciotola, 2001) discussed by Hubbert, but this approach was not sufficiently broad for the author's need to model history, since few historical dynasties utilized petroleum in major quantities, nor did Hubbert's attempts involve a driving tendency for history. The author began some related models for French and Spanish dynasties (2009) and more fully developed EDEG for petroleum modeling (Ciotola 2010). It is not yet possible to analytically create an EDEG function from fundamental principles. However, EDEG can be expressed as a differential equation, which can then be iteratively calculated via a spreadsheet or computer program. When developing an EDEG model, an effort should be made to express parameters in terms of a potential and changing efficiency. If the quantity of the most critical conserved resource is known, then the model for total consumption over time should match that quantity. An EDEG function can be approximated by multiplying a pure exponential growth function by an efficiency function: \(P = k_1 e^{k_2 t} \left(1 - \frac{Q}{Q_0}\right)\), where P is power (or production), \(k_1\) is a constant of proportionality, typically the initial production (or power), \(k_2\) is a growth factor, \(Q\) is the amount of the nonrenewable critical resource thus far consumed, and \(Q_0\) is the initial quantity of the nonrenewable critical resource. \(k_1 e^{k_2 t}\) represents the exponential growth component. \(\left(1 - \frac{Q}{Q_0}\right)\) represents the efficiency component.

HS Curves

It may also be possible to create a function that transforms an H-curve (EDEG) into an S-curve, where there is an initial amount of a non-renewable critical resource, and then periodic replenishment of that resource.

Efficiency-Discounted Exponential Growth (EDEG) Approach

Origins of This Approach

Exponential Growth

In principle, a system that is capable of self-replication can experience pure exponential growth. Examples include bacteria, fire and certain nuclear reactions. Humans can self-replicate, so human societies can also experience pure exponential growth, in the form of \(y = e^t\), where the value of e is approximately 2.72. An example of pure exponential growth is graphed below. Efficiency-Discounted Exponential Growth (EDEG) involves the consumption of a non-replenished resource over time by a system of reproducing agents. It can be useful for modeling many phenomena, including the mineral production of a mining region. An EDEG function can be approximated by multiplying a pure exponential growth function by an efficiency function: \(P = k_1 e^{k_2 t} \left(1 - \frac{Q}{Q_0}\right)\), where P is power (or production), \(k_1\) is a constant of proportionality, typically the initial production (or power), \(k_2\) is a growth factor, \(Q\) is the amount of the nonrenewable critical resource thus far consumed, and \(Q_0\) is the initial quantity of the nonrenewable critical resource. \(k_1 e^{k_2 t}\) represents the exponential growth component, and \(\left(1 - \frac{Q}{Q_0}\right)\) represents the efficiency component.

[1] Meadows, et al. makes this point in Limits to Growth.
[2] For an opposing, but still dire view, see D. H. Meadows et al, Limits to Growth, Universe Books, New York (1972).
[3] The work of Forrester on system dynamics and the Club of Rome project Limits to Growth by Meadows et al involve attempts to better understand these limiting factors. The work of M. K. Hubbert and Howard Scott on peak oil and technocratic governance are other examples.
[4] M. Butler, Animal Cell Culture and Technology. Oxford: IRL Press at Oxford University Press, 1996.
[5] Ciotola, M. 1997.
San Juan Mining Region Case Study: Application of Maxwell-Boltzmann Distribution Function. Journal of Physical History and Economics 1. Also see M. K. Hubbert.
[6] Heilbroner, R. L., The Worldly Philosophers, 5th Ed. Touchstone (Simon and Schuster), 1980.
[7] Such curves were proposed by W. Hewitt as early as 1926 (source unavailable).
[8] See Gibson, C., Spain in America. Harper and Row, 1966.

Annila, A. and S. Salthe. 2010. Physical foundations of evolutionary theory. J. Non-Equilib. Thermodyn. 35:301–321.
Schroeder, D. V. 2000. Introduction to Thermal Physics. San Francisco: Addison Wesley Longman.

Human psychology is connected with resource availability and trends. Maslow's hierarchy indicates that when people have one thing in sufficient quantity, they then desire something "higher up" on their needs pyramid. Here, we discuss the inherent conflict between fast entropy and psychology. It is ironic, but fast entropy has shaped human psychology to reject the very idea of fast entropy.

People View Science Through the Filter of Feelings

People are emotional beings

People have emotions. They naturally form impulsive judgments based upon their feelings or physiological reactions. Thinking takes time. Emotion can be instantaneous. People evolved from times when there was little time to think. Organisms who thought first were eaten by something bigger. Organisms who reacted emotionally (were immediately scared and ran away) lived to have offspring. Our ancestors evolved to be emotional reactors and we inherit this trait.

New Ideas Are Psychologically Costly

New ideas typically require changing ideas about what is already known. This can make a person feel ignorant, uncertain, out-of-control and even unsafe.

New Ideas Can Be Deadly

To some extent, the social status of all people is at least partially dependent upon their knowledge and wisdom. New discoveries and theories therefore automatically decrease the social status of existing people, for such discoveries are by definition unknown to them. Status has been demonstrated to be the most important determinant of longevity (how long the worker lives).[1] Therefore the decrease in status caused by new discoveries can shorten the longevity of people, especially older people. Since new discoveries are actually life-threatening, it is natural that people will resist new discoveries, as if their life depends on their resistance, which it indeed does. Of course, new discoveries are often in fact incorrect in one or more aspects, so it is reasonable for people to initially reject new ideas. However, the duty to winnow and sift is often not enough to explain the visceral negative reactions that academic workers sometimes have towards new ideas. Therefore, emotions often color judgment. Incidentally, this subject has been studied, such as by Kuhn in The Structure of Scientific Revolutions. Such impulsive judgments cannot be stopped. However, many decisions and final judgments can be accepted after time passes and reason has a chance to examine new discoveries.

People Don't Like the Second Law of Thermodynamics

People like freedom of choice

People have the need to feel that they have freedom of choice. Although the Second Law of Thermodynamics is statistical in nature, it tends to be deterministic in effect. People prefer having choices and feel that they do have choices. Therefore anything that is deterministic is unacceptable.

People don't like limits

People don't like limits. They don't like to feel that the choices they do have are limited.
Unfortunately, the laws of physics suggest all sorts of limits. The limits on efficiency under the Second Law, or the limited amount of petroleum under the surface of the Earth, don't seem very pleasant to people.

People like immortality

People like immortality. Therefore the illusions of immortality, infinity and perfection are extremely desirable images. People like circles, for they are perfect and have no end. People also like pure sine waves, for they have no end, either. If you can view one cycle of the wave, you will know how it is forever and from one end of the universe to the other, so to speak; by grasping a piece of a pure sine wave, you grasp the whole. Also, pure sine waves exhibit the same sort of perfection as circles.

Motivation Is Inherently Irrational

People need to maintain their motivation. Indeed, the more they maintain their motivation, the more entropy they can each produce. Yet, the laws of physics feel constraining; it is hard to feel motivated when one is aware one is trapped by the laws of physics. So the principle of fast entropy has dictated, through the evolution of the brain, that people tend to deny the existence of fast entropy or other laws of physics. Instead, people live by impossible maxims. Have you not heard the saying "You can achieve anything you put your mind to"? You could solve world hunger, cure cancer and finish reading this book by dinnertime. Yet you haven't done those things yet. Don't you care about the starving children of the world? Perhaps not. Mother Teresa didn't solve the problems of poverty or world hunger either. According to the above maxim, she could have, but chose not to, so she apparently didn't care either, or was all in favor of starving children. Another maxim is to give something 110% effort. That is physically impossible. The point is that such impossible maxims help us to achieve more in life. We may not achieve everything we wish to, but we'll achieve more than the scientific cynic.

The Complete Picture

Inner and Outer Philosophy

You may recall that it was asserted that both inner and outer philosophies are needed for a complete social science. The emotions and motivational necessities of inner philosophy need to be considered along with the cold, hard scientific facts of outer philosophy, and vice versa. If you can appreciate and practice this, then you have learned the most important lesson in this book. The author periodically presents on fast entropy to conferences of sociologists and other social scientists. A frequent audience reaction is to devote the rest of their lives to disproving the principle of fast entropy, or at least its application to the social sciences. How can we escape from the jaws of its limits? The best way to escape is to remember that as individuals, our rate of entropy consumption is not the primary determinant of our happiness. The quality of our personal relationships and sense of community may be far more important to our happiness, self-esteem and status. A second point is that, as individuals, we have a great scope of freedom. Specific consideration of the laws of thermodynamics rarely constrains the day-to-day actions or decisions of individuals, even of physicists. You only need to worry about them if you are deciding whether to invest in an exotic energy technology, designing a mechanical device or promulgating a macro social policy. Third, time is often your friend as much as your enemy. The world is not going to run out of oil today, tomorrow or even next year.
To paraphrase French King Louis XV, it may very well last your time. Just as thermodynamic processes are nearly inevitable, they are rarely instantaneous. Finally, if you are able to use fast entropy as a crystal ball (albeit an often foggy one), you can arrange your life so as to use fast entropy to your advantage.

[1] BBC News article.

People who might be accepting of something in general may react differently when it is accompanied by change. Rates of change matter as well. People's immediate "gut" reaction to rapid change may differ from a slow, reasoned, adjusted reaction. People prefer to feel in control of their lives, and to feel secure in their environment. Rapid change involves uncertainty and a loss of control. Also, change can require a mental adjustment. Just the process of adjusting itself can be mentally painful or challenging to people, especially as they age. Older people have a lot of knowledge in their brains. Having to learn new material and reconcile inconsistencies between the old and new can be a greater challenge than merely learning the material with an emptier brain when one is young. Older people must unlearn and relearn. Finally, change can require physical adjustment. A person may have to move, change jobs, or learn new skills or languages.

Here, we discuss how to use fast entropy and other methods described in this text to model history.

[1] Note that dollar amounts are unadjusted for inflation — they reflect actual historical figures.
[2] See M. Ciotola, Journal of Physical History and Economics, Vol. 1, Is. 1 (1996).

Gibson, C. (1966) Spain In America. Harper and Row.
Smith, D. (1982) Song of the Drill and Hammer.

A model is a hypothesis about how something exists or works. A model could be a small version of something large, such as a table-top copy of the Notre Dame cathedral in Paris. Such a model would represent the large, most important features of the cathedral, such as the towers and flying buttresses, and possibly representations of some of the more distinctive smaller features such as the stained glass windows. A model can also be one or a set of mathematical equations that relate one quantity to another. For example, an equation could relate dynasty power to time. Such a model could be refined, such as to represent central versus regional power. We will primarily be concerned with creating quantitative models. Creating a quantitative model is really easy. Just relate two quantities to one another. For example, write the following equation: number of Roman soldiers = current year (CE). According to this model, the number of soldiers in the Roman empire is equal to the year in current era years. So in 100 CE (AD), the number of Roman imperial soldiers would be 100. This certainly is a model, because it produces results that can be compared with actual data. Historians evaluate the validity of such data, which may come from literary or archeological sources, and then can compare it with the model. A range of uncertainty is estimated. If the model produces a result that is not within the range of uncertainty for the data, the model is rejected or revised. If the model fits within the range, then it is valid, although not necessarily absolutely correct (no model ever gets proved) or representative of ultimate truth. Generally, models that fit the data best and are consistent with other valid models tend to be more accepted. It often requires several attempts to get a valid model, and many attempts to obtain better ones.
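As an illustration of this accept-or-reject test, here is a minimal sketch in Ruby that checks the toy "soldiers equal the year" model against a few data points with uncertainty ranges. The data values below are hypothetical, invented purely for illustration.

```ruby
# Minimal sketch: reject or retain a toy model ("soldiers = year in CE")
# by checking whether its predictions fall within each data point's
# uncertainty range. The data below are hypothetical, for illustration only.
model = ->(year) { year }   # toy model: number of soldiers equals the year

data = [
  { year: 100, soldiers: 300_000, uncertainty: 50_000 },
  { year: 200, soldiers: 350_000, uncertainty: 50_000 },
]

data.each do |point|
  predicted = model.call(point[:year])
  fits = (predicted - point[:soldiers]).abs <= point[:uncertainty]
  status = fits ? "consistent with the data" : "model rejected"
  puts "Year #{point[:year]}: predicted #{predicted}, observed #{point[:soldiers]} +/- #{point[:uncertainty]} => #{status}"
end
```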
The above example concerning Roman soldiers can be quickly rejected using commonly available data. Models can be improved by including additional terms and changing parameters. For example, adding a baseline number of soldiers, and then a term that might take into account the growth of mercenaries, might improve the accuracy of the model. In history, often the available or accepted data is limited, and uncertainties may be high. So initially, a more pragmatic approach may be to propose models and explore to what extent they might be valid. For the moment, let us assume that we have a validated, precisely known data set. However, we don't know the relationship between the data, its trends or driving tendencies. So we decide to develop a model to gain a deeper understanding. Generating a model is easy. Personal income = (5 * personal height) + 6. There. Done! Yet to what extent is it a valid model? There are tools for that. In fact, modeling is often a process of adjusting the model function and parameters until the model fits the data well. One technique is minimizing the sum of the squares (see the sketch below). For each term of data, calculate the value the model would produce (e.g. for each point of time). Take the square of the difference between the calculated and actual value. Add up all of those squares. Adjust your unknown parameters to produce the smallest value for the sum of the squares. This will be your best-fit model.

There is a limit to the significance of quantities. Here, we are only referring to mathematical significance. For example, a quantity is only significant up to half of the smallest unit being measured. For example, if you measure a distance with a meter stick, and the stick is only ruled in 1 centimeter units, then you can express the distance in terms of the smallest subdivision of the rulings: in this case, that would be 1/2 of a centimeter. So the significance would be 0.5 cm, which is three digits, e.g. 91.5 cm.

Measurements involve a degree of uncertainty. Once again, here, we are only referring to mathematical uncertainty. Using the previous example, the distance is likely not exactly at any specific centimeter ruling. It is somewhere between centimeter marks, and it can sometimes be a bit of a judgment call to determine which is the closest mark. So the uncertainty here would be plus or minus 0.5 cm. So if the distance was measured as 91.5 cm, then the measurement would be expressed as 91.5 cm +/- 0.5 cm. This sort of error cannot typically be eliminated.

Measurements can be subject to systematic error. This type of error occurs due to a consistent flaw in the measurement system. For example, suppose the end of the meter stick was once cut off at the 1 cm mark, so that it always understates the distance by 1 cm. Such sources can sometimes be identified through examination of the measuring apparatus, and eliminated if identified.

Statistical Approaches to Improving Quality of Data

Large regimes are comprised of vast numbers of individuals. Even a small city might contain tens of thousands of people. Most large urban areas contain millions of people. Most powerful countries contain at least 50 million people to over one billion people in modern times. Even if a regime is governed by a single individual such as a monarch or dictator, the regime is nevertheless comprised of all of the individuals governed, each with their own needs, perspectives, influence and power (even if individually small).
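Returning to the sum-of-squares fitting mentioned above, here is a minimal sketch in Ruby that scans a small grid of candidate parameters for an assumed linear model and keeps the pair with the smallest sum of squared differences. The model form, data points and parameter ranges are illustrative assumptions, not values from the text.

```ruby
# Minimal sketch: fit an assumed linear model y = a*x + b by minimizing
# the sum of squared differences over a small grid of candidate parameters.
# The data points and parameter ranges are illustrative assumptions.
data = [[1, 8.0], [2, 13.1], [3, 17.9], [4, 23.2]]   # [x, observed y] pairs

best = nil
(0..10).each do |a|
  (0..10).each do |b|
    sum_of_squares = data.sum { |x, y| (a * x + b - y)**2 }
    best = [sum_of_squares, a, b] if best.nil? || sum_of_squares < best[0]
  end
end

sum_sq, a, b = best
puts "Best fit: y = #{a}x + #{b} (sum of squares = #{sum_sq.round(2)})"
```

A grid scan is the crudest possible search; more refined methods exist, but the principle of adjusting parameters to minimize the sum of the squares is the same.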
We are concerned with developing a unified history of science, which means that we must be able to propose testable hypotheses. It is a lot easier to reject hypotheses that can be quantified. Yet given how complicated individuals are, and how much more complex entire societies of many individuals must be, how could it ever be possible to quantify societies? There are ways, but there are some phenomena that first require discussion.

Regimes As Vast Numbers of People

Individual Freedom of Action

These thousands and millions of people each possess their own interests and scope of action. Individuals appear to have a significant scope of freedom of action, even when they have limited civil rights.

Does the Time Make The Hero?

Does individual freedom translate into freedom of action for the entire regime? This brings to mind an age-old question. Does the time make the hero or does the hero make the time? Consider the following two cases. In football, the San Francisco 49ers were a legendary football team in the 1980s. For much of that time, they were led by a legendary quarterback, Joe Montana. In one game, the 49ers were behind with 15 seconds left in that game. Then Joe Montana threw a winning touchdown pass and the rest was history. Joe Montana was certainly a great quarterback. Yet, while acknowledging Montana's skill, coach Bill Walsh pointed out that this last-minute play had been rehearsed time and again in a comprehensive system of team training. Montana was part of that system.[1] Without that system, Montana could have thrown a great pass, but there would have been no one there to catch it. A second case applies to factory assembly lines.[2] In an assembly line, a conveyer belt moves an uncompleted product past a series of workers. Each worker completes a task, which is often dependent upon the already-expended efforts of workers "up-line." What if one worker works exceptionally diligently and quickly? What will happen? If the worker processes products too fast, there will be a pile of "work-in-process" waiting in front of the next worker who is working more slowly. Unless that next worker speeds up, all that will happen is that the factory's inventory of unfinished goods will increase, which is a waste of money and resources. The factory will be harmed. Or the hard-working worker, dependent upon an "up-line" worker for work-in-process, will simply run out of product to work on and have idle time. In neither case do the extra efforts of the diligent worker contribute to the productivity of the factory, and in one case they even reduce productivity.[3] In a large, interdependent system, such as a large society, the conclusion here is that it is the time that makes the hero, even if the time is silent as to which individual will earn the title of hero.

Regime As The Summation of Individual Behavior

A regime can be viewed as the sum of the individual contributions and actions of its individuals. When one thousand carpenters strike one thousand nails with one thousand hammers, the regime is one thousand nail-hits richer. A city is a single legal entity, but is comprised of numerous houses, factories, shops and other structures. This summation effect appears to tend to cancel out any effect of individual free will over material lengths of time. Many individuals will behave in one way while many others will behave in the opposite way. Some carpenters will drive nails into boards, while others will remove nails. The larger the society, the greater this canceling effect will tend to be.
Is a regime then completely at the mercy of historical destiny? This is not necessarily so, but the ways a regime can escape its "destiny" are limited and fairly specific in nature.

Regimes As Producers and Consumers of Resources

Regimes can be viewed as producers and consumers of resources. Just as an individual human requires air, water, food and other goods, so does a city, albeit in larger amounts. Humans are to regimes as cells are to the human body. Great networks of blood vessels supply nutrients to individual cells and carry away waste. Networks of nerves convey information. In a contemporary society, water is carried in great aqueducts, rail lines and freeways channel in nutrients and remove garbage, and a myriad of telephone and internet lines transmit information. Such resources can be anything necessary to sustain the regime.

Resource Exhaustion

Some resources are partially renewable, such as agricultural production. Others are limited and can be totally exhausted. Such resources can include fossil fuels, ground water, and old growth forests, for example. Social resources can also be exhausted. In business, social resources are accounted for under the term "good will" and even have a quantitative financial value placed upon them.

Societies Dependent Upon a Nonrenewable Resource

When a regime is substantially dependent upon a limited, nonrenewable resource, it can be modeled as a function of a normal distribution or other similar distribution. Regimes which are substantially dependent upon mining mineral reserves such as gold and silver are a prime example. Spanish governance over the New World and the mining communities of the San Juan region of Colorado were both highly dependent upon producing gold and silver. Even where a critical physical resource is not apparent, most regimes are dependent upon limited social resources and can therefore be modeled as a Hubbert curve.

Transition Points

A new society grows exponentially. Its people expect that exponential growth will continue. They frequently do not recognize limits to growth soon enough. Production does not match expectations, leading to social disruption. The point where growth slows and expectations diverge from actual production represents a transition point and may graphically appear as an inflection point.

[1] William Walsh et al, Finding the Winning Edge.
[2] This example is inspired by Eliyahu Goldratt, The Goal. Great Barrington, MA: North River Press, 1992.
[3] Goldratt, The Goal.

There are several long-term trends concerning humanity. Although these trends might not be observed every day, and there can even be periods and locations contrary to the trends, they still operate over long periods of time.

Long-term Trends

The inclination of the Earth with respect to the Sun changes in 26,000 year cycles (NASA). The Earth is slowly wobbling on its axis. This affects regional climates, including wind, rainfall and temperature. This change can be significant when considering periods of more than a few centuries.

Evolution and Selection

Modern humans have existed for at least 10,000 years. There may not have been much genetic mutation over the past few millennia, so the scope of human evolution during that period may be limited. However, there have likely been some effects due to selection, that is, the ability of people to adapt to particular local and social changes, as well as due to mating preferences.
For example, during the 1950s-1970s, there was apparently a tremendous mating preference for those who were able and willing to master the electric guitar, an example of a new technology.

Human Population Growth

The human population has grown tremendously in the past 10,000 years (U.S. Census).

Other Trends

Species changes (extinction, domestication, monoculture)
Destruction of forests
Salinization of soil in irrigated lands
Total land area used by humans
Technology advancement

Emergence of Societies

Due to these trends, and arguably the driving force of fast entropy, human societies formed. Living organisms formed, then multi-cell creatures. Animals formed, then vertebrates, then mammals. Primates became smarter and able to use tools. Homo sapiens developed. Language and agriculture were discovered and adopted, allowing people to form complex societies. The term dynasty is used broadly to refer to a continuous ruling group; it could be a related family but does not have to be. The term regime can also be used. The term society refers to a group of related people, typically of a single or similar group of ethnicities, such as the Han people in China or the Frankish people in France. Dynasties exist within a society, but can conquer other societies as well. Fast entropy can drive the formation of dynasties or regimes. Within the context of a civilization progressing over millennia, it is often possible to degrade built-up potential even more quickly, even given current types of social structures for a particular society. Hence, dynasties form to more quickly degrade that potential (just as a convection bubble forms in a boiling pot of water to release heat more quickly). Dynasties result in more rapid degradation of energy than does a more static society. Each dynasty has a lifecycle. A dynasty is more similar to an individual biological organism than to a swinging pendulum. A dynasty is born, matures, endures for a while, then dies. A new dynasty will not necessarily follow an old one, or might not immediately appear. Yet dynasties will continue to form as long as there exists a potential that cannot be more quickly degraded by other means. Does a dynasty have to have a life cycle? Could it not last forever, or at least indefinitely? Societies and some institutions can last much longer than dynasties. It is conceivable that a dynasty could be managed in a sustainable manner, but this is not what we typically observe in history. Why has the 300-year pattern appeared so frequently in history, from France to China to West Africa? It could be that humans who organize in large, durable regimes traditionally chose monarchies. It could be that the values that lead to success and failure go through a roughly fifteen-generation progression. It could be that these regimes have utilized the same sort of resources, such as agriculture, and perhaps land becomes excessively exhausted after about 300 years. Conserved social resources could include good will or social flexibility. Property rights, concentration of wealth and gentrification could eventually petrify a regime. Or, this could be viewed in terms of a standard 300-year predator-prey scenario. Movements towards stabilization can be described as a march towards thermodynamic or statistical equilibrium. There is a short-term type of equilibrium related to on-going flows and a longer-term equilibrium that relates to the "life-cycle" of the regime itself.

Considering Dynasties As Bubbles

A human society can experience bubbles.
A new dynasty within a civilization encounters a potential of good will and other physical and social resources, albeit of fixed magnitude. The society governed by the dynasty fills the role of a collection of heat engines, producing both work and entropy. Prosperity expands exponentially, increasing the consumption of potential entropy exponentially. Eventually, it becomes increasingly difficult for the dynasty to rely upon its store of goodwill and physical and social resources, decreasing its efficiency. As efficiency decreases, the dynasty will experience social crises and will eventually stop functioning. So far, the discussion has been largely speculation. The correct application of factual evidence will demonstrate to what degree the above is valid. The underlying mathematical formulae will be proposed in other sections. This approach is different from mere philosophy or opinion, because it is capable of being numerically disproved or restricted to limiting cases.

This section concerns generating models of the rise and fall of power of dynasties versus time, and how a single dynasty can be modeled using the efficiency-discounted exponential growth (EDEG) approach. This sort of model can be called a power progression. Various approaches to modeling dynasties will be explored, then a new physical approach will be proposed. All of the approaches presented are simplifications that assume a gradual rise and fall. Of course, history is rarely so cooperative. Hence, the models shown should be considered mere first approximations. Mathematical treatment of dynasties will be novel to most historians, so simple approaches will be discussed first, and more complex ones later. It is simpler to model a sufficiently large, robust, independent dynasty than one that existed merely at the whim of its neighbors, for there are fewer significant dependencies, and thus it can be approximated as a substantially isolated system. So we will utilize Russia's Romanov dynasty as an example. Widely accepted start and end dates are 1613 and 1917 (Mazour and Peoples 1975). Peter the Great and Catherine the Great were two of the most important rulers of the Romanov dynasty, and the Russian Empire gained much of its most valuable territory by the end of Catherine's reign. The Romanov dynasty was big, robust and essentially independent. It fought wars, but it generally was not under serious threat of extinction. Even Napoleon could not conquer Russia; rather, Russia nearly conquered Napoleon. This dynasty was reasonably long-lived, rather than just a quick, "flash-in-the-pan" empire. By developing a fundamental approach to modeling the rise and fall of dynasties, it is possible to accept or reject models based upon both qualitative historical evidence and quantitative historical data. A regime can be a dynasty or corpus of government. A society here is defined to be a particular people or culture over time. The people occupying the area of modern-day France from the time of the Frankish invasions to the present could be viewed as French society. A dynasty, such as the Capets in France, would be viewed as a regime. The nominal term "government" does not always describe a regime, however. A regime can be recognized by having a clear rise and fall connected with production or consumption of a critical resource.
Historical dynasties are consumers of energy and producers of power, so models in terms of such quantities are inherently fundamental in that they can be derived directly from the laws of physics and expressed in physical quantities. Such models are not theories of everything, but rather describe certain types of broad macro-historical phenomena rather than the intricate workings of the interactions of individual people. The term energy is meant in the physical sense here. There are several possible measures of the physical energy of a dynasty, such as population governed or grain production. Each of these is translatable into physical units of energy. For example, the quantity of people multiplied by the mean Calorie diet per person will result in an amount in units of energy. These figures can be estimated for most dynasties over their lifespans, albeit with differing degrees of uncertainty. The proportion of this energy that rulers of the dynasty actually have at their disposal is beyond the scope of this paper, but should be considered for improved accuracy. Power is a physical term. It refers to energy expended per unit of time. Yet it also has meaning within social and political contexts, and will be discussed in both senses. Absolute power would generally be presented in physical units of power such as Watts. However, it is possible to express any type of power in terms of proportions, such as the ratio of power at a dynasty's peak to its start date. Such a ratio can apply to physical, political or even military power. So the EDEG approach can be utilized to model any type of power. In fact, the EDEG approach provides a framework to explore the question of how political and physical power are related.

Exponential Growth Component

A new regime will tend to experience exponential growth. A chief characteristic of exponential growth is that growth feeds even more growth, resulting in an increasing rate of growth. Increases in population and consumption can become explosive. Nevertheless, the growth rate in early stages tends to be relatively flat, while the growth rate later on tends to be relatively steep. In reality, the change between "flat" and "steep" can be surprisingly sudden despite warning signs.[1] Incidentally, the transition from flat to steep may be more painful than graceful for many people. Regimes that are unprepared can suffer greatly.

Example of Mechanism of Exponential Growth

Recall our example of heat engines begetting heat engines. That is an example of exponential growth, because the rate of growth of heat engines was proportional to the existing population of heat engines at a particular instant.

Theory—Pure, Unlimited Growth

Sources of growth can include geographic expansion, infrastructure improvements and trade expansion. It will be assumed that dynasties will strive to grow exponentially. (This paper does not attempt to prove this assertion; rather, it is a rebuttable presumption.) If so, this certainly explains the rise of a dynasty. There is a minor distinction between exponential growth and compounded growth. Exponential growth essentially involves continuous compounding, which produces a larger effective growth rate than discrete compounding. It is similar to the difference between quarterly and daily compounding of a bank savings account. This effect is less significant at small growth rates but more so at very large rates. For the growth rates that we will consider, the effect is negligible compared to the other sources of uncertainty that exist.
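To illustrate how small the continuous-versus-discrete compounding difference is at modest growth rates, the following minimal Ruby sketch compares the two over a dynasty-length span. The rates and time span are illustrative assumptions.

```ruby
# Minimal sketch: compare continuous compounding, e^(k*t), with annual
# discrete compounding, (1 + k)^t. Rates and span are illustrative assumptions.
[0.01, 0.05].each do |k|
  t = 300.0                               # roughly a dynasty-length span, in years
  continuous = Math.exp(k * t)
  discrete   = (1.0 + k)**t
  puts format("k=%.2f: continuous=%.2f, annual=%.2f, ratio=%.3f",
              k, continuous, discrete, continuous / discrete)
end
# At k = 0.01 the two differ by only a couple of percent over 300 years,
# which is small compared to historical uncertainties; at much larger
# rates the gap widens noticeably.
```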
The Romanov dynasty with various growth rates is shown in Figure 5. The plot shapes appear similar, except that a greater rate produces a "sharper" corner. Also, notice the range of power values: a greater growth rate produces a disproportionately greater power value at later points of time.

Figure 5: Exponential growth for various growth rates

Efficiency Component

Efficiency is the proportion of consumption that is transformed into production, or its equivalent in power. In reality, there will always be factors that limit the growth even of a self-replicating system. A regime that experiences pure exponential growth will eventually begin to experience such limiting factors.[3] The magnitude of these limiting factors will increase during growth (more than proportionately). These limiting factors restrain growth and sometimes stop it altogether. Limiting factors usually exist due to a shortage of some essential resource or an excess of some "negative" resource. In a simple case of exponential bacteria growth, limiting factors can include insufficient nutrients and production of excessive toxin. A toxin can reduce or prohibit growth even in a resource-rich environment. Turning to biotechnology, an examination of the case of growing cells shows that the chief limiting factors are typically a nutrient limitation or an accumulation of a toxic metabolite.[4] Even in an environment that is overall rich in resources, scaling issues result in the decrease of the surface area to volume ratio of the organism colony. In other cases, some cells require a growth surface to anchor to. A lack of oxygen can be a limiting factor for large cell cultures. The organisms often cannot get access to abundant resources because they are crowded out by their neighboring organisms. Multi-cell organisms attempt to overcome the surface area limitation with structures such as veins and folding. However, an elephant still faces many challenges as compared with an ant, such as expelling sufficient body heat. Human civilization meets a similar surface area challenge with similar structures. The great freeways and road networks in cities, and even across the countryside, in many ways resemble the blood circulation system in our own bodies. Another source of limiting factors is the growing cost per unit to extract limited resources such as minerals. Societies attempt to use large-scale social and technical structures to overcome this challenge, but these structures create additional challenges.[5] There are other examples. In the U.S., the "closing" of the western frontier marked a limit of growth to homesteading. In petroleum production, the increasing cost of drilling for oil is a limit to growth. Malthus[6] pointed out limiting factors in the growth of agricultural production.

Efficiency Decay

Dynasties inevitably end, which is typically preceded by a decline in power. Exponential growth alone is insufficient. Another physical principle comes to our aid. A dynasty can be viewed as a heat engine (or collection of such). Engines consume a potential to produce work or exert power. As an engine consumes a nonrenewable potential, the efficiency of that engine may decrease. (For physics-savvy readers, picture a Carnot engine operating across exhaustible thermal reservoirs. As heat is transferred, the temperature difference will decrease, and so too will efficiency. (Ciotola, 2003)). Therefore the engine's net production will decrease and eventually fall to zero.
Likewise, as the dynasty progresses, non-renewable resources will be consumed, and efficiency will decrease. There will still be production until the end, but there will be a lower return on investment, so to speak. Causes of decay can include overuse of agricultural land leading to nutrient depletion, the build-up of toxins in the environment, depletion of old growth forests, and even running low on social goodwill. There are two types of decay, linear and exponential, compared in the plot below. Figure 6 contains plots of linear and exponential decay. Note that efficiency is shown as a multiplier rather than a percentage.

Figure 6: Linear versus exponential decay of efficiency

A regime, past its prime and dependent upon a limited resource, may experience exponential decline. Exponential decline in an EDEG situation can happen more quickly than exponential growth. However, there are two disadvantages of exponential decay within the context of modeling dynasties. First, it is slightly difficult to set up. For example, exponential decay has an infinitely long tail. While this allows for mathematical immortality, most of the tail is superfluous in the context of a dynasty of limited lifetime. Second, it may not provide the models most consistent with observations. A linear approach is simpler to set up. Importantly, it also provides some reflection of efficiencies achieved through centralization and economies of scale as the dynasty progresses. It has unambiguous beginning and end points. Efficiency cannot be greater than one, and is typically no lower than zero. Therefore, as a first approximation, one can set the efficiency to 100% at the start date of the dynasty and 0% at the end year (except that the math is simpler if the value 1 is used for 100%). Using a value of zero for ending efficiency ensures that the dynasty actually does end by its historical end date. Although physical efficiency is typically lower than 100% for real-life heat engines, 1 provides an easy starting point that also produces the correct shape of curve. The following is an example of a linear decay function: efficiency = 1 - ((year - start year)/(end year - start year)). As the year increases, efficiency will decrease. Using a lower initial efficiency reduces the magnitude of production increase for the dynasty compared to its initial production. It also flattens out the curve. It is possible to use a value other than zero for the ending efficiency, but then some other factor must be used to end the dynasty.

Double Injury—Declining Efficiency Can Occur Even As Resources Become Very Scarce

The key impact of limiting factors, whether insufficient positive resources or excessive negative resources, is a decrease in the efficiency of whatever is acting as the "heat engine" to do work. Even where the EDEG function appears symmetric, the intrinsic efficiency function can be extremely asymmetric: high in the beginning and low towards the end. Centralization can produce economies of scale that can boost net efficiency, but when a centralized system goes bad, it can go really bad. It can be difficult to go gradually to a less centralized system, and failed central institutions can bring a regime crashing down quickly. This is an example of an irreversible process.

Exponential Growth Where A Nonrenewable Resource Exists

We still need to go a step further.
The effect of limiting factors, even upon exponential growth, is that growth will either reach a plateau or will become negative. In all cases, given sufficient time, growth will become negative, and that negative growth shall substantially cancel out past positive growth. Here, growth refers to that derived from consumption of a critical limited resource. Production in a society is ultimately dependent upon a scarce, conserved resource. The term conserved means that the resource is non-renewable. The total amount ever recoverable cannot exceed a fixed quantity. In the case of a gold mine, certainly no more gold than is already present in the mine can be captured. A regime will typically consume both conserved and renewable resources. It is the consumption of the critical conserved resource that shall determine the growth characteristics of the regime. So now we bring exponential growth and declining efficiency together. We need to use the decay to discount exponential growth, just a little in the beginning, then completely at the end. Growth will not only slow down but often will actually start to reverse. Such growth and decline can be represented by an EDEG function, where the area under the curve represents either the total production or consumption of a conserved resource over time. Note that the critical resource becomes more expensive as each successive unit of it is utilized. In the case of petroleum or a precious ore, the least expensive deposits are extracted first. Then the next least expensive deposits are extracted, and so on. The following is an example of an EDEG equation: y = exponential growth function * efficiency function, where * is a multiplication symbol. Here is a simple way to generate a quantitative model for a dynasty. It is simplistic, but it produces generally qualitatively correct results. Assume exponential growth: \(P_t = P_0 e^{kt}\), where \(P\) is power, \(P_0\) is initial power, \(t\) is time and \(k\) is a constant of proportionality. Assume that a nonrenewable resource is being consumed that cannot be replaced within the lifetime of the dynasty. Then assume that each subsequent unit of resource consumed produces power at a decreasing efficiency. A simple linear efficiency decay function could be: \(\epsilon = 1 - \Big( \frac{current~year - start~year~of~dynasty}{end~year~of~dynasty - start~year}\Big) \), where \(\epsilon\) is efficiency. Then the efficiency-discounted power is: \( P = \epsilon \cdot P_0 e^{kt}\). Substituting in our functions (utilizing linear decay): \(P = P_0 e^{kt}\Big(1 - \frac{t}{end~year - start~year}\Big)\), or equivalently, \(P = P_0 e^{k(year - start~year)}\Big(1 - \frac{year - start~year}{end~year - start~year}\Big)\). This produces a steady rise, a level period and a slightly faster decay. So by discounting exponential growth by decreasing efficiency, we then have a rise and fall pattern that is consistent with the rise and fall of a dynasty. That negative growth occurs often indicates resource exhaustion. External competition can result in negative growth, but the success of external competition can often be described as a function of internal resource exhaustion. For example, by the early 13th century, the Byzantine empire had consumed much of its timber reserves, so essential for the maintenance of its navy. The Byzantine capital fell for the first time in 1204 A.D.

Sample Description of the Progression of a Traditional Single Regime

A major dynasty (e.g.
China, France, West Africa) would typically begin with a daring, competent, often unpolished leader, but with effective power loosely distributed. The future generations of rulers will become increasingly desirous of luxurious living and will also demand expensive "trophies" such as palaces, major public works or optional conquests. This will stress the natural resources of that society, and the dynasty will experience financial difficulty. Taxes will be raised. Bureaucracy will need to be greatly expanded in order to collect the increased taxes and to administer increasingly complex tax codes. Internal dissatisfaction will increase, so greater internal military effort will be required to suppress rebellions. The dynasty rulers will become increasingly dependent upon their military to maintain internal order and to enforce tax collection. At the same time, the rulers will tend to become increasingly occupied with court etiquette and the pursuit of such civilized activities as art and scholarship; but they will become less competent at governance and further removed from the realities of the population they govern. Eventually competing figures from within the society, backed by military figures, will challenge the rulers. These initial challenges will be put down brutally, further increasing discontent, and destroying much of the social structure and institutions required for the effective maintenance and defense of the society and its economy. Due to the decreasing magnitude of economic activity, the population and strain on natural resources will decline, allowing for some recovery of productive capabilities. Then, eventually, further challenges from either within or without the society will replace the dynasty, and a new dynasty will form.

Thermodynamic Interpretation of Sample Description of the Progression of a Traditional Single Regime

A thermodynamic potential has built up. A new major dynasty in effect represents a new collection of heat engines that is capable of consuming such built-up potential. The magnitude of built-up potential is relatively high, so heat engine efficiencies are relatively high. Such heat engines utilize some of their work to produce additional heat engines. The population will rise, and economic activity will increase. As future generations of rulers become increasingly desirous of luxurious living and demand expensive "trophies" such as palaces, the thermodynamic costs of maintaining such heat engines will increase. Initially, due to high efficiencies and growth of production, such maintenance costs will not be problematic. Yet while demands for maintenance increase, built-up potential is being depleted, resulting in decreasing thermodynamic efficiency. (Although a continuing flow of potential exists from such sources as agriculture, it cannot keep up with consumption). Centralization temporarily regains high efficiencies by increasing economies of scale and allowing access to difficult-to-extract potential (that has a high "activation energy"). Yet thermodynamic maintenance costs continue to escalate while efficiency decreases due to continued consumption of built-up potential. Overshoot results. The dynasty will no longer be able to maintain its structure, and total production (thermodynamic work) declines. This decline continues, but additional attempts at overshoot cause the decline to be chaotic. The population of heat engines continues to decrease.
Chaotic decline places the population of heat engines below what can be supported by "renewable" flows of potential. Hence a thermodynamic potential will build up again. Then, eventually, a new dynasty (thermodynamic "bubble") will form. An exception is where the original built-up potential is from a source that can never be replenished, or that can be replenished only over periods of time much longer than a typical dynasty lifetime. Rainforests, agricultural lands vulnerable to erosion, and mineral deposits may fall into this latter category. New dynasties can form in these areas, but their absolute magnitudes may be different from the original regime, since they are consuming a different source of potential. Another exception is where a truly breakthrough technology becomes utilized, such as dynamite versus manual labor for accessing mineral deposits. Very few technologies are sufficiently significant to fit in this category, though; most are incremental in nature and are already anticipated by the thermodynamic approach.

Example: Romanov Dynasty

Let us apply the EDEG approach to the Romanov dynasty. Let us assume a conservative 1% growth rate. Let us further assume linear decay from 100% to 0% efficiency. A simulation has been written in the Ruby programming language (a minimal sketch in the same spirit appears below). This language is mathematically robust, yet involves code that is relatively easy to read and understand. The dynasty is run through the Ruby simulator, using the above parameters. A data table was produced to generate the plot shown below.

Efficiency-discounted exponential growth (EDEG)

Here the peak is close to 1820. Napoleon had been conquered, and the dynasty had achieved much of its geographic expansion by then. Yet by this time, social unrest began to shake the Romanov dynasty. Also, note how the dynasty power begins at a level of 1 and ends at a level of 0. This is appropriate, since the dynasty had to begin from something, but typically ends in nothing. For example, the ancestors of the Romanovs existed before 1613, but the entire immediate family was killed during the Russian revolution. The peak occurs at a relative power value of 2.6, which indicates that the dynasty was over twice as powerful at its peak as at its beginning. Remember, this model is merely a hypothesis that is either valid or invalid for a particular level of uncertainty. The simulation was run again with higher growth rates. We again assume linear decay from 100% to 0% efficiency (see Figure 8). Note several features in the response of power to a changing growth rate parameter. First, a higher growth rate results in a later peak. Second, the peak-to-initial power ratio skyrockets as the growth rate is increased. Additional factors can be imposed as adjustment functions. One-time events (such as a rare but large natural disaster) can be superimposed as an event "mask". It may be of further interest to tie the rise and fall to patterns concerning the production and consumption of resources, to determine what correspondence, if any, there is between physical and political power. This can be explored by utilizing actual physical energy data to produce a model of physical power, and then comparing that model with evidence of political power over time. With the wealth of historical data being gathered in anthropological data warehouses and other "big data" facilities, this may be accomplished with increasing validity.
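The following is a minimal sketch, not the author's actual simulator, of the kind of Ruby calculation described above: exponential growth at an assumed 1% rate, discounted by linear efficiency decay between 1613 and 1917. With these assumptions the peak lands in the late 1810s at roughly 2.5 times the initial power, which is broadly consistent with the figures discussed above.

```ruby
# Minimal sketch (not the author's simulator): EDEG model of the Romanov
# dynasty with an assumed 1% growth rate and linear efficiency decay.
START_YEAR = 1613
END_YEAR   = 1917
K          = 0.01   # assumed growth rate

def edeg_power(year)
  t          = year - START_YEAR
  efficiency = 1.0 - t.to_f / (END_YEAR - START_YEAR)   # 1 at start, 0 at end
  Math.exp(K * t) * efficiency                           # initial power taken as 1
end

peak = (START_YEAR..END_YEAR).max_by { |year| edeg_power(year) }
puts "Peak year: #{peak}, relative power at peak: #{edeg_power(peak).round(2)}"
```

Changing K and re-running reproduces the behavior noted above: a higher growth rate pushes the peak later and inflates the peak-to-initial power ratio.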
Discussion and Future Directions

This discussion of the EDEG approach is more of a bare-bones beginning than a complete end. It raises more questions than it answers, but it provides a broad framework for answering those questions. This framework acts as a unifying skeleton to link the humanistic elements of history with the quantitative constraints of the physical universe. The power of such a framework should not be underestimated. It is possible to gather quantitative data (or quantify qualitative evidence), perform statistical analysis and accept or disprove hypotheses. Yet such results, while often important, are merely empirical. They are often hard to use to constrain or illustrate each other. In a unified framework, all results act to constrain all other results. When we learn about one thing, we necessarily learn something about everything else. This is where the physical sciences have derived much of their strength. There are several immediately apparent ways to improve the value of the EDEG approach. One improvement would be to better understand efficiency decay. Another improvement would be to start using actual data of physical energy, to the extent such data is available. Another improvement will be to separate the power level of the underlying society from that of the dynasty. For example, Russia did not disappear upon the death of the Romanov dynasty. On the contrary, it is still one of the most powerful societies on Earth. This brings up the need to be able to model the emergence of a series of dynasties in a way that connects and constrains each dynasty, such as concerning relative strength and timing of emergence. Further, there needs to be a way to compare co-existing dynasties and model their interaction within this framework. While the EDEG approach suggests possible means, the devil will be in the details.

[1] Meadows, et al. makes this point in Limits to Growth.
[2] For an opposing, but still dire view, see D. H. Meadows et al, Limits to Growth, Universe Books, New York (1972).
[3] The work of Forrester on system dynamics and the Club of Rome project Limits to Growth by Meadows et al involve attempts to better understand these limiting factors. The work of M. K. Hubbert and Howard Scott on peak oil and technocratic governance are other examples.
[4] M. Butler, Animal Cell Culture and Technology. Oxford: IRL Press at Oxford University Press, 1996.
[5] M. Ciotola, San Juan case study. Also see M. K. Hubbert.
[6] Heilbroner, R. L., The Worldly Philosophers, 5th Ed. Touchstone (Simon and Schuster), 1980.
[7] Such curves were proposed by W. Hewitt as early as 1926 (source unavailable).

Gibson, C. 1966. Spain in America. New York: Harper and Row.
Hubbert, M. K. 1956. Nuclear Energy and the Fossil Fuels. Houston, TX: Shell Development Company, Publication 95.
Ciotola, M. P. A. 2014. Efficiency Discounted Exponential Growth (EDEG) Approach to Modeling the Power Progression of a Historical Dynasty. arXiv, submitted 18 Nov 2014.

Creating a simulation and generating results is relatively straightforward, mathematically speaking. Yet an important part of validating the principles behind a simulation is a favorable fit with actual historical data. Sometimes data is sparse. Sometimes it is plentiful, but not in a form that facilitates an easy comparison. Sometimes the validity, relevance or accuracy of the data itself is in doubt and must be carefully evaluated.
Quantitative Challenges
What can be more difficult than identifying a regime is to identify its beginning and end points, as well as to quantify the regime. For example, the Bourbon dynasty was disrupted by the French revolution in 1791, yet there were three more Bourbon kings up to the year 1848. Further, the movements behind many regimes begin well before the official birth date of the regime. For example, the family that became rulers of the Carolingian dynasty had ruled France in all but name since 700, which would have given it a duration of 287 years (see Table).
General Approach
For modeling, one can adjust the constants involved to produce the best fit for the data. The ways to do so could fill whole volumes in themselves, and are better covered in mathematical texts devoted to that subject. For purposes of this text, adjusting the parameters of the proposed function to provide the best visual fit provides a method that anyone who knows how to use a graphing program can utilize. Although this method is easy to implement, make sure to use the proper units for the constants! If you do not have any data, this method cannot be used. Also, if you only have a small amount of data, or data for only a short time period, be warned that the model could be less accurate. If you have data that appears to contain a great deal of noise (random variations), is quite inconsistent from period to period or contains a cyclic variation (such as an annual cycle or a regular seven-year weather pattern), you may need to smooth out the data. To reduce noise, you can use a moving average smoothing technique. For purposes of this text, you could average each value with the value immediately before and after it. For cyclical data, you can average over half a cycle before and afterwards. There are much more sophisticated smoothing techniques that can be found in textbooks on various types of forecasting.
Derivative Approach
Full quantitative data for a historical regime is often unavailable, or is only available at great expense of time or money. Yet, historians frequently come across information that is anecdotal or qualitative rather than quantitative. Fortunately it is often possible to convert qualitative data into quantitative data using a trick from calculus. It is nearly always possible to attach some date to anecdotal information. Often the date assigned can be quite precise. Major trends can often be identified anecdotally, and numerical dates can be matched to anecdotal data. Therefore, a series of date-trend pairs can be created for whether the regime is growing, reaching a plateau or declining. A table such as that below can be created. Anecdotal information indicating an increase or a decrease represents a positive or negative slope of an underlying function. Such a slope can be seen as the derivative of that underlying function. Such slopes can typically be crudely plotted on a meta-velocity versus time graph. Then an underlying function can be proposed that is consistent with the meta-velocity graph. A change in slope indicates meta-acceleration, which further indicates the presence of a net meta-force. Hence, even anecdotal or qualitative data can frequently be used to generate a meta-mechanical function.
TABLE: Date-Anecdote Data Pairs for a Hypothetical Regime
Date                  Regime growth
603 CE (e.g. or AD)   Official birth of regime
612 CE                +
780 CE                Level
876 CE                -
917 CE                No longer exists
An exponential function can be created whose first derivative function matches the anecdotal date-trend data, as in the sketch below. 
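The following sketch, in the same spirit as the Ruby simulator mentioned earlier, shows one way to test such a proposal against the date-trend pairs in the table above. The 0.7% growth rate, the strictly linear efficiency decay and the EDEG-style form of the proposed curve are illustrative assumptions, not values taken from any source; the check is simply whether the sign of the curve's slope at each anecdote date matches the recorded trend, with a slope near zero corresponding to "Level".

```ruby
# Checking a proposed curve against date-trend pairs (illustrative values).
anecdotes = { 612 => "+", 780 => "Level", 876 => "-" }

start_year, end_year = 603, 917
growth   = 0.007                        # assumed; chosen so the plateau falls near 780 CE
lifetime = (end_year - start_year).to_f

power = lambda do |year|
  age = year - start_year
  (1.0 + growth)**age * (1.0 - age / lifetime)
end

anecdotes.each do |year, trend|
  slope = power.call(year + 1) - power.call(year - 1)   # crude numerical derivative
  puts format('%d CE  anecdote: %-6s  model slope: %+.4f', year, trend, slope)
end
```

With these numbers the slope comes out clearly positive in 612, nearly zero in 780 and clearly negative in 876, which is consistent with the anecdotal trends in the table.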
This is certainly a rough approach, but it can give a first approximation of the quantitative rise and fall of the regime. If the potential and other characteristics can be identified, then a better quantitative characterization can be achieved. Of course, there may be shorter term upward and downward trends. These can be modeled with a secondary function. Minor victories and setbacks should not be included in trend data for the major regime characterization. Although there may be a primary power source, such as agriculture, for a dynasty, there may be secondary sources of power and costs, as well as short-term events that will impact the primary power progression of a dynasty, and these should be taken into account where possible.
Superposition of Noise and Other Functions Upon Hewett-Hubbert Curves
Actual data regarding production or consumption of a critical resource will not be a smooth function, but rather one with lots of jagged peaks and dips. These peaks and dips often represent independent functions that are not functions of the critical resource. Sometimes the independent functions are harmonic in nature. Often they are chaotic in nature. Sometimes the peaks and dips are due to random events or "noise". Usually the magnitude of the random events will not seriously disrupt the regime. Sufficiently complex regimes are "self-healing." They will react in a manner to compensate for these random events. (See discussion regarding conservation of shock under meta-mechanics). There may literally be small, shorter duration Hewett-Hubbert functions, reflecting business cycles, short-term opportunities and setbacks, superimposed upon the dynastic function. Further, although a regime can be expressed as a function of a conserved critical resource, in some cases other resources may be good substitutes. Such resources will entail their own infrastructure and merit a Hubbert curve in their own right. Consider the example of oil versus coal. The U.S. is heavily dependent upon petroleum. There are large oil companies as well as drilling operations and a significant oil processing and distribution network. Petroleum acquisition is supported by powerful lobbyists and the deployment of the U.S. military where required. Coal can be an important substitute for petroleum. Gasoline can be made from coal. It's more expensive, but not prohibitively so. Coal has its own companies, its own processing facilities and its own distribution network and customer base. Coal has its own lobbyists as well. Sometimes coal interests are at odds with petroleum interests. Consumption of petroleum and coal can each be expressed as separate Hubbert curves. When describing the U.S. regime, the more significant of the two resources may be considered as the critical resource. However, a better representation would be to superimpose both the oil and coal curves. This involves adding them up, period-by-period. Doing so for the past where data exists is relatively simple. Extending those curves into the future is possible, but more challenging. Standard economic analysis can provide some indication of relative demand for each resource.
Secondary Functions from Hewett-Hubbert Curve
Other functions may themselves be secondary functions that are driven by the main Hewett-Hubbert curve function. The social values and moods of a society are to some extent a function of position along the Hewett-Hubbert curve. Such social values and moods can be considered secondary Hubbert functions. 
Examples include the distribution of wealth, economic centralization, development of infrastructure, aspects of philosophy, tolerance for "immoral" behavior and even the number and size of libraries.
Describing A Society As A Series of Regimes
A society can be modeled as a series of regimes or Hubbert curves. Each curve would typically represent a dynasty for a traditional historic monarchy. Traditional, monarchical, agricultural-based regimes have historically tended to endure for about 300 or so years. This is a rough rule of thumb. Other types of societies will tend to have a governance change in that period of time but may maintain better legal continuity of government. Not all regimes last for about 300 years. Where a potential has not restored itself, or those who attempt to rule the regime are not competent (i.e. a defective or inherently inefficient "heat engine"), a regime will be short lived. The other extreme is where the potential is too great. This can happen when neighboring regimes have become weak. In this case, a regime can expand too quickly and become a great, but brief, empire. Such appears to have been the case of the first French Empire led by Napoleon. There are plenty of exceptions to this 300 Year Rule. Yet, focusing attention on regimes that fit this pattern can be useful to identify more general principles and constraints that govern humans. This is similar to the case of the development of astronomy, where the easily observed bodies such as the Moon, Sun and visible planets were modeled first and led to Newtonian mechanics. Later, smaller, further and more exotic objects were studied and modeled. Further, the 300 Year Rule is much less likely to be applicable to most of the regimes in existence when this book is written. Few regimes today are traditional agricultural monarchies. Further, regimes have become much more interdependent with each other, so it can be expected that Hubbert Curves will become more distorted and even more merged than at any time in the past, even for the largest regimes in existence today. Also, most of the current regimes are dependent upon non-renewable resources such as petroleum that had never driven regimes before the 19th century. Yet, as mentioned above, a study of historical traditional 300 Year regimes can help to develop generic principles that can be applied to a much broader range of regimes. A common error would be to assume that the series of EDEG curves represents a periodic function. It's not. However, many functions can be expressed as a Fourier series (a combination of sinusoidal functions), so perhaps a series of EDEG curves can be as well. Regimes might not follow immediately one after another. Or there could be some overlap between older and newer regimes.
Historical Dynasties
Dynasties in major historical civilizations are typically easy to identify. In a sense, dynasties are what fill the pages of historical textbooks. The following is a chronological list of French dynasties, along with duration data.
TABLE: Dynasty Series for France
Dates (CE)          Dynasty                            Duration
481–751             Merovingian dynasty                270 years
754–987             Carolingian dynasty                233 years
987–1328            Capetian dynasty                   341 years
1429–1588           Period of relative discontinuity
1589–1791*/1848     Bourbon dynasty                    202/259 years
* 1791 represents the French Revolution, which interrupted the regime; it was restored for a while after the fall of Napoleon, until 1848.
It is clear that the dynasties are not exactly periodic (exactly the same length in years as each other). 
French dynasties 500-1850 CE
Below is a plot of the power progressions of several major West African dynasties, from 750 CE to 1591 CE. The lack of periodicity is more obvious. Although the geographic locations were all in West Africa, the exact locations varied. There was less territorial overlap than in the French dynasties plotted above.
Selected major West African dynasties, 750-1591 CE
Thermodynamic Approaches to Model a Series of Dynasties
An exciting next step is to attempt to model a series of past dynasties. You could simply do this by superimposing a best fit set of distributions over time. This is fairly simple to do and might provide some utility and satisfaction. However, the above series of French dynasties has been modeled. In this case, each regime was modeled individually using the simple thermodynamic method described. An important point about applying fast entropy to real life situations is keeping the expression "it takes two to tango" in mind. Fast entropy only creates a potential. There must also be the equivalent of an engine (or conductor) to bridge the potential to observe fast entropy in action. In history, that engine may be produced by a new royal family replacing the prior family, or a group of organized invaders. Sometimes that engine comes along immediately after the end of the prior regime, or sometimes it may take a few hundred years before a new major regime takes root. However, there is a more powerful approach. If you can determine past potential profiles for past regimes, and if they seem consistent or to follow some pattern, then you can create a "boiler" program to literally boil a series of regimes. In that sort of program, potential builds up until it reaches a trigger threshold; then a regime forms, goes through its lifecycle, exhausts the built-up potential and dies. Then the potential starts building up again and eventually another regime forms (a minimal sketch of such a boiler loop appears at the end of this section). If the pattern varies from actual history, it may be possible to identify catastrophes, unexpected events, and interference from more powerful regimes. If you have appropriate software, you can literally reverse-simulate a past regime to determine the past potential at each point of the regime as well as the total quantity of the CCR. Such parameters can sometimes be useful for forecasts. The function that expresses the potential in terms of time (age of regime) is its potential profile.
Series of Russian dynasties
Dynasty Series Involving a Nonrenewable Physical Resource
Below is a series of EDEG models for a series of Spanish dynasties where significant extraction of gold and silver occurs. The presence of tremendous metal extraction may have shortened the life of the Habsburg dynasty. The model for extraction with blasting is based on one data point, and is largely conjecture. The Bourbon dynasty is considered to have ended with the takeover by Francisco Franco, regardless of the present monarchy.
Spanish dynasties 1531-1930s with some metal extraction data
For data, see Gibson, C., Spain in America. Harper and Row, 1966.
Simultaneously Existing Regimes
A Hewett-Hubbert Curve is a fairly robust creature, but it can still be affected by simultaneous or co-existing regimes, or even overwhelmed. Potentials can exist between regimes, such as in the case where one regime has a persistent trade deficit with a co-existing regime. Only something out of a science fiction movie could have eliminated either the Roman Empire or the Chinese Tang dynasty at their heights, for example. 
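The following is a minimal sketch of the "boiler" idea described in the thermodynamic approach above: potential accumulates at a steady rate, a regime forms once a trigger threshold is crossed, the regime draws the potential down over its lifetime and dies, and accumulation then resumes. The recharge rate, threshold and draw rate are invented numbers chosen only so that, in this toy run, a new regime appears roughly every 300 years; they are not calibrated to any historical series.

```ruby
# Toy "boiler" loop: potential builds to a threshold, a regime forms,
# consumes the potential and dies, and the cycle repeats. All numbers
# are invented for illustration.
recharge  = 1.0     # potential gained per year with no active regime
threshold = 200.0   # potential needed to trigger a new regime
draw      = 2.0     # potential consumed per year by an active regime

potential = 0.0
regime    = nil
regimes   = []

(0..1500).each do |year|
  if regime.nil?
    potential += recharge
    regime = { born: year } if potential >= threshold
  else
    potential -= draw
    if potential <= 0.0
      regime[:died] = year
      regimes << regime
      potential = 0.0
      regime = nil
    end
  end
end

regimes.each { |r| puts "regime #{r[:born]}-#{r[:died]} (#{r[:died] - r[:born]} years)" }
```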
The Hewett-Hubbert curves for the very largest human regimes in history will be largely independent of each other. Many smaller regimes are still powerful enough to be fairly robust. However, regimes of small states are highly affected by their neighbors. Likewise, new or dying regimes of larger states lie along portions of their Hubbert curves that are not as robust as the middle portions. Such vulnerable regimes may have Hubbert Curves that are abruptly terminated rather than gradually terminated. The remaining critical resource of the regime must either be considered to have been discarded, or must be consolidated into the Hubbert Curve of a conquering regime. Such consolidation can be handled by superposition.
Dynasties as a Set of Co-Existing Regimes
Series of dynasties can be modeled, considered and plotted as a set of co-existing regimes without regard to interactions between them (see below figure).
Dynasties, England, France and Spain, 1400-1800 CE
Modeling History As A Set of Interacting Regimes
Regimes often interact with each other. Therefore, one regime can impact another. This interaction can become quite complicated, especially for smaller regimes. However, the largest, most durable regimes can be studied, for there is often more data for them and they tend to be somewhat less affected by other regimes, so that the effects are more discernible.
Possible interdependence between Chinese and Russian dynasties
The China Clock
Disclaimer: this example only applies to history preceding 1911. Since that time, China has had new forms of government and has also embraced fossil fuels. So regimes since 1911 will have fundamental characteristics that are different from those of traditional regimes. This same disclaimer could apply to most other contemporary societies as well. To study a system of interacting regimes, it is best to study the greatest series of regimes. China has historically described itself as the central kingdom. Do other historic regimes then circle about China as the planets circle around the Sun in a heliocentric system of astronomy? Does PHE propose a Sinocentric sociology? Yes, but to a limited degree. The mass of the Sun is about a thousand times greater than that of even the largest planet, Jupiter. The social "mass" of China is generally historically larger than that of other societies, but not by nearly such a high proportion, and at times other empires have eclipsed or absorbed China's social "mass." Yet to the extent that there has been any solar equivalent in history, it would be China. Further, the Han people of China have exhibited a series of traditional regimes for a much longer period than any other single society, so it could be argued that China is the closest thing that exists to a historical "clock." Yet perhaps an argument could be made for central Asia being such a clock, since its invasions have frequently affected societies in the continents of Asia, Europe and Africa. What drives the waves of invasions in the history of central Asia? Is it a social cause or the build-up of a resource-driven potential? The answer to this question is not well known. 
TABLE: Major Traditional Regime Series for China
-2000–1500 BCE    Hsia      500 years
-1500–1028 BCE    Shang     472 years
-1028–642* BCE    Chou 1    386 years
-642*–256 BCE     Chou 2    386 years
-202–220 CE       Han       422 years
618–906 CE        Tang      288 years
960–1279 CE       Sung      319 years
1368–1644 CE      Ming      276 years
1644–1912 CE      Manchu    268 years
* The Chou dynasty became essentially symbolic by about 700 BC, and China was chiefly ruled by small states during this symbolic "second" Chou dynasty.
So at times when China is ruled by a regime in the strong part of its lifecycle, does this block the Central Asiatic invaders so that their only outlet is India, the Middle East or Europe? The answer to this question depends upon several factors and changes depending on the state of those factors at a given time. The following table shows several strong traditional regimes in China and corresponding waves of invasions in Europe. This list is not complete, but is suggestive for several regimes.
TABLE: Major Traditional Regime Series for China and Asiatic Invasions in Europe (CE)
618–906      Tang    288 years    Lombards & Avars
960–1279     Sung    319 years    Slavs & Magyars
1368–1644    Ming    276 years    Ottoman Turks
Yet there are exceptions. The Huns, and later the Mongols, overwhelmed both China and much of the West. Conversely, a strong Roman empire might have pushed the Huns eastward before they went westward, for the Huns attacked China in 317, while they did not invade western Europe until the mid 400s. It could be that the coincidence of strong empires in both the East and the West built up potential in central Asia to the point that the Huns became extremely potent. That Russia and China were both relatively strong during the time of the Sung dynasty may have contributed to a build-up of potential in central Asia that helped the Mongols become so powerful. Such speculation should not detract from the achievements of the Mongols, such as their innovative battle tactics. (The author is not as familiar with the history of the Huns, so cannot comment further upon them.)
Colossus is a computer-based simulator that generates models of world history. It utilizes a grid of dynasty-producing regions and interconnections between neighbors. This model focuses on the "old world" of Asia, Europe and Africa before World War I.
History grid (PHP/SVG)
A newer version coded in the D3 Javascript library, with a button for each century:
History grid (D3)
The Colossus simulator itself is written in the Ruby programming language, and results are exported as a CSV file. Sample output is below (a small sketch of this kind of output also appears at the end of this section). Each row is a time period indicated by a year. The left bank of columns represents power. The right bank of columns represents power differences between regions. This is an exploratory simulation. It has many deficiencies.
Colossus simulation output
An early example of graphical representation of output. It is not very meaningful.
Early graphical representation of Colossus results
Geographic Information Systems (GIS) can be used to analyze spatial aspects of societies, as well as their progression and interaction with concurrent societies over time. The Colossus world history grid was superimposed on an image of Europe, Africa and Asia. GIS was utilized to better understand the relations between societies. Spatial connections between adjacent or nearby societies were identified. Each connection was discounted for distance and terrain factors. It is possible to study correlations found in the Colossus model with such factors. 
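Below is a small sketch of the kind of CSV output just described: one row per year, a left bank of per-region power columns and a right bank of power differences between connected neighbors. The region names, the links between them and the stand-in power function are invented for illustration; this is not the actual Colossus code, which, as noted above, is written in Ruby and exports CSV.

```ruby
# Sketch of Colossus-style CSV output (regions, links and the power
# function are invented stand-ins, not the real simulator).
regions = %w[china central_asia russia europe]
links   = [%w[china central_asia], %w[central_asia russia], %w[russia europe]]

power = lambda do |region, year|
  # stand-in for a per-region EDEG model
  1.0 + 0.5 * Math.sin((year + regions.index(region) * 200) / 150.0)
end

header = (["year"] + regions + links.map { |a, b| "#{a}-#{b}" }).join(",")
puts header

(1000..1500).step(100) do |year|
  powers = regions.map { |r| power.call(r, year).round(3) }
  diffs  = links.map { |a, b| (power.call(a, year) - power.call(b, year)).round(3) }
  row    = ([year] + powers + diffs).join(",")
  puts row
end
```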
Satellite image of Asia showing political entities and possible connections (photo layer: credit Google)
Modeling the near future cannot be done perfectly deterministically. It must be done using different approaches for different windows and levels of accuracy. Weather provides a good example for understanding. Forecasters can predict the temperatures and precipitation over the next seven days reasonably well, but not beyond that. However, for the next year, forecasters can predict average temperatures and precipitation type and quantity reasonably well. Meteorologists have also identified patterns that repeat over several years (such as El Nino and La Nina), but they cannot predict the exact timing or strength. It is also possible to predict the general climate by location and for the entire Earth for the next hundred or so years, assuming there are no catastrophic changes such as a large meteor hitting the Earth. There is often much short term "noise". In weather, there might be a local tornado which deflects the local wind without changing the overall mean wind. One should use probabilistic methods when modeling the future to overcome noise effects.
Resistance to globalization
Advancement in automation (robotics, AI)
Consolidation of financial institutions
Monetary shifts caused by trade imbalances
The 1800s colonial order continues to recede
We have looked backwards, in time and space. Let's now look forwards. Looking forwards will use many of the ideas, but the techniques and subject matter will partially differ. Looking backwards involves uncertainty regarding the data, and using probability to estimate that uncertainty. Looking forwards involves applying probability to principles and trends to estimate the likelihood of future events. Humanity has experienced several long-term trends that have transcended multi-century dynastic structures. Will such trends be able to continue? Humanity may be approaching limits to human population on Earth using only Earth resources, although serious technologies are expanding that limit. (See Club of Rome, Limits to Growth). Space resources might allow this limit to be increased, as well as to provide for additional people to live in space. Conversely, humanity is using up nonrenewable resources. If replacements are not found, the human population could decrease.
U.S. Census, World Population: 1950-2050
U.S. Census, World Population Growth Rates: 1950-2050
Hillary Mayell, Human "Footprint" Seen on 83 Percent of Earth's Land, National Geographic News, October 25, 2002.
Meadows et al., Limits to Growth, 1972, involves attempts to better understand these limiting factors.
Modeling The Near Future Of A Single Regime
Modeling an existing regime may provide an indication of the magnitude of fast entropy tendencies upon that regime, especially if it is quite similar to a past regime, or is well advanced in age. Yet, the interdependent nature of today's regimes and the possibility for nuclear or biochemical warfare or catastrophes create greater uncertainties than in the past, so that caveat must always be kept in mind. Further, the existence of "unknown unknowns" must not be forgotten.
General Approaches
There are two major approaches for modeling a future single regime. The first approach is to guess Gaussian or Maxwell-Boltzmann distributions and adjust the constants involved to produce the best fit for the data that you have so far. The ways to do so could fill whole volumes in themselves, and are better covered in mathematical texts devoted to that subject. 
For purposes of this text, adjusting the parameters of the proposed function to provide the best visual fit provides a method that anyone who knows how to use a graphing program can utilize. Although this method is easy to implement, make sure to use the proper units for the constants! If you do not have any data, this method cannot be used. Also, if you only have a small amount of data, or data for only a short time period, be warned that the forecasts could wildly vary from what will actually occur, even if nothing unexpected happens. The second approach is to combine the initial characteristics of the data already obtained with the values of parameters of past regimes. This approach is more intelligent, but equally more complicated. A simple way to do this is described as follows. The first step is to try to create a pure exponential growth function that describes the growth seen in the initial data and identify the parameters in that function. Then plug these parameters into a distribution function, and use parameters from past regimes to fill in the remaining parameters. The past regime should be as similar to the present regime as possible, taking into account location, historical point of time, size of regime, type of resource use, and any other characteristics that seem applicable. If you need to alter any of these parameters to improve the fit, only do so if you have a rational basis for doing so. It is also possible that old values might give better long-term projections than parameters merely altered to improve the short-term fit.
Thermodynamic Approach
The thermodynamic approach is similar to the second approach above. However, an effort should be made to express parameters in terms of a potential and changing efficiency. If the quantity of the most critical conserved resource is known, then the projected total consumption over time should match that quantity. If the present regime uses the same types of resources and does not utilize any major new technologies, then it may be possible to use the potential profile from that past regime.
Modeling The Future As A Series of Regimes
Future Horizon Challenge
Modeling a single existing regime may be fairly reliable. Modeling a series of regimes into the future may be much less reliable. First, all of the reasons that challenge modeling a single present regime apply even more so to modeling a series of regimes. Even worse, human society may face a fundamental change over the next hundred years, if not sooner. This could nearly completely throw off most forecasts. However, once society has made that transition, whatever it may be, then the nature of resource use and technologies of regimes may become more constant over time, so that it may be reasonably reliable to model a series of regimes after that point. If humans are involved and go back to traditional, agricultural technologies, and the same traditional population centers remain, then even the 300 Year Rule and the old potential profiles might be applicable. If robots replace humans in future regimes and they live off of nuclear fusion or some more exotic energy source, then some other potential profile may apply. A similar methodology to that described here may be useful in modeling a future series of regimes, especially if the probable potential and "heat engine" characteristics can be reasonably identified.
Modeling The Future As A Set of Interacting Regimes
Many Simultaneous Regimes
Numerous simultaneous regimes may co-exist. 
Such was the case of the ancient Greek city-states before the time of the wars between Athens and Sparta. Each regime has some freedom of action. However, the collection of those states can often be considered aggregately to form a larger "super" regime.
Interaction Between Two Simultaneous Regimes
A frequent tale in history is the interaction between two simultaneous regimes, often of apparently equal power. There will typically be an oscillatory flow of wealth or military strength back and forth between the two powers as they compete with each other, or there will be an ongoing flow of wealth or military strength from one to the other. The second case represents a potential, and can be modeled by utilizing that potential as a conserved resource.
Last updated on May 1, 2021.
Physical history and economics is based on physics. Since the laws of physics are invariant across time and space, they should be equally applicable to societies across the universe, as well as to humans. Our own galaxy may contain millions of planets. Some of those planets may be capable of hosting intelligent life and societies. The same thermodynamic tendencies as exist on Earth would drive the emergence of such life. Hence such life could be analyzed by the same approaches as used here. Although the driving force may be fast entropy, and the laws of physics are the same, local conditions might be quite different for other planets. It would be interesting to see what different kinds of beings and societies have developed, given their local resources and constraints.
Habitable exoplanets (credit: PHL @ UPR Arecibo)
Physical History and Economics represents a solid foundation for the future development of history and economics. PHE can provide new insight into some of the pressures and influences upon historical societies and the constraints that governed them. PHE can help to provide economists with another means to analyze regimes over their entire lifetime. However, Physical History and Economics is not alone sufficient to develop practical solutions for society. Rather, think of Advanced Social Science as one of two filters. Physical History and Economics can be used to anticipate and also to filter out scientifically impractical solutions. Such solutions are impractical because they run counter to the tendencies of nature and probably will not work. The other filter is human psychology. Even if a solution is scientifically valid, if it cannot be implemented due to the limitations and constraints of human psychology, then it is equally impractical and is doomed to probable failure. A good solution is one that can pass through both filters, that of Physical History and Economics and that of human psychology. Thus, Physical History and Economics, used in conjunction with an understanding of human psychology, can save humanity the trouble and expense of attempting impractical solutions and can provide options that will probably work. That is the goal and dream of Physical History and Economics.
CommonCrawl
American Institute of Mathematical Sciences
November 2016, 36(11): 6557-6580. doi: 10.3934/dcds.2016084
Longtime behavior of the semilinear wave equation with gentle dissipation
Zhijian Yang (1), Zhiming Liu (2) and Na Feng (2)
(1) Department of Mathematics, Zhengzhou University, No. 100 Science Road, Zhengzhou 450001, China
(2) School of Mathematics and Statistics, Zhengzhou University, No. 100 Science Road, Zhengzhou 450001, China
Received December 2015; Revised June 2016; Published August 2016
The paper investigates the well-posedness and longtime dynamics of the semilinear wave equation with gentle dissipation: $u_{tt}-\triangle u+\gamma(-\triangle)^{\alpha} u_{t}+f(u)=g(x)$, with $\alpha\in(0,1/2)$. The main results are concerned with the relationships among the growth exponent $p$ of nonlinearity $f(u)$ and the well-posedness and longtime behavior of solutions of the equation. We show that (i) the well-posedness and longtime dynamics of the equation are of characters of parabolic equations as $1 \leq p < p^* \equiv \frac{N + 4\alpha}{(N-2)^+}$; (ii) the subclass $\mathbb{G}$ of limit solutions has a weak global attractor as $p^* \leq p < p^{**}\equiv \frac{N+2}{N-2}\ (N \geq 3)$.
Keywords: gentle dissipation, exponential attractor, global attractor, well-posedness, semilinear wave equation.
Mathematics Subject Classification: Primary: 35B41, 35B33; Secondary: 35B40, 35B65, 37L3.
Citation: Zhijian Yang, Zhiming Liu, Na Feng. Longtime behavior of the semilinear wave equation with gentle dissipation. Discrete & Continuous Dynamical Systems - A, 2016, 36 (11): 6557-6580. doi: 10.3934/dcds.2016084
CommonCrawl
Which branches of mathematics can be done just in terms of morphisms and composition? Consider the first-order language $L_{\omega\omega}$ of the signature $L:=\{\mathrm{dom}, \mathrm{cod}, \mathrm{comp}\}$, where $\mathrm{dom}$ and $\mathrm{cod}$ are unary function symbols and $\mathrm{comp}$ is a ternary relation symbol. This is intended to be thought of as the language of a single category: $\mathrm{dom}$ resp. $\mathrm{cod}$ are interpreted as functions yielding the domain resp. codomain of a given morphism; $\mathrm{comp}(h, g, f)$ is interpreted as $h=g\circ f$. One can formally write down the axioms of a category (associativity of composition, identity morphisms for composition) as first-order $L$-sentences. If we call the collection of these axioms $T_\text{Cat}$, then an $L$-structure $C$ with $C\models T_\text{Cat}$ is essentially the same as a category. (Okay, one can argue about size issues or whether specific decisions concerning the design of the formal language are natural, for example, whether it would be better to use a two-sorted language with the sorts "objects" and "morphisms" rather than a one-sorted language where everything is a morphism, but let us ignore these issues for now.) Lawvere famously gave an axiomatization $\mathsf{ETCS}\supseteq T_{\mathrm{Cat}}$ of the category of sets in the language $L_{\omega\omega}$ and showed that a great deal of set theory can be carried out in this theory. I think it is quite remarkable that all the usual concepts of set theory (such as elements, the set of natural numbers, and the cartesian product) can be formulated categorically in the language $L_{\omega\omega}$ of morphisms. Here are some links for further reading for people not familiar with $\mathsf{ETCS}$: nLab, Lawvere's original paper, fully formal presentation of ETCS on the nLab, Tom Leinster's "Rethinking set theory". Lawvere also gave an axiomatization $\mathsf{ETCC}$ of the category of categories (nLab). (To me, this theory seems to be not as established as $\mathsf{ETCS}$ and I don't know to what extent this theory can be used to carry out doing category theory.) Question: Is it also possible to axiomatize the category of topological spaces (and continuous maps) in the language $L_{\omega\omega}$? Is it then possible to really carry out some topology in this theory? Also, is it possible to axiomatize the category of groups resp. rings in $L_{\omega\omega}$ and then really do some group resp. ring theory? (You can really interpret my question as a question schema: for each theory, you can ask this question.) This would be interesting, because it would show that one can do topology, group theory, ring theory, ... without presupposing some form of set theory. Also, it would show that one can express all (or a great deal of) the theorems of topology, group theory, ring theory, ... just in terms of morphisms, domain, codomain, and composition. ct.category-theory set-theory lo.logic soft-question foundations $\begingroup$ The question you've asked isn't really what you meant to, I think. ETCS doesn't axiomatize the category of sets fully - there are lots of statements about the category of sets, in the language above, which are independent of ETCS. This is relevant because when you ask "is it possible to axiomatize ---?," it's not clear what you mean by "axiomatize" - if you just mean "write some true statements about," then that's trivially true ($\emptyset$), while if you mean "give a complete axiomatization of" then that's false even for the category of sets. 
(cont'd) $\endgroup$ – Noah Schweber Mar 3 at 19:09 $\begingroup$ Or at least, $(i)$ ETCS doesn't constitute such an axiomatization and $(ii)$ there is no computable axiomatization at all (the category of sets is complicated enough for Godel's incompleteness theorem to apply to its theory). As to "it would show that one can express all (or a great deal of) the theorems of topology, group theory, ring theory, ... just in terms of morphisms, domain, codomain, and composition," this is already well-known and one of the whole points of category theory in the first place. (cont'd) $\endgroup$ – Noah Schweber Mar 3 at 19:12 $\begingroup$ I don't mean "axiomatizable" in the rigorous mathematical sense. I just wonder whether the theory (for example, topology) is axiomatizable in such a way that a great deal of the theory can be done in the axiomatization. Note that my question is a soft question. Basically, I just wonder: if one studies ETCS, why don't consider similar theories for topology, group theory, ring theory, ... $\endgroup$ – user7280899 Mar 3 at 19:13 $\begingroup$ As to groups and rings they can be developed as 1 order theories, without sets (but these theories will not be equivalent to the usual group and ring theories, since everything connected to morphisms will be out of description). And your language seems to be too poor for describing what people are interested in these theories. $\endgroup$ – Sergei Akbarov Mar 3 at 19:13 $\begingroup$ @SergeiAkbarov "your language seems to be too poor for describing what people are interested in these theories" I disagree with this - again, one of the whole points of category theory is that the language is rich enough to talk about a huge amount of the stuff we care about. $\endgroup$ – Noah Schweber Mar 3 at 19:16 I don't really know what you're after but here is an analogue of ETCS for topological spaces Dana I. Schlomiuk, An elementary theory of the category of topological spaces, Trans. Amer. Math. Soc. 149 (1970), 259-278, doi:10.1090/S0002-9947-1970-0258914-7 and here's one for (five different) categories of graphs Demitri Plessas, The Categories of Graphs, PhD thesis, University of Montana (2011) (link) David RobertsDavid Roberts $\begingroup$ Thank you very much! In these papers, they do exactly what I was looking for! I suppose one can do the same with the category of groups, rings, ... $\endgroup$ – user7280899 Mar 4 at 18:11 Not the answer you're looking for? Browse other questions tagged ct.category-theory set-theory lo.logic soft-question foundations or ask your own question. A "mother of all groups"? What kind of structures have "mother of all"s? A potential definition of weak $\omega$-categories Expressive power of first-order category theory An orthogonal factorization system on 1-Cob? Vectorisation of a category Does every Lawvere theory arise in this way? Is there equality between sets in structural set theory? Does equality between sets contradict the philosophy behind structural set theory? 2-natural operations on toposes Formalization and set-theoretic issues in the definition a functor category
CommonCrawl
how to calculate u and d in binomial tree Question Consider a binomial tree model for the stock price process fxn: 0 n 3g. u = exp. Sciences, Culinary Arts and Personal \mu = \frac{1}{T}(\log K - \log S_0). Why my implementation of CRR model does not converge? The volatility of a non-dividend-paying stock whose price is $78, is 30%. This preview shows page 5 - 9 out of 14 pages. Calculate the stock prices after 2 periods. If S is the current price then next period the price will be either Su=S (1+u) or Sd=S (1+d). If you'd like to see and edit the VBA, purchase the unprotected … The first step in pricing options using a binomial model is to create a lattice, or tree, of potential future prices of the underlying asset(s). MathJax reference. Can an Arcane Archer's choose to activate arcane shot after it gets deflected? To calculate the relative changes, we can use 1 + U-eơvõt and 1 + D-e-ơvõt In this process, we can calculate k (0, T)-In (ST/So) (Proposition 2.12 in the text). What should I do when I am demotivated by unprofessionalism that has affected me personally at the workplace? The spreadsheet also calculate the Greeks (Delta, Gamma and Theta). Both types of trees normally produce very similar results. Calculate u, d, and p when a binomial tree is constructed to value an option on a foreign currency. How to avoid overuse of words like "however" and "therefore" in academic writing? A. The first one, the CRR tree, used Does a regular (outlet) fan work for drying the bathroom? The algorithms are written in password-protected VBA . To Calculate The Relative Changes, We Can Use 1 + U = Povst And 1 +Dre-ovot In This Process, We Can Calculate K(0,T) = Ln(ST/S.) $$ Pages 14; Ratings 80% (5) 4 out of 5 people found this document helpful. 2. Is there any solution beside TLS for data-in-transit protection? Students also viewed these Corporate Finance questions. These tree's are used for options pricing, but I won't be going into details about that. A stock price is currently $40. Calculate u, d, and p when a binomial tree is constructed to value an option on a foreign currency. u−d ≤ 1. There has been a huge amount of work on binomial trees in the last 40 years and A scientific reason for why a greedy immortal character realises enough time and resources is enough? Assume every three months, the underlying price can move 20% up or down, giving us u = 1.2, d = 0.8, t = 0.25 and a three-step binomial tree. This description of the binomial tree model is structured as an answer to the following question (similar to one on the examination paper in 2011). We examine a binomial tree model used to model expected future stock prices. Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q&A library. 
- Definition & Examples, Contract Breach Remedies: Reliance & Restitution, Secured Transactions: Examples & Explanations, Contracts that Fall Within the Statute of Frauds, Types of Contract Breach: Partial, Material, & Total, Specific Intent Crimes: Definition & Examples, Mutual Assent & Objective Standard in Contract Law: Definitions & Examples, Statute of Frauds Under the UCC: Definition, Exceptions & Examples, Legal & Equitable Title: Differences & Importance, Parol Evidence Rule: Definition, Examples & Purpose, Statute of Frauds Contracts: Definition & Purpose, Fee Simple Absolute: Definition & Examples, UExcel Business Law: Study Guide & Test Prep, ORELA Business Education: Practice & Study Guide, Business Law for Teachers: Professional Development, English 103: Analyzing and Interpreting Literature, DSST Lifespan Developmental Psychology: Study Guide & Test Prep, DSST Environmental Science: Study Guide & Test Prep, Political Science 101: Intro to Political Science, Psychology 108: Psychology of Adulthood and Aging, Biological and Biomedical Suppose you purchase eight put contracts on... What is a Forward Contract? - Definition & Examples, What is a Swap Contract? Did China's Chang'e 5 land before November 30th 2020? These are the things to do (not using the word steps, to avoid confusion) to calculate option price with a binomial model: Know your inputs (underlying price, strike price, volatility etc.). Calculate u, d, and p when a binomial tree is constructed to value an option on a foreign currency. For a binomial tree, everywhere in Hull and other literature, we have found the formulas for, but for binomial trees based on forward prices, we get a different formula. rev 2020.12.2.38106, Sorry, we no longer support Internet Explorer, The best answers are voted up and rise to the top, Quantitative Finance Stack Exchange works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us, If it is useful to you, please upvote and accept it, Difference in formulas for u & d in Binomial trees, MAINTENANCE WARNING: Possible downtime early morning Dec 2, 4, and 9 UTC…, "Question closed" notifications experiment results and graduation, Reference of using $\mu = \frac{1}{T}(\log K - \log S_0)$ in binomial tree model, How to derive the formula for risk-neutral probability for a Standard Binomial Tree (Forward Tree). 1.035, 0.966, 0.527. Binomial Tree: A graphical representation of possible intrinsic values that an option may take at different nodes or time periods. Here I would like to show how to program the binomial trees in R and how to generate the graph description which an external program like graphviz can turn into a pretty picture.. Why keep the tree? I give a comprehensive survey in my book, More Mathematical Finance. How to Calculate Option Price. Since (1 + u)(1 + d) = (1 + d)(1 + u), the value of the stock is the same whether it rst went up and then down or down and then up. Let the length of each period be hand let the up factor be denoted by u, and the down factor by d. What is the no-arbitrage condition for the binomial tree you are building? The tree step size is one month, the domestic interest rate… Calculate u d and p when a binomial tree is. and $d = 1/u.$ However, you can take any real-world drift and still get the same Use MathJax to format equations. 
This is a quick guide on how to do binomial trees in Excel. Calculate. It is known that... A stock price is currently $50. $$ Can I (a US citizen) travel from Puerto Rico to Miami with just a copy of my passport? What are the values of u, d and p when a binomial tree is constructed to value an option on a foreign currency. What is the difference between "wire" and "bank" transfer? Failing to replicate Wilmott's results for binomial option pricing. This is all you need for building binomial trees and calculating option price. Suppose that a stock price is currently $20 and... You are cautiously bullish on the common stock of... A stock price is currently $30. u = e^{\mu h +\sigma\sqrt{h}}, \text{ and } d = e^{\mu h -\sigma\sqrt{h}} Is there a way to notate the repeat of a larger section that itself has repeats in it? Either the original Cox, Ross & Rubinstein binomial tree can be selected, or the equal probabilities tree. Describe how volatility is captured in the binomial model. The type of option includes a call option and put option. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. The tree step size is one month, the domestic interest rate is 5% per annum, the foreign interest rate is 8% per annum, and the volatility is 12% per annum. Could anyone please provide an explanation of why there is this extra term of $\exp(r-\delta)$ multiplied here? Binomial model is best represented using binomial trees which are diagrams that show option payoff and value at different nodes in the option's life. Under the trinomial method, the underlying stock price is modeled as a recombining tree, where, at each node the price has three possible paths: an up, down and stable or middle path.These values are found by multiplying the value at the current node by the appropriate factor u, d or m, where u = e α 2 Δ t It only takes a minute to sign up. Novel from Star Wars universe where Leia fights Darth Vader and drops him off a cliff. Call Options. Binomial Tree. Services, What Is an Option Contract? This is not yet another tutorial on binomial trees. School The University of Western Australia; Course Title FINA 2204 Type. $$ The tree step size is one month, the domestic interest rate is 5% per annum, the foreign interest rate is 8% per annum, and the volatility is 12% per annum. Approximation of CRR as Black Scholes PDE. We can interpret q as a probability; In a world where the probability of the up movement in the asset price is q, the equation x = e−rt (qP u +(1 −q)P d) says that the price of the derivative is the expected present value of its payoff. d Start: No we create an initial portfolio short 1 share of stock at S 0 and put the proceeds into the bank. Making statements based on opinion; back them up with references or personal experience. If not, why not? D. 1.039, 0.963, 0.530. Why do most Christians eat pork when Deuteronomy says not to? Suppose the initial stock price is $30, u = 1.02, d = 1/1.02 and the probability of an "up" move is 0.7. - Definition & Examples, What are Futures Contracts? - Example & Definition, Working Scholars® Bringing Tuition-Free College to the Community. Define and calculate the delta of a stock option. $$ for any fixed $\mu.$, $\mu = 0$ is a poor choice for convergence. u = exp. Calculate u, d, and p when a binomial tree is constructed to value an option on a foreign currency. Example: Binomial Tree. 
By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. Where did the concept of a (fantasy-style) "dungeon" originate? Thanks for contributing an answer to Quantitative Finance Stack Exchange! Let x0 = 100 and let the price rise or fall by … In this formulation, we are assuming the price movement as a logarithm process. B. It's imperative to note that the tree recombines: udS = duS . type of binomial tree. © copyright 2003-2020 Study.com. \mu = r - d - 0.5\sigma^2 1.035, 0.966, 0.455. The number of time steps is easily varied – convergence is rapid. Numerical Methods. Solution: d How To Eat Fruit Bread, 1/4-20 Toggle Bolt, Diamond Dove Website, Transplanting Spruce Saplings, Mission And Vision Of The Ministry Of Education, Friendly Farms Protein Yogurt, Mixed Berry, Habaneros Menu Near Me, The Keynesian Transmission Mechanism Might Get Blocked If, Employee Engagement Clipart, Single Phase Ac Power Calculation, 2020 how to calculate u and d in binomial tree
CommonCrawl
Using hybridization networks to retrace the evolution of Indo-European languages Matthieu Willems1, Etienne Lord1,2, Louise Laforest1, Gilbert Labelle3, François-Joseph Lapointe2, Anna Maria Di Sciullo4 & Vladimir Makarenkov1 Curious parallels between the processes of species and language evolution have been observed by many researchers. Retracing the evolution of Indo-European (IE) languages remains one of the most intriguing intellectual challenges in historical linguistics. Most of the IE language studies use the traditional phylogenetic tree model to represent the evolution of natural languages, thus not taking into account reticulate evolutionary events, such as language hybridization and word borrowing which can be associated with species hybridization and horizontal gene transfer, respectively. More recently, implicit evolutionary networks, such as split graphs and minimal lateral networks, have been used to account for reticulate evolution in linguistics. Striking parallels existing between the evolution of species and natural languages allowed us to apply three computational biology methods for reconstruction of phylogenetic networks to model the evolution of IE languages. We show how the transfer of methods between the two disciplines can be achieved, making necessary methodological adaptations. Considering basic vocabulary data from the well-known Dyen's lexical database, which contains word forms in 84 IE languages for the meanings of a 200-meaning Swadesh list, we adapt a recently developed computational biology algorithm for building explicit hybridization networks to study the evolution of IE languages and compare our findings to the results provided by the split graph and galled network methods. We conclude that explicit phylogenetic networks can be successfully used to identify donors and recipients of lexical material as well as the degree of influence of each donor language on the corresponding recipient languages. We show that our algorithm is well suited to detect reticulate relationships among languages, and present some historical and linguistic justification for the results obtained. Our findings could be further refined if relevant syntactic, phonological and morphological data could be analyzed along with the available lexical data. Many curious similarities between the processes of species and language evolution have been observed since Darwin's The Descent of Man [1]. But even earlier, in 1863, August Schleicher [2] sent a letter to Ernst Haeckel in which he discussed some of these similarities, comparing, for example, mixed languages to hybridized plants in botany. Atkinson and Gray [3] presented a table that highlights the most important conceptual parallels which can be drawn between these evolutionary phenomena. In particular, the latter study compares the process of social selection in linguistics to natural selection of species, borrowing of words across languages to horizontal transfer of genes, creole languages to plant hybrids, ancient texts to fossils, and cognates to homologies. There are also a few differences between these processes [4]. For instance, the biological alphabet (e.g., DNA) is universal, whereas the set of sounds used to form words is specific to each language. Moreover, the sequence data are usually much longer in molecular biology than in linguistics, and the selection of a perfect list of basic meanings suitable for the application of phylogenetic methods in the context of language evolution remains a challenging task. 
Nevertheless, the similarities and parallels between the two disciplines make it possible for researchers to use several well-developed computational biology methods for studying the evolution of species, and in particular reticulate evolution, in the field of linguistics. Obviously, it's not possible to apply these computational biology methods directly, without an appropriate adaptation, which is critical in interdisciplinary research. Thus, the existing phylogenetic algorithms should be modified and workflows adapted in order to obtain meaningful linguistic results and interpretations. Two nucleotide sequences observed in two distinct species are said to be homologous if they have evolved from a common ancestral sequence [5]. Similarly, in linguistics, a group of cognates is a group of word forms in different languages that have been inherited from a common ancestral word form [6]. The main difference between these concepts is that the concept of homology includes the possibility of lateral transfers, whereas the concept of cognacy excludes all potential processes of borrowing. Cognates and phylogenetic trees play a fundamental role when studying the evolution of natural languages using phylogenetic methods [7]. For instance, a phylogenetic tree representing the main traits of lexical evolution is equivalent to a species phylogeny depicting the key speciation events [3, 7, 8]. Several linguistic studies used phylogenetic methods to better understand the evolution of Indo-European (IE) languages [7–11]. Discovering the origin and main evolutionary trends characterizing the IE language family is one of the most recalcitrant intellectual challenges in historical linguistics [7, 12]. Two opposing theories, Kurgan and Anatolian, concerning early Indo-European origins are generally considered [7]. The Kurgan theory [13, 14] postulates that IE languages originate from the Kurgan culture dated around 3000 to 4000 BC, whereas the Anatolian theory [15] dates the origin of IE languages around 7000 BC. For example, the works of Gray and Atkinson [7] and Bouckaert et al. [9], which focus on inferring and dating the divergence times of the contemporary and extinct IE languages using Bayesian phylogenetic methods, support the Anatolian theory of IE origin. Phylogenetic tree model widely considered in linguistics assumes that the frequency of lateral word exchanges across languages has been relatively low. For example, Gray and Atkinson [7] and Bouckaert et al. [9] removed known loanwords from the basic vocabulary data before inferring their IE language trees. Obviously, linguistic phenomena such as word borrowing [10] and birth and evolution of hybrid languages [16], resulting from languages in contact, cannot be adequately represented by a tree model. For instance, a study of 80,000 words of the old Shorter Oxford Dictionary points out that English, which is a Germanic language, has borrowed 56.5 % of its total lexicon from Old French (Langue d'oïl) and Latin, 5.3 % from Greek, 13.2 % from other languages, and has inherited only 25 % of its current lexicon from its direct ancestor, Old Germanic [17, 18]. In this work, we analyzed basic vocabulary data from a 200-meaning Swadesh list [19]. While the use of this list may lead to a certain decrease in the number of loanwords [20], it remains helpful for detecting the most important word borrowing trends [21]. 
For example, the traditional 200-meaning English Swadesh list includes 33 confirmed loanwords (16.5 %) [22] and 10 additional "irregular phylogenetic patterns" which might be suggestive of unrecognized borrowings [21]. Moreover, in a recent revision of the Albanian Swadesh list, 31.8 % of its entries were identified as probable borrowings [23]. Word borrowing can be viewed as one of the main development mechanisms leading to the emergence of hybrid (i.e., mixed or contact) languages. There exists a variety of hybrid languages, including pidgins, creoles, and lexical hybrids [24]. In a pidgin, the lexicon usually comes from one parent language and the syntax comes from another one. A creole language, which arises from a pidgin, is a stable natural language spoken as a mother tongue. There are, however, many other types of lexical and grammatical transmission that produce a variety of linguistic outcomes. For example, Michif, the language of the Métis people of Canada and the United States, combines noun phrase phonology, lexicon, syntax and morphology from Métis French with verb phrase phonology, lexicon, syntax and morphology from Cree. As our analysis is based on lexical data only, here we address the problem of detecting lexical hybrids and word borrowing events. Clearly, phylogenetic networks, and not phylogenetic trees, should be used to represent hybrid languages and word borrowing events. In fact, some drawbacks of the tree model in historical linguistics were already pointed out by Schmidt [25] in 1872. Nakhleh, Ringe and Warnow [26] were among the first to use directed phylogenetic networks to identify lexical contacts among 24 IE languages. These contacts were represented by bidirectional reticulations, but the donor languages were not clearly distinguished from the recipient languages in the presented "perfect linguistic networks". The study of Nakhleh and colleagues was restricted to the earliest attested languages of 12 subgroups of the IE family. Some other works that address the topic of modeling reticulate evolution in linguistics rely on split graphs [3, 27, 28], minimal lateral networks (MLN) [10, 11, 21], and horizontal word transfer networks (HWTN) [29]. While the MLN and HWTN methods can be applied to detect word borrowing events, split graphs can be used to identify hybrid-like features of certain natural languages. For example, the split graph topology obtained for nine Germanic languages [27] allows one to identify Sranan, a language spoken in Suriname, as a hybrid of English and Dutch. However, split graphs were not specifically designed to detect and explicitly represent network relationships among languages. For instance, they cannot be used to identify explicitly a hybrid language, its parent languages and the corresponding hybridization/reticulation degree (i.e., the percentage of lexical material transferred from each of the parent languages). Nor can split graphs be used to quantify the frequency of word borrowing events. Furthermore, Wichmann and colleagues [30] proposed to infer reticulations from distances computed with the Levenshtein metric [31]. Wang and Minett [32] used maximum parsimony to detect language contacts; the test they designed is based on the distribution of lexical similarities between languages. Köllner and Dellert [33] proposed an ancestral state reconstruction method which is specific to linguistics.
The latter authors used the dissimilarities between a given node and its immediate ancestor in the tree in order to identify potential word borrowing events. In all these methods, the exact source and destination of the detected word borrowings cannot be identified explicitly. Only a few methods offer the advantage of finding the direction of reticulation events in linguistics. Mention here the work of Van der Ark et al. [34], who used the Levenshtein distance [31] to identify the source and the destination of word borrowing events, and that of Delz [35], who applied the horizontal gene transfer algorithm [36] from the T-Rex web server [37, 38] to detect loanwords and the corresponding word borrowings. In this study, we adapt a recently developed computational biology method [39], which was originally designed to detect hybrid species, their parents and the corresponding hybridization degrees, to identify explicitly hybrid languages (i.e., lexical hybrids in this study) and word borrowing events. One of the main advantages of our method over the MLN [10, 11, 21] and perfect networks [26] approaches is that it allows for determining the direction of reticulation events (e.g., word borrowing events) in addition to the quantification of influence of each of the donor languages on the corresponding recipient languages. For a more complete description of the benefits and shortcomings of the MLN approach, the reader is referred to [40–42]. We compare our explicit hybridization networks to the corresponding split graphs [43, 44] and galled networks [45]. Finally, we present some historical evidence that supports the results of our analysis. Several important studies dedicated to the classification of IE languages [7, 8, 10, 29] have examined the data from the 84 IE language database organized by Dyen and colleagues [46]. The Dyen database contains word forms for the meanings of the 200-meaning Swadesh list [19]. This list is one of a few lists of fundamental meanings collected by M. Swadesh in the 1940s and 50s. It is often used in lexicostatistics, which focuses on quantitative evaluation of lexical cognates, and in glottochronology, which focuses on dating divergence times of natural languages. Swadesh lists have been used by linguists to test the level of chronological separation of languages by comparing words, as they contain universal stable items with low levels of borrowing [7, 8]. However, it has been noticed that even though the use of Swadesh lists may decrease the level of borrowings to a certain degree, it cannot exclude all of them [21]. For each of the 200 basic meanings of the Swadesh list, the Dyen database contains their word forms in 84 IE languages. These word forms have been regrouped in cognate sets [46]. Two word forms were identified as cognate if they share an uninterrupted evolutionary history characterized by the presence of a common ancestral form. The word forms resulting from word borrowing (e.g., English word fruit which was borrowed from Old French) and those related by accidental similarity (e.g., the word form bad exists in both English and Farsi, but this is rather considered as an accidental similarity by linguists) were placed in a separate class. When it was difficult to differentiate between cognates and word forms resulting from borrowing or accidental similarities, the corresponding word forms, albeit not numerous, were categorized as doubtful cognates. 
For instance, this database was used by Gray and Atkinson [7] and Atkinson and Gray [47] to infer evolutionary trees of IE languages. In order to reconstruct our hybridization networks, we also considered some additional linguistic resources (Douglas Harper's Online Etymology Dictionary [48], the IE Lexical Cognacy Database (IELex) [49] and the IE etymological dictionaries collection [50]), which include relevant etymological information regarding loanwords and accidental similarities. Using these resources, we modified some of the original cognate sets created by Dyen et al. [46]. Precisely, the loanwords, put aside by Dyen and colleagues, were added to the corresponding cognate sets (i.e., cognate sets containing the donor forms for these loanwords). In some rare cases, the original cognate sets including doubtful cognates were either merged or eliminated. In total, our modified database included 1315 cognate sets. It is available at: http://www.trex.uqam.ca/biolinguistics. Reconstruction of explicit linguistic hybridization networks In [39], we presented a new algorithm for inferring explicit hybridization networks from distance data. This algorithm takes as input a matrix of evolutionary distances between species of size (nxn) and the three following user-defined parameters: minimum and maximum levels of hybridization (the value of these parameters varies between 0 and 1), and minimum hybridization score. The output of this algorithm, based on a famous neighbor-joining (NJ) principle [51], is either a traditional phylogenetic tree with n leaves or a hybridization network with n terminal nodes. It is worth noting that NJ remains by far the most popular distance-based method in phylogenetics, even though in linguistics Bayesian framework is also frequently used [7]. NJ is specifically well suited for the inference of large phylogenies. It takes as input a distance matrix D = {d(i, j)}1 ≤ i,j ≤ n defined on a set of n species (i.e., taxa or languages) and gives as output a phylogenetic tree representing their evolutionary history. NJ starts with a star tree including n leaves, one internal node and n branches. This tree is progressively transformed into an unrooted binary phylogeny with n leaves and 2n-3 branches. The p-th step of NJ consists of selecting and connecting the two most appropriate neighbors among (n − p + 1) candidates. For all of the (n − p + 1)(n − p)/2 tree configurations equivalent to that shown in Fig. 1a, the branch lengths are calculated according to the least-squares criterion. The configuration that minimizes the sum of all branch lengths of the tree is then selected and the two nodes i and j, which are neighbors in this configuration, are connected as shown in Fig. 1a. The nodes i and j are then replaced by the node X (their direct common ancestor; Fig. 1b) and the distance matrix D is updated by computing the new distances d(X, k), from X to each remaining leaf k of the tree, by means of the following formula \( d\left(X,k\right)=\frac{1}{2}\left(d\left(i,k\right)+d\left(j,k\right)\right) \). We used the NJ criterion [51] to infer explicit hybridization networks between species [39] and adapted it here to the identification of hybrids and word borrowings among natural languages. Note that in our networks both terminal and ancestral branches can be involved in hybridization. Obviously, the two parent branches (i.e., languages or groups of languages) are not necessarily neighbors. 
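To make the NJ step concrete, the following sketch (in Python with numpy; all names are ours, and it is not the implementation used in the study) performs a single agglomeration step: it selects the pair to join using the standard Q-criterion form of the NJ selection rule and then updates the distance matrix with the averaging formula d(X, k) = (d(i, k) + d(j, k))/2 quoted above.

```python
import numpy as np

def nj_step(D, labels):
    """One Neighbor-Joining agglomeration step (illustrative sketch only).

    D: symmetric (n x n) array of pairwise distances between languages.
    labels: list of n language names.
    Returns the reduced distance matrix and label list obtained after
    joining the selected pair (i, j) into a new ancestral node X.
    """
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    row_sums = D.sum(axis=1)
    # Q-criterion form of the NJ selection rule: join the pair minimizing Q.
    Q = (n - 2) * D - row_sums[:, None] - row_sums[None, :]
    np.fill_diagonal(Q, np.inf)
    i, j = np.unravel_index(np.argmin(Q), Q.shape)
    # Distance update from the text: d(X, k) = (d(i, k) + d(j, k)) / 2.
    keep = [k for k in range(n) if k not in (i, j)]
    new_row = (D[i, keep] + D[j, keep]) / 2.0
    D_new = np.zeros((len(keep) + 1, len(keep) + 1))
    D_new[:-1, :-1] = D[np.ix_(keep, keep)]
    D_new[-1, :-1] = new_row
    D_new[:-1, -1] = new_row
    labels_new = [labels[k] for k in keep] + ["(%s,%s)" % (labels[i], labels[j])]
    return D_new, labels_new
```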
Each hybrid language (or recipient of lexical material) is explicitly identified along with its parent languages (or donors) and the degree of hybridization (or reticulation) corresponding to each of them. In the case of word borrowing, this degree of hybridization represents the proportion of the relative influence of each of the two donors on the recipient (Fig. 2c). As we will see later, it can also take into account the direct inheritance part of the recipient's lexicon (Fig. 2b, d).

Fig. 1 a Configuration in which languages i and j are selected as neighbors by the NJ algorithm, and b configuration in which language h is identified as a recipient of lexical material from languages i and k by our algorithm for inferring explicit hybridization networks (here, the parameters α and 1-α represent the hybridization (i.e., reticulation) degrees of the donor languages i and k, respectively)

Fig. 2 Three possible network configurations (b–d) arising when our algorithm detects a hybrid, h, which is a neighbour of one of its parents, Nb(h), in the phylogenetic tree (a), e.g., in the IE language phylogeny inferred by Gray and Atkinson (see Fig. 1 in [7]). In configuration b, language h receives the proportion, α, of its lexicon from its closest ancestor in the tree via direct inheritance and the remaining part of its lexicon, (1-α), from a distant parent via word borrowing (e.g., see the case of Penn Dutch in Figs. 4 and 5b). In configuration c, language h is a lexical hybrid of Nb(h) and a distant parent (e.g., see the case of Sranan in Figs. 4 and 5b). In configuration d, language h receives the proportion α (indicated, in this case, in parentheses) of its lexicon from both its closest ancestor via direct inheritance and its neighbour Nb(h) via word borrowing, and the remaining part, (1-α), of its lexicon from a distant parent via word borrowing (e.g., see the case of Old Armenian in Fig. 4)

Here we present some important computational details of our algorithm. We use the following formula to determine the level of hybridization, \( \alpha_{i,j}^h \), for each possible triplet of languages (h, i, j), assuming that h is a hybrid of i and j:

$$ \alpha_{i,j}^h=\frac{\sum_{k\ne i,j,h} X_k\left(Y_k-S_h+S_j\right)}{\sum_{k\ne i,j,h} X_k^2}, \qquad (1) $$

where \( S_l=\frac{\sum_{k\ne i,j,h} d(k,l)}{n-3} \) (for \( l=h \), \( l=i \) or \( l=j \)), \( Y_k=d(k,h)-d(k,j) \) and \( X_k=S_j-S_i+d(k,i)-d(k,j) \). Formula 1 was obtained by minimizing the following least-squares function of α (its minimum is attained at \( \alpha=\alpha_{i,j}^h \)):

$$ LS_{i,j}^h=\sum_{k\ne i,j,h}\left(Y_k-S_h+S_j-\alpha X_k\right)^2. \qquad (2) $$

The hybridization (reticulation) score, \( Sc_{i,j}^h \), is defined as follows for all triplets of languages (h, i, j):

$$ Sc_{i,j}^h=\underset{k\ne i,j,h}{\mathrm{Min}}\left\{d(i,j)+d(k,h)-d(i,h)-d(k,j);\ d(i,j)+d(k,h)-d(j,h)-d(k,i)\right\}. \qquad (3) $$

Formula 3 is related to the four-point condition, which is satisfied in an additive tree (i.e., a phylogenetic tree), but not in a phylogenetic network. We restrict the search for hybrids to the triplets of languages satisfying the following constraints: \( Sc_{i,j}^h \ge MIN_{Sc} \) and \( \alpha_{MIN} \le \alpha_{i,j}^h \le \alpha_{MAX} \), where the parameters \( 0 < \alpha_{MIN} < \alpha_{MAX} < 1 \) and \( MIN_{Sc} \) are selected by the program's user depending on the desired number of hybridization events (see [39] for more details about parameter selection).
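As an illustration of Formulas 1 and 3, the short function below evaluates the hybridization degree and the reticulation score for one candidate triplet directly from a distance matrix. It is a minimal sketch under our own naming conventions (a symmetric numpy array D indexed by integer language indices), not the authors' published program.

```python
import numpy as np

def hybridization_degree_and_score(D, h, i, j):
    """Evaluate Formula 1 (degree alpha) and Formula 3 (score Sc) for the
    candidate triplet (h, i, j), where h is the putative hybrid of i and j.
    D is a symmetric (n x n) numpy array of pairwise distances, n > 4."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    others = [k for k in range(n) if k not in (h, i, j)]

    # S_l = sum over remaining k of d(k, l) / (n - 3), for l in {h, i, j}.
    S_h = D[others, h].sum() / (n - 3)
    S_i = D[others, i].sum() / (n - 3)
    S_j = D[others, j].sum() / (n - 3)

    # Y_k = d(k, h) - d(k, j);  X_k = S_j - S_i + d(k, i) - d(k, j).
    Y = D[others, h] - D[others, j]
    X = S_j - S_i + D[others, i] - D[others, j]

    # Formula 1: least-squares estimate of the hybridization degree.
    alpha = float(np.sum(X * (Y - S_h + S_j)) / np.sum(X * X))

    # Formula 3: four-point-condition-like reticulation score.
    score = min(
        min(D[i, j] + D[k, h] - D[i, h] - D[k, j],
            D[i, j] + D[k, h] - D[j, h] - D[k, i])
        for k in others
    )
    return alpha, score
```

A triplet (h, i, j) is then retained as a candidate reticulation only when the returned score is at least MIN_Sc and the returned degree lies within [α_MIN, α_MAX], as stated above.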
Our network reconstruction algorithm can be defined as follows. First, we determine the languages i and j that should be connected at the current step by the traditional NJ algorithm. Prior to connecting i and j, we identify the language h that is the best candidate for being a hybrid of either i or j (Parent 1 of h) and any other remaining language k (Parent 2 of h; see Fig. 1b). We search for the language \( h_0 \) that maximizes the absolute value of the following function:

$$ \Delta_{i,j}^h=\sum_{k\ne i,j}\left(d(j,h)+d(i,k)-d(j,k)-d(i,h)\right). $$

Note that \( \Delta_{i,j}^h \) equals 0 if i and j are true neighbors in an additive tree. Then, we select the triplet \( (h_0, i_0, k) \), where \( i_0=i \) or \( i_0=j \), that provides the minimum of the least-squares function \( LS_{i,j}^h \) and satisfies the above-mentioned constraints. If \( LS_{i,j}^h < (\Delta_{i,j}^h)^2 \), we consider that \( h_0 \) is a hybrid of \( i_0 \) and k, and remove from the distance matrix the row and the column corresponding to \( h_0 \). Otherwise, we connect the languages i and j as in the conventional NJ algorithm [51]. The time complexity of our network building algorithm is \( O(n^3) \), which is equivalent to the time complexity of NJ. It is important to mention that the hybrid languages identified by our algorithm should not always be interpreted as real lexical hybrids or real mixed languages. In some cases, the detected parent-hybrid relationship may also represent the processes of word borrowing or even inheritance from the closest ancestor in the tree (see Fig. 2). This figure illustrates three possible network configurations which reflect the case where our algorithm detects a hybrid, h, which is a direct neighbour, or a very close neighbour, of one of its parents, Nb(h), in the phylogenetic tree (Fig. 2a). This tree is assumed to be inferred by a traditional tree reconstruction algorithm (e.g., NJ). For instance, language h may receive the proportion, α, of its lexicon either from its closest ancestor in the tree via direct inheritance (Fig. 2b), or from its neighbour Nb(h) in the tree as its lexical hybrid (Fig. 2c), or from both its closest ancestor via inheritance and its neighbour Nb(h) via word borrowing (Fig. 2d; α is indicated in parentheses in this case). We tested several strategies for computing the distance matrix D between the 84 IE languages considered in our study. As Dyen's database [46] does not contain any word form from the Hittite and Tocharian languages, these ancient languages were discarded from our analysis. The first strategy, which provided the most plausible experimental results, used a binary presence-absence matrix of languages over the established cognate sets (1315 cognate sets in total). It is worth noting that our binary encoding concerned language presence-absence data only (e.g., as in [7]). The presence-absence matrix D had 84 rows and 1315 columns. The element (i, j) of this matrix was equal to 1 if a word form of language i was present in cognate set j, and to 0 otherwise. In total, 19.69 % of the data were missing in our database. Missing data were mostly due to the presence of the corresponding word forms in the special "non-cognate" class of Dyen's database; word forms that were neither cognate with any other word form of the given meaning nor related to any word form by way of borrowing were excluded from our database.
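To illustrate this encoding, the sketch below builds such a binary presence-absence matrix from cognate sets and computes the pairwise Hamming distances between languages that are used just below; the input format (a list of sets of language names, one set per cognate set) and all names are our own assumptions, not the layout of the actual database.

```python
import numpy as np

def presence_absence_matrix(languages, cognate_sets):
    """Binary language-by-cognate-set matrix; entry (i, j) is 1 if language i
    has a word form in cognate set j, and 0 otherwise (missing data -> 0).

    languages: list of language names (84 in the study).
    cognate_sets: list of sets of language names, one set per cognate set
    (1315 in the study)."""
    M = np.zeros((len(languages), len(cognate_sets)), dtype=int)
    row = {lang: r for r, lang in enumerate(languages)}
    for c, members in enumerate(cognate_sets):
        for lang in members:
            M[row[lang], c] = 1
    return M

def hamming_distances(M):
    """Pairwise Hamming distances between rows of M, i.e. the number of
    cognate sets containing word forms of only one of the two languages."""
    return (M[:, None, :] != M[None, :, :]).sum(axis=2)
```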
The distance between any pair of languages was then calculated as the Hamming distance between the rows corresponding to these languages in the presence-absence matrix (i.e., it was equal to the number of cognate sets that contained word forms of only one of these languages). Two data encoding strategies were tested. The first, when the missing data were encoded by 0's, and the second, when the Hamming distance between two languages was normalized by the number of meanings for which the corresponding word forms existed in both languages. As these two strategies provided very similar hybridization networks, only the results of the first strategy will be presented. The workflow chart of our method is presented in Fig. 3a, and a simple example of its application is shown in Fig. 3b. Here we consider a dataset with 8 languages, L1, L2, …, L8, 4 meanings and 16 cognate sets (i.e., 4 cognate sets for each meaning). According to the language content in these 16 cognate sets, language L4 can be seen as a hybrid of languages L3 and L5. Language L8 is used as an outgroup. In the first (respectively, second) step of our algorithm, languages L1 and L2 (respectively, L6 and L7) are joined, following the NJ principle. Then, before joining (L1, L2) and L3, language L4 is identified as a hybrid of L3 and L5 with the degree of hybridization, α, equal to 0.5 for both of its parents. Language L4 is then removed from the dataset, and the remaining steps of the algorithm correspond to the steps of traditional NJ. The obtained explicit hybridization network is presented in Fig. 3b. a Workflow chart of the new method for inferring explicit hybridization networks, and b an example of its application to a dataset consisting of 8 languages (including the hybrid language L4), 4 meanings and 16 cognate sets We also conducted the analysis using the Levenshtein distance [31] between words of the same meaning but did not obtain convincing results using such an approach. This should be due to the fact that this distance tends to reflect chance similarity when the compared word forms are not cognate [52]. The Levenshtein distance will be further used for inferring galled networks from word trees, but its application will be restricted to word forms belonging to the same cognate set. We applied our hybridization network inferring algorithm to the entire Hamming distance matrix of 84 IE languages, denoted here by D 84, as well as to its submatrices corresponding to each of the 11 considered IE language groups. In particular, some plausible lexical hybrids and word borrowing donors and recipients were found when the submatrices of the five following language groups were analyzed: Germanic, Latin (including the Italic and French/Iberian groups), Slavic, Sanskrit and Persian (see Figs. 4, 5b, 6b and 7b for the detailed results). Furthermore, the analysis of two submatrices corresponding to the union of the West Germanic and French/Iberian groups and the union of the Celtic and French/Iberian groups also provided very relevant results. We did not find additional reticulations within the other IE groups. We needed a distance matrix of size greater than four to be able to apply our algorithm. It is worth noting that the recovery of hybrid languages and word borrowing events seemed to be more complicated within smaller linguistic groups (i.e., groups with five or six taxa here). Explicit hybridization network given by our algorithm for the group of 84 IE languages originally considered by Dyen et al. [32]. 
The tree topology in this network corresponds to the IE language phylogeny inferred by Gray and Atkinson (see Fig. 1 in [6]). Language groups are indicated on the left. The numbers at the arrows are the reticulation degrees corresponding to each of the donor languages and the numbers at the internal tree nodes are their age estimates Split graph (a), explicit hybridization network (b) and galled network (c), obtained for 8 languages of the West-Germanic group Split graph (a), explicit hybridization network (b) and galled network (c), obtained for 7 languages of the North-Germanic group Split graph (a) and explicit hybridization network (b), obtained for 16 languages of the Latin group The input parameters of our algorithm, MIN Sc , α MIN and α MAX , were chosen according to the size of the considered distance matrix (see [39] for a detailed discussion on the parameter selection). For smaller distance matrices corresponding to particular language groups, the following set of input parameters: (MIN Sc = 0, α MIN = 0.1 and α MAX = 0.9) was used. To avoid an excessive number of false positives, more restrictive parameters: (MIN Sc = 0.1, α MIN = 0.25 and α MAX = 0.75) were used for the entire distance matrix D 84. For the representation of our hybridization networks (Figs. 4, 5b, 6b and 7b), we used the backbone IE phylogenetic tree inferred by Gray and Atkinson (Fig. 1 in [6]), mapping into it the detected lexical hybrids and word borrowing events with their respective reticulation degrees. Our program for inferring explicit hybridization networks is available at: www.info2.uqam.ca/~makarenkov_v/makarenv/hybrids_detection.zip. The data used in our study can be found at: www.trex.uqam.ca/biolinguistics/Biolinguistic_networks_data.zip. Reconstruction of split graph-based linguistic networks The split decomposition method introduced by Bandelt and Dress [43] decomposes the given distance matrix into simple components based on weighted splits (i.e., bipartitions of taxa, species or languages). These splits can then be represented using a split graph, a particular type of phylogenetic network that simultaneously represents both clusters in the data and evolutionary distances between taxa. The Neighbor-Net method introduced by Bryant and Moulton [44] and implemented in the SplitsTree program [53] works in a similar way, but constructs phylogenetic networks that are much more resolved than those given by split decomposition. Split graphs have been widely used in phylogenetic studies to depict phylogenetic relationships between species, but several works have also considered their applications in historical linguistics [3, 27]. We used SplitsTree [53] to infer the split graphs corresponding to the West Germanic, North Germanic and Latin groups of IE languages, with the same submatrices of D 84 as mentioned above. A total of 22 (respectively, 16 and 51) splits were identified for the West Germanic (respectively, North Germanic and Latin) language groups. These split graphs will be compared to our hybridization networks and galled phylogenetic networks inferred for the same groups of languages (Figs. 5, 6 and 7). Figure 8 shows the split graph, with 371 splits, obtained for the entire set of 84 IE languages examined in our study. Split graph obtained for the entire set of 84 IE languages Reconstruction of galled linguistic networks from word trees Several methods have been developed for inferring consensus phylogenetic networks from contradictory sets of two or more phylogenetic trees. 
They include, among others, cluster networks [54], galled networks [45] and level-k networks [55]. A cluster network is a rooted phylogenetic network obtained from a given set of clusters (i.e., set of bipartitions). In such a network, every branch represents exactly one input cluster. A galled network is a rooted phylogenetic network in which each reticulation has a tree cycle. A tree cycle is an undirected cycle consisting of two disjoint tree paths between a tree node and a reticulation node. A level-k network is a rooted phylogenetic network, such that the maximum number of reticulations contained in a biconnected component equals k. A given set of clusters can always be represented by a galled network, but not necessarily by a level-k network [55]. These three methods have been implemented in the Dendroscope software [56]. We conducted our analyses with all of them but present here only the results of the galled network method which provided the "most interpretable" linguistic networks (i.e., networks in which the obtained reticulations correlate the best with known contacts between natural languages). Since for running this method we needed a set of phylogenetic trees, we reconstructed word phylogenies for each of the 200 meanings of the Swadesh list. We used the normalised Levenshtein metric [31], denoted here by d L , to calculate the distances between the cognate word forms of the same meaning; the distance between any two non cognate word forms was set to 1 (see below for more details). The Levenshtein distance between two words is defined as the minimum number of editing operations, consisting of insertions, deletions and substitution of a single letter, necessary to transform one word into the other. This distance was normalized by the maximum length of two words. The Levenshtein distance has been criticized as a poor distance for building language trees because of its reflection of chance similarity when the compared words are not cognate [52]. Our comparative study presented below suggests that this distance can be used for building word trees from cognate data. Several recent linguistic studies argued that accurate comparisons between words should also incorporate likely changes to pronunciation and phonological system [52, 57]. Thus, we decided to compare, in terms of reconstruction word trees and word borrowing events, the normalized Levenshtein distance with the SCA (Sound-Class-based phonetic Alignment) distance recently introduced by List [58]. While the Levenshtein distance applies to orthographic data, the SCA distance is based on the comparison of phonological forms. Note that phonological forms are still not available for many word forms of Dyen's database. Thus, among 42 cognate sets that were found to be suggestive of borrowing into English according to the modified MLN approach [21], we selected the 28 cognate sets (Table 1) for which at least four cognates with available phonological forms were present in the IELex database [49]. Trees with less than four leaves have identical topologies and thus cannot be used to recognize word borrowings [29]. It is important to note that the MLN approach is an automatic approach based on tree topology and the 42 suggestive cases of borrowing recovered by MLN, which include 33 English loanwords identified by Donohue et al. [22], cannot be considered as crystal-clear borrowings. They may comprise some false positives, which can be due to parallel semantic development [21], for example. 
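For concreteness, the normalized Levenshtein distance used above can be computed as follows; this is a generic textbook implementation, not the code used in the study, with the edit distance divided by the length of the longer of the two word forms.

```python
def normalized_levenshtein(a, b):
    """Levenshtein distance (insertions, deletions and single-letter
    substitutions) between two word forms, divided by the length of the
    longer word form."""
    if not a and not b:
        return 0.0
    prev = list(range(len(b) + 1))   # distances for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1] / max(len(a), len(b))

# For instance, normalized_levenshtein("sole", "soleil") = 2/6, since two
# insertions turn Italian "sole" into French "soleil".
```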
Table 1 This table reports the results provided by the word borrowing event detection algorithm [29] applied to the normalized Levenshtein [31] and SCA [58] distance matrices We applied our algorithm for inferring word borrowing events [29] to the word trees obtained with the normalized Levenshtein and the SCA distances (the inferred word tree topologies are available in Additional file 1). The results provided by using these distances can be considered as equivalent. The normalized Levenshtein distance allowed us to identify 23 of 28 suggested borrowings, while the SCA approach was able to detect 22 of them. For instance, the SCA-based algorithm was unable to recover the correct borrowings into English for the words flower, fruit and split (Table 1). The results of this analysis as well as the fact that orthographic cognate data are much more complete than phonological ones are the main reasons that justify the use of the normalized Levenshtein distance for inferring word trees. It is worth noting that one of the most significant differences between language history and biological evolution is that in the case of natural languages our alphabet systems change, while biological sequences change via mutation. Thus, methods using the Levenshtein distance as well as the more historically-oriented SCA distance may have shortcomings, since both of these distances are based on the idea that similarities and differences are due to mutations. For example, the distance between French tête and Latin testa should be 0 in linguistic terms, since the sound change was completely regular. Moreover, some words may contain cognate material, but only in parts. For example, the French word soleil is different from Italian sole, since it stems from a suffixed form of Latin sol, namely Latin soliculus. This case cannot be handled successfully by the Levenshtein and SCA distances, and the use of any of them will lead to the addition of noise to the distance matrix. Borrowings can be seen as mutations in some parts, since they are not produced by regular sound change. Thus, methods based on sequence similarity, like those using the Levenshtein distance, may have advantages in identifying borrowings over methods that seek to ignore regular dissimilarities between words, like those using the SCA distance. Furthermore, the presented method could be modified to account for language-specific distances, which could be measured by other algorithms, as for example, the LexStat algorithm by List [42] or the algorithm proposed by Steiner et al. [59]. For each considered meaning m of the 200-meaning Swadesh list, we denoted by L m the set of languages for which we had at least one word form of m in our database, and by C m the collection of cognate sets available for the meaning m. Let n m be the cardinality of L m . Note that for most of the meanings, the value of n m was lower than 84 since our database, as well as its original version created by Dyen, had some missing word forms for almost all the meanings. Mention that in some, rather rare, cases multiple word forms of the same language existed for a given meaning m. 
For each meaning m, a distance matrix D m of size n m was computed by applying the following formula to each pair of languages, l 1 and l 2, in L m : $$ {d}_m\left({l}_1,{l}_2\right)\kern0.5em =\frac{{\displaystyle \sum_{c\in {C}_m}{d}_c\left({l}_1,{l}_2\right)}}{n_{l_1,{l}_2}}, $$ where d c (l 1 ,l 2) was equal to 0 if neither word forms of l 1 nor those of l 2 were present in c; d c (l 1 ,l 2) was equal to 1 if word forms of either only l 1 or only l 2 were present in c; and, it was equal to the minimum value of d L (i,j), over all cognates i representing l 1 and all cognates j representing l 2 in c if word forms of both l 1 and l 2 were present in c. The integer \( {n}_{l_1,{l}_2} \) was the number of cognate sets of the meaning m that included at least one word form of either l 1 or l 2. Thus, we obtained 200 distance matrices D m of different sizes. For each such a matrix, we then inferred the corresponding unrooted word phylogeny T m using the NJ algorithm [51]. The obtained word phylogenies were given as input to the galled network algorithm [45]. Since these word trees did not contain the same sets of languages (i.e., tree leaves), we used the Z-closure method, available in Dendroscope 3 [56], to merge partial data [60]. Figures 5c, 6c and 9 present the most plausible networks provided by the galled network algorithm [45]. First, we inferred networks from the trees restricted to the languages of the West Germanic (Fig. 5c) and North Germanic (Fig. 6c) groups. The trees including at least four Germanic languages (West or North) were analyzed. Here we considered splits that were present in at least 30 % of the input trees. In the case of the West Germanic group, we examined 190 input trees with 207 input splits, 237 splits after Z-closure, and 76 remaining splits after the removal of partial splits. The consensus galled network obtained for the West Germanic group (Fig. 5c) contains 20 splits and 3 putative lexical recipients (i.e., Frisian, Flemish and Pennsylvania Dutch). In the case of the North Germanic group, we examined 188 input trees with 109 input splits, 112 splits after Z-closure, and 49 remaining splits after the removal of partial splits. The consensus galled network for the North Germanic group (Fig. 6c) contains 12 splits and 1 putative lexical recipient (i.e., Icelandic ST). We also inferred a galled network for a total of 84 IE languages. In this case, we considered only the splits that were present in at least 75 % of the input trees to avoid false positive reticulations. Figure 9 illustrates a sub-network of 12 IE languages that contains all the reticulations identified in the complete galled network of 84 languages. Here we considered 200 input trees with 6,176 input splits, 11,299 splits after Z-closure and 5,124 remaining splits after the removal of partial splits to obtain a consensus galled network with 101 splits and 3 putative recipient languages (i.e., Armenian List, Armenian Mod and Ossetic). The presented network correctly identifies the influence of the languages of the Iranian group and that of Ancient Greek on Armenian, but also includes false positive reticulations reflecting, for example, the influence of Frisian on Armenian. Partial galled network obtained for 12 IE languages. This is a maximum sub-network that includes reticulations of the complete galled network built for the entire set of 84 IE languages Hybrid languages emerge in a few generations as a new means of communication between two (or more) populations not sharing a common language. 
In many cases, e.g., when we found that Old Armenian is a lexical hybrid of Old Persian and Old Greek, we should interpret the results of our algorithm as the identification of the influence, e.g., cultural, political or military, which the two parent languages (i.e., the donors) had on their lexical hybrid (i.e., the recipient), at possibly different periods of time, and which could last over several centuries. As known from evolutionary biology, the position of hybrid species in a phylogenetic tree or network is often uncertain [61]. Furthermore, some of the hybrids added to the data can influence the position of their parents when a phylogenetic tree or network is inferred. Often a hybrid is placed as a direct neighbor of one of its parents in a phylogenetic tree or network, and the parents' location may change when this hybrid is removed from the data set. Thus, some of the results presented in this section were obtained after rerunning our algorithm on the distance matrices from which the detected lexical hybrids, identified at the first run of this algorithm, were removed. Here we present the most important reticulation events characterizing the evolution of IE languages which were identified by the three competing algorithms for inferring split graphs, galled networks and our explicit hybridization networks, respectively. The related historical facts and justifications are also discussed. Since only lexical data were considered in our study, the presented phylogenetic networks represent interactions between languages which are mainly based on lexical borrowings. They do not account for other language interactions, such as contact-induced syntactic restructuring, for example. Network relationships within the Germanic group We carried out our algorithm independently for the languages of the West Germanic group, the North Germanic group, and finally, the entire Germanic group. Four putative lexical hybrids were discovered in this analysis (Figs. 5b and 6b): Pennsylvania Dutch as a recipient of lexical material from English (by word borrowing) and German (by inheritance): Pennsylvania German or Pennsylvania Dutch (Penn Dutch) is a variant of German developed by the descendants of German, French (from Alsace and Lorraine) and Swiss emigrants to the East Coast of the United-States [62]. These migrants settled in the Unites-States in the 17th and 18th centuries. Pennsylvania Dutch borrowed many words from English, particularly in the 19th century. Frisian as a recipient of lexical material from Old English (by word borrowing) and the ancestor of Flemish, Afrikaans and Dutch (by word borrowing, but the inheritance from a close common ancestor is also possible here): The Frisian dialects are spoken in the northern parts of the Netherlands and Germany [63]. They are the closest living languages to English, after Scots. Due to the long lasting influence of Old Dutch (since the Middle Ages), Frisian is now more similar to Dutch than to English (see a greater reticulation degree obtained for Old Dutch than for Old English in Fig. 5b, i.e., 0.71 vs. 0.29). Sranan as a recipient of lexical material from English (by word borrowing) and Old Dutch (by word borrowing): Sranan is an English-based creole language spoken in Suriname [64]. After the invasion of Suriname by the Dutch in 1667, Sranan's vocabulary was greatly influenced by Dutch. Sranan also borrowed some Portuguese and African words. 
Riksmål as a recipient of lexical material from Danish (by word borrowing) and Icelandic (by word borrowing, but the inheritance from a close common ancestor is also possible here): Historically, the North Germanic languages were divided into three main branches: East Scandinavian (Danish and Swedish), West Scandinavian (Icelandic, Faroese and Norwegian) and Old Gutnish [65]. Riksmål (or Bokmål) is now the most widely-used written standard of contemporary Norwegian. It was strongly influenced by Danish, because of the political domination of Denmark over Norway during several centuries. Nowadays, Riksmål is closer to Danish than to Icelandic and Faroese (see the corresponding reticulation degrees in Fig. 6b). The following common features can be observed when comparing the explicit networks provided by our algorithm to those given by the split graphs (Figs. 5a and 6a) and galled networks (Figs. 5c and 6c) methods. In the case of the North Germanic group, the split graph (Fig. 6a) allows us to identify Riksmål as a potential lexical hybrid of Danish and the ancestor of Icelandic and Faroese. Very similar reticulations were found by our method (Fig. 6b). However, the split graph does not yield any quantitative estimation of the influence of donor languages on recipient languages. In the case of the West Germanic group, the identification of network relationships in the split graph is more sophisticated (Fig. 5a). For example, we could implicitly identify in this graph the same lexical recipients as in our explicit network, but we could also see German as a recipient of lexical material from (Flemish, Afrikaans and Dutch) and Pennsylvania Dutch, or the ancestor of (Flemish, Afrikaans and Dutch) as a lexical recipient of German and Frisian. The galled network method yielded more explicit linguistic networks than split graphs. However, the galled network obtained for the North Germanic languages (Fig. 6c) incorrectly identifies Icelandic as a recipient of lexical material from Danish and the ancestor of (Faroese and Riksmål). For the West Germanic group (Fig. 5c), the reconstructed galled network was able to depict two correct recipients languages: Frisian and Pennsylvania Dutch. Nevertheless, Flemish was wrongly identified as a recipient of lexical material from Dutch and Afrikaans, and Sranan was not detected as a lexical hybrid but rather as one of the donors of Frisian. Network relationships within the Latin group Only two possible lexical hybrids were identified by our algorithm in the Latin group (including Italian and French/Iberian subgroups; Fig. 7b): Catalan as a recipient of lexical material from the ancestor of Spanish, Portuguese and Brazilian (by word borrowing, but the inheritance from a close common ancestor is also possible here) and from Old French (by word borrowing), and Provençal as a recipient of lexical material from Catalan (by word borrowing) and Old French (by word borrowing, but the inheritance from a close common ancestor is also possible here). The detected reticulation events reflect the history of the Occitan language, which is a Romance language spoken in Southern France, Northern Italy and Eastern Spain [66]. There have been many interactions between Occitan and French since the Middle Ages. For instance, "Langue d'Oïl", from which evolved the modern French, was spoken in the North, and "Langue d'Oc", the ancestor of Occitan, was spoken in the South. Catalan, which is the closest relative of Occitan, is sometimes considered as one of its dialects [66, 67]. 
After the union of Aragon and Castile in 1479, the influence of the Iberian languages, in particular that of Spanish, on Catalan became more noticeable. Provençal is a dialect of Occitan spoken in Southern France [66]. The split graph obtained for the entire Latin group (Fig. 7a) represents a highly implicit linguistic network, which is not easy to interpret. For example, we could identify here Provençal as a lexical recipient with donors Catalan and the ancestor of French, Walloon and French Creole, as well as Italian as a lexical recipient with donors Ladin and Sardinian. No interpretable galled network has been obtained for the Latin language group. Network relationships within the Slavic group Here we identified Lusatian as a lexical hybrid of Polish and Czech (both by word borrowing). The Sorbian (or Lusatian) languages are Slavic languages spoken in North East Germany [68]. These languages have been strongly influenced by Czech and Polish, since Lusatia is located at the border between Germany, the Czech Republic and Poland. Network relationships within the Persian and Sanskrit groups Here we identified three possible lexical hybrids in two different program runs, i.e., one run for each of these groups: Wakhi as a recipient of lexical material from Tadzik (by word borrowing) and Ossetic (by word borrowing, but the inheritance from a close common ancestor is also possible here). Wakhi is an Iranian language spoken in Pamir, a mountain region between Pakistan, Afghanistan, China and Tajikistan. For the small nations of Pamir the language of oral and written communication is Tadzik. Moreover, the Wakhi oral tradition is bilingual (Wakhi and Tadzik), and most Wakhs speak Tadzik quite fluently [69]. Ancestor of Nepali and Khaskura as a recipient of lexical material from Hindi (by word borrowing) and Kashmiri (by word borrowing). Nepali and Khaskura are spoken mainly in Nepal, India and Bhutan. They share about 80 % of their lexicon with Hindi [70]. Ancestor of Lahnda and Panjabi as a recipient of lexical material from Hindi (by word borrowing, but the inheritance from a close common ancestor is also possible here) and Romani (by word borrowing). Lahnda and Panjabi are the languages spoken in Pakistan and India [71]. The Romani migrated from Northern India to Europe between the 6th and 11th centuries [72]. They had numerous interactions with Northern Indian, Iranian and European languages during their migrations. Network relationships within the Celtic and French/Iberian groups We applied our algorithm to the union of the Celtic and French/Iberian groups excluding from our analysis the lexical hybrids that we had already identified when examining the Latin group alone, i.e., Catalan and Provençal. This way, we found that the Breton subgroup was a recipient of lexical material from Old Welsh (by word borrowing, but the inheritance from a close common ancestor is also possible here) and Old French (by word borrowing). The former reticulation shows a close etymological relationship between Welsh and Breton, whereas the latter accounts for the important number of words that Breton borrowed from Old French, namely in the 15th and 16th centuries [73]. Network relationships within the West Germanic and French/Iberian groups We also applied our algorithm to the union of the West Germanic and French/Iberian groups ruling out the lexical hybrids we had already detected in these groups, i.e., Catalan, Provençal, Sranan, Pennsylvania Dutch and Frisian. 
This allowed us to identify English as a recipient of lexical material from the Old French (by word borrowing) and Old Dutch (by word borrowing, but the inheritance from a close common ancestor is also possible here) subgroups. Mention that these two reticulations do not exclude the direct inheritance of Old English from the Anglo-Frisian and North Germanic dialects originally spoken by Germanic tribes, traditionally known as the Angles, Saxons and Jutes [74]. Moreover, the relationship between Dutch and English originates in Old Saxon, which was spoken in North West Germany and in the Netherlands by Saxon peoples. Old Saxon was closely related to both Old English and Old Dutch [75]. After the Norman conquest of England in the 11th century, many French words were borrowed by Middle English. Furthermore, English was replaced as the language of the upper classes by Anglo-Norman, a relative of Old French, and Old English developed into the next historical form of English, known as the Middle English language [74]. Network relationships between IE language groups In our final analysis, we removed from our data set the 12 lexical hybrids already identified in the original set of 84 IE languages, thus obtaining a reduced distance matrix D 72 of size (72×72). We applied our algorithm to this reduced matrix and limited the search of recipient and donor languages to the ancestor branches of the 11 main IE language groups (Armenian, Albanian, Baltic, Celtic, Greek, Latin, North Germanic, Persian, Sanskrit, Slavic and West Germanic). First, we identified the Armenian group as a recipient of lexical material from the Albanian and Persian groups, and, second, the Albanian group as a recipient of lexical material from the Sanskrit and Latin groups. Since the reticulation (hybridization) score, which reflect the likelihood of a reticulation event (see Formula 3), of Albanian was much higher than that of Armenian, we applied our algorithm once again after removing from the distance matrix the data corresponding to the five languages of the Albanian group. It is worth noting that the position of the Albanian group in the IE language tree has been found to be unstable by many authors [7–9, 26]. The following application of our method to the reduced distance matrix D 67 of size (67×67) allowed us to identify Old Armenian as a recipient of lexical material from Old Persian and Old Greek (Fig. 4). A similar network pattern was found by the galled network method (Fig. 9). Thus, we could identify here: - Old Albanian as a recipient of lexical material from Sanskrit (by word borrowing, but the inheritance from a close common ancestor is also possible here) and Latin (by word borrowing). Albanian borrowed many words from Latin, in particular between the 2nd century B.C. and the 5th century A.D. [76]. The Albanian group is also a close relative of the union of the Sanskrit and Persian in the IE language tree (see for example Fig. 1 in [7]). - Old Armenian as a recipient of lexical material from Old Greek (by word borrowing, but the inheritance from a close common ancestor is also possible here) and Old Persian (by word borrowing). The Armenians stayed under Persian rule for long periods of time from the 5th century BC to the 19th century AC and the Armenian language includes a large number of Iranian loanwords in its vocabulary [77]. Moreover, the well-known "Graeco-Armenian" hypothesis postulates that Armenian is the closest relative of Greek [78]. 
The application of computational biology methods presented here in the context of historical linguistic can be viewed as a step towards a better understanding of the evolution of natural languages [79–82]. In this paper, we adapted a recently developed bioinformatics method for inferring explicit hybridization networks [39] to identify reticulate relationships between languages. We also showed how the well-known split graph [43, 44] and galled network [45] algorithms can be applied to analyze linguistic data. While all the three competing methods can be used to reconstruct evolutionary relationships between natural languages, our method has the important advantage of identifying these relationships explicitly. It also allows one to establish the extent of influence of each of the donor languages on the corresponding recipient languages through the computation of the reticulation degree parameter. Some recent studies have used syntactic distances to infer phylogenies of IE languages [83, 84]. Syntactic parameters reveal complementary relationships between languages which are often not reflected by lexicon [83]. This type of syntactic distances could be further used to refine the inference of linguistic networks along with plausible phonological and morphological data. It would be also interesting to extend our method to infer the exact timing of the obtained reticulation events. This will allow us to discover new historical events that have shaped the evolution of natural languages. Darwin C. The descent of man. London: Murray; 1871. Schleicher A. Die darwinsche Theorie und die Sprachwissenschaft. Weimar: Hermann Böhlau; 1863. Atkinson QD, Gray RD. Curious parallels and curious connections–Phylogenetic thinking in biology and historical linguistics. Syst Biol. 2005;54(4):513–26. Geisler H, List JM. Do languages grow on trees? The tree metaphor in the history of linguistics. In: Fangerau H, Geisler H, Halling T, Martin W, editors. Classification and evolution in biology, linguistics and the history of science. concepts – methods – visualization. Stuttgart: Franz Steiner Verlag; 2013. p. 111–24. Fitch WM. Homology: a personal view on some of the problems. Trends Genet. 2000;16(5):227–31. Trask RL. The dictionary of historical and comparative linguistics. Edinburgh: Edinburgh University Press; 2000. Gray RD, Atkinson QD. Language-tree divergence times support the Anatolian theory of Indo-European origin. Nature. 2003;426(6965):435–9. Rexová K, Frynta D, Zrzavý J. Cladistic analysis of languages: Indo-European classification based on lexicostatistical data. Cladistics. 2003;19(2):120–7. Bouckaert R, Lemey P, Dunn M, Greenhill S, Alekseyenko A, Drummond A, et al. Mapping the origins and expansion of the Indo-European language family. Science. 2012;337(6097):957–60. Nelson-Sathi S, List J-M, Geisler H, Fangerau H, Gray RD, Martin W, et al. Networks uncover hidden lexical borrowing in Indo-European language evolution. Proc Roy Soc B. 2011;278(1713):1794–803. Nelson-Sathi S, Popa O, List JM, Geisler H, Martin WF, Dagan T. Reconstructing the lateral component of language history and genome evolution using network approaches. In: Fangerau H, Geisler H, Halling T, Martin W, editors. Classification and evolution in biology, linguistics and the history of science. Concepts - methods – visualization. Stuttgart: Steiner; 2013. p. 163–80. Diamond J, Bellwood P. Farmers and their languages: The first expansions. Science. 2003;300(5619):597–603. Gimbutas M. Old Europe c. 
7000–3500 B.C.: The earliest European civilization before the infiltration of the Indo-European peoples. JIES. 1973;1(1):1–20. Gimbutas M. The beginning of the bronze age in Europe and the Indo-Europeans: 3500–2500 B. C. JIES. 1973;1(2):163–214. Renfrew C. Archaeology and language: the puzzle of Indo-European origins. London: J. Cape; 1988. Thomason S, Kaufman T. Language contact, creolization, and genetic linguistics. Oakland: University of California Press; 1988. Finkenstaedt T, Wolff D. Ordered profusion; studies in dictionaries and the English lexicon. Heidelberg: Carl Winter; 1973. Pagel M. Maximum likelihood models for glottochronology and for reconstructing linguistic phylogenies. In: Time depth in historical linguistics. Cambridge: The McDonald Institute for Archaeological Research; 2000. p. 189–207. Swadesh M. Lexico-statistic dating of prehistoric ethnic contacts: with special reference to North American Indians and Eskimos. Proc Amer Phil Soc. 1952;96(4):452–63. Bowern C, Epps P, Gray R, Hill J, Hunley K, McConvell P, et al. Does lateral transmission obscure inheritance in hunter-gatherer languages? PLoS One. 2011;6(9), e25195. List J-M, Nelson-Sathi S, Geisler H, Martin W. Networks of lexical borrowing and lateral gene transfer in language and genome evolution. Bioessays. 2014;36(2):32–51. Donohue M, Denham T, Oppenheimer S. New methodologies for historical linguistics? Calibrating a lexicon-based methodology for diffusion vs. subgrouping. Diachronica. 2012;29(4):505–22. Holm HJ. "Swadesh lists" of Albanian revisited and consequences for its position in the Indo-European languages. J Indo-Eur Stud. 2011;39(1):43–99. Vellupilai V. Pidgins, creoles and mixed languages. Amsterdam: John Benjamins; 2015. Schmidt J. Die Verwantschaftsverhältnisse der indogermanischen Sprachen. Germany: Hermann Böhlau; 1872. Nakhleh L, Ringe D, Warnow T. Perfect phylogenetic networks: A new Methodology for reconstructing the evolutionary history of natural languages. Language. 2005;81(2):382–420. Bryant D, Filimon F, Gray R. Untangling our past: Languages, trees, splits and networks. In: Mace R, Holden S, Shennan S, editors. The evolution of cultural diversity: a phylogenetic approach. Walnut Creek: Left Coast Press; 2005. p. 69–85. Heggarty P, Maguire W, McMahon A. Splits or waves? Trees or webs? How divergence measures and network analysis can unravel language histories. Phil Trans R Soc B. 2010;365(1559):3829–43. Boc A, Di Sciullo AM, Makarenkov V. Classification of the Indo-European languages using a phylogenetic network approach. In: Locarek-Junge H, Weihs C, editors. Classification as a Tool for Research. Berlin Heidelberg: Springer; 2010. p. 647–55. Wichmann S, Holman EW, Rama T, Walker RS. Correlates of reticulation in linguistic phylogenies. Lang Dyn Change. 2011;1(2):205–40. Levenshtein VI. Binary codes capable of correcting deletions, insertions and reversals. Sov Phys Dokl. 1966;10(8):707–10. Wang WS-Y, Minett JW. Vertical and horizontal transmission in language evolution. Trans Phil Soc. 2005;103(2):121–46. Köllner M, Dellert J. Ancestral state reconstruction and loanword detection. In: Proceedings of the leiden workshop on capturing phylogenetic algorithms for linguistics. Tübingen: Eberhard Karls Universität, online publication system; 2016. Van der Ark R, Mennecier P, Nerbonne J, Manni F. Preliminary identification of language groups and loan words in Central Asia. In: Osenova P, Hinrichs E, Nerbonne J, editors. Proceedings of the RANLP Workshop on Computational Phonology. 
Borovetz: RANLP; 2007. p. 13–20. Delz M. A theoretical approach to automatic loanword detection (Master thesis). Tübingen: Eberhard Karls Universität; 2013. Boc A, Makarenkov V. New efficient algorithm for detection of horizontal gene transfer events. In: Benson G, Page R, editors. Proceedings of the 3rd Workshop on Algorithms in Bioinformatics, volume 2812 of Lecture Notes in Bioinformatics. Berlin: Springer; 2003. p. 190–201. Makarenkov V. T-REX: reconstructing and visualizing phylogenetic trees and reticulation networks. Bioinformatics. 2001;17(7):664–8. Boc A, Diallo AB, Makarenkov V. T-REX: a web server for inferring, validating and visualizing phylogenetic trees and networks. Nucleic Acids Res. 2012;40(W1):W573–9. Willems M, Tahiri N, Makarenkov V. A new efficient algorithm for inferring explicit hybridization networks following the Neighbor-Joining principle. J Bioinform Comput Biol. 2014;12(5):1450024. List JM, Nelson-Sathi S, Martin W, Geisler H. Using phylogenetic networks to model Chinese dialect history. Lang Dyn Change. 2014;4(2):222–52. List J-M. Network perspectives on Chinese dialect history. Bull Chin Ling. 2015;8(1):42–67. List J-M. Sequence comparison in historical linguistics. Düsseldorf: Düsseldorf University Press; 2014. Bandelt HJ, Dress AWM. A canonical decomposition theory for metrics on a finite set. Adv Math. 1992;92(1):47–105. Bryant D, Moulton V. NeighborNet: an agglomerative algorithm for the construction of planar phylogenetic networks. Mol Biol Evol. 2004;21(2):255–65. Huson DH, Rupp R, Berry V, Gambette P, Paul C. Computing galled networks from real data. Bioinformatics. 2009;25(12):i85–93. Dyen I, Kruskal J, Black P. An Indo-European classification: a lexicostatistical experiment. Trans Amer Phil Soc. 1992;82(5):1–132. Atkinson QD, Gray RD. How old is the Indo-European language family? Illumination or more moths to the flame? In: Forster P, Renfrew C, editors. Phylogenetic methods and the prehistory of languages Cambridge. UK: The McDonald Institute for Archaeological Research; 2006. p. 91–109. Douglas Harper's Online Etymology Dictionary. http://www.etymonline.com. Accessed 14 Mar 2016. The Indo-European Lexical Cognacy Database (IELex). http://ielex.mpi.nl. Accessed 14 Mar 2016. Lubotsky A. IE Etymological Dictionaries Project (Leiden Indo-European Etymological Dictionary Series). http://dictionaries.brillonline.com. Accessed 14 Mar 2016. Saitou N, Nei M. The neighbor-joining method. A new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987;4(4):406–25. Greenhill S. Levenshtein distances fail to identify language relationships accurately. Comp Ling. 2011;37:689–98. Huson DH, Bryant D. Application of phylogenetic networks in evolutionary studies. Mol Biol Evol. 2006;23(2):254–67. Huson DH, Rupp R. Summarizing multiple gene trees using cluster networks. In: Crandall A, Lagergren J, editors. Algorithms in Bioinformatics, volume 5251 of Lecture Notes in Computer Science. Berlin Heidelberg: Springer; 2008. p. 296–305. Van Iersel L, Kelk S, Rupp R, Huson D. Phylogenetic networks do not need to be complex: using fewer reticulations to represent conflicting clusters. Bioinformatics. 2010;26(12):i124–31. Huson DH, Scornavacca C. Dendroscope 3: An interactive tool for rooted phylogenetic trees and networks. Syst Biol. 2012;61(6):1061–7. Atkinson QD. The descent of words. Proc Natl Acad Sci U S A. 2013;110(11):4159–60. List J-M. SCA: Phonetic Alignment based on sound classes. In: Lassiter D, Slavkovik M, editors. 
New directions in logic, language, and computation, volume 7415 of Lecture Notes in Computer Science. Berlin Heidelberg: Springer; 2012. p. 32–51. Steiner L, Stadler PF, Cysouw M. A pipeline for computational historical linguistics. Lang Dyn Change. 2011;1(1):89–127. Huson DH, Dezulian T, Kloepper T, Steel MA. Phylogenetic super-networks from partial trees. IEEE/ACM Trans Comput Biol Bioinf. 2004;1(4):151–8. Legendre P, Makarenkov V. Reconstruction of biogeographic and evolutionary networks using reticulograms. Syst Biol. 2002;51(2):199–216. Buffington AF, Preston AB. A Pennsylvania German grammar. Revth ed. Allentown: Schlecter's; 1965. Rolf Jr HB. An introduction to Old Frisian. History, grammar, reader, glossary. Amsterdam: John Benjamins; 2009. Carlin E, Arends J. Atlas of the languages of Suriname. Leiden: KITLV Press; 2002. Bandle O, editor. The Nordic Languages: an international handbook of the history of the North Germanic languages. Berlin: Walter de Gruyter; 2005. Pierre B. La langue occitane. 3rd ed. Paris: PUF, coll. Que sais-je ? 1973. Smith N, Bergin TG. An old Provençal primer. New York: Garland; 1984. Vogt T, Geis T. Wort für Wort. Beilefeld: Reise Know-How; 2007. Kolga M. The red book of the peoples of the Russian Empire. Tallinn: NGO Red Book; 2001. Hodgson BH. Essays on the languages, literature, and religion of Nepal and Tibet: together with further papers on the geography, ethnology, and commerce of those countries. London: Trübner & Company; 1874. Kachru BB, Kachru Y, Sridhar SN. Language in South Asia. Cambridge: Cambridge University Press; 2008. Kenrick D. Historical dictionary of the Gypsies (Romanies). 2nd ed. Lanham: Scarecrow Press; 2007. Piette JRF. French loanwords in Middle Breton. Cardiff: University of Wales Press; 1973. Baugh AC, Cable T. A history of the English language. 5th ed. London: Routledge; 2002. Robinson OW. Old English and its closest relatives. Stanford: Stanford University Press; 1947. Bonnet G. Les mots latins de l'albanais. Paris: L'Harmattan; 1998. Bournoutian GA. A concise history of the Armenian people: (From ancient times to the present). 6th ed. Costa Mesa: Mazda Publishers; 2012. Clackson J. The linguistic relationship between Armenian and Greek. Oxford: Philological Society; 1994. Lightfoot D. Principles of diachronic syntax. Cambridge: Cambridge University Press; 1979. Lightfoot D. How new languages emerge. Cambridge: Cambridge University Press; 2006. Roberts I. Diachronic syntax. Oxford: Oxford University Press; 2007. Di Sciullo AM. A biolinguistic approach to variation. In: Di Sciullo AM, Boeckx C, editors. The biolinguistic entreprise: new perspectives on the evolution and nature of the human language faculty. Oxford: Oxford University Press; 2011. p. 305–28. Colonna V, Boattini A, Guardiano C, Dall'ara I, Pettener D, Longobardi G, Barbujani G. Long-range comparison between genes and languages based on syntactic distances. Hum Hered. 2010;70(4):245–54. Longobardi G, Guardiano C, Silvestri G, Boattini A, Ceolin A. Toward a syntactic phylogeny of modern Indo-European languages. J Hist Ling. 2013;3(1):122–52. We thank Dr. QD. Atkinson and two anonymous reviewers for their helpful comments and suggestions. This work was supported by Natural Sciences and Engineering Research Council of Canada, Fonds de Recherche sur la Nature et Technologies of Québec, and Fonds de Recherche sur la Société et la Culture of Québec. 
All the data presented in this article, including linguistic and phonetic data, distance matrices, methods' parameters, reconstructed trees and networks, are available at: www.trex.uqam.ca/biolinguistics.

Authors' contributions: MW and VM wrote the article and carried out the experimental study. EL carried out the experimental study. VM, LL, GL, FJL and AMD participated in the design of the study and in the search for linguistic and historical justifications for the results obtained. All authors gave final approval for publication.

Author affiliations:
Department of Computer Science, Université du Québec à Montréal, Case postale 8888, succursale Centre-ville, Montréal, Québec, H3C 3P8, Canada (Matthieu Willems, Etienne Lord, Louise Laforest & Vladimir Makarenkov)
Department of Biological Sciences, Université de Montréal, C.P. 6128 succ. Centre-Ville, Montreal, Quebec, H3C 3J7, Canada (Etienne Lord & François-Joseph Lapointe)
Department of Mathematics, Université du Québec à Montréal, Case postale 8888, succursale Centre-ville, Montréal, Québec, H3C 3P8, Canada (Gilbert Labelle)
Department of Linguistics, Université du Québec à Montréal, Case postale 8888, succursale Centre-ville, Montréal, Québec, H3C 3P8, Canada (Anna Maria Di Sciullo)

Correspondence to Vladimir Makarenkov.

Additional file: Biolinguistic IE data archive. This file includes phonetic data, data matrices, Newick strings and word trees discussed in this paper as well as Perl and Python scripts for computing the Levenshtein and SCA distances. (ZIP 328 kb)

Cite this article: Willems, M., Lord, E., Laforest, L. et al. Using hybridization networks to retrace the evolution of Indo-European languages. BMC Evol Biol 16, 180 (2016). https://doi.org/10.1186/s12862-016-0745-6

Keywords: Historical linguistics, Phylogenetic networks, Reticulate evolution
A map of world oil reserves, 2013.

Oil reserves denote the amount of crude oil that can be technically recovered at a cost that is financially feasible at the present price of oil.[1] Hence reserves will change with the price, unlike oil resources, which include all oil that can be technically recovered at any price. Reserves may be for a well, a reservoir, a field, a nation, or the world. Different classifications of reserves are related to their degree of certainty. The total estimated amount of oil in an oil reservoir, including both producible and non-producible oil, is called oil in place. However, because of reservoir characteristics and limitations in petroleum extraction technologies, only a fraction of this oil can be brought to the surface, and it is only this producible fraction that is considered to be reserves. The ratio of reserves to the total amount of oil in a particular reservoir is called the recovery factor. Determining a recovery factor for a given field depends on several features of the operation, including method of oil recovery used and technological developments.[2] Based on data from OPEC at the beginning of 2013 the highest proved oil reserves including non-conventional oil deposits are in Venezuela (20% of global reserves), Saudi Arabia (18% of global reserves), Canada (13% of global reserves), and Iran (9%).[3] Because the geology of the subsurface cannot be examined directly, indirect techniques must be used to estimate the size and recoverability of the resource. While new technologies have increased the accuracy of these techniques, significant uncertainties still remain. In general, most early estimates of the reserves of an oil field are conservative and tend to grow with time. This phenomenon is called reserves growth.[4] Many oil-producing nations do not reveal their reservoir engineering field data and instead provide unaudited claims for their oil reserves. The numbers disclosed by some national governments are suspected of being manipulated for political reasons.[5][6]

Classifications

Schematic graph illustrating petroleum volumes and probabilities. Curves represent categories of oil in assessment. There is a 95% chance (i.e., probability, F95) of at least volume V1 of economically recoverable oil, and there is a 5-percent chance (F05) of at least volume V2 of economically recoverable oil.[7]

All reserve estimates involve uncertainty, depending on the amount of reliable geologic and engineering data available and the interpretation of that data.
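To make the probability curves just described concrete, the following minimal sketch simulates recoverable volume as oil in place times recovery factor and reads off the volumes exceeded with 95%, 50% and 5% probability, i.e. the F95, F50 and F05 figures of the schematic (the industry P90/P50/P10 labels used in the following paragraphs are defined analogously). The distributional assumptions and numbers are purely illustrative, not data for any real field.

```python
# Minimal sketch of probabilistic reserve estimation: simulate recoverable volume
# as (oil in place) x (recovery factor) and read off exceedance percentiles.
# The lognormal/uniform parameters are illustrative assumptions, not field data.
import random

random.seed(1)
N = 100_000
volumes = []
for _ in range(N):
    oip = random.lognormvariate(3.0, 0.4)   # oil in place, arbitrary units (e.g. 10^6 bbl), illustrative
    rf = random.uniform(0.10, 0.60)         # recovery factor range quoted later in this article
    volumes.append(oip * rf)
volumes.sort()

def volume_exceeded_with_probability(p):
    """Volume that is exceeded with probability p (e.g. p=0.95 gives the F95 figure)."""
    return volumes[int(round((1.0 - p) * (N - 1)))]

for label, p in [("F95", 0.95), ("F50", 0.50), ("F05", 0.05)]:
    print(label, round(volume_exceeded_with_probability(p), 1))
```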
The relative degree of uncertainty can be expressed by dividing reserves into two principal classifications—"proven" (or "proved") and "unproven" (or "unproved").[7] Unproven reserves can further be divided into two subcategories—"probable" and "possible"—to indicate the relative degree of uncertainty about their existence.[7] The most commonly accepted definitions of these are based on those approved by the Society of Petroleum Engineers (SPE) and the World Petroleum Council (WPC) in 1997.[8] Proven reserves[edit] Main article: proven reserves Proven reserves are those reserves claimed to have a reasonable certainty (normally at least 90% confidence) of being recoverable under existing economic and political conditions, with existing technology. Industry specialists refer to this as "P90" (that is, having a 90% certainty of being produced). Proven reserves are also known in the industry as "1P".[9][10] Proven reserves are further subdivided into "proven developed" (PD) and "proven undeveloped" (PUD).[10][11] PD reserves are reserves that can be produced with existing wells and perforations, or from additional reservoirs where minimal additional investment (operating expense) is required.[11] PUD reserves require additional capital investment (e.g., drilling new wells) to bring the oil to the surface.[9][11] Until December 2009 "1P" proven reserves were the only type the U.S. Securities and Exchange Commission allowed oil companies to report to investors. Companies listed on U.S. stock exchanges must substantiate their claims, but many governments and national oil companies do not disclose verifying data to support their claims. Since January 2010 the SEC now allows companies to also provide additional optional information declaring 2P (both proven and probable) and 3P (proven plus probable plus possible) provided the evaluation is verified by qualified third party consultants, though many companies choose to use 2P and 3P estimates only for internal purposes. Unproven reserves[edit] An oil well in Canada, which has the world's third largest oil reserves. Unproven reserves are based on geological and/or engineering data similar to that used in estimates of proven reserves, but technical, contractual, or regulatory uncertainties preclude such reserves being classified as proven.[12] Unproven reserves may be used internally by oil companies and government agencies for future planning purposes but are not routinely compiled. They are sub-classified as probable and possible.[12] Probable reserves are attributed to known accumulations and claim a 50% confidence level of recovery. Industry specialists refer to them as "P50" (i.e., having a 50% certainty of being produced). The sum of proven plus probable reserves is also referred to in the industry as "2P" (proven plus probable).[9] Possible reserves are attributed to known accumulations that have a less likely chance of being recovered than probable reserves. This term is often used for reserves which are claimed to have at least a 10% certainty of being produced ("P10"). Reasons for classifying reserves as possible include varying interpretations of geology, reserves not producible at commercial rates, uncertainty due to reserve infill (seepage from adjacent areas) and projected reserves based on future recovery methods. 
The cumulative amount of proven, probable and possible resources are referred to in the industry as "3P" (proven plus probable plus possible).[9] Russian reserve categories[edit] In Russia, reserves categories A, B, and C1 correspond roughly to proved developed producing, proved developed nonproducing, and proved undeveloped, respectively; the designation ABC1 corresponds to proved reserves. The Russian category C2 includes probable and possible reserves.[13] Strategic petroleum reserves[edit] Main article: global strategic petroleum reserves Many countries maintain government-controlled oil reserves for both economic and national security reasons. According to the United States Energy Information Administration, approximately 4.1 billion barrels (650,000,000 m3) of oil are held in strategic reserves, of which 1.4 billion is government-controlled. These reserves are generally not counted when computing a nation's oil reserves. Resources[edit] Unconventional oil resources are greater than conventional ones.[14] Cumulative oil production plus remaining reserves and undiscovered resources. United States not included. A more sophisticated system of evaluating petroleum accumulations was adopted in 2007 by the Society of Petroleum Engineers (SPE), World Petroleum Council (WPC), American Association of Petroleum Geologists (AAPG), and Society of Petroleum Evaluation Engineers (SPEE). It incorporates the 1997 definitions for reserves, but adds categories for contingent resources and prospective resources.[7] Contingent resources are those quantities of petroleum estimated, as of a given date, to be potentially recoverable from known accumulations, but the applied project(s) are not yet considered mature enough for commercial development due to one or more contingencies. Contingent resources may include, for example, projects for which there are no viable markets, or where commercial recovery is dependent on technology under development, or where evaluation of the accumulation is insufficient to clearly assess commerciality. Prospective resources are those quantities of petroleum estimated, as of a given date, to be potentially recoverable from undiscovered accumulations by application of future development projects. Prospective resources have both an associated chance of discovery and a chance of development. The United States Geological Survey uses the terms technically and economically recoverable resources when making its petroleum resource assessments. Technically recoverable resources represent that proportion of assessed in-place petroleum that may be recoverable using current recovery technology, without regard to cost. Economically recoverable resources are technically recoverable petroleum for which the costs of discovery, development, production, and transport, including a return to capital, can be recovered at a given market price. "Unconventional resources" exist in petroleum accumulations that are pervasive throughout a large area. Examples include extra heavy oil, oil sand, and oil shale deposits. Unlike "conventional resources", in which the petroleum is recovered through wellbores and typically requires minimal processing prior to sale, unconventional resources require specialized extraction technology to produce. For example, steam and/or solvents are used to mobilize bitumen for in-situ recovery. 
Moreover, the extracted petroleum may require significant processing prior to sale (e.g., bitumen upgraders).[7] The total amount of unconventional oil resources in the world considerably exceeds the amount of conventional oil reserves, but these resources are much more difficult and expensive to develop.

Estimation techniques

Example of a production decline curve for an individual well

The amount of oil in a subsurface reservoir is called oil in place (OIP).[11] Only a fraction of this oil can be recovered from a reservoir. This fraction is called the recovery factor.[11] The portion that can be recovered is considered to be a reserve. The portion that is not recoverable is not included unless and until methods are implemented to produce it.[12]

Volumetric method

Further information: Extraction of petroleum and Oil in place

Volumetric methods attempt to determine the amount of oil in place by using the size of the reservoir as well as the physical properties of its rocks and fluids. Then a recovery factor is assumed, using assumptions from fields with similar characteristics. OIP is multiplied by the recovery factor to arrive at a reserve number. Current recovery factors for oil fields around the world typically range between 10 and 60 percent; some are over 80 percent. The wide variance is due largely to the diversity of fluid and reservoir characteristics for different deposits.[15][16][17] The method is most useful early in the life of the reservoir, before significant production has occurred.

Materials balance method

The materials balance method for an oil field uses an equation that relates the volume of oil, water and gas that has been produced from a reservoir and the change in reservoir pressure to calculate the remaining oil. It assumes that, as fluids from the reservoir are produced, there will be a change in the reservoir pressure that depends on the remaining volume of oil and gas. The method requires extensive pressure-volume-temperature analysis and an accurate pressure history of the field. It requires some production to occur (typically 5% to 10% of ultimate recovery), unless reliable pressure history can be used from a field with similar rock and fluid characteristics.[12]

Production decline curve method

Decline curve generated by decline curve analysis software, utilized in petroleum economics to indicate the depletion of oil and gas in a petroleum reservoir. The Y axis is a semi-log scale, indicating the rate of oil depletion (green line) and gas depletion (red line). The X axis is a coordinate scale, indicating time in years, and displays the production decline curve. The top red line is the gas decline curve, which is a hyperbolic decline curve. Gas is measured in MCF (thousand cubic feet in this case). The lower blue line is the oil decline curve, which is an exponential decline curve. Oil is measured in BBL (oil barrels). Data is from actual sales, not pumped production. The dips to zero indicate there were no sales that month, likely because the oil well did not produce a full tank, and thus was not worth a visit from a tank truck. The upper right legend (map) displays CUM, which is the cumulative gas or oil produced. ULT is the ultimate recovery projected for the well. PV10 is the discounted present value at 10%, which is the value of the remaining lease, valued for this oil well at $1.089 million USD.

The decline curve method uses production data to fit a decline curve and estimate future oil production.
The three most common forms of decline curves are exponential, hyperbolic, and harmonic. It is assumed that the production will decline on a reasonably smooth curve, and so allowances must be made for wells shut in and production restrictions. The curve can be expressed mathematically or plotted on a graph to estimate future production. It has the advantage of (implicitly) including all reservoir characteristics. It requires a sufficient history to establish a statistically significant trend, ideally when production is not curtailed by regulatory or other artificial conditions.[12]

Reserves growth

Experience shows that initial estimates of the size of newly discovered oil fields are usually too low. As years pass, successive estimates of the ultimate recovery of fields tend to increase. The term reserve growth refers to the typical increases in estimated ultimate recovery that occur as oil fields are developed and produced.[4]

Estimated reserves by country

Trends in proved oil reserves in top five countries, 1980–2013 (data from US Energy Information Administration)

See also: List of countries by proven oil reserves

The unit bbl = barrel of oil. A sample calculation for the reserve/production ratio is (296.5 × 1000) / (2.1 × 365) = 386.8 for Venezuela.

Countries with largest oil reserves
Most of the world's oil reserves are in the Middle East.[18]

Summary of Proven Reserve Data as of 2012[3]

Rank | Country | Reserves[19] (10^9 bbl) | Reserves (10^9 m3) | Production[20] (10^6 bbl/d) | Production (10^3 m3/d) | Reserve/Production ratio (1)
1 | Venezuela | 296.50 | 47.140 | 2.1 | 330 | 387
2 | Saudi Arabia | 265.40 | 42.195 | 8.9 | 1,410 | 82
3 | Canada | 175.00 | 27.823 | 2.7 | 430 | 178
4 | Iran | 151.20 | 24.039 | 4.1 | 650 | 101
5 | Iraq | 143.10 | 22.751 | 3.4 | 540 | 115
6 | Kuwait | 101.50 | 16.137 | 2.3 | 370 | 27
7 | United Arab Emirates | 97.80 | 15.549 | 2.4 | 380 | 18
8 | Russia | 80.00 | 12.719 | 10.0 | 1,590 | 15
9 | Libya | 47.00 | 7.472 | 1.7 | 270 | 76
10 | Nigeria | 37.00 | 5.883 | 2.5 | 400 | 41
11 | Kazakhstan | 30.00 | 4.770 | 1.5 | 240 | 55
12 | Qatar | 25.41 | 4.040 | 1.1 | 170 | 5
13 | China | 25.40 | 4.038 | 4.1 | 650 | 17
14 | United States | 25.00 | 3.975 | 7.0 | 1,110 | 10
15 | Angola | 13.50 | 2.146 | 1.9 | 300 | 19
16 | Algeria | 13.42 | 2.134 | 1.7 | 270 | 15
17 | Brazil | 13.20 | 2.099 | 2.1 | 330 | 17
Total of top seventeen reserves | | 1,540.43 | 244.909 | 59.5 | 9,460 | 71

(1) Reserve to Production ratio (in years), calculated as reserves / annual production (from above).
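As a small illustration of the production decline curve method described earlier in this section, the sketch below fits an exponential decline q(t) = q0·exp(-D·t) to synthetic monthly rates and estimates remaining recoverable oil and ultimate recovery down to an assumed economic limit. The production figures and the economic limit are invented for the example; a real evaluation would also consider hyperbolic and harmonic declines and would exclude shut-in periods, as noted above.

```python
# Minimal sketch of decline-curve analysis with an exponential model q(t) = q0 * exp(-D*t).
# Fit ln(q) vs t by least squares, then estimate ultimate recovery. Synthetic data only.
import math

# Hypothetical monthly production rates in bbl/month (a real analysis would exclude shut-in months).
rates = [1000, 955, 905, 862, 820, 783, 745, 710, 676, 645]
t = list(range(len(rates)))            # time in months
y = [math.log(q) for q in rates]

# Ordinary least squares for y = a + b*t  (so q0 = exp(a), nominal decline D = -b per month).
n = len(t)
tbar, ybar = sum(t) / n, sum(y) / n
b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sum((ti - tbar) ** 2 for ti in t)
a = ybar - b * tbar
q0, D = math.exp(a), -b

cum_to_date = sum(rates)               # cumulative production so far ("CUM" in the plot caption)
q_now = rates[-1]
q_limit = 50                           # assumed economic limit, bbl/month (illustrative)
remaining = (q_now - q_limit) / D      # exponential-decline cumulative from q_now down to q_limit
eur = cum_to_date + remaining          # estimated ultimate recovery ("ULT" in the plot caption)
print(f"q0={q0:.0f} bbl/mo, D={D:.3f}/mo, remaining={remaining:.0f} bbl, EUR={eur:.0f} bbl")
```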
It is estimated that between 100 and 135 billion tonnes (which equals between 133 and 180 billion m3 of oil) of the world's oil reserves have been used between 1850 and the present.[21]

OPEC countries

Since OPEC started to set production quotas on the basis of reserves levels in the 1980s, many of its members have reported significant increases in their official reserves.[22][23] There are doubts about the reliability of these estimates, which are not provided with any form of verification that meets external reporting standards.[22]

Oil reserves of OPEC 1980–2005

The sudden revisions in OPEC reserves, totaling nearly 300 bn barrels, have been much debated.[24] Some of the increase is defended partly by the shift in ownership of reserves away from international oil companies, some of which were obliged to report reserves under conservative US Securities and Exchange Commission rules.[22][25] The most prominent explanation of the revisions is a change in OPEC rules which set production quotas (partly) on the basis of reserves. In any event, the revisions in official data had little to do with the actual discovery of new reserves.[22] Total reserves in many OPEC countries hardly changed in the 1990s.[22] Official reserves in Kuwait, for example, were unchanged at 96.5 Gbbl (15.34×10^9 m3) (including its share of the Neutral Zone) from 1991 to 2002, even though the country produced more than 8 Gbbl (1.3×10^9 m3) and did not make any important new discoveries during that period.[22] The case of Saudi Arabia is also striking, with proven reserves estimated at between 260 and 264 billion barrels (4.20×10^10 m3) in the past 18 years, a variation of less than 2%,[22] while extracting approximately 60 billion barrels (9.5×10^9 m3) during this period. Sadad al-Huseini, former head of exploration and production at Saudi Aramco, estimates 300 Gbbl (48×10^9 m3) of the world's 1,200 Gbbl (190×10^9 m3) of proven reserves should be recategorized as speculative resources, though he did not specify which countries had inflated their reserves.[26] Dr. Ali Samsam Bakhtiari, a former senior expert of the National Iranian Oil Company, has estimated that Iran, Iraq, Kuwait, Saudi Arabia and the United Arab Emirates have overstated reserves by a combined 320–390 bn barrels and has said, "As for Iran, the usually accepted official 132 billion barrels (2.10×10^10 m3) is almost one hundred billion over any realistic assay."[27] Petroleum Intelligence Weekly reported that official confidential Kuwaiti documents estimate reserves of Kuwait were only 48 billion barrels (7.6×10^9 m3), of which half were proven and half were possible. The combined value of proven and possible is half of the official public estimate of proven reserves.[23] In July 2011, OPEC's Annual Statistical Review showed Venezuela's reserves to be larger than Saudi Arabia's.[28][29]

Prospective resources

Arctic prospective resources

See also: Petroleum exploration in the Arctic and Arctic methane emissions

Location of Arctic Basins assessed by the USGS

A 2008 United States Geological Survey assessment estimates that areas north of the Arctic Circle have 90 billion barrels (1.4×10^10 m3) of undiscovered, technically recoverable oil and 44 billion barrels (7.0×10^9 m3) of natural gas liquids in 25 geologically defined areas thought to have potential for petroleum. This represented 13% of the expected undiscovered oil in the world.
Of the estimated totals, more than half of the undiscovered oil resources were estimated to occur in just three geologic provinces—Arctic Alaska, the Amerasia Basin, and the East Greenland Rift Basins. More than 70% of the mean undiscovered oil resources was estimated to occur in five provinces: Arctic Alaska, Amerasia Basin, East Greenland Rift Basins, East Barents Basins, and West Greenland–East Canada. It was further estimated that approximately 84% of the oil and gas would occur offshore. The USGS did not consider economic factors such as the effects of permanent sea ice or oceanic water depth in its assessment of undiscovered oil and gas resources. This assessment was lower than a 2000 survey, which had included lands south of the Arctic Circle.[30][31][32] Unconventional prospective resources[edit] In October 2009, the USGS updated the quantity of the Orinoco tar sands, in Venezuela, to 513 billion barrels (8.16×1010 m3).[33] In June 2013 the U.S. Energy Information Administration published a global inventory of estimated recoverable tight oil and tight gas resources in shale formations, "Technically Recoverable Shale Oil and Shale Gas Resources: An Assessment of 137 Shale Formations in 41 Countries Outside the United States." The inventory is incomplete due to exclusion of tight oil and gas from sources other than shale such as sandstone or carbonates, formations underlying the large oil fields located in the Middle East and the Caspian region, off shore formations, or about which there is little information. Estimated technically recoverable shale oil resources total 335 to 345 billion barrels.[34] Decline curve analysis Global strategic petroleum reserves Oil exploration Strategic Petroleum Reserve Energy and resources: Petro-aggression World energy resources and consumption List of countries by natural gas proven reserves ^ Society of Petroleum Engineers, Petroleum reserves and resources definitions, accessed 24 Feb. 2017. ^ "Oil reserve definitions". bp.com. BP. Retrieved 4 December 2013. ^ a b "OPEC Share of World Oil Reserves 2010". OPEC. 2011. ^ a b David F. Morehouse (1997). "The Intricate Puzzle of Oil and Gas Reserves Growth" (PDF). U.S. Energy Information Administration. Archived from the original (PDF) on August 6, 2010. Retrieved 2014-08-19. ^ "Proven Oil Reserves". moneyterms.co.uk. 2008. Retrieved 2008-04-17. ^ The Asylum, Leah McGrath Goodman, 2011, Harper Collins ^ a b c d e "Petroleum Resources Management System". Society of Petroleum Engineers. 2007. Retrieved 2008-04-20. ^ "Petroleum Reserves Definitions" (PDF). Petroleum Resources Management System. Society of Petroleum Engineers. 1997. Retrieved 2008-04-20. ^ a b c d "Glossary of Terms Used in Petroleum Reserves/Resources" (PDF). Society of Petroleum Engineers. 2005. Retrieved 2008-04-20. ^ a b Wright, Charlotte J.; Rebecca A Gallun (2008). Fundamentals of Oil & Gas Accounting (5 ed.). PenWell Books. p. 750. ISBN 978-1-59370-137-6. ^ a b c d e Hyne, Norman J. (2001). Nontechnical Guide to Petroleum Geology, Exploration, Drilling and Production. PennWell Corporation. pp. 431–449. ISBN 9780878148233. ^ a b c d e Lyons, William C. (2005). Standard Handbook Of Petroleum & Natural Gas Engineering. Gulf Professional Publishing. pp. 5–6. ISBN 9780750677851. ^ Society of Petroleum Engineers, SPE Reserves Committee, ^ Alboudwarej; et al. (Summer 2006). "Highlighting Heavy Oil" (PDF). Oilfield Review. Archived from the original (PDF) on 2008-05-27. Retrieved 2008-05-24. ^ "Defining the Limits of Oil Production". 
International Energy Outlook 2008. U.S. Department of Energy. June 2008. Archived from the original on 2008-09-24. Retrieved 2008-11-22. ^ E. Tzimas, (2005). "Enhanced Oil Recovery using Carbon Dioxide in the European Energy System" (PDF). European Commission Joint Research Center. Retrieved 2008-08-23. ^ Green, Don W.; Willhite, G. Paul (1998), Enhanced Oil Recovery, Society of Petroleum Engineers, ISBN 978-1555630775 ^ "World Proved Reserves of Oil and Natural Gas". US Energy Information Administration. 2007. Retrieved 2008-08-19. ^ PennWell Corporation, Oil & Gas Journal, Vol. 105.48 (December 24, 2007), except United States. Oil includes crude oil and condensate. Data for the United States are from the Energy Information Administration, U.S. Crude Oil, Natural Gas, and Natural Gas Liquids Reserves, 2006 Annual Report, DOE/EIA-0216(2007) (November 2007). Oil & Gas Journal's oil reserve estimate for Canada includes 5.392 billion barrels (857,300,000 m3) of conventional crude oil and condensate reserves and 173.2 billion barrels (2.754×1010 m3) of oil sands reserves. Information collated by EIA ^ U.S. Energy Information Administration (EIA) – U.S. Government – U.S. Dept. of Energy, September, 2011 EIA - International Energy Statistics ^ How Much Oil Have We Used?, Science Daily, 8 May 2009. Retrieved Mar 2014. ^ a b c d e f g WORLD ENERGY OUTLOOK 2005:Middle East and North Africa Insights (PDF). INTERNATIONAL ENERGY AGENCY. 2005. pp. 125–126. ^ a b "Oil Reserves Accounting: The Case Of Kuwait". Petroleum Intelligence Weekly. January 30, 2006. Retrieved 2008-08-23. ^ Adam, Porter (15 July 2005). "How much oil do we really have?". BBC News. ^ Maugeri, Leonardo (January 23, 2006). "The Saudis May Have Enough Oil". Newsweek. ^ "Oil reserves over-inflated by 300bn barrels – al-Huseini". October 30, 2007. Retrieved 2008-08-23. ^ "On Middle Eastern Oil Reserves". ASPO-USA's Peak Oil Review. February 20, 2006. Retrieved 2008-08-20. ^ Faucon, Benoit (18 July 2011). "Venezuela Oil Reserves Surpassed Saudi Arabia In 2010-OPEC". Fox Business. Retrieved 18 July 2011. ^ "OPEC Share of World Crude Oil Reserves". OPEC. 2010. Retrieved June 3, 2012. ^ United States Geological Survey, (USGS) (July 27, 2008). "90 Billion Barrels of Oil and 1,670 Trillion Cubic Feet of Natural Gas Assessed in the Arctic". USGS. Retrieved 2008-08-12. ^ MOUAWAD, JAD (July 24, 2008). "Oil Survey Says Arctic Has Riches". New York Times. ^ Alan Bailey (October 21, 2007). "USGS: 25% Arctic oil, gas estimate a reporter's mistake". Vol. 12, No. 42. Petroleum News. Retrieved 2008-07-24. ^ Christopher J. Schenk; Troy A. Cook; Ronald R. Charpentier; Richard M. Pollastro; Timothy R. Klett; Marilyn E. Tennyson; Mark A. Kirschbaum; Michael E. Brownfield & Janet K. Pitman. (11 January 2010). "An Estimate of Recoverable Heavy Oil Resources of the Orinoco Oil Belt, Venezuela" (PDF). USGS. Retrieved 23 January 2010. ^ "Technically Recoverable Shale Oil and Shale Gas Resources: An Assessment of 137 Shale Formations in 41 Countries Outside the United States" (PDF). U.S. Energy Information Administration (EIA). June 2013. Retrieved June 11, 2013. 
External links
OPEC Annual Statistical Bulletin
Energy Supply page on the Global Education Project web site, including many charts and graphs on the world's energy supply and use
Oil reserves (most recent) by country
Statistical Review of World Energy
BP Statistical Review of Energy 2013
Satellite-relayed intercontinental quantum network (1801.04418) Sheng-Kai Liao, Wen-Qi Cai, Johannes Handsteiner, Bo Liu, Juan Yin, Liang Zhang, Dominik Rauch, Matthias Fink, Ji-Gang Ren, Wei-Yue Liu, Yang Li, Qi Shen, Yuan Cao, Feng-Zhi Li, Jian-Feng Wang, Yong-Mei Huang, Lei Deng, Tao Xi, Lu Ma, Tai Hu, Li Li, Nai-Le Liu, Franz Koidl, Peiyuan Wang, Yu-Ao Chen, Xiang-Bin Wang, Michael Steindorfer, Georg Kirchner, Chao-Yang Lu, Rong Shu, Rupert Ursin, Thomas Scheidl, Cheng-Zhi Peng, Jian-Yu Wang, Anton Zeilinger, Jian-Wei Pan Jan. 13, 2018 quant-ph We perform decoy-state quantum key distribution between a low-Earth-orbit satellite and multiple ground stations located in Xinglong, Nanshan, and Graz, which establish satellite-to-ground secure keys with ~kHz rate per passage of the satellite Micius over a ground station. The satellite thus establishes a secure key between itself and, say, Xinglong, and another key between itself and, say, Graz. Then, upon request from the ground command, Micius acts as a trusted relay. It performs bitwise exclusive OR operations between the two keys and relays the result to one of the ground stations. That way, a secret key is created between China and Europe at locations separated by 7600 km on Earth. These keys are then used for intercontinental quantum-secured communication. This was on the one hand the transmission of images in a one-time pad configuration from China to Austria as well as from Austria to China. Also, a videoconference was performed between the Austrian Academy of Sciences and the Chinese Academy of Sciences, which also included a 280 km optical ground connection between Xinglong and Beijing. Our work points towards an efficient solution for an ultralong-distance global quantum network, laying the groundwork for a future quantum internet. Space QUEST mission proposal: Experimentally testing decoherence due to gravity (1703.08036) Siddarth Koduru Joshi, Jacques Pienaar, Timothy C. Ralph, Luigi Cacciapuoti, Will McCutcheon, John Rarity, Dirk Giggenbach, Jin Gyu Lim, Vadim Makarov, Ivette Fuentes, Thomas Scheidl, Erik Beckert, Mohamed Bourennane, David Edward Bruschi, Adan Cabello, Jose Capmany, Alberto Carrasco-Casado, Eleni Diamanti, Miloslav Dus̆ek, Dominique Elser, Angelo Gulinatti, Robert H. Hadfield, Thomas Jennewein, Rainer Kaltenbaek, Michael A. Krainak, Hoi-Kwong Lo, Christoph Marquardt, Gerard Milburn, Momtchil Peev, Andreas Poppe, Valerio Pruneri, Renato Renner, Christophe Salomon, Johannes Skaar, Nikolaos Solomos, Mario Stipčević, Juan P. Torres, Morio Toyoshima, Paolo Villoresi, Ian Walmsley, Gregor Weihs, Harald Weinfurter, Anton Zeilinger, Marek Żukowski, Rupert Ursin Jan. 9, 2018 quant-ph, gr-qc Models of quantum systems on curved space-times lack sufficient experimental verification. Some speculative theories suggest that quantum properties, such as entanglement, may exhibit entirely different behavior to purely classical systems. By measuring this effect or lack thereof, we can test the hypotheses behind several such models. For instance, as predicted by Ralph and coworkers [T C Ralph, G J Milburn, and T Downes, Phys. Rev. A, 79(2):22121, 2009, T C Ralph and J Pienaar, New Journal of Physics, 16(8):85008, 2014], a bipartite entangled system could decohere if each particle traversed through a different gravitational field gradient. We propose to study this effect in a ground to space uplink scenario. 
We extend the above theoretical predictions of Ralph and coworkers and discuss the scientific consequences of detecting/failing to detect the predicted gravitational decoherence. We present a detailed mission design of the European Space Agency's (ESA) Space QUEST (Space - Quantum Entanglement Space Test) mission, and study the feasibility of the mission schema. Quantum Communication Uplink to a 3U CubeSat: Feasibility & Design (1711.03409) Sebastian Philipp Neumann, Siddarth Koduru Joshi, Matthias Fink, Thomas Scheidl, Roland Blach, Carsten Scharlemann, Sameh Abouagaga, Daanish Bambery, Erik Kerstel, Mathieu Barthelemy, Rupert Ursin Dec. 12, 2017 quant-ph Satellites are the efficient way to achieve global scale quantum communication (Q.Com) because unavoidable losses restrict fiber based Q.Com to a few hundred kilometers. We demonstrate the feasibility of establishing a Q.Com uplink with a tiny 3U CubeSat (measuring just 10X10X32 cm^3 ) using commercial off-the-shelf components, the majority of which have space heritage. We demonstrate how to leverage the latest advancements in nano-satellite body-pointing to show that our 4kg CubeSat can provide performance comparable to much larger 600kg satellite missions. A comprehensive link budget and simulation was performed to calculate the secure key rates. We discuss design choices and trade-offs to maximize the key rate while minimizing the cost and development needed. Our detailed design and feasibility study can be readily used as a template for global scale Q.Com. Distribution of high-dimensional entanglement via an intra-city free-space link (1612.00751) Fabian Steinlechner, Sebastian Ecker, Matthias Fink, Bo Liu, Jessica Bavaresco, Marcus Huber, Thomas Scheidl, Rupert Ursin Aug. 22, 2017 quant-ph Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links. Cosmic Bell Test: Measurement Settings from Milky Way Stars (1611.06985) Johannes Handsteiner, Andrew S. Friedman, Dominik Rauch, Jason Gallicchio, Bo Liu, Hannes Hosp, Johannes Kofler, David Bricher, Matthias Fink, Calvin Leung, Anthony Mark, Hien T. Nguyen, Isabella Sanders, Fabian Steinlechner, Rupert Ursin, Sören Wengerowsky, Alan H. Guth, David I. Kaiser, Thomas Scheidl, Anton Zeilinger Jan. 26, 2017 quant-ph, astro-ph.CO Bell's theorem states that some predictions of quantum mechanics cannot be reproduced by a local-realist theory. 
That conflict is expressed by Bell's inequality, which is usually derived under the assumption that there are no statistical correlations between the choices of measurement settings and anything else that can causally affect the measurement outcomes. In previous experiments, this "freedom of choice" was addressed by ensuring that selection of measurement settings via conventional "quantum random number generators" was space-like separated from the entangled particle creation. This, however, left open the possibility that an unknown cause affected both the setting choices and measurement outcomes as recently as mere microseconds before each experimental trial. Here we report on a new experimental test of Bell's inequality that, for the first time, uses distant astronomical sources as "cosmic setting generators." In our tests with polarization-entangled photons, measurement settings were chosen using real-time observations of Milky Way stars while simultaneously ensuring locality. Assuming fair sampling for all detected photons, and that each stellar photon's color was set at emission, we observe statistically significant $\gtrsim 7.31 \sigma$ and $\gtrsim 11.93 \sigma$ violations of Bell's inequality with estimated $p$-values of $ \lesssim 1.8 \times 10^{-13}$ and $\lesssim 4.0 \times 10^{-33}$, respectively, thereby pushing back by $\sim$600 years the most recent time by which any local-realist influences could have engineered the observed Bell violation. Quantum communication with photons (1701.00989) Mario Krenn, Mehul Malik, Thomas Scheidl, Rupert Ursin, Anton Zeilinger Jan. 4, 2017 quant-ph, physics.optics The secure communication of information plays an ever increasing role in our society today. Classical methods of encryption inherently rely on the difficulty of solving a problem such as finding prime factors of large numbers and can, in principle, be cracked by a fast enough machine. The burgeoning field of quantum communication relies on the fundamental laws of physics to offer unconditional information security. Here we introduce the key concepts of quantum superposition and entanglement as well as the no-cloning theorem that form the basis of this field. Then, we review basic quantum communication schemes with single and entangled photons and discuss recent experimental progress in ground and space-based quantum communication. Finally, we discuss the emerging field of high-dimensional quantum communication, which promises increased data rates and higher levels of security than ever before. We discuss recent experiments that use the orbital angular momentum of photons for sharing large amounts of information in a secure fashion. Experimental test of photonic entanglement in accelerated reference frames (1608.02473) Matthias Fink, Ana Rodriguez-Aramendia, Johannes Handsteiner, Abdul Ziarkash, Fabian Steinlechner, Thomas Scheidl, Ivette Fuentes, Jacques Pienaar, Tim C. Ralph, Rupert Ursin Aug. 8, 2016 quant-ph The quantization of the electromagnetic field has successfully paved the way for the development of the Standard Model of Particle Physics and has established the basis for quantum technologies. Gravity, however, continues to hold out against physicists' efforts of including it into the framework of quantum theory. Experimental techniques in quantum optics have only recently reached the precision and maturity required for the investigation of quantum systems under the influence of gravitational fields. 
Here, we report on experiments in which a genuine quantum state of an entangled photon pair was exposed to a series of different accelerations. We measure an entanglement witness for $g$ values ranging from 30 mg to up to 30 g - under free-fall as well on a spinning centrifuge - and have thus derived an upper bound on the effects of uniform acceleration on photonic entanglement. Our work represents the first quantum optics experiment in which entanglement is systematically tested in geodesic motion as well as in accelerated reference frames with acceleration a>>g = 9.81 m/s^2. Significant-loophole-free test of Bell's theorem with entangled photons (1511.03190) Marissa Giustina, Marijn A. M. Versteegh, Sören Wengerowsky, Johannes Handsteiner, Armin Hochrainer, Kevin Phelan, Fabian Steinlechner, Johannes Kofler, Jan-Åke Larsson, Carlos Abellán, Waldimar Amaya, Valerio Pruneri, Morgan W. Mitchell, Jörn Beyer, Thomas Gerrits, Adriana E. Lita, Lynden K. Shalm, Sae Woo Nam, Thomas Scheidl, Rupert Ursin, Bernhard Wittmann, Anton Zeilinger Local realism is the worldview in which physical properties of objects exist independently of measurement and where physical influences cannot travel faster than the speed of light. Bell's theorem states that this worldview is incompatible with the predictions of quantum mechanics, as is expressed in Bell's inequalities. Previous experiments convincingly supported the quantum predictions. Yet, every experiment requires assumptions that provide loopholes for a local realist explanation. Here we report a Bell test that closes the most significant of these loopholes simultaneously. Using a well-optimized source of entangled photons, rapid setting generation, and highly efficient superconducting detectors, we observe a violation of a Bell inequality with high statistical significance. The purely statistical probability of our results to occur under local realism does not exceed $3.74 \times 10^{-31}$, corresponding to an 11.5 standard deviation effect. Teleportation of entanglement over 143 km (1403.0009) Thomas Herbst, Thomas Scheidl, Matthias Fink, Johannes Handsteiner, Bernhard Wittmann, Rupert Ursin, Anton Zeilinger Feb. 6, 2015 quant-ph, physics.optics As a direct consequence of the no-cloning theorem, the deterministic amplification as in classical communication is impossible for quantum states. This calls for more advanced techniques in a future global quantum network, e.g. for cloud quantum computing. A unique solution is the teleportation of an entangled state, i.e. entanglement swapping, representing the central resource to relay entanglement between distant nodes. Together with entanglement purification and a quantum memory it constitutes a so-called quantum repeater. Since the aforementioned building blocks have been individually demonstrated in laboratory setups only, the applicability of the required technology in real-world scenarios remained to be proven. Here we present a free-space entanglement-swapping experiment between the Canary Islands of La Palma and Tenerife, verifying the presence of quantum entanglement between two previously independent photons separated by 143 km. We obtained an expectation value for the entanglement-witness operator, more than 6 standard deviations beyond the classical limit. 
By consecutive generation of the two required photon pairs and space-like separation of the relevant measurement events, we also showed the feasibility of the swapping protocol in a long-distance scenario, where the independence of the nodes is highly demanded. Since our results already allow for efficient implementation of entanglement purification, we anticipate our assay to lay the ground for a fully-fledged quantum repeater over a realistic high-loss and even turbulent quantum channel. Communication with spatially modulated Light through turbulent Air across Vienna (1402.2602) Mario Krenn, Robert Fickler, Matthias Fink, Johannes Handsteiner, Mehul Malik, Thomas Scheidl, Rupert Ursin, Anton Zeilinger Nov. 12, 2014 quant-ph, physics.optics The transverse spatial modes of light offer a large state-space with interesting physical properties. For exploiting it in future long-distance experiments, spatial modes will have to be transmitted over turbulent free-space links. Numerous recent lab-scale experiments have found significant degradation in the mode quality after transmission through simulated turbulence and consecutive coherent detection. Here we experimentally analyze the transmission of one prominent class of spatial modes, the orbital-angular momentum (OAM) modes, through 3 km of strong turbulence over the city of Vienna. Instead of performing a coherent phase-dependent measurement, we employ an incoherent detection scheme which relies on the unambiguous intensity patterns of the different spatial modes. We use a pattern recognition algorithm (an artificial neural network) to identify the characteristic mode pattern displayed on a screen at the receiver. We were able to distinguish between 16 different OAM mode superpositions with only ~1.7% error, and use them to encode and transmit small grey-scale images. Moreover, we found that the relative phase of the superposition modes is not affected by the atmosphere, establishing the feasibility for performing long-distance quantum experiments with the OAM of photons. Our detection method works for other classes of spatial modes with unambiguous intensity patterns as well, and can further be improved by modern techniques of pattern recognition. Crossed crystal scheme for fs-pulsed entangled photon generation in ppKTP (1404.6914) Thomas Scheidl, Felix Tiefenbacher, Robert Prevedel, Fabian Steinlechner, Rupert Ursin, Anton Zeilinger April 28, 2014 quant-ph We demonstrate a novel scheme for femto-second pulsed spontaneous parametric down-conversion in periodically poled KTP crystals. Our scheme is based on a crossed crystal configuration with collinear quasi-phase-matching. The non-degenerate photon pairs are split in a fiber-based wavelength division multiplexer. The source is easier to align than common pulsed sources based on bulk BBO crystals and exhibits high-quality polarization entanglement as well as non-classical interference capabilities. Hence, we expect this source to be a well-suited candidate for multi-photon state generation e.g. for linear optical quantum computation and quantum communication networks. Quantum erasure with causally disconnected choice (1206.6578) Xiao-song Ma, Johannes Kofler, Angie Qarry, Nuray Tetik, Thomas Scheidl, Rupert Ursin, Sven Ramelow, Thomas Herbst, Lothar Ratschbacher, Alessandro Fedrizzi, Thomas Jennewein, Anton Zeilinger Jan. 29, 2013 quant-ph, physics.optics The counterintuitive features of quantum physics challenge many common-sense assumptions. 
In an interferometric quantum eraser experiment, one can actively choose whether or not to erase which-path information, a particle feature, of one quantum system and thus observe its wave feature via interference or not by performing a suitable measurement on a distant quantum system entangled with it. In all experiments performed to date, this choice took place either in the past or, in some delayed-choice arrangements, in the future of the interference. Thus in principle, physical communications between choice and interference were not excluded. Here we report a quantum eraser experiment, in which by enforcing Einstein locality no such communication is possible. This is achieved by independent active choices, which are space-like separated from the interference. Our setup employs hybrid path-polarization entangled photon pairs which are distributed over an optical fiber link of 55 m in one experiment, or over a free-space link of 144 km in another. No naive realistic picture is compatible with our results because whether a quantum could be seen as showing particle- or wave-like behavior would depend on a causally disconnected choice. It is therefore suggestive to abandon such pictures altogether. Quantum optics experiments to the International Space Station ISS: a proposal (1211.2111) Thomas Scheidl, Eric Wille, Rupert Ursin Nov. 9, 2012 quant-ph We propose performing quantum optics experiments in an ground-to-space scenario using the International Space Station, which is equipped with a glass viewing window and a photographer's lens mounted on a motorized camera pod. A dedicated small add-on module with single-photon detection, time-tagging and classical communication capabilities would enable us to perform the first-ever quantum optics experiments in space. We present preliminary design concepts for the ground and flight segments and study the feasibility of the intended mission scenario. Experimental quantum teleportation over a high-loss free-space channel (1210.1282) Xiao-Song Ma, Sebastian Kropatschek, William Naylor, Thomas Scheidl, Johannes Kofler, Thomas Herbst, Anton Zeilinger, Rupert Ursin Oct. 4, 2012 quant-ph, physics.optics We present a high-fidelity quantum teleportation experiment over a high-loss free-space channel between two laboratories. We teleported six states of three mutually unbiased bases and obtained an average state fidelity of 0.82(1), well beyond the classical limit of 2/3. With the obtained data, we tomographically reconstructed the process matrices of quantum teleportation. The free-space channel attenuation of 31 dB corresponds to the estimated attenuation regime for a down-link from a low-earth-orbit satellite to a ground station. We also discussed various important technical issues for future experiments, including the dark counts of single-photon detectors, coincidence-window width etc. Our experiment tested the limit of performing quantum teleportation with state-of-the-art resources. It is an important step towards future satellite-based quantum teleportation and paves the way for establishing a worldwide quantum communication network. 
Quantum teleportation using active feed-forward between two Canary Islands (1205.3909) Xiao-song Ma, Thomas Herbst, Thomas Scheidl, Daqing Wang, Sebastian Kropatschek, William Naylor, Alexandra Mech, Bernhard Wittmann, Johannes Kofler, Elena Anisimova, Vadim Makarov, Thomas Jennewein, Rupert Ursin, Anton Zeilinger May 17, 2012 quant-ph, physics.optics Quantum teleportation [1] is a quintessential prerequisite of many quantum information processing protocols [2-4]. By using quantum teleportation, one can circumvent the no-cloning theorem [5] and faithfully transfer unknown quantum states to a party whose location is even unknown over arbitrary distances. Ever since the first experimental demonstrations of quantum teleportation of independent qubits [6] and of squeezed states [7], researchers have progressively extended the communication distance in teleportation, usually without active feed-forward of the classical Bell-state measurement result which is an essential ingredient in future applications such as communication between quantum computers. Here we report the first long-distance quantum teleportation experiment with active feed-forward in real time. The experiment employed two optical links, quantum and classical, over 143 km free space between the two Canary Islands of La Palma and Tenerife. To achieve this, the experiment had to employ novel techniques such as a frequency-uncorrelated polarization-entangled photon pair source, ultra-low-noise single-photon detectors, and entanglement-assisted clock synchronization. The average teleported state fidelity was well beyond the classical limit of 2/3. Furthermore, we confirmed the quality of the quantum teleportation procedure (without feed-forward) by complete quantum process tomography. Our experiment confirms the maturity and applicability of the involved technologies in real-world scenarios, and is a milestone towards future satellite-based quantum teleportation. Violation of local realism with freedom of choice (0811.3129) Thomas Scheidl, Rupert Ursin, Johannes Kofler, Sven Ramelow, Xiao-Song Ma, Thomas Herbst, Lothar Ratschbacher, Alessandro Fedrizzi, Nathan K. Langford, Thomas Jennewein, Anton Zeilinger Bell's theorem shows that local realistic theories place strong restrictions on observable correlations between different systems, giving rise to Bell's inequality which can be violated in experiments using entangled quantum states. Bell's theorem is based on the assumptions of realism, locality, and the freedom to choose between measurement settings. In experimental tests, "loopholes" arise which allow observed violations to still be explained by local realistic theories. Violating Bell's inequality while simultaneously closing all such loopholes is one of the most significant still open challenges in fundamental physics today. In this paper, we present an experiment that violates Bell's inequality while simultaneously closing the locality loophole and addressing the freedom-of-choice loophole, also closing the latter within a reasonable set of assumptions. We also explain that the locality and freedom-of-choice loopholes can be closed only within non-determinism, i.e. in the context of stochastic local realism. 
Feasibility of 300 km Quantum Key Distribution with Entangled States (1007.4645) Thomas Scheidl, Rupert Ursin, Alessandro Fedrizzi, Sven Ramelow, Xiao-Song Ma, Thomas Herbst, Robert Prevedel, Lothar Ratschbacher, Johannes Kofler, Thomas Jennewein, Anton Zeilinger July 27, 2010 quant-ph A significant limitation of practical quantum key distribution (QKD) setups is currently their limited operational range. It has recently been emphasized (X. Ma, C.-H. F. Fung, and H.-K. Lo., Phys. Rev. A, 76:012307, 2007) that entanglement-based QKD systems can tolerate higher channel losses than systems based on weak coherent laser pulses (WCP), in particular when the source is located symmetrically between the two communicating parties, Alice and Bob. In the work presented here, we experimentally study this important advantage by implementing different entanglement-based QKD setups on a 144~km free-space link between the two Canary Islands of La Palma and Tenerife. We established three different configurations where the entangled photon source was placed at Alice's location, asymmetrically between Alice and Bob and symmetrically in the middle between Alice and Bob, respectively. The resulting quantum channel attenuations of 35~dB, 58~dB and 71~dB, respectively, significantly exceed the limit for WCP systems. This confirms that QKD over distances of 300~km and even more is feasible with entangled state sources placed in the middle between Alice and Bob. High-fidelity transmission of entanglement over a high-loss freespace channel (0902.2015) Alessandro Fedrizzi, Rupert Ursin, Thomas Herbst, Matteo Nespoli, Robert Prevedel, Thomas Scheidl, Felix Tiefenbacher, Thomas Jennewein, Anton Zeilinger June 24, 2009 quant-ph Quantum entanglement enables tasks not possible in classical physics. Many quantum communication protocols require the distribution of entangled states between distant parties. Here we experimentally demonstrate the successful transmission of an entangled photon pair over a 144 km free-space link. The received entangled states have excellent, noise-limited fidelity, even though they are exposed to extreme attenuation dominated by turbulent atmospheric effects. The total channel loss of 64 dB corresponds to the estimated attenuation regime for a two-photon satellite quantum communication scenario. We confirm that the received two-photon states are still highly entangled by violating the CHSH inequality by more than 5 standard deviations. From a fundamental point of view, our results show that the photons are virtually not subject to decoherence during their 0.5 ms long flight through air, which is encouraging for future world-wide quantum communication scenarios. Space-QUEST: Experiments with quantum entanglement in space (0806.0945) Rupert Ursin, Thomas Jennewein, Johannes Kofler, Josep M. Perdigues, Luigi Cacciapuoti, Clovis J. de Matos, Markus Aspelmeyer, Alejandra Valencia, Thomas Scheidl, Alessandro Fedrizzi, Antonio Acin, Cesare Barbieri, Giuseppe Bianco, Caslav Brukner, Jose Capmany, Sergio Cova, Dirk Giggenbach, Walter Leeb, Robert H. Hadfield, Raymond Laflamme, Norbert Lutkenhaus, Gerard Milburn, Momtchil Peev, Timothy Ralph, John Rarity, Renato Renner, Etienne Samain, Nikolaos Solomos, Wolfgang Tittel, Juan P. 
Torres, Morio Toyoshima, Arturo Ortigosa-Blanch, Valerio Pruneri, Paolo Villoresi, Ian Walmsley, Gregor Weihs, Harald Weinfurter, Marek Zukowski, Anton Zeilinger June 5, 2008 quant-ph The European Space Agency (ESA) has supported a range of studies in the field of quantum physics and quantum information science in space for several years, and consequently we have submitted the mission proposal Space-QUEST (Quantum Entanglement for Space Experiments) to the European Life and Physical Sciences in Space Program. We propose to perform space-to-ground quantum communication tests from the International Space Station (ISS). We present the proposed experiments in space as well as the design of a space based quantum communication payload.
The Evolutionary Map of the Universe pilot survey Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 07 September 2021, e046 We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. 
These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here. Monoclinic–orthorhombic first-order phase transition in K2ZnSi5O12 leucite analogue; transition mechanism and spontaneous strain analysis Anthony M.T. Bell, Francis Clegg, Christopher M.B. Henderson Journal: Mineralogical Magazine / Volume 85 / Issue 5 / October 2021 Published online by Cambridge University Press: 26 August 2021, pp. 752-771 Print publication: October 2021 You have access Access Hydrothermally synthesised K2ZnSi5O12 has a polymerised framework structure with the same topology as leucite (KAlSi2O6, tetragonal I41/a), which has two tetrahedrally coordinated Al3+ cations replaced by Zn2+ and Si4+. At 293 K it has a cation-ordered framework P21/c monoclinic structure with lattice parameters a = 13.1773(2) Å, b = 13.6106(2) Å, c = 13.0248(2) Å and β = 91.6981(9)°. This structure is isostructural with K2MgSi5O12, the first cation-ordered leucite analogue characterised. With increasing temperature, the P21/c structure transforms reversibly to cation-ordered framework orthorhombic Pbca. This transition takes place over the temperature range 848−863 K where both phases coexist; there is an ~1.2% increase in unit cell volume between 843 K (P21/c) and 868 K (Pbca), characteristic of a first-order, displacive, ferroelastic phase transition. Spontaneous strain analysis defines the symmetry- and non-symmetry related changes and shows that the mechanism is weakly first order; the two-phase region is consistent with the mechanism being a strain-related martensitic transition. Tibetan Demonology Published online: 18 June 2020 Print publication: 16 July 2020 Buy the Element Tibetan Demonology discusses the rich taxonomy of gods and demons encountered in Tibet. These spirits are often the cause of, and exhorted for, diverse violent and wrathful activities. This Element consists of four thematic sections. The first section, 'Spirits and the Body', explores oracular possession and spirit-induced illnesses. The second section, 'Spirits and Time', discusses the role of gods in Tibetan astrology and ritual calendars. The third section, 'Spirits and Space', examines the relationship between divinities and the Tibetan landscape. The final section, 'Spirits and Doctrine', explores how certain deities act as fierce protectors of religious and political institutions. A reassessment of the age of the fauna from Cumberland Bone Cave, Maryland, (middle Pleistocene) using coupled U-series and electron spin resonance dating (ESR) Charles B. Withnell, Renaud Joannes-Boyau, Christopher J. Bell Journal: Quaternary Research / Volume 97 / September 2020 Published online by Cambridge University Press: 16 June 2020, pp. 187-198 The deposits in Cumberland Bone Cave (Allegany County, Maryland) preserved one of the most taxonomically diverse pre-radiocarbon Pleistocene faunas in the northeastern United States. The site has long been recognized as an important record of Pleistocene life in the region, but numerical age control for the fauna was never developed, and hypotheses for its age have been based upon biochronological assessments of the mammalian fauna. We used fossil teeth and preserved sediment housed in museum collections to obtain the first numerical age assessment of the fauna from Cumberland Bone Cave. Coupled U-series Electron Spin Resonance (US-ESR) was used to date fossil molars of the extinct peccary, Platygonus sp. 
The age estimates of two teeth gave ages of 722 ± 64 and 790 ± 53 ka. Our results are supported by previously unpublished paleomagnetic data generated by the late John Guilday, and by plotting length-width of the first molar (m1) of Ondatra (muskrats) from Cumberland Bone Cave on the chronocline of Ondatra molar evolution in North America. Our age assessments are surprisingly close to the age estimate previously proposed by Charles Repenning, who based his age on a somewhat complicated model of speciation and morphotype evolution among arvicoline rodents. Impact of the Canterbury earthquakes on dispensing of psychiatric medication for children and adolescents: longitudinal quantitative study Green Psychiatry Collection BJPsych Disasters and Trauma Themed Issue Children's Mental Health Collection Ben Beaglehole, Stephanie Moor, Tao Zhang, Gregory J. Hamilton, Roger T. Mulder, Joseph M. Boden, Christopher M. A. Frampton, Caroline J. Bell Journal: The British Journal of Psychiatry / Volume 216 / Issue 3 / March 2020 Published online by Cambridge University Press: 29 January 2020, pp. 151-155 Print publication: March 2020 Natural disasters are increasing in frequency and impact; they cause widespread disruption and adversity throughout the world. The Canterbury earthquakes of 2010–2011 were devastating for the people of Christchurch, New Zealand. It is important to understand the impact of this disaster on the mental health of children and adolescents. To report psychiatric medication use for children and adolescents following the Canterbury earthquakes. Dispensing data from community pharmacies for the medication classes antidepressants, antipsychotics, anxiolytics, sedatives/hypnotics and methylphenidate are routinely recorded in a national database. Longitudinal data are available for residents of the Canterbury District Health Board (DHB) and nationally. We compared dispensing data for children and adolescents residing in Canterbury DHB with national dispensing data to assess the impact of the Canterbury earthquakes on psychotropic prescribing for children and adolescents. After longer-term trends and population adjustments are considered, a subtle adverse effect of the Canterbury earthquakes on dispensing of antidepressants was detected. However, the Canterbury earthquakes were not associated with higher dispensing rates for antipsychotics, anxiolytics, sedatives/hypnotics or methylphenidate. Mental disorders or psychological distress of a sufficient severity to result in treatment of children and adolescents with psychiatric medication were not substantially affected by the Canterbury earthquakes. Roundtable: Antecedents of 2019 CHRISTOPHER PHELPS, JENNIFER LUFF, ALEX GOODALL, JONATHAN BELL, MOLLY GEIDEL Journal: Journal of American Studies / Volume 53 / Issue 4 / November 2019 Published online by Cambridge University Press: 10 October 2019, pp. 855-892 Print publication: November 2019 TwinsUK: The UK Adult Twin Registry Update Serena Verdi, Golboo Abbasian, Ruth C. E. Bowyer, Genevieve Lachance, Darioush Yarand, Paraskevi Christofidou, Massimo Mangino, Cristina Menni, Jordana T. Bell, Mario Falchi, Kerrin S. Small, Frances M. K. Williams, Christopher J. Hammond, Deborah J. Hart, Timothy D. Spector, Claire J. Steves Journal: Twin Research and Human Genetics / Volume 22 / Issue 6 / December 2019 Published online by Cambridge University Press: 17 September 2019, pp. 523-529 Print publication: December 2019 TwinsUK is the largest cohort of community-dwelling adult twins in the UK. 
The registry comprises over 14,000 volunteer twins (14,838 including mixed, single and triplets); it is predominantly female (82%) and middle-aged (mean age 59). In addition, over 1800 parents and siblings of twins are registered volunteers. During the last 27 years, TwinsUK has collected numerous questionnaire responses, physical/cognitive measures and biological measures on over 8500 subjects. Data were collected alongside four comprehensive phenotyping clinical visits to the Department of Twin Research and Genetic Epidemiology, King's College London. Such collection methods have resulted in very detailed longitudinal clinical, biochemical, behavioral, dietary and socioeconomic cohort characterization; it provides a multidisciplinary platform for the study of complex disease during the adult life course, including the process of healthy aging. The major strength of TwinsUK is the availability of several 'omic' technologies for a range of sample types from participants, which includes genomewide scans of single-nucleotide variants, next-generation sequencing, metabolomic profiles, microbiomics, exome sequencing, epigenetic markers, gene expression arrays, RNA sequencing and telomere length measures. TwinsUK facilitates and actively encourages sharing the 'TwinsUK' resource with the scientific community — interested researchers may request data via the TwinsUK website (http://twinsuk.ac.uk/resources-for-researchers/access-our-data/) for their own use or future collaboration with the study team. In addition, further cohort data collection is planned via the Wellcome Open Research gateway (https://wellcomeopenresearch.org/gateways). The current article presents an up-to-date report on the application of technological advances, new study procedures in the cohort and future direction of TwinsUK. Food of Sinful Demons: Meat, Vegetarianism, and the Limits of Buddhism in Tibet. By Geoffrey Barstow. New York: Columbia University Press, 2019. xv, 289 pp. Journal: The Journal of Asian Studies / Volume 78 / Issue 2 / May 2019 Published online by Cambridge University Press: 10 May 2019, pp. 422-424 Print publication: May 2019 New evidence on the link between genes, psychological traits, and political engagement Aaron C. Weinschenk, Christopher T. Dawes, Christian Kandler, Edward Bell, Rainer Riemann Journal: Politics and the Life Sciences / Volume 38 / Issue 1 / Spring 2019 Published online by Cambridge University Press: 16 May 2019, pp. 1-13 Print publication: Spring 2019 We investigate the link between genes, psychological traits, and political engagement using a new data set containing information on a large sample of young German twins. The TwinLife Study enables us to examine the predominant model of personality, the Big Five framework, as well as traits that fall outside the Big Five, such as cognitive ability, providing a more comprehensive understanding of the underpinnings of political engagement. Our results support previous work showing genetic overlap between some psychological traits and political engagement. More specifically, we find that cognitive ability and openness to experience are correlated with political engagement and that common genes can explain most of the relationship between these psychological traits and political engagement. 
Relationships between genes, psychological traits, and political engagement exist even at a fairly young age, which is an important finding given that previous work has relied heavily on older samples to study the link between genes, psychological traits, and political engagement. 3022 Barriers to Accessing Follow-up Care and Changes in Medical Needs after Childhood Injury Teresa Maria Bell, Ashley N Vetor, Dennis P Watson, Christopher A Harle, Aaron E Carroll Journal: Journal of Clinical and Translational Science / Volume 3 / Issue s1 / March 2019 Published online by Cambridge University Press: 26 March 2019, pp. 140-141 OBJECTIVES/SPECIFIC AIMS: The objective of this study was to prospectively assess caregiver-perceived barriers to accessing post-acute care for their injured child and determine if caregivers report ongoing, unmet health needs for their children after trauma. METHODS/STUDY POPULATION: This was a prospective cohort study that followed 50 participants for 6 months and administered surveys to parents of children who are admitted to a pediatric level 1 trauma center for injury. Surveys were given bi-weekly regarding care children received after hospital discharge. At 3 months, parents were surveyed over the phone on whether they were able to access all needed health services and if there were any perceived barriers to obtaining or providing at-home care. At 6 months, parents were given the Child & Family Follow-up Survey to assess ongoing physical, mental, social, and scholastic needs. Free responses and transcribed interviews were analyzed using thematic content analysis and frequencies are reported for discrete data. RESULTS/ANTICIPATED RESULTS: Out of 50 families recruited, 47 completed follow-up assessments. At 3 months, common themes regarding challenges after hospital discharge included difficulty scheduling specialist care; uncertainty in managing their child's pain; transitioning home without enough knowledge to meet their child's medical needs; lack of communication between multiple providers; distress at having providers release children to full activities before caregivers were comfortable. At 6 months, approximately 24% of parents reported children had ongoing cognitive limitations, 29% reported emotional problems, 19% reported physical limitations, 33.3% reported difficulty in school, and 15% reported play/social difficulties. DISCUSSION/SIGNIFICANCE OF IMPACT: Evidence suggests families face significant barriers in accessing follow-up care, despite nearly universal health insurance coverage for children. Further, a large percentage of parents report ongoing health needs, despite the majority of the cohort having only mild or moderate severity injuries. Making follow-up care more patient-centered for families of traumatically injured children may improve compliance with medical regiments and reduce the likelihood of future disability. Examples of this may be coordinating care among multiple specialty providers, so that patients with multiple injuries can schedule multiple follow-up appointments on the same day. Additionally, more caregiver education on administering pain medication, caring for wounds, and safe practices for returning to full activities would be beneficial for families. Long-term prevalence and predictors of prolonged grief disorder amongst bereaved cancer caregivers: A cohort study Rachel D. Zordan, Melanie L. 
Bell, Melanie Price, Cheryl Remedios, Elizabeth Lobb, Christopher Hall, Peter Hudson Journal: Palliative & Supportive Care / Volume 17 / Issue 5 / October 2019 The short-term impact of prolonged grief disorder (PGD) following bereavement is well documented. The longer term sequelae of PGD however are poorly understood, possibly unrecognized, and may be incorrectly attributed to other mental health disorders and hence undertreated. The aims of this study were to prospectively evaluate the prevalence of PGD three years post bereavement and to examine the predictors of long-term PGD in a population-based cohort of bereaved cancer caregivers. A cohort of primary family caregivers of patients admitted to one of three palliative care services in Melbourne, Australia, participated in the study (n = 301). Sociodemographic, mental health, and bereavement-related data were collected from the caregiver upon the patient's admission to palliative care (T1). Further data addressing circumstances around the death and psychological health were collected at six (T2, n = 167), 13 (T3, n = 143), and 37 months (T4, n = 85) after bereavement. At T4, 5% and 14% of bereaved caregivers met criteria for PGD and subthreshold PGD, respectively. Applying the total PGD score at T4, linear regression analysis found preloss anticipatory grief measured at T1 and self-reported coping measured at T2 were highly statistically significant predictors (both p < 0.0001) of PGD in the longer term. For almost 20% of caregivers, the symptoms of PGD appear to persist at least three years post bereavement. These findings support the importance of screening caregivers upon the patient's admission to palliative care and at six months after bereavement to ascertain their current mental health. Ideally, caregivers at risk of developing PGD can be identified and treated before PGD becomes entrenched. 2170 Risk factors for prescription opioid misuse after traumatic injury in adolescents Teresa M. Bell, Christopher A. Harle, Dennis P. Watson, Aaron E. Carroll Journal: Journal of Clinical and Translational Science / Volume 2 / Issue S1 / June 2018 Published online by Cambridge University Press: 21 November 2018, p. 87 Print publication: June 2018 OBJECTIVES/SPECIFIC AIMS: The objective of this study is to determine predictors and motives for sustained opioid use, prescription misuse, and nonmedical opioid use in the adolescent trauma population. METHODS/STUDY POPULATION: This is a prospective cohort study that will follow patients for 1 year and administer surveys to patients on prescription opioid usage; substance use; utilization of pain management and mental health services; mental and physical health conditions; and behavioral and social risk factors. Patient eligibility criteria include: (1) patient is 12–18 years of age; (2) admitted for trauma; (3) english speaking; (4) resides within Indianapolis, IN metropolitan area; and (5) consent can be obtained from a parent or guardian. Patients with severe brain injuries or other injuries that prevent survey participation will be excluded. The patient sample will comprise of 50 traumatically injured adolescents admitted for trauma who will be followed for 12 months after discharge. RESULTS/ANTICIPATED RESULTS: We expect that the results of this study will identify multiple risk factors for sustained opioid use that can be used to create targeted interventions to reduce opioid misuse in the adolescent trauma population. 
Clinical predictors such as opioid type, dosage, and duration that can be modified to reduce the risk of long-term opioid use will be identified. We expect to elucidate clinical, behavioral, and social risk factors that increase the likelihood adolescents will misuse their medication and initiate nonmedical opioid use. DISCUSSION/SIGNIFICANCE OF IMPACT: Trauma is a surgical specialty that often has limited collaboration with behavioral health providers. Collaborative care models for trauma patients to adequately address the psychological impact of a traumatic injury have become more common in recent years. These models have primarily been concerned with the prevention of post-traumatic stress disorder. We would like to apply the findings of our research to better understand what motivates adolescents to misuse pain medications as well as how clinical, individual, behavioral, and social factors affect medication usage. This may help identify patients at greater risk of developing a SUD by asking questions not commonly addressed in the hospital setting. For example, similar to how trauma centers have mandated brief interventions on alcohol use be performed for center verification, screening patients' on their social environment may identify patients at greater risk for SUD than assumed. The long-term goal would be to prevent opioid use disorders in injured adolescents by providing better post-acute care support, possibly by developing and implementing a collaborative care model that addresses opioid use. Additionally, we believe our findings could be applied in the acute care setting as well to help inform opioid prescribing and pain management methods in the acute phase of an injury. Genetic testing to determine which opioid to prescribe pediatric surgical patients is starting to be done at some pediatric hospitals. Certain genes determine which specific opioid is most effective in controlling a patient's pain and, further, using the optimal opioid medication can also reduce overdose. Our findings may help refine prescribing patterns that could increase or decrease the likelihood of developing SUD in patients with certain genetic, clinical, behavioral, and social characteristics. 2510: QIPR: Creating a Quality Improvement Project Registry Amber L. Allen, Christopher Barnes, Kevin S. Hanson, David Nelson, Randy Harmatz, Eric Rosenberg, Linda Allen, Lilliana Bell, Lynne Meyer, Debbie Lynn, Jeanette Green, Peter Iafrate, Matthew McConnell, Patrick White, Samantha Davuluri, Tarun Gupta Akirala Journal: Journal of Clinical and Translational Science / Volume 1 / Issue S1 / September 2017 Published online by Cambridge University Press: 10 May 2018, pp. 20-21 OBJECTIVES/SPECIFIC AIMS: To create a searchable public registry of all Quality Improvement (QI) projects. To incentivize the medical professionals at UF Health to initiate quality improvement projects by reducing startup burden and providing a path to publishing results. To reduce the review effort performed by the internal review board on projects that are quality improvement Versus research. To foster publication of completed quality improvement projects. To assist the UF Health Sebastian Ferrero Office of Clinical Quality & Patient Safety in managing quality improvement across the hospital system. METHODS/STUDY POPULATION: This project used a variant of the spiral software development model and principles from the ADDIE instructional design process for the creation of a registry that is web based. 
To understand the current registration process and management of quality projects in the UF Health system a needs assessment was performed with the UF Health Sebastian Ferrero Office of Clinical Quality & Patient Safety to gather project requirements. Biweekly meetings were held between the Quality Improvement office and the Clinical and Translational Science – Informatics and Technology teams during the entire project. Our primary goal was to collect just enough information to answer the basic questions of who is doing which QI project, what department are they from, what are the most basic details about the type of project and who is involved. We also wanted to create incentive in the user group to try to find an existing project to join or to commit the details of their proposed new project to a data registry for others to find to reduce the amount of duplicate QI projects. We created a series of design templates for further customization and feature discovery. We then proceed with the development of the registry using a Python web development framework called Django, which is a technology that powers Pinterest and the Washington Post Web sites. The application is broken down into 2 main components (i) data input, where information is collected from clinical staff, Nurses, Pharmacists, Residents, and Doctors on what quality improvement projects they intend to complete and (ii) project registry, where completed or "registered" projects can be viewed and searched publicly. The registry consists of a quality investigator profile that lists contact information, expertise, and areas of interest. A dashboard allows for the creation and review of quality improvement projects. A search function enables certain quality project details to be publicly accessible to encourage collaboration. We developed the Registry Matching Algorithm which is based on the Jaccard similarity coefficient that uses quality project features to find similar quality projects. The algorithm allows for quality investigators to find existing or previous quality improvement projects to encourage collaboration and to reduce repeat projects. We also developed the QIPR Approver Algorithm that guides the investigator through a series of questions that allows an appropriate quality project to get approved to start without the need for human intervention. RESULTS/ANTICIPATED RESULTS: A product of this project is an open source software package that is freely available on GitHub for distribution to other health systems under the Apache 2.0 open source license. Adoption of the Quality Improvement Project Registry and promotion of it to the intended audience are important factors for the success of this registry. Thanks goes to the UW-Madison and their QI/Program Evaluation Self-Certification Tool (https://uwmadison.co1.qualtrics.com/SE/?SID=SV_3lVeNuKe8FhKc73) used as example and inspiration for this project. DISCUSSION/SIGNIFICANCE OF IMPACT: This registry was created to help understand the impact of improved management of quality projects in a hospital system. The ultimate result will be to reduce time to approve quality improvement projects, increase collaboration across the UF Health Hospital system, reduce redundancy of quality improvement projects and translate more projects into publications. Host associations and turnover of haemosporidian parasites in manakins (Aves: Pipridae) ALAN FECCHIO, MARIA SVENSSON-COELHO, JEFFREY BELL, VINCENZO A. ELLIS, MATTHEW C. MEDEIROS, CHRISTOPHER H. TRISOS, JOHN G. BLAKE, BETTE A. 
LOISELLE, JOSEPH A. TOBIAS, REBEKA FANTI, ELYSE D. COFFEY, IUBATÃ P. DE FARIA, JOÃO B. PINHO, GABRIEL FELIX, ERIKA M. BRAGA, MARINA ANCIÃES, VASYL TKACH, JOHN BATES, CHRISTOPHER WITT, JASON D. WECKSTEIN, ROBERT E. RICKLEFS, IZENI P. FARIAS Journal: Parasitology / Volume 144 / Issue 7 / June 2017 Parasites of the genera Plasmodium and Haemoproteus (Apicomplexa: Haemosporida) are a diverse group of pathogens that infect birds nearly worldwide. Despite their ubiquity, the ecological and evolutionary factors that shape the diversity and distribution of these protozoan parasites among avian communities and geographic regions are poorly understood. Based on a survey throughout the Neotropics of the haemosporidian parasites infecting manakins (Pipridae), a family of Passerine birds endemic to this region, we asked whether host relatedness, ecological similarity and geographic proximity structure parasite turnover between manakin species and local manakin assemblages. We used molecular methods to screen 1343 individuals of 30 manakin species for the presence of parasites. We found no significant correlations between manakin parasite lineage turnover and both manakin species turnover and geographic distance. Climate differences, species turnover in the larger bird community and parasite lineage turnover in non-manakin hosts did not correlate with manakin parasite lineage turnover. We also found no evidence that manakin parasite lineage turnover among host species correlates with range overlap and genetic divergence among hosts. Our analyses indicate that host switching (turnover among host species) and dispersal (turnover among locations) of haemosporidian parasites in manakins are not constrained at this scale. The Washington Treaty era: neutralising the Pacific By Christopher M. Bell, Christopher M. Bell is Professor of History at Dalhousie University, Canada Edited by Christian Buchet, N.A.M. Rodger Book: The Sea in History - The Modern World Published by: Boydell & Brewer Print publication: 17 February 2017, pp 510-520 View extract ABSTRACT.The 1922 Washington Treaty and its associated agreements "froze" the battlefleet strengths of Britain, the United States and Japan in the ratio 5:5:3. They also forbade the "fortification" of naval bases in the Western Pacific. The intention was to prevent a naval race such as seemed to have encouraged the First World War, and to keep potential enemies too far apart to attack one another. Initially it worked, but in the 1930s it kept Britain and the United States too far away to hamper Japanese aggression in China. Increasingly preoccupied with aggression in Europe, the Western powers attempted to deter Japan with inadequate forces, and long-range Japanese strikes inflicted heavy losses on both at the beginning of the Pacific War. RÉSUMÉ.Le traité naval de Washington de 1922 et les termes en découlant « gelèrent » les forces des flottes de guerre de la Grande-Bretagne, des États-Unis et du Japon selon le ratio 5:3:3. Ils interdirent également la « fortification » des bases navales dans le Pacifique occidental. Visant à prévenir une course à la puissance maritime comme celle semblant avoir conduit à la première guerre mondiale, le traité avait également pour intention de garder des ennemis potentiels trop éloignés géographiquement pour pouvoir s'attaquer. Initialement, le plan fonctionna mais dans les années 30, la Grande-Bretagne et les États-Unis se retrouvèrent trop loin pour empêcher l'invasion japonaise en Chine. 
De plus en plus préoccupés par les conflits en Europe, les puissances occidentales tentèrent d'enrayer la menace de l'Empire du Japon avec des forces inadaptées et les frappes de longue portée japonaises infligèrent aux deux côtés de lourdes pertes au début de la guerre du Pacifique. Japan's emergence as a first-class naval power around the beginning of the 20th century created serious, and increasingly complex, challenges for both Great Britain and the United States. The most dangerous maritime threats to Britain's extensive imperial and economic interests in the Far East had traditionally come from other European powers. Any immediate danger from a rising Japan was effectively neutralized by the Anglo-Japanese Alliance, concluded in 1902 and renewed in 1905 and 1911. The steady growth of Japan's power and suspicions of its long-term ambitions nevertheless fuelled concerns about its reliability as an ally. By 1918 British naval leaders believed that Japan hoped to exclude western powers from China and, in time, to obtain regional hegemony, goals that would eventually lead to war. The Concordance and Heritability of Type 2 Diabetes in 34,166 Twin Pairs From International Twin Registers: The Discordant Twin (DISCOTWIN) Consortium Gonneke Willemsen, Kirsten J. Ward, Christopher G. Bell, Kaare Christensen, Jocelyn Bowden, Christine Dalgård, Jennifer R. Harris, Jaakko Kaprio, Robert Lyle, Patrik K.E. Magnusson, Karen A. Mather, Juan R. Ordoňana, Francisco Perez-Riquelme, Nancy L. Pedersen, Kirsi H. Pietiläinen, Perminder S. Sachdev, Dorret I. Boomsma, Tim Spector Published online by Cambridge University Press: 18 December 2015, pp. 762-771 Twin pairs discordant for disease may help elucidate the epigenetic mechanisms and causal environmental factors in disease development and progression. To obtain the numbers of pairs, especially monozygotic (MZ) twin pairs, necessary for in-depth studies while also allowing for replication, twin studies worldwide need to pool their resources. The Discordant Twin (DISCOTWIN) consortium was established for this goal. Here, we describe the DISCOTWIN Consortium and present an analysis of type 2 diabetes (T2D) data in nearly 35,000 twin pairs. Seven twin cohorts from Europe (Denmark, Finland, Norway, the Netherlands, Spain, Sweden, and the United Kingdom) and one from Australia investigated the rate of discordance for T2D in same-sex twin pairs aged 45 years and older. Data were available for 34,166 same-sex twin pairs, of which 13,970 were MZ, with T2D diagnosis based on self-reported diagnosis and medication use, fasting glucose and insulin measures, or medical records. The prevalence of T2D ranged from 2.6% to 12.3% across the cohorts depending on age, body mass index (BMI), and national diabetes prevalence. T2D discordance rate was lower for MZ (5.1%, range 2.9–11.2%) than for same-sex dizygotic (DZ) (8.0%, range 4.9–13.5%) pairs. Across DISCOTWIN, 720 discordant MZ pairs were identified. Except for the oldest of the Danish cohorts (mean age 79), heritability estimates based on contingency tables were moderate to high (0.47–0.77). From a meta-analysis of all data, the heritability was estimated at 72% (95% confidence interval 61–78%). This study demonstrated high T2D prevalence and high heritability for T2D liability across twin cohorts. Therefore, the number of discordant MZ pairs for T2D is limited. 
By combining national resources, the DISCOTWIN Consortium maximizes the number of discordant MZ pairs needed for in-depth genotyping, multi-omics, and phenotyping studies, which may provide unique insights into the pathways linking genes to the development of many diseases. A Global View of Molecule-Forming Clouds in the Galaxy IAU Issue 315 - From Interstellar Clouds... Steven J. Gibson, Ward S. Howard, Christian S. Jolly, Jonathan H. Newton, Aaron C. Bell, Mary E. Spraggs, J. Marcus Hughes, Aaron M. Tagliaboschi, Christopher M. Brunt, A. Russell Taylor, Jeroen M. Stil, Thomas M. Dame Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S315 / August 2015 Published online by Cambridge University Press: 12 September 2016, E27 Print publication: August 2015 We have mapped cold atomic gas in 21cm line H i self-absorption (HISA) at arcminute resolution over more than 90% of the Milky Way's disk. To probe the formation of H2 clouds, we have compared our HISA distribution with CO J = 1-0 line emission. Few HISA features in the outer Galaxy have CO at the same position and velocity, while most inner-Galaxy HISA has overlapping CO. But many apparent inner-Galaxy HISA-CO associations can be explained as chance superpositions, so most inner-Galaxy HISA may also be CO-free. Since standard equilibrium cloud models cannot explain the very cold H i in many HISA features without molecules being present, these clouds may instead have significant CO-dark H2. Exploring social inclusivity within the University of the Third Age (U3A): a model of collaborative research REBECCA PATTERSON, SUZANNE MOFFATT, MAUREEN SMITH, JESSICA SCOTT, CHRISTOPHER MCLOUGHLIN, JUDITH BELL, NORMAN BELL Journal: Ageing & Society / Volume 36 / Issue 8 / September 2016 Published online by Cambridge University Press: 19 June 2015, pp. 1580-1603 Lifelong learning is believed to have physical, social and emotional benefits for older adults. In recognition of this, numerous programmes encouraging learning in later life exist worldwide. One example is the University of the Third Age (U3A) – a lifelong learning co-operative rooted in peer-support and knowledge sharing. This article is based on a collaborative study conducted by university researchers and members of a U3A in North-East England (United Kingdom) investigating the social inclusivity of the group in light of low attendance levels among those from social housing and non-professional backgrounds. A qualitative approach comprising semi-structured interviews and focus groups was adopted to explore knowledge and experience of lifelong learning and the U3A. Sixty individuals aged 50+ were interviewed. The demographic profile of participants largely reflected the socio-economic make-up of the area, with the majority living in areas of high socio-economic deprivation. Several barriers to lifelong learning were revealed, including: poor health, insufficient transport and caring responsibilities. Regarding U3A participation, three exclusionary factors were outlined: lack of knowledge, organisational name and location. Poor comprehension of the purpose and remit of the U3A can result in the development of 'middle-class' myths regarding membership, perpetuating poor participation rates among lower socio-economic groups. Such perceptions must be dispelled to allow the U3A to fulfil its potential as a highly inclusive organisation. Driving Factors in the Colonization of Oceania: Developing Island-Level Statistical Models to Test Competing Hypotheses Adrian V. 
Bell, Thomas E. Currie, Geoffrey Irwin, Christopher Bradbury Journal: American Antiquity / Volume 80 / Issue 2 / April 2015 Print publication: April 2015 Migration is a key driver of human cultural and genetic evolution, with recent theoretical advances calling for work to accurately identify factors behind early colonization patterns. However, inferring prehistoric migration strategies is a controversial field of inquiry that largely relies on interpreting settlement chronologies and constructing plausible narratives around environmental factors. Model selection approaches, along with new statistical models that match the dynamic nature of colonization, offers a more rigorous framework to test competing theories. We demonstrate the utility of this approach by developing an Island-Level Model of Colonization adapted from epidemiology in a Bayesian model-selection framework. Using model selection techniques, we assess competing colonization theories of Near and Remote Oceania, showing that models of exploration angles and risk performed considerably better than models using inter-island distance, suggesting early seafarers were already adept at long-distance travel. These results are robust after artificially increasing the uncertainty around settlement times. We show how decades of thinking on colonization strategies can be brought together and assessed in one statistical framework, providing us with greater interpretive power to understand a fundamental feature of our past. By Robert R. H. Anholt, M. Fernanda Ceriani, Ann-Shyn Chiang, Anupama Dahanukar, Brigitte Dauwalder, J. Steven de Belle, Claude Desplan, Taylor R. Fore, Leslie C. Griffith, Yukinori Hirano, Ken Honjo, Junjiro Horiuchi, Bryon N. Hughson, George R. Jackson, Charalambos P. Kyriacou, Ricardo Leitão-Gonçalves, Fritz-Olaf Lehmann, Chih-Yung Lin, Trudy F. C. Mackay, Ian A. Meinertzhagen, Nara I. Muraro, Dick R. Nässel, Daniela Ostrowski, Viet Pham, Carlos Ribeiro, Jessica Robertson, Bidisha Roy, C. Dustin Rubinstein, Shinjiro Saeki, Minoru Saitoe, Christi A. Scott, Lisha Shao, Marla B. Sokolowski, Eric A. Stone, Christopher J. Tabone, W. Daniel Tracey, Nina Vogt, Mariana Wolfner, Troy Zars, Bing Zhang, Yi Zhong Edited by Josh Dubnau Book: Behavioral Genetics of the Fly (<I>Drosophila Melanogaster</I>) Published online: 05 July 2014 Print publication: 26 June 2014, pp vi-viii
HIV/AIDS length of stay in Portugal under financial constraints: a longitudinal study for public hospitals, 2009–2014 Gonçalo F. Augusto, Sara S. Dias, Alexandre V. Abrantes and Maria R. O. Martins The global financial crisis and the economic and financial adjustment programme (EFAP) forced the Portuguese government to adopt austerity measures, which also included the health sector. The aim of this study was to analyse factors associated with HIV/AIDS patients' length of stay (LOS) among Portuguese hospitals, and the potential impact of the EFAP measures on hospitalizations among HIV/AIDS patients. Data used in this analysis were collected from the Portuguese database of Diagnosis Related Groups (DRG). We considered only discharges classified under MDC 24, the major diagnostic category created for patients with HIV infection. A total of 20,361 hospitalizations occurring between 2009 and 2014 in 41 public hospitals were included in the analysis. The outcome was the number of days between hospital admission and discharge dates (LOS). A hierarchical Poisson regression model with random effects was used to analyse the relation between LOS and patient, treatment and setting characteristics. To more effectively analyse the impact of the EFAP implementation on HIV/AIDS hospitalizations, yearly variables, as well as a variable measuring hospitals' financial situation (current ratio), were included. At the 5% level, having HIV/AIDS as the principal diagnosis, the number of secondary diagnoses, the number of procedures, and having tuberculosis have a positive impact on HIV/AIDS LOS, while being female, urgent admission, in-hospital mortality, pneumocystis pneumonia, hepatitis C, and the hospital's current ratio contribute to a decrease in LOS. Additionally, LOS between 2010 and 2014 was significantly shorter in comparison to 2009. Differences in LOS across hospitals are significant after controlling for these variables. Following the EFAP, a number of cost-containment measures in the health sector were implemented. Results from our analysis suggest that the implementation of these measures contributed to a significant decrease in LOS among HIV/AIDS patients in Portuguese hospitals. The economic and financial crisis that started in 2008 reached Portugal in 2009 and had economic and social consequences that are still felt to this day. Portugal has experienced recessions in 2009 (− 2.98% in GDP), 2011 (− 1.83%), 2012 (− 4.03%) and 2013 (− 1.13%), and this was accompanied by a dramatic rise in the unemployment rate, which rose from 7.6% in 2008 to 16.2% in 2013 [1]. Due to the high level of public debt and the increasing difficulty in financing its economy, the country received a financial bailout from the European Commission, the International Monetary Fund and the European Central Bank [2]. In the Memorandum of Understanding (MoU) signed with the three institutions above, the Portuguese government committed to implementing a number of reforms aimed at reducing public spending. With regard to the health sector, the MoU set a number of measures aimed at cost containment and increasing efficiency within the Portuguese National Health Service (NHS) [3]. These included severe cuts in the wages of health care workers; the creation and implementation of clinical guidelines; the reorganisation and rationalisation of the hospital network through specialisation and concentration of hospital and emergency services; and setting up a system for comparing hospital performance (benchmarking) [4]. 
The consequences of the economic and financial crisis on the health of citizens and on health care have been studied all over Europe [5–7] and have generated intense debate. However, the impact of these events on health care use is still unclear, mainly due to a lack of measures to monitor the impact of the crisis and its consequences on health and health care. From the demand side, one could argue that income reductions could have an impact on the use of health care services, as international evidence shows that low-income people have a higher use of in-patient care [8], and longer in-patient stays [9], due to deterioration of their health status. From the supply side, budget cuts could have led hospitals to reduce inefficiencies but also to decrease the quality of health care provided (e.g. by reducing length of stay or decreasing the number of admissions). The impact of the crisis on the health of the population has been the focus of recent research but findings are very controversial. Following the onset of the crisis, a rise in suicides has been observed in Greece, Spain, the UK, and the USA [10–13]; and a rise in mental health disorders has been observed in Greece and Spain [14–16]. Literature also suggests that there has been an increase in cases of infectious disease, homicides, substance abuse, and poor self-reported health in Greece [10, 17–19]. In contrast, there is evidence showing that the economic crisis is associated with reduced mortality related to road traffic accidents and cardiovascular events [20]. The existing evidence suggests that since austerity measures came into effect in 2011 there has been a decline in access to health care in Portugal, particularly among vulnerable population groups who do not benefit from user charges exemptions [21]. Other Southern European countries, namely Greece and Spain [22, 23], experienced a similar situation and witnessed a serious setback in terms of universal health coverage, population well-being and the welfare state as a result of austerity measures [24]. The crisis led many countries to reduce budgets earmarked for control and prevention of infectious diseases, including HIV [25, 26]. People living with HIV (PLWHIV) are a vulnerable group who need constant hospital care (both outpatient and inpatient) and, therefore, constitute a relevant case study to evaluate how the austerity measures imposed by the MoU had an impact on health care provision. As PLWHIV are living longer and experiencing age-associated comorbidities, hospitalizations have become an important indicator of healthcare expenditure in these patients. As in the rest of the world, in Portugal HIV-related hospitalizations are among the most expensive. In 2008, the average cost of treatment was 14,277 EUR/patient/year, with the main cost-driver being antiretroviral therapy (ART) (EUR 9598), followed by hospitalization costs (EUR 1323) [27]. In addition, the weight of hospitalization costs was considerably higher for the most severely affected patients [27]. By identifying and characterising the variations in length of stay (LOS) among HIV/AIDS hospitalizations across different Portuguese hospitals, the aim of this paper was to analyse the potential impact of the economic and financial adjustment programme (EFAP) on HIV/AIDS patients' LOS. Data used in this analysis were collected from the Portuguese national database of the diagnosis related groups (AP-DRG v21.0) managed by the Central Administration of the Health System (ACSS). The DRG database is anonymous and available for scientific research. 
DRGs were first introduced in Portuguese hospitals through a pilot study in 1984 and, since the 1990s, DRGs have been used for DRG-based hospital budget allocation from the NHS to hospitals and for DRG-based case payment from third-party payers [28]. Currently, there is only one DRG system in Portugal, which applies to all NHS (public) hospitals and all patients (inpatients and ambulatory surgery), with the exception of outpatients and patients treated in psychiatric and rehabilitation healthcare settings. Private hospitals are not included in the system. The DRG system currently in place defines 669 DRGs within 25 Major Diagnostic Categories (MDCs), each corresponding to one organ or physiological system [28]. The DRG system is supervised and maintained by the ACSS within the Ministry of Health. In the DRG database, each record corresponds to a discharge episode (hospitalization) and includes information about the patient as well as information collected during the hospitalization, including age, sex, place of residence, type of admission (elective or urgent), dates of hospitalization and discharge, principal diagnosis and secondary diagnoses, procedures during hospitalization, and outcome at discharge (dead or alive).

We considered only discharges classified under the MDC created for patients with HIV infection (MDC 24). Thus, the dataset provided by the ACSS included 20,580 discharges registered in public acute care hospitals in the Portuguese NHS classified under MDC 24, between 1st January 2009 and 31st December 2014. For this study we considered only those that met the following criteria: inpatients aged 18 or older, and hospitalizations from hospitals with more than 10 discharges. Following these criteria, 20,361 hospitalizations occurring in 41 hospitals were included in the analysis (Fig. 1: selection profile of the study population).

Unlike previous studies [29, 30], transfers were not excluded from the analysis, in order to capture the dynamics of the referral system among NHS hospitals. Thus, shorter hospitalizations in smaller hospitals followed by longer hospitalizations in bigger hospitals were all included in this analysis. Length of stay was considered for each patient discharged, including patients transferred between different hospital centres (LOS was not summed), in order to capture all hospitalizations. Transfers between hospitals represented only 2.5% (n = 507) of the total sample.

Outcomes and covariates

The outcome variable was the number of days between hospital admission and discharge dates (LOS). The main explanatory variable was the year, as we aimed to examine the impact of the EFAP, which was implemented in Portugal between May 2011 and May 2014. We examined three types of covariates: patient, treatment and setting variables. Patient covariates were: gender, age (at the date of admission), region of residence, type of admission (elective or urgent), readmission within 30 days of discharge, in-hospital mortality, presence of selected co-infections (Pneumocystis pneumonia, Hepatitis B, Hepatitis C, and Tuberculosis), HIV/AIDS as principal diagnosis at admission, and the number of secondary diagnoses (obtained as a sum of diagnoses apart from the main diagnosis, out of 19 possible [29, 31]). Treatment covariates included the number of procedures (obtained as a sum of procedures, out of 20 possible [29, 31]). Setting covariates included whether the hospital was merged into a Hospital Centre or not, and the hospital's current ratio.
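As a purely illustrative sketch of the selection step described above, the following R snippet applies the same inclusion criteria; the data frame mdc24 and its column names are hypothetical placeholders, not the ACSS field names.

# Hypothetical sketch of the inclusion criteria: adults only, and hospitals
# with more than 10 MDC-24 discharges over 2009-2014.
drg <- subset(mdc24, age >= 18)

discharges_per_hospital <- table(drg$hospital)
keep <- names(discharges_per_hospital[discharges_per_hospital > 10])
drg  <- drg[drg$hospital %in% keep, ]

nrow(drg)                      # 20,361 discharges in the published analysis
length(unique(drg$hospital))   # 41 hospitals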
Pneumocystis pneumonia became a common manifestation of HIV infection in the developed world during the 1980s, and frequently resulted in death. Following the introduction of HAART in 1996, there was a dramatic decline in the incidence of opportunistic infections in HIV/AIDS patients (including Pneumocystis pneumonia). However, despite the major benefits associated with HAART, Pneumocystis pneumonia remains one of the most common AIDS-defining diagnoses and most common causes of AIDS-related death, especially in HIV-infected patients who present late into medical care [32]. Hepatitis B and Hepatitis C are also common co-infections among people living with HIV. The estimated prevalence of hepatitis B among people living with HIV is 5–20%; thus, approximately 2 to 4 million people living with HIV worldwide have chronic hepatitis B coinfection [33, 34]. It is estimated that hepatitis C affects 2–15% of people living with HIV worldwide (and up to 90% of those are people who inject drugs) [35]. Likewise, Tuberculosis and HIV/AIDS constitute the main burden of infectious disease in resource-limited countries [36]. Some 14 million individuals worldwide are estimated to be dually infected with HIV and Tuberculosis [37], and TB remains the leading cause of death among people living with HIV [38].

The DRG database records the principal diagnosis and all secondary diagnoses (up to 19) for each discharge using ICD-9 codes. Table 1 (ICD-9 codes and descriptions for the selected diagnoses, including HIV infection and pneumocystosis) lists the codes used to identify HIV and the selected co-infections in the DRG dataset provided.

At the beginning of the 2000s, the NHS hospital network was reformed. Firstly, hospitals were transformed into public enterprises (2005) with the aim of promoting autonomous management and improving efficiency. Secondly, some hospitals were grouped into Hospital Centres. The rationale behind the creation of Hospital Centres was to improve efficiency through better coordination between institutions providing hospital care in the same geographical area [39]. The process of merging hospitals took place over several years, which explains why there were important changes during the study period (2009–2014): in 2009 there were 46 hospital institutions and in 2014 there were 41, and therefore different codes in the dataset provided by ACSS currently correspond to the same Hospital Centre. In order to have the same number of institutions during the study period, hospitals were coded according to their current status, so as to simulate the Hospital Centre of which they are currently part, and a dummy variable was added to measure the effect of this merger.

Finally, a variable measuring the hospital's financial situation was added to this analysis. The current ratio is a liquidity ratio that measures a company's ability to pay short-term and long-term obligations. To measure this ability, the current ratio considers the current total assets of a company (both liquid and illiquid) relative to that company's current total liabilities [40], as follows:

$$ Current\ Ratio=\frac{Current\ Assets}{Current\ Liabilities} $$

The annual current ratio for each hospital institution in the DRG dataset was taken from the annual report and accounts of each hospital between 2009 and 2014.

The skewness and heterogeneity of LOS are a challenge for statistical analysis [41, 42]. In particular, HIV/AIDS LOS has 6–7% of outliers and its distribution is very asymmetric [31].
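As a purely illustrative worked example of the current ratio defined above (the figures are hypothetical and not taken from any hospital's accounts): a hospital reporting current assets of EUR 40 million against current liabilities of EUR 50 million has

$$ Current\ Ratio=\frac{40}{50}=0.8 $$

A value below 1 indicates that short-term obligations exceed the assets available to meet them, while a value above 1 indicates a more comfortable liquidity position.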
LOS has been analysed using many different methods. For example, Barbour et al. studied changes among HIV/AIDS inpatients using a multivariable linear regression model [43], while Huang et al. analysed LOS and costs based on a generalized linear mixed model [44]. Other authors, like Wang et al., analysed maternity LOS with a two-component Poisson mixed model [45]. However, researchers must take into consideration that hospitalizations from the same hospital are often correlated, and ignoring the dependence of clustered data may lead to illegitimate associations and false interpretations [42].

A hierarchical Poisson regression model was specified to analyse the relation between LOS and the covariates. In DRG data, patients are nested within hospitals on the basis of their own choices, which can range from place of residence, to trust in a particular doctor, or even the hospital's reputation. This important element breaks the independence assumptions of classical regression analysis. Hence, hierarchical modelling is considered a more suitable statistical method when using multilevel structured data, like patients clustered within hospitals [46]. Additionally, hospital random effects, which capture unexplained but nevertheless important factors, can be used to explain variations in hospital quality/performance [42].

Let $y_{ij}$ ($i = 1, 2, \dots, m$; $j = 1, 2, \dots, n_i$) be the count variable (LOS) of the $j$th observation (hospitalization) in the $i$th hospital, where $m$ is the number of hospitals and $\sum_{i=1}^{m} n_i = n$ is the sample size. The generalized linear model takes the form

$$ \theta_{ij}=\eta_{ij}=\chi_{ij}\beta+\nu_i $$

where $\chi_{ij}$ is a vector of covariates with regression coefficients $\beta$, and $\nu_i$ is assumed to be independently and normally distributed. We used a mediation analysis to check whether the year dummies vary depending on whether the current ratio is included or not. All statistical analyses were performed using the statistical software R, with the glmmPQL function from the MASS library.

The overall median length of stay (LOS) was 11 days (IQR = 16). Table 2 reports the characteristics of HIV discharges in Portuguese NHS hospitals, 2009–2014 (discharge counts; medians and IQRs for length of stay, age, number of secondary diagnoses and number of procedures; and n (%) for gender, region of residence, type of admission, readmission within 30 days, in-hospital mortality, HIV/AIDS as principal diagnosis and the selected co-infections).

Out of 20,361 discharges, 14,628 (71.8%) were male and the median age was 44 years (IQR = 15). During the study period, the median number of secondary diagnoses was 7 (IQR = 5) and the median number of procedures was 8 (IQR = 7). The majority of hospitalizations corresponded to patients living in the Lisbon and the Tagus Valley region (53.3%). During the study period (2009–2014), there was a steady decrease in the number of hospitalizations (Table 2), while the majority corresponded to urgent admissions (83.4%). The most common HIV-related infections among hospitalizations between 2009 and 2014 were tuberculosis (43.6%) and hepatitis C (28.0%). In-hospital mortality during the same period was 12.6%.
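To make the model specification above concrete, the following is a minimal sketch of how a hierarchical Poisson model with a hospital-level random intercept could be fitted with the glmmPQL function mentioned above. The data frame and column names (drg, los, hospital and the covariates) are hypothetical placeholders, not the authors' actual code or the ACSS field names.

library(MASS)   # provides glmmPQL (penalized quasi-likelihood estimation, built on nlme)

# Hypothetical sketch: one row per discharge episode, 'hospital' identifies the cluster
fit <- glmmPQL(
  fixed  = los ~ factor(year) + female + age + urgent + died +
                 n_sec_diagnoses + n_procedures + hiv_principal +
                 pcp + hep_b + hep_c + tb + merged + current_ratio,
  random = ~ 1 | hospital,          # random intercept nu_i for each hospital
  family = poisson(link = "log"),
  data   = drg
)

summary(fit)    # fixed-effect estimates on the log scale
ranef(fit)      # estimated hospital-level random effects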
The hierarchical Poisson model was estimated by penalized quasi-likelihood, and the majority of covariates have a significant impact on LOS (Table 3). Although age is not statistically significant, it was retained in the model to control for possible confounding. (Table 3: Hierarchical Poisson regression model estimates for HIV/AIDS LOS, 2009–2014. Covariates reported include gender (female), year dummies with 2009 as the reference, number of secondary diagnoses, number of procedures, urgent admission, readmission within 30 days, in-hospital mortality, HIV/AIDS as principal diagnosis, and hospital merger.)

In contrast with the estimated coefficient of the variable that measured hospital mergers, the estimated coefficients of the year dummies remained statistically significant after introducing the current ratio in the model (Table 3). Thus, adjusting for other factors, patients hospitalized during 2010 and 2011 had an estimated LOS 0.092% and 0.109% lower, respectively, than those hospitalized in 2009, while patients hospitalized in 2012, 2013 and 2014 had an estimated LOS 0.186%, 0.268% and 0.262% lower than those hospitalized in 2009 (Table 3). Adjusting for other variables, estimated LOS was lower for hospitalizations resulting in death, for women, and for patients with urgent admission (Table 3). Patients with urgent admission had an estimated LOS 0.068% lower than those with elective admission (Table 3). In contrast, patients with a higher number of diagnoses (or a higher number of procedures) have a higher estimated HIV/AIDS LOS. Adjusting for other variables, one additional secondary diagnosis increased LOS by 0.043%, while one additional procedure increased LOS by 0.085% (Table 3). Adjusting for other factors, when analysing the selected co-morbidities, patients co-infected with Pneumocystis pneumonia and hepatitis C had an estimated LOS 0.129% and 0.126% shorter, respectively, than those without those co-infections (Table 3). In contrast, patients co-infected with tuberculosis had an estimated LOS 0.391% longer than those without TB. Finally, patients having HIV/AIDS as the principal diagnosis had an estimated LOS 0.085% longer than those with another principal diagnosis (Table 3).

Hospital random effects were estimated to capture differences in unexplained variance in LOS across hospitals, after controlling for all other characteristics. Figure 2 shows these random effects and their respective 95% confidence intervals (CI) for the 41 hospitals analysed (Figure 2: Random effects and 95% CI for each hospital). For the period 2009–2014, most hospitals had an estimated random effect close to the mean value (one). However, two hospitals (40 and 41) showed a large positive effect, extending patients' length of stay.

The constant decline in HIV-related hospitalizations during the period 2009–2014 is in line with what was observed in other studies [47]. In fact, the decrease in HIV incidence observed in Portugal suggests success in controlling the HIV epidemic, following the worldwide trend [48]. In our analysis, most HIV patients (71.8%) hospitalized during the study period were men, which can be explained by the fact that, like in the rest of Europe, most HIV patients in Portugal are men [49]. Therefore, estimated LOS was lower for female patients and longer for male patients. Over the study period, the median age of HIV patients hospitalized increased slightly, suggesting that PLWHIV are living longer, as demonstrated by other studies [47].
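As an illustration of how hospital-level effects of the kind shown in Figure 2 can be inspected from a fitted model, continuing the hypothetical sketch above (fit and hospital are placeholder names, and this is not the authors' code):

# hospital-level random intercepts (BLUPs) from the hypothetical fit above
re <- ranef(fit)
re$hospital <- rownames(re)
re[order(re[["(Intercept)"]]), ]   # rank hospitals from shortest to longest adjusted LOS

# a simple dot chart of the estimated effects
dotchart(sort(re[["(Intercept)"]]),
         labels = re$hospital[order(re[["(Intercept)"]])],
         xlab = "Estimated hospital random effect (log scale)")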
While urgent admissions decreased steadily between 2009 and 2014 (−37.7%) – having dropped by 10.4% between 2009 and 2011 and by 26.2% between 2012 and 2014 – elective admissions increased until 2012 (+36.2%) but declined in the following 2 years (−41.3%).

This study specifies a hierarchical Poisson regression to model HIV/AIDS LOS in Portuguese public hospitals. The estimated LOS of HIV/AIDS patients hospitalized in each year between 2010 and 2014 was significantly shorter than that of patients hospitalized in 2009. A recent study carried out in Portugal analysed all in-patient stays at all Portuguese NHS hospitals over the 2001–2012 period and found that the volume of in-patient stays, particularly non-elective stays, increased significantly, while length of stay became shorter and elective admissions decreased [1]. Although our analysis included HIV/AIDS patients only, and the study period was different, we found similar results regarding the shorter LOS and the decrease in elective admissions. The decrease in LOS for HIV/AIDS hospitalizations found in our analysis could be explained by two different hypotheses. The first is that the EFAP measures might have induced efficiency gains, improving the response of healthcare units. In contrast, the EFAP measures might have reduced the quality of care provided in hospitals, with a reduction in the number of in-patient beds and increasing pressure to reduce LOS and cut costs [50]. However, our findings are not sufficient to support one hypothesis over the other, and further research is needed.

Our results showed that the hospital's financial situation affected HIV/AIDS patients' hospitalizations: a greater current ratio decreased estimated LOS. This finding is supported by other studies that show a strong negative association between LOS and hospitals' operating margins [51]. Long hospitalizations consume many hospital resources and are, therefore, associated with increasing costs. The year dummies remained statistically significant in Model 2, even after introducing the variable measuring hospitals' current ratio. The fact that the annual decrease in LOS for HIV/AIDS patients was not explained by the hospitals' current ratio suggests a generalized pressure to reduce costs not fully related to the hospitals' financial situation. By 2011, NHS hospitals were facing a severe financial situation, with the total amount of arrears (accounts payable to domestic suppliers past due date by 90 days) reaching EUR 3.0 billion [2]. Following the economic and financial adjustment programme (EFAP), a number of cost-containment measures and actions aimed at increasing efficiency in the health sector were implemented between 2012 and 2014. The Memorandum of Understanding clearly established the reduction of hospitals' operating costs as a priority, which is the reason why NHS hospitals were under continuous pressure to cut costs during the period of the EFAP. Our findings suggest that this was an important contributor to the decrease in LOS among HIV/AIDS patients.

While patients' age is not statistically significant at the 5% level, when adjusting for other factors the estimated LOS was significantly lower for patients who died, suggesting that mortality occurs mostly at an early stage of hospitalization. This means that there is high mortality among those patients who are admitted to hospital in more severe stages of AIDS-related illness, as supported by other studies [42].
Both the number of secondary diagnoses and the number of procedures significantly increase LOS, leading to longer hospitalizations. A greater number of diagnoses or procedures suggests a more severe condition of the patient admitted and therefore leads to a delayed discharge [52]. Also, estimated LOS was longer for patients who had HIV/AIDS as the principal diagnosis, suggesting that those patients are admitted in a more severe condition and are therefore more likely to need a longer hospitalization. The estimated LOS for HIV/AIDS patients was shorter for urgent admissions. It is important to highlight that, in Portugal, urgent admissions do not necessarily reflect emergency situations, as noted by previous studies [42]. Due to difficulties in accessing lower levels of care, it is not uncommon for patients to seek assistance directly at a hospital emergency service, thus bypassing primary healthcare [53].

The variable measuring the effect of hospital mergers into hospital centres on estimated LOS for HIV/AIDS patients was not statistically significant in Model 2, after introducing the current ratio. Mergers can be a way of eliminating excess capacity and cutting costs, and additionally they can address performance issues for particular units or services. Hospital mergers in Portugal began in 1999 but were intensified in recent years, as a result of the economic and financial adjustment programme (EFAP) [4]. By concentrating under the same administration hospitals operating in the same geographic area and offering the same services, the aim was to increase efficiency and promote economies of scale. However, results from our study suggest the opposite, considering LOS as an indicator of hospital efficiency. Regarding recent mergers, the literature suggests that there are economies of scale and scope to explore further, but only mergers of relatively small and similar hospitals have been successful [54]. In fact, hospital mergers in Portugal did not achieve the expected efficiency gains, due to the heterogeneity and geographical dispersion of many hospitals. As a result, despite being under the same administration, many hospitals kept the same practices as they had prior to being merged with other hospitals.

Estimated hospital random effects suggest differences amongst hospitals which also require further research. These effects, which capture unexplained factors that are nonetheless important, can be interpreted as differences in hospital efficiency, after controlling for all relevant factors. Hospitals 40 and 41 showed a positive effect, extending LOS for HIV/AIDS patients. Hospital 40 corresponds to a hospital centre in northern Portugal, geographically dispersed and with no differentiated services, while hospital 41 is a large hospital in the Lisbon metropolitan area, offering more differentiated services.

Our study provides an analysis of relevant factors related to LOS among HIV/AIDS hospitalizations between 2009 and 2014 in Portugal. However, it is important to note that healthcare-associated infections have a high prevalence in Portugal – an overall prevalence rate of 10.5% in 2013 [55] – and are responsible for greater medical costs, longer LOS, and an increase in mortality rates. Our analysis did not include other types of pneumonia or urinary tract infections, which are major complications of nosocomial infections, as covariates, but the findings for pneumocystis pneumonia should prompt further research.
Although the EFAP, in place between May 2011 and May 2014, and the severe economic recession brought important social and economic consequences in Portugal, the interpretation of our findings must be carried out with caution. Between 2012 and 2014, Portugal also witnessed changes in the National Network of Long Term Care, which was expanded and might have influenced the overall reduction in LOS among patients in Portuguese NHS hospitals. Our analysis did not address the potential impact of that support network. In fact, an aspect of whole-system performance that is ignored in this analysis is the impact of hospital performance on other sectors within the health system. For instance, it could be the case that the decrease in LOS is being secured at the expense of heavy workloads for rehabilitative and primary care services [56].

This study used comprehensive discharge data compiled in mainland Portugal, and these findings are more generalizable than results based on data from a single hospital. However, this study has limitations, due to the nature of the data [57]. Firstly, there are limitations regarding the retrospective collection of data for administrative purposes, which can allow for mistakes in recording information and/or variability of coding among hospitals. Secondly, the DRG database has very limited clinical information, which would have been important to better understand the clinical profile of HIV/AIDS patients (e.g. the number of years the patient has been engaged in care, viral load, CD4 cell count, ART regime). To track long-term outcomes and quality of care, further research is needed on the information system specifically implemented in NHS hospitals in Portugal to capture these important components of HIV-related care (SI.VIDA). Also, in this analysis, the number of secondary diagnoses was used as a proxy for the number of co-morbidities, and therefore as an indicator of the patient's condition. However, this approach reveals nothing about the severity of each secondary diagnosis. Future research could consider the use of the Elixhauser Comorbidity Index or the Charlson Comorbidity Index [58, 59], which have been widely utilized by health researchers to measure burden of disease and case mix. The fact that hospital institutions were coded in the DRG database according to the hospital centre to which they belong may have prevented a more detailed analysis of the data. Although a dummy variable was considered to capture the aggregation of hospitals into hospital centres, it would be interesting to explore, within a single hospital centre, differences among institutions regarding risk-adjusted LOS. Finally, the selected study period is also a limitation of this analysis. Although the main objective was to measure the impact of the EFAP on hospital in-patient care for PLWHIV, an analysis over a longer period would have allowed us to better distinguish the austerity effect from long-term trends in LOS.

The subject of the impacts of the economic crisis on the health of the population has been the focus of many studies in recent years [5–7]. Health policy research in this field poses important methodological challenges, as it is often difficult to distinguish austerity measures from the overall economic crisis and its impact on health systems. Therefore, the model presented in this study aims to contribute to the analysis of the effects of the economic and financial adjustment programme on a particular group of patients.
This study presents a hierarchical Poisson model to analyse LOS among HIV/AIDS patients in Portuguese public hospitals. A number of variables (HIV/AIDS as principal diagnosis, number of secondary diagnoses, number of procedures and tuberculosis) were found to increase LOS, while others (in-hospital mortality, urgent admission, Pneumocystis pneumonia and hepatitis C) contributed to a decrease in LOS. Our findings also show that LOS decreased during the study period, and elective admissions decreased after 2012. The hospital's current ratio was also found to decrease LOS, meaning that the better the financial situation, the lower the LOS for HIV/AIDS patients. With regard to HIV/AIDS hospitalizations, two of the analysed hospitals showed a large positive effect, extending patients' length of stay. These findings are a contribution to the study of the effects of the austerity measures implemented in Portugal between 2011 and 2014 on hospital care provision to a particularly vulnerable group of patients. Our analysis suggests that the measures put in place to cut costs and increase efficiency in public hospitals contributed to the decrease in HIV/AIDS patients' LOS. Results from this analysis demonstrate the need to study this issue further in order to better understand the effects of the EFAP on health and health care. Additionally, it would be important to implement measures to efficiently monitor health care delivery, particularly during periods of financial constraints.

Sara S. Dias, Alexandre V. Abrantes and Maria R. O. Martins contributed equally to this work.

Abbreviations: ACSS: Central Administration of the Health System [Administração Central do Sistema de Saúde]; ART: Antiretroviral therapy; CD4: Cluster of differentiation 4; DRG: Diagnosis Related Groups; EFAP: Economic and financial adjustment programme; HIV/AIDS: Human immunodeficiency virus/Acquired immunodeficiency syndrome; IQR: Interquartile range; LOS: Length of stay; MDC: Major Diagnostic Category; NHS: National Health Service (Portugal); PLWHIV: People living with HIV.

The authors wish to thank the Central Administration of the Health System (ACSS, Administração Central do Sistema de Saúde), which provided the data, and FCT (Fundação para a Ciência e Tecnologia) for funds to GHTM – UID/Multi/04413/2013. This study had no funding. The DRG database is held by the Central Administration of the Health System (ACSS) and can be requested for research purposes. The authors of this study cannot, for legal reasons, share the database publicly. Any researcher wishing to analyse DRG data should submit a request to the ACSS explaining the intended use of the data. GFA, SSD and MROM performed the statistical analysis. GFA drafted the manuscript, helped by MROM and AVA. All authors read and approved the final manuscript. This study was approved by the Ethics Committee of the Institute of Hygiene and Tropical Medicine (Conselho de Ética do Instituto de Higiene e Medicina Tropical), as part of the PhD project of GFA. The full project protocol was submitted to the Ethics Committee in April 2016 and approved in May 2016. The study used data routinely collected in Portuguese public hospitals. The DRG database was requested from the ACSS – MDC 24 (HIV/AIDS infection) only, for the years 2009–2014 – and was sent to the authors completely anonymized. Therefore, the authors cannot identify any subject in that database. The Central Administration of the Health System (ACSS) is the legal owner of the DRG database in Portugal.
Global Health and Tropical Medicine (GHTM), Instituto de Higiene e Medicina Tropical – Universidade NOVA de Lisboa (IHMT-UNL), Rua da Junqueira 100, 1349-008 Lisbon, Portugal
Epidoc Unit – CEDOC, NOVA Medical School – Universidade Nova de Lisboa (NMS-UNL), Campo Mártires da Pátria 130, 1169-056 Lisbon, Portugal
Center for Innovative Care and Health Technology (ciTechCare), Escola Superior de Saúde de Leiria (ESSLei), Instituto Politécnico de Leiria (IPLeiria), Campus 2, Morro do Lena, Alto do Vieiro, Apartado 4137, 2411-901 Leiria, Portugal
Health Policy and Administration Department, Escola Nacional de Saúde Pública – Universidade NOVA de Lisboa (ENSP-UNL), Avenida Padre Cruz, 1600-560 Lisbon, Portugal

Perelman J, Felix S, Santana R. The great recession in Portugal: impact on hospital care use. Health Policy. 2015;119(3):307–15.
Augusto GF. Cuts in Portugal's NHS could compromise care. Lancet. 2012;379(9814):400.
Barros PP. Health policy reform in tough times: the case of Portugal. Health Policy. 2012;106(1):17–22.
Portugal – Memorandum of understanding on specific economic policy conditionality, 17 May 2011. http://ec.europa.eu/economy_finance/eu_borrower/mou/2011-05-18-mou-portugal_en.pdf. Accessed 27 Sept 2018.
Karanikolos M, Kentikelenis A. Health inequalities after austerity in Greece. Int J Equity Health. 2016;15:83.
Karanikolos M, Mladovsky P, Cylus J, Thomson S, Basu S, Stuckler D, Mackenbach JP, McKee M. Financial crisis, austerity, and health in Europe. Lancet. 2013;381(9874):1323–31.
García-Gómez P, Jiménez-Martín S, Labeaga JM. Consequences of the economic crisis on health and health care systems. Health Econ. 2016;25(Suppl 2):3–5.
Van Doorslaer E, Wagstaff A, Van der Burg H, Christiansen T, DeGraeve D, Duchesne I, et al. Equity in the delivery of health care in Europe and the US. J Health Econ. 2000;19(5):553–83.
Perelman J, Closon M-C. Impact of socioeconomic factors on in-patient length of stay and their consequences in per case hospital payment systems. J Health Serv Res Policy. 2011;16(4):197–202.
Kondilis E, Giannakopoulos S, Gavana M, Ierodiakonou I, Waitzkin H, Benos A. Economic crisis, restrictive policies, and the population's health and health care: the Greek case. Am J Public Health. 2013;103(6):973–9.
Bernal JAL, Gasparrini A, Artundo CM, McKee M. The effect of the late 2000s financial crisis on suicides in Spain: an interrupted time-series analysis. Eur J Pub Health. 2013;23(5):732–6.
Barr B, Taylor-Robinson D, Scott-Samuel A, McKee M, Stuckler D. Suicides associated with the 2008–10 economic recession in England: time trend analysis. BMJ. 2012;345:e5142.
Reeves A, Stuckler D, McKee M, Gunnell D, Chang S-S, Basu S. Increase in state suicide rates in the USA during economic recession. Lancet. 2012;380(9856):1813–4.
Economou M, Madianos M, Peppou LE, Patelakis A, Stefanis CN. Major depression in the era of economic crisis: a replication of a cross-sectional study across Greece. J Affect Disord. 2012;145(3):308–14.
Economou M, Madianos M, Peppou LE, Theleritis C, Patelakis A, Stefanis C. Suicidal ideation and reported suicide attempts in Greece during the economic crisis. World Psychiatry. 2013;12(1):53–9.
Gili M, Roca M, Basu S, McKee M, Stuckler D. The mental health risks of economic crisis in Spain: evidence from primary care centres, 2006 and 2010. Eur J Pub Health. 2013;23(1):103–8.
Kentikelenis A, Karanikolos M, Papanicolas I, Basu S, McKee M, Stuckler D. Health effects of financial crisis: omens of a Greek tragedy. Lancet. 2011;378(9801):1457–8.
Zavras D, Tsiantou V, Pavi E, Mylona K, Kyriopoulos J. Impact of economic crisis and other demographic and socio-economic factors on self-rated health in Greece. Eur J Pub Health. 2013;23(2):206–10.
Vandoros S, Hessel P, Leone T, Avendano M. Have health trends worsened in Greece as a result of the financial crisis? A quasi-experimental approach. Eur J Pub Health. 2013;23(5):727–31.
Ruhm CJ. Understanding the relationship between macroeconomic conditions and health. In: The Elgar companion to health economics. Northampton: Edward Elgar Publishing; 2011.
Legido-Quigley H, Karanikolos M, Hernandez-Plaza S, de Freitas C, Bernardo L, Padilla B, et al. Effects of the financial crisis and troika austerity measures on health and health care access in Portugal. Health Policy. 2016;120(7):833–9.
Kentikelenis A, Karanikolos M, Reeves A, McKee M, Stuckler D. Greece's health crisis: from austerity to denialism. Lancet. 2014;383(9918):748–53.
Legido-Quigley H, Montgomery CM, Khan P, Atun R, Fakoya A, Getahun H, Grant AD. Integrating tuberculosis and HIV services in low- and middle-income countries: a systematic review. Tropical Med Int Health. 2013;18(2):199–211.
Kentikelenis A. Bailouts, austerity and the erosion of health coverage in southern Europe and Ireland. Eur J Pub Health. 2015;25(3):365–6.
Rechel B, Suhrcke M, Tsolova S, Suk JE, Desai M, McKee M, Stuckler D, Abubakar I, Hunter P, Senek M, Semenza JC. Economic crisis and communicable disease control in Europe: a scoping study among national experts. Health Policy. 2011;103(2–3):168–75.
UNAIDS – Joint United Nations Programme on HIV/AIDS. The global economic crisis and HIV prevention and treatment programmes: vulnerabilities and impact. Geneva: UNAIDS/WHO; 2009.
Perelman J, Alves J, Miranda AC, Mateus C, Mansinho K, Antunes F, Oliveira J, Poças J, Doroana M, Marques R, Teófilo E, Pereira J. Direct treatment costs of HIV/AIDS in Portugal. Rev Saude Publica. 2013;47(5):865–72.
Mateus C. Portugal: Results of 25 years of experience with DRGs. In: Busse R, Geissler A, Quentin W, Wiley M, editors. Diagnosis-related groups in Europe: moving towards transparency, efficiency and quality in hospitals. Maidenhead: Open University Press; 2011.
Dias SS, Andreozzi V, Martins MO, Torgal J. Predictors of mortality in HIV-associated hospitalizations in Portugal: a hierarchical survival model. BMC Health Serv Res. 2009;9:125.
Dias SS, Martins MFO. HIV/AIDS length of stay outliers. Proc Comput Sci. 2015;64:984–92.
Xiao J, Lee AH, Vemuri SR. Mixture distribution analysis of length of hospital stay for efficient funding. Socio-Econ Plan Sci. 1999;33(1):39–59.
Masur H, Kovacs J, Siegel M. Pneumocystis jirovecii pneumonia in human immunodeficiency virus infection. Semin Respir Crit Care Med. 2016;37(2):243–56.
Konopnicki D, Mocroft A, de Wit S, Antunes F, Ledergerber B, Katlama C, et al. Hepatitis B and HIV: prevalence, AIDS progression, response to highly active antiretroviral therapy and increased mortality in the EuroSIDA cohort. AIDS. 2005;19:593–601.
Kellerman SE, Hanson DL, McNaghten AD, Fleming PL. Prevalence of chronic hepatitis B and incidence of acute hepatitis B infection in human immunodeficiency virus-infected subjects. J Infect Dis. 2003;188:571–7.
Platt L, Easterbrook P, Gower E, McDonald B, Sabin K, McGowan C, Yanny I, Razavi H, Vickerman P. Prevalence and burden of HCV co-infection in people living with HIV: a global systematic review and meta-analysis. Lancet Infect Dis. 2016;16:797–808.
Pawlowski A, Jansson M, Sköld M, Rottenberg ME, Källenius G. Tuberculosis and HIV co-infection. PLoS Pathog. 2012;8(2):e1002464.
Getahun H, Gunneberg C, Granich R, Nunn P. HIV infection-associated tuberculosis: the epidemiology and the response. Clin Infect Dis. 2010;50:S201–7.
Bruchfeld J, Correia-Neves M, Källenius G. Tuberculosis and HIV coinfection. Cold Spring Harb Perspect Med. 2015;5(7):a017871.
Simões J, Augusto GF, Fronteira I, Hernandez-Quevedo C. Portugal: Health system review. Health Syst Transit. 2017;19(2):1–184.
Brealey R, Myers S, Allen F. Principles of corporate finance. 12th ed. New York: McGraw Hill; 2017.
Lee AH, Gracey M, Wang K, Yau KK. A robustified modeling approach to analyze pediatric length of stay. Ann Epidemiol. 2005;15(9):673–7.
Dias SS, Andreozzi V, Martins RO. Analysis of HIV/AIDS DRG in Portugal: a hierarchical finite mixture model. Eur J Health Econ. 2013;14:715–23.
Barbour KE, Fabio A, Pearlman DN. Inpatient charges among HIV/AIDS patients in Rhode Island from 2000–2004. BMC Health Serv Res. 2009;9:3.
Huang ZJ, LaFleur BJ, Chamberlain JM, Guagliardo MF, Joseph JG. Inpatient childhood asthma treatment: relationship of hospital characteristics to length of stay and cost: analyses of New York state discharge data, 1995. Arch Pediatr Adolesc Med. 2002;156(1):67–72.
Wang K, Yau KK, Lee AH. A hierarchical Poisson mixture regression model to analyse maternity length of hospital stay. Stat Med. 2002;21(23):3639–54.
Dias SS, Andreozzi V, Martins MO. Hierarchical normal mixture model to analyse HIV/AIDS LOS. In: Pacheco A, et al., editors. New advances in statistical modeling and applications. Switzerland: Springer International Publishing; 2014.
Catumbela E, Freitas A, Lopes F, Mendoza Mdel C, Costa C, Sarmento A, da Costa-Pereira A. HIV disease burden, cost, and length of stay in Portuguese hospitals from 2000 to 2010: a cross-sectional study. BMC Health Serv Res. 2015;15:144.
UNAIDS – Joint United Nations Programme on HIV/AIDS. Global Report – UNAIDS report on the global AIDS epidemic 2013. Geneva: UNAIDS; 2013. http://www.unaids.org/sites/default/files/media_asset/UNAIDS_Global_Report_2013_en_1.pdf. Accessed 27 Sept 2018.
ECDC/WHO Europe – European Centre for Disease Prevention and Control/WHO Regional Office for Europe. HIV/AIDS surveillance in Europe 2015. Stockholm: ECDC; 2016.
http://www.euro.who.int/__data/assets/pdf_file/0019/324370/HIV-AIDS-surveillance-Europe-2015.pdf. Accessed 27 Sept 2018.
Quaglio G, Karapiperis T, Van Woensel L, Arnold E, McDaid D. Austerity and health in Europe. Health Policy. 2013;113(1–2):13–9.
Chiang JC, Wang TY, Feng-Jui H. Factors impacting hospital financial performance in Taiwan following implementation of National Health Insurance. Int Bus Res. 2014;7(2):43–52.
Singh CH, Ladusingh L. Inpatient length of stay: a finite mixture modeling analysis. Eur J Health Econ. 2010;11(2):119–26.
OECD. OECD Reviews of health care quality: Portugal 2015: raising standards. Paris: OECD Publishing; 2015.
Azevedo H, Mateus C. Economias de escala e de diversificação: uma análise da bibliografia no contexto das fusões hospitalares [Economies of scale and diversification: a bibliography analysis in the context of hospital mergers]. Rev Port Saúde Pública. 2014;32(1):106–17.
ECDC – European Centre for Disease Prevention and Control. Health inequalities, the financial crisis, and infectious disease in Europe. Stockholm: ECDC; 2013.
Cylus J, Papanicolas I, Smith PC, editors. Health system efficiency – how to make measurement matter for policy and management. Health Policy Series No. 46. London: WHO Europe; 2016.
Freitas JA, Silva-Costa T, Marques B, Costa Pereira A. Implications of data quality problems within hospital administrative databases. In: Pallikarakis N, Bamidis PD, editors. MEDICON 2010, Volume 29, IFMBE Proceedings; 2010.
Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8–27.
Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–83.
Helsinki Algorithms Seminar

The Helsinki Algorithms Seminar is a weekly meeting of researchers in the Helsinki area interested in the art of algorithms and algorithm design, broadly interpreted to cover both theoretical ideas and algorithm engineering on concrete computing platforms. In most cases we have a presentation prepared for each meeting to communicate an idea, a recent result, work-in-progress, or demo, but this should not be at the expense of discussion and simply having fun with algorithms.

Who and where

Our affiliations are with Aalto University and the University of Helsinki, and accordingly our activities alternate between the Otaniemi Campus of Aalto University and the Kumpula Campus of the University of Helsinki, catalyzed by the Helsinki Institute for Information Technology HIIT.

Prof. Parinya Chalermsook, Aalto University
Prof. Aristides Gionis, Aalto University
Prof. Tomi Janhunen, Aalto University
Prof. Petteri Kaski, Aalto University
Prof. Mikko Koivisto, University of Helsinki
Prof. Veli Mäkinen, University of Helsinki
Prof. Pekka Orponen, Aalto University
Prof. Juho Rousu, Aalto University
Prof. Jukka Suomela, Aalto University

(e-mail addresses: [email protected] or [email protected] — contact Parinya Chalermsook to reserve a slot)

To join the seminar's mailing list, send an email to '[email protected]' or see here. The meetings are on Thursdays, 4pm to 5pm. For Autumn 2018, the meetings are at T4, Konemiehentie 2 (Otaniemi) and Exactum C122 (Kumpula).

A Tight Extremal Bound on the Lovász Cactus Number in Planar Graphs
Speaker: Sumedha Uniyal (Aalto University)
Meeting place: Konemiehentie 2, Room T4 (Otaniemi)
Abstract: A cactus graph is a graph in which any two cycles are edge-disjoint. We present a constructive proof of the fact that any plane graph G contains a cactus subgraph C where C contains at least a 1/6 fraction of the triangular faces of G. We also show that this ratio cannot be improved, by showing a tight lower bound. Together with an algorithm for linear matroid parity, our bound implies two approximation algorithms for computing "dense planar structures" inside any graph: (i) a 1/6-approximation algorithm for, given any graph G, finding a planar subgraph with a maximum number of triangular faces; this improves upon the previous 1/11-approximation; (ii) an alternate (and arguably more illustrative) proof of the 4/9-approximation algorithm for finding a planar subgraph with a maximum number of edges. Our bound is obtained by analyzing a natural local search strategy and heavily exploiting the exchange arguments. Therefore, this suggests the power of local search in handling problems of this kind.

Stabbing Rectangles by Line Segments – How Decomposition Reduces the Shallow-Cell Complexity
Speaker: Joachim Spoerhase (Aalto University)
Abstract: We initiate the study of the following natural geometric optimization problem. The input is a set of axis-aligned rectangles in the plane. The objective is to find a set of horizontal line segments of minimum total length so that every rectangle is stabbed by some line segment. A line segment stabs a rectangle if it intersects its left and its right boundary. The problem, which we call Stabbing, can be motivated by a resource allocation problem and has applications in geometric network design. To the best of our knowledge, only special cases of this problem have been considered so far. Stabbing is a weighted geometric set cover problem, which we show to be NP-hard.
A constrained variant of Stabbing turns out to be even APX-hard. While for general set cover the best possible approximation ratio is Θ(log n), obtaining better ratios for geometric set cover problems is an important theme in geometric approximation algorithms. Chan et al. [SODA'12] generalize earlier results by Varadarajan [STOC'10] to obtain sub-logarithmic performances for a broad class of weighted geometric set cover instances that are characterized by having low shallow-cell complexity. The shallow-cell complexity of Stabbing instances, however, can be high, so that a direct application of the framework of Chan et al. gives only logarithmic bounds. We still achieve a constant-factor approximation by decomposing general instances into what we call laminar instances that have low enough complexity. Our decomposition technique yields constant-factor approximations also for the variant where rectangles can be stabbed by horizontal and vertical segments, and for two further geometric set cover problems.

Maximizing the diversity of exposure in a social network
Date: October 04, 2018, 16:15–17:00
Place: Konemiehentie 2, Room T4 (Otaniemi)
Speaker: Cigdem Aslay
Title: Maximizing the diversity of exposure in a social network
Abstract: In this talk I will present our recent result that provides a novel approach to contribute towards bursting filter bubbles. We formulate the problem as a task of recommending news articles to selected users with the aim to maximize the overall diversity of information exposure in a social network. We consider a realistic setting where we take into account the political leanings of users and articles, and the probability of users to further share articles. We show that this problem is a challenging generalization of the influence maximization problem, which is NP-hard, and it corresponds to the problem of maximizing a monotone submodular function subject to a matroid constraint on the allocation of articles to users. We introduce the notion of random reverse co-exposure sets and a set of estimation techniques based on martingales for efficiently estimating the expected diversity of exposure. Accordingly, we devise a scalable instantiation of the greedy algorithm that provides a (1/2 − ε)-approximation to the optimum with high probability.

Speaker: Parinya Chalermsook
Title: Multiplicative Weight Updates for Efficient Algorithms and Data Structures: Some New Results from Old Techniques
Abstract: Multiplicative Weight Update (MWU) is a powerful online prediction technique that has been useful in algorithm design in the past decades. In this talk, I will give an overview of my recent efforts to use MWU-style updates to design (i) near-linear time algorithms for approximate LP solvers with an exponential number of constraints and (ii) efficient online binary search trees that are able to achieve new properties.

On the Complexity of Symmetric Polynomials
Speaker: Gorav Jindal
Title: On the Complexity of Symmetric Polynomials
Abstract: It is an easy exercise to show that all symmetric Boolean functions are easy to compute (the corresponding language is in the complexity class TC^0). Lipton and Regan ([1]) ask whether the same is also true for symmetric polynomials. They ask whether "understanding the arithmetic complexity of symmetric polynomials is enough?"
In this work, we answer this question in the affirmative. The fundamental theorem of symmetric polynomials states that for a symmetric polynomial f_sym \in C[x_1, x_2, …, x_n], there exists a unique "witness" f \in C[y_1, y_2, …, y_n] such that f_sym = f(e_1, e_2, …, e_n), where the e_i's are the elementary symmetric polynomials. We study the arithmetic complexity L(f) of the witness f as a function of the arithmetic complexity L(f_sym) of f_sym. We show that the arithmetic complexity L(f) of f is bounded by poly(L(f_sym), deg(f), n). To the best of our knowledge, prior to this work only exponential upper bounds were known for L(f). The main ingredient in our result is an algebraic analogue of Newton's iteration on power series. As a corollary of this result, we show that if VP \neq VNP then there exist symmetric polynomial families which have super-polynomial arithmetic complexity. This is a joint work with Prof. Markus Bläser (Department of Computer Science, Saarland University).
[1]: https://rjlipton.wordpress.com/2009/07/10/arithmetic-complexity-and-symmetry/

Rank Vertex Cover as a Natural Problem for Algebraic Compression
Place: Exactum C122 (Kumpula)
Speaker: Syed Meesum (Wroclaw)
Title: Rank Vertex Cover as a Natural Problem for Algebraic Compression
Abstract: The question of the existence of a polynomial kernelization of the Vertex Cover Above LP problem was a longstanding, notorious open problem in Parameterized Complexity. Six years ago, the breakthrough work by Kratsch and Wahlström on representative sets finally answered this question in the affirmative [FOCS 2012]. In this talk, I will present an alternative, algebraic compression of the Vertex Cover Above LP problem into the Rank Vertex Cover problem. Here, the input consists of a graph G, a parameter k, and a bijection between V(G) and the set of columns of a representation of a matroid M, and the objective is to find a vertex cover whose rank is upper bounded by k.

Extensor-coding: A unified algebraic approach to the longest path problem
Date: November 01, 2018, 16:15–17:00
Speaker: Cornelius Brand (Saarland)
Title: Extensor-coding: A unified algebraic approach to the longest path problem
Abstract: We devise an algorithm that approximately computes the number of paths of length k in a given directed graph with n vertices up to a multiplicative error of 1 ± ε. Our algorithm runs in time 4^k*(n+m)*poly(k)/ε². The algorithm is based on associating with each vertex an element in the exterior (or Grassmann) algebra, called an extensor, and then performing computations in this algebra. This connection to exterior algebra generalizes a number of previous approaches to the longest path problem and is of independent conceptual interest. Using this approach, we also obtain a deterministic 2^k*poly(n) time algorithm to find a k-path in a given directed graph that is promised to have few of them. Our results and techniques generalize to the subgraph isomorphism problem when the subgraphs we are looking for have bounded pathwidth. Finally, we also obtain a randomized algorithm to detect k-multilinear terms in a multivariate polynomial given as a general algebraic circuit. To the best of our knowledge, this was previously only known for algebraic circuits not involving negative constants.
Coresets for k-Means clustering
Speaker: Michael Mathioudakis (University of Helsinki)
Title: Coresets for k-Means clustering
Abstract: Coresets are compact representations of data, accompanied with a guarantee that models trained on them have competitive performance with models trained on all the data. In this talk, I will discuss state-of-the-art algorithms on coresets for k-Means by Bachem et al. [1], presented earlier this year at the KDD conference.
[1] Bachem, Olivier, Mario Lucic, and Andreas Krause. "Scalable k-Means Clustering via Lightweight Coresets." International Conference on Knowledge Discovery and Data Mining (KDD). 2018.

The effects of teamwork in time-inconsistent planning
Speaker: Aris Gionis
Title: The effects of teamwork in time-inconsistent planning
Abstract: The problem of inconsistent planning in decision making, which leads to undesirable effects such as procrastination, has been studied in the behavioral-economics literature, and more recently in the context of computational behavioral models. Individuals, however, do not function in isolation, and successful projects most often rely on team work. Team performance does not depend only on the skills of the individual team members, but also on other collective factors, such as team spirit and cohesion. It is not an uncommon situation that a hard-working individual has the capacity to give a good example to her team-mates and motivate them to work harder. In this work we adopt the model of Kleinberg and Oren (EC'14) on time-inconsistent planning, and extend it to account for the influence of procrastination within the members of a team. Our first contribution is to model collaborative work so that the relative progress of the team members, with respect to their respective subtasks, motivates (or discourages) them to work harder. We compare the total cost of completing a team project when the team members communicate with each other about their progress, with the corresponding cost when they work in isolation. Our main result is a tight bound on the ratio of these two costs, under mild assumptions. We also show that communication can either increase or decrease the total cost. We also consider the problem of assigning subtasks to team members, with the objective of minimizing the negative effects of teamwork. We show that while a simple problem of forming teams of two members can be solved in polynomial time, the problem of assigning n tasks to n agents is NP-hard. This is joint work with Aris Anagnostopoulos and Nikos Parotsidis.

Probabilistic tensors and opportunistic Boolean matrix multiplication
Speaker: Petteri Kaski
Title: Probabilistic tensors and opportunistic Boolean matrix multiplication
Abstract: We study probabilistic extensions of classical deterministic measures of algebraic complexity of a tensor, such as the rank and the border rank. These probabilistic extensions enable improvements over their deterministic counterparts for specific tensors of interest, starting from the tensor $\langle 2,2,2 \rangle$ that represents $2\times 2$ matrix multiplication. While it is well known that the (deterministic) tensor rank and border rank of $\langle 2,2,2 \rangle$ is 7 [V. Strassen, Numer. Math. 13 (1969); J. E. Hopcroft and L. R. Kerr, SIAM J. Appl. Math. 20 (1971); S. Winograd, Linear Algebra Appl. 4 (1971); J. M. Landsberg, J. AMS 19 (2006)], we show that the probabilistic rank of $\langle 2,2,2 \rangle$ is at most $6+\frac{6}{7}$ and the probabilistic border rank is at most $6+\frac{2}{3}$.
By submultiplicativity, this leads immediately to novel randomized algorithm designs, such as algorithms for Boolean matrix multiplication as well as detecting and estimating the number of triangles in graphs. Joint work with Matti Karppa (Aalto University).

Reasoning in Bayesian Opinion Exchange Networks is Computationally Hard
Speaker: Jan Hazla (MIT)
Title: Reasoning in Bayesian Opinion Exchange Networks is Computationally Hard
Abstract: Bayesian models of opinion exchange are extensively studied in economics, dating back to the work of Aumann on the agreement theorem. An important class of such models features agents arranged on a network (representing, e.g., social interactions), with the network structure determining which agents communicate with each other. It is often argued that the Bayesian computations needed by agents in such models are difficult, but prior to our work there were no rigorous arguments for such hardness. We consider a well-studied model where fully rational agents receive private signals indicative of an unknown state of the world. Then, they repeatedly announce the state of the world they consider most likely to their neighbors, at the same time updating their beliefs based on their neighbors' announcements. I will discuss our complexity-theoretic results establishing hardness of agents' computations in this model. Specifically, we show that these computations are NP-hard and extend this result to PSPACE-hardness. We show hardness not only for exact computations, but also that it is computationally difficult even to approximate the rational opinion in any meaningful way. Joint work with Ali Jadbabaie, Elchanan Mossel and Amin Rahimian.

» History of Past Meetings

Here are some other Helsinki area seminars that may be of interest to algorithmics people:
HIIT Seminars
ANTA Seminar
Helsinki Logic Seminar
7. Sequences & Series
7.01 Notation for sequences and series
7.02 Arithmetic sequences
7.03 Arithmetic series
7.04 Geometric sequences
7.05 Geometric series
Investigation: Sequences and saving money

Introduction to geometric sequences

Geometric sequences all start with a first term and then either increase or decrease by a constant factor called the common ratio. We denote the first term by the letter $a$ and the common ratio by the letter $r$. For example, the sequence $4,8,16,32,\dots$ is geometric with $a=4$ and $r=2$. The sequence $100,-50,25,-12.5,\dots$ is geometric with $a=100$ and $r=-\frac{1}{2}$.

The size and sign of the geometric ratio plays an important role in how the sequence grows. Geometric ratios greater than $r=1$ will cause terms in the sequence to get larger. Ratios between $0$ and $1$ will cause the terms to get smaller. Negative ratios will cause sign changes across consecutive terms, just like the last example mentioned in the previous paragraph.

A great example of a geometric sequence concerns animal cell division, where a single cell divides into two 'daughter' cells through biological processes known as mitosis and cytokinesis. The daughter cells divide again and again, each time creating two new daughter cells, so that the number of daughters in each new generation forms the geometric sequence $2,4,8,16,32,\dots$ Clearly the first term is $a=2$ and the common ratio is $r=2$.

Another example concerns radioactive decay – a process whereby half of a certain amount of radioactive material disappears in a specified time known as a "half-life". If, for example, the half-life of a certain radioactive material is $10$ years, then a quantity of $120$ grams of the material reduces to $60$ grams in the first $10$ years, then to $30$ grams in the next $10$ years, and then to $15$ grams in the next $10$ years, and so on. Over $50$ years, we see the quantity reduce according to the geometric sequence $120,60,30,15,7.5$.

In general terms, every geometric sequence begins as $a,ar,ar^2,ar^3,\dots$ so that the $n$th term is given by:

$t_n=ar^{n-1}$

Having a formula for the $n$th term allows us to quickly calculate the value of any term. For example, in the geometric sequence beginning $12,18,27,\dots$ we might want to know what the value of the $7$th term is. Using the formula, and noticing that $a=12$ and $r=\frac{3}{2}$, we can show that $t_7=12\times\left(\frac{3}{2}\right)^6$ or $136.6875$.

This applet will allow you to visualize the geometric side of geometric sequences. Play with the values of $a$ and $r$. What happens when $r$ is less than $1$? What about when $a$ is negative? What else do you notice? (Created with GeoGebra.)

Study the pattern for the following sequence, and write down the next two terms.
$3$, $15$, $75$, ___, ___

Consider the first four terms in this geometric sequence: $-8$, $-16$, $-32$, $-64$
If $T_n$ is the $n$th term, evaluate $\frac{T_2}{T_1}$.
Evaluate $\frac{T_3}{T_2}$.
Hence find the value of $T_5$.

Some of the terms in the following geometric progression are missing. Use the common ratio to find these terms.
This applet will allow you to visualize the geometric side of geometric sequences. Play with the values of $a$ and $r$. What happens when $r$ is less than $1$? What about when $a$ is negative? What else do you notice? Created with Geogebra

Study the pattern for the following sequence, and write down the next two terms.
$3$, $15$, $75$, $\editable{}$, $\editable{}$

Consider the first four terms in this geometric sequence: $-8$, $-16$, $-32$, $-64$
If $T_n$ is the $n$th term, evaluate $\frac{T_2}{T_1}$.
Evaluate $\frac{T_3}{T_2}$.
Hence find the value of $T_5$.

Some of the terms in the following geometric progression are missing. Use the common ratio to find these terms.
$\editable{}$, $\frac{3}{25}$, $-\frac{3}{125}$, $\editable{}$

Writing the recursive formula

A rule that defines each term from the one before it is generally known as a recurrence relationship, and for geometric sequences the recurrence formula is given by:

$t_{n+1}=r\times t_n,\ t_1=a$

The equation states that the $(n+1)$th term is $r$ times the $n$th term, with the first term equal to $a$. Thus the second term, $t_2$, is $r$ times the first term $t_1$, or $ar$. The third term $t_3$ is $r$ times $t_2$, or $ar^2$. The fourth term $t_4$ is $r$ times $t_3$, or $ar^3$, and so on. Hence, step by step, the sequence is revealed as $a$, $ar$, $ar^2$, $ar^3,\dots,ar^{n-1}$.

Take for example the recursive relationship given as $t_{n+1}=\frac{t_n}{2}$ with $t_1=64$. From this formula, we see that $t_2=\frac{t_1}{2}=32$ and $t_3=\frac{t_2}{2}=16$, and so on. This means that the sequence becomes $64,32,16,8,\dots$ which is clearly geometric with $a=64$ and $r=\frac{1}{2}$.

Now consider the recurrence relationship given as $t_{n+1}=3t_n+2$ with $t_1=5$. To test whether or not the relationship is geometric, we can evaluate the first three terms: $t_1=5$, $t_2=3\times5+2=17$ and $t_3=3\times17+2=53$. Thus the sequence begins $5,17,53,\dots$ and we immediately see that $\frac{53}{17}$ is not the same fraction as $\frac{17}{5}$, so the recursive relationship is not geometric. In fact, the only way the relationship given by $t_{n+1}=rt_n+k$ is geometric is when the constant term $k$ is zero.
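The recursive definition is easy to turn into a computation. The short Python sketch below (the function names and the tolerance are my own illustrative choices, not from the original lesson) generates terms from $t_{n+1}=r\times t_n$ and tests whether a list of terms has a constant ratio, reproducing the $64,32,16,8,\dots$ and $5,17,53,\dots$ examples above.

    def generate_recursive(a, r, n):
        """First n terms of t_{k+1} = r * t_k with t_1 = a."""
        terms = [a]
        for _ in range(n - 1):
            terms.append(r * terms[-1])
        return terms

    def is_geometric(terms):
        """True if all consecutive ratios are (approximately) equal."""
        ratios = [terms[k + 1] / terms[k] for k in range(len(terms) - 1)]
        return all(abs(q - ratios[0]) < 1e-9 for q in ratios)

    print(generate_recursive(64, 0.5, 4))   # [64, 32.0, 16.0, 8.0]
    print(is_geometric([5, 17, 53]))        # False, since 17/5 is not 53/17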
Consider the first-order recurrence relationship defined by $T_n=2T_{n-1},\ T_1=2$.
Determine the next three terms of the sequence from $T_2$ to $T_4$. Write all three terms on the same line, separated by commas.
Plot the first four terms on the graph below.
Is the sequence generated from this definition arithmetic or geometric?

The first term of a geometric sequence is $5$. The third term is $80$.
Solve for the possible values of the common ratio, $r$, of this sequence.
State the recursive rule, $T_n$, that defines the sequence with a positive common ratio. Write both parts of the relationship on the same line, separated by a comma.
State the recursive rule, $T_n$, that defines the sequence with a negative common ratio.

The average rate of depreciation of the value of a Ferrari is $14\%$ per year. A new Ferrari is bought for $\$90000$.
What is the car worth after $1$ year?
What is the car worth after $3$ years?
Write a recursive rule for $V_n$, defining the value of the car after $n$ years. Write both parts of the rule on the same line, separated by a comma.

Solving for terms

Sometimes we are asked to find the common ratio of a certain geometric sequence, and then use this to find a group of terms or simply a specific term. We need to remember that the $n$th term of a GP is given by:

$t_n=ar^{n-1}$

Note that there are two variables in the formula – the first term $a$ and the common ratio $r$ – and any set of three consecutive terms will allow us to evaluate $r$.

Take for example the geometric sequence beginning $3,12,48,\dots$ The common ratio is simply the ratio $\frac{t_2}{t_1}=\frac{t_3}{t_2}=4$. This means that we can write down a formula for the $n$th term as $t_n=ar^{n-1}=3\left(4\right)^{n-1}$, and this in turn allows us to determine any term or sequence of terms we like. For example, we see that the $5$th term is given by $t_5=3\times4^4=768$.

As another example, we might wonder whether the sequence $\sqrt{3},6,12\sqrt{3}$ is geometric. It may not be immediately obvious, but we can show that the numbers are in geometric sequence in two ways. In the first method, we note that both $\frac{6}{\sqrt{3}}=\frac{6}{\sqrt{3}}\times\frac{\sqrt{3}}{\sqrt{3}}=2\sqrt{3}$ and $\frac{12\sqrt{3}}{6}=2\sqrt{3}$, and so the sequence is immediately geometric with $r=2\sqrt{3}$.

In the second method we note that in any geometric sequence $\frac{t_2}{t_1}=\frac{t_3}{t_2}$, and by "cross" multiplication we see that $\left(t_2\right)^2=t_1\times t_3$, or that the middle term is always the square root of the product of the terms on each side of it. The middle term is defined to be the geometric mean of the outer two terms, so that $t_2=\sqrt{t_1\times t_3}$.

In a final example, consider the three terms $2,x,32$ where the middle term is unknown. If this sequence is geometric then we can find $x$, for we know that $\frac{x}{2}=\frac{32}{x}$, and so $x^2=64$. This means that there are two possible solutions, $x=8$ or $x=-8$, and the two sequences are thus $2,8,32$ or $2,-8,32$.

Study the pattern for the following sequence.
$-9$, $3.6$, $-1.44$, $0.576$, ...
State the common ratio between the terms.

$12$, $-48$, $192$, $\editable{}$

Graphs and tables of geometric sequences

Having a formula for the $n$th term allows us to quickly generate a table of values for the sequence. For example, in the sequence $12,18,27,\dots$ the first term is $12$ and the common ratio is $1.5$, and so the general term is given by the formula $t_n=12\times\left(1.5\right)^{n-1}$. By substituting for $n$ appropriately and using a scientific calculator, we can quickly generate the following table of the first $7$ terms of the sequence:

n      1     2     3     4      5       6        7
t_n    12    18    27    40.5   60.75   91.125   136.6875

Perhaps more interesting, though, are the different types of graphs that geometric sequences correspond to. Usually the graphs are not linear like those of arithmetic sequences. Graphs of geometric sequences are best known as rising or reducing graphs where the rate of rising continually changes, resulting in a curved growth or decay path. This happens whenever the common ratio is positive, like the geometric sequence depicted in the above table. However, when the common ratio is negative, the values of successive terms flip their sign, so that the graph is depicted as either a growing or diminishing zig-zag path. Think, for example, about the geometric sequence that is identical to the one in the table, but has a negative ratio $r=-1.5$, so that its $n$th term is given by $t_n=12\times\left(-1.5\right)^{n-1}$. The new table becomes:

n      1     2     3     4      5       6        7
t_n    12    -18   27    -40.5  60.75   -91.125  136.6875

Checking, for $n=1$ we have $t_1=12\times\left(-1.5\right)^{1-1}=12$ and for $n=2$ we have $t_2=12\times\left(-1.5\right)^{2-1}=-18$, so even-numbered terms become negative and odd-numbered terms remain positive. Here is a graph of the two geometric sequences depicted in both tables.

Note that the odd terms of the zig-zag graph coincide with the terms of the first geometric sequence. Note also that, had the absolute value of the ratios of both geometric sequences been less than $1$, the absolute value of the terms in both sequences would be reducing in size.
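The two tables above are easy to regenerate programmatically. The following Python sketch is an illustrative addition (not part of the original lesson) that lists the first seven terms for the positive and negative ratios.

    def first_terms(a, r, n):
        """Return [a*r**0, a*r**1, ..., a*r**(n-1)], the first n terms."""
        return [a * r ** (k - 1) for k in range(1, n + 1)]

    print(first_terms(12, 1.5, 7))    # [12.0, 18.0, 27.0, 40.5, 60.75, 91.125, 136.6875]
    print(first_terms(12, -1.5, 7))   # [12.0, -18.0, 27.0, -40.5, 60.75, -91.125, 136.6875]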
The $n$th term of a geometric progression is given by the equation $T_n=2\times3^{n-1}$.
Complete the table of values:

n      1    2    3    4    10
T_n

What is the common ratio between consecutive terms?
Plot the points in the table that correspond to $n=1$, $n=2$, $n=3$ and $n=4$.
If the plots on the graph were joined they would form:
a straight line
a curved line

On Mercury the equation $d=1.5t^2$ can be used to approximate the distance in meters, $d$, that an object falls in $t$ seconds, if air resistance is ignored.
Complete the table. Do not round any values.

t      0    2    4    6
d

Graph the function $d=1.5t^2$.
Use the equation or otherwise to determine the number of seconds, $t$, that it would take an object to fall $5.6$ m. Give the value of $t$ to the nearest second.

A new car purchased for $\$38200$ depreciates at a rate $r$ each year. Use the table of values to determine the value of $r$.

years passed ($n$)     0        1        2
value of car ($A$)     38200    37818    37439.82

Determine the rule for $A$, the value of the car, $n$ years after it is purchased.
Assuming the rate of depreciation remains constant, how much can the car be sold for after $6$ years? Give your answer to the nearest cent.
A new motorbike purchased for the same amount depreciates according to the model $V=38200\left(0.97^n\right)$. Which vehicle depreciates more rapidly?
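For the car question above, the same geometric machinery applies with a ratio of $1-r$. The Python sketch below is purely illustrative (the variable names are mine): it recovers the ratio from consecutive table values and projects the value after six years.

    values = [38200, 37818, 37439.82]     # car value after 0, 1, 2 years

    ratio = values[1] / values[0]         # 0.99, so the depreciation rate is r = 0.01 per year
    print(round(1 - ratio, 4))            # 0.01

    def car_value(n, initial=38200, ratio=0.99):
        """Value after n years, assuming the yearly ratio stays constant."""
        return initial * ratio ** n

    print(round(car_value(6), 2))         # about 35964.54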
CommonCrawl
Is Minkowski space not a metric space?

I've just started reading a book on functional analysis, and the first definition given there is of a metric and a metric space:

Let $\mathfrak{M}$ be an arbitrary set. A function $\rho\colon \mathfrak M\times\mathfrak M\to[0,\infty)$ is called a metric if it has the following properties:
1) $\rho(x,y)=0 \iff x=y$ (axiom of identity)
2) $\rho(y,x)=\rho(x,y)\;\forall x,y\in\mathfrak M$ (axiom of symmetry)
3) $\rho(x,y)\le\rho(x,z)+\rho(z,y)\;\forall x,y,z\in\mathfrak M$ (triangle inequality)
The pair $(\mathfrak M,\rho)$ is called a metric space.

The first and second axioms come as no surprise; I understand them. But what about the image of $\rho$ and the third inequality? They don't seem to hold in Minkowski space: if we check the interval as a candidate for a metric, we get $$s^2=t^2-x^2-y^2-z^2,$$ which can be negative and violate the triangle inequality (and if we take the square root, it becomes complex and the inequality makes no sense in this case). So this clearly isn't a metric. But is Minkowski space then not a metric space? What is it then?

metric-spaces

$\begingroup$ from wikipedia (search metric): "In differential geometry, one considers metric tensors, which can be thought of as "infinitesimal" metric functions. They are defined as inner products on the tangent space with an appropriate differentiability requirement. While these are not metric functions as defined in this article, they induce metric functions by integration." $\endgroup$ – Dubious Oct 14 '13 at 18:59

Minkowski space is a metric space, but the metric is not the "Minkowski metric". Robert Israel
$\begingroup$ OK, but what metric would make sense for Minkowski space? Which one is actually used, e.g. in special relativity? $\endgroup$ – Ruslan Oct 15 '13 at 6:32
$\begingroup$ The topology is the standard one of ${\mathbb R}^4$. Which metric (in the "metric space" sense) you use for that is not particularly relevant. $\endgroup$ – Robert Israel Oct 15 '13 at 6:52
$\begingroup$ I suspect if you wanted a metric on a Minkowski space you'd want one that's Lorentz-invariant. I also suspect that there exist no such metric. Indeed wikipedia says: "Minkowski space is, in particular, not a metric space". Therefore I believe this answer is more confusing than useful. $\endgroup$ – Rotsor Apr 29 '17 at 23:53

It's a real vector space with an indefinite bilinear form. Rasmus
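To see concretely where the axioms fail, here is a short worked example (added for illustration; the particular events are my own choice, not from the original post). Take two events with equal time coordinate and unit spatial separation, $p=(0,0,0,0)$ and $q=(0,1,0,0)$. Then $$s^2(p,q)=(0-0)^2-(1-0)^2-0^2-0^2=-1<0,$$ so the interval already fails to map into $[0,\infty)$ before the triangle inequality is even tested. By contrast, the ordinary Euclidean distance $$\rho(p,q)=\sqrt{(t_p-t_q)^2+(x_p-x_q)^2+(y_p-y_q)^2+(z_p-z_q)^2}$$ does satisfy all three axioms, which is the sense of Robert Israel's answer: the underlying set $\mathbb{R}^4$ can be given a metric inducing its standard topology, even though the Lorentzian "metric" is not a metric in the sense of the definition above.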
CommonCrawl
July 2018, 38(7): 3387-3405. doi: 10.3934/dcds.2018145

Navier-Stokes-Oseen flows in the exterior of a rotating and translating obstacle

Trinh Viet Duoc, Vietnam National University, Hanoi University of Science, Faculty of Mathematics, Mechanics, and Informatics, 334 Nguyen Trai, Hanoi, Vietnam
* Corresponding author: Trinh Viet Duoc
Received July 2017. Revised January 2018. Published April 2018.
Fund Project: This research is funded by the Vietnam National University, Hanoi (VNU) under project number QG.17.07.

Abstract: In this paper, we investigate the Navier-Stokes-Oseen equation describing flows of incompressible viscous fluid passing a translating and rotating obstacle. The existence, uniqueness, and polynomial stability of bounded and almost periodic weak mild solutions to the Navier-Stokes-Oseen equation in the solenoidal Lorentz space $ L^{3}_{σ, w} $ are shown. Moreover, we also prove the unique existence of time-local mild solutions to this equation in the solenoidal Lorentz spaces $ L^{3,q}_{σ} $.

Keywords: Bounded and almost periodic weak mild solutions, time-local mild solutions, Navier-Stokes-Oseen equation, Navier-Stokes equation, Oseen operator, rotating and translating obstacle.

Mathematics Subject Classification: Primary: 35B15, 35B35, 35Q30; Secondary: 76D07.

Citation: Trinh Viet Duoc. Navier-Stokes-Oseen flows in the exterior of a rotating and translating obstacle. Discrete & Continuous Dynamical Systems, 2018, 38 (7) : 3387-3405. doi: 10.3934/dcds.2018145
CommonCrawl
Space Science Reviews
Journal Prestige (SJR): 3.262 Citation Impact (citeScore): 7 Number of Followers: 97 Hybrid journal (It can contain Open Access articles) ISSN (Print) 1572-9672 - ISSN (Online) 0038-6308 Published by Springer-Verlag [2626 journals]

SERENA: Particle Instrument Suite for Determining the Sun-Mercury Interaction from BepiColombo
Abstract: Abstract The ESA-JAXA BepiColombo mission to Mercury will provide simultaneous measurements from two spacecraft, offering an unprecedented opportunity to investigate magnetospheric and exospheric particle dynamics at Mercury as well as their interactions with solar wind, solar radiation, and interplanetary dust. The particle instrument suite SERENA (Search for Exospheric Refilling and Emitted Natural Abundances) is flying in space on-board the BepiColombo Mercury Planetary Orbiter (MPO) and is the only instrument for ion and neutral particle detection aboard the MPO. It comprises four independent sensors: ELENA for neutral particle flow detection, Strofio for neutral gas detection, PICAM for planetary ions observations, and MIPA, mostly for solar wind ion measurements. SERENA is managed by a System Control Unit located inside the ELENA box. In the present paper the scientific goals of this suite are described, and then the four units are detailed, as well as their major features and calibration results. Finally, the SERENA operational activities are shown during the orbital path around Mercury, with also some reference to the activities planned during the long cruise phase.
PubDate: 2021-01-12

The Physics of Accretion Discs, Winds and Jets in Tidal Disruption Events
Abstract: Abstract Accretion onto black holes is an efficient mechanism in converting the gas mass-energy into energetic outputs as radiation, wind and jet. Tidal disruption events, in which stars are tidally torn apart and then accreted onto supermassive black holes, offer unique opportunities of studying the accretion physics as well as the wind and jet launching physics across different accretion regimes. In this review, we systematically describe and discuss the models that have been developed to study the accretion flows and jets in tidal disruption events.
A good knowledge of these physics is not only needed for understanding the emissions of the observed events, but also crucial for probing the general relativistic space-time around black holes and the demographics of supermassive black holes via tidal disruption events. Mars Oxygen ISRU Experiment (MOXIE) Abstract: Abstract MOXIE is a technology demonstration that addresses the Mars 2020 (Perseverance) objective of preparing for future human exploration by demonstrating In Situ Resource Utilization (ISRU) in the form of dissociating atmospheric CO2 into O2. The primary goals of the MOXIE project are to verify and validate the technology of Mars ISRU as a springboard for the future, and to establish achievable performance requirements and design approaches that will lead to a full-scale ISRU system based on MOXIE technology. MOXIE has three top-level requirements: to be capable of producing at least 6 g/hr of oxygen in the context of the Mars 2020 mission (assuming atmospheric intake at 5 Torr, typical of Jezero Crater, and \(0~^{\circ}\text{C}\) , typical of the rover interior); to produce oxygen with \(>98\%\) purity; and to meet these first two requirements for at least 10 operational cycles after delivery. Since MOXIE is expected to operate in all seasons and at all times of day and night on Mars, these requirements are intended to be satisfied under worst-case environmental conditions, including during a dust storm, if possible. Proton Aurora and Optical Emissions in the Subauroral Region Abstract: Abstract Optical structures located equatorward of the main auroral oval often exhibit different morphologies and dynamics than structures at higher latitudes. In some cases, questions arise regarding the formation mechanisms of these photon-emitting phenomena. New developments in space and ground-based instruments have enabled us to acquire a clearer view of the processes playing a role in the formation of subauroral structures. In addition, the discovery of new optical structures helps us improve our understanding of the latitudinal and altitudinal coupling that takes place in the subauroral region. However, several questions remain unanswered, requiring the development of new instruments and analysis techniques. We discuss optical phenomena in the subauroral region, summarize observational results, present conclusions about their origin, and pose a number of open questions that warrant further investigation of proton aurora, detached subauroral arcs and spots, stable auroral red (SAR) arcs, and STEVE (Strong Thermal Emission Velocity Enhancement). The Mars Orbiter Subsurface Investigation Radar (MOSIR) on China's Tianwen-1 Mission Abstract: Abstract China launched Tianwen-1 spacecraft successfully on July 23rd, 2020. The Mars Orbiter Subsurface Investigation Radar (MOSIR) is a subsurface radar sounder as a scientific instrument onboard Tianwen-1 orbiter. It is designed to study the compositions of Martian surface material, subsurface structure, and the ionosphere's total electron content. It can also perform passive observations in a transfer orbit to Mars. The subsurface stratigraphic structure is critical to the study on Mars geological and climatic evolution history. Considering the optimal tradeoff between penetrating depth and range resolution, MOSIR operates at low-frequency and high-frequency channels, with the frequency bands of 10–15 MHz or 15–20 MHz and 30–50 MHz, respectively. 
MOSIR provides a penetration depth of more than 100 m with a vertical resolution of 7.5 m (20 MHz bandwidth) and 30 m (5 MHz bandwidth) in free space. The range and azimuth focusing techniques are applied in ground data processing to achieve the resolution of several hundred meters (along-track) and several thousand meters (cross-track). MOSIR is intended to search for water ice and liquid water that may be associated with signs of life in the polar layered deposits, Tianwen-1 landing site, and other selected areas. Formation of Venus, Earth and Mars: Constrained by Isotopes Abstract: Abstract Here we discuss the current state of knowledge of terrestrial planet formation from the aspects of different planet formation models and isotopic data from 182Hf-182W, U-Pb, lithophile-siderophile elements, 48Ca/44Ca isotope samples from planetary building blocks, recent reproduction attempts from 36Ar/38Ar, 20Ne/22Ne, 36Ar/22Ne isotope ratios in Venus' and Earth's atmospheres, the expected solar 3He abundance in Earth's deep mantle and Earth's D/H sea water ratios that shed light on the accretion time of the early protoplanets. Accretion scenarios that can explain the different isotope ratios, including a Moon-forming event ca. 50 Myr after the formation of the Solar System, support the theory that the bulk of Earth's mass (≥80%) most likely accreted within 10–30 Myr. From a combined analysis of the before mentioned isotopes, one finds that proto-Earth accreted most likely a mass of 0.5–0.6 \(M\) Earth within the first ≈3–4.5 Myr, the approximate lifetime of the protoplanetary disk. For Venus, the available atmospheric noble gas data are too uncertain for constraining the planet's accretion scenario accurately. However, from the available imprecise Ar and Ne isotope measurements, one finds that proto-Venus could have grown to a mass of up to 0.85–1.0 \(M\) Venus before the disk dissipated. Classical terrestrial planet formation models have struggled to grow large planetary embryos, or even cores of giant planets, quickly from the tiniest materials within the typical lifetime of protoplanetary disks. Pebble accretion could solve this long-standing time scale controversy. Pebble accretion and streaming instabilities produce large planetesimals that grow into Mars-sized and larger planetary embryos during this early accretion phase. The later stage of accretion can be explained well with the Grand-Tack model as well as the annulus and depleted disk models. The relative roles of pebble accretion and planetesimal accretion/giant impacts are poorly understood and should be investigated with N-body simulations that include pebbles and multiple protoplanets. To summarise, different isotopic dating methods and the latest terrestrial planet formation models indicate that the accretion process from dust settling, planetesimal formation, and growth to large planetary embryos and protoplanets is a fast process that occurred to a great extent in the Solar System within the lifetime of the protoplanetary disk. Landing Site Selection and Overview of China's Lunar Landing Abstract: Abstract Landing site selection is of fundamental importance for lunar landing mission and it is closely related to the scientific goals of the mission. According to the widely concerned lunar science goals and the landing site selection of the ongoing lunar missions; China has carried out the selection of landing site for a series of Chang' E (CE) missions. 
Under this background, this paper firstly introduced the principles, process, method and result of landing site selection of China's Lunar Exploration Program (CLEP), and then analyzed the support of the selected landing sites to the corresponding lunar research. This study also pointed out the outcomes that could possibly contribute to the key lunar questions on the basis of the selected landing sites of CE-4 and CE-5 such as deep material in South Pole-Aitken (SPA) basin, lunar chronology, volcanic thermodynamics and geological structure evolution history of the Moon. Finally, this approach analyzed the development trend of China's follow-up lunar landing missions, and suggested that the South Pole Region of the Moon could be the landing site of high priority for the future CE missions. Science Goals and Mission Objectives for the Future Exploration of Ice Giants Systems: A Horizon 2061 Perspective Abstract: Abstract The comparative study of planetary systems is a unique source of new scientific insight: following the six "key science questions" of the "Planetary Exploration, Horizon 2061" long-term foresight exercise, it can reveal to us the diversity of their objects (Question 1) and of their architectures (Question 2), help us better understand their origins (Question 3) and how they work (Question 4), find and characterize habitable worlds (Question 5), and ultimately, search for alien life (Question 6). But a huge "knowledge gap" exists which limits the applicability of this approach in the solar system itself: two of its secondary planetary systems, the ice giant systems of Uranus and Neptune, remain poorly explored. Starting from an analysis of our current limited knowledge of solar system ice giants and their systems in the light of these six key science questions, we show that a long-term plan for the space exploration of ice giants and their systems will greatly contribute to answer these questions. To do so, we identify the key measurements needed to address each of these questions, the destinations to choose (Uranus, Neptune, Triton or a subset of them), the combinations of space platform(s) and the types of flight sequences needed. We then examine the different launch windows available until 2061, using a Jupiter fly-by, to send a mission to Uranus or Neptune, and find that: (1) an optimized choice of platforms and flight sequences makes it possible to address a broad range of the key science questions with one mission at one of the planets. Combining an atmospheric entry probe with an orbiter tour starting on a high-inclination, low periapse orbit, followed by a sequence of lower inclination orbits (or the other way around) appears to be an optimal choice. (2) a combination of two missions to each of the ice giant systems, to be flown in parallel or in sequence, will address five out of the six key questions and establish the prerequisites to address the sixth one: searching for life at one of the most promising Ice Giant moons. 
(3) The 2032 Jupiter fly-by window, which offers a unique opportunity to implement this plan, should be considered in priority; if this window cannot be met, using the 2036 Jupiter fly-by window to send a mission to Uranus first, and then the 2045 window for a mission to Neptune, will allow one to achieve the same objectives; as a back-up option, one should consider an orbiter + probe mission to one of the planets and a close fly-by of the other planet to deliver a probe into its atmosphere, using the opportunity of a future mission on its way to Kuiper Belt Objects or the interstellar medium; (4) based on the examination of the habitability of the different moons by the first two missions, a third one can be properly designed to search for life at the most promising moon, likely Triton, or one of the active moons of Uranus. Thus, by 2061 the first two missions of this plan can be implemented and a third mission focusing on the search for life can be designed. Given that such a plan may be out of reach of a single national agency, international collaboration is the most promising way to implement it. The Sampling and Caching Subsystem (SCS) for the Scientific Exploration of Jezero Crater by the Mars 2020 Perseverance Rover Abstract: Abstract The Mars 2020 mission seeks to conduct a new scientific exploration on the surface of Mars. The Perseverance Rover will be sent to the surface of the Jezero Crater region to study its habitability, search for biosignatures of past life, acquire and cache samples for potential return, and prepare for possible human missions. To enable these objectives, an innovative Sampling and Caching Subsystem (SCS) has been developed and tested to allow the Perseverance Rover to acquire and cache rock core and regolith samples, prepare abraded rock surfaces, and support proximity science instruments. The SCS consists of the Robotic Arm (RA), the Turret and Corer, and the Adaptive Caching Assembly (ACA). These elements reside and interact both inside and outside of the Perseverance Rover to enable surface interactions, sample transfer, and caching. The main body of the Turret consists of the Coring Drill (Corer) with a Launch Abrading Bit initially installed prior to launch. Mounted to the Turret main structure are two proximity science instruments, SHERLOC and PIXL, as well as the Gas Dust Removal Tool (gDRT) and the Facility Contact Sensor (FCS). These work together with the RA to provide the sample acquisition, abraded surface preparation, and proximity science functions. The ACA is a network of assemblies largely inside the front belly of the Rover, which combine to perform the sample handling and caching functions of the mission. The ACA primarily consists of the Bit Carousel, the Sample Handling Assembly (SHA), End Effector (EE), Sample Tubes and their Sample Tube Storage Assembly (STSA), Seals and their Dispenser, Volume, and Tube Assembly (DVT), the Sealing Station, the Vision Station, the Cover Parking Lot, and additional supporting hardware. These components attach to the Caching Component Mounting Deck (CCMD) that is integrated with the Rover interior. This work describes these major elements of the SCS, with an emphasis on the functionality required to perform the set of tasks and interactions required by the subsystem. Key considerations of contamination control and biological cleanliness throughout the development of these hardware elements are also discussed. Additionally, aspects of testing and validating the functionality of the SCS are described. 
Early prototypes and tests matured the designs over several years and eventually led to the flight hardware and integrated testing in both Earth ambient and Mars-like environments. Multiple unique testbed venues were developed and used to enable testing from low-level mechanism operation through end-to-end sampling and caching interactions with the full subsystem and flight software. Various accomplishments from these testing efforts are highlighted. These past and ongoing tests support the successful preparations of the SCS on its pathway to operations on Mars. Did Mars Possess a Dense Atmosphere During the First ∼ 400 $\sim400$ Million Years' Abstract: Abstract It is not yet entirely clear whether Mars began as a warm and wet planet that evolved towards the present-day cold and dry body or if it always was cold and dry with just some sporadic episodes of liquid water on its surface. An important clue into this question can be gained by studying the earliest evolution of the Martian atmosphere and whether it was dense and stable to maintain a warm and wet climate or tenuous and susceptible to strong atmospheric escape. In this review we therefore discuss relevant aspects for the evolution and stability of a potential early Martian atmosphere. This contains the EUV flux evolution of the young Sun, the formation timescale and volatile inventory of the planet including volcanic degassing, impact delivery and removal, the loss of the catastrophically outgassed steam atmosphere, atmosphere-surface interactions, as well as thermal and non-thermal escape processes affecting a potential secondary atmosphere at early Mars. While early non-thermal atmospheric escape at Mars before 4 billion years ago is poorly understood, in particular in view of its ancient intrinsic magnetic field, research on thermal escape processes and the stability of a CO2-dominated atmosphere around Mars against high EUV fluxes indicate that volatile delivery and volcanic degassing cannot counterbalance the strong atmospheric escape. Therefore, a catastrophically outgassed steam atmosphere of several bars of CO2 and H2O, or CO and H2 for reduced conditions, through solidification of the Martian magma ocean could have been lost within just a few million years. Thereafter, Mars likely could not build up a dense secondary atmosphere during its first \(\sim400\) million years but might only have possessed an atmosphere sporadically during events of strong volcanic degassing, potentially also including SO2. This indicates that before \(\sim4.1\) billion years ago Mars indeed might have been cold and dry with at maximum short and sporadic warmer periods. A denser CO2- or CO-dominated atmosphere, however, might have built up afterwards but must have been lost later-on due to non-thermal escape processes and sequestration into the ground. The SuperCam Instrument Suite on the NASA Mars 2020 Rover: Body Unit and Combined System Tests Abstract: Abstract The SuperCam instrument suite provides the Mars 2020 rover, Perseverance, with a number of versatile remote-sensing techniques that can be used at long distance as well as within the robotic-arm workspace. These include laser-induced breakdown spectroscopy (LIBS), remote time-resolved Raman and luminescence spectroscopies, and visible and infrared (VISIR; separately referred to as VIS and IR) reflectance spectroscopy. 
A remote micro-imager (RMI) provides high-resolution color context imaging, and a microphone can be used as a stand-alone tool for environmental studies or to determine physical properties of rocks and soils from shock waves of laser-produced plasmas. SuperCam is built in three parts: The mast unit (MU), consisting of the laser, telescope, RMI, IR spectrometer, and associated electronics, is described in a companion paper. The on-board calibration targets are described in another companion paper. Here we describe SuperCam's body unit (BU) and testing of the integrated instrument. The BU, mounted inside the rover body, receives light from the MU via a 5.8 m optical fiber. The light is split into three wavelength bands by a demultiplexer, and is routed via fiber bundles to three optical spectrometers, two of which (UV and violet; 245–340 and 385–465 nm) are crossed Czerny-Turner reflection spectrometers, nearly identical to their counterparts on ChemCam. The third is a high-efficiency transmission spectrometer containing an optical intensifier capable of gating exposures to 100 ns or longer, with variable delay times relative to the laser pulse. This spectrometer covers 535–853 nm ( \(105\text{--}7070~\text{cm}^{-1}\) Raman shift relative to the 532 nm green laser beam) with \(12~\text{cm}^{-1}\) full-width at half-maximum peak resolution in the Raman fingerprint region. The BU electronics boards interface with the rover and control the instrument, returning data to the rover. Thermal systems maintain a warm temperature during cruise to Mars to avoid contamination on the optics, and cool the detectors during operations on Mars. Results obtained with the integrated instrument demonstrate its capabilities for LIBS, for which a library of 332 standards was developed. Examples of Raman and VISIR spectroscopy are shown, demonstrating clear mineral identification with both techniques. Luminescence spectra demonstrate the utility of having both spectral and temporal dimensions. Finally, RMI and microphone tests on the rover demonstrate the capabilities of these subsystems as well. Editorial to the Topical Collection: In Situ Exploration of the Ice Giants: Science and Technology Correction to: Studying the Composition and Mineralogy of the Hermean Surface with the Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) for the BepiColombo Mission: An Update Abstract: A Correction to this paper has been published: https://doi.org/10.1007/s11214-020-00780-w Meteorological Predictions for Mars 2020 Perseverance Rover Landing Site at Jezero Crater Abstract: Abstract The Mars Regional Atmospheric Modeling System (MRAMS) and a nested simulation of the Mars Weather Research and Forecasting model (MarsWRF) are used to predict the local meteorological conditions at the Mars 2020 Perseverance rover landing site inside Jezero crater (Mars). These predictions are complemented with the COmplutense and MIchigan MArs Radiative Transfer model (COMIMART) and with the local Single Column Model (SCM) to further refine predictions of radiative forcing and the water cycle respectively. The primary objective is to facilitate interpretation of the meteorological measurements to be obtained by the Mars Environmental Dynamics Analyzer (MEDA) aboard the rover, but also to provide predictions of the meteorological phenomena and seasonal changes that might impact operations, from both a risk perspective and from the perspective of being better prepared to make certain measurements. 
A full diurnal cycle at four different seasons ( \(\text{L}_{\mathrm{s}}\) \(0^{\circ}\) , \(90^{\circ}\) , \(180^{\circ}\) , and \(270^{\circ}\) ) is investigated. Air and ground temperatures, pressure, wind speed and direction, surface radiative fluxes and moisture data are modeled. The good agreement between observations and modeling in prior works [Pla-Garcia et al. in Icarus 280:103–113, 2016; Newman et al. in Icarus 291:203–231, 2017; Vicente-Retortillo et al. in Sci. Rep. 8(1):1–8, 2018; Savijärvi et al. in Icarus, 2020] provides confidence in utilizing these models results to predict the meteorological environment at Mars 2020 Perseverance rover landing site inside Jezero crater. The data returned by MEDA will determine the extent to which this confidence was justified. A Primer on Focused Solar Energetic Particle Transport Abstract: Abstract The basics of focused transport as applied to solar energetic particles are reviewed, paying special attention to areas of common misconception. The micro-physics of charged particles interacting with slab turbulence are investigated to illustrate the concept of pitch-angle scattering, where after the distribution function and focused transport equation are introduced as theoretical tools to describe the transport processes and it is discussed how observable quantities can be calculated from the distribution function. In particular, two approximations, the diffusion-advection and the telegraph equation, are compared in simplified situations to the full solution of the focused transport equation describing particle motion along a magnetic field line. It is shown that these approximations are insufficient to capture the complexity of the physical processes involved. To overcome such limitations, a finite-difference model, which is open for use by the public, is introduced to solve the focused transport equation. The use of the model is briefly discussed and it is shown how the model can be applied to reproduce an observed solar energetic electron event, providing insights into the acceleration and transport processes involved. Past work and literature on the application of these concepts are also reviewed, starting with the most basic models and building up to more complex models. ISA, a High Sensitivity Accelerometer in the Interplanetary Space Abstract: Abstract ISA (Italian Spring Accelerometer) is a high sensitivity accelerometer flying, as scientific payload, on-board one of the two spacecraft (the Mercury Planetary Orbiter) of BepiColombo, the first ESA mission to Mercury. The first commissioning phase (performed in the period November 2018 - August 2019) allowed to verify the functionality of the instrument itself as well as of the related data handling and archiving system. Moreover, the acceleration measurements gathered in this time frame allow to envisage the potentiality of such an instrument as a high-accuracy monitor of the spacecraft mechanical environment. Mercury Dust Monitor (MDM) Onboard the Mio Orbiter of the BepiColombo Abstract: Abstract An in-situ cosmic-dust instrument called the Mercury Dust Monitor (MDM) had been developed as a part of the science payload for the Mio (Mercury Magnetospheric Orbiter, MMO) stage of the joint European Space Agency (ESA)–JAXA Mercury-exploration mission. The BepiColombo spacecraft was successfully launched by an Ariane 5 rocket on October 20, 2018, and commissioning tests of the science payload were successfully completed in near-earth orbit before injection into a long journey to Mercury. 
MDM has a sensor consisting of four plates of piezoelectric lead zirconate titanate (PZT), which converts the mechanical stress (or strain) induced by dust-particle impacts into electrical signals. After the commencement of scientific operations, MDM will measure the impact momentum at which dust particles in orbit around the Sun collide with the sensor and record the arrival direction. This paper provides basic information concerning the MDM instrument and its predicted scientific operation as a future reference for scientific articles concerning the MDM's observational data. The Sun Through Time Abstract: Abstract Magnetic activity of stars like the Sun evolves in time because of spin-down owing to angular momentum removal by a magnetized stellar wind. These magnetic fields are generated by an internal dynamo driven by convection and differential rotation. Spin-down therefore converges at an age of about 700 Myr for solar-mass stars to values uniquely determined by the stellar mass and age. Before that time, however, rotation periods and their evolution depend on the initial rotation period of a star after it has lost its protostellar/protoplanetary disk. This non-unique rotational evolution implies similar non-unique evolutions for stellar winds and for the stellar high-energy output. I present a summary of evolutionary trends for stellar rotation, stellar wind mass loss and stellar high-energy output based on observations and models. Mars 2020 Mission Overview Abstract: Abstract The Mars 2020 mission will seek the signs of ancient life on Mars and will identify, prepare, document, and cache a set of samples for possible return to Earth by a follow-on mission. Mars 2020 and its Perseverance rover thus link and further two long-held goals in planetary science: a deep search for evidence of life in a habitable extraterrestrial environment, and the return of martian samples to Earth for analysis in terrestrial laboratories. The Mars 2020 spacecraft is based on the design of the highly successful Mars Science Laboratory and its Curiosity rover, but outfitted with a sophisticated suite of new science instruments. Ground-penetrating radar will illuminate geologic structures in the shallow subsurface, while a multi-faceted weather station will document martian environmental conditions. Several instruments can be used individually or in tandem to map the color, texture, chemistry, and mineralogy of rocks and regolith at the meter scale and at the submillimeter scale. The science instruments will be used to interpret the geology of the landing site, to identify habitable paleoenvironments, to seek ancient textural, elemental, mineralogical and organic biosignatures, and to locate and characterize the most promising samples for Earth return. Once selected, ∼35 samples of rock and regolith weighing about 15 grams each will be drilled directly into ultraclean and sterile sample tubes. Perseverance will also collect blank sample tubes to monitor the evolving rover contamination environment. In addition to its scientific instruments, Perseverance hosts technology demonstrations designed to facilitate future Mars exploration. These include a device to generate oxygen gas by electrolytic decomposition of atmospheric carbon dioxide, and a small helicopter to assess performance of a rotorcraft in the thin martian atmosphere. 
Mars 2020 entry, descent, and landing (EDL) will use the same approach that successfully delivered Curiosity to the martian surface, but with several new features that enable the spacecraft to land at previously inaccessible landing sites. A suite of cameras and a microphone will for the first time capture the sights and sounds of EDL. Mars 2020's landing site was chosen to maximize scientific return of the mission for astrobiology and sample return. Several billion years ago Jezero crater held a 40 km diameter, few hundred-meter-deep lake, with both an inflow and an outflow channel. A prominent delta, fine-grained lacustrine sediments, and carbonate-bearing rocks offer attractive targets for habitability and for biosignature preservation potential. In addition, a possible volcanic unit in the crater and impact megabreccia in the crater rim, along with fluvially-deposited clasts derived from the large and lithologically diverse headwaters terrain, contribute substantially to the science value of the sample cache for investigations of the history of Mars and the Solar System. Even greater diversity, including very ancient aqueously altered rocks, is accessible in a notional rover traverse that ascends out of Jezero crater and explores the surrounding Nili Planum. Mars 2020 is conceived as the first element of a multi-mission Mars Sample Return campaign. After Mars 2020 has cached the samples, a follow-on mission consisting of a fetch rover and a rocket could retrieve and package them, and then launch the package into orbit. A third mission could capture the orbiting package and return it to Earth. To facilitate the sample handoff, Perseverance could deposit its collection of filled sample tubes in one or more locations, called depots, on the planet's surface. Alternatively, if Perseverance remains functional, it could carry some or all the samples directly to the retrieval spacecraft. The Mars 2020 mission and its Perseverance rover launched from the Eastern Range at Cape Canaveral Air Force Station, Florida, on July 30, 2020. Landing at Jezero Crater will occur on Feb 18, 2021 at about 12:30 PM Pacific Time. Radiometric Calibration Targets for the Mastcam-Z Camera on the Mars 2020 Rover Mission Abstract: Abstract The Mastcam-Z Camera is a stereoscopic, multispectral camera with zoom capability on NASA's Mars-2020 Perseverance rover. The Mastcam-Z relies on a set of two deck-mounted radiometric calibration targets to validate camera performance and to provide an instantaneous estimate of local irradiance and allow conversion of image data to units of reflectance (R∗ or I/F) on a tactical timescale. Here, we describe the heritage, design, and optical characterization of these targets and discuss their use during rover operations. The Mastcam-Z primary calibration target inherits features of camera calibration targets on the Mars Exploration Rovers, Phoenix and Mars Science Laboratory missions. This target will be regularly imaged during flight to accompany multispectral observations of the martian surface. The primary target consists of a gold-plated aluminum base, eight strong hollow-cylinder Sm2Co17 alloy permanent magnets mounted in the base, eight ceramic color and grayscale patches mounted over the magnets, four concentric, ceramic grayscale rings and a central aluminum shadow post (gnomon) painted with an IR-black paint. The magnets are expected to keep the central area of each patch relatively free of Martian aeolian dust. 
The Mastcam-Z secondary calibration target is a simple angled aluminum shelf carrying seven vertically mounted ceramic color and grayscale chips and seven identical, but horizontally mounted ceramic chips. The secondary target is intended to augment and validate the calibration-related information derived from the primary target. The Mastcam-Z radiometric calibration targets are critically important to achieving Mastcam-Z science objectives for spectroscopy and photometric properties.
FINA 3332 - EXAM 4 TopHat Questions - Weekly In-Class Quizzes over Chapters 8, 9, and 11.
The chance of receiving an actual return that differs from the one that is expected is called ______________.
Risk is indicated by variability, whether the variability is considered positive or negative. Both the positive and negative outcomes must be evaluated when considering risk.
The standard deviation is calculated as the weighted average of all the deviations from the expected value, and it indicates how far above or below the expected value the actual value is expected to be.
The greater the variability of the possible returns on an investment, ______________. the riskier the investment.
What is capital budgeting? The financial analysis that a corporation conducts to determine if they should pursue a potential investment.
Which of the following is an example of a capital investment project? All of the above are examples of capital investment projects.
One advantage of the payback period method is that it provides a rough measure of a project's liquidity and risk.
If the cost of an investment is $12,000 and the expected cash flow from the investment is $4,000, then the payback period is 4 years.
If a firm has a 25% tax rate and its cost of debt is 8.25%, what is the net cost of this debt to the firm?
The ______________ on a bond is the cost to the firm for using bondholders' funds. yield to maturity (YTM)
The before-tax cost of debt is the same as the: yield to maturity (YTM) associated with the firm's bonds.
Flotation costs are what a firm pays to investment bankers for their assistance in the issuance of new equity securities.
A firm's weighted average cost of capital (WACC) is: determined by participants in the financial markets.
Under normal circumstances, the weighted average cost of capital (WACC) is used as the firm's required rate of return because: as long as the firm's investments earn returns greater than its WACC, the value of the firm will be increased.
Beige Inc. is evaluating three independent capital budgeting projects whose internal rates of return (IRRs) are greater than the firm's marginal cost of capital (WACC). Beige should choose: all of the projects whose internal rates of return (IRRs) are greater than the firm's weighted average cost of capital (WACC).
The following data relate to notes receivable and interest for Owens Co., a financial services company. (All notes are dated as of the day they are received.) Mar. 8. Received a $33,000, 5%, 60-day note on account. 31. Received an $80,000, 7%, 90-day note on account. May 7. Received $33,275 on note of March 8. 16. Received a $72,000, 7%, 90-day note on account. June 11. Received a $36,000, 6%, 45-day note on account. 29. Received $81,400 on note of March 31. July 26. Received $36,270 on note of June 11. Aug. 4. Received a $48,000, 9%, 120-day note on account. 14. Received $73,260 on note of May 16. Dec. 2. Received $49,440 on note of August 4. Instructions: Journalize the entries to record the transactions.
Which one of the following is the annuity present value formula?
A. $C \times \dfrac{1 - \frac{1}{(1 + r)^t}}{r}$
B. $C \times 1 - \Big[\dfrac{1}{(1 + r)^t}\Big] - r$
C. $C \times \dfrac{1 - \frac{r}{(1 + r)^t}}{r}$
D. $C \times 1 - \Big[\dfrac{1}{(1 \times r)^t}\Big] \times r$
E. $C \times \Big\{1 - \Big[\dfrac{r}{(1 \times r)^t}\Big]\Big\} \times r$
**Find the slope and $y$-intercept of each line.** $$ x=-\frac{3}{4} y+\frac{3}{2} $$
The current assets and current liabilities for Apple Inc. and Dell, Inc., are shown as follows at the end of a recent fiscal period: $$ \begin{array}{lrr} & \text { Apple Inc. } & \text { Dell, Inc. } \\ & \text { (in millions) } & \text { (in millions) } \\ \hline \text { Current assets: } & & \\ \quad\text { Cash and cash equivalents } & \$ 11,261 & \$ 13,913 \\ \quad\text { Short-term investment } & 14,359 & 452 \\ \quad\text { Accounts receivable } & 11,560 & 10,136 \\ \quad\text { Inventories } & 1,051 & 1,301 \\ \quad\text { Other current assets* } & 3,447 & 3,219 \\ \quad\quad \text { Total current assets } & \underline{\underline{\$ 41,678}} & \underline{\underline{\$ 29,021}} \\ \text { Current liabilities: }\\ \quad\text { Accounts payable } & \$ 17,738 & \$ 15,474 \\ \quad\text { Accrued and other current liabilities } & 2,984 & 4,009 \\ \quad\quad\text { Total current liabilities } & \$ 20,722 & \$ 19,483 \\\hline\hline \end{array} $$ *These represent prepaid expenses and other nonquick current assets. b. Interpret the quick ratio difference between the two companies.
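Purely as an illustration of the arithmetic behind these items, a short base-R sketch follows; the function names and the example inputs are ours (not part of the study material), and the payback figure is simply cost divided by annual cash flow.

# Present value of an ordinary annuity: C * (1 - 1/(1+r)^t) / r
annuity_pv <- function(C, r, t) C * (1 - 1 / (1 + r)^t) / r

# Payback period for an even annual cash flow
payback_period <- function(cost, annual_cash_flow) cost / annual_cash_flow

# After-tax (net) cost of debt: pre-tax yield * (1 - tax rate)
after_tax_cost_of_debt <- function(pretax, tax) pretax * (1 - tax)

# Quick ratio: (cash + short-term investments + receivables) / current liabilities
quick_ratio <- function(cash, st_inv, receivables, curr_liab) {
  (cash + st_inv + receivables) / curr_liab
}

annuity_pv(1000, 0.08, 10)               # hypothetical: $1,000 per year for 10 years at 8%
payback_period(12000, 4000)              # 12,000 / 4,000 = 3
after_tax_cost_of_debt(0.0825, 0.25)     # 8.25% * 0.75 = 6.1875%
quick_ratio(11261, 14359, 11560, 20722)  # Apple Inc. figures in millions (approx. 1.79)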
RTG 2553 Symmetries and classifying spaces: analytic, arithmetic and derived Info on applying Core research topics of the Research Training Group Moduli Spaces and Deformation Spaces The construction and study of the global geometry and deformation theory of moduli spaces parameterizing analytic, algebraic, or arithmetic objects has a rich history and is one of the central objectives in our research programme. The scope of the investigation ranges from fundamental work regarding general criteria for the existence of moduli spaces for algebraic and analytic stacks, over the complex and real geometry of moduli spaces parameterizing Higgs bundles or vector bundles, to moduli spaces occurring in arithmetic questions such as moduli spaces of $p$-divisible groups and Shimura varieties. Quotients of Hermitian symmetric domains are quite generally important objects of study in complex and differential geometry. In addition, Shimura varieties appear here in their capacity as algebro-geometric moduli spaces and owing to their rich interplay with arithmetic applications. Research topics include: Construction of algebraic and analytic moduli spaces Geometry of algebraic and analytic moduli spaces Derived algebraic geometry and motivic virtual fundamental classes Arithmetic applications of moduli spaces: Reciprocity laws Shimura varieties: Geometry of Newton strata Moduli spaces and universal covers of $p$-divisible groups Lie Groups, their Actions, and Quotient Spaces Lie groups (real, complex, $p$-adic) and linear algebraic groups form an important class of groups with a close tie to geometry, and with many applications for example in the construction and study of moduli spaces. While there are many differences between complex analytic, algebraic geometric and $p$-adic analytic techniques, there is also a large common foundation to these theories, which will play a crucial role in the education of our PhD students. Moment measure conjecture Characterisation of compact quotients of Hermitian symmetric spaces Analytic and topological invariants of quotient spaces Geometry of affine Grassmannians Equivariant vector bundles A particular form of symmetry which is ubiquitous in mathematics is duality. Specifically, we intend to study dg categories with duality. In a sense, the various forms of Langlands (and similar) correspondences which are also studied in other projects, or at least are present in the background there, can be seen as a form of duality. More concretely, that formalism involves several instances of dualities, such as the Langlands dual group. dg categories with duality and non-commutative Chow-Witt motives Duality and Euler characteristics over a general base De Rham theorem for intersection space cohomology The $p$-adic Langlands programme and global applications $p$-adic $L$-functions and the Drinfeld tower Galois and Automorphic Representations Understanding the structure of Galois groups such as the absolute Galois group of $\mathbb Q$ is one of the main questions which have driven the development of number theory and arithmetic geometry in the last decades. Nowadays the primary approach to achieve this is to understand the representations of such Galois groups. Via the Langlands program, Galois representations are connected with Lie group representations/automorphic representations. 
Galois representations and elliptic curves over imaginary quadratic fields Refined Iwasawa theory and higher Fitting invariants Deformation rings of Galois representations Representations of $p$-adic Lie groups
Methodology Article | Open | Published: 15 June 2015 Deriving movement properties and the effect of the environment from the Brownian bridge movement model in monkeys and birds Kevin Buchin1, Stef Sijben2, E Emiel van Loon3, Nir Sapir4, Stéphanie Mercier5, T Jean Marie Arseneau6 & Erik P Willems6 The Brownian bridge movement model (BBMM) provides a biologically sound approximation of the movement path of an animal based on discrete location data, and is a powerful method to quantify utilization distributions. Computing the utilization distribution based on the BBMM while calculating movement parameters directly from the location data, may result in inconsistent and misleading results. We show how the BBMM can be extended to also calculate derived movement parameters. Furthermore we demonstrate how to integrate environmental context into a BBMM-based analysis. We develop a computational framework to analyze animal movement based on the BBMM. In particular, we demonstrate how a derived movement parameter (relative speed) and its spatial distribution can be calculated in the BBMM. We show how to integrate our framework with the conceptual framework of the movement ecology paradigm in two related but acutely different ways, focusing on the influence that the environment has on animal movement. First, we demonstrate an a posteriori approach, in which the spatial distribution of average relative movement speed as obtained from a "contextually naïve" model is related to the local vegetation structure within the monthly ranging area of a group of wild vervet monkeys. Without a model like the BBMM it would not be possible to estimate such a spatial distribution of a parameter in a sound way. Second, we introduce an a priori approach in which atmospheric information is used to calculate a crucial parameter of the BBMM to investigate flight properties of migrating bee-eaters. This analysis shows significant differences in the characteristics of flight modes, which would have not been detected without using the BBMM. Our algorithm is the first of its kind to allow BBMM-based computation of movement parameters beyond the utilization distribution, and we present two case studies that demonstrate two fundamentally different ways in which our algorithm can be applied to estimate the spatial distribution of average relative movement speed, while interpreting it in a biologically meaningful manner, across a wide range of environmental scenarios and ecological contexts. Therefore movement parameters derived from the BBMM can provide a powerful method for movement ecology research. Modelling movement as a stochastic process provides means to estimate paths or location distributions when observations were not recorded continuously. This perspective is, however, often overlooked when analyzing movement based on discrete observations. For instance kernel-density estimation, which is frequently applied to movement data, does not take temporal autocorrelation into account. It is used for home-range estimation [1, 2] when the sampling rate is sufficiently low so that independence between observations can reasonably be assumed. Similarly, home range estimation based on minimum convex polygons [3] also ignores the actual movement between different locations. In other uses of movement data, locations are interpolated under the assumption of a linear movement path between observations [4]. 
This assumption is unrealistic except for densely sampled data, and can lead to wrong conclusions on sparser data, as illustrated in Fig. 1(a).
Linear interpolation compared to Brownian bridges. In this example the movement path is shown in gray and the location data as black dots connected by straight line segments. a Linear interpolation would incorrectly report that the movement path does not traverse the area $\mathcal{A}$. b Two realizations in the BBMM, one of which traverses $\mathcal{A}$. c Utilization distribution (density indicated by shading) and 99 % volume isopleth, which intersects $\mathcal{A}$.
Stochastic models like state-space models [5–7] and the Brownian bridge movement model (BBMM) [8–12] have been successfully applied for estimating the movement path and intensity of space use based on discrete location data. In this paper we explicitly focus on the BBMM (but see online Additional file 1 for a more elaborate discussion of the similarities and differences between the BBMM and state-space models). The BBMM takes the movement of animals into account to calculate space use patterns. It does so making relatively few assumptions, yet still making biological sense in that its parameters reflect real properties of the relocation data: measurement accuracy and, in a way, speed and directionality of movement. The assumption underlying the BBMM is that the entity exhibits purely random (i.e., Brownian) motion. In a typical scenario in which the BBMM is applied, we have multiple location measurements and are interested in inferring the location at times in the interval between two consecutive measurements. Therefore, we condition Brownian motion on the measured locations at the observation times. Such a conditioned Brownian motion is called a Brownian bridge, which is illustrated in Fig. 1(b–c). The BBMM has the desirable property of being able to take measurement uncertainty into account, usually by assuming that this uncertainty follows a Gaussian distribution around a given relocation point (which is an appropriate assumption for e.g. relocations obtained from GPS telemetry [13]). In contrast to pure Brownian motion, however, additional Gaussian noise results in a process that is not Markov [14]. The use of the BBMM in the context of movement ecology was proposed by Bullard [8] and Horne et al. [9]; it is defined by the measurement error and the diffusion coefficient, which relates to an organism's mobility. Horne et al. propose to compute the diffusion coefficient using maximum likelihood estimation, thereby explicitly assuming homogeneous movement throughout an entire trajectory. However, as movement parameters change over time, it is biologically more realistic to allow the diffusion coefficient to vary. Kranstauber et al. [10] use the Bayesian information criterion to detect changes in the movement state of an organism, and use this to vary the diffusion coefficient over time. Bivariate Gaussian bridges factorise diffusion into a parallel and an orthogonal component [11]. A related algorithm is the biased random walk proposed by Benhamou [15], in which the sampling density is increased using linear interpolation and kernel density estimation is then applied to the resulting set of locations. Overall, these methods provide a more advanced estimate of the location distribution compared to using a fixed diffusion coefficient, because they are more dynamic or segment-specific.
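To make the idea of a Brownian bridge concrete, the following minimal base-R sketch simulates one possible realization of the true path between two location fixes, as in Fig. 1(b). It conditions an unconstrained Brownian motion on the second fix; measurement error is ignored, and the function name and all numbers (simulate_brownian_bridge, the fix coordinates, D = 100 m²/s) are illustrative assumptions rather than part of the original analysis.

# Simulate one realization of a 2-D Brownian bridge between two fixes
# x0 (at time t0) and x1 (at time t1), with diffusion coefficient D (m^2/s).
simulate_brownian_bridge <- function(x0, x1, t0, t1, D, n_steps = 100) {
  times <- seq(t0, t1, length.out = n_steps + 1)
  dt    <- diff(times)
  # unconstrained Brownian motion started at x0 (per-coordinate increment variance D * dt)
  steps <- cbind(rnorm(n_steps, 0, sqrt(D * dt)),
                 rnorm(n_steps, 0, sqrt(D * dt)))
  w <- rbind(c(0, 0), apply(steps, 2, cumsum))
  w <- sweep(w, 2, x0, "+")
  # condition on the second fix: B(t) = W(t) + (t - t0) / (t1 - t0) * (x1 - W(t1))
  alpha  <- (times - t0) / (t1 - t0)
  bridge <- w + outer(alpha, x1 - w[nrow(w), ])
  data.frame(time = times, x = bridge[, 1], y = bridge[, 2])
}

# Hypothetical example: two fixes one hour apart, 500 m and 300 m displacement
path <- simulate_brownian_bridge(c(0, 0), c(500, 300), t0 = 0, t1 = 3600, D = 100)
plot(path$x, path$y, type = "l", asp = 1)

Repeating such simulations many times and binning the visited locations approximates the utilization distribution shown in Fig. 1(c).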
The BBMM has so far been exclusively used to compute utilization distributions. The analysis of movement, however, often does not ask for location as such, but rather focuses on derived movement parameters like relative speed, or more complex analysis tasks like similarity estimation between trajectories. In recent work Buchin et al. [16] show how to derive such parameters and how to perform fundamental analysis tasks under the assumption of a BBMM. Since their paper focused on the technical side of mathematically deriving the corresponding parameters, they assumed that movement takes place in a featureless space, not taking into account the external and internal factors that govern organismal movement. Clearly though, these factors are essential for a proper biological understanding of animal movement. Nathan et al. [17] proposed a paradigm, which incorporates four basic components that affect a movement path: external factors, the internal state of the moving organism, its navigational capacity and its motion capacity. Getz and Saltz [18] present a framework for generating and analyzing movement paths using this paradigm, which can be used to generate movement paths by simulation and to segment movement paths by state-space methods. It does not, however, deal with the interpolation of location observations. In this article we present a computational framework for movement analysis using the BBMM in the context of the movement ecology paradigm. Unique to our framework is the application of the BBMM beyond the estimation of utilization distributions to also calculate derived movement parameters and their spatial distribution. The derived movement parameter we focus on in this paper is relative speed and its spatial distribution. It is important to note that in the BBMM speed estimations necessarily need to be relative to a time scale, since Brownian motion is nowhere differentiable. Therefore, speed calculated in our framework is always relative speed 1 and not an absolute measure. Further, we note that in our framework calculations are performed per bridge, and for any given bridge only the two adjacent observations are used. While this is in line with the work of Horne et al [9], this does not account for sequence of observations being not Markov [14] in the presence of measurement errors. In the Results section we first discuss how various factors influencing a movement path can be incorporated in such an analysis. We differentiate between two related but acutely different approaches to do so. The first approach takes factors into account a posteriori, that is, they do not influence the movement model but are used to biologically interpret its outcome. The second takes factors into account a priori, that is, factors influence a key model parameter (the diffusion coefficient), and thereby the estimation of the movement path and derived properties. We demonstrate our framework on data of two species with distinctly different movement. We apply the a posteriori approach in a case study on how the movement speed of vervet monkeys (Chlorocebus pygerythrus) within a monthly ranging area is related to local vegetation density, whereas for the a priori approach we look at the flight mode of European bee-eaters (Merops apiaster) during migration. 
Computational aspects of the movement ecology framework
Organismal movement can be perceived as the outcome of the interaction between four key biological components: factors external to the organism, the organism's internal state, its navigational capacity, and its motion capacity [17]. In this paper we focus on external factors and consider two ways in which their relation to the movement can be investigated. First we consider the case in which the components do not affect the computation of the BBMM, but instead are used a posteriori to biologically interpret its outcome. Second, we use the components a priori to dynamically modify a key parameter of the BBMM, the diffusion coefficient. This approach is in general more difficult to handle computationally. The aspect which dictates this difficulty is the degree of spatial dependence of the components. If they are independent of space, possibly conditional on time or some measurement (e.g. behaviour, which may be identified on the basis of a short acceleration signal [19]), they can be handled in an analytical movement model. In contrast, if a factor is strongly spatially dependent (e.g. a highly heterogeneous habitat), an explicit simulation of the spatial trajectory is required. This would effectively imply a multitude of simulations, because we are interested in conditional distributions. If a factor varies only little over the length of a trajectory segment (e.g. atmospheric variables like wind or thermal convection), it is possible to make a quasi-steady-state assumption and consider it constant within a local spatial domain. This makes it much easier to handle spatial dependency in a BBMM. In the following, we elaborate on the various settings by means of two case studies. In the first study the external factor (vegetation density) is given as raster data and has a strong spatial dependency. In this setting the a posteriori approach is applicable. The challenge here is to compute a spatial distribution of average speed. In the second study the external factor (atmospheric conditions) is given along the movement path and therefore the a priori approach is applicable. Since in this case study the movement behaviour depends crucially on the atmospheric conditions, the a posteriori approach would likely not provide added value.
Movement speed of vervet monkeys – the a posteriori approach
In the first case study, we apply our framework to investigate local differences in the movement speed of a wild group of free-ranging vervet monkeys within their ranging area over a 1-month period. Movement data were obtained from a GPS logger, deployed on a single adult female within the group and programmed to collect coordinates at hourly intervals during the animals' daily activity period. In total, 465 relocations were collected this way (Fig. 2a), representing 31 daily trajectories (Fig. 2b). The GPS data are provided as Additional file 2.
Spatial distribution of vervet monkey movement data. The Brownian bridge movement model takes the GPS fixes along the trajectories as input and is used to calculate a probability density distribution function of location (i.e. the utilisation distribution), but also a spatial distribution of a movement property like speed (red equals low, violet high speed).
The black outline demarcates the 99 % volume isopleth.
We first employ our implementation of the dynamic BBMM to calculate the monthly utilization distribution of the monkeys and delineate their ranging area by a 99 % volume isopleth (Fig. 2c). This revealed the monkeys used an area of 1.3 km² over the observation period. Then we investigate how speed estimates from this dynamic BBMM relate to the external environment in which the animals are moving. We hypothesize that the monkeys travel faster in the more open, less densely vegetated areas of their range (due to greater exposure to predators and lower food availability), and slower in those areas in which the vegetation is more lush (more safety and food). We investigate this hypothesis by relating our average speed estimate (calculated over 5-minute time intervals; Fig. 2d) to local vegetation density, proxied by a high-resolution (0.50 × 0.50 m²) Normalized Difference Vegetation Index (NDVI) image (see Methods section). High NDVI values correspond to high vegetation density, whereas low values reflect sparse vegetation. We thus predict a negative association between the average movement speed of the monkeys and local NDVI values. To statistically test this prediction, we generated 1000 random sample locations throughout the monthly range of the animals and extracted both local NDVI and speed estimate values. Since the data exhibited significant levels of spatial autocorrelation (as indicated by inspections of Moran's I values and correlograms), statistical significance of the association between local vegetation density and speed of movement was assessed using geographically effective degrees of freedom [20]. This analysis revealed a significant, negative correlation between local NDVI values and BBMM-estimated average relative speed (Pearson's r = −0.213, F(1, 975.68) = 46.15, p < 0.0001), in line with our biological expectations. We also performed the same analysis using only one diffusion coefficient (i.e., non-dynamic BBMM), which also showed a significant, negative correlation (Pearson's r = −0.175, F(1, 150.27) = 4.78, p = 0.03).
Migration of European bee-eaters – the a priori approach
The European bee-eater is a species that uses both flapping and soaring-gliding flight during its migratory movement. In this case study we use the relationship between atmospheric conditions and flight mode in this species [21, 22] to construct a biologically informed BBMM that generates estimates of flight speed and trajectory uncertainty over different segments of the movement path, depending on the likely flight mode. Even though the influence of atmospheric conditions on the movement path (mediated by flight mode) has previously been investigated [22–25], this information has not yet been integrated into a movement model for the European bee-eater. We hypothesize that soaring-gliding flight is characterized by an overall less straight, more tortuous path, because in this flight mode birds may rely on the spatial variability of convective thermal intensity. Since soaring-gliding birds may actively select to circle in strong thermals that are not necessarily found in the exact direction of their flight destination, their path may be less direct or straight. Additionally, since migration speed scales differently with bird size for flapping and soaring-gliding flight modes, for relatively small birds like the European bee-eater (mean body mass of 56 g [22]) it has been suggested that soaring-gliding will be slower than flapping flight [26].
To investigate these hypotheses we calculate and compare the diffusion coefficients and average flight speeds for the two flight modes using the BBMM. The data set consisted of 91, 141 and 94 segments characterized by flapping, mixed and soaring-gliding flight modes, respectively (see [22] for additional details). The data were collected by radio telemetry, resulting in an irregular measurement interval of approximately 6 minutes (343 seconds, with a standard deviation of 547 seconds). The data set is provided as Additional file 3. We use a model to predict the fraction of time spent on soaring-gliding flight as a function of atmospheric conditions (most notably, the magnitude of the turbulence kinetic energy, or TKE). After calibration, our model classified the animals' flight mode with an overall error rate of 1.1 %. This model has the following form: $$\frac{e^{a\cdot \textrm{TKE}-b}}{1+e^{a \cdot \textrm{TKE} - b}}, $$ where the value (with 95 % confidence bounds) for parameter a is 74 (25–227), and for parameter b is 16 (5–50). Figure 3 shows the shape of this model as well as its predictive uncertainty.
Logistic function. The logistic function describing the fraction of time the birds flew using soaring-gliding as a function of turbulence kinetic energy (TKE). The grey-shaded range is a 0.95 confidence interval.
We selected the movement segments with the pure flapping and soaring-gliding flight modes and applied the maximum likelihood estimation by Horne et al. [9] separately to these. This resulted in estimated diffusion coefficients of 2965 m²/s for flapping and 4505 m²/s for soaring-gliding flight. This confirms our hypothesis that soaring-gliding is associated with a more tortuous flight path. The fact that this hypothesis could be investigated empirically on the basis of such sparse and irregularly sampled data is a distinct advantage of our approach over previous BBMM-based methods, which, moreover, are restricted to calculations of space use only. The difference in diffusion coefficients between the two flight modes is illustrated in Fig. 4. In this figure, the spatial distributions of two individuals are shown along with their flight mode. The movement path is clearly wider for segments with soaring-gliding flight than for those with flapping flight, and, to our knowledge, this aspect of flight mode on the migratory track has not yet been described elsewhere.
Changing diffusion coefficients. Two examples of the effect of a changing diffusion coefficient on the predicted trajectory. The coloured line is interpolated linearly between measured locations, where blue means a low diffusion coefficient (mainly flapping flight), and red means a high diffusion coefficient (mainly soaring/gliding flight). The contours indicate the 90 % and 99 % volume isopleths based on the trajectory. In the example to the right the time passed between two measurements is indicated. A larger diffusion coefficient results in a wider contour. For instance, of two bridges of similar duration (4:55 and 4:57 minutes) and length, the red bridge has a wider contour than the blue one.
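For concreteness, the following base-R sketch shows one way to obtain such a per-flight-mode maximum-likelihood estimate of the diffusion coefficient, in the spirit of the leave-one-out approach of Horne et al. [9]. It uses the bridge position density given in the Methods section, omits the measurement-error terms for brevity, and the names bridge_loglik, obs and flapping_obs (a data frame with columns t in seconds and x, y in metres) are assumptions of this sketch rather than objects from the original analysis.

# Log-likelihood of every second fix under the bridge spanned by its neighbours,
# as a function of the diffusion coefficient D (measurement error omitted).
bridge_loglik <- function(D, obs) {
  ll <- 0
  for (k in seq(2, nrow(obs) - 1, by = 2)) {
    t0 <- obs$t[k - 1]; t1 <- obs$t[k + 1]; tm <- obs$t[k]
    a  <- (tm - t0) / (t1 - t0)
    mu_x <- (1 - a) * obs$x[k - 1] + a * obs$x[k + 1]
    mu_y <- (1 - a) * obs$y[k - 1] + a * obs$y[k + 1]
    s2   <- (t1 - t0) * a * (1 - a) * D   # bridge variance, cf. sigma^2(t) in the Methods
    ll   <- ll + dnorm(obs$x[k], mu_x, sqrt(s2), log = TRUE) +
                 dnorm(obs$y[k], mu_y, sqrt(s2), log = TRUE)
  }
  ll
}

# Maximise separately for the flapping and the soaring-gliding segments, e.g.
D_flap <- optimize(bridge_loglik, interval = c(1, 1e5),
                   obs = flapping_obs, maximum = TRUE)$maximum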
At this resolution we found that the average relative cross-country speed for flapping flight was 9.7 m/s, while in soaring-gliding flight it was 8.5 m/s, a significant difference of 1.2 m/s (Welch two-sample T-test; 95 % confidence-interval: 0.91 - 1.56). The variance of relative cross-country speed for flapping flight was 16.2 m 2/s 2 and 7.1m 2/s 2 for soaring-gliding flight, a ratio of 2.30 (significant according to a 2-sided F-test; 95 % confidence-interval: 2.05 - 2.60). With respect to the speed difference we note, though, that wind conditions were somewhat different between segments flown using different flight modes. For example, Sapir et al. [27] have recently found that bee-eaters undertaking flapping flight experienced higher headwinds, while during soaring-gliding wind was overall less intense and this may have influenced our calculations that dealt only with the cross-country flight speed. Figure 5 shows the spatial distribution of average cross-country speed relative to three different time scales. We further note, that the speed variability within the soaring-gliding flight mode could be resolved if fine-resolution observations (e.g. <30 seconds) would be available. In that scenario, the variability in speed differences which is now implicit in the higher diffusion coefficient for that mode would become explicit through a higher variance in speed (at fine resolutions) for the soaring-gliding flight mode. We further note that the calculated speeds depend on the diffusion coefficient, the displacement between two observations and the chosen time scale; therefore –as is the case here– a higher diffusion coefficient does not necessarily imply higher speed. Spatial distribution of speed. Spatial distribution of speed of bee-eaters at different time scales, clipped to the 99 % volume isopleth using Israeli Transverse Mercator as coordinate grid. From left to right: 5 minutes, 15 minutes, and 30 minutes We demonstrated how the Brownian bridge movement model can be extended to compute the spatial distribution of derived movement parameters, such as relative speed, and used two case studies to illustrate different ways (the a posteriori and a priori approach) in which our computational framework can integrate environmental factors with the BBMM. In both case studies our framework provided meaningful biological insights that could not have been obtained previously from the BBMM. In the first case study, we used our framework to first calculate the utilization distribution and monthly ranging area of a group of vervet monkeys. Subsequently, we could analytically confirm the hypothesized relationship between the local average speed with which the animals traverse their ranging area to local vegetation density. Correlating local average speed to vegetation density required BBMM-based calculations novel to our paper, specifically an estimation of the spatial distribution of speeds. It would be interesting to see how a correlating variable could be used to estimate diffusion coefficients of a BBMM directly, which however seems like a computationally challenging task; this could mean that an a posteriori approach would be used as inspiration to apply an a priori approach. In the second case study, we used existing knowledge about the relationship between atmospheric conditions and flight mode of migrating European bee-eaters, to evaluate whether different flight modes result in different average cross-country flight speed and tortuosity of the movement path. 
This was not possible in previous studies [21, 22] due to varying sampling intervals. Here, however, we first fit a biologically informed BBMM, which then enabled us to demonstrate that soaring-gliding flight involves higher variability in route straightness and lower flight speeds than flapping flight. Our work therefore adds a novel perspective to bee-eater biology, and these findings, not discovered by the traditional approaches, demonstrate the usefulness of the new approach. Both case studies rely heavily on the ability not only to estimate the spatial distribution of an animal but also to estimate derived movement parameters and their spatial distribution based on the BBMM, an application of the BBMM unique to our work. We note that many of the conceptual questions we address for the BBMM, such as the use of spatial distributions of movement parameters to integrate environmental factors into the analysis, are also relevant to other movement models. In general, our framework may apply to settings where environmental factors are expected to influence velocity. For terrestrial, aquatic and airborne organisms these could be, respectively, terrain ruggedness, currents and wind. An organism's internal state or its interactions with other organisms may also be incorporated into the analysis when observations of these variables are available. Even though our case studies do not represent all these possibilities, they do demonstrate that the derivation of movement parameters and their spatial distribution via the BBMM is a powerful method for movement research.
Methods for computing movement parameters in the Brownian bridge movement model
We first discuss how various movement parameters are calculated in the BBMM and similar models. We then provide details on the specific methods used in the two case studies. The BBMM assumes that an entity exhibits Brownian motion between measured locations. A Brownian bridge is the distribution of this process conditioned on the locations of both endpoints. To model uncertainty in the measured locations and to avoid a degenerate probability distribution at the time of a measurement, the locations are often assumed to be normally distributed around the measured location. All of the following calculations are performed for individual Brownian bridges and only use the directly adjacent measurements. Note that in the presence of measurement errors the sequence of observations does not satisfy the Markov property [14], and any Brownian bridge actually depends on more than just the adjacent measurements. Thus, we need to assume that the measurement error is small relative to the diffusion coefficient. If we assume that we have two locations $\boldsymbol{x}_i, \boldsymbol{x}_{i+1}$ measured at times $t_i, t_{i+1}$ with variances $\delta_i^2$ and $\delta_{i+1}^2$ respectively, the position $\boldsymbol{X}_t$ at a time $t \in [t_i, t_{i+1}]$ follows a circular bivariate normal distribution with parameters $$\begin{array}{@{}rcl@{}} \boldsymbol{\mu}(t) &=& (1-\alpha) \boldsymbol{x}_{i} + \alpha \boldsymbol{x}_{i+1}, \\ \sigma^{2}(t) &=& (t_{i+1}-t_{i}) \alpha(1-\alpha) D + (1-\alpha)^{2} \delta_{i}^{2} + \alpha^{2} \delta_{i+1}^{2}. \end{array} $$ Here, $\alpha = \frac{t-t_{i}}{t_{i+1}-t_{i}}$ is a variable that moves linearly from 0 to 1 as $t$ moves from $t_i$ to $t_{i+1}$, and $D$ is the diffusion coefficient of the Brownian motion, which is often estimated by a maximum likelihood method [9].
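The position distribution above translates directly into code. The following base-R helper is a minimal sketch (the function and argument names are ours, not part of the published scripts); it returns the mean vector and the per-coordinate variance of the position at time t within one bridge.

# Bridge position distribution: observations x_i at t_i and x_ip1 at t_ip1,
# location variances d2_i and d2_ip1, diffusion coefficient D (m^2/s).
bridge_position <- function(t, t_i, t_ip1, x_i, x_ip1, d2_i, d2_ip1, D) {
  a  <- (t - t_i) / (t_ip1 - t_i)
  mu <- (1 - a) * x_i + a * x_ip1
  s2 <- (t_ip1 - t_i) * a * (1 - a) * D + (1 - a)^2 * d2_i + a^2 * d2_ip1
  list(mu = mu, sigma2 = s2)
}

# Hypothetical example: halfway through a one-hour bridge with 5 m location SD
bridge_position(t = 1800, t_i = 0, t_ip1 = 3600,
                x_i = c(0, 0), x_ip1 = c(400, 250),
                d2_i = 25, d2_ip1 = 25, D = 50)

Evaluating the corresponding bivariate normal density on a grid and summing over bridges yields the utilization distribution; the same quantities feed into the velocity and speed distributions discussed below.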
When the trajectory contains different movement states over time, it may be appropriate to vary the diffusion over time rather than to keep it constant [10]. Given these probability distributions, derived parameters such as distance or speed (relative to a time scale) can be determined [16]. These parameters are important building blocks for the detection of many movement patterns. We summarize the results on the distributions of these parameters here; for full derivations we refer to [16] and the online Additional file 4. Note that the derivation of velocity in [16] does not handle all possible dependencies and is superseded by the derivation in Appendix 1. If the positions of two animals A and B at time $t$ have independent circular normal distributions with means $\boldsymbol{\mu}_A(t)$ and $\boldsymbol{\mu}_B(t)$ and variances $\sigma_A^2(t)$ and $\sigma_B^2(t)$ respectively, the distance between A and B has a Rice distribution with parameters $|\boldsymbol{\mu}_A(t)-\boldsymbol{\mu}_B(t)|$ and $\sqrt{\sigma_A^2(t) + \sigma_B^2(t)}$. The average velocity over a time interval $[t_1, t_2]$ is given by the difference between two (generally not independent) circular normal distributions, for $\boldsymbol{X}(t_2)$ and $\boldsymbol{X}(t_1)$. The velocity has a circular normal distribution with mean $\frac{\boldsymbol{\mu}(t_{2})-\boldsymbol{\mu}(t_{1})}{t_{2}-t_{1}}$, while the expression for the variance depends on the number of location measurements that were obtained between $t_1$ and $t_2$. Let $t_s$, $t_i$ and $t_f$ be the time stamps of three consecutive observations with location variances $\delta_s^2$, $\delta_i^2$ and $\delta_f^2$ respectively, chosen such that $t_s \leq t_1 < t_i$. The observation at $t_f$ is only needed in the calculations if $t_i < t_2 \leq t_f$. The variance of the velocity is: $$ \sigma_{V}^{2}(t_{1}, t_{2}) = \left\{ \begin{array}{ll} \frac{\delta_{s}^{2} + \delta_{i}^{2}}{(t_{i}-t_{s})^{2}} + D\left(\frac{1}{t_{2}-t_{1}} - \frac{1}{t_{i}-t_{s}}\right) & \text{if } t_{1} < t_{2} \leq t_{i},\\ \frac{\sigma^{2}(t_{1}) + \sigma^{2}(t_{2}) - 2\left(\frac{t_{1}-t_{s}}{t_{i}-t_{s}}\right)\left(\frac{t_{f}-t_{2}}{t_{f}-t_{i}}\right)\delta_{i}^{2}}{(t_{2}-t_{1})^{2}} & \text{if } t_{i} < t_{2} \leq t_{f},\\ \frac{\sigma^{2}(t_{1}) + \sigma^{2}(t_{2})}{(t_{2}-t_{1})^{2}} & \text{otherwise,} \end{array} \right. $$ where $D$ in the first case is the diffusion coefficient of the bridge between $t_s$ and $t_i$. Let $\boldsymbol{\mu}_V$ and $\sigma_V^2$ be the parameters of the velocity distribution over a time interval $[t_1, t_2]$. Speed (the absolute value of velocity) over this interval then has a Rice distribution with parameters $|\boldsymbol{\mu}_V|$ and $\sigma_V$. The direction of this velocity has a distribution with density $$\begin{aligned} f(\gamma) =&\, \frac{e^{-\frac{\nu^{2}}{2}}}{2\pi} + \frac{\nu\cos\eta}{2\sqrt{2\pi}} \exp\left(\frac{\nu^{2}\left(\cos^{2}\eta-1\right)}{2}\right)\\ &\times\left(1+\text{erf}\left(\frac{\nu\cos\eta}{\sqrt{2}}\right)\right), \end{aligned} $$ where $\nu = \frac{|\boldsymbol{\mu}_{V}|}{\sigma_{V}}$ is the noncentrality of the velocity distribution and $\eta = \operatorname{atan2}(\boldsymbol{\mu}_V) - \gamma$ is the angle between the direction of the mean and the direction under consideration. To obtain spatial distributions of speed, we consider the speed over a time interval $[t+\Delta t_s, t+\Delta t_f]$, after fixing the position at one time $t$ to a fixed location. If the time interval contains the time at which the position is fixed, i.e. $\Delta t_s \leq 0$ and $\Delta t_f \geq 0$, the position distributions at both endpoints of the interval are independent.
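The case distinction above, together with the Rice distribution for speed, can be transcribed into a short base-R sketch. The function names (velocity_variance, rice_mean) and the reuse of bridge_position() from the earlier sketch are our own conventions, and the position variances are passed in explicitly, so this illustrates the formulas rather than replacing the full implementation in the additional files.

# Variance of the average velocity over [t1, t2], following the case distinction above.
# t_s, t_i, t_f: times of the three consecutive observations; d2_s, d2_i: their location
# variances; s2_t1, s2_t2: position variances sigma^2(t1), sigma^2(t2) (e.g. from
# bridge_position()); D: diffusion coefficient of the bridge between t_s and t_i.
velocity_variance <- function(t1, t2, t_s, t_i, t_f, d2_s, d2_i, s2_t1, s2_t2, D) {
  if (t2 <= t_i) {
    (d2_s + d2_i) / (t_i - t_s)^2 + D * (1 / (t2 - t1) - 1 / (t_i - t_s))
  } else if (t2 <= t_f) {
    cov_term <- ((t1 - t_s) / (t_i - t_s)) * ((t_f - t2) / (t_f - t_i)) * d2_i
    (s2_t1 + s2_t2 - 2 * cov_term) / (t2 - t1)^2
  } else {
    (s2_t1 + s2_t2) / (t2 - t1)^2
  }
}

# Mean of a Rice distribution with noncentrality |mu_V| and scale sigma_V,
# i.e. the expected speed; exponentially scaled Bessel functions keep it stable.
rice_mean <- function(mu_norm, sigma) {
  v <- mu_norm / sigma
  z <- v^2 / 4
  L <- (1 + v^2 / 2) * besselI(z, 0, expon.scaled = TRUE) +
       (v^2 / 2)     * besselI(z, 1, expon.scaled = TRUE)
  sigma * sqrt(pi / 2) * L
}

# Expected speed over an interval: rice_mean(|mean velocity|, sqrt(velocity variance))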
The conditioned velocity and speed distributions are then determined from these two endpoint position distributions. The spatial distribution of speed and the effect of the choice of the time scale ($\Delta t_f - \Delta t_s$) is illustrated in Fig. 5 by the example of the data used in the second case study. We do not give the details about these position distributions here, but refer to Appendix 1. Let $\boldsymbol{\mu}_s$, $\boldsymbol{\mu}_f$, $\sigma_s^2$ and $\sigma_f^2$ represent the respective means and variances of the conditioned positions at both endpoints of the interval. Then by independence of the positions the velocity distribution conditioned on $\boldsymbol{X}_t = \boldsymbol{x}$ is given by $$ \boldsymbol{V}_{\boldsymbol{x}; t} (t+\Delta t_{s}, t+\Delta t_{f}) = \frac{\boldsymbol{X}_{t+\Delta t_{f}} - \boldsymbol{X}_{t+\Delta t_{s}}}{\Delta t_{f} - \Delta t_{s}} \sim \mathcal{N}\left(\frac{\boldsymbol{\mu}_{f} - \boldsymbol{\mu}_{s}}{\Delta t_{f} - \Delta t_{s}},\ \frac{\sigma_{s}^{2} + \sigma_{f}^{2}}{\left(\Delta t_{f} - \Delta t_{s}\right)^{2}}\right). $$ As discussed before, the speed has a Rice distribution. We determine the average speed at a particular location by computing a weighted average over time of the mean speed. The weight is given by the probability density of the animal's position at the given time and location. That is, $$ S(\boldsymbol{x}) = \frac{1}{\int f_{\boldsymbol{X}_{t}}(\boldsymbol{x})\,dt} \int f_{\boldsymbol{X}_{t}}(\boldsymbol{x})\, \mathbb{E}\left[|\boldsymbol{V}_{\boldsymbol{x};t}(t+\Delta t_{s}, t+\Delta t_{f})|\right] dt. $$ (1)
Methods for the analysis of the movement speed of vervet monkeys in relation to their environment
Vervet monkeys are group-living primates that are abundant throughout most of sub-Saharan Africa [28]. They occur in stable, mixed-sex groups of typically 25-30 animals that consist of multiple adult males and females along with their offspring. Patterns of home range selection and general space use are strongly affected by external environmental factors such as primary productivity and vegetation structure [29] as well as the distribution of food, surface water and perceived predation risk [30]. In order to investigate whether the movement speed of animals is similarly affected by external variables, the data used in this case study were collected on a wild group of vervet monkeys ranging freely in their natural habitat in KwaZulu-Natal, South Africa, during December 2010. A digital telemetry collar (e-obs Type 1A, 69 g per unit, equivalent to just over 2 % of the tagged animal's body weight; all work at the Inkawu Vervet Project was approved by the relevant local authorities (the ethical boards of Ezemvelo KwaZulu-Natal Wildlife and the University of Cape Town, South Africa) and complies with EU directive 2010/63/EU on the protection of animals used for scientific purposes) was deployed on a single adult female within the group and programmed to obtain GPS fixes at hourly intervals throughout the daily activity phase of the animals (05:00–19:00). Given that vervet monkey groups typically move as coherent units through the landscape, GPS coordinates obtained from the tagged female were taken to represent the movement of the entire group. Local vegetation density was estimated from a multi-spectral, high-resolution (0.50 × 0.50 m² pixel size) satellite image (WorldView II, DigitalGlobe Inc.) obtained over the study period.
From this image, we calculated the Normalized Difference Vegetation Index (NDVI) [31], a well-established spectral correlate of primary productivity and vegetation structure. In our dynamic BBMM calculations, we did not consider bridges at the beginning of the day that stayed very close (≤ 50 m) to the starting location, as this indicated the monkeys had not commenced moving yet, and similarly at the end of the day near the final location. On the remaining bridges the method by Kranstauber et al. [10] was used to estimate the diffusion coefficient (using a margin of 3 and a window size of 7). The average speed distribution presented in the Results section was computed at a time scale $\Delta t$ of 5 minutes. Mean speed was computed as defined in Equation 1, over two time intervals relative to the focal point: one directly preceding it (i.e. $\Delta t_s = -\Delta t$, $\Delta t_f = 0$), and one directly following it (i.e. $\Delta t_s = 0$, $\Delta t_f = \Delta t$). If we had used only one of these intervals, we would not have been able to compute a speed near the beginning or end of the daily activity period, which could have resulted in missing values in the distribution. For the analysis with only one diffusion coefficient we used the method by Horne et al. [9]. The R scripts that were used in this analysis are provided as Additional file 5.
Methods for migration of European bee-eaters
This case study deals with the northward migration of the European bee-eater through the Arava Valley in southern Israel. The species is a very common passage migrant during both autumn and spring throughout the entire country [32]. In the 2005 and 2006 spring migration seasons, a total of 11 bee-eaters were trapped, marked and tagged with radio transmitters. Using portable systems, birds were followed over a total of 810 km, during which their flight mode was established through both wing flap signals and the unique signature of circling flight in the recorded transmission (for details see [21, 22]; bee-eater trapping permits were obtained from the Israeli Nature and Parks Authority (permits 2005/22055, 2006/25555) and the experimental procedure was approved by the Animal Care and Use Committee of the Hebrew University of Jerusalem (permit NS-06-07-2)). Trajectories were annotated with simulated atmospheric conditions at appropriately fine spatial and temporal scales using the Regional Atmospheric Modeling System (RAMS; [33, 34]). The relationship between bird flight mode (flapping, soaring-gliding and mixed flight) and atmospheric conditions is described in [22]. That study confirmed that turbulence kinetic energy (TKE, in m²/s²), as an indicator of convective updraught intensity in the atmosphere, facilitates soaring and gliding. In the current study, the relationship between bird flight mode and the movement path was estimated by calculating the effects of bird flight mode on the animal's diffusion coefficient in the BBMM [16]. The relation between TKE and flight mode, as well as between flight mode and the diffusion coefficient, was determined by considering only movement stretches with flapping and pure soaring-gliding modes (hence omitting the mixed flight modes). The mixed flight mode is highly variable and biomechanically not as well defined as flapping or soaring-gliding flight. A univariate logistic model was fitted to estimate the fraction of soaring flight (s) as a function of TKE. Model and parameter significance were tested for this model (using a 0.05 significance level), as well as the overall classification error.
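A compact base-R sketch of this model-fitting step is given below. The data frame segments, its columns tke (m²/s²), soaring (observed fraction of time in soaring-gliding flight) and duration, and the function names are assumptions of this sketch; the actual analysis is contained in the R scripts provided as additional files.

# Univariate logistic model for the soaring-gliding fraction as a function of TKE.
# For a proportion response, segment durations can be supplied as weights
# (or family = quasibinomial used) to avoid non-integer-count warnings.
fit <- glm(soaring ~ tke, family = binomial, data = segments, weights = duration)

# Predicted soaring fraction s on the response scale for new TKE values
s_hat <- predict(fit, newdata = data.frame(tke = c(0.1, 0.2, 0.3)), type = "response")

# Equivalent closed form with the fitted coefficients reported in the Results (a = 74, b = 16)
soaring_fraction <- function(tke, a = 74, b = 16) {
  exp(a * tke - b) / (1 + exp(a * tke - b))
}

# Weighted diffusion coefficient for mixed-flight segments (see the next paragraph),
# using the mode-specific estimates reported in the Results (m^2/s)
d_mixed <- function(s, d_flap = 2965, d_soar = 4505) (1 - s) * d_flap + s * d_soar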
Subsequently, the BBMM was fitted to segments with flapping flight and soaring-gliding flight separately, resulting in estimates for the diffusion coefficients for each of these flight modes. Next, the diffusion coefficient for the mixed flight mode was estimated by weighting the two diffusion coefficients with the fraction of time spent in each flight mode: $$D_{m} = (1-s) D_{f} + s\cdot D_{s}, $$ where D m , D f and D s refer to the diffusion coefficients of respectively mixed, flapping and soaring-gliding flight. The fraction s is obtained from the aforementioned logistic model. Using this parameterisation, the complete flight trajectories are estimated per bird by the BBMM. In addition to the estimated model coefficients, the results of this analysis are presented in the form of probability maps of movement for selected individuals, showing not only the most likely movement path but also the uncertainty in this as a function of distance between observation points and flight mode (as illustrated in Fig. 4). The R scripts that were used in this analysis are provided as Additional file 6. The vervet monkey GPS data set, the bee-eater data set, and the R scripts used in the analysis are included as additional files with the article. 1 For ease of readability we refer to relative speed simply as speed throughout the article. BBMM: Brownian bridge movement model RAMS: Regional Atmospheric Modeling System TKE: Turbulence kinetic energy Anderson DJ. The home range: a new nonparametric-estimation technique. Ecology. 1982; 63:103–12. http://dx.doi.org/10.2307/1937036. Worton BJ. Kernel methods for estimating the utilization distribution in home-range studies. Ecology. 1989; 70:164–8. Burgman MA, Fox JC. Bias in species range estimates from minimum convex polygons: implications for conservation and options for improved planning. Anim Conserv. 2003; 6:19–28. http://dx.doi.org/10.1017/S1367943003003044. Gudmundsson J, Laube P, Wolle T. Computational Movement Analysis In: Kresse W, Danko DM, editors. Springer Handbook of Geographic Information. Berlin Heidelberg: Springer: 2012. p. 423–38. http://dx.doi.org/10.1007/978-3-540-72680-7_22. Jonsen I, Mills Flemming J, Myers R. Robust state-space modeling of animal movement data. Ecology. 2005; 86:2874–80. Jonsen I, Basson M, Bestley S, Bravington M, Patterson T, Pederson M, et al. State-space models for biologgers: a methodological road map. Deep Sea Res II. 2013:34–46. Patterson T, Thomas L, Wilcox C, Ovaskainen O, Matthiopoulos J. State-space models of individual animal movement. Trends Ecol Evol. 2008; 23:87–94. Bullard F. Estimating the Home Range of an Animal: A Brownian Bridge Approach. Master's thesis: The University of North Carolina; 1999. Horne J, Garton E, Krone S, Lewis J. Analyzing animal movements using Brownian bridges. Ecology. 2007; 88(9):2354–63. Kranstauber B, Kays R, LaPoint S, Wikelski M, Safi K. A dynamic Brownian bridge movement model to estimate utilization distributions for heterogeneous animal movement. J Anim Ecol. 2012; 81(4):738–746. doi:10.1111/j.1365-2656.2012.01955.x. Kranstauber B, Safi K, Bartumeus F. Bivariate Gaussian bridges directional factorization of diffusion in Brownian bridge models. Movement Ecol. 2014; 2:5. http://www.movementecologyjournal.com/content/2/1/5. Palm E, Newman S, Prosser D, Xiao X, Ze L, Batbayar N, Balachandran S, Takekawa J. Mapping migratory flyways in Asia using dynamic Brownian bridge movement models. Movement Ecol. 2015; 3:3. http://www.movementecologyjournal.com/content/3/1/3. 
Van Diggelen F. GNSS Accuracy: Lies, Damn Lies and Statistics. GPS World. 2007; 18(1):26–32. Pozdnyakov V, Meyer T, Wang YB, Yan J. On modeling animal movements using Brownian motion with measurement error. Ecology. 2014; 95:247–53. doi:10.1890/13-0532.1. Benhamou S. Dynamic approach to space and habitat use based on biased random bridges. PloS one. 2011; 6:e14592. Buchin K, Sijben S, Arseneau TJM, Willems EP. Detecting Movement Patterns using Brownian Bridges. In: Proceedings of the 20th International Conference on Advances in Geographic Information Systems. New York, NY, USA: ACM: 2012. p. 119–28. doi:10.1145/2424321.2424338. Nathan R, Getz WM, Revilla E, Holyoak M, Kadmon R, Saltz D, Smouse PE. A movement ecology paradigm for unifying organismal movement research. Proc Natl Acad Sci. 1905; 105(49):2–9. http://www.pnas.org/content/105/49/19052.abstract. Getz WM, Saltz D. A framework for generating and analyzing movement paths on ecological landscapes. Proc Natl Acad Sci U S A. 1906; 105(49):6–71. http://www.pnas.org/content/105/49/19066.abstract. Halsey LG, Portugal SJ, Smith JA, Murn CP, Wilson RP. Recording raptor behavior on the wing via accelerometry. J Field Ornithol. 2009; 80(2):171–7. http://dx.doi.org/10.1111/j.1557-9263.2009.00219.x. Dutilleul P. Modifying the T-Test for assessing the correlation between 2 spatial processes. Biometrics. 1993; 49:305–14. Sapir N, Wikelski M, McCue MD, Pinshow B, Nathan R. Flight modes in migrating european bee-eaters: heart rate may indicate low metabolic rate during soaring and gliding. PLoS ONE. 2010; 5(11):e13956. http://dx.doi.org/10.1371/journal.pone.0013956. Sapir N, Horvitz N, Wikelski M, Avissar R, Mahrer Y, Nathan R. Migration by soaring or flapping: numerical atmospheric simulations reveal that turbulence kinetic energy dictates bee-eater flight mode. Proc R Soc B: Biological Sci. 1723; 278:3380–6. Bohrer G, Brandes D, Mandel JT, Bildstein KL, Miller TA, Lanzone M, et al. Estimating updraft velocity components over large spatial scales: contrasting migration strategies of golden eagles and turkey vultures. Ecol Lett. 2012; 15(2):96–103. http://dx.doi.org/10.1111/j.1461-0248.2011.01713.x. Duerr AE, Miller TA, Lanzone M, Brandes D, Cooper J, O'Malley K, et al. Testing an emerging paradigm in migration ecology shows surprising differences in efficiency between flight modes. PLoS ONE. 2012; 7(4):e35548. http://dx.doi.org/10.1371/journal.pone.0035548. Shepard ELC, Lambertucci SA, Vallmitjana D, Wilson RP. Energy beyond food: foraging theory informs time spent in thermals by a large soaring bird. PLoS ONE. 2011; 6(11):e27375. http://dx.doi.org/10.1371/journal.pone.0027375. Hedenstrom A. Migration by soaring or flapping flight in birds: the relative importance of energy cost and speed. 342. 1302:353–61. http://rstb.royalsocietypublishing.org/content/342/1302/353.abstract. Sapir N, Horvitz N, Wikelski M, Avissar R, Nathan R. Compensation for lateral drift due to crosswind in migrating European bee-eaters. J Ornithol. 2014; 155:745–53. Willems EP, Hill RA. A critical assessment of two species distribution models: a case study of the vervet monkey (Cercopithecus aethiops). J Biogeogr. 2009; 36(12):2300–2312. Willems EP, Barton RA, Hill RA. Remotely sensed productivity, regional home range selection, and local range use by an omnivorous primate. Behav Ecol. 2009; 20(5):985–92. Willems EP, Hill RA. Predator-specific landscapes of fear and resource distribution: effects on spatial range use. Ecology. 2009; 90(2):546–55. Tucker CJ. 
Red and photographic infrared linear combinations for monitoring vegetation. Remote Sensing Environ. 1979; 8:127–50. Shirihai H, Dovrat E, Christie D, Harris A, Cottridge D. The birds of Israel, Volume 692. London: Academic Press London; 1996. Pielke R, Cotton W, Walko R, Tremback C, Lyons W, Grasso L, et al. A comprehensive meteorological modeling system-RAMS. Meteorol Atmos Phys. 1992; 49:69–91. http://dx.doi.org/10.1007/BF01025401. Cotton WR, Pielke SRA, Walko R L, Liston GE, Tremback C J, Jiang H, et al. RAMS 2001: current status and future directions. Meteorol Atmos Phys. 2003; 82:5–29. http://dx.doi.org/10.1007/s00703-001-0584-9. Research was supported by COST (European Cooperation in Science and Technology) ICT Action IC0903 MOVE, the Swiss National Science Foundation (Sinergia Grant CRSI33 _133040 to Redouan Bshary, Carel van Schaik and Andy Whiten), the Forschungskredit of the University of Zurich (EPW), the Claraz Foundation (EPW) and the Netherlands Organisation for Scientific Research (NWO) under grant no. 612.001.207 (KB). NS was funded by the US – Israel Binational Science Foundation, the Ring Foundation for Environmental Research and the Robert Szold Fund. This work was initiated at Schloss Dagstuhl Seminars on Representation, Analysis and Visualization of Moving Objects (10491, 12512), held in Wadern, Germany. We would like to thank Orr Spiegel, Kamran Safi and Ran Nathan for helpful discussions. Further, we would like to thank Ran Nathan for helping to set up the collaboration and for encouraging us to submit this work. Department of Mathematics and Computer Science, Technical University Eindhoven, Eindhoven, The Netherlands Kevin Buchin Faculty of Mathematics, Ruhr-Universität Bochum, Bochum, Germany Stef Sijben Computational Geo-Ecology, Institute for Biodiversity and Ecosystem Dynamics, University of Amsterdam, Amsterdam, The Netherlands E Emiel van Loon Department of Evolutionary and Environmental Biology, The University of Haifa, Haifa, Israel Nir Sapir Institut de Biologie, Université de Neuchâtel, Neuchâtel, Switzerland Stéphanie Mercier Anthropological Institute & Museum, University of Zurich, Zurich, Switzerland T Jean Marie Arseneau & Erik P Willems Search for Kevin Buchin in: Search for Stef Sijben in: Search for E Emiel van Loon in: Search for Nir Sapir in: Search for Stéphanie Mercier in: Search for T Jean Marie Arseneau in: Search for Erik P Willems in: Correspondence to Kevin Buchin. KB, SS and EPW developed the computational framework, and SS implemented it. KB, EEvL and EPW designed the study. EEvL conducted the statistical analysis for the bee-eater data. KB, EEvL, NS, SS and EPW wrote the manuscript. NS collected the bee-eater tracking data and supervised atmospheric simulations. TJMA, SM and EPW collected data for the vervet monkey study. All authors read and approved the final manuscript. Additional file 1 The Brownian bridge movement model in relation to state-space models. Document containing a discussion of the Brownian bridge movement model in relation to state-space models. Vervet monkey data set. GPS data set used in the first case study. Bea-eater data set. Data set used in the second case study. Relative velocity in the Brownian bridge movement model. Document containing the derivation of the distribution of relative velocity over time in the Brownian bridge movement model. From this we derive the distribution of speed and of direction and the spatial distribution of average speed. R scripts (first study). 
R scripts used in the first case study. R scripts (second study). R scripts used in the second case study.
Keywords: Movement speed, Spatial distribution, Home range utilization, Migratory flight behaviour
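As a complement to the methods described above, the weighting of the flight-mode-specific diffusion coefficients into the mixed-mode coefficient, D m = (1 - s)·D f + s·D s, can be sketched as follows. This is our own minimal illustration in Python (the published analysis scripts are in R, provided as Additional file 6); the function and variable names are not taken from those scripts.

# Minimal sketch: mixed-mode diffusion coefficient for the BBMM,
# D_m = (1 - s) * D_f + s * D_s, with s the soaring-gliding fraction
# obtained from the logistic model.
def mixed_diffusion(d_flap, d_soar, s):
    if not 0.0 <= s <= 1.0:
        raise ValueError("soaring fraction s must lie in [0, 1]")
    return (1.0 - s) * d_flap + s * d_soar

# Illustrative (made-up) values: D_f and D_s come from fitting the BBMM to
# flapping-only and soaring-gliding-only segments, respectively.
print(mixed_diffusion(d_flap=120.0, d_soar=450.0, s=0.3))  # 0.7*120 + 0.3*450 = 219.0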
CommonCrawl
Efficient resource allocation for passive optical fronthaul-based coordinated multipoint transmission Gang Wang1,2, Rentao Gu ORCID: orcid.org/0000-0003-3183-28571,2, Hui Li1,2 & Yuefeng Ji3,2 The centralized processing in cloud radio access network enables cooperation between baseband processing units (BBUs) like inter-cell interference (ICI) cancellation on the basis of coordinated multipoint (CoMP). Large amounts of the sharing data will be transmitted through fronthaul transport network. In the paper, both integer non-linear programming (INLP) optimization model and adaptive genetic algorithm (GA) are explored to release the capacity pressure of the fronthaul transport network when CoMP is introduced. We also consider the resource allocation problem of the passive optical fronthaul network. The proposed algorithm tries to reduce the downlink bandwidth and improve the optical resource allocation efficiency of the optical fronthaul with minimal influence on the fronthaul topology. During the simulations, three critical factors are considered: (1) the number of cell edge users, (2) the average traffic demand of cell edge users, (3) the size of cell cluster used to enable the CoMP. The simulation results show that the most efficient bandwidth saving and optical resource allocation can be achieved with INLP, while the proposed adaptive GA nearly has the same performance with low computational complexity and fast convergence, which is more applicable for the large-scale fronthaul network. Furthermore, the load difference of the fronthaul transport network can be further reduced. Network densification using small cells is emerging as a critical technology to enhance the resource management of next-generation wireless network [1, 2]. However, the received signal quality of cell edge users can be sharply degraded by the transmission of neighboring cells. Thus, the signal-to-noise ratio is badly influenced (particularly near the cell edge), as well as the downlink capacity of the mobile network. Inter-cell interference (ICI) has been a bottleneck to improve the mobile network capacity and the quality of service (QoS) of the mobile users that located at cell edge [3]. To face this challenge, coordinated multipoint (CoMP) was proposed [4, 5]. Multiple base stations (BSs) are connected and exchange information for cooperation via backhaul links to reduce ICI. And prediction algorithms can be used to estimate the movement of mobile equipment [6, 7]. Techniques enabling coordinated transmission are explored to migrate the inter-cell interference and increase the system capacity [8–16]. Spectral efficiency (SE)-oriented CoMP techniques have been investigated [8–10]. In [8, 9], CoMP precoders have been explored to improve the spectral efficiency and network capacity. And both spectral efficiency and fairness were considered in CoMP systems [10]. Besides, energy efficiency (EE)-oriented CoMP techniques have also been studied [11–14]. In CoMP-enabled mobile network, the authors investigated the downlink transmit power optimization problem with QoS constraint and limited cell coordination (max-min EE for CoMP systems) [11]. Considering the individual data rate requirement and transmit power of each BS, energy-efficient CoMP precoding was proposed [12]. And EE-oriented resource allocation algorithm was also proposed in CoMP-enabled heterogeneous network, considering of the backhaul power consumption [13]. 
Semi-smart antenna-based coordinated multipoint technique has been studied to reduce the transmit power of orthogonal frequency division multiplexing (OFDMA) networks [14]. Furthermore, the methods to acquire channel state information were discussed in [5]. Limited feedback CoMP system was reviewed in [15]. Moreover, in [16], the authors took the non-ideal backhaul into consideration, and the spectrum allocation scheme was proposed in heterogeneous network for coordinated multipoint transmission. And cooperation between base stations in the downlink of heterogeneous network has been studied [17]. In [18], CoMP downlink transmission design for cloud radio access network (C-RAN) was studied. C-RAN is emerging as a potential architecture for the next-generation wireless network [19]. In C-RAN, baseband units (BBUs) are migrated and centralized into a BBU central server retaining only distributed remote radio units (RRUs) at remote cells [20]. The fiber technique-based network used to forwarding signals between BBUs and RRUs is called fronthaul [21]. This novel architecture opens up opportunities for a better management of resource of the mobile network. CoMP can benefit from the centralized processing in C-RAN. However, large amounts of sharing data need to be transmitted in the fronthaul transport network when CoMP is introduced in C-RAN. Considering the limited bandwidth and optical resource of the fronthaul, to release the capacity pressure and improve the optical resource efficiency of the fronthaul is significantly important when CoMP technique is introduced in next-generation radio access network. However, as far as we know, little attention has been focused on the influence on fronthaul when CoMP technique is introduced in C-RAN. In previous works, recent advances like key technologies and system architectures in fronthaul-constrained C-RAN have been discussed in [22]. Advanced techniques were explored to enhance the utilization efficiency and transfer capability of the optical fronthaul [23–29]. Digital signal processing (DSP)-based channel aggregating technique was researched [23]. To ensure a reasonable fronthaul transmission rate, subcarrier multiplexing technique was investigated [24]. In [25], microwave-photonics techniques were introduced for integrated optical-wireless access network. Besides, topology-reconfigurable fronthaul transport network has been proposed [26]. CoMP and device-to-device (D2D) connectivity can benefit from this architecture and network measurement schemes [30] in the 5G mobile networking era. And different models of optical fronthaul for C-RAN were discussed [27]. Furthermore, to simplify the RRU, fully passive RRU and self-tuning colorless optical network unit (ONU) transmitter was proposed and demonstrated for short-range wireless network [28]. Data and energy are jointly transmitted through optical fronthaul. Moreover, in C-RAN with non-ideal fronthaul network, delay-sensitive services can benefit from the efficient strategy proposed in [29]. And multicore fiber media (MCF) has been investigated for the future optical fronthaul [31]. However, little work has been done to solve CoMP-oriented resource allocation problem in the fronthaul transport. In the paper, we try to release the capacity pressure and improve the optical resource allocation efficiency of the fronthaul with minimal influence on the fronthaul topology. We present two CoMP-oriented resource allocation schemes for the fronthaul transport network. 
Both integer non-linear programming (INLP) model and adaptive genetic algorithm (GA) are explored. In this paper, we formulate the INLP model in Section 2, and Section 3 discusses the adaptive GA. Meanwhile, in Section 4, the performance of numerical simulations are described. Finally, we summarize the paper in Section 5. INLP model formulation for CoMP-oriented fronthaul In subsequent subsections, we present the CoMP-oriented C-RAN architecture and develop the INLP model to release the capacity pressure and improve the optical resource allocation efficiency of the time and wavelength division multiplexing (TWDM) passive optical network (PON)-enabled fronthaul transport network. CoMP-oriented C-RAN architecture Figure 1 illustrates the CoMP-oriented C-RAN architecture. Recently, large-scale deployment of PONs significantly releases the capacity pressure of the access network [32, 33]. TWDM-PON is emerging as a potential candidate to transfer data between centralized BBUs and distributed RRUs with strong ability of transmission [34]. As shown in Fig. 1, optical line terminal is deployed at BBU pool, and ONUs with tunable lasers are placed with cost and power efficient RRUs. Virtualization technology has been widely investigated [35, 36]. In TWDM-PON-enabled fronthaul, a virtual PON is formed when a group of ONUs transfer data using the same wavelength. As shown in [27], based on the software defined network (SDN) technique, different transport abstractions can be achieved in the centralized controller, which results in better performance of the C-RAN. Besides, we also take common public radio interface (CPRI) compression techniques [37–39] into consideration, since the typical CPRI physical link rate is fixed [19]. Compared to the typical CPRI, compression technique-based bitrate-variable CPRI is a potential efficient radio interface to face the challenge of overwhelming data stream in the 5G. INLP model formulation In [8–16], different challenges of CoMP technique have been investigated, such as SE/EE-oriented precoder, the influence of the limited feedback. However, little attention has been focused on the influence on fronthaul when CoMP technique is introduced in C-RAN. Considering that large sharing data needs to be transmitted in the fronthaul transport network when CoMP technique is introduced, we focus on releasing the capacity pressure and improving the optical resource allocation efficiency of the fronthaul with minimal influence on the fronthaul topology. The proposed INLP model considers CoMP technique and broadcast characteristic of the TWDM-PON. The transmission of sharing data can benefit greatly from the broadcast characteristic of the TWDM-PON. W: The set of optical wavelength resources used in the TWDM-PON-enabled fronthaul transport network. C: The set of cells served by the TWDM-PON-enabled C-RAN. O: The set of distributed ONUs at cell sites co-located with simplified RRUs. T: A series of discrete time slots. I: The set of mobile terminals located at the small cells. I e: The set of cell edge mobile terminals, I e ⊂ I. C i: The small-cell cluster enabling CoMP for cell edge mobile terminal i, i ∈ I e, C i ∈ C. Cv: The maximum bandwidth of a single-wavelength. In current TWDM-PON system, Cv = 10Gb/s. νc: CPRI fixed link rate [19]. ni: The size of small cell cluster enabling CoMP for cell edge mobile terminal i, i ∈ I e. bi: The bandwidth requirement of mobile terminal i, i ∈ I. 
\( {\beta}_{c,o}^t: \) Load fluctuation-based compression ratio of typical CPRI for cell c served by ONU o at time t, c ∈ C, o ∈ O. \( {\mathbf{Q}}_{c,o}^t=\left\{i\left|i\in \mathbf{I}\&\ i\ \mathrm{is}\ \mathrm{in}\ \mathrm{cell}\ c\ \mathrm{at}\ \mathrm{time}\ t\right.\right\}: \) The set of mobile terminals that is located in the small cell c is served by the corresponding ONU o. R w, oj = {w|w ∈ W j, W j ⊆ W}: The wavelength tuning range of ONU oj, oj ∈ O. \( {\boldsymbol{\Omega}}_{w,o}^t: \) The topology of current TWDM-PON-enabled fronthaul at time t. $$ {\lambda}_w^t=\left\{\begin{array}{cc}\hfill 1,\hfill & \hfill \mathrm{if}\ \mathrm{wavelength}\ w\in \mathbf{W}\ \mathrm{is}\ \mathrm{used}\ \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{data}\ \mathrm{transmission}\ \mathrm{at}\ \mathrm{time}\ t\ \hfill \\ {}\hfill 0,\hfill & \hfill \mathrm{otherwise}\hfill \end{array}\right. $$ $$ {\boldsymbol{\Omega}}_{w,o}^t=\left\{\begin{array}{cc}\hfill 1,\hfill & \hfill \mathrm{if}\ \mathrm{wavelength}\ w\ \in \mathbf{W}\ \mathrm{is}\ \mathrm{used}\ \mathrm{t}\mathrm{o}\ \mathrm{establish}\ \mathrm{a}\ \mathrm{lightpath}\ \mathrm{f}\mathrm{o}\mathrm{r}\ \mathrm{O}\mathrm{N}\mathrm{U}\ o\in \mathbf{O}\ \mathrm{a}\mathrm{t}\ \mathrm{t}\mathrm{ime}\ t\hfill \\ {}\hfill 0,\hfill & \hfill \mathrm{otherwise}\hfill \end{array}\right. $$ \( {\sigma}_i^t: \) The number of wavelength used for cell cluster C i enabling CoMP at time t. Maximize the fronthaul transport network bandwidth allocation efficiency η: $$ \eta =\frac{{\displaystyle \sum_{c\in \mathbf{C}}{\displaystyle \sum_{i\in \mathbf{I}e\cap {\mathbf{Q}}_{c,o}^t} nibi}}-{\displaystyle \sum_{c\in \mathbf{C}}{\displaystyle \sum_{i\in \mathbf{I}e\cap {\mathbf{Q}}_{c,o}^t}{\sigma}_i^t bi}}}{{\displaystyle \sum_{c\in \mathbf{C}}{\displaystyle \sum_{i\in \mathbf{I}e\cap {\mathbf{Q}}_{c,o}^t} nibi}}+{\displaystyle \sum_{c\in \mathbf{C}}{\displaystyle \sum_{i\in \left(\mathbf{I}-\mathbf{I}e\right)\cap {\mathbf{Q}}_{c,o}^t} bi}}} $$ In addition, improving the optical resource allocation efficiency of the fronthaul plays a critical role for CoMP-oriented optimization. 
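As an aside, the bandwidth-allocation efficiency η of Eq. (3) can be evaluated for a candidate assignment as in the following minimal sketch. The variable names (per-terminal demand bi, CoMP cluster size ni, the number of wavelengths σi actually carrying the shared data, and the aggregate demand of non-edge terminals) are our own simplification and not the authors' code.

def bandwidth_efficiency(edge_users, other_demand):
    # edge_users: list of (n_i, sigma_i, b_i) triples for cell-edge terminals.
    # Numerator of Eq. (3): the traffic that would be unicast to every cell of
    # each CoMP cluster, minus the copies actually sent (one per wavelength used).
    unicast_total = sum(n * b for n, _, b in edge_users)
    broadcast_total = sum(sigma * b for _, sigma, b in edge_users)
    denom = unicast_total + other_demand
    return (unicast_total - broadcast_total) / denom if denom else 0.0

# Illustrative call: two edge users in 3-cell clusters, each served over a
# single broadcast wavelength, plus 100 units of non-edge downlink traffic.
print(bandwidth_efficiency([(3, 1, 10.0), (3, 1, 5.0)], other_demand=100.0))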
Three sub-objectives are also considered during the optimization: $$ \min \kern0.75em \zeta =\left(\zeta 1,\zeta 2,\zeta 3\right) $$ $$ \zeta 1={\displaystyle \sum_{w\in \mathbf{W}}{\lambda}_w^t} $$ $$ \zeta 2=\frac{{\displaystyle \sum_{w\in \mathbf{W}}\left({\lambda}_w^t{\left({\displaystyle \sum_{o\in \mathbf{O}}\left({\varOmega}_{w,o}^t\cdot \nu c\cdot {\beta}_{c,o}^t\right)\hbox{-} {\displaystyle \sum_{o\in \mathbf{O}}\nu c\cdot {\beta}_{c,o}^t}/{\displaystyle \sum_{w\in \mathbf{W}}{\lambda}_w^t}}\right)}^2\right)}}{{\displaystyle \sum_{w\in \mathbf{W}}{\lambda}_w^t}} $$ $$ \zeta 3=\left({\displaystyle \sum_{w\in \mathbf{W}}{\displaystyle \sum_{o\in \mathbf{O}}\left({\left({\varOmega}_{w,o}^t-{\varOmega}_{w,o}^{t-1}\right)}^2\cdot {\displaystyle \sum_{i\in {\mathbf{Q}}_{c,o}^t} bi}\right)}}\right)/2 $$ $$ {\lambda}_w^t\in \left\{0,1\right\},{\varOmega}_{w,o}^t\in \left\{0,1\right\}\kern1.5em \forall w\in \mathbf{W},\forall o\in \mathbf{O} $$ $$ {\displaystyle \sum_{w\in \mathbf{W}}{\varOmega}_{w,o}^t}=1\kern3.5em \forall o\in \mathbf{O} $$ $$ {\displaystyle \sum_{w\in \mathbf{W}}{\lambda}_w^t}\le \left|\mathbf{W}\right| $$ $$ {\displaystyle \sum_{o\in \mathbf{O}}{\varOmega}_{w,o}^t\cdot \nu c\cdot {\beta}_{c,o}^t}\le Cv\kern0.75em \forall w\in \mathbf{W} $$ $$ w=\left\{w\left|{\varOmega}_{w,o}^t=1,w\in \mathbf{W}\right.\right\}\in \mathbf{R}w,o\kern0.5em \forall o\in \mathbf{O} $$ The three sub-objectives of the INLP are as follows: (1) minimize the used optical resource, (2) balance out the traffic load served by the activated wavelengths, and (3) minimize the migrated load due to fronthaul topology adjustment. Equation (6) indicates the reasonable integer value of variable \( {\lambda}_w^t \) and \( {\varOmega}_{w,o}^t \), ∀ w ∈ W, ∀ o ∈ O. Equation (7) states that each ONU co-located with the RRU at small cell can only be allocated one wavelength at time t. Equation (8) limits the maximum feasible wavelengths of fronthaul at time t. Equation (9) limits the maximum load served by each single-wavelength w ( ∀ w ∈ W) at time t. Equation (10) ensures that the wavelength assigned to ONU o (o ∈ O) is within the tuning range at time t. Adaptive genetic algorithm for CoMP-oriented resource allocation The complexity of INLP is exponentially increasing with the growing of network scale. To reduce the time complexity, an adaptive GA is proposed to solve the CoMP-oriented capacity and resource allocation problems of the fronthaul network. We will introduce the modified genetic encoding scheme, corresponding fitness function, and adaptive genetic operations for the proposed GA as follows. Genetic encoding and the fitness function GA is an efficient search heuristic method on the basis of principles of natural evolution in the real world [40]. A reasonable chromosome (or an individual) is encoded as a group of genes. For the CoMP-oriented resource allocation, we encode each gene as {ξ(oj, wj), oj ∈ O, wj ∈ R w, oj}, where ξ(oj, wj) indicates that wavelength wj is allocated to ONU oj. For each distributed ONU oj, a wavelength wj is randomly selected for its data transmission according to the traffic in the corresponding small cell cj. The lightpath is built up for data transmission between ONU oj and OLT. We apply this process for all ONUs to obtain an individual I. We can form a different individual by choosing different optical resource for some of genes. We randomly repeat P times to generate more individuals and form the population Ι by grouping different individuals together. 
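A minimal sketch of the gene encoding and random population initialisation just described, with one gene ξ(oj, wj) per ONU and wj drawn from that ONU's tuning range R w,oj, is given below; the data structures are our own simplification of the scheme.

import random

def random_individual(onus, tuning_range):
    # One chromosome: each ONU is assigned a wavelength chosen at random
    # from its own tuning range.
    return {onu: random.choice(tuning_range[onu]) for onu in onus}

def initial_population(onus, tuning_range, pop_size):
    # Repeat the random assignment P times to form the initial population.
    return [random_individual(onus, tuning_range) for _ in range(pop_size)]

# Illustrative use: four ONUs, each tunable over a subset of eight wavelengths.
onus = ["onu1", "onu2", "onu3", "onu4"]
tuning_range = {"onu1": [0, 1, 2], "onu2": [1, 2, 3], "onu3": [0, 3, 4], "onu4": [5, 6, 7]}
population = initial_population(onus, tuning_range, pop_size=10)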
In order to release the capacity pressure of the fronthaul when CoMP technique is introduced, we need to enhance the bandwidth efficiency of the optical fronthaul based on the broadcast characteristic of the TWDM-PON. In addition, improving the optical resource allocation efficiency of the fronthaul is also playing a significant role. Furthermore, we also need to pay attention to load balancing of the fronthaul and the influenced traffic load during ONU migration. Finally, each of the individual's fitness is assigned as (ρ1, ρ2, ρ3, ρ4)(η, ζ)', where ρ1, ρ2, ρ3, and ρ4 are the weights allocated to the optimization objectives described in Section 2.2, respectively. Better individuals survive and reproduce themselves more often than the worse ones. In each iteration, we update the fittest individual on the basis of each individual's fitness. The GA can obtain a good result when it converges [41]. Adaptive genetic operations Algorithm 1 illustrates the procedure of the proposed adaptive GA. The initial population Ι of constant size P is generated randomly based on the gene generation principle mentioned above. Then the population Ι goes into the following adaptive genetic operations. The tournament selection [42] is adopted for the selection operation. We randomly select s individuals from population Ι and implement tournament selection by holding a tournament among s competitors, where s is the tournament size. When all tournaments are finished, we select the winner of each competing group for crossover, on the basis of each individual's fitness among competitors. In crossover phase, we implement multipoint gene level crossover to generate the offspring. We randomly select two individuals paired as parent for crossover. In each crossover operation, on the basis of crossover rate pc, |O| ⋅ pc genes are randomly picked out from the parent and swapped at random positions of the individuals. Then P individuals are selected based on their fitness to go into mutation phase. During the evolution, the population size is constant. In the mutation phase, on the basis of mutation rate pm, |O| ⋅ pm genes of the individuals will be randomly modified to generate new genes. And we modify a gene ξ(oj, wj) by replacing its optical resource wj with another feasible one. Based on the fitness of each individual, P individuals are selected to form a new population. During crossover and mutation phases, in order to limit the maximum traffic load served by each single-wavelength and ensure that the ONU tuning range is legal, a penalty function is used during each genetic operation. In the evolution phase, pc and pm vary with the fitness value of the initial population of each iteration. We define Fj as the fitness of individual Ij. We have F max = max j(Fj), j ∈ I, \( F\mathrm{mean}=\left({\displaystyle \sum_jFj}\right)/P \), and F ' = max(Fj1, Fj2). Then pc and pm are obtained by Eqn. (11) and Eqn. (12) [43], where βc and βm are fixed parameters. $$ pc=\left\{\begin{array}{cc}\hfill \frac{F \max -F\hbox{'}}{F \max -F\mathrm{mean}},\hfill & \hfill F\hbox{'}\le F\mathrm{mean}\hfill \\ {}\hfill \beta c,\hfill & \hfill otherwise\hfill \end{array}\right. $$ $$ pm=\left\{\begin{array}{cc}\hfill \frac{F \max -Fj}{F \max -F\mathrm{mean}},\hfill & \hfill Fp\le F\mathrm{mean}\hfill \\ {}\hfill \beta m,\hfill & \hfill otherwise\hfill \end{array}\right. $$ We define GA's degree of diversity as Eqn. (13) [44]. d(j1, j2) indicates the differences between two individuals Ij1 and Ij2. 
The GA stopped when its convergence reaches a preset threshold [41]. $$ Dp=\frac{2}{P\left(P-1\right)}{\displaystyle \sum_{j1=1}^{P-1}{\displaystyle \sum_{j2=j1+1}^P\frac{d\left(j1,j2\right)}{\left|I\right|}}} $$ Numerical simulations are conducted based on the proposed INLP model and adaptive GA for CoMP-oriented optical resource optimization. In the simulations, a 32-cell physical topology (shown in Fig. 2) is used, and the number of used wavelengths in the TWDM-PON-enabled fronthaul is set as eight. We assume that each distributed RRU is assigned only one ONU with tunable lasers, and compression technique-based bitrate-variable CPRI is adopted in TWDM-PON-enabled fronthaul. Besides, we also assume that time is divided into discrete time periods. Table 1 illustrates the parameters used in our simulations. The 32-cell topology for simulations Table 1 Simulation parameters In the simulations, different scenarios are considered: (1) typical C-RAN without CoMP-oriented optimization; (2) C-RAN with CoMP-oriented INLP optimization; 3) C-RAN with CoMP-oriented GA. We also consider the influence of the tuning range of the ONUs. With the traffic fluctuation, we try to release the capacity pressure and improve the resource allocation efficiency of the fronthaul when CoMP technique is introduced in C-RAN. Besides, the migrated traffic due to topology change is also considered in our formulations. Figure 3 represents the converging condition of the proposed adaptive GA with the Dp defined in Eqn. (13). It is obviously that the proposed GA converges when the number of iterations exceed 50, if the threshold of the Dp is set at 0.15. Besides, compared to the INLP method, the computation time of the proposed GA is much lower within 1.68 s. Convergence performance of the adaptive GA Figure 4 illustrates the optical resource allocation performance comparison of the INLP to the proposed adaptive GA. The limited tuning range of each ONU is also considered. It is known that the traffic flow varies with the time in the fronthaul. Compared to the scenario without CoMP-oriented optimization, better optical resource allocation performance can be obtained with INLP. Specifically, in light traffic load time slot, nearly 70 % wavelengths can be saved by using INLP. However, in typical C-RAN, wavelength resource allocation is fixed regardless of load variation. It is known that the complexity of INLP is exponentially increasing with the growing of network scale. As we can see, the proposed GA nearly has the same resource allocation performance with the INLP based on the load fluctuation. Compared to the INLP, the computation time of the adaptive GA is much lower within 1.68 s. Besides, compared to the tuning limited INLP, better performance can be obtained with full-spectrum tunable lasers, which is more expensive. Furthermore, as shown in Eqn. (4) and Eqn. (5), the load imbalance and the migrated traffic due to topology adjustment are also considered during the optimization. In traditional C-RAN, considering that the traffic served by each wavelength resource has a big difference, the load imbalance is clearly in the TWDM-PON-enabled fronthaul. As shown in Fig. 5, the variance of the traffic load fluctuation is further reduced, while the migrated traffic load due to topology change is very light by using INLP and the proposed adaptive GA. 
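To make the stopping rule concrete, the population-diversity measure Dp of Eq. (13) used as the convergence check can be sketched as follows. We read d(j1, j2) as the number of genes (ONU-wavelength assignments) on which two individuals differ and |I| as the chromosome length; this reading, and the code itself, are our interpretation rather than the authors' implementation.

def gene_distance(ind_a, ind_b):
    # d(j1, j2): number of ONUs assigned different wavelengths in the two individuals.
    return sum(1 for onu in ind_a if ind_a[onu] != ind_b[onu])

def diversity(population):
    # Eq. (13): average pairwise distance, normalised by the chromosome length.
    p = len(population)
    n_genes = len(population[0])
    total = sum(gene_distance(population[j1], population[j2])
                for j1 in range(p - 1) for j2 in range(j1 + 1, p))
    return 2.0 * total / (p * (p - 1) * n_genes)

# The GA loop stops once diversity(population) falls below a preset threshold
# (0.15 in the simulations reported above).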
Utilization of wavelength with different methods Load balancing simulation results with different methods Figures 6, 7, 8, and 9 show the CoMP-oriented downlink bandwidth optimization simulation results with different methods. When the number of mobile users that located near the small cell edge is small, little sharing data is needed to be transmitted through fronthaul transport network. The performance of the proposed algorithms is not obvious. However, with the growth of mobile users that is near the cell edge, the sharing data is getting large. As shown in the Fig. 6, the most efficient bandwidth saving can be achieved by using INLP. Compared with INLP, similar trends can be observed by using the proposed adaptive GA with lower computational complexity. The limited tuning range of the ONU is also considered. Better performance can be obtained with full-spectrum tunable lasers. Besides, Fig. 7 represents the influence of the average traffic demand on the optimization. When the bandwidth demand of the cell edge users increases, the total demand of the downlink bandwidth is increasing including the bandwidth allocated to the sharing data. By using the proposed method, the capacity pressure of the fronthaul is released. Furthermore, the size of the cell cluster used to enable the CoMP also plays a critical role in the optimization. As shown in Figs. 8 and 9, the larger the size of the cell cluster, the better performance can be achieved. Finally, the significant bandwidth saving is attributed to the broadcast characteristic of the TWDM-PON, wavelength assignment provided by the INLP and proposed GA, and the SDN technique. Bandwidth saving rate considering the number of cell edge users with different methods (n = 2) Bandwidth saving rate considering the average traffic demand of cell edge users with different methods (n = 2) Both INLP model and adaptive GA were explored to release the capacity pressure of the fronthaul, when CoMP technique is introduced in C-RAN. The proposed algorithm offered an efficient way to face the capacity pressure of the fronthaul. Besides, optical resource allocation problem was also considered. The results from the simulations of the proposed algorithm in the 32-cell topology indicated that good performance could be achieved by using the INLP and the proposed adaptive GA. The significant performance was attributed to the broadcast characteristic of the TWDM-PON, wavelength assignment provided by the INLP and the GA, and the SDN technique used in the C-RAN. BBU: Baseband unit CoMP: Coordinated multipoint CPRI: Common public radio interface C-RAN: Cloud radio access network D2D: Device-to-device EE: GA: ICI: Inter-cell interference INLP: Integer non-linear programming OFDMA: OLT: Optical line terminal ONU: Optical network unit RRU: Remote radio unit Spectral efficiency TWDM-PON: Time wavelength division multiplexing passive optical network SDN: Software defined network V Jungnickel, K Manolakis, W Zirwas, B Panzner, V Braun, M Lossow, R Apelfrojd, The role of small cells, coordinated multipoint, and massive mimo in 5G. IEEE Commun. Mag. 52(5), 44–51 (2014). doi:10.1109/MCOM.2014.6815892 N Bhushan, J Li, D Malladi, D Malladi, R Gilmore, D Brenner, A Damnjanovic, R Sukhavasi, C Patel, S Geirhofer, Network densification: the dominant theme for wireless evolution into 5G. IEEE Commun. Mag. 52(2), 82–89 (2014). 
doi:10.1109/MCOM.2014.6736747 BB Haile, AA Dowhuszko, J Hämäläinen, R Wichman, Z Ding, On performance loss of some CoMP techniques under channel power imbalance and limited feedback. IEEE Trans. Wirel. Commun. 14(8), 4469–4481 (2015). doi:10.1109/TWC.2015.2421898 MK Karakayali, GJ Foschini, RA Valenzuela, Network coordination for spectrally efficient communications in cellular systems. IEEE Trans. Wirel. Commun. 13(4), 56–61 (2006). doi:10.1109/MWC.2006.1678166 C Yang, S Han, X Hou, AF Molisch, How do we design CoMP to achieve its promised potential? IEEE Trans. Wirel. Commun. 20(1), 67–74 (2013). doi:10.1109/MWC.2013.6472201 WS Soh, HS Kim, A predictive bandwidth reservation scheme using mobile positioning and road topology information. IEEE/ACM Trans. 14(5), 1078–1091 (2006). doi:10.1109/TNET.2006.882899 F Song, R Li, H Zhou, Feasibility and issues for establishing network-based carpooling scheme. Pervasive Mob. Comput. 24, 4–15 (2015). doi:10.1016/j.pmcj.2015.05.002 H Weingarten, Y Steinberg, S Shamai, The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory 52(9), 3936–3964 (2006). doi:10.1109/TIT.2006.880064 QH Spencer, AL Swindlehurst, M Haardt, Zero-forcing methods for downlink spatial multiplexing in multiuser MIMO channels. IEEE T. Signal. Proces. 52(2), 461–471 (2004). doi:10.1109/TSP.2003.821107 Y Huang, G Zheng, M Bengtsson, KK Wong, L Yang, B Ottersten, Distributed multicell beamforming design approaching Pareto boundary with max-min fairness. IEEE Trans. Wirel. Commun. 11(8), 2921–2933 (2012). doi:10.1109/TWC.2012.061912.111751 B Du, C Pan, W Zhang, M Chen, Distributed energy-efficient power optimization for CoMP systems with max-min fairness. IEEE Commun. Lett. 18(6), 999–1002 (2014). doi:10.1109/LCOMM.2014.2317734 Z Xu, C Yang, GY Li, Y Liu, Energy-efficient CoMP precoding in heterogeneous networks. IEEE T. Signal Proces. 62(4), 1005–1017 (2014). doi: 10.1109/TSP.2013.2296279 KMS Huq, S Mumtaz, J Bachmatiuk, J Rodriguez, Green HetNet CoMP: energy efficiency analysis and optimization. IEEE T. Veh. Technol. 64(10), 4670–4683 (2015). doi:10.1109/TVT.2014.2371331 X Yang, Y Wang, T Zhang, L Cuthbert, L Xiao, Combining CoMP with semi-smart antennas to improve performance. Electron. Lett. 47(13), 775–776 (2011). doi:10.1049/el.2011.0736 Q Cui, H Wang, P Hu, X Tao, P Zhang, J Hamalainen, L Xia, Evolution of limited-feedback CoMP systems from 4G to 5G: CoMP features and limited-feedback approaches. IEEE Veh. Technol. Mag. 9(3), 94–103 (2014). doi:10.1109/MVT.2014.2334451 F Chen, W Xu, S Li, JR Lin, Non-ideal backhaul based spectrum splitting and power allocation for downlink CoMP in cognitive Macro/Femtocell networks. IEEE Commun. Lett. 18(6), 1031–1034 (2014). doi:10.1109/LCOMM.2014.2317774 G Nigam, P Minero, M Haenggi, Coordinated multipoint joint transmission in heterogeneous networks. IEEE Trans. Commun. 62(11), 4134–4146 (2014). doi:10.1109/TCOMM.2014.2363660 V Ha, L Le, ND Dao, Coordinated multipoint (CoMP) transmission design for cloud-RANs with limited fronthaul capacity constraints. IEEE T. Veh. Technol. to be published. doi:10.1109/TVT.2015.2485668 China Mobile Research Institute, C-RAN: the road towards green RAN. whitepaper v. 3.0, online,China Mobile, 2013. Y Shi, J Zhang, KB Letaief, Group sparse beamforming for green Cloud-RAN. IEEE T. WIREL. COMMUN. 13(5), 2809-2823 (2013). doi:10.1109/TWC.2014.040214.131770 A Pizzinat, P Chanclou, F Saliou, T Diallo, Things you should know about fronthaul. J. 
Lightwave Technol 33(5), 1077–1083 (2015). doi:10.1109/JLT.2014.2382872 M Peng, C Wang, V Lau, HV Poor, Fronthaul-constrained cloud radio access networks: insights and challenges. IEEE Wirel. Commun. 22(2), 152–160 (2015). doi:10.1109/MWC.2015.7096298 X Liu, F Effenberger, N Chand, L Zhou, H Lin, in 2015 OSA Optical Fiber Communications Conference and Exhibition (OFC). Demonstration of bandwidth-efficient mobile fronthaul enabling seamless aggregation of 36 E-UTRA-like wireless signals in a single 1.1-GHz wavelength channel, (2015), paper .M2J.2. doi:10.1364/OFC.2015.M2J.2 M Zhu, X Liu, N Chand, F Effenberger, GK Chang, in 2015 OSA Optical Fiber Communications Conference and Exhibition (OFC). High-capacity mobile fronthaul supporting LTE-advanced carrier aggregation and 8 × 8 MIMO, (2015), paper M2J.3. doi:10.1364/OFC.2015.M2J.3 C Liu, J Wang, L Cheng, M Zhu, GK Chang, Key microwave-photonics technologies for next-generation cloud-based radio access networks. J. Lightwave Technol. 32(20), 3452–3460 (2014). doi:10.1109/JLT.2014.2338854 C Neda, T Akihiro, K Konstantinos, W Ting, SDN-controlled topology-reconfigurable optical mobile fronthaul architecture for bidirectional CoMP and low latency inter-cell D2D in the 5G mobile era. Opt. Express. 22(17), 20809–15 (2014). doi:10.1364/OE.22.020809 M Fiorani, A Rostami, L Wosinska, P Monti, Transport abstraction models for an SDN-controlled centralized RAN. IEEE Commun. Lett. 19(8), 1406–1409 (2015). doi:10.1109/LCOMM.2015.2446480 B Schrenk, G Humer, M Stierle, H Leopold, Fully-passive remote radio head for uplink cell densification in wireless access networks. IEEE Photon. Technol. Lett. 27(9), 970–973 (2015). doi:10.1109/LPT.2015.2399231 J Li, M Peng, A Cheng, Y Yu, Resource allocation optimization for delay-sensitive traffic in fronthaul constrained cloud radio access Networks. IEEE Syst. J. 7335(1), 1–12 (2014). doi:10.1109/JSYST.2014.2364252 F Song, Y Zhang, Z An, H Zhou, I You, The correlation study for parameters in four tuples. Int. J. Ad Hoc Ubig. Co. 19(1/2), 38–49 (2015). doi: 10.1504/IJAHUC.2015.069492 A Macho, M Morant, R Llorente, Next-generation optical fronthaul systems using multicore fiber media. J. Lightwave Technol. to be published. doi:10.1109/JLT.2016.2573038 Y Ji, X Wang, S Zhang, R Gu, T Guo, Z Ge, Dual-layer efficiency enhancement for future passive optical network. Sci. China Inf. Sci. 59(2), 1–13 (2016). doi:10.1007/s11432-015-5430-7 YJ Liu, L Guo, LC Zhang, JZ Yang, A new integrated energy-saving scheme in green Fiber-Wireless (FiWi) access network. Sci. China Inf. Sci. 57(6), 1–15 (2014). doi: 10.1007/s11432-013-4958-7 I Daisuke, K Shigeru, K Jun-lchi, T Jun, Dynamic TWDM-PON for mobile radio access networks. Opt. Express 21(22), 26209–26218 (2013). doi:10.1364/OE.21.026209 HX Wang, JX Zhao, H Li, YF Ji, Opaque virtual network mapping algorithms based on available spectrum adjacency for elastic optical networks. Sci. China Inform. Sci., 59(4), 1–11 (2016). doi: 10.1007/s11432-016-5525-9 F Song, D Huang, H Zhou, I You, An optimization-based scheme for efficient virtual machine placement. Int. J. Parallel Prog. 42(5), 1–20 (2014). doi:10.1007/s10766-013-0274-5 S Nanba, A Agata, in 2013 IEEE 24th International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC). A new IQ data compression scheme for front-haul link in Centralized RAN, (2013), pp. 210–214. doi:10.1109/PIMRCW.2013.6707866 KF Nieman, BL Evans, in 2013 IEEE Global Conference on Signal and Information Processing (GlobalSIP). 
Time-domain compression of complex-baseband LTE signals for cloud radio access networks, (2013), pp. 1198–1201. doi:10.1109/GlobalSIP.2013.6737122 B Guo, W Cap, A Tao, D Samardzija, LTE/LTE‐A signal compression on the CPRI Interface. Bell Labs Tech. J. 18(2), 117–133 (2013). doi:10.1002/bltj.21608 D Whitley, A genetic algorithm tutorial. Stat. Comput. 4(2), 65–85 (1994). doi:10.1007/BF00175354 J. Koza, Genetic programming: on the programming of computers by means of natural selection, Cambridge, Mass.: MIT, 1992 BL Miller, DE Goldberg, Genetic algorithms, tournament selection, and the effects of the noise. Stat. Comput. 9(3), 193–212 (1995) M Srinivas, LM Patnaik, Adaptive probabilities of crossover and mutation in genetic algorithms. IEEE Trans. Syst. Man Cybern 24(4), 656–667 (1994). doi:10.1109/21.286385 L Gong, X Zhou, W Lu, Z Zhu, A two-population based evolutionary approach for optimizing routing, modulation and spectrum assignments (RMSA) in O-OFDM networks. IEEE Commun. Lett. 16(9), 1520–1523 (2012). doi:10.1109/LCOMM.2012.070512.120740 This work was jointly supported by National High Technology Research and Development Program of China (863 Program) under Grant No. 2015AA015503, the National Natural Science Foundation of China under Grant No. 61372118, and the Beijing Natural Science Foundation under Grant No. 4142036, and Funds of Beijing Advanced Innovation Center for Future Internet Technology of Beijing University of Technology (BJUT), People's Republic of China. Beijing Key Laboratory of Network System Architecture and Convergence, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, 100876, China Gang Wang , Rentao Gu & Hui Li Beijing Advanced Innovation Center for Future Internet Technology, Beijing University of Technology, Beijing, 100124, China , Hui Li & Yuefeng Ji State Key Laboratory of Information Photonics and Optical Communications, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, 100876, China Yuefeng Ji Search for Gang Wang in: Search for Rentao Gu in: Search for Hui Li in: Search for Yuefeng Ji in: Correspondence to Rentao Gu or Yuefeng Ji. Wang, G., Gu, R., Li, H. et al. Efficient resource allocation for passive optical fronthaul-based coordinated multipoint transmission. J Wireless Com Network 2016, 225 (2016). https://doi.org/10.1186/s13638-016-0725-y INLP Adaptive genetic algorithm TWDM-PON-enabled fronthaul Intelligent Mobility Management for Future Wireless Mobile Networks
CommonCrawl
The Carrier Space of a Commutative Banach Algebra Recall from the Multiplicative Linear Functionals on a Banach Algebra page that if $\mathfrak{A}$ is a Banach algebra then every multiplicative linear functional $f$ on $\mathfrak{A}$ is bounded and $\| f \| \leq 1$. The set of all multiplicative linear functionals on $\mathfrak{A}$ is denoted by $\Phi_{\mathfrak{A}}$, and the set of all multiplicative linear functionals on $\mathfrak{A}$ union the zero functional on $\mathfrak{A}$ is denoted by $\Phi_{\mathfrak{A}}^{\infty}$. Also recall that the dual space $\mathfrak{A}^*$ is the collection of all bounded linear functionals on $\mathfrak{A}$. Thus we have that: \begin{align} \quad \Phi_{\mathfrak{A}} \subset \Phi_\mathfrak{A}^{\infty} \subseteq \mathfrak{A}^* \end{align} We can consider the weak-* topology on $\mathfrak{A}^*$. Recall the weak-* topology on $\mathfrak{A}^*$ is the topology induced by the collection $\hat{\mathfrak{A}} = \{\hat{x} : x \in \mathfrak{A} \} \subseteq \mathfrak{A}^{**}$ of linear functionals on $\mathfrak{A}^*$ where for each $x \in \mathfrak{A}$ we defined $\hat{x} : \mathfrak{A}^* \to \mathbb{C}$ for all $f \in \mathfrak{A}^*$ by $\hat{x}(f) = f(x)$. We noted that a sequence $(f_n)$ in $\mathfrak{A}^*$ is said to weak-* converge to $f \in \mathfrak{A}^*$ if for all $x \in \mathfrak{A}$ we have that $\lim_{n \to \infty} \hat{x}(f_n) = \hat{x}(f)$, that is, for all $x \in \mathfrak{A}$ we have that: \begin{align} \quad \lim_{n \to \infty} f(x_n) = f(x) \end{align} We can consider both $\Phi_{\mathfrak{A}}$ and $\Phi_{\mathfrak{A}}^{\infty}$ as subspaces of $(\mathfrak{A}^*, \mathrm{weak}-*)$. When we do, we give the spaces a special name. Definition: Let $\mathfrak{A}$ be a commutative Banach algebra. Equip the dual space $\mathfrak{A}^*$ with the weak-* topology. The $\mathfrak{A}$-Topology on $\Phi_{\mathfrak{A}}$ (or $\Phi_{\mathfrak{A}}^{\infty}$) is the subspace topology on $\Phi_{\mathfrak{A}}$ (or $\Phi_{\mathfrak{A}}^{\infty}$). The Carrier Space for $\mathfrak{A}$ is the space $\Phi_{\mathfrak{A}}$ (or sometimes $\Phi_{\mathfrak{A}}^{\infty}$) equipped with the $\mathfrak{A}$-topology. We begin with some properties of the carrier space for a commutative Banach algebra $\mathfrak{A}$. Proposition 1: Let $\mathfrak{A}$ be a commutative Banach algebra. Then: a) $\Phi_{\mathfrak{A}}$ has one-point compactification $\Phi_{\mathfrak{A}}^{\infty}$. b) If $\mathfrak{A}$ has a unit then $\Phi_{\mathfrak{A}}$ is compact. c) $\Phi_{\mathfrak{A}}$ is locally compact and Hausdorff. Proof of a) As mentioned above, if $f \in \Phi_{\mathfrak{A}}^{\infty}$. Then $\| f \| \leq 1$ and so $\Phi_{\mathfrak{A}}^{\infty}$ is contained in $B_{\mathfrak{A}^*}$ where $B_{\mathfrak{A}^*}$ denotes the closed unit ball of $\mathfrak{A}^*$. Recall from Alaoglu's Theorem that $B_{\mathfrak{A}^*}$ is weak-* compact. So to show that $\Phi_{\mathfrak{A}}^{\infty} \subseteq B_{\mathfrak{A}^*}$ is weak-* compact it is sufficient to prove that $\Phi_{\mathfrak{A}}^{\infty}$ is weak-* closed (this is because every closed subset of a compact space is also compact). If $\Phi_{\mathfrak{A}}^{\infty} = \mathfrak{A}^*$ then we are done. Otherwise, suppose that $\Phi_{\mathfrak{A}}^{\infty} \neq \mathfrak{A}^*$. 
We show that $\Phi_{\mathfrak{A}}^{\infty}$ is weak-* closed by showing that its complement $\mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$ is weak-* open, i.e., by showing that each $f \in \mathfrak{A}^* \setminus \Phi_{\mathfrak{A}^*}$ has an open neighbourhood fully contained in $\mathfrak{A}^* \setminus \Phi_{\mathfrak{A}^*}$. Let $f \in \mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$. Then $f$ is a linear functional but is NOT multiplicative. So for some $x, y \in \mathfrak{A}$ we have that $f(x)f(y) \neq f(xy)$. Define: \begin{align} \quad \delta := |f(x)f(y) - f(xy)| > 0 \end{align} Let: \begin{align} \quad \epsilon = \min \left \{ \frac{1}{2} \delta^{1/2}, \frac{1}{4} \left ( 1 + |f(x)| + |f(y)| \right )^{-1}\delta \right \} > 0 \end{align} And consider the following open neighbourhood $V$ of $f$: \begin{align} \quad V = \left \{ g : |g(x) - f(x)| < \epsilon, |g(y) - f(y)| < \epsilon, |g(xy) - f(xy)| < \epsilon \right \} \end{align} Suppose that $g \in V$ so that $|g(x) - f(x)| < \epsilon$, $|g(y) - f(y)| < \epsilon$, and $|g(xy) - f(xy)| < \epsilon$. We want to show that $g \in \mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$ so that $V \subset \mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$ to conclude that $\mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$ is weak-* open. We have that: \begin{align} \quad |g(x)g(y) - f(x)f(y)| &\leq |g(x)g(y) [- g(x)f(y) - f(x)g(y) + f(x)f(y) + g(x)f(y) + f(x)g(y) - f(x)f(y)] - f(x)f(y) | \\ & \leq |[g(x)g(y) - g(x)f(y) - f(x)g(y) + f(x)f(y)] + g(x)f(y) + f(x)g(y) - f(x)f(y) - f(x)f(y) | \\ & \leq |[g(x) - f(x)][g(y) - f(y)] + [g(x)f(y) -f(x)f(y)] + [f(x)g(y) - f(x)f(y)] | \\ & \leq |g(x) - f(x)||g(y) - f(y)| + |f(y)| |g(x) - f(x)| + |f(x)||g(y) - f(y)| \\ & < \epsilon^2 + |f(y)| \epsilon + |f(x)| \epsilon \\ & < \epsilon^2 + \epsilon[|f(y)| + |f(x)|] \\ & < \left ( \frac{1}{2} \delta^{1/2} \right )^2 + \frac{1}{4}\left (1 + |f(x)| + |f(y)| \right )^{-1} \delta \left [ |f(y)| + |f(x)| \right ] \\ & < \frac{1}{4} \delta + \frac{1}{4} \frac{|f(x)| + |f(y)|}{1 + |f(x)| + |f(y)|} \delta \\ & < \frac{1}{4} \delta + \frac{1}{4} \delta \\ & < \frac{1}{2} \delta \end{align} Now we have also assumed above that \begin{align} \quad |g(xy) - f(xy)| < \epsilon \leq \frac{1}{4} \delta \end{align} Therefore $|g(xy) - g(x)g(y)| \geq \frac{1}{4} \delta$ showing that $g$ is not multiplicative. (See the image below). So $g \in \mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$. Thus $g \in \mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$. So we have constructed an open neighbourhood of $f$ fully contained in $\mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$. Since $f$ was chosen arbitrarily in $\mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$, this shows that $\mathfrak{A}^* \setminus \Phi_{\mathfrak{A}}^{\infty}$ is weak-* open, so $\Phi_{\mathfrak{A}}^{\infty}$ is a weak-* closed subset of the weak-* compact space $\mathfrak{A}^*$, i.e., $\Phi_{\mathfrak{A}}^{\infty}$ is compact with respect to the $\mathfrak{A}$-topology. $\blacksquare$ Proof of b) Suppose that $\mathfrak{A}$ has a unit $1_{\mathfrak{A}}$. Recall that then if $f \in \Phi_{\mathfrak{A}}$ then $f(1_{\mathfrak{A}}) = 1$. 
Therefore we see that: \begin{align} \quad \Phi_{\mathfrak{A}} = \{ f \in \Phi_{\mathfrak{A}}^{\infty} : f(1_{\mathfrak{A}}) = 1 \} \end{align} So $\Phi_{\mathfrak{A}}$ is a closed subset of the $\mathfrak{A}$-topology compact space $\Phi_{\mathfrak{A}}^{\infty}$ which shows that $\Phi_{\mathfrak{A}}$ is compact with respect to the $\mathfrak{A}$ topology. $\blacksquare$ Proof of c) By (a) we have that $\Phi_{\mathfrak{A}}^{\infty}$ is compact with respect to the $\mathfrak{A}$-topology. For any $f \in \Phi_{\mathfrak{A}}$, by definition, $f$ is not identically zero so there exists an $x_0 \in \mathfrak{A}$ with $f(x_0) \neq 0$. Let: \begin{align} \quad N(f, x_0) = \left \{ g \in \Phi_{\mathfrak{A}}^{\infty} : |g(x_0)| \geq \frac{1}{2} |f(x_0)| \right \} \end{align} The above set is a closed subset of $\Phi_{\mathfrak{A}}^{\infty}$ and is thus compact in $\Phi_{\mathfrak{A}}$ with respect to the $\mathfrak{A}$-topology; contains $f$, and does not contain the zero functional. So $\Phi_{\mathfrak{A}}$ is locally compact. Lastly we know that the dual space $\mathfrak{A}^*$ is Hausdorff. Since $\Phi_{\mathfrak{A}}$ is a subspace of $\mathfrak{A}^*$ we have that $\Phi_{\mathfrak{A}}$ is also Hausdorff. $\blacksquare$
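For completeness, the estimate invoked at the end of the proof of (a), namely that $|g(xy) - g(x)g(y)| \geq \frac{1}{4} \delta$, follows from the triangle inequality together with the two bounds established there:
\begin{align} \quad |g(xy) - g(x)g(y)| &\geq |f(xy) - f(x)f(y)| - |f(x)f(y) - g(x)g(y)| - |g(xy) - f(xy)| \\ &> \delta - \frac{1}{2}\delta - \frac{1}{4}\delta = \frac{1}{4}\delta. \end{align}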
CommonCrawl
Reproductive biology of common carp (Cyprinus carpio Linnaeus, 1758) in Lake Hayq, Ethiopia Assefa Tessema ORCID: orcid.org/0000-0002-0873-68351, Abebe Getahun1, Seyoum Mengistou1, Tadesse Fetahi1 & Eshete Dejen2 Fisheries and Aquatic Sciences volume 23, Article number: 16 (2020) Cite this article This study was conducted in Lake Hayq between January and December 2018. The objectives of this study were to determine the growth, condition, sex ratio, fecundity, length at first sexual maturity (L50), and spawning seasons of common carp (Cyprinus carpio). Monthly fish samples of C. carpio were collected using gillnets of stretched mesh sizes of 4, 6, 7, 8, 10, and 13 cm and beach seines of mesh size of 6 cm. Immediately after the fish were captured, total length (TL) and total weight (TW) for each individual were measured in centimeters and grams, respectively, and their relationship was determined using power function. Length at first maturity (L50) was determined for both males and females using the logistic regression model. The spawning season was determined from the frequency of mature gonads and variation of gonadosomatic index (GSI) values of both males and females. Fecundity was analyzed from 67 mature female specimens. The length and weight relationship of C. carpio was TW = 0.015TL2.93 for females and TW = 0.018TL2.87 for males that indicate negative allometric growth in both cases. The mean Fulton condition factor (CF) was 1.23 ± 0.013 for females and 1.21 ± 0.011 for males. The value of CF in both cases was > 1 that shows both sexes are in good condition. Among the total 1055 C. carpio collected from Lake Hayq, 459 (43.5%) were females and 596 (56.5%) were males. The chi-square test showed that there was a significant deviation between male and female numbers from 1:1 ratio (χ2= 22, df = 11, P > 0.05) within sampling months. The length at first sexual maturity (L50) for females and males were 21.5 and 17.5 cm, respectively. Males mature at smaller sizes than females. The spawning season of C. carpio was extended from February to April, and the peak spawning season for both sexes was in April. The average absolute fecundity was 28,100 ± 17,462. C. carpio is currently the commercially important fish while Nile tilapia fishery has declined in Lake Hayq. Therefore, this baseline data on growth, condition, and reproductive biology of common carp will be essential to understand the status of the population of carp and design appropriate management systems for the fish stock of Lake Hayq, Ethiopia, and adjacent countries. Common carp (Cyprinus carpio) is one of the widely cultured commercially important freshwater fish species in the world (FAO 2013). C. carpio is native to Eastern Europe and Central Asia. It can tolerate a wide range of water quality parameters. In natural water bodies, this species can survive in very low water temperature and it can tolerate low concentrations and supersaturation of dissolved oxygen (Banarescu and Coad 1991). Common carp is omnivorous fish species that consume animals (aquatic insects, macroinvertebrates, and zooplankton) and plant origin (phytoplankton, macrophytes) (Rahman et al. 2008, 2009; Weber and Brown 2009). C. carpio grows rapidly, achieves sexual maturation in the second year of life, and is highly fertile (about 2 million eggs per female) (Balon 1975; Hossain et al. 2016). The combination of these features allows developing invasiveness potential (Troca and Vieira 2012). 
Knowledge of fish reproductive biology is very important for the rational utilization of fish stocks and their sustainable production (Cochrane 2002; Temesgen 2017). Understanding the reproductive aspects of fish is also very important for providing sound scientific advice in fishery management (Hossain et al. 2017; Khatun et al. 2019). Common carp have been introduced into many water bodies throughout the world, including Europe, Australia, North America, Africa, and Asia. The wide distribution and successful introductions of common carp are mostly due to their tolerance to variable environmental conditions (Forester and Lawrence 1978), as well as to their capability for early sexual maturity and rapid growth (Koehn 2004). Cyprinus carpio was first introduced to Aba Samuel Dam (Awash River basin) in 1940 from Italy (Getahun 2017). Later, C. carpio has been introduced in Lake Ziway in the late 1980s (FAO 1997; Abera et al. 2015), in highland lakes such as Ashengie, Ardibo, and Maybar (Golubtsov and Darkov 2008) for food security purpose, and the introduction was successful. Common carp were introduced to Lake Hayq accidentally from Lake Ardibo in 2008 (Wolde Mariam, Personal Communication 2018) through Ankerkeha River that connects the two lakes during the rainy season. Though common carp have established recently in Lake Hayq, it is dominating the other commercially important fish species, Nile tilapia and catfish. Fishermen of Lake Hayq believe that the current stunt growth of Nile tilapia (Oreochromis niloticus) is due to the recent invasion of common carp in the lake. Though there are some research works conducted in different water bodies of Ethiopia on common carp reproductive biology such as Hailu (2013) in Amerti Reservoir, Abera (2015) in Lake Ziway, and Asnake (2010) in Lake Ardibo, there is no information on the reproductive biology of common carp in Lake Hayq. Therefore, the purpose of this study was to establish baseline data on growth and condition, sex ratio, fecundity, length at first sexual maturity, and spawning seasons of common carp and design management strategy for the population of common carp in Lake Hayq. Study area and sampling techniques The study was conducted in Lake Hayq. Lake Hayq is located in the North Central highlands of Ethiopia. It is a typical example of highland lake of Ethiopia with volcanic origin. Geographically, it lies between 11° 3′ N to 11° 18′ N latitude and 39° 41′ E to 39° 68′ E longitude with an average elevation of 1911 meters above sea level. The lake has a closed drainage system, and the total watershed area is about 77 km2 of which 22.8 km2 is occupied by Lake Hayq. According to Demlie et al. (2007), the average depth of the lake is 37 m, and the maximum depth is 81 m. The only stream entering the lake is the Ankerkeha River, which flows into its southeastern corner. According to Fetahi et al. (2011), Lake Hayq is classified as a small highland freshwater (Fig. 1). Location map of Lake Hayq with respect to Ethiopia and Amhara Regional State Among the climate variables, only maximum and minimum temperature and rainfall of Lake Hayq were available at Kombolcha Meteorological Agency. In 2018, the average monthly maximum and minimum temperature around Lake Hayq was 25.9 and 9.9 °C, respectively (Fig. 2). The annual rainfall were 1200 mm (Fig. 3). The rainfall and the temperature variability around Lake Hayq for the last 10 years (2009–2018) were very low. 
The average monthly minimum and maximum temperature and annual rainfall were 9.8 °C, 26.6 °C, and 1205.6 mm, respectively (Kombolcha Meteorological Agency, 2019). Monthly maximum and minimum temperature variation of Lake Hayq in 2018 Monthly rainfall variation of Lake Hayq in 2018 Fishery data Three sampling sites were selected based on the impact of human and livestock activities. These are littoral site with intensive human activities related to recreation in lodges; pelagic site, less impact from human and livestock; and river mouth (Ankerkeha River), carrying huge silt every year (Table 1). The sampling sites were fixed with GPS, and a map was generated (Fig. 1). Fish specimens were collected each month for 1 year using gill nets of 4, 6, 8, 10, and 13 cm stretched mesh sizes through setting the nets overnight in the lake and beach seines of 6 cm mesh size. Data such as length, weight, sex, and maturity stages were collected in the field immediately after the fish were caught. Table 1 Sampling site description Some biological aspects of common carp Length-weight relationship The relationship between total length (TL) and total weight (TW) of C. carpio was calculated using power function as in Bagenal and Tesch (1978). $$ \mathrm{TW}={\mathrm{aTL}}^{\mathrm{b}} $$ TW Total weight (g) TL Total length (cm) a Intercept of the regression line b Slope of the regression line Condition factor (Fulton factor) The wellbeing of common carp was determined by using the Fulton condition factor as indicated in Bagenal and Tesch (1978). Fulton condition factor was calculated as: $$ \mathrm{FCF}=\frac{\mathrm{TW}}{{\mathrm{TL}}^3}\times 100 $$ where TW is the total weight in grams and TL is the total length in centimeters Sex ratio was determined using the formula: $$ \mathrm{Sex}\ \mathrm{ratio}=\frac{\mathrm{Number}\kern0.5em \mathrm{of}\kern0.5em \mathrm{females}}{\mathrm{Number}\kern0.5em \mathrm{of}\kern0.5em \mathrm{males}} $$ The absolute fecundity (AF) of individual females was determined gravimetrically (Bagenal and Braum 1987), with the number of ripe oocytes counted from triplicates of 1-g sub-sample of the ovary. The relationship between absolute fecundity with total length, total weight, and gonad weight was determined using least squares regression. The spawning season was determined from the percentages of fish with ripe gonads taken each month (Hossain and Ohtomi 2008) and from monthly GSI variations (Hossain et al. 2017). The spawning seasons of C. carpio were determined based on monthly variations of the gonadosomatic index (GSI): $$ GSI=\frac{W_g}{W-{W}_g}\times 100 $$ where Wg is the gonad weight (g) and W is the total weight (g) of the fish (Ricker 1975). Maturity estimation Total length (cm) and total weight (g) of each specimen of common carp were measured at the sampling sites using measuring board and sensitive balance, respectively. After dissection, the gonad maturity of each specimen was identified using a 5-point maturity scale (Wudneh, 1998). The length at which 50% of both sexes reached maturity (L50) was determined from the percentages of mature fish selected from peak breeding seasons (March–April) and fitted to the logistic equation described by Echeverria (1987). Descriptive statistics (frequency, percentages, and graphs) and inferential statistics (chi-square, independent t test, linear, and logistic regression) were used to summarize the collected data. SPSS Software Package version 16 and R 3.3.1 were used to summarize the collected data. 
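The core calculations described in this section can be summarised in a short script. The sketch below (length-weight power fit via log-log regression, Fulton's condition factor, GSI, and a logistic fit for L50) is our own Python illustration and is not the SPSS/R code used by the authors.

import numpy as np
from scipy.optimize import curve_fit

def length_weight_fit(tl, tw):
    # Fit TW = a * TL^b by linear regression on the log-transformed data.
    b, log_a = np.polyfit(np.log(tl), np.log(tw), 1)
    return np.exp(log_a), b  # returns (a, b)

def fulton_k(tw, tl):
    # Fulton's condition factor, FCF = 100 * TW / TL^3 (TW in g, TL in cm).
    return 100.0 * tw / tl ** 3

def gsi(gonad_w, total_w):
    # Gonadosomatic index, GSI = 100 * Wg / (W - Wg).
    return 100.0 * gonad_w / (total_w - gonad_w)

def l50(tl, mature):
    # Logistic model for length at first maturity; `mature` is 0/1 per fish.
    logistic = lambda L, L50, r: 1.0 / (1.0 + np.exp(-r * (L - L50)))
    popt, _ = curve_fit(logistic, np.asarray(tl, float), np.asarray(mature, float),
                        p0=[float(np.median(tl)), 0.5])
    return popt[0]  # length at which 50 % of fish are mature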
The total length of female and male C. carpio ranged from 11 to 50 cm and from 10.5 to 52 cm, respectively, and the total weight of females and males ranged from 19 to 1697 g and from 18 to 1378 g, respectively. The length-weight relationship of both female and male C. carpio in Lake Hayq was curvilinear, and the line fitted to the data was described by the regression equation (Table 2). In this study, the "b" values of both female and male C. carpio were significantly different from 3, showing allometric growth (Figs. 4 and 5). Table 2 Length-weight relationship of C. carpio in Lake Hayq Length-weight relationship of female Cyprinus carpio in Lake Hayq (N = 459) Length-weight relationship of male Cyprinus carpio in Lake Hayq (N = 596) Fulton's condition factor The Fulton condition factor values of female and male C. carpio ranged from 1 to 1.98 and from 1 to 1.83, respectively. The mean ± SE values of FCF for females and males were 1.23 ± 0.013 and 1.21 ± 0.011, respectively. The independent t test showed that there was no significant difference (P > 0.05) in mean FCF between male and female C. carpio in Lake Hayq. Of the 1055 specimens of C. carpio collected from Lake Hayq, 459 (43.5%) were females and 596 (56.5%) were males. The chi-square test showed a significant deviation of the sex ratio from 1:1 (χ2 = 22, df = 11, P < 0.05) across sampling months. Reproductive aspects of common carp Length at first sexual maturity Size at first maturity (L50) is the size at which 50% of the fish become mature for the first time. From the logistic regression model, male C. carpio matured at a smaller size (17.5 cm) than females (21.5 cm) in Lake Hayq, as shown in Fig. 6. Length at first sexual maturity (L50) of female (a) and male (b) C. carpio in Lake Hayq The occurrence of mature males and females The number of mature male (stage 4) C. carpio was higher than that of females across the sampling months. The numbers of mature female and male specimens were higher from January to April, with the highest numbers observed from February to April (Fig. 7). The 10-year (2009–2018) meteorological data showed that the average atmospheric temperature around Lake Hayq from February to April was 27.2 °C. Rainfall distribution around Lake Hayq is bimodal, and rainfall occurred in these months in the same year. This warm weather and the availability of rainfall might have triggered the spawning of common carp in the lake. Monthly frequency of mature specimens of C. carpio in Lake Hayq Sixty-seven fully mature C. carpio with TL of 21–49 cm and TW of 104–1230 g were selected for the fecundity study. The average absolute fecundity (AF) was 28,100 ± 17,462. The relationships of AF with TL, TW, and GW were linear (Figs. 8, 9, and 10), and AF was significantly related to TL, TW, and GW (P < 0.05). Relationship between absolute fecundity (AF) and total length (TL) in C. carpio Relation between absolute fecundity (AF) and total weight (TW) in C. carpio Relation between absolute fecundity (AF) and gonad weight (GW) in C. carpio Gonadosomatic index Cyprinus carpio in Lake Hayq has more than one spawning peak within the February to April period; however, the highest peak for both sexes was in April (Fig. 11). Mean monthly gonadosomatic index (GSI) of C. carpio in Lake Hayq This reproductive biological study of C. carpio in Lake Hayq is the first report and will serve as baseline information.
The results of the study help to characterize the population status of the fish and to design possible strategies for the sustainable utilization of the fisheries of the lake. Length-weight relationships in fishes are an important tool in fish stock assessment for evaluating growth status and informing management (Ujjania et al. 2012). The length-weight relationship of C. carpio in Lake Hayq indicated negative allometric growth, with "b" values of 2.93 for females and 2.87 for males. These values were similar to 2.82 for C. carpio of both sexes in Lake Ardibo (Asnake 2010) and to 2.87 and 2.77 for female and male C. carpio in Foum El-Khanga Dam in Algeria (Sahtout et al. 2017), but different from 1.9 and 2.3 for female and male C. carpio in Lake Naivasha in Kenya (Aera et al. 2014) and from 2.92 for C. carpio in Lake Amerti (Hailu 2013). These differences may be caused by several factors, including seasonal effects, habitat type, degree of stomach fullness, gonad maturity, sex, health, preservation techniques, food availability, differences in the observed length ranges, and fatness of the species, as well as physical factors such as temperature and salinity (Wootton, 1998; Rahman et al. 2012; Hossain et al. 2016). The variation in "b" values between males and females may depend on factors such as the number of specimens examined and the sampling season. The FCF values of female and male C. carpio were 1.23 ± 0.013 and 1.21 ± 0.011, respectively. These values were similar to 1.22 ± 0.14 for C. carpio in Amerti Reservoir (Hailu, 2013), but different from 1.58 and 1.57 for female and male C. carpio in Damsa Dam Lake in Turkey (Mert and Bulut 2014), 1.57 for both sexes of C. carpio in Foum El-Khanga Dam in Algeria (Sahtout et al. 2017), and 1.39 and 1.27 for female and male C. carpio in Almus Dam Lake in Turkey (Karataş et al., 2007). These variations in the FCF of C. carpio among water bodies could be based on differences in age, sex, season, stage of maturity, fullness of the gut, type of food consumed, amount of fat reserve, and degree of muscular development (Pauker and Coot, 2004; Hossain et al. 2013). The sex ratio (M:F) in this study was 1.3:1, and there was a significant deviation from the hypothetical female-to-male ratio of 1:1. The result of this study disagrees with Hailu (2013), who reported a non-significant female-to-male ratio (1.15:1) in Amerti Reservoir. However, it agrees with the female-to-male ratio of 1.53:1 reported in Damsa Dam Lake in Turkey (Mert and Bulut 2014). In the present study, the size at first sexual maturity of C. carpio was 17.5 cm for males and 21.5 cm for females. These values were similar to 15.8 and 22.5 cm for male and female C. carpio in Sidi Saad Reservoir in Tunisia (Hajlaoui et al. 2016), but different from 27 cm and 28.3 cm for male and female C. carpio in Amerti Reservoir (Hailu 2013), 27 cm and 28.7 cm for male and female C. carpio in Lake Ziway (Abera et al. 2015), and 34 and 42 cm for male and female C. carpio in Lake Naivasha in Kenya (Oyugi 2012). Knowledge of the fecundity of fish is important for examining the potential of its stocks, life history, practical culture, and actual management of the fishery (Islam et al. 2012). The range and mean fecundity of C. carpio in Lake Hayq were 10,316–122,600 and 28,100 ± 17,462, respectively. These values were greater than the absolute fecundity range of 1610–99,737 for C. carpio in Lake Ardibo (Asnake 2010). However, the fecundity of C.
carpio in Lake Hayq was lower than in most other water bodies of Ethiopia; it is less than the range of 36,955–318,584 and mean of 170,937 ± 1308 recorded for C. carpio in Amerti Reservoir (Hailu 2013) and the range of 75,645–356,745 and mean of 210,538 for C. carpio in Lake Ziway (Lemma et al. 2015). The fecundity of C. carpio depends on body size, and individuals produce between 500,000 and 3 million eggs per spawning (Smith 2004). Thus, the reproductive potential of C. carpio is exceptional, as they mature early, are highly fecund, increase reproductive effort with age over their life span, and reproduce at least once each year when conditions are appropriate for the survival of larvae. The lower absolute fecundity in Lake Hayq could be due to the smaller size of the fish compared with C. carpio in Amerti Reservoir and Lake Ziway. Appropriate identification of the maturity status of fishes is fundamental for the appropriate management of exploited stocks and is a tool commonly used by fisheries biologists and managers (Rahman et al. 2018). The monthly average GSI values of males and females were higher from February to April and were highest in April (Fig. 11). The lowest and highest GSI values were 1.1 and 4 for males and 3.5 and 10 for females. The GSI values of females were higher than those of males due to the higher gonad weight of females. The higher GSI values of both males and females between February and April, and the highest values in April, might be associated with higher atmospheric and water temperatures of 26 and 23 °C, respectively. Rainfall availability, together with temperature, might also contribute to greater food availability (plankton, macrophytes, and detritus) and trigger the spawning of C. carpio in Lake Hayq. The mean monthly average water temperature of Lake Hayq was 23 °C, and better rainfall was recorded during the spawning months. In agreement with the current study, peak breeding seasons were recorded in Amerti Reservoir (Hailu 2013) and Lake Ziway (Abera et al. 2015) when water temperatures were higher and rainfall was available. C. carpio in Lake Hayq has more than one spawning season, similar to Amerti Reservoir (Hailu 2013), Lake Ziway (Abera et al. 2015), and Lake Naivasha in Kenya (Oyugi, 2012). This might be related to the thermally stable warm environment and unlimited food resources (Muchiri et al. 1995). The mean monthly surface water temperature, which ranged from 21.1 to 25.1 °C during the study period, appears to favor year-round spawning of common carp in Lake Hayq. Conclusion and recommendation The growth and condition of common carp in Lake Hayq were good. The absolute fecundity of common carp in Lake Hayq was lower than in other Ethiopian and African water bodies, which could be due to the smaller size of the fishes used for fecundity analysis. The L50 values of common carp were small (17.5 cm for males and 21.5 cm for females), which might be associated with illegal fishing activities and the use of narrow gillnets with mesh sizes of 4–6 cm. Hence, the mesh size of gillnets should be regulated to at least 8 cm, which is the national standard. Furthermore, common carp have an extended spawning season in Lake Hayq (February–April) with peak spawning in April. Therefore, these intense spawning months should be designated as closed seasons (no fishing activities). Restricted gillnet use and closed-season practices together could bring better recruitment and better fish size.
Long-term monitoring on reproduction potential, spawning season, and population status of common carp should be done for sustainable fishery utilization of Lake Hayq. Data sharing is not applicable. Abera L, Getahun A, Lemma B. Some aspects of reproductive biology of the common carp (Cyprinus Carpio Linnaeus, 1758) in Lake Ziway, Ethiopia. Global J Agric Res Rev. 2015;3:151–7. Aera NC, Migiro EK, Yasindi A, Outa N. Length-weight relationship and condition factor of common carp, (Cyprinus carpio) in Lake Naivasha, Kenya. Int J Curr Res. 2014;6:8286–96. Asnake W. Fish resource potential and some biological aspect of Oreochromis niloticus and Cyprinus carpio in Lake Ardibo, Northern Ethiopia. MSc Thesis, College of Agricultural and Environmental Science. Bahir Dar: Bahir Dar University; 2010. Bagenal TB, Braum E. Methods for assessment of fish production in freshwaters. London: Blackwell Scientific Publications; 1987. Bagenal TB, Tesch FW. Age and growth. In: Bagenal TB, editor. Methods for assessment of fish production in freshwaters. Handbook no.3, England. Oxford: Blackwell; 1978. p. 101–136. Balon EK. Reproductive guilds of fishes: a proposal and definition. J Fish Res Board Can. 1975;32:821–64. Banarescu P, Coad B W. Cyprinids of Eurasia. In: Winfield, IJ, Nelson JS, editors. Cyprinid fishes: systematics, biology, and exploitation., Chapman and Hall, London 1991. p127–155. Cochrane KL. A fishery manager's guidebook: management measures and their application. In: FAO fisheries technical paper. No. 424. Rome: FAO; 2002. p. 231. Demlie M, Ayenew T, Stefan W. Comprehensive hydrological and hydrogeological study of topographically closed lakes in highland Ethiopia: the case of Hayq and Ardibo Lakes. J Hydrol. 2007;339:145–58. Echeverria TW. Thirty-four species of California rockfishes: maturity and seasonality of reproduction. US Fish Bull. 1987;85:229–50. FAO. Fish state plus: Universal software for fishery statistical time series (available at www.fao.org/fi/statist/fisoft/fishplus.asp). 2013. FAO (Food and Agricultural Organization). Aquaculture production statistics 1986-1996. FAO fish. Circ, 815, Rev. 9. 1997. Fetahi T, Michael S, Mengistou S, Simone L. Food web structure and trophic interactions of the tropical highland Lake Hayq. Ethiopia Ecol Model. 2011;222:804–13. Forester TS, Lawrence JM. Effects of grass carp and carp on populations of bluegill and largemouth bass in ponds. Trans Am Fish Soc. 1978;107:172–5. Getahun A. The freshwater fishes of Ethiopia, diversity, and utilization. Addis Ababa: View Graphics and Printing Plc; 2017. Golubtsov AS, Darkov AA. A review of fish diversity in the main drainage systems of Ethiopia based on the data obtained by 2008. In: Pavlov DS, Dgebudaze, YuYu, Darkov AA, Golubtsov AS, Mina MV, editors. Ecological and faunistic studies in Ethiopia, "Proceedings of Jubilee Meeting Joint Ethio-Russian Biological Expedition: 20 years of scientific cooperation". Moscow: KMK Scientific Press Ltd; 2008. p. 69–102. Hailu M. Reproductive aspects of common carp (Cyprinus Carpio L, 1758) in Amerti reservoir, Ethiopia. J Ecol Nat Environ. 2013;5:260–4. Hajlaoui W, Missaoui S. Reproductive biology of the common carp, Cyprinus carpio communis, in Sidi Saad reservoir (Central Tunisia). Bull Soc Zool Fr. 2016;141:25–39. Hossain MY, Hossen MA, Islam MM, Pramanik MNU, Nawer F, Paul AK, et al. Biometric indices and size at first sexual maturity of eight alien fish species from Bangladesh. Egypt J Aquatic Res. 2016;42:331–9. 
Hossain MY, Hossen MA, Islam MS, Jasmine S, Nawer F, Rahman MM. Reproductive biology of Pethia ticto (Cyprinidae) from the Gorai River (SW Bangladesh). J Appl Ichthyol. 2017;33:1007–14. Hossain MY, Ohtomi J. Reproductive biology of the southern rough shrimp Trachysalambria curvirostris (Penaeidae) in Kagoshima Bay, southern Japan. J Crustac Biol. 2008;28:607–12. Hossain MY, Rahman MM, Abdallah EM, Ohtomi J. Biometric relationships of the pool barb Puntius sophore (Hamilton 1822) (Cyprinidae) from three major rivers of Bangladesh. Sains Malaysiana. 2013;22:1571–80. Islam MR, Sultana N, Hossain MB, Mondal S. Estimation of fecundity and gonadosomatic index (GSI) of gangetic whiting, Sillaginopsis panijus (Hamilton, 1822) from the Meghna River Estuary, Bangladesh. World Appl Sci J. 2012;17:1253–60. Karataş M, Çiçek E, Başusta A, Başusta N. Age, growth, and mortality of common carp (Cyprinus Carpio Linneaus, 1758) population in Almus dam Lake (Tokat- Turkey). J Appl Biol Sci. 2007;1:81–5. Khatun D, Hossain MY, Nawer F, Mostafa AA, Al-Askar AA. Reproduction of Eutropiichthys vacha (Schilbeidae) in the Ganges River (NW Bangladesh) with special reference to the potential influence of climate variability. Environ Sci Pollut Res. 2019;26:10800–15. Koehn JD. Carp (Cyprinus carpio) as a powerful invader in Australian waterways. Freshw Biol. 2004;49:882–94. Lemma A, Abebe G, Brook L. Some Aspects of Reproductive Biology of the common carp (Cyprinus Carpio Linnaeus, 1758) in Lake Ziway, Ethiopia. Global Journal of Agricultural Research and Reviews. 2015;3:151–157. Mert R, Bulut S. Some biological properties of carp (Cyprinus carpio L., 1758) introduced into Damsa Dam Lake, Cappadocia Region, Turkey. Pakistan J Zool. 2014;46:337–46. Muchiri SM, Hart BJ, Harper MD. The persistence of two introduced tilapia species in Lake Naivasha, Kenya in the face of environmental variability and fishing pressure. In: Pitcher TJ, Hart PJB, editors. The impact of species changes in African lakes: Chapman & Hall; 1995. p. 299–320. Oyugi OD. Ecological impacts of common carp (Cyprinus Carpio L. 1758) (Pisces: Cyprinidae) on naturalised fish species in Lake Naivasha, Kenya. PhD dissertation. Kenya: University of Nairobi; 2012. Pauker C, Coot RSR. Factors affecting the condition of Flanmelmout suckers in Colorado River, grand canyon, Arizona. North Am J Fish Manag. 2004;24:648–53. Rahman MM, Hossain MY, Jewel MAS, Rahman MM, Jasmine S, Abdallah EM, Ohtomi J. Population structure, length-weight, and length-length relationships, and condition-and form-factors of the Pool barb Puntius sophore (Hamilton, 1822) (Cyprinidae) from the Chalan Beel, North-Central Bangladesh. Sains Malaysiana. 2012;41:795–802. Rahman MM, Hossain MY, Jo Q, Kim SK, Ohtomi J, Meyer C. Ontogenetic shift in dietary preference and low dietary overlap in rohu (Labeo rohita) and common carp (Cyprinus carpio) in semi-intensive polyculture ponds. Ichthyol Res. 2009;56. Rahman MM, Hossain MY, Tumpa AS, Hossain MI, Billah MM, Ohtomi J. Size at sexual maturity and fecundity of the mola carplet, Amblypharyngodon mola (Hamilton 1822) (Cyprinidae) in the Ganges River, Bangladesh. Zool Ecol. 2018;28:429–36. Rahman MM, Jo Q, Gong YG, Miller SA, Hossain MY. A comparative study of common carp (Cyprinus carpio L.) and calbasu (Labeo calbasu Hamilton) on bottom soil resuspension, water quality, nutrient accumulations, food intake, and growth of fish in simulated rohu (Labeo rohita Hamilton) ponds. Aquaculture. 2008;285:78–83. Ricker WE. 
Computation and interpretation of biological statistics of fish populations. Bulletin of the Fisheries Research Board of Canada; 1975. Sahtout F, Boualleg C, Khelifi N, Kaouachi N, Boufekane B, Brahmia S, et al. Study of some biological parameters of Cyprinus carpio from Foum El-Khanga dam, Souk-Ahras, Algeria. AACL Bioflux. 2017;10:663–74. Smith BB. Common carp (Cyprinus carpio L. 1758): spawning dynamics and early growth in the lower River Murray. PhD dissertation, School of Earth and Environmental Sciences, University of Adelaide, Australia. 2004. Temesgen M. Status and trends of fish and fisheries in a tropical rift valley lake, Lake Langeno, Ethiopia. PhD dissertation, Department of Zoological Sciences. Addis Ababa: Addis Ababa University; 2017. Troca DFA, Vieira JP. Potencial invasor dos peixes não nativos cultivados na região costeira do Rio Grande do Sul, Brasil [Invasive potential of non-native fishes farmed in the coastal region of Rio Grande do Sul, Brazil]. Bol Inst Pesca. 2012;38:109–20. Ujjania NC, Kohli MPS, Sharma LL. Length-weight relationship and condition factors of Indian major carps (Catla catla, Labeo rohita, and Cirrhinus mrigala) in Mahi Bajaj Sagar, India. Res J Biol. 2012;2:30–6. Weber M, Brown M. Effects of common carp on aquatic ecosystems 80 years after "carp as a dominant": ecological insights for fisheries management. Rev Fish Sci. 2009;17:524–37. Wootton RJ. Ecology of teleost fishes. 2nd ed. London: Kluwer Academic Publishers; 1998. Wudneh T. Biology and management of fish stocks in Bahir Dar Gulf, Lake Tana, Ethiopia. PhD dissertation. Wageningen: Wageningen Agricultural University; 1998. The authors would like to acknowledge Addis Ababa University, the Ministry of Water, Irrigation, and Electricity, and the Haik Agricultural Research Sub-Center for their financial and logistic support. We would also like to extend our gratitude to the fishermen of Lake Hayq, especially Fiseha Woldemariam and Seid Abebe, for their unreserved support during fish sample collection, and to Kidane Aragaw for his support in the data analysis, especially the logistic regression analysis using R software. Addis Ababa University and the Ministry of Water, Irrigation, and Electricity granted us 2000 USD for data collection for this research. The funding organizations have commented on the work for the betterment of the paper. Department of Zoological Sciences, Addis Ababa University, Addis Ababa, Ethiopia Assefa Tessema, Abebe Getahun, Seyoum Mengistou & Tadesse Fetahi Intergovernmental Authorities on Development (IGAD), Djibouti City, Djibouti Eshete Dejen Assefa Tessema Abebe Getahun Seyoum Mengistou Tadesse Fetahi TA, the corresponding author, prepared the report from the collected data, whereas GA, MS, FT, and DE are co-authors who edited the paper. The authors read and approved the final manuscript. Authors' information TA is currently a PhD student at Addis Ababa University. The co-authors GA and MS are professors, and FT is an associate professor, in the Department of Zoological Sciences at Addis Ababa University. They have published many articles in fisheries and aquatic sciences and have advised MSc and PhD students. The last co-author, DE, is a senior and energetic researcher in the fishery and aquaculture areas. He holds a PhD and works at IGAD as a senior fishery expert. Correspondence to Assefa Tessema. Not applicable, since there is no ethical approval process for fishery data in Ethiopia.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Tessema, A., Getahun, A., Mengistou, S. et al. Reproductive biology of common carp (Cyprinus carpio Linnaeus, 1758) in Lake Hayq, Ethiopia. Fish Aquatic Sci 23, 16 (2020). https://doi.org/10.1186/s41240-020-00162-x Fulton condition factor, length at first sexual maturity, spawning seasons
Heavy metal in radiology: how to reliably differentiate between lodged copper and lead bullets using CT numbers Dominic Gascho (ORCID: orcid.org/0000-0001-9004-4362), Niklaus Zoelch, Henning Richter, Alexander Buehlmann, Philipp Wyss, Michael J. Thali & Sarah Schaerli The in situ classification of bullets is of interest in forensic investigations when the bullet cannot be removed. Although computed tomography (CT) is usually performed on shooting victims, visual assessment or caliber measurement using CT can be challenging or infeasible if the bullets are deformed or fragmented. Independent of the bullet's intactness, x-ray attenuation values (CT numbers) may provide information regarding the material of the bullet. Ethical approval was not required (animal cadavers) or was waived by the ethics committee (decedents). Copper and lead bullets were fired into animal cadavers, which then underwent CT scanning at four energy levels (80, 100, 120, and 140 kVp). CT numbers were measured within regions of interest (ROIs). In addition to comparing CT numbers, the dual-energy index (DEI), representing the ratio between the CT numbers of two energy levels, was calculated. The most appropriate method was applied to decedents with fatal gunshot wounds. CT numbers demonstrated no significant difference between copper and lead bullets, and false classifications can easily occur. DEI calculations revealed significant differences between the two groups of bullets. The 120/140 DEIs calculated from the maximum CT numbers obtained from ROIs at the edge of copper versus lead bullets presented a significant difference (p = 0.002) and a gap between the CT numbers of copper and lead bullets and were successfully applied to the decedents. This study presents a viable method for distinguishing copper and lead bullets in situ via CT and highlights the potential pitfalls of incorrect classifications. Computed tomography (CT) numbers are not reliable for distinguishing copper from lead bullets. The dual-energy index (DEI), representing the ratio between the CT numbers of two energies, is more reliable for classifying those bullets. The ratio of maximum CT numbers (DEImax) was suitable for classifications. Using the 120/140 DEImax from CT numbers of bullets' edges is recommended. Computed tomography (CT) allows identification of the location of a lodged projectile and detection of gunshot residues indicating a contact shot [1, 2]. Ballistic experts examine bullets secured at a crime scene or removed from a body, and laboratory analysis of the deposits from an entrance wound can provide information on the bullet used [3,4,5]. The in situ identification of a bullet can be particularly interesting in forensic investigations [6,7,8] when a lodged bullet will not be removed from the patient, for example, to avoid the risk of neural damage due to an intervention [9]. The feasibility of visual assessment or caliber measurement of lodged bullets using CT was assessed on real shooting victims in postmortem studies [10,11,12]. The authors of these studies concluded that visual assessments and caliber measurements on CT are often impeded or infeasible since lodged bullets are frequently heavily deformed or fragmented [13,14,15,16]. Therefore, a method that is less dependent on the intactness of a lodged bullet is desired. Dual-energy-based material differentiation of bullets using clinical CT scanners was assessed in ex situ, animal cadaver, and phantom studies [17,18,19].
The x-ray attenuation of a material can be measured as CT numbers (Hounsfield unit (HU) values) within a defined region of interest (ROI) including several pixels/voxels. Calculation of a bullet's dual-energy index (DEI) from CT numbers measured at two different energy levels (dual-energy) was recently presented as a robust method for distinction between intact bullets composed of copper (and zinc) and those composed of lead, which were manually inserted into animal cadaver models [18]. Repeated single-energy CT scans at different energy levels were performed instead of actual dual-energy CT scans, since only single-energy CT scans allowed for reconstructions that enabled CT number measurements beyond the standard range of HU values [18]. Therefore, this DEI method is feasible using any standard CT scanner, but it causes additional radiation exposure. However, previous ex situ studies on foreign bodies demonstrated significantly different CT numbers between copper or brass (a copper-zinc alloy) specimens and lead specimens at a single energy of 130 kVp [20, 21]; thus, the DEI method might be superfluous for the distinction between copper and lead bullets. Therefore, further investigation of the x-ray attenuation characteristics of copper and lead bullets and exploration of the potential benefit of using the DEI compared to using CT numbers for the differentiation of these two types of bullets were deemed necessary. This study aimed (1) to investigate the need for two CT scans at different energy levels for the material differentiation of lodged bullets composed of copper and lead in an animal cadaver study and (2) to present a reliable and valid method for differentiating between these two types of frequently encountered bullets using clinical CT scans. No animals were killed for the scientific purposes of this study. The animal models used in this study were obtained from an institute of veterinary pathology. Fresh cadavers were used as an addition to another study with ethical approval and are in accordance with the 3Rs (replacement, reduction, and refinement)—the guiding principles for the ethical use of animals in science. Additional ethical approval for using these animal cadavers was not required. Parts of this study were performed with human cadavers. Ethical approval was waived by the responsible ethics committee of the Canton of Zurich (waiver number: 2015-0686). This article does not contain any studies with (living) human participants. Animal cadaver study and real forensic cases Bullets (n = 12) from four different types of ammunition were selected for this study (Action 4, n = 3; QD-PEP, n = 3; Hydra-Shok, n = 3; 7.65 Browning, n = 3) (Fig. 1). The bullets were divided into two groups according to their core materials. One group (copper group, n = 6) included the unjacketed Action 4 and QD-PEP bullets, which are composed of copper. These solid copper bullets are deformation bullets that were developed for law enforcement units. The other group (lead group, n = 6) included the Hydra-Shok and 7.65 Browning bullets, which are frequently encountered lead bullets with jackets composed of copper-zinc alloys (copper/zinc). The Hydra-Shok bullet is a semi-jacketed hollow-point (deformation) bullet, while 7.65 Browning bullets (which are also referred to as .32 ACP bullets) are full metal-jacketed bullets. From each type of ammunition, three bullets were fired into animal cadaver models at a dedicated shooting range. Sheep legs were used as a substitute for human tissue. 
The shootings were performed by a ballistics expert from a forensics institute. After shooting, each sheep leg was scanned by CT. Action 4 (a), QD-PEP (b), Hydra-Shok (c), and 7.65 Browning (d) bullets were fired into animal cadaver models at a dedicated shooting range. Then, computed tomography scans of the animal cadaver models with lodged copper bullets (a, b) and lodged lead bullets (c, d) were performed Additionally, the distinction between copper and lead bullets was assessed in real forensic cases with fatal gunshot wounds and lodged bullets. The decedents (n = 15) underwent postmortem imaging as part of forensic judicial investigations. Ethical approval was waived by the responsible ethics committee. The bullets were removed during autopsy and identified by the forensics institute. Before the bullets were removed, the decedents underwent a CT examination using the same scanner used for the animal cadaver study. The CT scan protocol from the animal cadaver study was used. Decedents with a lodged Action 4 copper bullet (n = 3) and decedents with a lodged .22 LR lead bullet (n = 3) were selected for this study. Scan protocol Repeated CT scans using energy levels of 80, 100, 120, and 140 kVp were performed using a standard medical 128-slice CT scanner (SOMATOM Definition Flash, Siemens Healthcare GmbH, Forchheim, Germany). The tube current was adjusted to gain an almost equal volume CT dose index of 9 mGy at each energy level, which provides equivalent image noise. A standard pitch of 0.6 was used. The raw data were reconstructed using standard filtered back projection with a hard kernel (B70), a slice thickness of 1.5 mm, and a field of view of 140 × 140 mm (reconstruction matrix, 512 × 512; in-plane voxel size, 0.27 × 0.27 mm). Reconstructions were calculated in an extended CT scale (ECTS) to allow measurements beyond the standard range of HU values [22]. ROI measurements, CT numbers, and the dual-energy index CT numbers were measured in a defined ROI at 80, 100, 120, and 140 kVp (Fig. 2). To assure identical ROI placement, the datasets were displayed side by side in a multiplanar reconstruction view using dedicated software (MM Reading, syngo.via, Version VB10B HF03, Siemens Healthcare GmbH, Forchheim, Germany) [23]. The software enables the mean and maximum CT numbers to be measured within an ROI at the exact same position on all four datasets with different energy levels. ROI circles were drawn at the centre (ROI: 1.6 mm2) and edge (ROI: 0.5 mm2) of the lodged bullet. Measurements were taken separately at these two positions to demonstrate the influence of the bullet's caliber. For each bullet, six ROIs were positioned at different slices, i.e., different levels within the bullet or its fragments for the core and edge measurements (ROIs per bullet: core, n = 6; edge, n = 6). An ROI was repositioned on a new slice if the upper limit of 30,710 HU was displayed as the maximum CT number. The DEI was calculated for dual-energy pairs of 80/100 kVp, 80/120 kVp, 80/140 kVp, 100/120 kVp, 100/140 kVp, and 120/140 kVp using the mean CT numbers (DEImean) and the maximum CT numbers (DEImax) from the ROI measurements at the centre and edge of the lodged bullet. The following formula [24] was used to calculate the DEI: $$ DEI=\frac{x_{\mathrm{low}}-{x}_{\mathrm{high}}}{x_{\mathrm{low}}+{x}_{\mathrm{high}}+2000} $$ The variable xlow represents the CT number measured at the lower energy level of the individual dual-energy pair, while xhigh is the CT number measured at the higher energy level. 
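A minimal sketch of this calculation is shown below; the function name and the example CT numbers are assumptions for illustration and are not taken from the study's measurements.

```python
# Minimal sketch: dual-energy index (DEI) from two CT numbers (HU) measured in the
# same ROI at a lower and a higher tube voltage, following the formula above.
def dual_energy_index(ct_low: float, ct_high: float) -> float:
    """DEI = (x_low - x_high) / (x_low + x_high + 2000)."""
    return (ct_low - ct_high) / (ct_low + ct_high + 2000.0)

# Example with hypothetical maximum CT numbers from the edge of a lodged bullet
dei_120_140 = dual_energy_index(ct_low=24500.0, ct_high=22100.0)
print(f"120/140 DEImax = {dei_120_140:.3f}")
```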
Cross sections of a lodged Action 4 copper bullet at different energy levels indicated in kilo-voltage-peaks (kVp) (a, 80 kVp; b, 100 kVp; c, 120 kVp; d, 140 kVp). Region-of-interest-based measurements were carried out in the hyperdense ring at the edge (area 0.5 mm2, highlighted in blue) and in the centre of the bullet (area 1.6 mm2, highlighted in red). Each pixel (i.e., voxel) contains a single CT number. The measurements indicate the mean computed tomography (CT) number, the standard deviation (SD), the minimum CT number, and the maximum CT number of all pixels within the ROI. The CT numbers (i.e., the x-ray attenuation values) are influenced by the energy level. CT numbers obtained from two different energy levels can be used to calculate the dual-energy index (DEI), which represents the ratio of the CT numbers of the two energy levels Distinction between copper and lead bullets The difference between using CT numbers from a single energy and the DEI for the distinction between copper and lead bullets within the animal cadaver models was assessed by considering the energy level, the use of the mean and maximum CT numbers, and the ROI position (core or edge). The two groups of bullets were compared using statistical analysis, standard deviations, and data overlap. Finally, the most suitable method with the lowest standard deviations and the smallest data overlap was applied and assessed in real forensic cases. Statistical analysis and data overlap calculations The overall mean values of the mean CT numbers, of the maximum CT numbers, and of the DEIs of each bullet were used for statistical analysis. The Shapiro-Wilk test was used to determine whether the data were normally distributed. The t test was used for normally distributed data, and the Mann-Whitney U test was used for non-normally distributed data to reveal statistically significant differences between the two groups of bullets (significance level, p < 0.05). The statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS, International Business Machines Corporation, IBM, Armonk, NY, USA). Animal cadaver study All bullets were deformed, and the lead bullets were partially fragmented. If a bullet was fragmented, ROI measurements were conducted on the largest main fragment. One of the lead bullets was separated from its jacket, which was composed of a copper-zinc alloy (Fig. 3). This particular jacket was used to further examine the final method applied on real forensic cases in this study. Volume rendering of the Hydra-Shok bullets within the animal cadaver model obtained from the computed tomography scan with 140 kVp (a). The copper/zinc jacket (a, arrowhead; b) of one of the Hydra-Shok bullets (Hydra-Shok number 2) was separated from the lead core of the bullet (a, arrowhead). All Hydra-Shok bullets were heavily deformed; however, they did not hit the femoral bone as no osseous fractures were detected. Metal artifacts are visible as streaks in the panel a, but these streaks disappeared when a window that could precisely visualise the metallic object was selected (b) Differences in CT numbers from single energies The mean and maximum CT numbers of a total of 576 ROI measurements are illustrated in Fig. 4. Table 1 lists the overall mean values and standard deviations, minimum and maximum values, and the statistical analysis of the mean and maximum CT numbers for the two groups at all four energy levels from the ROI measurements at the core and the edge of the lodged bullets. 
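As a hedged illustration of the group comparison described in the statistical analysis above, the snippet below re-implements the normality check and the subsequent two-sample test with scipy; the per-bullet values are hypothetical, and the original analyses were performed in SPSS.

```python
# Illustrative re-implementation (not the original SPSS analysis) of the group
# comparison: Shapiro-Wilk normality check, then t test or Mann-Whitney U test.
import numpy as np
from scipy import stats

# Hypothetical per-bullet summary values (e.g., overall mean DEI per bullet)
copper = np.array([0.055, 0.061, 0.048, 0.066, 0.058, 0.052])
lead = np.array([-0.004, -0.001, 0.001, -0.006, 0.000, -0.002])

def is_normal(sample, alpha=0.05):
    _, p = stats.shapiro(sample)
    return p > alpha

if is_normal(copper) and is_normal(lead):
    stat, p_value = stats.ttest_ind(copper, lead)
    test = "t test"
else:
    stat, p_value = stats.mannwhitneyu(copper, lead, alternative="two-sided")
    test = "Mann-Whitney U test"

print(f"{test}: p = {p_value:.4f} (significant at p < 0.05: {p_value < 0.05})")
```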
Mean computed tomography (CT) numbers obtained from the cores (a) and the edges (b) of the bullets and maximum CT numbers obtained from the cores (c) and the edges (d) of the bullets. The CT numbers of these two types of bullets demonstrate a strong overlap. The CT numbers from the cores were much lower than those from the edges. The red dashed line indicates the upper limit of the extended CT scale (ECTS) Table 1 CT numbers of copper and lead bullets At all four energy levels, the mean values of the (mean and maximum) CT numbers obtained from core measurements were higher in the copper group than in the lead group, while the mean values from the edge measurements exhibited the opposite relationship (i.e., the (mean and maximum) CT numbers obtained from edge measurements were higher in the lead group than in the copper group). The standard deviations decreased with an increase in the energy level in the copper group; the same occurred for the edge measurements in the lead group, while in the lead core, the standard deviations increased with an increase in the energy level. Furthermore, the (mean and maximum) CT numbers obtained from lead bullets increased with an increase in the energy level from 80 to 140 kVp. However, the CT numbers measured in the copper bullets increased only from 80 to 120 kVp in the core and from 80 to 100 kVp at the edge. The CT numbers peaked at 120 kVp (core) and at 100 kVp (edge) and decreased at higher energy levels. The Shapiro-Wilk test indicated that the mean and maximum CT numbers were normally distributed; therefore, the t test was applied. The t tests revealed no statistically significant differences between the CT numbers of copper and lead bullets. Differences in the dual-energy index Table 2 lists the overall mean values and standard deviations or median values and interquartile ranges, minimum and maximum values, and the statistical analysis of the DEIs of each group calculated from core-based and edge-based CT numbers for all six dual-energy pairs. The copper group demonstrated higher DEIs than the lead group except for the 80/100 and 80/120 DEImean values and the 80/100 DEImax value based on core measurements. In the core, statistically significant differences between copper and lead bullets were detected for the 80/100 and 80/120 DEImax values (each, p = 0.002). Concerning edge measurements, a statistically significant difference was detected for the 100/120 DEImean value and the 100/120, 100/140, and 120/140 DEImax values (each, p = 0.002). The DEIs of the lead bullets that differed significantly from those of the copper bullets presented small standard deviations (range, ± 0.008 to ± 0.014) or interquartile ranges (range, 0.005–0.015), while those of the copper bullets presented larger standard deviations (range, ± 0.023 to ± 0.036) or interquartile ranges (range, 0.024–0.028), except for the 120/140 DEImax. The edge-based 120/140 DEImax presented small interquartile ranges in both groups of bullets (copper bullets, 0.011; lead bullets, 0.005) and a gap between the calculated DEIs of the two groups (Fig. 5); therefore, the edge-based 120/140 DEImax was deemed the most appropriate for distinguishing between copper and lead bullets. A boundary at 0.004 was identified between the two groups.
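A hedged sketch of how such a boundary could be applied in practice is given below; the 0.004 threshold comes from the animal cadaver data reported above, while the function name and example inputs are illustrative assumptions.

```python
# Illustrative classifier: apply the empirically identified boundary (0.004) to the
# edge-based 120/140 DEImax of a lodged bullet. Not a validated diagnostic tool.
def classify_bullet(dei_max_edge_120_140: float, threshold: float = 0.004) -> str:
    """Tentative material class from the edge-based 120/140 DEImax."""
    if dei_max_edge_120_140 > threshold:
        return "copper (or separated copper/zinc jacket)"
    return "lead"

print(classify_bullet(0.052))    # -> copper (or separated copper/zinc jacket)
print(classify_bullet(-0.001))   # -> lead
```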
Table 2 Dual-energy indexes for copper and lead bullets The core-based 80/100 dual-energy index (DEI)max (a), the core-based 80/120 DEImax (b), the edge-based 100/120 DEImean (c), the edge-based 100/120 DEImax (d), the edge-based 100/140 DEImax (e), and the edge-based 120/140 DEImax (f) yielded statistically significant differences between the two groups of bullets (x-axis: Action 4 bullets, 1-3; QD-PEP bullets, 4-6; Hydra-Shok bullets, 7-9; 7.65 Browning bullets, 10-12). While large overlaps are visible for core measurements, the copper group visibly differed from the lead group in terms of the DEIs calculated from edge measurements. Only the edge-based 120/140 DEImax resulted in a clear dividing line between the two groups of bullets (green dashed line). The Hydra-Shok no. 2 bullet (bullet number 8 on the x-axis), which was separated from its jacket, yielded lower values than all other copper/zinc-jacketed lead bullets. The 120/140 DEImax was also calculated for the jacket composed of copper (and zinc) that was separated from the lead core after entering the animal cadaver (Fig. 3). The jacket yielded a 120/140 DEImax above the threshold (mean value, 0.064; range, 0.051–0.072), as was also observed for all solid copper bullets. Consequently, the separated jacket clearly differed from its initial lead core and from all (still) jacketed lead bullets in terms of the 120/140 DEImax. Interestingly, the Hydra-Shok no. 2 bullet, the "unjacketed" lead bullet, presented lower DEIs than the other lead bullets (Fig. 5). Real forensic cases Similar to the animal cadavers, all three Action 4 copper bullets lodged in decedents were deformed, while all three unjacketed .22 LR lead bullets were deformed and partially fragmented. The bullets were located in the cranium (cases 1, 2, and 4), the dorsal muscles (cases 3 and 6), and the muscles of the upper arm (case 5). The lead bullets fragmented into several pieces, and tiny metal fragments were scattered along the wound channel. The fragments did not allow visual classification or caliber measurements (Fig. 6). According to the results of the animal cadaver study, the edge-based 120/140 DEImax was applied for the decedents. Since the CT numbers at 120 kVp and 140 kVp had to be measured to calculate the DEI, these CT numbers were also compared between the individual cases. At both energy levels, only two of the three Action 4 copper bullets (cases 2 and 5) presented lower CT numbers than the .22 LR lead bullets in all other cases, while the Action 4 copper bullet in case 6 was not distinguishable from the lead bullets using the CT numbers (Fig. 7a and b). However, the 120/140 DEImax allowed a clear distinction between copper and lead bullets (Fig. 7c). All lead bullets presented DEI values below the threshold of 0.004, while all copper bullets had DEI values far above this threshold. The Action 4 bullet in case 6, which did not differ from the lead bullets according to its CT numbers, was located approximately 12 cm from the 10th thoracic vertebra. A closer look at this particular copper bullet revealed that beam hardening was very unevenly distributed along the hyperdense ring at the edge of the bullet (Fig. 8). Cinematic rendering of a cerebral non-perforating gunshot wound (case 1) obtained from a computed tomography (CT) examination of the decedent's head at 120 kVp (a). The entrance wound is located on the left side of the frontal bone. Numerous bone fragments and tiny metal fragments are scattered along the bullet path.
The bullet fragment is lodged in the occipital lobe above the posterior cranial fossa (a, arrowhead). Visual identification of the bullet was not feasible via CT due to its severe deformation (b). Maximum computed tomography (CT) numbers from the edge at 120 kVp (a) and at 140 kVp (b) and the 120/140 DEImax calculated from CT numbers obtained from edge measurements (c). Only the lodged Action 4 copper bullets in cases 2 and 5 demonstrated far lower maximum CT numbers than the lodged .22 LR lead bullets in cases 1, 3, and 4. However, the Action 4 copper bullet in case 6 did not differ from the lead bullets at either 120 kVp (a) or 140 kVp (b). At 120 kVp, values very close to the upper limit of the extended CT scale (ECTS), represented as a red dashed line, were reached for the Action 4 copper bullet. In contrast, all Action 4 copper bullets clearly differed from the .22 LR lead bullets in terms of the dual-energy index (DEI) and obtained DEIs above the threshold of 0.004, represented as a green dashed line, defined by the results of the animal cadaver study. Even the Action 4 copper bullet in case 6 clearly differed from the lead bullets in terms of the DEI, which was calculated from CT numbers that were not obviously different from those of lead bullets at 120 and 140 kVp. The Action 4 copper bullet in case 6 was lodged in the dorsal muscles (a). The computed tomography numbers at 120 kVp varied from those that are consistent with copper bullets (b, c: 18,490 HU) to those that are consistent with lead bullets (a, b: 30,600 HU). A high window centre can be used to illustrate the unevenly distributed beam hardening along the hyperdense ring at the edge of the bullet (c). Since the beam hardening artifact is pronounced in the diagonal vertical direction of the x-ray beam, the vertebral bone and x-ray scattering may have affected the intensity of beam hardening at the edge of this particular bullet. This study highlights some important factors that must be considered for the differentiation between copper and lead bullets according to their metallic components using CT. At 80, 100, 120, and 140 kVp, the CT numbers of copper bullets did not significantly differ from those of lead bullets, and using CT numbers alone can lead to false classifications. The ratio between CT numbers at two different energy levels, indicated by the DEI, is more suitable for distinguishing copper from lead bullets. For this ratio, the maximum CT numbers appeared to be more appropriate than the mean CT numbers. In the core, only the CT number ratios between low energies (80/100 and 80/120 DEImax) presented significant differences, but these ratios also presented large data overlaps and large standard deviations for the copper bullets. In contrast, at the edge, only the CT number ratios between high energies presented significant differences (100/120, 100/140, and 120/140 DEImax). Only the 120/140 DEImax obtained from edge measurements exhibited small standard deviations for both groups of bullets and a gap between the data of the two groups. The edge-based 120/140 DEImax was successfully introduced into postmortem imaging of deceased gunshot victims. Beam hardening occurs at the edge of a bullet or its fragment [18, 25]; thus, the measured CT numbers are not "real". However, while this physical effect barely affects the DEI, it considerably affects the CT number, as demonstrated in case 6 of the real forensic cases, where the CT numbers were unexpectedly high for a copper bullet.
Nonetheless, the DEI calculated from those unexpectedly high CT numbers still allowed clear classification of the bullet due to the slight decrease in the maximum CT numbers from 120 to 140 kVp. An increase or a decrease in CT numbers over all four energy levels is related to the atomic numbers (Z) and the K-edge energies of the individual metals (copper, Z = 29, K-edge = 8.9 keV; lead, Z = 82, K-edge = 88.0 keV) and the photoelectric effect [18]. Differentiation between bullets composed of metallic components with atomic numbers that are close together in the periodic table might be considerably more challenging using the DEI-based approach. Maximum CT numbers are usually not used since they are strongly affected by quantum and image noise. Therefore, using the same scanning and reconstruction parameters as well as the same volume CT dose index is important for repeated scans with two different energies. Under these conditions, the ratio between the maximum CT numbers within the same ROI at two different energy levels can be a robust indicator of the attenuation characteristics of metallic objects. Although the ROIs at the edge of the jacketed lead bullets very likely included some pixels/voxels located in the bullets' jackets, which are composed of copper-zinc alloy, the material of the jacket did not affect the identification of lead bullets. The DEIs of the jacketed lead bullets in the animal cadaver study did not noticeably differ from those of the unjacketed .22 LR bullets in the real forensic cases. Lead presents very high CT numbers; thus, the CT numbers of the less radiopaque metal in the jacket hardly affect the mean or maximum CT numbers obtained from ROI measurements. However, a jacket composed of copper-zinc alloy that is separated from the bullet, which occurred once in this study, can be differentiated from its unjacketed lead core or from other jacketed lead bullets. Core measurements are considered unreliable for differentiating between copper and lead bullets since large data overlaps were calculated and large standard deviations were observed for the copper bullets. Similar to CT numbers measured at the edge, CT numbers measured in the centre are not "real". Previous studies on material differentiation [20, 21, 26] used the mean CT numbers from the centre of the objects to distinguish between metallic foreign bodies. The authors pointed out that CT numbers must be measured far from near-surface regions to ensure reliable HU values and to avoid partial volume effects [20, 21, 26]. However, the CT number of a metallic object strongly decreases with the x-ray penetration depth. This cupping effect, which increases with the diameter of a bullet, was shown in a previous ex situ study on intact bullets [25]. Additionally, photon starvation increases with the size of a radiopaque object, indicating that the detector receives noisier information regarding the x-ray attenuation at the centre of the material. Consequently, the CT numbers obtained from a radiopaque material vary depending on the ROI position and the ROI size. Some limitations of this study should be noted. First, only bullets composed of copper or lead were investigated. However, these are the most frequently encountered types of bullets [27, 28]. Other metallic components, such as steel, bismuth, and tungsten, are much less often used for the bullet core, while steel is more frequently used for the jackets of bullets.
Regarding the distinction of ferromagnetic steel-jacketed bullets from non-ferromagnetic non-steel-jacketed bullets, different studies have yielded contradictory results [17, 29], since the type of metal used for the core of the bullet (usually copper or lead) was not considered [18, 29]. Notably, some bullets have different core metals at the point than at the body. Second, only a small number of bullets were used in this study, thereby limiting the statistical power of the results. Additionally, the ROIs at the edges of the bullets contained only a small number of pixels (see Fig. 2). Despite the small number of bullets used in the animal cadaver study and the small number of pixels in the ROIs, the selected 120/140 DEImax was successfully applied to decedents. Third, intra-observer agreement and inter-observer agreement were not tested in this study. However, a previous study [26] reported negligible observer variability for ROI measurements. Fourth, inter-scanner variability was not assessed in this study. Fifth, a potential benefit of using the dual-energy technique could not be assessed since the dual-energy data do not allow ECTS reconstructions. Additionally, CT numbers above the upper limit of the ECTS had to be excluded in this study. In conclusion, this study presents a viable approach for the in situ distinction of bullets composed of different metallic components according to their x-ray attenuation characteristics at two different energy levels. If the bullet geometry is not visually identifiable due to deformation or fragmentation, then in situ classification of bullets according to their metallic components can provide rapid information on the type of bullet. Since CT scanning is routinely performed on shooting victims in emergency hospitals [30, 31] and increasingly applied postmortem in forensic medicine [32,33,34], in situ distinction between copper and lead bullets is becoming increasingly feasible. DEI: Dual-energy index; ECTS: Extended CT scale; HU: Hounsfield unit Stein KM, Bahner ML, Merkel J, Ain S, Mattern R (2000) Detection of gunshot residues in routine CTs. Int J Legal Med 114:15–18. https://doi.org/10.1007/s004149900124 Gascho D, Marosi M, Thali MJ, Deininger-Czermak E (2020) Postmortem computed tomography and magnetic resonance imaging of gunshot wounds to the neck. J Forensic Sci. https://doi.org/10.1111/1556-4029.14311 Lantz PE, Jerome WG, Jaworski JA (1994) Radiopaque deposits surrounding a contact small-caliber gunshot wound. Am J Forensic Med Pathol 15:10–13. https://doi.org/10.1097/00000433-199403000-00003 Karger B, Hoekstra A, Schmidt PF (2001) Trajectory reconstruction from trace evidence on spent bullets. I. Deposits from intermediate targets. Int J Legal Med 115:16–22. https://doi.org/10.1007/s004140000202 Wunnapuk K, Minami T, Durongkadech P et al (2009) Discrimination of bullet types using analysis of lead isotopes deposited in gunshot entry wounds. Biol Trace Elem Res 129:278–289. https://doi.org/10.1007/s12011-008-8304-7 DiMaio VJMD (1999) Gunshot wounds: practical aspects of firearms, ballistics, and forensic techniques. CRC-Press, New York Marais AAS, Dicks HJ (2019) Utilization of x-ray computed tomography for the exclusion of a specific calibre and bullet type in a living shooting victim. J Forensic Sci 64:264–269. https://doi.org/10.1111/1556-4029.13805 Alves AM, Picoli FF, Silveira RJ et al (2020) When forensic radiology meets ballistics–in vivo bullet profiling with computed tomography and autopsy validation: a case report. Forensic Imaging.
https://doi.org/10.1016/j.fri.2020.200357 Dhillon MS, Dhatt SS (2012) First aid and emergency management in orthopedic injuries. JP Medical Ltd, New Delhi Makhlouf F, Scolan V, Ferretti G, Stahl C, Paysant F (2013) Gunshot fatalities: correlation between post-mortem multi-slice computed tomography and autopsy findings: a 30-months retrospective study. Leg Med (Tokyo) 15:145–148. https://doi.org/10.1016/j.legalmed.2012.11.002 Kirchhoff SM, Scaparra EF, Grimm J et al (2016) Postmortem computed tomography (PMCT) and autopsy in deadly gunshot wounds—a comparative study. Int J Legal Med 130:819–826. https://doi.org/10.1007/s00414-015-1225-z Gascho D, Zoelch N, Deininger-Czermak E et al (2020) Visualization and material-based differentiation of lodged projectiles by extended CT scale and the dual-energy index. J Forensic Lega Med. https://doi.org/10.1016/j.jflm.2020.101919 Padrta JC, Barone JE, Reed DM, Wheeler G (1997) Expanding handgun bullets. J Trauma 43:516–520. https://doi.org/10.1097/00005373-199709000-00022 Haag LC (2013) The forensic aspects of contemporary disintegrating rifle bullets. Am J Forensic Med Pathol 34:50–55. https://doi.org/10.1097/PAF.0b013e31827a05b7 Kaplan J, Klose R, Fossum R, Di Maio VJM (1998) Centerfire frangible ammunition: wounding potential and other forensic concerns. Am J Forensic Med Pathol 19:299–302. https://doi.org/10.1097/00000433-199812000-00001 Coupland R (1999) Clinical and legal significance of fragmentation of bullets in relation to size of wounds: retrospective analysis. BMJ 319:403–406. https://doi.org/10.1136/bmj.319.7207.403 Diallo I, Auffret M, Deloire L, Saccardy C, Aho S, Ben Salem D (2018) Is dual-energy computed tomography helpful to determinate the ferromagnetic property of bullets? J Forensic Radiol Imaging 15:21–25. https://doi.org/10.1016/j.jofri.2018.10.001 Gascho D, Zoelch N, Richter H, Buehlmann A, Wyss P, Schaerli S (2019) Identification of bullets based on their metallic components and x-ray attenuation characteristics at different energy levels on CT. AJR Am J Roentgenol 213:W105–W113. https://doi.org/10.2214/AJR.19.21229 Ognard J, Dissaux B, Diallo I, Attar L, Saccardy C, Ben Salem D (2019) Manual and fully automated segmentation to determine the ferromagnetic status of bullets using computed tomography dual-energy index: a phantom study. J Comput Assist Tomogr 43:799–804. https://doi.org/10.1097/RCT.0000000000000899 Bolliger SA, Oesterhelweg L, Spendlove D, Ross S, Thali MJ (2009) Is differentiation of frequently encountered foreign bodies in corpses possible by Hounsfield density measurement? J Forensic Sci 54:1119–1122. https://doi.org/10.1111/j.1556-4029.2009.01100.x Ruder TD, Thali Y, Bolliger SA et al (2012) Material differentiation in forensic radiology with single-source dual-energy computed tomography. Forensic Sci Med Pathol 9:163–169. https://doi.org/10.1007/s12024-012-9398-y Gascho D, Thali MJ, Niemann T (2018) Post-mortem computed tomography: technical principles and recommended parameter settings for high-resolution imaging. Med Sci Law 58:70–82. https://doi.org/10.1177/0025802417747167 Gascho D, Philipp H, Flach PM, Thali MJ, Kottner S (2018) Standardized medical image registration for radiological identification of decedents based on paranasal sinuses. J Forensic Lega Med 54:96–101. https://doi.org/10.1016/j.jflm.2017.12.003 Krauss B, Schmidt B, Flohr TG (2011) Dual source CT. In: Johnson T, Fink C, Schönberg SO, Reiser MF (eds) Dual energy CT in clinical practice. Springer, Berlin, Heidelberg. 
https://doi.org/10.1007/174_2010_44 Paulis LE, Kroll J, Heijnens L et al (2019) Is CT bulletproof? On the use of CT for characterization of bullets in forensic radiology. Int J Legal Med. https://doi.org/10.1007/s00414-019-02033-0 Ruder TD, Thali Y, Schindera ST et al (2012) How reliable are Hounsfield-unit measurements in forensic radiology? Forensic Sci Int 220:219–223. https://doi.org/10.1016/j.forsciint.2012.03.004 Gremse F, Krone O, Thamm M et al (2014) Performance of lead-free versus lead-based hunting ammunition in ballistic soap. PLoS One. https://doi.org/10.1371/journal.pone.0102015 Thomas VG (2013) Lead-free hunting rifle ammunition: product availability, price, effectiveness, and role in global wildlife conservation. Ambio 42:737–745. https://doi.org/10.1007/s13280-012-0361-7 Gascho D, Zoelch N, Schaerli S (2019) Explanation for the contradiction between the results of Diallo et al. (doi:10.1016/j.jofri.2018.10.001) and Winklhofer et al. (doi:10.1097/RLI.0000000000000032) in differentiating ferromagnetic from nonferromagnetic bullets by means of the dual-energy index. J Forensic Radiol Imaging. https://doi.org/10.1016/j.jofri.2019.100351 Reginelli A, Russo A, Maresca D, Martiniello C, Cappabianca S, Brunese L (2015) Imaging assessment of gunshot wounds. Semin Ultrasound CT MR 36:57–67. https://doi.org/10.1053/j.sult.2014.10.005 Serraino S, Milone L, Picone D, Argo A, Salerno S, Midiri M (2020) Imaging for ballistic trauma: other applications of forensic imaging in the living. In: Lo Re G, Argo A, Midiri M, Cattaneo C (eds) Radiology in Forensic Medicine: from Identification to post-mortem imaging. Springer International Publishing, Cham. https://doi.org/10.1007/978-3-319-96737-0_15 Decker SJ, Braileanu M, Dey C et al (2019) Forensic radiology: a primer. Acad Radiol 26:820–830. https://doi.org/10.1016/j.acra.2019.03.006 Cascini F, Polacco M, Cittadini F, Paliani GB, Oliva A, Rossi R (2019) Post-mortem computed tomography for forensic applications: a systematic review of gunshot deaths. Med Sci Law 60:54–62. https://doi.org/10.1177/0025802419883164 Gascho D, Tappero C, Zoelch N et al (2019) Synergy of CT and MRI in detecting trajectories of lodged bullets in decedents and potential hazards concerning the heating and movement of bullets during MRI. Forensic Sci Med Pathol 16:20–31. https://doi.org/10.1007/s12024-019-00199-y We thank Stephan Christen from the Forensic Institute in Zurich and Patrick Kircher from the Vetsuisse Faculty (University of Zurich) for endorsing the collaboration in this field of research. In addition, we are grateful to Emma Louise Kessler for her donation to the Zurich Institute of Forensic Medicine, University of Zurich, Switzerland. The authors state that this work has not received any funding. Department of Forensic Medicine and Imaging, Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland Dominic Gascho, Niklaus Zoelch, Michael J. 
Thali & Sarah Schaerli Department of Psychiatry, Psychotherapy and Psychosomatics, Hospital of Psychiatry, University of Zurich, Zurich, Switzerland Niklaus Zoelch Diagnostic Imaging Research Unit (DIRU), Clinic for Diagnostic Imaging, Vetsuisse Faculty, University of Zurich, Zurich, Switzerland Henning Richter Zurich Forensic Science Institute, Zurich Canton Police and Zurich City Police, Zurich, Switzerland Alexander Buehlmann & Philipp Wyss Institute of Forensic Medicine, Health Department Basel, University of Basel, Basel, Switzerland Sarah Schaerli Dominic Gascho Alexander Buehlmann Philipp Wyss Michael J. Thali DG, NZ, and SS designed the study. HR provided the animal cadaver models. AB performed the shooting experiments (on the animal cadaver models), and PW, DG, NZ, and HR helped with the shooting experiments. MJT provided the technical equipment. DG performed the measurements and wrote the original draft of the manuscript. All authors reviewed the final manuscript. Correspondence to Dominic Gascho. This study was performed with human cadavers. Ethical approval was waived by the responsible ethics committee of the Canton of Zurich (waiver number: 2015-0686). This article does not contain any studies with (living) human participants. No animals were killed for the scientific purposes of this study. The animal models used in this study were obtained from an institute of veterinary pathology. Fresh cadavers were used as an addition to another study and are in accordance with the 3Rs (replacement, reduction, and refinement)—the guiding principles for the ethical use of animals in science. Ethical approval was waived. The authors declare that they have no competing interests to report. Gascho, D., Zoelch, N., Richter, H. et al. Heavy metal in radiology: how to reliably differentiate between lodged copper and lead bullets using CT numbers. Eur Radiol Exp 4, 43 (2020). https://doi.org/10.1186/s41747-020-00168-z DOI: https://doi.org/10.1186/s41747-020-00168-z Tomography (x-ray computed), Wounds (gunshot)
CommonCrawl
The hypotenuse of a right triangle is 26 in and one leg is 10 in. What is the sum of the two shortest sides?
Find the missing leg of the right triangle when one of the legs is and the hypotenuse is .
Given that the hypotenuse of a right triangle is and one side is , find the other side length.
Explanation: To find the length of the missing side, you can either use the Pythagorean theorem or realize this is a case of a special right triangle with sides . Since the missing side corresponds to side , rewrite the Pythagorean Theorem and solve for .
Example - Problem 2: Two lines tangent to a circle at points M and N have a point of intersection A. A line through the center C of the circle and a point of tangency to the circle is perpendicular to the tangent line, hence the right angles at M and N in the figure below.
So far, we've only dealt with right triangles, but trigonometry can be easily applied to non-right triangles because any non-right triangle can be divided by an altitude* into two right triangles. Remember that an altitude is a line segment that has one endpoint at a vertex of a triangle and intersects the opposite side at a right angle. We could do the same derivation with the other two altitudes, drawn from angles A and C, to come up with similar relations for the other angle pairs. We could again do the same derivation using the other two altitudes of our triangle, to yield three versions of the law of cosines for any triangle.
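For readers who want to check the arithmetic, the short Python sketch below (not part of the original tutorial) works the first stated problem and wraps the law-of-cosines relation derived above in a small helper. The problems whose specific side lengths did not survive extraction are left as they are.

```python
import math

# Worked example from the text: hypotenuse = 26 in, one leg = 10 in.
# By the Pythagorean theorem, the other leg is sqrt(26^2 - 10^2).
hyp, leg_a = 26.0, 10.0
leg_b = math.sqrt(hyp**2 - leg_a**2)                     # 24.0 in
print("missing leg:", leg_b)
print("sum of the two shortest sides:", leg_a + leg_b)   # 34.0 in

# Law of cosines: c^2 = a^2 + b^2 - 2ab*cos(C), which reduces to the
# Pythagorean theorem when C = 90 degrees (cos C = 0).
def third_side(a: float, b: float, angle_c_deg: float) -> float:
    """Length of the side opposite angle C, given the two enclosing sides."""
    c_sq = a**2 + b**2 - 2 * a * b * math.cos(math.radians(angle_c_deg))
    return math.sqrt(c_sq)

print(third_side(10.0, 24.0, 90.0))   # 26.0, consistent with the right triangle above
```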
Use the law of cosines to find the missing measurements of the triangles in these two examples.
Hint: If you cut the equilateral triangle into two by bisecting an angle, you make two right triangles. It is best not to use a general formula that one does not have great control over when the problem is a very concrete one. If you do not know the "special angles" fact that $\tan(30^\circ)=\frac{1}{\sqrt{3}}$, you can use the calculator to evaluate $\tan(30^\circ)$ to high precision.
The size of angle MAN is equal to x degrees and the length of the radius of the circle is equal to r.
Capital letters are angles and the corresponding lower-case letters go with the side opposite the angle: side a (with length of a units) is across from angle A (with a measure of A degrees or radians), and so on. It's best to use the original known angle and side so that round-off errors or mistakes don't add up. In the first, the measures of two sides and the included angle (the angle between them) are known. Recall the apothem is the (length of) a perpendicular from the centre of the equilateral triangle to any one of the sides. Notice that we need to know at least one angle-opposite side pair for the Law of Sines to work.
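The hint about splitting the equilateral triangle and the tangent-line setup above can be turned into two small formulas. The Python sketch below is illustrative only; the exact quantities asked for in the original questions are not preserved here, so the functions simply encode the relations discussed: the side of an equilateral triangle from its apothem via $\tan(30^\circ)=\frac{1}{\sqrt{3}}$, and the tangent length and centre distance for the circle problem with angle MAN = x and radius r.

```python
import math

# Equilateral triangle cut into right triangles (the hint above): the apothem a,
# half of a side s/2, and the segment from a vertex to the side's midpoint form a
# right triangle with a 30-degree angle at the vertex, so tan(30) = a / (s/2),
# i.e. s = 2*a / tan(30) = 2*a*sqrt(3).
def equilateral_side_from_apothem(apothem: float) -> float:
    return 2.0 * apothem / math.tan(math.radians(30.0))

print(equilateral_side_from_apothem(1.0))   # ~3.464 = 2*sqrt(3)

# Tangents from point A touching the circle at M and N (angle MAN = x, radius r):
# CM is perpendicular to AM and AC bisects angle MAN, so in right triangle AMC
# tan(x/2) = r / AM and sin(x/2) = r / AC.
def tangent_length(r: float, x_deg: float) -> float:
    return r / math.tan(math.radians(x_deg / 2.0))

def centre_distance(r: float, x_deg: float) -> float:
    return r / math.sin(math.radians(x_deg / 2.0))

print(tangent_length(1.0, 60.0))    # sqrt(3) ~ 1.732
print(centre_distance(1.0, 60.0))   # 2.0
```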
CommonCrawl
Estimating the value of medical treatments to patients using probabilistic multi criteria decision analysis Henk Broekhuizen1, Catharina G. M. Groothuis-Oudshoorn1, A. Brett Hauber2, Jeroen P. Jansen3 & Maarten J. IJzerman1 Estimating the value of medical treatments to patients is an essential part of healthcare decision making, but is mostly done implicitly and without consulting patients. Multi criteria decision analysis (MCDA) has been proposed for the valuation task, while stated preference studies are increasingly used to measure patient preferences. In this study we propose a methodology for using stated preferences to weigh clinical evidence in an MCDA model that includes uncertainty in both patient preferences and clinical evidence explicitly. A probabilistic MCDA model with an additive value function was developed and illustrated using a case on hypothetical treatments for depression. The patient-weighted values were approximated with Monte Carlo simulations and compared to expert-weighted results. Decision uncertainty was calculated as the probability of rank reversal for the first rank. Furthermore, scenario analyses were done to assess the relative impact of uncertainty in preferences and clinical evidence, and of assuming uniform preference distributions. The patient-weighted values for drug A, drug B, drug C, and placebo were 0.51 (95 % CI: 0.48 to 0.54), 0.51 (95 % CI: 0.48 to 0.54), 0.54 (0.49 to 0.58), and 0.15 (95 % CI: 0.13 to 0.17), respectively. Drug C was the most preferred treatment and the rank reversal probability for first rank was 27 %. This probability decreased to 18 % when uncertainty in performances was not included and increased to 41 % when uncertainty in criterion weights was not included. With uniform preference distributions, the first rank reversal probability increased to 61 %. The expert-weighted values for drug A, drug B, drug C, and placebo were 0.67 (95 % CI: 0.65 to 0.68), 0.57 (95 % CI: 0.56 to 0.59), 0.67 (95 % CI: 0.61 to 0.71), and 0.19 (95 % CI: 0.17 to 0.21). The rank reversal probability for the first rank according to experts was 49 %. Preferences elicited from patients can be used to weigh clinical evidence in a probabilistic MCDA model. The resulting treatment values can be contrasted to results from experts, and the impact of uncertainty can be quantified using rank probabilities. Future research should focus on integrating the model with regulatory decision frameworks and on including other types of uncertainty. Decisions in healthcare policy regarding research portfolio management, market access, reimbursement and price-setting all depend (in part) on the added value of medical treatments for patients. This treatment valuation task is difficult because it has to be based on a large set of (possibly uncertain) clinical evidence and on subjective assessments of the desirability of clinical endpoints. Multi criteria decision analysis (MCDA) is a decision analytic modelling approach that has been used for such treatment valuation tasks [1, 2], primarily because it can support decision makers by structuring the available evidence [3, 4] and by guiding informed discussions through visualizations [5]. In MCDA, the decision goal (in our case, valuing treatments) is decomposed into a set of concrete and measurable criteria (in our case, clinical endpoints or treatment characteristics like mode of administration). The identification of this set of criteria can be done, for example, by interviewing patients and clinical experts. 
Then, the set of relevant decision options (termed alternatives) is defined. These are often a given in a treatment valuation task. Now that the structure of the MCDA model is built, two main inputs are required: criterion weights and performance scores. Criterion weights indicate the relative importance of criteria. Performance scores measure the experts' assessment of how well the alternatives perform on each of the criteria. Criterion weights and performance scores can be aggregated to come to an overall value of each included treatment [6]. This overall value can then be used to select a most preferred treatment, to rank treatments from best to worst, or to sort treatments into categories. Studies applying MCDA to the treatment valuation task can, for example, be found in the decision contexts of market access [7–9] and reimbursement [10–12]. These applications of MCDA have mostly used expert input to construct the criterion weights and performances scores. However, it has been argued that the patient perspective forms an essential part of treatment value [13–16]. In an MCDA framework this could be operationalized by letting patients set the criterion weights. One approach for this is to involve individual patient representatives in the decision making process, but a more representative approach would be to use stated preference methods to elicit preferences from a large group of patients [17, 18]. These patient preferences could then be used to weigh the available clinical evidence [19]. In that way, treatment value can be estimated from the patient's perspective in a transparent and representative manner. The results from such analyses could then be used as input for the decision makers' decision making process. In its simplest form, this combination of patient preferences with clinical evidence can be done deterministically. This would imply that the mean criterion weights and mean performance scores are used as input for the MCDA. However, including an assessment of uncertainty in a decision analysis would be advantageous because 1) it can help assess confidence in the outcomes of the model, 2) it can help ascertain the usefulness of performing additional research [20], and 3) can prevent bias in non-linear models [21]. Several approaches exist for taking into account uncertainty in MCDA. A recent review into uncertainty analysis approaches that are potentially useful for the specific context of healthcare identified deterministic sensitivity analysis, probabilistic sensitivity analysis, Bayesian frameworks, grey theory and fuzzy set theory [22]. The review concluded that deterministic sensitivity analysis is likely sufficient for most decisions in healthcare but that for decisions where the views of multiple stakeholders are combined or when uncertainty in multiple parameters is to be considered simultaneously, approaches that allow distributions (such as the probabilistic approach) would be more appropriate [22]. The treatment valuation task considered in this paper requires the combination of the views from multiple stakeholders (namely, a large group of patients) and requires the combination of uncertainty in multiple parameters (namely, all weights and performance scores), and the probabilistic approach is therefore adopted in this study. 
The aim of this study is to illustrate how patients' criterion weights derived from a stated preference study together with performance scores derived from clinical evidence can be used to value treatments from the patient's perspective, taking into account parameter uncertainty in both criterion weights and performance scores. A hypothetical case based on earlier studies concerning three antidepressants and placebo will be presented to illustrate the developed model. Its main outputs are patient-weighted treatment values with associated 95 % confidence intervals. It will be shown how the patient valuation can be contrasted to an expert-based valuation, and the utility of the developed modeling approach for practical decision making will be further illustrated by presenting the results from three scenario analyses.

Suppose \(I\) treatments have to be valued in an MCDA based on \(n\) criteria simultaneously. We define treatments with a higher value to be preferred to treatments with a lower value. The clinical performance of drug i on criterion k is denoted with \(\theta_{ik}\). The partial value function \(v_k(\theta)\) for criterion k maps the criterion-specific performance values \(\theta_{ik}\) onto a linear scale between 0 at a 'worst imaginable' performance of \(\theta_k^{-}\) and 1 at a 'best imaginable' performance of \(\theta_k^{+}\) for treatment i:

$$ v_k\left(\theta_{ik}\right)=\begin{cases} 1, & \text{if } \theta_{ik}\ge \theta_k^{+} \\ \dfrac{\theta_{ik}-\theta_k^{-}}{\theta_k^{+}-\theta_k^{-}}, & \text{if } \theta_k^{-}<\theta_{ik}<\theta_k^{+} \\ 0, & \text{if } \theta_{ik}\le \theta_k^{-} \end{cases} \quad (1) $$

The weights of the criteria are denoted with \(w_k\). These criterion weights indicate the relative importance of scale swings from \(\theta_k^{-}\) to \(\theta_k^{+}\) on a criterion, and should be estimated using the swing direct method or the MACBETH pairwise comparisons method [4]. To come to an overall value \(V_i\) for each treatment i, the partial values in this study are combined with an additive value function

$$ V_i=\sum_{k=1}^{n} w_k\, v_k\left(\theta_{ik}\right), \quad (2) $$

where it is assumed that the criteria are independent.

Taking into account uncertainty

We adopt a probabilistic framework, in which the uncertainty in the estimates for criterion weights and performance scores is represented with probability distributions [22]. The partial value functions and the overall values are stochastic variables with probability distributions that are complex combinations of the probability distributions for the weights and treatment performances. These are hard to calculate analytically and will therefore be approximated by applying a Monte Carlo simulation approach. In such an approach, for each simulation run t, weights \(w_{kt}\) and performances \(\theta_{ikt}\) are sampled from their respective probability distributions. Then, formulas 1 and 2 are used to come to partial values \(v_k(\theta_{ikt})\) and overall values \(V_{it}\). This process is then repeated a large number of times T. The main outcomes of the MCDA model are the mean overall value for each treatment, the value distributions for each treatment, and the ranking probabilities for each treatment. The mean overall value for treatment i is estimated with the posterior mean, that is \(V_i=\frac{\sum_t V_{it}}{T}\). The value distribution of treatment i is the empirical distribution of all \(V_{it}\).
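To make formulas 1 and 2 concrete, the following minimal Python sketch re-expresses the partial value function and the additive aggregation. The study's own model was implemented in R; the function names and example numbers below are ours and are not the study data.

```python
import numpy as np

def partial_value(theta, worst, best):
    """Linear partial value function (formula 1), clipped to [0, 1].
    Works both for criteria to be maximized (worst < best) and for
    criteria to be minimized (worst > best)."""
    v = (theta - worst) / (best - worst)
    return np.clip(v, 0.0, 1.0)

def overall_value(thetas, weights, worst_levels, best_levels):
    """Additive value function (formula 2): weighted sum of partial values."""
    partials = [partial_value(t, lo, hi)
                for t, lo, hi in zip(thetas, worst_levels, best_levels)]
    return float(np.dot(weights, partials))

# Illustrative numbers only: response, remission, adverse events and severe
# adverse events, all expressed as probabilities between 0 and 1.
worst = [0.0, 0.0, 1.0, 1.0]   # benefits worst at 0 %, harms worst at 100 %
best  = [1.0, 1.0, 0.0, 0.0]
weights = [0.4, 0.3, 0.2, 0.1]                 # criterion weights, summing to 1
performance_drug = [0.55, 0.40, 0.30, 0.01]    # hypothetical performance scores
print(overall_value(performance_drug, weights, worst, best))
```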
Rank probabilities are calculated by ranking treatments in descending order on their overall value each Monte Carlo simulation run. We define r xi as the amount of Monte Carlo simulation runs were treatment i attains rank x. Then, treatment i's rank probability for rank x is \( \frac{r_{xi}}{T} \). The probability that the treatment with the highest mean value is not ranked first is used as a measure of decision uncertainty. It is calculated as follows: terming treatment j the treatment with the highest mean value, the probability that this treatment is not ranked first is \( 1-\frac{r_{1j}}{T} \). Illustration using a case The model is illustrated with a case on treatments for severe depression. As can be seen in Fig. 1, the included treatments are compared on four criteria: response, remission, adverse events and severe adverse events. Response is defined as the probability of an acute 50 % reduction in depression symptoms as measured on a depression scale such as the Hamilton rating scale [23] for depression or the Montgomery Asberg depression rating scale [24]. Remission is defined as the yearly probability that depressive symptoms are reduced for such a time such that a patient can be considered to have recovered from an acute depressive episode. Adverse events considered are (yearly probability of) sexual dysfunction, hypertension, restlessness, sedation, dizziness, nausea, dry mouth, sweating and weight increase. Severe adverse events considered are (yearly probability of) suicide and other events that lead to death, threat to life, permanent/severe disability or hospitalization. Decision structure used in the illustrative case. Starting from the top, there is the decision goal (assessing value), that can be operationalized with four criteria. The relative importance of the criteria is indicated by the criterion weights and the plus or minus indicates if the criterion is to be maximized or minimized. The performance of the four decision alternatives at the bottom on the criteria is determined with performance scores. Note that for clarity only the arrows showing the performance scores for drug A are shown The response and remission criteria are to be maximized, with the best level θ + defined as 100 % and the worst level θ − as 0 %. The adverse events and severe adverse events are to be minimized, with the best level θ + defined as 0 % and the worst level θ − as 100 %. This means that, if for a patient the weight for response is 0.4 and the weight for remission is 0.2, that patient considers the swing from 0 % probability of response to 100 % probability of response to be twice as important as the swing from 0 % probability of remission to 100 % probability of remission when choosing an antidepressant. For illustration purposes hypothetical preference and performance datasets were used. A patient criterion weight sample of 100 patients was constructed by bootstrapping the results from a patient panel held in an earlier elicitation study [25]. In that study, weights were also elicited from five clinical experts. These are included in our study for comparison with patient preferences. Three hypothetical antidepressants and placebo are included. We define drugs "A" and "B" as the currently used drugs. They are assumed to have moderate effectiveness and side effects. We assume that a large number of clinical trials have been performed over the years for drugs A and B and that therefore there is only minor uncertainty surrounding their clinical performance. 
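Before turning to the case data, the rank-probability bookkeeping defined at the start of this section can be written compactly. The sketch below assumes the simulated overall values are stored as a T × I array; the array names and toy numbers are illustrative, not the study output.

```python
import numpy as np

def rank_probabilities(V):
    """V: array of shape (T, I) with simulated overall values V_it.
    Entry [i, x] of the result is the probability that treatment i
    attains rank x+1 (rank 1 = highest value), i.e. r_xi / T."""
    T, I = V.shape
    order = np.argsort(-V, axis=1)            # per run: indices sorted by descending value
    ranks = np.empty_like(order)
    ranks[np.arange(T)[:, None], order] = np.arange(I)
    probs = np.zeros((I, I))
    for i in range(I):
        for x in range(I):
            probs[i, x] = np.mean(ranks[:, i] == x)
    return probs

def first_rank_reversal(V):
    """Probability that the treatment with the highest mean value is NOT ranked first."""
    j = int(np.argmax(V.mean(axis=0)))
    return 1.0 - rank_probabilities(V)[j, 0]

# toy example with three alternatives and T = 5 runs
V = np.array([[0.52, 0.50, 0.55],
              [0.51, 0.53, 0.49],
              [0.50, 0.48, 0.56],
              [0.53, 0.52, 0.54],
              [0.49, 0.51, 0.46]])
print(rank_probabilities(V))
print(first_rank_reversal(V))
```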
We assume there is a new drug "C" that is potentially much more effective than the conventional drugs. However, we assume that due to its novelty only a small number of patients have been enrolled in clinical trials. This means there still is considerable uncertainty regarding its actual clinical performance. It is assumed that placebo provides almost no effectiveness and that it is associated with very little adverse events. It is assumed that all clinical trials ran for one year. An overview of the datasets for preferences and clinical performances is presented in Table 1. Table 1 Hypothetical dataset used in the case study Table 2 Model outcomes: overall scores and rank probabilities In each simulation run t, criterion weights w kt are obtained by using a bootstrap resampling method. This means that for each run a bootstrap sample of 100 cases of the weight dataset was drawn with replacement. Because the clinical performances of drugs are proportions, performance samples θ ikt are assumed to be distributed with a Beta distribution [21]. Beta distributions require two parameters: α 1 and α 2. We used the number of events as α 1 and the sample size of the study minus the number of events as α 2. This ensures that the expected value of the distribution is the event's probability, and that the variance of the distribution is inversely related to the trial sample size. After sampling from the Beta distributions, v kt (θ ikt ) and V it are calculated using Formula's 1 and 2. In total, T = 10,000 Monte Carlo simulations are performed. The 95 % confidence interval for each treatment's value was estimated with the 2.5 %th and 97.5 %th quantiles of its V it from the simulation output. The model was programmed in R [26]. Several scenario analyses will be performed. First of all, a model that uses only the mean criterion weights and mean performance scores will be run. Then, the impact of uncertainty will be explored by running separate Monte Carlo simulations with 1) only uncertainty in criterion weights (that is, fixing the performances at their mean values while varying the weights as in the base case), 2) only uncertainty in performance scores (that is, fixing the weights at their mean values while varying the performances as in the base case), or 3) uniform probability distributions for criterion weights (keeping the sum of weights constant at one, and varying the performance scores as in the base case). Patient and expert valuations of drugs When using a deterministic model, that is, setting both criterion weights and performance scores to their mean values, the overall scores for drug A, drug B, drug C and placebo are 0.51, 0.51, 0.54, and 0.15, respectively for patients. For experts, the overall scores for drug A, drug B, drug C and placebo would be 0.67, 0.57, 0.67, and 0.19, respectively. This suggests that drug C has the highest value for patients and drugs A and C seem the most valuable treatment according to experts. Although this is already an insightful result, we cannot assess the confidence of these valuation statements. Taking into account uncertainty as described in the Methods section gives us more insight into the treatment valuation by patients and experts (Fig. 2). Note the more spread out probability density for the value of drug C which indicates that its value is more uncertain than that of drugs A and B. Drug C seems still to be the most valuable treatment to patients with a probability of being ranked first of 73 %. Placebo still has the last rank, as would be expected. 
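A minimal sketch of the sampling scheme described above: each run bootstraps the patient weight sample (we take the mean of the bootstrap sample as that run's weight vector and renormalize it to sum to one, which is our reading of the bootstrap step), draws performances from Beta(events, sample size − events), and evaluates formulas 1 and 2. The paper's implementation was in R; this Python version and its synthetic example are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_values(weight_sample, events, n_trial, worst, best, T=10_000):
    """Monte Carlo approximation of the treatment value distributions.

    weight_sample : (P, K) elicited weights from P patients
    events, n_trial : (I, K) event counts and trial sizes per treatment/criterion
    worst, best     : (K,) worst/best imaginable levels per criterion
    """
    P, K = weight_sample.shape
    I = events.shape[0]
    V = np.empty((T, I))
    for t in range(T):
        boot = weight_sample[rng.integers(0, P, size=P)]   # bootstrap of patients
        w = boot.mean(axis=0)
        w = w / w.sum()                                    # keep weights summing to 1
        theta = rng.beta(events, n_trial - events)         # (I, K) performance draws
        partial = np.clip((theta - worst) / (best - worst), 0.0, 1.0)
        V[t] = partial @ w                                 # additive value per treatment
    return V

# tiny synthetic example (2 treatments, 2 criteria, 10 patients)
W = rng.dirichlet([2, 2], size=10)
events = np.array([[40, 5], [55, 9]])
n_trial = np.array([[100, 100], [120, 120]])
V = simulate_values(W, events, n_trial,
                    worst=np.array([0.0, 1.0]), best=np.array([1.0, 0.0]), T=2000)
print(V.mean(axis=0))                      # mean overall values
print(np.percentile(V, [2.5, 97.5], axis=0))   # 95 % intervals from the quantiles
```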
There is considerable uncertainty in the treatment values: there is a 27 % probability that drug C turns out to not be the most valuable to patients. Furthermore, there is considerable decision uncertainty as to the second most valuable drug (r 2 A = 37 % (and r 2 B = 47 %). The clinical experts' results incorporating uncertainty show that drugs A and C that both have a score of 0.67. The first rank probabilities for drug A and drug C are 51 % and 49 %, respectively. This means there is clinical equipoise between drugs A and C according to experts. Drug B is ranked third in all simulations with a score of 0.57 and placebo is again ranked last in all simulations with a score of 0.19. The impact of patient preferences as opposed to clinical experts is thus that while patients seem certain that drug C has the highest value, experts consider drugs A and C to be equally valuable. An overview of drug values and rank probabilities can be found in Table 2. Probability density estimation plot (Gaussian kernel estimation using the density function in R) of the model results for when patient preferences are used. Red = Drug A, green = Drug B, Blue = drug C and purple = placebo. Treatment value distributions in base case Overview of the overall values of the included treatments. From left to right: patients (with uncertainty in weights and performance scores), experts (with uncertainty in performance scores), patients (with uncertainty in weights but no uncertainty in performance scores), patients (with uncertainty in performance scores but no uncertainty in weights), uniformly distributed weights (with uncertainty in performance). The error bars indicate the 95 % confidence intervals. Pts = Patients, Plc = Placebo Impact of uncertainty The results from the scenario analyses as compared to the base case are presented in Figure 3. When uncertainty in either patient-assigned criterion weights or performance scores is ignored (that is, set to their mean values), the point estimates for all four drugs remain similar. However, the confidence intervals of the drugs become smaller. This can be seen also in the ranking probabilities, which are higher for each rank. The rank reversal probabilities for first rank decreases to 18 % when uncertainty in performances is not taken into account and increases to 41 % when uncertainty in criterion weights is not taken into account. This means that in the case performance uncertainty seems to have a larger impact than preference uncertainty on the confidence with which the most valuable drug is chosen. Finally, using a uniform distribution for criterion weights induced a very large variation in drug scores and consequently, a high rank reversal probability for the first rank (61 %). This large variation in scores is logical because the criterion weights vary between 0 and 1, whereas in the other scenarios there is much less variation in sampled weights (Table 1). In this paper we have demonstrated a probabilistic multi-criteria approach to determine the patient-weighted value of treatments. The MCDA model developed for this purpose takes into account the parameter uncertainty surrounding both the elicited preferences and clinical trial data. The model was illustrated using a hypothetical case on three antidepressant treatments and placebo. In the case the patient-weighted treatment values are considerably different from the expert-weighted values. 
Furthermore, the rank order of treatments is still uncertain for patients and experts (as reflected in the rank reversal probabilities). Scenario analyses showed that in this case decision uncertainty seems to depend more on uncertainty in clinical evidence than on uncertainty in patient preferences. Finally, adopting uniform criterion weight distributions lead to the most decision uncertainty, as reflected in the high probabilities of rank reversal. Comparison to earlier work Our MCDA model builds on and combines characteristics from earlier approaches for evidence gathering and evidence synthesis. First of all, the model structure and value functions are based on value-based MCDA [3, 7]. There various "families" of MCDA methods, each with their own (dis) advantages. In this paper a value-based method based on multi-attribute value theory was used. The main advantages of this method are its strong foundation in decision theory [27] and the ease of weight elicitation (which is especially relevant when patient preferences are used). Secondly, preference data from stated preference methods can be included in the model, allowing the incorporation of uncertainty around patient preferences. Thirdly, this uncertainty was combined with uncertainty around clinical performance estimates using Monte Carlo simulation methods. Although there have been other methods to combine patient preferences and clinical trial data in the context of healthcare policy, these are mainly limited by not practically taking into account multiple (concurrent) events and/or uncertainty around preferences [28–32]. Stochastic multi-criteria acceptability analysis (SMAA) also combines preference data with clinical trial data, but a non-informative (uniform) distribution or a single rank order of criteria is used for preferences [19, 33]. A similar approach is adopted by Caster et al. who include a rank order of criteria importances based on qualitative information on utilities [34]. Although both SMAA and the method by Caster et al. can include information about patient preferences, only including rank orders of criteria would preclude decision makers from considering the rich information on patient preferences yielded by stated preference studies. Applicability and advantages of the model The treatment valuation task considered in this paper forms only one ingredient of healthcare policy decisions. This is because there is a distinction between a patient's preferences and values, the patient's health-related behavior, and the actual implementation of a decision in the context of a specific healthcare system. Although these concepts are clearly linked, the main distinction is that behavior and outcomes may or may not be in line with a patient's preferences, depending on constraints concerning the patient's circumstances, his/her behavior, and/or the context of the specific healthcare system. After establishing the value of treatments to patients using our model, further modelling work, e.g. with dynamic (system) simulation models [35], or fuzzy cognitive maps to estimate patients' behaviors [36], may support decision makers design policies that are best in line with the patient's preferences. 
On a physician-patient interaction level where (for example) prescription decisions are made, decision aids (based on MCDA for example [37]) that help patients think about their preferences and the treatment options may be valuable, but the probabilistic modelling framework adopted in this study may be prohibitive with regard to time constraints. Although this was a demonstration and not an empirical comparison of the model to other modes of decision making, we believe the presented approach may have several advantages for decision makers seeking to do a treatment valuation task as part of their decision making process. First of all, the adopted MCDA approach can help decision makers to structure the available preference data and clinical evidence and can help them assess the impact of preferences on the overall value of treatments. The present study adds to this the explicit inclusion and combination of patient preferences and clinical evidence. Furthermore, to account for uncertainty in both preferences and clinical data, the flexible probabilistic approach is adopted. These two additions may give decision makers more insight into 1) the influence of patient preferences on treatment value, and 2) into the impact of uncertainty in both preferences and clinical data on the decision. A final advantage is that because of the explicit use of evidence and the use of visualizations, decision makers can use the model to communicate their decision (argumentation) to stakeholders. This can be especially true for communicating a decision to patients because patient preferences are explicitly used. Our model considers mainly the evidence-based treatment valuation task, whereas a complete regulatory decision making process in healthcare has much more steps. Therefore, real-world applications of our model would require it to be applied in the context of an overarching decision making framework that guides the decision making process from problem definition to final decision. One such framework is PrOACT-URL, which structures the decision making process with the following phases: problem, objective, alternatives, consequences, trade-off, uncertainty, risk tolerance and linked decisions [38]. For an inclusion of the developed model in PrOACT-URL, criterion weights should be elicited from patients in the process after the definition of the 'effects table' in the consequences phase. These could be combined with the clinical evidence according to the model developed in this study to help decision makers assess the benefit-risk balance from a patient's perspective in the trade-offs phase of PrOACT-URL and to guide the discussion in the uncertainty phase that follows the 'trade-offs' phase. We argue that the inclusion of our proposed model into frameworks such as PrOACT-URL would be most useful when it is judged by decision makers that the decision to make is characterized by uncertain clinical evidence and/or uncertain patient preferences. Given the explicit use of elicited patient preference, decision makers seeking to apply our model should be aware of remaining normative issues regarding the use of elicited patient preferences in real world decision contexts. These are: whose preferences should be elicited, who should perform the preference elicitation study, and what stated preference method should be used. 
Aspects of the decision making process that may change in real world policy decision contexts compared to our simple illustrative case, are that more patients could be involved and that more criteria (not all relating directly to the patient experience) may be considered relevant by the decision makers. Given previous experience with performing large patient preference studies [17, 18, 39] and experience with using MCDA to consider large amounts of criteria [3, 40], it is reasonable to expect that our model can be extended to real world use. Furthermore, even if the real-life decision involves other criteria requiring other normative judgments outside the patient experience (such as societal willingness to pay), it is possible to construct an MCDA model that includes the preferences of multiple stakeholder groups. The relative weight of the preferences of these (and potentially other) stakeholder groups can then be weighed by the decision makers, who make the final decision after considering the outcomes of the evidence synthesis as facilitated by the MCDA model presented in this study. Finally, in the case of benefit-risk assessments decision makers may be reluctant to aggregate benefits and risks into one score [29]. In that case, decision makers could elect to model benefits and risks in separate MCDA models and use the results during the assessment of the benefit-risk assessment. Limitations of the model and opportunities for further research Our model has some limitations. First of all, a multi-attribute method with a simple additive value function as aggregation method was used in this study. Although more complex methods are known, adopting such an aggregation method may imply that the elicitation questions become too hard for patients to understand. Independency of criteria is assumed in our MCDA model. In real-world applications of the model this requires great care to be taken when the decision model is built together with the stakeholders since it is essential that the included criteria comply with the assumptions in the MCDA model. Future studies could use modeling strategies for example with joint distributions of preference parameters such that the independency assumption can be relaxed. Another limitation in this study was that the overall treatment value was assumed to scale linearly with the criterion weights. Since the lower and upper levels were 0 % and 100 % for all criteria in the case, this implied that criterion weights reflected the relative importance of events and that (often reported) non-linear preferences for probabilities could not be incorporated, although methods are known for eliciting non-linear value functions from respondents (e.g. the bisection method [27]). There are several categories of uncertainty in MCDA [22]. In this study, only parameter uncertainty was considered, while patient-specific preference variation and patient-specific variation in outcomes is increasingly becoming important in light of recent developments in personalized medicine [41]. A final and practical limitation is that the process of gathering relevant data on patient preference and clinical evidence, as well as building the model can be time-consuming. What MCDA methods to use in real-world applications of the presented model should be the topic of future research. 
Aspects we believe are important include the type of patient preferences that are to be elicited (since these need to match the MCDA method [42]), the preferable type of clinical evidence and specific decision maker needs. It may be useful to look into experiences in other disciplines where there is a longer history of using MCDA to support decision makers (see e.g. [43–46]). In conclusion, we have developed a novel approach to estimate the value of treatments from the patient perspective using a probabilistic MCDA model. The model was illustrated with a case on antidepressants. The model can provide insight into the patient-weighted value of treatments and how this may differ from an expert's assessment. It also can provide insight into the impact of uncertainty that still surrounds the value of treatments. Future work will need to address patient-specific variation and the feasibility of the modeling approach in practical applications (specifically in existing regulatory decision making frameworks). MACBETH: measuring attractiveness by a categorical based evaluation technique MCDA: Multi criteria decision analysis PrOACT-URL: problem, objective, alternatives, consequences, trade-off, uncertainty, risk tolerance and linked decisions SMAA: stochastic multicriteria acceptability analysis Diaby V, Campbell K, Goeree R. Multi-criteria decision analysis (MCDA) in health care: a bibliometric analysis. Oper Res Heal Care. 2013;2:20–4. doi:10.1016/j.orhc.2013.03.001. Marsh K, Lanitis T, Neasham D, Orfanos P, Caro J. Assessing the value of healthcare interventions using multi-criteria decision analysis: a review of the literature. Pharmacoeconomics. 2014;32:1–21. Belton V, Stewart TJ. Multiple criteria decision analysis: an integrated approach. 2nd ed. Dordrecht: Kluwer Academic Publishers; 2002. Dodgson JS, Spackman M, Pearman A, Phillips L. Multi-criteria analysis: a manual. Department for Communities and Local Government; 2009. http://eprints.lse.ac.uk/12761/. Accessed 30 Nov 2015. Hughes D, Waddingham E, Mt-isa S, Goginsky A, Chan E, Downey G, et al. Recommendations for the methodology and visualisation techniques to be used in the assessment of benefit and risk of medicines. IMI-PROTECT. http://www.imi-protect.eu/benefitsRep.shtml. Accessed 30 Nov 2015. Ishizaka A, Nemery P. Multi-criteria decision analysis: methods and software. 1st ed. John Wiley & Sons Ltd; 2013. doi:10.1002/9781118644898. Mussen F, Salek S, Walker S. A quantitative approach to benefit-risk assessment of medicines — part 1: the development of a new model using multi-criteria decision analysis. Pharmacoepidemiol Drug Saf. 2007;16:S2–S15. doi:10.1002/pds. Felli JC, Noel RA, Cavazzoni PA. A multiattribute model for evaluating the benefit-risk profiles of treatment alternatives. Med Decis Mak. 2009;29:104–15. doi:10.1177/0272989X08323299. Hummel JM, Bridges JFP, IJzerman MJ. Group decision making with the analytic hierarchy process in benefit-risk assessment: a tutorial. Patient. 2014;7:129–40. doi:10.1007/s40271-014-0050-7. Diaby V, Goeree R. How to use multi-criteria decision analysis methods for reimbursement decision-making in healthcare: a step-by-step guide. Expert Rev Pharmacoecon Outcomes Res. 2014;14:81–99. doi:10.1586/14737167.2014.859525. Goetghebeur MM, Wagner M, Khoury H, Levitt RJ, Erickson LJ, Rindress D. Bridging health technology assessment (HTA) and efficient health care decision making with multicriteria decision analysis (MCDA): applying the EVIDEM framework to medicines appraisal. Med Decis Mak. 
2012;32:376–88. doi:10.1177/0272989X11416870. Tony M, Wagner M, Khoury H, Rindress D, Papastavros T, Oh P, et al. Bridging health technology assessment (HTA) with multicriteria decision analyses (MCDA): field testing of the EVIDEM framework for coverage decisions by a public payer in Canada. BMC Health Serv Res. 2011;11:329. doi:10.1186/1472-6963-11-329. van Til JA, IJzerman MJ. Why Should Regulators Consider Using Patient Preferences in Benefit-risk Assessment? Pharmacoeconomics 2013;10–3. doi:10.1007/s40273-013-0118-6. Facey K, Boivin A, Gracia J, Hansen HP, Lo Scalzo A, Mossman J, et al. Patients' perspectives in health technology assessment: a route to robust evidence and fair deliberation. Int J Technol Assess Health Care. 2010;26:334–40. doi:10.1017/S0266462310000395. Bridges JFP, Jones C. Patient-based health technology assessment: a vision of the future. Int J Technol Assess Health Care. 2007;23:30–5. doi:10.1017/S0266462307051549. MDIC PCBR project group members. A framework for incorporating information of patient preferences regarding benefit and risk into regulatory assessments of new medical technology. Medical Device Innovation Consortium; 2015. http://mdic.org/PCBR/. Accessed 30 Nov 2015. Brett Hauber A, Fairchild AO, Reed JF. Quantifying benefit-risk preferences for medical interventions: an overview of a growing empirical literature. Appl Health Econ Health Policy. 2013;11:319–29. doi:10.1007/s40258-013-0028-y. Article CAS PubMed Google Scholar Weernink MGM, Janus SIM, van Til J a, Raisch DW, van Manen JG, IJzerman MJ. A Systematic Review to Identify the Use of Preference Elicitation Methods in Healthcare Decision Making. Pharmaceut Med 2014. doi:10.1007/s40290-014-0059-1. van Valkenhoef G, Tervonen T, Zhao J, de Brock B, Hillege HL, Postmus D. Multicriteria benefit-risk assessment using network meta-analysis. J Clin Epidemiol. 2012;65:394–403. doi:10.1016/j.jclinepi.2011.09.005. Briggs AH, Weinstein MC, Fenwick EAL, Karnon J, Sculpher MJ, Paltiel AD. Model parameter estimation and uncertainty: a report of the ISPOR-SMDM modeling good research practices task force-6. Med Decis Making. 2012;32(5);722-33. doi:10.1177/0272989X12458348. Claxton K. Exploring Uncertainty in Cost-Effectiveness Analysis. Pharmacoeconomics. 2008;26(9):781–98. Broekhuizen H, Groothuis-Oudshoorn C, van Til J, Hummel M, IJzerman M. A review and classification of approaches for dealing with uncertainty in multi-criteria decision analysis for healthcare decisions. Pharmacoeconomics. 2015;33:445–55. Hamilton M. A rating scale for depression. Neurol Neurosurg Psychiatry. 1960;23:56–62. Article CAS Google Scholar Montgomery S, Asberg M. A new depression scale designed to be sensitive to change. Br J Psychiatry. 1979;134:382–9. Hummel JM, Volz F, van Manen JG, Danner M, Dintsios CM, IJzerman MJ, et al. Using the analytic hierarchy process to elicit patient preferences. Patient. 2012;5:1–13. R Development Core Team. R: A Language and Environment for Statistical Computing 2012. http://www.rproject.org. Accessed 30 Nov 2015. Keeney R, Raiffa H. Decisions with multiple objectives. Cambridge: Cambridge University Press; 1976. Holden W. Benefit-risk analysis: A brief review and proposed quantitative approaches. Drug Saf. 2003;26:853–62. Shaffer ML, Watterberg KL. Joint distribution approaches to simultaneously quantifying benefit and risk. BMC Med Res Methodol 2006;6. doi:10.1186/1471-2288-6-48. Lynd LD, Najafzadeh M, Colley L, Byrne MF, Willan AR, Sculpher MJ, et al. 
Using the incremental net benefit framework for quantitative benefit-risk analysis in regulatory decision-making - a case study of alosetron in irritable bowel syndrome. Value Heal. 2009;13:1–7. doi:10.1111/j.1524-4733.2009.00595.x. Lynd LD, Marra CA, Najafzadeh M, Sadatsafavi M. A quantitative evaluation of the regulatory assessment of the benefits and risks of rofecoxib relative to naproxen: an application of the incremental net-benefit framework. Pharmacoepidemiol Drug Saf. 2010;19:1172–80. doi:10.1002/pds. Wen S, Zhang L, Yang B. Two approaches to incorporate clinical data uncertainty into multiple criteria decision analysis for benefit-risk assessment of medicinal products. Value Heal. 2014;17:619–28. doi:10.1016/j.jval.2014.04.008. Tervonen T, van Valkenhoef G, Buskens E, Hillege HL, Postmus D. A stochastic multicriteria model for evidence-based decision making in drug benefit-risk analysis. Stat Med. 2011;30:1419–28. doi:10.1002/sim.4194. Caster O, Norén GN, Ekenberg L, Edwards IR. Quantitative benefit-risk assessment using only qualitative information on utilities. Med Decis Mak. 2012;32:E1–E15. doi:10.1177/0272989X12451338. Marshall DA, Burgos Liz L, Eng II, Ijzerman MJ, Osgood ND, Padula WV, et al. Applying dynamic simulation modeling methods in health care delivery research — the simulate checklist : report of the ispor simulation modeling emerging good practices task force. Value Heal. 2015;18:5–16. doi:10.1016/j.jval.2014.12.001. Giabbanelli PJ, Crutzen R. Creating groups with similar expected behavioural response in Randomized Controlled Trials: a fuzzy cognitive map approach. BMC Med Res Methodol. 2014;14:130. Dolan JG. Shared decision-making--transferring research into practice: the Analytic Hierarchy Process (AHP). Patient Educ Couns. 2008;73:418–25. doi:10.1016/j.pec.2008.07.032. Zafiropoulos N, Phillips L, Pignatti F, Luria X. Evaluating benefit-risk: An agency perspective. Regul Rapp. 2012;9:5–8. Groothuis Oudshoorn CG, Fermont JM, Van Til JA, Ijzerman MJ. Public stated preferences and predicted uptake for genome-based colorectal cancer screening. BMC Med Inform Decis Mak. 2014;14:18. doi:10.1186/1472-6947-14-18. EMA. Work package 2 report: Applicability of current tools and processes for regulatory benefit-risk 2011;44:1–33. http://www.ema.europa.eu/docs/en_GB/document_library/Report/2010/10/WC500097750.pdf. Accessed 30 Nov 2015. Rogowski W, Payne K, Schnell-Inderst P, Manca A, Rochau U, Jahn B, et al. Concepts of "personalization" in personalized medicine: implications for economic evaluation. Pharmacoeconomics. 2015;33:49–59. doi:10.1007/s40273-014-0211-5. Choo EU, Schoner B, Wedley WC. Interpretation of criteria weights in multicriteria decision making. Comput Ind Eng. 1999;37:527–41. doi:10.1016/S0360-8352(00)00019-X. Huang IB, Keisler J, Linkov I. Multi-criteria decision analysis in environmental sciences: Ten years of applications and trends. Sci Total Environ. 2011;409:3578–94. doi:10.1016/j.scitotenv.2011.06.022. Wang J-J, Jing Y-Y, Zhang C-F, Zhao J-H. Review on multi-criteria decision analysis aid in sustainable energy decision-making. Renew Sustain Energy Rev. 2009;13:2263–78. doi:10.1016/j.rser.2009.06.021. Steuer RE, Na P. Multiple criteria decision making combined with finance: A categorized bibliographic study. Eur J Oper Res. 2003;150:496–515. doi:10.1016/S0377-2217(02)00774-9. Ho W, Xu X, Dey PK. Multi-criteria decision making approaches for supplier evaluation and selection: A literature review. Eur J Oper Res. 2010;202:16–24. 
doi:10.1016/j.ejor.2009.05.009. Department of Health Technology and Services Research, MIRA Institute, University of Twente, Enschede, The Netherlands Henk Broekhuizen, Catharina G. M. Groothuis-Oudshoorn & Maarten J. IJzerman RTI Health Solutions, Research Triangle Park, NC, USA A. Brett Hauber Department Public Health and Community Medicine, School of Medicine, TUFTS University, Boston, MA, USA Jeroen P. Jansen Henk Broekhuizen Catharina G. M. Groothuis-Oudshoorn Maarten J. IJzerman Correspondence to Henk Broekhuizen. HB designed the study, ran the statistical analysis and drafted the manuscript. CK assisted in the statistical analyses and helped to draft the manuscript. AH and JJ helped to draft the manuscript. MIJ initiated the study and helped draft the manuscript. All authors participated in the interpretation of the data, revised the manuscript critically for intellectual content, and read and approved the final manuscript. Broekhuizen, H., Groothuis-Oudshoorn, C.G.M., Hauber, A.B. et al. Estimating the value of medical treatments to patients using probabilistic multi criteria decision analysis. BMC Med Inform Decis Mak 15, 102 (2015). https://doi.org/10.1186/s12911-015-0225-8 Monte Carlo simulations Probabilistic models Standards, technology, and modeling
CommonCrawl
Transient ischemic attack analysis through non-contact approaches Qing Zhang1,2, Yajun Li2, Fadi Al-Turjman3,4, Xihui Zhou1 & Xiaodong Yang5 Human-centric Computing and Information Sciences volume 10, Article number: 16 (2020) Cite this article The transient ischemic attack (TIA) is a kind of sudden disease, which has the characteristics of short duration and high frequency. Since most patients can return to normal after the onset of the disease, it is often neglected. Medical research has proved that patients are prone to stroke in a relatively short time after the transient ischemic attacks. Therefore, it is extremely important to effectively monitor transient ischemic attack, especially for elderly people living alone. At present, video monitoring and wearing sensors are generally used to monitor transient ischemic attacks, but these methods have certain disadvantages. In order to more conveniently and accurately monitor transient ischemic attack in the indoor environment and improve risk management of stroke, this paper uses a microwave sensing platform working in C-Band (4.0 GHz–8.0 GHz) to monitor in a non-contact way. The platform first collects data, then preprocesses the data, and finally uses principal component analysis to reduce the dimension of the data. Two machine learning algorithms support vector machine (SVM) and random forest (RF) are used to establish prediction models respectively. The experimental results show that the accuracy of SVM and RF approaches are 97.3% and 98.7%, respectively; indicating that the scheme described in this paper is feasible and reliable. Transient ischemic attack (TIA) is a transient neurological disorder caused by focal ischemia of the brain or retina without acute infarction. The clinical symptoms usually last less than 1 hour and the neurological function can return to normal after the onset [1]. TIA is characterized by sudden onset, short duration and high frequency of attack. Currently, the causes of TIA are generally recognized by the medical community as follows :(1) Embolus in arterial blood flows into the brain, resulting in blockage and poor circulation of blood. (2) When blood pressure fluctuates, especially when blood pressure drops, the blood flow in the distal part of the brain's smaller blood vessels decreases. (3) Changes in blood composition cause blood clots to form in blood vessels, which can block blood vessels in the brain [2] [3]. Relevant clinical experimental data show that TIA has an early warning effect on stroke. After TIA, the incidence of stroke within 48 h is as high as 50%, and the incidence of stroke within 3 months is 10%–20% [4]. Every year, there are many old people living alone in the world who do not receive effective attention and timely treatment when suffering from TIA, resulting in a subsequent stroke or even death. Some literatures also give other aspects of TIA [5,6,7], indicating the necessity of monitoring TIA at early stage [8]. The main symptom of TIA is that the patient suddenly falling down due to the weakness of both legs, accompanied by vertigo or vomiting, especially when he suddenly gets up after sitting or lying for a long time [9]. For patients with TIA, early treatment should be given to quickly rule out bleeding in the brain or seizures. 
At present, various advanced methods and technologies have been used in the diseases prediction and healthcare fields, including acceleration sensors [10], cameras [11], Multiple GPUs [12], Accounting for Label Uncertainty in Machine Learning [13], machine intelligence [14], mobile healthcare framework [15], deep learning approach [16], system in Internet of Medical Things [17], predictive analytics [18] and cloud computing environment [19]. All these significant contributions show their advantage and great usefulness. In this work, the authors develop a wireless sensing platform (WSP), which is composed of a transmitter and a receiver and this platform also has its unique advantage. It can be installed indoors without any contact with patients. The working process of the WSP is as follows: the transmitter transmits the electromagnetic wave in the C band, the receiver receives the wireless signal and simultaneously extracts the wireless channel state information (CSI) data and saves them. Firstly, we conducted a series of preprocessing on the collected CSI data, including removal of outliers and signal denoising, and then used principal component analysis (PCA) to reduce the dimensionality of the preprocessed data [20]. Finally, two machine learning algorithms, support vector machine (SVM) and random forest (RF) [21, 22], were used to classify, so as to monitor TIA. The experimental results in this paper show that the accuracy of the two machine learning algorithms can reach over 95%, which proves that the method described in this paper can effectively monitor TIA, so as to reduce the life risk of the elderly living alone. The contributions of this paper are as follows: It is proposed for the first time to monitor TIA by using C-band wireless sensing technology, which avoids the additional burden of wearable devices of monitored objects and does not invade their privacy; Two kinds of machine learning algorithms are used to train the prediction model to increase the stability and accuracy of the prediction results; Our self-developed WSP has the advantages of low cost, convenient and fast. The rest of the paper is arranged as follows. In the second part, we will introduce the principle of C-Band wireless sensing technology in detail, and the third part will describe the experimental scheme. In the fourth part, we will preprocess the experimental data and classify them by the machine learning algorithms. Experimental results will be discussed in the fifth part, and finally, we summarize the article in the sixth part. Principle of C-band wireless sensing technology Wireless signal propagation model C band has been included in the official document of the Ministry of Industry and Information Technology of the People's Republic of China; in addition, the usage of this band is also recommended by the microwave stations within the Chinese territory. C-Band wireless signal has the characteristics of light like scattering and line-of-sight (LOS) propagation. In the process of propagation, it is easy to be interfered with space environment and appear fading, so that the electromagnetic wave propagating in different paths shows different amplitudes and phases at the receiving end, resulting in multipath effect [23]. The indoor propagation model of C-Band signal is shown in Fig. 1. 
Wireless signal propagation model for indoor environment

According to the Friis transmission formula [24], the power at the receiving antenna can be expressed as:

$$P_{r} = \frac{P_{t} G_{t} G_{r} \lambda^{2}}{\left( 4\pi R \right)^{2}}$$

where \(P_{t}\) and \(P_{r}\) are the powers of the transmitting and receiving antennas, respectively, \(G_{t}\) and \(G_{r}\) are the gains of the transmitting and receiving antennas, respectively, \(\lambda\) is the wavelength of the wireless signal, and \(R\) is the distance between the two antennas. According to Fig. 1, if we assume that the path length of the wireless signal reflected by the ceiling and the ground is \(D\), and the path length scattered by the human body is \(H\), then (1) can be rewritten as:

$$P_{r} = \frac{P_{t} G_{t} G_{r} \lambda^{2}}{16\pi^{2} \left( R + D + H \right)^{2}}$$

According to (2), when there is no moving object on the signal transmission path, \(R\), \(D\) and \(H\) remain unchanged, and the power of the receiving antenna \(P_{r}\) is stable. When an object moves in the path of signal transmission, \(H\) changes, resulting in a change in the received power; that is, the amplitude and phase of the received signal change. The varying amplitude and phase of the received signal contain rich information about the external environment. By processing the received signal, we can extract the change information of the external environment from it, so as to achieve the purpose of monitoring. The C-Band WSP used in this paper adopts orthogonal frequency division multiplexing (OFDM) technology at the transmitter. The main advantages of this technology are improved data transmission efficiency and spectrum utilization, together with good resistance to multipath fading [25]. The platform includes a spectrum analyzer, an RF generator, cables, a vector network analyzer, antennas, absorbing material, a networked computer, etc. OFDM technology supports multi-antenna access. If the MIMO-OFDM system has \(N_{T}\) (at least 2) antennas at the transmitter and \(N_{R}\) (at least 2) antennas at the receiver, and the number of OFDM subcarriers is \(N_{C}\), then the channel model of the wireless system can be expressed as [26],

$$Y = \mathbf{H}X + N$$

where \(Y\) represents the received signal, \(X\) represents the transmitted signal, \(N\) represents ambient noise, and \(\mathbf{H}\) represents the wireless channel state matrix, whose dimension is \(N_{T} \times N_{R} \times N_{C}\), as shown below,

$$\mathbf{H} = \left[ \begin{array}{cccc} H_{11} & H_{12} & \ldots & H_{1N_{R}} \\ H_{21} & H_{22} & \ldots & H_{2N_{R}} \\ \vdots & \vdots & \ddots & \vdots \\ H_{N_{T}1} & H_{N_{T}2} & \ldots & H_{N_{T}N_{R}} \end{array} \right].$$

In the formula above,

$$H_{ij} = \left[ H_{1}\left( f_{1} \right), H_{2}\left( f_{2} \right), \ldots, H_{N_{C}}\left( f_{N_{C}} \right) \right].$$

where \(f_{k}\) represents the center frequency of the kth subcarrier, and \(H_{k}\left( f_{k} \right)\;\left( 1 \le k \le N_{C} \right)\) represents the frequency response of each subcarrier, which can be rewritten as,

$$H_{k}\left( f_{k} \right) = \left| H_{k}\left( f_{k} \right) \right| e^{j\arg\left( H_{k}\left( f_{k} \right) \right)}$$

where \(\left| H_{k}\left( f_{k} \right) \right|\) represents the CSI amplitude of the kth subcarrier, and \(\arg\left( H_{k}\left( f_{k} \right) \right)\) represents the CSI phase of the kth subcarrier.
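To make the channel model above concrete, the following minimal Python sketch (not from the original work; the array shapes, antenna gains, and random values are illustrative assumptions) evaluates the Friis received power of Eq. (1) and extracts the per-subcarrier CSI amplitude and phase of Eq. (6) from a simulated complex channel matrix.

```python
import numpy as np

def friis_received_power(p_t, g_t, g_r, wavelength, distance):
    """Received power P_r = P_t*G_t*G_r*lambda^2 / (4*pi*R)^2 (linear units)."""
    return p_t * g_t * g_r * wavelength**2 / (4.0 * np.pi * distance) ** 2

# Example: 4 GHz carrier, unity antenna gains, 7 m total path length.
c = 3e8
wavelength = c / 4e9
p_r = friis_received_power(p_t=1e-3, g_t=1.0, g_r=1.0,
                           wavelength=wavelength, distance=7.0)

# Simulated CSI matrix with N_T x N_R antennas and N_C = 30 subcarriers.
N_T, N_R, N_C = 2, 2, 30
rng = np.random.default_rng(0)
H = rng.normal(size=(N_T, N_R, N_C)) + 1j * rng.normal(size=(N_T, N_R, N_C))

amplitude = np.abs(H)    # |H_k(f_k)|, CSI amplitude per subcarrier
phase = np.angle(H)      # arg(H_k(f_k)), CSI phase per subcarrier
```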
The received signal is continuously sampled in a certain period of time. Through channel estimation [27], a series of discrete \(\widehat{H}\) matrices are obtained. Thus, the CSI data we need is obtained. The channel estimation formula is as follows, $$\widehat{H} \approx \frac{Y}{X}$$ The experimental scheme It can be seen from the first section that the obvious symptom of TIA is that the patient suddenly falls down due to weakness of both legs. We need to distinguish TIA from other normal daily actions. In this experiment, we will collect data of several actions as shown in Table 1. Table 1 Daily actions and TIA in the experiment Before the experiment, all the subjects were informed about the matters needing attention, on the other hand, all the subjects were trained, rigorously, about the TIA movements simulation. The experiment was carried out in an approximate ward environment. The size of the laboratory is 7 × 5 m. The transmitter and the receiver of WSP are placed at both ends of the room, respectively. The object makes the actions shown in Table 1 in the room. The experimental scene is similar to Fig. 1, and the actual experimental scene has sofa, bookcase and other furniture. Absorbing material will reduce the signal reflection and thus the non-line-of-sight propagation would be affected. However, in actual application scenarios, absorbing material is seldomly used, leading to the fact that the multipath propagation has to be taken into account. Due to the fact that single subject was considered, monopole antennas were used in the experiment. They transmit and receive omnidirectional EM waves and easy to design; more complicated directional antennas can be used for some other special environments. The experimental flow is shown in Fig. 2. The experiment flow chart In Fig. 2, the WSP is a self-designed device for monitoring TIA, which is composed of a transmitter, a receiver and a data processing module. The transmitter works in C-Band and adopts OFDM technology. The transmitting signal bandwidth is 40 MHz and there are 30 subcarriers in total. At the same time, the time window is 5 s. The receiver receives the signal, calculates the channel state matrix, and obtains the CSI data. The data processing module first processes the CSI data, and then classifies the indoor human activities through the model trained by the machine learning algorithm, so as to achieve the purpose of monitoring. The platform is a non-contact monitoring tool, which can be directly installed indoors, very safe and convenient. We collected 300 samples for each action. The waveform of TIA original data collected in the experiment is shown in Fig. 3. The waveform of TIA original data From Fig. 3, we can see that the waveform contains a lot of burrs (noise), which need to be processed. Next, we will describe how to process the data. The data processing This section details the data processing approach as shown in Fig. 2. Select subcarrier In order to eliminate redundant information and facilitate subsequent data processing, we need to pick out the appropriate subcarriers. We calculate the variance of 30 subcarriers of each group of data separately. According to the principle, the larger the variance, the larger the amount of information [28], the sequence number of subcarriers selected for each action is shown in Table 2. Table 2 Selected subcarrier number for each action Remove outliers The fluctuation of equipment voltage or other factors will cause outliers, which will affect the subsequent data processing. 
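As detailed in the next passage, the outliers are removed with the Pauta (3σ) criterion. The minimal sketch below is an assumed implementation of this step on a single subcarrier's amplitude series; the choice of linear interpolation to replace flagged samples is illustrative, not taken from the paper.

```python
import numpy as np

def remove_outliers_3sigma(x: np.ndarray) -> np.ndarray:
    """Flag samples outside mean +/- 3*std and replace them by interpolation."""
    mu, sigma = np.mean(x), np.std(x)
    outliers = np.abs(x - mu) > 3.0 * sigma
    cleaned = x.copy()
    idx = np.arange(len(x))
    cleaned[outliers] = np.interp(idx[outliers], idx[~outliers], x[~outliers])
    return cleaned

rng = np.random.default_rng(2)
x = rng.normal(size=500)
x[[50, 200]] += 10.0              # inject two artificial spikes
x_clean = remove_outliers_3sigma(x)
```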
We will remove the outliers according to the Pauta criterion [29]. The original waveform of the TIA is shown in Fig. 4a, after removing the outliers, the waveform is shown in Fig. 4b. TIA No.11 subcarrier waveform. a Original waveform of TIA. b TIA waveform after removing outliers Signal filtering Before feature extraction, we need to denoise the signal. In this paper, a wavelet transform is used to filter out the signal noise [30], which is mainly based on the following considerations: (1) wavelet transform has good time–frequency characteristics; (2) wavelet transform can well describe the non-stationary characteristics of the signal; (3) wavelet basis function selection is relatively flexible. In this paper, the signal is decomposed into five layers using the "sym8" wavelet. The Symlet wavelet function is an approximate symmetric wavelet function proposed by Ingrid Daubechies, which is an improvement on the dB function. SimN (N = 2,3,…,8) wavelet has good symmetry, which can reduce the phase distortion of signal analysis and reconstruction to some extent [31]. The waveforms of the motion after the wavelet transform filtering process are shown in Fig. 5. It can be clearly seen from Fig. 5 that the signal waveform is clear and smooth. Waveform of each action after denoising. a The waveform of "standing". b The waveform of "sitting". c The waveform of "lying". d The waveform of "stand up". e The waveform of "sit down" . f The waveform of "walking". g The waveform of "TIA" Due to the large dimension of data, we will use PCA to reduce the dimension and extract features. In 1901, K. Pearson proposed PCA. The idea of this method is to extract a set of new features that are not related to each other from old features. The new features are arranged in descending order of importance [32]. The principle of PCA is as follows, $$Y = \varvec{A}X$$ where, \(X = \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right)\) is the vector in the original n-dimensional feature space, \(Y = \left( {y_{1} ,y_{2} , \ldots ,y_{n} } \right)\) is a vector composed of n new features, and \(\varvec{A}\) is an orthogonal transformation matrix. The new features can be expressed as follows, $$y_{i} = \mathop \sum \limits_{j = 1}^{n} a_{ij} x_{j} = \varvec{a}_{i}^{T} x_{j} \left( {i = 1,2, \ldots n} \right)$$ The larger the variance of the new feature, the greater difference in the feature of the sample, and this feature is more important. The variance of each new feature is: $$var\left( {y_{i} } \right) = E\left[ {y_{i}^{2} } \right] - E\left[ {y_{i} } \right]^{2} .$$ E[] is a mathematical expectation, combined with (2), (3) can be reduced to $$var\left( {y_{i} } \right) = \varvec{a}_{i}^{T} {\varvec{\Sigma}}\varvec{a}_{i}$$ where \({\varvec{\Sigma}}\) is the covariance matrix of the original eigenvector \(X\), which can be estimated and calculated with samples. Because \(\varvec{A}\) is an orthogonal matrix, so \(\varvec{a}_{i}^{T} \varvec{a}_{i} = 1\). 
Using Lagrange method, we can get the maximum variance value as follows, $$f\left( {\varvec{a}_{i} } \right) = \varvec{a}_{i}^{T} {\varvec{\Sigma}}\varvec{a}_{i} - \lambda \left( {\varvec{a}_{i}^{T} \varvec{a}_{i} - 1} \right)$$ where λ is the Lagrange multiplier, and the derivative of (5) for \(\varvec{a}_{i}\) is obtained as follows, $${\varvec{\Sigma}}\varvec{a}_{i} = \lambda \varvec{a}_{i}$$ Substituting (13) into (11), we can get $$var\left( {y_{i} } \right) = \varvec{a}_{i}^{T} {\varvec{\Sigma}}\varvec{a}_{i} = \lambda \varvec{a}_{i}^{T} \varvec{a}_{i} = \lambda$$ Combining (13) and (14), the optimal \(\varvec{a}_{i}\) should be the eigenvector corresponding to the maximum eigenvalue of \({\varvec{\Sigma}}\), and the corresponding \(y_{i}\) is the first principal component, with the largest variance. The covariance matrix \({\varvec{\Sigma}}\) has a total of n eigenvalues, sorting them from large to small, and then obtaining the second principal component,…, the nth principal component. Thus, the corresponding \(\varvec{a}_{i}\) is obtained, and then the orthogonal transformation matrix \(\varvec{A}\) is obtained. The proportion of information represented by the first k principal components is: $$P = \mathop \sum \limits_{i = 1}^{k} \lambda_{i} /\mathop \sum \limits_{i = 1}^{n} \lambda_{i}$$ As a rule of thumb, most of the information in the data is concentrated on a few principal components. As shown in Fig. 6, we selected the first 53 principal components. Principal component cumulative contribution rate curve Precision comparison of two algorithms In this paper, SVM and RF are used to classify the data respectively. The basic idea of SVM is to transform the original nonlinear problems in low-dimensional space into linear classification problems in high-dimensional space through feature transformation. This feature transformation is realized by defining appropriate inner product kernel functions. SVM implements different forms of nonlinear classifiers by using different kernel functions, whose performance depends on the selection of kernel functions and the parameter setting of kernel functions. The commonly used kernel functions are: (1) radial basis function; (2) polynomial function; (3) Sigmoid function. Since the linear function can only deal with the linear classification problem and the performance of the Sigmoid function can be obtained by taking a certain parameter of the radial basis function, the radial basis function will be adopted in this paper [21] [33]. The advantages of SVM include high precision, good theoretical guarantees on the overfitting, etc. The idea of RF is to build many decision trees to form a "forest" and make decisions by voting. Both theoretical and experimental studies show that this method can effectively improve the accuracy of classification [22]. The RF in this paper has 500 decision trees. The advantages of RF include: the resistance to over fitting, stability, etc. It is also worth mentioning that some other advanced machine learning approaches have also been used in the various healthcare and Internet of Things applications [34,35,36,37,38,39,40], which also indicate the path of future research. The experimental results The confusion matrix of the results processed by the classification algorithm is shown in Table 3. The total number of samples for each action is 300, 225 samples are taken as the training set and 75 samples are taken as the test set. 
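Before turning to the confusion matrices of Table 3, the processing chain just described (PCA to the first 53 components, an SVM with an RBF kernel, and a random forest with 500 trees, using 225 training and 75 test samples per action) can be sketched roughly as follows. This is an illustrative scikit-learn sketch on synthetic placeholder features, not the authors' code; the feature length and random data are assumptions, so the printed accuracies are not meaningful.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: one preprocessed (denoised) feature vector per sample, y: action labels.
# 7 actions x 300 samples each; the feature length 200 is arbitrary here.
rng = np.random.default_rng(0)
X = rng.normal(size=(7 * 300, 200))
y = np.repeat(np.arange(7), 300)

# 225 training / 75 test samples per action, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=75 * 7, stratify=y, random_state=0)

# Reduce to the first 53 principal components.
pca = PCA(n_components=53).fit(X_tr)
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

# SVM with an RBF kernel and a random forest with 500 trees.
svm = SVC(kernel="rbf").fit(X_tr_p, y_tr)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr_p, y_tr)

print("SVM accuracy:", svm.score(X_te_p, y_te))
print("RF accuracy:", rf.score(X_te_p, y_te))
```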
Table 3 Confusion matrix of different classification algorithms

As shown in Table 3, the accuracy of both SVM and RF is over 95%, and the errors mainly come from the static actions (standing, sitting and lying). This result effectively demonstrates the feasibility and reliability of the method described in this paper. From Table 3, we can also see that if only TIA is distinguished from the other actions, the accuracy of SVM reaches 97.3% and the accuracy of RF reaches 98.7%. It can be seen from Fig. 5a–c that the amplitude of the static-action oscillograms is very gentle, so these actions are differentiated more clearly than the others. Table 3 also shows that classification errors for static actions occur only within the static group: other actions are never misjudged as static actions, and static actions are never misjudged as other actions. At the same time, the accuracy for walking is 100% with both classification algorithms. It can be seen from Fig. 5d–g that the oscillograms of stand up, sit down, walking and TIA all have their own characteristics. However, because sit down and TIA are similar, some errors occur between them; from Table 3, there are one or two misclassified samples for sit down and TIA. In this paper, the accuracy of the RF algorithm is higher than that of the SVM algorithm, so RF is the more suitable algorithm for monitoring TIA.

Monitoring TIA can allow patients to receive timely treatment, which helps prevent a subsequent stroke. As far as we know, this paper is the first to use C-Band wireless sensing technology to monitor TIA in a non-contact way. Firstly, we remove the outliers from the data and filter the data by wavelet transform, then reduce the dimension of the preprocessed data by PCA, and finally train the models with SVM and RF. The accuracies of the SVM and RF approaches are 97.3% and 98.7%, respectively, demonstrating the effectiveness of the technology presented in the paper. In future work, we will further explore the applications of C-Band wireless sensing technology in medical care and propose more reliable, safe and convenient application schemes; the main contribution and novelty of the work lies in the development of the platform and the non-contact early-stage disease warning. The data were collected by specialized facilities and platforms.

All the important abbreviations are summarized below:

TIA: Transient ischemic attack
PCA: Principal component analysis
RF: Random forest
IoT: Internet of Things
WCSI: Wireless channel state information
OFDM: Orthogonal frequency division multiplexing
MIMO: Multi-input multi-output
RBF: Radial basis function

Bernstein RA, Alberts MJ (2003) Transient ischemic attack-proposed new definition. N Engl J Med 348(16):1607–1609 Johnston SC (2002) Transient ischemic attack. N Engl J Med 348(16):4339 Meyer JS, Muramatsu K, Shirai T (1996) Cerebral embolism as a cause of stroke and transient ischemic attack. Echocardiography 13(5):513–518 Easton JD et al (2009) Definition and evaluation of transient ischemic attack: a scientific statement for healthcare professionals from the American heart Association/American stroke association stroke council; council on cardiovascular surgery and anesthesia; council on cardiovascular radiology and intervention; council on cardiovascular nursing; and the interdisciplinary council on peripheral vascular disease: the American academy of neurology affirms the value of this statement as an educational tool for neurologists.
Stroke 40(6):2276–2293 Gennesseaux J, Orsini GG, Lefour S et al (2020) Early management of transient ischemic attack in emergency departments in France. J Stroke Cerebrovascu Dis 29(1):104464 Chang Bernard P, Rostanski Sara, Willey Joshua et al (2019) Safety and feasibility of a rapid outpatient management strategy for transient ischemic attack and minor stroke: the rapid access vascular evaluation-neurology (RAVEN) approach. Ann Emerg Med 74(4):562–571 DeSimone CV, Friedman PA, Noheria A (2013) Stroke or transient ischemic attack in patients with transvenous pacemaker or defibrillator and echocardiographically detected patent foramen ovale. Circulation 128(13):1433–1441 Mcelveen WA, Alway D (2009) Ischemic stroke and transient ischemic attack—acute evaluation and management. Stroke essentials for primary care. Humana Press. https://doi.org/10.1007/978-1-59745-433-9_2 Nakajima M et al (2010) Symptom progression or fluctuation in transient ischemic attack patients predicts subsequent stroke. Cerebrovascular 29(3):221–227 Ichwana D, Arief M, Puteri N Ekariani S. Movements Monitoring and Falling Detection Systems for Transient Ischemic Attack Patients Using Accelerometer Based on Internet of Things, 2018 International Conference on Information Technology Systems and Innovation (ICITSI), BandungPadang, Indonesia, 2018, pp. 491-496 Nguyen VD, Le MT, Do AD, Duong HH, Thai TD, Tran DH. An efficient camera-based surveillance for fall detection of elderly people, 2014 9th IEEE Conference on Industrial Electronics and Applications, Hangzhou, 2014, pp. 994-997 Sierra-Sosa Daniel, Garcia-Zapirain Begonya, Castillo Cristian et al (2019) Scalable healthcare assessment for diabetic patients using deep learning on multiple GPUs. IEEE Trans Industr Inf 15(10):5682–5689 Reamaroon Narathip, Sjoding Michael W, Lin Kaiwen et al (2019) Accounting for label uncertainty in machine learning for detection of acute respiratory distress syndrome. IEEE J Biomed Health Inform 23(1):407–415 Shishvan OR, Zois DS, Soyata T (2018) Machine intelligence in healthcare and medical cyber physical systems: a survey. IEEE Access 20(6):46419–46494 Alhussein M, Muhammad G (2018) Voice pathology detection using deep learning on mobile healthcare framework. IEEE Access 16(6):41034–41041 Benjamin S, Patrick JT, Azra B et al (2018) Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE J Biomed Health Inform 22(5):1589–1604 Sayeed MA, Mohanty SP, Kougianos E, Zaveri HP (2019) Neuro-detect: a machine learning-based fast and accurate seizure detection system in the IoMT. IEEE Transact Consum Electron 65(3):359–368 Johnston Stephen S, Morton John M et al (2019) Using machine learning applied to real-world healthcare data for predictive analytics: an applied example in bariatric surgery. Value Health (Elsevier) 22(5):580–586 Abdelaziz Ahmed, Elhoseny Mohamed et al (2018) A machine learning model for improving healthcare services on cloud computing environment. Measurement 119:117–128 Wold S (1987) Principal component analysis. Chemom Intell Lab Syst 2(1):37–52 Yasutoshi Y (2005) Linear programming approaches for multicategory support vector machines. Eur J Operational Res 162(2):514–531 MathSciNet Article Google Scholar Breiman L (2001) Random forests. Mach Learn 45(1):5–32 Chen Z, Zhao Y, Zhao D (2016) Multipath effects on time reversal OFDM communications between wireless sensors, 11th International Symposium on Antennas, Propagation and EM Theory (ISAPE). 
Guilin 2016:376–379 Volakis J (2009) Antenna engineering handbook, 4th edn. McGraw-Hill, New york Nee RV, House A (2000) OFDM for wireless multimedia communications. Artech House, Boston Wang Y, Wu K, Ni LM (2016) Wifall: device-free fall detection by wireless networks. IEEE Transac Mobile Comput 16(2):581–594 Van De Beek JJ et al (2005) On channel estimation in OFDM systems. Vehic Technol Conference IEEE 2:815–819 Stephens DW (1989) Variance and the value of information. Am Nat 134(1):128–140 Dao-Wen C et al. Study on the fast judgment of abnormal value with Excel. International Conference on Computer Science & Network Technology IEEE, 2013 Rieder P, Nossek JA. Implementation of orthogonal wavelet transforms and their applications, Proceedings IEEE International Conference on Application-Specific Systems, Architectures and Processors, Zurich, Switzerland, 1997, p 489-498 Vijayakumari B, Devi JG, Mathi MI. Analysis of noise removal in ECG signal using symlet wavelet, 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE'16), Kovilpatti, 2016, p 1-6 Pearson K. On lines and planes of closest fit to systems of points in space. London, Edinburgh & Dublin Philosophical Magazine & Journal of Science 1901 Zhiliang L, Zuo MJ, Xu H. Parameter selection for Gaussian radial basis function in support vector machine classification. 2012 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering IEEE, 2012 Shah SA, Fioranelli F. (2019) Human activity recognition: preliminary results for dataset portability using FMCW Radar. In: 2019 International Radar Conference, Toulon, France, 2327 Sept 2019, in press Fioranelli F, Le Kernec J, Shah SA. Radar for health care: recognizing human activities and monitoring vital signs, in IEEE Potentials, vol. 38, no. 4, p 1623, July-Aug 2019 Shah SA, Fioranelli F, RF Sensing Technologies for Assisted Daily Living in Healthcare: A Comprehensive Review, in IEEE Aerospace and Electronic Systems Magazine, vol. 34, no. 11, p 26-44, 1 Nov. 2019 Shah SI, Shah SY, Shah SA. Intrusion detection through leaky wave cable in conjunction with channel state information. 2019 UK/China Emerging Technologies (UCET) 2019 Aug 21 p1-4. IEEE Tahir A, Ahmad J, Shah SA, Morison G, Skelton DA, Larijani H, Abbasi QH, Imran MA, Gibson RM (2019) WiFreeze: multiresolution scalograms for freezing of gait detection in Parkinson's leveraging 5G spectrum with deep learning. Electronics 8:1433 Al-Turjman F, Zahmatkesh H. An Overview of Security and Privacy in Smart Cities IoT Communications, Wiley Transactions on Emerging Telecommunications Technologies, 2019. https://doi.org/10.1002/ett.3677 Al-Turjman F (2020) Intelligence and security in big 5G-oriented IoNT: an overview. Elsevier Future Gener Comput Syst 102(1):357–368 The work was supported in part by the National Natural Science Foundation of China 61301175. 
First Affiliated Hospital of Xi'an Jiaotong University, Xi'an Jiaotong University Health Science Center, Xi'an Jiaotong University, Xi'an, Shaanxi, China Qing Zhang & Xihui Zhou Northwest Women's and Children's Hospital, Xi'an Jiaotong University Health Science Center, Xi'an, Shaanxi, 710061, China Qing Zhang & Yajun Li Artificial Intelligence Department, Near East University, Mersin 10, Nicosia, Turkey Fadi Al-Turjman Research Centre for AI and IoT, Near East University, Mersin 10, Nicosia, Turkey School of Electronic Engineering, Xidian University, Shaanxi, Xi'an, 710071, China Qing Zhang Yajun Li Xihui Zhou Idea QZ, original manuscript QZ and XZ, editing and guidance YL and FT, Funding XY, project management XY. All authors read and approved the final manuscript. Correspondence to Xiaodong Yang. There are no Competing interests for this manuscript. Zhang, Q., Li, Y., Al-Turjman, F. et al. Transient ischemic attack analysis through non-contact approaches. Hum. Cent. Comput. Inf. Sci. 10, 16 (2020). https://doi.org/10.1186/s13673-020-00223-z Microwave sensing platform
A compact pulsatile simulator based on cam-follower mechanism for generating radial pulse waveforms Tae-Heon Yang1, Gwanghyun Jo2, Jeong-Hoi Koo3, Sam-Yong Woo4, Jaeuk U. Kim5 & Young-Min Kim ORCID: orcid.org/0000-0001-7417-52905 BioMedical Engineering OnLine volume 18, Article number: 1 (2019) Cite this article There exists a growing need for a cost-effective, reliable, and portable pulsation simulator that can generate a wide variety of pulses depending on age and cardiovascular disease. For constructing compact pulsation simulator, this study proposes to use a pneumatic actuator based on cam-follower mechanism controlled by a DC motor. The simulator is intended to generate pulse waveforms for a range of pulse pressures and heart beats that are realistic to human blood pulsations. This study first performed in vivo testing of a healthy young man to collect his pulse waveforms using a robotic tonometry system (RTS). Based on the collected data a representative human radial pulse waveform is obtained by conducting a mathematical analysis. This standard pulse waveform is then used to design the cam profile. Upon fabrication of the cam, the pulsatile simulator, consisting of the pulse pressure generating component, pressure and heart rate adjusting units, and the real-time pulse display, is constructed. Using the RTS, a series of testing was performed on the prototype to collect its pulse waveforms by varying the pressure levels and heart rates. Followed by the testing, the pulse waveforms generated by the prototype are compared with the representative, in vivo, pulse waveform. The radial Augmentation Index analysis results show that the percent error between the simulator data and human pulse profiles is sufficiently small, indicating that the first two peak pressures agree well. Moreover, the phase analysis results show that the phase delay errors between the pulse waveforms of the prototype and the representative waveform are adequately small, confirming that the prototype simulator is capable of simulating realistic human pulse waveforms. This study demonstrated that a very accurate radial pressure waveform can be reproduced using the cam-based simulator. It can be concluded that the same testing and design methods can be used to generate pulse waveforms for other age groups or any target pulse waveforms. Such a simulator can make a contribution to the research efforts, such as development of wearable pressure sensors, standardization of pulse diagnosis in oriental medicine, and training medical professionals for pulse diagnosis techniques. The importance of monitoring artery-related factors such as arterial pressure waveform and pulse wave velocity has steadily increased in the medical science and healthcare fields [1,2,3]. Among the factors for health monitoring, the radial pressure waveform is a surrogate marker for estimating the central aortic pressure and predicting cardiovascular diseases [4,5,6]. Thus, in recent years, the need for radial artery monitoring sensors is rapidly increasing in order to measure radial pulsation waveforms, which can vary according to human race, sex, age, and health conditions, such as arterial stiffness [7, 8]. To effectively measure the radial artery pulse waveforms, there have been numerous research studies on flexible and wearable sensing technologies. 
These studies aimed at developing skin-attachable blood pressure sensors with superior sensing properties along with mechanical flexibility and robustness, thus enabling real-time blood pressure measurement or monitoring. Recently, numerous nanomaterials including nanowires [9], carbon nanotubes [10], polymer nanofibers [11], metal nanoparticles [12], and graphene [13] were tested in the design of wearable blood pressure sensors. With the rapid increase in the research and development of wearable blood pressure sensors, the demand for securing the measurement accuracy of the wearable sensors has also been increased considerably. The accuracy of blood pressure measurement is critically important for the commercial use of such wearable sensors. In the case of hypertension, a 5-mmHg error in blood pressure measurements may double the number of patients diagnosed with hypertension, or even reduce it by half [14]. Despite the importance of measurement accuracy, few studies exist on the evaluation and improvement of wearable sensors' measurement accuracy. Ideally, for such studies, clinical trials with large numbers of patients are the best way to examine the accuracy of wearable sensor measurements. However, often times, a large-scale human subject testing is limited due to high cost and time constraints. As an alternative to clinical testing, mechanical simulators capable of accurately regenerating standardized radial pulsation waveforms with a variety of different pulse features can be a good means of investigating and improving the measurement accuracy of wearable sensors. In addition to a growing demand of pulsatile simulators for calibrating blood pressure sensors, they can make significant contributions to the scientific advancement of Oriental Medicine (OM), such as modernizing or standardization of pulse diagnosis techniques. OM or traditional Chinese medicine is a long-established traditional medical practice in Asia, but it is being widely used nowadays in Western countries in the form of alternative medicines. OM practices include pulse diagnosis, acupuncture, and herbal medicine. The pulse diagnosis is one of the most important diagnostic methods in OM. It is based on the 3-finger technique that sense radial pulses at the terminal region of radial artery on a wrist by index, middle, and ring fingers to diagnose health conditions of internal organs. Unfortunately, the pulse diagnosis technique is ambiguous, and it is not standardized. It depends on the pulse characteristics (intensity, patterns, etc.) and location of the figures. Furthermore, it heavily relies on OM doctors' subjective experiences. Thus, there exists an urgent need for quantification or standardization of pulse waveforms to modernize and teach the pulse diagnosis. Pulsatile simulators capable of reproducing standardized radial pulse waveforms reliably can play an important role in order to train OM students and professionals and to meet the urgent need. Currently, several simulators generating blood pressure waveforms have been developed. They are mainly based on a mechanism circulating viscous fluids similar to blood. ViVitro Labs, Inc., developed an endovascular simulator that can generate pulsatile flow and blood pressure waveforms similar to those of the human body [15]. This simulator is characterized as a super pump that generates a pulsating flow, and the generated pulsatile flow passes through a viscoelastic impedance adapter, a pump head, and a compliance chamber to an aortic anatomical model. Lee et al. 
developed a cardiovascular simulator for studying the depth, rate, shape, and strength of radial pulses [16]. The simulator is composed of a pulse generating part, a vessel part, and a measurement part. Chang et al. developed a pulse simulator based on a hydraulic control method [17]. The developed simulator can adjust the characteristic parameters of the pulse wave by manipulating the opening time of the hydraulic valve and the hydraulic pressure intensity. Tellyes Scientific, Inc., developed a pulse-training simulator (Victor Pulse) [18]. It was developed to realize 26 pulse waveforms by circulating fluid and opening and closing multiple valves to produce a desired waveform. Current pulse simulators, including those developed in the above-mentioned studies, are complex, bulky, and expensive. In most simulators, the method of controlling the fluid to generate the blood pressure waveform is quite complicated. Even sophisticated and expensive simulators have limitations in generating, by adjusting several valves and flow rates, the wide range of radial pulse waveforms that may number in the hundreds depending on human race, sex, age, and health conditions. Note that it is extremely difficult to control the reflected waves sporadically generated in the liquid. In addition, liquid-based simulators, in particular portable ones, pose potential problems such as distortion of pressure waveforms due to cavitation and the leakage of liquid. The primary goal of this study is to develop a cost-effective and portable blood pulse simulator that can accurately and repetitively generate a human radial pulse waveform. To this end, it proposes to use a cam-follower mechanism to generate the radial artery waveform. The proposed simulator adopts a pneumatic-driven mechanism to avoid the problems of pressure wave reflection, bubbles, and leakage produced in a liquid-driven device. In this study, a cam profile is determined based on a "standard" radial pulse waveform obtained by in vivo testing of a healthy young man in his 20s. For demonstration purposes, only one cam is fabricated, but the proposed simulator is designed so that the cam can easily be replaced with other cams to generate other radial pulse waveforms. The design includes a DC motor connected to the cam-follower mechanism that pushes a piston into a cylinder, so that the motor speed sets the simulated heart rate. The design also includes a "diastolic" chamber to adjust the pulse pressure of the waveforms. Using the prototype simulator, a series of tests was performed to evaluate its performance in generating radial pulse waveforms. These waveforms were compared with the human pulse profile. This article is organized as follows. The next section describes the target pulse pressure waveform of the radial artery that the proposed simulator aims to reproduce. The following design and development section explains the schematic diagram and the developed platform of the radial pulsation simulator based on the physiological behavior of the human body. Finally, after describing the process of pressure data measurement with the developed simulator, an analysis and discussion of the experimental results conclude the article.

Data collection of human pulse waveforms

The reference input signal of a radial pulse waveform was acquired from the clinical data of a healthy young adult because the second peak of the pulse waveform is clearly observed in young people [19, 20]. As shown in Fig.
1, the robotic tonometry system (RTS), developed by the authors at Korea Institute of Oriental Medicine in South Korea, was utilized to obtain the radial pulse waveform with high precision by autonomously detecting the exact pulsation positions and precisely pressurizing the radial artery using a 6-DOF robotic manipulator including one redundant actuator [21]. A pulse sensor array with six pressure sensory channels was attached to the end effector of the RTS to maintain a constant posture and contact force on the radial artery [22]. A 3-DOF motorized stage moved the center position of the pulse sensor to the exact pulsation position. The contact directions between the pulse sensor and the skin surface were controlled by two harmonic-driving actuators without gear backlash. A ball-screw typed linear actuator was used to precisely control the contact force of the pulse sensor. Data collection procedure of human pulse waveforms by the robotic tonometry system: a the operator takes the pulsation position with his/her fingers and marks the pulsatile position using the cross shape laser pointer, b the RTS detects the appropriate direction and force of the pulse sensor contacting on the wrist skin surface Figure 2 shows the raw data of the pulse wave measured from the time that the pulse sensor reached for the wrist skin surface to pressurize the artificial radial artery. The pulse sensor incrementally pressed the radial artery until it found the pulse pressure (PP) that was the maximum value of the first peak magnitudes. When the PP was detected, the tonometry device maintained the contact force for about 60 s to reliably record the raw signals of the radial artery pulse waveform at the PP. The final reference signal of the radial artery pulse waveform was obtained by averaging the 40 pulse waveforms recorded in the steady-state region. Procedure for detecting the PP and recording the pulse waveform in a reliable state and the raw data acquired from six channels of the pulse sensor array Data processing for obtaining representative waveform Forty consecutive pulses of the steady state were extracted from the pressure waveforms measured on the wrist of a human subject using the RTS, and the selected pulses were used to generate representative waveforms. In order to calculate the representative waveform effectively, first, each pulse is normalized to have the same period: T = 1. The function representing the normalized 40 pulses and the mean value are defined as \( {\text{u}}_{\text{i}} : {\text{T}} \to {\text{R}} \), (i = 1,…, 40) and \( \overline{u} : {\text{T}} \to {\text{R}}, \) respectively. In the mean value \( \overline{u} \), the value of the radial augmentation index (AI), which is the most important statistical value for evaluating the clinically important arterial stiffness, is relatively small, and thus the mean value could not be used as a representative waveform. Therefore, the minimization problem expressed by Eq. (1) using the total error \( E\left( {\text{u}} \right) \) was defined to obtain the representative waveform that maximally preserved the value of the radial AI. 
$$ \begin{aligned} & {\text{Minimize}}\;E\left( {\text{u}} \right) = \sqrt {{\text{E}}_{{{\text{L}}^{2} }} \left( u \right)^{2} + \left( {\alpha E_{RI} \left( u \right)} \right)^{2} } \\ & \quad {\text{in u}} \in {\text{L}}^{2} \left( {\text{T}} \right),\quad {\text{where}} \\ & {\text{L}}^{2} \;{\text{error of total waveform:}}\;E_{{L^{2} }} \left( u \right) = \frac{1}{40}\sqrt {\mathop \sum \limits_{i = 1, \ldots ,40} \mathop \smallint \limits_{T} \left( {u_{i} - u} \right)^{2} } \\ & {\text{Error of radial AI: }}E_{RI} \left( u \right) = \left| {RI - \overline{RI\left( u \right)} } \right|. \\ \end{aligned} $$ Here, \( RI \) is the mean value of the radial AI of \( u_{i} \left( {i = 1, \ldots ,40} \right) \), and \( \overline{RI\left( u \right)} \) is the radial AI of the u function. If the value of α in \( E\left( {\text{u}} \right) \) is set to a sufficiently large constant, the solution minimizing \( E\left( {\text{u}} \right) \) can preserve the value of the radial AI because Eq. (1) becomes a penalty problem using the radial AI. Thus, the solution obtained by solving the minimization problem, Eq. (1), in the N-dimensional space of the discrete Fourier series function can be used as a representative waveform. To solve Eq. (1), the mean value of the pressure pulse \( \overline{u} \) is used as the initial guess after it is represented by a N-dimensional discrete Fourier series function, as shown in Eq. (2). Then, using an iterative method based on a line search, the coefficients \( a_{k} \) and \( b_{k} \) in Eq. (2) are updated in each iteration step. Here, the dimension of the Fourier series function is set to a sufficiently large value of 10 so that the Fourier series function can accurately form a representative waveform: $$ \overline{u} \left( \theta \right) = {\text{a}}_{0} + \mathop \sum \limits_{k = 1}^{n} \left( {a_{k} \cos \left( {\frac{k}{T}\theta } \right) + b_{k} \sin \left( {\frac{k}{T}\theta } \right)} \right) $$ The objective of the minimization problem, Eq. (1), is to find the Fourier series equation that most closely matches the human measured data while minimizing the relative error of the radial AI. Therefore, when the evolution of \( E_{RI} \left( u \right) \) is plotted in Fig. 3 while solving the minimization problem using iterative methods. As shown in Fig. 3, 150 iterations were performed so that the relative error of the radial AI could be sufficiently reduced to 0.00247%. Through this iteration process, the coefficients of the Fourier series function of Eq. (2) are obtained as shown in Table 1. In addition, the \( {\text{L}}^{2} \) error of the total waveform, \( E_{{L^{2} }} \left( u \right) \), is 3.64%, which means that the representative waveform is well matched to the average waveform of \( u_{i} \). The evolution result of error function of radial AI, \( E_{RI} \left( u \right) \) during iteration method Table 1 Coefficients of Fourier series The coefficients of Table 1 determined through the minimization problem are applied to the representative waveform Eq. (2), and the equation is plotted as shown in Fig. 4. In Fig. 4, the value of the radial augmentation index defined by Eq. (3) is calculated as 73.3%, which is similar to the average radial AI of men (69.5% ± 16.3%, [5]). Therefore, it was confirmed that the representative waveform obtained by the Fourier series preserved the average radial AI of men. 
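As an aside, the representative waveform of Eq. (2) can be reconstructed from the 10-term Fourier series as in the minimal sketch below. The coefficients used here are placeholders, not the values of Table 1, and the normalized period T = 1 is assumed to be mapped to 0–2π, as is done later when the waveform is converted to the 360° cam profile.

```python
import numpy as np

# Placeholder Fourier coefficients (NOT the Table 1 values).
a0 = 0.5
a = 0.1 * np.ones(10)   # cosine coefficients a_1..a_10
b = 0.1 * np.ones(10)   # sine coefficients  b_1..b_10

def representative_wave(theta: np.ndarray) -> np.ndarray:
    """Evaluate u(theta) = a0 + sum_k (a_k cos(k*theta) + b_k sin(k*theta))."""
    u = np.full_like(theta, a0)
    for k in range(1, 11):
        u += a[k - 1] * np.cos(k * theta) + b[k - 1] * np.sin(k * theta)
    return u

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
u_bar = representative_wave(theta)   # one sample per cam degree
```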
Normalized pulse pressure wave obtained by the Fourier series equations at the time of the normalized period $$ Radial\;Augmentation\;Index\;(AI) = \frac{Late\;Systolic\;Pulse\;Pressure}{Early\;Systolic\;Pulse\;Pressure} \times 100\; (\% ) $$ Fabrication of three-peak cam-based pulsation simulator In order to convert the Fourier series equations (Fig. 4) obtained from the human data continuously measured by RTS into a three-peak circular shape, the normalized period (Fig. 4) is converted to 360°, and the shape of the cam is schematized as shown in Fig. 5a. This schematized three-peak cam design was fabricated through a wire-cutting machining process of nonmagnetic and high-rigidity material, stainless-steel 304, as shown in Fig. 5b. Three-peak cam obtained from the Fourier series equations: a three-peak circular cam design, b fabricated three-peak cam To design a device capable of regenerating human-like pulse pressure using the fabricated three-peak cam, the cam was mounted on a DC motor (Maxon Motor, DCX 26 L), and a cylinder/piston module capable of repeating compression and tension according to the shape of the cam during rotation was mounted in connection with the cam, as shown in Fig. 6. Here, in order to measure and display the heart rate, a Hall sensor capable of measuring the rotational speed of the DC motor was installed. An additional small cylinder/piston module was installed to control the diastolic pressure by adjusting the amount of air in the cylinder. Developed three-peak cam-based affordable pulsation simulator In order to facilitate the RTS or human to detect the pulse pressure wave generated when the air in the cylinder is compressed by the piston connected to the cam, a silicon artificial blood vessel was connected to the end of the cylinder, and the blood vessel was supported by an artificial wrist bone for tonometry and was surrounded by silicone skin (3B Scientific, W19310). To monitor the air pressure inside the cylinder in real time, a small pressure sensor (Honeywell, 40PC006G) was connected to the cylinder by a tube, and the measured pulse pressure value was displayed on the LCD screen in real time. The microprocessor was built in the housing and was used to calculate the pulse pressure, diastolic pressure, and mean pressure from the measured pulse pressure waveform value in real time. These values were displayed on the screen. Evaluation of developed simulator using robotic tonometry system To verify that the developed cam-based pulsation simulator can accurately reproduce the average radial pulse profile of a human measured by RTS, the radial pulse generated at the wrist region of the simulator was measured again using RTS. Figure 7 shows the overall experimental setup for evaluating the developed simulator. The simulator's wrist part was laid and fixed on the base plate of the RTS similar to the location of the human arm. The developed simulator can modulate the pulse pressure and heart rate generated by the cam-based mechanism by changing the length of the air tube and the rotational speed of the cam. In the experiments, the simulator pulse pressure increased from 50 to 60 mmHg, and the heart rate values increased from 65 to 75 bpm. The generated radial pulse was measured by RTS. While working for 5 min, the simulator showed a repeatability of CV = 0.23% and CV = 0.82% for the heart rate and the pulse pressure, respectively. Evaluation of the developed simulator by the robotic tonometry system: a measurement setup, b measurement procedure As shown in Fig. 
8, since the artificial arm of the simulator was fixed on the base of the robotic tonometry device, the contact force direction of the pulse sensor could be kept constant when the pulse sensor surface angles with the gravitational axis were controlled by the constant target values α = − 5.0° and β = 2.0°. In the experiment, the two contact angles between the pulse sensor surface and the base plane of the simulator were controlled with error bounds of ± 0.21° and ± 0.37°, respectively. Representation of the definitions of the two vertical contact angles for defining the contact direction with the radial artery Figure 9 shows the raw data of the pulse wave measured from the time that the pulse sensor reached the artificial wrist surface to the pressurization of the artificial radial artery. The center of the pulse sensor was laid on the same contact point of the surface, and the radial artery was incrementally pressured until the maximum pulse pressure values were found. When the maximum pulse pressure was detected, the tonometry device maintained the contact force for about 30 s to reliably record the raw signals of the maximum pulse pressure of the radial artery pulse. Approximately 30 pulse waveforms obtained in the reliable region were averaged to analyze the dynamic characteristics between the artificial radial pulse and the reference signal obtained from clinical data. The raw signals of the artificial radial artery pulse wave: a detecting the maximum pulse pressure by increasingly pressurizing the radial artery, b recovering the pressurizing force values at the maximum pulse pressure, c recording the raw data of pulse wave at the maximum pulse pressure In order to analyze how accurately the developed cam-based pulsation simulator can regenerate the human representative radial pulse waveforms shown in Fig. 4, error analyses were performed among the representative pulse waveforms of the human (Fig. 4), the pulse wave measured by the RTS on the skin above the simulator wrist (Fig. 9), and pressure sensor outputs built into the simulator's vessel. First, these error analyses were performed by comparing the radial AI calculated from each radial pressure waveform. This is because the radial AI has a significantly high correlation with the central aortic AI, which is a very important indicator for predicting cardiovascular diseases such as atherosclerosis and vascular aging [5, 23, 24]. Next, these error analyses were also conducted by comparing the phase delay between the first peaks of the representative human pulse wave and the simulator's measured pulse data. The first peak of the radial artery pressure waveform was used to reconstruct the early systolic shoulder of the aortic pressure wave through a generalized transfer function [6]. Since the upstroke slope of the early systolic shoulder is related to the left ventricular contractility whose abnormality can initiate a clinically significant heart failure syndrome [25, 26], the slope of the first peak as well as the magnitude ratio (radial AI) were evaluated to be in good agreement with the upstroke slope of the representative human waveform. Radial augmentation index (AI) For the various comparative analyses shown in Fig. 10, the heart rate was adjusted to 65 bpm and 75 bpm by changing the rotational speed of the built-in motor in the simulator. The pulse pressure was regulated to 50 mmHg and 60 mmHg, respectively, by adjusting the internal volume of the simulator. 
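Since the radial AI of Eq. (3) is the primary metric in the comparisons that follow, a minimal sketch of how it can be computed from a single measured pulse is given below. Peak detection with `scipy.signal.find_peaks` and the toy two-peak pulse are assumptions for illustration, not the authors' processing.

```python
import numpy as np
from scipy.signal import find_peaks

def radial_augmentation_index(pulse: np.ndarray) -> float:
    """Eq. (3): late (second) systolic peak over early (first) systolic peak,
    in percent, with the diastolic foot of `pulse` taken as zero pressure."""
    peaks, _ = find_peaks(pulse)
    first, second = pulse[peaks[0]], pulse[peaks[1]]
    return 100.0 * second / first

# Toy two-peak pulse just to exercise the function (AI ~ 70%).
t = np.linspace(0.0, 1.0, 500, endpoint=False)
pulse = np.exp(-((t - 0.15) / 0.05) ** 2) + 0.7 * np.exp(-((t - 0.4) / 0.08) ** 2)
print(radial_augmentation_index(pulse))
```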
In each case, the measured pulses were stored using the RTS on the skin of the wrist of the simulator. At the same time, the pulses measured by a pressure sensor built into the simulator's silicone vessel were stored. In each case, the average waveforms were generated from the stored pulses, and then the magnitude and period were normalized to 1 and compared with the representative waveform of a human (Fig. 4) as shown in Fig. 10. Comparison of pulse waveforms measured on a human's wrist using RTS, on a simulator's wrist using RTS and inside simulator's vessel (tube) using a pressure sensor with varying heart rate and pulse pressure: a case with heart rate of 65 bpm and pulse pressure of 50 mmHg, b case with heart rate of 65 bpm and pulse pressure of 60 mmHg, c case with heart rate of 75 bpm and pulse pressure of 50 mmHg, d case with heart rate of 75 bpm and pulse pressure of 60 mmHg Figure 10a shows the results obtained by measuring the heart rate and pulse pressure of the simulator at 65 bpm and 50 mmHg, respectively. In the figure, the measured data with the RTS and pressure sensor were compared with the representative waveform of a human. Figure 10b shows the comparison results when the simulator heart rate and pulse pressure were set to 65 bpm and 60 mmHg, respectively, and Fig. 10c shows the comparison results at 75 bpm and 50 mmHg. Figure 10d shows the comparison results at 75 bpm and 50 mmHg. As shown in Fig. 10a, it was confirmed that the waveform measured by the RTS on the wrist of the simulator and the waveform measured by the pressure sensor are in good agreement with the human representative waveform. This result implies that the proposed three-peak cam generating the pressure waveform in the simulator is designed to accurately regenerate the human pulse waveform. As shown in Fig. 10b–d, similar trends were observed when the same comparisons were made by changing the heart rate and pulse rate. Figure 11a shows the error between the representative waveform of a human (Fig. 4) and the pressure waveform measured by a pressure sensor inside the simulator's vessel in terms of the radial AI. Here, since the error value of the radial AI is very small at less than 8.14E−3, it was confirmed that the radial AI values of both waveforms were matched well. On the other hand, in Fig. 11b, the error value of the radial AI between the representative waveform of a human (Fig. 4) and the waveform measured by the RTS on the skin of the wrist of the simulator was about 4.85E−2, which is relatively large. The reason why the error value (waveform measured by the RTS) is relatively large is that the proposed simulator generates the pressure waveform using air pressure instead of an incompressible liquid similar to blood. Because the compressibility of air is different from that of blood, the pressure waveform measured by the tonometry method using the RTS has a slightly different radial AI value from that of the human radial AI. Although this error value is larger than the value in Fig. 11a, this error value is small enough to conclude that the developed simulator can reproduce a pulse waveform similar to the human waveform while ensuring the value of the radial AI. Error comparison of radial augmentation index: a error between measured data inside simulator's vessel by a pressure sensor and human's data, b error between measured data on simulator's skin by RTS and human's data Phase Delay As shown in Fig. 
10, when comparing the upstroke slopes of early systolic pressure in the three measured waveforms, it can be seen that the slope of the representative pressure waveform of the human is the steepest. This is because the proposed simulator generates a pressure waveform by compressing and tensioning air instead of an incompressible liquid similar to blood. Owing to the nature of the air, which is a compressible fluid, the slope of the upstroke becomes less steep, resulting in a phase delay. To ensure that this phase delay effect is small enough to be ignored, an error analysis of the phase delay was performed among the representative pulse waveforms of a human, the pulse wave measured by the RTS on the skin above the simulator's wrist, and the pressure sensor outputs built into the simulator's vessel, as described in Fig. 10. Figure 12 shows the amplitude of Fourier transform at each frequency in the frequency domain when a discrete Fourier transform was applied to the pulse waveforms measured in the human and the simulator. The results of the discrete Fourier transform showed almost no difference between the amplitudes obtained from the human waveform and simulator waveform. The results also showed that the pulse waveforms measured in the human and the simulator had a dominant amplitude in the low-frequency range. Thus, we investigated the difference in phase angle at low frequencies of 1 Hz and 2 Hz to determine the phase delay between pulse waveforms. The result obtained by discrete Fourier transformation of the measured pressure waveforms in the human and the simulator If \( y^{h} \) is the pulse shape of a human, and \( y^{c} \) is the waveform generated by the cam simulator, the discrete Fourier transform is defined as follows: $$ \begin{aligned} \widehat{y}^{h} \left( f \right) = \mathop \sum \limits_{n = 1, \ldots ,N} y^{h} (x_{n} )e^{{ - \frac{2\pi i}{N}fn}} \hfill \\ \widehat{y}^{c} \left( f \right) = \mathop \sum \limits_{n = 1, \ldots ,N} y^{c} (x_{n} )e^{{ - \frac{2\pi i}{N}fn}} \hfill \\ \end{aligned} $$ Here, \( x_{n} = x/N \) (N = 200). If \( \widehat{y}^{h} \left( f \right) \) and \( \widehat{y}^{c} \left( f \right) \) in Eq. (4) are expressed in the complex domain, the angles determined by the real parts and imaginary parts are denoted by \( \theta \left( {\widehat{y}^{h} \left( f \right)} \right) \) and \( \theta \left( {\widehat{y}^{c} \left( f \right)} \right) \), respectively. Here, the phase angle delay is defined as Eq. (5): $$ Phase\;Angle \;Delay = \theta \left( {\widehat{y}^{h} \left( f \right)} \right) - \theta \left( {\widehat{y}^{c} \left( f \right)} \right) $$ At low frequencies of 1 Hz and 2 Hz, the phase angle delays are calculated as Eq. (6): $$ \begin{aligned} Phase\;Angle \;Delay_{{\left( {f = 1Hz} \right)}} = \theta \left( {\widehat{y}^{h} \left( 1 \right)} \right) - \theta \left( {\widehat{y}^{c} \left( 1 \right)} \right) \hfill \\ Phase\; Angle\; Delay_{{\left( {f = 2Hz} \right)}} = \theta \left( {\widehat{y}^{h} \left( 2 \right)} \right) - \theta \left( {\widehat{y}^{c} \left( 2 \right)} \right) \hfill \\ \end{aligned} $$ In four cases where the heart rate is adjusted to 65 bpm and 75 bpm and the pulse pressure is adjusted to 50 mmHg and 60 mmHg as shown in Fig. 10. Figure 13 illustrates the phase angle at \( f = 1 {\text{Hz and }}f = 2 {\text{Hz}} \) of each pulse waveform measured in the human and the simulator. In all figures, the difference in phase angle is positive, indicating that the phase of the waveform is delayed. 
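A minimal sketch of the phase-angle-delay computation of Eqs. (4)–(6) using NumPy's FFT is shown below. The two synthetic signals are placeholders, with the second deliberately delayed by 10° so that the function returns a known value; N = 200 follows the text.

```python
import numpy as np

def phase_angle_delay(y_h: np.ndarray, y_c: np.ndarray, f: int) -> float:
    """Phase angle of the human pulse minus that of the simulator pulse at
    DFT bin f (1 Hz or 2 Hz for one-second periods), returned in degrees."""
    Yh, Yc = np.fft.fft(y_h), np.fft.fft(y_c)
    return np.degrees(np.angle(Yh[f]) - np.angle(Yc[f]))

N = 200
n = np.arange(N)
y_h = np.sin(2.0 * np.pi * n / N)                      # toy "human" pulse
y_c = np.sin(2.0 * np.pi * n / N - np.deg2rad(10.0))   # delayed by 10 degrees
print(phase_angle_delay(y_h, y_c, f=1))                # ~ +10 degrees
```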
Phase delay comparison of waves of Fourier series of human pulse and pulse generated by CAM simulator at 1 Hz and 2 Hz: a case with heart rate of 65 bpm and pulse pressure of 50 mmHg, b case with heart rate of 65 bpm and pulse pressure of 60 mmHg, c case with heart rate of 75 bpm and pulse pressure of 50 mmHg, d case with heart rate of 75 bpm and pulse pressure of 60 mmHg Figure 14 shows the error of the phase angle delay owing to heart rate and pulse pressure changes. The error value of the phase delay of the waveform measured on the skin of the simulator (Fig. 14b) is larger than the error value of the phase delay of the waveform measured by the pressure sensor (Fig. 14a). The reason is that the proposed simulator produces a pressure waveform using air pressure with compressibility characteristics, which results in a decrease in the pressure transfer efficiency to the skin. Error comparison of Fourier phase angle delay at 1 Hz: a error between measured data inside simulator's vessel by a pressure sensor and human's data, b error between measured data on simulator's skin by RTS and human's data A maximum phase angle delay of 11.4° occurs at a heart rate of 65 bpm and a pulse pressure of 60 mmHg, as shown in Fig. 14b. This phase angle delay is small enough to be 3.2% when converted to a percentage error in one cycle (360°). As a result, the phase delay effect caused by using air instead of incompressible liquid in the proposed simulator is sufficiently small, thus proving that a very accurate pulse pressure waveform can be reproduced using the air-based three-peak cam simulator. In this study, a radial pulsation simulator equipped with a cam mechanism was developed and tested. The developed simulator employed a pneumatic-driven mechanism to avoid the problems of liquid-driven devices, such as sporadic reflections of pressure waves, bubbles, and leakage. To design the cam profile, human pulse waveforms measured by a robotic tonometry system were mathematically modeled as one representative waveform. The representative waveform for a 20-year old was then converted into the circular cam profile. A cam design with three peak points was machined and mounted on a simulator, consisting of a rotating motor, a cylinder/piston module, an artificial wrist, and an LCD display module. The experimental results show that the proposed cam simulator can reproduce human representative waveforms with considerably small errors for the radial augmentation index and the phase delay effect with a maximum of 4.9% and 3.2%, respectively. In summary, this study successfully developed a radial pulsation simulator based on a cam mechanism, and it demonstrated that the prototype simulator can accurately reproduce and control radial pulse waveforms, contributing to the advancement of a radial simulator that is cost-effective, portable, and reliable. Further work will be focusing on establishing representative radial pulse waveforms according to ages, race, and health conditions, and fabricating a cam with multiple peaks. Moreover, applications of the radial pulsation simulators equipped with a multiple-peak cam will be explored, including the verification of the computational simulation of blood flow, evaluation of wearable pressure sensors, studying the transfer functions or relationships between the radial and central pulse pressures, and training the pulse diagnosis of oriental medicine. Li C, Xiong H, Pirbhulal S, Wu D, Li Z, Huang W, Zhang H, Wu W. 
T-HY wrote the manuscript and designed and developed the pulse generation system; GJ mathematically modeled the radial artery pulse waveform and analyzed the performance of the developed system; J-HK conducted the experiments for evaluating the simulator; S-YW designed and developed the three-peak cam-follower mechanism; JUK designed the procedure for acquiring radial artery pulse waves from the robotic pulse tonometry device; Y-MK managed the experiments and analysis of the developed system and contributed to writing and revising the manuscript. All authors read and approved the final manuscript.
This work was supported by a grant (K18022) from the Korea Institute of Oriental Medicine (KIOM), funded by the Korean government. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Author affiliations: Tae-Heon Yang, Department of Electronic Engineering, Korea National University of Transportation, Chungju-si, Chungbuk, Republic of Korea; Gwanghyun Jo, Department of Mathematical Sciences, KAIST, Daejeon, Republic of Korea; Jeong-Hoi Koo, Department of Mechanical and Manufacturing Engineering, Miami University, Oxford, OH, USA; Sam-Yong Woo, Center for Mechanical Metrology, KRISS, Daejeon, Republic of Korea; Jaeuk U. Kim and Young-Min Kim, Future Medicine Division, Korea Institute of Oriental Medicine (KIOM), 1672 Yuseongdaero, Yuseong-gu, Daejeon, 34054, Republic of Korea. Correspondence to Young-Min Kim.
Yang, TH., Jo, G., Koo, JH. et al. A compact pulsatile simulator based on cam-follower mechanism for generating radial pulse waveforms. BioMed Eng OnLine 18, 1 (2019). https://doi.org/10.1186/s12938-018-0620-3
Keywords: radial pulsation simulator; radial artery pressure waveform; augmentation index
Semi-analytical assessment of the relative accuracy of the GNSS/INS in railway track irregularity measurements
Qijin Chen, Quan Zhang, Xiaoji Niu (ORCID: orcid.org/0000-0002-5591-0859) & Jingnan Liu. Satellite Navigation, volume 2, Article number: 25 (2021).
An aided Inertial Navigation System (INS) is increasingly exploited in precise engineering surveying, such as railway track irregularity measurement, where a high relative measurement accuracy rather than absolute accuracy is emphasized. However, how to evaluate the relative measurement accuracy of the aided INS has rarely been studied. We address this problem with a semi-analytical method to analyze the relative measurement error propagation of the Global Navigation Satellite System (GNSS) and INS integrated system, specifically for the railway track irregularity measurement application. The GNSS/INS integration in this application is simplified as a linear time-invariant stochastic system driven only by white Gaussian noise, and an analytical solution for the navigation errors in the Laplace domain is obtained by analyzing the resulting steady-state Kalman filter. Then, a time series of the error is obtained through a subsequent Monte Carlo simulation based on the derived error propagation model. The proposed analysis method is then validated through data simulation and field tests. The results indicate that a 1 mm accuracy in measuring the track irregularity is achievable for the GNSS/INS integrated system. Meanwhile, the influences of the dominant inertial sensor errors on the final measurement accuracy are analyzed quantitatively and discussed comprehensively.
The integration of a Global Navigation Satellite System (GNSS) and an Inertial Navigation System (INS) has been widely used in weapon guidance, aviation engineering, and land mobile mapping to provide accurate georeferencing (Liu et al., 2020; El-Sheimy and Youssef, 2020). Attention is also paid to the outstanding short-term relative measurement accuracy of the INS in inertial surveying applications (Zhang et al., 2013; Zhu et al., 2019; Zhang et al., 2020). For example, an INS aided by GNSS and an odometer has been successfully applied in precise engineering surveying, such as railway track irregularity measurement and road surface roughness measurement (Chen et al., 2015; Chen et al., 2018; Niu et al., 2016). The focus and accuracy requirements are very different for these two kinds of applications. The first kind pays more attention to the absolute accuracy, which is dominated by the mid-term and long-term error components, while railway track irregularity measurement, a typical precise inertial surveying application, is more concerned with the temporal or spatial relative measurement accuracy, i.e., the smoothness of the estimated trajectory, as illustrated in Fig. 9 in the appendix. This difference is made more explicit in the example depicted in Fig. 1. The upper panel in this figure depicts two typical samples from first-order Gauss–Markov processes with the same covariance but different correlation times. Consider two positioning apparatuses that are corrupted by these two error processes. In this scenario, we would conclude that they have the same accuracy for georeferencing because the two stochastic processes have the same second moment, i.e., the same covariance matrix.
In addition, it is obvious that the correlation between the values of \(y(t_1)\) and \(y(t_2)\) is higher than that between \(x(t_1)\) and \(x(t_2)\) for any two instants \(t_1\) and \(t_2\), i.e., x exhibits more rapid variations in magnitude than y. Physically, we would then expect the apparatus corrupted by error process y to have better accuracy in measuring railway track irregularities, as depicted in the lower panel and discussed in the appendix. This example reveals that the covariance matrix and the propagation models of the aided INS navigation errors do not contain the information describing the temporal correlation characteristics, which actually determines the relative measurement accuracy.
Different correlation characteristics of two random processes. The upper subplots are two typical samples from the first-order Gauss–Markov process \({\dot{\alpha }}(t) = -\frac{1}{T}\alpha (t) + w(t)\), with \(w(t)\sim N(0, 2 \sigma ^2/T )\), where x and y have the same parameter \(\sigma\) but different correlation times T; the lower two are defined as \(\Delta \alpha (t) = \alpha (t) - \alpha (t+\Delta t)\)
For research on precise railway track irregularity measurement by an aided INS, the following two questions are often asked. Question 1: Is it possible to achieve 1 mm accuracy in track irregularity identification when the carrier-phase differential GNSS/INS can only provide centimeter-level accuracy? Question 2: What accuracy can be expected with a Track Geometry Measuring Trolley (TGMT, as shown in Fig. 10) that uses the given GNSS/INS integrated system? These two questions are critical for the feasibility study and system design of a TGMT based on an aided INS. The example presented in Fig. 1 intuitively illustrates why the answer to the first question is positive, while obtaining insights into these questions requires analyzing the relative measurement error propagation of the aided INS. Previous studies have concentrated almost exclusively on the absolute accuracy analysis, for example, the propagation of the covariance matrix \({\mathbf {P}}_e\) of the navigation errors, as discussed in the appendix. The time history of \({\mathbf {P}}_{e}\) portrays how the ensemble errors accumulate with time. However, this kind of analysis cannot answer the above two questions. As discussed in the appendix, it is \({\mathbf {P}}_{\Delta e}\) in (A.7) rather than \({\mathbf {P}}_{e}\) that characterizes the track irregularity measurement accuracy, and this information is not contained in \({\mathbf {P}}_{e}\). Therefore, it is impossible to evaluate the performance of a TGMT system by studying the time history of its covariance matrix. The main differences between the present research and previous studies lie in the following: (1) The relative measurement accuracy, instead of the absolute error budget, of the GNSS/INS integrated system determines its performance in railway track irregularity parameter identification. Hence, we care not only about the error budget but also about the correlation characteristics of the navigation errors. (2) A semi-analytic approach is proposed to analyze the relative measurement accuracy of the GNSS/INS system, based on which the effects of the principal inertial sensor errors on the final railway track irregularity measurement accuracy are quantitatively evaluated.
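The intuition behind Fig. 1 and Question 1 can be reproduced with a few lines of Python: two first-order Gauss–Markov processes with identical variance but different correlation times have the same absolute error statistics, yet very different statistics of the differenced error. The parameter values below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n, sigma = 0.01, 200_000, 1.0          # time step (s), number of samples, process std
lag = int(1.0 / dt)                        # compare values 1 s apart

def gauss_markov(T):
    """Exact discrete recursion of a first-order Gauss-Markov process with stationary std sigma."""
    phi = np.exp(-dt / T)
    w = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), n)   # driving noise keeps std(x) = sigma
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + w[k]
    return x

for T in (5.0, 500.0):                     # short vs. long correlation time
    x = gauss_markov(T)
    dx = x[:-lag] - x[lag:]                # differenced ("relative") error over a 1 s interval
    print(f"T = {T:6.1f} s: std(x) = {x.std():.3f}, std(x(t) - x(t+1s)) = {dx.std():.3f}")
```

Both processes report the same std of the absolute error, while the slowly varying one gives a much smaller differenced-error std, which is exactly why a centimeter-level positioning error with long correlation can still support millimeter-level irregularity measurement.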
The relative measurement accuracy analysis of an aided INS has rarely been studied, but with the emergence of precise engineering surveying using inertial techniques, it has received increasing attention. In our previous work (Zhang et al., 2020; Zhang et al., 2017), we pointed out the importance of relative measurement accuracy in some mobile mapping systems and studied the short-term relative measurement accuracy of the GNSS/INS system by reading the Allan variance plots of the positioning error samples of a real GNSS/INS system. The errors in the position, velocity and attitude solution of a GNSS/INS integrated system arise from different sources, including alignment errors, inertial sensor errors, computational inaccuracies, and imperfections in the navigation aids. Navigation error propagation is also affected by the host vehicle trajectory. It is well known that a complete determination of GNSS/INS navigation error propagation is a complex problem, and an analytical solution is available only for some extremely simple cases. Maybeck (1978) analyzed the navigation error of an INS aided by position data in a simplified channel. His work suggests that an analytic, or at least semi-analytic, performance assessment of a GNSS/INS integrated system specifically for railway track irregularity measurement is possible, because the host vehicle, i.e., the railway track trolley, moves along a simple and previously known trajectory and experiences only low-dynamic motion during the surveying procedure. The remainder of this paper is organized as follows. We first present a GNSS/INS integration algorithm specifically designed for railway track irregularity measurement and explicitly define the railway track irregularity measurement errors. Then, the system dynamic and measurement equations are simplified according to the railway track measurement conditions, resulting in a time-invariant linear system. Subsequently, the relative measurement accuracy is analyzed with a semi-analytic approach. The relative error propagation analysis is then validated by a simulation. Finally, we use the semi-analytic method to quantitatively analyze the effects of the principal inertial sensor errors on the railway track irregularity measurement. For the railway track irregularity measurement application, the GNSS carrier phase observations are postprocessed in the differential mode to provide positions accurate to the centimeter level, and the results are then fused with the Inertial Measurement Unit (IMU) data in a loosely coupled architecture. The speed measurements from odometers and a Nonholonomic Constraint (NHC) are used to enhance the navigation performance. In the following, the design of a navigation Kalman Filter (KF) for loose integration of the GNSS and INS is sketched, and the track irregularity measurement errors are explicitly defined. Used coordinate systems We first define the coordinate systems frequently used in this work. the navigation frame (n-frame): a local geographic reference frame whose origin coincides with the IMU measurement center, x-axis points toward geodetic north, z-axis is down-pointing along the ellipsoid normal, and y-axis is directed east to form a right-handed frame, i.e., the North-East-Down (NED) system. the body frame (b-frame): an IMU body frame whose axes are the same as the IMU's body axes; it is the frame in which the accelerations and angular rates generated by the strapdown accelerometers and gyroscopes are resolved.
the vehicle frame (v-frame): the host vehicle frame, whose x-axis is along the vehicle's forward direction, z-axis points downward, and y-axis is directed outward to form a right-hand system. System models For the GNSS/INS integrated KF design, the error state vector is defined as $$\begin{aligned} \varvec{x}(t) = \left[ \begin{matrix} (\delta \varvec{r}^n)^\text {T}&(\delta \varvec{v}^n)^\text {T}&\varvec{\phi }^\text {T}&\delta \varvec{b}_g^\text {T}&\delta \varvec{b}_a^\text {T} \end{matrix} \right] ^\text {T} \end{aligned}$$ In this definition, \(\delta \varvec{r}^n = [\begin{matrix} \delta r_N&\delta r_E&\delta r_D \end{matrix}]^\text {T}\) and \(\delta \varvec{v}^n = [\begin{matrix} \delta v_N&\delta v_E&\delta v_D \end{matrix}]^\text {T}\) are the INS-derived position and velocity errors in the n-frame, respectively, and \(\delta h = -\delta r_D\). \(\varvec{\phi } = [\begin{matrix} \phi _N&\phi _E&\phi _D\end{matrix}]^\text {T}\) is the three-dimensional attitude error vector, including tilt errors and the azimuth error (Benson , 1975). \(\delta \varvec{b}_g\) and \(\delta \varvec{b}_a\) are the errors of the gyroscope and accelerometer biases, respectively. The gyroscope and accelerometer scale factor and cross-coupling errors are not included in this error state vector, for their influence on the final navigation solution depends on the host vehicle maneuvers. The TGMT experiences only low and weak maneuvers when it moves along the rails, and therefore the scale factor and cross-coupling error of the high-grade IMU can be safely neglected. The error state vector differential equation is written as $$\begin{aligned} \dot{\varvec{x}}(t) = {\mathbf {F}}(t)\varvec{x}(t) + {\mathbf {G}}(t)\varvec{w}(t) \end{aligned}$$ where \({\mathbf {F}}\) is the system matrix describing the system dynamics, \({\mathbf {G}}\) is the system noise distribution matrix, and \(\varvec{w}\) is the system noise vector. To obtain the model for an aided INS system the time derivative of each state variable is calculated. The position, velocity and attitude error differential equations are $$\begin{aligned} \delta \dot{\varvec{r}}^n= & {} -\varvec{\omega }_{en}^n \times \delta \varvec{r}^n +\delta \varvec{\theta }\times \varvec{v}^n + \delta \varvec{v}^n \end{aligned}$$ $$\begin{aligned} \delta \dot{\varvec{v}}^n= & {} ~{\mathbf {C}}_b^{n}\delta \varvec{f}^b + \varvec{f}^n \times \varvec{\phi } - (2\varvec{\omega }_{ie}^n + \varvec{\omega }_{en}^n)\times \delta \varvec{v}^n \nonumber \\&+ \varvec{v}^n \times (2\delta \varvec{\omega }_{ie}^n + \delta \varvec{\omega }_{en}^n) +\delta \varvec{g}_p^n \end{aligned}$$ $$\begin{aligned} \dot{\varvec{\phi }}= & {} - \varvec{\omega }_{in}^n \times {\varvec{\phi }} +\delta \varvec{\omega }_{in}^n- {\mathbf {C}}_b^n \delta \varvec{\omega }_{ib}^b \end{aligned}$$ where \(\varvec{\omega }_{en}^n\) is the angular rate vector of the n-frame with respect to the earth frame in the n-frame and \(\delta \varvec{\theta } = [\begin{matrix} \delta \lambda \cos \varphi&-\delta \varphi&\delta \lambda \sin \varphi \end{matrix}]^\text {T}\), where \(\varphi\) denotes the latitude and \(\delta \varphi\) and \(\delta \lambda\) are the errors in the latitude and longitude, respectively. 
\({\mathbf {C}}_b^n\) is the b-frame to n-frame coordinate transformation matrix; \(\varvec{f}^b\) is the specific force vector in the b-frame, where \(\delta \varvec{f}^b\) is its error vector in the b-frame and \(\varvec{f}^n\) denotes the specific force in the n-frame; \(\varvec{\omega }_{ie}^n\) is the earth angular rotation rate vector in the n-frame; \(\varvec{v}^n\) is the velocity vector in the n-frame; \(\delta \varvec{\omega }_{ie}^n\) and \(\delta \varvec{\omega }_{en}^n\) denote errors of \(\varvec{\omega }_{ie}^n\) and \(\varvec{\omega }_{en}^n\), respectively; \(\delta \varvec{g}_p^n\) is the local gravity error vector in the n-frame; \(\varvec{\omega }_{in}^n\) is the angular velocity vector, \(\varvec{\omega }_{in}^n = \varvec{\omega }_{ie}^n + \varvec{\omega }_{en}^n\) with the corresponding error denoted by \(\delta \varvec{\omega }_{in}^n\); and \(\delta \varvec{\omega }_{ib}^b\) refers to the gyro measurement errors. More details on the above three equations can be found in (Benson , 1975; Shin , 2005). The residual inertial sensor errors are modeled as $$\begin{aligned} \delta \varvec{f}^b = \delta \varvec{b}_a + \varvec{w}_a \end{aligned}$$ $$\begin{aligned} \delta \varvec{\omega }_{ib}^b = \delta \varvec{b}_g + \varvec{w}_g \end{aligned}$$ where \(\varvec{w}_a\), \(\varvec{w}_g\) are three-element vectors representing the white noise component of the accelerometers and the gyro measurements, respectively. The residual biases of the gyros and accelerometer are modeled as the first-order Gauss–Markov process $$\begin{aligned} \delta \dot{\varvec{b}}_g= & {} -\frac{1}{T_{gb}} \varvec{b}_g + \varvec{w}_{gb} \end{aligned}$$ $$\begin{aligned} \delta \dot{\varvec{b}}_a= & {} -\frac{1}{T_{ab}} \varvec{b}_a + \varvec{w}_{ab} \end{aligned}$$ where \(T_{gb}\) and \(T_{ab}\) are the correlation times and \(\varvec{w}_{gb}\) and \(\varvec{w}_{ab}\) are the corresponding driving white noise of strength \(2 \sigma ^2/T\), with \(\sigma\) being the root mean squared value of the process. Measurement models For a TGMT the available external measurements include the GNSS-derived position and the speed from the odometer and NHC. The models for these observables are given below. GNSS position update The position of a GNSS antenna is related to the INS position by taking into account the lever arm as follows [(Teunissen and Montenbruck , 2017), p. 831]: $$\begin{aligned} \varvec{r}_{G} = \varvec{r}_{I} + {\mathbf {D}}^{-1} {\mathbf {C}}_b^n \varvec{l}^b \end{aligned}$$ where \(\varvec{r}_{G}\) and \(\varvec{r}_{I}\) are the position vectors of the GNSS antenna phase center and the IMU measurement center, respectively, which are expressed as the latitude \(\varphi\), longitude \(\lambda\) and height h. \(\varvec{l}^b\) is the lever arm from the IMU center to the GNSS antenna phase center in the b-frame, which can be accurately measured. \({\mathbf {D}}^{-1} = \text {diag}\left( \left[ \frac{1}{R_M+h} \frac{1}{\left( R_N+h\right) \cos \varphi } -1\right] ^\text {T}\right)\), a diagonal matrix that converts the delta position in meter to delta latitude, delta longitude in radians and delta height in meter, where \(R_M\) and \(R_N\) are, respectively, the meridian and prime vertical radii of the curvature. 
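As a small numerical illustration of Eq. (10), the following Python sketch maps a body-frame lever arm to the GNSS antenna position in curvilinear coordinates. The WGS-84 constants, the example lever arm, and the function names are assumptions introduced here for illustration, not the authors' software.

```python
import numpy as np

A, E2 = 6378137.0, 6.69437999014e-3          # WGS-84 semi-major axis (m) and eccentricity^2

def radii(lat):
    """Meridian (R_M) and prime-vertical (R_N) radii of curvature at latitude lat (rad)."""
    s2 = np.sin(lat)**2
    R_N = A / np.sqrt(1.0 - E2 * s2)
    R_M = A * (1.0 - E2) / (1.0 - E2 * s2)**1.5
    return R_M, R_N

def antenna_position(r_imu, C_b_n, l_b):
    """Eq. (10): r_G = r_I + D^{-1} C_b^n l^b, with r = [lat(rad), lon(rad), h(m)]."""
    lat, lon, h = r_imu
    R_M, R_N = radii(lat)
    d_ned = C_b_n @ l_b                       # lever arm expressed in the n-frame (NED)
    D_inv = np.diag([1.0 / (R_M + h), 1.0 / ((R_N + h) * np.cos(lat)), -1.0])
    return r_imu + D_inv @ d_ned

r_imu = np.array([np.radians(30.0), np.radians(114.0), 20.0])   # IMU lat, lon, height
C_b_n = np.eye(3)                             # level, north-pointing trolley (assumption 1)
l_b   = np.array([0.2, 0.1, -1.0])            # example body-frame lever arm (m), illustrative
print(antenna_position(r_imu, C_b_n, l_b))    # predicted antenna lat, lon, height
```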
Then, the measurement innovation vector comprises the difference between the GNSS- and the INS-derived positions, and the measurement model can be derived by perturbation as $$\begin{aligned} \begin{aligned} \varvec{z}_r&= {\mathbf {D}}\left( \hat{\varvec{r}}_{G} - \tilde{\varvec{r}}_{G} \right) \\&= \delta \varvec{r}^n + ({{\mathbf {C}}}_b^n \varvec{l}^b)\times \varvec{\phi } + \varvec{n}_{r} \end{aligned} \end{aligned}$$ where \(\hat{\varvec{r}}_{G}\) is the estimated position of the GNSS antenna center, \(\tilde{\varvec{r}}_{G}^n\) is the position measurement provided by the GNSS receiver, \(\varvec{n}_{r}\) denotes the GNSS position measurement noise in meter, modeled as Gaussian white noise with \(\varvec{n}_r\sim N({\mathbf {0}}, {\mathbf {R}}_r)\), which is adequate if the GNSS sampling rate is below 1 Hz (Niu et al. , 2014). Velocity update in the vehicle frame The host vehicle, i.e., the TGMT, is a specifically designed trolley (as depicted in Fig. 10) that differs from civilian land vehicles since it cannot change its course arbitrarily. It can keep rigid contact with the rails in both lateral and vertical directions when moving along the track and conforms to the NHC quite well even when railway track deformation exists. More details on the TGMT can be found in (Chen et al. , 2015; Chen et al. , 2018). The trolley has only an along-track speed, which can be obtained from an odometer sensor, and the velocities in both cross-track and vertical directions are zero. Therefore, the velocity measurement in the v-frame can be written as $$\begin{aligned} \tilde{\varvec{v}}^v_\text {wheel}= & {} \left[ \begin{matrix} v_\text {odo}&0&0 \end{matrix}\right] ^\text {T} \end{aligned}$$ $$\begin{aligned} \tilde{\varvec{v}}^v_\text {wheel}= & {} \varvec{v}^v_\text {wheel} - \varvec{n}_{v} \end{aligned}$$ where \(\tilde{\varvec{v}}^v_\text {wheel}\) denotes the velocity measurement vector in the v-frame; \(v_\text {odo}\) represents the along-track velocity derived from the odometer output, and \(\varvec{n}_{v}\) is the velocity measurement noise with \(\varvec{n}_v\sim N({\mathbf {0}}, {\mathbf {R}}_v)\). Note that the noise strength of the last two components of \(\varvec{n}_{v}\) are different from that of the first component and should be set according to the NHC condition. The relationship between the velocities of the trolley wheel and the IMU is expressed as $$\begin{aligned} \varvec{v}_\text {wheel}^v = {\mathbf {C}}_b^v {\mathbf {C}}_n^b \varvec{v}^n + {\mathbf {C}}_b^v (\varvec{\omega }_{eb}^b \times ) \varvec{l}_\text {wheel}^b \end{aligned}$$ where \(\varvec{l}_\text {wheel}^b\) is the lever-arm vector from the IMU measurement center to the wheel sensor in the b-frame; \({\mathbf {C}}_b^v\) is the b-frame to v-frame coordinate transformation matrix, which can be computed from the misalignment angles of the v-frame with respect to the b-frame; and the estimated velocity at the wheel point is denoted by \(\hat{\varvec{v}}_\text {wheel}^v\). 
The v-frame velocity error measurement model can be expressed as $$\begin{aligned} \begin{aligned} \varvec{z}_\text {v}&= \hat{\varvec{v}}_\text {wheel}^v -\tilde{\varvec{v}}^v_\text {wheel} \\&= {\mathbf {C}}_b^v {\mathbf {C}}_n^b \delta \varvec{v}^n - {\mathbf {C}}_b^v {\mathbf {C}}_n^b(\varvec{v}^n \times ) \varvec{\phi } \\&\quad - {\mathbf {C}}_b^v(\varvec{l}_\text {wheel}^b \times ) \delta \varvec{\omega }_{ib}^b + \varvec{n}_{v} \end{aligned} \end{aligned}$$ Errors of the optimal position estimates In the GNSS/INS integrated solution, the optimal position estimate is obtained by subtracting the optimal estimate of the INS-derived position error from the INS-derived position solution as $$\begin{aligned} \hat{\varvec{r}}_\text {est}(t) = \hat{\varvec{r}}(t) - \hat{\delta \varvec{r}}(t) \end{aligned}$$ where \(\hat{\varvec{r}}_\text {est}(t)\) denotes the optimal position estimate from the aided INS, with the subscript ' est' indicating the optimal estimate; \(\hat{\varvec{r}}(t)\) is the INS-indicated position solution, which contains errors \(\delta \varvec{r}\); and \(\hat{\delta \varvec{r}}\) is the optimal estimate of \(\delta \varvec{r}\) from the data-fusion KF. The error in the position estimate, denoted by \(\delta \hat{\varvec{r}}_\text {est}\), is defined as $$\begin{aligned} \delta \hat{\varvec{r}}_\text {est}(t) \triangleq \hat{\varvec{r}}_\text {est}(t) - \varvec{r}_\text {real}(t) \end{aligned}$$ The INS-derived position error is defined as $$\begin{aligned} \delta \varvec{r}(t) = \hat{\varvec{r}}(t) - \varvec{r}_\text {real}(t) \end{aligned}$$ where \(\varvec{r}_\text {real}(t)\) is the true position vector of the IMU. The best estimate \(\hat{\delta \varvec{r}}(t)\) also contains an estimation error, represented as $$\begin{aligned} \delta [\hat{\delta \varvec{r}}(t)] \triangleq \hat{\delta \varvec{r}}(t) - \delta \varvec{r}(t) \end{aligned}$$ Substituting (15) and (17) into (16) yields $$\begin{aligned} \delta \hat{\varvec{r}}_\text {est}(t) = \delta \varvec{r}(t) - \hat{\delta \varvec{r}}(t) \end{aligned}$$ A comparison of (18) and (19) shows that the error of the estimated position of the integrated system is exactly the estimation error of the position state variable \(\delta \varvec{r}(t)\), i.e., \(\delta \hat{\varvec{r}}_\text {est}(t) = -\delta [\hat{\delta \varvec{r}}(t)]\). Thus, \(\delta \hat{\varvec{r}}_\text {est}(t)\) is used in the following error propagation modeling. Errors of the track irregularity estimates As introduced in the appendix, the alignment and vertical irregularities of the railway track are both defined as differential versines in (A.2), namely, track irregularity is a relative quantity. The track irregularity measurement error with the GNSS/INS system should be evaluated by the relative navigation error as defined in (A.5). 
Assuming the trolley moves at a constant speed, the railway track irregularity measurement error can be computed as differential navigation errors, denoted by \(\triangledown \hat{\varvec{r}}_\text {est}\): $$\begin{aligned} \triangledown \hat{\varvec{r}}_\text {est}(t, \omega _i) = \delta \hat{\varvec{r}}_\text {est}(t, \omega _i) - \delta \hat{\varvec{r}}_\text {est}(t+\Delta t, \omega _i) \end{aligned}$$ where \(t \in \text {T}\), \(\omega _i \in \Omega\); \(\delta \hat{\varvec{r}}_\text {est}\) is a stochastic process that is a function defined on the product space \(\text {T}\times \Omega\), where \(\Omega\) is a fundamental sample space and \(\text {T}\) is a subset of the real line denoting a time set of interest. Therefore, \(\triangledown \hat{\varvec{r}}_\text {est}\) is also a stochastic process, and contains all necessary information to completely describe the railway track irregularity measurement accuracy of a TGMT. It is clear that the objective of the performance analysis of a TGMT based on an aided INS in measuring the track irregularity is to completely characterize the relative error process \(\triangledown \hat{\varvec{r}}_\text {est}(t, \omega )\). In the subsequent analysis, we study the second moment, i.e., covariance matrix, of \(\triangledown \hat{\varvec{r}}_\text {est}\) to evaluate the measurement performance of a given TGMT. The covariance matrix of the measurement errors is defined as $$\begin{aligned} {\mathbf {P}}_\text {ir}(t) \triangleq E\{\left[ \triangledown \hat{\varvec{r}}_\text {est}(t) - \varvec{m}_\text {ir}(t)\right] \left[ \triangledown \hat{\varvec{r}}_\text {est}(t) - \varvec{m}_\text {ir}(t)\right] ^\text {T} \} \end{aligned}$$ $$\begin{aligned} \varvec{m}_\text {ir}(t) \triangleq E\{\triangledown \hat{\varvec{r}}_\text {est}(t,\cdot )\} \end{aligned}$$ Proposed method Equations (3) to (5) describe the INS error dynamics using a set of nonhomogeneous differential equations with time-varying coefficients. The full determination of the INS error propagation is too complicated when taking into account the real trajectories and maneuvers. In this case, we generally use a simulation approach to aid the analysis (Titterton and Weston , 2004; Groves , 2008). The situation becomes much more complex for an aided INS because the integrated navigation solution is affected not only by INS error sources but also by the external measurement noises. Fortunately, in the railway track surveying application, the nominal trajectory and motion of the host track trolley are simple and almost deterministic, which reduces the complexity and makes the analytic assessment possible. Figure 2 depicts a flowchart of the semi-analytic assessment of the relative measurement accuracy with the aided INS in measuring railway track irregularity. The procedure consists of an analytic assessment phase (the upper part of the figure) and a Monte Carlo simulation phase (the lower part). First, the system model and measurement model are simplified according to the several assumptions that hold in the specific railway surveying scenario. Then the coupling between the channels of the INS error model vanishes, and the error state dynamics is reduced to a linear time-invariant stochastic system driven only by white Gaussian noise, as enclosed by the dashed lines in the figure. The simplified INS error differential equations are solved in the Laplace domain, where \({\mathbf {H}}_\text {INS}(s)\) represents the transfer function. 
The Laplace transformation of the steady-state KF estimator is performed to yield the optimal estimate, where \({\mathbf {H}}_\text {KF}(s)\) characterizes the transfer function. An analytical solution of the navigation errors in the Laplace domain, i.e., \(\delta \hat{\varvec{r}}(s)\), is obtained as the difference between \(\delta \varvec{r}(s)\) and \(\hat{\delta \varvec{r}}(s)\). The Monte Carlo simulation part, enclosed in the gray box, is performed as follows: create a transfer function model based on the derived \(\delta \hat{\varvec{r}}(s)\) model; then, use the simulated white noise series as the input to generate the navigation error samples, i.e., the system responses; and finally, evaluate the relative measurement accuracy based on the output navigation error samples. More details on the simulation part are given in subsequent sections. Flowchart of the semi-analytical analysis of the relative measurement accuracy of the GNSS/INS integrated system Simplification for the railway measurement case The track trolley is assumed to move uniformly in a straight line in the south-north direction. This assumption is based on the following facts: High-speed railway track is usually designed with a very large radius of curvature in both the horizontal and vertical profiles and with a slope gradient smaller than 1.5%. The longest chord in railway track irregularity measurements does not exceed 300 m; thus, it is reasonable to simplify such a short curve section to a straight line and to assume that the host vehicle moves uniformly in south-north direction and remains level, in which case the attitude matrix \({\mathbf {C}}_b^n = {\mathbf {I}}\). As defined in appendix, the horizontal railway track deformation refers to the deviation of rails from its nominal position in the lateral and vertical directions. When the trolley moves in the south-north direction, we can evaluate the horizontal and vertical irregularity measurement accuracy by directly analyzing the east and vertical positioning errors, respectively. Thus, this assumption would simplify the following analysis. In addition, the trolley moves at a low speed, and its attitude changes slowly, which makes the dynamics-induced errors (such as the effects of the scale factor and cross-coupling) negligible. The IMU is mounted with its sensitive axes perfectly aligned with the host vehicle axes, i.e., the b-frame is coincident with the v-frame. In this case \({\mathbf {C}}_b^v\) becomes the identity matrix. This assumption is reasonable because the IMU mounting angles can be estimated and compensated for with sufficient accuracy, as discussed in (Chen et al. , 2020) Lever arms of the GNSS antenna and odometer are all zeros, and GNSS positions with centimeter accuracy, obtained by the postprocessed kinematic, are available all the time, as the lever arms can be measured with sufficient accuracy and the related effect can be corrected in practice. The local gravity uncertainty is assumed to be much smaller than the accelerometer measurement error and is therefore negligible. System model simplification The performance of the aided INS is also influenced by the real trajectories and maneuvers. We list the key information on the trajectory, inertial sensor errors, and absolute navigation error as follows: mean position of the trajectory: latitude = 30\(^\circ\), longitude = 114\(^\circ\), h = 20 m. mean absolute positioning error: \(\delta r_N\) = 0.01 m, \(\delta r_E\) = 0.01 m, \(\delta r_D\) = 0.02 m. 
velocity: \(v_N\) = 1 m/s, \(v_E\) = 0 m/s, \(v_D\) = 0 m/s. mean velocity error: \(\delta v_N\) = 0.002 m/s, \(\delta v_E\) = 0.002 m/s, \(\delta v_D\) = 0.002 m/s. mean attitude error: \(\phi _N\) = 0.003\(^\circ\), \(\phi _E\) = 0.003\(^\circ\), \(\phi _D\) = 0.005\(^\circ\). gyro bias: \(\delta \omega _{ib}^b\) = 0.01 \(^\circ /h\). accelerometer bias: \(\delta f^b\) = 10 mGal. local gravity value: 9.78 m/s\(^2\). We can calculate the magnitude of each term in the INS error differential equations by substituting the parameters listed above. The error terms with a magnitude smaller than 10% of the predominant term are considered insignificant terms and thus are neglected in the following analysis. The inertial sensor error terms \(\delta \varvec{\omega }_{ib}^b\) and \(\delta \varvec{f}^b\) are always kept. As a result, the INS error differential equations from (3) to (5) can be simplified as $$\begin{aligned} {\left\{ \begin{array}{ll} \delta {\dot{r}}_N = \delta v_N \\ \delta {\dot{r}}_E = \delta v_E \\ \delta {\dot{r}}_D = \delta v_D \\ \delta {\dot{v}}_N = -f_D \phi _E + \delta f_N \\ \delta {\dot{v}}_E = f_D \phi _N + \delta f_E \\ \delta {\dot{v}}_D = \delta f_D \\ {\dot{\phi }}_N = -\delta \omega _{ib,N}^n \\ {\dot{\phi }}_E = -\delta \omega _{ib,E}^n \\ {\dot{\phi }}_D = -\delta \omega _{ib,D}^n \end{array}\right. } \end{aligned}$$ where \(\delta f_N\), \(\delta f_E\), \(\delta f_D\) are the accelerometer measurement errors in the north, east, and vertical directions, respectively. \(\delta \omega _{ib,N}^n\), \(\delta \omega _{ib,E}^n\), \(\delta \omega _{ib,D}^n\) are the gyroscopic measurement errors in the north, east, and vertical directions, respectively. After simplification, as shown in (23), the coupling between channels vanishes, and each channel can be analyzed separately. Measurement model simplification A similar simplification can be carried out for measurement Eqs. (10) and (14) by taking into account the assumptions and trajectory information listed above, yielding $$\begin{aligned} \varvec{z}_r = \delta \varvec{r}^n + \varvec{n}_{r} \end{aligned}$$ $$\begin{aligned} \varvec{z}_\text {v} = \delta \varvec{v}^n - \varvec{v}^n \times \varvec{\phi } + \varvec{n}_{v} \end{aligned}$$ Measurement error propagation model The navigation error propagation modeling of the GNSS/INS is analyzed in the vertical and horizontal channels. In the following, the details on the vertical channel analysis are presented, while the analysis of the horizontal channel is similar and given in the appendix. For the vertical channel analysis, in addition to the height and vertical velocity errors, the easting gyro bias \(\delta b_{gE}\) and vertical accelerometer bias \(\delta b_{aD}\) should be augmented into the error state vector. Additionally, since the NHC is induced as a navigation aid, the attitude error term \(\phi _E\) should be augmented. 
Thus, the error state vector and the related state dynamics model can be written as $$\begin{aligned} \varvec{x}_D= & {} \left[ \begin{matrix} \delta h&\delta v_D&\phi _E&\delta b_{gE}&\delta b_{aD} \end{matrix}\right] ^\text {T} \end{aligned}$$ $$\begin{aligned} \dot{\varvec{x}}_D(t)= & {} {\mathbf {F}}_D(t)\varvec{x}_D(t) + {\mathbf {G}}_D(t)\varvec{w}_D(t) \end{aligned}$$ $$\begin{aligned} \begin{matrix} {\mathbf {F}}_D=\left[ \begin{matrix} 0&{} -1&{} 0&{} 0&{} 0\\ 0&{} 0&{} 0&{} 0&{} 1\\ 0&{} 0&{} 0&{} -1&{} 0\\ 0&{} 0&{} 0&{} -1/T_{gb}&{} 0\\ 0&{} 0&{} 0&{} 0&{} -1/T_{ab}\\ \end{matrix} \right] \text {, }&{} {\mathbf {G}}_D=\left[ \begin{matrix} 0&{} 0\\ 0&{} {\mathbf {I}}_4\\ \end{matrix} \right] \\ \end{matrix} \end{aligned}$$ where the subscript 'D' denotes the down direction and \(\varvec{w}_D(t)\) represents the driving white Gaussian noise of strength \({\mathbf {Q}}_D\): $$\begin{aligned} \varvec{w}_D= & {} \left[ \begin{matrix} 0&w_{aD}&w_{gE}&w_{gbE}&w_{abD} \end{matrix}\right] ^\text {T} \end{aligned}$$ $$\begin{aligned} E\big \{\varvec{w}_D(t) \varvec{w}^\text {T}_D(t+\tau ) \big \}= & {} {\mathbf {Q}}_D \delta (\tau ) \end{aligned}$$ Here, \(\delta b_{gE}\) is modeled as a first-order Gauss–Markov process with correlation time \(T_{gb}\) and driven by the white Gaussian noise process \(w_{gbE}\). The residual bias of the vertical accelerometer, \(\delta b_{aD}\), is modeled as a first-order Gauss–Markov process with correlation time \(T_{ab}\) and driven by the white Gaussian noise process \(w_{abD}\). \(w_{aD}\) is the white Gaussian noise that corrupts the vertical accelerometer measurement, and \(w_{gE}\) is the white Gaussian noise that corrupts the gyro measurement in the east direction. The measurements in the KF for the aided INS in the vertical channel include the height difference between the INS and the GNSS and the vertical velocity difference between the INS and the NHC. According to (25), the velocity measurement equation can be written as $$\begin{aligned} z_{vD} = \delta v_D + v_E \phi _N- v_N \phi _E + n_{vD} \end{aligned}$$ where \(n_{vD}\) is the vertical component of \(\varvec{n}_{v}\) in (12) and represents the measurement uncertainty of the across-track zero velocity. According to assumption 1, the host vehicle is assumed to move in the south-north direction, in which case the east velocity is zero. Thus, (31) can be simplified as $$\begin{aligned} z_{vD} = \delta v_D - v_N \phi _E + n_{vD} \end{aligned}$$ Therefore, the measurement equation for the aided INS in the vertical channel can be written as $$\begin{aligned} \varvec{z}_D = {\mathbf {H}}_D \varvec{x}_D + \varvec{n}_D \end{aligned}$$ where \({\mathbf {z}}_D = \left[ \begin{matrix} z_h&z_{vD}\end{matrix}\right] ^\text {T}\), \({\mathbf {n}}_D = \left[ \begin{matrix} n_{rD}&n_{vD}\end{matrix}\right] ^\text {T}\) $$\begin{aligned} {\mathbf {H}}_D= & {} \left[ \begin{matrix} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} -v_N &{} 0 &{} 0 \end{matrix}\right] \end{aligned}$$ $$\begin{aligned} E\big \{\varvec{n}_D(t) \varvec{n}^\text {T}_D(t+\tau )\big \}= & {} {\mathbf {R}}_D \delta (\tau ) \end{aligned}$$ However, if the vehicle moves with a nonzero constant heading angle rather than in the south-north direction, then \(v_E \ne 0\). In this case, the simplification from (31) to (32) is not valid, and the measurement equation (31) should be used.
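To make the steady-state analysis concrete, the sketch below assembles \({\mathbf {F}}_D\), \({\mathbf {H}}_D\) and the noise matrices and computes the steady-state covariance and gain by solving the continuous algebraic Riccati equation (the condition \(\dot{{\mathbf {P}}}(t) = {\mathbf {0}}\)). The use of scipy.linalg.solve_continuous_are, the unit conversions, and the sensor parameters (taken from the noise values listed later for the Monte Carlo simulation) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

T_gb, T_ab = 1000.0, 1000.0            # bias correlation times (s)
v_N = 1.0                              # trolley speed (m/s)
d2r = np.pi / 180.0

# Noise PSDs (illustrative conversions): ARW 0.002 deg/sqrt(h), VRW 0.001 m/s/sqrt(h),
# gyro bias 0.005 deg/h, accel bias 25 mGal, GNSS height 2 cm/sqrt(Hz), NHC 0.1 mm/s/sqrt(Hz)
q_a  = (0.001 / 60.0)**2               # (m/s^2)^2 / Hz
q_g  = (0.002 * d2r / 60.0)**2         # (rad/s)^2 / Hz
q_gb = 2 * (0.005 * d2r / 3600.0)**2 / T_gb
q_ab = 2 * (25e-5)**2 / T_ab           # 25 mGal = 25e-5 m/s^2

# State x_D = [dh, dv_D, phi_E, db_gE, db_aD]
F = np.array([[0, -1,  0,        0,        0],
              [0,  0,  0,        0,        1],
              [0,  0,  0,       -1,        0],
              [0,  0,  0, -1/T_gb,         0],
              [0,  0,  0,        0, -1/T_ab]], dtype=float)
H = np.array([[1, 0,    0, 0, 0],
              [0, 1, -v_N, 0, 0]], dtype=float)
Q = np.diag([0.0, q_a, q_g, q_gb, q_ab])   # this equals G_D Q_D G_D^T for the layout above
R = np.diag([0.02**2, 1e-4**2])            # height and NHC measurement noise PSDs

# Filter ARE  F P + P F^T - P H^T R^-1 H P + Q = 0  (the dual of the control ARE)
P = solve_continuous_are(F.T, H.T, Q, R)
K = P @ H.T @ np.linalg.inv(R)             # steady-state gain K_D = P H_D^T R_D^-1
print("steady-state height error std (mm):", 1e3 * np.sqrt(P[0, 0]))
print("gain matrix K_D:\n", K)
```

If the solver struggles with the wide dynamic range of these illustrative values, rescaling the states or units (e.g., working in mrad and mm) is a common remedy; the structure of the computation is unchanged.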
The state dynamics equations of the continuous KF are given by $$\begin{aligned} \dot{\hat{\varvec{x}}}_D(t) = {\mathbf {F}}_D(t) \hat{\varvec{x}}_D(t) + {\mathbf {K}}_D(t) \left[ \varvec{z}_D(t) - {\mathbf {H}}_D(t) {\hat{x}}_D (t)\right] \end{aligned}$$ It is obvious from Eqs. (28) and (34) that \({\mathbf {F}}_D\), \({\mathbf {G}}_D\), \({\mathbf {H}}_D\) are all constant matrices; \({\mathbf {R}}_D\) and \({\mathbf {Q}}_D\) are constant and diagonal matrices. The filter gains converge to steady-state values in a short time, resulting in constant matrices \({\mathbf {P}}\) and \({\mathbf {K}}\). The matrix \({\mathbf {P}}\) can be calculated by solving the continuous matrix Riccati differential equation \(\dot{{\mathbf {P}}}(t) = {\mathbf {0}}\). The gain matrix \({\mathbf {K}}_D\) in this case is a 5-by-2 matrix. Taking the Laplace transform of Eq. (27) yields the related solution in the Laplace domain: $$\begin{aligned} \varvec{x}_D(s) = \left[ \left( s{\mathbf {I}} - {\mathbf {F}}_D\right) ^{-1} {\mathbf {G}}_D \varvec{w}_D(s) \right] \end{aligned}$$ The expansion of the above formula can be written as $$\begin{aligned} \delta h(s)= & {} -\frac{1}{s^2} w_{aD}(s) - \frac{1}{s^2(s+\gamma _{ab})} w_{abD}(s) \end{aligned}$$ (38a) $$\begin{aligned} \delta v_D(s)= & {} \frac{1}{s} w_{aD}(s) + \frac{1}{s (s+\gamma _{ab})}w_{abD}(s) \end{aligned}$$ (38b) $$\begin{aligned} \phi _E(s)= & {} \frac{1}{s} w_{gE}(s) - \frac{1}{s (s+\gamma _{gb})} w_{gbE}(s) \end{aligned}$$ (38c) $$\begin{aligned} \delta b_{aD}(s)= & {} \frac{1}{s+\gamma _{ab}} w_{abD}(s) \end{aligned}$$ (38d) $$\begin{aligned} \delta b_{gE}(s)= & {} \frac{1}{s+\gamma _{gb}} w_{gbE}(s) \end{aligned}$$ (38e) where \(\gamma _{gb} = \frac{1}{T_{gb}}\), \(\gamma _{ab} = \frac{1}{T_{ab}}\). By taking the Laplace transform of Eq. (33) and substituting (37) into the transformed equation we obtain \(\varvec{z}_D(s)\), whose expansion can be written as $$\begin{aligned} z_h(s)= & {} -\frac{1}{s^2} w_{aD}(s) - \frac{1}{s^2(s+\gamma _{ab})} w_{abD}(s) \nonumber \\&+ n_{rD}(s) \end{aligned}$$ $$\begin{aligned} z_{vD}(s)= & {} \frac{1}{s} w_{aD}(s) - \frac{v_N}{s} w_{gE}(s) + \frac{1}{s(s+\gamma _{ab})} w_{abD}(s) \nonumber \\&+ \frac{v_N}{s(s+\gamma _{gb})} w_{gbE}(s) + n_{vD}(s) \end{aligned}$$ Solving Eq. (36) in the Laplace domain yields $$\begin{aligned} \hat{\varvec{x}}_D(s) = \left[ \left( s{\mathbf {I}} - {\mathbf {F}}_D + {\mathbf {K}}_D {\mathbf {H}}_D\right) ^{-1} {\mathbf {K}}_D \right] \varvec{z}_D(s) \end{aligned}$$ Subtracting the height component \(\hat{\delta h}(s)\) in vector \(\hat{\varvec{x}}_D(s)\) from \(\delta h(s)\) in (37) yields the height estimate error of the aided INS, \(\delta {\hat{h}}(s)\), which is a linear function of \(\varvec{w}_D(s)\), written as $$\begin{aligned} \begin{aligned} \delta {\hat{h}}(s) = ~&H_{aD}~w_{aD}(s) + H_{gE}~ w_{gE}(s) \\&+ H_{gbE} ~w_{gbE}(s) + H_{abD}~ w_{abD}(s) \\&+ H_{nrD}~ n_{rD}(s) + H_{nvD} ~n_{vD}(s) \end{aligned} \end{aligned}$$ where \(H_{aD}\), \(H_{gE}\), \(H_{gbE}\), \(H_{abD}\), \(H_{nrD}\), and \(H_{nvD}\) are the related coefficients of the noise terms. The above equation is the error propagation model of the aided INS in the vertical channel in the Laplace domain. It tells that the height error of the GNSS/INS integration for measuring the railway track irregularity can be regarded as a time-invariant stochastic system with the white noise as the system input. 
The detailed expression of \(\delta {\hat{h}}(s)\) is rather complicated, and it can be obtained through symbolic computation in MATLAB. Relative measurement accuracy analysis The position error solution \(\delta {\hat{r}}_\text {est}(s)\) in the Laplace domain has been obtained. Theoretically, the corresponding analytic solution in the time domain, denoted by \(\delta {\hat{r}}_\text {est}(t)\), can be obtained by an inverse Laplace transform. Then, we can completely evaluate the relative measurement accuracy of the aided INS according to (A.5) and (A.7). Unfortunately, obtaining this analytic expression \(\delta {\hat{r}}_\text {est}(t)\) is impossible because the inverse Laplace transformation of the white noise in Eqs. (41) and (B.12) does not lead to an explicit analytic expression. Therefore, the remaining analysis is performed with a Monte Carlo simulation to generate samples of \(\delta {\hat{r}}_\text {est}(t)\), as depicted in Fig. 2. The Monte Carlo simulation relies on repeated random sampling to obtain numerical results, which is helpful in understanding the behaviors of a stochastic system that are not amenable to analysis by the usual direct mathematical methods (Brown and Hwang, 2012). Since white Gaussian noise is a stationary, Gaussian-distributed random process with a constant spectral density function over all frequencies, if it is put into a linear time-invariant system, the system output will also be stationary and Gaussian distributed. It is evident from (41), (B.12) and (20) that \(\triangledown \hat{\varvec{r}}_\text {est}\) will also be a stationary, Gaussian-distributed, and ergodic random process. In that case, we are able to conduct the performance analysis of the relative measurement accuracy with sufficiently long samples of \(\delta \hat{\varvec{r}}_\text {est}(t)\). The analysis procedure is as follows: Use the MATLAB function tf to create a transfer function model based on the error propagation models (41) and (B.12) in the Laplace domain. Simulate the input white noise samples, including the system noise, measurement noise, and driving noise of the first-order Gauss–Markov processes; the corresponding parameters are listed subsequently. Use the MATLAB function lsim to generate the time response of the dynamic system (generated in step 1) to the input stimuli (the output of step 2), obtaining navigation error samples of the aided INS in the time domain. Calculate the railway track irregularity measurement error \(\triangledown \hat{\varvec{r}}_\text {est}\) in (20) and its corresponding covariance matrix \({\mathbf {P}}_\text {ir}\) in (21). The information needed for generating the input white noise is listed as follows: Angular Random Walk (ARW) = 0.002 \(^\circ /\sqrt{h}\), Velocity Random Walk (VRW) = 0.001 \(m/s/\sqrt{h}\). \(T_{gb}\) = 1000 s, \(\sigma _{gb}\) = 0.005 \(^\circ /h\), \(T_{ab}\) = 1000 s, \(\sigma _{ab}\) = 25 mGal. \(n_{rE}\) = 1 cm/\(\sqrt{Hz}\), \(n_{rD}\) = 2 cm/\(\sqrt{Hz}\), \(n_{v}\) = 0.1 mm/s/\(\sqrt{Hz}\). Figure 3 shows a sample of the aided INS positioning error process in the lateral and vertical directions. In this figure, the navigation errors are plotted versus the travel distance, which is obtained from the calibrated odometer sensor. Since the positioning error in the along-track direction has little influence on the track irregularity measurement accuracy, it is not included in this figure.
The first observation is that the error sequence varies within 1 cm and shows an explicit spatial correlation. Based on the generated navigation error samples, we can calculate the railway track irregularity measurement error \(\triangledown \hat{\varvec{r}}_\text {est}\) as introduced in step 4 above. Figure 4 is a histogram of \(\triangledown \hat{\varvec{r}}_\text {est}\) in the east and vertical directions, which are related to the short-wave alignment and longitudinal-level irregularities, respectively, illustrating the distribution of \(\triangledown \hat{\varvec{r}}_\text {est}\). The errors approximately follow a Gaussian distribution, as expected. The results show that 1 mm accuracy is achievable in vertical and horizontal track irregularity measurements with a navigation-grade IMU aided by a carrier-phase differential GNSS and the NHC. This result answers the first question raised in the introduction section.
A sample of the aided INS positioning error process in the lateral and vertical directions
Histogram of the short-wave track irregularity measurement error incurred by the aided INS
Validation and discussion In this section, we employ a full simulation approach to validate the results obtained from the proposed semi-analytical error propagation model. Figure 5 depicts the validation flowchart. The previous analysis adopted a semi-analytical approach in which the analytical method was aided by a simulation technique in the last step. The simulation method, in contrast, encompasses a system simulation in which the actual filter algorithm is embedded and the effects of nonlinearities in the system are kept. The simulated raw IMU data and navigation aids, corrupted by different error terms, are processed by a complete aided INS algorithm, i.e., the data-fusion KF, and the irregularity measurement errors are analyzed based on its output errors. The results from these two approaches are finally compared to evaluate the feasibility of the semi-analytical method.
Flowchart of the validation of the semi-analytical error propagation model through a data simulation approach
Aided INS raw data simulation The aided INS data simulation software was developed at the GNSS Research Center of Wuhan University. The software consists of an IMU data simulator, a navigation aiding simulator, an error source simulator, and a reference navigation calculator. It can generate the outputs of three orthogonal gyros and three orthogonal accelerometers, together with the navigation aids, which include position and velocity measurements, according to the user-defined trajectory and motion. The perfect inertial sensor output without corruption is first derived with the inverse principle of the INS. The simulated perfect IMU data are then integrated forward with time through the INS mechanization algorithm, starting from the initial navigation state, to generate the true navigation solutions as the navigation reference truth. The error source simulator is designed to generate several typical stochastic processes used to model the inertial sensor errors and measurement noise, including white noise, random walk, and the first-order Gauss–Markov process, using the corresponding parameters. The simulated error terms perturbing the gyros and accelerometers are added to the perfect inertial sensor outputs to yield the noise-corrupted IMU data. The noise-corrupted navigation aids, such as the position measurements, are obtained by adding additive noise terms to the perfect aid samples.
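A lightweight Python counterpart of the Monte Carlo steps described earlier (the MATLAB tf/lsim workflow) is sketched below. The second-order low-pass transfer function is only a stand-in for the actual noise-to-error transfer functions of Eq. (41), so the printed numbers are purely illustrative; the sketch merely shows the mechanics of generating an error sample from white noise and evaluating the chord-differenced (relative) error.

```python
import numpy as np
from scipy import signal

dt, n = 0.005, 200_000                  # 200 Hz sampling, 1000 s of simulated data
t = np.arange(n) * dt
rng = np.random.default_rng(2)

# Step 1: transfer-function model; this slow low-pass is a placeholder for one term of Eq. (41)
wn, zeta = 0.02, 0.7                    # bandwidth (rad/s) and damping, assumed
H = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])

# Step 2: sampled white noise; a PSD q (units^2/Hz) becomes a per-sample std of sqrt(q/dt)
q = 0.02**2                             # e.g. the 2 cm/sqrt(Hz) GNSS height noise
w = rng.normal(0.0, np.sqrt(q / dt), n)

# Step 3: time response of the system to the noise input (MATLAB lsim equivalent)
_, y, _ = signal.lsim(H, U=w, T=t)
err = y[int(300.0 / dt):]               # discard the filter warm-up transient

# Step 4: chord-differenced (relative) error of Eq. (20) and its statistics
lag = int(30.0 / dt)                    # 30 s chord, i.e. 30 m at 1 m/s (assumed)
rel = err[:-lag] - err[lag:]
print(f"absolute error std: {err.std() * 1e3:.2f} mm, relative error std: {rel.std() * 1e3:.2f} mm")
```

Because the placeholder error process varies slowly, the differenced error is markedly smaller than the absolute error, mirroring the behavior seen in Figs. 3 and 4 for the actual closed-loop error transfer functions.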
Simulation settings The host vehicle trajectory and motion settings for the aided INS data simulation are consistent with the assumptions and settings in the preceding section for error propagation modeling. The host vehicle is assumed to move in a straight line at a constant speed in the north direction and to remain level. The motion details are summarized in Table 1. At the beginning, the vehicle remains stationary for ten minutes to finish the static alignment, followed by an alternating accelerating dynamic motion to make the KF converge to a steady state in a short time. Finally, it accelerates from rest at 0.2 \(m/s^2\) for a period of 5 s, reaching a speed of 1 m/s, and then maintains uniform motion in a straight line at this velocity for 10,000 s. Only the navigation error of the uniform motion section is studied in detail in the following sections. The error terms that perturb the onboard IMU and the navigation aids are set to the same values as those used in the preceding error propagation modeling analysis. The simulated IMU dataset is sampled at 200 Hz.
Table 1 Description of the dynamic motion of the host vehicle
Histogram of the short-wave track irregularity measurement errors of the aided INS by the simulation method
Table 2 Comparison of the track irregularity measurement accuracy analysis results from the semi-analytical and simulation methods
Simulated data processing The simulated aided INS data are processed with the postprocessing software InsRail developed at the GNSS Research Center of Wuhan University. InsRail contains three function modules: GNSS positioning post-processing, aided INS data fusion, and railway track geometry analysis. The fusion of the GNSS positioning solutions and the raw inertial dataset is implemented in a loosely coupled manner. As the basis of the integrated processing module of InsRail, the extended KF is designed with a 21-dimensional error state vector, i.e., the errors of the position, velocity and attitude, and the biases and scale factors of the gyros and accelerometers. Moreover, the Rauch-Tung-Striebel (RTS) backward smoothing algorithm is applied in the software to achieve a high accuracy in the smoothed solutions. InsRail also supports other aids for the INS, such as the Zero-velocity Update (ZUPT), velocity update, and NHCs, to improve the aided INS performance. To be consistent with the analytic assessment based on the error propagation model, the simulated aided INS data are processed in the forward Kalman filtering mode. Figure 6 shows the Short wavelength Track Irregularity (STI) measurement error distribution in both the alignment (or lateral) and longitudinal-level (vertical) directions. It illustrates that the measurement errors of the GNSS/INS integrated system in the simulation case closely follow a Gaussian distribution and are consistent with those obtained from the semi-analytical approach, as shown in Fig. 4. It should be noted that the STI measurement is used here as an example, and a similar figure can also be plotted for the Long wavelength Track Irregularity (LTI) measurement. The statistical values of the measurement errors from both the semi-analytical and simulation approaches are listed in Table 2, which demonstrates that the consistency between these two methods is better than 85%. This comparison validates the feasibility of the proposed semi-analytical approach. It is notable that the measurement errors from the semi-analytical approach are slightly greater than those in the simulation.
This can be explained as follows: the coupling between the channels (i.e., axes) of the INS is ignored in the error propagation modeling procedure; the original weak observability of some navigation states across an axis is therefore reduced or lost; and the loss of state observability deprives the KF of the opportunity for state correction, slightly lowering the final estimation accuracy. In the simulation, the full coupling between channels is considered, resulting in better results.
Comparison of the short wavelength track irregularity measurements by a GNSS/INS trolley with the independent references
Validation in field tests The proposed method and the related measurement accuracy analysis can also be verified through field tests, and the results were reported in our previous work (Chen et al., 2015; Chen et al., 2018). The TGMT based on a GNSS/INS configuration was evaluated comprehensively for the first time in November 2013 on the newly built Lanzhou-Urumqi high-speed railway. The Lanzhou-Urumqi high-speed passenger railway runs from Lanzhou to Urumqi in northwestern China with a designed operation speed of 250 km/h. In the experiment, a track section of about 1000 m was surveyed. The key information on the experiment and the equipment used is listed below: GNSS/INS integrated system: a navigation-grade Positioning and Orientation System (POS) which integrates a Ring Laser Gyro (RLG) based IMU and a high-precision NovAtel GNSS OEM6 receiver. Independent reference: a classical TGMT using a high-precision total station was used to provide alignment reference values accurate to 1.4 mm, and a Trimble DiNi digital level was used to obtain the reference vertical irregularity, accurate to about 0.3 mm. GNSS base station receiver: Trimble NetR9 GNSS reference receiver. GNSS antenna: NovAtel GPS-702-GGL. Figure 7 compares the STI measurements by the TGMT using the GNSS/INS integrated system with the independent reference values obtained with high-precision geodetic surveys. It shows that the STI measurements in both the vertical and alignment directions are accurate to 1 mm at the 98% confidence level, which is consistent with the semi-analytical and simulation results. In this figure, we take the STI result as an example; more details on the field tests and results can be found in (Chen et al., 2018). In a field test, Dead Reckoning (DR) using the GNSS/INS integrated attitude and the travel distance is suggested to exploit the NHC's potential to the fullest and to ensure consistency between the semi-analytical analysis and the field tests. In the error propagation modeling, the semi-analytical method is used, which combines an analytical method and a Monte Carlo simulation in the relative measurement accuracy analysis. The method is proved to be feasible and credible for the performance analysis of a TGMT based on an aided INS for the specific railway track irregularity measurement application. The semi-analytical method has an advantage over the full simulation approach. Because the error state space KF model for the aided INS is simplified and reduced, for this specific application, to a linear time-invariant stochastic system driven by white Gaussian noise, it is possible to perform the accuracy analysis beforehand without simulating raw IMU data or running an integrated data processing chain. It aids the system design of a TGMT based on an aided INS without actually building the system, implementing an IMU raw data simulator, or developing aided INS data processing software. A limitation of this approach is that the analytic portion is appropriate only when the vehicle motion is extremely simple, e.g., railway track geometry surveying; it is not a general method suitable for arbitrary ground vehicle systems.
Thus, the semi-analytical method is more computationally efficient than the full simulation approach and more convenient for evaluating the effects of any changes in sensor hardware or error sources on the final measurement accuracy. It aids the system design of a TGMT based on an aided INS without actually building the system, implementing an IMU raw data simulator, or developing aided INS data processing software. A limitation of this approach is that the analytic portion is appropriate only when the vehicle motion is extremely simple, as in railway track geometry surveying. It is not a general method suitable for arbitrary ground vehicle systems.

Track irregularity measurement error caused by principal inertial sensor errors

Effect of the inertial sensor errors on the measurement accuracy

With the semi-analytical approach, it is convenient to analyze the performance of a TGMT based on GNSS/INS integration and to evaluate the effects of any changes in sensor hardware and error sources on the final measurement accuracy, which is very important for sensor selection in the TGMT design procedure. Here, we present a quantitative analysis of the effects of the predominant inertial sensor errors, including the bias instability and measurement noise of the gyroscopes and accelerometers, on the track irregularity measurement accuracy. The bias is modeled as a first-order Gauss–Markov process characterized by its correlation time and mean squared value. Figure 8 shows the measurement errors of the STI and LTI in both the vertical and alignment directions versus the inertial sensor bias and random noise. It shows a significant increase in the measurement errors with the gyroscope bias and noise, i.e., the Angle Random Walk (ARW), in both the STI and LTI measurements, while there are no noticeable effects of the accelerometer bias and Velocity Random Walk (VRW) on the measurement accuracy. The determined track position and track irregularity are highly correlated with the relative measurement accuracy of the attitude angles, including the roll, pitch and heading angles. Since the TGMT motion is constrained by the rails, the short-term attitude accuracy of the GNSS/INS integrated system is dominated by the gyroscope triad. This figure shows that we can evaluate the system performance with the derived error propagation model and given sensor error sources.

We can conclude from the above analysis that the errors in the gyroscopic measurements are the predominant error sources for the TGMT based on GNSS/INS integration. Accelerometer errors seem to have insignificant effects on the system performance and the relative measurement accuracy. However, the accelerometer bias can give rise to roll angle measurement errors and finally affect the cross-level measurement. In short, gyroscopes are the critical sensors that determine the performance of the railway track irregularity measurement system. Therefore, in the system design, attention should be paid to the gyroscope selection. The above result answers question 2 raised in the introduction section.

In this research, the temporal and spatial relative measurement accuracy and the corresponding error propagation of the GNSS/INS integration specifically for the measurement of railway track irregularities are studied. A semi-analytical method is proposed to analyze the relative measurement error propagation model. In railway surveying, the complexity of the nonlinearity of the aided INS is reduced, allowing each channel to be analyzed separately.
Considering the steady-state KF operation, true errors in the optimal estimates of an aided INS are obtained with an analytical expression in the Laplace domain. Based on the derived error propagation model, the relative measurement accuracy of a railway track geometry surveying system based on the aided INS is assessed. The proposed error model is verified by a simulation, which shows the agreement between the results with the semi-analytic and simulation method is better than 85% and 93% for the vertical and horizontal channels, respectively. Therefore, this research has answered the two fundamental questions for a TGMT based on an aided INS: 1) an accuracy of 1 mm is possible in measuring the alignment and vertical track irregularities and 2) the influence of the principal inertial sensor errors on the final measurement accuracy can be quantitatively analyzed. These conclusions are valuable for the development of a TGMT and other inertial surveying applications using an aided INS and can benefit the future research. The datasets are available from the corresponding author. Benson, D. O. (1975). A comparison of two approaches to pure-inertial and Doppler-inertial error analysis. IEEE Transactions on Aerospace and Electronic Systems, 4, 447–455. Brown, R. G., & Hwang, P. Y. C. (2012). Introduction to Random Signals and Applied Kalman Filtering: with MATLAB Exercises (4th ed.). New York: Wiley. MATH Google Scholar Chen, Q., Zhang, Q., & Niu, X. (2020). Estimate the pitch and heading mounting angles of the IMU for land vehicular GNSS/INS integrated system. IEEE Transactions on Intelligent Transportation Systems Chen, Q., Niu, X., Zhang, Q., & Cheng, Y. (2015). Railway track irregularity measuring by GNSS/INS integration. Navigation: Journal of The Institute of Navigation, 62(1), 83–93. Chen, Q., Niu, X., Zuo, L., Zhang, T., Xiao, F., Liu, Y., & Liu, J. (2018). A railway track geometry measuring trolley system based on aided INS. Sensors, 18(2), 538. El-Sheimy, N., & Youssef, A. (2020). Inertial sensors technologies for navigation applications: State of the art and future trends. Satellite Navigation, 1(1), 1–21. Groves, P. D. (2008). Principles of GNSS, inertial, and multisensor integrated navigation systems. Boston: Artech House. Liu, J., Gao, K., Guo, W., Cui, J., & Guo, C. (2020). Role, path, and vision of "5G+ BDS/GNSS." Satellite Navigation, 1(1), 1–8. Maybeck, P. S. (1978). Performance analysis of a particularly simple Kalman filter. Journal of Guidance and Control, 1(6), 391–396. Maybeck, P. S. (1982). Stochastic Models, Estimation, and Control (Vol. I). New York: Academic press. Niu, X., Chen, Q., Zhang, Q., Zhang, H., Niu, J., Chen, K., et al. (2014). Using Allan variance to analyze the error characteristics of GNSS positioning. GPS Solutions, 18(2), 231–242. Niu, X., Chen, Q., Kuang, J., & Liu, J. (2016). Return of inertial surveying–trend or illusion? Proceedings of IEEE/ION PLANS, 2016, 165–169. Shin, E.-H. (2005). Estimation techniques for low-cost inertial navigation. Ph.D. thesis, University of Calgary, Deparment of Geomatics Engineering Teunissen, P., & Montenbruck, O. (2017). Springer Handbook of Global Navigation Satellite Systems. Switzerland: Springer. Book Google Scholar Titterton, D., & Weston, J. L. (2004). Strapdown Inertial Navigation Technology (2nd ed.). Stevenage: IET. Zhang, Q., Niu, X., & Shi, C. (2020). Impact assessment of various IMU error sources on the relative accuracy of the GNSS/INS systems. IEEE Sensors Journal, 20(9), 5026–5038. 
Zhang, Q., Niu, X., Chen, Q., Zhang, H., & Shi, C. (2013). Using Allan variance to evaluate the relative accuracy on different time scales of GNSS/INS systems. Measurement Science and Technology, 24(8), 085006. Zhang, T., Ban, Y., Niu, X., Guo, W., & Liu, J. (2017). Improving the design of MEMS INS-aided PLLs for GNSS carrier phase measurement under high dynamics. Micromachines, 8(5), 135. Zhu, F., Zhou, W., Zhang, Y., Duan, R., Lv, X., & Zhang, X. (2019). Attitude variometric approach using DGNSS/INS integration to detect deformation in railway track irregularity measuring. Journal of Geodesy, 93(9), 1571–1587. The authors thank Wuhan MAP Space Time Navigation Technology Co., LTD, and Guangzhou Datie Detecting & Surveying, Inc., for their efforts in promoting the TGMT using an aided INS. This work is funded by the National Natural Science Foundation of China (41904019). GNSS Research Center, Wuhan University, Wuhan, China Qijin Chen, Quan Zhang, Xiaoji Niu & Jingnan Liu Collaborative Innovation Center of Geospatial Technology, Wuhan University, Wuhan, China Quan Zhang & Jingnan Liu Artificial Intelligence Institute, Wuhan University, Wuhan, China Xiaoji Niu Qijin Chen Quan Zhang Jingnan Liu Conceptualization: Qijin Chen and Xiaoji Niu; Data analysis: Qijin Chen; Writing: Qijin Chen; All authors read and approved the final manuscript. Correspondence to Xiaoji Niu. Illustration of the track irregularity parameter Railway track irregularity parameters For a given railway track section, we can draw a chord of certain length with its start and end points on the rail, as depicted in Fig. 9. Then, the nominal offset from the rail to the chord at mileage s, denoted by \(\text {v}_\text {nom}(s)\), can be computed from the designed geometry of the track. For example, in the straightline section, the nominal versine is zero. Actual rails do not perfectly follow the designed positions and tend to drift away from the nominal smoothness. The offset from the actual rail to the chord is called the actual versine and denoted by \(\text {v}_\text {real}(s)\), which is measured by the classic geodetic instruments or a GNSS/INS integrated system. The difference between the real and nominal versines is defined as $$\begin{aligned} \Delta \text {v}(s) = \text {v}_\text {nom}(s) - \text {v}_\text {real}(s) \end{aligned}$$ (A.1) where \(\Delta \text {v}(s)\) can be regarded as the versine offset from the nominal value. Then, the railway track alignment irregularity parameter is computed as $$\begin{aligned} Ir(s) = \Delta \text {v}(s) - \Delta \text {v}(s + \Delta s) \end{aligned}$$ It is clear from the above equation that track alignment irregularity is a relative quantity. A series of Ir, i.e., the point-to-point variation of the versine deviations, portrays the track's smoothness. In the standards on precise railway track measurement, the chord length and \(\Delta s\) are defined. The chord length can be chosen as 30 m or 300 m, and the corresponding \(\Delta s\) can then be set as 5 m or 150 m, respectively. The railway track irregularity parameters related to a 30 m chord are called short wavelength irregularities, and those related to a 300 m chord are called long wavelength irregularities. The railway track irregularities are evaluated separately in both horizontal and vertical directions, with the former called alignment and the latter called longitudinal or vertical irregularity. The vertical irregularity parameters are computed using the same method depicted in Fig. 
9 but plotted in the mileage-height plane. More details on the track geometric parameters can be found in our previous work (Chen et al., 2015; Chen et al., 2018).

Relative measurement accuracy

The navigation error process in an aided INS's solutions is stochastic. Let \(\Omega\) be a fundamental sample space and T be a subset of the real line denoting a time set of interest. Then, the navigation error process can be defined as a real-valued function of two arguments \(\varvec{e}(t,\omega )\), where the first argument is an element of T and the second an element of \(\Omega\). For any fixed \(t\in T\), \(\varvec{e}(t,\cdot )\) is a random variable. If we fix the second argument, then for each point \(\omega _i \in \Omega\) there is an associated time function \(\varvec{e}(\cdot , \omega _i)\), whose value at each time instant is a sample from the stochastic process. It is common practice to describe the stochastic navigation error by the associated first two moments, i.e., the mean value function and the covariance matrix. The mean value function \(\varvec{m}_e(t)\) and covariance matrix \({\mathbf {P}}_{e}(\cdot )\) of the process \(\varvec{e}(\cdot )\) are defined for all \(t\in T\) by $$\begin{aligned}&\varvec{m}_e(t) \equiv E\{\varvec{e}(t,\cdot )\} \end{aligned}$$ $$\begin{aligned}&{\mathbf {P}}_{e}(t) \equiv E\{\left[ \varvec{e}(t) - \varvec{m}_e(t)\right] \left[ \varvec{e}(t) - \varvec{m}_e(t)\right] ^\text {T}\} \end{aligned}$$ In the preceding section, we notice that it is the error variation between the navigation error \(\varvec{e}\) at any time instant t and \(t+\Delta t\) that inherently determines the track irregularity measurement accuracy. Here, we denote the relative navigation error for all \(t\in T\) by $$\begin{aligned} \Delta \varvec{e}(t) \equiv \varvec{e}(t,\cdot ) - \varvec{e}(t+\Delta t,\cdot ) \end{aligned}$$ The corresponding mean value function and covariance matrix are defined as $$\begin{aligned}&\varvec{m}_{\Delta e}(t) \equiv E\{\Delta \varvec{e}(t,\cdot )\} \end{aligned}$$ $$\begin{aligned}&{\mathbf {P}}_{\Delta e}(t) \equiv E\{\left[ \Delta \varvec{e}(t) - \varvec{m}_{\Delta e}(t)\right] \left[ \Delta \varvec{e}(t) - \varvec{m}_{\Delta e}(t)\right] ^\text {T} \} \end{aligned}$$ Obviously, the information in (A.7), which characterizes the TGMT's relative measurement accuracy, is not available in (A.4).

Fig. 10 Railway track geometry measuring trolley (TGMT) based on an aided INS

Analytic analysis of a steady-state Kalman filter estimator

The steady-state KF can be derived from the continuous-time KF. If the system dynamic matrix \({\mathbf {F}}\), the noise input mapping matrix \({\mathbf {G}}\) in (2), and the measurement matrix \({\mathbf {H}}\) are all constant matrices, and if the system noise and measurement noise are stationary (\({\mathbf {Q}}\) and \({\mathbf {R}}\) are constant), the KF for a GNSS/INS/NHC integrated system may reach steady-state performance, leading to a constant covariance matrix \({\mathbf {P}}\).
In this condition, the Riccati equation for the continuous KF becomes an algebraic relation: $$\begin{aligned} \dot{{\mathbf {P}}} = {\mathbf {F}}{\mathbf {P}} + {\mathbf {P}}{\mathbf {F}}^\text {T} + {\mathbf {G}}{\mathbf {Q}}{\mathbf {G}}^\text {T} - {\mathbf {P}}{\mathbf {H}}^\text {T}{\mathbf {R}}^\text {-1}{\mathbf {H}}{\mathbf {P}} = {\mathbf {0}} \end{aligned}$$ In the steady-state condition, the rate at which the uncertainty increases (given by \({\mathbf {G}}{\mathbf {Q}}{\mathbf {G}}^\text {T}\)) is balanced by the rate at which new information enters (\({\mathbf {P}}{\mathbf {H}}^\text {T}{\mathbf {R}}^\text {-1}{\mathbf {H}}{\mathbf {P}}\)) and the dissipative effects of the system (\({\mathbf {F}}{\mathbf {P}} + {\mathbf {P}}{\mathbf {F}}^\text {T}\)). For a steady-state covariance matrix, the optimal filter is also time invariant, given by $$\begin{aligned} \dot{\hat{\varvec{x}}}(t) = \left[ {\mathbf {F}} - {\mathbf {K}}{\mathbf {H}}\right] \hat{\varvec{x}}(t) + {\mathbf {K}} \varvec{z}(t) \end{aligned}$$ Taking the Laplace transform of this (neglecting initial conditions) yields $$\begin{aligned} \left[ s{\mathbf {I}} - {\mathbf {F}} + {\mathbf {K}}{\mathbf {H}}\right] \hat{\varvec{x}}(s) = {\mathbf {K}} \varvec{z}(s) \end{aligned}$$ (A.10) $$\begin{aligned} \hat{\varvec{x}}(s) = \left[ \left( s{\mathbf {I}} - {\mathbf {F}} + {\mathbf {K}}{\mathbf {H}}\right) ^{-1} {\mathbf {K}}\right] \varvec{z}(s) \end{aligned}$$ \(\hat{\varvec{x}}(s)\) is the optimal estimate of the state vector of the KF for the GNSS/INS/NHC integration. The term in brackets in the above equation is the transfer function representation of the steady-state KF. More details about the steady state can be found in the textbook [(Maybeck , 1982), p. 273]. Navigation error propagation of the aided INS in the horizontal channel The navigation error propagation analysis in the horizontal channel is similar to that for the vertical channel. According to assumption 1, the host vehicle is assumed to move at constant speed in a northward straight line. The north channel is the along-track direction, for which centimeter-level accuracy is sufficient and easy to fulfill when an accurate GNSS position is available. The east position component of the aided INS determines the track alignment measurement accuracy in the across-track direction, which is critical for railway track geometry surveying (Chen et al. , 2015). Therefore, for the horizontal channel, we concentrate only on the east direction. In addition to the position and velocity errors, the tilt error \(\phi _N\) is augmented into the error state vector, since a tilt error will introduce an acceleration component due to gravity with magnitude \(g\phi _N\) to be projected onto the horizontal axes (Titterton and Weston , 2004)). The attitude error about the yaw axis is also considered once the NHC is used. In this case the residual gyro bias about the x-axis \(\delta b_{gN}\) and about the z-axis \(b_{gD}\), as well as the east component of the accelerometer bias \(\delta b_{aE}\), should also be augmented into the error state vector of the filter. 
Then there are seven error state variables: $$\begin{aligned} \varvec{x}_E = \left[ \begin{matrix} \delta r_E&\delta v_E&\phi _N&\phi _D&\delta b_{gN}&\delta b_{gD}&\delta b_{aE} \end{matrix}\right] ^\text {T} \end{aligned}$$ (B.1) The state dynamics model can be written as $$\begin{aligned} \dot{\varvec{x}}_E(t) = {\mathbf {F}}_E(t)\varvec{x}_E(t) + {\mathbf {G}}_E(t)\varvec{w}_E(t) \end{aligned}$$ with $$\begin{aligned} {\mathbf {F}}_E(t) = \left[ \begin{matrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & f_D & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & -1/T_{gb} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1/T_{gb} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1/T_{ab} \end{matrix}\right] \end{aligned}$$ (B.3) $$\begin{aligned} {\mathbf {G}}_E(t) = \left[ \begin{matrix} 0 & 0 \\ 0 & {\mathbf {I}}_6 \end{matrix}\right] \end{aligned}$$ (B.4) where the subscript 'E' denotes the east direction. \(\varvec{w}_E\) represents the driving zero-mean white Gaussian noise of strength \({\mathbf {Q}}_E(t)\): $$\begin{aligned}&\varvec{w}_E = \left[ \begin{matrix} 0&w_{aE}&w_{gN}&w_{gD}&w_{gbN}&w_{gbD}&w_{abE} \end{matrix}\right] ^\text {T} \end{aligned}$$ $$\begin{aligned}&E\big \{\varvec{w}_E(t) \varvec{w}^\text {T}_E(t+\tau ) \big \} = {\mathbf {Q}}_E \delta (\tau ) \end{aligned}$$ where \({\mathbf {I}}_6\) is the 6-by-6 identity matrix. \(\delta b_{gN}\) is the north component of the gyro bias, modeled as a first-order Gauss–Markov process with correlation time \(T_{gb}\) and driven by the zero-mean white Gaussian noise \(w_{gbN}\); \(\delta b_{gD}\) is the vertical component of the gyro bias, modeled as a first-order Gauss–Markov process with correlation time \(T_{gb}\) and driven by the zero-mean white Gaussian noise \(w_{gbD}\). \(\delta b_{aE}\) is the east component of the accelerometer bias, modeled as a first-order Gauss–Markov process with correlation time \(T_{ab}\) and driven by the zero-mean white Gaussian noise \(w_{abE}\). \(w_{aE}\) represents the accelerometer measurement noise along the east axis. \(w_{gN}\) and \(w_{gD}\) are the north and vertical components of the gyro measurement noise, respectively. The measurement to be presented to the KF for the aided INS in the east direction should include the position difference between the INS and GNSS and the cross-track velocity difference between the INS and the NHC. Considering assumption 1, we have \(v_D = 0\). Then, the velocity measurement Eq. (25) can be simplified as $$\begin{aligned} z_{vE} = \delta v_E + v_N \phi _D + n_{vE} \end{aligned}$$ where \(n_{vE}\) is the east component of \(\varvec{n}_{v}\) in (12) and represents the measurement uncertainty of the across-track zero velocity.
The GNSS position measurement equation can then be written as $$\begin{aligned} z_{rE} = \delta r_E + n_{rE} \end{aligned}$$ Therefore, the measurement equations for the aided INS in the east direction can be written as $$\begin{aligned} \varvec{z}_E = {\mathbf {H}}_E \varvec{x}_E + \varvec{n}_E \end{aligned}$$ where \(\varvec{z}_E = \left[ \begin{matrix} z_{r,E}&z_{vnhc,E}\end{matrix}\right] ^\text {T}\), \(\varvec{n}_E = \left[ \begin{matrix} n_{r,E}&n_{vnhc,E}\end{matrix}\right] ^\text {T}\), $$\begin{aligned}&{\mathbf {H}}_E = \left[ \begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & v_N & 0 & 0 & 0 \end{matrix}\right] \end{aligned}$$ (B.10) $$\begin{aligned}&E\big \{\varvec{n}_E(t) \varvec{n}^\text {T}_E(t+\tau )\big \} = {\mathbf {R}}_E \delta (\tau ) \end{aligned}$$ It is obvious from (B.3), (B.4) and (B.10) that \({\mathbf {F}}_E\), \({\mathbf {G}}_E\), \({\mathbf {H}}_E\), \({\mathbf {R}}_E\) and \({\mathbf {Q}}_E\) are all constant matrices. The introduction of the NHC makes the heading error observable; thus, the filter gains will converge to steady-state values. In this situation, \({\mathbf {P}}\) and \({\mathbf {K}}\) also become constant matrices. The gain matrix \({\mathbf {K}}\) in this case is a 7-by-2 matrix. A derivation similar to that for the vertical channel can then be performed to obtain the east position component error propagation model as $$\begin{aligned} \begin{aligned} \delta {\hat{r}}_E(s) = ~&H_{aE}~w_{aE}(s) + H_{gN}~w_{gN}(s) \\&+ H_{gD} ~w_{gD}(s) + H_{gbN}~ w_{gbN}(s) \\&+ H_{gbD}~ w_{gbD}(s) + H_{abE} ~w_{abE}(s) \\&+ H_{rE} ~n_{rE}(s) + H_{vE}~ n_{vE}(s) \end{aligned} \end{aligned}$$ where \(H_{aE}\), \(H_{gN}\), \(H_{gD}\), \(H_{gbN}\), \(H_{gbD}\), \(H_{abE}\), \(H_{vE}\), and \(H_{rE}\) represent the related transfer-function coefficients of the noise terms. The above equation is the east position error propagation model of the aided INS in the Laplace domain.

Chen, Q., Zhang, Q., Niu, X. et al. Semi-analytical assessment of the relative accuracy of the GNSS/INS in railway track irregularity measurements. Satell Navig 2, 25 (2021). https://doi.org/10.1186/s43020-021-00057-9
Inertial surveying Error propagation modeling Steady-state Kalman filter Precise engineering surveying
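For readers who want to reproduce the steady-state analysis numerically, the following is a minimal sketch (not the authors' code) of how the east-channel model above can be evaluated with an off-the-shelf algebraic Riccati equation solver. The noise strengths, correlation times, specific force and measurement noise levels are placeholder assumptions rather than the paper's settings; the state ordering follows (B.1) and the measurement model follows (B.10).

```python
# Minimal numerical sketch (not the authors' code) of the east-channel
# steady-state filter above.  All numeric parameters are placeholders.
# State (Eq. B.1): [dr_E, dv_E, phi_N, phi_D, db_gN, db_gD, db_aE]
import numpy as np
from scipy.linalg import solve_continuous_are

f_D = -9.81                  # assumed down specific force for a level vehicle
v_N = 1.0                    # along-track speed (m/s) in the uniform-motion run
T_gb, T_ab = 3600.0, 1000.0  # hypothetical Gauss-Markov correlation times (s)

F = np.zeros((7, 7))
F[0, 1] = 1.0                # dr_E' = dv_E
F[1, 2], F[1, 6] = f_D, 1.0  # dv_E' = f_D*phi_N + db_aE
F[2, 4] = -1.0               # phi_N' = -db_gN
F[3, 5] = -1.0               # phi_D' = -db_gD
F[4, 4] = F[5, 5] = -1.0 / T_gb
F[6, 6] = -1.0 / T_ab

G = np.zeros((7, 6))         # every state except dr_E is driven by noise
G[1:, :] = np.eye(6)

# Hypothetical PSDs for [w_aE, w_gN, w_gD, w_gbN, w_gbD, w_abE]
Q = np.diag([1e-5, 1e-11, 1e-11, 1e-13, 1e-13, 1e-9])

H = np.array([[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],    # GNSS east position
              [0.0, 1.0, 0.0, v_N, 0.0, 0.0, 0.0]])   # NHC cross-track velocity
R = np.diag([0.01**2, 0.01**2])                       # hypothetical noise levels

# Steady-state covariance from F P + P F' + G Q G' - P H' R^-1 H P = 0 (Eq. A.8)
P = solve_continuous_are(F.T, H.T, G @ Q @ G.T, R)
K = P @ H.T @ np.linalg.inv(R)        # constant 7-by-2 gain, as stated above

# Estimator transfer matrix [sI - F + KH]^(-1) K (Eq. A.11) at one frequency
s = 1j * 2.0 * np.pi * 0.1
T_est = np.linalg.inv(s * np.eye(7) - F + K @ H) @ K
print("steady-state std of the dr_E estimation error:", np.sqrt(P[0, 0]))
print("|H(s)| from the measurements to the dr_E estimate at 0.1 Hz:", np.abs(T_est[0]))
```

With realistic sensor parameters in place of the placeholders, the same few lines return the steady-state gain and the frequency responses that play the role of the coefficients \(H_{aE}\) through \(H_{vE}\) above.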
Regions of stability for a linear differential equation with two rationally dependent delays Joseph M. Mahaffy 1, and Timothy C. Busken 2, Department of Mathematics and Statistics, Nonlinear Dynamical Systems Group, Computational Sciences Research Center, San Diego State University, San Diego, CA 92182-7720, United States Department of Mathematics, Grossmont College, El Cajon, CA 92020, United States Received July 2013 Revised January 2015 Published April 2015 Stability analysis is performed for a linear differential equation with two delays. Geometric arguments show that when the two delays are rationally dependent, then the region of stability increases. When the ratio has the form $1/n$, this study finds the asymptotic shape and size of the stability region. For example, a delay ratio of $1/3$ asymptotically produces a stability region about 44.3% larger than any nearby delay ratios, showing extreme sensitivity in the delays. The study provides a systematic and geometric approach to finding the eigenvalues on the boundary of stability for this delay differential equation. A nonlinear model with two delays illustrates how our methods can be applied. Keywords: exponential polynomial, stability analysis, eigenvalue, delay differential equation, bifurcation. Mathematics Subject Classification: Primary: 37C75, 37G15; Secondary: 39B8. Citation: Joseph M. Mahaffy, Timothy C. Busken. Regions of stability for a linear differential equation with two rationally dependent delays. Discrete & Continuous Dynamical Systems - A, 2015, 35 (10) : 4955-4986. doi: 10.3934/dcds.2015.35.4955
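The body of the article is not included here; as a loose illustration only, the stability boundary of a scalar equation with two delays is commonly traced by D-subdivision, i.e., by substituting \(\lambda = i\omega\) into the characteristic equation and solving for the coefficient pair on the boundary. The sketch below assumes the standard form \(x'(t) = -a x(t) + b x(t-\tau_1) + c x(t-\tau_2)\); the paper's actual parameterization and method may differ.

```python
# Illustrative sketch (not the paper's method): candidate stability boundaries
# of x'(t) = -a x(t) + b x(t - tau1) + c x(t - tau2) in the (b, c) plane.
# Characteristic equation: lambda + a - b exp(-lambda tau1) - c exp(-lambda tau2) = 0.
# Setting lambda = i*omega gives a 2x2 linear system for (b, c) at each omega.
import numpy as np

def boundary_points(a, tau1, tau2, omegas):
    pts = []
    for w in omegas:
        A = np.array([[np.cos(w * tau1), np.cos(w * tau2)],
                      [np.sin(w * tau1), np.sin(w * tau2)]])
        if abs(np.linalg.det(A)) < 1e-9:
            continue                      # skip (near-)singular directions
        b, c = np.linalg.solve(A, np.array([a, -w]))
        pts.append((w, b, c))
    return pts

# Example: rationally dependent delays with ratio 1/3, as highlighted above
a, tau1, tau2 = 1.0, 1.0, 3.0
curve = boundary_points(a, tau1, tau2, np.linspace(1e-3, 20.0, 2000))
# Together with the omega = 0 line b + c = a, these curves partition the
# (b, c) plane; the region containing b = c = 0 (where x' = -a x is stable
# for a > 0) is the stability region.
print(curve[:3])
```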
Investigating the math of waveshapers: Chebyshev polynomials Jun 18, 2022 ~ last updated on Sep 8, 2022

Over a year ago, I wrote "Adding harmonic distortions with Arduino Teensy". In that post, I happened upon a way to apply any arbitrary profile of harmonics using a Teensy-based waveshaper (except that waveshapers categorically can't vary the phase of each harmonic). I used that project to replicate the "musical" distortion of tube amplifiers, but I had forgotten about the other purpose of those things: sound synthesis. Guitar pedals, soft clippers, and bit-crushers–that is, pushing distortion to the point of musicality in its own right. That was where all the established literature was! Even in 1979, there was "A Tutorial on Non-Linear Distortion or Waveshaping Synthesis". I happened upon the same thing in an overcomplicated way, but it helped me learn the practical nuances of waveshapers. Let me show a mix of my own and the established method. Compared to what I did before, it's easier to implement and much more concise.

How to generate a waveshaper function in four steps!

1. Decide what amplitude ratios $\alpha_n$ each $n$-th harmonic should have with the fundamental frequency.
2. Build a preliminary function $f_0(x)$ as the weighted sum of the Chebyshev polynomials \[f_0(x) = T_1(x) + \sum_{n=2}^N \alpha_n T_n(x)\] where the first Chebyshev polynomials are $T_0(x) = 1$, $T_1(x) = x$, $T_2(x) = 2x^2-1$, $T_3(x) = 4x^3-3x$, $T_4(x) = 8x^4-8x^2+1$, $T_5(x) = 16x^5-20x^3+5x$, and more Chebyshev polynomials can be derived from the recursive formula \[T_{n+1}(x) = 2 x T_n(x)-T_{n-1}(x)\]
3. Shift $f_0(x)$ so that it maps zero to zero (to prevent constant DC) by evaluating $f_0(x)$ at $x=0$ and then subtracting that: \[f_1(x) = f_0(x)-f_0(0)\]
4. Normalize $f_1(x)$ by (brute-force) finding the maximum absolute value for $-1 < x < 1$ and then dividing by that: \[f_2(x) = \frac{f_1(x)}{f_{\text{1,absmax}}}\]

The above function, $f_2(x)$, is your final function. Evaluate it at as many points within $-1 < x < 1$ as can fit in your waveshaper's LUT! If the input sine wave swings exactly within $-1 < x < 1$, then the ratios $\alpha_n$ will be realized. Otherwise, different and smaller ratios will occur.

Using this method, I can perfectly replicate my old post! In that old post, I chose to give the second harmonic a weight of 0.2 and no weight to the higher ones, so $\alpha_2 = 0.2$ and $\alpha_n = 0$ for $n > 2$. The sum reduces to a single Chebyshev polynomial term, so the preliminary function is \[f_0(x) = x + 0.2 (2x^2-1)\] We can calculate that $f_0(0)=-0.2$, so our new function must be \[f_1(x) = x+0.2 (2x^2-1)+0.2\] Plotting $f_1(x)$ reveals that it achieves a maximum absolute value of 1.4 at $x=1$, so our final function must be \[f_2(x) = \frac{x+0.2 (2x^2-1)+0.2}{1.4}\] That function simplifies to $\frac{2}{7}x^2+\frac{5}{7}x$.

What's new? What's established?

Steps 1 and 2 are inspired by that 1979 paper, but steps 3 and 4 were my own. Step 4 was a practical move. It responds to the Teensy waveshaper only accepting points between -1 and 1. If that isn't true for your waveshaper, then you're at liberty to do something else–as long as you don't clip your signal unintentionally! On the other hand, step 3 represents a constraint that I haven't seen online. Specifically, we shouldn't want to map zero to something non-zero because that would cause constant DC! This is why I subtracted $f_0(0)$. However, I think it's interesting to ask why we would have a constant DC problem in the first place.
That requires digging into step 2. Why do we take a weighted sum of the Chebyshev polynomials? What are Chebyshev polynomials? And from there, how could they break? Applying Chebyshev polynomials in waveshaping? Chebyshev polynomials can be used in a rigorous approach to building waveshapers. In this context, their claim to fame is that they can twist a $\cos(x)$ wave into its $n$-th harmonic, $\cos(nx)$. Somehow, high school algebra and trigonometry are all you need to use them. You already know one if you remember the double-angle identity $\cos(2x)=2 \cos^2(x)-1$. Now if you will, imagine un-substituting $\cos(x)$, and you get the Chebyshev polynomial $T_2(x) = 2x^2-1$. Then, imagine a double-double-angle formula, $\cos(4x)=2 \cos^2(2x)-1$, and expand that to $8 \cos^4(x)-8 \cos^2(x)+1$. Un-substituting $\cos(x)$ gives the Chebyshev polynomial $T_4(x)=8x^4-8x^2+1$. However, there is another way to jump from $T_2(x)$ to $T_4(x)$, and that is the recursion formula $T_{n+1}(x) = 2 x T_n(x)-T_{n-1}(x)$. Feel free to try it yourself, and notice that this alternative method involves stepping through $T_3(x)$ first. That would have corresponded to the triple-angle formula! Okay, my only reason for bringing up $T_4(x)$ was this intuitive-looking plot. Phase shifts usually get in the way of the intuitive nature, but not for $n=4$. That aside! At the same time, how we should use a Chebyshev polynomial becomes clearer. That is, they're actually functions of $\cos(x)$ that spit out $\cos(n x)$! In symbols, $T_n(\cos(x)) = \cos(n x)$. The final product of waveshapers should be a sum of the original wave plus some new harmonics. According to $T_n(\cos(x)) = \cos(n x)$, the Chebyshev polynomials stand in for the new harmonics, and so we simply take a weighted sum. Here $T_1(x) = \cos(x)$ by defintion, so it is our original wave. Then, each $\alpha_n T_n(x)$ term adds the $n$-th harmonic, scaled by $\alpha_n$. That is what step 2 really means. Breaking polynomial waveshapers with arbitrary inputs That said, something is wrong with the result of step 2. Often, we'd want to input a wave of some varying amplitude $a(t) \leq 1$ (i.e. an ADSR envelope) or even an arbitrary input. I don't know where the impacts end. At the very least, I can address one of them: DC shifts. For $a(t) = 0$, a waveshaper will see nothing but zero, and it may decide to map that to something nonzero. This is because Chebyshev polynomials weren't defined with that in mind. For example, $T_2(0)=-1$. If my headphones saw -1 volts at DC, they'd blow. From my old post, I only saw that happen when I added even harmonics, and I've seen that adding a constant equal to $\alpha_n$ or $-\alpha_n$ would correct that. Ultimately though, the easiest way to correct this effect is to just evaluate the waveshaper function at $x=0$, then subtract that value. That's step 3. Finally, this addition technically could be considered as a part of the weighted sum in part 2. Specifically, it would be as $\alpha_0$ of another Chebyshev polynomial, $T_0(x)=1$, that represents a "zeroth" harmonic or 0 Hz. I just wouldn't wish upon anyone the task of solving for $\alpha_0$ without evaluating the rest of the sum first.
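If you'd rather not do the algebra by hand, here's a small Python sketch of the four-step recipe using numpy's Chebyshev helpers. The only thing I'm assuming beyond the recipe itself is the (arbitrary) LUT length; the check at the end confirms it reproduces the worked example, where $\alpha_2 = 0.2$ collapses to $\frac{2}{7}x^2+\frac{5}{7}x$.

```python
# Sketch of the four-step waveshaper recipe above.
import numpy as np
from numpy.polynomial import chebyshev as C

def make_waveshaper(alphas, lut_size=1025):
    """alphas: dict {harmonic n: amplitude ratio alpha_n}, n >= 2."""
    # Step 2: f0 = T_1 + sum_n alpha_n * T_n, expressed in the Chebyshev basis
    n_max = max(alphas) if alphas else 1
    coeffs = np.zeros(n_max + 1)
    coeffs[1] = 1.0
    for n, a in alphas.items():
        coeffs[n] += a
    x = np.linspace(-1.0, 1.0, lut_size)
    f0 = C.chebval(x, coeffs)
    # Step 3: remove the DC offset so that 0 maps to 0
    f1 = f0 - C.chebval(0.0, coeffs)
    # Step 4: normalize by the (brute-force) maximum absolute value on [-1, 1]
    f2 = f1 / np.max(np.abs(f1))
    return x, f2

x, lut = make_waveshaper({2: 0.2})
# Matches the closed form (2/7)x^2 + (5/7)x from the worked example
assert np.allclose(lut, (2/7) * x**2 + (5/7) * x)
```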
Laboratory measurements show temperature-dependent permittivity of lunar regolith simulants
Express Letter
M. Kobayashi, H. Miyamoto (ORCID: orcid.org/0000-0001-8013-6124), B. D. Pál, T. Niihara & T. Takemura
Earth, Planets and Space volume 75, Article number: 8 (2023)

The mapping of available water–ice is a crucial step in lunar exploration missions. Ground penetrating radars have the potential to map the subsurface structure and the existence of water–ice in terms of the electromagnetic properties, specifically, the permittivity. Slight differences in permittivity can be significantly important when applied in a dry environment, such as on the Moon and Mars. The capability of detecting a small fraction of putative water–ice depends on the permittivity changes in terms of its dependent parameters, such as the frequency, the temperature, the porosity, and the chemical composition. Our work aims at mitigating the false detection or overlooking of water–ice by considering conditions that previous research did not cover. We measured the permittivity of different lunar-regolith-relevant analogue samples with a fixed 40 % porosity in the ultra-high-frequency–super-high-frequency band. We used the coaxial probe method to measure anorthosite, basalt, dunite and ilmenite at \(20\,^\circ \hbox {C}\), \(-20\,^\circ \hbox {C}\) and \(-60\,^\circ \hbox {C}\), and we find that, at \(-60\,^\circ \hbox {C}\), the permittivity decreases by about 6–18 % compared with the values at \(20\,^\circ \hbox {C}\). Within this temperature range, the permittivity is quite similar to the permittivity of water–ice. We find that the conventional calculation would overestimate the permittivity in low-temperature areas, such as the permanently shadowed regions. We also find that each component in the lunar regolith has a different temperature-dependent permittivity, which might be important for radar data analysis to detect lunar polar water–ice. Our results also suggest that it should be possible to estimate the water–ice content from radar measurements at different temperatures given an appropriate method.

The existence of putative water–ice on the Moon has been a frequently debated topic for the last several decades (Arnold 1979; Watson et al. 1961). Multiple observations have implied the existence of water–ice, especially in the low-temperature regions of the lunar poles in permanently shadowed regions (Colaprete et al. 2010; Feldman et al. 1998; Lawrence et al. 2006; Li et al. 2018; Nozette et al. 1996; Spudis et al. 2013; Zuber et al. 2012; Kereszturi 2020). The water–ice on the Moon has substantial scientific value and is also a vital resource for future explorations and long-term missions on the Moon. However, there is no consensus about the amount and distribution of water–ice. The estimated amount in the regolith ranges from less than 1.0 wt% (Miller et al. 2012; Pieters et al. 2009; Sanin et al. 2017) to about 30 wt% (Li et al. 2018). Moreover, the estimated distributions from different observations do not match (e.g., Fisher et al. 2017; Hayne et al. 2015; Li et al. 2018). Thus, several landing missions such as Lunar Polar Exploration (LUPEX), Volatiles Investigating Polar Exploration Rover (VIPER), and Chang'E-6 aim at gathering data about the amount and distribution of water–ice in the lunar polar region (Colaprete et al. 2019; Hoshino et al. 2020).
The existence of ice could be examined through in-situ analysis of subsurface materials by drilling, which is technically possible up to a few meters (Boazman et al. 2022). In our work, we focus on obtaining information about the subsurface up to this depth and aid the selection of candidate sites. One observational instrument that can obtain information on the lunar subsurface up to a few meters in depth and with a high resolution is the Ground-Penetrating Radar (GPR). GPRs use high-frequency radio waves to image the subsurface. Electromagnetic (EM) waves reflect or scatter at the boundary between materials with different dielectric properties. The behavior of EM waves in a medium is mainly determined by permittivity, permeability, and conductivity. For the Apollo samples, the conductivity is typically almost zero \((10^{-9} - 10^{-14}\) S/m) (Heiken et al. 1991), which means that the maximum attenuation rate is in the order of \(10^{-7}\) dB/m in the lunar regolith and can be negligible for the EM propagation through the lunar regolith. In GPRs frequency range, most minerals (except for magnetic minerals) have no ferromagnetism, so the permeability is almost the same as the magnetic constant (permeability in vacuum), and does not affect the propagation of EM waves through soils or rocks (Martinez and Byrnes 2001). Thus, permittivity is the only dominant parameter for EM propagation through lunar regolith. This physical exploration method has been conducted by the previous Chang'E missions (Xiao et al. 2015; Li et al. 2020; Zhang et al. 2020). LUPEX is also equipped with a GPR to identify regolith structures before drilling. In-situ observations have been conducted to estimate lunar regolith permittivity (e.g., Dong et al. 2017, 2021; Ishiyama et al. 2013; Su et al. 2022), but the accuracy is limited, which makes laboratory measurements to be one of the best methods to estimate permittivity. A reliable data set was compiled from the Apollo missions measured under dry conditions at various temperatures (e.g., Chung and Westphal 1973; Frisillo et al. 1975; Gold et al. 1976; Olhoeft and Strangway 1975; Heiken et al. 1991). The relative permittivity is associated with the bulk density and FeO+\(\hbox {TiO}_2\) content (e.g., Heiken et al. 1991; Olhoeft and Strangway 1975), and more recent results came from the JSC-1A (Johnson Space Center) lunar regolith simulant measurements (Calla and Rathore 2012) and the Chang'E-5 mission (Su et al. 2022). Although these studies help us understand the lunar subsurface, it is still challenging to predict the permittivity of future exploration sites. This is because the permittivity is dependent on various parameters such as applied frequency, water content, chemical composition, and temperature (Campbell and Ulrichs 1969; Hansen 1973; Heiken et al. 1991; Jones and Friedman 2000; Shkuratov and Bondarenko 2001; Topp et al. 1980), that is, the permittivity should be treated with the constraints of these parameters. Future GPRs also use a higher frequency band than previous in-situ observations to acquire higher resolution, so it is unclear whether the permittivity at other frequencies can be applied to the analysis of GPRs. In addition, the chemical composition of lunar rocks and regolith has a wide range (e.g., Heiken et al. 1991; Lemelin et al. 2022). Thus, it is not easy to estimate the permittivity at different landing sites from only the measurement of returned samples. 
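As a concrete illustration of why the permittivity value matters for GPR interpretation (this example is mine, not from the cited studies), the two-way travel time of a radar echo converts to depth through the wave speed \(v = c/\sqrt{\varepsilon_r}\) in a low-loss dielectric. The short sketch below shows how a modest spread in the assumed relative permittivity shifts the inferred reflector depth; the travel time and permittivity values are arbitrary examples.

```python
# Minimal illustration: depth inferred from a GPR two-way travel time
# depends on the assumed relative permittivity of dry regolith.
c = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_twt(twt_ns, eps_r):
    """Depth (m) for a two-way travel time in nanoseconds, given eps_r."""
    v = c / eps_r**0.5            # EM wave speed in a low-loss dielectric
    return v * (twt_ns * 1e-9) / 2.0

twt = 20.0  # ns, an arbitrary reflector
for eps_r in (2.7, 3.0, 3.3):     # a plausible spread for dry regolith
    print(f"eps_r = {eps_r}: depth = {depth_from_twt(twt, eps_r):.2f} m")
```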
Although the measurement of permittivity of simulants is helpful for this purpose, the chemical composition is adjusted to be similar to Apollo samples and the variation has less. Although the effect of temperature on the permittivity is also significant, especially in the low temperature regions such as the permanently shadowed regions, few previous researches have considered the effect in the frequency band for GPR. Calla and Rathore (2012) reported on the temperature dependence of permittivity by measuring the JSC-1A simulant, which is partly very useful to consider the permittivity under the low-temperature environment on the Moon. However, the JSC-1A simulant is manufactured to represent Mare region, not highland and polar region. Thus, the permittivity of highland and polar region at low temperatures has been unclear. Furthermore, the permittivity of water–ice is about 3.1–3.2 (Fujita et al. 2000), which is very close to that of the lunar subsurface. This means that it is unreliable to analyse water–ice existence from the roughly estimated permittivity. To gain useful results for near-future lunar missions, we need to precisely estimate the permittivity at landing sites with various chemical compositions. In this letter, we focus not on the mixture of lunar simulant, but the typical lunar regolith end-members covering the whole lunar surface of mare, highland, and polar regions (useful for arbitrary mixture permittivity estimations), and report permittivity measurements on different frequencies, which are useful for the GPR data analysis. We simulate the lunar surface/subsurface environment of the polar regions and measure the permittivity with high accuracy in the UHF–SHF band. Knowing the permittivity of several lunar simulants as the end-member of lunar regolith has an advantage to enable us to estimate the bulk permittivity of regolith with an arbitrary composition in the future. Our results highlight the importance of keeping the temperature dependence of various minerals in mind when studying radar data, because other basic lunar regolith materials could be mistaken for water–ice content. The new water–ice hunting lunar missions already on their way to the Moon, working or planned to start in the next years make our results a timely addition to the field. Experiment methodology We measure the permittivity of four samples, including rocks and a pure mineral, that typically exist on the Moon (Additional file 1: Fig. S1 and Additional file 3: Table S1): anorthosite, basalt, dunite and ilmenite. Anorthosite, mainly composed of plagioclase accompanying several types of mafic minerals, is distributed widely on the lunar highland and is considered as a representative material for a primitive crust. According to the data from the Kaguya Spectral Profiler measurement, the polar regions can be characterized by a mixture of homogeneous plagioclase (up to 90 wt%) and FeO (<15 wt%) content (Lemelin et al. 2022). Thus, the permittivity of anorthosite is crucial for interpreting the data of the radar observations by LUPEX and Chang'E, while basalt is typically found in Mare regions. Using the permittivity of anorthosite and basalt we can roughly estimate the bulk permittivity in almost all lunar regions. 
Besides, as mafic materials, we measured the permittivity of dunite (mainly composed of olivine, located in the region where the crust is relatively thin), which exists on the floor of the SPA basin, and its permittivity should be significant for the analysis of the Lunar Penetrating Radar (LPR) data on Chang'E-4 Yutu-2 rover. In addition, the bulk permittivity of the lunar surface and subsurface could be affected by a small fraction of other materials, especially with high concentrations of Fe and/or Ti (Heiken et al. 1991; Olhoeft and Strangway 1975), and thus, we also measure the permittivity of ilmenite. The chemical composition of the geological samples is measured by X-Ray Fluorescence (XRF) to consider the adequacy of samples for the simulated lunar materials. The measurement is conducted using ZSX Primus II (Rigaku) at 50 kV. Additional file 3: Table S2. shows the chemical composition of each sample. We also checked the modal composition of anorthosite to conduct the point counting method, and confirmed that the anorthosite consistes of mostly Ca-rich plagioclase (92 vol% at most) and with minor pyroxene and olivine, which is consistent with the Apollo sample from highland (Heiken et al. 1991). The bulk composition of basalt is within the chemical composition of the Apollo samples except for \(\hbox {TiO}_2\) and FeO, which may result from the lack of ilmenite. The grain density of each sample is also measured using a 25 mL Gay–Lussac pycnometer. As in the case of the ilmenite, the grain density is the ideal one (Holden 1921). The coaxial probe method requires uniformly mixed samples for accurate permittivity measurements, so we crushed the solid samples into fine powders to exclude heterogeneity (Additional file 1: Fig. S1). A few to tens of centimetre-sized samples are first crushed with the hammer and jaw crusher (CR-200B; Marubishi Scientific Instrument Meg. CO., LTD) into smaller than 10 mm, then crushed further with the stamp mill made of stainless steel (ANS-143PS; NITTO KAGAKU CO., LTD). We then sieved the powders with 46 \(\mu\)m with JIS standard sieves. The grain size of ilmenite is about hundreds micrometers, and the basalt is sieved with 500 \(\mu\)m because this sample has a mineral dependence on the particle size and the sieving would exclude some minerals arbitrarily. We then baked the samples at 95 \(^\circ \hbox {C}\) for a day or more in the oven to get rid of the water content because the lunar regolith is mostly dry (at the polar regions it is more complex as ice might exist under dry regolith), and the existence of water could affect the bulk permittivity of the samples (Olhoeft and Strangway 1975; Topp et al. 1980). To remove the effect of porosity on the permittivity, which is one of the most effective parameters on the permittivity, we carefully arranged each sample to have the same porosity of 40 %, which is relevant for the top cm–dm layer although at 1 m below the porosity is much smaller. The porosity can change with tapping so we treated the samples carefully, minimising vibrations before the measurement. This preparation makes it possible to measure the permittivity depending on only the difference of materials under the constrained environment. Permittivity measurement Various methods are proposed to measure the permittivity (Venkatesh and Raghavan 2005), but coaxial probe method is the most accurate with high reproducibility at the UHF–SHF band. 
The method utilises the reflection caused by the impedance mismatch between the probe and the samples (Wang et al. 2020). The reflection is measured at a single network port as the \(S_{11}\) parameter which describes the ratio of input to output power in an electrical instrument. We used the coaxial probe and cable (85070E Dielectric Probe Kit; Keysight) and the vector network analyzer (VNA; 8753ES S-parameter Network Analyzer; Keysight) (Additional file 2: Fig. S2), with frequency between 1 MHz to 6 GHz in 1601 points (the frequency resolution is \(\sim\)3.7 MHz). To make the measurement stable, we warm up the vector network analyzer for 60 minutes before measurements. For the calibration, air, short, and pure water are used, which is the standard method for using the coaxial probe method (Blackham and Pollard 1997). We include the numerical results of the room temperature (\(20\,^\circ \hbox {C}\)),\(-20\,^\circ \hbox {C}\), and \(-60\,^\circ \hbox {C}\) deep freezer experiments in this Letter as we use these to determine the permittivity of an arbitrary mixture. To quantify the measurement error of the developed experimental system, we measured the permittivity of air and pure water after calibration, before and after the room temperature experiments. The permittivity values and standard errors of air and pure water measurements are in Additional file 3: Tables S3 and S4 comparing with Barthel et al. (1991). The maximum relative error of air is 2.8 % at 1 GHz and \(\le\) 1.0 % above 2 GHz, while that of pure water is 1.3 % at 1.7 GHz and \(\le\) 0.8 % above 2.05 GHz. As the maximum relative error is more prominent at 1 GHz than at frequencies higher than 2 GHz, we show the measurement results of the permittivity of geological samples between 2 GHz and 6 GHz. We list the permittivity results of each sample at \(20\,^\circ \hbox {C}\), \(-20\,^\circ \hbox {C}\), and \(-60\,^\circ \hbox {C}\) in Tables 1, 2 and 3. We measured every sample several times, with the probe positioned in various places on the surface of the samples to exclude the effects of the non-uniformity of the powdered samples (see Additional file 2: Fig. S2 in the Appendix for the experimental setup). The permittivity of the samples, with the exception of ilmenite is about 3, which is approximately consistent with the Apollo return sample results (Heiken et al. 1991). In the case of ilmenite, the permittivity is higher, about 7–8, which is consistent with previous results of the permittivity being strongly dependent on the FeO+\(\hbox {TiO}_2\) content (Heiken et al. 1991). We found that permittivity displays a strong temperature dependence, which is different for each sample. Comparing the \(20\,^\circ \hbox {C}\) and \(-60\,^\circ \hbox {C}\) results, the permittivity of basalt changes by \(\sim\)15–18 %, the permittivity of anorthosite changes by \(\sim\)10–14 % and dunite shows only a \(\sim\)6–10 % difference. The permittivity of ilmenite also varies by about only \(\sim\)6–10 %, but the maximum absolute value change in the case of ilmenite is \(\sim\)0.9, which is larger than in the case of the other samples. The temperature dependence on the permittivity of lunar simulant has been previously reported by Calla and Rathore (2012). They described that the permittivity of JSC-1A at 2.5 GHz is about 4.13 and 4.01 at 30 \(^\circ \hbox {C}\) and − 50 \(^\circ \hbox {C}\), respectively, which indicates that the decrease is about 3.0 %. This difference is smaller than our measurement results. 
Theoretical research based on the Debye model, detailed in Yushkova and Kibardina (2017), reports that the permittivity decrease between \(20\,^\circ \hbox {C}\) and \(-60\,^\circ \hbox {C}\) at 1 GHz is about 13 % (see Fig. 4 in Yushkova and Kibardina 2017). Thus, our result is closer to this theoretical estimation of the temperature dependence than to the previous measurement of the lunar simulant. At lower temperatures, such as 40–80 K (typical temperatures in the permanently shadowed regions), the theoretical estimation of the permittivity of lunar simulants suggests that the permittivity should not decrease further after reaching a certain temperature (Yushkova and Kibardina 2017). This limit is about 200 K or less in the UHF–SHF band, so our results at \(-60\,^\circ \hbox {C}\) might be close to the lowest permittivity value (possibly somewhat larger). This should be confirmed by measurements at lower temperatures in future work.

A simple demonstration of our results with the Looyenga–Landau–Lifshitz equation (one of the mixing equations for permittivity) (Looyenga 1965) would be the following: let us assume that (1) a lunar rover equipped with a GPR operating at 4 GHz measures a permittivity of 3.0 at an anorthositic regolith site (e.g., a lunar polar region), and (2) the porosity is 40 % (relevant for the top cm–dm layer). Using the Looyenga–Landau–Lifshitz equation, the permittivity of anorthositic regolith with 40 % porosity resulting from our measurements, and a permittivity of 3.1 for water–ice (Fujita et al. 2000), we can estimate the water–ice content as 4.4 wt% at \(-20\,^\circ \hbox {C}\) and 11.2 wt% at \(-60\,^\circ \hbox {C}\). This means that, because the permittivity of the rock fraction is different at the two temperatures, the water–ice content estimated from a permittivity of 3.0 is different (even though the permittivity measured at the regolith site is the same). The difference between the two estimated water–ice contents indicates that an accurate water–ice abundance estimation also requires temperature information, to account for the temperature-dependent permittivity.

In addition to the temperature information, the variation of materials must also be considered: our results show how this temperature dependence differs for each geological sample; thus, the permittivity of lunar regolith should in the future be estimated with these variations in mind. For example, if the content of ilmenite, which has one of the highest permittivities among the regolith constituents, is not considered accurately, the resulting higher permittivity could falsely suggest a water–ice content in radar measurements. We highlight the importance of taking into consideration the possibility that the measured permittivity is a consequence of the ilmenite content. In addition, observing the porosity of regolith in situ is difficult with existing methods. Thus, a method is required to obtain the porosity and water–ice content while considering the chemical composition of the lunar regolith at an arbitrary site. To aid this, we are developing a method to quantify the water–ice content focusing on the different temperature dependences of the permittivity of rocks, minerals, and water–ice (Kobayashi et al., in prep.).
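To make the Looyenga–Landau–Lifshitz demonstration above concrete, the sketch below solves an LLL mixture for the ice fraction and converts it to a weight fraction. The dry-regolith permittivities and densities used here are illustrative placeholders (not the tabulated values of this study), and the mixing configuration (ice replacing part of the pore space) is an assumption of the sketch, so the numbers it prints are not expected to match the 4.4 wt% and 11.2 wt% quoted above.

```python
# Looyenga-Landau-Lifshitz (LLL) mixing: eps^(1/3) = sum_i f_i * eps_i^(1/3),
# where f_i are volume fractions. Scenario: anorthositic regolith with 40 %
# porosity whose pore space is partly filled by water-ice, and a GPR-measured
# bulk permittivity of 3.0.
EPS_MEASURED = 3.0
EPS_ICE = 3.1              # water-ice (Fujita et al. 2000)
POROSITY = 0.40
RHO_SOLID = 2.73           # g/cm^3, assumed anorthosite grain density
RHO_ICE = 0.92             # g/cm^3

def ice_content_wt(eps_dry_regolith):
    """Ice wt% assuming ice replaces part of the pore space (vacuum, eps = 1)."""
    f_ice = (EPS_MEASURED ** (1 / 3) - eps_dry_regolith ** (1 / 3)) \
            / (EPS_ICE ** (1 / 3) - 1.0)
    m_ice = f_ice * RHO_ICE                     # mass per unit bulk volume
    m_rock = (1.0 - POROSITY) * RHO_SOLID
    return 100.0 * m_ice / (m_ice + m_rock)

# Placeholder permittivities of the dry 40 %-porosity regolith (illustrative only).
for label, eps_dry in [("-20 degC", 2.95), ("-60 degC", 2.70)]:
    print(f"{label}: estimated ice content ~ {ice_content_wt(eps_dry):.1f} wt%")
```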
Most of the previous works analysing GPR data of the Moon rely on the relationship between the permittivity of the Apollo samples and the bulk density (Heiken et al. 1991; Olhoeft and Strangway 1975):
$$\begin{aligned} \epsilon =1.919^{\varrho } \end{aligned}$$ (1)
where \(\varrho\) is the bulk density in \(\hbox {g/cm}^{3}\). This empirical formula (Eq. 1) is useful as a rough average estimation of the permittivity of the lunar subsurface; however, since it was determined by fitting permittivity measurements of the Apollo samples under various conditions, it is less accurate at specific locations. In addition, because each material shows a different temperature dependence, it is difficult to estimate the permittivity of the lunar regolith at low temperatures based on Eq. 1. We calculated the permittivity based on Eq. 1 and the density of the samples used in the measurements. While the permittivity at \(20\,^\circ \hbox {C}\) and \(-20\,^\circ \hbox {C}\) is in both cases larger than the value coming from Eq. 1, the permittivity at \(-60\,^\circ \hbox {C}\) is lower (Fig. 1). This indicates that GPR analysis using Eq. 1 cannot account for the temperature dependence of the permittivity, which could lead to a wrong estimation of the subsurface structure and even of the existence of water–ice. Furthermore, it is unclear whether Eq. 1 can be applied to GPR analyses at all because of the difference in frequency range, since the equation was acquired by fitting data measured at lower frequencies (typically 10–450 MHz). Our results in the UHF–SHF band show that while the permittivity is only slightly affected by the frequency, the variation is considerably larger in the case of ilmenite (Fig. 1), which is consistent with previous studies (e.g., Stillman and Olhoeft 2008). Overall, our results indicate that although the permittivity changes with temperature, the variation ranges widely and should be treated carefully by considering the materials present at the radar-scanned location; this makes estimations of the subsurface structure and even of the existence of water–ice possible in future missions.

Table 1 Permittivity of the samples having 40 % porosity at \(20\,^\circ \hbox {C}\)
Table 2 Permittivity of the samples having 40 % porosity at \(-20\,^\circ \hbox {C}\)
Table 3 Permittivity of the samples having 40 % porosity at \(-60\,^\circ \hbox {C}\)

To appropriately evaluate GPR data from previous and future lunar missions, we need to take the variations in chemical composition, temperature conditions, and porosity of the lunar regolith into consideration. We evaluated these effects individually through laboratory measurements: first, we prepared lunar-representative materials based on chemical studies of the Apollo return samples, meteorite studies, and remote-sensing observations. We determined 4 end-member rocks and minerals for radar observations, namely anorthosite, basalt, dunite, and ilmenite. We developed a system to measure the permittivity of powdered samples in the 2–6 GHz frequency range with high precision (less than 1 % relative error). We prepared powdered samples of the 4 identified end-members and measured their permittivity at different temperatures (\(20\,^\circ \hbox {C}\), \(-20\,^\circ \hbox {C}\), and \(-60\,^\circ \hbox {C}\)). We found that the permittivity of lunar simulant materials has a complex dependency on several parameters. The content of ilmenite increases the bulk permittivity of the lunar regolith, which is also strongly dependent on temperature. The permittivity of the geological samples decreases at lower temperatures, typically by 6–18 % between \(20\,^\circ \hbox {C}\) and \(-60\,^\circ \hbox {C}\).
Previous works did not accurately consider the temperature-dependent differences of permittivity when estimating the permittivity of the lunar subsurface; thus our results fill an important knowledge gap, while being consistent with former research (e.g., Yushkova and Kibardina, 2017). While they reported the temperature dependence of the permittivity of lunar simulants, our paper is the first to report the difference in temperature dependence among lunar simulants representing lunar regolith end-members. Thus, this effect on the permittivity should be considered carefully when discussing the existence of water–ice based on radar observations at the cold polar lunar regions. While we reported that the permittivity of lunar materials depends on temperature, this is not true for water–ice (Fujita et al. 2000). This implies that by measuring lunar regolith permittivity at different temperatures (different local times), the water–ice content could be calculated from permittivity variations (Kobayashi et al., in prep). Thus we propose that, using the results from this future method, the existence of even small amounts of water–ice could be detected from radar data collected at different temperatures.

Temperature dependence of the permittivity. a Anorthosite, b Basalt, c Dunite, and d Ilmenite. The circular points are the average of 10 points below and above each frequency (e.g., the average of the permittivity from 1.984–2.018 GHz in the case of 2 GHz). In the case of 6 GHz, it is averaged with the 10 points only below the frequency (i.e., the average of the permittivity from 5.966 to 6.000 GHz). The error bar shows the standard error. The colour shows the frequency (dark purple: 2 GHz, light blue: 3 GHz, green: 4 GHz, orange: 5 GHz, and dark red: 6 GHz), while the horizontal line shows the calculation result based on Eq. 1 in magenta, where appropriate. The calculation result is only shown in the case of the anorthosite and basalt, because Eq. 1 was derived from the Apollo samples and the dunite and ilmenite are minor components in the Apollo samples.

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

GPR: Ground-penetrating radar
EM: Electromagnetic
JSC: Johnson Space Center
UHF–SHF: Ultra high frequency–super high frequency
LPR: Lunar Penetrating Radar
XRF: X-Ray fluorescence
VNA: Vector network analyzer
SPA: South Pole-Aitken

Arnold JR (1979) Ice in the lunar polar regions. J Geophys Res Solid Earth 84:5659–5668 Barthel J, Bachhuber K, Buchner R, Hetzenauer H, Kleebauer M (1991) A computer-controlled system of transmission lines for the determination of the complex permittivity of lossy liquids between 8.5 and 90 GHz. Ber Bunsenges Phys Chem. https://doi.org/10.1002/bbpc.19910950802 Blackham D, Pollard R (1997) An improved technique for permittivity measurements using a coaxial probe. IEEE Trans Instrum Meas 46(5):1093–1099 Boazman S, Kereszturi A, Heather D, Sefton-Nash E, Orgel C, Tomka R, Houdou B, Lefort X (2022) Analysis of the Lunar South Polar Region for PROSPECT, NASA/CLPS. Eur Planet Sci Congr. https://doi.org/10.5194/epsc2022-530 Calla OPN, Rathore IS (2012) Study of complex dielectric properties of lunar simulants and comparison with Apollo samples at microwave frequencies. Adv Space Res 50(12):1607–1614 Campbell MJ, Ulrichs J (1969) Electrical properties of rocks and their significance for lunar radar observations. J Geophys Res 74(25):5867–5881 Chung DH, Westphal WB (1973) Dielectric spectra of Apollo 15 and 16 lunar solid samples.
Lunar Planet Sci Conf Proc 4:3077 Colaprete A, Andrews D, Bluethmann W, Elphic RC, Bussey B, Trimble J, Zacny K, Captain JE (2019) An Overview of the Volatiles Investigating Polar Exploration Rover (VIPER) Mission. AGU Fall Meeting Abstracts. 2019:P34B-03 Colaprete A, Schultz P, Heldmann J, Wooden D, Shirley M, Ennico K, Hermalyn B, Marshall W, Ricco A, Elphic RC, Goldstein D, Summy D, Bart GD, Asphaug E, Korycansky D, Landis D, Sollitt L (2010) Detection of water in the LCROSS ejecta plume. Science 330(6003):463 Dong Z, Fang G, Ji Y, Gao Y, Wu C, Zhang X (2017) Parameters and structure of lunar regolith in Chang'E-3 landing area from lunar penetrating radar (LPR) data. Icarus 282:40–46 Dong Z, Fang G, Zhou B, Zhao D, Gao Y, Ji Y (2021) Properties of Lunar Regolith on the Moon's Farside Unveiled by Chang'E-4 Lunar Penetrating Radar. J Geophys Res 126(6):e06564 Feldman WC, Maurice S, Binder AB, Barraclough BL, Elphic RC, Lawrence DJ (1998) Fluxes of fast and epithermal neutrons from Lunar prospector: evidence for water ice at the Lunar poles. Science 281:1496 Fisher EA, Lucey PG, Lemelin M, Greenhagen BT, Siegler MA, Mazarico E, Aharonson O, Williams J-P, Hayne PO, Neumann GA, Paige DA, Smith DE, Zuber MT (2017) Evidence for surface water ice in the lunar polar regions using reflectance measurements from the Lunar Orbiter Laser Altimeter and temperature measurements from the Diviner Lunar Radiometer Experiment. Icarus 292:74–85 Frisillo AL, Olhoeft GR, Strangway DW (1975) Effects of vertical stress, temperature and density on the dielectric properties of lunar samples 72441,12, 15301,38 and a terrestrial basalt. Earth Planet Sci Lett 24(3):345–356 Fujita S, Matsuoka T, Ishida T, Matsuoka K, Mae S (2000) A summary of the complex dielectric permittivity of ice in the megahertz range and its applications for radar sounding of polar ice sheets. In: Fujita S (ed) Physics of ice core records. Hokkaido University Press, Hokkaido, pp 185–212 Gold T, Bilson E, Baron RL (1976) Electrical properties of Apollo 17 rock and soil samples and a summary of the electrical properties of lunar material at 450 MHz frequency. Lunar Planet Sci Conf Proc 3:2593–2603 Hansen W (1973) The dielectric properties of selected basalts. Geophysics 38(1):135 Hayne PO, Hendrix A, Sefton-Nash E, Siegler MA, Lucey PG, Retherford KD, Williams J-P, Greenhagen BT, Paige DA (2015) Evidence for exposed water ice in the Moon's south polar regions from Lunar reconnaissance orbiter ultraviolet albedo and temperature measurements. Icarus 255:58–69 Heiken GH, Vaniman DT, French BM (1991) Lunar sourcebook, a user's guide to the moon Holden EF (1921) Specific gravity and composition in iron rutile. Am Mineral J Earth Planet Mater 6(6):100–103 Hoshino T, Wakabayashi S, Ohtake M, Karouji Y, Hayashi T, Morimoto H, Shiraishi H, Shimada T, Hashimoto T, Inoue H, Hirasawa R, Shirasawa Y, Mizuno H, Kanamori H (2020) Lunar polar exploration mission for water prospection–JAXA's current status of joint study with ISRO. Acta Astronaut 176:52–58 Ishiyama K, Kumamoto A, Ono T, Yamaguchi Y, Haruyama J, Ohtake M, Katoh Y, Terada N, Oshigami S (2013) Estimation of the permittivity and porosity of the lunar uppermost basalt layer based on observations of impact craters by SELENE. J Geophy Res 118(7):1453–1467 Jones SB, Friedman SP (2000) Particle shape effects on the effective permittivity of anisotropic or isotropic media consisting of aligned or randomly oriented ellipsoidal particles. 
Water Resour Res 36(10):2821–2833 Kereszturi A (2020) Polar ice on the moon. Springer International Publishing, Cham, pp 1–9. https://doi.org/10.1007/978-3-319-05546-6_216-1 Lawrence DJ, Feldman WC, Elphic RC, Hagerty JJ, Maurice S, McKinney GW, Prettyman TH (2006) Improved modeling of Lunar Prospector neutron spectrometer data: implications for hydrogen deposits at the lunar poles. J Geophys Res 111(E8):E08001 Lemelin M, Lucey PG, Camon A (2022) Compositional Maps of the Lunar polar regions derived from the Kaguya spectral profiler and the Lunar orbiter laser altimeter data. Planet Sci J 3(3):63 Li C, Su Y, Pettinelli E, Xing S, Ding C, Liu J, Ren X, Lauro SE, Soldovieri F, Zeng X, Gao X, Chen W, Dai S, Liu D, Zhang G, Zuo W, Wen W, Zhang Z, Zhang X, Zhang H (2020) The Moon's farside shallow subsurface structure unveiled by Chang'E-4 Lunar Penetrating Radar. Sci Adv 6(9):eaay6898 Li S, Lucey PG, Milliken RE, Hayne PO, Fisher E, Williams J-P, Hurley DM, Elphic RC (2018) Direct evidence of surface exposed water ice in the lunar polar regions. Proc Natl Acad Sci 115(36):8907–8912 Looyenga H (1965) Dielectric constants of heterogeneous mixtures. Physica 31(3):401–406 Martinez A, Byrnes AP (2001) Modeling dielectric-constant values of geologic materials: an aid to ground-penetrating radar data collection and interpretation. Curr Res Earth Sci. https://doi.org/10.17161/cres.v0i247.11831 Miller RS, Nerurkar G, Lawrence DJ (2012) Enhanced hydrogen at the lunar poles: New insights from the detection of epithermal and fast neutron signatures. J Geophys Res 117(E11):E11007 Nozette S, Lichtenberg CL, Spudis P, Bonner R, Ort W, Malaret E, Robinson M, Shoemaker EM (1996) The Clementine Bistatic Radar experiment. Science 274(5292):1495–1498 Olhoeft GR, Strangway DW (1975) Dielectric properties of the first 100 meters of the Moon. Earth Planet Sci Lett 24(3):394–404 Pieters CM, Goswami JN, Clark RN, Annadurai M, Boardman J, Buratti B, Combe JP, Dyar MD, Green R, Head JW, Hibbitts C, Hicks M, Isaacson P, Klima R, Kramer G, Kumar S, Livo E, Lundeen S, Malaret E, McCord T, Mustard J, Nettles J, Petro N, Runyon C, Staid M, Sunshine J, Taylor LA, Tompkins S, Varanasi P (2009) Character and Spatial Distribution of OH/\(\text{ H}_{2}\)O on the Surface of the Moon Seen by M\(^{3}\) on Chandrayaan-1. Science 326(5952):568 Sanin AB, Mitrofanov IG, Litvak ML, Bakhtin BN, Bodnarik JG, Boynton WV, Chin G, Evans LG, Harshman K, Fedosov F, Golovin DV, Kozyrev AS, Livengood TA, Malakhov AV, McClanahan TP, Mokrousov MI, Starr RD, Sagdeev RZ, Tret'yakov VI, Vostrukhin AA (2017) Hydrogen distribution in the lunar polar regions. Icarus 283:20–30 Shkuratov YG, Bondarenko NV (2001) Regolith layer thickness mapping of the moon by radar and optical data. Icarus 149(2):329–338 Spudis PD, Bussey DBJ, Baloga SM, Cahill JTS, Glaze LS, Patterson GW, Raney RK, Thompson TW, Thomson BJ, Ustinov EA (2013) Evidence for water ice on the moon: results for anomalous polar craters from the LRO Mini-RF imaging radar. J Geophys Res 118(10):2016–2029 Stillman D, Olhoeft G (2008) Frequency and temperature dependence in electromagnetic properties of Martian analog minerals. J Geophys Res 113(E9):E09005 Su Y, Wang R, Deng X, Zhang Z, Zhou J, Xiao Z, Ding C, Li Y, Dai S, Ren X, Zeng X, Gao X, Liu J, Liu D, Liu B, Zhou B, Fang G, Li C (2022) Hyperfine structure of Regolith Unveiled by Chang'E-5 Lunar Regolith penetrating radar. 
IEEE Trans Geosci Remote Sens 60:3148200 Topp GC, Davis JL, Annan AP (1980) Electromagnetic determination of soil water content: measurements in coaxial transmission lines. Water Resour Res 16(3):574–582 Venkatesh M, Raghavan V (2005) An overview of dielectric properties measuring techniques. Can Biosyst Eng 47:15–30 Wang J, Lim EG, Leach MP, Wang Z, Man KL (2020) Open-ended coaxial cable selection for measurement of liquid dielectric properties via the reflection method. Math Prob Eng 2020:1–8 Watson K, Murray B, Brown H (1961) On the possible presence of ice on the moon. J Geophys Res 66(5):1598–1600 Xiao L, Zhu P, Fang G, Xiao Z, Zou Y, Zhao J, Zhao N, Yuan Y, Qiao L, Zhang X, Zhang H, Wang J, Huang J, Huang Q, He Q, Zhou B, Ji Y, Zhang Q, Shen S, Li Y, Gao Y (2015) A young multilayered terrane of the northern Mare Imbrium revealed by Chang'E-3 mission. Science 347(6227):1226–1229 Yushkova O, Kibardina I (2017) Dielectric properties of lunar surface. Solar Syst Res 51(2):121–126 Zhang L, Li J, Zeng Z, Xu Y, Liu C, Chen S (2020) Stratigraphy of the Von Kármán Crater Based on Chang'E-4 Lunar Penetrating Radar Data. Geophys Res Lett 47(15):e88680 Zuber MT, Head JW, Smith DE, Neumann GA, Mazarico E, Torrence MH, Aharonson O, Tye AR, Fassett CI, Rosenburg MA, Melosh HJ (2012) Constraints on the volatile distribution within Shackleton crater at the lunar south pole. Nature 486(7403):378–381 We are grateful to Dr. Kazutaka Yasukawa for support on XRF measurements. This work is supported in part by the Ministry of Internal Affairs and Communications of Japanese Government Reiwa 4-0155-0099, by JAXA's Feasibility Study 2022, by the University of Tokyo's International Graduate Program for Excellence in Earth-Space Science (IGPEES), and the 164800 MÁEÖ bilateral scholarship of the Tempus Public Foundation. Department of Earth and Planetary Science, School of Science, University of Tokyo, Tokyo, Japan M. Kobayashi Department of Systems Innovation, School of Engineering, University of Tokyo, Tokyo, Japan H. Miyamoto Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, Budapest, Hungary B. D. Pál CSFK, MTA Centre of Excellence, Budapest, Hungary Department of Applied Science, Faculty of Science, Okayama University of Science, Okayama, Japan T. Niihara Department of Systems Innovations, School of Engineering, University of Tokyo, Tokyo, Japan T. Takemura MK and HM conceived the idea of the study. MK and TN developed the measurement system. MK, TN, and TT contributed to the sample preparation. MK conducted the measurement. MK, HM, and BDP contributed to the interpretation of the results. MK, HM, and BDP drafted the original manuscript. HM supervised the conduct of this study. All authors read and approved the final manuscript. Correspondence to H. Miyamoto. The authors declare that they have no competing interest. Geological samples for our permittivity measurements. a) Anorthosite, b) Basalt, c) Dunite and d) Ilmenite Experimental setup. a) The Vector Network Analyzer (VNA; 8753ES) used for our measurement. b) Appearance of our measurement system. The coaxial probe is set inside the deep freezer (up to -60 °C).
To avoid temperature changes inside, the lid is closed during each measurement. c) Inside of the deep freezer. On the floor, a lab jack is set to bring the samples into contact with the probe. The coaxial probe is covered with aluminium foil to prevent frost forming on the surface. d) Enlarged view of the sample and coaxial probe. To exclude the chemical and mineral heterogeneity on the surface of the samples, the measurement was conducted by setting the probe at different places on the surface of the samples. Geological samples for our permittivity measurements. Anorthosite and dunite, which are acquired as solid pieces, are crushed to micron size and sieved at 46 μm for stacking onto the coaxial probe. The basalt has a mineral dependence on the particle size, so a sample having a broad particle size range is used for the measurement. The grain density of ilmenite is taken from Holden [1921]. Table S2. Chemical composition of samples measured by XRF. The chemical composition of ilmenite is the ideal one calculated based on the chemical formula. Table S3. Measured permittivity, standard error, and maximum relative error of air. Above 2 GHz, the maximum relative error is less than 1.0 %. Table S4. Measured permittivity, standard error, and maximum relative error of pure water compared with Barthel et al. (1991). Above 2.05 GHz, the maximum relative error is less than 1.0 %. Kobayashi, M., Miyamoto, H., Pál, B.D. et al. Laboratory measurements show temperature-dependent permittivity of lunar regolith simulants. Earth Planets Space 75, 8 (2023). https://doi.org/10.1186/s40623-022-01757-5 Water–ice Permittivity Lunar mission 7. Planetary science
1. from Prague to Brno

Assume that the Earth is a sphere and the surface distance between Dresden and Vienna is approximately $d=370$ km. How much is the distance reduced if you decide to dig a tunnel between those two cities instead of walking? Neglect the different altitudes. Compare the tunnel distance with the walking distance. For simplicity, you can approximate the trigonometric functions as
$$ \sin \alpha \approx \alpha - \alpha^{3}/6 \,,\\ \cos \alpha \approx 1 - \alpha^{2}/2 \,,\\ \operatorname{tg} \alpha \approx \alpha + \alpha^{3}/3 \,, $$
where the angle is assumed to be given in radians.

mechanics of a point mass, mathematics

2. hollow Earth

Imagine that all the mass of the Earth is remodeled into a spherical shell. The thickness of the shell is $d=1\;\mathrm{km}$. Assuming the density remains the same, what is the outer radius of the new planet? What is the gravitational acceleration on its surface?

3. life in Venice

Two chubby residents of Venice, Paolo and Francesca Muschetti (with masses $m_{P}=180\;\mathrm{kg}$ and $m_{F}=130\;\mathrm{kg}$), decided to go for a gondola ride. However, none of the gondoliers would allow them to enter the boat because it would sink. Fortunately, they managed to find one gondolier who designed the device shown in the picture. Both Paolo and Francesca were tied to the ends of the rope in such a way that at first Francesca was at the top but then she switched with Paolo. How tall should this device be in order for the boat to traverse the canal? The travel time is $τ=60\;\mathrm{s}$. Assume that if this device is used, the gondola does not sink. You can neglect any friction, the mass of the rope and the moments of inertia of all the pulleys.

4. a hamster

Imagine the toy for hamsters depicted in the picture. The cylinder is free to rotate around the center point $O$. The hamster stands on a horizontal plate that is glued to the cylinder at a distance $h$ from the axis of rotation. How should the hamster move in order for the plate to stay in the horizontal position? The coefficient of friction between the hamster and the plate is $f$.

5. the U tube

Imagine a U-tube filled with mercury, and a bubble of height $h_{0}$ that floats inside (see the attached picture). Describe what would happen if we changed the surrounding atmosphere in the following ways. Assume that the density of mercury is independent of temperature; the same is valid for the glass the tube is made of. Also assume that the surrounding air behaves as an ideal gas. The initial state of the atmosphere is described by temperature $T_{0}=300\;\mathrm{K}$ and pressure $p_{a}=10\cdot 10^{5}\;\mathrm{Pa}$. Furthermore, assume that the system is in thermodynamic equilibrium at all times, and that the bubble has a cylindrical shape.

Both ends of the tube are open, and the temperature doubles.
Both ends of the tube are closed, and the temperature doubles.
Only one of the ends of the tube is closed, and the temperature doubles.

For each of these cases, determine the new size of the bubble, and the height difference between the mercury columns in the two branches. Bonus: Repeat the calculation assuming that the volume of mercury grows linearly with temperature.

P. messing with gravity

What if the gravitational constant suddenly doubled (without affecting the value of other physical constants)? What if it increased a hundred times? Discuss the impact the change would have on life on the Earth and on the trajectories of bodies in the universe.

gravitational field

E. it's fall again

Estimate the average surface area of a leaf of your choice.
We are looking forward to seeing a thorough statistical analysis of your measurements! Use your result to estimate the fraction of energy obtained from the Sun that is used to make saccharides.

S. drifting

What kind of drifts can we observe in a linear trap? Assume that the axis of the trap is horizontal. Will the drift caused by the gravitational force have a significant effect on the motion of a particle? Derive a formula for the loss cone and draw an original picture illustrating the behavior of a particle in a linear trap. Derive a formula for the drift caused by an electric field that is perpendicular to a magnetic field and that has a constant gradient parallel with the electric field. Discuss the dependence of the particle trajectories on the magnitude of this gradient.
The particle size is one of the most important characteristics of particulate materials. It directly affects several properties, from the accessibility of minerals during processing to the mouthfeel of many foods. In industry, the aim of particle size measurement is to first find a correlation between the particle size and the property of interest (e.g. mouthfeel, reactivity, sintering behavior). This information can then be used to modify the properties of the substances via the particle size and also to use the particle size as a parameter for quality assurance purposes. Today, measuring the particle size distribution of a substance is an easy and straightforward task thanks to the modern instrumentation available, which often enables measurement in less than one minute. To determine particle size there is a large variety of techniques available, which deliver a similarly large selection of results in the form of means, averages, modes, and other parameters. The key to understanding the results is to first know the meaning of these parameters. In this article we will introduce the basic terms and their use in particle size analysis and discuss the advantages and disadvantages of using single parameters vs. multiple parameters to characterize the particle size distribution.

Describing the particle size distribution by a single parameter

Figure 1: Theoretical particle size distribution of a simple mixture in different weightings

Obtaining a meaningful description of a particulate system with many different particle sizes and shapes using just one or two parameters is challenging. To deliver useful information, the chosen parameter should be easy to calculate, should be specific enough, and should have a direct connection to the physical property of interest. Different measurement techniques "see" the particles in a different way, which translates to a different weighting of the result. The different weightings are:

Number-weighted distribution (% number)
Intensity-weighted distribution (% in surface)
Volume-weighted distribution (% in volume)

As an example, since a microscope sees the diameter of each particle, this technique delivers a number-weighted result. In contrast, the diffraction of light is proportional to the volume of the particle, therefore techniques such as laser diffraction or X-ray diffraction deliver volume-weighted results. Figure 1 shows an example of how the particle size distribution (PSD) of a mixture containing 1 µm, 2 µm and 3 µm particles would look in different weightings.

Table 1: Particle size distributions delivered by different measurement techniques
Number-weighted distribution (% number): microscopy, dynamic image analysis
Intensity-weighted distribution (% in surface): dynamic light scattering (DLS)
Volume-weighted distribution (% in volume): sedimentation, laser diffraction, X-ray diffraction

As Figure 1 illustrates, the main difference between these weightings is the representation of the fine fraction compared to the coarse fraction. Number weighting gives equal representation to the fine and the coarse fraction. In contrast, in the volume-weighted results a few large particles might outweigh numerous smaller particles.
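As a concrete illustration of the weightings above, the sketch below converts hypothetical particle counts of a 1 µm / 2 µm / 3 µm mixture into number- and volume-weighted percentages. The counts are invented for the example and are not the data behind Figure 1.

```python
# Number- vs volume-weighted representation of the same simple mixture.
# Counts are hypothetical; volume weighting scales each class by d^3.
diameters_um = [1.0, 2.0, 3.0]
counts       = [600, 300, 100]          # invented particle counts per class

total_n = sum(counts)
number_pct = [100.0 * n / total_n for n in counts]

volumes = [n * d ** 3 for n, d in zip(counts, diameters_um)]
total_v = sum(volumes)
volume_pct = [100.0 * v / total_v for v in volumes]

for d, npct, vpct in zip(diameters_um, number_pct, volume_pct):
    print(f"{d:.0f} um: {npct:5.1f} % by number, {vpct:5.1f} % by volume")
```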
Both representations have their merits, depending on what is required from a measurement: for someone working in the mining industry, it is important to know the size of the particles in a given volume, while in microbiology, where individual cells are counted, the number-based distribution is preferred. The remaining sections of this article will look at the selection of options to visualize, analyze and compare particle size distributions, as illustrated by an example of a measurement using the laser diffraction technique.

Using laser diffraction to obtain a single parameter for quality control

Figure 2: Typical results of a particle size analysis by laser diffraction. CC BY license

When using laser diffraction, the two most common visual representations are the density or frequency distribution (q3), see Figure 2, and the cumulative curve (Q3), shown in Figure 3. The former displays the probability of finding a particle with diameter d in the population, while the latter shows the percentage of particles which are smaller (undersize) or bigger (oversize) than diameter d. The two are in strong mathematical correlation, as the cumulative curve is simply the integral of the frequency curve:

$$Q_3(d)=\int_{0}^{d} q_3(r)\, dr$$

Graphical depiction is practical, as it gives lots of information at a glance. A few examples are the modality, the relative ratios of the particle populations, the broadness, and the general shape of the distribution. Although a visual input is fast and convenient, for documentation and comparison purposes numerical values are indispensable. The three most important options for describing a particle size distribution by a single parameter are:

the mode
the median
the mean

The mode is defined as the most commonly occurring size. It is the peak position on the frequency distribution curve. The mode is a straightforward figure for strictly monomodal (having only one peak) and narrow samples, such as latex beads, or monodisperse (containing only particles of very similar sizes) nanoparticle populations. However, its limitations become clear once more than one particle population is present in the sample (bimodal or polymodal substances) or when characterizing polydisperse samples. Figure 3 shows an example with three samples, all of which have a very similar mode although the particle size distribution of the three samples is clearly different.

Figure 3: Three different samples with the same mode, but clearly different particle size distributions

The median is the value separating the higher half of the data from the lower half. It is the determined particle size from which half of the particles are smaller and half are larger. This is also called D50 (see Figure 1). The median is easy to determine from the cumulative distribution curve (see Figure 4). However, it has the same problem as the mode: several distributions may have the same median; therefore it is not reliable as a single figure.

Figure 4: Determination of the median from the cumulative curve

The mean is the single most valuable measure of a sample. There are many different mean diameters defined, based on the weighting of the mean.
The three most important means are:

Number-weighted mean diameter
Surface-weighted mean diameter
Volume-weighted mean diameter

The calculation of these means follows the general formula:

$$D[p,q]=\frac{\sum n\, d^p}{\sum n\, d^q}$$

These means are equally important because – as already mentioned earlier – depending on the application people are interested in different properties of the sample. Table 2 summarizes the means and gives a few application examples.

Table 2: Weighted mean diameters and their applications
Number-weighted mean D[1,0]: biological applications, viruses, blood cell counting
Surface-weighted (Sauter) mean D[3,2]: fluid dynamics, catalysis, fuel combustion
Volume-weighted (De Brouckere) mean D[4,3]: mining, comminution, mineral processing

Additionally, several methods were developed in the 20th century in order to provide a single measure for the quality check of products in the mining, milling, and grinding industries. The two most prominent examples are the Rosin-Rammler (RR) model and the Gates-Gaudin-Schuchmann (GGS) model. Both of these provide a linearization of the distribution through a logarithmic calculation and provide a more sensitive median (D63.5).

Describing the particle size distribution by multiple parameters

A single number is a valuable piece of information for quality control purposes, e.g. when checking for differences in production batches. However, it does not deliver information about the source of the deviation, such as the shape, broadness, modality, etc. A deeper investigation into, and understanding of, the source of differences calls for a multiparameter description of the particle size distribution. The following parameters are used in particle sizing techniques.

The most common values used in several particle sizing techniques are the D-values. These are related to the cumulative distribution and are only meaningful when followed by a number, e.g. Dx. Both undersize and oversize D-values can be defined (just like the cumulative curve can be undersize or oversize). If not stated otherwise, the standard is the undersize D-value, which means that in general Dx gives the diameter below which x percent of the particles lie. The three most often used D-values are D10, D50, and D90 (see Figure 2), but custom D-values can be found for particular applications (e.g. IV Insulin). Even though D-values are the most often used results due to their convenience and easy-to-understand definition, they are typically meant for quality control purposes, as a simpler alternative to the single-parameter but more calculation-heavy RR (Rosin-Rammler) or GGS (Gates-Gaudin-Schuchmann) methods.

In most cases a useful description of a particle size distribution also calls for a way to describe the shape of the distribution. An important measure of any statistical distribution is the width or broadness. In laser diffraction an often-used measure is the span. It is calculated in general by the following formula:

$$ Span = \frac{D_{90} - D_{10}}{D_{50}} $$

However, some custom-defined spans exist for specific applications.

Folk & Ward parameters

Some samples, such as soil and geological samples, have a large variety of grain sizes and are classified accordingly. Their particle size distribution is often complex, with a very broad polymodal distribution. For the analysis of such samples Folk and Ward (F&W) proposed a statistical grain-size analysis method (Folk & Ward, 1957).
At the core of the method is the conversion of the particle size distribution from millimeters to the phi scale by the following formula:

$$ \phi = -\log_{2}d $$

This is followed by the calculation of a set of parameters:

Median: see above, measured in phi units.

Mean: definition as above; however, Folk proposed his own formula (R.L. Folk, 1957):

$$ M_z = \frac{\phi_{16} + \phi_{50} + \phi_{84}}{3} $$

The mean is also measured in phi units and is the most widely used of all the parameters.

Skewness: a measure of the asymmetry of the distribution around its peak. It is determined by the equation:

$$ sk_1 = \frac{\phi_{16} + \phi_{84} - 2\phi_{50}}{2(\phi_{84} - \phi_{16})} + \frac{\phi_{5} + \phi_{95}-2\phi_{50}}{2(\phi_{95}-\phi_{5})} $$

The skewness is also given in phi units. There are other means of calculating the skewness; however, the formula proposed by Folk also takes the "tails" into consideration. Perfectly symmetrical distributions have a skewness of 0.00. A positive skew represents an increased fine portion ("tail" on the left), while a negative skew indicates a shift to the coarse materials ("tail" on the right). Folk also suggested a classification:

±0.1 is nearly symmetrical
-0.10 to -0.30 is coarse-skewed and +0.1 to +0.3 is fine-skewed
-0.30 to -1.00 is strongly coarse-skewed and +0.3 to +1.0 is strongly fine-skewed.

Kurtosis: a measure of the deviation from the normal (Gaussian) distribution, or in other words the "peakedness". The formula proposed by Folk is:

$$ k_g=\frac{\phi_{95} - \phi_{5}}{2.44(\phi_{75}-\phi_{25})} $$

Kurtosis is measured in phi units as well. It can be used to determine the number of outliers in the sample. A perfectly Gaussian curve has a kurtosis of 1.00 and is also called mesokurtic. If the sample is more sharp-peaked, hence fewer outliers are present, then the kurtosis is higher than 1.00 and it is called leptokurtic. If the sample curve is more flat-peaked, meaning there is a large number of outliers in the sample, the value is smaller than 1.00. Such a distribution is called platykurtic. Phi units furthermore have the advantage of providing an easy-to-read scale of size classes for the classification of soil types, starting from boulders (ϕ < -8) down to clay (ϕ > 8).

Particle size analysis is a convenient tool for quality control as well as research and development. Through the careful and mindful selection of the right weighting model, mode, and further parameters, it is possible to extract crucial information also in the research and development phase of new products.

Folk, R.L., Ward, W.C. (1957). Brazos River Bar: A Study in the Significance of Grain Size Parameters. Journal of Sedimentary Petrology, Vol 27, No. 1, pp 3-36.
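To tie the parameters of this section together, the sketch below computes D10/D50/D90, the span, and the Folk & Ward mean, skewness and kurtosis for a small, invented volume-weighted size distribution. The binning, the linear interpolation of percentiles on the log-size axis, and the convention of cumulating phi percentiles from the coarse end are simplifications, so a real instrument's software may report slightly different values.

```python
import numpy as np

# Invented volume-weighted distribution: class mid-sizes (um) and fractions (%).
sizes_um = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0, 320.0])
vol_pct  = np.array([2.0, 8.0, 20.0, 30.0, 25.0, 10.0, 5.0])

# Cumulative undersize curve (fine -> coarse), interpolated on log(size).
cum = np.cumsum(vol_pct)                          # reaches 100 %

def d_value(x):
    """Diameter (um) below which x % of the volume lies."""
    return float(np.exp(np.interp(x, cum, np.log(sizes_um))))

d10, d50, d90 = d_value(10), d_value(50), d_value(90)
span = (d90 - d10) / d50
print(f"D10={d10:.1f} um  D50={d50:.1f} um  D90={d90:.1f} um  span={span:.2f}")

# Folk & Ward parameters on the phi scale (phi = -log2(d[mm])).
# phi_x is taken at x % cumulated from the coarse end, i.e. at the
# (100 - x) % point of the undersize curve.
def phi_percentile(x):
    return -np.log2(d_value(100.0 - x) / 1000.0)  # sizes converted um -> mm

p = {x: phi_percentile(x) for x in (5, 16, 25, 50, 75, 84, 95)}
mean_phi = (p[16] + p[50] + p[84]) / 3.0
skew = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
        + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
kurt = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
print(f"F&W mean={mean_phi:.2f} phi  skewness={skew:.2f}  kurtosis={kurt:.2f}")
```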
Sensorless Direct Torque Control of PMSM Based on Fuzzy Sliding Mode Control with Full Order Sliding Mode Observer

Idriss Baba Arbi* | Abdelkrim Allag

Department of Electrical Engineering, University Echahid Hamma Lakhdar Eloued, P.O. Box 109, Eloued 39000, Algeria

[email protected]

This paper presents an implementation of Fuzzy Sliding Mode Control for Sensorless Direct Torque Control (DTC) of a Permanent Magnet Synchronous Machine, combining the well-known performance of direct torque control on the one hand with the robustness of sliding mode control on the other hand. The fuzzy controller is introduced to reduce the effect of the chattering phenomenon, which is the major disadvantage of the sliding mode control technique. The proposed controller replaces the conventional PI angular speed controller that generates the electromagnetic reference torque for DTC, in order to improve the dynamic and steady-state behavior of the angular speed response as well as of the electromagnetic torque. The proposed control technique is implemented without using speed or position sensors; a Full Order Sliding Mode Observer is used instead. It is shown that the proposed control technique gives improved simulation results over different speed ranges and different load values.

PMSM, DTC, attractiveness condition, Fuzzy Sliding Mode Controller (FSMC), Full Order Sliding Mode Observer (FOSMO), DTC-FSMC

The synchronous machine has long been the most important of the electromechanical power-conversion devices, playing a key role both in the production of electricity and in certain special drive applications [1]. The permanent magnet synchronous machine is widely used when high machine performance is required, for example in industrial robots and machine tools, owing to its well-known advantages such as high power density and a good torque-to-inertia ratio. Several control techniques have been used for the control of the permanent magnet synchronous machine; among them is direct torque control (DTC), one of the highest-performance control techniques applied to AC machines, put forward by German scholars in the 1980's [2]. DTC presents good robustness against variations of the machine parameters, especially the stator resistance. Sliding mode control is considered a robust nonlinear control, based on systematic methods which use a sliding surface and Lyapunov stability analysis. It is known for its strong robustness, effective rejection of disturbances, and quick response [3]. However, the chattering phenomenon, which is caused by the high-frequency switching control action adopted in the SMC law, is a problem that impedes SMC implementation [4]. Many techniques have been used to reduce the chattering phenomenon. Fuzzy control is one of the techniques used to shape the discontinuous part of the sliding mode control signal, which is the cause of chattering. Commonly, the electromagnetic torque reference for the Direct Torque Controller is obtained via a PI controller. This paper presents how a fuzzy sliding mode controller (FSMC) can be used to obtain the reference torque instead of a PI controller. Recently, sensorless control techniques for the permanent magnet synchronous machine have become a major trend.
Many techniques are used in the literature to estimate the angular speed and rotor position of the PMSM, such as the Extended Kalman Filter (EKF) algorithm, applied to estimate speed and rotor position [5], and the Luenberger observer [6]. The sliding mode technique, known for its fast dynamics and its robustness against parametric variations, is also widely used to estimate speed and rotor position [7]. This paper presents a sensorless PMSM direct torque control based on fuzzy sliding mode, where speed and rotor position are obtained by using a Full Order Sliding Mode Observer.

2. Mathematical Model of the PMSM

The mathematical model of the PMSM in the d-q frame, including the electric and mechanical equations, can be expressed as follows:

$\left\{\begin{array}{c}u_{d}=R_{s} i_{d}+L_{d} \frac{d i_{d}}{d t}-p L_{q} \omega i_{q} \\ u_{q}=R_{s} i_{q}+L_{q} \frac{d i_{q}}{d t}+p L_{d} \omega i_{d}+p \omega \Phi_{f} \\ J \frac{d \omega}{d t}=T_{e m}-T_{L}-F_{c} \omega\end{array}\right.$ (1)

$\mathrm{T}_{\mathrm{em}}=\frac{3}{2} \mathrm{p}\left[\left(\mathrm{L}_{\mathrm{d}}-\mathrm{L}_{\mathrm{q}}\right) \mathrm{I}_{\mathrm{d}} \mathrm{I}_{\mathrm{q}}+\mathrm{I}_{\mathrm{q}} \Phi_{\mathrm{f}}\right]$ (2)

For a non-salient pole PMSM, $\mathrm{L}_{\mathrm{d}}=\mathrm{L}_{\mathrm{q}}=\mathrm{L}$, so Eq. (2) can be rewritten as follows:

$\mathrm{T}_{\mathrm{em}}=\frac{3}{2} \mathrm{pI}_{\mathrm{q}} \Phi_{\mathrm{f}}$ (3)

3. Fuzzy Sliding Mode Controller of Angular Speed

The main role of the speed controller is to track the desired speed in an effective and robust manner, in the presence of a load and of physical conditions such as friction, where the controller output is the reference torque $\mathrm{T}_{\mathrm{e}}^{*}$ used by the direct torque control technique. Instead of a speed PI controller, the proposed technique, based on a fuzzy sliding mode controller, is used to track the reference angular speed $\omega^{*}$ of the PMSM.

3.1 Sliding surface choice

The basic idea of sliding mode control is first to drive the states of the system into a properly selected region, then design a control law that will always keep the system in this region. The angular speed error state is defined as:

$e_{\omega}=\omega-\omega^{*}$ (4)

The sliding surface is defined by:

$S=e_{\omega}=\omega-\omega^{*}$ (5)

The goal is to make the surface S=0 attractive in order to stabilize the angular speed around the reference speed value.

3.2 Attractiveness condition

Let us define a Lyapunov function as:

$V=\frac{1}{2} S^{2}>0, \quad \forall t>0$ (6)

Its derivative is given by:

$\frac{\partial V}{\partial t}=\dot{V}=S \dot{S}$ (7)

The necessary attractiveness condition to achieve the sliding mode is:

$\dot{V}<0 \Rightarrow S \dot{S}<0$ (8)

Once the sliding mode is achieved, the trajectory remains on the switching surface. This can be expressed as follows [8]:

$S=0$ (9)

3.3 Reference torque calculation

The surface derivative is given by:

$\dot{S}=\dot{\omega}-\dot{\omega}^{*}$ (10)

When $\omega^{*}$ is considered constant, then:

$\dot{\omega}^{*}=0$ (11)

Then, using Eq. (1):

$\dot{S}=\dot{\omega}=\frac{1}{J} T_{e m}-\frac{1}{J} T_{L}-\frac{1}{J} F_{c} \omega$ (12)

In order to satisfy the inequality (8) we take

$\dot{S}=-K \operatorname{sgn}(S)$ (13)

where $K>0$ is the control gain and

$\operatorname{sgn}(S)=\left\{\begin{array}{c}1 \text { if } S>0 \\ 0 \text { if } S=0 \\ -1 \text { if } S<0\end{array}\right.$ (14)
From Eqs. (12) and (13) we obtain:

$\frac{1}{J} T_{e m}-\frac{1}{J} T_{L}-\frac{1}{J} F_{c} \omega=-K \operatorname{sgn}(S)$ (15)

Therefore, the reference torque that will be used by the DTC controller is given by:

$T_{e m}^{*}=T_{L}+F_{c} \omega-K J \operatorname{sgn}(S)$ (16)

The main disadvantage of classical sliding mode control is the chattering that appears in the control signal, whose amplitude is proportional to the value of the positive constant K. A solution to reduce the chattering effect in sliding mode control is to use a variable gain K issued from a fuzzy controller, which uses the sliding surface and its derivative as inputs.

4. Fuzzy Logic Controller

The sliding surface $S$ and its derivative $\dot{S}$ are the inputs of the fuzzy logic controller (Figure 1). The membership functions of $S$ and $\dot{S}$ are represented in Figure 2 and Figure 3, respectively.

Figure 1. Fuzzy logic controller structure
Figure 2. S first input membership functions
Figure 3. $\dot{S}$ second input membership functions

The membership functions of the output K, which must necessarily be positive, are given in Figure 4.

Figure 4. K output membership functions

For the inference rules, the Mamdani reasoning method is used, as shown in Table 1.

Table 1. Inference rules $\dot{S}$

5. PMSM DTC Fuzzy Sliding System

For a PMSM with non-salient poles, the direct- and quadrature-axis synchronous inductances are equal, namely Ld=Lq=Ls. The electromagnetic torque can be described as [9]:

$T_{e m}=\frac{3}{2 L_{s}} p\left|\Phi_{r}\right|\left|\Phi_{s}\right| \sin \delta$ (17)

where $\delta$ represents the angle between the stator and rotor flux vectors. In a PMSM, $\left|\Phi_{\mathrm{r}}\right|$ is invariant [10]. According to Eq. (17), it is possible to control the electromagnetic torque $\mathrm{T}_{\mathrm{em}}$ by adjusting the torque angle $\delta$ while keeping the stator flux linkage amplitude $\left|\Phi_{s}\right|$ constant. Using an inverter with eight switching states and six voltage vector plane regions (Figure 5), the stator flux linkage in the stator two-phase reference frame can be given by:

$\Phi_{s}=\int\left(V_{s}-R_{s} i_{s}\right) d t$ (18)

where $V_{s}$ represents the stator voltage, $i_{s}$ the stator current, and $\mathrm{R}_{s}$ the stator resistance. The stator flux trajectory is divided into six symmetrical sections (S1 to S6) referring to the inverter voltage vectors, as shown in Figure 5. The stator flux components are used to determine the flux vector position.

Figure 5. Voltage vectors and stator flux sectors

In the $(\alpha, \beta)$ frame, the estimated flux magnitude and its angular position are given by:

$\left\|\hat{\phi}_{s}\right\|=\sqrt{\hat{\phi}_{s \alpha}^{2}+\hat{\phi}_{s \beta}^{2}}$ (19)

$\angle \hat{\phi}_{s}=\tan ^{-1}\left(\frac{\widehat{\phi}_{s \beta}}{\widehat{\phi}_{s \alpha}}\right)$ (20)

We can estimate the electromagnetic torque $\mathrm{T}_{\mathrm{em}}$ from the estimated fluxes and the measured stator currents as follows:

$\widehat{T}_{e m}=\frac{3}{2} p\left[\widehat{\phi}_{s \alpha} i_{s \beta}-\widehat{\phi}_{s \beta} i_{s \alpha}\right]$ (21)

The outputs of the flux and torque hysteresis comparators, together with the flux angular position, are used as inputs of a switching table in order to generate the inverter switching sequence [10].
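To make the flux/torque estimation and the table-based vector selection concrete, here is a minimal sketch of one DTC control step. The switching table is the classic Takahashi-style six-sector table, which illustrates the principle but is not necessarily identical to the table used by the authors; the comparator bands, the pole-pair number and the example quantities are invented values, and the comparators are simplified (no hysteresis memory).

```python
import numpy as np

# Classic six-sector DTC switching table (keys: flux demand, torque demand).
# Entry [k] is the index of the inverter voltage vector to apply in sector k.
SWITCH_TABLE = {
    (1,  1): [2, 3, 4, 5, 6, 1],   # increase flux, increase torque
    (1, -1): [6, 1, 2, 3, 4, 5],   # increase flux, decrease torque
    (1,  0): [0, 7, 0, 7, 0, 7],   # increase flux, hold torque (zero vectors)
    (0,  1): [3, 4, 5, 6, 1, 2],   # decrease flux, increase torque
    (0, -1): [5, 6, 1, 2, 3, 4],   # decrease flux, decrease torque
    (0,  0): [7, 0, 7, 0, 7, 0],
}

def dtc_step(v_ab, i_ab, flux_ab, T_ref, flux_ref, Rs, dt,
             d_flux=0.01, d_torque=0.1, p=3):
    """One DTC step: update the flux estimate (Eq. 18), estimate the torque
    (Eq. 21), evaluate the comparators (Eqs. 22-23) and pick a voltage vector."""
    flux_ab = flux_ab + (v_ab - Rs * i_ab) * dt              # stator flux integral
    flux_mag = np.hypot(flux_ab[0], flux_ab[1])              # Eq. (19)
    angle = np.arctan2(flux_ab[1], flux_ab[0])               # Eq. (20)
    torque = 1.5 * p * (flux_ab[0] * i_ab[1] - flux_ab[1] * i_ab[0])  # Eq. (21)

    # Simplified two-level flux and three-level torque comparators.
    flux_cmd = 1 if (flux_ref - flux_mag) > d_flux else 0
    err_T = T_ref - torque
    torque_cmd = 1 if err_T > d_torque else (-1 if err_T < -d_torque else 0)

    sector = int(np.floor((angle + np.pi / 6) / (np.pi / 3))) % 6
    return SWITCH_TABLE[(flux_cmd, torque_cmd)][sector], flux_ab

# Example call with made-up instantaneous quantities.
vec, flux = dtc_step(v_ab=np.array([180.0, 40.0]), i_ab=np.array([2.0, -1.5]),
                     flux_ab=np.array([0.17, 0.02]), T_ref=5.0, flux_ref=0.18,
                     Rs=1.4, dt=5e-5)
print("selected voltage vector:", vec)
```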
Figure 6. DTC-fuzzy sliding mode controller of PMSM with full order sliding mode observer

The control scheme is presented in Figure 6, where:

$\Delta_{T_{e m}}=\left|T_{e m}^{*}-\widehat{T}_{e m}\right|$ (22)

$\Delta_{\phi_{s}}=\left|\phi_{s}^{*}-\hat{\phi}_{s}\right|$ (23)

6. Full Order Sliding Mode Observer of PMSM

Consider the PMSM model in the $(\alpha, \beta)$ frame given by:

$\left\{\begin{array}{c}\frac{d i_{\alpha}}{d t}=-\frac{R_{s}}{L_{s}} i_{\alpha}+\frac{p K_{e}}{L_{s}} \omega \sin \theta+\frac{1}{L_{s}} u_{\alpha} \\ \frac{d i_{\beta}}{d t}=-\frac{R_{s}}{L_{s}} i_{\beta}+\frac{p K_{e}}{L_{s}} \omega \cos \theta+\frac{1}{L_{s}} u_{\beta} \\ \frac{d \omega}{d t}=\frac{p K_{e}}{J}\left(i_{\beta} \cos \theta-i_{\alpha} \sin \theta\right)-\frac{f}{J} \omega-\frac{1}{J} T_{L} \\ \frac{d \theta}{d t}=\omega\end{array}\right.$ (24)

where:

$\mathrm{u}_{\alpha}, \mathrm{u}_{\beta}$: Stator voltage in $\alpha-\beta$ axis;
$\mathrm{i}_{\alpha}, \mathrm{i}_{\beta}$: Stator current in $\alpha-\beta$ axis;
$\mathrm{K}_{\mathrm{e}}$: Electromotive force constant;
$\theta$: Electrical rotor position.

From the machine model (24), the Full Order Sliding Mode Observer for the PMSM can be expressed as:

$\left\{\begin{array}{c}\frac{d \hat{i}_{\alpha}}{d t}=-\frac{R_{s}}{L_{s}} \hat{i}_{\alpha}+\frac{p K_{e}}{L_{s}} \widehat{\omega} \sin \hat{\theta}+\frac{1}{L_{s}} u_{\alpha}+K_{1} \operatorname{sgn}\left(i_{\alpha}-\hat{i}_{\alpha}\right) \\ \frac{d \hat{i}_{\beta}}{d t}=-\frac{R_{s}}{L_{s}} \hat{i}_{\beta}+\frac{p K_{e}}{L_{s}} \widehat{\omega} \cos \hat{\theta}+\frac{1}{L_{s}} u_{\beta}+K_{1} \operatorname{sgn}\left(i_{\beta}-\hat{i}_{\beta}\right) \\ \frac{d \widehat{\omega}}{d t}=\frac{p K_{e}}{J}\left(\hat{i}_{\beta} \cos \hat{\theta}-\hat{i}_{\alpha} \sin \hat{\theta}\right)-\frac{f}{J} \widehat{\omega}-\frac{1}{J} T_{L}+K_{2} \operatorname{sgn}\left(i_{\alpha}-\hat{i}_{\alpha}\right)+K_{2} \operatorname{sgn}\left(i_{\beta}-\hat{i}_{\beta}\right) \\ \frac{d \hat{\theta}}{d t}=\widehat{\omega}+K_{3} \operatorname{sgn}\left(i_{\alpha}-\hat{i}_{\alpha}\right)+K_{3} \operatorname{sgn}\left(i_{\beta}-\hat{i}_{\beta}\right)\end{array}\right.$ (25)

where:

$\hat{\imath}_{\alpha}, \hat{\imath}_{\beta}$: Estimated stator current in $\alpha-\beta$ axis;
$\widehat{\omega}$: Estimated speed;
$\hat{\theta}$: Estimated rotor position;
$\mathrm{K}_{1}, \mathrm{~K}_{2}$ and $\mathrm{K}_{3}$ are constant positive gains.

The FO-SMO surfaces are chosen as follows:

$S=\left[\begin{array}{l}S_{\alpha} \\ S_{\beta}\end{array}\right]=\left[\begin{array}{l}i_{\alpha}-\hat{\imath}_{\alpha} \\ i_{\beta}-\hat{\imath}_{\beta}\end{array}\right]$ (26)

Solving the differential equation system (25) yields the estimated values of the speed and rotor position.

7. Simulation Results

Two different DTC schemes are compared: the first is DTC using the conventional PI controller, and the second is the proposed DTC combined with the fuzzy sliding mode controller. The simulation is performed in MATLAB/SIMULINK, where the PMSM parameters used are presented in Table 2.

Table 2. Parameters of PMSM
Flux linkage: 0.175 Wb
Number of pole pairs:
Stator resistance: 1.4 Ω
q-axis inductance:
d-axis inductance:
Inertia moment: 8×10⁻⁴ kg·m²
Viscous friction coefficient: 0.0035 N·m·s/rad

The full order sliding mode observer constants used are: $K_{1}=10^{6}, K_{2}=250$ and $K_{3}=80$.
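The observer of Eq. (25) can be put in code almost line for line; the sketch below uses a simple forward-Euler discretisation with the gains quoted above. The machine parameters (inductance, pole pairs, back-EMF constant), the integration step, and the assumption that the load torque term is known (set to zero here) are choices made for illustration only.

```python
import numpy as np

# Forward-Euler implementation of the full order sliding mode observer, Eq. (25).
# Gains from the text; machine parameters are illustrative placeholders.
K1, K2, K3 = 1e6, 250.0, 80.0
Rs, Ls, Ke, p = 1.4, 8.5e-3, 0.175, 3      # Ls and p assumed; Ke set to the flux linkage
J, f = 8e-4, 0.0035
DT = 1e-5                                   # integration step (s)

def fosmo_step(x_hat, i_meas, u_meas, T_L=0.0):
    """One observer update. x_hat = [i_alpha, i_beta, omega, theta] (estimates)."""
    ia_h, ib_h, w_h, th_h = x_hat
    ea, eb = i_meas[0] - ia_h, i_meas[1] - ib_h          # current estimation errors
    sa, sb = np.sign(ea), np.sign(eb)

    dia = -Rs / Ls * ia_h + p * Ke / Ls * w_h * np.sin(th_h) + u_meas[0] / Ls + K1 * sa
    dib = -Rs / Ls * ib_h + p * Ke / Ls * w_h * np.cos(th_h) + u_meas[1] / Ls + K1 * sb
    dw = (p * Ke / J * (ib_h * np.cos(th_h) - ia_h * np.sin(th_h))
          - f / J * w_h - T_L / J + K2 * (sa + sb))
    dth = w_h + K3 * (sa + sb)

    return np.array([ia_h + DT * dia, ib_h + DT * dib, w_h + DT * dw, th_h + DT * dth])

# Example: one update with made-up measurements and an all-zero initial estimate.
x_hat = np.zeros(4)
x_hat = fosmo_step(x_hat, i_meas=(1.2, -0.4), u_meas=(150.0, 60.0))
print("estimated [i_alpha, i_beta, omega, theta] =", x_hat)
```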
In order to examine the performance of the proposed control technique, it is tested first with variations in the reference speed $\omega^{*}$ and then with variations in the load torque $\mathrm{T}_{\mathrm{L}}$ during the simulation period, in both cases with two different stator resistance values, $75 \%$ of $\mathrm{R}_{\mathrm{s}}$ and $125 \%$ of $\mathrm{R}_{\mathrm{s}}$.

7.1 Results with reference speed variations

The speed reference is changed from $\omega^{*}=130 \mathrm{rad} / \mathrm{s}$ to $150 \mathrm{rad} / \mathrm{s}$ at $\mathrm{t}=0.3 \mathrm{~s}$, then to $100 \mathrm{rad} / \mathrm{s}$ at $\mathrm{t}=0.6 \mathrm{~s}$ and finally to $\omega^{*}=70 \mathrm{rad} / \mathrm{s}$ at $t=0.8 \mathrm{~s}$, with a load torque $T_{L}=5 \mathrm{~N} . \mathrm{m}$ applied starting from $t=0.5 \mathrm{~s}$.

Figure 7. Rotation speed with speed reference changes
Figure 8. Estimated and actual speeds
Figure 9. Estimated and actual positions
Figure 10. Electromagnetic torque via DTC-PI and DTC-FSMC
Figure 11. Stator current with DTC-FSMC
Figure 12. Stator current with DTC-PI

With the same simulation parameters except using $75 \%$ of $\mathrm{R}_{\mathrm{s}}$ instead of $\mathrm{R}_{\mathrm{s}}$, the following results are obtained.

Figure 13. Rotation speed with speed reference changes and 75% of Rs
Figure 14. Estimated and actual speeds (75% of Rs)
Figure 15. Estimated and actual position (75% of Rs)
Figure 16. Electromagnetic torque via DTC-PI and DTC-FSMC (75% of Rs)
Figure 17. Stator current with DTC-FSMC (75% of Rs)
Figure 18. Stator current with DTC-PI (75% of Rs)

When the stator armature resistance is increased by 25% above its nominal value, the following results are obtained:

Figure 19. Rotation speed with speed reference changes (125% of Rs)
Figure 20. Estimated and actual speeds (125% of Rs)
Figure 21. Estimated and actual positions (125% of Rs)
Figure 22. Electromagnetic torque via DTC-PI and DTC-FSMC (125% of Rs)
Figure 23. Stator current via DTC-FSMC (125% of Rs)
Figure 24. Stator current via DTC-PI (125% of Rs)

Figures 7, 13 and 19 clearly show that the behavior of the rotation speed using the DTC-FSMC technique in the dynamic regime is better than with the conventional DTC-PI control, as is the electromagnetic torque in Figures 10, 16 and 22, at the reference speed change instants. The advantage of DTC-FSMC also appears in the stator current variation at the reference speed change instants, shown in Figures 11, 12, 17, 18, 23 and 24. The steady-state speed error is $\mathrm{e}_{\omega} \simeq 0.05 \mathrm{rad} / \mathrm{s}$ (Figure 7) with the DTC-FSMC method, while it reaches $0.8 \mathrm{rad} / \mathrm{s}$ with DTC-PI. The rotation speed and rotor position are estimated with good accuracy, as shown by Figures 8, 9, 14, 15, 20 and 21.

7.2 Results with load torque variations

Now the load torque is changed from $\mathrm{T}_{\mathrm{L}}=0 \mathrm{~N} . \mathrm{m}$ to $5 \mathrm{~N} . \mathrm{m}$ at $\mathrm{t}=0.3 \mathrm{~s}$, then to $10 \mathrm{~N} . \mathrm{m}$ at $\mathrm{t}=0.6 \mathrm{~s}$ and finally to $2.5 \mathrm{~N} . \mathrm{m}$ at $\mathrm{t}=0.8 \mathrm{~s}$, with the speed reference set at $\omega^{*}=130 \mathrm{rad} / \mathrm{s}$. Figures 25 and 26 show the good reaction of the rotation speed and the electromagnetic torque using DTC-FSMC compared to DTC with the PI controller at the load change instants. The difference is also noticeable in the dynamic behavior of the stator current, shown in Figures 27 and 28.
Figure 25. Rotation speed with load torque changes
Figure 26. Electromagnetic torque with load torque changes
Figure 27. Stator current via DTC-FSMC
Figure 28. Stator current via DTC-PI

All the figures showing the speed and torque behavior confirm that the dynamic regime obtained with the DTC-FSMC control technique is better than with the conventional DTC-PI, for both the speed and the electromagnetic torque, where the overshoots of the conventional DTC-PI technique are noticeable.

8. Conclusion

Based on the mathematical model of the permanent magnet synchronous motor and the DTC principle, a rotation speed controller based on a fuzzy sliding mode control technique (FSMC) was designed to replace the PI controller of the conventional DTC. The controller offers good robustness against sudden changes in the reference speed and in the load during the simulation period. The robustness of the DTC-FSMC was also examined against changes of the stator armature resistance $R_{s}$; the control technique demonstrated good stability of the speed and torque responses, especially in the dynamic regime, with a notable absence of overshoot in the speed and electromagnetic torque during sudden changes of the reference rotation speed. The DTC-FSMC scheme also gives improved steady-state precision in comparison with the conventional DTC-PI technique. The full order sliding mode observer used with the proposed DTC-FSMC provides good accuracy in speed and rotor position estimation.

Nomenclature

$R_{s}$: stator armature resistance, Ω
$L_{d}$, $L_{q}$: direct and quadrature inductances, H
$\omega$: rotor angular speed, rad.s⁻¹
$p$: pole pairs number
$u_{d}$, $u_{q}$: stator voltages in the d-q axes, V
$i_{d}$, $i_{q}$: stator currents in the d-q axes, A
$\Phi_{f}$: flux created by the rotor magnets, Wb
$T_{em}$, $T_{L}$: electromagnetic and load torques, N.m
$F_{c}$: viscous friction coefficient, N.m.s.rad⁻¹
$J$: total moment of inertia of the motor and load, kg.m²

References

[1] Hooshyar, H., Savaghebi, M., Vahedi, A. (2007). Synchronous generator: Past, present and future. IEEE-AFRICON Conference, Windhoek, South Africa, pp. 1-7. https://doi.org/10.1109/AFRCON.2007.4401482
[2] Sun, D., Fang, W.Z., He, Y.K. (2001). Study on the direct torque control of permanent magnet synchronous motor drives. IEEE-ICEMS Conference, Shenyang, China, 1: 571-574. https://doi.org/10.1109/ICEMS.2001.970740
[3] Guo, R., Wang, X., Zhao, J., Yu, W. (2011). Fuzzy sliding mode direct torque control for PMSM. Eighth International Conference on Fuzzy Systems and Knowledge Discovery, Shanghai, China, pp. 511-514. https://doi.org/10.1109/FSKD.2011.6019599
[4] Wang, H., Li, S., Yang, J., Zhou, X. (2016). Continuous sliding mode control for permanent magnet synchronous motor speed regulation systems under time-varying disturbances. Journal of Power Electronics, 16(4): 1324-1335. https://doi.org/10.6113/JPE.2016.16.4.1324
[5] Walambe, R.A., Joshi, V.A. (2018). Closed loop stability of a PMSM-EKF controller-observer structure. IFAC-PapersOnLine, 51(1): 249-254. https://doi.org/10.1016/j.ifacol.2018.05.062
[6] Bakhti, I., Chaouch, S., Makouf, A., Douadi, T. (2016). Robust sensorless sliding mode control with Luenberger observer design applied to permanent magnet synchronous motor. 5th International Conference on Systems and Control (ICSC), pp. 204-210.
https://doi.org/10.1109/ICoSC.2016.7507051 [7] Attou, A., Massoum, A., Chiali, E. (2013). Sliding mode control of a permanent magnets synchronous machine. IEEE-INSPEC Conference, Istanbul, Turkey, pp. 115-119. https://doi.org/10.1109/PowerEng.2013.6635591 [8] Zhao, Y., Huang, Z. (2015). Fuzzy direct torque control of permanent magnet synchronous motors. 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), pp. 330-334. https://doi.org/10.1109/FSKD.2015.7381963 [9] Sun, D., He, Y.K., Zhu, J.G. (2004). Sensorless direct torque control for permanent magnet synchronous motor based on fuzzy logic. The 4th International Power Electronics and Motion Control Conference, 3: 1286-1291. [10] Lu, Z., Sheng, H., Hess, H.L., Buck, K.M. (2005). The modeling and simulation of a permanent magnet synchronous motor with direct torque control based on Matlab/Simulink. IEEE International Conference on Electric Machines and Drives, San Antonio, TX, USA. https://doi.org/10.1109/IEMDC.2005.195866 [11] Saadaoui, O., Khlaief, A., Abassi, M., Chaari, A., Boussak, M. (2016). Sensorless FOC of PMSM drives based on full order SMO. International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), pp. 663-668. https://doi.org/10.1109/STA.2016.7952106
CommonCrawl
June 2017, 11(3): 427-454. doi: 10.3934/ipi.2017020

A direct D-bar method for partial boundary data electrical impedance tomography with a priori information

Melody Alsaker (a), Sarah Jane Hamilton (b) and Andreas Hauptmann (c)
(a) Gonzaga University, Mathematics Department, 502 E. Boone Ave. MSC 2615, Spokane, WA 99258-0072, USA
(b) Department of Mathematics, Statistics and Computer Science, Marquette University, Milwaukee, WI 53233, USA
(c) Department of Computer Science, University College London, WC1E 6BT London, UK
* Corresponding author

Received September 2016; Revised October 2016; Published April 2017

Electrical Impedance Tomography (EIT) is a non-invasive imaging modality that uses surface electrical measurements to determine the internal conductivity of a body. The mathematical formulation of the EIT problem is a nonlinear and severely ill-posed inverse problem for which direct D-bar methods have proved useful in providing noise-robust conductivity reconstructions. Recent advances in D-bar methods allow for conductivity reconstructions using EIT measurement data from only part of the domain (e.g., a patient lying on their back could be imaged using only data gathered on the accessible part of the body). However, D-bar reconstructions suffer from a loss of sharp edges due to a nonlinear low-pass filtering of the measured data, and this problem becomes especially marked in the case of partial boundary data. Including a priori data directly into the D-bar solution method greatly enhances the spatial resolution, allowing for detection of underlying pathologies or defects, even with no assumption of their presence in the prior. This work combines partial data D-bar with a priori data, allowing for noise-robust conductivity reconstructions with greatly improved spatial resolution. The method is demonstrated to be effective on noisy simulated EIT measurement data simulating both medical and industrial imaging scenarios.

Keywords: Electrical impedance tomography, partial boundary data, Neumann-to-Dirichlet map, D-bar method, a priori information.

Mathematics Subject Classification: Primary: 65N21; Secondary: 94A08.

Citation: Melody Alsaker, Sarah Jane Hamilton, Andreas Hauptmann. A direct D-bar method for partial boundary data electrical impedance tomography with a priori information. Inverse Problems & Imaging, 2017, 11 (3): 427-454. doi: 10.3934/ipi.2017020
Figure 1. Example simulating a patient with a pneumothorax in the left lung. The simulated noisy measurement is collected from 75% ventral data. The first image displays the true conductivity with the position of electrodes indicated. Using a partial data D-bar approach alone results in a reconstruction with low spatial resolution, where the pathology can hardly be seen (second). Incorporating a priori data corresponding to a healthy patient directly into the reconstruction method significantly improves the spatial resolution (third). Refining the prior improves the reconstruction further, allowing even sharper visualization of the pathology (fourth)
Figure 2. Illustration of mappings involved in the measurement modeling. Top row: Neumann data with the basis function $\varphi(\theta)=\cos(\theta)/\sqrt{\pi}$ on the left and the nonorthogonal projection $J\varphi$ on the right. Bottom row: Dirichlet data where $g=u|_{\partial\Omega}$ on the left is the solution of the partial differential equation (1) and on the right the orthogonal projection to the extended electrodes
Figure 3. The A Priori D-bar Method with Partial Data
Figure 4. Phantoms used in numerical examples with the corresponding boundaries of the priors outlined by white dots. Note that for each example, the prior does not assume a pathology/defect. Left: A simulated pneumothorax occurring near the heart in the left lung. Middle: A simulated pleural effusion occurring away from the heart in the left lung. Right: An enclosed diamond with an ovular defect
Figure 5. Blind priors used for the thoracic (top) and industrial (bottom) imaging examples. Take particular note that the priors do not assume any pathology/defect
Figure 6. The real part of the ${\mu ^{{\rm{int}}}}$ data (shown in the $z$ plane for $z\in\mathcal{D}$) corresponding to the blind thoracic prior given in Figure 5 (top), computed from extended radii $R_2=4.0$, 6.5, and 9.0 in the $k$ plane. Note that as the radius increases, the integral term approaches its asymptotic behavior of ${\mu ^{{\rm{int}}}}\sim 1$
Figure 7. Scattering data corresponding to the pneumothorax example using the blind prior given in Figure 5 (top). The original radius is $R=4$ and the extended radius is $R_2=9$. All scattering data is plotted on the same scale (real and imaginary, respectively)
Figure 8. Pneumothorax example for 62.5% ventral data. TOP: The partial data ND D-bar reconstruction ${\sigma ^{{\rm{ND}}}}$. BOTTOM: The recovered conductivity ${\sigma _{{R_2},\alpha }}$, using the blind thoracic prior. The maximum value is 2.25, occurring at $R=4$, $\alpha=0$
Figure 9. Left: Original prior. Right: Updated pneumothorax prior. The left lung in the updated prior was segmented into two regions
Figure 10. Pneumothorax example with 75% ventral data and segmented prior. The corresponding partial data ND D-bar reconstruction ${\sigma ^{{\rm{ND}}}}$ is shown in Figure 8. Here we display the recovered conductivity ${\sigma _{{R_2},\alpha }}$ for $R_2=4, 6.5$ and various $\alpha$ using the SEG AVG or SEG MIN segmented thoracic priors. The maximum value is 2.70 and occurs in the $R_2=4$, $\alpha=0$ reconstruction using the SEG MIN prior.
Figure 11. Pleural effusion example for 75% ventral data. TOP: The partial data ND D-bar reconstruction ${\sigma ^{{\rm{ND}}}}$. BOTTOM: The recovered conductivity ${\sigma _{{R_2},\alpha }}$ using the blind thoracic prior. The maximum value is 2.90, occurring at $R=4$, $\alpha=0$
Figure 12. Pneumothorax example. Results for $R_2=9.0$ and $\alpha=0.67$. The maximum is 2.71 and occurs in the 100% boundary data, BLIND prior reconstruction
Figure 13. Pleural effusion example for 62.5% ventral data. The partial data ND D-bar reconstruction ${\sigma ^{{\rm{ND}}}}$ is shown at the top. Below, the recovered conductivity ${\sigma _{{R_2},\alpha }}$ is shown using the blind thoracic prior. The maximum value is 2.74, occurring at $R=4$, $\alpha=0$
Figure 14. Left: Original prior. Right: Updated pleural effusion prior with the left lung segmented into two regions
Figure 15. Pleural effusion example for 75% ventral data and segmented prior. The corresponding partial data ND D-bar reconstruction ${\sigma ^{{\rm{ND}}}}$ is shown in Figure 13. Here we display the recovered conductivity ${\sigma _{{R_2},\alpha }}$ for $R_2=4$, 6.5 and various $\alpha$ using the SEG AVG or SEG MAX segmented thoracic prior. The maximum value is 2.83 and occurs in the $R_2=4$, $\alpha=0$ reconstruction using the SEG MAX prior
Figure 17. Pleural effusion example. Results for $R_2=6.5$ and $\alpha=0.67$. The maximum is 2.65 and occurs in the 100% boundary data, BLIND prior reconstruction
Figure 18. Industrial example: From top to bottom, conductivity reconstructions ${\sigma _{{R_2},\alpha }}$ for 100%, 75%, 62.5%, and 50% boundary data are presented with scattering radius $R_2=4$ and various weights $\alpha$. The first column displays the ${\sigma ^{{\rm{ND}}}}$ reconstructions that do not include any a priori information. The maximum value (3.12) occurs for the 50% data reconstruction with strongest weight $\alpha=0$
Figure 19. Industrial example: From top to bottom, conductivity reconstructions ${\sigma _{{R_2},\alpha }}$ for 100%, 75%, 62.5%, and 50% boundary data are presented with extended scattering radius $R_2=6.5$ and various weights $\alpha$. The first column displays the ${\sigma ^{{\rm{ND}}}}$ reconstructions that do not include any a priori information. The maximum value (3.13) occurs for the 50% data reconstruction with strongest weight $\alpha=0$
Figure 20. Relative $\ell_2$-error of reconstructions from 75% ventral data of the pneumothorax example. The horizontal axis represents $\alpha$-values for increasing regularization radii $R_2$. Recall that $\alpha=0$ corresponds to the heaviest weighting of the ${\mu ^{{\rm{int}}}}$ term, while $\alpha=1$ to the weakest expression of the prior. Errors from ${\sigma ^{{\rm{ND}}}}$ are compared to the new reconstructions ${\sigma _{{R_2},\alpha }}$ for the blind and segmented priors
Figure 21. Relative $\ell_2$-error in the lung region within the boundary of the pathology, for 75% ventral data for the pneumothorax example. The horizontal axis represents $\alpha$-values for increasing regularization radii $R_2$

Table 1. Conductivity values of thoracic phantoms and assigned blind prior in S/m
                   Heart   Lungs   Pathology   Aorta   Spine   Background
Pneumothorax       2.0     0.5     0.15        2.0     0.25    1
Pleural Effusion   2.0     0.5     1.8         2.0     0.25    1
Prior              2.05    0.45    -           2.05    0.23    1

Table 2. Conductivity values of industrial phantom and assigned blind prior in S/m
              Diamond   Inclusion   Background
Industrial    2.0       1.4         1
Prior         2.05      -           1
Table 3. Relative $\ell_2$-errors (%) for the conductivity reconstructions from §4, for the extended regularization radii $R_2=4$ and $6.5$

                        D-BAR   |          R2 = 4           |         R2 = 6.5
                        RECON   | α=1   α=2/3  α=1/3  α=0   | α=1   α=2/3  α=1/3  α=0
Blind Prior: 75%        35.13   | 29.65 26.74  24.86  24.44 | 26.82 25.36  24.20  23.39
Seg Avg Prior: 75%      35.13   | 29.14 26.22  24.65  24.95 | 25.75 24.16  22.92  22.11
Seg Min Prior: 75%      35.13   | 28.84 25.96  24.75  25.74 | 25.07 23.44  22.23  21.55
Blind Prior: 62.5%      38.95   | 32.71 30.02  28.06  27.12 | 30.12 28.74  27.56  26.63
Seg Avg Prior: 62.5%    38.95   | 32.33 29.62  27.83  27.30 | 29.43 27.99  26.78  25.84
Seg Min Prior: 62.5%    38.95   | 31.99 29.27  27.67  27.61 | 28.66 27.13  25.88  24.97
Seg Max Prior: 75%      27.40   | 24.14 21.95  21.39  22.81 | 21.98 20.94  20.34  20.22
Seg Max Prior: 62.5%    32.56   | 28.87 27.01  26.24  26.80 | 26.77 25.85  25.20  24.87
Industrial phantom
Blind Prior: 100%       18.43   | 18.43 16.07  14.17  12.99 | 15.31 14.17  13.28  12.68
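Table 3 and Figures 20-21 report relative $\ell_2$-errors between each reconstruction and the true phantom, optionally restricted to a region of interest such as the lung. As a point of reference (not code from the paper), the metric can be computed from pixel arrays as in the following sketch; sigma_true, sigma_recon and the optional boolean mask are assumed to be NumPy arrays sampled on the same grid.

```python
import numpy as np

def relative_l2_error(sigma_true, sigma_recon, mask=None):
    """Relative l2 error (in percent) between a reconstruction and the truth."""
    if mask is not None:                   # restrict to a region of interest,
        sigma_true = sigma_true[mask]      # e.g. the lung region of Figure 21
        sigma_recon = sigma_recon[mask]
    num = np.linalg.norm(sigma_true - sigma_recon)
    den = np.linalg.norm(sigma_true)
    return 100.0 * num / den
```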
CommonCrawl
2016, 10: 229-254. doi: 10.3934/jmd.2016.10.229

Effective equidistribution of translates of maximal horospherical measures in the space of lattices

Kathryn Dabbs (1), Michael Kelly (2) and Han Li (3)
(1) Department of Mathematics, University of Texas, 1 University Station, Austin, TX 78712, United States
(2) Department of Mathematics, University of Michigan, 530 Church St., Ann Arbor, MI 48109, United States
(3) Department of Mathematics and Computer Science, Wesleyan University, 265 Church Street, Middletown, CT 06459, United States

Received June 2015; Revised February 2016; Published July 2016

Recently Mohammadi and Salehi-Golsefidy gave necessary and sufficient conditions under which certain translates of homogeneous measures converge, and they determined the limiting measures in the cases of convergence. The class of measures they considered includes the maximal horospherical measures. In this paper we prove the corresponding effective equidistribution results in the space of unimodular lattices. We also prove the corresponding results for probability measures with absolutely continuous densities in rank two and three. Then we address the problem of determining the error terms in two counting problems also considered by Mohammadi and Salehi-Golsefidy. In the first problem, we determine an error term for counting the number of lifts of a closed horosphere from an irreducible, finite-volume quotient of the space of positive definite $n\times n$ matrices of determinant one that intersect a ball with large radius. In the second problem, we determine a logarithmic error term for the Manin conjecture of a flag variety over $\mathbb{Q}$.

Keywords: horospherical subgroups, convergence cone, horospherical measures, effective equidistribution.

Mathematics Subject Classification: Primary: 37C85, 37P30, 22E4.

Citation: Kathryn Dabbs, Michael Kelly, Han Li. Effective equidistribution of translates of maximal horospherical measures in the space of lattices. Journal of Modern Dynamics, 2016, 10: 229-254. doi: 10.3934/jmd.2016.10.229

References

V. Bernik, D. Kleinbock and G. A. Margulis, Khintchine-type theorems on manifolds: the convergence case for standard and multiplicative versions, Internat. Math. Res. Notices, (2001), 453. doi: 10.1155/S1073792801000241.
A. Borel and J. Tits, Groupes réductifs, Inst. Hautes Études Sci. Publ. Math., 27 (1965), 55.
S. G. Dani, Invariant measures of horospherical flows on noncompact homogeneous spaces, Invent. Math., 47 (1978), 101. doi: 10.1007/BF01578067.
W. Duke, Z. Rudnick and P. Sarnak, Density of integer points on affine homogeneous varieties, Duke Math. J., 71 (1993), 143. doi: 10.1215/S0012-7094-93-07107-4.
A. Eskin and C. McMullen, Mixing, counting, and equidistribution in Lie groups, Duke Math. J., 71 (1993), 181. doi: 10.1215/S0012-7094-93-07108-6.
J. Franke, Y. I. Manin and Y. Tschinkel, Rational points of bounded height on Fano varieties, Invent. Math., 95 (1989), 421. doi: 10.1007/BF01393904.
J. E. Humphreys, Linear Algebraic Groups, Graduate Texts in Mathematics, 1975.
D. Y. Kleinbock and G. A. Margulis, Flows on homogeneous spaces and Diophantine approximation on manifolds, Ann. of Math. (2), 148 (1998), 339. doi: 10.2307/120997.
D. Y. Kleinbock and G. A. Margulis, On effective equidistribution of expanding translates of certain orbits in the space of lattices, in Number Theory, (2012), 385. doi: 10.1007/978-1-4614-1260-1_18.
D. Kleinbock, R. Shi and B. Weiss, Pointwise equidistribution with an error rate and with respect to unbounded functions, (2015).
D. Kleinbock and B. Weiss, Dirichlet's theorem on Diophantine approximation and homogeneous flows, J. Mod. Dyn., 2 (2008), 43.
G. A. Margulis, On some aspects of the theory of Anosov systems, Springer Monographs in Mathematics, (2004). doi: 10.1007/978-3-662-09070-1.
A. Mohammadi and A. S. Golsefidy, Translate of horospheres and counting problems, Amer. J. Math., 136 (2014), 1301. doi: 10.1353/ajm.2014.0037.
H. Oh, Orbital counting via mixing and unipotent flows, in Homogeneous Flows, (2010), 339.
P. Sarnak, Asymptotic behavior of periodic orbits of the horocycle flow and Eisenstein series, Comm. Pure Appl. Math., 34 (1981), 719. doi: 10.1002/cpa.3160340602.
A. Selberg, Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series, J. Indian Math. Soc. (N.S.), 20 (1956), 47.
N. Shah and B. Weiss, On actions of epimorphic subgroups on homogeneous spaces, Ergodic Theory Dynam. Systems, 20 (2000), 567. doi: 10.1017/S0143385700000298.
N. A. Shah, Limit distributions of expanding translates of certain orbits on homogeneous spaces, Proc. Indian Acad. Sci. Math. Sci., 106 (1996), 105. doi: 10.1007/BF02837164.
R. Shi, Expanding cone and applications to homogeneous dynamics, (2015).
D. Zagier, Eisenstein series and the Riemann zeta-function, in Automorphic Forms, (1981), 275.
CommonCrawl
Question about a circuit from "Quantum Computing for Computer Scientists"

I am trying to implement a basic quantum computing emulator. In the chapter on Grover's algorithm, we're shown the following circuit:

They demonstrate Grover's algorithm with a function $f$ that picks out $101$, i.e. $f(101)=1$ and $0$ otherwise. They start with $\psi_{1}=[1, 0, 0, 0, 0, 0, 0, 0]^{T}$. The Hadamard gate (specifically H tensored with itself $n$ times) gives $\psi_{2}=1/\sqrt8 [1, 1, 1, 1, 1, 1, 1, 1]^{T}$. This is as far as I've come. They don't show explicitly how to get to $\psi_{3}$, which should be $1/\sqrt8 [1, 1, 1, 1, 1, -1, 1, 1]^{T}$. I am not sure how to interpret the circuit. My best guess was to take the tensor product of $|0 \rangle=|000 \rangle$ and $|1 \rangle = [0, 1]^{T}$, then apply $I_{2^{n}} \otimes H$, then $U_{f}$. However, I have two problems:

1. The book says that, at that stage in the calculation, $\psi_{3} = 1/\sqrt8 [1, 1, 1, 1, 1, -1, 1, 1]^{T}$, which has length $8$ instead of $16$. I don't know how to "extract" the "top" qubits.
2. My answer is $1/4[1, 1, ..., 1]^{T}$, which doesn't suggest the correct answer (especially given the fact that every entry is $1/4$).

Am I misinterpreting this circuit? What is the correct way to go from $\psi_{2}$ to $\psi_{3}$, from a programmatic point of view?

quantum-gate programming grovers-algorithm quantum-circuit
LPenguin

Have you tried Quirk? It can handle things like extracting the tensor factor of intermediate states for you, so you get a sense of what you should be getting as the answer.

In general, the tensor factor may not exist because the extra qubit could be entangled with the part you're trying to extract. But if it's not entangled, which in this case it's not, then you just look at the subset of the state vector where the extra qubit is 0 and that's your answer, after normalizing (or, if the part of the state vector where the extra qubit is 0 has all amplitudes zero, look at the subset where the extra qubit is 1).

To be more specific, what you do is group the state vector into parts keyed by the state of the qubits you're not including. You then pick the part with the largest 2-norm as your reference part. This is the result you will return, normalized to have 2-norm of 1, if the state is not entangled. The state is not entangled if all the parts are parallel to each other. To verify lack of entanglement in a numerically stable way, you sum up the dot products of the reference part with all the parts (including itself). If the sum of dot products has a magnitude of 1, the state is not entangled. The sum's magnitude will get smaller as entanglement increases, though it won't necessarily get to zero.

Craig Gidney

As you have read, Grover's algorithm consists of starting in an equal superposition and then applying two gates, $U_\omega$ and $U_s$, on the order of $\sqrt{n}$ times, where $n$ is the number of elements. You are asking how to implement $U_\omega$, which gets from state $\psi_2$ to $\psi_3$. This is called a phase flipping oracle since it adds a phase of $e^{i\pi} = -1$ to the basis state which represents the correct element.
The usual way to do so is to prepare a separately allocated ancilla qubit in the state $|-\rangle = \frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$ and apply an $X$ gate to it whenever the other qubits are in the correct basis state; by phase kickback this flips the sign of that basis state. This tutorial describes how to implement this gate for a graph coloring problem in Q#.

To answer some of your other questions: the state vector is of length 8 since there are 3 qubits, which gives $2^3 = 8$ basis states. To implement $U_s$, here is a question which shows how to implement it in basic gates.

BrockenDuck
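To make the two answers concrete, here is a small NumPy sketch (not from the book) that builds $|000\rangle\otimes|1\rangle$, applies $H^{\otimes 4}$, applies $U_f$ for $f(101)=1$ as a permutation of basis states, and then extracts the three "top" qubits with the grouping procedure described above. The qubit ordering assumes the ancilla is the least significant bit of the basis-state index, and the recovered factor is only defined up to a global phase.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_all(*mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

# |000> tensor |1>, with the ancilla as the least significant bit of the index
state = np.zeros(16)
state[0b0001] = 1.0

# Hadamard on all four qubits
state = kron_all(H, H, H, H) @ state

# Oracle U_f |x>|b> = |x>|b XOR f(x)> for f(101) = 1:
# swap the two ancilla amplitudes of x = 101 (rows 0b1010 and 0b1011)
Uf = np.eye(16)
x = 0b101
Uf[[2 * x, 2 * x + 1]] = Uf[[2 * x + 1, 2 * x]]
state = Uf @ state

# Extract the 3-qubit factor: group amplitudes by the ancilla value,
# pick the part with the largest 2-norm, and renormalize (Gidney's recipe).
parts = [state[0::2], state[1::2]]          # ancilla = 0, ancilla = 1
ref = max(parts, key=np.linalg.norm)
psi3 = ref / np.linalg.norm(ref)
print(np.round(psi3 * np.sqrt(8), 3))       # 1, 1, 1, 1, 1, -1, 1, 1 (up to sign)
```

The printed vector matches the book's $\psi_3 = 1/\sqrt8\,[1, 1, 1, 1, 1, -1, 1, 1]^{T}$, and the all-$1/4$ result in the question comes from skipping the Hadamards on the top three qubits before applying $U_f$ to the 16-dimensional state.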
CommonCrawl