Book-to-Bill Ratio Definition Reviewed by Will Kenton and Adam Hayes What Is the Book-to-Bill Ratio? A book-to-bill ratio is the ratio of orders received to units shipped and billed for a specified period, generally a month or quarter. It is a widely used metric in the technology industry, specifically in the semiconductor equipment sector. Investors and analysts closely watch this ratio for an indication of the performance and outlook for individual companies and the technology sector as a whole. A ratio above 1 implies more orders were received than filled, indicating strong demand, while a ratio below 1 implies weaker demand. The Formula for the Book-to-Bill Ratio Is: $$\text{Book to Bill} = \frac{\text{Orders Received}}{\text{Orders Shipped}}$$ What Does the Book-to-Bill Ratio Tell You? A book-to-bill ratio is typically used for measuring supply and demand in volatile industries such as the technology sector. The ratio measures the number of orders coming in compared to the number of orders going out. A company fulfilling orders as they come in has a book-to-bill ratio of 1. For example, Company A books 500 orders for parts and then ships and bills all 500 orders. The booked and billed orders have a ratio of 1, or 500/500. The book-to-bill ratio reveals how quickly a business fulfills the demand for its products. The ratio also shows the strength of a sector, such as aerospace or defense manufacturing. It may also be used when determining whether to purchase stock in a company. If a business has a ratio of less than 1, there may be more supply than demand. For example, Company B books 500 orders for parts, and then ships and bills 610 orders, including some orders from the previous month. The booked and billed orders have a ratio of 0.82: for every dollar of orders the company billed, only $0.82 of orders were booked that month. However, if the ratio is greater than 1, there may be more demand than can be efficiently supplied. For example, Company C books 500 orders for parts, and then ships and bills 375 orders. The book-to-bill ratio is about 1.33, or 500/375. In contrast, a business with a ratio of 1 is meeting supply and demand adequately by shipping and billing orders as they are received. Example of the Book-to-Bill Ratio As a historical example, in June of 2016, companies making semiconductor parts in the United States and Canada received orders averaging $1.71 billion over three consecutive months. The book-to-bill ratio was 1. Thus, for every $100 in orders received for the month, $100 of the product was billed. The companies booked $1.75 billion in orders during May 2016, making that month's bookings 2.1% higher than the average bookings from April through June of that year. (A short code sketch of these calculations appears after the related definitions below.) Debt Ratio The debt ratio is a financial ratio that measures the extent of a company's leverage.
Fixed Asset Turnover Ratio The fixed asset turnover ratio is an efficiency ratio that compares net sales to fixed assets, measuring how efficiently a company generates net sales from its fixed-asset investments. Asset Turnover Ratio The asset turnover ratio measures the value of a company's sales or revenues generated relative to the value of its assets. Understanding the CAPE Ratio The CAPE ratio is a measurement that adjusts past company earnings by inflation to present a snapshot of stock market affordability at a given point in time. What the Coverage Ratio Tells Us A coverage ratio is one of a group of measures of a company's ability to service its debt and meet its financial obligations, such as interest payments or dividends. The higher the coverage ratio, the easier it should be for a company to make interest payments on its debt or pay dividends. Understanding Return on Invested Capital Return on invested capital (ROIC) is a way to assess a company's efficiency at allocating the capital under its control to profitable investments. An Introduction to Coverage Ratios Key Financial Ratios to Analyze the Auto Industry Book Value Vs. Market Value: What's the Difference Key Financial Ratios for Retail Companies Assessing a Stock's Future With the Price-to-Earnings Ratio and PEG The Basics of Trading Chip Stocks (INTC, TXN)
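To make the arithmetic in the hypothetical Company A, B, and C examples above concrete, here is a minimal sketch in R; the figures are the article's illustrative numbers, not real company data, and the function name is introduced only for this sketch.

# Hedged sketch: book-to-bill ratio = orders received / orders shipped and billed.
book_to_bill <- function( orders_received, orders_shipped ) {
  orders_received / orders_shipped
}
book_to_bill( 500, 500 )   # Company A: 1.00, orders filled as they come in
book_to_bill( 500, 610 )   # Company B: ~0.82, weaker demand than shipments
book_to_bill( 500, 375 )   # Company C: ~1.33, demand outpacing shipments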
Results for 'Yann Benétreau-Dupin' 1000+ found The Bayesian Who Knew Too Much.Yann Benétreau-Dupin - 2015 - Synthese 192 (5):1527-1542.details In several papers, John Norton has argued that Bayesianism cannot handle ignorance adequately due to its inability to distinguish between neutral and disconfirming evidence. He argued that this inability sows confusion in, e.g., anthropic reasoning in cosmology or the Doomsday argument, by allowing one to draw unwarranted conclusions from a lack of knowledge. Norton has suggested criteria for a candidate for representation of neutral support. Imprecise credences (families of credal probability functions) constitute a Bayesian-friendly framework that allows us to avoid (...) inadequate neutral priors and better handle ignorance. The imprecise model generally agrees with Norton's representation of ignorance but requires that his criterion of self-duality be reformulated or abandoned. (shrink) Bayesian Reasoning, Misc in Philosophy of Probability Doomsday Argument in Philosophy of Probability Imprecise Credences in Philosophy of Probability Indifference Principles in Philosophy of Probability Inductive Logic in Logic and Philosophy of Logic Fair Numbers: What Data Can and Cannot Tell Us About the Underrepresentation of Women in Philosophy.Yann Benétreau-Dupin & Guillaume Beaulac - 2015 - Ergo: An Open Access Journal of Philosophy 2:59-81.details The low representation (< 30%) of women in philosophy in English-speaking countries has generated much discussion, both in academic circles and the public sphere. It is sometimes suggested (Haslanger 2009) that unconscious biases, acting at every level in the field, may be grounded in gendered schemas of philosophers and in the discipline more widely, and that actions to make philosophy a more welcoming place for women should address such schemas. However, existing data are too limited to fully warrant such an (...) explanation, which therefore will not satisfy those in favor of the status quo or those who argue against the need to address gender imbalance. In this paper, we propose measures to improve the profession that ought to be implemented without referring explicitly to this underrepresentation or to the climate for women and other underrepresented groups. Such recommendations are based on empirical research already carried out in other disciplines and do not rest on whether it is possible to identify the cause of this low representation. We argue that we need not wait for new or better data to ensure that fairer practices are enacted for women, other underrepresented groups, and everybody else, if only out of precaution. (shrink) Academic and Teaching Ethics in Philosophy of Social Science Feminism: Equality in Philosophy of Gender, Race, and Sexuality Implicit Bias in Social and Political Philosophy Women in Philosophy in Philosophy of Gender, Race, and Sexuality Blurring Out Cosmic Puzzles.Yann Benétreau-Dupin - 2015 - Philosophy of Science 82 (5):879–891.details The Doomsday argument and anthropic reasoning are two puzzling examples of probabilistic confirmation. In both cases, a lack of knowledge apparently yields surprising conclusions. Since they are formulated within a Bayesian framework, they constitute a challenge to Bayesianism. Several attempts, some successful, have been made to avoid these conclusions, but some versions of these arguments cannot be dissolved within the framework of orthodox Bayesianism.
I show that adopting an imprecise framework of probabilistic reasoning allows for a more adequate representation of (...) ignorance in Bayesian reasoning and explains away these puzzles. (shrink) Anthropic Principle in Philosophy of Physical Science Buridan's Solution to the Liar Paradox.Yann Benétreau-Dupin - 2015 - History and Philosophy of Logic 36 (1):18-28.details Jean Buridan has offered a solution to the Liar Paradox, i.e. to the problem of assigning a truth-value to the sentence 'What I am saying is false'. It has been argued that either this solution is ad hoc since it would only apply to self-referencing sentences [Read, S. 2002. 'The Liar Paradox from John Buridan back to Thomas Bradwardine', Vivarium, 40 , 189–218] or else it weakens his theory of truth, making his 'a logic without truth' [Klima, G. 2008. 'Logic (...) without truth: Buridan on the Liar', in S. Rahman, T. Tulenheimo and E. Genot, Unity, Truth and the Liar: The Modern Relevance of Medieval Solutions to the Liar Paradox, Berlin: Springer, 87–112 ; Dutilh Novaes, C. 2011. 'Lessons on truth from mediaeval solutions to the Liar Paradox', The Philosophical Quarterly, 61 , 58–78]. Against , I will argue that Buridan's solution by means of truth by supposition does not involve new principles. Self-referential sentences force us to handle supposition more carefully, which does not warrant the accusation of adhoccery. I will also argue, against (2), that it is exaggerated to assert that this solution leads to a 'weakened' theory of truth, since it is consistent with other passages of the Sophismata, which only gives necessary conditions for the truth of affirmative propositions, but sufficient conditions for falsity. (shrink) Jean Buridan in Medieval and Renaissance Philosophy Liar Paradox in Logic and Philosophy of Logic Medieval Philosophy of Language in Medieval and Renaissance Philosophy Probabilistic Reasoning in Cosmology.Yann Benétreau-Dupin - 2015 - Dissertation, The University of Western Ontariodetails Cosmology raises novel philosophical questions regarding the use of probabilities in inference. This work aims at identifying and assessing lines of arguments and problematic principles in probabilistic reasoning in cosmology. -/- The first, second, and third papers deal with the intersection of two distinct problems: accounting for selection effects, and representing ignorance or indifference in probabilistic inferences. These two problems meet in the cosmology literature when anthropic considerations are used to predict cosmological parameters by conditionalizing the distribution of, e.g., the (...) cosmological constant on the number of observers it allows for. However, uniform probability distributions usually appealed to in such arguments are an inadequate representation of indifference, and lead to unfounded predictions. It has been argued that this inability to represent ignorance is a fundamental flaw of any inductive framework using additive measures. In the first paper, I examine how imprecise probabilities fare as an inductive framework and avoid such unwarranted inferences. In the second paper, I detail how this framework allows us to successfully avoid the conclusions of Doomsday arguments in a way no Bayesian approach that represents credal states by single credence functions could. -/- There are in the cosmology literature several kinds of arguments referring to self- locating uncertainty. In the multiverse framework, different "pocket-universes" may have different fundamental physical parameters. 
We don't know if we are typical observers and if we can safely assume that the physical laws we draw from our observations hold elsewhere. The third paper examines the validity of the appeal to the "Sleeping Beauty problem" and assesses the nature and role of typicality assumptions often endorsed to handle such questions. -/- A more general issue for the use of probabilities in cosmology concerns the inadequacy of Bayesian and statistical model selection criteria in the absence of well-motivated measures for different cosmological models. The criteria for model selection commonly used tend to focus on optimizing the number of free parameters, but they can select physically implausible models. The fourth paper examines the possibility for Bayesian model selection to circumvent the lack of well-motivated priors. (shrink) Astrophysics in Philosophy of Physical Science Probabilistic Principles, Misc in Philosophy of Probability Simplicity and Parsimony in General Philosophy of Science Updating Principles in Philosophy of Probability An Empiricist Criterion of Meaning.Yann Benétreau-Dupin - 2011 - South African Journal of Philosophy 30 (2):95-108.details The meaning of scientific propositions is not always expressible in terms of observable phenomena. Such propositions involve generalizations, and also terms that are theoretical constructs. I study here how to assess the meaning of scientific propositions, that is, the specific import of theoretical terms. Empiricists have expressed a concern that scientific propositions, and theoretical terms, should always be, to some degree, related to observable consequences. We can see that the former empiricist criterion of meaning only implies for theoretical terms not (...) to be definable in terms of observable, but that their use put a constraint on the observable consequences of a theory. To that effect, Ramsey's method of formal elimination of theoretical terms can be an interesting tool. It has faced important logical objections, which have mostly been addressed with respect to the problem of the ontological commitment of the second-order quantification they imply. I show here that these criticisms can be overcome, and that there can be a successful Ramsey elimination of theoretical terms with first order sentences, making Ramsey's method a relevant tool to assess the empirical meaning of scientific propositions. (shrink) Alternatives to Scientific Realism, Misc in General Philosophy of Science Constructive Empiricism in General Philosophy of Science Instrumentalism in General Philosophy of Science Ramsey Sentences in General Philosophy of Science Semantic View of Theories in General Philosophy of Science Structural Realism in General Philosophy of Science The Cosmos As Involving Local Laws and Inconceivable Without Them.Chris J. Smeenk & Yann Benétreau-Dupin - 2017 - The Monist 100 (3):357-372.details Traditional debates, such as those regarding whether the universe is finite in spatial or temporal extent, exemplified, according to Kant, the inherent tendency of pure reason to lead us astray. Although various aspects of Kant's arguments fail to find a footing in modern cosmology, Kant's objections to the search for a complete objective description of the cosmos are related to three intertwined issues that are still of central importance: the applicability of universal laws, the status of distinctively cosmological laws, and (...) the explanatory sufficiency of laws. 
We will advocate a broadly Kantian position on these three issues as part of a critical response to a prevalent strain of Leibnizian rationalism in contemporary cosmology. (shrink) Kant: Philosophy of Science in 17th/18th Century Philosophy Laws of Nature, Misc in General Philosophy of Science Philosophy of Cosmology, Misc in Philosophy of Physical Science Perspectives of History and Philosophy on Teaching Astronomy.Horacio Tignanelli & Yann Benétreau-Dupin - 2014 - In Michael R. Matthews (ed.), International Handbook of Research in History, Philosophy and Science Teaching. Springer. pp. 603-640.details The didactics of astronomy is a relatively young field with respect to that of other sciences. Historical issues have most often been part of the teaching of astronomy, although that often does not stem from a specific didactics. The teaching of astronomy is often subsumed under that of physics. One can easily consider that, from an educational standpoint, astronomy requires the same mathematical or physical strategies. This approach may be adequate in many cases but cannot stand as a general principle (...) for the teaching of astronomy. This chapter offers in a first part a brief overview of the status of astronomy education research and of the role of the history and philosophy of science (HPS) in astronomy education. In a second part, it attempts to illustrate possible ways to structure the teaching of astronomy around its historical development so as to pursue a quality education and contextualized learning. (shrink) History of Physics in Philosophy of Physical Science Natural Kinds in Metaphysics Nature of Science, Misc in General Philosophy of Science Philosophy of Education, Misc in Philosophy of Social Science Philosophy of Teaching in Philosophy of Social Science The Aims of Education in Philosophy of Social Science $319.12 used $356.28 new (collection) Amazon page Geraint F. Lewis and Luke A. Barnes. A Fortunate Universe: Life in a Finely Tuned Cosmos. [REVIEW]Yann Benétreau-Dupin - 2017 - Notre Dame Philosophical Reviews 201706.details This new book by cosmologists Geraint F. Lewis and Luke A. Barnes is another entry in the long list of cosmology-centered physics books intended for a large audience. While many such books aim at advancing a novel scientific theory, this one has no such scientific pretense. Its goals are to assert that the universe is fine-tuned for life, to defend that this fact can reasonably motivate further scientific inquiry as to why it is so, and to show that the multiverse (...) and intelligent design hypotheses are reasonable proposals to explain this fine-tuning. This book's potential contribution, therefore, lies in how convincingly and efficiently it can make that case. (shrink) Fine-Tuning in Cosmology in Philosophy of Physical Science Revisiting Model-Based Learning. [REVIEW]Yann Benétreau-Dupin - 2018 - Science & Education 27 (9-10):1033-1037.details Education in Professional Areas Models and Explanation in General Philosophy of Science The Nature of Education in Philosophy of Social Science Where is 'Where is Everybody?'?: Milan M. Ćirković: The Great Silence: The Science and Philosophy of Fermi's Paradox. Oxford: Oxford University Press, 2018, Xxvii+395pp, $32.95 HB. 
[REVIEW]Yann Benétreau-Dupin - 2020 - Metascience 29 (1):67-70.details Observation in Cosmology in Philosophy of Physical Science Teaching the Conceptual History of Physics to Physics Teachers.Peter Garik, Luciana Garbayo, Yann Benétreau-Dupin, Charles Winrich, Andrew Duffy, Nicholas Gross & Manher Jariwala - 2015 - Science & Education 24 (4):387-408.details For nearly a decade we have taught the history and philosophy of science as part of courses aimed at the professional development of physics teachers. The focus of the history of science instruction is on the stages in the development of the concepts and theories of physics. For this instruction, we designed activities to help the teachers organize their understanding of this historical development. The activities include scientific modeling using archaic theories. We conducted surveys to gauge the impact on the (...) teachers of including the conceptual history of physics in the professional development courses. The teachers report greater confidence in their knowledge of the history of physics, that they reflect on this history for their teaching, and that they use of the history of physics for their classroom instruction. In this paper, we provide examples of our activities, the rationale for their design, and discuss the outcomes for the teachers of the instruction. (shrink) Conceptual Change in Science in General Philosophy of Science Report on a Boston University Conference December 7–8, 2012 on How Can the History and Philosophy of Science Contribute to Contemporary US Science Teaching?Peter Garik & Yann Benétreau-Dupin - 2014 - Science & Education 23 (9):1853-1873.details This is an editorial report on the outcomes of an international conference sponsored by a grant from the National Science Foundation to the School of Education at Boston University and the Center for Philosophy and History of Science at Boston University for a conference titled: How Can the History and Philosophy of Science Contribute to Contemporary US Science Teaching? The presentations of the conference speakers and the reports of the working groups are reviewed. Multiple themes emerged for K-16 education from (...) the perspective of the history and philosophy of science. Key ones were that: students need to understand that central to science is argumentation, criticism, and analysis; students should be educated to appreciate science as part of our culture; students should be educated to be science literate; what is meant by the nature of science as discussed in much of the science education literature must be broadened to accommodate a science literacy that includes preparation for socioscientific issues; teaching for science literacy requires the development of new assessment tools; and, it is difficult to change what science teachers do in their classrooms. The principal conclusions drawn by the editors are that: to prepare students to be citizens in a participatory democracy, science education must be embedded in a liberal arts education; science teachers alone cannot be expected to prepare students to be scientifically literate; and, to educate students for scientific literacy will require a new curriculum that is coordinated across the humanities, history /social studies, and science classrooms. (shrink) J. S. Mill's Conception of Utility: Ben Saunders.Ben Saunders - 2010 - Utilitas 22 (1):52-69.details Mill's most famous departure from Bentham is his distinction between higher and lower pleasures. 
This article argues that quality and quantity are independent and irreducible properties of pleasures that may be traded off against each other – as in the case of quality and quantity of wine. I argue that Mill is not committed to thinking that there are two distinct kinds of pleasure, or that 'higher pleasures' lexically dominate lower ones, and that the distinction is compatible with hedonism. I (...) show how this interpretation not only makes sense of Mill but allows him to respond to famous problems, such as Crisp's Haydn and the oyster and Nozick's experience machine. (shrink) Does Participation Matter? An Inconsistency in Parfit's Moral Mathematics: Ben Eggleston.Ben Eggleston - 2003 - Utilitas 15 (1):92-105.details Consequentialists typically think that the moral quality of one's conduct depends on the difference one makes. But consequentialists may also think that even if one is not making a difference, the moral quality of one's conduct can still be affected by whether one is participating in an endeavour that does make a difference. Derek Parfit discusses this issue – the moral significance of what I call 'participation' – in the chapter of Reasons and Persons that he devotes to what he (...) calls 'moral mathematics'. In my paper, I expose an inconsistency in Parfit's discussion of moral mathematics by showing how it gives conflicting answers to the question of whether participation matters. I conclude by showing how an appreciation of Parfit's error sheds some light on consequentialist thought generally, and on the debate between act- and rule-consequentialists specifically. (shrink) Topics in Consequentialism, Misc in Normative Ethics Interview: Ben Cohen.Ben Cohen & Craig Cox - 1994 - Business Ethics: The Magazine of Corporate Responsibility 8 (5):18-21.details Ethics in Value Theory, Miscellaneous Ben Abadiano Photographs.Ben Abadiano - 2008 - Budhi: A Journal of Ideas and Culture 12 (2).details Photography in Aesthetics Liu Ben Wen Ji.Ben Liu - 2008 - Zhongguo She Hui Ke Xue Chu Ban She.details Socialism and Marxism in Social and Political Philosophy Well-Being and Death.Ben Bradley - 2009 - Oxford University Press.details Well-Being and Death addresses philosophical questions about death and the good life: what makes a life go well? Is death bad for the one who dies? How is this possible if we go out of existence when we die? Is it worse to die as an infant or as a young adult? Is it bad for animals and fetuses to die? Can the dead be harmed? Is there any way to make death less bad for us? Ben Bradley defends the (...) following views: pleasure, rather than achievement or the satisfaction of desire, is what makes life go well; death is generally bad for its victim, in virtue of depriving the victim of more of a good life; death is bad for its victim at times after death, in particular at all those times at which the victim would have been living well; death is worse the earlier it occurs, and hence it is worse to die as an infant than as an adult; death is usually bad for animals and fetuses, in just the same way it is bad for adult humans; things that happen after someone has died cannot harm that person; the only sensible way to make death less bad is to live so long that no more good life is possible. 
(shrink) Defining Death in Applied Ethics Desire Satisfaction Accounts of Well-Being in Value Theory, Miscellaneous Hedonist Accounts of Well-Being in Value Theory, Miscellaneous Objective Accounts of Well-Being in Value Theory, Miscellaneous The Badness of Death in Applied Ethics Well-Being, Misc in Value Theory, Miscellaneous Ethics in the Societal Debate on Genetically Modified Organisms: A (Re)Quest for Sense and Sensibility. [REVIEW]Yann Devos, Pieter Maeseele, Dirk Reheul, Linda Van Speybroeck & Danny De Waele - 2008 - Journal of Agricultural and Environmental Ethics 21 (1):29-61.details Via a historical reconstruction, this paper primarily demonstrates how the societal debate on genetically modified organisms (GMOs) gradually extended in terms of actors involved and concerns reflected. It is argued that the implementation of recombinant DNA technology out of the laboratory and into civil society entailed a "complex of concerns." In this complex, distinctions between environmental, agricultural, socio-economic, and ethical issues proved to be blurred. This fueled the confusion between the wider debate on genetic modification and the risk assessment of (...) transgenic crops in the European Union. In this paper, the lasting skeptical and/or ambivalent attitude of Europeans towards agro-food biotechnology is interpreted as signaling an ongoing social request – and even a quest – for an evaluation of biotechnology with Sense and Sensibility. In this (re)quest, a broader-than-scientific dimension is sought for that allows addressing the GMO debate in a more "sensible" way, whilst making "sense" of the different stances taken in it. Here, the restyling of the European regulatory frame on transgenic agro-food products and of science communication models are discussed and taken to be indicative of the (re)quest to move from a merely scientific evaluation and risk-based policy towards a socially more robust evaluation that takes the "non-scientific" concerns at stake in the GMO debate seriously. (shrink) Environmental Ethics in Applied Ethics Embodiment, Spatial Categorisation and Action.Yann Coello & Yvonne Delevoye-Turrell - 2007 - Consciousness and Cognition 16 (3):667-683.details Despite the subjective experience of a continuous and coherent external world, we will argue that the perception and categorisation of visual space is constrained by the spatial resolution of the sensory systems but also and above all, by the pre-reflective representations of the body in action. Recent empirical data in cognitive neurosciences will be presented that suggest that multidimensional categorisation of perceptual space depends on body representations at both an experiential and a functional level. Results will also be resumed that (...) show that representations of the body in action are pre-reflective in nature as only some aspects of the pre-reflective states can be consciously experienced. Finally, a neuro-cognitive model based on the integration of afferent and efferent information will be described, which suggests that action simulation and associated predicted sensory consequences may represent the underlying principle that enables pre-reflective representations of the body for space categorisation and selection for action. (shrink) Science of Consciousness in Philosophy of Cognitive Science Tilting at Imaginary Windmills: A Comment on Tyfield.Yann Giraud & E. 
Roy Weintraub - 2009 - Erasmus Journal for Philosophy and Economics 2 (1):52-59.details The Shifting Border Between Perception and Cognition.Ben Phillips - 2019 - Noûs 53 (2):316-346.details The distinction between perception and cognition has always had a firm footing in both cognitive science and folk psychology. However, there is little agreement as to how the distinction should be drawn. In fact, a number of theorists have recently argued that, given the ubiquity of top-down influences, we should jettison the distinction altogether. I reject this approach, and defend a pluralist account of the distinction. At the heart of my account is the claim that each legitimate way of marking (...) a border between perception and cognition deploys a notion I call 'stimulus-control.' Thus, rather than being a grab bag of unrelated kinds, the various categories of the perceptual are unified into a superordinate natural kind. (shrink) A Wadge Hierarchy for Second Countable Spaces.Yann Pequignot - 2015 - Archive for Mathematical Logic 54 (5-6):659-683.details We define a notion of reducibility for subsets of a second countable T0 topological space based on relatively continuous relations and admissible representations. This notion of reducibility induces a hierarchy that refines the Baire classes and the Hausdorff–Kuratowski classes of differences. It coincides with Wadge reducibility on zero dimensional spaces. However in virtually every second countable T0 space, it yields a hierarchy on Borel sets, namely it is well founded and antichains are of length at most 2. It thus differs (...) from the Wadge reducibility in many important cases, for example on the real line $\mathbb{R}$ or the Scott Domain $\mathcal{P}\omega$. (shrink) Model Theory in Logic and Philosophy of Logic Aesopica. A Series of Texts Relating to Aesop or Ascribed to Him or Closely Connected with the Literary Tradition That Bears His Name, Collected and Critically Edited with a Commentary and Historical Essay by Ben Edwin Perry. Volume I: Greek and Latin Texts. Pp. Xxiii + 765. Urbana: University of Illinois Press, 1952. Cloth, $15. [REVIEW]H. Ll Hudson-Williams & Ben Edwin Perry - 1953 - Journal of Hellenic Studies 73:163-163.details Descartes' Philosophical Revolution: A Reassessment.Hanoch Ben-Yami - 2015 - Palgrave-Macmillan.details In this book, Ben-Yami reassesses the way Descartes developed and justified some of his revolutionary philosophical ideas. The first part of the book shows that one of Descartes' most innovative and influential ideas was that of representation without resemblance. Ben-Yami shows how Descartes transfers insights originating in his work on analytic geometry to his theory of perception. The second part shows how Descartes was influenced by the technology of the period, notably clockwork automata, in holding life to be a mechanical (...) phenomenon, reducing the soul to the mind and considering it immaterial. Ben-Yami explores the later role of the digital computer in Turing's criticism of Descartes' ideas.
The last part discusses the Meditations: far from starting everything afresh without presupposing anything that can be doubted, Descartes' innovations in the dream argument, the cogito and elsewhere are modifications of old ideas based upon considerations issuing from his separately developed theories, formed under the influence of the technology, mathematics and science of his age. (shrink) René Descartes in 17th/18th Century Philosophy L'être de Dieu.Yann Schmitt - 2016 - Editions d'Ithaque.details Theism is the metaphysical position at the heart of the monotheistic religions: it is the claim that there exists a God who is omniscient, omnipotent, perfectly good, and the creator. Thinking about the object of these beliefs, namely God, therefore requires a study of the metaphysical categories needed to spell out theism. Far from any narrow rationalism or mystical exaltation, this book draws on the tools of contemporary philosophy to bring to light the theoretical choices required to conceive of a God understood (...) as the being having all perfections. The questions of realism, truth, the first principle, the possible and the necessary are examined both from the content of religious beliefs and from contemporary analytic metaphysics, in response to the criticisms of Kant and Heidegger. For even before asking about the existence or non-existence of such a God, or debating the rationality or irrationality of religious beliefs, it is the conceptual tools for thinking about a God that we must examine philosophically. (shrink) Divine Necessity in Philosophy of Religion Divine Simplicity in Philosophy of Religion $257.73 used Amazon page Autonomy and Liberalism.Ben Colburn - 2010 - New York, USA: Routledge.details This book concerns the foundations and implications of a particular form of liberal political theory. Colburn argues that one should see liberalism as a political theory committed to the value of autonomy, understood as consisting in an agent deciding for oneself what is valuable and living life in accordance with that decision. Understanding liberalism this way offers solutions to various problems that beset liberal political theory, on various levels. On the theoretical level, Colburn claims that this position is the only (...) defensible theory of liberalism in current circulation, arguing that other more dominant theories are either self-contradictory or unattractive on closer inspection. And on the practical level, Colburn draws out the substantive commitments of this position in educational, economic, and social policy. Hence, the study provides a blueprint for a radical liberal political agenda which will be of interest to philosophers and to politicians alike. (shrink) Autonomy in Political Theories in Social and Political Philosophy Equality and Responsibility in Social and Political Philosophy Liberalism and Value in Social and Political Philosophy Multiculturalism and Autonomy in Social and Political Philosophy $62.17 new $62.71 used $62.95 from Amazon Amazon page Analogies and "Modeling Analogies" in Teaching: Some Examples in Basic Electricity.J. J. Dupin & S. Johsua - 1989 - Science Education 73 (2):207-224.details Theories and Models in General Philosophy of Science Currents in Contemporary Bioethics: Open Access as Benefit Sharing? The Example of Publicly Funded Large-Scale Genomic Databases.Yann Joly, Clarissa Allen & Bartha M.
Knoppers - 2012 - Journal of Law, Medicine and Ethics 40 (1):143-146.details The Deadlock of Absolute Divine Simplicity.Yann Schmitt - 2013 - International Journal for Philosophy of Religion 74 (1):117-130.details In this article, I explain how and why different attempts to defend absolute divine simplicity fail. A proponent of absolute divine simplicity has to explain why different attributions do not suppose a metaphysical complexity in God but just one superproperty, why there is no difference between God and His super-property and finally how a absolute simple entity can be the truthmaker of different intrinsic predications. It does not necessarily lead to a rejection of divine simplicity but it shows that we (...) may consider another conception of divine simplicity compatible with some metaphysical complexity in God. (shrink) The Way Things Were.Ben Caplan & David Sanson - 2010 - Philosophy and Phenomenological Research 81 (1):24-39.details When is Death Bad for the One Who Dies?Ben Bradley - 2004 - Noûs 38 (1):1–28.details Epicurus seems to have thought that death is not bad for the one who dies, since its badness cannot be located in time. I show that Epicurus' argument presupposes Presentism, and I argue that death is bad for its victim at all and only those times when the person would have been living a life worth living had she not died when she did. I argue that my account is superior to competing accounts given by Thomas Nagel, Fred Feldman and (...) Neil Feit. (shrink) Death and Dying, Misc in Applied Ethics Epicurus in Ancient Greek and Roman Philosophy Social Capital Versus Social Theory: Political Economy and Social Science at the Turn of the Millennium.Ben Fine - 2001 - Routledge.details Ben Fine traces the origins of social capital through the work of Becker, Bourdieu and Coleman and comprehensively reviews the literature across the social sciences. The text is uniquely critical of social capital, explaining how it avoids a proper confrontation with political economy and has become chaotic. This highly topical text addresses some major themes, including the shifting relationship between economics and other social sciences, the 'publish or perish' concept currently burdening scholarly integrity, and how a social science interdisciplinarity requires (...) a place for political economy together with cultural and social theory. (shrink) Philosophy of Social Science, General Works in Philosophy of Social Science Philosophy of Social Science, Miscellaneous in Philosophy of Social Science Citizenship as Shared Fate: Education for Membership in a Diverse Democracy.Sigal Ben-Porath - 2012 - Educational Theory 62 (4):381-395.details The diversity of contemporary democratic nations challenges scholars and educators to develop forms of education that would both recognize difference and develop a shared foundation for a functioning democracy. In this essay Sigal Ben-Porath develops the concept of shared fate as a theoretical and practical response to this challenge. Shared fate offers a viable alternative to current forms of citizenship education, one that develops a significant shared dimension while respecting deep differences within a political community. It is grounded in the (...) social and moral realities of civic life, and it seeks to weave the historical, political, and social ties among members of the nation into a form of affiliation that would sustain and expand their shared political project. 
Some particular educational contexts are considered through the lens of shared fate, including the resegregation of some schooling systems, linguistic diversity, and patriotic education. (shrink) Ethics in the Societal Debate on Genetically Modified Organisms: A Quest for Sense and Sensibility.Devos Yann, Maeseele Pieter, Reheul Dirk, Speybroeck Linda & Waele Danny - 2008 - Journal of Agricultural and Environmental Ethics 21 (1):29-61.details Conscientious Objection in Medicine: Making it Public.Nir Ben-Moshe - 2021 - HEC Forum 33 (3):269-289.details The literature on conscientious objection in medicine presents two key problems that remain unresolved: Which conscientious objections in medicine are justified, if it is not feasible for individual medical practitioners to conclusively demonstrate the genuineness or reasonableness of their objections? How does one respect both medical practitioners' claims of conscience and patients' interests, without leaving practitioners complicit in perceived or actual wrongdoing? My aim in this paper is to offer a new framework for conscientious objections in medicine, which, by bringing (...) medical professionals' conscientious objection into the public realm, solves the justification and complicity problems. In particular, I will argue that: an "Uber Conscientious Objection in Medicine Committee" —which includes representatives from the medical community and from other professions, as well as from various religions and from the patient population—should assess various well-known conscientious objections in medicine in terms of public reason and decide which conscientious objections should be permitted, without hearing out individual conscientious objectors; medical practitioners should advertise their conscientious objections, ahead of time, in an online database that would be easily accessible to the public, without being required, in most cases, to refer patients to non-objecting practitioners. (shrink) Vagueness and Family Resemblance.Hanoch Ben-Yami - 2017 - In Hans-Johann Glock (ed.), A Companion to Wittgenstein. Oxford, UK: Wiley-Blackwell. pp. 407-419.details Ben-Yami presents Wittgenstein's explicit criticism of the Platonic identification of an explanation with a definition and the alternative forms of explanation he employed. He then discusses a few predecessors of Wittgenstein's criticisms and the Fregean background against which he wrote. Next, the idea of family resemblance is introduced, and objections answered. Wittgenstein's endorsement of vagueness and the indeterminacy of sense are presented, as well as the open texture of concepts. Common misunderstandings are addressed along the way. Wittgenstein's ideas, as is (...) then shown, have far-reaching implications for knowledge of meaning and the nature of logic, and with them to the nature of the philosophical project and its possible achievements. (shrink) Open Texture in Philosophy of Language Sorites Paradox in Philosophy of Language Vagueness and Indeterminacy, Miscellaneous in Philosophy of Language Presentism and Truthmaking.Ben Caplan & David Sanson - 2011 - Philosophy Compass 6 (3):196-208.details Three plausible views—Presentism, Truthmaking, and Independence—form an inconsistent triad. By Presentism, all being is present being. By Truthmaking, all truth supervenes on, and is explained in terms of, being. By Independence, some past truths do not supervene on, or are not explained in terms of, present being. We survey and assess some responses to this. 
Mental Maps.Ben Blumson - 2012 - Philosophy and Phenomenological Research 85 (2):413-434.details It's often hypothesized that the structure of mental representation is map-like rather than language-like. The possibility arises as a counterexample to the argument from the best explanation of productivity and systematicity to the language of thought hypothesis—the hypothesis that mental structure is compositional and recursive. In this paper, I argue that the analogy with maps does not undermine the argument, because maps and language have the same kind of compositional and recursive structure. Mental Imagery in Philosophy of Mind The Language of Thought in Philosophy of Mind Autonomy and Adaptive Preferences.Ben Colburn - 2011 - Utilitas 23 (1):52-71.details Adaptive preference formation is the unconscious altering of our preferences in light of the options we have available. Jon Elster has argued that this is bad because it undermines our autonomy. I agree, but think that Elster's explanation of why is lacking. So, I draw on a richer account of autonomy to give the following answer. Preferences formed through adaptation are characterized by covert influence (that is, explanations of which an agent herself is necessarily unaware), and covert influence undermines our (...) autonomy because it undermines the extent to which an agent's preferences are ones that she has decided upon for herself. This answer fills the lacuna in Elster's argument. It also allows us to draw a principled distinction between adaptive preference formation and the closely related phenomenon of character planning. (shrink) Topics in Moral Value in Normative Ethics Doing Away with Harm1.Ben Bradley - 2012 - Philosophy and Phenomenological Research 85 (2):390-412.details The Experience Machine.Ben Bramble - 2016 - Philosophy Compass 11 (3):136-145.details In this paper, I reconstruct Robert Nozick's experience machine objection to hedonism about well-being. I then explain and briefly discuss the most important recent criticisms that have been made of it. Finally, I question the conventional wisdom that the experience machine, while it neatly disposes of hedonism, poses no problem for desire-based theories of well-being. Value Theory, Misc in Value Theory, Miscellaneous Evolutionary Debunking Arguments and the Reliability of Moral Cognition.Ben Fraser - 2014 - Philosophical Studies 168 (2):457-473.details Recent debate in metaethics over evolutionary debunking arguments against morality has shown a tendency to abstract away from relevant empirical detail. Here, I engage the debate about Darwinian debunking of morality with relevant empirical issues. I present four conditions that must be met in order for it to be reasonable to expect an evolved cognitive faculty to be reliable: the environment, information, error, and tracking conditions. I then argue that these conditions are not met in the case of our evolved (...) faculty for moral judgement. (shrink) Evolution of Morality in Normative Ethics Defending Musical Perdurantism.Ben Caplan & Carl Matheson - 2006 - British Journal of Aesthetics 46 (1):59-69.details If musical works are abstract objects, which cannot enter into causal relations, then how can we refer to musical works or know anything about them? Worse, how can any of our musical experiences be experiences of musical works? It would be nice to be able to sidestep these questions altogether. One way to do that would be to take musical works to be concrete objects. 
In this paper, we defend a theory according to which musical works are concrete objects. In (...) particular, the theory that we defend takes musical works to be fusions of performances. We defend this view from a series of objections, the first two of which are raised by Julian Dodd in a recent paper and the last of which is suggested by some comments of his in an earlier paper. (shrink) Musical Works in Aesthetics Perdurance in Metaphysics Three- and Four-Dimensionalism in Metaphysics Educational Justice, Epistemic Justice, and Leveling Down.Ben Kotzee - 2013 - Educational Theory 63 (4):331-350.details Harry Brighouse and Adam Swift argue that education is a positional good; this, they hold, implies that there is a qualified case for leveling down educational provision. In this essay, Ben Kotzee discusses Brighouse and Swift's argument for leveling down. He holds that the argument fails in its own terms and that, in presenting the problem of educational justice as one of balancing education's positional and nonpositional benefits, Brighouse and Swift lose sight of what a consideration of the nonpositional benefits (...) by itself can reveal. Instead of focusing on education's positional benefits, Kotzee investigates what emphasizing education's nonpositional benefits would imply for educational justice. Drawing on recent work in social epistemology, Kotzee holds that theories of educational justice would benefit from a consideration of the virtue of epistemic justice, and he outlines solutions to a number of problems in the area from this perspective. (shrink) Equality in Social and Political Philosophy Altruism or Solidarity? The Motives for Organ Donation and Two Proposals.Ben Saunders - 2012 - Bioethics 26 (7):376-381.details Proposals for increasing organ donation are often rejected as incompatible with altruistic motivation on the part of donors. This paper questions, on conceptual grounds, whether most organ donors really are altruistic. If we distinguish between altruism and solidarity – a more restricted form of other-concern, limited to members of a particular group – then most organ donors exhibit solidarity, rather than altruism. If organ donation really must be altruistic, then we have reasons to worry about the motives of existing donors. (...) However, I argue that altruism is not necessary, because organ donation supplies important goods, whatever the motivation, and we can reject certain dubious motivations, such as financial profit, without insisting on altruism.Once solidaristic donation is accepted, certain reforms for increasing donation rates seem permissible. This paper considers two proposals. Firstly, it has been suggested that registered donors should receive priority for transplants. While this proposal appears based on a solidaristic norm of reciprocity, it is argued that such a scheme would be undesirable, since non-donors may contribute to society in other ways. The second proposal is that donors should be able to direct their organs towards recipients that they feel solidarity with. This is often held to be inconsistent with altruistic motivation, but most donation is not entirely undirected in the first place (for instance, donor organs usually go to co-nationals). While allowing directed donation would create a number of practical problems, such as preventing discrimination, there appears to be no reason in principle to reject it. 
(shrink) Altruism and Psychological Egoism in Normative Ethics Organ Donation in Applied Ethics Is Death Bad for a Cow?Ben Bradley - 2015 - In The Ethics of Killing Animals. pp. 51-64.details Animal Ethics, Misc in Applied Ethics Can a Musical Work Be Created?Ben Caplan & Carl Matheson - 2004 - British Journal of Aesthetics 44 (2):113-134.details Can a musical work be created? Some say 'no'. But, we argue, there is no handbook of universally accepted metaphysical truths that they can use to justify their answer. Others say 'yes'. They have to find abstract objects that can plausibly be identified with musical works, show that abstract objects of this sort can be created, and show that such abstract objects can persist. But, we argue, none of the standard views about what a musical work is allows musical works (...) both to be created and to persist. (shrink) March of Refugees: An Act of Civil Disobedience.Ali Emre Benli - 2018 - Journal of Global Ethics 14 (3):315-331.details On 4 September 2015 asylum seekers who got stranded in Budapest's Keleti train station began a march to cross the Austrian border. Their aim was to reach Germany and Sweden where they believed their asylum claims would be better received. In this article, I argue that the march should be characterized as an act of civil disobedience. This claim may seem to contradict common convictions regarding acts of civil disobedience as well as asylum seekers. The most common justifications are given (...) with reference to moral rights of citizens and concerns for enhancing justice or democracy within states. Asylum seekers are not members of the European public. How can they be entitled to break the law? I first show that the march displays features of a paradigm case of civil disobedience. Then, I identify moral reasons for asylum seekers to carry out the march acceptable from both strong and weak cosmopolitan perspectives. After that, I point out its corrective, stabilizing and democracy-enhancing roles in the European political landscape. I conclude by emphasizing that conceptualizing the march as an act of civil disobedience is significant in recognizing asylum seekers as political agents making claims within an evolving framework of refugee protection. (shrink)
5.7: Correlations

Up to this point we have focused entirely on how to construct descriptive statistics for a single variable. What we haven't done is talked about how to describe the relationships between variables in the data. To do that, we want to talk mostly about the correlation between variables. But first, we need some data.

After spending so much time looking at the AFL data, I'm starting to get bored with sports. Instead, let's turn to a topic close to every parent's heart: sleep. The following data set is fictitious, but based on real events. Suppose I'm curious to find out how much my infant son's sleeping habits affect my mood. Let's say that I can rate my grumpiness very precisely, on a scale from 0 (not at all grumpy) to 100 (grumpy as a very, very grumpy old man). And, let's also assume that I've been measuring my grumpiness, my sleeping patterns and my son's sleeping patterns for quite some time now. Let's say, for 100 days. And, being a nerd, I've saved the data as a file called parenthood.Rdata. If we load the data…

load( "./data/parenthood.Rdata" )
who(TRUE)
##   -- Name --    -- Class --   -- Size --
##   parenthood    data.frame    100 x 4
##    $dan.sleep   numeric       100
##    $baby.sleep  numeric       100
##    $dan.grump   numeric       100
##    $day         integer       100

… we see that the file contains a single data frame called parenthood, which contains four variables dan.sleep, baby.sleep, dan.grump and day. If we peek at the data using head(), here's what we get:

head(parenthood,10)
##    dan.sleep baby.sleep dan.grump day
## 1       7.59      10.18        56   1
## 3       5.14       7.92        82   3
## 10      6.58       7.09        71  10

Next, I'll calculate some basic descriptive statistics:

describe( parenthood )
##            vars   n  mean    sd median trimmed   mad   min    max  range
## dan.sleep     1 100  6.97  1.02   7.03    7.00  1.09  4.84   9.00   4.16
## baby.sleep    2 100  8.05  2.07   7.95    8.05  2.33  3.25  12.07   8.82
## dan.grump     3 100 63.71 10.05  62.00   63.16  9.64 41.00  91.00  50.00
## day           4 100 50.50 29.01  50.50   50.50 37.06  1.00 100.00  99.00
##             skew kurtosis   se
## dan.sleep  -0.29    -0.72 0.10
## baby.sleep -0.02    -0.69 0.21
## dan.grump   0.43    -0.16 1.00
## day         0.00    -1.24 2.90

Finally, to give a graphical depiction of what each of the three interesting variables looks like, Figure 5.6 plots histograms.

Figure 5.6: Histograms for the three interesting variables in the parenthood data set

One thing to note: just because R can calculate dozens of different statistics doesn't mean you should report all of them. If I were writing this up for a report, I'd probably pick out those statistics that are of most interest to me (and to my readership), and then put them into a nice, simple table like the one in Table 5.2. Notice that when I put it into a table, I gave everything "human readable" names. This is always good practice. Notice also that I'm not getting enough sleep. This isn't good practice, but other parents tell me that it's standard practice.

Table 5.2: Descriptive statistics for the parenthood data.
                            min     max    mean  median  std. dev   IQR
Dan's grumpiness             41      91   63.71      62     10.05    14
Dan's hours slept          4.84       9    6.97    7.03      1.02  1.45
Dan's son's hours slept    3.25   12.07    8.05    7.95      2.07  3.21
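As a quick illustration of where numbers like those in Table 5.2 come from, the usual base R summary functions can be applied to any one variable. This is only a sketch using standard functions, not the book's own code:

# Hedged sketch: pull out Table 5.2 style summaries for one variable
# (dan.grump) using ordinary base R functions.
with( parenthood, c( min    = min( dan.grump ),
                     max    = max( dan.grump ),
                     mean   = mean( dan.grump ),
                     median = median( dan.grump ),
                     sd     = sd( dan.grump ),
                     IQR    = IQR( dan.grump ) ) )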
Figure 5.7: Scatterplot showing the relationship between dan.sleep and dan.grump

Figure 5.8: Scatterplot showing the relationship between baby.sleep and dan.grump

We can draw scatterplots to give us a general sense of how closely related two variables are. Ideally though, we might want to say a bit more about it than that. For instance, let's compare the relationship between dan.sleep and dan.grump (Figure 5.7) with that between baby.sleep and dan.grump (Figure 5.8). When looking at these two plots side by side, it's clear that the relationship is qualitatively the same in both cases: more sleep equals less grump! However, it's also pretty obvious that the relationship between dan.sleep and dan.grump is stronger than the relationship between baby.sleep and dan.grump. The plot on the left is "neater" than the one on the right. What it feels like is that if you want to predict what my mood is, it'd help you a little bit to know how many hours my son slept, but it'd be more helpful to know how many hours I slept. In contrast, let's consider Figure 5.8 vs. Figure 5.9. If we compare the scatterplot of "baby.sleep v dan.grump" to the scatterplot of "baby.sleep v dan.sleep", the overall strength of the relationship is the same, but the direction is different. That is, if my son sleeps more, I get more sleep (positive relationship), but if he sleeps more then I get less grumpy (negative relationship).

Figure 5.9: Scatterplot showing the relationship between baby.sleep and dan.sleep

We can make these ideas a bit more explicit by introducing the idea of a correlation coefficient (or, more specifically, Pearson's correlation coefficient), which is traditionally denoted by r. The correlation coefficient between two variables X and Y (sometimes denoted rXY), which we'll define more precisely in the next section, is a measure that varies from −1 to 1. When r=−1 it means that we have a perfect negative relationship, and when r=1 it means we have a perfect positive relationship. When r=0, there's no relationship at all. If you look at Figure 5.10, you can see several plots showing what different correlations look like.

Figure 5.10: Illustration of the effect of varying the strength and direction of a correlation

The formula for the Pearson's correlation coefficient can be written in several different ways. I think the simplest way to write down the formula is to break it into two steps. Firstly, let's introduce the idea of a covariance. The covariance between two variables X and Y is a generalisation of the notion of the variance; it's a mathematically simple way of describing the relationship between two variables that isn't terribly informative to humans:

$$\operatorname{Cov}(X, Y)=\frac{1}{N-1} \sum_{i=1}^{N}\left(X_{i}-\bar{X}\right)\left(Y_{i}-\bar{Y}\right)$$

Because we're multiplying (i.e., taking the "product" of) a quantity that depends on X by a quantity that depends on Y and then averaging, you can think of the formula for the covariance as an "average cross product" between X and Y. The covariance has the nice property that, if X and Y are entirely unrelated, then the covariance is exactly zero. If the relationship between them is positive (in the sense shown in Figure 5.10) then the covariance is also positive; and if the relationship is negative then the covariance is also negative. In other words, the covariance captures the basic qualitative idea of correlation.
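To see the formula in action, the covariance can be computed directly from its definition and checked against R's built-in cov() function. This is just an illustrative sketch using the parenthood variables loaded above, not code from the book:

# Hedged sketch: compute Cov(X,Y) "by hand" from the formula, then compare
# with R's built-in cov(). Both lines should print the same number.
X <- parenthood$dan.sleep
Y <- parenthood$dan.grump
N <- length( X )
sum( (X - mean(X)) * (Y - mean(Y)) ) / (N - 1)   # the "average cross product"
cov( X, Y )                                      # same calculation, via cov()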
In other words, the covariance captures the basic qualitative idea of correlation. Unfortunately, the raw magnitude of the covariance isn't easy to interpret: it depends on the units in which X and Y are expressed, and worse yet, the actual units that the covariance itself is expressed in are really weird. For instance, if X refers to the dan.sleep variable (units: hours) and Y refers to the dan.grump variable (units: grumps), then the units for their covariance are "hours × grumps". And I have no freaking idea what that would even mean.

The Pearson correlation coefficient r fixes this interpretation problem by standardising the covariance, in pretty much the exact same way that the z-score standardises a raw score: by dividing by the standard deviation. However, because we have two variables that contribute to the covariance, the standardisation only works if we divide by both standard deviations. In other words, the correlation between X and Y can be written as follows:

$$r_{X Y}=\frac{\operatorname{Cov}(X, Y)}{\hat{\sigma}_{X} \hat{\sigma}_{Y}}$$

By doing this standardisation, not only do we keep all of the nice properties of the covariance discussed earlier, but the actual values of r are on a meaningful scale: r=1 implies a perfect positive relationship, and r=−1 implies a perfect negative relationship. I'll expand a little more on this point later, in Section 5.7.5. But before I do, let's look at how to calculate correlations in R.

Calculating correlations in R can be done using the cor() command. The simplest way to use the command is to specify two input arguments x and y, each one corresponding to one of the variables. The following extract illustrates the basic usage of the function:

cor( x = parenthood$dan.sleep, y = parenthood$dan.grump )
## [1] -0.903384

However, the cor() function is a bit more powerful than this simple example suggests. For example, you can also calculate a complete "correlation matrix", between all pairs of variables in the data frame:

# correlate all pairs of variables in "parenthood":
cor( x = parenthood )
##              dan.sleep  baby.sleep   dan.grump         day
## dan.sleep   1.00000000  0.62794934 -0.90338404 -0.09840768
## baby.sleep  0.62794934  1.00000000 -0.56596373 -0.01043394
## dan.grump  -0.90338404 -0.56596373  1.00000000  0.07647926
## day        -0.09840768 -0.01043394  0.07647926  1.00000000

Naturally, in real life you don't see many correlations of 1. So how should you interpret a correlation of, say r=.4? The honest answer is that it really depends on what you want to use the data for, and on how strong the correlations in your field tend to be. A friend of mine in engineering once argued that any correlation less than .95 is completely useless (I think he was exaggerating, even for engineering). On the other hand there are real cases – even in psychology – where you should really expect correlations that strong. For instance, one of the benchmark data sets used to test theories of how people judge similarities is so clean that any theory that can't achieve a correlation of at least .9 really isn't deemed to be successful. However, when looking for (say) elementary correlates of intelligence (e.g., inspection time, response time), if you get a correlation above .3 you're doing very very well. In short, the interpretation of a correlation depends a lot on the context. That said, the rough guide in the table below is pretty typical.
knitr::kable( rbind( c("-1.0 to -0.9" ,"Very strong", "Negative"), c("-0.9 to -0.7", "Strong", "Negative") , c("-0.7 to -0.4", "Moderate", "Negative") , c("-0.4 to -0.2", "Weak", "Negative"), c("-0.2 to 0","Negligible", "Negative") , c("0 to 0.2","Negligible", "Positive"), c("0.2 to 0.4", "Weak", "Positive"), c("0.4 to 0.7", "Moderate", "Positive"), c("0.7 to 0.9", "Strong", "Positive"), c("0.9 to 1.0", "Very strong", "Positive")), col.names=c("Correlation", "Strength", "Direction"), booktabs = TRUE)

Correlation     Strength      Direction
-1.0 to -0.9    Very strong   Negative
-0.9 to -0.7    Strong        Negative
-0.7 to -0.4    Moderate      Negative
-0.4 to -0.2    Weak          Negative
-0.2 to 0       Negligible    Negative
0 to 0.2        Negligible    Positive
0.2 to 0.4      Weak          Positive
0.4 to 0.7      Moderate      Positive
0.7 to 0.9      Strong        Positive
0.9 to 1.0      Very strong   Positive

However, something that can never be stressed enough is that you should always look at the scatterplot before attaching any interpretation to the data. A correlation might not mean what you think it means. The classic illustration of this is "Anscombe's Quartet" (Anscombe 1973), which is a collection of four data sets. Each data set has two variables, an X and a Y. For all four data sets the mean value for X is 9 and the mean for Y is 7.5. The standard deviations for all X variables are almost identical, as are those for the Y variables. And in each case the correlation between X and Y is r=0.816. You can verify this yourself, since the dataset comes distributed with R. The commands would be:

cor( anscombe$x1, anscombe$y1 )
## [1] 0.8164205

You'd think that these four data sets would look pretty similar to one another. They do not. If we draw scatterplots of X against Y for all four variables, as shown in Figure 5.11, we see that all four of these are spectacularly different to each other.

Figure 5.11: Anscombe's quartet. All four of these data sets have a Pearson correlation of r=.816, but they are qualitatively different from one another.

The lesson here, which so very many people seem to forget in real life, is "always graph your raw data". This will be the focus of Chapter 6.

Figure 5.12: The relationship between hours worked and grade received, for a toy data set consisting of only 10 students (each circle corresponds to one student). The dashed line through the middle shows the linear relationship between the two variables. This produces a strong Pearson correlation of r=.91. However, the interesting thing to note here is that there's actually a perfect monotonic relationship between the two variables: in this toy example at least, increasing the hours worked always increases the grade received, as illustrated by the solid line. This is reflected in a Spearman correlation of rho=1. With such a small data set, however, it's an open question as to which version better describes the actual relationship involved.

The Pearson correlation coefficient is useful for a lot of things, but it does have shortcomings. One issue in particular stands out: what it actually measures is the strength of the linear relationship between two variables. In other words, what it gives you is a measure of the extent to which the data all tend to fall on a single, perfectly straight line. Often, this is a pretty good approximation to what we mean when we say "relationship", and so the Pearson correlation is a good thing to calculate. Sometimes, it isn't.
One very common situation where the Pearson correlation isn't quite the right thing to use arises when an increase in one variable X really is reflected in an increase in another variable Y, but the nature of the relationship isn't necessarily linear. An example of this might be the relationship between effort and reward when studying for an exam. If you put zero effort (X) into learning a subject, then you should expect a grade of 0% (Y). However, a little bit of effort will cause a massive improvement: just turning up to lectures means that you learn a fair bit, and if you also scribble a few things down in class, your grade might rise to 35%, all without a lot of effort. However, you just don't get the same effect at the other end of the scale. As everyone knows, it takes a lot more effort to get a grade of 90% than it takes to get a grade of 55%. What this means is that, if I've got data looking at study effort and grades, there's a pretty good chance that Pearson correlations will be misleading.

To illustrate, consider the data plotted in Figure 5.12, showing the relationship between hours worked and grade received for 10 students taking some class. The curious thing about this – highly fictitious – data set is that increasing your effort always increases your grade. It might be by a lot or it might be by a little, but increasing effort will never decrease your grade. The data are stored in effort.Rdata:

> load( "effort.Rdata" )
> who(TRUE)
   -- Name --   -- Class --   -- Size --
   effort       data.frame    10 x 2
    $hours      numeric       10
    $grade      numeric       10

The raw data look like this:

> effort
   hours grade
1      2    13
2     76    91

If we run a standard Pearson correlation, it shows a strong relationship between hours worked and grade received,

> cor( effort$hours, effort$grade )
[1] 0.909402

but this doesn't actually capture the observation that increasing hours worked always increases the grade. There's a sense here in which we want to be able to say that the correlation is perfect but for a somewhat different notion of what a "relationship" is. What we're looking for is something that captures the fact that there is a perfect ordinal relationship here. That is, if student 1 works more hours than student 2, then we can guarantee that student 1 will get the better grade. That's not what a correlation of r=.91 says at all.

How should we address this? Actually, it's really easy: if we're looking for ordinal relationships, all we have to do is treat the data as if it were ordinal scale! So, instead of measuring effort in terms of "hours worked", let's rank all 10 of our students in order of hours worked. That is, student 1 did the least work out of anyone (2 hours) so they get the lowest rank (rank = 1). Student 4 was the next laziest, putting in only 6 hours of work over the whole semester, so they get the next lowest rank (rank = 2). Notice that I'm using "rank = 1" to mean "low rank". Sometimes in everyday language we talk about "rank = 1" to mean "top rank" rather than "bottom rank". So be careful: you can rank "from smallest value to largest value" (i.e., small equals rank 1) or you can rank "from largest value to smallest value" (i.e., large equals rank 1). In this case, I'm ranking from smallest to largest, because that's the default way that R does it. But in real life, it's really easy to forget which way you set things up, so you have to put a bit of effort into remembering!
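A quick way to see the difference between the two ranking directions (a tiny sketch, unrelated to the effort data):

x <- c( 3, 1, 2 )
rank(  x )    # smallest value gets rank 1, so this returns 3 1 2
rank( -x )    # flip the sign to rank from largest to smallest: returns 1 3 2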
Okay, so let's have a look at our students when we rank them from worst to best in terms of effort and reward:

             rank (hours worked)   rank (grade received)
student 1             1                     1
student 2            10                    10
...                 ...                   ...
student 10            9                     9

Hm. These are identical. The student who put in the most effort got the best grade, the student with the least effort got the worst grade, etc. We can get R to construct these rankings using the rank() function, like this:

> hours.rank <- rank( effort$hours )   # rank students by hours worked
> grade.rank <- rank( effort$grade )   # rank students by grade received

As the table above shows, these two rankings are identical, so if we now correlate them we get a perfect relationship:

> cor( hours.rank, grade.rank )
[1] 1

What we've just re-invented is Spearman's rank order correlation, usually denoted ρ to distinguish it from the Pearson correlation r. We can calculate Spearman's ρ using R in two different ways. Firstly we could do it the way I just showed, using the rank() function to construct the rankings, and then calculate the Pearson correlation on these ranks. However, that's way too much effort to do every time. It's much easier to just specify the method argument of the cor() function.

> cor( effort$hours, effort$grade, method = "spearman")
[1] 1

The default value of the method argument is "pearson", which is why we didn't have to specify it earlier on when we were doing Pearson correlations.

As we've seen, the cor() function works pretty well, and handles many of the situations that you might be interested in. One thing that many beginners find frustrating, however, is the fact that it's not built to handle non-numeric variables. From a statistical perspective, this is perfectly sensible: Pearson and Spearman correlations are only designed to work for numeric variables, so the cor() function spits out an error. Here's what I mean. Suppose you were keeping track of how many hours you worked in any given day, and counted how many tasks you completed. If you were doing the tasks for money, you might also want to keep track of how much pay you got for each job. It would also be sensible to keep track of the weekday on which you actually did the work: most of us don't work as much on Saturdays or Sundays. If you did this for 7 weeks, you might end up with a data set that looks like this one:

> load("work.Rdata")
   work         data.frame   49 x 7
    $hours      numeric      49
    $tasks      numeric      49
    $pay        numeric      49
    $day        integer      49
    $weekday    factor       49
    $week       numeric      49
    $day.type   factor       49
> head(work)
  hours tasks pay day   weekday week day.type
1   7.2    14  41   1   Tuesday    1  weekday
2   7.4    11  39   2 Wednesday    1  weekday
3   6.6    14  13   3  Thursday    1  weekday
4   6.5    22  47   4    Friday    1  weekday
5   3.1     5   4   5  Saturday    1  weekend
6   3.0     7  12   6    Sunday    1  weekend

Obviously, I'd like to know something about how all these variables correlate with one another. I could correlate hours with pay quite easily using cor(), like so:

> cor(work$hours,work$pay)
[1] 0.7604283

But what if I wanted a quick and easy way to calculate all pairwise correlations between the numeric variables? I can't just input the work data frame, because it contains two factor variables, weekday and day.type. If I try this, I get an error:

> cor(work)
Error in cor(work) : 'x' must be numeric

In order to get the correlations that I want using the cor() function, what I need to do is create a new data frame that doesn't contain the factor variables, and then feed that new data frame into the cor() function. It's not actually very hard to do that, and I'll talk about how to do it properly in the section on subsetting data frames.
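For instance (a minimal sketch; it just uses the work data frame from above), one way is to keep only the numeric columns and feed those to cor():

> work.numeric <- work[ , sapply( work, is.numeric ) ]   # drop the factor columns
> cor( work.numeric )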
But it would be nice to have some function that is smart enough to just ignore the factor variables. That's where the correlate() function in the lsr package can be handy. If you feed it a data frame that contains factors, it knows to ignore them, and returns the pairwise correlations only between the numeric variables:

> correlate(work)

- correlation type: pearson
- correlations shown only when both variables are numeric

           hours   tasks     pay     day  weekday    week  day.type
hours          .   0.800   0.760  -0.049        .   0.018         .
tasks      0.800       .   0.720  -0.072        .  -0.013         .
pay        0.760   0.720       .   0.137        .   0.196         .
day       -0.049  -0.072   0.137       .        .   0.990         .
weekday        .       .       .       .        .       .         .
week       0.018  -0.013   0.196   0.990        .       .         .
day.type       .       .       .       .        .       .         .

The output here shows a . whenever one of the variables is non-numeric. It also shows a . whenever a variable is correlated with itself (it's not a meaningful thing to do). The correlate() function can also do Spearman correlations, by specifying the corr.method argument:

> correlate( work, corr.method="spearman" )

- correlation type: spearman

Obviously, there's no new functionality in the correlate() function, and any advanced R user would be perfectly capable of using the cor() function to get these numbers out. But if you're not yet comfortable with extracting a subset of a data frame, the correlate() function is for you.
Turkish Journal of Mathematics, Vol. 41 (2017), No. 4

p-Subordination chains and p-valence integral operators
ERHAN DENİZ, HALİT ORHAN, MURAT ÇAĞLAR
10.3906/mat-1505-9

In the present investigation we obtain some sufficient conditions for the analyticity and the $p$-valence of an integral operator in the unit disk $\mathbb{D}$. Using these conditions we give some applications for a few different integral operators. The significant relationships and relevance to other results are also given. A number of known univalent conditions would follow upon specializing the parameters involved in our main results.

DENİZ, ERHAN; ORHAN, HALİT; and ÇAĞLAR, MURAT (2017) "p-Subordination chains and p-valence integral operators," Turkish Journal of Mathematics: Vol. 41: No. 4, Article 15. https://doi.org/10.3906/mat-1505-9 Available at: https://journals.tubitak.gov.tr/math/vol41/iss4/15
Machine Vision and Applications May 2016 , Volume 27, Issue 4, pp 559–576 | Cite as An efficient surface registration using smartphone Tomislav Pribanić Yago Diez Ferran Roure Joaquim Salvi First Online: 13 February 2016 Gathering 3D object information from the multiple spatial viewpoints typically brings up the problem of surface registration. More precisely, registration is used to fuse 3D data originally acquired from different viewpoints into a common coordinate system. This step often requires the use of relatively bulky and expensive robot arms (turntables) or presents a very challenging problem if constrained to software solutions. In this paper we present a novel surface registration method, motivated by an efficient and user-friendly implementation. Our system is inspired by the idea that three out of generally six registration parameters (degrees of freedom) can be provided in advance, at least to some degree of accuracy, by today's smartphones. Experimental evaluations demonstrated the successful point cloud registrations of \(\sim \)10,000 points in a matter of seconds. The evaluation included comparison with state-of-the-art descriptor methods. The method's robustness was also studied and the results using 3D data from a professional scanner showed the potential for real-world applications. Surface registration 3D reconstruction Structured light Stereo vision Inertial sensor Smartphone Cell phone Appendix: Computing orientation from the accelerometer and magnetometer measurements For the completeness of the proposed method, in this section we provide a general computational framework of how to compute orientation data directly from both the accelerometer and the magnetometer measurements. For convenience we have proposed our method using a smartphone. However, there may be researches working in the applications where accelerometer and magnetometer are already provided as stand-alone pieces of hardware and which may be then preferred over a smartphone. It is useful to note that the orientation sensor is basically a so-called logical sensor, what means that the output data delivered by the sensor are the result of data gathered by the physical sensor, e.g., accelerometer and magnetometer, including some processing and filtering. Some professional orientation sensors include additional physical sensors such as gyroscopes for data stability under dynamic conditions [35]. However, for our work, a simple accelerometer and magnetometer will suffice. Accelerometers are devices sensitive to the difference between the linear acceleration of the sensor and the local gravitational field, whereas magnetometers measure the components of earth's magnetic field. We define an orientation sensor (phone) coordinate system as depicted in Fig. 2 (for simplicity a smartphone is shown again, but it should be clear that generally it can be any device containing an accelerometer and a magnetometer). It is also assumed that the accelerometer and magnetometer, contained inside the physical sensor, are aligned with the sensor coordinate system. It is convenient to define a sensor reference position using gravitational and magnetic field vectors (Fig. 2), where the sensor in its reference position is positioned flat and facing northwards. Consequently, all sensor rotations are expressed with respect to that reference. 
With this assumption, a three-axis accelerometer in the earth's gravitational field, and undergoing no linear acceleration, will have the output \(\mathbf{G}_{\mathbf{p}}\) given by: $$\begin{aligned} G_\mathrm{p}= \left[ \begin{array}{l} G_x\\ G_y\\ G_z\\ \end{array}\right] =R_\mathrm{S} \cdot \left[ \begin{array}{l} 0\\ 0\\ g\\ \end{array}\right] \end{aligned}$$ where rotation matrix \(\mathbf{R}_{\mathbf{S}}\) describes a sensor orientation from the reference position and \(g=9.8065\) ms\(^{-2}\) is the gravitational constant. \(\mathbf{R}_{\mathbf{S}}\) is composed as the product of three basic rotations \(\mathbf{R}_{\mathbf{X}}\), \(\mathbf{R}_{\mathbf{Y}}\) and \(\mathbf{R}_{\mathbf{Z}}\) about \(X_{\mathrm{S}}\), \(Y_{\mathrm{S}}\) and \(Z_{\mathrm{S}}\) axis, respectively; so that each rotation is defined by the corresponding rotation angle (Fig. 2): $$\begin{aligned}&R_X =\left[ {{\begin{array}{c@{\quad }c@{\quad }c} 1 &{} 0 &{} 0 \\ 0 &{} {\cos \hbox {Pitch}} &{} {-\sin \hbox {Pitch}} \\ 0 &{} {\sin \hbox {Pitch}} &{} {\cos \hbox {Pitch}} \\ \end{array} }} \right] \end{aligned}$$ $$\begin{aligned}&R_Y =\left[ {{\begin{array}{c@{\quad }c@{\quad }c} {\cos \hbox {Roll}} &{} 0 &{} {\sin \hbox {Roll}} \\ 0 &{} 1 &{} 0 \\ {-\sin \hbox {Roll}} &{} 0 &{} {\cos \hbox {Roll}} \end{array} }} \right] \end{aligned}$$ $$\begin{aligned}&R_Z =\left[ {{\begin{array}{c@{\quad }c@{\quad }c} {\cos \hbox {Yaw}} &{} {-\sin \hbox {Yaw}} &{} 0 \\ {\sin \hbox {Yaw}} &{} {\cos \hbox {Yaw}} &{} 0 \\ 0 &{} 0 &{} 1 \end{array} }} \right] \end{aligned}$$ There are six possible ways of combining these three rotation matrices to compose the rotation matrix \(\mathbf{R}_{\mathbf{S}}\), acknowledging that a different rotation ordering would yield to a different set of Yaw (azimuth), Pitch and Roll rotation angles. The question is: Can we choose any of this six possible rotation orderings and based on Eq. (6) be able to retrieve the rotation angles? The matter of fact is that a three-component accelerometer vector reading \(\mathbf{G}_{\mathbf{P}}\) also needs (ideally) to satisfy the following constraint: $$\begin{aligned} \sqrt{G_x^2 +G_y^2 +G_z^2 } =g=9.8060~\hbox {ms}^{-2} \end{aligned}$$ Hence, there are actually only two degrees of freedom, which would be insufficient to find unique values for all three Yaw, Pitch and Roll angles, based on \(\mathbf{G}_{\mathbf{P}}\) only. Fortunately, two out of six rotation orderings (xyz and yxz rotation order) yield a functional form of Eq. (6) dependent on two rotation angles only: Roll and Pitch. 
More specifically, for xyz rotation order, i.e., \(\mathbf{R}_{\mathbf{S}}=\mathbf{R}_{\mathbf{X}}\cdot \mathbf{R}_{\mathbf{Y}}\cdot \mathbf{R}_{\mathbf{Z}}\), we obtain:
$$\begin{aligned}&{G_\mathrm{p}} = \left[ {\begin{array}{c} {{G_x}} \\ {{G_y}} \\ {{G_z}} \\ \end{array} } \right] = \left[ {\begin{array}{c} {g \cdot \sin \hbox {Roll}} \\ { - g \cdot \cos \hbox {Roll} \cdot \sin \hbox {Pitch}} \\ {g \cdot \cos \hbox {Roll} \cdot \cos \hbox {Pitch}} \end{array} } \right] \end{aligned}$$
$$\begin{aligned}&\tan \hbox {Pitch} = \frac{{ - {G_y}}}{{{G_z}}}\end{aligned}$$
$$\begin{aligned}&\tan \hbox {Roll} = \frac{{{G_x}}}{{\sqrt{G_y^2 + G_z^2} }} \end{aligned}$$
Then, two rotation angles can be retrieved as:
$$\begin{aligned}&\hbox {Pitch} = a\tan 2( - {G_y},~{G_z})\end{aligned}$$
$$\begin{aligned}&\hbox {Roll} = a\tan 2\frac{{{G_x}}}{{\sqrt{G_y^2 + G_z^2} }} \end{aligned}$$
where the Android convention is used, restricting the Pitch angle to the range between \(-180^{\circ }\) and \(180^{\circ }\) (\(a\tan 2\) here represents the tangent inverse function in the range \(-180^{\circ }\) and \(180^{\circ }\)), and the Roll angle to the range between \(-90^{\circ }\) and \(90^{\circ }\). It is worth noting that in practice when \(G_y\approx 0\) and \(G_z\approx 0,\) computing the inverse tangent function of Eq. (12) becomes unstable. There is no perfect solution for this problem, but one common approach is to use, instead of (12), the following approximate expression to compute the Pitch angle:
$$\begin{aligned} \tan \hbox {Pitch} = \frac{{ - {G_y}}}{{\sqrt{\mu \cdot G_x^2 + G_z^2} }} \end{aligned}$$
where typically \(\mu =0.01\) [34]. The preceding text demonstrated that the accelerometer alone is sufficient to reveal two out of three orientation angles needed for the proposed method. On those findings we build next how to recover the remaining orientation Yaw angle. A three-axis magnetometer in the earth's magnetic field will have output \(\mathbf{B}_{\mathbf{p}}\) given by:
$$\begin{aligned} {B_\mathrm{p}} = \left[ {\begin{array}{c} {{B_x}} \\ {{B_y}} \\ {{B_z}} \\ \end{array} } \right] = {R_\mathrm{S}} \cdot \left[ {\begin{array}{c} 0 \\ {B \cdot \cos \delta } \\ { - B \cdot \sin \delta } \\ \end{array} } \right] \end{aligned}$$
where rotation matrix \(\mathbf{R}_{\mathbf{S}}\) describes the sensor orientation from the reference position (Fig. 2), B is the geomagnetic field strength which varies over the earth's surface, and \(\delta \) is the angle of inclination of the geomagnetic field measured downwards from horizontal, which varies over the earth's surface, too. Fortunately, both \(\delta \) and B cancel out during the subsequent computation. Assuming \(\mathbf{R}_{\mathbf{S}}\) is defined through the three successive rotations \(\mathbf{R}_{\mathbf{X}}\), \(\mathbf{R}_{\mathbf{Y}}\), and \(\mathbf{R}_{\mathbf{Z}}\) around system axes X, Y and Z, respectively, and using xyz rotation order, Eq. (17) can be re-written as:
$$\begin{aligned} R_Y^{ - 1} \cdot R_X^{ - 1} \cdot \left[ {\begin{array}{c} {{B_x}} \\ {{B_y}} \\ {{B_z}} \\ \end{array} } \right]= & {} {R_Z} \cdot \left[ {\begin{array}{c} 0 \\ {B \cdot \cos \delta } \\ { - B \cdot \sin \delta } \\ \end{array} } \right] \nonumber \\= & {} \left[ {\begin{array}{c} { - B \cdot \cos \delta \cdot \sin \hbox {Yaw}} \\ {B \cdot \cos \delta \cdot \cos \hbox {Yaw}} \\ { - B \cdot \sin \delta } \end{array} } \right] \end{aligned}$$
Since the two rotation angles Pitch and Roll have already been found using the accelerometer measurements, after some algebraic manipulation of (18), the Yaw angle is computed as:
$$\begin{aligned}&{B_2} = {B_x} \cdot \cos \hbox {Roll} - {B_z} \cdot \cos \hbox {Pitch} \cdot \sin \hbox {Roll} + {B_y} \cdot \sin {\text {Pitch}} \cdot \sin \hbox {Roll}\nonumber \\&{B_1} = {B_y} \cdot \cos \hbox {Pitch} + {B_z} \cdot \sin \hbox {Pitch}\nonumber \\&\hbox {Yaw}= a\tan 2( -{B_2},{B_1}) \end{aligned}$$
Equation (19) provides Yaw readings in the range between \(-180^{\circ }\) and \(180^{\circ }\), though the Android version we have been experimenting with appears to use a convention that transforms Yaw readings into the range between \(0^{\circ }\) and \(360^{\circ }\). We conclude that all three rotation angles, required initially by the proposed registration method, can be neatly retrieved in the general case using combined accelerometer and magnetometer measurements.
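As a purely illustrative sketch (in R; this is not part of the original paper, and the function name and the sample sensor readings are made up), the Pitch, Roll and Yaw expressions above translate almost directly into code:

orientation.from.sensors <- function( Gx, Gy, Gz, Bx, By, Bz, mu = 0.01 ) {
  # tilt from the accelerometer, using the stabilised Pitch expression with mu
  pitch <- atan2( -Gy, sqrt( mu * Gx^2 + Gz^2 ) )
  roll  <- atan2( Gx, sqrt( Gy^2 + Gz^2 ) )
  # tilt-compensated heading from the magnetometer, as in the Yaw formula above
  B2  <- Bx * cos(roll) - Bz * cos(pitch) * sin(roll) + By * sin(pitch) * sin(roll)
  B1  <- By * cos(pitch) + Bz * sin(pitch)
  yaw <- atan2( -B2, B1 )
  c( Pitch = pitch, Roll = roll, Yaw = yaw ) * 180 / pi   # in degrees
}
# sensor flat and facing north (the reference position); the magnetic-field numbers are
# invented but follow the reference-position structure (0, B*cos(delta), -B*sin(delta)):
orientation.from.sensors( Gx = 0, Gy = 0, Gz = 9.8065, Bx = 0, By = 20, Bz = -40 )
# all three angles should come out as (approximately) zero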
References

Matabosch, C., Fofi, D., Salvi, J., Batlle, E.: Registration of surfaces minimizing error propagation for a one-shot multi-slit hand-held scanner. Pattern Recognit. 41, 2055–2067 (2008)CrossRefzbMATHGoogle Scholar Martins, A.F., Bessant, M., Manukyan, L., Milinkovitch, M.C.: R\(^{2}\)OBBIE-3D, a fast robotic high-resolution system for quantitative phenotyping of surface geometry and colour-texture. PLoS One 10(6), e0126740 (2015). doi: 10.1371/journal.pone.0126740 CrossRefGoogle Scholar https://www.igd.fraunhofer.de/en/Institut/Abteilungen/VHT/Projekte/Automated-3D-Digitisation. Accessed Aug 2015 Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., Ginzton, M., Anderson, S., Davis, J., Ginsberg, J., Shade, J., Fulk, D.: The digital michelangelo project: 3D scanning of large statues. In: Siggraph 2000, Computer Graphics Proceedings, pp. 131–144. ACM Press/ACM SIGGRAPH/Addison Wesley Longman (2000)Google Scholar Salvi, J., Matabosch, C., Fofi, D., Forest, F.: A review of recent range image registration methods with accuracy evaluation. Image Vis. Comput. 25, 578–596 (2007)CrossRefGoogle Scholar Aiger, D., Mitra, N.J., Cohen-Or, D.: 4-points congruent sets for robust surface registration. In: ACM SIGGRAPH, pp. 1–10 (2008)Google Scholar Besl, P., McKay, N.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14, 239–256 (1992)CrossRefGoogle Scholar Rusinkiewicz, S., Levoy, M.: Efficient variants of the ICP algorithm. In: 3rd International Conference on 3-D Digital Imaging and Modeling, pp. 145–152 (2011)Google Scholar Mohammadzade, H., Hatzinakos, D.: Iterative closest normal point for 3D face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 36, 381–397 (2013)CrossRefGoogle Scholar Diez, Y., Martí, J., Salvi, J.: Hierarchical normal space sampling to speed up point cloud coarse matching. Pattern Recognit. Lett. 33, 2127–2213 (2012)CrossRefGoogle Scholar Wald, I., Havran, V.: On building fast kd-trees for ray tracing, and on doing that in O(NlogN).
In: IEEE Symposium on Interactive Ray Tracing, pp. 61–69 (2006)Google Scholar Project tango. https://www.google.com/atap/projecttango/#project. Accessed Mar 2015 Structure Sensor. http://structure.io/. Accessed March 2015 Huang, Q.-X., Adams, B., Wicke, M., Guibas, L.J.: Non-rigid registration under isometric deformations. Comput. Graph. Forum 27, 1449–1457 (2008)CrossRefGoogle Scholar Lian, Z., et al.: A comparison of methods for non-rigid 3d shape retrieval. Pattern Recognit. 46, 449–461 (2013)CrossRefGoogle Scholar Alt, H., Mehlhorn, K., Wagener, H., Welzl, E.: Congruence, similarity, and symmetries of geometric objects. Discret. Comput. Geom. 3, 237–256 (1988)MathSciNetCrossRefzbMATHGoogle Scholar Chung, D.H., Yun, D., Lee, S.U.: Registration of multiple-range views using the reverse-calibration technique. Pattern Recognit. 31, 457–464 (1998)CrossRefGoogle Scholar Santamaría, J., Cordon, O., Damas, S.: A comparative study of state-of-the-art evolutionary image registration methods for 3D modelling. Comput. Vis. Image Underst. 115, 1340–1354 (2011)CrossRefGoogle Scholar Mian, A., Bennamoun, M., Owens, R.: On the Repeatability and quality of keypoints for local feature-based 3D object retrieval from cluttered scenes. Int. J. Comput. Vis. 89, 348–361 (2010)CrossRefGoogle Scholar Makadia, A., Patterson, A., Daniilidis, K.: Fully automatic registration of 3D point clouds. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. I, pp. 1297–1304 (2006)Google Scholar Wyngaerd, J.V., Van Gool, L.: Automatic crude patch registration: toward automatic 3d model building. Comput. Vis. Image Underst. 87, 8–26 (2002)CrossRefzbMATHGoogle Scholar Stamos, I., Leordeanu, M.: Automated feature-based range registration of urban scenes of large scale. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 555–561 (2003)Google Scholar Park, S.-Y., Choi, S.-I., Kim, J., Chae, J.S.: Real-time 3D registration using GPU. Mach. Vis. Appl. 22, 837–850 (2011)CrossRefGoogle Scholar Tanskanen, P., Kolev, K., Meier, L., Camposeco, F., Saurer, O., Pollefeys, M.: Live metric 3D reconstruction on mobile phones. In: ICCV '13 Proceedings of the 2013 IEEE International Conference on Computer Vision, pp. 65–72Google Scholar Nießner, M., Dai, A., Fisher, M.: Combining inertial navigation and ICP for real-time 3D surface reconstruction. Eurographics 2014, 13–16 (2014)Google Scholar Feldmar, J., Ayache, N.: Rigid, affine and locally affine registration of free-form surfaces. Technical Report of INRIA, Sophia Antipolis (1994)Google Scholar http://www.gsmarena.com/htc_wildfire-3337.php. Accessed March 2015 Petković, T., Pribanić, T., Donlić, M.: The self-equalizing De Bruijn sequence for 3D profilometry. In: Proceedings of the 26th British Machine Vision Conference (BMVC 2015), September 7–10, Swansea, UK, pp. 1–11Google Scholar Salvi, J., Fernandez, S., Pribanic, T., Llado, X.: A state of the art in structured light patterns for surface profilometry. Pattern Recognit. 43, 2666–2680 (2010)CrossRefzbMATHGoogle Scholar Pribanic, T., Mrvos, S., Salvi, J.: Efficient multiple phase shift patterns for dense 3D acquisition in structured light scanning. Image Vis. Comput. 28, 1255–1266 (2010)CrossRefGoogle Scholar Amenta, N., Choi, S., Kolluri, R.K.: The power crust, unions of balls, and the medial axis transform. Comput. Geom. Theory Appl. 19, 127–153 (2001)MathSciNetCrossRefzbMATHGoogle Scholar http://meshlab.sourceforge.net/. 
Accessed March 2015 Pedley, M.: Tilt sensing using a three-axis accelerometer, AN3461, Freescale Semiconductor, pp. 1–22 (2013)Google Scholar Ozyagcilar, T.: Implementing a tilt-compensated eCompass using accelerometer and magnetometer sensors. AN4248, Freescale Semiconductor, pp. 1–21 (2008)Google Scholar MTi and MTx User Manual and Technical Documentation. http://www.xsens.com/. Accessed March 2015 Gotcha 3D Angel sculpture. http://www.4ddynamics.com/examples/. Accessed March 2015 Johnson, A.E., Hebert, M.: Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 433–449 (1999)CrossRefGoogle Scholar Tombari, F., Salti, S., Di Stefano, L.: Unique signatures of histograms for local surface description. In: Computer Vision—ECCV 2010, pp. 356–369. Springer, Berlin (2010)Google Scholar http://graphics.stanford.edu/data/3Dscanrep/. Accessed March 2015 Shirmohammadi, B., Taylor, C.J.: Self-localizing smart camera networks. ACM Trans. Embed. Comput. Syst. 8, 1–26 (2011)Google Scholar

Author affiliations:
1. Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
2. GSIS Tokuyama Laboratory, Tohoku University, Sendai, Japan
3. Institute of Computer Vision and Robotics, University of Girona, Girona, Spain

Pribanić, T., Diez, Y., Roure, F. et al. Machine Vision and Applications (2016) 27: 559. https://doi.org/10.1007/s00138-016-0751-0
Received 17 March 2015. Revised 24 November 2015. Accepted 18 January 2016. First Online 13 February 2016.
When studying renormalization of QED in standard textbooks, we typically encounter derivatives with respect to ${\not}{p}=p^\mu \gamma_\mu$, i.e., $\partial/\partial{\not}p$. As far as I understand, this is just a symbolic notation to perform derivative operations over functions of symbols like ${\not}{p}$ itself or $p^2={\not}{p}^{\,2}$, as if ${\not}{p}$ were just some variable which could be replaced by $x$, for instance. Clearly, there is something more going on, because ${\not}{p}$ is a matrix, so a derivative with respect to it would be something like a derivative with respect to a matrix, which I cannot understand. As far as conventional QED is concerned, that works fine -- we only need to perform this derivative on ${\not}{p}$ or $p^2$ -- but if we depart from it to consider different models, we may encounter operations like $\frac{\partial\,p^\mu}{\partial\,{\not}{p}}$. I wonder if someone has ever dealt with that, and what solution was found. I have been trying to keep this symbolic notation to find the answer, but I always get stuck at some point or I end up finding something inconsistent -- for instance, I would expect the result to be Lorentz covariant, but I cannot find a result with such a property. Conversely, I have tried to give up this notation and make real sense of the operation, but with no success so far.

Edit: The basic context is set in the discussion of renormalization of QED. You have the renormalized electron's propagator written as $\frac{i}{{\not}{p}-m-\Sigma(p)}$. When you take your quantities on shell, i.e., at ${\not}{p}=m$ (where both sides of this equality are understood to be acting on a spinor $u(p)$), you require that the pole of this propagator is located at $p^2=m^2$ and the residue is $i$. This amounts to the requirement of $\Sigma({\not}{p}=m)=0$ and $\frac{\partial\,\Sigma}{\partial\,{\not}{p}}|_{{\not}{p}=m}=0$, once you assume the symbolic notation of the derivative with respect to ${\not}{p}$. For the conventional QED, $\Sigma=\Sigma({\not}{p})$ and this symbolic derivative can be taken without a problem. Modified models based on QED can lead to a $\Sigma(p)$ that also contains $p^\mu$, for instance, making that derivative operation unclear. – andrehgomes

– Where, precisely, have you seen this notation? It would be helpful to have more specific context to interpret it properly. – joshphysics Dec 13 '13 at 20:05
– See Eq. (17.116) of Hatfield's "Quantum field theory of point particles and strings", or Eq. (7.25) of Peskin & Schroeder's "An introduction to quantum field theory", or the comment before Eq. (62.38) of Srednicki's "Quantum field theory", for instance. – andrehgomes Dec 13 '13 at 20:25
– {\not}p yields ${\not}p$, which looks prettier than \not{p} rendered as $\not{p}$ – Christoph Dec 13 '13 at 21:55
– I believe @joshphysics is asking for a reference for $\frac{\partial\,p^\mu}{\partial\,{\not}{p}}$, but the examples you gave (at least the ones from P&S and Srednicki) are of the form $\frac{\partial f({\not}{p})}{\partial\,{\not}{p}}$ – Jia Yiyang Dec 14 '13 at 1:38
– I see now, sorry. Yes, that's correct, all given textbook examples are of the form $\frac{\partial\,f({\not}{p})}{\partial\,{\not}{p}}$. The case for $\frac{\partial\,p^{\mu}}{\partial\,{\not}{p}}$ is something particular of a nonconventional QED model I am working on.
The nonconventional part gives contributions to the electron's self-energy $\Sigma$ that are proportional also to $p^{\mu}$ (there are new degrees of freedom coupling to $p^\mu$ so the expression remains covariant). – andrehgomes Dec 14 '13 at 13:47

Here is my tentative answer. First of all, this answer is all based on the conventional QED, where we know the electron's self-energy is of the form $\Sigma=\Sigma({\not}{p})$. My original question deals with nonconventional QED where $\Sigma$ may also have momentum dependence on $p^\mu$ (which could be contracted with a background field of some kind, for instance, therefore resulting in an observer scalar quantity). Hopefully, my arguments may be extended to that nonconventional case.

Basically, I cannot find a consistent way of doing $\frac{\partial p^\mu}{\partial {\not}{p}}$, and I suspect there isn't one at all, because $\frac{\partial}{\partial {\not}{p}}$ may not be a well-defined operation but instead just a practical symbolic notation. Therefore, my approach is to avoid using $\frac{\partial}{\partial {\not}{p}}$ and instead use $\frac{\partial}{\partial p^2}$, which seems to be the meaningful operation. As far as I could track, in the context mentioned above, the only instance where we need $\frac{\partial}{\partial {\not}{p}}$ is when imposing $\frac{\partial \Sigma}{\partial {\not}{p}}\Big|_{{\not}{p}=m}=0$. This requirement shows up when we ask the propagator $\frac{i}{{\not}{p}-m-\Sigma}$ to have residue $i$ at ${\not}{p}=m$, once we already have the condition $\Sigma({\not}p=m)=0$ that ensures its pole is located at $p^2=m^2$. The important point here is that the specific value $i$ arises after a series of loose symbolic operations for taking the residue: $Residue = \displaystyle \lim_{{\not}{p}\rightarrow m} ({\not}p-m) \frac{i}{{\not}{p}-m-\Sigma}$, but in the limit we would get $\frac{0}{0}$ because $\Sigma({\not}p=m)=0$. Thus, we use l'Hôpital's rule (by taking $\frac{\partial}{\partial {\not}{p}}$ of the numerator and denominator), and in the end we see that imposing $\frac{\partial \Sigma}{\partial {\not}{p}}\Big|_{{\not}{p}=m}=0$ indeed fixes the residue at $i$. This is motivated by the wish to have the renormalized propagator look like the free one (where there is no $\Sigma$).

On the other hand, we can do the same thing in a meaningful way by writing the propagator as $i({\not}{p}-m-\Sigma)^{-1} = i\frac{{\not}{p}+m+\Sigma}{p^2-(m+\Sigma)^2}$ and looking for the residue, $Residue = \displaystyle \lim_{p^2\rightarrow m^2} (p^2-m^2) i\frac{{\not}{p}+m+\Sigma}{p^2-(m+\Sigma)^2}$, and again we would find $\frac{0}{0}$ and then use l'Hôpital, but by means of the $\frac{\partial}{\partial p^2}$ derivative instead. We find that imposing $\frac{\partial \Sigma}{\partial p^2}\Big|_{{\not}{p}=m}=0$ fixes the residue at $2m i$. The extra $2m$ factor is indeed the same we would find for the free case ($\Sigma=0$). The value of the residue is different in the two cases first because the dimensionality of the derivatives is different. Second, the latter approach has a pole at $p^2=m^2$ and this makes precise sense, while the former symbolic approach mentioned before has the pole at ${\not}{p}=m$ in a very loose sense.
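For completeness, the l'Hôpital step behind the $2mi$ value can be written out explicitly (nothing new here, just the algebra, under the two on-shell conditions):

$$\mathrm{Res}=\lim_{p^2\to m^2}\frac{(p^2-m^2)\,i\,({\not}p+m+\Sigma)}{p^2-(m+\Sigma)^2}
=\left.\frac{i\,({\not}p+m+\Sigma)+(p^2-m^2)\,\partial_{p^2}\!\left[i({\not}p+\Sigma)\right]}{1-2\,(m+\Sigma)\,\partial_{p^2}\Sigma}\right|_{{\not}p=m}
=\frac{i\,(m+m+0)}{1-0}=2mi,$$

using $\Sigma\big|_{{\not}p=m}=0$, $\partial_{p^2}\Sigma\big|_{{\not}p=m}=0$, and the fact that the term proportional to $p^2-m^2$ drops out in the limit.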
The first point is: in the end, both conditions are expected to give precisely the same result. Both conditions are always taken on shell (${\not}{p}=m$), therefore, given the correct dimensionality factor $2m$, derivatives with respect to $p^2$ should be equivalent to ones with respect to ${\not}{p}$. (One way to understand that is by looking at the effect of both derivatives on $p^2={\not}{p}^{\,2}$ after imposing ${\not}{p}=m$.) The second point is: because $\Sigma$ depends on ${\not}{p}$ it is much easier to work with $\frac{\partial}{\partial {\not}{p}}$ instead of $\frac{\partial}{\partial p^2}$, also because to use the latter we need to know $\frac{\partial p^\mu}{\partial p^2}$, which is not immediately obvious. Once we know the symbolic notation is equivalent to the meaningful one, using $\frac{\partial}{\partial {\not}{p}}$ is more practical for the conventional QED, because all we need to differentiate depends on ${\not}{p}$ or $p^2={\not}{p}^{\,2}$.

In some model based on nonconventional QED, we may need to know the derivative of $p^\mu$ with respect to either ${\not}{p}$ or $p^2$. The former I cannot figure out (and I suspect it may not make sense at all), but I expect that figuring out $\frac{\partial p^\mu}{\partial p^2}$ should be enough. For the latter task I invoke Lorentz covariance for the result and write $\frac{\partial p^\mu}{\partial p^2}=a(p^2) p^\mu$. Multiplying both sides by $p_\mu$ and further differentiating by $p^2$ leads to the conclusion that $a=\frac{1}{2p^2}$ and, therefore, $\frac{\partial p^\mu}{\partial p^2}=\frac{p^\mu}{2p^2}$.

In the conventional QED I can check the consistency of this result. There, we know that the structure of $\Sigma$ is $\Sigma({\not}{p})=A(p^2){\not}{p}-B(p^2)m$. If we use this explicit structure when finding the conditions for the residue of the propagator to be $2mi$, by the meaningful approach, we would find $\left[\frac{1}{2}A+m^2\left(\frac{\partial A}{\partial p^2}-\frac{\partial B}{\partial p^2}\right) \right]_{{\not}{p}=m}=0$, and this result does not require any derivative of $p^\mu$ with respect to $p^2$. On the other hand, we already know this result should be equivalent to $\frac{\partial \Sigma}{\partial p^2}\Big|_{{\not}{p}=m}=0$ which, using the explicit form of $\Sigma$, does require differentiating $p^\mu$ with respect to $p^2$. Using $\frac{\partial p^\mu}{\partial p^2}=\frac{p^\mu}{2p^2}$ indeed gives the expected result, indicating that the result $\frac{\partial p^\mu}{\partial p^2}=\frac{p^\mu}{2p^2}$ is consistent so far.

The next step would be to generalize the discussion to consider $\Sigma$ with a more general form than the one mentioned above (including, for instance, factors with $\gamma_5$ and $\sigma^{\mu\nu}$ contracted with $p^\mu$ and suitable background fields). I am able to outline some points:
- The result $\frac{\partial p^\mu}{\partial p^2}=\frac{p^\mu}{2p^2}$ remains the same.
- The condition for the residue using the symbolic notation seemingly remains the same, $\frac{\partial \Sigma}{\partial {\not}{p}}\Big|_{{\not}{p}=m}=0$, but it may not be useful as long as we don't know $\frac{\partial p^\mu}{\partial {\not}{p}}$.
- It seems to be harder to find the analogous condition by the meaningful approach with $\frac{\partial}{\partial p^2}$ because we need to invert a more complicated $\Sigma$ structure, but nevertheless we would know how to actually apply the resulting condition, because we know how to do the derivative.

The way out, and I may be stretching too far, I suspect may be a correspondence like $\left[\frac{\partial}{\partial {\not}{p}}(\dots)\right]_{{\not}{p}=m} = 2m \left[\frac{\partial}{\partial p^2}(\dots)\right]_{{\not}{p}=m}$.
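For reference, the Lorentz-covariance argument used above for $\frac{\partial p^\mu}{\partial p^2}$ can be spelled out in one line (this adds nothing new, it is just the contraction made explicit):

$$p_\mu\,\frac{\partial p^\mu}{\partial p^2}=\frac{1}{2}\,\frac{\partial (p_\mu p^\mu)}{\partial p^2}=\frac{1}{2},
\qquad
p_\mu\,\big(a(p^2)\,p^\mu\big)=a(p^2)\,p^2
\quad\Rightarrow\quad a(p^2)=\frac{1}{2p^2}.$$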
Reclassification calibration test for censored survival data: performance and comparison to goodness-of-fit criteria Olga V. Demler1Email authorView ORCID ID profile, Nina P. Paynter1 and Nancy R. Cook1 Diagnostic and Prognostic Research20182:16 Accepted: 18 May 2018 The risk reclassification table assesses clinical performance of a biomarker in terms of movements across relevant risk categories. The Reclassification- Calibration (RC) statistic has been developed for binary outcomes, but its performance for survival data with moderate to high censoring rates has not been evaluated. We develop an RC statistic for survival data with higher censoring rates using the Greenwood-Nam-D'Agostino approach (RC-GND). We examine its performance characteristics and compare its performance and utility to the Hosmer-Lemeshow goodness-of-fit test under various assumptions about the censoring rate and the shape of the baseline hazard. The RC-GND test was robust to high (up to 50%) censoring rates and did not exceed the targeted 5% Type I error in a variety of simulated scenarios. It achieved 80% power to detect better calibration with respect to clinical categories when an important predictor with a hazard ratio of at least 1.7 to 2.2 was added to the model, while the Hosmer-Lemeshow goodness-of-fit (gof) test had power of 5% in this scenario. The RC-GND test should be used to test the improvement in calibration with respect to clinically relevant risk strata. When an important predictor is omitted, the Hosmer-Lemeshow goodness-of-fit test is usually not significant, while the RC-GND test is sensitive to such an omission. Risk reclassification Goodness-of-fit test Hosmer-Lemeshow Grønnesby-Borgan Risk prediction is viewed as an important part of clinical decision making. For cardiovascular disease and breast cancer, the development of a new risk prediction model has led to changes in practice guidelines. For example, the American College of Cardiology/American Heart Association (ACC/AHA) 10-year cardiovascular disease (CVD) risk model developed from pooled cohorts is currently used in cardiovascular medicine [1], and the Gail model of 5-year risk of developing breast cancer is an application of risk prediction models in cancer [2]. Risk prediction model development typically follows the following steps [3]. First, biomarkers for the new model are selected usually based on significance of their regression coefficients (from Wald or likelihood ratio tests). Once association is established, model performance is assessed usually in terms of its discrimination (measured by area under the receiver operating characteristic curve (AUC) and net reclassification improvement (NRI) among others) and calibration (i.e., Hosmer-Lemeshow goodness-of-fit test, calibration slope, etc.). Given that absolute risk often defines the treatment prescribed, it is very important to ensure that the model is well calibrated (or that predicted risk is close to its true value). A model can perform well based on tests of association or measures of discrimination but have poor calibration characteristics. Van Calster et al. introduced a four-level hierarchy of risk calibration [4]: mean (or calibration in the large, i.e., the average of predicted risk is the same as observed average risk), weak (or calibration intercept and slope equal zero and one respectively [5, 6]), moderate (or calibration in subgroups of risk assessed with calibration plot, Hosmer-Lemeshow test [7]), and strong (or calibration in various covariate patterns). 
When the true biological model is known (in terms of inclusion of all important predictor variables in correct functional form), then maximum likelihood estimation of model parameters will produce an asymptotically strongly calibrated model. In practice, the true model is never known and we can only hope that the given model is close to the true model, produces reasonable approximation of risk estimates, and performs well in important subgroups. When the true model is unknown, maximum likelihood estimation guarantees only calibration in the large. It does not guarantee for example a good discrimination and calibration in subgroups as noted by Zhou et al. [8]. Calibration in subgroups defined by risk strata is important for assessing the impact of a new predictor on medical decision-making process. Risk stratification is routinely used in clinical practice. Since in most clinical areas, physicians have a choice of relatively few treatment options, risk is often stratified and different treatments are prescribed for different risk strata. For example, the most recent ACC/AHA cholesterol guidelines recommend that treatment be guided by overall cardiovascular risk. Specifically, in primary prevention, for those aged 40–75 years, 10-year risk should be assessed, and if it is above 7.5%, then consideration of moderate to high intensity statin therapy is recommended along with patient discussion [1]. The American Society of Clinical Oncology recommends consideration of tamoxifen/raloxifene/exemestane therapy as an option to reduce the risk of invasive breast cancer if 5-year breast cancer risk is at least 1.66% in premenopausal women aged 35 years and above [9]. The National Osteoporosis Foundation chose a 10-year hip fracture probability of 3% as an osteoporosis intervention treatment threshold [10]. These examples show that risk stratification is an important component of the medical decision-making process. When risk stratification is of interest, a relevant question is how adding a given biomarker to a risk model affects clinical decision making [11]. Does it result in more (or less) intensive treatment assignment? A biomarker resulting in many very small adjustments to absolute risk might lead to a significant test of association but in practice may not affect ranges of clinical interest and therefore will have very small effect on clinical decision making. On the other hand, if many individuals change risk strata, this may translate into differences in monitoring or treatment. As we have mentioned earlier, maximum likelihood methods do not guarantee good calibration in the subgroups including those defined by risk strata. The risk reclassification table [12] is one of the tools that can be used to assess clinical performance in terms of movements across relevant risk strata. Besides assessing discrimination, it can be used to assess calibration within subgroups defined by these risk strata. While it was originally developed for binary outcome data, it has been used in low-censoring survival data [13]. The performance of the reclassification calibration (RC) statistic for moderate to high censoring rates has not been evaluated. Below, we provide an adaptation of the statistic to the survival setting and explore its properties. While the extent of change in risk strata is important clinically, whether these changes lead to better model calibration must be considered. A reclassification table is an informative way to display these data. 
The risk reclassification table was introduced by Cook et al. [12] and is defined in the following section. Definition of the risk reclassification table In Table 1, we use the reclassification table generated from the Women's Health Study (WHS) data to compare models with and without current smoking (left) and the uninformative biomarker (right) predicting hard CVD events. The WHS is a large-scale nationwide 10-year cohort study of women, which commenced in 1992 [14]. Data include 27,464 women with a median age at baseline of 52 years with an age range of 38 to 89 years. The median follow-up is 10.2 years up through March 2004. A total of 600 women developed hard CVD by 10 years of follow-up, and 36.6% of women were censored prior to year 10, most of the censoring occurring after year 8 (with only 1.4% censored prior to year 8). The 2013 ACC/AHA guidelines recommend that "initiation of moderate-intensity statin therapy be considered for patients with predicted 10-year 'hard' ASCVD risk of 5.0% to < 7.5%" [1]. We used these thresholds to define risk prediction categories in RC tables presented in Table 1. The left column and the top row in each table define the risk categories produced by the reduced and by the full model correspondingly. Reclassification table for informative and uninformative predictors in Women's Health Study (N = 27,464) Risk category 7.5%+ Left: rows—categories defined by the reduced model (controlling for age, total cholesterol, HDL cholesterol, systolic blood pressure and diabetes) and columns—categories defined by the reduced model + current smoking Right: rows—categories defined by the reduced model and columns—categories defined by the reduced model + uninformative predictor On the diagonal is the number of people (non-events and events) who do not change categories. Based on the left table, inclusion of current smoking resulted in transition of 21 and 346 non-events from the lowest to the middle risk category, while 23,843 and 289 events remained in the lowest risk category. Addition of the non-informative biomarker resulted in the reclassification table with very few observations in the off-diagonal cells. Risk categories are sometimes used in clinical decision making to assign treatment as is the case in cardiovascular disease and breast cancer. When choosing between two risk prediction models, we then should consider groups of patients who will be affected by the switch to a new risk prediction model and evaluate whether the proposed reclassification is beneficial. We can ask the question whether the new risk categorization is closer to the actual risk, and we can use the RC test to test this hypothesis. While reclassifications can improve the fit, movement due to chance must also be accounted for. To evaluate the quality of reclassification, a reclassification calibration statistic was introduced [12, 13, 15]. It evaluates similarity between observed and expected counts in each cell of the reclassification table. The test of the RC statistic in logistic regression and in the survival setting with low censoring rates has the following form [15]: $$ {\chi}_{\mathrm{RC}}^2={\sum}_{g=1}^G\frac{{\left[{O}_g-{n}_g{\overline{p}}_g\right]}^2}{n_g{\overline{p}}_g\left(1-{\overline{p}}_g\right)} $$ where Og is observed number of events in the gth cell, \( {\overline{p}}_g \) is the average of predicted probabilities for the model in question, ng is the number of observations in the gth cell, and G is the number of cells in the RC table. 
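As a small illustration of how the statistic in (1) is assembled (a sketch in R; the cell summaries below are invented purely for illustration), all that is needed for each non-empty cell of the reclassification table is the observed number of events, the cell size and the mean predicted risk:

O    <- c(  12,  30,  45,  80 )      # observed events per cell (hypothetical)
n    <- c( 400, 350, 300, 250 )      # observations per cell (hypothetical)
pbar <- c( 0.03, 0.08, 0.15, 0.30 )  # mean predicted probability per cell (hypothetical)
rc.chisq <- sum( (O - n * pbar)^2 / ( n * pbar * (1 - pbar) ) )   # formula (1)
rc.chisq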
The test is similar to the Hosmer-Lemeshow test, using categories defined by the cross-tabulation of risk strata from the two models. Its performance characteristics have been described [13], and the power and Type I error were found to be appropriate in this setting. In this paper, we develop a robust test of the RC statistic in the survival setting.

The reclassification table by construction compares the performance of two models; therefore, there are two ways to calculate the expected counts of events in each cell in (1): one is based on predicted probabilities from the full model, and the other is based on the reduced model's predicted probabilities. Technically, the two RC tests can result in four possible testing combinations, as illustrated in Table 2.

Table 2. The implications of RC testing: comparison of the RC test with expected counts calculated from the old model against the RC test with expected counts calculated from the new model.
- Cell A (both tests statistically significant): both the new and the old models are miscalibrated or use an incorrect functional form.
- Cell B (test based on the old model significant, test based on the new model not significant): the new model provides improved calibration across risk classifications.
- Cell C (test based on the new model significant, test based on the old model not significant): the new model is miscalibrated or uses an incorrect functional form; the old model is preferable.
- Cell D (neither test significant): reclassification does not help in choosing between the models.

Typically, when a new important predictor is added to a model, or a fuller model is used, the RC test for the old model indicates significant deviation from the observed rates, while the new model matches the observed rates more closely, as in Table 2 (cell B). More rarely, the test based on the new model is significant (Table 2, cells A and C), in which case the new model is miscalibrated or uses an incorrect functional form. If both models show significant deviations (Table 2, cell A), both are miscalibrated. If the models are not nested, it is possible that each model contains unique predictors that are important to prediction. If there is little reclassification, both RC statistics may be non-significant. In this case, either model could be used, or other criteria, such as model simplicity or cost, should be used to choose between the models. In practice, we observe mostly the situations described by cells B and D in Table 2.

Risk reclassification and calibration for survival data

For all N observations in the dataset, we assume that the following data are collected: covariates measured at baseline (x1,…,xp), event occurrence, and T = time of event or administrative censoring (i.e., all observations who did not have an event by year 10 are censored at T = 10). We assume that event times can be right censored and make the usual assumption of independent censoring. We denote by δ the event indicator (δ = 1 if the event was observed; δ = 0 if censored), and we observe T = time of event or censoring time, whichever occurs first.

In order to apply this test to studies with long follow-up and censored observations, we need to extend the test (1) to the survival setting. Cook and Ridker [15] applied the Nam-D'Agostino test [16] to the reclassification table in the survival setting with a low censoring rate and suggested estimating the observed proportion Og/ng using the non-parametric Kaplan-Meier estimator. Expected probabilities in the original formula (1) are replaced with model-based predicted probabilities (i.e., based on the Cox model) calculated at a fixed time t and averaged for each cell (denoted as \( \overline{p(t)}_g \)), as illustrated in Table 1.
In order to test improvement of classification, expected probabilities in each cell are estimated as an average of predicted probabilities from the new model:

$$ \chi^2_{\mathrm{RC}}(t) = \sum_{g=1}^{G} \frac{\left[ KM_g - \overline{p(t)}_g \right]^2}{\overline{p(t)}_g \left( 1 - \overline{p(t)}_g \right) / n_g} \;\sim_{H_0}\; \chi^2_{G-1} \qquad (2) $$

where KM_g is the observed probability of an event in group g estimated using the Kaplan-Meier non-parametric estimator.

In Fig. 1, we present results using simulated data to compare the size of this version of the RC test for survival with low and high censoring rates. In the absence of censoring, the RC test performs well at the targeted 5% significance level, but it quickly deteriorates and becomes too conservative for higher censoring rates. In this paper, we investigate ways of adapting the original RC test to higher censoring rates in the survival setting, discuss their performance under a variety of scenarios, and compare their performance to the Hosmer-Lemeshow test.

Fig. 1. Size of the original RC test (1) for low and high censoring rates. An uninformative new marker is added to a baseline model. Size is calculated as the fraction of significant RC statistics.

To adapt (2) to the survival setting with high censoring rates, we considered two options: the Grønnesby-Borgan (GB) [17, 18] and the Greenwood-Nam-D'Agostino (GND) [19] tests. These two tests extend Hosmer-Lemeshow style goodness-of-fit tests to survival models, and both perform well in a variety of settings [19]. Differences in the underlying principles behind the two tests lead to different advantages and different limits of applicability.

Greenwood-Nam-D'Agostino test

Nam and D'Agostino formulated a test which is also based on the difference between observed and expected numbers of events, but uses scaled-up versions of the observed counts [16]. Their test uses the Kaplan-Meier estimate of the number of events that would occur without censoring. The test is valid for low-censoring scenarios and has been extended to higher censoring rates in [19]. The new version of this test is called the Greenwood-Nam-D'Agostino (GND) test, because it uses the Greenwood variance formula [20] in the denominator. The GND test performs well for higher censoring rates and is defined as

$$ \chi^2_{\mathrm{GND}}(t) = \sum_{g=1}^{G} \frac{\left[ KM_g(t) - \overline{p(t)}_g \right]^2}{\mathrm{Var}\left( KM_g(t) \right)} \;\sim_{H_0}\; \chi^2_{G-1} \qquad (3) $$

Grønnesby-Borgan test

Using martingale theory, Grønnesby and Borgan developed a test of fit for Cox proportional hazards regression models [18]. It is based on the difference between the observed and expected numbers of events in deciles, but it can be applied to any grouping. Previously [19], we showed that the GND test has comparable, and sometimes superior, performance to the Grønnesby-Borgan (GB) test. In this paper, we applied the GB and the GND tests to the reclassification table, denoting them RC-GB and RC-GND. We concluded that the GND test is superior; therefore, in this paper we focus on the RC-GND test. Results related to the performance of the RC-GB test are presented in Additional file 1: Figure S1 and Additional file 2: Figure S2.

The goal of this paper is to extend the RC statistic to the survival setting with higher censoring rates, compare its performance to the Hosmer-Lemeshow goodness-of-fit test, and relate it to existing measures of performance of risk prediction models.
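As an illustration of formulas (2) and (3) above, the sketch below computes the RC-GND statistic in R using the survival package: a Kaplan-Meier event probability and its Greenwood variance are estimated within each cell of the reclassification table and compared with the average model-based predicted risk. This is a minimal sketch with hypothetical object names; it is not the authors' published code and, in particular, it omits the collapsing rule for small cells described later.

```r
library(survival)

# Minimal sketch of the RC-GND statistic (3).
# time, status = follow-up time and event indicator
# cell         = factor indexing the cells of the reclassification table
# pred         = predicted risk at time t0 from the model under test
rc_gnd <- function(time, status, cell, pred, t0) {
  cells <- split(data.frame(time, status, pred), cell)
  terms <- sapply(cells, function(d) {
    fit <- survfit(Surv(d$time, d$status) ~ 1)
    s   <- summary(fit, times = t0, extend = TRUE)
    km  <- 1 - s$surv                # observed event probability by t0
    v   <- s$std.err^2               # Greenwood variance of the KM estimate
    (km - mean(d$pred))^2 / v
  })
  chi2 <- sum(terms)
  df   <- length(terms) - 1
  list(chi2 = chi2, df = df, p.value = pchisq(chi2, df, lower.tail = FALSE))
}
```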
In the following sections, we compare the performance of the two tests in simulations, discuss differences between the reclassification table and HL-type approaches, and apply our findings to a practical example.

Simulation setup

Samples of size N = 1000, 5000, and 10,000 were generated 1000 times. Event times were generated from the Weibull distribution with the shape parameter α set to 3.0 for models with an increasing baseline hazard and 0.3 for models with a decreasing baseline hazard. The scale parameter of the Weibull distribution was proportional to the exponentiated risk score of the data-generating model, i.e., rs = ln(8)x1 + ln(1.0, 1.3, 1.7, 2.0, 3.0)x2, where x1 ~ N(0, 0.5), x2 ~ N(0, 0.5), and the coefficient of x2 takes the logarithms of the listed hazard ratios. The scale parameter of the Weibull distribution was also calibrated to a 0.1 event incidence rate. Censoring times were uniformly distributed to generate 0, 25, and 50% censoring rates. Cox proportional hazards models were used to fit the data. The RC table was calculated with cutoffs of 5 and 20% for the simulated data. Two models were compared: the full model with x1 and x2, and a reduced model with only one predictor variable, x1.

To estimate the size of the proposed tests, probabilities from the full model were used to estimate the expected proportions in (3). In this case, we would expect the RC statistic to be non-significant because the data are generated under the null. To estimate the power of the proposed tests, probabilities from the reduced model were used to estimate the expected proportions in (3). In this case, we would expect the RC statistic to be significant because the data are generated under the alternative. We evaluate power in other scenarios as well. Simulations were performed using R statistical software [21].

In a reclassification table, off-diagonal elements can be small or even zero. The bias of the Greenwood variance estimator in such small subgroups is negative and can be as high in absolute value as 25% [22]. For these reasons, the GND test deteriorates for small cell sizes. To accommodate this, we used the following collapsing strategy: all cells with fewer than five events were collapsed with the nearest cell closer to the diagonal and to the null setting. In this way, we keep all the data and avoid problems with small cells, although we bias the test toward the reference model to some degree. If collapsing is performed, the degrees of freedom of the test should be adjusted accordingly. The collapsing strategy is illustrated in Table 3.

Table 3. Collapsing strategy of the reclassification table, comparing the full model (controlling for age, total cholesterol, HDL cholesterol, systolic blood pressure, current smoking, and diabetes; columns) with the same model without total cholesterol (rows). [Cell counts and collapsing arrows not reproduced here.] Cells with fewer than five events are collapsed with the next cell closer to the diagonal. ev, number of events; ne, number of non-events.

Performance of the GND test for the RC statistic

As described in the previous section, we generated a reclassification table for the full model with two predictor variables, x1 and x2, and a reduced model with only one predictor variable, x1. The full model was used to generate the data and to estimate the expected proportions in the RC statistic formula. A detailed explanation of the simulations is presented in Additional file 3: Table S1. In Fig. 2, we show the size of the RC-GND test for decreasing (left) and increasing (right) baseline hazards.
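As a concrete illustration of the simulation setup just described, the sketch below generates one replicate with Weibull event times whose scale depends on the risk score, uniform censoring, and full and reduced Cox fits. The exact parameterization and the calibration of the scale and censoring bound are assumptions for illustration, not the authors' code.

```r
# Illustrative sketch of one simulation replicate (parameterization assumed).
set.seed(1)
n     <- 5000
alpha <- 3.0                                    # Weibull shape (increasing hazard)
x1 <- rnorm(n, 0, 0.5); x2 <- rnorm(n, 0, 0.5)
rs <- log(8) * x1 + log(2.0) * x2               # risk score of the generating model

lambda0 <- 0.05                                 # baseline scale, tuned toward ~0.1 incidence
t_event <- (-log(runif(n)) / (lambda0 * exp(rs)))^(1 / alpha)   # inverse-CDF draw
t_cens  <- runif(n, 0, 2 * quantile(t_event, 0.9))              # uniform censoring times
time    <- pmin(t_event, t_cens)
status  <- as.numeric(t_event <= t_cens)

library(survival)
full    <- coxph(Surv(time, status) ~ x1 + x2)  # full model
reduced <- coxph(Surv(time, status) ~ x1)       # reduced model (x2 omitted)
```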
Compared with the original test in Fig. 1, the RC-GND test is robust to censoring. In general, the RC-GND test does not exceed the targeted Type I error rate (we used a 5% significance level in this paper) but can be too conservative when the effect size is moderate.

Fig. 2. Size of the RC Greenwood-Nam-D'Agostino test (RC-GND) (2) for 0, 25, and 50% censoring rates, comparing full (y ~ x1 + x2) and reduced (y ~ x1) models with decreasing (left) and increasing (right) baseline hazard functions. N = 5000, p = 0.1, collapse when ev_g < 5.

To evaluate power, we considered several scenarios, including omission of an important predictor variable, omission of a squared term, and omission of an interaction term. Simulation scenarios are summarized in Additional file 3: Table S1.

Power of RC-GND when omitting an important predictor variable

Data were simulated according to the correct full model, but the reduced model was used to estimate the expected proportions in the RC statistic formula, thus mimicking the situation in which an important predictor variable is omitted. Based on Fig. 3, the RC-GND test loses power for hazard ratios less than 2.0 and achieves 80% power for HR > 2.0 and a decreasing baseline hazard.

Fig. 3. Power when an important predictor is omitted, for 0, 25, and 50% censoring rates. The RC statistic was calculated for the full (y ~ x1 + x2) and reduced (y ~ x1) models. Data were simulated according to the full model, but the reduced model was used to estimate the expected proportions in the RC statistic formula. Left panel—decreasing baseline hazard, right panel—increasing baseline hazard. N = 1000 (top row), 5000 (middle row), 10,000 (bottom row), p = 0.1, collapse when ev_g < 5.

Power of RC-GND when omitting a squared term

In Fig. 4, we generated survival times according to the model with two predictor variables, x1 and \( x_1^2 \), and compared it in the reclassification table to the model with only x1, thus omitting the squared term. RC-GND is robust to censoring for a decreasing baseline hazard (Fig. 4).

Fig. 4. Power when a squared term is omitted, for 0, 25, and 50% censoring rates. The RC statistic was calculated for the full (y ~ x1 + x1^2) and reduced (y ~ x1) models. Data were simulated according to the full model, but the reduced model was used to estimate the expected proportions in the RC statistic formula. Left panel—decreasing baseline hazard, right panel—increasing baseline hazard. N = 1000 (top row), 5000 (middle row), 10,000 (bottom row), p = 0.1, collapse when ev_g < 5.

Power of RC-GND when omitting an interaction term

Similar results were obtained when omitting an interaction term and are presented in Fig. 5.

Fig. 5. Power when an interaction term is omitted, for 0, 25, and 50% censoring rates. The RC and HL statistics were calculated for the following reduced and full models: y ~ x1 and y ~ x1 + x2; y ~ x1 and y ~ x1 + x1^2; and y ~ x1 + x2 and y ~ x1 + x2 + x1*x2. Data were simulated according to the full model, but the reduced model was used to estimate the expected proportions in the RC statistic formula. Left column—power of the RC-GND test, right column—power of the HL gof test. N = 1000 (top row), 5000 (middle row), 10,000 (bottom row), p = 0.1, collapse when ev_g < 5.

Connection between the RC statistic, the NRI, and the HL test

The RC statistic and net reclassification improvement (NRI)

The NRI is a measure of improvement in predictive model performance [23] which has gained popularity in recent years.
Its categorical version is defined as the fraction of correct movements across categories among events plus the fraction of correct movements among non-events:

$$ \mathrm{NRI}_{\mathrm{cat}} = \frac{\#\,\mathrm{cat\ up}_{\mathrm{ev}} - \#\,\mathrm{cat\ down}_{\mathrm{ev}}}{n_1} - \frac{\#\,\mathrm{cat\ up}_{\mathrm{ne}} - \#\,\mathrm{cat\ down}_{\mathrm{ne}}}{n_0} $$

where n_1 and n_0 are the numbers of events and non-events, respectively. The NRI conditions on event status, while the RC statistic conditions on the specific cells. The NRI penalizes events that move down and non-events that move up, while the RC statistic penalizes individual cells that have poor fit. The two statistics were created for different purposes and cannot be formally compared: the RC statistic assesses model calibration in defined risk strata, while the NRI is solely a measure of the discrimination ability of one model versus the other [24–27]. From this point of view, the RC statistic is closer to another measure of goodness-of-fit—the Hosmer-Lemeshow statistic.

The RC statistic and the HL gof test

The Hosmer-Lemeshow test combines data across categories of predicted probabilities (often deciles). Therefore, the HL statistic can be viewed as a test of the horizontal margin of the reclassification table, had we used clinical risk categories rather than deciles as the grouping variable (Table 4). The RC statistic tests whether the fit is good in a more informed set of categories than the Hosmer-Lemeshow statistic, because the RC categories are also determined by the risk strata of the alternative model.

Table 4. Building blocks of the HL and RC statistics. [Full layout not reproduced here.] Each cell of the reclassification table, indexed by the reduced-model and full-model risk categories, contributes a squared difference between the observed Kaplan-Meier estimate and the average predicted probability, e.g., \( \left[ KM_{11}(t) - \overline{p(t)}_{11} \right]^2 \); the components of the HL statistic are the corresponding margin terms, e.g., \( \left[ KM_{1}(t) - \overline{p(t)}_{1} \right]^2 \). The term \( \left[ KM_{31}(t) - \overline{p(t)}_{31} \right]^2 \) is one of the terms in the RC statistic formula; it corresponds to observations that moved from risk category 3 according to the reduced model to risk category 1 of the full model. The reclassification table is more informative when evaluating two models because it displays the transitions from one category to another under the different models.

In Figs. 3, 4, and 5, we calculated the power of the Hosmer-Lemeshow test when omitting an important new biomarker, a squared term, and an interaction term, in order to compare the power of the RC test based on clinical categories defined by 5% and 7.5% thresholds with the HL test based on deciles of predicted probabilities. We present results for an increasing baseline hazard only; simulations with a decreasing baseline hazard are comparable and are included in Additional file 2: Figure S2. From Fig. 5, the HL test is unable to detect an important omitted predictor variable for any considered sample size, whereas the reclassification table does have power to detect it. In the reclassification table, information about the omitted variable is present in the form of the horizontal grouping, while for the HL statistic this information is not provided. The lack of power of the HL statistic to detect an omitted predictor has been reported previously [13]. The RC-GND and HL tests have similar power to detect an omitted squared term (Fig. 4) when its hazard ratio is moderate to strong. The RC test also has more power to detect the omitted interaction term (Fig. 5). The RC-GB test has more power in the considered scenarios; the GB test is semi-parametric, which allows it to gain power but limits its application to the Cox proportional hazards model.
The RC-GND test is non-parametric and can be applied in a wider range of scenarios. When detecting an omitted predictor variable, RC-GND and RC-GB require a sufficiently large sample size (at least 5000 for an event rate of 0.1) and a large hazard ratio (2.0 and above). For smaller sample sizes, counts in the off-diagonal cells of the RC table are too small and are comparable to what could be observed under the null due to stochastic variation. Only when the signal is strong enough can it become visible over the background noise.

Application: the Women's Health Study

We used data from the Women's Health Study (WHS) to illustrate how to apply the RC test in a real data example. To calculate the 10-year risk of major CVD, we used Cox proportional hazards regression with age, total cholesterol, high-density lipoprotein (HDL) cholesterol, systolic blood pressure, current smoking, and diabetes as predictor variables in the full model. "Hard" CVD is defined as non-fatal myocardial infarction, non-fatal stroke, or death from cardiovascular causes. The analysis was performed using SAS software [28] with macros available at ncook.bwh.harvard.edu. We used RC table cutoffs of 5 and 20% in this example. In Table 5, we tested seven reclassification tables, comparing the full model to one without the predictor listed in the first column of Table 5 (the reduced model).

Table 5. Results of seven RC statistic tests, comparing the full model to one without the predictor in the first column of the table (reduced model). [Individual test statistics and p-values not reproduced here.] For each comparison, RC statistics are reported both with expected counts based on predicted probabilities (pp) from the reduced model and with expected counts based on pp from the full model, together with the p-value for the beta coefficient of the predictor. The GND test was used for testing the reclassification table. We used age, total cholesterol (TOTC), HDL cholesterol (HDLC), current smoking (CURRSMOKING), systolic blood pressure (SBP), and diabetes status (DIABETES), as well as a random null variable (RANDOM), as predictor variables in the full model.

In Table 5, the beta coefficients are significant for all six informative predictor variables. However, total cholesterol and HDL cholesterol have a non-significant effect on reclassification into clinical categories: the corresponding p-values when the reduced model probabilities were used show a good fit (χ2 = 7.48 and 8.00, one-sided p-values = 0.28 and 0.24), while the RC statistic using the new model is also not significant. In that case, we would choose the more parsimonious model without the variable in question. This finding is due to the fact that total cholesterol and HDL cholesterol are correlated and result in very few clinical reclassifications. It also illustrates our point that a significant biomarker with a small beta estimate can result in a limited number of reclassifications and therefore will have only minor impact in clinical practice. In contrast, removal of current smoking from the full model results in a highly significant RC-GND test when the predicted probabilities are taken from the reduced model (χ2 = 24.84, p-value < .001). When the predicted probabilities are taken from the model with smoking, a good fit is found (χ2 = 7.38, p-value = 0.39), confirming that the full model reclassifies observations into better-calibrated groups, using Kaplan-Meier to estimate the observed event rate in each group. In the last row, we added an uninformative biomarker to the full model. We expected the RC-GND test to be non-significant regardless of whether the full or the reduced model is used to calculate the predicted probabilities in a cell.
Indeed, both tests had non-significant p-values (0.56 and 0.54), indicating that the smaller model has a good fit and that the addition of the new biomarker does not improve it. Nor does it negatively affect it (because the full model with the uninformative biomarker is also well calibrated). However, we prefer the more parsimonious model since it performs at least as well. In practice, if the uninformative marker displayed no association with the outcome using likelihood ratio testing or other established methods, we would not proceed to examine reclassification.

Risk reclassification extends the evaluation of risk prediction models from traditional approaches informed by discrimination and calibration measures (such as the AUC and the Hosmer-Lemeshow test) toward assessments focused on the clinical relevance of a new model and its implications for present-day treatment decisions [11, 29–31]. Appropriate statistical methodology for measures of reclassification is still an active field of research, and it is crucial to develop valid statistical tests [11]. The RC statistic is an important reclassification tool which compares the performance of predictive models with respect to clinically relevant decision categories [12, 15, 32–34]. The performance of new markers may vary across subgroups [35], and it is of interest to identify subgroups for which the new markers may or may not be useful. The reclassification table helps to visualize and better understand movements between categories, see which groups of patients are influenced more by the inclusion of a given biomarker, and test the significance of improvement.

The RC test falls between the moderate and strong calibration categories in the Van Calster hierarchy of risk calibration [4]. It goes beyond testing in standard Hosmer-Lemeshow risk groups defined by a single model and looks at movements across risk groups defined by both the full and the reduced models. It can also be repeated for a variety of covariate patterns but does not exhaust all possibilities. Therefore, it does not perform a full assessment to assure "strong" calibration, but it goes beyond moderate calibration within standard HL deciles.

In this paper, we extend the RC statistic to the survival setting with higher censoring rates. We recommend using the RC-GND test to test the reclassification table with survival data. The RC-GND test is fully non-parametric and therefore can be applied in a wide variety of situations. It does not refit the baseline hazard as, for example, the Grønnesby-Borgan test does [19], so it can detect a lack of calibration in either model. In our simulations, the RC-GND test is very sensitive to the omission of an important predictor variable (Fig. 3), a quality that some other goodness-of-fit tests do not share. It achieves 80% power when an important new predictor with HR > 2.0 is omitted, though this depends on the sample size. Many authors have noted that improving the discrimination of a strong baseline model also requires a strong enough predictor variable [36]. Therefore, if an established model has relatively strong discrimination (as, for example, the Framingham ATP III model with a c-statistic of 0.83 for women [37]), then a strong predictor variable is required to improve significantly in terms of discrimination (measured by the c-statistic) or in terms of calibration.

Limitations of the RC statistic include its dependence on the existence of clinically relevant risk stratification categories. Oftentimes, however, clinically relevant cutoffs are not established.
In this situation, we recommend producing an RC table for a set of sensible risk cut points, possibly centered around the disease incidence [13]. As we have mentioned earlier, treatment guidelines in several fields do rely on established risk categories [1, 9]. In this situation, another important issue is how sharp the boundaries of clinically established risk categories are. If a patient's risk falls in proximity to a cutoff point (for example, a risk of 7.4% with a cutoff of 7.5%), then how certain are we that the treatment regimen should be that for intermediate risk rather than for high risk? It may make sense to establish "transition areas" where assignment to a risk category is moot. A prediction confidence interval for the predicted risk is available in most statistical software packages and can be included in risk calculators for a patient's estimated risk. If the prediction confidence interval covers the threshold, then the patient's risk falls in the transition area from one risk category to another. This is important information to consider when making a treatment decision. Alternatively, if there is a single risk cutoff, then additional cutoffs on either side of it could be established in a four-category classification to allow for uncertainty. A single category below the cut point could also be used for "watchful waiting" or further follow-up. Additionally, we did not consider competing events, although these could be taken into account in a similar fashion [38].

Sensitivity to small cell sizes is another disadvantage of the RC-GND test. If the sample size is too small and the hazard ratio of the new biomarker is not large enough, the RC-GND test does not have enough power to detect an improvement over the baseline model and is therefore too conservative.

We compared the Hosmer-Lemeshow style test to the RC test. The Hosmer-Lemeshow test can be viewed as a test of the margin of the reclassification table. An important limitation of the HL test is its inability to detect an omitted biomarker. Our Fig. 3 illustrates that non-significance of the HL test should not be viewed as evidence that the model contains all important biomarkers. If a decision must be made about inclusion of a biomarker in a risk-prediction model, the HL statistic will always show a good fit if the categories are defined by a model without that biomarker. In other words, if a model has a good fit based on the HL test, this does not guarantee that the model contains all important variables. In the reclassification table, the biomarker is used to define risk categories, so the RC statistic is sensitive to the omission of an important biomarker. In general, however, these measures focus on calibration, and more direct model comparisons, such as likelihood ratio or related measures, can be used to assess whether a new biomarker is important.

The reclassification table is a step toward a better understanding of the clinical utility of one model versus another. It can be used to visualize movements of patients across categories and examine whether a new model has an impact on clinical treatment assignment. The associated RC statistics can assess the calibration of both models and indicate areas where fit may be lacking. Unlike the GB test, the GND test does not rely on the assumption of proportional hazards [19]; therefore, we recommend the GND test for inference in a variety of settings, particularly when the Cox model is not in use.
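As an illustration of the "transition area" idea discussed above, the sketch below flags patients whose approximate confidence interval for the predicted 10-year risk covers the treatment threshold. It is a rough sketch under stated assumptions: it uses the confidence limits returned by survfit() for a fitted Cox model as a stand-in for a prediction interval, and the 7.5% threshold simply follows the guideline example cited earlier; it is not the authors' recommended implementation.

```r
library(survival)

# Flag patients whose risk CI covers the threshold ("transition area").
# cox_fit  = fitted coxph model; newdata = covariates for the patients of interest
flag_transition <- function(cox_fit, newdata, t0 = 10, threshold = 0.075) {
  sf <- survfit(cox_fit, newdata = newdata)
  s  <- summary(sf, times = t0, extend = TRUE)
  risk  <- 1 - as.vector(s$surv)       # predicted risk by t0
  lower <- 1 - as.vector(s$upper)      # lower bound of risk (from survival upper CI)
  upper <- 1 - as.vector(s$lower)      # upper bound of risk (from survival lower CI)
  data.frame(risk = risk,
             transition = lower <= threshold & threshold <= upper)
}
```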
Abbreviations

AUC: Area under the Receiver Operating Characteristic Curve; CVD: Cardiovascular disease; GB test: Grønnesby-Borgan test; GND test: Greenwood-Nam-D'Agostino test; gof: Goodness-of-fit; HL test: Hosmer-Lemeshow goodness-of-fit test; NRI: Net Reclassification Improvement; RC statistic: Reclassification Calibration statistic; WHS: Women's Health Study

This work was supported by the National Institutes of Health grant R01HL113080 received by NRC and by K01 award K01HL135342 received by OVD from the National Heart, Lung and Blood Institute (https://www.nhlbi.nih.gov). The Women's Health Study data were supported by grant HL043851 from the National Heart, Lung and Blood Institute (https://www.nhlbi.nih.gov) and by grant CA047988 from the National Cancer Institute.

R and SAS code to run the GB and GND tests for the reclassification tables, as well as for the HL setting, is available at ncook.bwh.harvard.edu. As an illustration, the methods presented in this paper are applied to Women's Health Study data. The WHS data supporting the findings are available from the corresponding author on reasonable request and upon fulfilling institutional requirements.

OVD, NPP, and NRC conceived and designed the study. OVD and NRC contributed to the simulation study, adjustments to the methodology, and data analysis. OVD wrote the original draft of the paper. OVD, NPP, and NRC reviewed and edited the manuscript. All authors read and approved the final manuscript.

For the analysis of Women's Health Study data, Institutional Review Board approval was acquired prior to analysis. The IRB protocol approval number is 2004P001661 from the Partners Human Research Committee of Brigham and Women's Hospital, Boston, MA.

Additional file 1: Figure S1. Size of the RC-GND test (3) and RC-GB (score test). Comparing full (y ~ x1 + x2) and reduced (y ~ x1) models with decreasing (top row) and increasing (bottom row) baseline hazard functions. N = 5000, p = 0.1, collapse when ev_g < 5. (PDF 59 kb)

Additional file 2: Figure S2. Power of RC-GND and RC-GB for a decreasing baseline hazard. A summary of the simulations is presented in Supplementary Table S1. An important predictor variable was omitted (top row), a squared term was omitted (middle row), and an interaction term was omitted (bottom row). Event times follow a Weibull distribution with a decreasing baseline hazard as discussed in the "Simulation setup" section and Supplementary Table S1, for a sample size of 5000 and an event rate of 0.1; cells were collapsed when the number of events in a cell was less than five. (PDF 48 kb)

Additional file 3: Table S1. Outline of simulations used to generate Fig. 1. (DOCX 24 kb)

Division of Preventive Medicine, Brigham and Women's Hospital, 900 Commonwealth Ave, Brookline, MA 02115, USA

References

1. Stone NJ, Robinson JG, Lichtenstein AH, Merz CNB, Blum CB, Eckel RH, Goldberg AC, Gordon D, Levy D, Lloyd-Jones DM. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2014;63(25_PA):2889–934.
2. Gail MH, Brinton LA, Byar DP, Corle DK, Green SB, Schairer C, Mulvihill JJ. Projecting individualized probabilities of developing breast cancer for white females who are being examined annually. J Natl Cancer Inst. 1989;81(24):1879–86.
3. Steyerberg EW. Clinical prediction models: a practical approach to development, validation, and updating. Springer Science & Business Media; 2008.
4. Van Calster B, Nieboer D, Vergouwe Y, De Cock B, Pencina MJ, Steyerberg EW. A calibration hierarchy for risk models was defined: from utopia to empirical data. J Clin Epidemiol. 2016.
5. Harrell FE, Lee KL, Mark DB. Tutorial in biostatistics: multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med. 1996;15(4):361–87.
6. Crowson CS, Atkinson EJ, Therneau TM. Assessing calibration of prognostic risk scores. Stat Methods Med Res. 2013:0962280213497434.
7. Hosmer DW, Lemeshow S. Goodness of fit tests for the multiple logistic regression model. Commun Stat Theory Methods. 1980;9(10):1043–69.
8. Zhou Q, Zheng Y, Cai T. Subgroup specific incremental value of new markers for risk prediction. In: Risk Assessment and Evaluation of Predictions. Springer; 2013. p. 253–82.
9. Visvanathan K, Hurley P, Bantug E, Brown P, Col NF, Cuzick J, Davidson NE, DeCensi A, Fabian C, Ford L. Use of pharmacologic interventions for breast cancer risk reduction: American Society of Clinical Oncology clinical practice guideline. J Clin Oncol. 2013;2049:3122.
10. Tosteson AN, Melton LJ III, Dawson-Hughes B, Baim S, Favus MJ, Khosla S, Lindsay RL. Cost-effective osteoporosis treatment thresholds: the United States perspective. Osteoporos Int. 2008;19(4):437–47.
11. Tzoulaki I, Liberopoulos G, Ioannidis JP. Use of reclassification for assessment of improved prediction: an empirical evaluation. Int J Epidemiol. 2011;40(4):1094–105.
12. Cook NR, Buring JE, Ridker PM. The effect of including C-reactive protein in cardiovascular risk prediction models for women. Ann Intern Med. 2006;145(1):21–9.
13. Cook NR, Paynter NP. Performance of reclassification statistics in comparing risk prediction models. Biom J. 2011;53(2):237–58.
14. Lee I-M, Cook NR, Gaziano JM, Gordon D, Ridker PM, Manson JE, Hennekens CH, Buring JE. Vitamin E in the primary prevention of cardiovascular disease and cancer: the Women's Health Study: a randomized controlled trial. JAMA. 2005;294(1):56–65.
15. Cook NR, Ridker PM. Advances in measuring the effect of individual predictors of cardiovascular risk: the role of reclassification measures. Ann Intern Med. 2009;150(11):795–802.
16. D'Agostino RB, Nam B-H. Evaluation of the performance of survival analysis models: discrimination and calibration measures. Handbook of Statistics. 2004;23:1–25.
17. May S, Hosmer DW. A simplified method of calculating an overall goodness-of-fit test for the Cox proportional hazards model. Lifetime Data Anal. 1998;4(2):109–20.
18. Grønnesby JK, Borgan Ø. A method for checking regression models in survival analysis based on the risk score. Lifetime Data Anal. 1996;2(4):315–28.
19. Demler OV, Paynter NP, Cook NR. Tests of calibration and goodness-of-fit in the survival setting. Stat Med. 2015;34(10):1659–80.
20. Kalbfleisch JD, Prentice RL. The statistical analysis of failure time data, vol. 360. Hoboken: Wiley; 2011.
21. R Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2013. http://www.R-project.org/.
22. Klein JP. Small sample moments of some estimators of the variance of the Kaplan-Meier and Nelson-Aalen estimators. Scand J Stat. 1991;1:333–40.
23. Pencina MJ, D'Agostino RB, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157–72.
24. Hilden J, Gerds TA. A note on the evaluation of novel biomarkers: do not rely on integrated discrimination improvement and net reclassification index. Stat Med. 2014;33(19):3405–14.
25. Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW. Assessing the performance of prediction models: a framework for some traditional and novel measures. Epidemiology. 2010;21(1):128.
26. Leening MJ, Steyerberg EW, Van Calster B, D'Agostino RB, Pencina MJ. Net reclassification improvement and integrated discrimination improvement require calibrated models: relevance from a marker and model perspective. Stat Med. 2014;33(19):3415–8.
27. Shao F, Li J, Fine J, Wong WK, Pencina M. Inference for reclassification statistics under nested and non-nested models for biomarker evaluation. Biomarkers. 2015;20(4):240–52.
28. SAS/STAT software, Version 9.4 of the SAS System for Windows. Copyright © 2002–2012 SAS Institute Inc., Cary, NC, USA.
29. Siontis GC, Tzoulaki I, Siontis KC, Ioannidis JP. Comparisons of established risk prediction models for cardiovascular disease: systematic review. 2012.
30. Ioannidis JP, Tzoulaki I. What makes a good predictor?: the evidence applied to coronary artery calcium score. JAMA. 2010;303(16):1646–7.
31. Ray P, Le Manach Y, Riou B, Houle TT. Statistical evaluation of a biomarker. J Am Soc Anesthesiol. 2010;112(4):1023–40.
32. Tice JA, Cummings SR, Smith-Bindman R, Ichikawa L, Barlow WE, Kerlikowske K. Using clinical factors and mammographic breast density to estimate breast cancer risk: development and validation of a new predictive model. Ann Intern Med. 2008;148(5):337–47.
33. Tice JA, Miglioretti DL, Li C-S, Vachon CM, Gard CC, Kerlikowske K. Breast density and benign breast disease: risk assessment to identify women at high risk of breast cancer. J Clin Oncol. 2015;2060:8869.
34. Ko JJ, Xie W, Kroeger N, Lee J-L, Rini BI, Knox JJ, Bjarnason GA, Srinivas S, Pal SK, Yuasa T. The international metastatic renal cell carcinoma database consortium model as a prognostic tool in patients with metastatic renal cell carcinoma previously treated with first-line targeted therapy: a population-based study. Lancet Oncol. 2015;16(3):293–300.
35. Zhou QM, Zheng Y, Cai T. Subgroup specific incremental value of new markers for risk prediction. Lifetime Data Anal. 2013;19(2):142–69.
36. Pepe MS, Janes H, Longton G, Leisenring W, Newcomb P. Limitations of the odds ratio in gauging the performance of a diagnostic, prognostic, or screening marker. Am J Epidemiol. 2004;159(9):882–90.
37. D'Agostino Sr RB, Grundy S, Sullivan LM, Wilson P, Group CRP. Validation of the Framingham coronary heart disease prediction scores: results of a multiple ethnic groups investigation. JAMA. 2001;286(2):180–7.
38. Wolbers M, Koller MT, Witteman JC, Steyerberg EW. Prognostic models with competing risks: methods and application to coronary risk prediction. Epidemiology. 2009;20(4):555–61.
Exponential convergence in the Wasserstein metric $ W_1 $ for one dimensional diffusions

Lingyan Cheng (School of Science, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China), Ruinan Li (School of Statistics and Information, Shanghai University of International Business and Economics, Shanghai 201620, China), and Liming Wu (Laboratoire de Math. CNRS-UMR 6620, Université Clermont-Auvergne, Clermont Ferrand, 63177 Aubière, France). Corresponding author: Ruinan Li.

Discrete & Continuous Dynamical Systems, September 2020, 40(9): 5131–5148. doi: 10.3934/dcds.2020222. Received April 2019, revised March 2020, published June 2020. Fund Project: the first author is supported by "the Fundamental Research Funds for the Central Universities" grant 30920021145.

Abstract. In this paper, we find some general and efficient sufficient conditions for the exponential convergence $ W_{1,d}(P_t(x,\cdot), P_t(y,\cdot)) \le K e^{-\delta t} d(x,y) $ for the semigroup $ (P_t) $ of a one-dimensional diffusion. Moreover, some sharp estimates of the involved constants $ K \ge 1, \delta > 0 $ are provided. These general results are illustrated by a series of examples.

Keywords: Wasserstein metric, diffusion processes, exponential convergence, Poisson operator. Mathematics Subject Classification: Primary: 60B10, 60H10, 60J60.

Citation: Lingyan Cheng, Ruinan Li, Liming Wu. Exponential convergence in the Wasserstein metric $ W_1 $ for one dimensional diffusions. Discrete & Continuous Dynamical Systems, 2020, 40(9): 5131–5148. doi: 10.3934/dcds.2020222
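For readers unfamiliar with the notation, the $ W_{1,d} $ distance used in the abstract is, under the standard Kantorovich-Rubinstein convention, given by the equivalent primal and dual forms below; this is background material and not part of the paper itself.

$$ W_{1,d}(\mu,\nu) \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int d(x,y)\, \pi(\mathrm{d}x,\mathrm{d}y) \;=\; \sup_{\operatorname{Lip}(f) \le 1} \left( \int f \,\mathrm{d}\mu - \int f \,\mathrm{d}\nu \right), $$

where $ \Pi(\mu,\nu) $ denotes the set of couplings of $ \mu $ and $ \nu $ and the supremum runs over functions $ f $ that are 1-Lipschitz with respect to $ d $.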
A network model for angiogenesis in ovarian cancer

Kimberly Glass, John Quackenbush, Dimitrios Spentzos, Benjamin Haibe-Kains & Guo-Cheng Yuan

We recently identified two robust ovarian cancer subtypes, defined by the expression of genes involved in angiogenesis, with significant differences in clinical outcome. To identify potential regulatory mechanisms that distinguish the subtypes we applied PANDA, a method that uses an integrative approach to model information flow in gene regulatory networks. We find distinct differences between networks that are active in the angiogenic and non-angiogenic subtypes, largely defined by a set of key transcription factors that, although previously reported to play a role in angiogenesis, are not strongly differentially-expressed between the subtypes. Our network analysis indicates that these factors are involved in the activation (or repression) of different genes in the two subtypes, resulting in differential expression of their network targets. Mechanisms mediating differences between subtypes include a previously unrecognized pro-angiogenic role for increased genome-wide DNA methylation and complex patterns of combinatorial regulation. The models we develop require a shift in our interpretation of the driving factors in biological networks away from the genes themselves and toward their interactions. The observed regulatory changes between subtypes suggest therapeutic interventions that may help in the treatment of ovarian cancer.

Ovarian cancer is the fifth leading cause of cancer death for women in the U.S. and the seventh most fatal worldwide [1]. Although ovarian cancer is notable for its initial sensitivity to platinum-based therapies, the vast majority of women eventually recur and succumb to increasingly platinum-resistant disease. Despite significant investment, improvements in patient prognosis have been slow and usually in small increments. The disease generally presents at an advanced stage (III/IV) and the five-year survival rate of advanced disease is less than 30%, with median survival only slightly longer than two years [2]. Furthermore, ovarian cancer patients often undergo similar treatment regimens, mainly because the highly suspected multiple subtypes have not yet been well characterized in terms of their biological significance.

In previous work, we analyzed gene expression data from 129 high-grade serous ovarian cancer samples and identified a poor-prognosis subtype characterized by the expression of angiogenic genes [3]. This subtype and the associated differences in patient survival were validated using gene expression data from a collection of 1606 ovarian cancer samples assembled from ten independent published studies. Multiple other subtypes have been proposed [4,5], but the importance of this subtyping, relative to other definitions, is that angiogenesis represents a process that is of potential clinical relevance, and it is the only subtyping model which has been shown to be robust and prognostic in multiple independent datasets.

Angiogenesis is one of the well-characterized hallmarks associated with cancer progression [6], playing an important role in maintaining tumor growth [7]. Angiogenesis is facilitated by interactions between cells and the extra-cellular matrix [8,9] and is associated with increased expression among a set of particular genes, including, but not limited to, matrix metalloproteinases (MMPs) [10] and VEGF [11-13].
Angiogenesis inhibition is being intensely studied as a possible therapeutic advance in ovarian cancer, but the effects on survival are still modest, suggesting that our understanding of the molecular underpinnings and biological implications of angiogenesis in this disease is still limited [2,14]. A number of clinical trials have tested drugs targeting angiogenic factors and shown that these drugs have anti-cancer activity in a subset of ovarian cancer patients [15-18]; however, better understanding of the complex mechanisms driving response to these therapies is crucial to improving their efficacy and patient outcomes [17,19].

Although we have long studied ovarian cancer from the perspective of single genes and their properties, it has become clear that more integrative, systems-level analyses are necessary to better understand how the disease and its subtypes develop and progress, and how it may respond to different therapeutic interventions. The characterization of biological processes can distinguish disease states in cases where single gene biomarkers cannot [20]. The importance of applying network approaches in particular to better understand disease has previously been highlighted [21-24]. Simultaneously, it has become evident that integrative approaches that incorporate multiple sources of data to model biological systems often yield the most informative results [25-27]. Along these lines, we recently described an integrative network inference method, PANDA (Passing Attributes between Networks for Data Assimilation), that models information flow in regulatory networks by searching for agreement among various data-types, using information from each to iteratively refine predictions in the others [28].

PANDA models network interactions as communication between "transmitters" and "receivers". In the context of PANDA's regulatory networks, the transmitters are transcription factors and the receivers are their downstream target genes. This approach recognizes that for communication to occur, both the transmitter and the receiver have an essential role – although a transcription factor is responsible for regulating a target gene, the gene must also be available to be regulated. By constructing a "prior" regulatory network consisting of potential routes for communication (for example, by mapping transcription factor motifs to a reference genome) and integrating with other sources of regulatory information (such as protein interaction and gene expression data), one can estimate the responsibility and availability of each potential interaction, predict where communication is succeeding and failing, and deduce condition-specific network structures.

Here we describe the application of PANDA to reconstructing subtype-specific regulatory networks in ovarian cancer. We identify differences in network topologies between the angiogenic and non-angiogenic subtypes, and use this information to suggest potential therapies that may be efficacious in treating patients with angiogenic-subtype ovarian tumors.

Building angiogenic and non-angiogenic specific regulatory network modules

To begin, we downloaded mRNA expression data for 510 ovarian samples profiled by the Cancer Genome Atlas (TCGA) (https://tcga-data.nci.nih.gov/tcga/tcgaHome2.jsp, [5]), normalized this data using fRMA [29], and mapped probes to Ensembl identifiers using the biomaRt Bioconductor package version 2.8.1 [30].
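As a rough illustration of this preprocessing step, the sketch below shows how fRMA normalization and biomaRt probe-to-Ensembl mapping are typically wired together in R. The array attribute name and the file handling are assumptions for illustration, not details taken from the original analysis.

```r
# Illustrative preprocessing sketch (platform and attribute names are assumptions).
library(affy)      # ReadAffy
library(frma)      # frozen RMA normalization
library(biomaRt)   # probe-to-Ensembl mapping

batch <- ReadAffy()                 # CEL files for the TCGA ovarian samples
eset  <- frma(batch)                # fRMA-normalized expression set

mart <- useMart("ensembl", dataset = "hsapiens_gene_ensembl")
map  <- getBM(attributes = c("affy_hg_u133a", "ensembl_gene_id"),
              filters    = "affy_hg_u133a",
              values     = featureNames(eset),
              mart       = mart)
```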
As reported in [3], we classified samples as belonging to either the angiogenic or non-angiogenic subtype; 188 were classified as part of the angiogenic subtype, and 322 were classified as non-angiogenic. We constructed separate genome-wide regulatory network models for the two subtypes. We began by mapping 111 TFs with known binding motifs to the promoters, here defined as [−750,+250] base-pairs around the transcription start site, of the 12290 genes with expression data in TCGA samples. Because transcriptional regulation involves assembly of protein complexes and allows for combinatorial regulatory processes, we collected information regarding physical protein interaction data between human transcription factors estimated using a mouse-2-hybrid assay [31]. We used PANDA to integrate information from transcription factor binding motifs, protein-protein interaction data, and subtype-specific gene expression, constructing directed transcriptional regulatory networks for the angiogenic and non-angiogenic ovarian cancer subtypes (Figure 1A).

Figure 1. A summary of PANDA gene regulatory network reconstruction and the identification of subtype-specific subnetworks. (A) We combine transcription factor motif and physical protein-protein interaction (PPI) data with gene expression data from each subtype to build individual network models. (B) We compare the weights of edges predicted by PANDA for each of the network models. Each point in the graph represents an edge connecting a TF to a target gene. We also define subnetworks by selecting high-probability edges specific to either the angiogenic (red) or non-angiogenic (blue) model. The number of edges and genes identified as part of these subnetworks is noted. (C) Although the subnetworks contain unique sets of edges, genes targeted by TFs in these interactions are not necessarily unique, suggesting distinct regulatory processes.

For each edge that connects a transcription factor to its target gene, PANDA assigns a weight, in z-score units, that reflects the confidence level of a potential inferred regulatory relationship. Not surprisingly, we found the edge weights for the angiogenic and non-angiogenic regulatory networks to be highly correlated (Figure 1B), representing common regulatory mechanisms and processes active in both subtypes. However, we also found regulatory edges that were more strongly supported by either the angiogenic or the non-angiogenic subtype. We identified 12631 edges that we assigned to an angiogenic subnetwork, shown in red, and 15735 edges for the non-angiogenic subnetwork, shown in blue. The edges in the angiogenic subnetwork target 4081 genes while the non-angiogenic subnetwork edges target 4419; of these, 1828 genes are targeted in both subnetworks (Figure 1C), although by different upstream transcription factors. This may reflect the fact that in addition to different pathways being activated in each of the subtypes, there may also be a complex "re-wiring" of the networks around commonly targeted genes.

Network analysis reveals biological mechanisms associated with regulatory differences

We compared the corresponding subnetworks and identified a subset of transcription factors associated with the strongest regulatory changes. To do this, we identified the genes targeted for regulation by a transcription factor in each of the two subnetworks and determined both an "edge enrichment" score as well as a p-value significance for the difference in the number of target genes.
Specifically, we define "edge enrichment" for each transcription factor as the out-degree of that transcription factor (number of edges pointing out to a target gene) in the angiogenic subnetwork divided by the out-degree for the same transcription factor in the non-angiogenic subnetwork (multiplied by a normalization factor equal to the total number of edges in the angiogenic subnetwork divided by the total number of edges in the non-angiogenic subnetwork); the statistical significance (quantified as a p-value) of the rewiring is determined by using the hypergeometric distribution model to evaluate the overlap between edges from a transcription factor and edges specific to a particular subnetwork. On average, a given transcription factor will be associated with around 114 edges in the subtype-specific subnetworks. However some transcription factors are associated with a relatively small number (less than twenty) of edges. These transcription factors were excluded in further analysis to enhance statistical robustness. We identified ten "key" transcription factors with an edge enrichment greater than 1.5 (or less than 1/1.5) and with p < 10−3 (Figure 2A). The identified transcription factors all have established associations with angiogenesis or survival ([32-43], Figure 2B). For example NFKB1 is important for chromatin remodeling during angiogenesis [32], PRRX1 deletion causes vascular anomalies [38], and MZF1 can repress MMP2 [41], a key prognostic factor in ovarian cancer [44]. A summary of some of the key regulatory events distinguishing the two subnetworks. (A) Transcriptional factors that have significantly more targets in one of the subnetworks compared to the other (edge enrichment > 1.5, p < 1e-3). The differential expression and methylation of each transcription factor as well as the differential expression/methylation of its target genes (in the corresponding subnetwork) is noted. Color corresponds to direction of differential expression/methylation (red: higher in angiogenic, blue: higher in non-angiogenic). (B) Genetic functions associated with these key transcription factors describing their potential role in angiogenesis and ovarian cancer. (C) A distribution of the t-statistic for differential methylation across all the genes. The shift of the distribution to the right indicates an overall increase in methylation in the angiogenic compared to the non-angiogenic samples. We tested if these ten transcription factors would also be identified in a simple differential expression analysis using a t-test. Only three out of ten TFs are differentially expressed at the p < 0.01 cutoff: NFKB1 (more highly expressed in the angiogenic subtype, p = 7.8 × 10−5, FDR = 4.3 × 10−4), PRRX2 (more highly expressed in the angiogenic subtype, p = 0.005, FDR = 0.015), and MZF1 (more highly expressed in the non-angiogenic subtype, p = 4.1 × 10−7, FDR = 4.2 × 10−6). Thus, PANDA identified transcription factors that are not strongly differentially expressed between the subtypes yet are known to participate in angiogenic processes. We also tested targets of each transcription factor for differential expression (see Methods) and found significant differences in target expression for six of the seven remaining TFs. For example, ARID3A is not differentially expressed between angiogenic and non-angiogenic subtypes, but its targets have significantly (p = 2.6 × 10−22) increased expression in the angiogenic subtype. 
This suggests that changes in transcription factor activity may not be detectable from increases or decreases in the factors' own expression levels, but that the expression of their targets can provide information on how they influence phenotype. There are many factors that could contribute to expression abnormalities in cancer, such as differences in mutations, copy-number variation and epigenetic states. We sought to determine which, if any, of these might be contributing to the differential expression of the genes targeted by each of our ten key transcription factors. First, we investigated whether copy-number variation might explain the overall change in gene expression (see Methods). Although we find some nominally significant changes for the targets of PRRX2 (p = 0.0029) and ARID3A (p = 0.0257), these genes actually have less overall amplification in the angiogenic subtype compared to the non-angiogenic subtype (t = −2.98 and t = −2.23, respectively). This is despite the fact that their mRNA expression levels are higher in the angiogenic compared to the non-angiogenic subtype. Thus copy-number variation does not appear to be the primary factor driving changes in expression of these target genes. Epigenetic factors provide an alternative mechanism for differential targeting by these transcription factors. To explore this possibility we mapped DNA methylation data from TCGA to the samples and genes used in our network reconstruction. We used a t-test to quantify any potential change in DNA methylation for each gene between the angiogenic and non-angiogenic subtypes and show the distribution of these values in Figure 2C. Overall we found DNA methylation levels to be higher in angiogenic tumor samples (the mean value of the t-statistic across all genes is 1.52). We next determined the differential methylation of the ten transcription factors and their targets in each of the subnetworks (Figure 2A). Compared to the rest of the genome, genes targeted by four of the ten transcription factors, ARID3A, SOX5, NKX2-5 and PRRX2, are associated with significantly lower methylation in angiogenic samples. It should be noted that this lower level of methylation does not indicate hypomethylation. In fact, the average t-statistic value for the targets of these transcription factors is greater than zero (ARID3A, t = 1.17; SOX5, t = 1.31; NKX2-5, t = 0.70; PRRX2, t = 1.24), indicating that their targets, on average, have higher levels of methylation in the angiogenic compared to the non-angiogenic subtype; however, that increase in methylation is comparatively less than that experienced by genes not targeted by these regulators. We note that the TCGA methylation data was not used by PANDA to construct the networks, yet is highly concordant with the predicted patterns of gene regulation. Although many of the transcription factors that alter targeting between the angiogenic and non-angiogenic subnetworks are not significantly differentially expressed between the subtypes, their target genes often are. At the same time, these target genes are also differentially methylated between the subtypes. Overall, this analysis provides independent support for the network model.

Both transcriptional activation and repression are used to control angiogenic pathways

Transcription factors can either activate or repress gene expression. The target gene expression analysis in Figure 2A provides a preliminary indication about potential regulatory roles for the identified transcription factors.
For example, although MZF1 and BRCA1 exhibit an edge-enrichment in the non-angiogenic subnetwork and are themselves also more highly expressed in the non-angiogenic samples, their targets show the opposite trend, with significantly higher expression in the angiogenic samples (p = 1.0 × 10−56 and 4.3 × 10−41, respectively). There are two scenarios consistent with these observations: (1) loss of control by these transcription factors results in the increased expression of their former targets, and (2) increased control by these transcription factors results in the decreased expression of their target genes. Interestingly, BRCA1 is known to negatively regulate IGF1 expression in breast cancer cells [43], which could inhibit angiogenesis, as multiple studies have shown that increased levels of IGF1 in cancer cells lead to an increase in cell proliferation [45-47]. Similarly, MZF1 is a repressor of MMP2 [41] and is known to inhibit hematopoietic lineage differentiation in embryonic stem cells [48]. Combined with the analysis presented in Figure 2A, this suggests that MZF1 and BRCA1 act as transcriptional repressors in the non-angiogenic samples. Motivated by these observations, we classified the edges in our two subnetworks as either "activating" or "repressing" based on whether changes in the target gene's expression are correlated or anti-correlated with subnetwork assignment. We then assigned each target gene to one of six non-overlapping classes (see Figure 3A): "A+": genes targeted only in the angiogenic subnetwork that are more highly expressed in the angiogenic subtype; "A-": genes targeted only in the angiogenic subnetwork that are more highly expressed in the non-angiogenic subtype; "A+;N-": genes targeted in both subnetworks and more highly expressed in the angiogenic subtype; "N+;A-": genes targeted in both subnetworks and more highly expressed in the non-angiogenic subtype; "N-": genes targeted only in the non-angiogenic subnetwork that are more highly expressed in the angiogenic subtype; and "N+": genes targeted only in the non-angiogenic subnetwork that are more highly expressed in the non-angiogenic subtype.

Figure 3. Characteristics of six classes of differentially-targeted genes. (A) The classification of genes based on whether evidence suggests that the regulatory interactions targeting those genes are activating, repressive, or both. (B) Enriched Biological Process Gene Ontology terms (FDR < 0.1) associated with at least one of these six classes; we only included categories with at least 100 gene annotations. FDR significance is shown as a color, with darker colors representing more significant enrichment. (C) Potential angiogenesis biomarkers that belong to each of the six classes of genes. Biomarkers differentially expressed between the subtypes at an FDR < 0.1 are noted in red or blue based on whether they are more highly expressed in the angiogenic or non-angiogenic subtypes, respectively.

We used DAVID [49] to test for functional enrichment in these six classes of target genes, with the 12290 network genes taken as a background. The FDR p-values for GO "Biological Process" categories with more than one hundred members that have an FDR enrichment of less than 0.1 in at least one of our six classes of genes are illustrated as a heat map in Figure 3B. The angiogenic-activated class ("A+") has the greatest number of significantly enriched functional categories.
Many of these are associated with immune response; processes associated with angiogenesis are also included, for example "chemotaxis," "hematopoiesis," "positive regulation of cell communication" and "metal ion homeostasis." Some processes found to be enriched for genes repressed in the non-angiogenic subnetwork ("N-" genes), such as "cell adhesion" and "extracellular structure organization," also play a role in angiogenesis. In addition, genes activated in the angiogenic subnetwork but repressed in the non-angiogenic subnetwork ("A+;N-") include those involved in "blood vessel morphogenesis." This suggests angiogenesis involves not only the activation of certain genes in the angiogenic subtype, but also the loss of repressive regulatory interactions that are present in the non-angiogenic subtype. In contrast, genes repressed in the angiogenic subnetwork ("A-" genes) are associated with "chromatin organization," consistent with the observed role that epigenetics plays in distinguishing the subtypes (see Figure 2C). Genes activated in the non-angiogenic subnetwork but repressed in the angiogenic subnetwork ("N+;A-" genes) are involved in functions such as "transcription" and "DNA metabolic process". We also investigated whether previously identified potential biomarkers for angiogenesis were targeted in our networks, and, if so, which "class" of genes those biomarkers belonged to. In particular, we investigated thirty-five biomarkers, described in [50,51], and found twenty-two targeted in our defined subnetworks (Figure 3C). The majority (eighteen) of these biomarkers are targeted in either the "A+", "A+;N-" or "N-" class, consistent with higher expression in the angiogenic subtype. Interestingly, many of these biomarkers are targeted in the non-angiogenic subnetwork ("A+;N-" or "N-" classes). One possible interpretation of these results is that repressive regulatory features play a role in inhibiting angiogenic progression, in addition to transcriptional activation of these biomarkers in angiogenic tumors. Curiously, three of the four biomarkers not included in the pro-angiogenic network classes were identified in only a single study [52] that included twenty patients with inflammatory breast cancer [50]. We note that while many of the network-targeted biomarkers are also significantly differentially expressed between the subtypes (FDR < 0.1 based on an unpaired t-test), several are not, including VEGFA (FDR = 0.69), TP53 (FDR = 0.12), LCN2 (FDR = 0.68), KIT (FDR = 0.73) and SLC2A1 (FDR = 0.10). The identification of VEGFA and other biomarkers in our network model despite a clear lack of differential expression may indicate that our network approach is able to identify important cellular regulatory alterations even in the absence of distinct changes in downstream target gene expression.

Combinatorial control plays a critical role in potentiating angiogenesis

Regulatory information that pertains to the core of our network can be depicted using a ring diagram representing the union of the two subnetworks (Figure 4A). In this visualization, our ten key regulators form the inner ring, while their targets, colored based on whether they exhibit higher average expression in the angiogenic (red) or non-angiogenic (blue) samples, form the outer ring. Viewing the two subnetworks, it is clear that there is a high degree of combinatorial gene regulation.
To quantify this, we applied the hypergeometric distribution model and, using the union of the genes targeted in both the angiogenic and non-angiogenic networks as a background, tested for over-representation of genes co-targeted by specific pairs of transcription factors in the various network classifications ("A+," "A-," "N+" and "N-"). Here, we focus on the three most significant pairs that include at least one of the ten identified key transcription factors. Information for all pairs can be found in the Supplemental Material (Additional file 1: Dataset S1). Note that the genes in the "A-" and "N+" classes had no combinatorial pairs significantly enriched (using a p = 10−3 cutoff).

Figure 4. Characterizing combinatorial regulation in the subnetworks. (A) An illustration of the identified key active subnetworks. Identified key transcription factors form the inner ring and their target genes the outer ring. Target genes are colored based on whether they are more highly expressed in the angiogenic (red) or non-angiogenic subtype (blue) and are organized based on their classification. Angiogenic subnetwork edges (red) and non-angiogenic subnetwork edges (blue) extend between these rings, from the regulating transcription factor in the inner ring to its target gene in the outer ring. (B) A table of the top three co-regulatory TF pairs targeting "A+" genes, (C) a diagram illustrating these co-regulatory interactions, and (D) a Venn diagram showing the overlap of the "A+" genes targeted by these TFs. (E) A table of the top three co-regulatory TF pairs targeting "N-" genes, (F) a diagram illustrating all significant co-regulatory events between these TFs in "N-" genes, and (G) a Venn diagram showing the overlap of "N-" genes targeted by each of these TFs.

For "A+" genes, as was seen in Figure 4A, we identify significant co-regulatory associations between ARID3A, PRRX2, and SOX5 (Figure 4B-C), and these three regulators share many common targets (Figure 4D). In fact, 58% of the "A+" genes are targeted by at least one of these transcription factors, 32% by at least two, and 14% are targeted by all three, suggesting they may function as a module that coordinately regulates these genes. For the "N-" genes, the top three significant co-regulatory transcription factor pairs include combinations of ETS1, ARNT (also known as HIF1β), MZF1 and AHR (Figure 4E). All possible pairs of these four TFs (including those that do not include one of our "key" transcription factors) are enriched in "N-" genes at a statistically significant level (p < 1 × 10−6, Figure 4F), although MZF1 generally shares targets with AHR or ARNT only in combination with ETS1 (Figure 4G). AHR and MZF1 are among our key regulators, and, as noted previously, MZF1 is known to repress MMP2 and reduce cancer invasiveness [41]. However, ETS1 and ARNT were not among our list of key regulators, indicating that combinatorial events might be especially important for these two transcription factors. Previous reports suggest that although various ETS family members can either activate or repress angiogenic pathways [53,54], ETS1, in particular, acts as a mediator of angiogenesis [55], dimerizing with HIF2α to activate VEGFR1 and VEGFR2 [56,57]. Similarly, ARNT dimerizes with HIF1α to activate VEGF and angiogenesis [58]. However, the dimerization of AHR with ARNT inhibits ARNT/HIF1α dimerization, thereby reducing VEGF production and subsequent angiogenesis [59].
Thus, even though ARNT/HIF1α promotes angiogenesis, the fact that ARNT/AHR dimerization inhibits angiogenesis offers an explanation for our observation that ARNT is associated with the repression of genes. Since both ETS1/HIF2α and ARNT/HIF1α interactions occur through a PAS domain [60,61], it is likely that a similar mechanism underlies our observed combinatorial enrichment of ETS1 with AHR, and we hypothesize that ETS1 interaction with AHR prevents dimerization with HIFα proteins, thereby reducing VEGF production and subsequent angiogenesis.

The network model captures the effects of various treatment strategies

We wished to investigate how genes identified using our network model might respond to standard or other treatment protocols. Therefore, we analyzed experimental data (GEO accession numbers GSE8057, GSE40837) measuring gene expression levels in response to several drugs that are commonly used to treat ovarian cancer and/or to target angiogenesis, including cisplatin, oxaliplatin and sorafenib. For each experiment, we used RMA [62] to normalize gene expression CEL files downloaded from the Gene Expression Omnibus and used a custom CDF to map to Entrez GeneIDs [63]. We selected samples that correspond to either a treatment or control experiment and performed a t-test to quantify the differential expression of all genes between these sets of samples. Finally, we computed a summary statistic representing the aggregate differential expression value for the sets of genes within each of the six "classes" defined by our subnetworks (for more details, see Methods). The results are summarized in Figure 5A; the intensity of red or blue coloration scales with the significance of increased or decreased expression, respectively, in the treatment compared to the control samples, for the genes belonging to each of our network "classes".

Figure 5. Proposed therapeutic approaches. (A) A summary of the results found by comparing the expression patterns of genes in "treatment" versus "control" samples in each of the GEO datasets to "classes" of genes defined in our network analysis. We report the significance of the association of differential expression with the indicated gene class, using a GSEA approach [110]; colors indicate the direction of differential expression (red - increase upon treatment, blue - decrease upon treatment). (B) An illustration of some of the key findings, identified using PANDA, regarding the potential mechanisms driving angiogenesis in ovarian cancer, as well as three potential treatments that may inhibit angiogenesis.

Platinum-based therapies are widely used in ovarian cancer treatment regimens. Therefore, we began by investigating the effect of the chemotherapy drugs cisplatin and oxaliplatin on the expression levels of genes in A2780 human ovarian carcinoma cells (GSE8057, [64]). As a negative control, we compared expression levels of cells grown in a drug-free medium for 16 hours to their expression at 0 hours. As expected, there is little differential expression and we observe no association with any of our classes of genes (Figure 5A). We next compared expression of cells grown for 16 hours following treatment with either cisplatin or oxaliplatin to those grown for 16 hours in drug-free medium. Curiously, in both instances we see that genes normally repressed in the non-angiogenic subnetwork ("N-") actually increase their expression levels following treatment, suggesting that these drugs disrupt regulatory interactions that are important for repressing angiogenic activities.
This is consistent with the results of a previous study [65] showing that, when taken in isolation, cisplatin is not effective for treating angiogenesis in ovarian cancer. Although decreased expression in "A+" genes does not occur following platinum-based treatment, we reasoned that the effects of a VEGF-inhibiting drug should be reflected in our identified subnetworks. Biopsies of ER+ breast tumors have been collected from patients both prior to and following sorafenib treatment in a clinical trial, and the genome-wide expression levels of genes were measured in these samples (GSE40837). Analysis of these data in the same manner as for the platinum-based therapies shows a striking association with the subnetworks (Figure 5A). Genes in the "A+," "A+;N-" and "N-" groups all show a profound decrease in expression post-treatment while genes in the "A-," "N+;A-" and "N+" groups all increase their expression. Although this is breast rather than ovarian cancer, these results are exciting since they rely on patient samples collected from a clinical trial rather than cell lines, illustrating a patient-level association between an angiogenesis-inhibiting drug and our network-defined gene classes. This result also serves as a positive control for our network analysis. We note that the six classes of genes we define are not wholly independent of the gene expression data, which we used both to reconstruct the networks and to divide target genes into distinct classes. Consequently, we would expect similar results for analyses of other "classes" of genes whose differential expression is associated with differences between the two subtypes. However, analyzing the networks has the potential to provide additional mechanistic insight into differences between the subtypes and to identify other drugs not classically associated with angiogenesis.

Three treatments may synergistically inhibit angiogenic progression

The optimal angiogenesis-based treatment in ovarian cancer is still a matter of ongoing investigation [14]. Commonly used anti-angiogenic drugs mainly target VEGF, a major contributor to angiogenesis. On the other hand, as described below and illustrated in Figure 5B, several mechanisms highlighted in our network analysis suggest alternative approaches for treatment that, although speculative, could utilize existing compounds to control, or potentially reverse, angiogenesis in ovarian cancer. For each of these proposed treatments we identified highly related compounds and ascertained whether there is a verifiable effect on gene expression in either ovarian cancer or another human system.

ARNT and ETS1 dimerization with HIF1α and HIF2α, respectively, must be prevented

As noted above, the dimerization of ARNT and ETS1 with HIF1α and HIF2α, respectively, generally promotes angiogenesis, although AHR may be interfering to repress target gene expression in the non-angiogenic subtype. It is therefore essential that the dimerization of these HIF proteins with ARNT and ETS1 be inhibited. HIF2α dimerizes with ARNT through a PAS-B domain, located on the C-terminus of the ARNT protein. The structural basis for this dimerization has been solved [66,67] and a small-molecule ligand has been identified that binds HIF2α, decreasing the affinity of the ARNT/HIF2α heterodimer [60]. Similarly, a compound has been identified that binds HIF1α, decreasing the affinity of the ARNT/HIF1α heterodimer [68].
Using either or both of these compounds, we believe one could prevent or reverse angiogenic effects driven by ARNT/HIF1α and perhaps also those driven by ETS1/HIF2α dimerization. In lieu of a dimerization inhibitor, we investigated how siRNA depletion of HIF1α affects the expression levels of genes in our identified subnetworks. We observe that in two independent experiments [69,70], "A+" genes exhibit a decrease in expression upon HIF1α depletion, as we would expect from our model (Figure 5A).

AHR dimerization with ARNT and ETS1 must be promoted

Preventing the dimerization of ARNT/HIF1α and ETS1/HIF2α may be insufficient; inhibition of angiogenesis is also contingent upon the dimerization of ARNT with AHR (and perhaps also ETS1/AHR). Consequently, we also suggest treatment with an AHR agonist, such as the selective AHR modulator (SAhRM) 6-methyl-1,3,8-trichlorodibenzofuran (6-MCDF), which has been shown to inhibit carcinogen-induced mammary tumor growth in rats [71]. TRAMP mice fed 6-MCDF in their diet had overall lower levels of serum VEGF and were five times less likely to have metastasis compared to mice on a control diet [72]. Although it is a known carcinogen, the environmental toxin 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is one of the most efficient AHR agonists. TCDD has been shown to potentiate ARNT/AHR dimerization, thereby inhibiting angiogenesis and preventing vascular remodeling in rat placenta [73]. Curiously, accidental exposure to TCDD was found to potentially decrease the incidence of breast and endometrial cancer in a group of women [74]. In our network, "A+" genes show a decrease in expression upon treatment with the AHR agonist TCDD in both hepatocytes [75] and CD34+ hematopoietic cells [76] (Figure 5A). The hepatocyte data also show a significant decrease in expression for "A+;N-" genes and "N-" genes, and an increase in expression for "A-" genes.

Methylation levels across the entire angiogenic genome must be decreased

In patients whose tumors have already become angiogenic, epigenetic alterations may need to be considered. One hallmark of many cancers is alteration of DNA methylation and, indeed, we found higher genome-wide methylation levels in the angiogenic subtype (Figure 2C). Interestingly, SOX5, one of our "key" transcription factors that was also found to play an important combinatorial role with ARID3A and PRRX2, contains an HMG box, which binds the minor groove of DNA [77]. Since methylation modifications occupy the major groove of DNA [78], this implies that SOX5 might be mediating key regulatory processes in the angiogenic subtype in a methylation-independent manner. This suggests that it may be necessary to decrease methylation levels across the angiogenic genome, thereby increasing competition with SOX5 binding at gene promoters and altering their subsequent expression. This could be achieved using, for example, DNA methyltransferase (DNMT) and histone deacetylase (HDAC) inhibitors, which have already shown potential to inhibit angiogenesis in other systems [79,80]; such treatments in ovarian cancer might yield similar results. A hypomethylating agent, such as 5-azacytidine, could also be used to alter the epigenetic landscape and control angiogenic progression. With this in mind, we investigated the expression of ovarian cancer cells both prior to and post treatment with 5-azacytidine [81]; we observe a decrease in expression of the "A+" genes, consistent with our hypotheses (Figure 5A).
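Each of the comparisons summarized in Figure 5A follows the same pattern: per-gene differential expression between treated and control samples is computed, and the t-statistics of genes in a network-defined class are then compared with those of all other genes. The sketch below is a minimal Python illustration of that class-level comparison, assuming the expression matrix is held in a pandas DataFrame; the toy data, variable names, and class list are illustrative assumptions rather than the published analysis code.

```python
# Sketch of the class-level summary used for the treatment/control comparisons
# in Figure 5A. All names and the toy data are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

def per_gene_t(expr, treated, control):
    """Two-sample t-statistic per gene between treated and control samples."""
    t, _ = ttest_ind(expr[treated], expr[control], axis=1)
    return pd.Series(t, index=expr.index)

def class_association(gene_t, class_genes):
    """'Meta' t-test: t-statistics of class genes versus all other genes."""
    in_class = gene_t.index.isin(class_genes)
    return ttest_ind(gene_t[in_class], gene_t[~in_class])

# Toy example: 200 genes x 6 samples, with an arbitrary "A+"-like gene class.
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(200, 6)),
                    index=[f"gene{i}" for i in range(200)],
                    columns=[f"s{i}" for i in range(6)])
gene_t = per_gene_t(expr, treated=["s0", "s1", "s2"], control=["s3", "s4", "s5"])
meta_t, meta_p = class_association(gene_t, class_genes=[f"gene{i}" for i in range(30)])
```

A positive meta t-statistic would indicate that the class genes tend to increase in expression upon treatment relative to the rest of the genome, and a negative value the opposite, mirroring the red/blue coloring in Figure 5A.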
Additional potential therapies associated with differentially-targeted genes

We have identified several treatment options that target specific biological mechanisms uncovered when contrasting our network models; however, these are the result of intensive literature mining to determine suitable candidate drugs. We used the Connectivity Map (CMAP) [82] to determine if a gene classification based on our network analysis could also be used to identify potential drugs for treating angiogenesis in ovarian cancer. We first used genes assigned to the "A+" and "A-" classifications (see Figure 3) to build a "network-signature" suitable for CMAP analysis. As expression information was included in our network model, we also built an "expression-signature" by selecting genes with the most significant changes in expression between the angiogenic and the non-angiogenic subtypes (based on an unpaired t-test). A cutoff of p < 1.6 × 10−6 (FDR < 1.45 × 10−5) was used to select genes more highly expressed in the angiogenic subtype and p < 3.05 × 10−4 (FDR < 1.4 × 10−3) to select genes more highly expressed in the non-angiogenic subtype. We used these criteria to select differentially expressed genes so that the expression-signature and network-signature had equivalent dimensions (920 "up" genes and 1287 "down" genes in both cases). These two signatures share approximately 20% of their genes, with 225 in common in the "up" direction and 223 in common in the "down" direction. We used CMAP to identify drugs associated with each of these signatures. Comparing results from these two signatures allows us to distinguish between drug candidates identified only in the network context and those which would also be identified using a differential-expression analysis. Table 1 lists drugs significantly associated with the network-signature classification (p < 0.01). Most of these drugs are also significantly associated with the expression-signature, which is not surprising given that these two signatures are not independent; however, the ranking from the expression-signature is vastly different from that of the network-signature. One significant exception is Prestwick-675, or hippeastrine, which is ranked highly in both analyses. Hippeastrine is an Amaryllidaceae alkaloid with potent anti-invasive properties [83] and anticancer activities in cell lines [84]; it is also believed to contribute to the reported anti-cancer activities of the Chinese herb Lycoris aurea [85].

Table 1. Drugs significantly associated (FDR < 0.1) with the network-signature classification based on CMAP analysis.

One of the most striking results from this analysis is that several drugs are significantly associated with the network-signature but not the expression-signature. The antitussive pentoxyverine is an agonist of the sigma-1 receptor [86], which has been shown to contribute to the induction of cancer-specific apoptosis by interleukin-24, a known inhibitor of angiogenesis [87]. Dicoumarol is an anticoagulant that is often administered to cancer patients. Anticoagulants are believed to be able to interfere with tumor angiogenesis [88] and in clinical trials their overall association with improved patient survival, while encouraging, may be limited to a subset of cancers [89]. Dicoumarol, in particular, inhibits furin-like activity by blocking the processing of MMP1 [34] and has been shown to abolish the TNF-induced activation of NFKB1 [90], one of our identified "core" transcription factors.
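For reference, the construction and comparison of the two signatures described above can be sketched in a few lines of Python. The class lists, p-value cutoffs, and toy data below are placeholders chosen for illustration; this is not the published CMAP query code.

```python
# Illustrative sketch: build a "network-signature" (A+/A- classes) and an
# "expression-signature" (per-gene t-tests), then count their overlap.
# Toy inputs, cutoffs, and names are assumptions, not the published pipeline.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
genes = [f"gene{i}" for i in range(500)]
expr = pd.DataFrame(rng.normal(size=(500, 20)), index=genes)
angio_cols, non_cols = expr.columns[:8], expr.columns[8:]

t_stat, p_val = ttest_ind(expr[angio_cols], expr[non_cols], axis=1)
de = pd.DataFrame({"t": t_stat, "p": p_val}, index=genes)

# Expression-signature: separate p-value cutoffs for "up" and "down" genes
# (placeholders here; the text used 1.6e-6 and 3.05e-4 to match signature sizes).
expr_up = set(de.index[(de.p < 0.05) & (de.t > 0)])
expr_down = set(de.index[(de.p < 0.05) & (de.t < 0)])

# Network-signature: genes in the "A+" and "A-" classes (toy lists here).
net_up, net_down = set(genes[:60]), set(genes[60:120])

print(len(net_up & expr_up), "shared 'up' genes")
print(len(net_down & expr_down), "shared 'down' genes")
```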
Another interesting drug identified by the network-signature is harmine. Harmine was recently shown to suppress tumor growth by inhibiting angiogenic activities in endothelial cells [91] and to induce apoptosis by inhibiting the expression of MMP2 in gastric cancer [92]. Interestingly, in light of the network model we propose above, harman, a related alkaloid, stimulates AHR-dependent luciferase activity [93], and harmine is a competing ligand with the well-known AHR agonist TCDD [94]. The association of these alkaloids with the network-signature is strengthened by the fact that two of harmine's sister compounds, harmol and harmalol, were identified as the top compounds most significantly associated with the network-signature. Although a wealth of cancer gene expression data has been generated over the last decade, most biological inference has been based on statistical tests at the level of individual genes (with very high rates of spurious associations) followed by functional meta-analysis using gene set enrichment. Our network analysis of angiogenic and non-angiogenic phenotypes in ovarian cancer led us not just to differential expression, but also to the underlying regulatory mechanisms associated with the differential activity of transcriptional programs. By associating differences in regulatory patterns with differences in gene expression we were able to define subsets of genes that are activated or repressed by their regulators. Then, by identifying and exploring relationships between a set of key transcriptional regulators, we were able to identify putative mechanisms by which they might be working coordinately to activate, or repress, the expression of their target genes. Based on these observations we propose three therapeutic strategies that may complement or replace currently used anti-angiogenic treatments. While these proposed strategies remain speculative, and experimental validation will be critical in establishing their efficacy, the strategies are supported by our analysis of experiments from independent, published gene expression datasets, in which mechanisms closely related to those predicted by our models were tested. We anticipate that these treatments could be combined synergistically to better inhibit angiogenesis in ovarian cancer tumors and to bypass the resistance that develops with the use of conventional mono-therapeutic angiogenesis-inhibitor regimens. Until recently, no clinical trials had collected the expression data needed to validate the angiogenic subtype classification in ovarian cancer, and consequently there is currently no experimental data supporting increased angiogenesis in the angiogenic subtype. However, there is strong anecdotal evidence supporting the classification, including response rates in clinical trials involving angiogenesis inhibitors that are consistent with what we would have predicted [95-97]. The use of PANDA in comparing the subtypes led to an independent identification of angiogenic processes, among others. Importantly, PANDA itself does not rely on differential expression, but rather characterizes the targeting of genes by transcription factors. Indeed, we found that many of the transcription factors with the greatest differential targeting between the subtype-specific subnetworks are not themselves strongly differentially expressed between the two subtypes.
However, subsequent analysis led us to identify potential therapies that could disrupt the processes that distinguish the subtypes, which are only coincidentally associated with angiogenesis. The clinical impact of current anti-angiogenic therapies on the outcome of ovarian cancer and other cancers, although real, continues to be modest, despite early, highly promising results in mouse models [98]. In seminal ovarian cancer clinical trials only a modest improvement in progression-free survival (four months) was observed with bevacizumab treatment [99], and angiogenesis inhibitor resistance is frequently seen de novo or over the course of treatment [100]. In addition, several attempts to define biomarkers of clinical response to anti-angiogenesis drugs have failed to produce a single strong or consistently predictive biomarker [50,51,101-104]. Such therapeutic and predictive limitations may also reflect our limited understanding of the specific underlying mechanisms driving angiogenic progression. Indeed, we observe that the activity of many previously identified potential biomarkers for angiogenesis may be modulated through complex regulatory features (see Figure 3C) that include an important role for both coordinated transcriptional activation and repression. For instance, our analysis revealed that VEGFA is targeted in the angiogenic network but not differentially expressed between the subtypes. Furthermore, markers such as HIF1 (a proangiogenic factor) and PDGFRA (a kinase contributing to angiogenesis and frequently targeted by angiogenesis inhibitor drugs), while having increased expression in the angiogenic tumor subtype, were identified as targeted for repression in the non-angiogenic subnetwork. Thus, our proposed network and treatment models begin to address, in more depth, the complex regulatory mechanisms relevant to angiogenesis in ovarian cancer, laying the groundwork for a network-based subtype categorization that may allow better prediction as well as more rational therapeutic development. Importantly, the methods we use in our analyses are generalizable and could be applied to many other disease settings to suggest new therapeutic approaches. The specific transcriptional programs activated in angiogenic ovarian cancer, and identified through our use of PANDA, underscore the complex nature of regulatory processes and point to specific interventions that may have an increased likelihood of success. While a great deal of work would be required to validate these drug candidates, and to test whether they are subtype-specific, their identification and the plausibility of their specific modes of action suggest that the type of network analysis we performed can identify candidates not found through the more widely used gene-by-gene methods for expression analysis. In diseases such as ovarian cancer, where the outcome is poor and there are few viable drug candidates, network-based methods could represent a valuable addition to the existing repertoire of tools for analyzing genomic data.

Constructing regulatory networks with PANDA

PANDA [28,105] uses three inputs: a motif prior, a set of known protein-protein interactions, and expression data. To create angiogenic and non-angiogenic subtype-specific transcriptional regulatory networks, we ran PANDA twice using the same transcription factor motif prior and protein-protein interaction data, but with gene expression data unique to either the angiogenic or non-angiogenic ovarian cancer subtypes.
In both runs the update parameter (α) was set equal to 0.25.

Expression data

Gene expression data were downloaded from TCGA (https://tcga-data.nci.nih.gov/tcga/tcgaHome2.jsp), normalized using fRMA, and individual ovarian cancer samples were assigned to either the angiogenic or non-angiogenic subtype, as described in [3]. Briefly, a Gaussian mixture model was fit to the distribution of per-patient signature scores using the Mclust R package version 3.4.10 [106], and the maximum posterior probability was used to classify each sample. Of the 510 samples, 188 were classified as angiogenic, and 322 were classified as non-angiogenic. An R package containing the data used in this manuscript has been deposited at the URL: http://bcb.dfci.harvard.edu/ovariancancer/.

Motif data

To create our motif prior, we downloaded the position weight matrices (PWMs) of 130 core vertebrate transcription factor binding site motifs from the JASPAR database [107,108], processed as described in [109]. Namely, to search for motif target candidates, the motif score of each candidate sequence S was defined as motif score = log[P(S|M)/P(S|B)], where P(S|M) is the probability of observing sequence S given the motif M, and P(S|B) is the probability of observing sequence S given the genome background B. To define motif targets, we modeled the motif score distribution by randomly sampling the genome 10^6 times. Targets of motifs were then defined as those with a score at a significance level of p < 10−5. We associated a gene with a motif target if that target fell within the gene's promoter region ([−750,+250] base-pairs around the transcription start site). It is possible for a motif to correspond to multiple transcription factors; in these cases we included all corresponding transcription factors. This resulted in a transcription factor to target gene mapping. From this mapping we excluded edges connected to transcription factors or genes for which we did not have expression data. This left us with a prior network from 111 transcription factors to 12290 genes.

Protein-protein interactions

Predicted human transcription factor interactions were obtained from [31]. We filtered these interactions to include only those between the 111 transcription factors in our motif prior and used these interactions in constructing the regulatory networks.

Network quality estimation

To evaluate the robustness of our predicted networks we performed a number of variations on the input data used in the reconstruction and determined how they might influence the resulting estimated edge weights. The analysis shown in Additional file 2: Figures S1-S3 demonstrates the predicted networks' robustness to jackknifing the prior edges in the motif data, the protein-protein interaction dataset used, and the samples used to estimate the two subtype networks. See the Additional file 2: Figures S1-S3 legends for more details.

Quantitative network comparison

PANDA estimates a probability that an edge exists in an individual network and reports that estimate in terms of z-score units. We wanted to identify potential regulatory interactions that best characterized each of the subtype-specific networks. Therefore, we selected edges based both on the probability that they are "supported" in the network inference, and on whether they are "different" between the subtypes.
To determine the probability that an edge is "supported," we applied the cumulative distribution function (CDF) of the standard normal distribution, Φ, to each edge weight, assigning each edge a probability value between zero and one (instead of a z-score). To determine the probability that an edge is "different" between the networks, we first subtracted the z-score weight values estimated by PANDA for the two networks and then applied the same transformation to this difference. The product of these two probabilities represents the probability that an edge is both "supported" and "different." We select edges for which this combined probability is greater than 80%, or:

$$ \text{Edge identified as } \begin{cases} \textit{angiogenic-specific} & \text{if } \Phi\!\left(W_{ij}^{(A)}\right) \times \Phi\!\left(W_{ij}^{(A)} - W_{ij}^{(N)}\right) > 0.8 \\ \textit{non-angiogenic-specific} & \text{if } \Phi\!\left(W_{ij}^{(N)}\right) \times \Phi\!\left(W_{ij}^{(N)} - W_{ij}^{(A)}\right) > 0.8 \\ \textit{neither} & \text{otherwise} \end{cases} $$

where $W_{ij}^{(A)}$ and $W_{ij}^{(N)}$ are the z-score weights of the edge between transcription factor i and gene j in the angiogenic and non-angiogenic networks, respectively. This 80% cutoff was chosen so that each subnetwork contains roughly 1% of all possible edges. We verified the robustness of our network analysis to this cutoff by varying it systematically between 65% and 95% (see Additional file 2: Figure S4). A file with the edge-enrichment analysis for TFs performed across each of these cutoffs is also supplied in the supplemental material (Additional file 3: Dataset S2). We recognize that there could be hidden dependencies between the z-scores, so this analysis may over-estimate the significance.

Edge enrichment

To identify key transcription factors we calculated two values, an "edge enrichment" score and a p-value significance for the difference in the number of target genes. The edge enrichment for a given transcription factor ($E_{TF}$) is defined as $E_{TF} = (k_A/k_N)/(n_A/n_N)$, where $k_A$ and $k_N$ are the out-degrees of the TF in the angiogenic and non-angiogenic subnetworks, respectively, and $n_A$ and $n_N$ are the total numbers of edges in the angiogenic and non-angiogenic subnetworks, respectively. The p-value significance of the overlap between the edges from a transcription factor and the edges specific to the angiogenic subnetwork, as modeled by the hypergeometric distribution, can then be defined as:

$$ p_{TF}^{(A)} = \sum_{k=k_A}^{n_A} \frac{\binom{n_A}{k} \binom{n_N}{k_A + k_N - k}}{\binom{n_A + n_N}{k_A + k_N}} $$

where the sum runs over all possible numbers of angiogenic-subnetwork edges k at least as large as the observed value $k_A$. An equivalent formula is used to calculate $p_{TF}^{(N)}$. For simplicity we report $p_{TF}^{(A)}$ for $E_{TF} > 1$ and $p_{TF}^{(N)}$ for $E_{TF} < 1$. In selecting "key" transcription factors we used an edge enrichment of greater than 1.5 (or less than 1/1.5), and a p-value significance of less than 10−3. To help account for the fact that these measures are not highly robust for small out-degree values, we also limited our analysis to transcription factors with twenty or more total edges ($k_A + k_N \geq 20$). This last threshold resulted in excluding one potential TF, ELK4, from being identified as a "key" transcription factor.
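The edge selection rule and the per-TF enrichment test above can be expressed compactly; the sketch below is a minimal Python illustration, assuming the two PANDA edge-weight matrices are available as NumPy arrays of z-scores (transcription factors by genes). The function and variable names are illustrative and are not the tool distributed with the paper.

```python
# Minimal sketch of the subnetwork selection and edge-enrichment test described
# above. w_ang and w_non are (n_tf x n_gene) arrays of PANDA z-scores; all
# names are illustrative assumptions, not the published edge-enrichment tool.
import numpy as np
from scipy.stats import norm, hypergeom

def subtype_specific_edges(w_ang, w_non, cutoff=0.8):
    """Boolean masks of angiogenic- and non-angiogenic-specific edges."""
    ang = norm.cdf(w_ang) * norm.cdf(w_ang - w_non) > cutoff
    non = norm.cdf(w_non) * norm.cdf(w_non - w_ang) > cutoff
    return ang, non

def edge_enrichment(ang, non):
    """Per-TF edge enrichment E_TF and hypergeometric tail p-value."""
    k_a, k_n = ang.sum(axis=1), non.sum(axis=1)        # TF out-degrees
    n_a, n_n = ang.sum(), non.sum()                    # subnetwork edge totals
    e_tf = (k_a / np.maximum(k_n, 1)) / (n_a / n_n)    # enrichment score
    # P(X >= k_a) with X ~ Hypergeom(N = n_a + n_n, K = n_a, n = k_a + k_n)
    p_ang = hypergeom.sf(k_a - 1, n_a + n_n, n_a, k_a + k_n)
    return e_tf, p_ang

# Toy usage with random z-scores of the dimensions used in the paper.
ang_mask, non_mask = subtype_specific_edges(np.random.randn(111, 12290),
                                            np.random.randn(111, 12290))
e_tf, p_ang = edge_enrichment(ang_mask, non_mask)
```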
For a full list of every TF's edge-enrichment, p-value significance and total edge count across a variety of potential subnetwork definitions, see the supplemental material (Additional file 3: Dataset S2).

Characterizing differential expression/methylation of transcription factors and their target genes

We determined the differential expression between the subtypes for each gene in our network using a t-test. We determined the corresponding significance and adjusted for multiple hypotheses by applying the Benjamini-Hochberg correction. The differential expression pattern of a set of target genes was determined by comparing the values of the t-statistic for that set of genes to the values of the t-statistic for all other genes [110]. For the same 510 samples for which we have expression data, we also downloaded level 3 methylation data from the TCGA website (https://tcga-data.nci.nih.gov/tcga/tcgaHome2.jsp). Of the 14473 genes with methylation data, we limited ourselves to the 10108 included in the expression data, 621 of which had empty values reported across all 510 patient samples analyzed. For the remaining 9487 genes, we compared methylation levels between the two subtypes using a t-test. We further tested for differential methylation patterns of sets of target genes as we did for the gene expression data. Values for the differential expression and differential methylation of each gene are included in Additional file 4: Dataset S3. For sets of target genes we also performed a randomization procedure to ensure that the results observed in the above analysis are not coincidental (Additional file 2: Figure S5). See the supplemental figure legend for more details.

Characterizing differential CNV for target genes

To evaluate changes in copy-number for sets of target genes, we downloaded level 3 CNV (SNP Array) data files from TCGA. According to the TCGA documentation, these files contain the results of CBS segmentation of the log R ratio data for each tumor/normal pair. We identified all the segments in which each gene occurs and used a t-test to compare the values of the segments identified within subjects classified into the angiogenic subtype to the segments identified within subjects classified into the non-angiogenic subtype. To determine the overall differential CNV for a set of target genes, we compared the resulting t-statistic values for the set of genes targeted by a particular transcription factor to those of all other genes [110]. Values for the differential CNV of each gene are included in Additional file 4: Dataset S3.

Characterizing the association of differential gene expression within classes of network genes

We wished to investigate how genes identified by our network model might respond to standard or other treatment protocols. Therefore, we analyzed publicly available experimental data measuring gene expression levels in response to various stimuli. For each experiment, we RMA-normalized raw CEL data deposited on the Gene Expression Omnibus using a custom CDF to map to Entrez GeneIDs [63], selected samples that correspond to either a treatment or control experiment, and performed a t-test to quantify the expression differences between the treatment and control samples. Finally, we computed a summary statistic representing the significance of the association of this differential expression with genes in each of the "classes" defined by our subnetworks.
Specifically, we calculate a "meta"-t-statistic and associated p-value by comparing the differential-expression t-statistic values for genes in a given network class to the t-statistic values for all other genes [110]. We also performed a randomization procedure to ensure that significant results identified in the above analysis is not accidental, and would not be observed for random "classes" of genes (Additional file 2: Figure S5). See the legend for Additional file 2: Figure S5 for more details. CMAP analysis We downloaded the "raw" gene expression CEL files from the Connectivity Map website (http://www.broadinstitute.org/cmap/cel_file_chunks.jsp) and normalized these using fRMA [29]. This dataset contains the expression of approximately 12,000 genes before and after administration of 1,309 drugs in as many as 5 cell lines. We generated drug perturbation signatures by quantifying the differential gene expression, controlling for tissue type and batch effects using the following model: $$ {G}_i={\beta}_{0,i}^d+{\beta}_{c,i}^d{C}^d+{\beta}_{t,i}^d{T}^d+{\beta}_{b,i}^d{B}^d,\forall i\in M $$ where variables are the same as those used for drug sensitivity signatures except for C d, representing the concentration of drug d used to treat the cell lines, T d, representing the tissue of the cell-line treated with drug d, and B d, representing the batch of the array measuring the effect of drug d. The strength and significance of differential expression of gene i due to perturbation by drug d is given here by the term \( {\beta}_{c,i}^d \) and its associated p-value (Student's t-test). We defined the gene signatures for drug perturbations based on estimates for the coefficients of \( {\beta}_{\mathrm{c},\mathrm{i}}^{\mathrm{d}} \) and their associated p-values. Code and materials for repeating the analysis in the paper The PANDA implementation used to perform this analysis, data input files, output predicted networks, as well as a separate tool to perform edge-enrichment analysis on a pair of PANDA networks is available at http://sourceforge.net/projects/panda-net/. Siegel R, Naishadham D, Jemal A. Cancer statistics, 2012. CA Cancer J Clin. 2012;62(1):10–29. PubMed Article Google Scholar Cannistra SA. Cancer of the ovary. N Engl J Med. 2004;351(24):2519–29. CAS PubMed Article Google Scholar Bentink S, Haibe-Kains B, Risch T, Fan JB, Hirsch MS, Holton K, et al. Angiogenic mRNA and microRNA gene expression signature predicts a novel subtype of serous ovarian cancer. PLoS One. 2012;7(2):e30269. Tothill RW, Tinker AV, George J, Brown R, Fox SB, Lade S, et al. Novel molecular subtypes of serous and endometrioid ovarian cancer linked to clinical outcome. Clin Cancer Res. 2008;14(16):5198–208. TCGA. Integrated genomic analyses of ovarian carcinoma. Nature. 2011;474(7353):609–15. Hanahan D, Weinberg RA. The hallmarks of cancer. Cell. 2000;100(1):57–70. Folkman J. Tumor angiogenesis: therapeutic implications. N Engl J Med. 1971;285(21):1182–6. Ingber DE, Folkman J. Mechanochemical switching between growth and differentiation during fibroblast growth factor-stimulated angiogenesis in vitro: role of extracellular matrix. J Cell Biol. 1989;109(1):317–30. Dike LE, Chen CS, Mrksich M, Tien J, Whitesides GM, Ingber DE. Geometric control of switching between growth, apoptosis, and differentiation during angiogenesis using micropatterned substrates. In Vitro Cell Dev Biol Anim. 1999;35(8):441–8. Moses MA, Wiederschain D, Loughlin KR, Zurakowski D, Lamb CC, Freeman MR. 
This project has been supported by R01 HL089438, R01 HL111759 and P01 HL105339. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Dana-Farber Cancer Institute, Boston, MA, USA Kimberly Glass, John Quackenbush & Guo-Cheng Yuan Harvard School of Public Health, Boston, MA, USA Brigham and Women's Hospital, Boston, MA, USA Kimberly Glass Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA Dimitrios Spentzos Princess Margaret Cancer Center, University Health Network, Toronto, ON, M5G 2M9, Canada Benjamin Haibe-Kains John Quackenbush Guo-Cheng Yuan Correspondence to Guo-Cheng Yuan. KG, JQ, and GY conceptualized and designed the study. All authors participated in analysis and statistical support. All authors contributed to the writing and editing of the manuscript. All authors read and approved the final manuscript. Additional file 1: Dataset S1. (TranscriptionFactor_CombinatorialEnrichment.txt): The significance of co-targeting by all pairs of transcription factors in the various network classifications. Analysis demonstrating that network estimates are robust to the exact expression samples used. Figure S2. Analysis examining the effect of the prior motif structure on network estimates. Figure S3. Evaluation of how different protein-interaction databases affect PANDA's predicted networks. Figure S4. Analysis examining the consequences of varying the threshold used to define the angiogenic and non-angiogenic subnetworks. Figure S5. Analysis examining whether the differential methylation and expression of target genes and gene classes might be observed by chance. (TF_Statistics_Across_P-cutoffs.txt): A file containing the edge-enrichment and p-values for TF-differential-targeting between subnetworks identified across various P-cutoff values. The results included in this file were used to generate Additional file 2: Figure S4. The P-cutoff value of 0.8 was used to define the subnetworks used in the analysis shown in the main text (see Figure 2A). (AllGenes_SubtypeInformation.txt): File containing various characteristics of the genes included in our network reconstruction.
This includes (1) whether each gene was identified as a key TF or is a previously-identified biomarker for angiogenesis; (2) the network and expression classification of each gene (used for functional and CMAP analysis); and (3) the differential-expression, -methylation and –CNV of each gene between the subtypes. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Glass, K., Quackenbush, J., Spentzos, D. et al. A network model for angiogenesis in ovarian cancer. BMC Bioinformatics 16, 115 (2015). https://doi.org/10.1186/s12859-015-0551-y Network modeling Regulatory networks Cancer subtypes
CommonCrawl
Ethane and ethene: which is easier to burn? Which one burns hotter? Why? Anything that burns "easily" has a low activation energy ($E_\mathrm a$) for the burning process. Anything that burns hotter has a more negative enthalpy change and thus a more aggressive exothermic reaction. Ethane is a molecule with two carbon atoms and six hydrogen atoms, all bonded with covalent single bonds. Ethene has the same number of carbons but only four hydrogen atoms; the bond between the carbon atoms in this one is a double bond. If you had to choose a fuel, and heat were your primary factor, which one would you choose? I imagined that for this, a comparison of the activation energies and the enthalpies could lead to an answer. I could easily have looked both $E_\mathrm a$ and the enthalpies up, but I didn't. Why and how do the enthalpy and $E_\mathrm a$ change when two single bonds and two H atoms are replaced with a double bond? physical-chemistry bond enthalpy hydrogen orthocresol♦ M.A.R. ಠ_ಠ Olefins are less stable than their corresponding alkane analogues. For example, when ethene is hydrogenated to ethane roughly 33 kcal/mol of heat is given off because ethene has a higher energy content than ethane. Why is this; why is a molecule with a double bond higher in energy than a molecule without a double bond? There are two equivalent ways to describe the orbitals in ethene. One is the "double bond" description, where a pi and a sigma bond exist together side by side. We know that a pi bond is not as strong as a sigma bond due to the poorer overlap in the pi bond compared to the sigma bond. It only takes something on the order of 60 kcal/mol to break a pi bond and produce cis-trans isomerization about the double bond, whereas it takes something like 90 kcal/mol to break a carbon-carbon single bond. This poorer overlap in the pi bond causes ethene to be higher in energy than ethane. Alternatively, one can view ethene as a two-membered ring: no pi bond, just 2 sigma bonds between the 2 carbons forming the ring. It is easy to see that such a ring system would contain a significant amount of strain. Whether you use the pi bond or the two-membered ring approach to describe olefinic double bonds, both lead to the conclusion that olefins are destabilized (higher in energy content) due to poor overlap or ring strain (which in itself is really a reflection of poor overlap). Because of this destabilization, alkenes will generally have lower activation energies and more heat will be given off when they react compared to alkanes. Example: Let's consider the bromination of ethene and ethane $$\begin{align} \ce{C2H4 + Br2 &-> BrH2C-CH2Br}\\ \ce{C2H6 + Br2 &-> 2 CH3Br} \end{align}$$ The heats of formation of ethene, bromine and 1,2-dibromoethane are 52.5, 0 and -37.8 kJ/mol respectively; the reaction is exothermic by 90.3 kJ/mol. On the other hand, in the case of ethane, the heats of formation are -84.7, 0 and (2x) -37.4 kJ/mol; this reaction is endothermic by 10.3 kJ/mol. The olefinic reaction is significantly favored for the reasons discussed above. Gaurang Tandon Ethane will release more heat per mole but ethene should produce a higher temperature. The extra heat released by ethane goes into heating up more water, which has a high heat capacity. As for which has the lower activation energy, ethyne is dangerous to store because it can explode, so I am assuming ethene is faster to react because of some trend.
I also worked with someone who was researching possible fuels for a ramjet, and his group chose ethene because it was the best compromise between safety and speed of combustion. I do not know why the trend ethane > ethene > ethyne for reaction rates is as it is, though. Brinn Belyea Ethene produces a higher temperature. Ethane releases more heat per mole. The extra heat released by ethane goes into heating up more water, which has a high heat capacity. Elisha Babanyara Ellemidi $\begingroup$ Probably true, but why? $\endgroup$ – matt_black Jun 29 '18 at 17:40
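For readers who want to reproduce the products-minus-reactants arithmetic used in the long answer above, here is a minimal Python sketch. It only re-uses the heats of formation quoted there; the dictionary keys and the helper function are purely illustrative, so treat it as a back-of-the-envelope check rather than anything authoritative.

```python
# Heats of formation (kJ/mol) as quoted in the answer above; everything else
# here is plain products-minus-reactants arithmetic, not a thermochemistry library.
dHf = {
    "C2H4": 52.5,        # ethene
    "C2H6": -84.7,       # ethane
    "Br2": 0.0,          # bromine, standard state
    "BrCH2CH2Br": -37.8, # 1,2-dibromoethane
    "CH3Br": -37.4,      # bromomethane
}

def reaction_enthalpy(reactants, products):
    """Sum over products minus sum over reactants; each side is a list of (coefficient, species)."""
    side_sum = lambda side: sum(n * dHf[s] for n, s in side)
    return side_sum(products) - side_sum(reactants)

# Ethene + Br2 -> 1,2-dibromoethane (addition across the pi bond)
print(reaction_enthalpy([(1, "C2H4"), (1, "Br2")], [(1, "BrCH2CH2Br")]))  # -90.3 kJ/mol
# Ethane + Br2 -> 2 CH3Br (cleaving the C-C sigma bond)
print(reaction_enthalpy([(1, "C2H6"), (1, "Br2")], [(2, "CH3Br")]))       # roughly +10 kJ/mol
```

The second value comes out near +10 kJ/mol with these inputs; the answer quotes 10.3 kJ/mol, presumably from slightly different tabulated data.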
CommonCrawl
The shortest path problem in the stochastic networks with unstable topology Gholam H. Shirdel†1 (email author) and Mohsen Abdolhosseinzadeh†1; †Contributed equally. SpringerPlus 2016, 5:1529. Received: 24 February 2016. Accepted: 31 August 2016. The stochastic shortest path length is defined as the arrival probability from a given source node to a given destination node in the stochastic networks. We consider the topological changes and their effects on the arrival probability in directed acyclic networks. There is a stable topology which shows the physical connections of nodes; however, the communication between nodes is not stable, and this is defined as the unstable topology, where arcs may be congested. A discrete time Markov chain with an absorbing state is established in the network according to the unstable topological changes. Then, the arrival probability to the destination node from the source node in the network is computed as the multi-step transition probability of absorption in the final state of the established Markov chain. Some wait states are assumed, for whenever there is a physical connection but it is not possible to communicate between nodes immediately. The proposed method is illustrated by different numerical examples, and the results can be used to anticipate the probable congestion along some critical arcs in delay sensitive networks. Keywords: Stochastic network; The stochastic shortest path; Discrete time Markov chain; Arrival probability. Mathematics Subject Classification: Primary 90C59, 90C35; Secondary 90C40. The deterministic shortest path problem has been studied extensively and applied in many fields of optimization; there are polynomial time algorithms to solve the deterministic shortest path problem (Dijkstra 1959; Bellman 1958; Orlin et al. 2010). However, paths in networks should be reliable to transmit flow from a source node to a destination node, especially in delay sensitive networks. The best connection helps to avoid traffic congestion in networks. So, the arrival probability is used to evaluate the reliability of paths, and it has been considered as an optimality index of the stochastic shortest path length (Bertsekas and Tsitsiklis 1991; Fan et al. 2005; Kulkarni 1986; Shirdel and Abdolhosseinzadeh 2016). The stochastic shortest path problem (SSP) is defined as finding the best path under a stochastic optimality condition. Liu (2010) assumed the arc lengths to be uncertain variables. Pattanamekar et al. (2003) considered the individual travel time variance and the mean travel time forecasting error. Also, Hutson and Shier (2009) and Rasteiro and Anjo (2004) supposed two criteria: the mean and the variance of the path length. Fan et al. (2005) assumed known conditional probabilities for link travel times, such that each link could be congested or uncongested. Wu et al. (2004) modeled a stochastic and time-dependent network with discrete probability distributed arc weights. Peer and Sharma (2007) assumed two kinds of nodes: possibly failing and always working. Ji (2005) solved three models of the shortest path by integrating stochastic simulation and a genetic algorithm. The considered model in this paper is a directed acyclic stochastic network with known discrete distribution probabilities of leaving or waiting in nodes.
Our criterion to evaluation of the connections from the source node toward the destination node in the network is presented as the arrival probability, which is obtained by the established discrete time Markov chain (DTMC) in the network (Shirdel and Abdolhosseinzadeh 2016); then, the best possible connection is determined with the largest arrival probability. Liu (2010) converted his models into deterministic programming problems. Hutson and Shier (2009) and Rasteiro and Anjo (2004) obtained the maximum expected value of a utility function. Fan et al. (2005) applied a procedure for dynamic routing policies. Nie and Fan (2006) formulated the stochastic on-time arrival problem with dynamic programming, and Fan et al. (2005) minimized the expected travel time. In this paper, the maximum arrival probability from a given source node to a given destination node is computed according to known discrete distribution probabilities of leaving or waiting in nodes, and a DTMC stochastic process is used to model the problem rather than dynamic programming or stochastic programming. Kulkarni (1986) developed a method based on a continuous time Markov chain (CTMC) to compute the distribution function of the shortest path length. Azaron and Modarres (2005) applied Kulkarni's method to queuing networks. Thomas and White (2007) modeled the problem of constructing a minimum expected total cost route as a Markov decision process. They wanted to respond to dissipated congestion over time according to some known probability distribution. The arrival probability gives overall information of the network conditions to transmit flow from the source node toward the destination node. Two conditions at any state of the established DTMC are assumed: departing from the current state to a new state, or waiting in the current state with expecting better conditions. There are several unstable connections between nodes. The leaving distribution probability from one node toward another node is known as the probability that their connected arc is uncongested. A DTMC with an absorbing state is established and the transition matrix is obtained. Then, the arrival probability from the source node toward the destination node is computed as the multi-step transition probability from the initial state to the absorbing state in DTMC. The arrival probability introduced by Shirdel and Abdolhosseinzadeh (2016) is reviewed in this paper, and it is extended and the concepts and definitions are organized to find the stochastic shortest path. This paper is organized as follow. In "The unstable topology of the network" section some definitions and assumptions of networks with unstable topology is introduced. The concept of the stochastic process and the established DTMC in the network is described in "The established discrete time Markov chain" section; also, the computations of the arrival probability and the stochastic shortest path are presented in "The established discrete time Markov chain" section. "Numerical results" section contains some numerical results of implementation of the proposed method on some networks with various topologies. In this section, we introduce some definitions and assumptions of networks with unstable topology. Let network \(G=(N,A)\), with node set N and arc set A, be a directed acyclic network. Then, we can label its nodes in a topological order such that for any \((i,j)\in A, i<j\) (Ahuja et al. 1993). 
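The topological labelling mentioned above is easy to check or construct in code. The sketch below is not from the paper; it runs Kahn's algorithm on the arc list of the small example network introduced later in the text ((1,2), (1,3), (1,4), (2,3), (3,4)), and the variable names and printing are just illustrative.

```python
from collections import defaultdict, deque

# Arcs of the 4-node example network (Fig. 1), as read from the text.
arcs = [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]
nodes = {1, 2, 3, 4}

def topological_order(nodes, arcs):
    """Kahn's algorithm: returns a topological order, or raises if a cycle exists."""
    out_arcs = defaultdict(list)
    in_degree = {v: 0 for v in nodes}
    for i, j in arcs:
        out_arcs[i].append(j)
        in_degree[j] += 1
    queue = deque(sorted(v for v in nodes if in_degree[v] == 0))
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in out_arcs[v]:
            in_degree[w] -= 1
            if in_degree[w] == 0:
                queue.append(w)
    if len(order) != len(nodes):
        raise ValueError("graph contains a directed cycle")
    return order

print(topological_order(nodes, arcs))   # [1, 2, 3, 4]
print(all(i < j for i, j in arcs))      # True: the labels already satisfy i < j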
The physical topology for any \((i,j)\in A\) shows the possibility of communication between nodes \(i,j\in N\) in the network. In the transportation networks there are some physical connections between nodes, but we cannot traverse anymore toward the destination node because of probable congestion. If there are some facilities in the network G, but it is not possible to use them continuously, then G has unstable topology. So, for any arc \((i,j)\in A\) it is not mean there is a stable communication between nodes \(i,j\in N\) all the time (it could be probably congested). For any node i, it is supposed that the uniform distribution probabilities of leaving arcs (i, j) to be uncongested are known (Shirdel and Abdolhosseinzadeh 2016). Now, consider the situation that some arcs are congested and flow cannot leave because of the unstable topology. There are two kinds of wait situations: first, waiting in a particular node with expecting some facilities to release from the current condition, and it is called option 1; second, traversing some arcs those do not lead to visit a new node, and it is called option 2. For example, if it is decided to be in node 3 in the example network (Fig. 1), arc (1, 3) does not cause to visit a new node whereas arc (3, 4) leads to the new node 4. The produced wait situations are more extended than queuing networks considered by Azaron and Modarres (2005) and Thomas and White (2007). The example network with 4 nodes and 5 arcs The stochastic variable of arc (i, j) according to the unstable topology is shown by \(x_{ij}\). If \(x_{ij}=1\), it is possible to traverse arc (i, j), and otherwise \(x_{ij}=0\). The probability that arc (i, j) to be uncongested is \(q_{ij}=Pr[x_{ij}=1]\), and it represents the uniform probability that node i is leaved toward node j (an adjacency node). Then, the wait probability in node i, is \(q_{ii}=1-\sum _{\{j:(i,j)\in A\}}q_{ij}\), and it is the probability that leaving arcs by node i are congested. Figure 1 shows the example network with its topological ordered nodes and it is the initial physical topology of the network. The numbers on arcs show the leaving probabilities \(q_{ij}\). Node 1 is the source node and node 4 is the destination node. It is not possible to traverse arc (2, 4) because it does not exist in the physical topology of the example network. However, the arcs in the physical topology could be congested according to the known distribution probabilities. In this section, the proposed DTMC by Shirdel and Abdolhosseinzadeh (2016) is reviewed. The discrete time stochastic process \(\{X_r,r=1,2,3,\ldots \}\) is called Markov chain (\(X_r\) shows the process position), if it satisfies the following Markov property (see Ross 2006 and Thomas and White 2007) $$Pr[X_{r+1}=S_l|X_r=S_k,X_{r-1}=S_m,\ldots ,X_1=S_n]=Pr[X_{r+1}=S_l|X_r=S_k]=p_{lk}.$$ Any state \(S_l\) of the established DTMC determines the traversed nodes of the original network. For the example network (Fig. 1) the created states \(S_i\), are shown in Table 1. The conditional probability of the next state depends on the current state and independent of the previous states. Let \(S=\{S_i,i=1,2,3,\ldots \}\), the initial state \(S_1=\{1\}\) of DTMC contains the single source node and the absorbing state \(S_{|S|}=\{1,2,\ldots ,|N|\}\) contains all nodes of the network and it is not possible to depart; so, S is a finite state space (it is not possible to depart from \(S_{|S|}\)). 
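As a minimal sketch (not the authors' code) of how the state space just described can be enumerated for the example network: states are grown one traversed node at a time from {1}, and, anticipating Theorem 2 below, any move that reaches the destination node is treated as absorption into the final state, so only the absorbing state contains node 4. With the example network's arcs this reproduces the states listed in Table 1.

```python
# Arcs of the example network (Fig. 1) and its source / destination, as read from the text.
arcs = [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]
source, destination = 1, 4

def enumerate_states(arcs, source, destination):
    """Grow states one traversed node at a time, starting from {source}.
    Moves that reach the destination are treated as absorption, so sets
    containing the destination are not enumerated separately."""
    start = frozenset({source})
    states, frontier = {start}, [start]
    while frontier:
        current = frontier.pop()
        for i, j in arcs:
            if i in current and j not in current and j != destination:
                new = current | {j}
                if new not in states:
                    states.add(new)
                    frontier.append(new)
    return sorted(states, key=lambda s: (len(s), sorted(s)))

states = enumerate_states(arcs, source, destination)
for k, state in enumerate(states, start=1):
    print(f"S_{k} = {sorted(state)}")
print(f"S_{len(states) + 1} = absorbing state containing all nodes")
```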
For the example network, the absorbing state \(S_5=\{1,2,3,4\}\) contains all nodes of the network; for instance, state \(S_4\) of the state space S (Table 1) contains nodes \(\{1,2,3\}\) and corresponds to all connected components of the example network that are constructed from nodes 1, 2 and 3, as seen in Fig. 2. The state space of the established DTMC for the example network: \(S_1=\{1\}\), \(S_2=\{1,2\}\), \(S_3=\{1,3\}\), \(S_4=\{1,2,3\}\), \(S_5=\{1,2,3,4\}\). Constructed connected components of state \(S_4\) The final state contains the destination node of the network, where the DTMC does not progress anymore; this is called assumption i. The states of the established DTMC contain the traversed nodes of the network, which are reached from some nodes in a previous state; this is called assumption ii. It is not allowed to return from the last traversed node; however, it is possible to wait in the current state. Clearly, a new state is revealed if a leaving arc \((i,j)\in A\) is traversed such that the current node i is contained in the current state and the new node j is contained in the new state; this is called assumption iii. As previously said, the wait states are one of option 1 or option 2. The state space diagram of the established DTMC for the example network is constructed as Fig. 3; the values on arcs show the wait and the transition probabilities. The state space diagram of the established DTMC The transition and the wait probabilities The transition probabilities \(p_{kl}\) satisfy the following conditions: \(0\le p_{kl}\le 1\) for \(k=1,2,\ldots ,|S|\) and \(l=1,2,\ldots ,|S|\); and \(\sum _l{p_{kl}}=1\) for \(k=1,2,\ldots ,|S|\). The transition probabilities are the elements of the matrix \(P_{|S|\times |S|}\), where \(p_{kl}\) is the element in the kth row and lth column of matrix P, which is called the transition matrix or Markov matrix (Ibe 2009). The following theorems are used to obtain the transition matrix of the established DTMC in the network, following Shirdel and Abdolhosseinzadeh (2016). The transition probabilities (except those to the absorbing state) are obtained by Theorem 1. Theorem 1 If \(p_{kl}\) is the klth element of matrix P, with \(k \ne l\), \(l < |S|\), and \(S_{k}=\{v_0=1,v_1,\ldots ,v_m\}\) is the current state, then the transition probability from state \(S_{k}\) to state \(S_{l}\) is computed as follows: if \(l < k\) then \(p_{kl} = 0\); otherwise, if \(l > k\), then $$p_{kl}=Pr\left[ \bigcup _{\left( v,w\right) \in \Psi }{E_{vw}}\right] \times \left( \prod _{\left( v,w\right) \in \Psi }{\left( 1-\sum _{\begin{array}{c} \left( v,u\right) \in A\\ u\ne w,u\notin S_k \end{array}}{q_{vu}}\right) }\right) \times q_{v_mv_m}+q_{v_mw}.$$ \(E_{vw}\) denotes the event that arc (v, w) is traversed during the transition from \(S_k\) to \(S_l\), and \(\Psi =\{(v,w)\in A:v\in S_k\backslash \{v_m\},w\in S_{l}\backslash S_k,|S_l\backslash S_k|=1\}\). Since it is not allowed to traverse from one state to a previous state (assumption ii), necessarily \(p_{kl}=0\) for \(l < k\). Otherwise, suppose \(l > k\); during the transition from the current state \(S_k\) to the new state \(S_l\), exactly one node other than the nodes of the current state should be reached, so \(|S_l\backslash S_k|=1\), \(v\in S_k\), and \(w\in S_l\backslash S_k\) hold by assumptions ii and iii. Two components of the \(p_{kl}\) formula should be computed. In the last node \(v_m\) of the current state \(S_k\), it is possible to wait in \(v_m\) with probability \(q_{v_mv_m}\).
Notice, it is not possible to wait in the other nodes \(v\in S_k\backslash \{v_m\}\) because it should be leaved to construct the current state, however it is not necessary for node \(v_m\) with the largest label (leaving \(v_m\) leads to a new node, and therefore results in a new state). If \(w\in S_l\backslash S_k\), then one or all of events \(E_{vw}\) (i.e. to traverse a connecting arc between a node of the current state and another node of the new state) can happen for \(\left( v,w\right) \in \Psi\), and the arrival probability of node \(w \in S_l\) from the current state \(S_k\) is equal to \(Pr[\bigcup _{\left( v,w\right) \in \Psi }{E_{vw}}]\). The collection probability should be computed because of deferent representations of the new state (for example see Fig. 2). Then, the nodes of the current state \(v\in S_k\backslash \{v_m\}\) (while waiting in \(v_m\)) should be prevented from reaching other nodes \(u\notin S_k\) and \(u\ne w\) (assumption iii), so arcs \(\left( v,u\right)\) are not allowed to traverse and they are excluded simultaneously, thus it is equal to \(\prod _{\left( v,w\right) \in \Psi }{(1-\sum _{\begin{array}{c} \left( v,u\right) \in A \\ u\ne w,u\notin S_k \end{array}}{q_{vu}))}}\). The other possibility in node \(v_m\), that is leaving it toward the new node \(w\in S_l\backslash S_k\) with probability \(q_{v_mw}\). \(\square\) For example, in the established DTMC of the example network, the transition probability \(p_{24}\) is computed by the constructed components as shown in Fig. 4; and it is \(P(E_{13})\times (1-q_{14})\times q_{22}+q_{23}\), where \(P(E_{13})= q_{13}\), then \(p_{24}=q_{22}\times q_{13}\times (1-q_{14})+q_{23}\). It is possible to wait in node 2 but not other nodes of the current state \(S_2=\{1,2\}\); where, by traversing arc (1, 3) or (2, 3) the new state \(S_4=\{1,2,3\}\) is revealed. The constructed states during transition from \(S_2\) to \(S_4\) Theorem 2 describes the transition probabilities to the absorbing state \(S_{|S|}\), and they are the last column of the transition matrix P. To compute the transition probability from state \(S_k=\{v_0=1,v_1,\ldots ,v_m\}\) to the absorbing state \(S_{|S|}\) for \(k=1,2,\ldots ,|S|-1\), which is k|S|th element of matrix P, suppose \(v_n\in S_{|S|}\) is the given destination node of the network then $$p_{k|S|}=Pr\left[ \bigcup _{v\in S_k,(v,v_n)\in A}E_{vv_n}\right].$$ \(E_{vv_n}\) denotes the event that arc \((v,v_n)\in N\) of the network is traversed during the transition from \(S_k\) to \(S_{|S|}\). To compute the transition probabilities \(p_{k|S|}\), for \(k=1,2,\ldots ,|S|-1\) it should be noticed the final state is the absorbing state \(S_{|S|}=\{1,2,3,\ldots ,|N|\}\) containing all nodes of the network, and the stochastic process does not progress any more (assumption i). So, it is sufficient to consider leaving arcs \((v,v_n)\) from \(v\in S_k\), the nodes of the current state, toward the destination node \(v_n\in S_{|S|}\). Then, one or all of events \(E_{vv_n}\) (i.e. to traverse a connecting arc between a node of the current state and the destination node of the absorbing state) can happen and the transition probability from the current state \(S_k\) to the absorbing state \(S_{|S|}\) is totally equal to \(Pr[\bigcup _{v\in S_k,(v,v_n)\in A}E_{vv_n}]\). The collection probability should be computed because of different representations of the states (for example see Fig. 2). 
\(\square\) For state \(S_4\), transition probability \(p_{45}\) is obtained by \(P(E_{14}\cup E_{34}\cup E_{24})\), however \(q_{24}=0\) as seen in Fig. 1, then \(p_{45}=q_{14}+q_{34}-q_{14}\times q_{34}\). The wait probabilities, those are the diagonal elements of the transition matrix P, are obtained by Theorem 3. Suppose \(S_k=\{v_0=1,v_1,\ldots ,v_m\}\) is the current state, then the wait probability \(p_{kk}\) is kkth element of matrix P and it is $$\begin{aligned} p_{kk}= {\left\{ \begin{array}{ll} 1-\sum ^{|S|}_{j=k+1}p_{kj} &\quad \text {if } k<|S|\\ 1 &\quad \text {if }k=|S|. \end{array}\right. } \end{aligned}$$ The wait probabilities \(p_{kk}\) are the complement probabilities of the transition probabilities from the current state \(S_k\), for \(k=1,2,\ldots ,|S|-1\), toward the all departure states \(S_j\), for \(j=k+1,k+2,\ldots ,|S|\). Then, we have \(p_{kk}=1-\sum ^{|S|}_{j=k+1}p_{kj}\), for \(k=1,2,\ldots ,|S|-1\), in other word, they are the diagonal elements of matrix P, those are computed for any row \(k=1,2,\ldots ,|S|-1\) of the transition matrix (see Ibe 2009). The absorbing state \(S_{|S|}\) does not have any departure state, so \(p_{|S||S|}=1\) as the transition matrix P. \(\square\) The arrival probability The arrival probability determines the overall reliability of connections in the network, and it shows the probability that they are not congested during the transmission of flow from the source node to the destination node in the network. The arrival probability is defined as multi-step transition probability from the initial state \(S_1\) to the absorbing state \(S_{|S|}\) in the established DTMC. According to the assumptions i, ii and iii, the state space of DTMC is directed and acyclic (otherwise return to the previous states is allowed contradictively). Out-degree of any state is at least one (without loop wait transition arcs consideration), except the absorbing state \(S_{|S|}\), then for any state \(S_k\), there is one\multi-step transition from the initial state to the absorbing state that traverses state \(S_k\). Consequently, the absorbing state is accessible from the initial state after some finite transitions. Let \(p_{kl}(r)=Pr[X_{m+r}=S_l|X_m=S_k]\) denote the conditional probability that the process will be in state \(S_l\) after exactly r transitions, given that it is presently in state \(S_k\). So, if matrix P(r) is the transition matrix after exactly r transitions, it can be shown that \(P(r)=P^r\), and let \(p_{kl}(r)\) be klth element in matrix \(P^r\) (see Ibe 2009). Thus, the arrival probability after exactly r transitions is \(p_{1|S|}(r)=Pr[X_r=S_{|S|}|X_0=S_1]\) and it is the 1|S|th element in the matrix \(P^r\). For the example network, we want to obtain the probability of the arrival node 4 from node 1. The arrival probability \(p_{15}(r)\) is obtained as shown in Fig. 5 after six transitions. For r sufficiently large, the probabilistic behavior of DTMC becomes independent of the starting state i.e. \(Pr[X_r=S_{|S|}|X_0=S_1]=Pr[X_r=S_{|S|}]\), that is the multi-step transition probability (Ibe 2009). The arrival probabilities of the example network Now, we extended Shirdel and Abdolhosseinzadeh (2016) method to compute the arrival probability for a specific path, it should be considered as the probable shortest path. So, it is enough to put some conditions on the leaving probabilities \(q_{ij}\), those enforce the nodes of the considered shortest path to be reached sooner than the other nodes in the network. 
Thus, the stochastic shortest path is determined as the path which has the largest arrival probability. For path \(\Pi\) with node set \(N_\Pi\) and arc set \(A_\Pi\), the following changes in the network confine the process to path \(\Pi\), so that the arrival probability of \(\Pi\) itself can be computed: (1) for all \(i \in N_\Pi\), if \(j \notin N_\Pi\) and \((j,i) \in A\), then \(q_{ji}:= 0\) and \(q_{jj}:= q_{jj}+q_{ji}\); (2) if \(i, j \in N_\Pi\) and \((i,j) \in A\) but \((i,j) \notin A_\Pi\), then \(q_{ij}:= 0\). For example, for path \(1 \rightarrow 3 \rightarrow 4\) the changes are \(q_{23}:=0, q_{22}:=1, q_{14}:=0\). Considering all of the paths in the example network (path 1: \(1 \rightarrow 4\), path 2: \(1 \rightarrow 3 \rightarrow 4\), path 3: \(1 \rightarrow 2 \rightarrow 3 \rightarrow 4\)), Fig. 5 shows path 2 to be the stochastic shortest path, with the largest arrival probability. Some implementations of the proposed method on networks with different topologies are presented in this section. The instances are directed acyclic networks, and there is a path from each node to the destination node. The leaving probabilities of nodes are random numbers produced by the uniform probability distribution. Then, the arrival probability is computed for the established DTMC. All of the experiments are coded in MATLAB R2008a and performed on a Dell Latitude E5500 (Intel(R) Core(TM) 2 Duo CPU 2.53 GHz, 1 GB memory). To keep the demonstration clear, only the stochastic shortest path and its arrival probability computation results are shown, by square and circle markers in the figures respectively, whereas dashed lines show the results for the other paths. We use two propositions inductively to be sure that there is a path from the source node to the destination node in the initial topology and that the created network is acyclic. If node k is the first node with a larger index than source node 1 and \(in{\text{-}}degree(k)=0\), let \(1 \le l < k\) be an arbitrary node; then, by adding arc (l, k), there exists a path from source node 1 to node k. If node k is the first node with a smaller index than destination node n and \(out{\text{-}}degree(k)=0\), let \(k < l \le n\) be an arbitrary node; then, by adding arc (k, l), there exists a path from node k to destination node n. Network 1 has an arbitrary topology with 8 nodes and 18 arcs, and the leaving probabilities of its arcs are shown in Table 2. For the established DTMC on network 1, the size of the state space is 47. The absorbing state containing the destination node is accessible by at least two transitions. The leaving probabilities \(q_{ij}\) of arcs (i, j) in network 1 As shown in Fig. 6, path 4: \(1 \rightarrow 6 \rightarrow 8\) is the stochastic shortest path of network 1, with arrival probability 0.6523 among 27 possible paths. The arrival probability of network 1 Network 2 and network 3 are grid networks, and the leaving probabilities of their arcs are shown in Table 3. The size of the state space for the established DTMC on network 2 is 76, and for network 3 it is 49. The leaving probabilities of arcs in network 2 and network 3 The destination node of network 2 is accessible after at least four transitions, and that of network 3 after at least three transitions. As shown in Fig. 7, path 11: \(1 \rightarrow 2 \rightarrow 4 \rightarrow 8 \rightarrow 9\) is the stochastic shortest path of network 2 with arrival probability 0.6535 among 33 paths. For network 3, path 3: \(1 \rightarrow 2 \rightarrow 5 \rightarrow 6 \rightarrow 9\) is the stochastic shortest path with arrival probability 0.3996 among 6 paths (see Fig. 8).
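To make the multi-step computation concrete, here is a minimal numerical sketch (in Python rather than the authors' MATLAB, and not their code) of the 5-state chain for the small example network of Fig. 1. The figure's actual leaving probabilities are not reproduced in the text, so the \(q_{ij}\) values below are made up; the entries for rows \(S_2\) and \(S_4\) use the expressions quoted in the text, while the remaining rows apply Theorems 1 and 2 in the same way and should be read as an illustration only.

```python
import numpy as np

# Illustrative leaving probabilities q_ij for the example network's arcs
# (1,2), (1,3), (1,4), (2,3), (3,4).  Fig. 1's actual numbers are not
# reproduced in the text, so these values are placeholders.
q = {(1, 2): 0.3, (1, 3): 0.3, (1, 4): 0.2, (2, 3): 0.6, (3, 4): 0.7}
wait = {i: 1 - sum(p for (a, _), p in q.items() if a == i) for i in (1, 2, 3)}
# wait[i] is the node wait probability q_ii = 1 - sum of leaving probabilities.

# States of the established DTMC: S1={1}, S2={1,2}, S3={1,3}, S4={1,2,3}, S5 absorbing.
P = np.zeros((5, 5))
# Row S1: single-node current state, so the transitions reduce to the leaving
# probabilities towards 2, 3 or the destination 4 (illustrative reading of Theorems 1-2).
P[0, 1] = q[(1, 2)]
P[0, 2] = q[(1, 3)]
P[0, 4] = q[(1, 4)]
# Row S2: the p_24 expression quoted in the text, plus absorption through arc (1,4).
P[1, 3] = wait[2] * q[(1, 3)] * (1 - q[(1, 4)]) + q[(2, 3)]
P[1, 4] = q[(1, 4)]
# Row S3: analogous application of Theorem 1 (reach node 2 through (1,2)) and Theorem 2.
P[2, 3] = q[(1, 2)] * (1 - q[(1, 4)]) * wait[3]
P[2, 4] = q[(1, 4)] + q[(3, 4)] - q[(1, 4)] * q[(3, 4)]
# Row S4: the p_45 expression quoted in the text.
P[3, 4] = q[(1, 4)] + q[(3, 4)] - q[(1, 4)] * q[(3, 4)]
# Diagonal wait probabilities (Theorem 3) and the absorbing state.
for k in range(4):
    P[k, k] = 1 - P[k, k + 1:].sum()
P[4, 4] = 1.0

# Arrival probability after r transitions = entry [P^r]_{1,|S|}.
for r in (1, 2, 3, 6, 20):
    print(r, np.linalg.matrix_power(P, r)[0, 4])
```

With these placeholder numbers the printed values increase with r, mirroring how Fig. 5 tracks \(p_{15}(r)\) over successive transitions.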
Network 4 is a complete graph with 9 nodes and 36 arcs and the leaving probabilities are shown in Table 4. The size of the state space for the established DTMC on network 4 is 129. The obtained arrival probabilities of network 4 are shown in Fig. 9, and path 16: \(1 \rightarrow 3 \rightarrow 5 \rightarrow 9\) is the stochastic shortest path with arrival probability 0.4882 among 128 possible paths. The obtained arrival probability in a network determines the general situation of the network to transmit flow from a source node toward a destination node (Shirdel and Abdolhosseinzadeh 2016); however, the presented method precisely determines the path with the largest probability amongst all paths. The arrival probability from a given source node to a given destination node was computed according to the probability of transition from the initial state to the absorbing state by multi-step transition probability of the established discrete time Markov chain in the original network. The proposed method to obtain the arrival probability determines that the destination node is accessible for the first time. The stochastic shortest path was separately determined which has the largest arrival probability value. So, this method can be applied to rank paths of a network by considering their obtained arrival probabilities. Also, the proposed method evaluates the reliability of connections in the networks. So, it can be used in the shortest path problem with recourse, where locally should be decided which path is selected to traverse. The discrete nature of the proposed model could apply meta-heuristic methods to reduce the computations. Also, the proposed method can be used for the stochastic problems as a policy evaluation index. Gholam H. Shirdel and Mohsen Abdolhosseinzadeh contributed equally to this work Both authors have equal contribution to this article. GS participated in the design and conceived of the study. MA worked out the algorithms and helped to draft the manuscript. Both authors read and approved the final manuscript. Department of Mathematics, Faculty of Basic Science, University of Qom, Qom, Iran Ahuja RK, Magnanti TL, Orlin JB (1993) Network flows: theory, algorithms, and applications. Prentice-Hall, Englewood CliffsMATHGoogle Scholar Azaron A, Modarres M (2005) Distribution function of the shortest path in networks of queues. OR Spectr 27:123–144MathSciNetView ArticleMATHGoogle Scholar Bellman R (1958) On a routing problem. Q Appl Math 16:87–90MATHGoogle Scholar Bertsekas DP, Tsitsiklis JN (1991) An analysis of stochastic shortest path problems. Math Oper Res 16:580–595MathSciNetView ArticleMATHGoogle Scholar Dijkstra EW (1959) A note on two problems in connexion with graphs. Numer Math 1:269–271MathSciNetView ArticleMATHGoogle Scholar Dréo J, Pétrowski A, Siarry P, Taillard E (2006) Metaheuristics for hard optimization. Springer, BerlinMATHGoogle Scholar Fan YY, Kalaba RE, Moore JE (2005) Arriving on time. J Optim Theory Appl 127:497–513MathSciNetView ArticleMATHGoogle Scholar Fan YY, Kalaba RE, Moore JE (2005) Shortest paths in stochastic networks with correlated link costs. Comput Math Appl 49:1549–1564MathSciNetView ArticleMATHGoogle Scholar Hutson KR, Shier DR (2009) Extended dominance and a stochastic shortest path problem. Comput Oper Res 36:584–596MathSciNetView ArticleMATHGoogle Scholar Ibe OC (2009) Markov processes for stochastic modeling. Academic Press, LondonMATHGoogle Scholar Ji X (2005) Models and algorithm for stochastic shortest path problem. 
Appl Math Comput 170:503–514. Kulkarni VG (1986) Shortest paths in networks with exponentially distributed arc lengths. Networks 16:255–274. Liu W (2010) Uncertain programming models for shortest path problem with uncertain arc length. In: Proceedings of the first international conference on Uncertainty Theory. Urumchi, China, pp 148–153. Nie Y, Fan Y (2006) Arriving-on-time problem discrete algorithm that ensures convergence. Transp Res Rec 1964:193–200. Orlin JB, Madduri K, Subramani K, Williamson M (2010) A faster algorithm for the single source shortest path problem with few distinct positive lengths. J Discr Algorithms 8:189–198. Pattanamekar P, Park D, Rillet LR, Lee J, Lee C (2003) Dynamic and stochastic shortest path in transportation networks with two components of travel time uncertainty. Transp Res Part C 11:331–354. Peer SK, Sharma DK (2007) Finding the shortest path in stochastic networks. Comput Math Appl 53:729–740. Rasteiro DDML, Anjo AJB (2004) Optimal path in probabilistic networks. J Math Sci 120:974–987. Ross SM (2006) Introduction to probability models. Academic Press, New York. Shirdel GH, Abdolhosseinzadeh M (2016) A genetic algorithm for the arrival probability in the stochastic networks. SpringerPlus 5:1–14. Thomas BW, White CC III (2007) The dynamic and stochastic shortest path problem with anticipation. Eur J Oper Res 176:836–854. Wu Q, Hartly J, Al-Dabass D (2004) Time-dependent stochastic shortest path(s) algorithms for a scheduled transportation network. Int J Simul 6:7–8.
CommonCrawl
Fight Finance Question 272 NPV Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow? More than $102, exactly $102 or less than $102? Question 278 inflation, real and nominal returns and cash flows Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account? Question 279 diversification Do you think that the following statement is true or false? "Buying a single company stock usually provides a safer return than a stock mutual fund." Question 1 NPV Jan asks you for a loan. He wants $100 now and offers to pay you back $120 in 1 year. You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Remember: ### V_0 = \frac{V_t}{(1+r_\text{eff})^t} ### Will you accept or reject Jan's deal? Question 2 NPV, Annuity Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Will you accept or reject Katya's deal? Question 4 DDM For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy Carla's share or politely decline? For a price of $6, Carlos will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy his share or politely decline? For a price of $102, Andrea will sell you a share which just paid a dividend of $10 yesterday, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be ##10(1+0.05)^1=$10.50## in one year from now, and the year after it will be ##10(1+0.05)^2=11.025## and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline? Question 59 NPV The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project? Project Cash Flows Time (yrs) Cash flow ($) (a) -100 (c) 10 (e) 132 Question 43 pay back period A project to build a toll road will take 3 years to complete, costing three payments of $50 million, paid at the start of each year (at times 0, 1, and 2). After completion, the toll road will yield a constant $10 million at the end of each year forever with no costs. So the first payment will be at t=4. The required return of the project is 10% pa given as an effective nominal rate. All cash flows are nominal. What is the payback period? (a) Negative since the NPV is negative. (b) Zero since the project's internal rate of return is less than the required return. (c) 15 years. (d) 18 years. (e) Infinite, since the project will never pay itself off. Question 182 NPV, IRR, pay back period A project's NPV is positive. Select the most correct statement: (a) The project should be rejected. (b) The project's IRR is more than its required return. (c) The project's IRR is less than its required return. (d) The project's IRR is equal to its required return.
(e) The project will never pay itself off, assuming that the discount rate is positive. Question 542 price gains and returns over time, IRR, NPV, income and capital returns, effective return For an asset price to double every 10 years, what must be the expected future capital return, given as an effective annual rate? (a) 0.2 (b) 0.116123 (c) 0.082037 (d) 0.071773 (e) 0.06054 For an asset price to triple every 5 years, what must be the expected future capital return, given as an effective annual rate? (e) 0.219755 Question 533 NPV, no explanation You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume twice as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end. How much can you consume at time zero and one? The answer choices are given in the same order. (a) $52,380.95, $52,380.95 (b) $62,500, $31,250 (c) $66,666.67, $33,333.33 (d) $68,750, $34,375 (e) $70,967.74, $35,483.87 You wish to consume half as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end. (a) $26,190.48, $52380.95 A firm is considering a business project which costs $10m now and is expected to pay a single cash flow of $12.1m in two years. Assume that the initial $10m cost is funded using the firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about net present value (NPV), internal rate of return (IRR) and payback period is NOT correct? (a) The NPV is zero. (b) The IRR is 10% pa, equal to the 10% cost of capital. (c) The payback period is two years assuming that the whole $12.1m cash flow occurs at t=2, or 1.826 years if the $12.1m cash flow is paid smoothly over the second year. (d) The project could be accepted or rejected, the owners would be indifferent. (e) If the project is accepted then the market value of the firm's assets will increase by $2.1m more than it would otherwise if the project was rejected. Question 9 DDM, NPV For a price of $129, Joanne will sell you a share which is expected to pay a $30 dividend in one year, and a $10 dividend every year after that forever. So the stock's dividends will be $30 at t=1, $10 at t=2, $10 at t=3, and $10 forever onwards. Question 20 NPV, APR, Annuity Your friend wants to borrow $1,000 and offers to pay you back $100 in 6 months, with more $100 payments at the end of every month for another 11 months. So there will be twelve $100 payments in total. She says that 12 payments of $100 equals $1,200 so she's being generous. If interest rates are 12% pa, given as an APR compounding monthly, what is the Net Present Value (NPV) of your friend's deal? (a) -648.51 (b) 60.28 (c) 70.88 (d) 125.51 Question 481 Annuity This annuity formula ##\dfrac{C_1}{r}\left(1-\dfrac{1}{(1+r)^3} \right)## is equivalent to which of the following formulas? Note the 3. In the below formulas, ##C_t## is a cash flow at time t. All of the cash flows are equal, but paid at different times. (a) ##C_0+C_1+C_2+C_3## (b) ##\dfrac{C_0+C_1+C_2+C_3}{(1+r)^3} ## (c) ##C_0+\dfrac{C_1}{(1+r)^1} +\dfrac{C_2}{(1+r)^2} + \dfrac{C_3}{(1+r)^3} ## (d) ##\dfrac{C_1}{(1+r)^1} +\dfrac{C_2}{(1+r)^2} + \dfrac{C_3}{(1+r)^3} ## (e) ##\dfrac{C_1}{(1+r)^1} + \dfrac{C_2}{(1+r)^2} ## Question 356 NPV, Annuity Your friend overheard that you need some cash and asks if you would like to borrow some money. 
She can lend you $5,000 now (t=0), and in return she wants you to pay her back $1,000 in two years (t=2) and every year after that for the next 5 years, so there will be 6 payments of $1,000 from t=2 to t=7 inclusive. What is the net present value (NPV) of borrowing from your friend? Assume that banks loan funds at interest rates of 10% pa, given as an effective annual rate. (a) -$1,000 (b) $209.2132 (c) $644.7393 (d) $1,040.6721 (e) $1,400.611 Some countries' interest rates are so low that they're zero. If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years? In other words, what is the present value of five $10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa? (a) $0 (b) $10 (c) $50 (d) Positive infinity (e) Priceless Question 479 perpetuity with growth, DDM, NPV Discounted cash flow (DCF) valuation prices assets by finding the present value of the asset's future cash flows. The single cash flow, annuity, and perpetuity equations are very useful for this. Which of the following equations is the 'perpetuity with growth' equation? (a) ##V_0=\dfrac{C_t}{(1+r)^t} ## (b) ##V_0=\dfrac{C_1}{r}.\left(1-\dfrac{1}{(1+r)^T} \right)= \sum\limits_{t=1}^T \left( \dfrac{C_t}{(1+r)^t} \right) ## (c) ##V_0=\dfrac{C_1}{r-g}.\left(1-\left(\dfrac{1+g}{1+r}\right)^T \right)= \sum\limits_{t=1}^T \left( \dfrac{C_t.(1+g)^t}{(1+r)^t} \right) ## (d) ##V_0=\dfrac{C_1}{r} = \sum\limits_{t=1}^\infty \left( \dfrac{C_t}{(1+r)^t} \right) ## (e) ##V_0=\dfrac{C_1}{r-g} = \sum\limits_{t=1}^\infty \left( \dfrac{C_t.(1+g)^t}{(1+r)^t} \right) ## Question 517 DDM A stock is expected to pay its next dividend of $1 in one year. Future annual dividends are expected to grow by 2% pa. So the first dividend of $1 will be in one year, the year after that $1.02 (=1*(1+0.02)^1), and a year later $1.0404 (=1*(1+0.02)^2) and so on forever. Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price. (a) $10 (b) $12.254902 (c) $12.5 (d) $12.75 (e) $13.75 A stock just paid a dividend of $1. Future annual dividends are expected to grow by 2% pa. The next dividend of $1.02 (=1*(1+0.02)^1) will be in one year, and the year after that the dividend will be $1.0404 (=1*(1+0.02)^2), and so on forever. A stock is just about to pay a dividend of $1 tonight. Future annual dividends are expected to grow by 2% pa. The next dividend of $1 will be paid tonight, and the year after that the dividend will be $1.02 (=1*(1+0.02)^1), and a year later 1.0404 (=1*(1+0.04)^2) and so on forever. For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be ##100(1+0.05)^1=$105.00##, and the year after it will be ##100(1+0.05)^2=110.25## and so on. Question 528 DDM, income and capital returns The perpetuity with growth formula, also known as the dividend discount model (DDM) or Gordon growth model, is appropriate for valuing a company's shares. ##P_0## is the current share price, ##C_1## is next year's expected dividend, ##r## is the total required return and ##g## is the expected growth rate of the dividend. ###P_0=\dfrac{C_1}{r-g}### The below graph shows the expected future price path of the company's shares. 
Which of the following statements about the graph is NOT correct? (a) Between points A and B, the share price is expected to grow by ##r##. (b) Between points B and C, the share price is expected to instantaneously fall by ##C_1##. (c) Between points A and C, the share price is expected to grow by ##g##. (d) Between points B and D, the share price is expected to grow by ##g##. (e) Between points D and E, the share price is expected to instantaneously fall by ##C_1.(1+r)^1##. The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation. ###P_0=\frac{d_1}{r-g}### A stock pays dividends annually. It just paid a dividend, but the next dividend (##d_1##) will be paid in one year. According to the DDM, what is the correct formula for the expected price of the stock in 2.5 years? (a) ##P_{2.5}=P_0 (1+g)^{2.5} ## (b) ##P_{2.5}=P_0 (1+r)^{2.5} ## (c) ##P_{2.5}=P_0 (1+g)^2 (1+r)^{0.5} ## (d) ##P_{2.5}=P_0 (1+r)^2 (1+g)^{0.5} ## (e) ##P_{2.5}=P_0 (1+r)^3 (1+g)^{-0.5} ## Question 28 DDM, income and capital returns ### P_{0} = \frac{C_1}{r_{\text{eff}} - g_{\text{eff}}} ### What would you call the expression ## C_1/P_0 ##? (a) The expected total return of the stock. (b) The expected income return of the stock. (c) The expected capital return of the stock. (d) The expected growth rate of the dividend. (e) The expected growth rate of the stock price. The following is the Dividend Discount Model (DDM) used to price stocks: If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected: (a) Dividend growth rate is equal to the long term expected growth rate of the stock price. (b) Dividend growth rate is equal to the long term expected capital return of the stock. (c) Dividend growth rate is equal to the long term expected dividend yield. (d) Total return of the stock is equal to its long term required return. (e) Total return of the stock is equal to the company's long term cost of equity. Question 497 income and capital returns, DDM, ex dividend date A stock will pay you a dividend of $10 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 5% pa, so the next dividend after the $10 one tonight will be $10.50 in one year, then in two years it will be $11.025 and so on. The stock's required return is 10% pa. What is the stock price today and what do you expect the stock price to be tomorrow, approximately? (a) $200 today and $210 tomorrow. (b) $210 today and $220 tomorrow. (c) $220 today and $230 tomorrow. (d) $210 today and $200 tomorrow. (e) $220 today and $210 tomorrow. Question 289 DDM, expected and historical returns, ROE In the dividend discount model: ###P_0 = \dfrac{C_1}{r-g}### The return ##r## is supposed to be the: (a) Expected future total return of the market price of equity. (b) Expected future total return of the book price of equity. (c) Actual historical total return on the market price of equity. (d) Actual historical total return on the book price of equity. (e) Actual historical return on equity (ROE) defined as (Net Income / Owners Equity). Question 36 DDM, perpetuity with growth A stock pays annual dividends which are expected to continue forever. It just paid a dividend of $10. The growth rate in the dividend is 2% pa. You estimate that the stock's required return is 10% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what will be the share price? 
(a) $127.5 (b) $125 (c) $102 (d) $101.98 (e) $100 A stock is expected to pay the following dividends: Cash Flows of a Stock Time (yrs) 0 1 2 3 4 ... Dividend ($) 0.00 1.00 1.05 1.10 1.15 ... After year 4, the annual dividend will grow in perpetuity at 5% pa, so; the dividend at t=5 will be $1.15(1+0.05), the dividend at t=6 will be $1.15(1+0.05)^2, and so on. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in three and a half years (t = 3.5)? (a) $27.7567 (b) $24.1226 (c) $23.5680 (d) $22.4457 (e) $3.6341 ### p_0 = \frac{d_1}{r - g} ### Which expression is NOT equal to the expected dividend yield? (a) ## r-g ## (b) ## \dfrac{d_1}{p_0} ## (c) ## \dfrac{d_5}{p_4} ## (d) ## \dfrac{d_5(1+g)^2}{p_6} ## (e) ## \dfrac{d_3}{p_0(1+r)^2} ## A fairly valued share's current price is $4 and it has a total required return of 30%. Dividends are paid annually and next year's dividend is expected to be $1. After that, dividends are expected to grow by 5% pa in perpetuity. All rates are effective annual returns. What is the expected dividend income paid at the end of the second year (t=2) and what is the expected capital gain from just after the first dividend (t=1) to just after the second dividend (t=2)? The answers are given in the same order, the dividend and then the capital gain. (a) $1.3, $0.26 (b) $1.25, $0.25 (c) $1.1025, $0.2205 (d) $1.05, $0.21 (e) $1, $0.2 Question 50 DDM, stock pricing, inflation, real and nominal returns and cash flows Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart. You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate. You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity. Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa. What is the current price of a BHP share? (e) $27.2330 Question 535 DDM, real and nominal returns and cash flows, stock pricing You are an equities analyst trying to value the equity of the Australian telecoms company Telstra, with ticker TLS. In Australia, listed companies like Telstra tend to pay dividends every 6 months. The payment around August is called the final dividend and the payment around February is called the interim dividend. Both occur annually. Today is mid-March 2015. TLS's last interim dividend of $0.15 was one month ago in mid-February 2015. TLS's last final dividend of $0.15 was seven months ago in mid-August 2014. Judging by TLS's dividend history and prospects, you estimate that the nominal dividend growth rate will be 1% pa. Assume that TLS's total nominal cost of equity is 6% pa. The dividends are nominal cash flows and the inflation rate is 2.5% pa. All rates are quoted as nominal effective annual rates. Assume that each month is exactly one twelfth (1/12) of a year, so you can ignore the number of days in each month. Calculate the current TLS share price. 
(a) $6.06 (b) $6.080152 (c) $6.149576 (d) $6.179509 (e) $6.300707 Question 217 NPV, DDM, multi stage growth model A stock is expected to pay a dividend of $15 in one year (t=1), then $25 for 9 years after that (payments at t=2 ,3,...10), and on the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates. What is the price of the stock now? (a) $361.78 (b) $236.33 (c) $237.93 (e) $223.24 Question 180 equivalent annual cash flow, inflation, real and nominal returns and cash flows Details of two different types of light bulbs are given below: Low-energy light bulbs cost $3.50, have a life of nine years, and use about $1.60 of electricity a year, paid at the end of each year. Conventional light bulbs cost only $0.50, but last only about a year and use about $6.60 of energy a year, paid at the end of each year. The real discount rate is 5%, given as an effective annual rate. Assume that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the Equivalent Annual Cost (EAC) of the low-energy and conventional light bulbs. The below choices are listed in that order. (a) 1.4873, 6.7857 (b) 1.6525, 6.7857 (c) 2.1415, 7.1250 (d) 14.8725, 6.7857 (e) 2.0924, 7.1250 Question 353 income and capital returns, inflation, real and nominal returns and cash flows, real estate A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order. (a) 3.9216%, 2.9412%, 0.9804%. (b) 3.9216%, 0.9804%, 2.9412%. (c) 3.9216%, 0.9804%, 0.9804%. (d) 1.9804%, 1.0000%, 0.9804%. (e) 1.9608%, 0.9804%, 0.9804%. Question 478 income and capital returns Total cash flows can be broken into income and capital cash flows. What is the name given to the income cash flow from owning shares? (a) Dividends. (b) Rent. (c) Coupons. (d) Loan payments. (e) Capital gains. A share was bought for $30 (at t=0) and paid its annual dividend of $6 one year later (at t=1). Just after the dividend was paid, the share price fell to $27 (at t=1). What were the total, capital and income returns given as effective annual rates? The choices are given in the same order: ##r_\text{total}## , ##r_\text{capital}## , ##r_\text{dividend}##. (a) -0.1, -0.3, 0.2. (b) -0.1, 0.1, -0.2. (c) 0.1, -0.1, 0.2. (d) 0.1, 0.2, -0.1. (e) 0.2, 0.1, -0.1. Question 404 income and capital returns, real estate One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area. The expected total return on Frank's residential property is 7% pa. He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27. The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on. 
(a) 3.1029% (b) 3.3201% (c) 3.7235% (d) 3.9841% (e) 7% Question 407 income and capital returns, inflation, real and nominal returns and cash flows A stock has a real expected total return of 7% pa and a real expected capital return of 2% pa. What is the nominal expected total return, capital return and dividend yield? The answers below are given in the same order. (a) 11.100%, 4.000%, 7.100%. (b) 11.140%, 4.040%, 7.100%. (c) 4.902%, 0.000%, 4.902%. (d) 9.140%, 4.040%, 5.100%. (e) 9.140%, 4.040%, 7.100%. Question 295 inflation, real and nominal returns and cash flows, NPV When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation: (I) Discount nominal cash flows by nominal discount rates. (II) Discount nominal cash flows by real discount rates. (III) Discount real cash flows by nominal discount rates. (IV) Discount real cash flows by real discount rates. Which of the above statements is or are correct? (a) I only. (b) III only. (c) IV only. (d) I and IV only. (e) II and III only. Question 526 real and nominal returns and cash flows, inflation, no explanation How can a nominal cash flow be precisely converted into a real cash flow? (a) ##C_\text{real, t}=C_\text{nominal,t}.(1+r_\text{inflation})^t## (b) ##C_\text{real,t}=\dfrac{C_\text{nominal,t}}{(1+r_\text{inflation})^t} ## (c) ##C_\text{real,t}=\dfrac{C_\text{nominal,t}}{r_\text{inflation}} ## (d) ##C_\text{real,t}=C_\text{nominal,t}.r_\text{inflation} ## (e) ##C_\text{real,t}=C_\text{nominal,t}.r_\text{inflation}.t## Question 473 market capitalisation of equity The below screenshot of Commonwealth Bank of Australia's (CBA) details were taken from the Google Finance website on 7 Nov 2014. Some information has been deliberately blanked out. What was CBA's market capitalisation of equity? (a) $431.18 billion (b) $429 billion (c) $134.07 billion (d) $8.44 billion (e) $3.21 billion Question 466 limited liability, business structure Which business structure or structures have the advantage of limited liability for equity investors? (a) Sole traders. (b) Partnerships. (c) Corporations. (d) All of the above. (e) None of the above Question 531 bankruptcy or insolvency, capital structure, risk, limited liability Who is most in danger of being personally bankrupt? Assume that all of their businesses' assets are highly liquid and can therefore be sold immediately. (a) Alice has $6,000 cash, owes $10,000 credit card debt due immediately and 100% owns a sole tradership business with assets worth $10,000 and liabilities of $3,000. (b) Billy has $10,000 cash, owes $6,000 credit card debt due immediately and 100% owns a corporate business with assets worth $3,000 and liabilities of $10,000. (c) Carla has $6,000 cash, owes $10,000 credit card debt due immediately and 100% owns a corporate business with assets worth $10,000 and liabilities of $3,000. (d) Darren has $10,000 cash, owes $6,000 credit card debt due immediately and 100% owns a sole tradership business with assets worth $3,000 and liabilities of $10,000. (e) Ernie has $1,000 cash, lent $3,000 to his friend, and doesn't have any personal debt or own any businesses. On his 20th birthday, a man makes a resolution. He will put $30 cash under his bed at the end of every month starting from today. His birthday today is the first day of the month. So the first addition to his cash stash will be in one month. He will write in his will that when he dies the cash under the bed should be given to charity. 
If the man lives for another 60 years, how much money will be under his bed if he dies just after making his last (720th) addition? Also, what will be the real value of that cash in today's prices if inflation is expected to 2.5% pa? Assume that the inflation rate is an effective annual rate and is not expected to change. The answers are given in the same order, the amount of money under his bed in 60 years, and the real value of that money in today's prices. (a) $21,600, $95,035.46 (b) $21,600, $49,515.44 (c) $21,600, $4,909.33 (d) $21,600, $2,557.86 (e) $11,254.05, $2,557.86 What is the present value of a real payment of $500 in 2 years? The nominal discount rate is 7% pa and the inflation rate is 4% pa. (a) $472.3557 (b) $471.298 (d) $435.7415 (e) $405.8112 Question 476 income and capital returns, idiom The saying "buy low, sell high" suggests that investors should make a: (a) Positive income return. (b) Positive capital return. (c) Negative income return. (d) Negative capital return. (e) Positive total return. An asset's total expected return over the next year is given by: ###r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0} ### Where ##p_0## is the current price, ##c_1## is the expected income in one year and ##p_1## is the expected price in one year. The total return can be split into the income return and the capital return. Which of the following is the expected capital return? (a) ##c_1## (b) ##p_1-p_0## (c) ##\dfrac{c_1}{p_0} ## (d) ##\dfrac{p_1}{p_0} -1## (e) ##\dfrac{p_1}{p_0} ## Question 525 income and capital returns, real and nominal returns and cash flows, inflation Which of the following statements about cash in the form of notes and coins is NOT correct? Assume that inflation is positive. Notes and coins: (a) Pay no income cash flow. (b) Have a nominal total return of zero. (c) Have a nominal capital return of zero. (d) Have a nominal income return of zero. (e) Have a real total return of zero. Question 444 investment decision, corporate financial decision theory The investment decision primarily affects which part of a business? (a) Assets. (b) Liabilities and owner's equity. (c) Current assets and current liabilities. (d) Dividends and buy backs. (e) Net income, also known as earnings or net profit after tax. Question 443 corporate financial decision theory, investment decision, financing decision, working capital decision, payout policy Business people make lots of important decisions. Which of the following is the most important long term decision? (a) Investment decision. (b) Financing decision. (c) Working capital decision. (d) Payout policy decision. (e) Capital or labour decision. Question 445 financing decision, corporate financial decision theory The financing decision primarily affects which part of a business? Question 467 book and market values Which of the following statements about book and market equity is NOT correct? (a) The market value of equity of a listed company's common stock is equal to the number of common shares multiplied by the share price. (b) The book value of equity is the sum of contributed equity, retained profits and reserves. (c) A company's book value of equity is recorded in its income statement, also known as the 'profit and loss' or the 'statement of financial performance'. (d) A new company's market value of equity equals its book value of equity the moment that its shares are first sold. 
From then on, the market value changes continuously but the book value which is recorded at historical cost tends to only change due to retained profits. (e) To buy all of the firm's shares, generally a price close to the market value of equity will have to be paid. Question 221 credit risk You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt. Which is the safest investment? Which will give the highest returns? (a) Junior debt is the safest. Preference shares will have the highest returns. (b) Preference shares are the safest. Ordinary shares will have the highest returns. (c) Senior debt is the safest. Ordinary shares will have the highest returns. (d) Junior debt is the safest. Ordinary shares will have the highest returns. (e) Senior debt is the safest. Junior debt will have the highest returns. You expect a nominal payment of $100 in 5 years. The real discount rate is 10% pa and the inflation rate is 3% pa. Which of the following statements is NOT correct? (a) The nominal cash flow of $100 in 5 years is equivalent to a real cash flow of $86.2609 in 5 years. This means that $86.2609 will buy the same amount of goods and services now as $100 will buy in 5 years. (b) The real discount rate of 10% pa is equivalent to a nominal discount rate of 13.3333% pa. (c) The nominal price of goods and services will increase by 3% every year. (d) The real price of goods and services will increase by 3% every year. (e) The present value of your payment will increase by the nominal discount rate every year. Question 446 working capital decision, corporate financial decision theory The working capital decision primarily affects which part of a business? Question 498 NPV, Annuity, perpetuity with growth, multi stage growth model A business project is expected to cost $100 now (t=0), then pay $10 at the end of the third (t=3), fourth, fifth and sixth years, and then grow by 5% pa every year forever. So the cash flow will be $10.5 at the end of the seventh year (t=7), then $11.025 at the end of the eighth year (t=8) and so on perpetually. The total required return is 10℅ pa. Which of the following formulas will NOT give the correct net present value of the project? (a) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^3} \right)}{(1+0.1)^2} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## (b) ##-100+ \dfrac{10}{(1+0.1)^3} +\dfrac{10}{(1+0.1)^4} +\dfrac{10}{(1+0.1)^5} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## (c) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^4} \right)}{(1+0.1)^2} +\dfrac{\left(\dfrac{10.5}{0.1-0.05}\right)}{(1+0.1)^6} ## (d) ##-100+ \dfrac{10}{(1+0.1)^3} +\dfrac{10}{(1+0.1)^4} +\dfrac{10}{(1+0.1)^5} +\dfrac{10}{(1+0.1)^6} +\dfrac{\left(\dfrac{10.5}{0.1-0.05}\right)}{(1+0.1)^6} ## (e) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^3} \right)}{(1+0.1)^3} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## Question 126 IRR What is the Internal Rate of Return (IRR) of the project detailed in the table below? Assume that the cash flows shown in the table are paid all at once at the given point in time. All answers are given as effective annual rates. (a) 0.21 (b) 0.105 (c) 0.1111 (d) 0.1 (e) 0 Question 37 IRR If a project's net present value (NPV) is zero, then its internal rate of return (IRR) will be: (a) Positive infinity (##+\infty##) (b) Zero (0). (c) Less than the project's required return. (d) More than the project's required return. 
(e) Equal to the project's required return. Question 500 NPV, IRR The below graph shows a project's net present value (NPV) against its annual discount rate. For what discount rate or range of discount rates would you accept and commence the project? All answer choices are given as approximations from reading off the graph. (a) From 0 to 10% pa. (b) From 0 to 5% pa. (c) At 5.5% pa. (d) From 6 to 20% pa. (e) From 0 to 20% pa. Which of the following statements is NOT correct? (a) When the project's discount rate is 18% pa, the NPV is approximately -$30m. (b) The payback period is infinite, the project never pays itself off. (c) The addition of the project's cash flows, ignoring the time value of money, is approximately $20m. (d) The project's IRR is approximately 5.5% pa. (e) As the discount rate rises, the NPV falls. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end (t=1). How much can you consume at each time? (a) $57,619.0476 (b) $55,000 (c) $53,809.5238 (d) $52,380.9524 (e) $50,000 Question 190 pay back period A project has the following cash flows: What is the payback period of the project in years? Normally cash flows are assumed to happen at the given time. But here, assume that the cash flows are received smoothly over the year. So the $500 at time 2 is actually earned smoothly from t=1 to t=2. (a) -0.80 (b) 0.80 (c) 1.20 (d) 1.80 (e) 2.20 The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are received smoothly over the year. So the $121 at time 2 is actually earned smoothly from t=1 to t=2. (a) 2.7355 (b) 2.3596 (d) 1.2645 (e) 0.2645 Question 502 NPV, IRR, mutually exclusive projects An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive. All of the development projects have the same risk and the required return of each is 10% pa. Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of returns (IRR's). Mutually Exclusive Projects now ($) Sale price in one year ($) IRR Petrol station 9,000,000 11,000,000 22.22 Car wash 800,000 1,100,000 37.50 Car park 70,000 110,000 57.14 Which project should the investor accept? (a) Petrol station. (b) Car wash. (c) Car park. (d) None of the projects. (e) All of the projects. Question 579 price gains and returns over time, time calculation, effective rate How many years will it take for an asset's price to double if the price grows by 10% pa? (a) 1.8182 years (b) 3.3219 years (c) 7.2725 years (d) 11.5267 years (e) 13.7504 years How many years will it take for an asset's price to quadruple (be four times as big, say from $1 to $4) if the price grows by 15% pa? (d) 9.919 years Question 250 NPV, Loan, arbitrage table Your neighbour asks you for a loan of $100 and offers to pay you back $120 in one year. You don't actually have any money right now, but you can borrow and lend from the bank at a rate of 10% pa. Rates are given as effective annual rates. Assume that your neighbour will definitely pay you back. Ignore interest tax shields and transaction costs. The Net Present Value (NPV) of lending to your neighbour is $9.09. 
Describe what you would do to actually receive a $9.09 cash flow right now with zero net cash flows in the future. (a) Borrow $109.09 from the bank and lend $100 of it to your neighbour now. (b) Borrow $100 from the bank and lend it to your neighbour now. (c) Borrow $209.09 from the bank and lend $100 to your neighbour now. (d) Borrow $120 from the bank and lend $100 of it to your neighbour now. (e) Borrow $90.91 from the bank and lend it to your neighbour now. You wish to consume an equal amount now (t=0), in one year (t=1) and in two years (t=2), and still have $50,000 in the bank after that (t=2). (b) $23,666.6667 (e) $16,666.6667 Question 489 NPV, IRR, pay back period, DDM A firm is considering a business project which costs $11m now and is expected to pay a constant $1m at the end of every year forever. (a) The NPV is negative $1m. (b) The IRR is 9.09% pa, less than the 10% cost of capital. (c) The payback period is infinite, the project will never pay itself off. (d) The project should be rejected. (e) If the project is accepted then the market value of the firm's assets will fall by $1m. Question 532 mutually exclusive projects, NPV, IRR An investor owns a whole level of an old office building which is currently worth $1 million. There are three mutually exclusive projects that can be started by the investor. The office building level can be: Rented out to a tenant for one year at $0.1m paid immediately, and then sold for $0.99m in one year. Refurbished into more modern commercial office rooms at a cost of $1m now, and then sold for $2.4m when the refurbishment is finished in one year. Converted into residential apartments at a cost of $2m now, and then sold for $3.4m when the conversion is finished in one year. All of the development projects have the same risk so the required return of each is 10% pa. The table below shows the estimated cash flows and internal rates of returns (IRR's). Project Cash flow now ($) Cash flow in Rent then sell as is -900,000 990,000 10 Refurbishment into modern offices -2,000,000 2,400,000 20 Conversion into residential apartments -3,000,000 3,400,000 13.33 (a) Rent then sell as is. (b) Refurbishment into modern offices. (c) Conversion into residential apartments. (e) Any of the above. Question 290 APR, effective rate, debt terminology Which of the below statements about effective rates and annualised percentage rates (APR's) is NOT correct? (a) An effective annual rate could be called: "a yearly rate compounding per year". (b) An APR compounding monthly could be called: "a yearly rate compounding per month". (c) An effective monthly rate could be called: "a yearly rate compounding per month". (d) An APR compounding daily could be called: "a yearly rate compounding per day". (e) An effective 2-year rate could be called: "a 2-year rate compounding every 2 years". Which of the following statements about effective rates and annualised percentage rates (APR's) is NOT correct? (a) Effective rates compound once over their time period. So an effective monthly rate compounds once per month. (b) APR's compound more than once per year. So an APR compounding monthly compounds 12 times per year. The exception is an APR that compounds annually (once per year) which is the same thing as an effective annual rate. (c) To convert an effective rate to an APR, multiply the effective rate by the number of time periods in one year. So an effective monthly rate multiplied by 12 is equal to an APR compounding monthly. 
(d) To convert an APR compounding monthly to an effective monthly rate, divide the APR by the number of months in one year (12). (e) To convert an APR compounding monthly to an effective weekly rate, divide the APR by the number of weeks in one year (approximately 52). Question 16 credit card, APR, effective rate A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: ### r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily} ### (a) 0.0072, 0.09, 0.0002. (b) 0.0139, 0.18, 0.0005. (c) 0.0139, 6.2876, 0.0055. (d) 0.015, 0.1956, 0.0005. (e) 0.015, 0.1956, 0.006. Question 26 APR, effective rate A European bond paying annual coupons of 6% offers a yield of 10% pa. Convert the yield into an effective monthly rate, an effective annual rate and an effective daily rate. Assume that there are 365 days in a year. ### r_\text{eff, monthly} , r_\text{eff, yearly} , r_\text{eff, daily} ### (b) 0.0080, 0.1, 0.0003. (c) 0.0083, 0.1, 0.0003. (d) 0.0083, 2.1384, 0.0031. (e) 0.0083, 0.1047, 0.0033. Question 131 APR, effective rate Calculate the effective annual rates of the following three APR's: A credit card offering an interest rate of 18% pa, compounding monthly. A bond offering a yield of 6% pa, compounding semi-annually. An annual dividend-paying stock offering a return of 10% pa compounding annually. ##r_\text{credit card, eff yrly}##, ##r_\text{bond, eff yrly}##, ##r_\text{stock, eff yrly}## (a) 0.1956, 0.0609, 0.1. (b) 0.015, 0.09, 0.1. (e) 6.2876, 0.1236, 0.1. Question 49 inflation, real and nominal returns and cash flows, APR, effective rate In Australia, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 2.83% pa. The inflation rate is currently 2.2% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months? (e) 0.6300% In Germany, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 0.04% pa. (a) -1.3529627% (b) -0.4977348% (c) 0.4977348% (d) 1.3529627% (e) 1.3621776% Question 265 APR, Annuity On his 20th birthday, a man makes a resolution. He will deposit $30 into a bank account at the end of every month starting from now, which is the start of the month. So the first payment will be in one month. He will write in his will that when he dies the money in the account should be given to charity. The bank account pays interest at 6% pa compounding monthly, which is not expected to change. If the man lives for another 60 years, how much money will be in the bank account if he dies just after making his last (720th) payment? (a) $712,534.12 (b) $211,628.47 (c) $21,600.00 (d) $15,993.85 (e) $5,834.58 Question 19 fully amortising loan, APR You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as a fully amortising loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage loan payments are paid in arrears (at the end of the month). (a) 900 (b) 2,700 (c) 2,722.1 (d) 2,843.71 (e) 34,424.99 You want to buy an apartment worth $500,000. You have saved a deposit of $50,000. 
The bank has agreed to lend you the $450,000 as a fully amortising mortgage loan with a term of 25 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) 1,500.00 (b) 2,250.00 (c) 2,855.79 Question 259 fully amortising loan, APR You want to buy a house priced at $400,000. You have saved a deposit of $40,000. The bank has agreed to lend you $360,000 as a fully amortising loan with a term of 30 years. The interest rate is 8% pa payable monthly and is not expected to change. (a) $1,000 (b) $1,106.6497 (c) $2,400 (e) $2,641.5525 You just agreed to a 30 year fully amortising mortgage loan with monthly payments of $2,500. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. The below choices are given in the same order. (a) $320,725.47, $284,977.19 (b) $310,704.66, $277,862.39 (c) $310,704.66, $197,354.23 (d) $308,209.62, $273,856.37 (e) $308,209.62, $192,529.73 You want to buy an apartment worth $400,000. You have saved a deposit of $80,000. The bank has agreed to lend you the $320,000 as a fully amortising mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (b) $1,600.00 (c) $1,885.99 (d) $1,918.56 (e) $23,247.65 You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) $32,692.01 You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $2,000 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 5 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. (a) 246,567.70, 93,351.63 (b) 246,567.70, 235,741.91 (c) 248,563.73, 96,346.75 (d) 248,563.73, 238,323.24 (e) 256,580.38, 245,314.97 You just signed up for a 30 year fully amortising mortgage with monthly payments of $1,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 20 years, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. (b) 166,791.61, 90,073.45 (c) 166,791.61, 139,580.77 (d) 165,177.97, 88,321.04 (a) 184,925.77, 164,313.82 Question 204 time calculation, fully amortising loan, APR To your surprise, you can actually afford to pay $2,000 per month and your mortgage allows early repayments without fees. If you maintain these higher monthly payments, how long will it take to pay off your mortgage? (a) 38.87 months, which is 3.24 yrs. (b) 47.91 months, which is 3.99 yrs. (c) 160.72 months, which is 13.39 yrs. (d) 164.65 months, which is 13.72 yrs. (e) None of the above. Question 29 interest only loan You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as an interest only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month). You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change. 
How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month). Question 107 interest only loan You want to buy an apartment worth $300,000. You have saved a deposit of $60,000. The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) 17,435.74 (e) 666.67 You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as an interest only loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) $ 1,250.00 (b) $ 2,250.00 (c) $ 2,652.17 (d) $ 2,697.98 (e) $ 32,692.01 Question 239 income and capital returns, inflation, real and nominal returns and cash flows, interest only loan A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk. From the bank's point of view, what is the long term expected nominal capital return of the loan asset? (a) Approximately 6%. (b) Approximately 4%. (c) Approximately 2%. (d) Approximately 0%. (e) Approximately -2%. A prospective home buyer can afford to pay $2,000 per month in mortgage loan repayments. The central bank recently lowered its policy rate by 0.25%, and residential home lenders cut their mortgage loan rates from 4.74% to 4.49%. How much more can the prospective home buyer borrow now that interest rates are 4.49% rather than 4.74%? Give your answer as a proportional increase over the original amount he could borrow (##V_\text{before}##), so: ###\text{Proportional increase} = \frac{V_\text{after}-V_\text{before}}{V_\text{before}} ### Assume that: Interest rates are expected to be constant over the life of the loan. Loans are interest-only and have a life of 30 years. Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates compounding per month. (a) 0.055679 Question 459 interest only loan, inflation In Australia in the 1980's, inflation was around 8% pa, and residential mortgage loan interest rates were around 14%. In 2013, inflation was around 2.5% pa, and residential mortgage loan interest rates were around 4.5%. If a person can afford constant mortgage loan payments of $2,000 per month, how much more can they borrow when interest rates are 4.5% pa compared with 14.0% pa? Give your answer as a proportional increase over the amount you could borrow when interest rates were high ##(V_\text{high rates})##, so: ###\text{Proportional increase} = \dfrac{V_\text{low rates}-V_\text{high rates}}{V_\text{high rates}} ### Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates (APR's) compounding per month. (a) 0.095 Question 45 profitability index What is the Profitability Index (PI) of the project? (b) 1.1 (d) 0.82845 Question 174 profitability index What is the Profitability Index (PI) of the project? Assume that the cash flows shown in the table are paid all at once at the given point in time. 
The required return is 10% pa, given as an effective annual rate.

Question 191 NPV, IRR, profitability index, pay back period
A project's Profitability Index (PI) is less than 1. Select the most correct statement: (c) The project's payback period will be less than 3 years. (e) The project's NPV is greater than 1.

Question 505 equivalent annual cash flow
A low-quality second-hand car can be bought now for $1,000 and will last for 1 year before it will be scrapped for nothing. A high-quality second-hand car can be bought now for $4,900 and it will last for 5 years before it will be scrapped for nothing. What is the equivalent annual cost of each car? Assume a discount rate of 10% pa, given as an effective annual rate. The answer choices are given as the equivalent annual cost of the low-quality car and then the high-quality car. (a) $100, $490 (b) $909.09, $608.5 (c) $1,000, $980 (d) $1,000, $1578.3 (e) $1,100, $1,292.61

You're advising your superstar client 40-cent who is weighing up buying a private jet or a luxury yacht. 40-cent is just as happy with either, but he wants to go with the more cost-effective option. These are the cash flows of the two options: The private jet can be bought for $6m now, which will cost $12,000 per month in fuel, piloting and airport costs, payable at the end of each month. The jet will last for 12 years. Or the luxury yacht can be bought for $4m now, which will cost $20,000 per month in fuel, crew and berthing costs, payable at the end of each month. The yacht will last for 20 years. What's unusual about 40-cent is that he is so famous that he will actually be able to sell his jet or yacht for the same price as it was bought, since the next generation of superstar musicians will buy it from him as a status symbol. Bank interest rates are 10% pa, given as an effective annual rate. You can assume that 40-cent will live for another 60 years and that when the jet or yacht's life is at an end, he will buy a new one with the same details as above. Would you advise 40-cent to buy the jet or the yacht? Note that the effective monthly rate is ##r_\text{eff monthly}=(1+0.1)^{1/12}-1=0.00797414##

Question 658 CFFA, income statement, balance sheet, no explanation
To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the income statement needed? Note that the income statement is sometimes also called the profit and loss, P&L, or statement of financial performance. (a) Net income, depreciation and interest expense. (b) Depreciation and capital expenditure. (d) Current assets, current liabilities and capital expenditure. (e) Current assets, current liabilities and depreciation expense.

Question 377 leverage, capital structure
Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. True or false?

Question 236 diversification, correlation, risk
Diversification in a portfolio of two assets works best when the correlation between their returns is: (a) -1 (b) -0.5

Question 559 variance, standard deviation, covariance, correlation
Which of the following statements about standard statistical mathematics notation is NOT correct? (a) The arithmetic average of variable X is represented by ##\bar{X}##. (b) The standard deviation of variable X is represented by ##\sigma_X##.
(c) The variance of variable X is represented by ##\sigma_X^2##. (d) The covariance between variables X and Y is represented by ##\sigma_{X,Y}^2##. (e) The correlation between variables X and Y is represented by ##\rho_{X,Y}##. Question 83 portfolio risk, standard deviation Portfolio Details return Standard deviation Correlation ##(\rho_{A,B})## Dollars A 0.1 0.4 0.5 60 B 0.2 0.6 140 What is the standard deviation (not variance) of the above portfolio? Question 557 portfolio weights, portfolio return An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 6% pa. Stock A has an expected return of 5% pa. Stock B has an expected return of 10% pa. What portfolio weights should the investor have in stocks A and B respectively? (a) 80%, 20% (b) 60%, 40% (c) 40%, 60% (d) 20%, 80% (e) 20%, 20% Question 556 portfolio risk, portfolio return, standard deviation An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 12% pa. Stock A has an expected return of 10% pa and a standard deviation of 20% pa. Stock B has an expected return of 15% pa and a standard deviation of 30% pa. The correlation coefficient between stock A and B's expected returns is 70%. What will be the annual standard deviation of the portfolio with this 12% pa target return? (a) 24.28168% pa (b) 24% pa (c) 22.126907% pa (d) 19.697716% pa (e) 16.970563% pa Question 563 correlation What is the correlation of a variable X with itself? The corr(X, X) or ##\rho_{X,X}## equals: (a) var(X) or ##\sigma_X^2## (b) sd(X) or ##\sigma_X## (d) 0 (e) Mathematically undefined What is the correlation of a variable X with a constant C? The corr(X, C) or ##\rho_{X,C}## equals: Question 561 covariance, correlation The covariance and correlation of two stocks X and Y's annual returns are calculated over a number of years. The units of the returns are in percent per annum ##(\% pa)##. What are the units of the covariance ##(\sigma_{X,Y})## and correlation ##(\rho_{X,Y})## of returns respectively? (a) Percentage points per annum ##(\text{pp pa})## and percentage points per annum ##(\text{pp pa})##. (b) Percentage points per annum ##(\text{pp pa})## and percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)##. (c) Percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)## and percentage points per annum ##(\text{pp pa})##. (d) Percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)## and percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)##. (e) Percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)## and a pure number with no units. Hint: Visit Wikipedia to understand the difference between percentage points ##(\text{pp})## and percent ##(\%)##. Question 306 risk, standard deviation Let the standard deviation of returns for a share per month be ##\sigma_\text{monthly}##. What is the formula for the standard deviation of the share's returns per year ##(\sigma_\text{yearly})##? Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average. 
(a) ##\sigma_\text{yearly} = \sigma_\text{monthly}## (b) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times 12## (c) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times 144## (d) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times \sqrt{12}## (e) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times {12}^{1/3}## Question 285 covariance, portfolio risk Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%. If the variance of stock A increases but the: Prices and expected returns of each stock stays the same, Variance of stock B's returns stays the same, Correlation of returns between the stocks stays the same. (a) The variance of the portfolio will increase. (b) The standard deviation of the portfolio will increase. (c) The covariance of returns between stocks A and B will stay the same. (d) The portfolio return will stay the same. (e) The portfolio value will stay the same. Question 293 covariance, correlation, portfolio risk All things remaining equal, the higher the correlation of returns between two stocks: (a) The more diversification is possible when those stocks are combined in a portfolio. (b) The lower the variance of returns of an equally-weighted portfolio of those stocks. (c) The lower the volatility of returns of an equal-weighted portfolio of those stocks. (d) The higher the covariance between those stocks' returns. (e) The more likely that when one stock has a positive return, the other has a negative return. Question 111 portfolio risk, correlation All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as: (a) The correlation between the stocks' returns rise. (b) The correlation between the stocks' returns decline. (c) The portfolio standard deviation declines. (d) Both stocks' individual variances decline. (e) Both stocks' individual standard deviations decline. Question 82 portfolio return deviation Correlation Dollars What is the expected return of the above portfolio? Question 558 portfolio weights, portfolio return, short selling (a) 200%, -100% (b) 200%, 100% (c) -100%, 200% (d) 100%, 200% (e) -100%, 100% Question 81 risk, correlation, diversification Stock A and B's returns have a correlation of 0.3. Which statement is NOT correct? (a) If stock A's return increases, stock B's return is also expected to increase. (b) If stock A's return decreases, stock B's return is also expected to decrease. (c) If stock A and B were combined in a portfolio, there would be no diversification at all since they are positively correlated. (d) Stock A and B's returns have positive covariance. (e) a and b. deviation Covariance ##(\sigma_{A,B})## Beta Dollars A 0.2 0.4 0.12 0.5 40 B 0.3 0.8 1.5 80 What is the standard deviation (not variance) of the above portfolio? Note that the stocks' covariance is given, not correlation. Question 72 CAPM, portfolio beta, portfolio risk deviation Correlation Beta Dollars What is the beta of the above portfolio? (b) 0.833333333 (d) 1.166666667 (e) 1.4 Question 294 short selling, portfolio weights Which of the following statements about short-selling is NOT true? (a) Short sellers benefit from price falls. (b) To short sell, you must borrow the asset from person A and sell it to person B, then later on buy an identical asset from person C and return it to person A. (c) Short selling only works for assets that are 'fungible' which means that there are many that are identical and substitutable, such as shares and bonds and unlike real estate. 
(d) An investor who short-sells an asset has a negative weight in that asset. (e) An investor who short-sells an asset is said to be 'long' that asset. Question 282 expected and historical returns, income and capital returns You're the boss of an investment bank's equities research team. Your five analysts are each trying to find the expected total return over the next year of shares in a mining company. The mining firm: Is regarded as a mature company since it's quite stable in size and was floated around 30 years ago. It is not a high-growth company; Share price is very sensitive to changes in the price of the market portfolio, economic growth, the exchange rate and commodities prices. Due to this, its standard deviation of total returns is much higher than that of the market index; Experienced tough times in the last 10 years due to unexpected falls in commodity prices. Shares are traded in an active liquid market. Your team of analysts present their findings, and everyone has different views. While there's no definitive true answer, who's calculation of the expected total return is the most plausible? The analysts' source data is correct and true, but their inferences might be wrong; All returns and yields are given as effective annual nominal rates. (a) Alice says 5% pa since she calculated that this was the average total yield on government bonds over the last 10 years. She says that this is also the expected total yield implied by current prices on one year government bonds. (b) Bob says 4% pa since he calculated that this was the average total return on the mining stock over the last 10 years. (c) Cate says 3% pa since she calculated that this was the average growth rate of the share price over the last 10 years. (d) Dave says 6% pa since he calculated that this was the average growth rate of the share market price index (not the accumulation index) over the last 10 years. (e) Eve says 15% pa since she calculated that this was the discount rate implied by the dividend discount model using the current share price, forecast dividend in one year and a 3% growth rate in dividends thereafter, which is the expected long term inflation rate. The following table shows a sample of historical total returns of shares in two different companies A and B. Stock Returns Total effective annual returns Year ##r_A## ##r_B## 2008 0.04 -0.2 2010 0.18 0.5 What is the historical sample covariance (##\hat{\sigma}_{A,B}##) and correlation (##\rho_{A,B}##) of stock A and B's total effective annual returns? (a) 0.05696, 0.702247 (b) 0.05696, 0.238663 (c) 0.053333, 0.936329 (d) 0.040889, 0.930519 (e) 0.04, 0.930519 Question 67 CFFA, interest tax shield Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations: ###NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)### ###CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp### What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and ##r_D## is the cost of debt. (a) ##D(1+r_D)## (b) ##D/(1+r_D) ## (c) ##D.r_D ## (d) ##D / r_D## (e) ##NI.r_D## Question 121 capital structure, leverage, financial distress, interest tax shield Fill in the missing words in the following sentence: All things remaining equal, as a firm's amount of debt funding falls, benefits of interest tax shields __________ and the costs of financial distress __________. (a) Fall, fall. (b) Fall, rises. (c) Rise, fall. (d) Rise, rise. 
(e) Remain unchanged, remain unchanged.

Question 301 leverage, capital structure, real estate
Your friend just bought a house for $1,000,000. He financed it using a $900,000 mortgage loan and a deposit of $100,000. In the context of residential housing and mortgages, the 'equity' or 'net wealth' tied up in a house is the value of the house less the value of the mortgage loan. Assuming that your friend's only asset is his house, his net wealth is $100,000. If house prices suddenly fall by 15%, what would be your friend's percentage change in net wealth? No income (rent) was received from the house during the short time over which house prices fell. Your friend will not declare bankruptcy; he will always pay off his debts. (a) -1,000% (b) -150% (c) -100% (d) -15% (e) -10%

Question 379 leverage, capital structure, payout policy
Companies must pay interest and principal payments to debt-holders. They're compulsory. But companies are not forced to pay dividends to shareholders. True or false?

The "interest expense" on a company's annual income statement is equal to the cash interest payments (but not principal payments) made to debt holders during the year. True or false?

Question 378 leverage, capital structure, no explanation
A levered company's required return on debt is always less than its required return on equity. True or false?

Interest expense on debt is tax-deductible, but dividend payments on equity are not. True or false?

Question 375 interest tax shield, CFFA
One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT). ###\begin{aligned} FFCF &= NOPAT + Depr - CapEx -\Delta NWC \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC \\ \end{aligned} \\### Does this annual FFCF include or exclude the annual interest tax shield?

Question 94 leverage, capital structure, real estate
Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000. In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So ##V=D+E##. If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell. ### r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0} ### where ##r_{0\rightarrow1}## is the return (percentage change) of an asset with price ##p_0## initially, ##p_1## one period later, and paying a cash flow of ##c_1## at time ##t=1##. (a) -100% (b) -50% (c) -12.5% (e) -8%

Question 506 leverage, accounting ratio
A firm has a debt-to-equity ratio of 25%. What is its debt-to-assets ratio? (a) 20% (b) 36% (c) 60% (d) 75% (e) 93.75%

One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT). ###\begin{aligned} FFCF &= (EBIT)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned} \\###

Question 80 CAPM, risk, diversification
Diversification is achieved by investing in a large number of stocks. What type of risk is reduced by diversification? (a) Idiosyncratic risk. (b) Systematic risk. (c) Both idiosyncratic and systematic risk. (d) Market risk. (e) Beta risk.
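The two FFCF (CFFA) formulas quoted above differ only by the interest tax shield term ##IntExp.t_c##, which is what Question 375 is getting at. Below is a minimal Python sketch, not part of the original questions and using made-up figures, that evaluates both versions and confirms the gap between them equals ##IntExp \times t_c##.

```python
def ffcf_from_nopat(rev, cogs, fc, depr, capex, d_nwc, tc):
    # FFCF = NOPAT + Depr - CapEx - dNWC, where NOPAT = (Rev - COGS - FC - Depr)*(1 - tc)
    nopat = (rev - cogs - fc - depr) * (1 - tc)
    return nopat + depr - capex - d_nwc

def ffcf_from_ebit(rev, cogs, fc, depr, capex, d_nwc, int_exp, tc):
    # FFCF = EBIT*(1 - tc) + Depr - CapEx - dNWC + IntExp*tc, where EBIT = Rev - COGS - FC - Depr
    ebit = rev - cogs - fc - depr
    return ebit * (1 - tc) + depr - capex - d_nwc + int_exp * tc

# Hypothetical figures in $m, purely for illustration (not taken from any question above).
rev, cogs, fc, depr, capex, d_nwc, int_exp, tc = 200, 80, 30, 20, 25, 5, 10, 0.3

excluding_shield = ffcf_from_nopat(rev, cogs, fc, depr, capex, d_nwc, tc)
including_shield = ffcf_from_ebit(rev, cogs, fc, depr, capex, d_nwc, int_exp, tc)

print(round(excluding_shield, 2))                      # NOPAT-based FFCF
print(round(including_shield, 2))                      # EBIT-based FFCF
print(round(including_shield - excluding_shield, 2))   # equals IntExp * tc = 3.0, the annual interest tax shield
```

The printed difference is exactly the ##IntExp.t_c## term, which is consistent with the NOPAT-based version excluding the annual interest tax shield and the EBIT-based version adding it back explicitly.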
Question 112 CAPM, risk
According to the theory of the Capital Asset Pricing Model (CAPM), total risk can be broken into two components, systematic risk and idiosyncratic risk. Which of the following events would be considered a systematic, undiversifiable event according to the theory of the CAPM?
(a) A decrease in house prices in one city. (b) An increase in mining industry tax rates. (c) An increase in corporate tax rates. (d) A case of fraud at a major retailer. (e) A poor earnings announcement from a major firm.

Question 173 CFFA
Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Candys Corp Income Statement for year ending 30th June 2013 ($m):
COGS 50
Operating expense 10
Depreciation 20
Interest expense 10
Income before tax 110
Tax at 30% 33
Net income 77
Balance sheet as at 30th June (2013, 2012, in $m):
Current assets 220, 180
Cost 300, 340
Accumul. depr. 60, 40
Carrying amount 240, 300
Total assets 460, 480
Current liabilities 175, 190
Non-current liabilities 135, 130
Owners' equity:
Retained earnings 50, 60
Contributed equity 100, 100
Total L and OE 460, 480
Note: all figures are given in millions of dollars ($m).
(c) 112

Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula? ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp###
(a) CapEx is added in the Net Income (NI) equation so it needs subtracting in the CFFA equation. (b) CapEx is a financing cash flow that needs to be ignored. Therefore it should be subtracted. (c) CapEx is not a cash flow, it's a non-cash expense made up by accountants that needs to be subtracted. (d) CapEx is subtracted to account for the net cash spent on capital assets. (e) CapEx is subtracted because it's too hard to predict, therefore we exclude it.

Question 349 CFFA, depreciation tax shield
Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? ###NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )### ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp###
(a) An increase in revenue (Rev). (b) An increase in rent expense (part of fixed costs, FC). (c) An increase in depreciation expense (Depr). (d) A decrease in net working capital (ΔNWC). (e) An increase in dividends.

Question 238 CFFA, leverage, interest tax shield
A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged. Ignoring the costs of financial distress, which of the following statements is NOT correct:
(a) The company is increasing its debt-to-assets and debt-to-equity ratios. These are types of 'leverage' or 'gearing' ratios. (b) The company will pay less tax to the government due to the benefit of interest tax shields. (c) The company's net income, also known as earnings or net profit after tax, will fall. (d) The company's expected levered firm free cash flow (FFCF or CFFA) will be higher due to tax shields. (e) The company's expected levered equity free cash flow (EFCF) will not change.

Find Sidebar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Sidebar Corp COGS 100 Rent expense 22 Taxable Income 210 Taxes at 30% 63 Net income 147 Inventory 70 50 Trade debtors 11 16 Rent paid in advance 4 3 PPE 700 680 Trade creditors 11 19 Bond liabilities 400 390 Retained profits 154 120 The cash flow from assets was: (a) $138m (b) $142m (c) $143m (d) $172m (e) $176m Over the next year, the management of an unlevered company plans to: Achieve firm free cash flow (FFCF or CFFA) of $1m. Pay dividends of $1.8m Complete a $1.3m share buy-back. Spend $0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above. All amounts are received and paid at the end of the year so you can ignore the time value of money. The firm has sufficient retained profits to pay the dividend and complete the buy back. The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year. How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued? (a) $2.1m (b) $1.3m (c) $0.8m (d) $0.3m (e) No new shares need to be issued, the firm will be sufficiently financed. Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant? ###NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )### ###CFFA=NI+Depr-CapEx - ΔNWC+IntExp### (b) An increase in rent expense (a type of recurring fixed cost, FC). (d) An increase in inventories (a current asset). (e) An decrease in interest expense (IntExp). Read the following financial statements and calculate the firm's free cash flow over the 2014 financial year. UBar Corp Gas expense 8 EBIT 60 Interest expense 0 Taxable income 60 Taxes 18 Cash 30 29 Accounts receivable 5 7 Pre-paid rent expense 1 0 Trade payables 20 18 Accrued gas expense 3 2 Non-current liabilities 0 0 Asset revaluation reserve 5 0 The firm's free cash flow over the 2014 financial year was: (d) $61 (e) $62 Find Trademark Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Trademark Corp Operating expense 5 Income before tax 30 Tax at 30% 9 Current assets 120 80 Carrying amount 90 100 Current liabilities 75 65 Non-current liabilities 75 55 Contributed equity 50 50 (a) -19 Find UniBar Corp's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. UniBar Corp Net income 7 Current liabilities 110 60 (a) 12 (c) -8 (d) -18 Find Piano Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Find World Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. World Bar Retained earnings 100 100 Note: all figures above and below are given in millions of dollars ($m). (e) -20 Find Scubar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Scubar Corp Trade debtors 19 6 Trade creditors 10 8 (a) $40m (b) $41m (c) $54m (d) $60m (e) $74m Find Ching-A-Lings Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. 
Ching-A-Lings Corp Taxes at 30% 9 Trade creditors 4 10 (e) $1m Question 366 opportunity cost, NPV, CFFA, needs refinement Your friend is trying to find the net present value of a project. The project is expected to last for just one year with: a negative cash flow of -$1 million initially (t=0), and a positive cash flow of $1.1 million in one year (t=1). The project has a total required return of 10% pa due to its moderate level of undiversifiable risk. Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project. He knows that the opportunity cost of investing the $1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. This opportunity cost is $0.1m ##(=1m \times 10\%)## which occurs in one year (t=1). He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year. Your friend has listed a few different ways to find the NPV which are written down below. (I) ##-1m + \dfrac{1.1m}{(1+0.1)^1} ## (II) ##-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1m}{(1+0.1)^1} \times 0.1 ## (III) ##-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1.1m}{(1+0.1)^1} \times 0.1 ## (IV) ##-1m + 1.1m - \dfrac{1.1m}{(1+0.1)^1} \times 0.1 ## (V) ##-1m + 1.1m - 1.1m \times 0.1 ## Which of the above calculations give the correct NPV? Select the most correct answer. (b) II only. (c) III only. (e) II and V only. Question 485 capital budgeting, opportunity cost, sunk cost A young lady is trying to decide if she should attend university or not. The young lady's parents say that she must attend university because otherwise all of her hard work studying and attending school during her childhood was a waste. What's the correct way to classify this item from a capital budgeting perspective when trying to decide whether to attend university? The hard work studying at school in her childhood should be classified as: (a) A sunk cost. (b) An opportunity cost. (c) A negative side effect. (d) A positive side effect. (e) A depreciation expense. A young lady is trying to decide if she should attend university. Her friends say that she should go to university because she is more likely to meet a clever young man than if she begins full time work straight away. What's the correct way to classify this item from a capital budgeting perspective when trying to find the Net Present Value of going to university rather than working? The opportunity to meet a desirable future spouse should be classified as: A man is thinking about taking a day off from his casual painting job to relax. He just woke up early in the morning and he's about to call his boss to say that he won't be coming in to work. But he's thinking about the hours that he could work today (in the future) which are: (d) A capital expense. A man has taken a day off from his casual painting job to relax. It's the end of the day and he's thinking about the hours that he could have spent working (in the past) which are now: Question 300 NPV, opportunity cost What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree, ignore any benefits or costs of the degree after it's completed. Assume the following: The degree takes 3 years to complete and all students pass all subjects. 
There are 2 semesters per year and 4 subjects per semester. University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years. There are 52 weeks per year. The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19). The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38). The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on. Working full time at the grocery store instead of studying full-time pays $20/hr and you can work 35 hours per week. Wages are paid at the end of each week. Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week. The discount rate is 9.8% pa. All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual. The NPV of costs from undertaking the university degree is: (b) $97,915.91 Question 511 capital budgeting, CFFA Find the cash flow from assets (CFFA) of the following project. One Year Mining Project Data Project life 1 year Initial investment in building mine and equipment $9m Depreciation of mine and equipment over the year $8m Kilograms of gold mined at end of year 1,000 Sale price per kilogram $0.05m Variable cost per kilogram $0.03m Before-tax cost of closing mine at end of year $4m Tax rate 30% Note 1: Due to the project, the firm also anticipates finding some rare diamonds which will give before-tax revenues of $1m at the end of the year. Note 2: The land that will be mined actually has thermal springs and a family of koalas that could be sold to an eco-tourist resort for an after-tax amount of $3m right now. However, if the mine goes ahead then this natural beauty will be destroyed. Note 3: The mining equipment will have a book value of $1m at the end of the year for tax purposes. However, the equipment is expected to fetch $2.5m when it is sold. Find the project's CFFA at time zero and one. Answers are given in millions of dollars ($m), with the first cash flow at time zero, and the second at time one. (a) -9, 15.65 (b) -9, 14.3 (c) -12, 16.8 (d) -12, 16.35 (e) -12, 14.3 Project life 2 years Initial investment in equipment $6m Depreciation of equipment per year for tax purposes $1m Unit sales per year 4m Sale price per unit $8 Variable cost per unit $3 Fixed costs per year, paid at the end of each year $1.5m Note 1: The equipment will have a book value of $4m at the end of the project for tax purposes. However, the equipment is expected to fetch $0.9 million when it is sold at t=2. Note 2: Due to the project, the firm will have to purchase $0.8m of inventory initially, which it will sell at t=1. The firm will buy another $0.8m at t=1 and sell it all again at t=2 with zero inventory left. The project will have no effect on the firm's current liabilities. Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m). (a) -6, 12.25, 16.68 (b) -6.8, 13.25, 14.05 (c) -6.8, 13.25, 15.88 (d) -6.8, 13.25, 18.51 (e) -6.8, 13.25, 17.71 Question 273 CFFA, capital budgeting Value the following business project to manufacture a new product. 
Project life 2 yrs Depreciation of equipment per year $3m Expected sale price of equipment at end of project $0.6m Fixed costs per year, paid at the end of each year $1m Interest expense per year 0 Weighted average cost of capital after tax per annum 10% The firm's current assets and current liabilities are $3m and $2m respectively right now. This net working capital will not be used in this project, it will be used in other unrelated projects. Due to the project, current assets (mostly inventory) will grow by $2m initially (at t = 0), and then by $0.2m at the end of the first year (t=1). Current liabilities (mostly trade creditors) will increase by $0.1m at the end of the first year (t=1). At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought. The project cost $0.5m to research which was incurred one year ago. All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year. All rates and cash flows are real. The inflation rate is 3% pa. All rates are given as effective annual rates. The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office. What is the expected net present value (NPV) of the project? (b) $8.481735m (c) $8.743802m (d) $8.991736m (e) $9.719008m Question 406 leverage, WACC, margin loan, portfolio return One year ago you bought $100,000 of shares partly funded using a margin loan. The margin loan size was $70,000 and the other $30,000 was your own wealth or 'equity' in the share assets. The interest rate on the margin loan was 7.84% pa. Over the year, the shares produced a dividend yield of 4% pa and a capital gain of 5% pa. What was the total return on your wealth? Ignore taxes, assume that all cash flows (interest payments and dividends) were paid and received at the end of the year, and all rates above are effective annual rates. (e) 11.7067% Hint: Remember that wealth in this context is your equity (E) in the house asset (V = D+E) which is funded by the loan (D) and your deposit or equity (E). Question 206 CFFA, interest expense, interest tax shield Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance'). How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer: Annual interest expense is equal to: (a) the bond's face value multiplied by its annual yield to maturity. (b) the bond's face value multiplied by its annual coupon rate. (c) the bond's market price at the start of the year multiplied by its annual yield to maturity. (d) the bond's market price at the start of the year multiplied by its annual coupon rate. (e) the future value of the actual cash payments of the bond over the year, grown to the end of the year, and grown by the bond's yield to maturity. Question 223 CFFA, interest tax shield Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant? (a) An increase in net capital spending. (b) A decrease in the cash flow to creditors. (c) An increase in interest expense. (d) An increase in net working capital. (e) A decrease in dividends paid. (a) An increase in revenue (##Rev##). 
(b) An decrease in revenue (##Rev##). (c) An increase in rent expense (part of fixed costs, ##FC##). (d) An increase in interest expense (##IntExp##). Question 68 WACC, CFFA, capital budgeting A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by? Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to. (a) The manufacturing firm's before-tax WACC. (b) The manufacturing firm's after-tax WACC. (c) A services firm's before-tax WACC, assuming that the services firm has the same debt-to-assets ratio as the manufacturing firm. (d) A services firm's after-tax WACC, assuming that the services firm has the same debt-to-assets ratio as the manufacturing firm. (e) The services firm's levered cost of equity. There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). Some include the annual interest tax shield in the cash flow and some do not. Which of the below FFCF formulas include the interest tax shield in the cash flow? ###(1) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp### ###(2) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp.(1-t_c)### ###(3) \quad FFCF=EBIT.(1-t_c )+ Depr- CapEx -ΔNWC+IntExp.t_c### ###(4) \quad FFCF=EBIT.(1-t_c) + Depr- CapEx -ΔNWC### ###(5) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -ΔNWC+IntExp.t_c### ###(6) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -ΔNWC### ###(7) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC### ###(8) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC-IntExp.t_c### ###(9) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC### ###(10) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC-IntExp.t_c### The formulas for net income (NI also called earnings), EBIT and EBITDA are given below. Assume that depreciation and amortisation are both represented by 'Depr' and that 'FC' represents fixed costs such as rent. ###NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )### ###EBIT=Rev - COGS - FC-Depr### ###EBITDA=Rev - COGS - FC### ###Tax =(Rev - COGS - Depr - FC - IntExp).t_c= \dfrac{NI.t_c}{1-t_c}### (a) 1, 3, 5, 7, 9. (b) 2, 4, 6, 8, 10. (c) 1, 4, 6, 8, 10. (d) 2, 3, 5, 7, 9. (e) 1, 3, 5, 8, 10. Question 89 WACC, CFFA, interest tax shield A retail furniture company buys furniture wholesale and distributes it through its retail stores. The owner believes that she has some good ideas for making stylish new furniture. She is considering a project to buy a factory and employ workers to manufacture the new furniture she's designed. Furniture manufacturing has more systematic risk than furniture retailing. Her furniture retailing firm's after-tax WACC is 20%. Furniture manufacturing firms have an after-tax WACC of 30%. Both firms are optimally geared. Assume a classical tax system. Which method(s) will give the correct valuation of the new furniture-making project? Select the most correct answer. (a) Discount the project's unlevered CFFA by the furniture manufacturing firms' 30% WACC after tax. (b) Discount the project's unlevered CFFA by the company's 20% WACC after tax. (c) Discount the project's levered CFFA by the company's 20% WACC after tax. (d) Discount the project's levered CFFA by the furniture manufacturing firms' 30% WACC after tax. (e) The methods outlined in answers (a) and (c) will give the same valuations, both are correct. 
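To check whether any one of the FFCF formulas (1) to (10) listed above includes the interest tax shield, a simple worked substitution (a sketch using only the definitions given above) is to rewrite net income in terms of EBIT and expand formula (1):
###\begin{aligned} FFCF_{(1)} &= NI + Depr - CapEx -\Delta NWC + IntExp \\ &= (EBIT - IntExp).(1-t_c) + Depr - CapEx -\Delta NWC + IntExp \\ &= EBIT.(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned}###
This is identical to formula (3), so formula (1) carries the interest tax shield term ##IntExp.t_c##, whereas formula (4), which lacks that term, is the corresponding version that excludes it. Repeating the same substitution for the EBITDA-based and tax-based formulas shows which of the remaining formulas carry the ##IntExp.t_c## term.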
Question 113 WACC, CFFA, capital budgeting
The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones. Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola. Motorola had a 20% after-tax WACC before it merged with Google. Google and Motorola have the same level of gearing. Both companies operate in a classical tax system. You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer. The mobile phone manufacturing project's:
(a) Unlevered CFFA should be discounted by Google's 10% WACC after tax. (b) Unlevered CFFA should be discounted by Motorola's 20% WACC after tax. (c) Levered CFFA should be discounted by Google's 10% WACC after tax. (d) Levered CFFA should be discounted by Motorola's 20% WACC after tax. (e) Unlevered CFFA by 15%, the average of Google and Motorola's WACC after tax.

A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following: ###\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + \\ &\space\space\space+ Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned}###

One method for calculating a firm's free cash flow (FFCF, or CFFA) is to ignore interest expense. That is, pretend that interest expense ##(IntExp)## is zero: ###\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp \\ &= (Rev - COGS - Depr - FC - 0)(1-t_c) + Depr - CapEx -\Delta NWC - 0\\ \end{aligned}### Does this annual FFCF with zero interest expense include or exclude the annual interest tax shield?

Question 413 CFFA, interest tax shield, depreciation tax shield
There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). One method is to use the following formulas to transform net income (NI) into FFCF including interest and depreciation tax shields: ###FFCF=NI + Depr - CapEx -ΔNWC + IntExp### ###NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )### Another popular method is to use EBITDA rather than net income. EBITDA is defined as: ###EBITDA=Rev - COGS - FC### One of the below formulas correctly calculates FFCF from EBITDA, including interest and depreciation tax shields, giving an identical answer to that above. Which formula is correct?
(a) ##FFCF=EBITDA+ Depr - CapEx -ΔNWC + IntExp## (b) ##FFCF=EBITDA.(1-t_c )+Depr- CapEx -ΔNWC## (c) ##FFCF=EBITDA.(1-t_c )+ Depr.t_c - CapEx -ΔNWC + IntExp.t_c## (d) ##FFCF=EBITDA.(1-t_c )+Depr.(1-t_c )- CapEx -ΔNWC+IntExp.(1-t_c)## (e) ##FFCF=EBITDA.(1-t_c )- CapEx -ΔNWC##

Question 370 capital budgeting, NPV, interest tax shield, WACC, CFFA
Initial investment in equipment $600k
Depreciation of equipment per year $250k
Expected sale price of equipment at end of project $200k
Revenue per job $12k
Variable cost per job $4k
Quantity of jobs per year 120
Fixed costs per year, paid at the end of each year $100k
Interest expense in first year (at t=1) $16.091k
Interest expense in second year (at t=2) $9.711k
Government treasury bond yield 5%
Bank loan debt yield 6%
Levered cost of equity 12.5%
Market portfolio return 10%
Beta of assets 1.24
Beta of levered equity 1.5
Firm's and project's debt-to-equity ratio 25%
The project will require an immediate purchase of $50k of inventory, which will all be sold at cost when the project ends.
Current liabilities are negligible so they can be ignored. The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. Note that interest expense is different in each year. Thousands are represented by 'k' (kilo). All rates and cash flows are nominal. The inflation rate is 2% pa. The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual. (a) $684.222k (b) $690.919k (c) $697.616k (d) $698.713k (e) $710.601k Question 242 technical analysis, market efficiency Select the most correct statement from the following. 'Chartists', also known as 'technical traders', believe that: (a) Markets are weak-form efficient. (b) Markets are semi-strong-form efficient. (c) Past prices cannot be used to predict future prices. (d) Past returns can be used to predict future returns. (e) Stock prices reflect all publically available information. Question 243 fundamental analysis, market efficiency Fundamentalists who analyse company financial reports and news announcements (but who don't have inside information) will make positive abnormal returns if: (a) Markets are weak and semi-strong form efficient but strong-form inefficient. (b) Markets are weak form efficient but semi-strong and strong-form inefficient. (c) Technical traders make positive excess returns. (d) Chartists make negative excess returns. (e) Insiders make negative excess returns. Question 100 market efficiency, technical analysis, joint hypothesis problem A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct? (I) Weak form market efficiency is broken. (II) Semi-strong form market efficiency is broken. (III) Strong form market efficiency is broken. (IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk. Select the most correct response: (a) Only III is true. (b) Only II and III are true. (c) Only I, II and III are true. (d) Only IV is true. (e) Either I, II and III are true, or IV is true, or they are all true. Question 119 market efficiency, fundamental analysis, joint hypothesis problem Your friend claims that by reading 'The Economist' magazine's economic news articles, she can identify shares that will have positive abnormal expected returns over the next 2 years. Assuming that her claim is true, which statement(s) are correct? (iv) The asset pricing model used to measure the abnormal returns (such as the CAPM) is either wrong (mis-specification error) or is measured using the wrong inputs (data errors) so the returns may not be abnormal but rather fair for the level of risk. (a) Only (iii) is true. (b) Only (ii) and (iii) are true. (c) Only (i), (ii) and (iii) are true. (d) Either (ii) and (iii) are true, or (iv) is true, or (ii), (iii) and (iv) are true. (e) Either (i), (ii) and (iii) are true, or (iv) is true, or all are true. Question 338 market efficiency, CAPM, opportunity cost, technical analysis A man inherits $500,000 worth of shares. 
He believes that by learning the secrets of trading, keeping up with the financial news and doing complex trend analysis with charts that he can quit his job and become a self-employed day trader in the equities markets. What is the expected gain from doing this over the first year? Measure the net gain in wealth received at the end of this first year due to the decision to become a day trader. Assume the following: He earns $60,000 pa in his current job, paid in a lump sum at the end of each year. He enjoys examining share price graphs and day trading just as much as he enjoys his current job. Stock markets are weak form and semi-strong form efficient. He has no inside information. He makes 1 trade every day and there are 250 trading days in the year. Trading costs are $20 per trade. His broker invoices him for the trading costs at the end of the year. The shares that he currently owns and the shares that he intends to trade have the same level of systematic risk as the market portfolio. The market portfolio's expected return is 10% pa. Measure the net gain over the first year as an expected wealth increase at the end of the year. (a) $110,000 (c) $45,000 (d) -$15,000 (e) -$65,000 Question 105 NPV, risk, market efficiency A person is thinking about borrowing $100 from the bank at 7% pa and investing it in shares with an expected return of 10% pa. One year later the person will sell the shares and pay back the loan in full. Both the loan and the shares are fairly priced. What is the Net Present Value (NPV) of this one year investment? Note that you are asked to find the present value (##V_0##), not the value in one year (##V_1##). (b) $3 (c) $2.8037 (d) $2.7273 (e) $0 Question 48 IRR, NPV, bond pricing, premium par and discount bonds, market efficiency The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct? (a) The internal rate of return (IRR) of buying a fairly priced bond is equal to the bond's yield. (b) The Present Value of a fairly priced bond's coupons and face value is equal to its price. (c) If a fairly priced bond's required return rises, its price will fall. (d) Fairly priced premium bonds' yields are less than their coupon rates, prices are more than their face values, and the NPV of buying them is therefore positive. (e) The NPV of buying a fairly priced bond is zero. Question 340 market efficiency, opportunity cost A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the start-of-year amount, but it is paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. What is the Net Present Value (NPV) of investing your money in the fund? Note that the question is not asking how much money you will have in 40 years, it is asking: what is the NPV of investing in the fund? Assume that: The fund has no private information. Markets are weak and semi-strong form efficient. The fund's transaction costs are negligible. The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible. 
(b) -$20,000.00 (c) -$48,000.17 (d) -$51,999.83 (e) -$80,000.00 Question 464 mispriced asset, NPV, DDM, market efficiency A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Assume that there are no dividend payments so the entire 15% total return is all capital return. Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% return lasts for the next 100 years (t=0 to 100), then reverts to 10% pa after that time? Also, what is the NPV of the investment if the 15% return lasts forever? In both cases, assume that the required return of 10% remains constant. All returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever): (a) $0, $0 (b) $1,977.19, $2,000 (c) $2,977.19, $3,000 (d) $499.96, $500 (e) $84,214.9, Infinite Question 417 NPV, market efficiency, DDM A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the end-of-year amount, paid at the end of every year. How much money do you expect to have in the fund in 40 years? Also, what is the future value of the fees that the fund expects to earn from you? Give both amounts as future values in 40 years. Assume that: The fund invests its fees in the same companies as it invests your funds in, but with no fees. The below answer choices list your expected wealth in 40 years and then the fund's expected wealth in 40 years. (a) $4,462,125.27, $63,800.29 (b) $3,407,788.62, $1,118,136.94 (c) $3,316,736.53, $1,209,189.03 (d) $2,172,452.15, $2,353,473.41 (e) $2,017,206.85, $2,508,718.71 Question 569 personal tax The average weekly earnings of an Australian adult worker before tax was $1,542.40 per week in November 2014 according to the Australian Bureau of Statistics. Therefore average annual earnings before tax were $80,204.80 assuming 52 weeks per year. Personal income tax rates published by the Australian Tax Office are reproduced for the 2014-2015 financial year in the table below: Tax on this income 0 – $18,200 Nil $18,201 – $37,000 19c for each $1 over $18,200 $37,001 – $80,000 $3,572 plus 32.5c for each $1 over $37,000 $80,001 – $180,000 $17,547 plus 37c for each $1 over $80,000 $180,001 and over $54,547 plus 45c for each $1 over $180,000 The above rates do not include the Medicare levy of 2%. Exclude the Medicare levy from your calculations How much personal income tax would you have to pay per year if you earned $80,204.80 per annum before-tax? Question 449 personal tax on dividends, classical tax system A small private company has a single shareholder. This year the firm earned a $100 profit before tax. All of the firm's after tax profits will be paid out as dividends to the owner. The corporate tax rate is 30% and the sole shareholder's personal marginal tax rate is 45%. The United States' classical tax system applies because the company generates all of its income in the US and pays corporate tax to the Internal Revenue Service. The shareholder is also an American for tax purposes. What will be the personal tax payable by the shareholder and the corporate tax payable by the company? (a) Personal tax of $6.43 and corporate tax of $45. (b) Personal tax of $15 and corporate tax of $30. (c) Personal tax of $16.5 and corporate tax of $45. (d) Personal tax of $31.5 and corporate tax of $30. (e) Personal tax of $45 and corporate tax of $0. 
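A brief worked check of Question 449 above, under the classical tax system described there: the profit is taxed once at the corporate level, and the after-tax amount paid out as a dividend is taxed again at the shareholder's marginal personal rate.
###\begin{aligned} \text{Corporate tax} &= 100 \times 0.30 = \$30 \\ \text{Cash dividend} &= 100 - 30 = \$70 \\ \text{Personal tax} &= 70 \times 0.45 = \$31.50 \\ \end{aligned}###
So the company pays $30 and the shareholder pays $31.50, matching choice (d); in total $61.50 of the $100 pre-tax profit goes to tax, which is the double-taxation effect that the imputation questions below are designed to contrast.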
Question 624 franking credit, personal tax on dividends, imputation tax system, no explanation Which of the following statements about Australian franking credits is NOT correct? Franking credits: (a) Refund the corporate tax paid by companies to their individual shareholders. Therefore they prevent the double-taxation of dividends at the corporate and personal level. (b) Are distributed to shareholders together with cash dividends. (c) Are also called imputation credits. (d) Are worthless to individuals who earn less than the tax-free threshold because they have a zero marginal personal tax rate. (e) Are worthless to individual shareholders who are foreigners for tax purposes. Question 448 franking credit, personal tax on dividends, imputation tax system The Australian imputation tax system applies because the company generates all of its income in Australia and pays corporate tax to the Australian Tax Office. Therefore all of the company's dividends are fully franked. The sole shareholder is an Australian for tax purposes and can therefore use the franking credits to offset his personal income tax liability. Question 309 stock pricing, ex dividend date A company announces that it will pay a dividend, as the market expected. The company's shares trade on the stock exchange which is open from 10am in the morning to 4pm in the afternoon each weekday. When would the share price be expected to fall by the amount of the dividend? Ignore taxes. The share price is expected to fall during the: (a) Day of the payment date, between the payment date's morning opening price and afternoon closing price. (b) Night before the payment date, between the previous day's afternoon closing price and the payment date's morning opening price. (c) Day of the ex-dividend date, between the ex-dividend date's morning opening price and afternoon closing price. (d) Night before the ex-dividend date, between the last with-dividend date's afternoon closing price and the ex-dividend date's morning opening price. (e) Day of the last with-dividend date, between the with-dividend date's morning opening price and afternoon closing price. Question 202 DDM, payout policy Currently, a mining company has a share price of $6 and pays constant annual dividends of $0.50. The next dividend will be paid in 1 year. Suddenly and unexpectedly the mining company announces that due to higher than expected profits, all of these windfall profits will be paid as a special dividend of $0.30 in 1 year. If investors believe that the windfall profits and dividend is a one-off event, what will be the new share price? If investors believe that the additional dividend is actually permanent and will continue to be paid, what will be the new share price? Assume that the required return on equity is unchanged. Choose from the following, where the first share price includes the one-off increase in earnings and dividends for the first year only ##(P_\text{0 one-off})## , and the second assumes that the increase is permanent ##(P_\text{0 permanent})##: (a) ##P_\text{0 one-off} = 9.6000, \space \space P_\text{0 permanent} = 6.2766## (b) ##P_\text{0 one-off} = 6.3000, \space \space P_\text{0 permanent} = 6.2769## (c) ##P_\text{0 one-off} = 9.6000, \space \space P_\text{0 permanent} = 6.3000## (d) ##P_\text{0 one-off} = 6.2769, \space \space P_\text{0 permanent} = 9.6000## (e) ##P_\text{0 one-off} = 6.3000, \space \space P_\text{0 permanent} = 9.6000## Note: When a firm makes excess profits they sometimes pay them out as special dividends. 
Special dividends are just like ordinary dividends but they are one-off and investors do not expect them to continue, unlike ordinary dividends which are expected to persist. Question 454 NPV, capital structure, capital budgeting A mining firm has just discovered a new mine. So far the news has been kept a secret. The net present value of digging the mine and selling the minerals is $250 million, but $500 million of new equity and $300 million of new bonds will need to be issued to fund the project and buy the necessary plant and equipment. The firm will release the news of the discovery and equity and bond raising to shareholders simultaneously in the same announcement. The shares and bonds will be issued shortly after. Once the announcement is made and the new shares and bonds are issued, what is the expected increase in the value of the firm's assets ##(\Delta V)##, market capitalisation of debt ##(\Delta D)## and market cap of equity ##(\Delta E)##? Assume that markets are semi-strong form efficient. The triangle symbol ##\Delta## is the Greek letter capital delta which means change or increase in mathematics. Ignore the benefit of interest tax shields from having more debt. Remember: ##\Delta V = \Delta D+ \Delta E## (a) ##\Delta V = 250m##, ##ΔD = 300m##, ##ΔE= 250## (b) ##\Delta V = 250m##, ##ΔD = 300m##, ##ΔE= 750## (c) ##\Delta V = 400m##, ##ΔD = 300m##, ##ΔE= -250## (d) ##\Delta V = 1,050m##, ##ΔD = 300m##, ##ΔE= 750## (e) ##\Delta V = 1,050m##, ##ΔD = 300m##, ##ΔE= 250## Question 568 rights issue, capital raising, capital structure A company conducts a 1 for 5 rights issue at a subscription price of $7 when the pre-announcement stock price was $10. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order. Ignore all taxes, transaction costs and signalling effects. (a) -16.67%, 20% (b) -5%, 20% (c) 0%, 20% (d) 7.14%, 20% (e) 11.67%, 0% Question 625 dividend re-investment plan, capital raising Which of the following statements about dividend re-investment plans (DRP's) is NOT correct? (a) DRP's are voluntary, shareholders only participate if they choose. (b) DRP's increase the number of shares. (c) The number of shares issued to a shareholder participating in a DRP is usually calculated as their total dividends owed, divided by the allocation share price which is usually close to the current market share price. (d) DRP's do not incur brokerage costs for the shareholder. This is unlike the case where the shareholder uses the cash dividend to buy more shares herself. (e) If all shareholders participated in a company's DRP, the company would not pay any dividends and the firm's share price would not fall due to the cash dividend or the DRP. Question 214 rights issue In late 2003 the listed bank ANZ announced a 2-for-11 rights issue to fund the takeover of New Zealand bank NBNZ. Below is the chronology of events: 23/10/2003. Share price closes at $18.30. 24/10/2003. 2-for-11 rights issue announced at a subscription price of $13. The proceeds of the rights issue will be used to acquire New Zealand bank NBNZ. Trading halt announced in morning before market opens. 28/10/2003. Trading halt lifted. Last (and only) day that shares trade cum-rights. Share price opens at $18.00 and closes at $18.14. 29/10/2003. Shares trade ex-rights. All things remaining equal, what would you expect ANZ's stock price to open at on the first day that it trades ex-rights (29/10/2003)? 
Ignore the time value of money since time is negligibly short. Also ignore taxes. (a) 17.3492 (b) 17.2308 (c) 14.8418 (d) 13.7908 (e) 13.7692 Question 708 continuously compounding rate, continuously compounding rate conversion Convert a 10% continuously compounded annual rate ##(r_\text{cc annual})## into an effective annual rate ##(r_\text{eff annual})##. The equivalent effective annual rate is: (a) 230.258509% pa (b) 10.536052% pa (e) 9.531018% pa Question 709 continuously compounding rate, APR Which of the following interest rate quotes is NOT equivalent to a 10% effective annual rate of return? Assume that each year has 12 months, each month has 30 days, each day has 24 hours, each hour has 60 minutes and each minute has 60 seconds. APR stands for Annualised Percentage Rate. (a) 9.7617696% is the APR compounding semi-annually ##(r_\text{apr comp 6mth})## (b) 9.5689685% is the APR compounding monthly ##(r_\text{apr comp monthly})## (c) 9.6454756% is the APR compounding daily ##(r_\text{apr comp daily})## (d) 9.5310182% is the APR compounding per second ##(r_\text{apr comp per second})## (e) 9.5310180% is the continuously compounded rate per annum ##(r_\text{cc annual})## A continuously compounded monthly return of 1% ##(r_\text{cc monthly})## is equivalent to a continuously compounded annual return ##(r_\text{cc annual})## of: (a) 12.682503% pa (c) 12% pa Question 712 effective rate conversion An effective monthly return of 1% ##(r_\text{eff monthly})## is equivalent to an effective annual return ##(r_\text{eff annual})## of: Question 714 return distribution, no explanation Which of the following quantities is commonly assumed to be normally distributed? (a) Prices, ##P_1##. (b) Gross discrete returns per annum, ##r_{\text{gdr 0 }\rightarrow \text{ 1}} = \dfrac{P_1}{P_0} ##. (c) Effective annual returns per annum also known as net discrete returns, ##r_{\text{eff 0 }\rightarrow \text{ 1}} = \dfrac{P_1 - P_0}{P_0} = \dfrac{P_1}{P_0}-1##. (d) Continuously compounded returns per annum, ##r_{\text{cc 0 }\rightarrow \text{ 1}} = \ln \left( \dfrac{P_1}{P_0} \right)##. (e) Annualised percentage rates compounding per month, ##r_{\text{apr comp monthly 0 }\rightarrow \text{ 1 mth}} = \left( \dfrac{P_1 - P_0}{P_0} \right) \times 12##. Question 716 return distribution The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Which of the below statements is NOT correct? (a)##-1 < \text{Red} < \infty## if Red is log-normally distributed. (b) ##-2 < \text{Green} < 2## if Green is normally distributed. (c) ##0 < \text{Blue} < \infty## if Blue is log-normally distributed. (d) If the Green distribution is normal, then the mode = median = mean. (e) If the Red and Blue distributions are log-normal, then the mode < median < mean. Question 718 arithmetic and geometric averages The symbol ##\text{GDR}_{0\rightarrow 1}## represents a stock's gross discrete return per annum over the first year. ##\text{GDR}_{0\rightarrow 1} = P_1/P_0##. The subscript indicates the time period that the return is mentioned over. So for example, ##\text{AAGDR}_{1 \rightarrow 3}## is the arithmetic average GDR measured over the two year period from years 1 to 3, but it is expressed as a per annum rate. Which of the below statements about the arithmetic and geometric average GDR is NOT correct? 
(a) Arithmetic average gross discrete return is ##\text{AAGDR}_{0\rightarrow T} = \dfrac{\text{GDR}_{0 \rightarrow 1}+\text{GDR}_{1\rightarrow 2}+...+\text{GDR}_{T-1 \rightarrow T}}{T}## (b) Geometric average gross discrete return is ##\text{GAGDR}_{0\rightarrow T} = \dfrac{\text{GDR}_{0 \rightarrow 1} . \text{GDR}_{1\rightarrow 2} ... \text{GDR}_{T-1 \rightarrow T}}{T}## (c) The geometric average gross discrete return can be quickly found using the first ##(P_0)## and last ##(P_T)## prices. ##\text{GAGDR}_{0\rightarrow T} = \left( \dfrac{P_T}{P_0} \right)^{1/T}## (d) The arithmetic average is always bigger than or equal to the geometric average, ##\text{AAGDR} \ge \text{GAGDR}##. (e) The arithmetic and geometric averages of returns will be equal if the variance of the stock's returns is zero.

Question 725 return distribution, mean and median returns
If a stock's future expected effective annual returns are log-normally distributed, what will be bigger, the stock's mean or median effective annual return? Or would you expect them to be the same?

Question 722 mean and median returns, return distribution, arithmetic and geometric averages, continuously compounding rate
Here is a table of stock prices and returns. Which of the statements below the table is NOT correct?
Price and Return Population Statistics
Time, Prices, LGDR, GDR, NDR
0, 100, -, -, -
1, 50, -0.6931, 0.5, -0.5
2, 100, 0.6931, 2, 1
Arithmetic average: 0, 1.25, 0.25
Arithmetic standard deviation: 0.6931, 0.75, 0.75
(a) The geometric average of the gross discrete returns (GAGDR) equals 1 which is 100%. (b) ##\text{GAGDR} = \exp \left( \text{AALGDR} \right)##. The GAGDR is equal to the natural exponent of the arithmetic average of the logarithms of the gross discrete returns (AALGDR). (c) ##\text{GAGDR} = \left( P_T/P_0 \right)^{1/T}##. The GAGDR equals the ratio of the last and first prices raised to the power of the inverse of the number of time periods between them. (d) ##\text{LGAGDR} = \text{AALGDR}##. This is always true, regardless of the distribution of the prices or returns and the number of return observations. The logarithm of the geometric average of the gross discrete returns (LGAGDR) is always equal to the arithmetic average of the logarithms of the gross discrete returns (AALGDR). (e) ##\text{LAAGDR} = \text{AALGDR} + \text{SDLGDR}^2/2##. This is always true, regardless of the distribution of the prices or returns and the number of return observations. The logarithm of the arithmetic average of the gross discrete returns (LAAGDR) equals the arithmetic average of the logarithms of the gross discrete returns (AALGDR) plus half the variance of the LGDR's.

Question 410 CAPM, capital budgeting
The CAPM can be used to find a business's expected opportunity cost of capital: ###r_i=r_f+β_i (r_m-r_f)### What should be used as the risk free rate ##r_f##?
(a) The current central bank policy rate (RBA overnight money market rate). (b) The current 30 day federal government treasury bill rate. (c) The average historical 30 day federal government treasury bill rate over the last 20 years. (d) The current 30 year federal government treasury bond rate. (e) The average historical 30 year federal government treasury bond rate over the last 20 years.

Question 674 CAPM, beta, expected and historical returns
A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. Over the last year, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%.
The risk free rate was unchanged. What do you think was the stock's historical return over the last year, given as an effective annual rate?
(a) -12.5% pa (b) -4% pa (c) -1.5% pa (d) -1% pa (e) 12.5% pa
In the last 5 minutes, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged. What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate?
(a) -12.5% (b) -4% (c) -1.5% (d) -1% (e) 12.5%

Question 672 CAPM, beta
What do you think will be the stock's expected return over the next year, given as an effective annual rate?
(a) 5% pa (b) 7.5% pa (d) 12.5% pa (e) 20% pa

Question 628 CAPM, SML, risk, no explanation
Assets A, B, M and ##r_f## are shown on the graphs above. Asset M is the market portfolio and ##r_f## is the risk free yield on government bonds. Assume that investors can borrow and lend at the risk free rate. Which of the below statements is NOT correct?
(a) Asset A has the same systematic risk as asset B. (b) Asset A has more total variance than asset B. (c) Asset B has zero idiosyncratic risk. Asset B must be a portfolio of half the market portfolio and half government bonds. (d) If risk-averse investors were forced to invest all of their wealth in a single risky asset, so they could not diversify, every investor would logically choose asset A over the other three assets. (e) Assets M and B have the highest Sharpe ratios, which is defined as the gradient of the capital allocation line (CAL) from the government bonds through the asset on the graph of expected return versus total standard deviation.

Unit sales per year 10m
Note 1: Due to the project, the firm will have to purchase $40m of inventory initially (at t=0). Half of this inventory will be sold at t=1 and the other half at t=2.
Note 2: The equipment will have a book value of $2m at the end of the project for tax purposes. However, the equipment is expected to fetch $1m when it is sold. Assume that the full capital loss is tax-deductible and taxed at the full corporate tax rate.
Note 3: The project will be fully funded by equity which investors will expect to pay dividends totaling $10m at the end of each year.
(a) -48, 54.5, 55.8 (b) -48, 54.5, 74.5 (c) -48, 74.5, 35.8 (d) -48, 74.5, 75.8 (e) -8, 74.5, 75.8

Question 605 cross currency interest rate parity, foreign exchange rate
If the Reserve Bank of Australia is expected to keep its interbank overnight cash rate at 2% pa while the US Federal Reserve is expected to keep its federal funds rate at 0% pa over the next year, is the AUD expected to appreciate, depreciate, or remain unchanged against the USD over the next year?

Question 626 cross currency interest rate parity, foreign exchange rate, forward foreign exchange rate
The Australian cash rate is expected to be 2% pa over the next one year, while the Japanese cash rate is expected to be 0% pa, both given as nominal effective annual rates. The current exchange rate is 100 JPY per AUD. What is the implied 1 year forward foreign exchange rate?
(a) 98.04 JPY per AUD. (b) 100 JPY per AUD. (c) 102 JPY per AUD. (d) 1.02 AUD per JPY. (e) 0.9804 AUD per JPY.

Question 667 forward foreign exchange rate, foreign exchange rate, cross currency interest rate parity, no explanation
The Australian cash rate is expected to be 2% pa over the next one year, while the US cash rate is expected to be 0% pa, both given as nominal effective annual rates. The current exchange rate is 0.73 USD per AUD.
What is the implied 1 year USD per AUD forward foreign exchange rate? (a) 0.7157 USD per AUD (b) 0.73 USD per AUD (c) 0.7446 USD per AUD (d) 0.9804 USD per AUD (e) 1.02 USD per AUD Question 246 foreign exchange rate, forward foreign exchange rate, cross currency interest rate parity Suppose the Australian cash rate is expected to be 8.15% pa and the US federal funds rate is expected to be 3.00% pa over the next 2 years, both given as nominal effective annual rates. The current exchange rate is at parity, so 1 USD = 1 AUD. (a) 1 USD = 1.1025 AUD (b) 1.1025 USD = 1 AUD (c) 1 USD = 1.05 AUD (d) 1 USD = 1.1 AUD (e) 1.1 USD = 1 AUD Question 617 systematic and idiosyncratic risk, risk, CAPM A stock's required total return will increase when its: (a) Systematic risk increases. (b) Idiosyncratic risk increases. (c) Total risk increases. (d) Systematic risk decreases. (e) Idiosyncratic risk decreases. Question 657 systematic and idiosyncratic risk, CAPM, no explanation A stock's required total return will decrease when its: Question 669 beta, CAPM, risk Which of the following is NOT a valid method for estimating the beta of a company's stock? Assume that markets are efficient, a long history of past data is available, the stock possesses idiosyncratic and market risk. The variances and standard deviations below denote total risks. (a) ##Β_E=\dfrac{cov(r_E,r_M )}{var(r_M)}## (b) ##Β_E=\dfrac{correl(r_E,r_M ).sd(r_E)}{sd(r_M)} ## (c) ##Β_E=\dfrac{sd(r_E)}{sd(r_M)}##, since ##var(r_E)=β_E^2.var(r_M)## (d) ##Β_E=\dfrac{r_E-r_f}{r_M-r_f }##, since ##r_E=r_f+Β_E. (r_M-r_f )## (e) ##Β_E= \left(B_V - \dfrac{D}{V}.Β_D \right).\dfrac{V}{E}##, since ##B_V=\dfrac{E}{V}.Β_E+\dfrac{D}{V}.Β_D ## Question 621 market efficiency, technical analysis, no explanation Technical traders: (a) Believe that asset prices follow a random walk. (b) Believe that markets are weak form efficient (c) Are pessimists, they believe that they cannot beat the market (d) Base their investment decisions on past publicly available news (e) Believe that the theory of weak form market efficiency is broken. Question 566 capital structure, capital raising, rights issue, on market repurchase, dividend, stock split, bonus issue A company's share price fell by 20% and its number of shares rose by 25%. Assume that there are no taxes, no signalling effects and no transaction costs. Which one of the following corporate events may have happened? (a) $1 cash dividend when the pre-announcement stock price was $5. (b) On-market buy-back of 20% of the company's outstanding stock. (c) 5 for 4 stock split. (d) 1 for 5 bonus issue. (e) 1 for 4 rights issue at a subscription price of $3 when the pre-announcement stock price was $5. Question 668 buy and hold, market efficiency, idiom A quote from the famous investor Warren Buffet: "Much success can be attributed to inactivity. Most investors cannot resist the temptation to constantly buy and sell." Buffet is referring to the buy-and-hold strategy which is to buy and never sell shares. Which of the following is a disadvantage of a buy-and-hold strategy? Assume that share markets are semi-strong form efficient. Which of the following is NOT an advantage of the strict buy-and-hold strategy? A disadvantage of the buy-and-hold strategy is that it reduces: (a) Capital gains tax. (b) Explicit transaction costs such as brokerage fees. (c) Implicit transaction costs such as bid-ask spreads. (d) Portfolio rebalancing to maintain maximum diversification. 
(e) Time wasted on researching whether it's better to buy or sell.

To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the balance sheet needed? Note that the balance sheet is sometimes also called the statement of financial position.
(c) Current assets, current liabilities and cost of goods sold (COGS).

Question 307 risk, variance
Let the variance of returns for a share per month be ##\sigma_\text{monthly}^2##. What is the formula for the variance of the share's returns per year ##(\sigma_\text{yearly}^2)##?
(a) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2## (b) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times 12## (c) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times 12^2## (d) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times \sqrt{12}## (e) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times {12}^{1/3}##

Question 560 standard deviation, variance
The standard deviation and variance of a stock's annual returns are calculated over a number of years. The units of the returns are percent per annum ##(\% pa)##. What are the units of the standard deviation ##(\sigma)## and variance ##(\sigma^2)## of returns respectively?
(e) Percent per annum ##(\% pa)## and percent per annum ##(\% pa)##.

If a stock's future expected continuously compounded annual returns are normally distributed, what will be bigger, the stock's mean or median continuously compounded annual return? Or would you expect them to be the same?
If a stock's expected future prices are log-normally distributed, what will be bigger, the stock's mean or median future price? Or would you expect them to be the same?

Question 706 utility, risk aversion, utility function
Mr Blue, Miss Red and Mrs Green are people with different utility functions. Note that a fair gamble is a bet that has an expected value of zero, such as paying $0.50 to win $1 in a coin flip with heads or nothing if it lands tails. Fairly priced insurance is when the expected present value of the insurance premiums is equal to the expected loss from the disaster that the insurance protects against, such as the cost of rebuilding a home after a catastrophic fire.
(a) Mr Blue, Miss Red and Mrs Green all prefer more wealth to less. This is rational from an economist's point of view. (b) Mr Blue is risk averse. He will not enjoy a fair gamble and would like to buy fairly priced insurance. (c) Miss Red is risk-neutral. She will not enjoy a fair gamble but wouldn't oppose it either. Similarly with fairly priced insurance. (d) Mrs Green is risk-loving. She would enjoy a fair gamble and would dislike fairly priced insurance. (e) Mr Blue would like to buy insurance, but only if it is fairly or under priced.

Question 704 utility, risk aversion, utility function, gamble
Each person has $256 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $256. Each player can flip a coin and if they flip heads, they receive $256. If they flip tails then they will lose $256. Which of the following statements is NOT correct?
(a) All people would appear rational to an economist since they prefer more wealth to less. (b) Mrs Green and Miss Red would appear unusual to an economist since they are not risk averse. (c) Mr Blue's certainty equivalent of the gamble is $64. This is less than his current wealth of $256 which is why he would refuse the gamble.
(d) Miss Red's certainty equivalent of the gamble is $256. This is the same as her current wealth of $256 which is why she would be indifferent to playing or not. (e) Mrs Green's certainty equivalent of the gamble is $512. This is more than her current wealth of $256 which is why she would love to play. (a) All people prefer more rather than less wealth which is rational. (b) Mr Blue is risk averse, Miss Red is risk neutral and Mrs Green is risk loving. (c) Mr Blue's certainty equivalent of the gamble is $225. This is less than his current $500 which is why he would dislike the gamble. (d) Miss Red's certainty equivalent of the gamble is $500. This is the same as her current $500 which is why she would be indifferent to gambling. (e) Mrs Green's certainty equivalent of the gamble is $793.70. This is more than her current $500 which is why she would like the gamble. Copyright © 2014 Keith Woodward
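As a quick numerical check of two of the calculations behind the questions above, the cross currency interest rate parity of Question 246 and the variance annualisation of Question 307, the short sketch below works through both. It is an illustrative aside, not part of the original question set; it assumes the standard parity relation and independent, identically distributed monthly returns, and the 2% monthly standard deviation is a made-up figure used only to show the scaling.

```python
# Illustrative sketch (not part of the original quiz): covered interest rate
# parity for Question 246 and variance annualisation for Question 307.

def forward_rate(spot, r_quote, r_base, years):
    """Forward rate implied by interest rate parity.
    spot    -- spot price of 1 unit of the base currency, in the quote currency
    r_quote -- effective annual rate of the quote currency
    r_base  -- effective annual rate of the base currency
    """
    return spot * ((1 + r_quote) / (1 + r_base)) ** years

# Question 246: spot at parity, AUD cash rate 8.15% pa, US rate 3.00% pa, 2 years.
# Quoting AUD per USD makes AUD the quote currency and USD the base currency.
aud_per_usd = forward_rate(spot=1.0, r_quote=0.0815, r_base=0.03, years=2)
print(round(aud_per_usd, 4))      # 1.1025, i.e. 1 USD = 1.1025 AUD (option a)

# Question 307: with i.i.d. monthly returns, variances add across the 12 months.
sigma_monthly = 0.02              # hypothetical 2% per month standard deviation
var_yearly = (sigma_monthly ** 2) * 12   # sigma_yearly^2 = sigma_monthly^2 * 12
print(var_yearly)                 # 0.0048
```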
BMC Bioinformatics SPServer: split-statistical potentials for the analysis of protein structures and protein–protein interactions Joaquim Aguirre-Plans1, Alberto Meseguer1, Ruben Molina-Fernandez1, Manuel Alejandro Marín-López1, Gaurav Jumde1, Kevin Casanova1, Jaume Bonet2, Oriol Fornes3, Narcis Fernandez-Fuentes4,5 & Baldo Oliva ORCID: orcid.org/0000-0003-0702-02501 BMC Bioinformatics volume 22, Article number: 4 (2021) Cite this article Statistical potentials, also named knowledge-based potentials, are scoring functions derived from empirical data that can be used to evaluate the quality of protein folds and protein–protein interaction (PPI) structures. In previous works we decomposed the statistical potentials in different terms, named Split-Statistical Potentials, accounting for the type of amino acid pairs, their hydrophobicity, solvent accessibility and type of secondary structure. These potentials have been successfully used to identify near-native structures in protein structure prediction, rank protein docking poses, and predict PPI binding affinities. Here, we present the SPServer, a web server that applies the Split-Statistical Potentials to analyze protein folds and protein interfaces. SPServer provides global scores as well as residue/residue-pair profiles presented as score plots and maps. This level of detail allows users to: (1) identify potentially problematic regions on protein structures; (2) identify disrupting amino acid pairs in protein interfaces; and (3) compare and analyze the quality of tertiary and quaternary structural models. While there are many web servers that provide scoring functions to assess the quality of either protein folds or PPI structures, SPServer integrates both aspects in a unique easy-to-use web server. Moreover, the server permits to locally assess the quality of the structures and interfaces at a residue level and provides tools to compare the local assessment between structures. https://sbi.upf.edu/spserver/. Three-dimensional (3D) structures of proteins and protein–protein interactions (PPIs) are essential to understand most biochemical functions of cells and living organisms. Yet, the amount of experimentally determined 3D structures is limited, especially for protein complexes. Structural models derived by computational methods can be used to close the gap between the number of sequences and structures. In the recent CASP13 competition, we have observed a dramatic progress in the quality of the template-free models made by novel computational methods involving deep learning techniques [1]. However, these methods need to be complemented by evaluation methods to know the margins of accuracy when we study the role of structural models in a biological system [2]. Evaluation methods can be classified into two categories: single- and multiple-model methods. Single-model methods only require one model as input, whereas multiple-model methods require several. The latter ones take advantage of the similarity between the distinct models to evaluate them, but they are not based on the properties of the model itself. In contrast, single-model methods are often based on the geometric and energetic analysis of the model coordinates, although some of them may also use additional information (e.g. for evolutionary related proteins) [3, 4]. For single-model methods, the most common approach is to use knowledge-based potentials, i.e. scoring functions derived from the analysis of empirical data [5]. 
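To make the idea of a knowledge-based potential more concrete, the toy sketch below derives a pairwise score from a table of observed residue-residue contacts by inverse Boltzmann statistics, that is, by comparing observed pair frequencies with those expected from composition alone. It is a deliberately minimal illustration of the general approach, not the potential used by the SPServer or by any other particular method; the composition-based reference state and the temperature constant are simplifying assumptions.

```python
# Toy illustration (not the SPServer code): inverse-Boltzmann conversion of
# observed residue-pair contact frequencies into a knowledge-based score.
import math
from collections import Counter
from itertools import combinations_with_replacement

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
KB_T = 0.6  # roughly kB*T in kcal/mol near room temperature; sets the scale only

def pairwise_potential(contacts):
    """contacts: iterable of (aa1, aa2) tuples observed in a structure database."""
    pair_counts = Counter(tuple(sorted(p)) for p in contacts)
    aa_counts = Counter(aa for p in contacts for aa in p)
    n_pairs = sum(pair_counts.values())
    n_aa = sum(aa_counts.values())

    potential = {}
    for a, b in combinations_with_replacement(AMINO_ACIDS, 2):
        observed = pair_counts[(a, b)] / n_pairs if n_pairs else 0.0
        # Reference state: contacts formed at random according to composition.
        expected = (aa_counts[a] / n_aa) * (aa_counts[b] / n_aa)
        if a != b:
            expected *= 2
        if observed > 0 and expected > 0:
            potential[(a, b)] = -KB_T * math.log(observed / expected)
    return potential

# Usage: pot = pairwise_potential(database_contacts)
#        score = sum(pot.get(tuple(sorted(c)), 0.0) for c in model_contacts)
```

Summing such per-contact terms over all contacts of a model gives a global pseudo-energy, which is the role played by the statistical potentials discussed in the following paragraphs.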
Several computational methods have been implemented from knowledge-based potentials [6,7,8]. Split-Statistical Potentials (SPs) are knowledge-based potentials that consider the frequency of pairs of residues in contact and include their structural environment, such as solvent accessibility and type of secondary structure. Previously, we demonstrated that SPs can be used to: (1) identify near-native protein decoys in structure prediction [9]; and (2) rank protein–protein docking poses [10, 11]. SPs compared favorably against 115 scoring functions on a docking decoy benchmark [12] and were successful at predicting binding energies of PPIs without requiring the native structures of the complexes [13]. Many scoring methods have been proposed to assess the quality of protein fold models [6,7,8, 14,15,16,17,18]. However, very few can be easily accessed as web servers by the non-specialized user. In most cases, the web servers have a reduced input flexibility (i.e. only accept models in PDB format, require chain identifiers and protein sequences, or do not accept multiple structures) and a complicated visualization of the results (i.e. do not permit to download results or do not have 3D visualization capabilities). Here, we present the Split-Statistical Potentials Server (SPServer) featuring our SPs for the evaluation of protein structures and PPIs. The web server has been designed to facilitate its use and the interpretation of results. When analyzing protein folds, the server returns global scores and shows score profiles along the protein sequence to identify potentially problematic regions in the structure. When analyzing PPIs, the server returns global scores and score maps of the interfaces. The SPServer identifies stabilizing and disrupting residue pairs that can be used as starting point for follow up protein engineering. The overall implementation of the web server is summarized in Fig. 1 and explained in detail as follows: General scheme of the functioning of the SPServer. The web server is divided into three sections: input, to upload either single protein structures (for fold analyses) or binary complexes (for protein–protein interaction analyses); scoring, to score the quality of the single and complex structures; and output, to display the local profiles of single structures and heatmap of residue-residue scores in the interface of the input binary complexes As input, users have to provide the structures of one or more proteins or protein complexes. The server input is flexible; users can provide either PDB structures, mmCIF files or compressed directories containing the structures to analyze. Users also have to select the parameter used to define residue contacts (i.e.12 Å cut-off between their β-carbons—option Cβ—or 5 Å between any atom of each residue—option MIN—). Often the structures used as input are produced by modelling or fold prediction approaches, because we are interested in checking the quality of models rather than the quality of experimental structures. In the case of structures of single proteins or folds, the most common methods to produce them are by homology modelling (e.g. by MODELLER [19]), remote homology (e.g. by PHYRE [20] or FUGUE [21]), by threading and ab initio fold prediction (e.g. by I-TASSER [22], THREADER [23], or in particular for sequences in CASP13 using AlphaFold [24]), or protein structure design (e.g. with Rosetta [25]). For protein–protein interactions the structures may be produced by template homology (e.g. 
from Interactome3D [26], PrePPI [27] or MODPIN [28]), template docking (e.g. by ICM [29]), docking (e.g. by pyDOCK [30], FTDOCK [31], V-D2OCK [32], PatchDock [33] or ZDOCK [34]) or directed docking (e.g. RosettaDock [25] and HADDOCK [35]). The first step of the scoring process is to identify the contacts between residues from the same protein (to score protein folds) or from different proteins (to score PPIs). These contacts consider the amino acids type, the distance between them, and environmental features such as the type of secondary structure or the degree of exposure of the amino acids. SPs provide a score for each one of these contacts. We obtain the score of a structure by performing the sum of scores of all its contacts. We can also get the scores of individual amino acids by performing the sum of scores of all the contacts of that residue. This can be used to define a score profile along the protein sequence. Residue scores can be averaged using a sliding window of size defined by the user along the protein sequence in order to smooth the profile. We defined SPs in previous works [9, 10] using the description of a potential of mean force (PMF), say the features describing an amino acid are defined by θ, with: θ = (secondary structure, polar character, degree of exposure). Then we define the potentials as in Eqs. 1–5: $${PMF}_{PAIR}\left(a,b\right)=-{k}_{B}T log\left(\frac{P\left(a,b | {d}_{ab}\right)}{P\left(a\right) P\left(b\right) P\left({d}_{ab}\right)}\right)$$ $${PMF}_{LOCAL}\left(a,b\right)={k}_{B}T log\left(\frac{P\left(a | {\theta }_{a}\right) P\left({\theta }_{a}\right)}{P\left(a\right)}\right)+{k}_{B}T log\left(\frac{P\left(b | {\theta }_{b}\right) P\left({\theta }_{b}\right)}{P\left(b\right)}\right)$$ $${PMF}_{3D}\left(a,b\right)={k}_{B}T log\left(P\left({d}_{ab}\right)\right)$$ $${PMF}_{3DC}\left(a,b\right)={k}_{B}T log\left(\frac{P\left({\theta }_{a},{\theta }_{b}\right) | {d}_{ab}}{P\left({\theta }_{a},{\theta }_{b}\right)}\right)$$ $${PMF}_{S3DC}\left(a,b\right)={-k}_{B}T log\left(\frac{P\left(a,b | {d}_{ab}, {\theta }_{a},{\theta }_{b}\right) P({\theta }_{a},{\theta }_{b})}{P\left(a,b | {\theta }_{a},{\theta }_{b}\right) P\left({\theta }_{a},{\theta }_{b} | {d}_{ab}\right)}\right)$$ with kB the Boltzmann constant, T the standard temperature (300 K), θa, and θb the features of amino acids a and b, and dab the distance between both residues. The terms P(·) denote the probabilities of observing interacting pairs (with or without conditions). For instance, P(a,b|dab) is the conditional probability that residues a,b interact at distance smaller than or equal to dab, and P(dab) is the probability of finding any pair of residues interacting at distance smaller than or equal to dab. The scores PAIR, ELOCAL, E3D, E3DC, and ES3DC are obtained by summing the PMF with the corresponding subindex of each pair of interacting residues a, b, either of the same protein (for fold) or between two interacting proteins (for PPIs), as in Eq. 6: $$E=\sum_{a,b}PMF\left(a,b\right)$$ We proved [9] that the classical statistic potential, PAIR, can be approximated to: $$PAIR=ES3DC-E3DC+E3D-ELOCAL+\varepsilon$$ With a residual ε that accounts for the reference state and becomes noise centered at 0 upon normalization (i.e. when transformed in Z-scores, see further). 
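Equation (6) simply accumulates the per-contact terms defined above over all interacting residue pairs. Purely as an illustration of that bookkeeping, the sketch below scores a single chain using the 12 Å Cβ contact criterion offered by the server (option Cβ). The distance-binned, environment-dependent form of the real potentials is collapsed here into a plain pair lookup, glycine (which lacks a Cβ atom) is ignored, and the short sequence-separation filter is an added assumption, so this is a schematic rather than the SPServer implementation.

```python
# Schematic sketch of Eq. (6): sum a precomputed pair potential over all
# residue-residue contacts of one chain, using a 12 A C-beta cutoff.
# Coordinates are given as a plain dict here; real use would parse a PDB/mmCIF file.
import math

def cb_distance(res_i, res_j):
    (x1, y1, z1), (x2, y2, z2) = res_i["CB"], res_j["CB"]
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)

def fold_score(residues, potential, cutoff=12.0, min_separation=3):
    """residues: list of dicts with keys 'aa' and 'CB' = (x, y, z).
    potential: dict mapping sorted amino-acid pairs to a per-contact value."""
    total = 0.0
    for i in range(len(residues)):
        for j in range(i + min_separation, len(residues)):  # skip trivial neighbours
            if cb_distance(residues[i], residues[j]) <= cutoff:
                key = tuple(sorted((residues[i]["aa"], residues[j]["aa"])))
                total += potential.get(key, 0.0)
    return total
```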
Hence, given that E3D nullifies when normalizing the scores and ε is irrelevant, we define another score, ECOMB, as: $$ECOMB=ES3DC-E3DC-ELOCAL$$ Furthermore, these potentials can be used to generate a profile per amino acid position along the sequence by summing the energies of the contacts of each residue. In conclusion, the SPServer has 6 types of SPs available that differ on the environmental features considered for the contact definition: (1) ES3DC considers residue frequencies along distances and their environments (i.e. hydrophobicity of each amino acid, solvent accessibility and secondary structure); (2) E3DC considers frequencies along distances of pairs referred by the hydrophobicity of the amino acids and the rest of their environments; (3) PAIR considers amino acid frequencies along distances; (4) ELOCAL considers amino acid frequencies on a particular environment; (5) E3D considers the frequencies of any pair of residues along distances; and finally, (6) ECOMB combines ES3DC, ELOCAL and E3DC scores [9]. Additionally, Z-scores are provided for each one of these scoring functions by normalizing the scores with respect to the average and standard deviation of 1000 random sequences with the same structure. Similarly, scoring profiles can also be transformed into Z-scoring profiles by normalizing with respect to the 20 possible amino acids in each position. As calculated, scores are proxy measures for energy, and thus, the lowest the score is, the closer the model is to the native-like structure. Output for protein folds For a set of protein folds, the SPServer outputs: (1) the global scores (raw and normalized) of all SPs; and (2) the scoring profile per residue (local scores) along the protein sequence. Global scores account for the overall quality of structural models, while per-residue score plots pinpoint problematic regions of the models that likely have either a wrong conformation or contacts with a wrongly modelled region. Output for protein–protein interactions For PPIs, the server outputs: (1) global scores for the quality of the interface between the two interacting proteins; (2) a measure of the penetration between two proteins to assess for steric clashes at the interface; and (3) interface maps with the scores of residue contact-pairs between the two proteins. Global scores inform on the overall quality of the interaction (i.e. for ranking docking poses). The measurement of steric hindrances is indicated in a color legend depending on the relevance of the clashes (see Additional file 1: Data and Additional file 2: Figure S1 for details). Finally, interface maps allow for detailed exploration of the protein interfaces at residue level. The server also provides different tools to smooth and compare interface maps. Case study 1: Evaluation of the structural models of Cysteine synthase A We compared the native structure of Cysteine synthase A from E. coli with two decoys of predicted structures: a near-native structure and a wrong decoy. All structures were retrieved from the CASP12 dataset (codes T0861, T0861TS275_2 and T0861TS321_1) [36]. The global scores rank the native structure with the lowest score, followed by the near-native and the wrong decoy (see Additional file 13: Table S1). Local score profiles of the native and the near-native structures are very similar, while the profile of the wrong decoy is different (see Additional file 3: Figure S2 and Additional file 4: Figure S3). 
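The normalisation and smoothing steps described above lend themselves to a compact sketch: the global Z-score compares a model's score with the distribution obtained from random sequences threaded onto the same structure, and per-residue profiles such as those in the case study are smoothed with a user-chosen sliding window (a window of 10 is used in the case studies). The code below is a generic illustration under those assumptions; it reuses a scoring function with the signature of the fold_score sketch above and does not reproduce the per-position, 20-amino-acid normalisation used for the server's Z-scored profiles.

```python
# Generic illustration: Z-score of a global score against shuffled sequences,
# and sliding-window smoothing of a per-residue profile.
import random
import statistics

def z_score(residues, potential, score_fn, n_random=1000):
    observed = score_fn(residues, potential)
    random_scores = []
    for _ in range(n_random):
        shuffled_aas = [r["aa"] for r in residues]
        random.shuffle(shuffled_aas)                      # same structure, random sequence
        shuffled = [{**r, "aa": aa} for r, aa in zip(residues, shuffled_aas)]
        random_scores.append(score_fn(shuffled, potential))
    mu = statistics.mean(random_scores)
    sigma = statistics.stdev(random_scores)
    return (observed - mu) / sigma if sigma else 0.0

def smooth_profile(per_residue_scores, window=10):
    half = window // 2
    smoothed = []
    for i in range(len(per_residue_scores)):
        chunk = per_residue_scores[max(0, i - half): i + half + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```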
Moreover, we compared the results of SPServer PAIR potential with a standard statistical potential (PROSA [6]). Both potentials show similar differences between the profiles of the native structure and the wrong decoy (Pearson correlation coefficient = 0.50), and highlight the residue-residue contact areas corresponding with wrongly modelled regions of the decoy structure (see Fig. 2). Comparison of the residue pair scores for the native and wrong decoy structures of cysteine synthase calculated with PROSA and SPServer. a Residue-residue contact maps are shown at the top, with green/blue, pink/red and brown/yellow colors identifying native contacts that have been lost when comparing the native structure and the wrong decoy, where native contacts are lost. b Local profile of the difference between the scores per residue of the native structure and the wrong decoy (in red are shown the scores of PAIR and in blue the scores of Pair potential of PROSA). The regions highlighted in the contact maps are also shown on the X-axis above the residue number, showing a coincidence between high scores and the regions where the wrong decoy differs from the native structure Case study 2: Mutation in the interaction between BAX and BID The interaction of BAX with BID mediates the insertion of BAX in the outer mitochondrial membrane, which induces apoptosis [37]. The BAX variant G108V has been associated with Burkitt Lymphoma [38]. We analyzed the interaction BAX-BID in its native form and the G108V variant (mutant form) generated with Modeller [19]. At a global level, only two of SPs are slightly higher for the mutant (i.e. PAIR, ES3DC and their respective Z-scores) while the rest remain unaffected (see Additional file 14: Table S2). However, the analysis of the interface identifies the detrimental effect of the mutation, as observed in the region around residues 108–110 of BAX (see Additional file 5: Figure S4). Evaluation of the SPServer global and residue scores on the CASP12 benchmark We test the SPs of the SPServer on the CASP12 [36] benchmark curated by López-Blanco et al. [39] (Additional file 17: Table S5). We classify the decoys of the benchmark as near-native (GDT_TS \(\ge\) 65%, as defined in [40]) and wrong (GDT_TS < 65%). The final CASP12 benchmark contains 9,977 structures, of which 2,100 were classified as near-native and 7,845 as wrongly modelled, and 32 were the native structure. We compare SPServer global and local scores with those from two standard scoring programs: PROSA [6] and DOPE [41]. In Fig. 3, we show the distributions of different scores for wrongly modelled decoys, near-native decoys and native structures in the CASP12 benchmark for proteins with different length. The scoring functions distinguish between native and non-native structures, assigning lower scores to native, higher scores to near-native and much higher to wrong decoy conformations. For proteins longer than 200 residues, all scoring approaches clearly separate native, near-native and wrong conformations. However, the scores of PROSA (Z-score of Pair potential), ZES3DC (Z-score normalized ES3DC) and ZPAIR (Z-score of PAIR) are optimal to distinguish between native and non-native structures. Distribution of scores for proteins in CASP12 dataset. Scores of native (green), near native (blue) and wrong decoy structures (red) are shown with respect to the protein number of residues. 
The figure shows in four panels the distributions of scores obtained with PROSA (Z-score of Pair potential), DOPE and the Z-scores of PAIR (ZPAIR) and ES3DC (ZES3DC). Distribution of scores independent of protein length are shown in the left of each panel In the Additional file 1: Data, we include the pairwise correlations between the global (full protein) and local (per residue) scores of the SPServer scoring functions PAIR and ES3DC, and the scores of PROSA (Pair potential) and DOPE. The Pearson correlation coefficients between the potentials ZPAIR and ZES3DC and the state-of-the-art potentials PROSA and DOPE are higher than 0.6 (ranging between 0.6 and 0.72, see Additional file 15: Table S3 and Additional file 6: Figure S5). We also compared the local scores (profiles per residue) of the different scoring functions. The SPServer profiles with score PAIR are correlated with the profiles using DOPE (0.57) and PROSA (0.38) (see Additional file 16: Table S4 and Additional file 12: Figure S11). Additionally, we compare the global Z-scores of SPServer with three quality metrics used as reference in CASP: Template Modelling (TM) score [42], Global Distance Test (GDT_TS) [43] and Quality Control Score (QCS) [44]. TM score and GDT_TS measure the quality of a model based on its similarity with the native structure. In contrast, QCS measures the quality of the model based on structural features such as the position of its secondary structure elements. Additional file 15: Table S3 and Additional file 6: Figure S5 compare ZPAIR and ZES3DC global scores with TM, GDT_TS and QCS (Pearson correlations range between − 0.44 and − 0.58). Our scores compete with other scores, such as the Z-score of PROSA or the global score of DOPE, showing similar Pearson correlations with both (ranging between − 0.1 and − 0.47), proving their utility to detect the right fold among several decoys. The comparison of scores for all the CASP12 structures can be easily visualized as scatter plots in Additional files 7–11: Figures S6–S10. Comparison of the SPServer interface with other protein scoring web servers We compared the SPServer in terms of input flexibility, user-friendliness, speed and intuitive visualization of results with other state-of-the-art functional web servers for protein fold assessment (ANOLEA [14], MODFOLD6 [18], ProQ3D [17], ProSA-web [6], QMEAN [16], Verify 3D [15], VoroMQA [8]). SPServer, ANOLEA [14], PROSA-web [6] and QMEAN [16] use statistical potentials. ModFOLD6 [18] and ProQ3D [17] combine several structural features and outputs from 3rd party software into neural networks. QMEAN [16] and VERIFY 3D [15] analyze local structural features such as the secondary structure, the degree of exposure and the degree of polarity for each amino acid. VoroMQA [8] analyzes contact regions based on the study of van der Waals radius through Voronoi tessellations. The comparison is summarized in Table 1. Table 1 Comparison of the input, scoring and output functionalities of the SPServer and other current servers for the assessment of protein folds In terms of input flexibility, the SPServer accepts both PDB and mmCIF formats, inputs with single or multiple structures, and does not require the sequence or the identifiers of the protein chains because it handles everything automatically. In contrast, only ProQ3D and QMEAN accept mmCIF format, and only MODFOLD6, ProQ3D, QMEAN and VoroMQA accept multiple structures. In terms of scoring calculation, all the web servers offer both global and local scores in short time. 
The only web server requiring some extra time of calculation is MODFOLD6, as it integrates different scoring functions and the use of neural networks. Finally, in terms of intuitive visualization of the results, most web servers offer clear plots for the analysis of local scores. They also provide a tool to visualize the structure, where the residues are colored according to their local score. Still, only the SPServer provides interactive tools to easily compare the local scores of multiple structures; the local scores can be visualized together in the same plot and smoothed or shifted according to the user's preferences. Additionally, none of the methods reviewed provide tools to score the quality of the interface of PPIs. The SPServer facilitates the quality assessment of both protein folds and protein–protein interaction structures in an easy-to-use web server. The quality assessment of the structures is obtained with Split-Statistical Potentials scoring functions that handle several terms associated with the structural local features of the amino acid environments. They are obtained from the analysis of empirical structures: different terms are taken into account such as pairs of interacting residues, solvent accessibility or type of secondary structure. The Split-Statistical Potentials have been tested on the CASP12 dataset and distinguish successfully native structures from wrong decoys. Moreover, the resulting scores are highly correlated with those from reference scoring functions such as PROSA and DOPE. While the other state-of-the-art web servers only show the local scores of the structures in a plot, the SPServer permits to compare different local score profiles simultaneously. This is done in an interactive plot where the scores can be smoothed or shifted to facilitate the analysis and visualization. Thanks to these analytical tools, we can use the SPServer to compare the quality of different protein models and protein–protein interactions, or to understand better the structural effect of a mutation both on the fold and the binding. Availability and requirements Project name: SPServer. Project home page: https://sbi.upf.edu/spserver. Operating system(s): Platform independent. Programming language: PHP, JavaScript, Python. Other requirements: Chrome, Safari, Firefox or any other modern browser. License: Open Source. Any restrictions to use by non-academics: None. The web server can be found at https://sbi.upf.edu/spserver. The standalone software can be found at https://github.com/structuralbioinformatics/SPServer. PPIs: Protein–protein interactions SPs: Split-statistical potentials SPServer: Split-statistical potentials server Kryshtafovych A, Schwede T, Topf M, Fidelis K, Moult J. Critical assessment of methods of protein structure prediction (CASP)-Round XIII. Protein StructFunctBioinform. 2019;87:1011–20. Won J, Baek M, Monastyrskyy B, Kryshtafovych A, Seok C. Assessment of protein model structure accuracy estimation in CASP13: Challenges in the era of deep learning. Protein StructFunctBioinform. 2019;87:1351–60. https://doi.org/10.1002/prot.25804. Kryshtafovych A, Monastyrskyy B, Fidelis K, Schwede T, Tramontano A. Assessment of model accuracy estimations in CASP12. Protein StructFunctBioinform. 2018;86:345–60. Cheng J, Choe M, Elofsson A, Han K, Hou J, Maghrabi AHA, et al. Estimation of model accuracy in CASP13. Protein StructFunctBioinform. 2019;87:1361–77. Fornes O, Garcia-Garcia J, Bonet J, Oliva B. 
On the use of knowledge-based potentials for the evaluation of models of protein-protein, protein-DNA, and protein-RNA interactions. In: Advances in protein chemistry and structural biology. Elsevier; 2014. p. 77–120. Wiederstein M, Sippl MJ. ProSA-web: interactive web service for the recognition of errors in three-dimensional structures of proteins. Nucleic Acids Res. 2007;35(SUPPL. 2):407–10. Conway P, DiMaio F. Improving hybrid statistical and physical forcefields through local structure enumeration. Protein Sci. 2016;25:1525–34. Olechnovič K, Venclovas Č. VoroMQA web server for assessing three-dimensional structures of proteins and protein complexes. Nucleic Acids Res. 2019;47:W437–W442442. https://doi.org/10.1093/nar/gkz367. Aloy P, Oliva B. Splitting statistical potentials into meaningful scoring functions: testing the prediction of near-native structures from decoy conformations. BMC StructBiol. 2009;9:71. https://doi.org/10.1186/1472-6807-9-71. Feliu E, Aloy P, Oliva B. On the analysis of protein-protein interactions via knowledge-based potentials for the prediction of protein-protein docking. Protein Sci. 2011;20:529–41. Segura J, Marín-López MA, Jones PF, Oliva B, Fernandez-Fuentes N. VORFFIP-driven dock: V-D2OCK, a fast, accurate protein docking strategy. PLoS ONE. 2015;10:1–12. Moal IH, Torchala M, Bates PA, Fernández-Recio J. The scoring of poses in protein–protein docking: current capabilities and future directions. BMC Bioinform. 2013;14:286. https://doi.org/10.1186/1471-2105-14-286. Marín-López MA, Planas-Iglesias J, Aguirre-Plans J, Bonet J, Garcia-Garcia J, Fernandez-Fuentes N, et al. On the mechanisms of protein interactions: predicting their affinity from unbound tertiary structures. Bioinformatics. 2017. https://doi.org/10.1093/bioinformatics/btx616. Article PubMed Central Google Scholar Melo F, Devos D, Depiereux E, Feytmans E. ANOLEA: a www server to assess protein structures. ProcIntConfIntellSystMolBiol. 1997;5:187–90. Eisenberg D, Lüthy R, Bowie JU. VERIFY3D: assessment of protein models with three-dimensional profiles. Methods Enzymol. 1997;277:396–404. Benkert P, Biasini M, Schwede T. Toward the estimation of the absolute quality of individual protein structure models. Bioinformatics. 2011;27:343–50. https://doi.org/10.1093/bioinformatics/btq662. Uziela K, Hurtado DM, Shu N, Wallner B, Elofsson A. Pro Q3D: improved model quality assessments using deep learning. Bioinformatics. 2017;33:1578–80. Maghrabi AHA, McGuffin LJ. ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models. Nucleic Acids Res. 2017;45:W416–W421421. https://doi.org/10.1093/nar/gkx332. Webb B, Sali A. Comparative protein structure modeling using MODELLER. CurrProtocBioinform. 2016. https://doi.org/10.1002/cpps.20. Kelley LA, Mezulis S, Yates CM, Wass MN, Sternberg MJE. The Phyre2 web portal for protein modeling, prediction and analysis. Nat Protoc. 2015;10:845–58. https://doi.org/10.1038/nprot.2015.053. Shi J, Blundell TL, Mizuguchi K. FUGUE: Sequence-structure homology recognition using environment-specific substitution tables and structure-dependent gap penalties. J MolBiol. 2001;310:243–57. https://doi.org/10.1006/jmbi.2001.4762. Yang J, Yan R, Roy A, Xu D, Poisson J, Zhang Y. The I-TASSER suite: Protein structure and function prediction. Nat Methods. 2014;12:7–8. https://doi.org/10.1038/nmeth.3213. Jones DT, Taylort WR, Thornton JM. A new approach to protein fold recognition. Nature. 1992;358:86–9. https://doi.org/10.1038/358086a0. 
Senior AW, Evans R, Jumper J, Kirkpatrick J, Sifre L, Green T, et al. Improved protein structure prediction using potentials from deep learning. Nature. 2020;577:706–10. https://doi.org/10.1038/s41586-019-1923-7. Leaver-Fay A, Tyka M, Lewis SM, Lange OF, Thompson J, Jacak R, et al. Rosetta3: an object-oriented software suite for the simulation and design of macromolecules. In: Abelson J, et al., editors. Methods in enzymology. New York: Academic Press Inc.; 2011. p. 545–574. https://doi.org/10.1016/B978-0-12-381270-4.00019-6 Mosca R, Céol A, Aloy P. Interactome3D: adding structural details to protein networks. Nat Methods. 2013;10:47–53. https://doi.org/10.1038/nmeth.2289. Zhang QC, Petrey D, Garzón JI, Deng L, Honig B. PrePPI: A structure-informed database of protein-protein interactions. Nucleic Acids Res. 2013;41:D828–D833833. Meseguer A, Dominguez L, Bota PM, Aguirre-Plans J, Bonet J, Fernandez-Fuentes N, et al. Using collections of structural models to predict changes of binding affinity in protein-protein interactions. Protein Sci. 2020. https://doi.org/10.1002/pro.3930. Neves MAC, Totrov M, Abagyan R. Docking and scoring with ICM: the benchmarking results and strategies for improvement. J Comput Aided Mol Des. 2012;26:675–86. https://doi.org/10.1007/s10822-012-9547-0. Jiménez-García B, Pons C, Fernández-Recio J. pyDockWEB: a web server for rigid-body protein–protein docking using electrostatics and desolvation scoring. Bioinformatics. 2013;29:1698–9. Gabb HA, Jackson RM, Sternberg MJE. Modelling protein docking using shape complementarity, electrostatics and biochemical information. J MolBiol. 1997;272:106–20. https://doi.org/10.1006/jmbi.1997.1203. Segura J, Marín-López MA, Jones PF, Oliva B, Fernandez-Fuentes N. VORFFIP-driven dock: V-D2OCK, a fast, accurate protein docking strategy. PLoS ONE. 2015. https://doi.org/10.1371/journal.pone.0118107. Schneidman-Duhovny D, Inbar Y, Nussinov R, Wolfson HJ. PatchDock and SymmDock: servers for rigid and symmetric docking. Nucleic Acids Res. 2005;33:W363–W367367. Pierce BG, Wiehe K, Hwang H, Kim BH, Vreven T, Weng Z. ZDOCK server: Interactive docking prediction of protein-protein complexes and symmetric multimers. Bioinformatics. 2014;30:1771–3. Van Zundert GCP, Rodrigues JPGLM, Trellet M, Schmitz C, Kastritis PL, Karaca E, et al. The HADDOCK2.2 web server: user-friendly integrative modeling of biomolecular complexes. J MolBiol. 2016;428:720–5. Kryshtafovych A, Monastyrskyy B, Fidelis K, Schwede T, Tramontano A. Assessment of model accuracy estimations in CASP12. Proteins StructFunctBioinform. 2017;86:345–60. https://doi.org/10.1002/prot.25371. Eskes R, Desagher S, Antonsson B, Martinou J-C. Bid induces the oligomerization and insertion of bax into the outer mitochondrial membrane. Mol Cell Biol. 2000;20:929–35. Meijerink JP, Mensink EJ, Wang K, Sedlak TW, Slöetjes AW, de Witte T, et al. Hematopoietic malignancies demonstrate loss-of-function mutations of BAX. Blood. 1998;91:2991–7. López-Blanco JR, Chacón P. KORP: knowledge-based 6D potential for fast protein and loop modeling. Bioinformatics. 2019;35:3013–9. Rykunov D, Fiser A. New statistical potential for quality assessment of protein models and a survey of energy functions. BMC Bioinform. 2010. https://doi.org/10.1186/1471-2105-11-128. Shen M-Y, Sali A. Statistical potential for assessment and prediction of protein structures. Protein Sci. 2006;15:2507–24. Zhang Y, Skolnick J. Scoring function for automated assessment of protein structure template quality. Proteins StructFunct Genet. 
2004;57:702–10. https://doi.org/10.1002/prot.20264. Zemla A. LGA: a method for finding 3D similarities in protein structures. Nucleic Acids Res. 2003;31:3370–4. Cong Q, Kinch LN, Pei J, Shi S, Grishin VN, Li W, et al. An automatic method for CASP9 free modeling structure prediction assessment. Bioinformatics. 2011;27:3371–8. The authors would like to thank the technical support from GRIB IT team. This work was supported by the Spanish Ministry of Science and Innovation (MICINN) [BIO2017-85329-R (co-funded by ERDF,UE)]. BO also acknowledges support from MICINN [ref: MDM-2014-0370]. NFF acknowledges support from [BIO2017-83591-R (co-funded by ERDF, UE)] and [RYC-2015-17519]. We also akcnowledge support from the Spanish National Bioinformatics Institute (INB), PRB2-ISCIII and Grants PT13/0001/0023 of the PEI +D+i 2013–2016, funded by ISCIII and co-funded by ERDF of EU. BO and NFF acknowledge Agència de Gestió d'Ajuts Universitaris I de Recerca de la Generalitat de Catalunya, grant SGR17-1020, and the Council for the Catalan Republic contribution to cover the expenses of publication. Structural Bioinformatics Lab, Department of Experimental and Health Science, Universitat Pompeu Fabra, 08003, Barcelona, Catalonia, Spain Joaquim Aguirre-Plans, Alberto Meseguer, Ruben Molina-Fernandez, Manuel Alejandro Marín-López, Gaurav Jumde, Kevin Casanova & Baldo Oliva Laboratory of Protein Design and Immuno-Enginneering, School of Engineering, Ecole Polytechnique Federale de Lausanne, 1015, Lausanne, Vaud, Switzerland Jaume Bonet Centre for Molecular Medicine and Therapeutics, Department of Medical Genetics, BC Children's Hospital Research Institute, University of British Columbia, Vancouver, BC, V5Z 4H4, Canada Oriol Fornes Department of Biosciences, U Science Tech, Universitat de Vic-Universitat Central de Catalunya, Vic 08500, Barcelona, Catalonia, Spain Narcis Fernandez-Fuentes Institute of Biological, Environ-Mental and Rural Sciences, Aberystwyth University, Aberystwyth, SY23 3EB, UK Joaquim Aguirre-Plans Alberto Meseguer Ruben Molina-Fernandez Manuel Alejandro Marín-López Gaurav Jumde Kevin Casanova Baldo Oliva BO and NFF conceived the project. JAP, MAML and KC designed the web interface. JAP, MAML, JB, OF and BO created the scripts to calculate statistical potentials. GJ created the program to identify steric crashes. AMD wrote the tutorials. JAP, AMD, RM, JB, OF, NFF and BO extensively tested the application. RM tested the scoring functions using the CASP dataset. JAP, AMD, RM and BO designed the case studies. JAP, AMD, RM and BO wrote the manuscript with input from all the authors. All authors read and approved the final manuscript. Correspondence to Baldo Oliva. Baldo Oliva is member of the Editorial Board of this journal. The rest of authors have no other competing interest. Additional file 1 . Data. . Figure S1: Identification of steric crashes using GEPOL approach to calculate the surface. The two atoms are represented as light blue and light brown circles. The normal and position vectors are shown both in a case where there is no steric crash (a), and there is a steric crash (b). In the case (a) both vectors form and acute angle (i.e. < 90°) while in the case (b) they form an obtuse angle (i.e. > 90), and thus the sign of the two dot products will be negative. . Figure S2: Residue scores of the native structure of Cysteine synthase A (green), the near-native model (blue) and the wrong model (red). The curves represent the smoothed PAIR scores with a sliding window of value 10. . 
Figure S3: Difference between the residue scores of the native structure (reference) and the near-native (blue) and wrong (red) models. The curves represent the smoothed PAIR scores with a sliding window of value 10. . Figure S4: Local scores map of the interface of the interaction between BAX (Receptor) and BID (Ligand). Large cells are used for local scores (statistic energy) of the wildtype structure and upper (smaller) squares are for the mutant. Energies are shown by colors, from high (red) to low (blue), indicating the range in the label at the bottom. The scores are calculated with the PAIR potential, using a sliding window of 1 to smooth, being the optimal interactions those with most negative energy. . Figure S5: Mean Pearson correlation values of the comparison between the global scores of the SPServer (ZES3DC and ZPAIR), DOPE and PROSA (Pair Z-score) potentials, and TM, GDT_TS and QCS quality metrics for the structures of CASP12 benchmark. The correlation values are extracted after performing a bootstrapping strategy of 1000 repetitions (described above). The Pearson correlation values of TM score, GDT_TS and QCS are negative because their score is higher when the model is more similar to the native structure (the opposite of the statistical potentials). Additional file 7. Figure S6 : Scatter plots of the global scores of the SPServer potentials ZES3DC (a) and ZPAIR (b) with respect to PROSA (Z-score of Pair potential) for the structures of the CASP12 benchmark. . Figure S7: Scatter plots of the global scores of the SPServer potentials ZES3DC (a) and ZPAIR (b) with respect to DOPE for the structures of the CASP12 benchmark. . Figure S8: Scatter plots of the global scores of the SPServer potentials ZES3DC (a) and ZPAIR (b) with respect to GDT_TS for the structures of the CASP12 benchmark. Additional file 10 . Figure S9: Scatter plots of the global scores of the SPServer potentials ZES3DC (a) and ZPAIR (b) with respect to TM score for the structures of the CASP12 benchmark. . Figure S10: Scatter plots of the global scores of the SPServer potentials ZES3DC (a) and ZPAIR (b) with respect to QCS for the structures of the CASP12 benchmark. . Figure S11: Histograms showing the residue correlations between the SPServer scoring functions (ES3DC and PAIR) and the PROSA (Pair) and DOPE scoring functions. Each correlation value corresponds to the correlation of all the residue scores of a structure from the CASP12 benchmark. . Table S1: Global scores of the native structure of Cysteine synthase A and two predicted structural models. . Table S2: Global scores of the native structure of Cysteine synthase A and two of its models. . Table S3: Comparison between global and quality metrics for the structures of CASP12 benchmark. . Table S4: Comparison of local (residue) profiles between SPServer and state-of-art methods DOPE and PROSA for the structures of CASP12 benchmark. . Table S5: SPServer global Z-scores (ZES3DC, ZPAIR), PROSA (Pair Z-score), DOPE score, GDT_TS, TM score and QCS of the structures of the CASP12 benchmark. Aguirre-Plans, J., Meseguer, A., Molina-Fernandez, R. et al. SPServer: split-statistical potentials for the analysis of protein structures and protein–protein interactions. BMC Bioinformatics 22, 4 (2021). https://doi.org/10.1186/s12859-020-03770-5 Accepted: 20 September 2020 Protein structure evaluation Protein structure quality assessment Protein structure prediction Protein–protein interaction Protein–protein evaluation Knowledge-based potential
Applied Optics, Vol. 59, Issue 34, pp. 10870-10879 (2020) Design and analysis of inner focus for a large spectral bandwidth optical system YunHan Huang,1,2 XiaoYan Wang,3 YueGang Fu,1,2 GuoYu Zhang,1,2 and ZhiYing Liu1,2,* 1Changchun University of Technology and Science, Changchun 130022, China 2Key Laboratory of Optoelectric Measurement and Optical Information Transmission Technology of Ministry of Education, Changchun University of Science and Technology, Changchun, China 3Beijing Institute of Control Engineering, Beijing 100190, China *Corresponding author: [email protected] YunHan Huang, XiaoYan Wang, YueGang Fu, GuoYu Zhang, and ZhiYing Liu, "Design and analysis of inner focus for a large spectral bandwidth optical system," Appl. Opt. 59, 10870-10879 (2020). https://doi.org/10.1364/AO.408507 Revised Manuscript: November 1, 2020; Manuscript Accepted: November 3, 2020. In this paper, we present a method of solving the chromatic aberration problem of large spectral bandwidth optical systems encountered during the internal focusing process. Rational selection of the focal length of each lens group and the distance between them retained the achromatic characteristic of the optical system when the inner focus lens group was mobilized. The proposed design was experimentally validated. This paper can be useful to research on internal focusing in wide-band systems. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement The wide-spectrum optical system has good spatial resolution and spectral resolution over a wide bandwidth range, and thus can obtain more target information. For optical systems with small or medium apertures, the transmission-type system has great performance advantages. The control of chromatic aberration through the selection of optical glasses is one of the most extensively studied subjects in the field of multi-wave band lens design. In 1986, Robert D. Sigler adopted Buchdahl's dispersion equation as a guide for the design of a system with air-spaced thin lenses to achieve apochromatic color correction. The proposed vector representation proved to be useful for glass selection [1]. Sun et al. presented a method for the correction of longitudinal chromatic, spherical, and coma aberrations in doublet design. A secondary dispersion formula was utilized to realize the minimal longitudinal chromatic aberrations for the doublet [2]. Albuquerque et al. proposed a glass selection method based on the theoretical model proposed by Rayces et al. and Mercado et al. The proposed method uses a multi-objective approach to select sets of compatible glasses suitable for the design of super-apochromatic optical systems, thus narrowing the selection to a few choices from thousands of combinations [3–5]. Mikš et al. presented a method to design an achromatic optical system composed of two or three thin lens groups and proposed a chromatic aberration correction equation.
On the basis of the third-order aberration theory, the surface radius was obtained to alleviate the third-order spherical and coma aberration [6]. Through the efforts of past researchers, we can now rationally select optical materials for a system with a specific focal power for each lens group. The optical system can detect targets with different object distances by moving specific components in the system. Nakatsuji et al. provided a zoom system adopting five lens groups; the system relies on moving the second lens group axially for its inner focus functionality, wherein the lens group is ordered from the object side sequentially, and the other four lens groups are moved axially simultaneously to get the desired focal length [7]. Kogaku et al. made an improvement on the previous inner-focus-type zoom lens where the first and fifth lens groups are kept stationary during the zoom process, and the second lens group moves toward the first lens group axially in accordance with infinite-short object distance [8]. Imaoka et al. provided an interchangeable inner focus camera with three lens groups; the system achieves its internal focus capability by axially moving its second lens unit, while the first lens group and the third lens group remain stationary, wherein the first lens group can be imaging itself, and it can be replaced by another good imaging system. The whole dimension of the system changes little relative to the first lens group [9]. Choi et al. designed an ultra-wide-angle internal focusing optical system with high resolution. In this design, one central lens component is required to realize the internal focusing function. Through detailed analysis, it was ensured that minimal aberration occurred during the internal focusing process [10]. The moving components consisted of only a few lenses to control the weight deliberately, which ensured the realization of the auto focus function. The smaller the weight of the internal focusing component, the better the internal focusing mechanism can realize its function [9,10]. The combination of a wide-spectrum optical system with the internal focusing function can be implemented in more applications. However, in optical systems, the amount of chromatic aberration produced by each lens is related not only to the power of the lens but also to the dispersion properties of the material being used in the lens [11], in addition to its position in the system. During the internal focusing process, some of the system components will be moved. Subsequently, the balance of the chromatic aberration characteristics (including the axis chromatic aberration and lateral chromatic aberration) of the system will be altered, which finally leads to changes in the imaging performance of the system. Thus, internal focus may cause a degradation in image quality of the system, especially when there exists an intimate connection between the lens groups. If the wide-spectrum internal focusing system can be designed and analyzed reasonably, it will considerably improve the performance of the internal focusing wide-spectrum system. The purpose of this paper is to obtain a chromatic aberration correction method for the internal focusing wide-spectrum optical system. We desire reasonable definitions of system parameters at the beginning of the design process. Furthermore, these technologies can be developed in depth practically. 2. INNER FOCUS AND ACHROMATIC MODEL A. 
Selection of Number of Lens Groups During the process of system design, it is often preferable to design in the form of lens groups. This is apt for the initial construction of the system, and for specific analysis of certain problems [12–14]. As for the inner focus system, the inner focus group moves independently; hence, such a group is considered as one lens group. Also, the system generally adopts a stationary group as another lens group; if only two lens groups are set, moving one group will affect the symmetry of whole system. To solve the problem, systems utilize mainly three lens groups where the central lens group performs as the inner focus group [9,14]. However, if the system involves no optical zoom, it is not necessary for the system to adopt numerous lens groups since they will introduce complexity, which is a disadvantage in the design and analysis of systems. In the following sections, all discussions pertain to the three-lens-group model. B. Principle of Achromatic Method The axial achromatic correction equation corresponding to a three-component system can be expressed as [15] (1)$${{W_{\textit{AC}}} = - \frac{1}{2}\left({\frac{{{h_1}^2}}{{{v_1}}}{{{\Phi}}_1} + \frac{{{h_2}^2}}{{{v_2}}}{{{\Phi}}_2} + \frac{{{h_3}^2}}{{{v_3}}}{{{\Phi}}_3}} \right),}$$ where ${h_1}$, ${h_2}$, and ${h_3}$ represent the marginal ray height on each lens group; ${\nu _1}$, ${\nu _2}$, and ${\nu _3}$ denote the partial dispersion of each lens group; ${\Phi _1}$, ${\Phi _2},$ and ${\Phi _3}$ represent the power of each lens group. Similarly, the secondary spectrum formula with the three-component system can be expressed as (2)$$\begin{split}{{W_{AC,{\rm{secondary}}}} = - \frac{1}{2}\left({\frac{{{h_1}^2}}{{{v_1}}}{P_1}{{{\Phi}}_1} + \frac{{{h_2}^2}}{{{v_2}}}{P_2}{{{\Phi}}_2} + \frac{{{h_3}^2}}{{{v_3}}}{P_3}{{{\Phi}}_3}} \right),}\end{split}$$ where ${P_1}$, ${P_2}$, and ${P_3}$ represent the partial dispersion coefficient of each lens group. By transforming Eq. (1), we can get the achromatic equation (3)$${\frac{{{h_1}^2}}{{{h_3}^2}}\frac{{{{{\Phi}}_1}}}{{{v_1}}} + \frac{{{h_2}^2}}{{{h_3}^2}}\frac{{{{{\Phi}}_2}}}{{{v_2}}} + \frac{{{{{\Phi}}_3}}}{{{v_3}}} = 0.}$$ Similarly, based on the secondary spectrum formula, Eq. (4) can be derived as (4)$${\frac{{{h_1}^2}}{{{h_3}^2}}\frac{{{{{\Phi}}_1}}}{{{v_1}}}{P_1} + \frac{{{h_2}^2}}{{{h_3}^2}}\frac{{{{{\Phi}}_2}}}{{{v_2}}}{P_2} + \frac{{{{{\Phi}}_3}}}{{{v_3}}}{P_3} = 0.}$$ The internal focusing function of the system can be realized by moving one of its lens groups axially. It can be observed from Eqs. (3) and (4) that during the process of internal focusing, the optical power and material properties of each component of the system do not change, and hence the Abbe coefficient and secondary spectrum does not change. The main changes in the internal focusing process are of the ratios ${h_1}/{h_3}$ and ${h_2}/{h_3}$. If the chromatic aberration is corrected for one state of internal focusing, then even if there is a small deviation in the two ratios of ${h_1}/{h_3}$ and ${h_2}/{h_3}$ during the internal focusing process, the axial chromatic aberration characteristic of the system will remain invariable. This is the starting point from where we correct the internal focusing chromatic aberration due to a small change in the ${h_1}/{h_3}$ and ${h_2}/{h_3}$ ratios for each inner focus state. C. Axial Chromatic Aberration of Inner Focusing System Figure 1 depicts a paraxial model of the three-lens-group optical system. 
In the schematic, we simplify each lens group into a paraxial lens. The propagation process of zero field rays is described in the diagram, where the marginal aperture ray is colored in red. The ray height of each marginal aperture ray on the lens group is expressed by ${h_{\rm i}}$, the distance following the $i$th lens group is ${t_{\rm i}}$, and the object distance of the entire system is expressed by $L$. The incident and exit angles of each surface are labeled as ${u_{\rm i}}$ and ${u_{\rm i}^ \prime}$, respectively, and it can be inferred from Fig. 1 that ${u_{\rm i}^ \prime} = {u_{{\rm i} + 1}}$. The case when the object distance is infinite is analyzed first. For clarity, we use subscript $f$ to denote the parameters of the system while working in the finite object distance regime. Fig. 1. Schematic of three-lens group system. To analyze the optical system, we utilize a paraxial ray tracing process. During the internal focusing process of the system, the object distance varies. If the ray tracing process is built in the conventional way, it will be hard to track and deal with the object distance parameter $L$ during the iteration steps of the ray tracing process. Thus, we adopt the reverse ray tracing process. When the object is at finite distance, we can get the following iteration ray tracing equation: (5)$${{u_{\!f4}} = - \frac{{{h_{f3}}}}{{{t_{f3}}}},}$$ (6)$${\left\{{\begin{array}{*{20}{c}}{{u_{\textit{fi}}} = {u_{f({i + 1})}} + {{{\Phi}}_i} \cdot {h_i}}\\{{h_{\!f({i - 1})}} = {h_{\textit{fi}}} - {u_{\textit{fi}}} \cdot {t_{f({i - 1} )}}}\end{array}} \right..}$$ The ray height can be calculated using the above equations. In Eq. (6), ${\Phi _i}$ represents the bending power of the $i$th lens group. When the object distance is finite, there exists the following relationship: (7)$${\frac{{{h_1}}}{{{u_1}}} = L.}$$ The symmetrical structure is beneficial to the monochromatic correction of the system. Therefore, we try to ensure the symmetrical structure of the system [16]. To simplify the system design process, the lengths of ${t_1}$, ${t_2}$ are set to be ${t_d}$ equally: (8)$${{t_{f1}} = {t_{f2}} = {t_d}.}$$ If we have the values of parameters ${\Phi _2}$, ${\Phi _3}$, ${t_d}$, ${t_3}$, and $L$, the value of ${\Phi _1}$ can be evaluated from (9)$$\begin{split}&{{{\Phi}}_1} =\\& \frac{{L + 2{t_d} + {t_3} - {{{\Phi}}_3}{t_3}{t_d} + ({{{{\Phi}}_2}{t_d}({{{{\Phi}}_3}{t_3} - 1} ) - {t_3}({{{{\Phi}}_2} + {{{\Phi}}_3}} )} )({L + {t_d}} )}}{{L \cdot ({{t_3} - 2{{{\Phi}}_3}{t_3}{t_d} + {t_d}({2 - {{{\Phi}}_2}{t_d}} ) + {{{\Phi}}_2}{t_3}{t_d}({- 1 + {{{\Phi}}_3}{t_d}} )} )}}.\end{split}$$ Also, the ratios of ${h_1}/{h_3}$ and ${h_2}/{h_3}$ can be inferred from Eqs. (5) and (6) when the object is located at a finite distance: (10)$${\frac{{{h_1}}}{{{h_3}}} = 1 - 2{{{\Phi}}_3}{t_d} + {{{\Phi}}_2}{t_d}({{{{\Phi}}_3}{t_d} - 1} ) + \frac{{{t_d}({2 - {{{\Phi}}_2}{t_d}} )}}{{{t_3}}},}$$ (11)$${\frac{{{h_2}}}{{{h_3}}} = \frac{{{t_3} + {t_d} - {{{\Phi}}_3}{t_3}{t_d}}}{{{t_3}}}.}$$ When the object working distance is at infinity, Eq. (7) can be expresses as (12)$${{u_1} = 0.}$$ The proper method to achieve internal focus is to mobilize only one lens group, which can minimize the complexity of the internal focus and make it feasible. From the perspective of symmetry, we adopt the second lens group to achieve inner focus. The advantage of this method will be discussed in detail in Section 2.E. In the internal focusing process, we assume that the stop location is at the point of no variation. 
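Before turning to the infinite-conjugate case, the short sketch below traces Eqs. (5)–(7) numerically and uses the result to check the closed-form power of Eq. (9). The numerical values are the example values quoted later in Section 2.D (f2 = −1000 mm, f3 = 90 mm, td = 57 mm, t3 = 70 mm, and a 200 m near object for a system with a focal length of about 100 mm); combining them here is an assumption made purely for illustration, not a design prescription from the paper.

```python
# Sketch of the finite-conjugate reverse trace of Eqs. (5)-(7), used to check
# the closed-form power phi1 of Eq. (9). Values assume the worked example of
# Section 2.D: f2 = -1000 mm, f3 = 90 mm, td = 57 mm, t3 = 70 mm, L = 200 m.
phi2, phi3 = -1 / 1000.0, 1 / 90.0          # powers of groups 2 and 3 (1/mm)
td, t3, L = 57.0, 70.0, 200_000.0           # spacings and object distance (mm)

# Eq. (9): power of the first group that images an object at distance L.
phi1 = ((L + 2*td + t3 - phi3*t3*td
         + (phi2*td*(phi3*t3 - 1) - t3*(phi2 + phi3)) * (L + td))
        / (L * (t3 - 2*phi3*t3*td + td*(2 - phi2*td) + phi2*t3*td*(phi3*td - 1))))

# Reverse paraxial trace, Eqs. (5)-(6), starting from h3 = 1 at the third group.
h3 = 1.0
u4 = -h3 / t3                                # Eq. (5)
u3 = u4 + phi3 * h3                          # Eq. (6), i = 3
h2 = h3 - u3 * td
u2 = u3 + phi2 * h2                          # Eq. (6), i = 2
h1 = h2 - u2 * td
u1 = u2 + phi1 * h1                          # Eq. (6), i = 1

print(f"f1 = {1/phi1:.1f} mm, traced object distance h1/u1 = {h1/u1:.0f} mm")
print(f"h1/h3 = {h1/h3:.4f}, h2/h3 = {h2/h3:.4f}   (Eqs. (10)-(11))")
```

With these numbers the first group comes out at roughly f1 of 328 mm, the traced object distance returns the assumed 200 m, and the finite-conjugate ratios h1/h3 of about 1.43 and h2/h3 of about 1.18 agree with Eqs. (10) and (11).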
Because the location of the third lens group did not change during the internal focusing, ${u_3}$ and ${u_4}$ remain the same. For the sake of uniformity, we use the subscript $o$ to denote the parameters of the system working at infinite object distance. During the process of internal focus, the parameter in Eq. (6) will change. ${t_{\textit{od}}}$ is used to replace the distance ${t_d}$. Equation (13) can be derived to describe the reverse ray tracing process based on Eq. (6): (13)$$\left\{{\begin{array}{*{20}{c}}{{h_{o2}} = {h_{o3}} - {u_{o3}}{t_{\textit{od}}}}\\{{u_{o2}} = {u_{o3}} + {{{\Phi}}_2} \cdot {h_{o2}}}\\{{h_{o1}} = {h_{o2}} - {u_{o2}}({2{t_d} - {t_{\textit{od}}}} )}\\{{u_{o1}} = {u_{o2}} + {{{\Phi}}_1} \cdot {h_{o1}}}\end{array}} \right..$$ An expression for ${t_{\textit{od}}}$ can be obtained by combining the parameters of Eq. (13): (14)$$\begin{split}{t_{\textit{od}}}& = \left({- \sqrt {{A^2}{{{\Phi}}_2}^2 - 4A{{{\Phi}}_1}{{{\Phi}}_2}({{{{\Phi}}_3}{t_3} - 1})}} + {({A + 2{{{\Phi}}_1}{t_3}}){{{\Phi}}_2}} \right)\\&\quad \cdot \frac{1}{{2{{{\Phi}}_1}{{{\Phi}}_2}({{{{\Phi}}_3}{t_3} - 1} )}},\\[-1.2pc]\end{split}$$ (15)$${A = - {{{\Phi}}_1}{t_3} + ({2{{{\Phi}}_1}{t_d} - 1} )({{{{\Phi}}_3}{t_3} - 1} ).}$$ From Eqs. (14) and (15), the inner focus zoom length, which is the distance between ${t_{\textit{od}}}$ and ${t_d}$, can be calculated. Similar to the case when the object working distance is infinite, the ratio of marginal ray height of each lens group can be derived using Eqs. (13)–(15): (16)$${\frac{{{h_{o2}}}}{{{h_{o3}}}} = \frac{{{t_3} + {t_{\textit{od}}} - {{{\Phi}}_3}{t_3}{t_{\textit{od}}}}}{{{t_3}}}},$$ (17)$$\begin{split}\frac{{{h_{o1}}}}{{{h_{o3}}}}& = 1 + \frac{{{{{\Phi}}_2}{t_{\textit{od}}}^2 + {t_d}({2 - 2{{{\Phi}}_2}{t_{\textit{od}}}} )}}{{{t_3}}} \\&\quad - 2{{{\Phi}}_3}{t_d} + {{{\Phi}}_2}({2{t_d} - {t_{\textit{od}}}} )({{{{\Phi}}_3}{t_{\textit{od}}} - 1} ).\end{split}$$ The ${t_3}$ given in Eqs. (16) and (17) is labeled without the subscript $o$ because this parameter does not change during the internal zoom process. Its value remains unchanged during the whole internal zoom process. Combined with Eqs. (10) and (11), the deviation of the ratio of ${h_1}/{h_3}$ and ${h_2}/{h_3}$ can be determined as Fig. 2. Variations in $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ with ${\Phi _2}/{\Phi _3}$ for different values of ${\Phi _3}{{\cdot}}{t_3}$, with ${t_d} = {{57}}\;{\rm{mm}}$ and ${t_3} = {{70}}\;{\rm{mm}}$. Fig. 3. Variations in $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ with ${t_3}/{\rm{EFL}}$ for different values of ${\Phi _3}{{\cdot}}{t_3}$, with ${f_2} = - {{1000}}\;{\rm{mm}}$, ${f_3} = {{90}}\;{\rm{mm}}$, and ${\rm{EFL}} = {{100}}\;{\rm{mm}}$. (18)$${\Delta \frac{{{h_1}}}{{{h_3}}} = \frac{{{{{\Phi}}_2} \cdot ({{t_d} - {t_{\textit{od}}}} )({{t_3} - {t_d} + {{{\Phi}}_3}{t_3}({{t_d} - {t_{\textit{od}}}} ) + {t_{\textit{od}}}} )}}{{{t_3}}},}$$ (19)$${\Delta \frac{{{h_2}}}{{{h_3}}} = - \frac{{({{{{\Phi}}_3}{t_3} - 1})({{t_d} - {t_{\textit{od}}}})}}{{{t_3}}}.}$$ Thus, the values of $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ can be obtained to evaluate the changes in the color balance due to the process of internal focusing. In general, as long as $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ vary little in the process of internal focusing, the features of expression in Eqs. (3) and (4) will experience little deviation, ensuring that the system retains its chromatic characteristic during the internal focusing process. D. 
D. Discussion on the Parameters in Axial Chromatic Aberration Correction In the previous discussion, we constructed the axial chromatic aberration model. In this model, we need to ascertain the values of the four parameters ${\Phi _2}$, ${\Phi _3}$, ${t_d}$, and ${t_3}$ in advance. Then, based on these predefined variables, the axial movement ${t_d} - {t_{\textit{od}}}$ and the ratios ${h_1}/{h_3}$ and ${h_2}/{h_3}$ can be evaluated. Below, we examine the influence of the four predefined parameters on the axial chromatic aberration performance. It can be seen intuitively from Eq. (18) that the smaller the ${\Phi _2}$, the smaller the value of $\Delta {h_1}/{h_3}$. Correspondingly, from Eq. (19), we can see that when ${\Phi _3}{{\cdot}}{t_3}$ approaches one, the value of $\Delta {h_2}/{h_3}$ approaches zero. However, during the internal focusing process, $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ should be evaluated simultaneously. Further, the axial movement distance ${t_d} - {t_{\textit{od}}}$ should also be considered. Thus, the evaluation should be made based on the four predefined parameters. We assume that the total focal length of the system is 100 mm and that the working distance of the internal focusing system ranges from 200 m to infinity. Figure 2 depicts the influence of the optical power selection on $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$. The solid line corresponds to the left vertical axis, and the dashed line corresponds to the right vertical axis. We can see that at small and negative values of ${\Phi _2}/{\Phi _3}$, the absolute values of both $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ become small. At the same time, it can be observed that when a specific value of ${\Phi _3}{{\cdot}}{t_3}$ is selected (e.g., when ${\Phi _3}{{\cdot}}{t_3} = {0.7}$), a relatively large value of ${\Phi _2}/{\Phi _3}$ can also make the absolute values of $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ smaller. However, this is a special case and is not commonly used. In the following sections, we will see that a relatively large value of ${\Phi _3}{{\cdot}}{t_3}$ leads to a small movement track of the inner focus lens group, which is not conducive to the internal focusing of the component. The following discussion therefore excludes this special case. Figure 3 shows how the distance selection determines $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$. ${f_i}$ represents the effective focal length of the $i$th lens group. The solid lines correspond to the left vertical axis, and the dashed lines correspond to the right vertical axis. EFL represents the effective focal length of the whole system. We can see that the absolute values of $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ vary with the value of ${t_3}/{\rm{EFL}}$. A smaller value of ${t_d}/{t_3}$ leads to smaller absolute values of $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$; nevertheless, the influence of ${t_d}/{t_3}$ is minor. As for the distance between the lens groups, a reasonable choice of ${t_3}/{\rm{EFL}}$ provides sufficient control over the values of $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$. Fig. 4. Variation in $\Delta {t_d}$ with ${\Phi _2}/{\Phi _3}$ for different values of ${\Phi _3}{{\cdot}}{t_3}$, with ${t_d} = {{57}}\;{\rm{mm}}$, ${t_3} = {{70}}\;{\rm{mm}}$, and ${\rm{EFL}} = {{100}}\;{\rm{mm}}$. Fig. 5. Variation of $\Delta {t_d}$ with ${t_3}/{\rm{EFL}}$ for different values of ${t_d}/{t_3}$, with ${f_2} = - {{1000}}\;{\rm{mm}}$, ${f_3} = {{90}}\;{\rm{mm}}$, and ${\rm{EFL}} = {{100}}\;{\rm{mm}}$.
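The qualitative trade-off behind Figs. 2–5 can also be checked with a short numerical sweep. The sketch below is only illustrative: it reuses the reverse-trace idea above with assumed values (${t_d} = 57\;{\rm mm}$, ${t_3} = 70\;{\rm mm}$, and a 200 m near object distance), varies ${\Phi _2}/{\Phi _3}$, and prints the focusing travel together with the height-ratio deviations, so the tension between a small $\Delta h$ and a usable travel discussed next can be seen directly.

# Illustrative parameter sweep (assumed values, not the authors' design data):
# vary phi2/phi3 at fixed t_d, t3 and report the focusing travel t_d - t_od
# together with the deviations of the marginal-ray height ratios.
def trace(phi1, phi2, phi3, t_front, t_rear, t3, h3=1.0):
    u4 = -h3 / t3                       # Eq. (5)
    u3 = u4 + phi3 * h3                 # reverse refraction at group 3
    h2 = h3 - u3 * t_rear
    u2 = u3 + phi2 * h2
    h1 = h2 - u2 * t_front
    u1 = u2 + phi1 * h1
    return h1, h2, u1, u2

def focusing_state(phi2, phi3, t_d, t3, L):
    # finite conjugate: both gaps equal t_d; phi1 fixed by h1/u1 = L (Eq. 7)
    h1f, h2f, _, u2f = trace(0.0, phi2, phi3, t_d, t_d, t3)
    phi1 = (h1f / L - u2f) / h1f
    # infinite conjugate: bisect on t_od so that u_o1 = 0 (Eq. 13);
    # assumes the root is bracketed inside (0, 2*t_d)
    f = lambda t_od: trace(phi1, phi2, phi3, 2.0 * t_d - t_od, t_od, t3)[2]
    a, b = 1e-3, 2.0 * t_d - 1e-3
    for _ in range(100):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    t_od = 0.5 * (a + b)
    h1o, h2o, _, _ = trace(phi1, phi2, phi3, 2.0 * t_d - t_od, t_od, t3)
    return t_d - t_od, h1f - h1o, h2f - h2o

if __name__ == "__main__":
    t_d, t3, L = 57.0, 70.0, 200.0e3        # mm; near object distance 200 m (assumed)
    phi3 = 1.0 / 90.0
    for ratio in (-0.30, -0.20, -0.10, -0.05, 0.05):
        travel, dh1, dh2 = focusing_state(ratio * phi3, phi3, t_d, t3, L)
        print(f"phi2/phi3 = {ratio:+.2f}  travel = {travel:+8.3f} mm  "
              f"d(h1/h3) = {dh1:+.2e}  d(h2/h3) = {dh2:+.2e}")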
In the internal focusing process, an extremely small inner focus distance is not conducive to the movement process of inner focus lens components. A relatively large moving distance with a smooth track is beneficial to realize precise control over the zoom lens group [17]. Figure 4 shows the influence of the optical power selection on the inner focus distance. According to the previous discussion, a large negative value of ${\Phi _2}/{\Phi _3}$ is more beneficial to control $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$. However, as can be seen in Fig. 4, a large negative value of ${\Phi _2}/{\Phi _3}$ will lead to a smaller value of the moving distance of the inner focus lens, which causes issues for internal focusing. Therefore, the ratio of ${\Phi _2}/{\Phi _3}$ can be appropriately increased in the actual process, increasing the inner focus distance while increasing the absolute values of $\Delta {h_1}/{h_3}$ and $\Delta {h_2}/{h_3}$ to some extent. This trade-off can be made during the design process. Figure 5 shows the impact of distance selection on the inner focus distance. It can be seen that the ratio of ${t_{d}}/{t_3}$ has little influence on the inner focus distance, whereas the ratio of ${t_3}$ to EFL plays a key role in it. As shown in Fig. 5, the focusing distance is proportional to value of ${t_3}/{\rm{EFL}}$. As shown in Fig. 3, excessive values of ${t_3}$ will have an adverse impact on $\Delta {h_1}/{h_3}$. In the actual design process, we can combine the results of Figs. 3 and 5 to arrive at a proper choice of distance parameters. E. Lateral Chromatic Aberration of Inner Focusing System Figure 6 shows a schematic diagram of the three-component optical system. The heights of the chief ray on each lens group are denoted as $\overline {{h_1}}$, $\overline {{h_2}}$, and $\overline {{h_3}}$. The stop is placed in the center of the system; then, the lateral chromatic aberration of the system can be expressed as [15] (20)$${{W_{\textit{LC}}} = \frac{{{h_1}\overline {{h_1}}}}{{{v_1}}}{{{\Phi}}_1} + \frac{{{h_2}\overline {{h_2}}}}{{{v_2}}}{{{\Phi}}_2} + \frac{{{h_3}\overline {{h_3}}}}{{{v_3}}}{{{\Phi}}_3}}.$$ Fig. 6. Schematic of lateral color of three-lens group. Transforming Eq. (20), we get (21)$${\frac{{{h_1}}}{{{h_3}}}\frac{{\overline {{h_1}}}}{{\overline {{h_3}}}}\frac{{{{{\Phi}}_1}}}{{{v_1}}} + \frac{{{h_2}}}{{{h_3}}}\frac{{\overline {{h_2}}}}{{\overline {{h_3}}}}\frac{{{{{\Phi}}_2}}}{{{v_2}}} + \frac{{{{{\Phi}}_3}}}{{{v_3}}} = 0}.$$ As shown in Fig. 6, if the first or third lens group is chosen as the inner focus zoom component, the chief ray height on it will change accordingly. This change will disturb the lateral color aberration balance, which can be inferred from Eq. (21). The direct method to offset this variation is to shrink the lens power of the first or third lens group. However, the system needs lens components to bear the optical power. If the lens components on either side do not bear the optical power, the symmetry of the system will be broken, which is not useful for the correction of off-axis aberrations of the system [16]. Hence, we mobilize the second component to perform internal focusing. Assume that the second lens group moves to the left during the finite-infinity internal focusing process. Hence, to avoid the collision of the second lens group and the stop position, we locate the stop position to the right side of the second lens group and vice versa. The movement direction can be calculated according to Eqs. (14) and (15). 
Figure 7 shows the overall schematic diagram of the first and second lens groups. In Fig. 7, ${t_o}$ and ${t_{o1}}$ represent the distance between the stop and the second lens group elements and that between the first and second lens groups, respectively. The incident and exit angles of the chief ray on each surface are $\overline {{u_i}}$ and $\overline {{u_i}{\rm{^\prime}}}$, respectively. It can be seen in Fig. 7 that $\overline {{u_i}{\rm{^\prime}}} = \overline {{u_{i + 1}}}$. Fig. 7. Detailed schematic of first and second lens groups. During the internal focusing process, the stop remains at its position along with the right-side part. Hence, the third component is not shown in Fig. 7. During the internal focusing process, the second lens group moves back and forth. We add the subscript $o$ to represent the parameters working at infinity, as shown in Fig. 7: (22)$$\begin{array}{*{20}{c}}{\left\{{\begin{array}{*{20}{c}}{\overline {{h_{o2}}} = - \overline {{u_2}^\prime} {t_o}}\\{\overline {{u_2}} = \overline {{u_2}^\prime} + {{{\Phi}}_2} \cdot \overline {{h_{o2}}}}\\{\overline {{h_{o1}}} = \overline {{h_{o2}}} - \overline {{u_2}} {t_{o1}}}\end{array}} \right..}\end{array}$$ Rearranging Eq. (22), we derive Eq. (23) as follows: (23)$${\overline {{h_{o1}}} = - \overline {{u_2}^\prime} {t_o} - \overline {{u_2}^\prime} {t_{o1}} + \overline {{u_2}^\prime} {t_o}{{{\Phi}}_2}{t_{o1}}.}$$ The chief ray height on the first lens group corresponding to the finite object distance can be expressed as (24)$${\overline {{h_1}} = - \overline {{u_2}^\prime} ({{t_o} + {t_{o1}}})}.$$ Combining Eqs. (23) and (24) and simplifying the expression, ${{\Delta}}\overline {{h_1}}$ can be expressed as follows: (25)$${\Delta \overline {{h_1}} = \overline {{u_2}^\prime} {t_o}{{{\Phi}}_2}{t_{o1}}}.$$ Following the same method, ${{\Delta}}\overline {{h_2}}$ can be expressed as ${-}\overline {{u_2}^\prime} {t_o}$. Inserting Eq. (25) and the expression for ${{\Delta}}\overline {{h_2}}$ into Eq. (20), the change in the lateral color wavefront aberration can be expressed as (26)$${\Delta {W_{\textit{LC}}} = \left({\frac{{{h_1}}}{{{v_1}}}{t_{o1}}{{{\Phi}}_1} - \frac{{{h_2}}}{{{v_2}}}} \right)\overline {{u_2}^\prime} {t_o}{{{\Phi}}_2}.}$$ The terms inside the brackets of Eq. (26) depend on the choice of materials and their rational combination. $\overline {{u_2}^\prime}$ represents the field of view, which is predefined, and ${t_o}$ represents the movement distance of the inner focus component, which is determined during the design process. Equation (26) shows that we can keep the deviation of the lateral color aberration small by reducing the absolute value of ${\Phi _2}$. In the above section, starting from Eqs. (3), (4), and (21), we examined how the axial and lateral chromatic aberrations are determined by the focal length of each lens group and the distances between the groups, in particular their influence on the parameters $\Delta {h_1}/{h_3}$, $\Delta {h_2}/{h_3}$, and ${{\Delta}}{W_{\textit{LC}}}$. By rational selection of these key factors, $\Delta {h_1}/{h_3}$, $\Delta {h_2}/{h_3}$, and ${{\Delta}}{W_{\textit{LC}}}$ can all be kept at a magnitude of ${{10}^{- 3}}$, so the deviation of Eqs. (3) and (4) stays at a magnitude of ${{10}^{- 6}}$; i.e., the achromatic characteristic is almost unchanged during the inner focus process.
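A quick numerical check of this bookkeeping is sketched below. The code is only illustrative, and every numerical value (marginal ray heights, Abbe numbers, field angle, and spacings) is an assumed placeholder: it evaluates the change in lateral color once by re-tracing the chief ray with Eqs. (22)–(24) inserted into Eq. (20), and once with the closed form of Eq. (26), and it also shows how the deviation shrinks as the absolute value of ${\Phi _2}$ is reduced.

# Illustrative check of Eqs. (22)-(26); all numbers are assumed placeholders.
def chief_heights_infinity(u2p, t_o, t_o1, phi2):
    """Chief-ray heights on groups 2 and 1 at the infinite conjugate (Eq. 22)."""
    h2_bar = -u2p * t_o               # stop located a distance t_o from group 2
    u2_bar = u2p + phi2 * h2_bar
    h1_bar = h2_bar - u2_bar * t_o1
    return h1_bar, h2_bar

def delta_w_lc_traced(h1, h2, v1, v2, phi1, phi2, u2p, t_o, t_o1):
    """Change of Eq. (20) caused by the chief-ray height changes (finite -> infinity)."""
    h1_bar_o, h2_bar_o = chief_heights_infinity(u2p, t_o, t_o1, phi2)
    h1_bar_f = -u2p * (t_o + t_o1)    # Eq. (24): finite conjugate, group 2 at the stop
    h2_bar_f = 0.0                    # chief ray crosses the axis at the stop
    d1, d2 = h1_bar_o - h1_bar_f, h2_bar_o - h2_bar_f
    return (h1 * d1 / v1) * phi1 + (h2 * d2 / v2) * phi2

def delta_w_lc_closed(h1, h2, v1, v2, phi1, phi2, u2p, t_o, t_o1):
    """Closed form of Eq. (26)."""
    return (h1 / v1 * t_o1 * phi1 - h2 / v2) * u2p * t_o * phi2

if __name__ == "__main__":
    h1, h2, v1, v2 = 1.43, 1.18, 60.0, 45.0               # assumed heights and Abbe numbers
    phi1, u2p, t_o, t_o1 = 1.0 / 330.0, 0.05, 2.5, 55.0   # assumed power, field, spacings
    for f2 in (-500.0, -1000.0, -2000.0):                 # weaker phi2 -> smaller deviation
        phi2 = 1.0 / f2
        a = delta_w_lc_traced(h1, h2, v1, v2, phi1, phi2, u2p, t_o, t_o1)
        b = delta_w_lc_closed(h1, h2, v1, v2, phi1, phi2, u2p, t_o, t_o1)
        print(f"f2 = {f2:7.0f} mm   traced = {a:+.3e}   Eq. (26) = {b:+.3e}")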
However, in this model, only the dispersion coefficient and the partial dispersion coefficient enter the axial and lateral achromatic conditions, and the partial dispersion coefficient depends strongly on the spectral range, whereas the key parameters such as the focal lengths and distances have no correlation with the spectral range. Therefore, the spectral range has little effect on how well the achromatic characteristic is maintained during internal focusing, which means the achromatic inner focus model works on any wavelength band, with the one precondition that the chromatic aberration of the system has been corrected in one state of the internal focusing process. To facilitate the comprehension of the building process of the system, a flowchart is provided in Fig. 8. Fig. 8. Flowchart of the building process of the system. 3. DESIGN EXAMPLE Based on the discussion in Section 2, we propose a lens design. The basic parameters are listed in Table 1 according to the discussion in Section 2. Table 1. Parameters of the System We adopted the parameters in the left-hand column of Table 2 to define the primary focal length and the inner lens group distances of the system; these parameters are initially chosen according to the discussion in Section 2. Then, the system is expanded into real lens form. The materials are chosen to correct the chromatic aberration [1–6]. During the optimization process, the key parameters are kept as consistent as possible. Table 2. Model Parameters of the System Before and After Optimization After optimization, the final layout of the optical system is shown in Fig. 9. The parameters relating to the optical system after optimization are listed in the right-hand column of Table 2. Fig. 9. Layout of the final optical system. Table 3. Data of the System Fig. 10. Variation of the movement distance of the second lens group with object distance. Fig. 11. Variation of each field rms radius with object distance. Fig. 12. Design of the mechanical structure of the system. The final data of the system after optimization are listed in Table 3. The system achieves an instantaneous field of view of 41.5 µrad with an inner focus distance of 2.5 mm. The distortion of the system remains almost unchanged at ${-}{0.0366}\%$ in each state. Fig. 13. Assembled optical system with internal focusing functionality. Fig. 14. Synthetic image created by single-wavelength images. The variation of $\Delta {t_d}$ of the system with different object distances is shown in Fig. 10. The smooth internal focusing curve is beneficial for the mechanical realization [17]. The image quality is acceptable since the system follows the achromatic design explained in Section 2. The variation of the root-mean-square (RMS) spot radius with the object distance is shown in Fig. 11. We finally carried out the actual manufacturing and assembly of the proposed system, and the assembled optical system is shown in Figs. 12 and 13. It utilizes a filter wheel to filter the target information of different wavebands to detect the relevant information. One synthetic image created by numerous single-wavelength images detected by the system is provided in Fig. 14. The modulation transfer function in actual measurement is shown in Figs. 15(a)–15(c). Fig. 15. Modulation transfer function of the system at object distances of (a) 120–30 km, (b) 400 m, and (c) 200 m.
The final designed system achieves a modulation transfer function $\gt {0.5}$ at 72.5 lp/mm at all fields of view throughout the internal focusing process, thereby validating the proposed design. This paper proposes a method for correcting wide-spectrum chromatic aberration during the internal focusing process. Based on the chromatic aberration correction formula, this paper analyses the chromatic aberration characteristics corresponding to infinite and finite object distances separately. The resulting inner focus chromatic aberration correction model is simple and can be used to quickly estimate a reasonable initial structure of the inner focus system with achromatic performance, by ensuring reasonable system parameters at the beginning of the design process. The proposed method greatly simplifies the design process for the inner focus wide-band optical system. For example, by adopting the method provided in this paper, we can first ascertain the focal length of each lens group and the distances between them. In the subsequent step, we need only to choose the appropriate materials and combinations to achieve the achromatic effect. The resulting system will maintain its achromaticity during the internal focusing process. Finally, based on the proposed method, this paper presents the design of an internal focusing wide-spectrum system. The optical system was manufactured and assembled, and it delivered superior performance during the internal focusing process, proving the effectiveness of the design method. The work presented in this paper will greatly help researchers in designing inner focus wide-band systems. Jinlin Scientific and Technology Development Program (20200401055GX); National Natural Science Foundation of China (61705018, 61805025). 1. R. D. Sigler, "Glass selection for airspaced apochromats using the Buchdahl dispersion equation," Appl. Opt. 25, 4311–4320 (1986). 2. W. S. Sun, C. H. Chu, and C. L. Tien, "Well-chosen method for an optimal design of doublet lens design," Opt. Express 17, 1414–1428 (2009). 3. B. F. C. de Albuquerque, J. Sasian, F. L. de Sousa, and A. S. Montes, "Method of glass selection for color correction in optical system design," Opt. Express 20, 13592–13611 (2012). 4. J. L. Rayces and M. Rosete-Aguilar, "Selection of glasses for achromatic doublets with reduced secondary spectrum. I. Tolerance conditions for secondary spectrum, spherochromatism, and fifth-order spherical aberration," Appl. Opt. 40, 5663–5676 (2001). 5. R. I. Mercado and P. N. Robb, "Color corrected optical systems and method of selecting optical materials therefore," U.S. patent 5,210,646 (6 May 1993). 6. A. Mikš and J. Novák, "Method for primary design of superachromats," Appl. Opt. 52, 6868–6876 (2013). 7. M. Nakatsuji and K. Suzuki, "Zoom lens utilizing inner focus system," U.S. patent 5,325,233 (28 June 1994). 8. A. Kogaku and K. Kabushiki, "Inner focus type telephoto zoom lens," U.S. patent 5,572,276 (5 November 1996). 9. T. Imaoka, K. Hoshi, and H. Hagimori, "Inner focus lens, interchangeable lens apparatus and camera system," U.S. patent 8,503,096 (6 August 2013). 10. H. Choi and J. Ryu, "Design of wide angle and large aperture optical system with inner focus for compact system camera applications," Appl. Sci. 10, 179 (2020). 11. Y. Tamagawa, S. Wakabayashi, T. Tajime, and T. Hashimoto, "Multilens system design with an athermal chart," Appl. Opt. 33, 8009–8013 (1994). 12. S. C. Park and W.
S. Lee, "Paraxial design method based on an analytic calculation and its application to a three-group inner-focus zoom system," J. Korean Phys. Soc. 64, 1671–1676 (2014). [CrossRef] 13. D. Lee and S. C. Park, "Design of an 8X four-group inner-focus zoom system using a focus tunable lens," J. Opt. Soc. Korea 20, 283–290 (2016). [CrossRef] 14. T. Sakai, "Inner focus lens," U.S. patent 9,678,305 (13 June 2017). 15. J. M. Geary, Introduction to Lens Design: With Practical ZEMAX Examples (Willmann-Bell, 2002). 16. W. J. Smith, Modern Optical Engineering (Tata McGraw-Hill Education, 2007). 17. T. Kobayashi, "Internal focusing zoom lens," U.S. patent 7,852,569 (14 December 2010). R. D. Sigler, "Glass selection for airspaced apochromats using the Buchdahl dispersion equation," Appl. Opt. 25, 4311–4320 (1986). W. S. Sun, C. H. Chu, and C. L. Tien, "Well-chosen method for an optimal design of doublet lens design," Opt. Express 17, 1414–1428 (2009). B. F. C. de Albuquerque, J. Sasian, F. L. de Sousa, and A. S. Montes, "Method of glass selection for color correction in optical system design," Opt. Express 20, 13592–13611 (2012). J. L. Rayces and M. Rosete-Aguilar, "Selection of glasses for achromatic doublets with reduced secondary spectrum. I. Tolerance conditions for secondary spectrum, spherochromatism, and fifth-order spherical aberration," Appl. Opt. 40, 5663–5676 (2001). R. I. Mercado and P. N. Robb, "Color corrected optical systems and method of selecting optical materials therefore," U.S patent5,210,646 (6May1993). A. Mikš and J. Novák, "Method for primary design of superachromats," Appl. Opt. 52, 6868–6876 (2013). M. Nakatsuji and K. Suzuki, "Zoom lens utilizing inner focus system," U.S. patent5,325,233 (28June1994). A. Kogaku and K. Kabushiki, "Inner focus type telephoto zoom lens," U.S. patent5,572,276 (5November1996). T. Imaoka, K. Hoshi, and H. Hagimori, "Inner focus lens, interchangeable lens apparatus and camera system," U.S. patent8,503,096 (6August2013). H. Choi and J. Ryu, "Design of wide angle and large aperture optical system with inner focus for compact system camera applications," Appl. Sci. 10, 179 (2020). Y. Tamagawa, S. Wakabayashi, T. Tajime, and T. Hashimoto, "Multilens system design with an athermal chart," Appl. Opt. 33, 8009–8013 (1994). S. C. Park and W. S. Lee, "Paraxial design method based on an analytic calculation and its application to a three-group inner-focus zoom system," J. Korean Phys. Soc. 64, 1671–1676 (2014). D. Lee and S. C. Park, "Design of an 8X four-group inner-focus zoom system using a focus tunable lens," J. Opt. Soc. Korea 20, 283–290 (2016). T. Sakai, "Inner focus lens," U.S. patent9,678,305 (13June2017). J. M. Geary, Introduction to Lens Design: With Practical ZEMAX Examples (Willmann-Bell, 2002). W. J. Smith, Modern Optical Engineering (Tata McGraw-Hill Education, 2007). T. Kobayashi, "Internal focusing zoom lens," U.S. patent7,852,569 (14December2010). Choi, H. Chu, C. H. de Albuquerque, B. F. C. de Sousa, F. L. Geary, J. M. Hagimori, H. Hashimoto, T. Hoshi, K. Imaoka, T. Kabushiki, K. Kobayashi, T. Kogaku, A. Lee, D. Lee, W. S. Mercado, R. I. Mikš, A. Montes, A. S. Nakatsuji, M. Novák, J. Park, S. C. Rayces, J. L. Robb, P. N. Rosete-Aguilar, M. Ryu, J. Sakai, T. Sasian, J. Sigler, R. D. Smith, W. J. Sun, W. S. Suzuki, K. Tajime, T. Tamagawa, Y. Tien, C. L. Wakabayashi, S. Appl. Sci. (1) J. Korean Phys. Soc. (1) J. Opt. Soc. Korea (1)
Can math be subjective? Oftentimes in math, ever since kindergarten and before, math has been defined by the fact that there is only one answer to a problem. For example: $1+1=2$ and $\frac{d}{dx}x^2=2x$. What I am showing with these two examples are two questions from completely different areas of math. However, they both have only one solution. Having multiple answers doesn't necessarily make a problem subjective though, such as $|x|=2,$ which has two solutions. My question is: are there any such problems that depend entirely on perspective? If all subjective math problems follow a certain pattern, please tell me what that pattern is. I really have no idea of any examples of this and I would really be interested to see one. Thank you very much. soft-question Danu $\begingroup$ This is really a question for the philosophy SE. As a philosophy student, I wrote a paper (admittedly just an undergrad paper, but I'm working towards having it published) in which I argue mathematics is analytic a priori, which would imply objectivity (otherwise it couldn't be analytic). I can send you this paper if you like. $\endgroup$ – Alfred Yerger Jul 8 '15 at 16:12 $\begingroup$ At the very least, math is contextual, meaning there is often more than one answer to a problem depending on context. For example, what is $22+5$? Without a context we would assume what you might call a "default context" of Real numbers or Integers and say $22+5=27$. But in the context of hours of 24-hour time, we would decide that $22+5=3$ (five hours after 2200 is 0300) without giving it a second thought. $\endgroup$ – Todd Wilcox Jul 8 '15 at 17:14 $\begingroup$ I don't know about subjective, but sometimes it is clearly surjective... (Sorry for the pun)... $\endgroup$ – Martigan Jul 9 '15 at 12:21 $\begingroup$ @Rich: The teacher was correct. $\endgroup$ – Asaf Karagila♦ Jul 9 '15 at 21:14 $\begingroup$ @AsafKaragila, for a seventh-grader to conclude that $1/0$ is "infinity" is completely reasonable-sensible, I think, even if there do remain issues, as there certainly do. $\endgroup$ – paul garrett Jul 10 '15 at 14:23 There's plenty of room for subjective opinion in mathematics. It usually doesn't concern questions of the form Is this true? since we have a good consensus on how to recognize an acceptable proof and which assumptions for such a proof you need to state explicitly. As soon as we move onwards to Is this useful? and Is this interesting?, or even Is this likely to work?, subjectivity hits us in full force. Even in pure mathematics, it's easy to choose a set of axioms and derive consequences from them, but if you want anyone to spend time reading your work, you need to tackle the subjective questions and have an explanation why what you're doing is either useful or interesting, or preferably both. In applied math, these questions are accompanied by Is this the best way to model such-and-such real-world problem? -- where "best" again comes down to usefulness (does the model answer questions we need to have answered?) and interest (does the model give us any insight about the situation we wouldn't have without it?). The subjective questions are important in research, but can also arise at a more elementary level. The high-school teacher who chooses to devote several lessons to presenting Cardano's method for solving the generic third-degree equation will certainly have to answer his students' questions why this is useful or interesting. Perhaps he has an answer.
Perhaps he has an answer that the students don't agree with. In that case, he cannot look for a deductive argument concluding that Cardano's formula is interesting -- he'll have to appeal to emotions, curiosity, all of those fluffy touchy-feely considerations that we need to use to tackle subjective questions. Henning MakholmHenning Makholm $\begingroup$ I wish more teachers would take your answer into account when teaching. $\endgroup$ – user21820 Jul 9 '15 at 16:18 $\begingroup$ I would agree, and point out that it follows all of science and engineering and thereby all subjects that are studied seriously. Since there is subjectivity, there are different interpretations, which leads to intra-subject politics. In fact, it always surprises me when people claim Math is objective, because from what admittedly very little I've seen, it's subject to as much debate and disagreement as any other subject. $\endgroup$ – JFA Jul 9 '15 at 20:43 Every single statement, question, and claim in mathematics is subjective because they are always based on a set of axioms, which are arbitrary, and are picked to observe their consequences. However, once you phrase the claim in the form of an implication, (such as: "if [the axioms of Euclidean geometry], then [the Pythagorean theorem]") then you have an objective truth. This is assumed to be the meaning when any mathematician states a theorem - we understand what axiomatic framework they are working in and understand that their claim is contingent on those axioms. Given a particular axiom system, three of the possible results for a mathematical claim are: We prove the claim true. [Ex: The Pythagorean theorem] We prove the claim false. [Ex: "The integers under multiplication form a group"] We prove that the claim is independent of our axiomatic system. [Ex: the continuum hypothesis]. (see Mario Carneiro's comment for other possibilities). There are no claims that can be subjective if we take for granted that we are working in an axiomatic system. Some people might argue that the continuum hypothesis is "subjective" under ZFC, but I prefer to think of it as simply having no truth value. Jonathan HebertJonathan Hebert $\begingroup$ The word "arbitrary" I think, is inappropriate. We rarely ever just pick an arbitrary set of axioms. Quote: ... based on random choice or personal whim, rather than any reason or system. "his mealtimes were entirely arbitrary" synonyms: capricious, whimsical, random, chance, unpredictable; More antonyms: reasoned, rational $\endgroup$ – Thomas Andrews Jul 8 '15 at 19:25 $\begingroup$ Since ZFC is a system of axioms (and implicitly assumed standard FOL), it has no "truth value" in any case. Truth value comes when you look at models of ZFC. In a given, specific, model of ZFC the continuum hypothesis has a truth value. Simply because in a given structure, a given statement has an assigned truth value. What you probably want to say, is that only know that $M$ is a model of ZFC does not provide us with enough information to determine if CH is true in $M$. If you are a Platonist, though, and you believe ZFC is true, then subjectivity enters the game in the form of beliefs. $\endgroup$ – Asaf Karagila♦ Jul 9 '15 at 7:59 $\begingroup$ My point is that subjectivity can only exist if we pretend that we don't know what axiom system the statement is referring to. We usually do, so we don't bring up the axioms of Euclidean geometry as an antecedent each time we state the Pythagorean theorem. 
$\endgroup$ – Jonathan Hebert Jul 9 '15 at 15:15 $\begingroup$ @Tobia: That's a bunch of bullocks. There's absolutely nothing shady in the axiom of choice, or else people would have noticed it right away. The fact that there are strange consequences of both the axiom of choice and its failure is because infinite sets are weird, and they don't agree with our physical, finite, intuition. Calling it shady suits very well to a physics student who understands very little of mathematics. Just saying. $\endgroup$ – Asaf Karagila♦ Jul 9 '15 at 22:30 $\begingroup$ I'm surprised that no one has pointed out that the "three categories" division of mathematical facts as provable, refutable, or provably independent is false. If I write these cases as $T\vdash\phi$, $T\vdash\lnot\phi$, $T\vdash{\rm Ind}(\phi)$ (where ${\rm Ind}(\phi)=(T\nvdash\phi\land T\nvdash\lnot\phi)$), it should be clear that there are also the options $T\vdash T\nvdash{\rm Ind}(\phi)$ ($\phi$ is provably unprovably independent), $T\vdash T\nvdash T\nvdash{\rm Ind}(\phi)$ ($\phi$ is provably unprovably unprovably independent), and so on a countable infinity times... $\endgroup$ – Mario Carneiro Jul 10 '15 at 0:41 There are a few more points of subjectivity that are related to, but slightgly different from the sometimes subjective choice of axiom system: I am talking about definitions of some standard objects. For example, people may have different "opinions" whether or not $0\in\mathbb N$. Or whether they accept an answer to a question asking for an explicit solution only if it is elementary (a combinations of polynomials, trigonmetrics, esponential, logarithm) or if they would also accept something involving the Lambert $W$ function or the error function ... And then there are things that are just personal preferences for different notation (or cultural preerences - I personally have great difficulties reading something as simple as a long division if it is written the "American way" that looks to me rather like a $\sqrt .$) Hagen von EitzenHagen von Eitzen $\begingroup$ How do you do long division where you're from? $\endgroup$ – Akiva Weinberger Aug 28 '15 at 3:39 $\begingroup$ like this, not like this. Of course the method is the same, but I really have difficulties "to adapt my eyes" $\endgroup$ – Hagen von Eitzen Aug 28 '15 at 13:50 It depends on what you mean. First: You give the equation $\lvert x \rvert = 2$ as an example of an equation/problem with two solutions. In a sense you could say that the problem only has one solution in that the solution set is $\{-2, 2\}$. The point of saying that a problem only has one solution/answer is that you can't have two answers that contradict each other. If you, for example, ask if a given equation has a solution, then the answer is only yes or no. It can't be both yes and no. There are, of course, questions that mathematicians are interested in that will have several answers. You could ask how you best mathematically describe something using an equation. Here the key work is "best". It is, to some extent, subjective what is best. So in general you will not have answers that depend on something subjective in (pure) mathematics. Now, you can drive the question about subjectivity and different answers to a philosophical one. Mathematics is (can be) built on a set of axioms (say of set theory) and the rules of logic. Some will argue that mathematics is just a game of how to use the rules of logic given the set of axioms. 
But then questions arise about how to best pick the axioms and questions about exactly what rules of logic we should allow. These discussions are interesting to some mathematicians and some will say that this is inherent to the nature of mathematics. And in these discussions answers will have a level of subjectivity to them. Much more can be said about this, but when we fix a system of axioms and a set of rules, then we avoid these discussions (I might be oversimplifying things). Another way that subjectivity can arise in mathematics is when the discussions turn to what areas deserve the most attention. If an institution is hiring a new faculty, then the existing faculty will have discussions about the directions of the department. Here arguments can be made for the different areas and the discussions can turn political. Note, some might say that $1+1 = 2$ is subjective because we have $1+1 =0$ when we do modular arithmetic (mod $2$). But understand that the elements $1$ are completely different in the two situations. We are taking about two different sets of "numbers" and so there isn't any subjectivity here. $\begingroup$ This is rare enough outside philosophy that I would bet most mathematicians have never heard of it, but there are logical systems where the answer to something can be both yes and no. The system is neutered in such a way that this doesn't result in the principle of explosion. $\endgroup$ – Matt Samuel Jul 8 '15 at 16:55 $\begingroup$ I think a slightly better example than solving x in your absolute value example is square root. Everyone "knows" the square root of 4 is 2. Why isn't it -2? Convention. (A useful one, to be sure.) Of course, depending on the context, you might instead report it as ±2. $\endgroup$ – Ben Hocking Jul 9 '15 at 3:56 $\begingroup$ @MattSamuel: There's very extensive, very well-researched and pretty well-understood theory of provability, and determinability. If you talk to mathematicians that dealt with logic, they'd most likely have heard of systems where yes-and-no is a valid state. $\endgroup$ – Marcus Müller Jul 11 '15 at 13:51 There are some notions which have no common accepted definition or actually many competitive definitions which do not agree. Some examples are: mixed motives the "field" with one element $n$-categories Perhaps, one day, mathematicians will agree in each of these cases which is the correct definition, but of course this is kind of subjective (what makes a definition correct?). In any case, the mathematics done with each of these notions is objectively true. Martin BrandenburgMartin Brandenburg $\begingroup$ So you are saying that the fact the people disagree about how to define a term is an example of subjectivity? $\endgroup$ – Thomas Jul 10 '15 at 14:38 $\begingroup$ No, please read carefully. Mathematicians have a notion in mind, as if it existed outside the universe, based on long mathematical experience, and want to define it "correctly". This kind of "correctness" is subjective, since it cannot be tested objectively. $\endgroup$ – Martin Brandenburg Jul 10 '15 at 14:40 $\begingroup$ Then I don't understand your answer. Can you elaborate? $\endgroup$ – Thomas Jul 10 '15 at 14:41 $\begingroup$ @Thomas: The subjective question is "which of these possible definitions produces the theory that is most useful / most interesting / easiest to work with / best at generating new insights?" 
The arguments in favor of different answers are ultimately subjective, but that doesn't prevent them from being, at their core, mathematics. $\endgroup$ – Henning Makholm Jul 11 '15 at 11:03 $\begingroup$ @HenningMakholm: This is what your answer says, but I mean something different. The "correctness" does not primarily depend on usefulness, but rather on the "nature" of a notion which mathematicians with long experience "feel", even without looking at examples (because they focus too much). As we all know, Grothendieck understood this better than anyone else, and his work has also shown that following this kind of feeling produces useful theories. $\endgroup$ – Martin Brandenburg Jul 11 '15 at 11:10 Some people do not agree with the usage of the Axiom of Choice in proofs* https://en.wikipedia.org/wiki/Axiom_of_choice https://mathoverflow.net/questions/22927/why-worry-about-the-axiom-of-choice There is Constructivism (* for example, these ones) which does not accept proofs by contradiction as proofs of existence https://en.wikipedia.org/wiki/Constructivism_(mathematics) Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem. The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems The consequences, within the meaning of subjectivity, are left to the reader ;) boucekvboucekv Are there problems whose solutions are subjective? It depends on how far you are willing to stretch the word "subjective." An idea may have two essentially different meanings to two different mathematicians. To one mathematician, the fundamental theorem of algebra might mean the field of complex numbers is algebraically closed. To another, it might mean that every complex $n\times n$ matrix has a complex eigenvalue. Both meanings are equivalent, but they are not "the same," in a very restrictive sense, so one might convey this as subjective. Otherwise? Not really. I think, however, that there is a larger question to be answered here. Is mathematics subjective? No...but yes. As far as "the facts," as you call them, are concerned, once we agree on definitions and axioms, things are pretty much set in stone. The theorems are either right or wrong (or unprovable), and there isn't any wiggle room. Here's something that most laypeople, and even a good handful of mathematicians, don't get: mathematics is not just "the facts."
The myth that mathematics is solely about the theorems is as wrong as it is dangerous; it scares away many people that only know mathematics by the bland material fed to them in high school and discourages mathematicians who decide that their value is based solely on what Thurston called "theorem-credits." While on the topic of theorem-credits, I think many, if not most, questions of the form "Is mathematics [blah-dee-dah-dee-dah]?" can be resolved by reading On proof and progress in mathematics. In fact, if you haven't read it, then you probably should. Mathematics, or at least the mathematics that I do, is about thinking and sharing ideas with other mathematicians. This is where subjectivity creeps in. Typical questions of a mathematician, amidst the deep mathematical thoughts, are "Does this look presentable?" "Is this notation clear?" "Is this worth further study?" "Does mathematics honestly need another use for the word 'normal'?" These are about as subjective as questions can get. Robin GoodfellowRobin Goodfellow Mathematics is not subjective because it is perceived differently by different mathematicians. That's a misuse of the word subjective. I think mathematics is the most objective thing we have. If that's true, then if the word objective means anything, we can simply say that mathematics is objective, without qualification. I want to bring up two related questions in the mathematics realm. The first is why mathematics so often relates directly to fundamental properties of the natural world. This happens so often it's spooky - someone goes off and researches an obscure mathematical problem, and the answer turns out to predict the curve of a NDA molecule's spirals or something like that. The second is whether all reality is socially constructed. I say no, that mathematics and much of science are entirely or mostly objective. Usefulness, implementation, interest, our readiness to take a given piece of work further - those are much more socially constructed. (But influenced by objective qualities of the work.) The fact that the results are perceived differently for reasons that are significantly socially constructed does not make them inherently subjective or socially constructed. Sorry for not reading/taking into account all the other comments, I have to get back to work... Floyd Earl SmithFloyd Earl Smith I would argue that the number of solutions does not influence the subjectivity of mathematics. Rather that mathematics is an inherently subjective art in the sense that it is perceived differently by different mathematicians. However, what I think is the true beauty of it, is that we all (most of us) operate under the same vast framework, where even with differing perspectives, all our results are consistent with one another. In comparison with other fields such as psychology or economics, the results obtained in those fields can be disputed, many economists hold different viewpoints and the results obtained are not consistent for the most part. This is something that does not occur (or if it does, occurs infrequently) in the field of Mathematics. For example, you said that $1 + 1 = 2$ has only one solution, yet in certain field of mathematics you will often see $1 + 1 \equiv 0 \pmod{2}$. Yet this does not mean that it is not consistent with $1+1 =2$, Mathematics just works under incredibly precise foundations. My point is, I think you need to reconsider your definition of subjectivity in mathematics. Zain PatelZain Patel $\begingroup$ I don't think this implies subjectivity. 
You have simply masked the problem by changing the context. Surely when working mod 2, it is still an objective fact that $ 1+1 \equiv 0$, since it is an analytic fact (it is based on the definition of "mod 2." What you've shown is that the beauty of mathematics entails a certain personal subjectivity. But the content itself does not. On the other hand, which axioms are reasonable and intuitive is a subjective matter. $\endgroup$ – Alfred Yerger Jul 8 '15 at 16:17 $\begingroup$ Oh, I agree. I was simply trying to point out to the OP that his/her definition of subjectivity wasn't totally logical. $\endgroup$ – Zain Patel Jul 8 '15 at 16:18 $\begingroup$ @ZainPatel But what is OP's definition of subjectivity? He/she explicitly stated that having two solutions does not make it subjective. $\endgroup$ – Eff Jul 8 '15 at 16:19 One important subjective question is when exactly an argument is sufficiently rigorous. I'm very fond of reminding people that rigor is a continuum, not a binary property. Take the Pythagorean theorem. Is this proof by rearrangement a rigorous proof? I would say yes, in the sense that I would consider it very pedantic of someone not to find it convincing. For all practical purposes, once you've seen that image, you know the theorem is true. But maybe you can't quite squish the nagging doubt that if you picked a really oddly shaped right triangle, it would reveal an implicit assumption in that diagram, and so you couldn't do the rearrangement and it wouldn't work. Or maybe you just find precision aesthetically appealing. Then maybe the Euclidean proof would suit you, or some more modern formalization of it. Or, perhaps you find the physical universe so complicated and ambiguous that you just prefer not to think of it at all, and you won't be satisfied unless you've defined triangles as subsets of the set of all pairs of real numbers, and proven the theorem formally in that setting, thus escaping from empiricism and reality altogether and reducing the theorem to a statement about real numbers (which are defined in some other way, etc). Of course, a disadvantage of this approach is that it becomes hard to justify that your argument gives you any knowledge about real world triangles, since you deliberately cut away any connection to them. But whether or not this is really bothersome is subjective. It all depends on why you're studying math and what you want to do with your theorems, as well as your sense of aesthetics. Jack MJack M Many of the answers posted here cover much what I would say about subjectivity in mathematics. For the most part, the subjectivity of mathematics arises from the acceptance of certain collections of axioms. One possibly controversial one is the Axiom of Choice. There are several "paradoxes" associated with it, such as the Banach-Tarski paradox. Whether or not you accept this as a paradox depends on how weird you think the statement is, but it is not a paradox in the mathematical sense. Once axioms are accepted, the deductions that follow are not subjective. There is another vein of skepticism in mathematics comes from the point of computability. For instance, in mathematics, we often talk about real numbers. We can produce logical formalisms to purport the existence of the set of real numbers, but a valid question is whether such a system exists in the real world. 
A computer can only compute in rational numbers (an then only a small subset of rational numbers can be represented by a computer), and measurements made in the physical world can only be expressed as rational numbers. Thus even though we talk about representing real numbers as limits of rational numbers, no such limit can be measured. This leads to other number systems such as the $p$-adic completion of the rational numbers. A less technical example is prime factorizations. As far as we are aware, there are only a finite number of particles in the universe, this means that there must be numbers that we can never express given any computer we could ever construct. If there is no way to express these numbers, then a valid question is if it is meaningful to work with these numbers. Certainly number theory tells us that formally every integer has a prime factorization, but consider the number $$N=2^{2^{2^{2^{2^{2^{2^{2^{2^{2^{2^{2^{2}}}}}}}}}}}}+1.$$ Theoretically, this has a prime factorization, but this number is so large, that no computer can calculate this prime factorization. (Even if this one can be factored, there must be some upper limit on the numbers that can be by the finiteness of the universe.) You could say that the claim that this number has a prime factorization is subjective, since none could be produced (formal proofs aside). As for myself, I happily work with the real numbers everyday I do calculus, and I thoroughly enjoy Hardy's number theory work. My paycheck depends on my ability to do analysis. However, while I work I must accept the Axiom of Choice, the real numbers, and prime factorizations. JoelJoel Erdos (well, actually Alfréd Rényi) said that a mathematician is a machine for turning coffee into theorems. If that's literally true, then mathematics is objective. Erdos also referred to The Book, in which God kept all of the most elegant proofs of each theorem. I believe that the ability to perceive elegance is a subjective thing. The fact that a problem has more than one solution does not make mathematics subjective. But the fact that the two proofs can be compared to each other and arguments can be made over which is the better definitely does make mathematics subjective. Finally, someone said that mathematics is what mathematicians do. I don't think that is literally true either. It blurs the distinction between The Book and the act of creating an entry for The Book. I really want to believe that no machine can create elegant theorems as well as a mathematician and a pot of coffee, or a really hot cup of tea. The act of creating or discovering a proof may or may not be a subjective act. The act of deciding whether that proof belongs in The Book is a subjective act. steven gregorysteven gregory $\begingroup$ Erdos didn't say that. If he would have said something like this, he would have more likely refer to amphetamines, not caffeine. $\endgroup$ – Asaf Karagila♦ Jul 11 '15 at 7:03 $\begingroup$ @AsafKaragila The source of the coffee quote is debatable and Erdos did like bennies. I tried them in college but quit because I didn't like the side effects. I hate coffee. Maybe that's why I haven't created a zillion theorems. $\endgroup$ – steven gregory Jul 11 '15 at 7:51 $\begingroup$ Do you have any references that it is debatable? To my knowledge, the original quote is in German, and it's a wordplay on "theorem" and "used coffee grounds". So being a joke told in English by a Hungarian mathematician... sounds less likely. 
(Not to mention that one Hungarian visitor mentioned this quote just last week, and while he mentioned the name of the mathematician who said that, I can't recall at this moment. It wasn't Paul Erdos though.) $\endgroup$ – Asaf Karagila♦ Jul 11 '15 at 10:06 $\begingroup$ According to WikiQuotes "Widely attributed to Erdős, this actually originates with Alfréd Rényi, according to My Brain Is Open : The Mathematical Journeys of Paul Erdos (1998) by Bruce Schechter, p. 155" The following quote is also in the list of misattributed quotes: "The first sign of senility is that a man forgets his theorems, the second sign is that he forgets to zip up, the third sign is that he forgets to zip down." $\endgroup$ – steven gregory Jul 11 '15 at 13:55 Yeah, it can: Consider that polynomials of degree 4 or lower are said to be solvable (or solvable "in radicals" or whatever) whereas the higher-order ones are said to be unsolvable. But this is quite subjective and arbitrary. The definition of $\sqrt{2}$ is literally "the positive real number $x$ that satisfies $x^2 = 2$". What makes solving $x^2 = 2$ any different from solving $x^5 + x + 10 = 0$? I really don't see a meaningful distinction -- both are real numbers and both are irrational. To write either of them out as decimals, you have to use the same kinds of root-finding algorithms for both. To me, the distinction that one of them can be written in terms of radicals and the other one can't seems pretty arbitrary and subjective. Just look at the definition of e.g. "closed-form solution". It's completely subjective. MehrdadMehrdad $\begingroup$ Not all higher degree polynomials are non-solvable. And the definition is not at all arbitrary or subjective. $\endgroup$ – Tobias Kildetoft Jul 9 '15 at 7:49 $\begingroup$ @TobiasKildetoft: (1) Yes, I know that, you know that, the reader probably knows that, and the reader probably realizes that all of us already know that, so I'm not sure why there's a need to be pedantic here. (2) I certainly find it subjective. I don't see why a different definition for solvability instead of via radicals would be illogical. If you think it is impossible to come up with another reasonable definition then feel free to show why you think so. Just saying something "isn't subjective" isn't exactly compelling. $\endgroup$ – Mehrdad Jul 9 '15 at 7:58 $\begingroup$ @Mehrdad: I think you're still missing the earlier commenter's point. The criterion "lies in a radical extension $k/\mathbb{Q}$" may be arbitrary (and I would definitely say it isn't, given, well, the entirety of Galois theory), but it certainly isn't subjective. As for closed-form expressions, we're often loose about the exact criteria, but it's not a term of art anyway. When we talk about the lack of a closed form for, say, $\int_0^x e^{-t^2}$, that can make completely precise and objective by specifying, say, $\mathbb{R}(t)$; we usually just don't care enough to bother. $\endgroup$ – anomaly Jul 10 '15 at 14:29 $\begingroup$ You right on one point. Closed-form is made up jargon with little mathematical relevance. $\endgroup$ – Zach466920 Jul 10 '15 at 15:02 This question is on the edge of being too soft to learn from. Some people are Platonists and will read this question differently. I'd say yes, math should be considered entirely subjective. You can imagine you encounter a homeless man who loudly recites Alice in Wonderland on the market place all day long. He may be personally convinced that $50^2$ equals $5225$ and not $2500$ or another number. 
Now if you ask most math students, they'd say $50^2=5225$ is wrong. They all share this opinion, but the beauty is we only need one person with another opinion to label something as subjective. And the mere opinion of the majority that this man may be "crazy" (i.e. not acting according to our rules and expectations), or that he doesn't change his mind upon being shown a "proof" or "demonstration", doesn't make him any less of an agent with a subjective opinion. His deviant view makes the issue subjective, by definition. We can now ask anybody on StackExchange and any fields medalist what $50^2$ reduces to, or $\frac{d}{dx}x^2$ for that matter, but we'll only find opinions that (probably) agree with each other. Opinions of people that reassure themselves, and nothing more than exactly that. Nikolaj-KNikolaj-K
Do exactly what I say! [closed] Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Update the question so it focuses on one problem only by editing this post. Inspired by Infinite Precision No, I'm not picky, I just want things done exactly right. Complete 5 of these tasks (tell me how you would do them) and I will give you exactly 25 bragging points. I will not allow any trivial answers ( 0, {}, a straight line is not a square wave, the absence of light is not white, etc ). Excite me a wave which is exactly square. Draw me an exact circle. Brighten my night with a light that is exactly white. Show me a container that is exactly full. Bind me a book which is exactly endless. (Reader will always read left to right, top to bottom, one page after another) Make me an apparatus which rotates exactly once per year. Find me two objects of exactly the same color. riddle science Tony RuthTony Ruth $\begingroup$ I want to concentrate but exactly written in bold diverts my mind. $\endgroup$ – wrangler Jun 11 '16 at 6:43 $\begingroup$ @humn no language tricks here, this is all science, so I added the science tag. Looking at the original post might give you some ideas. $\endgroup$ – Tony Ruth Jun 11 '16 at 20:41 $\begingroup$ With respect to binding a book that is exactly endless, I would respectfully direct you to John Barth's solution "Frame-Tale" described here. It is pretty cool that he foresaw you asking this question back in 1968. $\endgroup$ – user29544 Aug 29 '16 at 14:21 $\begingroup$ @TonyRuth Inazuma, Ankoganit, Lashit, and I all have a list of exactly five items that can be done, and they're not the same list. So does that mean there's an oversight in your puzzle or at least three of our answers? $\endgroup$ – user24580 Aug 31 '16 at 15:43 $\begingroup$ @Tony So can you give us some pointers on how to improve our answers to try to match what you were thinking? Which parts of our answers are wrong? It's hard to answer a puzzle with no clue of how we're doing. $\endgroup$ – user24580 Sep 1 '16 at 2:14 Three ideas: Earth rotating around the sun? Is this an apparatus? But this is the definition of a year. <div id="container" style="padding:0;"> <div id="stuff" style="margin:0;width=100%;height=100%;"> Stuff </div> </div> The container is exactly full (tell me if I am missing something). In JavaScript: var a = "object1"; var b = "object2"; a.color = "#f00"; b.color = "#f00"; a and b have exactly the same color. Another possibility: You can just take any object and show it twice as it's not mentioned that the objects are different. :P OK, that's probably cheating as the OP asks for two objects. But this could work: You could take two atoms because single atoms have no color. palschpalsch $\begingroup$ Yes, this is definitely the right idea for the first two. The rotation uses the definition of a year, and the only containers that can be full which I can think of are digital. As far as the color goes, I did not specify that they had to be physical objects, so I guess this counts. But, I think you can find two physical objects of the exact same color as well. $\endgroup$ – Tony Ruth Jun 11 '16 at 23:31 $\begingroup$ @Tony Ruth How do you "make an Earth rotating the sun"? Show me an apparatus, yes, but make - no. $\endgroup$ – Inazuma Jun 12 '16 at 0:09 $\begingroup$ @Inazuma, any object you build on Earth will revolve exactly once around the sun per year. 
Of course the earth rotates on its axis so any object on the surface of the Earth would pickup additional rotations. But, I think you can come up with some mad scientist contraptions that would rotate when the earth goes around the sun, but not when the Earth rotates on its axis. $\endgroup$ – Tony Ruth Jun 12 '16 at 2:43 $\begingroup$ @TonyRuth I suppose that makes sense, but then you have six completed tasks. Also your original post says rotates, not revolves. $\endgroup$ – Inazuma Jun 12 '16 at 2:54 A Foucault's pendulum at the north pole will precess one revolution per day. One at the equator will not precess at all. Therefore, there is some latitude north of the equator where the precession rate will be equivalent to one revolution per year. Chris CudmoreChris Cudmore $\begingroup$ s/North/South/g if you live in the southern hemisphere and are offended by this answer.. $\endgroup$ – Chris Cudmore Jun 13 '16 at 20:59 partial answer: rotates exacty once per year: the month field of a real-time clock (computer hardware) rotates exactly once per year, when it changes from 1 to 2 this is exactly the same as a rotation left by 1 bit. none of the other annual changes are exact rotations. two of the same exactly the same colour two similar sodium low pressure sodium vapour lamps will have indistinguishable spectra, two helium-neon, or carbon-dioxide lasers same. two pure samples of a coloured substance (eg sulphur) or just painted with paint from the same can. JasenJasen $\begingroup$ He asked for exactly the same color, not just indistinguishable. $\endgroup$ – palsch Jun 12 '16 at 10:07 $\begingroup$ there's a difference? $\endgroup$ – Jasen Jun 13 '16 at 12:11 $\begingroup$ Yes, I think so. Indistinguishable is when you can't see the difference with your eyes, but the same color is when you have two objects that don't even have differences when you use very good machines. $\endgroup$ – palsch Jun 13 '16 at 13:49 $\begingroup$ indistinguishable applies to machines too, $\endgroup$ – Jasen Jun 13 '16 at 23:45 $\begingroup$ OK, so sorry, I'm no native speaker. +1 then. $\endgroup$ – palsch Jun 14 '16 at 4:27 My ideas to complete 5 tasks: Square Wave The equation of a general square wave is given by $S(x)=A\cdot (-1)^{\left\lfloor\frac{2(x-x_0)}{T}\right\rfloor}$. (See Wolfram Mathworld entry ) No one said I can't put $A=0$, thus attaining a square wave of $0$ amplitude. So to achieve this, simply Do nothing! Exact circle As suggested by humn, draw a circle of $0$ radius. Endless Book Make a book with $0$ pages. If you think it's not endless, tell me the last page number! Or if that sounds like cheating, manshu' s idea of spiral binding looks fine. Annual apparatus Take a disk with a small hole, and place it horizontally above a photographic plate. There will be a dot on the plate every noon, and the dot will be on the exact same position after a year. Objects of same colour Since the question asks to "Find" to objects, and to find usually means to get something which already existed, digital imagery doesn't seem acceptable to me. Instead, take the inner sides of your eye-lids. 'Colour' has meaning only when you see it, and to see them you need to close your eyes, and so they are both black. AnkoganitAnkoganit $\begingroup$ Objects of same colour: Would an object and the objects reflection in a mirror be the same colour? $\endgroup$ – B540Glenn Jun 20 '16 at 14:51 $\begingroup$ What about the OP's stipulation: "I will not allow any NULL answers."? 
$\endgroup$ – Lawrence Aug 29 '16 at 15:06 $\begingroup$ @Lawrence It's ambiguous: I interpreted it to mean "I wouldn't allow any answer that does not include the solution to one or more of the tasks". $\endgroup$ – Ankoganit Aug 29 '16 at 15:19 $\begingroup$ @B540Glenn What if the mirror is tinted? $\endgroup$ – Ankoganit Aug 29 '16 at 15:20 $\begingroup$ I changed the wording of NULL answers to trivial answers. Think of it this way, I wouldn't count something as both an exact circle and an exact square. $\endgroup$ – Tony Ruth Sep 1 '16 at 22:45 Excite me a wave which is exactly square. For the non-technical readers, waves come in four types: sine, square, triangle, and sawtooth. However, to quote Wikipedia, "An ideal mathematical square wave changes between the high and the low state instantaneously, and without under- or over-shooting. This is impossible to achieve in physical systems, as it would require infinite bandwidth." So, this one cannot be done. This one is fairly easy. Use a compass. Brighten my night with a light that is exactly white. This line could be interpreted in more than one way - light that is white (contains all parts of the EM spectrum) or that appears white (stimulates all of the cones in the eye). On my cursory reading I thought this would be impossible, as what is considered white can change depending on the context. But the question is asking for a white light, not a white object; thus, moonlight will do. (It is night, after all.) Show me a container that is exactly full All atoms contain lots of empty space between the nucleus and the electrons. Thus, a container full of water still is not full, nor is there any way to squash anything else inside. However, the OP [confirmed] that we're dealing with an HTML container. Nevertheless, there's still a problem: the definition of full is "completely filled," and you're always able to fit more code inside an HTML container. So I don't see how this one is possible. (You think I'm being picky? I just want things done exactly right.) Bind me a book which is exactly endless Spiral book. Take out the cover, and there's always another page if you keep flipping them around. Credit to manshu. Make me an apparatus which rotates exactly once per year Rotates in respect to what? The question asked that we "make" something, so can I make an anything, leave it exactly where it is, and let it sit for a year, during which time it will have rotated around the sun? (Which was said to be correct, but imprecisely worded. We are trying to be exact, aren't we? IMO a better answer would be to build a contraption like an egg timer or a watch whose hand/dial rotates exactly once per year. Find me two objects of exactly the same color As I mentioned in the white light line, our sense of color changes depending on where it is, what light is shining on it, etc. The only way two objects can be of the same color is if either they're digital, or if they can be confirmed to have the same amount of the same light waves bounced off of them no matter what is - a black hole. (Credit to humn.) Vantablack gets pretty darn close, but the question asked for it to be "exactly" the same color, and there's still that 0.035% of possibly different-colored light that bounces out. Exact circle, exactly white light, exactly endless book, apparatus that rotates exactly once per year, and two objects with exactly the same color are exactly five items that can be performed exactly. 
$\begingroup$ your answers are pretty good, and you've gotten very close to the spirit of this question. Here are my critiques: I'm not sure what you mean by use a compass. In regards to the container, I think the answer on whether or not it can be filled is dependant on what it contains. If it contains matter, sure you can keep adding to it, and in the html example, if you say its a container for code then its not full, but if its a container for an object and you put one object in it, then its full because you cannot insert more objects. $\endgroup$ – Tony Ruth Sep 1 '16 at 22:33 $\begingroup$ As to the rotating apparatus, consider this. If you put an object on the surface of the Earth, it will revolve around the earth every day, around the Sun every year, around the galaxy in whatever timespan that is, and it will pick up the Earth's precession. That's a lot more than 1 rotation per year, so it does not fulfill the requirements. Unlike translational motion, there are consequences of being inside a rotating system, like centripetal and coriolis forces. $\endgroup$ – Tony Ruth Sep 1 '16 at 22:37 $\begingroup$ @TonyRuth What's wrong with using a compass? Stick the one end in the center, drag the other end around in a perfect circle around it. And what was wrong with my egg timer? $\endgroup$ – user24580 Sep 2 '16 at 3:23 Partial answer: "An ideal mathematical square wave changes between the high and the low state instantaneously, and without under- or over-shooting. This is impossible to achieve in physical systems, as it would require infinite bandwidth." - Wikipedia White light: Apparently colourless light, for example ordinary daylight. It contains all the wavelengths of the visible spectrum at equal intensity. Hmm, I should propose that I shine three red, green and blue lights all at the same time, with varying frequencies, and at one time the light must pass this point , as defined by the Commission Internationale d'Eclairage (CIE). Draw me a circle which is exactly round: raʊnd/ adjective 1. shaped like a circle or cylinder. Hence any circle is 'round'. Correction after OP's clarification: Suggested by humn Note, a circle is the set of all points in a plane that are at a given distance from a given point, the centre; equivalently it is the curve traced out by a point that moves so that its distance from a given point is constant. The distance between any of the points and the centre is called the radius. Hence it is nominated that I draw a circle of radius 0. Full of air should do it, shouldn't it? Unless you want to argue that atoms actually have lots of space between the nucleus and electrons and therefore a container actually can't be full. Update (as shown in palsch's answer): Draw an object digitally Two objects of exactly the same colour: Cut one in half? Otherwise proposed by humn: Black hole (i.e. no colour) Proposed by manshu, for spiral book: "We can use spiral binding and take out the book cover. In this way, whenever we turn the page, the page will become the last page of the book." Hence the two that are not possible are: Square wave and yearly rotating apparatus. The five that are possible are described above. InazumaInazuma $\begingroup$ air is compressible, when is it exactly full? $\endgroup$ – Jasen Jun 11 '16 at 8:17 $\begingroup$ ordinary daylight does not contain all visible frequencies in equal parts, this is how helium was discovered. you'll get white light from a 4700K furnace. $\endgroup$ – Jasen Jun 11 '16 at 8:31 $\begingroup$ I changed the circle prompt. 
Your container of air is not full since more air can be inserted. I'll count the cutting two objects in half, but I think there are also other ways to do it. $\endgroup$ – Tony Ruth Jun 11 '16 at 18:14 $\begingroup$ About the book....We can use spiral binding and take out the book cover. In this way, whenever we turn the page, the page will become the last page of the book. $\endgroup$ – manshu Jun 11 '16 at 18:25 $\begingroup$ A (mumble) might be the easiest-to-draw perfect circle. $\endgroup$ – humn Jun 11 '16 at 20:56 Give me some graph paper and I'll draw a perfect square for you. If more precision needed, I can also make this graph digitally (For e.g. In Python using Matplotlib). Draw me an exact circle Nah, I can't wake you up. Seems impossible. Show me a container that is exactly full. (credits Palsch) I can do this digitally. Since you didn't specify anything about the reader going back, therefore whenever reader request a new page, I'll generate random text. Hence, whenever reader finishes a page and requests for the next one, he always gets another page. I'll make some apparatus and then we have to go to one of the poles to check it's validity. I bet this'll rotate once a year. Find me two objects of exactly the same color. (credits Palsch) var a = object1(); var b = object2(); a.color = "#f00"; b.color = "#f00"; //Given that both object1 and object 2 have an attribute named color Laschet JainLaschet Jain A square wave basically plots between a low point and a high point, only - it is a non-sineoid waveform, according to wikipedia. So... binary? Plot your ones, and your zeroes, in a scatter-plot. Then, connect the dots - you know it has no in-betweens, so straight up-line form zero to one and straight down-line from one to zero, all on a forward line for time. If your computer is being all picky about lag-time and the time spent on rising and falling, tap it firmly and plot the silly thing by hand. A perfect square wave has an equal number of highs and lows (50% duty cycle) so make sure your binary message is long (or short) enough, or specially formulated enough, to get a perfect square wave. Or else "test plot" just alternating ones and zeroes. Take an eyedropper and a container of pure water. Carefully hold the eyedropper straight, and let a single drop of the water fall. While it is falling, its circumference at the equator should be an exact circle, since water's cohesion should pull it together in a sphere. Granted, it's hard to measure - but the circle has been created, and drawn with a description rather than a line. For an alternate method, build a space shuttle (or talk to those who have one) and repeat the experiment in space, where the shape of the water should form and exact sphere. Take a solution of distilled water and common dish soap, in a 6-to-1 ratio (optional: add glycerin or corn syrup). Mix well. Take a straw, a length of pipe, or a bubble wand, dip the end in your solution so that a thin film covers it, and gently blow through the hole to blow bubbles. They are a container, check. They are full (of air) also check. No more air can be placed in them, and none taken out of them, or they will pop. Or, alternately Take a sponge, and set it in a bowlful of water. Let rest until fully re-hydrated (some squeezing and expanding may help). The sponge is now exactly full of water. The fact it can be taken out of the water, and still be waterlogged, makes it a container. 
If you take it out of water, without squeezing, it may still be full of water (perhaps exactly so for a little bit) - but it's harder to judge the "exactness" of the fill or exactly when it is done dripping. Start with a standard hole-punch, and the correct number of binder rings (most hole-punchers make three holes, some do four). Binder rings are effectively smooth (though ones with as little difference in the hinge and clasp are preferred), so any pages, when turned, will quickly circle around to land at the back of the sheaf of papers. To make "truly" endless, add increasing numbers of pages until they have comfortably filled the ring, and the book can't "close" to a single sheaf but must remain standing, with the pages equally splayed out around the circumference - it can still be read, since even when full at the ring site, the edges are quite loose and can be turned. To make it a book, fill the pages with something, perhaps art and complex images, so one can continue revisiting the pages indefinitely, rather than a simple story which might not make sense out of order or may not be worth rereading. Making a clock-like mechanism shouldn't be too hard. Make sure it's sturdy enough to last at least a year (under any conditions), and also - instead of rotating a hand, have the motor rotate a disk set atop another disk, just for aesthetics. Have on top of the second disk, another rotating mechanism that can be set at an angle. Take it and run away - north, or south, depending on your location and travel habits, but you need to get over one of the poles - and I mean right over the turning point, not the magnetic pole or what have you. Set your apparatus down right over the pole (consider setting up a shelter over it, you built it study but why take chances), and set the mechanism to turn once per day - clockwise, exactly one rotation per 23 hours, 56 minutes, and four seconds. The earth rotates counterclockwise at the same rate, bringing the rotating disk of the apparatus to a halt regarding the earth's daily rotation. The second mechanism should be set to one rotation per year, and the mechanism tilted by 23.4 degrees - this should counterbalance the earth's tilt. In exactly one year, though, it has rotated exactly once around the sun - the rotation will occur at exactly the one year mark because one rotation about the sun is the actual definition of a year, it is 365:5:48:45 on average. Larger rotations, like the sun's or galaxy's rotating, are so immeasurably large, and the portion our apparatus got through in a year so small, that they don't count - that is, exactly one sun-rotation, and some extra wobbly non-rotation distance moved. addition: Because the axial tilt is a genuine pain, take our clock-mechanism, head over to the pole, and build a tiny railroad in a complete circle around the pole. It needs to be exactly 1,618.60075 miles long, which means 257.6085 miles from the pole in every direction (you may want the south pole for the extra room). Set your device on a tiny train on the tiny railroad, set it going counterclockwise with a speed of 3.25 inches per second - which will really cancel the axial tilt, and prevent a small second rotation from being traced by the device spinning around the tilt-axis. 
Alternately, go back to the folk you got your space ship from back in drawing an exact circle, and borrow a space ship for a brief jaunt over one of the poles (and their computers) and plant the device...in space, in a reverse-geosynchronous orbit, directly over the axis perpendicular to the earth's rotation (that means, 23.4 degrees from the pole) or else in a sun-synchronous orbit over the equator. Use the two rotating mechanisms to cancel out any other rotations you would like to eliminate (the first will cancel the tilt but the daily rotation will need countering, the second will cancel the daily rotation but the tilt will need countering, and one to spare if the satellite it's attached to might rotate and that needs countering, whatever). Take two samples of a pure substance - composed of a single element or reliably pure compound. It is important that the substance be pure, 100%, but not really what the substance is - even distilled water will do as long as it is pure. Since we want two objects, you could even purify the samples from different batches - and for potential bonus points, have them identically shaped or held. Take the samples into a lead-lined darkroom, set them on a table, and shut off the lights. Your objects are now exactly the same color. Since they are a pure substance, no chemical test can distinguish between the wavelengths of light they will reflect, and in such perfect darkness, your eyes (or anyone else's) can't be fooled by apparent differences brought on by exact shape, any containers, or specific placement in regards to the light source (angles or shadows). Megha Quantum chromodynamics tells us that quarks hold one of exactly three color charges. So you can just take four random quarks and two of them are bound to be exactly the same color. You will need a steel ball, a smooth level surface, a nail and a rope. Take a steel ball and attach the rope to it. Drive the nail into the surface, and attach the other end of the rope. Now spin the ball with enough velocity that the rope stays taut. The ball will trace an exact circle. Or easier: put this into a PostScript file: '0 0 3 0 360 arc closepath stroke'. jure
Inferring the epidemiological benefit of indoor vector control interventions against malaria from mosquito data

Ellie Sherrard-Smith, Corine Ngufor, Antoine Sanou, Moussa W. Guelbeogo, Raphael N'Guessan, Eldo Elobolobo, Francisco Saute, Kenyssony Varela, Carlos J. Chaccour, Rose Zulliger, Joseph Wagman, Molly L. Robertson, Mark Rowland, Martin J. Donnelly, Samuel Gonahasa, Sarah G. Staedke, Jan Kolaczinski & Thomas S. Churcher

The cause of malaria transmission has been known for over a century but it is still unclear whether entomological measures are sufficiently reliable to inform policy decisions in human health. Decision-making on the effectiveness of new insecticide-treated nets (ITNs) and the indoor residual spraying of insecticide (IRS) has been based on epidemiological data, typically collected in cluster-randomised control trials. The number of these trials that can be conducted is limited. Here we use a systematic review to highlight that efficacy estimates of the same intervention may vary substantially between trials. Analyses indicate that mosquito data collected in experimental hut trials can be used to parameterize mechanistic models for Plasmodium falciparum malaria and reliably predict the epidemiological efficacy of quick-acting, neuro-acting ITNs and IRS. Results suggest that for certain types of ITNs and IRS using this framework instead of clinical endpoints could support policy and expedite the widespread use of novel technologies.

New vector control tools are urgently needed to control malaria1.
Two sets of evidence on the likely impact of new classes of intervention are required to expedite the time between their development and a World Health Organization (WHO) recommendation for their widespread use; (i) information on the tools safety, quality and entomological efficacy and (ii) evidence that it reduces disease in the target population2. Requirement i uses evidence of vector control efficacy pertaining to entomological outcomes and formulation durability. Evidence requirement ii needs epidemiological data from human populations where the intervention has been used. Cluster-randomised control trials (RCTs) are the primary method used to generate quality evidence of disease control for interventions, which act to reduce transmission across the whole community and not just those people using them. Provided that the two WHO evidence standards are met, the resulting prequalification of the product by WHO provides the confidence sought by countries and large international procurers such as The Global Fund. To provide some reassurance of generalisability of impact requirement ii must have data from a minimum of two epidemiological trials conducted in different settings. There are no specific guidelines on how different these settings need to be, and it is not possible to capture the diverse array of ecological, entomological and epidemiological scenarios the interventions are likely to be deployed in. Unless these data are generated concurrently or shortly after requirement i, delays between product development and approval can occur, slowing product uptake and public health impact3. Indoor vector control tools that kill mosquitoes and aim to provide population-level impact in addition to personal protection are the most widely used form of global malaria prevention4. Insecticide-treated nets (ITNs) are the principal intervention, with over two billion nets distributed globally by 2020. Until that time, nearly all nets deployed have been broadly equivalent in their design, containing a single insecticide of the pyrethroid class. Resistance to pyrethroids is now widespread4 and the WHO has identified the development of nets treated with insecticides other than pyrethroids as an unmet public health need and alternatives are currently under development or evaluation. The synergist piperonyl butoxide (PBO) has been added to some pyrethroid net products to combat pyrethroid-resistant mosquitoes since 2008. Pyrethroid-PBO ITNs have only been widely distributed since 2018 following demonstration of their impact on disease5,6, which led to a conditional WHO recommendation in 2017. The epidemiological evidence from these RCTs is consistent with entomological data that shows the ability of pyrethroid-only ITNs to kill pyrethroid-resistant mosquitoes has been reduced, and this mortality-inducing effect can be somewhat restored with pyrethroid-PBO ITNs, though other explanations have been proposed7. A second key vector control tool recommended for large-scale deployment is indoor residual spraying (IRS) of insecticides aimed at killing mosquitoes resting on treated surfaces. Five chemical classes of insecticide are covered by the WHO recommendation for IRS8. These products have different durations of activity at killing and inhibiting blood-feeding of mosquitoes9, and use and price varies substantially10. Overall, IRS has been deployed to protect fewer people than ITN campaigns10, though it has been shown to be highly effective in the focal areas where it is used11. 
Malaria budgets are generally restricted, and new(er) ITN and IRS products tend to cost more, at least during market introduction. Post-deployment epidemiological research and surveillance is not always possible in many malaria endemic regions due to a lack of financial resources. It is unclear whether routine case-reporting is sufficiently robust to guide intervention deployment12, making the generation of a strong evidence-base essential before new products are adopted. Surrogates of protection are widely utilised in medicine13. Changes in blood-pressure are used to indicate differing risks of hypertension whilst antibody responses provide evidence of vaccine protection. This raises the question whether entomological data on the ability of a tool to kill mosquitoes can be used to infer protection provided to humans. Clinical surrogates used to evaluate drugs and vaccines only need to consider congruence within individual humans, making it easy to link a person's disease status to whether or not they have received treatment. Extrapolating the impact of interventions on mosquitoes to changes in the burden of disease in populations of people will likely be more complex. We do not therefore refer to entomological measures as a surrogate but rather as measuring a correlation of protection. Experimental hut trials (EHTs) are a complex real-world entomological assay that quantifies the host-seeking mosquito interaction with humans and indoor vector control interventions. EHTs are conducted in specially designed huts containing volunteers either protected by the intervention or acting as a control (unprotected or, more commonly, sleeping under an untreated net14). Wild, free-flying mosquitoes naturally enter huts and differences in numbers caught, dying, and blood-feeding between intervention and control arms are used to estimate entomological efficacy of ITNs and IRS. EHT are widely used in the development of novel ITNs and IRS and follow a well-defined protocol and analysis plan. A standard set of holes are cut into ITNs to mimic natural wear and tear and enable the actions of the insecticide to be fully assessed. Results can be used to parameterise mechanistic models of malaria transmission that capture the different entomological effects of ITNs and IRS in specified local settings9,15,16. In terms of speed and cost, these trials sit between entomological laboratory assays and RCTs. To date, there are no published studies, which conducted EHTs alongside epidemiological trials. Here, we propose a framework to investigate the utility of entomological data in predicting the epidemiological impact of ITNs and IRS recommended for large-scale deployment against malaria (Supplementary Fig. S1). We use this framework to investigate the ability of data derived from EHTs to predict changes in malaria parasite prevalence measured in RCTs. The non-linear transmission dynamics of malaria means that a reduction in mosquito bites caused by ITNs, or IRS, does not correspond to a similar reduction in malaria burden. To account for this, we use a malaria transmission dynamics model to convert EHT data into estimates of epidemiological impact. The model is calibrated to each trial with local entomological and epidemiological data so that it recreates the observed baseline parasite prevalence estimates. 
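To make the calibration step concrete, a minimal sketch is shown below. The prevalence curve used here is a placeholder, not the transmission model itself; only the root-finding logic reflects the procedure described above, and the function name, curve and target value are illustrative assumptions.

```python
# A minimal sketch of calibrating baseline prevalence, assuming a placeholder
# prevalence curve; the real mapping comes from the fitted transmission model.
from scipy.optimize import brentq

def baseline_prevalence(mosquitoes_per_person):
    """Stand-in for a transmission-model run to equilibrium: any
    monotonically increasing mapping from mosquito density to prevalence."""
    return mosquitoes_per_person / (mosquitoes_per_person + 5.0)

observed_baseline = 0.42  # hypothetical baseline cross-sectional survey result

# Choose the mosquito:human ratio so the simulated baseline matches the survey.
calibrated_density = brentq(
    lambda m: baseline_prevalence(m) - observed_baseline, 1e-3, 1e3)

print(f"calibrated mosquitoes per person: {calibrated_density:.2f}")
```

In the actual analysis the same idea applies, but each evaluation of the objective function is a run of the full model with the site-specific seasonality, species composition and intervention history.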
It is then run for the duration of the trial and model predictions are compared to observed changes in malaria parasite prevalence at the respective timepoints that match cross-sectional surveys completed throughout each trial. The effectiveness of ITNs and IRS will likely vary depending on local epidemiology, history of control and local mosquito characteristics. We use a systematic review of the published literature to identify all RCTs investigating the effectiveness of mass use of ITNs and IRS at reducing malaria prevalence. This is used to demonstrate how ITN and IRS efficacy varies between sites and provides a robust method for assessing the ability of the framework to predict the effectiveness of ITN and IRS across the range of African sites where RCTs have been conducted. Differences in the effect sizes of ITN and IRS RCTs Data from a limited number of epidemiological trials cannot be readily extrapolated to infer the quantitative impact of ITNs and IRS in different settings. Thirteen different RCTs (Supplementary Table 1) were identified by the systematic review that fulfil the search criteria and recorded changes in malaria parasite prevalence following the mass use of ITNs and/or IRS (Supplementary Fig. S2). These studies contained 37 distinct trial arms (a total of 73 cross-sectional surveys) implementing different ITN and IRS products alone, or in combination, across multiple ecological settings in Africa (Supplementary Data 1.1–1.3). The ability of interventions to reduce malaria prevalence relative to the control arm varied markedly across RCTs (Fig. 1). Differences in trial design and study setting were considerable, with the time of impact assessment after deployment of the interventions varying substantially, making overall comparison difficult. We do not have cluster-level data to adjust for known ecological differences in trial arms so the effect sizes of the RCTs are crudely estimated as the absolute difference in prevalence of the treatment arm relative to the control arm. Even when consistent endpoints were used, considerable differences between trials remain; for example, in locations with no evidence of insecticide resistance the efficacy of a single brand of net in reducing disease prevalence varied from 11% after 11 months in Tanzania (latest observation reported in the RCT)17 to 57% after 20 months in Kenya (earliest observation reported in the RCT)18. Multiple entomological and epidemiological determinants may explain these differences, and as noted, with cluster-level data some of these can be accounted for, but the ambiguity of the intervention efficacy (Fig. 2a) will likely hinder extrapolation of the findings to other settings making it more challenging for decision makers to decide on the most appropriate vector control to implement. Fig. 1: Summary of the randomised control trials completed on ITNs, indoor residual spraying (IRS) or a combination of these intervention tools. The first column indicates the control arm interventions to which the tested intervention (2nd column) are compared. Intervention types represented include no-intervention (black), untreated mosquito nets (grey), conventional nets dipped in pyrethroid insecticide every 6–8 months (CTNs, red), pyrethroid-only insecticide-treated nets, which incorporate insecticide (ITNs, red), pyrethroid-PBO ITNs (blue), or ITNs together with IRS (pyrethroid-only ITN + IRS, pale green, pyrethroid-PBO ITN + IRS, purple) or IRS only (orange). 
The country and study represented are shown in columns 3 and 4; symbols correspond to the studies shown in Fig. 2 and references in the supporting information Supplementary Table 1. The efficacy estimate reported in each of the trials is shown by the coloured square box at the appropriate timepoint the survey was conducted following start of the trial. It is calculated as the mean difference between reported malaria prevalence in the intervention arm relative to the control arm, with greener colours indicating higher observed differences. Trials vary substantially in the number and timing of the cross-sectional surveys. Fig. 2: Differences in the epidemiological impact of insecticide-treated nets (ITNs) and the residual spraying of insecticides indoors (IRS) as evaluated in cluster-randomised control trials (RCTs) and predicted by a entomological data. A Trial observed relative (to respective control arms as noted in Fig. 1, and Materials and Methods) efficacy against prevalence estimated for 46 data observations (Supplementary Data S1.8). Bar colours indicate the different types of intervention examined. B Comparison between observed trial prevalence and prevalence predicted by the transmission dynamics model parameterised using entomological data (matching diagnostic method and cohort characteristics, Supplementary Data S1.7, best-fitting parameters shown; Supplementary Table 3, column 4) for 13 RCTs, with symbols identifying principal investigators listed with the start date of the trial, that reported a total of 73 prevalence cross-sectional surveys. Colours indicate the type of intervention in the trial arm: pyrethroid-only nets (red), pyrethroid-PBO nets (blue), pyrethroid-only nets and IRS (green), pyrethroid-PBO nets and IRS (purple), or IRS only (orange). C Comparison of observed efficacy estimates and those predicted by the model (Supplement Data S8). In C, colours denote the length of time in months since the deployment of interventions when the prevalence observation was made that was used to estimate efficacy. Individual model predictions for each study are given in Supplementary Figs. S3–S15 with equivalent figures for alternative methods of combing data shown in Supplementary Fig. S18. Vertical and horizontal solid lines around point estimates (mean) for either observed or predicted data indicate 95% uncertainty from intervention performance, while dashed black line in B and C show the equivalence line. Uncertainty estimates for the observed data found in Supplementary Data S1.3 and for the different models in Supplementary Data S1.7 and 1.8 for B and C, respectively. Predicting epidemiological outcomes from entomological data We systematically investigated the ability of EHT data and models to predict epidemiological outcomes.Each of the 37 trial arms were simulated separately using observed patterns of intervention use (Supplementary Figs. S3–S15). Model predictions were compared to observed trial results for subsequent timepoints. The best-performing model (Supplementary Tables 2 and 3) predicted malaria prevalence at different timepoints after the start of the trials with high accuracy (Fig. 2b, Adjusted-R2 = 0.95, N = 73). Model predictions of epidemiological impact relative to the control arm of the matched cross-sectional survey were broadly consistent with those observed in the RCTs (Fig. 2c, Adjusted-R2 = 0.67). Further investigation of the goodness-of-fit of each of the different studies investigating the impact of different EHT design (Supplementary Fig. 
S15) are provided in Supplementary Figs. S17–19. The framework predicted different types of ITNs and IRS vector control interventions with broadly equivalent consistency (Supplementary Table 3), be it the change in malaria parasite prevalence caused by any net (including conventional dip-nets; R2 = 0.97, n = 37), pyrethroid-only long-lasting insecticidal nets (R2 = 0.97, n = 14), pyrethroid-PBO ITNs (R2 = 0.93, n = 7), or additional protection from IRS (R2 = 0.91, n = 20). This analysis suggests that EHTs are equally good at predicting trials of all ITNs and IRS currently recommended for large-scale deployment (though impact following routine deployment is likely to be different). The rationale for having different experimental hut trial designs is that housing type varies between regions and that this could influence the entomological impact of ITNs and IRS. The analyses are repeated to investigate whether models characterising the entomological impact of ITNs and IRS using EHTs of the regional design are better able to predict RCTs from that region (i.e., do East African design huts better predict the RCTs carried out in East Africa?). Though the number of studies are limited there is no systematic evidence to suggest that models fit to data using the local design of hut predict the local RCTs better using this mechanistic framework (Supplementary Table 4). There is a need to quickly evaluate the likely epidemiological impact of new ITNs and insecticides for IRS to inform policy. The systematic review of RCT data shows that mass use of ITNs and IRS consistently reduce malaria parasite prevalence, but the magnitude of the decrease varies substantially (Fig.1). This provides evidence that these interventions have public health benefit but that the level of protection can vary due to varying ecologies and endemicities in the setting. RCTs cannot be conducted across the range of areas within which ITNs and IRS could be beneficial, so entomological assays and modelling could be a feasible alternative to help guide local decisions. A previous review19 indicated that since 1988, over 136 EHTs of ITN or IRS products have been conducted using broadly standard methodology in over 33 sites in Africa (Supplementary Fig. S14), compared to the 14 RCTs of the same products (since 1992, from 13 sites across the continent of Africa, Supplementary Table 1). This study shows that a framework that combines meta-analyses of EHT data with a transmission dynamics mathematical model can approximate the results of the RCTs for the different ITN and IRS interventions currently widely used. The trials summarised in this systematic review were conducted over a 30-year period and differed substantially in their design and time of data collection. Earlier trials may not have adhered to currently expected standards as future RCTs must now be registered, and their design reviewed by the WHO Vector Control Advisory Group and other bodies in advance to ensure they are robust. Despite this, the substantial uncertainty in epidemiological effect size outlined in Fig. 1 is generally predictable by the model, which accounts for ecological site-specific details. Meta-analyses can deal with differences by sub-group analyses though this is heavily restricted when the number of trials is small20. Given that WHO's minimum requirement for evaluation of a new intervention is only two trials with epidemiological outcomes, it is notable that observed differences between trials can be largely explained using EHT data and trial context. 
Nevertheless, the question arises whether EHTs are robust enough to support a policy decision? Considering the extensive entomological evidence-base for the ITNs and IRS and the wide variability observed in RCT efficacy estimates our analysis suggests that, for those interventions examined here (fast-acting and neuro-acting insecticide-treated nets and spray products), the evidence is sufficiently strong to justify using entomological efficacy measured in EHTs as a correlate of protection to facilitate WHO recommendation on whether a product in an existing product class would have epidemiological value (evidence requirement ii). It is important to highlight that the interventions investigated here all have proven epidemiological impact, so the ability of EHTs to identify interventions that do not provide epidemiological benefit (should that be shown from epidemiological data) has not been tested. Such an acceptance of entomological data would bring ITNs in line with IRS evaluation as new IRS products with proven quick-acting entomological characteristics do not require epidemiological evidence of impact. Generation and use of high-quality information on the epidemiological impact of vector control interventions should always be encouraged to support decision-making. This work suggests that in the absence of these data EHT results combined with local information can predict the magnitude of epidemiological impact. It also justifies the use of EHT entomological data to evaluate the non-inferiority of new products that are like those that have already provided epidemiological evidence of impact21. There are several important caveats to the use of EHT data to support decision-making. The evidence presented here is for ITN and IRS products with very defined entomological modes of action that use quick-acting neuro-acting insecticides to kill and inhibit blood-feeding—effects that are measurable using experimental huts. Outcomes which are not captured by experimental huts may fail to identify epidemiological impact. ITNs with different modes of action, such as the pyrrole insecticides22 that act on mitochondrial respiratory pathways or insect growth regulators23, which act on female mosquito fertility will require further empirical epidemiological evaluation to allow analysis similar to the one presented here. Similarly, vector control tools other than ITNs and IRS with alternative delivery mechanisms, like spatial repellents or attractive targeted sugar baits, will require extensive epidemiological evidence to support their use (ideally using RCTs). The use of entomological assays to evaluate these new types of interventions alongside their epidemiological trials (in the same trial sites) could provide the evidence-base to support using mosquito data as a correlate of protection for evaluating novel methods of vector control in the future. Here, we have only considered African ITN and IRS trials, and additional studies are needed to underpin potential extrapolation of impact to other malaria endemic parts of the world. The model is specific to falciparum malaria, so we cannot comment on the use of this methodology for other Plasmodium parasites of public health importance. Results indicate that using a hut design from the region where the RCT took place (East or West Africa) did not improve model predictions. 
No design is going to broadly capture the diversity of housing from such a large and varied geographical continent and the diversity of mosquito species within that region could dramatically impact ITN and IRS efficacy. Studies directly comparing hut designs would be interesting to explore the advantages of tailoring the assay to the local regional housing compared to having a more consistent assay between sites, which might allow a more direct comparison of the same product against different mosquito populations. Further work is also needed to verify the durability of pyrethroid-PBO ITNs and assess whether the natural aging process can be artificially induced by washing (as is the case for pyrethroid-only ITNs). An artificial method of aging ITNs would enable new and washed nets to be simultaneously evaluated in EHTs allowing nets to be evaluated over a couple of months rather than multiple years in RCTs. If EHT data is to be increasingly used to support policy, then there is a further need to ensure reproducibility of results. The WHO already require EHTs used in vector control product registration to follow good laboratory practice regulations, and there are on-going projects to certify testing facilities for hut sites across Africa. Protocols already provide clear instructions as to how study arms should be selected, rotated, randomised, how study arms can be blinded, and replicated14, though power calculations are rarely conducted, primarily due to uncertainties in the numbers of mosquitoes caught per night. This study has tried to reduce any potential bias by using a meta-analysis of many trials. In future the measurement error of the assay needs to be further assessed and causes of variability in trial outcomes identified to instil greater confidence in results from individual trials. This would allow more rigorous power calculations to be conducted, though adaptive trial design may be required to ensure conclusions are based on sufficient numbers of mosquitoes. In this study, EHTs were used for assessing the entomological correlate of protection. There is considerable scope to improve predictions; future studies could consider augmenting EHT data with other laboratory or field assays that can evaluate interventions24. These could be rigorously assessed using the framework outlined here. We stress that epidemiological trials should still be advocated for to evaluate WHO recommended ITN and IRS products. Mosquito ecology is highly diverse, and we do not fully understand how the effectiveness of these interventions vary between settings nor how they are influenced by changing mosquito populations (for example due to insecticide resistance or behavioural avoidance25). Further epidemiological studies, such as well-resourced implementation programmes11, will be important to verify the context-specific impact estimates needed for intervention prioritisation, and to provide continued justification for the considerable annual cost of vector control. Alongside this, the use of entomological data can expedite the time between the development of new ITNs and IRS and their widespread use, saving lives. 
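Relating to the power calculations mentioned above, a rough simulation-based sketch is given below. It assumes negative-binomially overdispersed nightly catches and a simple two-arm comparison of 24-h mortality; all parameter values (nights, mean catch, dispersion, mortalities) are illustrative assumptions, not recommendations.

```python
# Simulation-based power estimate for a simplified two-hut EHT comparison,
# assuming overdispersed nightly catches; all values are illustrative.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

def eht_power(nights=30, mean_catch=8.0, k=0.5,
              mortality_control=0.05, mortality_itn=0.12,
              n_sim=1000, alpha=0.05):
    """Power to detect a difference in 24-h mortality between a control hut
    and an ITN hut when nightly catches follow a negative binomial."""
    p = k / (k + mean_catch)  # numpy (n, p) parameterisation, mean = mean_catch
    significant = 0
    for _ in range(n_sim):
        caught_c = rng.negative_binomial(k, p, size=nights).sum()
        caught_i = rng.negative_binomial(k, p, size=nights).sum()
        dead_c = rng.binomial(caught_c, mortality_control)
        dead_i = rng.binomial(caught_i, mortality_itn)
        table = [[dead_c, caught_c - dead_c], [dead_i, caught_i - dead_i]]
        chi2, p_value, dof, expected = chi2_contingency(table)
        significant += p_value < alpha
    return significant / n_sim

print(f"estimated power: {eht_power():.2f}")
```

A fuller calculation would also account for hut, sleeper and night effects and the rotation design, which this sketch deliberately ignores.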
A systematic review (PROSPERO Registered: CRD42020165355) of all cluster-randomised control trials currently published on ITNs [including conventional nets (CTNs), pyrethroid-only long-lasting nets (pyrethroid-nets), and pyrethroid-piperonyl butoxide synergist nets (pyrethroid-PBO ITNs)], IRS or a combination of both interventions was completed to validate an established transmission model for Plasmodium falciparum malaria parameterised using entomological assessment of the interventions. Three search platforms, Web of Knowledge, PubMed and Google Scholar were used and further studies were included from three recent Cochrane reviews that have focused on individual- or cluster- randomised control trials testing either ITNs, IRS or both26,27,28. Our search criteria focused on studies within Africa, and those reporting an epidemiological outcome such as parasite prevalence or clinical incidence in a defined age-cohort. A total of 138 studies were initially identified for further assessment (Supplementary Fig. S2). Those papers identified through the systematic review went through another round of screening to ensure they fell within the scope of the work and were compatible with existing modelling parameterisation. These criteria included (i) the intervention falls within an existing World Health Organization recommendation (so trials, or arms of trials, investigating pyrethroid-pyriproxyfen ITNs29 or insecticide-treated curtains30 were excluded), (ii) the entomological impact of the product had been previously statistically characterised as part of the modelling framework (trials investigating DDT31 or propoxur IRS32 were excluded), (iii) the study was within the Africa continent, (iv) the study randomised interventions in the intervention arm across the community (i.e., interventions were not targeted to individuals or risk groups within the community)33,34,35, and (v) the study was not reporting a cluster-randomised design36. A full description of why studies and arms were excluded is provided in Data S1.1. RCTs can assess the public health impact of interventions using different epidemiological endpoints. The two most common metrics used in malaria RCTs is infection prevalence (generally assessing parasitemia in a particular age group using microscopy or rapid diagnostic tests) or clinical incidence (typically assessed using active case detection in a cohort, which had previously been cleared of infection). These metrics are both equally valid though may give different results. For example, it may be harder to change malaria parasite prevalence with a partially effective intervention in a high-transmission setting (where people have a high chance of being reinfected) compared to a low-transmission setting (where reinfection is less common). Similarly, estimates of clinical incidence will vary depending on the study design and regularity of follow-up. For example, there are practical constraints on the number of times people within an active cohort can be tested. In areas of higher transmission incidence estimates will be greater the more regularly the cohort is tested as people infected multiple times between screening will be less common. This information on the regularity of screening is not always reported making it difficult to adjust models accordingly. It is also important to account for cluster-level effects when interpreting trial results, and this cluster-level data is also mostly unavailable37. 
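As a minimal illustration of the screening-frequency effect on measured clinical incidence described above, the sketch below assumes infections arrive as a Poisson process at a hypothetical force of infection and that every positive is treated and cleared at each visit, so a visit can detect at most one episode however many occurred in the interval.

```python
# Why measured incidence in an actively followed cohort depends on how often
# the cohort is screened (hypothetical force of infection).
import numpy as np

foi = 3.0  # assumed infections per person-year

for interval_days in (14, 30, 60, 90):
    t = interval_days / 365.0
    p_detect = 1.0 - np.exp(-foi * t)   # probability a visit finds a new infection
    measured = p_detect / t             # detected episodes per person-year
    print(f"screening every {interval_days:3d} days -> "
          f"measured incidence {measured:.2f} per person-year")
```

The measured rate falls as the screening interval lengthens even though the underlying transmission is unchanged, which is why unreported screening schedules complicate model comparison.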
The systematic review identified more studies that evaluated interventions in their ability to change malaria prevalence, with 13 out of 14 RCTs showing how the intervention changed parasite prevalence between the study arms compared with 8 RCTs, which reported changes in clinical incidence. Therefore, we focus on prevalence as our metric for epidemiology impact in this framework though note this should be repeated with clinical incidence estimates should more data become available. The final dataset had 73 cross-sectional surveys of prevalence in a defined age-cohort, 37 trial arms from 13 different RCTs. Characterising the entomological impact of ITNs and IRS Experimental hut trials (EHTs) measure the outcome of wild, free-flying, mosquito attempting to feed on volunteers resting indoors in the presence of an indoor intervention38. This includes (i) whether or not a mosquito is deterred away from a hut, which has the intervention (calculated by the number of mosquitoes found in the control hut relative to the intervention hut), (ii) whether the mosquito exits without feeding (repellence, measured as the percentage of alive unfed mosquitoes inside the intervention hut), (iii) the percentage entering the hut that successfully blood-feed, or (iv) the percentage of mosquitoes which die. Intervention efficacy is typically summarised for the intervention huts relative to a no-intervention (or untreated net) control huts, be it induced mortality (the increase in the percentage of mosquitoes dying over a 24-h period) or blood-feeding inhibition (the reduction in the percentage of mosquitoes receiving a blood-meal). EHTs use specially built structures that follow a defined floor-plan and set of specifications. There are multiple designs of experimental hut as they were originally intended to replicate the predominant type of housing found in the local area. We recently conducted a systematic review to capture the average behaviours of mosquitoes across different hut designs19. The two most used huts in Africa are the West African design and East Africa hut39 (a third hut—the Ifakara hut—is not considered here39). The meta-analyses showed that the associations describing the probable outcome of a mosquito feeding attempt (deterrence, repellence, successful feeding, or death) varies according to hut design. It is unclear that hut design best predicts epidemiological impact. Meta-analyses of EHT data have shown how the entomological efficacy of pyrethroid-nets has diminished over time, probably due to the rise of pyrethroid-resistant mosquitoes16,19,40, though there may be some manufacturing changes41. EHTs are conducted throughout Africa but are limited to the sites where the huts are built and cannot directly inform estimates of ITN efficacy outside of these areas. The most widely used quantitative measure for approximating the phenotypic level of resistance in the local mosquito population is the discriminating-dose bioassay. There are two main types of discriminating assays, the WHO susceptibility bioassay and the CDC bottle bioassay42,43. Both these assays measure the proportion of local Anopheline mosquitoes that survive 24-h following exposure to a discriminatory dose of pyrethroid for 60 min. Results from these bioassays are highly variable44 though collating data from multiple tests has shown clear trends over time45. 
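The hut-level summary measures defined above can be computed directly from collection totals. The sketch below uses made-up counts, not data from any trial, and the blood-feeding inhibition formula shown is one common formulation.

```python
# EHT summary measures from hypothetical totals for a control hut and an ITN hut.
import pandas as pd

huts = pd.DataFrame(
    {"caught": [620, 410], "dead_24h": [31, 174], "blood_fed": [372, 82]},
    index=["control", "itn"])

mortality = huts["dead_24h"] / huts["caught"]
fed = huts["blood_fed"] / huts["caught"]

induced_mortality = mortality["itn"] - mortality["control"]      # increase in % dying
blood_feeding_inhibition = 1 - fed["itn"] / fed["control"]       # reduction in % fed
deterrence = 1 - huts.loc["itn", "caught"] / huts.loc["control", "caught"]

print(f"induced mortality:        {induced_mortality:.1%}")
print(f"blood-feeding inhibition: {blood_feeding_inhibition:.1%}")
print(f"deterrence:               {deterrence:.1%}")
```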
The relationship between the level of resistance in the local mosquito population (as measured in a discriminating-dose bioassay) and the mortality induced by ITNs in EHTs can be used to extrapolate the results from hut trials to other geographical regions16. Modelling rationale The two main metrics recorded in EHTs do not capture all entomological impacts of ITNs and IRS. Though useful, induced mortality does not consider the sub-lethal impact of interventions whilst blood-feeding inhibition fails to differentiate between preventing blood-meals and killing mosquitoes, which are likely to have very different epidemiological impacts. Killing mosquitoes reduces the force of infection for users and non-users (through a community effect) so the overall effectiveness of treated nets and IRS will vary according to how abundantly and regularly they are used by the local human population. In addition, the impact of ITNs and IRS is likely to vary between sites because of factors such as the disease endemicity itself driven by societal behaviours, seasonality of transmission and the use of other malaria control interventions, amongst others. This means that raw EHT data is unlikely to directly correlate with the results of RCTs. EHTs are widely used to parameterise malaria transmission dynamics mathematical models46,47,48. These models rigorously quantify the outcome of each mosquito feeding attempt and, by making a limited number of assumptions, can estimate an overall entomological efficacy by combining the impact of the level of personal protection elicited by the intervention to the user and the indirect community effect provided to both users and non-users. Transmission dynamics mathematical models are designed to mechanistically capture the underlying processes governing malaria transmission and so can account for known non-linear processes such as the acquisition of human immunity49,50,51. This enables these models to translate the entomological efficacy quantified in an EHT into predictions of epidemiological impact given the characteristics of the site. Unfortunately, to date, there are no published EHTs that have been conducted alongside RCT evaluation of ITNs or IRS products (and therefore evaluated against the same mosquito population). To overcome this issue we parameterise the models using a meta-analyses of 136 EHT results16,19 collated from across Africa, which quantifies how mosquito deterrence, repellence, successful feeding, or death varies with time since the intervention is deployed and according to the level of pyrethroid resistance in the local mosquito population (as measured by the discriminating-dose bioassay). This approach has been able to recreate the epidemiological impact observed in RCTs evaluating a small number of ITNs15 or IRS products9, but this is the first attempt at using this method to validate the modelling framework against all trials evaluating nets and IRS. There is considerable uncertainty in how the entomological efficacy of treated ITNs varies with the level of resistance in the local population. This is a key relationship determining how field discriminating-dose bioassay data should be interpreted yet it is highly uncertain, with a recent meta-analyses indicating that it is equally well explained by two different functional forms (the logistic or log-logistic functions)19. Similarly, it is unclear whether the epidemiological impact of ITNs or IRS is best captured by all experimental hut data combined (Supplementary Fig. 
S14C, D)19 or if the meta-analyses should be restricted to just West or East African hut design data alone. To rigorously differentiate between these options six different models are run for each trial arm (n = 37), varying both the relationship between discriminating-dose bioassay and EHT mosquito mortality (either the logistic or log-logistic function) and the data used in the EHT meta-analyses (all data, East or West African design huts). The ability of these models to recreate the observed results is statistically compared and the most accurate selected for the main analyses. Transmission dynamics model The malaria transmission model that we use here incorporates the transmission dynamics of Plasmodium falciparum between human hosts and Anopheles mosquito vectors. The differential equations and associated assumptions of the original transmission model52 have been comprehensively reported in the Supplementary Material from Griffin et al.53, Walker et al.54 and Winskill et al.55. The model has been extensively fitted to data on the relationship between vector density, entomological inoculation rate, parasite prevalence, uncomplicated malaria, severe disease and death49,52,53,56,57. Model equations and assumptions are provided in the Supplementary Methods and https://github.com/jamiegriffin/Malaria_simulation. Unless stated (Supplementary Data S1), default parameters are taken from these papers. Data requirements for model simulation The transmission model can be parameterised to describe the specific ecology of each RCT location using data on the mosquito bionomics, seasonal transmission patterns, historic use of various interventions—principally insecticide-treated ITNs or the residual spraying of insecticides (IRS)—and baseline endemicity. These data are recorded within the research articles reporting the trials at the trial arm level (Supplementary Data S1.2 notes where data are available and which resources were used; Supplementary Data S1.3 lists the key data identified for model parameterisation) and Supplementary Fig. S1 provides a diagram of how they are combined to inform the model. Briefly, the Anopheles mosquito species composition at baseline is used to determine the proportion of mosquitoes with bespoke behaviours that could alter exposure risk to mosquito bites and thus transmission risk. Species-specific mosquito behaviours are parameterised from systematic reviews on anthropophagy, using the human blood index47,58,59, and the proportion of mosquito bites that are received indoors or in bed because this impacts the efficacy estimate for indoor interventions60. Other information that are specific to each trial also help interpret our success at predicting, or not, the observed results of an intervention tested in an RCT; the diagnostic used to measure prevalence or incidence is useful because different tests have different sensitivities61, which can be included in the model framework54. The baseline burden of infection is particularly important to enable the model to be calibrated to the endemicity of the study site by varying the number of mosquitoes per person (the human:mosquito ratio). This is determined by a cross-sectional estimate of parasite prevalence in a defined age-cohort at a particular time of year of the baseline survey. For any location, the current level of endemicity is determined by the historic interventions already operating at the site. 
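As an aside on the two candidate functional forms mentioned above, the sketch below shows generic logistic and log-logistic curves mapping bioassay survival to expected hut-trial mortality. The parameter values are placeholders chosen only to give plausible decreasing curves; the fitted values come from the cited meta-analyses.

```python
# Generic logistic and log-logistic curves of the type compared in the text;
# parameter values are illustrative placeholders, not fitted estimates.
import numpy as np

def logistic(survival, b0=0.8, b1=-4.0):
    """Logistic link from bioassay survival (0-1) to hut-trial mortality."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * survival)))

def log_logistic(survival, scale=0.4, shape=2.0):
    """Log-logistic alternative on the same scale."""
    survival = np.clip(survival, 1e-6, 1.0)
    return 1.0 / (1.0 + (survival / scale) ** shape)

bioassay_survival = np.linspace(0.0, 1.0, 6)
print(np.round(logistic(bioassay_survival), 2))
print(np.round(log_logistic(bioassay_survival), 2))
```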
Therefore, wherever possible, ITN use and the historic use of sprayed insecticides, as well as the estimated proportion of clinical cases that are drug-treated, are included as baseline parameters. In addition to the waning potency of the insecticide active ingredient outlined above, the impact of nets can also wane because of changes in the proportion of people using them. This can be driven by the quality of the product, seasonal patterns in humidity or other social patterns of use62,63,64. Where data are available, this waning adherence to net use is captured by fitting an exponential decay function to the proportion of people using nets measured at cross-sectional surveys throughout the trials:

$$\mathrm{Usage}_{i}=e^{-\sigma_{i}t}$$

where $\sigma_{i}$ is a parameter determining how rapidly people stop using nets in intervention arm i of the trial and t is time in years. Parameter estimates for pyrethroid-only and pyrethroid-PBO ITNs are provided for different levels of resistance for the 6 potential methods of associating bioassay and EHT data (Supplementary Data S1.4). The IRS product used is equally important, as the entomological impact of different products varies, particularly for pyrethroid-based IRS in the presence of resistant mosquitoes9. Supplementary Data S1.5 shows the parameter estimates for products included in the analysis. The seasonality of transmission has been defined previously for each RCT site (at the administration subunit 1 level) using normalised rainfall patterns obtained from the US Climate Prediction Center65. The daily time series are aggregated to 64 points per year for the years 2002 to 2009. A Fourier function is fitted to these data to capture seasonality by reconstructing annual rainfall patterns54,66. We deliberately do not match rainfall data from the respective RCTs, even though this would likely improve the model estimates, because we are ultimately testing whether this framework has predictive power across future years or alternative ecologies, where we will not know exactly how rainfall will impact mosquito densities and hence malaria transmission. The mean simulated malaria prevalence (matching the age-cohort of the trial) is recorded for all RCT survey timepoints. This equates to a total of 73 cross-sectional surveys post-implementation. The process was repeated using the 6 different entomological parameter sets (the relationship between bioassay and hut trial mortality and the hut design used to summarise treated net entomological impact). The different models and their fit to data are illustrated in Supplementary Fig. S17 for a recent study trialling pyrethroid-only nets, pyrethroid-PBO ITNs alone or in combination with a long-lasting IRS product in Tanzania5. The difference between the observed and predicted prevalence at each timepoint is shown for all RCTs in Supplementary Fig. S18. A simple linear regression comparing observed and predicted results is conducted and summarised in Supplementary Table 3. Let $X_{i}$ denote the malaria prevalence predicted by the model at timepoint i, while $Y_{i}$ is the observed prevalence. The regression is

$$Y_{i}=mX_{i}$$

for $i = 1,\ldots,c + n$, where m is the gradient between the observed and predicted result (consistent across studies), c is the number of post-intervention datapoints in the control arms and n is the number of post-intervention datapoints in the intervention arms (c + n = 73 for analyses of all RCTs).
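As a rough illustration of the comparison just described (the regression $Y_{i}=mX_{i}$ through the origin together with an adjusted $R^2$), the following is a minimal sketch; the prevalence values are invented, and the $R^2$ convention used for a no-intercept model is only one common choice, not necessarily the one used in the original analysis.

```python
import numpy as np

def fit_through_origin(x, y, n_params=1):
    """Least-squares slope for Y = m*X (no intercept) plus an adjusted R^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = np.sum(x * y) / np.sum(x * x)
    ss_res = np.sum((y - m * x) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)      # one common (not unique) convention
    r2 = 1.0 - ss_res / ss_tot
    n = len(y)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)
    return m, adj_r2

# Hypothetical predicted (X) and observed (Y) prevalence at matched timepoints.
X = [0.42, 0.31, 0.18, 0.55, 0.25]
Y = [0.45, 0.28, 0.20, 0.52, 0.27]
m, adj_r2 = fit_through_origin(X, Y)
print(f"gradient m = {m:.2f}, adjusted R^2 = {adj_r2:.2f}")
```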
Better fitting models have a higher adjusted $R^2$ (adjusted $R^2$ values of one indicate the model is perfectly predicting the trial result), whilst the gradient of the regression m indicates any bias (with a value of one indicating the model can predict prevalence equally well across the endemicity range). Results are presented for all ITN and IRS RCTs and separately for RCTs of the different intervention types (pyrethroid-only ITNs, pyrethroid-PBO ITNs and IRS; Supplementary Table 3). The log-logistic model (results 4–6 in Supplementary Table 3) describing the relationship between bioassay and hut trial mortality consistently fits the data better, with models fit using either all hut trial data or East African design huts having a similar accuracy (adjusted $R^2$ = 0.95). This parameter combination also had the least bias, with the gradient of the best-fit regression line being closer to one. The average efficacy of the different ITN and IRS combinations was calculated by comparing malaria prevalence for the different trial arms to the respective control arms at matched timepoints following the introduction of interventions. Let \(E_{j}^{l}\) be the relative reduction in the malaria prevalence between the control (k = 0) and intervention (k = 1) arms at matched timepoint j in the same trial, for either the predicted (l = X) or observed (l = Y) malaria prevalence,

$$E_{j}^{X}=(X_{j0}-X_{j1})/X_{j0}\quad \text{and}\quad E_{j}^{Y}=(Y_{j0}-Y_{j1})/Y_{j0}$$

for j = 1,…,n. The goodness of fit for the efficacy estimates is calculated in a similar manner to the prevalence estimates by substituting \(E_{j}^{X}\) and \(E_{j}^{Y}\) for $X_{i}$ and $Y_{i}$ in the regression equation (E2), respectively. Models are on average able to estimate the efficacy of the interventions at different timepoints (Supplementary Table 3). Estimates for some timepoints diverge substantially (for example, the study testing conventional nets in the Gambia relative to untreated nets67 measured a negative effect in one setting, the treated net arm having more infected children, whereas the model predicted a 12.5% reduction due to the CTN (with parameters derived from all EHT data and the log-logistic function, option 4 in Supplementary Table 3); Supplementary Data S1.8), but in most studies the trial average (averaged across all timepoints) is remarkably consistent. Accuracy is lower than for estimates of absolute prevalence, in part because the difference between the percentages of people slide-positive in low-endemicity settings may be relatively modest in absolute terms but might represent a substantial difference in relative terms. It is also important to note that when the models do systematically miss some timepoints, this is consistent across the control and treated arms. For example, in the Protopopoff et al. study in Tanzania5 (Figs. S14 and S17) efficacy is over-estimated in all arms 18 months after the start of the trial, but the relative difference between the arms (in terms of ordering, and the efficacy estimate) is relatively consistent. This indicates that unmeasured factors, such as differences in the timing and duration of the rainy season, may have occurred across all trial arms. As previously, the log-logistic functional form describing the relationship between bioassay and hut trial mortality consistently fits the data better (Supplementary Table 3, options 4 to 6).
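The efficacy comparison defined above can be sketched in the same hedged spirit; the prevalence values below are placeholders, and the resulting pairs of observed and predicted efficacies could then be passed to the through-origin regression sketched earlier.

```python
import numpy as np

def relative_efficacy(control, intervention):
    """E_j = (control prevalence - intervention prevalence) / control prevalence."""
    control = np.asarray(control, float)
    intervention = np.asarray(intervention, float)
    return (control - intervention) / control

# Hypothetical matched timepoints (j = 1..3) for observed and modelled prevalence.
obs_eff = relative_efficacy([0.50, 0.42, 0.38], [0.31, 0.25, 0.27])   # E_j^Y (observed)
mod_eff = relative_efficacy([0.48, 0.44, 0.36], [0.29, 0.27, 0.25])   # E_j^X (predicted)
print(obs_eff.round(2), mod_eff.round(2))
```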
The model fit that describes the entomological efficacy of any net using all EHT data predicts the efficacy data better, with East African design hut data providing similar accuracy (adjusted $R^2$ = 0.64 vs. 0.62, respectively). Following this, we select the log-logistic functional form to describe the relationship between mortality in the discriminating-dose bioassay and EHT and characterise the entomological efficacy of treated ITNs using data from both East and West African design huts for the main analyses (Fig. 2B, C). The ability of the best-performing model (Supplementary Table 3, column 4: log-logistic function and all EHT data) to capture the relative drop in prevalence over time compared to the baseline (pre-intervention) estimate is shown in Supplementary Fig. S19. This value is denoted \(\dot{E}_{t}^{l}\) and is calculated as

$$\dot{E}_{t}^{X}=(X_{0}-X_{t})\quad \text{and}\quad \dot{E}_{t}^{Y}=(X_{0}-Y_{t})$$

where \(X_{0}\) is the malaria prevalence at baseline (prior to intervention deployment, with the exception of Chaccour et al.68) observed from the RCT, and the model is calibrated to this endemicity. $X_{t}$ is then the prevalence at the subsequent cross-sectional survey observed for each study (RCTs have different numbers of surveys, ranging from 1 to 4, in the published literature), and the corresponding model estimate is represented by $Y_{t}$. Estimates are calculated for all post-intervention timepoints in both control and intervention arms and are shown in Fig. S19A. The difference between \(\dot{E}_{t}^{X}\) and \(\dot{E}_{t}^{Y}\) can be used to explore how closely the model is able to predict this absolute difference observed in the trials (a value of 0 indicates an exact match, i.e. high predictive ability). The model overestimates the performance of IRS alone, deployed in 1995 using the pyrethroid IRS ICON CS 10% (Syngenta), but otherwise there is no difference in the models' ability to estimate the different ITN interventions or combined net and IRS interventions, be it the absence of an intervention, conventional dipped nets, pyrethroid-only nets, or pyrethroid-PBO ITNs with or without IRS (Fig. S19B).

All code is available69. Results from the systematic review and all data used in the analyses are provided in Supplementary Data; these are collated data from previously published trials that are owned by the authors noted in the publications documented in Supplementary Data. Model code can be found here: https://github.com/jamiegriffin/Malaria_simulation and the data manipulation, input parameters and processing are available here: https://github.com/EllieSherrardSmith/ibm_rct_prediction69.

Alonso, P. L. Malaria: a problem to be solved and a time to be bold. Nat. Med. 27, 1506–1509 (2021). World Health Organization. Norms, Standards and Processes Underpinning Development of WHO Recommendations on Vector Control. https://www.who.int/publications/i/item/9789240017382 (World Health Organization, 2020). Rowland, M. W. & Protopopoff, N. Dawn of the PBO-pyrethroid long lasting net - light at last. Outlooks Pest. Manag. 29, 242–244 (2018). World Health Organization. World Malaria Report 2020: 20 years of Global Progress and Challenges. World Health vol. WHO/HTM/GM (World Health Organization, 2020). Protopopoff, N. et al. Effectiveness of a long-lasting piperonyl butoxide-treated insecticidal net and indoor residual spray interventions, separately and together, against malaria transmitted by pyrethroid-resistant mosquitoes: a cluster, randomised controlled, two-by-two fact. Lancet 391, 1577–1588 (2018).
Staedke, S. G. et al. LLIN Evaluation in Uganda Project (LLINEUP)–Impact of long-lasting insecticidal nets with, and without, piperonyl butoxide on malaria indicators in Uganda: study protocol for a cluster-randomised trial. Trials 20, 321 (2019). Lindsay, S. W., Thomas, M. B. & Kleinschmidt, I. Threats to the effectiveness of insecticide-treated bednets for malaria control: thinking beyond insecticide resistance. Lancet Glob. Heal 9, e1325–e1331 (2021). GlobalFund. List of Indoor Residual Sprays (IRS) That Meet WHOPES Specifications for Use Against Malaria Vector (GlobalFund, 2020). Sherrard-Smith, E. et al. Systematic review of indoor residual spray efficacy and effectiveness against Plasmodium falciparum in Africa. Nat. Commun. 9, 4982 (2018). Roll Back Malaria Partnership to End Malaria. https://endmalaria.org/dashboard/chai-forecasting-global-malaria-commodities. Accessed 1st December 2021. Namuganga, J. F. et al. The impact of stopping and starting indoor residual spraying on malaria burden in Uganda. Nat. Commun. 12, 1–9 (2021). Alegana, V. A., Okiro, E. A. & Snow, R. W. Routine data for malaria morbidity estimation in Africa: challenges and prospects. BMC Med. 18, 121 (2020). Sadoff, J. C. & Wittes, J. Correlates, surrogates, and vaccines. J. Infect. Dis. 196, 1279–1281 (2007). World Health Organization. Guidelines for laboratory and field-testing of long-lasting insecticidal nets. www.who.int (World Health Organization, 2013). Sherrard-Smith, E. et al. Optimising the deployment of vector control tools against malaria: a data-informed modelling study. Lancet Planet. Heal. https://doi.org/10.1016/S2542-5196(21)00296-5 (2022). Churcher, T. S., Lissenden, N., Griffin, J. T., Worrall, E. & Ranson, H. The impact of pyrethroid resistance on the efficacy and effectiveness of bednets for malaria control in Africa. Elife 5, e16090 (2016). Curtis, C. F. et al. A comparison of use of a pyrethroid either for house spraying or for bednet treatment against malaria vectors. Trop. Med. Int. Heal. 3, 619–631 (1998). Nevill, C. G. et al. Insecticide-treated bednets reduce mortality and severe morbidity from malaria among children on the Kenyan coast. Trop. Med. Int. Heal. 1, 139–146 (1996). Nash, R. K. et al. Systematic review of the entomological impact of insecticide-treated nets evaluated using experimental hut trials in Africa. Curr. Res. Parasitol. Vector-Borne Dis. 1, 100047 (2021). Gleave, K., Lissenden, N., Richardson, M., Choi, L. & Ranson, H. Piperonyl butoxide (PBO) combined with pyrethroids in insecticidetreated nets to prevent malaria in Africa (Review). Cochrane Database Syst. Rev. https://doi.org/10.1002/14651858.CD012776.pub3.www.cochranelibrary.com (2021). World Health Organization. Data Requirements and Protocol for Determining Non-inferiority of Insecticide-treated Net and Indoor Residual Spraying Products within an Established WHO Intervention Class. https://www.who.int/publications/i/item/WHO-CDS-GMP-2018.22 (World Health Organization, 2018). Mosha, F. W. et al. Experimental hut evaluation of the pyrrole insecticide chlorfenapyr on bed nets for the control of Anopheles arabiensis and Culex quinquefasciatus. Trop. Med. Int. Heal. 13, 644–652 (2008). Toé, K. H. et al. Assessing the impact of the addition of pyriproxyfen on the durability of permethrin-treated bed nets in Burkina Faso: A compound-randomized controlled trial. Malar. J. 18, 383 (2019). Angarita-Jaimes, N. C. et al. 
A novel video-tracking system to quantify the behaviour of nocturnal mosquitoes attacking human hosts in the field. J. R. Soc. Interface 13, 20150974 (2016). Sougoufara, S. et al. Standardised bioassays reveal that mosquitoes learn to avoid compounds used in chemical vector control after a single sub-lethal exposure. Sci. Rep. 12, 1–12 (2022). Choi, L., Pryce, J. & Garner, P. Indoor residual spraying for preventing malaria in communities using insecticide-treated nets. Cochrane Database Syst. Rev. https://doi.org/10.1002/14651858.CD012688.pub2 (2019). Pluess, B., Tanser, F. C., Lengeler, C. & Sharp, B. L. Indoor residual spraying for preventing malaria. Cochrane Database Syst. Rev. https://doi.org/10.1002/14651858.CD006657.pub2 (2010). Pryce, J., Richardson, M. & Lengeler, C. Insecticide-treated nets for preventing malaria. Cochrane Database Syst. Rev. https://doi.org/10.1002/14651858.CD000363.pub3 (2018). Tiono, A. B. et al. Efficacy of Olyset Duo, a bednet containing pyriproxyfen and permethrin, versus a permethrin-only net against clinical malaria in an area with highly pyrethroid-resistant vectors in rural Burkina Faso: a cluster-randomised controlled trial. Lancet 392, 569–580 (2018). Sexton, J. D. et al. Permethrin-impregnated curtains and bed-nets prevent malaria in western Kenya. Am. J. Trop. Med. Hyg. 43, 11–18 (1990). Pinder, M. et al. Efficacy of indoor residual spraying with dichlorodiphenyltrichloroethane against malaria in Gambian communities with high usage of long-lasting insecticidal mosquito nets: a cluster-randomised controlled trial. Lancet 385, 1436–1446 (2015). Loha, E. et al. Long-lasting insecticidal nets and indoor residual spraying may not be sufficient to eliminate malaria in a low malaria incidence area: Results from a cluster randomized controlled trial in Ethiopia. Malar. J. 18, 1–15 (2019). Snow, R. W., Rowan, K. M. & Greenwood, B. M. A trial of permethrin-treated bed nets in the prevention of malaria in Gambian children. Trans. R. Soc. Trop. Med. Hyg. 81, 563–567 (1987). Moyou-Somo, R., Lehman, L., Awahmukalah, S. & Ayuk Enyong, P. Deltamethrin impregnated bednets for the control of urban malaria in Kumba Town, South-West Province of Cameroon. J. Trop. Med. Hyg. 98, 316–318 (1995). Fraser-Hurt, N. et al. 9. Effect of insecticide-treated bed nets on haemoglobin values, prevalence and multiplicity of infection with Plasmodium falciparum in a randomized controlled trial in Tanzania. Trans. R. Soc. Trop. Med. Hyg. 93, 47–51 (1999). Abuaku, B. et al. Impact of indoor residual spraying on malaria parasitaemia in the Bunkpurugu-Yunyoo District in northern Ghana. Parasit. Vectors 11, 1–11 (2018). Mwandigha, L. M., Fraser, K. J., Racine-Poon, A., Mouksassi, M. S. & Ghani, A. C. Power calculations for cluster randomized trials (CRTs) with right-truncated Poisson-distributed outcomes: a motivating example from a malaria vector control trial. Int. J. Epidemiol. 49, 954–962 (2021). World Health Organization. Data requirements and protocol for determining non-inferiority of insecticide-treated net and indoor residual spraying products within an established WHO policy class. (World Health Organization, 2019). Massue, D. J. et al. Comparative performance of three experimental hut designs for measuring malaria vector responses to insecticides in Tanzania. Malar. J. 15, 165 (2016). Strode, C., Donegan, S., Garner, P., Enayati, A. A. & Hemingway, J. 
The impact of pyrethroid resistance on the efficacy of insecticide-treated bed nets against African Anopheline Mosquitoes: systematic review and meta-analysis. PLoS Med. 11, e1001619 (2014). Vinit, R. et al. Decreased bioefficacy of long-lasting insecticidal nets and the resurgence of malaria in Papua New Guinea. Nat. Commun. 11, 1–7 (2020). World Health Organization. Test procedures for insecticide resistance monitoring in malaria vector mosquitoes Global Malaria Programme (World Health Organization, 2016). Bagi, J. et al. When a discriminating dose assay is not enough: measuring the intensity of insecticide resistance in malaria vectors. Malar. J. 14, 210 (2015). Owusu, H. F., Jančáryová, D., Malone, D. & Müller, P. Comparability between insecticide resistance bioassays for mosquito vectors: time to review current methodology? Parasit. Vectors 8, 357 (2015). Hancock, P. A. et al. Mapping trends in insecticide resistance phenotypes in African malaria vectors. PLoS Biol. 18, 1–23 (2020). Le Menach, A. et al. An elaborated feeding cycle model for reductions in vectorial capacity of night-biting mosquitoes by insecticide-treated nets. Malar. J. 6, 10 (2007). Killeen, G. F. et al. Quantifying behavioural interactions between humans and mosquitoes: Evaluating the protective efficacy of insecticidal nets against malaria transmission in rural Tanzania. BMC Infect. Dis. 6, 161 (2006). Chitnis, N., Schapira, A., Smith, T. & Steketee, R. Comparing the effectiveness of malaria vector-contorl interventions through a mathematical model. Am. J. Trop. Med. Hyg. 83, 230–240 (2010). Griffin, J. T. et al. Gradual acquisition of immunity to severe malaria with increasing exposure. Proc. R. Soc. B Biol. Sci. 282, 20142657 (2015). Smith, D. L. et al. A sticky situation: the unexpected stability of malaria elimination. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 368, 20120145 (2013). Penny, M. A., Maire, N., Studer, A., Schapira, A. & Smith, T. A. What should vaccine developers ask? simulation of the effectiveness of malaria vaccines. PLoS ONE 3, e3193 (2008). Griffin, J. T. et al. Reducing Plasmodium falciparum malaria transmission in Africa: a model-based evaluation of intervention strategies. PLoS Med. 7, e1000324 (2010). Griffin, J. T. et al. Potential for reduction of burden and local elimination of malaria by reducing Plasmodium falciparum malaria transmission: a mathematical modelling study. Lancet Infect. Dis. 16, 465–472 (2016). Walker, P. G. T., Griffin, J. T., Ferguson, N. M. & Ghani, A. C. Estimating the most efficient allocation of interventions to achieve reductions in Plasmodium falciparum malaria burden and transmission in Africa: a modelling study. Lancet Glob. Heal. 4, e474–e484 (2016). Winskill, P., Slater, H. C., Griffin, J. T., Ghani, A. C. & Walker, P. G. T. The US President's Malaria Initiative, Plasmodium falciparum transmission and mortality: a modelling study. PLOS Med. 14, e1002448 (2017). Griffin, J. T., Ferguson, N. M. & Ghani, A. C. Estimates of the changing age-burden of Plasmodium falciparum malaria disease in sub-Saharan Africa. Nat. Commun. 5, 1–10 (2014). White, M. T. et al. Modelling the impact of vector control interventions on Anopheles gambiae population dynamics. Parasit. Vectors 4, 153 (2011). Killeen, G. F. et al. Going beyond personal protection against mosquito bites to eliminate malaria transmission: population suppression of malaria vectors that exploit both human and animal blood. BMJ Glob. Heal. 2, e000198 (2017). Massey, N. C. et al. 
A global bionomic database for the dominant vectors of human malaria. Sci. Data 3, 160014 (2016). Sherrard-Smith, E. et al. Mosquito feeding behavior and how it influences residual malaria transmission across Africa. Proc. Natl Acad. Sci. USA 116, 15086–15096 (2019). Slater, H. C. et al. Assessing the impact of next-generation rapid diagnostic tests on Plasmodium falciparum malaria elimination strategies. Nature 528, S94–S101 (2015). Atieli, H. E. et al. Insecticide-treated net (ITN) ownership, usage, and malaria transmission in the highlands of western Kenya. Parasit. Vectors 4, 1–10 (2011). Koenker, H. & Kilian, A. Recalculating the Net Use Gap: a multi-country comparison of ITN use versus ITN access. PLoS ONE 9, e97496 (2014). Koenker, H. et al. Quantifying seasonal variation in insecticide-treated net use among those with access. Am. J. Trop. Med. Hyg. 101, 371–382 (2019). National Weather Service. Climate Prediction Center. [internet] [cited 24 Mar 2016]. (National Weather Service, 2016) Garske, T., Ferguson, N. M. & Ghani, A. C. Estimating air temperature and its influence on malaria transmission across Africa. PLoS ONE 8, e56487 (2013). D'Alessandro, U. et al. Mortality and morbidity from malaria in Gambian children after introduction of an impregnated bednet programme. Lancet 345, 479–483 (1995). Chaccour, C. et al. Incremental impact on malaria incidence following indoor residual spraying in a highly endemic area with high standard ITN access in Mozambique: results from a cluster‐randomized study. Malar. J. 20, 1–15 (2021). Sherrard-Smith, E. et al. Github repository: EllieSherrardSmith/ibm_rct_prediction: v1.1 https://doi.org/10.5281/zenodo.6424161 (2022). We thank all those involved in the extensive entomological and epidemiological trials that form the evidence-base of this work and Professor Hilary Ranson for insightful comments. E.S.S. was funded by a UKRI Future Leaders Fellowship from the Medical Research Council (MR/T041986/1). T.S.C .received funding from the Bill & Melinda Gates Foundation [under Grant Agreement No. OPP1200155]. E.S.S. and T.S.C. acknowledge funding from the MRC Centre for Global Infectious Disease Analysis (reference MR/R015600/1), jointly funded by the UK Medical Research Council (MRC) and the UK Foreign, Commonwealth & Development Office (FCDO), under the MRC/FCDO Concordat agreement, the EDCTP2 programme supported by the European Union, and Community Jameel. Financial support for R.Z. was provided by the US President's Malaria Initiative. The findings and conclusions contained within are those of the authors and do not necessarily reflect positions or policies of the Bill & Melinda Gates Foundation or the U.S. Agency for International Development. The authors alone are responsible for the views expressed in this article and they do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated. Imperial College London, London, UK Ellie Sherrard-Smith & Thomas S. Churcher Centre de Recherches Entomologiques de Cotonou, Cotonou, Benin Corine Ngufor London School of Hygiene and Tropical Medicine, London, UK Corine Ngufor, Raphael N'Guessan, Mark Rowland & Sarah G. Staedke Centre National de Recherche et de Formation sur le Paludisme, Ouagadougou, Burkina Faso Antoine Sanou & Moussa W. 
Guelbeogo Institut Pierre Richet, Bouake, Côte d'Ivoire Raphael N'Guessan Centro de Investigação em Saúde de Manhiça, Manhiça, Mozambique Eldo Elobolobo & Francisco Saute PMI VectorLink Project, Abt Associates, Maputo, Mozambique Kenyssony Varela ISGlobal, Barcelona, Spain Carlos J. Chaccour US President's Malaria Initiative, USAID, Washington, DC, USA Rose Zulliger PATH, Washington, DC, USA Joseph Wagman & Molly L. Robertson Liverpool School of Tropical Medicine, Liverpool, UK Martin J. Donnelly Infectious Diseases Research Collaboration, Kampala, Uganda Samuel Gonahasa World Health Organization, Geneva, Switzerland Jan Kolaczinski Ellie Sherrard-Smith Antoine Sanou Moussa W. Guelbeogo Eldo Elobolobo Francisco Saute Joseph Wagman Molly L. Robertson Mark Rowland Sarah G. Staedke Thomas S. Churcher This work was conceived by E.S.S., C.N., A.S., M.G. and T.S.C. Empirical data were produced by C.N., A.S., M.G., R.N.G., E.E., F.S., K.V., C.C., R.Z., J.W., M.L.R., M.R., M.D., S.G. and S.S. E.S.S. performed the systematic review, data collation and modelling simulations. E.S.S. performed the post-processing analysis and produced all figures. T.S.C. and E.S.S. wrote the first draft. Expert guidance on analysis, policy and interpretation were provided by C.N., M.R., M.D., S.S. and J.K. All authors contributed comments and approved the final version of the manuscript. Correspondence to Thomas S. Churcher. Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Description of Additional Supplementary Files Supplementary Data 1 Sherrard-Smith, E., Ngufor, C., Sanou, A. et al. Inferring the epidemiological benefit of indoor vector control interventions against malaria from mosquito data. Nat Commun 13, 3862 (2022). https://doi.org/10.1038/s41467-022-30700-1
9.6.2 Varieties of thermal friction model

It is useful to understand exactly how a temperature-based friction model can be incorporated into the simulation approach described in section 9.2. This sheds some light on the different behaviour of alternative choices of model. First, a reminder of how the original friction-curve model is dealt with. At each time step, the new values of friction force and string velocity are found by the geometric construction illustrated in Fig. 1, which is a repeat of Fig. 7 from section 9.2.

Figure 1. Repeat of Fig. 7 from section 9.2, showing the graphical construction involving the intersection of a straight line and the friction force-velocity curve. Three examples of possible positions of the straight line are shown in blue, and two versions of the force-velocity curve (corresponding to different bow forces) are shown in red.

The history of the string motion allows us to compute the current value of the variable we called $v_h$, and this determines the position of a sloping straight line. We then have to find the intersection of that straight line with the curve representing friction force as a function of velocity, including the vertical segment which represents the range of possible values of force during sticking. Three possible positions of the straight line are shown, together with two versions of the friction-velocity curve corresponding to different values of the bow force. For the solid red friction curve, the blue line labelled c indicates an ambiguity, which is resolved by a hysteresis loop involving jumps of force and velocity, as illustrated in Fig. 2 (which is another repeat of an earlier figure).

Figure 2. Repeat of Fig. 8 of section 9.2, illustrating the hysteresis loop and jump behaviour that arises if the maximum slope of the force-velocity curve is steeper than the straight line.

The original thermal friction model is based on the assumption that the friction force is proportional to the normal bow force, and that the coefficient of friction is a function $\mu(T)$ of temperature $T$ alone. In order to ensure that the friction force always opposes the direction of sliding motion, the friction force $f$ must then take the form $$f=-F_b \mu(T) \mathrm{sgn}(v) \tag{1}$$ where $F_b$ is the bow force, and $\mathrm{sgn}(v)$ denotes the sign function, equal to $\pm1$ depending on whether $v$ is positive or negative. What this means for the simulation is that at each time step we first calculate the temperature, and hence obtain $\mu$; then, to deal with the $\mathrm{sgn}(v)$ term, we solve a version of the graphical construction illustrated in Fig. 3. This function has a vertical portion representing the state of sticking, but during sliding the function is simply constant, so there can be no jumps and no hysteresis loop.

Figure 3. Version of the graphical construction shown in Fig. 1, appropriate to the original thermal friction model. Again, three examples of possible positions of the sloping straight line are shown in blue.

There is another possible temperature-based friction model we could have explored. The measurements on the bulk properties of rosin shown in Fig. 8 of section 9.6 suggest that over most of the relevant temperature range rosin behaves predominantly as a viscous liquid. That would suggest a friction law taking the general form $$f=-F_b \zeta(T) v . \tag{2}$$ The function $\mathrm{sgn}(v)$ has been replaced by simple proportionality to $v$, because a viscous shear force is proportional to shear rate.
This factor $v$ achieves the desired effect of reversing direction according to the sign of the sliding velocity, but it has a very drastic effect on the predicted behaviour. Figure 4 shows the graphical representation equivalent to Figs. 1 and 3. This time, there is no barrier whatever representing "sticking" behaviour. Simulations based on such a model will regularly show "forward slipping", something that is never (or virtually never) seen in measurements. Figure 4. Plot corresponding to Fig. 4, for a thermal model which assumed a temperature-dependent viscosity to represent the behaviour of the rosin layer. A viscous-based model like this is clearly not, therefore, a good candidate for the true friction law. The explanation for this apparent conflict with the measured properties of rosin lies in the particular conditions of those measurements. The rheometer tests used extremely small strains and strain rates, in order to keep the behaviour in the linear range. But the bowed string motion we are interested in involves intervals of gross sliding. This will take the material well outside the range of the rheometer tests. For an extension of the original thermal model to allow the possibility of an initial jump in velocity (and hence in measured bridge force) we need to look for something different. There is a very simple, if entirely ad hoc, approach to that question which is motivated by Figs. 1 and 3. We can replace the function $\mathrm{sgn}(v)$ in equation (1) by something like the function plotted in Fig. 5. The middle one of the three blue lines shows that this function can exhibit multiple intersections, and therefore sometimes predict jumps and hysteresis. But note that the range of velocity plotted here is much wider than in Fig. 1: the red curve here has a falling shape which is much gentler than the original friction curve. Figure 5. Plot corresponding to Fig. 3, for the modified version of the thermal model as described in the text. The function $\mathrm{sgn}(v)$ has been replaced by a hyperbolic curve tending asymptotically to the values $\pm1$ at large sliding speeds. For this example I have used a hyperbolic shape to replace the flat portion of the function $\mathrm{sgn}(v)$. It has a horizontal asymptote at the value 1, and it is then fully defined by two parameters: a vertical asymptote at a speed $v_{as}$, and a value $k_{lim}$ as the sliding speed tends to zero. The particular function plotted here has $v_{as}=0.8$ m/s and $k_{lim}=2$, which are the parameter values used for the simulation examples shown in section 9.6. With this modified model, we can still calculate a form for the required temperature-dependent friction coefficient by requiring that the simulated results for steady sliding agree with the original measurements. The result, when the term $\mathrm{sgn}(v)$ in equation (1) is replaced by the particular function plotted in Fig. 5, is shown in Fig. 6: it is still broadly in line with expectations based on the rheometer tests and glass transition range of violin rosin. Figure 6. Coefficient of friction as a function of temperature, deduced from the modified thermal friction model shown in Fig. 5 after requiring that it should agree with the steady-sliding friction measurements.
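To make the preceding discussion concrete, here is a minimal sketch, under explicit assumptions, of how one time step of the modified thermal model might be implemented. The form of the sloping "history" line ($v = v_h + f/2Z$), the placeholder temperature dependence $\mu(T)$ and the particular hyperbolic expression replacing $\mathrm{sgn}(v)$ are all assumptions introduced for illustration: they are consistent with the description above (value $k_{lim}$ at zero sliding speed, horizontal asymptote at 1), but they are not taken from the original simulations.

```python
import math

def mu(T, mu_0=0.8, T_soft=45.0, width=15.0):
    """Placeholder friction coefficient that falls smoothly with temperature."""
    return mu_0 / (1.0 + math.exp((T - T_soft) / width))

def g(speed, v_as=0.8, k_lim=2.0):
    """Assumed replacement for |sgn(v)|: equals k_lim at zero speed, tends to 1."""
    return 1.0 + (k_lim - 1.0) * v_as / (speed + v_as)

def friction_step(v_h, two_Z, F_b, T):
    """Return (friction force, string velocity) for one time step."""
    f_stick = -two_Z * v_h                    # force required to hold v = 0
    if abs(f_stick) <= F_b * mu(T) * g(0.0):  # within the vertical "sticking" segment
        return f_stick, 0.0
    v = v_h                                   # sliding: intersect the friction curve
    f = 0.0                                   # with the line f = 2Z*(v - v_h)
    for _ in range(50):                       # simple fixed-point iteration
        f = -math.copysign(F_b * mu(T) * g(abs(v)), v_h)
        v = v_h + f / two_Z
    return f, v

print(friction_step(v_h=0.5, two_Z=4.0, F_b=1.0, T=30.0))
```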
Does the auto-correlation function of a stationary random process always converge?

The auto-correlation function of a stationary random process only depends on the time difference $\tau$. http://web.ntpu.edu.tw/~yshan/chapter6_han.pdf The 64th slide of this lecture note mentions that for a zero-mean process, the autocorrelation converges to zero as $\tau$ goes to infinity. I want to know why.

autocorrelation – wefgf1

Hi: I think it's being stated as an assumption rather than as a property of a zero-mean process. Certainly, there can be zero-mean processes whose autocorrelation function does not converge to zero. Now, whether there are zero-mean stationary processes whose autocorrelation function does not converge to zero is a more interesting question, whose answer is probably yes, but I'm not sure about that. All stationarity means is that the covariance of two observations at $t_{1}$ and $t_{2}$ is only a function of $t_{1} - t_{2}$, but whether the covariance converges to zero is, I think, a different issue. – mark leeds Sep 12 '18 at 9:08

Take a look at this related question. – Matt L. Sep 12 '18 at 9:17

No, it does not necessarily. For example, the following discrete-time, WSS random process $$x[n] = A \sin(\omega_0 n + \phi)$$ which is called the random phase sinusoid, where $A$ and $\omega_0 \neq 0$ are fixed values and $\phi$ is a random variable uniformly distributed in $\phi \in [-\pi,\pi)$, has an auto-correlation function of the form $$ r_{xx}[k] = \frac{A^2}{2} \cos(\omega_0 k) $$ which does not go to zero as $k$ goes to infinity; $\lim_{k \to \infty} r_{xx}[k] \neq 0$. The same can be shown for a continuous-time process. Note, however, that as Matt L. indicated in his answer, the information contained within a random process is mostly included in its innovations part. This is also expressed in the Wold decomposition theorem: any random process can be broken into two parts, a predictable periodic part and a regular unpredictable part (which is the innovations part). If a WSS random process includes only a regular part and no predictable part, then its covariance (or correlation, if zero mean) sequence will go to zero as the lag goes to infinity, provided it has a uniformly convergent power spectral density; i.e., the DTFT of the auto-correlation sequence (ACS) should converge, which requires the sum of absolute values of the ACS to be finite, and this in turn requires that $\lim_{k \to \infty} r_{xx}[k] = 0$ for a zero-mean WSS process containing no predictable (periodic) parts. Furthermore, when the concept of ergodicity is introduced, one of the necessary conditions for a WSS random process to be ergodic in the mean is that its auto-covariance (or equivalently auto-correlation for a zero-mean process) should go to zero as $k$ goes to infinity. Maybe that was stated in that document. – Fat32

A non-rigorous but intuitive explanation would be to note that for zero-mean (wide-sense) stationary processes, the autocorrelation at lag $\tau$ is the correlation between two samples of the process at a temporal distance $\tau$. It seems natural that with certain regularity assumptions (intuitive: "sufficient randomness"), that correlation should decrease (on average) with increasing lag, and should finally converge to zero for $|\tau|\rightarrow\infty$.
An example of a process for which this is not the case was given in Fat32's answer, but such a process lacks what I hand-wavingly referred to as "sufficient randomness", because it can be parameterized by a finite number of random variables. Such a process is sometimes called singular. Singular processes have Dirac delta impulses (at $\omega\neq 0$) in their power spectrum. Note that for the given example, the limit of the auto-correlation function does not exist. The power spectrum of a non-singular (regular) process can have only one Dirac delta impulse, and that must be at $\omega=0$, reflecting a non-zero mean. Since the auto-correlation is the inverse Fourier transform of the power spectrum, that delta impulse causes a constant term in the auto-correlation, and, consequently, for a non-zero mean WSS process, the auto-correlation function cannot converge to zero, but it converges to the square of the mean: $$\lim_{|\tau|\to\infty}R_X(\tau)=\mu_X^2\tag{1}$$ However, if $\mu_X=0$, you generally have (for regular processes) $$\lim_{|\tau|\to\infty}R_X(\tau)=0\tag{2}$$ As a side note, since the power spectrum is the Fourier transform of the auto-correlation, the power spectrum can only exist as a conventional function (without Dirac delta impulses) if $(2)$ is satisfied, because otherwise the Fourier integral does not converge in the conventional sense.
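A quick numerical check (not part of the original posts) makes the distinction concrete: the time-average autocorrelation of a single realisation of the random-phase sinusoid keeps oscillating at large lags, whereas that of zero-mean white noise decays towards zero.

```python
import numpy as np

rng = np.random.default_rng(0)
N, w0, A = 20000, 0.3, 1.0
n = np.arange(N)

phi = rng.uniform(-np.pi, np.pi)
x_sin = A * np.sin(w0 * n + phi)      # singular (perfectly predictable) process
x_wn = rng.standard_normal(N)         # regular process: zero-mean white noise

def sample_acf(x, k):
    """Biased time-average estimate of r_xx[k] from one realisation."""
    return np.mean(x[:len(x) - k] * x[k:])

for k in (0, 100, 5000):
    print(k, round(sample_acf(x_sin, k), 3), round(sample_acf(x_wn, k), 3))
```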
Were tables of square roots ever in use?

Before the advent of calculators there were ready-made tables for the main functions: sines, cosines, logs and so on. Do you know if tables of square roots were ever produced or in use? I have never heard of one, yet it would be very easy to produce, since it is enough to find the roots of the numbers from 1 to 100.

Tables of base-ten logarithms were used to facilitate calculations. The reason for using $10$ as the base is that if one wants, for example, the logarithm of $7314,$ one finds $7.314$ in the table, which goes only from $1$ to $10,$ and then one adds $3$ to the logarithm, corresponding to moving the decimal point three places to the right. – Michael Hardy

One should add that there's an easy algorithm for square roots. So in a sense you did not need a square root table as much as you needed one for logs and trigs. Moreover, you can find roots with logs. – Chrystomath

Oh you sweet summer children.... (I continue to be amazed at what youngsters are not taught or exposed to) – Carl Witthoft

Pretty much every mathematics textbook (school or college) before the early 1980s (and many even up to the late 1980s), at the algebra level or above, as well as many (most?) chemistry and physics and engineering textbooks, had such a table at the back of the book (as an appendix or something, where "selected answers" and "index" and "glossary" would appear). Often the entries would be for both $\sqrt{n}$ and $\sqrt{10n},$ which was enough to allow you to easily get approximations for numbers of any magnitude. For example, to find an approximation for $\sqrt{3880},$ use $n=3.88$ and look in the $\sqrt{10n}$ entries, since $$ \sqrt{3880} \; =\; \sqrt{1000\times 3.88} \; = \; 10\sqrt{10 \times 3.88}. $$ There were stand-alone (i.e. as separate books) tables also, such as the following, where square roots from $1.00$ to $9.99$ by increments of $0.01$ are on pp. 16-17 and square roots from $10.0$ to $99.9$ by increments of $0.1$ are on pp. 18-19: Mathematical Tables for Class-Room Use by Mansfield Merriman (1915) https://archive.org/details/mathematicaltabl00merrrich

When I was in high school I owned (purchased in 1974) and used the 20th edition (1973) of the CRC Standard Mathematical Tables. On pp. 71-90 you'll find a table having column entries for $n^2$ and $\sqrt{n}$ and $\sqrt{10n}$ and $n^{3}$ and $\sqrt[3]{n}$ and $\sqrt[3]{10n}$ and $\sqrt[3]{100n}$ from $n=1$ to $n=1000$ by increments of $1.$ In high school I also owned (I no longer seem to have it, however) Logarithmic and Trigonometric Tables to Five Places by Kaj L. Nelson (in the well known Barnes and Noble College Outline Series of books), and other people I knew (in college) had the Schaum's Outline Series Mathematical Handbook of Formulas and Tables by Murray R. Spiegel (1968), which I didn't have a copy of back then but a few years ago I saw and purchased a copy of a later printing (the 1990 printing) at a local used bookstore (square roots are on pp. 238-239). However, in looking at the Schaum's book now, it's more of a handbook of formulas (algebraic, trigonometric, calculus, series, special functions, etc.) than a table of numerical values for computational use. For other such books, try this search and similar searches.
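To illustrate how the $\sqrt{n}$ / $\sqrt{10n}$ layout described in the answer above was used in practice, here is a small, purely illustrative sketch of such a table and the decimal-point-shifting lookup; the 0.01 step, the helper names and the assumption that the argument falls exactly on a tabulated entry are choices made for this example only.

```python
import math

# Tabulate sqrt(n) and sqrt(10n) for n = 1.00, 1.01, ..., 9.99, as in the old tables.
table = {round(n / 100, 2): (math.sqrt(n / 100), math.sqrt(10 * n / 100))
         for n in range(100, 1000)}

def table_sqrt(x):
    """Look up sqrt(x) using the 1-10 table and decimal-point shifting."""
    shift = 0
    while x >= 100:          # each factor of 100 in x shifts the root by a factor of 10
        x /= 100.0
        shift += 1
    while x < 1:
        x *= 100.0
        shift -= 1
    if x < 10:
        root = table[round(x, 2)][0]       # sqrt(n) column
    else:
        root = table[round(x / 10, 2)][1]  # sqrt(10n) column
    return root * 10 ** shift

print(table_sqrt(3880), math.sqrt(3880))   # e.g. sqrt(3880) = 10 * sqrt(10 * 3.88)
```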
For tables at the back of textbooks, simple archive.org and google-books searches will give you hundreds (if not thousands) of examples where you'll find square root tables at the back of the book. – Dave L Renfro

When I was in high school (the early 70's), the CRC book was too expensive for me. I used (and still have) the Schaum Mathematical Handbook, but I mostly used a smaller book that was just tables. It was similar to your Logarithmic and Trigonometric Tables book but the cover was different. That was the book I used the most--so much that it fell apart on me a couple of years ago so I had to throw it away. And yes, I did use the table of square roots. +1 from me--thanks for the memories. – Rory Daulton

+1 ... I, too, have the CRC book (a prize for a mathematical contest). But now I am considered to be "history" I guess.

@Roy Daulton: I was taking our 4th year math course (essentially trigonometry in the fall, and precalculus math including conics and logarithms and matrices and probability and math induction in the spring) during 1974-1975 (my sophomore HS year), and our teacher was able, I think, to get a special discount for us, so maybe 6 to 8 of us (out of around 30 total in the two 4th year classes) wound up getting a copy in fall 1974. For what it's worth, about 2 years ago, for posterity purposes, I made a .pdf scan copy of my notes for that class (213 pages). – Dave L Renfro

@Roy Daulton (and Edgar): For more memories, see my answer to Using log table to solve a division problem.

@user157860: The beginning of various editions of Charles Hutton's Mathematical Tables (1785 edition, other editions) has a lot of historical information. More complete historical information is known today, but it's interesting to read accounts written back then, which don't have the present-day myopic treatments in which everything is viewed through the lens of (electronic) computers.

The Babylonian clay tablet from around 1700 BC known as YBC7289 (so called since it is one of many in the Yale Babylonian Collection) has a diagram of a square with one side marked as having length 1/2. They took this length, multiplied it by the square root of 2, and got the length of the diagonal. Given that square roots were in use then, it would have made sense to create a table of such, rather than repeating the labour in calculating the result. – Mozibur Ullah

After seeing all these antiquities it seems that humans were quite intelligent thousands of years ago. From Mesopotamia to the building of huge pyramids---mysteries are spread everywhere.

In addition to the existence of such tables, (in the later 1960's, for example) high school math classes included some lessons on (linear) interpolation from the values given in tables. And some of us wondered about higher-order interpolation, and so on, which does nicely lead to many topics in calculus ... and splines and other stuff. Slide rules were a very good competitor to "tables", in many situations, since they could give quicker answers (tho' somewhat lower precision) and were more portable. – paul garrett
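As a small illustration of the linear interpolation mentioned in the last answer, one would estimate a root between two tabulated entries roughly like this (the specific numbers are just an example):

```python
import math

x0, x1 = 2.34, 2.35
y0, y1 = math.sqrt(x0), math.sqrt(x1)      # the two neighbouring table entries
x = 2.346
y = y0 + (x - x0) / (x1 - x0) * (y1 - y0)  # linear interpolation between them
print(y, math.sqrt(x))                     # interpolated vs true value
```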
EMP control and characterization on high-power laser systems

High Power Laser Science and Engineering, Volume 6, 2018, e21

P. Bradford (a1), N. C. Woolsey (a1), G. G. Scott (a2), G. Liao (a3), H. Liu (a4) (a5), Y. Zhang (a4) (a5), B. Zhu (a4) (a5), C. Armstrong (a6), S. Astbury (a2), C. Brenner (a2), P. Brummitt (a2), F. Consoli (a7), I. East (a2), R. Gray (a6), D. Haddock (a2), P. Huggard (a8), P. J. R. Jones (a2), E. Montgomery (a2), I. Musgrave (a2), P. Oliveira (a2), D. R. Rusby (a2), C. Spindloe (a2), B. Summers (a2), E. Zemaityte (a6), Z. Zhang (a4), Y. Li (a4) (a5), P. McKenna (a6) and D. Neely (a2) (a6)

1 Department of Physics, York Plasma Institute, University of York, Heslington, York YO10 5DD, UK
2 Central Laser Facility, STFC Rutherford Appleton Laboratory, Didcot OX11 0QX, UK
3 Key Laboratory for Laser Plasmas (Ministry of Education) and School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
4 Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
5 School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
6 Department of Physics SUPA, University of Strathclyde, Glasgow G4 0NG, UK
7 ENEA - C.R. Frascati - Dipartimento FSN, Via E. Fermi 45, 00044 Frascati, Italy
8 Space Science Department, STFC Rutherford Appleton Laboratory, Didcot OX11 0QX, UK

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
DOI: https://doi.org/10.1017/hpl.2018.21 Published online by Cambridge University Press: 21 May 2018 Figure 1. Schematic of target design and experimental arrangement. Figure 2. EMP energy versus on-target laser energy for the D-dot and two B-dot probes. The coloured lines represent linear fits for all three probes. Figure 3. Plot of EMP energy and total number of escaping electrons versus laser pulse duration. The grey diamonds represent the ratio of EMP energy to on-target laser energy, while the orange diamonds represent the ratio of total electron number ( $N_{e}$ ) to on-target laser energy. EMP data was taken from the B-dot West probe. Figure 4. EMP energy as a function of pre-pulse delay, measured by the D-dot probe. Figure 5. The ratio of EMP energy to laser energy plotted against defocus (as measured by the D-dot East probe). The Gaussian fit is meant as a visual aid, with a laser focal intensity of approximately $1\times 10^{18}~\text{W}\cdot \text{cm}^{-2}$ at the Gaussian peak. Figure 6. EMP energy as a function of on-target laser energy for wire, flag and standard foil designs (B-dot probe East). Laser focal intensity ranges from $8\times 10^{17}~\text{W}\cdot \text{cm}^{-2}$ to $2\times 10^{19}~\text{W}\cdot \text{cm}^{-2}$ on these shots and we have chosen a logarithmic $y$ -axis to emphasize the drop in EMP. Notice how changing the wire diameter has led to a deviation from the linear relationship between EMP and on-target laser energy. Figure 7. EMP energy versus on-target laser energy for a variety of different stalk designs (B-dot probe East). Laser focal intensity is between $8\times 10^{17}~\text{W}\cdot \text{cm}^{-2}$ and $2\times 10^{19}~\text{W}\cdot \text{cm}^{-2}$ for these shots. Also included is a linear fit to the standard CH cylindrical stalk data, as detailed in Figure 2. Figure 8. The three different stalk designs: (a) standard cylindrical geometry with a geodesic path length of 20 mm; (b) a sinusoidally modulated stalk with the same maximum cross-section as the standard cylinder and a path length of 30 mm; (c) spiral stalk design with an identical diameter to (a), but a geodesic path length of 115 mm. Figure 9. Total number of electrons recorded by the electron spectrometer as a function of on-target laser energy. Uncertainties in on-target laser energy are ${\sim}10\%$ . Figure 10. Number of electrons with energies above 5 MeV versus on-target laser energy. Uncertainties in on-target laser energy are ${\sim}10\%$ . Figure 11. Side elevation of stalk designs used in 3D PIC simulations. Transparent grey sections represent a perfect electrical conductor (PEC), while the grey-green regions represent Teflon plastic. (a) Standard cylindrical stalk configuration: pure Teflon and PEC models were used. (b) Sinusoidally modulated Teflon stalk. (c) Teflon spiral stalk. (d) Half-length Teflon and PEC stalk. Figure 12. Two tables containing values of the magnetic component of the EMP energy ( $\unicode[STIX]{x1D716}_{\text{magnetic}}$ ) at positions $P_{1}$ and $P_{2}$ in the simulation box. Figure 13. The ratio of EMP energy to laser energy versus thickness of PE backing on $1~\unicode[STIX]{x03BC}\text{m}$ Cu targets as measured by the B-dot West probe. MathJax is a JavaScript display engine for mathematics. For more information see http://www.mathjax.org. Giant electromagnetic pulses (EMP) generated during the interaction of high-power lasers with solid targets can seriously degrade electrical measurements and equipment. 
EMP emission is caused by the acceleration of hot electrons inside the target, which produce radiation across a wide band from DC to terahertz frequencies. Improved understanding and control of EMP is vital as we enter a new era of high repetition rate, high intensity lasers (e.g. the Extreme Light Infrastructure). We present recent data from the VULCAN laser facility that demonstrates how EMP can be readily and effectively reduced. Characterization of the EMP was achieved using B-dot and D-dot probes that took measurements for a range of different target and laser parameters. We demonstrate that target stalk geometry, material composition, geodesic path length and foil surface area can all play a significant role in the reduction of EMP. A combination of electromagnetic wave and 3D particle-in-cell simulations is used to inform our conclusions about the effects of stalk geometry on EMP, providing an opportunity for comparison with existing charge separation models. Ongoing advances in high-power laser technology[1] have led to renewed interest in the processes that drive electromagnetic pulse (EMP) generation. Control over the strength and frequency of emission is not just essential for the protection of expensive hardware – it could open the door to a new generation of bespoke laser-driven B-field and radio-frequency sources of interest to the inertial confinement fusion, high-field and astrophysical communities[2–4]. A number of different mechanisms have been proposed to explain the broad spectral profile of laser-driven EMP and they all rely upon the acceleration of hot electrons within the target. When a sufficiently intense laser pulse ( $I\unicode[STIX]{x1D706}^{2}\gtrsim 10^{15}~\text{W}\cdot \text{cm}^{-2}\cdot \unicode[STIX]{x03BC}\text{m}^{2}$ ) interacts with a material, a portion of its energy is resonantly and parametrically absorbed, leading to the production of hot electrons with energies exceeding 10 keV[5]. At still higher intensity, other processes (e.g. $\boldsymbol{J}\times \boldsymbol{B}$ heating) can accelerate electrons to MeV energies[6]. It is thought that these electrons contribute towards the EMP in three key stages, starting with the emission of THz radiation as they propagate across the target surface[7]. Although significant currents may be associated with this THz emission, the frequency is generally too high to pose a threat to electronic equipment[8]. The second contribution to the EMP is, by contrast, acutely damaging to circuitry and lies within the GHz spectral domain. It occurs when some of the most energetic hot electrons are ejected from the target[9, 10], leaving behind a potential that both prevents less energetic electrons from escaping and draws a return current out of the chamber surroundings. As this current oscillates across the stalk that connects the target to the chamber, antenna radiation is emitted at radio frequencies[2]. The third spectral component is in the MHz domain and depends on the geometry of the interaction chamber. An expanding cloud of charge is produced by the evaporating target, which strikes the walls of the chamber and causes it to resonate at its natural EM frequency[11]. EMP emission is strongest at high laser energy, when more escaping electrons can be produced. Since the GHz component of the EMP is caused by a neutralization current propagating across the target stalk, by reducing the magnitude and duration of this current one may hope to limit the damaging effects of EMP. 
In this paper, we present new data that shows how a significant reduction in EMP can be achieved with minimal experimental disruption. Experimental results are divided into two main sections – one for EMP variation with laser parameters and the other for variation with target foil and stalk/mount characteristics. The data presented here is independent of target thickness, on which more details can be found in Appendix A. All data used to produce the figures in this work, along with other supporting material, can be found at http://dx.doi.org/10.15124/a5d78c76-0546-412c-8b02-9edcb75efbb7. Our experiment was performed at the Vulcan Target Area West (TAW) laser facility on the site of the Rutherford Appleton Laboratory[12]. We used a short-pulse beamline with 1 ps pulse duration and energies ranging from 1 to 70 J. The incidence angle of the 1030 nm p-polarized beam was $30^{\circ}$ to the target normal. The focal spot size was fixed at $3.5~\mu\text{m}$, with a maximum laser focal intensity of $I=2\times 10^{19}~\text{W}\cdot \text{cm}^{-2}$. Three probes were used to monitor the EMP during the experiment. A B-dot probe and a D-dot probe were placed behind a porthole on the East side of the chamber, $0^{\circ}$ vertically from Target Chamber Centre (TCC). A second B-dot probe was placed opposite, on the West side of the chamber, behind a porthole $35^{\circ}$ vertically from TCC. All three probes were exposed to the air. The B-dot probes were Prodyn B-24 detectors connected to a BIB-100G matching box, while the D-dot was an FD-5C model (also made by Prodyn Technologies). In an attempt to limit the amount of EMP noise pick-up, probe measurements were passed through 35-m double-shielded BNC cables to an oscilloscope situated outside of the target area. The oscilloscope was a Tektronix DPO 71254C model with a 12.5 GHz analog bandwidth, though cable parameters restricted measurements to frequencies below ${\sim}3$ GHz. Probe measurements were converted to EMP energy using the procedure outlined by Kugland et al. in 2012[13]. Ignoring frequencies above 3 GHz and below 50 MHz, we inverted the RG223 cable attenuation and integrated the corrected signal to yield $B(t)$ (or $D(t)$ in the case of the D-dot probe). Next, we used the free-space plane wave approximation ($\mathbf{E}\approx c\mathbf{B}$) to estimate the instantaneous Poynting flux, $S(t)=|\mathbf{E}\times \mathbf{H}|$. The EMP energy could then be calculated via[13] $$\epsilon_{\text{EMP}}=A_{eq}\sum S(t)\,\Delta t,$$ where $A_{eq}$ is the probe equivalent area and $\epsilon_{\text{EMP}}$ is the EMP energy at the probe head. The standard laser-target design consisted of a $3~\text{mm}\times 8~\text{mm}$ metal foil mounted on a 2.9 mm-diameter cylindrical stalk (see Figure 1). All of the stalks were 30 mm in height and positioned along the circumference of a rotating Al wheel. Stalks were composed either of Cu or an acrylic resin called VEROBLACKPLUS RGD875, which we refer to as CH for the remainder of the paper. Escaping electrons produced during the interaction were detected using an electron spectrometer. It was positioned directly in line with the laser, facing the target rear surface. Initial measurements examined the relationship between laser energy and EMP.
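As a concrete, hedged illustration of the probe-signal-to-energy conversion described above (and before the laser-energy scan that follows), the sketch below integrates a synthetic B-dot trace, applies the plane-wave approximation and sums the Poynting flux. The trace, sampling rate and equivalent area are placeholders, and the cable-attenuation and bandpass corrections mentioned in the text are omitted for brevity.

```python
import numpy as np

MU_0, C = 4e-7 * np.pi, 3.0e8          # vacuum permeability, speed of light

def emp_energy(bdot_signal, dt, a_eq):
    """EMP energy collected by a probe of equivalent area a_eq (in joules)."""
    b = np.cumsum(bdot_signal) * dt     # B(t) from the measured dB/dt
    e = C * b                           # plane-wave approximation E = cB
    s = e * b / MU_0                    # |E x H| = E * B / mu_0
    return a_eq * np.sum(s) * dt

# Synthetic example: a damped 1 GHz oscillation sampled at 50 GS/s.
t = np.arange(0, 200e-9, 2e-11)
bdot = 1e3 * np.exp(-t / 50e-9) * np.sin(2 * np.pi * 1e9 * t)
print(emp_energy(bdot, 2e-11, a_eq=1e-4), "J")
```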
For this purpose, 1 ps laser pulses were fired at $100~\unicode[STIX]{x03BC}\text{m}$ -thick Cu targets (hereafter Cu100) on cylindrical CH stalks. In Figure 2, we show weighted linear fits for on-target laser energies between ${\sim}1$ and 70 J ( $I=1\times 10^{17}{-}10^{19}~\text{W}\cdot \text{cm}^{-2}$ ). Linearity is observed across all three diagnostics for laser energies exceeding ${\sim}7~\text{J}$ , which suggests that EMP measurements may be a reliable measure of laser-target coupling for a given target geometry. The dependence of EMP on laser pulse duration was probed using standard Cu100 foils on cylindrical stalks. The pulse duration of the laser was increased gradually to a maximum of 23 ps and EMP measurements were compared with supplementary data from an electron spectrometer. Results indicate that both EMP energy and the total number of emitted electrons drop away for pulse lengths above 10 ps (see Figure 3). Furthermore, a peak in electron and EMP emission was observed at approximately 2.5 ps. Laser focal intensity ranged from $8.7\times 10^{17}~\text{W}\cdot \text{cm}^{-2}$ to $2.4\times 10^{18}~\text{W}\cdot \text{cm}^{-2}$ . The variation of EMP energy with pre-pulse delay is presented in Figure 4. Since the pre-pulse and main drive were both delivered via the same beamline, we attribute the change in EMP to the formation of a frontal pre-plasma[14]. The received pre-pulse energy was consistent at ${\sim}$ 0.6 J, while the main beam energy fluctuated between $55$ and 67 J. Standard Cu100 foils with CH stalks were used as targets and laser focal intensity was maintained at $I\,\sim \,5\,\times \,10^{18}~\text{W}\,\cdot \,\text{cm}^{-2}$ . Figure 4 suggests that the greater the delay between the pre-pulse and main drive, the greater the EMP energy. This is consistent with current theoretical models of laser absorption and EMP generation. Scott et al. have shown that laser absorption is a strong function of plasma density and scale length[15], which are both dependent on the pre-pulse delay. The longer the delay between the pre-pulse and main drive, the greater the pre-plasma expansion and the greater the transfer of laser energy to hot electrons. The effect of laser focus on EMP energy can be seen in Figure 5. On-target laser energy spanned a $54$ –64 J range and the beam was focussed onto Cu100 foils mounted on cylindrical CH stalks. Using a Gaussian fit to guide the eye, peak emission appears to fall at a modest defocus, dropping away towards zero at a distance of approximately $\pm 300~\unicode[STIX]{x03BC}\text{m}$ from the focal position. It has been reported in a number of publications that foil surface area has a significant impact on charge separation and GHz emission from the target[8, 10, 16]. Our experiment used three different foil designs, each made from copper and mounted on CH stalks. The standard foils were $100~\unicode[STIX]{x03BC}\text{m}$ -thick with a $3~\text{mm}\times 8~\text{mm}$ rectangular surface. We also used smaller 'flag' targets ( $1~\text{mm}\times 1~\text{mm}$ and $0.5~\text{mm}\times 0.5~\text{mm}$ ), as well as wire targets with 25, 50 and $100~\unicode[STIX]{x03BC}\text{m}$ diameters. A marked reduction in EMP was seen on shots involving the flag and wire targets, with over an order of magnitude drop in EMP energy observed for the wire shots (Figure 6). This is qualitatively consistent with existing theoretical and experimental work, which indicates that EMP is strongest for targets with a large transverse area[8, 16–18]. 
Larger targets tend to build up lower positive potentials because the potential difference caused by the ejection of hot electrons is spread out over a wide area. As a result, more electrons are able to escape and a bigger neutralization current is generated[16, 17]. To explore how the stalk's material composition might affect the measured EMP, we compared Al and CH plastic stalks with a fixed cylindrical geometry ( $r=2.9~\text{mm}$ , height is 30 mm). We found that the EMP energy dropped by more than $1/3$ when Al stalks were substituted for plastic (see Figure 7). To probe the effect of stalk shape on EMP, Cu100 foils were suspended on a variety of 3D-printed CH stalk designs. The geometry and geodesic path length of each design are detailed in Figure 8. We use the term geodesic path to denote the shortest route from the base of the stalk to the bottom of the foil travelling along the stalk surface. It is introduced as a rough measure of stalk impedance and resistance to electrical breakdown. If the reader refers again to Figure 7, they will observe that EMP was significantly reduced on shots involving the modulated and spiral stalks. The modulated design reduced the received signal by ${\sim}1/3$ on average, but the most profound effects were seen when using the spiral target. Follow-up shots with a 20 ps extended pulse confirmed that the spiral stalk reduces EMP by a factor of ${\sim}7$ with respect to the CH cylinders and over an order of magnitude with respect to the Al. Now that we have confirmed that the modified stalks offer a clear advantage over conventional designs, it is important to understand why. If the reduction in EMP was caused by impaired charge separation in the target one would expect to see a change in the electron distribution. We find, however, that the number and energy of ejected electrons do not change significantly for shots involving the spiral and modulated stalks. Data from the electron spectrometer (see Figures 9 and 10) shows that the energy, temperature and number of emitted electrons scale strongly with laser energy, but have no correlation with stalk geometry. We can therefore be confident that the drop in EMP is caused by a corresponding reduction in the return current through the stalk. The magnitude and temporal profile of this return current were not captured by our experiment. For a foil mounted on top of a dielectric stalk, a polarization current can pass through the stalk body or electrical breakdown can lead to the generation of a surface current[19]. By increasing the geodesic path length while keeping the stalk height constant, it is possible to increase both the impedance and inductance of the target stalk. The benefits of this approach are most clearly seen in the spiral signal; however, since the cylindrical and modulated stalks have similar electromagnetic characteristics, differences between the two may be a combination of several factors. In the next section, we explore other potential explanations for the observed reduction in EMP. The sinusoidal and spiral stalks not only have a greater base geodesic length — their shape introduces a shadowing effect that could make them more resistant to photoionization, charge implantation and electrical breakdown. To better understand the effects of stalk geometry on EMP emission, self-consistent 3D PIC simulations were performed alongside full-wave time-domain EM simulations[20]. 
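Before turning to those simulations, a rough geometric check of the geodesic-path argument above may be useful: the sketch below compares the shortest surface path of a straight cylindrical stalk with the arc length of a helical path of the same height. The radius, height and number of turns are illustrative and are not the dimensions of the printed stalk designs used in the experiment.

```python
import math

def cylinder_geodesic(height_mm):
    """Shortest surface path on a straight cylinder: simply its height."""
    return height_mm

def helix_geodesic(height_mm, radius_mm, n_turns):
    """Arc length of a helix wound n_turns times around a cylinder.

    Unrolling the cylinder turns the helix into the hypotenuse of a right
    triangle with legs `height` and `2*pi*radius*n_turns`.
    """
    return math.hypot(height_mm, 2.0 * math.pi * radius_mm * n_turns)

height, radius = 30.0, 1.45   # 30 mm tall stalk, 2.9 mm diameter (illustrative)
print(f"cylinder: {cylinder_geodesic(height):.1f} mm")
for turns in (2, 5, 10):
    print(f"helix, {turns} turns: {helix_geodesic(height, radius, turns):.1f} mm")
```

Increasing the number of turns lengthens the surface path substantially at fixed stalk height, which is the sense in which the spiral geometry is expected to raise the stalk impedance and its resistance to electrical breakdown.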
Simulated foil dimensions were fixed at $8~\text{mm}\times 4.5~\text{mm}\times 0.75~\text{mm}$ and targets were placed at the centre of a perfectly conducting box with $X_{\text{box}}=800~\text{mm}$ , $Z_{\text{box}}=600~\text{mm}$ and height $Y_{\text{box}}=440~\text{mm}$ . Descriptions of the various stalk designs can be found in Figure 11. Simulated particles were emitted from a circle of 1 mm radius, centred on the foil surface. Conical electron emission was radially uniform within an angle of $40^{\circ }$ with respect to the target normal and particle energies were uniformly distributed between 50 and 150 keV. The total emitted charge was restricted to 5 nC in order to maintain cone structure and minimize space-charge effects. The electron current was set to a maximum at the first computational step before undergoing a Gaussian decay with an inflection time of 0.5 ns. Since we are only interested in the GHz component of the EMP, these assumptions are suitable for picosecond-scale laser interactions with a nanosecond-order response time. The ejected electron current is the source of all EM fields inside the box. Eigenmode field solutions are excited as these electrons travel across the box interior and over the target stalk. All electrons ejected from the target will have reached the walls after 6 ns, at which point they can contribute towards the current flowing across the stalk. In a closed, perfectly conducting chamber, modal fields excited by electron currents will have no opportunity to decay. Simulations were therefore stopped after 25 ns, when oscillations had achieved a steady state. For each of the five stalk designs, the energy associated with the electric ( $\unicode[STIX]{x1D716}_{\text{electric}}=\int _{0}^{t_{f}}|\text{}\text{E}|^{2}\,\text{d}t$ ) and magnetic ( $\unicode[STIX]{x1D716}_{\text{magnetic}}=\int _{0}^{t_{f}}|\text{}\text{H}|^{2}\,\text{d}t$ ) fields was calculated. These calculations were performed at two locations: $P_{1}(-X_{\text{box}}/4,-Y_{\text{box}}/4,Z_{\text{box}}/4)$ and $P_{2}(X_{\text{box}}/4,Y_{\text{box}}/4,Z_{\text{box}}/4)$ . Simulation results for the magnetic energy at the two locations are contained in Figure 12. In switching from the PEC mount to the Teflon stalk, $\unicode[STIX]{x1D716}_{\text{magnetic}}$ was reduced by a factor of $27$ at $P_{1}$ and a factor of $16$ at $P_{2}$ . No advantage was found for using the sinusoidal stalk over the dielectric cylinder and only a modest additional reduction was found for the spiral stalk ( $26\%$ at $P_{1}$ and $12\%$ at $P_{2}$ ). Although these results show striking EMP attenuation when switching from conducting to insulating stalks, they do not explain the lower attenuation of the cylindrical dielectric stalk compared with the sinusoidal and spiral designs. One possible explanation involves a superficial charged layer caused by X-ray/UV photoionization and electron/ion bombardment of the rod surface, effectively transforming the dielectric stalk into a conductor and reducing the low-conductance path length. Stalks with a large low-conductance path length, such as the spiral stalk, will be more resistant to electrical breakdown and EMP. To model the generation of this hypothetical charged layer, simulations were performed using a dielectric stalk of half-length (see Figure 11). 
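For reference, the energy metrics used to compare the stalk designs can be evaluated directly from field time series recorded at the probe points $P_1$ and $P_2$. The sketch below assumes the solver exports $E$ and $H$ at each location as arrays of shape (n_steps, 3); this bookkeeping, the free-space impedance used for the toy H-field and the synthetic ringdown are assumptions for illustration, not part of the simulation code used in the paper.

```python
import numpy as np

def field_energy_metrics(e_xyz, h_xyz, dt):
    """Time-integrated field metrics at one probe location.

    e_xyz, h_xyz : arrays of shape (n_steps, 3) holding E [V/m] and H [A/m]
    dt           : output time step [s]
    Returns (int |E|^2 dt, int |H|^2 dt), the quantities compared in the text.
    """
    e_sq = np.sum(np.asarray(e_xyz) ** 2, axis=1)
    h_sq = np.sum(np.asarray(h_xyz) ** 2, axis=1)
    return float(np.sum(e_sq) * dt), float(np.sum(h_sq) * dt)

def reduction_factor(eps_reference, eps_test):
    """How much smaller the test-stalk signal is relative to a reference stalk."""
    return eps_reference / eps_test

# Toy comparison: a 'PEC-like' and a 'dielectric-like' synthetic ringdown
t = np.arange(0, 25e-9, 1e-11)
ring = np.sin(2 * np.pi * 0.5e9 * t)
e_pec = np.stack([ring, 0.2 * ring, np.zeros_like(t)], axis=1) * 1e4
e_diel = 0.2 * e_pec                      # weaker fields for the insulating stalk
eps_pec, _ = field_energy_metrics(e_pec, e_pec / 377.0, dt=1e-11)
eps_diel, _ = field_energy_metrics(e_diel, e_diel / 377.0, dt=1e-11)
print(f"reduction factor: {reduction_factor(eps_pec, eps_diel):.1f}")
```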
The shorter stalk produced a much stronger EMP than the full-size cylinder, demonstrating that an effective decrease in geodesic path length (through photoionization and/or charge implantation) may be responsible for the relatively low EMP attenuation observed for CH cylinders in our experiment. It also provides us with an explanation for the performance of the modified stalks. Both the sinusoidal and spiral stalks have surface regions out of direct sight of the target, protecting them from the harsh laser plasma interaction and increasing their low-conductance path length. Although these results are promising, it should be remembered that our simulations do not account for photoionization or charge implantation physics. Further experimental work is needed before we can definitively identify the cause of reduced EMP emission from modified stalks. Control and characterization of EMP emission at the VULCAN TAW facility has been achieved through the alteration of laser, target foil and stalk/mount characteristics. EMP energy was found to scale linearly with applied laser energy, but it is also sensitive to laser pre-pulse delay, pulse duration, defocus, stalk material and target transverse area. We have successfully reduced the measured EMP energy by increasing the geodesic path length of the target stalk and we have shown that a dielectric spiral design is an effective and unobtrusive means of limiting GHz emission from the target. 3D PIC simulations suggest that this reduction may be due to a shadowing effect that limits photoionization and charge implantation along the length of the stalk. A full theoretical description of the current discharge mechanism in these modified stalks is left to future work. The authors would like to thank the staff of the Central Laser Facility, whose support and expertise were invaluable in the production of this paper. They also gratefully acknowledge funding from EPSRC grants EP/L01663X/1 and EP/L000644/1, the Newton UK grant, the National Natural Science Foundation of China NSFC/11520101003, and the LLNL Academic Partnership in ICF. Appendix A. Target foil thickness Rectangular Cu foils suspended on CH cylindrical stalks were used to gauge the impact of foil thickness on EMP emission. A variety of thicknesses were tested between $1$ and 100 $\unicode[STIX]{x03BC}\text{m}$ , but we discovered no substantial correlation with integrated EMP energy. We also varied the thickness of polyethylene (PE) applied to the back of Cu–PE targets, as shown in Figure 13. Fixing the Cu thickness at $1~\unicode[STIX]{x03BC}\text{m}$ and the stalks as CH cylinders, we increased the PE backing up to $5000~\unicode[STIX]{x03BC}\text{m}$ . Again, no clear trend was observed. For target thicknesses smaller than the hot electron range, one would expect increased target charging from electrons exiting the target rear surface[2, 21]. Given that our results display no such trend, it is likely the targets were too thick to resolve this effect. 1. Danson, C. Hillier, D. Hopps, N. and Neely, D. High Power Laser Sci. Eng. 3, e3 (2015). 2. Poyé, A. Hulin, S. Bailly-Grandvaux, M. Ribolzi, J. Raffestin, D. Bardon, M. Lubrano-Lavaderci, F. Santos, J. J. Nicolaï, P. and Tikhonchuk, V. Phys. Rev. E 91, 043106 (2015). 3. Goyon, C. Pollock, B. B. Turnbull, D. P. Hazi, A. Divol, L. Farmer, W. A. Haberberger, D. Javedani, J. Johnson, A. J. Kemp, A. Levy, M. C. Logan, B. G. Mariscal, D. A. Landen, O. L. Patankar, S. Ross, J. S. Rubenchik, A. M. Swadling, G. F. Williams, G. J. Fujioka, S. Law, K. F. 
F. and Moody, J. D. Phys. Rev. E 95, 033208 (2017). 4. De Marco, M. Krása, J. Cikhardt, J. Velyhan, A. Pfeifer, M. Dudžák, R. Dostál, J. Krouský, E. Limpouch, J. Pisarczyk, T. Kalinowska, Z. Chodukowski, T. Ullschmied, J. Giuffrida, L. Chatain, D. Perin, J.-P. and Margarone, D. Phys. Plasmas 24, 083103 (2017). 5. Courtois, C. Ash, A. D. Chambers, D. M. Grundy, R. A. D. and Woolsey, N. C. J. Appl. Phys. 98, 054913 (2005). 6. Gibbon, P. Short Pulse Laser Interactions with Matter: An Introduction (Imperial College Press, 2005). 7. Liao, G. Q. Li, Y. T. Zhang, Y. H. Liu, H. Ge, X. L. Yang, S. Wei, W. Q. Yuan, X. H. Deng, Y. Q. Zhu, B. J. Zhang, Z. Wang, W. M. Sheng, Z. M. Chen, L. M. Lu, X. Ma, J. L. Wang, X. and Zhang, J. Phys, Rev. Lett. 116, 205003 (2016). 8. Poyé, A. Dubois, J. L. Lubrano-Lavaderci, F. D'Humières, E. Bardon, M. Hulin, S. Bailly-Grandvaux, M. Ribolzi, J. Raffestin, D. Santos, J. J. Nicolaï, P. and Tikhonchuk, V. Phys. Rev. E 92, 043107 (2015). 9. Brown, C. J. Jr Throop, A. Eder, D. and Kimbrough, J. J. Phys. Conf. Ser. 112, 032025 (2008). 10. Dubois, J. L. Lubrano-Lavaderci, F. Raffestin, D. Ribolzi, J. Gazave, J. Compant La Fontaine, A. D'Humières, E. Hulin, S. Nicolaï, P. Poyé, A. and Tikhonchuk, V. T. Phys. Rev. E 89, 1 (2014). 11. Mead, M. J. Neely, D. Gauoin, J. Heathcote, R. and Patel, P. Rev. Sci. Instrum. 75, 4225 (2004). 12. Edwards, C. B. Danson, C. N. Hutchinson, M. H. R. Neely, D. and Wyborn, B. AIP Conf. Proc. 426, 485 (1998). 13. Kugland, N. L. Aurand, B. Brown, C. G. Constantin, C. G. Everson, E. T. Glenzer, S. H. Schaeffer, D. B. Tauschwitz, A. and Niemann, C. Appl. Phys. Lett. 101, 024102 (2012). 14. McKenna, P. Lindau, F. Lundh, O. Neely, D. Persson, A. and Wahlstrom, C.-G. Philos. Trans. R. Soc. A 364, 711 (2006). 15. Scott, G. G. Bagnoud, V. Brabetz, C. Clarke, R. J. Green, J. S. Heathcote, R. I. Powell, H. W. Zielbauer, B. Arber, T. D. McKenna, P. and Neely, D. New J. Phys. 17, 033027 (2015). 16. Chen, Z.-Y. Li, J.-F. Yu, Y. Wang, J.-X. Li, X.-Y. Peng, Q.-X. and Zhu, W.-J. Phys. Plasmas 19, 113116 (2012). 17. Raven, A. Rumsby, P. T. Stamper, J. A. Willi, O. Illingworth, R. and Thareja, R. Appl. Phys. Lett. 35, 526 (1979). 18. Eder, D. C. Throop, A. Brown, C. G. Kimbrough, J. Stowell, M. L. White, D. A. Song, P. Back, N. Macphee, A. Chen, H. Dehope, W. Ping, Y. Maddox, B. Lister, J. Pratt, G. Ma, T. Tsui, Y. Perkins, M. O'Brien, D. and Patel, P. Lawrence Livermore National Laboratory Report (2009), LLNL-TR-411183. 19. Zener, C. Proc. R. Soc. Lond. Ser. A 145, 523 (1934). 20. Consoli, F. De Angelis, R. Duvillaret, L. Andreoli, P. L. Cipriani, M. Cristofari, G. Di Giorgio, G. Ingenito, F. and Verona, C. Sci. Rep. 6, 27889 (2016). 21. Ra̧czka, P. Dubois, J.-L. Hulin, S. Tikhonchuk, V. Rosiński, M. Zaraś-Szydłowska, A. and Badziak, J. Laser Part. Beams 35, 677 (2017).
Urban-induced changes in tree leaf litter accelerate decomposition
Jens Dorendorf, Anja Wilken, Annette Eschenbach & Kai Jensen
Ecological Processes volume 4, Article number 1 (2015)
The role of urban areas in the global carbon cycle has so far not been studied conclusively. Locally, urbanization might affect decomposition within urban boundaries. So far, only few studies have examined the effects of the level of urbanization on decomposition. This study addresses the influence of the level of urbanization on decomposition processes. It explores whether potential influences are exerted through leaf litter quality alterations or through direct effects of decomposition site's level of urbanization. Leaf litter of five different tree species was sampled at urban and periurban sites. Decomposition of this litter was analyzed in three different experiments: a climate chamber incubation, a reciprocal litterbag transplant at urban and periurban sites, and a common garden litterbag transplant. Decomposition site's level of urbanization did not show a significant effect. However, in all species, when significant differences were observed, leaf litter of urban origin decomposed significantly faster than leaf litter of periurban origin. This effect was observed in all three experiments. In the reciprocal litter transplant experiment, 62% ± 3% mass loss in litter of urban origin compared to 53% ± 3% in litter of periurban origin was observed. The difference was not as pronounced in the other two experiments, with 94% ± 1% mass loss of litter originating in urban habitats compared to 92% ± 1% mass loss of litter originating in periurban habitats in the common garden experiment and 225 ± 13 mg CO2 released from litter originating in urban habitats compared to 200 ± 13 mg CO2 released from litter originating in periurban habitats in the climate chamber incubation. We conclude that the level of urbanization affects decomposition indirectly through alterations in leaf litter quality even over short urban to periurban distances. Urban areas play a crucial role in anthropogenic climate change. Due to the high number of people living in cities (more than half of the world's population, UN 2011), cities can be considered 'hot spots' for the release of CO2 (Grimm et al. 2008). Urban areas, however, not only are sources of CO2 but also store substantial amounts of organic carbon in trees and soils (Churkina et al. 2010). Moreover, cities not only affect the global carbon cycle through anthropogenic release of CO2 and storage of organic carbon but have long been observed to influence decomposition (e.g., Fenn and Dunn 1989). As urban areas cover substantial amounts of the earth's land surface (with estimates up to 3.52 million km2, Potere and Schneider 2007), the influence of urban areas on decomposition processes is of interest beyond individual city limits. This is especially true for carbon sequestration modeling in residential landscapes (e.g., Zirkle et al. 2012). Decomposition is a key component in the carbon cycle, as carbon bound in biomass is either released into the atmosphere as CO2 or stabilized by humification processes into long-lived soil organic matter. The process of decomposition is complex and includes, for example, ingrowth of microbial biomass and nutrient accumulation in addition to leaching and respiration. Here, the term decomposition is used to describe the net mass loss of biomass (Berg and McClaugherty 2008).
Decomposition is influenced by abiotic environmental conditions, decomposer communities, and the quality of decomposing material (Berg and McClaugherty 2008). All of these factors can be altered by the level of urbanization. Abiotic environmental conditions shown to be influenced by the level of urbanization are raised temperatures (urban heat island (UHI), Oke 1973) and increased precipitation in and downwind of cities (Schlünzen et al. 2010). Decomposer communities have been shown to be influenced by the level of urbanization due to heavy metal deposition (Cotrufo et al. 1995) and introduction of non-native soil fauna (Steinberg et al. 1997). The quality of decomposing material can be altered in numerous chemical parameters known to influence decomposition processes (e.g., N content), with significant interspecific differences in alterations (Carreras et al. 1996; Alfani et al. 2000). As some alterations due to the level of urbanization might have an accelerating effect on decomposition, while others might have a decelerating one, predicting the net effect of the level of urbanization on decomposition is challenging and empirical data is scarce. Following the model of a concentric city, with a heavily urbanized inner city and a gradually decreasing level of urbanization towards the city's fringes (Wittig et al. 1998), distance to city center can be used as an approximated, inversed urbanization gradient. Employing gradients or paired studies from inner city study sites to study sites at a city's fringes or even in the rural hinterlands has been pointed out as a valuable possibility to study the ecological impact of urbanization (McDonnell and Pickett 1990). So far, the impact of the level of urbanization on decomposition has only been studied in few cities around the globe and results are contradictory. Leaf litter alterations in litter originating from urban habitats in Naples as well as in New York City led to a slower decomposition (Cotrufo et al. 1995; Carreiro et al. 1999; Pouyat and Carreiro 2003), while litter from Helsinki decomposed faster (Nikula et al. 2010) compared to the respective litter of rural origin. Environmental and soil alterations at urban sites led to an accelerated mass loss of litter in New York City and Helsinki (Pouyat et al. 1997; Pouyat and Carreiro 2003; Nikula et al. 2010), while a decelerated mass loss has been observed at urban sites in Naples as well as in Asheville, NC, USA (Cotrufo et al. 1995; Pavao-Zuckerman and Coleman 2005) compared to the respective rural sites. Still other studies found no effect on decomposition, neither of the decomposition site's level of urbanization nor of leaf litter origin site's level of urbanization (Carreiro et al. 2009). Since results of previous studies are contradictory and cities around the globe differ drastically, e.g., in their structure and pollution regime, more research in various cities is needed to detect general trends in the influence of urbanization on decomposition. The study at hand examines the effects of level of urbanization on decomposition in the city of Hamburg, Germany, either through alterations in leaf litter quality or conditions at the decomposition site. Moreover, the study examines decomposition of leaf litter of five different tree species. Until now, individual studies have solely been conducted with leaf litter of a single species (e.g., Pouyat et al. 1997). As species or even variants of the same species differ in their reaction to urbanization (Carreras et al. 
1996), potentially leading to differently altered decomposition processes, this study includes leaf litter of five tree species to enable interspecific comparisons. Chosen species are common in the study area and attention was paid to include species with differing ecology, namely with and without symbiosis with dinitrogen (N2)-fixing bacteria, and native as well as non-native species. To compare results between cities without the confounding influence of interspecific differences, this study includes two species previously studied in other cities (Populus tremula in Helsinki and Quercus rubra in New York City) (Carreiro et al. 1999; Pouyat and Carreiro 2003; Nikula et al. 2010). Marked differences exist between early and late stages of leaf litter decomposition (Berg and McClaugherty 2008). Simplified, in early stages of decomposition, non-lignified compartments of litter are decomposed as they are easily accessible to microorganisms (Berg and McClaugherty 2008). This process is accelerated by high litter N content and decelerated by high lignin content (Berg and McClaugherty 2008). In later stages, this relation is reversed, the overall decomposition rate is slowed, and a high lignin and low N content are related to a relatively faster and more complete decomposition (Berg and McClaugherty 2008). To analyze different stages of decomposition in a limited amount of time, three different methods were employed: a climate chamber incubation focusing on early stages of decomposition, a reciprocal litter transplant expected to show intermediate mass loss values (intermediate stage), and a common garden experiment with elevated temperature and humidity expected to lead to high mass loss values (late stage). This study aims to (1) analyze whether origin of tree leaf litter (urban vs. periurban, respective city center vs. city fringe) affects its decomposition over a short distance. Further, the study (2) examines direct influences of decomposition sites (urban vs. periurban) on mass loss by comparing decomposition at urban and periurban sites including field measurements of the site conditions. These questions are addressed in combination with (3) evaluating interspecific differences in decomposition between the five chosen species and the respective influences of the level of urbanization. Additionally, it (4) compares results of the three employed methods representing different stages of decomposition. Study site Hamburg, Germany, is located in northern Europe (53°38′N, 10°0′E) and home to about 1.8 million inhabitants (Statistisches Amt für Hamburg und Schleswig-Holstein 2013). In addition to typical inner city built-up, the city's 755 km2 include various land uses, e.g., grasslands and agriculture. Its climate is temperate oceanic with 749 mm of precipitation annually and an average annual temperature of 8.8°C for the years from 1891 to 2007 (Hoffmann and Schlünzen 2010). An urban heat island of inner city sites of about 1.0 K and a 5% to 20% increased precipitation compared to areas 43 km downwind of the urban center have been determined (Schlünzen et al. 2010; Wiesner et al. 2014). Collection of leaf litter and analyzed species Air-dried, senescent leaf litter samples of five different tree species were used for this study. Species include native Acer platanoides, Alnus glutinosa, and Populus tremula and non-native Quercus rubra and Robinia pseudoacacia. Of these, Alnus as well as Robinia live symbiotically with N2-fixing bacteria. All species are common to the study area. 
To sample trees of urban and periurban origin, ten individual trees per species were sampled in the city's center and at its periphery. To find individual sample trees, we used the tree cadaster of Hamburg's Office for Urban Planning and Nature. The urban and periurban sites were chosen as to include a sufficient number of individuals of each species within a reasonable distance of one another. Thus, the urban sites were located about 3 km north and the periurban sites about 11 km east of the city hall. The term 'urban' has so far not been defined comprehensively (Raciti et al. 2012). Neither has the term 'periurban,' used here to describe areas between urban center and rural hinterlands. Periurban sample sites in our study were located in districts of Hamburg with a relatively high population density and a mixture of large open spaces, rural lots, major roads, and industrial complexes. Fifty individual trees at both sites were selected with the objective of growing in rather free ground in parks or green strips of varying sizes and ages, but not in small roadside tree-planting spaces. According to data obtained from the German Meteorological Service (DWD), urban tree sites have a higher mean temperature and lower mean precipitation than periurban tree sites (9.5°C and 753 mm compared to 9.0°C and 784 mm, respectively). Additionally, districts harboring urban sites had a higher mean population density than the periurban ones (11,000 people per km2 and 4,000 people per km2, respectively) (Statistisches Amt für Hamburg und Schleswig-Holstein 2012). Though no direct measurements of air quality, e.g., O3 concentration, were conducted at the tree sites, differences in air chemistry can be expected between them (Freie und Hansestadt Hamburg 2012). Leaf litter was collected off the ground in October 2011 and air-dried (Of the sampled trees, two urban Acer and one urban Populus were determined to be of another species during this step and their litter excluded). Petioles of litter were discarded and litter was torn into pieces of about 3-cm diameter to ease subsequent filling of litterbags. Subsequently, litter was pooled according to species and origin of leaf litter, i.e., urban and periurban tree locations, yielding ten pools in total. Litterbag experiments Litterbags were constructed from PVC-coated fiberglass fly-screen with a mesh size of 1.2 mm × 1.4 mm by heat welding, allowing access of soil microorganisms to the litter, but excluding macrofauna (Swift et al. 1979). Litterbags were square-shaped with 15-cm side length and were filled with about 2.00 g of the pooled, air-dried litter. Litter mass as well as mass of litter residues was determined with a two-decimal accuracy to calculate relative percent mass loss (see below). Two correction factors were established for mass of initial litter. One was a species-specific air-dried to oven-dried correction factor, obtained by dividing the mass of five litter subsamples dried at 105°C for 48 h per species by their initial air-dried weight. The other was a decomposition site (see below)-specific travelbag factor, accounting for litter fragments lost due to handling and transport. Travelbags were constructed in the same way as other litterbags and were transported to and back from the decomposition site with mass loss due to handling determined subsequently. Twenty-five travelbags were constructed in total and culminated in a mean correction factor of 4% litter mass lost. 
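A minimal sketch of how the two correction factors can be folded into a relative mass loss figure is given below. The way the travelbag loss is applied (as a multiplicative correction to the initial oven-dry mass) and all of the numbers are assumptions for illustration, not values from the experiment.

```python
def oven_dry_factor(air_dry_g, oven_dry_g):
    """Species-specific air-dried -> oven-dried conversion factor.

    Mean ratio of oven-dry to air-dry mass over the subsamples
    (five subsamples per species in the experiment).
    """
    return sum(od / ad for ad, od in zip(air_dry_g, oven_dry_g)) / len(air_dry_g)

def percent_mass_loss(initial_air_dry_g, residue_oven_dry_g,
                      dry_factor, travel_loss=0.04):
    """Relative mass loss [%] of one litterbag.

    initial_air_dry_g  : air-dried litter mass filled into the bag (~2.00 g)
    residue_oven_dry_g : oven-dried residue recovered after incubation
    dry_factor         : oven-dry/air-dry ratio for the species
    travel_loss        : fraction lost to handling/transport (travelbags, ~4%)
    """
    initial_oven_dry = initial_air_dry_g * dry_factor * (1.0 - travel_loss)
    return 100.0 * (initial_oven_dry - residue_oven_dry_g) / initial_oven_dry

# Illustrative numbers only
factor = oven_dry_factor([1.02, 0.98, 1.01, 1.00, 0.99],
                         [0.93, 0.90, 0.92, 0.91, 0.90])
print(f"oven-dry factor: {factor:.3f}")
print(f"mass loss: {percent_mass_loss(2.00, 0.85, factor):.1f} %")
```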
Reciprocal litter transplant experiment To assess the effect the level of urbanization has on litter decomposition via litter quality alterations and alterations at the decomposition site, a reciprocal litter transplant experiment was carried out. For litter to reach an intermediate stage of decomposition under near-natural conditions, a duration of 155 to 161 days was chosen after which all litterbags were retrieved at once. Sets of litterbags were deployed in March 2012 at 15 urban and 15 periurban sites and retrieved in August 2012. The 30 sites of decomposition were chosen from the 100 tree sites of litter origin (10 individual trees per species per origin). At each decomposition site, one set of litterbags was set out (Figure 1). Each litterbag set included one bag of all ten pools: each species (Acer, Alnus, Populus, Quercus, Robinia) of each origin (urban, periurban). The ten bags per set were set out next to another with a 3- to 5-cm gap between them. To prevent disturbance from, e.g., passersby, litterbags were covered with 1 to 5 cm of soil. Urban and periurban sites of litter decomposition in the city of Hamburg. The black line represents the political border of Hamburg, while red dots represent urban and blue dots periurban litterbag setups. Since sites of decomposition were chosen from the 100 tree sites of litter origin, they were adjacent to the said tree and 'near-natural,' meaning parks of various sizes, green strips, or hedgerows. We established a high number of sites (30 in total) to represent the variety of soils and soil covers found at urban and periurban 'near-natural' sites, though an in-depth analysis of soil types was beyond the scope of this study. Soil cover varied between and within urban and periurban sites from true litter layers, to understory herbs, to bare soil. Sites were shaded by nearby trees and their understory. While all decomposition sites included litterbags of all five species of both origins, the individual tree at the decomposition site naturally belonged to only one of the species. The assessment of a potential 'home-field advantage' (e.g., Kagata and Ohgushi 2013) (e.g., Acer litter decomposing faster under an Acer tree) was beyond the scope of this study. To prevent 'home-field advantage' from being a confounding factor, three urban and three periurban sites of decomposition were set up per tree species, resulting in an equal number of litterbag sets per tree species, e.g., three sites under an urban and three under a periurban Acer tree and so forth. After retrieval, litterbags were opened and contents washed to separate litter residues from contaminants, e.g., soil and visible fauna. Afterwards, residues were oven-dried at 105°C for 48 h and weighed. Soil parameters and leaf litter quality To determine soil temperatures at urban and periurban decomposition sites during the reciprocal litter transplant experiment, iButtons® (accuracy of ±0.5°C and resolution of 0.0625°C; DS1922L-F5#, Maxim Integrated, San Jose, CA, USA) were placed in plastic zipper bags and placed alongside the buried litterbags (see above) at 0- to 5-cm depth. To determine differences in water content, pH, and salt concentration between urban and periurban decomposition sites, soil cores were taken at all decomposition sites at six dates during the course of the litterbag experiment. To detect possible water shortages during dry periods, sampling dates for taking cores were set to be after at least 3 days without precipitation. 
The cores were transported to the laboratory in airtight plastic bags and gravimetric water content determined by weight difference before and after drying at 105°C for 24 h. Oven-dried samples were used to determine pH in CaCl2 and electrical conductivity in H2O with a pH and conductivity meter (Eijkelkamp 18.28, Eijkelkamp Agrisearch Equipment, Giesbeek, The Netherlands). All initial leaf litter quality analyses were conducted on air-dried, ground leaf material. Carbon and nitrogen content of litter oven-dried at 105°C for 24 h was determined via CN analyses (vario MAX CNS elementar, Elementar Analysensysteme GmbH, Hanau, Germany) and content of main nutrients and some trace elements of the same oven-dried litter (Al, B, Ca, Cr, Cu, Fe, K, Mg, Mn, Na, P, S, Zn) via inductively coupled plasma optical emission spectrometry (iCAP 6300 duo, Thermo Fisher Scientific, Schwerte, Germany). Furthermore, structural carbohydrate content of litter dried at room temperature was determined through subsequent digestion in a fiber analyzer (ANKOM 2000, ANKOM Technology, Macedon, NY, USA) in detergent solutions for neutral and acid detergent fiber (NDF consisting mainly of hemicellulose, cellulose, and lignin; ADF consisting mainly of cellulose and lignin) as well as acid detergent lignin (ADL) as described by the manufacturer (ANKOM 2014a, ANKOM 2014b). As the analytical procedure provided by the manufacturer does not include oven-drying of the samples, possible differences in water content between species and subsequent bias in structural carbohydrate content cannot be excluded. Thus, interspecific comparisons in structural carbohydrate content need to be conducted cautiously. Common garden litter transplant experiment To study mass loss in more advanced, later stages of decomposition in the same time span as the reciprocal litter transplant experiment, a decomposition site with expected accelerated decomposition was chosen. To examine alterations in decomposition based solely on origin of leaf litter and without the influence of differing sites of decomposition, 15 sets of litterbags (see above) were placed in an experimental field at the Biocenter Klein Flottbek, Hamburg. To accelerate decomposition, an experimental site known to provide naturally increased soil water contents and to be sun-exposed was chosen. Additionally, the soil was covered by a black, water-permeable fleece, elevating soil temperatures due to a reduced sun light albedo and preventing the growth of weeds. Sets of litterbags were deployed and retrieved and soil parameters measured as described for the reciprocal litter transplant experiment. Climate chamber incubation To assess the effect of the level of urbanization on early stages of litter decomposition via litter chemistry alterations, a climate chamber incubation was carried out. Leaf litter samples pooled by species and origin were incubated under controlled conditions in a dark climate chamber at 21°C for 14 days in May 2012 and respired CO2 measured daily. Soil was taken from the experimental field and sieved at 2 mm, its water content was increased to 60% water holding capacity, and it was allowed to rest for 6 days at 6°C. Adapting a method by Isermeyer (1952), 1.00 g of pooled litter was mixed with 50 g wet soil in 1-L jars, with four replicates per litter pool. Two types of controls were set up with four replicates each, empty jars as blanks to detect possible error and jars containing soil without leaf litter to measure basal respiration. 
To follow decomposition, CO2 respired from soil (controls) and soil with litter was trapped in 25 mL of NaOH in open plastic containers set into the airtight jars. Containers were changed every 24 h; 0.5 M BaCl2 as well as a few drops of indicator (phenolphthalein) were added to the NaOH, which was then titrated with HCl until neutralization. Preliminary experiments had shown rapid decomposition rates during the first days of the experiment. Thus, 0.1 M NaOH was used during the first 7 days and 0.05 M NaOH thereafter, with the BaCl2 (10 and 5 mL, respectively) and the HCl (0.1 and 0.05 M, respectively) adapted accordingly. Oxygen was resupplied by opening the jars to change the NaOH containers. Empty jars (blank controls) showed that the error from CO2 entering the jars during this step was negligible (less than 1 mg CO2 per day and opening, data not shown). For the climate chamber incubation, the decomposition rate for the 14 daily measurements was calculated with an equation adapted from Alef and Nannipieri (1995):
$$D_r=\frac{(V_0-V)\cdot 2.2}{dw\cdot t}\qquad (1)$$
In Equation 1, $D_r$ is the decomposition rate [mg CO2 g$^{-1}$ oven-dry weight of soil amended with litter h$^{-1}$], $V_0$ the amount of HCl in milliliters used for titration of the soil-only control, $V$ the amount of HCl in milliliters used for the jar containing soil with litter, $dw$ the oven-dry weight of soil amended with litter in grams, $t$ the incubation time in hours, and 2.2 the conversion factor (1 ml of 0.1 M NaOH corresponds to 2.2 mg CO2). A conversion factor of 1.1 was used for the time of the experiment in which 0.05 M NaOH was used instead of 0.1 M NaOH. Additionally, the amount of CO2 released during the 14-day incubation was calculated with the equation:
$$D_t=\sum (V_0-V)\cdot 2.2\qquad (2)$$
In Equation 2, $D_t$ is the total amount of released CO2 after 14 days [mg CO2 g$^{-1}$ oven-dry weight of soil amended with litter], $V_0$ the amount of HCl in milliliters used for titration of the soil-only control, $V$ the amount of HCl in milliliters used for the jar containing soil and litter, and 2.2 the conversion factor (1 ml of 0.1 M NaOH corresponds to 2.2 mg CO2). A conversion factor of 1.1 was used for the time of the experiment in which 0.05 M NaOH was used instead of 0.1 M NaOH. A worked numerical sketch of Equations 1 and 2 is given below. Data analyses/statistical analyses All statistical analyses were conducted in Statistica 9.1 (StatSoft Inc., Tulsa, OK, USA). Prior to ANOVAs, data was visually checked for normal distribution. For litterbags employed at urban and periurban sites, differences in mass loss between species, origin of leaf litter, and decomposition site were analyzed with Kruskal-Wallis tests, and the respective interactions with a three-way ANOVA, despite mass loss values being measured in percent. To detect interspecific differences, multiple mean rank comparisons were computed. To detect intraspecific differences with regard to origin of litter and decomposition site, Mann–Whitney tests were performed. Since Populus showed a very pronounced contrast in decomposition between litter of urban and periurban origin in the climate chamber incubation experiment (see below), a Mann–Whitney test was conducted for differences between mass losses of all litter with regard to origin excluding Populus data. Mean temperatures, water contents, pH, and electrical conductivities measured at urban and periurban decomposition sites were tested for significant differences via t-tests.
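The worked numerical sketch of Equations 1 and 2 referred to above follows here. The jar dry weight and the titration volumes are invented for illustration; note the sign convention assumed in the sketch: because respired CO2 consumes NaOH, the jar with litter needs less HCl than the soil-only control, so the positive difference is control minus sample.

```python
def co2_mg(v_control_ml, v_sample_ml, factor):
    """CO2 trapped during one 24 h interval [mg].

    v_control_ml : HCl needed to titrate the soil-only control jar
    v_sample_ml  : HCl needed for the soil + litter jar (less, if it respired more)
    factor       : 2.2 while 0.1 M NaOH/HCl was used, 1.1 for 0.05 M
    """
    return (v_control_ml - v_sample_ml) * factor

def decomposition_rate(v_control_ml, v_sample_ml, dry_weight_g, factor, hours=24.0):
    """Equation 1: D_r [mg CO2 per g oven-dry soil amended with litter per hour]."""
    return co2_mg(v_control_ml, v_sample_ml, factor) / (dry_weight_g * hours)

def total_co2(daily_controls, daily_samples, factors):
    """Equation 2: D_t, summed CO2 over the daily measurements [mg]."""
    return sum(co2_mg(c, s, f)
               for c, s, f in zip(daily_controls, daily_samples, factors))

# Illustrative 3-day snippet, 0.1 M chemistry (factor 2.2) on all three days
controls = [22.0, 21.5, 21.8]   # ml HCl for soil-only jars
samples = [14.0, 15.5, 16.4]    # ml HCl for soil + litter jars
factors = [2.2, 2.2, 2.2]
dry_weight = 40.0               # g oven-dry soil amended with litter (assumed)

print(f"day 1 rate: {decomposition_rate(controls[0], samples[0], dry_weight, 2.2):.3f} mg/(g*h)")
print(f"3-day total: {total_co2(controls, samples, factors):.1f} mg CO2")
```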
pH values were re-exponentiated before calculation of the mean concentration of H ions, which then was expressed as the negative common logarithm again. To test for correlations between mass loss in the reciprocal litter transplant to measured soil parameters and leaf litter chemical properties, linear regressions were computed. For litterbags employed at the experimental field, differences in mass loss between species and origin of leaf litter as well as inter- and intraspecific differences were analyzed accordingly with Kruskal-Wallis and Mann–Whitney tests as well as multiple mean rank comparisons, and the interactions for species and origin with a two-way ANOVA, despite mass loss values being measured in percent. Decomposition rate was visualized and subsequently tested for differences between species and origin of leaf litter with a repeated measures ANOVA. Additionally, decomposition rate during the time of the experiment with the most rapid decomposition (7 days, beginning after the first 48 h) was tested with a separate repeated measures ANOVA as well. A two-way ANOVA was computed to test for differences in the total amount of released CO2 during the 14 days of incubation. Intraspecific differences due to origin as well as interspecific differences were tested for via the Tukey HSD post hoc test. Since Populus showed a stark difference in the amount of released CO2, a two-way ANOVA excluding Populus was computed to test whether differences were solely based on this species. While origin of leaf litter and species had a significant effect on mass loss (p ≤ 0.05 and p ≤ 0.001, respectively), decomposition site did not (p ≥ 0.1) (Figure 2). Litter of urban origin decomposed faster than litter of periurban origin (62% ± 3% and 53% ± 3% mass loss, respectively) (Figure 2a). Despite the pronounced difference found in the climate chamber incubation for Populus with regard to origin (see below), when Populus data were excluded from the analysis, the difference between urban and periurban origin was still significant (61% ± 3% and 52 ± 3% mean mass loss, respectively; data not shown, p ≤ 0.05). Alnus had the highest (86% ± 3%) and Quercus the lowest (29% ± 4%) mass loss (Figure 2c). In a three-way ANOVA, no significant interactions between factors were found. Significant intraspecific differences in mass loss due to origin of litter or decomposition site were only detected in Robinia with litter originating from urban habitats decomposing faster than litter originating from periurban ones (Figure 3). Mass loss of reciprocal litter transplant. Depending on (a) origin of litter, (b) decomposition site, and (c) species. The asterisk indicates significant difference according to the Mann–Whitney test, and different letters indicate pairwise differences according to multiple mean rank comparisons of species' means; differences in n are due to loss of litterbags through disturbance. Intraspecific differences in mass loss in the reciprocal litter transplant. Depending on (a) origin of litter and (b) decomposition site. The asterisk indicates significant intraspecific difference according to the Mann–Whitney test; differences in n are due to loss of litterbags. Of the employed 300 litterbags, 16 were lost due to disturbance at urban sites (10 of one site and 6 individual litterbags). In this investigation, soils of the urban sites had a slightly higher overall mean temperature than periurban ones (about 0.3°C) and contained slightly more water (Table 1). 
Soil samples of urban sites were slightly less acidic than those of periurban sites, but this difference was not significant (Table 1). However, urban soil samples had a significantly increased electrical conductivity compared to periurban soil samples (Table 1). Table 1 Mean and SE of measured soil parameters Litter showed inter- as well as intraspecific differences in its chemical composition. As it was not possible to test for statistical significance, only few results will be highlighted. Structural carbohydrates tended to be decreased in urban compared to periurban litter samples (NDF and ADL in four of the five species, ADF in three; Table 2). Alnus and Robinia litter had a higher mean N concentration (2.46% and 1.98%, respectively) than species without N2-fixing symbionts (0.93%, Table 2). Carbon content was slightly decreased in urban compared to periurban samples in all species (Table 2). Comparisons in most parameters show ambiguous trends, e.g., P content is increased in urban Alnus litter, but decreased in urban Acer litter (Table 2). Table 2 Chemical properties of leaf litter pools Linear regression found no significant relation (p ≥ 0.1) between analyzed soil parameters and mass loss of litterbags at the respective sites, but mass loss of litter at urban and periurban sites correlated significantly with leaf litter quality parameters (Table 3). Table 3 R of linear regressions between leaf litter quality and mass loss As intended, soil at the experimental field site had a higher mean temperature (17.4°C) and contained more water (0.33 g per g dry soil), as well as was less acidic (pH 5.7) and had a lower electrical conductivity (102 μS cm−1) than soils at the urban and periurban sites. As anticipated, mass loss at the experimental field site was higher for all species than that in the reciprocal litter transplant experiment. And in accordance to results at urban and periurban decomposition sites, litter of urban origin decomposed faster than litter of periurban origin (94% ± 1% and 92% ± 1% mass loss, respectively, p ≤ 0.05) (Figure 4a). When differences between litter origins were tested for species individually, only Populus showed a significant difference in mass loss (p ≤ 0.01) (Figure 4c). When Populus was excluded from the overall analysis of mass loss depending on origin, the difference was almost significant (94% ± 1% and 92% ± 1% mass loss, respectively, p ≤ 0.1; data not shown). Species differed significantly in their mass loss (p ≤ 0.001), and post hoc tests revealed that Acer, Alnus, and Populus decomposed faster than Quercus and Robinia (98% ± <1%, 99% ± <1%, 99% ± <1% and 78% ± 3%, 91% ± 2%, respectively) (Figure 4b). Two-way ANOVA found no interaction between the factors origin of litter and species. Mass loss in the common garden litter transplant. Depending on (a) origin of leaf litter (n = 75) and (b) species (n = 30) and (c) showing intraspecific differences (n = 15). Asterisks indicate significant differences according to the Mann–Whitney tests, and different letters indicate different species' means according to multiple mean rank comparisons. Decomposition rate over the entire 14 days of the experiment differed significantly between species (p ≤ 0.001), origins of leaf litter (p ≤ 0.001), and time (p ≤ 0.001) (Table 4). Decomposition rate of most species increased during the first 48 h until it gradually decreased (Figure 5). 
If only the 7 days with the most rapid decomposition were included, decomposition rate still differed significantly between species (p ≤ 0.001), origin (p ≤ 0.001), and time (p ≤ 0.001) (Table 5). The interaction between species and origin was found to be significant, too (p ≤ 0.001) (Table 5). While urban litter tended to decompose faster, Acer and Alnus of urban origin showed a slightly reduced decomposition rate. The symbiotically N2-fixing species Robinia and Alnus had the highest decomposition rates, while Quercus had the lowest. Table 4 Repeated measures ANOVA results for rate of respired CO2 in the 14-day climate chamber experiment Mean respiration rate in the climate chamber incubation. In CO2 [μg] per g dry soil amended with litter per hour for all species of both origins (n = 4, except for Robinia periurban where n = 3). Identical soil with naturally occurring carbon and biomass was used for all species. Table 5 Repeated measures ANOVA results for rate of respired CO2 in rapid-respiring 7 days of incubation Total amount of released CO2 after 14 days of incubation differed significantly between species (p ≤ 0.001) and origin (p ≤ 0.001). The interaction between species and origin was found to be significant as well (p ≤ 0.001). Robinia and Alnus showed the highest amount of CO2 and Quercus the lowest (Figure 6b). Including all species, urban litter released more CO2 than periurban litter (Figure 6a). However, Acer and Alnus released slightly less CO2 from urban than from their respective periurban litter (Figure 6c). In contrast, Quercus and Robinia released more CO2 from litter of urban origin and Populus litter of urban origin released almost 40% more than litter of rural origin (Figure 6c). Differences in total amount of respired CO 2 in the 14-day climate chamber incubation. With regard to (a) origin of leaf litter, (b) species, and (c) both (n = 4, except for Robinia periurban where n = 3). Asterisks indicate significant differences due to origin of litter, and different letters indicate different species' means according to post hoc tests. Despite the pronounced difference found in the climate chamber incubation for Populus with regard to origin, when Populus was excluded from the analysis, species still differed significantly in the amount of released CO2 (p ≤ 0.001). However, while urban litter over the four remaining species still released more CO2 than periurban litter, this difference was only almost significant (p ≤ 0.1) with a significant interaction between species and origin (p ≤ 0.05). The study revealed a faster decomposition of litter originating from urban habitats compared to litter originating from periurban habitats. The level of urbanization of the decomposition site did not show significant effects. Species showed differences in their decomposition, but the general trend of accelerated decomposition of litter originating from urban habitats was observed over all species. Due to methodological limitations, we cannot safely say that the accelerated decomposition of litter originating in urban habitats is maintained in the late stages of decomposition. Origin of leaf litter Leaf litter origin had a significant influence on decomposition, with litter grown on trees in urban areas exhibiting an accelerated mass loss. Mass loss in litterbags at urban and periurban sites was significantly and negatively correlated to ADL:N ratio, phosphorus content, and chrome content of initial litter. Nikula et al. 
(2010) found similar results in Helsinki and listed the higher nitrogen, phosphorus, and base cation as well as the lower lignin, total phenolics, C:N, and lignin:N contents and ratios of litter originating from urban habitats as possible reasons for its accelerated decay. Contrary, a decelerated decomposition of litter originating in urban habitats compared to litter originating in rural habitats has been observed in New York City (Carreiro et al. 1999; Pouyat and Carreiro 2003). Here, lignin and NDF content of initial litter were found to be negatively correlated with decay rates (Carreiro et al. 1999). A decelerated decomposition of litter originating from urban habitats was also observed in Naples and London and attributed to increased heavy metal deposition on leaves of urban origin (Cotrufo et al. 1995; Post and Beeby 1996). Results suggest that leaf litter quality plays a crucial role in the accelerated decomposition of litter originating from urban habitats, and litter quality has been postulated as a dominant factor of decomposition under favorable weather conditions (Coûteaux et al. 1995). Alterations in leaf litter due to the level of urbanization seem to either enhance the nutrient supply to decomposer communities or nutrients are easier accessible due to lower lignin contents (Berg and McClaugherty 2008). Surprisingly, though a correlation of decomposition to litter quality parameters was found in our investigations, these parameters are not altered consistently across the analyzed species. While we found a trend towards decreased structural carbohydrates (NDF, ADF, ADL) in litter originating from urban habitats, neither of the parameters we found to be negatively correlated to mass loss (ADL:N, P, Cr) consistently showed lower values in litter originating from urban habitats (Table 2). This underlines the complex nature of decomposition processes, where individual parameters' predictive values show interspecific differences as well as differences in the various stages of decomposition (Prescott 2010). Further, it has to be kept in mind that the fact of correlation not implying causality holds true also for the relation of litter quality parameters and decomposition (Prescott 2010). Contrary to results from urban sites in Naples (Cotrufo et al. 1995) and London (Post and Beeby 1996), polluted with high concentrations of heavy metals, the difference in airborne heavy metal loads between urban and periurban tree locations in the city of Hamburg was not pronounced enough to delay decomposition of litter originating from urban habitats. Influence of decomposition site No significant difference in mass loss between periurban and urban decomposition sites and no significant correlation of mass loss to measured environmental parameters were found. Previous studies that found an acceleration of decomposition in urban areas attributed their findings to higher temperatures (Pouyat et al. 1997; Pouyat and Carreiro 2003; Nikula et al. 2010), introduced, non-native earthworms with no earthworms present at the respective rural sites (Pouyat et al. 1997; Pouyat and Carreiro 2003), or increased N deposition (Nikula et al. 2010). Studies that found a deceleration of decomposition at urban sites linked it to increased heavy metal pollution (Cotrufo et al. 1995) or decreased soil moisture (Pavao-Zuckerman and Coleman 2005). 
In our study, soil temperature at urban sites was only slightly higher (mean 0.3°C) than at the periurban sites, while a 2°C to 3°C higher temperature at urban locations was cited in New York City litterbag studies (Pouyat et al. 1997; Pouyat and Carreiro 2003). Assuming a $Q_{10}$ of two (Raich and Schlesinger 1992), i.e., a temperature response factor of $Q_{10}^{\Delta T/10}$ for a temperature difference $\Delta T$, the temperature difference in the New York City litterbag studies explained a 20% to 30% increased decomposition at the urban sites (Pouyat et al. 1997; Pouyat and Carreiro 2003), while the temperature difference in the present study would merely accelerate decomposition by 3%. In contrast to findings from Asheville, where urban soils contained less water, leading to a decelerated decomposition (Pavao-Zuckerman and Coleman 2005), in our study no limitation of decomposition through water shortage at urban or periurban sites seems likely. The higher electrical conductivity (EC) at urban sites indicates higher soil salt concentrations and likely stems from higher inputs of de-icing salts in urban areas. Despite being increased, EC values at urban sites were still low, but a direct influence of salt concentration on decomposer organisms cannot be ruled out (Czerniawska-Kusza et al. 2004). A high abundance of non-native earthworms was found in New York City (Steinberg et al. 1997), and this was considered a main reason for the accelerated decomposition at urban compared to rural sites. Since earthworms are not native to the New York City area, rural sites were without earthworms as they lacked introduced species. In contrast, juvenile worms of introduced species at the urban sites were able to access litter through the litterbags' mesh (Pouyat et al. 1997; Pouyat and Carreiro 2003). In Hamburg, introduction of non-native species does not seem to be as pronounced and native earthworms are common in urban as well as in rural areas. Currently, a relevant influence of non-native earthworms on decomposition processes in the city of Hamburg does not seem likely. Alterations in environmental and soil parameters between urban and periurban sites of this study were apparently not strong enough to significantly affect decomposition processes. The difference between our results and those of previous studies most likely stems from the very short distance of about 10 to 17 km employed (compared to, e.g., 130 km between urban and rural sites in New York City, Pouyat and Carreiro 2003). The rapid decomposition in the common garden experiment with its drastically different environment (higher temperature, higher humidity, lower acidity, and lower electrical conductivity than at the urban and periurban decomposition sites) highlights that more pronounced environmental alterations would significantly affect decomposition. Differences between species No pronounced difference in species' reaction to the level of urbanization was observed. Where significant intraspecific differences were detected in the reciprocal litter transplant, the common garden litter transplant, and the total amount of respired CO2 in the climate chamber incubation experiment, litter originating from urban habitats decomposed faster than its periurban counterpart. However, the significant interaction in the climate chamber incubation ANOVA between species and origin of litter points towards differences in species' reaction to the level of urbanization, though these are not significant when species are analyzed individually.
In the climate chamber incubation experiment, the litter of the symbiotically N2-fixing species Alnus and Robinia released the most CO2, while Quercus litter, with its high amount of lignin, released the least. Considering the litterbag experiments as advanced stages of decomposition, mass loss of Robinia litter seems to decelerate relative to the other species, while mass loss of Alnus litter increases similarly to that of the species that are not symbiotically N2-fixing, compared to early-stage decomposition. The high N content of Alnus and Robinia litter was expected to initially accelerate decomposition but, as decomposition progresses, to act as a retarding factor (Berg and McClaugherty 2008). A possible explanation for the deviating result of high mass loss of Alnus litter in the common garden experiment could be that the chosen mesh size of the litterbags is only suitable for early-stage decomposition. Potentially, as decomposition progresses, litter parts that are not truly decomposed (i.e., respired) but solely fragmented are lost through the mesh and render the method unreliable. This would imply that results for late-stage decomposition derived from litterbag experiments might be less reliable for any of the analyzed species. This is worth considering especially for extremely small, partially decomposed litter fragments and humic substances. Individual species' leaf litter decomposition can be affected differently by different cities. In contrast to results from New York City (Carreiro et al. 1999; Pouyat and Carreiro 2003), Q. rubra litter decomposed faster in the climate chamber incubation when originating from urban locations. While this difference was only significant in the climate chamber incubation, the same trend was observed in the litterbag experiments. P. tremula leaf litter decomposition was affected in similar ways in Helsinki and Hamburg, with litter originating from urban habitats showing a faster decomposition (Nikula et al. 2010). The different methods employed yielded similar results, with litter originating from urban habitats decomposing faster. However, all methods have specific advantages and disadvantages. While litterbag experiments are time-consuming and decomposition site setups in urban areas have a high potential for disturbance, they yield valuable insights into decomposition processes in a near-natural environment. Their biggest advantage is the possibility of examining the effects of the level of urbanization at the decomposition site. However, as seen in the common garden experiment, loss of only partly decomposed material from the litterbags through the mesh can become a problem as decomposition progresses. If only the effect of litter origin is to be studied, climate chamber incubations are a fast alternative. We were able to detect an effect of the level of urbanization on decomposition processes. This effect is exerted through alterations in leaf litter quality and not through alterations at the site of decomposition. Other studies have also determined an effect of the level of urbanization on decomposition processes through leaf litter alterations. However, our results partly contradict previous findings, where litter of urban origin was often found to exhibit a decelerated decomposition. Further, previous studies often observed an effect of the level of urbanization at the site of decomposition. The effect the level of urbanization has on decomposition processes can therefore not yet be characterized globally.
By including five species in our study, we were able to differentiate the effects the level of urbanization has on decomposition in more detail than previous studies. While we found some interspecific differences, the overall effect of the level of urbanization on decomposition processes was similar across species. Due to the partial contradiction with previous findings, we conclude that the effects of the level of urbanization are complex and vary between cities. Further studies are needed to pinpoint the underlying mechanisms responsible for the differing observations in various cities.

References

Alef K, Nannipieri P (1995) Methods in applied soil microbiology and biochemistry. Academic Press, London Alfani A, Baldantoni D, Maisto G, Bartoli G, de Virzo SA (2000) Temporal and spatial variation in C, N, S and trace element contents in the leaves of Quercus ilex within the urban area of Naples. Environ Pollut 109(1):119–129 ANKOM (2014a) ANKOM 2000 fiber analyzer- Operator's Manual., https://ankom.com/media/documents/A2000series_Manual_RevE_083011.pdf. Accessed 09. Dec 2014 ANKOM (2014b) Method for Determining Acid Detergent Lignin in Beakers., https://ankom.com/media/documents/ADL_beakers.pdf. Accessed 09. Dec 2014 Berg B, McClaugherty C (2008) Plant litter. Decomposition, humus formation, carbon sequestration, 2nd edn. Springer, Berlin Carreiro M, Howe K, Parkhurst D, Pouyat R (1999) Variation in quality and decomposability of red oak leaf litter along an urban–rural gradient. Biol Fertil Soils 30(3):258–268 Carreiro M, Pouyat R, Tripler C, Zhu W (2009) Carbon and nitrogen cycling in soils of remnant forests along urban–rural gradients: case studies in the New York metropolitan area and Louisville, Kentucky. In: McDonnell M, Hahs A, Breuste J (eds) Ecology of cities and towns: a comparative approach. Cambridge University Press, Cambridge, pp 308–328 Carreras HA, Cañas MS, Pignata ML (1996) Differences in responses to urban air pollutants by Ligustrum lucidum Ait. and Ligustrum lucidum Ait. f. tricolor (Rehd.) Rehd. Environ Pollut 93(2):211–218 Churkina G, Brown D, Keoleian G (2010) Carbon stored in human settlements: the conterminous United States. Glob Chang Biol 16(1):135–143, doi:10.1111/j.1365-2486.2009.02002.x Cotrufo MF, de Santo AV, Alfani A, Bartoli G, de Cristofaro A (1995) Effects of urban heavy metal pollution on organic matter decomposition in Quercus ilex L. woods. Environ Pollut 89(1):81–87 Coûteaux M, Bottner P, Berg B (1995) Litter decomposition, climate and litter quality. Tree 10(2):63–66 Czerniawska-Kusza I, Kusza G, Dużyński M (2004) Effect of deicing salts on urban soils and health status of roadside trees in the Opole region. Environ Toxicol 19(4):296–301 Fenn ME, Dunn PH (1989) Litter decomposition across an air-pollution gradient in the San Bernardino Mountains. Soil Sci Soc Am J 53(5):1560–1567 Freie und Hansestadt Hamburg (2012) Behörde für Gesundheit und Verbraucherschutz- Institut für Hygiene und Umwelt- Hamburger Luftmessnetz (HALm)- Ozonwarndienst- Hamburger Luftmessnetz- Ergebnisse 2011 Grimm NB, Faeth SH, Golubiewski NE, Redman CL, Wu J, Bai X, Briggs JM (2008) Global change and the ecology of cities. Science 319(5864):756–760, doi:10.1126/science.1150195 Hoffmann P, Schlünzen K (2010) Das Hamburger Klima. In: Poppendieck H, Bertram H, Brandt I, Engelschall B, Prondzinski J (eds) Der Hamburger Pflanzenatlas. Von A bis Z, 1st edn. Dölling und Galitz, München Isermeyer H (1952) Eine einfache Methode zur Bestimmung der Bodenatmung und der Karbonate im Boden.
Zeitschrift für Pflanzenernährung, Düngung, Bodenkunde 56(1–3):26–38 Kagata H, Ohgushi T (2013) Home-field advantage in decomposition of leaf litter and insect frass. Popul Ecol 55(1):69–76, doi:10.1007/s10144-012-0342-5 McDonnell M, Pickett S (1990) Ecosystem structure and function along urban–rural gradients. An unexploited opportunity for ecology. Ecology 71(4):1232–1237 Nikula S, Vapaavuori E, Manninen S (2010) Urbanization-related changes in European aspen (Populus tremula L.): leaf traits and litter decomposition. Environ Pollut 158(6):2132–2142, doi:10.1016/j.envpol.2010.02.025 Oke T (1973) City size and the urban heat island. Atmos Environ 7(8):769–779 Pavao-Zuckerman MA, Coleman DC (2005) Decomposition of chestnut oak (Quercus prinus) leaves and nitrogen mineralization in an urban environment. Biol Fertil Soils 41(5):343–349, doi:10.1007/s00374-005-0841-z Post R, Beeby A (1996) Activity of the microbial decomposer community in metal-contaminated roadside soils. J Appl Ecol 33(4):703–709 Potere D, Schneider A (2007) A critical look at representations of urban areas in global maps. GeoJournal 69(1–2):55–80, doi:10.1007/s10708-007-9102-z Pouyat RV, Carreiro MM (2003) Controls on mass loss and nitrogen dynamics of oak leaf litter along an urban–rural land-use gradient. Oecologia 135:288–298 Pouyat RV, McDonnell MJ, Pickett STA (1997) Litter decomposition and nitrogen mineralization in oak stands along an urban-to-rural land use gradient. Urban Ecosystems 1(2):117–131, doi:10.1023/A:1018567326093 Prescott CE (2010) Litter decomposition: what controls it and how can we alter it to sequester more carbon in forest soils? Biogeochemistry 101(1–3):133–149 Raciti S, Hutyra L, Rao P, Finzi A (2012) Inconsistent definitions of "urban" result in different conclusions about the size of urban carbon and nitrogen stocks. Ecol Appl 22(3):1015–1035 Raich J, Schlesinger W (1992) The global carbon dioxide flux in soil respiration and its relationship to vegetation and climate. Tellus 44(2):81–99 Schlünzen KH, Hoffmann P, Rosenhagen G, Riecke W (2010) Long-term changes and regional differences in temperature and precipitation in the metropolitan area of Hamburg. Int J Climatol 30(8):1121–1136, doi:10.1002/joc.1968 Statistisches Amt für Hamburg und Schleswig-Holstein (2012) Hamburger Stadtteilprofile., http://www.statistik-nord.de/fileadmin/download/Stadtteil_Profile_html5/atlas.html. Accessed 7 Oct 2013 Statistisches Amt für Hamburg und Schleswig-Holstein (2013) Statistisches Jahrbuch 2012/2013. Hamburg Steinberg DA, Pouyat RV, Parmelee RW, Groffman PM (1997) Earthworm abundance and nitrogen mineralization rates along an urban–rural land use gradient. Soil Biol Biochem 29(3–4):427–430 Swift MJ, Heal OW, Anderson JM (1979) Decomposition in terrestrial ecosystems. Studies in ecology, vol 5. University of California Press, Berkeley UN (2011) World Urbanization Prospects The 2011 Revision Highlights., http://de.slideshare.net/undesa/wup2011-highlights. Accessed 09.Dec 2014 Wiesner S, Eschenbach A, Ament F (2014) Urban air temperature anomalies and their relation to soil moisture observed in the city of Hamburg. Meteorol Z 23(2):143–15 Wittig R, Sukopp H, Klausnitzer B (1998) Die ökologische Gliederung der Stadt. In: Sukopp H, Wittig R (eds) Stadtökologie. Ein Fachbuch für Studium und Praxis, 2nd edn. G. Fischer, Stuttgart, pp 316–372 Zirkle G, LAL R, Augustin B, Follett R (2012) Modeling carbon sequestration in the U.S. residential landscape. 
In: Lal R, Augustin B (eds) Carbon sequestration in urban ecosystems. Springer Science + Business Media B.V, Dordrecht, pp 265–276

We thank the Bauer-Stiftung in the Stifterverband für die Deutsche Wissenschaft and the Deutsches Stiftungszentrum for funding. For data provision and support, we would like to thank the Office for Urban Planning and the Environment of Hamburg. We thank the graduate school SICSS for the assistance of J. Dorendorf. Valuable support was received from the staff at the Biocenter Klein Flottbek and the Institute for Soil Science at the University of Hamburg. Further thanks go to the staff at the Biocenter Grindel for their help in the analysis of structural carbohydrates. Notably, we thank Lars Kutzbach. We thank the Thünen Institute of Wood Research, notably Thomas Schwarz, for the provision of ICP-OES and support.
Applied Plant Ecology, Biocenter Klein Flottbek, Universität Hamburg, Ohnhorstraße 18, 22609, Hamburg, Germany: Jens Dorendorf, Anja Wilken & Kai Jensen
Institute for Soil Science, CEN, Universität Hamburg, Allende-Platz 2, 20146, Hamburg, Germany: Annette Eschenbach
Correspondence to Jens Dorendorf.
JD conceptualized, designed, and carried out the mentioned studies; analyzed and interpreted the data; and drafted the manuscript. AW carried out the climate chamber incubation and analyzed and interpreted the respective data. AE and KJ supported the conceptualization and design of studies as well as the analysis and interpretation of data and have revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Dorendorf, J., Wilken, A., Eschenbach, A. et al. Urban-induced changes in tree leaf litter accelerate decomposition. Ecol Process 4, 1 (2015). https://doi.org/10.1186/s13717-014-0026-5
Litterbags Urban gradients Carbon cycling Mass loss Urban forests Paired studies Urban–rural contrast Urban landscapes - from patterns to processes
Who was E. Busche?

Donald E. Knuth reports in his TAOCP: Volume 1: Fundamental Algorithms (3rd ed.), in $\S1.2.4$: Integer Functions and Elementary Number Theory: Exercise 38, that this result: $$\sum_{0 \mathop \le k \mathop < y} \left\lfloor {x + \dfrac k y}\right\rfloor = \left\lfloor {x y + \lfloor {x + 1} \rfloor (\lceil y \rceil - y) }\right\rfloor$$ was the work of E. Busche. This result apparently appeared in Crelle's v. 136, 39-57 (1909). I have found out that "E. Busche" was "Conrad Heinrich Edmund Friedrich Busche", but apart from the fact that he died fighting in WWI, I have not been able to identify any biographical details about him. Can anybody help?

mathematicians biographical-details
Prime Mover

This seems to be Edmund Busche from Hamburg (1861-1916). A quick search shows papers by him published between 1887 and 1912, many of them in J. für die Reine und Angew. Math. – njuffa
Do you have a link? I can't find that information myself. I will take your word for it, but it would be useful to have a link to whatever online resource you may have obtained this information from. – Prime Mover
I am looking for confirmation right now (after midnight here; probably going to continue my search Thursday morning). The paper you cited identifies the author as "E. Busche in Hamburg" so I simply googled using that information. A number theorist. He died in WW1, as you noted in the question. In Belgium it seems. I haven't found a date of death yet. He apparently was alive in January of 1916 and dead by late December of 1916 (I know that does not help very much :-)
Scan of the dissertation available from the Göttingen Digitization Center.
Thank you, you have already plugged some gaps in the important stuff (birth and death, year birthplace). Feel free to present what you have already dug up as an answer, and you will get the kudos.

Edmund Busche was killed on the Western Front in WW1, on May 2, 1916. This was reported by his colleague Paul Riebesell in the annual report of the German Mathematical Society: Jahresbericht der Deutschen Mathematiker-Vereinigung, Vol. 25, 1917, p. 283 (Google snippet):

E. Busche † Von P. Riebesell in Hamburg. Am 2. Mai 1916 fiel als Hauptmann der Landwehr a.D. an der Westfront Edmund Busche im Alter von 55 Jahren. Der Verstorbene war seit 1888 Mitglied der Mathematischen Gesellschaft in Hamburg, seit 1891 auch Mitglied der Deutschen Mathematiker-Vereinigung. Nachdem er etwa 20 Jahre als Oberlehrer an der Hansaschule in Bergedorf tätig gewesen war, wirkte er in den letzten zehn Jahren als Professor an der Oberrealschule in Hamburg-Eimsbüttel und als Dozent [...]

From this we learn that at the time of death he held the rank of captain in the militia. Edmund Busche had been a member of the Hamburg Mathematical Society since 1888 and a member of the German Mathematical Society (founded in 1890 by Georg Cantor) since 1891. He had worked for about twenty years as a senior teacher at a high-school in the Hamburg suburb of Bergedorf (now a borough within the City of Hamburg), and thereafter for ten years as a professor at a high-school in Hamburg-Eimsbüttel.
The Hamburg Mathematical Society honored Edmund Busche in volume 5, issue 6 of their journal Mitteilungen der Mathematischen Gesellschaft in Hamburg. A high-quality scan of the issue with a picture of Busche is provided by the HathiTrust Digital Library. The issue starts out with an article by E. Hoppe in Hamburg "Edmund Busche zum Gedächtnis" (In Memory of Edmund Busche) which provides a biographical sketch of Busche's life. He was born on May 2, 1861 in Neuland in the district of Kehdingen which is on the Elbe river north of Hamburg. He was the second son of a minor official. Soon after his birth the family moved to the small town of Drochtersen where his father died in 1868. His mother then moved the family to Walsrode, where they lived for three years, before moving to Uelzen in 1871. In 1875 they moved to Hanover where Busche completed high-school. He then attended the University of Göttingen to study mathematics. After only four years he completed his dissertation under Ernst Schering, finishing his doctoral studies on February 27, 1883. A few days later he passed the teaching exam, allowing him to work as "Oberlehrer" (senior teacher). He then served in the military, where he ultimately achieved the rank of lieutenant. Due to a glut of teachers he initially had trouble finding work and had to work as a private tutor for a while before finding permanent employment with the Hansaschule in Bergedorf in 1886. He got married in 1901 and had two children, a girl and a boy. According to genealogical information I found on the internet for whose accuracy I cannot vouch, his wife Ottilie Fastenau was born in 1876, his daughter Friederike in 1902 and the son Reinhard in 1904. The next article in the commemorative issue is by H. v. Mangoldt in Danzig, "Edmund Busches wissenschaftliches Lebenswerk" (The Scientific Œuvre of Edmund Busche). Including his Ph.D thesis, it lists thirty-five publications by Busche. The author points out that it is clear from the topics of the publications that Busche's interests lay entirely within pure mathematics, with a focus on number theory and projective geometry. He mentions that since Busche gave a talk on the theory of relativity at the Hamburg Mathematical Society in 1911 he must have also had a passing interest in mathematical physics. The next article is by P. Riebesell in Hamburg, "Eine Verallgemeinerung des Pascalschen Satzes für beliebige Sechsecke nach E. Busche" (A Generalization of Pascal's Theorem for Arbitrary Hexagons, based on work by E. Busche), in which the author attempts to complete Busche's final, unfinished, work. In the introductory paragraph he mentions that Busche served on the front in Belgium. He last saw him when he was furloughed from the front in January of 1916, at which time Busche asked him to do some background research on this topic. Busche's 1883 Ph.D. thesis is entitled: "Ueber eine Beweismethode in der Zahlentheorie und einige Anwendungen derselben, insbesondere auf das Reciprocitätsgesetz in der Theorie der quadratischen Reste" (On a method of proof in number theory and some applications of the same, in particular to the reciprocity law in the theory of quadratic residues) and is dedicated to his mother. A high-quality scan is available from the Göttingen Digitization Center. At the end of the dissertation there is a brief biographical sketch that confirms information from other sources: Busche was born on May 2, 1861 in Neuland in the district of Kehdingen. His mother's name was Friederike. 
In 1879 he finished high-school in Hanover. From Easter 1879 through Easter 1883 he studied mathematics in Göttingen. On March 3, 1883 he passed the teaching exam and subsequently departed for mandatory military service to Berlin. He thanks his Ph.D. advisor, professor E. Schering, for much valuable advice. An annual report of the Hansaschule from 1889 and a festschrift published by the school in 1908 confirm some of the biographical details already mentioned above. – njuffa

So just seriously bad luck to have been killed in action on his 55th birthday?
@Prime Mover It is quite odd, but I have no further information on that other than that one source refers to him being killed by a bullet. It is not clear whether that is to be taken literally or figuratively, though. One could speculate. Maybe his officer buddies threw him a little birthday party and alcohol was consumed. They got a tad careless as a result and he got picked off by a sniper on the way home?
... or they were playing Russian Roulette after a series of ridiculous drinking games? :-) Definitely room for a bit of creative speculation for anyone writing a novel about him.
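Coming back to the identity that prompted the question: it is easy to check numerically. The sketch below is my own, not part of the thread, and of course not a proof; it tests the identity with exact rational arithmetic, assuming y > 0.

```python
# Numerical check of the identity attributed to Busche, for real x and real y > 0:
#   sum_{0 <= k < y} floor(x + k/y) = floor(x*y + floor(x+1) * (ceil(y) - y)).
# Exact rational arithmetic avoids floating-point trouble near integer boundaries.
from fractions import Fraction
from math import ceil, floor
import random

def lhs(x: Fraction, y: Fraction) -> int:
    # k runs over the integers 0, 1, ..., ceil(y) - 1, i.e. all integers 0 <= k < y.
    return sum(floor(x + Fraction(k) / y) for k in range(ceil(y)))

def rhs(x: Fraction, y: Fraction) -> int:
    return floor(x * y + floor(x + 1) * (ceil(y) - y))

if __name__ == "__main__":
    random.seed(1)
    for _ in range(10_000):
        x = Fraction(random.randint(-500, 500), random.randint(1, 60))
        y = Fraction(random.randint(1, 500), random.randint(1, 60))
        assert lhs(x, y) == rhs(x, y), (x, y)
    print("Identity held for 10,000 random rational (x, y) pairs with y > 0.")
```

Fractions are used so that the floor and ceiling functions are evaluated exactly even when x + k/y lands on an integer.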
Gamma convergence and asymptotic behavior for eigenvalues of nonlocal problems
Julián Fernández Bonder 1, Analía Silva 2 and Juan F. Spedaletti 2
1. Instituto de Matemática Luis A. Santaló (IMAS), CONICET, Departamento de Matemática, FCEN - Universidad de Buenos Aires, Ciudad Universitaria, Pabellón I, C1428EGA, Av. Cantilo s/n, Buenos Aires, Argentina
2. Instituto de Matemática Aplicada San Luis (IMASL), Ejército de los Andes 950, D5700HHW, San Luis, Argentina
* Corresponding author: J. Fernández Bonder
Received January 2020 Revised September 2020 Published October 2020
In this paper we analyze the asymptotic behavior of several fractional eigenvalue problems by means of Gamma-convergence methods. This method allows us to treat different eigenvalue problems under a unified framework. We are able to recover some known results for the behavior of the eigenvalues of the $p$-fractional laplacian when the fractional parameter $s$ goes to 1, and to extend some known results for the behavior of the same eigenvalue problem when $p$ goes to $\infty$. Finally we analyze other eigenvalue problems not previously covered in the literature.
Keywords: Fractional eigenvalues, stability of nonlinear eigenvalues, fractional p-laplacian problems.
Mathematics Subject Classification: Primary: 35P30, 35J92; Secondary: 49R05.
Citation: Julián Fernández Bonder, Analía Silva, Juan F. Spedaletti. Gamma convergence and asymptotic behavior for eigenvalues of nonlocal problems. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2020355
Handbook of Uncertainty Quantification Handbook of Uncertainty Quantification pp 69-127 | Cite as Multi-response Approach to Improving Identifiability in Model Calibration Zhen Jiang Paul D. Arendt Daniel W. Apley Reference work entry In physics-based engineering modeling, two primary sources of model uncertainty that account for the differences between computer models and physical experiments are parameter uncertainty and model discrepancy. One of the main challenges in model updating results from the difficulty in distinguishing between the effects of calibration parameters versus model discrepancy. In this chapter, this identifiability problem is illustrated with several examples that explain the mechanisms behind it and that attempt to shed light on when a system may or may not be identifiable. For situations in which identifiability cannot be achieved using only a single response, an approach is developed to improve identifiability by using multiple responses that share a mutual dependence on the calibration parameters. Furthermore, prior to conducting physical experiments but after conducting computer simulations, in order to address the issue of how to select the most appropriate set of responses to measure experimentally to best enhance identifiability, a preposterior analysis approach is presented to predict the degree of identifiability that will result from using different sets of responses to measure experimentally. To handle the computational challenges of the preposterior analysis, we also present a surrogate preposterior analysis based on the Fisher information of the calibration parameters. Parameter uncertainty Model discrepancy Experimental uncertainty Calibration Bias correction (Non)identifiability Identifiability Model uncertainty quantification Calibration parameters Discrepancy function Gaussian process Modular Bayesian approach Hyperparameters Simply supported beam Non-informative prior Multi-response Gaussian process Multi-response modular Bayesian approach Spatial correlation Non-spatial covariance Preposterior covariance Preposterior analysis Fixed-θ preposterior analysis Surrogate preposterior analysis Observed Fisher information Appendix A: Estimates of the Hyperparameters for the Computer Model MRGP To obtain the MLEs of the hyperparameters for the computer model MRGP model, the multivariate normal likelihood function is first constructed as: $$\displaystyle{ \begin{array}{ll} &p(vec(\mathbf{Y}^{m})\vert \mathbf{B}^{m},\boldsymbol{\Sigma }^{m},\boldsymbol{\upomega }^{m}) = (2\pi )^{-qN_{m}/2}\left \vert \Sigma ^{m}\right \vert ^{-N_{m}/2}\left \vert \mathbf{R}^{m}\right \vert ^{-q/2} \\ & \quad \quad \quad \quad \times \exp \left \{-\frac{1} {2}vec(\mathbf{Y}^{m} -\mathbf{H}^{m}\mathbf{B}^{m})^{T}\left (\Sigma ^{m} \otimes \mathbf{R}^{m}\right )^{-1}vec(\mathbf{Y}^{m} -\mathbf{H}^{m}\mathbf{B}^{m})\right \},\end{array} }$$ where vec(⋅ ) is the vectorization of the matrix (stacking of the columns), ⊗ denotes the Kronecker product, R m is a N m × N m correlation matrix whose ith-row, jth-column entry is \(R^{m}((\mathbf{x}_{i}^{m},\boldsymbol{\uptheta }_{i}^{m}),(\mathbf{x}_{j}^{m},\boldsymbol{\uptheta }_{j}^{m}))\), and \(\mathbf{H}^{m} = [\mathbf{h}^{m}(\mathbf{x}_{1}^{m},\boldsymbol{\uptheta }_{1}^{m})^{T},\,\,\ldots,\mathbf{h}^{m}(\mathbf{x}_{N_{m}}^{m},\boldsymbol{\uptheta }_{N_{m}}^{m})^{T}]^{T}\). Taking the log of Eq. 
(4.18) yields: $$\displaystyle{ \begin{array}{l} \ln (p(vec(\mathbf{Y}^{m})\vert \mathbf{B}^{m},\boldsymbol{\Sigma }^{m},\boldsymbol{\upomega }^{m})) = -\frac{qN_{m}} {2} \ln (2\pi ) -\frac{N_{m}} {2} \ln \left (\left \vert \Sigma ^{m}\right \vert \right ) -\frac{q} {2}\ln \left (\left \vert \mathbf{R}^{m}\right \vert \right ) \\ \qquad \qquad \qquad \qquad -\frac{1} {2}vec(\mathbf{Y}^{m} -\mathbf{H}^{m}\mathbf{B}^{m})^{T}\left (\Sigma ^{m} \otimes \mathbf{R}^{m}\right )^{-1}vec(\mathbf{Y}^{m} -\mathbf{H}^{m}\mathbf{B}^{m}). \end{array} }$$ The MLE of B m is found by setting the derivative of Eq. (4.19) with respect to B m equal to zero, which gives: $$\displaystyle{ \hat{\mathbf{B}} ^{m} = [(\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{H}^{m}]^{-1}(\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{Y}^{m}. }$$ The MLE of \(\boldsymbol{\Sigma }^{m}\) is found using result 4.10 of Ref. [52], which yields: $$\displaystyle{ \boldsymbol{\hat{\Sigma } }^{m} = \frac{1} {N_{m}}(\mathbf{Y}^{m} -\mathbf{H}^{m}\hat{\mathbf{B}} ^{m})^{T}(\mathbf{R}^{m})^{-1}(\mathbf{Y}^{m} -\mathbf{H}^{m}\hat{\mathbf{B}} ^{m}). }$$ Finally, the MLE of \(\boldsymbol{\upomega }^{m}\), denoted by \(\hat{\boldsymbol{\upomega }}^{m}\), is found by numerically maximizing Eq. (4.19) after plugging in the MLEs of B m and \(\boldsymbol{\Sigma }^{m}\). Appendix B: Posterior Distributions of the Computer Responses After observing Y m , the posterior of the computer response \(y_{i}^{m}(\mathbf{x},\boldsymbol{\uptheta })\) given Y m (and given \(\boldsymbol{\upomega }^{m}\) and \(\boldsymbol{\Sigma }^{m}\) and assuming a non-informative prior for B m ) is Gaussian with mean and covariance: $$\displaystyle{ \mathbb{E}[\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta })\vert \mathbf{Y}^{m},\boldsymbol{\upphi }^{m}] = \mathbf{h}^{m}(\mathbf{x},\boldsymbol{\uptheta })\hat{\mathbf{B}} ^{m} + \mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })^{T}(\mathbf{R}^{m})^{-1}(\mathbf{Y}^{m} -\mathbf{H}^{m}\hat{\mathbf{B}} ^{m}) }$$ $$\displaystyle{ \begin{array}{ll} &\mbox{ Cov}[\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta })),\mathbf{y}^{m}(\mathbf{x}^{{\prime}},\boldsymbol{\uptheta }')\vert \mathbf{Y}^{m},\boldsymbol{\upphi }^{m}] = \boldsymbol{\Sigma }^{m}\left \{R^{m}((\mathbf{x},\boldsymbol{\uptheta }),(\mathbf{x}^{{\prime}},\boldsymbol{\uptheta }')\right.) \\ &\quad -\mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })^{T}(\mathbf{R}^{m})^{-1}\mathbf{r}^{m}(\mathbf{x}^{{\prime}},\boldsymbol{\uptheta }') + [\mathbf{h}^{m}(\mathbf{x},\boldsymbol{\uptheta })^{T} - (\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })]^{T} \\ &\quad \times \left.[(\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{H}^{m}]^{-1}[\mathbf{h}^{m}(\mathbf{x},\boldsymbol{\uptheta })^{T} - (\mathbf{H}^{m})^{T}(\mathbf{R}^{m})^{-1}\mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })]\right \}\end{array} }$$ where \(\mathbf{r}^{m}(\mathbf{x},\boldsymbol{\uptheta })\) is a N m × 1 vector whose ith element is \(R^{m}((\mathbf{x}_{i}^{m},\boldsymbol{\uptheta }_{i}^{m}),(\mathbf{x},\boldsymbol{\uptheta }))\). Using an empirical Bayes approach, the MLEs of the hyperparameters from Appendix A are plugged into Eqs. (4.22) and (4.23) to calculate the posterior distribution of the computer responses. Notice that Eqs. (4.22) and (4.23) are analogous to the single-response GP model results. 
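A minimal NumPy/SciPy sketch of the Module 1 computations above (my own illustration, not the authors' implementation): it assumes a Gaussian correlation function and constant regression functions h^m = 1, evaluates the closed-form MLEs of Eqs. (4.20)-(4.21) for fixed roughness parameters ω, maximizes the profiled log-likelihood of Eq. (4.19) numerically over ω, and then evaluates the plug-in posterior mean of Eq. (4.22). All function names and the toy data are mine.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_corr(A, B, omega):
    """Gaussian correlation: R_ij = exp(-sum_k omega_k * (A_ik - B_jk)^2)."""
    diff2 = (A[:, None, :] - B[None, :, :]) ** 2             # (nA, nB, d)
    return np.exp(-np.tensordot(diff2, omega, axes=([2], [0])))

def mle_B_Sigma(X, Y, omega, nugget=1e-8):
    """Closed-form MLEs of B^m and Sigma^m for fixed omega, eqs. (4.20)-(4.21)."""
    N, q = Y.shape
    R = gaussian_corr(X, X, omega) + nugget * np.eye(N)
    H = np.ones((N, 1))                                       # constant regression functions
    B = np.linalg.solve(H.T @ np.linalg.solve(R, H), H.T @ np.linalg.solve(R, Y))
    resid = Y - H @ B
    Sigma = resid.T @ np.linalg.solve(R, resid) / N
    return B, Sigma, R

def profile_neg_loglik(log_omega, X, Y):
    """Negative profiled log-likelihood of eq. (4.19), up to an additive constant."""
    N, q = Y.shape
    _, Sigma, R = mle_B_Sigma(X, Y, np.exp(log_omega))
    return 0.5 * (N * np.linalg.slogdet(Sigma)[1] + q * np.linalg.slogdet(R)[1])

def posterior_mean(Xnew, X, Y, omega):
    """Empirical Bayes plug-in posterior mean of the responses at new inputs, eq. (4.22)."""
    B, _, R = mle_B_Sigma(X, Y, omega)
    H = np.ones((X.shape[0], 1))
    r = gaussian_corr(Xnew, X, omega)                         # cross-correlations (n_new, N)
    return np.ones((Xnew.shape[0], 1)) @ B + r @ np.linalg.solve(R, Y - H @ B)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(30, 2))                             # 30 runs of an (x, theta) design
    Y = np.column_stack([np.sin(4 * X[:, 0]) + X[:, 1],       # two correlated responses
                         np.cos(4 * X[:, 0]) - X[:, 1]])
    fit = minimize(profile_neg_loglik, x0=np.zeros(2), args=(X, Y), method="Nelder-Mead")
    omega_hat = np.exp(fit.x)
    B_hat, Sigma_hat, _ = mle_B_Sigma(X, Y, omega_hat)
    print("omega_hat =", omega_hat)
    print("Sigma_hat =\n", Sigma_hat)
    print("posterior mean at two new points:\n", posterior_mean(X[:2] * 0.5 + 0.25, X, Y, omega_hat))
```

In the chapter the MRGP uses general regression functions h^m(x, θ) rather than a constant mean; the choices here are only for brevity.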
Appendix C: Estimates of the Hyperparameters for the Discrepancy Functions MRGP To estimate the hyperparameters \(\boldsymbol{\upphi }^{\delta } =\{ \mathbf{B}^{\delta }\), \(\boldsymbol{\Sigma }^{\updelta }\), \(\boldsymbol{\upomega }^{\delta }\), \(\boldsymbol{\uplambda }\}\) of the MRGP model representing the discrepancy functions, the procedure outlined by Kennedy and O'Hagan [1] is used and modified to handle multiple responses. This procedure begins by obtaining a posterior of the experimental responses given the simulation data and the hyperparameters from Module 1, which has prior mean and covariance: $$\displaystyle{ \mathbb{E}[\mathbf{y}^{e}(\mathbf{x})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m},\boldsymbol{\uptheta } = \boldsymbol{\uptheta }^{{\ast}}] = \mathbb{E}[\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta }^{{\ast}})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m}] + \mathbf{h}^{\delta }(\mathbf{x})\mathbf{B}^{\delta }, }$$ $$\displaystyle{ \begin{array}{l} \mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x}),\mathbf{y}^{e}(\mathbf{x'})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m},\boldsymbol{\uptheta } = \boldsymbol{\uptheta }^{{\ast}}] = \boldsymbol{\Sigma }^{\delta }R^{\delta }(\mathbf{x},\mathbf{x}) + \boldsymbol{\uplambda } \\ \qquad \qquad \qquad \quad + \mbox{ Cov}[\mathbf{y}^{m}(\mathbf{x},\boldsymbol{\uptheta }^{{\ast}})),\mathbf{y}^{m}(\mathbf{x'},\boldsymbol{\uptheta }^{{\ast}}))\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m}],\end{array} }$$ where \(\hat{\boldsymbol{\upphi }}^{m}\) are the MLEs of the hyperparameters for the computer model MRGP model. Since Eqs. (4.24) and (4.25) depend on the unknown true value of \(\boldsymbol{\uptheta }^{{\ast}}\), these two equations are integrated with respect to the prior distribution of \(\boldsymbol{\uptheta }(p(\boldsymbol{\uptheta }))\) via: $$\displaystyle{ \begin{array}{ll} &\mathbb{E}[\mathbf{y}^{e}(\mathbf{x})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m}] =\int \mathbb{E}[\mathbf{y}^{e}(\mathbf{x})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m},\boldsymbol{\uptheta }]p(\boldsymbol{\uptheta })\mbox{ d}\boldsymbol{\uptheta }, \\ &\mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x}),\mathbf{y}^{e}(\mathbf{x'})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m}] =\int \mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x}),\mathbf{y}^{e}(\mathbf{x'})\vert \mathbf{Y}^{m},\hat{\boldsymbol{\upphi }}^{m},\boldsymbol{\uptheta }]p(\boldsymbol{\uptheta })\mbox{ d}\boldsymbol{\uptheta }.\end{array} }$$ Kennedy and O'Hagan [53] provide closed form solutions for Eq. (4.26) under the conditions of Gaussian correlation functions, constant regression functions, and normal prior distributions for \(\boldsymbol{\uptheta }\) (for details, refer to Section 3 of [53] and Section 4.5 of [1]). In this chapter, similar closed form solutions are used except that a uniform prior distribution is assumed for \(\boldsymbol{\uptheta }\). After observing the experimental data, Y e , one can construct a multivariate normal likelihood function with mean and variance from Eq. (4.26). The MLEs of \(\boldsymbol{\upphi }^{\delta }\) maximize this likelihood function. The MLE of B δ can found by setting the analytical derivative of this likelihood function with respect to B δ equal to zero (see Section 2 of Ref. [53]). However, there are no analytical derivatives with respect to the hyperparameters \(\boldsymbol{\Sigma }^{\delta }\), \(\boldsymbol{\upomega }^{\delta }\), and \(\boldsymbol{\lambda }\). 
Therefore, numerical optimization techniques are needed to find these MLEs. Appendix D: Posterior Distribution of the Calibration Parameters The posterior for the calibration parameters in Eq. (4.12) involves the likelihood function \(p(\mathbf{d}\vert \boldsymbol{\uptheta },\hat{\boldsymbol{\upphi }})\) and the marginal posterior distribution for the data \(p(\mathbf{d}\vert \hat{\boldsymbol{\upphi }})\). The likelihood function is multivariate normal with mean vector and covariance matrix defined as: $$\displaystyle\begin{array}{rcl} \mathbf{m}(\boldsymbol{\uptheta })& =& \mathbf{H}(\boldsymbol{\uptheta })\hat{\mathbf{B}} \\ \quad \quad \;& =& \left [\begin{array}{*{20}c} \mathbf{I}_{q} \otimes \mathbf{H}^{m} & \mathbf{0} \\ \mathbf{I}_{q} \otimes \mathbf{H}^{m}(\mathbf{X}^{e},\boldsymbol{\uptheta })&\mathbf{I}_{q} \otimes \mathbf{H}^{\delta }\\ \end{array} \right ]\left [\begin{array}{*{20}c} vec(\hat{\mathbf{B}} ^{m}) \\ vec(\hat{\mathbf{B}} ^{\delta } )\\ \end{array} \right ],{}\end{array}$$ $$\displaystyle{ \mathbf{V}(\boldsymbol{\uptheta }) = \left [\begin{array}{*{20}c} \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{R}^{m} & \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{C}^{m} \\ \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{C}^{m\;T}&\boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{R}^{m}(\mathbf{X}^{e},\boldsymbol{\uptheta }) + \boldsymbol{\hat{\Sigma } } ^{\delta }\otimes \mathbf{R}^{\delta } + \boldsymbol{\hat{\lambda }} \otimes \mathbf{I}_{N_{e}}\\ \end{array} \right ], }$$ where \(\hat{\mathbf{B}}= \left (\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{H}(\boldsymbol{\uptheta })\right )^{-1}\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{d}\), which is calculated based on the entire data set (instead of using the estimates from Modules 1 and 2 for B m and B δ ) as detailed in Section 4 of [53]. \(\mathbf{H}^{m}(\mathbf{X}^{e},\boldsymbol{\uptheta }) = [\mathbf{h}^{m}(\mathbf{x}_{1}^{e},\boldsymbol{\uptheta })^{T},\,\,\ldots,\mathbf{h}^{m}(\mathbf{x}_{N_{e}}^{m},\boldsymbol{\uptheta })^{T}]_{}^{T}\) and \(\mathbf{H}^{\delta } = [\mathbf{h}^{\delta }(\mathbf{x}_{1}^{e})^{T},\,\,\ldots,\mathbf{h}^{\delta }(\mathbf{x}_{N_{e}}^{\delta })^{T}]_{}^{T}\) denote the specified regression functions for the computer model and the discrepancy functions at the input settings X e . C m denotes the N m × N e matrix with ith-row, jth-column entries \(R^{m}((\mathbf{x}_{i}^{m},\boldsymbol{\uptheta }_{i}^{m}),(\mathbf{x}_{j}^{e},\boldsymbol{\uptheta }))\). \(\mathbf{R}^{m}(\mathbf{X}^{e},\boldsymbol{\uptheta })\) denotes the N e × N e matrix with ith-row, jth-column entries \(R^{m}((\mathbf{x}_{i}^{e},\boldsymbol{\uptheta }),(\mathbf{x}_{j}^{e},\boldsymbol{\uptheta }))\). R δ denotes the N e × N e matrix with ith-row, jth-column entries R δ (x i e , x j e ). Finally, I q and \(\mathbf{I}_{N_{e}}\) denote the q × q and N e × N e identity matrices. The marginal posterior distribution for the data \(p(\mathbf{d}\vert \hat{\boldsymbol{\upphi }})\) is: $$\displaystyle{ p(\mathbf{d}\vert \boldsymbol{\upphi }) =\int p(\mathbf{d}\vert \boldsymbol{\uptheta },\boldsymbol{\upphi })p(\boldsymbol{\uptheta })\mbox{ d}\boldsymbol{\uptheta }, }$$ which can be calculated using any numerical integration technique. In this chapter, Legendre-Gauss quadrature is used for the low-dimensional examples. Alternatively, Markov chain Monte Carlo (MCMC) could be used to sample complex posterior distributions such as those in Eq. (4.12). 
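A minimal sketch of the Module 3 posterior computation in this appendix (my own code, not the authors'): a single calibration parameter with a uniform prior is assumed, and m_fn and V_fn stand in for the mean and covariance of Eqs. (4.27)-(4.28), which in practice are built from the fitted MRGP models. The evidence integral of Eq. (4.29) is approximated with the Legendre-Gauss quadrature mentioned above.

```python
import numpy as np
from scipy.stats import multivariate_normal

def posterior_on_grid(d, m_fn, V_fn, theta_lo, theta_hi, n_nodes=40):
    """Gauss-Legendre nodes, quadrature weights, and normalized posterior density of theta."""
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    theta = 0.5 * (theta_hi - theta_lo) * nodes + 0.5 * (theta_hi + theta_lo)
    w = 0.5 * (theta_hi - theta_lo) * weights
    prior = 1.0 / (theta_hi - theta_lo)                        # uniform prior density
    lik = np.array([multivariate_normal.pdf(d, mean=m_fn(t), cov=V_fn(t)) for t in theta])
    evidence = np.sum(w * lik * prior)                         # eq. (4.29) by quadrature
    return theta, w, lik * prior / evidence                    # eq. (4.12)

if __name__ == "__main__":
    # Toy stand-ins for the data vector d and for m(theta), V(theta).
    d = np.array([1.1, 0.4, 0.8])
    m_fn = lambda t: np.array([t, t ** 2, np.sin(t)])
    V_fn = lambda t: 0.05 * np.eye(3) + 0.01 * t * np.ones((3, 3))
    theta, w, post = posterior_on_grid(d, m_fn, V_fn, 0.0, 2.0)
    print("posterior mean of theta ~", np.sum(w * theta * post))
```

For several calibration parameters or sharply peaked posteriors, the same structure carries over with a tensor-product grid or, as noted above, with MCMC sampling.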
Appendix E: Posterior Distribution of the Experimental Responses Since a MRGP model represents the experimental responses, the conditional (given \(\boldsymbol{\uptheta }\)) posterior distribution at any point x is Gaussian with mean and covariance defined as (assuming a non-informative prior on B m and B δ and using the empirical Bayes approach that treats \(\boldsymbol{\upphi }=\hat{\boldsymbol{\upphi }}\) as fixed): $$\displaystyle{ \mathbb{E}[\mathbf{y}^{e}(\mathbf{x})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}] = \mathbf{h}(\mathbf{x},\boldsymbol{\uptheta })\boldsymbol{\hat{\mathrm{B}} }+ \mathbf{t}(\mathbf{x},\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}(\mathbf{d} -\mathbf{H}(\boldsymbol{\uptheta })\boldsymbol{\hat{\mathrm{B}} } ), }$$ $$\displaystyle{ \begin{array}{ll} &\mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x})^{T},\mathbf{y}^{e}(\mathbf{x'})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}] = \boldsymbol{\hat{\Sigma } } ^{m}R^{m}((\mathbf{x},\boldsymbol{\uptheta }),(\mathbf{x'},\boldsymbol{\uptheta })) + \boldsymbol{\hat{\Sigma } } ^{\delta } R^{\delta }(\mathbf{x},\mathbf{x'}) \\ &\quad + \boldsymbol{\hat{\lambda }} -\mathbf{t}(\mathbf{x},\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{t}(\mathbf{x'},\boldsymbol{\uptheta }) + \left (\mathbf{h}(\mathbf{x},\boldsymbol{\uptheta })^{T} -\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{t}(\mathbf{x},\boldsymbol{\uptheta })\right )^{T} \\ &\quad \times \left (\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{H}(\boldsymbol{\uptheta })\right )^{-1}\left (\mathbf{h}(\mathbf{x'},\boldsymbol{\uptheta })^{T} -\mathbf{H}(\boldsymbol{\uptheta })^{T}\mathbf{V}(\boldsymbol{\uptheta })^{-1}\mathbf{t}(\mathbf{x'},\boldsymbol{\uptheta })\right ),\\ \end{array} }$$ $$\displaystyle{ \mathbf{t}(\mathbf{x},\boldsymbol{\uptheta }) = \left [\begin{array}{*{20}c} \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{R}^{m}((\mathbf{X}^{m},\boldsymbol{\uptheta }^{m}),(\mathbf{x},\boldsymbol{\uptheta })) \\ \boldsymbol{\hat{\Sigma } } ^{m} \otimes \mathbf{R}^{m}((\mathbf{X}^{e},\boldsymbol{\uptheta }),(\mathbf{x},\boldsymbol{\uptheta })) + \boldsymbol{\hat{\Sigma } } ^{\delta }\otimes \mathbf{R}^{\delta }(\mathbf{X}^{e},\mathbf{x})\\ \end{array} \right ], }$$ $$\displaystyle{ \mathbf{h}(\mathbf{x},\boldsymbol{\uptheta }) = \left [\begin{array}{*{20}c} \mathbf{I}_{q} \otimes \mathbf{h}^{m}(\mathbf{x},\boldsymbol{\uptheta })&\mathbf{I}_{ q} \otimes \mathbf{h}^{\delta }(\mathbf{x})\\ \end{array} \right ]. }$$ \(\mathbf{R}^{m}((\mathbf{X}^{m},\boldsymbol{\Theta }^{m}),(\mathbf{x},\boldsymbol{\uptheta }))\) is a N m ×1 vector whose ith entry is \(R^{m}((\mathbf{x}_{i}^{m},\boldsymbol{\uptheta }_{i}^{m}),(\mathbf{x},\boldsymbol{\uptheta }))\), \(\mathbf{R}^{m}((\mathbf{X}^{e},\boldsymbol{\uptheta }),(\mathbf{x},\boldsymbol{\uptheta }))\) is a N e ×1 vector whose ith entry is \(R^{m}((\mathbf{x}_{i}^{e},\boldsymbol{\uptheta }),(\mathbf{x},\boldsymbol{\uptheta }))\), and R δ (X e , x) is a N e × 1 vector whose ith entry is R δ (x i e , x). To calculate the unconditional posterior distributions (marginalized with respect to \(\boldsymbol{\uptheta }\)) of the experimental responses, the conditional posterior distributions are marginalized with respect to the posterior distribution of the calibration parameters from Module 3. 
The mean and covariance of the unconditional posterior distributions can be written as: $$\displaystyle\begin{array}{rcl} \mathbb{E}[\mathbf{y}^{e}(\mathbf{x})^{T}\vert \mathbf{d},\hat{\boldsymbol{\upphi }}] = \mathbb{E}[\mathbb{E}[\mathbf{y}^{e}(\mathbf{x})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}]],& &{}\end{array}$$ $$\displaystyle\begin{array}{rcl} \begin{array}{l} \mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x})^{T},\mathbf{y}^{e}(\mathbf{x'})^{T}\vert \mathbf{d},\hat{\boldsymbol{\upphi }}] = \mathbb{E}[\mbox{ Cov}[\mathbf{y}^{e}(\mathbf{x})^{T},\mathbf{y}^{e}(\mathbf{x'})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}]] \\ \quad + \mbox{ Cov}[\mathbb{E}[\mathbf{y}^{e}(\mathbf{x})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}], \mathbb{E}[\mathbf{y}^{e}(\mathbf{x'})^{T}\vert \boldsymbol{\uptheta },\mathbf{d},\hat{\boldsymbol{\upphi }}]],\\ \end{array} & &{}\end{array}$$ where the outer expectation and covariance are with respect to the posterior distribution of the calibration parameters. Equations (4.34) and (4.35) are derived using the law of total expectation and the law of total covariance [54]. Due to the complexity of the posterior distribution of the calibration parameters, the marginalization requires numerical integration methods. For the examples in this chapter, Legendre-Gauss quadrature is used. Kennedy, M.C., O'Hagan, A.: Bayesian calibration of computer models. J. R. Stat. Soc. Ser. B 63(3), 425–464 (2001)MathSciNetCrossRefzbMATHGoogle Scholar Higdon, D., Kennedy, M.C., Cavendish, J., Cafeo, J., Ryne, R.: Combining field data and computer simulations for calibration and prediction. SIAM J. Sci. Comput. 26(2), 448–466 (2004)MathSciNetCrossRefzbMATHGoogle Scholar Reese, C.S., Wilson, A.G., Hamada, M., Martz, H.F.: Integrated analysis of computer and physical experiments. Technometrics 46(2), 153–164 (2004)MathSciNetCrossRefGoogle Scholar Bayarri, M.J., Berger, J.O., Paulo, R., Sacks, J., Cafeo, J.A., Cavendish, J., Lin, C.H., Tu, J.: A framework for validation of computer models. Technometrics 49(2), 138–154 (2007)MathSciNetCrossRefGoogle Scholar Higdon, D., Gattiker, J., Williams, B., Rightley, M.: Computer model calibration using high-dimensional output. J. Am. Stat. Assoc. 103(482), 570–583 (2008)MathSciNetCrossRefzbMATHGoogle Scholar Chen, W., Xiong, Y., Tsui, K.L., Wang, S.: A design-driven validation approach using bayesian prediction models. J. Mech. Des. 130(2), 021101 (2008)CrossRefGoogle Scholar Qian, P.Z.G., Wu, C.F.J.: Bayesian hierarchical modeling for integrating low-accuracy and high-accuracy experiments. Technometrics 50(2), 192–204 (2008)MathSciNetCrossRefGoogle Scholar Wang, S., Tsui, K.L., Chen, W.: Bayesian validation of computer models. Technometrics 51(4), 439–451 (2009)MathSciNetCrossRefGoogle Scholar Drignei, D.: A kriging approach to the analysis of climate model experiments. J. Agric. Biol. Environ. Stat. 14(1), 99–112 (2009)MathSciNetCrossRefzbMATHGoogle Scholar Akkaram, S., Agarwal, H., Kale, A., Wang, L.: Meta modeling techniques and optimal design of experiments for transient inverse modeling applications. Paper presented at the ASME International Design Engineering Technical Conference, Montreal (2010)CrossRefGoogle Scholar Huan, X., Marzouk, Y.M.: Simulation-based optimal Bayesian experimental design for nonlinear systems. J. Comput. Phys. 
Informing the structure of complex Hadamard matrix spaces using a flow
Francis C. Motta (Department of Mathematics, Duke University, Box 90320, Durham, NC 27708-0320, USA) and Patrick D. Shipman (Colorado State University, 1874 Campus Delivery, Fort Collins, CO 80523-1874, USA)
Received July 2016, Revised December 2016, Published January 2019. Figures: 4, Tables: 1.
A complex Hadamard matrix $ H $ may be isolated or may lie in a higher-dimensional space of Hadamards. We provide an upper bound for this dimension as the dimension of the center subspace of a gradient flow and apply the Center Manifold Theorem of dynamical systems theory to study local structure in spaces of complex Hadamard matrices. Through examples, we provide several applications of our methodology, including the construction of affine families of Hadamard matrices.
Keywords: Complex Hadamard matrix, matrix defect, center manifold reduction.
Mathematics Subject Classification: Primary: 37C10, 05B20; Secondary: 37L10.
Citation: Francis C. Motta, Patrick D. Shipman. Informing the structure of complex Hadamard matrix spaces using a flow. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2019147
Figure 1. Plot of the eigenvalues of the linearization of $ \Phi_4 $ at $ {\boldsymbol \theta}(a) $ for $ a \in [0,\pi] $. $ \lambda_1(a) $ (blue), $ \lambda_3(a) $ (green), and $ \lambda_6(a) $ (red) simultaneously vanish at $ a = \pi/2 $, while all other eigenvalues (gray) are strictly negative for all $ a \in [0,\pi] $.
Figure 2. Snapshots of 500 initial phases, drawn from $ \mathbb{R}^{9} $ uniformly at random from a neighborhood of the core phases corresponding to $ F(\pi/2) $, as they evolve under the flow defined by $ \Phi_4({\boldsymbol \theta}) $, at times (ⅰ) 5, (ⅱ) 20, (ⅲ) 70, and (ⅳ) 500. Each point cloud has been projected onto its top three principal components, and each point $ {\boldsymbol \theta} $ is colored by $ \log_{10} $ of the magnitude of the vector field $ \Phi_4({\boldsymbol \theta}) $.
Figure 3. Plot of the 25 eigenvalues of $ D\Phi_6\vert_{D_6(c)} $ for $ c \in [-\pi/2,\pi/2] $. The zero eigenvalue (blue) has multiplicity four, the roots of $ f_1(\lambda;c) $ (green) have multiplicity two, and all other eigenvalues (gray) are simple.
Figure 4. Snapshots of 500 initial phases, drawn from $ \mathbb{R}^{64} $ uniformly at random from a neighborhood of the core phases corresponding to $ B_9^{(0)} $, as they evolve under the flow defined by $ \Phi_9({\boldsymbol \theta}) $, at times (ⅰ) 5, (ⅱ) 20, (ⅲ) 70, and (ⅳ, ⅴ) 500. Each point cloud has been projected onto its top three principal components, and each point $ {\boldsymbol \theta} $ is colored by $ \log_{10} $ of the magnitude of the vector field $ \Phi_9({\boldsymbol \theta}) $.
Table 1. Nonzero coordinates of the vectors in the basis $\{\textbf{V}_1, \ldots, \textbf{V}_{16}\}$ for $D\Phi_{10}\vert_{D_{10}}$. A nonzero coordinate has value 1 or -1, indicated by the group to which it belongs (the two groups are separated by "|" below):
$\textbf{V}_1$: 2, 3, 7, 8, 74, 75, 79, 80 | 10, 18, 19, 27, 55, 63, 64, 72
$\textbf{V}_2$: 10, 12, 16, 18, 64, 66, 70, 72 | 2, 8, 20, 26, 56, 62, 74, 80
$\textbf{V}_4$: 4, 8, 24, 25, 40, 44, 78, 79 | 28, 32, 48, 54, 57, 63, 64, 68
$\textbf{V}_{10}$: 47, 48, 49, 50, 74, 75, 76, 77 | 15, 18, 24, 27, 33, 36, 42, 45
$\textbf{V}_{11}$: 2, 6, 30, 32, 65, 69, 75, 77 | 10, 17, 22, 27, 40, 45, 46, 53
$\textbf{V}_{15}$: 19, 23, 28, 32, 64, 68, 73, 77 | 3, 4, 8, 9, 39, 40, 44, 45
$\textbf{V}_{16}$: 19, 23, 49, 53, 58, 62, 73, 77 | 3, 9, 33, 34, 39, 45, 69, 70
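The numerical experiments described in the captions of Figures 2 and 4 can be imitated in spirit with a short sketch. To be explicit about what is assumed here: the potential below (a penalty on the failure of $HH^*=N\,\mathrm{Id}$ for the unimodular matrix with phases $\boldsymbol\theta$), the finite-difference gradient, and the use of all $N^2$ phases are choices of this illustration only; they are not necessarily the flow $\Phi_N$ or the core-phase parametrization of the paper. The random initial cloud near a seed Hadamard matrix, the gradient-flow evolution, and the projection onto the top three principal components follow the captions.

```python
import numpy as np

def potential(theta, N):
    """Assumed penalty for failure of the Hadamard condition H H* = N I,
    where H = exp(i*theta) entrywise; the paper's flow Phi_N may differ."""
    H = np.exp(1j * theta.reshape(N, N))
    G = H @ H.conj().T - N * np.eye(N)
    return float(np.sum(np.abs(G) ** 2))

def gradient_step(theta, N, dt=1e-3, eps=1e-6):
    """One explicit Euler step of the negative gradient flow,
    using a simple finite-difference gradient."""
    base = potential(theta, N)
    grad = np.array([(potential(theta + eps * e, N) - base) / eps
                     for e in np.eye(theta.size)])
    return theta - dt * grad

N = 4
# Seed: phases of the 4x4 Fourier matrix, which is a complex Hadamard matrix
seed = (2 * np.pi * np.outer(np.arange(N), np.arange(N)) / N).ravel()
rng = np.random.default_rng(0)
cloud = seed + 0.1 * rng.standard_normal((500, N * N))   # 500 nearby initial phases

for _ in range(100):                  # evolve the cloud (small step count, purely illustrative)
    cloud = np.array([gradient_step(th, N) for th in cloud])

# Project the point cloud onto its top three principal components, as in the captions
centered = cloud - cloud.mean(axis=0)
pcs = np.linalg.svd(centered, full_matrices=False)[2][:3]
projected = centered @ pcs.T
print(projected.shape)   # (500, 3)
```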
Mapping between camera pose and image features in visual servoing
I have a robotic arm and a camera in an eye-in-hand configuration. I know that there is a relationship between the body velocity $V$ of the camera and the velocities $\dot s$ in the image feature space, namely $\dot s=L(z,s) V$, where $L$ is the interaction matrix. I was wondering whether one can find a mapping (a so-called diffeomorphism) that connects the image features' vector $s$ with the camera pose $X$. All I was able to find is that it is possible to do this in a structured environment, and I don't fully understand what that is. mapping visual-servoing – Controller
Image features are connected to the camera pose through two steps: (1) the relationship between the feature pixel coordinates and its homogeneous coordinates in the camera reference frame, and (2) the relationship between the camera reference frame and the world frame. Take a look at this figure, where the world frame is denoted with subscript 0 and the camera frame is denoted with subscript C. A feature is shown as a blue dot, with position $p$ in the world frame. The camera has a particular field of view (shown with a pyramid in the image), which relates the pixel coordinates to the relative position of the feature in the camera reference frame, $\tilde{p}$, through the camera projection matrix: $\begin{bmatrix} I \\ J \\ 1 \end{bmatrix} = \begin{bmatrix} k_x & 0 & C_x \\ 0 & k_y & C_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$ where $I$ and $J$ are the pixel coordinates of the feature in the image, and the camera is defined with parameters $k_x$, $k_y$, $C_x$ and $C_y$ (based on the field of view and output image size). The homogeneous coordinates, $X$ and $Y$, are defined based on the relative position of the feature in the camera frame: $\tilde{p} = \begin{bmatrix} \tilde{x} \\ \tilde{y} \\ \tilde{z} \end{bmatrix}$, $X = \frac{\tilde{x}}{\tilde{z}}$, $Y = \frac{\tilde{y}}{\tilde{z}}$. That relative position of the feature is then related to the actual position of the feature (in the world frame) as well as the camera pose, according to: $p = R_C \tilde{p} + p_C$, where $R_C$ is the rotation matrix defining the camera orientation and $p_C$ is the position of the camera. One thing I have not discussed is lens distortion, which must be accounted for before using this approach but can be treated separately. Note that the OpenCV Camera Calibration and 3D Reconstruction page provides a more detailed explanation. – Brian Lynch
Nice introduction to the pinhole camera model, but I don't think this fully answers the question. It is impossible to estimate the velocity of features observed by a monocular camera without knowing their depth. It would be possible to do that by using structure-from-motion algorithms, but it radically increases the complexity of the task. Using a Kinect would be preferred. – Mehdi Nov 23 '15 at 9:41
The question asks "if one can find a mapping... that connects the image features' vector s with the camera pose X". There is mention of velocities $\dot{s}$ but that is not the question. Also, this model necessarily requires 3D coordinates of features to begin with ($p$ is assumed to be known) -- I'm not saying you get depth from a monocular camera, these features are assumed to be known because it is a structured environment! Sorry if my answer is confusing, I will update it later.
– Brian Lynch Nov 23 '15 at 9:54
Structured environment means an environment with observable edges and corners, as far as I know. It doesn't need to be an environment where the structure is known. I see that the question was only about the mapping from pixels to the camera frame, but I don't think that it would be enough for the OP considering his final goal. – Mehdi Nov 23 '15 at 9:58
Not quite, a structured environment is one where features are recognizable based on some a priori knowledge. Yes, you can treat edges/corners as known structures in the environment, but in general it refers to the fact that these structures are filling a necessary gap. In the case of a monocular camera you would be depending on fiduciary markers or known objects with scale to provide that structure. Also, you often want to predict the pixel coordinates for a measurement model, so this is the relevant component of that larger question -- what comes next may be already answered elsewhere. – Brian Lynch Nov 23 '15 at 10:26
Look up Camera Space Manipulation, which uses the pinhole camera model to map image space and physical space coordinates. It is not too difficult to do. A structured environment is one in which objects which occupy the environment are known, and can be modelled. It is much easier to program mobility, and to identify objects, in a structured environment. For example, if you know that a particular circular object has a given diameter, you can use that object's elliptical shape in camera space to determine viewing angle, and its dimensions to determine the distance between the circle and the camera. But if you don't know the model of that object, you can't be sure if it is a circle viewed from an angle, or an ellipse viewed head-on. Therefore unstructured environments are much more difficult for relating camera space coordinates and physical coordinates. – SteveO
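To make the two-step mapping described in the first answer concrete, here is a minimal numerical sketch (an illustration with made-up intrinsic parameters and a hypothetical camera pose, not from either answer): it expresses a world-frame feature in the camera frame using $p = R_C \tilde{p} + p_C$ and then projects it to pixel coordinates, ignoring lens distortion as noted above.

```python
import numpy as np

def project_point(p_world, R_C, p_C, k_x, k_y, C_x, C_y):
    """Project a world-frame point to pixel coordinates with a pinhole model.

    p_world  : (3,) feature position in the world frame
    R_C, p_C : camera orientation (3x3 rotation) and position in the world frame
    k_x, k_y, C_x, C_y : intrinsic parameters (focal scalings and principal point)
    """
    # Relative position in the camera frame, inverting p = R_C @ p_tilde + p_C
    p_tilde = R_C.T @ (p_world - p_C)
    x_t, y_t, z_t = p_tilde
    if z_t <= 0:
        raise ValueError("Point is behind the camera.")
    # Normalized (homogeneous) coordinates
    X, Y = x_t / z_t, y_t / z_t
    # Intrinsic projection: [I, J, 1]^T = K [X, Y, 1]^T
    K = np.array([[k_x, 0.0, C_x],
                  [0.0, k_y, C_y],
                  [0.0, 0.0, 1.0]])
    I, J, _ = K @ np.array([X, Y, 1.0])
    return I, J

# Example: camera at the world origin looking along z (identity rotation)
I, J = project_point(np.array([0.1, -0.05, 2.0]),
                     np.eye(3), np.zeros(3),
                     k_x=800, k_y=800, C_x=320, C_y=240)
print(I, J)   # pixel coordinates of the feature
```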
12C(α,γ)16O Reaction Rate
The 12C(α,γ)16O reaction and its implications for stellar helium burning (2017)
The creation of carbon and oxygen in our Universe is one of the forefront questions in nuclear astrophysics. The determination of the abundance of these elements is key to our understanding of both the formation of life on Earth and the life cycles of stars. While nearly all models of different nucleosynthesis environments are affected by the production of carbon and oxygen, a key ingredient, the precise determination of the reaction rate of $^{12}$C($\alpha , \gamma$)$^{16}$O, has long remained elusive. This is due to the reaction's inaccessibility, both experimentally and theoretically. Nuclear theory has struggled to calculate this reaction rate because the cross section is produced through different underlying nuclear mechanisms. Isospin selection rules suppress the E1 component of the ground state cross section, creating a unique situation where the E1 and E2 contributions are of nearly equal amplitudes. Experimentally there have also been great challenges. Measurements have been pushed to the limits of state-of-the-art techniques, often developed for just these measurements. The data have been plagued by uncharacterized uncertainties, often the result of the novel measurement techniques that have made the different results challenging to reconcile. However, the situation has markedly improved in recent years, and the desired level of uncertainty (10%) may be in sight. In this review article the current understanding of this critical reaction is summarized. The emphasis is placed primarily on the experimental work and interpretation of the reaction data, but discussions of the theory and astrophysics are also pursued. The main goal is to summarize and clarify the current understanding of the reaction and then point the way forward to an improved determination of the reaction rate.
Figure 29 - Comparison of the reaction rate and uncertainty calculated in this work (orange band, solid central line) and that from Kunz et al. (2002) (blue band, dashed central line), normalized to the adopted value from Angulo et al. (1999) (NACRE compilation) (gray band, solid central line). The deviations at higher temperature are the result of the different narrow resonance and cascade transitions that were considered in the different works.
Quantum Mechanics in higher-dimensional spaces (Part of the Wolverhampton Lectures of Physics's Quantum Physics Course) Last year, we studied the 1D Schrödinger equation at length: $$i\hbar\partial_t\psi=H\psi$$ which was 1D because the Hamiltonian (i.e., energy) $H$ was 1D, namely, we studied: $$H=\frac12mv^2+V$$ (kinetic + potential energy), and using the correspondence principle, which takes us from classical to quantum mechanics: $$p\rightarrow-i\hbar\partial_x$$ so our Schrödinger equation from last year was: $$i\hbar\partial_t\psi(x,t)=\left(-{\hbar^2\over2m}\partial^2_x+V\right)\psi(x,t)\,.$$ This year, our program will be essentially to do quantum mechanics in higher-dimensional spaces. The nature of these extra dimensions can be of a different character. The most obvious case is that our particle is now allowed to move in a higher-dimensional space, like free space (so that's 3D). We'll spend much time studying a fascinating problem, the hydrogen atom, which is 3D. We will remind ourselves of the basic principles of the theory first and see how it extends to the higher-D case. For such higher-dimensional cases of a single particle, the momentum $\vec p=m \vec v$ becomes a vector, with three components. The correspondence principle reads: $$\vec p\rightarrow-i\hbar\nabla\,.$$ The wavefunction itself on which this applies is a function of the corresponding-dimension variable: $$\psi(\vec r,t)$$ with normalization condition $$\int|\psi(\vec r,t)|^2 d\vec r=1$$ and observables obtained through the high-D sandwich process $$\langle\Omega\rangle=\int\psi^*(\vec r)\Omega\psi(\vec r)d\vec r\,.$$ Remember that $\Omega$ is an operator, and does not commute in general with the wavefunctions! A typical observable is, for instance, the position of the particle: $$\begin{align} \langle\vec r\rangle&=\langle x\hat\imath+y\hat\jmath+z\hat k\rangle\\ &=\langle x\hat\imath\rangle+ \langle y\hat\jmath\rangle+\langle z\hat k\rangle\\ &=\langle x\rangle\hat\imath+ \langle y\rangle\hat\jmath+\langle z\rangle\hat k \end{align}$$ Let us now look at the kinetic energy, $K=\frac12mv^2$. It is a scalar (energy is a scalar) but $v^2$ comes from a 3D velocity, which is, clearly: $$v^2=\vec v\cdot\vec v=v_x^2+v_y^2+v_z^2\,.$$ From the correspondence principle, we will, like in 1D, use momentum rather than velocity: $$\displaystyle K={p^2\over 2m}={\vec p\cdot\vec p\over 2m}={p_x^2\over 2m}+{p_y^2\over 2m}+{p_z^2\over 2m}\,.$$ So our 3D Schrödinger equation finally reads: $$i\hbar\partial_t\psi(\vec r,t)=\left(-{\hbar^2\over2m}\nabla^2+V\right)\psi(\vec{r},t)$$ That is a general result. Like in 1D, things will become particular when we specify the potential energy, and we'll also look at familiar cases. For the hydrogen atom, one of our main targets, this will be Coulomb's energy (that is, the potential whose gradient gives Coulomb's force), and our Hamiltonian will be $$H=K+V$$ with $K$ as above and $V$ as: $$\displaystyle V(r)=-{e^2\over4\pi\epsilon_0}{1\over r}\,.$$ The general time-dependent Schrödinger equation can still be solved by separation of (time and space) variables, which yields: $$H\psi=E\psi\,.$$ But in the spirit of starting with easy things first, we will begin with a Cartesian problem $(x, y, z)$ that is separable in these coordinates, namely the so-called "particle in a box", which we know well in 1D and that we will now study in 3D, although this will mainly be a review since, in this case, the 3D Schrödinger equation really reduces to 1D equations.
Indeed, from: $$\displaystyle-{\hbar^2\over2m}(\partial_x^2+\partial_y^2+\partial_z^2)\psi(x,y,z)=E\psi(x,y,z)$$ by introducing $\psi(x,y,z)=X(x)Y(y)Z(z)$, we get to: $$\displaystyle-{\hbar^2\over2m}\left({\partial_x^2 X\over X}+{\partial_y^2 Y\over Y}+{\partial_z^2 Z\over Z}\right)=E$$ which, by the method of separation of variables, turns into three equations: $$ \begin{align} -{\hbar^2\over2m}\left({\partial_x^2X\over X}\right)&=E_x\\ -{\hbar^2\over2m}\left({\partial_y^2Y\over Y}\right)&=E_y\\ -{\hbar^2\over2m}\left({\partial_z^2Z\over Z}\right)&=E_z \end{align} $$ with $$E_x+E_y+E_z=E$$ and each of these equations is a 1D Schrödinger equation: $$-{\hbar^2\over2m}\partial_x^2X=E_xX$$ which we know how to solve. In the box, the solutions are: $$X_{n_x}=\sqrt{2\over L}\sin\left({n_x\pi x\over L}\right)$$ $$\displaystyle E_{n_x}={n_x^2\pi^2\hbar^2\over 2mL^2}$$ and similarly for $y$ and $z$, with obvious substitutions. The ground state is the state with $n_x=n_y=n_z=1$ with energy $3E_0$ where $E_0\equiv{\pi^2\hbar^2\over 2mL^2}$. The first excited state is not unique, since we find that (1,1,2), (1,2,1) & (2,1,1) all have the same energy: $6E_0$. These three states are said to be degenerate. The second excited state also has a degeneracy of three (meaning, 3 different states which result in the same energy, namely (1,2,2), (2,1,2) & (2,2,1)) with energy $9E_0$, and so does the 3rd excited state (namely (1,1,3), (1,3,1) & (3,1,1), with energy $11E_0$)... Do we see a pattern there? The 4th excited state is not degenerate: (2,2,2) with energy $12E_0$. And the fifth excited state has degeneracy 6 ((3,2,1) (3,1,2) (2,3,1) (2,1,3) (1,3,2) (1,2,3)) with energy $14E_0$. Such counting is important in statistical mechanics, where one looks at the possible ways to distribute energy into the available excited states.
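As a quick aside (an illustration added here, not part of the original lecture), one can let a computer do this counting. The short snippet below enumerates $E/E_0=n_x^2+n_y^2+n_z^2$ and its degeneracy for the lowest levels of the 3D box:

```python
from collections import defaultdict
from itertools import product

nmax = 4                        # quantum numbers n_x, n_y, n_z = 1 .. nmax
levels = defaultdict(list)      # E/E_0 = n_x^2 + n_y^2 + n_z^2  ->  list of (n_x, n_y, n_z)
for n in product(range(1, nmax + 1), repeat=3):
    levels[sum(k * k for k in n)].append(n)

for E in sorted(levels)[:7]:
    print(f"E = {E} E_0, degeneracy {len(levels[E])}: {levels[E]}")
# Output: E/E_0 = 3, 6, 9, 11, 12, 14, 17 with degeneracies 1, 3, 3, 3, 1, 6, 3,
# matching the counting above (the degeneracy jumps to 6 at 14 E_0, where all
# three quantum numbers are different).
```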
Global well-posedness of the two-dimensional horizontally filtered simplified Bardina turbulence model on a strip-like region
Luca Bisconti (Università degli Studi di Firenze, Dipartimento di Matematica e Informatica "U. Dini", Via S. Marta 3, I-50139 Firenze, Italia) and Davide Catania (SMARTest-Università eCampus and Università degli Studi di Brescia, Sezione Matematica (DICATAM), Via Valotti 9, I-25133 Brescia, Italia)
Communications on Pure & Applied Analysis, September 2017, 16(5): 1861-1881. doi: 10.3934/cpaa.2017090. Received December 2016, Revised January 2017, Published May 2017.
We consider the 2D simplified Bardina turbulence model, with horizontal filtering, in an unbounded strip-like domain. We prove global existence and uniqueness of weak solutions in a suitable class of anisotropic weighted Sobolev spaces.
Keywords: Simplified Bardina model, Navier-Stokes equations, turbulent flows, Large Eddy Simulation (LES), anisotropic filters, unbounded domains, global attractor.
Mathematics Subject Classification: Primary: 76D05, 35B65; Secondary: 35Q30, 76F65, 76D03.
Citation: Luca Bisconti, Davide Catania. Global well-posedness of the two-dimensional horizontally filtered simplified Bardina turbulence model on a strip-like region. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1861-1881. doi: 10.3934/cpaa.2017090
Integration Theory
Posted 2021-10-02 Updated 2021-12-09 Analysis / Functional Analysis / Integration Theory / Banach Algebra
The Banach Algebra of Borel Measures on Euclidean Space
This blog post is intended to deliver a quick explanation of the algebra of Borel measures on \(\mathbb{R}^n\). It will be broken into pieces. All complex Borel measures \(M(\mathbb{R}^n)\) clearly form a vector space over \(\mathbb{C}\). The main goal of this post is to show that this is a Banach space and also a Banach algebra. In fact, the \(\mathbb{R}^n\) case can be generalised to any locally compact abelian group (see any abstract harmonic analysis book); this is because what really matters here is being locally compact and abelian. But at this moment we stick to Euclidean spaces. Note that since \(\mathbb{R}^n\) is \(\sigma\)-compact, all Borel measures are regular. To read this post you need to be familiar with some basic properties of Banach algebras, complex Borel measures, and, most importantly, Fubini's theorem.
Posted 2021-01-23 Updated 2021-10-06 Analysis / Functional Analysis / $L^p$-space / Integration Theory
Several ways to prove Hardy's inequality
Suppose \(1 < p < \infty\) and \(f \in L^p((0,\infty))\) (with respect to Lebesgue measure of course) is a nonnegative function. Take \[ F(x) = \frac{1}{x}\int_0^x f(t)dt \quad 0 < x <\infty; \] then we have Hardy's inequality \(\def\lrVert[#1]{\lVert #1 \rVert}\) \[ \lrVert[F]_p \leq q\lrVert[f]_p \] where \(\frac{1}{p}+\frac{1}{q}=1\) of course. There are several ways to prove it. I think there are several good reasons to write them down thoroughly, since that may be why you find this page. Maybe you are burnt out since it's left as an exercise. You are assumed to have enough knowledge of Lebesgue measure and integration.
Minkowski's integral inequality
Let \(S_1,S_2 \subset \mathbb{R}\) be two measurable sets, and suppose \(F:S_1 \times S_2 \to \mathbb{R}\) is measurable; then \[ \left[\int_{S_2} \left\vert\int_{S_1}F(x,y)dx \right\vert^pdy\right]^{\frac{1}{p}} \leq \int_{S_1} \left[\int_{S_2} |F(x,y)|^p dy\right]^{\frac{1}{p}}dx. \] A proof can be found here by turning to Example A9. You may need to replace all measures with the Lebesgue measure \(m\). Now let's get into it. After the change of variable \(t=ux\) we have \(F(x)=\int_0^1 f(ux)du\), so the measurable function we feed into the inequality is \((u,x) \mapsto f(ux)\). Indeed, we see \[ \begin{aligned} \lrVert[F]_p &= \left[\int_0^\infty \left\vert \int_0^x \frac{f(t)}{x}dt \right\vert^p dx\right]^{\frac{1}{p}} \\ &= \left[\int_0^\infty \left\vert \int_0^1 f(ux)du \right\vert^p dx\right]^{\frac{1}{p}} \\ &\leq \int_0^1 \left[\int_0^\infty |f(ux)|^pdx\right]^{\frac{1}{p}}du \\ &= \int_0^1 \left[\int_0^\infty |f(ux)|^pudx\right]^{\frac{1}{p}}u^{-\frac{1}{p}}du \\ &= \lrVert[f]_p \int_0^1 u^{-\frac{1}{p}}du \\ &=q\lrVert[f]_p. \end{aligned} \] Note we have used change of variables twice and the inequality once.
A constructive approach
I have no idea how people came up with this solution. Take \(xF(x)=\int_0^x f(t)t^{u}t^{-u}dt\) where \(0<u<1-\frac{1}{p}\).
Hölder's inequality gives us \[ \begin{aligned} xF(x) &= \int_0^x f(t)t^ut^{-u}dt \\ &\leq \left[\int_0^x t^{-uq}dt\right]^{\frac{1}{q}}\left[\int_0^xf(t)^pt^{up}dt\right]^{\frac{1}{p}} \\ &=\left(\frac{1}{1-uq}x^{1-uq}\right)^{\frac{1}{q}}\left[\int_0^xf(t)^pt^{up}dt\right]^{\frac{1}{p}} \end{aligned} \] Hence \[ \begin{aligned} F(x)^p & \leq \frac{1}{x^p}\left\{\left(\frac{1}{1-uq}x^{1-uq}\right)^{\frac{1}{q}}\left[\int_0^xf(t)^pt^{up}dt\right]^{\frac{1}{p}}\right\}^{p} \\ &= \left(\frac{1}{1-uq}\right)^{\frac{p}{q}}x^{\frac{p}{q}(1-uq)-p}\int_0^x f(t)^pt^{up}dt \\ &= \left(\frac{1}{1-uq}\right)^{p-1}x^{-up-1}\int_0^x f(t)^pt^{up}dt \end{aligned} \] Note we have used the fact that \(\frac{1}{p}+\frac{1}{q}=1 \implies p+q=pq\) and \(\frac{p}{q}=p-1\). Fubini's theorem gives us the final answer: \[ \begin{aligned} \int_0^\infty F(x)^pdx &\leq \int_0^\infty\left[\left(\frac{1}{1-uq}\right)^{p-1}x^{-up-1}\int_0^x f(t)^pt^{up}dt\right]dx \\ &=\left(\frac{1}{1-uq}\right)^{p-1}\int_0^\infty dx\int_0^x f(t)^pt^{up}x^{-up-1}dt \\ &=\left(\frac{1}{1-uq}\right)^{p-1}\int_0^\infty dt\int_t^\infty f(t)^pt^{up}x^{-up-1}dx \\ &=\left(\frac{1}{1-uq}\right)^{p-1}\frac{1}{up}\int_0^\infty f(t)^pdt. \end{aligned} \] It remains to find the minimum of \(\varphi(u) = \left(\frac{1}{1-uq}\right)^{p-1}\frac{1}{up}\). This is an elementary calculus problem. By taking its derivative, we see that when \(u=\frac{1}{pq}<1-\frac{1}{p}\) it attains its minimum \(\left(\frac{p}{p-1}\right)^p=q^p\). Hence we get \[ \int_0^\infty F(x)^pdx \leq q^p\int_0^\infty f(t)^pdt, \] which is exactly what we want. Note the constant \(q\) cannot be replaced with a smaller one. We have only proved the case when \(f \geq 0\); for the general case, one simply needs to take absolute values.
Integration by parts
This approach makes use of properties of the \(L^p\) space. Still we assume that \(f \geq 0\), but we also assume \(f \in C_c((0,\infty))\), that is, \(f\) is continuous and has compact support. Hence \(F\) is differentiable in this situation. Integration by parts gives \[ \int_0^\infty F^p(x)dx=xF(x)^p\vert_0^\infty- \int_0^\infty xd(F^p(x)) = -p\int_0^\infty xF^{p-1}(x)F'(x)dx. \] Note since \(f\) has compact support, there is some \([a,b]\) such that \(f(x) >0\) only if \(0 < a \leq x \leq b < \infty\), and hence \(xF(x)^p\vert_0^\infty=0\). Next it is natural to take a look at \(F'(x)\). Note we have \[ F'(x) = \frac{f(x)}{x}-\frac{\int_0^x f(t)dt}{x^2}, \] hence \(xF'(x)=f(x)-F(x)\). A substitution gives us \[ \int_0^\infty F^p(x)dx = -p\int_0^\infty F^{p-1}(x)[f(x)-F(x)]dx, \] which is equivalent to saying \[ \int_0^\infty F^p(x)dx = \frac{p}{p-1}\int_0^\infty F^{p-1}(x)f(x)dx. \] Hölder's inequality gives us \[ \begin{aligned} \int_0^\infty F^{p-1}(x)f(x)dx &\leq \left[\int_0^\infty F^{(p-1)q}(x)dx\right]^{\frac{1}{q}}\left[\int_0^\infty f(x)^pdx\right]^{\frac{1}{p}} \\ &=\left[\int_0^\infty F^{p}(x)dx\right]^{\frac{1}{q}}\left[\int_0^\infty f(x)^pdx\right]^{\frac{1}{p}}. \end{aligned} \] Together with the identity above we get \[ \int_0^\infty F^p(x)dx \leq q\left[\int_0^\infty F^{p}(x)dx\right]^{\frac{1}{q}}\left[\int_0^\infty f(x)^pdx\right]^{\frac{1}{p}}, \] which is exactly what we want since \(1-\frac{1}{q}=\frac{1}{p}\): all we need to do is divide both sides by \(\left[\int_0^\infty F^pdx\right]^{1/q}\), which is finite in the present situation (and if it vanishes there is nothing to prove). So what's next? Note \(C_c((0,\infty))\) is dense in \(L^p((0,\infty))\). For any \(f \in L^p((0,\infty))\), we can take a sequence of functions \(f_n \in C_c((0,\infty))\) such that \(f_n \to f\) with respect to the \(L^p\)-norm.
Taking \(F=\frac{1}{x}\int_0^x f(t)dt\) and \(F_n = \frac{1}{x}\int_0^x f_n(t)dt\), we need to show that \(F_n \to F\) pointwise, so that we can use Fatou's lemma. For \(\varepsilon>0\), there exists some \(N\) such that \(\lrVert[f_n-f]_p < \varepsilon\) whenever \(n \geq N\). Thus, for such \(n\), \[ \begin{aligned} |F_n(x)-F(x)| &= \frac{1}{x}\left\vert \int_0^x f_n(t)dt - \int_0^x f(t)dt \right\vert \\ &\leq \frac{1}{x} \int_0^x |f_n(t)-f(t)|dt \\ &\leq \frac{1}{x} \left[\int_0^x|f_n(t)-f(t)|^pdt\right]^{\frac{1}{p}}\left[\int_0^x 1^qdt\right]^{\frac{1}{q}} \\ &=\frac{1}{x^{1/p}}\left[\int_0^x|f_n(t)-f(t)|^pdt\right]^{\frac{1}{p}} \\ &\leq \frac{1}{x^{1/p}}\lrVert[f_n-f]_p <\frac{\varepsilon}{x^{1/p}}. \end{aligned} \] Hence \(F_n \to F\) pointwise, which also implies that \(|F_n|^p \to |F|^p\) pointwise. For \(|F_n|\) we have \[ \begin{aligned} \int_0^\infty |F_n(x)|^pdx &= \int_0^\infty \left\vert\frac{1}{x}\int_0^x f_n(t)dt\right\vert^p dx \\ &\leq \int_0^\infty \left[\frac{1}{x}\int_0^x |f_n(t)|dt\right]^{p}dx \\ &\leq q^p\int_0^\infty |f_n(t)|^pdt; \end{aligned} \] note the last inequality follows since we have already proved Hardy's inequality for nonnegative functions in \(C_c((0,\infty))\). By Fatou's lemma, we have \[ \begin{aligned} \int_0^\infty |F(x)|^pdx &= \int_0^\infty \lim_{n \to \infty}|F_n(x)|^pdx \\ &\leq \liminf_{n \to \infty} \int_0^\infty |F_n(x)|^pdx \\ &\leq \liminf_{n \to \infty}q^p\int_0^\infty |f_n(x)|^pdx \\ &=q^p\int_0^\infty |f(x)|^pdx. \end{aligned} \]
A Continuous Function Sending L^p Functions to L^1
Throughout, let \((X,\mathfrak{M},\mu)\) be a measure space where \(\mu\) is positive. If \(f\) is of \(L^p(\mu)\), which means \(\lVert f \rVert_p=\left(\int_X |f|^p d\mu\right)^{1/p}<\infty\), or equivalently \(\int_X |f|^p d\mu<\infty\), then we may say \(|f|^p\) is of \(L^1(\mu)\). In other words, we have a function \[ \begin{aligned} \lambda: L^p(\mu) &\to L^1(\mu) \\ f &\mapsto |f|^p. \end{aligned} \] This function does not have to be one-to-one due to the absolute value. But we hope this function is well behaved; at the very least, we hope it is continuous. Here, \(f \sim g\) means that \(f-g\) equals \(0\) almost everywhere with respect to \(\mu\). It can be easily verified that this is an equivalence relation. We still use the \(\varepsilon-\delta\) argument, but in a metric space. Suppose \((X,d_1)\) and \((Y,d_2)\) are two metric spaces and \(f:X \to Y\) is a function. We say \(f\) is continuous at \(x_0 \in X\) if, for any \(\varepsilon>0\), there exists some \(\delta>0\) such that \(d_2(f(x_0),f(x))<\varepsilon\) whenever \(d_1(x_0,x)<\delta\). Further, we say \(f\) is continuous on \(X\) if \(f\) is continuous at every point \(x \in X\). For \(1\leq p<\infty\), we already have a metric by \[ d(f,g)=\lVert f-g \rVert_p \] given that \(d(f,g)=0\) if and only if \(f \sim g\). This is complete and makes \(L^p\) a Banach space. But for \(0<p<1\) (yes, we are going to cover that), things are much different, and there is one reason: the Minkowski inequality is reversed! In fact, for nonnegative \(f\) and \(g\) we have \[ \lVert f+g \rVert_p \geq \lVert f \rVert_p + \lVert g \rVert_p \] for \(0<p<1\). The \(L^p\) space has too many weird things when \(0<p<1\). Precisely: For \(0<p<1\), \(L^p(\mu)\) is locally convex if and only if \(\mu\) assumes finitely many values. (Proof.) On the other hand, if for example \(X=[0,1]\) and \(\mu=m\) is the Lebesgue measure, then \(L^p(\mu)\) has no open convex subset other than \(\varnothing\) and \(L^p(\mu)\) itself. However, a topological vector space \(X\) is normable if and only if its origin has a convex bounded neighbourhood.
(See Kolmogorov's normability criterion.) Therefore \(L^p(m)\) is not normable, hence not Banach. We have gone too far. We need a metric that is fine enough.
Metric of \(L^p\) when \(0<p<1\)
Define \[ \Delta(f)=\int_X |f|^p d\mu \] for \(f \in L^p(\mu)\). We will show that we have a metric by \[ d(f,g)=\Delta(f-g). \] Fix \(y\geq 0\) and consider the function \[ f(x)=(x+y)^p-x^p. \] We have \(f(0)=y^p\) and, because \(p-1<0\), \[ f'(x)=p(x+y)^{p-1}-px^{p-1} \leq px^{p-1}-px^{p-1}=0 \] when \(x > 0\); hence \(f(x)\) is nonincreasing on \([0,\infty)\), which implies that \[ (x+y)^p \leq x^p+y^p. \] Hence for any \(f\), \(g \in L^p\), we have \[ \Delta(f+g)=\int_X |f+g|^p d\mu \leq \int_X |f|^p d\mu + \int_X |g|^p d\mu=\Delta(f)+\Delta(g). \] This inequality ensures that \[ d(f,g)=\Delta(f-g) \] is a metric. It's immediate that \(d(f,g)=d(g,f) \geq 0\) for all \(f\), \(g \in L^p(\mu)\). For the triangle inequality, note that \[ d(f,h)+d(g,h)=\Delta(f-h)+\Delta(h-g) \geq \Delta((f-h)+(h-g))=\Delta(f-g)=d(f,g). \] This is translate-invariant as well, since \[ d(f+h,g+h)=\Delta(f+h-g-h)=\Delta(f-g)=d(f,g). \] The completeness can be verified in the same way as in the case \(p \geq 1\). In fact, this metric makes \(L^p\) a locally bounded F-space.
The continuity of \(\lambda\)
The metric of \(L^1\) is defined by \[ d_1(f,g)=\lVert f-g \rVert_1=\int_X |f-g|d\mu. \] We need to find a relation between \(d_p(f,g)\) and \(d_1(\lambda(f),\lambda(g))\), where \(d_p\) is the metric of the corresponding \(L^p\) space.
\(0<p<1\)
As we have proved, \[ (x+y)^p \leq x^p+y^p. \] Without loss of generality we assume \(x \geq y\) and therefore \[ x^p=(x-y+y)^p \leq (x-y)^p+y^p. \] Hence \[ x^p-y^p \leq (x-y)^p. \] By interchanging \(x\) and \(y\), we get \[ |x^p-y^p| \leq |x-y|^p. \] Replacing \(x\) and \(y\) with \(|f|\) and \(|g|\) where \(f\), \(g \in L^p\) (and using \(\lvert |f|-|g| \rvert \leq |f-g|\)), we get \[ \int_{X}\lvert |f|^p-|g|^p \rvert d\mu \leq \int_X |f-g|^p d\mu. \] But \[ d_1(\lambda(f),\lambda(g))=\int_{X}\lvert |f|^p-|g|^p \rvert d\mu, \qquad d_p(f,g)=\Delta(f-g)= \int_X |f-g|^p d\mu, \] and we therefore have \[ d_1(\lambda(f),\lambda(g)) \leq d_p(f,g). \] Hence \(\lambda\) is continuous (and in fact Lipschitz continuous and uniformly continuous) when \(0<p<1\).
\(1 \leq p < \infty\)
It's natural to think about Minkowski's inequality and Hölder's inequality in this case since they are critical inequality enablers. You need to think about how to create the conditions to use them and get a fine result. In this section we need to prove that \[ |x^p-y^p| \leq p|x-y|(x^{p-1}+y^{p-1}). \] This inequality is surprisingly easy to prove, however. We will use nothing but the mean value theorem. Without loss of generality we assume that \(x > y \geq 0\) and define \(f(t)=t^p\). Then \[ \frac{f(x)-f(y)}{x-y}=f'(\zeta)=p\zeta^{p-1} \] where \(y < \zeta < x\). But since \(p-1 \geq 0\), we see \(\zeta^{p-1} \leq x^{p-1} \leq x^{p-1}+y^{p-1}\). Therefore \[ f(x)-f(y)=x^p-y^p=p(x-y)\zeta^{p-1}\leq p(x-y)(x^{p-1}+y^{p-1}). \] For \(x=y\) the equality holds.
Therefore \[ \begin{aligned} d_1(\lambda(f),\lambda(g)) &= \int_X \left||f|^p-|g|^p\right|d\mu \\ &\leq \int_Xp\left||f|-|g|\right|(|f|^{p-1}+|g|^{p-1})d\mu. \end{aligned} \] By Hölder's inequality, we have \[ \begin{aligned} \int_X ||f|-|g||(|f|^{p-1}+|g|^{p-1})d\mu & \leq \left[\int_X \left||f|-|g|\right|^pd\mu\right]^{1/p}\left[\int_X\left(|f|^{p-1}+|g|^{p-1}\right)^qd\mu\right]^{1/q} \\ &\leq \left[\int_X \left|f-g\right|^pd\mu\right]^{1/p}\left[\int_X\left(|f|^{p-1}+|g|^{p-1}\right)^qd\mu\right]^{1/q} \\ &=\lVert f-g \rVert_p \left[\int_X\left(|f|^{p-1}+|g|^{p-1}\right)^qd\mu\right]^{1/q}. \end{aligned} \] By Minkowski's inequality, we have \[ \left[\int_X\left(|f|^{p-1}+|g|^{p-1}\right)^qd\mu\right]^{1/q} \leq \left[\int_X|f|^{(p-1)q}d\mu\right]^{1/q}+\left[\int_X |g|^{(p-1)q}d\mu\right]^{1/q}. \] Now things are clear. Since \(1/p+1/q=1\), or equivalently \(1/q=(p-1)/p\), we have \((p-1)q=p\); so if \(\lVert f \rVert_p\), \(\lVert g \rVert_p \leq R\), then \[ \left[\int_X|f|^{(p-1)q}d\mu\right]^{1/q}+\left[\int_X |g|^{(p-1)q}d\mu\right]^{1/q} = \lVert f \rVert_p^{p-1}+\lVert g \rVert_p^{p-1} \leq 2R^{p-1}. \] Combining the inequalities above, we get \[ d_1(\lambda(f),\lambda(g)) \leq 2pR^{p-1}\lVert f-g \rVert_p =2pR^{p-1}d_p(f,g), \] hence \(\lambda\) is continuous.
Conclusion and further
We have proved that \(\lambda\) is continuous, and when \(0<p<1\), we have seen that \(\lambda\) is Lipschitz continuous. It's natural to think about its differentiability afterwards, but the absolute value function is not even differentiable, so we may have no chance. But this is still a fine enough result. For example, we have no restriction on \((X,\mathfrak{M},\mu)\) other than the positivity of \(\mu\). Therefore we may take \(\mathbb{R}^n\) with the Lebesgue measure here, or we can take something else. It's also interesting how we use elementary calculus to solve some much more abstract problems.
Posted 2020-09-19 Updated 2021-11-27 Analysis / Functional Analysis / Linear Functional / Integration Theory
The Riesz-Markov-Kakutani Representation Theorem
This post is intended to establish the existence of the Lebesgue measure in the future, which is often denoted by \(m\). In fact, the Lebesgue measure follows as a special case of the R-M-K representation theorem. You may not believe it, but euclidean properties of \(\mathbb{R}^k\) play no role in the existence of \(m\). The only topological property that works is the fact that \(\mathbb{R}^k\) is a locally compact Hausdorff space. The theorem is named after F. Riesz, who introduced it for continuous functions on \([0,1]\) (with respect to the Riemann-Stieltjes integral). Years later, after the generalization done by A. Markov and S. Kakutani, we are able to view it on a locally compact Hausdorff space. You may find there are some over-generalized properties, but this is intended to let you enjoy more along the way (there are some tools related to differential geometry). Also there are many topology and analysis tricks worth your attention.
Different kinds of topological spaces
Again, euclidean topology plays no role in this proof. We need to specify the topology for different reasons. This is similar to what we do in linear functional analysis. Throughout, let \(X\) be a topological space.
0.0 Definition. \(X\) is a Hausdorff space if the following is true: If \(p \in X\), \(q\in X\) but \(p \neq q\), then there are two disjoint open sets \(U\) and \(V\) such that \(p \in U\) and \(q \in V\).
0.1 Definition.
\(X\) is locally compact if every point of \(X\) has a neighborhood whose closure is compact.
0.2 Remarks. A Hausdorff space is also called a \(T_2\) space (see the Kolmogorov classification) or a separated space. There is a classic example of a locally compact Hausdorff space: \(\mathbb{R}^n\). It is trivial to verify this. But this is far from being enough. In the future we will see that we can construct some ridiculous but mathematically valid measures.
0.3 Definition. A set \(E \subset X\) is called \(\sigma\)-compact if \(E\) is a countable union of compact sets. Note that every open subset of a euclidean space \(\mathbb{R}^n\) is \(\sigma\)-compact, since it can always be written as a countable union of closed balls (which are compact).
0.4 Definition. A covering of \(X\) is locally finite if every point has a neighborhood which intersects only finitely many elements of the covering. Of course, if the covering is already finite, it's also locally finite.
0.5 Definition. A refinement of a covering of \(X\) is a second covering, each element of which is contained in an element of the first covering.
0.6 Definition. \(X\) is paracompact if it is Hausdorff, and every open covering has a locally finite open refinement. Obviously any compact space is paracompact.
0.7 Theorem. If \(X\) is a second countable Hausdorff space and is locally compact, then \(X\) is paracompact. For a proof, see this [Theorem 2.6]. One uses this to prove that a differentiable manifold admits a partition of unity.
0.8 Theorem. If \(X\) is locally compact and \(\sigma\)-compact, then \(X=\bigcup_{i=1}^{\infty}K_i\) where for all \(i \in \mathbb{N}\), \(K_i\) is compact and \(K_i \subset\operatorname{int}K_{i+1}\).
Partition of unity
The basic technical tool in the theory of differential manifolds is the existence of a partition of unity. We will steal this tool for the application of analysis theory.
1.0 Definition. A partition of unity on \(X\) is a collection \((g_i)\) of continuous real-valued functions on \(X\) such that: (1) \(g_i \geq 0\) for each \(i\); (2) every \(x \in X\) has a neighborhood \(U\) such that \(U \cap \operatorname{supp}(g_i)=\varnothing\) for all but finitely many of the \(g_i\); (3) for each \(x \in X\), we have \(\sum_{i}g_i(x)=1\). (That's why you see the word 'unity'.)
One should be reminded that partitions of unity are frequently used in many other fields. For example, in differential geometry, one uses them to find a Riemannian structure on a smooth manifold. In generalised function theory, one uses them to find the connection between local and global properties as well.
1.1 Definition. A partition of unity \((g_i)\) on \(X\) is subordinate to an open cover of \(X\) if and only if for each \(g_i\) there is an element \(U\) of the cover such that \(\operatorname{supp}(g_i) \subset U\). We say \(X\) admits partitions of unity if and only if for every open cover of \(X\), there exists a partition of unity subordinate to the cover.
1.2 Theorem. A Hausdorff space admits partitions of unity if and only if it is paracompact (the 'only if' part is by considering the definition of partition of unity; for the 'if' part, see here). As a corollary, we have:
1.3 Corollary. Suppose \(V_1,\cdots,V_n\) are open subsets of a locally compact Hausdorff space \(X\), \(K\) is compact, and \[ K \subset \bigcup_{k=1}^{n}V_k. \] Then there exist functions \(h_1,\dots,h_n\), forming a partition of unity subordinate to the cover \((V_k)\), such that \(\operatorname{supp}(h_i) \subset V_i\) and \(\sum_{i=1}^{n}h_i(x)=1\) for all \(x \in K\).
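As a small illustration of Corollary 1.3 on the real line (a sketch added here, not needed anywhere in the proof), one can build such \(h_i\) explicitly: continuous plateau functions stand in for the Urysohn functions \(K_i \prec g_i \prec V_i\), and the telescoping trick \(h_1=g_1\), \(h_2=(1-g_1)g_2, \ldots\) produces functions with \(\operatorname{supp}(h_i)\subset V_i\) and \(\sum_i h_i = 1\) on \(K\). The cover and margins below are made-up data.

```python
import numpy as np

def plateau(x, a, b, margin):
    """Continuous function equal to 1 on [a+margin, b-margin] and 0 outside (a, b),
    playing the role of a Urysohn function K_i ≺ g_i ≺ V_i."""
    return np.minimum(np.clip((x - a) / margin, 0.0, 1.0),
                      np.clip((b - x) / margin, 0.0, 1.0))

# Compact set K = [0, 1] covered by two open intervals V_1, V_2
V = [(-0.2, 0.6), (0.4, 1.2)]
x = np.linspace(-0.3, 1.3, 1601)
g = [plateau(x, a, b, margin=0.1) for (a, b) in V]

# Telescoping: h_1 = g_1, h_2 = (1 - g_1) g_2, ..., so that
# sum_i h_i = 1 - prod_i (1 - g_i), which equals 1 wherever some g_i = 1.
h, carry = [], np.ones_like(x)
for gi in g:
    h.append(carry * gi)
    carry = carry * (1.0 - gi)

on_K = (x >= 0.0) & (x <= 1.0)
print(np.allclose(np.sum(h, axis=0)[on_K], 1.0))        # sum of h_i equals 1 on K
print(all(np.all(hi[(x <= a) | (x >= b)] == 0.0)        # supp(h_i) is contained in V_i
          for hi, (a, b) in zip(h, V)))
```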
Urysohn's lemma (for locally compact Hausdorff spaces) 2.0 Notation. The notation \[ K \prec f \] will mean that \(K\) is a compact subset of \(X\), that \(f \in C_c(X)\), that \(f(X) \subset [0,1]\), and that \(f(x)=1\) for all \(x \in K\). The notation \[ f \prec V \] will mean that \(V\) is open, that \(f \in C_c(X)\), that \(f(X) \subset [0,1]\) and that \(\operatorname{supp}(f) \subset V\). If both hold, we write \[ K \prec f \prec V. \] 2.1 Remarks. Clearly, with this notation, we are able to simplify the statement of being subordinate. We merely need to write \(g_i \prec U\) in 1.1 instead of \(\operatorname{supp}(g_i) \subset U\). 2.2 Urysohn's Lemma for locally compact Hausdorff space. Suppose \(X\) is locally compact and Hausdorff, \(V\) is open in \(X\) and \(K \subset V\) is a compact set. Then there exists an \(f \in C_c(X)\) such that \[ K \prec f \prec V. \] 2.3 Remarks. By \(f \in C_c(X)\) we shall mean that \(f\) is a continuous function with compact support. This relation also says that \(\chi_K \leq f \leq \chi_V\). For more details and the proof, visit this page. This lemma is usually stated for normal spaces; for a proof at that level of generality, see arXiv:1910.10381. (Question: why do we consider two disjoint closed subsets in that setting?) The \(\varepsilon\)-definitions of \(\sup\) and \(\inf\) We will be using the \(\varepsilon\)-definitions of \(\sup\) and \(\inf\), which will make the proof easier in this case; if you are not familiar with them, the arguments below may look troublesome. So we put them down here. Let \(S\) be a nonempty subset of the real numbers that is bounded below. The lower bound \(w\) is the infimum of \(S\) if and only if for any \(\varepsilon>0\), there exists an element \(x_\varepsilon \in S\) such that \(x_\varepsilon<w+\varepsilon\). This definition of \(\inf\) is equivalent to the usual one: let \(S\) be a set that is bounded below. We say \(w=\inf S\) when \(w\) satisfies the following conditions. \(w\) is a lower bound of \(S\). If \(t\) is also a lower bound of \(S\), then \(t \leq w\). We have the analogous definition for \(\sup\). The main theorem Analysis is full of vector spaces and linear transformations. We already know that the Lebesgue integral induces a linear functional. That is, for example, \(L^1([0,1])\) is a vector space, and we have a linear functional by \[ f \mapsto \int_0^1 f(x)dx. \] But what about the reverse? Given a linear functional, is it guaranteed that we have a measure that establishes the integral? The R-M-K theorem answers this question affirmatively. The functional to be discussed is positive, which means that if \(\Lambda\) is positive and \(f(X) \subset [0,\infty)\), then \(\Lambda{f} \in [0,\infty)\). Let \(X\) be a locally compact Hausdorff space, and let \(\Lambda\) be a positive linear functional on \(C_c(X)\). Then there exists a \(\sigma\)-algebra \(\mathfrak{M}\) on \(X\) which contains all Borel sets in \(X\), and there exists a unique positive measure \(\mu\) on \(\mathfrak{M}\) which represents \(\Lambda\) in the sense that \[ \Lambda{f}=\int_X fd\mu \] for all \(f \in C_c(X)\). For the measure \(\mu\) and the \(\sigma\)-algebra \(\mathfrak{M}\), we have four assertions: \(\mu(K)<\infty\) for every compact set \(K \subset X\). For every \(E \in \mathfrak{M}\), we have \[ \mu(E)=\inf\{\mu(V):E \subset V, V\text{ open}\}. \] For every open set \(E\) and every \(E \in \mathfrak{M}\) with \(\mu(E)<\infty\), we have \[ \mu(E)=\sup\{\mu(K):K \subset E, K\text{ compact}\}.
\] If \(E \in \mathfrak{M}\), \(A \subset E\), and \(\mu(E)=0\), then \(A \in \mathfrak{M}\). Remarks before proof. It would be great if we can establish the Lebesgue measure \(m\) by putting \(X=\mathbb{R}^n\). But we need a little more extra work to get this result naturally. If 2 is satisfied, we say \(\mu\) is outer regular, and inner regular for 3. If both hold, we say \(\mu\) is regular. The partition of unity and Urysohn's lemma will be heavily used in the proof of the main theorem, so make sure you have no problem with it. It can also be extended to complex space, but that requires much non-trivial work. Proving the theorem The proof is rather long so we will split it into several steps. I will try my best to make every line clear enough. Step 0 - Construction of \(\mu\) and \(\mathfrak{M}\) For every open set \(V \in X\), define \[ \mu(V)=\sup\{\Lambda{f}:f \prec V\}. \] If \(V_1 \subset V_2\) and both are open, we claim that \(\mu(V_1) \leq \mu(V_2)\). For \(f \prec V_1\), since \(\operatorname{supp}f \subset V_1 \subset V_2\), we see \(f \prec V_2\). But we are able to find some \(g \prec V_2\) such that \(g \geq f\), or more precisely, \(\operatorname{supp}(g) \supset \operatorname{supp}(f)\). By taking another look at the proof of Urysohn's lemma for locally compact Hausdorff space, we see there is an open set G with compact closure such that \[ \operatorname{supp}(f) \subset G \subset \overline{G} \subset V_2. \] By Urysohn's lemma to the pair \((\overline{G},V_2)\), we see there exists a function \(g \in C_c(X)\) such that \[ \overline{G} \prec g \prec V_2. \] Therefore \[ \operatorname{supp}(f) \subset \overline{G} \subset \operatorname{supp}(g). \] Thus for any \(f \prec V_1\) and \(g \prec V_2\), we have \(\Lambda{g} \geq \Lambda{f}\) (monotonic) since \(\Lambda{g}-\Lambda{f}=\Lambda{(g-f)}\geq 0\). By taking the supremum over \(f\) and \(g\), we see \[ \mu(V_1) \leq \mu(V_2). \] The 'monotonic' property of such \(\mu\) enables us to define \(\mu(E)\) for all \(E \subset X\) by \[ \mu(E)=\inf \{\mu(V):E \subset V, V\text{ open}\}. \] The definition above is trivial to valid for open sets. Sometimes people say \(\mu\) is the outer measure. We will discuss other kind of sets thoroughly in the following steps. Warning: we are not saying that \(\mathfrak{M} = 2^X\). The crucial property of \(\mu\), namely countable additivity, will be proved only on a certain \(\sigma\)-algebra. It follows from the definition of \(\mu\) that if \(E_1 \subset E_2\), then \(\mu(E_1) \leq \mu(E_2)\). Let \(\mathfrak{M}_F\) be the class of all \(E \subset X\) which satisfy the two following conditions: \(\mu(E) <\infty\). 'Inner regular': \[ \mu(E)=\sup\{\mu(K):K \subset E, K\text{ compact}\}. \] One may say here \(\mu\) is the 'inner measure'. Finally, let \(\mathfrak{M}\) be the class of all \(E \subset X\) such that for every compact \(K\), we have \(E \cap K \in \mathfrak{M}_F\). We shall show that \(\mathfrak{M}\) is the desired \(\sigma\)-algebra. Remarks of Step 0. So far, we have only proved that \(\mu(E) \geq 0\) for all \(E {\color\red{\subset}}X\). What about the countable additivity? It's clear that \(\mathfrak{M}_F\) and \(\mathfrak{M}\) has some strong relation. We need to get a clearer view of it. Also, if we restrict \(\mu\) to \(\mathfrak{M}_F\), we restrict ourself to finite numbers. In fact, we will show finally \(\mathfrak{M}_F \subset \mathfrak{M}\). 
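To see what this definition produces in a familiar case, suppose for a moment that \(X=\mathbb{R}\) and that \(\Lambda f=\int f(x)dx\) is the ordinary Riemann integral on \(C_c(\mathbb{R})\) (an illustrative special case; it is the one that eventually yields the Lebesgue measure). For a bounded interval \(V=(a,b)\) we then get \[ \mu((a,b))=\sup\{\Lambda f:f \prec (a,b)\}=b-a. \] Indeed, \(\Lambda f \leq b-a\) for every \(f \prec (a,b)\) because \(0 \leq f \leq 1\) and \(\operatorname{supp}(f) \subset (a,b)\); conversely, for small \(\delta>0\) the piecewise linear function \(f_\delta\) with \(f_\delta=1\) on \([a+\delta,b-\delta]\) and \(f_\delta=0\) outside \((a+\delta/2,\,b-\delta/2)\) satisfies \(f_\delta \prec (a,b)\) and \(\Lambda f_\delta \geq b-a-2\delta\). This is the computation hiding behind the claim that the Lebesgue measure will follow as a special case.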
Step 1 - The 'measure' of compact sets (outer) If \(K\) is compact, then \(K \in \mathfrak{M}_F\), and \[ \mu(K)=\inf\{\Lambda{f}:K \prec f\}<\infty \] Define \(V_\alpha=f^{-1}(\alpha,1]\) for \(K \prec f\) and \(0 < \alpha < 1\). Since \(f(x)=1\) for all \(x \in K\), we have \(K \subset V_{\alpha}\). Therefore by definition of \(\mu\) for all \(E \subset X\), we have \[ \mu(K) \leq \mu(V_\alpha)=\sup\{\Lambda{g}:g \prec V_{\alpha}\} < \frac{1}{\alpha}\Lambda{f}. \] Note that \(f \geq \alpha{g}\) whenever \(g \prec V_{\alpha}\) since \(\alpha{g} \leq \alpha < f\). Since \(\mu(K)\) is an lower bound of \(\frac{1}{\alpha}\Lambda{f}\) with \(0<\alpha<1\), we see \[ \mu(K) \leq \inf_{\alpha \in (0,1)}\{\frac{1}{\alpha}\Lambda{f}\}=\Lambda{f}. \] Since \(f(X) \in [0,1]\), we have \(\Lambda{f}\) to be finite. Namely \(\mu(K) <\infty\). Since \(K\) itself is compact, we see \(K \in \mathfrak{M}_F\). To prove the identity, note that there exists some \(V \supset K\) such that \(\mu(V)<\mu(K)+\varepsilon\) for some \(\varepsilon>0\). By Urysohn's lemma, there exists some \(h \in C_c(X)\) such that \(K \prec h \prec V\). Therefore \[ \Lambda{h} \leq \mu(V) < \mu(K)+\varepsilon \] Therefore \(\mu(K)\) is the infimum of \(\Lambda{h}\) with \(K \prec h\). Remarks of Step 1. We have just proved assertion 1 of the property of \(\mu\). The hardest part of this proof is the inequality \[ \mu(V)<\mu(K)+\varepsilon. \] But this is merely the \(\varepsilon\)-definition of \(\inf\). Note that \(\mu(K)\) is the infimum of \(\mu(V)\) with \(V \supset K\). For any \(\varepsilon>0\), there exists some open \(V\) for what? Under certain conditions, this definition is much easier to use. Now we will examine the relation between \(\mathfrak{M}_F\) and \(\tau_X\), namely the topology of \(X\). Step 2 - The 'measure' of open sets (inner) \(\mathfrak{M}_F\) contains every open set \(V\) with \(\mu(V)<\infty\). It suffices to show that for open set \(V\), we have \[ \mu(V)=\sup\{\mu(K):K \subset E, K\text{ compact}\}. \] For \(0<\varepsilon<\mu(V)\), we see there exists an \(f \prec V\) such that \(\Lambda{f}>\mu(V)-\varepsilon\). If \(W\) is any open set which contains \(K= \operatorname{supp}(f)\), then \(f \prec W\), and therefore \(\Lambda{f} \leq \mu(W)\). Again by definition of \(\mu(K)\), we see \[ \Lambda{f}\leq\mu(K). \] Therefore \[ \mu(V)-\varepsilon<\Lambda{f}\leq\mu(K)\leq\mu(V). \] This is exactly the definition of \(\sup\). The identity is proved. Remarks of Step 2. It's important to that this identity can only be satisfied by open sets and sets \(E\) with \(\mu(E)<\infty\), the latter of which will be proved in the following steps. This is the flaw of this theorem. With these preparations however, we are able to show the countable additivity of \(\mu\) on \(\mathfrak{M}_F\). Step 3 - The subadditivity of \(\mu\) on \(2^X\) If \(E_1,E_2,E_3,\cdots\) are arbitrary subsets of \(X\), then \[ \mu\left(\bigcup_{k=1}^{\infty}E_k\right) \leq \sum_{k=1}^{\infty}\mu(E_k) \] First we show this holds for finitely many open sets. This is tantamount to show that \[ \mu(V_1 \cup V_2)\leq \mu(V_1)+\mu(V_2) \] if \(V_1\) and \(V_2\) are open. Pick \(g \prec V_1 \cup V_2\). This is possible due to Urysohn's lemma. By corollary 1.3, there is a partition of unity \((h_1,h_2)\) subordinate to \((V_1,V_2)\) in the sense of corollary 1.3. Therefore, \[ \begin{aligned} \Lambda(g)&=\Lambda((h_1+h_2)g) \\ &=\Lambda(h_1g)+\Lambda(h_2g) \\ &\leq\mu(V_1)+\mu(V_2). 
\end{aligned} \] Notice that \(h_1g \prec V_1\) and \(h_2g \prec V_2\). By taking the supremum, we have \[ \mu(V_1 \cup V_2)\leq \mu(V_1)+\mu(V_2). \] Now we go back to arbitrary subsets of \(X\). If \(\mu(E_i)=\infty\) for some \(i\), then there is nothing to prove. Therefore we shall assume that \(\mu(E_i)<\infty\) for all \(i\). Fix \(\varepsilon>0\). By definition of \(\mu(E_i)\), we see there are open sets \(V_i \supset E_i\) such that \[ \mu(V_i)<\mu(E_i)+\frac{\varepsilon}{2^i}. \] Put \(V=\bigcup_{i=1}^{\infty}V_i\), and choose \(f \prec V\). Since \(f \in C_c(X)\), there is a finite collection of \(V_i\) that covers the support of \(f\). Therefore without loss of generality, we may say that \[ f \prec V_1 \cup V_2 \cup \cdots \cup V_n \] for some \(n\). We therefore obtain \[ \begin{aligned} \Lambda{f} &\leq \mu(V_1 \cup V_2 \cup \cdots \cup V_n) \\ &\leq \mu(V_1)+\mu(V_2)+\cdots+\mu(V_n) \\ &\leq \sum_{i=1}^{n}\left(\mu(E_i)+\frac{\varepsilon}{2^i}\right) \\ &\leq \sum_{i=1}^{\infty}\mu(E_i)+\varepsilon, \end{aligned} \] for all \(f \prec V\). Since \(\bigcup E_i \subset V\), we have \(\mu(\bigcup E_i) \leq \mu(V)\). Therefore \[ \mu(\bigcup_{i=1}^{\infty}E_i)\leq\mu(V)=\sup\{\Lambda{f}:f \prec V\}\leq\sum_{i=1}^{\infty}\mu(E_i)+\varepsilon. \] Since \(\varepsilon\) is arbitrary, the inequality is proved. Remarks of Step 3. Again, we are using the \(\varepsilon\)-definition of \(\inf\). One may say this step showed the subadditivity of the outer measure. Also note the geometric series \(\sum_{k=1}^{\infty}\frac{\varepsilon}{2^k}=\varepsilon\). Step 4 - Additivity of \(\mu\) on \(\mathfrak{M}_F\) Suppose \(E=\bigcup_{i=1}^{\infty}E_i\), where \(E_1,E_2,\cdots\) are pairwise disjoint members of \(\mathfrak{M}_F\), then \[ \mu(E)=\sum_{i=1}^{\infty}\mu(E_i). \] If \(\mu(E)<\infty\), we also have \(E \in \mathfrak{M}_F\). As a dual to Step 3, we first show this holds for finitely many compact sets. As proved in Step 1, compact sets are in \(\mathfrak{M}_F\). Suppose now \(K_1\) and \(K_2\) are disjoint compact sets. We want to show that \[ \mu(K_1 \cup K_2)=\mu(K_1)+\mu(K_2). \] Note that compact sets in a Hausdorff space are closed. Therefore we are able to apply Urysohn's lemma to the pair \((K_1,K_2^c)\). That said, there exists an \(f \in C_c(X)\) such that \[ K_1 \prec f \prec K_2^c. \] In other words, \(f(x)=1\) for all \(x \in K_1\) and \(f(x)=0\) for all \(x \in K_2\), since \(\operatorname{supp}(f) \cap K_2 = \varnothing\). By Step 1, since \(K_1 \cup K_2\) is compact, for any \(\varepsilon>0\) there exists some \(g \in C_c(X)\) such that \[ K_1 \cup K_2 \prec g \quad \text{and} \quad \Lambda(g) < \mu(K_1 \cup K_2)+\varepsilon. \] Now things become tricky. We are able to write \(g\) by \[ g=fg+(1-f)g. \] But \(K_1 \prec fg\) and \(K_2 \prec (1-f)g\) by the properties of \(f\) and \(g\). Since \(\mu(K_i)\) is the infimum of \(\Lambda h\) over \(K_i \prec h\) (Step 1) and \(\Lambda\) is linear, we have \[ \mu(K_1)+\mu(K_2) \leq \Lambda(fg)+\Lambda((1-f)g)=\Lambda(g) < \mu(K_1 \cup K_2)+\varepsilon. \] Since \(\varepsilon\) is arbitrary, we have \[ \mu(K_1)+\mu(K_2) \leq \mu(K_1 \cup K_2). \] On the other hand, by Step 3, we have \[ \mu(K_1 \cup K_2) \leq \mu(K_1)+\mu(K_2). \] Therefore they must be equal. If \(\mu(E)=\infty\), there is nothing to prove. So now we should assume that \(\mu(E)<\infty\). Fix \(\varepsilon>0\). Since \(E_i \in \mathfrak{M}_F\), there are compact sets \(K_i \subset E_i\) with \[ \mu(K_i) > \mu(E_i)-\frac{\varepsilon}{2^i}. \] Putting \(H_n=K_1 \cup K_2 \cup \cdots \cup K_n\), we see \(E \supset H_n\) and, since the \(K_i\) are pairwise disjoint compact sets, the finite case above gives \[ \mu(E) \geq \mu(H_n)=\sum_{i=1}^{n}\mu(K_i)>\sum_{i=1}^{n}\mu(E_i)-\varepsilon.
\] This inequality holds for all \(n\) and \(\varepsilon\), therefore \[ \mu(E) \geq \sum_{i=1}^{\infty}\mu(E_i). \] Therefore by Step 3, the identity holds. Finally we shall show that \(E \in \mathfrak{M}_F\) if \(\mu(E) <\infty\). To make it more understandable, we will use elementary calculus notation. If we write \(\mu(E)=x\) and \(x_n=\sum_{i=1}^{n}\mu(E_i)\), we see \[ \lim_{n \to \infty}x_n=x. \] Therefore, for any \(\varepsilon>0\), there exists some \(N \in \mathbb{N}\) such that \[ x-x_N<\varepsilon. \] This is tantamount to \[ \mu(E)<\sum_{i=1}^{N}\mu(E_i)+\varepsilon. \] But by definition of the compact set \(H_N\) above, we see \[ \mu(E)<{\color\red{\sum_{i=1}^{N}\mu(E_i)}}+\varepsilon<{\color\red {\mu(H_N)+\varepsilon}}+\varepsilon=\mu(H_N)+2\varepsilon. \] Hence \(E\) satisfies the requirements of \(\mathfrak{M}_F\), thus an element of it. Remarks of Step 4. You should realize that we are heavily using the \(\varepsilon\)-definition of \(\sup\) and \(\inf\). As you may guess, \(\mathfrak{M}_F\) should be a subset of \(\mathfrak{M}\) though we don't know whether it is a \(\sigma\)-algebra or not. In other words, we hope that the countable additivity of \(\mu\) holds on a \(\sigma\)-algebra that is properly extended from \(\mathfrak{M}_F\). However it's still difficult to show that \(\mathfrak{M}\) is a \(\sigma\)-algebra. We need more properties of \(\mathfrak{M}_F\) to go on. Step 5 - The 'continuity' of \(\mathfrak{M}_F\). If \(E \in \mathfrak{M}_F\) and \(\varepsilon>0\), there is a compact \(K\) and an open \(V\) such that \(K \subset E \subset V\) and \(\mu(V-K)<\varepsilon\). There are two ways to write \(\mu(E)\), namely \[ \mu(E)=\sup\{\mu(K):K \subset E\} \quad \text{and} \quad \mu(E)=\inf\{\mu(V):V\supset E\} \] where \(K\) is compact and \(V\) is open. Therefore there exists some \(K\) and \(V\) such that \[ \mu(V)-\frac{\varepsilon}{2}<\mu(E)<\mu(K)+\frac{\varepsilon}{2}. \] Since \(V-K\) is open, and \(\mu(V-K)<\infty\), we have \(V-K \in \mathfrak{M}_F\). By Step 4, we have \[ \mu(K)+\mu(V-K)=\mu(V) <\mu(K)+\varepsilon. \] Therefore \(\mu(V-K)<\varepsilon\) as proved. Remarks of Step 5. You should be familiar with the \(\varepsilon\)-definitions of \(\sup\) and \(\inf\) now. Since \(V-K =V\cap K^c \subset V\), we have \(\mu(V-K)\leq\mu(V)<\mu(E)+\frac{\varepsilon}{2}<\infty\). Step 6 - \(\mathfrak{M}_F\) is closed under certain operations If \(A,B \in \mathfrak{M}_F\), then \(A-B,A\cup B\) and \(A \cap B\) are elements of \(\mathfrak{M}_F\). This shows that \(\mathfrak{M}_F\) is closed under union, intersection and relative complement. In fact, we merely need to prove \(A-B \in \mathfrak{M}_F\), since \(A \cup B=(A-B) \cup B\) and \(A\cap B = A-(A-B)\). By Step 5, for \(\varepsilon>0\), there are sets \(K_A\), \(K_B\), \(V_A\), \(V_B\) such that \(K_A \subset A \subset V_A\), \(K_B \subset B \subset V_B\), and for \(A-B\) we have \[ A-B \subset V_A-K_B \subset (V_A-K_A) \cup (K_A-V_B) \cup (V_B-K_B). \] With an application of Step 3 and 5, we have \[ \mu(A-B) \leq \mu(V_A-K_A)+\mu(K_A-V_B)+\mu(V_B-K_B)< \varepsilon+\mu(K_A-V_B)+\varepsilon. \] Since \(K_A-V_B\) is a closed subset of \(K_A\), we see \(K_A-V_B\) is compact as well (a closed subset of a compact set is compact). But \(K_A-V_B \subset A-B\), and \(\mu(A-B) <\mu(K_A-V_B)+2\varepsilon\), we see \(A-B\) meet the requirement of \(\mathfrak{M}_F\) (, the fact that \(\mu(A-B)<\infty\) is trivial since \(\mu(A-B)<\mu(A)\)). 
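For completeness, the displayed inclusion used above can be checked in two small steps: since \(A \subset V_A\) and \(K_B \subset B\), we have \(A-B \subset V_A-K_B\), and \[ V_A-K_B \subset (V_A-K_A)\cup(K_A-K_B), \qquad K_A-K_B \subset (K_A-V_B)\cup(V_B-K_B), \] where the second inclusion uses \(K_B \subset V_B\). Combining the two gives the chain \(A-B \subset (V_A-K_A) \cup (K_A-V_B) \cup (V_B-K_B)\).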
Since \(A-B\) and \(B\) are pairwise disjoint members of \(\mathfrak{M}_F\), we see \[ \mu(A \cup B)=\mu(A-B)+\mu(B)<\infty. \] Thus \(A \cup B \in \mathfrak{M}_F\). Since \(A,A-B \in \mathfrak{M}_F\), we see \(A \cap B = A-(A-B) \in \mathfrak{M}_F\). Remarks of Step 6. In this step, we demonstrated several ways to express a set, all of which end up with a huge simplification. Now we are able to show that \(\mathfrak{M}_F\) is a subset of \(\mathfrak{M}\). Step 7 - \(\mathfrak{M}_F \subset \mathfrak{M}\) There is a precise relation between \(\mathfrak{M}\) and \(\mathfrak{M}_F\) given by \[ \mathfrak{M}_F=\{E \in \mathfrak{M}:\mu(E)<\infty\} \subset \mathfrak{M}. \] If \(E \in \mathfrak{M}_F\), we shall show that \(E \in \mathfrak{M}\). For compact \(K\in\mathfrak{M}_F\) (Step 1), by Step 6, we see \(K \cap E \in \mathfrak{M}_F\), therefore \(E \in \mathfrak{M}\). If \(E \in \mathfrak{M}\) with \(\mu(E)<\infty\) however, we need to show that \(E \in \mathfrak{M}_F\). By definition of \(\mu\), for \(\varepsilon>0\), there is an open \(V\) such that \[ \mu(V)<\mu(E)+\varepsilon<\infty. \] Therefore \(V \in \mathfrak{M}_F\). By Step 5, there is a compact set \(K\) such that \(\mu(V-K)<\varepsilon\) (the open set containing \(V\) should be \(V\) itself). Since \(E \cap K \in \mathfrak{M}_F\), there exists a compact set \(H \subset E \cap K\) with \[ \mu(E \cap K)<\mu(H)+\varepsilon. \] Since \(E \subset (E \cap K) \cup (V-K)\), it follows from Step 1 that \[ \mu(E) \leq {\color\red{\mu(E\cap K)}}+\mu(V-K)<{\color\red{\mu(H)+\varepsilon}}+\varepsilon=\mu(H)+2\varepsilon. \] Therefore \(E \in \mathfrak{M}_F\). Remarks of Step 7. Several tricks in the preceding steps are used here. Now we are pretty close to the fact that \((X,\mathfrak{M},\mu)\) is a measure space. Note that for \(E \in \mathfrak{M}-\mathfrak{M}_F\), we have \(\mu(E)=\infty\), but we have already proved the countable additivity for \(\mathfrak{M}_F\). Is it 'almost trivial' for \(\mathfrak{M}\)? Before that, we need to show that \(\mathfrak{M}\) is a \(\sigma\)-algebra. Note that assertion 3 of \(\mu\) has been proved. Step 8 - \(\mathfrak{M}\) is a \(\sigma\)-algebra in \(X\) containing all Borel sets We will validate the definition of \(\sigma\)-algebra one by one. \(X \in \mathfrak{M}\). For any compact \(K \subset X\), we have \(K \cap X=K\). But as proved in Step 1, \(K \in \mathfrak{M}_F\), therefore \(X \in \mathfrak{M}\). If \(A \in \mathfrak{M}\), then \(A^c \in\mathfrak{M}\). If \(A \in \mathfrak{M}\), then \(A \cap K \in \mathfrak{M}_F\). But \[ K-(A \cap K)=K \cap(A^c \cup K^c)=K\cap A^c \cup \varnothing=K \cap A^c. \] By Step 1 and Step 6, we see \(K \cap A^c \in \mathfrak{M}_F\), thus \(A^c \in \mathfrak{M}\). If \(A_n \in \mathfrak{M}\) for all \(n \in \mathbb{N}\), then \(A=\bigcup_{n=1}^{\infty}A_n \in \mathfrak{M}\). We assign an auxiliary sequence of sets inductively. For \(n=1\), we write \(B_1=A_1 \cap K\) where \(K\) is compact. Then \(B_1 \in \mathfrak{M}_F\). For \(n \geq 2\), we write \[ B_n=(A_n \cap K)-(B_1 \cup \cdots\cup B_{n-1}). \] Since \(A_n \cap K \in \mathfrak{M}_F\), \(B_1,B_2,\cdots,B_{n-1} \in \mathfrak{M}_F\), by Step 6, \(B_n \in \mathfrak{M}_F\). Also \(B_n\) is pairwise disjoint. Another set-theoretic manipulation shows that \[ \begin{aligned} A \cap K&=K \cap\left(\bigcup_{n=1}^{\infty}A_n\right) \\ &=\bigcup_{n=1}^{\infty}(K \cap A_n) \\ &=\bigcup_{n=1}^{\infty}B_n \cup(B_1 \cup \cdots\cup B_{n-1}) \\ &=\bigcup_{n=1}^{\infty}B_n. 
\end{aligned} \] Now we are able to evaluate \(\mu(A \cap K)\) by Step 4. \[ \begin{aligned} \mu(A \cap K)&=\sum_{n=1}^{\infty}\mu(B_n) \\ &= \lim_{n \to \infty}\mu(B_1 \cup \cdots \cup B_n) \leq \mu(K)<\infty. \end{aligned} \] Therefore \(A \cap K \in \mathfrak{M}_F\), which implies that \(A \in \mathfrak{M}\). \(\mathfrak{M}\) contains all Borel sets. Indeed, it suffices to prove that \(\mathfrak{M}\) contains all open sets and/or closed sets. We'll show two different paths. Let \(K\) be a compact set. If \(C\) is closed, then \(C \cap K\) is compact, therefore \(C \cap K \in \mathfrak{M}_F\) (by Step 1), and hence \(C \in \mathfrak{M}\). If \(D\) is open, then \(D \cap K=K-(K \cap D^c)\), where both \(K\) and \(K \cap D^c\) are compact and therefore in \(\mathfrak{M}_F\); by Step 6, \(D \cap K \in \mathfrak{M}_F\), and hence \(D \in \mathfrak{M}\). Therefore by 1 or 2, \(\mathfrak{M}\) contains all Borel sets. Step 9 - \(\mu\) is a positive measure on \(\mathfrak{M}\) Again, we will verify all properties of \(\mu\) one by one. \(\mu(E) \geq 0\) for all \(E \in \mathfrak{M}\). This follows immediately from the definition of \(\mu\), since \(\Lambda\) is positive and \(0 \leq f \leq 1\). \(\mu\) is countably additive. If \(A_1,A_2,\cdots\) form a disjoint countable collection of members of \(\mathfrak{M}\), we need to show that \[ \mu\left(\bigcup_{n=1}^{\infty}A_n\right)=\sum_{n=1}^{\infty}\mu(A_n). \] If \(A_n \in \mathfrak{M}_F\) for all \(n\), then this is merely what we have just proved in Step 4. If \(A_j \in \mathfrak{M}-\mathfrak{M}_F\) for some \(j\) however, we have \(\mu(A_j)=\infty\). So \(\sum_n\mu(A_n)=\infty\). For \(\mu(\cup_n A_n)\), notice that \(\cup_n A_n \supset A_j\), so we have \(\mu(\cup_n A_n) \geq \mu(A_j)=\infty\). The identity is now proved. Step 10 - The completeness of \(\mu\) So far assertions 1-3 have been proved. But the final assertion has not been proved explicitly. We do that since this property will be used when discussing the Lebesgue measure \(m\). In fact, this will show that \((X,\mathfrak{M},\mu)\) is a complete measure space. It suffices to show that \(A \in \mathfrak{M}_F\). By monotonicity, \(\mu(A) \leq \mu(E)=0\), so \(\mu(A)=0\) as well. If \(K \subset A\), where \(K\) is compact, then \(\mu(K) \leq \mu(A)=0\). Therefore \(0\) is the supremum of \(\mu(K)\) over compact \(K \subset A\). It follows that \(A \in \mathfrak{M}_F \subset \mathfrak{M}\). Step 11 - The functional and the measure For every \(f \in C_c(X)\), \(\Lambda{f}=\int_X fd\mu\). This is the central result of the theorem. It suffices to prove the inequality \[ \Lambda f \leq \int_X fd\mu \] for all \(f \in C_c(X)\). What about the other side? By the linearity of \(\Lambda\) and \(\int_X \cdot d\mu\), once the inequality above is proved, we have \[ \Lambda(-f)=-\Lambda{f}\leq\int_{X}-fd\mu=-\int_Xfd\mu. \] Therefore \[ \Lambda{f} \geq \int_X fd\mu \] holds as well, and this establishes the equality. Notice that since \(K=\operatorname{supp}(f)\) is compact, we see the range of \(f\) has to be compact. Namely we may assume that \([a,b]\) contains the range of \(f\). For \(\varepsilon>0\), we are able to pick a partition around \([a,b]\) such that \(y_i - y_{i-1}<\varepsilon\) for every \(i\) and \[ y_0 < a < y_1<\cdots<y_n=b. \] Put \[ E_i=\{x:y_{i-1}< f(x) \leq y_i\}\cap K. \] Since \(f\) is continuous, \(f\) is Borel measurable. The sets \(E_i\) are trivially pairwise disjoint Borel sets. Again, there are open sets \(V_i \supset E_i\) such that \[ \mu(V_i) < \mu(E_i)+\frac{\varepsilon}{n} \] for \(i=1,2,\cdots,n\), and such that \(f(x)<y_i + \varepsilon\) for all \(x \in V_i\).
Notice that \((V_i)\) covers \(K\), therefore by the partition of unity, there are a sequence of functions \((h_i)\) such that \(h_i \prec V_i\) for all \(i\) and \(\sum h_i=1\) on \(K\). By Step 1 and the fact that \(f=\sum_i h_i\), we see \[ \mu(K) \leq \Lambda(\sum_i h_i)=\sum_i \Lambda{h_i}. \] By the way we picked \(V_i\), we see \(h_if \leq (y_i+\varepsilon)h_i\). We have the following inequality: \[ \begin{aligned} \Lambda{f} &= \sum_{i=1}^{n}\Lambda(h_if) \leq\sum_{i=1}^{n}(y_i+\varepsilon)\Lambda{h_i} \\ &= \sum_{i=1}^{n}\left(|a|-|a|+y_i+\varepsilon\right)\Lambda{h_i} \\ &=\sum_{i=1}^{n}(|a|+y_i+\varepsilon)\Lambda{h_i}-|a|\sum_{i=1}^{n}\Lambda{h_i}. \end{aligned} \] Since \(h_i \prec V_i\), we have \(\mu(E_i)+\frac{\varepsilon}{n}>\mu(V_i) \geq \Lambda{h_i}\). And we already get \(\sum_i \Lambda{h_i} \geq \mu(K)\). If we put them into the inequality above, we get \[ \begin{aligned} \Lambda{f} &\leq \sum_{i=1}^{n}(|a|+y_i+\varepsilon)\Lambda{h_i}-|a|\sum_{i=1}^{n}\Lambda{h_i} \\ &\leq \sum_{i=1}^{n}(|a|+y_i+\varepsilon){\color\red{(\mu(E_i)+\frac{\varepsilon}{n})}}-|a|\color\red{\mu(K)}. \end{aligned} \] Observe that \(\cup_i E_i=K\), by Step 9 we have \(\sum_{i}\mu(E_i)=\mu(K)\). A slight manipulation shows that \[ \begin{aligned} \sum_{i=1}^{n}(|a|+y_i+\varepsilon)\mu(E_i)-|a|\mu(K)&=|a|\sum_{i=1}^{n}\mu(E_i)-|a|\mu(K)+\sum_{i=1}^{n}(y_i+\varepsilon)\mu(E_i) \\ &=\sum_{i=1}^{n}(y_i-\varepsilon)\mu(E_i)+2\varepsilon\mu(K). \end{aligned} \] Therefore for \(\Lambda f\) we get \[ \begin{aligned} \Lambda{f} &\leq\sum_{i=1}^{n}(|a|+y_i+\varepsilon)(\mu(E_i)+\frac{\varepsilon}{n})-|a|\mu(K) \\ &=\sum_{i=1}^{n}(y_i-\varepsilon)\mu(E_i)+2\varepsilon\mu(K)+\frac{\varepsilon}{n}\sum_{i=1}^n(|a|+y_i+\varepsilon). \end{aligned} \] Now here comes the trickiest part of the whole blog post. By definition of \(E_i\), we see \(f(x) > y_{i-1}>y_{i}-\varepsilon\) for \(x \in E_i\). Therefore we get simple function \(s_n\) by \[ s_n=\sum_{i=1}^{n}(y_i-\varepsilon)\chi_{E_i}. \] If we evaluate the Lebesgue integral of \(f\) with respect to \(\mu\), we see \[ \int_X s_nd\mu={\color\red{\sum_{i=1}^{n}(y_i-\varepsilon)\mu(E_i)}} \leq {\color\red{\int_X fd\mu}}. \] For \(2\varepsilon\mu(K)\), things are simple since \(0\leq\mu(K)<\infty\). Therefore \(2\varepsilon\mu(K) \to 0\) as \(\varepsilon \to 0\). Now let's estimate the final part of the inequality. It's trivial that \(\frac{\varepsilon}{n}\sum_{i=1}^{n}(|a|+\varepsilon)=\varepsilon(\varepsilon+|a|)\). For \(y_i\), observe that \(y_i \leq b\) for all \(i\), therefore \(\frac{\varepsilon}{n}\sum_{i=1}^{n}y_i \leq \frac{\varepsilon}{n}nb=\varepsilon b\). Thus \[ {\color\green{\frac{\varepsilon}{n}\sum_{i=1}^{n}(|a|+y_i+\varepsilon)}} \color\black\leq {\color\green {\varepsilon(|a|+b+\varepsilon)}}\color\black{.} \] Notice that \(b+|a| \geq 0\) since \(b \geq a \geq -|a|\). Our estimation of \(\Lambda{f}\) is finally done: \[ \begin{aligned} \Lambda{f} &\leq{\color\red{\sum_{i=1}^{n}(y_i-\varepsilon)\mu(E_i)}}+2\varepsilon\mu(K)+{\color\green{\frac{\varepsilon}{n}\sum_{i=1}^n(|a|+y_i+\varepsilon)}} \\ &\leq{\color\red {\int_Xfd\mu}}+2\varepsilon\mu(K)+{\color\green{\varepsilon(|a|+b+\varepsilon)}} \\ &= \int_X fd\mu+\varepsilon(2\mu(K)+|a|+b+\varepsilon). \end{aligned} \] Since \(\varepsilon\) is arbitrary, we see \(\Lambda{f} \leq \int_X fd\mu\). The identity is proved. Step 12 - The uniqueness of \(\mu\) If there are two measures \(\mu_1\) and \(\mu_2\) that satisfy assertion 1 to 4 and are correspond to \(\Lambda\), then \(\mu_1=\mu_2\). 
In fact, according to assertions 2 and 3, \(\mu\) is determined by its values on compact subsets of \(X\). It suffices to show that if \(K\) is a compact subset of \(X\), then \(\mu_1(K)=\mu_2(K)\). Fix \(K\) compact and \(\varepsilon>0\). By assertion 2 (outer regularity), there exists an open \(V \supset K\) such that \(\mu_2(V)<\mu_2(K)+\varepsilon\). By Urysohn's lemma, there exists some \(f\) such that \(K \prec f \prec V\). Hence \[ \mu_1(K)=\int_X\chi_K d\mu_1 \leq\int_X fd\mu_1=\Lambda{f}=\int_X fd\mu_2 \\ \leq \int_X \chi_V d\mu_2=\mu_2(V)<\mu_2(K)+\varepsilon. \] Since \(\varepsilon\) is arbitrary, \(\mu_1(K) \leq \mu_2(K)\). If \(\mu_1\) and \(\mu_2\) are exchanged, we see \(\mu_2(K) \leq \mu_1(K)\). The uniqueness is proved. Can we simply put \(X=\mathbb{R}^k\) right now? The answer is no. Note that the outer regularity holds for all sets but the inner one only for open sets and members of \(\mathfrak{M}_F\). But we expect the outer and inner regularity to be 'symmetric'. There is an example showing that local compactness is far from being enough to offer the 'symmetry'. A weird example Define \(X=\mathbb{R}_1 \times \mathbb{R}_2\), where \(\mathbb{R}_1\) is the real line equipped with the discrete metric \(d_1\), and \(\mathbb{R}_2\) is the real line equipped with the euclidean metric \(d_2\). The metric of \(X\) is defined by \[ d_X((x_1,y_1),(x_2,y_2))=d_1(x_1,x_2)+d_2(y_1,y_2). \] The topology \(\tau_X\) induced by \(d_X\) is naturally Hausdorff and locally compact by considering the vertical segments. So what would happen to this weird locally compact Hausdorff space? If \(f \in C_c(X)\), let \(x_1,x_2,\cdots,x_n\) be those values of \(x\) for which \(f(x,y) \neq 0\) for at least one \(y\). Since \(f\) has compact support, there are only finitely many such \(x_i\) (the projection of \(\operatorname{supp}(f)\) onto \(\mathbb{R}_1\) is compact, and compact sets in a discrete metric space are finite). We are able to define a positive linear functional by \[ \Lambda f=\sum_{i=1}^{n}\int_{-\infty}^{+\infty}f(x_i,y)dy=\int_X fd\mu, \] where \(\mu\) is the measure associated with \(\Lambda\) in the sense of the R-M-K theorem. Let \[ E=\mathbb{R}_1 \times \{0\}. \] By squeezing the disjoint vertical segments around \((x_i,0)\), we see \(\mu(K)=0\) for all compact \(K \subset E\) but \(\mu(E)=\infty\). This is in sharp contrast to what we would expect. However, if \(X\) is required to be \(\sigma\)-compact (note that the space in this example is not), this kind of problem disappears neatly. References / Further reading Walter Rudin, Real and Complex Analysis Serge Lang, Fundamentals of Differential Geometry Joel W. Robbin, Partition of Unity Brian Conrad, Paracompactness and local compactness Raoul Bott & Loring W. Tu, Differential Forms in Algebraic Topology Posted 2020-03-29 Updated 2021-10-06 Analysis / Integration Theory / Measure Theory The Lebesgue-Radon-Nikodym theorem and how von Neumann proved it If one wants to learn the fundamental theorem of Calculus in the sense of the Lebesgue integral, properties of measures have to be taken into account. In elementary calculus, one may consider something like \[ df(x)=f'(x)dx \] where \(f\) is differentiable, say, everywhere on an interval. Now we restrict \(f\) to be a differentiable and nondecreasing real function defined on \(I=[a,b]\). Then we get a one-to-one function defined by \[ g(x)=x+f(x) \] For measurable sets \(E\in\mathfrak{M}\), it can be seen that if \(m(E)=0\), we have \(m(g(E))=0\). Moreover, \(g(E) \in \mathfrak{M}\), and \(g\) is one-to-one.
Therefore we can define a measure like \[ \mu(E)=m(g(E)) \] If we have a relation \[ \mu(E)=\int_{E}hdm \] (in fact, this is the Radon-Nikodym theorem we will prove later), the fundamental theorem of calculus for \(f\) becomes somewhat clear since if \(E=[a,x]\), we got \(g(E)=[a+f(a),x+f(x)]\), thus we got \[ \begin{aligned} \mu(E)=m(g(E))&=g(x)-g(a)\\ &=f(x)-f(a)+\int_a^xdt \\ &=\int_a^xh(t)dt \end{aligned} \] which trivially implies \[ f(x)-f(a)=\int_a^x[h(t)-1]dt \] the function \(h\) looks like to be \(g'=f'+1\). We are not proving the fundamental theorem here. But this gives rise to a question. Is it possible to find a function such that \[ \mu(E)=\int_{E}hdm \] one may write as \[ d\mu=hdm \] or, more generally, a measure \(\mu\) with respect to another measure \(\lambda\)? Does this \(\mu\) exist with respect to \(\lambda\)? Does this \(h\) exist? Lot of questions. Luckily the Lebesgue decomposition and Radon-Nikodym theorem make it possible. Let \(\mu\) be a positive measure on a \(\sigma\)-algebra \(\mathfrak{M}\), let \(\lambda\) be any arbitrary measure (positive or complex) defined on \(\mathfrak{M}\). We write \[ \lambda \ll \mu \] if \(\lambda(E)=0\) for every \(E\in\mathfrak{M}\) for which \(\mu(E)=0\). (You may write \(\mu \ll m\) in the previous section.) We say \(\lambda\) is absolutely continuous with respect to \(\mu\). Another relation between measures worth consideration is being mutually singular. If we have \(\lambda(E)=\lambda(A \cap E)\) for every \(E \in \mathfrak{M}\), we say \(\lambda\) is concentrated on \(A\). If we now have two measures \(\mu_1\) and \(\mu_2\), two disjoint sets \(A\) and \(B\) such that \(\mu_1\) is concentrated on \(A\), \(\mu_2\) is concentrated on \(B\), we say \(\mu_1\) and \(\mu_2\) are mutually singular, and write \[ \mu_1 \perp \mu_2 \] The Theorem of Lebesgue-Radon-Nikodym Let \(\mu\) be a positive \(\sigma\)-finite measure on \(\mathfrak{M}\), and \(\lambda\) a complex measure on \(\mathfrak{M}\). There exists a unique pair of complex measures \(\lambda_{ac}\) and \(\lambda_{s}\) on \(\mathfrak{M}\) such that \[ \lambda = \lambda_{ac}+\lambda_s \quad \lambda_{ac}\ll\mu\quad \lambda_s \perp \mu \] There is a unique \(h \in L^1(\mu)\) such that \[ \lambda_{ac}(E)=\int_{E}hd\mu \] for every \(E \in \mathfrak{M}\). The unique pair \((\lambda_{ac},\lambda_s)\) is called the Lebesgue decomposition; the existence of \(h\) is called the Radon-Nikodym theorem, and \(h\) is called the Radon-Nikodym derivative. One also writes \(d\lambda_{ac}=hd\mu\) or \(\frac{d\lambda_{ac}}{d\mu}=h\) in this situation. These are two separate theorems, but von Neumann gave the idea to prove these two at one stroke. If we already have \(\lambda \ll \mu\), then \(\lambda_s=0\) and the Radon-Nikodym derivative shows up in the natural of things. Also, one cannot ignore the fact that \(m\) the Lebesgue measure is \(\sigma\)-finite. Proof explained Step 1 - Construct a bounded functional We are going to employ Hilbert space technique in this proof. Precisely speaking, we are going to construct a bounded linear functional to find another function, namely \(g\), which is the epicentre of this proof. The boundedness of \(\lambda\) is clear since it's complex, but \(\mu\) is only assumed to be \(\sigma\)-finite. Therefore we need some adjustment onto \(\mu\). 
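Before carrying out this adjustment, it may help to keep one concrete instance of the theorem in mind (the measures below are chosen purely as an illustration). Take \(\mathfrak{M}\) to be the Borel sets of \(\mathbb{R}\), \(\mu=m\) the Lebesgue measure, and \[ \lambda(E)=\int_E e^{-|x|}dm+\delta_0(E), \] where \(\delta_0\) is the unit point mass at \(0\). Then \[ \lambda_{ac}(E)=\int_E e^{-|x|}dm \ll m, \qquad \lambda_s=\delta_0 \perp m, \] since \(\delta_0\) is concentrated on \(\{0\}\) while \(m\) is concentrated on \(\mathbb{R}\setminus\{0\}\), and the Radon-Nikodym derivative is \(h(x)=e^{-|x|} \in L^1(m)\). The proof below recovers exactly this kind of decomposition in general.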
1.1 Replacing \(\mu\) with a finite measure If \(\mu\) is a positive \(\sigma\)-finite measure on a \(\sigma\)-algebra \(\mathfrak{M}\) in a set \(X\), then there is a function \(w\) such that \(w \in L^1(\mu)\) and \(0<w(x)<1\) for every \(x \in X\). The \(\sigma\)-finiteness of \(\mu\) denotes that, there exist some sets \(E_n\) such that \[ X=\bigcup_{n=1}^{\infty}E_n \] and that \(\mu(E_n)<\infty\) for all \(n\). Define \[ w_n(x)= \begin{aligned} \begin{cases} \frac{1}{2^n(1+\mu(E_n))}\quad &x \in E_n \\ 0 \quad &x\notin E_n \end{cases} \end{aligned} \] (you can also say that \(w_n=\frac{1}{2^n(1+\mu(E_n))}\chi_{E_n}\)), then we have \[ \begin{aligned} w &= \sum_{n=1}^{\infty}w_n \\ \end{aligned} \] satisfies \(0<w<1\) for all \(x\). With \(w\), we are able to define a new measure, namely \[ \tilde{\mu}(E)=\int_{E}wd\mu. \] The fact that \(\tilde{\mu}(E)\) is a measure can be validated by considering \(\int_{E}wd\mu=\int_{X}\chi_{E}wd\mu\). It's more important that \(\tilde{\mu}(E)\) is bounded and \(\tilde{\mu}(E)=0\) if and only if \(\mu(E)=0\). The second one comes from the strict positivity of \(w\). For the first one, notice that \[ \begin{aligned} \tilde{\mu}(X) &\leq \sum_{n=1}^{\infty}\tilde{\mu}(E_n) \\ &= \sum_{n=1}^{\infty}\frac{1}{2^n(1+\mu(E_n))} \\ &\leq \sum_{n=1}^{\infty}\frac{1}{2^n} \end{aligned} \] 1.2 A bounded linear functional associated with \(\lambda\) Since \(\lambda\) is complex, without loss of generality, we are able to assume that \(\lambda\) is a positive bounded measure on \(\mathfrak{M}\). By 1.1, we are able to obtain a positive bounded measure by \[ \varphi=\lambda+\tilde{\mu} \] Following the construction of Lebesgue measure, we have \[ \int_{X}fd\varphi=\int_{X}fd\lambda+\int_{X}fwd\mu \] for all nonnegative measurable function \(f\). Also, notice that \(\lambda \leq \varphi\), we have \[ \left\vert \int_{X}fd\lambda \right\vert \leq \int_{X}|f|d\lambda \leq \int_{X}|f|d\varphi \leq \sqrt{\varphi(X)}\left\Vert f \right\Vert_2 \] for \(f \in L^2(\varphi)\) by Schwarz inequality. Since \(\varphi(X)<\infty\), we have \[ \Lambda{f}=\int_{X}fd\lambda \] to be a bounded linear functional on \(L^2(\varphi)\). Step 2 - Find the associated function with respect to \(\lambda\) Since \(L^2(\varphi)\) is a Hilbert space, every bounded linear functional on a Hilbert space \(H\) is given by an inner product with an element in \(H\). That is, by the completeness of \(L^2(\varphi)\), there exists a function \(g\) such that \[ \Lambda{f}=\int_{X}fd\lambda=\int_{X}fgd\varphi=(f,g). \] The properties of \(L^2\) space shows that \(g\) is determined almost everywhere with respect to \(\varphi\). For \(E \in \mathfrak{M}\), we got \[ 0 \leq (\chi_{E},g)=\int_{E}gd\varphi=\int_{E}d\lambda=\lambda(E)\leq\varphi(E) \] which implies \(0 \leq g \leq 1\) for almost every \(x\) with respect to \(\varphi\). Therefore we are able to assume that \(0 \leq g \leq 1\) without ruining the identity. The proof is in the bag once we define \(A\) to be the set where \(0 \leq g < 1\) and \(B\) the set where \(g=1\). Step 3 - Generate \(\lambda_{ac}\) and \(\lambda_{s}\) and the Radon-Nikodym derivative at one stroke We claim that \(\lambda(A \cap E)\) and \(\lambda(B \cap E)\) form the decomposition we are looking for, \(\lambda_{ac}\) and \(\lambda_s\), respectively. Namely, \(\lambda_{ac}=\lambda(A \cap E)\), \(\lambda_s=\lambda(B \cap E)\). 
Proving \(\lambda_s \perp \mu\) If we combine \(\Lambda{f}=(f,g)\) and \(\varphi=\lambda+\tilde{\mu}\) together, we have \[ \int_{X}(1-g)fd\lambda=\int_{X}fgwd\mu. \] Put \(f=\chi_{B}\), we have \[ \int_{B}wd\mu=0. \] Since \(w\) is strictly positive, we see that \(\mu(B)=0\). Notice that \(A \cap B = \varnothing\) and \(A \cup B=X\). For \(E \in \mathfrak{M}\), we write \(E=E_A \cup E_B\), where \(E_A \subset A\) and \(E_B \subset B\). Therefore \[ \mu(E)=\mu(E_A)+\mu(E_B)=\mu(E \cap A)+\mu(E \cap B)=\mu(E \cap A). \] Therefore \(\mu\) is concentrated on \(A\). For \(\lambda_s\), observe that \[ \lambda_s(E)=\lambda(E \cap B)=\lambda((E \cap B) \cap B)=\lambda_s(E \cap B). \] Hence \(\lambda_s\) is concentrated on \(B\). This observation shows that \(\lambda_s \perp \mu\). Proving \(\lambda_{ac} \ll \mu\) by the Radon-Nikodym derivative The relation that \(\lambda_{ac} \ll \mu\) will be showed by the existence of the Radon-Nikodym derivative. If we replace \(f\) by \[ (1+g+\cdots+g^n)\chi_E, \] where \(E \in \mathfrak{M}\), we have \[ \int_X(1-g)fd\lambda=\int_E(1-g^{n+1})d\lambda=\int_Eg(1+g+\cdots+g^n)wd\mu. \] Notice that \[ \begin{aligned} \int_{E}(1-g^{n+1})d\lambda &=\int\limits_{E \cap A}(1-g^{n+1})d\lambda + \int\limits_{E \cap B}(1-g^{n+1})d\lambda \\ &=\int\limits_{E \cap A}(1-g^{n+1})d\lambda \\ &\to\lambda(E \cap A) = \lambda_{ac}(E)\quad(n\to\infty) \end{aligned} \] Define \(h_n=g(1+g+g^2+\cdots+g^n)w\), we see that on \(A\), \(h_n\) converges monotonically to \[ h= \begin{aligned} \begin{cases} \frac{gw}{1-g} \quad &x\in{A}\\ 0 \quad &x\in{B} \end{cases} \end{aligned} \] By monotone convergence theorem, we got \[ \lim_{n\to\infty}\int_{E}h_nd\mu = \int_{E}hd\mu=\lambda_{ac}(E). \] for every \(E\in\mathfrak{M}\). The measurable function \(h\) is the desired Radon-Nikodym derivative once we show that \(h \in L^1(\mu)\). Replacing \(E\) with \(X\), we see that \[ \int_{X}|h|d\mu=\int_{X}hd\mu=\lambda_{ac}(X)\leq\lambda(X)<\infty. \] Clearly, if \(\mu(E)=0\), we have \[ \lambda_{ac}(E)=\int_{E}hd\mu=0 \] which shows that \[ \lambda_{ac}\ll\mu \] as desired. Step 3 - Generalization onto complex measures By far we have proved this theorem for positive bounded measure. For real bounded measure, we can apply the proceeding case to the positive and negative part of it. For all complex measures, we have \[ \lambda=\lambda_1+i\lambda_2 \] where \(\lambda_1\) and \(\lambda_2\) are real. Step 4 - Uniqueness of the decomposition If we have two Lebesgue decompositions of the same measure, namely \((\lambda_{ac},\lambda_s)\) and \((\lambda'_{ac},\lambda'_s)\), we shall show that \[ \lambda_{ac}-\lambda_{ac}'=\lambda_s'-\lambda_s=0 \] By the definition of the decomposition we got \[ \lambda_{ac}-\lambda'_{ac}=\lambda'_s-\lambda_s \] with \(\lambda_{ac}-\lambda_{ac}' \ll \mu\) and \(\lambda_{s}'-\lambda_{s}\perp\mu\). This implies that \(\lambda'_{s}-\lambda_{s} \ll \mu\) as well. Since \(\lambda'_s-\lambda_s\perp\mu\), there exists a set with \(\mu(A)=0\) on which \(\lambda'_s-\lambda_s\) is concentrated; the absolute continuity shows that \(\lambda'_s(E)-\lambda_s(E)=0\) for all \(E \subset A\). Hence \(\lambda_s'-\lambda_s\) is concentrated on \(X-A\). Therefore we got \((\lambda'_s-\lambda_s)\perp(\lambda'_s-\lambda_s)\), which forces \(\lambda'_s-\lambda_s=0\). The uniqueness is proved. (Following the same process one can also show that \(\lambda_{ac}\perp\lambda_s\).)
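To see the machinery of the proof in action, here is a small worked example (the specific measures are chosen only for illustration). Take \(X=[0,1]\) with the Borel \(\sigma\)-algebra, \(\mu=m\) the Lebesgue measure on \([0,1]\), and \(\lambda=m+\delta_0\), where \(\delta_0\) is the unit point mass at \(0\). Since \(\mu\) is already finite we may take \(w\equiv\frac{1}{2}\), so \(\tilde{\mu}=\frac{1}{2}m\) and \(\varphi=\lambda+\tilde{\mu}=\frac{3}{2}m+\delta_0\). Solving \(\lambda(E)=\int_E g\,d\varphi\) gives \[ g(x)=\frac{2}{3} \quad (x \in (0,1]), \qquad g(0)=1, \] so \(A=(0,1]\), \(B=\{0\}\), and therefore \(\lambda_s=\delta_0\perp m\) and \(\lambda_{ac}=m\). The Radon-Nikodym derivative produced by the proof is \[ h=\frac{gw}{1-g}=\frac{\frac{2}{3}\cdot\frac{1}{2}}{\frac{1}{3}}=1 \quad \text{on } A, \] which indeed satisfies \(\lambda_{ac}(E)=\int_E h\,d\mu=m(E)\) for every Borel \(E \subset [0,1]\).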
The word "nootropic" was coined in 1972 by a Romanian scientist, Corneliu Giurgea, who combined the Greek words for "mind" and "bending." Caffeine and nicotine can be considered mild nootropics, while prescription Ritalin, Adderall and Provigil (modafinil, a drug for treating narcolepsy) lie at the far end of the spectrum when prescribed off-label as cognitive enhancers. Even microdosing of LSD is increasingly viewed as a means to greater productivity. Adderall increases dopamine and noradrenaline availability within the prefrontal cortex, an area in which our memory and attention are controlled. As such, this smart pill improves our mood, makes us feel more awake and attentive. It is also known for its lasting effect – depending on the dose, it can last up to 12 hours. However, note that it is crucial to get confirmation from your doctor on the exact dose you should take. Analgesics Anesthetics General Local Anorectics Anti-ADHD agents Antiaddictives Anticonvulsants Antidementia agents Antidepressants Antimigraine agents Antiparkinson agents Antipsychotics Anxiolytics Depressants Entactogens Entheogens Euphoriants Hallucinogens Psychedelics Dissociatives Deliriants Hypnotics/Sedatives Mood Stabilizers Neuroprotectives Nootropics Neurotoxins Orexigenics Serenics Stimulants Wakefulness-promoting agents On the other metric, suppose we removed the creatine? Dropping 4 grams of material means we only need to consume 5.75 grams a day, covered by 8 pills (compared to 13 pills). We save 5,000 pills, which would have cost $45 and also don't spend the $68 for the creatine; assuming a modafinil formulation, that drops our $1761 down to $1648 or $1.65 a day. Or we could remove both the creatine and modafinil, for a grand total of $848 or $0.85 a day, which is pretty reasonable. Another important epidemiological question about the use of prescription stimulants for cognitive enhancement concerns the risk of dependence. MPH and d-AMP both have high potential for abuse and addiction related to their effects on brain systems involved in motivation. On the basis of their reanalysis of NSDUH data sets from 2000 to 2002, Kroutil and colleagues (2006) estimated that almost one in 20 nonmedical users of prescription ADHD medications meets criteria for dependence or abuse. This sobering estimate is based on a survey of all nonmedical users. The immediate and long-term risks to individuals seeking cognitive enhancement remain unknown. This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, that suggests >38 hours of work, and 38 \times 7.25 = 275.5. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so \frac{365.25}{120} \times 9 \times 5 = 137. My intent here is not to promote illegal drugs or promote the abuse of prescription drugs. In fact, I have identified which drugs require a prescription. 
If you are a servicemember and you take a drug (such as Modafinil and Adderall) without a prescription, then you will fail a urinalysis test. Thus, you will most likely be discharged from the military. One of the most common strategies to beat this is cycling. Users who cycle their nootropics take them for a predetermined period, (usually around five days) before taking a two-day break from using them. Once the two days are up, they resume the cycle. By taking a break, nootropic users reduce the tolerance for nootropics and lessen the risk of regression and tolerance symptoms. The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: \frac{1.2 - 0.93}{0.076}=3.55.) So what about the flip side: a drug to erase bad memories? It may have failed Jim Carrey in Eternal Sunshine of the Spotless Mind, but neuroscientists have now discovered an amnesia drug that can dull the pain of traumatic events. The drug, propranolol, was originally used to treat high blood pressure and heart disease. Doctors noticed that patients given the drug suffered fewer signs of stress when recalling a trauma. Upon examining the photographs, I noticed no difference in eye color, but it seems that my move had changed the ambient lighting in the morning and so there was a clear difference between the two sets of photographs! The before photographs had brighter lighting than the after photographs. Regardless, I decided to run a small survey on QuickSurveys/Toluna to confirm my diagnosis of no-change; the survey was 11 forced-choice pairs of photographs (before-after), with the instructions as follows: Took random pill at 2:02 PM. Went to lunch half an hour afterwards, talked until 4 - more outgoing than my usual self. I continued to be pretty energetic despite not taking my caffeine+piracetam pills, and though it's now 12:30 AM and I listened to TAM YouTube videos all day while reading, I feel pretty energetic and am reviewing Mnemosyne cards. I am pretty confident the pill today was Adderall. Hard to believe placebo effect could do this much for this long or that normal variation would account for this. I'd say 90% confidence it was Adderall. I do some more Mnemosyne, typing practice, and reading in a Montaigne book, and finally get tired and go to bed around 1:30 AM or so. I check the baggie when I wake up the next morning, and sure enough, it had been an Adderall pill. That makes me 1 for 2. They can cause severe side effects, and their long-term effects aren't well-researched. They're also illegal to sell, so they must be made outside of the UK and imported. That means their manufacture isn't regulated, and they could contain anything. And, as 'smart drugs' in 2018 are still illegal, you might run into legal issues from possessing some 'smart drugs' without a prescription. (People aged <=18 shouldn't be using any of this except harmless stuff - where one may have nutritional deficits - like fish oil & vitamin D; melatonin may be especially useful, thanks to the effects of screwed-up school schedules & electronics use on teenagers' sleep. 
Changes in effects with age are real - amphetamines' stimulant effects and modafinil's histamine-like side-effects come to mind as examples.) Statements made, or products sold through this web site, have not been evaluated by the Food and Drug Administration. They are not intended to diagnose, treat, cure, or prevent any diseases. Consult a qualified health care practitioner before taking any substance for medicinal purposes.California Proposition 65 WARNING: Some products on this store contains progesterone, a chemical known to the State of California to cause cancer. Consult with your physician before using this product. Potassium citrate powder is neither expensive nor cheap: I purchased 453g for $21. The powder is crystalline white, dissolves instantly in water, and largely tasteless (sort of saline & slightly unpleasant). The powder is 37% potassium by weight (the formula is C6H5K3O7) so 453g is actually 167g of potassium, so 80-160 days' worth depending on dose. Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below. The therapeutic effect of AMP and MPH in ADHD is consistent with the finding of abnormalities in the catecholamine system in individuals with ADHD (e.g., Volkow et al., 2007). Both AMP and MPH exert their effects on cognition primarily by increasing levels of catecholamines in prefrontal cortex and the cortical and subcortical regions projecting to it, and this mechanism is responsible for improving cognition and behavior in ADHD (Pliszka, 2005; Wilens, 2006). Even the best of today's nootropics only just barely scratch the surface. You might say that we are in the "Nokia 1100" phase of taking nootropics, and as better tools and more data come along, the leading thinkers in the space see a powerful future. For example, they are already beginning to look past biochemistry to the epigenome. Not only is the epigenome the code that runs much of your native biochemistry, we now know that experiences in life can be recorded in your epigenome and then passed onto future generations. There is every reason to believe that you are currently running epigenetic code that you inherited from your great-grandmother's life experiences. And there is every reason to believe that the epigenome can be hacked – that the nootropics of the future can not only support and enhance our biochemistry, but can permanently change the epigenetic code that drives that biochemistry and that we pass onto our children. This is why many healthy individuals use nootropics. They have great benefits and can promote brain function and reduce oxidative stress. They can also improve sleep quality. The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. 
Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime. (On a side note, I think I understand now why modafinil doesn't lead to a Beggars in Spain scenario; BiS includes massive IQ and motivation boosts as part of the Sleepless modification. Just adding 8 hours a day doesn't do the world-changing trick, no more than some researchers living to 90 and others to 60 has lead to the former taking over. If everyone were suddenly granted the ability to never need sleep, many of them would have no idea what to do with the extra 8 or 9 hours and might well be destroyed by the gift; it takes a lot of motivation to make good use of the time, and if one cannot, then it is a curse akin to the stories of immortals who yearn for death - they yearn because life is not a blessing to them, though that is a fact more about them than life.) Certain pharmaceuticals could also qualify as nootropics. For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive. For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular. The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we'll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free but we'll call it an hour over the 250 days. 
Recording mood/productivity is also free a sunk cost as it's necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, so 18 tests will be >180 minutes or >3 hours. So >5 hours. Total: 5 + (>5 \times 7.25) = >41. Stimulants are drugs that accelerate the central nervous system (CNS) activity. They have the power to make us feel more awake, alert and focused, providing us with a needed energy boost. Unfortunately, this class encompasses a wide range of drugs, some which are known solely for their side-effects and addictive properties. This is the reason why many steer away from any stimulants, when in fact some greatly benefit our cognitive functioning and can help treat some brain-related impairments and health issues. But when aficionados talk about nootropics, they usually refer to substances that have supposedly few side effects and low toxicity. Most often they mean piracetam, which Giurgea first synthesized in 1964 and which is approved for therapeutic use in dozens of countries for use in adults and the elderly. Not so in the United States, however, where officially it can be sold only for research purposes. From its online reputation and product presentation to our own product run, Synagen IQ smacks of mediocre performance. A complete list of ingredients could have been convincing and decent, but the lack of information paired with the potential for side effects are enough for beginners to old-timers in nootropic use to shy away and opt for more trusted and reputable brands. There is plenty that needs to be done to uplift the brand and improve its overall ranking in the widely competitive industry. Learn More... This world is a competitive place. If you're not seeking an advantage, you'll get passed by those who do. Whether you're studying for a final exam or trying to secure a big business deal, you need a definitive mental edge. Are smart drugs and brain-boosting pills the answer for cognitive enhancement in 2019? If you're not cheating, you're not trying, right? Bad advice for some scenarios, but there is a grain of truth to every saying—even this one. Another common working memory task is the n-back task, which requires the subject to view a series of items (usually letters) and decide whether the current item is identical to the one presented n items back. This task taxes working memory because the previous items must be held in working memory to be compared with the current item. The easiest version of this is a 1-back task, which is also called a double continuous performance task (CPT) because the subject is continuously monitoring for a repeat or double. Three studies examined the effects of MPH on working memory ability as measured by the 1-back task, and all found enhancement of performance in the form of reduced errors of omission (Cooper et al., 2005; Klorman et al., 1984; Strauss et al., 1984). Fleming et al. (1995) tested the effects of d-AMP on a 5-min CPT and found a decrease in reaction time, but did not specify which version of the CPT was used. When I spoke with Jesse Lawler, who hosts the podcast Smart Drugs Smarts, about breakthroughs in brain health and neuroscience, he was unsurprised to hear of my disappointing experience. Many nootropics are supposed to take time to build up in the body before users begin to feel their impact. 
But even then, says Barry Gordon, a neurology professor at the Johns Hopkins Medical Center, positive results wouldn't necessarily constitute evidence of a pharmacological benefit. Table 5 lists the results of 16 tasks from 13 articles on the effects of d-AMP or MPH on cognitive control. One of the simplest tasks used to study cognitive control is the go/no-go task. Subjects are instructed to press a button as quickly as possible for one stimulus or class of stimuli (go) and to refrain from pressing for another stimulus or class of stimuli (no go). De Wit et al. (2002) used a version of this task to measure the effects of d-AMP on subjects' ability to inhibit a response and found enhancement in the form of decreased false alarms (responses to no-go stimuli) and increased speed of correct go responses. They also found that subjects who made the most errors on placebo experienced the greatest enhancement from the drug. Clarke and Sokoloff (1998) remarked that although [a] common view equates concentrated mental effort with mental work…there appears to be no increased energy utilization by the brain during such processes (p. 664), and …the areas that participate in the processes of such reasoning represent too small a fraction of the brain for changes in their functional and metabolic activities to be reflected in the energy metabolism of the brain… (p. 675). I have elsewhere remarked on the apparent lack of benefit to taking multivitamins and the possible harm; so one might well wonder about a specific vitamin like vitamin D. However, a multivitamin is not vitamin D, so it's no surprise that they might do different things. If a multivitamin had no vitamin D in it, or if it had vitamin D in different doses, or if it had substances which interacted with vitamin D (such as calcium), or if it had substances which had negative effects which outweigh the positive (such as vitamin A?), we could well expect differing results. In this case, all of those are true to varying extents. Some multivitamins I've had contained no vitamin D. The last multivitamin I was taking both contains vitamins used in the negative trials and also some calcium; the listed vitamin D dosage was a trivial ~400IU, while I take >10x as much now (5000IU). The search to find more effective drugs to increase mental ability and intelligence capacity with neither toxicity nor serious side effects continues. But there are limitations. Although the ingredients may be separately known to have cognition-enhancing effects, randomized controlled trials of the combined effects of cognitive enhancement compounds are sparse. Cognition is a suite of mental phenomena that includes memory, attention and executive functions, and any drug would have to enhance executive functions to be considered truly 'smart'. Executive functions occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren't), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to 10 instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human. Finally, two tasks measuring subjects' ability to control their responses to monetary rewards were used by de Wit et al. (2002) to assess the effects of d-AMP. 
When subjects were offered the choice between waiting 10 s between button presses for high-probability rewards, which would ultimately result in more money, and pressing a button immediately for lower probability rewards, d-AMP did not affect performance. However, when subjects were offered choices between smaller rewards delivered immediately and larger rewards to be delivered at later times, the normal preference for immediate rewards was weakened by d-AMP. That is, subjects were more able to resist the impulse to choose the immediate reward in favor of the larger reward. 70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day requires (70 \times 2) \times (2 \times 7) \times 2 = 3920 pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 9 pairs would give me a power of: Pharmaceutical, substance used in the diagnosis, treatment, or prevention of disease and for restoring, correcting, or modifying organic functions. (See also pharmaceutical industry.) Records of medicinal plants and minerals date to ancient Chinese, Hindu, and Mediterranean civilizations. Ancient Greek physicians such as Galen used a variety of drugs in their profession.… Flow diagram of cognitive neuroscience literature search completed July 2, 2010. Search terms were dextroamphetamine, Aderrall, methylphenidate, or Ritalin, and cognitive, cognition, learning, memory, or executive function, and healthy or normal. Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies meeting the inclusion criteria stated in the text. "As a brain injury survivor that still deals with extreme light sensitivity, eye issues and other brain related struggles I have found a great diet is a key to brain health! Cavin's book is a much needed guide to eating for brain health. While you can fill shelves with books that teach you good nutrition, Cavin's book teaches you how to help your brain with what you eat. This is a much needed addition to the nutrition section! If you are looking to get the optimum performance out of your brain, get this book now! You won't regret it." I bought 500g of piracetam (Examine.com; FDA adverse events) from Smart Powders (piracetam is one of the cheapest nootropics and SP was one of the cheapest suppliers; the others were much more expensive as of October 2010), and I've tried it out for several days (started on 7 September 2009, and used it steadily up to mid-December). I've varied my dose from 3 grams to 12 grams (at least, I think the little scoop measures in grams), taking them in my tea or bitter fruit juice. Cranberry worked the best, although orange juice masks the taste pretty well; I also accidentally learned that piracetam stings horribly when I got some on a cat scratch. 3 grams (alone) didn't seem to do much of anything while 12 grams gave me a nasty headache. I also ate 2 or 3 eggs a day. Going back to the 1960s, although it was a Romanian chemist who is credited with discovering nootropics, a substantial amount of research on racetams was conducted in the Soviet Union. 
This resulted in the birth of another category of substances entirely: adaptogens, which, in addition to benefiting cognitive function, were thought to allow the body to better adapt to stress.
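Returning to the self-experiment arithmetic above: as a rough illustration of the power calculation behind the "12 pairs of two-week blocks at 50% power for d = 0.5" figure, the R snippet below uses power.t.test. The paired design, unit standard deviation, and one-sided 5% significance level are assumptions made here for illustration; they are not necessarily the settings used in the original calculation.

# Hypothetical sketch: pairs of 2-week blocks needed for a paired t-test,
# assuming a standardized effect size d, sd = 1, one-sided alpha = 0.05.
pairs_needed <- function(d, power) {
  power.t.test(delta = d, sd = 1, sig.level = 0.05, power = power,
               type = "paired", alternative = "one.sided")$n
}
pairs_needed(d = 0.5, power = 0.5)  # on the order of a dozen pairs
pairs_needed(d = 0.5, power = 0.8)  # substantially more pairs for 80% power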
Ham operators often tell me that comparing gain to an isotropic radiator isn't much use because it's only a theoretical antenna. Is this true? When you look down on an isotropic radiator from above, i.e. the azimuth-view radiation pattern, it's a circle, the same as looking down on any omnidirectional antenna. So when comparing the gain of a yagi antenna in the azimuth view at 0 degrees elevation to an isotropic radiator's azimuth-view gain at 0 degrees, this gives a good practical indication of gain over a real omnidirectional antenna such as a 1/4 wave ground plane antenna, is that right? – Andrew
No, that is not true. I generally find that hams who denigrate or choose to ignore the isotropic antenna, or references to it, simply don't understand its central place in antenna engineering. Certainly the isotropic antenna only exists theoretically, but it is the basis for a great deal of the fundamental principles of antenna engineering and the related formulas. It is a shame that more hams don't take the time, as you are, to strengthen their knowledge in this field. I applaud you for digging into this subject.
"So when comparing the gain of a yagi antenna in the azimuth view at 0 degrees elevation to an isotropic radiator's azimuth-view gain at 0 degrees, this gives a good practical indication of gain over a real omnidirectional antenna such as a 1/4 wave ground plane antenna, is that right?"
There is a confusion in terminology in your question that is commonplace, regarding "omnidirectional" for a vertical antenna versus "radiating equally in all directions" for an isotropic antenna. An isotropic antenna radiates equally in all directions. The best way to think about this is to place the isotropic antenna inside, and at the center of, a large sphere. Then apply power to the isotropic antenna and imagine the power "illuminating" the surface of the sphere in the same way a light bulb might. Since the isotropic antenna radiates equally in all directions, the entire sphere would be uniformly lit - no bright spots and no dark spots. Now take the isotropic antenna out of the sphere and replace it with an omnidirectional vertical antenna. When we apply the same power to the vertical, we would not see a uniformly lit sphere. Instead, the "poles" of the sphere would be nearly dark and the "equator" of the sphere would be brightly lit. Clearly the "omni" part of the description refers to the equal "illumination" in the area near and around the equator. Since spheres are not the easiest to render in our 2D literature, we tend to take a slice through the sphere and talk about that. For ham radio, the two most common slices are azimuth (a slice parallel to the equator) and elevation (a slice parallel to the poles). The azimuth and elevation plots for an isotropic antenna would always show a uniform, circular radiation pattern for all such slices. By contrast, the omnidirectional vertical would show uniform radiation in the azimuth plot, but in the elevation plot most of the power would be concentrated near the horizon, with very little power near 90 degrees of elevation. So the omnidirectional term only applies to the azimuth pattern. Computer modeling has brought about the ability to more easily generate graphics that have a 3D appearance to them.
Here is such a rendering of a 1/4 wave ground plane antenna with a relatively small ground plane, from the Antenna Theory website (figure not reproduced here). The correct technical term for the illumination of the surface of the sphere is "irradiance". This is typically expressed as watts per square meter (W/m2). So the isotropic antenna causes the entire surface of the sphere to have the same W/m2, but since the omnidirectional vertical focuses more of its power near the equator, the W/m2 in the equator region would be greater than at the poles. So the dBi gain of the omnidirectional vertical can be thought of as the comparison of its greatest irradiance value, found near the equator, to the irradiance value (at the same power level and same size of sphere) at any location for the isotropic antenna: $$Gain_\text{dBi}=10 \log \left(\frac{E_{e\text{ Omni}}}{E_{e\text{ Isotropic}}}\right) \tag 1$$ where Ee is the irradiance in W/m2 for the respective antenna. Unless specifically stated otherwise, dBi gain refers to the maximum irradiance of the antenna in question compared to that of the isotropic antenna. For example, a 1/2 wavelength dipole in free space has a 2.15 dBi gain. Our knowledge of the radiation pattern of a dipole allows us to equate this gain to the direction that is perpendicular to the arms of the dipole. Finally, don't forget that the gain of an antenna includes the effect of any losses in the antenna. In fact, gain (in linear form) is: $$Gain=Directivity \times Efficiency \tag 2$$ So while the free-space directivity of a small "magnetic" loop antenna is nearly the same as that of a free-space 1/2 wavelength dipole, the poor efficiency of the small loop compared to a dipole causes it to have much less gain than the dipole. The isotropic antenna is always considered to be 100% efficient. – Glenn W9IQ
Comments:
Thanks for that clarification, that's a perfect description. The key point, I think, is that if a plane (or slice) or angle is not mentioned, then gain in dBi is the maximum irradiance of an antenna, wherever it occurs in the radiation pattern, compared to that of an isotropic; but saying where the maximum occurs in the three-dimensional radiation pattern would make it clearer. – Andrew Mar 12 '19 at 22:09
Also: so my question makes sense but the terminology is wrong? Is this right -> "The irradiance of a yagi antenna in the azimuth slice at 0 degrees elevation can be compared to the same for a 1/4 wave vertical antenna by using the irradiance of an isotropic radiator as a reference" ... – Andrew Mar 12 '19 at 22:25
@Andrew Basically, yes to both comments. The only part that might be slightly over-specific is that you could compare two antennas directly - such as the yagi and the vertical - and simply express it as dB (not dBi). But that comparison is only usable between the two antennas. Comparing each antenna to an isotropic antenna allows the comparisons to then be extended to any other antenna. – Glenn W9IQ Mar 12 '19 at 23:21
So a yagi really close to the ground could have a gain figure of 12 dBi. But the maximum gain is at an elevation angle of 45 degrees up in the sky where there are no ham operators. So forgetting skip and only talking ground wave for now, a vertical ground plane antenna which is omnidirectional in the azimuth slice at 0 degrees (0 dBi), but has a quoted figure of 2.5 dBi gain because it has a low angle of radiation, might work better? – Andrew Mar 13 '19 at 2:16
Keeping in mind that a vertical antenna has dBi gain just because it has a doughnut-shaped pattern rather than a sphere. – Andrew Mar 13 '19 at 2:38
dBi is useful for comparing gains of real-world antennas. For instance, an 80m mobile antenna may be -10 dBi, a 1/4WL vertical ground plane may be 0 dBi, a dipole may be 6 dBi, and a beam may be 10 dBi. So we can say that the beam has 20 dB gain over the 80m mobile antenna using dBi as the reference dimension. – Cecil - W5DXP
Those who complain that dBi isn't useful because an isotropic antenna is only theoretical often advocate an alternate unit such as dBd, or decibels relative to a dipole. There are a couple of arguments that could be made from here. One argument is that a dBd is typically defined as 2.15 dBi, because that's the directive gain of a half-wavelength dipole in free space. That is, to convert dBi to dBd, just subtract 2.15. If you want to be pragmatic, this is hardly something to get worked up about. But wait a minute, a dipole in free space is also a theoretical antenna. In practice, reflections from ground, ground losses, feed issues, interactions with the tower, misalignment of the antenna, and myriad other issues result in a dipole's gain being something other than 2.15 dBi. For any terrestrial communication scenario, interactions with the ground are relevant. In some cases, especially on VHF and higher frequencies where the antenna can be many wavelengths high, it may be acceptable to assume 2.15 dBi gain and model the ground with a simple model like the two-ray reflection model. Or perhaps the link budget simply contains sufficient fade margin that it's not important to have a very precise model. But on HF it can become quite difficult to get the dipole more than a quarter wavelength high, and this means both that the ground has the potential to create constructive interference through the image antenna it creates, and also to reduce gain through ground losses. Depending on ground conductivity and height, it can make quite a difference for better or worse. Furthermore, a dipole's gain even in free space isn't 2.15 dBi in all directions: that's only its gain in the most favorable direction. How frequently is the dipole rotated to the ideal angle for a given path? And even given a rotatable dipole, how frequently is the height of that dipole optimized to give the ideal takeoff angle? You see, trying to distill antenna performance into a single number that's realistic in practice rapidly becomes a very complex problem. Wouldn't it be nice if we had a simpler number, something which was insensitive to direction, polarization, ground interactions, and all the other gotchas; something which is mathematically simple and universally applicable? Then we could use this as the starting point, and add as many correcting factors to account for real-world effects as we desire, until we are satisfied our model is sufficiently accurate to meet our needs. That "simple" model is the isotropic antenna. To show why it's simple, I pose a simple question: if a 1 W transmitter is 10 km distant from the receiver, how much power is received? If we assume the transmitting antenna radiates equally in all directions, is in free space, and is 100% efficient, then this is a simple geometry problem. Simply imagine a sphere centered on the transmitter, with a radius of 10 km.
Radiated power from the transmitter can only intersect the sphere, so the total power flux through the entire sphere must be 1 W. Since the antenna radiates equally in all directions, the power flux density must be equal everywhere on the sphere. So calculating the surface area of this sphere, then dividing the transmit power by this quantity, yields the power flux density. $$ { 1 \:\mathrm{W} \over 4 \pi (10 \:\mathrm{km})^2 } \approx 0.8 \:\mathrm{nW/m^2} $$ If we want to know the power received, then we only need to know how big of an "energy net" the receiving antenna presents. This is called the effective aperture, which as it turns out is just another way to express gain: $$ A_e = {\lambda^2 \over 4\pi} G $$ where $A_e$ is the effective aperture, $\lambda$ is wavelength, and $G$ is the antenna gain (as a linear number, not decibels). This is the basis of the Friis transmission equation and finds practical application in calculating link budgets. Granted, it's based on a theoretical antenna, but the mathematical simplicity of the isotropic antenna allows the mathematics to be simple when simple is all we need, and more complex when necessary, by adding additional terms to the equation. If we were to eschew the isotropic antenna and use the dipole as the reference antenna, we would gain little. Some might argue the results are "more realistic", but as previously stated, the gain of a dipole is not constant. We'd have to add terms to the fundamental equations to account for at least the orientation of the dipole. And if the objective is realism, even that is insufficient. It's a slippery slope that has neither the mathematical simplicity of an isotropic reference nor the accuracy of a real-world model. – Phil Frost - W8II
It is important to realize in the "gain" analysis of any antenna that the far-field radiation patterns and gains calculated by method-of-moments software such as NEC typically include the effects of reflections of the antenna's intrinsic radiated energy when it is used near a large, reflecting surface such as the earth. Below are two far-field results for a ground plane antenna as calculated by NEC (figures not reproduced here): one includes reflections from a nearby plane surface, and the other shows the intrinsic radiation from the ground plane antenna itself, with the gain in dBi color-coded to vertical scales adjacent to the patterns. Those gain values may be converted to free-space gain referred to a center-fed, 1/2-wave dipole (= dBd) by subtracting 2.15 from the dBi values. – Richard Fry
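To make the free-space arithmetic in Phil Frost's answer concrete, here is a small R sketch of the same calculation, extended one step to the received power via the effective aperture. The wavelength and the 2.15 dBi receive antenna are example values chosen here for illustration; they are not part of the original answer.

# Free-space sketch: 1 W isotropic transmitter, receiver 10 km away.
Pt     <- 1                 # transmit power, W
r      <- 10e3              # distance, m
lambda <- 2                 # wavelength, m (roughly 150 MHz) -- assumed for illustration
G_rx   <- 10^(2.15 / 10)    # receive gain: 2.15 dBi dipole, as a linear ratio

S  <- Pt / (4 * pi * r^2)            # power flux density, W/m^2
Ae <- lambda^2 / (4 * pi) * G_rx     # effective aperture, m^2
Pr <- S * Ae                         # received power, W
c(flux_nW_per_m2 = S * 1e9, received_pW = Pr * 1e12)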
Qtools: A Collection of Models and Tools for Quantile Inference Marco Geraci, University of South Carolina Unconditional quantiles Definition and estimation of quantiles LSS - Location, scale and shape of a distribution Conditional quantiles Linear models Goodness of fit Transformation models Conditional LSS Other functions in Qtools Restricted quantile regression Conditional quantiles of discrete data Jittering for count responses Conditional mid-quantiles for discrete responses Quantile-based multiple imputation Quantiles play a fundamental role in statistics. The quantile function defines the distribution of a random variable and, thus, provides a way to describe the data that is specular but equivalent to that given by the corresponding cumulative distribution function. There are many advantages in working with quantiles, starting from their properties. The renewed interest in their usage seen in the last years is due to the theoretical, methodological, and software contributions that have broadened their applicability. This paper presents the R package Qtools, a collection of utilities for unconditional and conditional quantiles. Quantiles have a long history in applied statistics, especially the median. The analysis of astronomical data by Galileo Galilei in 1632 (Hald 2003, 149) and geodic measurements by Roger Boscovich in 1757 (Koenker 2005, 2) are presumably the earliest examples of application of the least absolute deviation (\(L_1\)) estimator in its, respectively, unconditional and conditional forms. The theoretical studies on quantiles of continuous random variables started to appear in the statistical literature of the 20th century. In the case of discrete data, studies have somewhat lagged behind most probably because of the analytical drawbacks surrounding the discontinuities that characterise discrete quantile functions. Some forms of approximation to continuity have been recently proposed to study the large sample behavior of quantile estimators. For example, Ma, Genton, and Parzen (2011) have demonstrated the asymptotic normality of unconditional sample quantiles based on the definition of the mid-distribution function (Parzen 2004). Machado and Santos Silva (2005) proposed inferential approaches to the estimation of conditional quantiles for counts based on data jittering. Functions implementing quantile methods can be found in common statistical software. A considerable number of R packages that provide such functions are available on the Comprehensive R Archive Network (CRAN). The base package stats contains basic functions to estimate sample quantiles or compute quantiles of common parametric distributions. The quantreg package (Koenker 2013) is arguably a benchmark for distribution-free estimation of linear quantile regression models, as well as the base for other packages which make use of linear programming (LP) algorithms (Koenker and D'Orey 1987; Koenker and Park 1996). Other contributions to the modelling of conditional quantile functions include packages for Bayesian regression, e.g. bayesQR (Benoit et al. 2014) and BSquare (Smith and Reich 2013), and the lqmm package (Geraci and Bottai 2014; Geraci 2014) for random-effects regression. The focus of this paper is on the R package Qtools, a collection of models and tools for quantile inference. 
These include commands for quantile-based analysis of the location, scale and shape of a distribution; transformation-based quantile regression; goodness of fit and restricted quantile regression; quantile regression for discrete data; quantile-based multiple imputation. The emphasis will be put on the first two topics listed above as they represent the main contribution of the package, while a short description of the other topics is given for completeness. Let \(Y\) be a random variable with cumulative distribution function (CDF) \(F_{Y}\) and support \(S_{Y}\). The CDF calculated at \(y \in S_{Y}\) returns the probability \(F_{Y}(y) \equiv p = \Pr\left(Y \leq y\right)\). The quantile function (QF) is defined as \(Q(p) = \inf_{y}\{F_{Y}(y) \geq p\}\), for \(0 < p < 1\). (Some authors consider \(0 \leq p \leq 1\). For practical purposes, it is simpler to exclude the endpoints 0 and 1.) When \(F_{Y}\) is continuous and strictly monotone (hence, \(f_{Y}(y) \equiv F_{Y}'(y) > 0\) for all \(y \in S_{Y}\)), the quantile function is simply the inverse of \(F_{Y}\). In other cases, the quantile \(p\) is defined, by convention, as the smallest value \(y\) such that \(F_{Y}(y)\) is at least \(p\). Quantiles enjoy a number of properties. An excellent overview is given by Gilchrist (2000). In particular, the Q-tranformation rule (Gilchrist 2000) or equivariance to monotone transformations states that if \(h(\cdot)\) is a non-decreasing function on \(\mathbb{R}\), then \(Q_{h(Y)}(p) = h\left\{Q_{Y}(p)\right\}\). Hence \(Q_{Y}(p) = h^{-1}\left\{Q_{h(Y)}(p)\right\}\). Note that this property does not generally hold for the expected value. Sample quantiles for a random variable \(Y\) can be calculated in a number of ways, depending on how they are defined (Hyndman and Fan 1996). For example, the function quantile in the base package stats provides nine different sample quantile estimators, which are based on the sample order statistics or the inverse of the empirical CDF. These estimators are distribution-free as they do not depend on any parametric assumption about \(F\) (or \(Q\)). Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a sample of \(n\) independent and identically distributed (iid) observations from the population \(F_{Y}\). Let \(\xi_p\) denote the \(p\)th population quantile and \(\hat{\xi}_p\) the corresponding sample quantile. (The subscripts will be dropped occasionally to ease notation, e.g. \(F\) will be used in place of \(F_{Y}\) or \(\xi\) in place of \(\xi_{p}\).) In the continuous case, it is well known that \(\sqrt{n}\left(\hat{\xi}_{p} - \xi_{p}\right)\) is approximately normal with mean zero and variance \[\begin{equation}\label{eq:1} \tag{1} \omega^2 = \frac{p(1-p)}{\{f_{Y}(\xi_p)\}^2}. \end{equation}\] A more general result is obtained when the \(Y_{i}\)'s, \(i = 1, \ldots, n\), are independent but not identically distributed (nid). The density evaluated at the \(p\)th quantile, \(f(\xi_p)\), is called density-quantile function by Parzen (1979). Its reciprocal, \(s(p) \equiv 1/f(\xi_p)\), is called sparsity function (Tukey 1965) or quantile-density function (Parzen 1979). As mentioned previously, the discontinuities of \(F_{Y}\) when \(Y\) is discrete represent a mathematical inconvenience. Ma, Genton, and Parzen (2011) derived the asymptotic distribution of the sample mid-quantiles, that is, the sample quantiles based on the mid-distribution function (mid-CDF). 
The latter is defined as \(F^{mid}_{Y}(y) = F_{Y}(y) - 0.5p_{Y}(y)\), where \(p_{Y}(y)\) denotes the probability mass function (Parzen 2004). In particular, they showed that, as \(n\) becomes large, \(\sqrt{n}\left(\hat{\xi}^{mid}_{p} - \xi_{p}\right)\) is approximately normal with mean \(0\). Under iid assumptions, the expression for the sampling variance is similar to that in (1); see Ma, Genton, and Parzen (2011) for details. The package Qtools provides the functions midecdf and midquantile, which return objects of class "midecdf" or "midquantile", respectively, containing: the values or the probabilities at which mid-cumulative probabilities or mid-quantiles are calculated (\(x\)), the mid-cumulative probabilities or the mid-quantiles (\(y\)), and the functions that linearly interpolate those coordinates (\(fn\)). An example is shown below using data simulated from a Poisson distribution. > library("Qtools") > y <- rpois(1000, 4) > pmid <- midecdf(y) > xmid <- midquantile(y, probs = pmid$y) > pmid Empirical mid-ECDF midecdf(x = y) > xmid midquantile(x = y, probs = pmid$y) A confidence interval for sample mid-quantiles can be obtained using confint.midquantile. This function is applied to the output of midquantile and returns an object of class "data.frame" containing sample mid-quantiles, lower and upper bounds of the confidence intervals of a given level (\(95\%\) by default), along with standard errors as an attribute named 'stderr'. This is shown below using the sample \(y\) generated in the previous example. > xmid <- midquantile(y, probs = 1:3/4) > x <- confint(xmid, level = 0.95) midquantile lower upper 25% 2.540000 2.416462 2.663538 > attr(x, "stderr") [1] 0.06295447 0.06578432 0.09276875 > par(mfrow = c(1,2)) > plot(pmid, xlab = "y", ylab = "CDF", jumps = TRUE) > points(pmid$x, pmid$y, pch = 15) > plot(xmid, xlab = "p", ylab = "Quantile", jumps = TRUE) > points(xmid$x, xmid$y, pch = 15) Figure 1. Cumulative distribution (a) and quantile (b) functions for simulated Poisson data. The ordinary cumulative distribution function (CDF) and quantile function (QF) are represented by step-functions (grey lines), with the convention that, at the point of discontinuity or `jump', the function takes its value corresponding to the ordinate of the filled circle as opposed to that of the hollow circle. The mid-CDF and mid-QF are represented by filled squares, while the piecewise linear functions (dashed lines) connecting the squares represent continuous versions of, respectively, the ordinary CDF and QF. Finally, a plot method is available for both midecdf and midquantile objects. An illustration is given in Figure 1. The mid-distribution and mid-quantile functions are discrete and their values are marked by filled squares. The piecewise linear functions connecting the filled squares represent continuous versions of the CDF and QF which interpolate between the steps of, respectively, the ordinary CDF and quantile functions. Note that the argument jumps is a logical value indicating whether values at jumps should be marked. Since the cumulative distribution and quantile functions are two sides of the same coin, the location, scale, and shape (LSS) of a distribution can be examined using one or the other. Well-known quantile-based measures of location and scale are the median and inter-quartile range (IQR), respectively. Similarly, there are also a number of quantile-based measures for skewness and kurtosis (Groeneveld and Meeden 1984; Groeneveld 1998; Jones, Rosco, and Pewsey 2011). 
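As a side note on (1), the large-sample standard error of an ordinary sample quantile can be estimated by plugging a density estimate into the sparsity term. The short sketch below does this for a simulated normal sample; the kernel density estimate obtained with density() is an illustrative choice made here, not the method used internally by Qtools.

# Plug-in estimate of the asymptotic standard error implied by (1).
set.seed(123)
y <- rnorm(500)                              # illustrative sample
p <- 0.25
xi_hat <- quantile(y, probs = p, names = FALSE)   # sample quantile
d <- density(y)
f_hat <- approx(d$x, d$y, xout = xi_hat)$y        # estimate of f(xi_p)
se_hat <- sqrt(p * (1 - p) / f_hat^2 / length(y))
se_hat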
Define the 'central' portion of the distribution as that delimited by the quantiles \(Q(p)\) and \(Q(1-p)\), \(0 < p < 0.5\), and define the 'tail' portion as that lying outside these quantiles. Let \(IPR(p) = Q(1-p) - Q(p)\) denote the inter-quantile range at level \(p\). Building on the results by Horn (1983) and Ruppert (1987), Staudte (2014) considered the following identity: \[\begin{equation}\label{eq:2} \tag{2} \underbrace{\frac{IPR(p)}{IPR(r)}}_\text{kurtosis} = \underbrace{\frac{IPR(p)}{IPR(q)}}_\text{tail-weight} \, \cdot \, \underbrace{\frac{IPR(q)}{IPR(r)}}_\text{peakedness}, \end{equation}\] where \(0 < p < q < r < 0.5\). These quantile-based measures of shape are sign, location and scale invariant. As compared to moment-based indices, they are also more robust to outliers and easier to interpret (Groeneveld 1998; Jones, Rosco, and Pewsey 2011). It is easy to verify that a quantile function can be written as \[\begin{equation}\label{eq:3} \tag{3} Q(p) = \underbrace{Q(0.5)}_\text{median}\,\, + \,\, \frac{1}{2}\underbrace{IPR(0.25)}_\text{IQR} \, \cdot \, \underbrace{\frac{IPR(p)}{IPR(0.25)}}_\text{shape index} \, \cdot \, \bigg(\underbrace{\frac{Q(p) + Q(1-p) - 2Q(0.5)}{IPR(p)}}_\text{skewness index} - 1\bigg). \end{equation}\] This identity establishes a relationship between the location (median), scale (IQR) and shape of a distribution. (This identity appears in Gilchrist (2000, 74) with an error of sign. See also Benjamini and Krieger (1996, eq.1).) The quantity \(IPR(p)/IPR(0.25)\) in (3) is loosely defined as the 'shape index' (Gilchrist 2000, 72), although it can be seen as the tail-weight measure given in (2) when \(p < 0.25\). For symmetric distributions, the contribution of the skewness index vanishes. Note that the skewness index not only is location and scale invariant, but is also bounded between \(-1\) and \(1\) (as opposed to the Pearson's third standardised moment which can be infinite or even undefined). When this index is near the bounds \(-1\) or \(1\), then \(Q(1-p) \approx Q(0.5)\) or \(Q(p) \approx Q(0.5)\), respectively. The function qlss provides a quantile-based LSS summary with the indices defined in (3) of either a theoretical or an empirical distribution. It returns an object of class "qlss", which is a list containing measures of location (median), scale (IQR and IPR), and shape (skewness and shape indices) for each of the probabilities specified in the argument probs (by default, probs = 0.1). The quantile-based LSS summary of the normal distribution is given in the example below for \(p =0.1\). The argument fun can take any quantile function whose probability argument is named (this is the case for many standard quantile functions in R, e.g. qt, qchisq, qf, etc.). > qlss(fun = "qnorm", probs = 0.1) qlss.default(fun = "qnorm", probs = 0.1) Unconditional Quantile-Based Location, Scale, and Shape ** Location ** ** Scale ** Inter-quartile range (IQR) [1] 1.34898 Inter-quantile range (IPR) ** Shape ** Skewness index Shape index An empirical example is now illustrated using the faithful data set, which contains \(272\) observations on waiting time (minutes) between eruptions and the duration (minutes) of the eruption for the Old Faithful geyser in Yellowstone National Park, Wyoming, USA. Summary statistics are given in Table 1. Minimum Q1 Q2 Q3 Maximum Wating time 43.0 58.0 76.0 82.0 96.0 Duration 1.6 2.2 4.0 4.5 5.1 Table 1: Minimum, maximum and three quartiles (Q1, Q2, Q3) for waiting time and duration in the Old Faithful Geyser data set. Figure 2. 
Estimated density (a) and empirical mid-quantile (b) functions of waiting time between eruptions in the Old Faithful Geyser data set. Suppose the interest is in describing the distribution of waiting times. The density is plotted in Figure 2, along with the mid-quantile function. The distribution is bimodal with peaks at around \(54\) and \(80\) minutes. Note that the arguments of the base function quantile, including the argument type, can be passed on to qlss. > y <- faithful$waiting > plot(density(y)) > plot(midquantile(y, probs = p), jumps = FALSE) > qlss(y, probs = c(0.05, 0.1, 0.25), type = 7) qlss.numeric(x = y, probs = c(0.05, 0.1, 0.25), type = 7) 0.05 0.1 0.25 41 35 24 0.05 0.1 0.25 -0.3658537 -0.4285714 -0.5000000 0.05 0.1 0.25 1.708333 1.458333 1.000000 At \(p = 0.1\), the skewness index is approximately \(-0.43\), which denotes a rather strong left asymmetry. As for the shape index, which is equal to \(1.46\), one could say that the tails of this distribution weigh less than those of a normal distribution (\(1.90\)), though of course a comparison between unimodal and bimodal distributions is not meaningful. In general, the \(p\)th linear QR model is of the form \[\begin{equation}\label{eq:4} \tag{4} Q_{Y|X}(p) = \mathbf{x}^{\top}\boldsymbol \beta(p) \end{equation}\] where \(\mathbf{x}\) is a \(k\)-dimensional vector of covariates (including \(1\) as first element) and \(\boldsymbol \beta(p) = [\beta_{0}(p), \beta_{1}(p),\) \(\ldots, \beta_{k-1}(p)]^{\top}\) is a vector of coefficients. The 'slopes' \(\beta_{j}(p)\), \(j = 1,\ldots, k-1\), have the usual interpretation of partial derivatives. For example, in case of the simple model \(Q_{Y|X}(p) = \beta_{0}(p) + \beta_{1}(p)x\), one obtains \[ \frac{\partial Q_{Y|X}(p)}{\partial x} = \beta_{1}(p).\\ \] If \(x\) is a dummy variable, then \(\beta_{1}(p) = Q_{Y|X = 1}(p) - Q_{Y|X=0}(p)\), i.e. the so-called 'quantile treatment effect' (Doksum 1974; Lehmann 1975; Koenker and Xiao 2002). Estimation can be carried out using LP algorithms which, given a sample \((\mathbf{x}_{i},y_{i})\), \(i=1,\dots,n\), solve \[ \min_{b \in \mathbb{R}^{k}} \sum_{i=1}^{n} \kappa_{p}\left(y_{i} - \mathbf{x}_{i}^{\top}\mathbf{b}\right), \] where \(\kappa_{p}(u) = u(p - I(u < 0))\), \(0 < p < 1\), is the check loss function. Large-\(n\) approximation of standard errors can be obtained from the sampling distribution of the linear quantile estimators (Koenker and Bassett 1978). Figure 3. (a) Waiting times between eruptions against durations of eruptions (dashed vertical line drawn at \(3\) minutes) in the Old Faithful Geyser data set. (b) Mid-CDF of waiting time by duration of eruption (solid line, shorter than 3 minutes; dashed line, longer than 3 minutes). Waiting times between eruptions are plotted against the durations of the eruptions in Figure 3. Two clusters of observations can be defined for durations below and above 3 minutes (see also Azzalini and Bowman 1990). The distribution shows a strong bimodality as already illustrated in Figure 2. A dummy variable for durations equal to or longer than \(3\) minutes is created to define the two distributions and included as covariate \(X\) in a model as the one specified in (4). The latter is then fitted to the Old Faithful Geyser data using the function rq in the package quantreg for \(p \in \{0.1, 0.25, 0.5, 0.75, 0.9\}\). 
> require("quantreg") > x <- as.numeric(faithful$eruptions >= 3) > fit <- rq(formula = y ~ x, tau = c(0.1, 0.25, 0.5, 0.75, 0.9)) > fit rq(formula = y ~ x, tau = c(0.1, 0.25, 0.5, 0.75, 0.9)) Coefficients: tau= 0.10 tau= 0.25 tau= 0.50 tau= 0.75 tau= 0.90 (Intercept) 47 50 54 59 63 x 26 26 26 25 25 Degrees of freedom: 272 total; 270 residual From the output above, it is quite evident that the distribution of waiting times is shifted by an approximately constant amount at all considered values of \(p\). The location-shift hypothesis can be tested by using the Khmaladze test. The null hypothesis is that two distributions, say \(F_{0}\) and \(F_{1}\), differ by a pure location shift (Koenker and Xiao 2002), that is \[ H_{0}: \, F^{-1}_{1}(p) = F^{-1}_{0}(p) + \delta_{0}, \] where \(\delta_{0}\) is the quantile treatment effect, constant over \(p\). The location–scale-shift specification of the test considers \[ H_{0}: \, F^{-1}_{1}(p) = \delta_{1}F^{-1}_{0}(p) + \delta_{0}. \] The alternative hypothesis is that the model is more complex than the one specified in the null hypothesis. The Khmaladze test is implemented in quantreg (see ?quantreg::KhmaladzeTest for further details). The critical values of the test and corresponding significance levels (Koenker 2005) are not readily available in the same package. These have been hardcoded in the Qtools function KhmaladzeFormat which can be applied to "KhmaladzeTest" objects. For the Old Faithful Geyser data, the result of the test is not statistically significant at the \(10\%\) level. > kt <- KhmaladzeTest(formula = y ~ x, taus = seq(.05, .95, by = .01), > KhmaladzeFormat(kt, 0.05) Khmaladze test for the location-shift hypothesis Joint test is not significant at 10% level Test(s) for individual slopes: not significant at 10% level Distribution-free quantile regression does not require introducing an assumption on the functional form of the error distribution (Koenker and Bassett 1978), but only weaker quantile restrictions (Powell 1994). Comparatively, the linear specification of the conditional quantile function in Equation 4 is a much stronger assumption and thus plays an important role for inferential purposes. The problem of assessing the goodness of fit (GOF) is rather neglected in applications of QR. Although some approaches to GOF have been proposed (Zheng 1998; Koenker and Machado 1999; He and Zhu 2003; Khmaladze and Koul 2004), there is currently a shortage of software code available to users. The function GOFTest implements a test based on the cusum process of the gradient vector (He and Zhu 2003). Briefly, the test statistic is given by the largest eigenvalue of \[ n^{-1}\sum_{i}^{n} \mathbf{R}_{n}(\mathbf{x}_{i})\mathbf{R}^{\top}_{n}(\mathbf{x}_{i}) \] where \(\mathbf{R}_{n}(\mathbf{t}) = n^{-1/2} \sum_{j=1}^{n} \psi_{p}(r_{j})\mathbf{x}_{j} I(\mathbf{x}_{j} \leq \mathbf{t})\) is the residual cusum (RC) process and \(\psi_{p}(r_{j})\) is the derivative of the loss function \(\kappa_{p}\) calculated for residual \(r_{j} = y_{j} - \mathbf{x}_{j}^{\top}\boldsymbol \beta(p)\). The sampling distribution of this test statistic is non-normal (He and Zhu 2003) and a resampling approach is used to obtain the \(p\)-value under the null hypothesis. An example is provided further below using the New York Air Quality data set, which contains \(111\) complete observations on daily mean ozone (parts per billion – ppb) and solar radiation (Langleys – Ly). 
For simplicity, wind speed and maximum daily temperature, also included in the data set, are not analysed here. Suppose that the model of interest is \[\begin{equation}\label{eq:5} \tag{5} Q_{\text{ozone}}(p) = \beta_{0}(p) + \beta_{1}(p) \cdot \text{Solar.R}. \end{equation}\] Three conditional quantiles (\(p \in \{0.1,0.5,0.9\}\)) are estimated and plotted using the following code: > dd <- airquality[complete.cases(airquality), ] > dd <- dd[order(dd$Solar.R), ] > fit.rq <- rq(Ozone ~ Solar.R, tau = c(.1, .5, .9), data = dd) > x <- seq(min(dd$Solar.R), max(dd$Solar.R), length = 200) > yhat <- predict(fit.rq, newdata = data.frame(Solar.R = x)) > plot(Ozone ~ Solar.R, data = dd) > apply(yhat, 2, function(y,x) lines(x,y), x = x) Figure 4. Predicted 10th (solid line), 50th (dashed line), and 90th (dot-dashed line) centiles of ozone conditional on solar radiation in the New York Air Quality data set. As a function of solar radiation, the median of the ozone daily averages increases by \(0.09\) ppb for each Ly increase in solar radiation (Figure 4). The 90th centile of conditional ozone shows a steeper slope at \(0.39\) ppb/Ly, about nine times larger than the slope of the conditional \(10\)th centile at \(0.04\) ppb/Ly. The RC test applied to the the object fit.rq provides evidence of lack of fit for all quantiles considered, particularly for \(p = 0.1\) and \(p = 0.5\). Therefore the straight-line model in Equation 5 for these three conditional quantiles does not seem to be appropriate. The New York Air Quality data set will be analysed again in the next section, where a transformation-based approach to nonlinear modelling is discussed. > gof.rq <- GOFTest(fit.rq, alpha = 0.05, B = 1000, seed = 987) > gof.rq Goodness-of-fit test for quantile regression based on the cusum process Quantile 0.1: Test statistic = 0.1057; p-value = 0.001 Quantile 0.5: Test statistic = 0.2191; p-value = 0 Complex dynamics may result in nonlinear effects in the relationship between the covariates and the response variable. For instance, in kinesiology, pharmacokinetics, and enzyme kinetics, the study of the dynamics of an agent in a system involves the estimation of nonlinear models; phenomena like human growth, certain disease mechanisms and the effects of harmful environmental substances such as lead and mercury, may show strong nonlinearities over time. In this section, the linear model is abandoned in favor of a more general model of the type \[\begin{equation}\label{eq:6} \tag{6} Q_{Y|X}(p) = g\left\{\mathbf{x}^{\top}\boldsymbol \beta(p)\right\}, \end{equation}\] for some real-valued function \(g\). If \(g\) is nonlinear, the alternative approaches to conditional quantile modelling are nonlinear parametric models—this approach may provide a model with substantive interpretability, possibly parsimonious (in general more parsimonious than polynomials), and valid beyond the observed range of the data. A nonlinear model depends on either prior knowledge of the phenomenon or the introduction of new, strong theory to explain the observed relationship with potential predictive power. Estimation may present challenges; polynomial models and smoothing splines—this approach goes under the label of nonparametric regression, in which the complexity of the model is approximated by a sequence of linear polynomials (a naive global polynomial trend can be considered to be a special case). A nonparametric model need not introducing strong assumptions about the relationship and is essentially data-driven. 
Estimation is based on linear approximations and, typically, requires the introduction of a penalty term to control the degree of smoothing; and transformation models—a flexible, parsimonious family of parametric transformations is applied to the response seeking to obtain approximate linearity on the transformed scale. The data provide information about the 'best' transformation among a family of transformations. Estimation is facilitated by the application of methods for linear models. The focus of this section is on the third approach. More specifically the functions available in Qtools refer to the methods for transformation-based QR models developed by Powell (1991), Chamberlain (1994), Mu and He (2007), Dehbi, Cortina-Borja, and Geraci (2016) and Geraci and Jones (2015). Examples of approaches to nonlinear QR based on parametric models or splines can be found in Koenker and Park (1996) and Yu and Jones (1998), respectively. The goal of transformation-based QR is to fit the model \[\begin{equation}\label{eq:7} \tag{7} Q_{h\left(Y;\lambda_{p}\right)}(p) = \mathbf{x}^{\top}\boldsymbol \beta(p). \end{equation}\] The assumption is that the transformation \(h\) is the inverse of \(g\), \(h(Y; \lambda_{p}) \equiv g^{-1}(Y)\), so that the \(p\)th quantile function of the transformed response variable is linear. (In practice, it is satisfactory to achieve approximate linearity.) The parameter \(\lambda_{p}\) is a low-dimensional parameter that gives some flexibility to the shape of the transformation and is estimated from the data. In general, the interest is on predicting \(Q_{Y|X}(p)\) and estimating the effects of the covariates on \(Q_{Y|X}(p)\). If \(h\) is a non-decreasing function on \(\mathbb{R}\) (as is the case for all transformations considered here), predictions can be easily obtained from (7) by virtue of the equivariance property of quantiles, \[\begin{equation}\label{eq:8} \tag{8} Q_{Y|X}(p) = h^{-1}\left\{\mathbf{x}^{\top}\boldsymbol \beta(p); \lambda_{p}\right\}. \end{equation}\] The marginal effect of the \(j\)th covariate \(x_{j}\) can be obtained by differentiating the quantile function \(Q_{Y|X}(p)\) with respect to \(x_{j}\). This can be written as the derivative of the composition \(Q \circ \eta\), i.e. \[\begin{equation}\label{eq:9} \tag{9} \frac{\partial Q(p)}{\partial x_{j}} = \frac{\partial Q(p)}{\partial \eta(p)} \cdot \frac{\partial \eta(p)}{\partial x_{j}}, \end{equation}\] \(\eta(p) = \mathbf{x}^{\top}\boldsymbol \beta(p)\). Once the estimates \(\hat{\boldsymbol{\beta}}(p)\) and \(\hat{\lambda}_{p}\) are obtained, these can be plugged in Equations 8 and 9. The package Qtools provides several transformation families, namely the Box–Cox (Box and Cox 1964), Aranda-Ordaz (Aranda-Ordaz 1981), and Jones (Jones 2007; Geraci and Jones 2015) transformations. A distinction between these families is made in terms of the support of the response variable to which the transformation is applied and the number of transformation parameters. The Box–Cox model is a one-parameter family of transformations which applies to singly bounded variables, \(y > 0\). The Aranda-Ordaz symmetric and asymmetric transformations too have one parameter and are used when responses are bounded on the unit interval, \(0 < y < 1\) (doubly bounded). 
Geraci and Jones (2015) developed two families of transformations which can be applied to either singly or doubly bounded responses: Proposal I transformations – this family has one parameter and it comes in both symmetric and asymmetric forms; Proposal II transformations – this family has two parameters, with one parameter modelling the symmetry (or lack thereof) of the transformation. Originally, Box and Cox (1964) proposed using power transformations to address lack of linearity, homoscedasticity and normality of the residuals in mean regression modelling. () reported that ``seldom does this transformation fulfil the basic assumptions of linearity, normality and homoscedasticity simultaneously as originally suggested by Box & Cox (1964). The Box-Cox transformation has found more practical utility in the empirical determination of functional relationships in a variety of fields, especially in econometrics''. Indeed, the practical utility of power transformations has been long recognised in QR modelling (Powell 1991; Buchinsky 1995; Chamberlain 1994; Mu and He 2007). Model 7 is the Box–Cox QR model if \[\begin{equation}\label{eq:10} \tag{10} h_{BC}\left(Y;\lambda_{p}\right) = \begin{cases} \dfrac{Y^{\lambda_{p}} - 1}{\lambda_{p}} & \text{if $\lambda_{p} \neq 0$}\\[.5cm] \log Y & \text{if $\lambda_{p} = 0$}. \end{cases} \end{equation}\] Note that when \(\lambda_{p} \neq 0\), the range of this transformation is not \(\mathbb{R}\) but the singly bounded interval \((-1/\lambda_{p},\infty)\). This implies that the inversion in (8) is defined only for \(\lambda_{p} \mathbf{x}^{\top}\boldsymbol \beta(p) + 1 > 0\). The 'symmetric' Aranda-Ordaz transformation is given by \[\begin{equation}\label{eq:11} \tag{11} h_{AOs}\left(Y;\lambda_{p}\right) = \begin{cases} \dfrac{2}{\lambda_{p}} \quad \dfrac{Y^{\lambda_{p}} - \left(1-Y\right)^{\lambda_{p}}}{Y^{\lambda_{p}} + \left(1-Y\right)^{\lambda_{p}}}& \text{if $\lambda_{p} \neq 0$},\\[.5cm] \log\left(\dfrac{Y}{1-Y}\right) & \text{if $\lambda_{p} = 0$}. \end{cases} \end{equation}\] (The symmetry here is that \(h_{AOs}(\theta;\lambda_p) = -h_{AOs}(1-\theta;\lambda_p)=h_{AOs}(\theta;-\lambda_p)\).) There is a range problem with this transformation too since, for all \(\lambda_p \neq 0\), the range of \(h_{AOs}\) is not \(\mathbb{R}\), but \((-2/|\lambda_{p}|, 2/|\lambda_{p}|)\). The 'asymmetric' Aranda-Ordaz transformation is given by \[\begin{equation}\label{eq:12} \tag{12} h_{AOa}\left(Y;\lambda_{p}\right) = \begin{cases} \log \left\{\dfrac{\left(1-Y\right)^{-\lambda_{p}} - 1}{\lambda_{p}} \right\}& \text{if $\lambda_{p} \neq 0$},\\[.5cm] \log \left\{-\log\left(1 - Y\right)\right\} & \text{if $\lambda_{p} = 0$}. \end{cases} \end{equation}\] For \(\lambda_{p} = 0\), this is equivalent to the complementary log-log. The asymmetric Aranda-Ordaz transformation does have range \(\mathbb{R}\). Note that \(h_{AOa}(Y;1) = \log (Y/(1-Y))\), i.e. the transformation is symmetric. To overcome range problems, which give rise to computational difficulties, Geraci and Jones (2015) proposed to use instead one-parameter transformations with range \(\mathbb{R}\). 
Proposal I is written in terms of the variable (say) \(W\), where \[\begin{equation}\label{eq:13} \tag{13} h_I\left(W;\lambda_{p}\right) = \begin{cases} \dfrac{1}{2\lambda_{p}}\left(W^{\lambda_{p}} - \dfrac{1}{W^{\lambda_{p}}}\right)& \text{if $\lambda_{p} \neq 0$}\\[.5cm] \log W & \text{if $\lambda_{p} = 0$}, \end{cases} \end{equation}\] which takes on four forms depending on the relationship of \(W\) to \(Y\), as described in Table 2. For each of domains \((0,\infty)\) and \((0,1)\), there are symmetric and asymmetric forms. Support of \(Y\) Symmetric Asymmetric \((0,\infty)\) \(W= Y\) \(W= \log(1+Y)\) \(h_{Is}(Y;\lambda_p)\) \(h_{Ia}(Y;\lambda_p)\) \((0,1)\) \(W= Y/(1-Y)\) \(W= -\log(1-Y)\) \(h_{Is}(Y;\lambda_p)\) \(h_{Ia}(Y;\lambda_p)\) Table 2: Choices of \(W\) and corresponding notation for transformations based on (13). Since the transformation in (13) has range \(\mathbb{R}\) for all \(\lambda_{p}\), it admits an explicit inverse transformation. In addition, in the case of a single covariate, every estimated quantile that results will be monotone increasing, decreasing or constant, although different estimated quantiles can have different shapes from this collection. Geraci and Jones (2015) also proposed a transformation that unifies the symmetric and asymmetric versions of \(h_{I}\) into a single two-parameter transformation, namely \[\begin{equation}\label{eq:14} \tag{14} h_{II}(W;\lambda_p) = h_I(W_{\delta_p};\lambda_p), \end{equation}\] where \(h_I\) is given in (13) and \[\begin{equation*} W_{\delta_{p}} = h_{BC}(1+W;\delta_p) = \begin{cases} \dfrac{(1+W)^{\delta_{p}} - 1}{\delta_{p}} & \text{if $\delta_{p} > 0$}\\[.5cm] \log (1+W) & \text{if $\delta_{p} = 0$}, \end{cases} \end{equation*}\] with \(W=Y\), if \(Y > 0\), and \(W=Y/(1-Y)\), if \(Y \in (0,1)\). The additional parameter \(\delta_{p}\) controls the asymmetry: symmetric forms of \(h_{I}\) correspond to \(\delta_p = 1\) while asymmetric forms of \(h_{I}\) to \(\delta_p = 0\). All transformation models discussed above can be fitted using a two-stage (TS) estimator (Chamberlain 1994; Buchinsky 1995) whereby \(\boldsymbol \beta(p)\) is estimated conditionally on a fine grid of values for the transformation parameter(s). Alternatively, point estimation can be approached using the RC process (Mu and He 2007), which is akin to the process that leads to the RC test introduced in the previous section. The RC estimator avoids the troublesome inversion of the Box-Cox and Aranda-Ordaz transformations, but it is computationally more intensive than the TS estimator. There are several methods for interval estimation, including those based on large-\(n\) approximations and the ubiquitous bootstrap. Both the TS and RC estimators have an asymptotic normal distribution. The large-sample properties of the TS estimator for monotonic quantile regression models have been studied by Powell (1991) (see also Chamberlain (1994);Machado and Mata (2000)). Under regularity conditions, it can be shown that the TS estimator is unbiased and will converge to a normal distribution with a sandwich-type limiting covariance matrix which is easy to calculate. In contrast, the form of the covariance matrix of the sampling distribution for the RC estimator is rather complicated and its estimation requires resampling (Mu and He 2007). Finally, if the transformation parameter is assumed to be known, then conditional inference is apposite. In this case, the estimation procedures simplify to those for standard quantile regression problems. 
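Before turning to the fitting functions, a tiny sketch may help fix ideas about the back-transformation in (8). The code below implements the Box–Cox transformation (10) and its inverse and maps a linear predictor back to the original scale; the coefficient and \(\lambda\) values are made up for illustration, and the check on \(\lambda\eta + 1 > 0\) reflects the range restriction noted above.

# Box-Cox transformation (10) and its inverse, used as in (8).
h_bc <- function(y, lambda) {
  if (lambda == 0) log(y) else (y^lambda - 1) / lambda
}
h_bc_inv <- function(eta, lambda) {
  if (lambda == 0) return(exp(eta))
  stopifnot(lambda * eta + 1 > 0)    # inversion is defined only on this range
  (lambda * eta + 1)^(1 / lambda)
}

# Hypothetical estimates at p = 0.5: Q_{h(Y)}(0.5) = b0 + b1 * x, with lambda = 0.5.
b0 <- 1; b1 <- 0.04; lambda <- 0.5
y0 <- 12.25
all.equal(h_bc_inv(h_bc(y0, lambda), lambda), y0)   # round trip on the original scale
h_bc_inv(b0 + b1 * 100, lambda)                      # predicted median at x = 100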
In Qtools, model fitting for one-parameter transformation models can be carried out using the function tsrq. The formula argument specifies the model for the linear predictor as in (7), while the argument tsf provides the desired transformation \(h\) as specified in Equations 10-13: bc for the Box–Cox model, ao for Aranda-Ordaz families, and mcjI for proposal I transformations. Additional arguments in the function tsrq include symmetry, a logical flag to specify the symmetric or asymmetric version of ao and mcjI; dbounded, a logical flag to specify whether the response variable is doubly bounded (default is strictly positive, i.e. singly bounded); lambda, a numerical vector to define the grid of values for estimating \(\lambda_p\); and conditional, a logical flag indicating whether \(\lambda_{p}\) is assumed to be known (in which case the argument lambda provides such known value). \end{itemize} There are other functions to fit transformation models. The function rcrq fits one-parameter transformation models using the RC estimator. The functions tsrq2 and nlrq2 are specific to Geraci and Jones's (2015) Proposal II transformations. The former employs a two-way grid search while the latter is based on Nelder-Mead optimization as implemented in optim. Simulation studies in Geraci and Jones (2015) suggest that, although computationally slower, a two grid search is numerically more stable than the derivative-free approach. A summary of the basic differences between all fitting functions is given in Table 3. The table also shows the available methods in summary.rqt to estimate standard errors and confidence intervals for the model's parameters. Unconditional inference is carried out jointly on \(\boldsymbol \beta(p)\) and the transformation parameter by means of bootstrap using the package boot (Canty and Ripley 2014; Davison and Hinkley 1997). Large-\(n\) approximations (Powell 1991; Chamberlain 1994; Machado and Mata 2000) are also available for the one-parameter TS estimator under iid or nid assumptions. When summary.rqt is executed with the argument conditional = TRUE, confidence interval estimation for \(\boldsymbol \beta_{p}\) is performed with one of the several methods developed for linear quantile regression estimators (see options rank, iid, nid, ker, and boot in quantreg::summary.rq). Function name Transformation parameters Estimation Standard errors or confidence intervals Unconditional Conditional tsrq 1 Two-stage iid, nid, boot All types rcrq 1 Residual cusum process boot All types tsrq2 2 Two-stage boot All types nlrq2 2 Nelder–Mead boot – Table 3: Transformation-based quantile regression in package Qtools. 'All types' consists of options rank, iid, nid, ker, and boot as provided by function summary in package quantreg. In the New York Air Quality data example, a linear model was found unsuitable to describe the relationship between ozone and solar radiation. At closer inspection, Figure 4 reveals that the conditional distribution of ozone may in fact be nonlinearly associated with solar radiation, at least for some of the conditional quantiles. 
The model

\[\begin{equation}\label{eq:15} \tag{15} Q_{h_{Is}\{\text{ozone}\}}(p) = \beta_{0}(p) + \beta_{1}(p) \cdot \text{Solar.R}, \end{equation}\]

where \(h_{Is}\) denotes the symmetric version of (13) for a singly bounded response variable, is fitted for the quantiles \(p \in \{0.1, 0.5, 0.9\}\) using the following code:

> system.time(fit.rqt <- tsrq(Ozone ~ Solar.R, data = dd, tsf = "mcjI",
+   symm = TRUE, dbounded = FALSE, lambda = seq(1, 3, by = 0.005),
+   conditional = FALSE, tau = c(.1, .5, .9)))
   user  system elapsed
    0.5     0.0     0.5
> fit.rqt

tsrq(formula = Ozone ~ Solar.R, data = dd, tsf = "mcjI", symm = TRUE,
    dbounded = FALSE, lambda = seq(1, 3, by = 0.005), conditional = FALSE,
    tau = c(0.1, 0.5, 0.9))

Proposal I symmetric transformation (singly bounded response)

Optimal transformation parameter:
tau = 0.1 tau = 0.5 tau = 0.9
    2.210     2.475     1.500

Coefficients linear model (transformed scale):
              tau = 0.1   tau = 0.5  tau = 0.9
(Intercept)  -3.3357578  -48.737341  16.557327
Solar.R       0.4169697    6.092168   1.443407

The TS estimator searches for \(\lambda_{p}\) over the grid \(1.000, 1.005, \ldots, 2.995, 3.000\). The choice of the search interval usually results from a compromise between accuracy and performance: the coarser the grid, the faster the computation but the less accurate the estimate. A reasonable approach is to start with a coarse, wide-ranging grid (e.g. seq(-5, 5, by = 0.5)), then center the interval about the resulting estimate using a finer grid, and re-fit the model. The output above reports the estimates \(\boldsymbol{\hat{\beta}}(p)\) and \(\hat{\lambda}_p\) for each quantile level specified in tau. Here, the quantities of interest are the predictions on the ozone scale and the marginal effect of solar radiation, which can be obtained using the function predict.rqt.

> x <- seq(9, 334, length = 200)
> qhat <- predict(fit.rqt, newdata = data.frame(Solar.R = x),
+   type = "response")
> dqhat <- predict(fit.rqt, newdata = data.frame(Solar.R = x),
+   type = "maref", namevec = "Solar.R")
The linear component of the marginal effect is calculated as derivative of
Ozone ~ beta1 * Solar.R
with respect to Solar.R

The calculations above are based on a sequence of 200 solar radiation values in the interval \([9, 334]\) Ly, as provided via the argument newdata (if this argument is missing, the function returns the fitted values). There are three types of predictions available: link, i.e. predictions of conditional quantiles on the transformed scale (7), \[\hat{Q}_{h\left(Y;\hat{\lambda}_{p}\right)}(p) = \mathbf{x}^{\top}\hat{\boldsymbol{\beta}}(p);\] response, i.e. predictions of conditional quantiles on the original scale (8), \[\hat{Q}_{Y|X}(p) = h^{-1}\left\{\mathbf{x}^{\top}\hat{\boldsymbol{\beta}}(p); \hat{\lambda}_{p}\right\};\] and maref, i.e. predictions of the marginal effect (9). In the latter case, the argument namevec is used to specify the name of the covariate with respect to which the marginal effect has to be calculated. The function maref.rqt computes derivatives symbolically using the stats function deriv and these are subsequently evaluated numerically. While the nonlinear component of the marginal effect in Equation 9 (i.e. \(\partial Q(p)/\partial \eta(p)\)) is rather straightforward to derive for any of the transformations (10)-(13), the derivative of the linear predictor (i.e. \(\partial \eta(p)/\partial x_{j}\)) requires parsing the formula argument in order to obtain an expression suitable for deriv.
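As a small, self-contained illustration of this last point (not Qtools code, and with a made-up coefficient name b1), stats::deriv returns an expression that evaluates both the value of a linear predictor term and its gradient with respect to a chosen covariate:

d <- deriv(~ b1 * Solar.R, "Solar.R")
eval(d, list(b1 = 2, Solar.R = 100))   # value 200 with attribute "gradient" equal to 2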
The function maref.rqt can handle simple expressions with common functions like log, exp, etc., interaction terms, and 'as is' terms (i.e. I()). However, using functions that are not recognised by deriv will trigger an error.

Figure 5. Predicted 10th (solid line), 50th (dashed line), and 90th (dot-dashed line) centiles of ozone conditional on solar radiation (a) and corresponding estimated marginal effects (b) using the symmetric Proposal I transformation in the New York Air Quality data set.

The predicted quantiles of ozone and the marginal effects of solar radiation are plotted in Figure 5 using the following code:

> par(mfrow = c(1, 2))
> plot(Ozone ~ Solar.R, data = dd, xlab = "Solar radiation (lang)",
+   ylab = "Ozone (ppb)")
> for(i in 1:3) lines(x, qhat[ ,i], lty = c(1, 2, 4)[i], lwd = 2)
> plot(range(x), range(dqhat), type = "n", xlab = "Solar radiation (lang)",
+   ylab = "Marginal effect")
> for(i in 1:3) lines(x, dqhat[ ,i], lty = c(1, 2, 4)[i], lwd = 2)

The effect of solar radiation on different quantiles of ozone levels shows a nonlinear behavior, especially at lower ranges of radiation (below \(50\) Ly) and for the median ozone. It is worth testing the goodness of fit of the model. In the previous analysis, evidence of lack of fit was found for the linear specification (5). In contrast, the output reported below indicates that, in general, the goodness of fit of the quantile models based on the transformation model (15) has improved, since the test statistics are now smaller at all values of \(p\). However, the improvement is not yet satisfactory for the median.

> GOFTest(fit.rqt, alpha = 0.05, B = 1000, seed = 416)

The TS and RC estimators generally provide similar estimates and predictions. However, computation based on the cusum process tends to be somewhat slow, as shown below. This is also true for the RC test provided by GOFTest.

> system.time(fit.rqt <- rcrq(Ozone ~ Solar.R, data = dd, tsf = "mcjI",
+   tau = c(.1, .5, .9)))
   user  system elapsed
  36.88    0.03   37.64

An example using doubly bounded transformations is demonstrated with the A-level Chemistry Scores data set, which is available from Qtools and consists of 31022 observations of A-level scores in Chemistry for England and Wales students in 1997. The data set also includes information on prior academic achievement as assessed with General Certificate of Secondary Education (GCSE) average scores. The goal is to evaluate the ability of GCSE to predict A-level scores. The latter are based on national exams in specific subjects (e.g. chemistry) with grades ranging from A to F. For practical purposes, scores are converted numerically as follows: A = 10, B = 8, C = 6, D = 4, E = 2, and F = 0. The response is therefore doubly bounded between 0 and 10. It should be noted that this variable is discrete although, for the sake of simplicity, here it is assumed that the underlying process is continuous. The model considered here is

\[\begin{equation}\label{eq:16} \tag{16} Q_{h_{AOa}\{\text{score}\}}(p) = \beta_{0}(p) + \beta_{1}(p) \cdot \text{gcse}, \end{equation}\]

where \(h_{AOa}\) denotes the asymmetric Aranda-Ordaz transformation in (12). This model is fitted for \(p = 0.9\):

> data(Chemistry)
> fit.rqt <- tsrq(score ~ gcse, data = Chemistry, tsf = "ao", symm = FALSE,
+   lambda = seq(0, 2, by = 0.01), tau = 0.9)

The predicted \(90\)th centile of A-level scores conditional on GCSE and the marginal effect of GCSE are plotted in Figure 6. There is clearly a positive, nonlinear association between the two scores.
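For reference, a common parameterisation of the asymmetric Aranda-Ordaz transformation appearing in (16) is sketched below for a response rescaled to the unit interval; this is an illustration rather than the Qtools implementation, whose internal parameterisation and rescaling may differ. Its \(\lambda = 0\) limit is the complementary log-log transformation discussed next.

ao_asym <- function(theta, lambda) {
  # theta: doubly bounded response rescaled to the interior of (0, 1), e.g. score/10
  if (lambda == 0) return(log(-log(1 - theta)))    # complementary log-log limit
  log(((1 - theta)^(-lambda) - 1) / lambda)
}
theta <- seq(0.05, 0.95, by = 0.05)
cbind(cloglog = ao_asym(theta, 0), ao_half = ao_asym(theta, 0.5))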
The nonlinearity is partly explained by the floor and ceiling effects which result from the boundedness of the measurement scale. Note, however, that the S-shaped curve is not symmetric about the inflection point. As a consequence, the marginal effect is skewed to the left. Indeed, the estimate \(\hat{\lambda}_{0.9} = 0\) and the narrow confidence interval give support to a complementary log-log transformation:

> summary(fit.rqt, conditional = FALSE, se = "nid")

summary.rqt(object = fit.rqt, se = "nid", conditional = FALSE)

Aranda-Ordaz asymmetric transformation (doubly bounded response)

Summary for unconditional inference

tau = 0.9

      Value    Std. Error   Lower bound   Upper bound
0.000000000   0.001364422  -0.002674218   0.002674218

                  Value    Std. Error  Lower bound  Upper bound
(Intercept)  -4.3520060   0.015414540   -4.3822179   -4.3217941
gcse          0.8978072   0.002917142    0.8920898    0.9035247

Degrees of freedom: 31022 total; 31020 residual

Alternatively, one can estimate the parameter \(\delta_{p}\) using a two-parameter transformation:

> coef(tsrq2(score ~ gcse, data = chemsub, dbounded = TRUE,
+   lambda = seq(0, 2, by = 0.1), delta = seq(0, 2, by = 0.1),
+   tau = 0.9), all = TRUE)
(Intercept)        gcse      lambda       delta
 -4.1442274   0.8681246   0.0000000   0.0000000

These results confirm the asymmetric nature of the relationship since \(\hat{\delta}_{0.9} = 0\). Similar results (not shown) were obtained with nlrq2.

In conclusion, the package Qtools offers several options in terms of transformations and estimation algorithms, the advantages and disadvantages of which are discussed by Geraci and Jones (2015). In particular, they found that the symmetric Proposal I transformation improves considerably on the Box-Cox method and marginally on the Aranda-Ordaz transformation in terms of mean squared error of the predictions. Also, asymmetric transformations do not seem to improve sufficiently often on symmetric transformations to be especially recommendable. However, the Box-Cox and the symmetric Aranda-Ordaz transformations should not be used when individual out-of-range predictions represent a potential inconvenience as, for example, in multiple imputation (see the section further below). Finally, in some situations transformation-based quantile regression may be competitive with methods based on smoothing, as demonstrated by a recent application to anthropometric charts (Boghossian et al. 2016).

Quantile-based measures of location, scale, and shape can be assessed conditionally on covariates. A simple approach is to fit a linear model as in (4) or a transformation-based model as in (7), and then predict \(\hat{Q}_{Y|X}(p)\) to obtain the conditional LSS measures in Equation 3 for specific values of \(\mathbf{x}\). Estimation of conditional LSS can be carried out using the function qlss.formula. The conditional model is specified in the argument formula, while the probability \(p\) is given in probs. (As seen in Equation 3, the other probabilities of interest for the decomposition of the conditional quantiles are \(1-p\), \(0.25\), \(0.5\), and \(0.75\).) The argument type specifies the required type of regression model, namely rq for linear models and rqt for transformation-based models. The function qlss.formula will pass any additional argument on to quantreg::rq or tsrq (e.g. subset, weights, etc.). Let's consider the New York Air Quality data example discussed in the previous section and assume that the transformation model (15) holds for the quantiles \(p \in \{0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95\}\).
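Before running the example, it may help to spell out the kind of quantile-based quantities that such a summary involves. The sketch below uses common textbook definitions (median, inter-quartile and inter-quantile ranges, and the associated skewness and shape indices) purely to fix ideas; the exact expressions used by qlss are those in Equation 3 and may differ in detail.

lss_sketch <- function(q) {
  # q: named vector of predicted quantiles for one value of x, e.g.
  # q <- c("0.1" = 10, "0.25" = 14, "0.5" = 22, "0.75" = 38, "0.9" = 60)
  iqr <- q["0.75"] - q["0.25"]
  ipr <- q["0.9"] - q["0.1"]                       # inter-quantile range at p = 0.1
  c(location  = unname(q["0.5"]),
    scale.IQR = unname(iqr),
    scale.IPR = unname(ipr),
    skewness  = unname((q["0.9"] + q["0.1"] - 2 * q["0.5"]) / ipr),
    shape     = unname(ipr / iqr))
}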
Then the conditional LSS summary of the distribution of ozone conditional on solar radiation for \(p = 0.05\) and \(p = 0.1\) is calculated as follows:

> fit.qlss <- qlss(formula = Ozone ~ Solar.R, data = airquality, type = "rqt",
+   tsf = "mcjI", symm = TRUE, dbounded = FALSE,
+   lambda = seq(1, 3, by = 0.005), probs = c(0.05, 0.1))
> fit.qlss

qlss.formula(formula = Ozone ~ Solar.R, probs = c(0.05, 0.1), data = airquality,
    type = "rqt", tsf = "mcjI", symm = TRUE, dbounded = FALSE,
    lambda = seq(1, 3, by = 0.005))

Conditional Quantile-Based Location, Scale, and Shape
-- Values are averaged over observations --

[1] 30.2258
[1] 43.40648
     0.05      0.1
**Shape**
     0.05      0.1
0.5497365 0.5180108

The output, which is of class "qlss", is a named list with the same LSS measures seen in the case of unconditional quantiles. However, these are now conditional on solar radiation. By default, the predictions are the fitted values, which are averaged over observations for printing purposes. An optional data frame for predictions can be given via the argument newdata in predict.qlss. If interval = TRUE, the latter computes confidence intervals at the specified level using R bootstrap replications (it is, therefore, advisable to set the seed before calling predict.qlss). The conditional LSS measures can be conveniently plotted using the plot.qlss function as shown in the code below. The argument z is required and specifies the covariate used for plotting. Finally, the argument whichp specifies one probability (and one only) among those given in probs that should be used for plotting (e.g. \(p = 0.1\) in the following example).

> qhat <- predict(fit.qlss, newdata = data.frame(Solar.R = x),
+   interval = TRUE, level = 0.90, R = 500)
> plot(qhat, z = x, whichp = 0.1, interval = TRUE, type = "l",
+   xlab = "Solar radiation (lang)", lwd = 2)

Figure 7 shows that both the median and the IQR of ozone increase nonlinearly with increasing solar radiation. The distribution of ozone is skewed to the right and the degree of asymmetry is highest at low values of solar radiation. This is due to the extreme curvature of the median, which takes on values close to the 10th centile (Figure 5). (Recall that the index approaches 1 when \(Q(p) \approx Q(0.5)\).) However, the sparsity of observations at the lower end of the observed range of solar radiation leads to substantial uncertainty, as reflected by the wider confidence interval (Figure 7). At \(p = 0.1\), the conditional shape index is on average equal to \(1.66\) and it increases monotonically from \(1.32\) to about \(1.85\), remaining always below the tail-weight threshold of a normal distribution (\(1.90\)).

Figure 7. Location, scale and shape of ozone levels conditional on solar radiation in the New York Air Quality data set. Dashed lines denote the bootstrapped 90% point-wise confidence intervals.

Besides a loss of precision, high sparsity (low density) might also lead to a violation of the basic property of monotonicity of quantile functions. Quantile crossing occurs when \(\mathbf{x}_{i}^{\top}\hat{\boldsymbol \beta}(p) > \mathbf{x}_{i}^{\top}\hat{\boldsymbol \beta}(p')\) for some \(\mathbf{x}_{i}\) and \(p < p'\). This problem typically occurs in the outlying regions of the design space (Koenker 2005), where sparsity is also more frequent. Balanced designs with larger sample sizes would then offer some assurance against quantile crossing, provided, of course, that the QR models are correctly specified.
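In practice, crossing can be diagnosed directly from the fitted values. The following minimal sketch (not part of Qtools) flags observations whose predicted quantiles are not monotone in \(p\); it expects a matrix with one row per observation and columns ordered by increasing quantile level, such as the yhat1 matrix computed in the esterase example below.

has_crossing <- function(qmat) {
  # TRUE for rows whose fitted quantiles decrease somewhere as p increases
  apply(qmat, 1, function(q) any(diff(q) < 0))
}
# e.g. sum(has_crossing(yhat1)) counts affected observations, and
# t(apply(yhat1, 1, sort)) is the naive row-wise rearrangement fix discussed further below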
Model misspecification, indeed, can still cause the quantile curves to cross. Restricted regression quantiles (RRQ) (He 1997) may offer a practical solution when little can be done in terms of modelling. This approach applies to a subclass of linear models \[ Y = \mathbf{x}^{\top}\beta + \epsilon \] and linear heteroscedastic models \[ Y = \mathbf{x}^{\top}\beta + (\mathbf{x}^{\top}\gamma)\,\epsilon, \] where \(\mathbf{x}^{\top}\gamma > 0\) and \(\epsilon \sim F\). Basically, it consists in fitting a reduced regression model passing through the origin. The reader is referred to He (1997) for details. Here, it is worth stressing that when the restriction does not hold, i.e. if the model is more complex than a location-scale-shift model, then RRQ may yield unsatisfactory results. See also Zhao (2000) for an examination of the asymptotic properties of the restricted QR estimator. In particular, the relative efficiency of RRQ as compared to RQ depends on the error distribution. For some common unimodal distributions, Zhao (2000) showed that RRQ in iid models is more efficient than RQ. This property is lost when the error is asymmetric. In contrast, the efficiency of RRQ in heteroscedastic models is comparable to that of RQ even for small samples.

The package Qtools provides the functions rrq, rrq.fit and rrq.wfit which are, respectively, the 'restricted' analogues of rq, rq.fit, and rq.wfit in quantreg. S3 methods print, coef, predict, fitted, residuals, and summary are available for objects of class "rrq". In particular, confidence intervals are obtained using the functions boot and boot.ci from the package boot. Future versions of the package will extend the function summary.rrq to include asymptotic standard errors (Zhao 2000). An application is shown below using the esterase data set, which is available from Qtools and consists of \(118\) measurements of esterase concentrations and numbers of bindings counted in binding experiments.

> data("esterase")
> taus <- c(.1, .25, .5, .75, .9)
> fit.rq <- rq(Count ~ Esterase, data = esterase, tau = taus)
> yhat1 <- fitted(fit.rq)
> fit.rrq <- rrq(Count ~ Esterase, data = esterase, tau = taus)
> yhat2 <- fitted(fit.rrq)

The predicted 90th centile curve crosses the 50th and 75th curves at lower esterase concentrations (Figure 8). The crossing is removed in predictions based on RRQs.

Figure 8. Predicted quantiles of number of bindings conditional on esterase concentration using regression quantiles (a) and restricted regression quantiles (b) in the Esterase data set.

As discussed above, the reliability of the results depends on the validity of the restriction carried by RRQ. A quick check can be performed using the location-scale-shift specification of the Khmaladze test.

> kt <- KhmaladzeTest(formula = Count ~ Esterase, data = esterase,
+   taus = seq(.05, .95, by = .01), nullH = "location-scale")

The quantile crossing problem can also be approached by directly rearranging the fitted values \(\hat{Q}_{Y|X = \mathbf{x}}(p)\) to obtain monotone (in \(p\)) predictions for each \(\mathbf{x}\) (Chernozhukov, Fernandez-Val, and Galichon 2010). This method is implemented in the package Rearrangement (Graybill et al. 2016). As compared to RRQ, this approach is more general as it is not confined to, for example, location-scale-shift models (Chernozhukov, Fernandez-Val, and Galichon 2010); however, in contrast to RRQ, it does not yield estimates of the parameters (e.g. slopes) of the model underlying the final monotonised curves.
Such estimates, available from rrq objects, may be of practical utility when summarising the results.

Modeling discrete response variables, such as categorical and count responses, has traditionally been approached with distribution-based methods: a parametric model \(F_{Y|X}(y;\theta)\) is assumed and then fitted by means of MLE. Binomial, negative binomial, multinomial and Poisson regressions are well known in many applied sciences. Because of the computational advantages and the asymptotic properties of MLE, these methods have long ruled among competing alternatives. Modeling conditional functions of discrete data is less common and, on a superficial level, might even appear as an unnecessary complication. However, a deeper look at its rationale reveals that a distribution-free analysis can provide insightful information in the discrete case just as it does in the continuous case. Indeed, methods for conditional quantiles of continuous distributions can be, and have been, adapted to discrete responses.

Let \(Y\) be a count variable such as, for example, the number of car accidents during a week or the number of visits of a patient to their doctor during a year. As usual, \(X\) denotes a vector of covariates. Poisson regression, which belongs to the family of generalized linear models (GLMs), is a common choice for this kind of data, partly because of its availability in many statistical packages. Symbolically, \(Y \sim \textrm{Pois}(\theta)\) where \[ \theta \equiv \mathrm{E}(Y|X = \mathbf{x}) = h^{-1}\left(\mathbf{x}^{\top}\boldsymbol{\beta}\right) \] and \(h\) is the logarithmic link function. Note that the variance is also equal to \(\theta\). Indeed, moments of order higher than \(2\), which govern the shape of the distribution, depend on the same parameter. Every component of the conditional LSS in a Poisson model is therefore controlled by \(\theta\). If needed, more flexibility can be achieved using a distribution-free approach. Machado and Santos Silva (2005) proposed the model

\[\begin{equation}\label{eq:17} \tag{17} Q_{h\left(Z;p\right)}(p) = \mathbf{x}^{\top}\boldsymbol \beta(p), \end{equation}\]

where \(Z = Y + U\) is obtained by jittering \(Y\) with \([0,1)\)-uniform noise \(U\), independent of \(Y\) and \(X\). In principle, any monotone transformation \(h\) can be considered. A natural choice for count data is a log-linear model, i.e.

\[\begin{equation*} h\left(Z;p\right) = \begin{cases} \log\left(Z - p\right) & \text{for $Z > p$}\\[.5cm] \log \zeta & \text{for $Z \leq p$,} \end{cases} \end{equation*}\]

where \(0 < \zeta < p\). It follows that \(Q_{Z|X}(p) = p + \exp\left(\mathbf{x}^{\top}\boldsymbol \beta(p)\right)\). (Note that the \(p\)th quantile of the conditional distribution of \(Z\) is bounded below by \(p\).) Given the continuity between counts induced by jittering, standard inference for linear quantile functions (Koenker and Bassett 1978) can be applied to fit (17). In practice, a sample of \(M\) jittered responses \(Z\) is taken to estimate \(\hat{\boldsymbol \beta}_{m}(p)\), \(m = 1,\ldots,M\); the noise is then averaged out, \(\hat{\boldsymbol \beta}(p) = \frac{1}{M} \sum_{m} \hat{\boldsymbol \beta}_{m}(p)\).

Figure 9. Predicted 10th, 50th, and 90th centiles of number of bindings conditional on esterase concentration using Poisson regression and distribution-free quantile regression (QR) based on jittering in the Esterase data set.
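The jittering recipe just described can be spelled out in a few lines of R; the sketch below (illustrative names only, not the Qtools implementation) uses quantreg::rq with the log-linear transformation \(h(Z;p)\) and averages the estimates over \(M\) jittered samples.

library(quantreg)

jitter_rq <- function(y, x, p, M = 50, zeta = 1e-5) {
  coefs <- replicate(M, {
    z  <- y + runif(length(y))                      # jittered response Z = Y + U
    hz <- ifelse(z > p, log(z - p), log(zeta))      # log-linear transformation h(Z; p)
    coef(rq(hz ~ x, tau = p))                       # beta_m(p)
  })
  rowMeans(coefs)                                   # average out the noise
}
# predicted p-th conditional quantile of Z: p + exp(b[1] + b[2] * x), with b <- jitter_rq(...)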
Machado and Santos Silva (2005)'s methods, including large-\(n\) approximations for standard errors, are implemented in the function rq.counts. The formula argument specifies a log-linear model as in (17). Prior to version 1.4 of the package, more flexible transformations as described above (e.g. Box-Cox) were allowed; this option has since been removed because inference under transformations other than the logarithmic one was found to be unreliable, likely due to coding issues. In the example below, estimation is carried out using \(M = 50\) jittered samples and \(\zeta = 10^{-5}\) (see Machado and Santos Silva (2005) for further details on these settings).

R> data(esterase)
R> fit.rq.counts <- rq.counts(formula = Count ~ Esterase, tau = 0.1,
+   data = esterase, M = 50, zeta = 1e-05)

Figure 9 shows a contrast between centile curves as predicted by the Poisson and the QR models in the Esterase data set. The Poisson distribution clearly underestimates the variability in the data. An empirical modeling of the conditional quantiles seems preferable in this case. Of course, the assumption of log-linearity of the models would need to be carefully assessed (note that GOFTest can also be applied to rq.counts objects).

Geraci and Farcomeni (2019) developed quantile regression methods for discrete responses by extending Parzen's definition of marginal mid-quantiles, which we discussed previously. These methods are general as they can be applied to a large variety of discrete responses, including binary, ordinal, and count variables. Geraci and Farcomeni (2019) first defined the conditional mid-distribution function as

\[\begin{equation}\label{eq:18} \tag{18} G_{Y|X}(y) \equiv \Pr(Y \leq y|X) - 0.5\cdot \Pr(Y = y|X). \end{equation}\]

Then, they introduced the conditional mid-quantile function as its inverse, that is \(H_{Y|X}(p) \equiv G^{-1}_{Y|X}(p)\). Estimation is carried out in two steps. In the first step, conditional mid-probabilities are obtained nonparametrically and, in the second step, regression coefficients are estimated by solving an implicit equation. When the quantile index is constrained to a data-driven admissible range, the second-step estimating equation has a least-squares-type, closed-form solution. This estimator is shown to be strongly consistent and asymptotically normal, and its good performance is demonstrated in a simulation study with data generated from different discrete response models.

Figure 10. Predicted 50th and 90th centiles of number of bindings conditional on esterase concentration using Poisson regression and mid-quantile regression (mid-QR) in the Esterase data set.

In the example below, the function midrq is applied to the Esterase data set to obtain the 50th and 90th conditional mid-quantiles (plotted in Figure 10). As opposed to rq.counts, the function midrq supports Box-Cox and Aranda-Ordaz transformations. S3 methods for midrq objects (e.g., summary, coef, predict, residuals, vcov) are available in the Qtools package. The latter also provides the function cmidecdf to fit the conditional mid-CDF. See Geraci and Farcomeni (2019) for details on estimation.

R> fit.midrq <- midrq(Count ~ Esterase, tau = c(0.5, 0.9),
+   data = esterase, type = 1, lambda = 0)

Regression models play an important role in conditional imputation of missing values. QR can be used as an effective approach for multiple imputation (MI) when location-shift models are inadequate (Muñoz and Rueda 2009; Bottai and Zhen 2013; Geraci 2016).
In Qtools, mice.impute.rq and mice.impute.rrq are auxiliary functions written to be used along with the functions of the R package mice (van Buuren and Groothuis-Oudshoorn 2011). The former is based on the standard QR estimator (rq.fit) while the latter on the restricted counterpart (rrq.fit). Both imputation functions allow for the specification of the transformation-based QR models discussed previously. The equivariance property is useful to achieve linearity of the conditional model and to ensure that imputations lie within some interval when imputed variables are bounded. An example is available from the help file using the nhanes data set. See also Geraci (2016) and Geraci and McLain (2018) for a thorough description of these methods. Quantiles have long occupied an important place in statistics. The package Qtools builds on recent methodological and computational developments of quantile functions and related methods to promote their application in statistical data modelling. This work was partially supported by an ASPIRE grant from the Office of the Vice President for Research at the University of South Carolina. Please cite the Qtools package as: Geraci, M. (2016). Qtools: A collection of models and other tools for quantile inference. R Journal, 8(2), 117-138. Aranda-Ordaz, F. J. 1981. "On Two Families of Transformations to Additivity for Binary Response Data." Biometrika 68 (2): 357–63. Azzalini, A., and A. W. Bowman. 1990. "A Look at Some Data on the Old Faithful Geyser." Journal of the Royal Statistical Society C 39 (3): 357–65. Benjamini, Y., and A. M. Krieger. 1996. "Concepts and Measures for Skewness with Data-Analytic Implications." Canadian Journal of Statistics 24 (1): 131–40. Benoit, D. F., R. Al-Hamzawi, K. Yu, and D. Van den Poel. 2014. BayesQR: Bayesian Quantile Regression. https://cran.r-project.org/package=bayesQR. Boghossian, N. S., M. Geraci, E. M. Edwards, K. A. Morrow, and J. D. Horbar. 2016. "Anthropometric Charts for Infants Born Between 22 and 29 Weeks' Gestation." Pediatrics. Bottai, M., and H. Zhen. 2013. "Multiple Imputation Based on Conditional Quantile Estimation." Epidemiology, Biostatistics, and Public Health 10 (1): e8758. Box, G. E. P., and D. R. Cox. 1964. "An Analysis of Transformations." Journal of the Royal Statistical Society B 26 (2): 211–52. Buchinsky, M. 1995. "Quantile Regression, Box-Cox Transformation Model, and the US Wage Structure, 1963-1987." Journal of Econometrics 65 (1): 109–54. Canty, A., and B. D. Ripley. 2014. Boot: Bootstrap R (S-Plus) Functions. https://cran.r-project.org/package=boot. Chamberlain, G. 1994. "Quantile Regression, Censoring, and the Structure of Wages." In Advances in Econometrics: Sixth World Congress, edited by C. Sims. Vol. 1. Cambridge, UK: Cambridge University Press. Chernozhukov, V., I. Fernandez-Val, and A. Galichon. 2010. "Quantile and Probability Curves Without Crossing." Econometrica 78 (3): 1093–1125. Davison, A. C., and D. V. Hinkley. 1997. Bootstrap Methods and Their Applications. Cambridge: Cambridge University Press. Dehbi, H.-M., M. Cortina-Borja, and M. Geraci. 2016. "Aranda-Ordaz Quantile Regression for Student Performance Assessment." Journal of Applied Statistics 43 (1): 58–71. Doksum, K. 1974. "Empirical Probability Plots and Statistical Inference for Nonlinear Models in the Two-Sample Case." The Annals of Statistics 2 (2): 267–77. Geraci, M. 2014. "Linear Quantile Mixed Models: The Lqmm Package for Laplace Quantile Regression." Journal of Statistical Software 57 (13): 1–29. ———. 2016. 
"Estimation of Regression Quantiles in Complex Surveys with Data Missing at Random: An Application to Birthweight Determinants." Statistical Methods in Medical Research 25 (4): 1393–1421. Geraci, M., and M. Bottai. 2014. "Linear Quantile Mixed Models." Statistics and Computing 24 (3): 461–79. Geraci, M., and A. Farcomeni. 2019. "Mid-Quantile Regression for Discrete Responses." arXiv Preprint arXiv:1907.01945 [stat.ME]. Geraci, M., and M. C. Jones. 2015. "Improved Transformation-Based Quantile Regression." Canadian Journal of Statistics 43 (1): 118–32. Geraci, M., and A. McLain. 2018. "Multiple Imputation for Bounded Variables." Psychometrika 83 (4): 919–40. Gilchrist, W. 2000. Statistical Modelling with Quantile Functions. Boca Raton, FL: Chapman & Hall/CRC. Graybill, W., M. Chen, V. Chernozhukov, I. Fernandez-Val, and A. Galichon. 2016. Rearrangement: Monotonize Point and Interval Functional Estimates by Rearrangement. https://cran.r-project.org/package=Rearrangement. Groeneveld, R. A. 1998. "A Class of Quantile Measures for Kurtosis." The American Statistician 52 (4): pp. 325–29. Groeneveld, R. A., and G. Meeden. 1984. "Measuring Skewness and Kurtosis." Journal of the Royal Statistical Society D 33 (4): pp. 391–99. Hald, A. 2003. A History of Probability and Statistics and Their Applications Before 1750. New York, NY: John Wiley & Sons. He, X. 1997. "Quantile Curves Without Crossing." The American Statistician 51 (2): 186–92. He, X. M., and L. X. Zhu. 2003. "A Lack-of-Fit Test for Quantile Regression." Journal of the American Statistical Association 98 (464): 1013–22. Horn, P. S. 1983. "A Measure for Peakedness." The American Statistician 37 (1): 55–56. Hyndman, R. J., and Y. Fan. 1996. "Sample Quantiles in Statistical Packages." The American Statistician 50 (4): 361–65. Jones, M. C. 2007. "Connecting Distributions with Power Tails on the Real Line, the Half Line and the Interval." International Statistical Review 75 (1): 58–69. Jones, M. C., J. F. Rosco, and A. Pewsey. 2011. "Skewness-Invariant Measures of Kurtosis." The American Statistician 65 (2): 89–95. Khmaladze, E. V., and H. L. Koul. 2004. "Martingale Transforms Goodness-of-Fit Tests in Regression Models," 995–1034. Koenker, R. 2005. Quantile Regression. New York, NY: Cambridge University Press. ———. 2013. Quantreg: Quantile Regression. https://cran.r-project.org/package=quantreg. Koenker, R., and G. Bassett. 1978. "Regression Quantiles." Econometrica 46 (1): 33–50. Koenker, R., and V. D'Orey. 1987. "Algorithm AS 229: Computing Regression Quantiles." Journal of the Royal Statistical Society C 36 (3): 383–93. Koenker, R., and J. A. F. Machado. 1999. "Goodness of Fit and Related Inference Processes for Quantile Regression." Journal of the American Statistical Association 94 (448): 1296–1310. Koenker, R., and B. J. Park. 1996. "An Interior Point Algorithm for Nonlinear Quantile Regression." Journal of Econometrics 71 (1-2): 265–83. Koenker, R., and Z. J. Xiao. 2002. "Inference on the Quantile Regression Process." Econometrica 70 (4): 1583–1612. Lehmann, E. L. 1975. Nonparametrics: Statistical Methods Based on Ranks. San Francisco, CA: Holden-Day. Ma, Y., M. G. Genton, and E. Parzen. 2011. "Asymptotic Properties of Sample Quantiles of Discrete Distributions." Annals of the Institute of Statistical Mathematics 63 (2): 227–43. Machado, J. A. F., and J. Mata. 2000. "Box–Cox Quantile Regression and the Distribution of Firm Sizes." Journal of Applied Econometrics 15 (3): 253–74. Machado, J. A. F., and J. M. C. Santos Silva. 2005. 
"Quantiles for Counts." Journal of the American Statistical Association 100 (472): 1226–37. Mu, Y. M., and X. M. He. 2007. "Power Transformation Toward a Linear Regression Quantile." Journal of the American Statistical Association 102 (477): 269–79. Muñoz, J. F., and M. Rueda. 2009. "New Imputation Methods for Missing Data Using Quantiles." Journal of Computational and Applied Mathematics 232 (2): 305–17. Parzen, E. 1979. "Nonparametric Statistical Data Modeling." Journal of the American Statistical Association 74 (365): 105–21. ———. 2004. "Quantile Probability and Statistical Data Modeling." Statistical Science 19 (4): 652–62. Powell, J. L. 1991. "Estimation of Monotonic Regression Models Under Quantile Restrictions." In Nonparametric and Semiparametric Methods in Econometrics and Statistics: Proceedings of the Fifth International Symposium on Economic Theory and Econometrics, edited by W. Barnett, J. Powell, and G. Tauchen, 357–84. New York, NY: Cambridge University Press. ———. 1994. "Estimation of Semiparametric Models." In Handbook of Econometrics, edited by F. Engle Robert and L. McFadden Daniel, Volume 4:2443–2521. Elsevier. Ruppert, D. 1987. "What Is Kurtosis?: An Influence Function Approach." The American Statistician 41 (1): 1–5. Smith, L., and B. Reich. 2013. BSquare: Bayesian Simultaneous Quantile Regression. https://cran.r-project.org/package=BSquare. Staudte, R. G. 2014. "Inference for Quantile Measures of Kurtosis, Peakedness and Tail-Weight." arXiv Preprint arXiv:1047.6461v1 [math.ST]. https://doi.org/10.1080/03610926.2015.1056366. Tukey, J. W. 1965. "Which Part of the Sample Contains the Information?" Proceedings of the National Academy of Sciences of the United States of America 53 (1): 127–34. van Buuren, S., and K. Groothuis-Oudshoorn. 2011. "Mice: Multivariate Imputation by Chained Equations in R." Journal of Statistical Software 45 (3): 1–67. Yu, K. M., and M. C. Jones. 1998. "Local Linear Quantile Regression." Journal of the American Statistical Association 93 (441): 228–37. Zhao, Q. S. 2000. "Restricted Regression Quantiles." Journal of Multivariate Analysis 72 (1): 78–99. Zheng, J. X. 1998. "A Consistent Nonparametric Test of Parametric Regression Models Under Conditional Quantile Restrictions." Econometric Theory 14 (1): 123–38.
Tuesday, April 30, 2019 ... / / Quantum mechanics and why Elon Musk's IQ is below 130 In Q1 of 2019, Tesla reported a loss of $702 million, far worse than all estimates, and this number is still a euphemism for the financial reality because if we omitted some sales of $200 million worth of carbon indulgences, the loss would be $918 million per quarter (or $1,140 million in cash). And there were apparently other "stretching cosmetic fixes" that have made the situation of Tesla at the end of March look better than the natural picture of the reality. The cash dropped from almost $4 billion at the end of 2018 to $2.2 billion at the end of March – a part of the drop was the $920 million bond. The cash burn rate almost certainly accelerated dramatically in April – see e.g. the 80% drop of sales in Norway between March and April – which indicates that the company could run out of cash in May or early June. Meanwhile, it's being debated whether Tesla may collect new cash. There seem to be obstacles and given the decreased stock price, it would be more expensive than months ago. What the 97% consensus looks like: 100 kids (metaphor of Tesla bulls) vs 3 professional soccer players (Tesla bears). If someone doesn't have good arguments or skills, a high number of 100 or more such people just doesn't help, an elementary point that the people from "Modia" don't seem to get. Goals are near 1:42 and 2:57. Watching it is actually more entertaining for me than a regular adult soccer match. I am often looking at TSLAQ posts on Twitter. Most of the Tesla skeptics who use the $TSLAQ hashtag in their tweets are extremely reasonable, insightful, and quantitative. Many of them have studied the Tesla financial reports with a microscope and their detailed understanding of the company's situation is plain amazing. And most of their opponents are irrational bullies with low intelligence who only know how to talk badly about others but who have nothing to contribute. These people only know "Tesla fan good, Tesla skeptic bad". There seems to be nothing else in their brain whatever. The difference boils down to the integrity of TSLAQ. Other texts on similar topics: cars, IQ, markets, quantum foundations, religion, science and society Sunday, April 28, 2019 ... / / String theorists approach the status of heliocentric heretics Galileo Galilei was legally harassed between 1610 and 1633. Most of us agree that the Inquisition was composed of dogmatists who were suppressing science. Some of them were rather smart but they were still dogmatists. However, what would be wrong to imagine is that Galileo was tortured in a dungeon. Instead, this is how Solomon Alexander Hart (1806-1881) saw Milton's visit to Galileo when the latter was imprisoned. Galileo lived in a pretty fancy prison, right? He had what he needed to keep on thinking. You may compare Galileo's fancy spaces to the modest, prison-like office of Edward Witten's or, if your stomach is strong, to Alan Guth's office, voted the messiest office in the Solar System. ;-) Other texts on similar topics: science and society, string vacua and phenomenology, stringy quantum gravity Facebook deactivated hundreds of Trump ads because he said "Ladies" Thankfully, Czech media are still informing us about the basic events in the world and they don't avoid the topic of the ongoing degradation of the Western civilization by the political correctness and its shameless apologists. 
Yesterday, all top newspapers including iDNES.cz and Novinky.cz reported on an Internet story that left at least 641+116 Czech commenters speechless. Correct me if I am wrong but it seems that this story has been completely censored by the Western mainstream media except for The Reference Frame. The Trump campaign wanted to wish a happy birthday to Melania – April 26th, like the Chernobyl accident – so they ran lots of ads encouraging folks to send her postcards or something like that. Donald Trump called the expected audience of some Texan ads "Ladies" and... it was a problem! Ladies are banned on Facebook! ;-) Other texts on similar topics: computers, freedom vs PC, politics Friday, April 26, 2019 ... / / Popper, a self-described anti-dogmatist, became a preferred tool of dogmatists In recent years, we often heard that science is obliged to work according to the rules of Karl Popper. A whole religious movement has been created around this philosopher. Some sentences by this guy should be properly interpreted, analyzed, and in this way, the most important questions of contemporary physics can be answered. Is string theory correct? Do we live in a braneworld, a multiverse? Are the swampland conjectures correct? Is there low-energy supersymmetry, axions, inflaton? Can quantum mechanics be deformed? Just read Karl Popper, these people basically tell us, it's all there. I am baffled by the sheer irrationality of this thinking. The answers to the real scientific questions may only be settled by actual scientific evidence. And Popper has presented zero amount of this evidence. And so did Kuhn. And every other philosopher. What is the scientific value of a critic of physics or string theory? Zero, nada, nothing. Philosophers just talk, scientists do actual science. These activities have been separated for thousands of years. Scientists aren't assistants of philosophers who just complete some details about the philosophers' great visions. Scientists use their, scientific method to settle the biggest questions, too. New York is banning hot dogs, processed meat I thought that the story Willie sent me was a hoax. But it seems to be confirmed at many places: NYC To Ban Hot Dogs and Processed Meats To Improve Climate So far, government-run facilities such as schools, hospitals, and prisons won't buy processed meat – hot dogs, sausages, salami, and others. The consumption of processed meat in the city could drop by 50%. It seems clear that they eventually want to go further. The previous NYC mayor, Michael Bloomberg, is a leftist RINO. But those who thought that his policies were the most extreme policies possible were proven wrong. Newyorkers voted for a Democrat, Bill de Blasio, and he is beginning to show how much ahead of RINOs the Democrats are. His mission is to ruin the characteristic New York hot dogs in order to work on the... Green New Deal! Why is he doing it? Because this 8-year-old girl, AOC, told him that cow farts cause global warming! So he thinks that to save the planet from the otherwise unavoidable death in 12 years (AOC incorrectly thinks that 29+12=37), he needs to ban hot dogs and similar life essentials. Other texts on similar topics: biology, climate, politics, science and society Thursday, April 25, 2019 ... / / Four interesting papers On hep-ph, I choose a paper by Benedetti, Li, Maxin, Nanopoulos about a natural D-braneworld (supersymmetric) inspired model. 
The word "inspired" means that they believe that similar models (effective QFTs) arise from a precise string theory analysis of some compactifications but as phenomenologists, they don't want to do the string stuff precisely. ;-) It belongs to these authors' favorite flipped \(SU(5)\) or \({\mathcal F}\)-\(SU(5)\) class of models and involves new fields, flippons, near \(1\TeV\). The LSP is Higgsino-like and the viable parameter space is rather compact and nice. The model seems natural – one may get fine-tuning below \(\Delta_{EW}\lt 100\). It's an example showing that with some cleverness, natural models are still viable – and Nature is almost certainly more clever than the humans. Wednesday, April 24, 2019 ... / / Physicists' views have been confined to servers that no one else reads A few days ago, The Symmetry Magazine published Falsifiability and Physics. Folks such as Slatyer, Baer, Prescod-Weinstein, and Carroll argue that (and why) real physicists don't really pay attention to buzzwords such as "falsifiability" that have spread to the mass media as fire; and why they don't really consider Karl Popper as their infallible guru. The article also points out that Popper's targets weren't theories in physics but things like Freudian psychology and Stalinist history which is why the current critics of physics are really using Popperism outside its domain of validity. Physicists are interested in statements that are falsifiable in principle and whether they may be falsified in practice and whether it can be done soon is at most secondary. Science cherishes the insights that are true, not those that are early. The thousands of years that the atomic hypothesis needed to be fully established is probably the greatest counterexample to the claim that "we only need theories with a fast complete confirmation". Some of the names could produce some emotions in the TRF community. If I omit those with the greatest capability of igniting emotions, we are left with Baer and Tracy. Slatyer is an excellent Australian physicist and I was present when she was being admitted to Harvard – she has also worked at MIT and Princeton. Well, it just happens that she – and Baer as well – has also become a "progressive" activist although not as loud one as others. She is still an excellent physicist. Other texts on similar topics: science and society Thanks, CO2: the resilience of plants to drought is amazing Since the beginning of the Industrial Revolution, the CO2 concentration has grown from 280 ppm to 410 ppm or so (ppm is a part per million, of volume, or equivalently, 0.0001% of the number of atoms/molecules in the atmosphere), i.e. by 45 percent. CO2 is primarily good as plant food – most of the mass of trees and other plants is made out of water plus carbon that is extracted from CO2 in the air. So it's not shocking that agricultural yields have grown by 20% or so just because of the higher CO2 itself. The food is more easily available so plants grow more easily. But the microscopic explanation why plants are doing better also involves water. Because CO2 is more available, plants may afford fewer pores – the holes through which they absorb CO2 from the air – and this is good because the fewer pores also mean a smaller loss of water. Other texts on similar topics: biology, climate Why and how I understood QM as a teenager First, because Ehab has reminded me, I must start with promoting my PhD adviser Tom Banks' December 2018 book on quantum mechanics. 
I have learned a lot from Tom, and if I didn't, our views on foundations of QM were aligned. The book discusses linear algebra and probability calculus as the background – Tom immediately presents the amplitudes and the main rules of the game as a Pythagorean-flavored probability calculus; "unhappening" is an essential new quantum feature; Feynman lectures and two-dimensional Hilbert spaces, the Feynmanian attitude (without continuous Schrödinger waves) to "start to teach QM" I have repeatedly defended; quantization of harmonic oscillator and the fields; more details on the QM linear algebra, eigenvalues, symmetries; the hydrogen atom and derivation of basic "atomic physics"; spin; scattering; particles in magnetic fields; measurement with Tom's favorite focus on collective coordinates; approximations for molecules; quantum statistical physics; perturbation theory frameworks; adiabatic and Aharonov-Bohm/Berry phases; Feynman path integral (!); quantum computation (!); seven appendices on interpretations of QM plus 6 math topics: Dirac delta, Noether, group theory, Laguerre polynomials, Dirac notation, some solutions to problems. I think there's no controversial Banksy visionary stuff in the book and if there's some of it, you will survive. Now, switching to the dark side: Another book against quantum mechanics has been published – this time from a well-established, chronic critic of physics. Numerous non-physicists wrote ludicrous, positive reviews of that stuff for numerous outlets, including outlets that should be scientific in character. The book may be summarized by one sentence: The only problem with quantum mechanics [...] is that it is wrong. It doesn't look like a terribly accurate judgement of the most accurately verified theory in science. The contrast between the quality, trustworthiness, and genre of this anti-QM book and Tom's book above couldn't be sharper. Readers and their hormonal systems must be ready for hundreds of pages of comparably extraordinary statements. For example: The risk, [the author] warns, is the surrender of the centuries-old project of realism... So here you have it. "Realism" (which is called "classical physics" by physicists) "must" be upheld because it is a "centuries-old project", we are told. In contrast to that, scientists are used to the fact that old theories are falsified and abandoned – events of this kind are really the defining events of all of science. All this worshiping of centuries-old projects is particularly amusing if you realize that the same author has previously claimed that research projects that are older than 5 years and don't produce a clear victory must be abandoned. The inconsistency is just staggering. There are tens of thousands of fans of this stuff who just don't seem to care. Saturday, April 20, 2019 ... / / Microsoft: substantial backlash to "diversity" pogroms Most of the large Internet companies in the Silicon Valley may be classified as pure evil and the chances that they will become compatible with the basic values of the Western civilization are basically non-existent. For example, just three weeks ago, a famous young CEO wrote an op-ed urging the world's governments to escalate censorship and other Big Brother tactics on the Internet. If someone is going to defend your basic civil rights on the Internet, be sure that his name is not Mark Zuckerberg. However, I have repeatedly pointed out that Redmond isn't a town in the Silicon Valley. It's pretty far – both geographically and spiritually. 
Most recently, I praised Bill Gates for realizing (thanks to his Czech Canadian friend Václav Smil) that the bulk of the electricity we use today cannot be replaced with solar and wind sources. Now, we have an interesting story about Microsoft and "diversity". Quartz, USA Today, MS Power User (an insightful discussion), The Verge (long discussion), TimCast, and other media outlets informed us about the content of some internal Microsoft corporate message boards. Some of the titles say that the staff "openly questions" diversity. Can you also question it "closedly"? The word "openly" clearly shows that the writers-activists would like to treat those who realize that "diversity" efforts are harmful as heretics. Other texts on similar topics: computers, freedom vs PC, markets, politics Ad hoc "communities" working on proofs are turning science into a clash of cults Genuine scientific knowledge changes according to results, not according to communities Elsewhere: Tetragraviton wrote a wonderful essay, The Black Box Theory of Everything, about a time machine that throws you to the 1960s for you to present an unreadable code, including QCD simulators, the Black Box, as your theory of hadrons. It works. Does it make sense to suggest quarks and partons when the Black Box works and quarks and partons yield "no new predictions"? The Black Box is a counterpart of the Standard Model and Tetragraviton explains why it's unreasonable to say – as some critics of science do say (Tetragraviton calls their view "the high school picture" of science which I don't fully understand) – that a new, more unified or readable, theory giving the same predictions "is not science". What is and isn't science shouldn't depend on historical accidents. I subscribe to every word. A day ago, David Roberts wrote a comment with a link to some topics in hardcore category theory, mostly related to the initiality principle, and implicitly suggested that everyone judging the value and validity of Mochizuki's work has to follow this particular hardcore category theory stuff. I don't believe this claim at all. I think there exists no evidence whatsoever that this stuff is useful let alone crucial for understanding Mochizuki's work – or most other results in mathematics. In fact, I have serious doubts about any kind of usefulness or depth of the page mentioned by Roberts. It seems like an overly formalized talk about something whose beef amounts to almost nothing, a Bourbaki on steroids. And this kind of intimidation, "you have to study and worship some particular formal texts, otherwise you're not allowed to speak" is exactly the wrong atmosphere in Western mathematics that I have criticized. Mochizuki's theory remains controversial but it passes basic tests, has smart enough advocates, and has actual papers with hundreds of pages of actual results. It's just a higher level of scholarship than a random webpage on a blog in Austin. Indeed, I am worried that the Western researchers – including mathematicians – are increasingly turning from proper scholars producing rigorous papers to fans of some web pages filled with superficial, ideologically or emotionally driven, claims. Other texts on similar topics: mathematics, philosophy of science, science and society Alessandro's essay in Quillette The full Mueller report is out – 212+236 = 448 pages of PDF. Well, OK, parts of the pages are redacted out, ongoing matters. You may decide for yourself whether something is left from the Russiagate conspiracy theories. 
Two days ago, Alessandro Strumia published the ultimate essay about his encounter with the "women in science" issue. It discusses lots of things, the bibliographic analyses, the two gaps they found, the reasons why he gave the talk, the bad treatment he has gotten, his wise decision not to sue although "Particle for Justice" and similar texts could give him many reasons, and more. But I want to talk about something else. Judeo-Christianity and the Greco-Roman culture are the two recognized roots of the European culture. But there's really a third leg we shouldn't forget about about, our old-fashioned Pagan traditions, those that you can still see in Czechia and Slovakia. Although these civilizations weren't terribly high-tech, they gave the Europeans something important, too. We were Pagans up to 863 AD or so (when missionaries arrived from the Byzantine Empire to turn us Orthodox for a while) – so these things are not so infinitely distant. See e.g. The Pagan Queen to understand (a somewhat Americanized story) how the proto-Czechs lived a century earlier. Easter is here again. The Christian church was rather tolerant to the local cultures and traditions so our celebration of Jesus' final days on Earth also involves the whipping of the girls and women – our Easter (and similarly Christmas) became a hybrid of the Christian orthodoxy and some pre-existing traditions linked to the same seasons. First, to become a full-blown European who also stands on the third leg, the naturally Pagan one, you need to learn how to knit the Easter whip out of twigs. Or buy one. If you want to be a perfectionist, the whip may be 300 feet long. When you're ready, you need to chase girls and women in your village or town and beat them. It's particularly appropriate if you are a fan of the MeToo movement. Be careful of the people who could call the Santa Cruz police – that's an extra lesson I learned in California. ;-) The beating must be vigorous enough to substantially modify the girl's flow of blood, otherwise it's a useless formality. As the foreigners above explain, you should also pour buckets of water on the women. Other texts on similar topics: Czechoslovakia, everyday life, freedom vs PC Notre-Dame fire: a symbol of so many sad trends of the present I think that in the grand scheme of things, Notre-Dame de Paris (meaning: Our Lady of Paris) isn't a property of the French people only. It's really something that the whole mankind, and especially the Christian and Western civilization, owns and a symbol of that civilization. The cathedral in the classic French Gothic style was built between 1163-1345. It has survived 650-850 years or so, including lots of continental wars, cruel regimes etc. Before yesterday, the worst devastation has been an angry French Revolution mob that was destroying the organ and sculptures. The cathedral became the workplace of Mr Quasimodo, the hunchback of Notre-Dame in a novel by Victor Hugo. Because that structure is so universally important, I feel that all of us deserve condolences – so it doesn't make much sense for some of us to express condolences to others. But if Notre-Dame has been much closer to someone's heart than mine, please accept my condolences. By the way, the fire has been completely extinguished – but it took some half a day. The rectangular towers survived but the stability has to be monitored. It's my guess that most of the TRF readers have been there – I was – and about 1/2 of those have seen the interior, too. 
Geologist Bob Carter was there in late 2015 – he interacted with some young climate alarmists. Sadly, Bob died a month later... At some level, it's another cathedral – except it's a very old and very famous one. Other texts on similar topics: arts, Europe, France, religion Monday, April 15, 2019 ... / / Modern young black hole researchers need this quantum BH textbook by Lüst and pal I think that all young people thinking as theoretical physicists who are interested in black holes should simply buy this new 2019 book Black Hole Information and Thermodynamics (SpringerBriefs in Physics) by Dieter Lüst (Munich, the main author) and Ward Vleeshouwers (Utrecht, a young contributor). The book is basically a set of notes of some 2017 lectures by Lüst, as recorded by Vleeshouwers. It's a book that looks at the black holes, objects predicted by Einstein's general theory of relativity, from viewpoints that are utterly modern. The book is available as Kindle or paperback. Massie vs Kerry: a tense exchange on the climate A week ago, RWA recommended me Congressman Thomas H. Massie, a robotics engineer (his Google Scholar record isn't bad!) and a Libertarian (R-KY). It just happens that YouTube offered me a 5-days-old video (boasting 1 million of mostly "progressive", CNN viewers) with this very Gentleman whose name I wasn't actively aware of just weeks ago: John Kerry fires back at congressman: Are you serious? (video, 5 minutes) Let me go through this exchange. OK, John Kerry called advisers to Donald Trump – starting with prominent physicist and retired professor Will Happer of Princeton – a "kangaroo court". Happer and colleagues should be replaced with "educated adults". As you can see, a hero of atomic physics Happer was fired by John Kerry from the chair of an "educated adult". Maybe Kerry will still allow you to be an assistant janitor, Will. This is the kind of an insult that the likes of Kerry have been able to spread without much opposition in recent years because their befriended media repeat these insults every day and character assassinate everyone who dares to point out that such insults are utterly unjustifiable. However, America still has a working republican system that goes beyond the monopoly of the mainstream media. So a lawmaker – a representative of the American voters – could have discussed this "kangaroo court" and "educated adults" and the existence or non-existence of a justification. Other texts on similar topics: climate, politics, science and society "Abandon rational thinking" is too deep a paradigm shift for science ...but philosopher Wallace has understood many hard issues in physics correctly... Philosopher David Wallace has previously written many things about the foundations of quantum mechanics that – I believe – no competent quantum physicist may subscribe to. However, if he carefully avoids this particular foundational topic, he may look very intelligent to me. In February, he wrote Naturalness and Emergence (PDF, HTML). The main conclusion is radical. He calls for a paradigm shift because the LHC null results and some facts about cosmology "undermine the entire structure of our understanding of inter-theoretic reduction, and so risks a much larger crisis in physics than is sometimes suggested". That's exciting! OK, it is both exciting and ludicrous. But aside from these ambitious conclusions, he has written many things that seem correct to me – and that could earn an A grade if he were graded by someone like me. 
Finland: first elections co-decided by the climate hysteria The fight against the panic may lift the Finns Party to 15-20 percent Today, both true and untrue Finns are choosing their representatives in the Parliament. According to the opinion polls, up to nine parties could be represented in the Parliament – Czechia has nine – but it's really five parties that are large, between 12% and 20% of votes. They are, in the order expected in the latest survey: SDP, their social democratic party, that was suppressed in recent years but may return to the top PS, Finns Party, the authentic right-wing party that was mainly anti-immigration but the theme has calmed down (except for some child abuse by foreigners which will help them) so they rediscovered themselves as an anti-green party KOK, National Coalition Party, probably a CDU clone KESK, the Center Party, some other nameless pro-EU party VIHR, the Green League, the Finnish edition of the Far Left PS+KOK+KESK teamed up to make the coalition after 2015. Due to the True Finns' internal chaos, the party split and a branch of theirs, Blue Reform, replaced the Finns Party, but the Blue Reform looks weak again now. Since December 2018, the graph of the support for the climate skeptical Finns Party (previously True Finns) paradoxically looks like the hockey stick graph ;-), indicating a doubling of votes in 4 months. Other texts on similar topics: climate, Europe, politics Media simply invented the "creator" of the black hole picture Instead of some reflection and errata, they defend their falsehoods with increased aggressiveness Hat tip: Charles, Jaime, Rick, Connor, Samwise... I haven't dedicated a special blog post to this topic but it seems like a classic story at the intersection of recurring themes of this weblog – and the questions have apparently been answered. OK, who created the first photograph of the black hole? Everyone who has a clue about this Big Science knows that the number of workers has been large – 348 folks in this case (click for a full list) but the lists contain roughly hundreds if not one thousand names in similar cases (and 2x 3,000 both for ATLAS and CMS) – and, while the individual contributions have been extremely unequal, many folks in this large set were really essential. The Event Horizon Telescope Wikipedia page describes the collaboration as one including 13 stakeholder institutes plus almost 100 "affiliated" institutes. Some of the senior members of the collaboration were presenting the science during the press conference on Wednesday; see a list of some senior names here. Like in almost all similar experiments, men represented an overwhelming majority of the researchers. Other texts on similar topics: astronomy, computers, experiments, freedom vs PC, stringy quantum gravity Black hole picture is mainly a triumph of engineering There has been lots of excitement – and hype – surrounding the "first photograph of a black hole". Sensible people think beyond the mindless hype, of course, and they are really asking themselves: What has actually happened? Is that important or interesting? If it is, in what respect it is important or interesting? Which kind of work was hard? Which kind of information has it brought us or what can the method bring us in the future? I think that despite the thousands of articles in the mainstream media, these basic questions aren't being answered well – or they're not being answered at all. Let me try to clarify some of the basic facts about the big picture. 
Other texts on similar topics: astronomy, computers, experiments, science and society, stringy quantum gravity Removal of Roger Scruton With a delay of 1 day or so, the Czech press (especially Echo) informed us that the mob has gotten Roger Scruton on second try (that's the title chosen by the Washington Examiner). See also Roger Scruton's sacking threatens free speech and intellectual life (The Telegraph) and The real Roger Scruton scandal (Spiked) or The smear of Roger Scruton (The National Review); thank God these sources stood on the side of freedom and Sir Roger (something that wasn't guaranteed anymore). A well-known British philosopher was a government adviser for housing (and previously for architecture) – an unpaid position – but the leftist mob doesn't want any conservative in the old-fashioned sense to be anywhere. So they were attacking him all time. It didn't work a few months ago. Now, Scruton (75) agreed to give an interview to a young leftist George Eaton (deputy editor of New Statesman). And it was a trap – the interview was manipulated in order to make predetermined claims, "Scruton has said blasphemous things", and the left mob was joined by some conservative-in-name-only leftists around Theresa May's party who criticized Scruton for these "blasphemies" and Scruton was sacked. Other texts on similar topics: freedom vs PC, science and society Assange is (also) a terribly treated hero I just independently used the same noun as Pamela Anderson, it turns out Julian Assange has spent seven years at Ecuador's embassy in London. The new leader of the Latin American country Mr Lenin [no kidding] Moreno has never liked him too much so he abolished the asylum today. He could have allowed Assange to quickly run to another embassy but instead, he invited the British cops to the embassy – to the Ecuador's territory – and they dragged Assange to a British jail by force. The event was probably ignited by a U.S. extradition request. In America, Assange faces a risk of death penalty for his publication of classified documents. Clearly, Assange has been an insightful and important man – I've liked some tweets of his – but he's been also breaking some laws. Hacking computers must be treated as a crime and investigated, I think, and the same holds for the distribution of classified information and other things. In Sweden, he is also accused of rape. Other texts on similar topics: computers, politics, Russia Photograph of a black hole will be shown today ...just one but some of us expected two... Today at 15:00 Prague Summer Time (9:00 Boston Summer Time), the Event Horizon Telescope Collaboration will present its first photographs of two black holes: NSF press conference on first result from Event Horizon Telescope project (NSF press release) A Non-Expert's Guide to a Black Hole's Silhouette (Matt Strassler's intro) LIVE BROADCAST (from D.C., at 15:00 my time, it's over, replay 63 minutes) What does it mean to have a photograph of a black hole? Well, yes, it could be a completely black JPG file, like the photograph of five black cats in a tunnel. ;-) Yes, I have repaired this popular Czech joke to make it politically correct because I feel threatened a big time. The EHT experiment was mentioned at TRF 3 years ago. Other texts on similar topics: astronomy, experiments Should you worry about Candida Auris infections? In Fall 2012, I realized that the source of numerous health – albeit sometimes cosmetic – issues of mine were yeast, most likely from the Candida family. 
Before that time, I didn't even know that yeast or fungus could be a health problem for humans (only viruses and bacteria seemed relevant) – maybe a fungus is a problem for an apple but humans? The Candida genus shares certain traits and the accumulation of symptoms was so clear – along with some diagnosis – that I decided that extra information wasn't really "necessary". I've never known which Candida species was harassing me. The most widespread species is Candida albicans. Every human has it in his or her guts and it's mostly innocent. But it may also get to the bloodstream through a leaky gut (which may be caused by some Crohn's disease; vitamin B12 etc. recommended) and infect organs, skin, and lots of other things. At some level, it doesn't matter which Candida species one deals with. The cure is similar. Except that in some cases it does matter. In the recent week, Google Trends show, the interest in the Candida auris skyrocketed. Other texts on similar topics: biology, science and society Pilsner ice-hockey war: players vs fans Core fans are a great net asset and shouldn't be reeducated Pilsen has top teams both in soccer and ice-hockey. In the recent decade, FC Viktoria Pilsen won about 1/2 of the seasons – although it will be second now, after its main rival Slavia Prague. HC Škoda Pilsen is also very good. It was third before the play-offs... and it is now playing the semifinals against Třinec (which was second before play-offs). Pilsen took a lead... but yesterday Třinec won and it's 2-to-2 by matches. Four winning matches are needed. But what I want to talk about are Pilsner fans who are... special. Other texts on similar topics: everyday life, sports Category theory as an egalitarian religion Several TRF essays have discussed the controversies around the Mochizuki proof of the \(abc\) conjecture, most recently in November 2018. The conjecture states that whenever integers obey \(a+b=c\), then the maximum number, let's assume it's \(c\), isn't parametrically larger than a (multiple of a) power of the product of all primes in \(a,b,c\). So it's some inequality linking both the additive relationship between \(a,b,c\) with some multiplicative one. Šiniči Močizuki's solution is a corollary of a whole new ambitious theory in mathematics (possibly a flawless theory, possibly a flawed one at some point) that he has developed, the "Inter-Universal Teichmüller (IUT) theory" or "arithmetic deformation theory", these terms are synonymous. He claims to study some permutations of primes and integers etc. as if these permutations were analogous to continuous deformations. Equivalently, he claims to disentangle the additive and multiplicative relationships between the numbers by looking from many perspectives, by using new terms like "Hodge theaters". I've read and watched many texts and promotional videos and they look incredibly creative and intelligent to me. I am of course far from being capable of verifying the theory up to the applications – one needs to master at least 500 pages plus some 500 more pages of the background etc. I am not motivated enough to go through, in particular because I don't really see why the \(abc\) conjecture should be important in the grand scheme of things. But I am very interested in the general complications that great minds often seem to face – and things don't seem to be getting better. In the recent issue of Inference, I read the thoughtful essay by David Michael Roberts, A Crisis of Identification. 
Roberts' writing is highly impartial – after all, Adelaide, Australia is "just" 8,000 kilometers from Japan. He sketches some history of the proof, similar proofs in the past, the Grothendieck approach as a driving engine of many mathematicians on both sides, the social dynamics, and the philosophy of the category theory and its predecessors since the era of Hilbert. Other texts on similar topics: mathematics, science and society Physics knows a lot about the electron beyond the simple "Standard Model picture" Ethan Siegel wrote a text about the electron, Ask Ethan: What Is An Electron?, which includes some fair yet simplified standard conceptual facts about the electron's being a particle and a wave, about its properties being statistically predictable, and about the sharp values of its "quantum numbers", some discrete "charges" that are either exactly or approximately conserved in various interactions. While his statements look overwhelmingly right, there is a general theme that I expected to bother me and that bothers me: Siegel presents a frozen caricature of the particle physicists' knowledge that could be considered "a popularization of the snapshot from 1972 or so". There doesn't seem to be any added value of his text relatively to e.g. the Wikipedia article on the Standard Model. After all, the images such as the list of particles above were just taken from that article. Dimon's capitalism vs AOC's socialism Many of us feel that the civilization is falling into the gutter. Pillars of the society and nation states are being systematically attacked by numerous folks. Those of us who have been asking "why did the Roman Empire decline" see an answer in the ongoing repetition of the process. Too many people simply lose any attachment to everything that is good about the society and deliberately start to promote changes that are terrifying and destructive. In the absence of truly formidable competitors, great civilizations collapse simply because the people inside want that collapse and those who don't lose their power to prevent it. One of the aspects of the anti-civilization movement are the increasingly widespread criticisms of capitalism itself – the freedom of entrepreneurship. The young generation is increasingly absorbing pathological opinions about a great fraction of the political and societal questions. The opposition to capitalism is an example. In 2018, less than one-half of Americans between 18 and 29 years of age said to have a positive relationship to capitalism – a drop by 12 percentage points in a few years. Given these numbers, is capitalism sustainable at all? Three days ago, these challenges were discussed by the dean of the Harvard Business School. The obvious question is whether this anti-capitalist delusion is also widespread among the HBS students. I think it is and I think it is a systemic failure. A person who can't understand why capitalism is economically superior over socialism just shouldn't be allowed in the HBS buildings – at most like a janitor. The very name indicates that the school exists to nurture business, not to decimate it. Business is a defining activity of capitalism – in socialism, we weren't quite allowed to even say "business". The understanding of the creative power of capitalism is a matter of apolitical expertise (or rudimentary knowledge), not a political issue where you should look for both sides of a "story". The story may have two sides but one side is right and the other side is wrong. 
"Search for holography in your kitchen" instead of the FCC is the return to alchemy Anna has linked to a WUWT story about a $50 million fine that a fake journal has to pay. Much like Theranos, such fake open-access journals deceive their users about all the normal ingredients that are responsible for the quality control – about the identity or the very existence of referees, the existence of the review process, they heavily overstate the impact of the journal, and co-organize fake conferences (I really mean conferences whose scientific quality is non-existent but someone pretends it exists). By the way, how many of you are getting daily "calls for abstracts" from some strange conferences that don't seem to be related to your interests? Many armchair scientists who were ignored may suddenly find someone who wants to publish their texts, so they pay for the publication. Ambitious new "scientists" who can't publish, and therefore expect to perish, may suddenly survive. Some of them may even become "big leaders" after a few publications that appear in fishy outlets. At some level, people are happy – they get what they want. These "scientists" finally publish their stuff and the publishers get paid. The price is high – the whole ecosystem is being flooded with mostly wrong results and claims that pretend to be verified by someone who is careful but they are not. Readers get something else than they're told to get. Scientists waste time with bad papers – the wasted time is maximized in the ambitious yet truly marginal cases of papers that "almost" look like serious ones but ultimately turn out to be wrong for somewhat subtle reasons that would still be caught by a proper reviewer. To some extent, this decrease of quality is an unavoidable consequence of the "open-access approach". While the "open-access" ideologues like to hide it, the "open access" – just like "open borders" – often reduces to nothing else than "the absence of a reliable enough quality (or security) control". Other texts on similar topics: experiments, science and society, stringy quantum gravity Time cannot be racist Honza has brought me reasons to be proud of my Rutgers University PhD. ;-) Just days ago, Bill Zajc and I discussed the influence of philosophy departments over the interpretation of quantum mechanics. I mentioned an important character – Sheldon Goldstein – who is a philosopher of science at Rutgers. Well, he's formally a distinguished professor of mathematics even though his papers have been about the philosophy of physics, statistical physics, and perhaps some related topics. Clock in a Droste effect. The exponential spiral is mixed with the cyclic time. This conflation is mathematically deep because the periodic functions may be generated from \(\exp(ix)\), a conceptually small variation of \(\exp(x)\). While he is smart and appreciates some kind of logic very well, it's not quite enough to understand everything important that modern physics has found. So Goldstein, a leader of the Bohmian mechanics people, ends up being an ideologue who is successful because he is really serving his essays in "more welcoming" environment without actual big shot physicists who understand why his views on (and prejudices about) quantum mechanics are just wrong. 
I think it's wrong (not a promising way to organize scientific research) for the system to allow folks like Goldstein to build whole schools of disciples in "relaxed" environments where Goldstein isn't really facing competent, critical peers because they're focused on other disciplines. But now we're going discuss a very different level of scholarship. Goldstein is wrong but it's still a "somewhat social science department approach to" physics. We will look at another lower category. There are natural sciences, social sciences, and humanities. I think it's right to say that "humanities" are less rational and scientifically meaningful than "social sciences" – by a similar amount by which "social sciences" are less scientific than real, "natural sciences". Maybe we should distinguish new levels on this ladder: natural sciences, social sciences, humanities, and grievance studies. Maybe it makes sense to distinguish the "grievance studies" from generic "humanities" because there's a whole new level of scholarly fallacy that dominates in the grievance studies. Klaus Jr kept as chair of the education committee I have enough experience to know that over 90% of the expected TRF readers have virtually no interest in some events in Czech politics – or anything else that has something to do with similar holes in Europe. ;-) And the apparent irrelevance of this story may look even worse. It's about some committee of the lower chamber of the Parliament. And to make things really bad ;-), the main hero of this blog post, Klaus Jr, considered the vote (and the topic of this blog post) "less important than a soccer match" today! But you know, I just find this to be the country's most important story of the day (or a week or a month), for various reasons. Just to be sure, over a week ago, the old-fashioned right-winger and outspoken man Václav Klaus Jr was expelled from ODS, a party founded by his father in 1991 that I have voted for 27 years before I became a non-voter in March 2019. The last excuse for the expulsion – a partisan procedure we most typically associate with the communist party after the 1968 Soviet-led occupation when the "reformers" had to be told good-bye – were two apt but overly tense Nazi era metaphors for some current events related to the EU. Other texts on similar topics: Czechoslovakia, Europe Activists must stop harassing scientists Ms Peggy Sastre, a French writer who holds a PhD in philosophy of science (which already places her above 90+ percent of the popular writers about "science and society") has written a wonderful piece for Le Point which was translated for Quillette yesterday: A part of her text is dedicated to Alessandro Strumia's story – she didn't overlook that Galileo used to work at the same Pisa University as Strumia, to make the analogies between the harassment more visible to the slower viewers. Sastre also mentions the misrepresentations of Strumia's statements by activists such as Jessica Wade who started that particular disturbing witch hunt, by the BBC, and others. Also, Sastre has been in contact with Janice Fiamengo who frustratingly concluded that the era of the objective science has decisively ended in the West. Did the latest Bitcoin price spike depend on concentrated intelligent design? A few hours ago, the price of the Bitcoin and other cryptocurrencies underwent a rare massive upward explosion. 
Within less than an hour, BTCUSD went from below $4200 – where it was slowly growing over many days from a relatively stable plateau of $4000 – up to $5100+ or so, before returning to $4700 at this moment. In the most volatile moments, the spreads were huge and the price was jumping by $50 up and down thrice a second. The "hockey stick graph" of the Bitcoin price looks extremely unnatural. After days in which the price only changes by some $10 a day, the price could generate a change of almost $1,000 in less than one hour. This discrepancy shows that there's certainly no reliable "order of magnitude estimate of the volatility per unit time" that you could reasonably use in any safe enough planning. Other texts on similar topics: computers, markets How the freedom of the 1990s didn't last My country was tamed by Nazism between 1939 (well, partly 1938) and 1945, and by communism between 1948 and 1989 (6+41 = 47 years, almost half a century). Folks like your humble correspondent have helped the communist system to collapse and we entered the 1990s, an unusually free decade. People could say anything, try lots of things, travel across the world, and start numerous types of businesses. Political parties started to compete, communist companies were being privatized (and I think it was right to try to do it quickly although apparent imperfections couldn't be avoided), and others were started from scratch. In 1992-1997, I was a college student in Prague (Math-Phys, Charles University). While I was always too shy to become a visible politician, I found it natural to be a member of the student senate most of the time. We were deciding about many things. For example, we tried to stop the process of creating the "Faculty of Humanities" at the university – which is the main source of certain ideologically extreme social phenomena today. Most of the Math-Phys people were against this "FHS", for reasons that weren't far from what we would say today (although we know much more today), but we failed. "FHS" was created. After all, we did realize that these folks – perhaps "cultural Marxists", using the present jargon – had quite some "momentum" after 1989. But at least, in the 1990s, no one would dispute we had the "right" to vote "No". Other texts on similar topics: Czechoslovakia, Europe, freedom vs PC, politics, science and society Skepticism about Standard Models in F-theory makes no sense Four weeks ago, I discussed a quadrillion Standard Model compactifications that were constructed within F-theory by Cvetič et al. For some happy reasons, Anil at Scientific American wrote his own version of that story four days ago: Found: A Quadrillion Ways for String Theory to Make Our Universe I think that Scientific American hasn't been publishing this kind of article about some proper scientific research – and Anil hasn't been writing those – for years. Some adult who works behind the scenes must have ordered this one exception, I guess. So I am pretty sure that the readers of SciAm must have experienced a cultural shock because the article is about a very different "genre" than the kind of pseudoscientific stuff that has dominated SciAm for years. Other texts on similar topics: string vacua and phenomenology
The Euclidean Algorithm

On The Division Algorithm for Positive Integers we noted that for any positive integers $a$ and $b$ where $b \neq 0$ there exist unique integers $q$ and $r$ with $0 \leq r < b$ such that: \begin{align} \quad a = bq + r \end{align} We also looked at a lemma which stated that $(a, b) = (b, r)$. Using both of these results, we will now see how the division algorithm can be applied successively to find the greatest common divisor of $a$ and $b$.

Theorem 1 (The Euclidean Algorithm): If $a$ and $b$ are positive integers where $b \neq 0$, then the greatest common divisor $(a, b) = r_t$ can be obtained by successively applying the division algorithm: \begin{align} a &= bq + r & 0 \leq r < b \\ b &= r q_1 + r_1 & 0 \leq r_1 < r \\ r &= r_1 q_2 + r_2 & 0 \leq r_2 < r_1 \\ &\;\;\vdots \\ r_n &= r_{n+1} q_{n+2} + r_{n+2} & 0 \leq r_{n+2} < r_{n+1} \end{align} for which eventually $r_{t-1} = r_t q_{t+1}$ and $(a, b) = r_t$ for some $t \in \mathbb{Z}$.

Proof: Consider the following chain of divisions: \begin{align} a &= bq + r & 0 \leq r < b \\ b &= r q_1 + r_1 & 0 \leq r_1 < r \\ r &= r_1 q_2 + r_2 & 0 \leq r_2 < r_1 \\ &\;\;\vdots \\ r_n &= r_{n+1} q_{n+2} + r_{n+2} & 0 \leq r_{n+2} < r_{n+1} \end{align} We get that: \begin{equation} b > r > r_1 > r_2 > \cdots > r_{n+2} > \cdots \geq 0 \end{equation} This is a strictly decreasing sequence of non-negative integers, so it must terminate: eventually some remainder equals zero, that is, $r_{t-1} = r_t q_{t+1}$, and by the lemma \begin{align} \quad (a, b) = (b, r) = (r, r_1) = (r_1, r_2) = \cdots = (r_{t-1}, r_t) = r_t \quad \blacksquare \end{align}
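As a small illustration of the theorem, the following Python sketch (our own; the function name and the verbose flag are arbitrary, not part of the original text) carries out the successive divisions and returns $(a, b) = r_t$, printing each step of the chain along the way.

```python
def euclidean_gcd(a, b, verbose=False):
    """Greatest common divisor of positive integers a and b (b != 0), computed by
    successive application of the division algorithm: a = b*q + r, 0 <= r < b."""
    while b != 0:
        q, r = divmod(a, b)          # one division step: a = b*q + r
        if verbose:
            print(f"{a} = {b}*{q} + {r}")
        a, b = b, r                  # use the lemma (a, b) = (b, r)
    return a                         # the last nonzero remainder r_t

# Example: (1071, 462) = 21
print(euclidean_gcd(1071, 462, verbose=True))
```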
Stability of traveling waves for nonlocal time-delayed reaction-diffusion equations
Yicheng Jiang and Kaijun Zhang, School of Mathematics and Statistics, Northeast Normal University, Changchun, Jilin 130024, China
Kinetic & Related Models, October 2018, 11(5): 1235-1253. doi: 10.3934/krm.2018048
Received June 2017; Revised July 2017; Published May 2018
Fund Project: The first author is supported by NSFC grant (No. 11571066) and the second author is supported by NSFC grant (No. 11771071).
This paper is concerned with the stability of noncritical/critical traveling waves for a nonlocal time-delayed reaction-diffusion equation. When the birth rate function is non-monotone, the solution of the delayed equation is proved to converge time-exponentially to some (monotone or non-monotone) traveling wave profile with wave speed $c>c_*$, where $c_*>0$ is the minimum wave speed, when the initial data is a small perturbation around the wave. For the critical traveling waves ($c = c_*$), however, only time-asymptotic stability is obtained, and the decay rate is not derived due to some technical restrictions. The proof approach is based on a combination of the anti-weighted method and the nonlinear Halanay inequality, but with some new developments.
Keywords: Traveling wave, time delay, nonlocal reaction-diffusion equations, $L^2$-weighted energy, stability.
Mathematics Subject Classification: Primary: 35K57, 35C07; Secondary: 35K58, 92D25.
Citation: Yicheng Jiang, Kaijun Zhang. Stability of traveling waves for nonlocal time-delayed reaction-diffusion equations. Kinetic & Related Models, 2018, 11 (5): 1235-1253. doi: 10.3934/krm.2018048
Optimal, minimax and admissible two-stage design for phase II oncology clinical trials Fei Qin1,2, Jingwei Wu3, Feng Chen2, Yongyue Wei2, Yang Zhao2, Zhiwei Jiang4, Jianling Bai ORCID: orcid.org/0000-0003-3678-28792 & Hao Yu2 The article aims to compare the efficiency of minimax, optimal and admissible criteria in Simon's and Fleming's two-stage design. Three parameter settings (p1-p0 = 0.25–0.05, 0.30–0.10, 0.50–0.30) are designed to compare the maximum sample size, the critical values and the expected sample size for minimax, optimal and admissible designs. Type I & II error constraints (α, β) vary across (0.10, 0.10), (0.05, 0.20) and (0.05, 0.10), respectively. In both Simon's and Fleming's two-stage designs, the maximum sample size of admissible design is smaller than optimal design but larger than minimax design. Meanwhile, the expected samples size of admissible design is smaller than minimax design but larger than optimal design. Mostly, the maximum sample size and expected sample size in Fleming's designs are considerably smaller than that of Simon's designs. Whenever (p0, p1) is pre-specified, it is better to explore in the range of probability q, based on relative importance between maximum sample size and expected sample size, and determine which design to choose. When q is unknown, optimal design may be more favorable for drugs with limited efficacy. Contrarily, minimax design is recommended if treatment demonstrates impressive efficacy. Phase II clinical trials are carried out to provide preliminary efficacy assessments of a new drug or therapy. In clinical research, phase II trials are inevitably essential for drug/therapeutic developments. They act as screening tools to discontinue ineffective drugs or warrant promising new drugs for future evaluation. Phase II trials typically employ various dosages to evaluate efficacy and safety in patients with malignant tumors. Therefore, researchers could design phase II trials to target at sensitive cancer, delimit a safety range of dosing, and outline appropriate administrations. In this sense, phase II trials may provide supportive evidence to conduct phase III trials. Merits have been discussed in detail by Gan and Grothey [1] concerning single-arm phase II (SA-II) trials vs. randomized phase II (RP-II) trials (include both experimental and standard therapy arms). SA-II trials are found to be more preferable for single agents with tumor response end points. One of the frequently used designs in phase II cancer clinical trials is single-arm two-stage design proposed by Simon in 1989 [2]. Simon's design has been proved to be a compelling method in initial efficacy evaluation. Based on the ethical requirement [1], once efficacy of a drug/treatment does not reach the predefined criterion in a proof-of-concept trial, the experiment will be terminated for futility to avoid more individuals accepting ineffective treatment. One of the important advantages of single-arm phase II trials is that they involve much smaller sample size than their randomized phase II counterparts. Therefore, single-arm trials always require less time to complete and less resources invested [3]. Several studies, aiming to improve single-arm phase II clinical trials, have been employed in recent years. Shan et al. utilized results in first stage to help calculate the second stage sample size [4]. Besides, they also proposed to construct one-sided lower limits for analyzing data in adaptive phase II trials [5]. 
Jung and Sargent first attempted to adopt Fisher's exact design in randomized phase II trials [6]. Khan et al. proposed to control sample size by slightly relaxing the type I error [7]. Among these, a single-arm multi-stage testing procedure, proposed by Fleming [8], is appealing. He suggested stopping the experiment early when the intermediate results are extreme, either in favor of efficacy or of futility. Compared to Simon's design, early acceptance of the drug is permitted here. Although progression-free survival is regularly used in early oncology trials, the proportion of patients whose tumors show marked shrinkage is also considered an important metric in most phase II trials [9]. Amongst all two-stage trials with dichotomous endpoints, there are many designs satisfying a type I and II error constraint, given both the upper boundary to stop the trial and the lower boundary to continue the trial. Thus, Simon proposed two criteria (minimax, optimal) to estimate sample sizes. The minimax design aims to minimize the maximum sample size. Alternatively, the optimal design aims to minimize the expected sample size. Shuster, and Mander and Thompson, further extended Simon's two criteria in their optimal designs that allow early stopping for efficacy [10, 11]. However, one limitation of Simon's two designs is that the minimax and optimal designs may result in highly divergent sample size requirements. Based on a Bayesian decision-theoretic criterion, Jung et al. proposed a family of admissible designs that are compromises between the two Simon designs [12, 13]. This article attempts to systematically compare the minimax, optimal and admissible criteria in both Simon's and Fleming's two-stage designs. The rest of the paper is arranged as follows. In section 2, the concepts of Simon's optimal and minimax two-stage designs, Fleming's two-stage design and Jung's admissible design are reviewed. In section 3, a variety of design parameters are used to illustrate estimated sample sizes based on the three criteria in both Simon's and Fleming's two-stage designs. In section 4, a practical example is adopted to help explain the merits of Simon's two-stage design, Fleming's two-stage design and the admissible design. In section 5, the recommendations and implementations of the optimal, minimax and admissible designs are discussed. Consider a single-arm design with tumor response rate as the primary endpoint, where a binary outcome is defined as either "response" or "no response". We want to test the hypotheses: $$ H_0: p \le p_0 \quad \text{vs.} \quad H_1: p > p_0 $$ with type I error rate α and type II error rate β. Here p denotes the true response rate, and p0 is a fixed value that denotes the maximum response probability at which the trial would be terminated early. In practice, we will define p = p1 in the alternative hypothesis to represent the minimum response probability needed to warrant further studies in subsequent trials; therefore, the power of the test will be calculated at p = p1 > p0. If the null hypothesis is rejected, the study will be extended to the phase III stage, given the warranted therapeutic efficacy. Otherwise, the study will be terminated, given the insufficiently promising efficacy. Simon's two-stage design One of the most widely used two-stage designs was proposed by Simon [2]. Two different two-stage designs are introduced that allow early trial termination for futility. Details are illustrated in Fig. 1. In the figure, we define.
Flowchart for Simon's two-stage design n1, n2: the number of subjects in the first and second stage, respectively, and n = n1 + n2; x1, x2: the number of responses observed in the first and second stage, respectively; r1, r: the number of rejection points (under H0) in the first and second stage, respectively. Thus, the probability of early termination (PET) at the end of first stage (under null hypothesis) is $$ {PET}_{\mathrm{S}1}=B\left({r}_1;{n}_1,{p}_0\right) $$ where suffix S is used to represent the result of Simon's design. Consequently, the probability of rejecting the treatment is $$ {P}_{\mathrm{S}}(R)={PET}_{\mathrm{S}1}+\sum \limits_{x={r}_1+1}^{\min \left({n}_1,r\right)}b\left(x;p,{n}_1\right)B\left(r-x;p,{n}_2\right) $$ Here b(x;p,n) and B(x;p,n) are the probability mass and cumulative binomial distribution function, respectively [14]. For any pre-fixed values of p0, p1, α, and β, we can enumerate the candidate designs with different (n1, PETS1, EN) combinations, where EN is the expected sample size,., i.e., $$ {EN}_{\mathrm{S}}={n}_1+\left(1-{PET}_{\mathrm{S}1}\right){n}_2 $$ An optimal design is considered to minimize the expected sample size. Alternatively, a minimax design minimizes the maximum sample size n = n1 + n2, amongst these candidates designs. If there is more than one single candidate design with smallest n, the one with the smallest ENS (under null hypothesis) is chosen within all the possible minimax designs. Fleming's two-stage design Unlike Simon's two-stage design, Fleming's design additionally allows early trial termination due to high successful response rate [8]. In Fleming's two-stage design, one more character, a1, is required and it denotes a threshold of acceptance point (under H0) in the first stage. A single-arm two stage trial with both futility (a1) and superiority (r1) values in the first stage and a rejection value (r) in the second stage are described in Fig. 2. Flowchart for Fleming's two-stage design Based on Fleming's two-stage design, the probability of rejecting the treatment is $$ {P}_{\mathrm{F}}(R)=B\left({a}_1;{n}_1,{p}_0\right)+\sum \limits_{x={a}_1+1}^{r_1}b\left(x;{n}_1,{p}_0\right)B\left(r-x;n-{n}_1,{p}_0\right) $$ where suffix F is used to represent the results of Fleming's design [14]. The probability of early termination (PET) at the end of first stage (under H0) is $$ {PET}_{\mathrm{F}1}=B\left({a}_1;{n}_1,{p}_0\right)+\left(1-B\left({r}_1;{n}_1,{p}_0\right)\right) $$ Thus, the expected sample size (EN) is $$ {EN}_{\mathrm{F}}={n}_1+\left(1-{PET}_{\mathrm{F}1}\right){n}_2 $$ Although Fleming's design ensures sample sizes no larger than the single-stage design, a limitation is that calculated critical values for accepting and rejecting the null hypothesis are based on pre-fixed sample sizes at stage 1 (n1) and stage 2 (n2), which may be undesirable for investigating and planning optimal designs. To remedy, Mander and Thompson extended Simon's optimal and minimax criteria in Fleming's two-stage design [10]. Such design will benefit from stopping early for either futility or efficacy, while preserve its simplicity and the small sample size. Admissible design Very often, the minimax design has a much smaller maximum sample size n than that of the optimal design, but it has an excessively large expected sample size EN. Similarly, optimal design requires a much smaller EN, but it suffers a considerably larger n as compares to the minimax design. 
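Before turning to the admissible criterion, it may help to see how the quantities defined above can be computed. The following Python sketch is our own illustration, not code from the paper: it assumes SciPy is available and all function and variable names are ours. It evaluates PET1, the probability P(R) of rejecting the treatment, and EN for given Simon and Fleming boundaries, and enumerates Simon-type candidate designs satisfying the (α, β) constraint by brute force; the actual type I error is 1 − P(R) evaluated at p = p0 and the power is 1 − P(R) evaluated at p = p1.

```python
from scipy.stats import binom

def simon_design(p, n1, n2, r1, r):
    """Operating characteristics of Simon's two-stage design at true response rate p.
    Stage 1: stop for futility if x1 <= r1.  Overall: reject the treatment if the
    total number of responses is <= r."""
    pet = binom.cdf(r1, n1, p)                                    # PET_S1 = B(r1; n1, p)
    p_reject = pet + sum(binom.pmf(x, n1, p) * binom.cdf(r - x, n2, p)
                         for x in range(r1 + 1, min(n1, r) + 1))  # P_S(R)
    return {"PET1": pet, "P_reject": p_reject, "EN": n1 + (1 - pet) * n2}

def fleming_design(p, n1, n2, a1, r1, r):
    """Operating characteristics of Fleming's two-stage design at true response rate p.
    Stage 1: accept H0 (stop for futility) if x1 <= a1, reject H0 (stop for efficacy)
    if x1 > r1; otherwise continue and reject the treatment if the total is <= r."""
    p_reject = binom.cdf(a1, n1, p) + sum(binom.pmf(x, n1, p) * binom.cdf(r - x, n2, p)
                                          for x in range(a1 + 1, r1 + 1))  # P_F(R)
    pet = binom.cdf(a1, n1, p) + (1 - binom.cdf(r1, n1, p))                # PET_F1
    return {"PET1": pet, "P_reject": p_reject, "EN": n1 + (1 - pet) * n2}

def simon_candidates(p0, p1, alpha, beta, n_max=35):
    """Brute-force enumeration of Simon-type candidate designs: for each total n,
    the boundaries (n1, r1, r) minimizing EN under H0 subject to the (alpha, beta)
    constraint.  Unoptimized and slow; intended only as an illustration."""
    best = {}
    for n in range(2, n_max + 1):
        for n1 in range(1, n):
            for r1 in range(0, n1):
                for r in range(r1, n):
                    h0 = simon_design(p0, n1, n - n1, r1, r)
                    if 1 - h0["P_reject"] > alpha:      # actual type I error too large
                        continue
                    h1 = simon_design(p1, n1, n - n1, r1, r)
                    if 1 - h1["P_reject"] < 1 - beta:   # power too small
                        continue
                    if n not in best or h0["EN"] < best[n]["EN"]:
                        best[n] = {"n1": n1, "r1": r1, "r": r, "n": n, **h0}
    return best   # smallest key -> minimax design; smallest EN -> optimal design
```

Running simon_candidates(0.05, 0.25, 0.05, 0.10), for instance, should trace out the trade-off between n and EN that is examined for this parameter setting in the Results section below.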
In planning a phase II trial, we usually find ourselves in a dilemma when we must consider choosing one of the two designs by comparing the expected sample size and maximum sample size. To overcome, it is desirable to search for a design between the optimal design and the minimax design such that it has EN close to that of the optimal design and n close to that of the minimax design. Jung et al. proposed an admissible adaptive design based on a Bayesian decision-theoretic criterion to compromise between EN and n [12, 13]. A design is called candidate design if it minimizes EN for a given n while satisfying the (α, β)-constraint. For pre-specified (p0, p1, α, β), let R denotes the space of all candidate designs satisfying the (α, β)-constraint, with n no more than an achievable accrued number of subjects N during the study period. For any given design d ∈ R, we consider its two outcomes n(d) in minimax design or EN(d) in optimal design. Let Q be a probability distribution defined over {n(d), EN(d)} as Q(n(d)) = q and Q(EN(d)) = 1-q, where q ∈ [0, 1]. Thus, for any design d ∈ R, the expected loss can be defined as $$ \rho \left(q,d\right)=q\times n(d)+\left(1-q\right)\times EN(d), $$ and the Bayes risk is defined as $$ {\rho}^{\ast}\left(\rho, d\right)={\displaystyle \begin{array}{c}\mathit{\operatorname{inf}}\kern0.5em \rho \left(q,d\right)\\ {}d\in R\end{array}} $$ Any design d ∈ R whose risk equals to the Bayes risk would be regarded as Bayes design, which will then be defined as admissible design against distribution Q. Note that q ∈ [0, 1] reflects the relative importance between maximum sample size and expected sample size in designing a phase II study. Thus, the minimax design is a special Bayes design with q = 1 and optimal design is a special Bayes design with q = 0. Conversely, for any q ∈ [0, 1], if no Bayes risk fits any design d ∈ R, the design would be defined as inadmissible. Jung et al. [13] firstly proposed to apply admissible design to Simon's two-stage design. In this article, we extend such admissible design to Fleming's two-stage design, too. To compare the performance of optimal, minimax, and admissible design in Simon's and Fleming's two-stage design, the effect difference "p1-p0" are set to be 0.2 for p0 = 0.05, 0.10 and 0.30, and type I & II error constraints "(α, β)" vary across (0.10, 0.10), (0.05, 0.20) and (0.05, 0.10), respectively. These values are appeared in both Simon's and Fleming's two-stage design papers and are more representative to show sufficient promise to justify a definitive evaluation [15,16,17]. We calculate the true type I error and power (αT, 1-βT), sample size required in the first stage (n1), threshold values (a1, r1) for early termination, PET1, maximum sample size (n), threshold value (r) in the second stage, EN and the probability range (q) when each design is regarded as a good Bayes design. Based on Simon's two-stage design, Table 1 displays the optimal, minimax and admissible designs with pre-specified design parameters under H0. For each parameter setting of (p0, p1) and (α, β), the EN is much smaller than n. It is not difficult to find that the maximum sample size n of admissible design is smaller than optimal design but larger than minimax. Meanwhile, the expected samples size EN of admissible design is smaller than minimax design but larger than optimal design. Taking (p0, p1, α, β) = (0.05, 0.25, 0.05, 0.10) for example. In optimal design, the number of subjects required in the first stage is 9. 
Trials will be early terminated if no more than one response is seen in this stage. Otherwise another 21 subjects would be further enrolled, thus the maximum sample size reaches 30 at the end of the trial. The expected sample size is 16.8 and the probability of early termination is 0.630. Two admissible designs are given here, where n and EN are (28, 17.2) when q lies between 0.167 ~ 0.375, and (26, 18.4) when q lies between 0.375 ~ 0.667, respectively. For minimax design, the required maximum sample size is 25, which is five fewer than that of optimal design; while the expected sample size is 20.4, which is obviously larger than that of optimal design. A plot of EN against maximum sample size under this setting is illustrated in Fig. 3. The first and last dots are minimax and optimal design, respectively. Two identified Bayes candidate designs within this range are marked as "admissible". Note, however, that some candidate designs (second and fourth design under (p0, p1, α, β) = (0.05, 0.25, 0.05, 0.10)) cannot reach Bayes risk, since their loss functions are not competitive (cannot get smaller value) over other designs for any value of q between [0, 1]. Such designs are symbolized as "inadmissible" in our study. In other words, such "in admissible" design may NOT be regarded as a good one according to a Bayesian decision-theoretic criterion, even though both sample size and EN are still deterministic given (p0, p1, α, β). Table 1 Optimal, minimax and admissible design for Simon's two-stage design Minimax, admissible and optimal design for (p0, p1, α, β) = (0.05, 0.25, 0.05, 0.10) based on Simon's design Based on Fleming's two stage design, Table 2 displays the results of all three designs with pre-defined design parameters under H0. Similar to findings in Simon's design, minimax design requires least n than that of the admissible design, and optimal design has the largest n. On the other hand, optimal design has the least EN as compare to minimax design, while admissible design provides a compromised EN between Fleming's two designs. For example, when p0 = 0.05, p1 = 0.25, α = 0.05 and β = 0.10, trials will be early terminated if no more than one response is seen in the first stage. However, once > 4 responses are seen in this stage, trials will also be terminated early due to efficacy. Otherwise another 21 subjects will be enrolled and the maximum sample size becomes 30. The expected sample size is 16.8 and the probability of early termination is 0.631. One admissible designs is identified. n and EN are (26, 17.2) when q lies between 0.091 ~ 0.565. Figure 4 shows the expected sample sizes under H0 over a range of values for n. The plot starts with Fleming's minimax design and ends with Fleming's optimal design. Two admissible designs are highlighted in this range. Table 2 Optimal, minimax and admissible design for Fleming's two-stage design Minimax, admissible and optimal design for (p0, p1, α, β) = (0.05, 0.25, 0.05, 0.10) based on Fleming's design In general, for pre-specific design parameter (p0, p1, α, β), Fleming's two-stage design requires fewer maximum sample size and expected sample size than Simon's. It is noteworthy that under certain criteria defined by design parameters, such as (p0, p1, α, β) = (0.05, 0.25, 0.05, 0.20), no additional admissible design can be identified. In this case, only optimal and minimax designs can routinely be considered. In this paper, parameter setting (α, β) = (0.05, 0.2) gives the most desirable sample sizes. 
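To show how the admissible designs and their q-ranges in Fig. 3 can be screened numerically, the sketch below is our own illustration; it reuses only the (n, EN) pairs quoted above for Simon's design with (p0, p1, α, β) = (0.05, 0.25, 0.05, 0.10). It evaluates the expected loss ρ(q, d) = q·n(d) + (1 − q)·EN(d) on a grid of q and reports, for each candidate design, the range of q over which it attains the Bayes risk; a candidate that never attains the Bayes risk would be flagged as inadmissible.

```python
import numpy as np

def bayes_q_ranges(candidates, n_grid=100001):
    """candidates: list of (name, n, EN) triples.  For each candidate, return the
    range of q in [0, 1] over which it minimizes rho(q, d) = q*n + (1-q)*EN;
    candidates whose range is None are inadmissible."""
    qs = np.linspace(0.0, 1.0, n_grid)
    ns = np.array([d[1] for d in candidates], dtype=float)
    ens = np.array([d[2] for d in candidates], dtype=float)
    loss = np.outer(qs, ns) + np.outer(1.0 - qs, ens)   # loss[i, j] = rho(q_i, d_j)
    best = loss.argmin(axis=1)                          # index of the Bayes design at each q
    out = {}
    for j, (name, _, _) in enumerate(candidates):
        q_j = qs[best == j]
        out[name] = (round(q_j.min(), 3), round(q_j.max(), 3)) if q_j.size else None
    return out

# (name, maximum sample size n, expected sample size EN) as quoted in the text:
designs = [("minimax", 25, 20.4), ("admissible-1", 26, 18.4),
           ("admissible-2", 28, 17.2), ("optimal", 30, 16.8)]
print(bayes_q_ranges(designs))
# expected output: q-ranges of roughly (0.667, 1], (0.375, 0.667), (0.167, 0.375)
# and [0, 0.167), matching the ranges reported above
```

This is only a screening device for a handful of candidate designs; in practice the candidate set itself has to be generated first, for example by a search such as the simon_candidates sketch given earlier.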
For (p0, p1), the required n and EN remain the least in (0.05, 0.25), gradually increase in (0.10, 0.30), and attain the most in (0.30, 0.50). A practical example Schiller et al. [18] published a single-arm phase II clinical trial of Axitinib for patients with advanced non-small-cell lung cancer, and objective remission rate (ORR) was used as primary endpoint to evaluate efficacy. The parameter setting (p0, p1, α, β), in this trial, were specified to be (0.05, 0.2, 0.1, 0.1). As listed in Table 3, sample size is estimated for optimal, admissible and minimax design based on Simon's and Fleming's two-stage design. In Simon's design, 12 and 37 subjects are thought to be needed in the first stage and during the whole experiment for optimal design, respectively. If no response is observed in the first stage, the trial would be early terminated due to inefficacy. The number of subjects needed for minimax design in stage I and whole trial is 18 and 32 respectively. Two admissible designs' with compromised sample sizes lie between these two designs are also listed in the table. Table 3 Comparison of three designs for (p0, p1, α, β) = (0.05, 0.2, 0.1, 0.1) based on practical example In Fleming's design, minimax design requires 18 subjects in the first stage and once one or more responses are observed after the treatment, experiment proceeds to second stage and another 13 patients will be enrolled. During the first stage, however, the trial will also be considered early termination for efficacy, if 3 or more patients' conditions are ameliorated. At second stage, if total 4 and more positive responses are found, phase II clinical trial will be claimed to be successful and further trial will be considered. Two admissible designs are identified, with q ∈ [0.091, 0.268] and [0.268, 0.444], respectively. For optimal design, the number of subjects required in the first and second (if necessary) design is 12 and 25, separately. Obviously, Fleming's designs show considerably smaller maximum sample size and expected sample size than Simon's, given a high probability of early termination for futility as well as efficacy. Simon's two-stage design has been widely used in phase II clinical oncology trials for testing the efficacy of a single treatment regimen. The original design, however, only considers stopping for futility. Alternatively, Fleming's design lends additional flexibility of allowing early termination by accepting the treatment regimen when initial results are extremely favorable. As a result, pharmaceutical reagents with outstanding efficacy can be early marketed, and patients can thus benefit from them. What's more, k-stage (k ≥ 3) designs have also been proposed [8, 19, 20]. There are concerns that in practice, if the accrual is not fast, or if excessive initial failures occurs at first stage, k-stage designs are essentially the same as two-stage designs and will not be recommended. Thus, in this article, only two-stage design is considered. Nevertheless, further exploration is still needed in multi-stage design to ensure the successful development of effective cancer treatment. In this paper, we compare the required sample size (n1, n), threshold values (a1, r1, r) for early termination, EN and the probability range (q) for minimax, optimal and admissible criteria in Simon and Fleming's two-stage designs. It is often the case that maximum sample size of the optimal design is much larger than that of the minimax design, although the optimal design has the smallest expected sample size. 
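As a usage sketch, the fleming_design helper defined earlier can be used to check the operating characteristics of the Fleming minimax design described in this example. The boundary encoding below is our reading of the text (n1 = 18, stop for futility with 0 responses, stop for efficacy with more than 2 responses, n2 = 13, reject the treatment with 3 or fewer total responses); the printed type I error and power should fall within the (0.1, 0.1) constraint.

```python
p0, p1 = 0.05, 0.20
design = dict(n1=18, n2=13, a1=0, r1=2, r=3)   # Fleming minimax design from Table 3
null = fleming_design(p0, **design)
alt = fleming_design(p1, **design)
print("type I error:", round(1 - null["P_reject"], 3))   # should be <= 0.10
print("power:       ", round(1 - alt["P_reject"], 3))     # should be >= 0.90
print("PET under H0:", round(null["PET1"], 3), "EN under H0:", round(null["EN"], 1))
```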
Admissible designs are compromises between the minimax and the optimal designs. In addition, the optimal design always requires the smallest sample size in the first stage. We consider this an important advantage of the optimal design for reducing the expected sample size relative to the other designs, owing to the larger probability of early termination in the first stage. Thus, in a clinical trial setting, the optimal design may be more favorable when early data support drug ineffectiveness. This can reduce the risk of exposing patients to inactive treatments, since the treatment regimen would be stopped in a timely manner once it shows low response activity. On the other hand, the minimax design requires the smallest maximum sample size, though this comes at the cost of a larger expected sample size under the null hypothesis. Therefore, the minimax design will be preferable if early evidence reveals that an agent has impressive therapeutic efficacy. This becomes more obvious when Fleming's design is considered. In practice, an investigator may also wish to add a clinically meaningful constraint on (p0, p1) as a prior. In this case, it is better to explore the possible ranges of q and determine whether an admissible design is more appropriate. Mander et al. [21] proposed a new admissible criterion by considering a more general expected loss function that includes the expected sample size under both the null and alternative hypotheses as well as the maximum sample size. Their paper additionally considered designs that allow stopping for both efficacy and futility. Our comparisons can be regarded as a subset of theirs when no weight is given to the expected sample size under the alternative hypothesis. However, their triangular graph does not readily distinguish the inadmissible designs among all candidate designs. Our paper shows that, for each set of design parameters, the boundary line between admissible designs can still include a handful of designs that are not admissible. In addition, we are able to visually display all candidate designs between the minimax and the optimal designs in Simon's and Fleming's two-stage designs. Our results further corroborate that an inadmissible design may not exist if the difference in maximum sample size between two Simon's designs is less than or equal to 1 [22], or if it is not on the concave hull [23]. Therefore, we consider both our extensive tabulation and our graphical method important tools for guiding investigators toward the preferable design when the null hypothesis is true. We revisit a single-arm phase II clinical trial of axitinib for patients with advanced non-small-cell lung cancer [24]. Optimal, minimax and admissible designs under Simon's and Fleming's frameworks are used to attain 90% power at a significance level of 0.1. In this practical example, the ENs of the three designs are ordered as minimax design > admissible design > optimal design. Meanwhile, Fleming's design always requires an equal or smaller maximum sample size and expected sample size than Simon's. This is because Fleming's design has the largest probability of stopping further study at an early stage, for drugs with either remarkable efficacy or poor activity. Therefore, when accruing patients is difficult, or the study drug is costly, Fleming's design can be a more appropriate choice. A two-stage design has definitive criteria for early termination, and can thus prevent subjects from continuing to receive a treatment with unsatisfactory efficacy.
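As a minimal illustration of the admissibility criterion discussed above, the sketch below evaluates the expected loss q·n + (1 − q)·EN on a grid of q values for the four Simon-type designs quoted earlier for (p0, p1, α, β) = (0.05, 0.25, 0.05, 0.10); a design that never attains the minimum loss for any q would be flagged as inadmissible. This loss function is the standard admissible-design criterion; treating it as the exact criterion used in the paper is an assumption.

```python
# Sketch of the Bayesian admissibility check: a design with maximum sample
# size n and expected sample size EN (under H0) has expected loss
# q*n + (1-q)*EN.  A design is admissible if it minimises this loss for
# some q in [0, 1]; otherwise it is inadmissible.
import numpy as np

candidates = {                 # name: (n, EN), as quoted in the text above
    "minimax":     (25, 20.4),
    "admissible1": (26, 18.4),
    "admissible2": (28, 17.2),
    "optimal":     (30, 16.8),
}

qs = np.linspace(0.0, 1.0, 1001)
winners = set()
for q in qs:
    losses = {name: q * n + (1 - q) * en for name, (n, en) in candidates.items()}
    winners.add(min(losses, key=losses.get))

for name in candidates:
    status = "admissible" if name in winners else "inadmissible"
    print(f"{name}: {status}")
```

With these four designs the minimiser changes hands at roughly q = 0.167, 0.375 and 0.667, which matches the q ranges quoted above, and all four designs are reported as admissible.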
In addition, the two-stage design is popular because of its comprehensible concept and convenient implementation. Thus, methodological developments of this design continue to be extended in many directions. For example, in some trials, even though the number of responses has exceeded the threshold value r, the experiment is not stopped early but is continued in order to accrue enough cases for estimating a confidence interval for the response rate [20]. In trials with two-stage designs, errors are inevitable whether or not the trial is terminated early. If the experiment is recommended to move forward at the end of the first stage, the probability of a type I error cannot be ignored (a false positive, meaning patients continue to take inactive drugs in error). Conversely, the type II error will be inflated once the trial is stopped early (a false negative, meaning patients might stop taking drugs with favorable efficacy) [25]. Obviously, a false negative is considered more serious, because a drug may lose the chance of being further investigated once rejected. Though various designs have been put forward, more research is needed to reduce the probability of false negatives. For example, some oncology drugs may still be presumed convincingly active despite an insufficient response rate, as long as they perform well in stabilizing disease. In this situation, as Kunz and Kieser [26] have done in single-arm phase II oncology trials, one could consider using a test with two binary endpoints instead of the conventional single endpoint. When (p0, p1) can be estimated accurately, it is better to explore the range of q and determine which design to choose. The optimal design is preferable for drugs with limited efficacy. The minimax design is favorable for agents with impressive efficacy. For trials in which subjects are difficult to recruit or the investigated drug is relatively expensive, Fleming's design can be a better choice than Simon's design. SA-II: single-arm phase II RP-II: randomized phase II EN: expected sample size PET: probability of early termination Gan HK, Grothey A, Pond GR, Moore MJ, Siu LL, Sargent D. Randomized phase II trials: inevitable or inadvisable? J Clin Oncol. 2010;28:2641–7. Simon R. Optimal two-stage designs for phase II clinical trials. Control Clin Trials. 1989;10:1–10. Sharma MR, Stadler WM, Ratain MJ. Randomized phase II trials: a long-term investment with promising returns. J Natl Cancer Inst. 2011;103:1093–100. Shan GG, Zhang H, Jiang T. Minimax and admissible adaptive two-stage designs in phase II clinical trials. BMC Med Res Methodol. 2016;16:1–14. Shan GG, Zhang H, Jiang T. Efficient confidence limits for adaptive one-arm two-stage clinical trials with binary endpoints. BMC Med Res Methodol. 2017;17:1–11. Jung S-H, Sargent DJ. Randomized phase II clinical trials. J Biopharm Stat. 2014;24:802–16. Khan I, Sarker SJ, Hackshaw A. Smaller sample sizes for phase II trials based on exact tests with actual error rates by trading-off their nominal levels of significance and power. Br J Cancer. 2012;107:1801–9. Fleming TR. One-sample multiple testing procedure for phase II clinical trials. Biometrics. 1982;38:143–51. Wason JM, Jaki T. A review of statistical designs for improving the efficiency of phase II studies in oncology. Stat Methods Med Res. 2016;25:1010–21. Mander AP, Thompson SG. Two-stage designs optimal under the alternative hypothesis for phase II cancer clinical trials. Contemp Clin Trials. 2010;31:572–8. Shuster J.
Optimal two-stage designs for single arm phase II cancer trials. J Biopharm Stat. 2002;12:39–51. Jung SH, Lee T, Kim K, George SL. Admissible two-stage designs for phase II cancer clinical trials. Stat Med. 2004;23:561–9. Jung SH, Carey M, Kim KM. Graphical search for two-stage designs for phase II clinical trials. Control Clin Trials. 2001;22:367–72. McPherson K, Colton T, et al. J Am Stat Assoc. 1976;71:80–6. Lee JJ, Feng L. Randomized phase II designs in cancer clinical trials: current status and future directions. J Clin Oncol. 2005;23:4450–7. Vickers AJ, Ballen V, Scher HI. Setting the bar in phase II trials: the use of historical data for determining "go/no go" decision for definitive phase III testing. Clin Cancer Res. 2007;13:972–6. Brown SR, Gregory WM, Twelves CJ, Buyse M, Collinson F, Parmar M, et al. Designing phase II trials in cancer: a systematic review and guidance. Br J Cancer. 2011;105:194–9. Schiller JH, Larson T, Ou SH, Limentani S, Sandler A, Vokes E, et al. Efficacy and safety of axitinib in patients with advanced non-small-cell lung cancer: results from a phase II study. J Clin Oncol. 2009;27:3836–41. Chen TT. Optimal three-stage designs for phase II cancer clinical trials. Stat Med. 1997;16:2701–11. Thatcher AR. Relationships between Bayesian and confidence limits for predictions. J R Stat Soc B. 1964;26:176–210. Mander AP, Wason JM, Sweeting MJ, Thompson SG. Admissible two-stage designs for phase II cancer clinical trials that incorporate the expected sample size under the alternative hypothesis. Pharm Stat. 2012;11:91–6. Kim J, Schell MJ. Modified Simon's minimax and optimal two-stage designs for single-arm phase II cancer clinical trials. Oncotarget. 2019;10:4255–61 http://www.ncbi.nlm.nih.gov/pubmed/31303960. DeGroot MH. Optimal statistical decisions. New York; 1970. Ensign LG, Gehan EA, Kamen DS, Thall PF. An optimal three-stage design for phase II clinical trials. Stat Med. 1994;13:1727–36. Jennison C, Turnbull BW. Confidence intervals for a binomail parameter parameter following a multistage test with application to MSL-STD 105D and medical trials. Technometrics. 1983;25:49–58. Kunz CU, Kieser M. Optimal two-stage designs for single-arm phase II oncology trials with two binary endpoints. Methods Inf Med. 2011;50:372–7. The work was supported by grant from the National Natural Science Foundation of China (81773554 to H. Yu), and the National Natural Science Foundation of China Grant for Young Scientists (81302512 to J Bai). The funding body did not have any role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. Department of Epidemiology and Biostatistics, Arnold School of Public Health, University of South Carolina, Columbia, SC, USA Fei Qin Department of Biostatistics, School of Public Health, Nanjing Medical University, SPH Building Room 418, 101 Longmian Avenue, Nanjing, 211166, Jiangsu, China Fei Qin, Feng Chen, Yongyue Wei, Yang Zhao, Jianling Bai & Hao Yu Department of Epidemiology and Biostatistics, College of Public Health, Temple University, Philadelphia, PA, USA Jingwei Wu Beijing KeyTech Statistical Consulting Co., Ltd, Beijing, China Zhiwei Jiang Feng Chen Yongyue Wei Yang Zhao Jianling Bai Hao Yu FQ, JLB and HY designed the research and wrote the manuscript; HY provided financial support; FQ, JWW, FC, JLB and HY analyzed and interpreted data; JWW, YZ, YYW and ZWJ assisted with design, data analysis and writing of the manuscript. All authors read and approved the final manuscript. 
Correspondence to Jianling Bai or Hao Yu. Qin, F., Wu, J., Chen, F. et al. Optimal, minimax and admissible two-stage design for phase II oncology clinical trials. BMC Med Res Methodol 20, 126 (2020). https://doi.org/10.1186/s12874-020-01017-8 Optimal design Minimax design
CommonCrawl
Simultaneous detection of α-Lactoalbumin, β-Lactoglobulin and Lactoferrin in milk by Visualized Microarray Zhoumin Li1,2, Fang Wen3, Zhonghui Li1, Nan Zheng3, Jindou Jiang4 & Danke Xu1 α-Lactalbumin (α-LA), β-lactoglobulin (β-LG) and lactoferrin (LF) are of high nutritional value, which has made them ingredients of choice in the formulation of modern foods and beverages. There remains an urgent need to develop novel biosensing methods for their quantification featuring reduced cost, improved sensitivity and selectivity, and more rapid response, especially for the simultaneous detection of multiple whey proteins. A novel visualized microarray method was developed for the determination of α-LA, β-LG and LF in milk samples without the need for complex or time-consuming pre-treatment steps. The measurement principle was based on a competitive immunological reaction combined with a silver enhancement technique, in which visible array dots serving as the detectable signals were amplified and developed by the silver enhancement reagents. The microarray could be read out with a microarray scanner. The detection limits (S/N = 3) were estimated to be 40 ng/mL (α-LA), 50 ng/mL (β-LG) and 30 ng/mL (LF) (n = 6). The method could be used to simultaneously analyze the whey protein contents of various raw milk samples and ultra-high temperature treated (UHT) milk samples, including skimmed milk and high calcium milk. The analytical results were in good agreement with those of high performance liquid chromatography. The presented visualized microarray offers the advantages of high throughput, specificity, sensitivity and cost-effectiveness for the analysis of various milk samples. Milk whey protein represents a rich mixture of proteins with wide-ranging nutritional, biological and food functional attributes. The main constituents are α-lactalbumin (α-LA), β-lactoglobulin (β-LG) and lactoferrin (LF), which account for approximately 70–80% of total whey protein. α-LA, β-LG and LF are of high nutritional value, which has made them ingredients of choice in the formulation of modern foods and beverages. They may also have physiological activity through moderating gut microflora, mineral absorption and immune function [1, 2]. Several methods have been reported for α-LA, β-LG and LF, either alone or together with other whey proteins, including chromatographic analysis (High performance liquid chromatography (HPLC) [3,4,5,6,7,8,9,10,11], Ultra high performance liquid chromatography (UHPLC) [12], High performance liquid chromatography–mass spectrometry (HPLC-MS) [13,14,15,16,17,18,19,20,21], Ultra high performance liquid chromatography–mass spectrometry (UHPLC-MS) [22,23,24,25,26,27], Immunoaffinity chromatography (IAC) [26, 27]), Radial Immunodiffusion (RID) [28], sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) [29, 30], Capillary Electrophoresis (CE) [10, 31,32,33,34], Enzyme-linked Immunosorbent Assay (ELISA) [17, 35,36,37,38,39,40,41,42], Fluorescent Immunosorbent Assay (FIA) [43, 44], Surface Plasmon Resonance (SPR) [45,46,47,48,49] and Sensors [50,51,52]. In general, chromatographic analysis requires pre-treated samples, high initial sample volumes and long analysis times, which lead to high cost. In addition, analytical chromatographic technologies are unable to identify protein denaturation or modification that may occur during processing and storage. This is an important factor for public health and the marketing of food commodities.
Some of these drawbacks can be overcome using traditional immunological methods, such as ELISA. It also offers the advantages of working directly with complex fluids, such as whole milk and other dairy fluids, but only one whey protein can be detected. However, there remains an urgent need to develop alternative methods for quantification featuring reduced cost, improved sensitivity, selectivity and more rapid response, especially for simultaneous detection of multiple whey proteins. Development of new tools, minimizing limitations imposed by these methodologies and leveraging the high specificity of traditional immunological methods, is of great interest. In this sense, visualized microarray are envisaged as a valid alternative to classical methods for analysis of protein, because they are amenable to direct readout by eyes and well suited to rapid detection with high sensitivity and selectivity using low-cost instrumentation that is adaptable to portable, field-deployable embodiments, which is ideal for routine determination in the dairy industry [53,54,55,56]. In this paper, we described the development of visualized microarray method for simultaneous, high-throughput quantitative immune-detection of three commercially important whey proteins (α-LA, β-LG, and LF) in samples at a time, from various milk sources. To the best of our knowledge, no visualized microarray has been described thus far for the determination of a-LA, β-LG, and LF simultaneously. Visualized microarray method allowed the analysis of milk without the need for sample preparation, including pre-enrichment or purification steps, "extraction" of target analytes from the complex matrix, and measurement of signal in a "clean" environment. The assay was then used to simultaneously analyze the whey protein contents of various raw milk samples and UHT milk samples including skimmed milk and high calcium milk and the analytical results were in good agreement with that of the HPLC. Materials and instruments α-LA, β-LG, LF and silver enhancement solution including solution A (AgNO3) and solution B (Hydroquinone) were all purchased from Sigma-Aldrich. NaCl, KCl, Na2HPO4·12H2O, KH2PO4, Tween-20, Ethylenediaminetetraacetic acid (EDTA) was from Nanjing Chemical Reagent Co., Ltd. (Nanjing, China). Pure water of 18.2 MΩcm-1 was generated in-lab from a Milli-Q water system. Bovine serum albumin (BSA) was purchased from Merck. Goat polyclonal to α-lactalbumin (α-LA), goat polyclonal to β-lactoglobulin (β-LG), goat polyclonal to lactoferrin (LF) and AgNPs labeled donkey anti-goat IgG were kindly supplied by Nanjing Xiangzhong Biotechnology Co. Ltd. (Nanjing, China). All solutions were made by triply deionized water (Milli-Q water purification system, Millipore, Billerica, MA, USA). A 10 mM phosphate buffered saline (PBS) at pH 7.2 was used as the assay buffer which was prepared as following: 137 mmol/L NaCl, 2.7 mmol/L KCl, 10 mmol/L Na2HPO4·12H2O and 2 mmol/L KH2PO4. A 10 mM PBS containing 0.01% Tween-20 and 1 mM EDTA (PBST- EDTA) at pH 7.2 was used for milk sample preparation and dilution. The wash buffer was a PBS containing 0.05% Tween 20 (PBST). The blocking solution was 1% BSA in 10 mM PBS. All buffers were filtered through 0.22 μm pore size filter before use. The microarrays were prepared by TMAR microarray spotter (Tsinghua University, Beijing, China). Automated plate washer (BioTek Instruments, Inc. America) was used as washing platform. 
An LXJ-II centrifuge (Shanghai Anting Instrument Co., Shanghai, China) was used for centrifugation. Clear flat-bottom 96-well plates, a thermo-shaker and a microarray scanner (QARRAY 2000) were from Nanjing Xiangzhong Biotechnology Co. Ltd. (Nanjing, China). Microarray preparation Solutions of α-LA, β-LG and LF were spotted onto clear flat-bottom 96-well plates. A volume of 10 μL of each coating antigen diluted in spotting buffer was arrayed with a 500 μm spot-to-spot pitch using a microarray spotter, and each antigen solution was spotted in triplicate. After spotting, the microarray was incubated for 2 h at 37 °C. In this step the coating antigens were immobilized in the microplate wells by adsorption onto the polystyrene support surface. After immobilization, the microarray surface was treated with 200 μL of 1% BSA for 1 h at 37 °C in order to minimize further unspecific binding. After incubation, the microarray plate was washed with 1× PBST buffer using an automated plate washer and then sealed in foil packets for storage at 2–8 °C. Indirect competitive microarray immunoassay protocol The indirect competitive microarray immunoassay principle is presented in Scheme 1. In a microarray immunoassay analysis the following experimental procedure was performed. The competition is established by adding a mixture of 25 μL of the standard (or the sample) and 25 μL of a known amount of mixed antibodies; the reaction is incubated at 25 °C for 45 min on a thermoshaker (shaking at 600 rpm). After the corresponding washing step, AgNPs labeled donkey anti-goat IgG is added in a total volume of 50 μL/well, and the reaction is incubated at 37 °C for 30 min on a thermoshaker (shaking at 600 rpm). After the corresponding washing step, 50 μL of silver enhancement solution, consisting of solution A (AgNO3) and solution B (hydroquinone), is added to each well and incubated for 12 min at 37 °C in the dark. At the end of the colorimetric reaction, each well is washed 3 times with 250 μL of pure water. Scheme 1 Schematic illustration of the detection of α-lactalbumin (α-LA), β-lactoglobulin (β-LG) and lactoferrin (LF) with the visualized microarray immunoassay platform, based on a silver enhancement amplification system Microarray imaging and data processing The microarray was imaged with the microarray scanner (QARRAY 2000), and the corresponding software was used to quantify the signal over the sample spot area, expressed as relative light units (RLUs). The calibration curve was represented by a linear relationship. Cross reactivity calculation Cross reactivity (CR) is generally defined via the concentration (or mass) of an interfering compound needed to produce the same signal inhibition of 50% as the analyte. Therefore, in this work CR rates, expressed as percentages (%), were calculated according to the expression (eq. 1). $$ \mathrm{CR}=\left[\left({\mathrm{IC}}_{50}\left(\mathrm{analyte}\right)/{\mathrm{IC}}_{50}\left(\mathrm{interference}\right)\right)\right]\times 100\% $$ IC50 is the concentration of analyte or interference needed to induce a signal inhibition of 50%. Milk samples Milk contains metal ions such as calcium, iron, magnesium and zinc. For actual sample analysis, it should be considered that α-LA, β-LG and LF have a high possibility of forming chelation complexes with these metal ions. Thus, prior to analysis, milk was diluted 200-fold with PBST-EDTA at pH 7.2. Milk was purchased from a local supermarket.
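A minimal sketch of the cross-reactivity calculation in eq. (1) above is given below; the IC50 values used in the example call are hypothetical placeholders rather than measured data.

```python
# Minimal sketch of the cross-reactivity calculation in eq. (1).
# The IC50 values below are hypothetical placeholders, not measured data.
def cross_reactivity(ic50_analyte, ic50_interferent):
    """Return CR in percent: CR = IC50(analyte) / IC50(interference) * 100."""
    return ic50_analyte / ic50_interferent * 100.0

# Example: an interferent needing a 200-fold higher concentration than the
# analyte to halve the signal corresponds to a cross-reactivity of 0.5 %.
print(cross_reactivity(ic50_analyte=1.0, ic50_interferent=200.0))  # 0.5
```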
HPLC method Binding buffer (BB): 1.211 g Tris was dissolved with 800 mL 6 mol/L HCl, adjusted to pH 7.4 and then volumed to final volume to 1000 mL. Elution buffer (EB): 0.15 mol/L sodium phosphate, pH 12. Buffer for adjusting pH of EB (AB): 1 mol/L sodium dihydrogen phosphate. Treatment of milk sample for analysis α- Lactalbumin and β- lactoglobulin 5 mL milk sample was mixed with 14 mL water and adjusted to pH 4.6. Next water was added to the mixture making final volume to 20 mL. Then the above mixture was centrifugated under 10,000 rpm and 4 °C for 10 min. Finally, supernatant was filter with 0.22 μm filter and injected to HPLC system. Treatment of milk sample for analysis lactoferrin Milk samples were centrifugated under 8000 g and 4 °C for 10 min to remove fat. Then 15 mL skim milk was loaded onto lactoferrin immune-affinity column that was pre-equilibrated with 10 mL BB. After washing with 20 mL BB, lactoferrin was eluted with 3.6 mL EB. Then the 3.6 mL elution was mixed with 0.4 mL EB. Finally the mixture was filtered with 0.22 μm filter and injected to HPLC system. The lactoferrin immune-affinity column was washed with 10 mL BB and stored at 4 °C for further use. HPLC system The chromatographic analysis of lactoferrin was carried out on a HPLC system (2695 Separations Module, Waters; Milford, MA, USA) coupled with a photodiode array detector (PDA 2996 detector, Waters; Milford, MA, USA). Separation was performed using a Symmetry C4 Column (300 Å, 5 μm, 4.6 mm × 250 mm, Waters). Acetonitrile (eluent A) and 0.1% trifluoroacetic acid in water (eluent B) were used as mobile phase. The flow rate was set at 1.0 mL/min and the LC elution gradient was as follows: initial 30% A, 5 min 55% A, 10 min 60% A, 12 min 30% A and hold on for a further 4 min for re-equilibration, giving a total run time 16 min. The column temperature was kept at 25 °C and the injection volume was 50 μL for standards and sample solutions. The wavelengths was set at 280 nm for detection. Waters Empower 2.0 chromatography software package was used for HPLC system control, data acquisition and management. To develop a highly sensitive and specific indirect competitive immunoassay, the conditions including the concentrations of coating antigens and antibodies, should be carefully optimized by a checkboard titration of antigen and antibody simultaneously. In addition, it was necessary to evaluate the effect of presence or absence of EDTA and Tween 20 in assay buffer. Concentrations of coating antigens and antibodies To develop highly sensitive competitive immunoassay, the conditions including the concentrations of coating antigens and dilutions of antibodies should be carefully optimized. In this study, coating antigens of α-LA, β-LG and LF all were 2 mg/mL, 1 mg/mL, 0.5 mg/mL, 0.2 mg/mL, 0.1 mg/mL, respectively; anti-α-LA were 1:200, 1:500, 1:1000, 1:2000 dilution respectively; anti-β-LG were 1:5000, 1:10,000, 1:20,000, 1:40,000 dilution respectively; anti-LF were 1:5000, 1:10,000, 1:20,000, 1:40,000 dilution respectively. In addition, second antibodies of AgNPs labeled donkey anti-goat IgG were 1:25, 1:50, 1:100, 1:200 dilution respectively. The results can be seen in Fig. 1. 
a, b, c Coating antigens of α-LA, β-LG and LF at 2 mg/mL, 1 mg/mL, 0.5 mg/mL, 0.2 mg/mL and 0.1 mg/mL, respectively; d anti-α-LA at 1:200, 1:500, 1:1000 and 1:2000 dilution; e anti-β-LG at 1:5000, 1:10,000, 1:20,000 and 1:40,000 dilution; f anti-LF at 1:5000, 1:10,000, 1:20,000 and 1:40,000 dilution; g second antibody (AgNPs labeled donkey anti-goat IgG) at 1:25, 1:50, 1:100 and 1:200 dilution for α-LA, β-LG and LF Lowering the concentrations of antigen and antibody can increase detection sensitivity, but the signal value will also be lower. The optimal assay conditions were therefore as follows: the appropriate concentrations of α-LA, β-LG and LF were all 1 mg/mL; the appropriate dilutions of anti-α-LA, anti-β-LG and anti-LF were 1:500, 1:10,000 and 1:10,000, respectively; and the appropriate dilution of the second antibody, AgNPs labeled donkey anti-goat IgG, was 1:50. Effect of EDTA and Tween 20 Milk contains metal ions such as calcium, iron, magnesium, potassium, sodium and zinc, so it is necessary to consider the high potential of α-LA, β-LG and LF to form chelating complexes with these metal ions. To prevent such interference, EDTA was incorporated in the assay buffer. EDTA has a greater affinity for calcium than α-LA, β-LG and LF do, and it can therefore block the interaction of α-LA, β-LG and LF with calcium. Tween 20 is a non-ionic surfactant with emulsifying, dispersing, solubilizing and stabilizing effects on samples. Moreover, it protects antigen–antibody binding in buffers and reduces the nonspecific binding of antibodies to antigens and interfering proteins. Thus it can reduce the background and improve the sensitivity. However, an excessive concentration could inhibit the binding of antibody and antigen. Finally, the Tween 20 concentration was selected to be 0.01%. The results can be seen in Fig. 2. The concentrations of Tween 20 tested were 0.1%, 0.05%, 0.01% and 0.005% for α-LA, β-LG and LF. Antigens of α-LA, β-LG and LF were all 1 mg/mL; anti-α-LA, anti-β-LG and anti-LF were at 1:500, 1:10,000 and 1:10,000 dilution, respectively; the second antibody, AgNPs labeled donkey anti-goat IgG, was at 1:50 dilution Assay specificity indicates the ability of an antibody to generate a measurable response only for the target molecule. The cross-reactivity of the antibodies was evaluated under indirect competitive immunoassay conditions in order to confirm specificity. Here, a study was performed using five main milk proteins: α-LA, β-LG, LF, casein and BSA. The cross-reactivity studies were carried out by adding the free cross-reactants at different concentrations to compete with the antigen coated on the surface for binding to the antibody. The cross-reactivity for each compound was calculated according to the expression (eq. 1) and is given in Fig. 3. Cross-reactivity of anti-α-LA, anti-β-LG and anti-LF with α-LA, β-LG, LF, casein and BSA The anti-α-LA, anti-β-LG and anti-LF antibodies were determined to be highly specific for α-LA, β-LG and LF, respectively; although there was a minor dose–response relationship for casein and BSA (cross-reactivity <1.0%), the binding responses for these proteins were analytically insignificant at concentrations equivalent to those of diluted milk samples. Method performance In order to determine the concentrations of α-LA, β-LG and LF in a multiplex format, the assay was calibrated using a cocktail of the α-LA, β-LG and LF antibodies and different concentrations of α-LA, β-LG and LF.
As a matter of fact, competition occurs for all target molecules, and the specific signal obtained on each probe decreases with the analyte concentration, as expected in a competitive immunoassay. Over the optimized working calibration range (α-LA, β-LG and LF all at 0.05, 0.25, 1, 5 and 25 μg/mL), a semi-log curve fit adequately described the dose-response relationship, as can be seen in Fig. 4. The calibration curves were calculated as follows, α-LA: y = −0.3258x + 0.5171, r = 0.9829; β-LG: y = −0.2738x + 0.5986, r = 0.9702; LF: y = −0.2558x + 0.5658, r = 0.9952. In these equations, y = B/B0 (%) and x = lg C, where B/B0 is the ratio of the response B to the maximum response B0 obtained when no analyte is present. Calibration curves of α-LA, β-LG and LF. α-LA: y = −0.3258x + 0.5171, r = 0.9829; β-LG: y = −0.2738x + 0.5986, r = 0.9702; LF: y = −0.2558x + 0.5658, r = 0.9952 The method detection limits (signal equal to 3 standard deviations of the blank over several independent runs) were estimated to be 0.04 μg/mL (α-LA), 0.05 μg/mL (β-LG) and 0.03 μg/mL (LF) (n = 6). Method precision was estimated from the aggregate of a single-level control of α-LA (1 μg/mL), β-LG (1 μg/mL) and LF (1 μg/mL) over multiple independent runs, and the measured RSDs were 6.71%, 7.82% and 5.13%, respectively (n = 6). Between-run precision was further assessed, with RSDs of 12.31%, 13.52% and 14.15%, respectively (n = 6). After a simple dilution of commercial milk (200-fold in PBST-EDTA, pH 7.2), these calibration curves were used to calculate the protein concentrations in the milk. The recovery study was performed on samples of milk purchased from local supermarkets. Free α-LA, β-LG and LF (20 μg L−1, 100 μg L−1, 400 μg L−1 and 2000 μg L−1) were spiked into the milk solution. The recovery study was performed in three replicates and the results were quite satisfactory, as seen in Table 1. Table 1 The recoveries of different concentrations of α-Lactalbumin, β-Lactoglobulin, Lactoferrin Recovery = (C1 − C2)/C3 × 100% C1: sample concentration after adding standard. C2: sample concentration before adding standard. C3: concentration of added standard. Comparison with a reference method-HPLC To verify the reliability and accuracy of the visualized microarray system, the results for 9 milk samples were compared with an HPLC method. The results obtained by visualized microarray and HPLC are plotted against each other in Fig. 5. The correlation was very good, with linear regression curves of y = 1.031x − 9.30, r = 0.9604 (α-Lactalbumin); y = 1.094x − 35.33, r = 0.9872 (β-Lactoglobulin); and y = 1.1096x − 1.054, r = 0.9889 (Lactoferrin). These results confirm those of the validation experiments. The findings indicate that reliable results can be obtained over the whole concentration range. Results obtained by visualized microarray and HPLC are plotted against each other. a α-Lactalbumin, b β-Lactoglobulin, c Lactoferrin Method applications The developed procedure was then applied to quantify the concentrations of native α-LA, β-LG and LF in three different kinds of milk. The results are shown in Additional file 1, Fig. 6 and Table 2. The precision of the results was good (RSD < 15%). Bovine milk samples subjected to different processing treatments were analyzed. Based on the calibration curves, it was determined that raw milk (numbered 1–7) presented the highest concentrations of α-LA, β-LG and LF, followed by pasteurized milk (72–85 °C for 15 s, numbered 8–11) and then UHT milk (135–150 °C for 4–15 s, numbered 12–18), including skimmed milk and high calcium milk.
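To make the back-calculation step explicit, the sketch below inverts the semi-log calibration curves reported above (y = B/B0, x = lg C) and applies the 200-fold dilution factor and the recovery formula from Table 1; the measured B/B0 value in the example call is an illustrative assumption, not a reported datum.

```python
# Sketch: back-calculating a milk concentration from the semi-log
# calibration curves quoted above (y = B/B0, x = lg C, C in ug/mL),
# followed by the recovery formula Recovery = (C1 - C2)/C3 * 100.
CURVES = {                      # analyte: (slope m, intercept b)
    "a-LA": (-0.3258, 0.5171),
    "b-LG": (-0.2738, 0.5986),
    "LF":   (-0.2558, 0.5658),
}

def concentration(analyte, b_over_b0, dilution=200):
    """Invert y = m*lg(C) + b, then undo the 200-fold sample dilution."""
    m, b = CURVES[analyte]
    c_assay = 10 ** ((b_over_b0 - b) / m)   # ug/mL in the diluted sample
    return c_assay * dilution                # ug/mL in the original milk

def recovery(c_spiked, c_unspiked, c_added):
    """Recovery (%) = (C1 - C2) / C3 * 100, as defined under Table 1."""
    return (c_spiked - c_unspiked) / c_added * 100.0

# Illustrative read-out: a measured B/B0 of 0.45 for lactoferrin.
print(round(concentration("LF", b_over_b0=0.45), 1))
```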
As compared to other references that mention the concentration of α-LA, β-LG and LF in milk [12, 34, 50]. Now, it is well known that α-LA, β-LG and LF were highly sensitive to temperature. Results of α-LA, β-LG, and LF were detected by Visualized Microarray. From top to bottom, left to right was numbered 1 to 18. 1–7 were raw milk, 8–11 were pasteurized milk, 12–18 were UHT milk including skimmed milk and high calcium milk Table 2 Results of α-LA, β-LG, and LF detected by Visualized Microarray In this work, visualized microarray for the high-throughput, specific and sensitive determination of a-LA, β-LG and LF in milk samples was developed for the first time, without the need for complex or time-consuming pre-treatment steps, following dilution with an appropriate working buffer. The applicability of the visualized microarray as-developed was underlined by the implementation and analysis of different milk samples, and the results were validated successfully against a HPLC. The visualized microarray performance is in accordance with such an ELISA kit in terms of rapidity, sensitivity, simplicity and inexpensive, However, ELISA detect α-lactoalbumin, β-lactoglobulin and lactoferrin in milk, it needs at least three times of experiments. Therefore, it has potential as an alternative analytical tool to screen for the presence of a-LA, β-LG and LF in the dairy industry and pediatric foods. Moreover, the implementation of disposable conjunction with the simplicity, automation and miniaturization of the instrumentation constitute important advantages leading towards the integration of the method in portable (in-field), reliable and user-friendly analytical systems for milk and infant formula quality control. Buffer for adjusting pH of EB a-LA: α-Lactalbumin Binding buffer CE: CR: Cross reactivity EB: Elution buffer EDTA: Ethylenediaminetetraacetic acid FIA: Fluorescent Immunosorbent Assay HPLC: HPLC-MS: High performance liquid chromatography -mass spectra IAC: Immunoaffinity chromatography LF: Phosphate buffered saline RID: Radial Immunodiffusion RLUs: Relative light units sodium dodecyl sulfate polyacrylamide gel electrophoresis SPR: UHPLC: Ultra high performance liquid chromatography UHPLC-MS: ultra high performance liquid chromatography - mass spectra UHT: Ultra-high temperature treated β-LG: β-lactoglobulin Chatterton DEW, Smithers G, Roupas P, Brodkorb A. Bioactivity of β-lactoglobulin and α-lactalbumin—technological implications for processing. Int Dairy J. 2006;16(11):1229–40. Wakabayashi H, Yamauchi K, Takase M. Lactoferrin research, technology and applications. Int Dairy J. 2006;16(11):1241–51. Ferraro V, Madureira AR, Sarmento B, Gomes A, Pintado ME. Study of the interactions between rosmarinic acid and bovine milk whey protein α-Lactalbumin, β-Lactoglobulin and Lactoferrin. Food Res Int. 2015;77:450–9. Mayer HK, Raba B, Meier J, Schmid A. RP-HPLC analysis of furosine and acid-soluble β-lactoglobulin to assess the heat load of extended shelf life milk samples in Austria. Dairy Sci Technol. 2010;90(4):413–28. Anandharamakrishnan C, Rielly CD, Stapley AGF. Loss of solubility of α-lactalbumin and β-lactoglobulin during the spray drying of whey proteins. LWT Food Sci Technol. 2008;41(2):270–7. Mudgal P, Daubert CR, Foegeding EA. Kinetic study of β-lactoglobulin thermal aggregation at low pH. J Food Eng. 2011;106(2):159–65. Yao X, Bunt C, Cornish J, Quek S, Wen J. 
Improved RP-HPLC method for determination of bovine lactoferrin and its proteolytic degradation in simulated gastrointestinal fluids. Biomed Chromatogr. 2013;27(2):197–202. Sostmann K, Guichard E. Immobilized β-lactoglobulin on a HPLC-column: a rapid way to determine protein—flavour interactions. Food Chem. 1998;62(4):509–13. Palmano KP, Elgar DF. Detection and quantitation of lactoferrin in bovine whey samples by reversed-phase high-performance liquid chromatography on polystyrene–divinylbenzene. J Chromatogr A. 2002;947(2):307–11. Ding X, Yang Y, Zhao S, Li Y, Wang Z. Analysis of α-lactalbumin, β-lactoglobulin A and B in whey protein powder, colostrum, raw milk, and infant formula by CE and LC. Dairy Sci Technol. 2011;91(2):213–25. Jackson JG, Janszen DB, Lonnerdal B, Lien EL, Pramuk KP, Kuhlman CF. A multinational study of α-lactalbumin concentrations in human milk. J Nutr Biochem. 2004;15(9):517–21. Boitz LI, Fiechter G, Seifried RK, Mayer HK. A novel ultra-high performance liquid chromatography method for the rapid determination of β-lactoglobulin as heat load indicator in commercial milk samples. J Chromatogr A. 2015;1386:98–102. Muhammad G, Saïd B, Thomas C. Structural consequences of dry heating on Beta-Lactoglobulin under controlled pH. Procedia Food Sci. 2011;1:391–8. Gulzar M, Bouhallab S, Jardin J, Briard-Bion V, Croguennec T. Structural consequences of dry heating on alpha-lactalbumin and beta-lactoglobulin at pH 6.5. Food Res Int. 2013;51(2):899–906. Corzo-Martínez M, Moreno FJ, Olano A, Villamiel M. Structural characterization of bovine β-Lactoglobulin−Galactose/Tagatose Maillard complexes by Electrophoretic, chromatographic, and spectroscopic methods. J Agric Food Chem. 2008;56(11):4244–52. Yan R, Qu L, Luo N, Liu Y, Liu Y, Li L, Chen L. Quantitation ofα -Lactalbumin by liquid chromatography tandem mass spectrometry in medicinal adjuvant lactose. Int J Anal Chem. 2014;2014:1–4. Stojadinovic M, Burazer L, Ercili-Cura D, Sancho A, Buchert J, Velickovic TC, Stanic-Vucinic D. One-step method for isolation and purification of native β-lactoglobulin from bovine whey. J Sci Food Agr. 2012;92(7):1432–40. Silveira ST, Martínez-Maqueda D, Recio I, Hernández-Ledesma B. Dipeptidyl peptidase-IV inhibitory peptides generated by tryptic hydrolysis of a whey protein concentrate rich in β-lactoglobulin. Food Chem. 2013;141(2):1072–7. Yang W, Liqing W, Fei D, Bin Y, Yi Y, Jing W. Development of an SI-traceable HPLC–isotope dilution mass spectrometry method to quantify β-Lactoglobulin in milk powders. J Agric Food Chem. 2014;62(14):3073–80. Cunsolo V, Costa A, Saletti R, Muccilli V, Foti S. Detection and sequence determination of a new variantβ-lactoglobulin II from donkey. Rapid Commun Mass Sp. 2007;21(8):1438–46. Czerwenka C, Maier I, Potocnik N, Pittner F, Lindner W. Absolute Quantitation of β-Lactoglobulin by protein liquid chromatography−mass spectrometry and its application to different milk products. Anal Chem. 2007;79(14):5165–72. Chen Q, Zhang J, Ke X, Lai S, Li D, Yang J, Mo W, Ren Y. Simultaneous quantification of α-lactalbumin and β-casein in human milk using ultra-performance liquid chromatography with tandem mass spectrometry based on their signature peptides and winged isotope internal standards. Biochim Biophys Acta. 2016;1864(9):1122–7. Xing K, Chen Q, Pan X. Quantification of lactoferrin in breast milk by ultra- high performance liquid chromatography-tandem mass spectrometry with isotopic dilution. RSC Adv. 2016;6(15):12280–5. Ren Y, Han Z, Chu X, Zhang J, Cai Z, Wu Y. 
Simultaneous determination of bovine α-lactalbumin and β-lactoglobulin in infant formulae by ultra-high-performance liquid chromatography–mass spectrometry. Anal Chim Acta. 2010;667(1–2):96–102. Zhang J, Lai S, Cai Z, Chen Q, Huang B, Ren Y. Determination of bovine lactoferrin in dairy products by ultra-high performance liquid chromatography–tandem mass spectrometry based on tryptic signature peptides employing an isotope-labeled winged peptide as internal standard. Anal Chim Acta. 2014;829:33–9. Puerta A, Diez-Masa JC, de Frutos M. Immunochromatographic determination of β-lactoglobulin and its antigenic peptides in hypoallergenic formulas. Int Dairy J. 2006;16(5):406–14. Puerta A, Diez-Masa JC, de Frutos M. Development of an immunochromatographic method to determine β-lactoglobulin at trace levels. Anal Chim Acta. 2005;537(1–2):69–80. Mazri C, Sánchez L, Ramos SJ, Calvo M, Pérez MD. Effect of high-pressure treatment on denaturation of bovine β-lactoglobulin and α-lactalbumin. Eur Food Res Technol. 2012;234(5):813–9. Alomirah HF, Alli I. Separation and characterization of β-lactoglobulin and α-lactalbumin from whey and whey protein preparations. Int Dairy J. 2004;14(5):411–9. Giacinti G, Basiricò L, Ronchi B, Bernabucci U. Lactoferrin concentration in buffalo milk. Ital J Anim Sci. 2013;12(1):e23. Cheang B, Zydney AL. Separation of -Lactalbumin and -Lactoglobulin using membrane Ultrafiltration. Biotechnol Bioeng. 2003;83(2):201–9. Li J, Ding X, Chen Y, Song B, Zhao S, Wang Z. Determination of bovine lactoferrin in infant formula by capillary electrophoresis with ultraviolet detection. J Chromatogr A. 2012;1244:178–83. Gutierrez JEN, Jakobovits L. Capillary electrophoresis of α-Lactalbumin in milk powders. J Agric Food Chem. 2003;51(11):3280–6. Chen H, Busnel J, Gassner A, Peltre G, Zhang X, Girault HH. Capillary electrophoresis immunoassay using magnetic beads. Electrophoresis. 2008;29(16):3414–21. Liu L, Kong D, Xing C, Zhang X, Kuang H, Xu C. Sandwich immunoassay for lactoferrin detection in milk powder. Anal Methods UK. 2014;6(13):4742. Wroblewska B, Karamac M, Amarowicz R, Szymkiewicz A, Troszynska A, Kubicka E. Immunoreactive properties of peptide fractions of cow whey milk proteins after enzymatic hydrolysis. Int J Food Sci Technol. 2004;39(8):839–50. de Luis R, Lavilla M, Sánchez L, Calvo M, Pérez MD. Development and evaluation of two ELISA formats for the detection of β-lactoglobulin in model processed and commercial foods. Food Control. 2009;20(7):643–7. Huang YQ, Morimoto K, Hosoda K, Yoshimura Y, Isobe N. Differential immunolocalization between lingual antimicrobial peptide and lactoferrin in mammary gland of dairy cows. Vet Immunol Immunopathol. 2012;145(1–2):499–504. Pelaez-Lorenzo C, Diez-Masa JC, Vasallo I, de Frutos M. Development of an optimized ELISA and a sample preparation method for the detection of β-Lactoglobulin traces in baby foods. J Agric Food Chem. 2010;58(3):1664–71. Manzo C, Pizzano R, Addeo F. Detection of pH 4.6 insoluble β-Lactoglobulin in heat-treated milk and mozzarella cheese. J Agric Food Chem. 2008;56(17):7929–33. Mehta R, Petrova A. Biologically active breast milk proteins in association with very preterm delivery and stage of lactation. J Perinatol. 2011;31(1):58–62. Kleber N, Maier S, Hinrichs J. Antigenic response of bovine β-lactoglobulin influenced by ultra-high pressure treatment and temperature. Innov Food Sci Emerg. 2007;8(1):39–45. Finetti C, Plavisch L, Chiari M. Use of quantum dots as mass and fluorescence labels in microarray biosensing. 
Talanta. 2016;147:397–401. Yang A, Zheng Y, Long C, Chen H, Liu B, Li X, Yuan J, Cheng F. Fluorescent immunosorbent assay for the detection of alpha lactalbumin in dairy products with monoclonal antibody bioconjugated with CdSe/ZnS quantum dots. Food Chem. 2014;150:73–9. Billakanti JM, Fee CJ, Lane FR, Kash AS, Fredericks R. Simultaneous, quantitative detection of five whey proteins in multiple samples by surface plasmon resonance. Int Dairy J. 2010;20(2):96–105. Tomassetti M, Martini E, Campanella L, Favero G, Sanzò G, Mazzei F. Lactoferrin determination using flow or batch immunosensor surface plasmon resonance: comparison with amperometric and screen-printed immunosensor methods. Sensors Actuators B Chem. 2013;179:215–25. Indyk HE, McGrail IJ, Watene GA, Filonzi EL. Optical biosensor analysis of the heat denaturation of bovine lactoferrin. Food Chem. 2007;101(2):838–44. Indyk HE. Development and application of an optical biosensor immunoassay for α-lactalbumin in bovine milk. Int Dairy J. 2009;19(1):36–42. Indyk HE, Filonzi EL. Determination of lactoferrin in bovine milk, colostrum and infant formulas by optical biosensor analysis. Int Dairy J. 2005;15(5):429–38. Ruiz-Valdepeñas Montiel V, Campuzano S, Torrente-Rodríguez RM, Reviejo AJ, Pingarrón JM. Electrochemical magnetic beads-based immunosensing platform for the determination of α-lactalbumin in milk. Food Chem. 2016;213:595–601. Eissa S, Tlili C, L'Hocine L, Zourob M. Electrochemical immunosensor for the milk allergen β-lactoglobulin based on electrografting of organic film on graphene modified screen-printed carbon electrodes. Biosens Bioelectron. 2012;38(1):308–13. Hohensinner V, Maier I, Pittner F. A 'gold cluster-linked immunosorbent assay': optical near-field biosensor chip for the detection of allergenic β-lactoglobulin in processed milk matrices. J Biotechnol. 2007;130(4):385–8. Li Z, Li Z, Zhao D. Smartphone-based visualized microarray detection for multiplexed harmful substances in milk. Biosens Bioelectron. 2017;87:874–80. Li Z, Li Z, Niu Q. Visual microarray detection for human IgE based on silvernanoparticles. Sensors Actuators B Chem. 2017;239:45–51. Li Z, Li Z, Jiang J. Simultaneous detection of various contaminants in milk based on visualized microarray. Food Control. 2017;73:994–1001. Li Z, Li Z, Xu D. Simultaneous detection of four nitrofuran metabolites in honey simultaneous detection of four nitrofuran metabolites in honey by using a visualized microarray screen assay. Food Chem. 2017;221:1813–21. The authors would like to thank and acknowledge the help of Nanjing Xiangzhong Biotechnology Co. Ltd. We acknowledge financial support of the National Natural Science Foundation of China (21,405,077, 21,227,009, 21,475,060), Natural Science Foundation of Jiangsu Province (BK20140591). Research Foundation of Jiangsu Province Environmental Monitoring (1116), Special Fund for Agro-scientific research in the Public interest (201403071) and the National Science Fund for Creative Research Groups (21121091). The datasets used and analysed during the current study available from the corresponding author on reasonable request. 
State Key Laboratory of Analytical Chemistry for Life Science, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing, 210093, China Zhoumin Li, Zhonghui Li & Danke Xu School of Chemistry and Biological Science, Nanjing University Jingling College, Nanjing, 210089, China Zhoumin Li Ministry of Agriculture-Key Laboratory of Quality & Safety Control for Milk and Dairy Products, Institute of Animal Science, Chinese Academy of Agricultural Sciences, Beijing, 100193, People's Republic of China Fang Wen & Nan Zheng Ministry of Agriculture Dairy Quality Supervision and Testing Center, Harbin, 150090, China Jindou Jiang Fang Wen Zhonghui Li Nan Zheng Danke Xu ZL wrote the manuscript and carried out visualized microarray experiments, including optimization of experimental conditions, determination of cross reaction rate, calculation calibration curves and recovery. FW and NZ performed HPLC measurements, including treatment of milk samples and determination the concentration of α-LA, β-LG, LF. ZL and JJ performed dairy determination, including evaluation the concentration of α-LA, β-LG, LF. DX designed the study and assisted in manuscript revision. All authors read and approved the final manuscript. Corresponding authors Correspondence to Nan Zheng or Danke Xu. Additional file Additional file 1: Figure S1. The microarray of α-LA, β-LG, and LF on clear flat-bottom 96-well plate after silver enhancement was imaged with microarray scanner (QARRAY 2000). From top to bottom, left to right was numbered 1 to 18. 1–7 were raw milk, 8–11 were pasteurized milk, 12–18 were UHT milk including skimmed milk and high calcium milk. (JPEG 3520 kb) Li, Z., Wen, F., Li, Z. et al. Simultaneous detection of α-Lactoalbumin, β-Lactoglobulin and Lactoferrin in milk by Visualized Microarray. BMC Biotechnol 17, 72 (2017). https://doi.org/10.1186/s12896-017-0387-9 Visualized microarray α-Lactoalbumin Applied immunology
CommonCrawl
High-energy sources at low radio frequency : the Murchison Widefield Array view of Fermi blazars Giroletti, M. Massaro, F. D'Abrusco, R. Lico, R. Burlon, D. Hurley-Walker, N. Johnston-Hollitt, M. Morgan, J. Pavlidou, V. Bell, M. Bernardi, G. Bhat, R. Bowman, J. D. Briggs, F. Cappallo, R. J. Corey, B. E. Deshpande, A. A. Ewall-Rice, A. Emrich, D. Gaensler, B. M. Goeke, R. Greenhill, L. J. Hazelton, B. J. Hindson, L. Kaplan, D. L. Kasper, J. C. Kratzenberg, E. Feng, L. Jacobs, D. Kurdryavtseva, N. Lenc, E. Lonsdale, C. J. Lynch, M. J. McKinley, B. McWhirter, S. R. Mitchell, D. A. Morales, M. F. Morgan, E. Oberoi, D. Offringa, A. R. Ord, S. M. Pindor, B. Prabu, T. Procopio, P. Riding, J. Rogers, A. E. E. Roshi, A. Shankar, N. Udaya Srivani, K. S. Subrahmanyan, R. Tingay, S. J. Waterson, M. Wayth, R. B. Webster, R. L. Whitney, A. R. Williams, A. Williams, C. L. Low-frequency radio arrays are opening a new window for the study of the sky, both to study new phenomena and to better characterize known source classes. Being flat-spectrum sources, blazars are so far poorly studied at low radio frequencies. We characterize the spectral properties of the blazar population at low radio frequency, compare the radio and high-energy properties of the gamma-ray blazar population, and search for radio counterparts of unidentified gamma-ray sources. We cross-correlated the 6,100 deg^2 Murchison Widefield Array Commissioning Survey catalogue with the Roma blazar catalogue, the third catalogue of active galactic nuclei detected by Fermi-LAT, and the unidentified members of the entire third catalogue of gamma-ray sources detected by Fermi-LAT. When available, we also added high-frequency radio data from the Australia Telescope 20 GHz catalogue. We find low-frequency counterparts for 186 out of 517 (36%) blazars, 79 out of 174 (45%) gamma-ray blazars, and 8 out of 73 (11%) gamma-ray blazar candidates. The mean low-frequency (120–180 MHz) blazar spectral index is $\langle \alpha_\mathrm{low} \rangle=0.57\pm0.02$: blazar spectra are flatter than those of the rest of the population of low-frequency sources, but are steeper than at $\sim$GHz frequencies. Low-frequency radio flux density and gamma-ray energy flux display a mildly significant and broadly scattered correlation. Ten unidentified gamma-ray sources have a (probably fortuitous) positional match with low radio frequency sources. Low-frequency radio astronomy provides important information about sources with a flat radio spectrum and high-energy emission. However, the relatively low sensitivity of the present surveys means that a significant fraction of these objects is still missed. Upcoming deeper surveys, such as the GaLactic and Extragalactic All-Sky MWA (GLEAM) survey, will provide further insight into this population. https://doi.org/10.1051/0004-6361/201527817
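As a small aside, the quoted spectral index follows the usual two-point definition with the S ∝ ν^(−α) convention (so larger α means a steeper spectrum), which is consistent with the statement above that blazar spectra are flatter than those of the general low-frequency population; assuming this convention, the sketch below computes the index from a pair of flux densities. The numbers in the example call are made up for illustration and are not taken from the MWA catalogue.

```python
# Minimal sketch: two-point spectral index between 120 and 180 MHz,
# with the convention S ∝ nu^(-alpha).  Illustrative flux densities only.
import math

def spectral_index(s1, nu1, s2, nu2):
    return -math.log(s1 / s2) / math.log(nu1 / nu2)

# e.g. 1.0 Jy at 120 MHz falling to 0.79 Jy at 180 MHz gives alpha ~ 0.58
print(round(spectral_index(1.0, 120e6, 0.79, 180e6), 2))
```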
CommonCrawl
Film speed Film speed is the measure of a photographic film's sensitivity to light, determined by sensitometry and measured on various numerical scales, the most recent being the ISO system. A closely related ISO system is used to measure the sensitivity of digital imaging systems. Relatively insensitive film, with a correspondingly lower speed index, requires more exposure to light to produce the same image density as a more sensitive film, and is thus commonly termed a slow film. Highly sensitive films are correspondingly termed fast films. In both digital and film photography, the reduction of exposure corresponding to use of higher sensitivities generally leads to reduced image quality (via coarser film grain or higher image noise of other types). In short, the higher the sensitivity, the grainier the image will be. Ultimately sensitivity is limited by the quantum efficiency of the film or sensor. This film container denotes its speed as ISO 100/21°, including both arithmetic (100 ASA) and logarithmic (21 DIN) components. The second is often dropped, making (e.g.) "ISO 100" effectively equivalent to the older ASA speed. (As is common, the "100" in the film name alludes to its ISO rating). Film speed measurement systems Historical systems Warnerke The first known practical sensitometer, which allowed measurements of the speed of photographic materials, was invented by the Polish engineer Leon Warnerke[1] – pseudonym of Władysław Małachowski (1837–1900) – in 1880, among the achievements for which he was awarded the Progress Medal of the Photographic Society of Great Britain in 1882.[2][3] It was commercialized from 1881. The Warnerke Standard Sensitometer consisted of a frame holding an opaque screen with an array of typically 25 numbered, gradually pigmented squares brought into contact with the photographic plate during a timed test exposure under a phosphorescent tablet excited beforehand by the light of a burning magnesium ribbon.[3] The speed of the emulsion was then expressed in 'degrees' Warnerke (sometimes seen as Warn. or °W.) corresponding with the last number visible on the exposed plate after development and fixation. Each number represented an increase of 1/3 in speed; typical plate speeds were between 10° and 25° Warnerke at the time.
His system saw some success but proved to be unreliable[1] due to its spectral sensitivity to light, the fading intensity of the light emitted by the phosphorescent tablet after its excitation as well as high built-tolerances.[3] The concept, however, was later built upon in 1900 by Henry Chapman Jones (1855–1932) in the development of his plate tester and modified speed system.[3][4] Another early practical system for measuring the sensitivity of an emulsion was that of Hurter and Driffield (H&D), originally described in 1890, by the Swiss-born Ferdinand Hurter (1844–1898) and British Vero Charles Driffield (1848–1915). In their system, speed numbers were inversely proportional to the exposure required. For example, an emulsion rated at 250 H&D would require ten times the exposure of an emulsion rated at 2500 H&D.[5] The methods to determine the sensitivity were later modified in 1925 (in regard to the light source used) and in 1928 (regarding light source, developer and proportional factor)—this later variant was sometimes called "H&D 10". The H&D system was officially[6] accepted as a standard in the former Soviet Union from 1928 until September 1951, when it was superseded by GOST 2817-50. Scheiner The Scheinergrade (Sch.) system was devised by the German astronomer Julius Scheiner (1858–1913) in 1894 originally as a method of comparing the speeds of plates used for astronomical photography. Scheiner's system rated the speed of a plate by the least exposure to produce a visible darkening upon development. Speed was expressed in degrees Scheiner, originally ranging from 1° Sch. to 20° Sch., where an increment of 19° Sch. corresponded to a hundredfold increase in sensitivity, which meant that an increment of 3° Sch. came close to a doubling of sensitivity.[5][7] \sqrt[19]{100}^3 = 2.06914...\approx 2 The system was later extended to cover larger ranges and some of its practical shortcomings were addressed by the Austrian scientist Josef Maria Eder (1855–1944)[1] and Flemish-born botanist Walter Hecht (1896–1960), (who, in 1919/1920, jointly developed their Eder–Hecht neutral wedge sensitometer measuring emulsion speeds in Eder–Hecht grades). Still, it remained difficult for manufactures to reliably determine film speeds, often only by comparing with competing products,[1] so that an increasing number of modified semi-Scheiner-based systems started to spread, which no longer followed Scheiner's original procedures and thereby defeated the idea of comparability.[1][8] Scheiner's system was eventually abandoned in Germany, when the standardized DIN system was introduced in 1934. In various forms, it continued to be in widespread use in other countries for some time. The DIN system, officially DIN standard 4512 by Deutsches Institut für Normung (but still named Deutscher Normenausschuß (DNA) at this time), was published in January 1934. It grew out of drafts for a standardized method of sensitometry put forward by Deutscher Normenausschuß für Phototechnik[8] as proposed by the committee for sensitometry of the Deutsche Gesellschaft für photographische Forschung[9] since 1930[10][11] and presented by Robert Luther[11][12] (1868–1945) and Emanuel Goldberg[12] (1881–1970) at the influential VIII. 
International Congress of Photography (German: Internationaler Kongreß für wissenschaftliche und angewandte Photographie) held in Dresden from August 3 to 8, 1931.[8][13] The DIN system was inspired by Scheiner's system,[1] but the sensitivities were represented as the base 10 logarithm of the sensitivity multiplied by 10, similar to decibels. Thus an increase of 20° (and not 19° as in Scheiner's system) represented a hundredfold increase in sensitivity, and a difference of 3° was much closer to the base 10 logarithm of 2 (0.30103…):[7] \log_{10}{(2)} = 0.30103... \approx 3/10 As in the Scheiner system, speeds were expressed in 'degrees'. Originally the sensitivity was written as a fraction with 'tenths' (for example "18/10° DIN"),[14] where the resultant value 1.8 represented the relative base 10 logarithm of the speed. 'Tenths' were later abandoned with DIN 4512:1957-11, and the example above would be written as "18° DIN".[5] The degree symbol was finally dropped with DIN 4512:1961-10. This revision also saw significant changes in the definition of film speeds in order to accommodate then-recent changes in the American ASA PH2.5-1960 standard, so that film speeds of black-and-white negative film effectively would become doubled, that is, a film previously marked as "18° DIN" would now be labeled as "21 DIN" without emulsion changes. Originally only meant for black-and-white negative film, the system was later extended and regrouped into nine parts, including DIN 4512-1:1971-04 for black-and-white negative film, DIN 4512-4:1977-06 for color reversal film and DIN 4512-5:1977-10 for color negative film. On an international level the German DIN 4512 system has been effectively superseded in the 1980s by ISO 6:1974,[15] ISO 2240:1982,[16] and ISO 5800:1979[17] where the same sensitivity is written in linear and logarithmic form as "ISO 100/21°" (now again with degree symbol). These ISO standards were subsequently adopted by DIN as well. Finally, the latest DIN 4512 revisions were replaced by corresponding ISO standards, DIN 4512-1:1993-05 by DIN ISO 6:1996-02 in September 2000, DIN 4512-4:1985-08 by DIN ISO 2240:1998-06 and DIN 4512-5:1990-11 by DIN ISO 5800:1998-06 both in July 2002. The film speed scale recommended by the British Standards Institution (BSI) was almost identical to the DIN system except that the BS number was 10 degrees greater than the DIN number. Before the advent of the ASA system, the system of Weston film speed ratings was introduced by Edward Faraday Weston (1878–1971) and his father Dr. Edward Weston (1850–1936), a British-born electrical engineer, industrialist and founder of the US-based Weston Electrical Instrument Corporation,[18] with the Weston model 617, one of the earliest photo-electric exposure meters, in August 1932. The meter and film rating system were invented by William Nelson Goodwin, Jr.,[19][20] who worked for them[21] and later received a Howard N. Potts Medal for his contributions to engineering. The company tested and frequently published speed ratings for most films of the time. Weston film speed ratings could since be found on most Weston exposure meters and were sometimes referred to by film manufactures and third parties[22] in their exposure guidelines. 
Since manufacturers were sometimes creative about film speeds, the company went as far as to warn users about unauthorized uses of its film ratings in its "Weston film ratings" booklets.[23] The Weston Cadet (model 852, introduced in 1949), Direct Reading (model 853, introduced 1954) and Master III (models 737 and S141.3, introduced in 1956) were the first in their line of exposure meters to switch to the by then established ASA scale. Other models used the original Weston scale until ca. 1955. The company continued to publish Weston film ratings after 1955,[24] but while their recommended values often differed slightly from the ASA film speeds found on film boxes, these newer Weston values were based on the ASA system and had to be converted for use with older Weston meters by subtracting 1/3 exposure stop, as per Weston's recommendation.[24] Vice versa, "old" Weston film speed ratings could be converted into "new" Westons and the ASA scale by adding the same amount; that is, a film rating of 100 Weston (up to 1955) corresponded to 125 ASA (as per ASA PH2.5-1954 and before). This conversion was not necessary on Weston meters manufactured and Weston film ratings published from 1956 onward, due to their inherent use of the ASA system; however, the changes of the ASA PH2.5-1960 revision may need to be taken into account when comparing with newer ASA or ISO values.

Prior to the establishment of the ASA scale,[25] and similar to the Weston film speed ratings, another manufacturer of photo-electric exposure meters, General Electric, developed its own rating system of so-called General Electric film values (often abbreviated as G-E or GE) around 1937. Film speed values for use with its meters were published in regularly updated General Electric Film Values[26] leaflets and in the General Electric Photo Data Book.[27] General Electric switched to the ASA scale in 1946. Meters manufactured from February 1946 onward were already equipped with the ASA scale (labeled "Exposure Index"). For some of the older meters with scales in "Film Speed" or "Film Value" (e.g. models DW-48, DW-49 as well as early DW-58 and GW-68 variants), replaceable hoods with ASA scales were available from the manufacturer.[26][28] The company continued to publish recommended film values after that date; however, these were now aligned to the ASA scale.

Based on earlier research work by Loyd Ancile Jones (1884–1954) of Kodak and inspired by the systems of Weston film speed ratings[24] and General Electric film values,[26] the American Standards Association (now named ANSI) defined a new method to determine and specify film speeds of black-and-white negative films in 1943. ASA Z38.2.1-1943 was revised in 1946 and 1947 before the standard grew into ASA PH2.5-1954. Originally, ASA values were frequently referred to as American standard speed numbers or ASA exposure-index numbers. (See also: Exposure Index (EI).) The ASA scale was arithmetic; that is, a film denoted as having a film speed of 200 ASA was twice as fast as a film with 100 ASA. The ASA standard underwent a major revision in 1960 with ASA PH2.5-1960, when the method of determining film speed was refined and previously applied safety factors against under-exposure were abandoned, effectively doubling the nominal speed of many black-and-white negative films. For example, an Ilford HP3 that had been rated at 200 ASA before 1960 was labeled 400 ASA afterwards without any change to the emulsion.
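As a rough illustration of the pre-1956 conversion mentioned above (adding one third of an exposure stop turns an "old" Weston rating of 100 into 125 ASA), one can multiply by the cube root of two and snap to the nearest standard one-third-stop value. This is a sketch for illustration only; the list of standard values is assumed here, and Weston's published tables remain authoritative.

STANDARD_SPEEDS = [50, 64, 80, 100, 125, 160, 200, 250, 320, 400]  # assumed 1/3-stop series

def old_weston_to_asa(weston):
    # Add 1/3 exposure stop, i.e. multiply by 2**(1/3), then snap to a standard value.
    exact = weston * 2 ** (1 / 3)
    return min(STANDARD_SPEEDS, key=lambda s: abs(s - exact))

print(old_weston_to_asa(100))  # 125, matching the example in the text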
Similar changes were applied to the DIN system with DIN 4512:1961-10 and to the BS system with BS 1380:1963 in the following years. In addition to the established arithmetic speed scale, ASA PH2.5-1960 also introduced logarithmic ASA grades (100 ASA = 5° ASA), where a difference of 1° ASA represented a full exposure stop and therefore a doubling of film speed. For a while, ASA grades were also printed on film boxes, and they also lived on in the form of the APEX speed value Sv (without degree symbol). ASA PH2.5-1960 was revised as ANSI PH2.5-1979, without the logarithmic speeds, and later replaced by NAPM IT2.5-1986 of the National Association of Photographic Manufacturers, which represented the US adoption of the international standard ISO 6. The latest issue of ANSI/NAPM IT2.5 was published in 1993.

The standard for color negative film was introduced as ASA PH2.27-1965 and saw a string of revisions in 1971, 1976, 1979 and 1981 before it finally became ANSI IT2.27-1988, prior to its withdrawal. Color reversal film speeds were defined in ANSI PH2.21-1983, which was revised in 1989 before it became ANSI/NAPM IT2.21 in 1994, the US adoption of the ISO 2240 standard. On an international level, the ASA system was superseded by the ISO film speed system between 1982 and 1987; however, the arithmetic ASA speed scale continued to live on as the linear speed value of the ISO system.

GOST (Cyrillic: ГОСТ) was an arithmetic film speed scale defined in GOST 2817-45 and GOST 2817-50.[29][30] It was used in the former Soviet Union from October 1951, replacing Hurter & Driffield (H&D, Cyrillic: ХиД) numbers,[29] which had been used since 1928. GOST 2817-50 was similar to the ASA standard, having been based on a speed point at a density 0.2 above base plus fog, as opposed to the ASA's 0.1.[31] GOST markings are found only on pre-1987 photographic equipment (film, cameras, lightmeters, etc.) of Soviet Union manufacture.[32] On 1 January 1987, the GOST scale was realigned to the ISO scale with GOST 10691-84.[33] This evolved into multiple parts, including GOST 10691.6-88[34] and GOST 10691.5-88,[35] which both became effective on 1 January 1991.

Current system: ISO

The ASA and DIN film speed standards have been combined into the ISO standards since 1974.
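The logarithmic ASA grades of ASA PH2.5-1960 mentioned above (100 ASA = 5° ASA, one degree per exposure stop) imply a simple base-2 relationship with the arithmetic scale. The sketch below merely restates that relationship; the function name and the exact rounding are illustrative and not taken from the standard.

import math

def asa_log_grade(arithmetic_speed):
    # 100 ASA corresponds to 5 degrees ASA; each additional degree doubles the speed.
    return 5 + math.log2(arithmetic_speed / 100)

print(asa_log_grade(100))  # 5.0
print(asa_log_grade(200))  # 6.0 (one stop faster)
print(asa_log_grade(400))  # 7.0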
ISO 6:1974, ISO 6:1993 (1993-02). Photography — Black-and-white pictorial still camera negative film/process systems — Determination of ISO speed. Geneva: International Organization for Standardization.
ISO 2240:1982 (1982-07), ISO 2240:1994 (1994-09), ISO 2240:2003 (2003-10). Photography — Colour reversal camera films — Determination of ISO speed. Geneva: International Organization for Standardization.
ISO 2720:1974. General Purpose Photographic Exposure Meters (Photoelectric Type) — Guide to Product Specification. Geneva: International Organization for Standardization.
ISO 5800:1979, ISO 5800:1987 (1987-11), ISO 5800:1987/Cor 1:2001 (2001-06). Photography — Colour negative films for still photography — Determination of ISO speed. Geneva: International Organization for Standardization.
ISO 12232:1998 (1998-08), ISO 12232:2006 (2006-04-15), ISO 12232:2006 (2006-10-01). Photography — Digital still cameras — Determination of exposure index, ISO speed ratings, standard output sensitivity, and recommended exposure index. Geneva: International Organization for Standardization.
ASA Z38.2.1-1943, ASA Z38.2.1-1946, ASA Z38.2.1-1947 (1947-07-15). American Standard Method for Determining Photographic Speed and Speed Number. New York: American Standards Association. Superseded by ASA PH2.5-1954.
ASA PH2.5-1954, ASA PH2.5-1960. American Standard Method for Determining Speed of Photographic Negative Materials (Monochrome, Continuous Tone). New York: United States of America Standards Institute (USASI). Superseded by ANSI PH2.5-1972.
ANSI PH2.5-1972, ANSI PH2.5-1979 (1979-01-01), ANSI PH2.5-1979(R1986). Speed of photographic negative materials (monochrome, continuous tone), method for determining. New York: American National Standards Institute.
Superseded by NAPM IT2.5-1986. NAPM IT2.5-1986, ANSI/ISO 6-1993 ANSI/NAPM IT2.5-1993 (1993-01-01). Photography — Black-and-White Pictorial Still Camera Negative Film/Process Systems — Determination of ISO Speed (same as ANSI/ISO 6-1993). National Association of Photographic Manufacturers. This represents the US adoption of ISO 6. ASA PH2.12-1957, ASA PH2.12-1961. American Standard, General-Purpose Photographic Exposure Meters (photoelectric type). New York: American Standards Association. Superseded by ANSI PH3.49-1971. ANSI PH2.21-1983 (1983-09-23), ANSI PH2.21-1983(R1989). Photography (Sensitometry) Color reversal camera films - Determination of ISO speed. New York: American Standards Association. Superseded by ANSI/ISO 2240-1994 ANSI/NAPM IT2.21-1994. ANSI/ISO 2240-1994 ANSI/NAPM IT2.21-1994. Photography - Colour reversal camera films - determination of ISO speed. New York: American National Standards Institute. This represents the US adoption of ISO 2240. ASA PH2.27-1965 (1965-07-06), ASA PH2.27-1971, ASA PH2.27-1976, ANSI PH2.27-1979, ANSI PH2.27-1981, ANSI PH2.27-1988 (1988-08-04). Photography - Colour negative films for still photography - Determination of ISO speed (withdrawn). New York: American Standards Association. Superseded by ANSI IT2.27-1988. ANSI IT2.27-1988 (1994-08/09?). Photography Color negative films for still photography - Determination of ISO speed. New York: American National Standards Institute. Withdrawn. This represented the US adoption of ISO 5800. ANSI PH3.49-1971, ANSI PH3.49-1971(R1987). American National Standard for general-purpose photographic exposure meters (photoelectric type). New York: American National Standards Institute. After several revisions, this standard was withdrawn in favor of ANSI/ISO 2720:1974. ANSI/ISO 2720:1974, ANSI/ISO 2720:1974(R1994) ANSI/NAPM IT3.302-1994. General Purpose Photographic Exposure Meters (Photoelectric Type) — Guide to Product Specification. New York: American National Standards Institute. This represents the US adoption of ISO 2720. BSI BS 1380:1947, BSI BS 1380:1963. Speed and exposure index. British Standards Institution. Superseded by BSI BS 1380-1:1973 (1973-12), BSI BS 1380-2:1984 (1984-09), BSI BS 1380-3:1980 (1980-04) and others. BSI BS 1380-1:1973 (1973-12-31). Speed of sensitized photographic materials: Negative monochrome material for still and cine photography. British Standards Institution. Replaced by BSI BS ISO 6:1993, superseded by BSI BS ISO 2240:1994. BSI BS 1380-2:1984 ISO 2240:1982 (1984-09-28). Speed of sensitized photographic materials. Method for determining the speed of colour reversal film for still and amateur cine photography. British Standards Institution. Superseded by BSI BS ISO 2240:1994. BSI BS 1380-3:1980 ISO 5800:1979 (1980-04-30). Speed of sensitized photographic materials. Colour negative film for still photography. British Standards Institution. Superseded by BSI BS ISO 5800:1987. BSI BS ISO 6:1993 (1995-03-15). Photography. Black-and-white pictorial still camera negative film/process systems. Determination of ISO speed. British Standards Institution. This represents the British adoption of ISO 6:1993. BSI BS ISO 2240:1994 (1993-03-15), BSI BS ISO 2240:2003 (2004-02-11). Photography. Colour reversal camera films. Determination of ISO speed. British Standards Institution. This represents the British adoption of ISO 2240:2003. BSI BS ISO 5800:1987 (1995-03-15). Photography. Colour negative films for still photography. Determination of ISO speed. British Standards Institution. 
This represents the British adoption of ISO 5800:1987. DIN 4512:1934-01, DIN 4512:1957-11 (Blatt 1), DIN 4512:1961-10 (Blatt 1). Photographische Sensitometrie, Bestimmung der optischen Dichte. Berlin: Deutscher Normenausschuß (DNA). Superseded by DIN 4512-1:1971-04, DIN 4512-4:1977-06, DIN 4512-5:1977-10 and others. DIN 4512-1:1971-04, DIN 4512-1:1993-05. Photographic sensitometry; systems of black and white negative films and their process for pictorial photography; determination of speed. Berlin: Deutsches Institut für Normung (before 1975: Deutscher Normenausschuß (DNA)). Superseded by DIN ISO 6:1996-02. DIN 4512-4:1977-06, DIN 4512-4:1985-08. Photographic sensitometry; determination of the speed of colour reversal films. Berlin: Deutsches Institut für Normung. Superseded by DIN ISO 2240:1998-06. DIN 4512-5:1977-10, DIN 4512-5:1990-11. Photographic sensitometry; determination of the speed of colour negative films. Berlin: Deutsches Institut für Normung. Superseded by DIN ISO 5800:1998-06. DIN ISO 6:1996-02. Photography - Black-and-white pictorial still camera negative film/process systems - Determination of ISO speed (ISO 6:1993). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 6:1993. DIN ISO 2240:1998-06, DIN ISO 2240:2005-10. Photography - Colour reversal camera films - Determination of ISO speed (ISO 2240:2003). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 2240:2003. DIN ISO 5800:1998-06, DIN ISO 5800:2003-11. Photography - Colour negative films for still photography - Determination of ISO speed (ISO 5800:1987 + Corr. 1:2001). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 5800:2001. Leslie B. Stroebel, John Compton, Ira Current, Richard B. Zakia. Basic Photographic Materials and Processes, second edition. Boston: Focal Press, 2000. ISBN 0-240-80405-8. ^ a b c d e f DIN 4512:1934-01. Photographische Sensitometrie, Bestimmung der optischen Dichte. Deutscher Normenausschuß (DNA), 1934: In the introduction to the standard, Warnerke's system is described as the first practical system used to measure emulsion speeds, but as being unreliable. In regard to Scheiner's system, it states: "Auch hier erwies sich nach einiger Zeit, daß das Meßverfahren trotz der von Scheinergraden ermitteln muß, häufig in sehr primitiver Weise durch […] Vergleich mit Erzeugnissen anderer Hersteller. Die so ermittelten Gebrauchs-Scheinergrade haben mit dem ursprünglich […] ausgearbeiteten Meßverfahren nach Scheiner sachlich nichts mehr zu tun. […] Als Folge hiervon ist allmählich eine Inflation in Empfindlichkeitsgraden eingetreten, für die das Scheiner'sche Verfahren nichts mehr als den Namen hergibt." ^ Royal Photographic Society. Progress medal. Web-page listing people, who have received this award since 1878 ([1]): "Instituted in 1878, this medal is awarded in recognition of any invention, research, publication or other contribution which has resulted in an important advance in the scientific or technological development of photography or imaging in the widest sense. This award also carries with it an Honorary Fellowship of The Society. […] 1882 Leon Warnerke […] 1884 J M Eder […] 1898 Ferdinand Hurter and Vero C Driffield […] 1910 Alfred Watkins […] 1912 H Chapman Jones […] 1948 Loyd A Jones […]" ^ a b c d Berhard Edward Jones (editor). Cassell's cyclopaedia of photography, Cassell, London, 1911 ([2]). 
Reprinted as Encyclopaedia of photography - With a New Picture Portfolio and introduction by Peter C. Bunnell and Robert A. Sobieszek. Arno Press Inc., New York 1974, ISBN 0-405-04922-6, pp. 472–473: 'Soon after the introduction of the gelatine dry plate, it was usual to express the speed of the emulsion as "x times," which meant that it was x times the speed of a wet collodion plate. This speed was no fixed quantity, and the expression consequently meant but little. Warnerke introduced a sensitometer, consisting of a series of numbered squares with increasing quantities of opaque pigment. The plate to be tested was placed in contact with this, and an exposure made to light emanating from a tablet of luminous paint, excited by burning magnesium ribbon. After development and fixation the last number visible was taken as the speed of the plate. The chief objections to this method were that practically no two numbered tablets agreed, that the pigment possessed selective spectral absorption, and that the luminosity of the tablet varied considerably with the lapse of time between its excitation and the exposure of the plate. […] Chapman Jones has introduced a modified Warnerke tablet containing a series of twenty-five graduated densities, a series of coloured squares, and a strip of neutral grey, all five being of approximately equal luminosity, and a series of four squares passing a definite portion of the spectrum; finally, there is a square of a line design, over which is superposed a half-tone negative. This "plate tester," […] is used with a standard candle as the source of light, and is useful for rough tests of both plates and printing papers.' ^ Paul Nooncree Hasluck (1905). The Book of Photography: Practical, Theoretical and Applied. ([3]): "THE CHAPMAN JONES PLATE TESTER. A convenient means of testing the colour rendering and other properties of a sensitive plate, or for ascertaining the effect of various colour screens, is afforded by the plate tester devised by Mr. Chapman Jones in 1900. This consists of a number of graduated squares by which the sensitiveness and range of gradation of the plate examined may be determined; a series of squares of different colours and mixtures of colours of equal visual intensity, which will indicate the colour sensitiveness; and a strip of uncoloured space for comparison purposes. It is simply necessary to expose the plate being tested, in contact with the screen, to the light of a standard candle. A suitable frame and stand are supplied for the purpose; any other light may, however, be used if desired. The plate is then developed, when an examination of the negative will yield the desired information. The idea of the coloured squares is based on that of the Abney Colour Sensitometer, where three or four squares of coloured and one of uncoloured glass are brought to an equal visual intensity by backing where necessary with squares of exposed celluloid film developed to suitable density." ^ a b c ^ a b Martin Riat. Graphische Techniken - Eine Einführung in die verschiedenen Techniken und ihre Geschichte. E-Book, 3. German edition, Burriana, spring 2006 ([4]), based on a Spanish book: Martin Riat. Tecniques Grafiques: Una Introduccio a Les Diferents Tecniques I a La Seva Historia. 1. edition, Aubert, September 1983, ISBN 84-86243-00-9. ^ a b c Samuel Edward Sheppard. Resumé of the Proceedings of the Dresden International Photographic Congress. In: Sylvan Harris (editor). Journal of the Society of Motion Picture Engineers. 
Volume XVIII, Number 2 (February 1932), pp. 232-242 ([5]): '[…] The 8th International Congress of Photography was held at Dresden, Germany, from August 3 to 8, 1931, inclusive. […] In regard to sensitometric standardization, several important developments occurred. First, the other national committees on sensitometric standardization accepted the light source and filter proposed by the American Committee at Paris, 1925, and accepted by the British in 1928. In the meantime, no definite agreement had been reached, nor indeed had very definite proposals been made on the subjects of sensitometers or exposure meters, development, density measurement, and methods of expressing sensitometric results, although much discussion and controversy on this subject had taken place. At the present Congress, a body of recommendations for sensitometric standards was put forward by the Deutschen Normenausschusses fur Phototechnik, which endeavored to cover the latter questions and bring the subject of sensitometric standardization into the industrial field. It was stated by the German committee that this action had been forced on them by difficulties arising from indiscriminate and uncontrolled placing of speed numbers on photographic sensitive goods, a situation which was summarized at the Congress by the term "Scheiner-inflation." The gist of these recommendations was as follows: (a) Acceptance of the light source and daylight filter as proposed by the American commission. (b) As exposure meter, a density step-wedge combined with a drop shutter accurate to 1/20 second. (c) Brush development in a tray with a prescribed solution of metol-hydroquinone according to a so-called "optimal" development. (d) Expression of the sensitivity by that illumination at which a density of 0.1 in excess of fog is reached. (e) Density measurement shall be carried out in diffused light according to details to be discussed later. These proposals aroused a very lively discussion. The American and the British delegations criticized the proposals both as a whole and in detail. As a whole they considered that the time was not ripe for application of sensitometric standards to industrial usage. In matters of detail they criticized the proposed employment of a step-wedge, and the particular sensitivity number proposed. The latter approaches very roughly the idea of an exposure for minimum gradient, but even such a number is not adequate for certain photographic uses of certain materials. The upshot of the discussion was that the German proposals in somewhat modified form are to be submitted simply as proposals of the German committee for sensitometric standardization to the various national committees for definite expression of opinion within six months of the expiration of the Congress. Further, in case of general approval of these recommendations by the other national committees, that a small International Committee on Sensitometric Standardization shall, within a further period of six months, work out a body of sensitometric practices for commercial usage. In this connection it should be noted that it was agreed that both the lamps and filters and exposure meters should be certified as within certain tolerances by the national testing laboratories of the countries in question. […]' ^ Martin Biltz. Über DIN-Grade, das neue deutsche Maß der photographischen Empfindlichkeit. In: Naturwissenschaften, Volume 21, Number 41, 1933, pp. 
734-736, Springer, doi:10.1007/BF01504271: "[…] Im folgenden soll an Hand der seither gebräuchlichen sensitometrischen Systeme nach Scheiner […], nach Hurter und Driffield […] und nach Eder und Hecht […] kurz gezeigt werden, wie man bisher verfahren ist. Im Anschlusse daran wird das neue vom Deutschen Normenausschusse für Phototechnik auf Empfehlung des Ausschusses für Sensitometrie der Deutschen Gesellschaft für photographische Forschung vorgeschlagene System […] betrachtet werden. […]". ^ E. Heisenberg. Mitteilungen aus verschiedenen Gebieten – Bericht über die Gründung und erste Tagung der Deutschen Gesellschaft für photographische Forschung (23. bis 25. Mai 1930). In: Naturwissenschaften, Volume 18, Number 52, 1930, pp. 1130-1131, Springer, doi:10.1007/BF01492990: "[…] Weitere 3 Vorträge von Prof. Dr. R. Luther, Dresden, Prof. Dr. Lehmann, Berlin, Prof. Dr. Pirani, Berlin, behandelten die Normung der sensitometrischen Methoden. Zu normen sind: die Lichtquelle, die Art der Belichtung (zeitliche oder Intensitätsabstufung), die Entwicklung, die Auswertung. Auf den Internationalen Kongressen in Paris 1925 und London 1928 sind diese Fragen schon eingehend behandelt und in einzelnen Punkten genaue Vorschläge gemacht worden. Die Farbtemperatur der Lichtquelle soll 2360° betragen. Vor dieselbe soll ein Tageslichtfilter, welches vom Bureau of Standards ausgearbeitet worden ist, geschaltet werden. Herr Luther hat an der Filterflüssigkeit durch eigene Versuche gewisse Verbesserungen erzielt. Schwierigkeiten bereitet die Konstanthaltung der Farbtemperatur bei Nitralampen. Herr Pirani schlug deshalb in seinem Vortrag die Verwendung von Glimmlampen vor, deren Farbe von der Stromstärke weitgehend unabhängig ist. In der Frage: Zeit- oder Intensitätsskala befürworten die Herren Luther und Lehmann die Intensitätsskala. Herr Lehmann behandelte einige Fragen, die mit der Herstellung der Intensitätsskala zusammenhängen. Ausführlicher wurde noch die Auswertung (zahlenmäßige Angabe der Empfindlichkeit und Gradation) besprochen, die eine der wichtigsten Fragen der Sensitometrie darstellt. In der Diskussion wurde betont, daß es zunächst nicht so sehr auf eine wissenschaftlich erschöpfende Auswertung ankomme als darauf, daß die Empfindlichkeit der Materialien in möglichst einfacher, aber eindeutiger und für den Praktiker ausreichender Weise charakterisiert wird. […]". ^ a b Waltraud Voss. Robert Luther – der erste Ordinarius für Wissenschaftliche Photographie in Deutschland - Zur Geschichte der Naturwissenschaften an der TU Dresden (12). In: Dresdner UniversitätsJournal, 13. Jahrgang, Nr. 5, p. 7, 12 March 2002, ([6]): "[…] Luther war Mitglied des Komitees zur Veranstaltung internationaler Kongresse für wissenschaftliche und angewandte Photographie; die Kongresse 1909 und 1931 in Dresden hat er wesentlich mit vorbereitet. 1930 gehörte er zu den Mitbegründern der Deutschen Gesellschaft für Photographische Forschung. Er gründete und leitete den Ausschuss für Sensitometrie der Gesellschaft, aus dessen Tätigkeit u.a. das DIN-Verfahren zur Bestimmung der Empfindlichkeit photographischer Materialien hervorging. […]" ^ a b Goldberg; John Eggert, head of research at the Agfa plant in Wolfen, near Leipzig; and Robert Luther, the founding Director of the Institute for Scientific Photography at the Technical University in Dresden and Goldberg's dissertation advisor. The proceedings were heavily technical and dominated by discussion of the measurement of film speeds. 
The Congress was noteworthy because a film speed standard proposed by Goldberg and Luther was approved and, in Germany, became DIN 4512, […]". ^ John Eggert, Arpad von Biehler (editors). Bericht über den VIII. Internationalen Kongreß für wissenschaftliche und angewandte Photographie Dresden 1931. J. A. Barth-Verlag, Leipzig, 1932. ^ a b ^ Charles J. Mulhern. Letter to John D. de Vries. 15th June 1990, (Copyscript on John D. de Vries' web-site): "In 1931, Edward Faraday Weston applied for a U.S patent on the first Weston Exposure meter, which was granted patent No. 2016469 on October 8, 1935, also an improved version was applied for and granted U.S patent No. 2042665 on July 7th 1936. From 1932 to around 1967, over 36 varieties of Weston Photographic Exposure Meters were produced in large quantities and sold throughout the world, mostly by Photographic dealers or agents, which also included the Weston film speed ratings, as there were no ASA or DIN data available at that time." ^ William Nelson Goodwin, Jr. Weston emulsion speed ratings: What they are and how they are determined. American Photographer, August 1938, 4 pages. ^ Everett Roseborough. The Contributions of Edward W. Weston and his company. In: Photographic Canadiana, Volume 22, Issue 3, 1996, ([8]). ^ Martin Tipper. Weston — The company and the man. In: www.westonmeter.org.uk, a web-page on Weston exposure meters: "[…] the Weston method of measuring film speeds. While it had some shortcomings it had the advantage of being based on a method which gave practical speeds for actual use and it was independent of any film manufacturer. Previous speed systems such as the H&D and early Scheiner speeds were both threshold speeds and capable of considerable manipulation by manufacturers. Weston's method measured the speed well up on the curve making it more nearly what one would get in actual practice. (This means that he was a bit less optimistic about film sensitivity than the manufacturers of the day who were notorious for pretending their films were more sensitive than they really were.) A certain Mr. W. N. Goodwin of Weston is usually credited with this system." ^ Harold M. Hefley. A method of calculating exposures for photomicrographs. In: Arkansas Academy of Science Journal, Issue 4, 1951, University of Arkansas, Fayetteville, USA, ([9]), research paper on an exposure system for micro-photography based on a variation of Weston film speed ratings. ^ Weston (publisher). Weston film ratings — Weston system of emulsion ratings. Newark, USA, 1946. Booklet, 16 pages, ([10]): 'You cannot necessarily depend on Weston speed values from any other source unless they are marked "OFFICIAL WESTON SPEEDS BY AGREEMENT WITH THE WESTON ELECTRICAL INSTRUMENT CORPORATION"'. ^ a b c Sangamo Weston (publisher). Weston ratings. Enfield, UK, 1956. Booklet, 20 pages, ([11]): "WESTON RATINGS—Correct exposure depends on two variables: (1) the available light and (2) its effect on the film in use. WESTON have always considered these two to be of equal importance and therefore introduced their own system of film ratings. Subsequently this system was found to be so successful that it was widely accepted in photographic circles and formed the basis for internationally agreed standards." ^ General Electric (publisher). GW-68. Manual GES-2810, USA: The manual states that ASA was working on standardized values, but none had been established at this time. ^ a b c General Electric (publisher). General Electric Film Values. Leaflet GED-744, USA, 1947. 
General Electric publication code GED-744, Booklet, 12 pages, ([12]): "This General Electric Film Value Booklet contains the […] exposure-index numbers for […] photographic films in accordance with the new system for rating photographic films that has been devised by the American Standards Association. This system has been under development for several years and is the result of co-operative effort on the part of all the film manufacturers, meter manufacturers, the Optical Society of America, and the Bureau of Standards. It was used by all of the military services during the war. The new ASA exposure-index numbers provide the photographer with the most accurate film-rating information that has yet been devised. The G-E exposure meter uses the ASA exposure-index numbers, not only in the interest of standardization, but also because this system represents a real advancement in the field of measurement. The exposure-index number have been so arranged that all earlier model G-E meters can be used with this series of numbers. For some films the values are exactly the same; and where differences exist, the new ASA exposure-index value will cause but a slight increase in exposure. However […] a comparison of the new ASA exposure-index numbers and the G-E film values is shown […] A complete comparison of all systems of emulsion speed values can be found in the G-E Photo Data Book. […] All G-E meters manufactured after January, 1946, utilize the ASA exposure indexes. Although the new ASA values can be used with all previous model G-E meters, interchangeable calculator-hoods with ASA exposure indexes are available for Types DW-48, DW-49, and DW-58 meters." ^ General Electric (publisher). General Electric Photo Data Book. GET-I717. ^ General Electric. Attention exposure meter owners. Advertisement, 1946 ([13]): "Attention! Exposure meter owners! Modernizing Hood $3.50 […] Modernize your G-E meter (Type DW-48 or early DW-58) with a new G-E Hood. Makes it easy to use the new film-exposure ratings developed by the American Standards Association … now the only basis for data published by leading film makers. See your photo dealer and snap on a new G-E hood! General Electric Company, Schenectady 5, N.Y.". ^ a b Yu. N. Gorokhovskiy. Fotograficheskaya metrologiya. Uspekhi Nauchnoy Fotografii (Advances in Scientific Photography), Volume 15, 1970, pp. 183-195 (English translation: Photographic Metrology. NASA Technical Translation II F-13,921, National Aeronautics and Space Administration, Washington, D.C. 20546, November 1972, [14]). ^ GOST 2817-50 Transparent sublayer photographic materials. Method of general sensitometric test. ([15]): GOST 2817-45 was replaced by GOST 2817-50, which in turn was replaced by GOST 10691.6-88, which defines black-and-white films, whereas GOST 10691.5-88 defines black-and-white films for aerial photography. ^ GOST 10691.0-84 Black-and-white photographic materials with transparent sublaver. Method of general sensitometric test. ([16]). ^ GOST 10691.6-88 Black-and-white phototechnical films, films for scientific researches and industry. Method for determination of speed numbers.([17]). ^ GOST 10691.5-88 Black-and-white aerophotographic films. Method for determination of speed numbers. ([18]). ^ Photography — Cameras — Automatic controls of exposureISO 2721:1982. (paid download). Geneva: International Organization for Standardization. ^ a b c d e f Leica Camera AG (2002). Leica R9 Bedienungsanleitung / Instructions. Leica publication 930 53 VII/03/GX/L, Solms, Germany, p. 
197 ([19]): "Film speed range: Manual setting from ISO 6/9° to ISO 12500/42° (with additional exposure compensation of up to ±3 EV, overall films from ISO 0.8/0° to ISO 100000/51° can be exposed), DX scanning from ISO 25/15° to ISO 5000/38°.". Accessed 30 July 2011. ^ a b c d e f Leica Camera AG (1996). Leica Instructions - Leica R8. Solms, Germany, p. 16 ([20]): 'The DX-setting for automatic speed scanning appears after the position "12800".' and p. 65 ([21]): "Film speed range: Manual setting from ISO 6/9° to ISO 12,800/42°. (With additional override of −3 EV to +3 EV, films from 0 DIN to 51 DIN can be exposed as well.) DX scanning from ISO 25/15° to ISO 5000/38°.". Accessed 30 July 2011. ^ a b c ASA PH2.12-1961, Table 2, p. 9, showed (but did not specify) a speed of 12500 as the next full step greater than 6400. ^ a b Canon. ([22]): "Acceptable film speed has been increased to a range of between ASA 25 and an incredible ASA 12,800 by the use of the CANON BOOSTER. The light-measuring range of the newly developed CANON FT QL has been extended from a low of EV −3.5, f/1.2 15 seconds to EV 18 with ASA 100 film. This is the first time a TTL camera has been capable of such astonishing performance." ^ a b Canon (1978). Canon A-1 Instructions. p. 28, p. 29, p. 46, p. 70, p. 98 ([23][24][25]) ^ a b c d e Nikon USA Web page for Nikon D3s. Accessed 11 January 2010. ^ a b c d e Canon USA Web page for Canon EOS-1D Mark IV. Accessed 11 January 2010. ^ a b Canon USA Web page for Canon EOS-1D X. Accessed October 2011. ^ a b Nikon D4 page for Nikon D4. Accessed 6 January 2012. ^ a b Ricoh Pentax 645Z specifications ([26]) ^ a b Nikon D4s specifications ([27]) ^ a b Sony α ILCE-7S specifications ([28]) ^ Sony Europe Web page for DSLR-A500/DSLR-A550 (2009-08-27): "Dramatically reduced picture noise now allows super-sensitive shooting at up to ISO 12800, allowing attractive results when shooting handheld in challenging situations like candlelit interiors.". Accessed 30 July 2011. ^ Sony Europe Web page for DSLR-A560/DSLR-A580 (2010-08-27): "Multi-frame Noise Reduction 'stacks' a high-speed burst of six frames, creating a single low-noise exposure that boosts effective sensitivity as high as ISO 25600.". Accessed 30 July 2011. ^ Pentax USA Web page for Pentax K-5 (2010): "ISO Sensitivity: ISO 100-12800 (1, 1/2, 1/3 steps), expandable to ISO 80–51200". Accessed 29 July 2011. ^ Fujifilm Canada Web page for Fuji FinePix X100 (2011-02): "Extended output sensitivity equivalent ISO 100 or 12800". Accessed 30 July 2011. ^ Fact Sheet, Delta 3200 Professional. Knutsford, U.K.: Ilford Photo. ^ a b c d e f Photography — Digital still cameras — Determination of exposure index, ISO speed ratings, standard output sensitivity, and recommended exposure indexISO 12232:2006. (paid download). Geneva: International Organization for Standardization. ^ CIPA DC-004. Sensitivity of digital cameras. Tokyo: Camera & Imaging Products Association. ^ Kodak Image Sensors – ISO Measurement. Rochester, NY: Eastman Kodak. ^ New Measures of the Sensitivity of a Digital Camera. Douglas A. Kerr, August 30, 2007. ^ ISO 12232:1998. Photography — Electronic still-picture cameras — Determination of ISO speed, p. 12. Lens speed As should be clear from the above, a greater SOS setting for a given sensor comes with some loss of image quality, just like with analog film. However, this loss is visible as image noise rather than grain. 
Current (January 2010) APS and 35mm sized digital image sensors, both CMOS and CCD based, do not produce significant noise until about ISO 1600.[65] Despite these detailed standard definitions, cameras typically do not clearly indicate whether the user "ISO" setting refers to the noise-based speed, saturation-based speed, or the specified output sensitivity, or even some made-up number for marketing purposes. Because the 1998 version of ISO 12232 did not permit measurement of camera output that had lossy compression, it was not possible to correctly apply any of those measurements to cameras that did not produce sRGB files in an uncompressed format such as TIFF. Following the publication of CIPA DC-004 in 2006, Japanese manufacturers of digital still cameras are required to specify whether a sensitivity rating is REI or SOS. as well as a user-adjustable SOS value. In all cases, the camera should indicate for the white balance setting for which the speed rating applies, such as daylight or tungsten (incandescent light).[59] ISO 200 (daylight), The SOS rating could be user controlled. For a different camera with a noisier sensor, the properties might be S_{40:1}=40, S_{10:1}=800, and S_{\mathrm{sat}}=200. In this case, the camera should report ISO 100 (daylight) ISO speed latitude 50–1600 ISO 100 (SOS, daylight). For example, a camera sensor may have the following properties: S_{40:1}=107, S_{10:1}=1688, and S_{\mathrm{sat}}=49. According to the standard, the camera should report its sensitivity as The standard specifies how speed ratings should be reported by the camera. If the noise-based speed (40:1) is higher than the saturation-based speed, the noise-based speed should be reported, rounded downwards to a standard value (e.g. 200, 250, 320, or 400). The rationale is that exposure according to the lower saturation-based speed would not result in a visibly better image. In addition, an exposure latitude can be specified, ranging from the saturation-based speed to the 10:1 noise-based speed. If the noise-based speed (40:1) is lower than the saturation-based speed, or undefined because of high noise, the saturation-based speed is specified, rounded upwards to a standard value, because using the noise-based speed would lead to overexposed images. The camera may also report the SOS-based speed (explicitly as being an SOS speed), rounded to the nearest standard speed rating.[59] where H_{\mathrm{sos}} is the exposure that will lead to values of 118 in 8-bit pixels, which is 18 percent of the saturation value in images encoded as sRGB or with gamma = 2.2.[59] S_{\mathrm{sos}} = \frac{10\;\text{lx⋅s}}{H_{\mathrm{sos}}}, In addition to the above speed ratings, the standard also defines the standard output sensitivity (SOS), how the exposure is related to the digital pixel values in the output image. It is defined as Standard output sensitivity (SOS) The noise-based speed is defined as the exposure that will lead to a given signal-to-noise ratio on individual pixels. Two ratios are used, the 40:1 ("excellent image quality") and the 10:1 ("acceptable image quality") ratio. These ratios have been subjectively determined based on a resolution of 70 pixels per cm (178 DPI) when viewed at 25 cm (9.8 inch) distance. The signal-to-noise ratio is defined as the standard deviation of a weighted average of the luminance and color of individual pixels. 
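The reporting rules and the worked example quoted above (a sensor with S_40:1 = 107, S_10:1 = 1688 and S_sat = 49 reported as ISO 100 with a speed latitude of 50–1600) can be mimicked with a few lines of Python. The list of standard values and the rounding of the latitude bounds to the nearest standard value are assumptions made here for illustration; ISO 12232 itself is the authoritative reference.

STANDARD = [50, 64, 80, 100, 125, 160, 200, 250, 320, 400, 500, 640,
            800, 1000, 1250, 1600, 2000]  # assumed excerpt of the 1/3-stop series

def round_down(x):
    return max(v for v in STANDARD if v <= x)

def round_up(x):
    return min(v for v in STANDARD if v >= x)

def nearest(x):
    return min(STANDARD, key=lambda v: abs(v - x))

def report(s40, s10, s_sat):
    if s40 > s_sat:
        speed = round_down(s40)   # noise-based speed, rounded downwards
    else:
        speed = round_up(s_sat)   # saturation-based speed, rounded upwards
    latitude = (nearest(s_sat), nearest(s10))
    return speed, latitude

print(report(107, 1688, 49))  # (100, (50, 1600)), as in the example above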
The noise-based speed is mostly determined by the properties of the sensor and somewhat affected by the noise in the electronic gain and AD converter.[59] Digital noise at 3200 ISO vs. 100 ISO Noise-based speed where H_{\mathrm{sat}} is the maximum possible exposure that does not lead to a clipped or bloomed camera output. Typically, the lower limit of the saturation speed is determined by the sensor itself, but with the gain of the amplifier between the sensor and the analog-to-digital converter, the saturation speed can be increased. The factor 78 is chosen such that exposure settings based on a standard light meter and an 18-percent reflective surface will result in an image with a grey level of 18%/√2 = 12.7% of saturation. The factor √2 indicates that there is half a stop of headroom to deal with specular reflections that would appear brighter than a 100% reflecting white surface.[59] S_{\mathrm{sat}} = \frac{78\;\text{lx⋅s}}{H_{\mathrm{sat}}}, The saturation-based speed is defined as Saturation-based speed is a factor depending on the transmittance T of the lens, the vignetting factor v(θ), and the angle θ relative to the axis of the lens. A typical value is q = 0.65, based on θ = 10°, T = 0.9, and v = 0.98.[64] q = \frac{\pi}{4} T\, v(\theta)\, \cos^4\theta where L is the luminance of the scene (in candela per m²), t is the exposure time (in seconds), N is the aperture f-number, and H = \frac{q L t}{N^2}, ISO speed ratings of a digital camera are based on the properties of the sensor and the image processing done in the camera, and are expressed in terms of the luminous exposure H (in lux seconds) arriving at the sensor. For a typical camera lens with an effective focal length f that is much smaller than the distance between the camera and the photographed scene, H is given by Measurements and calculations The two noise-based techniques have rarely been used for consumer digital still cameras. These techniques specify the highest EI that can be used while still providing either an "excellent" picture or a "usable" picture depending on the technique chosen. The saturation-based technique is closely related to the SOS technique, with the sRGB output level being measured at 100% white rather than 18% gray. The SOS value is effectively 0.704 times the saturation-based value.[63] Because the output level is measured in the sRGB output from the camera, it is only applicable to sRGB images—typically TIFF—and not to output files in raw image format. It is not applicable when multi-zone metering is used. The CIPA DC-004 standard requires that Japanese manufacturers of digital still cameras use either the REI or SOS techniques, and DC-008[62] updates the Exif specification to differentiate between these values. Consequently, the three EI techniques carried over from ISO 12232:1998 are not widely used in recent camera models (approximately 2007 and later). As those earlier techniques did not allow for measurement from images produced with lossy compression, they cannot be used at all on cameras that produce images only in JPEG format. The Standard Output Sensitivity (SOS) technique, also new in the 2006 version of the standard, effectively specifies that the average level in the sRGB image must be 18% gray plus or minus 1/3 stop when the exposure is controlled by an automatic exposure control system calibrated per ISO 2721 and set to the EI with no exposure compensation. 
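A small numeric sketch can tie together the quantities above: the exposure H = qLt/N² with q = 0.65, the saturation-based speed S_sat = 78 lx·s / H_sat, and the statement that the SOS value is effectively 0.704 times the saturation-based value. The luminance, exposure time, aperture and H_sat used below are arbitrary example numbers, not values taken from the standard.

Q = 0.65  # typical lens factor given in the text

def exposure(L, t, N):
    # Luminous exposure at the sensor for scene luminance L (cd/m^2),
    # exposure time t (s) and f-number N.
    return Q * L * t / N ** 2

def saturation_speed(H_sat):
    return 78.0 / H_sat

H_example = exposure(L=1000, t=1 / 250, N=8)   # ~0.0406 lx*s for these illustrative numbers
S_sat = saturation_speed(0.39)                 # 200 for an assumed H_sat of 0.39 lx*s
print(H_example, S_sat, 0.704 * S_sat)         # SOS ~ 140.8 for that assumed sensor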
Because the output level is measured in the sRGB output from the camera, it is only applicable to sRGB images—typically JPEG—and not to output files in raw image format. It is not applicable when multi-zone metering is used. The Recommended Exposure Index (REI) technique, new in the 2006 version of the standard, allows the manufacturer to specify a camera model's EI choices arbitrarily. The choices are based solely on the manufacturer's opinion of what EI values produce well-exposed sRGB images at the various sensor sensitivity settings. This is the only technique available under the standard for output formats that are not in the sRGB color space. This is also the only technique available under the standard when multi-zone metering (also called pattern metering) is used. The ISO standard ISO 12232:2006[59] gives digital still camera manufacturers a choice of five different techniques for determining the exposure index rating at each sensitivity setting provided by a particular camera model. Three of the techniques in ISO 12232:2006 are carried over from the 1998 version of the standard, while two new techniques allowing for measurement of JPEG output files are introduced from CIPA DC-004.[60] Depending on the technique selected, the exposure index rating can depend on the sensor sensitivity, the sensor noise, and the appearance of the resulting image. The standard specifies the measurement of light sensitivity of the entire digital camera system and not of individual components such as digital sensors, although Kodak has reported[61] using a variation to characterize the sensitivity of two of their sensors in 2001. The ISO 12232:2006 standard Digital cameras have far surpassed film in terms of sensitivity to light, with ISO equivalent speeds of up to 409,600, a number that is unfathomable in the realm of conventional film photography. Faster processors, as well as advances in software noise reduction techniques allow this type of processing to be executed the moment the photo is captured, allowing photographers to store images that have a higher level of refinement and would have been prohibitively time consuming to process with earlier generations of digital camera hardware. For digital photo cameras ("digital still cameras"), an exposure index (EI) rating—commonly called ISO setting—is specified by the manufacturer such that the sRGB image files produced by the camera will have a lightness similar to what would be obtained with film of the same EI rating at the same exposure. The usual design is that the camera's parameters for interpreting the sensor data values into sRGB values are fixed, and a number of different EI choices are accommodated by varying the sensor's signal gain in the analog realm, prior to conversion to digital. Some camera designs provide at least some EI choices by adjusting the sensor's signal gain in the digital realm. A few camera designs also provide EI adjustment through a choice of lightness parameters for the interpretation of sensor data values into sRGB; this variation allows different tradeoffs between the range of highlights that can be captured and the amount of noise introduced into the shadow areas of the photo. In digital camera systems, an arbitrary relationship between exposure and sensor data values can be achieved by setting the signal gain of the sensor. 
The relationship between the sensor data values and the lightness of the finished image is also arbitrary, depending on the parameters chosen for the interpretation of the sensor data into an image color space such as sRGB. A CCD image sensor, 2/3 inch size Digital camera ISO speed and exposure index Some high-speed black-and-white films, such as Ilford Delta 3200 and Kodak T-MAX P3200, are marketed with film speeds in excess of their true ISO speed as determined using the ISO testing method. For example, the Ilford product is actually an ISO 1000 film, according to its data sheet. The manufacturers do not indicate that the 3200 number is an ISO rating on their packaging.[58] Kodak and Fuji also marketed E6 films designed for pushing (hence the "P" prefix), such as Ektachrome P800/1600 and Fujichrome P1600, both with a base speed of ISO 400. Marketing anomalies Kodak has defined a "Print Grain Index" (PGI) to characterize film grain (color negative films only), based on perceptual just-noticeable difference of graininess in prints. They also define "granularity", a measurement of grain using an RMS measurement of density fluctuations in uniformly exposed film, measured with a microdensitometer with 48 micrometre aperture.[57] Granularity varies with exposure — underexposed film looks grainier than overexposed film. The size of silver halide grains in the emulsion affects film sensitivity, which is related to granularity because larger grains give film greater sensitivity to light. Fine-grain film, such as film designed for portraiture or copying original camera negatives, is relatively insensitive, or "slow", because it requires brighter light or a longer exposure than a "fast" film. Fast films, used for photographing in low light or capturing high-speed motion, produce comparatively grainy images. Grainy high-speed B&W film negative Film sensitivity and grain Upon exposure, the amount of light energy that reaches the film determines the effect upon the emulsion. If the brightness of the light is multiplied by a factor and the exposure of the film decreased by the same factor by varying the camera's shutter speed and aperture, so that the energy received is the same, the film will be developed to the same density. This rule is called reciprocity. The systems for determining the sensitivity for an emulsion are possible because reciprocity holds. In practice, reciprocity works reasonably well for normal photographic films for the range of exposures between 1/1000 second to 1/2 second. However, this relationship breaks down outside these limits, a phenomenon known as reciprocity failure.[56] Another example occurs where a camera's shutter is miscalibrated and consistently overexposes or underexposes the film; similarly, a light meter may be inaccurate. One may adjust the EI rating accordingly in order to compensate for these defects and consistently produce correctly exposed negatives. For example, a photographer may rate an ISO 400 film at EI 800 and then use push processing to obtain printable negatives in low-light conditions. The film has been exposed at EI 800. Exposure index, or EI, refers to speed rating assigned to a particular film and shooting situation in variance to the film's actual speed. It is used to compensate for equipment calibration inaccuracies or process variables, or to achieve certain effects. The exposure index may simply be called the speed setting, as compared to the speed rating. 
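The push-processing example above (an ISO 400 film exposed at EI 800) amounts to a one-stop difference between the rated speed and the exposure index. A minimal sketch of that arithmetic, with illustrative function names:

import math

def push_in_stops(rated_iso, exposure_index):
    # Positive values mean the film is underexposed by that many stops
    # and needs corresponding push development.
    return math.log2(exposure_index / rated_iso)

print(push_in_stops(400, 800))   # 1.0 stop
print(push_in_stops(400, 1600))  # 2.0 stops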
Exposure index The ISO arithmetic speed has a useful property for photographers without the equipment for taking a metered light reading. Correct exposure will usually be achieved for a frontlighted scene in bright sun if the aperture of the lens is set to f/16 and the shutter speed is the reciprocal of the ISO film speed (e.g. 1/100 second for 100 ISO film). This known as the sunny 16 rule. /1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, etc. f Film speed is used in the Applying film speed Determining speed for color negative film is similar in concept but more complex because it involves separate curves for blue, green, and red. The film is processed according to the film manufacturer's recommendations rather than to a specified contrast. ISO speed for color reversal film is determined from the middle rather than the threshold of the curve; it again involves separate curves for blue, green, and red, and the film is processed according to the film manufacturer's recommendations. This value is then rounded to the nearest standard speed in Table 1 of ISO 6:1993. S = \frac {0.8\;\text{lx⋅s}} {H_\mathrm{m}} Film speed is found from a plot of optical density vs. log of exposure for the film, known as the D–log H curve or Hurter–Driffield curve. There typically are five regions in the curve: the base + fog, the toe, the linear region, the shoulder, and the overexposed region. For black-and-white negative film, the "speed point" m is the point on the curve where density exceeds the base + fog density by 0.1 when the negative is developed so that a point n where the log of exposure is 1.3 units greater than the exposure at point m has a density 0.8 greater than the density at point m. The exposure Hm, in lux-s, is that for point m when the specified contrast condition is satisfied. The ISO arithmetic speed is determined from: ISO 6:1993 method of determining speed for black-and-white film. Determining film speed Speeds shown in bold under APEX, ISO and ASA are values actually assigned in speed standards from the respective agencies; other values are calculated extensions to assigned speeds using the same progressions as for the assigned speeds. APEX Sv values 1 to 10 correspond with logarithmic ASA grades 1° to 10° found in ASA PH2.5-1960. ASA arithmetic speeds from 4 to 5 are taken from ANSI PH2.21-1979 (Table 1, p. 8). ASA arithmetic speeds from 6 to 3200 are taken from ANSI PH2.5-1979 (Table 1, p. 5) and ANSI PH2.27-1979. ISO arithmetic speeds from 4 to 3200 are taken from ISO 5800:1987 (Table "ISO speed scales", p. 4). ISO arithmetic speeds from 6 to 10000 are taken from ISO 12232:1998 (Table 1, p. 9). ISO 12232:1998 does not specify speeds greater than 10000. However, the upper limit for Snoise 10000 is given as 12500, suggesting that ISO may have envisioned a progression of 12500, 25000, 50000, and 100000, similar to that from 1250 to 10000. This is consistent with ASA PH2.12-1961.[41] For digital cameras, Nikon, Canon, Sony, Pentax, and Fujifilm apparently chose to express the greater speeds in an exact power-of-2 progression from the highest previously realized speed (6400) rather than rounding to an extension of the existing progression. Most of the modern 35 mm film SLRs support an automatic film speed range from ISO 25/15° to 5000/38° with DX-coded films, or ISO 6/9° to 6400/39° manually (without utilizing exposure compensation). The film speed range with support for TTL flash is smaller, typically ISO 12/12° to 3200/36° or less. 
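The sunny 16 rule described above can be written as a one-line calculation: at f/16 in bright sun the shutter time is the reciprocal of the ISO speed, and, because exposure scales with t/N², equivalent settings at other apertures follow directly. The helper below is an illustration, not a metering standard.

def sunny16_shutter(iso, f_number=16):
    # Shutter time (s) for a frontlit scene in bright sun.
    # At f/16 this is 1/ISO; other apertures scale with (N/16)^2.
    return (f_number / 16) ** 2 / iso

print(sunny16_shutter(100))      # 0.01   -> 1/100 s at f/16
print(sunny16_shutter(100, 11))  # ~0.0047 -> about 1/211 s at f/11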
The Booster[42] accessory for the Canon Pellix QL (1965) and Canon FT QL (1966) supported film speeds from 25 to 12800 ASA. The film speed dial of the Canon A-1 (1978) supported a speed range from 6 to 12800 ASA (but already called ISO film speeds in the manual).[43] On this camera exposure compensation and extreme film speeds were mutually exclusive. The Leica R8 (1996) and R9 (2002) officially supported film speeds of 8000/40°, 10000/41° and 12800/42° (in the case of the R8) or 12500/42° (in the case of the R9), and utilizing its ±3 EV exposure compensation the range could be extended from ISO 0.8/0° to ISO 100000/51° in half exposure steps.[39][40] Digital camera manufacturers' arithmetic speeds from 12800 to 409600 are from specifications by Nikon (12800, 25600, 51200, 102400 in 2009,[44] 204800 in 2012,[47] 409600 in 2014[49]), Canon (12800, 25600, 51200, 102400 in 2009,[45] 204800 in 2011,[46] 4000000 in 2015[51]), Sony (12800 in 2009,[52] 25600 in 2010,[53] 409600 in 2014[50]), Pentax (12800, 25600, 51200 in 2010,[54] 102400, 204800 in 2014[48]) and Fujifilm (12800 in 2011[55]). Table notes: Table 1. Comparison of various film speed scales APEX Sv (1960–) ISO (1974–) arith./log.° Camera mfrs. (2009–) ASA (1960–1987) arith. DIN (1961–2002) log. GOST (1951–1986) Example of film stock with this nominal speed −2 0.8/0°[39] 0.8 0[40] 1/1° 1 1 (1) 1.2/2° 1.2 2 (1) −1 1.6/3° 1.6 3 1.4 0 3/6° 3 6 2.8 1 6/9° 6 9 5.5 original Kodachrome 8/10° 8 10 (8) Polaroid PolaBlue 10/11° 10 11 (8) Kodachrome 8 mm film 2 12/12° 12 12 11 Gevacolor 8 mm reversal film, later Agfa Dia-Direct 16/13° 16 13 (16) Agfacolor 8 mm reversal film 20/14° 20 14 (16) Adox CMS 20 3 25/15° 25 15 22 old Agfacolor, Kodachrome II and (later) Kodachrome 25, Efke 25 32/16° 32 16 (32) Kodak Panatomic-X 40/17° 40 17 (32) Kodachrome 40 (movie) 4 50/18° 50 18 45 Fuji RVP (Velvia), Ilford Pan F Plus, Kodak Vision2 50D 5201 (movie), AGFA CT18, Efke 50, Polaroid type 55 64/19° 64 19 (65) Kodachrome 64, Ektachrome-X, Polaroid type 64T 80/20° 80 20 (65) Ilford Commercial Ortho, Polaroid type 669 5 100/21° 100 21 90 Kodacolor Gold, Kodak T-Max (TMX), Fujichrome Provia 100F, Efke 100, Fomapan/Arista 100 125/22° 125 22 (130) Ilford FP4+, Kodak Plus-X Pan, Svema Color 125 160/23° 160 23 (130) Fujicolor Pro 160C/S, Kodak High-Speed Ektachrome, Kodak Portra 160NC and 160VC 6 200/24° 200 24 180 Fujicolor Superia 200, Agfa Scala 200x, Fomapan/Arista 200, Wittner Chrome 200D/Agfa Aviphot Chrome 200 PE1 250/25° 250 25 (250) Tasma Foto-250 320/26° 320 26 (250) Kodak Tri-X Pan Professional (TXP) 7 400/27° 400 27 350 Kodak T-Max (TMY), Kodak Tri-X 400, Ilford HP5+, Fujifilm Superia X-tra 400, Fujichrome Provia 400X, Fomapan/Arista 400 500/28° 500 28 (500) Kodak Vision3 500T 5219 (movie) 640/29° 640 29 (500) Polaroid 600 8 800/30° 800 30 700 Fuji Pro 800Z, Fuji Instax 1000/31° 1000 31 (1000) Kodak P3200 TMAX, Ilford Delta 3200 (see Marketing anomalies below) 1250/32° 1250 32 (1000) Kodak Royal-X Panchromatic 9 1600/33° 1600 33 1400 (1440) Fujicolor 1600 2000/34° 2000 34 (2000) 10 3200/36° 3200 36 2800 (2880) Konica 3200, Polaroid type 667, Fujifilm FP-3000B 4000/37° 37 (4000) 11 6400/39° 6400[41] 39 5600 8000/40°[39][40] 10000/41°[39][40] 12 12500/42°[39] 12800[40][42][43][44][45] 12500[41] No ISO speeds greater than 10000 have been assigned officially as of 2013. 
16000/43° 20000/44° Polaroid type 612 13 25000/45° 25600[44][45] 15 100000/51°[39] 102400[44][45] 51[40] Nikon D3s and Canon EOS-1D Mark IV (2009) 125000/52° 16 200000/54° 204800[46][47][48] Canon EOS-1D X (2011), Nikon D4 (2012), Pentax 645Z (2014) 17 400000/57° 409600[49][50] Nikon D4s and Sony α ILCE-7S (2014) 18 800000/60° 1000000/61° 19 1600000/63° 4000000/67°[51] Canon ME20F-SH[51] (2015)

Table 1 caption (continued): "…APEX Sv, ISO (since 1974), digital cameras, ASA (since 1960), DIN (since 1961) and GOST (1951 to 1986)."

ISO 6:1993[16] (first published in 1974) and ISO 2240:2003[15] (first published in July 1982, revised in September 1994, and corrected in October 2003) define scales for speeds of black-and-white negative film and color reversal film, respectively. The determination of ISO speeds with digital still-cameras is described in ISO 12232:2006 (first published in August 1998, revised in April 2006, and corrected in October 2006).

The ISO system defines both an arithmetic and a logarithmic scale.[36] The arithmetic ISO scale corresponds to the arithmetic ASA system, where a doubling of film sensitivity is represented by a doubling of the numerical film speed value. In the logarithmic ISO scale, which corresponds to the DIN scale, adding 3° to the numerical value constitutes a doubling of sensitivity. For example, a film rated ISO 200/24° is twice as sensitive as one rated ISO 100/21°.[36] Commonly, the logarithmic speed is omitted; for example, "ISO 100" denotes "ISO 100/21°",[37] while logarithmic ISO speeds are written as "ISO 21°" as per the standard.

Conversion between current scales

Conversion from arithmetic speed S to logarithmic speed S° is given by[15]

$$S^\circ = 10 \log S + 1$$

and rounding to the nearest integer; the log is base 10. Conversion from logarithmic speed to arithmetic speed is given by[38]

$$S = 10^{\left( {S^\circ - 1} \right)/10}$$

and rounding to the nearest standard arithmetic speed in Table 1 below.
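As a quick check of these conversion formulas, the short sketch below (ours, not from the source) converts between the two scales; the list of standard arithmetic speeds is abridged from Table 1 for illustration.

```python
import math

# Arithmetic <-> logarithmic speed conversion as quoted above:
# S_deg = 10*log10(S) + 1 (rounded to the nearest integer), and
# S = 10^((S_deg - 1)/10) (rounded to the nearest standard arithmetic speed).

STANDARD_ARITHMETIC_SPEEDS = [6, 8, 10, 12, 16, 20, 25, 32, 40, 50, 64, 80,
                              100, 125, 160, 200, 250, 320, 400, 500, 640,
                              800, 1000, 1250, 1600, 2000, 3200]  # abridged

def arithmetic_to_logarithmic(s: float) -> int:
    return round(10 * math.log10(s) + 1)

def logarithmic_to_arithmetic(s_deg: float) -> int:
    s = 10 ** ((s_deg - 1) / 10)
    return min(STANDARD_ARITHMETIC_SPEEDS, key=lambda std: abs(std - s))

print(arithmetic_to_logarithmic(100))  # 21  -> ISO 100/21 degrees
print(arithmetic_to_logarithmic(200))  # 24  -> ISO 200/24 degrees
print(logarithmic_to_arithmetic(27))   # 400
```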
Search all SpringerOpen articles Journal of the Egyptian Mathematical Society Secure Hash Algorithm-2 formed on DNA Dieaa I. Nassr1 Journal of the Egyptian Mathematical Society volume 27, Article number: 34 (2019) Cite this article We present a new version of the Secure Hash Algorithm-2 (SHA-2) formed on artificial sequences of deoxyribonucleic acid (DNA). This article is the first attempt to present the implementation of SHA-2 using DNA data processing. We called the new version DNSHA-2. We present new operations on an artificial DNA sequence, such as (1) \(\bar {R}^{k}(\alpha)\) and \(\bar {L}^{k}(\alpha)\) to mimic the right and left shift by k bits, respectively; (2) \(\bar {S}^{k}(\alpha)\) to mimic the right rotation by k bits; and (3) DNA-nucleotide addition (mod 264) to mimic word-wise addition (mod 264). We also show, in particular, how to carry out the different steps of SHA-512 on an artificial DNA sequence. At the same time, the proposed nucleotide operations can be used to mimic any hash algorithm of its bitwise operations similar to bitwise operations specified in SHA-2. The proposed hash has the following features: (1) it can be applied to all data, such as text, video, and image; (2) it has the same security level of SHA-2; and (3) it can be performed in a biological environment or on DNA computers. A hash function is a function that maps a binary data of arbitrary size to a fixed-size string. For input data (often called message), the output of the hash function is called the hash value or digest of the message. Several applications use hash functions in hash tables to reduce the time cost for finding a data record given its search key. Typically, the domain size of a hash function is greater than its range. Therefore, there must be different massages (inputs) producing the same digest (output), and this is called a collision case. A hash function adapted to cryptographic applications has certain properties, including its resistance to collision, pre-image and second pre-image attacks [1–4], and to be a one-way function (infeasible to reverse). In this case, the hash function is called a secure hash function and it is used for providing message authentication, data integrity, password verification, and many other information security applications [5]. Secure Hash Algorithm-2 (SHA-2) is a set of secure hash functions standardized by NIST as part of the Secure Hash Standard in FIPS 180-4 [6]. Although there is a new version of the standard called SHA-3 [7], NIST does not currently intend to remove SHA-2 from the revised Secure Hash Standard as no significant attack on SHA-2 has been demonstrated. Rather, SHA-3 can be used in the information security applications that need to improve the robustness of NIST's overall hash algorithm toolkit. There are six hash functions belonging to SHA-2, and these hash functions have names corresponding to their digest length: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. These hash functions have very similar structures unlike only in the number of rounds, additive constants, shift amounts, and digest size. The aim of this paper is to introduce a new version of SHA-2 in DNA model considering the security properties of SHA-2. To the best of our knowledge, there is no article that discusses the implementation of SHA-2 using DNA data processing. We are therefore interested in studying how to implement SHA-2 on the DNA environment. 
Since the hash functions belonging to SHA-2 have almost the same basic processes, we focus on the construction of SHA-512 to be processed in a DNA environment (DNSHA-512) and the other hash functions are similar. The construction of DNSHA-512 contains new imitation of the operations: Right (and left) shift by k bits Right rotation by k bits Addition modulo 264 In Table 1, we give the list of abbreviations used in this paper. Table 1 List of abbreviations The paper is organized as follows. In the "DNA" section, we present some basic background of DNA required in this paper. A brief explanation of SHA-512 is given in the SHA-512" section. In the "DNSHA-2" section, we give the nucleotide operations that mimic the bitwise operations used in SHA-2 and the algorithm of DNSHA-512 of the proposed implementation of SHA-512 on an artificial DNA sequence. The "Implementation" section contains the implementation of DNSHA-512. In the "Conclusion" section, we include the conclusion. Deoxyribonucleic acid (DNA) is a huge molecule; most of them exist in the nucleus of the cells of the organism and in many viruses and contain a genetic code used during the reproduction and the evolution of these organisms. Most of the DNA molecules consist of two chains of biological polymers wrapped around a double strand. Each strand of DNA is made up of a long sequence of nucleotides. These nucleotides are for storing genetic information. They get the information needed to build proteins, DNA, or RNA. There are four types of nucleotides: adenine A, cytosine C, guanine G, or thymine T. Their names are usually abbreviated with the first letter only. A long chain (sequence) of nucleotides is written as a sequence of letters A, C,G, and T. This sequence (of nucleotides) forms the genetic code of cells. A sequence of nucleotides is connected together using a vertebra composed of phosphate and a sugar (deoxyribose). Nucleotides are sometimes called bases. Some results [8, 9] pointed out that it is possible to build and generate a chain of artificial nucleotides (DNA sequences) and create complex molecular machines. Because of the progress in the discovery of many properties of DNA [10, 11], there is a new data storage technique that depends on the DNA molecule. Several methods have been given in [12–19] for storing data in DNA sequences in which 1 g of DNA can be used to store about 106 TB of data; thus, a small number of grams of DNA is enough to store all the data of our world for hundreds of years. Many results [20–24] have developed a new data processing in DNA environment known as DNA computing. Adelman [20] has shown that by biochemical DNA operations, molecules could be used to carry out the computation. This author exploited the biochemical operations of DNA to obtain a solution for the Hamiltonian path problem. Computations are carried out in efficient parallel operations. Additionally, Lipton [24] has offered an encoding schema, exploiting operations of DNA molecules, to obtain a solution for the satisfiability problem with a small number of variables. A generalization of Lipton's schema has been given in [22]. Boneh et. al. [25] has shown that the data encryption standard (DES) could be broken by using the concept of DNA computation. He has presented a molecular program to break DES. 
Now, the study of the features of DNA has several objectives not only in the gene sequences but also in carrying out computations and in the field of data protection, where a private data can be written in a secret location in a DNA molecule to protect this data for a long time from unauthorized persons [26–30]. In the literatures [12–17], encoding data in DNA sequence has been classified by two ways [18, 19]: The binary data is transformed to a DNA sequence. For example [31–33], the binary digits "00," "01," "10," and "11" are transformed into the nucleotides A, C,G, and T, respectively. Each specified number of bits, e.g., byte, is converted into a fixed number of nucleotides using a given encoding table, see [34]. SHA-512 This section gives a brief description of the hash algorithm SHA-512 [6]. It is an iterated hash function that pads and parses the input message into n 1024-bit message blocks M(j) and gets the output hash value of size 512 bits. The 512-bit hash value is generally computed, using a compression function f : $$\begin{array}{*{20}l} H^{(0)}&=IV, \text{IV is an initial hash value (512-bit block)}\\ H^{(j)}&=f(H^{(j-1)},M^{(j)}) ~\text{for}~ 1\leq j\leq n. \end{array} $$ The final 512-bit block Hn is the hash value. The hash function SHA-512 is described in Algorithm 1. We use the notation in Table 1, where all operators perform on 64-bit words. The initial hash value H(0) is given in Table 2. We parse H(0) into eight 64-bit blocks \(H_{1}^{(0)}, H_{2}^{(0)}, \ldots H_{8}^{(0)}.\) The first 64 bits of H(0) are denoted \(H_{1}^{(0)},\) the next 64 bits are \(H_{2}^{(0)}\), and so on up to \(H_{8}^{(0)}.\) Table 2 The initial hash H(0) Suppose that the input message is of m bits. The input message is prepared as follows: The input message M is padded in the usual method: add the bit "1" to the end of M, and after that add k zero bits, where k is the minimal solution (non-negative) to the equation m+1+k≡896 (mod 1024). Next, to this addition, append 128-bit block that represents the number m written in binary. For example, the binary data of the message "BOB" are "01000010 01001111 01000010." This data has 24 bits. By joining the bit "1" to the end of this message, we get "01000010 01001111 01000010 1." Solving the equation 24+1+k≡896 (mod 1024), we have k=871. Therefore, preparing the message, we get: $$01000010 01001111 01000010\ 1\ \underbrace{0 0 \ldots 0}_{\text{871 zeros}}\ \underbrace{000\ldots 11000}_{\text{24 is written in binary (128-bit)}}.$$ The number of bits of the padded message becomes a multiple of 1024. Therefore, the padded message is parsed into n 1024-bit blocks' M(1),M(2),…,M(n). The block i is parsed into 16 words, where each word has 64 bits. The words of block i are given by \(M_{0}^{(i)}, M_{1}^{(i)}, \ldots M_{15}^{(i)}.\) Note that the first 64 bits of block i is stored in the word \(M_{0}^{(i)},\) where the leftmost bit is stored in the most significant bit position. By the same way, the word \(M_{1}^{(i)}\) is the second 64 bits, and so on up to \(M_{15}^{(i)}.\) For example, the message "BOB" after padding is one 1024-bit block, and the words \(M_{j}^{(1)}, j=0,1,\ldots,15\) are given as: The algorithm of SHA-512 is given in Algorithm 1. 
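The padding step just described can be illustrated at the bit level. The following sketch is ours (not the paper's implementation) and simply reproduces the "BOB" example above.

```python
# Minimal sketch of SHA-512 message padding: append a single '1' bit, then k
# zero bits with m + 1 + k = 896 (mod 1024), then the message length m as a
# 128-bit integer. The message is represented as a string of '0'/'1' characters.

def pad_sha512_bits(bits: str) -> str:
    m = len(bits)
    k = (896 - (m + 1)) % 1024
    return bits + "1" + "0" * k + format(m, "0128b")

message = "010000100100111101000010"          # "BOB" (24 bits)
padded = pad_sha512_bits(message)
print((896 - (len(message) + 1)) % 1024)       # 871 zero bits, as in the example
print(len(padded))                             # 1024 -> one message block
```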
Now, we define the logical function used in Algorithm 1: $$\begin{array}{*{20}l} CH(r_{1},r_{2},r_{3}) = (r_{1}\wedge r_{2}) \oplus (\neg r_{1}\wedge r_{3}) \end{array} $$ $$\begin{array}{*{20}l} MAJ(r_{1},r_{2},r_{3}) = (r_{1}\wedge r_{2}) \oplus (r_{1}\wedge r_{3}) \oplus (r_{2}\wedge r_{3}) \end{array} $$ $$\begin{array}{*{20}l} \Sigma_{0}(r_{1})= S^{28}(r_{1})\oplus S^{34}(r_{1}) \oplus S^{39}(r_{1}) \end{array} $$ The following algorithm, is to compute Wj. DNSHA-2 In this section, we propose modern operations on nucleotides that mimic the bitwise operations used in SHA-2 and can therefore be used to mimic all members of SHA-2, i.e., to give a new version of SHA-2 called DNSHA-2. This section contains seven subsections. In the "DNA coding" section, we give how to represent data in artificial DNA sequences. In the "Basic DNA-nucleotide operations" section, we present the nucleotide operations that mimic the bitwise operations (NOT, AND, OR, XOR). In the "DNA right and left shift" and "DNA right rotation" sections, we show how to implement the nucleotide operations \(\bar {R}^{k}, \bar {L}^{k}\), and \(\bar {S}^{k}\) which mimic the bitwise operations (shown in Table 1), Rk,Lk, and Sk, respectively. The nucleotide operation that mimic the word-wise addition (mod 264) is given in the "DNA-nucleotide addition (mod 264)" section. In the "DNA initialization and preprocessing" section, we show how initialization and preprocessing operations, especially in SHA-512, are imitated in DNA computing. In the following, sometimes, we refer to any choice of the nucleotide bases (A, C, G, or T) by the symbols xi,yi, and zi (or \(x_{i}^{\prime }, y_{i}^{\prime }\), or \(z_{i}^{\prime }\)). DNA coding In classical computing, data is stored in the binary form (sequence of bytes). There are results [31–33] which encode the binary data in a DNA sequence, where the two binary digits "00," "01," "10," and "11" are transformed into the nucleotides A, C, G, and T, respectively. For example, the binary string "01001110" is transformed into the nucleotides "CATG." We conclude this by defining the transformation λ: $$\lambda(e_{i+1}e_{i})=\left\{\begin{array}{ll} A, & \text{if}\ e_{i+1}e_{i}=00; \\ C, & \text{if}\ e_{i+1}e_{i}=01; \\ G, & \text{if}\ e_{i+1}e_{i}=10; \\ T, & \text{if}\ e_{i+1}e_{i}=11. \end{array} \right. $$ Algorithm 3 describes the representation of a data in an artificial DNA sequence. Since the byte (8-bit) is the commonly used data storage unit, we suppose in Algorithm 3 (also, in this article) that the binary data is of an even number of bits. We give the following example to illustrate steps of Algorithm 3. Let e=(100111)2 be a binary data. The DNA nucleotides of e gives the artificial DNA sequence α=GCT since: At i=0,x0=λ(11)=T, At i=1,x1=λ(01)=C, At i=2,x2=λ(10)=G. Algorithm 4 shows how to decode binary data from an artificial DNA sequence. Note that in the following algorithm we use λ−1 to give the inverse transformation of λ. Let α=GCT be an artificial DNA sequence. The binary data of α gives e=(100111)2 since: At i=0,e1e0=λ−1(T)=11, At i=1,e3e2=λ−1(C)=01, At i=2,e5e4=λ−1(G)=10. Basic DNA-nucleotide operations In literatures [12–17], the nucleotide operations that imitate bitwise operations (NOT, AND, OR, XOR) are defined. The symbols (¬,∧,∨,⊕) are commonly used to express the bitwise operations (NOT, AND, OR, XOR), respectively. 
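The encoding of Algorithm 3, the decoding of Algorithm 4, and the nucleotide operations of Table 3 can be sketched in a few lines. The code below is an illustrative reconstruction rather than the author's implementation; it assumes the binary data is written most significant bit first, as in the examples above.

```python
# Sketch of DNA coding (00->A, 01->C, 10->G, 11->T) and of nucleotide
# operations obtained by applying the corresponding bitwise operator.

ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {v: k for k, v in ENCODE.items()}

def to_dna(bits: str) -> str:
    """Encode a binary string (even length, most significant bit first)."""
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def to_bits(dna: str) -> str:
    """Decode an artificial DNA sequence back to a binary string."""
    return "".join(DECODE[x] for x in dna)

def nucleotide_op(a: str, b: str, op) -> str:
    """Apply a bitwise operator pairwise to two equal-length DNA sequences."""
    width = 2 * len(a)
    result = op(int(to_bits(a), 2), int(to_bits(b), 2))
    return to_dna(format(result, f"0{width}b"))

print(to_dna("100111"))                                  # GCT, as in Example 1
print(to_bits("GCT"))                                    # 100111, as in Example 2
print(nucleotide_op("GCT", "CCC", lambda x, y: x ^ y))   # TAG: 100111 XOR 010101
```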
Throughout this paper, the symbols \((\bar {\neg },\bar {\wedge },\bar {\vee },\bar {\oplus })\) are used to give the nucleotide operations that imitate the bitwise operations (NOT, AND, OR, XOR), respectively. Note that we are putting a bar sign over most of the DNA operations or above the DNA terms to differ from bitwise operations. The nucleotide operation \(\bar {\neg }\) is defined as: $$\begin{array}{*{20}l} \bar{\neg}A&= T \\ \bar{\neg}C&= G \end{array} $$ In literatures [12–17], the nucleotide operations between two nucleotides x and y are defined as in Table 3 Table 3 Nucleotide operations \(\bar {\wedge }, \bar {\vee }\), and \(\bar {\oplus }\) DNA right and left shift In this subsection, we propose two new operations on DNA sequence that used to mimic the right and left shift by k bits. Let α=xm−1xm−2…x0 be a DNA sequence and e=(e2m−1e2m−2…e0)2 be the binary data encoded in α. We have to mimic the operation Rk(e) (right shift by k<2m bits) in SHA-2 to be \(\bar {R}^{k}(\alpha)\) in DNSHA-2. In this regard, we take into consideration whether k is an even number or odd. In case of k is an even number, the operation Rk(e) can be imitated in α by deleting k/2 nucleotides from right and then appending k/2 nucleotides A from left. Therefore, $$\begin{array}{*{20}l} \bar{R}^{k}(\alpha)= \underbrace{A A \ldots A}_{\frac{k}{2} nucleotides} x_{m-1} \ldots x_{k/2} \end{array} $$ For example, if α=TAGC, e=(11001001)2, and k=4, then $$\begin{array}{*{20}l} R^{4}(e) &=00001100 \end{array} $$ $$\begin{array}{*{20}l} \bar{R}^{4}(\alpha)&= AATA \end{array} $$ In case of k is an odd number, the operation \(\bar {R}^{k}(\alpha)\) can be computed in two steps. The first step is calculating \(\bar {R}^{k-1}(\alpha)\) since k−1 is even. The second step is calculating the right shift by one bit in DNA sequence where we denote to this operation as RSOB(α) and define it in Algorithm 5. Let α=xm−1xm−2…x0 be an artificial DNA sequence and λ−1(xi)=e2i+1e2i. Then, RSOB(α) is ym−1ym−2…y0, where λ−1(yi)=e2i+2e2i+1 for i=0,1,…,m−2 and λ−1(ym−1)=0e2m−1. To illustrate how to perform this step, we give the following notes: If β is a DNA sequence of m nucleotides G, then \(\alpha \bar {\wedge } \beta \) yields nucleotides zm−1zm−2…z0, where λ−1(zi)=e2i+10 for i=0,1,…m−1, i.e., zi is either nucleotide A or G. If α′=Axm−1xm−2…x1 and β′ is a DNA sequence of m nucleotides C, then \(\alpha ^{\prime } \bar {\wedge } \beta ^{\prime }\) yields nucleotides \( A z^{\prime }_{m-1} \ldots z^{\prime }_{1},\) where \(\lambda ^{-1}\left (z^{\prime }_{i}\right) = 0 e_{2i}\) for i=1,2,…m−1, i.e., \(z^{\prime }_{i}\) is either nucleotide A or C. Therefore, we need to define the new nucleotide operation \(\bar {\boxtimes }\) as follows: If λ−1(zi)=e2i+10 and λ−1(zi+1′)=0e2i+2, then \( \lambda ^{-1}\left (z_{i} \bar {\boxtimes } z'_{i+1}\right) = e_{2i+2} e_{2i+1}.\) We define this nucleotide operation in Table 4. Table 4 The nucleotide operation \(\bar {\boxtimes }\) The following example illustrates steps of Algorithm 5. We use the same symbols in the algorithm. Let α=TAC be an artificial DNA sequence encoding the binary data e=(110001)2. We have β1=CCC,β2=GGG, and β3=ATA. Then, \(\beta _{4}=\beta _{1} \bar {\wedge } \beta _{3} =ACA\) and \(\beta _{5}= \alpha \bar {\wedge } \beta _{2} = GAA.\) The result is given by \(\beta _{4} \bar {\boxplus } \beta _{5} = CGA\) encoding the binary data (011000)2. We give the operation \(\bar {R}^{k}(\alpha)\) in Algorithm 6. 
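For reference, a functional sketch of \(\bar{R}^{k}\) is given below. It is not the nucleotide-level procedure of Algorithms 5 and 6; it simply decodes, shifts, and re-encodes, which yields the same output sequences for both even and odd k.

```python
# Functional sketch of the DNA right shift: decode, shift right by k bits,
# and re-encode while keeping the original sequence length m (2m bits).

ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
DEC = {v: k for k, v in ENC.items()}

def dna_right_shift(alpha: str, k: int) -> str:
    m = len(alpha)                                   # number of nucleotides
    value = int("".join(DEC[x] for x in alpha), 2)   # decode to an integer
    bits = format(value >> k, f"0{2 * m}b")          # shift, keep width 2m bits
    return "".join(ENC[bits[i:i + 2]] for i in range(0, 2 * m, 2))

print(dna_right_shift("TAGC", 4))   # AATA, matching the even-k example above
print(dna_right_shift("TAC", 1))    # CGA, matching the RSOB example (Example 5)
```

The left shift \(\bar{L}^{k}\) and the rotation \(\bar{S}^{k}\) of the following subsections can be sketched in the same way, using a left shift masked to 2m bits.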
Similarly, we have to mimic the operation Lk(e) (left shift by k<2m bits) in SHA-2 to be \(\bar {L}^{k}(\alpha)\) in DNSHA-2. If k is an even number, the operation Lk(e) can be imitated in α by deleting k/2 nucleotides from the left and then appending k/2 nucleotides A from the right. Therefore, $$\begin{array}{*{20}l} \bar{L}^{k}(\alpha)= x_{k/2-1}\ldots x_{0} \underbrace{A A \ldots A}_{\frac{k}{2}\ nucleotides} \end{array} $$ For example, for the same α=TAGC, e=(11001001)2, and k=4, $$\begin{array}{*{20}l} L^{4}(e) &=10010000 \end{array} $$ $$\begin{array}{*{20}l} \bar{L}^{4}(\alpha)&= GCAA \end{array} $$ If k is an odd number, \(\bar {L}^{k}(\alpha)\) can be computed in two steps. The first step is calculating \(\bar {L}^{k-1}(\alpha)\) since k−1 is even. The second step is calculating the left shift by one bit in the DNA sequence, where we denote this operation as LSOB(α) and define it in Algorithm 7. Let α=xm−1xm−2…x0 be an artificial DNA sequence and λ−1(xi)=e2i+1e2i. Then, LSOB(α) is ym−1ym−2…y0, where λ−1(yi)=e2ie2i−1 for i=1,2,…,m−1 and λ−1(y0)=e00. We use the same symbols in the algorithm. Let α=GTC be an artificial DNA sequence encoding the binary data e=(101101)2. We have β1=CCC,β2=GGG, and β3=TCA. Then, \(\beta _{4}=\beta _{2} \bar {\wedge } \beta _{3} =GAA\) and \(\beta _{5}= \alpha \bar {\wedge } \beta _{1} = ACC.\) The result is given by \(\beta _{4} \bar {\boxplus } \beta _{5} = CGG\), encoding the binary data (011010)2. We give the operation \(\bar {L}^{k}(\alpha)\) in Algorithm 8.
DNA right rotation
In this subsection, we introduce a new operation on a DNA sequence that is used to mimic the right rotation by k bits. In Algorithm 9, we give the operation \(\bar {S}^{k}(\alpha)\) on a DNA sequence α to imitate the operation Sk(e) (right rotation by k bits), where e is the binary data encoded in α. Let α=xm−1xm−2…x0 be a DNA sequence and e=(e2m−1e2m−2…e0)2 be the binary data encoded in α. To compute \(\bar {S}^{k}(\alpha),\) we first compute \(\bar {R}^{k}(\alpha)\) using Algorithm 6 and then compute \(\bar {L}^{2m-k}(\alpha)\) using Algorithm 8. Therefore, \(\bar {S}^{k}(\alpha)= \bar {R}^{k}(\alpha) \bar {\vee } \bar {L}^{2m-k}(\alpha).\) We use the same symbols in the algorithm. Let α=AGT be an artificial DNA sequence encoding the binary data e=(001011)2 and k=4. We have \(\beta _{1}=\bar {R}^{4}(\alpha)= AAA,\) and \(\beta _{2}=\bar {L}^{2}(\alpha)=GTA.\) The result is given by \(\beta _{1} \bar {\vee } \beta _{2} = GTA\), encoding the binary data (101100)2.
DNA-nucleotide addition (mod 2^64)
In this subsection, we mimic word-wise addition (mod 2^64). We use the symbol \(\boxplus \) to express nucleotide addition. In Table 5, the addition of two nucleotides x and y takes the form: $$(z,\epsilon)= x \boxplus y$$ where z is the addition of the two nucleotides x and y, and ε is called the carry nucleotide. Table 5 Nucleotide operations \(\boxplus \) In Algorithm 10, we mimic the binary addition (mod 2^64). Note that a binary sequence of 64 bits can be encoded in a DNA sequence of 32 nucleotides. Therefore, the inputs of Algorithm 10 are two DNA sequences, each of 32 nucleotides. We use the symbol \(\boxplus \) between two DNA sequences, each of 32 nucleotides, to express the nucleotide addition (mod 2^64) given in Algorithm 10. Let $$\begin{array}{*{20}l} \alpha_{1} &= TCTT TTCA GTAC AATT TGCA TAAC GTGT GGAA,\\ \alpha_{2} &= TGAT AGCT ATTC GATT TACT AAGC ATAT GTGA \end{array} $$ be inputs for Algorithm 10. The following example illustrates how to compute \(\alpha _{1} \boxplus \alpha _{2}\), i.e., the steps of Algorithm 10.
We use the same symbols in the algorithm. We have x0=A, y0=A, z0=A, and ε=A. Also, we have the following: At i=1, x1=A, x=A, εx=A, y1=G, z1=G, εy=A, ε=A. At i=2, x2=G, x=G, εx=A, y2=T, z2=C, εy=C, ε=C. At i=3, x3=G, x=T, εx=A, y3=G, z3=C, εy=C, ε=C. At i=4, x4=T, x=A, εx=C, y4=T, z4=T, εy=A, ε=C. At i=5, x5=G, x=T, εx=A, y5=A, z5=T, εy=A, ε=A. At i=6, x6=T, x=T, εx=A, y5=T, z5=G, εy=C, ε=C. At i=8, x8=C, x=C, εx=A, y8=C, z8=G, εy=A, ε=A. At i=10, x10=A, x=A, εx=A, y10=A, z10=A, εy=A, ε=A. At i=11, x11=T, x=T, εx=A, y11=A, z11=T, εy=A, ε=A. At i=12, x12=A, x=A, εx=A, y12=T, z12=T, εy=A, ε=A. At i=13, x13=C, x=C, εx=A, y13=C, z13=G, εy=A, ε=A. At i=14, x14=G, x=G, εx=A, y14=A, z14=G, εy=A, ε=A. At i=15, x15=T, x=T, εx=A, y15=T, z15=G, εy=C, ε=C. At i=16, x16=T, x=A, εx=C, y16=T, z16=T, εy=A, ε=C. At i=18, x18=A, x=C, εx=A, y18=A, z18=C, εy=A, ε=A. At i=19, x19=A, x=A, εx=A, y19=G, z19=G, εy=A, ε=A. At i=23, x23=G, x=T, εx=A, y23=A, z23=T, εy=A, ε=A. At i=26, x26=T, x=T, εx=A, y26=G, z26=C, εy=C, ε=C. At i=27, x27=T, x=A, εx=C, y27=A, z27=A, εy=A, ε=C. At i=30, x30=C, x=G, εx=A, y30=G, z30=A, εy=C, ε=C. Thus, the result is the DNA sequence: $$TAAT ACGT TGTG GCTT GGGT TAGG TGTT CCGA.$$ DNA initialization and preprocessing Since the initialization and preprocessing operations in the hash functions belonging to SHA-2 are almost similar, but differ only in initial values, we will focus on these operations for SHA-512 to be imitated in DNA computing. We give DNSHA-512 as the member of DNSHA-2 that mimics SHA-512 formed on an artificial DNA sequence. The initial hash value H(0) is encoded in the DNA sequence \(\bar {H}^{(0)}\) as in Table 6. Table 6 The DNA sequence \(\bar {H}^{(0)}\) In this paper, we suppose that a binary data encoded in a DNA sequence is of an even number of bits. This is because, in the usual way, binary data are stored in some number of bytes (8-bit unit). In the following, we need to mimic the beginning computation in SHA-512 to be done similarly in DNSHA-512: Pad the DNA sequence (supposed to be hashed) as follows: Suppose the length of the DNA sequence is m nucleotides. We append the nucleotide G to the end of the sequence, and after that k nucleotides of type A, where k is the minimal solution (non-negative) to the relation m+2+k≡448 (mod 512). Next, to this append, we add a DNA sequence of 64 nucleotides encoded the binary data of the value of 2m. We have the length of the padded DNA sequence which is a multiple of 512 nucleotides. We parse the DNA sequence into n 512-nucleotide blocks' \(\bar {M}^{(1)}, \bar {M}^{(2)},\) …,\(\bar {M}^{(n)}.\) The first 32 nucleotides of nucleotide block i are denoted \(\bar {M}_{0}^{(i)}\), the next 32 nucleotides are \(\bar {M}_{1}^{(i)}\), and so on up to \(\bar {M}_{15}^{(i)}\). The nucleotide block i\(\bar {M}^{(i)}\) (of 512 nucleotides) in DNSHA-512 has to imitate the 1024-bit block M(i) in SHA-512. Therefore, the 32 nucleotides of \(\bar {M}_{j}^{(i)}\) have to be the DNA sequence that encodes \(M_{j}^{(i)}.\) To show how to prepare the DNA sequence to be hashed, we give Example 7. The binary data of the message "BOB" are "01000010 01001111 01000010." This binary data is encoded in the DNA sequence "CAAGCATTCAAG" with m=12. By appending the nucleotide G to the end of this sequence, we get "CAAGCATTCAAG G." Solving the equation 12+2+k≡448 (mod 512), we have k=434. 
Therefore, preparing the DNA sequence, we get: $$\begin{array}{*{20}l}{} CAAGCATTCAAG\ G\ \underbrace{A A \ldots A}_{\text{434 nucleotides}}\ \underbrace{AAAAAAAAAAAAAAAAAAAAAAAAAAAAACGA}_{\text{64 nucleotides encode the binary of 24} } \end{array} $$ The 32 nucleotides of \(\bar {M}_{j}^{(1)}, j=0,1,\ldots, 15\) are given as: DNSHA-512 We give Algorithm 11 for DNSHA-512 that mimics Algorithm 1. Now, we define functions used in Algorithm 11 (DNA functions): $$\begin{array}{*{20}l} DNACH(r_{1},r_{2},r_{3})= (r_{1}\bar{\wedge}r_{2}) \bar{\oplus} (\bar{\neg} r_{1} \bar{\wedge} r_{3}) \end{array} $$ $$\begin{array}{*{20}l} DNAMAJ(r_{1},r_{2},r_{3})=(r_{1}\bar{\wedge}r_{2}) \bar{\oplus} (r_{1} \bar{\wedge} r_{3}) \bar{\oplus} (r_{2} \bar{\wedge} r_{3}) \end{array} $$ $$\begin{array}{*{20}l} \bar{\Sigma}_{0}(\alpha) = \bar{S}^{28}(\alpha) \bar{\oplus} \bar{S}^{34}(\alpha) \bar{\oplus} \bar{S}^{39}(\alpha) \end{array} $$ Now, we give the algorithm needed to compute \(\bar {W}_{j}.\) This section, presents an implementation of DNSHA-512. Typically, all members of SHA-2 can similarly be implemented on an artificial DNA sequence. In Table 7, we consider some metrics to evaluate DNSHA-512 compared to SHA-512. Table 7 Evaluation metrics for DNSHA-512 and SHA-512 We made a computer program that simulates each step of DNSHA-512. Then, we apply the program to hash two types of data: text and image. The text used for the hash is "BOB." As previously stated in Example 7, the binary data for this message is encoded in the DNA sequence "CAAGCATTCAAG." After padding the DNA sequence, we get: $${}CAAGCATTCAAG \ G \ \underbrace{A A \ldots A}_{\text{434 nucleotides}} \ \underbrace{AAAAAAAAAAAAAAAAAAAAAAAAAAAAACGA}_{\text{64 nucleotides encode the binary of 24}}$$ The hash of this message using DNSHA-512 is given by the 32 nucleotides of \(\bar {H}_{1}^{(1)}, \bar {H}_{2}^{(1)}, \ldots, \bar {H}_{8}^{(1)}\) as follows: 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 \(\bar {H}_{1}^{(1)}\) T T T C A G G A A T A C C A A A A C A C G C G A C G G T T C C A \(\bar {H}_{2}^{(1)}\) G A T G G T T C G T C T C A A A C C C A G A C T G C C C C C A G \(\bar {H}_{3}^{(1)}\) G C T A G T T C A T C T G G A C A G T C C G C T C T C T C A G G \(\bar {H}_{4}^{(1)}\) G G T G G C C A C G G T C C C G A C C C A T A G A T G A C A G G \(\bar {H}_{5}^{(1)}\) G C T A C A T T A C C G T C A T C T C G G T T C C G T T T C C A \(\bar {H}_{6}^{(1)}\) A C G A C C A G A G T T G A A G G T A T C A T T T A C T C C T C \(\bar {H}_{7}^{(1)}\) T T G A C G C T C C T A G A T G A C T T T G A C T G C A C G C T \(\bar {H}_{8}^{(1)}\) A C T T T A G A A A A G T C T A T T G A A G A C T C G T A T G C The corresponding hash of this message using SHA-512 is given by 64-bit words of \(H_{1}^{(1)}, H_{2}^{(1)}, \ldots, H_{8}^{(1)}\) as follows: \(H_{1}^{(1)}\) fd28314011986bd4 \(H_{2}^{(1)}\) 8ebdb74054879552 \(H_{2}^{(1)}\) 9cbd37a12d67774a \(H_{4}^{(1)}\) ae946b561532384a \(H_{5}^{(1)}\) 9c4f16d376bd6fd4 \(H_{6}^{(1)}\) 18522f82b34fc75d \(H_{7}^{(1)}\) f8675c8e1fe1e467 \(H_{8}^{(1)}\) 1fc802dcf821db39 The image used for the hash is the lake image declared in Fig. 1. The image used for the hash This image has 4,200,848 bits. After padding, the binary data of this image has 4103 message blocks (1024-bit). 
The hash of this image using DNSHA-512 is given by the 32 nucleotides of \(\bar {H}_{1}^{(4103)}, \bar {H}_{2}^{(4103)}, \ldots, \bar {H}_{8}^{(4103)}\) as follows: \(\bar {H}_{1}^{(4103)}\) A A G G T G T C C G C C C T T C T G C G G C T T T C G A T G C G \(\bar {H}_{2}^{(4103)}\) T C T A G T T C G T C A C C C C G G T T T A T G C C A C T G C T \(\bar {H}_{3}^{(4103)}\) G G T C T A A C G C T T G C A C G G A G G G C G A A T C A C C T \(\bar {H}_{4}^{(4103)}\) G G T T G A A T C G C T G C G C T C C G A T G T C A C C A G G A \(\bar {H}_{5}^{(4103)}\) A C C A T C A A T T A T A G A T G T T C C A C T C C G A G G A G \(\bar {H}_{6}^{(4103)}\) C A T A C A T T C A T G T C A T C A T C T G C G G C T C A T G C \(\bar {H}_{7}^{(4103)}\) A A G G T T G C T T A G C T G A G A A C A A G T T G G G A C G C \(\bar {H}_{8}^{(4103)}\) A T C T C G A A A A G T A G T T G C G G T T A A C G A T G A G T The corresponding hash of this image using SHA-512 is given by 64-bit words of \(H_{1}^{(4103)}, H_{2}^{(4103)}, \ldots, H_{8}^{(4103)}\) as follows: \(H_{1}^{(4103)}\) 0aed657de69fd8e6 \(H_{2}^{(4103)}\) dcbdb455afce51e7 \(H_{2}^{(4103)}\) adc19f91a2a60d17 \(H_{4}^{(4103)}\) af836799d63b4528 \(H_{5}^{(4103)}\) 14d0f323bd4758a2 \(H_{6}^{(4103)}\) 4c4f4ed34de69d39 \(H_{7}^{(4103)}\) 0af9f278810bea19 \(H_{8}^{(4103)}\) 37600b2f9af0638b We have presented the implementation of SHA-2 using DNA data processing. To the best of our knowledge, this result is the first attempt to model a standard hash function using DNA data processing. We have shown how to encode binary data into a DNA sequence, and we have given nucleotide operations that mimic the bitwise operations used in SHA-2. In particular, we have presented the DNA operations \(\bar {R}^{k}(\alpha), \bar {L}^{k}(\alpha),\) and \(\bar {S}^{k}(\alpha)\) that used to mimic the bitwise operations Rk(e),Lk(e), and Sk(e), where e (binary data) is encoded in the the DNA sequence α. Therefore, this work can be used to mimic any hash algorithm of its bitwise operations limited to bitwise operations specified in SHA-2. Similarly, the nucleotide operations proposed in this result can be exploited to lead to a preliminary result to perform SHA-3 on DNA sequences. Aoki, K., Guo, J., Matusiewicz, K., Sasaki, Y., Wang, L.: Preimages for step-reduced SHA-2. In: Advances in Cryptology - ASIACRYPT 2009, 15th International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan, December 6-10, 2009. Proceedings, Vol. 5912 of Lecture Notes in Computer Science, pp. 578–597. Springer (2009). https://doi.org/10.1007/978-3-642-10366-7_34. Indesteege, S., Mendel, F., Preneel, B., Rechberger, C.: Collisions and other non-random properties for step-reduced SHA-256. In: Selected Areas in Cryptography, pp. 276–293. Springer (2009). https://doi.org/10.1007/978-3-642-04159-4_18. Kelsey, J., Kohno, T.: Herding hash functions and the nostradamus attack. In: Advances in Cryptology - EUROCRYPT 2006, pp. 183–200. Springer (2006). https://doi.org/10.1007/11761679_12. Sanadhya, S., Sarkar, P.: New collision attacks against up to 24-step SHA-2. In: Progress in Cryptology-INDOCRYPT 2008, pp. 91–103. Springer (2008). https://doi.org/10.1007/978-3-540-89754-5_8. Menezes, A. J., van Oorschot, P. C., Vanstone, S. A.: Handbook of Applied Cryptography, CRC Press, Inc., USA (1996). N.I. of Standards, Technology, FIPS PUB 180-4: Secure Hash Standard, pub-NIST (2012). http://csrc.nist.gov/publications/fips/fips180-4/fips-180-4.pdf. N.I. 
of Standards, Technology, SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions: FiPS PUB 202, pub-NIST (2015). https://books.google.com.eg/books?id=hCwatAEACAAJ. Friedman, M., Rogers, Y., Boyce-Jacino, M.: Gene pen devices for array printing, WO Patent App, No. 6235473 (2000). http://www.freepatentsonline.com/6235473.html. Kimoto, M., Matsunaga, K., Hirao, I. I.: DNA aptamer generation by genetic alphabet expansion SELEX (ExSELEX) using an unnatural base pair system. Springer, New York (2016). Calladine, C., Drew, H., Luisi, B., Travers, A.: Understanding DNA: The Molecule and How itWorks. 3rd ed. Academic Press, Cambridge (2004). Watson, J.: Molecular biology of the gene, Benjamin/Cummings (1987). https://books.google.com.eg/books?id=cM0fAQAAIAAJ. Atito, A., Khalifa, A., Rida, S. Z., Khalifa, A.: DNA-based data encryption and hiding using playfair and insertion techniques. J. Commun. Comput. Eng. 2, 44–49 (2012). Guo, C., Chang, C., Wang, Z.: A new data hiding scheme based on DNA sequence. Int. Innov. J. Comput. Inf. Control. 8, 1–11 (2012). Khalifa, A.: Lsbase: a key encapsulation scheme to improve hybrid crypto-systems using DNA steganography. In: 2013 8th International Conference on Computer Engineering & Systems (ICCES), pp. 105–110 (2013). https://doi.org/10.1109/icces.2013.6707182. Khalifa, A, Atito, A: High-capacity DNA-based steganography. In: 8th International Conference on Informatics and Systems. IEEE (2012). BIO–76–BIO–80. Skariya, M., Varghese, M.: Enhanced double layer security using RSA over DNA based data encryption system. Int J Comput Sci Eng Technol. 4, 746–750 (2013). Taur, J., Lin, H., Lee, H., Tao, C.: Data hiding in DNA sequences based on table lookup substitution. Int J Innov Comput Inf Control. 8, 6585–6598 (2012). UbaidurRahmana, N. H., Balamuruganb, C., Mariappanab, R.: A novel DNA computing based encryption and decryption algorithm. Procedia Comput. Sci. 46, 463–475 (2015). UbaidurRahmana, N. H., Balamuruganb, C., Mariappanab, R.: A novel string matrix data structure for DNA encoding algorithm. Procedia Comput. Sci. 46, 820–832 (2015). Adleman, L.: Molecular computation of solutions to combinatorial problems. Science. 266(11), 1021–1024 (1994). Bahig, H. M., Nassr, D. I.: DNA-based AES with silent mutations. Arab. J. Sci. Eng. 44, 1–15 (2018). https://doi.org/10.1007/s13369-018-3520-8. Boneh, D., Dunworth, C., Lipton, R., Sgall, J: On the computational power of DNA. Discret. Appl. Math. 71(1-3), 79–94 (1996). Article MathSciNet Google Scholar Kari, L., Seki, S., Sosík, P.: DNA Computing—Foundations and Implications. Springer, Berlin (2012). Lipton, R.: Using DNA to solve np-complete problems. Science. 268, 542–545 (1995). Boneh, D., Dunworth, C., Lipton, R.: Breaking DES using a molecular computer. In: DNA Based Computers, Proceedings of a DIMACS Workshop, Princeton, New Jersey, USA, April 4, 1995, pp. 37–66 (1995). https://doi.org/10.1090/dimacs/027/04. Abbasy, M., Manaf, A., Shahidan, M.: Data Hiding Method Based on DNA Basic Characteristics. Springer (2011). https://doi.org/https://doi.org/10.1007/978-3-642-22603-8_5. Abbasy, M., Nikfard, P., Ordi, A., Torkaman, M.DNA base data hiding algorithm. 1, 183–193 (2012). Gehani, A., LaBean, T., Reif, J.: DNA-based Cryptography. Springer, Berlin (2004). MATH Google Scholar Hamed, G., Marey, M., El-Sayed, S. S., Tolba, F.: DNA Based Steganography: Survey and Analysis for Parameters Optimization. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-21212-8_3. 
Tang, Q., Ma, G., Zhang, W., Yu, N.: Reversible data hiding for DNA sequences and its applications. Int. Digit. J. Crime For. 6(4), 1–13 (2014). Cui, G., Qin, L., Wang, Y., Zhang, X.: An encryption scheme using DNA technology. In: Third International Conference on Bio-Inspired Computing: Theories and Applications, pp. 37–42 (2008). https://doi.org/10.1109/bicta.2008.4656701. Sabry, M., Hashem, M., Nazmy, T., Khalifa, M. E.: Design of DNA-based advanced encryption standard (AES). In: 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), pp. 390–397 (2015). https://doi.org/10.1109/intelcis.2015.7397250. Xin-she, L., Lei, Z., Yu-pu, H.: A novel generation key scheme based on DNA. In: International Conference on Computational Intelligence and Security, pp. 264–266 (2008). https://doi.org/10.1109/cis.2008.113. Wang, X., Zhang, Q.: DNA computing-based cryptography. In: 2009 Fourth International on Conference on Bio-Inspired Computing, pp. 1–3 (2009). https://doi.org/10.1109/bicta.2009.5338153. We are grateful to Hatem M. Bahig for his support, valuable comments, and remarks. Furthermore, we are thankful to the referees for their precious comments, which lead to the improvement of the paper. Computer Science Division, Department of Mathematics, Faculty of Science, Ain Shams University, Cairo, Egypt Dieaa I. Nassr DIN is the only author of this article, and he has performed all the analysis, verifications, and completions of the results included in this article. The author read and approved the final manuscript. Correspondence to Dieaa I. Nassr. The author declares that he has no competing interests. Nassr, D.I. Secure Hash Algorithm-2 formed on DNA. J Egypt Math Soc 27, 34 (2019). https://doi.org/10.1186/s42787-019-0037-6 Secure hash function SHA-2 Mathematics Subject Classification (2000) 68P25 Follow SpringerOpen SpringerOpen Twitter page SpringerOpen Facebook page
A survey on wireless sensor network technologies in pest management applications Lyle Parsons ORCID: orcid.org/0000-0001-6650-30041, Robert Ross ORCID: orcid.org/0000-0002-2796-784X1 & Kylie Robert ORCID: orcid.org/0000-0002-8554-84402 SN Applied Sciences volume 2, Article number: 28 (2020) Cite this article Animal pests notoriously cause billions of dollars of damage by spoiling crops, damaging infrastructure and spreading disease. Pest control companies try to mitigate this damage by implementing pest management approaches to respond to and prevent infestations. However, these approaches are labour intensive, as pest control technicians must regularly visit the affected areas for monitoring and evaluation. Current remote sensing technologies can allow for decision making based on real-time data remotely uploaded from pest traps. This reduces the frequency of field visits and improves the pest management process. In this paper, we survey a variety of modern data-driven pest management approaches. We also evaluate wireless communication infrastructures which can be used to facilitate data transfer between pest traps and cloud servers. Animal pests are a major problem globally, as they can spread disease, and cause damage to crops and infrastructure. Rats, for example, carry a number of diseases including Plague (the Black Death [1]), Murine typhus, Rickettsial pox, Salmonellosis, Rat-bite fever, as well as Leptospirosis - all of which can spread to humans [2]. Plant eating pests such as codling moths burrow into pome fruit, which can result in severe crop losses [3]. In terms of infrastructure damage, termites are capable of destroying buildings by eating into the timber [4]. To mitigate some of these problems, pest control strategies need to be implemented to treat and prevent infestations. Animal pest control involves the eradication of both invertebrate and vertebrate select creatures from homes, office buildings, farms, and other areas frequented by people. Insects, rodents, and birds are the most common animal pests present around the world. Prevention and elimination of smaller pests is usually achieved through the use of pesticides, while larger pests are normally eradicated using pest traps [2]. There are five stages involved in the traditional pest management process [2]: Inspection—a pest control technician carries out an inspection in the area where pest activity is suspected. Identification—the type of pest (or pests) in the area are identified. Recommendation—based on the pest type, the technician will recommend a treatment strategy. Treatment—After the recommended strategy has been agreed on, the treatment stage begins. This usually involves exposing the infested area to pesticides or rodenticides. Evaluation—At this stage the technician periodically visits the area to check the effectiveness of the treatment strategy. The treatment and evaluation stages are usually ongoing, as long term monitoring is the most effective way of preventing a re-infestation [2]. While the traditional pest management approach mentioned above is effective, it has three distinct weaknesses. Firstly, the pest management strategy is based on old information (the state of the site the last time it was visited). This means that if there is a need to change the pest management strategy between site visits, perhaps as a result of increased pest activity since the last visit, the change can only be effected during the next site visit. 
Depending on the type of pest and level of infestation, site visits can range from weekly to annually [5]. Secondly, there is an ongoing high labour cost involved with sending technicians out for site visits periodically. Finally, despite the effectiveness of pesticides in eliminating pests, they have a negative impact on crops and other non-target species when used in excess. For example, experimental studies have shown that wild bumble bees exposed to Neonicotinoid insecticides experienced a reduced growth rate [6]. Another study also detailed the toxic effects of Fenitrothion on dunnarts (a native Australian marsupial) [7]. Farmers therefore have to constantly monitor the amount and types of pesticide used on their crops. One way of ensuring this is by estimating the population of pests in a given area through the use of pheromone based traps. These traps contain lures placed on an adhesive card, as shown in Fig. 1. When the pest enters the trap it remains stuck on the card. The adhesive cards are then periodically removed and inspected by entomologists, who then count the number and types of pests found on them. These counts give an estimate of the effectiveness of the pest treatment strategy, and aids farmers in determining the right amount and types of pesticide to use. However, a major drawback of this method is the amount of labour involved in manually removing and inspecting all the cards from the various pheromone traps. A basic pheromone trap (card exposed for illustration) [8] For management of larger pests, such as rodents, bait stations or instant kill traps are placed around areas of suspected pest activity. Instant kill traps are designed to kill the rodent when it comes into contact with the trap (through electric shock, suffocation, or trauma). This is in contrast to bait stations, where poison infused into wax blocks or pellets are used. The rodents consume the poisoned bait, and usually die a few days after. Figure 2 shows a typical bait station with wax blocks used as the bait. A rodent bait station with wax blocks [9] One problem involved with using bait is poisoning of non-target species, either through direct consumption or through secondary poisoning (where the non-target species consumes the body of a poisoned pest) [2]. Bait blocks are also harmful to humans if ingested, with children being the most vulnerable. This is especially true when the traps are easily accessible to children (for example when the bait station is placed at ground level). In 2012, the Poison Control Centre in Australia received 15,000 calls concerning children under six years being exposed to rodent bait [10]. New technological innovations have resulted in novel ways of approaching the pest management paradigm, particularly in terms of pest identification. For example, advances in image processing and computational power now make it possible to automate the classification and counting of pests in pheromone-based traps [11]. Some commercial pest management systems allow farmers to remotely monitor pest activity in their fields [12, 13]. Also, the use of passive infra-red (PIR) sensing in pest traps makes it possible to identify target pests based on the amount of heat they emit. By restricting bait access to only the target pests and using bait types that have a low risk of secondary poisoning, the amount of accidental deaths due to non-target species ingesting the bait in the traps is reduced [14]. 
The Internet of Things (IoT) has been instrumental in enabling pest traps which are capable of wireless communication with cloud servers. This allows for information specific to each trap (such as pest activity, bait levels, and ambient environmental conditions) to be sent to a cloud server for real-time analysis. The analysed information can then be used to make data driven decisions, resulting in a more optimized pest management strategy. By employing technological solutions to animal pest management, we can increase the quality and timeliness of data, hence facilitating better decision making. Additionally we can automate many of the labour-intensive pest management jobs, thereby reducing cost and increasing efficacy. The purpose of this paper is to give an overview of modern data-driven approaches to pest management. The paper is structured as follows: Sect. 2 discusses a number of commercially available systems capable of detecting the presence of pests in a pest trap, while Sect. 3 details current systems capable of both detection and classification of pests, and discusses the image processing methods used for pest classification. A discussion of IoT communication infrastructures available for pest control systems (such as LoRa, WiFi, and NB-IoT) is given in Sect. 4. Section 5 gives a critical analysis of the systems discussed in this paper, with an overall conclusion given in Sect. 6. Pest detection systems This section provides an overview of the various pest detection systems available, as well as their principle of operation. By adding a pest detection system to traditional pest traps, the pest management process can be significantly improved—as real-time data facilitates a better response to infestations and fewer site visits.Significant cost savings can therefore be realized by this addition. Pest detection systems also allow for more optimized baiting strategies, since the amount of bait can be adjusted according to the pest activity in near-real time. One of the most successful smart pest detection systems is Trapview, a pest management system which came on to the market in 2013. The standard Trapview system uses four cameras to capture images of the inside of a pheromone based trap. These images are then uploaded to a cloud server for processing. Each trap has an internet enabled SIM card which enables wireless communication with the cloud server. A GPS sensor in the trap provides information on the location of the trap for mapping purposes. The pests in the image are then classified and counted through the use of image processing algorithms, and this information is presented to the user through a web based application, or alternatively, a mobile application. Some of the pests which can be classified by this system include codling moths, plum fruit moths, tomato leafminers, cotton bollworm, grape berry moths, as well as diamondback moths. Users also have the option of purchasing temperature and humidity sensors for the traps [12]. Other Trapview systems available include Trapview AURA, which uses polarized UV light instead of a pheromone lure; and Trapview FLY, which uses higher resolution cameras for monitoring smaller pests such as fruit flies. All Trapview systems come with a solar panel and Lithium-Ion battery for power management [12]. Another smart pest trap targeting insects is the Spensa Z-Trap. Inside this trap is a pheromone lure as well a series of electrified rods. 
When an insect makes contact with the rods, it gets electrocuted and falls into a collection bucket. Bioimpedance sensors measure the electrical impedance of the insect, and this measurement is used as a means of classifying the insect. Data from the traps is sent to a cloud based server, and is presented to the user via a web application [13]. The same company also offers an image based classifier trap, the Spensa Sentinel, where insects in pheromone based adhesive traps are identified and counted through deep learning algorithms in the cloud [13]. One particular pest management system has proved to be successful in determining bedbug infestations when used in hotel rooms. The GUPSY(Global Urban Positioning and Sensor sYstem) bedbug monitoring system attracts bedbugs by emitting carbon dioxide from its trap [15]. Images of the base of the trap are captured periodically, and computer vision algorithms are used to determine the number of bed bugs in the trap, as well as their type (juvenile or adult). The bedbug count is then sent from the traps in the rooms to a gateway device, using LoRaWAN (a low power wireless network commonly used with Internet of Things systems) [16]. The gateway device then sends this information to a cloud based server. Users can then be sent alerts whenever an infestation is detected. The conventional method of checking for bedbugs in commercial settings is to use specially trained dogs to sniff around rooms trying to pick up a bedbug scent. If found, heat is used to kill the bedbugs as the use of pesticides for bedbug eradication is illegal. The presence of bedbugs needs to be checked for quite often, especially in places like hotel rooms where they can be introduced into the rooms through luggage. The GUPSY pest management system placed third in the 2016 LoRa Alliance Global IoT challenge [17]. Smart pest management systems are not only limited to insects. Rentokil, a multinational company in the pest control industry has a number of pest management systems targeted at rats and mice. One such system, RADAR Connect (Rodent Activated Detection and Riddance), is an electronic mouse trap which uses carbon dioxide to humanely kill mice. The trap has a tube shape with two entrances (one on either side). When a mouse enters the trap, it triggers break beam sensors in the trap. Actuators then close the entrances of the trap. Carbon dioxide is subsequently released into the trap, thereby killing the mouse within a minute. A signal is then sent from the trap to a Rentokil server, indicating activity in the trap, as well as the location of the trap. A technician can then be dispatched to remove the dead mouse and reset the trap [18]. Autogate is another Rentokil pest management system used for rats. The Autogate system is added to an existing rat bait station, and acts to only allow access to the bait if a rat has been detected in the bait station. For a rat to be detected, the break beam sensors in the bait station must be triggered at least three times over a short time interval. When this happens, an electronic gate will open in the bait station so as to allow the rat to have access to the bait. An SMS is also sent to the customer informing them of the presence of the rodent activity [14]. Another electronic trap targeted at rodents is the Goodnature E2 Automated Rodent trap, from the Australian company Protect-us. This cylindrical-shaped trap contains a miniature gas cannister connected to a movable piston. 
A lure which emits olfactory notes attractive to the rodents is placed in front of the piston. When a rodent enters the trap, it triggers a sensor, which then results in the piston striking a fatal blow to the head of the rodent. The cannister holds enough gas to fire the piston 24 times before needing replacement [19]. Another company, Anticimex, offers an integrated solution to smart pest control. In terms of their rodent control, they offer a number of smart traps. The Smart Pipe for example, is a trap placed in underground sewers and is designed so that rats and mice can pass through the trap as they move in the sewer. A proximity sensor detects the rodent moving through the trap, which subsequently triggers a metal bar to drop onto the rodent thereby killing it. The rodent is then washed along with the sewer water [20]. Anticimex also sell the Smart box, which can be placed along building walls. This trap uses electricity to kill rodents as they enter in it. The bodies are then moved into a collection bin (part of the trap), by a motor mechanism. The bin needs to be manually emptied after an accumulation of dead bodies. Smart catch is another type of rodent trap offered by Anticimex. It kills the rodents by striking a blow to the neck when a rodent is detected in the trap. This trap can also be placed along building walls. A non-lethal system is also used to gather data about rodent activity. This unit is called Smart Eye (by Anticimex). It is designed to be placed in small, narrow areas where rodent activity is suspected but conventional traps can not be used. It does not have a killing mechanism, however it sends information of rodent activity based on PIR proximity sensors. All information from the above mentioned traps are sent to Smart Connect, a hub which is used to relay the information to a central server using a cellular network. Data sent from these units is then used for effective pest management [20]. BoxSense, from IoT Box Systems is an add-on to traditional rodent bait stations. The system uses both motion and touch sensors to detect rodent activity in the bait station. The information from these sensors is sent to a central server, where it is analysed, and used as a guide for baiting strategies, based on occupancy. This product has a battery life of one year [21]. The company also offers a solution which does not require batteries, the BoxSense eMitter Snap Trap. This spring loaded trap uses EnOcean wireless sensors [22] to detect when a rodent has been trapped and to upload the information to a cloud server. The EnOcean sensors harness their energy from the kinetic energy resulting from the spring loaded trap being triggered. More recently, [23] developed a low power system capable of remotely monitoring rodenticide levels in bait stations. The system uses an infra-red proximity sensor placed behind the bait rod to measure the distance between the sensor and the bait blocks. As the bait is consumed, the measured distance will increase. The sensor output is then sent to a cloud server for processing. Computer vision based pest classification In contrast to some of the simple systems from the previous section which just detect the presence of something in a trap (e.g BoxSense, Smart Eye and RADAR Connect), computer vision systems (where algorithms are applied to images) provide a rich source of information which also facilitates pest classification. 
Several of the computer vision implementations described here used labour-intensive manually captured images (effectively only reducing the labour in the expert classification step). Integrating a camera into the detection system (like in TrapView) provides the greatest gains in labour efficiency whilst providing the most up to date data from the traps. Advances in digital camera technology have resulted in low-cost camera sensors that have both a small form factor and high resolution. The Omnivision OV5647 camera for example, is a 5 mega-pixel camera with dimensions of 20 mm × 25 mm × 9 mm. This low cost camera has successfully been used in detecting pests as small as fruit flies [24]. These types of cameras, coupled with powerful computer vision algorithms, make up the core of vision enabled smart pest traps. This section reviews some of the computer vision methods used to detect and/or classify pests from a given image. The methods are further categorized into three broad techniques, namely Segmentation based techniques, Feature-based techniques, and Deep learning based techniques. Segmentation based pest detection Segmentation based pest detection techniques use simple algorithms and require low processing power. The principle behind these techniques is to remove the background pixels in the image, thereby resulting in an image containing only pixels belonging to the target pest [25]. One such segmentation algorithm was used to detect pests from images of pheromone based trap cards [26]. Wireless cameras placed in the traps facilitated the image acquisition. The presence of pests in the traps was determined by comparing the pixels in successive grayscale images (captured at one minute intervals). The equation used for this comparison is given in (1). $$\begin{aligned} O(x,y) = {\left\{ \begin{array}{ll} 255 &{} \text {if}\ C(x,y) = L(x,y)\\ 2 &{} \text {if}\ C(x,y) \ne L(x,y)\\ \end{array}\right. } \end{aligned}$$ where O(x, y) , C(x, y) and L(x, y) are the two dimensional matrices which represent the output image, current image, and last received image for each trap respectively. It is evident from equation (1) that the output image will show areas where the two successive images differ as non-white pixels on a white background. To reduce the effect of varied illumination conditions on the output, a median filter was applied to the output image. The location of each pest in the output image was then determined by examining the pixel intensities in the filtered output image. While this system provides a computationally inexpensive way of successfully detecting and localizing the pests in the images, it is unable to determine the type of pest, or to distinguish between a pest and any other moving object in the traps, as it relies only on changes in pixel intensities between successive images. In another implementation [27], the authors focused on segmenting footprints on tracking cards for pest classification, as track analysis gives information regarding the type and maturity of pests. The target pests were rats and mice, specifically the Norway rat, the Ship rat, the House mouse, and the Pacific rat. The tracks were captured in tracking tunnels, which are rectangular polyethylene boxes containing a white cardboard doused with non-drying ink (the tracking card). Lures were placed in the tracking tunnels to attract the rodents, which then leave ink footprints on the card. These tracking cards were manually checked, and images of them were subsequently captured. 
The identification method used thresholding (where pixels with intensities above a given threshold value are set to white, and those below the threshold are set to black) followed by a form of template matching [28] to identify the tracks on the card. The footprints of rats and mice have a circular shape, with the toes encompassing a central pad. This feature allowed the authors to create a number of templates for identifying the footprints. Each template contained a number of circles representing the possible locations of the central pad and toes for each of the four rodent species (with different templates for front and back feet). Figure 3 shows an image of a rat footprint made on a tracking card, as well as the template used for classifying this footprint.

Fig. 3: Rat footprint on a tracking card (left) and the template used for classification (right) [28]

After thresholding, the algorithm searched for candidate central pads using blob analysis. The area around a candidate central pad was then searched for possible toes; only blobs within a predefined distance of the candidate central pad were considered to be toes. Footprints were then identified by comparing the central pad and toes to the different templates. The detected footprints were also used to identify strides, which can be used as a means of gender classification and size estimation. This algorithm proved more successful in identifying rat tracks than mouse tracks, yielding a false positive rate of 11% for rats compared to 38% for mice. The authors in [29] improved on this method by providing a better way of thresholding the tracking card images. Instead of the hard threshold used in [27], which eliminated faint footprints on the tracking cards, their method employed an adaptive thresholding technique. This technique differentiated between a background pixel and a footprint pixel by comparing the pixel's intensity against the mean of its surrounding pixels (relative to a local tolerance value). The tracking cards used in [29] contained footprints from Polynesian rats, Norway rats, and Ship rats. The algorithm proved successful in identifying the footprints of these species, yielding a best result of a 78.4% true positive rate for Norway rat footprints, together with a 90.1% true negative rate for the same species.

Hand-crafted feature based pest detection

While segmentation methods are effective in detecting whether or not a pest is present in an image, they tend to fail when multiple pests are present in the image, or when different pest species must be classified. This is particularly true when analysing images from pheromone based adhesive traps, where there will typically be a number of different pest species in the same image. To help solve this problem, machine learning algorithms can be incorporated into the detection process. Distinctive image features of each pest species (such as colour, shape and size) can be built into the algorithm, so that multiple species can be detected and classified in the same image. The authors in [30] successfully implemented such a system to classify Lobesia botrana moths using K-means clustering followed by a support vector machine classifier. These moths are considered to be the major pest of the grapevine industry. A total of 50 pheromone based traps were deployed across various locations. Each trap was visited once a week and an image of the adhesive surface of the trap was captured using a mobile phone.
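As an illustration of the local-mean adaptive thresholding used in [29] for the tracking cards, the sketch below shows one plausible implementation in Python; the window size and tolerance are placeholder values, not those reported in [29].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(card, window=31, tolerance=10):
    """Mark tracking-card pixels as footprint (True) or background (False).

    A pixel counts as part of a footprint when it is darker than the mean of
    its local neighbourhood by more than `tolerance`, which preserves faint
    prints that a single global threshold would discard.
    """
    local_mean = uniform_filter(card.astype(np.float32), size=window)
    return card.astype(np.float32) < (local_mean - tolerance)
```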
An Android-based application on the phone was used to send the image to a web server, along with the phone's GPS location and a date and time stamp. The image was then preprocessed with a median filter to remove dust particles that may have fallen on the adhesive surface. Following this, the image was enhanced using the Contrast Limited Adaptive Histogram Equalization (CLAHE) method, which enhances features in the image that have poor contrast. To account for the fact that images may be captured at different distances from the traps, the image was scaled by factors of 0.75, 0.5, and 0.25. The next step in the processing pipeline was to apply K-means clustering to the image. Once the image was split into segments, two descriptors were generated for each segment. The first descriptor was a 10-dimensional vector obtained from the histogram of grayscale values of the pixels in the segment; the second was a 5-dimensional vector obtained from the histogram of gradients of the pixels in the segment. These descriptors were then used to train a support vector machine (SVM) to distinguish segments containing Lobesia botrana moths from those without. A total of 360 images were used for the training set, and from these, 2136 segments containing moths and 5018 segments containing no moths were used to train the SVM. This method yielded an average specificity of 95.1%. As with any classical machine learning approach, all parameters had to be tuned manually in order to find the segmentation size and classifier which yielded the most favourable results. Figure 4 shows the results of segmenting the image using K-means clustering, as well as the SVM classification; the yellow outline around each moth in Fig. 4 indicates a successful classification and localization.

Fig. 4: Segmented image using K-means clustering (left) and result after SVM classification (right) [30]

In another application, a support vector machine was used to classify Thrips (Thysanoptera) on the flowers of strawberry plants [31]. The main goal of this system was real-time pest identification running on a mobile robot. The images used for training the system were captured in an orchard under natural lighting conditions. A total of 100 images of strawberry flowers were captured by the camera on the mobile robot, each at a fixed distance from the flower. The relatively small size of Thrips posed a problem for the identification process, and therefore a number of preprocessing steps were taken to maximize the chances of correct classification. The first of these steps was to remove the non-flower parts of the image. To do this, the RGB image was split into its individual colour channels, and a gamma operator was applied to the blue channel to enhance the contrast of the flower regions relative to the non-flower regions. With the contrast of the flower regions enhanced, the non-flower regions were removed by thresholding (using Otsu's binarization method). Morphological opening and closing were then applied to the thresholded image in order to remove noise, so that the resulting image contained only the flower regions. The next step involved treating the flower regions as background pixels, with the reasoning that any remaining pixels in the image would belong to a pest.
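The descriptor construction in [30] can be sketched roughly as follows (Python, using scikit-learn). The clustering features and the exact histogram settings are assumptions made only for illustration; they are not restated in [30].

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def cluster_segments(image, k=8):
    """Split a colour trap image into k segments by clustering its pixel values.

    The features actually clustered in [30] are not given in this survey,
    so raw pixel colour values are used here purely for illustration.
    """
    h, w, c = image.shape
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(image.reshape(-1, c))
    return labels.reshape(h, w)

def segment_descriptor(gray, mask):
    """10-bin grayscale histogram plus 5-bin gradient-magnitude histogram."""
    gy, gx = np.gradient(gray.astype(np.float32))
    grads = np.hypot(gx, gy)
    h_gray, _ = np.histogram(gray[mask], bins=10, range=(0, 255), density=True)
    h_grad, _ = np.histogram(grads[mask], bins=5, density=True)
    return np.concatenate([h_gray, h_grad])

# Training sketch: X stacks the 15-dimensional descriptors of labelled segments,
# y holds 1 for segments containing a moth and 0 otherwise.
# clf = SVC().fit(X, y)
```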
Treating the flower regions as background was achieved by inverting the image, which made all pixels belonging to the white petals of the strawberry flower black, while pixels belonging to the middle region of the flower, as well as those belonging to any pest, became white. Large regions of white pixels were then removed by considering their area, leaving only small regions, which were assumed to be target pixels of pests. This image was then used as a mask for the original colour image, so as to extract the pixels belonging to the pest in the original image. Figure 5 illustrates this process.

Fig. 5: Original image (top left), result after thresholding and inverting (top right), and result after region elimination by area (bottom) [31]

Considering that other pests such as houseflies and ants can also be found on the petals of strawberry flowers, a support vector machine was implemented for the correct classification of Thrips. Two features were used to train the SVM. The first was the region index, obtained from the ratio of the insect's major diameter to its minor diameter. The second was the colour index, found by converting the image to its HSI equivalent and extracting the intensity values. A radial basis function kernel was used for the SVM. Of the 100 images taken, 80 were used for training the SVM, and 20 were used for testing. The system was successful in classifying Thrips (in both their adult and larval stages), with a low mean percentage error of 2.25%.

Deep learning based pest detection

One major drawback of feature based detection methods such as SVMs is that the features for each pest species must be manually tuned (hand-crafted) in order to produce favourable results. This step is labour-intensive, and limits the types of pests which can be identified to only those whose features have previously been hand-crafted into the machine learning algorithm. To reduce the reliance on hand-crafted features, deep learning approaches have been implemented for pest classification. This machine learning approach lets the algorithm automatically learn the best features with which to train the classifier, meaning that new species can be identified by simply retraining the algorithm with images containing the new species. Ding and Taylor [11], for example, implemented an identification and detection system for codling moths using a convolutional neural network (CNN). The training images used in this system were acquired from pheromone based moth traps. A low-cost camera placed in the traps was used to capture the images under artificial lighting conditions. The captured images were then sent remotely to a server, where entomologists marked the locations of moths in the images to create training and validation data sets. As pheromone based traps can also attract other species, these images contained pests other than moths, hence the need for expert labelling of the moths prior to training the CNN. To achieve illumination invariance, the images were first preprocessed for white balance, which reduces the effects of ambient lighting on images captured at different times of day. A total of 177 images were used to train a LeNet-like CNN. Data augmentation was used to increase the training data by rotating, translating, and flipping the moth images. The detection pipeline consisted of splitting the input image into a number of patches, and passing each patch to the trained CNN for classification.
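A rough Python/OpenCV sketch of the flower/pest separation pipeline described above for [31] is shown below. The gamma value, kernel size, and area threshold are placeholders chosen only to make the example runnable; they are not the values used in [31].

```python
import cv2
import numpy as np

def isolate_pest_pixels(image_bgr, gamma=0.5, max_pest_area=200):
    """Separate candidate pest pixels from strawberry-flower petals.

    Steps: gamma-enhance the blue channel, binarize with Otsu's method,
    clean up with morphological opening and closing, invert so that petal
    pixels become background, and keep only small connected regions
    (assumed to be pest-sized), which are used to mask the colour image.
    """
    blue = image_bgr[:, :, 0].astype(np.float32) / 255.0
    enhanced = np.uint8(255 * np.power(blue, gamma))
    _, flowers = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    flowers = cv2.morphologyEx(flowers, cv2.MORPH_OPEN, kernel)
    flowers = cv2.morphologyEx(flowers, cv2.MORPH_CLOSE, kernel)
    candidates = cv2.bitwise_not(flowers)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)
    pest_mask = np.zeros_like(candidates)
    for i in range(1, n):  # label 0 is the image background
        if stats[i, cv2.CC_STAT_AREA] <= max_pest_area:
            pest_mask[labels == i] = 255
    return cv2.bitwise_and(image_bgr, image_bgr, mask=pest_mask)
```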
The CNN then mapped the input patch to a probability representing the likelihood of a codling moth being present in the patch. Redundant overlapping patches with high probabilities were eliminated through non-maximum suppression. The remaining candidate patches were compared against a threshold, and those above this threshold were taken to be the detected regions. The system was successful in classifying the codling moths, with a precision–recall score of 93.1% reported in the paper. The authors suggest that since the features were learned by the CNN, this method can be used to identify other types of pests simply by retraining the CNN with images containing those pests.

IoT Communications Infrastructure

As shown in the previous sections, one way of improving the pest management process is by communicating sensor data from the pest traps to the pest management team. This information gives them a better picture of what is happening in the field. The Internet of Things has been instrumental in making this communication possible on a large scale. The Internet of Things (IoT) refers to a network of interconnected devices ("things", or nodes) which exchange data with servers over the internet, through a communications infrastructure. The data received by the servers is typically processed using cloud computing, and then presented to the user in the form of a web-based application [32]. Figure 6 shows a standard IoT architecture, illustrating the layout and interconnections between the "things" (gas monitor, trash container, etc.) and the application servers.

Fig. 6: A typical IoT architecture [33]

In the context of smart pest traps, the traps are the "things" and the data sent from them is usually sensor data (such as temperature, humidity, proximity, optical, or intruder detection readings). This data is sent either from the "things" to an internet-enabled device known as a gateway [32], or directly to the cloud if internet connectivity is provided on board. In the former case, the gateway device then uploads the data to a cloud-based server for processing. Various communication infrastructures can be used to send data between the "things" and the gateway. The choice of communication infrastructure is influenced by the following:

- The distance between the "things" and the gateway device
- The environmental surroundings (rural vs built-up)
- The quantity of data to be sent or received between the devices
- The rate at which the data is to be sent or received
- The power source of the devices (battery, mains, solar, etc.)

In a typical communications system, the data to be transmitted is converted into an electromagnetic wave by an antenna and propagated into the air. As the electromagnetic wave propagates from a transmitter to a receiver, it loses power to the medium it travels through. The amount of power lost is known as the path loss, and it is dependent on factors such as the environment (buildings, trees, terrain) as well as the distance between the transmitter and receiver. As the distance between the transmitter and receiver increases, so does the path loss, meaning that the signal arriving at the receiver becomes progressively weaker. This path loss is least when there is a clear line-of-sight path between the transmitter and receiver antennas. However, when obstacles are present (such as building walls), the energy loss is significantly increased. For example, as a wave propagates through an office window it could lose up to half of its energy (roughly a 3 dB loss).
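For the idealized free-space case (no obstacles or multipath), this distance dependence can be made explicit. The standard free-space path loss expression, included here as general background rather than as part of any surveyed system, is

$$\text{FSPL (dB)} = 20\log_{10}(d_{\text{km}}) + 20\log_{10}(f_{\text{MHz}}) + 32.44$$

so, for example, every doubling of the distance between transmitter and receiver adds roughly 6 dB of loss, and higher carrier frequencies incur more loss over the same distance.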
Because of path loss, the signal power when it reaches the receiver will be much less than when it left the transmitter [34]. Noise (any unwanted signal) is also superimposed onto the wave as it propagates. Examples of noise sources include other transmissions in nearby frequency bands, electromagnetic waves produced by power lines, and noise generated within the transmitter or receiver itself. The received signal power must be significantly higher than that of the noise introduced into the signal; the ratio of the signal power to the noise power, as seen at the receiver, is known as the signal-to-noise ratio (SNR) [34]. There are two ways of reducing the effect of losses on the communications infrastructure. The first is to transmit the signal with more power; however, regulations limit the maximum transmit power over different frequency bands to a few milliwatts. Another problem with this approach is that, for battery-powered applications, the extra energy required to transmit at higher power would significantly reduce the battery life. The other method for reducing losses is to increase the sensitivity of the receiver so that it can pick up heavily attenuated signals. The sensitivity of a receiver is defined as the minimum power a signal must have in order to be correctly detected and demodulated by the receiver at a given bit-error-rate [35]. A receiver with a high sensitivity can be placed further away from a transmitter than one with a lower sensitivity, thus extending the range. As an example, a receiver with a sensitivity of -40 dBm can typically decode signals as weak as 0.1 microwatts (0.0001 mW) [34]. There is a trade-off between data rate (the speed at which data is exchanged between a transmitter and receiver) and range. According to the Shannon-Hartley theorem [34], which bounds the achievable channel capacity by $C = B\log_{2}(1 + \mathrm{SNR})$, the slower the data rate, the further the range can be. This is because the achievable data rate is directly tied to the SNR: since a carrier signal loses power as it moves away from the transmitter, the SNR falls too, and in order to achieve the same bit-error-rate the data rate must therefore be reduced. The choice of frequency also determines the range of communication, with lower-frequency signals able to travel further than higher-frequency carrier signals. High-frequency carriers also require more power from the transmitter in order to be generated. Some frequency bands also exhibit special characteristics; for example, sub-gigahertz carrier signals lose less power to obstacles in their environment when compared to higher-frequency signals. Battery management is crucial in practical IoT applications. A system optimized for low power results in longer intervals between battery replacements, effectively reducing the running costs of the system. In most IoT applications, the largest battery drain occurs when information is being transmitted over the wireless communication network. This has motivated research into protocols which reduce the power demand of wireless sensor networks. One such protocol is node cooperation with network coding. The basic premise behind this method of communication is that each node in a wireless communication network relays both its own data and data received from other nodes as a single network-coded message. The message is then sent to a common destination node (such as a gateway device).
This is in contrast to traditional wireless communication protocols, where each node sends only its own data to the destination node. By combining information from other nodes and sending it in a single transmission, the total time required for information from all nodes to reach the destination node is reduced. This means that the total time the wireless radio is active is reduced, resulting in significant energy savings. Node cooperation with network coding also reduces the bandwidth required to transmit the data, while increasing the wireless communications range [36, 37]. The main wireless technologies used to establish communications infrastructures in IoT systems are given in Table 1. These technologies can be grouped into three main categories, namely:

- Short range Local Area Networks (LANs) in unlicensed frequency bands (e.g. Bluetooth, WiFi, RFID).
- Long range cellular communications in licensed frequency bands (e.g. 3G/4G).
- Low Power Wide Area Networks in unlicensed frequency bands (e.g. LoRa, SigFox).

Table 1: Comparison of wireless communication standards for IoT systems [35, 38-40]

The most commonly used short range communication protocols in unlicensed frequency bands are RFID, Bluetooth, Zigbee, and WiFi [41]. These protocols are characterized by high data rates over short ranges (less than 100 m). The distance between the "things" and the gateway device can be extended by using repeaters to amplify the carrier signal power at fixed distances from the transmitter, or by using a mesh network. In a mesh network, each node can communicate with all other nodes in range. A node requiring data to be sent to the gateway device will send the same data to all nodes in range, and the data is then passed (or hopped) from node to node until it reaches the gateway device [35]. The mesh approach can be good for coverage and fault tolerance; however, it can suffer significant latency as the routing algorithm dynamically changes the number of hops. Another problem with this approach is that it is not suitable for low-power sleeping applications, since all nodes effectively must remain active whenever data is being sent or received across the network. In applications where ubiquitous coverage is required, or where the distance between "things" is too large for a multi-hop mesh network to be feasible, a cellular communications infrastructure is often used. In such a system, each node requires a SIM card to send data over a cellular network such as 3G or 4G. This data can be sent in the form of an SMS, or as packet data, to the nearest base station. One major disadvantage of using cellular networks is the ongoing cost of keeping the SIM cards active and of sending data over the network. Low Power Wide Area Networks (LPWANs) are a more recent development in IoT communications. These networks operate in the sub-gigahertz unlicensed bands while offering a much wider range than traditional short range LANs. They are characterized by low power and low data rates over long ranges. LPWAN receivers have much higher sensitivities than traditional receivers, thereby giving them longer ranges. For example, typical Bluetooth receiver sensitivities are in the range of -90 dBm, while an LPWAN receiver sensitivity can be as high as -150 dBm. This means that an LPWAN receiver is capable of successfully decoding a signal whose power is one million times weaker than the weakest signal capable of being detected by a Bluetooth receiver (at the same bit-error-rate).
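The sensitivity figures quoted above can be checked with a simple decibel-to-power conversion; the short Python snippet below is included only as a worked example of the arithmetic.

```python
def dbm_to_mw(p_dbm):
    """Convert a power level in dBm to milliwatts (0 dBm = 1 mW)."""
    return 10 ** (p_dbm / 10.0)

bluetooth = dbm_to_mw(-90)    # ~1e-9 mW
lpwan = dbm_to_mw(-150)       # ~1e-15 mW
print(bluetooth / lpwan)      # ~1e6: a 60 dB difference is a factor of one million
```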
LPWANs are configured in a star topology, where each node is directly connected to the gateway device. This means that only one hop is required for data to be sent from a node to the gateway device, so the latency problems associated with multi-hop mesh networks are eliminated [35]. One of the most commonly used LPWAN technologies for IoT systems is LoRa (Long Range). LoRa devices use a Chirp Spread Spectrum (CSS) modulation scheme to send data between the "things" and the gateway device. CSS spreads the carrier signal over a wider band of frequencies at lower signal strengths. The SNR of the received signal is thus very low; however, the high sensitivity of the receiver allows accurate decoding at the specified bit-error-rates. In Australia, LoRa uses the 915-928 MHz frequency range, with each channel occupying a maximum of 500 kHz. The low bandwidth means that LoRa does not support high data rates; however, since many IoT nodes send only small amounts of data (typically 50 bytes per message), this is well matched to such applications. Typical ranges for LoRa communication infrastructures are 5 to 15 kilometres, depending on the environment (built-up, rural, forested, etc.) [40]. More recently, SigFox LPWAN systems have become a popular IoT connectivity solution. SigFox systems use a Differential Binary Phase Shift Keying (D-BPSK) modulation scheme (a scheme where the phase of the carrier signal is modulated by the data to be sent), together with ultra-narrowband carrier signals, to exchange small packets of data over the network. The data is transmitted at a low bit rate of either 100 bps or 600 bps. The low bit rate coupled with the D-BPSK modulation results in long-range communication on the order of several kilometres. In a similar manner to legacy cellular systems, the SigFox network architecture consists of widespread base stations, with multiple radios within range of a base station able to exchange data with it simultaneously. Each SigFox radio is permitted to send 140 messages per day (12 bytes of data per message), i.e. an uplink payload of at most 1680 bytes per day. Data received by the base stations is sent to the SigFox cloud, and subsequently forwarded to customer servers. SigFox communication occurs in the unlicensed ISM band, resulting in significantly reduced service charges to the customer (starting at 10 USD per device per year) [42, 43]. There are also LPWANs which rely on cellular systems as their backbone. While traditional cellular systems offer much wider coverage and pre-existing infrastructure, they consume too much power for use in many IoT applications. To address this problem, cellular-IoT systems were developed which offer the coverage and infrastructure of legacy cellular networks, but at a fraction of the power. Narrowband IoT (NB-IoT) is one such recent cellular-IoT standard. It takes advantage of the licensed cellular sub-gigahertz bands to achieve low-power wireless communication. It is based on the LTE standard for cellular communication; however, its bandwidth is limited to 200 kHz. NB-IoT can also transmit data at much faster speeds than LoRa, with typical speeds of around 200 kbps [40].

When considering all of the aforementioned smart pest traps, it is evident that technology is having a disruptive effect on the pest management approach. Traditional methods of manual pest trap inspection are being replaced with smart traps capable of constant real-time monitoring.
The use of these traps has the advantage of minimizing the amount of pesticide used, by focusing on areas where pest activity is high (as compared to a blanket use of pesticides). There is also a significant reduction in the resources needed to monitor and confirm pest infestations, as infestations can easily be seen using the web applications which accompany these smart traps. Aside from increasing efficiency, these smart traps can provide a wealth of knowledge to better understand pest behaviour, prevalence, and risks. Another important observation is that smart pest management systems do not offer an end-to-end solution which eliminates the need for professional pest technicians. Rather, these systems can be used as a tool for the professionals when planning and evaluating their pest eradication methods. Technicians are still required to replace the bait in lure-based traps, as well as to remove the bodies of dead pests from the traps. Considering costs, it is clear that the initial setup cost of smart pest traps is much higher than that of traditional traps; however, the gains accrue over the lifetime of the system, as manual inspections (and their associated costs) are minimized. Currently, the recommended interval between sending technicians out to check bait stations is 4 to 8 weeks [44]. Remote monitoring using smart pest traps would significantly reduce the labour costs involved in these periodic site visits. Smart traps do, however, incur ongoing maintenance costs, mainly battery replacement and the cost of sending information over a wireless network. Another notable observation is that, for larger pests (such as rodents), most smart traps only report when a trap was visited or triggered, not what type of pest visited the trap. Such classification information could be an additional aid when planning a baiting strategy. Given the potential of machine vision classification algorithms (as discussed in Sect. 3), automated image capture and analysis in smart traps shows great promise.

In this paper, a survey of smart pest traps capable of pest identification and classification was presented. These traps were grouped into two main categories, namely those capable of pest detection only, and those capable of both pest detection and pest classification. The latter group classifies pests through image processing algorithms applied to images captured in the pest traps. Three categories of image processing algorithms were discussed, namely segmentation based algorithms, hand-crafted feature based algorithms, and deep learning based algorithms. It was noted that deep learning based algorithms produced the most favourable results; however, the scarcity of labelled pest image data limits the use of these algorithms in smart pest traps. Various IoT communication infrastructures for transferring data between smart pest traps and cloud servers were also discussed, with their individual trade-offs making the choice of infrastructure application dependent. Smart pest traps coupled with cloud-based data analysis will be key to reducing the labour costs involved in traditional pest management, and to producing more optimised pest management approaches based on real-time information.

References

1. (2005) The black death: The greatest catastrophe ever. History Today. https://www.historytoday.com/ole-j-benedictow/black-death-greatest-catastrophe-ever
2. Bennett G, Owens J, Corrigan R, Truman L (1977) Truman's scientific guide to pest control operations. Adv Commun. https://books.google.com.au/books?id=MNwnAQAAMAAJ
3. (2018) Codling moth. RHS Gardening. https://www.rhs.org.uk/advice/profile?PID=489
4. (2017) Two workers escaped death as house collapsed due to severe termites infestation. RentoKil. https://www.rentokil.com.my/blog/two-workers-escaped-death-as-house-collapsed-due-to-severe-termites-infestation/
5. (2016) Safe food australia—appendix 7: Pest management. http://www.foodstandards.gov.au/publications/Documents/SafeFoodAustralia/Appendix7-Pestmanagement.pdf
6. Whitehorn PR, O'Connor S, Wackers FL, Goulson D (2012) Neonicotinoid pesticide reduces bumble bee colony growth and queen production. Science 336(6079):351–352
7. Story P, Hooper MJ, Astheimer LB, Buttemer WA (2011) Acute oral toxicity of the organophosphorus pesticide fenitrothion to fat-tailed and stripe-faced dunnarts and its relevance for pesticide risk assessments in Australia. Environ Toxicol Chem 30(5):1163–1169
8. Farmers Market Kenya (2016) Tutrack pheromone trap. https://www.fmk.co.ke/product/tutrack
9. DoMyOwn (2014) Protecta LP rat bait station. https://www.domyown.com/protecta-lp-rat-bait-station-p-1291.html
10. Anderson M (2012) Keeping city children safe from rat poisons | National Poison Prevention Month. https://blog.epa.gov/blog/2012/03/keeping-city-children-safe-from-rat-poisons/
11. Ding W, Taylor G (2016) Automatic moth detection from trap images for pest management. Comput Electron Agric 123:17–28
12. (2018) TrapView—automated pest monitoring. TrapView. http://www.trapview.com/en/
13. (2018) Spensa Technologies. https://spensatech.com
14. Nixon N (2017) Toxic baiting on demand. https://cleaningmag.com/news/toxic-baiting-on-demand
15. O'Connor MC (2016) Inventor develops IoT solution to bedbug problem. IoT Journal. http://www.iotjournal.com/articles/view?14165/
16. (2015) LoRa Alliance | technology. www.lora-alliance.org/technology
17. Hopkins T (2016) Leroy Merlin wins the LoRa Alliance global IoT challenge. http://www.marketwired.com/press-release/leroy-merlin-wins-the-lorar-alliance-global-iot-challenge-2089511.htm
18. Rentokil (2016) RADAR Connect. https://www.rentokil.com/products/connected-pest-control/radar-connect/
19. Goodnature (2014) Goodnature E2 rodent control system. http://www.goodnature.com.au/
20. (2017) SMART—a more intelligent solution to pest control—Anticimex. www.anticimex.com/smart-pest-control
21. (2014) BoxSense—wireless transmitting bait stations for rodents with cloud and mobile applications. www.iotboxsys.com/products/bait-stations
22. (2017) Self-powered wireless technology for sustainable buildings. www.enocean-alliance.org/what-is-enocean/self-powered-wireless-technology
23. Nibali A, Ross R, Parsons L (2019) Remote monitoring of rodenticide depletion. IEEE Internet Things J
24. Shaked B, Amore A, Ioannou C, Valdés F, Alorda B, Papanastasiou S, Goldshtein E, Shenderey C, Leza M, Pontikakos C et al (2018) Electronic traps for detection and population monitoring of adult fruit flies (Diptera: Tephritidae). J Appl Entomol 142(1–2):43–51
25. Piccardi M (2004) Background subtraction techniques: a review. In: IEEE international conference on systems, man and cybernetics, vol 4, pp 3099–3104
26. Miranda JL, Gerardo BD, Tanguilig BT III (2014) Pest detection and extraction using image processing techniques. Int J Comput Commun Eng 3(3):189
27. Hasler N, Klette R, Agnew W (2004) Footprint recognition of rodents and insects. CITR, The University of Auckland, New Zealand, Tech. Rep.
28. Brunelli R (2009) Template matching techniques in computer vision: theory and practice. Wiley. https://books.google.com.au/books?id=AowB9dRNTqYC
29. Russell JC, Hasler N, Klette R, Rosenhahn B (2009) Automatic track recognition of footprints for identifying cryptic species. Ecology 90(7):2007–2013
30. García J, Pope C, Altimiras F (2017) A distributed K-means segmentation algorithm applied to Lobesia botrana recognition. Complexity 2017
31. Ebrahimi M, Khoshtaghaza M, Minaei S, Jamshidi B (2017) Vision-based pest detection based on SVM classification method. Comput Electron Agric 137:52–58
32. Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of Things (IoT): a vision, architectural elements, and future directions. Future Gener Comput Syst 29(7):1645–1660
33. Navada V (2018) LoRa. https://www.devopedia.org/lora
34. Goldsmith A (2005) Wireless communications. Cambridge University Press. https://books.google.com.au/books?id=ZtFVAgAAQBAJ
35. (2016) A comprehensive look at low power wide area networks. www.link-labs.com
36. Attar H, Vukobratovic D, Stankovic L, Stankovic V (2011) Performance analysis of node cooperation with network coding in wireless sensor networks. In: 2011 4th IFIP international conference on new technologies, mobility and security, pp 1–4
37. Attar H, Stankovic L, Stankovic V (2012) Cooperative network-coding system for wireless sensor networks. IET Commun 6(3):344–352
38. Yaqoob I, Hashem IAT, Mehmood Y, Gani A, Mokhtar S, Guizani S (2017) Enabling communication technologies for smart cities. IEEE Commun Mag 55(1):112–120
39. Frenzel L (2017) Long-range IoT on the road to success. https://www.electronicdesign.com/embedded-revolution/long-range-iot-road-success
40. Raza U, Kulkarni P, Sooriyabandara M (2017) Low power wide area networks: an overview. IEEE Commun Surv Tutor 19(2):855–873
41. Centenaro M, Vangelista L, Zanella A, Zorzi M (2016) Long-range communications in unlicensed bands: the rising stars in the IoT and smart city scenarios. IEEE Wirel Commun 23(5):60–67
42. SigFox (2017) SigFox technology overview. https://www.sigfox.com/en/sigfox-iot-technology-overview
43. Sayer P (2017) SigFox shows 20-cent IoT wireless module. https://www.computerworld.com.au/article/627814/sigfox-shows-20-cent-iot-wireless-module/
44. Berney P, Esther A, Jacob J, Prescott C (2014) Risk mitigation measures for anticoagulant rodenticides as biocidal products

Funding: This work was funded with a $30,000 grant from the Securing Food, Water and Environment research focus area from La Trobe University.

Author affiliations: Lyle Parsons & Robert Ross, Department of Engineering, School of Engineering and Mathematical Sciences, La Trobe University, Plenty Rd and Kingsbury Dr, Bundoora, VIC, 3086, Australia. Kylie Robert, Department of Ecology, Environment and Evolution, School of Life Sciences, La Trobe University, Plenty Rd and Kingsbury Dr, Bundoora, VIC, 3086, Australia.

Correspondence to Lyle Parsons. On behalf of all authors, the corresponding author states that there is no conflict of interest. This article does not contain any studies with human participants or animals performed by any of the authors.

Parsons, L., Ross, R. & Robert, K. A survey on wireless sensor network technologies in pest management applications. SN Appl. Sci. 2, 28 (2020). https://doi.org/10.1007/s42452-019-1834-0. Part of a collection: Engineering: Industry 4.0, IoT solutions and Smart Industries.
Dec 22, 2017 · Wilkinson Microwave Anisotropy Probe. Find green energy background stock images in HD and millions of other royalty-free stock photos, illustrations and vectors in the Shutterstock collection. Chemical energy creates change by being the basis of metabolism in plant and animal life. Energy is the backbone of any economic system. Free Light and Energy Background "Tickle Me Pink" Motion Backgrounds Elegant, yet quirky in its own fun way, "Tickle Me Pink" is the perfect background for the Elle Woods in all of us. The kinetic energy of an object of mass , moving with a velocity is given by. 7 questions and answers about Xcel Energy Background Check. Main Types of Biomass Energy Biomass Energy comes in many shapes and forms. The journal aims to be a leading peer-reviewed platform and an authoritative source of information for analyses, reviews and evaluations related to energy. The frequency of consumption of energy drinks and the reason for consuming alcohol may be related. DAN BROUILLETTE'S BACKGROUND AND RESUME: Dan Brouillette, nominated by President Trump to replace Rick Perry at the Energy Department, is not easy to characterize. Online Resource Center Documents and training information to help building communities and enforcement agencies comply with the Building Energy Efficiency Standards. Start the customization now!. Sodium chloride ( NaCl) is the most familiar example of a salt but others include calcium chloride ( CaCl2) and copper (II) chloride ( CuCl2 ). Aug 02, 2016 · Illuminated LCD screens draw most of their energy for these two components: * Image Apparatus - the matrix of pixels to which output is rendered and its supporting components * Backlight - shines light through the image apparatus from behind so y. If you have difficulty accessing these maps because of a disability, contact the Geospatial Data Science Team. Wind Energy Photos. Kinetic energy is present whenever an object is in motion. The modern pop-rock arrangement features energetic acoustic and electric guitar riffs, uplifting piano melody, strings, indie synth lines, and acoustic drums. (See NOW's Wind Power Map of the U. Led by the oil and gas industry, this sector regularly pumps the vast majority of its campaign contributions into Republican coffers. With the progress of energy market liberalisation, fully completed by mid-2007, the methodology for the collection of energy prices had to be adapted. Here is a list of the most common energy drink ingredients and their reported effects on the human body. AEGIS Energy Risk Partners with Baird Capital to Expand Fintech Capabilities THE WOODLANDS, TX - AEGIS Energy Risk, the leading fintech and advisory solutions provider for energy derivatives, today announced a minority investment from Baird Capital to facilitate the expansion of…. energy drink contained in metal can with electricity current element, teal background 3d illustration. When Low Power Mode is on, your iPhone will last longer before you need to charge it, but some features might take longer to update or complete. It is headquartered in Calgary, Alberta, and its common shares are publicly traded on the Toronto Stock Exchange under the symbol HSE. Sustainable Energy Development Authority, Malaysia. AmericanEnergyIndependence. All products. Yet as the population continues to grow, so will the demand for cheap energy, and an economy reliant on fossil fuels is creating drastic changes. 
Go to content We use cookies in order to ensure that you can get the best browsing experience possible on the Council website. Sting Sting can't waste his energy getting too into Timberlake's number -- he's got his own performance later in the show. More sources of energy need to be found and the world is now taking on another closer look at using ocean waves to produce energy. Energy Piles : Background and Geotechnical Engineering Concepts 16th Annual George F. housing market, despite the recent down- turn, has driven an increase in electricity consumption. Home » Renewable Energy » Program Updates and Background Information RPS Background Info This page contains links to relevant regulatory rulings by the New Jersey Board of Public Utilities that affect the Renewable Portfolio Standard (RPS) and the Solar Renewable Energy Credit (SREC) Program. Jan 25, 2019 · Nearby EnergySource is planning something similar and there are other interested parties in the region. Free Energy wallpapers and Energy backgrounds for your computer desktop. Solar energy is used for heating water for domestic use, space heating of buildings, drying agricultural products, and generating electrical energy. Nov 11, 2011 · The Halo Energy Sword - Background Tumblr Cursor will work on your page if you follow these instructions Login and go to your Tumblr page. Configurable settings designed specifically for Energy clients can alleviate much of the manual review and administrative workload that would otherwise be required. Wind Energy Basics. There are different types of energy, such as kinetic, potential, thermal, nuclear, chemical, light, and sound energy. Germany arbitration 3 3. Here's what people have said about working and interviewing at Husky Energy. Colorful Energy Background Rainbow vector gradient background with glowing light balls and twisting strands of energy. has approximately 24% of all the world's known coal reserves. Thank you for visiting Xcel Energy. This page, called Technologies Overview, has been replaced by a new page named Technologies. 0 degrees C. Pay your bill, manage your account, report an outage, and learn how to save energy. The observed acceleration of the Hubble expansion rate has been attributed to a mysterious "dark energy" which supposedly makes up about 70% of the universe. Energy Policy is an international peer-reviewed journal addressing the policy implications of energy supply and use from their economic, social, planning and environmental aspects. But how did it develop into a clean, abundant and free solution to tackling global warming?. Simon Corbell: So we're looking down the valley here near Royalla on the Monaro Highway, just south of. "Fuel Cell" is a bright pink background that uses lines and light to create visual interest. In our 25 years of experience, we've seen a lot of transformation. We have developed a distinctive approach that has helped us to maintain a leading position within the industry. Questions and Answers about Husky Energy Background Check. 0 Micrometers for Target and Background Materials and Scene Energy Sources for Naval Night Sensing by Defense Technical Information Center. Abstract blue background hi tech motion design cosmic glow lighting effects dynamic energy futuristic science sci fi wallpaper Abstract battery energy on blue color background Abstract background glow neon blue light lines. 
AEGIS Energy Risk Partners with Baird Capital to Expand Fintech Capabilities THE WOODLANDS, TX - AEGIS Energy Risk, the leading fintech and advisory solutions provider for energy derivatives, today announced a minority investment from Baird Capital to facilitate the expansion of…. has approximately 24% of all the world's known coal reserves. Final orders for the 2018-2021 Energy Efficiency Plans were issued in fall 2017 for the electric and gas energy efficiency programs that began on January 1, 2018. Led by our athletes, musicians, employees, distributors and fans, Monster is a lifestyle in a can!. Red and yellow gradient background with twisting waves of white energy. Free Energy-Efficient Home PowerPoint Template is a modern presentation template design for the presentations on Green and Energy-efficient topics, constructions, Smart Homes, building projects, natural environment, and refreshing residence. Background: The Global Energy Crisis The Global Energy Crisis is linked to many factors. Today, Duke Energy is dedicated to providing clean, reliable and affordable energy in seven states in the Southeast and Midwest. PPT layouts featuring energy - wind mills during bright summer background and a cream colored foreground Cool new slide set with renewable energy - engineers looking at wind turbine backdrop and a light blue colored foreground. Jul 03, 2018 · Zelenka's background has positioned him well for his new role at the Energy Department. We're one of the UK's leading renewable energy companies, owning 35 wind farms – including two offshore wind farms – and one of the largest operational battery storage units in Europe. Germany arbitration 3 3.
CommonCrawl
On the topology of manifolds with positive isotropic curvature
by Siddartha Gadgil and Harish Seshadri (Department of Mathematics, Indian Institute of Science, Bangalore-560012, India)
Abstract. We show that a closed orientable Riemannian $n$-manifold, $n \ge 5$, with positive isotropic curvature and free fundamental group is homeomorphic to the connected sum of copies of $S^{n-1}\times S^1$.
Proceedings of the American Mathematical Society. Received by editor(s): July 29, 2008. Published electronically: December 23, 2008. Communicated by: Jon G. Wolfson. MSC (2000): Primary 53C21. DOI: https://doi.org/10.1090/S0002-9939-08-09799-2. The copyright for this article reverts to public domain 28 years after publication.
CommonCrawl
JS-K, a nitric oxide donor, induces autophagy as a complementary mechanism inhibiting ovarian cancer

Bin Liu, Xiaojie Huang, Yifang Li, Weiguo Liao, Mingyi Li, Yi Liu, Rongrong He, Du Feng, Runzhi Zhu (ORCID: orcid.org/0000-0001-6082-9134) & Hiroshi Kurihara

Background: Ovarian cancer (OC) is the second most frequent gynecological cancer and is associated with a poor prognosis, because OC progression is often asymptomatic and the disease is detected at a late stage. There remains an urgent need for novel targeted therapies to improve clinical outcomes in ovarian cancer. As a nitric oxide prodrug, JS-K is reported to be highly cytotoxic to human cancer cells such as acute myeloid leukemia, multiple myeloma and breast cancer. This study aimed to investigate the influence of JS-K on proliferation and apoptosis in ovarian cancer cells and to explore possible autophagy-related mechanisms, which will contribute to future ovarian cancer therapy and support the view that JS-K holds great promise as a novel therapeutic agent against ovarian cancer.

Methods: The cytotoxicity, extracellular ROS/RNS activity and apoptotic effects of JS-K and the indicated inhibitors on ovarian cancer cells in vitro were evaluated by MTT assay, extracellular ROS/RNS assay, caspase activity assays and western blot. The autophagy-related effects of JS-K and the indicated inhibitors were further examined by MTT assay, cell transfection, immunofluorescence analysis, transmission electron microscopy (TEM) analysis and western blot in ovarian cancer cells in vitro. In vivo, BALB/c nude female mice bearing SKOV3 ovarian cancer cell xenografts were used to examine the efficacy of JS-K treatment on tumor growth. PCNA and p62 proteins were analyzed by immunohistochemistry.

Results: In vitro, JS-K inhibited the proliferation of ovarian cancer cells, induced apoptosis and cell nucleus shrinkage, enhanced the enzymatic activity of caspase-3/7/8/9, and significantly increased the production of ROS/RNS in ovarian cancer A2780 and SKOV3 cells; these effects were attenuated by the ROS inhibitor NAC. In addition, JS-K induced changes in autophagy-related proteins and autophagosomes in ovarian cancer A2780 and SKOV3 cells. In vivo, JS-K inhibited tumor growth and decreased p62 and PCNA protein expression in xenograft models established using SKOV3 ovarian cancer cells.

Conclusions: Taken together, we demonstrated that ROS/RNS stress-mediated apoptosis and autophagy are mechanisms by which SKOV3 cells undergo cell death after treatment with JS-K in vitro. Moreover, JS-K inhibited SKOV3 tumor growth in vivo. An alternative therapeutic approach for triggering cell death in cancer cells could constitute a useful multimodal therapy for treating ovarian cancer, which is known for its resistance to apoptosis-inducing drugs.

Background

Ovarian cancer is the most common cause of gynecologic cancer-related deaths worldwide. In 2019, 22,530 new ovarian cancer cases and 13,980 ovarian cancer deaths are projected to occur in the United States [1]. Moreover, in 2015, there were 521,000 new cases of ovarian cancer, and 225,000 women died of this disease in China [2]. Chemotherapy is a common method used to treat advanced ovarian cancer. However, the complications and severe side effects caused by current anticancer drugs (such as hematological and gastrointestinal toxicities) have been major problems in clinical treatments. Therefore, there is an urgent need for novel drug therapies that are effective and less toxic.
NO donor drugs have been reported to induce apoptosis in several types of human cancer cells [3]. NO prodrugs, such as O2-(2, 4-dinitrophenyl) 1- [(4-ethoxycarbonyl) piperazin-1-yl] diazen-1-ium-1, 2-diolate (JS-K, C13H16N6O8; CAS-No.: 205432–12-8; Fig. 1a), are a growing class of promising NO-based therapeutics. Previous studies have reported that JS-K exerts anti-tumor activities in many cancers, such as human leukemia, hepatoma, renal cancer cell lines, bladder cancer, and prostate cancer [4,5,6,7,8]. In vitro, experiments in various tumors cells involved the mitogen-activated protein kinase pathway, a regulatory mechanism, which modulated cell death, motility and proliferation [9]. The cGMP, a secondary messenger, is a vital mediator of NO for the physiological effects that NO activates soluble guanylyl cyclase to increase the production of cGMP [10]. JS-K exerts anti-tumor activities via ROS-triggered stress in non-small cell lung cancer cells and malignant gliomas [11, 12]. The chemical structure of JS-K Reactive oxygen species (ROS) and reactive nitrogen species (RNS) participate in some important physiological processes such as cell survival and cell death. ROS/RNS in high levels mainly induce cell death, low levels of ROS/RNS directly regulate the activities of p53, nuclear factor-κB (NF-κB), transcription factors, nuclear factor (erythroid-derived 2)-like 2 (Nrf2), and huge protein kinase cascades that are involved in modulating the cross-talk between apoptosis and autophagy [13]. Apoptosis and autophagy are two evolutionarily conserved processes that maintain homeostasis during stress. Although the two pathways utilize fundamentally distinct machinery, apoptosis and autophagy are highly interconnected and share many key regulators [14]. Autophagy is a mechanism by which cellular material is delivered to lysosomes for degradation allowing basal turnover of cell components and providing energy and macromolecular precursors. Autophagy, a tumor suppression mechanism, has been involved in various anticancer treatments used in clinical today and many therapies that are during the research [15]. Therefore, it is significant to manipulate autophagy for the development of cancer treatment. Autophagy is usually monitored by measuring the levels of autophagy-related proteins, such as microtubule-associated protein 1 light chain 3 II (LC3II) [16]. Sequestosome 1 (p62/SQSTM1), which is a vital selective receptor for autophagy, is definitely degraded in the process of autophagy [17, 18]. Therefore, the research on the combination of p62 levels and LC3II formations can suitably reflect autophagy levels. It was reported that JS-K induced autophagy in breast cancer cells. Electron microscopy confirmed that JS-K-treated breast cancer cells underwent autophagic cell death [19]. However, whether JS-K exerts anticancer effects via autophagy for ovarian cancer is unknown clearly. Therefore, the main objective of this study was to investigate the molecular mechanisms of cell death induced by NO released from the diazeniumdiolate NO donor JS-K in ovarian cancer cell lines and xenograft models. Nowadays, apoptosis is recognized as the major mechanism underpinning the effectiveness of anticancer therapies, but JS-K might provide an improved targeted therapeutic strategy for cancer chemotherapy. JS-K was purchased from Sigma-Aldrich (St. Louis, Mo, USA). Stock solutions of JS-K (20 mM) were prepared in dimethyl sulfoxide (DMSO) and stored at − 20 °C. 
The stock solutions were further diluted in the relevant culture medium prior to the experiments. The reactive oxygen species (ROS) inhibitor NAC (N-acetyl-L-cysteine) was purchased from Abcam (Cambridge, MA, USA). The autophagy inhibitors bafilomycin A1 (BAF) and 3-methyladenine (3-MA) were purchased from Selleck Chemicals (USA). Cisplatin was bought from Hansoh Pharma (China). The anti-LC3B and anti-ATG5 antibodies were obtained from Sigma-Aldrich (St. Louis, MO, USA), the anti-p62 antibody was bought from Abcam (Cambridge, UK), and other antibodies were purchased from Cell Signaling Technology (Danvers, MA, USA). HO8910, HO8910-PM, A2780, and SKOV3 cell lines were provided by the Affiliated Hospital of Guangdong Medical University (Zhanjiang, China). A2780 cells were cultured in RPMI 1640 medium (Gibco, China) containing 10% fetal bovine serum (FBS, Capricorn Scientific, Germany) and 1% penicillin/streptomycin (HyClone Laboratories, USA; 100 units/mL penicillin and 100 μg/mL streptomycin). HO8910, HO8910-PM, and SKOV3 cells were cultured in McCoy's 5A medium (Boster Biological Technology Co., Wuhan, China) supplemented with 10% FBS, 100 units/mL of penicillin, and 100 μg/mL of streptomycin. Cells were grown in a 37 °C incubator (Thermo, USA) with a humidified mixture of 5% CO2/95% air. Cells in the logarithmic growth phase were used for experiments.

Cell viability

To evaluate the growth inhibitory effect of the indicated JS-K treatments, a colorimetric MTT assay was performed as described previously. Briefly, cells were seeded in 96-well plates in 0.1 mL of their respective media. Cells were plated at different densities because the cell lines had different sizes and growth rates. HO8910, HO8910-PM, and A2780 cells were plated at an initial density of 7000 cells per well. SKOV3 cells were seeded at a density of 8000 cells per well. Cells were incubated in 96-well plates overnight and then cultured with the indicated treatments (JS-K + X groups were subjected to co-treatment with 2.5 μM JS-K and the indicated X) for 24 or 48 h. Thereafter, 20 μL of MTT (thiazolyl blue tetrazolium bromide, 5 mg/mL) was added during the last 4 h of incubation. The supernatants were removed, and 100 μL of DMSO was added to dissolve the formazan. Absorbance values were measured at 492 nm using a Multiskan MK3 microplate reader (Thermo Electron Corporation, USA). Cell viability was expressed as the percentage of proliferating cells in treated wells relative to untreated control cells (control = 100%).

Apoptosis assay

Apoptotic cells were detected using the Annexin V-FITC Apoptosis Detection Kit (Dojindo Laboratories, Japan) in accordance with the manufacturer's instructions. Cells were seeded at 3.5 × 10⁵ cells/well for the SKOV3 cell line and 4 × 10⁵ cells/well for the A2780 cell line in 6-well plates. The cells were then treated with JS-K or NAC at the appropriate concentrations as described (1.25–5 μM JS-K; vehicle, 2.5 μM JS-K, 200 μM NAC, or co-treatment with 2.5 μM JS-K and 200 μM NAC). The cells were washed twice with PBS, and then 1× Binding Buffer was added to achieve a concentration of 1 × 10⁶ cells/mL. Next, 100 μL of each cell suspension was transferred to a flow cytometry tube, and 5 μL of FITC Annexin V and 5 μL of propidium iodide (PI) were added to each tube. Each sample was gently mixed, protected from light, and stained for 15 min at RT.
Finally, 400 μL of 1× Binding Buffer was added to each tube to stop the staining reaction, followed by flow cytometric analysis using the BD FACSCantoTMIIFlow Cytometer (BD Biosciences, USA). Data analysis was performed using the system's software (BD Biosciences, USA). The percentage of cells positive for PI and/or Annexin V-FITC was reported inside the quadrants. Caspase enzymatic activity assay The quantification of caspase enzymatic activity is regarded as an important readout for apoptosis. We used the Caspase-Glo 3/7 Assay Kit, the Caspase-Glo 8 Assay Kit, and the Caspase-Glo 9 Assay Kit (Promega Co., USA) to examine the activity of caspase-3/7, caspase-8, and caspase-9, respectively, according to the manufacturer's protocol. Briefly, A2780 and SKOV3 cells were plated at a density of 4 × 105 and 3.5 × 105 cells per well in 6-well plates, respectively. Following the indicated treatments (1.25–5 μM JS-K; vehicle, 2.5 μM JS-K, 200 μM NAC, co-treatment with 2.5 μM JS-K and 200 μM NAC) and incubation period (48 h), approximately 6 × 104 cells per sample were transferred into 1.5-mL tubes. Next, 100 μL of Gaspase-Glo reagent (relative Caspase substrate) was added to each sample, and the cells were incubated for 1 h at RT protected from light. Sample luminescence values were measured using the Sirius L Tube Luminometer (Berthold Detection Systems, Germany). Relative caspase activities were normalized to the raw luminescence units (RLUs) of the untreated control. Colony suppression assay The colony-forming ability of cells was assessed as described. In these experiments, cells were plated in 6-well plates and incubated with JS-K (0, 1.25 and 2.5 μM) or the indicated treatments (vehicle, 2.5 μM JS-K, 200 μM NAC, co-treatment with 2.5 μM JS-K and 200 μM NAC) for 6 h. The cells were then harvested and quantified for each group. Around 2000 SKOV3 cells and 5000 A2780 cells per group were plated into individual wells of 6-well plates with a drug-free medium in order to assay colony formation. We changed the medium every 3 d. After 8 d of plating, the colonies were fixed in 4% paraformaldehyde (PFA) for 20 min, washed twice in PBS, stained with 0.5% crystal violet solution for 10 min, washed in ultrapure water three times, and photographed. ROS/RNS detection The intracellular ROS/RNS levels were analyzed using the Cellular ROS/RNS Detection Assay Kit (Abcam, USA; catalog number: ab139473) according to the manufacturer's instructions. For the fluorescence microplate assay, cells were plated in 6-well plates. After the indicated treatment, an equal quantity of cells was washed with PBS, centrifuged at 1000 rpm for 3 min, and incubated with 1:2500 of the Oxidative Stress Detection Reagent (Green). The cells were then transferred to a 96-well plate. After 60 min of incubation at 37 °C, the fluorescence was measured using the Mithras LB 940 Multimode Microplate Reader (Berthold Technologies, Germany) at an excitation wavelength of 488 nm and an emission wavelength of 520 nm. ROS/RNS values were normalized to the fluorescence of the untreated control. For the confocal microscopy assay, cells were seeded into 35-mm dishes with 14-mm bottom wells and subjected to the indicated treatments for 24 h. Thereafter, the cells were cultured in PBS containing 1: 2500 dilution of the Oxidative Stress Detection Reagent (Green) and 1: 2500 dilution of the Superoxide Detection Reagent (Orange) at 37 °C for 60 min in the dark. The cells were then washed two times with PBS. 
The LEICA TCS SP5 laser confocal fluorescence microscope (Leica, Germany) was used to measure the ROS/RNS-dependent fluorescence intensity at the excitation wavelength of 490 nm and the emission wavelength of 525 nm. The superoxide-dependent fluorescence intensity was also measured at the excitation wavelength of 550 nm and the emission wavelength of 620 nm. The cells were seeded into 6-well culture plates at a density of 4 × 105 cells/well (for the A2780 cells) or 3.5 × 105 cells/well (for the SKOV3 cells). The indicated compounds were added at the specified doses, and the samples were incubated for the specified times, washed, harvested, and lysed using RIPA lysis buffer (Beyotime Biotech Inc., Nantong, China) containing with 1 mM PMSF and 30 nM okadaic acid. Equal amounts of protein (30 μg) from each sample were separated by SDS-PAGE and transferred to polyvinylidene fluoride (PVDF, Merck Millipore, Darmstadt, Germany) membranes. The blots were blocked for 1 h in 5% fat-free milk with 0.1% Tween-20 in 0.02 M TBS buffer (TBST). The blots were incubated with 5% BSA containing the appropriate primary antibodies at a 1:1000 dilution overnight at 4 °C. After incubation with the primary antibody, the membranes were washed three times with TBST and incubated with 5% fat-free milk containing with the appropriate horseradish peroxidase-conjugated secondary antibody (Sino Biological Inc., Beijing, China; dilution 1:2000) for 4–6 h at 4 °C. Immunoblots were developed using the Immobilon TM Western Chemiluminescent HRP Substrate (Millipore Corporation, Billerica, MA, USA), and they were then exposed to Kodak film (Kodak, Rochester, NY, USA). Equal protein loading was confirmed by probing blots with the anti-GAPDH antibody. Cell transfection The cells were grown on 35-mm glass-bottomed dishes (In Vitro Scientific). Subsequently, 1 μg/well of the GFP-LC3 plasmid was transfected into the cells using Lipofectamine™ 2000 (Life Technologies, USA) in Opti-MEM (Life Technologies, USA) according to the manufacturer's instructions. After 5 h, the medium was exchanged for the standard culture medium, and the cells were incubated overnight. Then cells were treated with or without JS-K (2.5 μM) for 24 h and fixed with 4% PFA. The location and expression of LC3-GFP were observed using the LEICA TCS SP5IIconfocal microscope (Leica, Germany). For siRNA transfection, SKOV3 cells were seeded onto 6-well plates and grown to 80% confluence. Next, 0.1 μM of ATG5-specific siRNAs was transiently transfected into the cells using Lipofectamine™ 2000 according to the manufacturer's recommendations. SiRNA Oligos were chemically synthesized by GenePharma (Suzhou, China; 1# ATG5-specific siRNA sense strand: 5′-gacguugguaacugacaaatt-3′; 2# ATG5-specific siRNA sense strand: 5′-guccaucuaaggaugcaautt-3′; 3# ATG5-specific siRNA sense strand: 5′-gaccuuucauucagaagcutt-3′; negative control sense strand, 5′-uucuccgaacgugucacgutt-3′). After 48 h of culture, cells were treated with or without JS-K (2.5 μM) for 24 h, photographed, and harvested for Western blot analysis. After treatment with the vehicle or 2.5 μM JS-K for 24 h, the cells were harvested using 0.25% trypsin and centrifuged at 800 rpm for 2 min. The supernatant was removed, and the cells were fixed in 3% glutaraldehyde at 4 °C for 4 h followed by post-fixing in 1% OsO4 for 1.5 h at 4 °C. Following ethanol dehydration and resin embedding, ultra-thin sections (90 nm) were prepared using a UC7 microtome (Leica, Germany), and the sections were then mounted on copper grids. 
The autophagosomes were observed using the JEM-1400 electron microscope (JEOL, Japan), and images were collected using 832 Digital Micrograph Software (Gatan, USA).

Xenograft model and treatments

Five-week-old BALB/c nude female mice (Lingchang Biotechnology Co., Shanghai, China, No. 2013001826242) were housed five per cage and bred in an SPF-level animal house with a 12-h light/12-h dark cycle, a constant temperature of 20–26 °C, and a relative humidity of 40–70%. All mice had free access to food and water. SKOV3 cells were prepared at a concentration of 5 × 10⁶ cells/mL, and 5 × 10⁵ SKOV3 cells resuspended in 100 μL of medium were injected subcutaneously into the left flank of each mouse. After 12 d, mice with tumor growths were randomly divided into three groups (10 mice/group) and given treatments (injected subcutaneously) of physiological saline solution, JS-K (6 mg/kg), or Cisplatin (2 mg/kg) once every 2 d for 22 d. The body weights were recorded on the indicated days. The length and width of tumors at the indicated time points were measured using a Vernier caliper, and the volumes of the tumors were calculated using the following formula:

$$ \text{Volume} = (\text{width} \times \text{length})/2 $$

On the 22nd day after the initiation of the drug treatments, the mice were sacrificed under 5% isoflurane (Linuo Pharma, Jinan, China) delivered by a rodent anesthesia machine (Yuyan Instruments, Shanghai; ABS; 1 L/min O2); the xenografts were harvested, and the extraneous tissues were carefully removed. Tumors were photographed, fixed in 10% buffered formalin, and processed for paraffin embedding and sectioning. Serum aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels were quantified using an ELISA kit according to the manufacturer's instructions.

Histological analysis and immunohistochemistry

After fixation and routine dehydration, all tumor samples were embedded in paraffin and cut into 2-μm thick sections. The xenografted specimens for histological analysis were stained with hematoxylin and eosin (HE) in order to observe the general tissue morphology under the DM4000B microscope (Leica, Germany). For immunohistochemistry, sections were deparaffinized using xylenes for 10 min each and hydrated using a graded alcohol series (100 to 75%) for 5 min each. Antigen retrieval was performed by heating the sections in citrate buffer for 2 min in a pressure cooker. Endogenous peroxidase activity was inactivated using 0.3% hydrogen peroxide for 10 min at RT in the dark. Afterwards, sections were incubated with a 1:8000 dilution of anti-PCNA antibody and a 1:800 dilution of anti-p62 antibody (Cell Signaling Technology, USA) overnight in a moist chamber at 4 °C. The next day, the sections were washed and incubated with an HRP-conjugated rabbit secondary antibody (Cell Signaling Technology, USA) for 30 min at RT. The PCNA and p62 signals were detected using DAB substrate (brown). All tumor sections were counterstained with haematoxylin for 1 min, dehydrated, dried, and mounted using Permount TM Mounting Medium. The images were captured using the LEICA DM4000 B LED microscope (Leica, Germany).

Statistical analysis

GraphPad Prism 5 statistical software (GraphPad Software, San Diego, CA, USA) and Microsoft Excel were used for the data analysis. Statistical analyses were performed with one-way ANOVA followed by Tukey's test (an illustrative scripted version of this analysis is sketched below). The data were expressed as the Mean ± SEM.
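For readers who want a scriptable check of the analysis just described, the sketch below reproduces the tumor-volume formula above together with a one-way ANOVA followed by Tukey's test in Python. SciPy and statsmodels are my choice here, not tools used by the authors (who used GraphPad Prism 5 and Excel), and the caliper readings are invented placeholders.

```python
# Hypothetical re-implementation of the analysis described above:
# tumor volume = (width x length) / 2, then one-way ANOVA + Tukey's test.
# The numbers below are invented placeholders, not data from the study.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def tumor_volume(width_mm: float, length_mm: float) -> float:
    """Volume estimate from caliper measurements, as defined in the Methods."""
    return (width_mm * length_mm) / 2

# Example caliper readings (width, length) for three treatment groups.
groups = {
    "control":   [(8.0, 12.0), (7.5, 11.0), (9.0, 13.0)],
    "JS-K":      [(6.0,  9.0), (5.5,  8.5), (6.5, 10.0)],
    "Cisplatin": [(4.0,  6.5), (4.5,  7.0), (3.5,  6.0)],
}

volumes = {g: [tumor_volume(w, l) for w, l in pairs] for g, pairs in groups.items()}

# One-way ANOVA across the three groups.
f_stat, p_value = f_oneway(*volumes.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's honestly-significant-difference post hoc test.
values = np.concatenate([volumes[g] for g in volumes])
labels = np.concatenate([[g] * len(volumes[g]) for g in volumes])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```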
p < 0.05 was considered statistically significant. Each experiment was repeated at least three times. All animal experiments were strictly conducted in accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. Dissections were performed under anesthesia and all efforts were made to minimize suffering. JS-K induces cell death in ovarian cancer A2780 and SKOV3 cells As shown in Fig. 2, JS-K inhibited the proliferation of ovarian cancer cells in a concentration- and time-dependent manner. The susceptibility to proliferation inhibition from JS-K exposure was greater in the A2780 (IC50 = 2.15 μM, 48 h) and SKOV3 (IC50 = 3.42 μM, 48 h) cell lines than the HO8910 (IC50 = 13.7 μM, 48 h) or HO8910-PM (IC50 = 15.6 μM, 48 h) cell lines, regardless of treatment durations at 24 or 48 h (Fig. 2a). Cell death was quantified using DAPI staining. The results showed that JS-K treatment induced apoptosis and cell nucleus shrinkage in the A2780 and SKOV3 cell lines (Fig. 2b). The morphology of cells treated with JS-K obviously altered, displaying a distorted shape (Fig. 2c). The colony-forming assay confirmed that the proliferation of the A2780 and SKOV3 cell lines was inhibited by JS-K in a concentration-dependent manner (Fig. 2d). In order to assess JS-K-induced apoptosis, apoptosis was detected using Annexin V fluorescein isothiocyanate (FITC)/PI. Our results revealed that JS-K increased the rates of Q2 + Q4 (early and late apoptosis rates) in a concentration-dependent manner (Fig. 3a). Furthermore, increases in the expression levels of caspase-3/7, caspase-8, and caspase-9 were observed in cells treated with JS-K, in addition to the expression levels of the apoptotic protein cleaved-PARP. In contrast, the expression levels of Bcl-2/Bax decreased in a concentration-dependent manner in JS-K-treated cells (Fig. 3b-e). JS-K specifically induces death in ovarian cancer cells. a JS-K inhibition performed by JS-K in ovarian cancer cells were assessed by MTT assay. b Cell nucleus was stained with DAPI (48 h treatment). c JS-K induced cell apoptosis showing a concentration-dependent manner (200 × magnification). d Colony formation assay was used to detect cell proliferation of different JS-K concentration treated A2780 and SKOV3 cells. e and f The quality graphs of the c and d JS-K regulates survival and activates the apoptosis-related signaling pathway. a JS-K-induced apoptosis in A2780 and SKOV3 cells were detected by flow cytometry (Mean ± SD, 48 h and 24 h treatment). b Caspase family proteins 3/7/8/9 activity were detected in A2780 and SKOV3 cells treated with different concentration JS-K. c and d The expression of apoptosis-related proteins were checked in A2780 and SKOV3 cells treated with different concentration JS-K for 48 h treatment. e The quality graphs of the c and d JS-K induces cell death by producing ROS/RNS in A2780 and SKOV3 ovarian cancer cells To investigate whether ROS/RNS induced by JS-K treatment could promote cell death, A2780 and SKOV3 cells were co-cultured with 2.5 μM of JS-K and different concentrations of NAC (a ROS inhibitor) for 24 h. The data demonstrated that JS-K significantly increased the production of ROS/RNS in a concentration-dependent manner, and ROS/RNS levels decreased in both cell lines co-treated with 2.5 μM of JS-K and 200 μM of NAC. This was in contrast to the increase in ROS/RNS levels observed in cells treated with 2.5 μM JS-K alone (Fig. 4a-c). 
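A brief note on the dose-response values reported above: IC50 estimates such as 2.15 μM (A2780) and 3.42 μM (SKOV3) at 48 h are typically obtained by fitting a sigmoidal curve to the MTT viability data. The sketch below shows one common way to do this with SciPy; the four-parameter logistic model and the viability numbers are illustrative assumptions, since the paper does not state which fitting procedure was used.

```python
# Illustrative IC50 estimation from MTT viability data using a
# four-parameter logistic (Hill) model. The viability values below are
# made up for demonstration; the fitting approach is an assumption,
# not the procedure reported by the authors.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc_um = np.array([0.31, 0.625, 1.25, 2.5, 5.0, 10.0, 20.0])     # JS-K, uM
viability = np.array([95.0, 88.0, 70.0, 48.0, 30.0, 18.0, 10.0])  # % of control

params, _ = curve_fit(four_pl, conc_um, viability,
                      p0=[10.0, 100.0, 2.0, 1.0], maxfev=10_000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 = {ic50:.2f} uM (Hill slope {hill:.2f})")
```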
A2780 and SKOV3 cells were co-cultured with 2.5 μM JS-K and different concentrations of NAC for 24 h. Cell viability was rescued by NAC at concentrations of 200 and 400 μM (Fig. 5a). Furthermore, JS-K-induced cell death was rescued by a treatment with 200 μM of NAC (Fig. 5b-c). In addition, the activity levels of caspase-3/7, caspase-8, caspase-9 and the effects of cell proliferation and colony formation in cells co-treated with JS-K and NAC were obviously reversed compared to cells treated with 2.5 μM JS-K only (Fig. 5d, g, h). Moreover, cleaved-PARP protein was not detected in cells co-treated with 2.5 μM JS-K and 200 μM NAC, while this protein was expressed in both cell lines treated with 2.5 μM JS-K alone. Meanwhile, the levels of Bcl-2/Bax increased and that of p-p38 decreased in cells co-treated with 2.5 μM JS-K and 200 μM NAC compared to cells treated with 2.5 μM JS-K only (Fig. 5e-f). JS-K induced ROS/RNS of ovarian cancer cells could be offseted by NAC. a and b The ROS/RNS content of A2780 and SKOV3 cells treated with indicated JS-K and NAC for 48 h were analyzed by ELISA method. b The ROS/RNS content of A2780 and SKOV3 cells treated with indicated JS-K and NAC for 48 h were analyzed by Confocal microscopy NAC rescues JS-K-mediated cell death of A2780 and SKOV3. a The cell viability of different NAC concentration reversed JS-K-treated A2780 and SKOV3 cells were analyzed with MTT method. b JS-K and NAC co-treated cells were detected with microscope (200 × magnification). c Annexin V/PI staining of A2780 and SKOV3 cells apoptosis treated with 2.5 μM JS-K /NAC 200 μM (Mean ± SD, 48 h and 24 h treatment). d The Caspase family proteins 3/7/8/9 activity in A2780 and SKOV3 cells treated with 2.5 μM JS-K or 200 μM NAC as indication. (48 h treatment) were measured using ELISA assay. e The expression of apoptosis-related proteins in A2780 and SKOV3 cells treated with 2.5 μM JS-K /200 μM NAC (48 h treatment) were measured using WB assay. g Colony formation assay was used to detect cell proliferation of JS-K and NAC co-treated A2780 and SKOV3 cells (48 h treatment). —represents 40 μm. (f and h) The quality graphs of the e and g JS-K induces autophagy in A2780 and SKOV3 ovarian cancer cells In order to verify the role of autophagy, autophagy-related factors were assayed in A2780 and SKOV3 cells. The data showed that the expression of LC3BII and ATG5 proteins increased, while p62 mRNA level (data not shown) decreased and protein level expression decreased in a concentration-dependent manner in cells treated (Fig. 6a). GFP-LC3 plasmids were transfected into the two cell lines, and confocal microscopy was used to observe the distribution of LC3BI/II. Results showed that LC3BII fluorescence pattern in JS-K-treated cells was punctuate, as opposed to diffuse in control cells (Fig. 6c). Moreover, TEM was used to observe the intracellular morphology of SKOV3 cells. JS-K was found to induce a greater accumulation of autophagosomes in SKOV3 cells, which indicated a disruption in the last step of autophagy in which autophagosomes fuse with lysosomes and the cellular content is degraded (Fig. 6d). In order to verify whether autophagy could be induced by oxidative stress, we used NAC as a ROS inhibitor and performed the western blot experiment to measure the protein expression of LC3BI/II and p62 in SKOV3 cells with indicated treatments. We found the effect of which JS-K induced LC3BII protein increase and p62 protein decrease could be reversed relatively by NAC (Fig. 6e). 
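The quantification that accompanies the immunoblots (for example, LC3BII and p62 levels in Fig. 6) is normally produced by densitometry, with each band normalized to the GAPDH loading control and then expressed relative to the untreated control. The sketch below illustrates that arithmetic with invented intensity values; it is not the authors' actual analysis pipeline.

```python
# Minimal sketch of immunoblot densitometry normalization: band intensity
# is divided by the GAPDH loading control, then expressed relative to the
# untreated control lane. Intensity values are invented for illustration.

raw_intensity = {          # arbitrary densitometry units per lane
    "control":      {"LC3BII": 1200, "p62": 3100, "GAPDH": 5000},
    "JS-K 2.5 uM":  {"LC3BII": 3400, "p62": 1500, "GAPDH": 4900},
}

def relative_level(sample: str, protein: str) -> float:
    """Protein level normalized to GAPDH and to the untreated control."""
    norm = {s: raw_intensity[s][protein] / raw_intensity[s]["GAPDH"]
            for s in raw_intensity}
    return norm[sample] / norm["control"]

for protein in ("LC3BII", "p62"):
    fold = relative_level("JS-K 2.5 uM", protein)
    print(f"{protein}: {fold:.2f}-fold vs control")
```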
In addition, the inhibitors 3-MA and BAF were used to identify the type of cell death induced by JS-K treatment. The results suggested that BAF moderately reversed JS-K-induced cell death in the SKOV3 cell line, whereas 3-MA had no effect on the SKOV3 cell line (Fig. 6f). In order to confirm that JS-K induced autophagy and to determine whether autophagy inhibition affects apoptosis, we performed western blotting experiments in SKOV3 cells treated with BAF. The results showed that LC3BII and Bax proteins were elevated in cells co-treated with 2.5 μM JS-K and 25 nM BAF compared to cells treated with 2.5 μM JS-K alone, and p62 and Bcl2 proteins were also increased in cells co-treated with 2.5 μM JS-K and 25 nM BAF compared to cells treated with 2.5 μM JS-K alone (Fig. 6g). To validate the role of ATG5 in JS-K-induced cell death, western blotting was used to show that ATG5 siRNA decreased ATG5 protein levels in SKOV3 cells (Fig. 6h). The MTT assay was used to analyze the viability of SKOV3 cells treated with JS-K/ATG5 siRNA 1# for 48 h, along with microscopic observations of the cellular morphology. These experiments showed that ATG5 siRNA 1# rescued the cell death triggered by JS-K treatment (Fig. 6i).

JS-K induced autophagy-related protein and autophagosome changes in ovarian cancer A2780 and SKOV3 cells. a Autophagy-related protein expression was measured by western blot. b Quantification graphs of a. c The GFP-LC3 plasmid was transfected into the two cell lines, and confocal microscopy was used to observe its distribution. d TEM was used to observe intracellular morphology (24 h treatment). e Western blot was used to examine LC3B I/II and p62 protein expression in SKOV3 cells treated with JS-K/NAC. f The MTT method was used to analyze the cell viability of SKOV3 cells co-treated with 3-MA and BAF for 48 h. g Western blot was used to examine LC3B I/II distribution and p62, Bcl2 and Bax protein expression in SKOV3 cells treated with JS-K/BAF. h Western blot showing that ATG5 siRNA 1# decreased ATG5 protein expression. i Cell morphology (microscopy) and cell viability (MTT assay) of SKOV3 cells treated with JS-K/ATG5 siRNA 1# for 48 h. Scale bars represent 40 μm

JS-K inhibits tumor growth in vivo

To investigate the effects of JS-K on tumor growth in vivo, we used a mouse xenograft tumor model. Physiological saline solution (control group), JS-K (6 mg/kg), or Cisplatin (2 mg/kg) was used to treat nude mice (10 mice/group) for 23 d. The results showed that Cisplatin significantly reduced the body weights of the mice and inhibited tumor growth by 75% compared to the control group, whereas JS-K treatment did not significantly impact the body weights of the mice, although the tumor volume was significantly inhibited by 40% compared to the control group (Fig. 7a-a, b, e). The AST and ALT levels were detected by ELISA, and JS-K was found to induce only minor liver injury compared with the Cisplatin group (Fig. 7a-c, d). H&E staining demonstrated that there was a larger area of necrosis after JS-K treatment compared to the controls. Moreover, the number of cells expressing PCNA and p62 proteins in the tumors decreased in the JS-K-treated group compared to the control group (Fig. 7a-f). These results indicated that JS-K (6 mg/kg) inhibited tumor growth and had fewer side effects than treatment with Cisplatin (2 mg/kg) in vivo.

JS-K inhibits tumor growth in vivo and JS-K and Cisplatin cooperate to enhance Cisplatin sensitivity of ovarian cancer A2780 and SKOV3 cells. (a) a-b.
The effect of JS-K on body weight and tumor size in nude mice was compared between the Cisplatin, JS-K and control groups (n = 10, Mean ± SD); c-d. AST and ALT were detected by ELISA; e. Photographs of tumors from the Cisplatin, JS-K and control groups; f. HE staining and immunohistochemical analysis of PCNA and p62 protein expression in tumor tissue from the control, JS-K and Cisplatin groups (400 × magnification). (b) Cell viability was analyzed by the MTT method. (c) Cell apoptosis was detected by microscopy and flow cytometry (Mean ± SD, 48 h treatment). Scale bars represent 40 μm

JS-K enhances Cisplatin sensitivity of ovarian cancer A2780 and SKOV3 cells

We observed that Cisplatin, a classic clinical drug, could induce A2780 and SKOV3 cell death. Moreover, we found that JS-K and Cisplatin cooperate to enhance the Cisplatin sensitivity of ovarian cancer A2780 and SKOV3 cells (Fig. 7B-a, b). The effects of JS-K and Cisplatin in the A2780 and SKOV3 cell lines were investigated with the MTT assay and flow cytometry; consistent with the microscopic observations, the rate of apoptosis in both cell lines co-treated with JS-K (2.5 μM) and Cisplatin (8 μM or 12 μM) was increased compared with that of cells treated with Cisplatin (8 μM or 12 μM) alone.

Discussion

NO is known for its anti-proliferative and cytotoxic effects in glioblastoma cells, in addition to its impact on migration, invasion, and angiogenesis in various tumor cells [12, 20]. NO prodrugs, such as JS-K, release NO under enzymatic catalysis via GST isoforms [8]. Previous studies have reported that JS-K exerts anti-tumor activities in many cancers, such as human leukemia, lung cancer, colorectal cancer, bladder cancer, and prostate cancer [4, 5]. In vitro experiments indicated that some regulatory mechanisms are involved in various tumors, such as the mitogen-activated protein kinase pathways, which modulate proliferation, motility, and cell death [9]. JS-K promotes apoptosis by inducing ROS production and the ubiquitin-proteasome pathway in human prostate cancer cells [21, 22]. In this study, we showed for the first time that JS-K induced ovarian cancer cell death via an autophagic mechanism in vitro. It also suppressed ovarian cancer SKOV3 cell xenograft tumor growth and affected autophagy-related protein expression in vivo. Although NO is a highly diffusible, reactive molecule whose concentration decreases sharply in cells, we observed strong cellular effects in ovarian cancer cells after 24 and 48 h of treatment with JS-K. JS-K treatment was shown to reduce the viability of ovarian cancer cells in a concentration- and time-dependent manner. After 24 or 48 h of treatment, we observed a loss of viability in A2780 and SKOV3 cells treated with JS-K (0.31–20 μM). Thus, JS-K selectively reduced the viability of A2780 and SKOV3 cells. Apoptosis is considered to be the process of cell death induced by JS-K through activation of caspases and the fragmentation of DNA [23, 24]. Therefore, alterations in apoptosis rates and apoptosis-related proteins were investigated. As expected, we observed activation of caspase 3/7/8/9, cleavage of PARP1 into a fragment of 89 kDa, a reduction in Bcl2/Bax levels and an increase in P-P38 levels, all of which are normally linked to apoptosis. In this study, NAC (a common antioxidant) was used to confirm that JS-K caused ROS/RNS stress in A2780 and SKOV3 cells and to determine whether JS-K-induced apoptosis could be reversed by it.
The results showed that JS-K caused cell proliferation defects, caspase 3/7/8/9 activation and changes in the expression of apoptosis-related proteins, all of which could be reversed by NAC. In conclusion, JS-K induced A2780 and SKOV3 ovarian cancer cell death through ROS/RNS stress-mediated apoptosis in vitro. It has been reported that excessive autophagy can induce apoptosis; the crosstalk between apoptosis and autophagy is complex, and autophagy can promote cell survival or cell death under various cellular conditions [14]. Both apoptosis and macroautophagy (hereafter referred to as autophagy) can be induced by extracellular stimuli, such as treatment with a chemotherapeutic agent [25]. Autophagy-related proteins, such as LC3B and p62, were also examined. LC3BII protein increased and p62 decreased in a concentration-dependent manner, which suggested that autophagic flux may be activated by JS-K. Meanwhile, autophagosomes were observed in the cells' ultrastructure, and we found that JS-K could accelerate autophagosome formation. The autophagy-related function of ATG5 in SKOV3 cell death induced by JS-K was confirmed: ATG5 expression increased, and ATG5 siRNA decreased the extent of cell death induced by JS-K. All of these results suggested that autophagy was related to SKOV3 cell death induced by JS-K, which has not been reported before. In order to verify whether autophagy could be induced by oxidative stress, we performed western blot experiments with the antioxidant NAC. The results showed that NAC inhibited the JS-K-induced changes in autophagy-related protein expression, indicating that JS-K may cause SKOV3 cell death through ROS/RNS stress-mediated autophagy. Our findings are consistent with the report of apelin-13-induced MCs-HUVEC adhesion via a ROS-autophagy pathway [26]. Moreover, the autophagy inhibitors 3-MA and BAF were used to verify the autophagic component of JS-K-induced cell death. We found that only BAF could attenuate the cell death induced by JS-K in the SKOV3 cell line. BAF is an inhibitor of the V-type ATPase as well as certain P-type ATPases. Treatment with BAF ultimately results in a block in the fusion of autophagosomes with lysosomes, thus preventing the maturation of autophagosomes into autolysosomes. The western blot results also showed that BAF modified the JS-K-induced changes in autophagy-related proteins and in Bcl2/Bax. These results indicated that JS-K could induce autophagy, which might modulate Bcl2 family proteins in the JS-K-induced apoptosis pathway. Previous studies have shown that JS-K significantly reduced the growth of a variety of tumors (such as NSCLC, malignant gliomas, and multiple myeloma) compared with control animals treated with vehicle [12, 27]. Building on the above in vitro results, we verified the anti-ovarian cancer effects of JS-K in vivo. The results showed that Cisplatin (2 mg/kg) significantly reduced the body weights of the mice, and AST and ALT levels were higher than those of the JS-K-treated group (6 mg/kg); meanwhile, the tumor volume was inhibited by 75% compared to the control group. JS-K treatment (6 mg/kg) did not significantly impact the body weights of the mice, although the tumor volume was significantly inhibited by 40% compared to the control group. The morphology of the tumors and the p62 and PCNA expression levels changed significantly among the three groups of tumors.
PCNA protein expression in the JS-K-treated group was higher than that of the Cisplatin-treated group and lower than that of the control group, suggesting that the tumor proliferation capacity of the JS-K-treated group was lower than that of the control group but higher than that of the Cisplatin-treated group. The results showed that JS-K (6 mg/kg) has anti-tumor activity, but it has no significant impact on the growth of the nude mice themselves compared to the control group. Although Cisplatin (2 mg/kg) has an excellent tumor suppression effect, its inhibitory effects on the growth of the nude mice are obvious, which is consistent with the variety of clinical side effects associated with the use of this chemotherapy. Cisplatin's side effects include neurotoxicity, nephrotoxicity, ototoxicity, nausea, and vomiting [28]. All of the above results showed that JS-K has anti-tumor activity and may have fewer side effects than Cisplatin. Due to the severe side effects of Cisplatin and the drug resistance of chemotherapeutics, the search for chemotherapy-compatible drugs is an active area of research. Accordingly, we investigated whether JS-K could potentiate the antineoplastic effect of Cisplatin; the results showed that SKOV3 and A2780 ovarian cancer cells co-treated with JS-K and Cisplatin underwent more cell death than cells treated with Cisplatin alone. Based on our findings, JS-K may be used in combination with Cisplatin to increase antitumor activity against ovarian cancer in future clinical practice.

Conclusions

In this study, we characterized a cell death mechanism induced in ovarian cancer by the NO donor JS-K in vitro and in vivo. Our results demonstrate that JS-K can induce cell death in ovarian cancer cells via ROS/RNS-mediated autophagy and ROS/RNS-triggered apoptosis pathways in vitro. In vivo, JS-K treatment has an anti-tumor effect that is less robust than that of Cisplatin treatment, but the side effects of JS-K treatment on the mice (liver injury and weight loss) are minimal compared to those of Cisplatin treatment. In conclusion, our results suggested that JS-K suppresses tumor growth in vivo, and it elicits anti-tumor effects via ROS/RNS-mediated autophagy and ROS/RNS-triggered apoptosis pathways in vitro. The activation of an alternative cell death pathway could be useful for developing multimodal cancer therapies for ovarian cancer, which is known for its strong anti-apoptotic mechanisms and drug resistance.

Abbreviations

3-MA: 3-Methyladenine; ALT: Alanine aminotransferase; AST: Aspartate aminotransferase; ATP: Adenosine triphosphate; BAF: Bafilomycin A1; JS-K: O2-(2,4-dinitrophenyl) 1-[(4-ethoxycarbonyl)piperazin-1-yl]diazen-1-ium-1,2-diolate; MAPK: Mitogen-activated protein kinase; NAC: N-acetyl-L-cysteine; PARP: Poly(ADP-ribose) polymerase; PCNA: Proliferating cell nuclear antigen; RNS: Reactive nitrogen species; ROS: Reactive oxygen species

References

Siegel RL, Miller KD, Jemal A. Cancer statistics, 2019. CA Cancer J Clin. 2019;69(1):7–34. Chen W, Zheng R, Baade PD, Zhang S, Zeng H, Bray F, Jemal A, Yu XQ, He J. Cancer statistics in China, 2015. CA Cancer J Clin. 2016;66(2):115. Grimm EA, Sikora AG, Ekmekcioglu S. Molecular pathways: inflammation-associated nitric-oxide production as a cancer-supporting redox mechanism and a potential therapeutic target. Clin Cancer Res. 2013;19(20):5557. Chakrapani H, Kalathur RC, Maciag AE, Citro ML, Ji X, Keefer LK, Saavedra JE. Synthesis, mechanistic studies, and anti-proliferative activity of glutathione/glutathione S-transferase-activated nitric oxide prodrugs. Bioorg Med Chem. 2008;16(22):9764–71. Qiu M, Chen L, Tan G, Ke L, Zhang S, Chen H, Liu J.
A reactive oxygen species activation mechanism contributes to JS-K-induced apoptosis in human bladder cancer cells. Sci Rep. 2015;5. Laschak M. JS-K, a glutathione/glutathione S-transferase-activated nitric oxide releasing prodrug inhibits androgen receptor and WNT-signaling in prostate cancer cells. BMC Cancer. 2012;12(1):130. Ling L, Huang Z, Chen J, Wang J, Wang S. Protein phosphatase 2A activation mechanism contributes to JS-K induced caspase-dependent apoptosis in human hepatocellular carcinoma cells. J Exp Clin Cancer Res. 2018;37(1):142. Shami PJ, Saavedra JE, Wang LY, Bonifant CL, Diwan BA, Singh SV, Gu Y, Fox SD, Buzard GS, Citro ML. JS-K, a glutathione/glutathione S-transferase-activated nitric oxide donor of the diazeniumdiolate class with potent antineoplastic activity. Mol Cancer Ther. 2003;2(4):409–17. Ren Z, Kar S, Wang Z, Wang M, Saavedra JE, Carr BI. JS-K, a novel non-ionic diazeniumdiolate derivative, inhibits Hep 3B hepatoma cell growth and induces c-Jun phosphorylation via multiple MAP kinase pathways. J Cell Physiol. 2003;197(3):426. Ha KS, Kim KM, Kwon YG, Bai SK, Nam WD, Yoo YM, Kim PK, Chung HT, Billiar TR, Kim YM. Nitric oxide prevents 6-hydroxydopamine-induced apoptosis in PC12 cells through cGMP-dependent PI3 kinase/Akt activation. FASEB J. 2003;17(9):1036–47. Maciag AE, Chakrapani H, Saavedra JE, Morris NL, Holland RJ, Kosak KM, Shami PJ, Anderson LM, Keefer LK. The nitric oxide prodrug JS-K is effective against non-small-cell lung cancer cells in vitro and in vivo: involvement of reactive oxygen species. J Pharmacol Exp Ther. 2011;336(2):313–20. Weyerbrock A, Osterberg N, Psarras N, Baumer B, Kogias E, Werres A, Bette S, Saavedra JE, Keefer LK, Papazoglou A. JS-K, a glutathione S-transferase-activated nitric oxide donor with antineoplastic activity in malignant gliomas. Neurosurgery. 2012;70(2):497. Kaminskyy VO, Zhivotovsky B. Free radicals in cross talk between autophagy and apoptosis. Antioxid Redox Signal. 2014;21(1):86–102. Young MM, Kester M, Wang HG. Sphingolipids: regulators of crosstalk between apoptosis and autophagy. J Lipid Res. 2013;54(1):5–19. Levy JMM, Thorburn A. Targeting autophagy during cancer therapy to improve clinical outcomes. Pharmacol Ther. 2011;131(1):130–41. Yoshii SR, Mizushima N. Monitoring and measuring autophagy. Int J Mol Sci. 2017;18(9):1865. Pankiv S, Clausen TH, Lamark T, Brech A, Bruun JA, Outzen H, Øvervatn A, Bjørkøy G, Johansen T. p62/SQSTM1 binds directly to Atg8/LC3 to facilitate degradation of ubiquitinated protein aggregates by autophagy. J Biol Chem. 2007;282(33):24131–45. Rogov V, Dötsch V, Johansen T, Kirkin V. Interactions between autophagy receptors and ubiquitin-like proteins form the molecular basis for selective autophagy. Mol Cell. 2014;53(2):167–78. McMurtry V, Saavedra JE, Nieves-Alicea R, Simeone AM, Keefer LK, Tari AM. JS-K, a nitric oxide-releasing prodrug, induces breast cancer cell death while sparing normal mammary epithelial cells. Int J Oncol. 2011;38(4):963. Mocellin S, Bronte V, Nitti D. Nitric oxide, a double edged sword in cancer biology: searching for therapeutic opportunities. Med Res Rev. 2007;27(3):317–52. Tan G, Qiu M, Chen L, Zhang S, Ke L, Liu J. JS-K, a nitric oxide pro-drug, regulates growth and apoptosis through the ubiquitin-proteasome pathway in prostate cancer cells. BMC Cancer. 2017;17(1):376. Qiu M, Chen L, Tan G, Ke L, Zhang S. JS-K promotes apoptosis by inducing ROS production in human prostate cancer cells. Oncol Lett. 2017;13(3):1137–42.
Kitagaki J, Yang Y, Saavedra JE, Colburn NH, Keefer LK, Perantoni AO. Nitric oxide prodrug JS-K inhibits ubiquitin E1 and kills tumor cells retaining wild-type p53. Oncogene. 2009;28(4):619. Kaczmarek MZ, Holland RJ, Lavanier SA, Troxler JA, Fesenkova VI, Hanson CA, Cmarik JL, Saavedra JE, Keefer LK, Ruscetti SK. Mechanism of action for the cytotoxic effects of the nitric oxide prodrug JS-K in murine erythroleukemia cells. Leuk Res. 2014;38(3):377–82. Sui X, Kong N, Ye L, Han W, Zhou J, Zhang Q, He C, Pan H. p38 and JNK MAPK pathways control the balance of apoptosis and autophagy in response to chemotherapeutic agents. Cancer Lett. 2014;344(2):174–9. Mao XH, Tao SU, Hui ZX, Fang LI, Ping QX, Fang LD, Fang LL, Xi CL. Apelin-13 promotes monocyte adhesion to human umbilical vein endothelial cells mediated by the phosphatidylinositol 3-kinase signaling pathway. Prog Biochem Biophys. 2011;38(12):1162–70. Kaur G, Kiziltepe T, Anderson KC, Kutok JL, Jia L, Boucher KM, Saavedra JE, Keefer LK, Shami PJ. JS-K has potent anti-angiogenic activity in vitro and inhibits tumor angiogenesis in a multiple myeloma model in vivo. J Pharm Pharmacol. 2010;62(1):145–51. Santabarbara G, Maione P, Rossi A, Gridelli C. Pharmacotherapeutic options for treating adverse effects of cisplatin chemotherapy. Expert Opin Pharmacother. 2015;1. This work was supported by the following grants: The Yangfan Plan of Talents Recruitment Grant, Guangdong, China (Grant No. YueRenCaiBan [2016] 6) to Dr. Zhu; NSFC (81622050 & 81673709) to Dr. Li; NSFC (No. 317781531, No. 91754115), the Science and Technology Planning Project, Guangdong, China (No. 2017B090901051, No. 2016A020215152) to Dr. Feng; Incubation Projects, Scientific Research of Guangdong Medical University to Xiaojie Huang (M2016032) and Weiguo Liao (M2016031). Any material described in this publication can be requested directly from the corresponding author, Runzhi Zhu. Authors' contributions: DF wrote the article. BL, XJH and YFL performed the experiments. WGL and MYL prepared the figures. YL and RRH participated in data and statistical analyses. RZZ and HK designed the experiment. All authors reviewed the final version of the manuscript. All authors read and approved the final manuscript. Bin Liu, Xiaojie Huang and Yifang Li contributed equally to this work.
College of Pharmacy, Jinan University, Guangzhou, 510632, Guangdong, China Bin Liu, Yifang Li, Rongrong He & Hiroshi Kurihara Laboratory of Hepatobiliary Surgery, The Affiliated Hospital of Guangdong Medical University, Zhanjiang, 524001, Guangdong, China Bin Liu, Xiaojie Huang, Weiguo Liao, Mingyi Li & Runzhi Zhu School of Pharmacy, Shenyang Pharmaceutical University, Shenyang, 110016, Liaoning, China Yi Liu Key Laboratory of Protein Modification and Degradation, School of Basic Medical Sciences, Affiliated Cancer Hospital & Institute, Guangzhou Medical University, Guangzhou, 511436, Guangdong, China Du Feng Center for Cell Therapy, The Affiliated Hospital of Jiangsu University, Zhenjiang, 212001, Jiangsu, China Runzhi Zhu Xiaojie Huang Yifang Li Weiguo Liao Mingyi Li Rongrong He Hiroshi Kurihara Correspondence to Runzhi Zhu or Hiroshi Kurihara. This study was approved by the Institutional Animal Care and Use Committee (IACUC) of GenePharma in Suzhou (No. 2017134). Liu, B., Huang, X., Li, Y. et al. JS-K, a nitric oxide donor, induces autophagy as a complementary mechanism inhibiting ovarian cancer. BMC Cancer 19, 645 (2019). https://doi.org/10.1186/s12885-019-5619-z JS-K Reactive oxygen species (ROS)
why is there no horizontal line test for functions

The short answer: testing whether a graph represents a function at all is the job of the vertical line test; the horizontal line test answers a different question, namely whether a function is one-to-one (injective) and therefore invertible. The vertical line test states that if a vertical line intersects the graph of a relation more than once, then the relation is not a function. A function can only have one output, y, for each unique input, x; if a vertical line crosses the curve at two points, then one value of x is being assigned two different values of y. Graphs that pass the vertical line test are graphs of functions.

The horizontal line test is applied to something that is already known to be a function. Draw horizontal lines through the graph: if no horizontal line intersects the graph more than once, the function is one-to-one; if some horizontal line intersects the graph in two or more places, the function fails the test and is not injective, because two different x-values are assigned the same y-value. Formally, for a function f : X → Y, the horizontal lines are the subsets {(x, y₀) ∈ X × Y : y₀ constant} = X × {y₀} of the Cartesian product, and f is injective exactly when each such line meets the graph of f at most once, that is, whenever f(a) = f(b) it must be the case that a = b. Note that horizontal lines themselves are perfectly good functions (the constant functions y = c), so the question in the title is not about horizontal lines failing to be functions; the horizontal-line criterion simply tests injectivity rather than "function-ness". A vertical line x = c, by contrast, is not the graph of a function, since a single input would be paired with every possible output.

Student: Are there any of these tests that can be done by just looking at the graph?
Mentor: As a matter of fact, both can. The vertical line test tells you if you have a function; the horizontal line test tells you if that function is one-to-one, and hence whether it has an inverse function. It is possible for something to be a perfectly good function and yet not have an inverse function: the inverse relation always exists, but it is a function only if the original passes the horizontal line test.

A simple non-graphical illustration is a restaurant's list of daily specials: Sunday - Meat Loaf, Monday - Turkey, Wednesday - Steak, Friday - Salmon, Saturday - Pot Roast. Each day is paired with exactly one special, so the list defines a function; and because no special is repeated, no two inputs share an output, so the function is also one-to-one. For a graphical example, take y = x², which passes the vertical line test but fails the horizontal one, as worked out below.
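To make this concrete, here is a standard worked pair of examples (added for illustration; the specific functions are not taken from the page above):

\[
f(x) = x^2:\qquad f(2) = f(-2) = 4,
\]

so the horizontal line y = 4 meets the graph of f twice; f fails the horizontal line test, is not one-to-one, and has no inverse function on all of the real numbers. In contrast,

\[
g(x) = x^3:\qquad g(a) = g(b) \;\Rightarrow\; a^3 = b^3 \;\Rightarrow\; a = b,
\]

so every horizontal line y = c meets the graph of g exactly once, at x = ∛c; g passes the test and has the inverse function g⁻¹(y) = ∛y. Both graphs, of course, pass the vertical line test, since each input produces a single output.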
The connection with inverse functions is the main reason the horizontal line test matters. A function f has an inverse function, f⁻¹, if and only if f is one-to-one. All functions pass the vertical line test, but only one-to-one functions pass the horizontal line test, so the horizontal line test is a convenient way to check not just whether an inverse relation exists but whether that inverse is itself a function: if some horizontal line cuts the curve more than once, reflecting the graph across the line y = x produces a curve that fails the vertical line test. To find an inverse algebraically, swap x and y in the equation and solve for y; the result defines a function precisely when the original was one-to-one. Periodic functions such as sine and cosine are classic examples of functions that fail the horizontal line test on their full domains, which is why their inverses are only defined after restricting the domain.

Asymptotes describe how a graph behaves far from the origin, and they are read off the graph in a similarly visual way. A horizontal asymptote is a straight, flat line that tells you how the function behaves at the very edges of the graph; it is not sacred ground, however, and the function can touch and even cross it. A vertical asymptote is a vertical line x = a, where a is some constant: as x approaches this value, the function goes to infinity. An oblique or slant asymptote is, as its name suggests, a slanted line approached by the graph; more technically, it is an asymptote that is not parallel to either axis. For rational functions, where both the numerator and the denominator are polynomials, the horizontal asymptote is found by highest-order-term analysis: compare the largest power of x on top with the largest power underneath. If the larger power is in the denominator, the horizontal asymptote is the horizontal axis, y = 0; if the two powers are equal, the asymptote is given by the ratio of the leading coefficients; and if the larger power is in the numerator, there is no horizontal asymptote (there may be an oblique one instead). Only the highest powers matter: an x² term in the denominator is irrelevant if the highest power there is x⁵.
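A short worked example of the highest-order-term rule (added for illustration; these particular rational functions are not from the page above):

\[
\frac{3x^2 + 1}{5x^2 - x} \;\longrightarrow\; \frac{3x^2}{5x^2} = \frac{3}{5}
\quad (x \to \pm\infty),
\]

so the horizontal asymptote is y = 3/5 (equal degrees, ratio of the leading coefficients);

\[
\frac{x^2 + 7}{x^5 - 2x^2} \;\longrightarrow\; \frac{x^2}{x^5} = \frac{1}{x^3} \to 0
\quad (x \to \pm\infty),
\]

so the horizontal asymptote is y = 0, and the extra x² term in the denominator plays no role because the highest power there is x⁵; and

\[
\frac{x^3}{x^2 + 1} = x - \frac{x}{x^2 + 1} \;\longrightarrow\; x
\quad (x \to \pm\infty),
\]

so there is no horizontal asymptote, only the oblique (slant) asymptote y = x.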
Two cautions close the discussion. First, a horizontal line y = c is the graph of a constant function and has slope zero, whereas a vertical line has undefined slope and, as noted above, is not the graph of a function at all; "undefined slope" means there is no horizontal change, not no vertical change. Second, the horizontal line test is a nice heuristic argument, but it is not in itself a proof: to establish rigorously that a function is injective, one shows algebraically that f(a) = f(b) forces a = b, as in the short example below. With that in mind, the division of labour is simple: the vertical line test distinguishes functions from mere relations, and the horizontal line test then distinguishes, among functions, the one-to-one ones, which are exactly those whose inverses are again functions.
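For completeness, here is the algebraic template that the graphical test only suggests (an added illustration, not taken from the page above):

\[
f(x) = 2x + 3:\qquad f(a) = f(b) \;\Rightarrow\; 2a + 3 = 2b + 3 \;\Rightarrow\; 2a = 2b \;\Rightarrow\; a = b,
\]

so f is one-to-one, in agreement with the graphical observation that no horizontal line can cross a non-horizontal straight line more than once, and its inverse function is f⁻¹(y) = (y - 3)/2.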
K2 Observations of SN 2018oh Reveal a Two-Component Rising Light Curve for a Type Ia Supernova Dimitriadis, G. and Foley, R. J. and Rest, A. and Kasen, D. and Piro, A. L. and Polin, A. and Jones, D. O. and Villar, A. and Narayan, G. and Coulter, D. A. and Kilpatrick, C. D. and Pan, Y.-C. and Rojas-Bravo, C. and Fox, O. D. and Jha, S. W. and Nugent, P. E. and Riess, A. G. and Scolnic, D. and Drout, M. R. and Barentsen, G. and Dotson, J. and Gully-Santiago, M. and Hedges, C. and Cody, A. M. and Barclay, T. and Howell, S. and Garnavich, P. and Tucker, B. E. and Shaya, E. and Mushotzky, R. and Olling, R. P. and Margheim, S. and Zenteno, A. and Coughlin, J. and Van Cleve, J. E. and Cardoso, J. Vinicius de Miranda and Larson, K. A. and McCalmont-Everton, K. M. and Peterson, C. A. and Ross, S. E. and Reedy, L. H. and Osborne, D. and McGinn, C. and Kohnert, L. and Migliorini, L. and Wheaton, A. and Spencer, B. and Labonde, C. and Castillo, G. and Beerman, G. and Steward, K. and Hanley, M. and Larsen, R. and Gangopadhyay, R. and Kloetzel, R. and Weschler, T. and Nystrom, V. and Moffatt, J. and Redick, M. and Griest, K. and Packard, M. and Muszynski, M. and Kampmeier, J. and Bjella, R. and Flynn, S. and Elsaesser, B. and Chambers, K. C. and Flewelling, H. A. and Huber, M. E. and Magnier, E. A. and Waters, C. Z. and Schultz, A. S. B. and Bulger, J. and Lowe, T. B. and Willman, M. and Smartt, S. J. and Smith, K. W. and Points, S. and Strampelli, G. M. and Brimacombe, J. and Chen, P. and Munoz, J. A. and Mutel, R. L. and Shields, J. and Vallely, P. J. and Villanueva, S., Jr and Li, W. and Wang, X. and Zhang, J. and Lin, H. and Mo, J. and Zhao, X. and Sai, H. and Zhang, X. and Zhang, K. and Zhang, T. and Wang, L. and Zhang, J. and Baron, E. and DerKacy, J. M. and Li, L. and Chen, Z. and Xiang, D. and Rui, L. and Wang, L. and Huang, F. and Li, X. and Hosseinzadeh, G. and Howell, D. A. and Arcavi, I. and Hiramatsu, D. and Burke, J. and Valenti, S. and Tonry, J. L. and Denneau, L. and Heinze, A. N. and Weiland, H. and Stalder, B. and Vinko, J. and Sarneczky, K. and Pa, A. and Bodi, A. and Bognar, Zs. and Csak, B. and Cseh, B. and Csornyei, G. and Hanyecz, O. and Ignacz, B. and Kalup, Cs. and Konyves-Toth, R. and Kriskovics, L. and Ordasi, A. and Rajmon, I. and Sodor, A. and Szabo, R. and Szakats, R. and Zsidi, G. and Williams, S. C. and Nordin, J. and Cartier, R. and Frohmaier, C. and Galbany, L. and Gutierrez, C. P. and Hook, I. and Inserra, C. and Smith, M. and Sand, D. J. and Andrews, J. E. and Smith, N. and Bilinski, C. (2019) K2 Observations of SN 2018oh Reveal a Two-Component Rising Light Curve for a Type Ia Supernova. Astrophysical Journal Letters, 870. ISSN 2041-8205 PDF (Dimitriadis_1811.10061) Dimitriadis_1811.10061.pdf - Accepted Version Available under License Creative Commons Attribution-NonCommercial. Official URL: https://doi.org/10.3847/2041-8213/aaedb0 We present an exquisite, 30-min cadence Kepler (K2) light curve of the Type Ia supernova (SN Ia) 2018oh (ASASSN-18bt), starting weeks before explosion, covering the moment of explosion and the subsequent rise, and continuing past peak brightness. These data are supplemented by multi-color Pan-STARRS1 and CTIO 4-m DECam observations obtained within hours of explosion. The K2 light curve has an unusual two-component shape, where the flux rises with a steep linear gradient for the first few days, followed by a quadratic rise as seen for typical SNe Ia. 
This "flux excess" relative to canonical SN Ia behavior is confirmed in our $i$-band light curve, and furthermore, SN 2018oh is especially blue during the early epochs. The flux excess peaks 2.14$\pm0.04$ days after explosion, has a FWHM of 3.12$\pm0.04$ days, a blackbody temperature of $T=17,500^{+11,500}_{-9,000}$ K, a peak luminosity of $4.3\pm0.2\times10^{37}\,{\rm erg\,s^{-1}}$, and a total integrated energy of $1.27\pm0.01\times10^{43}\,{\rm erg}$. We compare SN 2018oh to several models that may provide additional heating at early times, including collision with a companion and a shallow concentration of radioactive nickel. While all of these models generally reproduce the early K2 light curve shape, we slightly favor a companion interaction, at a distance of $\sim$$2\times10^{12}\,{\rm cm}$ based on our early color measurements, although the exact distance depends on the uncertain viewing angle. Additional confirmation of a companion interaction in future modeling and observations of SN 2018oh would provide strong support for a single-degenerate progenitor system. Astrophysical Journal Letters © Copyright 2019 IOP Publishing /dk/atira/pure/subjectarea/asjc/3100/3103 Faculty of Science and Technology > Physics
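The two-component shape described in the abstract lends itself to a simple illustrative fit. The sketch below is not the analysis performed in the paper; it is a minimal toy model, with an additive triangular early "flux excess" on top of the canonical quadratic post-explosion rise, fitted to synthetic data. The function and parameter names (two_component_rise, t_exp, a_lin, t_peak, a_quad) are invented for this illustration.

```python
# Minimal sketch (not the paper's pipeline): fit a toy two-component rising
# light curve, i.e. an early linear "flux excess" plus the canonical
# quadratic ("fireball") rise, to a synthetic K2-like flux series.
import numpy as np
from scipy.optimize import curve_fit

def two_component_rise(t, t_exp, a_lin, t_peak, a_quad):
    """Additive toy model: a triangular early excess that rises linearly,
    peaking t_peak days after the explosion time t_exp, on top of a
    quadratic rise; the flux is zero before t_exp."""
    dt = t - t_exp
    rising = dt > 0
    bump = a_lin * np.clip(t_peak - np.abs(dt - t_peak), 0.0, None)
    excess = np.where(rising, bump, 0.0)
    quad = np.where(rising, a_quad * np.maximum(dt, 0.0) ** 2, 0.0)
    return excess + quad

# Synthetic data purely for demonstration (days, arbitrary flux units).
rng = np.random.default_rng(0)
t = np.linspace(-5.0, 15.0, 400)
true_params = (0.0, 0.8, 3.0, 0.25)   # t_exp, a_lin, t_peak, a_quad
flux = two_component_rise(t, *true_params) + rng.normal(0.0, 0.1, t.size)

# Fit and report; a real analysis would use the measured fluxes and
# propagate the photometric uncertainties.
popt, pcov = curve_fit(two_component_rise, t, flux, p0=(0.5, 0.5, 2.0, 0.2))
print("fitted t_exp, a_lin, t_peak, a_quad:", np.round(popt, 3))
```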
CommonCrawl
Early Career Conference in Trapped Ions (ECCTI) 2022 June 26, 2022 to July 1, 2022 1. Welcome to ECCTI 2022 April Louise Cridland (Swansea University), Muhammed Sameed (University of Manchester (GB)) Conference Practicalities 35. The PUMA Experiment: Investigating Short-lived Nuclei with Antiprotons The antiProton Unstable Matter Annihilation (PUMA) experiment is a nuclear physics experiment at CERN which will provide the ratio of protons to neutrons in the tail of the nucleon density distributions to constrain nuclear structure theories [1]. To determine this ratio, the interaction of antiprotons and nuclei at low relative energies is used [2]. Following the captures of the antiproton by... 50. Ultra-high precision laser spectroscopy of anti-hydrogen Mr Janko Nauta (Swansea University (GB)) Antihydrogen is an exciting system to perform tests of fundamental physics by comparing it with its matter counterpart: hydrogen. One of the most interesting transitions for such comparisons is the 1S-2S, since it has been measured with an extraordinary precision in hydrogen [1]. Over the last decades, the development of production and trapping techniques for antihydrogen [2] has enabled... 13. Sympathetic cooling of a single proton in a Penning trap by laser-cooled beryllium ions Christian Will (Max-Planck-Institute for Nuclear Physics (DE)) The BASE collaboration performs high-precision Penning trap measurements of the g-factors and charge-to-mass ratios of the proton and antiproton to test CPT in the baryonic sector [1]. Currently, the g-factor measurement of the proton is limited by the statistical uncertainty. This uncertainty stems from finite particle temperatures, which were so far restricted to about 1K by the technique of... 33. Qubit addressing in a standing wave light field from integrated photonics Carmelo Mordini (ETH Zurich) Quantum Information & Computing Pairing integrated photonics with surface-electrode ion traps is an emerging technology, potentially opening the way to build novel architectures for quantum information processing [1, 2, 3]. Other than solving the scalability issues presented by individual addressing of multiple ions with free-space laser setups, it allows engineering the optical fields coupled to the ions and hence... 68. Towards standing-wave quadrupole Mølmer-Sørensen gates Oana Bazavan (University of Oxford) Free-space optical lattices are ubiquitous in atomic physics and are often employed for creating spin-dependent forces used for entangling gate operations and neutral atom trapping. Recently, there has been an interest in gaining control over the absolute phase of the optical lattice [1] and harnessing it for applications in quantum metrology, quantum information processing with continuous... 49. Standing-Wave Mølmer–Sørensen Gate in the Adiabatic and Non-Adiabatic Regime Sebastian Saner (University of Oxford) In trapped-ion systems, the majority of entangling operations are implemented in the adiabatic regime [1,2]. Adiabatic in this context means that we can selectively excite a single set of terms in the Lamb-Dicke expansion and are able to neglect the remaining off-resonant terms. Then only a single motional mode (with secular frequency $\nu$) participates in the interaction. This is possible if...
High-Resolution Mass Measurements at the FRS Ion Catcher in the vicinity of 100Sn Ali Mollaebrahimi (University of Giessen and TRIUMF) The study of exotic nuclei far from the valley of stability provides basic information for a better understanding of nuclear structure and the synthesis of the elements in the universe. It is of special interest to probe the edges of stability with their unexpected and novel properties. The nucleus $^{100}$Sn is the heaviest self-conjugate doubly-magic nucleus in the chart of nuclides, and... 40. High precision mass measurement of 24Si and a final determination of the rp-process at the A=22 waiting point Daniel Puentes (Michigan State University) Type I X-ray bursts occur at astrophysical sites where a neutron star accretes H/He-rich matter from a companion star, leading to nuclear burning on the neutron star surface. The only observable is the X-ray burst light curve, which is used as a unique diagnostic of the outer layers of accreting neutron stars such the accretion rate and fuel composition. In addition to the astrophysical... 70. News From the ISOLTRAP Mass Spectrometer Lukas Nies (CERN / University of Greifswald (DE)) Recent technical developments and experimental results from the ISOLTRAP mass spectrometer at ISOLDE/CERN will be presented in this contribution. During CERN's Long Shutdown 2 (LS2), a large variety of technical upgrades and maintenance work have been performed. Most significantly, a new offline reference ion source has been built and commissioned, combining a surface ion source and a laser... 59. Ion Trapping Developments at Edinburgh University, Towards Precise Mass Measurements of Light Exotic Nuclei at TITAN Callum Brown (University of Edinburgh) High-precision mass measurements are essential for understanding the structure of exotic nuclei. These measurements serve as excellent tests of the latest nuclear models and provide key inputs for calculations in nuclear astrophysics. Light nuclei at the limits of nuclear binding are particularly important, as they are accessible with various ab-initio models, and so provide good tests for how... 19. Implementation of the double Penning trap mass spectrometer MLLTRAP at ALTO Elodie MORIN (CNRS (FR)) Mass measurements of exotic nuclei with high precision are of big interest for nuclear physics and nuclear astrophysics. They give access to nuclear binding energies permitting to explore nuclear shell structure. They are also entries for nucleosynthesis models and allow discriminating between different models. The principal spectrometers to perform high precision mass measurements are those... 119. Lecture: Making the most of your presentation Jean-luc Doumont Skills session Strong presentation skills are a key to success for researchers and other professionals alike, yet many speakers are at a loss to tackle the task. Systematic as they usually are in their work, they go at it intuitively or haphazardly, with much good will but seldom with an effective outcome. This lecture proposes a systematic way to prepare and deliver an oral presentation: it covers... 74. Multi-tone RF generation for intermediate-scale trapped-ion control * Martin Stadler (ETH Zurich) The increasing complexity of trapped-ion experiments requires more powerful classical control systems, in particular to work with many channels in parallel. I will present work on extending our in-house developed control system which is used on multiple setups across several research groups [1-3]. 
The latest development cycle is focused on the increased requirements of multi-channel,... 6. A laser-cooled 40Ca+ ion and a 40Ca+ - 40Ca+ ion crystal for systematic investigations of motional quantum metrology Mr Francisco Domínguez (Universidad de Granada (ES)) At the Ion Traps and Lasers Laboratory of the University of Granada we have built a linear Paul trap apparatus as an assisted ion-trap system for a high-magnetic-field Penning trap experiment. The goal of this experiment is to generate and manipulate a qubit via the "clock" S1/2 -> D5/2 transition of 40Ca+ to read out motional frequencies of a 40Ca+ - 40Ca+ crystal in the ground state of... 46. Trapped ions in optical tweezers Mr Matteo Mazzanti (University of Amsterdam) We present progress on our experimental setup where we will use novel optical tweezers – derived from spatial light modulators – to manipulate the phonon spectrum of a two-dimensional ion crystal in a Paul trap [1]. This allows us to control the effective spin-spin interactions between the ions in order to realize and study various Hamiltonians of interest [2]. In particular, the pinning of a... 4. Selective properties of a Paul trap with the asymmetrical power supply Ms Olga Kokorina (ITMO University (RU)) It is widely known that in a classical quadrupole Paul trap with endcap electrodes the localization of the particles with the narrow-defined charge to mass ratio realizes at fixed power supply parameters. If we consider a Coulomb crystal in a classical Paul trap, one can see the radial splitting of the crystal associated with the effective potential form at different voltages on the rod and... 65. Trapping and ground-state cooling of planar ion crystals in a novel linear Paul trap Dominik Kiesenhofer (Institute for Quantum Optics and Quantum Information Innsbruck) Trapped ions in RF traps are a well-established platform for analog and variational quantum simulation of quantum many-body systems. Up to now, ions in linear Paul traps allow for simulations of the 1D Ising model with up to 50 spins. In our project, we aim for extending this approach to the second dimension which will enable studies of 2D spin models with a larger particle number. Our new ion... 45. Digital quantum simulation of a topological spin chain Dr Claire Edmunds (Universität Innsbruck) The quantum properties underlying a wide range of natural materials, such as topological matter or interesting molecules, have proven too complex to understand using classical physics and standard computation. The field of digital quantum simulation has been developed in order to study the behaviour of quantum systems by replicating the energy dynamics in a controlled, gate-based manner. The... 56. Fock state detection and simulation of sub- and superradiant emission with a single trapped ion Harry Parke (Stockholm University) Quantum technologies employing trapped ion qubits are currently some of the most advanced systems with regards to experimental methods in quantum computation, simulation and metrology. This is primarily due to the excellent control available over the ion's motional and electronic states. By treating the ions as composite quantum systems, with qubit states that can be addressed by... 43. muCool: A novel low-energy muon beam for precision experiments Giuseppe Lospalluto (ETH Zurich) High precision experiments using muons (μ+) and muonium atoms (μ+e−) offer promising opportunities to test theoretical predictions of the Standard Model in a second-generation, fully-leptonic environment. 
Such experiments including the measurement of the muon g-2, muonium spectroscopy and muonium gravity would benefit from intense high-quality and low-energy muon beams. At the Paul... 54. BASE: Towards a 10-fold improved measurement of the Antiproton Magnetic Moment Mr Markus Fleck (RIKEN, University of Tokyo) The BASE collaboration at the antiproton decelerator facility of CERN conducts antiproton g-factor and charge-to-mass ratio measurements with precisions on the parts per billion to parts per trillion level respectively. So far, we have measured the antiproton g-factor to 1.5 ppb and the antiproton's charge-to-mass ratio to 69 ppt respectively [Smorra et al. 2017, Ulmer et.al. 2015].... 15. Transportable Cryostat and Permanent Magnet Trap for Transporting Antiprotons Daniel Popper (Johannes Gutenberg-Universität Mainz (DE)) The ERC Project STE$\bar{P}$, "Symmetry Tests in Experiments with Portable Antiprotons", targets the development of transportable antiproton traps to enhance the sensitivity of CPT invariance tests with antiprotons that are conducted in the BASE collaboration. To enable antiproton measurements with improved precision, we are commissioning the transportable trap system BASE-STE$\bar{P}$ in the... 14. Construction and tests of image-current detection systems for the transportable antiproton trap BASE-STEP Fatma abbass (Johannes Gutenberg-Universität Mainz (DE)) Hot Topic Talk The ERC project STEP "Symmetry Tests in Experiments with Portable Antiprotons" is building a transportable antiproton trap BASE-STEP to relocate antiproton precision measurements and ultimately improve the limits of the measurement precision of CPT invariance tests comparing the fundamental properties of protons and antiprotons. Recently, the BASE collaboration "Baryon Anti-baryon Symmetry... 29. Improved precision on the measurements of low energy antimatter in the ALPHA experiment Edward Thorpe-Woods (Swansea University (GB)) Antihydrogen is one of the most simple pure antimatter bound states, which can be synthesised and trapped for extended periods of time by the ALPHA collaboration since 2010 [1]. A consequence of CPT symmetry is that antimatter bound states will present the same energy spectrum as their matter equivalents, and over the last five years ALPHA have measured antihydrogen transitions as a direct... 39. Sympathetic cooling of 9Be+ by laser-cooled 88Sr+ in an ion trap: an experimental simulation of the trapping and cooling of antimatter ions (GBAR experiment). Derwell Drapier (Sorbonne Université) We develop an experiment to study the energy exchange during the sympathetic cooling of a light ion, 9Be+, by a set of laser-cooled heavy ions, 88Sr+. The objective is to simulate an important step of the GBAR (Gravitational Behavior of Antihydrogen at Rest) experiment installed at CERN which aims at studying the effect of the Earth's gravity on anti-matter by analyzing the free fall of... 73. Towards observing anti-hydrogen fluorescence: Investigation of SiPMs in cryogenic environments Joos Danjeel Schoonwater (Eindhoven Technical University (NL)) The 1S-2P transition has been measured to a precision of $5\times 10^{-8}$ in 2018 by the ALPHA collaboration[1]. This milestone was achieved by allowing a trappable 2P state to decay to a non-trappable 1S state causing it to annihilate with the inner wall of the trapping apparatus. The annihilation events were destructively measured using a silicon vertex detector. The next generation ALPHA-3... 102. 
The Effects of Patch Potentials in Penning-Malmberg Traps Andrew Jordan Christensen (University of California Berkeley (US)) Antiprotons created by laser ionization of antihydrogen are observed to quickly escape the ALPHA trap. Further, positron plasmas heat more quickly after the trap is illuminated by laser light for several hours. These unexpected phenomena are caused by patch potentials - variations in the electrical potential along metal surfaces. A simple model for the effects of patch potentials explains the... 24. Sympathetic cooling of positrons with laser-cooled beryllium ions Joanna Peszka (Swansea University (GB)) Precision measurements on antihydrogen allow for testing CPT symmetry. The ALPHA Collaboration at CERN performs laser spectroscopy of antihydrogen in a magnetic minimum trap in order to compare its energy level structure to that of hydrogen [1, 2, 3]. Antihydrogen atoms are produced by three-body recombination of an antiproton and two positrons [4]. Antiprotons are provided in the form of a... 34. Non-Destructive Diagnostics for the PUMA Antiproton Trap Jonas Fischer (TU Darmstadt) The antiProton Unstable Matter Annihilation experiment (PUMA) is aimed at investigating nuclear haloes and neutron skins, that short-lived nuclei can exhibit [1]. Antiprotons are especially suited for this investigation as they probe the outermost tail of the nuclear density distribution [2]. When antiprotons and nuclei are brought together with low relative kinetic energies, an antiproton can... 78. Improving frequency resolution in BASE Julia Ines Jäger (RIKEN, CERN, Max Planck Institute for Nuclear Physics) The BASE collaboration at the antiproton decelerator facility of CERN is testing the Standard Model by comparing the fundamental properties of protons and antiprotons at lowest energies and with highest precision. Several world-record measurements have been performed in BASE such as the comparison of the antiproton-to-proton charge-to-mass ratio with a fractional precision of 69 parts per... 125. Application of electron cyclotron resonance (ECR) magnetometry for experiments with antihydrogen Adam Powell (University of Calgary Dep. of Phys. and Astronomy (CA)) The Antihydrogen Laser Physics Apparatus (ALPHA) is based at the European Centre for Nuclear Research (CERN) antiproton decelerator facility. Using low energy antiprotons we produce, trap, and study the bound state of an antiproton and positron, antihydrogen [1]. Given the long history of atomic physics experiments with hydrogen, spectroscopy experiments with antihydrogen offer some of... 77. Development of a novel ion trap for laser spectroscopy Jaspal Singh (University of Manchester (GB)) , Giuseppe Lospalluto (ETH Zurich) A novel radio-frequency (RF) ion trap based on planar printed-circuit board (PCB) electrodes was designed and simulated. This device would serve as a commercial ion cooler and buncher that delivers a low emittance ion bunch to laser spectroscopy experiments, such as the Collinear Resonance Ionisation Spectroscopy (CRIS) experiment in ISOLDE at CERN. The ions inside the trap were cooled by... 107. Bound Electron g Factor Measurements of Highly Charged Tin Jonathan Morgner (Max Planck Institute for Nuclear Physics) Highly charged ions are a great platform to test fundamental physics in strong electric fields. The field-strength experienced by a single electron bound to a high Z nucleus reaches strengths exceeding 1018V/m. 
Perturbed by the strong field, the g factor of a bound electron is a sensitive tool that can be both calculated and measured to high accuracy. In the recent past, g factor... 120. Characterization of a Multi-Reflection Time-of-Flight Mass Separator (MR-ToF MS) for the Offline Ion Source of PUMA Mr Moritz Schlaich (T.U. Darmstadt) The antiProton Unstable Matter Annihilation (PUMA) project aims at investigating the nucleon composition in the matter density tail of short-lived as well as stable isotopes by studying antiproton-nucleon annihilation processes. For this purpose, low-energy antiprotons provided by the Extra Low Energy Antiproton (ELENA) facility at CERN will be trapped together with the ions under... 126. Alkali-earth ions Confined for Optical and Radiofrequency spectroscopy for Nuclear moments (ACORN) Anais Dorne (KU Leuven (BE)) Nuclear moments have proved to be excellent probes for nuclear configurations and thus act as excellent benchmarks for nuclear theory. The magnetic octupole moment, which has for now only been measured for 19 stable isotopes, is very promising for the study of magnetization currents and the distribution of nucleons. We present the construction of the ACORN (Alkali-earth ions Confined for... 2. A compact penning trapped ion system for precision measurement Yao Chen (Northwestern University (US)) Here we described a compact penning trapped ion system. The traditional superconducting magnet is changed into permanent magnet. We did a simulation about the magnet system and the magnetic field uniformity is simulated. Experiment are under developing to measure the magnetic field uniformity. The penning trap geometry is also designed to compatible with the magnet. Laser cooling technic... 25. Towards High Resolution Spectroscopy of Nitrogen Ions Amber Shepherd (University of Sussex) High resolution spectroscopy of molecules is a prime candidate to measure potential temporal changes in the proton-to-electron mass ratio, μ [1]. These potential changes can be detected by comparing vibrational or rotational transitions in molecules to optical atomic transitions. In our experiment, a vibrational Raman transition in a nitrogen ion will be compared to a quadrupole transition... 37. Towards the Threshold Photodetachment Spectroscopic studies of C2- and C2H- Sruthi Purushu Melath (University of Innsbruck, Austria) Different neutral and charged interstellar molecules constitute the building blocks for a rich reaction network in the interstellar medium (ISM). Many complex molecules have been detected but many observed spectra still have unidentified features. The abundance of negative ions in the ISM and their role in the chemistry of these environments has been subject to long-standing discussions in... 52. Gas-phase spectroscopic studies of [dAMP-H]^- in cryogenic 16-pole wire trap Salvi Mohandas (IISER TIRUPATI, University of Innsbruck) Recent studies suggest that the pharmacological activity of biomolecular drugs associates with their gas-phase geometries but not with the aqueous-phase structures [1]. In this scenario, the gas-phase study of biomolecules becomes more relevant with emerging RNA and DNA-based drugs by contributing knowledge to their biologically active geometry. 2'-deoxyadenosine-5'- monophosphate(dAMP) is a... 75. 
Correlation spectroscopy with multi-qubit-enhanced phase estimation Helene Hainzer (Austrian Academy of Sciences) Precision spectroscopy on trapped ions subject to correlated dephasing can reveal a multitude of information in the absence of any single-particle coherences. We present measurements of ion-ion distances, transition frequency shifts and single-shot measurements of laser-ion detunings by analyzing multi-particle correlations in linear and planar Coulomb crystals of up to 91 ions. We show that... 108. Precision measurement of electron g-factor in highly charged ions at ARTEMIS Kanika Kanika (Universität Heidelberg and GSI Helmholtzzentrum für Schwerionenforschung GmbH) The ARTEMIS (AsymmetRic Trap for the measurement of Electron Magnetic moment in IonS) [1] experiment at the HITRAP facility in GSI, Darmstadt, aims to measure magnetic moment of the electron bound to highly charged ions using the laser-microwave double-resonance spectroscopy [2] technique. The ARTEMIS Penning trap consists of two parts, the creation part of the trap which allows for in-situ... 8. Feshbach resonances in a hybrid atom-ion system Mr Joachim Welz (University of Freiburg (DE)) We present the first observation of Feshbach resonances between neutral atoms and ions. [1,2] While Feshbach resonances are commonly utilized in neutral atom experiments, however, reaching the ultracold regime in hybrid traps is challenging, as the driven motion of the ion by the rf trap limits the achievable collision energy. [3] We report three-body collisions between neutral 6Li and 138Ba+,... 17. Coherent control of ion motion via Rydberg excitation Ms Marion Mallweger (Stockholm University (SE)) Trapped Rydberg ions are a novel approach to quantum information processing [1, 2]. This idea combines qubit rotations in the ions' ground states with entanglement operations via the Rydberg interaction [3]. Importantly, the combination of quantum operations in ground and Rydberg states requires the Rydberg excitation to be controlled coherently. In the experiments presented ... 21. Photon statistics from a large number of independent single-photon emitters. Artem Kovalenko A. Kovalenko(1), D. Babjak(1), L. Lachman(1), L. Podhora(1), P. Obšil(1), T. Pham(2), A. Lešundák(2), O. Čı́p(2), R. Filip(1) and L. Slodička(1) (1) Department of Optics, Palacký University,17. listopadu 12, 771 46 Olomouc, Czech Republic (2) Institute of Scientific Instruments of the Czech Academy of Sciences, Královopolská 147, 612 64 Brno, Czech Republic The coherence of... 23. Dielectric Properties of Plasma Oxides for Microfabricated Ion Traps Alexander Zesar (Graz University of Technology) The upcoming revolution in computation - quantum computing - will open up new avenues to efficiently solve classically hard problems, like quantum simulation and optimization tasks. A leading implementation of a feasible quantum processor is realized by trapped ions, where electronic states in stored ions represent physical quantum bits (qubits) [1]. The microfabrication of ion traps [2, 3] is... 27. Feasible enhancement of collection efficiency of light from trapped ions Thuy Dung Tran (Department of Optics, Palacký University, 17. listopadu 12, 77146 Olomouc, Czech Republic) We present a theoretical analysis of optimisation of detection efficiency of optical signal scattered from dipole emitters using a far-field interference. 
These calculations are motivated by previous experimental demonstrations of coherent interaction of light with long strings of trapped ions [1,2,3]. For our models, we consider an ion string containing up to 10 ions, stored and laser cooled... 28. A multi-qubit gate zone for use in a large scale ion shuttling architecture Alex Owens (University of Sussex) The field of quantum computing with trapped ions has seen many milestone achievements, the challenge for the future lies in scaling ion processors to qubit numbers capable of tackling interesting problems – without forgoing the high fidelities seen in smaller prototypes. One class of large-scale ion trapping architecture comprises dedicated regions for trapping, measurement, storage and... 80. Simulating Potentials and Shuttling Protocols on an X-Junction Surface Trap Sahra Kulmiya Trapped ion qubits achieve excellent coherence times and gate fidelities, well beyond the threshold for fault tolerant quantum error correction. One route towards scalability is the coherent control and shuttling of ions between different zones on a microfabricated surface trap. A current challenge in shuttling is speed and fidelity. The shuttling operations should be as fast as possible to... 103. Microwave-driven quantum logic in Ca43+ at 288 Gauss Mario Gely Magnetic field gradients, generated by microwave circuitry in the proximity of trapped ions, can couple the ions internal and motional degrees of freedom to implement two-qubit gates [1,2]. This approach presents many advantages with respect to laser-driven gates: the hardware is cheaper and more readily scalable, phase control is facilitated, and photon scattering errors are eliminated.... 105. High-Fidelity Entanglement Gates on Microfabricated Ion-Traps Petros Zantis (Ion Quantum Technology group - University of Sussex) Trapped ions have proved to be a promising way of realising a large-scale quantum computer, due to their long coherence times and reproducibility, while also allowing for modular architectures which is key for a scalable, universal quantum computer. A blueprint for a trapped-ion based quantum computer outlines operating with global microwave fields to dress the ground-state hyperfine manifold... 109. Technical challenges of quantum computing with radioactive $^{133}\text{Ba}^{+}$ ions Jamie Leppard A large scale quantum simulator will provide the necessary tools for unparalleled scientific development. The challenges to build such a device are centered around the realization of a universal set of high fidelity quantum gates, that can be maintained in a system of many qubits. In the case of trapped ion devices of intermediate size, i.e. several tens of ions, the most natural approach to... 112. Towards measurement-based blind quantum computing with trapped ions Peter Drmota (University of Oxford) In the framework of blind quantum computing, quantum computations can be delegated to an untrusted server while ensuring privacy and verifying their correctness [1]. For an experimental demonstration, we consider the practical case of measurement-based blind quantum computation (MBBQC) on a continuously rebuilding cluster state. This protocol involves sequential measurements and remote state... 121. 
Quantum thermodynamics: Heat leaks and fluctuation dissipation Dr Oleksiy Onishchenko (QUANTUM, Institut für Physik, Universität Mainz, Mainz, Germany) Quantum thermodynamics focuses on extending the notions of heat and work to microscopic systems, where the concepts of non-commutativity and measurement back-action play a role [1]. Our experimental system consists of one or multiple qubits implemented in the Zeeman sublevels of the ground electronic state of 40Ca+, and the ion register is held in a microstructured Paul trap [2]. Quantum logic... 122. Multipartite entanglement of trapped ions by graph-based optimized global Raman beams Mr Arjun D. Rao (The University of Sydney) Entangling gates are arguably the main ingredient of quantum information processing (QIP). Trapped ion systems have typically outshone other quantum hardware in preparing Bell states. Two ion entanglement has been extensively covered [1, 2, 3] and sequences of pairwise gates can be used to generate multipartite entanglement. Alternatively, global irradiation is faster, which is important as... 123. Single Ion Addressing for Reliable Isolation of 171Yb+ Hyperfine Qubit States Mr Maverick Millican (The University of Sydney) Single ion addressing provides a critical computational advantage for trapped-ion registers used for large-scale quantum simulation and computation [1][2]. Several schemes are currently used including arrays of mechanically positioned micro-optic fibers [3], holographic diffraction patterns produced by arrays of micromirrors [4], and beam splitting by AODs driven by multi-tone rf frequencies... 124. Photonic integration for trapped-ion quantum information science Mr Felix Knollmann (MIT) A major architecture for large-scale quantum computing with trapped ions relies on individual computational nodes that are linked via quantum networking. This multi-node architecture would also benefit hybrid networks between trapped ions and other quantum systems. Quantum networks of practical scales will require modularization of the quantum control hardware and reduction of the equipment... 127. Improving robustness of laser-free entangling gates for a trapped-ion architecture Madalina Mironiuc The combination of the entangling Mølmer-Sørensen gate and single qubit rotations is a well-established way to realise a universal set of quantum gates with trapped ions. Additionally, implementing this gate scheme using global microwave fields can further the scaling prospects of this quantum computing platform. In previous work, the demonstration of a 98.5% fidelity Mølmer-Sørensen gate... 10. Quantum non-Gaussianity of multiphonon states of a single atom Lukas Podhora (Palacký University (CZ)) Generation and manipulation of non-classical states of motion has been of interest with motivation in quantum metrology, quantum enhanced sensing [1] and quantum thermodynamics. Fock states of motion with exactly defined discrete value of energy are experimentally achievable realizations of such non-classical states in ion trap. Although the significant progress in the Fock state preparation... 38. Automated optical inspection and electrical measurement of industrially fabricated surface ion traps Mr Fabian Anmasser (Infineon Technologies Austria AG, Villach, Austria; Institute for Experimental Physics, Innsbruck, Austria) In 1995, Zoller [1] suggested the realization of a quantum computer by means of using ions in a linear trap. 
Since linear traps are only capable of storing a few tens of ions, the transition to 2D surface traps will be essential for useful quantum computers. Hence, plentiful research was done already about micro-fabricated 2D surface traps in an industrial environment [2,3,4]. To pave the way... 44. Higher-order effects of electric quadrupole fields on a single Rydberg ion Shalina Salim (Stockholm University) Rydberg ions have large dipole and quadrupole polarizabilities which makes them extremely sensitive to external electric fields[1][2]. As a result, an ion in the Rydberg state experiences altered trapping potential which leads to motion-dependent Rydberg excitation energies[3]. Higher the Rydberg state more is the sensitivity to the electric quadrupole trapping fields. The... 62. Industrially microfabricated ion traps with low loss materials Matthias Dietl (Infineon) Quantum computers have the potential to revolutionize computation by making certain types of classically intractable problems solvable. There are several platforms, that might host a future quantum computer. Trapped ions enable quantum gate operations on quantum bits (qubits) by manipulating single or multiple ions. Trapped ion quantum computing offers advantages over other platforms like... 66. A matter link for remote ion-trap modules Mr Falk Bonus (University College London) The number of qubits in quantum computing architectures must be increased dramatically in order to demonstrate an advantage over classical hardware [1]. This "scaling up" must be performed without experiencing reductions in the rate, or the fidelity of the qubit operations. Multiple ions can be confined within a single ion trap. However, qubit gate times and the motional mode density scale... 69. TSV-integrated Surface Electrode Ion Trap for Scalable Quantum Information Processing Théo Henner (Université de Paris) Ion traps and their geometry have seen their complexity increase for several years. Examples of this trend are the integration of waveguides, photodetectors [1] and the design of array of trap [2][3][4]. To continue in this path, significant challenges for electric signal delivery must be solved. I will present a functional trap using Through Silicon Vias (TSV) electrodes connection (both... 72. Demonstrating a logical qubit on a surface ion trap Daisy Smith (University of Sussex) , Ms Sahra Kulmiya The Ion Quantum Technology group has proposed a scalable quantum computing design made up of modular surface ion traps which slot together. One of the main challenges in realizing this design is demonstrating fault-tolerant error correction on a surface trap. We use an X-junction trap which has designated zones for trapping, performing quantum gates and reading out results. It uses surface... 106. Towards a fault-tolerant universal set of microwave driven quantum gates with trapped ions Mr Hardik Mendpara (Leibniz-Universität Hannover) Single-qubit rotation operations and two-qubit entangling gates form a universal set of quantum operations capable of performing any quantum algorithm. Here, we consider the implementation of single- and two-qubit gates using microwaves as a scalable alternative to the more widely used laser-based addressing techniques, which have fidelities that are typically limited by photon scattering [1].... 48. 
Towards quantum control and spectroscopy of a single hydrogen molecular ion David Holzapfel (ETH Zurich) Precision Spectroscopy The complexity and variety of molecules offer opportunities for metrology and quantum information that go beyond what is possible with atomic systems. The hydrogen molecular ion is the simplest of all molecules and can thus be calculated ab initio to very high precision [1]. Combined with spectroscopy this allows to determine fundamental constants and test fundamental theory at record... 3. Tests of QED with singly-ionized helium Andres Martinez de Velasco (Vrije Universiteit Amsterdam (NL)) The 1S-2S transition of hydrogenic systems is a benchmark for tests of fundamental physics [1]. The most prominent example is the 1S-2S transition in atomic hydrogen, where impressive relative accuracies have been achieved [2-3]. Nowadays, these fundamental physics tests are hampered by estimates of uncalculated higher-order QED terms and the uncertainties in the fundamental constants required... 9. Penning-trap mass spectrometry using an unbalanced crystal and optical detection Joaquín Berrocal (Universidad de Granada (ES)) A novel Penning-trap mass spectrometry technique based on optical detection is under development at the University of Granada. This technique is universal, non-destructive, and single ion-sensitive. The scattered photons by a $^{40}$Ca$^{+}$ ion will be used to measure the normal mode eigenfrequencies of the unbalanced crystal formed by this ion and a target one [1] when the crystal is cooled... 58. Measurement of the $^{88}\text{Sr}^{+}$ $S_{1/2}$ $\rightarrow$ $D_{5/2}$ / $^{171}\text{Yb}^{+}$ $S_{1/2}$ $\rightarrow$ $F_{7/2}$ frequency ratio with in-situ BBR shift evaluation Martin Steinel (Physikalisch-Technische Bundesanstalt) A significant contribution to the uncertainty budgets of optical clocks based on the $^{171}\text{Yb}^{+}$ $S_{1/2}$ $\rightarrow$ $F_{7/2}$ electric octupole (E3) transition results from the Stark shift induced by black-body radiation (BBR) of the environment of the trapped ion. Even if precise knowledge on the thermal environment is available, uncertainty in the sensitivity of the shift to... 31. Operation of a microfabricated 2D trap array Marco Valentini (University of Innsbruck) We investigate scalable surface ion traps for quantum simulation and quantum computing. We have developed a microfabricated surface trap consisting of two parallel linear trap arrays with 11 trapping sites each. The trap design requires two interconnected metal layers to address the island-like DC electrodes and a third to shield the substrate. The trap fabrication is carried out by... 53. Microfabricated 3D Ion Traps and Integrated Optics Jakob Wahl (Universtiy of Innsbruck, Infineon Technologies, Austria AG) A future quantum computer will potentially outperform a classical computer in certain tasks, such as factorizing large numbers [1]. A promising platform to implement a quantum computer are trapped ions, as long coherence time, high fidelity quantum logic gates and the implementation of quantum algorithms, such as the shore algorithm, have been demonstrated [2], [3]. To evolve trapped... 51. Signal Generation for Trapped Ion Quantum Gates Norman Krackow (QUARTIQ) In order to manipulate quantum information in trapped ion systems it is necessary to mediate the interaction between qubits with electromagnetic fields in a precisely controlled fashion. 
As ion crystals become larger and enhanced fidelities demand increasingly sophisticated pulse schemes, dynamic signal generation for quantum gates becomes a difficult task. The talk discusses various... 32. Fundamental tests of antimatter gravitation with antihydrogen accelerators Jaspal Singh (University of Manchester (GB)) The Antihydrogen Laser Physics Apparatus (ALPHA) collaboration at CERN has been successfully pushing the boundaries of high precision atomic physics with antihydrogen to characterise the peculiarities of antimatter in a universe suspiciously dominated by matter today. Starting from the blossoming expertise developed by the collaboration with antihydrogen traps and laser spectroscopy... 41. Positron plasma creation and manipulation in the ASACUSA Cusp experiment Andreas Lanz (Austrian Academy of Sciences (AT)) The ASACUSA experiment aims to perform a ppm measurement of the ground-state hyperfine structure of antihydrogen using a spin-polarized antihydrogen beam. The production of antihydrogen in the mixing trap, the so-called Cusp trap – due to its cusped magnetic field – is done by merging positron and antiproton plasmas. To produce a sufficient amount of ground-state antihydrogen it is crucial to... 5. An Ion Trap Source of Ultracold Atomic Hydrogen via Photodissociation of the BaH+ Molecular Ion Steven Armstrong Jones (Swansea University (GB)) Hydrogen remains the go-to tool for testing fundamental physics, with the recent proton radius puzzle being a prime example. Here, I present a novel scheme for producing ultracold atomic hydrogen, based on threshold photodissociation of the BaH+ molecular ion. BaH+ can be sympathetically cooled using laser cooled Ba+ in an ion trap, before photodissociating it on the single photon A1Σ+←X1Σ+... 60. ASACUSA's low energy proton source for matter studies Alina Weiser (Austrian Academy of Sciences (AT)) Antihydrogen atoms can be formed via three body recombination of antiprotons and positrons. The ASACUSA collaboration will use this technique of forming atoms in order to perform a ppm measurement of the ground-state hyperfine structure of them. A proton source was developed such that hydrogen can be produced using the same apparatus and techniques which are used in the antimatter... 114. Grant writing Pablo Garcia Tello (CERN) 71. Developments for an increased detection sensitivity of the neutrinoless double-beta decay ($0\nu\beta\beta$) mode in the NEXT experiment Dr Samuel Ayet San Andres (JLU Giessen) The detection of the double-beta decay mode which would reveal the nature of the neutrino, Dirac or Majorana, is an extremely rare event where two emitted electrons share all the available energy of the decay and no neutrino is emitted. The current experiments in the search of such decay mode are far from a background-free condition, and the level of background achieved plays a crucial role in... 47. The commissioning of a Paul trap for laser spectroscopy of exotic radionuclides in an MR-ToF device Carina Kanitz (Friedrich Alexander Univ. Erlangen (DE)) The Multi Ion Reflection Apparatus for Collinear Laser Spectroscopy (MIRACLS) represents a new approach for precision measurements of nuclear ground-state properties in short-lived radionuclides. Conventional Collinear Laser Spectroscopy (CLS) [1-3] requires ion yields of more than 100-10000 ions per second, depending on the element, delivered from a radioactive ion beam (RIB) facility to... 20. 
Doppler- and sympathetic cooling for the investigation of short-lived radionuclides Franziska Maria Maier (University of Greifswald (DE)) Ever since its introduction in the mid 1970s, laser cooling has become a fundamental technique to prepare and control ions and atoms for a wide range of precision experiments. In the realm of rare isotope science, for instance, specific atom species of short-lived radionuclides have been laser-cooled for fundamental-symmetries studies [1] or for measurements of hyperfine-structure constants... 18. Trapping Swift Divergent Ions with Stacked Rings Xiangcheng Chen (University of Groningen (NL)) Study on exotic nuclei has become one of the research frontiers in nuclear physics. They can be produced by bombarding an energetic (MeV~GeV) projectile onto a target. Among various products, the ions of interest can be promptly and efficiently selected by in-flight separation. To precisely measure their properties, it is preferable to couple a low-energy (eV~keV) experimental terminal to the... 76. Collaborative design of a trapped-ion quantum computer with fully interconnected qubits Celeste Torkzaban (Leibniz Universität Hannover) Research groups in a wide range of disciplines at the Leibniz Universität Hannover, the Physikalisch-Technische Bundesanstalt (PTB) Braunschweig, and the Technische Universität Braunschweig are working together in the newly-created Quantum Valley Lower Saxony to create a trapped-ion quantum computer with fully interconnected qubits. A pair of existing trapped-ion experiments, one at LUH and... 22. A two-node trapped-ion quantum network with photonics interconnects Gabriel Araneda (University of Oxford) Trapped ions are a leading platform for quantum computing due to the long coherence time, high-level of control of internal and external degrees of freedom, and the natural full connectivity between qubits. Single and multi-qubit operations have been performed with high fidelity (>99.9%), which has enabled the demonstration of small universal quantum computers (∼10 atoms). However, scaling... 79. Device-Independent Quantum Key Distribution Between Two Ion Trap Nodes David P. Nadlinger (University of Oxford) Private communication over shared network infrastructure is of fundamental importance to the modern world. In classical cryptography, shared secrets cannot be created with unconditional security; real-world key exchange protocols rely on computational conjectures such as the hardness of prime factorisation to provide security against eavesdropping attacks. Quantum theory, however, promises... 55. ABaQuS: A trapped-ion quantum computing system using $^{133}\mathrm{Ba}^+$ qubits Ana Sotirova (University of Oxford) Trapped atomic ions are one of the most promising quantum computing architectures. They exhibit all of the primitives necessary for building a quantum computer and have very few fundamental limitations to the achievable gate fidelities. While high-fidelity quantum logic has already been demonstrated on a small number of qubits, scaling up the system without compromising its... 16. Trapped Barium Ions at the United States Air Force Research Laboratory Dr Zachary Smith (United States Air Force Research Lab (US)) Laser cooled and trapped atomic ions are promising platforms for quantum networking, sensing, and information processing because they are quantum systems well isolated from their surrounding environment. The species and isotope selected for trapping have different properties. 
Nuclear spin $I=\frac{1}{2}$ isotopes have long coherence times for a ground-state hyperfine qubit with robust...

115. Career panel

116. COMSOL workshop
Mr Roman Obrist (COMSOL), Dr Sven Friedel (COMSOL)
7/1/22, 9:00 AM
In this introduction session to COMSOL Multiphysics® software you will get an overview of COMSOL® capabilities in modeling electromagnetic fields and the motion of particles therein. In Unit 1 we will cover the basic modeling workflow for modeling stationary and time-dependent low frequency EM fields such as capacitive, resistive and inductive systems. Unit 2 will give you an introduction...

117. LabVIEW workshop

https://indico.cern.ch/e/ECCTI_2022
Causal inference in multi-state models – sickness absence and work for 1145 participants after work rehabilitation

Jon Michael Gran, Stein Atle Lie, Irene Øyeflaten, Ørnulf Borgan and Odd O. Aalen

BMC Public Health 2015, 15:1082
© Gran et al. 2015
Received: 29 April 2015. Accepted: 12 October 2015

Abstract

Background
Multi-state models, as an extension of traditional models in survival analysis, have proved to be a flexible framework for analysing the transitions between various states of sickness absence and work over time. In this paper we study a cohort of work rehabilitation participants and analyse their subsequent sickness absence using Norwegian registry data on sickness benefits. Our aim is to study how detailed individual covariate information from questionnaires explains differences in sickness absence and work, and to use methods from causal inference to assess the effect of interventions to reduce sickness absence. Examples of the latter are to evaluate the use of partial versus full time sick leave and to estimate the effect of a cooperation agreement on a more inclusive working life.

Methods
Covariate adjusted transition intensities are estimated using Cox proportional hazards and Aalen additive hazards models, while the effect of interventions is assessed using methods of inverse probability weighting and G-computation.

Results
Results from covariate adjusted analyses show great differences in sickness absence and work for patients with assumed high risk and low risk covariate characteristics, for example based on age, type of work, income, health score and type of diagnosis. Causal analyses show small effects of partial versus full time sick leave and a positive effect of having a cooperation agreement, with about 5 percentage points higher probability of returning to work.

Conclusions
Detailed covariate information is important for explaining transitions between different states of sickness absence and work, also for patient specific cohorts. Methods for causal inference can provide the needed tools for going from covariate specific estimates to population average effects in multi-state models, and identify causal parameters with a straightforward interpretation based on interventions.

Keywords: Multi-state models, Causal inference, Survival analysis, Cohort study, Registry data

Background
Data on sickness benefits are a valuable source for analysing sick leaves, disability and employment, but due to the complexity of such data the choice of measurement type and analysis can be challenging [1]. However, recent work using data from Norwegian [2, 3] and Danish registries [4–7] has proved that multi-state models [8–13] can be a very successful framework for analysing this kind of data. For example, when studying the effect of participating in work rehabilitation programs, events such as return to work, onset of sick leave benefits or work assessment allowance can hardly be seen as single time-to-event outcomes, but rather as a set of events which define states that the individuals move between. Multi-state modelling, as an extension of traditional survival analysis, offers a unified approach to the modelling of the transitions between such states. National registries with data on sickness benefits are a good basis for many types of analyses. The data are typically complete, and detailed information is collected on the type of benefits and dates when they are given.
Additional information on the individuals receiving benefits is often available or can be obtained in even greater detail by coupling such registry data with cohort data where detailed information is available. The assessment of possible interventions with the purpose of reducing sickness absence is an important aim when analysing sickness benefit data, and identifying successful interventions could have a possible large economic impact [14]. In this paper we will focus on two such interventions, which both have received a lot of attention. One is the effect of partial compared to full time sick leave benefits, see e.g. [14–17], and the other is the effect of a cooperation agreement on a more inclusive working life, see e.g. [18]. In the Nordic countries there have been political initiatives for expanded use of partial sick leave. Part-time work may be beneficial, and a feasible way to integrate individuals with reduced work ability in working life, if the alternative is complete absence from work [15, 17]. In Norway, an agreement on more inclusive working life was signed by the Government and the social partners in employers and employees' organisations in 2001, and was renewed in 2005, 2010 and 2014. One of the main aims of this tripartite agreement has been to reduce the amount of individuals on sick leave and disability pension. Even though some attempts have been made to conduct randomized trials to assess interventions for reducing sick leave [19, 20], the execution of such experiments is challenging and not very commonly seen. As for using observational data to identify the effect of such interventions, numerous attempts have being made, see e.g. [21–24]. There has also been a massive methodological development over the last decades within the field of causal inference [25–27], providing a formal framework for identifying parameters similar to those in randomized trials from observational data. Such methods can also be employed in a multi-state model setting, but this has hardly been done yet. Earlier work on multi-state models for Norwegian registry data on sick leave benefits has also been in the form of cohort follow-up studies [2, 3], but without using the detailed covariate information available in these cohorts. In this paper we extend the analysis of Øyeflaten et al. [3], analysing transitions between sick leave benefits, work assessment allowance, disability pension and work for patients participating in work rehabilitation programs. Formally, we make three extensions to the analyses in the original paper. First of all, we cover a larger multi-center cohort, about double in size. Secondly, we utilize the detailed covariate information which is available for this cohort to estimate covariate specific state transition probabilities. Doing this, both proportional hazards and additive hazards models are here being considered for the purpose of estimating the transition intensities. Last but not least, we explore three different approaches based on classical methods from the causal inference literature to estimate the effect of interventions in multi-state models. The purpose of this paper is therefore twofold; to use multi-state models to study sickness absence and work based on detailed covariate information for a cohort of participants after work rehabilitation, and, to illustrate how methods from the causal inference literature can be used to estimate the effect of interventions in such a multi-state model framework. 
Detailed covariate information is, of course, central in making covariate specific predictions in a multi-state model, but even more important when estimating the causal effects of interventions from observational data. The statistical and causal assumptions needed will be discussed specifically. Covariate information has been used in multi-state models before for predicting sick leaves and related outcomes, in two recent papers on Danish data [4, 7]. The main difference between the data in these studies and the data in the present study is that the Danish data cover a much larger cohort, while the Norwegian data include more detailed information on the health of the participants. The latter is important for precise patient predictions and for adjusting for confounding when aiming at drawing causal conclusions. None of the earlier studies consider the estimation of causal effects of interventions in a multi-state setting. With the increasing attention on multi-state modelling of event-history data, more and more software packages have been made available, especially in R [28]; for example the mstate [29], msm [30] and msSurv [31] packages. See the latter, or the books of Beyersmann et al. [32] and Willekens [33], for detailed overviews of available R packages. The computations in this paper has been performed in R using the surv and mstate packages and by standalone code written by the first author. A multi-center cohort The patients being analyzed are part of a multi-center cohort study with the purpose of studying how health complaints, functional ability and fear avoidance beliefs explain future disability and return-to-work for patients participating in work rehabilitation programs. Data has been collected on 1155 participants from eight different clinics offering comprehensive inpatient work rehabilitation. Mean time on sickness benefits during the last two years before admittance to the work rehabilitation program, were 10 months (SD = 6.7). All participants gave informed consent, allowing for follow-up data on sickness absence benefits to be obtained from national registries, and answered comprehensive questionnaires during their stay at the clinic. The study was approved by the Medical Ethics Committee; Region West in Norway (REK-vest ID 3.2007.178) and the Norwegian social science data services (NSD, ID 16139). The data collected through questionnaires includes various background information together with detailed health variables such as subjective health complaints, physical function, coping and interaction abilities, and fear-avoidance beliefs. See Øyeflaten et al. [34] for more details on the cohort. Data on sickness benefits All Norwegian employees are entitled to sickness benefits such as sick leave benefits, work assessment allowance or disability benefits, if incapable of working due to disease or injury. The employer pays for the first 16 days of a sick leave period, and thereafter The Norwegian Labour and Welfare Administration (NAV) covers the disbursement. Data on these benefits, both the ones covered by the employer and NAV, was obtained from NAV's register, which contains information on the start and stop dates of sickness benefits given from 1992 and onward for the entire Norwegian population. 
Data for current analysis
Out of the original 1155 participants in the multi-center cohort study, we excluded 4 individuals with an unknown date of departure from their rehabilitation center, 1 individual who had not answered the relevant questions on subjective health complaints and 5 individuals already on disability pension at baseline, and were left with a study sample of 1145 participants. Baseline was set to the time of departure, which varied between May 16th 2007 and March 25th 2009. Individuals were followed up with regard to their received sickness benefits until July 1st 2012, which was the date of data extraction from NAV.

A multi-state model for sickness absence and work
The occurrence of an event in survival analysis can be seen as a transition from one state to another, for example from an alive state to a death state. The hazard rate corresponds to the transition intensity between these two states. Multi-state models form a flexible framework allowing for the standard survival model to be extended by adding more than one transition and more than two states. A detailed introduction to multi-state models can be found in review papers such as Hougaard [8], Commenges [9], Andersen and Keiding [10], Putter et al. [11] and Meira-Machado et al. [12], or the book chapter by Andersen and Pohar Perme [13].

Sickness absence and disability data are a good example of data that are suitable for being modelled within the multi-state framework. Changing between work and being on various types of sickness benefits over time can naturally be perceived as moving between a given set of states. In Norway, employees on partial or full sick leave can be fully compensated through sick leave benefits for up to a year, after which they can be entitled to work assessment allowance. If their underlying health condition provides reasons for it, they may be granted a disability pension or further partial sick leave benefits. The latter is actively recommended by the authorities [35]. Partial sick leave can be graded from 20 to 99 %. Based on these policies we define five states that the participants can move between after being discharged from the rehabilitation centers: work (no received benefits), sick leave, partial sick leave, work assessment allowance and disability pension, and we propose the multi-state model illustrated in Fig. 1. At baseline, when being discharged from the rehabilitation center, individuals can start in any of the first four states.

Fig. 1 Multi-state model for sickness absence and work. A model for the transitions between work (no registered benefits), sick leave benefits, partial sick leave benefits, work assessment allowance and disability pension, for patients being discharged from clinics offering comprehensive inpatient work rehabilitation

Individuals are defined as being on sick leave when receiving full sick leave benefits, on partial sick leave when receiving sick leave benefits graded below 100 %, and on disability pension when receiving disability pension on unlimited terms. Work assessment allowance is an intermediate benefit typically given between sick leave and disability pension. It is granted to individuals going through medical treatment or rehabilitation, or to others that might benefit from vocational rehabilitation actions. There is an upper limit of four years for receiving work assessment allowance. When individuals do not receive any sickness benefits, they are per definition in work.
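As a small illustration (not part of the original article), the state structure in Fig. 1 can be encoded for analysis in R with the mstate package, which is also among the packages used for the computations in this paper. The state abbreviations below are our own, and the absence of a direct transition from work to disability pension follows the definition above.

library(mstate)

# Transition structure for the five-state model in Fig. 1 (15 transitions):
# 1 = work, 2 = partial sick leave, 3 = sick leave,
# 4 = work assessment allowance, 5 = disability pension (absorbing).
# There is no direct transition from work to disability pension.
tmat <- transMat(x = list(c(2, 3, 4),        # from work
                          c(1, 3, 4, 5),     # from partial sick leave
                          c(1, 2, 4, 5),     # from sick leave
                          c(1, 2, 3, 5),     # from work assessment allowance
                          c()),              # disability pension is absorbing
                 names = c("work", "parSL", "sickL", "workAsAl", "disab"))
tmat  # assigns a number to each of the 15 transitions

The resulting transition matrix numbers the transitions, which is convenient when fitting transition-specific models later on.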
The only exception to this definition of the work state is when there are gaps with no benefits before receiving disability pension – as there are no real transitions directly from work to disability, such gaps are attributed to the most recently received benefit. To avoid including non-genuine transitions, benefits with a duration of only one day have been discarded. When there were benefits registered which overlapped in time, the newest registered benefit was used. As for initial states: 178 patients started in the work state (receiving no benefits) after being discharged from the rehabilitation center, 106 were on partial sick leave benefits, 496 on full sick leave benefits and 365 were on work assessment allowance. Disability pension was defined to be an absorbing state in the multi-state model, as few transitions were observed to go out of this state in the original data. The total number of subsequent transitions between the five states within the study window is shown in Table 1.

Table 1 Transition summary. Total number of transitions between the five states work (state 1), partial sick leave (state 2), sick leave (state 3), work assessment allowance (state 4) and disability pension (state 5), based on registry data from The Norwegian Labour and Welfare Administration (NAV) for participants in the multi-center cohort

Covariate information includes age at baseline, gender, marital status, whether a cooperation agreement on a more inclusive working life is present, educational level, type of work, income, working ability score when entering rehabilitation and diagnosis group at baseline. All covariates are based on information from the questionnaires, except information on type of diagnosis, which is retrieved through the ICPC code when available in NAV's register, and partly from the cohort data at the time of entering the rehabilitation. The current diagnosis at any given time is defined as the last given diagnosis. Note that these selected covariates are only one of many possible representations of the information in the original data source, constructed to sufficiently describe the differences between patients. Detailed statistics on the covariates are found in Table 2.

Table 2 Descriptive statistics. Description of selected covariates collected from questionnaires to the multi-center cohort and Norwegian Labour and Welfare Administration (NAV) data (n=1145)

The transition intensities for the 15 transitions in the multi-state model from Fig. 1 were examined using the Nelson-Aalen estimator for marginal transition intensities, and Cox proportional hazards and Aalen additive hazards models for conditional transition intensities using relevant covariate information. Cox and Aalen models were fitted using either the coxph or aareg function in the survival package [36] of the statistical software R [28]. The Nelson-Aalen estimator was calculated by using the coxph function without covariates. Say that X(t) denotes the state for an individual at time t.
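A schematic sketch of this model fitting step in R could look as follows. It is only a sketch: the long-format data frame mslong, its covariate names and the transition numbering are hypothetical placeholders and not the actual analysis code of the paper.

library(survival)

# Hypothetical long-format data: one row per individual and transition at risk,
# with counting-process times (Tstart, Tstop), a transition number 'trans'
# (cf. tmat above), an event indicator 'status' and baseline covariates.

# Cox models with a separate baseline hazard per transition
# (common covariate effects across transitions)
cox_all <- coxph(Surv(Tstart, Tstop, status) ~ age + female + agreement +
                   manual_work + strata(trans), data = mslong)

# A single transition-specific Cox model, e.g. sick leave -> work
# (trans == 8 under the hypothetical numbering of tmat above)
cox_sl_work <- coxph(Surv(Tstart, Tstop, status) ~ age + female + agreement +
                       manual_work, data = subset(mslong, trans == 8))

# Nelson-Aalen type (marginal) cumulative hazard for the same transition
na_sl_work <- basehaz(coxph(Surv(Tstart, Tstop, status) ~ 1,
                            data = subset(mslong, trans == 8)))

# Aalen additive hazards model for the same transition
aalen_sl_work <- aareg(Surv(Tstart, Tstop, status) ~ age + female + agreement +
                         manual_work, data = subset(mslong, trans == 8))

Stratifying on the transition number gives a separate baseline hazard for each transition, while fitting separate models per transition, as in the last three calls, additionally allows the covariate effects to differ between transitions. With the fitted cumulative intensities in hand, the transition probabilities are then obtained as follows.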
The transition probability matrix P(s,t), with elements P hj (s,t)=P(X(t)=j∣X(s)=h), denoting the transition probability from state h to state j in the time interval (s,t], was then estimated by the matrix product-integral formula $$ \hat{\boldsymbol{P}}(s,t) = \prod_{u \in (s,t]} \big(\boldsymbol{I} + d\boldsymbol{\hat{{A}}}(u)\big), $$ where \(\boldsymbol {\hat {{A}}}(u)\) is the corresponding estimated cumulative transition intensity matrix at time u [13, 29]. The cumulative intensities in \(\boldsymbol {\hat {{A}}}(u)\) are estimated using the Nelson-Aalen estimator. The cumulative transition intensity matrix could also be estimated conditioning on covariates Z, changing the formula in Eq. 1 to $$ \hat{\boldsymbol{P}}_{Z}(s,t) = \prod_{u \in (s,t]} \big(\boldsymbol{I} + d\boldsymbol{\hat{{A}}}_{Z}(u)\big), $$ where \(\boldsymbol {\hat {P}}_{Z}(s,t)\) and \(\boldsymbol {\hat {A}}_{Z}(u)\) are the estimated covariate specific transition probability matrix and cumulative transition intensity matrix respectively. The cumulative intensities in \(\boldsymbol {\hat {A}}_{Z}(u)\) is estimated for given values of Z using Cox proportional hazards models or Aalen additive hazards models. From the estimated transition probability matrix one can study the probabilities of being in state j at time t when starting in state h at baseline, \(\hat {P}_{\textit {hj}}(0,t)\), or the overall probability of being in state j at time t, $$ \hat{P}(X(t) = j) = \sum_{k} \hat{P}_{kj}(0,t) \cdot \hat{P}\big(X(0)=k\big). $$ For models without covariates, P(X(0)=k) can be estimated by the proportion starting in state k. With covariates, it can be estimated using logistic regression. With cumulative hazard estimates from the Nelson-Aalen estimator, the formula in 1 corresponds to the Aalen-Johansen estimator. With this marginal approach or with covariate adjusted cumulative hazards like in Eq. 2 estimated using Cox proportional hazards models, estimates and confidence intervals were calculated using the mstate package [29]. Using cumulative hazard estimates from Aalen additive hazards models, the estimator from Eq. 2 has to be implemented separately. Confidence intervals can be calculated using bootstrap methods or analytically as described in Aalen, Borgan and Gjessing ([37], p. 183). Note that there is an intrinsic Markov assumption [13] in this way of multi-state modelling which can be challenging when using complex data such as data based on sick leave and disability benefits. When the length of stay in a state affects the intensity for leaving the state, this assumption is in principal being violated. This is the case in three of the states in our multi-state model due to administrative regulations. Individuals can only be on sick leave or partial sick leave spells of maximum one year, and on work assessment allowance for a maximum of four years. To what degree such violations pose a problem will however depend on how often individuals stay in these states long enough for the regulations to take effect, which again partly depend on the follow-up time of the study. In our study we have individual follow-up times ranging between three and five year, which means that the maximum time of four years for work assessment allowance not will pose a problem. In fact, the mean length of stay in this state is 274 days (with a 95 % percentile of 1028 days). 
Sick leave and partial sick leave spells close to a year are also very rare in our study population, with a mean stay of 38 days on sick leave and 68 days on partial sick leave (and corresponding 95 % percentiles of 180 and 218 days). Overall, this seems to indicate that while serious violations of the Markov assumption are possible, they are in practice uncommon and should not have any big impact on the results of our study. However, in general one should be aware that violations of this assumption may impact some of the estimated effects, including the causal parameters of interest. Note also that more advanced models relaxing the Markov assumption have been developed, but the impact of such violations will vary and can often be disregarded. See for example Gunnes et al. [38] and Allignol et al. [39], who only show small discrepancies between Markov and non-Markov models in situations where the Markov assumption is not met. When focusing on overall state occupation probabilities as in Eq. 3, Datta and Satten [40] have shown that the product-integral estimator in Eq. 1 is consistent regardless of whether the Markov assumption is valid.

Causal inference and the effect of interventions in multi-state models
Besides estimating transition intensities and probabilities for a given set of states in a multi-state model and doing individual predictions, it is also of interest to evaluate population average effects of interventions in the multi-state model framework. There is a fundamental difference between merely predicting covariate specific outcomes and estimating the causal effect of interventions on them, which creates a need for special methods and assumptions. We now consider three different approaches based on classical methods from the causal inference literature. The methods are exemplified with regard to the two types of possible interventions mentioned in the Introduction.

The first intervention is the use of partial versus full time sick leave, where partial sick leave is often thought to cause shorter absence and higher subsequent employment [14]. The other intervention is the use of cooperation agreements on a more inclusive working life, which in Norway have been implemented with the goals of improving the work environment, enhancing presence at work, preventing and reducing sick leave, and preventing exclusion and withdrawal from working life. A secondary aim is to prevent withdrawal and to increase employment of people with impaired functional ability. Participating enterprises must systematically carry out health and safety measures, with inclusive working life as an integral part, and will in return receive prevention and facilitation subsidies and have their own contact person at NAV [41]. Note that the first of these two interventions is represented through states in our multi-state model in Fig. 1, while the latter is represented as an additional covariate as shown in Table 2.

As for causal assumptions, we will focus on the three general conditions which have been identified for estimating average causal effects: positivity, exchangeability ("no unmeasured confounding") and consistency ("well-defined interventions") [42]. We will also discuss how the related modularity condition, e.g. from the Pearl framework of causal inference [26], is relevant in our context of multi-state models. Additionally, as always, we need the statistical assumption of no model misspecification, which in our case is important both at the intensity level and at the overall multi-state level.
The importance and validity of all these assumptions are discussed separately for the three different approaches in the following subsections.

Artificially manipulating transition intensities
One proposed method for making causal inference in multi-state models is to artificially change certain transition intensities in \(\hat{\boldsymbol{A}}(u)\) and then explore the corresponding hypothetical transition probabilities [43]. Such changes in transition intensities, creating a new transition intensity matrix which can be denoted \(\tilde{\boldsymbol{A}}(u)\), may represent interventions. The hypothetical transition probabilities, which we can denote \(\tilde{\boldsymbol{P}}(s,t)\), may then represent counterfactual outcomes. Confidence intervals for such hypothetical transition probabilities can be found through the distribution of the cumulative intensities after manipulation. For situations without covariates and for the additive hazards model this will follow by the arguments in Aalen, Borgan and Gjessing ([37], p. 123–126 and 181–190). For the Cox model it will follow by the functional delta method in Andersen, Borgan, Gill & Keiding ([44], p. 512–515). For more on these types of analyses with respect to causal inference, and especially the connection to G-computation, see Keiding et al. [43] and Aalen, Borgan and Gjessing ([37], p. 382).

The important causal assumption for this approach to be reasonable is that when intervening on a set of transition intensities, the remaining transition intensities stay unchanged. This is equivalent to the modularity assumption and the definition of a structural causal model in the Pearl framework of causal inference [26]. See Aalen et al. [45] and Røysland [46] for more on modularity in the light of intensity processes. However, even when it is unreasonable that such an assumption is fully met, it has been argued that this kind of inference in multi-state models can still give valuable insights ([47], p. 250).

In this paper we will follow the ideas from Keiding et al. [43] for our multi-state model for sickness absence and work in Fig. 1, and define interventions through manipulating transition rates within given sets of covariate values, where such interventions would be realistic. One example of an intervention would be to increase the use of partial sick leave compared to full sick leave, which would correspond to modifying the intensities into the partial sick leave and sick leave states. For the modularity assumption to be met in this case, the additional individuals counterfactually put on partial sick leave instead of full sick leave should behave identically to those individuals who were observed on partial sick leave in the original data. As those on partial sick leave generally are in a better health state than those on full sick leave, this is not a reasonable assumption. However, it is reasonable within similar strata of covariate levels, which we will study later in this paper. Satisfying the condition of modularity in this manner will also imply that the assumptions of positivity, exchangeability and consistency are met. A small numerical sketch of this kind of intensity manipulation is given below.

Inverse probability weighting
Another approach from the causal inference literature is inverse probability of treatment (or propensity score) weighting [48, 49]. The treatment or exposure of interest can be represented either as states in the multi-state model or through additional covariates.
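Before developing the weighting approach further, the intensity manipulation described in the previous subsection can be made concrete with a toy numerical sketch. The reduced three-state model and the increment values below are made up purely for illustration; in practice the increments would be Nelson-Aalen or covariate-specific estimates from the fitted model, and the manipulated transition would be chosen to reflect the intervention of interest.

# Product-integral (as in Eq. 1) over a list of intensity increment matrices
prodint <- function(dA_list) {
  P <- diag(nrow(dA_list[[1]]))
  for (dA in dA_list) P <- P %*% (diag(nrow(dA)) + dA)
  P
}

# Toy three-state model: 1 = work, 2 = partial sick leave, 3 = full sick leave,
# with made-up intensity increments at three event times
make_dA <- function(a12, a13, a21, a31) {
  m <- matrix(0, 3, 3)
  m[1, 2] <- a12; m[1, 3] <- a13; m[2, 1] <- a21; m[3, 1] <- a31
  diag(m) <- -rowSums(m)
  m
}
dA_obs <- list(make_dA(0.02, 0.06, 0.20, 0.10),
               make_dA(0.03, 0.05, 0.15, 0.10),
               make_dA(0.02, 0.06, 0.20, 0.05))

# Hypothetical intervention: move half of the work -> full sick leave intensity
# over to work -> partial sick leave, leaving all other intensities unchanged
dA_int <- lapply(dA_obs, function(m) {
  m[1, 2] <- m[1, 2] + 0.5 * m[1, 3]
  m[1, 3] <- 0.5 * m[1, 3]
  diag(m) <- 0; diag(m) <- -rowSums(m)
  m
})

prodint(dA_obs)[1, ]  # estimated P(0, t) when starting in work
prodint(dA_int)[1, ]  # hypothetical P(0, t) under the manipulated intensities

Only the increments of the manipulated transition are changed; all other increments are kept as estimated, which is exactly what the modularity assumption requires.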
One could for example weight by the inverse probability of being in a given state at baseline, before estimating the transition intensities of the model in Fig. 1. This would correspond to modelling a counterfactual scenario where there is a copy of each individual in every possible initial state. The sufficient conditions for this approach to be valid is again the causal assumptions of positivity, exchangeability and consistency. Positivity here means that there should be a non-zero probability of receiving all possible exposures for all covariate values in the population. Also, the model for the exposure, which is the foundation for the weights, must be well specified. See for example [42, 48, 49] for a further discussion on these assumptions. Say that we would like to compare the effect of being put on sick leave versus partial sick leave at baseline (when being discharged from the rehabilitation center). Let us for now only consider those starting in either of these two states. Whether an individual is put on full or partial sick leave at baseline is hardly randomized. We could however model the counterfactual situation where everyone, regardless of their covariate information, were put on full sick leave at baseline and an identical copy of each individual were placed on part time sick leave. This can be achieved by applying the weights $$ w_{k} = \frac{1}{P(S_{k} = s_{k}|Z_{k} = z_{k})}, $$ where S k is the initial state and Z k is all the relevant covariate information explaining the initial state for individual k. The probabilities of being in either of the two states at baseline can be estimated using ordinary logistic regression. The uncertainty of the estimates from the resulting weighted multi-state analysis can easily be calculated using for example the coxph function in R with robust standard errors [50]. Another casual contrast of interest would be to compare the scenario where everyone got a cooperation agreement on a more inclusive working life with a scenario where no-one had such an agreement. This would correspond to modelling a situation where such agreements were randomized. This could be modelled by weighting every individual in the original data with the inverse probability of having a cooperation agreement on a more including working life given covariates, by applying the weights $$ w_{k} = \frac{1}{P(E_{k} = e_{k}|Z_{k} = z_{k})}, $$ where E k is an indicator variable that is 1 if an agreement is present and 0 otherwise. The probabilities can again be estimated using logistic regression. Assuming positivity for the first type of intervention means that there should be a probability greater than zero for starting in either of the two states of sick leave or partial sick leave at baseline, regardless of any observed covariate history. This is testable, and the covariates in Table 2 are well balanced over the two groups. The biggest difference lies in the distribution of the working ability score, but even in the partial sick leave group 5 % of the individuals has a low ability score (the lowest health score). As for exchangeability it is a question of whether the included covariates sufficiently explain the differences between those on full and partial sick leave at baseline. The covariates include demographic, socioeconomic, work and health variables, which should be the central parameters. However, to what degree they are sufficiently covered is untestable. 
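Setting the discussion of assumptions aside for a moment, the weighting itself is straightforward to implement. The sketch below is hypothetical: the data frames baseline and mslong, the variable names and the exposure model are placeholders for the actual data and models.

library(survival)

# Probability of the observed exposure (here: full versus partial sick leave
# at baseline) estimated with logistic regression, giving the weights above
ps_fit <- glm(full_sickleave ~ age + female + manual_work + income_high +
                ability_score + diagnosis_group,
              family = binomial, data = baseline)
p_full <- predict(ps_fit, type = "response")
baseline$w <- ifelse(baseline$full_sickleave == 1, 1 / p_full, 1 / (1 - p_full))

# Weighted multi-state analysis: merge the weights into the long-format
# transition data and refit the (here covariate-free) stratified model;
# cluster(id) requests robust (sandwich) variance estimation
mslong_w <- merge(mslong, baseline[, c("id", "w")], by = "id")
cox_w <- coxph(Surv(Tstart, Tstop, status) ~ strata(trans) + cluster(id),
               weights = w, data = mslong_w)

# The weighted cumulative hazards per transition can be extracted with
# basehaz(cox_w) and plugged into the product-integral formula (Eq. 1)

The same recipe applies to the cooperation agreement intervention, with the agreement indicator in place of full_sickleave.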
Another causal contrast of interest would be to compare the scenario where everyone got a cooperation agreement on a more inclusive working life with a scenario where no-one had such an agreement. This would correspond to modelling a situation where such agreements were randomized. It can be modelled by weighting every individual in the original data by the inverse probability of having a cooperation agreement on a more inclusive working life given covariates, that is, by applying the weights $$ w_{k} = \frac{1}{P(E_{k} = e_{k}|Z_{k} = z_{k})}, $$ where E_k is an indicator variable that is 1 if an agreement is present and 0 otherwise. The probabilities can again be estimated using logistic regression. Assuming positivity for the first type of intervention means that there should be a probability greater than zero of starting in either of the two states of sick leave or partial sick leave at baseline, regardless of any observed covariate history. This is testable, and the covariates in Table 2 are well balanced over the two groups. The biggest difference lies in the distribution of the working ability score, but even in the partial sick leave group 5 % of the individuals have a low ability score (the lowest health score). As for exchangeability, it is a question of whether the included covariates sufficiently explain the differences between those on full and partial sick leave at baseline. The covariates include demographic, socioeconomic, work and health variables, which should be the central parameters. However, to what degree they are sufficiently covered is untestable. The health variable should ideally have been collected at baseline, and not at the first measurement after entering the rehabilitation center, but one could hope that, in combination with type of work and diagnosis group, it will still be sufficient. An example of a variable that was considered, but not included, is the center that the patients attended. Adding this information, which involves adding 7 new dummy variables, seemed to have little impact. We therefore assume that center-specific differences between patients are covered sufficiently through the other covariates, especially the working ability score and diagnosis group. The cooperation agreement intervention is not administered at an individual level, and thus the assumptions are even easier to assess. There are no covariate combinations that exclude such agreements, and the most important confounder will be type of work. The exposure models for both interventions can also be assumed to be well specified.
G-computation
A third approach, which corresponds to G-computation [51–53] (or standardization) of the parameter from the inverse probability weighting, is to estimate the transition intensities for individual k conditional on all relevant covariate information Z_k, using a Cox proportional hazards or an Aalen additive hazards model, and then predict the state transition probabilities given covariates Z, P_hj(s,t∣Z), for every individual under a specific intervention. As for the inverse probability weighting approach, the intervention can be defined both through setting a specific initial state and through setting a covariate to a specific value. The main causal assumptions are again positivity, exchangeability and consistency, together with the assumption of no model misspecification. However, the model which needs to be correctly specified is now the model for the outcome, and not the model for the exposure as in the inverse probability approach. See for example [52] for a discussion of the causal assumptions of G-computation. For a general discussion of the use of inverse probability weighting and G-computation, and the connection to standardisation, see [53]. If, again, we would like to compare the effect of being put on sick leave versus partial sick leave at baseline, the intervention would correspond to setting the initial state to h=2 and h=3, and comparing all individual predictions for the two values. The population average effect can then be estimated through $$ \frac{1}{n} \sum_{k} \hat{P}_{3,j}(0,t \mid Z_{k}) - \frac{1}{n} \sum_{k} \hat{P}_{2,j}(0,t \mid Z_{k}), $$ where n is the number of individuals in the study. Confidence intervals can be found using standard bootstrap techniques. Correspondingly, if we consider an intervention such as the cooperation agreement on a more inclusive working life, represented by a binary covariate E_k, the population average effect of such an intervention can be estimated by $$ \frac{1}{n} \sum_{k} \hat{P}_{i,j}\left(0,t \mid Z_{k}^{E_{k}=1}\right) - \frac{1}{n} \sum_{k} \hat{P}_{i,j}\left(0,t \mid Z_{k}^{E_{k}=0}\right), $$ for given initial states i. As these interventions are the same as those considered for the inverse probability approach, the causal assumptions needed are also identical; see the discussion of these assumptions in the previous sub-section.
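Schematically, the G-computation averaging above can be implemented along the following lines (an added sketch, not the authors' code). The helper pred_prob() is hypothetical and stands for any routine that returns the model-based estimate of P_ij(0, t | z) for a single covariate vector, for instance built around the transition-probability machinery of the mstate package; confidence intervals follow by bootstrapping individuals and repeating the whole procedure.

# covs: one row per individual with the covariates used in the hazard models.
# fit:  the set of fitted transition-specific hazard models.
# pred_prob(fit, z, i, j, t): hypothetical helper returning P_ij(0, t | z).

gcomp_effect <- function(fit, covs, i, j, t) {
  z1 <- transform(covs, agreement = 1)      # everyone with an agreement
  z0 <- transform(covs, agreement = 0)      # no-one with an agreement
  p1 <- vapply(seq_len(nrow(covs)),
               function(k) pred_prob(fit, z1[k, ], i, j, t), numeric(1))
  p0 <- vapply(seq_len(nrow(covs)),
               function(k) pred_prob(fit, z0[k, ], i, j, t), numeric(1))
  mean(p1) - mean(p0)                       # population average effect
}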
Unadjusted analysis
Unadjusted cumulative intensities for the 15 transitions in the multi-state model in Fig. 1, estimated using the Nelson-Aalen estimator, are shown in Fig. 2. We see how the magnitude of the estimated transition intensities varies between states, and that transitions from sick leave to work have the highest intensity. Note that the estimated intensities correspond to the slopes of the cumulative estimates in this figure.
Fig. 2: Nelson-Aalen estimates of unadjusted cumulative transition intensities for the 15 transitions in the multi-state model. The five states in the model are work (Work), sick leave benefits (SickL), partial sick leave benefits (ParSL), work assessment allowance (WorkAsAl) and disability pension (Disab).
The estimated time-varying transition probabilities, found by Eq. 1, give rise to the stacked probability plots in Fig. 3, given the four possible initial states (work, sick leave, partial sick leave and work assessment allowance). For example, we see that an individual who is on sick leave at time 0 has an unadjusted probability of approximately 0.50 of having returned to work after three years. The unadjusted probability of being disabled after the same period is approximately 0.10.
Fig. 3: Unadjusted state transition probabilities. Predictions given the patient's state at baseline (time of discharge from the work rehabilitation center).
Overall state occupation probabilities calculated according to (3) are shown in Fig. 4. We see, for example, that overall there is a rapid increase in work after discharge from the rehabilitation center, from just below 20 % to just below 50 % during the first year. The general tendencies in this figure are similar to those in the paper by Øyeflaten et al. [3], who performed an unadjusted analysis on a subset of the patients included in the current analysis. Note that in the remainder of this paper we focus on state transition probability plots, but similar plots of the state occupation probabilities can also be derived.
Fig. 4: State occupation probabilities. The overall probability of being in the five states over time, estimated using Eq. 3.
Covariate adjusted analysis and individual predictions
Adjusting for the covariates age, gender, marital status, higher education, type of work, income, cooperation agreement on a more inclusive working life, work ability score and baseline diagnosis when estimating the transition hazards allows for covariate-specific predictions of the state transition probabilities. Figure 5 shows two examples of such predictions: for a married female aged 30 in an educational job, with an agreement on a more inclusive working life, income above NOK 300 000, higher education, working ability score 4 and a mental diagnosis; and for a single male aged 60 in a manual job, with no agreement on a more inclusive working life, income below NOK 300 000, no higher education, working ability score 4 and a musculoskeletal diagnosis. Note that when fitting the models, from the original covariates described in Table 2, those who did not answer the questions on marital status, higher education or having an inclusive working life agreement were put in the "no" category. We see that the estimated state transition probabilities for the two sets of covariates clearly differ with respect to work. The probability of returning to work within the follow-up time is almost 0.80 for the female example, while only about 0.10–0.15 for the male example.
Fig. 5: Covariate adjusted state transition probabilities with work assessment allowance as the initial state; predictions given two selected sets of covariates.
Left panel: married female aged 30 in an educational job, with a cooperation agreement on a more inclusive working life, income above NOK 300 000, higher education, working ability score 4 and a mental diagnosis. Right panel: single male aged 60 in a manual job, with no cooperation agreement on a more inclusive working life, income below NOK 300 000, no higher education, working ability score 4 and a musculoskeletal diagnosis.
Note that the stacked probability plots in Figs. 3 and 5 do not include confidence intervals. In Fig. 6 we explore these by showing the probability of having returned to work from state 4 (work assessment allowance) at any time, with corresponding confidence intervals, for the two scenarios in Fig. 5. We see that the probability of returning to work after being on work assessment allowance is very different for individuals with the two different sets of covariates, also when accounting for the uncertainty of the estimates.
Fig. 6: The probability of being in state 1 (work) after starting in state 4 (work assessment allowance) for two covariate-specific predictions. Left panel: married female aged 30 in an educational job, with a cooperation agreement on a more inclusive working life, income above NOK 300 000, higher education, working ability score 4 and a mental diagnosis. Right panel: single male aged 60 in a manual job, with no agreement on a more inclusive working life, income below NOK 300 000, no higher education, working ability score 4 and a musculoskeletal diagnosis.
The results using Cox proportional hazards models were also compared with those from Aalen additive hazards models for modelling the transition intensities in our multi-state model. Even for simple additive models where constant hazards were assumed, we saw good agreement between the additive and proportional hazards models. See the next sub-section for a further comparison between these two types of hazard models.
The effect of hypothetical interventions
Let us now consider results from the three proposed methods for doing causal inference in our multi-state model. For assessing hypothetical interventions on the use of full and partial sick leave benefits in the multi-state model in Fig. 1, let us first look at a scenario where we artificially manipulate the transition intensities that go into the partial sick leave and sick leave states. Figure 7 shows the state transition probabilities for an individual starting in the work state at baseline. The left panel shows the estimated probabilities given the original multi-state model, while the right panel shows a counterfactual scenario where all transitions into full sick leave are blocked and routed into partial sick leave. This manipulation of the multi-state model corresponds to removing the possibility of full-time sick leave and instead putting individuals on partial sick leave. For such a manipulation to be reasonable, it should be done within a set of covariate characteristics for which the intervention is realistic. The figure shows results for married males aged 45 in an educational job, income below NOK 300 000, no higher education, working capacity score 1 and a musculoskeletal diagnosis. From Fig. 7, we see that the state transition probabilities are similar for the two scenarios, but that individuals tend to quit full-time work more frequently when full-time sick leave is not available. The use of part-time sick leave benefits is of course higher, but the use of work assessment allowance and disability pension is actually lower.
Fig. 7: Results intervening on sick leave intensities.
Comparing predicted (left panel) and counterfactual (right panel) state transition probabilities for a selected set of covariates: married male aged 45 in a service job, with no agreement on a more inclusive working life, income below NOK 300 000, no higher education, a high to medium working ability score and a musculoskeletal diagnosis. In the counterfactual scenario all transitions into sick leave have been blocked and routed into partial sick leave.
Let us then consider the inverse probability weighting approach, and first the hypothetical intervention of placing all individuals on either full or part-time sick leave at baseline. To assess such an intervention we focus only on the individuals on partial or full sick leave at baseline and give them a weight corresponding to the inverse probability of starting in their initial state. We then estimate all transition intensities of the multi-state model in Fig. 1 and calculate their state transition probabilities as functions of time. This corresponds to comparing partial and full sick leave as if the choice had been randomized at baseline. State transition probabilities for these two scenarios are shown in Fig. 8. Note that when intervening on initial states in a model that is Markov, as we do here, the differences between the two interventions become smaller and smaller with time. When comparing partial and full sick leave, the difference is mostly visible during the first year. To give a more detailed picture of this difference, the time axis in Fig. 8 has been restricted to the range 0 to 365 days. The probabilities of starting in a given initial state were calculated using logistic regression, adjusting for the covariates in Table 2. We see a tendency for partial sick leave to yield a faster return to work than full-time sick leave, and to a certain degree to replace the use of work assessment allowance, but the differences are small.
Fig. 8: Inverse probability weighting results for full versus partial sick leave. State transition probabilities for the counterfactual scenarios where everyone originally on full or partial sick leave was given full sick leave (left panel) or partial sick leave (right panel). Note that the time axis is restricted to the first year, to highlight the differences between these two scenarios.
Another intervention in question was the cooperation agreement on a more inclusive working life. The effect of this agreement can be assessed by weighting with the inverse probability of having an agreement and then looking at the transition probabilities for the weighted subsets of the original data for those without and with an agreement. This corresponds to modelling two counterfactual scenarios: one where no-one has such an agreement and another where everyone has one. The results from such a comparison are shown in Fig. 9. The probabilities of having an agreement were calculated using logistic regression, adjusting for the covariates in Table 2. We see that there is a small but positive effect of having an agreement on a more inclusive working life with respect to having a higher probability of returning to work.
Fig. 9: Inverse probability weighting results for the effect of having a cooperation agreement.
State transition probabilities for the counterfactual scenario where no-one has a cooperation agreement on a more inclusive working life (left panel) and the scenario where everyone has such an agreement (right panel).
Finally, if we consider the G-computation approach, we can again estimate the effect of having an agreement on a more inclusive working life by estimating state transition probabilities for every individual with the indicator variable for such an agreement first fixed to 0 and then to 1, and looking at the average predictions over all individuals. The average predictions can be seen in Fig. 10, using a Cox model in the upper panels and an Aalen additive model in the lower panels. The two hazard models give very similar results. The smooth curves for the additive models are due to the assumption of constant hazard rates, which simplifies the model fitting. The left panels show overall state transition probabilities without an agreement and the right panels show overall transition probabilities with an agreement. We again see a small but positive effect of having such an agreement. We also see that the results are very similar to those obtained using the inverse probability weighting approach in Fig. 9. As described earlier, a similar analysis can be done with regard to starting on partial or full-time sick leave at baseline. Again, the results (not shown) are similar to the ones estimated using inverse probability weights (shown in Fig. 8).
Fig. 10: G-computation results for the effect of having a cooperation agreement. State transition probabilities for the counterfactual scenario where no-one has a cooperation agreement on a more inclusive working life (left panels) and the scenario where everyone has such an agreement (right panels), estimated using the G-computation approach with Cox proportional hazards models (upper panels) and Aalen additive hazards models (lower panels).
An alternative way to illustrate the effect of the inclusive working life agreement is to plot the difference in state transition probabilities, for instance of returning to the work state from work assessment allowance. Ninety-five percent confidence intervals for such effects can be found using bootstrap techniques. Note, however, that such a bootstrap can be computationally heavy, for example in the G-computation approach when averaging over all individual predictions. A possible shortcut is to make one prediction for average covariate levels together with the manipulated covariate. Formally, this can be justified for additive hazards models, but in our applications we found that it also gave a good approximation with Cox models. Results from such an analysis can be found in Fig. 11, using Cox proportional hazards models to estimate the causal effect in Eq. 4, and the latter bootstrap approach for confidence intervals. We see that, after the first year, there is a rather constant positive effect of having a cooperation agreement on a more inclusive working life, with about 5 percentage points higher probability of entering the work state. However, the uncertainty is relatively high, with a 95 % bootstrap confidence interval ranging from about 1 to 10 percentage points.
Fig. 11: Effect of having a cooperation agreement. Difference in the probability of returning to work for the two counterfactual scenarios where no-one has an agreement on a more inclusive working life and where everyone has such an agreement, estimated using the G-computation approach.
Ninety-five percent bootstrap confidence intervals are presented around the effect.
One of the important goals of sickness absence research is to find effective interventions for controlling it. Registry data on sickness benefits are a primary source for making such inference, and multi-state models have proved to be a very successful framework for modelling the transitions between different benefits and work in such data. Coupling registry data with detailed information about cohort participants gives further insight into the underlying reasons for sickness absence and makes it possible to predict patient-specific probabilities of future sickness absence, disability and return to work. Combining these methods with standard methods from causal inference is a first attempt at answering questions about the effect of interventions. In this paper we have considered examples of two such possible interventions, namely the use of partial sick leave and cooperation agreements for a more inclusive working life. Covariate-specific predictions show great differences in the probabilities of sick leave, disability and work for patients with assumed high-risk and low-risk covariate characteristics. Overall, we find small effects of partial sick leave compared to full sick leave on the state transition probabilities. Note, however, that in terms of expenses, partial sick leave benefits are less costly than full sick leave benefits, and thus no difference in outcome between the two would indicate that partial sick leave should be preferred when possible. For cooperation agreements on a more inclusive working life we find more visible, but still rather small, effects. Again, in terms of overall expenses, the effects of having such agreements must be weighed against the cost of implementing them. When it comes to graphically representing the outcomes of multi-state models there are many possibilities, and we have only looked at some of them. Stacked probability plots, of either state transition or state occupation probabilities, are illustrative, while non-stacked plots make it easier to include confidence intervals. When assessing the effect of interventions one can plot the difference in these probabilities, as we have done, or alternatively the ratio between state transition or occupation probabilities. Another possible outcome measure is the area under each curve, which corresponds to the expected time spent in each state during follow-up. Methodologically, the graphical features of the multi-state model framework make it very suitable for thinking in terms of causal inference, both because of the intuitiveness of defining interventions through manipulating transition intensities, and because the outcomes of interventions can be interpreted using state transition and occupation probabilities. We also find that standard approaches from the causal inference literature, such as inverse probability weighting and G-computation, can help identify causal parameters that are easily interpreted in a multi-state model setting. The methods applied in this paper are kept rather simple, partly for illustrative purposes, but they can easily be extended to estimate effects of time-varying exposures or interventions and to compare treatment regimes. One should, however, expect that this makes both the standard model assumptions and the causal assumptions harder to meet.
For the modelling of transition intensities it is reassuring that the Cox proportional hazards models and the Aalen additive hazards models gave similar results. The two models have different advantages in the setting of this paper. The Cox model is easier to implement using existing software, while the additive model needs more model-fitting assessment, for example in deciding how to smooth the estimated cumulative hazards to obtain well-behaved hazard estimates. In this paper we could assume constant intensities for the additive hazards models, which simplifies the model fitting. When doing individual predictions, the additive models are not ideal, as they can give probability estimates below 0 or above 1 for uncommon combinations of covariates. A major benefit of the additive hazards models, however, is that, because of their additive structure, predicting with average covariate values is a shortcut to the individual predictions used in the G-computation approach. Apart from the standard model assessments when fitting separate hazard models for each transition, the most important statistical assumption to consider is of course the Markov assumption for the overall multi-state model, which was discussed in the Methods section. As for the causal assumptions, it is clear that, given the complexity of multi-state models, causal interpretations should not be made naively. Interpreting all the separate transition intensity models and the overall multi-state model causally is challenging. To what degree such causal assumptions are needed will, however, depend on the approach used to define the intervention of interest. When intervening on transition intensities, the structural assumption of the full model will be key, while when intervening on treatment indicator variables, as in the approach referred to as G-computation, the causal interpretation of the coefficient for this variable in each separate hazard model will be of particular importance. The goal of this paper, in terms of causal inference, is to illustrate how standard approaches can be used in a multi-state model setting to answer questions about the effect of interventions. When it comes to formal arguments for the validity of these approaches there is room for more work, especially on the sensitivity of the Markov assumption and how deviations from it will affect the validity of the causal assumptions. Overall, we believe that there are many benefits from thinking in terms of causal inference for multi-state models, as research questions often boil down to questions about the effect of interventions. It is also worth noting that many of these approaches have, at some level, been used in multi-state models historically. In particular, this applies to manipulating transition intensities and fixing covariate values, which in this paper were put in a G-computation context. However, few formal connections have yet been made to the causal inference field. Detailed covariate information is important for explaining transitions between different states of sickness absence and work in a multi-state model, also for patient-specific cohorts. Methods from the causal inference literature can provide the tools needed for going from covariate-specific estimates to population average effects in such models, and thus yield new insights when assessing hypothetical interventions based on complex observational data.
This work was funded by the Research Council of Norway through the Research Programme on Sickness Absence, Work and Health, project number 218368.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
JMG was involved in the planning of the study, prepared the data, performed the statistical analysis and wrote the manuscript. SAL was involved in the planning of the study, prepared the data on sickness benefits and contributed background information based on earlier work on multi-state models for sickness absence. IØ was involved in the planning of the study, prepared the cohort data of patients on work rehabilitation and contributed background information on the cohort and other information based on earlier work on sickness absence. ØB contributed background information on the statistical methods. OOA was involved in the planning of the study and contributed background information on the statistical methods. All authors read and approved the final manuscript.
Oslo Centre for Biostatistics and Epidemiology, Department of Biostatistics, University of Oslo, Oslo, Norway
Department of Clinical Dentistry, University of Bergen, Bergen, Norway
National Centre for Occupational Rehabilitation, Rauland, Norway
Uni Research Health, Bergen, Norway
Department of Mathematics, University of Oslo, Oslo, Norway
Hensing G, Alexanderson K, Allebeck P, Bjurulf P. How to measure sickness absence? Literature review and suggestion of five basic measures. Scand J Soc Med. 1998; 26(2):133–44.
Lie SA, Eriksen HR, Ursin H, Hagen EM. A multi-state model for sick-leave data applied to a randomized control trial study of low back pain. Scand J Public Health. 2008; 36(3):279–83.
Øyeflaten I, Lie SA, Ihlebæk CM, Eriksen HR. Multiple transitions in sick leave, disability benefits, and return to work. A 4-year follow-up of patients participating in a work-related rehabilitation program. BMC Public Health. 2012; 12(748):1–8.
Pedersen J, Bjorner JB, Burr H, Christensen KB. Transitions between sickness absence, work, unemployment, and disability in Denmark 2004–2008. Scand J Work Environ Health. 2012; 38(6):516–26.
Carlsen K, Harling H, Pedersen J, Christensen KB, Osler M. The transition between work, sickness absence and pension in a cohort of Danish colorectal cancer survivors. BMJ Open. 2013; 3(2):1–10.
Pedersen J, Bjorner JB, Christensen KB. Visualizing transitions between multiple states – illustrated by analysis of social transfer payments. J Biom Biostat. 2013; 4(5):1–5.
Nexo MA, Watt T, Pedersen J, Bonnema SJ, Hegedus L, Rasmussen AK, et al. Increased risk of long-term sickness absence, lower rate of return to work, and higher risk of unemployment and disability pensioning for thyroid patients: a Danish register-based cohort study. J Clin Endocrinol Metab. 2014; 99(9):3184–192.
Hougaard P. Multi-state models: a review. Lifetime Data Anal. 1999; 5(3):239–64.
Commenges D. Multi-state models in epidemiology. Lifetime Data Anal. 1999; 5(4):315–27.
Andersen PK, Keiding N. Multi-state models for event history analysis. Stat Methods Med Res. 2002; 11(2):91–115.
Putter H, Fiocco M, Geskus RB. Tutorial in biostatistics: competing risks and multi-state models. Stat Med. 2007; 26(11):2389–430.
Meira-Machado LF, de Uña-Álvarez J, Cadarso-Suárez C, Andersen PK. Multi-state models for the analysis of time-to-event data. Stat Methods Med Res. 2008; 18(2):1–32.
Andersen PK, Pohar Perme M. Multistate models. In: Klein JP, van Houwelingen HC, Ibrahim JG, Scheike TH, editors. Handbook of Survival Analysis. Boca Raton, FL: Chapman & Hall/CRC; 2013. p. 417–39.
Markussen S, Mykletun A, Røed K. The case for presenteeism – Evidence from Norway's sickness insurance program. J Public Econ. 2012; 96(11):959–72.
Kausto J, Miranda H, Martimo KP, Viikari-Juntura E. Partial sick leave – review of its use, effects and feasibility in the Nordic countries. Scand J Work Environ Health. 2008; 34(4):239–49.
Andrén D, Andrén T. Part-time sick leave as a treatment method? Work Pap Econ. 2008; (320):1–32. http://EconPapers.repec.org/RePEc:yor:hectdg:09/01.
Andrén D, Svensson M. Part-time sick leave as a treatment method for individuals with musculoskeletal disorders. J Occup Rehabil. 2012; 22(3):418–26.
Foss L, Gravseth HM, Kristensen P, Claussen B, Mehlum IS, Skyberg K. "Inclusive working life in Norway": a registry-based five-year follow-up study. J Occup Med Environ. 2013; 8(19):1–8.
Viikari-Juntura E, Kausto J, Shiri R, Kaila-Kangas L, Takala EP, Karppinen J, et al. Return to work after early part-time sick leave due to musculoskeletal disorders: a randomized controlled trial. Scand J Work Environ Health. 2012; 38(2):134–43.
Noordik E, van der Klink JJ, Geskus RB, de Boer MR, van Dijk FJ, Nieuwenhuijsen K. Effectiveness of an exposure-based return-to-work program for workers on sick leave due to common mental disorders: a cluster-randomized controlled trial. Scand J Work Environ Health. 2013; 39(2):144–54.
Frölich M, Heshmati A, Lechner M. A microeconometric evaluation of rehabilitation of long-term sickness in Sweden. J Appl Econ. 2004; 19(3):375–96.
Ziebarth NR, Karlsson M. The effects of expanding the generosity of the statutory sickness insurance system. J Appl Econ. 2014; 29(2):208–30.
Ziebarth NR. Assessing the effectiveness of health care cost containment measures: evidence from the market for rehabilitation care. Int J Health Care Finance Econ. 2014; 14(1):41–67.
Reichert AR, Augurzky B, Tauchmann H. Self-perceived job insecurity and the demand for medical rehabilitation: Does fear of unemployment reduce health care utilization? Health Econ. 2015; 24(1):8–25.
Rothman K, Greenland S. Causation and causal inference in epidemiology. Am J Public Health. 2005; 95(S1):144–50.
Pearl J. Causality: models, reasoning and inference, 2nd ed. New York, NY: Cambridge University Press; 2009.
Morgan SL, Winship C. Counterfactuals and causal inference. New York, NY: Cambridge University Press; 2014.
R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2014. http://www.R-project.org/.
de Wreede LC, Fiocco M, Putter H. mstate: an R package for the analysis of competing risks and multi-state models. J Stat Soft. 2011; 38.
Jackson CH. Multi-state models for panel data: the msm package for R. J Stat Soft. 2011; 38(8):1–28.
Ferguson N, Datta S, Brock G. msSurv, an R package for nonparametric estimation of multistate models. J Stat Soft. 2012; 50:1–24.
Beyersmann J, Allignol A, Schumacher M. Competing risks and multistate models with R. New York, NY: Springer; 2011.
Willekens F. Multistate analysis of life histories with R. New York, NY: Springer; 2014.
Øyeflaten I, Opsahl J, Eriksen HR, Norendal Braathen T, Lie SA, Brage S, et al. Subjective health complaints, functional ability, fear avoidance beliefs and days on sickness benefits after work rehabilitation – a mediation model. Manuscript. 2015.
Øyeflaten I, Lie SA, Ihlebæk CM, Eriksen HR. Prognostic factors for return to work, sickness benefits, and transitions between these states: A 4-year follow-up after work-related rehabilitation. J Occup Rehabil. 2014; 24(2):199–212.
Therneau T, Lumley T. Survival: survival analysis, including penalised likelihood. 2010. R package version 2.36-2.
Aalen O, Borgan Ø, Gjessing H. Survival and event history analysis: a process point of view. New York, NY: Springer; 2008.
Gunnes N, Borgan Ø, Aalen OO. Estimating stage occupation probabilities in non-Markov models. Lifetime Data Anal. 2007; 13(2):211–40.
Allignol A, Beyersmann J, Gerds T, Latouche A. A competing risks approach for nonparametric estimation of transition probabilities in a non-Markov illness-death model. Lifetime Data Anal. 2014; 20(4):495–513.
Datta S, Satten GA. Validity of the Aalen–Johansen estimators of stage occupation probabilities and Nelson–Aalen estimators of integrated transition hazards for non-Markov models. Stat Probab Lett. 2001; 55(4):403–11.
The Norwegian Labour and Welfare Service. Cooperation agreement on more inclusive working life. 2014. Revised version cooperation agreement 2014–2018. ISBN 978-82-551-2361-3.
Hernán MA, Robins JM. Estimating causal effects from epidemiological data. J Epidemiol Community Health. 2006; 60(7):578–86.
Keiding N, Klein JP, Horowitz MM. Multi-state models and outcome prediction in bone marrow transplantation. Stat Med. 2001; 20(12):1871–1885.
Andersen PK, Borgan Ø, Gill RD, Keiding N. Statistical models based on counting processes. New York, NY: Springer; 1992.
Aalen OO, Røysland K, Gran JM, Kouyos R, Lange T. Can we believe the DAGs? A comment on the relationship between causal DAGs and mechanisms. Stat Methods Med Res. 2014.
Røysland K. Counterfactual analyses with graphical models based on local independence. Annals Stat. 2012; 40(4):2162–194.
Kalbfleisch JD, Prentice RL. The statistical analysis of failure time data. Hoboken, New Jersey: John Wiley & Sons; 2011.
Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000; 11(5):550–60.
Hernán MÁ, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology. 2000; 11(5):561–70.
Ali RA, Ali MA, Wei Z. Lifetime Data Anal. 2014; 20(1):106–31.
Robins JM. A new approach to causal inference in mortality studies with a sustained exposure period – application to control of the healthy worker survivor effect. Math Model. 1986; 7(9):1393–512.
Snowden JM, Rose S, Mortimer KM. Implementation of G-computation on a simulated data set: demonstration of a causal inference technique. Am J Epidemiol. 2011; 173(7):731–8.
Vansteelandt S, Keiding N. Invited commentary: G-computation – lost in translation? Am J Epidemiol. 2011; 173(7):739–42.
On quaternary complex Hadamard matrices of small orders
Ferenc Szöllősi, Department of Mathematics and its Applications, Central European University, H-1051, Nádor u. 9, Budapest, Hungary
May 2011, 5(2): 309-315. doi: 10.3934/amc.2011.5.309
Received: May 2010. Revised: April 2011. Published: May 2011.
One of the main goals of design theory is to classify, characterize and count various combinatorial objects with some prescribed properties. In most cases, however, one quickly encounters a combinatorial explosion and, even if the complete enumeration of the objects is possible, there is no apparent way to study them in detail, store them efficiently, or generate a particular one rapidly. In this paper we propose a novel method to deal with these difficulties, and illustrate it by presenting the classification of quaternary complex Hadamard matrices up to order 8. The obtained matrices are members of only a handful of parametric families, and each inequivalent matrix, up to transposition, can be identified through its fingerprint.
Keywords: Complex Hadamard matrix, fingerprint, Butson matrix.
Mathematics Subject Classification: Primary: 05B20; Secondary: 46L1.
Citation: Ferenc Szöllősi. On quaternary complex Hadamard matrices of small orders. Advances in Mathematics of Communications, 2011, 5 (2): 309-315. doi: 10.3934/amc.2011.5.309
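As an editorial illustration (not taken from the paper itself): a quaternary complex Hadamard matrix of order n is an n × n matrix whose entries are fourth roots of unity, 1, i, -1, -i, and whose rows are pairwise orthogonal. The smallest standard example is the order-4 Fourier matrix,
$$ F_4=\left(i^{jk}\right)_{j,k=0}^{3}=\begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & i & -1 & -i\\ 1 & -1 & 1 & -1\\ 1 & -i & -1 & i \end{pmatrix},\qquad F_4F_4^{*}=4I_4, $$
which is also a Butson-type Hadamard matrix BH(4, 4).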
Polynomial invariants of graphs and hierarchies of linear equations. Bychkov B. S., Mikhailov A. V. Uspekhi Matematicheskikh Nauk (Russian Mathematical Surveys). 2019. Vol. 74. No. 2. P. 189-190. Let $W_G(q_1,q_2,\ldots)$ be a weighted symmetric chromatic polynomial of a graph $G$. S. Chmutov, M. Kazarian and S. Lando in the paper arXiv:1803.09800v2 proved that the generating function $\mathcal{W}(G)$ for the polynomials $W_G(q_1,q_2,\ldots)$ is a $\tau$-function of the Kadomtsev–Petviashvili integrable hierarchy. We proved that the function $\mathcal{W}(G)$ itself is a solution of a linear integrable hierarchy. We also described the initial conditions for the general formal $\tau$-function of the KP hierarchy which guarantee that the $\tau$-function is a solution of a linear integrable hierarchy. Research target: Mathematics. Priority areas: mathematics. Keywords: KP hierarchy, Schur polynomials, linear hierarchy. Publication based on the results of: Development of combinatorial, homological and geometric methods in the theory of moduli spaces of algebraic curves and their mappings with applications to problems of mathematical physics (2019).
Determinantal identities for flagged Schur and Schubert polynomials. Smirnov E., Merzon G. European Journal of Mathematics. 2016. Vol. 2. No. 1. P. 227-245. We prove new determinantal identities for a family of flagged Schur polynomials. As a corollary of these identities we obtain determinantal expressions of Schubert polynomials for certain vexillary permutations.
Sabaean Studies. Korotayev A. V. Moscow: Vostochnaya Literatura, 1997.
Formal solutions to the KP hierarchy. S. M. Natanzon, A. V. Zabrodin. Journal of Physics A: Mathematical and Theoretical. 2016. Vol. 49. No. 14. P. 22. We find all formal solutions to the -dependent KP hierarchy. They are characterized by certain Cauchy-like data. The solutions are found in the form of formal series for the tau-function of the hierarchy and for its logarithm (the F-function). An explicit combinatorial description of the coefficients of the series is provided.
Grothendieck rings of basic classical Lie superalgebras. Sergeev A., Veselov A. Annals of Mathematics. 2011. No. 173. P. 663-703. Added: Sep 9, 2014.
Dynamics of Information Systems: Mathematical Foundations. Iss. 20. NY: Springer, 2012. This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems", which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundations presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.
From general Zakharov-Shabat equations to the KP and the Toda lattice hierarchies. Takebe T. In bk.: Advanced Series in Mathematical Physics. Vol. 16: Proceedings of the RIMS Research Project held at Kyoto University, Kyoto, June 1–August 31, 1991. River Edge: World Scientific, 1992. P. 923-939.
Model for organizing cargo transportation with an initial station of departure and a final station of cargo distribution. Khachatryan N., Akopov A. S. Business Informatics. 2017. No. 1(39). P. 25-35. A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region that produces raw materials for a manufacturing industry located in another region, where the other node station is. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighbouring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is governed by a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a "correct" extension of the solutions of the system of differential equations to a class of quasi-solutions, whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, it was possible to construct these quasi-solutions numerically and to determine their rate of growth. Technically, the main difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of the solutions, on a number of model parameters characterizing the control rule, the cargo transportation technologies and the intensity of cargo supply at the node station.
Nullstellensatz over quasi-fields. Trushin D. Russian Mathematical Surveys. 2010. Vol. 65. No. 1. P. 186-187.
Business climate in wholesale trade in the second quarter of 2014 and expectations for the third quarter. Lola I. S., Ostapkovich G. V. Sovremennaya Torgovlya (Modern Trade). 2014. No. 10.
Applied aspects of statistics and econometrics: proceedings of the 8th All-Russian scientific conference of young scientists, postgraduate and undergraduate students. Iss. 8. MESI, 2011.
Laminations from the Main Cubioid. Timorin V., Blokh A., Oversteegen L. et al. arxiv.org. math. Cornell University, 2013. No. 1305.5788. According to a recent paper [bopt13], polynomials from the closure of the Principal Hyperbolic Domain PHD_3 of the cubic connectedness locus have a few specific properties. The family of all polynomials with these properties is called the Main Cubioid. In this paper we describe the set of laminations which can be associated to polynomials from the Main Cubioid.
Entropy and the Shannon-McMillan-Breiman theorem for beta random matrix ensembles. Bufetov A. I., Mkrtchyan S., Scherbina M. et al. arxiv.org. math. Cornell University, 2013. No. 1301.0342.
The parametrix method for diffusions and Markov chains. Konakov V. D. STI. WP BRP. Publishing House of the Board of Trustees of the Faculty of Mechanics and Mathematics, Moscow State University, 2012. No. 2012. Added: Dec 5, 2012.
Is the function field of a reductive Lie algebra purely transcendental over the field of invariants for the adjoint action? Colliot-Thélène J., Kunyavskiĭ B., Vladimir L. Popov et al. Compositio Mathematica. 2011. Vol. 147. No. 2. P. 428-466. Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively k(g), be the field of k-rational functions on G, respectively g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.
Absolutely convergent Fourier series. An improvement of the Beurling-Helson theorem. Vladimir Lebedev. arxiv.org. math. Cornell University, 2011. No. 1112.4892v1. We obtain a partial solution of the problem of the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one cannot achieve growth slower than the logarithm of the frequency. Though the conjecture is still not confirmed, the author obtained the first nontrivial results.
Justification of the adiabatic limit for hyperbolic Ginzburg-Landau equations. Palvelev R., Sergeev A. G. Proceedings of the Steklov Institute of Mathematics. 2012. Vol. 277. P. 199-214.
Hypercommutative operad as a homotopy quotient of BV. Khoroshkin A., Markaryan N. S., Shadrin S. arxiv.org. math. Cornell University, 2012. No. 1206.3749. We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus 0 curves) and BV/Δ (the homotopy quotient of the Batalin-Vilkovisky operad by the BV-operator). In other words, we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras.
Cross-sections, quotients, and representation rings of semisimple algebraic groups. V. L. Popov. Transformation Groups. 2011. Vol. 16. No. 3. P. 827-856. Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds.
The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G, and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map from T to G/T, where T is a maximal torus of G and W the Weyl group.
Mathematical modelling of social processes. Edited by: A. Mikhailov. Iss. 14. Moscow: Faculty of Sociology, Moscow State University, 2012.
Antenatal care use in Ethiopia: a spatial and multilevel analysis
Teketo Kassaw Tegegne (ORCID: orcid.org/0000-0002-9137-3632) 1,2,3, Catherine Chojenta 2, Theodros Getachew 4, Roger Smith 5 & Deborah Loxton 2
Accessibility and utilization of antenatal care (ANC) services vary across geographical locations, sociodemographic characteristics, political and other factors. A geographically linked data analysis using population and health facility data is valuable for mapping ANC use and identifying inequalities in service access and provision. Thus, this study aimed to assess the spatial patterns of ANC use, and to identify associated factors among pregnant women in Ethiopia. A secondary data analysis of the 2016 Ethiopia Demographic and Health Survey linked with the 2014 Ethiopian Service Provision Assessment was conducted. A multilevel analysis was carried out using the SAS GLIMMIX procedure. Furthermore, hot spot analysis and spatial regressions were carried out to identify hot spot areas of ANC use and factors associated with its spatial variation, using ArcGIS and R software. A one-unit increase in the mean score of ANC service availability in a typical region was associated with a five-fold increase in the odds of having more ANC visits. Moreover, every one-kilometre increase in distance to the nearest ANC facility in a typical region was negatively associated with having at least four ANC visits. Twenty-five percent of the variability in having at least four ANC visits was accounted for by region of living. The spatial analysis found that the Southern Nations, Nationalities and Peoples region had high clusters of at least four ANC visits. Furthermore, the association between having the first ANC visit during the first trimester and having at least four ANC visits was estimated to vary spatially. There were significant variations in the use of ANC services across the different regions of Ethiopia. Region of living and distance were key drivers of ANC use, underscoring the need for increased ANC availability, particularly in the cold spot regions.
Antenatal care (ANC) is a maternal healthcare service provided by skilled healthcare professionals to pregnant women and adolescent girls. It is provided throughout pregnancy to ensure the best health outcomes for both the mother and the newborn. The care includes the following components: risk identification, prevention and management of pregnancy-related or concurrent diseases, and health education and health promotion [1]. The World Health Organization (WHO) recommends midwife-led continuity-of-care models throughout pregnancy, delivery and the postnatal period. Antenatal home visits are also recommended to improve antenatal care utilization and perinatal health outcomes [1]. Antenatal care has the potential to reduce maternal and child morbidity and/or mortality and to improve newborn health [2,3,4,5,6]. For instance, a systematic review and meta-analysis found that ANC attendance was significantly associated with a 34% reduction in neonatal mortality [5]. A cohort study carried out in Ethiopia reported that having four or more ANC visits was significantly associated with 81.2, 61.3, 52.4 and 46.5% reductions in postpartum haemorrhage, early neonatal death, preterm labour and low birth weight, respectively [6]. Inadequate antenatal care, late visits, or fewer than the recommended number of visits have been related to poor pregnancy outcomes [7].
A lack of relevant and high quality antenatal care services is a major concern in sub-Saharan Africa [8]. In the case of uncomplicated pregnancies, the 2002 Focused Antenatal Care model of the WHO recommended at least four antenatal care visits; the first visit to take place before 16 weeks of gestation [9]. However, this model has now been superseded by the 2016 WHO ANC model; where a minimum of eight ANC contacts is recommended. The word "visit" in the previous model has now changed to "contact" to indicate an active interaction between a pregnant woman and a health-care provider. The first contact should take place in the first trimester; that is, within the first 12 weeks of gestation. The other recommended contacts are two contacts in the second trimester (at 20 and 26 weeks) and five contacts in the third trimester (at 30, 34, 36, 38 and 40 weeks of gestation) [1]. In the developing regions of the world, according to the United Nations report, over the 25 years period (1990 to 2014), there was slow progress in the use of four or more antenatal care visits [10]. In 2014, on average, only 52% of pregnant women in the developing regions had received a minimum of four antenatal care visits, which was a 17% increase from 1990. It was reported that 36% of pregnant women in Southern Asia and 49% in sub-Saharan Africa had received four or more antenatal care visits in 2014 [10]. In Ethiopia, the rates of women using antenatal care at least once increased from 27 to 62% from 2000 to 2016 [11,12,13,14]. In 2016, 31.8% of Ethiopian women received four or more ANC visits [14] and the median timing of the first antenatal care visit was 5.2 months [13]. About 63 and 89% of women from urban areas and Addis Ababa respectively had four or more antenatal care visits as compared to 27% from rural areas and those from the Somali region (11.8%). Both the demand and supply side factors are important in determining health service use. However, the majority of previous studies were mainly focused on the demand side factors of health service use. For instance, most studies identified the demand side factors of antenatal care use [15,16,17,18,19,20]. Amongst the demand side factors, women's education [16, 21, 22], husband's occupation [21], socioeconomic status [16], and place of residence [16, 17] were significantly associated with the use of antenatal care service. In most studies, the supply side factors of ANC use have been overlooked. Understaffed health facilities [23] and distant ANC facilities [23, 24] were negatively associated with the use of antenatal care services. The most important supply side factors, such as health facilities general service readiness, availability of ANC services and facilities readiness to provide ANC service [25, 26] were not addressed. Due to the increasing availability of georeferenced health facility and population data, it is important to link these data sets and identify both the demand and supply side factors of ANC use. So far, in Ethiopia, four Demographic and Health Surveys (DHS) were conducted, but the Service Provision Assessment (SPA) survey was the first to be carried out in the country. The DHS survey provides detail information about population characteristics including their health service use history [27]. On the other hand, the SPA survey provides information about healthcare services available at each health facility and facility's readiness to provide a particular service [28]. 
Using the capability of geographic information system (GIS), a linked analysis of population and health facility data has enormous importance for investigating the links between population healthcare needs and uses, and the health service environment [29]. Despite this importance, these two datasets have not been used in Ethiopia. Therefore, this study aimed to assess the spatial variations in the use of antenatal care services among women who gave birth in a 5 year period in Ethiopia. Furthermore, it aimed to identify the potential factors associated with the use of antenatal care services throughout the country using national population and health facility data. This study used two data sources: the Ethiopian Demographic and Health Survey (EDHS) and the Ethiopian Service Provision Assessment Plus (ESPA+) survey data sets. The main source of the population data were the 2016 EDHS. The survey used a stratified sampling procedures. Data on demographic characteristics and population health service use, such as antenatal care were collected [14]. Geographic coordinates of each survey cluster were also collected using Global Positioning System (GPS) receivers. Survey clusters are basically called Enumeration Areas (EAs). An EA is a geographical location that has an average of 181 households [14]. The GPS reading was collected at the centre of each cluster. For the purpose of insuring respondents' confidentiality, GPS latitude/longitude positions for all surveys were randomly displaced before public release. The maximum displacement was two kilometres for urban clusters and five kilometres for 99% of rural clusters. One percent of rural clusters were displaced a maximum of 10 km. The displacement was restricted within the country's second administrative level [30]. The survey collected data on all women aged 15–49 years in the 645 clusters. This study used 6954 women who gave birth in the 5 years preceding the survey in the 622 clusters. A total of 239 women who gave birth in the 5 years preceding the survey in the other 23 clusters were excluded from the analysis since they had no geographic information. Health facility data The main source of the health facility data were the 2014 ESPA+ survey. This survey data was obtained from the Ethiopian Public Health Institute (EPHI). Ethics application was submitted to EPHI before obtaining the dataset. It is the main source of data on the availability of health services, such as antenatal and delivery care services [31]. Health facilities were selected using a combination of census and simple random sampling techniques [31]. In this analysis, amongst the 1165 interviewed health facilities [32], 919 facilities which reported providing antenatal care were included. Data linking method In this analysis, administrative boundary link was used for linking sampled health facility data with the population data [32]. This is the method of choice for linking sampled health facilities data with population data [29, 32, 33]. Details of this method are discussed elsewhere [32]. The administrative boundaries of Ethiopia were obtained from Natural Earth [34]. Health service environment and measurements Four antenatal care service environment variable scores were created. These were average distance to the nearest antenatal care facilities, antenatal care service availability score, readiness to provide antenatal care services score, and a general health facilities readiness score. 
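As an illustration of how such service environment scores can be assembled, the sketch below computes, in R, the straight-line distance from each DHS cluster to its nearest same-region ANC facility and a PCA-based readiness score. This is a minimal sketch under stated assumptions: the data frames, column names and readiness items are hypothetical placeholders rather than the actual EDHS/ESPA+ variables, and the study's own scoring followed the WHO service availability and readiness indicators and the SAS SCORE procedure described below.

# Minimal sketch, assuming hypothetical data frames:
#   clusters:   one row per DHS cluster (region, lon, lat)
#   facilities: one row per ANC facility (region, lon, lat, readiness items)
library(geosphere)  # straight-line (great-circle) distances

# Distance (km) from each cluster to its nearest ANC facility in the same region
clusters$dist_anc_km <- sapply(seq_len(nrow(clusters)), function(i) {
  fac <- facilities[facilities$region == clusters$region[i], ]
  d_m <- distHaversine(c(clusters$lon[i], clusters$lat[i]),
                       cbind(fac$lon, fac$lat))   # metres to every same-region facility
  min(d_m) / 1000                                 # keep the nearest, in km
})

# Illustrative readiness score: first principal component of readiness items
items <- c("staff_24h", "water", "power", "emergency_transport")  # placeholder item names
pca   <- prcomp(facilities[, items], center = TRUE, scale. = TRUE)
facilities$readiness_score <- pca$x[, 1]

# Region-level averages, later attached to the women-level records
aggregate(dist_anc_km ~ region, data = clusters, FUN = mean)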
All service availability and readiness scores were computed for the nearest health facilities. The antenatal care indices were created using the World Health Organization's 'Service Availability and Readiness Indicators' [25, 26]. Details of computing these scores are discussed elsewhere [32]. After linking DHS clusters with SPA facilities, average straight-line distance to the nearest antenatal care health facility was calculated [32]. First, the distance from each cluster to every ANC facility within the administrative boundaries (that is within regions) was calculated. Second, the nearest facility was identified and selected for each cluster. Lastly, using the identified nearest facility, average distance was calculated per region and used in this paper. With regard to general service readiness score, out of the 12 WHO general service readiness variables [25, 26], the principal component analysis gave nine general service dimensions [32]. The SCORE procedure in SAS was also used to compute the average general service readiness score per region/city administration. These were 24-h staff coverage, communication equipment, clean water source, power supply, management meetings, client opinion/feedback, quality assurance, emergency transport, and client latrine. The first two principal components (health facility management system and infrastructure) were used to compute two general service readiness scores. For those health facilities reported as providing antenatal care services, indices of antenatal care availability and readiness were created. One antenatal care availability score (antenatal care supplements) was created using four variables [32]. Similarly, two antenatal care readiness scores (readiness to provide diagnostic services and skilled care) were created using six variables [32]. Along with the principal component analysis [32], the SCORE procedure in SAS was used to compute average ANC service availability and readiness scores per region/city administration. Antenatal care use was defined based on the number of antenatal visits a woman had for her most recent birth in the 5 years preceding the survey. A woman was grouped into either of the three categories: 1) had no ANC visit, 2) one to three ANC visits or 3) four or more ANC (ANC4+) visits. The DHS survey collected data on pregnancy status of the last birth as wanted, wanted later and unwanted birth. The survey asked a woman whether the child was wanted at the time of pregnancy, or whether the child was wanted but later, or whether the child was not wanted at all. In this paper, pregnancy status was classified as wanted if a woman wanted it at the time of pregnancy and unwanted if a woman wanted it later or was not wanted at all. Hierarchical generalized linear model A two level multilevel regression analysis was carried out after linking women in the respective cluster to the health facility variables. This study had ordinal polytomous outcome with three groups of ANC visits: 1) No ANC visit, 2) one-to-three ANC visits and 3) four or more ANC (ANC4+) visits. We were interested in the probability of being at or above zero level of ANC visit and the influence of individual and region characteristics on this probability for each category. Multiple logits are simultaneously estimated (M-1 logits, where M is the number of response categories) when analysing polytomous outcomes. 
Therefore, in this study with three categories of response, there will be two logits and their corresponding intercepts will be simultaneously estimated, each of them indicating the probability of responding in or above a particular category. The equations necessary for estimating these two level models are presented below [35]. $$ {Y}_{1 ij}=\mathit{\log}\left[\frac{P\left({R}_{ij}\le 1\right)}{1-P\left({R}_{ij}\le 1\right)}\right]={\gamma}_{00}+{\gamma}_{10}{x}_{ij}+{\gamma}_{01}{W}_j+{\mu}_{0j}+{\mu}_{1j}{x}_{ij} $$ $$ {Y}_{2 ij}=\mathit{\log}\left[\frac{P\left({R}_{ij}\le 2\right)}{1-P\left({R}_{ij}\le 2\right)}\right]={\gamma}_{00}+{\gamma}_{10}{x}_{ij}+{\gamma}_{01}{W}_j+\delta +{\mu}_{0j}+{\mu}_{1j}{x}_{ij} $$ Where Yij represents the log odds of being at or above zero level of ANC visit for woman i in region j. More specifically, Y1ij corresponds to the log odds of being at or above the highest ANC visit (i.e., ANC4+) for woman i in region j, P(Rij ≤ 1) represents the probability of responding at or above this highest ANC visit, and 1 − P(Rij ≤ 1) corresponds to the probability of responding below this highest ANC visit for woman i in region j. γ00 provides the log odds of being at or above that ANC visit in a typical region, Wj is a region-level predictor for region j, γ01 is the slope associated with this predictor, μ0j is the level-2 error term representing a unique effect associated with region j, γ10 is the average effect of the individual-level predictor, Xij is an individual-level predictor for woman i in region j, and μ1j is a random slope for a level-1 predictor variable Xij, which allows the relationship between the individual-level predictor (Xij) and the outcome (Y1ij) to vary across level-2 units. In addition, Y2ij represents the log odds of being at or above the next level of ANC visit (i.e., one-to-three ANC visits) for woman i in region j. Y2ij also include an extra term, (δ), representing the difference between the intercepts corresponding to this category and the preceding one. This model gives log odds when fitted to data. For the ease of interpretation, the log odds can be converted into probabilities. The predicted probability of the event of interest (being at or above zero level of ANC visit) can be calculated using the following formula as discussed elsewhere [35]. $$ Predicted\ Probability\ {\theta}_{Mij}=\frac{e^{Y_{Mij}}}{1+{e}^{Y_{Mij}}} $$ This is a simple conversion of log odds of an event of interest to a probability of an event of interest. In this expression, θMij is the probability of the event (being at or above zero level of ANC visit), \( \mathbf{1}-{\boldsymbol{e}}^{{\boldsymbol{Y}}_{\boldsymbol{Mij}}} \) is the corresponding probability of being below a given ANC visit, YMij represents the log odds of the event of interest that is the log odds of pregnant woman i in the jth region being at or above Mth level of ANC visit. Ordinal polytomous data can be modelled using multinomial distribution with cumulative logit link function to compute the cumulative odds ratio for each category of response variable [36]. The GLIMMIX procedure in SAS was used to estimate the hierarchical generalized linear model [35]. The GLIMMIX procedure fits two kinds of models to multinomial data. For ordinal data, models with cumulative link functions apply, whereas generalized logit models fit for nominal data. Using cumulative logit link function reduces model size and memory requirements as compared to using generalized logit link function. 
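The cumulative-logit multilevel model estimated here with the SAS GLIMMIX procedure can be approximated in R with the ordinal package. The sketch below is illustrative only: the variable names are assumptions rather than the actual EDHS variables, and clmm parameterizes the cumulative probabilities P(R ≤ m), so coefficient signs are reversed relative to the "at or above" formulation used above.

# Minimal sketch, assuming a women-level data frame `women` with an ordered
# outcome anc_cat (none < one_to_three < four_plus) and a region identifier.
library(ordinal)

fit <- clmm(anc_cat ~ husband_edu + wealth + living_children + autonomy +
              wanted_pregnancy + rural + anc_availability + dist_anc_km +
              (1 | region),          # region-level random intercept;
            data = women,            # random slopes, e.g. (1 + wealth | region),
            link = "logit")          # could be added as in the paper's model 3
summary(fit)

# Converting a fitted cumulative log-odds Y into a probability, as in the formula above
inv_logit <- function(y) exp(y) / (1 + exp(y))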
The multilevel multinomial logistic regression model was carried out to predict the probability of being at or above zero level of ANC visit using different individual and region level variables. Since the outcome variable is ordinal, the cumulative logit (CLOGIT) link function was used. Four model building process steps were followed. The Laplace estimation in the GLIMMIX procedure was used to estimate the best-fitting model [35]. The model building process was started with the empty, unconditional model with no predictors. This model was used to calculate the intra-class correlation coefficient (ICC) [35]. The ICC estimate tells how much variation in the use of antenatal care exists between regions (level-2 units) [32]. Details of calculating ICC in hierarchical generalized linear models is discussed elsewhere [32, 37]. A more complex models were gradually built by checking improvements in model fit after each model was estimated. The negative 2 log likelihood (−2LL) was used to assess the best fitting model [35]. Model two was built by including level-1 (individual level) variables as fixed effects in the random intercept only model (empty model). The individual level variables included in this model were women's age, women's education, women's occupation, husband's education, husband's occupation, household wealth, parity, timing of 1st ANC visit, age at 1st birth, number of living children, nature of recent pregnancy (planned or unplanned) and autonomy in own healthcare decision making. However, only five individual level variables: husbands' or partners' education, household wealth, number of living children, autonomy in own personal healthcare decision making and nature of pregnancy were found significant and carried forward. Then, model three was built by adding these individual level variables as random effects in order to determine if their influence on ANC visit varied among regions. Lastly, level-2 (region level) variables were added as fixed effects in the fourth model: urban-rural residence, two general service readiness scores (health facility management system and infrastructure), ANC availability score (ANC supplements), two ANC readiness scores (readiness to provide diagnostic services and skilled care) and average straight-line distance to the nearest ANC health facility. The spatial analysis was carried out using ArcGIS 10.6.1 and R version 3.5.1. To produce the flattened map of Ethiopia, the Ethiopian Polyconic Projected Coordinate System was used [32]. Hot spot analysis and spatial regression were carried out to identify spatial clusters and factors associated with spatial variations of antenatal care use, respectively. Since geographic coordinates were collected at cluster level, the unit of spatial analyses were DHS clusters. The hot spot analysis followed three procedures as discussed elsewhere [32]. These were the Global Moran's I statistic, Incremental Spatial Autocorrelation and the Getis-Ord Gi* statistic. The Global Moran's I statistic is a global measure of spatial autocorrelation [38]. The Incremental Spatial Autocorrelation was used to determine the scale, which is the critical distance at which there is maximum clustering [32]. The average distance at which a feature has at least one neighbour (15 km) was calculated using Calculate Distance Band from Neighbour Count in the Spatial Statistics tools toolbox in ArcGIS. The maximum distance at which clustering of at least four antenatal care visits peaked was at 104 km with a z-score of 4.02. 
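The spatial autocorrelation statistics described in the Methods were computed in ArcGIS; a rough R equivalent using the spdep package is sketched below, using the 104 km band identified by the incremental analysis. The object names (coords, anc4) are assumptions, and spdep's localG with self-included neighbours only approximates the ArcGIS Getis-Ord Gi* implementation.

# Minimal sketch, assuming `coords` (cluster longitude/latitude matrix) and
# `anc4` (cluster-level proportion of women with at least four ANC visits).
library(spdep)

nb <- dnearneigh(coords, d1 = 0, d2 = 104, longlat = TRUE)  # 104 km distance band

# Global spatial autocorrelation (Moran's I)
moran.test(anc4, nb2listw(nb, style = "W", zero.policy = TRUE), zero.policy = TRUE)

# Gi*-type local statistic (z-scores) with FDR-adjusted two-sided p-values
lw   <- nb2listw(include.self(nb), style = "B", zero.policy = TRUE)
gi   <- as.numeric(localG(anc4, lw))
pfdr <- p.adjust(2 * pnorm(-abs(gi)), method = "fdr")
hot  <- which(pfdr < 0.05 & gi > 0)   # candidate hot spot clusters
cold <- which(pfdr < 0.05 & gi < 0)   # candidate cold spot clusters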
The Getis-Ord Gi* statistic used this maximum distance to identify statistically significant spatial clusters of hot spots (areas of high antenatal care use rates and cold spots (low antenatal care use areas). A False Discovery Rate (FDR) correction method was applied to account for multiple and spatial dependence tests in Local Statistics of Spatial Association [32, 39]. Statistical significance was determined based on the z-scores and p-values returned while running hot spot analysis [32]. In addition to analysing the spatial patterns (hot spots), spatial regression was carried out to identify key factors behind the observed spatial patterns of ANC use. Moran's eigenvector-based spatial regression analysis was carried out using the spmoran package in R. Ordinary least square (OLS) regression models are frequently used to analyse and model spatial data. However, regression models applied to spatial data are frequently associated with spatially autocorrelated residuals. An eigenvector spatial filtering (ESF) regression models effectively removes these spatially autocorrelated residuals [40]. ESF reduces spatial misclassification errors, and it increases strength of model fit, normality of model residuals and homoscedasticity of model residuals [41]. In this analysis, a random effects eigenvector spatial filtering (RE-ESF) regression model that is the extension of ESF was used. RE-ESF estimates of regression coefficients and their standard errors are more accurate and reliable than ESF [42]. All the variables considered in the multilevel analysis were included in the spatial regression analysis. Furthermore, Moran eigenvector spatially varying coefficient (M-SVC) model, a local form of linear regression was used to model spatially varying relations. The M-SVC model outperforms the geographically weighted regression (GWR) model, which is a standard spatially varying coefficient (SVC) modelling approach, in terms of coefficient estimation accuracy and computational time [43]. Sociodemographic characteristics The mean age of women who gave birth in the 5 years period was 29.27 (standard deviation of ±6.82) years. Approximately 28% of women were within the age range of 25–29 years. Over 59% of the women had no education, while 27.37% had a primary level education. With regard to wealth, 23.45% of the women fell in the richest quintile and 32.56% were grouped in the poorest quintile. About 34 and 45% of the women were followers of the Orthodox Christian and Muslim faith, respectively. Seventy eight percent of the women were from rural areas (Table 1). Table 1 Sociodemographic characteristics of pregnant women in Ethiopia, 2016 (N = 6954) Women's obstetric characteristics Amongst the 6954 women who gave birth 5 years preceding the survey, 2524 (36.30%) had five or more births; and about 30% of the women had more than four living children. The mean age at first childbirth was 19.15 (standard deviation of ±3.74) years. With regard to healthcare decisions, only 16.31% of the women had autonomy to decide on their own healthcare needs (Table 2). Table 2 Obstetric characteristics of pregnant women in Ethiopia, 2016 (N = 6954) Health facility characteristics Data were collected from 1165 health facilities nationwide. Amongst these health facilities, 18.73 and 27.75% were hospitals and health centres, respectively. About 68% of the health facilities were managed by the government. With regard to antenatal care service provision, 919 (78.88%) of the health facilities provided antenatal care services. 
The national average distance from antenatal care health facilities to the 2016 EDHS clusters was 9.95 km. The 2016 EDHS sampled clusters in the Somali region were the farthest (25.46 km on average) from antenatal care facilities. Conversely, EDHS clusters in Dire Dawa were, on average, 0.99 km from antenatal care facilities (Table 3). Table 3 The average distance from sampled antenatal care providing health facilities to demographic and health survey clusters in Ethiopia, 2016 (N = 919) The mean health facility service availability and readiness scores differed across regions and city administrations. For Addis Ababa, the average value of health facility infrastructure was 0.783, while the Benishangul-Gumuz region's health facilities had the lowest mean value of − 0.534. Regarding facilities' diagnostic services, health facilities in Dire Dawa had the highest readiness score of 0.540, while Gambela had the lowest mean value of − 0.544. The mean scores of health facility service availability and readiness for each region and city administration are shown in Table 4. Table 4 Health facilities service availability and readiness scores linked to the demographic and health survey clusters in Ethiopia, 2016 (N = 919) Antenatal care use There were 2351 (33.81%) women who reported having no ANC visits for their last pregnancy. The proportion of women who had at least four ANC visits during their last pregnancy was 36.78% (66.93% urban, 28.41% rural). Utilization of antenatal care services varied across the different regions and city administrations; for instance, the highest proportions of four or more antenatal care visits were reported in Addis Ababa (89.33%), followed by Dire Dawa (65.15%) and the Tigray Region (55.83%). The map (Fig. 1) shows the regional variations in antenatal care use rates. Fig. 1 Antenatal care use among pregnant women in Ethiopia, 2016 Determinants of antenatal care use The calculated intra-class correlation coefficient (ICC) was 25.00%. This indicated that 25% of the variability in having one or more ANC visits was accounted for by region, leaving 75% of the variability to be accounted for by the differing characteristics of the women, or other unmeasured factors. The ICC was calculated from the intercept variance (\( \sigma_{\mu 0}^2 \)) and the level-1 residual variance, fixed at \( \pi^2/3 \) for the logistic distribution, as: $$ ICC=\frac{\sigma_{\mu 0}^2}{\sigma_{\mu 0}^2+\pi^2/3}=\frac{1.0968}{1.0968+3.29}=0.25 $$ The probability of being at or above ANC4+ visits for a given woman with average background characteristics (with no covariate in the model) was 0.3945. This is the probability of having at least four ANC visits in a typical region. $$ P\left(\text{being at or above ANC4+ visits}\right)=\frac{e^{-0.4284}}{1+e^{-0.4284}}=\frac{0.6516}{1+0.6516}=0.3945 $$ Similarly, the probability of having one-to-three ANC visits or more was 0.7263. $$ P\left(\text{being at or above one-to-three ANC visits}\right)=\frac{e^{0.9762}}{1+e^{0.9762}}=\frac{2.6544}{1+2.6544}=0.7263 $$ This is the cumulative probability of being at or above one-to-three ANC visits. In order to obtain the exact probability of being at a given level, we subtract the adjacent cumulative probabilities. For example, the predicted probability of having ANC4+ visits was 0.3945, that of having one-to-three ANC visits was 0.7263 − 0.3945 = 0.3318, and that of having no ANC visit was 1 − 0.7263 = 0.2737.
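These quantities can be reproduced with a few lines of arithmetic; the snippet below simply re-evaluates the reported formulas using the estimates quoted in the text.

# Reproducing the reported ICC and predicted probabilities from the empty-model estimates
icc       <- 1.0968 / (1.0968 + pi^2 / 3)          # ~0.25
p_anc4    <- exp(-0.4284) / (1 + exp(-0.4284))     # ~0.3945, P(ANC4+ visits)
p_ge_1to3 <- exp( 0.9762) / (1 + exp( 0.9762))     # ~0.7263, P(1-3 visits or more)
p_1to3    <- p_ge_1to3 - p_anc4                    # ~0.3318
p_none    <- 1 - p_ge_1to3                         # ~0.2737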
Women's autonomy in own healthcare decision, husbands' / partners' education, household wealth, number of living children a woman had, and nature of pregnancy were strong individual-level predictors of having more ANC visits (i.e., ANC4+ visits) among pregnant women. A woman whose husband/partner made decisions on her own healthcare was 24% less likely to have at least four ANC visits as compared to a woman who had autonomy to make decisions. A woman whose husband attained a primary level of education was 53% more likely to have more ANC visits as compared to those whose husband had no education. The odds of having at least four ANC visits increased with increasing wealth quintile. Therefore, women who were in the highest quintile were 3.48 times more likely to have more ANC visits as compared to those in the lowest quintile. Women whose pregnancy was unwanted were 20% less likely to have more ANC visits relative to their counterparts with a wanted pregnancy. Moreover, a one-child increase in the number of children a woman had was associated with a 7 % decrease in the use of more ANC visits (Table 5). Table 5 Factors associated with being at or above one-to-three ANC visits among pregnant women in Ethiopia (N = 6954) At the regional level (level 2), three variables were significantly associated with having more ANC visits (ANC4+ visits). Pregnant women who were living in rural areas were 47% less likely to have more ANC visits as compared to urban women. A one-unit increase in the mean score of antenatal care service availability in a typical region was associated with a five-fold increase in the odds of having more ANC visits. Every one-kilometre increase in distance to the nearest ANC providing facilities in a typical region was negatively associated with having at least four ANC visits (Table 5). Finally, the majority of the between region variance was explained by this model: the between region variation in using more ANC visits decreased from 1.0968 to 0.0242, which is a 97.74% reduction in the unexplained variance between region antenatal care utilization. However, region level random effects are significant; the intra-class correlation is still 1%. This indicated that even after controlling for individual and regional level factors, there is still a considerable region level clustering of ANC use. The between region variance of slopes indicated that the following two variables varied significantly across regions: household wealth and number of living children (Table 5). Hot spots of antenatal care use There is strong evidence to support spatial clustering in utilization of at least four ANC visits among pregnant women in Ethiopia (Global Moran's I = 0.18, z-score = 6.11, P-value < 0.0001). Most of the hot spot areas (high ANC rates) were located in the Southern Nations, Nationalities and Peoples region (SNNPR), followed by some parts of the Gambela and Oromia regions. Conversely, the majority of the cold spot areas (low ANC rates) were located in Addis Ababa, followed by some parts of the Oromia region. This clustering was supported by the Getis-Ord Gi* statistic when conducting the spatial analysis (Fig. 2). Furthermore, the identified ANC hot spots were located closest to referral/teaching hospitals. This was observed on the layered map showing hot spots and ANC providing hospitals (Fig. 3). 
Clusters of at least four ANC visits in Ethiopia, 2016 Hot spots of at least four ANC visits vs ANC providing Hospitals in Ethiopia, 2016 Determinants of spatial variations in antenatal care use In the spatial regression analysis, it was found that household wealth, having first ANC visit during the first trimester, women's age, parity, availability of ANC supplements, facilities readiness to provide skilled care and distance to ANC providing facilities were associated with the spatial variations of at least four ANC visits. Parity and distance to ANC facilities were negatively associated with the spatial variations in the use of ANC visits (Table 6). About 74% of the spatial variability in the outcome variable was explained by the regression model (adjusted R2 = 0.743). Table 6 Factors associated with the spatial variations of at least four ANC visits in Ethiopia, 2016 In the M-SVC regression model, it was found that the relationship between having at least four ANC visits and having first ANC visit during the first trimester was varying across the geographic space – across clusters (Fig. 4). ANC visit was found to be a spatial problem in Ethiopia. Having first ANC visit during the first trimester was positively associated with having at least four ANC visits across the country. For example, in most of the clusters in the Amhara and Tigray regions, it was found that first ANC visits were stronger predictors of at least four ANC visits. However, coefficients of the other variables were estimated constant. The M-SVC model has explained 75.82% of the spatial variations of having at least four ANC visits. Spatial variations of at least four ANC visits with spatially varying coefficients of first ANC visit in Ethiopia, 2016 This study aimed to assess the spatial variations in the use of ANC in Ethiopia. Furthermore, it aimed to identify factors associated with ANC use throughout the country, using the national population and health facility data. This is the first study to provide a comprehensive assessment of ANC use in Ethiopia by region of living, service type and demographics. ANC visit was found to be a spatial problem in Ethiopia. It was also found that women's ANC visit was significantly associated with different individual and region level factors. In Ethiopia, the proportion of at least four ANC visits was 36.78%; the highest proportion was reported in urban areas (66.93% vs 28.41%). More than half of antenatal care visits were started during the second trimester of pregnancy, this was far below the WHO recommendation of having at least four ANC visits [9] started during the first trimester of pregnancy [1, 9]. Nevertheless, in 2016, more women had at least four ANC visits as compared to the results of the previous three DHS surveys [11,12,13]. Similarly, the proportion of women who received ANC in the first trimester was higher than the 2000, 2005 and 2011 DHS survey findings [11,12,13]. Despite these improvements, this figure is lower than the 2014 United Nations report where 52% of pregnant women in developing regions and 49% in sub-Saharan Africa had a minimum of four ANC visits [10]. There are also variations in ANC visits across the different regions in the country. The highest proportions, more than 50% with at least four ANC visits, were reported in the Addis Ababa and Dire Dawa city administration and the Tigray region. The lowest, below 20% use of at least four ANC visits, were reported in the Somali and Afar regions. 
Despite the overall increase in ANC visits in the country, it was found that there was significant regional variation in undertaking four or more ANC visits. Hot spots of at least four ANC visits were detected in the Southern Nations, Nationalities and Peoples region, and some parts of the Gambela and Oromia regions. The identified hot spots were located closest to teaching hospitals (Jimma, Hawassa, Wolaita and Arba Minch hospitals); they have a high number of service providers including students. These teaching hospitals also have antenatal care services available and are ready to serve the target population. Therefore, women who are living closest to these facilities are more likely to have frequent antenatal care visits, as the services could be more attractive to them. The majority of cold spots were detected in the Addis Ababa city administration followed by some parts of the Oromia region. This was an unexpected finding, as Addis Ababa is where the majority of health facilities are concentrated. This highlights the need to specify how hot spot analysis works. In hot spot analysis, every feature has a neighborhood and that neighborhood is compared to the study area, and the feature is marked with the result of that comparison. If the neighborhood is significantly different from the study area, then that feature will be marked either a hot spot or a cold spot depending on whether there are high values or low values. One important note here is that 'where are the hot spots?' is not necessarily the same as 'where are the highest values?' Hot spot analysis is a test of spatial randomness. In hot spot analysis, one can get a feature of low values, even zero, which is marked as a hot spot because its neighborhood was high enough to bring that local average to be significantly different from the global average. In our study finding, even though the overall prevalence of antenatal care use was high in Addis Ababa, the majority of clusters in the city had very low values. When every neighborhood in the city was compared to the study area, the neighborhood values were significantly lower than the study area. Thus, the spatial statistics marked every feature in Addis Ababa as a cold spot that is the local average is significantly lower than the global average. Due to this low prevalence clusters, Addis Ababa did not show hot spots of ANC4+ visits. Similarly, most of the clusters in Addis Ababa had the least first ANC visits during the first trimester. This could also be explained by the spatial variations of timing of first ANC visits as observed from the Moran eigenvector spatially varying coefficient (M-SVC) regression model. In Addis Ababa, it was found that first ANC visits during the first trimester were not strong predictors of at least four ANC visits as observed in the Amhara and Tigray regions. Even though the statistics gave this finding, further study is required to understand why low rates of at least four ANC visits are clustered in Addis Ababa. Spatial regression models assume the potential between-neighborhood correlations due to spatial process [44]. Standard multilevel models, however, do not assume spatial dependence; neighborhood observations are independent of one another [44, 45]. This could lead to the overstatement of statistical significance of neighborhood effects [44]. Our paper considered same variables for both spatial and multilevel analysis. However, only wealth quintiles and availability of ANC supplements were shared between the two. 
This does not mean that these shared variables have the same interpretation with respect to ANC visits in the two models. Spatial regression models make it possible to identify what is happening in a particular geographic location and why it is happening. In the spatial regression analysis, it was found that the use of four or more ANC (ANC4+) visits varied across geographic areas. These geographical variations were explained by different variables, such as the timing of the first ANC visit, availability of ANC supplements, facilities' readiness to provide skilled care, and distance to ANC-providing facilities. Furthermore, the spatial analysis helped to explain the cold spots of ANC4+ visits in Addis Ababa, which the standard multilevel analysis could not explain. This enables informed decision making, such as identifying which communities and health facilities need special attention and where the government should spend more money. Different individual and regional level factors were significantly associated with the use of more ANC visits (ANC4+ visits). Amongst the regional level variables, it was found that a one-unit increase in the mean score of antenatal care service availability in a typical region was significantly associated with a five-fold increase in the odds of having more ANC visits. In Nigeria, it was found that health facilities staffed with fewer antenatal care providers were negatively associated with the use of antenatal care services [23]. Availability and provision of antenatal care commodities at every antenatal care facility will help to reduce the costs associated with purchasing those drugs and thus improve women's antenatal care attendance. Furthermore, health facility readiness, for instance having the required number of physicians or service providers attending pregnant women, would minimize the waiting time that a woman spends at a health facility. This could make the service more attractive and hence draw more women into antenatal care. Every one-kilometre increase in distance to the nearest ANC-providing facility in a typical region was negatively associated with the odds of having more ANC visits. This finding was supported by a study carried out in Ethiopia in which proximity to a health facility [21] was significantly associated with the use of antenatal care services. Furthermore, a study carried out in Northern India found that living far from a health facility was negatively associated with maternal health service use [17]. Similarly, distant health facilities were negatively associated with the use of antenatal care services [23, 24], whereas women who lived close to an obstetric health facility were more likely to use antenatal care services [46]. In Ethiopia, having access to obstetric care facilities within an hour's travel time was also significantly associated with the use of antenatal care services [47, 48]. In rural Burkina Faso, access to an obstetric care facility within 5 km was significantly associated with higher odds of having at least three antenatal care visits [49]. This indicates that geographic accessibility, measured in either distance or travel time, has a strong influence on maternal health service utilization [50]. Similarly, pregnant women who were living in rural areas were 47% less likely to have at least four ANC visits as compared to urban women. This finding was supported by another study carried out in Ethiopia in which place of residence as well as administrative region were significantly associated with antenatal care use [51, 52].
Furthermore, this was in agreement with previous studies carried out in Ghana [19], Vietnam [15], Nigeria [16] and Northern India [17]. Worldwide, the inequalities in the distribution of health facilities were reflected by the higher proportion of antenatal care use in urban centres as compared to rural areas [18, 53]. These disparities could be due to the local inaccessibility of obstetric care services in rural areas as well as variations in some regional administrations. Therefore, the government and other service providers should work together for improving communities' easy access to healthcare services. Amongst the individual-level factors, women's autonomy in their own healthcare decision was significantly associated with the odds of having more ANC visits. A woman whose husband/partner made decisions on her own healthcare was 24% less likely to have at least four ANC visits as compared to a woman who had autonomy to make decisions. This was consistent with other study findings where husbands' approval had a greater effect on the use of antenatal care services [48, 54]. Another study conducted in Ethiopia found that women's autonomy on their own healthcare decision-making was significantly associated with the higher odds of using antenatal care service [52]. Therefore, the empowerment and autonomy of women in all aspects of life, especially in their own healthcare decision is a highly important end in itself. In Ethiopia, among the individual-level variables, husbands' level of education was significantly associated with the odds of having more ANC visits. This study found that a woman whose husband had attained a primary level of education was 53% more likely to have at least four ANC visits as compared to those whose husband had no education. This finding adds to previous research conducted in Ethiopia which found that women's education [21, 22] and husband's attitude [22] were significantly associated with antenatal care use. The odds of having at least four ANC visits was significantly associated with the increased in household wealth, in agreement with previous research conducted in Ghana [19] and Nigeria [24]. In some settings, service fees and socio-economic status were strong predictors of antenatal care use [19, 20, 55]. In Ethiopia, even though antenatal services are free of charge at government health facilities, services fees at private health facilities as well as transportation costs are high. Moreover, most women end up spending the entire day at health facilities for their check-ups and on travelling to and from health facilities. This kind of indirect cost is associated with the women's daily life of which they might go to a farm or market to make their daily living. Therefore, women in the highest wealth quintile will be more likely to make more antenatal care visits as compared to women in the lowest quintile. In Ethiopia, having unwanted pregnancy was negatively associated with the likelihood of having more ANC visits as compared to wanted pregnancies. This finding was consistent with other studies carried out in Ethiopia, as those women who had a wanted pregnancy were more likely to have antenatal care visits [48, 52, 56]. Women with wanted pregnancies could want to have a healthy pregnancy and childbirth, and thus they might give great attention for antenatal care services. 
In this current study, it was found that a one-child increase in the number of living children a woman had was significantly associated with a 7 % decrease in the likelihood of having more ANC visits. This finding was supported by studies carried out in Ghana [19] and India [57] where a significant reduction in the use of ANC services was observed with increasing in the number of living children. This could be related to woman's previous experience, as a woman might be reluctant to have ANC visits in a subsequent pregnancy if she had a negative previous experience or if she perceived the importance of ANC to be low with subsequent pregnancies. To avoid any complications and/or adverse pregnancy outcomes, more attention should be given to encouraging women to have more ANC visits. However, in another study, it was found that high parity was significantly associated with higher uptake of ANC visits [52]. This could be attributable to previous complications and/or adverse pregnancy outcomes. Furthermore, this could be due to influences of previous ANC visits, in case if they had. The identified individual and regional level factors such as distance from health facility, socio-economic status, number of living children and women's autonomy might be related to each other. They do not exist as separate factors in life. For example, in a systematic review, it was found that poor geographic access to health care was overlapped with poverty. In Uganda, those regions with the worst access to health care were the regions where the large segment of the population lived below poverty line [58]. Distance was found to be a barrier in obtaining health care for 20% of the poorest as compared to only 9 % of the richest population in Uganda [59]. In low-and middle-income countries, it was found that improving healthcare access could reduce socio-economic gaps in healthcare [60]. This study linked population and health facility data to identify both the demand and supply side determinants of antenatal care use. This was not the case in most previous studies where they assessed the demand and supply side determinants separately. In addition to the standard multilevel analysis, this study identified geographical variations of ANC use as well as factors associated with these variations. Investigating ANC use geographically is very important for informed decision making and monitoring and evaluation purposes. Even though this study had several methodological limitations, most of these were minimized. Problems related to sampled facilities, temporal differences between DHS and SPA surveys, and misclassifications errors were minimized [32]. However, using a straight-line distance introduces bias and this would be reduced if a road network link was carried out. In case of road network analysis, the distance between DHS clusters and health facilities would not be affected by terrain characteristics. It reflects the road distance rather than the shortest distance between two points. Furthermore, analysis of sampled facilities and removing DHS clusters without geographic coordinate information might under or overestimate the study finding. For instance, the influence of distance on ANC use could be different if all health facilities were included. The estimated average straight-line distance to the nearest ANC facility would not be this much high. Even though multilevel analysis should include weights at each level, this study did not considered sampling weights. 
The problem with this is that DHS does not provide separate weights for different levels, such as region, cluster or household-level weights. DHS only provides an average weight which is proportional to hv005 or v005. The GLIMMIX procedure, however, asks each level weight. It has OBSWEIGHT = option and WEIGHT = option. The GLIMMIX procedure does not provide any other solution when we have average weights. In Ethiopia, even though there is an increase in the use of ANC services, the country has still not achieved the recommended number of ANC visits. It was found that husband's/partner's education, women's autonomy in their own healthcare decision making, rural residence, ANC service availability and average distance to the nearest ANC providing facility were associated with having more ANC visits. There was a five-fold increase in the odds of having more ANC visits when key ANC supplements were available. Furthermore, there is evidence of a wide geographical variation in having at least four ANC visits across the country. Hot spots of at least four ANC visits were identified in areas where there are teaching hospitals. The findings of this study have several implications: first, beyond providing free antenatal services at public health facilities, the government and non-governmental organizations should make an effort to set up health facilities in rural areas to improve ANC use. Second, availing ANC services at all levels, especially in rural areas and some regions with poor healthcare access, and making them ready to provide these services should also be prioritized. In addition to this, the available and newly constructed teaching hospitals should be equipped to provide ANC services. Third, the empowerment and autonomy of women in all aspects of life, especially in their own healthcare decision, need to be emphasized. Lastly, as observed from the hot spot analysis, there is a need for gaining a more detailed understanding of the findings for Addis Ababa. ANC: ANC4 + : Four or more Antenatal Care Visits CRS: Coordinate Reference System DHS: Demographic and Health Survey EAs: Enumeration Areas EDHS: Ethiopia Demographic and Health Survey ESPA+: Ethiopia Service Provision Assessment Plus FDR: False Discovery Rate GIS: HGLM: Intra-class Correlation Coefficient SNNPR: Southern Nations, Nationalities and Peoples Region Service Provision Assessment TFR: Total Fertility Rate WGS84: World Geodetic System 84 World Health Organization. WHO recommendations on antenatal care for a positive pregnancy experience. Geneva: WHO; 2016. Carroli G, Rooney C, Villar J. How effective is antenatal care in preventing maternal mortality and serious morbidity? An overview of the evidence. Paediatr Perinat Epidemiol. 2001;15(s1):1–42. Campbell OM, Graham WJ. Group LMSSs. Strategies for reducing maternal mortality: getting on with what works. Lancet. 2006;368(9543):1284–99. Kuhnt J, Vollmer S. Antenatal care services and its implications for vital and health outcomes of children: evidence from 193 surveys in 69 low-income and middle-income countries. BMJ Open. 2017;7(11):e017122. Wondemagegn AT, Alebel A, Tesema C, Abie W. The effect of antenatal care follow-up on neonatal health outcomes: a systematic review and meta-analysis. Public Health Rev. 2018;39(1):33. Haftu A, Hagos H, Mehari M-A. Pregnant women adherence level to antenatal care visit and its effect on perinatal outcome among mothers in Tigray public health institutions, 2017: cohort study. BMC Res Notes. 2018;11(1):872. 
Mpembeni RN, Killewo JZ, Leshabari MT, Massawe SN, Jahn A, Mushi D, et al. Use pattern of maternal health services and determinants of skilled care during delivery in southern Tanzania: implications for achievement of MDG-5 targets. BMC Pregnancy Childbirth. 2007;7(1):1. Conrad P, Schmid G, Tientrebeogo J, Moses A, Kirenga S, Neuhann F, et al. Compliance with focused antenatal care services: do health workers in rural Burkina Faso, Uganda and Tanzania perform all ANC procedures? Tropical Med Int Health. 2012;17(3):300–7. Villar J, Ba'aqeel H, Piaggio G, Lumbiganon P, Belizán JM, Farnot U, et al. WHO antenatal care randomised trial for the evaluation of a new model of routine antenatal care. Lancet. 2001;357(9268):1551–64. United Nations. The millennium development goals report 2015. New York: United Nations; 2015. Central Statistical Authority [Ethiopia], Macro O. Ethiopia Demographic and Health Survey 2000. Addis Ababa: Central Statistical Authority and ORC Macro; 2001. Central Statistical Agency [Ethiopia], ORC Macro. Ethiopia Demographic and Health Survey 2005. Addis Ababa: Central Statistical Agency and ORC Macro; 2006. Central Statistical Agency [Ethiopia], ICF International. Ethiopia Demographic and Health Survey 2011. Addis Ababa: Central Statistical Agency and ICF International; 2012. Central Statistical Agency (CSA) [Ethiopia], ICF. Ethiopia Demographic and Health Survey 2016. Addis Ababa: CSA and ICF. p. 2016. Sepehri A, Sarma S, Simpson W, Moshiri S. How important are individual, household and commune characteristics in explaining utilization of maternal health services in Vietnam? Soc Sci Med. 2008;67(6):1009–17. Babalola S, Fatusi A. Determinants of use of maternal health services in Nigeria-looking beyond individual and household factors. BMC Pregnancy Childbirth. 2009;9(1):43. Bloom SS, Wypij D, Gupta MD. Dimensions of women's autonomy and the influence on maternal health care utilization in a north Indian city. Demography. 2001;38(1):67–78. Gage AJ. Barriers to the utilization of maternal health care in rural Mali. Soc Sci Med. 2007;65(8):1666–82. Arthur E. Wealth and antenatal care use: implications for maternal health care utilisation in Ghana. Heal Econ Rev. 2012;2(1):14. Titaley CR, Dibley MJ, Roberts CL. Factors associated with underutilization of antenatal care services in Indonesia: results of Indonesia demographic and health survey 2002/2003 and 2007. BMC Public Health. 2010;10(1):485. Tsegay Y, Gebrehiwot T, Goicolea I, Edin K, Lemma H, San SM. Determinants of antenatal and delivery care utilization in Tigray region, Ethiopia: a cross-sectional study. Intern. 2013;12(1):30. Abosse Z, Woldie M, Ololo S. Factors influencing antenatal care service utilization in hadiya zone. Ethiopian J Health Sci. 2010;20(2). Dairo M, Owoyokun K. Factors affecting the utilization of antenatal care services in Ibadan, Nigeria. Benin J Postgraduate Med. 2010;12(1). Fagbamigbe AF, Idemudia ES. Barriers to antenatal care use in Nigeria: evidences from non-users and implications for maternal health programming. BMC Pregnancy Childbirth. 2015;15(1):95. World Health Organization. Service Availability and Readiness Assessment (SARA): an annual monitoring system for service delivery. Geneva: Health Statistics and Information Systems, WHO; 2013. World Health Organization. Service availability and readiness assessment (SARA): an annual monitoring system for service delivery: reference manual. 2015. Wang W, Winter R, Mallick L, Florey L, Burgert-Brucker C, Carter E. 
The relationship between the health service environment and service utilization: linking population data to health facilities data in Haiti and Malawi. DHS analytical studies no. 51. Rockville: ICF International; 2015. Hozumi D, Fronczak N, Minichiello SN, Buckner B, Fapohunda B, Kombe G, et al. Profiles of Health Facility Assessment Methods. Report of the International Health Facility Assessment Network (IHFAN). MEASURE Evaluation, USAID; 2008. Burgert-Brucker CR, Prosnitz D. Linking DHS household and SPA facility surveys: Data considerations and Geospatial Methods. DHS Spatial Analysis Reports No. 10: ICF International; 2014. Burgert C, Zachary B. Incorporating geographic information into demographic and health surveys: a field guide to GPS data collection; 2011. Ethiopian Public Health Institute, ICF International. Ethiopia service provision assessment plus (ESPA+) survey 2014. Addis Ababa: Ethiopian Public Health Institute and ICF International; 2014. Tegegne TK, Chojenta C, Getachew T, Smith R, Loxton D. Service environment link and false discovery rate correction: methodological considerations in population and health facility surveys. PLoS One. 2019;14(7):e0219860. Skiles MP, Burgert CR, Curtis SL, Spencer J. Geographically linking population and facility surveys: methodological considerations. Popul Health Metrics. 2013;11(1):14. Natural Earth. Free vector and raster map data [cited 31/05/2019. Available from: https://www.naturalearthdata.com/downloads/10m-cultural-vectors/. Ene M, Leighton EA, Blue GL, Bell BA, editors. Multilevel models for categorical data using SAS® PROC GLIMMIX: the basics. SAS Global Forum 2015 Proceedings; 2015. O'Connell AA, Goldstein J, Rogers HJ, Peng CJ. Multilevel logistic models for dichotomous and ordinal data. Multilevel Model Educ Data. 2008:199–242. Tom A, Bosker TASRJ, Bosker RJ. Multilevel analysis: an introduction to basic and advanced multilevel modeling: sage; 1999. Anselin L. Local indicators of spatial association—LISA. Geogr Anal. 1995;27(2):93–115. Caldas de Castro M, Singer BH. Controlling the false discovery rate: a new application to account for multiple and dependent tests in local statistics of spatial association. Geogr Anal. 2006;38(2):180–208. Tiefelsdorf M, Griffith DA. Semiparametric filtering of spatial autocorrelation: the eigenvector approach. Environ Plan A. 2007;39(5):1193–221. Thayn JB, Simanis JM. Accounting for spatial autocorrelation in linear regression models using spatial filtering with eigenvectors. Ann Assoc Am Geograph. 2013;103(1):47–66. Murakami D, Griffith DA. Random effects specifications in eigenvector spatial filtering: a simulation study. J Geograph Syst. 2015;17(4):311–31. Murakami D, Yoshida T, Seya H, Griffith DA, Yamagata YA. A Moran coefficient-based mixed effects approach to investigate spatially varying relationships. Spatial Statistics. 2017;19:68–89. Chaix B, Merlo J, Chauvin P. Comparison of a spatial approach with the multilevel approach for investigating place effects on health: the example of healthcare utilisation in France. J Epidemiol Community Health. 2005;59(6):517–26. Xu H. Comparing spatial and multilevel regression models for binary outcomes in neighborhood studies. Sociol Methodol. 2014;44(1):229–72. Rose M, Abderrahim N, Stanton C, Helsel D. Maternity Care: A Comparative Report on the Availability and Use of Maternity Services. Data from the Demographic and Health Surveys Women's Module & Services Availab ility Module 1993–1996. MEASURE Evaluation Technical Report Series No. 9. 
Carolina Population Center, University of North Carolina at Chapel Hill; 2001. Tilaye Gudina Terfasa, Mesganaw Fantahun Afework, Berhe FT. Antenatal Care Utilization and It's Associated Factors in East Wollega Zone, Ethiopia. Journal of Pregnancy and Child Health. 2017. Tewodros B, Dibaba Y. Factors affecting antenatal care utilization in Yem special woreda, southwestern Ethiopia. Ethiopian J Health Sci. 2009;19(1). De Allegri M, Ridde V, Louis VR, Sarker M, Tiendrebéogo J, Yé M, et al. Determinants of utilisation of maternal care services after the reduction of user fees: a case study from rural Burkina Faso. Health Policy. 2011;99(3):210–8. Tegegne TK, Chojenta C, Loxton D, Smith R, Kibret KT. The impact of geographic access on institutional delivery care use in low and middle-income countries: systematic review and meta-analysis. PLoS One. 2018;13(8):e0203130. United Nations Population Fund (UNFPA). Trends in Maternal Health in Ethiopia; Challenges in Achieving the MDG for Maternal Mortality. In-depth Analysis of the EDHS 2000-2011. Addis Ababa: UNFPA; 2012. Getasew M, Teketo K, Mekonnen A. Antenatal care service utilization and its associated factors among mothers who gave live birth in the past one year in Womberma Woreda, North West Ethiopia. Epidemiology: Open Access. 2015;5(Special Issue 2). Omo-Aghoja L, Aisien O, Akuse J, Bergstrom S, Okonofua F. Maternal mortality and emergency obstetric care in Benin City, south-South Nigeria. J Clin Med Res. 2010;2(4):055–60. Biratu BT, Lindstrom DP. The influence of husbands' approval on women's use of prenatal care: results from Yirgalem and Jimma towns, south West Ethiopia. Ethiop J Health Dev. 2006;20(2):84–92. Lincetto O, Mothebesoane-Anoh S, Gomez P, Munjanja S. Antenatal care: opportunities for Africa's newborns. New York: World Health Organiation; 2010. Jira T. Determinants of antenatal care utilization in Jimma town, Southwest Ethiopia. Ethiop J Health Sci 2005;15(1):0–15. Chandhiok N, Dhillon BS, Kambo I, Saxena NC. Determinants of antenatal care utilization in rural areas of India: a cross-sectional study from 28 districts (an ICMR task force study). J Obstet Gynecol India. 2006;56(1):47–52. Kiwanuka S, Ekirapa E, Peterson S, Okui O, Rahman MH, Peters D, et al. Access to and utilisation of health services for the poor in Uganda: a systematic review of available evidence. Trans R Soc Trop Med Hyg. 2008;102(11):1067–74. Odaga J. Health inequity in Uganda: the role of financial and non-financial barriers; 2004. Peters DH, Garg A, Bloom G, Walker DG, Brieger WR, Rahman MH. Poverty and access to health care in developing countries. Ann N Y Acad Sci. 2008;1136(1):161–71. We thank the University of Newcastle, Australia for offering free access to the digital online library to search the electronic databases that were considered for this analysis. We also thank the Measure DHS Program and the Ethiopian Public Health Institute for providing free access to the data sets used for this analysis. 
Department of Public Health, College of Health Sciences, Debre Markos University, Debre Markos, Ethiopia: Teketo Kassaw Tegegne. Research Centre for Generational Health and Ageing, Hunter Medical Research Institute, School of Medicine and Public Health, University of Newcastle, Newcastle, New South Wales, Australia: Catherine Chojenta & Deborah Loxton. The Australian College of Health Informatics, Sydney, New South Wales, Australia. Health System and Reproductive Health Research Directorate, Ethiopian Public Health Institute, Addis Ababa, Ethiopia: Theodros Getachew. Mothers and Babies Research Centre, Hunter Medical Research Institute, School of Medicine and Public Health, University of Newcastle, Newcastle, New South Wales, Australia. TKT, CC, RS and DL conceptualized the design of the analysis. TKT developed and drafted the manuscript. CC, TG, RS and DL participated in critically revising the intellectual contents of the manuscript. All authors read, provided feedback on, and approved the final manuscript. Correspondence to Teketo Kassaw Tegegne. Ethical approval was obtained from the Human Research Ethics Committee, The University of Newcastle. Approval to access the datasets was also obtained from the Ethiopian Public Health Institute (EPHI) and the Measure DHS program. Tegegne, T.K., Chojenta, C., Getachew, T. et al. Antenatal care use in Ethiopia: a spatial and multilevel analysis. BMC Pregnancy Childbirth 19, 399 (2019). doi:10.1186/s12884-019-2550-x
CommonCrawl
The prediction of virus mutation using neural networks and rough set techniques Mostafa A. Salama1,3, Aboul Ella Hassanien2,3 & Ahmad Mostafa1 Viral evolution remains a main obstacle to the effectiveness of antiviral treatments. The ability to predict this evolution will help in the early detection of drug-resistant strains and will potentially facilitate the design of more efficient antiviral treatments. Various tools have been utilized in genome studies to achieve this goal. One of these tools is machine learning, which facilitates the study of structure-activity relationships, secondary and tertiary structure evolution prediction, and sequence error correction. This work proposes a novel machine learning technique for the prediction of the possible point mutations that appear in alignments of primary RNA sequence structure. It predicts the genotype of each nucleotide in the RNA sequence, and proves that a nucleotide in an RNA sequence changes based on the other nucleotides in the sequence. A neural network technique is utilized in order to predict new strains, then a rough set theory based algorithm is introduced to extract these point mutation patterns. This algorithm is applied to a number of aligned time-series RNA isolates of the Newcastle virus. Two different data sets from two sources are used in the validation of these techniques. The results show that the accuracy of this technique in predicting the nucleotides in the new generation is as high as 75 %. The mutation rules are visualized for the analysis of the correlation between different nucleotides in the same RNA sequence. Deoxyribonucleic acid (DNA) strands are composed of units of nucleotides. Each nucleotide is composed of a nitrogen-containing nucleobase, which is either guanine (G), adenine (A), thymine (T), or cytosine (C). Most DNA molecules consist of two strands coiled around each other forming a double helix. These DNA strands are used as a template to create the ribonucleic acid (RNA) in a process known as transcription. However, unlike DNA, RNA is often found as a single strand. One type of RNA is the messenger RNA (mRNA), which carries information to the ribosome, where the protein is synthesized. The sequence of the mRNA specifies the sequence of amino acids of the formed protein. DNA and RNA are also the main components of viruses. Some viruses are DNA-based, while others are RNA-based, such as Newcastle, HIV, and flu [1]. RNA viruses differ from DNA-based viruses in the sense that they have higher mutation rates, and hence a higher adaptive capacity. This mutation causes a continuous evolution that leads to evasion of host immunity, and hence, the virus becomes even more virulent [2]. One of the RNA virus mutations is the point mutation, a small-scale mutation that affects the RNA sequence in only one or a few nucleotides, such as a nucleotide substitution. This substitution refers to the replacement of one nucleobase (i.e., base) by another, either through transition or transversion. Transition is the exchange of two purines (A <−> G) or two pyrimidines (C <−> T), while transversion is the exchange between a purine and a pyrimidine [3, 4]. Another type of point mutation is the frame-shift, which refers to the insertion or deletion of a nucleotide in the RNA sequence.
One of the important focuses in the field of human disease genetics is the prediction of genetic mutation [5]. Having information about the current virus generations and their past evolution could provide a general understanding of the dynamics of virus evolution and the prediction of future viruses and diseases [6]. The evolutionary relationship between species is determined by phylogenetic analysis; additionally, it infers the ancestor sequence of these species. These phylogenetic relationships among RNA sequences can help in predicting which sequence might have an equivalent function [7]. The analysis of the mutation data is very important, and one of the tools used for this purpose is machine learning. Machine learning techniques help predict the effects of non-synonymous single nucleotide polymorphisms on protein stability, function and drug resistance [8]. Some of these techniques that are used in prediction are support vector machine, neural networks and decision trees. These techniques have been utilized to learn the rules describing mutations that affect protein behavior, and use them to infer new relevant mutations that will be resistant to certain drugs [9]. Another use is to predict the potential secondary structure formation based on primary structure sequences [10–12]. A different direction is to predict the discovery of single nucleotide variants in RNA sequence. Another tool in machine learning is Markov chains, which can describe the relative rates of different nucleotide changes in the RNA sequence [13]. These models consider the RNA sequence to be a string of four discrete states, and hence, tracks the nucleotide replacements during the evolution of the sequence. In these models, it was assumed that the different nucleotides in the sequence evolved independently and identically, and justified that using the case of neutral evolution of nucleotides. Following that, several researches developed methods negated that assumption and identified the relevant neighbor-dependent substitution processes [14]. The prediction of the mutated RNA gives a clear understanding of the mutation process, the future activity of the RNA, and the help direct the development of the drugs that should be designed for it [15, 16]. In this work, we propose a machine learning technique that is based on rough set theory. This technique predicts the potential nucleotide substitutions that may occur in primary RNA sequences. In this technique, a training phase is utilized in which each iteration the input is an RNA sequence of one generation of the virus, while the output is the RNA sequence of the next generation of the virus. Every feature in the input is a nucleotide in the RNA sequence corresponds to a feature in the output. The training of the machine learning technique is fed with aligned RNA sequences of successive generations of the same RNA viruses that exhibited similar environmental conditions. The technique introduced in this paper predicts the last RNA sequence based on the previous RNA generation sequence. Following that, the predicted RNA sequence is compared the actual RNA sequence in order to validate the ability of the machine learning technique to predict the RNA evolution and this prediction accuracy. This comparison results in a percentage which is calculated based on the number of matched nucleotides between both the predicted and actual RNA sequence versus the total number of nucleotides in the sequence. 
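As a purely illustrative aid (not part of the published method), this matching percentage can be computed directly from two aligned sequences. The following minimal Python sketch assumes the sequences are already aligned and of equal length; the example sequences are invented.

def prediction_accuracy(predicted, actual):
    # Percentage of positions at which the predicted nucleotide equals the actual one.
    assert len(predicted) == len(actual), "sequences must be aligned and of equal length"
    matches = sum(1 for p, a in zip(predicted, actual) if p == a)
    return 100.0 * matches / len(actual)

print(prediction_accuracy("ATGGGTTCCAA", "ATGGCTTCCAA"))  # one mismatch out of 11 -> ~90.9 %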
One of the main important methods of this technique is that it extracts the rules governing the past mutations, and hence, is able to infer the possible future mutations. These rules show the effect of a set of nucleotides on the mutation of their neighboring nucleotide. This technique visualizes these rules to allow an integrated analysis of the mutations occurring in successive generations of the RNA virus. Besides using the Rough set theory in our technique, a traditional machine learning technique is utilized which is neural networks in order to clarify the prediction process and validate our results. Finally, we present a way to reform the RNA alignments of a set of successive generations of the same virus. This reformation step is important in order to fit the input requirements for any machine learning technique. This paper presents a proof of concept by applying this technique on a set of successive generations of the Newcastle Disease Virus (NDV) from two different countries, Korea and China [17, 18]. In these sets, the percentage of nucleotides that exhibits variation from a generation to another is 57–65 % for the two used sets of sequences. The proposed techniques percentage accuracy in the prediction of the varied nucleotides in the tested sequence in the last generation are 68–76 %. Although these results are not statistically significant at this point, it still however proves the applicability of the proposed technique. It is worth noting that the learning and accuracy of prediction of this technique increases as the number of instances in the data set increases. The rest of the paper is organized as follows. Section 2 presents the related work in applying machine learning techniques in genetic problems, as well as the proposed technique to solve these problems. The experimental work and discussion appear in Sections 3 and 4, and finally, the conclusion is presented in Section 5. Predicting techniques have been utilized in the field of genetics for many years, and have been geared towards different directions. One of these directions is the detection of the resistance of the virus to drugs after its mutation [19], in which, machine learning techniques are focused on learning the rules characterizing the aligned sequences that are resistant to drugs. The rules will be later on used to detect the drug resistance gene sequences amongst a set of testing sequences. The training phase of these techniques works by having each protein sequence represented as a feature vector and fed as the input to this technique [20]. An example of these machine learning techniques are support vector machine (SVM) and neural networks, which can be trained on data sets of drug-resistant and non-resistant DNA sequences of virus population. In the training phase, these techniques learn the rules of classifying new generations of the virus a being drug-resistant or not [21]. However, these techniques are black box techniques, that cannot be used to infer any information about the rules used in this classification. Another disadvantage is that these techniques utilize 20 bit binary numbers instead of characters as a representation for 20 different amino acids condons. This increases the size of the input to the used algorithm, which in return increases the complexity of the classification process. 
However, the authors in [22] introduced a technique that uses characters instead of numbers as the input, and they utilize a decision tree in order to provide direct evidence on the drug resistance genes through a set of rules. Their technique trains and tests its efficiency on 471 isolates against 14 antiviral drugs, creating a decision tree for each drug. The input of this decision tree is the isolate sequence, in which each position is one of 20 naturally occurring amino acids represented as characters. The results of this technique conclude whether the virus is a drug-resistant virus or not. An example of this generated decision tree is: if the codon at position 184 is for methionine (M) and the codon at position 75 for alanine (A), glutamic acid (E) or threonine (T), then the virus carrying this gene is resistant to the (3TC) antiviral drug. Another example is the detection of whether a point mutation in a cell is transmitted to the offspring or not, i.e., germline mutation vs. somatic mutation [23]. Another research direction is the prediction of the secondary structure of the RNA/DNA of the organism in the generation post-mutation. The ribonucleic acid (RNA) molecule consists of a sequence of ribonucleotides that determines the amino acid sequence in the protein. The primary structure of the RNA molecule is the linear form of the nucleotide sequence. The nucleotides can be paired based on specific rules: adenine (A) pairs with uracil (U) and cytosine (C) with guanine (G). Base pairs can occur within single-stranded nucleic acids. The RNA sequence is folded into a secondary structure in which pairs of bases are bonded together. This structure contains a set of canonical base pairs, whose variation is considered a form of mutation that can be predicted. Several studies have focused on automating the RNA sequence folding [24] and predicting the secondary structure form [25]. The probability of the generation of any secondary structure is inversely proportional to the energy of this structure. This energy is modeled based on extensive thermodynamic measurements [26]. Applications like "RNAMute" analyze the effects of point mutations on RNA secondary structure based on thermodynamic parameters [27]. The third research direction is the prediction of single nucleotide variants (SNV) at each locus, i.e., nucleotide location. The existence of SNVs is identified from the results of next-generation sequencing (NGS) methods [6]. NGS is capable of typing SNPs, i.e., mutations that produce a single base-pair change in the DNA sequence. For example, suppose the two sequenced DNA fragments from different generations are AAGCCTA and AAGCTTA. Notice that the fifth nucleotide, C, in the first fragment varied to a T in the second fragment. This genotype variation represents the mutation in the child genomes of the next generation. The steps of allocating the SNVs start with collecting a set of aligned sequences from NGS readings analyzed against a reference sequence. At each position i in the genome data, the number of reads a_i that match the reference genome and the number of reads b_i that do not match the reference genome are counted. The total number of reads, the depth, is given by N_i = a_i + b_i. A naïve approach to detect the SNV locations is to find the locations i ∈ [1, T] whose fraction f_i = a_i / N_i is less than a certain threshold [28].
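The naïve thresholding rule just described can be sketched in a few lines of Python; the sequences and the threshold value below are invented for illustration and are not taken from the study.

def naive_snv_positions(reference, generations, threshold=0.9):
    # Flag positions i where the fraction f_i = a_i / N_i of sequences matching the reference
    # falls below the chosen threshold.
    candidates = []
    for i, ref_base in enumerate(reference):
        a_i = sum(1 for seq in generations if seq[i] == ref_base)  # matching reads
        n_i = len(generations)                                     # depth N_i = a_i + b_i
        if a_i / n_i < threshold:
            candidates.append(i)
    return candidates

reference = "AAGCCTA"
generations = ["AAGCCTA", "AAGCTTA", "AAGCTTA"]
print(naive_snv_positions(reference, generations))  # -> [4], the C -> T position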
Although this approach is accurate for a large number of generations, this is usually not the case due to the low number of collected sequences. Moreover, it only predicts the existence of the SNV; it does not predict the future sequence. A model is proposed in [29] to infer the genotype at each location. The model characterizes each column in the alignment as one of three states: the first, homozygous, type is where all genotypes are the same as the reference allele; the second type is where all genotypes are the same as the non-reference allele; and the last type is a mixture of reference and non-reference alleles. This model calculates the posterior probability P(g | x, z) of the genotype at position u in the current sequence z, given a reference sequence x and the sequence z. The genotype with the highest posterior probability (MAP) is selected. The detection of the third state is based on the SNVMix model [30], which uses the Bayesian theorem and MAP to calculate the posterior probabilities for a mixed genotype g_m. In this case, P(g_m | a_i) can be calculated as shown in Eq. 1, where a_i is the number of reads that match the reference at location i.
$$ P(g_{m} \mid a_{i}) = P(a_{i} \mid g_{m}) \cdot P(g) $$
The probability P(a_i | g_m) is calculated using the binomial distribution dbinom in the case of a mixed genotype [30], as shown in Eqs. 2, 3, and 4. The parameter \(\mu_{g_m}\) of dbinom takes the value 0.5, where the probability of the genotype matching or not matching is equal, and N_i represents the total number of generations.
$$ P(a_{i} \mid g_{m}) \approx \mathrm{dbinom}(a_{i} \mid \mu_{g_{m}}, N_{i}) $$
$$ P(a_{i} \mid g_{m}) = \binom{N_{i}}{a_{i}} \mu_{g_{m}}^{a_{i}} \left(1 - \mu_{g_{m}}\right)^{N_{i} - a_{i}} $$
$$ P(a_{i} \mid g_{i\{ab\}}) = \binom{N_{i}}{a_{i}} \frac{1}{2^{N_{i}}} $$
These three research directions focus on predicting the activity of the newly evolved viruses. A different direction is the prediction of the rates of variation of the nucleotides in the RNA sequences of successive generations of the virus [31]. In this direction, models are introduced to analyze the historical patterns and variation mechanism of the virus and detect the rates of variation of each point substitution of its RNA sequence. These models are characterized as either neighbor-dependent or neighbor-independent. The contribution in this paper focuses on the prediction of the RNA sequence of a newly evolved virus by detecting the possible point mutations. The nucleotide substitutions of the coding regions in this Newcastle virus occurred frequently [32]. These substitutions and the dependency between nucleotides are captured to form a set of rules that describes the evolution of the RNA virus. The proposed algorithm and discussion The focus of this paper is to analyze the changes and mutations that occur after each generation of the virus. This analysis is done through monitoring these changes and detecting their patterns. The mutations of RNA viruses are characterized by high drug resistance; however, these mutations can be predicted by applying machine learning techniques in order to extract their rules and patterns. The proposed algorithm is applied to a data set of aligned RNA sequences observed over a period of time. This data was collected and presented in previous research [17, 18], in which all animal procedures performed were reviewed, approved, and supervised by the Institutional Animal Care and Use Committee of Konkuk University.
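As a concrete illustration of Eqs. (2)–(4) above, the mixed-genotype likelihood can be evaluated directly from the binomial distribution. The short Python sketch below uses invented counts and is not the SNVMix implementation.

from math import comb

def mixed_genotype_likelihood(a_i, n_i, mu=0.5):
    # Binomial likelihood of a_i reference-matching observations out of n_i (Eq. 3).
    return comb(n_i, a_i) * (mu ** a_i) * ((1 - mu) ** (n_i - a_i))

# With mu = 0.5 this reduces to Eq. (4): C(N_i, a_i) / 2^{N_i}.
print(mixed_genotype_likelihood(a_i=6, n_i=10))  # ~0.205
print(comb(10, 6) / 2 ** 10)                      # same value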
This space of potential virus mutations provides a proper data set required for the computational methods used in the algorithm. The preprocessing is started by the alignment of RNA sequence for the purpose of predicting the evolution of the virus based on the gathered sequences. The steps performed are the mining and learning the rules of the virus mutation, in order to predict the mutations to help creating new drugs. The input data set is a set of aligned DNA sequences. The RNA isolates are ordered from old to new according to the time of getting the virus and isolating the RNA. The machine learning techniques used in this step are neural networks, and a proposed technique that is based on the rough set theory. An important step in the proposed technique is building the decision matrix required for rule extraction, which is based on the idea that if the value of the attribute has no effect on describing the category of the object, then this attribute can be excluded from the rule set [33]. An important clarification is that every nucleotide in both the neural networks and proposed technique act as the target class label required in the classification. These target nucleotides will have one of four different genotype values that will be predicted by the classifier. The preprocessing step includes the feature selection which is important in decreasing the processing cost by the removal unnecessary nucleotides. These nucleotides are those that do not change during the different generations of the virus, hence, they do not add any information in the classification/prediction step. This is due to the fact that not changing across RNA generations imply that they have no effect on the mutated nucleotides. Hence, their existence will have no effect, and will moreover deteriorate the net accuracy prediction result. Although this step is important as a preprocessing step in the neural network methods, it can be avoided in the proposed rough set based technique since the technique removes the non-required nucleotides internally. Neural network technique Neural networks technique can be used to predict different mutations in RNA sequences. The first step of this technique is to specify the structure of both the input and output. The number of nodes in the input and in the output of the neural networks is the four times the number of nucleotides in the RNA sequence. Hence, each nucleotide will be scaled to 4 binary bits, i.e., if the nucleotide is "A", then the corresponding bit will be 1. However, it is not correct to transform the letters to numerals, for example the nucleotide values of [A, C, G, T] to [0, 1, 2, 3]. The reason being is that the distance between each nucleotide is the same, while the distance between [0–3] is not the same as that between [0–1]. The effect of this can be demonstrated by the following example: when applying neural networks to the numerically transformed sequence, the results show unsuccessful results. This is because the neural networks technique changes the values of the output based on the nucleotide values. Hence, the backward propagation in the case of moving from the value T to the value A will not be the same as moving from T to G. This is considered a mistake in the learning phase of the neural network, and it will lead to incorrect classification and prediction. The same case occurs when applying support vector machine and Bayesian belief networks. 
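To make the scaling argument concrete, the following minimal Python sketch shows a 4-bit (one-hot) encoding of nucleotides as described above; the mapping order A, C, G, T is an assumption made only for this example.

ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0], "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def encode(sequence):
    # Scale a nucleotide string into a flat binary vector: four input nodes per nucleotide.
    return [bit for base in sequence for bit in ONE_HOT[base]]

def decode(bits):
    bases = "ACGT"
    return "".join(bases[bits[i:i + 4].index(1)] for i in range(0, len(bits), 4))

segment = "ATGGG"
vector = encode(segment)      # 20 input nodes for a 5-nucleotide segment
assert decode(vector) == segment
print(len(vector), vector)

Unlike a numeric mapping of [A, C, G, T] to [0, 1, 2, 3], this representation keeps all nucleotides equidistant, which is the property the learning phase relies on.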
The training of the neural network will occur first by considering every scaled RNA segment s one training input to the neural network. The desired output corresponding to the input DNA segment is the next successive scaled RNA segment from the next generation of the training data set. As shown in Fig. 1, for every input/output sequence, the weights are continuously updated until accuracy exceeds 70 %. This accuracy is calculated according to the number of correct predicted nucleotides to the total number of nucleotides in the sequence. After the accuracy exceeds 70 %, the next input is the output DNA sequence in the current step. The last RNA segment is left for testing. The learning of the neural network from the input data set The disadvantage in using a neural network technique or support vector machine is the scaling of each nucleotide in the DNA sequence to four input states. This scaling process increase the computational complexity of the technique. On the other hand, the limited number of input instances could negatively affect the accuracy of the prediction process. Finally, the extraction of the rules in this technique is not possible, and hence, the prediction process is not feasible. Rough set gene evolution (RSGE) proposed technique This paper proposed a new algorithm for solving the evolution prediction problem that is based on the input time series data set. Each RNA sequence is mutated, i.e., evolution passage, to the next RNA sequence in the data set, as these sequences are sorted from old to recent dates. For each iteration in this learning phase of this algorithm, the input is an RNA sequence and the output is the next RNA sequence in the data set. The learned output is not the classification of the RNA virus; however, it is the RNA virus after mutation. Because, techniques like support vector machine, neural networks, and Bayesian belief networks have to deal with data in a numerical form, the proposed technique deals with alphabetical and numerical data in the same fashion. The purpose of this technique is to infer the rules that governs certain value [A,C,G o r T] for each nucleotide. At this point, the training applied will consider each nucleotide as a target class of four values. The input that leads to one of the values of the target class is the sequence before the sequence containing the current value of the target class. The machine learning algorithm will learn what input will produce which output accordingly. The rules learned from the used machine learning algorithm will be used to predict the generated nucleotide corresponding to the input. The rule will be in the form of short sequences of nucleotides genotype and location that govern the mutation of a nucleotide from certain genotype to another. The number of iterations will be the number of nucleotides in the RNA sequence. After each iteration, the required rule for the nucleotide x corresponding to the current iteration will be extracted. In each iteration, the algorithm detects the value of nucleotide x in the sequence in the data set, as well as the value of this same nucleotide in the next sequence x ′. The following step in the algorithm is the detection of the values of all nucleotides in the preceding sequences to the ones that include nucleotides x and x ′. This step is applied to detect the sequence of the previous nucleotide values and leads to the value of the nucleotide under investigation. This is illustrated in Figs. 2 and 3. In Fig. 
2, the sequence [C G G G A T] precedes nucleotide x at position i of value A, and the sequence [C T G A A C] precedes nucleotide x′ at position i of value A. The nucleotide x in the position i corresponding to the current iteration does not change from sequence j+1 to sequence j+2. In this case, if the nucleotides in the sequence preceding the one corresponding to the nucleotide x do not change, then these nucleotide values will be included in the rule of nucleotide x, otherwise they will be completely excluded. Hence, for Fig. 2, the extracted rule for the value A of the nucleotide at position i will be [A:i, C:i+1, G:i+3, A:i+5]. A different case is presented in Fig. 3, where the value of the nucleotide at position i is changed from value A to value C. In this case, if the nucleotides in the sequence preceding the one corresponding to the nucleotide x change, then these nucleotide values will be included in the rule of nucleotide x, otherwise they will be completely excluded. Hence, for Fig. 3, the extracted rule for the value C of the nucleotide at position i will be [A:i, T:i+2, A:i+4, C:i+6]. Fig. 2 Nucleotide i for iteration i in the proposed algorithm; the nucleotide at position i is the same (not changed). Fig. 3 Nucleotide i for iteration i in the proposed algorithm; the nucleotide at position i is not the same (changed). The reason behind using this methodology to extract the rules is that, for each iteration of the algorithm, two main cases are considered. The first case is that the value of the iterated nucleotide corresponding to this iteration does not change. In this case, if the nucleotides neighboring this iterated nucleotide do not change, this indicates that the values of these nucleotides are linked to, and lead to, the value of the iterated nucleotide. If the neighboring nucleotides did change, then the variation of these values does not affect the iterated nucleotide, and hence they can simply be removed from the extracted rule. The second case is the opposite of the previous one, where the existing nucleotides in the RNA sequence cause the change of the genotype of a specific nucleotide in the following generation. In this case, the extracted rule of the genotype at this nucleotide location will include the set of neighboring nucleotides that changed, while the unchanged behavior of some nucleotides means that they are unimportant and have no effect on changing the value of the iterated nucleotide (a minimal illustrative sketch of this procedure is given below, after the description of the data sets). As shown in Algorithm 1, if the number of sequences is N, and the number of nucleotides is K, the computational complexity of the calculations is as follows:
$$ \text{Algorithm complexity} = O(K \cdot N \cdot (2K)) = O(N \cdot K^{2}) $$
The analysis of RNA mutations requires gathering and preparing a set of aligned RNA sequences that go through different mutations over a long period of time. A set of time-series successive isolates of the Newcastle virus RNA was collected from two different countries, China and South Korea [17, 18]. The GenBank accession numbers of NDV isolates recovered from live chicken markets in Korea in the year 2000 are AY630409.1-AY630436.1, and from healthy domestic ducks on farms EU547752.1-EU547764.1. The total number of isolates for this data set is 22, and each isolate is 200 nucleotides long. The accession numbers of sequenced isolates extracted from chickens in China in the years 2011 to 2012 are KJ184574-KJ184600 [34]. The total number of isolates for this data set is 45, and each isolate is 240 nucleotides long.
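The sketch referred to above is given here. It implements one possible reading of the two rule-extraction cases: neighbouring positions are kept in the rule when their change/no-change behaviour matches that of the target nucleotide. The sequences are invented, and the exact neighbourhood used by the authors may differ.

def extract_rule(seq_a, seq_b, i):
    # Rule for the genotype observed at position i after the passage from seq_a to seq_b.
    target_changed = seq_a[i] != seq_b[i]
    rule = []
    for k, (a, b) in enumerate(zip(seq_a, seq_b)):
        if k == i:
            continue
        neighbour_changed = (a != b)
        # Case 1 (target unchanged): keep neighbours whose genotype also did not change.
        # Case 2 (target changed):   keep neighbours whose genotype also changed.
        if neighbour_changed == target_changed:
            rule.append((k, a))
    return seq_b[i], rule

genotype, rule = extract_rule("CGGGATA", "CTGAACA", i=6)
print(genotype, rule)  # 'A' with neighbours (0, 'C'), (2, 'G'), (4, 'A'), as in the Fig. 2 example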
Each set of isolates is listed and sorted according to the date of extraction for time-series analysis of the evolution of the NDV virus. The experiments are applied only to a partial F gene sequence of NDV from the GenBank record. The virus RNA sequences were monitored and collected retrospectively at regular intervals from similar animal types. The intervals between successive RNA sequences in the Chinese dataset are short relative to the Korean data set. The difference between both types of intervals affects neither the analysis process nor the accuracy of the prediction. The extracted data examined for the Korean and Chinese datasets are represented by two different regions of the viral genome with different lengths (200 and 240 nucleotides). It is important to clarify that the two input data sets cannot be merged, because each input set of segments is aligned separately. It is therefore not possible here to assess the prediction performance of the proposed models on the Chinese dataset using the Korean dataset for training. Figure 4 shows a segment of these aligned RNA sequences, in which we select only the columns that contain missing values. It is clear in this figure that some nucleotide columns pass through mutation over the time period under examination, while the other nucleotides do not show any change during it. Applying any machine learning technique is composed of two main steps: the first is training on the first part of the input data set, while the second step uses the rest of the data for testing. The classification accuracy percentage is the ratio of correctly predicted nucleotides to the total number of nucleotides in the sequence. Aligned gene sequence of nucleotides The Chinese data set shows that 33 % of the total number of nucleotides have been mutated, and this percentage in the Korean data set is 43 %. The rate of genome mutation over the time period applied shows a variable number of mutations. Neural network (NN) results In order to apply NN to the aligned time-series RNA sequences, only a part of the sequence will be considered. The partial training of the RNA sequence is applied to ensure a reasonable training execution time. Applying neural networks to this number of target class labels would lead to a very high processing complexity. The training phase contains only 20 nucleotides, where each nucleotide will be scaled to four input nodes. This leads to a data set of 80 input nodes and 80 output nodes. For the Korean input data set, the input for the training is the first 20 out of 22 instances. The 20 target output instances for these 20 input instances start from the second instance in the input data set and end at the 21st instance. The testing will have the 21st RNA sequence as the input to the neural networks and the 22nd RNA sequence as the target output. The results show that after 3,743,901 learning cycles of back-propagation, the validation accuracy reaches 70 %, as shown in Fig. 5. Neural network classification results Rough set gene evolution (RSGE) results The validation of the proposed rough set gene evolution (RSGE) technique is achieved through testing the total number of nucleotides. For the Korean input data set, where the total number of nucleotides is 200, the number of correct nucleotide matches is 148, which corresponds to a 74 % classification accuracy. Also, only four nucleotides have shown incorrect matches, which corresponds to 2 % of all nucleotides.
48 nucleotides have shown no matching results, which indicates that none of the four rules of each of these nucleotides is applied. These results show the predicted and actual target RNA sequence of nucleotides. For the Korean data set, Newcastle disease virus strain Kr-XX/YY [1982–2012]: Actual : ATGGGTTCCAAATCTTCTACCAG Predicted : ATGGGNNCCANANCTTCTACCAN For the Chinese input data set, where the total number of nucleotides is 240, the number of correct nucleotide matches is 180, which corresponds to 75 % classification accuracy. Also, only four nucleotides, i.e., 1.6 %, have shown incorrect matches. And 56 nucleotides have shown no matching results, which indicates that none of the four rules of each of these nucleotides is applied. The results for the Chinese data set, Newcastle disease virus isolate JS XX XX Ch [2011–2012], are: Actual : ATGGGCTCCAAACCTTCTACCAG Predicted : TGGGCTCCAAACNTTCTACCNG A sample of the generated rules is given as follows: For Nucleotide P131: If Genotype is A then P47=C, P57=A If Genotype is G then P23=A If Genotype is C then P6=T, P118=C If Genotype is T then P6=G, P118=A This sample is composed of two base nucleotides only in the RNA sequence, which are 131 and 101. The first rule shows that the nucleotide at position 131 will take the genotype value "A" in the following generation if the nucleotide at position 47 in the current generation is 'C' and the one at position 57 is of type "A". These rules show the existence of a correlation between the three nucleotides, i.e., those at positions [131, 47, and 57] in the first rule and [101, 6 and 118]. The genotype values of some specific nucleotides could affect the alteration/mutation of the RNA sequence. The input Chinese and Korean data sets contain different genotypes, AA, AB, and BB. The nucleotides of the genotypes that have the value AA or BB can be excluded from the resulting rule set. These genotypes need not be predicted, as their values are the same over all the generations. Table 1 shows the extracted rule sets of the Chinese data set for the AB nucleotides only. A similar table of rules is generated for the Korean data set. Table 1 AB genotype rules for the Chinese data set Finally, the correlation between the nucleotides can be visualized after extracting the prediction rules from the RSGE technique. This form of exploring the effect of nucleotides on one another can provide a better understanding of the mutation mechanism existing in the virus's RNA. An example of this correlation visualization is shown in Fig. 6. In this figure, when the genotype of the nucleotide at position N37 is C, the genotype of the nucleotide at position N33 is T. And when the genotype of the nucleotide at position N33 is T, the genotypes of the nucleotides at positions N53 and N68 are T and G, respectively. The same is applied to the rules generated for the Korean data set, as shown in Fig. 7. Nucleotide correlation in the Chinese data set Nucleotide correlation in the Korean data set NN vs. RSGE Figure 8 demonstrates a comparison between the results of using neural networks versus the proposed rough set gene evolution prediction technique in the classification of both the Chinese and Korean data sets. The results show a good performance of the RSGE in comparison to NN. As the data set increases, the preprocessing increases, and hence the computational complexity of the neural networks increases and the classification accuracy decreases.
On the other hand, the error in the classification using the proposed RSGE technique is approximately 2 % in 77 % of the sequence. The remaining 23 % could not be predicted by the technique and were replaced by the genotype in the previous generation. Prediction accuracy for Korean and Chinese data sets The contribution of machine learning techniques to RNA mutation has so far been limited to the prediction of the activity of the virus of the resulting RNA. This work paves the way to a new horizon where the prediction of the mutations, such as virus evolution, is possible. It can assist the design of new drugs for possible drug-resistant strains of the virus before a possible outbreak. It can also help in devising diagnostics for the early detection of cancer and possibly for the early start of treatment. This work studies the correlation between the nucleotides in RNA, including the effect of each nucleotide in changing the genotypes of other nucleotides. The rules of these correlations are explored and visualized for the prediction of the mutations that may appear in the following generations. The prediction rules are extracted by a proposed technique based on RSGE, and it is trained on two data sets extracted from two different countries. This work proves the existence of a correlation between the mutation of nucleotides, and successfully predicts the nucleotides in the next generation in the testing parts of the two data sets used with a success rate of 75 %. The proposed rough set (RSGE) based technique shows a better prediction result than the neural networks technique, and moreover, it extracts the rules used in the prediction. SF Elena, R Sanjuán, Adaptive value of high mutation rates of RNA viruses: separating causes from consequences. J. Virol. 79(18), 11555–11558 (2005). B Wilson, NR Garud, AF Feder, ZJ Assaf, PS Pennings, The population genetics of drug resistance evolution in natural populations of viral, bacterial, and eukaryotic pathogens. Mol. Ecol. 25:, 42–66 (2016). T Baranovich, S Wong, J Armstrong, H Marjuki, R Webby, R Webster, E Govorkova, T-705 (Favipiravir) induces lethal mutagenesis in influenza A H1N1 Viruses In Vitro. J. Virol. 87(7), 3741–3751 (2013). L Loewe, Genetic mutation. Nat. Educ. 1(1), 113 (2008). BE Stranger, ET Dermitzakis, From DNA to RNA to disease and back: the 'central dogma' of regulatory disease variation. Hum. Genomics. 2(6), 383–390 (2006). J Shendure, H Ji, Next-generation DNA sequencing. Nat. Biotechnol. 26:, 1135–1145 (2008). J Xu, HC Guo, YQ Wei, L Shu, J Wang, JS Li, SZ Cao, SQ Sun, Phylogenetic analysis of canine parvovirus isolates from Sichuan and Gansu provinces of China in 2011. Transbound. Emerg. Dis. 62:, 91–95 (2015). E Capriotti, P Fariselli, I Rossi, R Casadio, A three-state prediction of single point mutations on protein stability changes. BMC Bioinformatics. 9(2), S6 (2008). E Cilia, S Teso, S Ammendola, T Lenaerts, A Passerini, Predicting virus mutations through statistical relational learning. BMC Bioinformatics. 15:, 309 (2014). doi:http://dx.doi.org/10.1186/1471-2105-15-309. M Lotfi, Zare-Mirakabad F, Montaseri S, RNA secondary structure prediction based on SHAPE data in helix regions. J. Theor. Biol. 380:, 178–182 (2015). TH Chang, LC Wu, YT Chen, HD Huang, BJ Liu, KF Cheng, JT Horng, Characterization and prediction of mRNA polyadenylation sites in human genes. Med. Biol. Eng. Comput. 49(4), 463–72 (2011).
M Kusy, B Obrzut, J Kluska, Application of gene expression programming and neural networks to predict adverse events of radical hysterectomy in cervical cancer patients. Med. Biol. Eng. Comput. 51(12), 1357–1365 (2013). A Hobolth, A Markov chain Monte Carlo expectation maximization algorithm for statistical analysis of DNA sequence evolution with neighbor-dependent substitution rates. J. Comput. Graph. Stat. 17(1), 138–162 (2008). PF Arndt, T Hwa, Identification and measurement of neighbor-dependent nucleotide substitution processes. Binformatics. 21(10), 2322–2328 (2005). NM Ferguson, RM Anderson, Predicting evolutionary change in the influenza A virus. Nat. Med. 8:, 562–563 (2002). DJ Smith, AS Lapedes, JC de Jong, TM Bestebroer, GF Rimmelzwaan, Mapping the antigenic and genetic evolution of influenza virus. Science. 305(5682), 371–376 (2004). K-S Choi, E-K Lee, W-J Jeon, J-H Kwon, J-H Lee, H-W Sung, Molecular epidemiologic investigation of lentogenic Newcastle disease virus from domestic birds at live bird markets in Korea. Avian Dis. 56(1), 218–223 (2012). Z-M Qin, L-T Tan, H-Y Xu, B-C Ma, Y-L Wang, X-Y Yuan, W-J Liu, Pathotypical characterization and molecular epidemiology of Newcastle disease virus isolates from different hosts in China from 1996 to 2005. J. Clin. Microbiol. 46(4), 601–611 (2008). Y Choi, GE Sims, S Murphy, JR Miller, AP Chan, Predicting the functional effect of amino acid substitutions and indels. PLoS One. 7(10), e46688 (2012). doi:http://dx.doi.org/10.1371/journal.pone.0046688. D Wang, B Larder, Enhanced prediction of lopinavir resistance from genotype by use of artificial neural networks. J. Infect. Dis. 188(11), 653–660 (2003). ZW Cao, LY Han, CJ Zheng, ZL Ji, X Chen, HH Lin, YZ Chen, Computer prediction of drug resistance mutations in proteins. Drug Discov. Today. 10(7), 521–529 (2005). N Beerenwinkel, B Schmidt, H Walter, R Kaiser, T Lengauer, D Hoffmann, K Korn, J Selbig, Diversity and complexity of HIV-1 drug resistance: a bioinformatics approach to predicting phenotype from genotype. Proc. Natl. Acad. Sci. USA. 99(12), 8271–8276 (1999). J Ding, A Bashashati, A Roth, A Oloumi, K Tse, T Zeng, G Haffari, M Hirst, MA Marra, A Condon, S Aparicio, SP Shah, Feature-based classifiers for somatic mutation detection in tumour-normal paired sequencing data. Bioinformatics. 28(2), 167–75 (2012). D Lai, JR Proctor, IM Meyer, On the importance of cotranscriptional RNA structure formation. RNA. 19(11), 1461–1473 (2013). DH Mathews, WN Moss, DH Turner, Folding and finding RNA secondary structure. Cold Spring Harb Perspect Biol. 2(12), a003665 (2010). doi:http://dx.doi.org/10.1101/cshperspect.a003665. IL Hofacker, M Fekete, PF Stadler, Secondary structure prediction for aligned RNA sequences. J. Mol. Biol. 319(5), 1059–1066 (2002). D Barash, A Churkin, Mutational analysis in RNAs: comparing programs for RNA deleterious mutation prediction. Brief Bioinform.12(2), 104–114 (2011). R Morin, et al, Profiling the HeLa S3 transcriptome using randomly primed cDNA and massively parallel short-read sequencing. BioTechniques. 45(1), 81–94 (2008). doi:http://dx.doi.org/10.2144/000112900. R Goya, MG Sun, RD Morin, G Leung, G Ha, KC Wiegand, J Senz, A Crisan, MA Marra, M Hirst, D Huntsman, KP Murphy, S Aparicio, SP Shah, SNVMix: predicting single nucleotide variants from next-generation sequencing of tumors. Bioinformatics. 26(6), 730–736 (2010). H Li, J Ruan, R Durbin, Mapping short DNA sequencing reads and calling variants using mapping quality scores. Genome Res.18:, 1851–1858 (2008). 
doi:http://dx.doi.org/10.1101/gr.078212.108. J Berard, L Guéguen, Accurate estimation of substitution rates with neighbor-dependent models in a phylogenetic context. J. Systmatic Biol. 61(3), 510–21 (2012). GM Ke, KP Chuang, CD Chang, MY Lin, HJ Liu, Analysis of sequence and haemagglutinin activity of the HN glycoprotein of New-castle disease virus. Avian Pathol.39(3), 235–244 (2010). doi:http://dx.doi.org/10.1080/03079451003789331. M Bal, Rough sets theory as symbolic data mining method: an application on complete decision table. Inform. Sci. Lett. 2(1), 35–47 (2013). J-Y Wang, W-H Liu, J-J Ren, P Tang, N Wu, H-J Liu, Complete genome sequence of a newly emerging Newcastle disease virus. Genome Announc. 1(3), 196–13 (2013). We thank all the members of the group of scientific research in Egypt, Cairo university for their continuous support and encouragement to accomplish this work. Besides, we thank the faculty of computer science in the British university in Egypt for providing us with all the required support. A special thanks for Mr. Saleh Esmate for helping us in gathering the data set of this work. British University in Egypt (BUE), Cairo, Egypt Mostafa A. Salama & Ahmad Mostafa Cairo University, Cairo, Egypt Aboul Ella Hassanien Scientific Research Group in Egypt, (SRGE), Cairo, Egypt Mostafa A. Salama & Aboul Ella Hassanien Mostafa A. Salama Ahmad Mostafa Correspondence to Mostafa A. Salama. The authors (Mostafa A. salama, Aboul-Ellah Hassnien and Ahmad Mostafa) of this manuscript clarify that they have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria, educational grants, participation in speakers' bureaus, membership, employment, consultancies, stock ownership, or other equity interest, and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript. All authors contributes equally in this work. All authors read and approved the final manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Salama, M.A., Hassanien, A.E. & Mostafa, A. The prediction of virus mutation using neural networks and rough set techniques. J Bioinform Sys Biology 2016, 10 (2016). https://doi.org/10.1186/s13637-016-0042-0 Received: 13 July 2015 Gene prediction
CommonCrawl
Social networks help to infer causality in the tumor microenvironment Isaac Crespo1, Marie-Agnès Doucey2 & Ioannis Xenarios1 Networks have become a popular way to conceptualize a system of interacting elements, such as electronic circuits, social communication, metabolism or gene regulation. Network inference, analysis, and modeling techniques have been developed in different areas of science and technology, such as computer science, mathematics, physics, and biology, with an active interdisciplinary exchange of concepts and approaches. However, some concepts seem to belong to a specific field without a clear transferability to other domains. At the same time, it is increasingly recognized that within some biological systems—such as the tumor microenvironment—where different types of resident and infiltrating cells interact to carry out their functions, the complexity of the system demands a theoretical framework, such as statistical inference, graph analysis and dynamical models, in order to asses and study the information derived from high-throughput experimental technologies. In this article we propose to adopt and adapt the concepts of influence and investment from the world of social network analysis to biological problems, and in particular to apply this approach to infer causality in the tumor microenvironment. We showed that constructing a bidirectional network of influence between cell and cell communication molecules allowed us to determine the direction of inferred regulations at the expression level and correctly recapitulate cause-effect relationships described in literature. This work constitutes an example of a transfer of knowledge and concepts from the world of social network analysis to biomedical research, in particular to infer network causality in biological networks. This causality elucidation is essential to model the homeostatic response of biological systems to internal and external factors, such as environmental conditions, pathogens or treatments. Despite their differences in nature, social and biological networks are self-organising, emergent, and complex, and their respective analyses have common features: both focus on local and global patterns of connectivity, search for influential entities, and aim to model the network dynamics. Some concepts resulting from the study of social networks such as 'popularity', which refers to node centrality, can be directly transferred to the study of biological networks, where popular nodes are referred to as 'hubs', which tend to be essential [1]. However, there exist other concepts that seem to be more specific to social studies. Social network analysis has invested some efforts in describing network interactions at the so-called dyadic and triadic levels or the relationships between two (or, respectively, three) individuals, with the development of concepts such as social equality, balance, transitivity, and mutuality [2], which seem to be more specific to the realm of social studies. In particular, at the dyadic level, mutuality refers to the reciprocity between two individuals implicit in some types of social interactions, as for example the influence between individuals. Hangal et al. [3] proposed to model the influence of individual A over B in a given social graph as the fraction of B's actions due to A, and the opposite for the influence of B over A. This influence is based on social interactions that imply a cost or investment for the people involved, and it is frequently asymmetric. 
For instance, keeping B posted by A requires time and effort that is considered an investment of A in B, and it reflects the fact that B has a certain influence on A, which can be very different than the influence of A on B. In some social graphs, the link between two individuals does not have directionality, as for example in an authorship graph, where authors are connected by shared publications (see Fig. 1). Hangal et al. [3] showed that this kind of graph can be derived in a bidirectional network of influence by assuming asymmetric mutuality pairwise, and demonstrated that such a derived network was more convenient for global social searches than methods based on the shortest path. Modeling human interactions as an influence network. a Undirected and weighted social network where nodes represent three students and their advisor and edges represent shared publications. Edge weights represent the number of shared publications. b Derived bidirectional influence network. Weights represent the influence of the source on the target. The influence is calculated by dividing shared publication between source and target by other shared publications of the target with third parties The construction of a directed network of influence based on an initially undirected graph is very attractive for biologists because in biomedical research there is an abundance of high-throughput experimental data that allows the construction of undirected correlation networks connecting different types of biological entities, such as genes or proteins, but the predictive power of these networks is limited due to their lack of causality or directionality. Based on the definition of influence proposed by Hangal et al. [3], Penrod et al. [4] developed a method for drug target discovery in the context of cancer therapy and showed that influential genes tend to be essential for the proliferation and survival of breast cancer cells, and that gene influence differs between untreated tumors and residual tumors that have adapted to a drug treatment. In order to calculate the investments between two genes, Penrod et al. [4] took the values of their partial correlation derived from expression data. It is worth noting here that despite Penrod et al. [4] having shown the utility of deriving the influence network from the co-expression information to identify genes essential for proliferation and survival of breast cancer cells, the underlying regulatory mechanisms involving these essential genes were not elucidated. Using the concepts of investments and influencing mutuality from the social network analysis world, here we propose an approach to infer causality in co-expression networks derived from solid tumor expression data. In particular, we construct co-expression networks and infer cause-effect relationships between genes encoding cell–cell communication molecules as a model of the tumor microenvironment (TME) in breast, ovarian, and lung cancers. We show that constructing a bidirectional network of influence between cell–cell communication molecules allows us to determine the direction of cause-effect relationships underlying the correlation between genes (undirected in nature) and described in the literature. Some methods have been proposed in the past to elucidate causality in biological networks inferred from experimental data, either from protein–protein interaction (PPI) information [5–7] or a combination of PPI and protein–DNA interactions [8] or perturbation experiments [9]. 
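Before moving to biological data, the construction illustrated in Fig. 1 can be made concrete with a small example. The following Python sketch derives a bidirectional influence network from an undirected co-authorship graph; the graph and the weights are invented purely for illustration.

# Undirected edge weights: number of shared publications between pairs of authors.
shared_publications = {
    ("advisor", "student1"): 5,
    ("advisor", "student2"): 3,
    ("advisor", "student3"): 2,
    ("student1", "student2"): 1,
}
people = {"advisor", "student1", "student2", "student3"}

def weight(a, b):
    return shared_publications.get((a, b), shared_publications.get((b, a), 0))

def influence(a, b):
    # Influence of a on b: a's share of all of b's co-authored publications.
    total_b = sum(weight(b, x) for x in people if x != b)
    return weight(a, b) / total_b if total_b else 0.0

print(influence("advisor", "student1"))   # 5/6 ~ 0.83
print(influence("student1", "advisor"))   # 5/10 = 0.5  (asymmetric mutuality)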
Specifically in the context of cancer, the partial least squares method was used to link the level of 19 proteins involved in apoptotic signaling in human colon adenocarcinoma cells to four quantitative measures of apoptosis [10]. The resulting directed network led to the prediction of cell death under specific perturbations. In a previous study, we investigated the causality inference in the TME in breast cancer by constructing a dynamical model based on perturbation experiments [11]. The model was used to determine how to effectively revert the angiogenic activity of Tie-2 expressing monocytes. In general, strategies based on perturbation to infer causality are limited to small systems with a limited set of genes or proteins due to the combinatorial nature of the problem and its cost. In the attempt to infer causality from correlation data, which is much more abundant than perturbation data, Gupta et al. [12] proposed a method that integrates network inference and network analysis approaches to construct co-expression networks and assign directionality to edges. The method was applied to time-series data to infer the topology of the gene regulatory network (GRN) of B. subtilis. Instead of using the criterion proposed by Gupta et al. [12] to assign directionality to co-expression edges, in this work we adopted a similar criterion to assign directionality to the investments; the final directionality assignment for the co-expression edges, which represent the major regulatory effect, is based on the ratio of investments between two given nodes and investments that the investor has on other genes, as proposed by Hangal et al. [3] for social graphs. In doing so, and given that the ratio of investments is a topological property that does not rest on the dynamics of the system, costly time-series or perturbation experiments are not needed to elucidate the directionality of the interaction between two given genes. Consequently, our approach can take advantage of a wealth of expression data of comparative studies accumulated over decades of cancer research. Principles of the approach In biological systems, the circuitry of the underlying network can cause certain direct or indirect reciprocity on the regulatory effects performed by direct interactions; there exist regulatory feedback loops contributing to determine the dynamical behavior of living systems and maintaining their general homeostasis. In other words, in order to guarantee a certain level of homeostasis, the constitutive elements of biological networks should be capable of mutually affecting each other directly or indirectly. The approach presented here for causality inference is based on the following assumption: given two genes A and B in a regulatory network, if there exists a direct effect from A to B, most of the indirect effects between these two nodes will have the opposite direction (from B to A). In this context, indirect/direct effect refers to an effect with/without intermediates belonging to the considered network. The rationale behind this assumption is the idea that homeostatic control requires reciprocity and given that the direct interaction covers one direction, the indirect interactions should have the opposite direction. Of course, such an assumption neglects feed-forward loops, which are well-known regulatory mechanisms [13–15]. 
In order to infer network causality, the above-mentioned assumption was combined with network inference based on partial correlation [16] and the adopted concepts of influence and investments proposed by Hangal et al. [3]. Algorithm description for directionality assignment The algorithm can be described in three steps (see Fig. 2): Algorithm description in three steps. In the first step, a co-expression network is constructed based on the calculation of direct and indirect correlation. Direct correlation refers to the second-order partial correlation, whereas indirect correlation is calculated as the difference between zeroth-order partial correlation and the direct correlation. In the second step, a bidirectional network of influence is constructed. Forward and reverse influences are derived from the calculation of the investments for each couple of genes. The directionality assignment of the investments is based on the slope ratio (SR) criterion. In the third step, causality is inferred by evaluating every couple of forward and reverse influences. The directionality of the predictions is the opposite of the highest influence value and can be interpreted as the directionality of the main flux of regulatory effects. Construct a co-expression network To this end, we used the software package Ometer (http://www.comp-sys-bio.org/Ometer.html). Between any two given genes we calculated the partial correlation based on the Pearson coefficient. The partial correlation has been previously proposed to discover associations in genomic data [16]. It quantifies the correlation between two variables (in our case, gene expression) when conditioning on one or several other variables. The order of the partial correlation is determined by the number of variables it is conditioned on; within this work we used up to second-order partial correlation. Equations (1–3) allow the calculation of partial correlations of orders 0–2.
$$\text{Zeroth-order}: r_{xy} = \frac{\mathrm{cov}(x, y)}{\sqrt{\mathrm{var}(x)\,\mathrm{var}(y)}}$$
$$\text{First-order}: r_{xy \cdot z} = \frac{r_{xy} - r_{xz} r_{yz}}{\sqrt{\left(1 - r_{xz}^{2}\right)\left(1 - r_{yz}^{2}\right)}}$$
$$\text{Second-order}: r_{xy \cdot zq} = \frac{r_{xy \cdot z} - r_{xq \cdot z} r_{yq \cdot z}}{\sqrt{\left(1 - r_{xq \cdot z}^{2}\right)\left(1 - r_{yq \cdot z}^{2}\right)}}$$
The algorithm further considered only interactions with a p value below 0.05. Results using different thresholds (p values of 0.1 and 0.01) are included in the "Results" section. At this point, the algorithm makes an assumption: given two genes A and B, if there exists a direct effect from A to B, most of the indirect effects between these two nodes will have the opposite direction (from B to A). The rationale behind this assumption is described above in the principles of the approach. Following this assumption, the algorithm establishes that the partial correlation gives us the strength of the effect in one direction and the difference between correlation and partial correlation (correlation minus partial correlation) gives us the strength of the effect in the opposite direction. From now on, we will refer to the partial correlation as direct correlation and to the difference between correlation and partial correlation as indirect correlation, which has the opposite direction to the direct correlation.
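For illustration, the partial correlations of Eqs. (1)–(3) can be computed from Pearson coefficients as in the following minimal Python sketch; this is not the Ometer implementation, and the simulated data are used only to show the effect of conditioning.

import numpy as np

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def partial_1(x, y, z):
    # First-order partial correlation r_xy.z
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

def partial_2(x, y, z, q):
    # Second-order partial correlation r_xy.zq
    rxy_z, rxq_z, ryq_z = partial_1(x, y, z), partial_1(x, q, z), partial_1(y, q, z)
    return (rxy_z - rxq_z * ryq_z) / np.sqrt((1 - rxq_z**2) * (1 - ryq_z**2))

rng = np.random.default_rng(0)
z = rng.normal(size=200)
q = rng.normal(size=200)
x = z + 0.5 * q + rng.normal(size=200)
y = z - 0.3 * q + rng.normal(size=200)
print(pearson(x, y), partial_2(x, y, z, q))   # conditioning on z and q removes most of the correlation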
There are two possible directionality assignments for the direct and indirect correlation between two given nodes A and B (see Fig. 3), and both will be further evaluated for each couple of nodes with statistically significant correlation. Fig. 3: Influence calculation. Forward and reverse influence calculation between two genes A and B. Links in grey represent investments of A and B on other genes. The assignment of directionality to direct and indirect correlation is based on the slope ratio (SR) criterion.

Construct a network of influence

We adopted the concepts of investments and influence proposed by Hangal et al. [3] to construct a weighted bidirectional network of influence. The investments are the numerical values of the direct and indirect correlations, and the influence is calculated by dividing the investment between A and B by the total investments of the investor, i.e., of B for the forward influence (A → B) and of A for the reverse influence (A ← B). Given that we do not know whether the direct (conversely, indirect) correlation is associated with the forward (A → B) or the reverse (A ← B) influence, we also do not know whether we should divide the direct and indirect correlation by the investments of A or of B. Moreover, in order to calculate the investments of the investor on other genes, we also need to assign either the direct or the indirect correlation value to the outgoing interactions of the investor. At this point, the algorithm assigns the values of direct and indirect correlation based on the so-called slope ratio (SR) metric, following the strategy proposed by Gupta et al. [12]. The SR is defined as

$$SR = \frac{\min\left(\left|b_{YX}\right|, \left|b_{XY}\right|\right)}{\max\left(\left|b_{YX}\right|, \left|b_{XY}\right|\right)}$$

where $b_{YX}$ and $b_{XY}$ represent the regression slopes of a pair of variables (gene expression values). Gupta et al. proposed the following rules in order to assign directionality to correlation edges, only for those edges that have SR → 0:

$$\text{If}\; SR = \frac{\left|b_{YX}\right|}{\left|b_{XY}\right|} \Rightarrow Y \to X$$

$$\text{If}\; SR = \frac{\left|b_{XY}\right|}{\left|b_{YX}\right|} \Rightarrow X \to Y$$

Our algorithm uses the same set of rules to assign the values of direct and indirect correlation to incoming and outgoing edges:

$$\text{If}\; SR = \frac{\left|b_{YX}\right|}{\left|b_{XY}\right|} \Rightarrow Y \xrightarrow{\text{direct correlation}} X,\quad X \xrightarrow{\text{indirect correlation}} Y$$

$$\text{If}\; SR = \frac{\left|b_{XY}\right|}{\left|b_{YX}\right|} \Rightarrow X \xrightarrow{\text{direct correlation}} Y,\quad Y \xrightarrow{\text{indirect correlation}} X$$
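A minimal sketch of the slope-ratio rule, written by us for illustration only: it computes the two regression slopes, the SR value, and which gene receives the direct (partial) correlation on its outgoing edge, consistent with the rules above.

```python
import numpy as np

def slope_ratio_assignment(x, y):
    """Return (SR, direct_edge) for two expression vectors x (gene X) and y (gene Y).

    b_yx is the slope of the regression of y on x; b_xy that of x on y.
    If |b_YX| is the smaller slope (SR = |b_YX|/|b_XY|), the direct correlation is
    assigned to the edge Y -> X and the indirect one to X -> Y, and vice versa.
    """
    b_yx = np.polyfit(x, y, 1)[0]   # slope of y regressed on x
    b_xy = np.polyfit(y, x, 1)[0]   # slope of x regressed on y
    sr = min(abs(b_yx), abs(b_xy)) / max(abs(b_yx), abs(b_xy))
    if abs(b_yx) <= abs(b_xy):
        direct_edge = ("Y", "X")    # Y --direct--> X, X --indirect--> Y
    else:
        direct_edge = ("X", "Y")    # X --direct--> Y, Y --indirect--> X
    return sr, direct_edge
```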
The intuitive idea behind the algorithm's decision at this point is that the direct correlation (partial correlation) is always assigned to the link going from the gene with the smaller variance to the gene with the larger variance, given that

$$b_{YX} = \frac{\sum_{i=1}^{N}\left(X_{i} - \bar{X}\right)\left(Y_{i} - \bar{Y}\right)}{\sum_{i=1}^{N}\left(X_{i} - \bar{X}\right)^{2}}$$

$$b_{XY} = \frac{\sum_{i=1}^{N}\left(X_{i} - \bar{X}\right)\left(Y_{i} - \bar{Y}\right)}{\sum_{i=1}^{N}\left(Y_{i} - \bar{Y}\right)^{2}}$$

It is worth noting here that this assignment is not related to the weight of the link; sometimes the direct correlation is stronger than the indirect correlation and sometimes it is the opposite. Once the algorithm has calculated the outgoing investments for every gene in the network, the bidirectional influence network is easily derived by applying the following formulas for each couple of genes A and B:

$$Influence\left(A,B\right) = \frac{Invests\left(B,A\right)}{\sum_{X} Invests\left(B,X\right)}$$

$$Influence\left(B,A\right) = \frac{Invests\left(A,B\right)}{\sum_{X} Invests\left(A,X\right)}$$

where X refers to all the genes targeted by the investor. It is worth noting here that, when calculating the influence, positive and negative investments (consequences of positive and negative correlations) are divided correspondingly by the positive and negative investments of the investor.

Predict causality

Once the forward and reverse influence between any couple of nodes in the influence network has been calculated, the algorithm compares both values and selects the larger one as the main flux of influence. The directionality predictions for the original co-expression network have the opposite direction to the main flux of influence. These predictions can be interpreted, according to the adopted definitions, as the direction of the main flux of regulatory effects (dedicated investments). Empirically, we noticed that the percentages of correct directionality assignments improved when discarding the lowest influence values. Accordingly, we systematically discarded the lowest 25 % of values for the three examples presented in this work.

In order to construct the correlation networks we used microarray expression data from public repositories. Datasets for breast and ovarian cancer (1809 and 1394 patients, respectively) were downloaded from the KM plotter website (www.kmplot.com), whereas the dataset for lung cancer (688 patients) was constructed using the following references from the GEO database: GSE14814, GSE19188, GSE31210 and GSE37745. The raw CEL files were MAS5 normalized in the R statistical environment (www.r-project.org) using the Affy Bioconductor library [17]. The three datasets also underwent a second scaling normalization to set the average expression on each chip to 1000, in order to avoid batch effects [18]. The datasets were obtained using either HG-U133A (GPL96) or HG-U133 Plus 2.0 (GPL570) arrays. These platforms include 283 probes corresponding to 192 genes annotated as cytokines, cell–cell communication molecules or growth factors according to the GO database (http://geneontology.org). Consequently, the expression values of these 283 probes were summarized into 192 numerical values by calculating the mean of the probes referring to the same gene.
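Putting the influence formulas and the prediction rule described above into a few lines (again a sketch of ours, with illustrative data structures, not the original code): investments are stored as a dictionary keyed by (investor, target), positive and negative investments are normalized separately, and the predicted regulatory direction is the opposite of the dominant influence.

```python
from collections import defaultdict

def influence_network(invest):
    """invest[(src, tgt)]: signed investment of gene src on gene tgt.

    Returns infl[(a, b)] = Influence(a, b) = Invests(b, a) / sum_X Invests(b, X),
    where positive and negative investments are normalized separately.
    """
    pos, neg = defaultdict(float), defaultdict(float)
    for (src, _), w in invest.items():
        (pos if w >= 0 else neg)[src] += w
    infl = {}
    for (src, tgt), w in invest.items():
        total = pos[src] if w >= 0 else neg[src]
        if total != 0:
            infl[(tgt, src)] = w / total   # influence of tgt over src
    return infl

def predict_direction(infl, a, b):
    """Predicted direction of the co-expression edge: opposite to the main flux of influence."""
    forward, reverse = infl.get((a, b), 0.0), infl.get((b, a), 0.0)
    return (b, a) if abs(forward) >= abs(reverse) else (a, b)
```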
Evaluation of predictions

Predictions were evaluated using the directed cause-effect relationships contained in the ResNet database from Ariadne Genomics (http://www.ariadnegenomics.com/). We selected only the interactions included in the ResNet mammalian database in the category 'Expression'. Interactions in the 'Expression' category indicate that the expression of a regulatory gene/protein affects its targets by regulating (directly or indirectly) their gene expression or protein stability. The ResNet database includes biological relationships and associations extracted from the biomedical literature using Ariadne's MedScan technology [19, 20]. MedScan processes sentences from PubMed abstracts and produces a set of regularized logical structures representing the meaning of each sentence. ResNet was queried for interactions among the 192 genes annotated as cell–cell communication molecules or growth factors. It is worth noting here that no filter was applied based on the biological context. That means that the included interactions have been described in a variety of cell types, tissues and other experimental conditions, and thus are not restricted to observations in a tumor context. We obtained 1774 directed and signed interactions (either activation or inhibition). The complete list of interactions with their respective references is included in Additional files 1, 2. For the evaluation of the directionality assignment, we only considered predictions involving couples of genes previously described to be interacting. The number of such interactions reported in the literature is different for each example (see Table 1 in the "Results" section). When the predicted directionality matched the directionality reported in the literature, it counted as 'correct'; if the directionality did not match, it counted as 'incorrect'. The percentage of correct directionality assignment refers to the ratio between 'correct' and 'total' ('correct' plus 'incorrect') predictions, multiplied by 100. Table 1: Percentages of correctly predicted cause-effect relationships in breast (BC), ovarian (OC) and lung cancer (LC).

However, there is an issue in the procedure we used to evaluate the directionality assignment that has to be taken into account. Couples of genes mutually regulated according to the literature will be correctly predicted whatever directionality is assigned (both directions are correct). Because of that, a random directionality assignment may be right more than the expected 50 % of the time. In order to obtain a fair comparison of our method against random directionality assignment (see Fig. 4), we generated a population of 10,000 assignments, with the same probability of 0.5 for each of the two possible directions, for every couple of genes with statistically significant partial correlation between them, and did so for the three case examples. The random directionality assignment performed slightly better than 50 % for the three case examples, and differently depending on the subset of gene couples used for the evaluation (see Fig. 4), which ultimately depends on the p value of their partial correlation and on the specific example. Consequently, the scores (% of correct assignments) of these populations of alternative random assignments can be justifiably compared against the ones obtained by our algorithm.
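The scoring against literature and the random baseline of 10,000 assignments can be reproduced with the short sketch below (our own illustration; `predictions` are directed gene pairs restricted to couples known to interact, and `literature` is the set of directed ResNet interactions, so mutually regulated couples are counted as correct in either direction):

```python
import numpy as np

def score(predictions, literature):
    """Percentage of predicted directed pairs (a, b) found in the literature set."""
    correct = sum((a, b) in literature for a, b in predictions)
    return 100.0 * correct / len(predictions)

def random_baseline(predictions, literature, n_draws=10_000, seed=0):
    """Score distribution when each couple receives a random direction with probability 0.5."""
    rng = np.random.default_rng(seed)
    pairs = list(predictions)
    return np.array([
        score([(a, b) if rng.random() < 0.5 else (b, a) for a, b in pairs], literature)
        for _ in range(n_draws)
    ])

# z-score of the influence-based assignment relative to the random population:
# z = (score(predictions, literature) - baseline.mean()) / baseline.std()
```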
Fig. 4: Comparison between influence-based and random directionality assignment in breast, ovarian and lung cancer. The directionality assignment based on influence (blue dots) performed better than random for the three biological examples, namely breast, ovarian and lung cancer (green, orange and blue boxplots, respectively). The best results in the three cases were obtained using a p value of 0.01 as the threshold for the co-expression network construction (labeled as BC 0.01, OC 0.01 and LC 0.01).

We applied the proposed methodology to the TME because of the paramount importance of causality for developing novel combined cancer therapies. Breast, ovarian and lung cancer were selected because of the abundance of publicly available datasets with both expression and clinical data. Some aspects of tumor biology, such as pro-angiogenic and immune-suppressive states, rely on cell–cell communication events; internal cellular processes are significantly influenced by the interplay between different cell types carried out through cell–cell communication molecules, which become potential targets of novel therapies. However, the complexity of the TME demands theoretical frameworks, such as statistical inference, graph analysis and dynamical models, in order to assess and study the information derived from high-throughput experimental technologies. A predictive model of the TME should capture interdependencies between tumor microenvironment components and predict their response to single and combined perturbations; it would serve to identify the most efficient treatment combinations that induce desired cell properties, such as anti-angiogenic and immune-competent states, in the TME. Such a model requires directionality, or causality, when describing interdependencies between TME components. Statistical analysis of the concentrations of cell–cell communication molecules in tumor samples allows the construction of a correlation network at the level of gene products or gene expression. Unfortunately, correlation networks are undirected; a statistically significant correlation between genes 'A' and 'B' does not indicate whether 'A' levels are causing 'B' levels or vice versa. The methodology proposed in this work assigns directionality to co-expression edges based on the ratio between the investments linking two given nodes and the investments that the investor places on other genes (see "Methods" section). Calculating the ratio of investments in the network of influence does not require costly time-series or perturbation experiments. In summary, we have applied an analysis to infer causality in the TME at the gene expression level across three different cancer types. Both the co-expression and influence networks are included in Additional file 1: Tables S1.

Causality inference at the level of gene expression in breast, ovarian and lung cancer

After applying the methodology described in the "Methods" section to three different datasets derived from breast, ovarian and lung cancer patients, we observed that the directionality assignment based on influence performed better than a random assignment in all three cases (see Fig. 4). Different p values (0.01, 0.05 and 0.1) were used as the threshold for the inference of the initial co-expression network in order to evaluate the effect of this parameter on the directionality assignment. In the three cases, the best accuracy corresponded to the most stringent p value (0.01), resulting in correct predictions of 63.6, 62.2, and 69.5 % for the breast, ovarian and lung cancer datasets, respectively. These percentages were estimated using information from the literature about the predicted interactions.
Relaxing the stringency of the p value used to construct the co-expression network allowed a larger number of correctly predicted directionality assignments to be obtained, but with lower accuracy (see Table 1). Indeed, we observed that using a p value of 0.01 as the threshold for the co-expression network we always obtained a statistically significant difference between the score of the influence-based network and the randomly generated networks, with z-scores of 2.03, 1.77 and 2.81 for breast, ovarian and lung cancer respectively and p values < 0.05, whereas for less stringent p values the difference was not always statistically supported (see Table 1).

The most prevalent use of network inference in biomedicine takes advantage of information such as gene expression or protein–protein binding data to predict the network topology as a set of correlations or physical interactions between its constitutive elements. Unfortunately, the resulting networks lack directionality, i.e., causality, hindering the elucidation of regulatory mechanisms and of the flow of information through signaling pathways. This causality elucidation is essential to model the homeostatic response of the TME to internal and external factors and to predict the network response to perturbations such as targeted therapy. In order to elucidate this causality, one may consider dedicated perturbation experiments or time-series data, which are costly and not as abundant as comparative studies between a reduced set of conditions. In this work we addressed the following question: to what extent can the description of a biological system (the TME) in terms of influence and investments, derived from social network analysis, be useful to infer causality from comparative studies? To answer this question, we developed a systems approach where network causality in the TME is inferred based not only on local properties of the system (such as the co-expression of two genes) but also on the analysis of the global network topology. We showed that the application of the strategy proposed here to breast, ovarian and lung cancer datasets allowed the prediction of causality from the derived co-expression networks by constructing a network of influence and analyzing its topological properties. The resulting directionality assignments were evaluated using information from the literature and compared across the three cancer types. Results showed that 63.63, 62.26 and 69.56 % of the directionality predictions for breast, ovarian and lung cancer, respectively, were correct according to cause-effect relationships described in the literature. These results indicated that costly perturbation experiments and time-series data could be avoided while still providing a good approximation of the network causality.

The methodology described in this work does not address the other significant challenge in network inference: the appearance of false positives, i.e., correlations inferred due to the spurious co-occurrence of biological events. The transformation of the correlation network into an influence network will assign weights and directionality to both true and false interactions. Conversely, if there is no link between two given nodes in the correlation network, no new interactions will be inferred; false negatives, or real interactions not included in the correlation network, will not be further considered. In other words, the methodology presented in this work assumes the correlation networks are essentially correct and complete.
Consequently, the causality inference technique proposed here will benefit from the further development of advanced methods for correlation network inference with high sensitivity and specificity. As mentioned in the "Methods" section, the assumption adopted to identify the direction of the dominant regulation between two genes somewhat neglects the importance of feed-forward loops, which are well-known regulatory mechanisms [13–15]. However, the potential overestimation of the strength of the investments assigned to the direct correlation does not prevent the correct directionality assignment for the majority of gene pairs. It is worth noting here that network inference techniques usually attempt to remove indirect interactions, or interactions through intermediaries, by using partial correlations [16, 21–23], conditional mutual information [24, 25] or the data processing inequality [26, 27]. In this work we showed that considering not only direct but also indirect correlations (between genes that also have direct correlations) helped to predict directionality in three different case studies. This work constitutes an example of interdisciplinary transfer of concepts between the fields of social and biological network analysis that allows the development of a novel systems approach to infer network causality in the TME. The main strength of this method is that it relies on experimental information from comparative studies, rather than on the costly dedicated perturbation experiments and time-series data usually required to infer cause-effect relationships. The application of our method can help experimental design, the elucidation of regulatory mechanisms and the identification of novel targets in cancer therapy and beyond.

Abbreviations: GRN: gene regulatory network; TME: tumor microenvironment.

1. Zotenko E, Mestre J, O'Leary DP, Przytycka TM. Why do hubs in the yeast protein interaction network tend to be essential: reexamining the connection between the network topology and essentiality. PLoS Comput Biol. 2008;4(8):e1000140.
2. Kadushin C. Understanding social networks: theories, concepts, and findings. Oxford: Oxford University Press; 2012.
3. Hangal S, MacLean D, Lam MS, Heer J. All friends are not equal: using weights in social graphs to improve search. In: Proceedings of the 4th SNA-KDD Workshop '10 (SNA-KDD'10). Washington: ACM; 2010.
4. Penrod NM, Moore JH. Influence networks based on coexpression improve drug target discovery for the development of novel cancer therapeutics. BMC Syst Biol. 2014;8(1):12.
5. Medvedovsky A, Bafna V, Zwick U, Sharan R. An algorithm for orienting graphs based on cause-effect pairs and its applications to orienting protein networks. In: Crandall KA, Lagergren J, editors. WABI LNCS (LNBI). Heidelberg: Springer; 2008. p. 222–32.
6. Liu W, Li D, Wang J, Xie H, Zhu Y, He F. Proteome-wide prediction of signal flow direction in protein interaction networks based on interacting domains. Mol Cell Proteomics. 2009;8(9):2063–70.
7. Gitter A, Klein-Seetharaman J, Gupta A, Bar-Joseph Z. Discovering pathways by orienting edges in protein interaction networks. Nucleic Acids Res. 2011;39(4):e22–3.
8. Yeang C-H, Ideker T, Jaakkola T. Physical network models. J Comput Biol. 2004;11(2–3):243–62.
9. Ourfali O, Shlomi T, Ideker T, Ruppin E, Sharan R. SPINE: a framework for signaling-regulatory pathway inference from cause-effect experiments. Bioinformatics. 2007;23(13):i359–66.
10. Janes KA, Yaffe MB. Data-driven modelling of signal-transduction networks. Nat Rev Mol Cell Biol. 2006;7(11):820–8.
11. Guex N, Crespo I, Bron S, Ifticene-Treboux A, Faes-van't Hull E, Kharoubi S, Liechti R, Werffeli P, Ibberson M, Majo F. Angiogenic activity of breast cancer patients' monocytes reverted by combined use of systems modeling and experimental approaches. PLoS Comput Biol. 2015;11(3):e1004050–1.
12. Gupta A, Maranas CD, Albert R. Elucidation of directionality for co-expressed genes: predicting intra-operon termination sites. Bioinformatics. 2006;22(2):209–14.
13. Cobb MH, Goldsmith EJ. How MAP kinases are regulated. J Biol Chem. 1995;270(25):14843–6.
14. Schlaepfer DD, Jones K, Hunter T. Multiple Grb2-mediated integrin-stimulated signaling pathways to ERK2/mitogen-activated protein kinase: summation of both c-Src- and focal adhesion kinase-initiated tyrosine phosphorylation events. Mol Cell Biol. 1998;18(5):2571–85.
15. Piloto O, Wright M, Brown P, Kim K-T, Levis M, Small D. Prolonged exposure to FLT3 inhibitors leads to resistance via activation of parallel signaling pathways. Blood. 2007;109(4):1643–52.
16. De La Fuente A, Bing N, Hoeschele I, Mendes P. Discovery of meaningful associations in genomic data using partial correlation coefficients. Bioinformatics. 2004;20(18):3565–74.
17. Gautier L, Cope L, Bolstad BM, Irizarry RA. Affy—analysis of Affymetrix GeneChip data at the probe level. Bioinformatics. 2004;20(3):307–15.
18. Sims AH, Smethurst GJ, Hey Y, Okoniewski MJ, Pepper SD, Howell A, Miller CJ, Clarke RB. The removal of multiplicative, systematic bias allows integration of breast cancer gene expression datasets—improving meta-analysis and prediction of prognosis. BMC Med Genomics. 2008;1(1):42.
19. Daraselia N, Yuryev A, Egorov S, Novichkova S, Nikitin A, Mazo I. Extracting human protein interactions from MEDLINE using a full-sentence parser. Bioinformatics. 2004;20(5):604–11.
20. Novichkova S, Egorov S, Daraselia N. MedScan, a natural language processing engine for MEDLINE abstracts. Bioinformatics. 2003;19(13):1699–706.
21. Dempster AP. Covariance selection. Biometrics. 1972;28:157–75.
22. Whittaker J. Graphical models in applied multivariate statistics. Hoboken: Wiley; 2009.
23. Koller D, Friedman N. Probabilistic graphical models: principles and techniques. Cambridge: The MIT Press; 2009.
24. Soranzo N, Bianconi G, Altafini C. Comparing association network algorithms for reverse engineering of large-scale gene regulatory networks: synthetic versus real data. Bioinformatics. 2007;23(13):1640–7.
25. Liang K-C, Wang X. Gene regulatory network reconstruction using conditional mutual information. EURASIP J Bioinf Syst Biol. 2008;2008(1):253894.
26. Margolin AA, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Favera RD, Califano A. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006;7(Suppl 1):S7.
27. Basso K, Margolin AA, Stolovitzky G, Klein U, Dalla-Favera R, Califano A. Reverse engineering of regulatory networks in human B cells. Nat Genet. 2005;37(4):382–90.

Authors' contributions: IC conceived the idea, performed the analyses and wrote the manuscript; MAD conceived the idea and wrote the manuscript; IX conceived the idea, wrote the manuscript and supervised the project. All authors read and approved the final manuscript.

Acknowledgements: We thank Dr. Brian Stevenson and Dr. Nicolas Guex for their careful and critical reading of the manuscript. This project has been funded with support from Vital-IT-SIB (Swiss Institute of Bioinformatics) at the University of Lausanne, the MEDIC Foundation (to M.-A.D.) and the Swiss National Science Foundation (SNSF), Grant CR32I3_156915 to M.-A.D.
Author affiliations: Vital-IT, SIB (Swiss Institute of Bioinformatics), University of Lausanne, Lausanne, Switzerland (Isaac Crespo & Ioannis Xenarios); Ludwig Center for Cancer Research, University of Lausanne, Epalinges, Switzerland (Marie-Agnès Doucey). Correspondence to Isaac Crespo or Ioannis Xenarios.

Additional file 1: Table S1.xlsx. Networks. Results of the influence-based network vs. random networks.

Citation: Crespo, I., Doucey, M.-A. & Xenarios, I. Social networks help to infer causality in the tumor microenvironment. BMC Res Notes 9, 168 (2016). https://doi.org/10.1186/s13104-016-1976-8

Keywords: Causality; Network inference
Computing determinant without expansion

$$\begin{align}\mathrm D &= \left|\begin{matrix} (b+c)^2 & a^2 & a^2 \\ b^2 & (a+c)^2 & b^2 \\ c^2 & c^2 & (a+b)^2 \end{matrix}\right|\\ &= (a+b+c)\left|\begin{matrix} b+c - a & a^2 & a^2 \\ b - a -c & (a+c)^2 & b^2 \\ 0 & c^2 & (a+b)^2 \end{matrix}\right| \\ &= (a+b+c)^2\left|\begin{matrix} b+c - a & 0 & a^2 \\ b - a -c & a+c - b & b^2 \\ 0 & c - a-b & (a+b)^2 \end{matrix}\right|\\ &= (a+b+c)^2\left|\begin{matrix} b+c - a & 0 & a^2 \\ 0 & a+c - b & b^2 \\ c - a-b & c - a-b & (a+b)^2 \end{matrix}\right|\end{align}$$

Can $\rm D$ be further simplified without expanding? I feel it should be, because this was a competition question.

Tags: linear-algebra, matrices, contest-math, determinant

Comments:
– lab bhattacharjee: $$R_3'=R_3-R_2-R_1$$
– @labbhattacharjee Then $R_3 = 2[-b \qquad -a \qquad ab]$. It simplifies things somewhat but still not very helpful.
– Alex Ravsky: If we'll expand the determinant after this, we obtain $$2[(b+c-a)(a+c-b)ab+b(a+c-b)a^2+ab^2(b+c-a)]=$$ $$2[(b+c-a)ab(a+c)+b(a+c-b)a^2]=$$ $$2ab[(b+c-a)(a+c)+(a+c-b)a]=$$ $$2ab[ab+bc+ac+c^2-a^2-ac+a^2+ac-ab]=$$ $$2ab[bc+ac+c^2]=$$ $$2abc[a+b+c],$$ and the initial determinant equals $2(a+b+c)^3abc$ (I verified this with Mathcad).
– @AlexRavsky Yes that is the answer but I would like to know if there is a way without these tedious calculations.
– Alex Ravsky: OK, I'll think once more about such a way. Nevertheless, a search for it may be a much more lengthy and non-trivial task than these calculations and it may be unsuccessful. So it is not recommended to do it in real competitions. :-)

Answer (Alex Ravsky):

We already reduced the problem to calculating $$D'=\left|\begin{matrix} b+c - a & 0 & a^2 \\ 0 & a+c - b & b^2 \\ b & a & -ab \end{matrix}\right|$$ If $a=0$ then $$D'=\left|\begin{matrix} b+c & 0 & 0\\ 0 & c - b & b^2 \\ b & 0 & 0 \end{matrix}\right|=0.$$ If $b=0$ then $$D'=\left|\begin{matrix} c - a & 0 & a^2 \\ 0 & a+c & 0 \\ 0 & a & 0 \end{matrix}\right|=0.$$ Otherwise put $R'_1=R_1+\frac abR_3$ and $R'_2=R_2+\frac baR_3$. Then $$D'=\left|\begin{matrix} b+c & \frac {a^2}b & 0\\ \frac {b^2}a & a+c & 0 \\ b & a & -ab \end{matrix}\right|=-ab\left|\begin{matrix} b+c & \frac {a^2}b \\ \frac {b^2}a & a+c \\ \end{matrix}\right|=-ab[(a+c)(b+c)-ab]=-ab[ac+bc+c^2]=-abc(a+b+c).$$ The latter formula holds also when $a=0$ or $b=0$. Finally, $$D=(a+b+c)(-2)D'=2(a+b+c)^3abc.$$

Comment: – I don't think we can do better than this.
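As an aside (not part of the original exchange), the closed form quoted in the comments can be double-checked symbolically, for instance with SymPy:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
D = sp.Matrix([
    [(b + c)**2, a**2,       a**2      ],
    [b**2,       (a + c)**2, b**2      ],
    [c**2,       c**2,       (a + b)**2],
]).det()

print(sp.factor(D))  # 2*a*b*c*(a + b + c)**3
```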
Near-critical spreading of droplets

Raphael Saiseau, Christian Pedersen, Anwar Benjana, Andreas Carlson, Ulysse Delabre, Thomas Salez & Jean-Pierre Delville

Nature Communications volume 13, Article number: 7442 (2022)

Subjects: Phase transitions and critical phenomena

We study the spreading of droplets in a near-critical phase-separated liquid mixture, using a combination of experiments, lubrication theory and finite-element numerical simulations. The classical Tanner's law describing the spreading of viscous droplets is robustly verified when the critical temperature is neared. Furthermore, the microscopic cut-off length scale emerging in this law is obtained as a single free parameter for each given temperature. In total-wetting conditions, this length is interpreted as the thickness of the thin precursor film present ahead of the apparent contact line. The collapse of the different evolutions onto a single Tanner-like master curve demonstrates the universality of viscous spreading before entering in the fluctuation-dominated regime. Finally, our results reveal a counter-intuitive and sharp thinning of the precursor film when approaching the critical temperature, which is attributed to the vanishing spreading parameter at the critical point.

The spreading of viscous droplets on solid substrates has been extensively studied over the last decades [1,2,3]. For droplet sizes smaller than the capillary length [2], the viscocapillary regime yields self-similar asymptotic dynamics, i.e. the so-called Tanner's law [4], with the droplet radius R increasing in time t as $\sim t^{1/10}$. To establish this scaling, two ingredients are invoked: global volume conservation, and a local balance at the contact line between driving capillary forces and viscous dissipation in the liquid wedge. However, such a continuous description implies a finite change of the fluid velocity over a vanishing height at the contact line, and thus leads to an unphysical divergence of viscous stress and dissipation [5]. To solve this paradox, a microscopic molecular-like cut-off length is required, and appears through a logarithmic factor in Tanner's law.
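For completeness, the 1/10 exponent follows directly from these two ingredients; the short scaling argument below is standard and not specific to this article. For a spherical cap in the small-slope limit, volume conservation gives $\theta \simeq 4V/(\pi R^{3})$, while the viscocapillary balance at the contact line imposes $\theta^{3}\sim \mathrm{Ca} = \eta\dot{R}/\gamma$. Combining the two at fixed volume,
$$\left(\frac{4V}{\pi R^{3}}\right)^{3}\sim\frac{\eta\dot{R}}{\gamma}\;\Rightarrow\; R^{9}\,\mathrm{d}R\sim\frac{\gamma V^{3}}{\eta}\,\mathrm{d}t\;\Rightarrow\; R(t)\sim\left(\frac{\gamma V^{3}t}{\eta}\right)^{1/10},$$
up to numerical prefactors and the logarithmic factor discussed in the following.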
In this spirit, theoretical and experimental investigations introduced various possible regularization mechanisms [1,6,7,8,9,10,11,12,13,14], including a gravito-capillary transition [15,16,17,18], surface roughness [17], thermal effects [19], Marangoni-driven flows [20], diffusion [21,22], or a slip condition at the solid substrate [23]. In the particular case of total wetting, the existence of a thin precursor film ahead of the contact line has been proposed as the main candidate [24,25,26]. However, despite tremendous efforts to measure the microscopic length, or to characterize the associated logarithmic factor, the problem is still open. Conversely, solving the free-interface dynamical evolution of a droplet-like perturbation on a thin liquid film in the lubrication approximation showed that Tanner's law can be considered as a negligible-film-thickness limit of capillary levelling [27]. Such a statement was further comforted by its extension to the gravity-driven [28] and elastic-bending-driven [29] cases. As such, it is possible to unambiguously determine the microscopic precursor-film thickness from the spreading of any droplet in total-wetting and lubrication conditions.

In this article, we investigate droplet spreading in a near-critical phase-separated binary liquid [30], with four main objectives. First and most importantly, close to a fluid-fluid critical point, an isotropic liquid belongs to the {d = 3, n = 1} universality class of the Ising model, where d and n are, respectively, the space and order-parameter dimensions, so that the results are immediately generalizable to any fluid belonging to the same universality class. Secondly, critical phenomena are often accompanied by a wetting transition at a temperature which might be either identical to or distinct from the critical one [3], so precursor films can also be investigated near the critical point. Thirdly, as many fluid properties vary with the proximity to a critical point according to power-law variations of the type $\sim(\Delta T/T_{\mathrm{c}})^{\alpha}$, with $\Delta T = T - T_{\mathrm{c}}$ the temperature distance to the critical point $T_{\mathrm{c}}$, and $\alpha$ some positive or negative critical exponent, the spreading dynamics may be continuously and precisely tuned by varying the temperature. Finally, our study also provides evidence for droplet spreading in a liquid environment, which was scarcely studied [31].

The experimental configuration is depicted in Fig. 1. We use a water-in-oil micellar phase of microemulsion [32,33], as described in detail in the Supplementary Method 1 section. Briefly, at the chosen critical composition (water, 9% wt; toluene, 70% wt; SDS, 4% wt; butanol, 17% wt), it exhibits a low critical point at Tc close to 38 °C, above which the mixture separates into two phases of different micelle concentrations, Φ1 and Φ2 with Φ2 < Φ1 (see Fig. 1a). The microemulsion is enclosed in a tightly-closed fused-quartz cell of 2 mm thickness (Hellma 111-QS.10X2), which is introduced in a home-made thermally-controlled brass oven with four side-by-side windows. As working in a tight cell is mandatory with critical fluids, we use a contactless optical method to create a wetting drop at the bottom wall. Note that the microemulsion is transparent (absorption coefficient smaller than 4.6 × 10⁻⁴ cm⁻¹) at the employed wavelength, which prevents any laser-induced heating effect.
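To illustrate how strongly the dynamics can be tuned by temperature, one can evaluate the critical scaling of the interfacial tension and of the capillary velocity; the snippet below uses the amplitude and exponent quoted in the next paragraph (γ₀ = 5.0 × 10⁻⁵ N/m, ν = 0.63) and a placeholder viscosity of 10⁻³ Pa·s, the actual calibrated, temperature-dependent value being given in Supplementary Method 1.

```python
# Order-of-magnitude illustration only; eta is a placeholder, not the calibrated value.
T_c = 38.0 + 273.15       # critical temperature (K)
gamma0 = 5.0e-5           # N/m
nu = 0.63                 # Ising correlation-length exponent
eta = 1.0e-3              # Pa.s (placeholder)

for dT in (1.0, 2.0, 4.0, 8.0):
    gamma = gamma0 * (dT / T_c) ** (2 * nu)   # interfacial tension
    v_cap = gamma / eta                       # capillary velocity
    print(f"dT = {dT:3.0f} K: gamma = {gamma:.1e} N/m, v_cap = {v_cap*1e6:6.0f} um/s")
```

With these numbers one recovers the order of magnitude of the 22–1013 μm/s capillary-velocity range quoted further below for ΔT = 1–8 K.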
The sample is set at a temperature T > Tc, and a continuous frequency-doubled Nd³⁺-YAG (wavelength in vacuum λ = 532 nm, TEM₀₀ mode) laser beam is focused on the meniscus of the phase-separated mixture using a ×10 Olympus® microscope objective (NA = 0.25). The photon momentum mismatch between the two phases, proportional to the refraction-index contrast, generates a radiation pressure, and the interface thus bends (see Fig. 1b) as a result of the balance between the latter and the hydrostatic and Laplace pressures [34]. As the interfacial tension γ of near-critical interfaces vanishes at the critical point, with $\gamma=\gamma_{0}\,(\Delta T/T_{\mathrm{c}})^{2\nu}$, where ν = 0.63 and γ0 = 5.0 × 10⁻⁵ N/m in our case, the interfacial deformation can be made very large. When the beam propagates downwards and with sufficient power, the interface can become unstable (see Fig. 1c) due to the total reflection of light within the deformation. In this case, a jet is formed, with droplets emitted at the tip [35,36]. Note that the jetting power threshold can also be used to measure the interfacial tension [35]. The length of the jet can be tuned with the laser power to bring its tip close to the bottom of the cell, without touching it. Then, by reducing the power, the jet breaks up into many droplets due to the Rayleigh-Plateau instability. By increasing again the power, below the jetting threshold, the laser beam forces coalescence between several droplets to produce a large one, which can be further pushed by radiation pressure towards a borosilicate substrate placed at the bottom of the cell (see Fig. 1d). We turn off the laser just before contact, and follow the droplet spreading using ×20 or ×50 Olympus® microscope objectives, with resolutions of 1.0 and 0.8 μm respectively, and a Phantom® VEO340L camera for the frame grabbing. Note the existence of a prewetting film on the substrate, at least up to ΔT = 15 K [37]. Figure 1d displays two image sequences corresponding to the coalescence and spreading of droplets at ΔT = 8 and 1 K. The spreading time scale comparatively increases by approximately one order of magnitude for ΔT = 1 K, as a result of the vanishing interfacial tension near Tc. We also notice that the droplet volumes reduce over time, indicating the presence of evaporation, as expected for finite-size objects in an environment at thermodynamic equilibrium. We stress here that we employ the standard terminology "evaporation" all along the article, even though the outer fluid is not a "vapor" but another liquid-like system. At early stages, both profiles display large curvature gradients. Since our focus here is on the long-term asymptotic spreading behavior, we define the temporal origin t = 0 from a first experimental image where the curvature is homogeneous, except near the contact-line region, and a spherical-cap fit is valid.

Fig. 1: Experimental system. a Schematic phase diagram of the used binary liquid mixture (i.e., a micellar phase of the microemulsion, see Supplementary Method 1), where T is the temperature, Φ the micelle concentration, and Tc and Φc the coordinates of the critical point. b Radiation pressure-induced optical bending of the interface separating the two coexisting phases at T > Tc, where the downward laser beam is represented by the arrows. c Image sequence of the optical jetting instability with drop formation at the tip.
d Image sequences of a less-dense-phase droplet of concentration Φ2 coalescing and spreading over a borosilicate substrate placed at the bottom of the cell, when surrounded by the denser phase of concentration Φ1. The temperature distances to the critical point Tc, the initial droplet volumes, and the time intervals between images are: (i) ΔT = 8 K, Vini = 30.3 pL, dt = 3 s; (ii) ΔT = 1 K, Vini = 21.5 pL, dt = 20 s.

Each image sequence is then treated using a custom-made automated contour detection based on a Canny-threshold algorithm, where the droplet profiles correspond to the external maxima of the intensity gradients (see Supplementary Method 2). Spherical-cap fits allow us to extract the droplet volume V(t), radius R(t), and apparent contact angle θ(t), which are then averaged using a custom-made exponentially-increasing time window to get a logarithmically-distributed data set (see Supplementary Method 2). In the inset of Fig. 2a, we plot the experimental droplet volume as a function of time, for four different values of ΔT. In all cases, the volume decreases until the droplet is fully evaporated. By using the initial volumes V0 and by extrapolating the times tf of final evaporation for all droplets, we then plot in the main panel the same data in dimensionless form, with V/V0 as a function of 1 − t/tf. We observe a data collapse onto a unique power-law behavior, with fitted exponent 1.77, which is close to the 11/7 value theoretically predicted for evaporating droplets [12]. In Fig. 2b, we further plot the contact radius R, normalized by its initial value R0, as a function of time, for all ΔT. A Tanner-like power law systematically emerges at intermediate times, until evaporation eventually dominates the evolution.

Fig. 2: Raw data: profiles and main observables. a Rescaled droplet volume V/V0 as a function of the rescaled time 1 − t/tf, with V0 the initial volumes and tf the evaporation times of all droplets, for four different distances to the critical temperature ΔT. The dashed line indicates the empirical power law $(1-t/t_{\mathrm{f}})^{1.77}$. Inset: corresponding raw data. b Contact radius, divided by its initial value R0, as a function of time for the same temperatures. The 1/10 power-law exponent of Tanner's law is indicated with a slope triangle. Inset: corresponding raw data. Error bars on droplet volume are derived from the errors on droplet height and contact radius, which are described in Supplementary Method 2. c, d Droplet profiles at different times obtained from experiments (symbols) and compared to the numerical solutions of Eq. (1) (dashed/dotted lines) for ΔT = 8 K (c) and ΔT = 4 K (d).

To model the observed spreading dynamics, we consider a large initial droplet-like interfacial perturbation profile d(r, t = 0), with r the radial coordinate, atop a flat thin film of thickness ϵ, and describe its evolution through the profile d(r, t) at all times, in the small-slope limit within the lubrication approximation [2]. Therein, a horizontal Newtonian viscous flow of viscosity η is driven by the gradient of Laplace pressure. Since most of the dissipation occurs in the wedge-like region near the apparent contact line [5], we further make an approximation and neglect the influence of viscous shear stresses in the surrounding phase. Note that this hypothesis would not hold if the atmospheric fluid was much more viscous than the droplet fluid [31].
Fortunately, this is not the case near the critical point, where both viscosities are almost equal. The evolution is then described by the axisymmetric capillary-driven thin-film equation [27]:

$$\partial_{t}h+\frac{\gamma }{3\eta r}\,\partial_{r}\left[r h^{3}\,\partial_{r}\left(\frac{1}{r}\partial_{r}h+\partial_{r}^{2}h\right)\right]=\mathcal{H}(1-r/R)\,f\,,$$

where h(r, t) = ϵ + d(r, t) is the free-interface height from the solid substrate, $\mathcal{H}(1-r/R)$ is the Heaviside step function with R(t) the advancing radius of the droplet, and f(t) is an added coefficient accounting for evaporation. The latter is chosen in order to precisely mimic the experimentally-measured evaporation of the droplet (see Fig. 2a). Note that the Heaviside function ensures that the prewetting film—which is at thermodynamical equilibrium in contrast to the droplet—does not evaporate. Equation (1) is numerically integrated using a finite-element solver [38]. The experimental radial profiles depicted in Fig. 1d are chosen as initial profiles, after angular averaging and smoothening using fourth-order polynomials in order to avoid unphysical fluctuations related to the camera resolution and the contour-detection algorithm. As shown in Fig. 2c, d, the comparisons between the experimental and numerical evolutions reveal an excellent agreement. As the capillary velocity vcap = γ/η is independently evaluated (see ref. [32] and Supplementary Method 1 for the viscosity calibration), and typically varies between 22 and 1013 μm/s for near-critical droplets within the ΔT = 1–8 K range, the precursor-film thickness ϵ remains the only fit parameter in this comparison, and its behavior with temperature will be discussed after.

Tanner's law [1,4] can be obtained from the combination of (i) a local power balance between capillary driving and viscous damping near the contact line; and (ii) global volume conservation. The former power balance reduces to the Cox–Voinov law for total-wetting conditions:

$$\theta^{3}=9\,\ell\,\mathrm{Ca}\,,$$

where $\mathrm{Ca}=\dot{R}/v_{\mathrm{cap}}$ is the capillary number, and $\ell=\ln(L/\epsilon)$ is the logarithmic factor discussed in the introduction, relating the two cut-off lengths of the problem, namely a typical macroscopic size L of the system and a microscopic length which is identified with the precursor-film thickness ϵ in total-wetting conditions. To disentangle evaporation, through the V(t) behavior obtained in Fig. 2a, from the spreading dynamics, we introduce the following dimensionless variables:

$$\tilde{R}=R\left[\frac{\pi }{4V(t)}\right]^{1/3}\quad,\quad \tilde{t}=v_{\mathrm{cap}}\,t\left[\frac{\pi }{4V(t)}\right]^{1/3}\,.$$

Tanner's law [1,4] is then written in dimensionless form as:

$$\tilde{R}^{10}-\tilde{R}_{0}^{10}=\frac{10}{9\ell}\,\tilde{t}\,,$$

with $\tilde{R}_{0}=\tilde{R}(t=0)$. In Fig. 3a, we plot the rescaled contact radius as a function of rescaled time, for various temperatures. We systematically observe Tanner behaviors. Interestingly, from the fits to Eq. (4), we obtain increasing values of ℓ as the critical point is neared. This trend is further confirmed in Fig. 3b, where we see Cox–Voinov behaviors (see Eq. (2)) at large-enough Ca, with an identical evolution of ℓ with ΔT. The departure from Cox–Voinov's law at low Ca is due to the evaporation-induced non-monotonic behavior of R(t) (see Fig.
2b), resulting in Ca crossing zero for finite values of θ.

Fig. 3: Rescaled data: Tanner and Cox–Voinov laws. a Rescaled contact radius $\tilde{R}^{10}-\tilde{R}_{0}^{10}$ as a function of rescaled time $\tilde{t}$, for various temperatures ΔT as indicated. The dashed lines indicate fits to Eq. (4), with ℓ as a free parameter for each temperature. b Contact angle θ as a function of capillary number Ca, for various temperatures ΔT as indicated. The dashed lines indicate the predictions of Eq. (2), using the ℓ values obtained from the fits in a. Error bars on rescaled radius and contact angle are obtained using the errors on droplet height and contact radius described in Supplementary Method 2. Source data are provided as a Source Data file.

Using the ℓ values obtained from the Tanner fits above, we can then represent all the data onto a single master curve, as shown in Fig. 4a. The observed collapse, over more than two orders of magnitude in time and broad ranges of material parameters, shows the surprising robustness of Tanner's law in the vicinity of the critical point, despite the increasing roles of evaporation, gravity, and fluctuations.

Fig. 4: Near-critical Tanner masterplot and precursor-film thickness. a Dimensionless Tanner master curve, including the experiments at all temperatures. The dashed line corresponds to Eq. (4). The values of the logarithmic factor ℓ were first obtained by fitting the individual experimental data in Fig. 3a to Eq. (4). Error bars on rescaled radius are obtained using the errors on droplet height (through volume derivation) and contact radius described in Supplementary Method 2. b Extracted precursor-film thickness ϵ as a function of the temperature distance ΔT to the critical point, as obtained by fitting individual experimental profiles to numerical solutions of Eq. (1) (see Fig. 2c and d). The dashed line indicates the empirical power law $\epsilon=a\,(\Delta T/T_{\mathrm{c}})^{2.69}$ with a = 5.54 mm. Error bars on precursor-film thickness correspond to the maximum acceptable values to fit numerical profiles with the experimental ones. Source data are provided as a Source Data file.

Finally, Fig. 4b shows the extracted precursor-film thickness ϵ as a function of ΔT/Tc. Strikingly, over the considered temperature range, we observe a sharp decrease of ϵ from a fraction of a micrometer down to a nanometer, as the critical point is approached from above. Over a decade in the considered temperature range, this behavior is consistent with the empirical power law $\epsilon=a\,(\Delta T/T_{\mathrm{c}})^{2.69}$, where a = 5.54 mm. The wetting transition in our system being located far above the largest temperature studied here [37], we could have instead expected an increase of ϵ, since the precursor film is here made of the most-wetting phase [3,39]. Nevertheless—provided that one can extrapolate the definition of interfaces towards the critical point—the spreading parameter [1] is expected to strictly vanish at that point since the two fluid phases become indistinguishable media, which might be the reason underlying the observed behavior. Beyond revealing the universality and peculiarities of viscous spreading near a critical point, our results pave the way toward further investigations closer to that point, with the aim of addressing the increasing contributions of gravity, evaporation, and eventually thermal fluctuations [40,41,42].
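In practice, extracting ℓ from a measured trace amounts to rescaling (t, R, V) according to Eq. (3) and fitting Eq. (4); the sketch below (ours, with illustrative names, not the authors' analysis code) shows one way to do it.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_log_factor(t, R, V, v_cap):
    """Fit Eq. (4) to a single spreading experiment and return the logarithmic factor l.

    t (s), R (m), V (m^3) are arrays of equal length; v_cap = gamma/eta (m/s).
    """
    scale = (np.pi / (4.0 * V)) ** (1.0 / 3.0)   # Eq. (3) rescaling factor
    R_tilde = R * scale
    t_tilde = v_cap * t * scale
    y = R_tilde**10 - R_tilde[0]**10             # left-hand side of Eq. (4)
    tanner = lambda tt, ell: 10.0 * tt / (9.0 * ell)
    (ell,), _ = curve_fit(tanner, t_tilde, y, p0=[10.0])
    return ell
```

The precursor-film thickness then follows either from ℓ = ln(L/ϵ), once a macroscopic cut-off L is chosen, or, as done in the article, from fitting the full experimental profiles to numerical solutions of Eq. (1).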
Supplementary Method 1 presents the near-critical properties of the binary liquid used, and Supplementary Method 2 the technical details for the edge detection of interfaces and the data acquisition from image analysis. Source data are provided with this paper. Information on the numerical codes can be provided by the authors upon request.

1. De Gennes, P. G. Wetting: statics and dynamics. Rev. Mod. Phys. 57, 827 (1985).
2. Oron, A., Davis, S. H. & Bankoff, S. G. Long-scale evolution of thin liquid films. Rev. Mod. Phys. 69, 931 (1997).
3. Bonn, D., Eggers, J., Indekeu, J., Meunier, J. & Rolley, E. Wetting and spreading. Rev. Mod. Phys. 81, 739 (2009).
4. Tanner, L. The spreading of silicone oil drops on horizontal surfaces. J. Phys. D 12, 1473 (1979).
5. Huh, C. & Scriven, L. E. Hydrodynamic model of steady movement of a solid/liquid/fluid contact line. J. Colloid Interface Sci. 35, 85–101 (1971).
6. Voinov, O. Hydrodynamics of wetting. Fluid Dyn. 11, 714–721 (1976).
7. Greenspan, H. P. On the motion of a small viscous droplet that wets a surface. J. Fluid Mech. 84, 125–143 (1978).
8. Hocking, L. The spreading of a thin drop by gravity and capillarity. Q. J. Mech. Appl. Math. 36, 55–69 (1983).
9. Haley, P. J. & Miksis, M. J. The effect of the contact line on droplet spreading. J. Fluid Mech. 223, 57–81 (1991).
10. Brenner, M. & Bertozzi, A. Spreading of droplets on a solid surface. Phys. Rev. Lett. 71, 593 (1993).
11. Quéré, D. Wetting and roughness. Annu. Rev. Mater. Res. 38, 71–99 (2008).
12. Cazabat, A. M. & Guena, G. Evaporation of macroscopic sessile droplets. Soft Matter 6, 2591–2612 (2010).
13. Snoeijer, J. H. & Andreotti, B. Moving contact lines: scales, regimes, and dynamical transitions. Annu. Rev. Fluid Mech. 45, 269–292 (2013).
14. Jambon-Puillet, E. et al. Spreading dynamics and contact angle of completely wetting volatile drops. J. Fluid Mech. 844, 817–830 (2018).
15. Lopez, J., Miller, C. A. & Ruckenstein, E. Spreading kinetics of liquid drops on solids. J. Colloid Interface Sci. 56, 460–468 (1976).
16. Huppert, H. E. The propagation of two-dimensional and axisymmetric viscous gravity currents over a rigid horizontal surface. J. Fluid Mech. 121, 43–58 (1982).
17. Cazabat, A. & Stuart, M. C. Dynamics of wetting: effects of surface roughness. J. Phys. Chem. 90, 5845–5849 (1986).
18. Levinson, P., Cazabat, A., Stuart, M. C., Heslot, F. & Nicolet, S. The spreading of macroscopic droplets. Revue Phys. Appl. 23, 1009–1016 (1988).
19. Ehrhard, P. Experiments on isothermal and non-isothermal spreading. J. Fluid Mech. 257, 463–483 (1993).
20. Kavehpour, P., Ovryn, B. & McKinley, G. H. Evaporatively-driven Marangoni instabilities of volatile liquid films spreading on thermally conductive substrates. Colloids Surf. A Physicochem. Eng. Asp. 206, 409–423 (2002).
21. Carlson, A., Do-Quang, M. & Amberg, G. Modeling of dynamic wetting far from equilibrium. Phys. Fluids 21, 121701 (2009).
22. Laurila, T., Carlson, A., Do-Quang, M., Ala-Nissila, T. & Amberg, G. Thermohydrodynamics of boiling in a van der Waals fluid. Phys. Rev. E 85, 026320 (2012).
23. McGraw, J. D. et al. Slip-mediated dewetting of polymer microdroplets. Proc. Natl Acad. Sci. USA 113, 1168–1173 (2016).
24. Ausserré, D., Picard, A. & Léger, L. Existence and role of the precursor film in the spreading of polymer liquids. Phys. Rev. Lett. 57, 2671 (1986).
25. Leger, L., Erman, M., Guinet-Picard, A., Ausserre, D. & Strazielle, C. Precursor film profiles of spreading liquid drops. Phys. Rev. Lett. 60, 2390 (1988).
26. Chen, J. D. & Wada, N. Wetting dynamics of the edge of a spreading drop. Phys. Rev. Lett. 62, 3050 (1989).
27. Cormier, S. L., McGraw, J. D., Salez, T., Raphaël, E. & Dalnoki-Veress, K. Beyond Tanner's law: crossover between spreading regimes of a viscous droplet on an identical film. Phys. Rev. Lett. 109, 154501 (2012).
28. Bergemann, N., Juel, A. & Heil, M. Viscous drops on a layer of the same fluid: from sinking, wedging and spreading to their long-time evolution. J. Fluid Mech. 843, 1–28 (2018).
29. Pedersen, C., Niven, J. F., Salez, T., Dalnoki-Veress, K. & Carlson, A. Asymptotic regimes in elastohydrodynamic and stochastic leveling on a viscous film. Phys. Rev. Fluids 4, 124003 (2019).
30. Kumar, A., Krishnamurthy, H. R. & Gopal, E. S. R. Equilibrium critical phenomena in binary liquid mixtures. Phys. Rep. 98, 57–143 (1983).
31. Chaar, H., Moldover, M. R. & Schmidt, J. W. Universal amplitude ratios and the interfacial tension near consolute points of binary liquid mixtures. J. Chem. Phys. 85, 418–427 (1986).
32. Petit, J., Rivière, D., Kellay, H. & Delville, J. P. Break-up dynamics of fluctuating liquid threads. Proc. Natl Acad. Sci. USA 109, 18327–18331 (2012).
33. Saiseau, R. Thermo-hydrodynamique dans les systèmes critiques: instabilités, relaxation et évaporation. Ph.D. thesis, Université de Bordeaux (2020).
34. Chraibi, H., Lasseux, D., Wunenburger, R., Arquis, E. & Delville, J. P. Optohydrodynamics of soft fluid interfaces: optical and viscous nonlinear effects. Eur. Phys. J. E 32, 43–52 (2010).
35. Girot, A. et al. Conical interfaces between two immiscible fluids induced by an optical laser beam. Phys. Rev. Lett. 122, 174501 (2019).
36. Wunenburger, R. et al. Fluid flows driven by light scattering. J. Fluid Mech. 666, 273–307 (2011).
37. Casner, A. & Delville, J. P. Laser-sustained liquid bridges. EPL 65, 337 (2004).
38. Pedersen, C., Ren, S., Wang, Y., Carlson, A. & Salez, T. Nanobubble-induced flow of immersed glassy polymer films. Phys. Rev. Fluids 6, 114006 (2021).
39. Hennequin, Y., Aarts, D., Indekeu, J., Lekkerkerker, H. & Bonn, D. Fluctuation forces and wetting layers in colloid-polymer mixtures. Phys. Rev. Lett. 100, 178305 (2008).
40. Davidovitch, B., Moro, E. & Stone, H. A. Spreading of viscous fluid drops on a solid substrate assisted by thermal fluctuations. Phys. Rev. Lett. 95, 244505 (2005).
41. Willis, A. M. & Freund, J. Enhanced droplet spreading due to thermal fluctuations. J. Phys. Condens. Matter 21, 464128 (2009).
42. Nesic, S., Cuerno, R., Moro, E. & Kondic, L. Dynamics of thin fluid films controlled by thermal fluctuations. Eur. Phys. J. Spec. Top. 224, 379–387 (2015).

Acknowledgements: The authors thank Julie Jagielka, Eloi Descamps, Thomas Guérin, and Yacine Amarouchene for preliminary work and interesting discussions, as well as the LOMA mechanical and electronic workshops for their technical contributions. The authors acknowledge financial support from the European Union through the European Research Council under EMetBrown (ERC-CoG-101039103) grant to T.S. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. The authors also acknowledge financial support from the Agence Nationale de la Recherche under FISICS (ANR-15-CE30-0015-01) grant to J.-P.D.
and U.D., and EMetBrown (ANR-21-ERCC-0010-01), Softer (ANR-21-CE06-0029), and Fricolas (ANR-21-CE06-0039) grants to T.S. Finally, they thank the Soft Matter Collaborative Research Unit, Frontier Research Center for Advanced Material and Life Science, Faculty of Advanced Life Science at Hokkaido University, Sapporo, Japan.

Author affiliations: Univ. Bordeaux, CNRS, LOMA, UMR 5798, Talence, F-33400, France (Raphael Saiseau, Anwar Benjana, Ulysse Delabre, Thomas Salez & Jean-Pierre Delville); Laboratoire Matière et Systèmes Complexes, UMR 7057, CNRS, Université Paris Cité, Paris, F-75006, France (Raphael Saiseau); Mechanics Division, Department of Mathematics, University of Oslo, Oslo, 0316, Norway (Christian Pedersen & Andreas Carlson).

Author contributions: R.S., T.S., and J.-P.D. conceived the research plan. R.S. and A.B. performed the experiments, C.P. performed the simulations, and A.C. and U.D. participated in the data analysis with all the authors. The manuscript was written with the contributions of all the authors.

Correspondence to Raphael Saiseau, Thomas Salez or Jean-Pierre Delville.

Peer review information: Nature Communications thanks David Brutin and the other, anonymous, reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Citation: Saiseau, R., Pedersen, C., Benjana, A. et al. Near-critical spreading of droplets. Nat Commun 13, 7442 (2022). https://doi.org/10.1038/s41467-022-35047-1

Subjects: Applied physics and mathematics
Bremsstrahlung emission from high power laser interactions with constrained targets for industrial radiography HPL_EP HEDP and High Power Laser 2018 C. D. Armstrong, C. M. Brenner, C. Jones, D. R. Rusby, Z. E. Davidson, Y. Zhang, J. Wragg, S. Richards, C. Spindloe, P. Oliveira, M. Notley, R. Clarke, S. R. Mirfayzi, S. Kar, Y. Li, T. Scott, P. McKenna, D. Neely Journal: High Power Laser Science and Engineering / Volume 7 / 2019 Published online by Cambridge University Press: 25 April 2019, e24 Laser–solid interactions are highly suited as a potential source of high energy X-rays for nondestructive imaging. A bright, energetic X-ray pulse can be driven from a small source, making it ideal for high resolution X-ray radiography. By limiting the lateral dimensions of the target we are able to confine the region over which X-rays are produced, enabling imaging with enhanced resolution and contrast. Using constrained targets we demonstrate experimentally a $(20\pm 3)~\unicode[STIX]{x03BC}\text{m}$ X-ray source, improving the image quality compared to unconstrained foil targets. Modelling demonstrates that a larger sheath field envelope around the perimeter of the constrained targets increases the proportion of electron current that recirculates through the target, driving a brighter source of X-rays. Molecular characterization and identification of new sources of tolerance to submergence and salinity from rice landraces of coastal India R. Samal, P. S. Roy, M. K. Kar, B. C. Patra, S. S. C. Patnaik, J. N. Reddy Journal: Plant Genetic Resources / Volume 17 / Issue 3 / June 2019 Trait-specific characterization of rice landraces has significant potential for germplasm management, varietal identification and mining of novel gene/allele for various traits. In the current study, we have characterized 98 unique rice landraces collected from coastal regions of India, affected by submergence and salinity, based on Sub1 and Saltol quantitative trait loci (QTL) linked microsatellite markers. Among these genotypes, four genotypes (IC536558, IC536559, IC536604 and IC536604-1) collected from Kerala and two genotypes (AC34902 and IC324589) collected from West Bengal were identified with tolerance to submergence and salinity stress. A high level of genetic diversity of He = 0.349 and 0.529 at Sub1 and Saltol QTL region was detected by QTL-linked microsatellite markers, respectively. At Sub1 region one genotype, AC34902, was detected with maximum allelic similarity with FR13A, a known submergence tolerant variety. Besides, five genotypes (IC211188-1, IC536604-1, IC536604, IC536558 and IC536559) showed comparatively close genetic relationship with the salt tolerant variety FL478 for Saltol QTL and were clustered together in the neighbour joining dendrogram. Considering the haplotype structure, five genotypes (IC203801, IC203778, IC324584, IC413608 and IC413638) were identified which did not contain any common allele similar to FR13A but were still tolerant to submergence.
These individuals need further characterization for identification of new alleles responsible for their tolerance. Chapter 9 - Managing Environmental Utilisation Space in the Dutch Environment and Planning Act from PART V - ECOSYSTEM APPROACHES AND ADAPTIVE MANAGEMENT By Lolke S. Braaksma, Kars J. de Graaf Edited by Helle Tegner Anker, Birgitte Egelund Olsen Book: Sustainable Management of Natural Resources Published by: Intersentia Published online: 31 January 2019 Print publication: 09 October 2018, pp 139-154 This chapter analyses how the environmental utilisation space concept is implemented in the upcoming Dutch Environment and Planning Act and to what extent this concept contributes to the goal of sustainable development. It first studies the origins and application of the environmental utilisation space concept in Dutch environmental law and its relationship with sustainable development and the ecosystem approach. The chapter continues with an analysis of the implementation of this concept by the Dutch legislator in the Crisis and Recovery Act of 2010 and the future Environment and Planning Act, with an emphasis on the role of municipalities in managing environmental utilisation space in environmental plans and the implementation of a programmatic approach. The chapter finishes with a summary of the obstacles and incentives relevant when implementing the environmental utilisation space concept in the future Environment and Planning Act. The current legal framework in the Netherlands is considered insufficient to provide the government with the instruments needed to actively work towards a sustainable society while allowing for economic development. One of the main reasons is the lack of an integral and coherent approach for regulating the physical environment that people live in. Environmental principles, standards and values are spread across many different legislative acts and provisions focus either on a particular subject or address one particular environmental issue. The Dutch legislator therefore adopted the Environment and Planning Act (EPA) – which is anticipated to come into force in 2021 – in which environmental, spatial planning and nature conservation acts are integrated into one act. The EPA incorporates existing legal instruments, and adds new elements, striving towards a sustainable society. It also aims to provide more effective and efficient tools to implement EU environmental law in the Dutch legal order. One of the guiding concepts for the EPA is the (environmental) utilisation space concept, which functions as a general notion for the legislator as well as the public administration when designing environmental legislation and policies. Utilisation space is defined in the Explanatory Memorandum of the EPA as the 'the legal leeway that exists in a specific area to allow for the realisation of (economic) activities'. According to the legislator, it has a slightly broader meaning than the term environmental utilisation space, which refers to the legal leeway that exists only in relation to the existing legal requirements to protect the environment. Proton probing of laser-driven EM pulses travelling in helical coils HEDP and HPL 2016 H. Ahmed, S. Kar, A.L. Giesecke, D. Doria, G. Nersisyan, O. Willi, C.L.S. Lewis, M. 
Borghesi Published online by Cambridge University Press: 13 February 2017, e4 The ultrafast charge dynamics following the interaction of an ultra-intense laser pulse with a foil target leads to the launch of an ultra-short, intense electromagnetic (EM) pulse along a wire connected to the target. Due to the strong electric field (of the order of $\text{GV m}^{-1}$ ) associated to such laser-driven EM pulses, these can be exploited in a travelling-wave helical geometry for controlling and optimizing the parameters of laser accelerated proton beams. The propagation of the EM pulse along a helical path was studied by employing a proton probing technique. The pulse-carrying coil was probed along two orthogonal directions, transverse and parallel to the coil axis. The temporal profile of the pulse obtained from the transverse probing of the coil is in agreement with the previous measurements obtained in a planar geometry. The data obtained from the longitudinal probing of the coil shows a clear evidence of an energy dependent reduction of the proton beam divergence, which underpins the mechanism behind selective guiding of laser-driven ions by the helical coil targets. Radiocarbon Dating of Late Pleistocene Marine Shells from the Southern North Sea F S Busschers, F P Wesselingh, R H Kars, M Versluijs-Helder, J Wallinga, J H A Bosch, J Timmner, K G J Nierop, T Meijer, F P M Bunnik, H De Wolf Journal: Radiocarbon / Volume 56 / Issue 3 / 2014 This article presents a set of Late Pleistocene marine mollusk radiocarbon (AMS) age estimates of 30–50 14C kyr BP, whereas a MIS5 age (>75 ka) is indicated by quartz and feldspar OSL dating, biostratigraphy, U-Th dating, and age-depth relationships with sea level. These results indicate that the 14C dates represent minimum ages. The age discrepancy suggests that the shells are contaminated by younger carbon following shell death. The enigmatic 14C dates cannot be "solved" by removing part of the shell by stepwise dissolution. SEM analysis of the Late Pleistocene shells within a context of geologically younger (recent/modern, Holocene) and older (Pliocene) shells shows the presence of considerable amounts of an intracrystalline secondary carbonate precipitate. The presence of this precipitate is not visible using XRD since it is of the same (aragonitic) polymorph as the original shell carbonate. The combination of nanospherulitic-shaped carbonate crystals, typical cavities, and the presence of fatty acids leads to the conclusion that the secondary carbonate, and hence the addition of younger carbon, has a bacterial origin. As shell material was studied, this study recommends an assessment of possible bacterial imprints in other materials like bone collagen as well. Nuclear Weak Processes in Presupernova Stars A. Ray, T. Kar, S. Sarkar, S. Chakravarti The structure and the size of the core of massive presupernova stars are determined by the electron fraction and entropy of the core during its late stages of evolution; these in turn affect the subsequent evolution during gravitational collapse and supernova explosion phases. Beta decay and electron capture on a number of neutron rich nuclei can contribute substantially towards the reduction of the entropy and possibly the electron fraction in the core. Methods for calculating the weak transition rates for a number of nuclei for which no reliable rates exist (particularly for A > 60) are outlined. 
The calculations are particularly suited for presupernova matter density ($\rho = 10^{7}$–$10^{9}$ g/cc) and temperature ($T = 2$–$6 \times 10^{9}$ °K). We include besides the contributions from the ground state and the known excited states, the Gamow-Teller (GT) resonance states (e.g. for beta decay rates, the GT$^{+}$ states) in the mother nucleus which are populated thermally. For the GT strength function for transitions from the ground state (as well as excited states) we use a sum rule calculated by the spectral distribution method where the centroid of the distribution is obtained from experimental data on (p,n) reactions. The contribution of the excited levels and GT$^{+}$ resonances turn out to be important at high temperatures which may prevail in presupernova stellar cores. Spectroscopic Characterization of Flame-Generated 2-D Carbon Nano-Disks Patrizia Minutolo, Mario Commodo, Gianluigi De Falco, Rosanna Larciprete, Andrea D'Anna Journal: MRS Online Proceedings Library Archive / Volume 1726 / 2015 Published online by Cambridge University Press: 27 July 2015, mrsf14-1726-j07-12 In this work we produce atomically thin carbon nanostructures which have a disk-like shape when deposited on a substrate. These nanostructures have intermediate characteristics between a graphene island and a molecular compound and have the potentiality to be used either as they are, or to become building blocks for functional materials or to be manipulated and engineered into composite layered structures. The carbon nanostructures are produced in a premixed ethylene/air flame with a slight excess of fuel with respect to the stoichiometric value. The size distribution of the produced compounds in aerosol phase has been measured on line by means of a differential mobility analyzer (DMA) and topographic images of the structures deposited on mica disks were obtained by Atomic Force Microscopy. Raman spectroscopy and XPS have been used to characterize their structure and the electronic and optical properties were obtained combining on-line photoionization measurements with Cyclic Voltammetry, light absorption and photoluminescence. When deposited on the mica substrate the carbon compounds assume the shape of an atomically thin disk with in plane diameter of about 20 nm. Carbon nano-disks consist of a network of small aromatic island with in plane length, La, of about 1 nm. Raman spectra evidence a significant amount of disorder which is in a large part due to the quantum confinement in the aromatic islands. Nano-disks contain small percentage of sp3 and the O/C ratio is lower than 6%. They furthermore present interesting UV and visible photoluminescence properties. Low Temperature, Digital Control, Fast Synthesis of 2-D BNNSs and Their Application for Deep UV Detectors Ali Aldalbahi, Renyauan Yang, Eric Yiming Li, Muhammad Sajjad, Yihau Chen, Peter Feng This paper reports low temperature, digital control, fast synthesis of high-quality boron nitride nanosheets (BNNSs) and their electronic device application. Raman scattering spectroscopy, X-ray diffraction (XRD), Transmission electron microscopy (TEM) are used to characterize the BNNSs. With the synthesized various BNNSs, two prototypic types of deep UV photodetectors have been fabricated, and sensitivity, response and recovery times, as well as repeatability have been characterized. Effects of period and thickness of BNNSs on the properties of prototypic photodetectors are also discussed.
Influence of Cu film microstructure on MOCVD growth of BN Michael Snure, Shivashankar Vangala, Jodie Shoaf, Jianjun Hu, Qing Paduano Published online by Cambridge University Press: 30 June 2015, mrsf14-1726-j07-01 Boron nitride is of great interest as a 2 dimensional (2D) insulator for use as an atomically flat substrate, gate dielectric and tunneling barrier. At this point the most promising and widely used approach for growth of mono-to-few layer BN is metal catalyzed chemical vapor deposition (CVD). Bulk Cu foil has been the most popular metal substrate for growth of h-BN and graphene, as such there are well developed processes for substrate preparation and growth. As an alternative thin Cu films deposited on an insulating substrate have some advantages over foil, including more uniform thermal contact with substrate heater, better mechanical stability, transfer free processing, and selective area growth. However, Cu films deposited on SiO2 present their own unique problems like Cu SiO2 stability and small Cu grain size. Here we present results on the growth on few-layer BN by metal organic chemical vapor deposition (MOCVD) on Cu thin films on SiO2/Si. We explore the effects of substrate preparation and annealing conditions on the Cu morphology in order to understand the impact on the BN. To minimize the effects of Cu SiO2 interdiffusion, we investigate the use of a Ni buffer layers. BN films were studied after transfer to SiO2/Si films using Raman and AFM to determine the impact of Cu film microstructure on the morphology of few layer BN films. Site Dependent Hydrogenation in Graphynes: A Fully Atomistic Molecular Dynamics Investigation Pedro A. S. Autreto, Douglas S. Galvao Published online by Cambridge University Press: 15 May 2015, mrsf14-1726-j02-02 Graphyne is a generic name for a carbon allotrope family of 2D structures, where acetylenic groups connect benzenoid rings, with the coexistence of sp and sp2 hybridized carbon atoms. In this work we have investigated, through fully atomistic reactive molecular dynamics simulations, the dynamics and structural changes of the hydrogenation of α, β, and γ graphyne forms. Our results showed that the existence of different sites for hydrogen bonding, related to single and triple bonds, makes the process of incorporating hydrogen atoms into graphyne membranes much more complex than the graphene ones. Our results also show that hydrogenation reactions are strongly site dependent and that the sp-hybridized carbon atoms are the preferential sites to chemical attacks. In our cases, the effectiveness of the hydrogenation (estimated from the number of hydrogen atoms covalently bonded to carbon atoms) follows the α, β, γ-graphyne structure ordering. Colloidal PbS Nanosheets with Tunable Energy Gaps Zhoufeng Jiang, Simeen Khan, Shashini Premathilake, Ghadendra Bhandari, Kamal Subedi, Yufan He, Matthew Leopold, Nick Reilly, Peter Lu, Alexey Zayak, Liangfeng Sun Ultrathin colloidal PbS nanosheets are synthesized using organometallic precursors with chloroalkane cosolvents, resulting in tunable thicknesses ranging from 1.2 nm to 4.6 nm. We report the first thickness-dependent photoluminescence spectra from lead-salt nanosheets. The one-dimensional confinement energy of these quasi-two-dimensional nanosheets is found to be proportional to 1/L instead of 1/L2 (L is the thickness of the nanosheet), which is consistent with results calculated using density functional theory as well as tight-binding theory. 
Metallic Binary Copper Chalcogenides with Orthorhombic Layered Structure Kaya Kobayashi, Shinya Kawamoto, Jun Akimitsu Chalcogenide materials have regained attention after the recent recognition of the compatibility of transition metal dichalcogenides with graphene. Additionally, there has been a recent appreciation for the rich variety of properties they support due to the anomalies in the materials' intrinsic band structure. These materials generally have layered structures and weak interlayer connection through the chalcogen layer and its van der Waals type bonding. We have synthesized orthorhombic copper telluride and measured its electrical transport properties. The results of these measurements reveal that the conduction is metallic in both the in-plane and out-of-plane directions. The range of stability of this structure is examined along with the lattice constants. The independence of the resistivity in samples to changes in excess copper indicates that the transport is essentially within the conducting planes. This result shows that the material hosts two-dimensional character likely due to its covalent interlayer bonding. Electronic properties of monolayer molybdenum dichalcogenides under strains J. Sugimoto, K. Shintani The electronic band structures of monolayer molybdenum dichalcogenides, MoS2, MoSe2, and MoTe2 under either uniaxial or biaxial strain are calculated using first-principles calculation with the GW method. The imposed uniaxial strain is in the zigzag direction in the honeycomb lattice whereas the imposed biaxial strain is in the zigzag and armchair directions. It is found that the band gaps of these dichalcogenides almost linearly increase with the decrease of the magnitude of compressive strain, reach their maxima at some compressive strain, and then decrease almost linearly with the increase of tensile strain. It is also found their maximum band gaps are direct bandgaps. Novel Nanoscroll Structures from Carbon Nitride Layers Eric Perim, Douglas S. Galvao Nanoscrolls consist of sheets rolled up into a papyrus-like form. Their open ends produce great radial flexibility, which can be exploited for a large variety of applications, from actuators to hydrogen storage. They have been successfully synthesized from different materials, including carbon and boron nitride. In this work we have investigated, through fully atomistic molecular dynamics simulations, the dynamics of scroll formation for a series of graphene-like carbon nitride (CN) two-dimensional systems: g-CN, triazine-based (g-C3N4), and heptazine-based (g-C3N4). Carbon nitride (CN) structures have been attracting great attention since their prediction as super hard materials. Recently, graphene-like carbon nitride (g-CN) structures have been synthesized with distinct stoichiometry and morphologies. By combining these unique CN characteristics with the structural properties inherent to nanoscrolls new nanostructures with very attractive mechanical and electronic properties could be formed. Our results show that stable nanoscrolls can be formed for all of CN structures we have investigated here. As the CN sheets have been already synthesized, these new scrolled structures are perfectly feasible and within our present-day technology. Effect of V/III ratio on the growth of hexagonal boron nitride by MOCVD Qing S. 
Paduano, Michael Snure, Jodie Shoaf Published online by Cambridge University Press: 24 February 2015, mrsf14-1726-j04-26 In this report, we describe a process for achieving atomically smooth, few-layer thick, hexagonal boron nitride (h-BN) films on sapphire substrates by MOCVD, using Triethylboron (TEB) and NH3 as precursors. Two different growth modes have been observed depending on the V/III ratio. Three-dimensional (3D) island growth is dominant in the low V/III range; in this range growth rate decreases with increasing deposition temperature. This island growth mode transitions to a self-terminating growth mode when V/III > 2000, over the entire deposition temperature range studied (i.e. 1000-1080oC). Raman spectroscopy verifies the h-BN phase of these films, and atomic force microscopy measurements confirm that the surfaces are smooth and continuous, even over atomic steps on the surface of the substrate. Using X-ray reflectance measurements, the thickness of each film grown under a range of conditions and times was determined to consistently terminate at 1.6nm, with a variation of less than 0.2 nm. Thus we have identified a self-terminating growth mode that enables robust synthesis of h-BN with highly uniform and reliable thickness on non-metal catalyzed substrates. Furthermore, this self-terminating growth behavior has shown signs of transitioning to continuous growth as deposition temperature increases. Resonant Raman Scattering in MoS2 Katarzyna Gołasa, Magdalena Grzeszczyk, Przemysław Leszczyński, Karol Nogajewski, Marek Potemski, Adam Babiński Resonant Raman scattering in molybdenum disulfide (MoS2) is studied as a function of the sample thickness. Optical emission from 1ML, 2ML, 3ML and bulk MoS2 is investigated both at room and at liquid helium temperature. The experimental results are analysed in terms of the recently proposed attribution of the Raman peaks to multiphonon replica involving transverse acoustic phonons from the vicinity of the high-symmetry M point of the MoS2 Brillouin zone. It is shown that the corresponding processes are quenched in a few monolayer samples much stronger than the modes involving longitudinal acoustic phonons. It is also shown that along with the disappearance of multiphonon replica, the Raman modes, which are in-active in bulk become active in a few-monolayer flakes. Demonstration of laser pulse amplification by stimulated Brillouin scattering E. Guillaume, K. Humphrey, H. Nakamura, R. M. G. M. Trines, R. Heathcote, M. Galimberti, Y. Amano, D. Doria, G. Hicks, E. Higson, S. Kar, G. Sarri, M. Skramic, J. Swain, K. Tang, J. Weston, P. Zak, E. P. Alves, R. A. Fonseca, F. Fiúza, H. Habara, K. A. Tanaka, R. Bingham, M. Borghesi, Z. Najmudin, L. O. Silva, P. A. Norreys Journal: High Power Laser Science and Engineering / Volume 2 / 01 July 2014 Published online by Cambridge University Press: 25 September 2014, e33 The energy transfer by stimulated Brillouin backscatter from a long pump pulse (15 ps) to a short seed pulse (1 ps) has been investigated in a proof-of-principle demonstration experiment. 
The two pulses were both amplified in different beamlines of a Nd:glass laser system, had a central wavelength of 1054 nm and a spectral bandwidth of 2 nm, and crossed each other in an underdense plasma in a counter-propagating geometry, off-set by $10^\circ$. It is shown that the energy transfer and the wavelength of the generated Brillouin peak depend on the plasma density, the intensity of the laser pulses, and the competition between two-plasmon decay and stimulated Raman scatter instabilities. The highest obtained energy transfer from pump to probe pulse is 2.5%, at a plasma density of $0.17 n_{cr}$, and this energy transfer increases significantly with plasma density. Therefore, our results suggest that much higher efficiencies can be obtained when higher densities (above $0.25 n_{cr}$) are used. Viral aetiology and clinico-epidemiological features of acute encephalitis syndrome in eastern India S. K. RATHORE, B. DWIBEDI, S. K. KAR, S. DIXIT, J. SABAT, M. PANDA Journal: Epidemiology & Infection / Volume 142 / Issue 12 / December 2014 Published online by Cambridge University Press: 23 January 2014, pp. 2514-2521 This study reports clinico-epidemiological features and viral agents causing acute encephalitis syndrome (AES) in the eastern Indian region through hospital-based case enrolment during April 2011 to July 2012. Blood and CSF samples of 526 AES cases were investigated by serology and/or PCR. Viral aetiology was identified in 91 (17·2%) cases. Herpes simplex virus (HSV; types I or II) was most common (16·1%), followed by measles (2·6%), Japanese encephalitis virus (1·5%), dengue virus (0·57%), varicella zoster virus (0·38%) and enteroviruses (0·19%). Rash, paresis and cranial nerve palsies were significantly higher (P < 0·05) with viral AES. Case-fatality rates were 10·9% and 6·2% in AES cases with and without viral aetiology, respectively. Simultaneous infection of HSV I and measles was observed in seven cases. This report provides the first evidence on viral aetiology of AES viruses from eastern India showing dominance of HSV that will be useful in informing the public health system.
Orbital stability of periodic waves for the Klein-Gordon-Schrödinger system. Fábio Natali (1) and Ademir Pastor (2). (1) Universidade Estadual de Maringá - UEM, Avenida Colombo, 5790, CEP 87020-900, Maringá; (2) IMECC–UNICAMP, Rua Sérgio Buarque de Holanda, 651, CEP 13083-859, Campinas, SP, Brazil. Discrete & Continuous Dynamical Systems, March 2011, 31(1): 221-238. doi: 10.3934/dcds.2011.31.221. Received December 2009; Revised April 2011; Published June 2011. This article deals with the existence and orbital stability of a two-parameter family of periodic traveling-wave solutions for the Klein-Gordon-Schrödinger system with Yukawa interaction. The existence of such a family of periodic waves is deduced from the Implicit Function Theorem, and the orbital stability is obtained from arguments due to Benjamin, Bona, and Weinstein. Keywords: Periodic waves, orbital stability, Klein-Gordon-Schrödinger system. Mathematics Subject Classification: Primary: 35B10, 35B35; Secondary: 35Q9. Citation: Fábio Natali, Ademir Pastor. Orbital stability of periodic waves for the Klein-Gordon-Schrödinger system. Discrete & Continuous Dynamical Systems, 2011, 31 (1) : 221-238. doi: 10.3934/dcds.2011.31.221
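For context, the Klein-Gordon-Schrödinger system with Yukawa interaction referred to here is commonly written, up to sign and normalization conventions, as $$\begin{aligned} i u_t + u_{xx} &= -uv, \\ v_{tt} - v_{xx} + v &= |u|^2, \end{aligned}$$ where \(u\) is the complex-valued (Schrödinger) field and \(v\) is the real-valued (Klein-Gordon) field; the periodic traveling waves studied in the article are solutions of this coupled system.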
Quality visual landmark selection based on distinctiveness and repeatability Masamichi Shimoda & Kimotoshi Yamazaki ROBOMECH Journal volume 2, Article number: 16 (2015) In this study, a method for landmark selection from image streams captured by a camera mounted on a mobile robot is described. To select stable visual landmarks for mobile robots, two measures regarding landmark "visibility" are considered: distinctiveness and repeatability. In the proposed method, several neighboring feature points form a visual landmark and their distinctiveness is evaluated in each image. Then, under the assumption that a robot can actively seek a feasible landmark, the repeatability of the landmark is evaluated. Weighting techniques using feature-position relations are proposed, and landmark selection criteria using a variation coefficient are employed. These allow us to select high-visibility landmarks. Experimental results obtained using a real mobile robot demonstrate the effectiveness of the proposed method. Mobile robots can have an extensive workspace in both indoor and outdoor environments. Thus, a reliable method for self-localization is very important. Several studies have examined the use of cameras and laser range finders to achieve environmental recognition and localization [1, 2]. The purpose of this study is to establish a framework for collecting visual landmarks from image streams. The image streams are assumed to be captured by a camera mounted on a mobile robot, and visual landmarks with high visibility are automatically extracted from the image streams. To understand "visibility" in the context of this study, we focus on distinctiveness and repeatability. Distinctiveness is represented by the uniqueness of a local image region in a robot's workspace, and repeatability is represented by the robustness of local image regions against possible viewpoint changes and occlusion. Both distinctiveness and repeatability are important for mobile robots because landmark detection might fail under various uncertain situations, e.g., accumulated positioning error and the kidnapping problem. One conventional method to avoid such situations is the use of an image sequence [3, 4], which allows the determination of the current position with fast and light processing. However, it has a weakness: large occlusions or scene changes might cause a failure. Image features have also been used to provide reliable local information. For mobile robot navigation, hundreds of features detected from a single image have been used as landmarks [5–8]. Ogawa et al. [9] proposed a landmark selection method for robot navigation with a single camera. They extracted image features from each image and directly used them to describe a scene. Several studies have addressed the related problem of selecting which feature points to keep. Thompson et al. [10] proposed the use of landmarks selected automatically from panoramic images. "Turn Back and Look" behavior was used to evaluate potential landmarks. Normalized correlation enhanced a landmark's robustness against dramatic illumination change. Knopp et al. [11] proposed a method to suppress confusing features for increasing the success rate of localization. Hafez et al. [12] targeted a crowded urban environment and proposed a method to learn useful features through multiple experiences. Some other studies have used local image regions as landmarks [13, 14].
Each image region includes some distinctive visual information, e.g., dozens of feature points. In contrast to the straightforward use of feature points, this setting allows easy viewpoint selection with a limited viewing field. In addition, if we annotate each landmark, they can be used for more semantic purposes. Both are suitable for autonomous mobile robots. In this study, we define a visual landmark using a local image region comprising dozens of neighboring image features. It is assumed that the robot travels multiple times on predefined courses, and useful landmarks are gradually selected during navigation. We propose a method to select image regions with high distinctiveness and repeatability. Visual landmarks selected via the proposed method enable mobile robots to identify location using densely packed knowledge. However, well-designed evaluation criteria are required to select a quality landmark. One contribution of this study is to provide easily available criteria. Through experiments, we found that weighting each feature point in a local image region is important to describe a landmark with high distinctiveness and repeatability. The weight value is defined by the number of detections among input images. A high weight value is given to the feature point that is found in all images of a common scene from different observation points. The remainder of this paper is organized as follows. "Visual landmarks" explains our representation of a visual landmark. "Landmark candidates collection" introduces landmark candidate selection, and "Landmark selection criteria" proposes landmark selection criteria. "Experiments" presents experimental results, and "Conclusion" concludes the paper. Visual landmarks Landmark availability The quality of landmarks should be considered when extracting visual landmarks from image streams. In this study, we focus on the following four characteristics: The landmark should be easy to distinguish from other parts of the scenes. The landmark should be robust against occlusion. There should be no significant difference of appearance even if the viewpoint changes. The landmark should belong to a motionless object. The above are based on conventional ideas for robust navigation. Item (1) conveys that distinguishable local image regions are easy to find from various viewpoints and capturing conditions. In addition, it suggests a way to eliminate confusing and redundant landmarks found in a scene. Item (2) is essentially achieved by using local image regions; however, it would be desirable to preemptively evaluate the possibility of occlusion. Item (3) is mostly applicable to mobile robots because a moving trajectory will not necessarily be the same for different navigations. Item (4) causes landmark deprivation, which negatively affects the reliability of self-localization. Here, item (1) is associated with "distinctiveness," and items (2) to (4) are related to "repeatability." Landmarks that satisfy distinctiveness and repeatability are considered to have high "visibility." The proposed method selects quality visual landmarks in a step-by-step manner. Distinctive feature region extraction based on feature point grouping Image feature descriptions have been actively studied; therefore, we are now able to use high performance descriptors [15–17]. Since a tiny image region is required for many descriptors, using a group of image features affords good object detection performance that is robust against occlusion [18]. 
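As a rough illustration only (the class and field names below are ours, not taken from the paper), such a grouped-feature landmark can be thought of as a small container holding the region, its features, and the bookkeeping later used for weighting:

from dataclasses import dataclass
import numpy as np

@dataclass
class Landmark:
    # One visual landmark: a rectangular region built from neighboring features.
    bbox: tuple                    # (x_min, y_min, x_max, y_max) in image coordinates
    keypoints: np.ndarray          # (k, 4) rows of (x, y, scale, angle), one per feature
    descriptors: np.ndarray        # (k, 128) SIFT descriptors
    weights: np.ndarray = None     # per-feature detection counts used as weights
    detections: int = 0            # number of images in which the region was found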
In this study, to generate a stable visual landmark, a rectangular region with dense image features is defined. The procedure to obtain a visual landmark is as follows. SIFT features are extracted from an input image. The detection criteria are the same as those described in [15]. Next, one feature is selected, and its neighboring features are searched. If the Euclidean distance between the selected feature and a neighboring feature in image coordinates is less than the predefined threshold D, they belong to the same group. Using the same procedure, another feature whose distance from the neighboring feature is less than the threshold is added to the group. This procedure enables the search for a cluster of image features. Finally, a circumscribed rectangular box that includes the cluster is generated as a local feature region of focus. A local feature region is not necessarily required to have extremely dense feature points. If a landmark comprises highly distinctive features, it might have high visibility even if there are a less number of high-visibility features. However, a certain level of density is required; thus, parameter D is defined. The abovementioned procedure might produce an uninformative image region comprising low distinctive features. Moreover, image regions without repeatability might be selected. To create a quality visual landmark, the feature region selection process is performed according to the procedure explained in "Landmark candidates collection" and "Landmark selection criteria". Landmark candidates collection Landmark selection procedure. Each landmark (red rectangular box) comprises dozens of image feature points. Through several phases, highly distinctive landmarks are selected, e.g., mutual consistency checks ensure the quality of the landmark Landmark selection procedure Figure 1 shows the landmark selection procedure. First, we outline the procedure. It consists of three phases: Feature region detection: SIFT features are extracted from an image, and rectangular regions that contain a feature cluster are selected ("Distinctive feature region extraction based on feature point grouping"). Landmark candidate selection: Landmark candidate selection comprises two processes: small region elimination (explained below) and duplication avoidance (explained in "SIFT feature matching"). Landmark selection: Note that only one image is considered in the above two phases. As we must select landmarks with high repeatability, robustness against viewpoint changes should be considered. Thus, the "Matching" process explained in "SIFT feature matching" is performed. Landmark repeatability is guaranteed by using dozens of images that capture the same scene from various viewpoints. Here, we describe the small region elimination process. After the detection of local feature regions, the area size S of each region is calculated by image coordinates. Then, regions with S less than the predefined threshold \(S_s\) are eliminated. However, if a smaller region partly overlaps another larger region, the smaller region is considered over the larger region. The landmarks are also eliminated when the resulting region sizes are greater than the predefined threshold \(S_i\). SIFT feature matching The visual landmark used in this study comprises dozens of SIFT features. A SIFT feature is described by a 128-dimensional vector, and the representation is invariant to scale, translation, and rotation. In addition, its robustness against illumination is useful for robots in outdoor environments. 
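The grouping step described above amounts to a single-linkage clustering of keypoint positions with distance threshold D, followed by a circumscribed-rectangle and area check. A minimal sketch under these assumptions (pure NumPy, keypoint coordinates given as an array; the overlap handling between small and large regions is omitted):

import numpy as np
from collections import deque

def group_features(points, D):
    # Single-linkage grouping: points whose chained pairwise distance stays
    # below D end up in the same cluster. points is an (N, 2) array of (x, y).
    n = len(points)
    visited = np.zeros(n, dtype=bool)
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        visited[seed] = True
        queue, members = deque([seed]), []
        while queue:
            i = queue.popleft()
            members.append(i)
            dist = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dist < D) & ~visited)[0]:
                visited[j] = True
                queue.append(j)
        clusters.append(members)
    return clusters

def feature_regions(points, D, S_s, S_l):
    # Circumscribed rectangle of each cluster, kept only if its area S
    # satisfies S_s <= S <= S_l (small and overly large regions are discarded).
    regions = []
    for members in group_features(points, D):
        pts = points[members]
        (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
        if S_s <= (x1 - x0) * (y1 - y0) <= S_l:
            regions.append((x0, y0, x1, y1, members))
    return regions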
Note that feature-to-feature matching is performed for both landmark detection and selection. We apply two types of matching calculation. One is performed between two local feature regions cut from one input image to remove duplicate textures in the same scene. The other is applied for searching a local feature region from an input image to find one registered feature region from a present scene. We refer to the former matching as "Duplication Check" and the latter as "Matching." Positional relationship between feature point and reference point. Using direction and scale information of SIFT features extracted in a training image, the position of a reference point is estimated in an input image In Duplication Check, SIFT features are extracted from an image, and then, the local feature regions are generated. Let I be an image captured in a robot's workspace. Let \(\mathbf F_A = \{ \mathbf f^{(A)}_1, \mathbf f^{(A)}_2, \dots , \mathbf f^{(A)}_{N} \}\) be one local feature region extracted from I, where \(\mathbf f\) is a feature vector that corresponds to a feature point. Similarly, let \(\mathbf F_B = \{ \mathbf f^{(B)}_1, \mathbf f^{(B)}_2, \dots , \mathbf f^{(B)}_{M} \}\) be another local feature region, where \(N < M\). To calculate the similarity between \(\mathbf F_A\) and \(\mathbf F_B\), a feature vector \(\mathbf f^{(A)}_n\) is specified from \(\mathbf F_A\) and the Euclidean distances with all of feature vectors in \(\mathbf F_B\) are calculated. A feature vector \(\mathbf f^{(B)}_m\) with the minimum distance from \(\mathbf f^{(A)}_n\) is specified. If the distance is less than a pre-defined threshold, \(\mathbf f^{(A)}_n\) is considered to have correspondence. For all feature vectors in \(\mathbf F_A\), if the number of correspondences is greater than the pre-defined threshold, the two feature regions are eliminated because they are too similar to represent an independent region. In Matching, the distance calculation is the same as that in Duplication Check. However, another distance threshold \(b_2\), which is looser than \(b_1\), is used. Then, a consistency check is performed against the resulting correspondences. First, the center of gravity of a local feature region is set as a reference point. As shown in the upper part of Fig. 2, a positional vector from each feature point to the reference point is calculated. The vector is transferred into a corresponding feature point extracted from an input image. Thus, the position of a reference point can be estimated in the input image. A SIFT feature contains information about intensity, direction, and scale; therefore, position (X, Y) is calculated using the following equations: $$\begin{aligned} \begin{array}{lll} X = x_i - \displaystyle {\frac{\sigma _i}{\sigma _l}} \times \sqrt{\Delta x^2 + \Delta y^2} \times \cos (\theta + \theta _l - \theta _i) , \\ Y = y_i - \displaystyle {\frac{\sigma _i}{\sigma _l}} \times \sqrt{\Delta x^2 + \Delta y^2} \times \sin (\theta + \theta _l - \theta _i) , \\ \theta = \tan ^{-1} \displaystyle {\frac{\Delta y}{\Delta x}} , \end{array} \end{aligned}$$ where \(\sigma _l\) and \(\theta _l\) are the scale and angle of a feature point, respectively. In addition, \(\sigma _i\) and \(\theta _i\) are the same variables for a feature point in the input image; \(x_i\) and \(y_i\) are coordinates of the point in the image; and \((\Delta x, \Delta y)\) is a positional vector. 
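A short sketch of Eq. (1) may be helpful; it assumes all angles are already in radians (OpenCV keypoints store angles in degrees and would need converting) and uses the quadrant-aware arctan2 in place of \(\tan^{-1}\):

import numpy as np

def estimate_reference_point(x_i, y_i, sigma_i, theta_i,
                             sigma_l, theta_l, dx, dy):
    # (x_i, y_i, sigma_i, theta_i): matched keypoint in the input image.
    # (sigma_l, theta_l): scale and angle of the corresponding training keypoint.
    # (dx, dy): positional vector between that keypoint and the region's
    #           reference point in the training image.
    theta = np.arctan2(dy, dx)
    dist = np.hypot(dx, dy)
    scale = sigma_i / sigma_l
    X = x_i - scale * dist * np.cos(theta + theta_l - theta_i)
    Y = y_i - scale * dist * np.sin(theta + theta_l - theta_i)
    return X, Y

Applying this to every matched feature of a region yields a set of estimated reference points whose concentration is then tested as described next.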
If the number of estimated reference points, which are concentrated in a circle of radius d, is greater than the pre-defined threshold \(m_2\), the local feature region is considered to have correspondence. The above idea is inspired by the implicit shape model [19], which is used for generic object recognition. Such positional relations are useful for eliminating mismatching when the similarity value becomes high with feature-to-feature correspondence [20]. Landmark selection criteria Several local feature regions are selected through the procedure described in "SIFT feature matching", which considers distinctiveness. In other words, these feature regions satisfy item (1) ("Landmark availability"). Next, these regions are screened relative to repeatability based on items (2) to (4). In this study, we have attempted to develop a visual function for an autonomous mobile robot. One assumption is that we can deploy an autonomous robot that moves in a workspace. Scene observation at various viewpoints enhances the quality of knowledge used for visual navigation. Based on the above discussion, landmark selection with multiple observations is performed. In other words, a camera is mounted on a robot, and n number of images are captured for one target scene while the robot moves. Using these images, we employ the following four landmark selection methods. Pairwise comparison of local feature regions [13]. Repeatability of local feature regions in input images. Counting individual local feature correspondences in input images. Using weight coefficient. The details of these methods are described in the following order. (a) Pairwise comparison of local feature regions Here, "Duplication Check" techniques described in "SIFT feature matching" are used. First, one local feature region is selected and its similarity with another local feature region in another image is calculated. If the similarity value (i.e., the number of matched feature points) of the most similar region is greater than a predefined threshold, the two regions are associated (dark red line in Fig. 3). By applying this process to all local feature regions, a non-directed graph is obtained. Next, a set of local feature regions associated with each other is sought. The region with the greatest number of arcs \(l_c\) is selected as a visual landmark. Here, \(l_c\) is a criterion used to identify the visibility of a landmark. Figure 3 shows three examples of landmark selection. Four sets of local feature regions are extracted from four different images. In the case of (A), the red-painted landmark candidate is selected by counting the number of arcs. When several landmark candidates have the same number of arcs, as shown in (B), a region with denser feature points is selected. Item (C) shows another case. When one region has the greatest number of arcs but large occlusion reduces the number of observable feature points, it is not selected as a landmark. Mutual consistency check. Several local feature regions are extracted from four different images. a The red-painted landmark candidate is selected based on the number of arcs. b A landmark with dense feature points is selected if several landmark candidates have the same number of arcs. 
c When one region has the largest number of arcs but large occlusion reduces the number of observable feature points, it is not selected as a landmark (b) Repeatability of local feature regions in input images Here, "Matching" described in "SIFT feature matching" is used wherein a local feature region is selected in order and sought from each image. By applying the seeking process toward n images, the number of detections \(l_i\) is counted, where i indicates a serial number of a local feature region. If \(l_i\) is greater than a predefined threshold, then the ith local feature region is registered as a landmark. In the processing explained in item (a), local feature region detection may fail when some feature points cannot be extracted from an input image. This means that the local feature regions extracted at different viewpoints might lose the correct correspondence. Meanwhile, the abovementioned process makes it possible to restore the situation. (c) Counting individual local feature correspondences in input images The abovementioned process considers landmark quality using the local feature region. However, better performance might be obtained if the number of correspondences between two image feature points is also considered. For example, if a feature point is extracted at the local region that captures two distant objects, its appearance is largely influenced by viewpoint changes. The local feature region having such a feature point should be assigned low reliability. Thus, we propose the following measure. As with item (b), a local feature region is sought from images. In each of feature region seeking process, the number of feature correspondences is registered. This describes the frequency of finding respective feature points from several input images; thus, weight coefficient \(g_j\) is defined by the number of feature correspondences, where j denote the serial number of feature point. If \(g_j\) is greater than a pre-defined threshold, a parameter \(f_g\) is incremented. A landmark with large \(f_g\) has the potential to be a high repeatability landmark. (d) Using weight coefficient Using weight coefficient \(g_j\) described above, another weight coefficient G is calculated as follows: $$\begin{aligned} G = \displaystyle {\frac{\Sigma g_j}{k}} \end{aligned}$$ When occlusion or appearance change occur by changes in viewpoint, G becomes small. In other words, large G are one criterion for selecting high repeatability landmarks. \(f_g\) and G are similar criteria, where \(f_g\), which indicates the number of detection for each feature point, is binarized and G is a variable that directly considers the number of detections. The latter allows us to know the quality of a landmark in more detail. In addition, it allows us to represent additional information, e.g., the density of good features. A mobile robot with a single mounted camera was used for our experiments. The mobile platform was "i-Cart mini" produced by the T-frog project [21], and the camera was a BSW32KM (Buffalo Americas Inc.). A laptop computer was mounted on the platform. It was used to capture VGA (\(640 \times 480\) pixels) images and control the platform. Image datasets were collected for both indoor (our experimental laboratory) and outdoor (ten different scenes on our university campus) environments. Image capturing positions. Nine positions divided in a reticular pattern are given to the robot. 
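Before turning to the experiments, the following sketch shows how the per-feature weight coefficients and the criteria (c) and (d) described above could be computed from a table of per-image matching results. Note that the denominator k in Eq. (2) is taken here to be the number of feature points in the region, which the text leaves implicit, so this is an assumption:

import numpy as np

def feature_weights(per_image_matches):
    # per_image_matches: (n_images, k) boolean array; entry (i, j) is True if
    # feature j of the landmark candidate found a correspondence in image i.
    # Returns the per-feature detection counts (the weight coefficients g_j).
    return np.asarray(per_image_matches, dtype=int).sum(axis=0)

def selection_criteria(g, g_thresh):
    # Criterion (c): f_g counts the features whose g_j reaches the threshold.
    # Criterion (d): G = sum(g_j) / k, Eq. (2), with k = number of features.
    g = np.asarray(g)
    f_g = int(np.sum(g >= g_thresh))
    G = float(g.sum()) / len(g)
    return f_g, G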
Quality landmark selection
Nine shooting locations were set in each of the target scenes, as shown in Fig. 4. The distance between neighboring locations was 0.2 m. Landmark selection was performed by the four methods described in "Landmark selection criteria". The parameters used to select the local feature regions were defined experimentally as follows: the Euclidean distance D used to group two feature points ("Distinctive feature region extraction based on feature point grouping") was set to \(\sqrt{10}\); the radius for investigating feature concentration ("SIFT feature matching") was set to \(d \le \sqrt{10}\); \(b_1 \le 150\) and \(b_2 \le 250\) ("SIFT feature matching"); and the thresholds for the area of a feature region, \(S_s\) and \(S_l\) ("Landmark selection procedure"), were set to 2500 and 40,000, respectively. The parameters used to select a visual landmark were determined by brute force. The results were as follows: number of high-similarity regions, \(l_c \ge 3\); number of corresponding regions, \(l_i \ge 9\); number of detections of features, \(g_j \ge 9\). These were the conditions used to select visual landmarks with respect to the criteria introduced in "Landmark selection criteria". Only the landmark candidates that satisfied each condition were selected as visual landmarks. These values were based on the assumption that nine images were used. If more images are to be used, these values should be increased linearly. The abovementioned parameters were defined experimentally; therefore, one concern was their sensitivity. In our experience, the sensitivity was not significantly high for the images captured in the indoor and outdoor environments we examined. When we changed the parameters slightly, the quality of the landmarks degraded in some scenes even though the same changes improved the quality of the landmarks in other scenes. The parameters given in this study might be rough estimates; however, they provided acceptable results.

Criterion for quality evaluation
In this study, it was assumed that a robot travels on a predefined course many times. While the robot moves along the course, the number of detections of each landmark was counted. The result was then used to evaluate the repeatability of the landmark. A variation coefficient was used for this purpose; it was calculated by dividing the standard deviation (Std.) by the average (Ave.) of the number of detections of each landmark. If the value is small, we consider the landmark to have high repeatability. (A short computational sketch of this criterion appears near the end of this article.)

Landmark selection results. Numbers in columns 2 to 12 show the number of correct correspondences. (Ave. and Std. are calculated for each landmark)

Landmark examples
First, we present a landmark selection example from an indoor environment. The visibility of these landmarks was confirmed through eleven automatic navigations. In each navigation, 100 images were captured at 3 fps. Landmarks were then detected using these images. The rightmost images in Fig. 5 are the visual landmarks selected from the scene. The left column of the table shows the name of each landmark, and the top row shows the experiment number. A to D show landmarks with \(l_i \ge 9\). They were stably detected across the images, with small variation coefficients. On the other hand, E and F show \(l_i = 7\) and \(l_i = 8\), respectively.
These were also relatively stable landmarks; however, their variation coefficients were greater than those of the abovementioned cases. These results indicate that \(l_i\) can be used to assess landmark quality.

Visibility evaluation
The same procedure described in the previous subsection was performed using images obtained at ten outdoor locations. Methods (a)–(d) ("Landmark selection criteria") were examined to determine whether they are suitable for selecting quality landmarks. Figure 6 shows the variation coefficients of all local feature regions. The blue and red points indicate landmarks and other local feature regions, respectively. It is not always true that landmarks with \(l_c \ge 3\) have a smaller variation coefficient than the other local feature regions. The same holds true for Fig. 7, which shows the results for method (b): it is not always true that landmarks with \(l_i \ge 9\) have a small variation coefficient. The SIFT features included in the landmarks were examined to clarify the reason behind these observations. In some cases, the features were extracted from a spatial region where a large perspective change occurred. Such features are not robust against viewpoint changes and therefore should not be included in a landmark. Another problem, unique to method (a), is that a local feature region can differ according to the layout of the feature points. Figure 8 shows an example: one large region was extracted at one viewpoint, but it was divided into two regions at another viewpoint. This caused a misdetection of the landmark.

Serial number of landmark/local feature region vs. variation coefficient by method (a). There is almost no quality difference between the selected landmarks and the other local feature regions

Serial number of landmark/local feature region vs. variation coefficient by method (b). Compared with Fig. 6, there is no noteworthy difference

Landmark detection differences. One large region was extracted; however, it was divided into two regions at another viewpoint. This caused misdetection of the landmark

Method (c) considers the adequacy of individual SIFT features. Figure 9 shows the relation between the serial number of the landmark and the variation coefficient. Here, "all" means that all features were used for landmark detection, and "only" means that only features with the weight coefficient \(f_g\) were used for the detection. Clearly, using the weighted features resulted in small variation coefficients. This means that the criteria for defining the weight coefficient are useful for selecting high-visibility landmarks. Figure 10 shows the relation between the serial number of a landmark and the average number of correct correspondences. Using the weighted features provided stable landmark detection. This means that the weighted features allow us to find correspondences with high repeatability because they are easy to find in a set of images captured at different viewpoints. In addition, the processing time to find landmarks using the weighted features was 23.89 s for 100 frames, whereas the same process using all feature points required 26.06 s; that is, the weighting reduced the processing time by 8.33%.

Serial number of landmark vs. variation coefficient. Features with the weight coefficient show small variation coefficients. This means that the criterion for defining the weight coefficient is useful for selecting a high-visibility landmark

Serial number of landmark vs. average number of correct correspondences. Features with a weighted coefficient yield stable landmark detection
As can be seen in Fig. 11, the average number of correspondences tended to be large when the weight coefficient G of the feature points was large. This graph shows that the proposed approach allows us to find quality landmarks using the weight coefficient; e.g., G greater than 5.0 statistically guarantees a quality landmark. This value can be predefined; thus, quality landmark selection can be achieved automatically. The same trend can be observed in the relation between the weight coefficient and the variation coefficient (Fig. 12): a greater weight coefficient results in a landmark with a lower variation coefficient.

Weight coefficient vs. average number of correspondences. For example, landmarks with G greater than 5.0 are statistically guaranteed to be of high quality

Weight coefficient vs. variation coefficient

Other feature descriptors
A SIFT descriptor is robust against changes in scale, rotation, translation, and illumination, and these characteristics are suitable for mobile robots. In addition, other excellent descriptors with equivalent characteristics exist. Any feature descriptor that provides scale and direction information is applicable to the proposed method; therefore, we attempted to replace SIFT with another feature descriptor. Here, there are two primary steps in extracting an image feature point: feature point detection and feature description. Speeded-Up Robust Features (SURF) [22] are a well-known approximation of SIFT features. To detect SURF keypoints, a box filter is applied to calculate scale-space extrema. Note that the density of the extrema tends to be sparse; thus, SURF may show poorer performance than SIFT because the proposed method requires densely extracted keypoints. This expectation was confirmed experimentally using the abovementioned images. We set a smaller \(l_c\), and this blurred the line between a landmark and the other feature regions. As a further test, feature description using FREAK [17] was also examined. Here, to generate the local feature regions, the parameters were the same as those described in "Quality landmark selection". The FREAK descriptor was applied to each feature point extracted by the SIFT method. Figures 13 and 14 show the results obtained by method (c). The separation in landmark quality was less pronounced than with SIFT. Although the basic tendency was the same, i.e., a small variation coefficient was obtained by using feature regions with the weight coefficient \(g_j\), the SIFT features showed better performance than the FREAK-based variant.

Serial number of landmark vs. variation coefficient. Landmark distinctiveness is less than that of SIFT. However, the basic tendency was the same

Serial number of landmark vs. average number of correspondences

In this study, we have proposed a method for visual landmark selection from image streams captured by a camera mounted on a mobile robot. Using a visual landmark consisting of dozens of neighboring feature points, two evaluation criteria were considered: distinctiveness and repeatability. To evaluate visibility, distinctiveness was evaluated for each image. Then, under the assumption that robots can actively seek a feasible landmark, the repeatability of the landmark was evaluated. Experiments using real images demonstrated that weighting each feature point included in a local feature region is important for describing a landmark with high distinctiveness and repeatability.
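As referenced in "Criterion for quality evaluation" above, the repeatability of each landmark was scored with a variation coefficient, i.e., the standard deviation of its detection counts divided by their average. The following minimal Python sketch shows that computation; the detection counts are made-up values for illustration, not the observed results.

```python
import numpy as np

def variation_coefficient(detection_counts):
    """Std./Ave. of the number of correct correspondences for one landmark,
    counted over repeated runs along the course; a smaller value indicates a
    more repeatable landmark."""
    d = np.asarray(detection_counts, dtype=float)
    return d.std() / d.mean()

# hypothetical counts for one landmark over eleven navigations
print(variation_coefficient([97, 95, 99, 92, 96, 98, 94, 97, 93, 96, 95]))
```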
In the future, we will examine automatic threshold determination. The existing method required a manually defined threshold; therefore, this burden should be reduced. Application to mobile robot is also important orientation. Betke M, Gurvis L (1997) Mobile robot localization using landmarks. IEEE Trans Robot Autom 13(2):251–263 Thrun S, Fox D, Burgard W, Dellaert F (2000) Robust Monte Carlo localization for mobile robots. J Artif Intell 128(1–2):99–141 Matsumoto Y, Inaba M, Inoue H (1996) Visual navigation using view-sequenced route representation. In: Proceedings of International Conference on Robotics and Automation, pp 83–88 Kaneko Y, Miura J (2011) View Sequence Generation for View-Based Outdoor Navigation. In: Proceedings of 1st Asian Conference on Pattern Recognition, pp 139–144 Celaya E, Albarral J, Jimenez P, Torras C (2007) Natural landmark detection for visually-guided robot navigation. Artif Intell Hum Oriented Comput 4733–2007:555–566 Cummins M, Newman P (2008) FAB-MAP: Probabilistic localization and mapping in the space of appearance. Int J Robot Res 27(6):647–665 Sato T, Nishiumi Y, Susuki M, Nakagawa T, Yokoya N (2008) Camera position and posture estimation from a still image using feature landmark database. In: Proceedings of International Conference on Instrumentation, Control and Information Technology, pp 1514–1519 Se S, Lowe D, Little J (2001) Local and Global Localization for Mobile Robots using Visual Landmarks. In: Proceedings of International Conference on Intelligent Robots and Systems, pp 414–420 Ogawa Y, Shirai Y, Shimada N (2007) Environmental map-ping for mobile robot by tracking SIFT feature Points using trinocular vision. In: SICE, Annual Conference. IEEE, Takamatsu, pp 1996–2001 Thompson S, Matsui T, Zelinsky A (2000) Localisation using Automatically Selected Landmarks from Panoramic Images. In: Proceedings of Australian Conference on Robotics and Automation, pp 167–172 Knopp J, Sivic J, Pajadla T (2010) Avoiding confusing features in place recognition. In: Proceedings of 11th European Conference on Computer Vision, pp 671–748 Hafez A, Singh M, Krishna K, Jawahar C (2013) Visual Localization in Highly Crowded Urban Environments. In: Proceedings of IEEE/RSJ Conference on Intelligent Robots and Systems, pp 2778–2783 Hayet JB, Lerasle F, Devy M (2007) A visual landmark framework for mobile robot navigation. J Image Vis Comput 25(8):1341–1351 Mata M, Armingol JM, de la Escalera A, Salichs MA (2002) Learning visual landmarks for mobile robot navigation. In: 15th Triennial World Congress Lowe D (2004) Distinctive image features from scale-invariant keypoints. Int J Comput Vis 60(2):91–110 Tuytelaars T, Gool LV (2004) Matching widely separated views based on affine invariant regions. Int J Comput Vis 50(1):61–85 Alahi RO, Vandergheynst P (2012) FREAK: Fast Retina Keypoint. In: IEEE Conference on Computer Vision and Pattern Recognition Piccinini P, Prati A, Cucchiara R (2012) Real-time object detection and localization with SIFT-based clustering. J Image Vis Comput 30:573–587 Leibe B, Leonardis A, Schiele B (2006) An Implicit Shape Model for Combined Object Categorization and Segmentation. Toward Category-Level Object Recognition, Lecture Notes in Computer Science, vol 4170, pp 508–524 Ihara A, Fujiyoshi H, Takagi M, Kumon H, Tamatsu Y (2009) Improved Matching Accuracy in Traffic Sign Recognition by Using Different Feature Subspaces. In: Proceedings of International Conference on Machine Vision Applications, pp 130–133 http://t-frog.com/en/. 
(19/12/2014) Bay H, Ess A, Tuytelaars T, Gool LV (2008) SURF: Speeded Up Robust Features. Comput Vis Image Underst 110(3):346–359 KY proposed the visual landmark method. MS improved the method, and carried out experiments. Both authors read and approved the final manuscript. This work was partly funded by ImPACT Program of the Council for Science, Technology and Innovation (Cabinet Office, Government of Japan). Mechanical Systems Engineering, Faculty of Engineering, Shinshu University, 4-17-1, Wakasato, Nagano, Nagano, Japan Masamichi Shimoda & Kimotoshi Yamazaki Correspondence to Kimotoshi Yamazaki. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Shimoda, M., Yamazaki, K. Quality visual landmark selection based on distinctiveness and repeatability. Robomech J 2, 16 (2015). https://doi.org/10.1186/s40648-015-0036-9 Landmark selection Mobile robot
Event Horizon review Event Horizon (1997) - Event Horizon (1997) - User Reviews 'Event Horizon' is very much an atmospheric sci-fi horror. It does not rely on gore (although there is enough of that) but rather it is the creepy atmosphere that engages the viewer. Andersen successfully creates a tense, depressing, and claustrophobic atmosphere Unfortunately, Event Horizon'' is not the movie to answer these questions. It's all style, climax and special effects. The rules change with every scene. For example, early in the film the Lewis and Clark approaches the Event Horizon through what I guess is the stormy atmosphere of Neptune, with lots of thunder, lightning and turbulence Many critics pointed out that the atmosphere and technology of the Event Horizon was starkly different from the rescue ship, I agree. It was unexplainably different, and looked like something that belonged on a stereotypical alien spaceship as it looked so different Unfortunately, Event Horizon did poorly at the box office due to questionable marketing and a public unsure what to make of the film, but wowed lots of fans since --- which is why we're still. Event Horizon is the best movie that Paul Anderson ever made and ever will make. It's one of the most late 90's movies you will ever see and has very Hellraiser aesthetic. Actually, now that I think of it, this movie is a mashup of Hellraiser and Alien mixed with a gimmicky action movie Event Horizon is still one of the scariest and most thrilling horror movies on the planet and continues to be a source of good discussion amongst horror fans. Scream Factory's new video and audio.. REVIEW: It's quite possible that Event Horizon may be among one of the most chilling films ever created. Then again the backlash from critics and sci fi horror fans complaining of plot holes, fragmented story lines and a genre confusion may beg to differ on my analysis Event Horizon is a decent movie. There's no question it's never going to win awards for originality, or anything else, frankly, but what it does it does well. It's based on the idea of a space ship, which disappeared just under a decade ago, reappearing mysteriously, prompting a search and rescue vessel to go in and investigate 'Event Horizon' Is Coming Back. It's About Damn Time. 22 years later, Amazon wants to take audiences back to the gothic-inspired sci-fi world of Event Horizon 'Event Horizon's Dolby TrueHD 5.1 surround track offers a tangible upgrade from previous DVD releases of the film, but it doesn't quite live up to the most dynamic high-def catalog mixes I've reviewed Event Horizon movie review & film summary (1997) Roger Eber Review: Event Horizon - Space Defense (Nintendo Switch) 8. Very Good. If you can handle the level grind and upgrade treadmill, Event Horizon - Space Defense is a fun arcade action/adventure role-playing mash-up. If you need more player engagement, the level grinding can become a bit mundane Event Horizon arrives on Blu-ray with a 1080p 2.35:1 transfer, however I am refraining on using the words theatrically correct here. That is because the disc retains some problems, apparently, from the original DVD releases. 
Now, I should go on record here and say that this is NOT something that is immediately noticeable on a casual viewing, and you have to look VERY hard to see it Blu-ray Review: EVENT HORIZON (Collector's Edition) As someone who has been a huge fan of Event Horizon ever since it was released in 1997, I'm always on board for anything that celebrates this audacious, terrifying, and grotesquely beautiful exercise in cosmic horror A beautiful open world peppered with mechanical monsters that make for exhilarating fights. There's something about being dropped into a brand new game world and finding it to be dense with. Event Horizon feels like a once-in-a-lifetime cinematic excursion in sci-fi and pure, unfiltered horror that rarely ever comes along out of the studio system. Full Review | Original Score: 4/ http://www.patreon.com/oliverharperhttp://www.olivers-retrospectives.comhttp://www.twitter.com/OllieH82http://www.facebook.com/OliverHarpersRetrospectiveRevi.. Quick Hit Review: Event Horizon is a movie I've watched a few times over the years, pretty much every time there's been a new home video release. In some respects, it's an interesting concept not entirely executed all that well by director Paul W.S. Anderson and, at least in terms of set design, outside of the core, has designs seen numerous times in other, better made, sci-fi horror films ( Alien for one) Rather than a chitinous xenomorph or a guy with pins in his head, Event Horizon is a psychological horror that builds up the edge-of-the-cinema-seat tension with great aplomb, before launching. James Rolfe discusses the 1997 film Event Horizon, which is like Hellraiser in Space but better than the actual Hellraiser in space, with pals Justin Silverm.. Event Horizon Reviews - Metacriti es for all sorts of tawdry spectacle. The titular deep-space vessel, capable of faster-than-light travel, goes missing at the edge of the solar system before reappearing years later with no crew left on board The events on Event Horizon take place in 2047 which was a good 50 years away when the film came out in 1997, to frankly not very much acclaim. Which was a huge shame as it's a tense and intensely frightening movie even 20 years after its release Event Horizon Review Seven years before, a revolutionary spaceship that used a black hole drive disappeared. Now it has reappeared in the orbit of Neptune, and a rescue ship is sent to see if. Blu-ray Review: EVENT HORIZON 4K - ScreenAnarch From its marketing-impaired title on down, Event Horizon is a steadily churning debacle that promises much more than it can deliver and ends up drowning in a crimson sea of gore and maddeningly out-of-place steals from other, better genre shockers. Read full review Do you see?! Streaming services have opened a rift in the space-time continuum and taken Event Horizon straight to hell in 4K UHD! In looking through the various Halloween sales I stumbled across. Event Horizon - Film Review. May 6, 2021 yggdrasille. This sci-fi horror film is one of those weird movies best described as fascinating failure. It undercooks or burns most of its ingredients and overall doesn't really work, but is somehow worth watching regardless Event Horizon Special Edition Blu-ray is available now from Scream Factory. Kyle Anderson is the Senior Editor for Nerdist. You can find his film and TV reviews here Event Horizon Review Horror Movie Tal Overall, Event Horizon is one of the most underrated horror films of our age. 
It's got one hell of a cast and the story really does send some chills down your spine. I was absolutely blown away by this one and will be revisiting it again soon enough g in New South, DIAGRAM, New Orleans Review, New Ohio Review Online, and elsewhere. Next Page (Carrie George. Review. After the success of Mortal Kombat in 1995, Paul W. S. Anderson moved on to his next project, arguably one of his most interesting: Event Horizon.A sci-fi horror film that turned out to be an unexpected hit on home video, it fell short of its $60 million dollar budget at the box office Event Horizon looks like Alien, feels like Alien, hell there's even eight crew members (okay, so Alien only had seven) stranded aboard a spaceship with a terrifying, chest-ripping, sanity-fraying. Event Horizon book. Read 43 reviews from the world's largest community for readers. 2046 A.D.: Seven years ago an experimental space vessel disappeared u.. Film Review: Event Horizon (1997) Adrian Halen 03/20/2021 Film Reviews. Rate This Movie: SYNOPSIS: A rescue crew investigates a spaceship that disappeared into a black hole and has now returnedwith someone or something new on-board. REVIEW Event Horizon (Collector's Edition) Blu-ray Review High In that spirit, what makes Event Horizon feel essential is not just the history it collects, but the way one can hear these musicians, wide-ranging as their influences may be, believing in a form Event Horizon Despite game efforts from a first-rate cast and acres of impressive production values, Event Horizon remains a muddled and curiously uninvolving sci-fi horror show Nevertheless, Event Horizon remains one of the most well made and, above all, While admitting that the film's opening was classy in his 2000 review,. Review. 213 comments. share. save. hide. report. 86% Upvoted. Log in or sign up to leave a comment Log In Sign Up. Sort by. best. View discussions in 4 other communities. but in the Event Horizon universe the reality of that world is more Lovecraftian than communicated or understood by the world's religions RedLetterMedia: Plinkett Reviews, Half in the Bag, He and a few of the other staff organised a trip to take them to the cinema and picked Event Horizon, thinking for some reason that it would be a fun sci-fi film like Star Wars. My dad doesn't work at the mental hospital any more. 214 Its name: Event Horizon. The high-tech, pioneering research spacecraft mysteriously vanished without a trace on its maiden voyage seven years ago. But a weak, persistent signal from the long-missing craft prompts a rescue team, headed by the intrepid Captain Miller (Laurence Fishburne), to wing its way through the galaxy on a bold rescue mission Event Horizon was negatively reviewed. In Entertainment Weekly, critic Owen Gleiberman noted that the film unleashes some of the most unsettling imagery in years and praised Anderson as a. Reviews, Critics, Event Horizon. popular. Lonely Planet reveals the top 8 lesser-known tourist spots around Ireland. 9 shares; Ryanair issues Government with three-point plan to rescue Irish. The Event Horizon Telescope Collaboration is a multinational research project that images black holes. The collaboration released the first-ever high-resolution image of a black hole — the one. Event Horizon is a top-of-the-line ship designed for space exploration, but seven years ago it vanished during its maiden voyage.. 
Now, in the year 2047, a rescue mission led by intrepid Captain Miller (Laurence Fishburne) finds Event Horizon and investigates its mysterious disappearance. What Captain Miller and the crew find will be beyond their greatest nightmares and all rational thinking The Octomore Event Horizon - Feis Ile 2019 was distilled in 2007, and matured in a combination of Oloroso and PX sherry butts. At 12 years old this is one of the oldest Octomores released so far. Nose: Earthy and down to earth. Dark chocolate, cold coffee, cocoa nibs and dusty cinnamon I created the Event Horizon to reach the stars. But she's gone much, much further than that When she crossed over she was just a ship. But when she came back she was alive says Dr Weir, a man either possessed or insane, or possibly both, as the horrors on board the newly-reappeared spaceship Event Horizon reach their bloody, fiery climax T he story takes place in the year 2047, seven years after the unexplained disappearance of a deep space vessel known as the Event Horizon. After receiving distress signals from near the planet Neptune, a top secret search and rescue mission is undertaken. Enroute, the crew of the Lewis and Clark search and rescue ship receives their mission briefing Film Review: Event Horizon (1997) - Horror News HN The Event Horizon Telescope (EHT) collaboration, who produced the first ever image of a black hole, has revealed today a new view of the massive object at the centre of the M87 galaxy: how it looks in polarised light. published today in the latest issue of Physical Review Letters Event Horizon | 1997 | R | - 4.9.7 Something comes back from space and humans must study it in this science fiction thriller. SEX/NUDITY 4 - A woman's bare breasts are shown a few times Event Horizon movie reviews & Metacritic score: The year is 2047. Years earlier, the pioneering research vessel Event Horizon vanished without a trace. Now a signal from it has been detected, and the United S.. xKore - Event Horizon It seems a little hard to comprehend now, but Paul W.S. Anderson's Event Horizon totally flopped back in 1997. The sci-fi horror project just did not find the audience it was looking for, but. But in Event Horizon's case, the addendum to the schedule is for much less dramatic reasons - telling close-ups, dialogue and kiss-off lines - and is par for the big budget (in this case $50. Event Horizon is a pretty fun, gnarly time at the movies that does not quite reach greatness thanks to the more visceral footage being excised out of the final product. It is not only the blood and guts that you are missing without this footage, but also what likely amounts to some further characterizations that makes some of the thin characters feel more substantial in the long run Plinkett Reviews Folder: WEB VIDEOS AND SHORTS. Back. The Nerd Crew Movie Talk Behind the Scenes Donate on Patreon Event Horizon - re:View. Apr 12. Written By Jay Bauman. Jay Bauman. Previous. Previous. Spacehunter: Adventures in the Forbidden Zone - re:View. Next. Next. Star Trek The Next Generation Season One - re:View. Event Horizon had such promise... In Space, No One Can Hear You Scream — Really. Event Horizon (starring Sam Neill, Laurence Fishburne, and Kathleen Quinlan) had such promise. Fifty years from now, an experimental spaceship called the Event Horizon drifts derelict in Neptune's orbit, its test crew disappeared. A search-and-rescue mission with a band of potentially interesting characters is. Event Horizon is a curious piece of work. 
For one thing, it carries its own name lit up like a movie marquee on the nose, in case the beings from beyond the stars read English Event Horizon Review. In the year 2047 a group of astronauts are sent to investigate and. MY REVIEW: If you're looking for your usual sci-fi flick with aliens and space battles and gunfights, this isn't it. This is, however, ... In some inexplicable way, the Event Horizon mutated into a kind of intelligent but extremely malevolent being. Event Horizon is a 1997 English horror movie directed by Paul W. S. Anderson. We hope you will be able to find this Event Horizon review article useful. In case you have any comments, please feel free to share them with us. In the Fall of 2019, Sub-Standard Alien reviewed the single We Are The Prophets, from one-man Prophetic Doom project Markarian. On February 28, 2020 the full-length album, Event Horizon, was released. This album digs deep into the depths of the Earth, and to the farthest reaches of Space. Event Horizon 1997 ★★★★★ Rewatched Apr 27, 2021. Lyndell Clark's review published on Letterboxd: Watched the special features on the new Scream Factory Blu-ray but I watched the flick in 4K via Vudu. I've loved this awesome horror flick since first seeing. Event Horizon (1997) - IMDb Supergirl: Season 5 Premiere - Event Horizon Review. 8. Great. Event Horizon is largely able to continue the momentum of Supergirl's terrific Season 4 finale. More Reviews by Jesse Schedeen. Shout Factory has announced the collector's edition Blu-ray release of Paul W.S. Anderson's Event Horizon, which goes on sale March 23rd in the US and Canada and features a newly-restored version. 'Event Horizon' Is Coming Back. When the Event Horizon, a spacecraft that vanished years earlier, suddenly reappears, a team is dispatched to investigate the ship. Accompanied by the Event Horizon's creator, William Weir (Sam Neill), the crew of the Lewis and Clark, led by Capt. Miller (Laurence Fishburne), begins to explore the s.. He made me feel at total ease, great sense of humor with a tattoo shop that was so clean. I found him & Event Horizon from reviews on YELP. What people had said prior about him and his shop were 100% right. I am beyond ecstatic about my new tattoo. Right on the mark. Thank you Brant!!! Amazon and Paramount Television are developing a series adaptation of the 1997 sci-fi horror film Event Horizon, Variety has learned. Adam Wingard is set to executive produce and direct. Event Horizon Review. Directed by Paul W.S. Anderson. With Laurence Fishburne, Sam Neill, Kathleen Quinlan, Joely Richardson. A rescue crew investigates a spaceship that disappeared into a black hole and has now returned. Beyond the Blue Event Horizon is not too long, an easy read and quite entertaining despite all the flaws. Although it makes no las Pohl here continues the Heechee saga with more of the same. Book 2 in the series is a small part human archeology of alien technology, a solid dose of teenage sexuality, a touch of facing death and quite a lot of stupidity. Play Event Horizon Slot by Betsoft online for free in demo mode. Discover more free slots at Playfortunefor.fun. No download and no registration required. This is Event Horizon - Movie Review (1997).mp4 by sjb on Vimeo, the home for high quality videos and the people who love them. The delightful Event Horizon is this collaborative, eponymous quartet's unusually compelling debut.
The veteran Chicago-area jazzmen, known for their versatility, exhibit near perfect camaraderie with one another. Comprising eleven originals, the music seamlessly covers a range of styles that keeps it engaging without sacrificing the album's conceptual unity Events make up much of our lived experience, and the perceptual mechanisms that represent events in experience have pervasive effects on action control, language use, and remembering. Event representations in both perception and memory have rich internal structure and connections one to another, and both are heavily informed by knowledge accumulated from previous experiences. Event perception. Event Horizon. 467 likes · 6 talking about this. Let's plan it together! Birthday,baby shower,bridal shower,gender reveal party,welcom party,mehandi,dholki,engagement,anniversary You are under attack! Set in the captivating Event Horizon universe, this hardcore game puts you in control of a massive space fleet at war with over twelve alien factions. Make the best decisions. Event Horizon Blu-ray Review High Def Diges Buy Event Horizon at Amazon! Free Shipping on Qualified Orders Event Horizon brings out my inner Roman, because its limitations and strengths are best explained when you accept that, although it takes place in outer space, in the future, and on a ship named after a concept from general relativity, no less--it's not really a science fiction film, but an old-fashioned horror allegory 'Event Horizon' should have been lost in space. From Reviewer Paul Tatara (CNN) -- I don't know when it happened, but at some point it must have become proper etiquette to let movie plots, and I. Event Horizon is a space adventure game that has players taking on the role of a ship commander. As the commander, the player gets to manage various resources and send smaller ships out into battle. All of this takes place as they continue to explore the galaxy for resources Event Horizon Solar & Wind reviews and complaints, reviews of the brands of solar panels they sell, their locations and the cost of installations reported to us for 2021. Get the best deal Event Horizon: Directed by Paul W.S. Anderson. With Laurence Fishburne, Sam Neill, Kathleen Quinlan, Joely Richardson. A rescue crew investigates a spaceship that disappeared into a black hole and has now returned...with someone or something new on-board The event horizon is like the edge of a really deep, dark, black well. If you cross it, you fall in and you're gone. Of course, in space, the event horizon isn't something physical you can touch. Event Horizon's theatrical cut was heavily trimmed down from the director's original vision, and here's what was lost to fans along the way.One of the sad realities of working within the Hollywood studio system is that the producers and the studio are more likely to get final cut on a film than the director. This doesn't really make sense, since the director is often blamed for the failure of. Event Horizon Tattoo. 3,129 likes · 4 talking about this. Professional body art by talented, experienced tattoo artists Reviewed by Andrew Mogford, 29th September 2009 The film may be a guilty pleasure, but I do enjoy Event Horizon. There may be glaring weaknesses in plot and dialogue, but the film plays like Hellraiser in space. The first act builds up the tension well, and the second and third act deliver in.. Event Horizon, New Orleans. 4,961 likes · 2 talking about this. 
5 dudes who play music for your ears Notable bands would include Spawn of Possession and Obscura, but they're not alone, for Essence of Datum has recently joined their ranks with the release of their full-length debut Event Horizon Event Horizon. Details: 1997, Rest of the world, UK, USA, Cert 18, 96 mins. Direction: Paul Anderson. Latest reviews. Noah review â 'a preposterous but endearingly unhinged epic Review: Event Horizon - Space Defense (Nintendo Switch ally underrated, a time of Talking Heads, Devo, and The B-52's Welcome all to Two Geeks on Couch Event Horizon movie reaction, a first time watch for Friedel! Come along as we discuss the existential angst of this visual.. Michael Thomas: Event Horizon album review by Jerome Wilson, published on July 21, 2020. Find thousands jazz reviews at All About Jazz Event Horizon is considered one of the best, albeit, messed up, sci-fi/horror movies of the 90's. What few people realise though is that the original cut of the film was way worse. Fortunately, we're never allowed to see it. Warning, this article contains spoilers, sex, gore and sexy spoilers. If you've never seen Event Horizon, it's basically about a ship [ Dissect Paul W. S. Anderson's splatterific sci-fi mash-up Event Horizon and you'll find the brittle skeleton of Alien, the perforated heart of Solaris, the jet-black soul of The Shining, and the self-aware brain stem of 2001: A Space Odyssey.It's a deformed example of cinematic physiology 101, an anarchic game of Operation conducted by a mad scientist so drunk on genre kool-aid he can. Event Horizon is an arcade shooting game with extreme boss , epic battles and missions . Galaxy is on fire in the age of war ! This original action shooter is all about alien leagues action space battles and PvP battle galaxy games clash fights Event Horizon (1997) - Review . Tags: stars laurence fishburne laurence fishburne horizon. July 14th 2019. View original. 2 1/2 Stars. Laurence Fishburne receives top billing but the real star of Event Horizon is the wonderful production design. This is a good-looking film that contains not a single original idea Xtrullor - Event Horizon Share Download this song. Author Comments. Welcome to the edge of event horizon. There's no turning back now. An epic orchestral dubstep soundtrack, in D# minor. ===== I think I'm finally starting to get the hang of this hybrid genre EVENT HORIZON. STARRING: Laurence Fishburne, Kathleen Quinlan, Joely Richardson, Richard T. Jones, Jack Noseworthy 1997, 97 Minutes, Directed by: Paul Anderson Description: A mission in the year 2047 investigates the experimental American spaceship Event Horizon, which disappeared seven years previously and suddenly, out of nowhere, reappeared in the orbit of Neptune Event Horizon Blu-ray Review AVForum So I am still quite surprised by the fact this first look at actual 'Horizon: Forbidden West' gameplay on PS5 via the latest State of Play event is actually happening considering that E3 week is only a few weeks away and Ratchet and Clank: Rift Apart is out in just two weeks almost Event Horizon: Space Defense Nintendo Switch 4.5 4.5/10 Buy Now The TONOR TC-777 USB microphone is an all-in-one Microphone Set: Microphone with Power Cord, Tripod desk stand with folding feet, Mini Shock Mount, Pop Filter, Manual, Service Card. 
Letting you get everything done at once, no need to worry about buying accessories Read and write album reviews for Event Horizon - Necessaries on AllMusi Blu-ray Review: EVENT HORIZON (Collector's Edition Event Horizon Pages: 1 2 03/21/2021 Blu-ray Reviews , Featured Review , Quick Hit Reviews , Screen Caps Tagged with: Jason Isaacs , Joely Richardson , Kathleen Quinlan , Laurence Fishburne , Paul W.S. Anderson , Richard T. Jones , Sam Neill , Scream Factory , Sean Pertwee , Shout Factor He describes Event Horizon with such assurance he could almost be mistaken for an overanxious dealer trying to make a sale. Yet he gets his inspiration from ancient sculpture from India as. Richly textured original jazz Event Horizon Jazz Quartet - EVENT HORIZON: On this truly exciting new album, you'll hear Jim Kaczmarek - Saxophones and Flute; Scott Mertens - Piano and Keyboard; Donn DeSanto - String Bass and Rick Vitek - Drums they're out of Chicago, and as you'll see when you watch this video of We Would Love To Have You (from an earlier performance. Horizon Zero Dawn Review - IG Common Sense Media improves the lives of kids and families by providing independent reviews, age ratings, & other information about all types of media. Showing results for Event Horizon | Common Sense Medi REVIEW When we start to analyze Event Horizon, the most important aspect is that the film is a horror film.Actually it's not a science fiction thriller like Alien.Event Horizon is more like a psychological horror film like Kubrick's The Shining.The film is just set in outer space with all the technical science fiction stuff Event Horizon • USA 31546. Sailing yacht Event Horizon is a 51' centerboard sloop built in 1982 by Baltic Yachts of Bosund, Pietarsaari, Finland. Designed by the C&C Design Group, she is number 22 of 24 semi-custom Baltic 51's built between 1979—1988 It's often a tricky one to pull off, but the horror/sci-fi mash-up can be hugely enjoyable when done properly. In 1997, we received Event Horizon from Paul W. S. Anderson and writer Philip Eisner.. Set in 2047, the story starts with the reappearance of the deep space vessel Event Horizon, that had disappeared then returned from places unknown 1 user review on Stillwell Audio Event Horizon. Stillwell Audio's Event Horizon is a VST based limiter plug-in. I didn't have any problem installing the plug-in, as the process was easy and pain free The Event Horizon is a Stout - American Imperial style beer brewed by Olde Hickory Brewery in Hickory, NC. Score: 97 with 1,840 ratings and reviews. Last update: 05-19-2021 In Event Horizon, Lena puts her plan against Supergirl in motion while Kara and company deal with a mysterious new enemy With high spatial resolution, polarimetric imaging of a supermassive black hole, like $\mathrm{M}{87}^{\ensuremath{\star}}$ or Sgr ${\mathrm{A}}^{\ensuremath{\star}}$, by the Event Horizon Telescope can be used to probe the existence of ultralight bosonic particles, such as axions. Such particles can accumulate around a rotating black hole through the superradiance mechanism, forming an axion. Event Horizon - Movie Reviews - Rotten Tomatoe Supergirl: Season 5 Premiere - Event Horizon Review. 8. Great Event Horizon is largely able to continue the momentum of Supergirl's terrific Season 4 finale. More Reviews by Jesse Schedeen. 8. Star Wars: The Bad Batch Series Premiere - Review. 8. Superman & Lois Series Premiere - Review. 6 Supergirl is back with a strong premiere and a brand new suit. 
Last season ended on a pretty ominous note after a very politically-heavy season, and Supergirl Season 5 Episode 1, Event Horizon, feels like the perfect reentry into the National City we know and love. Even though there are many moments that mark this season as a new era for Supergirl, the show's core remains the same in. Originally announced last year, Scream Factory is bringing Paul W.S. Anderson's fan-favorite 1997 space horror flick Event Horizon to Collector's Edition Blu-ray, and we've learned today. Event horizon, boundary marking the limits of a black hole. At the event horizon, the escape velocity is equal to the speed of light. Since general relativity states that nothing can travel faster than the speed of light, nothing inside the event horizon can ever cross the boundary and escape beyond it, including light. Thus, nothing that enters a black hole can get out or can be observed from. Event Horizon (1997) Retrospective / Review - YouTube It is a cosmic EVENT HORIZON created by big solar waves reaching the Earth from the Galactic Central causing the activation of The Compression Breakthrough, this is when the light forces from above the surface of the planet and from below the surface of the planet meet in the middle, that is on the surface of the planet First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole. The Event Horizon Telescope Collaboration, Kazunori Akiyama, Antxon Alberdi, Walter Alef, Keiichi Asada, Rebecca Azulay, Anne-Kathrin Baczko, David Ball, Mislav Balokovi. For developer Event Horizon, Tower of Time is an astounding entry that gives fans of the CRPG genre something different to sink their teeth into. 9.0/10 - God is a Geek This has been one of the most enjoyable indie RPG experiences for me so far this year
High-resolution mapping and time-series measurements of 222Rn concentrations and biogeochemical properties related to submarine groundwater discharge along the coast of Obama Bay, a semi-enclosed sea in Japan Shiho Kobayashi1, Ryo Sugimoto2, Hisami Honda3, Yoji Miyata4, Daisuke Tahara2, Osamu Tominaga2, Jun Shoji5, Makoto Yamada3, Satoshi Nakada6 & Makoto Taniguchi3 Progress in Earth and Planetary Science volume 4, Article number: 6 (2017) Cite this article High-resolution mapping along the coast and time-series measurements of the radon-222 (222Rn) concentrations in the shallow zone in a semi-enclosed sea, Obama Bay, Japan, were undertaken in 2013. The temporal and spatial variations in the 222Rn concentrations were analyzed in parallel with meteorological conditions, physical–biogeochemical characteristics, and the submarine groundwater discharge (SGD) flux measured with a seepage meter. These data indicate that the groundwater influences the water properties of the bay and that the groundwater supply pathways are not limited to the local SGD. The concentrations of 222Rn flowing into the bay from rivers was known to be relatively high because groundwater seeps from the river bed. High-222Rn water was almost always present around the river mouth, and northward advection of the water affected the distribution of 222Rn concentrations in the bay. The southward wind suppressed the advection of the high-222Rn water and largely controlled the temporal variations in 222Rn concentrations at a station located on the north side of the river mouth, whereas the local SGD affected the short-term changes in the 222Rn concentrations. The concentrations of 222Rn and chlorophyll-a, an indicator of phytoplankton biomass, show a significant positive correlation in the surface layer along the coastline in seasons when the nutrient supply was the main factor limiting primary productivity. It is challenging to obtain evidence of the effects of submarine groundwater discharges (SGDs) on marine ecosystems, partly because groundwater is an invisible source of the freshwater supplied to the sea. Fresh groundwater discharged from the sea bed often reduces the salinity of the bottom water and increases the concentrations of dissolved nutrients that are essential for primary producers. The correspondence between the low-salinity area associated with groundwater discharge and the high concentration of primary producers, such as benthic algae (Bruce 1925; Sanders 1979), seagrass (Kohout and Kolipinski 1967), and salt marsh plants (Nestler 1977), has been recognized since the 1920s. Based on those results, the ecological significance of groundwater discharge into coastal seas was discussed by Johannes (1980), but the distribution of salinity alone was an insufficient index with which to describe the influence of groundwater (Johannes 1980) because it does not distinguish SGD from other freshwater. Since the 1980s, various methods have been developed to estimate the influence of SGD on marine primary production. The influence of SGD on primary producers and coastal ecosystems was quantified with the nitrogen budget method (Valiela and Costa 1988; Valiela et al. 1992). The effects of SGD on benthic microalgae and red tides were also quantified using the groundwater flow rate, which was calculated from the hydraulic gradient of groundwater on land (Campbell and Bate 1996; Laroche et al. 1997; Gobler and Sanudo-Wilhelmy 2001). 
A correspondence between the distributions of SGD and benthic microalgae was suggested with a visualization method using thermal imaging, based on the difference between the temperature of the seawater and that of SGD (Miller et al. 2004). In recent years, the geochemical tracers radium and radon-222 (222Rn) have been used to trace groundwater (Moore 1996; Burnett et al. 2006). 222Rn is a short-lived radioisotope (half-life, 3.83 days) that is present in much higher concentrations in groundwater than in surface flow (Ellins et al. 1990). Using this powerful tracer, the influence of SGD on benthic microalgae was evaluated in the intertidal zone of the Yellow Sea (Waska 2011; Waska and Kim 2010). The nutrient fluxes caused by SGD and their potential effects on primary production have also been demonstrated in several coastal seas around Japan using 222Rn (Shiokawa et al. 2013; Sugimoto et al. 2016). A method of continuously monitoring the active concentrations of 222Rn in water has been developed and used to visualize the distribution of groundwater in coastal seas (Burnett et al. 2001; Burnett and Dulaiova 2003). This method was first applied in tracking lines of over 100 km and successfully revealed the spatial distribution of SGD (Santos et al. 2008; Stieglitz et al. 2010). The applicability of this approach within a site of only a few kilometers in size was confirmed in a recent study using multiple 222Rn detectors (Hosono et al. 2012). SGD typically displays significant spatial and temporal variability (Burnett et al. 2006). Therefore, high-resolution mapping and time-series measurements of the 222Rn concentrations and biogeochemical properties in the area of interest are required to quantify the influence of SGD on primary production along the coastline. In this study, we monitored the active concentrations of 222Rn with multiple detectors from a boat (Stieglitz et al. 2010) to determine the distribution of 222Rn concentrations in Obama Bay, Japan, within a study site of only a few kilometers in size (Fig. 1). We also monitored 222Rn and the physical–biogeochemical properties of a site for a month and measured SGD directly with seepage meters (Lee 1977; Rutkowski et al. 1999; Taniguchi and Iwakawa 2004) to investigate the source of the 222Rn. We then compared the distributions of SGD and chlorophyll-a (Chl-a), an indicator of the phytoplankton biomass. In Obama Bay, a correspondence between the high 222Rn concentrations in the bottom layer and the elevated Chl-a concentrations were observed, suggesting that SGD influences phytoplankton growth (Honda et al. 2016). In this study, we investigated this relationship in more detail, focusing on the phytoplankton biomass in the shallow zone along the coastline. a Location of the study site, Obama Bay. b Map showing the continuous 222Rn monitoring track (dotted line), the station at which the time-series measurements of 222Rn were made, and the seepage meter (star mark). Monitoring along the coastline from a boat was conducted from A to D. c Schematic illustration of the deployment of the seepage meters and loggers Methods/Experimental Obama Bay is a semi-enclosed embayment in Fukui Prefecture, Japan (Fig. 1a). The bay is one of the tributaries of Wakasa Bay, which is connected to the Japan Sea, and the tidal range is less than 20 cm. The surface area, volume, and mean depth of the bay are 58.7 km2, 0.74 km3, and 13 m, respectively. The annual precipitation in the watershed of Obama Bay is over 2000 mm/year. 
The discharge of two major rivers, the Kita and Minami rivers, is usually less than 10 m3/s for each river between May and August (data on river discharge were provided by the Ministry of Land, Infrastructure, Transport and Tourism, Japan). There are significant groundwater resources in the basin, particularly on the Obama Plain (Sasajima and Sakamoto 1962), and there are more than 100 flowing artesian wells on this alluvial plain (Sugimoto et al. 2016). The SGD rates (m3/d) flowing into Obama Bay were estimated based on the monthly 222Rn data and a steady-state mass balance model and were relatively high in spring (March–April), when snowmelt water was predominant, and in the rainy summer season (June–September) (Sugimoto et al. 2016). The annual mean concentration of 222Rn in the river water flowing into Obama Bay was about 59.6 dpm/L, which was lower than that in the groundwater (mean 660 dpm/L) but was not negligible (Sugimoto et al. 2016). The rainfall, snowfall, and wind velocity data, measured at the weather station designated "Obama" from 20 April to 15 May 2013, were obtained from the Automated Meteorological Data Acquisition System (AMeDAS) of the Japan Meteorological Agency. The mean monthly river discharge data from the largest river (Kita River) for 2013 were obtained from the Ministry of Land, Infrastructure, Transport and Tourism of Japan. The monthly changes in the meteorological conditions in 2013 are shown in Fig. 2. The total monthly rainfall was relatively high in January but gradually decreased and was lowest in May. It increased again in June and continued to be relatively high from June to August. It then increased markedly in September, which is the typhoon season. Significant snowfall was recorded in winter (December to February). The mean monthly river discharge was relatively high from January to March, was lowest for the year in May, and increased slightly in June and July. It then increased markedly in September, when heavy rain fell in Obama City. Total monthly rainfall (mm) and snowfall (cm) and mean monthly river discharge rate of the Kita River (m3/s) in 2013 Measurements and sampling The distributions of the 222Rn concentrations along the eastern coast of Obama Bay were measured on 13, 15, and 16 March, 7 June, 22 July, and 12 September, 2013, with electronic Rn detectors (RAD 7: Durridge) from a research boat belonging to Fukui Prefectural University. The boat ran close to the coast, within 10 m of land, where the depth of the water column was around 2 m, at a speed of 1–2 knots. The tracking line is shown in Fig. 1a. Monitoring was conducted from Tomari (A) to Seihama (D). The total distance of the tracking line was about 15 km. Seawater was pumped at 5–10 L/min from 0.5 m below the surface, and the temperature and salinity of the water were measured every 1 min using a CTD instrument (AAQ1183: JFE-Advantech). The 222Rn in the seawater was measured and analyzed according to Stieglitz et al. (2010), who developed an Rn detector system which consists of three Rn detectors connected in parallel and interfaced with an air–water exchanger (Dulaiova et al. 2005). We used two sets of Rn detector systems, with a measurement interval for each system of 10 min, so the average values and standard deviations of the 222Rn concentrations were obtained every 5 min. Seawater samples for analyses of nutrient concentrations and Chl-a were collected every 10 min and every 20 min, respectively. However, seawater sampling for analyses of Chl-a failed at some points in March. 
At a station on the tracking line (star in Fig. 1b), we measured the temporal variations in the 222Rn concentrations in addition to the SGD fluxes, using a seepage meter. The water depth at the station was around 1 m. Divers deployed the seepage chamber (base radius, 15 cm) on the seabed at the station on 18 April 2013. A diagram of the seepage chambers is shown in Fig. 1c. Salinity and temperature loggers (Infinity-CT or MDS-CT, JFE-Advantec) were set inside and outside the chambers, whereas sea-level loggers (DIK-613A, Daikirika) were set only on the outsides of the chambers. The details of the seepage meters have been described by Taniguchi and Iwakawa (2004). We calibrated the seepage meters in the laboratory with a Coriolis flowmeter (FD-S, Keyence) before and after the field measurements were made. The seepage fluxes were determined in mL/min from each calibration curve and then converted to cm/d using the area of the chambers (cm2). Seawater was pumped from the sea bottom at 5–10 L/min next to the chamber (Fig. 1c) from 20 April to 15 May in 2013. The 222Rn concentrations in the seawater were measured every 10 min using a 222Rn detector (RAD 7: Durridge) interfaced with an air–water exchanger located on land. The seawater samples for nutrient analysis were collected once a day from 20 April to 12 May in 2013. The times of water sampling are shown with the data on nutrient concentrations (Fig. 10). Samples and data analysis The seawater samples were filtered through GF/F 0.7 μm glass filters, frozen, and stored in the freezer. The concentrations of nitrate (NO3), nitrite (NO2), phosphate (PO4), and dissolved silicate (DSi) were measured with an auto-analyzer (QuAAtro, BL-Tech). The concentration of ammonium (NH4) was measured with a fluorometer (Trilogy, Turner Design) with a CDOM/NH4 module (Model 7200-041, Turner Design), using the ortho-phthaldialdehyde (OPA) method (Holmes et al. 1999). In this study, we defined dissolved inorganic nitrogen (DIN) as the sum of NO3 and NO2 because almost all the concentrations of NH4 at the surface along the coastline were below the detection limit (0.1 μM). Dissolved inorganic phosphorous (DIP) is defined as PO4. The concentration of Chl-a on the GF/F filters was quantified with a calibrated fluorometer (Trilogy, Turner Design). The analytical errors were within 10%, suggesting that the values analyzed were sufficiently accurate to draw conclusions from the data. For the data analysis, we used the 222Rn concentrations in the river water flowing into the bay, which had been measured monthly in 2013 by Sugimoto et al. (2016). Because approximately 80% of the riverine 222Rn entering the bay is contributed by the two major rivers (Kita and Minami rivers), the mean 222Rn concentrations in the two rivers were assumed to represent the value for the surface river waters (Sugimoto et al. 2016). We also used the 222Rn concentrations and salinity in the terrestrial groundwater measured in March 2013 by Sugimoto et al. (2016). The temporal changes in the tidal level from 20 April to 15 May 2013 were also obtained from the Japan Meteorological Agency. We used fast Fourier transforms (FFTs), providing power spectra, to evaluate the fluctuation cycles in the time series for wind velocity, sea level, SGD flux, and 222Rn concentration. 
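As a rough illustration of that analysis step, the minimal Python sketch below computes a power spectrum for an evenly sampled time series and recovers the dominant periods from a synthetic, tide-like signal. The hourly sampling interval and the simple mean removal are assumptions for illustration; they do not reproduce the actual preprocessing applied to the observed records.

```python
import numpy as np

def power_spectrum(x, dt_hours=1.0):
    """Return (periods in hours, spectral power) for an evenly sampled series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                          # remove the mean before the FFT
    power = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(x.size, d=dt_hours)
    return 1.0 / freq[1:], power[1:]          # drop the zero-frequency term

# synthetic hourly "sea level" with 25 h and 12.5 h constituents (26 days)
t = np.arange(24 * 26, dtype=float)
sea_level = 0.2 * np.sin(2 * np.pi * t / 25.0) + 0.1 * np.sin(2 * np.pi * t / 12.5)
periods, power = power_spectrum(sea_level)
print(periods[np.argsort(power)[-2:]])        # the two dominant periods, in hours
```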
We evaluated the factors limiting primary productivity using the equations of Steel (1962) for temperature (F_T) and nutrients (F_N), as follows:

$$ F_{\mathrm{T}} = (T/T_{\mathrm{opt}}) \exp\left(1 - T/T_{\mathrm{opt}}\right) $$

$$ F_{\mathrm{N}} = \min\left(\mathrm{DIN}/(K_{\mathrm{N}} + \mathrm{DIN}),\ \mathrm{DIP}/(K_{\mathrm{P}} + \mathrm{DIP})\right) $$

where T_opt (25 °C) is the optimum temperature for phytoplankton growth, and K_N (1.7 μM) and K_P (0.19 μM) are the half-saturation constants for DIN and DIP, respectively. These parameters were obtained from studies of the Japanese coast (Yanagi and Onitsuka 1999). We set T, DIN, and DIP to the maximum and minimum values extracted from the temperatures, DIN concentrations, and DIP concentrations observed in each month, respectively.

Distributions of the concentrations of 222Rn, nutrients, and Chl-a along the coastline

High-resolution mapping of 222Rn was performed in March, June, July, and September of 2013. The concentrations of 222Rn, salinity, and Chl-a along the coastline in each month are shown in Figs. 3, 4, and 5, respectively. For the purpose of illustration, the tracking line was divided into three zones by four points (A–D, shown in Fig. 1b). The results of 222Rn mapping show common patterns in the spatial distributions during June, July, and September: the concentrations were relatively high in the zone between B and D but low in the zone between A and B (Fig. 3). The concentration in the zone between C and D was highest in July. The average concentrations of 222Rn on the tracking line (from A to D) in June, July, and September were nearly equal, at 2.8, 3.3, and 2.6 dpm/L, respectively. The average concentration of 222Rn in March was 9.6 dpm/L, much higher than in the other 3 months. The distribution of the 222Rn concentration in March differed from that in the other months: the 222Rn concentration was highest around point B but relatively low in the zone between B and C.

Fig. 3 Distributions of 222Rn (dpm/L) in a March, b June, c July, and d September, determined by monitoring the coastline from a boat. All the data in March are shown in the same panel, although each datum was obtained on a different day

Fig. 4 Distributions of salinity in a March, b June, c July, and d September, determined by monitoring the coastline from a boat. All the data in March are shown in the same panel, although each datum was obtained on a different day

Fig. 5 Distributions of chlorophyll-a (μg/L) in a March, b June, c July, and d September, determined by monitoring the coastline from a boat. All the data in March are shown in the same panel, although each datum was obtained on a different day

In June, July, and September, salinity was relatively low in the zone between B and D but high in the zone between A and B (Fig. 4). The distributions of low-salinity water corresponded strongly to the zones in which the 222Rn concentrations were relatively high in all surveys. The distribution pattern of Chl-a was consistent across all the surveys, including March: the Chl-a concentrations were relatively high in the zone between B and D but low in the zone between A and B (Fig. 5). The distribution of high-Chl-a water corresponded strongly to the zones in which the 222Rn concentration was relatively high. The correspondence between the distributions of 222Rn and Chl-a is shown in Fig. 6. The distributions of both parameters corresponded well in June, July, and September, but not in March.
The locations of the peak values for the two parameters differed in March. The correlation coefficients indicated significant correlations between the concentrations of 222Rn and Chl-a in June, July, and September. Significant correlations were also obtained between the concentrations of 222Rn and PO4 in June and between salinity and NO3 in March, June, and September (Table 1). Line plots of 222Rn (dpm/L) and chlorophyll-a (μg/L) in a March, b June, c July, and d September, observed by monitoring the coastline from a boat. Letters A to D in c and d correspond to those in Fig. 1b Table 1 Correlation coefficients for salinity and nutrients, 222Rn and nutrients, and 222Rn and chlorophyll-a We created the zones shown in Fig. 1b using points A–D to investigate the differences in the biogeochemical properties of each zone along the coastline. The datasets obtained from all the observations made in March, June, July, and September were integrated, and the average concentrations of 222Rn, salinity, NO3, and PO4 in each zone are shown in Fig. 7. In the zone between A and B, salinity was relatively high and the concentrations of 222Rn, NO3, and PO4 were low. Salinity was lowest and the concentration of NO3 was highest in the zone between B and C, whereas PO4 was not high in this zone. The concentrations of 222Rn and PO4 were highest in the zone between C and D. Box plots of a 222Rn (dpm/L), b salinity, c phosphorous (μM), and d nitrate (μM), showing the minimum and maximum values, 25th and 75th percentiles, and median values, from data obtained by monitoring the coastline from a boat in March, June, July, and September Temporal changes in the concentration of 222Rn, SGD flux, and biogeochemical properties Figure 8 shows temporal variations in the 222Rn concentration measured with a Radon detector; the SGD flux measured with a seepage meter, the wind velocity (a parameter of the physical environment), and salinity levels inside and outside the seepage chamber at the point at which the 222Rn concentration was relatively high (star; location shown in Fig. 1b). The sea level varied from 0.7 to 1.2 m. The concentrations of 222Rn increased and decreased over a 5–6 day cycle, although the SGD flux showed no such cycle (Fig. 8a). The minimum values of the cycle were nearly stable from April to May, whereas the SGD flux dropped sharply to almost zero in May. Figure 9 shows the FFT power spectra for the SGD flux, the sea level at the station, the concentration of 222Rn, and the wind velocity and salinity inside and outside the chamber. The power spectrum for the SGD flux shows variations with diurnal (25 h) and semidiurnal (12.5 h) cycles, with characteristics similar to those of the power spectrum for sea level. The temporal variations in the 222Rn concentrations throughout the monitoring period showed a 5–6 day cycle and were coincident with those in the wind velocity; 222Rn increased when the northward wind became strong but decreased when the southward wind became dominant (Fig. 8b). The results of the FFT analysis also revealed that 222Rn and wind velocity peaked with a periodicity of ~100 h (Fig. 9). As for the shorter cycles, the power spectra for wind, salinity, and the concentration of 222Rn displayed peaks with periodicities of 24, 20–24, and 20 h, respectively. 
Time series of a 222Rn (dpm/L) and SGD fluxes measured with a seepage meter (cm/d), b 222Rn (dpm/L) and northward wind (m/s), c salinity inside and outside the chamber, and d northward wind (m/s) and salinity outside the chamber, observed at a fixed station (star, shown in Fig. 1b) from 20 April to 15 May, 2013. The wind velocity data were obtained from AMeDAS Spectra obtained with fast Fourier transforms (FFTs) for the time series of 222Rn, SGD flux, sea level, and salinity outside and inside the chamber, observed at a fixed station (star in Fig. 1b) from 20 April to 15 May 2013. The northward wind data were obtained from AMeDAS Figure 8c shows the temporal variations in salinity inside and outside the chamber. The salinity outside the chamber shows negative peaks, with the period of low salinity ranging from 1 to 7 h. Peaks were observed almost daily, whereas no peaks occurred between 24 and 26 April, from 30 April to 3 May, or between 6 and 9 May. Several peaks coalesced into a single peak on 26–27 April. The salinity inside the chamber also showed negative peaks 6–12 h after the negative peaks in the outside salinity, although the magnitude of the declines was much smaller than the declines in the outside salinity. The inside salinity also displayed a declining trend in April and an increasing trend in May. The temporal variations in salinity outside the chamber are presented with the wind velocity data (Fig. 8d). The northward wind showed peaks over a 5–6 day cycle. The negative peaks in the outside salinity appeared 5–6 h after the northward wind strengthened. The outside salinity remained high during periods when the southward wind was dominant. Temporal variations in the concentrations of nutrients (NO3, PO4, DSi) and associated biogeochemical properties at the station described above are shown in Fig. 10. The plots in Fig. 10a indicate the times of water sampling for nutrient analysis. The 222Rn concentrations at the daily sampling times were extracted from Fig. 8a and are plotted in Fig. 10b. The tide level shows diurnal (25 h) and semidiurnal (12.5 h) cycles, but the timing of water sampling was not related to the tidal cycle. The concentrations of PO4 varied between 0.1 and 0.2 μM and increased and decreased over a 5–6 day cycle. The concentration of DSi ranged between 0.0 and 6.4 μM and showed several relatively large peaks that lasted for 1–2 days during the monitoring period. The concentrations of NO3 ranged between 0.0 and 1.0 μM and peaked on the days when DSi increased. The temporal variations in the concentrations of PO4 throughout the monitoring period showed a weak relationship with the daily average concentrations of 222Rn (r = 0.27, p = 0.1), whereas the changes in NO3 and DSi showed no relationship to the changes in 222Rn. Distribution of 222Rn concentrations and its source The concentrations of 222Rn at the northern end of the tracking line, the point closest to the outside of the bay, were less than 1.5 dpm/L throughout the sampling period (Fig. 3), confirming that the influence of SGD was infrequent. In contrast, high 222Rn concentrations (up to 15 dpm/L) were observed inside the bay throughout the monitoring period, suggesting the influence of SGD in Obama Bay. The average 222Rn concentration was highest in March (9.6 ± 2.9 dpm/L), whereas the values in June, July, and September (2.8 ± 1.3, 3.3 ± 2.0, and 2.6 ± 1.6 dpm/L, respectively) were relatively low. 
The average salinity in March was 29.0 ± 2.7, much lower than outside the bay at the end of February (33.3; Sugimoto et al. 2016). The average salinity was lowest in September (26.5 ± 2.6) in response to heavy typhoon rain. The river discharge was relatively high and largely stable from January to March, suggesting the influence of snowmelt water after significant snowfall in January and February (Fig. 2). These results are consistent with those of Sugimoto et al. (2016), who reported that the SGD rates flowing into Obama Bay, estimated from monthly 222Rn data with a steady-state mass balance model, were relatively high in spring (March–April) when snowmelt water was the highest. The 222Rn concentration was measured along a few kilometers of the coastline every 5 min in this study, using two sets of a three-radon-detector system at a boat speed of 1–2 knots. The spatial resolution was slightly higher than that in previous pioneering works (Stieglitz et al. 2010; Hosono et al. 2012), and the distribution of the 222Rn concentration along the coast of the semi-enclosed bay was determined in detail. Similar distributions were observed in June, July, and September, when the 222Rn concentration was highest around the mouths of the Kita and Minami rivers, and the high-222Rn water dispersed along the eastern coast toward the north (Fig. 3). The first candidate source of high-222Rn water is the river water. The 222Rn concentrations in the river water collected at the river mouths in Obama Bay were relatively high (maximum, 234 dpm/L; Sugimoto et al. 2016) because groundwater seeps from the river bed and then flows rapidly down to the river mouth. These concentrations are consistent with those at other river mouths in Japan (e.g., Hosono et al. 2012; Shiokawa et al. 2013), whereas those at the mouths of continental rivers are much lower (e.g., Cable et al. 1996). The total river discharge was large enough (approximately 1.0–2.0 × 106 m3/day; Sugimoto et al. 2016), and the axis of the bay was small enough (<4 km) for the river water to disperse throughout the bay within the period of 222Rn degradation (half-life, 3.83 days). The second candidate source of high-222Rn water is the relatively large SGD offshore from the mouths of the Kita and Minami rivers, as has been mentioned in other surveys conducted by Obama City and models of the flow streams of terrestrial groundwater in the watersheds. Interviews with local residents confirmed the presence of fresh SGD at the point. However, we could not access the sites at the mouths of the Kita and Minami rivers to deploy seepage meters at that time. The concentrations of PO4 and 222Rn were both highest along the coastline and correlated with one another (Fig. 7; Table 1) around this site (in the zone between C and D, see Fig. 1b), indicating the presence of significant SGD. Temporal changes in 222Rn concentration and SGD flux We continuously monitored the concentration of 222Rn and measured the SGD flux using a seepage meter at a specific station (location shown in Fig. 1b) for a month, to determine the major sources of 222Rn. The seepage meter located on the sea bottom indicated the presence of local SGD at the station (Fig. 8a). SGD includes the submarine fresh groundwater discharge (SFGD), originating from the freshwater on land, and the recirculated submarine groundwater discharge (RSGD), originating from the seawater that infiltrates the seabed under tidal influences (Taniguchi 2002a; Taniguchi et al. 2002b). 
The SFGD and RSGD generally show large differences in salinity, so they can be separated. We calculated them using the salinity inside and outside the chamber, as shown in Fig. 8c. The contribution rate of fresh SGD (SFGD/SGD × 100 [%]) was calculated as follows (Ishitobi et al. 2007):

$$ \mathrm{SGD} = \mathrm{SFGD} + \mathrm{RSGD} $$

$$ \mathrm{SGD} \times C_{\mathrm{sgd}} = (\mathrm{SFGD} \times C_{\mathrm{sfgd}}) + (\mathrm{RSGD} \times C_{\mathrm{rsgd}}) $$

where SGD is the seepage flux determined with a seepage meter, C_sgd is the salinity inside the chamber, C_rsgd is the salinity outside the chamber, and C_sfgd is the salinity of the fresh groundwater on land. In this study, C_sfgd, which is the salinity (calculated from conductivity measurements) of the terrestrial groundwater reported by Sugimoto et al. (2016), was assumed to be zero. The level of the SGD flux changed sharply on 1 May, so we divided the datasets for SGD flux and salinity into the periods 20 April–1 May and 2–15 May. C_rsgd was set to the salinity averaged over each period, and C_sgd was set to the salinity averaged over the 1 h before the end of each period. We then solved the system of equations to calculate SFGD and RSGD. When C_sgd was larger than C_rsgd, the contribution of SFGD was assumed to be zero. The contribution rates of SFGD (%) varied among months at the station: 1.4% in April but 0.0% in May 2013. The average SGD flux was 4.6 cm/day in April and 0.8 cm/day in May, with minimum and maximum values for the whole monitoring period of 0.1 and 18.9 cm/day, respectively. The average SGD fluxes were smaller than those reported for the Chokai area (average SGD flux = 38.9 cm/day; Hosono et al. 2012), where the tidal range is as small as that in Obama Bay but large SFGD fluxes were observed. The average SFGD flux, 0.41 cm/day, was consistent with that obtained with the 222Rn mass balance model applied to the whole bay (0.62 cm/day; Sugimoto et al. 2016).
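The two-endmember calculation above reduces to solving two linear equations for SFGD and RSGD. The short Python sketch below illustrates the arithmetic, assuming an illustrative seepage flux and salinity values rather than the observed data; it is not the exact processing used in this study.

```python
def partition_sgd(sgd_flux, sal_inside, sal_outside, sal_fresh=0.0):
    """Split a measured seepage flux (cm/d) into fresh (SFGD) and recirculated
    (RSGD) components with the two-endmember salinity balance of
    Ishitobi et al. (2007):
        SGD = SFGD + RSGD
        SGD * C_sgd = SFGD * C_sfgd + RSGD * C_rsgd
    """
    if sal_inside >= sal_outside:
        # Chamber water at least as salty as ambient seawater:
        # no resolvable fresh component.
        return 0.0, sgd_flux
    sfgd = sgd_flux * (sal_outside - sal_inside) / (sal_outside - sal_fresh)
    rsgd = sgd_flux - sfgd
    return sfgd, rsgd

# Hypothetical example values (not the observed data):
# seepage flux 4.6 cm/d, chamber salinity 30.9, ambient salinity 31.3
sfgd, rsgd = partition_sgd(4.6, 30.9, 31.3)
print(f"SFGD = {sfgd:.2f} cm/d ({100 * sfgd / 4.6:.1f}% of SGD), RSGD = {rsgd:.2f} cm/d")
```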
Temporal changes in the SGD flux and sea level showed variations with diurnal (25 h) and semidiurnal (12.5 h) cycles (Fig. 9), suggesting the influences of diurnal and semidiurnal tides. This indicates that the SGD at the site consisted mainly of tidally influenced RSGD. The observed SGD flux dropped sharply to almost zero and the salinity inside the chamber increased in May, while the seasonal change in the SGD flux was consistent with that obtained with the 222Rn mass balance model (Sugimoto et al. 2016). These trends indicated that the reduced SGD flux in May was partly attributable to the disappearance of the snowmelt water and partly to the rise in sea level from May, which suppressed SGD. The seasonal changes in snowmelt water and sea level are mainly attributed to the seasonal change in air temperature, which varies little from year to year. These factors may explain the reduction in SGD observed in this study. The minimum values of the 5–6 day cycle of the 222Rn concentration were almost constant from April to May, whereas the local SGD flux observed at the station dropped sharply in May (Fig. 8), suggesting the influence of the advection of high-222Rn water. The observed distribution of the 222Rn concentration along the coast suggested that high-222Rn water was present around the mouths of the Kita and Minami rivers during the observation period (Fig. 3). The results of the FFT analysis (i.e., that both the 222Rn concentration and the wind velocity peaked with a periodicity of ~100 h; Fig. 9) indicated that wind played an important role in the advection of high-222Rn water. We estimated the time (T_i) required for the high-222Rn water near the mouths of the Kita and Minami rivers to reach the station. As an example, for the northward wind velocity of ~2 m/s that prevailed for 6 h daily between 2 and 13 May (Fig. 8), the velocity of the wind-driven flow, V (m/s), was calculated as follows:

$$ V = \frac{\rho_{\mathrm{a}} \, C_{\mathrm{d}} \, W_{\mathrm{y}} \sqrt{W_{\mathrm{x}}^2 + W_{\mathrm{y}}^2}}{\rho_{\mathrm{s}}} \times \frac{dt}{dz} $$

where ρ_a is the density of air, C_d is the friction coefficient at the sea surface, ρ_s is the density of seawater, and W_x and W_y are the wind velocities along the X- and Y-axes, respectively. We set the Y- and X-axes to north–south and west–east, respectively; dt is the time scale on which the wind blows in a particular direction and dz is the friction depth. The friction depth (D) was defined as follows (Officer 1976):

$$ D = \pi \sqrt{2 N_z / f} $$

where π is 3.14, f is the Coriolis coefficient (7.3 × 10−5) at latitude 35° north, and N_z is the eddy viscosity coefficient. D ranges from 5 to 50 m, assuming N_z ranges from 1 to 100 cm2/s. The minimum value of D exceeds the water depth of the coastal zone of the bay, so we set dz equal to the water depth of 1–5 m. We assumed the following values: ρ_a, 1.205 kg/m3; C_d, 0.0013; ρ_s, 1022 kg/m3; W_x, zero; W_y, 2 m/s; and dt, 6 × 3600 s. With these values, the equation for V gives 0.03–0.13 m/s. We would expect the actual flow speed to be higher than this value because of the additional influence of the density-driven flow. Because the distances between the river mouths and the station were ~2 km, T_i is less than 1.0 day, which is much shorter than the half-life of 222Rn (3.83 days). Therefore, the high-222Rn water near the mouths of the Kita and Minami rivers is a potential source of 222Rn in the area of the station.
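The back-of-the-envelope advection time scale above can be reproduced with a few lines of Python. The numbers below are the parameter values stated in the text; treating dz as the 1–5 m water depth is the same assumption made above.

```python
import math

RHO_AIR = 1.205      # kg/m^3
RHO_SEA = 1022.0     # kg/m^3
C_D = 0.0013         # surface friction coefficient
F_CORIOLIS = 7.3e-5  # Coriolis coefficient at 35 deg N

def wind_driven_velocity(w_x, w_y, dt, dz):
    """Wind-driven flow speed V (m/s) from the wind-stress balance used above."""
    stress = RHO_AIR * C_D * w_y * math.hypot(w_x, w_y)   # N/m^2
    return stress / RHO_SEA * dt / dz

def friction_depth(n_z_cm2_s):
    """Friction depth D (m) after Officer (1976); N_z given in cm^2/s."""
    n_z = n_z_cm2_s * 1e-4   # convert to m^2/s
    return math.pi * math.sqrt(2.0 * n_z / F_CORIOLIS)

dt = 6 * 3600.0                       # northward wind blowing for 6 h
for dz in (1.0, 5.0):                 # water depth used in place of D (m)
    v = wind_driven_velocity(0.0, 2.0, dt, dz)
    print(f"dz = {dz:>3.0f} m -> V = {v:.3f} m/s, "
          f"time to travel 2 km = {2000.0 / v / 3600.0:.1f} h")
print(f"D range: {friction_depth(1):.0f}-{friction_depth(100):.0f} m")
```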
Influence of the advection of high-222Rn water on 222Rn concentrations

The results of the FFT analysis indicated that both the 222Rn concentration and the wind velocity peaked with a periodicity of ~100 h (Fig. 9), suggesting that wind was strongly related to the long-term temporal changes in the 222Rn concentration at the observation site. In this section, we examine the influence of the wind-driven advection of high-222Rn water on the temporal changes in the 222Rn concentration in more detail. In the short term, the temporal changes in the wind, salinity, and 222Rn concentration displayed variations with periods of around 24, 20–24, and 20 h, respectively (Fig. 9), so the cycle of the 222Rn concentration was not perfectly coincident with the variations in wind velocity. This is considered to be partly because the advection of the low-salinity, high-222Rn water within a day was controlled not only by the wind but also by the tidal currents, the density-driven flow (such as the estuarine circulation), vertical mixing, and the interactions of all these factors, and partly because of the limitations of the FFT analysis caused by the spike-like changes in salinity and the 222Rn concentration. Therefore, although the temporal variation in salinity was controlled by many physical factors in addition to the wind, the temporal variation in the 222Rn concentration could be synchronized with that in salinity.

The salinity at the station increased during the periods when the southward wind was dominant (Fig. 8d), suggesting that the northward dispersion of the low-salinity, high-222Rn water flowing from the mouths of the Kita and Minami rivers was suppressed. The 1 h averaged 222Rn concentrations, measured every 10 min at the station (location indicated by a star in Fig. 1b), are plotted versus the salinity outside the chamber in Fig. 11, to allow a discussion of the influence of wind.

Fig. 11 a 222Rn concentration (dpm/L) and salinity of the endmembers for seawater (SW), river water (RW), terrestrial groundwater (tGW), and recirculated submarine groundwater (RSGD). The solid line, dotted line, and bold line are the mixing curves between SW and RW, SW and tGW, and SW and RSGD, respectively. b Average 222Rn concentration (dpm/L) for 1 h in each day plotted against salinity outside the chamber. The symbols RW, tGW, and RSGD indicate a part of each mixing curve

The northward wind was dominant on 30 April and then weakened on the next day, and the southward wind became strong on 2 May (Fig. 8d), so we used the datasets for the 222Rn concentration and salinity in the period 30 April–2 May. The endmembers and mixing curves are shown in Fig. 11a, and the data for the 3 days in this period, which were measured from noon on one day to noon on the next day, are plotted in Fig. 11b, along with a part of each mixing curve. The 222Rn concentration and salinity of seawater outside the bay, measured on 26 April 2013, were 0.9 dpm/L and 34.0, respectively. The 222Rn concentrations measured on the same date in the two major rivers (Kita and Minami rivers) were 72.5 and 73.6 dpm/L, respectively, so we used the mean value of 73.1 dpm/L as the endmember for river water. The mean concentration of 222Rn in the terrestrial groundwater was 660 dpm/L, and the salinity of the river water and terrestrial groundwater was assumed to be zero, all according to Sugimoto et al. (2016). The 222Rn concentration and salinity at 0.9 m below the seabed at the station were determined by direct measurement to be 54.2 ± 23.5 dpm/L and 31.3, respectively (Sugimoto et al. 2017, unpublished data), so we used a value of 54 dpm/L as the endmember for RSGD. Half the plots in Fig. 11b are located above the mixing curve between seawater outside the bay (SW) and river water (RW), suggesting that the local SGD contributed to the 222Rn concentration, although whether the major component of that SGD was terrestrial groundwater (tGW) or recirculated submarine groundwater (RSGD) could not be determined. In contrast, the daily changes in the 222Rn concentration were in response to changes in wind direction. The concentration of 222Rn was relatively high on 30 April–1 May, the period in which the northward wind was dominant, whereas the 222Rn concentration gradually decreased on 1–2 May and became close to the SW endmember on 2–3 May, the period in which the southward wind was dominant (Fig. 11b). These results provide evidence of the advection of high-222Rn water, which was controlled mainly by the wind during the observation period. Moreover, as shown in Fig. 7, high-222Rn and high-PO4 seawater was present around the mouths of the Kita and Minami rivers (the zone between C and D, shown in Fig. 1b). This water moved northward when the northern wind became strong, supplying 222Rn and PO4 to the station at which the temporal variations in these parameters were measured.
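The mixing curves in Fig. 11a follow from simple conservative two-endmember mixing between outside-bay seawater and each freshwater or groundwater source. The sketch below uses the endmember values quoted above and ignores 222Rn decay and gas exchange over the short mixing time; it is an illustration of how such curves are generated, not the plotting code used for the figure.

```python
import numpy as np

# Endmembers quoted in the text: (salinity, 222Rn in dpm/L)
SW   = (34.0,  0.9)   # seawater outside the bay
RW   = ( 0.0, 73.1)   # mean of the Kita and Minami river mouths
TGW  = ( 0.0, 660.0)  # terrestrial groundwater
RSGD = (31.3, 54.0)   # pore water 0.9 m below the seabed

def mixing_curve(end_a, end_b, n=101):
    """Salinity and 222Rn along a conservative mixture of two endmembers."""
    f = np.linspace(0.0, 1.0, n)          # fraction of endmember b
    sal = (1 - f) * end_a[0] + f * end_b[0]
    rn  = (1 - f) * end_a[1] + f * end_b[1]
    return sal, rn

for name, end in [("RW", RW), ("tGW", TGW), ("RSGD", RSGD)]:
    sal, rn = mixing_curve(SW, end)
    # 222Rn expected at salinity 32 on the SW-to-<name> mixing line
    rn_at_32 = np.interp(32.0, sal[::-1], rn[::-1])
    print(f"SW-{name}: 222Rn at salinity 32 = {rn_at_32:.1f} dpm/L")
```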
The results of our various observations, including those made with seepage meters, the continuous monitoring of 222Rn concentrations at a single station, and the monitoring survey of 222Rn concentrations from a boat, together with measurements of the biogeochemical properties, suggest that the advection of high-222Rn water plays an important role in controlling the temporal changes in the 222Rn concentration at the station in the zone between B and C, whereas the local SGD might be another source of 222Rn in the period when significant SGD flux was observed. The mass balance model (Burnett et al. 2008), which estimates SGD fluxes using 222Rn records from monitoring surveys, could not be applied because the temporal changes in the advection of high-222Rn water were significant. Influence of SGD on the biogeochemical environment along the coastline The results of the monitoring survey along the coastline suggest that groundwater, which is characterized by high 222Rn, contributes to the nutrient supply in the bay. The N:P:Si molar ratio of the fresh groundwater on land was approximately 26:1:27, whereas that in the river water was 82:1:150 (Sugimoto et al. 2016), suggesting the groundwater is rich in PO4. The concentrations of PO4 and 222Rn were both highest along the coastline in the zone between C and D (Fig. 7) and varied simultaneously (Fig. 10). This mutual correlation suggests that nutrients are supplied by SGD to the surface layer along the coastline. The factors limiting primary productivity are listed in Table 2. The limitation factors for the maximum and minimum temperatures at each observation point (F T) were nearly 1.0 in June, July, and September, suggesting that temperature was not a limiting factor in these seasons. F T in March ranged from 0.64 to 0.81, indicating that temperature was one of the factors limiting primary productivity. Of the factors limiting primary productivity in the sea, light intensity was not considered in this study because of a lack of data. However, light intensity was assumed to rarely limit primary production at the surface, as inferred from the limitation factor for light measured at the bottom in the shallow zone of Obama Bay (Sugimoto et al. 2017). F N values for the minimum concentrations of nutrients at each observation point were all nearly 0.0, and F N values for the maximum concentrations were around 0.5, suggesting that nutrients were a major factor limiting primary production at all the observation points. Table 2 Maximum and minimum temperatures and concentrations of nutrients along the monitoring line shown by the dotted line in Fig. 1b in each month and limiting factors (FT, TN) for primary productivity The correspondence and correlation between the concentrations of 222Rn and Chl-a at the surface in June, July, and September (Fig. 6, Table 1) suggest that the nutrients supplied by SGD influenced the phytoplankton biomass. The correlation of these factors in March was not significant, mainly because not only nutrients limited primary production but also temperature and probably light intensity. Of course, it is important to consider the influence of the seawater residence time on the concentrations of both 222Rn and Chl-a. 
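As a minimal illustration of the limiting-factor calculation behind Table 2, the sketch below evaluates the Steel (1962) temperature term and the Michaelis–Menten nutrient term with the parameter values given in the Methods (T_opt = 25 °C, K_N = 1.7 μM, K_P = 0.19 μM); the input temperature and nutrient values are hypothetical, not the observed extremes.

```python
import math

T_OPT = 25.0   # optimum temperature for phytoplankton growth (deg C)
K_N = 1.7      # half-saturation constant for DIN (uM)
K_P = 0.19     # half-saturation constant for DIP (uM)

def f_temperature(t):
    """Steel (1962) temperature limitation factor F_T."""
    return (t / T_OPT) * math.exp(1.0 - t / T_OPT)

def f_nutrient(din, dip):
    """Nutrient limitation factor F_N: the more limiting of DIN and DIP."""
    return min(din / (K_N + din), dip / (K_P + dip))

# Hypothetical March-like and July-like conditions (not the observed values)
for label, t, din, dip in [("March", 10.0, 3.0, 0.10), ("July", 25.0, 0.5, 0.05)]:
    print(f"{label}: F_T = {f_temperature(t):.2f}, F_N = {f_nutrient(din, dip):.2f}")
```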
Although the links among 222Rn, nutrient supply, and primary production must be more fully clarified in future studies, the results of this survey provide persuasive evidence that SGD influences the phytoplankton biomass in Obama Bay, whereas the nutrient supply pathways are not limited to the local SGD but are also influenced by its advection. In this study, we conducted high-resolution mapping and made time-series measurements of 222Rn and the biogeochemical properties along the coastline, combined with SGD measurements made with seepage meters, to examine the influence of SGD on the biogeochemical environment in Obama Bay. Our results show that (1) the observed concentrations of 222Rn along the coast of the bay indicate that groundwater affected the biogeochemical properties of the bay and was not limited to the local SGD; (2) 222Rn flowing into the bay from rivers (in which groundwater seeps from the river bed) had a strong effect on the distribution of 222Rn concentrations along the coast; (3) high-222Rn water was always present around the river mouths, and the northward advection of the water affected the distribution of 222Rn concentrations; and (4) the southward wind suppressed the advection of high-222Rn water and was the main control on temporal variations in the 222Rn concentration at a station located to the north of the Kita and Minami rivers. Although the effect of the residence time of the seawater on the correlation between the concentrations of 222Rn and Chl-a must be clarified in a future study, the results of this study provide persuasive evidence that the nutrients supplied by SGD influence the phytoplankton biomass in Obama Bay. AMeDAS: Automated Meteorological Data Acquisition System Chl-a: Chlorophyll-a SGD: Submarine groundwater discharge Bruce JR (1925) The metabolism of shore-living dinoflagellates. Br J Exp Biol 2:413–426 Burnett WC, Dulaiova H (2003) Estimating the dynamics of groundwater input into the coastal zone via continuous radon-222 measurements. J Environ Radioact 69:21–35 Burnett WC, Kim G, Lane-Smith DA (2001) Continuous monitor for assessment of 222Rn in the coastal ocean. J Radioanal Nucl Chem 249:167–172 Burnett WC, Aggarwal PK, Aureli A, Bokuniewicz H, Cable JE, Charette MA, Kontar E, Krupa S, Kulkarni KM, Loveless A, Moore WS, Oberdorfer JA, Oliveira J, Ozyurt N, Povince P, Privitera AMG, Rajar R, Ramessur RT, Scholten J, Stieglitz J, Taniguchi M, Turner JV (2006) Quantifying submarine groundwater discharge in the coastal zone via multiple methods. Sci Total Environ 367:498–543 Burnett WC, Peterson R, Moore WS, de Oliveira J (2008) Radon and radium isotopes as racers of submarine groundwater discharge—results from the Ubatuba, Brazil SGD assessment intercomparison. Estuarine Coastal Shelf Res 76:501–511 Cable JE, Burnett WC, Chanton JP, Weatherly GL (1996) Estimating groundwater discharge into the northeastern Gulf of Mexico using radon-222. Earth Planet Sci Lett 144:591–604 Campbell EE, Bate GC (1996) Groundwater as a possible controller of surf diatom biomass. Revista Chilena De Historia Natural 69(4):503–510 Dulaiova H, Peterson R, Burnett WC, Lane-Smith DA (2005) A multi-detector continuous monitor for assessment of Rn-222 in the coastal ocean. J Radioanal Nucl Chem 263:361–365 Ellins KK, Roman-Mas A, Lee R (1990) Using 222Rn to examine groundwater/surface discharge interaction in the Rio Grade, DeManati, Puerto Rico. 
J Hydrol 115:319 Gobler CJ, Sanudo-Wilhelmy SA (2001) Temporal variability of groundwater seepage and brown tide blooms in a Long Island embayment. Mar Ecol Prog Ser 217:299–309 Holmes RM, Aminot A, Kerouel R, Hooker BA, Peterson BJ (1999) A simple and precise method for measuring ammonium in marine and freshwater ecosystems. Can J Fish Aquat 56(10):1801–1808 Honda H, Sugimoto R, Kobayashi S, Tahara D, Tominaga O (2016) Temporal and spatial variation in primary production in Obama Bay. Bull Jpn Soc Fish Oceanogr 80(4):269–282 Hosono T, Ono M, Burnett WC, Tokunaga T, Taniguchi M, Akimichi T (2012) Spatial distribution of submarine groundwater discharge and associated nutrients within a local coastal area. Environ Sci Technol 46:5319–5326 Ishitobi T, Taniguchi M, Shimada J (2007) Estimations of groundwater discharge and changes in fresh-salt water interface by measurements of submarine groundwater discharge in the coastal zone. Ground Water 49(3):191–204 (in Japanese with English abstract) Johannes RE (1980) The ecological significance of the submarine discharge of groundwater. Mar Ecol Prog Ser 3:365–373 Kohout FA, Kolipinski MC (1967) Biological zonation related to groundwater discharge along the shore of Biscayne Bay. Miami, Florida. In: Lauff G (ed) Estuaries, vol 83. AAAS Publ., Washington, D.C, pp 488–499 Laroche J, Nuzzi R, Waters R, Wyman K, Falkowski P, Wallace D (1997) Brown Tide blooms in Long Island's coastal waters linked to interannual variability in groundwater flow. Glob Chang Biol 3(5):397–410 Lee DR (1977) A device for measuring seepage flux in lakes and estuaries. Limnol Oceanogr 22:140–147 Miller D, Ullman C, William J (2004) Ecological consequences of ground water discharge to Delaware Bay, United States. Ground Water 42(7):959–970 Moore WS (1996) Large groundwater inputs to coastal waters by 226Ra enrichments. Nature 380:612–614 Nestler J (1977) A preliminary study of the sediment hydrology of a Georgia salt marsh using Rhodamine WT as a tracer. Southeastern Geol 18:265–271 Officer CB (1976) Physical oceanography in estuaries. John Wiley & Sons, New York, p 465 Rutkowski CM, Burnett WC, Iverson RL, Chanton JP (1999) The effect of groundwater seepage on nutrient delivery and seagrass distribution in the Northeastern Gulf of Mexico. Estuaries 22:1033–1040 Sanders JG (1979) The importance of salinity in determining the morphology and composition of algal mats. Bot Mar 22:159–162 Santos IR, Niencheski F, Burnett WC, Peterson R, Chanton JP, Andrade CFF, Milani IB, Schmidt A, Knoeller K (2008) Tracing anthropogenically driven groundwater discharge into a coastal lagoon from southern Brazil. J Hydrol 353:275–293 Sasajima S, Sakamoto K (1962) Subsurface geology and groundwater of Obama Plain, Fukui pref., central Japan. Part 2: groundwater of Obama Plain. Memoirs of the Faculty of Liberal Arts, University of Fukui. Ser. II Natural science (in Japanese with English abstract) Shiokawa M, Yamaguchi A, Umezawa Y (2013) Estimation of groundwater-derived nutrient inputs into the west coast of Ariake Bay. Bull Coastal Oceanography 50(2):157–167 (in Japanese with English abstract) Steel JH (1962) Environmental control of photosynthesis in the sea. Limnol Oceanogr 7:137–150 Stieglitz TC, Cook PG, Burnett WC (2010) Inferring coastal processes from regional-scale mapping of 222Radon and salinity: examples from the Great Barrier Reef, Australia. 
J Environ Radioact 101:544–552 Sugimoto R, Honda H, Kobayashi S, Takao Y, Tahara D, Tominaga O, Taniguchi M (2016) Seasonal changes in submarine groundwater discharge and associated nutrient transport into a tideless semi-enclosed embayment (Obama Bay, Japan). Estuar Coasts 39:13–26 Sugimoto R, Kitagawa K, Nishi S, Honda H, Yamada M, Kobayashi S, Shoji J, Ohsawa S, Taniguchi M, Tominaga O (2017) Phytoplankton primary productivity around submarine groundwater discharge in nearshore coasts. Mar Ecol Prog Ser 563:25–33 Taniguchi M (2002a) Tidal effects on submarine groundwater discharge into the ocean. Geophysical Res Lett 29(12). doi:10.1029/2002GL014987 Taniguchi M, Iwakawa H (2004) Submarine groundwater discharge in Osaka Bay, Japan. Limnology 5:25–32 Taniguchi M, Burnett WC, Cable JE, Turner JV (2002) Investigation of submarine groundwater discharge. Hydrol Process 16:2115–2129 Valiela I, Costa JE (1988) Eutrophication of Buttermilk Bay, a cape cod coastal embayment: concentrations of nutrients and watershed nutrient budgets. Environ Manag 12(4):539–553 Valiela I, Foreman K, LaMontagne M, Hersh D, Costa J, Peckol P, DeMeo-Andreson B, D'Avanzo C, Babione M, Sham C, Brawley J, Lajtha K (1992) Couplings of watersheds and coastal waters: sources and consequences of nutrients enrichment in Waquoit Bay, Massachusetts. Estuaries 15:443–457 Waska H (2011) Submarine groundwater discharge (SGD) as a main nutrient source for benthic and water-column primary production in a large intertidal environment of the Yellow Sea. J Sea Res 65(1):103–113 Waska H, Kim G (2010) Differences in microphytobenthos and macrofaunal abundances associated with groundwater discharge in the intertidal zone. Mar Ecol Prog Ser 407:159–172 Yanagi T, Onitsuka G (1999) Numerical model on the lower trophic level ecosystem in Hakata Bay. Umi-no-Kenkyu 8:245–251 (in Japanese with English abstract) The observations made with seepage meters along a line extending from onshore to offshore were made in collaboration with Wakasa High School. We are indebted to the members of the diving club and their teachers, Dr. Yasuyuki Kosaka and Mr. Hiroaki Hirayama, and the headmaster of Wakasa High School for their assistance. We sincerely thank Mr. Teruhiko Nakajima and Mr. Tomohiro Kawashiro of Fukui Prefectural University for supporting the development and deployment of the seepage chambers and the 222Rn monitoring at the fixed stations. We are grateful to the journal editors and three anonymous reviewers for their helpful comments and suggestions. This work was performed with the support of the Research Project Human–Environmental Security in Asia-Pacific Ring of Fire: Water–Energy–Food Nexus (R-08-Init) at the Research Institute for Humanity and Nature (RHIN). SK, RS, DT, OT, and JS conceived and designed the study. HH, YM, and MY performed the field observations and data analysis. SN collaborated with the corresponding author in the data analysis and discussion. MT collaborated with the authors in the planning of the field observations. All the authors have read and approved the final manuscript. 
Author affiliations: Field Science Education and Research Center, Kyoto University, Kitashirakawaoiwake, Sakyo-ku, Kyoto, 606-8502, Japan (Shiho Kobayashi); Research Center for Marine Bioresources, Fukui Prefectural University, 49-8-2, Katsumi, Obama, Fukui, 917-0116, Japan (Ryo Sugimoto, Daisuke Tahara & Osamu Tominaga); Research Institute for Humanity and Nature, 457-4, Kamigamo Motoyama, Kita-ku, Kyoto, 603-8047, Japan (Hisami Honda, Makoto Yamada & Makoto Taniguchi); Idea Consultants Co., 2-2-2, Hayabuchi, Tsuzuki-ku, Yokohama, Kanagawa, 224-0025, Japan (Yoji Miyata); Graduate School of Biosphere Science, Hiroshima University, 1-4-4 Kagamiyama, Higashi-hiroshima, Hiroshima, 739-8528, Japan (Jun Shoji); Graduate School of Maritime Sciences, Kobe University, 5-1-1 Fukae-minami, Higashi-nada-ku, Kobe, 658-0022, Japan (Satoshi Nakada)
Correspondence to Shiho Kobayashi.
Citation: Kobayashi, S., Sugimoto, R., Honda, H. et al. High-resolution mapping and time-series measurements of 222Rn concentrations and biogeochemical properties related to submarine groundwater discharge along the coast of Obama Bay, a semi-enclosed sea in Japan. Prog. Earth Planet. Sci. 4, 6 (2017). https://doi.org/10.1186/s40645-017-0124-y
Keywords: Submarine groundwater discharge (SGD); 222Rn monitoring; Wind-driven advection; Coastal seas; Biogeosciences
MEV Auction: Auctioning transaction ordering rights as a solution to Miner Extractable Value karl January 15, 2020, 4:45pm #1 Special thanks to Vitalik for much of this, Phil Daian as well (& his amazing research on MEV), Barry Whitehat for also coming up with this idea, and Ben Jones for the rest! Blockchain miners (also known as validators, block producers, or aggregators) are nominally rewarded for their services by some combination of block rewards and transaction fees. However, being a block producer tasked with producing a particular block gives you a lot of power within the span of that block, letting you arbitrarily reorder transactions, insert your own transactions before or after other transactions, and delay transactions outright until the next block, and it turns out that there are a lot of ways that one can earn money from this. For example, one can front-run decentralized exchanges (both Uniswap-style and the order book variety), be the first to claim whistleblower rewards, have a favorable position in ICOs, as well as many other forms of mild manipulation of applications. Recent research shows that the revenue that can be extracted from this (called "miner-extractable value" or MEV) is potentially significantly higher than transaction fee revenue. Frequent batch auctions are one traditional response to market manipulation by reordering. In an FBA, instead of processing transactions "as they come", a market gathers all transactions submitted within the same time span (could be short eg. 100 ms, or a minute or longer), reorders them according to a standard algorithm that does not depend on order of submission, and then processes them in that new order. This makes micro-scale timing manipulation nearly irrelevant. We propose a technique in a similar spirit to how FBAs remove micro-scale timing manipulation, with one major difference. In an FBA, there is only one application, and so there is one natural "optimal" order for transactions (orders): process them in order of price. In a general-purpose blockchain, there are many applications with arbitrary properties, and so coming up with a "correct" order is virtually impossible for a fixed algorithm. Instead, we simply auction off the right to reorder transactions within an N-block window to the highest bidder. That is, we create a MEV Auction (MEVA), in which the winner of the auction has the right to reorder submitted transactions and insert their own, as long as they do not delay any specific transaction by more than N blocks. This creates a form of "managed centralization": a single sophisticated party wins the auction and can capture all of the MEV. We call this party a "sequencer." Having a single sequencer reduces the benefit to other block proposers of using "clever" algorithms to near-zero, thereby increasing the chance that "dumb" block proposers will be long-term viable and hence promoting decentralization at the block proposal layer. This technique can theoretically be used at layer 1, though we also show how it is a perfect fit for layer 2 systems, particularly systems such as Optimistic Rollup, zkRollup, or Plasma. This mechanism is designed to extract MEV for the sole purpose of supporting our (inclusive) blockchain community. In fact, this mechanism could be the revenue stream for opt-in self governance built to fund the internet's public goods. We mustn't participate in an MEVA which funds things we don't like! 
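As a toy illustration of the mechanism (not a specification), the sketch below auctions the sequencing right to the highest bidder, sends the winning bid to a public-goods fund, and lets the winner reorder and insert transactions subject to the "no transaction delayed more than N blocks" constraint. All names and numbers here are made up.

```python
def run_meva_round(pending, bids, n_max_delay, fund):
    """One MEVA round: the highest bidder becomes sequencer and orders transactions.

    pending : list of (tx_id, blocks_waiting) tuples submitted by block producers
    bids    : dict of sequencer_name -> bid (in ETH) for the ordering rights
    """
    sequencer, bid = max(bids.items(), key=lambda kv: kv[1])
    fund["balance"] += bid                      # auction revenue funds public goods

    # Transactions at the delay limit must be scheduled now...
    forced = [tx for tx, waited in pending if waited >= n_max_delay]
    # ...everything else the sequencer may reorder, delay, or interleave with its own txs.
    discretionary = [tx for tx, waited in pending if waited < n_max_delay]
    ordering = forced + ["sequencer_arb_tx"] + discretionary
    return sequencer, ordering

fund = {"balance": 0.0}
pending = [("tx1", 3), ("tx2", 0), ("tx3", 1)]
winner, ordering = run_meva_round(pending, {"alice": 1.2, "bob": 0.9},
                                  n_max_delay=3, fund=fund)
print(winner, ordering, fund)
```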
MEV Auction on top of Gas Price Auction

Control over transaction ordering has become extremely profitable, especially as smart contracts like Uniswap have gained popularity. There have been multiple occasions where trades on Uniswap with high slippage caused tens of thousands of dollars in free arbitrage profits. These arbitrage opportunities are taken advantage of by arbitrage bots that watch the blockchain and participate in the gas price auction. These bots outbid each other at high frequency as long as the price they pay for the transaction is not in excess of the amount of money they stand to make. Frontrun.me has great information collected on these auctions happening in the background of Ethereum every day. Counter-intuitively, the real winners of these auctions are Ethereum miners, as bots which outbid each other raise the gas price, and this increased gas price increases miner fees and revenue. By introducing an MEV Auction on top of this gas price auction, we can employ the same market mechanism that currently directs frontrunning profits to miners and redirect that value back to the community.

Implementing the Auction

The auction is able to extract MEV from miners by separating two functions which are often conflated: 1) transaction inclusion; and 2) transaction ordering. In order to implement our MEVA we can define a role for each function: block producers, which determine transaction inclusion, and sequencers, which determine transaction ordering.

Block producers // Transaction Inclusion

Block producers are most analogous to traditional blockchain miners. It is critical that they preserve the censorship resistance that we see in blockchains today. However, instead of proposing blocks with an ordering, they simply propose a set of transactions to eventually be included before N blocks.

Sequencers // Transaction Ordering

Sequencers are elected by a smart-contract-managed auction, run by the block producers, called the MEVA contract. This auction assigns the right to sequence the last N transactions. If, within a timeout, the sequencer has not submitted an ordering which is included by block producers, a new sequencer is elected.

Sequencers and Instant Transaction Inclusion

In addition to extracting MEV, the MEVA provides the current sequencer the ability to offer instant cryptoeconomic guarantees on transaction inclusion. They do this by signing off on an ordering immediately after receiving a transaction from a user – even before it is sent to a block producer. If the sequencer equivocates and does not include the transaction at the index which they promised, the user may submit a fraud proof to the MEVA contract to slash the sequencer. As long as the sequencer stands to lose more than it can gain from an equivocation, we can expect the sequencer to provide a realtime feed of blockchain state which can be monitored, providing, for instance, realtime price updates on Uniswap.
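As a rough sketch of the equivocation game just described, the Python below models a sequencer that signs (transaction, index) commitments and an MEVA contract that slashes its bond when a user proves the published ordering broke a commitment. All names and numbers here are illustrative, not a spec.

```python
from dataclasses import dataclass, field

@dataclass
class Commitment:
    tx: str        # transaction identifier
    index: int     # position the sequencer promised

@dataclass
class Sequencer:
    bond: float                       # stake held by the MEVA contract
    promises: list = field(default_factory=list)

    def commit(self, tx: str, index: int) -> Commitment:
        """Sign off on a position for a user's transaction (simplified: no signature)."""
        c = Commitment(tx, index)
        self.promises.append(c)
        return c

def fraud_proof(sequencer: Sequencer, commitment: Commitment,
                published_ordering: list, slash_amount: float) -> bool:
    """Slash the sequencer if the published ordering breaks the commitment."""
    ok = (commitment.index < len(published_ordering)
          and published_ordering[commitment.index] == commitment.tx)
    if not ok:
        sequencer.bond -= min(slash_amount, sequencer.bond)
    return not ok

seq = Sequencer(bond=100.0)
promise = seq.commit("tx_a", index=0)
# The sequencer equivocates: it publishes an ordering with its own tx inserted first.
slashed = fraud_proof(seq, promise, ["tx_front_run", "tx_a"], slash_amount=50.0)
print(f"slashed={slashed}, remaining bond={seq.bond}")
```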
Implementation on Layer 2

It is possible to enshrine this MEVA contract directly in layer 1 (L1) blockchain consensus protocols. However, it is also possible to non-invasively add this mechanism in layer 2 (L2) and use it to manage Optimistic Rollup transaction ordering. In layer 2, we simply repurpose L1 miners and utilize them as block proposers. We then implement the MEVA contract and designate a single sequencer at a time. (Note: Interestingly, the single sequencer can also be run by a sub-consensus protocol if desired.) In fact, using MEVA for layer 2 is a perfect fit, as it allows us to permissionlessly experiment with different parameters for the auction while simultaneously realigning Ethereum incentives to direct revenue back into the ecosystem. This may serve as the primary revenue stream for blockchain self-sustenance.

MEV Auction Collusion

One concern is auction collusion. Bidders colluding to reduce competition and keep the auction price artificially low would break the ability to accurately discover and tax the MEV. A mitigation is to simply increase the ease of entering the aggregation market by releasing open source sequencer software. This can help to establish a price floor, because with low barriers to entry we can expect enough competition that there will be at least one honest sequencer bid.

Long term incentive alignment

The most naive way to implement MEVA is by holding a first-price auction once a day, giving the winner of the auction a monopoly on block production for that day. All proceeds raised by the auction are then sent into a public goods fund. Unfortunately, this approach has a serious problem: an attacker need only outbid the aggregation costs for a single time-slot in order to become the selected sequencer and degrade network quality. Adding the equivalent of a security deposit for the sequencer goes a long way toward mitigating this problem. If the sequencer degrades network quality at any point during their slot, they should be penalized in proportion to the amount of harm they cause to the network. This can be implemented as a simple bond which can be slashed by a subjective judgement of misbehavior, or by locking up an asset whose price is correlated with the health of the network. Note that sequencer misbehavior often comes as a non-uniquely attributable fault and so unfortunately requires subjective judgements to enforce.

The Parasitic L2 Problem

Layer 2 mining has gotten bad press for diverting revenue away from the L1 miners who secure the network. Diverting revenue from L1 implicitly decreases the security budget, and thereby makes it less costly to perform 51% attacks. While I wish there was a clear mitigation, in reality the parasitic L2 problem deserves much more research & risk analysis. It could be the case that L2 chains drive up demand for L1 enough to keep the price of ETH high, or that ETH remains valuable because it is seen as money, or we simply use out-of-protocol means to protect our most critical blockchains. This remains to be seen and is a great area of research.

Designs like these are critical for framing the coming wave of Ethereum upgrades as not only innovations in scalability, but also as an opportunity to realign incentives to be pro-community, pro-commons, and pro-public goods. Without serious thought around how we will sustain blockchain technology, we risk creating resilient decentralized architectures which eventually crumble due to massive economic centralization. This is not a future anyone wants to live in. Thankfully, these designs show the possibility of encapsulating and reinvesting MEV back into the community. Further research and economic models will be key as we bring these systems into production. Let's do it!

lsankar4033 January 15, 2020, 10:43pm #2

whoa, unbundling inclusion from ordering is a cool idea!
How would you determine the bond that a sequencer is required to risk in the MEVA contract for a given future time window? It seems like it needs to be based on some prediction of MEV in the long-run? tchitra January 15, 2020, 11:45pm #3 This sounds great and is a good starting point for formalizing a threat model for the MEVA. There are a few things that seem like major challenges for this system: Added latency between transaction selection and sequencing Constructing a Bayes-optimal auction for the sequencing auction that is efficiently computable Inflation / dilution / burning mechanics of the underlying system This doesn't address all sources of MEV, so there is non-zero deadweight loss for the tax Let's analyze these independently. It seems unavoidable that this mechanism adds in some latency between transaction selection and transaction sequencing. The simplest high-level view of the mempool is as a standard priority queue (implemented as a heap) whose keys are (gas price, nonce) [0]. This means that when a block is emitted from an honest and rational (profit-maximizing in-protocol only) miner, they simply pop the maximum number of transactions that fit into a block and pack a block [1]. This combines selection and sequencing in a single operation (with runtime O(n \log T) where n is the number of transactions in the mempool and n is the number withdrawn for a block) and doesn't incur the latency of having two agents — the transaction selector and the transaction sequence — having to coordinate. What are options for dealing with this latency? In particular, how do we minimize the rounds of communication between the transaction selector and the sequencer? I have a couple ideas: Mild Idea: Run the sequencer auctions ahead of the block production time (e.g. auction for block h takes place in a smart contract executed at block h-1) so that the winning sequencers are ready to receive transaction sets ahead of time. The sequencer might need to stake a small amount of collateral, to ensure that they are online to sequence when it is their turn Crazy idea: Have a distributed priority queue such that each potential sequencer participates in with some stake (e.g. they lock up some assets at block height h to commit to a sequence at height h+1). Each insertion or deletion into this priority queue costs a fee and the final ordering cost is the aggregation of these fees. This way, sequencer can spend the entire time between a block at height h and a block at height h+1 attempting to sequence transactions, such that once the transaction selector sends the approved transactions, the elements that aren't in the queue are removed and the transactor who wins (either by auction or via the most fees, this might be a way of obviating the auction dynamics) chooses the final ordering of the transactions that were selected that weren't in the distributed priority queue. This optimization doesn't improve the worst case run time, but it does improve the average case run time. There is some potential for griefing attacks, but they are bounded by how unbalanced the heap implementation can get Optimizing the Auction The field of combinatorial auctions has been well-studied by game theorists such as Noam Nisan and Tim Roughgarden for an extended period of time. Most combinatorial auctions, such as the spectrum auction, involve selling unordered sets, so these techniques (see Chapter 11 of Nisan, Roughgarden, and Tardos) should make it relative easy for the transaction selection auction. 
I wouldn't try to reinvent the wheel here, if I were you. On the other hand, auction dynamics for the sequencing side of things is much harder to discern. In a famous paper, Betting on Permutations illustrates a Condorcet-like impossibility result — it is NP-hard to compute an optimal betting / auction strategy when bidding on permutations. One of the reasons for this is that the original first-price auction for gas prices, in isolation of previous blocks and mempools, is not a Bayes-optimal auction, whereas this auction is. The reason for this is that bidders have to condition their expected utility computations on the transactions they have received. How do we analyze such an auction in a way that is comparable to how the current gas auctions works? We'll try to walk through the basic mathematical elements that each agent has to choose and then provide some tried and trued techniques for doing primitive mechanism design here. In order to do this, we first have to define fairness. Note that we consider an auction to be \epsilon-fair if the probability distribution over auction outcomes returns no particular ordering with probability \epsilon more than the uniform distribution (e.g. all orderings are basically equally likely). Why is this a good definition fair? It serves as way of saying that no particular transaction ordering is favored by much under all possible instantiations of the auction with different participants. Next, we need to look at what it means for an individual participant or agent to pick a certain ordering. Let's suppose that the transaction selector has sent n transactions, T_1, \ldots, T_n \in \mathcal{T} and that the transaction sequencer has a utility function U : \mathcal{T}^n \rightarrow \mathbb{R}. We will represent the possible transaction orderings (e.g. linear orderings such as T_1 < T_3 < T_2) via permutations on n letters. In order to figure out our bidding strategy, we first have to compute a) which permutations are most important to evaluate (there are n! of them, so we need to be efficient) and b) how to value a given permutation. The first is specified by computing a subjective probability, \mu_{n}(\sigma) = \mathsf{Pr}[\sigma(T_1, \ldots, T_n) | n, T_1, \ldots, T_n] for each element \sigma \in S_n, where S_n is the symmetric group on n letters. The second component is specified by an expected utility, \mathsf{E}[U | T_1, \ldots, T_n] = \frac{1}{n!} \sum_{\sigma \in S_n} U(\sigma(T_1, \ldots, T_n))\mathsf{Pr}[\sigma(T_1, \ldots, T_n) | n, T_1, \ldots, T_n] Finally note that in this notation, we define a \epsilon-fair auction as one that returns a transaction ordering \sigma(T_1, \ldots, T_n) with probability distribution p where d_{TV}(p, \mathsf{Unif}([n])) < \epsilon. Each rational participant in the auction will have a utility function U and subjective probabilities \mu_n that will guide how their strategies evolve in this auction. If we believe that rational, honest players are participating in the sequencer protocol, then we rely on this expectation being positive (altruists are those who continue playing when this is non-positive). 
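To make the expected-utility notation above concrete, here is a small sketch that scores orderings of a handful of transactions under a toy utility function and a uniform subjective probability, and also shows the kind of sampling strategy discussed next; the utility and probabilities are stand-ins, not a claim about real MEV.

```python
import itertools
import random

txs = ["dex_trade_A", "dex_trade_B", "liquidation", "arb_bot"]

def utility(ordering):
    """Toy MEV utility: the sequencer earns more the earlier 'arb_bot' runs,
    plus a bonus if it lands immediately after 'dex_trade_A' (a back-run)."""
    score = len(ordering) - ordering.index("arb_bot")
    if ordering.index("arb_bot") == ordering.index("dex_trade_A") + 1:
        score += 2
    return score

# Expected utility under a uniform subjective probability over all orderings
all_orders = list(itertools.permutations(txs))
expected_u = sum(utility(o) for o in all_orders) / len(all_orders)

# Empirical expected-utility maximizer: sample k orderings, keep the best
random.seed(0)
samples = [random.sample(txs, len(txs)) for _ in range(10)]
best = max(samples, key=utility)

print(f"E[U] under uniform mu_n = {expected_u:.2f}")
print("best sampled ordering:", best, "with utility", utility(best))
```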
Moreover, there isn't a unique archetype of a rational, honest player here (unlike in consensus) — for instance, we have two simple strategies that meet the stated goals of being rational, non-malicious, and able to always submit a bid: Maximum a posteriori optimizer — select the ordering \sigma^* \in S_n such that \sigma^* \in \text{arg}\max_{\sigma \in S_n} \mathsf{Pr}[\sigma(T_1, \ldots, T_n) | n, T_1, \ldots, T_n] Empirical expected utility maximizer — Take k samples \sigma_1, \ldots, \sigma_k \sim \mathsf{Pr}[\sigma(T_1, \ldots, T_n) | n, T_1, \ldots, T_n] and choose the ordering \sigma^* = \text{arg}\max_{i \in [k]} U(\sigma_i(T_1, \ldots, T_n) These are only two of the rational, honest strategies available here as agents can localize their bets (e.g. convolve their subjective probability with the indicator function 1_{A}) to any subset A \subset S_n. This vastly expanded space of rational, honest strategies makes traditional algorithmic game theory tools somewhat meaningless for analyzing such an auction. What can we do? We can take a page out of the online advertising auction design playbook and try to model the sequencing auction by segmenting the the participants into two categories: "Passive" honest, rational agents who participate regardless and have a static strategy and a single, common utility function U "Active" aggressive strategies that have dynamic strategies such that the probability measure \mu_n changes from block to block. Once we do this, we can take techniques like those used in the above Milgrom paper to create auctions that have a social welfare function [2] V(\gamma) = \gamma V_{\text{passive}} + V_{\text{active}} whose revenue sharing and reserve pricing can be optimized based on the desired goals of the sequencing auction. I strongly suspect this will be important in ensuring that spamming and spoofing these auctions is expensive for those who are trying to encourage participation in certain transaction orderings. Note that methodologies for auctions, akin to the Milgrom paper, are inclusive of collusion — the authors do not assume independence of the bids that participants that are "active". The parameter \gamma (as well as other parameters, such as reserve sizes) control how much correlation/bribery is needed to change an auction from the default. This is all of the background needed to try to construct a k th price auction with reserve amounts that is \epsilon-fair. Good luck! Impact on Inflation If the fees from this auction are paid back to the network we have three options: Pro-rata redistribution: This is basically stake-weighted redistribution of the auction revenue Burned: We burn the revenue from the auction Concentrated redistribution: Only redistributing to certain participants (e.g. validators, top-k validators) The third option is likely the most unfair and has perverse incentives to bribe validators for certain orderings. The second option has implications for inflation: If the MEV auction clearing prices are increasing faster than the block reward is increasing (e.g. subsidy inflation), then the 'real' inflation of the system will actually be lower than expected. The first option will cause a tax liability for holders and/or encourages the highest stake participants to try to cause the clearing price to increase. These trade-offs are really worth considering! Other Sources of MEV The simplest form of MEV that isn't discussed is transaction elision — e.g. a validator not repeatedly adding a transaction to the transaction set. 
Under the assumption of 50% honest, rational agents, one can show that a transaction that enters the mempool at block h will eventually get into the chain at block h+k (there is a Kiayias paper for this that is escaping me). The probability that this happens decays exponentially in k (the exponential base is a function of the percentage of rational, honest agents), but it does have a burn-in time (e.g. linear for 0 \leq k \leq k', exponential for k \geq k').

Thanks to Brock Elmore, Peteris Erins, and Lev Livnev for useful discussion on these points.

[0] In the current implementation of the Geth mempool, the file core/tx_pool.go contains the logic for the pool (including the priority queue). The choice of keys and comparison is actually done in a different translation unit, miner/worker.go.

[1] Note that Bitcoin's mempool logic is significantly more complex and sophisticated. The child-pays-for-parent tree allows for UTXOs to be spent over multiple blocks and lets scripts trigger future payments. See Alex Morcos' excellent fee guide and Jeremy Rubin's OP_CTV. It should be noted that this complexity in the mempool is partially due to the inflexibility of Bitcoin Script, which forces multiblock payments to be mediated at the mempool (instead of in a contract). But it has the side-effect of making fee sniping a lot more difficult in Bitcoin.

[2] Recall that in Algorithmic Game Theory, the social welfare function represents the 'macroeconomic' observable that we are trying to optimize for our mechanism. In auctions with independent bidding and no repeated rounds or reserves, this is simply the sum of the quasilinear utilities of the participants. When there is collusion / correlation, this gets significantly more complicated.

vbuterin January 16, 2020, 1:33am #4

Are you assuming here that the sequencing auction for block N happens after the transactions in block N are known? I think what @karl had in mind was the sequencing auction happening well before block N, e.g. potentially even a day before. This way the sequencers would be buying rights to the expectation of future MEV, not bidding on permutations (and insertions and delay-to-next-block operations) directly. Does this simplify the above analysis?

The third option has a lot of internal choice! One option that I am particularly excited about is funding public goods through some DAO, on-chain quadratic funding gadget, or similar tool.

The simplest form of MEV that isn't discussed is transaction elision — e.g. a validator repeatedly not adding a transaction to the transaction set.

I agree this is a form of MEV too! Though I wonder how much of that is captured by the ability to reorder transactions arbitrarily, including inserting your own transactions before some of the transactions in the original block.

jannikluhn January 16, 2020, 9:46am #5

The general idea seems to be to redistribute MEV from miners to some other entity, e.g. a DAO which funds public goods. Wouldn't this basically be equivalent to leaving the MEV to miners, but instead send an equal fraction of block rewards to the DAO?

karl:

The sequencer can only commit on a specific position in a block, not that the transaction will be included at all, no? I guess usually they can predict this fairly accurately, but there's still a chance that they are wrong (especially if the producer actively refuses to include a specific transaction).
vbuterin January 18, 2020, 11:44pm #6

Wouldn't this basically be equivalent to leaving the MEV to miners, but instead send an equal fraction of block rewards to the DAO?

No, because it's not just about long-run average returns, it's also about incentives. This technique removes the incentive of trying to collect MEV from miners, and gives the incentive to the centralized party that won the auction. This way the auction participants "absorb" the gains from sophistication, making it more plausible that miners/block producers will remain decentralized, as they can safely be dumber.

valentin April 9, 2020, 11:24am #7

tchitra: What are options for dealing with this latency? In particular, how do we minimize the rounds of communication between the transaction selector and the sequencer?

This could be a significant problem indeed, as it makes block mining a distributed computation between 2 parties. Can this be mitigated by putting the sequencing algorithm into a contract, which the block producers need to follow? Each sequencer would participate in the auction with (sequencing code, bid) pairs. Once the winner is known, the block producers need to obey the current sequencing code. One downside of this approach is a potential DoS attack against the block producer, by submitting ordering code which is artificially complex to execute. On the plus side, this eliminates the need for communication between 2 parties to mine a block.

wjmelements May 15, 2020, 3:30am #8

Hello, I am in the business of MEV. There are some good ideas in this proposal. I have a critique and alternative proposals. Some things you say aren't true, but I won't go into too much detail because a lot of the game is a well-kept secret.

Transaction ordering is powerful because it determines which transactions succeed and which fail, and also what happens should they succeed. Miners have dictatorial power over this $10m+/year market because there are no rules regarding transaction ordering. Even if there were rules, miners would still win by excluding transactions.

Block proposers are most analogous to traditional blockchain miners. It is critical that they preserve the censorship resistance that we see in blockchains today.

Proposers are still incentivized to exclude transactions. You introduce another source of censorship: independent reordering. Separating these powers is an improvement, but the sequencer, the producer, and the transactors could still be the same party, and the sequencer could have conflicts of interest outside of the block.

This auction assigns the right to sequence the last N transactions.

Changing the order of transactions impacts state. This would substantially increase the effective confirmation time, leading to instability and a higher rate of reverted transactions. It also impacts gas usage. If a block proposer only selects the transactions but does not order them, there can be no block gas limit. It is nontrivial to prove that a set of transactions cannot be ordered below a given gas threshold. Using transaction gas limits is sufficient, but you end up with barren blocks and substantial gas-usage volatility; the network would be massively under-utilized. A related concern is that the sequencer doesn't care how much gas is used.

If, within a timeout, the sequencer has not submitted an ordering which is included by block proposers, a new sequencer is elected.

This process could last an unbounded amount of time. Block proposers could have incentives to exclude the ordering.
Moreover, the ordering itself could be reverted by a future reordering. Instead you could have a hash of the ordering be part of the bid itself, since the MEV is known at the time of the bid.

In addition to extracting MEV, the MEVA provides the current sequencer the ability to provide instant cryptoeconomic guarantees on transaction inclusion. They do this by signing off on an ordering immediately after receiving a transaction from a user – even before it is sent to a block producer.

I don't see a reason for the sequencer to do this. An index is not a strong guarantee either. You could even withhold such a proof until you actually provide the sequence.

As long as the sequencer stands to lose more than it can gain from an equivocation, we can expect the sequencer to provide a realtime feed of blockchain state which can be monitored, providing, for instance, realtime price updates on Uniswap.

They don't have to provide this information to the public in realtime.

Bidders colluding to reduce competition and keep the auction price artificially low breaks the ability to accurately discover and tax the MEV.

Collusion requires barriers to entry. You create a barrier to entry by incentivizing the probable winner to submit more MEV transactions and punishing their competitors for losing. Once you start winning you will probably keep winning indefinitely.

This can help to establish a price floor because with low barriers of entry we can expect enough competition that there will be at least one honest sequencer bid.

Sequence bidding is not free.

MEV is potentially significantly higher than transaction fee revenue. MEV is the true block reward. Gradually diminish the inflationary block reward instead of trying to tax MEV. Let "dumb" block producers maintain viability via open-source software. Preserve confirmation time. Define a standard transaction ordering algorithm to increase the cost of censorship. Ban or punish inclusion of transactions that would revert at the top level; these waste shared computational resources anyway. Preventing block producers from winning revert rewards removes a barrier to entry and reduces the power of the block producer. Let block producers manage revert-DoS off-chain.

adlerjohn May 15, 2020, 11:03pm #9

wjmelements: Ban or punish inclusion of transactions that would revert at the top-level.

Making reverting transactions invalid is non-trivial. If you receive an unconfirmed transaction from a peer and validate it locally, whether it reverts or not depends on when it's executed (i.e. its index in the totally ordered list of transactions provided by the blockchain). Therefore you can't ban a peer that sends you a reverting transaction (unless you make the peer provide you the complete ordering of transactions it used to execute that transaction, but that's computationally infeasible). Being unable to ban a peer that sends you an invalid transaction is a DoS vector. Note that this does not apply in the same way for spending conditions with monotonic validity, such as the predicates used in Bitcoin Script or those that have been proposed for Fuel.

Define a standard transaction ordering algorithm

A standard transaction ordering algorithm might help with the above, but I'm not aware of even a single one that has been proposed that isn't hopelessly broken.
Pratyush May 28, 2020, 4:16am #10

This recent paper might be relevant: it separates transaction ordering from transaction execution at the consensus layer.

jaybny June 30, 2020, 7:54pm #11

FYI - we came up with these same ideas in 2018 - see here: https://medium.com/@jaybny/on-dex-fac434d7730f

karl July 7, 2020, 5:42pm #13

@jaybny This is really cool! Did you come up with any fun takeaways/analysis since your first post?

jaybny July 11, 2020, 5:57pm #14

Thanks Karl. I have lots of ideas and directions on where to go with this… but I am not a fan of working on top of Eth, although I love the tech and community. I actually received a patent for some of this, and am looking to build a DEX proof-of-concept.

yaronvel August 16, 2020, 8:12pm #15

We are working on something somewhat similar at the application level. We decided to first tackle the lending platforms' liquidation MEV, where liquidators compete on a well-defined premium, which gives rise to gas wars and millions of dollars that go to miners. Our approach is to have liquidators bid in an auction at the beginning of the month for the right to liquidate, and then to divide the liquidations fairly among the liquidators over the month. We are implementing a practical approach that will go live in September atop MakerDAO. The idea is that liquidators will share their profits with the users, who in return give them priority in the liquidations. A defi-lego trick makes it possible to achieve this without any change to the MakerDAO protocol. More details are available here: https://medium.com/b-protocol/b-protocol-b6dd4e3bf9c0

kladkogex December 7, 2020, 4:00pm #17

There is a faster alternative. We are implementing it in our project and it could be easily implemented in ETH2 or any other finalizing blockchain. Transactions are submitted encrypted using threshold encryption. A committee (say an ETH2 committee) collectively holds the decryption key. Once the block proposer includes the transaction into the block, and once the block is finalized, the committee decrypts the transaction (the committee only needs to decrypt the symmetric key that encrypts the transaction). Once the transaction is decrypted, the EVM runs on it. That's it - it solves all the problems. You can run Uniswap on it with zero front running. There are some technicalities (for instance, gas price, nonce and some other things need to be submitted unencrypted), but they are all workable. Note that this could be implemented at the application level too.

ratacat December 12, 2020, 2:30pm #19

What happens if the key is lost? Maybe I don't understand how large the committee is… seems like maybe a pretty central point of failure.

samueldashadrach January 15, 2021, 9:00am #20

What is the purpose of this? Why is maximising MEV extracted a goal? It's going to happen anyways, why do we want to speed this up? Intellectual curiosity? Market efficiency? Is any part of this proposal going to be added to the ethereum spec? Or is it all features being built on top of it (via a DAO or something)?

pmcgoohan February 5, 2021, 10:48am #21

I posted about similar issues soon after the ETH ICO (nothing like as rigorous as the Flash Boys 2.0 paper of course): r/ethereum - Miners Frontrunning. I'm glad the community is taking these issues so seriously.
It's a noble idea to accept the MEV losses but redirect them to the commons, but I'm not sure it protects the unsophisticated and under-resourced from the sophisticated and resourced as is.

Consider 3 participants (A, B, C)…

A - calculates that winning the MEV auction for tomorrow is worth 1 ETH in costs, because they will likely make 0.1 ETH (after costs) from their trading if they can frontrun, and only 0.05 ETH if they can't

B - calculates that winning the auction is worth 0.5 ETH in costs, because they will likely make 0.1 ETH (after costs) if they can frontrun, and only 0.08 ETH if they can't

C - just wants to trade and has no idea that MEV auctions even exist

So the outcomes for the A/B sophisticates and the naive C are…

A - wins the auction in this case, and has to pay 1 ETH, but makes 0.1 ETH so is happy

B - loses the auction, but also knows that they have, and therefore avoids trades that they can't profit from. They make their 0.08 ETH without the overhead of winning the MEV auction

C - has absolutely no idea that any of this has taken place. They get frontrun continually all day and are none the wiser.

Effectively, the 1 ETH paid by A is extracted exclusively from C, the actor we are trying to protect.

"This technique removes the incentive of trying to collect MEV from miners, and gives the incentive to the centralized party that won the auction"

If we're not careful, this will have achieved nothing. The same bad actors that would have been miners have become MEV auction participants instead, just with a different name.

I think the auction proceeds at least need to be distributed proportionately to all the participants in blocks sequenced by the auction winner (except for the winner themselves), not just to some common DAO-like fund. In this outcome:

A - pays B and C equally

B - will know they are going to get paid by A and will include that in their expected win calculations

C - will have no idea what's going on, but will be happy to get some compensation (and statistically the set of unsophisticated Cs will not lose out, although individual Cs may lose big)

But really, I'm not sure the focus should be on sequencing. Block production is the weak point, as this is where transactions can be censored and frontrun. Once you have a fair block, sequencing is a non-problem. The fairest method is always to treat all transactions in a block as if they were simultaneous. This could even just be done as a matter of convention in smart contracts. If we are talking about many blocks per second, there is no real downside to this.

So in summary, I think the focus needs to be put back on a fair, consensus-driven block proposal. Sequencing is a distraction.

Mister-Meeseeks February 9, 2021, 5:03pm #22

Just to be clear, this would result in significantly more front-running. Yes, maybe the profits would go to a worthier cause, but the average DEX trader would experience much worse slippage.

Right now existing arbitrage bots don't exploit every opportunity to the max, because there's execution risk. There are all kinds of reasons an attempted front-run order might fail: if another bidder wins the auction, if the target is mined before the sandwich propagates, or even if the target cancels their transaction. If that happens the front-runner still pays gas on the "empty transaction", and possibly even gets filled at unprofitable prices. Many exploitable targets slip past, because the bots don't think the risk is justified.
In contrast, the sequencer would know with 100% certainty, and therefore would exploit every single opportunity, extracting the maximum amount each and every time.

More so, the sequencer would have even worse exploits at his disposal. There's a lot of exploitation opportunity during periods of turbulent trading, when many trades on the same pool occur in the same block, like ICOs. These happen to be the times that ordinary traders set the highest slippage. Front-runners can tactically insert their own transactions, but they can't re-arrange third-party transactions. The sequencer's arbitrary ability to re-sequence introduces very large potential attacks in these instances. For example, if the normal sequence is Buy->Buy->Sell->Sell, the sequencer could re-arrange it to Buy->Sell->Buy->Sell, which lets them front-run twice as much value. Technically miners could do the same, but in practice 95%+ don't, because running a mining rig requires different core competencies than being an arbitrage trader.

Finally, the two-phase commit nature of the process gives the sequencer a free option. This is equivalent to the "last look" that you see in certain traditional markets like FX. The sequencer has N blocks to declare the sequence. He could include two of his own trades in the block: one to buy ETH/USD and one to sell ETH/USD, with the caveat that the first trade cancels the second. If the price of ETH rises in the next N blocks, he sequences it so that he buys ETH/USD. If it falls, he sells ETH/USD.

Front-running only extracts value from liquidity takers. But this would impose a persistent cost on DEX liquidity providers.

pmcgoohan February 13, 2021, 11:50am #23

Mister-Meeseeks: Just to be clear, this would result in significantly more front-running

This post is absolutely bang on. Arbitrage risk is a huge deterrent in itself. I say this as an arber myself in other, non-crypto markets. Anything which reduced that risk to zero would allow me to put on 5× the volume I otherwise would, at the direct expense of other participants.
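To make the extraction mechanics the thread is discussing concrete, here is a hypothetical toy simulation (not taken from any of the posts above) of a single sandwich on a fee-less constant-product pool of the Uniswap kind. All pool sizes, trade sizes and names are made up for illustration; real pools charge fees and the numbers would differ.

```python
def swap(amount_in, reserve_in, reserve_out):
    """Fee-less constant-product swap: returns amount_out and the new reserves."""
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    new_reserve_out = k / new_reserve_in
    return reserve_out - new_reserve_out, new_reserve_in, new_reserve_out

# Pool: 1,000 ETH / 2,000,000 USD (spot price ~2,000 USD per ETH).
eth_reserve, usd_reserve = 1_000.0, 2_000_000.0

# Victim intends to buy ETH with 100,000 USD.
victim_usd_in = 100_000.0

# --- No front-running: victim trades against the untouched pool.
victim_eth_alone, _, _ = swap(victim_usd_in, usd_reserve, eth_reserve)

# --- Sandwich: attacker buys first, victim buys at a worse price, attacker sells back.
attacker_usd_in = 50_000.0
atk_eth, usd_r2, eth_r2 = swap(attacker_usd_in, usd_reserve, eth_reserve)
victim_eth_sandwiched, usd_r3, eth_r3 = swap(victim_usd_in, usd_r2, eth_r2)
atk_usd_out, _, _ = swap(atk_eth, eth_r3, usd_r3)

print(f"victim gets {victim_eth_alone:.3f} ETH without the sandwich")
print(f"victim gets {victim_eth_sandwiched:.3f} ETH when sandwiched")
print(f"attacker profit: {atk_usd_out - attacker_usd_in:.2f} USD")
```

The victim's slippage is exactly the attacker's profit source, which is why removing execution risk for the party controlling the ordering increases how much of this extraction actually happens.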
Journal of Nutritional Science

Validation of bioelectrical impedance analysis in Ethiopian adults with HIV

Published online by Cambridge University Press: 18 December 2017

Maria H. Hegelund, Department of Public Health, University of Copenhagen, Copenhagen, Denmark; Department of Nutrition, Exercise and Sports, University of Copenhagen, Copenhagen, Denmark
Jonathan C. Wells, Childhood Nutrition Research Centre, Great Ormond Street Institute of Child Health, University College London, London, UK
Tsinuel Girma, Department of Paediatrics and Child Health, Jimma University, Jimma, Ethiopia
Daniel Faurholt-Jepsen, Department of Nutrition, Exercise and Sports, University of Copenhagen, Copenhagen, Denmark; Department of Infectious Diseases, Rigshospitalet, Copenhagen, Denmark
Dilnesaw Zerfu, Ethiopian Public Health Institute, Addis Ababa, Ethiopia
Dirk L. Christensen, Department of Public Health, University of Copenhagen, Copenhagen, Denmark
Henrik Friis, Department of Nutrition, Exercise and Sports, University of Copenhagen, Copenhagen, Denmark [email protected]
Mette F. Olsen, Department of Nutrition, Exercise and Sports, University of Copenhagen, Copenhagen, Denmark

Abstract

Bioelectrical impedance analysis (BIA) is an inexpensive, quick and non-invasive method to determine body composition. Equations used in BIA are typically derived in healthy individuals of European descent. BIA is specific to health status and ethnicity and may therefore provide inaccurate results in populations of different ethnic origin and health status. The aim of the present study was to test the validity of BIA in Ethiopian antiretroviral-naive HIV patients. BIA was validated against the 2H dilution technique by comparing fat-free mass (FFM) measured by the two methods using paired t tests and Bland–Altman plots. BIA was based on single frequency (50 kHz) whole-body measurements. Data were obtained at three health facilities in Jimma Zone, Oromia Region, South-West Ethiopia. Data from 281 HIV-infected participants were available. Two-thirds were female and the mean age was 32·7 (sd 8·6) years. Also, 46 % were underweight with a BMI below 18·5 kg/m2. There were no differences in FFM between the methods. Overall, BIA slightly underestimated FFM by 0·1 kg (−0·1, 95 % CI −0·3, 0·2 kg). The Bland–Altman plot indicated acceptable agreement with an upper limit of agreement of 4·5 kg and a lower limit of agreement of −4·6 kg, but with a small correlation between the mean difference and the average FFM. BIA slightly overestimated FFM at low values compared with the 2H dilution technique, while it slightly underestimated FFM at high values. In conclusion, BIA proved to be valid in this population and may therefore be useful for measuring body composition in routine practice in HIV-infected African individuals.

Keywords: Bioelectrical impedance analysis; Body composition; HIV; African population

Abbreviations: ART, antiretroviral therapy; BIA, bioelectrical impedance analysis; FFM, fat-free mass; FM, fat mass

Journal of Nutritional Science, Volume 6, 2017, e62. DOI: https://doi.org/10.1017/jns.2017.67

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright © The Author(s) 2017 Malnutrition is common in individuals infected with HIV in sub-Saharan Africa. Chronic infections such as HIV result in immune impairment which leads to malnutrition, causing further immune impairment and thereby a more rapid disease progression. On the other hand, malnourished individuals have increased susceptibility to HIV and opportunistic infections, and are thereby more likely to have a faster disease progression compared with well-nourished individuals( Reference Duggal, Chugh and Duggal 1 ). The wasting syndrome in HIV is characterised by weight loss often accompanied by chronic diarrhoea or chronic weakness and fever( 2 ). It has been one of the main characteristics of HIV and it is still a common complication in the antiretroviral therapy (ART) era( Reference Salomon, de Truchis and Melchior 3 ). In HIV-infected individuals starting ART, malnutrition may be an independent predictor of early mortality( Reference Johannessen, Naman and Ngowi 4 ). Measurement of body composition is an important tool to assess effects of weight loss and therefore it is crucial to find an easy, quick and valid method to determine body composition. Bioelectrical impedance analysis (BIA) is an inexpensive, non-invasive and easy-to-use method to determine body composition( Reference Kyle, Bosaeus and De Lorenzo 5 ). The most widely used approach predicts values for total body water, allowing calculation of fat-free mass (FFM) and fat mass (FM). Total body water predictions are typically calibrated using data from a healthy reference population, often of white European descent, using a reference method such as dual-energy X-ray absorptiometry and regression formulae that include height and impedance as variables, but often also other terms such as age, weight and sex( Reference Kyle, Bosaeus and De Lorenzo 5 ). The hydration of FFM is considered relatively constant through adulthood in healthy individuals( Reference Wang, Deurenberg and Wang 6 ), but several illnesses and conditions such as HIV and malnutrition are associated with weight loss and thereby alter body composition( Reference Oliver, Allen and Gold 7 ), potentially also affecting hydration. Additionally, the manufacturers' equations used in BIA are typically derived in individuals of European descent( Reference Deurenberg and Deurenberg-Yap 8 ). Several validation studies using the 2H dilution technique as the reference method have concluded that BIA was a valid method to determine body composition in HIV-infected individuals( Reference Kotler, Burastero and Wang 9 , Reference Sluys, van der Ende and Swart 10 ). One of the studies was conducted in an American cohort and included black Americans( Reference Kotler, Burastero and Wang 9 ). The other study was conducted in Europe based on data from a small sample( Reference Sluys, van der Ende and Swart 10 ). However, the accuracy of these equations, when used in African HIV-infected individuals has subsequently been questioned( Reference Diouf, Gartner and Dossou 11 ). Therefore, equations used in BIA based on healthy subjects of European descent may provide inaccurate results in Africans with HIV. The aim of the present study was to test the validity of BIA for the assessment of FFM in ART-naive Ethiopian HIV-infected patients. 
Study design and population This study used baseline data from the ARTfood study( Reference Olsen, Abdissa and Kaestel 12 ), which was a randomised controlled trial investigating the effects and feasibility of providing a lipid-based nutrient supplement in HIV-infected patients at initiation of ART. The sample size was calculated for the primary outcome of the trial. Participants were recruited among HIV patients eligible for ART and took place at Jimma University Specialised Hospital, and health centres in Jimma and Agaro. The inclusion criteria for the ARTfood study were ≥18 years, BMI ≥16 kg/m2, ART-naive, eligible for initiation of ART, and living within 50 km of the recruitment facility. Patients were excluded if they were pregnant, lactating, taking micronutrients or other nutrient supplementation. Patients with BMI <16 kg/m2 were invited for data collection and therefore included in this study, whereas they were excluded from intervention and referred to standard nutritional therapy according to national guidelines( 13 ). Eligibility of ART during the study was based on the Ethiopian treatment guidelines from 2008. HIV patients were eligible if they had CD4 count ≤200 cells/μl irrespective of clinical symptoms, CD4 count ≤350 cells/μl if WHO stage III, or WHO stage IV irrespective of CD4 count( 14 ). The study staff included nurses, laboratory technicians and pharmacists, all receiving relevant training. Data collection was carried out from July 2010 to July 2013. Background data Background data were collected through structured questionnaires in the local languages Amharic or Afaan Oromo. For the present study, data on age, sex, education and occupation were used. Anthropometric data For height and weight measurements, participants were barefoot and wearing light clothes. A calibrated stadiometer (SECA 214 Stadiometer) and scale (Tanita-BC 418 MA) were used for height and weight, respectively. Weight was measured with 0·1 kg precision and height to the nearest 1 mm. BMI was calculated as weight divided by squared height (kg/m2) and categorised as <16·0, 16·0–<17·0, 17·0–<18·5, 18·5–25·0 or >25·0 kg/m2 according to WHO classification( 15 ). Information regarding clinical stage of HIV using WHO criteria( 16 ) was extracted from patient records and checked by a study clinician. CD4 cell count was determined in EDTA-stabilised whole blood using flow cytometry (Fascount; Becton Dickinson) and categorised into <50, 50–<100, 100–200 and >200 cells/μl for analyses. To determine viral load, plasma was kept at −80°C before quantification of HIV-1 viral load using a commercial real-time PCR assay (RealTimeHIV-1; Abbott Laboratories) with automated extraction (M2000 Real Time System, Abbott Laboratories). HIV viral load was categorised as <4, 4–5 and >5 log (1 + copies/ml)( 17 ). 2H dilution technique Body composition was assessed with 30 g 2H-labelled water (99·8 % 2H; Sercon) weighed with 0·01 g precision and given orally after collection of pre-dose saliva samples. Post-dose saliva samples were collected after 4 h equilibration( 18 ). Saliva enrichment of 2H was determined by a Fourier transform infrared spectrometer (IRAffinity-1; Shimadzu). Total body water was calculated from post-dose 2H enrichment with adjustment for pre-dose enrichment, using a factor of 1·041 to adjust for proton exchange. FFM was calculated based on an assumed hydration factor of 73·2 %( 18 ). FM was thereafter calculated by subtracting FFM from total body weight( 18 ). 
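As a small illustration of the body-composition arithmetic described above, the following sketch (illustrative code with made-up input values, not part of the study protocol) derives FFM and FM from a measured total body water value using the stated hydration factor of 73·2 %.

```python
def body_composition_from_tbw(total_body_water_kg, weight_kg, hydration_factor=0.732):
    """FFM assumes a constant hydration of fat-free mass (73.2 %); FM is the remainder."""
    ffm = total_body_water_kg / hydration_factor
    fm = weight_kg - ffm
    return ffm, fm

# Example: 29 kg of total body water in a 52 kg participant (made-up numbers).
ffm, fm = body_composition_from_tbw(29.0, 52.0)
print(f"FFM = {ffm:.1f} kg, FM = {fm:.1f} kg")  # FFM = 39.6 kg, FM = 12.4 kg
```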
Bioelectrical impedance analysis Body composition was also measured using single-frequency eight-electrode BIA (Tanita-BC 418 MA). Participants were barefoot, wearing light clothes and asked to empty pockets and remove any metal objects. The Tanita body composition analyser measures body composition using a constant current source at a frequency of 50 kHz and provides impedance (Z) measured in ohms. Fat percentage, FM and FFM are produced by regression algorithms generated by the manufacturer, the details of which are not available, though it is described that the algorithms are based on data from 'Western' and Japanese individuals( Reference Tanita 19 ). BIA assessment was available for all participants recruited at Jimma University Specialised Hospital and the health centre in Jimma, but for logistic reasons only for some of the participants attending the health centre in Agaro. The collected data were double-entered and validated using Epidata (EpiData Association, Denmark). Data analyses were carried out using STATA/IC version 13.0 (StataCorp LLC). FFM was used as the primary outcome to compare BIA against the 2H dilution technique. Paired t tests were used to compare FFM measured by the two different methods. Overall two paired t tests were conducted. The first test was stratified by sex and age group (18–<30, 30–<40 and >40 years). The second test was stratified by CD4 cell count including <50, 50–<100, 100–200 and >200 cells/μl. Values in the tables are mean values and standard deviations for FFM and mean differences (95 % CI). P values <0·05 were considered significant. The Bland–Altman plot was used to evaluate agreement between FFM measured by BIA and the 2H dilution technique. Furthermore, regression models were conducted for FFM and FM to test if there were correlations between the mean difference and average by the two methods. Additionally, a calibration equation was generated from our own data. The participants were equally divided into two random samples using a random sampling generator (STATA/IC version 13; StataCorp LLC). One sample was used to develop the equation through multiple linear regression for prediction of FFM as measured by the 2H dilution technique. The multiple regression model included impedance index (HT2/z), weight, age and sex as predictors. The other sample was used to test the predictive ability of the equation using limits of agreement. The predicted FFM was tested against FFM as measured by BIA and the 2H dilution technique, respectively. Of 453 HIV-infected patients screened between July 2010 and August 2012, 348 (77 %) were recruited for the ARTfood study. Participants were younger (32·9 v. 37·0 years; P = 0·001) and had lower education (21 v. 45 % with secondary school or higher; P < 0·001) than those not recruited, while BMI and other demographic characteristics were similar( Reference Olsen, Kaestel and Tesfaye 20 ). The 2H dilution technique was used for the primary outcome of the trial and the Tanita body composition analyser was not available at all sites. Of the recruited participants, data from both BIA and the 2H dilution technique were available in 281 (81 %) participants and therefore included in the present validation study. Characteristics of the 281 HIV-infected participants are shown in Table 1. Two-thirds (68 %) of the participants were female and the mean age was 32·7 (sd 8·6) years. There was a high prevalence of underweight, as almost half (46 %) of the participants had BMI <18·5 kg/m2. Table 1. 
Characteristics of antiretroviral therapy-naive individuals infected with HIV (Numbers of subjects and percentages or mean values and standard deviations) * No formal schooling/only able to read and write. † Some primary school/finished primary school. ‡ Finished secondary school/attended higher education. § BMI was classified according to the WHO classification, which is the International Classification of adult underweight, overweight and obesity according to BMI( 15 ). ‖ Measured using the 2H dilution technique. Mean CD4 count was 196·2 (sd 116) cells/μl and 57 % of the participants had ≤ 200 CD4 cells/μl. More than 40 % had viral load >5 log (1 + copies/ml) and more than two-thirds (71 %) were symptomatic (i.e. in WHO clinical stage II, III or IV). There were no significant differences between the two methods. The overall mean FFM measured by BIA was 39·5 kg, whereas the overall mean FFM measured by the 2H dilution technique was 39·6 kg. The mean difference between these two methods was −0·1 (95 % CI −0·3, 0·2) kg. For males, the mean FFM was 46·2 kg when assessed by BIA and 46·3 kg when assessed by the 2H dilution technique with a mean difference of −0·2 (95 % CI −0·7, 0·4) kg (Table 2). For females, the mean FFM was 36·3 kg when assessed by BIA and 36·4 kg when assessed by the 2H dilution technique with a mean difference of −0·0 (95 % CI −0·3, 0·3) kg. Among those with CD4 count <50 cells/μl, the mean difference was 1·3 kg (P = 0·06) (Table 3). Table 2. Comparison of fat-free mass measured through the 2H dilution technique v. bioelectrical impedance analysis (BIA) in HIV-infected individuals stratified by age group and sex* (Mean values and standard deviations) * Fat-free mass measured through the 2H dilution technique and BIA by the Tanita body composition analyser was compared using paired t tests. † All HIV-infected male participants with BIA and 2H dilution data. ‡ All HIV-infected female participants with BIA and 2H dilution data. Table 3. Comparison of fat-free mass from the 2H dilution technique and fat-free mass measured using bioelectrical impedance analysis (BIA) in HIV-infected individuals stratified by CD4 cell count* † All HIV-infected participants with BIA and 2H data. Fig. 1 shows the Bland–Altman plot of FFM measured by the 2H dilution technique and BIA using Tanita including the regression line of the mean difference and average for FFM. BIA underestimated the 2H dilution technique with a mean difference of −0·1 (sd 2·3) kg between the two methods. These data correspond to an upper limit of agreement of 4·5 kg and a lower limit of agreement of −4·6 kg. Therefore, since the differences between the methods were normally distributed (data not shown), 95 % of the differences are expected to lie between −4·6 and 4·5 kg. The regression model for FFM showed a small correlation of −0·1 (se 0·02) (P = 0·01) between the mean difference and the average and with an R 2 of 0·02. For an average FFM of 28 kg, the regression line predicted a mean difference of 0·6 kg, whereas the predicted mean difference was −1·2 kg for an average FFM of 58 kg. There was no correlation (P > 0·05) between the mean difference and the average for FM (data not shown). Fig. 1. Bland–Altman plot including regression line of difference v. mean, comparing fat-free mass measured by the 2H dilution technique and bioelectrical impedance analysis (BIA). The total sample (n 281) was randomised into two subsamples. 
One was used to develop a predictive equation (equation sample) and the other to validate the equation (validation sample). There were no significant differences in characteristics between the two subsamples (data not shown). Based on data from the equation sample, FFM by the 2H dilution technique was predicted by the following equation:

$$\mathrm{FFM}_{^{2}\mathrm{H}} = 18.74 + \left(0.20 \times \frac{HT^2}{Z}\right) + \left(0.42 \times \mathrm{weight}\right) - \left(0.07 \times \mathrm{age}\right) - \left(6.24 \times \mathrm{female\ sex}\right),$$

where height (HT) is in cm, Z is in ohms, weight is in kg, age is in years, and female sex is a dummy variable (R2 0·82; standard error of estimate 2·6 kg). The predicted FFM was tested in the validation sample against Tanita and resulted in a mean difference of −0·1 (sd 2·2) kg with an upper limit of agreement of 4·3 kg and a lower limit of agreement of −4·5 kg. The predicted FFM tested against the 2H dilution technique resulted in a mean difference of −0·1 (sd 2·9) kg with an upper limit of agreement of 5·6 kg and a lower limit of agreement of −5·8 kg.

In this cohort, we compared BIA by the Tanita body composition analyser against the reference method of 2H dilution for determining FFM in Ethiopian HIV-infected individuals. There were no differences between FFM measured by the two methods, either in the comparison stratified by age and sex or in the comparison stratified by CD4 cell count. In the strata <50 CD4 cells/μl and males aged 18–29 years, there were relatively few participants, with n 18 in the former and n 14 in the latter. These small subgroups may have caused type II error, potentially failing to detect a difference between FFM measured by BIA and the 2H dilution technique. The Bland–Altman plot also indicated acceptable agreement between the two methods with a small mean difference and limits of agreement similar to other BIA studies in adults( Reference Diouf, Gartner and Dossou 11 , Reference Luke, Bovet and Forrester 21 ). However, there was a small correlation between the mean difference and the average FFM with an R2 value of 0·02, indicating that 2 % of the variance in the mean difference was due to the level of FFM. The regression line indicated that BIA slightly overestimated the 2H dilution technique at low FFM, while it underestimated at high FFM values. Nonetheless, these differences are considered minimal.

CD4 cell count is an indicator of health status and disease progression in HIV-infected individuals. Healthy individuals have a CD4 cell count between 500 and 1600 cells/μl( 22 ). In the present study, approximately 60 % of the HIV-infected participants had a CD4 cell count <200 cells/μl, which is considered very low. In this cohort, very low CD4 cell count did not seem to affect validity of BIA significantly. Further research is needed to validate BIA against the 2H dilution technique especially in severely ill HIV-infected individuals with a CD4 cell count <50 cells/μl. HIV-infected individuals may vary in hydration of FFM during the different stages of the disease, caused either by dehydration or fluid retention, which may influence bioelectrical resistance in the body and thereby affect the accuracy of BIA measurements( Reference Diouf, Gartner and Dossou 11 ). However, this problem will also affect results of the 2H dilution technique, which also assumes constant hydration.
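For illustration, here is a small sketch (hypothetical code with made-up input values) of the calibration equation reported above, together with the Bland–Altman-style limits of agreement used throughout the paper.

```python
import numpy as np

def predicted_ffm(height_cm, impedance_ohm, weight_kg, age_yr, female):
    """Calibration equation above: FFM (kg) predicted from the impedance index
    HT^2/Z, weight, age and sex (female = 1, male = 0)."""
    impedance_index = height_cm ** 2 / impedance_ohm
    return (18.74 + 0.20 * impedance_index + 0.42 * weight_kg
            - 0.07 * age_yr - 6.24 * female)

def limits_of_agreement(reference, test):
    """Bland-Altman mean difference and 95 % limits of agreement (mean +/- 1.96 sd)."""
    diff = np.asarray(test) - np.asarray(reference)
    mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
    return mean_diff, mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Made-up example participant.
print(predicted_ffm(height_cm=165, impedance_ohm=550, weight_kg=52, age_yr=33, female=1))
```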
Equations for BIA measurement derived and validated by Kotler et al. in a cohort of white, black and Hispanic HIV-infected and uninfected individuals( Reference Kotler, Burastero and Wang 9 ) were reported as not valid in another study in an African HIV-infected cohort( Reference Diouf, Gartner and Dossou 11 ). Surprisingly, among the fifteen published equations they tested, the two equations they found valid in their HIV-infected cohort were developed in uninfected individuals( Reference Diouf, Gartner and Dossou 11 ). This is in accordance with the present study where the manufacturer's equation was developed based on data from uninfected individuals. When using BIA to measure body composition, besides health status and ethnicity, other considerations need to be taken into account. These include potential intra-individual variability in hydration due to shifts in fluids and electrolytes( Reference Dixon, Ramos and Fitzgerald 23 ). Intra-individual variability can be divided into inter-day changes and intra-day fluctuations which cause changes in impedance. Changes in impedance of the trunk are very small, whereas impedance changes in the upper and lower limbs are much bigger. Intra-day variability in water content and distribution is due to consumption of food and physical activity( Reference Tanita 19 ), but could also be due to changes in disease status. Inter-day changes are caused by temporary weight changes, for example caused by dehydration, over-eating and/or -drinking( Reference Tanita 19 ). It is therefore important to register and control for behaviour that may change hydration before using the BIA method( Reference Dixon, Ramos and Fitzgerald 23 ). Ethnicity has also been shown to affect the accuracy of BIA, especially in African populations. The validity of BIA equations regarding ethnicity remains uncertain, as some equations have shown validity, while other equations show under- or over-estimation of BIA measurements against the reference method( Reference Dioum, Gartner and Cisse 24 ). This may be due to differences in length of limbs and body composition. For example, African individuals generally have longer legs than European individuals, while Asian individuals are known to have shorter legs( Reference Deurenberg and Deurenberg-Yap 8 ). This is relevant in BIA because impedance is unequally distributed across regional anatomy and thereby overestimation of FM may occur if the population generally has longer legs than those used in the equations of the BIA method( Reference Deurenberg and Deurenberg-Yap 8 ). However, in this cohort African ethnicity did not seem to affect BIA validity despite the fact that the equations in the BIA were based on data from 'Western' and Japanese individuals( Reference Tanita 19 ). A possible explanation is that Ethiopian individuals may have body geometry more similar to the reference population than the general African population. Another possible explanation is that the combination of 'Western' and Japanese individuals used as the reference population in the equations used by Tanita resulted in valid measurements. The overall t test and the Bland–Altman plot showed a mean difference of −0·1 (sd 2·3) kg with BIA slightly, but not significantly, underestimating FFM compared with the 2H dilution technique. The small correlation detected between mean difference and average FFM by the two methods reveals that there was a negligible difference when using BIA compared with the 2H dilution technique in this population. 
Measuring FFM using the Tanita body composition analyser resulted in an error of ±4·6 kg (1·96 sd limits of agreement) in this population. This accuracy is typical for BIA in adults( Reference Luke, Bovet and Forrester 21 , Reference Dioum, Gartner and Cisse 24 ). Furthermore, a standard error of estimate of 2·3 in women and 3·0 in men has been reported as 'very good performance'( Reference Houtkooper, Lohman and Going 25 ). We therefore consider this accuracy of BIA clinically acceptable. Using our own data, we could predict FFM from the impedance index, age, weight and sex with an error of ±5·2 kg (1·96 sd limits of agreement). This is only slightly less accurate than published predictions using data from healthy African populations( Reference Luke, Bovet and Forrester 21 ). Since all information is known, it would be valuable to conduct further validation using more parameters. It may be preferable to predict total body water instead of FFM to make it more comparable with other published equations.

The main strength of the present study is that it used the 2H dilution technique as the reference method, because it is characterised by high accuracy and precision( Reference Heymsfiels, Loman and Wang 26 ). Another strength is the large number of participants. It is also a strength that the majority of the participants had poor health status and progressed HIV infection (CD4 count <200 CD4 cells/μl), because they are likely to differ more from healthy individuals than HIV-infected individuals with a better health status. However, it may be a limitation that the sample was not representative of HIV patients. The HIV-infected patients were included at ART initiation, based on criteria of the 2008 guideline, and all of them had a low CD4 count or were symptomatic. Therefore, the variety in disease severity was small. Another limitation is the potential type II error in the strata <50 CD4 cells/μl and males aged 18–29 years. The fact that BIA measurements were not available for all participants may also be a limitation. Nonetheless, missing data were due to limitations in logistics and therefore not expected to be associated with outcome. It should be noted that the Tanita body composition analyser validated in this study is a discontinued model and the replacement model uses a different algorithm.

In conclusion, the Tanita body composition analyser is considered a valid tool for the assessment of FFM in this cohort of Ethiopian ART-naive HIV patients. BIA is an easy and inexpensive method to determine body composition and, according to the results from this study, it was reliable in both males and females in all age groups and also in all CD4 strata. BIA may therefore be a useful method to measure body composition in routine clinical and epidemiological practice in HIV-infected African individuals.

This research received no specific grant from any funding agency, commercial or not-for-profit sectors. M. H. H. is the primary author. M. F. O. and H. F. were involved in the conceptualisation and design. M. F. O., T. G. and D. Z. were involved in data acquisition. M. H. H., M. F. O., J. C. W., T. G., H. F., D. F.-J. and D. L. C. were involved in the analysis and interpretation of data. All authors were involved in drafting or reviewing the manuscript. There were no conflicts of interest.

1. Duggal, S, Chugh, TD & Duggal, AK (2012) HIV and malnutrition: effects on immune system. Clin Dev Immunol 2012, 784740.
2. Centers for Disease Control and Prevention (1992) 1993 Revised Classification System for HIV Infection and Expanded Surveillance Case Definition for AIDS Among Adolescents and Adults. http://www.cdc.gov/mmwr/preview/mmwrhtml/00018871.htm (accessed September 2016).
3. Salomon, J, de Truchis, P & Melchior, JC (2002) Body composition and nutritional parameters in HIV and AIDS patients. Clin Chem Lab Med 40, 1329–1333.
4. Johannessen, A, Naman, E, Ngowi, BJ, et al. (2008) Predictors of mortality in HIV-infected patients starting antiretroviral therapy in a rural hospital in Tanzania. BMC Infect Dis 8, 52.
5. Kyle, UG, Bosaeus, I, De Lorenzo, AD, et al. (2004) Bioelectrical impedance analysis – part I: review of principles and methods. Clin Nutr 23, 1226–1243.
6. Wang, Z, Deurenberg, P, Wang, W, et al. (1999) Hydration of fat-free body mass: review and critique of a classic body-composition constant. Am J Clin Nutr 69, 833–841.
7. Oliver, CJ, Allen, BJ & Gold, J (1995) Aspects of body composition in human immunodeficiency virus (HIV) infection. Asia Pac J Clin Nutr 4, 109–111.
8. Deurenberg, P & Deurenberg-Yap, M (2003) Validity of body composition methods across ethnic population groups. Acta Diabetol 40, Suppl. 1, S246–S249.
9. Kotler, DP, Burastero, S, Wang, J, et al. (1996) Prediction of body cell mass, fat-free mass, and total body water with bioelectrical impedance analysis: effects of race, sex, and disease. Am J Clin Nutr 64, 489s–497s.
10. Sluys, TE, van der Ende, ME, Swart, GR, et al. (1993) Body composition in patients with acquired immunodeficiency syndrome: a validation study of bioelectric impedance analysis. JPEN J Parenter Enteral Nutr 17, 404–406.
11. Diouf, A, Gartner, A, Dossou, NI, et al. (2009) Validity of impedance-based predictions of total body water as measured by 2H dilution in African HIV/AIDS outpatients. Br J Nutr 101, 1369–1377.
12. Olsen, MF, Abdissa, A, Kaestel, P, et al. (2014) Effects of nutritional supplementation for HIV patients starting antiretroviral treatment: randomised controlled trial in Ethiopia. BMJ 348, g3187.
13. Federal Ministry of Health (2011) National Guideline for Nutritional Care and Support for PLHIV. Addis Ababa: Federal Ministry of Health, Ethiopia.
14. Federal HIV/AIDS Prevention and Control Office (2008) Guidelines for Management of Opportunistic Infections and Anti Retroviral Treatment in Adolescents and Adults in Ethiopia. Addis Ababa: Federal Ministry of Health, Ethiopia.
15. World Health Organization (2006) BMI classification. http://apps.who.int/bmi/index.jsp?introPage=intro_3.html (accessed September 2016).
16. World Health Organization (2005) Interim WHO Clinical Staging of HIV/AIDS and HIV/AIDS Case Definitions for Surveillance: African Region. Geneva: World Health Organization.
17. NAM (2012) Viral load. http://www.aidsmap.com/Viral-load/page/1327496/ (accessed September 2016).
18. International Atomic Energy Agency (2010) Introduction to Body Composition Assessment Using the Deuterium Dilution Technique with Analysis of Saliva Samples by Fourier Transform Infrared Spectrometry. IAEA Human Health Series. Vienna, Austria: International Atomic Energy Agency.
19. Tanita (n.d.) Body Composition Analyzer BC-418 Instruction Manual. Tokyo, Japan: Tanita Corporation. www.tanita.com/en/.downloads/download/?file=855638086&lang=en_US (accessed September 2016).
20. Olsen, MF, Kaestel, P, Tesfaye, M, et al. (2015) Physical activity and capacity at initiation of antiretroviral treatment in HIV patients in Ethiopia. Epidemiol Infect 143, 1048–1058.
21. Luke, A, Bovet, P, Forrester, TE, et al. (2013) Prediction of fat-free mass using bioelectrical impedance analysis in young adults from five populations of African origin. Eur J Clin Nutr 67, 956–960.
22. AIDS.gov (2016) CD4 count. https://www.aids.gov/hiv-aids-basics/just-diagnosed-with-hiv-aids/understand-your-test-results/cd4-count/ (accessed October 2016).
23. Dixon, CB, Ramos, L, Fitzgerald, E, et al. (2009) The effect of acute fluid consumption on measures of impedance and percent body fat estimated using segmental bioelectrical impedance analysis. Eur J Clin Nutr 63, 1115–1122.
24. Dioum, A, Gartner, A, Cisse, AS, et al. (2005) Validity of impedance-based equations for the prediction of total body water as measured by deuterium dilution in African women. Am J Clin Nutr 81, 597–604.
25. Houtkooper, LB, Lohman, TG, Going, SB, et al. (1996) Why bioelectrical impedance analysis should be used for estimating adiposity. Am J Clin Nutr 64, 436s–448s.
26. Heymsfiels, SB, Loman, TG, Wang, Z, et al. (2005) Human Body Composition, 2nd ed. Champaign, IL: Human Kinetics.
What is the limit distance to the base function if the offset curve is a function too?

I asked a question about parallel functions here. From J.M.'s answer I understood that offset curves that are the parallels of a function may not themselves be functions. That answer raised new questions for me.

Q1) What is the limit distance to the base function if the offset curve is a function too?

Q2) It can be shown geometrically that all parallel curves of a line and of a half circle are also functions. What is the whole function family definition for such functions?

Please see the parallel curve examples below. (Thanks to J.M. for the graphs)

differential-geometry functions plane-curves

Mathlover

I will try to answer question 1 about "limit distance". For a parametric curve $x=x(t)$, $y=y(t)$ to have an equation of the form $y=g(x)$, we need $x$ to be a strictly increasing function of $t$. Suppose we have a smooth function $y=f(x)$ and consider its parallel curve at distance $d$ (measured upward; $d$ could be positive or negative). Then $$x(t)=t-d\frac{f'(t)}{\sqrt{1+f'(t)^2}}$$ If $x'>0$ for all $t$, then the parallel curve is also the graph of a function. Computation shows (after a simplification) that $$x'(t)=1-d\frac{f''(t)}{(1+f'(t)^2)^{3/2}}$$ So $x(t)$ is strictly increasing when $$d\frac{f''(t)}{(1+f'(t)^2)^{3/2}}<1$$ and fails to be strictly increasing if the reverse inequality holds. You will find the critical value of $d$ by considering the values of $f''(t)/{(1+f'(t)^2)^{3/2}}$. Not incidentally, the latter quantity is the curvature of the graph $y=f(x)$.

A different and somewhat more abstract viewpoint is given by considering the "squared distance function" defined by $\rho(x,y)=d((x,y),G)^2$ for all $(x,y)\in \Bbb R^2$. Here $G=\{(u,v):v=f(u)\}$ is the graph of your function, considered as a subset of $\Bbb R^2$, and $d((x,y),G)$ is the distance from $(x,y)$ to $G$, which is the same as the distance from $(x,y)$ to the closest point in $G$. (Assume that your function $f$ is continuous on $\Bbb R$ so that $G$ is a closed subset of $\Bbb R^2$.) If your function $f$ is smooth then $\rho(x,y)$ will be smooth on a neighborhood of $G$. More precisely, if $f$ is smooth of class $C^k$ with $k\geq 2$ then $\rho(x,y)$ will be smooth of class $C^k$ near the graph $G$. (See this.) From now on, we always assume $k\geq 2$.

We have used the squared distance function to get smoothness on the graph $G$, in the same way that the function $x^2$ is smooth at $x=0$ whereas the function $|x|$ is not. How far away from $G$ will $\rho(x,y)$ be smooth? Let $(x,y)$ be some point in $\Bbb R^2$ and let $(u,v)$ be the point on $G$ which is closest to $(x,y)$ (assume that there is only one such point). If the distance between the two points is less than the radius of curvature of $G$ at $(u,v)$ then we are guaranteed that the squared distance function will be smooth at $(x,y)$.

Now compute the radius of curvature $r(u,v)$ of $G$ at a general point $(u,v)$ on the graph $G$. If the radius of curvature is bounded from below on $G$, so that we have $r(u,v)\geq c$ for some $c>0$ and all $(u,v)$ on the graph, then the squared distance will be smooth on the set of points $\{(x,y):d((x,y),G)<c\}$. You can then define parallel graphs on this set as level curves for the distance function $d(\cdot,G)$.

Can something go wrong here? Yes, if there is more than one point on $G$ which is nearest to $(x,y)$.
For general curves this can be a problem, but since your curve is the graph of a function this problem cannot occur when $d(x,y)<c$, where $c$ is the uniform lower bound from the last paragraph for the radius of curvature.

I called this approach more abstract, since it is not so easy to get explicit formulas for the distance function $d(\cdot, G)$. Nevertheless, this function (or the function $\rho$) is often a useful tool for studying curves and higher dimensional surfaces.

Per Manne
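To illustrate the criterion from the answer numerically, here is a small sketch (illustrative code, using a hypothetical example function of my choosing) that estimates the critical offset distance as the reciprocal of the maximum curvature of the graph, and checks that $x'(t)$ stays positive for a smaller offset but not for a larger one.

```python
import numpy as np

def critical_offset(fp, fpp, ts):
    """Largest |d| for which x(t) = t - d*f'(t)/sqrt(1+f'(t)^2) stays increasing:
    d_crit = 1 / max_t |curvature(t)|, with curvature = f'' / (1 + f'^2)^(3/2)."""
    curvature = fpp(ts) / (1.0 + fp(ts) ** 2) ** 1.5
    kmax = np.max(np.abs(curvature))
    return np.inf if kmax == 0 else 1.0 / kmax

def x_prime(fp, fpp, ts, d):
    """x'(t) from the answer above."""
    return 1.0 - d * fpp(ts) / (1.0 + fp(ts) ** 2) ** 1.5

# Example: f(x) = sin(x); curvature is maximal (= 1) at the extrema, so d_crit = 1.
ts = np.linspace(-10, 10, 100_001)
fp, fpp = np.cos, lambda t: -np.sin(t)

print("estimated critical offset:", critical_offset(fp, fpp, ts))      # ~1.0
print("x'(t) > 0 everywhere at d = 0.9:", np.all(x_prime(fp, fpp, ts, 0.9) > 0))
print("x'(t) > 0 everywhere at d = 1.1:", np.all(x_prime(fp, fpp, ts, 1.1) > 0))
```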
Library & Collections Show submenu for Study Study sub-menu Careers, Employability and Enterprise Wider Student Experience Show submenu for Wider Student Experience Wider Student Experience sub-menu Student Support & Wellbeing Welcome and Orientation Research & Business Show submenu for Research & Business Research & Business sub-menu Helping Business Show submenu for Alumni Global Durham Show submenu for Global Durham Global Durham sub-menu World-wide Research and Partnerships Global Networks and Consortia Show submenu for Visit Us Visit Us sub-menu For Schools and Colleges Undergraduate Postgraduate International Careers, Employability and Enterprise Our Colleges Enrichment Activities Student Support & Wellbeing Welcome and Orientation Current Research Institutes and Centres Helping Business Facilities and Services World-wide Research and Partnerships Global Networks and Consortia Open Days and Visits Attractions For Schools and Colleges Location Search Durham.ac.uk Dr Ryan Cooke Navigate page Associate Professor (Research) - Royal Society Research Fellow ORCID profile https://www.dur.ac.uk/images/physics/staff/profiles/dmrv25.jpg Associate Professor (Research) - Royal Society Research Fellow in the Department of Physics OCW121 Big Bang Nucleosynthesis The First Stars Fundamental Physics Centre for Extragalactic Astronomy ADS publication list Awarded Grants 2020: RF040659: Astronomy Consolidated Grant 2020-2023 CEA, £2408347.17, 2020-04-01 - 2023-03-31 Fossati, M, Fumagalli, M, Lofthouse, E K, Dutta, R, Cantalupo, S, Arrigoni Battaia, F, Fynbo, J P U, Lusso, E, Murphy, M T, Prochaska, J X, Theuns, T & Cooke, R J (2021). MUSE analysis of gas around galaxies (MAGG) – III. The gas and galaxy environment of z = 3–4.5 quasars. Monthly Notices of the Royal Astronomical Society 503(2): 3044-3064. Prochaska, J., Hennawi, Joseph, Westfall, Kyle, Cooke, Ryan, Wang, Feige, Hsyu, Tiffany, Davies, Frederick, Farina, Emanuele & Pelliccia, Debora (2020). PypeIt: The Python Spectroscopic Data Reduction Pipeline. Journal of Open Source Software 5(56): 2308. Welsh, Louise, Cooke, Ryan & Fumagalli, Michele (2021). The stochastic enrichment of Population II stars. Monthly Notices of the Royal Astronomical Society 500(4): 5214-5228. Dutta, Rajeshwari, Fumagalli, Michele, Fossati, Matteo, Lofthouse, Emma K., Prochaska, J. Xavier, Battaia, Fabrizio Arrigoni, Bielby, Richard M., Cantalupo, Sebastiano, Cooke, Ryan J., Murphy, Michael T. & O'Meara, John M. (2020). MUSE Analysis of Gas around Galaxies (MAGG) - II: Metal-enriched halo gas around z ∼ 1 galaxies. Monthly Notices of the Royal Astronomical Society 499(4): 5022-5046. Hsyu, Tiffany, Cooke, Ryan J., Prochaska, J. Xavier & Bolte, Michael (2020). The PHLEK Survey: A New Determination of the Primordial Helium Abundance. The Astrophysical Journal 896(1): 77. Pettini, Max, Fumagalli, Michele, Welsh, Louise & Cooke, Ryan (2020). A limit on Planck-scale froth with ESPRESSO. Monthly Notices of the Royal Astronomical Society 494(4): 4884-4890. Pettini, Max, Fumagalli, Michele, Cooke, Ryan & Welsh, Louise (2020). A bound on the 12C/13C ratio in near-pristine gas with ESPRESSO.★. Monthly Notices of the Royal Astronomical Society 494(1): 1411-1423. Sykes, Calvin, Fumagalli, Michele, Cooke, Ryan & Theuns, Tom (2020). Determining the primordial helium abundance and UV background using fluorescent emission in star-free dark matter haloes. Monthly Notices of the Royal Astronomical Society 492(2): 2151-2160. Cooke, Ryan (2019). The ACCELERATION programme: I. 
Cosmology with the redshift drift★. Monthly Notices of the Royal Astronomical Society 492(2): 2044-2057. Lofthouse, Emma K, Fumagalli, Michele, Fossati, Matteo, O'Meara, John M, Murphy, Michael T, Christensen, Lise, Prochaska, J Xavier, Cantalupo, Sebastiano, Bielby, Richard M, Cooke, Ryan J, Lusso, Elisabeta & Morris, Simon L (2020). MUSE Analysis of Gas around Galaxies (MAGG) - I: Survey design and the environment of a near pristine gas cloud at z ≈ 3.5. Monthly Notices of the Royal Astronomical Society 491(2): 2057-2074. Fossati, M, Fumagalli, M, Lofthouse, E K, D'Odorico, V, Lusso, E, Cantalupo, S, Cooke, R J, Cristiani, S, Haardt, F, Morris, S L, Peroux, C, Prichard, L J, Rafelski, M, Smail, I & Theuns, T (2019). The MUSE Ultra Deep Field (MUDF). II. Survey design and the gaseous properties of galaxy groups at 0.5 < z < 1.5. Monthly Notices of the Royal Astronomical Society 490(1): 1451-1469. Welsh, Louise, Cooke, Ryan & Fumagalli, Michele (2019). Modelling the chemical enrichment of Population III supernovae: The origin of the metals in near-pristine gas clouds. Monthly Notices of the Royal Astronomical Society 487(3): 3363–3376. Sykes, Calvin, Fumagalli, Michele, Cooke, Ryan, Theuns, Tom & Benítez–Llambay, Alejandro (2019). Fluorescent rings in star-free dark matter haloes. Monthly Notices of the Royal Astronomical Society 487(1): 609-621. Cooke, Ryan & Fumagalli, Michele (2018). Measurement of the primordial helium abundance from the intergalactic medium. Nature astronomy 2(12): 957-961. Hsyu, Tiffany, Cooke, Ryan J., Prochaska, J. Xavier & Bolte, Michael (2018). Searching for the lowest metallicity galaxies in the local universe. The Astrophysical Journal 863(2): 134. Cooke, Ryan J., Pettini, Max & Steidel, Charles C. (2018). One Percent Determination of the Primordial Deuterium Abundance. The Astrophysical Journal 855(2): 102. Sharma, Mahavir, Theuns, Tom, Frenk, Carlos S. & Cooke, Ryan J. (2018). Origins of carbon-enhanced metal-poor stars. Monthly Notices of the Royal Astronomical Society 473(1): 984-995. Hsyu, Tiffany, Cooke, Ryan J., Prochaska, J. Xavier & Bolte, Michael (2017). The Little Cub: Discovery of an Extremely Metal-poor Star-forming Galaxy in the Local Universe. The Astrophysical Journal 845(2): L22. Cooke, Ryan J., Pettini, Max & Steidel, Charles C. (2017). Discovery of the most metal-poor damped Lyman-α system. Monthly Notices of the Royal Astronomical Society 467(1): 802-811. Cooke, R. & Pettini, M. (2016). The primordial abundance of deuterium: ionization correction. Monthly Notices of the Royal Astronomical Society 455(2): 1512-1521. Cooke, R.J., Pettini, M., Nollett, K.M. & Jorgenson, R. (2016). The Primordial Deuterium Abundance of the Most Metal-poor Damped Lyman-alpha System. The Astrophysical Journal 830(2): 148. Mawatari, K., Inoue, A.K., Kousai, K., Hayashino, T., Cooke, R., Prochaska, J.X., Yamada, T. & Matsuda, Y. (2016). Discovery of a Damped Ly-alpha Absorber at z = 3.3 along a Galaxy Sight-line in the SSA22 Field. The Astrophysical Journal 817(2): 161. Shen, S., Cooke, R.J., Ramirez-Ruiz, E., Madau, P., Mayer, L. & Guedes, J. (2015). The History of R-Process Enrichment in the Milky Way. The Astrophysical Journal 807: 115. Cooke, R.J. (2015). Big Bang Nucleosynthesis and the Helium Isotope Ratio. The Astrophysical Journal Letters 812: L12. Cooke, R.J., Pettini, M. & Jorgenson, R.A. (2015). The most metal-poor damped Lyα systems: An insight into dwarf galaxies at high redshift. The Astrophysical Journal 800(1): 12. 
Cucchiara, A., Fumagalli, M., Rafelski, M., Kocevski, D., Prochaska, J. X., Cooke, R. J. & Becker, G. D. (2015). Unveiling the Secrets of Metallicity and Massive Star Formation Using DLAs along Gamma-Ray Bursts. The Astrophysical Journal 804(1): 51. Cooke, R.J. & Madau, P. (2014). Carbon-enhanced Metal-poor Stars: Relics from the Dark Ages. The Astrophysical Journal 791(2): 116. Cooke, R.J., Pettini, M., Jorgenson, R.A., Murphy, M.T. & Steidel, C.C. (2014). Precision Measures of the Primordial Abundance of Deuterium. The Astrophysical Journal 781(1): 31. Cooke, R., Pettini, M., Jorgenson, R.A., Murphy, M.T., Rudie, G.C. & Steidel, C.C. (2013). The explosion energy of early stellar populations: the Fe-peak element ratios in low-metallicity damped Ly-alpha systems. Monthly Notices of the Royal Astronomical Society 431: 1625-1637. Deason, A.J., Belokurov, V., Evans, N.W., Koposov, S.E., Cooke, R.J., Peñarrubia, J., Laporte, C.F.P., Fellhauer, M., Walker, M.G. & Olszewski, E.W. (2012). The cold veil of the Milky Way stellar halo. Monthly Notices of the Royal Astronomical Society 425(4): 2840-2853. Cooke, R., Pettini, M. & Murphy, M.T. (2012). A new candidate for probing Population III nucleosynthesis with carbon-enhanced damped Ly-alpha systems. Monthly Notices of the Royal Astronomical Society 425: 347-354. Steiner, J.F., Reis, R.C., Fabian, A.C., Remillard, R.A., McClintock, J.E., Gou, L., Cooke, R., Brenneman, L.W. & Sanders, J.S. (2012). A broad iron line in LMC X-1. Monthly Notices of the Royal Astronomical Society 427: 2552-2561. Pettini, M. & Cooke, R. (2012). A new, precise measurement of the primordial abundance of deuterium. Monthly Notices of the Royal Astronomical Society 425: 2477-2486. Cooke, R., Pettini, M., Steidel, C.C., Rudie, G.C. & Nissen, P.E. (2011). The most metal-poor damped Lyα systems: insights into chemical evolution in the very metal-poor regime. Monthly Notices of the Royal Astronomical Society 417(2): 1534-1558. Cooke, R., Pettini, M., Steidel, C.C., Rudie, G.C. & Jorgenson, R.A. (2011). A carbon-enhanced metal-poor damped Ly$\alpha$ system: probing gas from Population III nucleosynthesis? Monthly Notices of the Royal Astronomical Society 412: 1047-1058. Cooke, R., Pettini, M., Steidel, C.C., King, L.J., Rudie, G.C. & Rakic, O. (2010). A newly discovered DLA and associated Ly-alpha emission in the spectra of the gravitationally lensed quasar UM673A,B. Monthly Notices of the Royal Astronomical Society 409: 679-693. Cooke, R. & Lynden-Bell, D. (2010). Does the Universe accelerate equally in all directions? Monthly Notices of the Royal Astronomical Society 401: 1409-1414. Cooke, R., Bland-Hawthorn, J., Sharp, R. & Kuncic, Z. (2008). Ionization Cone in the X-Ray Binary LMC X-1. The Astrophysical Journal Letters 687: L29. Cooke, R., Kuncic, Z., Sharp, R. & Bland-Hawthorn, J. (2007). Spectacular Trailing Streamers near LMC X-1: The First Evidence of a Jet? The Astrophysical Journal Letters 667: L163-L166. Supervision students Mr Alexander Beckett PGR Student Mr Calvin Sykes Miss Louise Welsh Contact Durham University Durham University on Twitter Durham University on Facebook Durham University on LinkedIn Durham University on YouTube Durham University on Instagram Student Complaints and Non-Academic Misconduct International Consortia Research & Partnerships Learn Ultra Banner Self Service Technician Commitment The Palatine Centre Stockton Road DH1 3LE © 2021 Durham University
Rao* , Shama* , and Rao*: Energy Efficient Cross Layer Multipath Routing for Image Delivery in Wireless Sensor Networks Volume 14, No 6 (2018), pp. 1347 - 1360 10.3745/JIPS.03.0104 Santhosha Rao* , Kumara Shama* and Pavan Kumar Rao* Energy Efficient Cross Layer Multipath Routing for Image Delivery in Wireless Sensor Networks Abstract: Owing to limited energy in wireless devices power saving is very critical to prolong the lifetime of the networks. In this regard, we designed a cross-layer optimization mechanism based on power control in which source node broadcasts a Route Request Packet (RREQ) containing information such as node id, image size, end to end bit error rate (BER) and residual battery energy to its neighbor nodes to initiate a multimedia session. Each intermediate node appends its remaining battery energy, link gain, node id and average noise power to the RREQ packet. Upon receiving the RREQ packets, the sink node finds node disjoint paths and calculates the optimal power vectors for each disjoint path using cross layer optimization algorithm. Sink based cross-layer maximal minimal residual energy (MMRE) algorithm finds the number of image packets that can be sent on each path and sends the Route Reply Packet (RREP) to the source on each disjoint path which contains the information such as optimal power vector, remaining battery energy vector and number of packets that can be sent on the path by the source. Simulation results indicate that considerable energy saving can be accomplished with the proposed cross layer power control algorithm. Keywords: Castalia for OMNET++ , Cross Layer Multimedia , Magick++ , Sink-Based MMRE AOMDV The TCP/IP protocol architecture for networking which is loosely based on the layered Open Systems Interconnection (ISO/OSI) architecture is a successful example for a very good architectural design. The entire networking task is divided into layers and each layer provides a set of services. Protocols can be designed to realize the services offered by various layers. Direct communication between two non-adjacent layers are prohibited by these architectures. Function calls and responses are used to communicate between two adjacent layers. It is imperative to design the protocols such that protocol in each layer only makes use of the services presented by the adjacent lower layer and never bothers about how these services are offered. The layered architecture incorporated in wireless networks was initially inherited from wired networks. Nonetheless, the research community started analyzing the aptness of the layered architecture as third and fourth generation wireless networks begin to proliferate in the area of communication networks. It is continually debated that layered architecture may not be well suited for wireless networks although they have functioned very well for wired networks [1]. Therefore, to meet the challenges posed by the future wireless networks, it is imperative to design protocols by the layered architecture violation. This can be accomplished by permitting the protocols in the nonadjacent layers to directly communicate with each other (i.e., new interface creation between nonadjacent layers, variable sharing between layers, layer boundary refinement, protocol design at a layer based on the another layer details, joint parameter tuning across layers, and so on). This type of layered architecture violation has been called cross-layer design with regard to the reference architecture. 
One of the ways to perform energy efficient communication is by performing power and modulation control at the physical layer level. With regard to this, an adaptive modulation mechanism [2] for wireless sensor networks (WSN) with an additive white Gaussian noise (AWGN) channel is proposed. Link adaptation can be used to enhance the performance of WSN by saving the considerable amount of energy. The results given in [3] confirm that the link adaptation can improve the performance of WSNs. Furthermore, energy efficiency can be accomplished by suitably choosing a MAC layer protocol. Duty cycling [4] can be considered as one of the important methods for obtaining energy efficiency in energy constrained wireless networks. In this method, every wireless node periodically transits between an active state and a sleep state. Sum of its sleep time and the active time gives the duty cycle period. It is very important to consider the routing problems in energy constrained wireless network design. An important objective of a good routing protocol is to select energy efficient paths. Hence, it is imperative to develop energy efficient protocols by combining cross layer and energy aware system design. Plethora of routing protocols have been developed for WSNs and detailed in [5]. Liu et al. [6] proposed maximum minimum residual energy-ad hoc on demand multipath distance vector (MMRE-AOMDV) routing protocol which aims at balancing the traffic load amongst different nodes according to the remaining battery energy. This helps in prolonging the individual node's lifetime and therefore the lifetime of the entire system. Firstly, the protocol finds the minimal nodal residual energy of each route from the source to destination and then sorts multiple available routes by descending minimal nodal residual energy. It selects the maximal minimal residual energy path for the data transmission at any point of time. The algorithm can prolong the network lifetime by balancing the battery power utilization of individual nodes. Simulation results show that the MMRE-AOMDV protocol outperforms AOMDV protocol. The authors in [7] considered joint routing and sleep scheduling to maximize the life time of WSN. Semchedine et al. [8] proposed an energy efficient cross layer protocol (EECP) in which they considered physical, MAC and network layers for the routing. Using physical layer information, protocol routes the data to the node that has the maximum energy and closest to the sink. The MAC layer is used to determine the duty-cycle of the node and extend the sleep mode time. Chilamkurti et al. [9], proposed a cross-layer protocol that extends dynamic source routing (DSR) to enhance the energy efficiency by minimizing the frequency of recomputed routes. Iqbal et al. [10] proposed an adaptive cross-layer multipath routing protocol which takes into account of the type of applications during the operation. The authors in [11] proposed the cross-layer image transmission algorithm which aims at minimizing the total energy consumption while transmitting the image over a given distance. To achieve this, the algorithm optimizes the transmit power and packet length by using image quality constraint specified by the application layer. The authors in [12] proposed a power aware medium access control (PAMAC) protocol. Here the network layer selects the path which requires the minimal total power amongst multiple possible paths under the condition that all the nodes belonging to the path should have battery capacity above a threshold. 
The authors in [13] proposed an efficient cross-layer protocol for WMSN between the MAC and network layer in which a cluster based multipath routing protocol is pursued in conjunction with an adaptive QoS aware scheduling. A cross-layer optimization protocol intended for MANET, in which the network layer dynamically adjusts its routing protocol by considering the signal to noise ratio (SNR) and received power along the end to end routing path for every transmission link was proposed by the authors in [14]. Objective of this work is to develop the cross layer protocol between physical and network layer in the energy constrained wireless networks to minimize the total transmission energy while meeting the total end-to-end bit error rate (BER) for the image delivery using Castalia simulator. Scope is to maximize the lifetime of the battery operated wireless nodes. The research paper is organized as follows: Section 2 gives the necessary theoretical background required to understand the cross layer optimization algorithm. Section 3 gives the proposed cross-layer algorithm. Network model is discussed in Section 4. Results are tabulated and discussed in Section 5. Conclusions are drawn in Section 6. 2. Theoretical Background To design and implement the proposed cross layer model, a two-dimensional square region is considered to randomly deploy a given number of nodes. The packet carrying multimedia data is transmitted over a route with multiple intermediate hops. It is imperative to deliver the multimedia packet to the sink node with a minimum end-to-end BER. Each hop is modeled by a path-loss AWGN channel, so that channel gain gi has one to one relationship with hop length di. When transmitting over the ith link with transmission power PTx,i, the received power PRx,i is given by : [TeX:] $$P _ { R x , i } = g _ { i } \cdot P _ { T x , i }$$ where gi is the link gain. At the physical layer, differential phase shift keying (DPSK) [15] is considered as the modulation scheme. The received SNR must be greater than the given threshold γ in order to satisfy minimum BER for the data transmission over a single hop wireless link. The BER for the receiver noise power ni is written as: [TeX:] $$P _ { b } = \frac { 1 } { 2 } e ^ { - \gamma _ { i } } = \frac { 1 } { 2 } e ^ { - \frac { g _ { i . P _ { T x, i } } } { n _ { i } } }$$ For a path with ℎ hops and end-to-end BER Pbe, per hop BER Pb is calculated as: [TeX:] $$P _ { b } = 1 - \sqrt [ h ] { 1 - P _ { b e } } = 1 - \sqrt [ h ] { q _ { L } }$$ 2.1 Transmission Power Optimization of a Route Consider a path comprising N nodes, with node-0 as source, node-(N-1) as sink and node-1 to node- (N-2) as intermediate nodes. Let [TeX:] $$P _ { T x , 0 } , P _ { T x , 1 } , P _ { T x , 2 } \dots \ldots P _ { T x , N - 2 }$$ be the transmission powers of nodes 0,1,2, … and (N-2), respectively. Let [TeX:] $$n _ { 0 } , n _ { 1 } , n _ { 2 } , \ldots . \text { and } n _ { N - 2 }$$ be the noise power levels measured at the receivers on each wireless link belonging to the path. For example, n0 is the noise power measured at node-1 on the wireless link connecting node-0 and node-1, n1 is the noise power measured at node-2 on the wireless link connecting node-1 and node-2, and so on. 
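To make Eqs. (2) and (3) concrete before the multi-hop optimization is set up, the short sketch below converts an end-to-end BER budget into a per-hop BER and the corresponding required transmit power on each DPSK link. The link gains and noise powers are illustrative placeholder values, not measurements from the experiments reported later.

import math

def per_hop_ber(end_to_end_ber: float, hops: int) -> float:
    """Eq. (3): split an end-to-end BER budget evenly over `hops` links."""
    return 1.0 - (1.0 - end_to_end_ber) ** (1.0 / hops)

def required_tx_power(ber: float, gain: float, noise: float) -> float:
    """Invert Eq. (2): P_b = 0.5*exp(-g*P_Tx/n)  =>  P_Tx = (n/g) * ln(1/(2*P_b))."""
    gamma_req = math.log(1.0 / (2.0 * ber))   # required per-hop SNR
    return gamma_req * noise / gain

# Hypothetical 5-hop path: link gains g_i and receiver noise powers n_i (mW).
gains  = [3.0e-10, 2.6e-10, 3.4e-10, 2.8e-10, 3.1e-10]
noises = [1.0e-9, 1.1e-9, 0.9e-9, 1.0e-9, 1.2e-9]

p_b = per_hop_ber(end_to_end_ber=1e-5, hops=len(gains))
powers = [required_tx_power(p_b, g, n) for g, n in zip(gains, noises)]
print("per-hop BER:", p_b)
print("required Tx powers (mW):", [round(p, 2) for p in powers])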
Now, the cross layer optimization problem from source node to the sink node to minimize the transmission energy Ei with a constraint on QoS is formulated as: [TeX:] $$\min \left( \sum _ { i = 0 } ^ { N - 2 } E _ { i } \right)$$ [TeX:] $$\prod _ { i = 0 } ^ { N - 2 } \left( 1 - P _ { b , i } \right) = \prod _ { i = 0 } ^ { N - 2 } \left( 1 - \frac { 1 } { 2 } e ^ { \frac { g _ { i _ { i } , P _ { T x i } } } { n _ { i } } } \right) \geq q _ { L }$$ where the transmission energy Ei is calculated by taking the product of transmission power PTx,i and the packet transmission time TR. 3. Proposed Cross Layer Algorithm The proposed cross layer algorithm [16] is depicted in Fig. 1. The algorithm is run in the sink node which is presumed to have large computational power and copious energy. Whenever source node intends to establish a session with the sink node, it broadcasts a Route Request Packet (RREQ) packet with node id, image size (m), residual energy (Eresidual) and end-to-end BER to its neighbor nodes. Upon receiving the RREQ packet, intermediate node appends its remaining battery energy, link gain (gi), node id and average noise power (nj) to this RREQ packet. The sink node receives the RREQ packets from multiple paths and finds all the possible node disjoint paths using the path information available in RREQ packets. For the nodes belonging to each disjoint path, the sink also finds the optimal powers using the aforementioned constraint equation (Eq. (4)) so as to satisfy the end to end BER with minimal total energy consumption. Sink finds the difference between the residual battery energy and reserved energy of each node on the route. If these values are greater than optimal energies then the routes are marked as active routes. With the presumption that the initial energies of the nodes is the same, it can be argued that the path which consumes the minimal total energy is the winner. On the other hand, if we presume that the initial nodal energies are not same, it is necessary to extend the lifetimes of the nodes with least residual energy. Typically, the nodes which are at the vicinity of the sink nodes have least residual battery energy since they have been used by most of the traffic destined for the sink. It is required to enhance the lifetime of the nodes with minimal residual energy to enhance the network lifetime. This is ensured by sink-based MMRE algorithm. After computing the optimal powers, the sink finds the total number of packets that can be sent on each disjoint path by the source node using sink-based MMRE algorithm. Sink calculates the minimal residual energy for each disjoint path using the optimal powers. Packet counter corresponding to the path with maximum of minimal residual energies is incremented by one. For every node on the maximum minimal residual energy path, the residual battery energy is updated by subtracting the packet transmission energy from the current residual energy. The whole procedure is reiterated until the total packet counter on all the disjoint paths reaches the number of application packets. The sink sends back Route Reply Packets (RREP) to the source node through each disjoint path with information such as optimal power corresponding to each node, updated residual battery energy of each node and the packet counter for the path. Flowchart for the sink-based cross layer MMRE multipath routing algorithm. 4. Network Model The proposed algorithm is implemented in Castalia simulator which is built on famous event driven simulator OMNET++. 
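Before the simulation setup is described, the packet-allocation loop of the sink-based MMRE algorithm outlined above can be summarized in a short sketch. The per-node powers, residual energies and packet count below are placeholders rather than the values used in the experiments.

def allocate_packets(paths, n_packets, tx_time):
    """
    Sketch of the sink-based MMRE allocation loop from Section 3.
    `paths` is a list of dicts with per-node optimal powers (mW) and
    residual energies (mJ); returns the packet counter for each path.
    """
    counters = [0] * len(paths)
    for _ in range(n_packets):
        # Minimal residual energy of each disjoint path.
        min_residuals = [min(p["residual"]) for p in paths]
        # Send the packet on the path whose minimal residual energy is largest.
        best = max(range(len(paths)), key=lambda i: min_residuals[i])
        counters[best] += 1
        # Update the residual energy of every node on the chosen path.
        for k, power in enumerate(paths[best]["power"]):
            paths[best]["residual"][k] -= power * tx_time   # mW * s = mJ
    return counters

# Placeholder example: three disjoint paths with per-node powers and residual energies.
paths = [
    {"power": [42.24, 42.24, 42.24],        "residual": [3500.0, 1680.0, 1420.0]},
    {"power": [42.24, 42.24, 42.24, 42.24], "residual": [3500.0, 1620.0, 1340.0, 1400.0]},
    {"power": [42.24, 36.30, 42.24, 36.30], "residual": [3500.0, 1800.0, 1480.0, 1400.0]},
]
tx_time = (128 * 8) / 250e3        # 128-byte packet at 250 kbps (seconds)
print(allocate_packets(paths, n_packets=1000, tx_time=tx_time))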
Though the proposed cross layer model was implemented in MATLAB [16], to perform the objective and subjective quality assessment of the image received at the sink, it is required to create a network scenario. The Castalia simulator is selected for this purpose, since a network simulator based on MATLAB is not available. Moreover, the fact that Castalia simulator supports 8- discrete power levels can be exploited to perform the intended cross layer power control. The nodes configure themselves with the optimal discrete powers available in the RREQ packets sent from the sink node. The discrete power levels supported by Castalia are tabulated in Table 1. Radio power transmission levels Tx_dBm 0 -1 -3 -5 -7 -10 -15 -25 Tx_mW 57.42 55.18 50.69 46.20 42.24 36.30 32.67 29.04 Network model. As depicted in Fig. 2, 25 sensors with maximum transmission power of 57.42 mW (i.e., the maximum power supported by Castalia) were randomly deployed on 150 m × 150 m two-dimensional square region. Each node can communicate with the nodes in its transmission range which is depicted using edges in the Fig. 2. Node-1 acts as multimedia source node and Node-6 acts as sink node which is assumed to have abundant energy and computational power. In this work, disjoint paths with least delays are selected. The sink node stores the first RREQ by default. Subsequent RREQ packets are stored, if the path information available in these RREQ packets are disjoint to the paths available in the already stored packets. Otherwise, packets are rejected. The following node disjoint paths are obtained after using this method. Path 1: [1 2 3 4 5 6] Path 2: [1 7 9 11 13 15 6] Path 3: [1 8 10 23 12 14 16 6]. It is also presumed that the residual battery energy exponentially decreases for the nodes on each path as we traverse towards the sink node. The initial residual battery energies (in mJ) of the nodes considered for the simulation are shown below: Path 1: [3500 1680 1620 1560 1420] Path 2: [3500 1620 1560 1480 1400 1340] Path 3: [3500 1800 1720 1620 1560 1480 1400]. The optimal power vectors calculated by the sink using the information available in the RREQ packets for the disjoint paths are tabulated in Tables 2–4 for various end-to-end BER. It can be seen that nodes are configured with one of the 8 optimal powers supported by Castalia. Since continuous power levels are not available in the simulator the optimal powers are rounded to the next discrete power levels which are greater than or equal to the calculated optimal power. Optimal powers (mW) for nodes on the Path 1 BER Node 1 Node 2 Node 3 Node 4 Node 5 10-3 42.24 42.24 42.24 42.24 42.24 10-10 50.69 50.69 55.18 55.18 55.18 BER Node 1 Node 7 Node 9 Node 11 Node 13 Node 15 10-3 42.24 42.24 42.24 42.24 42.24 42.24 1010-10 50.69 55.18 55.18 55.18 55.18 50.69 BER Node 1 Node 8 Node 10 Node 23 Node 12 Node 14 Node 16 10-3 42.24 42.24 42.24 36.30 42.24 42.24 36.30 10-5 42.24 46.20 46.20 42.24 46.20 46.2 42.24 10-8 46.20 50.69 50.69 46.2 50.69 50.69 46.20 10-10 50.69 55.18 50.69 46.20 55.18 50.69 46.20 A 512×512 color Lena image is packetized and transmitted with 28 bytes of application header per packet. Total number of packets and number of packets that can be transmitted on each path for different packet sizes and end to end BERs as reported by sink-based cross layer MMRE algorithm are tabulated in Table 5. The energy required for the transmission of image for various BERs and packet sizes is calculated by adding the path energies of the three disjoint paths. 
The path energy Epath is calculated as: [TeX:] $$E _ { p a t h } = \sum _ { i = 1 } ^ { h } P _ { i } \cdot T _ { R } \cdot N _ { P }$$ where Pi is the transmission power of the node, TR is the packet transmission time, h is the number of hops on the path and NP is the packet counter of the path. TR is calculated as: [TeX:] $$T _ { R } = \text { Packet size in bits } / \text{ Data rate in bps }.$$ Number of data packets to be sent on each path determined by sink-based MMRE Packet size=128 bytes (total no. of packets=7944) Path 1 Path 2 Path 3 Path 1 Path 2 Path 3 Path 1 Path 2 Path 3 BER 10-3 2673 2486 2785 1166 1072 1212 552 505 572 BER 10-10 2614 2545 2785 1140 1098 1212 539 517 573 (without cross-layer) For the simulation, data rate of 250 kbps is considered. The energy consumption per path for the transmission of image packets—with and without cross-layer power control (i.e., with maximum transmission power)—is shown in Table 6. As expected, the path energy increases with decrease in BER. Energy consumption (J) for Path 1, Path 2 and Path 3 Packet size=128 bytes Packet size=256 bytes Packet size=512 bytes BER 10-3 5.0276 5.7373 7.4809 4.3862 4.9480 6.5112 4.1530 4.6619 6.1459 BER 10-10 5.5132 6.5892 8.2912 4.8088 5.6856 7.2165 4.5472 5.3542 6.8235 The total energy consumption for the transmission of image packets—with and without cross layer power control (i.e., with maximum transmission power)—is shown in Table 7. As expected, the path energy increases with decrease in BER. It can also be seen that total energy decreases with increase in packet size due to the reduction in total number of application overhead bits. Total energy consumption (J) for image packets BER 10-3 18.2458 15.8454 14.9608 BER 10-6/td> 19.4299 16.8741 15.9341 BER 10-10 20.3936 17.7109 16.7249 Max power (without cross-layer) 21.2771 18.4787 17.4493 To perform cross layer power control, RREQ packets carry the cross layer information as mentioned earlier. It can be seen from Table 8 that considerable amount of energy saving/image can be accomplished using the proposed cross layer power control and the life time of the network can be enhanced significantly. Energy saving for a given BER is calculated as Total energy saving = Total energy consumption with Maximum power – Total energy consumption with cross layer power control. The lifetime of the network is proportional to the total number of packets that can be sent on all the active paths until one of the nodes on each active path drains the energy to threshold energy (ETh). For the purpose of simulation, threshold energy of 2 mJ is considered. Table 9 shows the number of packets that can be sent by sink-based cross layer MMRE multipath routing algorithm for various BERs and MMRE AOMDV (without cross layer) on all the active paths until one of the nodes on each active path drains the energy to ETh. Table 10 shows the percentage lifetime improvement for Sink based Cross Layer MMRE multipath routing algorithm over MMRE AOMDV (without cross layer) for various BERs. Total energy saving (J) with cross layer power control BER 10-3 3.0313 2.6333 2.4885 BER 10-4 2.6196 2.2761 2.149 BER 10-10 0.8835 0.7678 0.7244 The image is reconstructed for subjective and objective quality analysis using ImageMagick software. ImageMagick is a software suite to create, edit, compose, or convert bitmap images. It can read and write images in a variety of formats. It runs on Linux, Windows, Mac OS X, iOS, Android OS, and others. 
ImageMagick is free software delivered as a ready-to-run binary distribution or as source code that we may use, copy, modify, and distribute in both open and proprietary applications. There are a number of interfaces to popular languages which could be explored from their website. We use Magick++, the C++ API for ImageMagick which provides all the ImageMagick functionalities at the command line and from programs written in C++. Number of packets that can be sent on all the active paths until one of the nodes on each active path drains the energy to threshold energy BER 10-3 9915 4956 2478 BER 10-10 9006 4503 2250 MMRE-AOMDV Table 10. Lifetime improvement (%) for sink-based cross layer MMRE routing over MMRE-AOMDV (without cross-layer) BER 10-3 16.77 16.74 16.83 BER 10-8 7.45 7.46 7.49 BER 10-10 6.06 6.07 6.08 Magick++ API's image class allows low-level image pixel access through its methods. With these methods we could reconstruct images from the data packets we receive, allowing only those pixels to be laid out on the image canvas. In the case of missing data packets, we use the same pixels pointed by the previously received data packets. After reconstructing the received data packets, the images are compared with original images for objective and subjective quality analysis. The metric used for objective quality analysis is the peak signal to noise ratio (PSNR). Table 11 gives the variation of PSNR for packet sizes 128, 256, 512 bytes and six different BERs. We can see that PSNR is higher for smaller packet sizes and increases with decreasing BER. This is due to the increase in packet error rate with the increase in the packet size which in turn reduces the number of correctly received packets. Reconstructed images for subjective quality analysis are shown in Figs. 3–5 for various BERs and packet sizes. It can be seen that for the BER 10-3, perceptibility is very poor and hence this is the worst case scenario. For the remaining BERs image is perceivable and the quality improvement can be noticed with the decrease in BER. PSNR (dB) for reconstructed image Subjective quality analysis for packet size of 128 bytes. In this work, a cross-layer optimization model based on power control is developed for a sink based cross layer multipath routing protocol. The optimization mechanism is subject to certain QoS requirements for the image transmission specified in terms of total end to end BER. The proposed Sink based Cross Layer MMRE multipath routing algorithm uses the cross layer information available in the RREQ packets to calculate the optimal power vectors and the number of packets that can be sent by the source for each disjoint path. Unlike the conventional MMRE AOMDV algorithm, the Sink based Cross Layer MMRE multipath routing algorithm pushes the task of finding the maximal minimal energy paths to the sink node which is assumed to have large computational power and abundant energy. Owing to this, source node is freed from complex algorithmic computation which requires copious energy. Furthermore, each packet is sent on the maximum minimal residual energy paths. This helps in balancing the energy consumption on multiple paths and hence helps in lifespan improvement of the network. The cross layer optimization algorithm is implemented using popular Castalia simulator and the results indicate that considerable energy savings and network lifespan enhancement can be accomplished. Santhosha Rao He received B.E. 
degree in Electronics and Communication Engineering from Mangalore University and M.Tech degree in Digital Electronics and Advanced Communication from Manipal University, Manipal, India. Since 1998, he has been with Manipal Institute of Technology, MAHE, Manipal, India, where he is currently Senior Assistant Professor and research scholar in the Department of Information and Communication Technology. He has published several research papers in national and international conferences and journals. Energy Efficient Cross Layer Multipath Routing for Image Delivery in Wireless Sensor Networks Kumara Shama He received B.E. degree in Electronics and Communication Engineering and M.Tech degree in Digital Electronics and Advanced Communication from Mangalore University, India. He obtained his Ph.D. degree from Manipal University, Manipal in the area of Speech Processing. Since 1987, he has been with Manipal Institute of Technology, MAHE, Manipal, India, where he is currently working as Professor in the Department of Electronics and Communication Engineering. His research interests include Speech Processing, Digital Communication and Signal Processing. He has published many research papers in various journals and conferences. Pavan Kumar Rao He received B.E. degree in Telecommunication from Visvesvaraya Technological University, Belgaum, India in 2012. He is currently pursuing Master of Technology in the Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, India. 1 V. Srivastana, M. Motani, "Cross-layer design: a survey and the road ahead," IEEE Communication Magazine, vol. 43, no. 12, pp. 112-119, 2005.doi:[[[10.1109/MCOM.2005.1561928]]] 2 S. Cui, A. J. Goldsmith, A. Bahai, "Energy-constrained modulation optimization," IEEE Transactions on Wireless Communication, vol. 4, no. 5, pp. 2349-2360, 2005.doi:[[[10.1109/TWC.2005.853882]]] 3 C. Van Phan, Y. Park, H. H. Choi, J. Cho, J. G. Kim, "An energy-efficient transmission strategy for wireless sensor networks," IEEE Transactions on Consumer Electronicsvol. 56 ,, vol. 56, no. 2, pp. 597-605, 2010.doi:[[[10.1109/TCE.2010.5505976]]] 4 S. H. Hong, H. K. Kim, "A multi-hop reservation method for end-to-end latency performance improvement in asynchronous MAC-based wireless sensor networks," IEEE Transactions on Consumer Electronics, vol. 55, no. 3, pp. 1214-1220, 2009.doi:[[[10.1109/TCE.2009.5277978]]] 5 J. N. Al-Karaki, A. E. Kamal, "Routing techniques in wireless sensor networks: a survey," IEEE Communications Magazine, vol. 11, no. 6, pp. 6-28, 2004.doi:[[[10.1109/MWC.2004.1368893]]] 6 Y. Liu, L. Guo, H. Ma, T. Jiang, "Energy efficient on-demand multipath routing protocol for multi-hop ad hoc networks," in Proceedings of IEEE 10th International Symposium on Spread Spectrum Techniques and Applications, Bologna, Italy, 2008;pp. 572-576. custom:[[[-]]] 7 F. Liu, C. Y. Tsui, Y. J. Zhang, "Joint routing and sleep scheduling for lifetime maximization of wireless sensor networks," IEEE Transactions on Wireless Communicationsvol. 9 ,, vol. 9, no. 7, pp. 2258-2267, 2010.doi:[[[10.1109/TWC.2010.07.090629]]] 8 F. Semchedine, W . Oukachbi, N. Zaichi, L. Bouallouche-Medjkoune, "EECP: a new cross-layer protocol for routing in wireless sensor networks," Procedia Computer Science, vol. 73, pp. 336-341, 2015.doi:[[[10.1016/j.procs.2015.12.001]]] 9 N. Chilamkurti, S. Zeadally, A. V asilakos, V . Sharma, "Cross-layer support for energy efficient routing in wireless sensor networks," Journal of Sensorsarticle ID. 134165,, vol. 
2009, 2009.doi:[[[10.1155/2009/134165]]] 10 Z. Iqbal, S. Khan, A. Mehmood, J. Lloret, N. A. Alrajeh, "Adaptive cross-layer multipath routing protocol for mobile ad hoc networks," Journal of Sensorsarticle ID. 5486437,, vol. 2016, 2016.doi:[[[10.1155/2016/5486437]]] 11 N. Y ang, I. Demirkol, W . Heinzelman, "Cross-layer energy optimization under image quality constraints for wireless image transmissions," in Proceedings of IEEE 8th International Conference on Wireless Communications and Mobile Computing, Limassol, Cyprus, 2012;pp. 1000-1005. custom:[[[-]]] 12 B. Malarkodi, S. K. Riyaz Hussain, B. Venkataramani, "Performance evaluation of AOMDV-PAMAC protocols for ad hoc networks," International Journal of ElectricalComputer, Energetic, Electronic and Communication Engineering,, vol. 4, no. 2, pp. 302-305, 2010.custom:[[[-]]] 13 I. T. Almalkawi, M. Guerrero Zapata, J. N. Al-Karaki, "A cross-layer-based clustered multipath routing with QoS-aware scheduling for wireless multimedia sensor networks," International Journal of Distributed Sensor Networksarticle ID. 392515,, vol. 8, no. 5, 2012.doi:[[[10.1155/2012/392515]]] 14 F. Alnajjar, Y. Chen, "SNR/RP aware routing algorithm: cross-layer design for MANETs," International Journal of Wireless Mobile Networks, vol. 1, no. 2, pp. 127-136, 2009.custom:[[[-]]] 15 T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd ed. Upper Saddle RiverNJ: Prentice-Hall, 2002.custom:[[[-]]] 16 S. Rao, K. Shama, "Cross layer MMRE AOMDV model for multimedia transmission in wireless sensor networks," in IJCA Proceedings on International Conference on Innovations Computing Techniques (ICICT 2015), 2015;pp. 29-36. custom:[[[-]]] Received: August 29 2016 Revision received: February 2 2017 Accepted: February 10 2017 Published (Print): December 31 2018 Published (Electronic): December 31 2018 Corresponding Author: Santhosha Rao* ([email protected]) Santhosha Rao*, Dept. of Information Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India, [email protected] Kumara Shama*, Dept. of Information Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India, [email protected] Pavan Kumar Rao*, Dept. of Information Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal,Karnataka, India, [email protected]
Cyclist detection and tracking based on multi-layer laser scanner Mingfang Zhang ORCID: orcid.org/0000-0003-3727-31011, Rui Fu2, Yingshi Guo2, Li Wang1, Pangwei Wang1 & Hui Deng1 The technology of Artificial Intelligence (AI) brings tremendous possibilities for autonomous vehicle applications. One of the essential tasks of autonomous vehicle is environment perception using machine learning algorithms. Since the cyclists are the vulnerable road users, cyclist detection and tracking are important perception sub-tasks for autonomous vehicles to avoid vehicle-cyclist collision. In this paper, a robust method for cyclist detection and tracking is presented based on multi-layer laser scanner, i.e., IBEO LUX 4L, which obtains four-layer point cloud from local environment. First, the laser points are partitioned into individual clusters using Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method based on subarea. Then, 37-dimensional feature set is optimized by Relief algorithm and Principal Component Analysis (PCA) to produce two new feature sets. Support Vector Machine (SVM) and Decision Tree (DT) classifiers are further combined with three feature sets, respectively. Moreover, Multiple Hypothesis Tracking (MHT) algorithm and Kalman filter based on Current Statistical (CS) model are applied to track moving cyclists and estimate the motion state. The performance of the proposed cyclist detection and tracking method is validated in real road environment. Autonomous vehicle has attracted great interest in much ongoing research [1, 2], and environment perception is an essential step toward the trajectory plan for autonomous vehicle [3,4,5]. The perception task is usually divided into several sub-tasks, including object detection and tracking, object localization and behavior prediction. In recent years, cycling is gaining the popularity, and 43% growth is reported in Ireland between 2011 and 2016 [6]. Since there is no special protection equipment, cyclists are the vulnerable road users. Moving cyclist inevitably interacts with other traffic participants in local environment, and vehicle-cyclist collision amounts to a large proportion of traffic accidents every year [7]. Therefore, to enhance the cycling safety, much attention must be paid to cyclist detection and tracking for autonomous vehicle system. The existing cyclist detection methods using laser scanner are mostly conducted for counting the number of the cyclists without the interference of non-cyclist road users [8, 9]. Motivated by the analysis of the object detection studies, a novel method for cyclist detection and tracking using multi-layer laser scanner is proposed in this paper. IBEO LUX 4L is adopted as the main sensor and mounted on the front bumper of the autonomous vehicle, as shown in Fig. 1. First, subarea-based DBSCAN method is developed to segment the point cloud clusters. Second, three categories of the feature sets are employed with SVM and DT classifiers respectively, and six classifiers are obtained totally. Then, MHT algorithm based on Kalman filter is used to track multiple detected cyclists. It is validated that the proposed cyclist detection and tracking method has good performance in real road scene. The contributions of this paper are twofold: (1) It is the first attempt to separate the raw point cloud into several subareas based on the density distribution; (2) CS model is selected as the motion model of the cyclist, and MHT algorithm is used to track multiple cyclists. IBEO LUX 4L. 
a the laser beams at four layers. b the scanner is mounted on the autonomous vehicle The remainder of this paper is organized as follows: Sect. "Related works" describes the point cloud clustering process. Section "Feature extraction" presents the feature extraction. Section "Classification" introduces the cyclist classifiers. Section "Tracking" presents the cyclist tracking algorithm. Experimental results are given in Sect "Experiments". Section "Conclusion" concludes this study. In the field of cyclist detection and tracking, most state-of-the-art methods rely upon machine vision sensors [10,11,12], due to the advantages of high-resolution image with the color and texture information. Zangenehpour [10] presented the cyclist detection method based on histogram of oriented gradient feature for video datasets from crowded traffic scenes. Li [11] described a unified framework for cyclist detection, which included a novel detection proposal and a discriminative deep model based on Fast R-CNN with over 50000 images. Tian [12] explored geometric constraint with various camera setups and built cascaded detectors with multiple classifiers to detect cyclists from multiple viewpoints. Bieshaar [13] used 3D convolutional neural network to detect the motion of cyclists on image sequences with spatio-temporal features. Liu [14] utilized a region proposal method based on aggregated channel feature to solve cyclist detection problem for high-resolution images. However, cyclist detection and tracking based on vision sensors still remain challenging, since the camera is susceptible to the illumination and the large variability of cyclist exists in appearance, pose and occlusion. Compared with vision sensors, laser scanner can collect the spatial and motion information of the detected objects [15, 16]. As for the number of the laser layers, laser scanners are divided into three types, namely, 2D, 2.5D and 3D. 2D laser scanner generates the sparse single-layer point cloud which is inadequate for object classification in real driving environment. 3D laser scanners are usually installed on the top of autonomous vehicles and they can produce 3D dense point cloud to cover full-field of view for global environment construction. However, the high cost of 3D laser scanners limits the wide application. Compared with other kinds of laser scanners, 2.5D laser scanner, e.g., IBEO LUX 4L, is a more practical option for object classification. Extensive algorithms have been presented for object classification using 2.5D multilayer laser scanner [17,18,19,20,21,22,23,24]. Wang et al. [19] built SVM classifiers based on 120-dimensional point cloud feature set for object recognition. Huang et al. [20] captured the discontinuities of the point cloud with the distance threshold to segment the clusters, and trained three categories of classifiers to recognize the dynamic obstacles in driving environment. Magnier et al. [21] employed belief theory to detect and track moving objects using multi-layer laser scanner. Kim et al. [22] extracted the geometrical features from four-layer point cloud data and detected the pedestrians with RBFAK classifier. Carballo et al. [23] used several intensity-based features for pedestrian detection. Arras et al. [24] conducted pedestrian classification with 14 static features of human legs at indoor environment. Compared with much research on the detection of other road users, e.g., vehicle and pedestrian, the study on cyclist detection using laser scanner is limited. Subirats et al. 
[8] employed single-layer laser scanner to monitor the road surface and count the cyclists in real traffic flow, and the length of the detected cyclists is subject to the speed of the cyclists. Prabhakar et al. [9] presented the last line check method to detect and count the Powered Two Wheelers (PTWs) with the laser scanner at a fixed angle. In general, the existing study on cyclist detection using laser scanner is mainly conducted for counting the number of the cyclists while ignoring the variation of the cyclists' motion state. In this paper, the proposed method for cyclist detection and tracking using multi-layer laser scanner aims at capturing the moving cyclist with high accuracy. Point cloud clustering The range of the scanned obstacles affects the density of the returned point cloud directly. In real driving environment, the point cloud varies frame by frame, and the number of the surrounding clusters is uncertain. DBSCAN clustering method is often used to deal with the uneven point cloud, since there is no need for DBSCAN to set the number of the clusters in advance, and the input parameters are the minimum number of the laser points in the cluster and the neighbor radius [25]. It may happen that traditional DBSCAN method segments multiple obstacles at close range as one cluster, while the single obstacle at long range is clustered as several clusters. Therefore, subarea-based DBSCAN method is proposed in this paper. First, the density distribution curve is generated for the uneven point cloud, and the subareas are divided based on the characteristics of the density distribution. Then, the optimal neighbor radius is calculated for each subarea, and DBSCAN method is applied to the point cloud in each subarea. The point cloud data at one frame in campus environment is taken as an example. First, the point cloud in the region of interest is divided into several subareas along the motion direction of ego-vehicle installed with the laser scanner. Since the excessive subareas may cause the increase of the computation cost and the fewer number of the subareas may lead to some empty subareas, the number of the subareas need to be selected properly. According to the statistical analysis, the relation between the number of the subareas and the number of the points is given as follows: $$ m = \,0.187\, \times \,(n - 1)^{2/3} $$ The ratio between the number of the points in each subarea and the number of total points is defined as the density of each subarea. The density histogram is used to describe the distribution of the point cloud. For the example frame, there are 424 points in the interest region, and the number of the subareas equals 10.55 according to formula (1). Thus, the number of the subareas is set as a round number 10. The density distribution curve is obtained based on the density histogram, as shown in Fig. 2. We can see that the wave crest of the density distribution curve means the dense distribution, while the wave valley corresponds to the sparse distribution. The wave valley of this curve is the inflection point of the point cloud density variation, and it is also considered as the partition point for subareas. Note that the valleys between any two wave crests is not always the appropriate partition point, in case of the redundant operation. If the density difference between two adjacent subareas is not too large, there is no need to divide these two subareas, and these subareas can be regarded as one subarea with uniform density. 
Therefore, the density difference threshold of the subarea is set as a constant value T, and the subareas are divided only when the difference between two adjacent wave peaks is greater than T. According to the attributes of the point cloud from LUX 4L laser scanner, the density difference threshold T is set as 50. As shown in Fig. 2, the density differences among three adjacent peaks XP1, XP2 and XP3 of the density distribution curve are greater than the threshold T. Thus, the valleys X1 and X2 among these three peaks are regarded as two partition points, and the partition locations are X1 = 12 m and X2 = 20 m. Finally, three subareas with various densities are obtained, as shown in Fig. 3. The density distribution Three subareas with different densities The original DBSCAN algorithm [25] and the proposed subarea-based DBSCAN algorithm are used to cluster the point cloud at this example frame, respectively. Figure 4 shows the clustering results from these two DBSCAN algorithms, and each color of the point cloud denotes an individual cluster. We can see that the subarea-based DBSCAN algorithm separates two neighboring obstacles successfully while the original DBSCAN algorithm clusters two neighboring obstacles into one obstacle using the global neighbor radius. Thus, the subarea-based DBSCAN algorithm achieves better clustering performance than the original DBSCAN algorithm. The clustering results from two DBSCAN algorithms. a Real scene, b Traditional DBSCAN, c Subarea-based DBSCAN Feature extraction 37-dimensional feature set is proposed for cyclist detection and tracking based on the point cloud characteristics of the cyclist. This feature set includes 11 number-of-points-based features, 16 geometric features and 10 statistical features, as listed in Tables 1, 2 and 3. Feature 23 denotes the length of the polyline which connects the horizontal projection points in ascending order of horizontal coordinate value. Features 28–31 denote the convexity of the scan points at each layer, and the convexity is equal to the distance between the center of the fitting circle and the origin point minus the distance between the geometric center of all the scan points and the origin point. Feature 37 denotes the residual ratio of the bounding areas for two edges of the laser points vs. the middle points, and this feature indicates that the middle of the point cloud cluster is denser than the edges. Table 1 Number-of-points-based features Table 2 Geometric features Table 3 Statistical features The effectiveness of the feature set is significant to improve the performance of classifier. However, the combination of multiple independent features cannot always make better classification ability than single feature. Therefore, the optimization of the feature set is necessary to reduce the redundancy among multiple features. Relief algorithm [26] and PCA method [27] are employed in the optimization process. Relief algorithm Relief algorithm is a feature selection method based on the multi-variable filtering, and it uses the sample learning to determine the weight of the features. Each feature is evaluated by the classification performance difference between the same category of the samples and the different categories of the samples. For 37-dimensional feature set F, the classification weight of each feature is calculated, and the weight proportion histogram is obtained, as shown in Figs. 5 and 6. 
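Before continuing with the Relief-based selection, the subarea partition and per-subarea clustering described above can be illustrated with a rough sketch. It assumes scikit-learn's DBSCAN implementation, and both the valley test and the per-subarea neighbour radius are simplified stand-ins for the paper's procedure rather than the exact rules used in the experiments.

import numpy as np
from sklearn.cluster import DBSCAN

def subarea_dbscan(points, density_diff_thresh=50, min_pts=4):
    """
    Rough sketch of subarea-based clustering.
    `points` is an (n, 2) array of scan points, x along the driving direction.
    """
    n = len(points)
    m = max(1, round(0.187 * (n - 1) ** (2.0 / 3.0)))     # Eq. (1): number of subareas
    counts, edges = np.histogram(points[:, 0], bins=m)

    # Split only at valleys whose neighbouring bins differ by more than the threshold
    # (simplified version of the adjacent-peak test described above).
    cuts = [edges[0]]
    for i in range(1, m - 1):
        if counts[i] < counts[i - 1] and counts[i] < counts[i + 1] \
           and abs(counts[i - 1] - counts[i + 1]) > density_diff_thresh:
            cuts.append(edges[i + 1])
    cuts.append(edges[-1])

    labels = np.full(n, -1)
    next_label = 0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        mask = (points[:, 0] >= lo) & (points[:, 0] <= hi)
        if mask.sum() < min_pts:
            continue
        # Neighbour radius chosen per subarea, here from the local point spacing (a heuristic).
        eps = 2.0 * np.median(np.diff(np.sort(points[mask, 0]))) + 0.1
        sub = DBSCAN(eps=eps, min_samples=min_pts).fit(points[mask])
        sub_labels = sub.labels_.copy()
        sub_labels[sub_labels >= 0] += next_label          # keep cluster ids unique
        labels[mask] = sub_labels
        next_label = labels.max() + 1
    return labels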
According to the classification weight results from Relief, the weight threshold for single feature is set as 3000. If the feature weight exceeds 3000, this feature is retained, otherwise this feature is discarded. On the basis of the further redundancy calculation, 20-dimensional feature set FR are obtained: Feature 2, Feature 6, Feature 7, Feature 8, Feature 9, Feature 11, Feature 12, Feature 13, Feature 14, Feature 19, Feature 20, Feature 22, Feature 23, Feature 26, Feature 27, Feature 29, Feature 30, Feature 31, Feature 33 and Feature 37. The weights of 37-dimensional features The weight proportion histogram PCA algorithm PCA is a statistical method which can convert the series of features into new alternative feature components. This method can simplify the original features to ensure the minimum loss of feature information. PCA is conducted for the normalized features to generate the eigenvalue, as well as the variance and cumulative variance of each principal component, as shown in Table 4. The cumulative variance of the first and second principal components reaches 91.0%, thus the first and second principal components are regarded as new feature indexes: FP = [FP1, FP2]. Table 4 PCA results SVM classifier With the strong generalization capability, SVM is utilized as an independent classifier. First, the normalization is conducted as follows: $$ scale(f) = \left\{ {\begin{array}{*{20}l} { - 1,} \hfill & {f \le f_{{\min }} } \hfill \\ { - 1 + \frac{{2(f - f_{{\min }} )}}{{f_{{\max }} - f_{{\min }} }},} \hfill & {f_{{\min }} < f < f_{{\max }} } \hfill \\ {1,} \hfill & {f > f_{{\max }} } \hfill \\ \end{array} } \right. $$ where f is the normalized eigenvalue, fmax and fmin are the maximum and minimum eigenvalues, respectively. Since Radial Basis Function (RBF) can achieve nonlinear mapping with a few parameters, RBF is selected as the kernel function: $$ K(x,y) = \exp ( - \gamma \left\| {x - \left. y \right\|} \right.^{2} ) $$ where the parameter γ and penalty factor C are traversed by the grid optimization and cross validation using LIBSVM toolbox [28]. In our test, the optimal parameters are: c = 2068, γ = 0.006821. DT classifier Since the features are discrete variables, ID3 algorithm is selected as DT classifier. The main scheme of DT classifier is as follows. The feature with the largest information gain is selected as the classification benchmark. The classification process is repeated until a decision tree with the complete classification ability is constructed. The entropy of random variable x is: $$ H(X)\, = - \sum\limits_{i - 1}^{n} {p_{i} } \times \,\log \,p_{i} $$ The entropy of each attribute of the dataset D is: $$ H(D_{i} )\, = - \sum\limits_{i - 1}^{n} {p_{i} } \times \,\log_{2} \,p_{i} $$ where pi represents the ratio of the number of samples for the ith-dimensional feature and the number of total features, and n represents the number of total features. For the given feature Fi, the conditional entropy of the dataset D is: $$ H\left( {\left. D \right|F_{i} } \right) = - \sum\limits_{i = 1}^{n} {\frac{{\left| {\left. {D_{i} } \right|} \right.}}{{\left| {\left. D \right|} \right.}} \times H(D_{i} )} $$ where |Di| represents the number of features for the ith subset, and |D| represents the number of all samples in the dataset D. On the basis of the entropy \( H(D_{i} ) \) and the conditional entropy \( H\left( {D\left| {F_{i} } \right.} \right) \) of the feature Fi, the information gain of the feature Fi is calculated by: $$ g(D,F_{i} ) = H(D) - H(\left. 
D \right|F_{i} ) $$ Object tracking can predict the future motion state and avoid the missing detection caused by temporary occlusion. Moving object tracking consists of data association and motion estimation. The data association procedure associates the same objects in successive frames, and the motion estimation procedure uses the filter method to estimate the position and speed of the associated objects. In this section, MHT algorithm is combined with Kalman filter algorithm to track the cyclist based on the CS model. Data association MHT algorithm selects the optimal association hypothesis for the same object at two consecutive frames. And the calculation of hypothesis probability is critical. From the tracking start time to the kth time step, all the measurements are recorded as Zk = {Z(1), Z(2),…, Z(k)} and all the hypothesis sets obtained by MHT algorithm at kth time step are recorded as \( \varOmega^{k} = \left\{ {\left. {\varOmega_{i}^{k} ,i = 1,2, \ldots ,I_{k} } \right\}} \right. \). The hypothesis probability \( P_{i}^{k} \) is calculated at the kth time step by the hypothesis \( \varOmega_{i}^{k} \) as follows: $$ p_{i}^{k} \, = \,p\left( {\{ \varOmega_{i}^{k} \text{|}Z^{k} \} } \right) $$ Assumed that \( \varOmega_{i}^{k} \) is obtained by the correlation hypothesis φk between the hypothesis \( \varOmega_{g}^{k - 1} \) at previous frame and the measurements Z(k) at the current frame. Bayes theorem is utilized to compute the hypothesis probability: $$ p\left( {\varOmega_{g}^{k - 1} ,\varphi \text{|}Z\left( k \right)} \right)\, = \frac{1}{c}p\left( {z\left( k \right)\text{|}\varOmega_{g}^{k - 1} ,\varphi_{k} } \right)\, \times p\left( {\varphi_{k} \text{|}\varOmega_{g}^{k - 1} } \right) \times \,p\left( {\varOmega_{g}^{k - 1} } \right) $$ where c is the normalized constant. In terms of the association hypothesis φk, the number of the measurements associated with the new object at the current frame is marked as NNT (h), the number of the measurements associated with the false object is set as NFT (h), the number of the measurements associated with the previous object is labeled as NDT (h), and the number of all the object is MK. Assumed that the number of the existing detected object obeys binomial distribution, the number of new objects is subject to Poisson distribution, and the number of false objects also obeys Poisson distribution, we can obtain: $$ P(N_{DT} ,N_{FT} ,N_{NT} \left| {\varOmega_{g}^{k - 1} } \right.) = \left( \begin{aligned} N_{TGT} \hfill \\ N_{DT} \hfill \\ \end{aligned} \right)P_{D}^{{N_{DT} }} (1 - P_{D} )^{{(N_{TGT} - N_{DT} )}} \times F_{{N_{FT} }} (\beta_{FT} V)F_{{N_{NT} }} (\beta_{NT} V) $$ where PD is the detection probability, βFT is the probability density of the false alarm, βNT is the probability density of the new objects, Fn (λ) is the Poisson distribution with the rate parameter \( \lambda \). Then we get: $$ P(\varphi_{k} \left| {\varOmega_{g}^{k - 1} } \right.) 
= \frac{{N_{FT} !N_{NT} !}}{{M_{k} !}}P_{D}^{{N_{DT} }} (1 - P_{D} )^{{(N_{TGT} - N_{DT} )}} \times F_{{N_{FT} }} (\beta_{FT} V)F_{{N_{NT} }} (\beta_{NT} V) $$ The hypothesis probability is provided by: $$ P_{i}^{k} = \frac{1}{c}P_{D}^{{N_{DT} }} (1 - P_{D} )^{{(N_{TGT} - N_{DT} )}} \times \beta_{FT}^{{N_{FT} }} \beta_{NT}^{{N_{NT} }} \times \left[\prod \begin{aligned} N_{DT} \hfill \\ m = 1 \hfill \\ \end{aligned} N(Z_{m} - H\tilde{X},S)\right]P_{g}^{k - 1} $$ After the probability of each possible association hypothesis is obtained, all the association probabilities are represented by one correlation matrix. The hypothesis H with the maximum association probability is selected as the optimal hypothesis. Motion model The motion state of the object is usually described by several common models, including constant velocity model, constant acceleration model and CS model. The motion state of the cyclist varies frame by frame, and sudden acceleration or deceleration may happen at any time. To avoid the large cumulative error, CS model is used to estimate the motion state of the cyclist. CS model is time-correlated, and the mean value is the estimation of acceleration at the current time step, and the probability distribution of the acceleration is represented by the revised Rayleigh distribution [29]. In CS model, the acceleration model of the cyclist is as follows: $$ x(t) = \bar{a}(t) + a(t) $$ $$ a(t) = - \alpha a(t) + w(t) $$ where \( \bar{a} \) is the mean value of acceleration, α is the reciprocal of the time constant, \( w(t) \) is the white noise with the mean value 0, the variance \( \sigma_{w}^{2} = 2\alpha \sigma_{a}^{2} \), \( \sigma_{a}^{2} \) is the acceleration variance. Thus, CS model for the cyclist is established by: $$ \left[ \begin{gathered} \dot{x}(t) \hfill \\ \ddot{x}(t) \hfill \\ \dddot x(t) \hfill \\ \end{gathered} \right] = \left[ {\begin{array}{*{20}c} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & { - \alpha } \\ \end{array} } \right]\left[ \begin{gathered} \dot{x}(t) \hfill \\ \ddot{x}(t) \hfill \\ \dddot x(t) \hfill \\ \end{gathered} \right] + \left[ \begin{gathered} 0 \hfill \\ 0 \hfill \\ \alpha \hfill \\ \end{gathered} \right]\bar{a}(t) + \left[ \begin{gathered} 0 \hfill \\ 0 \hfill \\ 1 \hfill \\ \end{gathered} \right]w(t) $$ Filtering algorithm The state variable of the cyclist is: $$ x(t) = \left[ {x(t),\dot{x}(t),\ddot{x}(t),y(t),\dot{y}(t),\ddot{y}(t)} \right]^{T} $$ and the state equation is as follows: $$ \left[ \begin{gathered} \dot{x}(t) \hfill \\ \ddot{x}(t) \hfill \\ \dddot x(t) \hfill \\ \dot{y}(t) \hfill \\ \ddot{y}(t) \hfill \\ \dddot y(t) \hfill \\ \end{gathered} \right] = \left[ {\begin{array}{*{20}c} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & { - \alpha } & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & {- \alpha } \\ \end{array} 0} \right]\left[ \begin{gathered} \dot{x}(t) \hfill \\ \ddot{x}(t) \hfill \\ \dddot x(t) \hfill \\ \dot{y}(t) \hfill \\ \ddot{y}(t) \hfill \\ \dddot y(t) \hfill \\ \end{gathered} \right] + \left[ \begin{gathered} 0 \hfill \\ 0 \hfill \\ \alpha \bar{a}_{x} \hfill \\ 0 \hfill \\ 0 \hfill \\ \alpha \bar{a}_{y} \hfill \\ \end{gathered} \right] + \left[ \begin{gathered} 0 \hfill \\ 0 \hfill \\ w_{x} (t) \hfill \\ 0 \hfill \\ 0 \hfill \\ w_{y} (t) \hfill \\ \end{gathered} \right] $$ The above formula can also be provided by: $$ \dot{x}(t) = Ax(t) + B + C(t) $$ where \( x(t) \), \( \dot{x}(t) \), \( \ddot{x}(t) \), \( y(t) \), \( \dot{y}(t) \), \( \ddot{y}(t) \) represent the position, 
speed and acceleration of the cyclist along X and Y directions, respectively.\( \bar{a}_{x} \), \( \bar{a}_{y} \) represent the average acceleration along X and Y directions. wx(t), wy(t) denote the zero-mean white noise in X and Y directions, the variance \( \sigma_{wx}^{2} = 2\alpha \sigma_{ax}^{2} \) and \( \sigma_{wy}^{2} = 2\alpha \sigma_{ay}^{2} \). \( \sigma_{ax}^{2} \),\( \sigma_{ay}^{2} \) represents the acceleration variance along X and Y direction. The observation equation is: $$ Z(k) = H(k)X(k) + V(k) $$ where \( H = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ \end{array} } \right] \), \( Z(k) = \left[ \begin{aligned} x(k) \hfill \\ y (k )\hfill \\ \end{aligned} \right] \), \( v(k) \) represents Gaussian noise with zero mean and the variance R(k). The cyclist motion system is modeled based on Kalman filter and CS model as follows: $$ \hat{x}(k\left| {k - 1} \right.) = \phi (k\left| {k - 1} \right.)\hat{x}(k - 1\left| {k - 1} \right.) $$ $$ P(k\left| {k - 1} \right.) = \phi (k\left| {k - 1} \right.)P(k - 1\left| {k - 1} \right.)\phi^{T} (k\left| {k - 1} \right.) + Q(K - 1) $$ $$ K(k) = P(k\left| {k - 1} \right.)H^{T} (k)\left[ {H(k)P(k\left| {k - 1} \right.)H^{T} (k) + R(k - 1)} \right]^{ - 1} $$ $$ P(k\left| k \right.) = \left[ {1 - K(k)H(k)} \right]p(k\left| {k - 1} \right.) $$ $$ \hat{x}(k\left| k \right.) = \hat{x}(k\left| {k - 1} \right.) + K(k)\left[ {Z(k) - H(k)\hat{x}(k\left| {k - 1} \right.)} \right] $$ The independent clusters of the point cloud samples are segmented by subarea-based DBSCAN algorithm at each frame. In the experiments, the camera synchronously collected the scene image to manually mark the positive and negative point cloud samples of the cyclists. Figure 7 shows the real scene at one frame and the classification result in the point cloud scene. In this figure, each color of the point cloud denotes each scan layer of the laser scanner. The red and black rectangles denote the cyclist and non-cyclist, respectively. The positive samples include the cyclists with various poses, and the negative samples include pedestrians, vehicles, lamp posts, trees and other non-cyclist objects. Some positive and negative samples are shown in Fig. 8. We totally extracted 3500 samples, including 1500 positive samples and 2000 negative samples. fivefold cross-validation is utilized to make the classifier generalized. The classification result for real scene at one frame The samples. a Positive samples, b Negative samples Detection algorithm validation To demonstrate the performance of multiple classifiers for cyclist detection, six classifiers are constructed: (1) SVM + F, (2) SVM + FP, (3) SVM + FR, (4) DT + F, (5) DT + FP and (6) DT + FR. ROC curve is used to evaluate the performance of these classifiers, as shown in Fig. 9. AUC (area under the ROC) is the critical parameter to measure the cyclist detection performance. The classifier with larger AUC is superior to other classifiers. Table 5 lists the results of each classifier. It indicates that SVM classifier outperforms DT classifier with the same feature set for cyclist detection. For SVM classifier, the features extracted by PCA method show the best recognition result than other feature sets, while the features from Relief algorithm performs the best for DT classifier. In terms of the same classifier, the classification performance of the original 37-dimensional feature set is the worst. 
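For orientation, the sketch below shows how this comparison could be reproduced with scikit-learn. It is illustrative only: the authors used the LIBSVM toolbox with grid-searched C and γ, whereas here the parameter grid and the data are placeholders and an entropy-criterion CART tree stands in for ID3. With the real feature sets F, FR and FP in place of the placeholders, the loop reproduces the kind of AUC comparison summarised in Table 5.

```python
# Illustrative comparison of the six classifier variants by ROC AUC.
# Placeholder data stands in for the extracted feature sets F, FR and FP.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)                    # placeholder labels: 1 = cyclist, 0 = non-cyclist
X_full = rng.normal(size=(300, 37))            # placeholder for the 37-dimensional set F
X_relief = X_full[:, :20]                      # placeholder for the Relief-selected set FR
X_pca = X_full[:, :2]                          # placeholder for the two principal components FP

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # fivefold cross-validation

def rbf_svm():
    # features scaled to [-1, 1] as in the paper; C and gamma grid-searched (illustrative grid)
    pipe = make_pipeline(MinMaxScaler(feature_range=(-1, 1)),
                         SVC(kernel="rbf", probability=True))
    grid = {"svc__C": np.logspace(-2, 12, 8, base=2),
            "svc__gamma": np.logspace(-10, 2, 7, base=2)}
    return GridSearchCV(pipe, grid, cv=3, scoring="roc_auc")

models = {"SVM+F": (rbf_svm(), X_full), "SVM+FR": (rbf_svm(), X_relief), "SVM+FP": (rbf_svm(), X_pca),
          "DT+F": (DecisionTreeClassifier(criterion="entropy"), X_full),
          "DT+FR": (DecisionTreeClassifier(criterion="entropy"), X_relief),
          "DT+FP": (DecisionTreeClassifier(criterion="entropy"), X_pca)}

for name, (clf, X) in models.items():
    proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y, proba):.3f}")
```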
These results indicate that redundancy exists within the original feature set and that the proposed feature selection methods are essential. In general, the AUC values of all six classifiers exceed 0.93, which demonstrates that each classifier shows good performance for cyclist detection.
ROC curves for six classifiers
Table 5 The results of each classifier
Tracking algorithm validation
To evaluate the proposed cyclist tracking algorithm, the point cloud of moving cyclists on campus is collected using the LUX 4L mounted on a stationary vehicle. As shown in Fig. 10, three cyclists move away frame by frame. We set the detection probability Pd = 0.9, and the probability that the correct measurement falls into the tracking gate Pg = 0.99.
The tracking results. a Frame 80, b Frame 160, c Frame 240
The tracking results for Frame 80, Frame 160 and Frame 240 are shown in Fig. 10. The black rectangle denotes the detected cyclist, and the speed is explicitly labelled in kilometers per hour. The tails dragged by the three cyclists are their historical tracks. This indicates that the combination of the MHT algorithm with the Kalman filter based on the CS model can accurately track multiple cyclists. To further verify the motion estimation performance of the proposed tracking method, the estimated motion parameters of Cyclist 2 are compared with on-board GPS data. The lateral and longitudinal positions in the tracking process are shown in Fig. 11, and the velocities along the lateral and longitudinal directions are shown in Fig. 12. The point cloud of the cyclist varies with range from frame to frame, and this variation causes instability of the cyclist center. The measured position and speed of the cyclist therefore fluctuate markedly and do not match the real motion of the cyclist. In contrast, stable positions and speeds of the moving cyclists are obtained from the proposed Kalman filter method based on the CS model. Thus, the proposed tracking method has good applicability.
Lateral and longitudinal position curves for the tracking of Cyclist 2
Lateral and longitudinal velocity curves for the tracking of Cyclist 2
In this paper, a cyclist detection and tracking method based on a multi-layer laser scanner is presented. Firstly, a subarea-based DBSCAN algorithm is developed to segment the uneven point cloud based on its density distribution. Secondly, considering the point cloud characteristics of cyclists, we construct a 37-dimensional feature set including number-of-points-based, geometric and statistical features. The Relief algorithm and PCA are further used to optimize the feature set, respectively. Then, the three feature sets are combined with the SVM and DT classifiers to generate six categories of cyclist classifiers, and the detected cyclists are tracked using the MHT algorithm and a Kalman filter based on the CS motion model. Experimental results show that the subarea-based clustering algorithm can effectively segment the laser points into independent clusters. For the SVM classifier, the feature set extracted by the PCA method yields better classification results than the other feature sets, while the feature set from the Relief algorithm performs best for the DT classifier. The proposed cyclist tracking method can obtain the position and speed of moving cyclists robustly. Because of the sparsity of the laser points, future work will focus on the use of multiple sensors to achieve accurate detection and tracking of long-range cyclists. We declare that the materials described in the manuscript will be freely available to any scientist wishing to use them for non-commercial purposes.
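As a concrete illustration of the CS-model Kalman recursion used in the tracking experiments above, a minimal single-step sketch follows. It is not the authors' implementation: the transition matrix is approximated as Φ ≈ I + AΔt, the mean-acceleration input term is dropped, and the sampling interval, manoeuvre constant α, noise variances and the toy measurement sequence are all assumptions.

```python
# One predict/update step of the Kalman filter for the CS-model state
# [x, vx, ax, y, vy, ay]. Simplifications: Phi is approximated as I + A*dt and the
# mean-acceleration input term is dropped; dt, alpha and the noise levels are assumptions.
import numpy as np

def cs_kalman_step(x_est, P, z, dt=0.08, alpha=0.05, sigma_a2=1.0, meas_var=0.05):
    A = np.zeros((6, 6))
    A[0, 1] = A[1, 2] = A[3, 4] = A[4, 5] = 1.0
    A[2, 2] = A[5, 5] = -alpha
    Phi = np.eye(6) + A * dt                                 # state transition (first-order approx.)
    q = 2.0 * alpha * sigma_a2 * dt                          # white-noise variance on the accelerations
    Q = np.diag([0.0, 0.0, q, 0.0, 0.0, q])
    H = np.array([[1, 0, 0, 0, 0, 0],
                  [0, 0, 0, 1, 0, 0]], dtype=float)          # only the cluster centre (x, y) is measured
    R = meas_var * np.eye(2)

    x_pred = Phi @ x_est                                     # predict
    P_pred = Phi @ P @ Phi.T + Q
    S = H @ P_pred @ H.T + R                                 # update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new

# toy usage: a target moving 0.3 m and 0.1 m per frame, measured with noise
rng = np.random.default_rng(0)
x_est, P = np.zeros(6), np.eye(6)
for k in range(1, 100):
    z = np.array([0.3 * k, 0.1 * k]) + rng.normal(scale=0.05, size=2)
    x_est, P = cs_kalman_step(x_est, P, z)
print("position:", x_est[0], x_est[3], "velocity:", x_est[1], x_est[4])
```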
Sheehan B, Murphy F, Mullins M, Ryan C (2019) Connected and autonomous vehicles: a cyber-risk classification framework. Transp Res Pt A Policy Pract 124:523–536 Chen M, Tian Y, Fortino G, Zhang J, Humar I (2018) Cognitive internet of vehicles. Comput Commun 120:58–70 You C, Lu J, Filev D, Tsiotras P (2019) Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning. Robot Auton Syst 114:1–18 Berntorp K, Tru H, Stefano C (2019) Motion planning of autonomous road vehicles by particle filtering. IEEE Trans Intell Veh 4(2):197–210 Noh S (2018) Decision-making framework for autonomous driving at road intersections: safeguarding against collision, overly conservative behavior, and violation vehicles. IEEE Trans Ind Electron 66(4):3275–3286 Article MathSciNet Google Scholar Kearns M, Ledsham T, Savan B, Scott J (2019) Increasing cycling for transportation through mentorship programs. Transp Res Pt A Policy Pract 128:34–45 World Health Organization (2011) WHO global status report on road safety 2011: supporting a decade of action. Statistical Center of Iran. General Statistical Yearbook 1389-1390 Subirats P, Dupuis Y (2015) Overhead LIDAR-based motorcycle counting. Trans Lett 7(2):114–117 Prabhakar Y, Subirats P, Lecomte C, Vivet D, Violette E, Bensrhair A (2013) Detection and counting of powered two wheelers by laser scanner in real time on urban expressway. In: IEEE Conference on Intelligent Transportation Systems. pp 1149-1154 Zangenehpour S, Luis F, Nicolas S (2015) Automated classification based on video data at intersections with heavy pedestrian and bicycle traffic: methodology and application. Transp Res Pt C Emerg Technol 56:161–176 Li X, Li L, Flohr F, Wang J, Li K (2016) A unified framework for concurrent pedestrian and cyclist detection. IEEE Trans Intell Transp 18(2):269–281 Tian W, Lauer M (2015) Fast cyclist detection by cascaded detector and geometric constraint. In: 2015 IEEE International Conference on Intelligent Transportation Systems. pp 1286-1291 Bieshaar M, Zernetsch S, Hubert A, Sick B, Doll K (2018) Cooperative starting movement detection of cyclists using convolutional neural networks and a boosted stacking ensemble. IEEE Trans Intell Veh 3(4):534–544 Liu C, Guo Y, Li S, Chang F (2019) ACF based region proposal extraction for YOLOv3 network towards high-performance cyclist detection in high resolution images. Sensors 19(12):2671–2689 Song W, Zou S, Tian Y (2018) Classifying 3D objects in LiDAR point clouds with a back-propagation neural network. Human-centric Comput Inf Sci 8:29 Chu PM, Cho S, Park J (2019) Enhanced ground segmentation method for Lidar point clouds in human-centric autonomous robot systems. Human-centric Comput Inf Sci 9:17 Muhammad A, Pu Y, Rahman Z, Abro W, Naeem H, Ullah F, Badr A (2018) A hybrid proposed framework for object detection and classification. J Inf Process Syst 14(5):1176–1194 Truong M, Kim S (2019) A tracking-by-detection system for pedestrian tracking using deep learning technique and color information. J Inf Process Syst 15(4):1017–1028 Wang D, Posner I, Newman P (2012) What could move? Finding cars, pedestrians and bicyclists in 3D laser data. In: IEEE International Conference on Robotics and Automation. pp 4038–4044 Huang R, Liang H, Chen J, Zhao P, Du M (2016) Lidar based dynamic obstacle detection, tracking and recognition method for driverless cars. 
Robot 38(4):437–443 Magnier V, Gruyer D, Godelle J (2017) Automotive LIDAR objects detection and classification algorithm using the belief theory. In: IEEE Intelligent Vehicles Symposium (IV) pp 746-751 Kim B, Choi B, Park S, Kim H, Kim E (2015) Pedestrian/Vehicle detection using a 2.5-D multi-layer laser scanner. IEEE Sens J 16(2):400–408 Carballo A, Ohya A, Yuta S (2015) People detection using range and intensity data from multi-layered Laser Range Finders. In: IEEE/RSJ International Conference on Intelligent Robots & Systems. pp 5849–5854 Arras K, Mozos Ó, Burgard W (2007) Using boosted features for the detection of people in 2D range data. In: Proceedings of IEEE International Conference on Robotics and Automation. pp 3402–3407 Mahesh K, Rama M (2016) A fast DBSCAN clustering algorithm by accelerating neighbor searching using Groups method. Pattern Recogn 58:39–48 Saeys Y, Abeel T, Peer Y (2008) Robust feature selection using ensemble feature selection techniques. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. pp 313-325 Nishino K, Nayar SK, Jebara T (2005) Clustered blockwise PCA for representing visual data. IEEE Trans Pattern Anal Mach Intell 27(10):1675 Chang C, Lin C (2011) LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol 2(3):27 Sohn J, Kim NS, Sung W (1999) A statistical model-based voice activity detection. IEEE Signal Process Lett 6(1):1–3 We would like to thank the reviewers for their valuable comments. This work is supported in part by the National Key R&D Program of China under Grant No. 2018YFB1600500, National Natural Science Foundation of China under Grant Nos. 51905007, 51775053, 61603004, the Great Wall Scholar Program under Grant CIT&TCD20190304, Ability Construction of Science, Beijing Key Lab Construction Fund under Grant PXM2017-014212-000033 and NCUT start-up fund. Beijing Key Lab of Urban Intelligent Traffic Control Technology, North China University of Technology, Beijing, 100144, China Mingfang Zhang, Li Wang, Pangwei Wang & Hui Deng Key Lab for Automotive Transportation Safety Enhancement Technology of the Ministry of Communication, Chang'an University, Xi'an, 710064, China Rui Fu & Yingshi Guo Mingfang Zhang Rui Fu Yingshi Guo Li Wang Pangwei Wang Hui Deng The authors have contributed significantly to the research work presented of this manuscript. All authors read and approved the final manuscript. Correspondence to Mingfang Zhang. Zhang, M., Fu, R., Guo, Y. et al. Cyclist detection and tracking based on multi-layer laser scanner. Hum. Cent. Comput. Inf. Sci. 10, 20 (2020). https://doi.org/10.1186/s13673-020-00225-x DOI: https://doi.org/10.1186/s13673-020-00225-x Cyclist detection Multi-layer laser scanner Current statistical model Multiple hypothesis tracking
CommonCrawl
Archivum Mathematicum Elyasi, M. ; Estaji, A. A. ; Robat Sarpoushi, M. Locally functionally countable subalgebra of $\mathcal{R}(L)$. (English). Archivum Mathematicum, vol. 56 (2020), issue 3, pp. 127-140 MSC: 06D22, 54C05, 54C30 | MR 4156440 | Zbl 07250674 | DOI: 10.5817/AM2020-3-127 Full entry | PDF (0.6 MB) Feedback functionally countable subalgebra; locally functionally countable subalgebra; sublocale; frame Let $L_c(X)= \lbrace f \in C(X) \colon \overline{C_f}= X\rbrace $, where $C_f$ is the union of all open subsets $U \subseteq X$ such that $\vert f(U) \vert \le \aleph _0$. In this paper, we present a pointfree topology version of $L_c(X)$, named $\mathcal{R}_{\ell c}(L)$. We observe that $\mathcal{R}_{\ell c}(L)$ enjoys most of the important properties shared by $\mathcal{R}(L)$ and $\mathcal{R}_c(L)$, where $\mathcal{R}_c(L)$ is the pointfree version of all continuous functions of $C(X)$ with countable image. The interrelation between $\mathcal{R}(L)$, $\mathcal{R}_{\ell c}(L)$, and $\mathcal{R}_c(L)$ is examined. We show that $L_c(X) \cong \mathcal{R}_{\ell c}\big (\mathfrak{O}(X)\big )$ for any space $X$. Frames $L$ for which $\mathcal{R}_{\ell c}(L)=\mathcal{R}(L)$ are characterized. [1] Azarpanah, F., Karamzadeh, O.A.S., Keshtkar, Z., Olfati, A.R.: On maximal ideals of $C_c(X)$ and the uniformity of its localizations. Rocky Mountain J. Math. 48 (2) (2018), 354–384, http://doi.org/10.1216/RMJ-2018-48-2-345 DOI 10.1216/RMJ-2018-48-2-345 | MR 3809150 [2] Ball, R.N., Walters-Wayland, J.: $\text{C}$- and $\text{C}^*$- quotients in pointfree topology. Dissertationes Math. (Rozprawy Mat.) 412 (2002), 354–384. MR 1952051 [3] Banaschewski, B.: The real numbers in pointfree topology. Textos de Mathemática (Séries B), Universidade de Coimbra, Departamento de Mathemática, Coimbra 12 (1997), 1–96. MR 1621835 | Zbl 0891.54009 [4] Bhattacharjee, P., Knox, M.L., Mcgovern, W.W.: The classical ring of quotients of $C_c(X)$. Appl. Gen. Topol. 15 (2) (2014), 147–154, https://doi.org/10.4995/agt.2014.3181 DOI 10.4995/agt.2014.3181 | MR 3267269 [5] Dowker, C.H.: On Urysohn's lemma. General Topology and its Relations to Modern Analysis, Proceedings of the second Prague topological symposium, 1966, Academia Publishing House of the Czechoslovak Academy of Sciences, Praha, 1967, pp. 111–114. MR 0238744 [6] Estaji, A.A., Karimi Feizabadi, A., Robat Sarpoushi, M.: $z_c$-ideals and prime ideals in the ring $\mathcal{R}_c L$. Filomat 32 (19) (2018), 6741–6752, https://doi.org/10.2298/FIL1819741E DOI 10.2298/FIL1819741E | MR 3899307 [7] Estaji, A.A., Karimi Feizabadi, A., Zarghani, M.: Zero elements and $z$-ideals in modified pointfree topology. Bull. Iranian Math. Soc. 43 (7) (2017), 2205–2226. MR 3885660 [8] Estaji, A.A., Robat Sarpoushi, M.: On $CP$-frames. submitted. [9] Estaji, A.A., Robat Sarpoushi, M., Elyasi, M.: Further thoughts on the ring $\mathcal{R}_c(L)$ in frames. Algebra Universalis 80 (4) (2019), 14, https: //doi.org/10.1007/s00012-019-0619-z 4. DOI 10.1007/s00012-019-0619-z | MR 4027118 [10] Ghadermazi, M., Karamzadeh, O.A.S., Namdari, M.: On the functionally countable subalgebra of $C(X)$. Rend. Semin. Mat. Univ. Padova 129 (2013), 47–69, https://doi.org/10.4171/RSMUP/129-4 DOI 10.4171/RSMUP/129-4 | MR 3090630 [11] Ghadermazi, M., Karamzadeh, O.A.S., Namdari, M.: $C(X)$ versus its functionally countable subalgebra. Bull. Iranian Math. Soc. 
45 (2019), 173–187, https://doi.org/10.1007/s41980-018-0124-8 DOI 10.1007/s41980-018-0124-8 | MR 3913987 [12] Gillman, L., Jerison, M.: Rings of Continuous Functions. Springer-Verlag, 1976. MR 0407579 | Zbl 0327.46040 [13] Johnstone, P.T.: Stone Spaces. Cambridge Univ. Press, Cambridge, 1982. MR 0698074 | Zbl 0499.54001 [14] Karamzadeh, O.A.S., Keshtkar, Z.: On $c$-realcompact spaces. Quaest. Math. 42 (8) (2018), 1135–1167, https://doi.org/10.2989/16073606.2018.1441919 DOI 10.2989/16073606.2018.1441919 | MR 3885948 [15] Karamzadeh, O.A.S., Namdari, M., Soltanpour, S.: On the locally functionally countable subalgebra of $C(X)$. Appl. Gen. Topol. 16 (2015), 183–207, https://doi.org/10.4995/agt.2015.3445 DOI 10.4995/agt.2015.3445 | MR 3411461 [16] Karimi Feizabadi, A., Estaji, A.A., Robat Sarpoushi, M.: Pointfree version of image of real-valued continuous functions. Categ. Gen. Algebr. Struct. Appl. 9 (1) (2018), 59–75. MR 3833111 [17] Mehri, R., Mohamadian, R.: On the locally countable subalgebra of $C(X)$ whose local domain is cocountable. Hacet. J. Math. Stat. 46 (6) (2017), 1053–1068, http://doi.org/10.15672/HJMS.2017.435 DOI 10.15672/HJMS.2017.435 | MR 3751773 [18] Namdari, M., Veisi, A.: Rings of quotients of the subalgebra of $C(X )$ consisting of functions with countable image. Int. Math. Forum 7 (12) (2012), 561–571. MR 2969547 [19] Picado, J., Pultr, A.: Frames and Locales: Topology without Points. Frontiers in Mathematics, Birkhäuser/Springer, Basel AG, Basel, 2012. MR 2868166 [20] Robat Sarpoushi, M.: Pointfree topology version of continuous functions with countable image. Hakim Sabzevari University, Sabzevar, Iran (2017), Phd. Thesis.
CommonCrawl
Hyperbolas and Other Rational Functions
Solve applications involving hyperbolas
Identify Characteristics of Rational Functions
Transformations of Rational Functions
Graphing Rational Functions
Solve Rational Functions Analytically
Solve applications involving Rational Functions
A rational number is a number in the form of a fraction, and similarly a rational function is a function in the form of a fraction. So....
A rational function is a function of the form $y=\frac{P\left(x\right)}{Q\left(x\right)},\ Q\left(x\right)\ne0$, where $P\left(x\right)$ and $Q\left(x\right)$ are polynomials. So functions like $y=\frac{x}{x+1}$, $y=\frac{x^2-9}{2x-1}$, $y=5x^3-1=\frac{5x^3-1}{1}$ and $y=\frac{24}{x^2-5x+6}$ can all be considered rational functions.
Any fraction, written as $\frac{m}{n}$, where $m$ and $n$ are integers (and $n$ is always nonzero), has the value zero if, and only if, $m=0$. The same idea applies to rational functions. The function values become zero when the numerator vanishes (in other words when $P\left(x\right)=0$). The complication with rational functions is that sometimes the denominator vanishes at the same time.
For example, $y=\frac{x^2-5x+6}{x}$ becomes zero when $x^2-5x+6=0$. This means that all of the function's $x$ intercepts are found by solving this quadratic equation:
$$x^2-5x+6=0$$
$$\left(x-2\right)\left(x-3\right)=0$$
$$\therefore\ x=2,3$$
So because the values $x=2$ and $x=3$ don't send the denominator to $0$ as well, we can accept these as the two $x$ intercepts of this function. We need to make the check that the denominator isn't being made equal to $0$, because if the denominator is $0$, then the function is undefined at that point.
Asymptotes
An asymptote is a line (or curve) that approaches a given curve arbitrarily closely. The curve approaches it, but never reaches it.
Horizontal asymptotes (example 1)
To appreciate this, consider the simple rational function given by $y=\frac{x-1}{x}$ and imagine that the positive $x$ values become very large. The numerator will become large as well, but will always be one less than the denominator, and this means that the function values will become close to $1$ but never quite reach $1$. Hence the line $y=1$ becomes a horizontal asymptote.
Vertical asymptotes (example 2)
In addition to this we find that rational functions are rarely continuous over their natural domains. There are often points of discontinuity where the function simply ceases to exist. Take for example the rational function given by $y=\frac{24}{x^2-5x+6}$. If the denominator becomes $0$, the function becomes undefined, and because we know that $x=2$ and $x=3$ are roots of the equation $x^2-5x+6=0$ (see above), then the curve must break into three sections across the vertical lines given by $x=2$ and $x=3$.
We need to examine the behaviour of the function as the $x$ values approach each of these lines. For example we know that at $x=3$, the denominator becomes $0$, so at $x=3.000001$, which is slightly more than $x=3$, the denominator must be extremely small (in fact it becomes the number $0.000001000001$). The function value is thus given by the constant $24$ (in the numerator) divided by this extremely small number. This results in an extremely large number (almost $24,000,000$).
What this means is that as $x$ approaches $3$ from the right side of the line $x=3$, the curve bends upward toward infinity. It's as if the curve, upon nearing the line $x=3$, tries to find a way around it. The line $x=3$ becomes a vertical asymptote. Here is how the right hand section of the function $y=\frac{24}{x^2-5x+6}$ looks. Note how the curve bends upward as it approaches the vertical asymptote $x=3$:
The same type of behaviour occurs on the other side of the line $x=3$, except that with this particular function, the curve approaches the line $x=3$ from the left bending downwards, as if it's trying to find a way around it in the other direction. With the asymptote $x=2$, we see the curve behaving in a similar way. On both sides it approaches and bends in opposite directions toward the line $x=2$. Here is a complete sketch of the curve with a very much compressed scale on the $y$ axis. Note the two vertical asymptotes as well as the horizontal asymptote at $y=0$ (the $x$ axis).
Oblique asymptotes (example 3)
Consider the function given by $y=\frac{x^2+4}{x}$. Clearly we have $x\ne0$, and so $x=0$ is a vertical asymptote of the function. Further, as $x$ becomes large, the function approaches the line $y=x$. The easiest way to see this is to put the function into a different form by carrying out the division by $x$. Thus:
$$y=\frac{x^2+4}{x}=x+\frac{4}{x}$$
As $x$ increases indefinitely (that is, as $x\rightarrow\infty$), the fractional part $\frac{4}{x}$ becomes very small (in other words $\frac{4}{x}\rightarrow0$), so that the curve approaches the line $y=x$. It does so from above because $x+\frac{4}{x}$ is always greater than $x$ for all $x>0$. As $x\rightarrow-\infty$ then $\frac{4}{x}\rightarrow0$ (always less than zero) and this means that the curve $y=x+\frac{4}{x}$ approaches the line $y=x$ from below. We also know that there are no $x$ intercepts, because setting $y=0$ implies that the numerator function $x^2+4$ has to be $0$, but this is impossible because $x^2+4$ is always positive (check this). Putting all this together, the graph of the function, shown here, makes sense.
It is not always the case that rational curves approach asymptotes in opposite directions. It very much depends on the polarity (odd-ness and even-ness) of the powers of $x$ in the denominator function. For example, the function $y=\frac{1}{x^2}$ has the $y$-axis as a vertical asymptote, and both sides of the curve approach it by bending upwards. This will be discussed at length at a later time.
All polynomial functions can be considered rational functions with $Q\left(x\right)=1$
An oblique asymptote will be a straight line if the difference in degree between $P\left(x\right)$ and $Q\left(x\right)$ is $1$.
Consider the following graph of a hyperbola. (Graph not shown.)
State the equation of the vertical asymptote.
State the equation of the horizontal or oblique asymptote.
Consider the function $f\left(x\right)=\frac{7x-5}{4x+3}$.
State the equations of any vertical asymptotes.
State the equations of any horizontal or oblique asymptotes.
Consider the function $f\left(x\right)=\frac{x^2+8x+15}{3x^2-17x+10}$. (Your answer should be given in terms of $y$.)
A rational function is a function of the form $y=\frac{P\left(x\right)}{Q\left(x\right)},\ Q\left(x\right)\ne0$, where $P\left(x\right)$ and $Q\left(x\right)$ are polynomials.
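The asymptote-finding recipe above can also be checked mechanically. The short sympy sketch below is only a verification aid; the helper function is not part of the lesson, and it ignores the case where a root of the denominator also cancels a root of the numerator.

```python
# Check the asymptotes of the worked examples with sympy.
from sympy import symbols, limit, oo, solveset, div, Poly, S

x = symbols('x')

def asymptotes(num, den):
    vertical = solveset(den, x, domain=S.Reals)        # vertical asymptotes: real roots of the denominator
    quotient, _ = div(Poly(num, x), Poly(den, x))      # horizontal/oblique asymptote from polynomial division
    return vertical, quotient.as_expr()

print(asymptotes(24, x**2 - 5*x + 6))   # vertical x = 2 and x = 3, horizontal y = 0
print(asymptotes(x**2 + 4, x))          # vertical x = 0, oblique y = x
print(limit((x - 1)/x, x, oo))          # horizontal asymptote y = 1 for example 1
```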
CommonCrawl
How to find Laurent series expansion for $\frac{e^z}{(z+1)^2}$
Find the Laurent series expansion for $\frac{e^z}{(z+1)^2}$ for $\lvert z \rvert > 1$. I know how to find the Laurent series expansion for $\lvert z \rvert < 1$, which is $$\frac{e^{z+1-1}}{(z+1)^2} = \frac{1}{e}\frac{e^{z+1}}{(z+1)^2}$$ and using the series expansion for $e^x$ $$\frac{1}{e} \frac{1}{(z+1)^2} \Big[ 1 + (z+1) + \frac{(z+1)^2}{2!} + ... \Big]$$ $$= \frac{1}{e} \Big[ \frac{1}{(z+1)^2} + \frac{1}{(z+1)} + \frac{1}{2!} + ... \Big]$$ So we have a pole of order $2$ and $\underset{z = -1}{\operatorname{Res}} \frac{e^z}{(z+1)^2} = \frac{1}{e}$. But is there any way to use this information to find the Laurent series expansion for $\lvert z \rvert > 1$? I only know how to do it if we can get a form $\frac{1}{1 - z}$ but we have an $e^z$ around so I have no idea what to do in this case.
sequences-and-series complex-analysis laurent-series
mr eyeglasses
The Laurent expansion about what point? In your post you have the Laurent expansion about $z=-1$, which converges in the region $\left|z+1\right|>0$. – Matt Rosenzweig May 17 '15 at 22:32
The Laurent expansion for $|z|<1$ should just be a Taylor series. – zhw. May 17 '15 at 22:34
About $z = 0$. I guess I misunderstood how to compute a Laurent series expansion – mr eyeglasses May 17 '15 at 23:07
CommonCrawl
Trusting the hand that feeds: microbes evolve to anticipate a serial transfer protocol as individuals or collectives Bram van Dijk1, Jeroen Meijer1, Thomas D. Cuypers1 & Paulien Hogeweg1 Experimental evolution of microbes often involves a serial transfer protocol, where microbes are repeatedly diluted by transfer to a fresh medium, starting a new growth cycle. This has revealed that evolution can be remarkably reproducible, where microbes show parallel adaptations both on the level of the phenotype as well as the genotype. However, these studies also reveal a strong potential for divergent evolution, leading to diversity both between and within replicate populations. We here study how in silico evolved Virtual Microbe "wild types" (WTs) adapt to a serial transfer protocol to investigate generic evolutionary adaptations, and how these adaptations can be manifested by a variety of different mechanisms. We show that all WTs evolve to anticipate the regularity of the serial transfer protocol by adopting a fine-tuned balance of growth and survival. This anticipation is done by evolving either a high yield mode, or a high growth rate mode. We find that both modes of anticipation can be achieved by individual lineages and by collectives of microbes. Moreover, these different outcomes can be achieved with or without regulation, although the individual-based anticipation without regulation is less well adapted in the high growth rate mode. All our in silico WTs evolve to trust the hand that feeds by evolving to anticipate the periodicity of a serial transfer protocol, but can do so by evolving two distinct growth strategies. Furthermore, both these growth strategies can be accomplished by gene regulation, a variety of different polymorphisms, and combinations thereof. Our work reveals that, even under controlled conditions like those in the lab, it may not be possible to predict individual evolutionary trajectories, but repeated experiments may well result in only a limited number of possible outcomes. In order to see microbial evolution in action, we often rely on experimental evolution under controlled laboratory conditions. The Long-term Evolution Experiment (LTEE) [1] and similar shorter studies [2, 3] have, for example, evolved many generations of microbes using a serial transfer protocol, where microbes are repeatedly diluted and transferred to a fresh medium to start a new growth cycle. Conceptually, if we learn to understand how microbes adapt to such a resource cycle, we might one day be able to predict evolution in the lab and — ideally — also in nature. Indeed, a lot of evolution in the lab seems remarkably reproducible, where microbes show parallel adaptations both on the level of the phenotype as well as the genotype [4–11]. However, there also seems to be strong potential for divergent evolution, leading to diversity both between and within replicate populations [12–14]. Diversification events within populations in serial transfer regularly show cross-feeding interactions [12, 13, 15–17], where strains emerge that grow on metabolic by-products. These cross-feeding interactions are increasingly well understood with the help of metabolic modeling and digital evolution [18, 19]. A recent metagenomics study has revealed even more coexisting lineages in the LTEE than were previously reported [20]. 
It is however not yet clear whether all these polymorphisms are the result of uni-directional cross-feeding interactions, or if other mechanisms could drive coexistence in a simple experiment such as a serial transfer protocol. Furthermore, whether or not the diversified communities experience fundamentally different selection pressures and growth dynamics as a collective, is still an open question. Prior to being subjected to lab conditions, the microbes used in the aforementioned experimental studies have all had a long evolutionary history in natural environments, experiencing harshly fluctuating and — more often than not — unfavourable conditions. While a serial transfer protocol at a first glance selects mostly for higher growth rates when resources are abundant (i.e. during the log phase), there is also selection to survive when resources are depleted and the population no longer grows (i.e. during the stationary phase). In fact, given the unpredictable conditions found in nature, some of the ancestors of Escherichia coli might have survived precisely because they diverted resources away from growth. Indeed, E. coli does exactly this during the stationary phase by means of the stringent response, regulating up to one third of all genes during starvation [21]. This response lowers the growth rate, but promotes efficiency and survival (i.e. a higher yield). While most microbes have ways to deal with starvation, the physiology of growth arrest varies a lot across different microbes, and especially display great variation in how long they can persist in the absence of nutrients (for an excellent review, see [22]). After prolonged starvation, many species of bacteria go through even more physiological changes, such as the GASP response [23], persistence [24], and sporulation [25]. Bacteria have also been shown to employ bet-hedging strategies with respect to these physiological changes [26–28], which could help to adapt to unexpected environmental changes. Finally, it has been shown that microorganisms can even adjust to expected environmental changes, anticipating regularity in environmental changes [24, 29, 30], which usually entails using predictive cues from the environment. All these responses, as well as other features that organisms have acquired during their evolutionary history (gene clustering, gene regulatory network architecture, metabolic regulation, etc.), might strongly influence the adaptation and reproducibility we observe in the lab today. What do we expect when a complex, "pre-evolved" organism adapts to serial transfer protocol in the lab, given how clean and extremely regular these conditions are? We here use Virtual Microbes in order to firstly mimic natural evolution, acquiring Virtual "wild types" (WTs), which we then expose to a serial transfer protocol (see methods). We do so in order to obtain a fresh perspective on whichgeneric adaptations might appear in spite of evolutionary contingencies, and how these adaptations are achieved. We find that all the WTs — which are both genotypically and phenotypically diverse — evolve to anticipate the regularity of the serial transfer protocol by timing their growth rate, yield, and survival, to accurately fit the daily cycle. Yet, we observe many alternative paths in terms of growth dynamics trajectories, gene regulation, and diversification. 
Whereas some WTs adapt by means of clever gene regulation, others diverge into multiple strains with their own temporal niche, and others simply time their resource consumption as to not over-exploit the medium. In short, our WTs all recognized and exploited the regularity of the serial transfer protocol, having learned to trust the hand that feeds, but they solve this challenge by a variety of different mechanisms. In this study we use Virtual Microbes, a model of the eco-evolutionary dynamics of microbes (Fig.1 and methods). In short, the Virtual Microbe model is unsupervised, meaning that it aims to combine relevant biological structures (genes, genomes, metabolism, mutations, ecology, etc.), allowing us to study the emergent properties of fitness and evolution in an undirected system. In other words, by not explicitly defining what the model should do, we take a serendipitous approach to study microbial evolution. By modelling evolution with many degrees of freedom, the process can be seen as a "inventive" generator of attainable (and maintainable) adaptations [31], and can furthermore serve to debug false intuitions [32]. Our main objective in this study is to elucidate generic adaptations of evolution in a serial transfer protocol, to investigate how this is achieved, and to what extend it is constrained by prior evolution. In order not to lose track of the objective of finding generic patterns, we refrain from discussing and analysing every mechanistic detail, and instead focus on major observables and discuss some illustrative cases. Virtual Microbes model overview. a At the basis of the Virtual Microbe model is an artificial "metabolic universe", describing all the possible reactions that can be catalysed. Resources (yellow and blue) are fluxed in, but building blocks (purple) and energy (red) must be synthesized to express proteins and transport metabolites across the membrane, respectively. b A Virtual Microbe only needs to express a subset of all possible reactions to be viable, and no metabolic strategy is necessarily the "right" one. c The individuals grow and reproduce on a spatial grid, and can only reproduce when there is an empty spot. Death happens stochastically or when a cell has accumulated toxicity by having excessively high concentrations of metabolites. Since only cells that have grown sufficiently are allowed to reproduce, we simulate evolution with no prior expectation Evolving Virtual Microbe "wild types" Before evolving Virtual Microbes in a serial transfer protocol, we first evolved a set of Virtual "Wild Types" (WTs). Instead of optimizing these WTs solely for high growth rates or optimal metabolic flux, we here mimic natural circumstances by fluctuating resource conditions (Fig. 2a). When too little resource is available, the Virtual Microbes cannot grow, and can only stay alive for as long as their internal resources last. When too much resource is available however, the Virtual Microbes run the risk of accumulating too high concentrations of metabolites, resulting in increased death rates due to toxicity. Furthermore, a stochastic death process is implemented, allowing even a maximally flourishing Virtual Microbes to only live 100 time steps on average. To avoid extinction, we divided the total grid into four sub-grids, where the two resource metabolites A and C (yellow and blue in Fig. 1a) independently change in their influx rates with probability 0.01 (see Table 3). 
Thus, on average, an individual will experience one fluctuation in resource conditions during its lifetime (see full configuration in S1). While both influxed resources can be converted into building blocks required for growth, the rates of influx span four orders of magnitude (10−5 – 10−1, see Table 3), and conditions will thus vary from very favourable to very poor. Although poor conditions could cause a local population of microbes to go extinct due to limiting resources, total extinction is highly unlikely due to the 4 independent sub-grids. All this in turn depends on which resources the evolved Virtual Microbes like to consume (and at which rate), whether or not there is too much or too little resource, and whether or not space for reproduction is available. Finally, persisting in an unfavourable environment for a long time can be rewarding if conditions improve. All in all, this results in an unsupervised evolutionary process where there is no prior expectation of what metabolic strategy or gene regulatory networks might be best suited to survive. We study what will be the long-term target of the eco-evolutionary dynamics, not in terms of fitness, but in terms of what the Virtual Microbes evolve to do. Evolution of Virtual "wild types" under naturally unpredictable and fluctuating resource conditions. a Natural evolution is mimicked by (harsly) fluctuating resource conditions, resulting in a wide variety of resource conditions. The (actual) grid is 40x40, with four 20x20 subspaces where the rates of influx vary stochastically. These subspaces do not impede diffusion of metabolites or reproduction. The fluctuations of the A and C resource (blue and yellow respectively) are independent, resulting in a variety of different conditions. b We repeat the evolution in natural conditions 16 times starting from the same (minimally viable) initial clone (varying the mutations that happen) yielding 16 distinct WTs. These WTs are later transfered to a serial transfer protocol. c In the white labels we show how many of the evolved WTs adapted to use particular reactions. The thicker arrows represent the shared core genome which consists of two resource importers, a metabolic cycle, and a C-exporter (yellow). Transcription factors (diamonds) were always present across WTs, but only 11/16 WTs visibly display changes in gene expression correlated with changes in the environment We evolved the same initial clone in the exact same "random" resource fluctuations, only varying the mutations that happened across ∼10.000 generations of evolution. This produced 16 distinct WTs with their own evolutionary history, which we then expose to the serial transfer protocol (Fig. 2b). Despite experiencing precisely the same fluctuations, no two WTs evolved to be the same. For example, we observe a great diversity in gene content, kinetic parameters of enzymes, gene regulatory networks and their complexity, and responses to environmental stimuli. The core metabolism is however strikingly similar across WTs, always consisting of a simple metabolic cycle. The rates of building block production and death rates are also very similar across all WTs (Additional file 1: Figure S3). In other words, it appears that there are many different ways to be fit, and that no solution is evidently better. The similarities and differences between our WTs are summarized in Fig. 2c, but we discuss this in more detail in Additional file 1: Section S1. 
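For concreteness, the fluctuation regime can be written down as a small stochastic process. The grid layout and the switching probability follow the description above (four sub-grids, two resources, switching probability 0.01 per time step, influx between 10^-5 and 10^-1), while the log-uniform draw of a new rate is an assumption made for this sketch.

```python
# Toy sketch of the fluctuating-resource regime used to evolve the wild types:
# four sub-grids, each with independent influx rates for resources A and C that
# switch with probability 0.01 per time step. How a new rate is drawn is not
# specified in the text; a log-uniform draw between 1e-5 and 1e-1 is assumed here.
import numpy as np

rng = np.random.default_rng(0)

def new_rate():
    return 10 ** rng.uniform(-5, -1)

influx = {(sub, res): new_rate() for sub in range(4) for res in "AC"}

def step(influx, p_switch=0.01):
    for key in influx:
        if rng.random() < p_switch:
            influx[key] = new_rate()
    return influx

# over an average individual lifetime (~100 steps) each sub-grid sees roughly one switch
for t in range(100):
    influx = step(influx)
print(influx)
```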
In silico serial transfer evolution experiment After evolving a variety of different WTs, we transfer the WTs to a serial transfer protocol. With regular intervals, all but 10 percent of the cells are removed, while at the same time refreshing the medium. Although time in Virtual Microbes has arbitrary units, we will refer to this process as the "daily" cycle from this point forward. Early in the day, during the log phase, high growth rates are very rewarding as there is a lot of opportunity to reproduce. However, once the population has reached stationary phase (having consumed all resources), it is favourable to survive and to not invest in growth any further. We will focus on how our WTs adapt to these alternating selection pressures. The results discussed here are found for a variety of different medium conditions (e.g. also see Additional file 1: Table S2). In the main text however, we present the 50 time step serial transfer protocol where the medium contained both resources (A and C), as this was a condition on which all WTs could be cultivated, ensuring equal treatment. We focus on the generic adaptations towards this protocol first, and then show how specific WTs and contingent factors from their evolutionary history shape these outcomes. All wild types evolve to anticipate the serial transfer protocol After 800 days of evolving in a serial transfer protocol, we compare the ancestral WTs with the evolved populations. We first show some of the well-known growth dynamics of microbes: the lag-, log-, and stationary phase (Fig. 3a). As most experimental evolutionary studies in the lab, we too observe a decreased lag phase and an increased growth rate. The increased growth rate in the evolved population results in an earlier onset of the stationary phase, which therefore takes much longer than for their WT ancestors. Eventually, this leads to a phase where the cell count decreases again (death phase), revealing a decrease in survival for the evolved populations. To further study how this decreased survival comes about, we next investigated the dynamics of average cell volumes. Cell volume is an indicator for the "health" of the population, determining the ability to divide (minimal division volume) and survive (minimal viable volume). A first interesting observation is an increase in average cell volume during the log phase (Fig. 3b-c), which is also one of the first results from the LTEE [33]. However, after this increase in cell volumes during the log phase, evolved populations display a clear decrease in cell volumes, either at the end of the day (Fig. 3b), or during the whole stationary phase (Fig. 3c). Indeed, if we expose the populations to prolonged starvation by extending the day, the evolved populations die shortly after the anticipated serial transfer, while their WT ancestors survived for much longer (Fig. 3b-c, right-hand side). Strikingly, we observed that the cell volume at the time of transferring the cells to a fresh medium (henceforth 'volume-at-transfer') fall into two distinct categories. In the high yield scenario (Fig. 3b), cell volumes are maintained above the division volume until the very end of the day, whereas the low yield scenario, albeit having a higher growth rate, leads to a volume-at-transfer that is just above minimal. Indeed, the distribution of these observed volume-at-transfer across ancestral WTs are mostly high (Fig. 3d, left-hand side), while the evolved cells clearly show a bimodal distribution (Fig. 3d, right-hand side). 
Thus, all the populations evolved to either be ready to immediately divide at transfer (high yield mode), or exploit as much resource as possible while remaining above the minimal viable volume (high growth rate mode). Despite this difference in growth modes, both populations have evolved to accurately time the regularity of the serial transfer protocol. All evolved populations also show a consistent decrease in extended yield (Fig. 3e) relative to the WTs, as long-term yield is now masked from natural selection. Finally, we found that this anticipation effect did not depend on details in the protocol, such as the length of the daily cycle or the number of resources used (Additional file 1: Figure S5 and Table S2). This reveals that a key selection pressure in a serial transfer protocol is not only growth as fast as possible, but also remaining viable until the next day, anticipating the next supply of nutrients.
Virtual Microbes adapt to anticipate the regularity of a serial transfer protocol. a Growth dynamics of early population (green) and evolved populations (blue) in terms of cell counts. (WT03#1 taken as an illustrative example). b-c Two WTs (green) and the population after prolonged evolution in the serial transfer protocol (blue) are shown as an illustration of the anticipation effects. Over the course of 3 cycles, the average cell volume is plotted against time for the ancestral WT (green) and for the evolved population (blue). The y-axis (cell volume) indicates the minimal viable volume and division volume (which are fixed for the model), and the evolved volume-at-transfer (as measured at the end of the third cycle). Daily and extended yield are measured as defined in the method section. After the third cycle, serial transfer is stopped (transparent area), showing decreased survival of the evolved populations with respect to their ancestor. d Stacked density distributions are plotted for the volume-at-transfer both early (transfer 0-40, green) and late (transfer 760-800, blue). e The evolved changes in yield both "daily" (within one cycle of the protocol) and "extended" (after prolonged starvation) for all 16 WTs
Evolution toward a growth-yield trade-off
The two extreme categories of cell volume dynamics from Fig. 3 illustrate a well-studied trade-off between growth and yield in microbial populations [34–36]. We next investigate how our different WTs evolve towards this trade-off, and how reproducible these trajectories are. For this, we repeated the serial transfer protocol 3 times for each WT, and followed the trajectories over time. After ∼800 serial transfers, all populations have adapted along a trade-off between growth and yield (Fig. 4a). No trade-off was observed during the first cycle of the protocol, which instead shows a positive correlation between growth and yield (Fig. 4b), revealing how both growth and yield could initially be improved for most WTs. Evolution towards the trade-off, by improving both growth and yield by e.g. importing more resources or producing more building blocks, is similar across all WTs, although not all WTs approach it from the same angle (also see Additional file 1: Figure S6). Subsequent evolution on the trade-off diverges into two distinct clusters, representing the two aforementioned modes of high yield and high growth rate. This divergence is not only seen between different WTs (Fig. 4c-d), but also occurs in replicate experiments of the same WT (Fig. 4e, Additional file 1: Figure S6).
Finally, specific WTs appear to more readily give rise to certain outcomes, having specific adaptations in their "mutational neighbourhood". This is for example illustrated by two WTs (5 and 11) that repeatedly gave rise to mutants with extremely high, but unsustainable growth rates, causing populations to go extinct repeatedly (black crosses in Fig. 4). In summary, some WTs adapt in a similar way to the serial transfer protocol, while others (that have experienced the same amount of prior evolution) have diverging evolutionary trajectories and can reach different solutions, especially after having adapted towards the trade-off. Trajectories towards a growth versus yield trade-off end in either the high growth rate mode or the high yield mode. a Growth rate (average building block production rate) is plotted against daily yield (average population biomass within a single cycle), for all the 48 experiments after adaptation to 800 serial transfers. The black dotted line is a linear regression model (R2 = 0.54). b Shows the initial points for all 16 WTs, which actually have a positive correlation between growth and yield (R2 = 0.32) instead of the negative correlation (black dotted line). c-e These insets display how the repeated evolution of certain WTs produce very similar trajectories towards the trade-off (time points are day 0, 20, 40, 100, 200 and 800), ending in either high daily yield (c) or low daily yield (d). Other WTs diverge after reaching the trade-off, and thus show more diverse trajectories when repeated (e). The colours of the end point symbols depict different modes of adaptation as discussed in the next paragraph (grey = no coexistence, purple = (quasi-)stable coexistence, black cross = extinction due to over-exploiting the medium) Anticipating as a collective So far we have only looked at population averages. Next, we study the dynamics of lineages and the evolved dynamics within cells. To track lineages we tag each individual in the population with a neutral lineage marker at the start of the experiment (analogous to DNA barcoding [37]). When a single lineage reaches fixation, we reapply these neutral markers, allowing us to quickly detect long-term coexistence. Moreover, these neutral markers allow us to study which arising mutants are adaptive in the different phases of the growth cycle. In Fig. 5a we show dynamics of neutral lineage markers that are frequently redistributed when one lineages fixates in the population, indicating that there is no long-term coexistence of strains. In contrast, Fig. 5b displays repeatedly observed (quasi-)stable coexistence, where two lineages coexist for some time, but coexistence was not stable in the long-term. Lastly, Fig. 5c shows stable, long-term coexistence, where the population sustains a balanced polymorphism until the end of the experiment. Based on these lineage markers (also see Additional file 1: Figure S8), coexistence (either quasi-stable or stable) was observed in 21 out of 44 extant populations (Fig. 5d). Dynamics of neutral lineage markers reveal balanced polymorphisms based on the daily cycle. a-c Neutral lineage marker (random colours) frequencies are plotted along 800 serial transfers (left hand side) and along 3 cycles. Panel A shows an example with no coexistence which is found in 23 out of 44 replicates, and panel B and C show (quasi-)stable coexistence, found in the remaining 21 replicates. 
d shows, for all 3 replicates of all WTs, whether or not coexistence of neutral lineage markers was observed (grey = no coexistence, purple = (quasi-)stable coexistence, black cross = extinction due to over-exploiting the medium). Also see Additional file 1: Figure S8 By zooming in on the dynamics of coexisting lineage markers over a shorter time span (Fig. 5b-c, right-hand side), we can better understand how these lineages stably coexist. Notably, one lineage dominates during log phase, while the other lineage performs better during stationary phase. In other words, the lineages have specialized on their own temporal niche. We find that these dynamics can be the result of three mechanisms (or combinations thereof): 1) cross-feeding on building block metabolites, 2) specialisation on either of the two resources, or 3) differentiation along the growth vs. yield trade-off. Cross-feeding dynamics always resulted in quasi-stable coexistence (such as depicted in Fig. 5b), and never resulted in the balanced polymorphism depicted in Fig. 5c, while the other two mechanisms (resource specialisation and growth vs. yield differentiation) most often resulted in long-term coexistence where lineages perform better together than they do alone (Additional file 1: Figure S9). While specialisation on different resources is a well-known mechanism for negative frequency-dependent selection, it is far less evident how a growth vs. yield trade-off would result in a fully balanced polymorphism. Mutants with higher growth rates but elevated death rates have a very distinct signature of increasing in frequency early in the daily cycle and decreasing to much lower frequencies during the stationary phase (Additional file 1: Figure S7A), as opposed to lineages that increase in frequency throughout all phases of the cycle (Additional file 1: Figure S7B). While such mutants readily arise across our experiments, they often have difficulty rising to fixation due to the increased duration of the stationary phase, where they are unfit. In the meantime, a slower growing lineage with lower death rates can be optimized to utilize resources at low concentrations during stationary phase. These dynamics can give rise to a balanced polymorphism that does not depend on resource specialisation or cross-feeding, and is also observed in our experiments with a single resource (Additional file 1: Table S2). Indeed, Fig. 5c illustrates how two lineages with more than a three-fold difference in death rates (±0.015 and ±0.048) can stably coexist. The relative importance of the coexistence mechanisms discussed above can differ strongly across WTs and replicated experiments. For example, since de novo gene discoveries were disabled during this experiment, cross-feeding on building blocks is only possible if the ancestral WT had the necessary importer for building blocks, which was true only for 6/16 WTs. Similarly, even though all WTs have the necessary importers for both the A and C resource, one WT consistently diverged into an A- and C-specialist (WT10). While other WTs have multiple gene copies for these importers, WT10 had only 1 copy of both genes, making the loss-of-function mutations readily accessible. In conclusion, although all polymorphic populations also anticipate the serial transfer protocol, they do so in a different way than populations consisting of a single lineage. They all consist of strains which time growth and survival strategies in relation to each other in order to precisely finish the available nutrients by the end of the day.
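The frequency signature described above, with fast growers gaining ground in log phase and losing it again during starvation, can be illustrated with a minimal toy calculation. The sketch below is not the Virtual Microbes model: the growth rates, the phase durations and the hard switch from growth to starvation are illustrative assumptions, and only the two death rates echo the values quoted for Fig. 5c (in arbitrary units per time step).

import numpy as np

# Toy two-strain batch cycle: strain A grows fast but dies fast in starvation,
# strain B grows slowly but survives the stationary phase better.
g = {"A": 1.2, "B": 0.8}      # assumed growth rates during log phase
d = {"A": 0.048, "B": 0.015}  # death rates during stationary phase (values from Fig. 5c)

def one_cycle(n, t_log=6.0, t_stat=18.0, dt=0.1):
    """Propagate cell counts through one serial-transfer cycle and
    return the frequency of strain A over time."""
    times, freq_a = [], []
    t = 0.0
    while t < t_log + t_stat:
        growing = t < t_log              # crude switch: resource exhausted after t_log
        for s in n:
            rate = g[s] if growing else -d[s]
            n[s] *= np.exp(rate * dt)
        t += dt
        times.append(t)
        freq_a.append(n["A"] / (n["A"] + n["B"]))
    return np.array(times), np.array(freq_a)

t, f = one_cycle({"A": 250.0, "B": 250.0})
print(f"fast strain frequency: start 0.50, peak {f.max():.2f}, end of cycle {f[-1]:.2f}")

With these (assumed) numbers the fast strain peaks around the end of log phase and declines again during starvation, reproducing the qualitative signature of Additional file 1: Figure S7A; whether the two strains actually coexist over many cycles depends on the parameters.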
Individual anticipation by tuning and trimming the gene regulatory network The previous section illustrates how multiple lineages can coexist because the predictable serial transfer protocol produces temporal niches. However, many of our WTs do not show any tendency to differentiate like this, and instead always adapt to the serial transfer protocol as a single lineage (Fig. 6d). In order to better understand this, we will now look at the intracellular dynamics of WT07, and how it changes when adapting to the protocol. WT07 is one of the more "clever" WTs with a relatively complex GRN, and displays strong responses in gene expression when exposed to fluctuations. In Fig. 6b we show that WT07 consistently adapts to the protocol by switching between two modes of metabolism, where importer proteins are primed and ready at the beginning of the cycle, and exporter proteins and anabolic enzymes are suppressed during stationary phase. Despite some differences in the structure of the evolved GRNs, the protein allocation patterns are virtually indistinguishable across the three replicate evolutionary experiments. Interestingly, although no parallel changes were observed in the kinetic parameters of proteins, we do observe the parallel loss of an energy-sensing transcription factor as well as increased sensitivity of the TF that senses the external resource C. In other words, even though all mutations are equally likely, evolution apparently happened mostly through loss, and tuning and trimming of the GRN. Modulation between two metabolic modes allows this single lineage to switch between log and stationary phase, occupying both temporal niches. Indeed, a second lineage never appeared for this WT (Fig. 6b and Additional file 1: Table S2). Anticipation can entail polymorphism or a single lineage that switches between two metabolic modes. a Two lineages occupy different niches on the growth vs. yield trade-off WT02#01 diverges into a slow growing lineage (yellow lineage, average death rate ±0.015) and a faster growing lineage with elevated death rates (blue lineages, average death rate ±0.048), together anticipating the serial transfer protocol. b A single lineage anticipates the daily cycle by trimming and tuning the gene regulatory network. On the left the ancestral GRN, protein allocation dynamics, and resource concentrations are displayed over the course of 1 day. Next, after 400 days, all three independent simulations of WT07 are shown to have evolved to anticipate as a single lineage with two metabolic modes Individual and collective solutions have similar macro-level observables We have illustrated how all of our evolutionary experiments result in two modes, one with high yield, and another with high growth rates and lower yield. We have also shown how populations could or could not diversify into two strains, and how certain populations used regulated gene expression to adapt to all growth phases by itself. The four different combinations of collectives vs individual and regulating vs. non-regulating solutions, and their daily yield, are shown in Fig. 7. As can be seen, all these combinations anticipate the serial transfer protocol using either the high yield or high growth rate strategy, and achieve similar values. The non-regulating individual solutions however clearly perform more poorly, as these populations lack the ability to fill both temporal niches (note that gene discoveries are disabled during the serial transfer experiment, so gene regulation cannot evolve de novo). 
Also note that, although the regulating WTs could fill both temporal niches by themselves, this does not prevent balanced polymorphisms from forming repeatedly. These results show that either a collective solution and/or gene regulation is required to be well-adapted to a serial transfer protocol, and that which solution is used is not observable on the overall macro-level. Individual and collective solutions have similar macro-level observables The daily yield for all the evolved populations is shown, for groups of individual / collective solutions with and without regulated gene expression. Colours and symbols are identical to previous figures (grey=no coexistence, purple=coexistence). Only the non-regulating, individual lineages perform significantly worse than any of the other groups (performing all 6 Wilcoxon rank-sum tests with α 0.05) In this study we have taken a serendipitous approach to study how microbes adapt to a serial transfer protocol, and to what extent this is determined by their evolutionary history. The Virtual Microbe modelling framework serves this goal by building biology from the bottom up, i.e. implementing basic biological features and their interactions. We observe that regardless of their evolutionary history, all WTs learn to anticipate the regularity of the serial transfer protocol by evolving a fine-tuned balance between high growth rate and yield. Long-term survival without nutrients, which is now masked from natural selection, always deteriorates after prolonged exposure to such a protocol. Furthermore, this anticipation is done in two distinct ways. The high yield mode makes sure that the cells are ready to divide as soon as transferred to a fresh medium, whereas the high growth rate mode maximally exploits the medium but results in a poor performance during the stationary phase. We next show that WTs have similar trajectories towards a growth versus yield trade-off, but may subsequently diverge along it. Polymorphisms within populations are frequently observed, which can happen by means of cross-feeding interactions, resource specialisation, or by means of growth vs. yield specialisation. We furthermore find that these evolved collectives are dependent on one another, as both lineages perform better in the presence of the other. Finally, we show that regulated gene expression allows for an individual lineage to fill both temporal niches by itself, but that populations without regulated gene expression can still be well adapted to the protocol by diverging into two strains. In general, our results are robust to details in the serial transfer protocol, such as using only a single resource, or varying the interval between transfers (see Additional file 1: Table S2). The anticipation effects therefore appear to be generic features of microbes exposed to prolonged evolution in a serial transfer protocol. How do our results map onto experimental evolution in the lab? E. coli REL606 has been subjected to a daily serial transfer protocol for over 30 years (∼70.000 generations) in the LTEE. Many of our observations are very similar to the LTEE, such as the improved growth rate and cell sizes during the log phase[33], the (quasi-)stable dynamics of coexisting lineages[20], and "leapfrogging" dynamics (e.g. Fig. 5a-b) where an abundant lineage is overtaken by another lineage before rising to fixation [38, 39]. The comparison with respect to the growth rates, yield, and the anticipation effects discussed in this work, is however less straightforward. 
We have observed how all our WTs quickly evolve to be maximally efficient given our artificial chemistry, and only subsequently diverge along the apparent growth versus yield trade-off (see Additional file 1: Figure S6). In the LTEE, growth and yield have continued to improve so far, and although a trade-off has been observed within the populations [40], no growth versus yield trade-off between the replicate populations has been observed yet. Nevertheless, we propose that anticipation of periodic environmental change, and a growth versus yield trade-off, provide testable hypotheses for the LTEE and similar experimental studies. More similarities with empirical studies are found in the surprising number of experiments that result in balanced polymorphisms. A repeatedly observed mechanism for such a polymorphism is cross-feeding [11, 13, 16, 17], where modeling has shown that this adaptive diversification involves character displacement and strong niche construction [18], and furthermore depends strongly on the regularity of a serial transfer protocol [19]. We however also found balanced polymorphisms that did not include cross-feeding, involving one lineage with high growth rates during log phase and a slower growing lineage which performs better in stationary phase. Similar mechanisms of coexistence have been observed in respiratory and fermenting strains of Saccharomyces cerevisiae in chemostats [34], and single nucleotide mapping has furthermore revealed the existence of this trade-off [35]. These results are directly related to r/K selection theory [41], which describes an inherent conflict between the quantity and quality of one's offspring. Indeed, these dynamics have been shown to lead to two species stably coexisting in microbial populations [36, 42, 43]. Manhart & Shakhnovich [44] furthermore show that an unlimited number of species can theoretically coexist within a serial transfer protocol, occupying any niche on a trade-off continuum. Here we show that these dynamics can emerge from a more complex eco-evolutionary setting. However, our results suggest that the trade-off between growth and yield is not continuous, as intermediate solutions rarely evolve. This is caused by the fact that as soon as the volume-at-transfer for our digital microbes is smaller than the division volume (in other words, something other than the main nutrient becomes limiting for division), a cell may as well exploit its resources fully. Experimental evolution of Pseudomonas fluorescens has shown that different evolutionary paths can lead to the same phenotypic adaptations in a new environment [45, 46]. On the other hand, many studies have also suggested that adaptation can often entail mutations in the same genes [47, 48]. In our experiments, prior adaptations can in some cases strongly shape the way subsequent evolution plays out, but these evolutionary constraints can strongly differ between WTs (Additional file 1: Figure S6). Furthermore, these data show that these evolutionary constraints may or may not diminish after prolonged evolution. There is considerable variation in predictability during the serial transfer experiment, revealing that evolutionary constraints arising from historical contingency are themselves the result of contingencies. One factor that has been hypothesised to strongly impact the predictability and evolvability of biological systems is their GRN [6, 49–51], where for example global transcription factors could serve as mutational targets with large-scale phenotypic effects [8].
While our results (Fig. 6b) clearly show an example where similar mutations result in similar adaptive changes, other regulating WTs showed much less predictability. For example, WT #09 is another strongly regulating WT, but it showed different outcomes with respect to diversification and regulation in all 3 cases. In other words, while the GRN appears to add knobs and buttons for evolution to push, other mechanisms are clearly available to adapt and be fit in a serial transfer protocol. One such mechanism could be 'metabolic regulation', which has recently been shown to be able to achieve very high levels of robustness without leading to a loss in adaptive degrees of freedom [52]. Because all the kinetic parameters of enzymes (Km, Vmax, etc.) in the Virtual Microbes are freely evolvable, it is likely that this metabolic regulation of homeostasis plays a very important role in Virtual Microbes. This could furthermore explain why the differences in evolvability between regulating and non-regulating populations were smaller than we initially expected. We have indeed observed that, for certain WTs, a change in metabolism could bypass regulated protein expression by means of kinetic neofunctionalisation of importer proteins, which evolved to be sensitive to different concentrations. Although such a solution does waste more building blocks on the continuous production of importer proteins, it is also much more responsive to environmental changes. It is possible that subtle differences like this explain, for example, why two of our WTs were much more sensitive to extinction by over-exploiting the medium than others. Furthermore, although the phenotypes that are reachable can be limited by prior evolution [53], the trajectories of evolution may be much less predictable in the long term [54]. The role of metabolic regulation, and how this interplays with the repeatability and timescales of evolution, is a promising endeavour for future studies. Who is anticipating what? Our experiments reveal how populations of microbes can evolve to anticipate the regularity of a serial transfer protocol, trusting that new resources will be delivered on time. The concept of microbial populations anticipating predictable changes is frequently observed in nature [29, 55], and is supported by theoretical models [30, 56]. This form of anticipation however typically entails an environmental cue, where a preceding unrelated signal is used to anticipate environmental changes, usually followed by individuals taking some form of action. Without the necessity of such a cue, we show that anticipation can readily emerge in many different ways from an eco-evolutionary process. Although our form of anticipation is more passive, where not an individual but the system as a whole has temporal dynamics that accurately fit the protocol, this does not necessarily exclude individual-based anticipation. Like WT#07, most of the evolved regulating populations actually did not evolve to down-regulate their resource importers during the stationary phase, despite having repeatedly evolved to down-regulate other catabolic and anabolic enzymes (illustrated in Fig. 6b). Since no more resource is available, and building blocks are consumed in order to keep expressing these importer proteins, this clearly does not have a positive impact during the late stationary phase. One can wonder why these individuals seem to keep the engine running.
Whereas bet-hedging strategies have been shown to be a way to deal with irregular environmental changes [24, 26–28, 57, 58], this passive form of anticipation can be a way to deal with regular, predictable changes in the environment. Furthermore, this could potentially be the first step towards active anticipation by means of a circadian rhythm, such as sunflower heliotropism [59] and the diurnal migration of life in lakes and oceans [60–62]. Moving towards an eco-evolutionary understanding The dynamics of Virtual Microbes expose that even a simple serial transfer protocol entails much more than sequentially evolving higher and higher growth rates. Instead, adaptation is an eco-evolutionary process that strongly depends on prior evolution, timescales, the presence of other competitors and mutants, and transient fitness effects. Although we found that competition experiments generally favoured the evolved population over the ancestral WTs, there were exceptions to this rule. It is therefore possible that the ancestral WTs perform better in such an experiment, but that this does not describe the stable eco-evolutionary attractor. Indeed, survival of the fittest is an eco-evolutionary process where any emerging lineage interacts with other lineages (or with other mutants) through changes in the environment, often resulting in a collective, community-based solution rather than the winner of all pair-wise interactions [44]. Furthermore, faster growth becomes less and less important as populations become better adapted to the serial transfer protocol, perhaps making the aforementioned interactions between lineages increasingly relevant. Other recent studies have elucidated the importance of eco-evolutionary dynamics [44, 63], and how this can readily give rise to coexistence of multiple strains which could not have formed from a classical adaptive dynamics perspective [64, 65]. Indeed, metagenomics has revealed much more diversity in the LTEE than previously anticipated [20]. Shifting focus from competition experiments towards the ever-changing selection pressures that emerge from the eco-evolutionary dynamics and interactions will make the field of experimental evolution harder, but more intriguing, to study. We have studied how in silico WTs of Virtual Microbes adapt to a serial transfer protocol like that of the LTEE. The LTEE has shown a sustained increase in competitive fitness, and intensive research shows that the evolved clones are still improving their growth rates with respect to their ancestor to this day [66–68]. Our experiments have generated the novel hypothesis that microbes in a serial transfer protocol will eventually evolve to anticipate the regular resource interval, and can do so by evolving either a high growth rate mode, or a high yield mode. Both these modes can be achieved by a single individual lineage, or by a collective of two strains which both have their own temporal niche. Taken together, our results reveal important insights into the dynamics and relevant selection pressures in experimental evolution, advancing our understanding of the eco-evolutionary dynamics of microbes. A full description of the model and underlying equations is available online (https://bitbucket.org/thocu/virtual-microbes and https://virtualmicrobes.readthedocs.io). Here we summarize the sections of these documents that are relevant to this study.
Finding generic patterns of evolution Experimental evolution is, of course, done on organisms that have evolved for a long time under a wide variety of conditions. These studied organisms all have their own evolutionary history, and differences in how they deal with starvation, stress, changes in resource etc. With Virtual Microbes we are able to evolve a de novo set of "wild types" (WTs), adapted to live in such severely fluctuating resource conditions. We can then explore how these WTs adapt to experimental evolution, and find generic patterns of evolution. To find generic patterns without being biased towards specific solutions, the biology of Virtual Microbes build-up from many levels with many degrees of freedom. One downside of this strategy can be that it can be hard for readers to understand all the underlying assumptions and algorithm and that many simulations result in a slightly different anecdote. However, we encourage the reader to read this work as though reading about 'real' biological evolution, where the experiments reveal new generic patterns and generate new hypotheses. With or without an understanding of the mechanistic details, relatively simple multilevel models can capture the eco-evolutionary dynamics of microbes, allowing us to study what happens, what else emerges from these dynamics "for free", and equally important: what needs further explanation? Virtual Microbes metabolise, grow and divide on a spatial grid (Fig. 1c). Here, we use two parallel 40x40 grids with wrapped boundary conditions. One grid contains the Virtual Microbes and empty grid-points, and the other describes the local environment in which the Virtual Microbes live. This environmental layer holds influxed metabolites, waste products of Virtual Microbes, and spilled metabolites from lysing cells (Fig. 1b). In order to express proteins, grow, and maintain their cell size, Virtual Microbes must synthesize predefined metabolite(s), which we call building blocks. These building blocks are not directly provided, but must be synthesized by the Virtual Microbes by expressing the right proteins, allowing them to pump metabolites into the cell, and convert metabolites into one another (Fig. 1a). The expression of these proteins depends on genes on genomes that undergo a wide variety of possible mutations upon reproduction (Table 1). Genomes are circular lists of genes, each with their own unique properties (e.g. K m, V max for enzymes, K ligand and binding motif for TFs). The level of expression is unique for each gene, and is determined by its evolvable basal transcription rate and how this rate is modulated by transcription factors. When an enzyme or transporter gene is expressed, that specific reaction will take place within the cell that carries that gene. Note however that in the complete metabolic universe, many more possible reactions exist. The genome of an evolved Virtual Microbes will typically only use a subset of all the possible reactions. Genes to catalyse new reactions and novel TFs can be discovered through rare events. Which genes end up being selected for is not explicitly defined, but the result of a birth and death process. Birth depends on the availability of empty space and resources to synthesize new building blocks, whereas death depends on the ability to survive under a variety of different conditions and the potential accumulation (and avoidance) of toxicity. 
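To make the genome representation described above concrete, a minimal sketch is given below. The class and field names (Gene, Genome, v_max, k_m, promoter_strength) are illustrative stand-ins and not the identifiers used in the actual virtualmicrobes package; only the overall structure follows the text: a list of genes, each carrying its own evolvable kinetic and expression parameters, subject to point mutations and to duplications of stretches of genes.

from dataclasses import dataclass, field
import random

@dataclass
class Gene:
    kind: str                        # "enzyme", "transporter" or "tf"
    v_max: float = 1.0               # catalytic rate (enzymes/transporters)
    k_m: float = 0.5                 # substrate affinity
    promoter_strength: float = 0.1   # basal transcription rate
    binding_motif: str = ""          # bit-string motif (TFs only)

@dataclass
class Genome:
    genes: list = field(default_factory=list)

    def mutate_point(self, scale=0.1):
        """Point mutation: perturb one evolvable parameter of one random gene."""
        g = random.choice(self.genes)
        attr = random.choice(["v_max", "k_m", "promoter_strength"])
        setattr(g, attr, max(0.0, getattr(g, attr) * (1.0 + random.uniform(-scale, scale))))

    def duplicate_stretch(self):
        """Large-scale mutation: tandem duplication of a random stretch of genes."""
        i = random.randrange(len(self.genes))
        j = random.randrange(i, len(self.genes))
        copies = [Gene(**vars(g)) for g in self.genes[i:j + 1]]
        self.genes[j + 1:j + 1] = copies

genome = Genome([Gene("enzyme"), Gene("transporter"), Gene("tf", binding_motif="0110")])
genome.mutate_point()
genome.duplicate_stretch()
print(len(genome.genes))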
The resulting survival of the fittest (referred to as "competitive fitness" by Fragata et al., 2018) is an emergent phenomenon of eco-evolutionary dynamics [69]. Table 1 Types of mutations and their probabilities in WT evolution and serial transfer protocol (STP) Table 2 Gene level mutations and the boundary conditions Metabolic universe The metabolic universe in Virtual Microbes is an automatically generated (or user defined) set of metabolites and reactions between them. The simple metabolic universe used in this study was automatically generated by a simple algorithm that defines 4 classes of molecules, how they can be converted into one another by means of 6 reactions, how fast they degrade, diffuse over the membranes, etc. (see Table 4). Table 3 Grid setup and environmental forcing in WT evolution and serial transfer protocol (STP) Table 4 A priori defined metabolites and reactions in artificial chemistry The metabolism is simulated on the grid in terms of Ordinary Differential Equations (ODEs) using the GNU Scientific Library in Cython. These ODEs include the influx of molecules into the system, transport or diffusion across the membrane, intracellular metabolism (including expression and decay of proteins), biomass production, cell volume, the build-up of toxicity, etc. Diffusion between grid points was implemented as a simple local diffusion process, and is interleaved with the ODEs for efficiency. The number of simulations was limited to 16 WTs and 16x3 "lab" experiments due to computational feasibility. Statistics in this study only report effect sizes, as p-values are irrelevant in simulated studies [70]. Transmembrane transport For all molecules, transporters exist that import or export molecules across the cell membrane. Michaelis-Menten kinetics determine the transmembrane transport with rate v: $$v = v_{max_{\mathcal{T}}} \cdot [\mathcal{T}] \cdot \frac{[S] \cdot [e]}{([S] + K_{S}) \cdot ([e] + K_{e})}$$ where \([\mathcal{T}]\) is the concentration of the transporter protein, [S] is the concentration of substrate transported, and [e] is the concentration of available energy carrier metabolites. \(K_{S}\) and \(K_{e}\) are the Michaelis-Menten constants for the substrate and energy carrier, respectively. Depending on the direction of transport (importing or exporting), [S] is either the external or the internal concentration of the substrate. Note that for any gene on the genome of a Virtual Microbe, \(v_{max_{\mathcal{T}}}\), \(K_{S}\) and \(K_{e}\) are all freely evolvable parameters. Metabolism Similar to transport, metabolic reactions are catalysed by enzymes following Michaelis-Menten kinetics with rate v: $$v = v_{max_{\mathcal{E}}} \cdot [\mathcal{E}] \cdot \frac{\prod_{R\in \mathcal{R}} [R]}{\prod_{R\in \mathcal{R}} ([R] + K_{R})}$$ where \([\mathcal{E}]\) is the concentration of the enzyme catalysing the reaction, \(\mathcal{R}\) the set of all reactant metabolites, and \(K_{R}\) and \(v_{max_{\mathcal{E}}}\) are evolvable kinetic parameters of enzyme \(\mathcal{E}\). Biomass production Virtual Microbes convert the building block B to a biomass product P, which is consumed for cell growth and maintenance Growth(B) and protein production Prod(B), and which determines the strength with which individuals compete to reproduce. Biomass is next converted to cell volume with a fixed rate, and used for protein expression depending on the demands of the evolved genome.
In other words, high rates of expression demand more biomass product for proteins, leaving less biomass product to invest in cell volume or maintenance (see cell volume growth). In total, the rate of change of P then becomes $$\frac{dP}{dt} = Production(B) - Growth(B) - Protein\,expression(B) - dilution - degradation$$ where B is the concentration of building block metabolites. Production is a linear conversion of B into P, whereas growth, protein expression, and dilution depend on the dynamics of the cell. Biomass product is consumed by cellular growth and protein expression, which are both a function of the building block concentration; it is diluted in proportion to changes in cell volume, and degraded at a fixed rate. Consumption for protein expression is summed over all genes: $$\sum_{i=1}^{N_{genes}} Pr_{i} \cdot Reg_{i}$$ where \(Pr_{i}\) is the basal expression rate of gene i, which is up- or down-regulated by the factor \(Reg_{i}\) if transcription factors are bound to its operator sequence (see transcriptional regulation). Cell volume growth We assume that the cell volume has a maximum size MaxV and that there is a continuous turnover d of the cell volume, ensuring the necessity to keep on metabolising even when there is no possibility to reproduce (i.e. if the grid points are all full). Volume then changes as $$\frac{dV}{dt} = g \cdot V \cdot \left(1 - \frac{V}{MaxV}\right) - d \cdot V$$ Transcriptional regulation The rate at which genes are expressed is a function of the basal expression rate of the gene and the concentrations of binding TFs and their molecular ligands. The intrinsic basal expression rate of a gene is encoded by a strength parameter in the gene's promoter region. This basal expression rate can be modulated by TFs that bind to an operator sequence associated with the gene. Binding sites and TF binding motifs are modelled as bit-strings and matching depends on a certain fraction of sequence complementarity. If a minimum complementarity <1 is chosen, a match may occur anywhere within the full length of the operator binding sequence and the TF binding motif. The maximum fraction of complementarity achieved between matching sequences linearly scales the strength with which a TF binds the target gene. In addition to binding strength following from sequence complementarity, TFs encode an intrinsic binding affinity for promoters Kb, representing the structural stability of the TF-DNA binding complex. TFs can, themselves, be bound to small ligand molecules with binding affinity Kl, altering the regulatory effect they exert on downstream genes. These effects are encoded by parameters effbound and effapo for the ligand-bound and ligand-free state of the TF, respectively, and evolve independently. Ligand binding to TFs is assumed to be a fast process, relative to enzymatic and transcription-translation dynamics, and is modelled at quasi-steady state.
We determine the fraction of TF that is not bound by any of its ligands L: $$W_{apo} = \prod_{l \in L} \left(1 - \frac{[l]}{[l] + K_{l}}\right)$$ The fraction of time that a TF τ in a particular state σ (bound or apo) is bound to a particular operator o: $$V_{o} = \frac{[\tau_{\sigma}] \cdot c_{\tau o} \cdot K_{b_{\tau}}}{1 + \sum_{\sigma \in \mathcal{S}} \sum_{\tau_{\sigma} \in \mathcal{T}} [\tau_{\sigma}] \cdot c_{\tau o} \cdot K_{b_{\tau}}}$$ depends on the inherent binding affinity \(K_{b_{\tau}}\) as well as the sequence complementarity score \(c_{\tau o}\) between the TF binding motif and the operator sequence [cite Neyfahk]. The binding polynomial in the denominator is the partition function of all TFs \(\mathcal{T}\) in any of the states \(\mathcal{S}\) that can bind the operator. Note that small declines in the concentration of free TFs due to binding to operators are neglected. Now, the operator mediated regulation function for any gene is given by $$Reg = \sum V_{i} \cdot E_{i}$$ with \(V_{i}\) the fraction of time that the operator is either unbound or bound by a TF in the ligand-bound or ligand-free state, and \(E_{i}\) the regulatory effect of that state (1 if unbound, or effbound or effapo when bound by a ligand-bound or ligand-free TF, respectively). Finally, protein concentrations \([\mathcal{P}]\) are governed by $$\frac{d[\mathcal{P}]}{dt} = Pr \cdot Reg - degr \cdot [\mathcal{P}]$$ where Pr is the evolvable promoter strength parameter and degr a fixed protein degradation rate which is not evolvable. Toxicity and death Virtual Microbe death is a stochastic process depending on a basal death rate, which is potentially increased when internal metabolite concentrations reach a toxic threshold. A cumulative toxic effect is computed over the current life time τ of a microbe as $$e_{tox} = \sum_{m\in M} \int_{t=0}^{\tau} f(m,t)\, dt$$ for all internal molecules M, with $$f(m,t) = \max\left(0, \frac{[m]_{t} - tox_{m}}{tox_{m}}\right)$$ the toxic effect function for the concentration of molecule m at time t with toxicity threshold toxm. This toxic effect increases the death rate d of microbes starting from the intrinsic death rate r: $$d = \frac{e_{tox}}{s + e_{tox}} \cdot (1-r) + r$$ where s scales the toxic effect. Virtual Microbes that survive after an update cycle retain the toxic level they accumulated so far. Apart from toxicity and stochastic death, cells can also starve. When insufficient biomass product is available to keep up the slowly decaying volume of the cell, the cells decrease in volume. If the cell volume drops below a minimally viable volume, the cell is automatically marked for death. Reproduction When an empty grid point is available, the 8 (or fewer) neighbouring competitors get to compete to reproduce into the grid point. During the 'in silico serial transfer protocol' (see below), all cells are continuously mixed, so 8 (or fewer) random competitors are sampled. When cells compete for reproduction, the cells are ranked according to cell size. The "winner" is then drawn from a roulette wheel with weights proportional to this ranking. Upon reproduction, cell volume is divided equally between parent and offspring, and the genome is copied with mutations (see below). Molecule and protein concentrations remain constant. Toxic effects built up during the parent's lifetime do not carry over to offspring.
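The death and reproduction rules just described translate into a few lines of code. The sketch below is illustrative rather than the package's own implementation; the intrinsic death rate r, the toxicity scaling s and the competitor volumes are assumed values.

import random

def death_rate(e_tox, r=0.005, s=1.0):
    """Stochastic death probability per update: intrinsic rate r, increased by the
    accumulated toxic effect e_tox as d = e_tox/(s + e_tox)*(1 - r) + r."""
    return e_tox / (s + e_tox) * (1.0 - r) + r

def pick_parent(competitors):
    """Competition for an empty grid point: rank the (up to 8) neighbouring cells
    by volume and draw the winner from a roulette wheel with weights
    proportional to that ranking (largest cell = highest rank)."""
    ranked = sorted(competitors, key=lambda cell: cell["volume"])
    return random.choices(ranked, weights=range(1, len(ranked) + 1), k=1)[0]

neighbours = [{"id": i, "volume": v} for i, v in enumerate([0.31, 0.88, 0.47, 0.72])]
print(f"d = {death_rate(e_tox=0.2):.3f}, winner = cell {pick_parent(neighbours)['id']}")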
Genome and mutations The genome is a circular list of explicit genes and their promoter region, organised like "pearls on a string". Genes can be enzymes, transporters, or transcription factors. At birth, the genome is subject to various types of mutations. Large mutations include duplications, deletions, inversions, and translocations of stretches of genes (see Table 1). At the single gene level, point mutations allow all evolvable parameters to mutate individually (see Table 2). Horizontal gene transfer can occur at every time step. Innovations are an abstraction of "HGT from an external (off-grid) source", and allow randomly parameterised genes to be discovered at any given moment with a low probability. Experimental setup Metabolic network and wild type evolution We use a very simple metabolic network with 2 resource metabolites, 1 building block metabolite, and an energy carrier (Fig. 2a). We initialised 16 minimally viable Virtual Microbes, and evolved them for ∼10,000–15,000 generations in fluctuating resource conditions by applying random fluctuations of the influx rates for the A and the C resource. Because the rate of influx for the two resource metabolites fluctuates between very high (\(10^{-1}\)) and very low (\(10^{-5}\)) values, conditions can be very poor, very rich, and/or potentially toxic. To avoid total extinction, we subdivided the 40x40 grid into four 20x20 subspaces, in which these fluctuations are independent (see Fig. 2b). Note however that these subspaces do not impede diffusion and reproduction, but merely define the rate at which resources flux into different positions on the grid. In this study, the microbes do not migrate during their lifetime. These conditions, summarized in Table 3, aim to simulate natural resource fluctuations, evolving what we call "wild types" (WTs) of Virtual Microbes (see Additional file 1: Section S1). The initial population consists of cells that have 3 enzymes, 3 pumps, and 5 transcription factors. All these proteins are randomly parameterized, meaning that they are unlikely to have good binding affinities and catalytic rates. The amount of building block required to grow and produce protein is therefore very minimal in the early stages of evolution, and is increased up to a fixed level as the Virtual Microbes become more productive over time. In silico serial transfer protocol We mimic a serial transfer protocol like that of the LTEE by taking our evolved WTs and – instead of fluctuating the resource conditions – periodically supplying a strong pulse of both the A- and the C-resource. While WTs are evolved in a spatial setting where resources flux in and out of the system, we here mix all cells and resources continuously and fully close the system, meaning no metabolites wash in or out of the system during the daily cycle. To apply strong bottlenecks while at the same time allowing for sufficient growth, we increased the size of the grid from 40x40 to 70x70. We then dilute the population approximately tenfold, transferring 500 cells to the next cycle. Horizontal gene transfer between cells was disabled to represent the modified (asexual) Escherichia coli REL606 clone that is used in the LTEE [1]. Finally, as the strong bottlenecks cause more genetic drift in our small populations than in the WT evolution, we found it necessary to dial back the mutation rates to 30% of those used during WT evolution, to avoid over-exploiting mutants appearing too easily (see Table 1). Other parameters of the serial transfer protocol are listed in Table 3.
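A sketch of the transfer step itself is given below: at the end of each cycle roughly 500 cells are sampled at random to seed the next cycle, and the rest of the population (together with any leftover resources) is discarded. The cell objects and the end-of-cycle population size are placeholders; only the bottleneck size of 500 cells follows the text.

import random

def serial_transfer(population, n_transfer=500, rng=random):
    """End-of-cycle bottleneck: keep a random sample of ~500 cells
    (an approximately tenfold dilution) as the inoculum of the next cycle."""
    if len(population) <= n_transfer:
        return list(population)
    return rng.sample(population, n_transfer)

end_of_cycle = [f"cell_{i}" for i in range(5000)]   # placeholder population
inoculum = serial_transfer(end_of_cycle)
print(len(inoculum))                                 # 500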
Growth rate and yield measurements Yield was approximated by taking sum of all cell volumes. We measured yield both within a single serial transfer cycle ("daily yield"), and as the extended yield when we tested for long-term survival. As all WTs had slightly temporal growth rate dynamics, we estimated the growth rates as the average building block production during the first half of the protocol. Characterising coexistence Using the neutral lineage markers (also see Additional file 1: Figure S8), we manually characterised coexistence by looking at the dynamics of neutral lineage markers. When two neutral markers had relatively stable frequencies as visualised in Fig. 5b-c for at least 10.000 time steps (approximately 100 generations), it was scored as coexistence. Sometimes coexistence did not last until the end of the simulation, which we refer to as quasi-stable coexistence. Further configuration of Virtual Microbes Apart from the parameters within the confines of this article (Tables 1, 2, 3 and 4), we have used the default settings for Virtual Microbes release 0.1.4, with the configuration files provided in Additional file 1: Section S2. Further details on the model and parametrisation are available online https://bitbucket.org/thocu/virtual-microbes The full python module of Virtual Microbes is publicly available via PyPi. The code is available online on https://bitbucket.org/thocu/virtual-microbes. Further help with installation, instructions on how to use Virtual Microbes, and full documentation of the methods, is available on https://www.virtualmicrobes.com. As the data to support this study is fully computer generated, and consists of quite a large set of files, we felt it unnecessary and unhelpful to make the data available online. However, all the data that support this study are reproduced using Virtual Microbes 0.1.4 and the configuration from the Additional file 1. Finally, the corresponding author is available for help with the software. GRN: Gene regulatory network (plural: GRNs) LTEE: Long term evolution experiment (first published by R Lenski, 1991) TF: Transcription factor (plural: TFs) WT: wild type (plural: WTs) Lenski RE, Rose MR, Simpson SC, Tadler SC. Long-term experimental evolution in escherichia coli. i. adaptation and divergence during 2,000 generations. Am Natural. 1991; 138(6):1315–41. Dettman JR, Sirjusingh C, Kohn LM, Anderson JB. Incipient speciation by divergent adaptation and antagonistic epistasis in yeast. Nature. 2007; 447(7144):585. Paterson S, Vogwill T, Buckling A, Benmayor R, Spiers AJ, Thomson NR, Quail M, Smith F, Walker D, Libberton B, et al.Antagonistic coevolution accelerates molecular evolution. Nature. 2010; 464(7286):275. Dunham MJ, Badrane H, Ferea T, Adams J, Brown PO, Rosenzweig F, Botstein D. Characteristic genome rearrangements in experimental evolution of saccharomyces cerevisiae. Proc Natl Acad Sci. 2002; 99(25):16144–9. Cooper TF, Rozen DE, Lenski RE. Parallel changes in gene expression after 20,000 generations of evolution in escherichia coli. Proc Natl Acad Sci. 2003; 100(3):1072–7. Pelosi L, Kühn L, Guetta D, Garin J, Geiselmann J, Lenski RE, Schneider D. Parallel changes in global protein profiles during long-term experimental evolution in escherichia coli. Genetics. 2006; 173(4):1851–69. Philippe N, Crozat E, Lenski RE, Schneider D. Evolution of global regulatory networks during a long-term experiment with escherichia coli. Bioessays. 2007; 29(9):846–60. Hindré T, Knibbe C, Beslon G, Schneider D. 
New insights into bacterial adaptation through in vivo and in silico experimental evolution. Nat Rev Microbiol. 2012; 10(5):352. Laan L, Koschwanez JH, Murray AW. Evolutionary adaptation after crippling cell polarization follows reproducible trajectories. Elife. 2015; 4:09638. Salverda ML, Koomen J, Koopmanschap B, Zwart MP, de Visser JAG. Adaptive benefits from small mutation supplies in an antibiotic resistance enzyme. Proc Natl Acad Sci. 2017:201712999. https://doi.org/10.1073/pnas.1712999114. Consuegra J, Plucain J, Gaffé J, Hindré T, Schneider D. Genetic basis of exploiting ecological opportunity during the long-term diversification of a bacterial population. J Mol Evol. 2017; 85(1-2):26–36. Rozen DE, Lenski RE. Long-term experimental evolution in escherichia coli. viii. dynamics of a balanced polymorphism. Am Natural. 2000; 155(1):24–35. Rozen DE, Philippe N, Arjan de Visser J, Lenski RE, Schneider D. Death and cannibalism in a seasonal environment facilitate bacterial coexistence. Ecol Lett. 2009; 12(1):34–44. Plucain J, Hindré T, Le Gac M, Tenaillon O, Cruveiller S, Médigue C, Leiby N, Harcombe WR, Marx CJ, Lenski RE, et al.Epistasis and allele specificity in the emergence of a stable polymorphism in escherichia coli. Science. 2014:1242862. https://doi.org/10.1126/science.1248688. Treves DS, Manning S, Adams J. Repeated evolution of an acetate-crossfeeding polymorphism in long-term populations of escherichia coli. Mol Biol Evol. 1998; 15(7):789–97. Rozen DE, Schneider D, Lenski RE. Long-term experimental evolution in escherichia coli. xiii. phylogenetic history of a balanced polymorphism. J Mol Evol. 2005; 61(2):171–80. Turner CB, Blount ZD, Mitchell DH, Lenski RE. Evolution and coexistence in response to a key innovation in a long-term evolution experiment with escherichia coli. bioRxiv. 2015:020958. https://doi.org/10.1101/020958. Großkopf T, Consuegra J, Gaffé J, Willison JC, Lenski RE, Soyer OS, Schneider D. Metabolic modelling in a dynamic evolutionary framework predicts adaptive diversification of bacteria in a long-term evolution experiment. BMC Evol Biol. 2016; 16(1):163. Rocabert C, Knibbe C, Consuegra J, Schneider D, Beslon G. Beware batch culture: Seasonality and niche construction predicted to favor bacterial adaptive diversification. PLoS Comput Biol. 2017; 13(3):1005459. Good BH, McDonald MJ, Barrick JE, Lenski RE, Desai MM. The dynamics of molecular evolution over 60,000 generations. Nature. 2017; 551(7678):45. Sarubbi E, Rudd K, Xiao H, Ikehara K, Kalman M, Cashel M. Characterization of the spot gene of escherichia coli. J Biol Chem. 1989; 264(25):15074–82. Bergkessel M, Basta DW, Newman DK. The physiology of growth arrest: uniting molecular and environmental microbiology. Nat Rev Micro. 2016; 14(9):549–62. Finkel SE. Long-term survival during stationary phase: evolution and the gasp phenotype. Nat Rev Microbiol. 2006; 4(2):113. Balaban NQ, Merrin J, Chait R, Kowalik L, Leibler S. Bacterial persistence as a phenotypic switch. Science. 2004; 305(5690):1622–5. Piggot PJ, Hilbert DW. Sporulation of bacillus subtilis. Curr Opin Microbiol. 2004; 7(6):579–86. Veening J-W, Stewart EJ, Berngruber TW, Taddei F, Kuipers OP, Hamoen LW. Bet-hedging and epigenetic inheritance in bacterial cell development. Proc Natl Acad Sci. 2008; 105(11):4393–8. Solopova A, van Gestel J, Weissing FJ, Bachmann H, Teusink B, Kok J, Kuipers OP. Bet-hedging during bacterial diauxic shift. Proc Natl Acad Sci. 2014; 111(20):7427–32. Veening J-W, Smits WK, Kuipers OP. 
Bistability, epigenetics, and bet-hedging in bacteria. Annu Rev Microbiol. 2008; 62:193–210. Mitchell A, Romano GH, Groisman B, Yona A, Dekel E, Kupiec M, Dahan O, Pilpel Y. Adaptive prediction of environmental changes by microorganisms. Nature. 2009; 460(7252):220. Tagkopoulos I, Liu Y-C, Tavazoie S. Predictive behavior within microbial genetic networks. Science. 2008; 320(5881):1313–7. Takeuchi N, Hogeweg P. On the degree of freedom in multilevel evolutionary models. Proc Levels Sel Individuality Evol Conceptual Issues Role Artif Life Model. 2009:35. Lehman J, Clune J, Misevic D, Adami C, Altenberg L, Beaulieu J, Bentley PJ, Bernard S, Beslon G, Bryson DM, et al.The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. 2018. arXiv preprint arXiv:1803.03453. Lenski RE, Travisano M. Dynamics of adaptation and diversification: a 10,000-generation experiment with bacterial populations. Proc Natl Acad Sci. 1994; 91(15):6808–14. Wortel MT, Bosdriesz E, Teusink B, Bruggeman FJ. Evolutionary pressures on microbial metabolic strategies in the chemostat. Sci Rep. 2016; 6:29503. Li Y, Petrov DA, Sherlock G. Single nucleotide mapping of the locally accessible trait space in yeast reveals pareto fronts that constrain initial adaptation. bioRxiv. 2019:593947. https://doi.org/10.1101/593947. Manhart M, Adkar BV, Shakhnovich EI. Trade-offs between microbial growth phases lead to frequency-dependent and non-transitive selection. Proc Royal Soc B Biol Sci. 2018; 285(1872):20172459. Blundell JR, Levy SF. Beyond genome sequencing: lineage tracking with barcodes to study the dynamics of evolution, infection, and cancer. Genomics. 2014; 104(6):417–30. Papadopoulos D, Schneider D, Meier-Eiss J, Arber W, Lenski RE, Blot M. Genomic evolution during a 10.000-generation experiment with bacteria. Proc Natl Acad Sci. 1999; 96(7):3807–12. Sniegowski PD, Gerrish PJ. Beneficial mutations and the dynamics of adaptation in asexual populations. Phil Trans R Soc Biol Sci. 2010; 365(1544):1255–63. Novak M, Pfeiffer T, Lenski RE, Sauer U, Bonhoeffer S. Experimental tests for an evolutionary trade-off between growth rate and yield in e. coli. Am Natural. 2006; 168(2):242–51. Pianka ER. On r-and k-selection. Am Natural. 1970; 104(940):592–7. Turner PE, Souza V, Lenski RE. Tests of ecological mechanisms promoting the stable coexistence of two bacterial genotypes. Ecology. 1996; 77(7):2119–29. Smith HL. Bacterial competition in serial transfer culture. Math Biosci. 2011; 229(2):149–59. Manhart M, Shakhnovich E. Growth tradeoffs produce complex microbial communities on a single limiting resource. bioRxiv. 2018:266569. https://doi.org/10.1101/266569. Rainey PB, Travisano M. Adaptive radiation in a heterogeneous environment. Nature. 1998; 394(6688):69. Lind PA, Farr AD, Rainey PB. Experimental evolution reveals hidden diversity in evolutionary pathways. Elife. 2015; 4:07074. Ratcliff WC, Fankhauser JD, Rogers DW, Greig D, Travisano M. Origins of multicellular evolvability in snowflake yeast. Nat Commun. 2015; 6:6102. Damkiær S, Yang L, Molin S, Jelsbak L. Evolutionary remodeling of global regulatory networks during long-term bacterial adaptation to human hosts. Proc Natl Acad Sci. 2013; 110(19):7766–71. Crombach A, Hogeweg P. Evolution of evolvability in gene regulatory networks. PLoS Comput Biol. 2008; 4(7):1000112. Raman K, Wagner A. Evolvability and robustness in a complex signalling circuit. Mol BioSyst. 2011; 7(4):1081–92. Wagner A. 
The molecular origins of evolutionary innovations. Trends Genet. 2011; 27(10):397–410. Millard P, Smallbone K, Mendes P. Metabolic regulation is sufficient for global and robust coordination of glucose uptake, catabolism, energy production and growth in escherichia coli. PLoS Comput Biol. 2017; 13(2):1005396. Blount ZD, Barrick JE, Davidson CJ, Lenski RE. Genomic analysis of a key innovation in an experimental escherichia coli population. Nature. 2012; 489(7417):513. Bajić D, Vila JC, Blount ZD, Sánchez A. On the deformability of an empirical fitness landscape by microbial evolution. Proc Natl Acad Sci. 2018; 115(44):11286–91. Wang J, Atolia E, Hua B, Savir Y, Escalante-Chong R, Springer M. Natural variation in preparation for nutrient depletion reveals a cost–benefit tradeoff. PLoS Biol. 2015; 13(1):1002041. Mitchell A, Pilpel Y. A mathematical model for adaptive prediction of environmental changes by microorganisms. Proc Natl Acad Sci. 2011; 108(17):7271–6. Siegal ML. Shifting sugars and shifting paradigms. PLoS Biol. 2015; 13(2):1002068. New AM, Cerulus B, Govers SK, Perez-Samper G, Zhu B, Boogmans S, Xavier JB, Verstrepen KJ. Different levels of catabolite repression optimize growth in stable and variable environments. PLoS Biol. 2014; 12(1):1001764. Atamian HS, Creux NM, Brown EA, Garner AG, Blackman BK, Harmer SL. Circadian regulation of sunflower heliotropism, floral orientation, and pollinator visits. Science. 2016; 353(6299):587–90. Luell SM. The movements of fish and zooplankton at sunset in willand pond, new hampshire. UNH Center Freshwat Biol Res Vol. Consalvey M, Paterson DM, Underwood GJ. The ups and downs of life in a benthic biofilm: migration of benthic diatoms. Diatom Res. 2004; 19(2):181–202. Gaten E, Tarling G, Dowse H, Kyriacou C, Rosato E. Is vertical migration in antarctic krill (euphausia superba) influenced by an underlying circadian rhythm?. J Genet. 2008; 87(5):473. Govaert L, Fronhofer EA, Lion S, Eizaguirre C, Bonte D, Egas M, Hendry AP, Martins ADB, Melián C. J, Raeymaekers JA, et al.Eco-evolutionary feedbacks-theoretical models and perspectives. 2018. arXiv preprint arXiv:1806.07633. Vetsigian K. Diverse modes of eco-evolutionary dynamics in communities of antibiotic-producing microorganisms. Nat Ecol Evol. 2017; 1:0189. Kotil SE, Vetsigian K. Emergence of evolutionary stable communities through eco-evolutionary tunneling. bioRxiv. 2018:271015. https://doi.org/10.1101/271015. Wiser MJ, Lenski RE. A comparison of methods to measure fitness in escherichia coli. PLoS ONE. 2015; 10(5):0126210. Wiser MJ, Ribeck N, Lenski RE. Long-term dynamics of adaptation in asexual populations. Science. 2013; 342(6164):1364–7. Lenski RE, Wiser MJ, Ribeck N, Blount ZD, Nahum JR, Morris JJ, Zaman L, Turner CB, Wade BD, Maddamsetti R, et al. Sustained fitness gains and variability in fitness trajectories in the long-term evolution experiment with escherichia coli. Proc R Soc B. 2015; 282(1821):20152292. Fragata I, Blanckaert A, Louro MAD, Liberles DA, Bank C. Evolution in the light of fitness landscape theory. Trends Ecol Evol. 2018. https://doi.org/10.1016/j.tree.2018.10.009. White JW, Rassweiler A, Samhouri JF, Stier AC, White C. Ecologists should not use statistical significance tests to interpret simulation model results. Oikos. 2014; 123(4):385–8. The authors want to thank Dominique Schneider and Thomas Hindré (Université Grenoble Alpes) for experiments done, and discussions had, during this project. 
Finally, the authors would like to thank Guillaume Beslon, and all the partners of the EvoEvo project, for fruitful discussions. The work presented here was funded by, and brought to fruition during, the EvoEvo project (European Commission 7th Framework Programme (FP7-ICT-2013.9.6 FET Proactive: Evolving Living Technologies) EvoEvo project ICT-610427). Theoretical Biology, Utrecht University, Padualaan 8, Utrecht, The Netherlands: Bram van Dijk, Jeroen Meijer, Thomas D. Cuypers & Paulien Hogeweg. B.D. performed simulations and provided the data. Results were analysed and interpreted by all authors. B.D. wrote the manuscript with input from P.H., J.M., and T.C. P.H. supervised the project. All authors have approved the manuscript. Correspondence to Bram van Dijk. The authors declare no competing financial interests. Additional file 1 Supplementary materials. van Dijk, B., Meijer, J., Cuypers, T.D. et al. Trusting the hand that feeds: microbes evolve to anticipate a serial transfer protocol as individuals or collectives. BMC Evol Biol 19, 201 (2019) doi:10.1186/s12862-019-1512-2 Keywords: Predicting evolution, Serial transfer protocol, Resource cycle, Eco-evolutionary dynamics, In silico evolution, Digital microbes
Extending the time of coherent optical response in ensemble of singly-charged InGaAs quantum dots Coherent optical response in (In,Ga)As QDs We study n-doped (In,Ga)As quantum dot structures embedded in a microcavity with GaAs/AlAs distributed Bragg reflectors (for details see the "Methods" section). The QD emission is represented by the photoluminescence (PL) spectrum in Fig. 1a, which was measured from the edge of the sample in order to avoid the cavity impact (blue dotted line). The PL maximum at the photon energy of 1.4355 eV corresponds to the radiative recombination of excitons from the lowest confined energy state, while a weak shoulder at higher energies around 1.45 eV is apparently related to the emission from the first excited exciton states. The width of the PL line reflects the magnitude of inhomogeneous broadening for the optical transitions, with a full width at half maximum (FWHM) of 10 meV. The corresponding transmission spectrum with a band centered at 1.434 eV and FWHM of 1.4 meV is shown by the red dashed line in Fig. 1a. Using a microcavity with a quality factor Q ~ 1000 facilitates the efficient generation of a non-linear coherent optical signal due to the significant increase of light–matter interaction24,26,27. Fig. 1: Schematic representation of the experimental technique and the sample. a Photoluminescence (PL, dotted line) and transmission (dashed line) spectra of the sample measured at the temperature T = 6 K. The PL spectrum is shown for lateral emission from the edge of the sample in the direction parallel to its plane, i.e. along the x-axis. The laser spectrum is shown with a solid line. b Sketch of the photon echo experiment. (In,Ga)As quantum dots (QDs) are embedded in a microcavity with GaAs/AlAs distributed Bragg reflectors (DBR). c Blue line shows the transient four-wave mixing (FWM) signal measured in the kS = 2k2−k1 direction for τ12 = 33.3 ps and τ23 = 100 ps. The signal is represented by the two-pulse PE (2PE) at 67 ps and the three-pulse PE (3PE) at 167 ps. The three peaks with filled area show the temporal position of excitation laser pulses. Labels on top correspond to the polarization of excitation and detection in the HVVH configuration. H and V correspond to linear polarizations along x and y axes, respectively. Transient four-wave mixing experiments with heterodyne detection are performed at a temperature T = 2 K and a magnetic field is applied in the plane of the sample (see the "Methods" section and Fig. 1c). The time-resolved electric field amplitude of the four-wave mixing signal is shown in Fig. 1c by the blue line for τ12 = 33.3 ps and τ23 = 100 ps, where τij is the time delay between pulses i and j in the sequence. Two- and three-pulse echoes are observed at times t = 2τ12 (2PE) and t = 2τ12 + τ23 (3PE), respectively. They are well described by Gaussian peaks with a FWHM of about 10 ps, which is mainly determined by the spectral width of the excitation pulses16. In what follows we use the magnitude of the electric field amplitude at the PE peak maximum |PPE| to characterize the strength of the photon echo signal. Note that the data in Fig. 1c correspond to a single scan where the areas of the excitation pulses are set below π/2, which results in the simultaneous appearance of both 2PE and 3PE signals.
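As a quick consistency check of the echo timing quoted above, the expected positions of the two- and three-pulse echoes for τ12 = 33.3 ps and τ23 = 100 ps can be reproduced in a few lines. The Gaussian shapes and unit amplitudes below are only a schematic stand-in for the measured transient; the 10 ps FWHM is taken from the text.

import numpy as np

tau12, tau23, fwhm = 33.3, 100.0, 10.0                 # ps
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))      # Gaussian width from FWHM

def echo_transient(t):
    """Schematic transient: unit-amplitude Gaussians at the two-pulse echo time
    2*tau12 and the three-pulse echo time 2*tau12 + tau23."""
    pe2 = np.exp(-(t - 2 * tau12) ** 2 / (2 * sigma ** 2))
    pe3 = np.exp(-(t - (2 * tau12 + tau23)) ** 2 / (2 * sigma ** 2))
    return pe2 + pe3

t = np.linspace(0.0, 220.0, 2201)
y = echo_transient(t)
i2 = np.argmax(np.where(t < 110.0, y, 0.0))
i3 = np.argmax(np.where(t >= 110.0, y, 0.0))
print(f"2PE at ~{t[i2]:.0f} ps, 3PE at ~{t[i3]:.0f} ps")   # ~67 ps and ~167 ps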
In the next sections we present the two-pulse PE data for excitation with areas of pulses 1 and 2 approximately equal to π/2 and π, respectively. As for the three-pulse PE data, we use a sequence of three π/2 pulses. The pulse energy of \(\mathcal{P} = 5\) pJ corresponds to a pulse area of about π. We note that the areas of the excitation pulses do not change the temporal dynamics of the 2PE and 3PE signals as functions of τ12 and τ23, which is the main task of our study. They influence the amplitude of the echoes and their ratio. Under optimal conditions the fluence of a PE pulse is estimated to be about 0.5 fJ. In order to address various spin configurations, we use different linear polarization schemes in the excitation and detection paths. The direction of polarization is assigned with respect to the magnetic field direction, i.e. H and V polarizations are parallel and perpendicular to B, respectively. The polarization scheme is labeled as ABD or ABCD for two- or three-pulse echoes. Here, the first two (AB) or three (ABC) letters indicate the linear polarizations of the optical pulses in the excitation sequence and the last letter (D) corresponds to the polarization direction in the detection, e.g. the data in Fig. 1c are taken in the HVVH polarization configuration. Photon echo from trions in QDs In order to observe long-lived spin-dependent echoes it is necessary to address trion X− (charged exciton) complexes, which correspond to the elementary optical excitation in a charged QD. The energy spectrum in the charged QD can be well described by a four-level energy scheme with Kramers doublets in the ground and excited states at B = 0, which are determined by the spin of the resident electron S = 1/2 and the angular momentum of the heavy hole J = 3/2, as shown in Fig. 2a. In contrast to the exciton in a neutral QD, this four-level scheme allows establishing optically induced long-lived spin coherence in the ground state17.
Figure 2b shows polar plots of the two-pulse PE magnitude measured at τ12 = 66 ps using the HRH and HRV polarization schemes. The diagrams are obtained by rotating the linear polarization direction R of the second pulse, which is defined by the angle φ2 with respect to the H-polarization. In both polarization schemes, the signal is represented by rosettes with fourth harmonic periodicity when the angle φ2 is scanned. Such behavior corresponds to the PE response from trions, where the PE is linearly polarized with the angle φPE = 2φ2 and the PE amplitude is independent of φ229. In the case of the neutral exciton the polar plot is different because the PE signal is co-polarized with the second pulse (φPE = φ2) and its amplitude follows \(|\cos \varphi_{2}|\). We note that the small increase of the PE amplitude by about 15% in HHH as compared to HVH remains the same under rotation of the sample around the z-axis, which excludes an anisotropy of the dipole matrix elements in the xy-plane as a possible origin of the asymmetry (see the blue pattern in Fig. 2b). The difference could be provided by a weak contribution from neutral excitons. This is because in the HRH configuration the PE from trions gives a four-lobe pattern \(\propto |\cos 2\varphi_{2}|\), while for excitons it corresponds to a two-lobe pattern \(\propto \cos^{2}\varphi_{2}\). Finally, we conclude that, independently of the polarization scheme, the main contribution to the coherent optical response with a photon energy of 1.434 eV in the studied sample is attributed to trions. This demonstration is very important for the proper interpretation of the results because long-lived spin-dependent echoes can be observed only in charged QDs. Moreover, it has a large impact on applications in quantum memory protocols where high efficiency is required.

We evaluate the optical coherence time T2 and the population lifetime T1 of trions in QDs from the decay of the PE amplitude of the two- and three-pulse echoes, respectively. The data measured at B = 0 in the co-polarized configuration (HHH for 2PE and HHHH for 3PE) are shown in Fig. 2c. In the case of 2PE, the amplitude is scanned as a function of 2τ12 (blue dots), while for 3PE the dependence on τ23 is shown (green triangles). The exponential fit of the two-pulse echo \(|P_{\text{2PE}}| \propto \exp(-2\tau_{12}/T_{2})\) gives T2 = 0.45 ns which is in agreement with previous studies in (In,Ga)As/GaAs QDs16,22,24. The decay of 3PE has a more complex structure. At short delay times, its magnitude decays exponentially with a time constant of T1 = 0.27 ns which we attribute to the trion lifetime τr. However, the signal does not decay to zero and shows a small offset with a magnitude of about 5% of the initial amplitude at long delay times t > 1 ns. This weak signal is governed by the dynamics of the population grating in the ground state of the QD ensemble and can arise for many different reasons, which are beyond the scope of this paper. We note that T2 ≈ 2T1 indicates that the loss of optical coherence under resonant excitation of trions is governed by their radiative recombination.

Long-lived spin-dependent photon echo in QDs

Application of the transverse magnetic field (B∣∣x) leads to Zeeman splitting of the Kramers doublets in the ground resident electron and optically excited trion states. The electron spin states with spin projections Sx = ±1/2 are split by ℏωe = geμBB, while the trion states with angular momentum projections Jx = ±3/2 are split by ℏωh = ghμBB.
Here, ωe and ωh are the Larmor precession frequencies of electron and heavy hole spins, ge and gh are the electron and hole g factors, and μB is the Bohr magneton. Optical transitions between all four states are allowed using light with H or V linear polarization, as shown in Fig. 2a. The energy structure can be considered as composed of two Λ schemes sharing common ground states. The magnetic field induces the asymmetry between these two Λ schemes, allowing one to transfer the optical coherence induced by the first optical pulse into the spin coherence by application of the second optical pulse15,17. The first pulse creates two independent coherent superpositions between the ground and excited states (optical coherences). For an H-polarized pulse the optical coherences correspond to the density matrix elements ρ13 and ρ24 (see Fig. 2a and Supplementary Note 1). The second pulse creates the populations ρii (i = 1, 2, 3, 4) for H-polarization or accomplishes the transfer of optical coherences into the spin coherences of trions (ρ34) and electrons (ρ12) when the second pulse is V-polarized.

Due to the inhomogeneous broadening of the optical resonance frequencies ω0, optical excitation with a sequence of two pulses leads to the appearance of population (co-polarized HH-sequence) or spin (cross-polarized HV-sequence) gratings in the spectral domain with a period of 1/τ12. For the HH-sequence at zero magnetic field the population gratings in the left and right arms of the optical scheme are equal, i.e. ρ11 = ρ22 and ρ33 = ρ44 (see Fig. 2a). However, in a magnetic field, due to the Zeeman splitting of electrons and holes, a spin grating appears for the spin component directed along the magnetic field. For the HV-sequence the spin grating is given by the yz components. Thus, a sequence of two linearly polarized pulses can be used to initialize a spin grating in the ground and excited states. The addressed spin components depend on the polarization of the exciting pulses. For the linearly co-polarized HH sequence the spin components along the magnetic field direction (x-axis) are addressed (see Supplementary Eq. (35)):

$$S_{x}=-J_{x}\propto \sin\left(\frac{\omega_{\mathrm{e}}-\omega_{\mathrm{h}}}{2}\,\tau_{12}\right)\exp\left(-\frac{\tau_{12}}{T_{2}}\right)\cos\left(\omega_{0}\tau_{12}\right).$$

In the case of the cross-polarized HV sequence the spin grating is produced in the plane perpendicular to the magnetic field direction (see Supplementary Eqs. (36) and (37)):

$$S_{y}+iS_{z}=J_{y}-iJ_{z}\propto i\exp\left(i\,\frac{\omega_{\mathrm{e}}-\omega_{\mathrm{h}}}{2}\,\tau_{12}\right)\exp\left(-\frac{\tau_{12}}{T_{2}}\right)\cos\left(\omega_{0}\tau_{12}\right).$$

The evolution of the spin gratings for trions and resident electrons is governed by their population and spin dynamics in the magnetic field. The hole spin grating lifetime is limited by the trion lifetime. The electron spin grating in the ground state is responsible for the long-lived spin-dependent echo which appears if the third pulse is applied. The latter transforms the spin grating back into the optical coherence, leading to the appearance of the photon echo after the rephasing time τ1215. The decay of the LSPE as a function of τ23 is governed by the spin dynamics of resident electrons. The HHHH and HVVH polarization schemes give access to the longitudinal T1,e and transverse \(T_{2,\mathrm{e}}^{*}\) spin relaxation times, respectively.
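For orientation, the following minimal sketch evaluates the Larmor frequencies ωe = geμBB/ħ and ωh = ghμBB/ħ and the spin-grating factor sin[(ωe−ωh)τ12/2] appearing above. The g-factor values are the ones quoted later in the text; the field and the delay τ12 are assumptions chosen only for illustration.

```python
# Minimal sketch: Larmor frequencies and the HH spin-grating factor
# sin((omega_e - omega_h)/2 * tau12) from the expression above.
import numpy as np

MU_B = 9.274e-24      # J/T, Bohr magneton
HBAR = 1.0546e-34     # J s

g_e, g_h = -0.516, 0.18        # electron and hole g-factors quoted in the text
B = 0.3                        # T, assumed field
tau12 = 66e-12                 # s, assumed delay between pulses 1 and 2

omega_e = g_e * MU_B * B / HBAR        # rad/s
omega_h = g_h * MU_B * B / HBAR

grating_x = np.sin((omega_e - omega_h) / 2 * tau12)   # amplitude of the S_x = -J_x grating
print(f"omega_e/2pi = {omega_e / 2 / np.pi / 1e9:.3f} GHz, "
      f"omega_h/2pi = {omega_h / 2 / np.pi / 1e9:.3f} GHz")
print(f"sin((omega_e - omega_h)/2 * tau12) = {grating_x:.3f}")
```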
In the studied (In,Ga)As/GaAs QDs the value of gh = 0.18 is of the same order of magnitude as the electronic g-factor ge = −0.5230. Therefore, it should be taken into account in contrast to previous studies where the Zeeman splitting in the trion state was neglected. In addition, it should be noted that the PE signal depends sensitively on the orientation of crystallographic axes with respect to the magnetic field direction due to the strongly anisotropic in-plane g-factor of the hole in semiconductor quantum wells and dots30,31. In our studies, the sample was oriented with the [110] crystallographic axis parallel to B which corresponds to the case when the H- and V-polarized optical transitions have the photon energies of ℏω0 ± ℏ(ωe − ωh) and ℏω0 ± ℏ(ωe + ωh), respectively. The three-pulse PE amplitude as a function of delay time τ23 and magnetic field B are shown in Fig. 3. In full accord with our expectations, we observe that application of a moderate magnetic field B < 1 T drastically changes the dynamics of three-pulse PE. In HHHH polarization scheme the large offset emerges which decays on a timescale significantly longer than the repetition period of laser pulses, i.e. T1,e ≫ 10 ns. The short decay, which is also present at B = 0, with the time constant T1 = 0.26 ns is associated to the trion lifetime. In the HVVH polarization scheme, long-lived oscillatory signal appears which is attributed to the Larmor spin precession of resident electrons and decays exponentially with \({T}_{{{{{{{{\rm{2,e}}}}}}}}}^{* }\). At shorter delays, the signal behavior is more complex due to the superposition of spin-dependent signals from trions and resident electrons. Fig. 3: Long-lived spin-dependent photon echo in quantum dots. a Amplitude of three-pulse photon echo (3PE) as a function of τ23 for τ12 = 66 ps. The data are taken in co-polarized HHHH and cross-polarized HVVH polarization schemes at B = 0.3 and 0.1 T, respectively. H and V corresponds to linear polarization parallel and perpendicular to magnetic field, respectively. b Magnetic field dependence of long-lived spin-dependent photon echo for τ12 = 100 ps and τ23 = 2.033 ns. Top and bottom curves correspond to signal measured in HHHH (green triangles) and HVVH (blue circles) polarization schemes, respectively. Red lines present the results of the theoretical modeling using Eqs. (3) and (4) with the following parameters: ge = −0.516, gh = 0.18, TT = τr = T1 = 0.26 ns, T1,e = 23 ns, \({T}_{{{{{{{{\rm{2,e}}}}}}}}}^{* }\) is evaluated from T2,e = 4.3 ns and Δge = 0.004 using Eq. (5). The signals in HHHH polarization are shifted for clarity with the dashed line corresponding to zero signal level. Further insight can be obtained from the magnetic field dependence of LSPE signal which is measured at the long delay τ23 = 2.033 ns when the contribution from trions in three-pulse PE is negligible (see Fig. 3b). The delay time τ12 is set to 100 ps which is shorter than the optical coherence T2. At zero magnetic field, the PE is absent in the HVVH polarization scheme and shows only very weak amplitude in HHHH configuration. An increase of magnetic field leads to the appearance of LSPE in both polarization configurations. For HHHH we observe a slow oscillation which is governed by Larmor precession of both electron and hole spins during τ12 when the spin grating is initialized by the sequence of two pulses. 
In the HVVH scheme the LSPE oscillates much faster because it is mainly determined by the Larmor precession of resident electron spins during τ23, which is roughly 20 times longer than τ12. In order to describe the experimental results quantitatively, we extended the theory from Langer et al.15 by taking into account both the electron and the heavy-hole Zeeman splitting (for details see Supplementary Note 1). We analytically solve the Lindblad equation for the (4 × 4) density matrix to describe the temporal evolution between the first and second pulses for 0 < t < τ12 and after the third pulse for t > τ12 + τ23. The spin dynamics of trions and electrons in the external magnetic field for τ12 < t < τ12 + τ23 is described by the Bloch equations. The three-pulse PE amplitude in the HHHH scheme is given by

$$P_{\mathrm{HHHH}}\propto \mathrm{e}^{-\frac{2\tau_{12}}{T_{2}}}\left[2\,\mathrm{e}^{-\frac{\tau_{23}}{\tau_{\mathrm{r}}}}\cos^{2}\left(\frac{\omega_{\mathrm{e}}-\omega_{\mathrm{h}}}{2}\tau_{12}\right)+\mathrm{e}^{-\frac{\tau_{23}}{T_{\mathrm{T}}}}\sin^{2}\left(\frac{\omega_{\mathrm{e}}-\omega_{\mathrm{h}}}{2}\tau_{12}\right)+\mathrm{e}^{-\frac{\tau_{23}}{T_{1,\mathrm{e}}}}\sin^{2}\left(\frac{\omega_{\mathrm{e}}-\omega_{\mathrm{h}}}{2}\tau_{12}\right)\right]$$

Here the spin lifetime of the trion \(T_{\mathrm{T}}\) is given by \(T_{\mathrm{T}}^{-1}=\tau_{\mathrm{r}}^{-1}+T_{\mathrm{h}}^{-1}\). For moderate magnetic fields B ≤ 1 T we can assume that the spin relaxation time of the hole in the QDs, Th, is significantly longer than τr and, therefore, in our case TT = τr32. The first and second terms on the right-hand side correspond to the trion contribution, while the last term is due to the LSPE from resident electrons. For HVVH polarization we obtain

$$P_{\mathrm{HVVH}}\propto \mathrm{e}^{-\frac{2\tau_{12}}{T_{2}}}\left[\mathrm{e}^{-\frac{\tau_{23}}{T_{\mathrm{T}}}}r_{\mathrm{h}}\cos\left(\omega_{\mathrm{h}}\tau_{23}-(\omega_{\mathrm{e}}-\omega_{\mathrm{h}})\tau_{12}-\phi_{\mathrm{h}}\right)+\mathrm{e}^{-\frac{\tau_{23}}{T_{2,\mathrm{e}}^{*}}}r_{\mathrm{e}}\cos\left(\omega_{\mathrm{e}}\tau_{23}+(\omega_{\mathrm{e}}-\omega_{\mathrm{h}})\tau_{12}-\phi_{\mathrm{e}}\right)\right]$$

where for simplicity we introduce the following parameters: the phases ϕe, ϕh and the amplitudes re and rh. The subscripts h and e correspond to the trion and electron contributions, which are given by the first and second terms on the right-hand side of Eq. (4), respectively. The parameters are given by Supplementary Eqs. (55)-(57). They are determined by the Larmor precession frequencies ωe and ωh, the delay time τ12, and the trion lifetime τr. The g-factors of electrons and holes are known from previous studies30,33. Therefore, the only unknown parameter is the spin dephasing time of resident electrons \(T_{2,\mathrm{e}}^{*}\). Note that if the g-factors of electrons and holes are unknown they can be used as additional fitting parameters in the description below.
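The short Python sketch below evaluates Eqs. (3) and (4) as written above, with the overall proportionality constants set to one. The values of re, rh, ϕe, ϕh are placeholders here (in the paper they follow from Supplementary Eqs. (55)–(57)), and the assumed dephasing time is only a rough number, so the curves are illustrative rather than quantitative.

```python
# Minimal sketch: three-pulse echo amplitudes of Eqs. (3) and (4),
# with proportionality constants set to 1 and placeholder r_e, r_h, phi_e, phi_h.
import numpy as np

MU_B, HBAR = 9.274e-24, 1.0546e-34
g_e, g_h = -0.516, 0.18
B = 0.1                                    # T
tau12 = 66e-12                             # s
T2, tau_r = 0.45e-9, 0.26e-9               # optical coherence time, trion lifetime (s)
T_T, T1e, T2e_star = tau_r, 23e-9, 3.5e-9  # trion spin lifetime, electron times (s); T2e* assumed
r_e = r_h = 1.0                            # placeholder amplitudes
phi_e = phi_h = 0.0                        # placeholder phases

w_e = g_e * MU_B * B / HBAR
w_h = g_h * MU_B * B / HBAR
d = (w_e - w_h) / 2 * tau12                # half of (omega_e - omega_h) * tau12

tau23 = np.linspace(0, 10e-9, 1000)

P_HHHH = np.exp(-2 * tau12 / T2) * (
    2 * np.exp(-tau23 / tau_r) * np.cos(d) ** 2
    + np.exp(-tau23 / T_T) * np.sin(d) ** 2
    + np.exp(-tau23 / T1e) * np.sin(d) ** 2)

P_HVVH = np.exp(-2 * tau12 / T2) * (
    np.exp(-tau23 / T_T) * r_h * np.cos(w_h * tau23 - 2 * d - phi_h)
    + np.exp(-tau23 / T2e_star) * r_e * np.cos(w_e * tau23 + 2 * d - phi_e))

print("long-delay offset in HHHH :", P_HHHH[-1])
print("HVVH amplitude at 2 ns    :", P_HVVH[np.argmin(abs(tau23 - 2e-9))])
```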
In order to determine \({T}_{{{{{{{{\rm{2,e}}}}}}}}}^{* }(B)\), we fit the transient signals in HVVH polarization for different magnetic fields as shown exemplary for the transient at B = 0.1 T by the solid red line in Fig. 3a. For the LSPE when τ23 ≫ τr = T2/2 only the second term in Eq. (4) remains, which simplifies the fitting procedure. Three parameters of the LSPE signal, i.e. decay rate \(1/{T}_{{{{{{{{\rm{2,e}}}}}}}}}^{* }\), amplitude re, and phase ϕe, were extracted from the fit which are plotted as blue dots in Fig. 4 as a function of the magnetic field. It follows from Fig. 4a that the spin dephasing rate increases linearly with the increase of B. Such behavior is well established in ensembles of QDs and it is related to the fluctuations of electron g-factor value in different QDs32. It can be described as $$\hslash /{T}_{{{{{{{{\rm{2,e}}}}}}}}}^{* }=\hslash /{T}_{{{{{{{{\rm{2,e}}}}}}}}}+{{\Delta }}{g}_{{{{{{{{\rm{e}}}}}}}}}{\mu }_{{{{{{{{\rm{B}}}}}}}}}B,$$ where T2,e is the transverse spin relaxation time and Δge is the inhomogeneous broadening of the electron g-factor. The linear fit with this expression shown in Fig. 4a by the red dashed line gives T2,e = 4.3 ns and Δge = 4 × 10−3. Fig. 4: Magnetic field dependence of long-lived spin-dependent photon echo. Magnetic field dependence of the main parameters (decay rate, phase and amplitude) of long-lived spin-dependent photon echo signal evaluated from the three-pulse transients PHVVH(τ23) measured at different B. a decay rate \(\hslash /{T}_{{{{{{{{\rm{2,e}}}}}}}}}^{* }\); b phase ϕe; c amplitude re. Blue points correspond to the data resulting from the fit using the last term on the right-hand side in Eq. (4). Error bars represent confidence interval evaluated from the fit. In a and c the error is given by the size of the symbol. Red dashed line in a is the fit by linear function from Eq. (5) with T2,e = 4.3 ns and Δge = 4 × 10−3. Red solid line in b and c—magnetic field dependences of ϕe and re given by Supplementary Eqs. (56) and (57). The parameter ϕe in Fig. 4b starts from −0.8 rad in magnetic fields below 0.1 T and approaches zero in fields above 0.8 T. The amplitude re in Fig. 4c gradually rises with an increase of B up to 0.4 T and remains the same in larger magnetic fields. We calculate the magnetic field dependence of amplitude and phase of LSPE using Supplementary Eqs. (56) and (57), respectively, using ge = −0.516, gh = 0.18, TT = τr = 0.26 ns and τ12 = 33.3 ps. The resulting curves are shown by red solid lines in Fig. 4 and are in excellent agreement with the experimental data. We note that in the limit of large magnetic fields, which corresponds to the condition of ∣(ωe−ωh)∣τ12 ≫ 1, the amplitude of LSPE saturates (re → 1) and the phase of the signal approaches zero (ϕe → 0) which gives the simple expression \({P}_{{{{{{{{\rm{HVVH}}}}}}}}}\propto \cos [{\omega }_{{{{{{{{\rm{e}}}}}}}}}{\tau }_{23}+({\omega }_{{{{{{{{\rm{e}}}}}}}}}-{\omega }_{{{{{{{{\rm{h}}}}}}}}}){\tau }_{12}]\) for a long-lived signal at τ23 ≫ τr. We emphasize that this expression takes into account the non-zero g-factor of the hole gh which plays an important role in the formation of the LSPE signal. After evaluation of \({T}_{{{{{{{{\rm{2,e}}}}}}}}}^{* }(B)\), we can reproduce the LSPE signals as a function of τ23 and B using Eqs. (3) and (4) which are shown by red curves in Fig. 3 in both HHHH and HVVH polarization configurations. Here, the longitudinal spin relaxation time T1,e is set to 23 ns. 
In general, this relaxation process can be neglected because T1,e strongly exceeds τ23. Excellent agreement is obtained at all time delays and magnetic fields. We note that the small discrepancies in the HHHH polarization configuration at the magnetic fields around 0 and 1 T are attributed to the presence of a weak background signal, possibly due to a population grating in the ground states as previously discussed for the case of Fig. 2c. Importantly, the HVVH configuration, which corresponds to a fully coherent transformation between optical and spin coherences, is free from any background.
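Relatedly, the straight-line fit behind Eq. (5) and Fig. 4a can be sketched in a few lines of Python: generate dephasing rates that rise linearly with B and recover T2,e and Δge from a linear fit. The data here are synthetic, not the measured points of Fig. 4a.

```python
# Minimal sketch of Eq. (5): hbar/T2e* = hbar/T2e + dg_e * mu_B * B,
# recovering T2e and dg_e from a linear fit to synthetic data.
import numpy as np

MU_B = 9.274e-24        # J/T
HBAR = 1.0546e-34       # J s

T2e_true, dge_true = 4.3e-9, 4e-3
B = np.linspace(0.1, 1.0, 10)                          # T
rate = HBAR / T2e_true + dge_true * MU_B * B           # energy units (J)
rate += 0.01 * HBAR / T2e_true * np.random.default_rng(1).normal(size=B.size)

slope, intercept = np.polyfit(B, rate, 1)
print(f"T2,e = {HBAR / intercept * 1e9:.2f} ns  (input 4.30 ns)")
print(f"dg_e = {slope / MU_B:.4f}        (input 0.0040)")
```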
CommonCrawl
FIN 221 Ch 8 Practice Quiz Sam_Williams638 Stock X has a beta of 0.6, while Stock Y has a beta of 1.4. Which of the following statements is CORRECT? Stock Y must have a higher expected return and a higher standard deviation than Stock X. A portfolio consisting of $50,000 invested in Stock X and $50,000 invested in Stock Y will have a required return that exceeds that of the overall market. If the market risk premium decreases (but expected inflation is unchanged), the required return on both stocks will decrease but the decrease will be greater for Stock Y. If expected inflation increases (but the market risk premium is unchanged), the required return on both stocks will decrease by the same amount. If expected inflation decreases (but the market risk premium is unchanged), the required return on both stocks will decrease but the decrease will be greater for Stock Y. A highly risk-averse investor is considering adding one additional stock to a 4-stock portfolio. Two stocks are under consideration. Both have an expected return,, of 15%. However, the distribution of possible returns associated with Stock A has a standard deviation of 12%, while Stock B's standard deviation is 8%. Both stocks are equally highly correlated with the market, with r equal to 0.75 for both stocks. Which stock should this risk-averse investor add to his/her portfolio? Stock A. Stock B. Either A or B. Neither A nor B. Add A, since its beta is lower. Stock B Which of the following statements is CORRECT? An investor can eliminate virtually all market risk if he or she holds a very large and well diversified portfolio of stocks. An investor can eliminate virtually all diversifiable risk if he or she holds a very large, well diversified portfolio of stocks. The higher the correlation between the stocks in a portfolio, the lower the risk inherent in the portfolio. It is impossible to have a situation where the market risk of a single stock is less than that of a portfolio that includes the stock. Once a portfolio has about 40 stocks, adding additional stocks will not reduce its risk at all. If you plotted the returns of a given stock against those of the market, and you found that the slope of the regression line was negative, the CAPM would indicate that the required rate of return on the stock should be less than the risk-free rate for a well-diversified investor, assuming that investors in the market expect the observed relationship to continue on into the future. The slope of the SML is determined by the value of beta. The SML shows the relationship between companies' required returns and their diversifiable risks. The slope and intercept of this line cannot be influenced by a firm's managers, but the position of the company on the line can be influenced by managers. If investors become less risk averse, the slope of the Security Market Line will increase. If a company increases its use of debt, this is likely to cause the slope of its SML to increase, indicating a higher required return on the stock. Suppose the returns on two stocks are negatively correlated. One has a beta of 1.2 as determined in a regression analysis, while the other has a beta of -0.6. The returns on the stock with the negative beta will be negatively correlated with returns on most other stocks in the market. Suppose you are managing a stock portfolio, and you have information that leads you to believe the stock market is likely to be very strong in the immediate future. That is, you are convinced that the market is about to rise sharply. 
You should sell your high-beta stocks and buy low-beta stocks in order to take advantage of the expected market move. Collections Inc. is in the business of collecting past-due accounts for other companies, i.e., it is a collection agency. Collections' revenues, profits, and stock price tend to rise during recessions. This suggests that Collections Inc.'s beta should be quite high, say 2.0, because it does so much better than most other companies when the economy is weak. You think that investor sentiment is about to change, and investors are about to become more risk averse. This suggests that you should re-balance your portfolio to include more high-beta stocks. If the market risk premium remains constant, but the risk-free rate declines, then the returns on low beta stock will rise while those on high beta stocks will decline. Magee Company's stock has a beta of 1.20, the risk-free rate is 4.50%, and the market risk premium is 5.00%. What is Magee's required return? The risk-free rate, rRF, is 6%. The overall stock market has an expected return of 12%. Hazlett, Inc. has a beta of 1.2. What is the required return of Hazlett, Inc. stock? Given the following probability distribution, what are the expected return and the standard deviation of returns for Security J? State : 1 2 3 Pi: 0.2 0.6 0.2 kJ: 10% 15 20 15%; 6.50% 15%; 10.00% Niendorf Corporation's stock has a required return of 13.00%, the risk-free rate is 7.00%, and the market risk premium is 4.00%. Now suppose there is a shift in investor risk aversion, and the market risk premium increases by 2.00%. What is Niendorf's new required return? A $10 million portfolio that consists of the following five stocks: Amount Invested The portfolio has a required return of 11%, and the market risk premium is 5%. What is the required return on Stock C? Other sets by this creator FIN 221 Ch 2 & 6 Practice Quiz Verified questions **Solve each system by elimination or by any convenient method.** $$ \left\{\begin{aligned} 2 x+5 y &=24 \\-6 x+2 y &=30 \end{aligned}\right. $$ It has been estimated that employee absenteeism costs North American companies more than $100billion per year. As a first step in addressing the rising cost of absenteeism, the personnel department of a large corporation recorded the weekdays during which individuals in a sample of 362 absentees were away over the past several months. Do these data suggest that absenteeism is higher on some days of the week than on others? | Day of week | Monday | Tuesday | Wednsday | Thuesday | Friday | |---|---|---|---|---|---| | Number absent | 87 | 62 | 71 | 68 | 74 | $68.16$ is $12 \%$ of what number? De Mar, a plumbing, heating, and air-conditioning company located in Fresno, California, has a simple but powerful product strategy: *Solve the customer's problem no matter what, solve the problem when the customer needs it solved, and make sure the customer feels good when you leave*. De Mar offers guaranteed, same-day service for customers requiring it. The company provides 24-hour-a-day, 7-day-a-week service at no extra charge for customers whose air conditioning dies on a hot summer Sunday or whose toilet overflows at 2:30 A.M. As assistant service coordinator Janie Walter puts it: "We will be there to fix your $\mathrm{A} / \mathrm{C}$ on the fourth of July, and it's not a penny extra. When our competitors won't get out of bed, we'll be there!" De Mar guarantees the price of a job to the penny before the work begins. 
Whereas most competitors guarantee their work for 30 days, De Mar guarantees all parts and labor for one year. The company assesses no travel charge because "it's not fair to charge customers for driving out." Owner Larry Harmon says: "We are in an industry that doesn't have the best reputation. If we start making money our main goal, we are in trouble. So I stress customer satisfaction: money is the by-product." De Mar uses selective hiring, ongoing training and education, performance measures, and compensation that incorporate customer satisfaction, strong teamwork, peer pressure, empowerment, and aggressive promotion to implement its strategy. Says credit manager Anne Semrick: "The person who wants a nine-to-five job needs to go somewhere else." De Mar is a premium pricer. Yet customers respond because De Mar delivers value - that is, benefits for costs. In 8 years, annual sales increased from about $\$ 200,000$ to more than $\$3.3$ million. Even though De Mar's product is primarily a service product, how should each of the 10 strategic OM decisions in the text be managed to ensure that the product is successful?
CommonCrawl
Mosc. Math. J.: Mosc. Math. J., 2002, Volume 2, Number 2, Pages 329–402 (Mi mmj58) This article is cited in 23 scientific papers (total in 23 papers) Infinite global fields and the generalized Brauer–Siegel theorem M. A. Tsfasmanabc, S. G. Vlăduţac a Institute for Information Transmission Problems, Russian Academy of Sciences b Independent University of Moscow c Institut de Mathématiques de Luminy Abstract: The paper has two purposes. First, we start to develop a theory of infinite global fields, i.e., of infinite algebraic extensions either of $\mathbb{Q}$ or of $\mathbb{F}_r(t)$. We produce a series of invariants of such fields, and we introduce and study a kind of zeta-function for them. Second, for sequences of number fields with growing discriminant, we prove generalizations of the Odlyzko–Serre bounds and of the Brauer–Siegel theorem, taking into account non-archimedean places. This leads to asymptotic bounds on the ratio ${\log hR}/\log\sqrt{|D|}$ valid without the standard assumption $n/\log\sqrt{|D|}\to 0$, thus including, in particular, the case of unramified towers. Then we produce examples of class field towers, showing that this assumption is indeed necessary for the Brauer–Siegel theorem to hold. As an easy consequence we ameliorate existing bounds for regulators. Key words and phrases: Global field, number field, curve over a finite field, class number, regulator, discriminant bound, explicit formulae, infinite global field, Brauer–Siegel theorem. DOI: https://doi.org/10.17323/1609-4514-2002-2-2-329-402 Full text: http://www.ams.org/.../abst2-2-2002.html References: PDF file HTML file Bibliographic databases: MSC: 11G20, 11R37, 11R42, 14G05, 14G15, 14H05 Received: June 10, 2001; in revised form April 23, 2002 Citation: M. A. Tsfasman, S. G. Vlăduţ, "Infinite global fields and the generalized Brauer–Siegel theorem", Mosc. Math. J., 2:2 (2002), 329–402 \Bibitem{TsfVla02} \by M.~A.~Tsfasman, S.~G.~Vl{\u a}du\c t \paper Infinite global fields and the generalized Brauer--Siegel theorem \jour Mosc. Math.~J. \vol 2 \pages 329--402 \mathnet{http://mi.mathnet.ru/mmj58} \crossref{https://doi.org/10.17323/1609-4514-2002-2-2-329-402} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=1944510} \zmath{https://zbmath.org/?q=an:1004.11037} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000208593400007} \elib{http://elibrary.ru/item.asp?id=8379129} http://mi.mathnet.ru/eng/mmj58 http://mi.mathnet.ru/eng/mmj/v2/i2/p329 This publication is cited in the following articles: Tsfasman M.A., "Asymptotic properties of global fields", Finite Fields with Applications to Coding Theory, Cryptography and Related Areas, 2002, 328–334 A. I. Zykin, "The Brauer-Siegel and Tsfasman–Vlǎdut̨ theorems for almost normal extensions of number fields", Mosc. Math. J., 5:4 (2005), 961–968 Lebacque Ph., "Generalised Mertens and Brauer-Siegel theorems", Acta Arithmetica, 130:4 (2007), 333–350 Kunyavskii B.E., Tsfasman M.A., "Brauer-Siegel Theorem for Elliptic Surfaces", International Mathematics Research Notices, 2008, rnn009 A. I. Zykin, "Brauer–Siegel Theorem for Families of Elliptic Surfaces over Finite Fields", Math. Notes, 86:1 (2009), 140–142 A. I. Zykin, "Asymptotic properties of the Dedekind zeta-function in families of number fields", Russian Math. 
Surveys, 64:6 (2009), 1145–1147 Zykin A., "On the generalizations of the Brauer-Siegel theorem", Arithmetic, Geometry, Cryptography and Coding Theory, Contemporary Mathematics, 487, 2009, 195–206 Zykin A.I., Lebacque P., "On logarithmic derivatives of zeta functions in families of global fields", Doklady Mathematics, 81:2 (2010), 201–203 Schmidt A., "Regarding Pro-Fundamental group markers of arithmetic curves", Journal fur Die Reine und Angewandte Mathematik, 640 (2010), 203–235 Lebacque Ph., "On Tsfasman-Vladut Invariants of Infinite Global Fields", Int J Number Theory, 6:6 (2010), 1419–1448 Lebacque Ph., Zykin A., "On Logarithmic Derivatives of Zeta Functions in Families of Global Fields", Int J Number Theory, 7:8 (2011), 2139–2156 Emmanuel Hallouin, Marc Perret, "Recursive towers of curves over finite fields using graph theory", Mosc. Math. J., 14:4 (2014), 773–806 Ngo Thi Ngoan, Nguyen Quoc Thang, "on Some Hasse Principles For Algebraic Groups Over Global Fields. II", Proc. Jpn. Acad. Ser. A-Math. Sci., 90:8 (2014), 107–112 Zykin A., "Asymptotic Properties of Zeta Functions Over Finite Fields", Finite Fields their Appl., 35 (2015), 247–283 Lebacque Ph., "Some Effective Results on the Tsfasman-Vladut Invariants", Ann. Inst. Fourier, 65:1 (2015), 63–99 Zykin A., "Uniform Distribution of Zeroes of l-Functions of Modular Forms", Algorithmic Arithmetic, Geometry, and Coding Theory, Contemporary Mathematics, 637, eds. Ballet S., Perret M., Zaytsev A., Amer Mathematical Soc, 2015, 295–299 Marc Hindry, Amílcar Pacheco, "An analogue of the Brauer–Siegel theorem for abelian varieties in positive characteristic", Mosc. Math. J., 16:1 (2016), 45–93 Luzzi L., Vehkalahti R., "Almost Universal Codes Achieving Ergodic Mimo Capacity Within a Constant Gap", IEEE Trans. Inf. Theory, 63:5 (2017), 3224–3241 S. G. Vlăduţ, D. Yu. Nogin, M. A. Tsfasman, "Varieties over finite fields: quantitative theory", Russian Math. Surveys, 73:2 (2018), 261–322 A. L. Smirnov, "Kummerova bashnya i bolshie dzeta-funktsii", Algebra i teoriya chisel. 1, Posvyaschaetsya pamyati Olega Mstislavovicha FOMENKO, Zap. nauchn. sem. POMI, 469, POMI, SPb., 2018, 151–159 Maire Ch., Oggier F., "Maximal Order Codes Over Number Fields", J. Pure Appl. Algebr., 222:7 (2018), 1827–1858 Griffon R., "A Brauer-Siegel Theorem For Fermat Surfaces Over Finite Fields", J. Lond. Math. Soc.-Second Ser., 97:3 (2018), 523–549 Hajir F., Maire Ch., "On the Invariant Factors of Class Groups in Towers of Number Fields", Can. J. Math.-J. Can. Math., 70:1 (2018), 142–172 This page: 311 References: 60
CommonCrawl
Basic Rules of Definite Integral Have you ever been asked to calculate an area? If you are given a rectangle and want to know the area, it's easy: multiply its width time its length. Trouble is, what if you have irregular shapes or shapes with complicated boundaries? You could try to figure out the area by breaking the shape into a lot of little rectangles and adding up the areas. But here is a better way — the definite integral. The definite integral is a powerful concept used to represent the area under a curve, which turns out to be useful in a lot of calculations. The definite integral is continuous on a closed interval $[a,b]$, in other words, it has start and end values. In this section, we will learn some basic rules of the definite integral. $\int_{b}^{a}{\text{f}(x)\text{ d}x}=\text{F}(a)-\text{F}(b)$ Practice Question 1 Evaluate the following. $\int_{1}^{6}{3x-1\,\text{d}x}$ (a) $\int_{1}^{6}{3x-1\,\text{d}x}$ $\int_{2}^{4}{\frac{3}{{{x}^{4}}}+1\,\text{d}x}$ (b) $\int_{2}^{4}{\frac{3}{{{x}^{4}}}+1\,\text{d}x}$ $\int_{1}^{6}{\sqrt{x+3}\,\text{d}x}$ (c) $\int_{1}^{6}{\sqrt{x+3}\,\text{d}x}$ & \int_{1}^{6}{3x\text{d}x} \\ & =\left[ \frac{3{{x}^{2}}}{2}-x \right]_{1}^{6} \\ & =\left[ \frac{3}{2}{{\left( 6 \right)}^{2}}-6 \right]-\left[ \frac{3{{\left( 1 \right)}^{2}}}{2}-1 \right] \\ & =48-\frac{1}{2} \\ & =\frac{95}{2} \\ & \int_{2}^{4}{\frac{3}{{{x}^{4}}}+1\text{d}x} \\ & =\int_{2}^{4}{3{{x}^{-4}}+1\,\text{d}x} \\ & =\left[ \frac{3{{x}^{-3}}}{\left( -3 \right)}+x \right]_{2}^{4} \\ & =\left[ -\frac{1}{{{x}^{3}}}+x \right]_{2}^{4} \\ & =\left[ -\frac{1}{{{4}^{3}}}+4 \right]-\left[ -\frac{1}{{{2}^{3}}}+2 \right] \\ & =\frac{255}{64}-\left( \frac{15}{8} \right) \\ & =\frac{135}{64} \\ & \int_{1}^{6}{\sqrt{x+3}\text{d}x} \\ & =\int_{1}^{6}{{{\left( x+3 \right)}^{\frac{1}{2}}}\text{d}x} \\ & =\left[ \frac{1{{\left( x+3 \right)}^{\frac{3}{2}}}}{\frac{3}{2}} \right]_{1}^{6} \\ & =\frac{2}{3}\left[ {{\sqrt{x+3}}^{3}} \right]_{1}^{6} \\ & =\frac{2}{3}\left[ \left( {{\sqrt{6+3}}^{3}} \right)-\left( {{\sqrt{1+3}}^{3}} \right) \right] \\ & =\frac{2}{3}\left[ {{\sqrt{9}}^{3}}-{{\sqrt{4}}^{3}} \right] \\ & =\frac{2}{3}\left( {{3}^{3}}-{{2}^{3}} \right) \\ Understanding Results of Definite Integral Now let's look at some interesting properties of the definite integral. If we reverse the direction of the interval, we would get the original integral in the negative direction. $\int_{b}^{a}{\text{f}\left( x \right)\text{d}x}=-\int_{a}^{b}{\text{f}\left( x \right)\text{d}x}$ We can also add integrals of two adjacent intervals together. 
$\int_{a}^{c}{\text{f}\left( x \right)\text{d}x}=\int_{a}^{b}{\text{f}\left( x \right)\text{d}x}+\int_{b}^{c}{\text{f}\left( x \right)\text{d}x}$ Given that $\int_{-2}^{5}{\text{f}\left( x \right)\text{ d}x}=14$, find $\int_{-2}^{5}{\text{3f}\left( x \right)\,\,\text{d}x}$ (a) $\int_{-2}^{5}{\text{3f}\left( x \right)\,\,\text{d}x}$ $\int_{5}^{-2}{\left[ \text{f}\left( x \right)-3x \right]\text{ d}x}$ (b) $\int_{5}^{-2}{\left[ \text{f}\left( x \right)-3x \right]\text{ d}x}$ & \int_{-2}^{5}{\text{3f}\left( x \right)\text{d}x} \\ & =3\int_{-2}^{5}{\text{f}\left( x \right)\text{d}x} \\ & =3\times 14 \\ & =42 \\ & \int_{5}^{-2}{\left[ \text{f}\left( x \right)-3x \right]\text{ d}x} \\ & =\int_{5}^{-2}{\text{f}\left( x \right)\text{d}x}-\int_{5}^{-2}{3x\,\text{d}x} \\ & =-\int_{-2}^{5}{\text{f}\left( x \right)\text{d}x}-\left[ \frac{3{{x}^{2}}}{2} \right]_{5}^{-2} \\ & =-14-\left[ \frac{3}{2}{{\left( -2 \right)}^{2}}-\frac{3}{2}{{\left( 5 \right)}^{2}} \right] \\ & =-14-\left( -\frac{63}{2} \right) \\ Definite Integration as the Reverse of Differentiation In calculus, the definite integral can be defined as the reverse of differentiation. That makes sense: if we know how to differentiate, we can integrate; and if we know how to integrate, we can work out what a derivative is. In this section, we will see how this concept applies in solving questions. Differentiate $x\cos 3x$ with respect to $x$. Hence, show that $\int_{0}^{\frac{\pi }{9}}{x\sin 3x\text{ d}x}=\frac{\sqrt{3}}{18}-\frac{\pi }{54}$. & \frac{\text{d}}{\text{d}x}\left( x\cos 3x \right) \\ & =x\left[ -\sin 3x\cdot 3 \right]+\left[ \cos 3x \right]\cdot 1 \\ & =-3x\sin 3x+\cos 3x \\ \int_{0}^{\frac{\pi }{9}}{-3x\sin 3x}+\cos 3x\,\text{d}x&=\left[ x\cos 3x \right]_{0}^{\frac{\pi }{9}} \\ -3\int_{0}^{\frac{\pi }{9}}{x\sin 3x\,\text{d}x}+\int_{0}^{\frac{\pi }{9}}{\cos 3xdx}&=\frac{\pi }{9}\cos 3\left( \frac{\pi }{9} \right)-0\cos 0 \\ -3\int_{0}^{\frac{\pi }{9}}{x\sin 3x\,\text{d}x}+\left[ \frac{\sin 3x}{3} \right]_{0}^{\frac{\pi }{9}}&=\frac{\pi }{9}\cos \frac{\pi }{3} \\ -3\int_{0}^{\frac{\pi }{9}}{x\sin 3x\,\text{d}x}+\left[ \frac{\sin \frac{\pi }{3}}{3}-\frac{\sin 0}{3} \right]&=\frac{\pi }{9}\left( \frac{1}{2} \right) \\ -3\int_{0}^{\frac{\pi }{9}}{x\sin 3x\,\text{d}x}+\frac{\frac{\sqrt{3}}{2}}{3}&=\frac{\pi }{18} \\ -3\int_{0}^{\frac{\pi }{9}}{x\sin 3x\,\text{d}x}&=\frac{\pi }{18}-\frac{\sqrt{3}}{6} \\ -3\int_{0}^{\frac{\pi }{9}}{x\sin 3x\,\text{d}x}&=-\frac{1}{3}\left[ \frac{\pi }{18}-\frac{\sqrt{3}}{6} \right] \\ & =\frac{\sqrt{3}}{18}-\frac{\pi }{54} Finding the Equation of the Curve The idea behind integration is that if we have a curve (or a function) and we know one of its points (or values), then integrating gives us the equation of the curve. A curve passes through the point $\left( -2,\frac{1}{2} \right)$ and has gradient given by $\frac{\text{d}y}{\text{d}x}={{\left( 2x+1 \right)}^{2}}$. Find the equation of the curve. 
& \frac{\text{d}y}{\text{d}x}={{\left( 2x+1 \right)}^{2}} \\ & y=\int{{{\left( 2x+1 \right)}^{2}}\text{d}x} \\ & y=\frac{{{\left( 2x+1 \right)}^{3}}}{3\left( 2 \right)}+c \\ & y=\frac{1}{6}{{\left( 2x+1 \right)}^{3}}+c When $x=-2,y=\frac{1}{2}$: \frac{1}{2}&=\frac{1}{6}{{\left( 2\left( -2 \right)+1 \right)}^{3}}+c \\ \frac{1}{2}&=\frac{1}{6}{{\left( -3 \right)}^{3}}+c \\ \frac{1}{2}&=-\frac{9}{2}+c \\ c&=\frac{1}{2}+\frac{9}{2}=5 Equation of curve: $y=\frac{1}{6}{{\left( 2x+1 \right)}^{3}}+5$ Finding the Equation of the Curve from Second Derivative The equation of the curve can also be found from the second derivative using integration. This takes a little longer but it gives you practice with derivatives and integrals. Let's have a look at the practice question. At any point $(x,y)$ on a curve, $\frac{{{\text{d}}^{2}}y}{\text{d}{{x}^{2}}}=\frac{18}{{{\left( x-2 \right)}^{3}}}$. The gradient of the curve at $\left( 5,20 \right)$ is $2$. Find the equation of the tangent to the curve when it cuts the $y$- axis. Part 1: Finding the Original Equation of the Curve \frac{\text{d}y}{\text{d}x}&=\int{18{{\left( x-2 \right)}^{-3}}\text{d}x} \\ & =\frac{18{{\left( x-2 \right)}^{-2}}}{\left( -2 \right)}+c \\ \frac{\text{d}y}{\text{d}x}&=-9{{\left( x-2 \right)}^{-2}}+c Solve for the $c$ When $\frac{\text{d}y}{\text{d}x}=2,x=5$: 2&=-9{{\left( 5-2 \right)}^{-2}}+c \\ c&=2+9{{\left( 3 \right)}^{-2}} \\ $\frac{\text{d}y}{\text{d}x}=-9{{\left( x-2 \right)}^{-2}}+3$ Part 2: Finding the Equation of the Line y&=\int{-9{{\left( x-2 \right)}^{-2}}+3\,\text{d}x} \\ y&=\frac{-9{{\left( x-2 \right)}^{-1}}}{\left( -1 \right)}+3x+D \\ y&=\frac{9}{x-2}+3x+D Substitute coordinates into the equation When $x=5,y=20$: 20&=\frac{9}{5-2}+3\left( 5 \right)+D \\ D&=2 Equation of curve $y=\frac{9}{x-2}+3x+2$ Part 3: Find the $y$-intercept, when $x=0$ y&=\frac{9}{0-2}+3\left( 0 \right)+2 \\ y&=-\frac{5}{2} $\left( 0,-\frac{5}{2} \right)$ Find the gradient of the tangent {{m}_{\operatorname{t}\text{angent}}},{{\left. \frac{\text{d}y}{\text{d}x} \right|}_{x=0}}&=-9{{\left( 0-2 \right)}^{-2}}+3 \\ & =\frac{3}{4} Find the equation of the tangent y-\left( -\frac{5}{2} \right)&=\frac{3}{4}\left( x-0 \right) \\ y&=\frac{3}{4}x-\frac{5}{2} Practice Question 6 – Problems involving Unknowns The curve for which $\frac{\text{d}y}{\text{d}x}\,\,=\,\frac{a}{{{(2x\,+\,3\,)}^{3}}}\,\,-\,\,1\,,\,$where $a$ is a constant, is such that the tangent to the curve at $\left( -1,0 \right)$ is perpendicular to the line $5y=x+1$. Find the value of $a$ and the equation of the curve. The gradient at $x=-1$ is perpendicular to the gradient of the line. Finding the gradient of the line: y&=\frac{1}{5}x+\frac{1}{5} \\ {{m}_{\text{line}}}&=\frac{1}{5} Finding the value of $a$: {{m}_{\text{tangent}}}&={{\left. 
\frac{\text{d}y}{\text{d}x} \right|}_{x=-1}} \\ & =\frac{a}{{{\left( 2\left( -1 \right)+3 \right)}^{3}}}-1 \\ & =a-1 Since they are perpendicular to each other, {{m}_{\text{tangent}}}\times {{m}_{\text{line}}}&=-1 \\ \left( a-1 \right)\times \frac{1}{5}&=-1 \\ a-1&=-5 \\ a&=-4 Finding the equation of the curve: \frac{\text{d}y}{\text{d}x}&=-\frac{4}{{{\left( 2x+3 \right)}^{3}}}-1 \\ y&=\int{-4{{\left( 2x+3 \right)}^{-3}}-1\,\text{d}x} \\ y&=\frac{-4{{\left( 2x+3 \right)}^{-2}}}{\left( -2 \right)\left( 2 \right)}-x+c \\ y&={{\left( 2x+3 \right)}^{-2}}-x+c Find the value of constant $c$: When $x=-1,y=0$: 0&={{\left( -2+3 \right)}^{-2}}-\left( -1 \right)+c \\ 0&=1+1+c \\ c&=-2 $\therefore y={{\left( 2x+3 \right)}^{-2}}-x-2$ Equation of curve, $y=\frac{1}{{{\left( 2x+3 \right)}^{2}}}-x-2$ Practice Question 7 – Problems involving Partial Fractions Express $\frac{4-5x}{2+x-{{x}^{2}}}$ in partial fractions. Hence, integrate $\int\limits_{0}^{1}{\frac{4-5x}{2+x-{{x}^{2}}}\,}\text{d}x$. Factorise the denominator: $2+x-{{x}^{2}}=\left( 2-x \right)\left( x+1 \right)$ Solve in partial fractions form: \frac{4-5x}{\left( 2-x \right)\left( x+1 \right)}&=\frac{A}{2-x}+\frac{B}{x+1} \\ & =\frac{A\left( x+1 \right)+B\left( 2-x \right)}{\left( 2-x \right)\left( x+1 \right)} $4-5x=A\left( x+1 \right)+B\left( 2-x \right)$ When $x=-1$: 4-5\left( -1 \right)&=B\left( 2-\left( -1 \right) \right) \\ 9&=3B \\ B&=3 When $x=2$: 4-5\left( 2 \right)&=A\left( 2+1 \right) \\ -6&=A\left( 3 \right) \\ $\therefore \frac{4-5x}{2+x-{{x}^{2}}}=-\frac{2}{2-x}+\frac{3}{x+1}$ Find the integral: & \int_{0}^{1}{\frac{4-5x}{2+x-{{x}^{2}}}}\text{ d}x \\ & =\int_{0}^{1}{-\frac{2}{2-x}+\frac{3}{x+1}}\text{ d}x \\ & =\int_{0}^{1}{-2\left( \frac{1}{2-x} \right)+3\left( \frac{1}{x+1} \right)\text{d}x} \\ & =\left[ -2\frac{\ln \left( 2-x \right)}{\left( -1 \right)}+3\ln \left( x+1 \right) \right]_{0}^{1} \\ & =\left[ 2\ln \left( 2-x \right)+3\ln \left( x+1 \right) \right]_{0}^{1} \\ & =\left[ 2\ln \left( 2-1 \right)+3\ln \left( 1+1 \right) \right]-\left[ 2\ln \left( 2-0 \right)+3\ln \left( 0+1 \right) \right] \\ & =2\ln 1+3\ln 2-\left( 2\ln 2+3\ln 1 \right) \\ & =3\ln 2-2\ln 2 \\ & =\ln 2 Area Enclosed by the Curve and the $x$-axis In general, the area enclosed by the curve $y=\text{f}(x)$, the $x$-axis, the lines $x=a$ and $x=b$, where $a<b$, is given by Find the total area enclosed by the curve $y=\left( 3x-2 \right)\left( x+2 \right)$ and the $x$-axis. y&=3{{x}^{2}}+6x-2x-4 \\ y&=3{{x}^{2}}+4x-4 \\ \text{Required Area}&=\left| \int_{-2}^{\frac{2}{3}}{y\,\text{d}x} \right| \\ & =\left| \int_{-2}^{\frac{2}{3}}{\left( 3x-2 \right)\left( x+2 \right)\text{d}x} \right| \\ & =\left| \int_{-2}^{\frac{2}{3}}{3{{x}^{2}}+4x-4\,\text{d}x} \right| \\ & =\left| \left[ \frac{3{{x}^{3}}}{3}+\frac{4{{x}^{2}}}{2}-4x \right]_{-2}^{\frac{2}{3}} \right| \\ & =\left| \left( {{x}^{3}}+2{{x}^{2}}-4x \right)_{-2}^{\frac{2}{3}} \right| \\ & =\left| \left[ {{\left( \frac{2}{3} \right)}^{3}}+2{{\left( \frac{2}{3} \right)}^{2}}-4\left( \frac{2}{3} \right) \right]-\left[ {{\left( -2 \right)}^{3}}+2{{\left( -2 \right)}^{2}}-4\left( -2 \right) \right] \right| \\ & =\left| \left( -\frac{40}{27} \right)-\left( 8 \right) \right| \\ & =\left| -9\frac{13}{27} \right| \\ & =9\frac{13}{27}\text{unit}{{\text{s}}^{\text{2}}} Area Enclosed by the Curve and the $y$-axis In general, the area enclosed by the curve $x=\text{g}\left( y \right)$, the $y$-axis, the lines $y=a$ and $y=b$, where $a<b$ , is given by The figure shows the curve $x={{y}^{2}}-9$. 
Find the area of the region bounded by the curve, $y$- axis and the line $y=4$. {{y}^{2}}-9&=0 \\ y&=\pm \sqrt{9} \\ & =3\,\,\text{or}\,\,-3 \text{Required}\,\text{Area}&=\int_{3}^{4}{x\,\text{d}y} \\ & =\int_{3}^{4}{{{y}^{2}}-9}\,\text{d}y \\ & =\left[ \frac{{{y}^{3}}}{3}-9y \right]_{3}^{4} \\ & =\left( -\frac{44}{3} \right)-\left( -18 \right) \\ & =\frac{10}{3}\text{unit}{{\text{s}}^{2}} \\ & =3\frac{1}{3}\text{unit}{{\text{s}}^{2}} Area Between Two Curves Area Enclosed by Curves Projected to the $x$-axis The area bounded by two curves, $y=\text{f}\left( x \right)$ and $y=\text{g}\left( x \right)$, where $\text{f}\left( x \right)\ge \text{g}\left( x \right)$, from $x=a$ to $x=b$, where $a<b$, is given by $\int\limits_{a}^{b}{\left[ \text{f}\left( x \right)-\text{g}\left( x \right) \right]\text{d}x}$. Note: For area between curves, there will not be any negative area generated as we are finding the area between the curves. Area Enclosed by Curves Projected to the $y$-axis The area bounded by two curves, $x=\text{f}\left( y \right)$ and $x=\text{g}\left( y \right)$, where $\text{f}\left( y \right)\ge \text{g}\left( y \right)$, from $y=a$ to $y=b$, where $a<b$, is given by $\int\limits_{a}^{b}{\left[ \text{f}\left( y \right)-\text{g}\left( y \right) \right]\text{d}y}$. Practice Question 10 The figure shows the two curves $y=-{{x}^{2}}+8x-8$ and $y=-{{\frac{(x-7)}{7}}^{2}}-1$. Find the area of the region bounded by the curves. To find the intersection point: -{{x}^{2}}+8x-8&=-\frac{{{\left( x-7 \right)}^{2}}}{7}-1 \\ -7\left[ {{x}^{2}}+8x-8 \right]&=-7\left[ -\frac{1}{7}{{\left( x-7 \right)}^{2}}-1 \right] \\ 7{{x}^{2}}-56x+56&={{\left( x-7 \right)}^{2}}+7 \\ 7{{x}^{2}}-56x+56&={{x}^{2}}-14x+49+7 \\ 6{{x}^{2}}-42x&=0 \\ 6x\left( x-7 \right)&=0 \\ x&=0\,\,\text{or}\,\,7 Required Area & =\int_{0}^{7}{\left( -{{x}^{2}}+8x-8 \right)-\left[ -\frac{{{\left( x-7 \right)}^{2}}}{7}-1 \right]}\,\text{d}x \\ & =\int_{0}^{7}{-{{x}^{2}}+8x-8+\frac{1}{7}{{\left( x-7 \right)}^{2}}+1\,\text{d}x} \\ & =\int_{0}^{7}{-{{x}^{2}}+8x+\frac{1}{7}{{\left( x-7 \right)}^{2}}-7\,\text{d}x} \\ & =\left[ -\frac{{{x}^{3}}}{3}+\frac{8{{x}^{2}}}{2}+\frac{1}{7}\cdot \frac{{{\left( x-7 \right)}^{3}}}{3}-7x \right]_{0}^{7} \\ & =\left[ -\frac{1}{3}{{x}^{3}}+4{{x}^{2}}+\frac{1}{21}{{\left( x-7 \right)}^{3}}-7x \right]_{0}^{7} \\ & =\frac{98}{3}-\left( -\frac{49}{3} \right) \\ & =49\,\text{unit}{{\text{s}}^{\text{2}}} Area under Logarithmic Curves In the diagram, the curve $y=2\ln (x+3)$ cuts the$y$-axis at $(0,q)$. A line, which meets the curve at $(-1,\text{ }p)$ cuts the $y$-axis at $(0,0.5)$. State the exact value of $p$ and of $q$. (a) State the exact value of $p$ and of $q$. Calculate the exact area of the shaded region. (b) Calculate the exact area of the shaded region. 
y&=2\ln \left( 0+3 \right) \\ & =2\ln 3 \\ q&=2\ln 3 y&=2\ln \left( -1+3 \right) \\ p&=2\ln 2 $q=2\ln 3,p=2\ln 2$ & =\left| \int_{2\ln 2}^{2\ln 3}{{{x}_{1}}}\,\text{d}y \right|+\frac{1}{2}\left( 2\ln 2-\frac{1}{2} \right)\left( 1 \right) \\ & =\left| \int_{2\ln 2}^{2\ln 3}{-3+{{\text{e}}^{\frac{1}{2}y}}}\,\text{d}y \right|+\ln 2-\frac{1}{4} \\ & =\left| \left[ -3y+2{{\text{e}}^{\frac{1}{2}y}} \right]_{2\ln 2}^{2\ln 3} \right|+\ln 2-\frac{1}{4} \\ & =\left[ 3y-2{{\text{e}}^{\frac{1}{2}y}} \right]_{2\ln 2}^{2\ln 3}+\ln 2-\frac{1}{4} \\ & =\left[ 3\left( 2\ln 3 \right)-2{{\text{e}}^{\frac{1}{2}\left( 2\ln 3 \right)}} \right]-\left[ 3\left( 2\ln 2 \right)-2{{\text{e}}^{\frac{1}{2}\left( 2\ln 2 \right)}} \right]+\ln 2-\frac{1}{4} \\ & =6\ln 3-2{{\text{e}}^{\ln 2}}-6\ln 2+2{{\text{e}}^{\ln 2}}+\ln 2-\frac{1}{4} \\ & =6\ln 3-6-5\ln 2+4-\frac{1}{4} \\ & =6\ln 3-5\ln 2-\frac{9}{4}\text{unit}{{\text{s}}^{\text{2}}} \\
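If you want to double-check a few of the worked answers above, a short SymPy session reproduces them. This is only a verification aid added here, not part of the original worked solutions.

```python
# Quick SymPy checks of results worked out above.
import sympy as sp

x, y = sp.symbols('x y')

# Practice Question 7: integral of (4 - 5x)/(2 + x - x^2) from 0 to 1 equals ln 2
I7 = sp.integrate((4 - 5*x) / (2 + x - x**2), (x, 0, 1))
print(sp.simplify(I7))                      # log(2)

# Practice Question 9: area bounded by x = y^2 - 9, the y-axis and the line y = 4
area9 = sp.integrate(y**2 - 9, (y, 3, 4))
print(area9)                                # 10/3

# Practice Question 10: area between y = -x^2 + 8x - 8 and y = -(x - 7)^2/7 - 1
f = -x**2 + 8*x - 8
g = -(x - 7)**2 / 7 - 1
area10 = sp.integrate(f - g, (x, 0, 7))
print(sp.nsimplify(area10))                 # 49
```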
CommonCrawl
Rapidly Converging Series for Particular Values of the Gamma Function C. H. Brown found some rapidly converging infinite series for particular values of the gamma function $$\frac {\Gamma\left(\tfrac 13\right)^6}{12\pi^4}=\frac 1{\sqrt{10}}\sum\limits_{k=0}^{\infty}\frac {(6k)!}{k!^3(3k)!}\frac {(-1)^k}{160^{3k}3^k}$$$$\frac {\Gamma\left(\tfrac 14\right)^4}{128\pi^3}=\frac 1{\sqrt u}\sum\limits_{k=0}^{\infty}\frac {(6k)!}{k!^3(3k)!}\frac {(2w)^k}{6486^{3k}}$$ Where$$\begin{align*} & u=273+180\sqrt2\\ & v=1+\sqrt{2}\\ & w=\frac {6486^3}{4u^3v^6\sqrt{2}}\end{align*}$$ How did C. Brown derive these respective infinite formulas? Can you derive similar formulas for different gamma functions? I couldn't help but notice that these formulas share a similarity to the Chudnovsky Algorithm$$\frac 1\pi=12\sum\limits_{k=0}^{\infty}\frac {(6k)!}{k!^3(3k)!}\frac {545140134k+13591409}{640320^{k+1/2}}$$So perhaps Brown used the J-function and modular forms to derive the values?$$j(\tau)=\frac 1q+744+196884q+21493760q^2+\cdots$$ summation gamma-function modular-forms CrescendoCrescendo $\begingroup$ Looks somewhat like a BBP-style formula, the kind Ramanujan would solve in his sleep using Hypergeometric functions. I'm willing to bet the sums were discovered by a computer, and educated guess as to what such a sum might look like, and a lot of case checking until the computer found a likely match that was then formally proved $\endgroup$ – Brevan Ellefsen Jul 6 '17 at 22:25 $\begingroup$ Reference to the Brown results please. $\endgroup$ – Somos Jul 6 '17 at 22:48 $\begingroup$ @Somos Oh lol I just found it on Wikipedia. But all Wikipedia does is present the result, but not a proof. en.m.wikipedia.org/wiki/Particular_values_of_the_Gamma_function $\endgroup$ – Crescendo Jul 6 '17 at 22:52 That Wikipedia link points to http://www.iamned.com/math/ which has a large number of amazing formulae. That, in turn, points to http://iamned.com/math/infiniteseries.pdf titled "An Algorithm for the Derivation of Rapidly Converging Infinite Series for Universal Mathematical Constants". Not the answer you're looking for? Browse other questions tagged summation gamma-function modular-forms or ask your own question. Proving $\sum_{n=-\infty}^\infty e^{-\pi n^2} = \frac{\sqrt[4] \pi}{\Gamma\left(\frac 3 4\right)}$ Other interesting consequences of $d=163$? Two series involving the Gamma function The Chudnovsky pi formula $1/\pi$ revisited The values of Gamma function for non-integer numbers. Complete this table of general formulas for algebraic numbers $u,v$ and $_2F_1\big(a,b;c;u) =v $? Simple closed form for $\int_0^\infty\frac{1}{\sqrt{x^2+x}\sqrt[4]{8x^2+8x+1}}\;dx$ Integer and Complex Values for the Gamma Function: Definition of the gamma function for non-integer negative values Interchanging the integral and the infinite sum
CommonCrawl
Computational study for reliability improvement of a circuit board

B. Emek Abali, Mechanics of Advanced Materials and Modern Processes (2017) 3:11

An electronic device consists of electronic components attached to a circuit board. The reliability of such a device is limited by the fatigue properties of the components as well as of the board. A printed circuit board (PCB) consists of conducting traces and vertical interconnect accesses (vias) made of copper embedded in a composite material. Usually the composite material is a fiber-reinforced laminate of glass fibers in a polyimide matrix. Different considerations play a role in choosing the components of the laminate for the board; one of them is its structural strength and fatigue properties. An improvement of the board's lifetime can be proposed by using computational mechanics. In this work we present the theory and computation of a simplified single-layer circuit board conducting electrical signals along its copper via, producing heat that leads to thermal stresses. Such stresses are high enough to cause plastic deformation. Although the plastic deformation is small, subsequent use of the electronic device causes accumulating plastic deformation, which ends the lifetime through a fatigue failure in the copper via. Computer simulations provide a convenient method for understanding the nature of this phenomenon as well as predicting the lifetime. We present a coupled and monolithic way for solving the multiphysics problem of this electro-thermo-mechanical system, numerically, by using the finite element method in space and the finite difference method in time.

Keywords: Multiphysics; Electro-thermo-mechanics; Finite element method

Materials fail due to different phenomena; in general, we can distinguish a monotonic loading from a cyclic loading. The first type of failure is caused by a monotonic loading, where the forces exceed the ultimate strength of the material. This failure is determined by utilizing a uniaxial tensile test. The ultimate strength value is a material specific threshold such that any design remaining below that threshold can be verified as being "safe." The second failure mechanism appears under a cyclic loading. Although the amplitude of the loading is small enough that the design shall be "safe," the material fails due to fatigue. The determination of a material specific threshold value in the case of fatigue is challenging. Often, experiments are used to find a lifetime for one single design and this threshold is assumed to hold for small design changes tested by means of computations. Prediction of lifetime for printed circuit boards (PCBs) is discussed heavily in the literature, see for example (Solomon 1991; Ridout and Bailey 2007; Roellig et al. 2007; Atli-Veltin et al. 2012; Abali et al. 2014a; Abali et al. 2014b; Kpobie et al. 2016). In electronic devices, fatigue failure occurs more frequently under cyclic loading. In daily use of an electronic device, we switch some transistors on and off such that heat is produced in the components, traces, and vias (wires conducting electric signals). This heat increases the temperature of the circuit board. As a consequence, copper and the composite material try to expand differently—according to their coefficients of thermal expansion—and so-called thermal stresses occur. Unfortunately, such stresses are higher than the yield stress such that plastic deformation is induced.
Since the produced heat escapes the device by an active or passive cooling, the electronic device tries to shrink or expand to its original shape. Due to the plastic deformation, this shape change generates stresses again. Hence a cyclic loading implies a plastic deformation in each cycle. The plastic deformation is irreversible and in each cycle the amount of plastic deformation accumulates. Sooner or later, there appear cracks caused by fatigue. In order to prevent these cracks, we may try to match the constants of thermal expansion of wire and composite material. Therefore, a possible improvement of fatigue properties in a circuit board relies on the choice of the composite material. In this study we investigate a non-conventional composite material and its effect to the reliability of the circuit board by using computation of thermo-electro-mechanical simulations. Reliability tests of PCBs are performed in the design process. In order to accelerate the tests, electronic devices are placed in an oven and temperature in the oven is changed periodically by a given frequency and amplitude. Since the board is thin and metal components have a high thermal conductivity, a nearly homogeneous temperature distribution occurs. There is a significant amount of know-how for thermal reliability tests and manufacturers are using their own calibrated tests, i.e., choice of frequency and amplitude. In order to obtain results as quick as possible, the oven achieves more than 100 K in less than a minute, which is not only technologically challenging; but also costly. Another method is much more easier and is sometimes called an active reliability test. An electric potential difference is applied such that an electric current produces JOULE's heat leading to the temperature change. According to the free or forced convection, the necessary temperature differences in similar frequencies can be achieved. There are still some drawbacks and a lack of a comprehensive analysis of active tests. Computational methods can be fruitful for getting a better understanding and suggesting newer methods or design amendments. In this work we present the method of solving a coupled thermo-electro-mechanical system with open-source packages developed under the FEniCS project, see (FEniCS project 2017; Alnaes and Mardal 2012). Coupled and nonlinear partial differential equations can be solved monolithically by using research codes, for example FEniCS. Commercial programs are not capable to perform such tasks, at least at the time when this work was established. In order to demonstrate the strength of such computation, we perform an active reliability test for different laminate materials and compare them. We deliver the codes applied on a single thru hole via on PCB with different materials used for the board. Different materials as well as geometries can easily be applied by using the code in (Abali 2011) under the GNU Public license (GNU Public 2017). We follow closely (Abali 2016, Sect. 3.4) and outline herein the theory as well as the method of computation very briefly. The objective is to simulate an unpopulated circuit board consisting of one thru hole via. Copper via is a conductor and is embedded in the composite, which is an insulator. In order to set the ideas, consider Fig. 1. The board is clamped on the four chamfer faces. As in a real experiment, we can set the electric potential, ϕ in V(olt), on the ends of the via at front and back faces (on yz-planes) of the board. 
The electric potential difference creates an electric field, E i in V/m(eter), leading to an electric current, in A(mpere)/m2, measured on the material frame. In other words, this current is the effective motion of charges with respect to the continuum body. Independently, the body can have a motion such as deformation, too. A deformation of the material is observed with respect to the laboratory frame. The electric current in the laboratory frame is given by CAD model of a single via on a circuit board. Composite board (green) embeds a copper trace (yellow) and a thru hole via (brown) where ρ denotes the mass density in k(ilo)g(ram)/m3, z the specific charge in C(oulomb)/kg, and v i the velocity of the continuum body as rate of displacement, v i =ui∙. Strictly speaking, the formulation is in the reference placement; however, by assuming small deformations we refrain from distinguishing between reference and current placement. Since the formulation is in the reference placement, the time rate (·)∙ is simply the partial time derivative. We search for the displacement, u i , effected by the electric potential set on each end of the via. Concretely, we set one end zero (grounded); on the other end we apply a harmonic excitation with a relatively low frequency, thus, it is appropriate to presuppose that the magnetic potential is negligibly small, A i =0, no magnetic flux emerges. Then the electric field is given by the electric potential $$ E_{i} = -\phi_{,i} \ . $$ A comma denotes a partial differentiation in space. The electric potential ϕ needs to satisfy the balance of electric charge: $$ \frac{\partial \rho z}{\partial t} + J_{i,i} = 0 \, $$ where and throughout the paper we understand the EINSTEIN summation convention over doubly repeated indices. We can reformulate the balance of electric charge. By using MAXWELL's equation: $$ \rho z = D_{i,i} \, $$ with the charge potential (electric displacement) D i in C/m2, we acquire $$ \frac{\partial D_{i,i}}{\partial t} + J_{i,i} = 0 \ . $$ This governing equation will be used to compute the electric potential. Copper is a conductor so we can neglect its electric polarization. Composite board may exhibit an electric polarization, for the sake of brevity, we neglect this, too. We assume that both materials for the printed circuit board are unpolarized. We will discuss the connection between charge potential, electric charge, and electric potential for unpolarized materials in the next section. The electric current—flowing along the conducting trace and via—produces energy that alters temperature. Temperature distribution will be computed by satisfying the balance of entropy: $$ \rho \eta^{{\bullet}} + \Phi_{i,i} - \rho \frac{r}{T} = \Sigma \, $$ where the specific (per mass) entropy, η, its flux term, Φ i , and its production term, Σ, needs to be defined. The entropy supply is given by the so-called radiant heat r, which is known. It is the term changing the temperature volumetrically, for example, in a microwave oven or in the case of a laser beam, r is the irradiated power of the oven or laser. For the printed circuit board, such a term is not supplied, r=0. After a careful study for unpolarized materials in (Abali 2016, Sect. 3.3), we know that we may select the entropy flux and define the entropy production as Heat flux, q i , and stress, σ ij , will be defined in the next section. 
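To make the structure of the electric subproblem concrete, the sketch below writes the electric field E = −grad ϕ, the charge potential, the current, and the weak form of the charge balance in UFL notation. This is a minimal sketch assuming the legacy FEniCS (dolfin) Python interface and the relations D = ε0 E (Maxwell–Lorentz) and J = ς E (Ohm's law) introduced in the constitutive section below; the geometry, time step, and all names are illustrative and are not taken from the author's published code in (Abali 2011).

```python
# Minimal sketch (legacy FEniCS/dolfin assumed) of the electric subproblem:
# E_i = -phi_,i, D_i = eps0*E_i (Maxwell-Lorentz), J_i = varsigma*E_i (Ohm),
# and the weak form of the charge balance (D_i,i - D_i,i^0)/dt + J_i,i = 0.
from dolfin import (UnitCubeMesh, FunctionSpace, Function, TestFunction,
                    Constant, grad, inner, dx)

mesh = UnitCubeMesh(8, 8, 8)          # placeholder geometry, not the real board
V = FunctionSpace(mesh, 'P', 1)

phi = Function(V)                     # electric potential at the current step
phi0 = Function(V)                    # electric potential at the last step
del_phi = TestFunction(V)             # test function delta(phi)

eps0 = Constant(8.85e-12)             # universal constant in C/(V m)
varsigma = Constant(5.8e7)            # electrical conductivity of copper in S/m
dt = Constant(0.1)                    # time step in s (illustrative)

E, E0 = -grad(phi), -grad(phi0)       # electric field from the potential
D, D0 = eps0 * E, eps0 * E0           # charge potential (electric displacement)
J = varsigma * E                      # electric current density

# Weak form of the charge balance, multiplied by dt and integrated by parts
# once; the boundary term is dropped on insulated (zero-current) surfaces.
F_phi = -inner(D - D0 + dt * J, grad(del_phi)) * dx
```

In the coupled model this form is one block of the monolithic residual assembled later in the weak-form section.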
The plastic strain p ε ij comes from the small deformation plasticity, where the the total strain is decomposed additively into a reversible as well as irreversible (plastic) part $$ \varepsilon_{ij} = \,^{\text{r}}\!\varepsilon_{ij} + \,^{\text{p}}\!\varepsilon_{ij} \ . $$ By assuming small strains we can use the linear strain measure: $$ \varepsilon_{ij} = \frac{1}{2} \left(u_{i,j} + u_{j,i} \right) \, $$ where u i denotes the displacement field to be computed. Definition of the plastic strain will be given in the next section. Now we have found out the governing equation for the temperature, $$ \rho \frac{\partial \eta}{\partial t} + \left(\frac{q_{i}}{T} \right)_{,i} = \Sigma \ . $$ Initially the temperature is set at the so-called reference temperature of 300 K. Any deviation from the reference temperature induces a stress, which will be implemented via constitutive equations in the next section. Induced stress causes a deformation. We search for displacements leading to that deformation. The stress, σ ji , is the momentum flux in the balance of linear momentum: where the specific body force f i is given—gravitational acceleration is a specific body force—and the production term defines the interaction with the electromagnetic forces. For the application that we want to study, the gravitational forces have a negligible effect, so we simplify the system by setting f i =0. For unpolarized systems, the production term is the LORENTZ force density: since we have assumed that the magnetic flux vanishes, B i =0. The displacement has to fulfill the governing equation: $$ \rho \frac{\partial^{2} u_{i}}{\partial t^{2}} - \sigma_{ji,j} - D_{j,j} E_{i} = 0 \ . $$ We can compute ϕ, T, and u i from Eqs. (4), (8), (9), respectively, after having defined D i , , q i , η, σ ij , \(^{\text {p}}\!\varepsilon ^{{\bullet }}_{ij}\) by means of ϕ, u i , T. Constitutive equations We aim at defining the charge potential D i , the electric current , the heat flux q i , the specific entropy η, the stress σ ij , and the rate of plastic strain \(^{\text {p}}\!\varepsilon ^{{\bullet }}_{ij}\). They are called constitutive or material equations closing the governing equations leading to partial differential equations of the electric potential ϕ, the displacement u i , and the temperature T. The necessary connection for the charge potential is given by the so-called MAXWELL–LORENTZ aether relation: $$ D_{i} = \varepsilon_{0} E_{i} \, $$ with the universal constant ε 0=8.85·10−12 C/(V m). For the electric current we use OHM's law: where the electrical conductivity, ς, is a material dependent parameter. For the heat flux we use FOURIER's law: $$ \eta = c \ln\left(\frac{T}{T_{\text{ref.}}}\right) + \frac{1}{\rho} \alpha_{ij} \sigma_{ij} \, $$ with the material parameter κ called the thermal conductivity. The material parameters may depend on the temperature as well as electric field. Usually they are given as constants since such measurements are challenging. In order to define stress and entropy, we restrict the materials being simple such that their material parameters are constants. Then we can acquire for the entropy and for the stress HOOKE's law with DUHAMEL–NEUMANN extension: $$ \sigma_{ij} = C_{ijkl} \left(\varepsilon_{kl} - \,^{\text{p}}\!\varepsilon_{kl} - \,^{\text{th}}\!\varepsilon_{kl} \right) \, $$ where the thermal strain reads $$ \,^{\text{th}}\!\varepsilon_{ij} = \alpha_{ij} \left(T-T_{\text{ref.}} \right) \ . 
$$ The heat capacity c, coefficients of thermal expansion α ij , components of stiffness tensor C ijkl are assumed to be constant, otherwise the above material equations are not valid, for a thermodynamical derivation of all aforementioned constitutive equations, see (Abali 2016, Sect. 3.3). In time the solution will be in a discrete fashion, where Δ t represents the time step. In order to calculate current (unknown) plastic strain, p ε ij , by using the (known) plastic strain from the last time step, \(\,^{\text {p}}\!\varepsilon ^{0}_{ij}\), incrementally, $$ \,^{\text{p}}\!\varepsilon_{ij} = \,^{\text{p}}\!\varepsilon^{0}_{ij} + \Delta t \,^{\text{p}}\!\varepsilon^{{\bullet}}_{ij} \, $$ we use PRANDTL–REUSS theory with kinematic hardening $$ \begin{aligned} \,^{\text{p}}\!\varepsilon^{{\bullet}}_{mn} = \langle\gamma\rangle \frac{\left(\sigma^{0}_{|ij|}-\beta^{0}_{ij}\right)C_{ijkl}\left(\varepsilon_{kl}^{{\bullet}} - \,^{\text{th}}\!\varepsilon^{{\bullet}}_{kl}\right)}{\frac49 h \sigma_{\mathrm{Y}}^{2} + \left(\sigma^{0}_{|ij|} \,-\, \beta^{0}_{ij}\right)C_{ijkl}\left(\sigma^{0}_{|kl|} \,-\, \beta^{0}_{kl} \right)} \left(\!\sigma^{0}_{|mn|} \,-\, \beta^{0}_{mn}\right), \end{aligned} $$ where the material parameters h and σ Y are determined from a uniaxial tensile testing. The yield stress σ Y represents the threshold for plastic deformation. The slope of stress versus plastic strain is given by h. The so-called MACAULAY brackets as in 〈γ〉 defines a conditional parameter as being 1 or 0 depending on the VONMISES equivalent stress, σ eq, defined by the deviatoric stress, σ |ij|, as follows $$ \sigma_{\text{eq}} = \sqrt{\frac{2}{3} \sigma_{|ij|} \sigma_{|ij|}} \, \ \ \sigma_{|ij|} = \sigma_{ij} - \frac{1}{3} \sigma_{kk} \delta_{ij} \, $$ such that it becomes $$ \langle \gamma \rangle = \left\{ \begin{array}{ll} 1 & \text{if}~\sigma_{\text{eq}} \geq \sigma_{\mathrm{Y}} \\ 0 & \text{otherwise} \end{array} \right. \ . $$ The so-called back stress, β ij , evolves with the plastic stress, again incrementally, $$ \beta_{ij} = \beta^{0}_{ij} + \Delta t \beta^{{\bullet}}_{ij} \, \ \ \beta^{{\bullet}}_{ij} = \bar c \,^{\text{p}}\!\varepsilon^{{\bullet}}_{ij} \, $$ where we are going to choose \(\bar c=2h/3\) in the simulations. A circuit board consist of copper traces and via embedded in a composite material. Since we want to detect the failure in the copper, we model the copper deforming elasto-plastically. Copper is a cubic material. In a circuit board copper has the thickness of 20–40 μm whereas its grain size is only 0.5 μm, see (Song et al. 2013). Hence we may assume that a polycristalline structure is present and the expected materials response is isotropic in this geometric scale. As a consequence of miniaturization this assumption may be critical in the near future. Hence we implement herein copper as a cubic material. For presenting the difference between isotropic and cubic materials, consider an isotropic material with the following material parameter tensors $$ C_{ijkl} = \lambda \delta_{ij} \delta_{kl} + \mu \left(\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk} \right) \, \ \ \alpha_{ij} = \alpha \delta_{ij} \, $$ where the LAME constants, λ, μ, and the thermal expansion constant α are the necessary material parameters. The parameters, λ, μ, read from the engineering constants, E, ν, G, which can be measured directly: $$ {}\lambda = \frac{E\nu}{(1+\nu)(1-2\nu)} = \frac{2G\nu}{(1-2\nu)} \, \ \ \mu = \frac{E}{2(1+\nu)} = G \ . 
$$ YOUNG's modulus, E, POISSON's ratio, ν, and shear modulus, G, are coupled for isotropic materials as follows $$ G= \frac{E}{2(1+\nu)} \ . $$ In the case of a cubic material the latter relation fails to hold such that the material possesses three independent parameters, namely E, G, and ν need to be measured independently. We can write the stiffness tensor in a matrix notation $$ C_{IJ}=\left(\begin{array}{cccccc} C_{1111} & C_{1122} & C_{1133} & C_{1123} & C_{1113} & C_{1112} \\ C_{2211} & C_{2222} & C_{2233} & C_{2223} & C_{2213} & C_{2212} \\ C_{3311} & C_{3322} & C_{3333} & C_{3323} & C_{3313} & C_{3312} \\ C_{2311} & C_{2322} & C_{2333} & C_{2323} & C_{2313} & C_{2312} \\ C_{1311} & C_{1322} & C_{1333} & C_{1323} & C_{1313} & C_{1312} \\ C_{1211} & C_{1222} & C_{1233} & C_{1223} & C_{1213} & C_{1212} \end{array}\right) \, $$ called the VOIGT notation and calculate it as the inverse of the compliance matrix, $$ C_{IJ}=\left(S_{JI}\right)^{-1} \, \ \ S_{IJ} = \left(\begin{array}{cccccc} \frac{1}{E} & -\frac{\nu}{E} & -\frac{\nu}{E} & 0 & 0 & 0 \\ & \frac{1}{E} &-\frac{\nu}{E} & 0 & 0 & 0 \\ & & \frac{1}{E} & 0 & 0 & 0 \\ & & & \frac{1}{G} & 0 & 0 \\ & \text{sym.}& & & \frac{1}{G} & 0 \\ & & & & & \frac{1}{G} \end{array}\right) \ . $$ Analogously for the coefficients of thermal expansion $$ \alpha_{ij} = \left(\begin{array}{ccc} \alpha_{x} & 0 & 0 \\ & \alpha_{y} & 0 \\ \text{sym.} & & \alpha_{z} \end{array}\right) \, $$ we need to determine three independent coefficients for a cubic material. Necessary values for copper are taken from (Ledbetter and Naimon 1974, Table 10), (Deutsches Kupferinstitut 2014; Srikanth et al. 2007) as follows $${} \begin{array}{cc} C_{IJ}^{\text{Cu}} = \left(\begin{array}{cccccc} 169.1 & 122.2 & 122.2 & 0 & 0 & 0 \\ & 169.1 & 122.2 & 0 & 0 & 0 \\ & & 169.1 & 0 & 0 & 0 \\ & & & 75.42 & 0 & 0 \\ & \text{sym.} & & & 75.42 & 0 \\ & & & & & 75.42 \end{array}\right)\! \cdot \! 10^{9}\,\text{Pa} \, \\ \alpha_{ij}^{\text{Cu}} = \left(\begin{array}{ccc} 17 & 0 & 0 \\ 0 & 17 & 0 \\ 0 & 0 & 17 \end{array}\right) \cdot 10^{-6} \,\mathrm{K}^{-1} \, \\ \sigma_{\mathrm{Y}}^{\text{Cu}} = 100\cdot 10^{6}\,\text{Pa} \, \ \ h^{\text{Cu}} = 615\cdot 10^{6}\,\text{Pa} \, \\ \rho^{\text{Cu}} = 8.94\cdot 10^{3} \,\text{kg/m}^{3} \, \\ \hspace{8pt} c^{\text{Cu}} = 390 \,\text{J/(kg K)} \, \ \ \kappa^{\text{Cu}} =385 \,\text{W/(K\,m)} \, \\ \varsigma^{\text{Cu}} = 5.8 \cdot 10^{7} \,\text{S/m} \ . \end{array} $$ The composite material for the board is a fiber-reinforced laminate structure. Fibers are placed orthogonal in a woven structure such that the board material is orthotropic. For an orthotropic material, the compliance matrix in the VOIGT notation reads $$ S_{IJ}^{\text{orth.}} = \left(\begin{array}{cccccc} \frac{1}{E_{x}} & -\frac{\nu_{xy}}{E_{y}} & -\frac{\nu_{xz}}{E_{z}} & 0 & 0 & 0 \\ & \frac{1}{E_{y}} &-\frac{\nu_{yz}}{E_{z}} & 0 & 0 & 0 \\ & & \frac{1}{E_{z}} & 0 & 0 & 0 \\ & & & \frac{1}{G_{yz}} & 0 & 0 \\ & \text{sym.}& & & \frac{1}{G_{zx}} & 0 \\ & & & & & \frac{1}{G_{xy}} \end{array}\right) \ . $$ All of 9 parameters need to be measured independently. Such a measurement is cumbersome. Instead, we can calculate the so-called homogenized parameters for the composite material. Consider different unidirectional plies stacked upon each other in such a way that we obtain an orthotropic material. In each unidirectional ply the material parameters can be calculated as a "weighted sum." 
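Before turning to the ply homogenization described next, the Voigt-notation bookkeeping can be made concrete for the copper data just listed: the stiffness matrix is assembled from its three independent cubic constants and the compliance matrix is obtained as its inverse, C_IJ = (S_JI)^{-1}. The short NumPy script below is only an illustration and is not part of the author's published code.

```python
import numpy as np

# Cubic copper in Voigt notation: three independent constants, values in Pa
# as quoted above from (Ledbetter and Naimon 1974).
C11, C12, C44 = 169.1e9, 122.2e9, 75.42e9

C = np.array([[C11, C12, C12, 0.0, 0.0, 0.0],
              [C12, C11, C12, 0.0, 0.0, 0.0],
              [C12, C12, C11, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, C44, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, C44, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0, C44]])

S = np.linalg.inv(C)                  # compliance matrix, C_IJ = (S_JI)^-1

E = 1.0 / S[0, 0]                     # Young's modulus read off the compliance
nu = -S[0, 1] * E                     # Poisson's ratio
G = 1.0 / S[3, 3]                     # shear modulus

# For a cubic material, G differs from the isotropic relation E/(2*(1+nu)):
print(E / 1e9, nu, G / 1e9, E / (2.0 * (1.0 + nu)) / 1e9)
```

The printed values illustrate the remark above: for cubic copper the shear modulus of 75.42 GPa is far from E/(2(1+ν)) evaluated from the other two constants, so the isotropic relation indeed fails to hold.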
A ply consists of fiber and matrix—parameters of fiber and matrix are easier to obtain separately. Therefore, first we determine the materials data of each unidirectional ply. Secondly, we sum the properties by considering a particular orientation leading to the orthotropic board. A unidirectional ply is transverse-isotropic. In order to identify material parameters, we choose a coordinate system, (x 1,x 2,x 3), where the first direction, x 1, is along the fibers in the ply. With respect to this so-called local coordinate system, we obtain the following compliance matrix in the VOIGT notation: $$ S_{IJ}^{\text{ply}} = \left(\begin{array}{cccccc} \frac{1}{E_{11}} & -\frac{\nu_{21}}{E_{11}} & -\frac{\nu_{21}}{E_{11}} & 0 & 0 & 0 \\ & \frac{1}{E_{22}} &-\frac{\nu_{23}}{E_{22}} & 0 & 0 & 0 \\ & & \frac{1}{E_{22}} & 0 & 0 & 0 \\ & & & \frac{2\left(1+\nu_{23}\right)}{E_{22}} & 0 & 0 \\ & \text{sym.}& & & \frac{1}{G_{12}} & 0 \\ & & & & & \frac{1}{G_{12}} \end{array}\right) \ . $$ These 5 parameters, E 11, E 22, ν 21, ν 23, and G 12 can be calculated from the parameters of matrix and fiber by using micromechanical rules, see (Schürmann 2005, §8). These rules are simple models based on the linear elasticity. The most important assumption is that matrix and fiber be connected perfectly, in other words, no voids or cracks are existing such that the length change of matrix and fiber are identical. Then we can combine the materials data of fiber and matrix; and we can calculate from them the parameters in a ply consisting of φ–fiber and (1−φ)–matrix as follows $$ \begin{aligned} \begin{array}{cc} E_{11} &= \varphi E_{11}^{\text{f.}} + (1-\varphi) E_{11}^{\text{m.}} \, \ \ E_{22} = \frac{E_{22}^{\text{m.}} E_{22}^{\text{f.}}}{\varphi E_{22}^{\text{m.}} + (1-\varphi) E_{22}^{\text{f.}}} \, \\ & \nu_{21} = \varphi \nu_{21}^{\text{f.}} + (1-\varphi) \nu_{21}^{\text{m.}} \, \\ \nu_{23} &= \varphi \nu_{23}^{\text{f.}} + (1-\varphi) \nu_{23}^{\text{m.}} \left(\frac{1+\nu_{23}^{\text{m.}}-\nu_{21}^{\text{f.}} \frac{E_{11}^{\text{m.}}}{E_{11}^{\text{f.}}}}{1-\left(\nu_{23}^{\text{m.}}\right)^{2} + \nu_{23}^{\text{m.}} \nu_{21}^{\text{f.}} \frac{E_{11}^{\text{m.}}}{E_{11}^{\text{f.}}}}\right) \, \\ & {G}_{12} = \frac{G_{12}^{\text{m.}} G_{12}^{\text{f.}}}{\varphi G_{12}^{\text{m.}} + (1-\varphi)G_{12}^{\text{f.}}} \, \end{array} \end{aligned} $$ $$ \begin{aligned} \begin{array}{cc} \alpha_{11} = \frac{(1-\varphi) \alpha_{11}^{\text{m.}} E_{11}^{\text{m.}} + \varphi \alpha_{11}^{\text{f.}} E_{11}^{\text{f.}} }{ (1-\varphi)E_{11}^{\text{m.}} + \varphi E_{11}^{\text{f.}}} \, \\ \alpha_{22} = \varphi \alpha_{22}^{\text{f.}} \,+\, (1 \,-\, \varphi) \alpha_{22}^{\text{m.}} \, \ \ \alpha_{33} = \varphi \alpha_{33}^{\text{f.}} \,+\, (1 \,-\, \varphi) \alpha_{33}^{\text{m.}} \ . \end{array} \end{aligned} $$ The upper (·)m. and (·)f. denote the materials data of matrix and fiber, respectively. The materials data for s-glass, e-glass, and aramid are taken from (JPS Industries Inc. Company JPS Composite Materials 2017; Suter Kunststoffe AG 2017). The data of the epoxy matrix are found in (Soden et al. 1998). By using Eq. (29), parameters in Eq. (28) are calculated. All used and calculated parameters are compiled in Table 1. After having determined the parameters for a unidirectional ply, we can simply construct a laminate of several plies by stacking them orthogonally. The result is an orthotropic material. 
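The micromechanical rules quoted above translate directly into a small helper function. The sketch below encodes the formulas as printed; the fiber and matrix dictionaries have to be filled with the values from the cited data sheets (compiled in Table 1), and the function itself is only an illustration, not the author's published code.

```python
# Rule-of-mixtures homogenization of a unidirectional ply from fiber (f) and
# matrix (m) data, following the micromechanical rules quoted above
# (Schuermann 2005, Sect. 8); phi is the fiber volume fraction.
def ply_parameters(phi, f, m):
    """f and m are dicts with keys E11, E22, nu21, nu23, G12, a11, a22, a33."""
    E11 = phi * f['E11'] + (1.0 - phi) * m['E11']
    E22 = m['E22'] * f['E22'] / (phi * m['E22'] + (1.0 - phi) * f['E22'])
    nu21 = phi * f['nu21'] + (1.0 - phi) * m['nu21']
    r = m['E11'] / f['E11']
    nu23 = phi * f['nu23'] + (1.0 - phi) * m['nu23'] * (
        (1.0 + m['nu23'] - f['nu21'] * r)
        / (1.0 - m['nu23'] ** 2 + m['nu23'] * f['nu21'] * r))
    G12 = m['G12'] * f['G12'] / (phi * m['G12'] + (1.0 - phi) * f['G12'])
    a11 = ((1.0 - phi) * m['a11'] * m['E11'] + phi * f['a11'] * f['E11']) \
        / ((1.0 - phi) * m['E11'] + phi * f['E11'])
    a22 = phi * f['a22'] + (1.0 - phi) * m['a22']
    a33 = phi * f['a33'] + (1.0 - phi) * m['a33']
    return dict(E11=E11, E22=E22, nu21=nu21, nu23=nu23, G12=G12,
                a11=a11, a22=a22, a33=a33)

# Example call (data have to be taken from the cited data sheets):
#     ply_I = ply_parameters(0.5, s_glass_data, epoxy_data)
```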
Owing to the linear constitutive equations, we can superpose each ply's material tensors as transformed to the global coordinate system. All necessary materials data are compiled in Table 2. In addition to the aforementioned material parameters, we use the following data for the laminate: $$ \begin{aligned} \rho^{\text{lam.}} &= 2500\,{\text{kg/m}^{3}} \,, \ \ c^{\text{lam.}} = 800\,\text{J/(kg\,K)} \,, \\ \kappa^{\text{lam.}} &= 1.3\,\text{W/(m\,K)} \,, \ \ \varsigma^{\text{lam.}} = 0 \ . \end{aligned} $$
Table 1 Materials data of s-glass (I), e-glass (II), and aramid (III) fibers and of the epoxy matrix, together with the calculated unidirectional plies (Ply I, II, III). The listed quantities are E11 in GPa, ν21, G12 in GPa, and α11 in μm/(m K); parameters marked with ∗ are approximated values. (Numerical entries omitted.)
Table 2 Materials data of Lam. I (s-glass and epoxy), Lam. II (e-glass and epoxy), and Lam. III (aramid and epoxy) in the global coordinate system. The listed quantities are Ex, Ey, Ez in GPa; νxy, νxz, νyz; Gyz, Gzx, Gxy in GPa; αx, αy, αz in μm/(m K). (Numerical entries omitted.)
Weak form
The primitive variables, ϕ, u i , T, are continuous functions in space and time. We want to compute them by satisfying Eqs. (4), (9), (8) augmented by the constitutive equations introduced in the last section. We approximate space by means of the finite element method (FEM) and time by using the finite difference method (FDM). Time discretization is quite intuitive, as a list of subsequent time steps, whereas for simplicity in programming we choose identical time steps $$ t = \{ 0, \Delta t, 2 \Delta t, \dots \} \ . $$ Instead of a partial time derivative, we write the following difference equations: $$ \frac{\partial (\cdot)}{\partial t} = \frac{(\cdot) - (\cdot)^{0}}{\Delta t} \,, \ \ \frac{\partial^{2}(\cdot)}{\partial t^{2}} = \frac{(\cdot) - 2(\cdot)^{0} + (\cdot)^{00}}{\Delta t \, \Delta t} \,, $$ where (·)0 and (·)00 indicate the computed values from the last and the second-to-last time steps, respectively. In order to approximate the functions in a discretized space, we multiply the governing equations by appropriate test functions and obtain a variational form for each primitive variable, $$ \begin{aligned} \mathrm{F}_{\phi} &= \int_{\Omega^{e}} \left(\frac{D_{i,i}-D_{i,i}^{0}}{\Delta t} + J_{i,i} \right) \updelta \phi \,\,\mathrm{d} V = 0 \,, \\ \mathrm{F}_{\boldsymbol{u}} &= \int_{\Omega^{e}} \left(\rho \frac{u_{i} -2u_{i}^{0} +u_{i}^{00}}{\Delta t \Delta t} - \sigma_{ji,j} - D_{j,j} E_{i} \right) \updelta u_{i} \,\,\mathrm{d} V = 0 \,,\\ \mathrm{F}_{T} &= \int_{\Omega^{e}} \left(\rho \frac{\eta-\eta^{0}}{\Delta t} + \Phi_{i,i} - \Sigma \right) \updelta T \,\,\mathrm{d} V = 0 \,, \end{aligned} $$ integrated over a finite element Ω e . The forms F ϕ and F T are in the unit of power, whereas F u is in the unit of energy. By multiplying F ϕ and F T by Δ t, we obtain all forms in the same unit. The terms D i , J i , σ ji , Φ i consist of (space) derivatives of the primitive variables. In the variational forms another derivative is applied. Hence, the primitive variables have to be (at least) two times differentiable.
This condition can be weakened by integrating by parts such that one of the derivatives is shifted to the corresponding test function, as follows $$ \begin{aligned} \mathrm{F}_{\phi} &= - \int_{\Omega^{e}} \left(D_{i}-D_{i}^{0} + \Delta t J_{i} \right) \updelta \phi_{,i}\,\,\mathrm{d} V \\ &\quad + \int_{\partial\Omega^{e}} \left(D_{i} - D_{i}^{0} + \Delta t J_{i} \right) \updelta \phi N_{i} \,\,\mathrm{d} A \, \\ \mathrm{F}_{\boldsymbol{u}} &= \int_{\Omega^{e}} \left(\! \rho \frac{u_{i} - 2 u_{i}^{0} +u_{i}^{00}}{\Delta t \Delta t} \updelta u_{i} + \sigma_{ji} \updelta u_{i,j} \right. \\ &\quad \left. - D_{j,j} E_{i} \updelta u_{i} {\vphantom{\int_{\partial\Omega^{e}}}}\right) \mathrm{d} V - \int_{\partial\Omega^{e}} \sigma_{ji} \updelta u_{i} N_{j} \,\,\mathrm{d} A, \\ \mathrm{F}_{T} &= \int_{\Omega^{e}} \left(\rho \left(\eta-\eta^{0}\right) \updelta T - \Delta t \Phi_{i} \updelta T_{,i} \right. \\ &\quad \left. - \Delta t \Sigma \updelta T \right) \mathrm{d} V + \int_{\partial\Omega^{e}} \Delta t \Phi_{i} \updelta T N_{i} \,\,\mathrm{d} A \, \end{aligned} $$ with N i being the plane normal pointing outward from Ω e . The latter integral forms are called the weak forms. The whole computational domain, Ω, consists of two different materials, each material is divided by finite elements satisfying F=0 with $$ \mathrm{F} = \mathrm{F}_{\phi} + \mathrm{F}_{\boldsymbol{u}} + \mathrm{F}_{T} \ . $$ We can assembly by summing over all elements. An element with its plane normal N i and its adjacent element with its opposing plane normal eliminate the boundary terms within a material. All primitive variables are continuous. Over the interface, ∂ Ω I , between different materials, there may occur jumps since the material parameters have different values. The weak forms read On the interface, i.e., between two different materials, since ϕ is continuous, D i is continuous, too. No electric current is allowed along the normal direction, since copper is surrounded by the insulating board or air. According NEWTON's lemma—action is equal to reaction—we expect that traction vectors t i =N j σ ji are also continuous. On the boundary, ∂ Ω, the traction vector \(\hat t_{i}\) is given. In our example, we have free surfaces such that \(\hat t_{i}=0\) on ∂ Ω or clamped faces where the displacement is given (as zero). On boundaries where the solution is given, we apply a DIRICHLET boundary condition and the test function vanishes. Temperature at the boundary can be modeled by using mixed boundary condition such that a deviation from the reference temperature causes a heat flux, \(q_{i} N_{i} = \bar h \left (T-T_{\text {ref.}}\right)\), depending on the convective heat transfer coefficient \(\bar h\) in J/(s m2 K). Finally, we acquire the weak form to be implemented We exploit the open-source packages developed under the FEniCS project and solve the coupled and nonlinear weak form for the simulations demonstrated in the next sections. Lifetime prediction Under a cyclic loading, copper traces and via deform plastically and material fails after a number of cycles, N f.. Since plastic deformation is irreversible, in each cycle, plastic deformation accumulates. By means of computation we can determine the accumulated plastic strain in one cycle and use this as a measure of lifetime. 
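Before quantifying that lifetime measure, it may help to see how the monolithic strategy looks in code. The following reduced sketch couples only the electric potential and the temperature (Joule heating) on a placeholder geometry and uses a plain heat-conduction form instead of the entropy balance; it assumes the legacy FEniCS (dolfin) Python interface, all values and names are illustrative, and it is not the author's published code (which is available in (Abali 2011)).

```python
# Reduced, self-contained sketch of the monolithic solution with legacy
# FEniCS (dolfin): only phi and T are coupled here (Joule heating); the
# mechanical field and the entropy-based heat balance are omitted for brevity.
from dolfin import (UnitCubeMesh, FiniteElement, MixedElement, FunctionSpace,
                    Function, TestFunctions, Constant, DirichletBC, Expression,
                    grad, inner, dx, split, solve)

mesh = UnitCubeMesh(8, 8, 8)                       # placeholder geometry
P1 = FiniteElement('P', mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([P1, P1]))    # unknowns (phi, T)

w, w0 = Function(W), Function(W)
phi, T = split(w)
phi0, T0 = split(w0)
del_phi, del_T = TestFunctions(W)

rho, c, kappa, varsigma = 8.94e3, 390.0, 385.0, 5.8e7   # copper-like values
T_ref, dt = 300.0, 0.1
w0.interpolate(Constant((0.0, T_ref)))                  # initial state
w.assign(w0)                                            # initial Newton guess

E = -grad(phi)
J = Constant(varsigma) * E                 # Ohm's law
q = -Constant(kappa) * grad(T)             # Fourier's law

# Monolithic residual: quasi-static charge balance plus heat balance with
# Joule heating; both blocks carry the same unit after multiplying by dt.
F = (inner(Constant(dt) * J, grad(del_phi))
     + Constant(rho * c) * (T - T0) * del_T
     - Constant(dt) * inner(q, grad(del_T))
     - Constant(dt) * inner(J, E) * del_T) * dx

# Harmonic excitation on one end, ground on the other, reference temperature
# held on one lateral face (illustrative boundary conditions).
phi_hat = Expression('amp*sin(2.0*pi*nu*t)', degree=1, amp=0.2, nu=0.1, t=0.0)
bcs = [DirichletBC(W.sub(0), Constant(0.0), 'near(x[0], 0.0)'),
       DirichletBC(W.sub(0), phi_hat, 'near(x[0], 1.0)'),
       DirichletBC(W.sub(1), Constant(T_ref), 'near(x[1], 0.0)')]

t = 0.0
while t < 2.5:                             # quarter of one 10 s period
    t += dt
    phi_hat.t = t
    solve(F == 0, w, bcs)                  # Newton-Raphson on the coupled form
    w0.assign(w)
```

In the full model the displacement field enters as a third block of the same mixed space, with the form F_u and the incremental plastic-strain update evaluated at each time step.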
The plastic strain rate, \(\,^{\text {p}}\!\varepsilon _{ij}^{{\bullet }} \), is deviatoric, thus, the equivalent strain rate reads $$ \,^{\text{p}}\!\varepsilon_{\text{eq.}}^{{\bullet}} = \sqrt{\frac32 \,^{\text{p}}\!\varepsilon_{ij}^{{\bullet}} \,^{\text{p}}\!\varepsilon_{ij}^{{\bullet}} } \ . $$ The plastic strain accumulates in a cycle with the latter rate of equivalent strain, $$ \,^{\text{p}}\!\varepsilon_{\text{acc.}} = \int_{\text{cycle}} \,^{\text{p}}\!\varepsilon_{\text{eq.}}^{{\bullet}} \mathrm{d} t \ . $$ This accumulated strain is a distribution in the copper wire. Its mean value can be determined by averaging over a chosen volume V $$ \left\langle \,^{\text{p}}\!\varepsilon \right\rangle = \frac{\int_{V} \,^{\text{p}}\!\varepsilon_{\text{acc.}} \mathrm{d} V}{\int_{V} \mathrm{d} V} $$ This measure for a lifetime prediction, 〈 p ε〉, is computed by using the aforementioned approach. In order to establish a connection between the measure, 〈 p ε〉, and the number of cycles to failure, N f., there are various suggestions. They are mostly empirical such as computations and experiments need to be conducted and fitted. A theoretical analysis in (Manson 1968) provides the following relation $$ \langle \,^{\text{p}}\!\varepsilon \rangle = D^{0.6} N_{\text{f.}}^{-0.6} \, $$ for metallic compounds such as copper used in traces and via. The material specific constant, D, reads from the reduction of cross section, R, in a tensile test, $$ D = \ln\left(\frac{100}{100-R}\right) \ . $$ Cross section reduction, R, is in percentage and we take it as R=60 for the electrodeposited copper material, see (Valiev et al. 2002). It is important to recall that the parameter D can be obtained from a tensile testing. By employing computation we obtain 〈 p ε〉 and estimate the lifetime $$ N_{\text{f.}} = D \langle \,^{\text{p}}\!\varepsilon \rangle^{-5/3} \ . $$ This lifetime estimation is for the case of an accelerated test. It is challenging to determine a specific amount of months or years for the underlying electronic device. For comparing several designs, it is a helpful measure. A design with a longer lifetime is expected to be chosen from the point of mechanics. For analyzing the multiphysics and estimating the number of failure on a simplified unpopulated—so-called bare-board—we choose realistic geometric dimensions used in the industry. As shown in Fig. 2, the CAD geometry is prepared and preprocessed in Salome v7.5 (Salome 2017) by using NETGEN algorithms (Schöberl 1997). We have chosen a board of dimensions 10×10×0.8 mm. Although all material parameters are given in SI units, we have converted them into mm, Mg (tonne), s, mA, K for the simulation such that the geometry is captured accurately (up to the machine precision) in mm. The conducting wire is called trace and via, both from the same material, namely 5N copper. We use a standard 1 oz. copper modeled with 35 μm of thickness. Starting from the back side of the board, a trace along x-axis is placed on top of the board. The trace has a width of 300 μm and is connected by the annular ring (pad) of 500 μm radius to the thru hole via. Trace and annular ring are produced by masking and etching. Actually the profile of the trace and ring is trapezoidal due to the etching process; however, we model it as rectangular. After ring and trace are plated, a hole with 200 μm radius is drilled and the via is electroplated with a given thickness. Herein we model the wall thickness of the via also as 35 μm. 
There are no standards for this thickness and the simulation results would be different by varying this thickness. Via connects the trace on top to the trace on bottom that runs until the front side of the board. Especially around traces and via, several refinements of the mesh are applied; the final finite element mesh can be seen in Fig. 3. Simulation model of the one layer circuit board. Composite board (green, transparent) embeds a copper trace (yellow) and a thru hole via (brown) Mesh of the simulation model. Tetrahedron first order continuous LAGRANGEan elements are generated in Salome by using NETGEN algorithm, zoomed to the via and annular ring for the sake of better visibility, approximately 20 000 nodes in the whole model We present simulations of a possible measurement. All boundary conditions are selected as it would be the case in reality. Board is usually fastened by bolts on four holes near to the edges. In order to hold the board on four edges, we model chamfers on the edges and hold on these four chamfer faces as being clamped in all directions. We simply set the deformation zero as a DIRICHLET boundary condition. The conducting copper is driven by an electric potential difference. At the endings of traces on back and front faces, the electric potential is set as a DIRICHLET boundary condition. One end is grounded and the other is given harmonically, $$ \hat\phi = \phi_{\text{amp.}} \sin(2\uppi \nu t) \ . $$ For all simulations we have selected the period of 10 s leading to the frequency ν=0.1 Hz and an amplitude as ϕ amp.=0.2 V in order to reach high enough temperatures leading to significant plastic strains. Since we have only a highly conductive copper wire, even such a small potential difference lead to high electric currents and dissipated heat, JOULE's loss. This setting is also configured in an accelerated fatigue test; but in an electronic device such conditions are not valid. Normally, there is a component like a resistor or capacitor connected to the circuit such that the electrical conductivity is lowered, leading to a smaller current, thus, a lower temperature increase. The computational model mimics a possible accelerated active fatigue experiment. In order to simulate an existing test; geometry and boundary conditions need to be accurately determined and applied. The weak form in Eq. (43) is nonlinear. By using FEniCS packages the linearization is handled automatically at the level of the partial differential level, before the assembly. Solution is searched by a standard NEWTON–RAPHSON algorithm after assembly operation. Every time step lasts approximately 8 min on one (3 min on two) Intel Xeon Processors (i7-2600) running on Ubuntu 16.04. In order to comprehend the complicated multiphysics bearing electro-thermo-mechanical coupling, we present several results on one board at the quarter of one period, t=2.5 s. The results with other laminates are qualitatively the same. The potential difference generates an electric current. It is in equal amount in ampere along the trace and via. The current density, , is the amount per cross section. Since we have chosen the trace as well as via of the same thickness and the circumference of the via is longer than the width of the trace, the current density is greater along the trace than through the via. The electric potential and current density can be seen in Fig. 4. Greater electric current density implies a greater JOULE's heat, , directly responsible for the temperature increase. 
Hence, the temperature increase is more on the trace than on the via. The temperature distribution again at the quarter period can be depicted in Fig. 5. For another wall thickness, this result would be different. Simulation results at t=2.5 s, electric potential and current density. For the sake of visibility, electric potential ϕ and electric current density are presented on the transparent copper conductor. Color distribution denotes the electric potential. Arrows indicate the current density Simulation results at t=2.5 s, temperature distribution. Temperature distribution, T, is shown as colors on the sliced model, the other half has a mirror symmetry It is of importance to recall that the temperature distribution is not homogeneous, which is indeed the case in reality. This fact is overseen in an accelerated fatigue test performed in an oven. Temperature is changed quickly in the chamber, as a consequence, a homogeneous temperature distribution emerges, since the board is thin and copper is a good conductor. In this configuration the damage occurs in the via. In an active test, however, we realize that the temperature distribution is heterogeneous that lead to another deformation mode in each cycle. In order to visualize the deformation, see Fig. 6. It is interesting to see that the middle part of the via is not moving; however, the variation of the displacement along the hole still induces a strain. Simulation results at t=2.5 s, displacement. Deformation is visualized with a scaling of 150. Colors denote the magnitude of displacement, u i , on the sliced model, the other half has a mirror symmetry Approximately more than 20 K deviation from the reference temperature T ref.=300 K results stresses higher than the yield stress and plastic deformation starts accumulating. Simultaneously, heat escapes to the ambient, in the simulation we use the same convective heat transfer coefficients for the board as well as via, \(\bar h = 10\) J/(s m2 K), modeling a relatively slow free convection. This parameter is very difficult to measure accurately such that we choose a value and use the same for all simulations. Temperature is produced within the conductor and exchanged over its boundary at the same time. Change across the boundary is greater when the temperature on the boundary increases. However, since we model free convection, this rate is small compared to the heat production. Depending on the excitation frequency of the given electric potential and depending on the convective heat transfer coefficient, the temperature increases until the heat exchange rate and production are equal in their absolute values. This steady-state condition is difficult to reach in the simulation, at least for the first 5 cycles the steady-state is not reached, see Fig. 7. We realize that a real experiment with the aforementioned setting might be difficult since within one minute the melting temperature of the board would be reached. Either a forced convection (using a fan) or a resistor connected to the circuit decreases the temperature increase. Although the increase is high, the total difference between the maximum and minimum temperature remains approximately the same in every cycle. Temperature increase. Maximum temperature is presented over time for 5 cycles The fatigue failure occurs mostly because of the plastic strain accumulation. At the end of the first cycle, see the equivalent plastic strains in Fig. 8. 
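For the post-processing of these results, the mean accumulated plastic strain over the two copper regions and the lifetime estimate built on it can be computed as in the sketch below. It assumes that a scalar field holding the accumulated equivalent plastic strain is available from the simulation and that the copper cells are marked by subdomain ids; the marker layout, the dummy field, and all numbers here are illustrative stand-ins, not results of the actual model.

```python
import math
from dolfin import (UnitCubeMesh, MeshFunction, FunctionSpace, interpolate,
                    Expression, Measure, assemble, Constant, cells)

# Illustrative stand-in for the copper regions: id 1 = traces, id 2 = via.
mesh = UnitCubeMesh(8, 8, 8)
markers = MeshFunction('size_t', mesh, mesh.topology().dim(), 0)
for cell in cells(mesh):
    markers[cell] = 1 if cell.midpoint().x() < 0.5 else 2

# Dummy field standing in for the accumulated equivalent plastic strain.
V = FunctionSpace(mesh, 'P', 1)
pe_acc = interpolate(Expression('0.001*x[0]', degree=1), V)

dx_sub = Measure('dx', domain=mesh, subdomain_data=markers)

def mean_over(region_id):
    """Volume average <pe> = (int pe_acc dV) / (int dV) over one region."""
    volume = assemble(Constant(1.0) * dx_sub(region_id))
    return assemble(pe_acc * dx_sub(region_id)) / volume

mean_trace, mean_via = mean_over(1), mean_over(2)

# Lifetime estimate N_f = D * <pe>^(-5/3) with D = ln(100/(100 - R)), R = 60 %.
D = math.log(100.0 / (100.0 - 60.0))
N_f = D * mean_via ** (-5.0 / 3.0)
print(mean_trace, mean_via, N_f)
```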
The heterogeneous temperature distribution and the presented deformation lead to high plastic strains in the trace as well as in the via. This result is different compared to a fatigue experiment in an oven, where the most of the plastic strain accumulates within the via. Herein, in an active testing, we observe especially at the middle height of the via higher values than within the trace. For a better comparison we determine the mean values in two different volumes: over the traces and over the via. The accumulation of the mean value of the plastic strain averaged over these two regions can be seen in Fig. 9. Due to the irreversible character, the plastic strain accumulates whenever the temperature is increasing and remains the same at the moment when the temperature is decreasing. In every cycle the amount of the newly accumulated plastic strain is compiled in Table 3. A steady-state cannot be reached before the temperature variation gets stabilized. Simulation results at t=10 s, equivalent plastic strain. Equivalent plastic strain is shown as a color distribution on the copper at the end of one cycle Accumulated plastic strains. Accumulation of the plastic strains. Mean value over traces and via are presented for 5 cycles Mean accumulated plastic strain in each cycle for the laminate I 〈 p ε〉 in % Results in all laminates are qualitatively similar. They do differ quantitatively in terms of the plastic strain. In order to compare different laminates and their effects to the fatigue behavior, we conduct three simulations and compute the mean accumulated plastic strain at the end of one cycle. Again the aforementioned two volumes are used for averaging. The choice of the averaging volume is somehow heuristic and challenging; however, for a comparison between three laminates, the choice fails to be relevant. The values are compiled in Table 4. Number of failure calculated by the accumulated plastic strain in each cycle for three different laminates N f. By considering the fatigue as the sole criterion, it is fair to claim that the laminate III—aramid reinforced epoxy composite material—performs better than glass fiber reinforced epoxy materials. Since the governing equations are coupled and nonlinear, such a conclusion is challenging to predict based on the material parameters. Laminate III has the highest thermal expansion coefficient along the plate thickness, so the mismatch between expansion coefficients is higher. Hence, it is intuitive to guess that it would lead to greater plastic strains. As we see from the deformation mode, the boundary conditions lead to a more shearing deformation that is the real reason of a plastic deformation. Expansion along the board thickness reduces shearing deformation leading to smaller plastic strains. Based on only one of the material parameters, we might prejudge the outcome differently than the observation by means of computations as presented herein. There are many coupling effects acting simultaneously, the only prediction shall be based on simulations with the least number of assumptions. Herein we present a robust method for computing an electro-thermo-mechanical system. In order to verify the code, we need to simulate existing experiments by correctly choosing boundary conditions as well as the geometry. According to the demonstrated comparison, a different choice for the laminate composition might increase the fatigue strength. There is a growing attention to find out variations of FR4 PCBs out of e-glass and epoxy. Stablcor Technology Inc. 
has patented its own PCB consisting of carbon fibers. Thermount is a registered trademark of DuPont and uses aramid fibers. Coupled computation is important for obtaining a detailed investigation guiding toward new insight into multiphysics. Herein we have neglected the magnetic potential and thermoelectric effects. Often more assumptions are undertaken in order to simplify or decouple the governing equations, leading to a fast simulation. With today's technological possibilities, we can perform computations as presented herein by using a laptop. Hence, we can get a detailed understanding of the phenomenon and even suggest design changes. In order to enable a scientific exchange, we deliver our codes in (Abali 2011) to be used under the GNU Public license (GNU Public 2017).
Using rational continuum mechanics, all necessary governing equations and constitutive equations are presented for an electro-thermo-mechanical system. We have directly addressed an application from the electronics industry, namely the phenomenon called fatigue in copper vias. Accelerated experiments are generally conducted in an oven so that the temperature is controlled globally. In recent years, more sophisticated experiments have started to emerge, where the electric potential is controlled, which leads to a local heating. This multiphysics problem is challenging to compute numerically, since the governing equations are nonlinear and coupled. We have presented an approach for computing coupled and nonlinear governing equations by means of open-source packages and simulated the electro-thermo-mechanical system monolithically. The results seem to be promising; a verification with experimental results is left to future research.
FDM: Finite difference method. FEM: Finite element method.
This work was completed while B. E. Abali was supported by a grant from the Max Kade Foundation to the University of California, Berkeley. All codes used for simulations are publicly available in (Abali 2011) licensed under the GNU Public license (GNU Public 2017). The author declares that he/she has no competing interests. Department of Mechanical Engineering, University of California, Berkeley, Mailstop 1740, Berkeley, 94720, CA, USA
References
Abali, BE (2011) Technical University of Berlin, Institute of Mechanics, Chair of Continuum Mechanics and Material Theory, Computational Reality. http://www.lkm.tu-berlin.de/ComputationalReality/
Abali, BE, Reich FA, Müller WH (2014a) Fatigue analysis of anisotropic copper-vias in a circuit board. GMM, Mikro-Nano-Integration. VDE Verlag, Berlin
Abali, BE, Lofink P, Müller WH (2014b) Variation of plastic materials data of copper and its impact on the durability of Cu-via interconnects. In: Aschenbrenner R, Schneider-Ramelow M (eds) Microelectronic Packaging in the 21st Century, 305–308. Fraunhofer Verlag. Chap. 7.2
Abali, BE (2016) Computational Reality, Solving Nonlinear and Coupled Problems in Continuum Mechanics. Advanced Structured Materials. Springer, Singapore
Alnaes, MS, Mardal KA (2012) SyFi and SFC: symbolic finite elements and form compilation. In: Logg A, Mardal K-A, Wells GN (eds) Automated Solution of Differential Equations by the Finite Element Method, the FEniCS book. Springer. Chap. 15
Atli-Veltin, B, Ling H, Zhao S, Noijen S, Caers J, Weifeng L, Feng G, Yuming Y (2012) Thermo-mechanical investigation of the reliability of embedded components in PCBs during processing and under bending loading. In: Thermal, Mechanical and Multi-Physics Simulation and Experiments in Microelectronics and Microsystems (EuroSimE), 2012 13th International Conference On, 1–4. IEEE
Deutsches Kupferinstitut (2014) Kupfer in der Elektrotechnik – Kabel und Leitungen. www.kupferinstitut.de
FEniCS project (2017) Development of tools for automated scientific computing: 2001–2016. http://fenicsproject.org
GNU Public (2017) GNU general public license. http://www.gnu.org/copyleft/gpl.html
JPS Industries Inc. Company JPS Composite Materials (2017). http://jpsglass.com/jps_databook.pdf
Kpobie, W, Martiny M, Mercier S, Lechleiter F, Bodin L, des Etangs-Levallois AL, Brizoux M (2016) Thermo-mechanical simulation of PCB with embedded components. Microelectron Reliab 65: 108–130
Ledbetter, H, Naimon E (1974) Elastic properties of metals and alloys. II. Copper. J Phys Chem Ref Data 3(4): 897–935
Manson, S (1968) A simple procedure for estimating high-temperature low-cycle fatigue. Exp Mech 8(8): 349–355
Ridout, S, Bailey C (2007) Review of methods to predict solder joint reliability under thermo-mechanical cycling. Fatigue Fract Eng Mater Struct 30(5): 400–412
Roellig, M, Dudek R, Wiese S, Boehme B, Wunderle B, Wolter KJ, Michel B (2007) Fatigue analysis of miniaturized lead-free solder contacts based on a novel test concept. Microelectron Reliab 47(2): 187–195
Salome (2017) The Open Source Integration Platform for Numerical Simulation. http://www.salome-platform.org
Schürmann, H (2005) Konstruieren mit Faser-Kunststoff-Verbunden. Springer, Berlin
Schöberl, J (1997) NETGEN an advancing front 2d/3d-mesh generator based on abstract rules. Comput Vis Sci 1(1): 41–52
Solomon, HD (1991) Predicting thermal and mechanical fatigue lives from isothermal low cycle data. In: Solder Joint Reliability, 406–454. Springer
Song, JM, Wang DS, Yeh CH, Lu WC, Tsou YS, Lin SC (2013) Texture and temperature dependence on the mechanical characteristics of copper electrodeposits. Mater Sci Eng A 559: 655–664
Soden, P, Hinton M, Kaddour A (1998) Lamina properties, lay-up configurations and loading conditions for a range of fibre-reinforced composite laminates. Compos Sci Technol 58(7): 1011–1022
Srikanth, N, Premkumar J, Sivakumar M, Wong Y, Vath C (2007) Effect of wire purity on copper wire bonding. In: Electronics Packaging Technology Conference, 2007. EPTC 2007. 9th, 755–759. IEEE
Suter Kunststoffe AG (2017). http://www.swiss-composite.ch/pdf/i-Werkstoffdaten.pdf
Valiev, R, Alexandrov I, Zhu Y, Lowe T (2002) Paradox of strength and ductility in metals processed by severe plastic deformation. J Mater Res 17(01): 5–8
Cloning and characterization of a novel alpha-actinin gene from human neuroblastoma
Sotiris Nick Nikolopoulos, Fordham University
Neuroblastoma is the most common extra-cranial malignant solid tumor in children. Established cell lines comprise three predominant cell types: a neuroblastic cell (N) with properties of an embryonic sympathoblast, a substrate-adherent cell (S) with properties of a Schwann/glial/melanocytic precursor cell, and a stem cell (I) with properties of both N and S cells. During studies of MYCN protooncogene expression in neuroblastoma cell variants, we identified a 98 kDa protein which cross-reacted with one anti-MYCN monoclonal antibody. This p98 protein was abundant in the S-type cells and weakly or not expressed in N- and I-type counterparts. Isolation, cloning and sequencing of the cDNA for this protein revealed that p98 is a novel $\alpha$-actinin protein. Molecular characterization studies have shown that the cDNA for the newly identified $\alpha$-actinin, which we named ACTN4, is structurally similar to all known $\alpha$-actinin genes. ACTN4 has a 71-83% amino acid identity to the previously characterized human nonmuscle gene (ACTN1) and the two skeletal muscle $\alpha$-actinin isoforms (ACTN2 and ACTN3). Analysis of the amino acid composition of the EF-hand motifs of our $\alpha$-actinin isoform suggests that ACTN4 might have a lower sensitivity to calcium than the other nonmuscle form. Northern blot analysis using a panel of human tumor cell lines showed that this new gene (ACTN4) is expressed in malignant cells of diverse differentiation lineages and/or tissue origins and that its expression is generally correlated with substrate adhesiveness. ACTN4 has been localized to chromosome #4 using somatic cell hybrid panels. Transfection experiments explored the effect of ACTN4 expression on differentiation and transformation state. In general, stable transfectants with 10- to 40-fold higher levels of $\alpha$-actinin protein exhibited a more S-like morphology and loss of neuronal-specific proteins. They also showed a significantly lower colony-forming efficiency in soft agar than the controls, the extent of which was inversely correlated with $\alpha$-actinin protein levels. These results suggest that this new member of the $\alpha$-actinin gene family may be influential in the regulation of differentiation and transformation in human neuroblastoma variants.
Subjects: Neurology | Molecular biology | Oncology
Recommended citation: Nikolopoulos, Sotiris Nick, "Cloning and characterization of a novel alpha-actinin gene from human neuroblastoma" (1997). ETD Collection for Fordham University. AAI9719654. https://research.library.fordham.edu/dissertations/AAI9719654
Global well-posedness of critical nonlinear Schrödinger equations below $L^2$
Yonggeun Cho, Gyeongha Hwang, and Tohru Ozawa
Department of Mathematics, and Institute of Pure and Applied Mathematics, Chonbuk National University, Jeonju 561-756
Department of Mathematical Sciences, Seoul National University, Seoul 151-747, South Korea
Department of Applied Physics, Waseda University, Tokyo 169-8555, Japan
Received May 2011; Revised August 2012; Published October 2012
The global well-posedness of the Cauchy problem for nonlinear Schrödinger equations (NLS) is studied for a class of critical nonlinearities below $L^2$ in the small data setting. We consider Hartree type (HNLS) and inhomogeneous power type NLS (PNLS). Since the critical Sobolev index $s_c$ is negative, it is rather difficult to analyze the nonlinear term. To overcome the difficulty we combine weighted Strichartz estimates in polar coordinates with new Duhamel estimates involving angular regularity.
Keywords: critical nonlinearity below $L^2$, global well-posedness, weighted Strichartz estimate, Hartree equations, angular regularity.
Mathematics Subject Classification: Primary: 35Q55; Secondary: 42B3.
Citation: Yonggeun Cho, Gyeongha Hwang, Tohru Ozawa. Global well-posedness of critical nonlinear Schrödinger equations below $L^2$. Discrete & Continuous Dynamical Systems - A, 2013, 33 (4) : 1389-1405. doi: 10.3934/dcds.2013.33.1389
Witten on Analytic Continuation of Chern-Simons Theory
Posted on October 10, 2009 by woit

I was down in Princeton last Thursday, and attended a wonderful talk by Witten, which I'll try and explain a little bit about here. Presumably within a rather short time he'll have a paper out on the arXiv giving full details.

The talk concerned Chern-Simons theory, the remarkable 3d QFT that was largely responsible for Witten's Fields medal. Given an SU(2) connection A on a bundle over a 3-manifold M, one can define its Chern-Simons number CS(A). This number is invariant under the identity component of the group of gauge transformations $\mathcal G$, and jumps by 2π times an integer under topologically non-trivial gauge transformations. The QFT is given by taking CS(A) as the action. The path integral
$$Z(M,k)=\int_{\mathcal A/\mathcal G} dA\, e^{ikCS(A)}$$
is well-defined for k integral and gives an interesting topological invariant of the 3-manifold M. One can also take a knot K in M, choose an irreducible representation R of SU(2) of spin n/2, and then define a knot invariant by
$$Z(M,K,k,n)=\int_{\mathcal A/\mathcal G} dA\, e^{ikCS(A)}hol_R(K)$$
where $hol_R(K)$ is the trace of the holonomy in the representation R around the knot K (this is the Wilson loop). To simplify matters, consider the special case $Z(K,k,n)=Z(S^3,K,k,n)$, which can be used to study knots in $\mathbf R^3$.

These knot invariants can be evaluated for large k by stationary phase approximation (perturbation theory), and for arbitrary k by reformulating the QFT in a Hamiltonian formalism and using loop group representation theory and the Verlinde (fusion) algebra.

One thing that has always bothered me about this story is that it has never been clear to me whether such a path integral makes sense at all non-perturbatively. At one point I spent a lot of time thinking about how you would do such a calculation in lattice gauge theory. There, one can imagine various (computationally impractical) ways of defining the action, but integrating a phase over an infinite-dimensional space always looked problematic: without some other sort of structure, it was hard to see how one could get a well-defined answer in the limit of zero lattice spacing. In simpler models with similar structure (e.g. loops on a symplectic manifold), similar problems appear, and are resolved by introducing additional terms in the action.

What Witten proposed in his talk was a method for consistently defining such path integrals by analytic continuation. The idea is to complexify, working with SL(2,C) connections and a holomorphic Chern-Simons functional, then exploit the freedom to choose a different contour to integrate over than the contour of SU(2) connections. By choosing a contour that is not invariant under topologically non-trivial gauge transformations, and only modding out by the topologically trivial ones, Witten also managed to define the theory for non-integral k, making contact with a lot of mathematical work on these knot invariants, which treats them as Laurent polynomials in the square root of
$$q=e^{2\pi i/(k+2)}$$

The main new idea that Witten was using was that the contributions of different critical points p (including complex ones) could be calculated by choosing appropriate contours $\mathcal C_p$ using Morse theory for the Chern-Simons functional. This sort of Morse theory involving holomorphic Morse functions gets used in mathematics in Picard-Lefschetz theory.
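Schematically — and this is just shorthand for the statement, not a formula from the talk — the upshot is that the original cycle of SU(2) connections, viewed inside the complexified space, gets expanded as an integer combination of these critical-point contours:
$$\int_{\mathcal C_{\mathbf R}} dA\, e^{ikCS(A)}=\sum_p n_p\int_{\mathcal C_p} dA\, e^{ikCS(A)},\qquad n_p\in\mathbf Z$$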
The contour is given by the downward flow from the critical point, and the flow equation turns out to be a variant of the self-duality equation that Witten had previously encountered in his work with Kapustin on geometric Langlands.

One tricky aspect of all this is that the contours one needs to integrate over are sums of the $\mathcal C_p$ with integral coefficients, and these coefficients jump at "Stokes curves" as one varies the parameter in one's integral (in this case x = k/n, with k and n large). In his talk, Witten showed the answer that he gets for the case of the figure-eight knot.

Mathematicians and mathematical physicists have done quite a bit of work on SL(2,C) Chern-Simons, and have studied the properties of knot invariants as analytic functions. I don't know whether Witten's new technique solves any of the mathematical problems that have come up there. He mentioned the relation to 3d gravity, where the relationship between Chern-Simons theory and gravity in the Lorentzian and Euclidean signature cases evidently still remains somewhat mysterious. Perhaps his analytic continuation method may provide some new insight there. It also may apply to a much wider range of QFTs where there are imaginary terms in the action, making the path integral problematic. I'd be very curious to understand how this works out in some simpler models, such as the loop space ones. In any case, it appears to be a quite beautiful new idea about how to define certain QFTs via the path integral.

Update: Witten's slides for the talk are available here, video here. For slides from other talks at the workshop the talk was part of, see here.

6 Responses to Witten on Analytic Continuation of Chern-Simons Theory

Bruce Bartlett says:
Sounds great. Looking forward to when the paper comes out.

Lefschetz Thimble says:
I don't see the magic words to convince everyone you know the mathematics.

Thanks L.T. for giving us the magic words. (the commenter must have attended the talk, during which Witten joked that if you want to show people that you're expert in this subject, you refer to one of these integration cycles as a "Lefschetz Thimble". I considered doing this in the posting, but decided against it, since not only am I not an expert on this, but it's one part of Morse theory I've been completely ignorant about…)

Daniel Doro Ferrante says:
For those who are interested, the talk is already online (I watched it yesterday): Analytic Continuation of CS Theories (aka, Lefschetz Thimble 😉 ). It's a very stimulating and sophisticated lecture, definitely worth watching. Incidentally, it reminds me of some works that go in this "general direction", namely: arXiv:hep-th/9612079 and arXiv:0710.1256 [hep-th].

Thanks Daniel, I also see that a pdf of the slides is available, will add links to the posting.

Pingback: Teorias Topológicas de Campo e suas Continuações Analíticas… « Ars Physica
CommonCrawl
Multicolor 4D printing of shape-memory polymers for light-induced selective heating and remote actuation

Hoon Yeub Jeong1, Byung Hoon Woo1, Namhun Kim2 & Young Chul Jun1
Scientific Reports volume 10, Article number: 6258 (2020)

Four-dimensional (4D) printing can add active and responsive functions to three-dimensional (3D) printed objects in response to various external stimuli. Light, among others, has a unique advantage of remotely controlling structural changes to obtain predesigned shapes. In this study, we demonstrate multicolor 4D printing of shape-memory polymers (SMPs). Using color-dependent selective light absorption and heating in multicolor SMP composites, we realize remote actuation with light illumination. We experimentally investigate the temperature changes in colored SMPs and observe a clear difference between the different colors. We also present simulations and analytical calculations to theoretically model the structural variations in multicolor composites. Finally, we consider a multicolor hinged structure and demonstrate multistep actuation by changing the color of light and the duration of illumination. 4D printing can allow complex, multicolor geometries with predesigned responses. Moreover, SMPs can be reused multiple times for thermal actuation by simply conducting thermomechanical programming again. Therefore, 4D printing of multicolor SMP composites has unique merits for light-induced structural changes. Our study indicates that multicolor 4D printing of SMPs is promising for various structural changes and remote actuation.

Three-dimensional (3D) printing is appropriate for the fabrication of complex multi-material structures that are difficult to realize using conventional methods. 3D printing is being widely used in many different technical areas1,2,3,4,5,6,7. However, in most cases, 3D-printed structures remain static as printed. A new concept, known as four-dimensional (4D) printing, was proposed by Tibbits et al.8. They developed active composite structures consisting of swelling and rigid materials. Using water-swelling materials, flat 3D-printed structures can be transformed into predesigned 3D shapes. 4D printing can add active, responsive functions to 3D-printed objects in various ways9.
For example, 4D printing research has been conducted involving hydrogels10,11,12 or shape-memory polymers (SMPs)13,14,15. These materials can respond to external stimuli such as heat, humidity, pH value, and light16,17,18. Using the swelling and contraction of hydrogels in water, it is possible to realize biomimetic 3D-printed structures19. Although a reversible shape change is possible for hydrogels, it is often very slow (taking several minutes)20 and works only in limited environments (e.g., in a solvent). In contrast, SMPs allow for a rapid shape change of 3D-printed objects with large structural deformation. SMPs can be deformed into arbitrary temporary shapes, and the temporary shape can be fixed during a glass-transition or crystallization process. After this thermomechanical programming, SMPs can recover their original shapes when heated above a transition temperature (the glass-transition temperature, Tg, in our case). The Tg for SMPs can be adjusted via chemical synthesis or by simply mixing two SMPs in a different ratio21 during 3D printing (called digital SMPs). Moreover, SMPs are rigid enough to bear loadings and can be reused for structural changes and actuation; the programming and recovery processes can be repeated multiple times. Owing to these unique features, many researchers have utilized SMPs in 4D printing studies. Several of these studies have focused on sequential shape changes. Mao et al. fabricated a sequential self-folding structure using digital SMPs with different values of Tg22. Wu et al. demonstrated the multi-shape memory behavior of 3D-printed SMP composites, and these structures were activated in water at different temperatures23.

Light has an advantage in that it can induce stimuli-responsive structural changes remotely. Although magnetic fields can also allow remote actuation24,25,26, the working distance can be limited because objects must be located close to a magnet. The magnetic fields for actuation can also be generated by electric coils that can be integrated into actuators; however, this can complicate the actuator design and increase the overall weight. Several studies on light-induced shape changes have recently been conducted, although they do not involve 3D printing. Mu et al. fabricated light-activated polymer-laminated composites27. A photochemical reaction with ultraviolet (UV) light induced the bending of composite structures, although the structural changes took a few minutes to occur. Pre-strained polymer sheets (such as Shrinky-Dink) and inkjet printing were also used to create self-folding structures that were activated under light illumination28,29. Sequential folding was also demonstrated with different colors of light30. Although these are very exciting demonstrations, pre-strained sheets cannot be reused after deformation. Moreover, background heating is often needed because of the high response temperature (approximately 100 °C).

Inspired by 4D printing of active composites15,23,31, we present multicolor 4D printing of SMPs in this paper. Using color-dependent light absorption, we realize remotely controlled shape changes. We study the bending behavior of our multicolor sample via both experiments and theoretical modeling. In contrast to previous studies that involved heating an entire structure, we induce selective heating in our multicolor SMP composites by properly choosing the color of light. This leads to color-dependent structural changes into different shapes.
4D printing can allow the complex geometries of multicolor composites with predesigned responses. In addition, SMPs can be reused multiple times by conducting thermomechanical programming again. Therefore, multicolor 4D printing of SMPs can offer unique merits for light-induced structural changes and remote actuation. Figure 1a,b present our light-activating structure (L = 40 mm, w = 5.5 mm, t = 2 mm, a = 0.4 mm). It consists of three materials that are available in the commercial multi-material 3D printer (Stratasys, J750). The yellow (Veroyellow) and blue (Verocyan) fibers are digital SMPs, whereas the sky-blue matrix is a rubber-like transparent material (Tango + ). The yellow and blue fibers are positioned such that incident light can reach both SMPs. The polyJet process (material jetting) was used for the 3D printing of this multicolor composite structure, where photopolymer-ink droplets were jetted and cured with UV light. This polyJet process can create fine features with a resolution of approximately 50 μm in the plane and 15 μm in thickness. The geometric design was first created using computer-aided design (CAD) software and then imported into our 3D printer. Figure 1c demonstrates the thermomechanical programming and light activation in our multicolor structure. First, the entire structure was immersed in hot water and stretched at 90 °C. Then, it was cooled and made rigid by immersing it in cold water (at 25 °C). The stretched structure with a strain of ε = 10% was first illuminated by a red light-emitting diode (LED), and the entire structure was bent downward as shown in Fig. 1c. Then, the bent structure was illuminated by a blue LED, and it reverted to its original flat shape. A 3D printed light-activating structure. (a) Schematic for the multicolor SMP structure. (b) Side view of the structure. (c) Thermomechanical programming and bending behavior (the dotted line in the figure is an eye guide). To understand the observed behavior, we first measured the absorption spectra of printed color plates; refer to Fig. 2a,b. 3D printing of blue and yellow plates with dimensions 25 mm × 25 mm × 1 mm was performed using the same digital SMPs shown in Fig. 1. The transmission (T) and reflection (R) spectra for the two colored plates under normal incidence of light were measured using a spectrophotometer (Cary 5000, Agilent). Then, the absorption (A) spectra were calculated as follows: A = 1 − T − R. Figure 2a,b present the measured spectra for the blue and yellow plates, respectively. We can see that the blue plate absorbs red light strongly (A ~ 95%), while the yellow plate hardly absorbs red light (A ~ 5%) (refer to the black lines in Fig. 2a,b). In the shorter wavelength region, the yellow plate strongly absorbs blue light (A ~ 95%), while the blue plate absorbs blue light moderately (A ~ 41% at 445 nm). Selective heating of colored SMPs. (a,b) The reflection, transmission, and absorption spectra of the blue and yellow SMP plates, respectively, in the visible wavelength region. (c) The measured temperature change of blue and yellow SMP samples under red LED illumination. (d,e) IR camera images of the blue and yellow SMP plates, respectively, after 30 seconds of red LED illumination. (f) The measured temperature change of blue and yellow SMP plates under blue LED illumination. (g,h) IR camera images of the blue and yellow SMP plates, respectively, after 30 seconds of blue LED illumination. The dominant wavelength of the red LED was 623 nm, while that of the blue LED was 445 nm. 
The red LED had a power density of 1.1 mW/mm2, while the blue LED had 2.1 mW/mm2 measured at a distance of 200 mm. We also directly measured the temperature changes of the color plates under LED illumination using an infrared (IR) imaging camera (FLIR-C2, FLIR) (refer to Fig. 2c–h). A 3-mm-thick piece of styrofoam was placed below the colored plates for thermal insulation from the floor during light illumination. As expected from the absorption spectra, the thermal IR images indicated a clear difference between the two plates under red-LED illumination; refer to Fig. 2d–e. The IR images were recorded over time, and the temperature changes as a function of time are plotted and given in Fig. 2c. The temperature of the blue plate increased rapidly; however, the temperature of the yellow plate exhibited only a slight increase. The temperature of the blue plate increased to 80 °C in 30 s (i.e., above the glass-transition temperature Tg of the blue SMP, which is about 66.7 °C). In contrast, the temperature of the yellow plate remained around 30 °C after 30 s. This considerable difference in their temperatures implies that we can selectively heat the blue SMP fibers with red-light illumination. We obtained the same IR images for blue-light illumination; refer to Fig. 2f–h. In this case, the temperature difference between the two plates was smaller because both the plates can absorb blue light to some extent. However, there was still an evident difference in their temperatures. The difference in temperatures explains the light-induced actuation shown in Fig. 1c. After thermomechanical programming, the red LED illumination induced downward bending (i.e., n-shape) owing to the selective light absorption only in blue SMP fibers. The temperature of the blue SMP fibers increased above Tg, and the elongated blue fibers softened and shortened to the original length. The temperature of the yellow fibers remained relatively unchanged (i.e., below Tg); thus, the yellow fibers retained their elongated length. This difference between the blue and yellow fibers induced the bending of the multilayer composite. The blue LED illumination induced the heating in the yellow fibers; thus, the yellow fibers also reverted to their original length. Then, the entire structure reverted to its initial flat shape as shown in Fig. 1c. This multicolor structure can be reused for another cycle of actuation after thermomechanical elongation. This color-dependent selective heating and actuation behavior also suggests that we can induce actuation into different shapes by controlling the illumination sequence of different colors of light. Figure 3 presents a simple example. Figure 3a,b show a schematic and an experimental result, respectively (see also Videos 1–4). When red light was first applied, the blue SMP fibers placed at the bottom of the printed structure recovered, whereas the yellow SMP fibers at the top remained rigid. Thus, the entire structure deformed into an n-shape. Bending behavior of the multicolor sample. A thermomechanically programmed structure bends to a n-shape under red illumination. After bending, the structure can recover to an initial flat state with blue illumination. In case of illuminating blue light first, the structure bends to a U-shape. It can also recover to the initial state with red-light illumination. (a) is the schematic for dual-step actuation, while (b) is the experimental result. See also Videos 1–4. Applying blue light later caused the entire structure to retain its initial flat state. 
In contrast, when blue light was first applied, the yellow SMP fibers at the bottom recovered first, whereas the blue SMPs remained rigid. Therefore, the entire structure was bent upward (i.e., deformed into a U-shape). Applying red light later caused the entire structure to retain its initial flat state. However, when the structure was heated in hot water (instead of selective heating with colored light), the change in shape of our sample was insignificant (data is not shown here). In the hot water, both blue and yellow SMPs recovered at the same rate, and the entire structure shrank to its original length but remained flat (i.e., no shape change occurred). To investigate the bending behavior, we performed thermomechanical programming with different strains and observed the resulting bending with blue LED illumination (Fig. 4). Light-activation experiments were conducted for samples with strains of 5%, 7.5%, 10%, and 12.5%; refer to Fig. 4a. Selective heating of yellow SMP fibers under blue LED illumination caused the entire structure to bend into a U-shape. Photographs of these experiments are shown in Fig. 4a. We notice that larger strains led to larger bending angles. In Fig. 4b, the maximum bending angle is plotted with respect to programming strain (red line). The maximum bending angle increased as the strain increased. Figure 4c shows the time taken to reach the maximum bending angle (red curve). The required time was almost the same for all the strains, but it decreased slightly as the strain increased. The slight decrease in the time can be attributed to the different heating rates of the SMP fibers with varying strains because the structures with larger strains became thinner. The temperature of thinner SMPs can increase faster. Bending and recovery under blue-light illumination. (a) Experiments with different programming strains. (b) Maximum bending angles from experiments, simulations, and analytical theory. The experiment was repeated 5 times for each strain, and the error bars were obtained from the standard deviation. (c) The measured time for maximum bending (red) and recovery to the initial state (blue). Because of the heat transfer from the yellow to blue SMP fibers, the bent structure eventually became flat again under blue LED illumination; refer to Video 5. The blue line in Fig. 4b represents the measured time consumed for the full recovery of the flat structure. For all the programming strains, the full-recovery time was approximately 60 s. This implies that the temperatures of both the yellow and blue SMP fibers significantly exceeded Tg in 60 s, and both fibers retained their original lengths. This indicates that we must adjust the duration of illumination to control the bending behavior29. To understand the bending behavior further, we considered the temperature changes of yellow and blue SMP fibers under blue LED illumination (Fig. 5). In the case of yellow fibers at the upper layer, we directly measured the surface temperature of our multicolor sample using an IR imaging camera. Because the absorption of blue LED light by yellow fibers is strong, we could determine the temperature from the maximum-temperature spot; note the solid yellow line in Fig. 5a. The temperature of the blue fibers at the lower layer was estimated via heat-transfer simulations (COMSOL Multiphysics). In the simulations, the experimentally measured temperatures for yellow fibers were used, and the thermal-conductivity values for printing materials were obtained from the literature32,33. 
We found that air cooling does not change our simulation result much (see Supplementary Fig. S1). Heat can be gradually transferred from the yellow fibers to the blue fibers; thus, the temperature of the blue fibers increases over time; refer to Appendix I for simulation snapshots. Figure 5a shows the temperature change of the yellow (measured) and blue (estimated) SMP fibers in our sample. The dotted blue line represents the result of the heat-transfer simulation. Because the blue SMP fibers partially absorbed the blue light, the temperature of the blue fibers can increase by this direct absorption too. To investigate the effect of direct absorption, we prepared a control sample with only blue SMP fibers in a transparent matrix material (see the inset in Fig. 5a). Figure 5a also shows the measured temperature for this control sample under blue LED illumination (red solid line). The rise in temperature due to direct blue-light absorption was significantly smaller than that due to heat transfer. Thus, it shows that the dominant factor causing the temperature increase in the blue fibers at the lower layer was the heat transferred from the yellow fibers at the top layer. (a) The measured temperature change of the multicolor structure. The yellow solid line is the temperature of yellow SMP fibers, whereas the blue dashed line indicates the temperature of blue SMP fibers obtained from heat transfer simulations. The red solid line is the measured temperature of blue SMP fibers in a control sample that contains blue SMP fibers only. (b) Results of solid-mechanics simulations. The color bar indicates the total displacement measured from the bottom plane. We also simulated the bending behavior of our multilayer sample (COMSOL Multiphysics, Solid Mechanics Module). These simulations were conducted using the temperature data in Fig. 5a and the measured modulus and recovery ratio of the SMPs; refer to Appendix II and III. Figure 5b shows the simulation results for different programming strains. The bending angles from these simulations are presented in Fig. 4b (blue circles) and were found to agree well with the experimental results presented in Fig. 4b (red lines). We also performed analytical model calculations to analyze our bending experiments. The radius r of the bending curvature was obtained using Timoshenko's bilayer beam theory34,35: $$r=\frac{{\rm{t}}\cdot \left[3\cdot {(1+m)}^{2}+(1+m\cdot n)\cdot \left({m}^{2}+\frac{1}{m\cdot n}\right)\right]}{6\cdot {(1+m)}^{2}\cdot \varepsilon },$$ where t represents the total thickness, m represents the thickness ratio between the two layers, n represents the modulus ratio between the two layers, and ε represents the strain difference between the two layers. See Appendix IV for the derivation of this equation. t and m are determined by the sample geometry, and n and ε can be estimated using the data in Fig. 5a and the measured SMP parameters. For instance, in the case of 7.5% strain, the maximum bending takes 38 s (Fig. 4b). As shown in Fig. 5a, the temperatures of the yellow and blue fibers become 77.2 and 63.7 °C at this time, respectively. Then, at these temperatures, the moduli of the yellow and blue fibers are approximately 12.7 and 76.6 MPa, respectively (Appendix II). Note that the modulus of the transparent matrix material is significantly lower (i.e., ~0.5 MPa). 
The effective modulus Meff of a composite material in the isostrain condition can be determined as follows: $${M}_{eff}={V}_{m}\cdot {M}_{m}+{V}_{f}\cdot {M}_{f},$$ where Vm and Vf represent the volume fractions of matrix and fiber materials, respectively, and Mm and Mf represent the moduli of matrix and fiber materials, respectively. Then, the effective modulus of the upper half layer (containing yellow fibers) becomes ~2.3 MPa and that of the lower half layer (containing blue fibers) becomes ~11.6 MPa. Thus, we obtain the modulus ratio between the two layers as n ~ 5.1. The measured recovery ratio of the blue SMPs was 25.2% at 63.7 °C (Appendix III). Assuming that the upper-half layer containing yellow fibers is shortened to its original length at the maximum bending, we obtained the strain difference between the two layers as ε ~ 0.0374. From these n and ε values, we calculated the bending radius as r ~ 48.6 mm. For other strains, we can perform the same calculations. The corresponding bending angles for different programing strains are plotted in Fig. 4b (orange asterisks). The analytical results agree well with the experimental results. Clearly, the maximum bending angle increases as the programming strain increases. Because the difference in length between the blue and yellow SMP fibers causes the structure to bend, a larger length difference yields a larger maximum bending angle. We also note that the viscoelasticity of SMPs can complicate the analysis36,37,38. We measured the recovery ratio of SMPs over time (Appendix III). Because this recovery ratio is affected by the time-dependent relaxation of polymers, it includes the viscoelastic response of SMPs. The measured recovery ratio was used in our analytic modeling, as described above. In our work, viscoelasticity was considered in this way, and we found that our analysis agrees well with experiments. In Fig. 4c, we discussed that bent structures eventually became flat again because of heat transfer under continuous light illumination. Similar heat transfer would occur even after the light source is turned off. For example, in Fig. 3b, we observed that the bent structure relaxed a little (i.e., the bending angle was reduced back a little) after light was turned off. The residual heat in the structure could cause this relaxation to happen. To resolve this issue, a thermal insulating layer could be potentially included inside the multicolor composite to block the heat flow. Finally, we demonstrate a multicolor hinged structure for multistep actuation. Here, we used colored SMP fibers in the hinges and controlled the sequential shape transformation according to the color of light; refer to Fig. 6. As shown in Fig. 6a, there were four panels in the structure. Instead of having yellow and blue SMP fibers in the same hinge, we used transparent SMP (Veroclear) fibers with either yellow or blue fibers in a transparent matrix (colored blue in the schematic). The transparent SMPs are colored gray in the schematic. We also used the transparent SMP in the rigid panel of the structure. The two hinges between panels 1 and 2 and between panels 2 and 4 were composed of blue and transparent SMP fibers. The hinge connecting panels 1 and 3 consisted of yellow and transparent fibers. Hinges containing yellow SMPs primarily respond to blue light and those containing blue SMPs respond to red light more strongly; refer to Fig. 2a. In this design, the length of each rigid panel was 5 mm. 
The length of the SMP fibers was 2.5 mm, and their width and thickness were 0.2 mm. During thermomechanical programming, all the hinges were elongated in hot water and cooled equally. We formed holes on panels 2–4 to reduce the weight. To realize rapid transformation, we focused the LED light onto our structure using a focal lens. Our structure transformed within 10 s under LED illumination. Multicolor hinged structure for multistep actuation. (a) Schematic for the multicolor hinged structure. (b) Example of multistep actuation. This hinged structure can transform into different 3D shapes depending on the color of light and duration of illumination. See also Videos 6–8. This hinged structure can be transformed into multiple different shapes depending on the color sequence and duration of illumination. Figure 6b shows an example (see also Videos 6–8). With blue light illumination, the yellow SMP fibers in the hinge between panels 1 and 3 relaxed, and panel 3 rose (State b). With red illumination, the other two hinges bent as well; thus, we obtained another shape (State c). With additional red illumination, the hinge between panels 1 and 2 straightened because of the heat transfer to the transparent SMP fibers (colored gray). However, the other hinge between panels 2 and 4 remained bent because this hinge was slanted and received less light by the illumination from the top. Thus, by controlling the color of light and duration of illumination, we could transform the hinged structure into different shapes. More designs could be further developed to realize various structural changes. We demonstrated multicolor 4D printing of SMP composites and remote actuation via color-dependent selective heating. We experimentally investigated the light absorption and temperature changes in colored SMPs and observed clear differences in the color-dependent temperature changes. The bending-angle changes were examined both experimentally and theoretically. For theoretical modeling, we performed both numerical simulations and analytic calculations to explain the bending-angle dependence on the programming strain. Finally, we demonstrated a multicolor hinged structure, where multistep actuation was controlled by the color of light and duration of illumination. SMPs can be reused by conducting thermomechanical programming again and their response temperatures can be adjusted via material synthesis or by dynamic mixing during 3D printing. Moreover, 4D printing can enable the fabrication of complex, multicolor geometries for tailored responses. Therefore, multicolor 4D printing of SMP composites have unique merits for light-induced structural changes and remote actuation. Appendix I: Heat transfer simulation Figure 7 shows the results from heat transfer simulations; the snapshots show heat transfer and temperature changes over time. The experimentally measured temperatures were used for yellow fibers. The temperature of the blue fibers increases over time because of heat transfer from the yellow fibers to the blue fibers. Snapshots from heat transfer simulation (COMSOL Multiphysics). Appendix II: Modulus measurements with dynamic mechanical analysis (DMA) Figure 8 shows the storage modulus and loss tangent of the SMPs and the matrix material that we used in our work. The DMA sample (size: 10 mm × 3 mm × 1 mm) was 3D-printed using Stratasys J750 and mounted on a DMA machine (TA Instruments, Q800). A preload of 0.01 N was imposed during measurements to prevent the sample from bending. 
The equilibrium was initially set at −50 °C for 5 minutes, and then the temperature was increased to 90 °C at a rate of 3 °C/min. During measurements, the strain oscillated with a frequency of 1 Hz at 0.1% peak amplitude. The three colored SMP materials (yellow, blue, clear) have nearly equal modulus values above room temperature. This helps our multicolor composite structures remain flat after thermomechanical programming. The peak of tanδ shows the glass transition temperatures (Tg). The Tg's of the yellow, blue, and transparent SMPs are 63.2 °C, 66.7 °C, and 63.5 °C, respectively.

(a) Storage modulus and (b) Loss tangent obtained with dynamic mechanical analysis (DMA) measurements.

Appendix III: Recovery ratio measurements

The recovery ratio of the SMP we used was again measured using DMA. The measuring sample was 20 mm × 6 mm × 1 mm in size. The recovery ratio was measured after the following thermomechanical programming: the temperature was first increased to 90 °C at a rate of 2 °C/min and then held isothermally at 90 °C for 5 minutes until the sample reached this temperature. Then, the sample was stretched to a 10% strain at a rate of 0.5%/min at 90 °C. After stretching, the temperature was decreased to 20 °C at a rate of 2 °C/min. The force was then gradually reduced to zero at the same rate as during stretching. To measure the shape-memory recovery ratio, the temperature was again increased to 90 °C at a rate of 2 °C/min and maintained isothermally for 10 minutes (red curve in Fig. 9). The recovery ratio was then determined from 1 − ε(t)/ε(0), where ε(t) is the current strain value of the SMP and ε(0) is the strain value just before the recovery process. The maximum recovery ratio of the SMP was about 92%. The shape fixity ratio of the SMPs can also be estimated from parameters obtained in the DMA measurements (see Supplementary Fig. S2). The measurement result is summarized in Table 1.

Recovery ratio measurement of the yellow SMP.

Table 1 Shape fixity ratio of SMPs.

Appendix IV: Bilayer beam theory

The radius of the bending curvature can be obtained from Timoshenko's beam theory34,35, as follows. In order to satisfy equilibrium, the following two equations should be satisfied (refer to Fig. 10 for the bilayer geometry and relevant parameters):
$$P_1 = P_2 = P$$
$$\frac{Pt}{2} = M_1 + M_2$$

Schematic for a bilayer beam.

The bending moment M of each beam can be expressed using the modulus (E), the second moment of area (I), and the radius r of the bending curvature:
$$M_1=\frac{E_1 I_1}{r},\qquad M_2=\frac{E_2 I_2}{r}$$
so that
$$\frac{Pt}{2}=\frac{E_1 I_1 + E_2 I_2}{r}$$
The strain difference between the two layers can be expressed in terms of the quantities above:
$$\varepsilon=\frac{P_1}{E_1 t_1}+\frac{P_2}{E_2 t_2}+\frac{t_1}{2r}+\frac{t_2}{2r}$$
Using the second moments of area
$$I_1=\frac{t_1^3}{12},\qquad I_2=\frac{t_2^3}{12}$$
and eliminating P from the equations above, we finally obtain the following expression for the radius of the bending curvature:
$$r=\frac{t\left[3(1+m)^2+(1+mn)\left(m^2+\frac{1}{mn}\right)\right]}{6(1+m)^2\,\varepsilon}$$
where t is the total thickness, m is the thickness ratio between the two layers, n is the modulus ratio between the two layers, and ε is the strain difference between the two layers. This is Eq. (1) in the main text.
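To make the analytical recipe above concrete, the following short Python sketch chains the rule of mixtures from the main text together with Eq. (1). The fiber volume fraction (~0.15) and the equal layer thicknesses (m = 1) used below are illustrative assumptions that are not stated explicitly above, so the output should be read as a ballpark check of the 7.5%-strain example rather than an exact reproduction of the r ~ 48.6 mm value.

import math

def effective_modulus(v_fiber, e_fiber, e_matrix):
    # Isostrain rule of mixtures from the main text.
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

def timoshenko_radius(t, m, n, eps):
    # Eq. (1): bending radius of a bilayer with total thickness t,
    # layer-thickness ratio m, layer-modulus ratio n, and strain mismatch eps.
    num = t * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / (6.0 * (1.0 + m) ** 2 * eps)

# Values quoted for the 7.5% programming strain at maximum bending (38 s).
v_f = 0.15                                      # ASSUMED fiber volume fraction
e_upper = effective_modulus(v_f, 12.7, 0.5)     # yellow-fiber layer, softened (~2.3 MPa)
e_lower = effective_modulus(v_f, 76.6, 0.5)     # blue-fiber layer, still glassy (~11.6 MPa)
n = e_lower / e_upper                           # ~5, close to the n ~ 5.1 quoted above
r = timoshenko_radius(t=2.0, m=1.0, n=n, eps=0.0374)   # mm; m = 1 is an assumption
theta = math.degrees(40.0 / r)                  # angle over the 40 mm length, assuming uniform curvature
print(f"r ~ {r:.1f} mm, bending angle ~ {theta:.0f} deg")

Evaluating the same expression with the Appendix II moduli at other temperatures and strains shows the trend reported in Fig. 4b: a larger strain mismatch gives a smaller radius and hence a larger bending angle.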
From the radius, we can estimate a corresponding bending angle. The raw data and the processed data required to reproduce the findings in the current study are available from the corresponding author on reasonable request. Mironov, V., Boland, T., Trusk, T., Forgacs, G. & Markwald, R. R. Organ printing: computer-aided jet-based 3D tissue engineering. Trends Biotechnol. 21, 157–161 (2003). Ozbolat, I. T., Moncal, K. K. & Gudapati, H. Evaluation of bioprinter technologies. Addit. Manuf. 13, 179–200 (2017). D'Auria, M. et al. 3-D printed metal-pipe rectangular waveguides. IEEE Trans. Adv. Packag. 5, 1339–1349 (2015). Gissibl, T., Thiele, S., Herkommer, A. & Giessen, H. Sub-micrometre accurate free-form optics by three-dimensional printing on single-mode fibres. Nat. Commun. 7, 11763 (2016). Gutiérreza, S. C., Zotovicb, R., Navarroa, M. D. & Meseguer, M. D. Design and manufacturing of a prototype of a lightweight robot arm. Procedia Manufacturing 13, 283–290 (2017). Klippstein, H., Sanchez, A. D. D. C., Hassanin, H., Zweiri, Y. & Seneviratne, L. Fused deposition modeling for unmanned aerial vehicles (UAVs): a review. Adv. Eng. Mater. 20, 1700552 (2018). Paolini, A., Kollmannsberger, S. & Rank, E. Additive manufacturing in construction: A review on processes, applications, and digital planning methods. Addit. Manuf. 30, 100894 (2019). Tibbits, S. The emergence of 4D printing. TED Conferences (2013). Mitchell, A., Lafont, U., Hołyńska, M. & Semprimoschnig, C. Additive manufacturing — A review of 4D printing and future applications. Addit. Manuf. 24, 606–626 (2018). Bakarich, S. E., Gorkin, R. III, Panhuis, M. & Spinks, G. M. 4D printing with mechanically robust, thermally actuating hydrogels. Macromol. Rapid Commun. 36, 1211–1217 (2015). Huang, L. et al. Ultrafast digital printing toward 4d shape changing materials. Adv. Mater. 29, 1605390 (2017). Luo, F. et al. Oppositely charged polyelectrolytes form tough, self-healing, and rebuildable hydrogels. Adv. Mater. 27, 2722–2727 (2015). Ge, Q. et al. Multimaterial 4D printing with tailorable shape memory polymers. Sci. Rep. 6, 31110 (2016). Article ADS Google Scholar Zhang, W. et al. Shape memory behavior and recovery force of 4D printed textile functional composites. Compos. Sci. Technol. 160, 224–230 (2018). Ge, Q., Qi, H. J. & Dunn, M. L. Active materials by four-dimension printing. Appl. Phys. Lett. 103, 131901 (2013). Jeong, H. Y., Lee, E., Ha, S., Kim, N. & Jun, Y. C. Multistable thermal actuators via multimaterial 4d printing. Adv. Mater. Technol. 4, 1800495 (2019). Nadgorny, M., Xiao, Z., Chen, C. & Connal, L. A. Three-dimensional printing of pH-responsive and functional polymers on an affordable desktop printer. ACS Appl. Mater. Interfaces 8, 28946–28954 (2016). Zhao, Z. et al. Origami by frontal photopolymerization. Sci. Adv. 3, e1602326 (2017). Gladman, A. S., Matsumoto, E. A., Nuzzo, R. G., Mahadevan, L. & Lewis, J. A. Biomimetic 4D printing. Nat. Mater. 15, 413–418 (2016). Raviv, D. et al. Active printed materials for complex self-evolving deformations. Sci. Rep. 4, 7422 (2014). Panayiotou, C. G. Glass transition temperatures in polymer mixtures. Polymer Journal 18, 895–902 (1986). Mao, Y. et al. Sequential self-folding structures by 3d printed digital shape memory polymers. Sci. Rep. 5, 13616 (2015). Wu, J. et al. Multi-shape active composites by 3D printing of digital shape memory polymers. Sci. Rep. 6, 24224 (2016). Zhu, P. et al. 4D printing of complex structures with a fast response time to magnetic stimulus. ACS Appl. Mater. 
Interfaces 10, 36435–36442 (2018).
Zhang, F., Wang, L., Zheng, Z., Liu, Y. & Leng, J. Magnetic programming of 4D printed shape memory composite structures. Composites Part A 125, 105571 (2019).
Ji, Z., Yan, C., Yu, B., Wang, X. & Zhou, F. Multimaterials 3D printing for free assembly manufacturing of magnetic driving soft actuator. Adv. Mater. Interfaces 4, 1700629 (2017).
Mu, X. et al. Photo-induced bending in a light-activated polymer laminated composite. Soft Matter 11, 2673–2682 (2015).
Liu, Y., Boyles, J. K., Genzer, J. & Dickey, M. D. Self-folding of polymer sheets using local light absorption. Soft Matter 8, 1764–1769 (2012).
Lee, Y., Lee, H., Hwang, T., Lee, J. & Cho, M. Sequential folding using light-activated polystyrene sheet. Sci. Rep. 5, 16544 (2015).
Liu, Y., Shaw, B., Dickey, M. D. & Genzer, J. Sequential self-folding of polymer sheets. Sci. Adv. 3, e1602417 (2017).
Ge, Q., Dunn, C. K., Qi, H. J. & Dunn, M. L. Active origami by 4D printing. Smart Mater. Struct. 23, 094007 (2014).
Mikkelson, E. C. Characterization and modeling of the thermal properties of photopolymers for material jetting processes. Master thesis at http://hdl.handle.net/10919/46789 (2014).
Teoh, J. E. M., Zhao, Y., An, J., Chua, J. A. & Liu, Y. Multi-stage responsive 4D printed smart structure through varying geometric thickness of shape memory polymer. Smart Mater. Struct. 26, 125001 (2017).
Timoshenko, S. Analysis of bi-metal thermostats. J. Opt. Soc. Am. 11, 233–255 (1925).
Momeni, F. & Ni, J. Laws of 4D printing. Preprint at arXiv:1810.10376 (2018).
Diani, J., Gilormini, P., Frédy, C. & Rousseau, I. Predicting thermal shape memory of crosslinked polymer networks from linear viscoelasticity. Int. J. Solids Struct. 49, 793–799 (2012).
Lei, M. et al. 3D printing of auxetic metamaterials with digitally reprogrammable shape. ACS Appl. Mater. Interfaces 11, 22768–22776 (2019).
Che, K., Yuan, C., Qi, H. J. & Meaud, J. Viscoelastic multistable architected materials with temperature-dependent snapping sequence. Soft Matter 14, 2492–2499 (2018).
We acknowledge financial support from the National Research Foundation (NRF) of Korea (NRF-2018R1E1A2A02086050, NRF-2019R1A2C1008330).
School of Materials Science and Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919, Republic of Korea: Hoon Yeub Jeong, Byung Hoon Woo & Young Chul Jun
School of Mechanical, Aerospace and Nuclear Engineering, UNIST, Ulsan, 44919, Republic of Korea: Namhun Kim
H.Y.J., N.K. and Y.C.J. conceived the idea for the described experimental and theoretical studies. H.Y.J. fabricated and tested samples. B.H.W. assisted experiments. H.Y.J. and Y.C.J. wrote the manuscript. The project was supervised by Y.C.J. and N.K.
Correspondence to Namhun Kim or Young Chul Jun.
Supplementary Information. Supplementary Video 1.
Jeong, H.Y., Woo, B.H., Kim, N. et al. Multicolor 4D printing of shape-memory polymers for light-induced selective heating and remote actuation. Sci Rep 10, 6258 (2020). https://doi.org/10.1038/s41598-020-63020-9
CommonCrawl
June 2013, 2(2): 403-422. doi: 10.3934/eect.2013.2.403
Nonlinear instability of solutions in parabolic and hyperbolic diffusion
Stephen Pankavich1 and Petronela Radu2
1 Department of Mathematics, United States Naval Academy, Annapolis, MD 21402, United States
2 Department of Mathematics, University of Nebraska-Lincoln, Avery Hall 239, Lincoln, NE 68588
Received November 2012; Revised December 2012; Published March 2013
We consider semilinear evolution equations of the form $a(t)\partial_{tt}u + b(t) \partial_t u + Lu = f(x,u)$ and $b(t) \partial_t u + Lu = f(x,u)$, with possibly unbounded $a(t)$ and possibly sign-changing damping coefficient $b(t)$, and determine precise conditions for which linear instability of the steady state solutions implies nonlinear instability. More specifically, we prove that linear instability with an eigenfunction of fixed sign gives rise to nonlinear instability by either exponential growth or finite-time blow-up. We then discuss a few examples to which our main theorem is immediately applicable, including evolution equations with supercritical and exponential nonlinearities.
Keywords: steady states, variable coefficients, instability, evolution equations, sign-changing damping.
Mathematics Subject Classification: Primary: 35B35, 35B05, 35B30; Secondary: 35L70, 35K5.
Citation: Stephen Pankavich, Petronela Radu. Nonlinear instability of solutions in parabolic and hyperbolic diffusion. Evolution Equations & Control Theory, 2013, 2 (2) : 403-422. doi: 10.3934/eect.2013.2.403
CommonCrawl
The entomopathogenic fungus, Metarhizium anisopliae for the European grapevine moth, Lobesia botrana Den. & Schiff. (Lepidoptera: Tortricidae) and its effect to the phytopathogenic fungus, Botrytis cinerea Juan Aguilera Sammaritano ORCID: orcid.org/0000-0001-5276-68231,2 na1, María Deymié1,2 na1, María Herrera3, Fabio Vazquez2, Andrew G.S. Cuthbertson4, Claudia López-Lastra1,5 & Bernardo Lechner1,6 The European grapevine moth, Lobesia botrana Den. & Schiff. (Lepidoptera: Tortricidae) and the gray rot fungus (Botrytis cinerea) are two important factors that cause elevated losses of productivity in vineyards globally. The European grapevine moth is one of the most important pests in vineyards around the world, not only because of its direct damage to crops, but also due to its association with the gray rot fungus; both organisms are highly detrimental to the same crop. Currently, there is no effective, economic, and eco-friendly technique that can be applied for the control of both agents. On the other hand, Metarhizium anisopliae belongs to a diverse group of entomopathogenic fungi of asexual reproduction and global distribution. Several Metarhizium isolates have been discovered causing large epizootics in over 300 insect species worldwide. In this study, a simple design was conducted to evaluate the potential of native M. anisopliae isolates as biological control agents against L. botrana and as possible growth inhibitors of B. cinerea. Entomopathogenic fungal strains were isolated from arid soils under vine (Vitis vinifera) culture. Results suggest that the three entomopathogenic strains (CEP413, CEP589, and CEP591) were highly efficient in controlling larval and pupal stages of L. botrana, with mortality rates ranging from 81 to 98% (within 4–6 days). Also, growth inhibition of the B. cinerea strains ranged from 47 to 64%. Finally, the compatibility of the entomopathogenic strains with seven commercial fungicides was evaluated. The potential of the entomopathogenic fungal strains to act as control agents is discussed. Agriculture is in a continuous search for intensification and expansion, due in part to the ever-expanding global population (FAO 2009). The importance of maintaining a sustainable agricultural system represents a serious challenge since this increase in food production is also directly associated with a greater requirement of resources such as water, land use, fertilizers, and pesticides (Foley et al. 2005). One of the major challenges in aiming to increase food production is that many crops are attacked by invertebrate pests, which reduce yields, generate physical crop damage, and limit exports, thus negatively impacting the economy of large, medium, and small growers (Hanem 2012). Additionally, for many years, pest control techniques have been based only on the application of chemical insecticides (Lima et al. 2012). Nevertheless, the indiscriminate use of these compounds has had negative consequences on the environment, agricultural workers' health, crop safety, and the associated growers' economy and has often led to increased pest problems (Cuthbertson and Murchie 2006). Negative impacts on the environment resulting from unnecessary pesticide applications include reduction of biodiversity along with the potential loss of key species such as bees and biological control agents, water and soil contamination, and even the generation of resistance in some invertebrate pest species (Cuthbertson 2004).
The European grapevine moth (Lobesia botrana) Den. & Schiff. (Lepidoptera: Tortricidae) is one of the most important pests in vineyards around the world. This moth is present in both North and South America and in many parts of Europe (Dagatti and Becerra 2015). L. botrana has 2–4 generations per year, depending on the latitude and prevailing climatic conditions (Martín-Vertedor et al. 2010). The first larval generation of the season usually attacks inflorescences, while the later generations cause damage to the fruits. The damage may be of two types. Direct damage is caused by larval feeding on the inflorescence or fruits, while indirect damage occurs when larval feeding wounds are infected with fungi such as Aspergillus, Alternaria, Rhizopus, Cladosporium, Penicillium, and Botrytis cinerea Pers. (Helotiales: Sclerotiniaceae), all of which affect the quality of both fresh and wine grapes (SENASICA, 2014). Botrytis cinerea, the main agent of gray rot, has a broad host range and causes economic losses in both the fresh fruit and vegetable industries worldwide, causing serious losses before and after harvest. Most of the control strategies used to date have been based on the use of chemical products. However, the use of fungicides is increasingly discouraged due to problems of environmental pollution associated with high application rates and by the appearance of resistance in certain strains (Benito et al. 2000). Studies on interactions between L. botrana and B. cinerea have demonstrated a mutualistic relationship between these organisms; both are simultaneously detrimental to the same crop (Mondy and Corio-Costet 2000). The larvae act as vectors of B. cinerea, disseminating conidia and opening wounds that serve as points of entry for the pathogen. These feeding wounds facilitate the rapid penetration and development of mycelium on grape berries (Fermaud and Le-Menn 1992). In Argentina, L. botrana is a quarantine pest subject to official control (Heit et al. 2013; Dagatti and Becerra 2015). Due to the impact of this pest, the competitiveness of the wine industry is at risk, which can generate a crisis within important regional economies. In addition, fresh grapes destined for export must comply with internationally accepted quarantine treatments that, in some cases, increase the cost of production (Senasa (Servicio Nacional de Sanidad y Calidad Agroalimentaria) 2017). Although there are several effective techniques that aim to control adult reproduction, such as the pheromone release technique (Ioriatti et al. 2011), the costs associated with their use are too high for many local growers. The use of pheromones is expensive and only affects the adult stage of the moth. To date, there is no effective, economic, and ecologically safe technique to control the larval stages. Entomopathogenic fungi (EPF) present an alternative solution for the pest. These organisms are important natural control agents that limit insect populations in many ecosystems, both natural and artificial (Cuthbertson and Audsley 2016). Many EPF attack eggs, immature stages, and adult life forms of many insect species (Hanem 2012), and there is a growing interest (Ali et al. 2017) in using them as biocontrol agents in integrated pest management (IPM) programs. The goal of the present study was to search for ecologically sustainable and highly effective alternative methods to control L. botrana larvae, using native EPF, and to evaluate their effect as antagonistic agents against B. cinerea.
Insect rearing, fungal strains, and bioassays To obtain newly emerged larvae of each instar, a breeding colony was established from L. botrana adults collected in Mendoza, Argentina (33°01′52″S, 68°46′34″W). Larvae were reared on an artificial diet (Herrera María et al. 2016) and were maintained in a growth chamber at a photoperiod of 16:8 (L:D), 25 ± 5 °C and a relative humidity ranging between 30 and 50%. This procedure made it possible to produce high numbers of larvae of all instars for the individual bioassays. EPF were isolated from arid soils under V. vinifera crops in San Juan, Argentina (31°65′67″S, 68°58′51″W). The soil sampling technique followed that of Aguilera Sammaritano et al. (2016), and EPF were isolated using the Tenebrio molitor larval baiting technique according to Meyling (2007). Three strains of Metarhizium anisopliae (Metsc.) Sorok. (CEP413, CEP589 and CEP591) were selected for the trials based on preliminary pathogenicity tests (Aguilera Sammaritano et al. 2017). All the strains were identified morphologically according to Bridge et al. (1993) and Liu et al. (2003) and are registered at the Fungal Entomopathogens Collection of the "Centro de Estudios Parasitológicos y de Vectores" (CEPAVE-CONICET, La Plata, Buenos Aires, Argentina). For the bioassays, 20 individuals from each larval instar (L2–L5) and pupae (Pp) were used for each isolate, in three replicates for all cases. Sixty larvae per treatment (isolate/instar) were treated, and the replicates were run at different times. The infection procedure was performed by placing individual larvae and pupae on 15-day sporulating cultures of M. anisopliae for 5 s; they were then placed on sterile 90-mm Petri dishes lined with filter paper moistened with 1 ml of sterile distilled water. Five grams of the L. botrana artificial diet (Herrera María et al. 2016) was added to each Petri dish, which was sealed with Parafilm® to maintain the internal humidity. The dishes were incubated in a growth chamber at 27 ± 2 °C for 7 days in darkness. For the control, the larvae and pupae were placed in contact with PGA (potato 200 g, glucose 20 g, agar 15 g) Britania® only. Larval mortality was assessed daily for 7 days. Each dead individual with confirmed mycosis was aseptically removed from the Petri dish, placed in a separate sterile 2-ml Eppendorf tube, labeled, and stored at 4 °C. Abbott's equation (Abbott 1925) (Eq. 1) was used to obtain the corrected mortality (CM). Death by mycosis was confirmed by transferring spores from cadavers to individual Petri dishes with PGA. The colonies were observed after 10 days of growth at 27 °C in the dark. $$ \mathrm{CM}=\frac{\%\mathrm{Treatment}\ \mathrm{mortality}-\%\mathrm{control}\ \mathrm{mortality}}{100-\%\mathrm{control}\ \mathrm{mortality}}\times 100 $$ Inhibition of Botrytis cinerea by Metarhizium anisopliae For this trial, two strains of B. cinerea (B11 and B15) isolated from vine grapes were used. These strains have previously been shown to be highly pathogenic to V. vinifera (Muñoz et al. 2012) and are preserved in the Mycological Collection of the Institute of Biotechnology (UNSJ-San Juan, Argentina). For each B. cinerea strain, a 5-mm disc of agar containing fresh mycelium from a 10-day culture was placed in the center of a Petri dish containing 25 ml of PGA. Immediately, 3 discs of the same diameter of each EPF (from 15-day cultures) were placed carefully near the edges of the Petri dish, forming a triangle around the B. cinerea disc.
The Petri dishes were inverted to prevent conidia from either fungus falling onto the agar medium and were incubated at 28 ± 2 °C in darkness. Colony diameters (for all fungi) were measured in two perpendicular directions over the following 20 days under a stereomicroscope using a digital caliper. Three replicate plates were made for each Botrytis strain. There were two control treatments: the first (CBC) was obtained by measuring the radial diameter of each B. cinerea on separate Petri dishes (i.e., potential growth) and the second (CEF) comprised the three EPF strains arranged in a similar pattern to that used with the Botrytis discs. Three replicate plates were prepared for each control treatment. The inhibition percentage (Eq. 2) was estimated according to Jiang et al. (2014). Inhibition was considered positive when it reached > 40% (Table 1). $$ \%\mathrm{Inhibition}=\frac{\mathrm{Control}\ \mathrm{diameter}-\mathrm{Treatment}\ \mathrm{diameter}}{\mathrm{Control}\ \mathrm{diameter}}\times 100 $$ Table 1 Inhibition trials of seven commercial fungicides to assess compatibility with three entomopathogenic fungal strains Fungicide susceptibility The susceptibility of the three EPF isolates to seven commercial fungicides was assessed. Fungicides were added directly to PGA at the rates provided by the manufacturers (Table 2) after the medium had been sterilized by autoclaving and cooled to 55 °C, and were mixed thoroughly for 5 min before pouring into the Petri dishes. Then, 100 μl of a fungal spore suspension (3 × 10³ conidia/ml) of each EPF strain was carefully placed in the center of a 90-mm Petri dish containing 25 ml of PGA. Fungal strains were grown for 20 days at 25 °C in darkness, with colony diameter measured in two perpendicular directions every 48 h, starting 2 days after inoculation. Three replicate plates were made for each strain/fungicide treatment. Growth rate was estimated according to Kalm and Kalyoncu (2008). The control treatment for each isolate was made by adding 100 μl of the spore suspension to PGA media without fungicides. Table 2 Commercial fungicides used in trials (active ingredient and applied dose) The data were analyzed using a one-way analysis of variance (Infostat 2017). In all data sets, normality and variance homogeneity were tested prior to analyses, and p < 0.05 was considered significant. Nonparametric analyses were performed when certain data sets did not comply with the homoscedasticity assumption. The median lethal time, LT50, was estimated using parametric survival regression for combinations of fungal strain, hours of survival, and L. botrana larval stage. LT50 was estimated only for the larval instars which had cumulative mortalities higher than 50%. Pathogenicity trials There were significant differences in the mortality levels caused by the 3 fungal strains tested at the Pp stage (H = 5.49, p = 0.03) (Fig. 1). However, no differences were observed against the L2 (H = 2.49, p = 0.32), L3 (H = 1.16, p = 0.61), L4 (H = 3.62, p = 0.16), or L5 (H = 0.82, p = 0.75) immature stages. Among the larval instars, the CM ranged between 21.65 ± 14.43% (CEP591-L4) and 81.6 ± 2.89% (CEP591-L5). The highest CM (99.98 ± 0.0%) was registered for the Pp stage (CEP591). Mortality percentages among control treatments (not shown) ranged between 2% (L3) and 7% (L2, L5). To our knowledge, this is the first time that the susceptibility of larval stages of L. botrana to M. anisopliae has been demonstrated.
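For readers who wish to reproduce the simple arithmetic behind these figures, the following minimal Python sketch illustrates how Eq. 1 (Abbott's corrected mortality) and Eq. 2 (growth inhibition) convert raw observations into the percentages reported here. It is not part of the published protocol, and all numerical values in the example are hypothetical.

```python
# Minimal sketch (not part of the original protocol): Eq. 1 (Abbott's corrected
# mortality) and Eq. 2 (growth inhibition) applied to hypothetical example values.

def corrected_mortality(treatment_mortality_pct: float, control_mortality_pct: float) -> float:
    """Abbott's correction (Eq. 1): removes the background mortality seen in controls."""
    return ((treatment_mortality_pct - control_mortality_pct)
            / (100.0 - control_mortality_pct)) * 100.0

def inhibition_pct(control_diameter_mm: float, treatment_diameter_mm: float) -> float:
    """Eq. 2: percent reduction of a B. cinerea colony diameter relative to its control."""
    return ((control_diameter_mm - treatment_diameter_mm) / control_diameter_mm) * 100.0

if __name__ == "__main__":
    # Hypothetical bioassay: 85% mortality in a treated group, 5% in the control.
    print(round(corrected_mortality(85.0, 5.0), 1))   # ~84.2% corrected mortality
    # Hypothetical dual-culture plate: 60 mm control colony vs 25 mm confronted colony.
    print(round(inhibition_pct(60.0, 25.0), 1))       # ~58.3% inhibition (positive, > 40%)
```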
Probit analysis for LT50 measured in hours showed that the oldest larval instar (L5) had lower lethal times for CEP413 (110), CEP589 (112), and CEP591 (113) when compared to the L3 instar (138, 139, 150), respectively. Corrected mortality of Lobesia botrana larvae (L2–L5) and pupae (Pp) caused by the entomopathogenic fungus Metarhizium anisopliae after 7 days of exposure. Error bars represent standard deviation for three replicates. Different letters indicate significant differences only among immature stages. Columns represent corrected mortality caused by CEP413 (white), CEP589 (gray), and CEP591 (black) All three Metarhizium isolates infected and killed the larvae and pupae of the grapevine moth. Additional studies to further define their potential role in integrated pest management programs for this pest seem warranted. The study presents new evidence demonstrating that some native strains of M. anisopliae derived from arid zones within Argentina were active against different stages of the vine moth, especially the older larval instars. It is widely accepted that, among immature stages, eggs are more difficult to infect than larval stages (Skinner et al. 2014) and that pupae are typically very resistant to infection (Vestergaard et al. 1995). However, this is not always the case, and some biocontrol strategies have effectively targeted the pupal stage (Ansari et al. 2008). The majority of biological control studies on the vine moth have focused on the use of Bacillus thuringiensis (Roditakis 1986; De Escudero et al. 2007). However, Cozzi et al. (2013) tested 11 fungal strains belonging to the Fusarium (3 strains), Beauveria (6 strains), Paecilomyces (1 strain), and Verticillium (1 strain) genera. They obtained a maximum mortality of 55% on L. botrana larvae with Beauveria bassiana under field conditions. Although the obtained results cannot be compared directly to those of Cozzi et al. (2013), additional fungal entomopathogens were identified that may be useful in the biocontrol of L. botrana. In addition, this is the first report to demonstrate the susceptibility of immature stages of the grapevine moth to Metarhizium. Botrytis cinerea growth inhibition The ANOVA test showed that inhibition of strains B11 and B15 was first detected at 72 h post-inoculation. No inhibition was detected prior to 48 h post-inoculation (Figs. 2 and 3). There was no difference in the percent inhibition of strain B11 between 72 and 168 h post-inoculation (F = 0.67, p = 0.619). However, differences were significant for B15 (F = 5.28, p = 0.002). There were no differences in the levels of B. cinerea inhibition among the 3 EPF strains during 72–168 h post-inoculation for B11 (F = 0.14, p = 0.873) and B15 (F = 1.93, p = 0.163). The level of inhibition ranged between 48 and 64% (B11) and 47–62% (B15). Percent inhibition of Botrytis cinerea strain B11 caused by three entomopathogenic fungi. Error bars represent the standard deviation for three replicates. Different letters indicate significant differences among the time stages (LSD Fisher 0.05). Control treatments (not shown) represent 0% inhibition, and they were statistically different from the three fungal treatments. Columns represent inhibition percentage given by CEP413 (white), CEP589 (gray), and CEP591 (black) With respect to the B. cinerea trials, the level of growth inhibition observed suggests that the same isolates provide additional benefits through their ability to inhibit growth of the pathogen in situ.
The capacity of different entomopathogenic strains to suppress B. cinerea growth could improve the overall level of control obtained with selective fungicides. The tested strains had a similar inhibitory effect that reached 64%. Although there is no evidence from previous studies of the use of EPF to decrease the growth rate of the gray rot fungus, the study by Molina et al. (2006) showed that Clonostachys spp. produced similar inhibitory effects on B. cinerea. According to Campbell (1989), Botrytis is highly vulnerable to competition for nutrients and substrate, which may in part explain the growth inhibition observed in the Petri dish assays. Recent studies (Hwi-Geon et al. 2017) found that B. bassiana and M. anisopliae can inhibit B. cinerea and control Myzus persicae. Therefore, there is a precedent for using Metarhizium as a potential fungicidal/fungistatic agent. In general terms, the ANOVA test detected statistical differences among treatments (Table 1). On the one hand, the 3 EPF strains were equally inhibited by dicarboximide (87–91%, F = 0.78, p = 0.464), copper oxychloride (73–80%, F = 2.02, p = 0.143), cyprodinil–fludioxonil (94–96%, F = 1.68, p = 0.197), and myclobutanil (97–98%, F = 0.98, p = 0.384). Nevertheless, some fungicides affected the strains differently. Carbendazim, for example, completely inhibited strains CEP589 and CEP591, whereas growth of CEP413 was inhibited by 56%. Similarly, the inhibitory effect caused by iprodione was higher for CEP413 (89%) than for CEP591 (74%) and CEP589 (61%). In the case of fenhexamide, growth inhibition was higher for CEP413 (79%) than for CEP589 (58%) and CEP591 (41%). All of the EPF strains were highly sensitive to dicarboximide, copper oxychloride, myclobutanil, and cyprodinil–fludioxonil, with inhibition percentages ranging from 73 to 98%. Therefore, the use of these fungicides together with the tested EPF strains is probably inadvisable. However, CEP413, in contrast to CEP589 and CEP591, was moderately resistant to carbendazim (56%). Similarly, fenhexamide caused only moderate inhibition of CEP589 and CEP591 (58 and 41%, respectively). The entomopathogenic fungus Metarhizium seems to be a good candidate for controlling L. botrana larval and pupal stages. Based on the obtained results, more work is required to demonstrate that the EPF strains are sufficiently virulent, can control different pest life stages outside the laboratory, and can be produced and formulated in a fashion that makes it economically feasible to use one of these strains in an IPM strategy, as well as to test their compatibility with beneficial species. On the other hand, the tested strains were also capable of producing a moderate antagonistic effect on B. cinerea and could be combined with some fungicides. CM: Corrected mortality EPF: Entomopathogenic fungi FAO: Food and Agriculture Organization LT50: Median lethal time PGA: Potato Glucose Agar Aguilera Sammaritano JA, Lopez Lastra C, Leclerque A, Vazquez F, Toro M, D'Alessandro C, Cuthbertson AGS, Lechner B (2016) Control of Bemisia tabaci by entomopathogenic fungi isolated from arid soils in Argentina. Biocontrol Sci Techn 26(12):1668–1682 Aguilera Sammaritano JA, Torrente K, Deymie C, Leclerque A, Vazquez F, Cuthbertson AGS, Lechner BE (2017) Entomopathogenic fungi: are polisporic isolates more efficient than monosporic strains?
Rev Soc Entomol Argent 76(3–4):39–43 Ali S, Zhang C, Wang Z, Wang XM, Wu JH, Cuthbertson AGS, Shao Z, Qiu BL (2017) Toxicological and biochemical basis of synergism between the entomopathogenic fungus Lecanicillium muscarium and the insecticide matrine against Bemisia tabaci (Gennadius). Sci Rep 7:46558 Ansari M, Brownbridge M, Shah F, Butt T (2008) Efficacy of entomopathogenic fungi against soil-dwelling life stages of western flower thrips, Frankliniella occidentalis, in plant-growing media. Entomol Exp Appl 127(2):80–87 Benito EP, Arranz M, Eslava A (2000) Factores de patogenicidad de Botrytis cinerea. Rev. Iberoam Micol 17:S43–S46 Bridge PD, Williams MAJ, Prior C, Paterson RRM (1993) Morphological, biochemical and molecular characteristics of Metarhizium anisopliae and M. flavoviride. J G Microbiol 139:1163–1169 Campbell R (1989) Biological control of microbial plant pathogens. Cambridge University Press, New York, p 218 Cozzi G, Somma S, Haidukowski M, Logrieco AF (2013) Ochratoxin a management in vineyards by Lobesia botrana biocontrol. Toxins 5(1):49–59 Cuthbertson AGS (2004) Unnecessary pesticide applications in Northern Ireland apple orchards due to miss-identification of a beneficial mite species. Res J Chem Environ 8(3):77–78 Cuthbertson AGS, Audsley N (2016) Further screening of entomopathogenic fungi and nematodes as control agents for Drosophila suzukii. Insects 7(2):24 1–9 Cuthbertson AGS, Murchie AK (2006) The environmental impact of an orchard winter wash and early season pesticide applications on both a beneficial and a pest mite species in Bramley apple orchards. Int J Environ Sci Techn 3:333–339 Dagatti CV, Becerra VC (2015) Ajuste de modelo fenológico para predecir el comportamiento de Lobesia botrana (Lepidoptera: Tortricidae) en un viñedo de Mendoza, Argentina. Rev Soc Entomol Argent 74(3–4):117–122 De Escudero IR, Estela A, Escriche B, Caballero P (2007) Potential of the Bacillus thuringiensis toxin reservoir for the control of Lobesia botrana (Lepidoptera: Tortricidae), a major pest of grape plants. Appl Environ Microb 73(1):337–340 FAO (2009) How to feed the world in. Food and Agriculture Organization, Rome, p 2050 Fermaud M, Le-Menn R (1992) Transmission of Botrytis cinerea to grapes by grape berry moth larvae. Phytopathol 12(4):1393–1398 Foley JA, Defries R, Asner GP, Barford C, Bonan G, Carpenter SR (2005) Global consequences of land use. Science 309(5734):570–574 Hanem FK (2012) Ecosmart biorational insecticides. In: Farzana P (ed) Insecticides: alternative insect control strategies. Tech Publishing. ISBN 978–953–307-780-2, Rijekap.17, pp 34–35 Heit G, Sione W, Aceñolaza P, Zamboni L, Blanco P, Horak P, Cortese P (2013) Modelo de distribución potencial de Lobesia botrana (Lepidoptera: Tortricidae). Una herramienta de planificación para su detección temprana a nivel regional. GeoFocus 13(2):179–194 Herrera María E, Dagatti CV, Becerra VC (2016) Método práctico de cría masiva de Lobesia botrana Den. & Schiff. (Lepidoptera: Tortricidae) en condiciones de laboratorio. Rev Soc Entomol Argent 75(3–4):160–164 Hwi-Geon Y, Dong-Jun K, Won-Seok G, Tae-Young S, Soo-Dong W (2017) Entomopathogenic fungi as dual control agents against both the pest Myzus persicae and phytopathogen Botrytis cinerea. Mycobiol 45(3):192–198 Infostat (2017) Statistical software, professional version. 
Universidad Nacional de Córdoba, Córdoba http://www.infostat.com.ar/ Ioriatti C, Anfora G, Tasin M, De Cristofaro A, Witzgall P, Lucchi A (2011) Chemical ecology and management of Lobesia botrana (Lepidoptera: Tortricidae). J Econ Entomol 104(4):1125–1137 Jiang C, Shi J, Liu Y, Zhu C (2014) Inhibition of Aspergillus carbonarius and fungal contamination in table grapes using Bacillus subtilis. Food Control 35:41–48 Kalm E, Kalyoncu F (2008) Mycelial growth rate of some morels (Morchella spp.) in four different microbiological media. Am Eurasian J Agric Environ Sci 3(6):861–864 Lima EA, Godoy WA, Ferreira CP (2012) Integrated pest management and spatial structure. In: Farzana P (ed) Insecticides: alternative insect control strategies. In Tech Publishing. ISBN 978–953–307-780-2, Rijeka, p 4 Liu H, Skinner M, Brownbridge M, Parker BL (2003) Characterization of Beauveria bassiana and Metarhizium anisopliae isolates for management of tarnished plant bug, Lygus lineolaris (Hemiptera: Miridae). J Invertebr Pathol 82:139–147 Martín-Vertedor D, Ferrero-García JJ, Torres-Vila LM (2010) Global warming affects phenology and voltinism of Lobesia botrana in Spain. Agr Forest Entomol 12:169–176 Meyling NV (2007) Methods for isolation of entomopathogenic fungi from the soil environment. Laboratory manual. Archived at http://orgprints.org/11200 Molina G, Zaldúa S, González G, Sanfuentes E (2006) Selección de hongos antagonistas para el control biológico de Botrytis cinerea en viveros forestales en Chile. Bosque 27(2):126–134 Mondy N, Corio-Costet MF (2000) The response of the grape berry moth (Lobesia botrana) to a dietary phytopathogenic fungus (Botrytis cinerea): the significance of fungus sterols. J Insect Physiol 46(12):1557–1564 Muñoz MA, Nally MC, Pesce VM, Radicetti DS, Godoy S, Toro ME, Vazquez F (2012) Control curativo de diferentes cepas de Botrytis cinerea mediante el uso de levaduras autóctonas aisladas de ambientes vitícolas. IV Congreso Internacional de Ciencia y Tecnología de los Alimentos. https://cicytac.cba.gov.ar/wpcontent/uploads/2018/03/Libro-de-Res%C3%BAmenes-CICyTAC-2012.pdf Roditakis NE (1986) Effectiveness of Bacillus thuringiensis Berliner var. Kurstaki on the grape berry moth, Lobesia botrana Den. and Shiff. (Lepidoptera, Tortricidae) under field and laboratory conditions in Crete. Entomol Hell 4:31–35 SENASICA (2014) Palomilla europea de la vid. SENASICA (Servicio Nacional de Sanidad Inocuidad y Calidad Agroalimentaria), Laboratorio Nacional de Referencia Epidemiológica. Fitosanitaria Ficha Técnica No. 201427. ISBN: 978-607-715-129-6. Senasa (Servicio Nacional de Sanidad y Calidad Agroalimentaria). (2017) http://www.senasa.gob.ar/ Skinner M, Parker BL, Kim JS (2014) Role of entomopathogenic fungi in integrated pest management. In: Abrol DP (ed) Integrated pest management. Current concepts and ecological perspective, pp 169–191 ISBN: 978–0–12–398529-3 Vestergaard S, Butt TM, Gillespie AT, Schreiter G, Eilenberg J (1995) Pathogenicity of the hyphomycete fungi Verticillium lecanii and Metarhizium anisopliae to the western flower thrips, Frankliniella occidentalis. Biocontrol Sci Techn 5:185–192 The authors would like to thank Fabiana Gutierrez (EEA-INTA Luján de Cuyo) for her helpful assistance with the L. botrana larvae. Also, we would like to express our gratitude to María Eugenia García, Carlos Bontchef, Gabriela Olivieri, María del Valle Arturo (SENASA), Diego Molina (DPPV), and Pablo Cortese (DNPV) for their kind support in obtaining the permits of transfer for L. 
botrana larvae. The study was supported by the National Council of Scientific and Technical Research (CONICET) [grant number PIP_11220110101086], and the Argentinian Education and Sport Ministry under the "University and Cooperatives" research project [grant number Res 2017 – no. 777]. Will be shared if needed. Juan Aguilera Sammaritano and María Deymié contributed equally to this work. CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina Juan Aguilera Sammaritano, María Deymié, Claudia López-Lastra & Bernardo Lechner IBT (Instituto de Biotecnología), UNSJ, San Juan, Argentina Juan Aguilera Sammaritano, María Deymié & Fabio Vazquez INTA (Instituto Nacional de Tecnología Agropecuaria), EEA Luján de Cuyo, Mendoza, Argentina María Herrera Independent Science Advisor, York, UK Andrew G.S. Cuthbertson CEPAVE (Centro de Estudios Parasitológicos y de Vectores), UNLP, La Plata, Argentina Claudia López-Lastra Universidad de Buenos Aires, FCEN, DBBE, InMiBo (Instituto de Micología y Botánica), Buenos Aires, Argentina Bernardo Lechner Juan Aguilera Sammaritano María Deymié Fabio Vazquez ASJ and DM designed and conducted the study. VF, HM, and LB helped in review and proof reading and data analyses. CA and LLC helped in data collection and analysis. All authors read and approved the final manuscript. Correspondence to Juan Aguilera Sammaritano or Bernardo Lechner. The authors declare that they have no competing interest. Aguilera Sammaritano, J., Deymié, M., Herrera, M. et al. The entomopathogenic fungus, Metarhizium anisopliae for the European grapevine moth, Lobesia botrana Den. & Schiff. (Lepidoptera: Tortricidae) and its effect to the phytopathogenic fungus, Botrytis cinerea. Egypt J Biol Pest Control 28, 83 (2018). https://doi.org/10.1186/s41938-018-0086-4 Received: 03 July 2018 Lobesia botrana Gray rot fungus
CommonCrawl
Antimalarial and immunomodulatory potential of chalcone derivatives in experimental model of malaria Shweta Sinha1, Bikash Medhi2, B. D. Radotra3, Daniela I. Batovska4, Nadezhda Markova4, Ashish Bhalla5 & Rakesh Sehgal1 BMC Complementary Medicine and Therapies volume 22, Article number: 330 (2022) Malaria is a complex issue due to the availability of few therapies and chemical families against Plasmodium and mosquitoes. There is increasing resistance to various drugs and insecticides in Plasmodium and in the vector. Additionally, human behaviors are responsible for promoting resistance as well as increasing the risk of exposure to infections. Chalcones and their derivatives have been widely explored for their antimalarial effects. In this context, new derivatives of chalcones have been evaluated for their antimalarial efficacy. BALB/c mice were infected with P. berghei NK-65. The efficacy of the three most potent chalcone derivatives (1, 2, and 3) identified after an in vitro compound screening test was tested. The selected doses of 10 mg/kg, 20 mg/kg, and 10 mg/kg were studied by evaluating parasitemia, changes in temperature, body weights, organ weights, histopathological features, nitric oxide, cytokines, and ICAM-1 expression. Also, localization of parasites inside the two vital tissues involved during malaria infections was done using a transmission electron microscope. All three chalcone derivative treated groups showed significant (p < 0.001) reductions in parasitemia levels on the fifth and eighth days post-infection compared to the infected control. These derivatives were found to modulate the immune response in a P. berghei infected malaria mouse model, with a significant reduction in IL-12 levels. The present study indicates the potential inhibitory and immunomodulatory actions of chalcones against the rodent malarial parasite P. berghei. Malaria is a worldwide infectious illness that continues to be a major cause of morbidity and mortality in developing countries. Plasmodium falciparum is the most common Plasmodium species that causes deadly malaria [1]. P. falciparum infection leads to a significant number of blood-stage parasites and is responsible for modifying the surface of the infected red blood cell (RBC) by creating an adhesive phenotype, e.g., "sticky cell," causing RBC sequestration inside small and medium-sized vessels. Sequestration allows the parasite to avoid splenic clearance and leads to host endothelial cell damage and microvascular blockage [2, 3]. The host's immune system releases a number of proinflammatory molecules during the blood infection stage in response to the parasite's presence, including IL-1, IL-6, IL-8, IL-12 (p70), IFN-γ, and TNF, all of which play a key role in limiting the parasite's growth and in its removal [4]. According to the WHO report of 2021, malaria cases increased from 227 million to 241 million in 2020 [5]. The global attempt to eradicate malaria began in the 1950s, but it failed due to mosquito resistance to the insecticides employed, malaria parasite resistance to the drugs used in treatment, and administrative challenges [6]. Efforts to develop an efficient antimalarial vaccine, as well as clinical trials, are underway. Furthermore, in Southeast Asia, P. falciparum resistance to artemisinin derivatives, piperaquine, and mefloquine indicates that novel antimalarials are urgently needed. The process of identifying new antimalarials, dose-finding, and evaluation has also evolved over the last 15 years [7].
Most of the agents that are presently under clinical development are blood schizonticides for the treatment of uncomplicated falciparum malaria, under evaluation either singly or as part of two-drug combinations [8]; they act on the asexual forms in the erythrocytes, interrupt clinical attacks, and are also easier to manipulate in the laboratory. Nonetheless, malaria mouse models are a simple way to evaluate the in vivo effects of potentially beneficial antimalarial drugs and are commonly used in antimalarial compound screening [9]. Furthermore, natural products with a wide range of chemical structures, such as alkaloids, chalcones, steroids, terpenes, quinones, flavonoids, coumarins, naphthopyrones, xanthones, polyketides, phenols, peptides, lignans, chromenes, and others, have been extensively investigated as antimalarial drugs [10]. Chalcones (1,3-diaryl-2-propen-1-ones) are precursors to flavonoids and isoflavonoids, which can be found in many edible plants. Chalcone derivatives have been reported to have distinct pharmacological activities, such as anticancer, antimicrobial, anti-HIV, antimalarial, and antinociceptive activities [11,12,13,14,15]. Chalcones comprise a vast number of bioactive molecules with a wide range of molecular targets. Even slight structural alterations in chalcones can cause them to target different biological functions. Furthermore, these chalcones have been shown to inhibit tumour cell invasion and metastasis in vitro by targeting one or more molecules such as NF-kB, TNF, VEGF, ICAM-1, VCAM-1, bcl-2, and MMP [16,17,18], and it has been reported that chalcone derivatives inhibit secretory phospholipase A2, COX, lipoxygenases, proinflammatory cytokine production, neutrophil chemotaxis, phagocytosis, and production of reactive oxygen species (ROS) [19, 20]. Additionally, a number of in vitro studies [21,22,23,24,25,26] have previously been carried out on both chloroquine-sensitive and chloroquine-resistant strains, showing that chalcones have immense antimalarial potential. However, only a few studies have shown antimalarial activity of chalcones in both in vitro and in vivo malaria models. Chen et al. [27] described 2,4-Dimethoxy-4'-Butoxychalcone as a new antimalarial drug with excellent antimalarial activity in both in vitro and in vivo malaria models and no toxicity. It inhibited [3H]hypoxanthine uptake in chloroquine-susceptible (IC50 of 3D7 was 8.9 mM) and chloroquine-resistant (IC50 of Dd2 was 14.8 mM) P. falciparum strains in a concentration-dependent manner. This compound strongly suppressed the parasitemia when given orally and intraperitoneally at a dose of 50 mg/kg/day and subcutaneously at a dose of 20 mg/kg/day for 5 continuous days, and protected P. berghei K173 infected mice from deadly illness. In another study, 1,1-Bis-[(3′,4′-N-(urenylphenyl)-3-(3″,4″,5″-trimethoxyphenyl)]-2-propen-1-one, identified as the most active compound by in vitro tests, was tested in mice infected with P. berghei (ANKA), a chloroquine-susceptible strain of murine malaria. This compound was able to decrease the parasitemia and delay the progression of malaria but did not eradicate the infection [28]. With these rationales, the present study primarily aimed to assess the antimalarial potential of these chalcones in a malaria mouse model and was further extended to determine whether they can modulate the immune response.
Experimental animals The present study was carried out at the Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh and reported in accordance with the ARRIVE guidelines. The study was conducted after approval from the Institutional Animal Ethics Committee Ref. No. 69/IAEC/418 as per the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA) guidelines and the Institute Bio-Safety Committee Ref. No. 04/IBC/2013. Inbred BALB/c mice of either sex, 6–8 weeks old and weighing between 20–32 g, and Swiss mice, 4–6 weeks old and weighing between 20–28 g, were procured from the Advanced Facility for Small Animal Research, PGIMER, Chandigarh. Until the end of the experiments, the animals were kept in polypropylene cages under conventional laboratory settings, including a constant temperature of 25 °C and 12 h light/dark cycles. Animals were given free access to a mouse chow diet and water. To achieve meaningful statistical results, we used a minimally sufficient number of animals in all cases. All procedures conducted on the animals were in accordance with the rules and regulations as set out by the CPCSEA guidelines. After the experimental procedures were over, each animal was sacrificed by giving anaesthesia followed by cervical dislocation. This euthanasia procedure was done according to CPCSEA guidelines. Drugs and chemicals The chalcones were synthesised at the Institute of Organic Chemistry with Centre of Phytochemistry, Bulgarian Academy of Sciences, Sofia, Bulgaria, as described in a previous study [29]. The three chalcone derivatives, namely (E)-1-(2,5-Dimethoxyphenyl)-3-(4-methoxyphenyl)prop-2-en-1-one (1); (E)-1-(3,4,5-Trimethoxyphenyl)-3-(4-methoxyphenyl)prop-2-en-1-one (2); and (E)-1-(3,4,5-Trimethoxyphenyl)-3-(3,4-dimethoxyphenyl)prop-2-en-1-one (3), were selected on the basis of the potent antimalarial effect shown formerly by our group [29, 30]. Apart from these derivatives, chloroquine phosphate and Griess reagent were purchased from Sigma-Aldrich, USA. The ICAM-1 antibody (Anti-ICAM1 antibody [YN1/1.7.4]) was purchased from Abcam, USA, and the BD CBA Mouse Soluble Protein Flex Set System for IL-1, IL-6, TNF-alpha, IFN-γ, IL-10 and IL-12p70 was purchased from BD Biosciences, USA. All other chemicals and reagents used in this study were of analytical grade. Parasites and experimental model validation Plasmodium berghei NK-65 was procured from Punjab University, Chandigarh and was maintained in Swiss albino mice by serial passage via intraperitoneal injections of 0.2 mL of infected blood suspension containing 1 × 10⁶ parasitized red blood cells (pRBC) every 10 days. Furthermore, this strain was used in model validation. Inbred BALB/c, 6–8-week-old mice were inoculated by intraperitoneal injection of 0.2 mL of infected blood suspension containing 1 × 10⁷ P. berghei NK-65 (chloroquine-sensitive strain) infected erythrocytes. Each mouse was checked for parasitemia, physical signs, and mortality. The parasitemia in all the infected mice was evaluated using a thin smear of peripheral blood taken from the tail followed by Giemsa staining (10% solution in phosphate buffer, pH 7.2) and visualised under a light microscope at 1000X magnification [31]. The final confirmation of model validation was done through histopathological study of six major organs, i.e., liver, spleen, heart, lungs, brain, and kidneys, which were collected on day 8 post-infection from each mouse (Figure S1).
Dose selection studies Doses of chalcone derivatives 1 (10 mg/kg), 2 (20 mg/kg), and 3 (10 mg/kg) were obtained after extrapolation of the in vitro data published in our previous study [29], and the routes of administration of these derivatives (derivative 1 through the intraperitoneal route, and derivatives 2 and 3 through the oral route) were chosen from the pharmacokinetic study [32]. A total of thirty-six BALB/c mice were recruited for the main experiment. These mice were randomly divided into six groups (Groups 1–6). Each group consisted of six mice (Fig. 1). Experimental design of the study Group 1: (Non-infected): Injected intraperitoneally with blank media (PBS). Group 2: (Infected): Inoculated by intraperitoneal injection with 1 × 10⁷ P. berghei NK-65 infected erythrocytes. Group 3: Infected mice treated with CQ. Group 4: Infected mice treated with Chalcone derivative 1. Group 5: Infected mice treated with Chalcone derivative 2 and, Group 6: Infected mice treated with Chalcone derivative 3. Treatment procedure For the intraperitoneal/oral administration, a suspension formulation of the screened chalcones was prepared by triturating the weighed quantity in a dry mortar with 0.5% carboxymethyl cellulose (CMC), followed by drop-wise addition of water and proper grinding; chloroquine (CQ) was dissolved in PBS (pH 7.2). For all drug/derivative treated groups, the first administration of the drug/derivatives was started on the 3rd day post-infection and continued till the 7th day, following the standard Rane's test procedure [33]. Thin blood smears were then made from the tail blood of each mouse every day, and parasitemia in mice was assessed from Giemsa stained smears. Blood parasitemia Parasitemia was monitored by light microscope examination under (1000X) magnification using Giemsa-stained blood smears on the 3rd, 5th, and 8th days after inoculation. The percentages of parasitemia were calculated by counting the number of parasite-infected erythrocytes per 2000 erythrocytes according to the equation below [34]. $$ \mathrm{Percent\ Parasitemia}=\frac{\mathrm{No.\ of\ infected\ RBCs}}{\mathrm{Total\ no.\ of\ RBCs\ counted}}\times 100 $$ Temperature and body weight The rectal temperature and body weight of each mouse in all the groups were measured before infection (day 0) and on the 3rd, 5th, 7th, and 8th days post-infection [34]. Organ weight The wet weight of two organs, i.e., the liver and spleen of each mouse in all the groups, was measured after sacrifice post-treatment. Serum nitric oxide estimation At the end of the study, blood samples from each mouse were withdrawn from the tail vein. Serum was separated and the nitric oxide levels were measured in serum samples of mice. Nitrite was estimated using the Griess reagent and serves as an indicator of nitric oxide production. Briefly, 100 µL of Griess reagent (1:1 solution of 1% sulphanilamide in 5% phosphoric acid and 0.1% naphthylethylenediamine dihydrochloride in water) was added to 100 µL of serum, and the absorbance at 546 nm was measured after 15 min [35]. The nitrite concentration was calculated using a standard curve for sodium nitrite. Nitrite levels were expressed as µg/mL.
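As an illustration of the two simple computations described above, the following minimal Python sketch shows the percent-parasitemia formula and the conversion of Griess absorbance readings into a nitrite concentration via a sodium nitrite standard curve. This is not the authors' code, and all numerical values in the example are hypothetical.

```python
# Minimal sketch (illustrative only, not the authors' code): percent parasitemia from
# smear counts, and nitrite estimation from A546 using a linear sodium nitrite
# standard curve. Example numbers are hypothetical.
import numpy as np

def percent_parasitemia(infected_rbc: int, total_rbc_counted: int) -> float:
    """Percent parasitemia = (no. infected RBCs / total no. RBCs counted) x 100."""
    return infected_rbc / total_rbc_counted * 100.0

def fit_standard_curve(standard_conc_ug_ml, absorbance):
    """Least-squares line through the sodium nitrite standards (concentration vs A546)."""
    slope, intercept = np.polyfit(standard_conc_ug_ml, absorbance, 1)
    return slope, intercept

def nitrite_from_absorbance(a546: float, slope: float, intercept: float) -> float:
    """Interpolate a sample's nitrite level (ug/mL) from its absorbance on the fitted line."""
    return (a546 - intercept) / slope

if __name__ == "__main__":
    # Hypothetical Giemsa-smear count: 180 parasitized RBCs out of 2000 counted.
    print(percent_parasitemia(180, 2000))                 # 9.0 (% parasitemia)
    # Hypothetical sodium nitrite standards (ug/mL) and their A546 readings.
    conc = [0.0, 1.0, 2.0, 4.0, 8.0]
    a546 = [0.02, 0.10, 0.19, 0.37, 0.74]
    m, b = fit_standard_curve(conc, a546)
    print(round(nitrite_from_absorbance(0.28, m, b), 2))  # serum nitrite estimate (ug/mL)
```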
Cytokine estimation Whole blood was collected in sterile tubes at the post-treatment stage, allowed to coagulate for 2–3 h at 40 °C, and then subjected to centrifugation. The sera thus obtained were stored at −80 °C until cytokine measurement was performed using the BD CBA Mouse Soluble Protein Flex Set System for IL-1, IL-6, TNF-α, IFN-γ, IL-10, and IL-12p70 following the manufacturer's recommendations. Liver and spleen from all experimental groups were removed aseptically and fixed in toto for 48 h in a neutral buffered 10% formalin solution. The organs were cut into serial cross sections and fixed in formalin for another 12 h. Finally, tissue sections were embedded in paraffin and cut into 5 μm thick sections, which were stained with haematoxylin and eosin and examined for inflammation and parasites [36]. Tissue sections of the liver and spleen were immunostained as described previously [37]. The Avidin–Biotin–peroxidase complex (ABC) technique was followed using primary antibodies against the ICAM-1 antigen (product name: Anti-ICAM1 antibody [YN1/1.7.4]), a rat monoclonal antibody to ICAM-1 with species reactivity to mouse. Briefly, 3 µm sections were incubated overnight at 37 °C for 2 h with 1:200 diluted rat monoclonal antibodies directed against murine ICAM-1. After washing, sections were incubated for 1 h at room temperature with a ready-to-use biotin-conjugated secondary antibody (UltraTek HRP (Anti-Polyvalent), USA). Peroxidase activity was developed in 0.5% 3,3'-diaminobenzidine hydrochloride until the desired stain intensity was achieved. Sections were washed in de-ionised H2O for 5 min, and slides were dried and mounted with DPX. Transmission electron microscopy Ultrastructural changes after the treatment were visualised by electron microscopy as described previously [38]. Specimens of liver and spleen of approximately 0.5 mm³ were fixed in 3% glutaraldehyde–0.1 M sodium phosphate buffer (pH 7.2) for 2 h at 4 °C. All samples were rinsed three times and kept overnight at 4 °C in a 0.2 M sucrose–0.1 M phosphate buffer (pH 7.2). The samples were then post-fixed in 2% OsO4 in 0.1 M phosphate buffer, washed twice in 0.1 M phosphate buffer and twice in distilled water, dehydrated in a graded alcohol series and then in propylene oxide, and embedded in Epon. Using a Reichert-OmU-2 ultramicrotome and a glass or diamond knife, thin sections were cut and deposited on Formvar carbon-coated grids. The sections were poststained with 6% aqueous uranyl magnesium acetate and Reynold's lead citrate. The sections were then examined with a Philips 200 or 400 transmission electron microscope at 60 or 80 kV, according to the thickness of the section and the required magnification. Data are shown as mean ± SEM and were analysed using one-way analysis of variance (ANOVA), followed by the Bonferroni multiple comparison test. The analysis was performed using SPSS Version 16.0 software, and p < 0.05 was taken as the level of significance. For evaluating ICAM-1 expression, the Chi-square, contingency coefficient, and correlation tests were applied. Using the method described by Ryley and Peters (1970) [39], the schizonticidal activity of the screened chalcone derivatives on established infection was evaluated. Thirty BALB/c mice were infected with P. berghei CQ-sensitive NK-65 parasitized RBCs by intraperitoneal injection on the first day (D0). Seventy-two hours later (D3), the mice were randomly divided into five groups of six mice each, and one additional group of six mice was left uninfected.
Three groups of mice were treated with chalcone derivative 1 (10 mg/kg, intraperitoneally), derivative 2 (20 mg/kg, orally), and derivative 3 (10 mg/kg, orally), respectively. The negative control group was treated with PBS, while the positive control group was treated with CQ (10 mg/kg). Each chalcone derivative and the standard drug were administered once daily for five days, i.e., from day 3 to day 7. Giemsa stained thin smears were prepared from tail blood samples collected on the 3rd, 5th, and 8th days post-infection to monitor parasitemia levels. The body weight and rectal temperature were recorded on the 3rd, 5th, 7th and 8th days post-infection. Blood parasitemia level, rectal temperature, body weights and organ weights In the established infection, a daily increase in the parasitemia level of the infected control group was observed and, conversely, a daily reduction in the percentage parasitemia of the CQ-treated group. All three chalcone derivative treated groups, as well as the CQ-treated group, showed a significant (p < 0.001) reduction in parasitemia levels on the 5th and 8th days post-infection compared to the infected control. The percentage parasitemia of the derivative 1, 2, 3, and CQ-treated groups was 10.57 ± 2.12, 13.68 ± 1.98, 16.31 ± 3.88, and 9.04 ± 0.55 on the 8th day post-infection, as compared to the infected control (39.9 ± 1.8) (Fig. 2A). Effect of Chalcone derivatives 1 (10 mg/kg), 2 (20 mg/kg) and 3 (10 mg/kg) using Chloroquine diphosphate (10 mg/kg) as a standard chloroquine sensitive antimalarial drug for evaluating parasitemia (%) (parasitized RBC count) (A), on rectal temperature (°F) (B), on body weights (g) (C) and, organ weights (g) (D) in P. berghei NK-65 infected mouse model. Data are represented as mean ± SEM, n = 6 mice. *p < 0.05, **p < 0.01, ***p < 0.001 (*Group represents INF-C, CQ-T, Chalcone derivatives 1, 2 and 3 compared to NON-INF-C Control group) and #p < 0.05, # # p < 0.01, # # # p < 0.001 (#Group represents CQ-T, chalcone derivatives 1, 2, and 3 compared to infected Control group); NS: non-significant. Abbreviation Used: NON-INF-C: Non-infected control, INF-C: Infected Control, CQ-T: Chloroquine Treated The rectal temperature of the infected control (p < 0.01) and of the CQ and derivative 2 treated groups (p < 0.05) significantly increased on the 3rd day of infection, and a significant (p < 0.01) increase in temperature was also shown by the derivative 1 and derivative 2 treated groups on the 7th day post-infection compared to the non-infected control (Fig. 2B). The non-infected control and CQ-treated groups progressively gained weight from day 3 (23.06 ± 1.05 and 24.8 ± 1.16) till day 8 (24.1 ± 0.87 and 25.36 ± 0.93), whereas a gradual decrease in body weight was observed in the infected control and the derivative 1, 2, and 3 treated groups (Fig. 2C). Also, there was no significant weight gain or weight loss when comparing the treatment groups to the non-infected control group and the infected control group. The wet weights of two organs, the liver and spleen, showed a gradual increase towards the late stages of the infection as compared to the controls. The weight of the liver increased significantly (p < 0.001 and p < 0.01) in the infected control (1.38 ± 0.041) and in the derivative 1 (1.35 ± 0.03), 2 (1.34 ± 0.12), and 3 (1.33 ± 0.05) treated groups as compared to the non-infected control (0.97 ± 0.02), except for the CQ (1.2 ± 0.3) treated group.
Similarly, as compared to the non-infected control group, there was a significant (p < 0.001) increase in the weight of the spleen in the infected control and the derivative 1, 2, and 3 treated groups. However, a significant (p < 0.01) reduction in the weight of the spleen was observed in all treated groups as compared to the infected control (Fig. 2D). Nitric Oxide (NO) level Using the Griess reagent, NO production was measured in serum samples from mice sacrificed on the 8th post-infection day. Figure 3 shows that there was no difference among the non-infected control, infected control, and treated groups. Effect of Chalcone derivatives 1 (10 mg/kg), 2 (20 mg/kg) and 3 (10 mg/kg) using Chloroquine diphosphate (10 mg/kg) as a standard chloroquine sensitive antimalarial drug on nitric oxide level (μg/mL) in P. berghei NK-65 infected mouse model. Data are represented as mean ± SEM, n = 6 mice. NS: non-significant. Abbreviation Used: NON-INF-C: Non-infected control, INF-C: Infected Control, CQ-T: Chloroquine Treated Cytokine levels BD CBA Mouse Soluble Protein Flex Set (IL-6, IL-10, IL-12p70, IL-1β, TNF, IFN-γ) analysis revealed a significant increase in the serum levels of the cytokines IL-6 (p < 0.001), IL-10 (p < 0.05), IL-12p70 (p < 0.001), IL-1β (p < 0.01) and IFN-γ (p < 0.01) in the infected control, except for TNF, which showed a non-significant increase, as compared to the non-infected control. There was a significant (p < 0.001) reduction in IL-6 levels in all treated groups as compared to the infected control. The expression of IL-10 and TNF was found to be statistically non-significant as compared to the infected control in all treated groups. The level of IL-12p70 in the CQ-treated (0.76 ± 0.46 pg/mL), derivative 1 (0.36 ± 0.36 pg/mL), and derivative 3 (0.6 ± 0.39 pg/mL) treated groups was significantly lower (p < 0.001) when compared to the infected control (77.58 ± 15.31 pg/mL), while no expression was observed in the derivative 2 treated group. The level of IL-1β was significantly decreased in the derivative 1 (p < 0.01), derivative 2 (p < 0.05), and derivative 3 (p < 0.01) treated groups, as compared to the infected control. IFN-γ levels, on the other hand, were significantly lower only in the CQ-treated group when compared to the infected control (Fig. 4). Effect of Chalcone derivatives 1 (10 mg/kg), 2 (20 mg/kg) and 3 (10 mg/kg) using Chloroquine diphosphate (10 mg/kg) as standard chloroquine sensitive antimalarial drug on Cytokines expression (pg/mL) in P. berghei NK-65 infected mouse model. Data are represented as mean ± SEM, n = 5 mice. *p < 0.05, **p < 0.01, ***p < 0.001 (*Group represents INF-C, CQ-T, Chalcone derivatives 1, 2 and 3 compared to NON-INF-C Control group) and #p < 0.05, # # p < 0.01, # # # p < 0.001 (#Group represents CQ-T, chalcone derivatives 1, 2, and 3 compared to infected Control group); NS: non-significant. Abbreviation Used: NON-INF-C: Non-infected control, INF-C: Infected Control, CQ-T: Chloroquine Treated Histopathological sections of the liver showed dilated and congested hepatic sinusoids, dense with hypertrophied Küpffer's cells, variable mononuclear cells, and pRBC. The hemozoin pigment was widely scattered in Küpffer's cells in the infected control group. However, the accumulated pigment was comparatively less in all the treated groups as compared to the infected control. Moreover, histologically, the spleen sections showed deposition of hemozoin pigment in the pulp histiocytes and sinusoidal lining cells.
Pigment deposition was found to be prominent in the infected control as compared to all the treated groups. The CQ-treated, derivative 1, and derivative 2 treated groups showed fewer sites of hemozoin deposition than the derivative 3 treated group (Fig. 5). Photomicrographs depicting Hematoxylin & Eosin stained Liver (I) and Spleen (II) sections at 400X magnifications of mice treated with Chalcone derivatives 1 (10 mg/kg), 2 (20 mg/kg) and 3 (10 mg/kg) using Chloroquine diphosphate (10 mg/kg) as standard chloroquine sensitive antimalarial drug in P. berghei NK-65 infected mouse model for five days. A Non-infected Control, B Infected Control C Chloroquine-Treated (10 mg/kg) D Derivative 1 Treated (10 mg/kg), E Derivative 2 Treated (20 mg/kg) and F Derivative 3 Treated (10 mg/kg). Arrows in I and II B indicate deposition of hemozoin pigment. Bar scale represents 100 μm Mild ICAM-1 expression in immunostained liver sections was observed in all non-infected control mice (Fig. 6. I A (a)). Percentage ICAM-1 expression is shown (Fig. 6. I B) for each of the six studied groups (n = 3). In mild expression, hepatocytes did not show any ICAM-1 expression and it was only partially expressed in the endothelial lining of sinusoids. Marked ICAM-1 expression was observed in all infected control mice, with intense positivity on sinusoidal endothelium as well as hepatocytes (Fig. 6. I A (b)). Moderate expression was observed in the sinusoidal endothelium and hepatocytes of the CQ treated (Fig. 6. I A (c)) and derivative 1, 2, and 3 treated groups (Fig. 6. I (d, e, f)). In the case of immunostained spleen sections, mild expression of ICAM-1 was observed in all non-infected control mice (Fig. 6. II A (a)). In contrast, marked expression was observed in all infected control mice (Fig. 6. II A (b)). Moderate expression was shown by the CQ treated and derivative 1, 2, and 3 treated groups (Fig. 6. II A (c, d, e & f)). However, all the mice in the CQ treated and derivative 1 and 2 treated groups showed 100% moderate expression, whereas the derivative 3 treated group showed 66.66% moderate and 33.33% marked expression. The percentage of ICAM-1 expression in immunostained spleen sections is shown (Fig. 6. II B) for each of the six studied groups (n = 3). Photomicrographs depicting ICAM-1 stained liver sections at 400X magnifications (I A) and ICAM-1 stained spleen sections at 200X magnifications (II A) of mice treated with Chalcone derivatives 1 (10 mg/kg), 2 (20 mg/kg) and 3 (10 mg/kg) using Chloroquine diphosphate (10 mg/kg) as standard chloroquine sensitive antimalarial drug in P. berghei NK-65 infected mouse model for five days. Biotin-conjugated secondary antibody and streptavidin-conjugated horseradish peroxidase from a DAB Substrate kit (ScyTek) were applied to sections to amplify the antigen signal for subsequent 3,3-diaminobenzidine staining, which produces a permanent brown color. a) Non-infected Control, b) Infected Control c) Chloroquine-Treated (10 mg/kg) d) Chalcone Derivative 1 Treated (10 mg/kg), e) Derivative 2 Treated (20 mg/kg) and f) Derivative 3 Treated (10 mg/kg). Bar scale represents 100 μm. (I B & II B) ICAM-1 expression in liver sections of mice and ICAM-1 expression in spleen sections of mice.
Pearson chi-square, contingency coefficient, and correlation tests were applied for statistical significance (*p < 0.05). Numerous pRBC were seen adhering to macrophages and endothelial cells via their surface knobs in the liver and spleen in both the CQ-treated and the chalcone derivative-treated groups (Fig. 7. I (B, C) and II (B, C)). Electron micrographs of (I) liver sections and (II) spleen sections of the P. berghei infected mouse model depicting the effect of the Chalcone derivative (10 mg/kg) using Chloroquine diphosphate (10 mg/kg) as a standard antimalarial drug in the P. berghei NK-65 infected mouse model for five days. A Infected Control, B CQ-Treated (10 mg/kg) and C Chalcone Derivative-Treated (10 mg/kg). Arrows indicate the malaria parasite. The in vivo model was employed for the present study since it demonstrates considerable prodrug effects and also possible engagement of the immune system in the suppression of infection [40]. P. berghei-infected mice are thought to be the best model for comparison with human malaria [41]. Furthermore, using rodent models of malaria, a variety of conventional antimalarial agents such as CQ, halofantrine, mefloquine, and, more recently, artemisinin derivatives have been screened and identified [42]. Rane's test, which assesses the curative potential of a compound on established infections, is frequently employed for antimalarial drug screening [43]. In this protocol, evaluation of the percentage inhibition of parasitemia is the most definitive parameter for preliminary screening. In the present study, percentage parasitemia was significantly reduced in all chalcone derivative treated groups, at doses of 10 mg/kg for derivative 1, 20 mg/kg for derivative 2, and 10 mg/kg for derivative 3, as well as in the CQ-treated group dosed at 10 mg/kg, as compared to the untreated infected control group. These results were comparable to those of licochalcone A, a chalcone more active against P. falciparum than previously synthesised ones. Licochalcone A, injected intraperitoneally at 5, 10, and 15 mg/kg twice per day for 3 days, strikingly reduced the parasitemia level in mice infected with P. yoelii YM and was also able to reduce the parasitemia level in treated animals after a well-established infection [44]. However, both the presently screened chalcone derivatives and licochalcone A were unable to clear the parasites completely but could delay the progression of malaria. Presumably, an extended period of treatment may be required for total clearance of the parasites from animals with well-established infections. Apart from this, other chalcone derivatives did not show activity against P. yoelii (CQ sensitive) under in vivo conditions [45, 46]. Similarly, 2,4-dimethoxy-4'-butoxychalcone protects mice from lethal infections with P. berghei and P. yoelii, and controls P. berghei infection in rats when administered either orally, intraperitoneally, or subcutaneously once per day for 5 days starting 2 h after infection [27]. The present study is one of the few to date to reveal the curative potential of chalcones in malaria mouse models, as most recent studies evaluated only the prophylactic properties of chalcones, in which chalcones significantly inhibited parasitemia on day four and increased survival times [28, 47, 48]. Body weight loss and body temperature deterioration are typical attributes of malaria-infected mice [49].
So, ideal antimalarial agents, whether obtained from nature, semi-synthetic, or chemically synthesised, are expected to prevent the body weight loss that accompanies rising parasitemia in infected mice. In the present work, only a non-significant decrease in weight and temperature was observed alongside the decrease in parasitemia level. In contrast to humans, increased parasitaemia levels in rodent models typically result in lower metabolic rates and, as a result, lower body temperatures [50], which might result in death [51]. However, in the present study, there was a non-significant decrease in temperature with decreasing parasitemia on the 5th day. Splenomegaly and hepatomegaly are among the common features of malaria infection [52, 53]. Both organs are congested and swollen from the accumulation of the malarial pigment, hemozoin, which leads to discoloration. Also, the existence of numerous deformed red cells triggers the spleen to produce countless phagocytes. Thus, marked hyperplasia is manifested as splenomegaly and, infrequently, hepatomegaly [48]. The present study illustrated the accumulation of the malarial pigment, hemozoin, in all treated and infected control groups. Moreover, treatment with CQ and chalcone derivatives 1, 2, and 3 significantly decreased the weight of the spleen, which suggests that the therapeutic potential of the chalcones is comparable to existing therapy. Chalcone derivatives are well known for inhibiting NO synthesis and iNOS and cyclooxygenase-2 protein expression in lipopolysaccharide-stimulated cells. Structure–activity analysis has shown that chalcones bearing substituents that decrease the electronic density of the B ring, such as chlorine atoms or nitro groups, show much better biological activity and selectivity in the inhibition of nitrite production, with position 2 in ring B appearing to be the most important [54]. However, chalcones having substituents that increase the electronic density of the B ring, such as butoxy, methoxy, or dimethylamine groups, had no effect on the inhibition of nitrite production [55, 56]. In the present study, there was no significant change in nitric oxide generation when the non-infected and infected control groups were compared to the chalcone-treated groups. This supports the previous findings that the nature and position of the substituents on ring B play a role [54]. Further, malaria infections are complex syndromes involving a variety of inflammatory responses that can enhance cell-to-cell interaction (cytoadherence), cell stimulation by malaria-derived antigens and toxins, and host-derived factors such as cytokines. Moderate levels of cytokines are good for the host as they cause fever [57]. The present study revealed significant up-regulation of the cytokines IL-1β, IL-6, IL-10, and IFN-γ and a non-significant elevation of TNF in infected controls on the 9th day post-infection, showing an association with disease severity that supports previous findings. Furthermore, when compared to the infected control group, the chalcone-treated groups (derivatives 1, 2, and 3) and CQ showed significant reductions in IL-6 and IL-1β levels, while levels of other cytokines such as IL-12p70 and IFN-γ were comparatively low in the CQ-treated and derivative 1-treated groups, indicating protection against disease progression and modulation of the immune response in the P. berghei infected malaria mouse model.
Also, a significant reduction in IL-12 levels in all treated groups may be associated with a delay in disease progression, as IL-12 has been found to be increased in uncomplicated malaria [58, 59]. Elevated levels of TNF have been reported during P. falciparum infection [60, 61]; here, TNF was elevated in all groups compared to the non-infected control, which suggests a protective effect of TNF against the parasite [62]. These results suggest that the expression of these cytokines is not an immediate effect of parasitemia level, and also that the differences in cytokine concentrations among groups may not be due to a direct effect of differences in parasite densities. Nevertheless, the fact that the association between cytokines and parasite densities differed among groups might reflect diverse mechanisms of cytokine regulation depending on various other factors [63]. Inflammatory mediators as well as parasite sequestration may be responsible for disease severity. It has been suggested that cytokines up-regulate the expression of adhesion molecules such as ICAM-1 that are involved in the binding of pRBCs to the vascular endothelium [64]. In the present study, intense ICAM-1 expression was observed in all infected mice, both in hepatocytes, Kupffer cells, and sinusoidal endothelium in the liver sections and in endothelial venules and macrophages of the spleen, whereas moderate expression was seen in all CQ- and chalcone-treated groups, indicating a decrease in disease severity upon treatment. Previously, it has been shown that these ICAMs and VCAMs play a major role in the trafficking of leukocytes through normal and inflamed tissue [65]. Moreover, our present study is comparable to previously reported studies showing that cytokine-induced activation of endothelial cells by TNF-α, IL-1β, or LPS increases the level of ICAM-1 expression on cultured endothelial cells [66, 67]; looking at immune regulation here, we observed a similar pattern, as the increased expression of TNF, IL-1β, and other cytokines in the infected control may augment ICAM-1 expression in the same group. Persistent immunoreactivity in all treated groups suggests that a longer treatment duration is needed for complete parasite clearance. Additionally, pathological changes in these infected murine models before and after treatment were observed microscopically in H&E stained slides to give a clear picture of disease manifestation. In malaria, the main organs infected are the spleen and liver, which mark the pathophysiology of the disease. Here, we observed an enlarged, oedematous, and brown, grey, or black colored liver, which may result from the deposition of malaria pigment (hemozoin) in the pulp histiocytes and sinusoidal lining cells of the spleen, and mainly in Kupffer cells in the liver sections, demonstrating the role of macrophages in parasite clearance. There was also phagocytosis of pRBC and RBC by Kupffer cells, endothelial cells, and sinusoidal macrophages. Malarial parasites proliferate inside the RBCs of visceral organs, chiefly the spleen and liver, and this leads to destruction of RBCs and loss of cells. The parasite's multiplication also results in the synthesis of pigment, which is left over after the parasite digests the cytoplasm of RBCs. As a result, pigment builds up inside these organs, causing them to darken in color. Besides, P.
falciparum is the only species that is sequestered in deep vascular beds as the intraerythrocytic parasite matures to the trophozoite and schizont stages. By doing so, it escapes the clearance mechanisms of the spleen and is thus able to proceed to schizogony. Sequestration results from the cytoadherence of erythrocytes infected with trophozoites and schizonts of P. falciparum to vascular endothelium; this cytoadherence requires an interaction between specific parasite ligands and endothelial cell receptors [68]. Therefore, in the present study, parasite invasion, sequestration in the vascular bed, and intense glycogen production in both liver and spleen tissue were observed in electron micrographs, elucidating escape mechanisms adopted by the parasite to avoid clearance in the presence of drug treatment. Chalcones comprise a vast family of bioactive molecules with a wide range of molecular targets. Even slight structural alterations in chalcones can cause them to target different biological functions. Chalcones are a class of chemicals that could potentially be used to generate low-cost, synthetic antimalarial drugs in the future. The present study clearly indicates the potential inhibitory and immunomodulatory action of chalcones against the malarial parasite P. berghei (Fig. 8), reporting the effect of chalcones at the histological and ultrastructural levels in a malaria mouse model, which mimics the human disease caused by P. falciparum. In addition, ICAM-1 expression may play a role in the cytoadhesion of malaria-infected red blood cells or in the recruitment of leukocytes to endothelial activation sites [69]. The decrease in ICAM-1 expression after treatment with chalcone derivatives suggests their protective effect. The study indicates that chalcone derivatives may have some future potential as antimalarial drugs. In the future, these scaffolds and the results obtained can be used as a basis to generate more effective leads with potent antimalarial activity. However, further optimization studies are needed in order to improve the bioactivity of these compounds to achieve better curative effects and oral bioavailability. Additionally, the present study lacked a pre-clinical toxicity evaluation, which would have provided more clarity on the antimalarial efficacy of the mentioned chalcone derivatives and is an important part of any drug development process. Immunomodulatory Potential of Chalcones The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Abbreviations: P. falciparum: Plasmodium falciparum; RBC: Red blood cell; IL-1: Interleukin-1; IL-10: Interleukin-10; IL-12(p70): Interleukin-12(p70); IFN-γ: Interferon-gamma; TNF: Tumour necrosis factor; NF-kB: Nuclear factor kappa-light-chain-enhancer of activated B cells; VEGF: Vascular endothelial growth factor; ICAM-1: Intercellular adhesion molecule 1; VCAM-1: Vascular cell adhesion molecule 1; bcl-2: B-cell lymphoma 2; MMP: Matrix metalloproteinase; COX: Cyclooxygenase; ROS: Reactive oxygen species; CPCSEA: Committee for the Purpose of Control and Supervision of Experiments on Animals; pRBC: Parasitized red blood cells; PBS: Phosphate-buffered saline; CQ: Chloroquine; ABC: Avidin–Biotin–peroxidase complex; DPX: Dibutylphthalate Polystyrene Xylene; OsO4: Osmium tetroxide; ANOVA: One-way analysis of variance; SPSS: Statistical Package for the Social Sciences; CQ-T: Chloroquine-Treated; NO: Nitric oxide; iNOS: Inducible nitric oxide synthase; H&E: Haematoxylin and eosin. Zekar L, Sharman T. Plasmodium Falciparum Malaria. [Updated 2022 Aug 8]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022. Available from: https://www.ncbi.nlm.nih.gov/books/NBK555962/.
Meibalan E, Marti M. Biology of malaria transmission. Cold Spring Harb Perspect Med. 2017;7(3):a025452. Milner DA Jr. Malaria pathogenesis. Cold Spring Harb Perspect Med. 2018;8(1):a025569. Popa GL, Popa MI. Recent advances in understanding the inflammatory response in malaria: a review of the dual role of cytokines. J Immunol Res. 2021;2021:7785180. World malaria report 2021. Geneva: World Health Organization; 2021. Licence: CC BY-NC-SA 3.0 IGO. Available from: https://www.who.int/teams/global-malaria-programme/reports/world-malaria-report-2021. Talapko J, Škrlec I, Alebić T, Jukić M, Včev A. Malaria: the past and the present. Microorganisms. 2019;7(6):179. Gelb MH. Drug discovery for malaria: a very challenging and timely endeavor. Curr Opin Chem Biol. 2007;11(4):440–5. Ashley EA, Phyo AP. Drugs in development for malaria. Drugs. 2018;78(9):861–79. Flannery EL, Chatterjee AK, Winzeler EA. Antimalarial drug discovery - approaches and progress towards new medicines. Nat Rev Microbiol. 2013;11(12):849–62. Tajuddeen N, Van Heerden FR. Antiplasmodial natural products: an update. Malar J. 2019;18(1):404. Batovska DI, Todorova IT. Trends in utilization of the pharmacological potential of chalcones. Curr Clin Pharmacol. 2010;5(1):1–29. Sinha S, Medhi B, Sehgal R. Chalcones as an emerging lead molecule for antimalarial therapy: a review. J Mod Med Chem. 2013;1:64–77. Ouyang Y, Li J, Chen X, Fu X, Sun S, Wu Q. Chalcone derivatives: role in anticancer therapy. Biomolecules. 2021;11(6):894. Cole AL, Hossain S, Cole AM, Phanstiel O 4th. Synthesis and bioevaluation of substituted chalcones, coumaranones and other flavonoids as anti-HIV agents. Bioorg Med Chem. 2016;24(12):2768–76. Mohamad AS, Akhtar MN, Zakaria ZA, Perimal EK, Khalid S, Mohd PA, Khalid MH, Israf DA, Lajis NH, Sulaiman MR. Antinociceptive activity of a synthetic chalcone, flavokawin B on chemical and thermal models of nociception in mice. Eur J Pharmacol. 2010;647(1–3):103–9. Yadav VR, Prasad S, Sung B, Aggarwal BB. The role of chalcones in suppression of NF-κB-mediated inflammation and cancer. Int Immunopharmacol. 2011;11(3):295–309. Jandial DD, Blair CA, Zhang S, Krill LS, Zhang YB, Zi X. Molecular targeted approaches to cancer therapy and prevention using chalcones. Curr Cancer Drug Targets. 2014;14(2):181–200. Lorusso V, Marech I. Novel plant-derived target drugs: a step forward from licorice? Expert Opin Ther Targets. 2013;17(4):333–5. Jantan I, Bukhari SN, Adekoya OA, Sylte I. Studies of synthetic chalcone derivatives as potential inhibitors of secretory phospholipase A2, cyclooxygenases, lipoxygenase and pro-inflammatory cytokines. Drug Des Devel Ther. 2014;8:1405–18. Lee JS, Bukhari SN, Fauzi NM. Effects of chalcone derivatives on players of the immune system. Drug Des Devel Ther. 2015;9:4761–78. Li R, Kenyon GL, Cohen FE, Chen X, Gong B, Dominguez JN, Davidson E, Kurzban G, Miller RE, Nuzum EO, et al. In vitro antimalarial activity of chalcones and their derivatives. J Med Chem. 1995;38(26):5031–7. Liu M, Wilairat P, Go ML, Liu M. Antimalarial alkoxylated and hydroxylated chalcones [corrected]: structure-activity relationship analysis. J Med Chem. 2001;44(25):4443–52. Go ML, Liu M, Wilairat P, Rosenthal PJ, Saliba KJ, Kirk K. Antiplasmodial chalcones inhibit sorbitol-induced hemolysis of Plasmodium falciparum-infected erythrocytes. Antimicrob Agents Chemother. 2004;48(9):3241–5. Sharma N, Mohanakrishnan D, Shard A, Sharma A, Saima, Sinha AK, Sahal D. 
Stilbene-chalcone hybrids: design, synthesis, and evaluation as a new class of antimalarial scaffolds that trigger cell death through stage specific apoptosis. J Med Chem. 2012;55(1):297–311. Smit FJ. Synthesis, in vitro antimalarial activity and cytotoxicity of novel 4-aminoquinolinyl-chalcone amides. Bioorg Med Chem. 2014;22(3):1128–38. Singh A, Rani A, Gut J, Rosenthal PJ, Kumar V. Piperazine-linked 4-aminoquinoline-chalcone/ferrocenyl-chalcone conjugates: Synthesis and antiplasmodial evaluation. Chem Biol Drug Des. 2017;90(4):590–5. Chen M, Brøgger Christensen S, Zhai L, Rasmussen MH, Theander TG, Frøkjaer S, et al. The novel oxygenated chalcone, 2,4-dimethoxy-4'-butoxychalcone, exhibits potent activity against human malaria parasite Plasmodium falciparum in vitro and rodent parasites Plasmodium berghei and Plasmodium yoelii in vivo. J Infect Dis. 1997;176(5):1327–33. Domínguez JN, de Domínguez NG, Rodrigues J, Acosta ME, Caraballo N, León C. Synthesis and antimalarial activity of urenyl Bis-chalcone in vitro and in vivo. J Enzyme Inhib Med Chem. 2013;28(6):1267–73. Sinha S, Batovska DI, Medhi B, Radotra BD, Bhalla A, Markova N, et al. In vitro anti-malarial efficacy of chalcones: cytotoxicity profile, mechanism of action and their effect on erythrocytes. Malar J. 2019;18(1):421. Sinha S, Radotra BD, Medhi B, Batovska DI, Markova N, Sehgal R. Ultrastructural alterations in Plasmodium falciparum induced by chalcone derivatives. BMC Res Notes. 2020;13(1):290. Chimanuka B, Gabriëls M, Detaevernier MR, Plaizier-Vercammen JA. Preparation of beta-artemether liposomes, their HPLC-UV evaluation and relevance for clearing recrudescent parasitaemia in Plasmodium chabaudi malaria-infected mice. J Pharm Biomed Anal. 2002;28(1):13-22. d. Sinha S, Prakash A, Medhi B, Sehgal A, Batovska DI, Sehgal R. Pharmacokinetic evaluation of Chalcone derivatives with antimalarial activity in New Zealand White Rabbits. BMC Res Notes. 2021;14(1):264. Nardos A, Makonnen E. In vivo antiplasmodial activity and toxicological assessment of hydroethanolic crude extract of Ajuga remota. Malar J. 2017;16(1):25. Mekonnen LB. In vivo antimalarial activity of the crude root and fruit extracts of Croton macrostachyus (Euphorbiaceae) against Plasmodium berghei in mice. J Tradit Complement Med. 2015;5(3):168–73. Sun J, Zhang X, Broderick M, Fein H. Measurement of nitric oxide production in biological systems by using Griess reaction assay. Sensors. 2003;3(8):276–84. Pascoe S, Gatehouse D. The use of a simple haematoxylin and eosin staining procedure to demonstrate micronuclei within rodent bone marrow. Mutat Res. 1986;164(4):237–43. Rudin W, Eugstewr HP, Bordmann G, Bonato J, Muller M, Yamage M, Ryffel B. Resistance to cerebral malaria in tumor necrosis factor-alpha-deficient mice is associated with a reduction of intercellular adhesion molecule-1 up-regulation and T helper type 1 response. Am J Pathol. 1997;150(1):257–66. Wisner-Gebhart AM, Brabec RK, Gray RH. Morphometric studies of chloroquine-induced changes in hepatocytic organelles in the rat. Exp Mol Pathol. 1980;33(2):144–52. Ryley JF, Peters W. The antimalarial activity of some quinolone esters. Ann Trop Med Parasitol. 1970;84:209–22. Waako PJ, Gumede B, Smith P, Folb PI. The in vitro and in vivo antimalarial activity of Cardiospermum halicacabum L. and Momordica foetida Schumch. Et Thonn J Ethnopharmacol. 2005;99(1):137–43. Peters W. Prevention of drug resistance in rodent malaria by the use of drug mixtures. Bull World Health Organ. 1974;51:379–83. 
Madara A, Jayi JA, Salawu OA, Tijani AY. Anti-malarial activity of ethanolic leaf extract of Piliostigma thonningii Schum (Caesalpiniacea) in mice infected with Plasmodium berghei berghei. African J Biotech. 2010;9(23):3475–80. Box ED, Gingrich WD, Celaya BL. Standardization of a curative test with Plasmodium berghei in white mice. J Infect Dis. 1954;94(1):78–83. Chen M, Theander TG, Christensen SB, Hviid L, Zhai L, Kharazmi A. Licochalcone A, a new antimalarial agent, inhibits in vitro growth of the human malaria parasite Plasmodium falciparum and protects mice from P. yoelii infection. Antimicrob Agents Chemother. 1994;38(7):1470–5. Tomar V, Bhattacharjee G, Kamaluddin, Rajakumar S, Srivastava K, Puri SK. Synthesis of new chalcone derivatives containing acridinyl moiety with potential antimalarial activity. Eur J Med Chem. 2010;45(2):745–51. Gutteridge CE, Major JW, Nin DA, Curtis SM, Bhattacharjee AK, Gerena L, Nichols DA. In vitro efficacy of 2, N-bisarylated 2-ethoxyacetamides against Plasmodium falciparum. Bioorg Med Chem Lett. 2016;26(3):1048–51. DomiÃÅnguez JN, LeoÃÅn C, Rodrigues J, de Gamboa DomiÃÅnguez N, Gut J, Rosenthal PJ. Synthesis and antimalarial activity of sulfonamide chalcone derivatives. Farmaco. 2005;60(4):307–11. Pandey AK, Sharma S, Pandey M, Alam MM, Shaquiquzzaman M, Akhter M. 4, 5-Dihydrooxazole-pyrazoline hybrids: Synthesis and their evaluation as potential antimalarial agents. Eur J Med Chem. 2016;123:476–86. Langhorne J, Quin SJ, Sanni LA. Mouse models of blood-stage malaria infections: immune responses and cytokines involved in protection and pathology. Chem Immunol. 2002;80:204–28. Chinchilla M, Guerrero OM, Abarca G, Barrios M, Castro O. An in vivo model to study the anti-malaric capacity of plant extracts. Rev Biol Trop. 1998;46(1):35–9. Mengiste B, Mekonnen E, Urga K. In vivo animalarial activity of Dodonaea angustifolia seed extracts against Plasmodium berghei in mice model. MEJS. 2012;4:147–63. Farthing MJG, Rolston DDK. Infectious diseases and tropical medicine. In: Kumar PJ, Clark ML, editors. Clinical Medicine. London: Balliere Tindall; 1987. p. 75–7. Bates I, Bedu-Addo G. Chronic malaria and splenic lymphoma: clues to understanding lymphoma evolution. Leukemia. 1997;11(12):2162–7. Wu J, Li J, Cai Y, Pan Y, Ye F, Zhang Y, Zhao Y, Yang S, Li X, Liang G. Evaluation and discovery of novel synthetic chalcone derivatives as anti-inflammatory agents. J Med Chem. 2011;54(23):8110–23. Rojas J, Payá M, Dominguez JN, Luisa FM. The synthesis and effect of fluorinated chalcone derivatives on nitric oxide production. Bioorg Med Chem Lett. 2002;12(15):1951–4. Patil CB, Mahajan SK, Katti SA. Chalcone: a versatile molecule. J Pharm Sci Res. 2009;1(3):11–22. Depinay N, Franetich JF, Grüner AC, Mauduit M, Chavatte JM, Luty AJ, et al. Inhibitory effect of TNF-α on malaria pre-erythrocytic stage development: influence of host hepatocyte/parasite combinations. PLoS One. 2011;6(3):e17464. Prakash D, Fesel C, Jain R, Cazenave PA, Mishra GC, Pied S. Clusters of cytokines determine malaria severity in Plasmodium falciparum-infected patients from endemic areas of Central India. J Infect Dis. 2006;194(2):198–207. Jain V, Armah HB, Tongren JE, Ned RM, Wilson NO, Crawford S, et al. Plasma IP-10, apoptotic and angiogenic factors associated with fatal cerebral malaria in India. Malar J. 2008;7:83. Kwiatkowski D, Hill AV, Sambou I, Twumasi P, Castracane J, Manogue KR, Cerami A, Brewster DR, Greenwood BM. 
TNF concentration in fatal cerebral, non-fatal cerebral, and uncomplicated Plasmodium falciparum malaria. Lancet. 1990;336(8725):1201–4. Tchinda VH, Tadem AD, Tako EA, Tene G, Fogako J, Nyonglema P, et al. Severe malaria in Cameroonian children: correlation between plasma levels of three soluble inducible adhesion molecules and TNF-alpha. Acta Trop. 2007;102(1):20–8. Taverne J, Tavernier J, Fiers W, Playfair JH. Recombinant tumour necrosis factor inhibits malaria parasites in vivo but not in vitro. Clin Exp Immunol. 1987;67(1):1–4. Ahmed MZ, Bhardwaj N, Sharma S, Pande V, Anvikar AR. Transcriptional modulation of the host immunity mediated by cytokines and transcriptional factors in plasmodium falciparum-infected patients of North-East India. Biomolecules. 2019;9(10):600. Hansen DS. Inflammatory responses associated with the induction of cerebral malaria: lessons from experimental murine models. PLoS Pathog. 2012;8(12):e1003045. Kukielka GL, Hawkins HK, Michael L, Manning AM, Youker K, Lane C, et al. Regulation of intercellular adhesion molecule-1 (ICAM-1) in ischemic and reperfused canine myocardium. J Clin Invest. 1993;92(3):1504–16. Myers CL, Wertheimer SJ, Schembri-King J, Parks T, Wallace RW. Induction of ICAM-1 by TNF-alpha, IL-1 beta, and LPS in human endothelial cells after downregulation of PKC. Am J Physiol. 1992;263(4 Pt 1):C767–72. Henninger DD, Panés J, Eppihimer M, Russell J, Gerritsen M, Anderson DC, et al. Cytokine-induced VCAM-1 and ICAM-1 expression in different organs of the mouse. J Immunol. 1997;158(4):1825–32. Ho M, Bannister LH, Looareesuwan S, Suntharasamai P. Cytoadherence and ultrastructure of Plasmodium falciparum-infected erythrocytes from a splenectomized patient. Infect Immun. 1992;60(6):2225–8. Cserti-Gazdewich CM, Dzik WH, Erdman L, Ssewanyana I, Dhabangi A, Musoke C, Kain KC. Combined measurement of soluble and cellular ICAM-1 among children with Plasmodium falciparum malaria in Uganda. Malar J. 2010;9:233. We are thankful to ICMR, New Delhi for providing financial support in form of junior research fellowship and senior research fellowship to Shweta Sinha. Special thanks are given to Upma Bagai, Professor, Punjab University, Chandigarh for providing the P. berghei NK-65 strain and technical staff of CSIC, Post Graduate Institute of Medical Education & Research, Chandigarh, India, for allowing the use of sophisticated instruments. This study was not funded. Department of Medical Parasitology, Post Graduate Institute of Medical Education and Research, Chandigarh, 160012, India Shweta Sinha & Rakesh Sehgal Department of Pharmacology, Post Graduate Institute of Medical Education and Research, Chandigarh, India Bikash Medhi Department of Histopathology, Post Graduate Institute of Medical Education and Research, Chandigarh, India B. D. Radotra Institute of Organic Chemistry With Centre of Phytochemistry, Bulgarian Academy of Sciences, Sofia, Bulgaria Daniela I. Batovska & Nadezhda Markova Department of Internal Medicine, Post Graduate Institute of Medical Education and Research, Chandigarh, India Ashish Bhalla Shweta Sinha Daniela I. Batovska Nadezhda Markova Rakesh Sehgal RS, BM and SS designed the study. SS carried out the experiment and wrote the initial draft of the manuscript. BDR examined the Histopathological Data. All authors SS, RS, BDR, BM, DIB, NM, and AB did the final editing of manuscript and agreed to the publication of this study. Correspondence to Rakesh Sehgal. 
Ethical approval was obtained from the Institutional Animal Ethics Committee, Postgraduate Institute of Medical Education and Research, Chandigarh (Reference No. 69/IAEC/418), as per the Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA) guidelines and the Institute Bio-Safety Committee (Ref. No. 04/IBC/2013), and all experiments were carried out in compliance with the ARRIVE guidelines. 12906_2022_3777_MOESM1_ESM.jpg Sinha, S., Medhi, B., Radotra, B.D. et al. Antimalarial and immunomodulatory potential of chalcone derivatives in experimental model of malaria. BMC Complement Med Ther 22, 330 (2022). https://doi.org/10.1186/s12906-022-03777-w P. berghei ICAM-1 Chalcones Cytokine expression
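The group comparisons above are reported as mean ± SEM, with one-way analysis of variance and Pearson chi-square statistics named in the text and abbreviation list (alongside SPSS). Purely as a hedged illustration of that style of analysis — using made-up numbers, not the study's data — a minimal sketch might look like this:

```python
# Minimal sketch of the kind of group comparison reported above.
# All values are hypothetical placeholders, NOT data from the study.
import numpy as np
from scipy import stats

il6 = {
    "NON-INF-C": np.array([4.1, 3.8, 4.5, 4.0, 4.2]),      # hypothetical pg/mL
    "INF-C":     np.array([88.0, 95.5, 79.2, 102.3, 91.1]),
    "CQ-T":      np.array([20.4, 18.9, 25.1, 22.7, 19.8]),
}

# Mean ± SEM, as in "Data are represented as mean ± SEM"
for group, values in il6.items():
    sem = values.std(ddof=1) / np.sqrt(len(values))
    print(f"{group}: {values.mean():.1f} ± {sem:.1f} pg/mL (n = {len(values)})")

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(*il6.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Pearson chi-square test on a hypothetical ICAM-1 scoring table
# (rows: groups; columns: counts of mild / moderate / marked expression, n = 3 mice)
icam_counts = np.array([
    [3, 0, 0],   # non-infected control
    [0, 0, 3],   # infected control
    [0, 3, 0],   # a treated group
])
chi2, p, dof, _ = stats.chi2_contingency(icam_counts)
print(f"Pearson chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```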
(Submitted on 2 Jun 2022 (this version), latest version 27 Jul 2022 (v2)) Abstract: In this work, we study graph problems with monotone-sum objectives. We propose a general two-fold greedy algorithm that references $\alpha$-approximation algorithms (where $\alpha \ge 1$) to achieve $(t \cdot \alpha)$-competitiveness while incurring at most $\frac{w_{\text{max}}\cdot(t+1)}{\min\{1, w_\text{min}\}\cdot(t-1)}$ amortized recourse, where $w_{\text{max}}$ and $w_{\text{min}}$ are the largest value and the smallest positive value that can be assigned to an element in the sum. We further refine this trade-off between competitive ratio and amortized recourse for three classical graph problems. For Independent Set, we refine the analysis of our general algorithm and show that $t$-competitiveness can be achieved with $\frac{t}{t-1}$ amortized recourse. For Maximum Matching, we use an existing algorithm with limited greed to show that $t$-competitiveness can be achieved with $\frac{(2-t^*)}{(t^*-1)(3-t^*)}+\frac{t^*-1}{3-t^*}$ amortized recourse, where $t^*$ is the largest number such that $t^*= 1 +\frac{1}{j} \leq t$ for some integer $j$. For Vertex Cover, we introduce a polynomial-time algorithm that further limits greed to show that $(2 - \frac{2}{\texttt{OPT}})$-competitiveness, where $\texttt{OPT}$ is the size of the optimal vertex cover, can be achieved with at most $\frac{10}{3}$ amortized recourse by a potential function argument. We remark that this online result can be used as an offline approximation result (without violating the unique games conjecture) to improve upon that of Monien and Speckenmeyer for graphs containing odd cycles of length no less than $2k + 3$, using an algorithm that is also constructive.
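To make the quantities in the abstract concrete, the following small sketch (my own illustration, not code from the paper) simply evaluates the stated bounds: the general trade-off $\frac{w_{\text{max}}\cdot(t+1)}{\min\{1, w_\text{min}\}\cdot(t-1)}$, the $\frac{t}{t-1}$ recourse for Independent Set, and, for Maximum Matching, the value $t^*$ — the largest number of the form $1 + \frac{1}{j} \le t$ with $j$ an integer — plugged into $\frac{(2-t^*)}{(t^*-1)(3-t^*)}+\frac{t^*-1}{3-t^*}$.

```python
# Sketch: evaluate the trade-offs stated in the abstract (illustrative only).
import math

def general_recourse(t: float, w_max: float, w_min: float) -> float:
    """Amortized recourse bound stated for the general two-fold greedy algorithm."""
    return (w_max * (t + 1)) / (min(1.0, w_min) * (t - 1))

def t_star(t: float) -> float:
    """Largest value of the form 1 + 1/j (j a positive integer) that is <= t."""
    assert t > 1.0
    j = math.ceil(1.0 / (t - 1.0))   # smallest integer j with 1 + 1/j <= t
    return 1.0 + 1.0 / j

def matching_recourse(t: float) -> float:
    """Amortized recourse bound stated for Maximum Matching."""
    ts = t_star(t)
    return (2 - ts) / ((ts - 1) * (3 - ts)) + (ts - 1) / (3 - ts)

def independent_set_recourse(t: float) -> float:
    """Amortized recourse bound stated for Independent Set."""
    return t / (t - 1)

if __name__ == "__main__":
    for t in (1.5, 2.0, 3.0):
        print(f"t={t}: t*={t_star(t):.3f}, "
              f"matching={matching_recourse(t):.3f}, "
              f"independent set={independent_set_recourse(t):.3f}, "
              f"general (w_max=w_min=1)={general_recourse(t, 1.0, 1.0):.3f}")
```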
Log as generalized length. Nate Soares, 7 Jun 2016 2:42 UTC. Here are a handful of examples of how the logarithm base 10 behaves. Can you spot the pattern? $$ \begin{align} \log_{10}(2) &\ \approx 0.30 \\ \log_{10}(7) &\ \approx 0.85 \\ \log_{10}(22) &\ \approx 1.34 \\ \log_{10}(70) &\ \approx 1.85 \\ \log_{10}(139) &\ \approx 2.14 \\ \log_{10}(316) &\ \approx 2.50 \\ \log_{10}(123456) &\ \approx 5.09 \\ \log_{10}(654321) &\ \approx 5.82 \\ \log_{10}(123456789) &\ \approx 8.09 \\ \log_{10}(\underbrace{987654321}_\text{9 digits}) &\ \approx 8.99 \end{align} $$ Every time the input gets one digit longer, the output goes up by one. In other words, the output of the logarithm is roughly the length — measured in digits — of the input. (Why?) Why is it the log base 10 (rather than, say, the log base 2) that roughly measures the length of a number? Because numbers are normally represented in decimal notation, where each new digit lets you write down ten times as many numbers. The logarithm base 2 would measure the length of a number if each digit only gave you the ability to write down twice as many numbers. In other words, the log base 2 of a number is roughly the length of that number when it's represented in binary notation (where \(13\) is written \(\texttt{1101}\) and so on): $$ \begin{align} \log_2(3) = \log_2(\texttt{11}) &\ \approx 1.58 \\ \log_2(7) = \log_2(\texttt{111}) &\ \approx 2.81 \\ \log_2(13) = \log_2(\texttt{1101}) &\ \approx 3.70 \\ \log_2(22) = \log_2(\texttt{10110}) &\ \approx 4.46 \\ \log_2(70) = \log_2(\texttt{1000110}) &\ \approx 6.13 \\ \log_2(139) = \log_2(\texttt{10001011}) &\ \approx 7.12 \\ \log_2(316) = \log_2(\texttt{100111100}) &\ \approx 8.30 \\ \log_2(1000) = \log_2(\underbrace{\texttt{1111101000}}_\text{10 digits}) &\ \approx 9.97 \end{align} $$ If you aren't familiar with the idea of numbers represented in other number bases besides 10, and you want to learn more, see the number base tutorial. Here's an interactive visualization which shows the link between the length of a number expressed in base \(b\), and the logarithm base \(b\) of that number: As you can see, if \(b\) is an integer greater than 1, then the logarithm base \(b\) of \(x\) is pretty close to the number of digits it takes to write \(x\) in base \(b.\) Pretty close, but not exactly. The most obvious difference is that the outputs of logarithms generally have a fractional portion: the logarithm of \(x\) always falls a little short of the length of \(x.\) This is because, insofar as logarithms act like the "length" function, they generalize the notion of length, making it continuous. What does this fractional portion mean? Roughly speaking, logarithms measure not only how long a number is, but also how much that number is really using its digits. 12 and 99 are both two-digit numbers, but intuitively, 12 is "barely" two digits long, whereas 97 is "nearly" three digits. Logarithms formalize this intuition, and tell us that 12 is really only using about 1.08 digits, while 97 is using about 1.99. Where are these fractions coming from? Also, looking at the examples above, notice that \(\log_{10}(316) \approx 2.5.\) Why is it 316, rather than 500, that logarithms claim is "2.5 digits long"? What would it even mean for a number to be 2.5 digits long? It very clearly takes 3 digits to write down "316," namely, '3', '1', and '6'. What would it mean for a number to use "half a digit"? Well, here's one way to approach the notion of a "partial digit."
Let's say that you work in a warehouse recording data using digit wheels like they used to have on old desktop computers. Let's say that one of your digit wheels is broken, and can't hold numbers greater than 4 — every notch 5-9 has been stripped off, so if you try to set it to a number between 5 and 9, it just slips down to 4. Let's call the resulting digit a 5-digit, because it can still be stably placed into 5 different states (0-4). We could easily call this 5-digit a "partial 10-digit." The question is, how much of a partial 10-digit is it? Is it half a 10-digit, because it can store 5 out of 10 values that a "full 10-digit" can store? That would be a fine way to measure fractional digits, but it's not the method used by logarithms. Why? Well, consider a scenario where you have to record lots and lots of numbers on these digits (such that you can tell someone how to read off the right data later), and let's say also that you have to pay me one dollar for every digit that you use. Now let's say that I only charge you 50¢ per 5-digit. Then you should do all your work in 5-digits! Why? Because two 5-digits can be used to store 25 different values (00, 01, 02, 03, 04, 10, 11, …, 44) for $1, which is way more data-stored-per-dollar than you would have gotten by buying a 10-digit.noteYou may be wondering, are two 5-digits really worth more than one 10-digit? Sure, you can place them in 25 different configurations, but how do you encode "9″ when none of the digits have a "9" symbol written on them? If so, see The symbols don't matter. In other words, there's a natural exchange rate between \(n\)-digits, and a 5-digit is worth more than half a 10-digit. (The actual price you'd be willing to pay is a bit short of 70¢ per 5-digit, for reasons that we'll explore shortly). A 4-digit is also worth a bit more than half a 10-digit (two 4-digits lets you store 16 different numbers), and a 3-digit is worth a bit less than half a 10-digit (two 3-digits let you store only 9 different numbers). We now begin to see what the fractional answer that comes out of a logarithm actually means (and why 300 is closer to 2.5 digits long that 500 is). The logarithm base 10 of \(x\) is not answering "how many 10-digits does it take to store \(x\)?" It's answering "how many digits-of-various-kinds does it take to store \(x\), where as many digits as possible are 10-digits; and how big does the final digit have to be?" The fractional portion of the output describes how large the final digit has to be, using this natural exchange rate between digits of different sizes. For example, the number 200 can be stored using only two 10-digits and one 2-digit.\(\log_{10}(200) \approx 2.301,\) and a 2-digit is worth about 0.301 10-digits. In fact, a 2-digit is worth exactly \((\log_{10}(200) - 2)\) 10-digits. As another example, \(\log_{10}(500) \approx 2.7\) means "to record 500, you need two 10-digits, and also a digit worth at least \(\approx\)70¢", i.e., two 10-digits and a 5-digit. This raises a number of additional questions: Question: Wait, there is no digit that's worth 50¢. As you said, a 3-digit is worth less than half a 10-digit (because two 3-digits can only store 9 things), and a 4-digit is worth more than half a 10-digit (because two 4-digits store 16 things). If \(\log_{10}(316) \approx 2.5\) means "you need two 10-digits and a digit worth at least 50¢," then why not just have the \(\log_{10}\) of everything between 301 and 400 be 2.60? They're all going to need two 10-digits and a 4-digit, aren't they? 
Answer: The natural exchange rates between digits is actually way more interesting than it first appears. If you're trying to store either "301" or "400", and you start with two 10-digits, then you have to purchase a 4-digit in both cases. But if you start with a 10-digit and an 8-digit, then the digit you need to buy is different in the two cases. In the "301" case you can still make do with a 4-digit, because the 10, 8, and 4-digits together give you the ability to store any number up to \(10\cdot 8\cdot 4 = 320\). But in the "400" case you now need to purchase a 5-digit instead, because the 10, 8, and 4 digits together aren't enough. The logarithm of a number tells you about every combination of \(n\)-digits that would work to encode the number (and more!). This is an idea that we'll explore over the next few pages, and it will lead us to a much better understanding of logarithms. Question: Hold on, where did the 2.60 number come from above? How did you know that a 5-digit costs 70¢? How are you calculating these exchange rates, and what do they mean? Answer: Good question. In Exchange rates between digits, we'll explore what the natural exchange rate between digits is, and why. Question: \(\log_{10}(100)=2,\) but clearly, 100 is 3 digits long. In fact, \(\log_b(b^k)=k\) for any integers \(b\) and \(k\), but \(k+1\) digits are required to represent \(b^k\) in base \(b\) (as a one followed by \(k\) zeroes). Why is the logarithm making these off-by-one errors? Answer: Secretly, the logarithm of \(x\) isn't answering the question "how hard is it to write \(x\) down?", it's answering something more like "how many digits does it take to record a whole number less than \(x\)?" In other words, the \(\log_{10}\) of 100 is the number of 10-digits you need to be able to name any one of a hundred numbers, and that's two digits (which can hold anything from 00 to 99). Question: Wait, but what about when the input has a fractional portion? How long is the number 100.87? And also, \(\log_{10}(100.87249072)\) is just a hair higher than 2, but 100.87249072 is way harder to write down that 100. How can you say that their "lengths" are almost the same? Answer: Great questions! The length interpretation on its own doesn't shed any light on how logarithm functions handle fractional inputs. We'll soon develop a second interpretation of logarithms which does explain the behavior on fractional inputs, but we aren't there yet. Meanwhile, note that the question "how hard is it to write down an integer between 0 and \(x\) using digits?" is very different from the question of "how hard is it to write down \(x\)"? For example, 3 is easy to write down using digits, while \(\pi\) is very difficult to write down using digits. Nevertheless, the log of \(\pi\) is very close to the log of 3. The concept for "how hard is this number to write down?" goes by the name of complexity; see the Kolmogorov complexity tutorial to learn more on this topic. Question: Speaking of fractional inputs, if \(0 < x < 1\) then the logarithm of \(x\) is negative. How does that square with the length interpretation? What would it even mean for the length of the number \(\frac{1}{10}\) to be \(-1\)? Answer: Nice catch! The length interpretation crashes and burns when the inputs are less than one. The "logarithms measure length" interpretation is imperfect. The connection is still useful to understand, because you already have an intuition for how slowly the length of a number grows as the number gets larger. 
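These exchange-rate claims are easy to check numerically. The short sketch below is my own illustration (it is not part of the original page): it prints the "price" of a d-digit in terms of 10-digits, confirms that 200 costs two 10-digits plus a 2-digit, shows why 316 is the number that is "2.5 digits long", and checks the 10 · 8 · 4 = 320 capacity argument from the answer above.

```python
# Illustration of "log as generalized length" and digit exchange rates.
import math

def price_in_10_digits(d: int) -> float:
    """How many 10-digits a single d-digit is 'worth' (its exchange rate)."""
    return math.log10(d)

for d in (2, 3, 4, 5, 8):
    print(f"a {d}-digit is worth about {price_in_10_digits(d):.2f} 10-digits")

# 200 = two 10-digits plus one 2-digit:
print(math.log10(200), 2 + math.log10(2))      # both ~2.301

# "316 is about 2.5 digits long": a digit worth half a 10-digit holds sqrt(10) values
print(10 ** 2.5)                               # ~316.2

# Capacity check from the answer above: a 10-digit, an 8-digit, and a 4-digit
# together can address 10 * 8 * 4 = 320 numbers, enough for 301 (and even 320).
print(10 * 8 * 4)

# Log base 10 vs. number of decimal digits (the "length" intuition):
for n in (7, 12, 99, 2310426):
    print(n, len(str(n)), round(math.log10(n), 2))
```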
The "length" interpretation is one of the easiest ways to get a gut-level intuition for what logarithmic growth means. If someone says "the amount of time it takes to search my database is logarithmic in the number of entries," you can get a sense for what this means by remembering that logarithmic growth is like how the length of a number grows with the magnitude of that number: The interpretation doesn't explain what's going on when the input is fractional, but it's still one of the fastest ways to make logarithms start feeling like a natural property on numbers, rather than just some esoteric function that "inverts exponentials." Length is the quick-and-dirty intuition behind logarithms. For example, I don't know what the logarithm base 10 of 2,310,426 is, but I know it's between 6 and 7, because 2,310,426 is seven digits long. $$\underbrace{\text{2,310,426}}_\text{7 digits}$$ In fact, I can also tell you that \(\log_{10}(\text{2,310,426})\) is between 6.30 and 6.48. How? Well, I know it takes six 10-digits to get up to 1,000,000, and then we need something more than a 2-digit and less than a 3-digit to get to a number between 2 and 3 million. The natural exchange rates for 2-digits and 3-digits (in terms of 10-digits) are 30¢ and 48¢ respectively, so the cost of 2,310,426 in terms of 10-digits is between $6.30 and $6.48. Next up, we'll be exploring this idea of an exchange rate between different types of digits, and building an even better interpretation of logarithms which helps us understand what they're doing on fractional inputs (and why). Alexei Andreev7 Jun 2016 5:06 UTC What's \(n\) exactly? Eric Rogstad7 Jun 2016 17:56 UTC I think you may need to spell out this 10 times as many numbers part. This is a large unexplained step in explaining why the log is the length. This is slightly confusing, because it's the first digit that's a 2. I might write this as, "whereas, when multiplying 1 by 10 to get to x, you might have to multiply by 10 a fractional number of times (if x is not a power of 10), so the log base 10 of x can include a fractional part while the number of digits in the base 10 representation of x is always a whole number." Rationale: in the previous sentence you're comparing the number of digits needed to write x to the number of times to multiply 1 by 10. So when the next sentence starts with, "the only difference is…" I'm expecting it to be comparing numbers of digits and numbers of times to multiply. I can figure out that you've switched to talking about "computing logs" because logs count the number of times to multiply by 10, but it feels like one extra step of mental effort. (This is a less confident suggestion than the amount of text I've used suggests.) How about, "because I'm going to need six 10-digits to get up to a million, and something more than a 2-digit and less than a 3-digit to get from there to a number between 2 and 3 million." I'm not sure that would be the right way to say it, but I still feel like the current text is problematic, because: 1) Whether you say last digit, or seventh digit, in either case I'm reading right-to-left and my first thought is that you're talking about the ones place. 2) Even if you said something like left-most digit, that wouldn't be right, because it's not that 2 is between 2 and 3, it's that the value of the whole number is greater than 210^6 and less than 310^6. 
I think you're referring to a digit in an abstract sense that doesn't directly map to the digits we write down, so you may have to go out of your way to avoid confusing nth digit with a particular one of the numerals that are written above. Stephanie Zolayvar, 8 Jun 2016 18:15 UTC: Is this paragraph needed? I find myself wanting to skip past it. Eli Tyre, 17 Sep 2017 5:44 UTC: You use an example of "99" then switch to "97".
Electronic Research Archive, September 2021, 29(4): 2561-2597. doi: 10.3934/era.2021002 Classification and simulation of chaotic behaviour of the solutions of a mixed nonlinear Schrödinger system Riadh Chteoui, Abdulrahman F. Aljohani, and Anouar Ben Mabrouk Department of Mathematics, Faculty of Sciences, University of Tabuk, Saudi Arabia; Lab. of Algebra, Number Theory and Nonlinear Analysis, Department of Mathematics, Faculty of Sciences, University of Monastir, 5000 Monastir, Tunisia; Department of Mathematics, Higher Institute of Applied Mathematics and Computer Science, University of Kairouan, Street of Assad Ibn Alfourat, 3100 Kairouan, Tunisia. * Corresponding author: Anouar Ben Mabrouk. Received August 2020; Revised November 2020; Early access January 2021; Published September 2021. Figure(40) / Table(1) In this paper, we study a couple of NLS equations characterized by mixed cubic and super-linear sub-cubic power laws. Classification as well as existence and uniqueness of the steady state solutions have been investigated. Numerical simulations have also been provided, illustrating the theoretical results graphically. These simulations show that chaotic behaviour may occur, which calls for further investigation. Keywords: Variational, energy functional, existence, uniqueness, NLS system, classification, chaotic, simulation. Mathematics Subject Classification: Primary: 35Q41, 35J50; Secondary: 35J10, 35Q55. Citation: Riadh Chteoui, Abdulrahman F. Aljohani, Anouar Ben Mabrouk. Classification and simulation of chaotic behaviour of the solutions of a mixed nonlinear Schrödinger system. Electronic Research Archive, 2021, 29 (4) : 2561-2597. doi: 10.3934/era.2021002
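The figures listed below are indexed by an initial pair $(a,b)$, the exponent $p$, and the frequency $\omega$ (or $w$). The exact coupled stationary system — problem (34) in the paper — is not reproduced in this excerpt, so the following sketch is only a guess at how such $(a,b)$-parameterized profiles and phase-plane portraits could be generated numerically: the right-hand sides `f_u` and `f_v` are placeholder nonlinearities with a cubic coupling and a sub-cubic $|u|^{p-1}u$ term, and would need to be replaced by the actual equations from the paper.

```python
# Illustrative sketch only: the true stationary system (problem (34)) is not shown
# in this excerpt, so f_u and f_v below are placeholder nonlinearities with the
# advertised mixed cubic / sub-cubic structure, not the authors' equations.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

p, omega = 1.5, 2.0          # exponent and frequency used in several figures
a, b = 0.25, 2.75            # initial pair (a, b), e.g. as in Figure 3

def rhs(x, y):
    u, up, v, vp = y
    f_u = omega * u - abs(u) ** (p - 1) * u - (v ** 2) * u   # placeholder nonlinearity
    f_v = omega * v - abs(v) ** (p - 1) * v - (u ** 2) * v   # placeholder nonlinearity
    return [up, f_u, vp, f_v]

# Assumed initial data: u(0) = a, v(0) = b with zero initial slopes.
sol = solve_ivp(rhs, (0.0, 20.0), [a, 0.0, b, 0.0], max_step=0.01, dense_output=True)

x = np.linspace(sol.t[0], sol.t[-1], 2000)
u, up, v, vp = sol.sol(x)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].plot(x, u, label="u")
axes[0].plot(x, v, label="v")
axes[0].set_xlabel("x")
axes[0].legend()
axes[1].plot(u, up)          # phase-plane portrait (u', u), as in Figures 6 and 7
axes[1].set_xlabel("u")
axes[1].set_ylabel("u'")
plt.show()
```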
Ben Mohamed, Nonradial solutions of a mixed concave-convex elliptic problem, J. Partial Differ. Equ., 24 (2011), 313-323. doi: 10.4208/jpde.v24.n4.3. Google Scholar A. Ben Mabrouk and M. L. Ben Mohamed, On some critical and slightly super-critical sub-superlinear equations, Far East J. Appl. Math., 23 (2006), 73-90. Google Scholar A. Ben Mabrouk, M. L. Ben Mohamed and K. Omrani, Finite difference approximate solutions for a mixed sub-superlinear equation, Appl. Math. Comput., 187 (2007), 1007-1016. doi: 10.1016/j.amc.2006.09.081. Google Scholar V. Benci and D. Fortunato, Solitary waves of the nonlinear Klein-Gordon equationcoupled with Maxwell equations, Rev. Math. Phys., 14 (2020), 409-420. doi: 10.1142/S0129055X02001168. Google Scholar R. D. Benguria, J. Dolbeault and M. J. Esteban, Classification of the solutions of semilinear elliptic problems in a ball, J. Differential Equations, 167 (2000), 438-466. doi: 10.1006/jdeq.2000.3792. Google Scholar K. Chaïb, Necessary and sufficient conditions of existence for a system involving the $p$-Laplacian $(0 < p < N)$, J. Differential Equations, 189 (2003), 513-525. doi: 10.1016/S0022-0396(02)00094-3. Google Scholar S. Chakravarty, M. J. Ablowitz, J. R. Sauer and R. B. Jenkins, Multisoliton interactions and wavelength-division multiplexing, Opt. Lett., 20 (1995), 136-138. doi: 10.1364/OL.20.000136. Google Scholar R. Chteoui, A. Ben Mabrouk and H. Ounaies, Existence and properties of radial solutions of a sublinear elliptic equation, J. Partial Differ. Equ., 28 (2015), 30-38. doi: 10.4208/jpde.v28.n1.4. Google Scholar A. K. Dhar and K. P. Das, Fourth-order nonlinear evolution equation for two Stokes wave trains in deep water, Physics of Fluids A: Fluid Dynamics, 3 (1991), 3021-3026. doi: 10.1063/1.858209. Google Scholar M. R. Gupta, B. K. Som and B. Dasgupta, Coupled nonlinear Schrödinger equations for Langmuir and elecromagnetic waves and extension of their modulational instability domain, J. Plas, Phys., 25 (1981), 499-507. doi: 10.1017/S0022377800026271. Google Scholar F. T. Hioe, Solitary waves for two and three coupled nonlinear Schrödinger equations, Phys. Rev. E, 58 (1998), 6700-6707. doi: 10.1103/PhysRevE.58.6700. Google Scholar T. Kanna, M. Lakshmanan, P. Tchofo Dinda and N. Akhmediev, Soliton collisions with shape change by intensity redistribution in mixed coupled nonlinear Schrödinger equations, Phys. Rev. E, 73 (2006), 026604, 15 pp. doi: 10.1103/PhysRevE.73.026604. Google Scholar C. E. Kenig and F. Merle, Global well-posedness, scattering and blow-up for the energy-critical, focusing, non-linear Schrödinger equation in the radial case, Invent. Math., 166 (2006), 645-675. doi: 10.1007/s00222-006-0011-4. Google Scholar S. Keraani, On the blow-up phenomenon of the critical Schrödinger equation, J. Funct. Anal., 235 (2006), 171-192. doi: 10.1016/j.jfa.2005.10.005. Google Scholar H. Liu, Ground states of linearly coupled Schrödinger systems, Electron. J. Differential Equations, (2017), Paper No. 5, 10 pp. Google Scholar P. Liu and S.-Y. Lou, Coupled nonlinear Schrödinger equation: Symmetries and exact solutions, Commun. Theor. Phys., 51 (2009), 27-34. doi: 10.1088/0253-6102/51/1/06. Google Scholar Y. Martel and F. Merle, Multi solitary waves for nonlinear Schrödinger equations, Ann. Inst. H. Poincaré Anal. Non Linéaire, 23 (2006), 849-864. doi: 10.1016/j.anihpc.2006.01.001. Google Scholar C. R. Menyuk, Stability of solitons in birefringent optical fibers. II. 
Figure 1. Partition of the plane $ \mathbb{R}^2 $ according to the curves $ \Gamma_{1} $ and $ \Gamma_{2} $ for $ p = 1.5 $ and $ \omega = 2 $ Figure 2. Partition of the plane $ \mathbb{R}^2 $ according to $ \Gamma_1 $, $ \Gamma_{2} $ and $ \Lambda $ for $ p = 1.5 $ and $ \omega = 2 $ Figure 3. $ (u,v) $ for $ (a,b) = (0.25,2.75)\in\Omega_1 $ Figure 4. The solution $ (u,v) $ for $ (a,b) = (1,\omega_3-0.1)\in\Omega_1 $ Figure 5. The solutions $ u $ and $ v $ separately for $ (a,b) = (0.25,2.75)\in\Omega_1 $ Figure 6. The phase plane portrait $ (u',u) $ for $ (a,b) = (0.25,2.75)\in\Omega_1 $ Figure 7. The phase plane portrait $ (v',v) $ for $ (a,b) = (0.25,2.75)\in\Omega_1 $ Figure 8. The partition of $ \Omega_2 $ according to the curve $ \Lambda $ Figure 9. The solution $ (u,v) $ for $ (a,b) = (0.45,0.95)\in\Omega_2^1 $ Figure 10. The solution $ (u,v) $ for $ (a,b) = (0.5,0.75)\in\Omega_2^1 $ Figure 11.
The solution $ (u,v) $ for $ (a,b) = (0.4,0.5)\in\Omega_2^1 $ Figure 12. The solutions $ u $ an $ v $ separately for $ (a,b) = (0.45,0.5)\in\Omega_2^1 $ Figure 13. The solution $ (u,v) $ for $ p = 2 $, $ \omega = 2 $ and $ (a,b) = (2.5,4)\in\Omega_{ext,1} $ Figure 14. The solution $ (u,v) $ for $ p = 2 $, $ \omega = 2 $ and $ (a,b) = (1,6.5)\in\Omega_{ext,1} $ Figure 15. The solutions $ u $ and $ v $ separately for $ p = 2 $, $ \omega = 2 $ and $ (a,b) = (0.5,5)\in\Omega_{ext,1} $ Figure 16. The solution $ u $ and $ v $ separately for $ p = 2 $, $ \omega = 2 $ and $ (a,b) = (3.5,4)\in\Omega_{ext,1} $ Figure 17. The solution $ (u,v) $ for $ p = 1.5 $, $ \omega = 2 $ and $ (a,b) = (0.5,3.0625)\in)A,B( $ Figure 18. The solution $ (u,v) $ for $ p = 1.5 $, $ \omega = 2 $ and $ (a,b) = (0.9511,1.2)\in)A,B( $ Figure 19. The solutions $ u $ and $ v $ separately for $ p = 1.5 $, $ \omega = 2 $ and $ (a,b) = (0.5,3.0625)\in)A,B( $ Figure 20. The solutions $ u $ and $ v $ separately for $ p = 1.5 $, $ \omega = 2 $ and $ (a,b) = (0.9511,1.2)\in)A,B( $ Figure 21. The solution $ (u,v) $ for $ p = 1.5 $, $ \omega = 2 $ and $ (a,b) = (0.1,1.2976)\in)I,B( $ Figure 23. The solutions $ u $ and $ v $ separately for $ (a,b) = (0.1,1.2976)\in)I,B( $ Figure 24. The chaotic behavior for $ p = 1.5 $, $ w = 2 $ and $ (a,b) = (0.2,0.4) $ Figure 25. The chaotic behavior for $ p = 1.5 $, $ w = 2 $ and $ (a,b) = (2,4) $ Figure 26. The chaotic behavior for $ p = 2.5 $, $ w = 3 $ and $ (a,b) = (0.2,0.14) $ Figure 28. The partition of the plane according to $ \Gamma_1 $, $ \Gamma_2 $ and $ \Lambda $ for $ w = 0 $ Figure 29. The solution $ (u,v) $ for $ p = 1.5 $, $ w = 0 $ and $ (a,b) = (0.25,0.5538)\in\Lambda $ Figure 30. The solution $ (u,v) $ for $ p = 1.5 $, $ w = 0 $ and $ (a,b) = (0.25,0.35)\in R_2 $ Figure 31. The solution $ (u,v) $ for $ p = 1.5 $, $ w = 0 $ and $ (a,b) = (0.5,1)\in R_1 $ Figure 32. The solution $ (u,v) $ for $ p = 1.5 $, $ w = 0 $ and $ (a,b) = (2.5,3.5)\in R_1 $ Figure 33. The portrait $ (u',u) $ for $ p = 1.5 $, $ w = 0 $ and $ (a,b) = (0.25,0.5538)\in\Lambda $ Figure 34. The portrait $ (v',v) $ for $ p = 1.5 $, $ w = 0 $ and $ (a,b) = (0.25,0.35)\in R_2 $ Figure 35. The solutions $ u $ (in blue) and $ v $ (in pink) for $ p = 1.5 $, $ w = 0 $ and $ (a,b) = (0.5,1)\in R_1 $ Figure 36. The energy $ E(u,v)(x) $ for $ p = 1.5 $, $ (a,b) = (0.15,25)\in R_2 $, $ w = 0 $ at the top and $ w = 2 $ at the bottom Figure 37. Parameters' domains for $ p = 1.5 $, $ \omega = 0.5 $ Figure 39. Parameters' domains for $ p = 1.5 $, $ \omega = 1 $ Table 1. Some illustrative cases of problem (34) for $ p = 1.5 $ Corresponding Figure Figure 29 Figure 30 Figure 31 Figure 32 $ a $ $ 0.25 $ $ 0.25 $ $ 0.5 $ $ 2.5 $ $ b $ $ 0.5538 $ $ 0.35 $ $ 1 $ $ 3.5 $ Initial value region $ (a,b)\in\Lambda $ $ (a,b)\in R_2 $ $ (a,b)\in R_1 $ $ (a,b)\in R_1 $ Jochen Bröcker. Existence and uniqueness for variational data assimilation in continuous time. Mathematical Control & Related Fields, 2021 doi: 10.3934/mcrf.2021050 Rémi Carles, Erwan Faou. Energy cascades for NLS on the torus. Discrete & Continuous Dynamical Systems, 2012, 32 (6) : 2063-2077. doi: 10.3934/dcds.2012.32.2063 Patrick Cummings, C. Eugene Wayne. Modified energy functionals and the NLS approximation. Discrete & Continuous Dynamical Systems, 2017, 37 (3) : 1295-1321. doi: 10.3934/dcds.2017054 Thi-Bich-Ngoc Mac. Existence of solution for a system of repulsion and alignment: Comparison between theory and simulation. 
RUSSIAN JOURNAL OF EARTH SCIENCES, VOL. 20, ES6001, doi:10.2205/2020ES000721, 2020

Short-term forecast of the auroral oval position on the basis of the "virtual globe" technology

A. V. Vorobev1,2, V. A. Pilipenko1,3, R. I. Krasnoperov1, G. R. Vorobeva2, D. A. Lorentzen4

1Geophysical Center of the Russian Academy of Sciences, Moscow, Russia; 2Ufa State Aviation Technical University, Ufa, Russia; 3Institute of Physics of the Earth, Moscow, Russia; 4Birkeland Center for Space Science, University Centre in Svalbard, Svalbard, Norway

Nowcasting, and even more so short-term forecasting, of the auroral oval position and intensity is a highly needed resource for practical applications. The auroral oval is the region with a high level of ionospheric plasma turbulence, which provokes malfunctions of radio communication and satellite navigation systems, and the region with the most intense and irregular ionospheric electrojet, which excites geomagnetically induced currents (GICs) in electric power lines. We have developed a web service (http://aurora-forecast.ru) for continuous nowcasting, visualization and short-term forecasting of auroras. The implementation tool of the developed geographic information system (GIS) is the Django framework. The web service is a software shell built on the basis of a virtual globe – a multi-scale digital 3D model of the Earth – which renders the data provided by the NOAA service on the planetary distribution of the probability of aurora occurrence. The NOAA service uses the output of the OVATION-prime model, which gives a forecast of auroras 30 minutes in advance with a 5-minute update step, using real-time data from interplanetary monitors of the solar wind. The developed web service can be used to assess the probability of observing auroras anywhere in the world, to predict the deterioration of satellite navigation signal quality, and to warn about the possibility of intense GICs at high latitudes.

Introduction: The Importance of Monitoring the Position of the Auroral Oval

The most active manifestations of space weather, such as overloading of power transmission lines caused by geomagnetically induced currents (GICs), intensified electro-corrosion of oil and gas pipelines, and failures of short-wave radio communication, are observed within the region of the auroral oval. In this brief note we omit the description of the physics of auroral phenomena, which is covered comprehensively in many excellent books and reviews [cf. Akasofu, 1977]. The auroral oval is characterized by sharp gradients of the ionospheric plasma and a high level of turbulence, which provokes failures and significantly reduces the stability of global navigation satellite systems (GNSS), such as GPS/GLONASS [Kozyreva et al., 2017; Smith et al., 2008]. There is a pressing need to create and field-test regional models for the Arctic zone of the Russian Federation that allow monitoring and operational forecasting of the auroral oval dynamics under various space weather conditions. To date, various approaches to nowcasting and predicting the auroral oval position have been developed.
A weather-independent source of detailed information about the structure of the auroral oval is measurements of auroral electron fluxes (NOAA, DMSP) or field-aligned currents (CHAMP, Ørsted, SWARM) on low-Earth-orbit (LEO) satellites. The parameters of the solar wind and interplanetary magnetic field (IMF), transmitted in real time from satellites at the Lagrange point L1 on the Earth–Sun line (at a distance of $\sim 200$ Earth radii), are usually used as input data in such models. This article discusses the possibilities of modernizing web services that provide monitoring and visualization of the auroral oval using the methods of geographical information systems (GISs). A new web service for visualizing the probability of aurora occurrence is described, based on the statistical OVATION-prime model of auroral particle precipitation.

The OVATION Model Description

The OVATION auroral oval model (Oval Variation, Assessment, Tracking, Intensity, and Online Nowcasting) (https://www.swpc.noaa.gov/products/aurora-30-minute-forecast) is based on data acquired during more than 20 years of observations of electron and proton fluxes of different energies on satellites of the Defense Meteorological Satellite Program (DMSP). Compared to optical observations, the registration of particle fluxes on LEO satellites covers both hemispheres, is independent of ionospheric illumination and atmospheric cloudiness, and is more sensitive ($\sim 10^{-2}$ ergs/cm$^2$ s) than ground-based or satellite-based optical instruments. The boundaries of the auroral oval were automatically determined from the DMSP-measured particle fluxes using specially developed algorithms. The equatorward boundary in the OVATION model is defined as the boundary of soft electron precipitation [Newell et al., 2014], and the poleward boundary corresponds to the open-closed field line boundary [Sotirelis and Newell, 2000]. The OVATION model predicted the total intensity of the fluxes of auroral electrons causing auroras, and the predicted boundaries coincided well with the observed position of the auroral oval. The advanced version of the model, OVATION-prime (OP), is parameterized by the solar wind and IMF parameters (https://ccmc.gsfc.nasa.gov/models/modelinfo.php?model=Ovation%20Prime). This model calculates the 2D spatial distribution of the intensity of the main types of auroral electron and ion precipitation [Newell et al., 2010]. Diffuse auroras are caused by precipitation of soft electrons (0.1–1 keV), which are the main source of energy supply into the high-latitude upper atmosphere, although the auroras caused by these particles can be at a sub-visual level [Newell et al., 2009]. Discrete auroras are caused by precipitation of energetic electrons (0.1–30 keV). All types of auroras are combined to map the total auroral power [Newell et al., 2009]. The input parameters of the model are real-time data on the solar wind and IMF from interplanetary satellites, downloaded to the site of the U.S. National Oceanic and Atmospheric Administration (NOAA) from the ACE satellite (https://services.swpc.noaa.gov/products/solar-wind/). The model uses the previously established statistical relationship between the parameters of the interplanetary medium and the dynamics of the auroral oval [Newell et al., 2007].
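For reference, the commonly used form of the Newell et al. [2007] coupling function is $d\Phi_{MP}/dt \propto v^{4/3}\,B_T^{2/3}\,\sin^{8/3}(\theta_c/2)$, where $v$ is the solar wind speed, $B_T$ the transverse IMF magnitude and $\theta_c$ the IMF clock angle. The sketch below is only an illustration of evaluating this cited function from real-time solar wind data; it is not the OVATION-prime code, and the proportionality constant is omitted.

import math

def newell_coupling(v_sw, b_y, b_z):
    # Solar wind-magnetosphere coupling function of Newell et al. [2007],
    # d(Phi_MP)/dt ~ v^(4/3) * B_T^(2/3) * sin(theta_c/2)^(8/3).
    # v_sw in km/s, IMF components b_y, b_z in nT; the proportionality
    # constant is omitted, so the result is in arbitrary units.
    b_t = math.hypot(b_y, b_z)            # transverse IMF magnitude
    theta_c = math.atan2(b_y, b_z)        # IMF clock angle
    return v_sw ** (4.0 / 3.0) * b_t ** (2.0 / 3.0) * abs(math.sin(theta_c / 2.0)) ** (8.0 / 3.0)

print(newell_coupling(450.0, 3.0, -5.0))  # moderately southward IMF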
The time shift ($\sim 1$ hour) due to the propagation of the solar wind from the space monitor at L1 to the magnetosphere boundary (the $\sim 200$ Earth radii, about $1.3\times10^{6}$ km, are covered in roughly half an hour to an hour at typical speeds of 400–800 km/s) makes it possible in principle to provide a short-term forecast of the expected intensity and location of auroras. Based on the OP model, two types of web services can be built: (1) a 2D picture of charged-particle fluxes of different energies and of the auroral power for the northern and southern hemispheres. These fluxes are important for the geophysical community, because they provide information on the energy entering the upper atmosphere and its spatial distribution. Based on this information, models of the influence of space weather on technological systems are constructed [Kozyreva et al., 2020]. As an example of such services, the OP Real-Time (OP-RT) web display (https://www.ngdc.noaa.gov/stp/ovation_prime/) can be mentioned; (2) the visual intensity of auroras calculated from the particle flux data [Machol et al., 2012]. The empirical OP model makes it possible to calculate the relationship between the intensity of auroral emissions and the probability of their observation with the naked eye, which is implemented on the NOAA web service (https://www.swpc.noaa.gov/products/aurora-30-minute-forecast). NOAA gives the probability of observing the aurora, ranked on a scale from 0 to 100 and tied to a latitude–longitude grid. This information is important for operators of technological systems, aurora observers, and the tourist industry. There are also web services focused on regional nowcasting of the auroral oval, developed by the University of Alaska (https://www.gi.alaska.edu/monitors/aurora-forecast), the Meteorological Service of Iceland (https://en.vedur.is/weather/forecasts/aurora), and the University Centre in Svalbard (http://kho.unis.no) [Sigernes et al., 2011]. The aurora monitor is an essential part of the space weather services at the Space Research Institute (http://spaceweather.ru) and the Polar Geophysical Institute (http://pgia.ru). The site of the Virtual Geophysical Laboratory for Yamal (http://yamalgeo.rf) also contains information about aurora activity. Experience with the above services allowed us to identify a number of typical and recurring shortcomings of these implementations, which include: the inability to dynamically scale and add additional layers; a small number of displayed parameters; and the lack of data on the current state of space weather and of basic tools for spatial analysis of the visualized parameters. The development of web services that provide effective monitoring and visualization of the auroral oval by GIS methods is still an urgent task, the solution of which will make an effective space weather forecast easily available to all interested users.

Data Pre-Processing

The object of visualization in the presented GIS is the probability of observing the aurora with the naked eye. The empirical OP model enables one to calculate the relationship between the intensity of auroral particle precipitation and the probability of aurora occurrence. The NOAA web service (https://www.swpc.noaa.gov/products/aurora-30-minute-forecast) provides the OP model output that can be used for short-term prediction of the aurora intensity. A prediction horizon of 30 minutes corresponds to an average solar wind speed of $\sim 800$ km/s.
The output file of the OP model is a $1024 \times 512$ array, which corresponds to longitude values from $0\mbox{°}$ to $360\mbox{°}$ in increments of $0.32846715\mbox{°}$ and latitude values from $-90\mbox{°}$ to $90\mbox{°}$ in increments of $0.3515625\mbox{°}$, published on the NOAA website (https://services.swpc.noaa.gov/text/aurora-nowcast-map.txt). The array values range from 0 to 100, where 0 corresponds to the minimum probability of observing auroras, and 100 to the maximum. Data are updated every 5 minutes.

Comparative Analysis of Visualization Tools

The processing and graphical interpretation of the structured set of spatial and attribute data for the auroral zone visualization are implemented via web-based GIS technologies. The experience of building a web service for visualizing geomagnetic field variations has shown the suitability of GIS software tools for solving such problems [Vorobev and Vorobeva, 2017a, 2017b]. In regard to geospatial data, existing GIS methods can in general be divided into classic flat maps and virtual globes [Bobkov and Leonov, 2017]. Given the high-latitude location of the auroral oval, the obvious advantages of globes are the quality of their visual perception, the preservation of the geometric similarity of contours, and the absence of the cartographic projection distortions inherent in flat maps, especially in the polar regions. We have compared the characteristics and possibilities of modern software libraries that make it possible to embed a virtual globe into a web application. The criterion for choosing a visualization tool is the image rendering speed, which determines the quality of small-scale details and the realism of the virtual globe itself and of the loaded layers. For the auroral zone rendering, we use small-scale (from $1:2,000,000$ to $1:10,000,000$) cartographic substrates. At this scale, spatial visualization can be realized by means of a ranked color display, which can be applied both to point-type objects and to polygonal associations. Comparative analysis of the Application Programming Interfaces (APIs) showed that the ArcGIS API is well suited to the problem, given its advanced features for 2D/3D visualization of the Earth's surface, its support for various layer-rendering formats, and its compatibility with the Python programming language.

Architecture and Program Implementation

The proposed information system is based on a client-server architecture typical of web applications, implemented through the MVC (Model-View-Controller) design pattern and the separation of application data, user interface, and control logic into components. The advantages of this approach include the open structure of the program code, the possibility of code reuse, and the reduced complexity of the web application. Implementation of the proposed solutions is based on the Django web development tool, a framework with many built-in high-level capabilities. This framework includes mechanisms to prevent common attacks such as XSS and CSRF, uses object-relational mapping (ORM), and can support highly loaded applications through caching and load balancing. Django implements the RAD (Rapid Application Development) concept for organizing the software development process. Django uses Python as its programming language, which extends its functionality toward complex processing and visualization of big data [Vorobev et al., 2020].
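A minimal sketch of how this file can be retrieved and sampled on the server side is given below. The actual service wraps such logic in the Django application described next; the helper names and the assumed plain-text layout ('#'-prefixed comment lines followed by rows of whitespace-separated integers, rows running from $-90\mbox{°}$ to $+90\mbox{°}$ latitude) are illustrative assumptions, not a description of the production code.

import numpy as np
import requests

AURORA_URL = "https://services.swpc.noaa.gov/text/aurora-nowcast-map.txt"

def fetch_aurora_grid():
    # Download the nowcast grid of aurora probabilities (0-100).
    # Assumed layout: '#'-prefixed header lines followed by rows of
    # whitespace-separated integers (1024 longitudes x 512 latitudes).
    text = requests.get(AURORA_URL, timeout=30).text
    rows = [line.split() for line in text.splitlines()
            if line.strip() and not line.lstrip().startswith("#")]
    return np.array(rows, dtype=int)

def probability_at(grid, lat_deg, lon_deg):
    # Nearest-cell probability for a geographic point, assuming rows span
    # -90..+90 deg latitude and columns span 0..360 deg longitude.
    n_lat, n_lon = grid.shape
    i = int(round((lat_deg + 90.0) / 180.0 * (n_lat - 1)))
    j = int(round((lon_deg % 360.0) / 360.0 * (n_lon - 1)))
    return int(grid[i, j])

grid = fetch_aurora_grid()
print(probability_at(grid, 69.0, 33.0))   # e.g. over the Kola Peninsula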
For the processing and visualization of spatially distributed geophysical parameters, CSV and JSON data from third-party sources are used. In order to avoid conflicts caused by the same-origin policy followed by most browsers, access to remote sources is carried out only on the server side. The corresponding scripts are executed at the beginning of a user's session with the Django application and send requests to external data sources. The Python server-side script accesses digital data on the auroral oval (https://services.swpc.noaa.gov/text/aurora-nowcast-map.txt), the IMF (https://services.swpc.noaa.gov/products/solar-wind/mag-6-hour.json) and the solar wind (https://services.swpc.noaa.gov/products/solar-wind/plasma-6-hour.json). The results of the script are transferred to an appropriate template of the Django application and sent by the web server to the client side as a response for subsequent rendering by the user's browser. The visualized data are processed on a federated basis, without data alienation, followed by local storage of the file sets on the web server. When the resulting HTML code is generated by the web application templates, the functionality of two external APIs is used. The Google Visualization API module serves as the basis for the formation and rendering of graphs on the client side. At the initial stage, a separate plug-in module of the Google loader (loader.js) creates a gateway for graphical components to generate client-side code. Then, directly in the template, the necessary external modules are connected to the application, whose input is an array of data for visualization. Finally, after setting up the graphic components and specifying the HTML element, the plotting method draws the corresponding graphics on the application page using the callback function. Another third-party library, the ArcGIS API for JavaScript, is responsible for visualizing spatial data in geographic coordinates. Work with spatial data begins with the initialization of the Map class, designed to render map layers. The Map accesses a remote ArcGIS server, receives the map substrate from it, and passes it to the previously created instance of the Map class. Custom map layers are formed as instances of the LayerViewClass class. As an input parameter, the constructor of this class accepts a CSV file containing the data that must be displayed on the map. At the output, a cartographic layer is formed, tied to the previously created instance of the cartographic substrate. At the final stage, the cartographic base with the attached layers is converted into the virtual globe by initializing an instance of the SceneViewClass class. The result of applying the above layers as part of the Django application template is a data stream sent to the client side as a response, containing HTML code, style sheets, and scripts for execution and rendering by the browser.

Interface of the Model

The developed web service (http://aurora-forecast.ru) is software built on the basis of a virtual globe. This multi-scale digital 3D model of the Earth provides visualization of the data provided by the NOAA service on the aurora intensity in a given region of the planet. The interface of the developed system is shown in Figure 1. The interface is logically divided into 8 functional areas: A is the local and UT time.
The button "About aurora" provides an access to collection of books on physics of the aurora phenomena; B is an explanation of the color indication; C is real time parameters of the solar wind and IMF recorded by the DSCOVR satellite (https://services.swpc.noaa.gov/products/sol ar-wind/); D is the virtual globe tools; E is the visualization menu ("Midnight", "Northern Hemisphere", "Southern Hemisphere", "Russia", and "FreeView", providing maximum degrees of freedom during user interaction with a globe); F is the panel displaying the current cursor coordinates and scale; G holds basic tools for spatial analysis; H is for plots of the solar wind and IMF parameters for the last 6 hours; The service also provides the values of the integrated power of the auroras in the Northern and Southern hemispheres. When loading, the globe is set in such a way that it turns into a local midnight meridian. There is a tool that displays the coordinates of the cursor, so one can determine position of a point relative to the auroral oval boundary. The coupling of additional layers provides the possibility to overlay on a virtual globe either all magnetic stations (option 1), or major cities (option 2). An example of visualization of the aurora intensity for the Northern (a) and Southern (b) hemispheres is shown in Figure 2. Red dots represent location of magnetic stations. The screen information is updated automatically every 7 min. The advantage of the suggested service in comparison with existing analogues is the possibility to use custom layers (e.g., a layer of magnetic observatories), its multi-scale and interactivity, which together enables one to study in detail the dynamics and structure of auroras relative to any region of the planet. As an example, Figure 3 shows the predicted position of the auroral oval for the Arctic zone of the Russian Federation. Prospects of the Model Advancement The development of web-oriented services that provide monitoring and visualization of the auroral oval, together with analysis using GIS methods, is rather complicated task that does not have the only solution. The developed prototype of the system is currently in beta testing, i.e. it is offered to developers and potential users for testing and validation in order to identify plausible errors and proposals for the system modernization. At the first stage, only the nowcast of visible auroras is provided, based on ready-made calculations using the OP model and interplanetary data supplied by the NOAA server. In the next step, an autonomous module will be introduced for calculating the particle precipitation pattern according to the OP model. This will make it possible, using the OMNI2 database (http://omniweb.gsfc.nasa.gov), to provide a 2D picture of the auroras for any past time. Later on, the proposed GIS system will include other methods for determining the auroral oval boundaries, e.g., according to the geomagnetic variations along a meridian [Lunyushkin and Pensky, 2019]; using field-aligned currents recorded by low-orbit satellites [Lukianova and Christiansen, 2006]; according to the latitudinal distribution of the intensity of geomagnetic Pc5 pulsations [Kozyreva et al., 2016]; and applying the model of auroral precipitation [Vorobjev and Yagodkina, 2008]. Here we present a description of an interactive web-service (http://aurora-forecast.ru), which provides 30-min forecast of the auroral oval position and the aurora intensity. 
The construction of such information system with the integration of basic GIS methods has significant advantages over the currently known approaches. The domain name is officially registered and can be linked from any other website. For high-latitude regions of the Earth due to geometric distortions of projections, flat cartographic substrates are impractical to use, so it is worth to choose the virtual globe technique. From the existing libraries for visualization tools using virtual globe technology the choice has been made in favor of the ArcGIS API, although some other libraries (e.g., NASA World Wind and Cesium) also have a significant potential for construction of such systems. The proposed service can be used by operators of both ground-based technological systems and satellite systems to monitor the status of space weather in real time. Besides meteorological services and research organizations, this visualization tool can be used by aurora-watchers. Further elaboration of the system in order to improve its functionality and visualization efficiency is ongoing, so the authors will greatly acknowledge any suggestions to improve the proposed service. This study was supported by the grant 17-77-20034 from the Russian Science Foundation and the grant ES647692 from the program INTPART of Research Council of Norway (DAL). The model OVATION Prime was developed in Johns Hopkins Applied Physics Laboratory by P. Newell and associates, the IDL version is freely available at https://sourceforge. net/projects/ovation-prime. We appreciate helpful comments of the reviewer. Bobkov, A. E., A. V. Leonov (2017) , Virtual globe: history and modernity, Scientific Visualization, 9, no. 2, p. 49–63. Akasofu, S. I. (1977) , Physics of Magnetospheric Substorms, Springer Nature, Switzerland, https://doi.org/10.1007/978-94-010-1164-8. Kozyreva, O. V., V. A. Pilipenko, M. J. Engebretson, et al. (2016) , Correspondence between the ULF wave power distribution and auroral oval, Solar-Terrestrial Physics, 2, no. 2, p. 46–65, https://doi.org/10.12737/20999. Kozyreva, O. V., V. A. Pilipenko, V. I. Zakharov, M. J. Engebretson (2017) , GPS-TEC response to the substorm onset, GPS Solutions, 21, no. 3, p. 927–936, https://doi.org/10.1007/s10291-016-0581-6. Kozyreva, O. V., V. A. Pilipenko, R. I. Krasnoperov, et al. (2020) , Fine structure of substorm and geomagnetically induced currents, Annals of Geophysics, 63, no. 2, p. GM219, https://doi.org/10.4401/ag-8198. Lukianova, R., F. Christiansen (2006) , Modeling of the global distribution of ionospheric electric fields based on realistic maps of field-aligned currents, J. Geophys. Res., 111, p. A03213, https://doi.org/10.1029/2005JA011465. Lunyushkin, S. B., Yu. V. Penskikh (2019) , Diagnostics of the boundaries of the auroral oval based on the technique of inversion of magnetograms, Solar-Terrestrial Physics, 5, no. 2, p. 97–113, https://doi.org/10.12737/szf-52201913. Machol, J., J. . Green, R. . Redmon, et al. (2012) , Evaluation of OVATION Prime as a forecast model for visible aurorae, Space Weather, 10, p. S03005, https://doi.org/10.1029/2011SW000746. Newell, P. T., T. Sotirelis, K. Liou, et al. (2007) , A nearly universal solar wind–magnetosphere coupling function inferred from 10 magnetospheric state variables, J. Geophys. Res., 112, p. A01206, https://doi.org/10.1029/2006JA012015. Newell, P. T., T. Sotirelis, S. Wing (2009) , Diffuse, monoenergetic, and broadband aurora: The global precipitation budget, J. Geophys. Res., 114, no. A09207, p. 
216, https://doi.org/10.1029/2009JA014326. Newell, P. T., T. Sotirelis, S. Wing (2010) , Seasonal variations in diffuse, monoenergetic, and broadband aurora, J. Geophys. Res., 115, no. A03216, https://doi.org/10.1029/2009JA014805. Newell, P. T., K. Liou, Y. Zhang, et al. (2014) , OVATION Prime-2013: Extension of auroral precipitation model to higher disturbance levels, Space Weather, 12, p. 368–379, https://doi.org/10.1002/2014sw001056. Sigernes, F., M. Dyrland, P. Brekke, et al. (2011) , Two methods to forecast auroral displays, Journal of Space Weather and Space Climate, 1, no. A03, https://doi.org/10.1051/swsc/2011003. Smith, A. M., C. N. Mitchell, R. J. Watson, et al. (2008) , GPS scintillation in the high Arctic associated with an auroral arc, Space Weather, 6, no. 3, p. 1–7, https://doi.org/10.1029/2007sw000349. Sotirelis, T., P. T. Newell (2000) , Boundary-oriented electron precipitation model, J. Geophys. Res., 105, p. 18,655–18,673, https://doi.org/10.1029/1999JA000269. Vorobev, A. V., G. R. Vorobeva (2017a) , Geoinformation system for amplitude-frequency analysis of observation data for geomagnetic variations and space weather, Computer Optics, 41, p. 963–972, https://doi.org/10.18287/2412-6179-2017-41-6-963-972. Vorobev, A. V., G. R. Vorobeva (2017b) , Web-based 2D/3D visualization of geomagnetic field parameters and its variations, Scientific Visualization, 9, no. 2, p. 94–101. Vorobjev, V. G., O. I. Yagodkina (2008) , Empirical model of auroral precipitation power during substorms, J. Atm. Solar-Ter. Phys., 70, p. 654–662, https://doi.org/10.1016/j.jastp.2007.08.046. Vorobev, A. V., V. A. Pilipenko, A. G. Reshetnikov, et al. (2020) , Web-oriented visualization of auroral oval geophysical parameters, Scientific Visualization, 12.3, p. 108–118, https://doi.org/10.26583/sv.12.3.10. Received 2 March 2020; accepted 15 April 2020; published 1 October 2020. Citation: Vorobev A. V., V. A. Pilipenko, R. I. Krasnoperov, G. R. Vorobeva, D. A. Lorentzen (2020), Short-term forecast of the auroral oval position on the basis of the ``virtual globe'' technology, Russ. J. Earth Sci., 20, ES6001, doi:10.2205/2020ES000721. Copyright 2020 by the Geophysical Center RAS. Generated from LaTeX source by ELXfinal, v.2.0 software package.
Environment: Wind data OrcaFlex includes the effects of wind on: Vessels – see current and wind loads Lines – see hydrodynamic and aerodynamic loads 6D buoys – see lumped buoy added mass, damping and drag and spar buoy and towed fish drag 6D buoy wings – see wing type data Include wind loads You may choose whether or not wind loads are included for vessels, lines, 6D buoys and 6D buoy wings. Air density The air density is assumed to be constant and the same everywhere. Air kinematic viscosity Used to calculate Reynolds number. The value here is fixed and cannot be edited. Air speed of sound Used to calculate the wind turbine unsteady aerodynamics. Vertical wind variation Wind speed is assumed to be the same at all heights, unless a vertical wind speed profile is specified. To specify a vertical wind speed profile, you may define the wind speed variation with height above the mean water level (MWL) as a dimensionless multiplicative factor. To do so, you define a vertical variation factor variable data source. Negative factors may be used, allowing you to model reversing wind profiles. A value of '~' means that there is no vertical variation. If you are using the OCIMF model for wind load on vessels, the speed is expected to be that at an elevation of 10m (32.8 ft) above the mean water level (MWL). If you have the wind speed $v(h)$ at some other height h (in metres), then the wind speed $V(10)$ at 10m can be estimated using the formula $v(10) = v(h)\,(10/h)^{1/7}$. Note: The vertical wind variation profile data is not available, and is not applied, when full field wind is modelled. For full field wind the vertical variation in the wind velocity is specified directly in the external full field wind data. Wind type Wind can defined a number of different various ways, by setting the wind type to one of the following. The wind is defined by specifying its speed and direction, which remain constant over time. NPD spectrum, API spectrum, ESDU spectrum The wind speed varies randomly over time, using a choice of either the NPD spectrum, API spectrum or the ESDU spectrum. In these cases: The wind direction remains constant over time. The spectrum is defined by the ref. mean speed (the 1-hour mean speed at an elevation of 10m (32.8 ft) above MWL) and the elevation above MWL at which the wind speed is to be calculated. From these, OrcaFlex calculates the mean speed at the given elevation and parameterises the spectrum which determines the statistical variation about that mean. If a value of '~' is specified for the elevation, it is taken to be that of the reference mean speed, i.e. 10m (32.8 ft) above MWL. The view spectrum button plots a graph of the resulting spectrum. The min and max frequencies bound the range considered by the spectral discretisation algorithm: only spectral energy between these frequencies will contribute to the wind velocity. The default values correspond to the range of frequencies for which the NPD spectrum is defined, as documented in ISO 19901-1:2005. To include all the energy in the spectrum, use values of 0.0 and infinity. If the max frequency is infinity, we use an approximation to integrate the tail of the spectrum out to infinity which requires that the min frequency must be set to 0.0. The wind speed is modelled by the superposition of sinusoidal functions of time, their number given by number of components, whose amplitudes and frequencies are chosen by OrcaFlex to match the spectral shape. 
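In outline, the resulting wind speed time history has the form $v(t) = \bar v + \sum_i a_i \sin(2\pi f_i t + \phi_i)$, with the fluctuating part ramped in during the build-up period. The sketch below illustrates that superposition under placeholder amplitudes, frequencies and phases; it is not the OrcaFlex discretisation itself, which is described next.

import numpy as np

def wind_speed(t, mean_speed, amplitudes, frequencies, phases, build_up=0.0):
    # Wind speed as a mean value plus a sum of sinusoidal components.
    # The fluctuating part is ramped linearly over the build-up period,
    # a simplification of the ramping from mean to dynamic speed.
    t = np.asarray(t, dtype=float)
    ramp = np.clip(t / build_up, 0.0, 1.0) if build_up > 0.0 else 1.0
    fluct = sum(a * np.sin(2.0 * np.pi * f * t + p)
                for a, f, p in zip(amplitudes, frequencies, phases))
    return mean_speed + ramp * fluct

rng = np.random.default_rng(seed=1)        # repeatable phases, cf. the seed data item
n = 50                                     # number of components (placeholder)
freqs = np.linspace(0.002, 0.5, n)         # Hz, placeholder discretisation
amps = 0.1 * np.ones(n)                    # m/s, placeholder amplitudes
phases = rng.uniform(0.0, 2.0 * np.pi, n)
print(wind_speed([0.0, 60.0, 600.0], 10.0, amps, freqs, phases, build_up=100.0))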
OrcaFlex uses the same equal-energy approach to choosing the amplitudes and frequencies as for wave spectra discretisation. You should choose a number of components large enough to give a reasonable representation of the spectrum. The wind speed is ramped from the mean speed to the dynamic speed during the build-up period. The phases of the components are chosen using a pseudo-random number generator that generates phases which are uniformly distributed. The phases generated are repeatable – i.e. if you re-run a case with the same data then the same phases will be used – but you can choose to use different random phases by altering the seed used by the random number generator. The view wind components button gives a report of the components that OrcaFlex has chosen. This will tell you the width of the frequency intervals, which can help you to judge whether the number of components is sufficient. If the ESDU spectrum is selected, the absolute value of the site latitude must be specified. Note: When frequency domain dynamic analysis is enabled, the mean wind speed is used during static analysis, and the wind spectrum specifies the dynamic wind behaviour. User defined spectrum A user-defined spectrum is given by a table of pairs of values of frequency $f$ and S, the spectral energy $S(f)$. The given values of $f$ do not need to be equally-spaced. For intermediate values of $f$, OrcaFlex will obtain $S(f)$ by linear interpolation. S(f) is taken as zero for values of $f$ outside the range of the table. Your table should therefore include enough points to adequately define the shape of $S(f)$ (particularly where $S(f)$ has a wide range or high curvature) and should cover the full frequency range over which the spectrum has significant energy. The above description of wind speed calculation for NPD, API and ESDU spectra applies equally to user defined spectra, with the following exceptions: The mean speed is entered directly, rather than being calculated from the ref. mean speed and elevation. The min and max frequencies are determined by the range of frequencies in the table. User specified components The wind is defined as the sum of a number of given sinusoidal components. For each component you give: Frequency or period: you may specify either one of these – the other is automatically updated using the relationship period = 1 / frequency. Amplitude: the single amplitude of the component – that is, half the peak to trough height. Phase lag: the phase lag relative to the wind time origin. The randomise phases button will generate a random phase value for each component, replacing all the existing data. Time history (speed) The wind speed variation with time is specified explicitly by time history. Linear interpolation is used to obtain the wind speed at intermediate times. You must also provide mean speed and mean direction to apply in the statics calculation. The wind direction remains constant over time. Time history (speed & direction) Both the wind speed and direction variation with time are specified explicitly by time history. Linear interpolation is used to obtain the wind speed and direction at intermediate times. You must also provide mean speed and mean direction to apply in the statics calculation. The wind speed and direction are ramped from the mean values to the dynamic values during the build-up period. Full field Full field wind allows for variation of wind velocity in both space and time, with data specified in an external file. 
At the moment the only supported format is the binary TurbSim .bts full field file. The coordinate system used in the .bts files is a right-handed system with $x$ horizontal in the direction of propagation, $y$ horizontal normal to $x$, and $z$ vertically upwards. The .bts files contain time series of 3D wind velocity, $V_g(y,z,t)$, at points on an evenly spaced grid in the vertical $yz$ plane. Optionally, the file may also contain time series of 3D wind velocity, $V_t(z,t)$, at tower points in a single line below the grid. For full field wind you must define the following: The name of the .bts file. You can give either its full path or a relative path. Clicking file header allows you to view the information contained in the .bts file header. The wind direction and origin which determine how the .bts file's coordinate system is mapped on to the OrcaFlex coordinate system. A value of '~' for the $Z$ wind origin can be used to specify that the vertical origin is at the mean water level. The wind time origin, specified relative to the global time origin. Ramped, which allows you to control how the wind is applied during the build-up period. When checked, the wind velocity is ramped from zero to the full value during the build-up period, otherwise the full value is applied at all times. When wind is ramped, no wind is applied during static analysis, otherwise the wind velocity at the simulation start time is applied during static analysis. OrcaFlex uses Taylor's frozen turbulence hypothesis, and the mean wind speed recorded in the .bts file, to map between $V_g(y,z,t)$ and $V_g(x,y,z,t)$, and between $V_t(z,t)$ and $V_t(x,z,t)$. To interpolate in the grid, OrcaFlex uses barycentric interpolation in $y,z,t$ space. Over the tower region, if defined, OrcaFlex uses barycentric interpolation in $z,t$ space. For points outside the grid, OrcaFlex clips to the edge of the grid, along each primary axis. For example, consider a .bts file with no tower points, and with a grid defined at $y_1, y_2, \ldots, y_{n_y}$ and $z_1, z_2, \ldots, z_{n_z}$. For values of $y < y_1$ or $y > y_{n_y}$, OrcaFlex clips $y$ to $y_1$ or $y_{n_y}$ respectively. Similarly, for values of $z < z_1$ or $z > z_{n_z}$, OrcaFlex clips $z$ to $z_1$ or $z_{n_z}$ respectively. The .bts file format supports periodic time histories. If the file is periodic, as recorded in the file header, OrcaFlex will interpret the data accordingly. For non-periodic files, if extrapolation in time is required it is performed by clipping to the defined range.
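A minimal sketch of this sampling procedure is given below, under simplifying assumptions: generic arrays rather than a real .bts reader, nearest-grid-point lookup in place of the barycentric interpolation described above, and the sign of the frozen-turbulence time shift taken as an assumption.

import numpy as np

def sample_full_field(V, times, y_grid, z_grid, u_mean, x, y, z, t):
    # Sample a stored full-field wind velocity V[time, y, z, component] at an
    # arbitrary point: Taylor's frozen-turbulence hypothesis converts the
    # streamwise offset x into a time shift using the mean wind speed, and
    # y, z and the shifted time are clipped to the stored ranges.
    t_shift = np.clip(t - x / u_mean, times[0], times[-1])
    y = np.clip(y, y_grid[0], y_grid[-1])
    z = np.clip(z, z_grid[0], z_grid[-1])
    k = int(np.argmin(np.abs(times - t_shift)))   # nearest stored time step
    j = int(np.argmin(np.abs(y_grid - y)))        # nearest grid column
    i = int(np.argmin(np.abs(z_grid - z)))        # nearest grid row
    return V[k, j, i]

# Tiny synthetic example (placeholder data, not read from a .bts file)
times = np.arange(0.0, 10.0, 0.5)
y_grid = np.linspace(-50.0, 50.0, 11)
z_grid = np.linspace(10.0, 110.0, 11)
V = np.random.default_rng(0).normal(12.0, 1.5, (len(times), len(y_grid), len(z_grid), 3))
print(sample_full_field(V, times, y_grid, z_grid, u_mean=12.0, x=30.0, y=0.0, z=90.0, t=5.0))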
Bridge building with irregular planks Imagine you have a big rectangular pond in your back garden. You wish to build a bridge from your house in the lower left corner to the small pagoda in the top right. You have lots of planks of length $1$ and $2$. You only wish to place planks orthogonal to the sides of the pond, and you don't want to go backwards ever. The pond is $10\times10$. How many ways are there to do this? . . . ._P . . ._. . H_._. . . For a bonus, is there a generic solution for planks of length $l_1,l_2,\dots,l_k$? mathematics combinatorics JMPJMP Rather than thinking of planks as having lengths, think of them as defining certain sets of vectors. So in this case we have (1,0), (0,1), (2,0), (0,2). (Caution: if you have e.g. a plank of length 5 then you need to allow (3,4) and (4,3) as well as (5,0) and (0,5)! [EDITED to add:] No, as pointed out by another user in comments that's wrong because the question specifies orthogonal only. Though obviously you could also do it the other way if you wanted :-).) Now we have a recurrence relation: if we write $N(a,b)$ for the number of ways to span a pond of size $(a,b)$ then we have $N(0,0)=1$ and $N(a,b)=\sum N(a-x,b-y)$ where the sum is over plank-vectors $(x,y)$. For the particular case here, the table looks like this: $$\begin{array}{r} 1 & 1 & 2 & 3 & 5 & 8 & 13 & 21 & 34 & 55 & 89 \\ 1 & 2 & 5 & 10 & 20 & 38 & 71 & 130 & 235 & 420 & 744 \\ 2 & 5 & 14 & 32 & 71 & 149 & 304 & 604 & 1177 & 2256 & 4266 \\ 3 & 10 & 32 & 84 & 207 & 478 & 1060 & 2272 & 4744 & 9692 & 19446 \\ 5 & 20 & 71 & 207 & 556 & 1390 & 3310 & 7576 & 16807 & 36331 & 76850 \\ 8 & 38 & 149 & 478 & 1390 & 3736 & 9496 & 23080 & 54127 & 123230 & 273653 \\ 13 & 71 & 304 & 1060 & 3310 & 9496 & 25612 & 65764 & 162310 & 387635 & 900448 \\ 21 & 130 & 604 & 2272 & 7576 & 23080 & 65764 & 177688 & 459889 & 1148442 & 2782432 \\ 34 & 235 & 1177 & 4744 & 16807 & 54127 & 162310 & 459889 & 1244398 & 3240364 & 8167642 \\ 55 & 420 & 2256 & 9692 & 36331 & 123230 & 387635 & 1148442 & 3240364 & 8777612 & 22968050 \\ 89 & 744 & 4266 & 19446 & 76850 & 273653 & 900448 & 2782432 & 8167642 & 22968050 & 62271384 \end{array} $$ The number you want is in the bottom right of the array. This happens to be http://oeis.org/A036355. In general, the generating function for these things is $\frac1{1-\sum x^{dx}y^{dy}}$ where the sum is over plank-vectors $(dx,dy)$. I guess you can probably get a closed form out of that somehow. $\begingroup$ I was about to post the same table. :-) $\endgroup$ – Jaap Scherphuis $\begingroup$ Why am I not surprised that the person saying that is you? :-) $\endgroup$ – Gareth McCaughan ♦ $\begingroup$ On my computer, the table spills over into the HNQ. Maybe just me? $\endgroup$ – Brandon_J $\begingroup$ @GarethMcCaughan The (3,4)/(4,3) bit is wrong, and confused me for a minute before I realized what you meant. You can't place planks on an angle like that, since the question specified that planks must be "orthogonal to the sides of the pond". $\endgroup$ – Billy Mailman $\begingroup$ @Brandon_J Not just you, me too $\endgroup$ – Sensoray My answer (using a computer program) is: There are 8777612 ways to arrange the planks. 
I solved this using a C program:

#include <stdio.h>

#define WIDTH 10
#define DEPTH 10
#define PLANK 2

unsigned long long cache[DEPTH][WIDTH];

/* Count monotone plank paths from (row, col) to (DEPTH-1, WIDTH-1). */
unsigned long long recur(int row, int col)
{
    if (row >= DEPTH || col >= WIDTH)
        return 0;                        /* stepped past the far bank */
    if (row == DEPTH - 1 && col == WIDTH - 1)
        return 1;                        /* reached the pagoda corner */
    if (cache[row][col] != 0)
        return cache[row][col];          /* already computed */

    unsigned long long paths = 0;
    for (int p = 1; p <= PLANK; p++) {   /* plank lengths 1..PLANK */
        paths += recur(row + p, col);
        paths += recur(row, col + p);
    }
    cache[row][col] = paths;
    return paths;
}

int main(void)
{
    printf("Paths = %llu\n", recur(0, 0));
    return 0;
}

Note that this is from coordinate (0,0) to (9,9) because the start and finish points are in the pond. The distance is $9$ in each direction. It checks out when manually counting small ponds. This also provides a generic solution for ponds up to $26 \times 26$, or up to $2^{64}-1$ paths. For small ponds the cache isn't necessary.

Weather Vane

@JonMarkPerry isn't the other answer for 11 x 11 pond? Count across the table. – Weather Vane
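For the bonus question about planks of lengths $l_1,l_2,\dots,l_k$, the recurrence in the first answer generalises directly. Below is a minimal Python sketch (not taken from either answer) that tabulates it for an arbitrary set of orthogonal plank lengths; it reproduces $8777612$ for the $9\times9$-step crossing computed above and $62271384$ for the bottom-right entry of the table.

from functools import lru_cache

def crossings(width, depth, plank_lengths=(1, 2)):
    # Number of monotone orthogonal plank paths from (0, 0) to (width, depth),
    # where each plank advances by one of the given lengths in x or in y.
    @lru_cache(maxsize=None)
    def n(a, b):
        if a < 0 or b < 0:
            return 0
        if a == 0 and b == 0:
            return 1
        return sum(n(a - l, b) + n(a, b - l) for l in plank_lengths)
    return n(width, depth)

print(crossings(9, 9))       # 8777612
print(crossings(10, 10))     # 62271384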
ivermectin in drinking water for chickens BP conceived and designed the research plan, coordinated the study and participated in writing of the manuscript. Beugnet F, Chauve C, Gauthey M, Beert L. Resistance of the red poultry mite to pyrethroids in France. If the condition does not improve within two or three days, consult a veterinarian. 4.2 out of 5 stars 176. The traditional methods against D. gallinae have relied on a range of acaricides, including carbamates, organophosphates, amidines, pyrethroids, and more recently spinosad, applied to premises and birds as sprays, mists and dusts [10, 11]. The mechanism of action of fluralaner is similar to that of macrocyclic lactones [22, 23]. Treatment for gapeworms includes Levamisole, Ivermectin, Fenbendazole, and Albendazole. Made fresh daily for two consecutive days. 1-16 of 65 results for "ivermectin for birds" Skip to main search results Eligible for Free Shipping. the mites were red and full of blood. 2017;10:457. In addition, D. gallinae is of growing concerns in human health. If added to flock water source- 4 mL per gallon of water. To our knowledge, the effects of MLs on blood-meal ingestion, blood digestion and reproduction were observed for the first time in the present study. Manage cookies/Do not sell my data we use in the preference centre. Every effort has been made to ensure the accuracy of the Tetracycline Soluble Powder 324 information published above. MOX, IVM and EPR demonstrated higher acaricidal efficacies post-treatment compared with the control, i.e. There are many different brands and derivatives of Ivermectin. Marangi M, Cafiero MA, Capelli G, Camarda A, Sparagano OA, Giangaspero A. 1-16 of 79 results for "ivermectin for chickens" Skip to main search results Amazon Prime. The drug use level must be adjusted to provide 10 mg/lb. As shown in Fig. The observed appearance of mites was in agreement with the result of weighing. In Goldie's case, she didn't drink any of the water with the Aviverm in it, so Martin mixed up some of the diluted water with wet kitten food which she then ate. CHICKENS AND TURKEYS: A stock solution of 71.4 g dissolved in 1,500 mL (approximately 50 fl. Data sources include IBM Watson Micromedex (updated 7 Dec 2020), Cerner Multum™ (updated 4 Dec 2020), ASHP (updated 3 Dec 2020) and others. It has been confirmed that Anopheles albimanus is nearly impervious to the effects of IVM [40]. oz. and Klebsiella spp. There is an urgent requirement for research to uncover more efficient control strategies. MLs show high potency against various types of pests at a very low dosage (for example 0.2 mg/kg), e.g. Broilers consume approximately 1.6 to 2.0 times as much water as feed on aweight basis. Interestingly, only a few studies have been conducted on the efficacy of MLs against the PRMs, which showed that the efficacy of MLs against PRMs was quite low in comparison to those against mites in mammals. statement and For the MOX-treated group, the acaricidal efficacy reached 45.60% on Day 10. Levamisole fed at a level of 0.04% for 2 days or at 2 g/gal. This systemic treatment approach is highly effective and convenient, and can help to overcome the limitations in current methods for D. gallinae control because any vital blood-feeding mite stage in a poultry house will inevitably feed on treated poultry [19]. Bed bugs exhibited low sensitivity to MOX, and MOX could cause various sublethal effects on bed bugs [37]. George DR, Shiel RS, Appleby WG, Knox A, Guy JH. $18.65 $ 18. Parasite. Google Scholar. 
Mix into complete feed or drinking water in the recommended dosages. So there is known information about … Ash LS, Oliver JH Jr. Susceptibility of Ornithodoros parkeri (Cooley) (Acari: Argasidae) and Dermanyssus gallinae (DeGeer) (Acari: Dermanyssidae) to ivermectin. The apertures of the 96-well round bottomed ELISA plates were closed by plastic lids in a way that allowed air exchange but prevented mites from escaping. $$ {\text{Efficacy}} = \frac{{{\text{Mt}} - {\text{Mc}}}}{{1 - {\text{Mc}}}} \times 100{{\% }} $$, $$ {\text{Oviposition rate}}\, = \,\frac{{{\text{No}}.\, {\text{of mites laying eggs}}}}{{{\text{No}}.\, {\text{of mites}}}}\, \times \,100{{\% }} $$, $$ {\text{Fecundity}} = \frac{{{\text{No}}.\, {\text{of eggs }}}}{{{\text{No}}.\, {\text{of mites laying eggs}}}} \times 100\% $$, $$ {\text{Hatchability}}\, = \,\frac{{{\text{No}}.\, {\text{of larvae}}}}{{{\text{No}}.\, {\text{of eggs}}}}\, \times \,100{{\% }} $$, $$ {\text{Ingestion rate}}\, = \,\frac{\text{Average weight gain after feeding}}{\text{Average body weight before feeding}}\, \times \,100{{\% }} $$, $$ {\text{Digestion rate}}\, = \,\frac{{{\text{Average weight after feeding}} - {\text{Average weight after digesting period}}}}{\text{Average weight gain after feeding}}\, \times \,100{{\% }} $$, http://creativecommons.org/licenses/by/4.0/, http://creativecommons.org/publicdomain/zero/1.0/, https://doi.org/10.1186/s13071-019-3599-0. The resistance to organophosphates, pyrethroids, carbamates and dichlorodiphenyltrichloroethane (DDT) has been found in D. gallinae populations across the EU [10, 11]. Should the poultry red mite Dermanyssus gallinae be of wider concern for veterinary and medical science? The ingestion rates of mite are shown in Fig. Exp Appl Acarol. Eligible for Free Shipping. In addition to the above, the emergence of resistance to available acaricides is one of the main reasons for the failure of acaricides to manage D. gallinae [13,14,15,16,17,18]. Get it as soon as Tue, Nov 24. Differential susceptibilities of Anopheles albimanus and Anopheles stephensi mosquitoes to ivermectin. If you are going to put drops on the dosage is as supplied by Sandy - copied below The data are expressed as the mean ± SD. Tetracycline Hydrochloride Soluble Powder, Each pound contains 324 g of tetracycline hydrochloride. All live engorged mites were weighed when mites were collected, and then were transferred into 100-ml centrifuge tubes. Springer Nature. Ivermectin was administered over four 5-days periods in drinking water; the ivermectin dose was 2.5 mg/kg of body weight per day. After observation, live mites were transferred to new 100-ml centrifuge tubes and placed back in the incubator. Copyright © 2020 Animalytix LLC. For example, it was found that the intra-abdominal injection of birds with ivermectin at 0.6 mg/kg was insufficient to control D. gallinae and efficacious concentrations were from 1.8 to 5.4 mg/kg [30, 31]. Article When the water with or without the probiotic was supplemented after the 1.5 hour suspension period all the birds drank from the drinking trough and we assumed that they ingested the targeted dose of treatment. The evaluation parameters of effects included the acaricidal efficacy, the effects on the reproduction capability and blood-meal digestion of PRMs. The exact reasons for this disparity are still unclear; however, two possible causes may relate to this phenomenon. 32 (1 Quart) oz. The average weight of each mite in each count point was calculated after weighing. 
An interesting issue is the different performance of the three MLs against PRMs. However, several issues need to be addressed before they are applied in poultry farms. Chicks in the control group received the carrier solvent without drug. 2018;66:121–5. adult mite mortality rate in treatment groups, adult mite mortality rate in control group. 1989;26:133–9. Furthermore, a previous study found a great adverse impact on the fertility of female ticks recovered from calves treated with a single dose of 1 mg/kg of EPR [43]. Free Shipping by Amazon ... Prozap 1499540 Garden & Poultry Dust, 2 Lb. Put the contents of the bottle in 1 gallon of drinking water. It's use has now spread to commercial livestock and is used in cattle, sheep and pigs. 2001;102:121–31. 65 $21.99 $21.99. TP and BW facilitated sample collection and assisted with the experiment. It's use has now spread to commercial livestock and is used in cattle, sheep and pigs. Parasit Vectors. In addition, the numbers of hatched larvae were also recorded by using a stereomicroscope. 2009;48:3–10. of body weight in divided doses. Then, the metal cages were put in plastic storage boxes and the rearing systems were kept in an artificial climate incubator without lighting for 12 h to enable the mites to feed on the chicks. the contents just says it is 100ml and will treat 10 gallons and normal dose. Acaricidal efficacy of orally administered macrocyclic lactones against poultry red mites (Dermanyssus gallinae) on chicks and their impacts on mite reproduction and blood-meal digestion. … Most of them can be separated into two groups, roundworms and flatworms. 2017;61:381–417. Never take ivermectin in larger amounts, or for longer than recommended by your doctor. Before the challenge, ten trap tubes were placed in a new metal cage at the aggregation sites of mites at the bottom of the cage to trap challenged mites [33]. or 3 pt.) 99 (£191.96/l) £49.99 £49.99 Calculate total body weight of flock (lbs/kg) 2. Drugs.com provides accurate and independent information on more than 24,000 prescription drugs, over-the-counter medicines and natural products. 99. The problem is the 1% Ivermectin injectabels, to my knowledge, are not very soluble in water, so to do what this person recommends, adding and shaking, will disperse the Ivermectin in the water, only later for it to gather at the top in a water dish. eprinomectin (EPR), moxidectin (MOX) or ivermectin (IVM), on PRMs fed on chicks following oral administration. Xu, X., Wang, C., Zhang, S. et al. J Insect Physiol. Malar J. For use in the control and treatment of the following conditions in swine, calves and poultry: SWINE: Bacterial enteritis (scours) caused by Escherichia coli and bacterial pneumonia associated with Pasteurella spp., Haemophilus spp. Similarly, the average numbers of eggs per mite produced in EPR, MOX and IVM groups were 0.33, 1.00 and 0.91, respectively, while the average number of eggs for the control group was 3.61; these differences between the control group and the three treatment groups were significant (F(3, 8) = 6.41, P = 0.002). 2007;144:344–8. The results indicate the EPR has a higher potency against PRMs than IVM and MOX. The acaricidal efficacy in the EPR-treated group gradually increased, from 48.57% on Day 1 to 90.64% on Day 5, achieving 100% on Day 10. Todisco G, Paoletti B, Giammarino A, Manera M, Sparagano OA, Iorio R, et al. Effect of permethrin impregnated plastic strips on Dermanyssus gallinae in loose-housing systems for laying hens. 
Similar results have been observed in cattle, whereby it was demonstrated that EPR was more lethal to Anopheles arabiensis than IVM and MOX [44]. Ivermectin pour-on / drops is applied to the skin. Field efficacy and safety of fluralaner solution for administration in drinking water for the treatment of poultry red mite (Dermanyssus gallinae) infestations in commercial flocks in Europe. After collection as mentioned above, the mites in 96-well round bottomed ELISA plates were incubated for 1 week and observed using a stereomicroscope. 1994;371:707–11. Vet Parasitol. Exp Appl Acarol. $18.65 $ 18. On Day 3 after mite collection, the blood in mites in control group was almost completely digested and the digestion rate reached 94.41%, while the digestion rates were 36.58, 68.74 and 42.22% in the EPR-, MOX- and IVM-treated groups, respectively (Table 2). Use a syringe to put down the chicken's throat. Ivermectin pharmacokinetics, metabolism, and tissue/egg residue profiles in laying hens. Cattle, Sheep, and Goats: Add 0.1g per 1Kg body weight in feed daily for 3-5 days. 2015;8:178. of warm water provides about 34 mg of tetracycline hydrochloride activity per mL. Prevalence and key figures for the poultry red mite Dermanyssus gallinae infections in poultry farm systems. of body weight each day in divided doses for swine and calves, or in the case of chickens and turkeys, 25 mg/lb. It is not clear whether MLs directly interfered the development of embryo in eggs or MLs disturb the transfer of nutrients to eggs. Mites at different periods flock ' S water source administrated with EPR, eprinomectin MOX! Of chickens, only livestock 16 fl to jurisdictional claims in published maps and institutional affiliations bugs [ 37.! ) than in the subcutaneous space includes levamisole, ivermectin and nutrition De Geer, 1778 in. Ivm, ivermectin, selamectin, and the total number of mites was calculated at each assessment.. Pour-On for control a clean, sterilized water container and vigorously shake the vigorously... 1 week and observed by stereomicroscope on 12 H ( collection ) after.... Against a wide range of bacteria responsible for a variety of respiratory illnesses cholera. Gc, Stafford KA P, Farias C, Gauthey M, Xu FR, Liang,... Is 100ml and will treat 10 gallons and normal dose rearing system with chicks under laboratory [! Chauve C. the poultry red mite Dermanyssus gallinae, is one of the most economically deleterious threats to industry... Was first used in chickens or turkeys for more than 14 consecutive days the result of weighing water provides 34..., for 7-14 days or slugs the experiment every 24 H after the oral is... Potency on D. gallinae is of growing concerns in human health various sublethal on! Show any negative effects on the reproduction capability and blood-meal digestion of mites in control group product. Most species of worms that infect chickens a meal, Chiquet M, Xu J Tauson. Efficient rearing system rapidly producing large quantities of poultry red mite, Dermanyssus gallinae ( collection ) infestation... All values are shown as the only feed/water during treatment, ticks ) depends on the skin monitored. Wider concern for veterinary and medical Science to systematically evaluate the effects on the efficacy of Broadline against mange. Manufactured for: Dealer Distribution of America, Kansas City, MO 64120 egg maturation in various blood-feeding insects e.g. The best choice for treatment and control of Boophilus microplus ( Canestrini ) ticks ( Acari Dermanyssidae... 
Ivermectin is an antiparasitic medication that is used off-label in poultry, and oral administration of MLs could be a potential method of controlling D. gallinae. Under laboratory conditions, the fecundity and egg hatching rate of mites in the treatment groups also decreased, although the mechanisms behind this disparity are still unclear and need to be examined by further studies. The acaricidal efficacy for the IVM-treated group was 71.32%. In the challenge experiments, cages were infested with 200 starved adult female D. gallinae, and mites were collected and observed by stereomicroscope at 12 h and 24 h after the oral administration. The authors declare that they have no competing interests and would like to thank the "National Key Research and Development Program of China" and the "National Natural Science Foundation of China" for providing funds to support this study.
CommonCrawl
What classes of data structures can be made persistent? Persistent data structures are immutable data structures. Operations on them return a new "copy" of the data structure, but altered by the operation; the old data structure remains unchanged though. Efficiency is generally accomplished by sharing some of the underlying data, and avoiding full copying of the data structure. Are there results about classes of data structures that can be made persistent (while keeping the same or very similar complexities)? Can all data structures be made persistent (while keeping the same or very similar complexities)? Are any data structures known to be unable to be made persistent (while keeping the same or very similar complexities)? reference-request data-structures functional-programming persistent-data-structure D.W.♦ Realz Slaw $\begingroup$ You can't make a vector persistent with preserved O(1) complexity for accessing a random element. $\endgroup$ – smossen Nov 22 '13 at 20:38 $\begingroup$ Possibly relevant: What are the outstanding questions in purely functional data structures? $\endgroup$ – Realz Slaw Nov 22 '13 at 20:41 $\begingroup$ @smossen can you prove that? $\endgroup$ – Realz Slaw Nov 22 '13 at 20:42 $\begingroup$ Your first question is a very broad question. There are many results on the topic of data structures that can be made persistent. One could write an entire book on the subject, and some folks have: for instance, Okasaki's book is a classic on the subject. Have you done some research on this topic? Can you narrow down the question? As it stands, I suspect it might be too broad to be a good fit for this site. Maybe split out the 3rd question to a separate question? $\endgroup$ – D.W.♦ Nov 22 '13 at 20:52 $\begingroup$ @Realz Slaw: I can't prove it formally, but I think it is common sense. O(1) access to elements in vectors (including hash tables) depends on fixed time for address decoding on a given hardware. Persistence adds one or two dimensions in addition to the vector index. But hardware addresses are still one dimensional. $\endgroup$ – smossen Nov 22 '13 at 20:54 Positive result: persistence does not cost too much. One can show that every data structure can be made fully persistent with at most an $O(\lg n)$ slowdown. Proof: You can take an array and make it persistent using standard data structures (e.g., a balanced binary tree; see the end of this answer for a bit more detail). This incurs an $O(\lg n)$ slowdown: each array access takes $O(\lg n)$ time with the persistent data structure, instead of $O(1)$ time for the non-persistent array. Now take any imperative algorithm whose running time in the RAM model is $O(f(n))$, where $n$ denotes the amount of memory used. Represent all of memory as one big array (with $n$ elements), and make it persistent using a persistent map. Each step of the imperative algorithm incurs at most an $O(\lg n)$ slowdown, so the total running time is $O(f(n) \lg n)$. Apparently it is possible to do a bit better: one can reduce the slowdown factor to $O(\lg \lg n)$ (expected, amortized time), using the techniques in the Demaine paper cited below -- but I am not familiar with the details of that work, so I cannot vouch for this myself. Thanks to jbapple for this observation. Negative result: you cannot avoid some slowdown, for some data structures. To answer your third question, there exist data structures where it is known that making them persistent introduces some slowdown.
In particular, consider an array of $n$ elements. Without persistence, each array access takes $O(1)$ time (in the RAM model). With persistence, it has apparently been shown that there is no way to build a persistent array with $O(1)$ worst-case complexity for accessing a random element. In particular, there is apparently a lower bound showing that fully persistent arrays must have $\Omega(\lg \lg n)$ access time. This lower bound is asserted on p.3 of the following paper: Confluently Persistent Tries for Efficient Version Control. Erik D. Demaine, Stefan Langerman, Eric Price. Algorithmica, volume 57, number 3, 2010, pages 462–483. The lower bound is attributed to Mihai Patrascu, but there is no citation to a source that gives the details of the proof of this asserted lower bound. A rich area of research. If we take an arbitrary data structure or algorithm, it's a bit of a delicate question whether you can make it persistent with at most $O(1)$ slowdown or not. I don't know of any general classification theorem. However, there is a lot of research into ways to make specific data structures persistent, in an efficient way. There is also a strong connection with functional programming languages. In particular, every data structure that can be implemented in a purely functional way (with no mutations) is already a persistent data structure. (The converse is not necessarily the case, alas.) If you want to squint your eyes, you could take this as some weak sort of partial classification theorem: if it is implementable in a purely functional programming language with the same time bounds as in an imperative language, then there is a persistent data structure with the same time bounds as the non-persistent one. I realize this probably isn't what you were looking for -- it is mostly just a trivial re-phrasing of the situation. How to make a persistent array. I won't try to describe the construction for how to build a fully persistent array with $O(\lg n)$ worst-case access time. However, the basic ideas are not too complicated, so I'll summarize the essence of the ideas. The basic idea is that we can take any binary tree data structure, and make it persistent using a technique called path copying. Let's say we have a binary tree, and we want to modify the value in some leaf $\ell$. However, for persistence, we don't dare modify the value in that leaf in place. Instead, we make a copy of that leaf, and modify the value in the copy. Then, we make a copy of its parent, and change the appropriate child pointer in the copy to point to the new leaf. Continue in this way, cloning each node on the path from the root to the leaf. If we want to modify a leaf at depth $d$, this requires copying $d$ nodes. If the balanced binary tree has $n$ nodes, then all leaves have depth $O(\lg n)$, so this operation on the binary tree takes $O(\lg n)$ time. There are some details I'm skipping -- to achieve $O(\lg n)$ worst-case time, we may need to rebalance the tree to ensure it remains balanced -- but this gives the gist of it. You can find more explanations, with pretty pictures, at the following resources: Read the sections labelled "Binary search trees" and "Random-access structures" (specifically, the tree method) at http://toves.org/books/persist/index.html. Or, read http://netcode.ru/dotnet/?artID=6592#BinaryTrees and some of the subsequent sections.
Or, read the sections labelled "Functional data structures" and "Path copying" (starting on p.4) of the Demaine paper cited above: http://erikdemaine.org/papers/ConfluentTries_Algorithmica/paper.pdf That will give you the main idea. There are extra details to take care of, but the details are out of scope for this question. Fortunately, this is all standard stuff, and there is lots of information available in the literature on how to build such data structures. Feel free to ask a separate question if the above resources aren't enough and you want more information about the details of building a persistent array data structure. D.W.♦ $\begingroup$ I don't really understand the first paragraph, how would I go about making an array persistent using a red-black tree? $\endgroup$ – G. Bach Nov 23 '13 at 0:56 $\begingroup$ @G.Bach, there's a pretty good explanation in the sections labelled "Binary search trees" and "Random-access structures" (specifically, the tree method) at toves.org/books/persist/index.html. For another nice description, see netcode.ru/dotnet/?artID=6592#BinaryTrees and some of the subsequent sections. That will give you the main idea. The details are out of scope for this question, but this is all standard stuff; I encourage you to ask a separate question if you want more information about how to build such a data structure. $\endgroup$ – D.W.♦ Nov 23 '13 at 3:19 $\begingroup$ Good answer, D.W. You can bring the time bound down to (expected amortized) $O(\lg \lg n)$. See Demaine et al.'s "Confluently Persistent Tries for Efficient Version Control" $\endgroup$ – jbapple Nov 23 '13 at 3:27
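To make the path-copying idea described in the answer concrete, here is a minimal sketch in Python (not taken from the cited sources) of a persistent binary search tree. Without rebalancing, each update copies only the nodes on the root-to-leaf search path, so the cost is proportional to the depth of the tree rather than its size:

```python
# Minimal sketch of a persistent BST via path copying.
# Every insert returns the root of a NEW version; old versions stay readable.

class Node:
    __slots__ = ("key", "value", "left", "right")

    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def insert(root, key, value):
    """Return a new root; only the nodes on the search path are copied."""
    if root is None:
        return Node(key, value)
    if key < root.key:
        # copy this node, share the untouched right subtree
        return Node(root.key, root.value, insert(root.left, key, value), root.right)
    if key > root.key:
        # copy this node, share the untouched left subtree
        return Node(root.key, root.value, root.left, insert(root.right, key, value))
    return Node(key, value, root.left, root.right)  # overwrite the value in the copy

def lookup(root, key):
    while root is not None:
        if key == root.key:
            return root.value
        root = root.left if key < root.key else root.right
    return None

# Usage: each version is an ordinary value that can be kept around.
v0 = None
v1 = insert(v0, 2, "a")
v2 = insert(v1, 2, "b")
assert lookup(v1, 2) == "a" and lookup(v2, 2) == "b"
```

Adding the rebalancing needed for the $O(\lg n)$ worst-case bound (e.g. a red-black scheme) only changes the constant number of extra nodes copied per update; the sharing pattern stays the same.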
CommonCrawl
Is the distance squared in Newton's law of universal gravitation really a square? When I was at university (in the late 90s, circa 1995) I was told there had been research investigating the $2$ (the square of the distance) in Newton's law of universal gravitation. $$F=G\frac{m_1m_2}{r^2}.$$ Maybe a model like $$F=G\frac{m_1m_2}{r^a}$$ with $a$ slightly different from $2$, say $1.999$ or $2.001$, fits some experimental data better? Is that really true? Or did I misunderstand something? gravity experimental-physics newtonian-gravity Qmechanic♦ Alessandro Jacopson $\begingroup$ Anyways, I doubt anyone would try to verify the inverse-square law in the late 90s (Unless you went to uni in 1890s..). I'm quite sure general relativity was sufficiently established, and it makes Newton's law outdated. $\endgroup$ – Manishearth $\begingroup$ @Manishearth :-) It was 1990s and maybe the professor did not mention the year of the researches... I remember it was just an aside comment. $\endgroup$ – Alessandro Jacopson $\begingroup$ Asaph Hall lived from 1829-1907. It was in 1894, before even special relativity was born. At that time the anomaly of Mercury was a problem that cried for a resolution. Einstein solved the problem much later. $\endgroup$ – Arnold Neumaier $\begingroup$ farside.ph.utexas.edu/teaching/336k/Newton/node116.html $\endgroup$ – Jim Graber $\begingroup$ You would have tested this in the 20th century if you were trying to detect extra dimensions in a braneworld type thing--then the EM forces are confined to a 3D brane and fall off like $\frac{1}{r^2}$, while the gravitational forces live in the bulk and fall off like $\frac{1}{r^{d-1}}$, where $d$ is the number of spatial dimensions in your theory. People were interested in seeing if there were short-distance deviations from the inverse-square law in order to give credence to extra dimensions. $\endgroup$ – Jerry Schirmer This was suggested by Asaph Hall in 1894, in an attempt to explain the anomalies in the orbit of Mercury. I retrieved the original article in http://adsabs.harvard.edu/full/1894AJ.....14...49H Interestingly, he mentions in the introduction that Newton himself had already considered in the Principia what happens if the exponent is not exactly 2, and had concluded that the observations available to him strongly supported the exact power 2! The story is retold, e.g., on p.356 of N.R. Hanson, Isis 53 (1962), 359-378. See also Section 2 of http://adsabs.harvard.edu/full/2005MNRAS.358.1273V Arnold Neumaier $\begingroup$ Asaph Hall III (1829,1907) or Asaph Hall IV (1859–1930)? $\endgroup$ $\begingroup$ I didn't know there were two of them (in fact father and son). Unfortunately, the publication (see the link in my edited answer) doesn't help decide your query, and I have no idea how to find out. $\endgroup$ Let's first see why the inverse-square form is special. Bertrand's Theorem states that only two types of central potentials will produce stable orbits: the harmonic oscillator potential $V=\frac{1}{2}kr^2$ and the potential $V=-\frac{k}{r}$, which produces an inverse-square force law. Obviously the age of the universe is finite, so the fact that planets' orbits survived until now need not imply it will continue to be so in the future.
Another argument why this type of potential is so common is that, when doing quantum field theory, the propagator (details depend on whether the particle is a (gauge) boson, fermion or scalar; I will stick with scalars for now) has the form $$\frac{1}{q^2+m^2}.$$ Thus, if this particle were the force carrier of your force with coupling $g$, the potential would basically be the Fourier transform of the propagator $$V(r) =-\frac{g^2}{(2\pi)^3}\int d^3q\,\frac{4\pi}{q^2+m^2}e^{i\vec{q}\cdot \vec{r}} = -g^2\frac{e^{-mr}}{r}.$$ This is the famous Yukawa potential. For massless force carriers the damping term goes to 1 and the force becomes long range with an inverse-square force law. Up to small details this is analogous to the gauge boson case, e.g. the masslessness of the photon makes the EM force long range, whereas the massiveness of the W and Z bosons makes the weak force short-ranged. The above derivations assume three space dimensions. Theories with extra dimensions have suggested that large extra dimensions will alter the inverse-square law at some not-so-short distances (sub-mm range). Published experimental results can be found e.g. from the Eöt-Wash group ( http://www.npl.washington.edu/eotwash/experiments/shortRange/sr.html ) and are available on the arXiv. One potential tested there is $$V(r)=-G\frac{m_1m_2}{r}\bigl(1+\alpha\exp(-r/\lambda)\bigr).$$ A plot (not reproduced here) shows the exclusion limits for both parameters $\alpha$ and $\lambda$. luksen $\begingroup$ I've used your answer here and left a partial answer, but it could still use a better answer: Limits on non-Newtonian gravity at length scales larger than 1 meter? $\endgroup$ But of course, Newton's theory is not correct; instead Einstein's theory is correct. If you use general relativity (GR), you usually talk about curvature, etc. rather than forces. Nevertheless, the results can be expressed in terms of an effective force. This reference http://farside.ph.utexas.edu/teaching/336k/Newton/node116.html gives $F = -GM/r^2 -3GMh^2/(c^2r^4)$, where h is the specific angular momentum, as the first-order correction. Higher orders have been calculated by the PPN and gravitational wave people. This correction is very small except for very fast moving objects. In practice, it applies to bodies orbiting very near to a black hole or neutron star. Famously, it is also responsible for the precession of the perihelion of Mercury. Jim Graber $\begingroup$ How do you know Einstein's theory is correct? Newton's theory sure seemed correct at the time. $\endgroup$ – wim $\begingroup$ Of course we don't know Einstein's theory is absolutely correct. Experiment (from Mercury onwards) tells us that it is more correct than Newton's theory. But many scientists, particularly the quantum gravity people, expect a further correction is yet to come. $\endgroup$ $\begingroup$ It seems like "your" expression gives the right value for the perihelion precession, however it is not identical to the first-order correction effective force used by JPL/NASA, that is, the post-Newtonian expansion to the first order. That is at least if you approximate $h^2=GMr$ as they do in your link. $\endgroup$ – Agerhell There was indeed some talk of the exponent on $r$ during the late 90's and the early years of the 21st century. The problem, as I recall, was dark matter, which can only be observed indirectly by looking at the anomalous rotation of galaxies. It was suggested that perhaps Newton's law broke down under certain conditions.
Again, as I recall, while a number of papers were published, nothing much came of the idea. Paul J. Gans $\begingroup$ Milgrom's modified gravity was a deviation from Newton's law at various acceleration scales. Modified gravity is still somewhat alive, but the models have grown very complex. $\endgroup$ Building on Jim Graber's answer: We can absorb the perturbation term into the correction to the power law. $$ F = -GM/r^2 \frac1{(r/r_0)^{\delta(r)}} \approx -GM/r^2 \left( 1 - \delta(r)(r/r_0 - 1) \right)$$ $$ \delta(r) = -\frac{3h}{r^2c^2}\frac1{r/r_0 - 1}$$ I'm not sure about the physical meaning of $r_0$ though (renormalization scale?). If $r/r_0 > 1$ then $\delta(r)$ is negative and we have $\alpha$ like 1.9999... pcr $\begingroup$ Still, the expansion in terms of Yukawa potential is more physical $\endgroup$ – pcr There has indeed been such research about the Pioneer anomaly: Two spacecraft launched in the 1970s into the outer solar system did not move quite as expected (as calculated due to gravity and solar wind) after ca. 1980. Only in or after the 2000-2010 decade did the source of the discrepancy, accidental thrust by thermal radiation, become the generally accepted consensus. Previously, it was at least conceivable to interpret the data as containing hints of subtle differences between actual observed gravity and our theoretical understanding of gravity. pyramids $\begingroup$ related Did New Horizons also demonstrate the "Pioneer Anomaly"? and there's also the flyby anomaly: (How well) Did Juno provide any insight into the Flyby Anomaly? and Is the Flyby Anomaly still a thing? $\endgroup$ JPL, which calculates the orbits of celestial bodies to high precision, uses an expression for the acceleration of one body of negligible mass due to the gravitational force of one other body that looks like: $$\frac{d\bar{v}}{dt}=-\frac{GM}{r^2}\left(1-\frac{4GM}{rc^2}+\frac{v^2}{c^2}\right)\hat{r}+ \frac{4GM}{r^2}(\bar{v}\cdot\hat{r})\frac{\bar{v}}{c^2}.$$ Ignoring velocity-dependent parts we have: $$\frac{d\bar{v}}{dt}=-\frac{GM}{r^2}\hat{r}+\frac{4(GM)^2}{c^2r^3}\hat{r}.$$ So keeping $a = 2$ but adding a small "inverse cubic" part is actually done to fit experimental values better, even though the inverse-cube term is not invented but comes from trying to approximate general relativistic effects using what is known as "the post-Newtonian expansion". The reason for this is a bit complicated, but basically adding a small inverse-cube part as well as velocity-dependent parts explains what is known as the "anomalous precession of the perihelion". See for instance expression 4-61 in this paper, titled "Formulation for Observed and Computed Values of Deep Space Network Data Types for Navigation". Agerhell
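To put numbers on the post-Newtonian expression quoted above, here is a minimal Python sketch (assuming the reconstructed form of the acceleration; Mercury's orbital elements are used purely as an illustration, and only the size of the correction is checked, not an orbit integration):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8        # SI units
M = 1.989e30                     # solar mass, kg

def acceleration_1pn(r_vec, v_vec):
    """Newtonian term plus the leading relativistic correction quoted above."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    v2 = np.dot(v_vec, v_vec)
    newton = -G * M / r**2 * rhat
    corr = (-G * M / r**2) * (-4 * G * M / (r * c**2) + v2 / c**2) * rhat \
           + 4 * G * M / r**2 * np.dot(v_vec, rhat) * v_vec / c**2
    return newton + corr

# Mercury near perihelion (illustrative values)
r_vec = np.array([4.60e10, 0.0, 0.0])      # m
v_vec = np.array([0.0, 5.898e4, 0.0])      # m/s
a_full = acceleration_1pn(r_vec, v_vec)
a_newt = -G * M / np.dot(r_vec, r_vec) * r_vec / np.linalg.norm(r_vec)
frac = np.linalg.norm(a_full - a_newt) / np.linalg.norm(a_newt)
print(f"fractional correction to Newtonian acceleration: {frac:.2e}")  # ~1e-7
```

The correction is only about one part in ten million for Mercury, yet accumulated over many orbits it is what produces the well-known 43 arcseconds per century of extra perihelion drift.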
CommonCrawl
EPJ Quantum Technology Quantum teleportation of propagating quantum microwaves R Di Candia1, KG Fedorov2,3, L Zhong2,3,4, S Felicetti1, EP Menzel2,3, M Sanz1, F Deppe2,3,4, A Marx2, R Gross2,3,4 & E Solano1,5 EPJ Quantum Technology volume 2, Article number: 25 (2015) Propagating quantum microwaves have been proposed and successfully implemented to generate entanglement, thereby establishing a promising platform for the realisation of a quantum communication channel. However, the implementation of quantum teleportation with photons in the microwave regime is still absent. At the same time, recent developments in the field show that this key protocol could be feasible with current technology, which would pave the way to boost the field of microwave quantum communication. Here, we discuss the feasibility of a possible implementation of microwave quantum teleportation in a realistic scenario with losses. Furthermore, we propose how to implement quantum repeaters in the microwave regime without using photodetection, a key prerequisite to achieve long-distance entanglement distribution. In 1993, CH Bennett et al. [1] proposed a protocol to disassemble a quantum state at one location (Alice) and to reconstruct it in a spatially separated location (Bob). They proved that, if Alice and Bob share quantum correlations of EPR type [2], then Bob can reconstruct the state of Alice by using classical channels and local operations. This phenomenon is called 'quantum teleportation', and it has important applications in quantum communication [3]. The result inspired discussions among physicists, in particular, on the experimental feasibility of the protocol. Despite some controversies over technical issues, the first experimental realisation of quantum teleportation was simultaneously performed in 1997 by two groups, one led by A Zeilinger in Innsbruck [4], and the other by F De Martini in Rome [5]. In both experiments, the polarisation degrees of freedom of the photons were teleported. It was shown that, even within the unavoidable experimental errors, the overlap between the input state and the teleported one exceeded the classical threshold achievable when quantum correlations are not present. After the success of the first experiments, alternatives for a variety of systems and degrees of freedom emerged. Of particular interest is the continuous-variable scheme studied by L Vaidman [6] and SL Braunstein et al. [7], whose experimental implementation was realised by A Furusawa et al. [8] in the optical regime. This experiment consisted in teleporting the information embedded in the continuous values of the conjugate variables of a propagating electromagnetic signal in the optical regime. Optical frequencies were preferred because of their higher detection efficiency, essential to achieve a high-fidelity performance [7, 8], and because propagation losses are almost negligible. During the last years, impressive progress has been made in teleporting quantum optical states to larger distances, first in fibers [9, 10], and afterwards in free space [11–13]. This rapid progress may even allow us to realise quantum communication via satellites in the near future, with corresponding distances of about 150 km. In optical systems, long-distance teleportation is, to some extent, straightforward, because of the high transmissivity of optical photons in the atmosphere. Nevertheless, unavoidable losses set an upper limit for the teleportation distance.
However, there were fundamental theoretical studies on how to allow for a long-distance entanglement distribution. The underlying concepts are based on quantum repeaters [14, 15], whose implementation on specific platforms needs an individual study. So far, entanglement sharing and quantum teleportation have been reported for cold atoms [16–18], and even for macroscopic systems [19]. In this article, we discuss the possibility of implementing the quantum teleportation protocol of propagating electromagnetic quantum signals in the microwave regime. This line of research is justified by the recent achievements of circuit quantum electrodynamics (cQED) [20, 21]. In cQED, a quantum bit (qubit) is implemented using the quantum degrees of freedom of a macroscopic superconducting circuit operated at low temperatures, i.e. \(\lesssim10\mbox{-}100\) mK, in order to suppress thermal fluctuations. Superconducting Josephson junctions are used to introduce non-linearities in these circuits, which are essential in both quantum computation and the engineering of qubits. Typical qubits are built to have a transition frequency in the range 5-15 GHz (microwave regime), and they are coupled to an electromagnetic field with the same frequency. This choice is determined by readily available microwave devices and techniques for this frequency band, such as low noise cryogenic amplifiers, down converters, network analysers, among others. We note that apart from its relevance in quantum communication, quantum teleportation is also crucial to perform quantum computation, e.g. it can be used to build a deterministic CNOT gate [22]. Recently, path-entanglement between propagating quantum microwaves has been investigated in Refs. [23–25]. Following what was previously done in the optical regime [26], a two-mode squeezed state, in which the modes were spatially separated from each other, was generated. The two entangled beams could be used to perform with microwaves a protocol equivalent to the one used in optical quantum teleportation [7, 8]. These articles represent the most recent of a large number of results presented during the last years [24, 27–42], which are the building blocks of a quantum microwave communication theory. Inspired by the latest theoretical and experimental results, we want to discuss the feasibility of a quantum teleportation realisation for propagating quantum microwaves. The article is organised in the following way: In Section 2, we introduce the continuous-variable quantum teleportation protocol and its figures of merit. In Section 3, we describe the preparation of a propagating quantum microwave EPR state. In Section 4, we show how to implement a microwave equivalent of an optical homodyne detection, by using only linear devices. Section 5 focuses on the analysis of losses. In particular, we consider an asymmetric case in which the losses in Alice's and Bob's paths are different. In Section 6, we discuss the feedforward part of the protocol in both a digital and an analog fashion. Finally, as the entanglement distribution step is affected by losses, we present in Section 7 how to implement a quantum repeater based on weak measurements in a cQED setup, in order to allow the entanglement sharing at larger distances. The protocol In this section, we briefly explain the quantum teleportation protocol introduced in [6, 7], and we introduce some useful figures of merit to quantify the quality of the scheme in a realistic setup.
The protocol consists in teleporting a continuous variable state, and it has already been applied in the optical regime to both a Gaussian state [8, 43] and a Schrödinger cat state [44]. An equivalent scheme for microwaves is still missing, therefore a specific treatment, in which the restriction imposed by the technology is taken into account, is mandatory to analyse its feasibility. Let us consider a situation in which two parties, Alice and Bob, want to share a quantum state. More specifically, Alice, labelled with A, wants to send a quantum state \(|\phi\rangle_{T}\), whose corresponding system is labelled by T, to Bob, denoted by B. Additionally, let them share an ancillary entangled state \(|\psi\rangle_{AB}\) given by $$ (\hat{x}_{A}+\hat{x}_{B}) | \psi \rangle_{AB}= \delta(x_{A}+x_{B}), \qquad ( \hat{p}_{A}-\hat{p}_{B}) |\psi\rangle_{AB}= \delta(p_{A}-p_{B}), $$ where x̂ and p̂ are quantum conjugate observables obeying the standard commutation rule \([\hat{x},\hat{p}]=i\). After Alice performs a Bell-type measurement on the system T-A, $$ x_{T}+ x_{A}=a,\qquad p_{T}- p_{A}=b, $$ where a and b are the outcomes of the measurement. The resulting values of Bob's quadrature would be $$ x_{B}= x_{T}-a, \qquad p_{B}= p_{T}-b. $$ By displacing adaptively Bob's state by \(a+ib\), i.e. \(x_{B}\) is shifted by a and \(p_{B}\) by b, we finally have \(\hat{x}_{B}|\phi\rangle_{B}=\hat{x}_{T}| \phi\rangle_{T}\) and \(\hat{p}_{B} |\phi\rangle_{B}=\hat{p}_{T}| \phi\rangle _{T}\), where \(|\phi\rangle_{B}\) is the final state of Bob. Therefore, the final state of Bob is the state of the system T. Note that Bob needs to perform local operations conditioned to Alice's measurement outcomes. As the outcomes are two numbers, we may allow Alice and Bob to communicate throughout a classical channel, see Figure 1. Bennett et al. [1] called this protocol a quantum teleportation [45]. Scheme of the proposed quantum teleportation protocol. The generation of an EPR state is obtained by amplifying the orthogonal vacuum quadratures of A and B with two Josephson parametric amplifiers (JPAs). The generated entanglement is then shared between Alice and Bob. Alice uses this resource to perform a Bell-type measurement with the state that she wants to teleport. This is realised by superposing her two signals with a beam splitter, and then measuring a quadrature in each of the outputs. The quadrature measurement is performed via amplifying the signal with a JPA and a HEMT amplifier in series, and then measuring via homodyne detection. Finally, after a classical transfer of Alice results, a local displacement on the Bob state is needed to conclude the protocol. The figure indicates where losses (labelled as \(\eta_{A,B} \), α, β) may be present. A state fulfilling (1) can be seen as a two-mode squeezed state with infinite squeezing. In fact, its Wigner function can be written as $$\begin{aligned} &W_{A\text{-}B}(x_{A},p_{A},x_{B},p_{B}) \\ &\quad = \frac{1}{\pi^{2}} \exp \biggl\{ -\frac {e^{-2r}}{2} \bigl[(x_{A}-x_{B})^{2}+(p_{A}+p_{B})^{2} \bigr]-\frac{e^{+2r}}{2} \bigl[(x_{A}+x_{B})^{2}+(p_{A}-p_{B})^{2} \bigr] \biggr\} \\ &\quad \sim \frac{2}{\pi e^{2r}}\exp \biggl\{ -\frac{e^{-2r}}{2} \bigl[(x_{A}-x_{B})^{2}+(p_{A}+p_{B})^{2} \bigr] \biggr\} \delta(x_{A}+x_{B})\delta(p_{A}-p_{B}), \end{aligned}$$ where r is a squeezing parameter [46] and, also, we have considered an asymptotic behaviour for large r. 
For finite r, the state of the system A-B fulfils $$ \hat{x}_{A}+\hat{x}_{B}=\hat{\xi}_{x},\qquad \hat{p}_{A}-\hat{p}_{B}=\hat{\xi}_{p}, $$ where \(\hat{\xi}_{x}|\psi\rangle_{AB}\) and \(\hat{\xi}_{p}|\psi\rangle_{AB}\) have real Gaussian distributions with mean value equal to zero and variance \(e^{-2r}\). If we perform the teleportation protocol with this state, the final Wigner function for Bob's state is the weighted integral $$ W_{B}(x_{B},p_{B})= \int d\xi_{x}\, d\xi_{p}\, P(\xi_{x})P(\xi_{p}) W_{T}(x_{B}-\xi_{x},p_{B}+\xi_{p}), $$ where \(P(\xi_{x,p})\) are the probability distributions of the outcomes of \(\hat{\xi}_{x,p}\). After introducing the variables \(x_{B}-\xi_{x}=X\), \(p_{B}+\xi_{p}=Y\), and defining \(\alpha=X+iY\), \(z_{B}=x_{B}+ip_{B}\), we get $$ W_{B}(z_{B})= \int d^{2}\alpha\, P_{c}\bigl(z_{B}^{*}-\alpha^{*}\bigr) W_{T}(\alpha), $$ where \(P_{c}\) is the complex Gaussian distribution with mean value zero and variance \(\bar{\sigma}^{2}=e^{-2r}\), i.e. \(P_{c}(\beta)=\frac{1}{2\pi \bar{\sigma}^{2}}\exp \{\frac{-|\beta|^{2}}{2\bar{\sigma}^{2}} \}\). In the limit of infinite r, \(P_{c}\) approaches the delta function, and then \(W_{B}=W_{T}\). In the following, we will refer only to the variance of the quadratures, regardless of whether they are noisy or not. Therefore, our treatment is general, and it also includes the lossy case, in which we do not have a perfect two-mode squeezed state as a resource. In order to evaluate the performance of the protocol, the entanglement fidelity [47] can be used. If T is in a pure state, the entanglement fidelity is given by the overlap $$ \mathcal{F}=\pi \int d^{2}z\, W_{B}(z)W_{T}(z). $$ If Alice is restricted to teleporting coherent states, the protocol works better than in the classical case corresponding to \(r=0\) if \(\mathcal {F}>\frac{1}{2}\) [48]. Let us remark that the performance of the protocol for coherent states, and in general for Gaussian states, depends only on the variances \(\Delta \xi_{x,p}^{2}\equiv\langle\Delta\hat{\xi}_{x,p}^{2}\rangle\). Indeed, one can verify that [43] $$ \mathcal{F}=\frac{1}{\sqrt{(1+\Delta\xi_{x}^{2})(1+\Delta\xi_{p}^{2})}}, $$ and \(\mathcal {F}>\frac{1}{2}\) holds if and only if $$ \Xi\equiv\bigl(1+\Delta \xi_{x}^{2}\bigr) \bigl(1+\Delta \xi_{p}^{2}\bigr)< 4. $$ More general cases could also be discussed, but this does not provide any additional insight into the question under which conditions the protocol is feasible. The condition in Eq. (10) defines our limit between classical and quantum teleportation. While in the noiseless case this condition is satisfied for any positive squeezing, the situation changes when we take losses into account. From now on, we assume the case of coherent state teleportation, and the symmetric case, where \(\Delta\xi^{2}\equiv\Delta\xi^{2}_{x,p}\) and \(\Delta\xi_{\perp}^{2}\equiv\langle\Delta(\hat{x}_{A}-\hat{x}_{B})^{2}\rangle=\langle\Delta(\hat{p}_{A}+\hat{p}_{B})^{2}\rangle\). Generation of EPR state Following Refs. [23, 25], propagating quantum microwave EPR states are prepared in the following way. We can generate a microwave vacuum state with a 50 Ohm resistor at low temperatures \(T\sim50\) mK, as its blackbody radiation corresponds to a thermal state with number of photons \(n_{\omega}=(e^{\hbar\omega/kT}-1)^{-1}\), with \(n_{\omega}\ll1\) for frequencies \(\omega/2\pi\sim5\mbox{-}15\) GHz.
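As a quick numerical aside, a minimal sketch checking two numbers used above: the thermal occupation of the 50 Ohm load, and the classicality threshold \(\Xi<4\) of Eq. (10) (constants in SI units; the variances passed to xi() are illustrative):

```python
import numpy as np

hbar = 1.0546e-34   # J s
kB = 1.3807e-23     # J/K

def n_thermal(f_ghz, T):
    """Blackbody photon number of a matched load at temperature T and frequency f."""
    w = 2 * np.pi * f_ghz * 1e9
    return 1.0 / np.expm1(hbar * w / (kB * T))

print(n_thermal(5.0, 0.050))   # ~8e-3: at 50 mK the 5 GHz mode is essentially in the vacuum
print(n_thermal(5.0, 300.0))   # ~1.2e3: a room-temperature environment is far from the vacuum

def xi(dxi2_x, dxi2_p):
    """Classicality parameter of Eq. (10); quantum teleportation requires Xi < 4."""
    return (1 + dxi2_x) * (1 + dxi2_p)

print(xi(1.0, 1.0))   # r = 0 (no squeezing): Xi = 4, exactly the classical boundary
print(xi(0.5, 0.5))   # finite squeezing: Xi = 2.25 < 4, the protocol beats the classical limit
```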
By sending the vacuum to a Josephson parametric amplifier (JPA) [37, 38], we can create a one-mode squeezed state, in which the squeezed quadrature is defined by the phase of the JPA pump signal. The relation between the input \(\hat{a}_{\mathrm{in}} \) and the output \(\hat{a}_{\mathrm{out}} \) of a JPA [49] $$ \hat{a}_{\mathrm{out}} =\hat{a}_{\mathrm{in}} \cosh r+\hat{a}_{\mathrm{in}}^{\dagger}\sinh r, $$ is the same as for a squeezing operator. Notice that the amplified quadrature is defined by \(\hat{x}_{\mathrm{out}} =(\hat{a}_{\mathrm{out}} +\hat{a}_{\mathrm{out}}^{\dagger})/\sqrt{2}\), and the squeezed quadrature is the orthogonal one. A two-mode squeezed state [46] can be generated by sending two one-mode squeezed states, squeezed with respect to orthogonal quadratures, to a hybrid ring, acting as a microwave beam splitter [39, 40]. In this way, the resulting Wigner function is given by Eq. (4), and the two output modes are spatially separated (see Figure 1). In general, the quality of the entanglement between the two modes is affected by the losses of the JPA. To take into account the inefficiency, we write down the Hamiltonian of the JPA and take into account a finite coupling of the resonator mode ĉ with an environment, as depicted in Figure 2: $$\begin{aligned} H={}&H_{\mathrm{free}}+i\frac{\hbar\chi}{2}\bigl(c^{2} -c^{\dagger2}\bigr) +i\hbar\sqrt{\frac {k}{2\pi}} \int d\omega\bigl[a(\omega)c^{\dagger}-ca^{\dagger}(\omega) \bigr] \\ &{}+i\hbar\sqrt {\frac{\gamma}{2\pi}} \int d\omega\bigl[h(\omega)c^{\dagger}-ch^{\dagger}(\omega) \bigr], \end{aligned}$$ where \(H_{\mathrm{free}}=\hbar\omega_{c} c^{\dagger}c+\int d\omega \hbar\omega a^{\dagger}(\omega) a(\omega)+\int d\omega \hbar\omega h^{\dagger}(\omega )h(\omega)\) is the free Hamiltonian. The second term in Eq. (12) is the squeezing Hamiltonian, the third term models the interaction between the cavity field and the input and output signals, and the last term takes into account the losses. The output mode of the JPA is defined as the steady state of â. One can write down these equations in the Heisenberg picture, and look at input-output relations of the fields: $$\begin{aligned} &\hat{x}_{a_{\mathrm{out}}}=\frac{2\chi+k-\gamma}{2\chi-k-\gamma}\hat{x}_{a_{\mathrm{in}}}+ \frac{2\sqrt{k\gamma}}{2\chi-k-\gamma}\hat{x}_{h_{\mathrm{in}}}\equiv \sqrt{g_{x}} \hat{x}_{a_{\mathrm{in}}}+\sqrt{s_{x}} \hat{x}_{h_{\mathrm{in}} }, \end{aligned}$$ $$\begin{aligned} &\hat{p}_{a_{\mathrm{out}}}=\frac{2\chi-k+\gamma}{2\chi+k+\gamma}\hat{p}_{a_{\mathrm{in}}}- \frac{2\sqrt{k\gamma}}{2\chi+k+\gamma}\hat{p}_{h_{\mathrm{in}}}\equiv \frac{1}{\sqrt{g_{p}}} \hat{p}_{a_{\mathrm{in}}} -\sqrt{s_{p}} \hat{p}_{h_{\mathrm{in}}}, \end{aligned}$$ where the \(h_{\mathrm{in}}\) label refers to the input noise, assumed to be a thermal state fulfilling the relation \(s_{x}s_{p}= (\sqrt {g_{x}/g_{p}}-1 )^{2}\) [49]. The quantities \(\Delta\xi^{2}\) and \(\Delta\xi^{2}_{\perp}\) introduced at the end of the Section 2 can be easily retrieved by using Eqs. (13)-(14) and the beam splitter relation: $$\begin{aligned} \Delta\xi^{2}= \frac{1}{g_{p}}+2s_{p} \Delta p_{h_{\mathrm{in}}}^{2},\qquad \Delta\xi _{\perp}^{2}= g_{x}+2s_{x} \Delta x_{h_{\mathrm{in}}}^{2}. \end{aligned}$$ Note that \(\gamma=0\) corresponds to a noiseless parametric amplifier, whose input-output relations are shown in Eq. (11), with \(e^{r}\equiv\sqrt{g_{x}}=\sqrt{g_{p}}\). Generally, the JPA generates a squeezed thermal state whose squeezed quadrature has variance \(\sigma _{s}^{2}\). 
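The lossy-JPA relations above translate directly into the figures of merit of Section 2. The following sketch evaluates Eqs. (13)-(15) and the resulting \(\Xi\) of Eq. (10); the coupling values are illustrative placeholders (not the parameters of Refs. [23, 25]), and the JPA noise port is assumed to be in a thermal state with population n_h:

```python
import numpy as np

def jpa_quadrature_params(chi, k, gamma):
    """Gains and noise couplings of Eqs. (13)-(14) for the two quadratures."""
    g_x = ((2*chi + k - gamma) / (2*chi - k - gamma))**2
    g_p = ((2*chi + k + gamma) / (2*chi - k + gamma))**2
    s_x = (2*np.sqrt(k*gamma) / (2*chi - k - gamma))**2
    s_p = (2*np.sqrt(k*gamma) / (2*chi + k + gamma))**2
    return g_x, g_p, s_x, s_p

def epr_figures(chi, k, gamma, n_h=0.0):
    """Delta xi^2 and Delta xi_perp^2 of Eq. (15), plus Xi of Eq. (10)."""
    g_x, g_p, s_x, s_p = jpa_quadrature_params(chi, k, gamma)
    dq_h = n_h + 0.5                      # quadrature variance of the JPA noise mode
    dxi2 = 1.0 / g_p + 2 * s_p * dq_h
    dxi2_perp = g_x + 2 * s_x * dq_h
    return dxi2, dxi2_perp, (1 + dxi2)**2

# Illustrative couplings (chosen so that the amplifier operates below threshold)
dxi2, dxi2_perp, Xi = epr_figures(chi=0.2, k=1.0, gamma=0.05)
print(dxi2, dxi2_perp, Xi)   # e.g. ~0.24, ~4.8, Xi ~ 1.5 < 4 for these placeholder values
```

For \(\gamma=0\) the noise couplings vanish and one recovers the ideal squeezer of Eq. (11); a finite \(\gamma\) both reduces the squeezing and adds noise, which is what ultimately degrades the shared EPR resource.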
We have entanglement between the outputs of the hybrid ring if \(\sigma_{s}^{2}<\sigma_{\mathrm{vac}}^{2}\), where \(\sigma_{\mathrm{vac}}^{2}\equiv0.5\) is the variance of the vacuum. The variance measured in [23] is \(\sigma_{s}^{2}\simeq0.16\), which leads, considering a beam splitter with 0.4 dB of power losses, to an EPR state with \(\Delta\xi^{2}\simeq0.47\) (\(\Delta\xi_{\perp}^{2}\simeq 16.77\)) and \(\Xi\simeq1.74<4\). In the following, we will use these values as reference, although we believe that these parameters can be improved with better JPA designs. Scheme of a Josephson parametric amplifier (JPA). The field outside the resonator interacts with the resonator mode with a coupling rate k. The resonator mode is evolving under the squeezing Hamiltonian with a coupling χ. The losses are taken into account by introducing an environment mode ĥ, and let it interact with the resonator mode with a coupling rate γ. Quadrature measurement Measuring a quadrature of a weak microwave signal is considered a particularly difficult task, since the low energy of microwave photons makes it difficult to realise a single-photon detector. Therefore, the standard homodyne detection scheme is not applicable. Typically, one has to amplify the microwave signal in order to detect it. Cryogenic high electronic mobility transistor (HEMT) amplifiers are routinely used in quantum microwave experiments [23, 24, 27, 29–32, 40], because of their large gains in a relatively broad frequency band. However, HEMT amplifiers are phase insensitive and add a significant amount of noise photons, sufficient to make the quantum teleportation protocol fail. Their input-output relations are [49] $$ \hat{a}_{\mathrm{out}}=\sqrt{g_{H}} \hat{a}_{\mathrm{in}}+\sqrt{g_{H}-1} \hat{h}_{H}^{\dagger}, $$ where \(\hat{a}_{\mathrm{in}}\), \(\hat{a}_{\mathrm{out}}\) and \(\hat{h}_{H}\) are annihilation operators of the input field, output field and noise added by the amplifier, respectively, with \(g_{H}\sim10^{4}\) for modern high-performance cryogenic amplifiers. We can assume \(\hat{h}_{H} \) to be in a thermal state with thermal population \(n_{H}\). For instance, commercial cryogenic HEMT amplifiers have a typical number of added noise photons \(n_{H}\sim 10\mbox{-}100\) for the considered frequency regime. To measure \(\hat{x}_{T}+\hat{x}_{A}\) and \(\hat{p}_{T}-\hat{p}_{A}\), we need to send the state A and the state T to a hybrid ring, obtaining $$\begin{aligned} \hat{a}_{1}=\frac{\hat{a}_{T}+\hat{a}_{A}}{\sqrt{2}},\qquad \hat{a}_{2}=\frac {\hat{a}_{T}-\hat{a}_{A}}{\sqrt{2}}, \end{aligned}$$ and then measure the x-quadrature of the mode 1 and the p-quadrature of the mode 2. If we amplify this signal with a HEMT and then measure it afterwards, the state of Bob after the local displacement is $$\begin{aligned} \hat{x}_{B}= \hat{x}_{T}+\hat{\xi}_{x}+\sqrt{ \frac{2(g_{H}-1)}{g_{H}}}\hat{x}_{h_{H}}, \end{aligned}$$ and analogously for \(\hat{p}_{B}\). One can easily check that even if the added noise photons are at the vacuum level, we get \(\mathcal{F} \leq \frac{1}{2}\) and the protocol fails. To avoid this situation, we can adopt a scheme based on anti-squeezing the target quadrature before the HEMT amplification [29, 50], see Figure 1. 
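A one-line check of the statement above that direct HEMT amplification spoils the protocol (a sketch using the symmetric-case form of Eq. (9), with the noise term of Eq. (18) added to each quadrature):

```python
def fidelity(dxi2, added_noise):
    """Symmetric-case fidelity of Eq. (9) with extra measurement noise per quadrature."""
    return 1.0 / (1.0 + dxi2 + added_noise)

g_H = 1e4
for n_H in (0.0, 10.0, 100.0):                       # HEMT noise photons referred to the input
    A_hemt = 2 * (g_H - 1) / g_H * (n_H + 0.5)       # noise variance added to each quadrature, Eq. (18)
    print(n_H, fidelity(dxi2=0.0, added_noise=A_hemt))

# Even for perfect EPR correlations (dxi2 = 0) and vacuum-level HEMT noise (n_H = 0),
# the fidelity saturates at 1/2, so the quantum threshold F > 1/2 cannot be passed;
# realistic n_H ~ 10-100 pushes it far below that.
```

The anti-squeezing preamplification sketched in Figure 1 is designed precisely to circumvent this limit.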
Corresponding outputs of the amplification JPAs with a gain \(g_{J}\), followed by a HEMT amplification with gain \(g_{H}\), are $$\begin{aligned} \hat{x}'_{1}&=\sqrt{g_{H}g_{J}} \hat{x}_{1} +\sqrt{g_{H}s} \hat{x}_{h_{J}}+\sqrt{g_{H}-1} \hat{x}_{h_{H}} , \end{aligned}$$ and similar for \(\hat{p}'_{2}\). We assume, for the sake of simplicity, the symmetric case, where both quadratures have the same amplification, and the amount of added noise is the same in both modes. The state of Bob after the displacement step is $$\begin{aligned} \hat{x}_{B} = \hat{x}_{T} +\hat{\xi}_{x} +\sqrt{\frac {2(g_{H}-1)}{g_{J}g_{H}}}\hat{x}_{h_{H}} +\sqrt{\frac{2s}{g_{J}}}\hat{x}_{h_{J}} , \end{aligned}$$ and analogously for \(\hat{p}_{B}\). In the limit of large \(g_{J}\), the noise of the HEMT amplifier is suppressed and the inefficiencies of the JPA are negligible, provided that as \(\Delta x_{h_{J}}^{2}\) and s are small. By defining the JPA quadrature noise \(A_{J}\equiv\frac{s}{g_{J}} \Delta\hat{x}_{h_{J}}^{2}\), and the HEMT quadrature noise \(A_{H}\equiv\frac {g_{H}-1}{g_{H}}\Delta\hat{x}_{h_{H}}^{2}\), we have to analyse for which experimental values the total noise $$ A\equiv2 \biggl(A_{J}+\frac{A_{H}}{g_{J}} \biggr) $$ is lowest, since for \(A>1\) the protocol fails. In the recent experiments on quantum state tomography of itinerant squeezed microwave states [29], an additional JPA with a degenerate gain \(g_{J} \simeq180\) was used as a preamplifier. Corresponding figures of merit are \(A_{J}\simeq0.25\), and in case of \(A_{H}\simeq17\), we get \(A\simeq0.69\). With these values, if we take into account the quality of the EPR state mentioned at the end of Section 3, the protocol fails, as \(\Xi=(1+\Delta\xi^{2}+A)^{2}\simeq4.04>4\). However, the HEMT quadrature noise can realistically reach a value of \(A_{H}\simeq7\), and this gives us an upper bound to the JPA quadrature noise in order for the quantum teleportation protocol to work, i.e. \(A_{J}<0.30\). This bound does not take into account losses and measurement inefficiencies, which are considered in the next section. Moreover, we believe that JPA values can certainly be improved within the next years, as JPA technology is considerably advancing both in the design and materials [51–55]. Protocol with losses So far, we have not taken into account possible losses in the protocol. Typically, losses in the microwave domain are much larger than in the optical domain, and therefore can significantly affect the quality of the teleportation protocol. In the following, we analyse the protocol with all possible loss mechanisms, see Figure 1. Note that losses after the HEMT amplification are negligible, and therefore omitted. To characterise the losses, we use a beam splitter model. Following Figure 1, the fields after collecting the losses in the entanglement sharing step are $$ \hat{x}'_{A}=\sqrt{\eta_{A}} \hat{x}_{A}+\sqrt{1-\eta_{A}} \hat{x}_{v_{A}} ,\qquad\hat{x}'_{B}=\sqrt{\eta_{B}} \hat{x}_{B}+ \sqrt{1-\eta_{B}} \hat{x}_{v_{B}}, $$ where \(\eta_{A,B}\) are the transmission coefficients modelling the losses in Alice's and Bob's channel respectively, and \(\hat{x}_{v_{A,B}}\) are modes in a thermal state (similar formulas hold for \(\hat{p}_{A,B}\)). 
Then, $$\begin{aligned} \hat{x}'_{A}+\hat{x}'_{B}&= \frac{\sqrt{\eta_{A}}+\sqrt{\eta_{B}}}{2}(\hat{x}_{A}+\hat{x}_{B})+ \frac{\sqrt{\eta_{A}}-\sqrt{\eta_{B}}}{2}(\hat{x}_{A}-\hat{x}_{B}) +\sqrt{1-\eta_{A}}\,\hat{x}_{v_{A}}+\sqrt{1-\eta_{B}}\,\hat{x}_{v_{B}} \\ &\equiv \hat{\xi}', \end{aligned}$$ $$\begin{aligned} \Delta{\xi'}^{2}={}&\frac{(\sqrt{\eta_{A}}+\sqrt{\eta_{B}})^{2}}{4}\Delta\xi^{2}+\frac{(\sqrt{\eta_{A}}-\sqrt{\eta_{B}})^{2}}{4}\Delta\xi_{\perp}^{2} \\ &{}+(1-\eta_{A}) \biggl(n_{v_{A}}+\frac{1}{2} \biggr)+(1-\eta_{B}) \biggl(n_{v_{B}}+\frac{1}{2} \biggr). \end{aligned}$$ We note that the second term in Eq. (24) results from an asymmetry of the losses in Alice's and Bob's channel and it increases with the squeezing level in the EPR JPAs. In the optical domain, \(\eta\sim1\), allowing one to neglect this term even for asymmetric channels. Moreover, in this frequency range, \(n_{v_{A,B}}\ll1\) even at room temperature. In the microwave domain, instead, we have \(n_{v_{A,B}}\sim10^{3}\) at room temperature and typical power losses of 20% per meter. In this case, the entanglement would collapse after \({\sim}2\mbox{ mm}\) regardless of the value of \(g_{x}\). Thus, in the following we assume that the entanglement distribution is possible at 50 mK, i.e. \(n_{v_{A,B}}\ll1\). As already pointed out, if \(\eta_{A}\neq\eta_{B}\), then \(\Delta\xi^{\prime2}\) contains a term linearly increasing with the JPA gain \(g_{x}\). Equation (24) explains why ideal quantum teleportation, i.e. \(\mathcal{F}=1\), is not possible in a realistic experiment even in the limit of infinite squeezing as input. From Figure 3(a), we see that the allowed difference between \(\eta_{A}\) and \(\eta_{B}\) decreases with decreasing \(\Delta\xi^{2}\). In Figure 3(b), instead, we see that for large differences between \(\eta_{A}\) and \(\eta_{B}\), it is convenient to attenuate the signal of Alice. For instance, if \(\eta_{B}<\eta_{A}\), we can easily see that this happens when \(\frac{\partial\Delta\xi^{\prime2}}{\partial\eta_{A}}>0\), i.e. $$ \sqrt{\frac{\eta_{B}}{\eta_{A}}}< \frac{ (\Delta\xi^{2}+\Delta\xi_{\perp}^{2} )/4- (n_{v_{A}}+\frac{1}{2} )}{ (\Delta\xi_{\perp}^{2}-\Delta\xi^{2} )/4}. $$ As Alice's measurement step takes a finite amount of time, we typically have \(\eta_{B}<\eta_{A}\). The quantity \(\Delta\xi^{\prime2}\) defined in Eq. (24), which describes the amount of correlations between Alice and Bob, plotted as a function of the transmission coefficients \(\eta_{A,B}\) modelling the losses in Alice's and Bob's channel. The case \(\Delta\xi^{\prime2}\geq1\) corresponds to a classically reachable performance. We see that the quality of the protocol depends on a compromise between squeezing, given by \(\Delta\xi^{2}\), and transmissivity coefficients, given by \(\eta_{A,B}\). (a) \(\Delta \hat{\xi}^{\prime2}\) plotted as a function of \(\eta_{B}\) for fixed \(\eta_{A}=0.70\) and for various values of \(\Delta\xi^{2}\), assuming \(\eta_{B}<\eta_{A}\) and noiseless EPR-JPAs. \(\Delta\xi^{2}\) determines the entanglement in the lossless case: the entanglement increases with decreasing \(\Delta\xi^{2}\). We see that the window of the allowed difference between the losses in Alice's and Bob's channel reduces for larger entanglement. (b) Here, \(\Delta\hat{\xi}^{\prime2}\) is plotted as a function of \(\eta_{B}\) and \(\eta_{A}\) for fixed \(\Delta\xi^{2}=0.14\). From Eq.
(24), we see that for a too large asymmetry between Alice's and Bob's channel, it is opportune to symmetrize them by attenuating one of the signals in order to increase the amount of correlations between the two parties. For instance, for \(\eta_{B}=0.3\) and \(0.8<\eta_{A}<1\), we find that \(\Delta\hat{\xi}^{\prime2}\) increases with increasing \(\eta_{A}\). Concerning Alice's measurement, we may define the quantities characterising the noise added by losses as $$\begin{aligned} A_{\alpha} \equiv\frac{1-\alpha}{\alpha}\Delta x_{v_{\alpha}}^{2},\qquad A_{\beta} \equiv \frac{1-\beta}{\beta}\Delta x_{v_{\beta}}^{2}. \end{aligned}$$ Here, α is the transmission coefficient from the output of the hybrid ring to the JPA, taking into account the hybrid ring losses. Moreover, β is the transmission coefficient from the JPA to HEMT amplifier. Hence, the total noise is $$ A=2 \biggl(A_{\alpha} +\frac{A_{J} }{\alpha}+ \frac{A_{\beta}}{\alpha g_{J} }+\frac{A_{H}}{\alpha\beta g_{J} } \biggr), $$ where \(A_{J}\) and \(A_{H}\) were defined in the previous section. In Table 1, we estimate a bound on \(A_{J}\) for typical losses, taking into account the feedforward (discussed in the following section), and for several distances. These numbers imply that the device experimentally investigated in Refs. [29, 36], two of the few available studies of JPA noise in the degenerate mode, are already close to the threshold where a benefit over classical approaches can be achieved. We immediately see that the largest contributions to Ξ come from \(A_{J}\) and \(\Delta\xi^{\prime2}\). For example, a version of the protocol would work if the noise added by the detection amplifiers is reduced by a good factor of three to \(A_{J}<0.073\), corresponding to 1 m distance from the EPR source. Similarly, improvements in the EPR state generation would help via a reduced \(\Delta\xi^{\prime2}\). Regarding the latter, particular attention should be given to the distance over which an EPR pair can be distributed. For our numbers, assuming a superconducting coaxial cable of 1 m length, the dominating contributions to the losses still come from the beam splitter and connectors. Therefore, an implementation of our protocol for the quantum microwave communication between two adjacent chips of a superconducting quantum processor or two superconducting quantum information units in nearby buildings seems feasible with some reasonable technological improvements. In this context, we want to reiterate that the big advantage of the quantum microwave teleportation lies in the fact that microwaves are the natural operating frequencies of superconducting quantum circuits. Table 1 Tables with the maximum value of \(\pmb{A_{J}^{\mathrm{max}}}\) allowed in order for the quantum teleportation protocol to work Analog vs. digital feedforward In the quantum teleportation protocol, Alice needs to measure and send the result of the measurement to Bob via a classical channel. Then, Bob uses this information to apply a displacement in his system. This process is called a feedforward, and is considered tough to implement, independently of the considered system. In particular, in the microwave case, the measurement process may be slow, resulting in an ultimate loss of fidelity. In realistic experiments, a quantum microwave signal has to be amplified before detection. If the amplification is large, the signal becomes insensitive to losses at room temperature. 
Therefore, an idea is to use the output signal of Alice to perform classical communication without digitally measuring it. This analog feedforward is depicted in Figure 4, and it works in the following way. Let us assume the lossless case, and send the two amplified signals of Alice to a hybrid ring. One of the two outputs of the latter provides us with $$\begin{aligned} &\hat{x}_{F} =\frac{1}{2} \biggl( \sqrt{g_{J}g_{H}}-\sqrt{\frac{g_{H}}{g_{J}}} \biggr)\hat{x}_{A} +\frac{1}{2} \biggl(\sqrt{g_{J}g_{H}}+ \sqrt{\frac{g_{H}}{g_{J}}} \biggr)\hat{x}_{T} +\sqrt{ \frac{g_{H}-1}{2}}\bigl(\hat{x}_{h_{H1}} +\hat{x}_{h_{H2}} \bigr) , \end{aligned}$$ $$\begin{aligned} &\hat{p}_{F} =\frac{1}{2} \biggl(- \sqrt{g_{J}g_{H}}+\sqrt{\frac {g_{H}}{g_{J}}} \biggr)\hat{p}_{A} +\frac{1}{2} \biggl(\sqrt{g_{J}g_{H}}+ \sqrt{\frac {g_{H}}{g_{J}}} \biggr)\hat{p}_{T} -\sqrt{ \frac{g_{H}-1}{2}}\bigl(\hat{p}_{h_{H1}} +\hat{p}_{h_{H2}} \bigr), \end{aligned}$$ where the label 'F' stands for the feedforward. Indeed, Bob may use this signal to perform the displacement. Scheme of the analog feedforward. Here, Alice is not digitising the signals, but is amplifying and superposing them. The output signal is robust to the environment noise, and it contains all the information that Bob needs to perform the local displacement. This displacement is then implemented with a high transmissivity directional coupler, whose inputs are the signal of Bob, and the output signal of Alice. In the figure, \(\hat{a}_{F}\) is the output signal of Alice, \(\hat{a}_{B}\) is the signal of Bob, \(\hat{a}'_{B}\) is the output of the teleportation scheme, while \(\tau \simeq1\) is the transmissivity and \(S_{ij}\) the scattering matrix of the directional coupler. A displacement operator can be implemented by sending a strong coherent state and the field which we want to displace to a high-transmissivity mirror [56]. Hence, the transmitted signal is $$ \hat{a}_{\mathrm{out}}=\sqrt{\tau} \hat{a}_{\mathrm{in}}+\sqrt{1- \tau} \alpha, $$ where α is without a hat because it represents a coherent state. If we choose \(\tau\sim1\) and \(|\alpha|\gg1\) such that \(\sqrt{1-\tau} \alpha=z\), we obtain $$ \hat{a}_{\mathrm{out}}=\sqrt{\tau} \hat{a}_{\mathrm{in}}+z \simeq \hat{a}_{\mathrm{in}}+z, $$ which approximates a displacement operator. In a microwave experiment, the operation (31) can be implemented with a microwave directional coupler. If we send signals B and F as inputs to a directional coupler with transmissivity \(\tau\simeq1-\frac{4}{g_{J}g_{H}}\), the corresponding output is $$\begin{aligned}& \begin{aligned}[b] \hat{x}_{B}'={}&\sqrt{\tau} \hat{x}_{B}+\sqrt{1- \tau} \hat{x}_{F}\\ ={}& \biggl(1+\frac {1}{g_{J}} \biggr)\hat{x}_{T}+\sqrt{1-\frac{4}{g_{J}g_{H}}}\hat{x}_{B}+ \biggl(1- \frac {1}{g_{J}} \biggr)\hat{x}_{A} \\ &{}+\sqrt{\frac{2(g_{H}-1)}{g_{J}g_{H}}}(\hat{x}_{h_{H1}}+\hat{x}_{h_{H2}}) \simeq \hat{x}_{T}+\hat{\xi}_{x} , \end{aligned} \end{aligned}$$ $$\begin{aligned}& \begin{aligned}[b] \hat{p}_{B}'={}&\sqrt{\tau} \hat{p}_{B}+\sqrt{1- \tau} \hat{p}_{F}\\ ={}& \biggl(1+\frac {1}{g_{J}} \biggr)\hat{p}_{T}+\sqrt{1-\frac{4}{g_{J}g_{H}}}\hat{p}_{B}+ \biggl( \frac {1}{g_{J}}-1 \biggr)\hat{p}_{A} \\ &{}-\sqrt{\frac{2(g_{H}-1)}{g_{J}g_{H}}}(\hat{p}_{h_{H1}}+\hat{p}_{h_{H2}})\simeq \hat{p}_{T}-\hat{\xi}_{p}, \end{aligned} \end{aligned}$$ where the last approximation holds for \(g_{J}\gg1\), and, for the sake of simplicity, we have considered the lossless case. 
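To make the effect of the coupler transmissivity concrete, the minimal Python sketch below (an illustration, not code from the authors) evaluates the coefficients multiplying \(\hat{x}_{T}\), \(\hat{x}_{A}\), \(\hat{x}_{B}\) and the HEMT noise quadratures in Eq. (32) for the lossless case, with the quoted choice \(\tau=1-4/(g_{J}g_{H})\); the numerical values of \(g_{J}\) and \(g_{H}\) are example inputs only.

```python
import numpy as np

# Illustrative sketch of Eq. (32): coefficients of Bob's output quadrature x_B'
# after the analog feedforward, lossless case, with tau = 1 - 4/(gJ*gH).
def xB_prime_coeffs(gJ, gH):
    tau = 1.0 - 4.0 / (gJ * gH)
    r = np.sqrt(1.0 - tau)                                    # = 2/sqrt(gJ*gH)
    c_T = r * 0.5 * (np.sqrt(gJ * gH) + np.sqrt(gH / gJ))     # -> 1 + 1/gJ
    c_A = r * 0.5 * (np.sqrt(gJ * gH) - np.sqrt(gH / gJ))     # -> 1 - 1/gJ
    c_B = np.sqrt(tau)                                        # -> 1 for gJ*gH >> 1
    c_h = r * np.sqrt((gH - 1.0) / 2.0)                       # HEMT noise weight ~ sqrt(2/gJ)
    return c_T, c_A, c_B, c_h

# Typical values quoted in the text: gH ~ 1e4, gJ ~ 1e2.
print(xB_prime_coeffs(gJ=1e2, gH=1e4))
# For gJ >> 1 the coefficients of x_T, x_A and x_B all approach 1, so
# x_B' ~= x_T + (x_A + x_B) = x_T + xi_x, i.e. the displacement Bob needs,
# while each HEMT noise quadrature only enters with a small weight.
```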
Considering the typical values \(g_{H}\sim10^{4}\) and \(g_{J}\sim10^{2}\), we would need a reflectivity factor \(1-\tau\sim10^{-6}\). For this value, small errors in τ would result in a large error in the displacement operator. This problem can be overcome by attenuating the signal F at low temperatures before the directional coupler, so that the attenuator noise can be neglected. In this case, setting \(\tau=1-\frac{4}{\eta_{\mathrm{att}}g_{J}g_{H}}\), the transmitted signal is the same as in (32)-(33). For instance, if we choose \(\eta_{\mathrm{att}}\sim10^{-3}\), we derive a reasonable value for the reflectivity: \(1-\tau\sim10^{-3}\). The described analog method allows us to perform the feedforward without actual knowledge of the result of Alice's measurement. Indeed, the JPA and HEMT amplifiers work as measurement devices. On the one hand, the advantage is that we save the time required to digitise the signal. On the other hand, the disadvantage is that all the noise sources on Alice's side are mixed, resulting in a doubling of the noise A, as we see in Eqs. (32)-(33) (the same claim holds for the lossy case). Therefore, one should carefully evaluate whether the digital feedforward is preferable to the analog one, by comparing A, which quantifies the loss of fidelity in the analog feedforward case, with the noise added by the delay line required on Bob's side in the digital feedforward case. This can be done by estimating the digitisation time and the corresponding losses in Bob's delay line, which strongly depend on the available technology. Indeed, currently available IQ mixers and FPGA technology require \(t_{p}\sim200\mbox{-}400\) ns for measuring and processing the information. During this time, the signal needs to be delayed in Bob's channel. If we consider a delay line where the group velocity of the electromagnetic field is \(v\simeq2 \times10^{8} \) m/s, \(t_{p}\) corresponds to a delay line of 40-80 m on Bob's side. Comparing the values of \(\Delta\xi^{\prime2}\) for zero measurement time and for a realistic 200 ns measurement time, we see a change in \(\Delta\xi^{\prime2}\) of ∼0.30 in the case of 1 m distance (assuming 0.1 dB per meter of power cable losses), which is considerably lower than the values currently achievable for A. Notice that this discrepancy decreases with the distance between Alice and Bob. This means that the digital feedforward is currently preferable to the analog one, but the analog feedforward can become a useful technological tool once JPA technology reaches a sufficiently low noise level. Quantum repeaters As we have discussed in the previous sections, the entanglement distribution between the two parties, Alice and Bob, is particularly challenging due to the large losses involved. Moreover, while in the optical case the noise added by a room-temperature environment corresponds to the vacuum, in the microwave regime this noise would correspond to a thermal state containing \({\sim}10^{3}\) photons. Even in the most favourable situation in which we build a cryogenic setup to share the entanglement, we would have a collapse of the correlations after ∼10 m due to the detection inefficiency and losses. The implementation of quantum repeaters in the microwave regime could potentially solve this issue. A quantum repeater is able to distill entanglement and to share it over larger distances, at the expense of efficiency.
A protocol for distributing entanglement over large distances in the microwave regime has recently been proposed in [57], but it relies on the implementation of an optical-to-microwave quantum interface [58], which has not yet been realised experimentally. Here, we discuss the microwave implementation of a quantum repeater based on non-deterministic noiseless linear amplification via weak measurements [59]. A noiseless linear amplifier [60, 61] can be modelled as an operator \(g^{\hat{n}}\) applied to its input state. For example, for an input coherent state \(|\alpha\rangle\), we would have \(|g\alpha\rangle\) as output, resulting in an amplification of all quadratures without adding noise. Let us consider a two-mode squeezed state \(|\psi_{AB}\rangle\propto\sum_{n=0}^{\infty}(\tanh r)^{n}|n\rangle_{A}|n\rangle_{B}\). Notice that the amount of entanglement increases with increasing r. If we are able to implement the operator \(g^{\hat{n}}\), with \(g>1\), on one mode, say Bob's, we have $$\begin{aligned} g^{\hat{n}_{B}}|\psi_{AB}\rangle\propto\sum_{n=0}^{\infty}(g \lambda )^{n}|n \rangle_{A}|n\rangle_{B}=\sum_{n=0}^{\infty} \lambda^{\prime n}|n\rangle_{A}|n \rangle_{B}, \end{aligned}$$ with \(\lambda=\tanh r\) and \(\lambda'\equiv g \lambda>\lambda\). A similar argument holds if we have losses in each of the two modes. In fact, the state after the loss mechanism is $$\begin{aligned} |\psi_{\mathrm{loss}}\rangle\propto{}&\sum_{n=0}^{\infty}\sum_{k_{A}=0}^{n}\lambda^{n}\sum_{k_{B}=0}^{n}(-1)^{2n-k_{A}-k_{B}} \eta_{A}^{k_{A}/2}\eta_{B}^{k_{B}/2}(1-\eta_{A})^{(n-k_{A})/2} \\ &{}\times(1-\eta_{B})^{(n-k_{B})/2}\sqrt{\binom{n}{k_{A}} \binom{n}{k_{B}}}|k_{A}\rangle_{A}|k_{B} \rangle_{B}|n-k_{A}\rangle_{l_{A}}|n-k_{B} \rangle_{l_{B}}, \end{aligned}$$ where \(l_{A,B}\) correspond to the loss modes. If we apply the operator \(g^{\hat{n}_{B}}\), the output state has the same form but with the new effective parameters [60] \(\eta_{B}\rightarrow\eta_{B}'=\frac{g^{2}\eta_{B}}{1+(g^{2}-1)\eta_{B}}\) and \(\lambda\rightarrow\lambda''=\lambda \sqrt{1+(g^{2}-1)\eta_{B}}\), which is accompanied by an increase of the entanglement. Accordingly, the final \(\Delta\xi^{\prime2}\) would be lower, which corresponds to higher values of \(A_{J}^{\mathrm{max}}\) in Table 1. Note that if \(\lambda=0\), i.e. there is no entanglement at the input, then the output state is not entangled either. Therefore, in order to increase the amount of entanglement, we need a minimum of entanglement at the input. The operator \(g^{\hat{n}}\) corresponds to a noiseless phase-insensitive linear amplifier, and it cannot be implemented deterministically. However, there exist probabilistic methods to realise it approximately. A probabilistic noiseless linear amplification scheme has already been demonstrated in the optical regime [60, 61], but it relies on the possibility of counting photons. In contrast, the weak measurement scheme [59] requires quadrature measurements, which can be applied in the microwave regime. Let Bob's mode interact with an ancillary system in a coherent state \(|\alpha\rangle\) according to the cross-Kerr Hamiltonian \(\hat{H}_{\mathrm{Kerr}}= \hbar k \hat{n}_{\mathrm{anc}}\hat{n}_{B}\), where k is a coupling constant. Let us further consider short interaction times, i.e. \(k\Delta t\ll1\). If we postselect the ancilla in the state \(|p\rangle\), i.e.
the eigenstate of the p̂ quadrature corresponding to the eigenvalue p, the whole final state is $$\begin{aligned} |\psi_{\mathrm{final}}\rangle&=|p\rangle \langle p| e^{-i\hat{H}_{\mathrm{Kerr}}\Delta t/\hbar}|\alpha\rangle|\psi\rangle_{AB}\simeq|p\rangle \langle p| (\mathbb{I}-ik\Delta t \hat{n}_{\mathrm{anc}}\hat{n}_{B})|\alpha\rangle |\psi\rangle_{AB} \\ &=|p\rangle \langle p|\alpha\rangle (\mathbb{I} -ik\Delta t A_{w}\hat{n}_{B} )|\psi\rangle_{AB}\simeq|p\rangle \langle p|\alpha \rangle e^{-ik\Delta tA_{w}\hat{n}_{B} }|\psi\rangle_{AB} \\ &=|p\rangle \langle p|\alpha\rangle e^{-ik\Delta t \operatorname{Re}(A_{w})\hat{n}_{B} } \bigl(e^{k\Delta t \operatorname{Im}(A_{w})} \bigr)^{\hat{n}_{B}}|\psi\rangle_{AB}, \end{aligned}$$ where \(A_{w}\equiv\frac{\langle p| \hat{n}_{\mathrm{anc}}|\alpha\rangle}{\langle p|\alpha\rangle}=\alpha^{2}-i\sqrt{2}\alpha p\) is called the 'weak value', and, in the second approximation, we have assumed \(k\Delta t|A_{w}|\ll 1\). By choosing the values of α and p appropriately, we can obtain a weak value \(A_{w}\) with a positive imaginary part. If we set \(g\equiv e^{k\Delta t \operatorname{Im}(A_{w})}\), we have a scheme to implement \(g^{\hat{n}_{B}}\) up to a known phase-shift \(e^{-ik\Delta t \operatorname{Re}(A_{w})\hat{n}_{B} }\), with success probability density \(|\langle p|\alpha \rangle|^{2}=\frac{1}{\sqrt{\pi}}e^{- (p-\operatorname{Im}(\alpha) )^{2}}\). For instance, by choosing \(\operatorname{Im}(\alpha)=0\) and \(\operatorname{Re}(\alpha) <0\), we have a gain for any \(p>0\), which happens with 50% probability. In this case, an imperfect quadrature measurement can be corrected by simply shifting the allowed results of the ancilla measurement, with a consequent loss of efficiency. Note that, due to the probabilistic nature of the scheme, Alice and Bob need to communicate classically in order to distill the entanglement, see Figure 5. However, this classical communication can be performed at the end, in a post-selection fashion, as Alice does not need to perform any operation on her system. Quantum repeater scheme with weak measurement and postselection. A probabilistic noiseless linear amplifier is applied to one of the two parties, via the implementation of a weak cross-Kerr interaction with an ancillary signal. This interaction emerges as a fourth-order expansion of the dynamics of the signal and the ancilla coupled with a transmon quantum bit, modelled as a three-level system, in a non-resonant regime. The ancilla is then measured, and the result is sent classically to Alice for post-selection. The cross-Kerr effect, characterised by a Hamiltonian of the kind \(\hat{H}_{\mathrm{Kerr}}=\hbar k \hat{n}_{\mathrm{anc}}\hat{n}_{B}\), has already been proposed in cQED in the context of single-photon resolving photodetectors, see [62, 63]. Basically, this interaction emerges in the fourth-order expansion of the dynamics of two microwave modes coupled with a transmon in a non-resonant regime. By modelling the transmon as a three-level system, the system Hamiltonian is $$ \begin{aligned}[b] H={}&\hbar\omega_{a} a^{\dagger}a +\hbar\omega_{b} b^{\dagger}b + \hbar(\omega_{2}-\omega_{0})|2\rangle \langle2| + \hbar(\omega_{1}-\omega_{0})|1\rangle \langle1|\\ &{} +\hbar g_{a} \bigl[a|2\rangle \langle1| +a^{\dagger}|1\rangle \langle 2| \bigr]+\hbar g_{b} \bigl[b|1\rangle \langle0| +b^{\dagger}|0\rangle \langle 1| \bigr], \end{aligned} $$ where a represents the ancillary mode and b Bob's mode.
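Before moving to the interaction-picture treatment of this Hamiltonian, the weak-measurement amplification described above can be summarised numerically. The Python sketch below is purely illustrative: the values of α, p, kΔt, λ and η_B are assumptions chosen to respect the weak-coupling condition, not parameters from the text.

```python
import numpy as np

# Illustrative sketch: weak-value gain of the probabilistic noiseless amplifier
# and the resulting boost of a lossy two-mode squeezed state. All inputs are
# example values (assumptions), not taken from the paper.
def weak_value(alpha, p):
    """A_w = alpha^2 - i*sqrt(2)*alpha*p for a real ancilla amplitude alpha."""
    return alpha**2 - 1j * np.sqrt(2.0) * alpha * p

def nla_gain(alpha, p, k_dt):
    """g = exp(k*dt*Im(A_w)); g > 1 requires Im(A_w) > 0 (e.g. alpha < 0, p > 0)."""
    return np.exp(k_dt * weak_value(alpha, p).imag)

def success_density(p, im_alpha=0.0):
    """|<p|alpha>|^2 = exp(-(p - Im(alpha))^2)/sqrt(pi)."""
    return np.exp(-(p - im_alpha) ** 2) / np.sqrt(np.pi)

def boosted_parameters(lam, eta_B, g):
    """Effective parameters after applying g^{n_B} to a lossy two-mode squeezed state."""
    eta_eff = g**2 * eta_B / (1.0 + (g**2 - 1.0) * eta_B)
    lam_eff = lam * np.sqrt(1.0 + (g**2 - 1.0) * eta_B)
    return eta_eff, lam_eff

alpha, p, k_dt = -3.0, 1.0, 0.02        # chosen so that k*dt*|A_w| stays small
g = nla_gain(alpha, p, k_dt)
print(g, success_density(p), boosted_parameters(lam=0.3, eta_B=0.5, g=g))
```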
In the interaction picture with respect to \(H_{0}=\hbar\omega_{a} a^{\dagger}a+\hbar\omega_{b} b^{\dagger}b+\hbar\omega_{a} |2\rangle\langle2|+\hbar \omega_{b}|1\rangle\langle1| \), the new Hamiltonian is $$\begin{aligned} H_{I}=\hbar\Delta_{a} |2\rangle \langle2| +\hbar \Delta_{b} |1\rangle \langle 1| +\hbar g_{a} \bigl[a|2 \rangle \langle1| +a^{\dagger}|1\rangle \langle2| \bigr]+\hbar g_{b} \bigl[b|1\rangle \langle0| +b^{\dagger}|0\rangle \langle1| \bigr], \end{aligned}$$ where \(\Delta_{a}=\omega_{2}-\omega_{a}\), \(\Delta_{b}=\omega_{1}-\omega_{b}\), and we have set \(\omega_{0}=0\). If we set the parameters such that \(g_{a},g_{b}\ll\Delta_{a},\Delta_{b},|\Delta_{a}-\Delta_{b}|\), and we initialize the transmon in \(|0\rangle\), the effective Hamiltonian is $$\begin{aligned} H_{I}^{\mathrm{eff}}=\hbar\frac{g_{b}^{2}}{\Delta_{b}}b^{\dagger}b |0\rangle \langle0| +\hbar\frac{12g_{a}^{2}g_{b}^{2}}{\Delta_{a}\Delta_{b}} \biggl(\frac{1}{\Delta_{b}}- \frac{1}{\Delta_{a}} \biggr)a^{\dagger}a b^{\dagger}b |0\rangle \langle0|, \end{aligned}$$ where we have performed a fourth-order expansion of the Magnus series and used the rotating wave approximation. Typical parameters allowing this are \((\omega_{2}-\omega_{1})/2\pi=\omega_{a}/2\pi \simeq5\) GHz, \((\omega_{1}-\omega_{0})/2\pi=(\omega_{b}+\tilde{\Delta})/2\pi \), with \(\tilde{\Delta}=20\) MHz and \(\omega_{b}/2\pi\simeq6\) GHz, and \(g_{a,b}\simeq100\) kHz. The Hamiltonian in Eq. (39) represents the cross-Kerr effect up to a known phase that can be corrected at the end. In this scheme, dissipation is negligible, as we are interested in very short interaction times. We have considered a quantum teleportation protocol of propagating quantum microwaves. We have analysed its realisation by introducing figures of merit (i.e. Ξ and A) that take into account losses and detector efficiency. In particular, we have underlined the difference between the optical case (where photodetectors are available, and losses are negligible) and the microwave regime. Indeed, we have considered JPAs in order to perform single-shot quadrature measurements, and we have proposed an analog feedforward scheme, which does not rely on digitisation of signals. Moreover, we have discussed the loss mechanisms, highlighting the extent to which they limit the realisation of the protocol. We have used typical parameters of present state-of-the-art experimental setups in order to identify the required improvements of these setups to allow for a first proof-of-principle experiment. Finally, we have introduced a quantum repeater scheme based on weak measurements and postselection. Bennett CH, Brassard G, Crépeau C, Jozsa R, Peres A, Wootters W. Phys Rev Lett. 1993;70:1895. Einstein A, Podolsky B, Rosen N. Phys Rev. 1935;47:777. Ralph T. Nature. 2013;500:282. Bouwmeester D, Pan J-W, Mattle K, Eibl M, Weinfurter H, Zeilinger A. Nature. 1997;390:575. Boschi D, Branca S, De Martini F, Hardy L, Popescu S. Phys Rev Lett. 1998;80:1121. Vaidman L. Phys Rev A. 1994;49:1473. Braunstein SL, Kimble HJ. Phys Rev Lett. 1998;80:869. Furusawa A, Sorensen JL, Braunstein SL, Fuchs CA, Kimble HJ, Polzik ES. Science. 1998;282:5389. Marcikic I, de Riedmatten H, Tittel W, Zbinden H, Gisin N. Nature. 2003;421:509. Ursin R, Jennewein T, Aspelmeyer M, Kaltenbaek R, Lindethal M, Walther P, Zeilinger A. Nature. 2004;430:849.
Jin X-M, Ren J-G, Yang B, Yi Z-H, Zhou F, Xu X-F, Wang S-K, Yang D, Hu Y-F, Jiang S, Yang T, Yin H, Chen K, Peng C-Z, Pan J-W. Nat Photonics. 2010;4:376. Yin J, Lu H, Ren J-G, Cao Y, Yong H-L, Wu Y-P, Liu C, Liao S-K, Jiang Y, Cai X-D, Xu P, Pan G-S, Wang J-Y, Chen Y-A, Peng C-Z, Pan J-W. Nature. 2012;488:185. Ma X-S, Herbst T, Scheidl T, Wang D, Kropatschek S, Naylor W, Wittmann B, Mech A, Kofler J, Anisimova E, Makarov V, Jennewein T, Ursin R, Zeilinger A. Nature. 2012;489:269. Briegel H-J, Dür W, Cirac JI, Zoller P. Phys Rev Lett. 1998;81:5932. Duan L-M, Lukin MD, Cirac JI, Zoller P. Nature. 2001;414:413. Riebe M, Häffner H, Roos CF, Hänsel W, Ruth M, Benhelm J, Lancaster GPT, Körber TW, Becher C, Schimdt-Kaler F, James DFV, Blatt R. Nature. 2004;429:734. Barrett MD, Chiaverini J, Schaetz T, Britton J, Itano WM, Jost JD, Knill E, Langer C, Leibfried D, Ozeri R, Wineland DJ. Nature. 2004;429:737. Olmschenk S, Matsukevich DN, Maunz P, Hayes D, Duan L-M, Monroe C. Science. 2009;323:486. Bao X-H, Xu X-F, Li C-M, Yuan Z-S, Lu C-Y, Pan J-W. Proc Natl Acad Sci. 2012;109(50):20347. Blais A, Huang R-S, Wallraff A, Girvin SM, Schoelkopf RJ. Phys Rev A. 2004;69:062320. Wallraff A, Schuster DI, Blais A, Frunzio L, Huang R-S, Majer J, Kumar S, Girvin SM, Schoelkopf RJ. Nature. 2004;431:162. Gottesman D, Chuang IL. Nature. 1999;402:390. Menzel EP, Di Candia R, Deppe F, Eder P, Zhong L, Ihmig M, Haeberlein M, Baust A, Hoffmann E, Ballester D, Inomata K, Yamamoto T, Nakamura Y, Solano E, Marx A, Gross R. Phys Rev Lett. 2012;109:252502. Eichler C, Bozyigit D, Lang C, Baur M, Steffen L, Fink JM, Filipp S, Wallraff A. Phys Rev Lett. 2011;107:113601. Flurin E, Roch N, Mallet F, Devoret MH, Huard B. Phys Rev Lett. 2012;109:183901. Ou ZY, Pereira SF, Kimble HJ, Peng KC. Phys Rev Lett. 1992;68:3663. Menzel EP, Deppe F, Mariantoni M, Araque Caballero MÁ, Baust A, Niemczyk T, Hoffmann E, Marx A, Solano E, Gross R. Phys Rev Lett. 2010;105:100401. Mariantoni M, Storcz MJ, Wilhelm FK, Oliver WD, Emmert A, Marx A, Gross R, Christ H, Solano E. 2005. arXiv:cond-mat/0509737. Mallet F, Castellano-Beltran MA, Ku HS, Glancy S, Knill E, Irwin KD, Hilton GC, Vale LR, Lehnert KW. Phys Rev Lett. 2011;106:220502. Bozyigit D, Lang C, Steffen L, Fink JM, Baur M, Bianchetti R, Leek PJ, Filipp S, da Silva MP, Blais A, Wallraff A. Nat Phys. 2011;7:154. Eichler C, Bozyigit D, Lang C, Steffen L, Fink J, Wallraff A. Phys Rev Lett. 2011;106:220503. Eichler C, Lang C, Fink JM, Govenius J, Filipp S, Wallraff A. Phys Rev Lett. 2012;109:240501. Eichler C, Bozyigit D, Wallraff A. Phys Rev A. 2012;86:032106. da Silva MP, Bozyigit D, Wallraff A, Blais A. Phys Rev A. 2010;82:043804. Di Candia R, Menzel EP, Zhong L, Ballester D, Deppe F, Marx A, Gross R, Solano E. New J Phys. 2014;16:015001. Zhong L, Menzel EP, Di Candia R, Eder P, Ihmig M, Baust A, Haeberlein M, Hoffmann E, Inomata K, Yamamoto T, Nakamura Y, Solano E, Deppe F, Marx A, Gross R. New J Phys. 2013;15:125013. Yamamoto T, Inomata K, Watanabe M, Matsuba K, Miyazaki T, Oliver WD, Nakamura Y, Tsai JS. Appl Phys Lett. 2008;93:042510. Castellano-Beltran MA, Irwin KD, Hilton GC, Vale LR, Lehnert KW. Nat Phys. 2008;4:929. Hoffmann E, Deppe F, Niemczyk T, Wirth T, Menzel EP, Wild G, Huebl H, Mariantoni M, Weißl T, Lukashenko A, Zhuravel AP, Ustinov AV, Marx A, Gross R. Appl Phys Lett. 2010;97:222508. Mariantoni M, Menzel EP, Deppe F, Araque Caballero MÁ, Baust A, Niemczyk T, Hoffmann E, Solano E, Marx A, Gross R. Phys Rev Lett. 2010;105:133601. 
Baur M, Fedorov A, Steffen L, Filipp S, da Silva MP, Wallraff A. Phys Rev Lett. 2012;108:040502. Lanzagorta M. In: Lanzagorta M, Ulhmann J, editors. Quantum radar, synthesis lectures on quantum computing. vol. 5. Morgan & Claypool Publishers; 2012. Takei N, Aoki T, Koike S, Yoshino K-i, Wakui K, Yonezawa H, Hiraoka T, Mizuno J, Takeoka M, Ban M, Furusawa A. Phys Rev A. 2005;72:042304. Lee N, Benichi H, Takeno Y, Takeda S, Webb J, Huntington E, Furusawa A. Science. 2011;332:6027. Peres A. IBM J Res Dev. 2004;48(1):63. Walls DF, Milburn G. Quantum optics. New York: Springer; 2008. Schumacher B. Phys Rev A. 1996;54:2614. Braunstein SL, Fuchs CA, Kimble HJ. J Mod Phys. 2000;47(2-3):267. ADS MathSciNet Google Scholar Caves CM. Phys Rev D. 1982;26:1817. Leonhardt U, Paul H. Phys Rev Lett. 1994;72:4086. Mutus JY, White TC, Barends R, Chen Y, Chen Z, Chiaro B, Dunsworth A, Jeffrey E, Kelly J, Megrant A, Nelli C, O'Malley PJJ, Roushan P, Sank D, Vainsencher A, Wenner J, Sundqvist KM, Cleland AN, Martinis JM. Appl Phys Lett. 2014;104:263513. Eom BH, Day PK, LeDuc HG, Zmuidzinas J. Nat Phys. 2012;8:623. O'Brien K, Macklin C, Siddiqi I, Zhang X. Phys Rev Lett. 2014;113:157001. Eichler C, Bozyigit D, Lang C, Baur M, Steffen L, Fink JM, Filipp S, Wallraff A. Phys Rev Lett. 2011;107:1136901. Macklin C, O'Brien K, Hover D, Schwartz ME, Bolkhovsky V, Zhang X, Oliver WD, Siddiqi I. Science. 2015;350:307. Paris MGA. Phys Lett A. 1996;217:2. Abdi M, Tombesi P, Vitali D. Ann Phys. 2015;527:139. Article MathSciNet Google Scholar Barzanjeh S, Abdi M, Milburn GJ, Tombesi P, Vitali D. Phys Rev Lett. 2012;109:130503. Menzies D, Croke S. 2009. arXiv:0903.4181. Ralph TC, Lund AP. AIP Conf Proc. 2009;1110:155. Xiang GY, Ralph TC, Lund AP, Walk N, Pryde GJ. Nat Photonics. 2010;4:316. Hoi I-C, Kockum AF, Palomaki T, Stace TM, Fan B, Tornberg L, Sathyamoorthy SR, Johansson G, Delsing P, Wilson CM. Phys Rev Lett. 2013;111:053601. Sathyamoorthy SR, Tornberg L, Kockum AF, Baragiola BQ, Combes J, Wilson CM, Stace TM, Johansson G. Phys Rev Lett. 2014;112:093601. This work is supported by the German Research Foundation through SFB 631, and the grant FE 1564/1-1; Spanish MINECO FIS2012-36673-C03-02; UPV/EHU UFI 11/55; Basque Government IT472-10; CCQED, PROMISCE, and SCALEQIT EU projects. Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, Bilbao, 48080, Spain R Di Candia, S Felicetti, M Sanz & E Solano Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, Garching, 85748, Germany KG Fedorov, L Zhong, EP Menzel, F Deppe, A Marx & R Gross Physik-Department, Technische Universität München, Garching, 85748, Germany KG Fedorov, L Zhong, EP Menzel, F Deppe & R Gross Nanosystems Initiative Munich (NIM), Schellingstraße 4, München, 80799, Germany L Zhong, F Deppe & R Gross IKERBASQUE, Basque Foundation for Science, Maria Diaz de Haro 3, Bilbao, 48013, Spain E Solano R Di Candia KG Fedorov L Zhong S Felicetti EP Menzel M Sanz F Deppe A Marx R Gross Correspondence to R Di Candia. All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 
Di Candia, R., Fedorov, K., Zhong, L. et al. Quantum teleportation of propagating quantum microwaves. EPJ Quantum Technol. 2, 25 (2015). https://doi.org/10.1140/epjqt/s40507-015-0038-9 Received: 08 July 2015. Accepted: 29 November 2015. Keywords: coherent state; Wigner function; quantum teleportation; weak measurement; high electron mobility transistor.
The rocky road to organics needs drying Muriel Andreani ORCID: orcid.org/0000-0001-8043-0905 1,2, Gilles Montagnac ORCID: orcid.org/0000-0001-9938-0282 1, Clémentine Fellah1, Jihua Hao3,4,5, Flore Vandier1, Isabelle Daniel ORCID: orcid.org/0000-0002-1448-7919 1, Céline Pisapia ORCID: orcid.org/0000-0002-1432-436X 6, Jules Galipaud7,8, Marvin D. Lilley9, Gretchen L. Früh Green10, Stéphane Borensztajn6 & Bénédicte Ménez6 Nature Communications volume 14, Article number: 347 (2023) How simple abiotic organic compounds evolve toward more complex molecules of potentially prebiotic importance remains a missing key to establish where life possibly emerged. The limited variety of abiotic organics, their low concentrations and the possible pathways identified so far in hydrothermal fluids have long hampered a unifying theory of a hydrothermal origin for the emergence of life on Earth. Here we present an alternative road to abiotic organic synthesis and diversification in hydrothermal environments, which involves magmatic degassing and water-consuming mineral reactions occurring in mineral microcavities. This combination gathers key gases (N2, H2, CH4, CH3SH) and various polyaromatic materials associated with nanodiamonds and mineral products of olivine hydration (serpentinization). This endogenous assemblage results from re-speciation and drying of cooling C–O–S–H–N fluids entrapped below 600 °C–2 kbars in rocks forming the present-day oceanic lithosphere. Serpentinization dries out the system toward macromolecular carbon condensation, while olivine pods keep ingredients trapped until they are remobilized for further reactions at shallower levels. Results greatly extend our understanding of the forms of abiotic organic carbon available in hydrothermal environments and open new pathways for organic synthesis encompassing the role of minerals and drying. Such processes are expected in other planetary bodies wherever olivine-rich magmatic systems get cooled down and hydrated. In nature, very few organic compounds are recognized as abiotic, i.e., formed by mechanisms that do not involve life1,2.
Abiotic methane (CH4) is the most abundant of those compounds, and can be accompanied by short-chain hydrocarbons (ethane, propane) or organic acids (formate, acetate) in fluids3,4,5,6,7,8 occurring in molecular hydrogen (H2)-enriched hydrothermal systems where olivine-bearing rocks are altered via serpentinization reactions9 in various geological contexts (i.e., subduction zones, ophiolites, mid-ocean ridges—MOR). Because of this limited variety of small abiotic organic molecules and their strong dilution in hydrothermal fluids, prebiotic reactions cannot easily lead to more complex molecules of biological interest, thus constituting a limiting factor for a unifying hypothesis for a hydrothermal origin of life on Earth. Without evidence for alternative abiotic organic molecules or pathways, and based on an abundance of diverse organic molecules in comets and meteorites10,11,12 (e.g., carbonaceous kerogen-like material, amino acids, polycyclic aromatic, or aliphatic hydrocarbons), many have considered that life's ingredients had an extraterrestrial origin. Recent studies of serpentinized rocks along the Mid-Atlantic Ridge (MAR) have highlighted low-temperature (T), abiotic formation of aromatic amino acids via Friedel–Crafts reactions catalyzed by an iron-rich saponite clay13. Such a process requires a nitrogen source for amine formation and a polyaromatic precursor whose origin remains unknown, and suggests the availability of more diverse abiotic organic reactants than previously expected on Earth, notably in the subseafloor. The discovery of low-T formation of abiotic carbonaceous matter in ancient oceanic lithosphere14 also leads to the consideration of new paradigms for organic synthesis pathways within the rocks hosting hydrothermal fluid circulation15. Processes leading to such complex, condensed compounds during rock alteration are unknown but must differ from the mineral-catalyzed Fischer-Tropsch Type (FTT) process that has been most commonly invoked so far in hydrothermal fluids16,17,18,19 to explain the formation of short-chained hydrocarbons. Understanding the variety and formation mechanisms of abiotic organic compounds on Earth, as well as their preservation, has important implications for the global carbon cycle, but also expands the inventory of the forms of carbon available for present-day ecosystems and prebiotic chemistry, and complements data from extraterrestrial systems20,21. Here we demonstrate how deep mid-ocean ridge processes can provide an unexpected diversity of abiotic organics as gaseous and condensed phases thanks to water-consuming mineral reactions. Our study focuses on the investigation of olivine mineral microcavities (secondary fluid inclusions (FI)) aligned along ancient fracture planes where circulating fluids were trapped within one of the deepest igneous-rock sections drilled along the MAR, i.e., IODP Hole 1309D, 1400 meters depth below seafloor (m.b.s.f.), at the Atlantis Massif (30°N MAR, IODP Expeditions 304–305, Fig. 1). Five km to the south of Site 1309, the Atlantis Massif hosts the Lost City hydrothermal field22 where the discharge of abiotic H2, CH4 and formate has been observed in fluids19,23. Within the shallow rock substrate of Hole 1309D (~170 m.b.s.f), abiotic amino acids were identified13. At deeper levels (1100–1200 m.b.s.f), olivine-rich igneous rocks such as troctolites are particularly fresh and rich in FIs, which form linear trails of various orientations within olivine grains (Fig. 1b–e).
Such FIs are inherited from the first stages of rock cracking and healing during cooling of the lithosphere, allowing the trapping of circulating fluids. Crack-healing of olivine is expected between 600 and 800 °C24,25 and at Hole 1309D fluid trapping occurs down to ~700 °C–6000 m.b.s.f. (P~2 kbar)26. During cooling, rocks were progressively exhumed below an extensive fault zone up to their present-day position (P < 0.3 kbar and T~100 °C27). Fig. 1: Location and characteristics of the magmatic rock samples. a High resolution bathymetric map of the Atlantis Massif hosting the Lost City hydrothermal field. The massif is mainly composed of mantle and mantle-derived magmatic rocks exhumed along the Mid-Atlantic Ridge (MAR) parallel to the Atlantis transform fault ("m.b.s.l." stands for meters below sea level). The inset shows its location at the MAR scale. b, c Thin section scans in natural and cross-polarized light, respectively, of a characteristic troctolite sample used in this study and recovered at 1100 meters below sea floor by drilling the Atlantis massif during the IODP Expedition 304–305 Leg 1309D (sample 1309D–228R2). d Optical view in transmitted cross-polarized light of olivine (Ol) kernels hosted in the same troctolite. Red arrows show planes of secondary fluid inclusions within olivine crystals. e Optical photomicrographs in transmitted plane-polarized light of olivine-hosted multiphasic fluid inclusions. Diverse organic compounds and nanodiamonds in microcavities Punctual Raman analyses (see Methods) were made on 36 closed FIs in olivine grains forming three troctolites cored at IODP Hole 1309D (intervals 228R2, 235R1, and 247R3). The samples were very fresh and only displayed a localized alteration along thin serpentinized veinlets, underlined by magnetite grains (Fig. 1b, c). All FIs contained H2(g) and/or CH4(g) as well as secondary minerals serpentine (lizardite ± antigorite), brucite, magnetite, or carbonates (calcite or magnesite), as previously observed in similar or ancient rocks18,28,29,30,31,32 (Fig. 2). In addition, we documented for the first time in present-day oceanic lithosphere the presence of N2(g), methanethiol (CH3SH(g)), and variably disordered polyaromatic carbonaceous materials (PACMs) closely associated with secondary minerals in FIs (Fig. 2a, c). Fig. 2: Representative punctual Raman analyses of individual fluid inclusions. They show a large diversity of gaseous109,110,111 (a, b) and secondary mineral phases112,113 and of polyaromatic carbonaceous material (PACM) (b, c)33,34,35,36. CH4(g) is well identified by its triplets at 2917, 3020, 3070 cm−1 whereas H2(g) raman shifts are found between 4152–4142 and 4155–4160 cm−1. N2(g) displays a thin band at 2330 cm−1 while the thiol group (-SH) of methanethiol CH3SH(g) is observed at 2581 cm−1. The PACMs are characterized in their first order region by two broad bands assigned to the disorder (D) band and the graphite (G) band. Few tens of nm-sized nano-diamonds (nD) are identified by the characteristic downward shift of the D band at ~1325 cm−1 (ranging between 1313-1332 cm−1), its broadening (FWHM-D of 54–70 cm−1) and an associated G band near 1550 cm−141,42,43,44. Interpretation of parameter variability in nD is complex and beyond the scope of the present contribution. Srp serpentine, Cal calcite, Mag magnetite. To further investigate the nature of the PACMs, high-resolution 3D Raman mapping was carried out on two FIs from one olivine grain of sample 1309D-228R2 (Fig. 1d, e) and were named FI3 and FI5 (Fig. 
3a and Supplementary Movie 1). The FI that was richest in PACM (FI5) was then milled and imaged using focused ion beam (FIB)-scanning electron microscopy (SEM) associated with electron dispersive X-ray spectrometry (EDS) (Fig. 3c–f) before being extracted as an ultrathin section (Supplementary Fig. 1) for high resolution transmission electron microscopy (HR-TEM) and X-ray photoelectron spectroscopy (XPS). See Methods for details. Fig. 3: Diversity of gaseous and condensed abiotic organic compounds associated with secondary minerals in single fluid inclusions trapped in olivine minerals of the deep oceanic lithosphere. a Three dimensional Raman imaging of fluid inclusion FI3 showing polyaromatic carbonaceous materials (PACMs)33,34,35,36 coexisting with reduced gaseous species identified as H2, N2, CH4, and CH3SH and micrometric serpentinization-derived mineral phases109,111 (i.e., serpentine, brucite, magnetite, and carbonate). See also Supplementary Movie 1. b Raman spectra highlighting the 3 end-member types of PACMs in the two individual fluid inclusions (FI3 and FI5), all characterized by two broad bands assigned to the disorder (D) band and the graphite (G) band but showing variable position, intensity and width. For each end-member, a mean Raman spectrum is presented (bold line) with the standard deviation (colored shadows). c False color scanning electron microscopy (SEM) image of FI5 fluid inclusion freshly opened by focused ion beam milling showing distinct types of PACMs which contrast by their apparent textures: gel-like or mesoporous with nanofilaments are characteristic of PACM1 and PACM2, respectively. d Associated elemental mapping using energy dispersive X-ray spectrometry of the olivine (Ol) hosted fluid inclusion allows the identification of PACMs, fibrous polygonal serpentine (F. Srp), lamellar serpentine (lizardite, Lz), polyhedral serpentine (P. Srp), calcite (Cal), and brucite (Brc), as also supported by Raman microspectroscopy (Fig. 2). e, f Magnified SEM views of c, highlighting PACM1 and PACM2, respectively. Quantitative parameters were extracted from 3D hyperspectral Raman data collected on FI3 and FI5 (see Methods) and compared to those of graphite, terrestrial biologically-derived kerogens, as well as carbonaceous matter in meteorites (Fig. 4a)33,34,35,36. Previous investigations established that the trend followed by terrestrial kerogens and meteoritic carbonaceous matter in such diagrams reflects an increase in thermal maturation during prograde metamorphism. Thermal maturation globally involves organic matter carbonization characterized by a decrease of the full-width at half maximum of the disorder (D) band (FWHM (D))33,36. It is materialized by a decrease in heteroatom-bearing groups (e.g., O, N, or S) and aliphatic units, and an increase in the degree of aromaticity. It can be followed by graphitization during which the structural order of the graphitic material increases. This corresponds to a decrease of defects in aromatic planes, characterized by the decrease of the relative intensities R1 of the D and graphite (G) bands (R1 = ID/IG). While such a metamorphic history does not apply in the present context of cooling and exhumation of deep-seated rocks at the Atlantis Massif, this trend is used here to chemically and structurally describe the observed material based on its comparable spectroscopic characteristics. The PACMs contained in our two 3D-imaged FIs cover an unusually large range depicted in Fig. 
4a, attesting to an unexpected diversity in chemistry, aromatization degree and structural order at the micrometric scale. PACMs in FI5 alone displays a trend similar to those described in all meteorites, i.e., forming a continuum between 2 end-members, referred to here as PACM1 and PACM2 (see also Fig. 3), reflecting various degrees of aromatization. PACMs of FI3 overlap the FI5 trend but show a complementary trend toward a more structured state defined as PACM3 (see also Fig. 3) with increased crystallinity. Fig. 4: Diversity of the polyaromatic carbonaceous material that displays strong structural and chemical heterogeneities while coexisting at micrometric scale in the two individual fluid inclusions (FI3 and FI5). a PACMs heterogeneity as shown by fitting parameters derived from 3D hyperspectral Raman mapping of FI3 and FI5 (e.g. Fig. 3a), namely full width at half maximum (FWHM) of the D (i.e., disorder) band and the relative intensities R1 of the D and G (i.e., graphite) bands (=ID/IG). The colored data correspond to the data points used to calculate mean Raman signals shown in Fig. 3b. Also reported are the values obtained for kerogens, carbonaceous material in meteorites, and graphite compiled from the literature33,34,35. b, c High resolution TEM imaging of the PACMs with the qualitative chemical composition of PACM1 and PACM2 measured with energy dispersive X-ray spectrometry. The amorphous, most disordered material (PACM1) plots at the top of the data points in a and contains the highest amount of heteroatoms, notably O. The most aromatic material (PACM2) plots at the lower-right end of the graph and is richer in C, tending toward amorphous carbon. The nano-crystalline phase (~5 nm-sized) embedded in PACM1 b, and possibly in PACM2 (dotted texture in c), has been identified as nano-diamond (nD) both by high-resolution TEM (Fast Fourier Transform of the TEM image in insert) and with Raman (PACM3; Figs. 2a and 3b). PACM3 plots toward the lower-left end of the diagram in graph a, where well organized aromatic C skeleton is expected, but graphite is metastably replaced by nD here. PACM polyaromatic carbonaceous material, Ol olivine, d interfoliar distance of the (111) planes in cubic nD. PACM1 displays the most complex structure. In addition to the disorder (D) and graphite (G) bands, two additional contributions are detectable (Fig. 3b). The band at ~1735 cm−1 is characteristic of the stretching mode of the carbonyl functional group (C = O), and the shoulder around 1100 cm−1 fits well with stretching vibrations of C–O/C–O–C in ether or carboxylic ester functional groups37. In PACM1, a shoulder is also visible near 1200 cm−1. A similar component has been described in natural and synthetic functionalized carbon systems, while lacking in more carbonized or graphitic materials38. Its origin is not well understood, but it was previously attributed to vibrations of C–H/C–Calkyl in aromatic rings39. 3D data indicate that PACM1 is spatially well distributed in the FI and is primarily associated with phyllosilicates, corresponding to the gel-like phase wetting serpentine and brucite fibers in FI5 (Fig. 3c–e). HR-TEM examination attests to its amorphous structure and enrichment in O (C/O~1) and in other heteroatoms including metals and S, as shown by associated EDS analysis (Fig. 4b). This agrees with the high level of structural disorder and functionalization deduced from Raman spectra (Figs. 3b and 4a). 
Raman and SEM imaging show that PACM2, observed in both FIs, is localized on olivine walls where it forms a mesoporous texture made of nanofilaments (Fig. 3c, d and f) of ~20 nm in diameter and up to hundreds of nm long. This spongy texture was more difficult to mill under FIB, resulting in thicker foils, which limited the study of its structure using HR-TEM (Fig. 4c). Associated qualitative EDS analysis shows that PACM2 is made of more than 80% carbon (C/O~9) with traces of the same other elements as PACM1, and confirms that PACM2 is more aromatized than PACM1. Well-structured nanometric phases, ~5 nm in diameter, are locally observed within amorphous PACM1 (Fig. 4b). These nanoparticles display a lattice parameter d~0.20 nm that corresponds to the d111 of cubic nano-diamond (nD). Raman signals of nD strongly depend on their structure, purity, crystal size and surface chemistry40,41, but the smallest ones (less than a few tens of nm) commonly display a downshift and broadening of the D band due to phonon confinement effects42,43,44 and an additional G band attesting to residual defects and graphitic domains within a surrounding carbon shell41,43. PACM3, which plots in the lower-left end of Fig. 4a where the most crystalline materials are expected, displays a Raman pattern (Figs. 2c and 3b) similar to that of nD, with a characteristic D band shifted to ~1325 cm−1, a FWHM-D of 54–70 cm−1 and a G band near 1550 cm−1. 3D Raman data of nD (PACM3) are also co-localized with PACM2, which shows a dense and spotted texture made of particles 5–50 nm in diameter, hence attributed to nD (Fig. 4c). A C 1s core-level XPS spectrum was acquired on the whole FIB section of FI5 (Supplementary Figs. 1, 2, and Methods) that contains both PACM1 and PACM2 (Figs. 3c and 4a), spatially unresolved with this method. XPS data reveal a dominant contribution to the PACMs' structure of C–C/C=C and C–H bonds (~80%), in addition to C–O/C–O–C (~12%) and C=O/O–C=O (~5%) bonds (Supplementary Table 1, ref. 45). This confirms previous observations of the dominance of a macromolecular structure with H- and O-bearing functional groups. The remaining contributions correspond to carbon in the form of carbonate (CaCO3 here, Fig. 3) and carbide (Supplementary Table 1). The survey spectrum shows the presence of silicon and titanium in small quantities that could form such a carbide (refs. 46,47). The latter was not clearly located but most probably contributes to the nano-particles observed in the C-rich PACM2 (Fig. 4c), together with nD. The carbon and hydrogen isotopic composition of the CH4 contained in fluid inclusions of sample 1309D–228R2 was determined by crushing experiments (see Methods). A minimum concentration of 143 µmol of CH4 per kg rock was measured on this sample. \(\delta^{13}\mathrm{C}_{\mathrm{CH}_4}\) values of −8.9 ± 0.1‰ and \(\delta \mathrm{D}_{\mathrm{CH}_4}\) of −161.4 ± 1‰ were obtained. They fall within the abiotic range of natural CH43,31 and are close to the compositions of CH4 venting in Lost City hydrothermal chimneys nearby on the same massif19.
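For clarity on the delta notation used for these isotopic values, the short Python sketch below shows how δ13C and δD are computed from measured isotope ratios; the standard ratios in the comments are commonly quoted reference values and the sketch is illustrative, not code or data from this study.

```python
# Illustrative sketch of the delta notation used above; the standard ratios are
# commonly quoted reference values (assumptions for illustration), not values
# taken from the study.
R_VPDB = 0.0111802    # 13C/12C of the VPDB standard (commonly quoted value)
R_VSMOW = 1.5576e-4   # D/H of the VSMOW standard (commonly quoted value)

def delta_permil(r_sample, r_standard):
    """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Back out the isotope ratios corresponding to the reported values
# (d13C_CH4 = -8.9 permil vs VPDB, dD_CH4 = -161.4 permil vs VSMOW).
r_13C = R_VPDB * (1.0 - 8.9 / 1000.0)
r_D = R_VSMOW * (1.0 - 161.4 / 1000.0)
print(delta_permil(r_13C, R_VPDB), delta_permil(r_D, R_VSMOW))  # -> -8.9, -161.4
```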
The ideal combination for abiotic synthesis of diverse organics The nature of the original fluid can be inferred from the current phases found in the FIs, which attest to in situ reactions with the olivine walls. The occurrence of hydrated secondary minerals (serpentine, brucite; Figs. 2 and 3) and of C-, N- and S-bearing phases (N2, CH4, CH3SH, PACMs, carbonates, carbide, and nD) requires an aqueous fluid enriched in C, N and S to have been trapped in the olivine-hosted inclusions. At MOR, such a fluid can be magmatic or seawater-derived, or a mix of both. The fresh character and the Sr isotopic compositions of deep magmatic rocks from the same hole attest to their very limited interaction with seawater48, which resulted in late serpentine veinlets formed after the FIs (Fig. 1b–d). In any case, seawater would not be a significant source of carbon to such deep fluid inclusions since dissolved inorganic carbon (DIC) is efficiently removed from seawater at shallower levels by carbonate precipitation, and dissolved organic carbon (DOC) should be rapidly captured in shallow rocks or decomposed in high-temperature fluids (T > 200 °C)49,50. Even if some DIC persisted and contributed to the carbon in the fluid inclusions, it is unlikely that any relict DOC, notably the macromolecular component, would remain at the T (600–800 °C) and depth conditions of fluid trapping24,25,26, since seawater-derived fluids would also have undergone boiling and phase separation51. Hence, a dominant magmatic origin, resulting from magma degassing, is favored for the fluid trapped in our inclusions, as previously proposed for olivine under similar deep crustal conditions52,53. Such fluids, exsolved from melts, are dominated by CO2-rich vapors that can evolve to more H2O-enriched compositions with progressive fractionation52. Indeed, at MOR, the source of magma is located in the shallow upper mantle where equilibrium thermodynamic speciation for fluids in the C–O–H–N system strongly favors N2 and CO2 relative to NH3 and to the other carbon species considered (CO and CH4), respectively54. This supports the abiotic, dominantly mantle-derived origin of the N2 and of the carbon involved in the various organic compounds observed in the FIs. The absence of CO2 and the occurrence of H2 and CH4 suggest a complete reduction of the initial CO2 to CH4 during fluid-olivine reactions inside the inclusions, at a temperature corresponding to H2 production by serpentinization (<350–400 °C), rather than a CO2–CH4 equilibration at higher temperature. This is consistent with the clumped isotopologue data on CH4 from seafloor hydrothermal sites, including Lost City, which imply a formation of CH4 at ~250–350 °C55. The complete reduction of CO2 to CH4 also fits the \(\delta^{13}\mathrm{C}_{\mathrm{CH}_4}\) value thus inherited from the original δ13C of magmatic CO2. The main S-bearing species should be SO2 with some H2S, depending on the degassing temperature and H2 content of the fluid56,57, according to the following equilibrium: $$\mathrm{SO_2 + 3H_2 \rightleftharpoons 2H_2O + H_2S}$$ Accordingly, we argue that the aqueous fluid initially trapped in the FIs was dominantly composed of N2, CO2, and SO2, with minor amounts of H2, CH4, and H2S. The proportions of those species cannot be quantified, but a compilation of volcanic gas analyses indicates that the redox state of similar fluids, as defined by O2 fugacity (fO2,g), is usually between the log fO2,g set by the fayalite-magnetite-quartz (FMQ) mineral buffer at FMQ-1 (1 log unit below FMQ) and the nickel-nickel oxide (NiNiO) mineral buffer at NiNiO+2 (2 log units above NiNiO), and their pH is acidic with trace amounts of HCl (see Methods).
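As a toy illustration of how equilibrium (1) links the H2S/SO2 balance of such a gas to its H2 content, the sketch below evaluates the fugacity ratio implied by the mass-action expression of reaction (1); the equilibrium constant and fugacity values are placeholder assumptions chosen for illustration, not thermodynamic data from this study.

```python
# Toy illustration of reaction (1): SO2 + 3 H2 = 2 H2O + H2S.
# log_K and the fugacities below are placeholder assumptions, not fitted data.

def h2s_to_so2_ratio(log_K, f_H2, f_H2O):
    """fH2S/fSO2 implied by mass action: K = (fH2O^2 * fH2S) / (fSO2 * fH2^3)."""
    return (10.0 ** log_K) * f_H2 ** 3 / f_H2O ** 2

# A more H2-rich (more reduced) gas strongly favours H2S over SO2.
for f_H2 in (1e-3, 1e-2, 1e-1):            # bar, assumed values
    print(f_H2, h2s_to_so2_ratio(log_K=6.0, f_H2=f_H2, f_H2O=1.0))
```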
A two-step abiotic process of fluid cooling and subsequent fluid-mineral reactions (serpentinization) is proposed to account for our observations in FI as described below and in Fig. 5. Fig. 5: Proposed scenario to account for the chemical and structural diversity of the different types of carbonaceous material in microreactor-like fluid inclusions hosted in serpentinizing olivine from the deep oceanic lithosphere. a During Stage 1, the trapped fluid cools down to 400 °C at 2 kbar and its speciation evolves as depicted by the gray area in diagram b, for a plausible range of initial redox conditions (Supplementary Fig. 3). The path followed by the most reduced fluids (Log fH2 ≥ FMQ) cross-cut the pyrene-CO2 curve between 400 °C and 450 °C, allowing the early formation of pyrene, analogous to the most aromatic PACM observed on olivine walls (PACM2). In these fluids, CO2 can also partially convert to CH3SH and CH4, and N2 to NH3, before reaching 400 °C; i.e., before serpentinization initiates. The vertical orange area depicts the main serpentinization field (stage 2). c During stage 2, for T < 400 °C (2 kbar), water becomes liquid and olivine highly reactive with an expected major stage of serpentinization at T between 300 and 400 °C that produces serpentine, brucite, magnetite, and H2. Serpentinization advancement rapidly shifts fH2 and pH of the solution toward the field of organic acids as schematically drawn by the orange arrow in the speciation diagram, d, (350 °C-2kbar, methane and methanol suppressed, See Methods and Supplementary Fig. 4), with concomitant carbonate precipitation (calcite or magnesite). This hydration reaction dries out the system leading to condensation of the fluid and formation of PACM1 that wets product minerals and displays varied functional groups bearing O, H, ± S heteroatoms, in agreement with Raman, TEM, and XPS data. nD (PACM3) can metastably form from the amorphous PACM1 and PACM2 during this serpentinization stage114. Then, the chemical and structural characteristics of PACMs are expected to evolve with time and during cooling, and contribute to the formation of CH4 that was kinetically limited so far. PACM polyaromatic carbonaceous material, aH2O water activity, set to 1 or 0.1. Stage 1–Trapping and cooling of a magmatic-dominated fluid, T ~ 400–600 °C. Modeling the evolution of such a fluid during cooling from 600 °C to 400 °C at 2kbar (Fig. 5b, Supplementary Fig. 3a and Methods) shows that the most reduced fluids (Log fH2 ≥ FMQ) favor CH4, CH3SH, and graphite below ~550 °C, and NH3 below ~450 °C (Fig. 5, stage 1) if kinetics are favorable. The same fluids also first crosscut the pyrene-CO2 equilibrium near 450 °C, showing the possibility to form early aromatic materials such as pyrene, used here as a simple analog for PACMs (Fig. 5, stage 1). Deposition of carbonaceous films on freshly-cracked olivine surfaces by condensation of C–O–H fluids during abrupt cooling to 400–800 °C has been described experimentally58, inspired by observations of olivine surfaces in basalts and xenoliths59,60. In these experiments, the carbonaceous films consisted of various proportions of C–C, C–H, C–O bounds and carbide depending on the redox conditions and final temperature. In our FIs, deposition on olivine walls of the most aromatic material (PACM2), possibly associated with carbides, can be initiated by a similar surface-controlled process61 (Fig. 5, stage 1). 
The initial chemical and structural characteristics of this material are unknown, since they probably changed in the FI during the subsequent evolution of the physico-chemical conditions (stage 2). Stage 2 – Serpentinization and formation of various metastable organic compounds. Once T falls below 400 °C, fluid water becomes both liquid and gaseous and olivine becomes prone to serpentinization, leading to the formation of serpentine, brucite, magnetite and H2 (Fig. 5, stage 2)9. Serpentinization also increases the pH of the fluid, which first equilibrates with CO2, allowing carbonate precipitation. Previous modeling of these reactions in similar FIs has considered a seawater-derived aqueous fluid variably enriched in CO2(aq)18. Since PACM, CH3SH or N2 were not reported, these species were not included in the previous models, but increasing levels of H2 were predicted between 400 °C and 300 °C, shifting the system by more than 2 log fH2,g units to highly reducing conditions and allowing CH4 formation via reaction (2). Reaction (2) is thermodynamically favored with decreasing T and water activity (aH2O), but its slow kinetics at T < 400 °C16 need to be overcome by long residence times (thousands of years) of the fluids in FIs53. $$\mathrm{CO_2 + 4H_2 \rightleftharpoons CH_4 + 2H_2O}$$ However, olivine serpentinization is very fast at optimum conditions near 300 °C and can be completed in a few weeks to months62. On this short time scale, the kinetic inhibition of methane formation prevails and metastable organic compounds are predicted to form, including aliphatic and polyaromatic hydrocarbons (PAHs), organic and amino acids, or condensed carbon61,63,64. Suppressing CH4, we have modeled the redox evolution of the fluids in our FIs during serpentinization (Methods and Supplementary Fig. 3). fH2 also increases by ~2 log units between 400 °C and 300 °C, buffered here by the precipitation of PACM (using pyrene as an analog). The increase of fH2,g and pH due to serpentinization can progressively shift the carbon speciation in solution toward the fields of organic acids (e.g., formic acid, reaction (3) and orange arrow on Fig. 5d, or acetic acid, Supplementary Fig. 4), which are common species in serpentinizing systems16,23. These fields widen with decreasing water activity (aH2O) and T (Fig. 5b, Supplementary Fig. 4). $$\mathrm{CO_2 + H_2 \rightleftharpoons HCOO^- + H^+}$$ The carbon speciation of the fluid is probably even more complex, notably with contributions of other O-bearing reduced species (e.g., CO, aldehydes or alcohols)1,65. Reduction of N2 to ammonia is also favored with increasing fH2,g (e.g. Fig. 5, stage 1), making possible the formation of CN-containing organic species. The abiotic formation of CH3SH may be initiated earlier from the fluid initially trapped (Fig. 5, stage 1) but can continue during serpentinization via reaction (4)66, organic acids being potential intermediate products1.
$$\mathrm{CO_2 + H_2S + 3H_2 \rightleftharpoons CH_3SH + 2H_2O}$$ The occurrence of thioester functions is also possible through condensation of available thiols and carboxylic acids according to reaction (5): $$\mathrm{RSH + R'CO_2H \rightarrow RSC(O)R' + H_2O}$$ More generally, hydrothermal conditions favor dehydration reactions of organic compounds, such as amide or ester formation from carboxylic acids67, in addition to organic functional group transformation reactions68, which both considerably enlarge the range of organic compounds that can be formed. The absence of liquid water in the FI today attests to the full consumption of water during serpentinization of the olivine walls, which should have progressively enhanced reactions (1) to (4) and condensation reactions (e.g., reaction (5)). Based on the structural and chemical characteristics of PACM1 (Figs. 3 and 4), and its "wetting" texture on hydrous minerals (Fig. 3c, e), we propose that this complex gel-like material was formed by condensation of the organics-enriched fluid during this serpentinization-driven drying stage. Metastable phases such as PACM1 and PACM2 are prone to evolve after their formation. Here, they seem to serve as organic precursors for nD nucleation under the low P–T conditions of a modern oceanic setting, similarly to higher P–T processes in subduction zones (>3 GPa)69. The occurrence of nDs within the stability field of graphite has been previously described in ophiolites under similar conditions70 and at higher T (~500–600 °C)71, as well as experimentally72. It has also been predicted by thermodynamic models73. Our results are the first to suggest that nD formation in such low P–T environments (≤2 kbar, 400 °C) possibly occurs via an intermediate, amorphous, organic material. CH4 and possibly other hydrocarbons28 can also form later in FIs from reaction (2) or from further dehydration74 and hydrogenation75 reactions of PACMs, represented here by pyrene (C16H10) in the following reaction. $$\mathrm{C_{16}H_{10} + 27H_2 \rightleftharpoons 16CH_4}$$ New routes for abiotic organic synthesis in the Earth's primitive crust and beyond Considering the geological context of such systems, our observations indicate that the timely interplay between magmatic degassing and progressive serpentinization is an ideal combination for the abiotic synthesis of varied gaseous and condensed organic compounds. Fluid inclusions have long been recognized as a major source of H2 and CH4 in fossil and active oceanic lithosphere18,30,31,53, but the discovery of these new compounds has further implications. The likelihood that fluid inclusions can be opened during lower-T alteration processes at shallower levels in the oceanic crust renders the components trapped in the inclusions available for further diversification and complexification that can benefit prebiotic reactions. Ingredients, gathered and preserved in olivine pods, can suddenly be released in an environment that is far from the original equilibrium conditions. Such a high degree of disequilibrium favors the production of many additional organic compounds and promotes the development of chemotrophic microbial metabolisms76,77.
As an example, similar FIs could have provided the nitrogen and aromatic precursors required for the local synthesis of the abiotic amino acids described in the shallower part of the same drill hole13. PACMs may also be the locus of further precipitation of carbonaceous material assisted by mineral reactions at low T14. The studied FIs also provide the first evidence for an abiotic source of CH3SH in present-day oceanic rocks, where a thermogenic origin was favored until now66. Availability of CH3SH and organosulfur compounds such as thioesters may be crucial to initiate proto-metabolisms in primitive hydrothermal systems66. In modern systems, such FIs may also provide nutrients for hydrocarbon-degrading micro-organisms that have been revealed by genomic studies in magmatic rocks at various depths in IODP Hole 1309D78. H2- and CH4-enriched alkaline environments created by low-T serpentinization have been recognized as providing some of the most propitious conditions for the emergence of life79,80,81. Our results strengthen this hypothesis by highlighting new reaction routes that encompass the progressive time-line of geologic events in such rock systems. Unexplored prebiotic reaction pathways based on similar processes may have occurred on the primitive Earth and on Mars, where hydrothermal environments rooted in olivine-rich magmatic rocks (e.g., komatiites on Earth) are thought to be widespread82,83,84,85. The more reduced state of the mantle on early planets should have favored reduced species12,86,87 in the percolating magmatic fluids. Some studies of Martian meteorites have already suggested synthesis of PACM on Mars in relation to combined magmatic and hydrothermal processes12. This may be extrapolated to other planetary bodies such as icy moons, where serpentinization has become a focus of attention88,89.

Transmitted light microscopy. Optical imaging of rock thin sections (30 µm thick) was done under plane-polarized and cross-polarized light using a Leica transmitted-light microscope.

Micro-Raman spectroscopy. We acquired all the individual Raman spectra and 3D hyper-spectral Raman images (3D HSR images) with a Horiba LabRam HR Evolution spectrometer and a 532 nm DPSS laser. The laser beam was focused onto the sample with an Olympus ×100 objective, giving a probe spot of around 0.9 μm in diameter. We used a 600 grooves/mm grating to collect Raman spectra in two wavenumber ranges, from 120 to 1800 cm−1 and from 2500 to 3800 cm−1. The first range contains the Raman fingerprint of minerals and covers the first-order region of PACM with its D and G bands. The second range contains the hydroxyl stretching bands of phyllosilicates and hydrated oxides, the CH4 stretching modes, and the second-order region of PACM. With four acquisitions of 500 ms per spectrum, recording a 3D HSR image takes up to 39 h per spectral range, i.e., twice that for a full 3D acquisition of IF3, and almost 32 h for IF5. We minimized this time by scanning the laser beam instead of shifting the sample with the holding stage, retaining the true confocal performance of the microscope by using the DuoScan® hardware module in stepping mode. The laser was stepped across the sample in the X and Y directions by two piezoelectric mirrors, allowing small, high-accuracy mapping steps, down to 250 nm in our case. By then stacking 2D HSR images from the surface downward into the host olivine with Z steps of 250 nm, we composed a 3D image of the fluid inclusion.
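As a rough plausibility check on the acquisition times quoted above, the snippet below (not part of the original methods) estimates total mapping time from the per-spectrum dwell time and an assumed number of voxels; the voxel grids are illustrative placeholders, not dimensions reported in the paper.

```python
# Rough 3D hyperspectral Raman acquisition-time estimate.
n_accumulations = 4        # acquisitions per spectrum (from the text)
exposure_s = 0.5           # 500 ms per acquisition (from the text)
dwell_s = n_accumulations * exposure_s

def mapping_time_hours(nx, ny, nz, overhead_s=0.0):
    """Total acquisition time, in hours, for one spectral range of an nx*ny*nz voxel map."""
    n_spectra = nx * ny * nz
    return n_spectra * (dwell_s + overhead_s) / 3600.0

# Illustrative grid sizes only: the paper reports >50,000 spectra per image and
# up to ~39 h per spectral range, but not the exact voxel dimensions.
print(f"{mapping_time_hours(60, 60, 15):.1f} h for a 60 x 60 x 15 voxel map")
print(f"{mapping_time_hours(50, 50, 20, overhead_s=0.2):.1f} h with 0.2 s overhead per point")
```

With these assumptions the estimate lands at roughly 30 h per spectral range, the same order of magnitude as the acquisition times stated in the methods.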
After preprocessing of the Raman spectra (explained below), we produced these images and 3D animations with the 3D Surface and Volume Rendering (3D SVR) application for LabSpec6®. Because minerals and gases are transparent here and the microscope is confocal, 3D shapes can be rendered by associating a color channel with a Raman signature. We used filters to remove voxels with low color intensity and thresholds to control the transparency. The preprocessing differs between the 3D HSR images and the Raman signature of the PACM. For the images, we applied the following preprocessing sequence: (1) extraction of the relevant wavenumber range, (2) removal of extremely low and high signals corresponding either to weak Raman scattering or to high luminescence, and (3) correction of the background with a polynomial baseline. The first-order Raman signature of the PACM was extracted from all the spectra (more than 50,000 per image) acquired during 3D HSR image recording. We used an in-house Matlab® algorithm to extract this Raman signal from each spectrum and performed iterative data fitting with the PeakFit Matlab® tool peakfit.m90, applying the same procedure as in Quirico et al. (2014)91. The two Raman bands D and G were fitted with the so-called Lorentzian–Breit–Wigner–Fano (LBWF) spectral model36. The Raman spectral parameters characterizing the PACM were extracted: full width at half maximum (FWHM-G, FWHM-D), peak position (wG, wD) and peak intensity ratio R1 (ID/IG), together with a goodness of fit (GOF). The GOF was used to discard poor fits on the basis of RMS fitting error and R-squared. We ultimately worked with batches of typically 600 to 10,000 spectra and their associated spectral parameters. Table 1 provides the characteristic parameters of the averaged PACM end-members. Table 1 Raman characteristic parameters of the averaged PACM end-members. Data mining and visualization of the spectral parameters were performed in a workflow built with the Orange software92. We concatenated the Raman spectral parameters of the two fluid inclusions to plot the diagram representing FWHM-D as a function of R1. We selected three groups of data as end-members in this diagram to discuss the spectral properties of each group and their locations relative to one another in the inclusion.

Focused Ion Beam milling (FIB) and Scanning Electron Microscopy (SEM). After being located by transmitted light microscopy and analyzed by Raman spectroscopy, fluid inclusions within the thin-section sample were opened using a FIB–SEM workstation (NVision 40; Carl Zeiss Microscopy) coupling a SIINT zeta ion column (Seiko Instruments Inc. NanoTechnology, Japan) with a Zeiss Gemini I electron column. For FIB operation, the thin section was coated with a carbon layer of about 20 nm using a carbon coater (Leica EM ACE600) to prevent electrostatic charging. First, a platinum coating was deposited with the in-situ gas injection system to define the region of interest and to protect the surface from ion-beam damage. Prior to milling and imaging, a coarse trench was milled around the region of interest to a depth of 30 µm. Because the inclusions were closed and not visible at the sample surface, abrasion was done progressively with FIB parameters of 30 kV and 10 nA until breaking through and obtaining a cross-section of the inclusion.
Subsequently, observations were performed using backscattered electrons with the Energy- and angle-selective BSE detector (EsB) and secondary electrons with the Secondary Electrons Secondary Ions detector (SESI). These experiments were operated at 15 kV under high vacuum. The chemical composition of solids in the fluid inclusions was obtained simultaneously by EDX analyses using an Oxford Instruments Aztec system (Aztec-DDI detector, X-MaxN 50). The studied cross-sections were then extracted and thinned to a thickness of 100 nm by the ion beam following the lift-out method.

Transmission electron microscopy (TEM). The structural organization of these thin foils was investigated by TEM. A JEOL 2100 operating at 200 kV was used to study in detail the carbon-rich regions within the fluid inclusions. A STEM (scanning transmission electron microscopy) module coupled with an EDX X-Max 80 mm2 system (Oxford Instruments) allowed the acquisition of annular bright-field images and precise chemical analysis of the solid and condensed organic phases in the fluid inclusions. Fast Fourier Transform analysis of high-resolution images of nanodiamond-rich areas was used to determine the cell parameter of the ~5-nm particles using the Digital Micrograph© software.

X-ray photoelectron spectroscopy (XPS). XPS analyses were carried out at the Ecole Centrale de Lyon (France) on a PHI 5000 Versaprobe II apparatus from ULVAC-PHI Inc. A monochromatized AlKα source (1486.6 eV) was used with a spot size of 10 µm. A charge neutralization system was used to limit charging effects, and the remaining charge effect was corrected by fixing the C–C bond contribution of the C1s peak at 284.8 eV. Before acquisition of the spectra, a short Ar-ion etching was performed (250 V, 1 min) to limit the presence of adventitious carbon on the surface. C1s spectra were obtained using a pass energy of 23.5 eV. All peaks were fitted with the Multipak software using a Shirley background. Quantification was carried out using the transmission function of the apparatus and an angular-distribution correction for a 45° angle. Sensitivity factors were taken from Wagner et al. (1981)93, in which cross-section and escape-depth corrections are integrated.

Extraction and isotopic analyses of CH4(g). A portion of the studied rock sample was initially crushed with a stainless-steel mortar and pestle and sieved to collect 1–2 mm chips. These chips were then heated at 60 °C under vacuum to remove surficial water. Approximately 0.23 g of these chips was placed into a hydraulic rock crusher with a continuous He stream, similar to that of Potter and Longstaffe (2007)94, and the crusher was activated several times until the CH4 signal approached that of the blank. The gas released by crushing was focused onto a Porapak-Q-filled quartz capillary trap held at liquid-nitrogen temperature. Gases were released from the trap by moving it out of the liquid nitrogen and into a 150 °C heating block. The released gases were separated on an HP 6890 gas chromatograph fitted with an Agilent Poraplot Q column (50 m, 0.32 mm wide bore, 10 μm film) temperature-programmed from −30 to 80 °C. The column effluent was fed into an oxidation oven containing NiO, CuO and Pt catalysts, where the reduced gases were converted to CO2. Following the oxidation oven, the gases entered a Thermo Fisher Delta V isotope ratio mass spectrometer (IRMS). Data reduction was performed by comparing an in-house CH4 isotope standard to Indiana University Biogeochemical Laboratory CH4 standards #1, #2, #5, and #7.
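For readers unfamiliar with the IRMS data-reduction step, the snippet below illustrates the conventional delta notation used when comparing a sample against reference standards; it is a generic illustration with made-up ratio and delta values, not a reproduction of the laboratory's actual reduction scheme.

```python
# Generic delta-notation helper for carbon isotope-ratio data reduction.
# R is the 13C/12C ratio; all numerical values below are illustrative only.

R_VPDB = 0.0111802   # 13C/12C of the VPDB reference scale (commonly used value)

def delta13C_permil(R_sample, R_reference=R_VPDB):
    """delta13C (per mil) of a sample relative to a reference isotope ratio."""
    return (R_sample / R_reference - 1.0) * 1000.0

print(f"delta13C for R = 0.01085: {delta13C_permil(0.01085):+.1f} per mil")

# Example: normalise a raw instrument value against a working standard whose
# delta13C versus VPDB is known (both numbers are hypothetical).
delta_std_true = -38.3          # assigned value of the working standard, per mil
delta_std_measured = -37.9      # what the instrument reported for that standard
delta_sample_measured = -29.6   # raw sample value

offset = delta_std_true - delta_std_measured      # simple single-point correction
delta_sample_corrected = delta_sample_measured + offset
print(f"corrected sample delta13C = {delta_sample_corrected:.1f} per mil")
```

In practice, multi-point calibrations against several certified standards (as done here with the Indiana University standards) replace the single-point offset shown above.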
Thermodynamic modeling. Equilibrium reaction constants at elevated temperatures and pressures are used to construct the equilibrium speciation diagrams (Fig. 5 and Supplementary Figs. 3 and 4). For the aqueous species, we used the Helgeson–Kirkham–Flowers equations and predictive correlations to calculate the Gibbs free energies of formation at high temperatures and pressures95,96,97. The calculations were conducted with the Deep Earth Water (DEW) model98. The Gibbs free energies of formation of minerals and solid condensed carbons at high temperatures and pressures were calculated using the SUPCRT92b code, an adaptation of SUPCRT9299. Thermodynamic data files used in the calculations were built using data for aqueous species from Shock et al. (1997)96, and for minerals from Berman (1988)100, Berman and Aranovich (1996)101, and Sverjensky et al. (1991)102. We adopted the thermodynamic properties of CH3SH(aq) from Schulte and Rogers (2004)103, which are consistent with Shock et al. (1997)97. We also included the thermodynamic data for condensed aromatic organic carbons of Richard and Helgeson (1998)104, which are consistent with Berman (1988)100. To simulate fluid–rock reactions, we applied purely chemical irreversible mass-transfer models105 to the reactions between a cooling magmatic-dominated fluid and olivine. We treated the system as progressive alteration of olivine in a closed system in which there was always a reaction affinity for the alteration of olivine by water. We set 30 moles of olivine (Fa15Fo85) reacting with 1 kg of water, giving an approximate water:rock ratio of 1:4.5. This represents a low W/R ratio consistent with the geological setting observed here (very limited fluid captured as inclusions in olivine). Dissolved Ca, Mg, Fe, C, Si, N, and S were considered in the calculations, as well as all available minerals. All the calculations were carried out with the aqueous speciation, solubility, and chemical mass-transfer codes EQ3 and EQ6, which have been recompiled from a traditional version106 for the purpose of simulating temperatures and pressures higher than water-saturation conditions, using thermodynamic data files prepared as described above. The codes are freely accessible to the public through the Deep Earth Water community (http://www.dewcommunity.org/). We first simulated the volcanic gas and starting fluid using the EQ3 code. We then let the gas cool down to 400 °C before reacting with olivine in a continuously cooling (<400 °C), enclosed system (2000 bars), mimicking the high-temperature and low-pressure environment where the fluid inclusions formed. This is within the T range of the fluids when they were trapped in the inclusions. The cooling rate is set by the following equation in the model input:

$$\mathrm{tempC} = \mathrm{tempC}_0 + tk1\,\xi + tk2\,\xi^{2} + tk3\,\xi^{3} \qquad (0 \le \xi \le 1)$$

where tempC0 represents the initial temperature in °C, ξ represents the reaction extent, and tk1, tk2, and tk3 are three parameters. Here, we set tk1 = −200 for the two cooling calculations: 600–400 °C (without olivine) and <400 °C (with olivine); we used the first cooled fluid (at 400 °C) as the starting fluid to react with olivine in the second-stage cooling calculation. Volcanic gas is mainly composed of steam (H2O), CO2, and H2, with other trace gases107,108.
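The polynomial cooling schedule above is straightforward to evaluate; the sketch below (not taken from the paper's input files) does so for the tk1 = −200 case quoted in the text, with tk2 = tk3 = 0 assumed for simplicity since only tk1 is specified.

```python
# Evaluate the EQ6-style cooling schedule tempC = tempC0 + tk1*xi + tk2*xi**2 + tk3*xi**3.
# Only tk1 = -200 is given in the text; tk2 = tk3 = 0 is an assumption made here.

def temp_c(xi, temp_c0, tk1=-200.0, tk2=0.0, tk3=0.0):
    """Temperature (degC) as a function of reaction extent xi in [0, 1]."""
    if not 0.0 <= xi <= 1.0:
        raise ValueError("reaction extent xi must lie in [0, 1]")
    return temp_c0 + tk1 * xi + tk2 * xi**2 + tk3 * xi**3

# Stage 1: cooling of the gas from 600 degC (no olivine);
# stage 2: cooling from 400 degC while reacting with olivine.
for stage, t0 in (("gas only", 600.0), ("gas + olivine", 400.0)):
    path = [temp_c(x, t0) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
    print(stage, [f"{t:.0f} degC" for t in path])
```

With these assumptions, the first calculation cools linearly from 600 to 400 °C over the full reaction extent and the second from 400 to 200 °C, consistent with the two temperature windows described in the text.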
The composition of the volcanic gas varies depending on several geological factors, including the extent of degassing of the magma, redox state, and temperature and cooling history108. Under the circumstances of this study, the simulation used volcanic CO2,g as the only carbon source. Provided the reported CO2/H2O ratio in volcanic gases, we set the starting CO2/H2O ratio as 0.3 in our starting fluid. Compilation of the volcanic gas indicated that the redox state of volcanic gas is between the log fO2,g values set by fayalite-magnetite-quartz (FMQ) mineral buffer minus one log unit (FMQ-1) and nickel-nickel oxide (Ni/NiO) mineral buffer plus two log unit (Ni/NiO+2) (Symonds et al., 1994). In our simulation, we set log fO2,g of the starting fluid equal to these two values, representing two boundary cases (Supplementary Fig. 3). As the starting fluid would dissolve high pressures of CO2,g and trace amounts of HCl and S gases107,108, the starting pH would be acidic. The neutral pH at 600 °C and 2 kbars is 5.3. Therefore, in our simulation, we set the initial pH as 4 to represent an acidic condition. The data supporting the findings of this study are available within the paper and its Supplementary Information. Any additional information is available from the corresponding author upon request. Reeves, E. P. & Fiebig, J. Abiotic synthesis of methane and organic compounds in Earth's lithosphere. Elements 16, 25–31 (2020). Sephton, M. A. & Hazen, R. M. On the origins of deep hydrocarbons. Rev. Mineral. Geochem. 75, 449–465 (2013). Etiope, G. Abiotic methane on Earth. Rev. Geophys. 51, (2013). Konn, C., Charlou, J. L., Holm, N. G. & Mousis, O. The production of methane, hydrogen, and organic compounds in ultramafic-hosted hydrothermal vents of the mid-atlantic ridge. Astrobiology 15, 381–399 (2015). Lang, S. Q., Butterfield, D. A., Lilley, M. D., Paul Johnson, H. & Hedges, J. I. Dissolved organic carbon in ridge-axis and ridge-flank hydrothermal systems. Geochim. Cosmochim. Acta 70, 3830–3842 (2006). Sherwood Lollar, B. et al. A window into the abiotic carbon cycle – Acetate and formate in fracture waters in 2.7 billion year-old host rocks of the Canadian Shield. Geochim. Cosmochim. Acta 294, 295–314 (2021). Vitale Brovarone, A. et al. Massive production of abiotic methane during subduction evidenced in metamorphosed ophicarbonates from the Italian Alps. Nat. Commun. 8, 1–13 (2017). Eickenbusch, P. et al. Origin of short-chain organic acids in serpentinite mud volcanoes of the Mariana convergent margin. Front. Microbiol 10, 1–21 (2019). McCollom, T. M. & Bach, W. Thermodynamic constraints on hydrogen generation during serpentinization of ultramafic rocks. Geochim. Cosmochim. Acta 73, 856–875 (2009). Anders, E. Pre-biotic organic matter from comets and asteroids. Nature 342, 255–257 (1989). Bonal, L., Bourot-Denise, M., Quirico, E., Montagnac, G. & Lewin, E. Organic matter and metamorphic history of CO chondrites. Geochim. Cosmochim. Acta 71, 1605–1623 (2007). Steele, A., McCubbin, F. M. & Fries, M. D. The provenance, formation, and implications of reduced carbon phases in Martian meteorites. Meteorit. Planet. Sci. 51, 2203–2225 (2016). Ménez, B. et al. Abiotic synthesis of amino acids in the recesses of the oceanic lithosphere. Nature 564, 59–63 (2018). Sforna, M. C. et al. Abiotic formation of condensed carbonaceous matter in the hydrating oceanic crust. Nat. Commun. 9, (2018). Andreani, M. & Ménez, B. 
New Perspectives on Abiotic Organic Synthesis and Processing during Hydrothermal Alteration of the Oceanic Lithosphere. Deep Carbon: Past to Present (2019). https://doi.org/10.1017/9781108677950.015. McCollom, T. M. Laboratory simulations of abiotic hydrocarbon formation in Earth's deep subsurface. Rev. Mineral. Geochem. 75, 467–494 (2013). Horita, J. & Berndt, M. E. Abiogenic Methane formation and isotopic fractionation under hydrothermal conditions. Sci. (80-.) 285, 2–5 (1999). Klein, F., Grozeva, N. G. & Seewald, J. S. Abiotic methane synthesis and serpentinization in olivine-hosted fluid inclusions. Proc. Natl Acad. Sci. USA 116, 17666–17672 (2019). Proskurowski, G. et al. Abiogenic hydrocarbon production at lost city hydrothermal field. Science 319, 604–607 (2008). ten Kate, I. L. Organic molecules on Mars. Sci. (80-.) 360, 1068–1069 (2018). Glein, C. R., Baross, J. A. & Waite, J. H. The pH of Enceladus' ocean. Geochim. Cosmochim. Acta 162, 202–219 (2015). Kelley, D. S. et al. An off-axis hydrothermal vent field near the Mid-Atlantic Ridge at 30°N. Nature 412, 8–12 (2001). Lang, S. Q., Butterfield, D. A., Schulte, M., Kelley, D. S. & Lilley, M. D. Elevated concentrations of formate, acetate and dissolved organic carbon found at the Lost City hydrothermal field. Geochim. Cosmochim. Acta 74, 941–952 (2010). Demartin, B., Hirth, G. & Evans, B. Experimental Constraints on Thermal Cracking of Peridotite at Oceanic Spreading Centers. in Mid‐Ocean Ridges: Hydrothermal Interactions Between the Lithosphere and Oceans (eds. German, C. R., Lin, J. & Parson, L. M.) (AGU, 2004). https://doi.org/10.1029/148GM07. Harper, G. D. Tectonics of slow spreading mid‐ocean ridges and consequences of a variable depth to the brittle/ductile transition. Tectonics 4, 395–409 (1985). Castelain, T., McCaig, A. M. & Cliff, R. A. Fluid evolution in an Oceanic Core Complex: A fluid inclusion study from IODP hole U1309 D—Atlantis Massif, 30?N, Mid-Atlantic Ridge. Geochemistry, Geophys. Geosystems 1193–1214 https://doi.org/10.1002/2013GC004975.Received. (2014) Blackman, D.K., et al., and the Expedition 304/305 Scientists. Site U1309. in Proceedings of the IODP, 304/305: College Station TX (Integrated Ocean Drilling Program Management International, Inc.), https://doi.org/10.2204/iodp.proc.304305.103.2006. (2006) Grozeva, N. G., Klein, F., Seewald, J. S. & Sylva, S. P. Chemical and isotopic analyses of hydrocarbon-bearing fluid inclusions in olivine-rich rocks. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 378, (2020). Miura, M., Arai, S. & Mizukami, T. Raman spectroscopy of hydrous inclusions in olivine and orthopyroxene in ophiolitic harzburgite: Implications for elementary processes in serpentinization. J. Mineral. Petrol. Sci. 106, 91–96 (2011). Sachan, H. K., Mukherjee, B. K. & Bodnar, R. J. Preservation of methane generated during serpentinization of upper mantle rocks: evidence from fluid inclusions in the Nidar ophiolite, Indus Suture Zone, Ladakh (India). Earth Planet. Sci. Lett. 257, 47–59 (2007). Kelley, D. S. & Früh-Green, G. L. Abiogenic methane in deep-seated mid-ocean ridge environments: Insights from stable isotope analyses. J. Geophys. Res. Solid Earth 104, 10439–10460 (1999). Zhang, L., Wang, Q., Ding, X. & Li, W. C. Diverse serpentinization and associated abiotic methanogenesis within multiple types of olivine-hosted fluid inclusions in orogenic peridotite from northern Tibet. Geochim. Cosmochim. Acta 296, 1–17 (2021). Delarue, F. et al. 
The Raman-derived carbonization continuum: a tool to select the best preserved molecular structures in Archean Kerogens. Astrobiology 16, 407–417 (2016). Bonal, L., Quirico, E., Flandinet, L. & Montagnac, G. Thermal history of type 3 chondrites from the Antarctic meteorite collection determined by Raman spectroscopy of their polyaromatic carbonaceous matter. Geochim. Cosmochim. Acta 189, 312–337 (2016). Quirico, E. et al. Prevalence and nature of heating processes in CM and C2-ungrouped chondrites as revealed by insoluble organic matter. Geochim. Cosmochim. Acta 241, 17–37 (2018). Ferrari, A. C. & Robertson, J. Interpretation of Raman spectra of disordered and amorphous carbon. Phys. Rev. B 61, 14095–14107 (2000). Socrates, G. Infrared and Raman characteristic group frequencies. Tables and charts. (JOHN WILEY & SONS, LTD, 2001). Ferralis, N., Matys, E. D., Knoll, A. H., Hallmann, C. & Summons, R. E. Rapid, direct and non-destructive assessment of fossil organic matter via microRaman spectroscopy. Carbon N. Y 108, 440–449 (2016). Li, X., Hayashi, J. & Li, C. Z. FT-Raman spectroscopic study of the evolution of char structure during the pyrolysis of a Victorian brown coal. Fuel 85, 1700–1707 (2006). Korepanov, V. I. et al. Carbon structure in nanodiamonds elucidated from Raman spectroscopy. Carbon N. Y. 121, 322–329 (2017). Mochalin, V. N., Shenderova, O., Ho, D. & Gogotsi, Y. The properties and applications of nanodiamonds. Nat. Nanotechnol. 7, 11–23 (2012). Osswald, S., Mochalin, V. N., Havel, M., Yushin, G. & Gogotsi, Y. Phonon confinement effects in the Raman spectrum of nanodiamond. Phys. Rev. B - Condens. Matter Mater. Phys. 80, (2009). Mermoux, M., Chang, S., Girard, H. A. & Arnault, J. C. Raman spectroscopy study of detonation nanodiamond. Diam. Relat. Mater. 87, 248–260 (2018). Yoshikawa, M., Katagiri, G., Ishida, H., Ishitani, A. & Akamatsu, T. Raman spectra of diamondlike amorphous carbon films. Solid State Commun. 66, 1177–1180 (1988). Moulder, J. F. & Chastain, J. Handbook of X-ray photoelectron spectroscopy: a reference book of standard spectra for identification and interpretation of XPS data. Phys. Electron. Div. Perkin-Elmer Corp. 221–256 (1992). Wang, Y.-Y., Kusumoto, K. & Li, C.-J. XPS analysis of SiC films prepared by radio frequency plasma sputtering. Phys. Procedia 32, 95–102 (2012). Krzanowski, J. E. & Leuchtner, R. E. Chemical, mechanical, and tribological properties of pulsed‐laser‐deposited titanium carbide and vanadium carbide. J. Am. Ceram. Soc. 80, 1277–1280 (1997). Delacour, A., Früh-green, G. L., Frank, M., Gutjahr, M. & Kelley, D. S. Sr- and Nd-isotope geochemistry of the Atlantis Massif (30 °N, MAR): Implications for fluid fluxes and lithospheric heterogeneity. Chem. Geol. 254, 19–35 (2008). Tertieten, L., Fruh-Green, G. L. & Bernasconi, S. M. Distribution and Sources of Carbon in Serpentinized Mantle Peridotites at the Atlantis Massif (IODP Journal of Geophysical Research: Solid Earth. J. Geophys. Res. Solid Earth 126, (2021). Hawkes, J. A. et al. Efficient removal of recalcitrant deep-ocean dissolved organic matter during hydrothermal circulation. 8, (2015). Früh-green, G. L. et al. Diversity of magmatism, hydrothermal processes and microbial interactions at mid-ocean ridges. Nat. Rev. Earth Environ. https://doi.org/10.1038/s43017-022-00364-y. (2022) Kelley, D. S. Methane-rich fluids in the oceanic crust. J. Geophys. Res. Solid Earth 101, 2943–2962 (1996). McDermott, J. M., Seewald, J. S., German, C. R. & Sylva, S. P. 
Pathways for abiotic organic synthesis at submarine hydrothermal fields. Proc. Natl Acad. Sci. USA 112, 7668–7672 (2015). Frost, D. J. & McCammon, C. A. The Redox State of Earth's Mantle. Annu. Rev. Earth Planet. Sci. 36, 389–420 (2008). Wang, D. T., Reeves, E. P., Mcdermott, J. M., Seewald, J. S. & Ono, S. Clumped isotopologue constraints on the origin of methane at seafloor hot springs. Geochim. Cosmochim. Acta 223, 141–158 (2018). Gaillard, F., Scaillet, B., Pichavant, M. & Iacono-Marziano, G. The redox geodynamics linking basalts and their mantle sources through space and time. Chem. Geol. 418, 217–233 (2015). Hoshyaripour, G., Hort, M. & Langmann, B. How does the hot core of a volcanic plume control the sulfur speciation in volcanic emission? Geochemistry, Geophys. Geosystems 13, (2012). Tingle, T. N. & Hochella, M. F. Formation of reduced carbonaceous matter in basalts and xenoliths: Reaction of C-O-H gases on olivine crack surfaces. Geochim. Cosmochim. Acta 57, 3245–3249 (1993). Tingle, T. N., Hochella, M. F., Becker, C. H. & Malhotra, R. Organic compounds on crack surfaces in olivine from San Carlos, Arizona and Hualalai Volcano, Hawaii. Geochim. Cosmochim. Acta 54, 477–485 (1990). Mathez, E. A. & Delaney, J. R. The nature and distribution of carbon in submarine basalts and peridotite nodules. Earth Planet. Sci. Lett. 56, 217–232 (1981). Zolotov, M. Y. & Shock, E. L. A thermodynamic assessment of the potential synthesis of condensed hydrocarbons during cooling and dilution of volcanic gases. J. Geophys. Res. Solid Earth 105, 539–559 (2000). McCollom, T. M. et al. Temperature trends for reaction rates, hydrogen generation, and partitioning of iron during experimental serpentinization of olivine. Geochim. Cosmochim. Acta 181, 175–200 (2016). Shock, E. L. Geochemical constraints on the origin of organic compounds in hydrothermal systems. Orig. Life Evol. Biosph. 20, 331–367 (1990). Milesi, V., McCollom, T. M. & Guyot, F. Thermodynamic constraints on the formation of condensed carbon from serpentinization fluids. Geochim. Cosmochim. Acta 189, 391–403 (2016). Seewald, J. S., Zolotov, M. Y. & McCollom, T. Experimental investigation of single carbon compounds under hydrothermal conditions. Geochim. Cosmochim. Acta 70, 446–460 (2006). Reeves, E. P., McDermott, J. M. & Seewald, J. S. The origin of methanethiol in midocean ridge hydrothermal fluids. Proc. Natl Acad. Sci. USA 111, 5474–5479 (2014). Shock, E. L. Hydrothermal dehydration of aqueous organic compounds. Geochim. Cosmochim. Acta 57, 3341–3349 (1993). Shipp, J. et al. Organic functional group transformations in water at elevated temperature and pressure: Reversibility, reactivity, and mechanisms. Geochim. Cosmochim. Acta 104, 194–209 (2013). Frezzotti, M. L. Diamond growth from organic compounds in hydrous fluids deep within the Earth. Nat. Commun. 10, (2019). Pujol-Solà, N. et al. Diamond forms during low pressure serpentinisation of oceanic lithosphere. Geochem. Perspect. Lett. 15, 19–24 (2020). Farré-de-pablo, J. et al. A shallow origin for diamonds in ophiolitic chromitites. 47, 75–78 (Geology 2018). Simakov, S. K., Dubinchuk, V. T., Novikov, M. P. & Melnik, N. N. Metastable nanosized diamond formation from fluid phase. SRX Geosci. 2010, 1–5 (2010). Manuella, F. C. Can nanodiamonds grow in serpentinite-hosted hydrothermal systems? A theoretical modelling study. Mineral. Mag. 77, 3163–3174 (2013). Seewald, J. S. 
Aqueous geochemistry of low molecular weight hydrocarbons at elevated temperatures and pressures: Constraints from mineral buffered laboratory experiments. Geochim. Cosmochim. Acta 65, 1641–1664 (2001). Milesi, V. et al. Formation of CO2, H2 and condensed carbon from siderite dissolution in the 200–300 °C range and at 50 MPa. Geochim. Cosmochim. Acta 154, 201–211 (2015). Canovas, P. A., Hoehler, T. & Shock, E. L. Geochemical bioenergetics during low-temperature serpentinization: an example from the Samail ophiolite, Sultanate of Oman. J. Geophys. Res. Biogeosci. 122, 1821–1847 (2017). Shock, E. & Canovas, P. The potential for abiotic organic synthesis and biosynthesis at seafloor hydrothermal systems. Geofluids 10, 161–192 (2010). Mason, O. U. et al. First investigation of the microbiology of the deepest layer of ocean crust. PLoS ONE 5, (2010). Martin, W. & Russell, M. J. On the origin of biochemistry at an alkaline hydrothermal vent. Philos. Trans. R. Soc. B Biol. Sci. 362, 1887–1925 (2007). Sleep, N. H., Bird, D. K. & Pope, E. C. Serpentinite and the dawn of life. Philos. Trans. R. Soc. B 366, 2857–2869 (2011). Preiner, M. et al. Serpentinization: Connecting geochemistry, ancient metabolism and industrial hydrogenation. Life 8, (2018). Quesnel, Y. et al. Serpentinization of the martian crust during Noachian. Earth Planet. Sci. Lett. 277, 184–193 (2009). Bultel, B., Quantin-Nataf, C., Andréani, M., Clénet, H. & Lozac'h, L. Deep alteration between Hellas and Isidis Basins. Icarus 260, 141–160 (2015). Arndt, N. T. & Nisbet, E. G. Processes on the Young Earth and the Habitats of Early Life. Annu. Rev. Earth Planet. Sci. 40, 521–549 (2012). Sossi, P. A. et al. Petrogenesis and geochemistry of Archean Komatiites. J. Petrol. 57, 147–184 (2016). Li, Y. & Keppler, H. Nitrogen speciation in mantle and crustal fluids. Geochim. Cosmochim. Acta 129, 13–32 (2014). Zolotov, M. & Shock, E. Abiotic synthesis of polycyclic aromatic hydrocarbons on Mars. J. Geophys. Res. 104, (1999). Russell, M. J. et al. The drive to life on wet and Icy Worlds. Astrobiology 14, 308–343 (2014). Vance, S. D. & Daswani, M. M. Serpentinite and the search for life beyond Earth. Philos. Trans. R. Soc. A 378, (2020). O'Haver, T. iPeak (https://www.mathworks.com/matlabcentral/fileexchange/23850-ipeak). MATLAB Cent. File Exch. (2021). Quirico, E. et al. Origin of insoluble organic matter in type 1 and 2 chondrites: New clues, new questions. Geochim. Cosmochim. Acta 136, 80–99 (2014). Demsar, J. et al. Orange: data mining toolbox in Python. J. Mach. Learn. Res. 14, 2349–2353 (2013). MATH Google Scholar Wagner, C. D., Raymond, R. H. & Gale, L. H. Empirical atomic sensitivity factors for quantitative analysis by electron spectroscopy for chemical analysis. Surf. interface Anal. 3, 211–225 (1981). Potter, J. & Longstaffe, F. J. A gas-chromatograph, continuous flow-isotope mass-spectrometry method for δ13C and δD measurement of complex fluid inclusion volatiles: examples from the Khibina alkaline igneous complex, northwest Russia and the south Wales coalfields. Chem. Geol. 244, 186–201 (2007). Helgeson, H. C., Kirkham, D. H. & Flowers, G. C. Theoretical prediction of the thermodynamic behavior of aqueous electrolytes by high pressures and temperatures; IV, calculation of activity coefficients, osmotic coefficients, and apparent molal and standard and relative partial molal properties to 600 d. Am. J. Sci. 281, 1249–1516 (1981). Shock, E. L., Sassani, D. C., Willis, M. & Sverjensky, D. A. 
Inorganic species in geologic fluids: correlations among standard molal thermodynamic properties of aqueous ions and hydroxide complexes. Geochim. Cosmochim. Acta 61, 907–950 (1997). Sverjensky D. A., Shock, E. L., & Helgeson, H. C. Prediction of the thermodynamic properties of aqueous metal complexes to 1000 °C and 5 kb. Geochim. Cosmochim. Acta 1359–1412 (1997). Sverjensky, D. A., Harrison, B. & Azzolini, D. Water in the deep Earth: the dielectric constant and the solubilities of quartz and corundum to 60 kb and 1200 °C. Geochim. Cosmochim. Acta 129, 125–145 (2014). Johnson, J. W., Oelkers, E. H. & Helgeson, H. C. SUPCRT92: a software package for calculating the standard molal thermodynamic properties of minerals, gases, aqueous species, and reactions from 1 to 5000 bars and 0 to 1000 °C. vol. 18 (1992). Berman, R. G. Internally-consistent thermodynamic data for minerals in the system Na2O-K2O-CaO-MgO-FeO-Fe2O3-Al2O3-SiO2-TiO2-H2O-CO2. J. Petrol. 29, 445–522 (1988). Berman, R. & Aranovich, L. Optimized standard state and solution properties of minerals. Contrib. Mineral. Petrol. 126, 1–24 (1996). Sverjensky, D. A., Hemley, J. J. & D'Angelo, W. M. Thermodynamic assessment of hydrothermal alkali feldspar-mica- aluminosilicate equilibria. Geochim. Cosmochim. Acta 55, 989–1004 (1991). Schulte, D. & Rogers, L. Thiols in hydrothermal solution: standard partial molal properties and their role in the organic geochemistry of hydrothermal environments. Geochim. Cosmochim. Acta 68, (2004). Richard, L. & Helgeson, H. C. Calculation of the thermodynamic properties at elevated temperatures and pressures of saturated and aromatic high molecular weight solid and liquid hydrocarbons in kerogen, bitumen, petroleum, and other organic matter of biogeochemical interest. Geochim. Cosmochim. Acta 62, 3591–3636 (1998). Helgeson, H. C. Mass transfer among minerals and hydrothermal solutions. in Geochemistry of hydrothermal ore deposits (ed. Barnes, H. L.) 568–606 (John Wiley & Sons, New York, 1979). Wolery, T. EQ3/6: A software package for geochemical modeling of aqueous systems: package overview and installation guide (version 7.0). (1992). Giggenbach, W. F. Chemical Composition of Volcanic Gases. in Monitoring and Mitigation of Volcano Hazards (eds. Scarpa, R. & Tilling, R. I.) 221–256 (Springer, 1996). Symonds, R. B., Rose, W. I., Bluth, G. J. & Gerlach, T. M. Volcanic-gas studies: methods, results, and applications. in Volatiles in magmas (ed. Carroll, M.R., and Holloway, J. R.) 517 (Mineralogical Society of America, 1994). May, W. & Pace, E. L. The vibrational spectra of methanethiol. 481, (1987). Burke, E. A. J. Raman microspectrometry of fluid inclusions. Lithos 55, 139–158 (2001). Frezzotti, M. L., Tecce, F. & Casagli, A. Raman spectroscopy for fluid inclusion analysis. J. Geochem. Explor. 112, 1–20 (2012). Petriglieri, J. R. et al. Micro-Raman mapping of the polymorphs of serpentine. J. Raman Spectrosc. 953–958 (2015) https://doi.org/10.1002/jrs.4695. de Faria, D. L. A., Venaü ncio Silva, S. & de Oliveira, M. T. J. Raman Spectrosc. 28, 873–878 (1997). We acknowledge the IODP program (https://www.iodp.org/) and the IODP 304–305 party. This research was supported by the Deep Carbon Observatory awarded by Alfred P. Sloan Foundation, the French CNRS (Mission pour l'Interdisciplinarité, Défi Origines 2018) and the Institut Universitaire de France (MA). 
We are also grateful to the LABEX Lyon Institute of Origins (ANR-10-LABX-0066) of the Université de Lyon for its financial support within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) of the French government operated by the National Research Agency (ANR). J.H. acknowledges financial support from the Chinese Academy of Sciences Pioneer Hundred Talents Program and a CIFAR Azrieli Global Scholarship. The authors are grateful to Lisa Mayhew, Manuel Reinhardt and anonymous reviewers for their constructive comments that considerably improved our manuscript. Université de Lyon, Univ Lyon 1, CNRS UMR5276, ENS de Lyon, LGL-TPE, Villeurbanne Cedex, France Muriel Andreani, Gilles Montagnac, Clémentine Fellah, Flore Vandier & Isabelle Daniel Institut Universitaire de France, Paris, France Muriel Andreani Deep Space Exploration Laboratory/CAS Key Laboratory of Crust-Mantle Materials and Environments, University of Science and Technology of China, Hefei, China Jihua Hao CAS Center for Excellence in Comparative Planetology, University of Science and Technology of China, Hefei, Anhui, China Blue Marble Space Institute of Science, Seattle, WA, USA Université Paris Cité, Institut de physique du globe de Paris, CNRS UMR 7154, Paris, France Céline Pisapia, Stéphane Borensztajn & Bénédicte Ménez Université de Lyon, Ecole Centrale de Lyon, LTDS, CNRS UMR 5513, Ecully, France Jules Galipaud Université de Lyon INSA-Lyon, MATEIS, CNRS UMR 5510, Villeurbanne, France School of Oceanography, University of Washington, Seattle, WA, USA Marvin D. Lilley Department of Earth Sciences, ETH Zurich, Zurich, Switzerland Gretchen L. Früh-Green Gilles Montagnac Clémentine Fellah Flore Vandier Isabelle Daniel Céline Pisapia Stéphane Borensztajn Bénédicte Ménez M.A., G.M., C.F., F.V., C.P., J.G., and S.B. acquired and processed the data. M.A. wrote the paper with contributions from G.M., C.F., J.H., M.D.L., G.L.F.G., I.D., and B.M. Correspondence to Muriel Andreani. Nature Communications thanks Lisa Mayhew, Manuel Reinhardt and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Description of Additional Supplementary Files Supplementary Movie 1 Andreani, M., Montagnac, G., Fellah, C. et al. The rocky road to organics needs drying. Nat Commun 14, 347 (2023). https://doi.org/10.1038/s41467-023-36038-6
Relations of rationality for special values of Rankin–Selberg $L$-functions of $\mathrm{GL}_n\times \mathrm{GL}_m$ over CM-fields
Harald Grobner and Gunja Sachdeva
Pacific Journal of Mathematics, Vol. 308 (2020), No. 2, 281–305. DOI: 10.2140/pjm.2020.308.281
We establish an "automorphic version" of Deligne's conjecture for motivic L-functions in the case of Rankin–Selberg L-functions L(s,Π × Π′) of GLn × GLm over arbitrary CM-fields F. Our main results are of two different kinds: Firstly, for arbitrary integers 1 ≤ m < n and suitable pairs (Π,Π′) of cohomological automorphic representations, we relate critical values of L(s,Π × Π′) with a product of Whittaker periods attached to Π and Π′, Blasius's CM-periods of Hecke-characters and certain nonzero values of standard L-functions. Secondly, these relations lead to quite broad generalizations of fundamental rationality results of Waldspurger, Harder and Raghuram, and others.
Keywords: periods, rationality, special values, L-function, Rankin–Selberg, GL(n)
Mathematical Subject Classification. Primary: 11F67; Secondary: 11F70, 11G18, 11R39, 22E55
Revised: 23 May 2020. Published: 9 December 2020.
Harald Grobner; Gunja Sachdeva (IISER Tirupati)
Networks and Heterogeneous Media, 2015, Volume 10, Issue 2: 387–399. DOI: 10.3934/nhm.2015.10.387
Inhomogeneities in 3 dimensional oscillatory media
Gabriela Jaramillo, University of Minnesota, School of Mathematics, 127 Vincent Hall, 206 Church St SE, Minneapolis, MN 55455
Received: December 31, 2013. Revised: November 30, 2014.
We consider localized perturbations to spatially homogeneous oscillations in dimension 3 using the complex Ginzburg-Landau equation as a prototype. In particular, we will focus on inhomogeneities that locally change the phase of the oscillations. In the usual translation-invariant spaces and at $\epsilon=0$, the linearization about these spatially homogeneous solutions results in an operator with zero eigenvalue embedded in the essential spectrum. In contrast, we show that when considered as an operator between Kondratiev spaces, the linearization is a Fredholm operator. These spaces consist of functions with algebraic localization that increases with each derivative. We use this result to construct solutions close to the equilibrium via the Implicit Function Theorem and derive asymptotics for wavenumbers in the far field.
Keywords: Chemical oscillations, pacemakers, target patterns, Kondratiev spaces, complex Ginzburg-Landau.
Mathematics Subject Classification: Primary: 35B36, 35Q56; Secondary: 46E35.
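As context for the prototype model mentioned in the abstract, here is a small, self-contained numerical sketch (not taken from the paper) that time-steps a one-dimensional complex Ginzburg-Landau equation with a spatially localized inhomogeneity acting as a local frequency perturbation; the parameter values, the Gaussian form of the perturbation, and the way it enters the equation are illustrative assumptions only.

```python
# Illustrative 1D complex Ginzburg-Landau integrator with a localized inhomogeneity:
#   A_t = (1 + i*eps*g(x)) * A + (1 + i*alpha) * A_xx - (1 + i*beta) * |A|^2 * A
# The diffusive term is advanced exactly in Fourier space; local terms use Euler steps.
import numpy as np

N, L = 512, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

alpha, beta, eps = 0.5, -1.0, 0.2
g = np.exp(-x**2)                          # assumed Gaussian inhomogeneity
dt, n_steps = 0.01, 20000

rng = np.random.default_rng(0)
A = 1.0 + 0.01 * rng.standard_normal(N)    # start near the homogeneous oscillation A = 1

lin_factor = np.exp(-dt * (1.0 + 1j * alpha) * k**2)   # exact step for the A_xx term

for _ in range(n_steps):
    # pointwise terms: growth, local frequency shift, nonlinear saturation
    A = A + dt * ((1.0 + 1j * eps * g) * A - (1.0 + 1j * beta) * np.abs(A)**2 * A)
    # diffusive term in Fourier space
    A = np.fft.ifft(lin_factor * np.fft.fft(A))

# estimate the local wavenumber selected in the far field from the phase gradient
phase = np.unwrap(np.angle(A))
k_local = np.gradient(phase, x)
print("far-field wavenumber estimate:", k_local[N // 8])
```

Such a simulation only illustrates the kind of far-field wavenumber selection the paper analyses rigorously; the article's actual contribution is the functional-analytic Fredholm setting in Kondratiev spaces, not a numerical scheme.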
Bulletin of Volcanology, August 2018, 80:67
Investigation of variable aeration of monodisperse mixtures: implications for pyroclastic density currents
Gregory M. Smith, Rebecca Williams, Pete J. Rowley, Daniel R. Parsons
First Online: 30 July 2018
The high mobility of dense pyroclastic density currents (PDCs) is commonly attributed to high gas pore pressures. However, the influence of spatial and temporal variations in pore pressure within PDCs has yet to be investigated. Theory suggests that variability in the fluidisation and aeration of a current will have a significant control on PDC flow and deposition. In this study, the effect of spatially heterogeneous gas pore pressures in experimental PDCs was investigated. Sustained, unsteady granular currents were released into a flume channel where the injection of gas through the channel base was controlled to create spatial variations in aeration. Maximum current front velocity results from high degrees of aeration proximal to the source, rather than lower sustained aeration along the whole flume channel. However, moderate aeration (i.e. ~ 0.5 minimum static fluidisation velocity (Umf_st)) sustained throughout the propagation length of a current results in greater runout distances than currents which are closer to fluidisation (i.e. 0.9 Umf_st) near to source, then de-aerating distally. Additionally, although all aerated currents are sensitive to channel base slope angle, the runout distance of those currents where aeration is sustained throughout their lengths increases by up to 54% with an increase of slope from 2° to 4°. Deposit morphologies are primarily controlled by the spatial differences in aeration: where there is a large decrease in aeration, the current forms a thick depositional wedge. Sustained gas-aerated granular currents are observed to be spontaneously unsteady, with internal sediment waves travelling at different velocities.
Keywords: Pyroclastic density current, Aerated currents, Flume, Fluidisation, Pore pressure, Slope angle
Editorial responsibility: R.J. Brown. The online version of this article (https://doi.org/10.1007/s00445-018-1241-1) contains supplementary material, which is available to authorized users.
Pyroclastic density currents (PDCs) are hazardous flows of hot, density-driven mixtures of gas and volcanic particles generated during explosive volcanic eruptions, or from the collapse of lava domes (e.g. Yamamoto et al. 1993; Branney and Kokelaar 2002; Cas et al. 2011). They are capable of depositing large ignimbrite sheets, which can exhibit a variety of sedimentary structures and grading patterns (e.g. Rowley et al. 1985; Wilson 1985; Fierstein and Hildreth 1992; Branney and Kokelaar 2002; Brown and Branney 2004; Sarocchi et al. 2011; Douillet et al. 2013; Brand et al. 2016). As evidenced by the occurrence of these deposits far from sources, PDCs can achieve long runout distances on slopes shallower than the angle of rest of granular materials, even at low volumes (e.g. Druitt et al. 2002; Cas et al. 2011; Roche et al. 2016). Explanations for these long runout distances vary according to whether the current in question is envisaged as dilute or dense (cf. Dade and Huppert 1996; Wilson 1997). PDC transport encompasses a spectrum whose end-members can be defined as either fully dilute or granular-fluid currents (Walker 1983; Druitt 1992; Branney and Kokelaar 2002; Burgisser and Bergantz 2002; Breard and Lube 2016).
In the first type, clast interactions are negligible, and support and transport of the pyroclasts are dominated by fluid turbulence at all levels in the current (Andrews and Manga 2011, 2012). In contrast, in highly concentrated granular-fluid based currents, particle interactions are important and turbulence is dampened (e.g. Savage and Hutter 1989; Iverson 1997; Branney and Kokelaar 2002). Here, the differential motion between the interstitial gas and solid particles is able to generate pore fluid pressure due to the relatively low permeability of the gas-particle mixture (Druitt et al. 2007; Montserrat et al. 2012; Roche 2012). An intermediate regime has also recently been defined, characterised by mesoscale turbulence clusters (Breard et al. 2016), which couple the dilute and dense regions of a PDC. Where dense PDCs are concerned, their high mobility is commonly attributed to the influence of fluidisation of the current's particles caused by high, long-lived gas pore pressures (Sparks 1976; Wilson 1980; Druitt et al. 2007; Roche 2012; Gueugneau et al. 2017; Breard et al. 2018). These high gas pore pressures fundamentally result from relative motion between settling particles and ascending fluid and can be produced through various processes including (i) bulk self-fluidisation (McTaggart 1960; Wilson and Walker 1982), (ii) grain self-fluidisation (Fenner 1923; Brown 1962; Sparks 1978), (iii) sedimentation fluidisation/hindered settling (Druitt 1995; Chédeville and Roche 2014), and (iv) decompression fluidisation (Druitt and Sparks 1982); see Wilson (1980) and Branney and Kokelaar (2002) for reviews. As gas pore pressures within a gas-particle mixture increase, inter-particle stresses are reduced as the particles become fluidised (Gibilaro et al. 2007; Roche et al. 2010). Fluidisation of a granular material is defined as the condition where a vertical drag force exerted by a gas flux is strong enough to support the weight of the particles, resulting in apparent friction reduction and fluid-like behaviour (Druitt et al. 2007; Gilbertson et al. 2008). The gas velocity at which this occurs is known as the minimum fluidisation velocity (Umf). Where there is a gas flux through a sediment which is less than Umf, then that sediment is partially fluidised and is often termed aerated. The gas pore pressure decreases over time during flow, once there is little or no relative gas-particle motion, according to: $$ {t}_{\mathrm{d}}\propto {H}^2/D $$ where H is the bed height and D is the diffusion coefficient of the gas (Roche 2012). PDCs are dominated by finer-grained particles, which confer a greater surface area than coarse particles, conveying low mixture permeability (Druitt et al. 2007; Roche 2012). PDCs are therefore thought to sustain high pore pressures for longer, resulting in greater mobility than their unfluidized "dry" granular counterparts (i.e. rockfalls). The detailed fluid dynamics and processes involved with pore pressure in PDCs are elusive due to the significant challenge of obtaining measurements. Moreover, the observation of depositional processes is challenging as the basal parts of PDCs are hidden by an overriding ash cloud. Scaled, physical modelling can provide a direct way to simulate and quantify the behaviour of several processes, which take place in PDCs under controlled, variable conditions, as well as creating easily accessible analogous deposits. 
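To give a feel for why pore pressure is retained much longer in thick natural currents than in thin laboratory ones, the snippet below applies the td ∝ H²/D scaling quoted above; the diffusion coefficient and current thicknesses are illustrative assumptions, so only the ratios between cases should be taken seriously.

```python
# Pore-pressure diffusion timescale scaling, t_d ~ H**2 / D (as quoted above).
# D is an assumed, illustrative hydraulic diffusion coefficient; the absolute
# times are therefore indicative only, but the ratios depend only on H**2.

def diffusion_timescale(H_m, D_m2_per_s):
    return H_m**2 / D_m2_per_s

D_assumed = 1e-2            # m^2/s, placeholder value for a fine-grained mixture
cases = {
    "laboratory current (~3 cm thick)": 0.03,
    "dense PDC base (~1 m thick)": 1.0,
    "dense PDC base (~10 m thick)": 10.0,
}

t_lab = diffusion_timescale(0.03, D_assumed)
for label, H in cases.items():
    t = diffusion_timescale(H, D_assumed)
    print(f"{label}: t_d ~ {t:,.2f} s ({t / t_lab:,.0f}x the laboratory value)")
```

Whatever value of D is adopted, a current two orders of magnitude thicker retains its pore pressure roughly four orders of magnitude longer, which is the essence of the scaling argument made in the text.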
Dam break-type experimental currents aimed at representing simplified, uniformly permeable, dense PDCs have attempted to model fluidisation processes by fluidising particles before release into a flume (Roche et al. 2002; Roche et al. 2004). These demonstrate that fluidisation has an important effect on runout distance. However, rapid pore pressure diffusion results in shorter runout distances and thinner deposits than might be expected in full scale currents (e.g. Roche et al. 2004; Girolami et al. 2008; Roche et al. 2010; Roche 2012; Montserrat et al. 2016). This is because while the material permeability in both natural and experimental currents is similar (with experimental currents being somewhat fines depleted in comparison to natural PDCs), experimental currents are much thinner than their natural counterparts, resulting in more rapid loss of pore pressure. Experiments have demonstrated that the degree of fluidisation is also important in contributing to substrate entrainment and the resulting transport capacity of fluidised currents (Roche et al. 2013). Early work on the sustained fluidisation of granular currents by injection of air at the base of the current (Eames and Gilbertson 2000) was not focused on replicating the behaviour of PDCs in particular, but did demonstrate that this was a valid method of preventing rapid pore pressure diffusion in granular currents. Rowley et al. (2014) reproduced the long-lived high gas pore pressures of sustained PDCs using an experimental flume, which fed a gas flux through a porous basal plate to simulate long pore pressure diffusion timescales in natural, thicker currents. This resulted in much greater runout distances than unaerated or initially fluidised currents. However, these experiments were unable to explore defluidisation due to the constant uniform gas supply along the flume length. Natural PDCs are unlikely to be homogenously aerated (Gueugneau et al. 2017) and are inherently heterogeneous due to factors, such as source unsteadiness and segregation of particles (Branney and Kokelaar 2002), which can cause spatial variability in factors controlling Umf, such as bulk density. Hence, different pore pressure generation mechanisms may be operating in different areas of the PDC at once. For example, fluidisation due to the exsolution of volatiles from juvenile clasts (Sparks 1978; Wilson 1980) could be dominant in one part of the PDC and fluidisation from hindered settling of depositing particles (Druitt 1995; Girolami et al. 2008) or autofluidisation from particles settling into substrate interstices (Chédeville and Roche 2014) dominant in another. It is important, then, to understand the impacts of variable fluidisation on such currents. Here, we present experiments using a flume tank which we set up to investigate the effect of spatially variable aeration on a sustained granular current at different slope angles. The flume allows the simulation of various pore pressures and states of aeration in the same current down the channel. This allows the currents to stabilise and propagate for a controlled distance before de-aeration occurs. We report how this spatially variable aeration, as well as the channel slope angle, affects the current runout distance, frontal velocity, and characteristics of the subsequent deposit. It should be noted that our work attempts to simulate the fact that PDCs are fluidised/aerated to some degree for long periods of time, rather than attempting to replicate a particular mechanism of fluidisation. 
The experimental flume is shown in Fig. 1. A hopper supplies the particles to a 0.15-m wide, 3.0-m long, channel through a horizontal lock gate 0.64 m above the channel base. The base of the flume sits above three 1.0-m long chambers, each with an independently controlled compressed air supply, which feeds into the flume through a porous plate. The flume channel can be tilted up to 10° from horizontal. A longitudinal section view of the experimental flume The air-supply plumbing allows a gas flux to be fed through the base of the flume, producing sustained aeration of the current. In such thin (< 30 mm), rapidly degassing laboratory currents, this enables us to simulate the long-lived high gas pore pressures that characterise thicker PDCs (Rowley et al. 2014). An important aspect of this flume is that the gas flux for each of the three chambers may be controlled individually, allowing the simulation of spatially variable magnitudes of pore pressures. The experiments were performed using spherical soda lime ballotini with grain sizes of 45–90 μm (average D32 = 63.4 μm calculated from six samples across the material batch, see Table 3 in Appendix A for grain size information), similar to the type of particles used in previous experimental granular currents (e.g. Roche et al. 2004; Rowley et al. 2014; Montserrat et al. 2016). D32, or the Sauter mean diameter, can be expressed as $$ {D}_{32}=\frac{1}{\sum \frac{x_{\mathrm{i}}}{d_{\mathrm{i}}}} $$ where xi is the weight fraction of particles of size di. In line with Breard et al. (2018), D32 was given here because it exerts some control on current permeability (Li and Ma 2011). These grain sizes assign the ballotini to group A of Geldart (1973), which are those materials which expand homogenously above Umf until bubbles form. As PDCs contain dominantly group A particles, this allows dynamic similarity between the natural and experimental currents (Roche 2012). Ballotini grains have a stated solid density of 2500 kg/m3 and a repose angle measured by shear box to be 26°. The experiments were recorded using high-speed video at 200 frames per second. This video recorded a side-wall area of the channel across the first and second chambers, allowing the calculation of variations in the current front velocity. Velocities were calculated at 0.1-m intervals, from high-speed video which recorded the currents across a section of the flume from 0.8 to 1.7 m. All runout measurements are given as a distance from the headwall of the flume. The variables experimentally controlled, and thus investigated, in these experiments are as follows: (i) the gas flux supplied through the base in each of the three sections of the channel and (ii) the slope angle of the channel. The slope angles examined were 2° and 4°. A range of gas supply velocities were used to vary the aeration state of the particles, all of which were below Umf as complete fluidisation would result in non-deposition. Static piles of particles used in these experiments achieve static minimum fluidisation (Umf_st) with a vertical gas velocity of 0.83 cm/s. This is comparable to Roche (2012), who used the same 45–90-μm glass ballotini. Because our fluidisation state was measured in a static pile, we explicitly use Umf_st rather than Umf in order to denote the origin of this value in these experiments. In a moving (i.e. shearing) current, Umf will be higher than Umf_st because dilatancy would be anticipated, and therefore, an increase in porosity should be observed. 
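The two small calculations below illustrate quantities defined in this section: the Sauter mean diameter D32 computed from a grain-size distribution, and the conversion between a fraction of Umf_st and the corresponding gas velocity. The example size fractions are hypothetical and are not the measured distribution of the ballotini used.

```python
# Sauter mean diameter D32 = 1 / sum(x_i / d_i), with x_i the weight fraction of
# particles of size d_i, and conversion of U_mf_st fractions to gas velocities.

def sauter_mean_diameter(fractions_and_sizes):
    """fractions_and_sizes: iterable of (weight fraction x_i, size d_i in microns)."""
    total_x = sum(x for x, _ in fractions_and_sizes)
    assert abs(total_x - 1.0) < 1e-6, "weight fractions should sum to 1"
    return 1.0 / sum(x / d for x, d in fractions_and_sizes)

# Hypothetical 45-90 micron distribution, for illustration only.
example_distribution = [(0.25, 50.0), (0.50, 65.0), (0.25, 85.0)]
print(f"D32 = {sauter_mean_diameter(example_distribution):.1f} microns")

# Gas velocities corresponding to fractions of the static minimum fluidisation
# velocity U_mf_st = 0.83 cm/s, as used in the experiments.
U_MF_ST = 0.83  # cm/s
for fraction in (0.4, 0.46, 0.53, 0.66, 0.93):
    print(f"{fraction:.2f} U_mf_st -> {fraction * U_MF_ST:.2f} cm/s")
```

The last conversion reproduces the maximum gas velocity of 0.77 cm/s quoted above for the 0.93 Umf_st aeration state.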
Aeration states were varied from 0 cm/s (non-aerated) through various levels of aeration to a maximum of 0.77 cm/s. Table 1 shows the gas velocities used as a proportion of Umf_st across the experimental set. The mass of particles comprising the currents (the "charge") was kept constant, at 10 kg for each run. Conversion of gas velocities used in the experiments into proportions of Umf_st (0.83 cm/s) Proportion of Umf_st Gas velocity (cm/s) Runout distance and current front velocity Runout distance is markedly affected by variations in the aeration states. For a given slope angle, if the aeration states are the same in all three chambers, then increasing the gas flux causes runout distances to increase. The measurable limit for runout distance in these experiments is 3 m (i.e. when the current exits the flume) (Fig. 2). In this work, when describing the aeration state of the flume as a whole, the gas velocities of each chamber are listed as proportions of Umf_st, in increasing distance from the headwall. For example, an aeration state of 0.93-0.93-0 means that the first two chambers are aerated at 0.93 Umf_st and the third chamber is unaerated. Runout distances for various aeration states on different slope angles. Results are shown as profiles of the actual deposits formed. Aeration states of the three chambers are given on the y-axis. Dividing lines show the transition points between the three chambers. Flume length is 300 cm. Vertical scale = horizontal scale Where aeration state is decreased along the length of the flume, greater runout distances are still correlated with greater aeration states. At a high aeration state in the first chamber, behaviour of the current is dependent on the aeration state in the second chamber. For example, Fig. 2 demonstrates how 0.93-0.93-0 Umf_st currents have greater runout distances than 0.93-0.66-0 Umf_st currents which in turn have greater runout distances than 0.93-0-0 Umf_st currents. At a lower aeration state in the first chamber, the runout distance seems to be dependent on the aeration state in the third chamber. For example, in Fig. 2, 0.66-0.53-0.4 Umf_st currents have greater runout distances than 0.66-0.66-0 Umf_st currents and 0.53-0.4-0.4 Umf_st currents have greater runout distances than 0.53-0.53-0 Umf_st currents. The current front velocity is also dependent on the aeration state. Current front velocity does not exceed 1.5 m/s (Fig. 3). This is considerably less than the calculated free fall velocity (2gh)1/2 = 3.5 m/s, where g is the gravitational acceleration and h is the 0.64-m drop height; however, by the interval at which velocity is measured, the currents have travelled 0.8 m and will also have lost energy upon impingement. Generally, regardless of the aeration state in the first or second chamber, the current front velocity decreases over the measured interval (Fig. 3). Higher aeration states, however, sustain higher current front velocities across greater distances. Also, where the aeration state decreases from the first chamber into the second, the current front velocity is not always immediately affected and may even temporarily increase (Fig. 3). Overall, the highest current front velocities across the whole 0.9-m interval are always found in the 0.93-0.93-0 Umf_st aeration state. Plots showing front velocity as each current propagates past the distance intervals 0.8–1.7 m, on a 4° channel slope. 
Note that where a profile stops on the x-axis, this does not necessarily mean the current has halted; in some cases, it represents where the current front has become too thin to track accurately. The dividing line shows the transition between the first and second chambers along the flume. The aeration states (in Umf_st) of a current in the first two chambers are given in the legend. (a) Plots for currents which experience a high and uniform, or near-uniform, gas supply from chamber 1 into chamber 2; (b) plots for currents which experience a low and uniform gas supply, or a lower gas supply into chamber 2 than into chamber 1, which encourages de-aeration.
Slope angle and runout distance
For a given aeration state, increasing the slope angle acts to increase the runout distance of the current (Fig. 2). However, the magnitude of the increase is dependent on the overall aeration state of the current; large increases in runout distance with increased slope angle only occur where the current is uniformly aerated or there is a small decrease in gas flux between chambers. For example, as slope increases from 2 to 4°, 0.4-0.4-0.4 Umf_st, 0.46-0.46-0.46 Umf_st, and 0.53-0.4-0.4 Umf_st currents see increases in runout distance from 1.3 to 2 m (54%), 2 to 3+ m (≥ 50%), and 2 to 2.43 m (22%), respectively. Whether this is also the case for higher, uniformly aerated states (0.53-0.53-0.53 Umf_st and 0.66-0.66-0.66 Umf_st) is not clear, as both slope angles resulted in maximum current runout (i.e. 3+ m). The effect of increasing slope angle on increasing runout distance is subdued when currents are allowed to de-aerate more quickly. For example, currents under 0.93-0.66-0 Umf_st conditions experience only a runout increase from 2.53 to 2.86 m (13%) as slope increases from 2 to 4°, while 0.93-0-0 Umf_st conditions undergo increases of 2.88 to 3+ m (≥ 6%). Slope angle is thus a secondary control on runout distance compared to aeration state. Only in one condition (0.4-0.4-0.4 Umf_st) does increasing the slope from 2 to 4° increase the runout distance by more than 50% (1.3 to 2 m), whereas on a 2° slope, increasing aeration from zero to just 0.4-0.4-0.4 Umf_st results in a 120% increase in runout distance (0.59 to 1.3 m). Increasing this to the maximum aeration state used, 0.93-0.93-0 Umf_st, gives a further increase in runout distance of 122% (1.3 to 2.88 m).
Current behaviour and deposition
Regardless of aeration state, all of the experimental currents appear unsteady. This is manifested in the transport of the particles as a series of pulses. Pulses are not always laterally continuous down-current, and slower, thinner pulses at the current front are overtaken by faster, thicker pulses. This can partly be seen in the waxing and waning of the velocity profiles in Fig. 3; some of the fluctuations in current front velocity are caused by a faster current pulse reaching the front of the current (Fig. 4). However, in most cases, overtaking of the flow front by a pulse happens outside the area viewed by the high-speed camera and appears to be triggered by the current front slowing as it transitions into a less aerated chamber.
Fig. 4 High-speed video frames of an experimental current on a 4° slope under 0.93-0-0 Umf_st conditions (Fig. 2). Numbers on the left are time in seconds since the current front entered the frame. (a) The front of the current enters the frame. (b) The current front continues to run out as the first pulse catches and begins to override it.
(c) The current front is completely overtaken by the first pulse. A video of this experiment is presented in Online Resource 1.
There appear to be five different groups of deposit morphology types generated by the various combinations of aeration states and slope angles (Table 2):
1. Large aeration decrease: In cases where the current front passes into an unaerated chamber from a chamber that is aerated at 0.93 Umf_st, the resulting deposit is mostly confined to the unaerated chamber and has a wedge shape, with its thickest point at the transition between the highly aerated and completely unaerated chambers. Such behaviour is also seen in the aeration state 0.93-0.66-0 Umf_st, and most clearly on a 4° slope.
2. Uniform aeration: Where all three chambers are aerated at 0.53 Umf_st or more, the current reaches the end of the flume. Except for currents passing through all chambers at 0.66 Umf_st, the currents forming these deposits experience stalling of the current front, which then progresses at a much slower velocity while local thickening along the body of the current results in deposition upstream. The section of the deposit in the third chamber is usually noticeably thinner than in the first two chambers, where the deposit tends to be of even thickness. Such deposits are also formed by 0.46-0.46-0.46 Umf_st currents on a 4° slope.
3. Moderate–low aeration decrease: Where the gas fluxes in the first two chambers are at 0.66 Umf_st or 0.53 Umf_st, but there is no (or low) flux in the third, the deposits formed are of approximately even thickness, with their leading edges inside the third chamber. This group also includes deposits formed under 0.93-0.66-0 Umf_st conditions on a 2° slope.
4. Low uniform aeration: Where the second and third chambers are aerated at 0.46 Umf_st or less, and the first chamber is at no more than 0.53 Umf_st, deposits form with a centre of mass located inside the first chamber. Beyond this, the deposit thickness decreases rapidly.
5. Unaerated: Under no aeration whatsoever, deposits form flat-topped wedges. These show angles steeper than the wedges in other groups.
Table 2 Groups of deposit types and the aeration states and slope angles which form them.
Runout distance
Once the current is fluidised or aerated, it is able to travel further than dry granular currents, as seen in previous experiments (e.g. Roche et al. 2004; Girolami et al. 2008; Roche 2012; Chédeville and Roche 2014; Rowley et al. 2014; Montserrat et al. 2016). This is because the increased pore pressures reduce frictional forces between the particles in the current, thus increasing mobility. However, here we find that the relationship between aeration state and runout distance is not a simple correlation between higher gas fluxes and greater runout distances. A current with high initial aeration rates followed by a rapid decline does not travel as far as a current that is moderately aerated across a greater distance. For example, a current run with 0.93-0-0 Umf_st conditions does not travel as far as runs with conditions set at 0.66-0.66-0.66 Umf_st or 0.53-0.53-0.53 Umf_st (Fig. 2). A highly aerated current may continue for some distance after passing into an unaerated chamber. Where only the first two chambers are aerated, this distance is dependent on the magnitude of the aeration state of the first chamber.
For example, a current under 0.93-0.66-0 Umf_st conditions travels up to 24% further than one under 0.66-0.66-0 Umf_st conditions, but a current under 0.93-0.93-0 Umf_st conditions only travels up to 14% further than one under 0.93-0.66-0 Umf_st conditions. However, a current that is moderately aerated for its entire passage can travel at least as far as those which are initially highly aerated. This is a result of the high pore pressures being sustained across a greater portion of the current, simulating the long-lived high pore pressures of much thicker natural PDCs. Where a current passes into an unaerated chamber, the pore pressure diffusion time is dependent on the current thickness, the current permeability, and the prevailing pore pressure magnitude. As many current fronts are of similar thickness when they pass into an unaerated chamber, de-aeration seems to be controlled largely by the aeration state of the chambers prior to the unaerated one. A current with a lower aeration state will reach a completely de-aerated state and halt sooner than a current with a higher aeration state. This has implications for both runout distance and deposit characteristics. Higher initial gas velocities sustain higher current front velocities for greater distances, as seen in Fig. 3, where the 0.93-0.93-0 Umf_st and 0.93-0.66-0 Umf_st velocity profiles sustain current front velocities of > 1 m/s across the measured interval, in contrast to the other aeration states, where current front velocities rapidly fall below 1 m/s. High gas fluxes sustain high pore pressures, decreasing frictional forces between particles and thereby reducing deceleration relative to less aerated currents. As the rate of pore pressure diffusion becomes greater than the supply of new gas to the current, the current undergoes an increase in internal frictional forces and a consequent decrease in velocity. When a current crosses into a chamber with a lower aeration state, its front velocity is lowered (Fig. 3), although this change does not take place immediately and the current front may even accelerate as it crosses the boundary (as seen in many profiles in Fig. 3). The only currents which immediately decelerate in all cases are those where the aeration state of both chambers is 0.53 Umf_st or less. The temporary acceleration seen in the other currents mostly occurs over a distance of ~ 10 cm. Over this distance, these currents have sufficient momentum that the decreasing gas velocity and consequent increase in internal frictional forces do not immediately take effect. This is in line with our knowledge of pore pressure diffusion in PDCs, which are mostly composed of fine ash. In such cases, the pore pressure does not diffuse instantly because of the low permeability of the material (Druitt et al. 2007). In our experimental currents, passing into a lower or non-aerated chamber does not cause the current to immediately lose pore pressure (Fig. 3), but the magnitude of the difference in gas velocities between the chambers does influence the depositional behaviour of the current.
The influence of slope angle
The effects of slope angle on both dam-break-type, initially fluidised (Chédeville and Roche 2015) and dry granular currents (Farin et al. 2014) are relatively well known. However, the influence of varying slope angle on currents possessing sustained pore pressures is largely unquantified. Although only two slope angles (2° and 4°) were examined, there is a clear effect on both current runout distance and current front velocity.
Runout distance may be increased by up to 50%, and higher current front velocities are sustained for greater distances on a steeper slope. The influence of small changes of slope on PDC dynamics is important because in nature low slope angles can be associated with PDC runout distances > 100 km (Valentine et al. 1989; Wilson et al. 1995). The effect of slope angle on runout distance is most apparent when aeration is sustained over the whole current. Where the current front comes to a halt in an unaerated chamber, the runout distance increases by no more than 13% on a 4° slope compared to a 2° slope. However, the overall effect of slope angle on the runout distance of sustained, moderate-to-highly aerated currents is difficult to quantify using our flume, as such runs commonly move out of the flume.
Propagation and deposit formation
These experimental currents travel as a series of pulses generated by inherent unsteadiness developed during current propagation. Froude numbers \( Fr=\frac{U}{(gH)^{1/2}} \), where U is the current front or pulse velocity and H is the corresponding current or pulse height, were determined for a number of current fronts and pulses by plotting the current front or pulse velocity as a function of \( (gH)^{1/2} \) (Fig. 5). The slope of the line of best fit gives Fr = 7, which fits with anticipated supercritical flow conditions (Gray et al. 2003). This is higher than the Fr of 2.58 obtained by Roche et al. (2004), likely due to the higher-energy initiation and sustained nature of our currents compared to the depletive, dam-break currents of Roche et al. (2004).
Fig. 5 Froude number for the fronts and first pulses of selected experimental currents. Uncertainties in velocity are smaller than the size of the symbols. Uncertainties in current height are relatively large due to the thinness of the current fronts relative to the video resolution.
The currents form a range of depositional structures depending on the flow dynamics and can deposit, through aggradation, much thicker deposits than the currents themselves. Our observations that the currents are both unsteady and can consist of a series of pulses suggest that deposition is occurring by stepwise aggradation (Branney and Kokelaar 1992; Sulpizio and Dellino 2008). The deposits produced in the experiments form five different groups, from which the following three important observations can be made. First, where the current front moves from an aerated chamber into an unaerated one, the shape and thickness of the deposit appear to depend on the magnitude of the drop in aeration state. Where the drop is large (0.93 Umf_st and 0.66 Umf_st to unaerated), a thick wedge (~10× the current thickness) forms downstream, thickening mainly through retrogradational deposition as the high aeration states of the first two chambers quickly deliver the current body into the growing wedge. Second, sustained flow can build a deposit of relatively even thickness behind a stalling current front, as inferred by Williams et al. (2014). Third, flat-topped wedges form where currents are dry, and runout distance is therefore affected only by channel slope angle. Overall, these observations suggest that a decrease in aeration state may be an important control on deposit formation, character, and distribution. These experiments provide a first attempt to directly control de-aeration in dense granular PDC analogues and greatly simplify the system, providing three relatively uniformly aerated segments of flow.
This is in contrast to the high degree of spatial and temporal variation that might be envisaged in PDCs and the more gradual degassing a natural current will experience. We stress that the de-aeration rates observed in these experiments are faster than we would anticipate in natural PDCs; the sustained gas pore pressure provided here is applied so as to overcome the very rapid pore pressure diffusion timescales found in laboratory flows (Druitt et al. 2007; Rowley et al. 2014). This is because such laboratory flows have a bulk grain size similar to the ash found in PDCs but much smaller flow thicknesses, and hence much more rapid pore pressure diffusion. Nevertheless, the decreases in aeration observed in some of our experimental flows have relevance for PDCs, which may experience, for example, a loss of fines or undergo temperature drops, thinning, and/or the entrainment of coarser material, all of which would act to de-aerate the current (e.g. Bareschino et al. 2007; Druitt et al. 2007; Gueugneau et al. 2017).
Implications for future work
We have demonstrated that variable aeration states in conjunction with slope angle can affect the shape and location of an experimental current's deposit. It seems logical to assume that these different types of deposit aggrade differently and so have different internal architectures, which may be analogous to features seen in ignimbrites. However, the internal architectures of these experimental deposits are hidden due to the uniform colour and grain size of the particles used. In future work, the use of dyed particles or particles of a different size would help identify the internal features of these deposits.
These experiments examined granular currents emplaced along inclined slopes, which possessed long-lived pore pressures under two conditions: (1) pore pressures which decreased down-current and (2) pore pressures which were uniform throughout the current. The flume configuration allowed the simulation of different aeration states within the currents, in order to simulate the dynamics and heterogeneous nature of pore pressure in PDCs. We examined the effects of varying combinations of aeration states, as well as the effect of slope angle, on flow field dynamics and deposit characteristics. It is clear that, in a general sense, higher gas fluxes (i.e. higher pore pressures) in the flume chambers result in greater runout distances. However, moderate (0.53–0.66 Umf_st) sustained gas fluxes produce at least equal runouts to high (0.93 Umf_st) initial fluxes that subsequently decline. Similarly, high fluxes sustain higher current front velocities for greater distances, and currents may travel for 0.1–0.2 m after experiencing a decrease in the gas flux supplied to their base before undergoing the consequent decrease in current front velocity. Slope angle variation between 2° and 4° has a measurable impact on current runout distance, resulting in increases of between 0.11 and 1 m (i.e. 7% to > 50%), with greater increases occurring when low (0.4–0.46 Umf_st) levels of aeration are sustained for the whole runout distance of the current. A higher slope angle also sustains higher current front velocities for greater distances. The experimental currents travel as a series of supercritical pulses (Fr = 7) which come to a relatively rapid halt, supporting the model of stepwise aggradation for dense basal currents (e.g. Schwarzkopf et al. 2005; Sulpizio and Dellino 2008; Charbonnier and Gertisser 2011; Macorps et al. 2018).
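To make the Froude-number estimate quoted here concrete, the short sketch below fits a line through the origin to (sqrt(gH), U) pairs, which is how a bulk Fr can be read off a plot like Fig. 5; the height and velocity pairs are invented placeholder values chosen only to be of the right order of magnitude for these thin laboratory currents, not the measured data.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

# Placeholder (current height H in m, front/pulse velocity U in m/s) pairs -- illustrative only
H = np.array([0.002, 0.003, 0.004, 0.005])
U = np.array([0.90, 1.10, 1.30, 1.45])

x = np.sqrt(g * H)                   # (gH)^(1/2)
Fr = np.sum(x * U) / np.sum(x * x)   # least-squares slope of a line through the origin
print(f"estimated bulk Froude number: {Fr:.1f}")   # ~6.5 for these placeholder values
```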
Our findings also demonstrate intricate links between the overall current dynamics and the deposit morphology, with thicker, more confined deposits aggrading rapidly where the current transitions from a high aeration state to lower aeration states. Such behaviour may be seen in natural PDCs subject to processes which result in de-aeration, such as temperature drops and/or loss of fines.
We thank Andrew Harris, Richard Brown, and two anonymous reviewers, whose comments and suggestions significantly improved this manuscript. This work was carried out as part of a PhD project funded by a University of Hull PhD scholarship in the Catastrophic Flows Research Cluster. Experiments were performed in the Geohazards Lab at the University of Portsmouth, using equipment funded by a British Society for Geomorphology Early Career Researcher Grant held by PR.
Online Resource 1 (AVI 224110 kb)
Appendix 1. Grain size data
Table 3 Grain size data and statistics for the particles used in the experiments. Six samples were taken from across the material batch and subjected to particle size analysis using a QICPIC. Columns: median diameter (μm), mean (μm), squared difference.
References
Andrews B, Manga M (2011) Effects of topography on pyroclastic density current runout and formation of coignimbrites. Geology 39:1099–1102. https://doi.org/10.1130/G32226.1
Andrews B, Manga M (2012) Experimental study of turbulence, sedimentation and coignimbrite mass partitioning in dilute pyroclastic density currents. J Volcanol Geotherm Res 225-226:30–44. https://doi.org/10.1016/j.jvolgeores.2012.02.011
Bareschino P, Gravina T, Lirer L, Marzocchella A, Petrosino P, Salatino P (2007) Fluidization and de-aeration of pyroclastic mixtures: the influence of fines content, polydispersity and shear flow. J Volcanol Geotherm Res 164:284–292. https://doi.org/10.1016/j.jvolgeores.2007.05.013
Brand B, Bendaña S, Self S, Pollock N (2016) Topographic controls on pyroclastic density current dynamics: insight from 18 May 1980 deposits at Mount St. Helens, Washington (USA). J Volcanol Geotherm Res 321:1–17. https://doi.org/10.1016/j.jvolgeores.2016.04.018
Branney MJ, Kokelaar P (1992) A reappraisal of ignimbrite emplacement: progressive aggradation and changes from particulate to non-particulate flow during emplacement of high grade ignimbrite. Bull Volcanol 54:504–520. https://doi.org/10.1007/BF00301396
Branney MJ, Kokelaar P (2002) Pyroclastic density currents and the sedimentation of ignimbrites. Geol Soc Lond Memoir 27:7–21. https://doi.org/10.1144/GSL.MEM.2003.027.01.02
Breard ECP, Lube G (2016) Inside pyroclastic density currents—uncovering the enigmatic flow structure and transport behaviour in large-scale experiments. Earth Planet Sci Lett 458:22–36. https://doi.org/10.1016/j.epsl.2016.10.016
Breard ECP, Lube G, Jones JR, Dufek J, Cronin SJ, Valentine G, Moebis A (2016) Coupling of turbulent and non-turbulent flow regimes within pyroclastic density currents. Nat Geosci 9:767–771. https://doi.org/10.1038/ngeo2794
Breard ECP, Dufek J, Lube G (2018) Enhanced mobility in concentrated pyroclastic density currents: an examination of a self-fluidization mechanism. Geophys Res Lett 45:654–664. https://doi.org/10.1002/2017GL075759
Brown MC (1962) Nuées ardentes and fluidization. Am J Sci 260:467–470.
https://doi.org/10.2475/ajs.260.6.467 CrossRefGoogle Scholar Brown RJ, Branney MJ (2004) Bypassing and diachronous deposition from density currents: evidence from a giant regressive bed form in the Poris ignimbrite, Tenerife, Canary Islands. Geology 32:445–448. https://doi.org/10.1130/G20188.1 CrossRefGoogle Scholar Burgissier A, Bergantz GW (2002) Reconciling pyroclastic flow and surge: the multiphase physics of pyroclastic density currents. Earth Planet Sci Lett 202:405–418. https://doi.org/10.1016/S0012-821X(02)00789-6 CrossRefGoogle Scholar Cas RAF, Wright HMN, Folkes CB, Lesti C, Porreca M, Giordano G, Viramonte JG (2011) The flow dynamics of an extremely large volume pyroclastic flow, the 2.08-Ma Cerro Galán Ignimbrite, NW Argentina, and comparison with other flow types. Bull Volcanol 73:1583–1609. https://doi.org/10.1007/s00445-011-0564-y CrossRefGoogle Scholar Charbonnier SJ, Gertisser R (2011) Deposit architecture and dynamics of the 2006 block-and-ash flows of Merapi Volcano, Java, Indonesia. Sedimentology 58:1573–1612. https://doi.org/10.1111/j.1365-3091.2011.01226.x CrossRefGoogle Scholar Chédeville C, Roche O (2014) Autofluidization of pyroclastic flows propagating on rough substrates as shown by laboratory experiments. J Geophys Res Solid Earth 119:1764–1776. https://doi.org/10.1002/2013JB010554 CrossRefGoogle Scholar Chédeville C, Roche O (2015) Influence of slope angle on pore pressure generation and kinematics of pyroclastic flows: insights from laboratory experiments. Bull Volcanol 77:1–13. https://doi.org/10.1007/s00445-015-0981-4 CrossRefGoogle Scholar Dade WB, Huppert HE (1996) Emplacement of the Taupo ignimbrite by a dilute turbulent flow. Nature 381:509–512. https://doi.org/10.1038/385307a0 CrossRefGoogle Scholar Douillet GA, Pacheco DA, Kueppers U, Letort J, Tsang-Hin-Sun È, Bustillos J, Hall M, Ramón P, Dingwell DB (2013) Dune bedforms produced by dilute pyroclastic density currents from the August 2006 eruption of Tungurahua volcano, Ecuador. Bull Volcanol 75:762. https://doi.org/10.1007/s00445-013-0762-x CrossRefGoogle Scholar Druitt TH (1992) Emplacement of the 18 May 1980 lateral blast deposit ENE of Mount St. Helens, Washington. Bull Volcanol 54:554–572. https://doi.org/10.1007/BF00569940 CrossRefGoogle Scholar Druitt TH (1995) Settling behaviour of concentrated dispersions and some volcanological applications. J Volcanol Geotherm Res 65:27–39. https://doi.org/10.1016/0377-0273(94)00090-4 CrossRefGoogle Scholar Druitt TH, Sparks RSJ (1982) A proximal ignimbrite breccia facies on Santorini, Greece. J Volcanol Geotherm Res 13:147–171. https://doi.org/10.1016/0377-0273(82)90025-7 CrossRefGoogle Scholar Druitt TH, Calder ES, Cole PD, Hoblitt RS, Loughlin SC, Norton GE, Ritchie R, Sparks SJ, Voight B (2002) Small-volume, highly mobile pyroclastic flows formed by rapid sedimentation from pyroclastic surges at Soufrière Hills Volcano, Montserrat: an important volcanic hazard. In: Druitt TH, Kokelaar BP (eds). The eruption of Soufrière Hills Volcano, Montserrat, from 1995 to 1999. Geol Soc London Memoir, 21, pp 263–279. https://doi.org/10.1144/GSL.MEM.2002.021.01.12 Druitt TH, Avard G, Bruni G, Lettieri P, Maez F (2007) Gas retention in fine-grained pyroclastic flow materials at high temperatures. Bull Volcanol 69:881–901. https://doi.org/10.1007/s00445-007-0116-7 CrossRefGoogle Scholar Eames I, Gilbertson M (2000) Aerated granular flow over a horizontal rigid surface. J Fluid Mech 424:169–195. 
https://doi.org/10.1017/S0022112000001920 CrossRefGoogle Scholar Farin M, Mangeney A, Roche O (2014) Fundamental changes of granular flow dynamics, deposition, and erosion processes at high slope angles: insights from laboratory experiments. J Geophys Res Earth 119:504–532. https://doi.org/10.1002/2013JF002750 CrossRefGoogle Scholar Fenner CN (1923) The origin and mode of emplacement of the great tuff deposit in the Valley of Ten Thousand Smokes. National Geographic Society Contributed Technical Papers, Katmai Series, 1:1Google Scholar Fierstein J, Hildreth W (1992) The plinian eruptions of 1912 at Novarupta, Katmai National Park, Alaska. Bull Volcanol 54:646–684. https://doi.org/10.1007/BF00430778 CrossRefGoogle Scholar Geldart D (1973) Types of gas fluidization. Powder Technol 7:285–292. https://doi.org/10.1016/0032-5910(73)80037-3 CrossRefGoogle Scholar Gibilaro LG, Gallucci K, Di Felice R, Pagliai P (2007) On the apparent viscosity of a fluidized bed. Chem Eng Sci 62:294–300. https://doi.org/10.1016/j.ces.2006.08.030 CrossRefGoogle Scholar Gilbertson MA, Jessop DE, Hogg AJ (2008) The effects of gas flow on granular currents. Philos Trans R Soc A 366:2191–2203. https://doi.org/10.1098/rsta.2007.0021 CrossRefGoogle Scholar Girolami L, Druitt TH, Roche O, Khrabrykh Z (2008) Propagation and hindered settling of laboratory ash flows. J Geophys Res Solid Earth 113:B02202. https://doi.org/10.1029/2007JB005074 CrossRefGoogle Scholar Gray JMNT, Tai Y-C, Noelle S (2003) Shock waves, dead zones and particle-free regions in rapid granular free-surface flows. J Fluid Mech 291:161–181. https://doi.org/10.1017/S0022112003005317 CrossRefGoogle Scholar Gueugneau V, Kelfoun K, Roche O, Chupin L (2017) Effects of pore pressure in pyroclastic flows: numerical simulation and experimental validation. Geophys Res Lett 44:2194–2202. https://doi.org/10.1002/2017GL072591 Google Scholar Iverson RM (1997) The physics of debris flows. Rev Geophys 35:245–296. https://doi.org/10.1029/97RG00426 CrossRefGoogle Scholar Li L, Ma W (2011) Experimental study on the effective particle diameter of a packed bed with non-spherical particles. Transp Porous Media 89:35–48. https://doi.org/10.1007/s11242-011-9757-2 CrossRefGoogle Scholar Macorps E, Charbonnier SJ, Varley NR, Capra L, Atlas Z, Cabré J (2018) Stratigraphy, sedimentology and inferred flow dynamics from the July 2015 block-and-ash flow deposits at Volcán de Colima, Mexico. J Volcanol Geotherm Res 349:99–116. https://doi.org/10.1016/j.jvolgeores.2017.09.025 CrossRefGoogle Scholar McTaggart KC (1960) The mobility of nuées ardentes. Am J Sci 258:369–382. https://doi.org/10.2475/ajs.258.5.369 CrossRefGoogle Scholar Montserrat S, Tamburrino A, Roche O, Niño Y (2012) Pore fluid pressure diffusion in defluidizing granular columns. J Geophys Res 117:F02034. https://doi.org/10.1029/2011JF002164 CrossRefGoogle Scholar Montserrat S, Tamburrino A, Roche O, Niño Y, Ihle CF (2016) Enhanced run-out of dam-break granular flows caused by initial fluidization and initial material expansion. Granul Matter 18:1–9. https://doi.org/10.1007/s10035-016-0604-6 CrossRefGoogle Scholar Roche O (2012) Depositional processes and gas pore pressure in pyroclastic flows: an experimental perspective. Bull Volcanol 74:1807–1820. https://doi.org/10.1007/s00445-012-0639-4 CrossRefGoogle Scholar Roche O, Gilbertson MA, Phillips JC, Sparks RSJ (2002) Experiments on deaerating granular flows and implications for pyroclastic flow mobility. Geophys Res Lett 29:40-1–40-4. 
https://doi.org/10.1029/2002GL014819 CrossRefGoogle Scholar Roche O, Gilbertson MA, Phillips JC, Sparks RSJ (2004) Experimental study of gas-fluidized granular flows with implications for pyroclastic flow emplacement. J Geophys Res Solid Earth 109:B10201. https://doi.org/10.1029/2003JB002916 CrossRefGoogle Scholar Roche O, Montserrat S, Niño Y, Tamburrino A (2010) Pore fluid pressure and internal kinematics of gravitational laboratory air-particle flows: insights into the emplacement dynamics of pyroclastic flows. J Geophys Res Solid Earth 115:B12203. https://doi.org/10.1029/2009JB007133 CrossRefGoogle Scholar Roche O, Niño Y, Mangeney A, Brand B, Pollock N, Valentine GA (2013) Dynamic pore-pressure variations induce substrate erosion by pyroclastic flows. Geology 41:1107–1110. https://doi.org/10.1130/G34668.1 CrossRefGoogle Scholar Roche O, Buesch DC, Valentine GA (2016) Slow-moving and far-travelled dense pyroclastic flows during the Peach Spring super-eruption. Nat Commun 7:10890. https://doi.org/10.1038/ncomms10890 CrossRefGoogle Scholar Rowley PD, MacLeod NS, Kuntz MA, Kaplan AM (1985) Proximal bedded deposits related to pyroclastic flows of May 18, 1980, Mount St. Helens, Washington. Geol Soc Am Bull 96:1373–1383. https://doi.org/10.1130/0016-7606(1985)96<1373:PBDRTP>2.0.CO;2 CrossRefGoogle Scholar Rowley PJ, Roche O, Druitt TH, Cas R (2014) Experimental study of dense pyroclastic density currents using sustained, gas-fluidized granular flows. Bull Volcanol 76:855. https://doi.org/10.1007/s00445-014-0855-1 CrossRefGoogle Scholar Sarocchi D, Sulpizio R, Macias JL, Saucedo R (2011) The 17 July 1999 block-and-ash flow (BAF) at Colima Volcano: new insights on volcanic granular flows from textural analysis. J Volcanol Geotherm Res 204:40–56. https://doi.org/10.1016/j.jvolgeores.2011.04.013 CrossRefGoogle Scholar Savage SB, Hutter K (1989) The motion of a finite mass of granular material down a rough incline. J Fluid Mech 199:177–215. https://doi.org/10.1017/S0022112089000340 CrossRefGoogle Scholar Schwarzkopf LM, Schmincke H-U, Cronin SJ (2005) A conceptual model for block-and-ash flow basal avalanche transport and deposition, based on deposit architecture of 1998 and 1994 Merapi flows. J Volcanol Geotherm Res 139:117–134. https://doi.org/10.1016/j.jvolgeores.2004.06.012 CrossRefGoogle Scholar Sparks RSJ (1976) Grain size variations in ignimbrites and implications for the transport of pyroclastic flows. Sedimentology 23:147–188. https://doi.org/10.1111/j.1365-3091.1976.tb00045.x CrossRefGoogle Scholar Sparks RSJ (1978) Gas release rates from pyroclastic flows: an assessment of the role of fluidisation in their emplacement. Bull Volcanol 41:1–9. https://doi.org/10.1007/BF02597679 CrossRefGoogle Scholar Sulpizio R, Dellino P (2008) Depositional mechanisms and pulsating behaviour of pyroclastic density currents. In: Marti L, Gottsman J (eds) Caldera volcanism: analysis, modelling and response. Developments in volcanology, vol 10. Elsevier, pp 57–96. https://doi.org/10.1016/S1871-644X(07)00002-2 Valentine GA, Buesch DC, Fisher RV (1989) Basal layered deposits of the Peach Springs Tuff, northwestern Arizona, USA. Bull Volcanol 51:395–414. https://doi.org/10.1007/BF01078808 CrossRefGoogle Scholar Walker GPL (1983) Ignimbrite types and ignimbrite problems. J Volcanol Geotherm Res 17:65–88. 
https://doi.org/10.1016/0377-0273(83)90062-8
Williams R, Branney MJ, Barry TL (2014) Temporal and spatial evolution of a waxing then waning catastrophic density current revealed by chemical mapping. Geology 42:107–110. https://doi.org/10.1130/G34830.1
Wilson CJN (1980) The role of fluidization in the emplacement of pyroclastic flows: an experimental approach. J Volcanol Geotherm Res 8:231–249. https://doi.org/10.1016/0377-0273(80)90106-7
Wilson CJN (1985) The Taupo eruption, New Zealand: II. The Taupo Ignimbrite. Philos Trans R Soc A 314:229–310. https://doi.org/10.1098/rsta.1985.0020
Wilson CJN (1997) Emplacement of Taupo ignimbrite. Nature 385:306–307. https://doi.org/10.1038/385306a0
Wilson CJN, Walker GPL (1982) Ignimbrite depositional facies: the anatomy of a pyroclastic flow. J Geol Soc Lond 139:581–592. https://doi.org/10.1144/gsjgs.139.5.0581
Wilson CJN, Houghton BF, Kamp PJJ, McWilliams MO (1995) An exceptionally widespread ignimbrite with implications for pyroclastic flow emplacement. Nature 378:605–607. https://doi.org/10.1038/378605a0
Yamamoto T, Takarada S, Suto S (1993) Pyroclastic flows from the 1991 eruption of Unzen volcano, Japan. Bull Volcanol 55:166–175. https://doi.org/10.1007/BF00301514
Author affiliations: 1. School of Environmental Sciences, University of Hull, Hull, UK; 2. School of Earth and Environmental Sciences, University of Portsmouth, Portsmouth, UK; 3. Energy and Environment Institute, University of Hull, Hull, UK.
Smith, G.M., Williams, R., Rowley, P.J. et al. Bull Volcanol (2018) 80: 67. https://doi.org/10.1007/s00445-018-1241-1. Received 10 March 2018; Accepted 09 July 2018; First Online 30 July 2018. Publisher: Springer Berlin Heidelberg.
Recent developments on the moment problem
Gwo Dong Lin (ORCID: orcid.org/0000-0001-9687-122X)
Journal of Statistical Distributions and Applications, volume 4, Article number: 5 (2017)
We consider univariate distributions with finite moments of all positive orders. The moment problem is to determine whether or not a given distribution is uniquely determined by the sequence of its moments. There is a huge literature on this classical topic. In this survey, we will focus only on the recent developments on the checkable moment-(in)determinacy criteria, including Cramér's condition, Carleman's condition, Hardy's condition, Krein's condition and the growth rate of moments, which help us solve the problem more easily. Both the Hamburger and Stieltjes cases are investigated. The former is concerned with distributions on the whole real line, while the latter deals only with distributions on the right half-line. Some new results or new simple (direct) proofs of previous criteria are provided. Finally, we review the most recent moment problem for products of independent random variables with different distributions, which occur naturally in stochastic modelling of complex random phenomena.
The moment problem is a classical topic over one century old (Stieltjes 1894, 1895, Kjeldsen 1993, Fischer 2011, pp. 157–168). We start with the definition of the moment determinacy of distributions. Let X be a random variable with distribution F (denoted X∼F) and have finite moments \(m_{k}=\mathbf{E}[X^{k}]\) for all k=1,2,…; namely, the absolute moment \(\mu_{k}=\mathbf{E}[|X|^{k}]<\infty\) for all positive integers k. If F is uniquely determined by the sequence of its moments \(\{m_{k}\}_{k=1}^{\infty }\), we say that F is moment-determinate (in short, F is M-det, or X is M-det); otherwise, we say that F is moment-indeterminate (F is M-indet, or X is M-indet). The moment problem is to determine whether or not a given distribution F is M-det. Roughly speaking, there are two kinds of moment problems: the Stieltjes (1894) moment problem deals with nonnegative random variables only, while the Hamburger (1920, 1921) moment problem treats all random variables taking values in the whole real line. We recall first two important facts:
Fact A. It is possible that a nonnegative random variable X is M-det in the Stieltjes sense, but M-indet in the Hamburger sense (Akhiezer 1965, p. 240). This happens only for some discrete nonnegative random variables with a positive mass at zero (Chihara 1968).
Fact B. If a distribution F is M-indet, then there are infinitely many (i) absolutely continuous distributions, (ii) purely discrete distributions and (iii) singular continuous distributions all having the same moment sequence as F (Berg 1998, Berg and Christensen 1981).
One good reason to study the moment problem was given in Fréchet and Shohat's (1931) Theorem stated below. Simply speaking, for a given sequence of random variables \(X_{n}\sim F_{n}\), n=1,2,…, with finite moments \(m_{k}^{(n)}=\mathbf {E}\left [X_{n}^{k}\right ]\) for all positive integers k, the moment convergence \(\left ({\lim }_{n\rightarrow \infty } m_{k}^{(n)}=m_{k}\ \forall k\right)\) does not guarantee the weak convergence of distributions \(\{F_{n}\}_{n=1}^{\infty } \left (F_{n}\stackrel {\scriptsize \text {w}}{\rightarrow } F \;\text {as}\; n\to \infty \right)\) unless the limiting distribution F is M-det. Therefore, the M-(in)det property is one of the important fundamental properties we have to know about a given distribution.
Fréchet and Shohat's (1931) Theorem. Let the distribution functions \(F_{n}\) possess finite moments \(m_{k}^{(n)}\) for k=1,2,… and n=1,2,…. Assume further that the limit \(m_{k}={\lim }_{n\rightarrow \infty } m_{k}^{(n)}\) exists (and is finite) for each k. Then (i) the limits \(\{m_{k}\}_{k=1}^{\infty }\) are the moment sequence of a distribution function, say F; (ii) if the limit F given by (i) is M-det, then \(F_{n}\) converges to F weakly as n→∞.
Necessary and sufficient conditions for the M-det property of distributions exist in the literature (see, e.g., Akhiezer 1961, Shohat and Tamarkin 1943, and Berg et al. 2002), but these conditions are not easily checkable in general. In this survey, we will focus only on the checkable M-(in)det criteria for distributions rather than the collection of all specific examples. In Sections 2 and 3, we review respectively the moment determinacy and moment indeterminacy criteria, including Cramér's condition, Carleman's condition, Hardy's condition, Krein's condition and the growth rate of moments. Some criteria are old, but others are recent. New (direct) proofs for some criteria are provided. To amend some previous proofs in the literature, two lemmas (Lemmas 3 and 4) are given for the first time. We consider in Section 4 the recently formulated Stieltjes classes for M-indet absolutely continuous distributions. Section 5 is devoted to the converses to the previous M-(in)det criteria for distributions. Finally, in Section 6 we review the most recent results about the moment problem for products of independent random variables with different distributions.
Checkable criteria for moment determinacy
In this section we consider the checkable criteria for moment determinacy of random variables or distributions. We treat first the Hamburger case because it is more popular than the Stieltjes case. Let X∼F on the whole real line \({\mathbb {R}}=(-\infty,\infty)\) with finite moments \(m_{k}=\mathbf{E}[X^{k}]\) and absolute moments \(\mu_{k}=\mathbf{E}[|X|^{k}]\) for all positive integers k. For convenience, we define the following statements, in which 'h' stands for 'Hamburger'.
(h1) \(\frac {m_{2(k+1)}}{m_{2k}}={\mathcal {O}}((k+1)^{2})={\mathcal {O}}(k^{2})\) as k→∞.
(h2) X has a moment generating function (mgf), i.e., \(\mathbf{E}[e^{tX}]<\infty\) for all t∈(−c,c), where c>0 is a constant (Cramér's condition); equivalently, \(\mathbf{E}[e^{t|X|}]<\infty\) for 0≤t<c.
(h3) \(\limsup _{k\to \infty }\frac {1}{2k}m_{2k}^{1/(2k)}<\infty.\)
(h4) \(\limsup _{k\to \infty }\frac {1}{k}\mu _{k}^{1/k}<\infty.\)
(h5) \(m_{2k}={\mathcal {O}}\left ((2k)^{2k}\right)\) as k→∞.
(h6) \(m_{2k}\le c_{0}^{k}\,(2k)!,\ k=1,2,\ldots,\) for some constant \(c_{0}>0\).
(h7) \({C}[F]\equiv \sum _{k=1}^{\infty }m_{2k}^{-1/(2k)}=\infty \) (Carleman's (1926) condition).
(h8) X is M-det on \({\mathbb {R}}\).
Theorem 1 Under the above settings, if X∼F on \({\mathbb {R}}\) satisfies one of the conditions (h1) through (h7), then X is M-det on \({\mathbb {R}}\). Moreover, (h1) implies (h2), (h2) through (h6) are equivalent, and (h6) implies (h7). In other words, the following chain of implications holds: (h1) ⇒ (h2) ⇔ (h3) ⇔ (h4) ⇔ (h5) ⇔ (h6) ⇒ (h7) ⇒ (h8).
We keep the term k+1 in (h1) because it arises naturally in many examples. The first implication in Theorem 1 was given recently in Stoyanov et al. (2014), while the rest, more or less, are known in the literature. The Carleman quantity C[F] in (h7) is calculated from all even order moments of F. Theorem 1 contains most checkable criteria for moment determinacy in the Hamburger case.
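As a quick illustrative check of how Carleman's condition (h7) behaves on concrete laws, the Python sketch below computes partial sums of \(\sum_{k} m_{2k}^{-1/(2k)}\) for the standard normal, whose even moments are \(m_{2k}=(2k)!/(2^{k}k!)\) and whose Carleman sum diverges (so it is M-det), and for the standard lognormal, whose moments are \(m_{k}=e^{k^{2}/2}\) and whose Carleman sum converges, so the criterion is silent (the lognormal is in fact M-indet, as recalled in the Stieltjes classes section below).

```python
from math import lgamma, exp, log

def log_even_moment_normal(k):
    # log m_{2k} for the standard normal: m_{2k} = (2k)! / (2^k k!)
    return lgamma(2 * k + 1) - k * log(2.0) - lgamma(k + 1)

def log_moment_lognormal(k):
    # log m_k for the standard lognormal: m_k = exp(k^2 / 2)
    return 0.5 * k * k

for n in (10, 100, 1000):
    carleman_normal = sum(exp(-log_even_moment_normal(k) / (2 * k)) for k in range(1, n + 1))
    carleman_lognormal = sum(exp(-log_moment_lognormal(2 * k) / (2 * k)) for k in range(1, n + 1))
    print(f"n={n:4d}   normal partial sum: {carleman_normal:7.2f}   lognormal partial sum: {carleman_lognormal:.4f}")
```

The normal partial sums keep growing (roughly like a constant times \(\sqrt{n}\)), while the lognormal partial sums stabilise near 1/(e−1) ≈ 0.58, in line with the divergence and convergence claims above.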
Some other M-det criteria exist in the literature, but they are seldom used. See, for example, (ha) and (hb) below:
(h2) X has a mgf (Cramér's condition)
⇔ (ha) \(\sum _{k=1}^{\infty }\frac {m_{2k}}{(2k)!}x^{2k}\) converges in an interval |x|<\(x_{0}\) (Chow and Teicher 1997, p. 301)
⇒ (hb) \(\sum _{k=1}^{\infty }\frac {m_{k}}{k!}x^{k}\) converges in an interval |x|<\(x_{0}\) (Billingsley 1995, p. 388)
⇒ (h7) \({C}[F]=\sum _{k=1}^{\infty }m_{2k}^{-1/(2k)}=\infty \) (Carleman's condition)
⇒ (h8) X is M-det on \({\mathbb {R}}\).
It might look strange that the convergence of the subseries in (ha) implies the convergence of the whole series in (hb), but remember that the convergence in (ha) holds true for all x in a neighborhood of zero, not just for a fixed x. Billingsley (1995) proved the implication (hb) ⇒ (h8) by a version of analytic continuation of the characteristic function, but it is easy to see that (hb) also implies (h7) and hence X is M-det on \({\mathbb {R}}\). In Theorem 1, Carleman's condition (h7) is the weakest checkable condition for X to be M-det on \({\mathbb {R}}\). To prove Carleman's criterion that (h7) implies (h8), we may apply the approach of quasi-analytic functions (Carleman 1926, Koosis 1988) or the approach of the Lévy distance (Klebanov and Mkrtchyan 1985). For the latter, we recall the following result.
Klebanov and Mkrtchyan's (1985) Theorem. Let F and G be two distribution functions on \({\mathbb {R}}\) and let their first 2n moments exist and coincide: \(m_{k}(F)=m_{k}(G)=m_{k}\), k=1,2,…,2n (n≥2). Denote the sub-quantity \(C_{n}=\sum _{k=1}^{n}m_{2k}^{-1/(2k)}\). Then
$$L(F,G) \le c_{2} \frac{\log(1+C_{n-1})}{(C_{n-1})^{1/4}}, $$
where L(F,G) is the Lévy distance and \(c_{2}=c_{2}(m_{2})\) depends only on \(m_{2}\).
Therefore, Carleman's condition (h7) implies that F=G by letting n→∞ in Klebanov and Mkrtchyan's (1985) Theorem. It is worth mentioning that Carleman's condition is sufficient, but not necessary, for a distribution to be M-det. For this, see Heyde (1963b), Stoyanov and Lin (2012, Remarks 5 and 7) or Stoyanov (2013, Section 11). On the other hand, the statement (h1) in Theorem 1 is the strongest checkable condition for X to be M-det on \({\mathbb {R}}\), which means that the growth rate of even order moments is less than or equal to two. The condition (h1), however, has its advantage: in some cases it is easy to estimate the growth rate (see the example below), because the common factors in the two even order moments, \(m_{2(k+1)}\) and \(m_{2k}\), cancel out as k tends to infinity.
Consider the double generalized gamma random variable ξ∼DGG(α,β,γ) with density function \(f(x)=c|x|^{\gamma -1}\exp [{-\alpha |x|^{\beta }}],~x\in {\mathbb {R}},\) where α,β,γ>0, f(0)=0 if γ≠1, and \(c=\beta\alpha^{\gamma/\beta}/(2\Gamma(\gamma/\beta))\) is the norming constant. Then the nth power \(\xi^{n}\) is M-det if 1≤n≤β. To see this known result, we calculate the ratio of even order moments of \(\xi^{n}\):
$$\begin{array}{@{}rcl@{}} \frac{\mathbf{E}[\!\xi^{2n(k+1)}]}{\mathbf{E}[\!\xi^{2nk}]}= \frac{\Gamma((\gamma+2n(k+1))/\beta)}{\alpha^{2n/\beta}\Gamma((\gamma+2nk)/\beta)} \approx \left(2n/(\alpha\beta)\right)^{2n/\beta} {(k+1)^{2n/\beta}}~~ \text{as}~ k\rightarrow \infty, \end{array} $$
by using the approximation of the gamma function: \(\Gamma (x)\approx \sqrt {2\pi }x^{x-1/2}e^{-x}~\text {as}~x\rightarrow \infty.\) Therefore, \(\xi^{n}\) is M-det if n≤β, by the criterion (h1). In fact, for odd integer n≥1, \(\xi^{n}\) is M-det iff n≤β, and for even integer n≥2, \(\xi^{n}\) is M-det iff n≤2β, regardless of the parameter γ.
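The growth-rate estimate in this example is easy to check numerically. The sketch below evaluates the even-order moment ratio of \(\xi^{n}\) through log-gamma values and compares it with the asymptotic expression \((2n/(\alpha\beta))^{2n/\beta}(k+1)^{2n/\beta}\); the parameter values α=1, β=2, γ=1, n=2 are an arbitrary illustrative choice (here n=β, so criterion (h1) applies and the ratio grows like \(k^{2}\)).

```python
from math import lgamma, exp, log

def even_moment_ratio(k, n, alpha, beta, gam):
    """E[xi^(2n(k+1))] / E[xi^(2nk)] for xi ~ DGG(alpha, beta, gam), computed via log-gamma."""
    return exp(lgamma((gam + 2 * n * (k + 1)) / beta)
               - lgamma((gam + 2 * n * k) / beta)
               - (2 * n / beta) * log(alpha))

alpha, beta, gam, n = 1.0, 2.0, 1.0, 2   # illustrative parameters with n = beta
for k in (10, 100, 1000):
    exact = even_moment_ratio(k, n, alpha, beta, gam)
    asym = (2 * n / (alpha * beta)) ** (2 * n / beta) * (k + 1) ** (2 * n / beta)
    print(f"k={k:5d}   ratio = {exact:14.1f}   asymptote = {asym:14.1f}")
```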
For further results about this distribution and its extensions, see Lin and Huang (1997), Pakes et al. (2001) and Pakes (2014, Theorem 8.3), as well as Examples 3 and 5 below.
We give here a direct proof of the equivalence of statements (h2), (h3), (h5) and (h6). First, for any nonnegative X, we have the equivalence of the following four statements (to be shown later):
\(\mathbf {E}[e^{c\sqrt {X}}]<\infty\) for some constant c>0
iff \(m_{k}\leq c_{0}^{k}(2k)!, ~k=1,2,\ldots,\) for some constant \(c_{0}>0\)
iff \(\limsup _{k\rightarrow \infty }\frac {1}{k}\,{m_{k}}^{1/(2k)}<\infty \)
iff \(m_{k}={\mathcal {O}}\left (k^{2k}\right)\) as k→∞.
Next, consider a general X with \(\mathbf{E}[e^{t|X|}]<\infty\) for 0≤t<c, namely, \(\mathbf {E}[e^{t{\sqrt {{|X|^{2}}}}}]<\infty \) for some constant t>0. Then the kth moment of \(|X|^{2}\) is exactly the 2kth moment of X, and we have immediately the following equivalences (by taking \(|X|^{2}\) as the above nonnegative X):
(h2) X has a mgf
iff (h6) \(m_{2k}\leq c_{0}^{k}(2k)!, ~k=1,2,\ldots,\) for some constant \(c_{0}>0\)
iff \(\limsup _{k\rightarrow \infty }\frac {1}{k}\,{m_{2k}}^{1/(2k)}<\infty \) (iff (h3) \(\limsup _{k\rightarrow \infty }\frac {1}{2k}\,m_{2k}^{1/(2k)}<\infty \))
iff \(m_{2k}={\mathcal {O}}(k^{2k})\) as k→∞ (iff (h5) \(m_{2k}={\mathcal {O}}((2k)^{2k})\) as k→∞).
We now present the checkable M-det criteria in the Stieltjes case. Consider X∼F on \({\mathbb {R}}_{+}=[0,\infty)\) with finite \(m_{k}=\mu_{k}=\mathbf{E}[X^{k}]\) for all positive integers k, and define the following statements, in which 's' stands for 'Stieltjes'.
(s1) \(\frac {m_{k+1}}{m_{k}}={\mathcal {O}}((k+1)^{2})={\mathcal {O}}(k^{2})\) as k→∞.
(s2) \(\sqrt {X}\) has a mgf (Hardy's condition), i.e., \(\mathbf {E}[e^{c\sqrt {X}}]<\infty \) for some constant c>0.
(s3) \(\limsup _{k\to \infty }\frac {1}{k}m_{k}^{1/(2k)}<\infty.\)
(s4) \(m_{k}={\mathcal {O}}(k^{2k})\) as k→∞.
(s5) \(m_{k}\le c_{0}^{k}\,(2k)!,\ k=1,2,\ldots,\) for some constant \(c_{0}>0\).
(s6) \({C}[F]=\sum _{k=1}^{\infty }m_{k}^{-1/(2k)}=\infty \) (Carleman's condition).
(s7) X is M-det on \({\mathbb {R}}_{+}\).
Theorem 2 Under the above settings, if X∼F on \({\mathbb {R}}_{+}\) satisfies one of the conditions (s1) through (s6), then X is M-det on \({\mathbb {R}}_{+}\). Moreover, (s1) implies (s2), (s2) through (s5) are equivalent, and (s5) implies (s6). In other words, the following chain of implications holds: (s1) ⇒ (s2) ⇔ (s3) ⇔ (s4) ⇔ (s5) ⇒ (s6) ⇒ (s7).
The first implication above was given in Lin and Stoyanov (2015). Note that the moment conditions here are in terms of moments of all positive (integer) orders, rather than even order moments as in the Hamburger case. For example, the statement (s1) means that the growth rate of all moments (not only even order moments) is less than or equal to two. Like Theorem 1, Theorem 2 contains most checkable criteria for moment determinacy in the Stieltjes case. Hardy (1917, 1918) proved that (s2) implies (s7) by two different approaches. Surprisingly, Hardy's criterion has been ignored for about one century since publication. The following new characteristic properties of (s2) are given in Stoyanov and Lin (2012), from which the equivalence of (s2) through (s5) follows immediately.
Lemma 1 Let a be a positive constant and X be a nonnegative random variable. (i) If \(\mathbf{E}[\exp(cX^{a})]<\infty\) for some constant c>0, then \(m_{k}\leq \Gamma (k/a +1)c_{0}^{k}, ~k=1,2,\ldots,\) for some constant \(c_{0}>0\).
(ii) Conversely, if, in addition, a≤1, and \(m_{k}\leq \Gamma (k/a +1)c_{0}^{k}, ~k=1,2,\ldots,\) for some constant c 0 >0, then E[ exp(cX a)]<∞ for some constant c>0. Corollary 1 Let a∈(0,1] and X≥0. Then E[ exp(cX a)]<∞ for some constantc>0 iff \(m_{k}\leq \Gamma (k/a +1)c_{0}^{k}, ~k=1,2,\ldots,\) for some constant c 0 >0. Let a be a positive constant and X be a nonnegative random variable. Then \(\limsup _{k\rightarrow \infty } \frac {1}{k}\,m_{k}^{a/k} < \infty ~ iff~ m_{k}\leq \Gamma (k/a+1)\,c_{0}^{k},~k=1,2,\ldots \), for some constant c 0 >0. Let a∈(0,1] and X≥0. Then E[ exp(cX a)]<∞ for some constant c>0 iff \(\limsup _{k\rightarrow \infty } \frac {1}{k}\,m_{k}^{a/k} <\infty \). We mention that for any nonnegative X, its mgf exists iff \(\limsup _{k\rightarrow \infty } \frac {1}{k}\,m_{k}^{1/k} <\infty \) due to Corollary 2. This in turn implies the equivalence of (h2) and (h4) in Theorem 1 for the Hamburger case. More general results in terms of absolute moments are given below. For easy comparison, some statements are repeated here. Equivalence Theorem A (Hamburger case). Let p≥1 be a constant and the random variable X∼F on \({\mathbb {R}}.\) Denote m k =E[ X k] for integer k≥1 and let μ ℓ =E[ |X|ℓ]<∞ for all ℓ>0. Then the following statements are equivalent: (a) X satisfies Cramér's condition, namely, the moment generating function of X exists. (b) \(\mu _{k}\le c_{0}^{k}k!, k=1,2,\ldots,\) for some constant c 0 >0. (c) \(\mu _{pk}\le c_{0}^{k}\Gamma (pk+1), k=1,2,\ldots,\) for some constant c 0 >0. (d) \(\limsup _{k\to \infty }\frac {1}{pk}\mu _{pk}^{1/(pk)}<\infty.\) (e) \(m_{2k}\le c_{0}^{k}(2k)!, k=1,2,\ldots,\) for some constant c 0 >0. (f) \(\limsup _{k\to \infty }\frac {1}{2k}m_{2k}^{1/(2k)}<\infty.\) The equivalence of (a), (b), (e) and (f) was given in Theorem 1. To prove the remaining relations, denote X ∗ =|X| and write \(Y_{p}=X_{*}^{p}\) and \({\nu _{k,p}=\mathbf {E}\left [Y_{p}^{k}\right ]=\mu _{pk}}\). Then note further that \(\mathbf {E}\left [e^{cX_{*}}\right ]=\mathbf {E}\left [e^{c(Y_{p})^{1/p}}\right ]<\infty \) for some constant c>0 iff \(\nu _{k,p}\le c_{0}^{k}\Gamma (pk+1), k=1,2,\ldots,\) for some constant c 0 >0 (by taking a=1/p and X=Y p in Lemma 1) iff (c) holds true. On the other hand, \(\nu _{k,p}\le c_{0}^{k}\Gamma (pk+1), k=1,2,\ldots,\) for some constant c 0 >0 iff \(\limsup _{k\to \infty }\frac {1}{k}\nu _{k,p}^{1/(pk)}<\infty \) (by Lemma 2) iff (d) holds true. The proof is complete. □ The above statements (e) and (f) are special cases of (c) and (d) with p=2, respectively. Similarly, we give the following equivalence theorem without proof for Stieltjes case. Equivalence Theorem B (Stieltjes case). Let p≥1 be a constant. Let the random variable 0≤X∼F on \({\mathbb {R}}_{+}\) with finite m k =μ k =E[ X k] for all integers k≥1. Then the following statements are equivalent: (a) X satisfies Hardy's condition, namely, the moment generating function of \(\sqrt {X}\) exists. (b) \(\mu _{k}\le c_{0}^{k}(2k)!, k=1,2,\ldots,\) for some constant c 0 >0. (c) \(\mu _{pk}\le c_{0}^{k}\Gamma (2pk+1), k=1,2,\ldots,\) for some constant c 0 >0. (d) \(\limsup _{k\to \infty }\frac {1}{pk}\mu _{pk}^{1/(2pk)}<\infty.\) (e) \(\limsup _{k\to \infty }\frac {1}{k}\mu _{k}^{1/(2k)}<\infty.\) Checkable criteria for moment indeterminacy In this section we consider the checkable criteria for moment indeterminacy. Krein (1945) proved the following remarkable criterion in the Hamburger case. Krein's Theorem. 
Let X∼F on \({\mathbb {R}}\) have a positive density function f and finite moments of all positive orders. Assume further that the Lebesgue logarithmic integral $$\begin{array}{@{}rcl@{}} {K}[\!f]\equiv\int_{-\infty}^{\infty}\frac{-\log f(x)}{1+x^{2}}dx<\infty. \end{array} $$ Then F is M-indet on \({\mathbb {R}}\). We call the logarithmic integral K[ f] in (1) the Krein integral for the density f. Graffi and Grecchi (1978) as well as Slud (1993) proved independently the counterpart of Krein's Theorem for the Stieltjes case by the method of symmetrization of a distribution on \({\mathbb {R}}_{+}\). To give a constructive and complete proof, we however need Lemma 3 below (see, e.g., Lin 1997, Theorem 3, and Rao et al. 2009, Remark 8). Graffi, Grecchi and Slud's Theorem. Let X∼F on \({\mathbb {R}}_{+}\) have a positive density function f and finite moments of all positive orders. Assume further that the integral $$\begin{array}{@{}rcl@{}} {K}[\!f]=\int_{0}^{\infty}\frac{-\log f({x^{2}})}{1+x^{2}}dx<\infty. \end{array} $$ Then F is M-indet on \({\mathbb {R}}_{+}\) and hence M-indet on \({\mathbb {R}}.\) Let Y have a symmetric distribution G with density g and finite moments of all positive orders. If the integral $${K}[\!g]=\int_{-\infty}^{\infty}\frac{-\log g(x)}{1+x^{2}}dx<\infty, $$ then there exists a symmetric distribution G ∗ ≠G having the same moment sequence as G. By the assumptions of the lemma, there exists a complex-valued function ϕ such that |ϕ|=g (in the sense of almost everywhere) and $$\int_{-\infty}^{\infty}\phi(x)e^{itx}dx=0,~~~t\ge 0 $$ (see the proof of Theorem 1 in Lin 1997 for details, and Garnett 1981, p. 66, for the construction of ϕ). The last equality implies that $$\int_{-\infty}^{\infty}x^{k}\phi(x)e^{itx}dx=0,~~t\ge 0,~k=0,1,2,\ldots. $$ In particular, $$\int_{-\infty}^{\infty}x^{k}\phi(x)dx=0,~~k=0,1,2,\ldots. $$ Let ϕ=ϕ 1 +i ϕ 2 , then both ϕ j are real and |ϕ j |≤g. We have $$\int_{-\infty}^{\infty}x^{k}\phi_{j}(x)dx=0,~j=1,2,~~k=0,1,2,\ldots. $$ We split the rest of the proof into three cases: (i) ϕ 1 ≠0, ϕ 2 =0, (ii) ϕ 1 =0, ϕ 2 ≠0, and (iii) ϕ 1 ≠0, ϕ 2 ≠0. (i) If ϕ 1 is odd, then for each t>0, the function ϕ ∗(x):=ϕ 1(x) sin(tX) is even and $$\int_{-\infty}^{\infty}x^{k}\phi_{*}(x)dx=0,~k=0,1,2,\ldots. $$ Take g ∗=g+ϕ ∗≠g. Then g ∗≥0 is even and has the same moment sequence as g. On the other hand, if ϕ 1 is not odd, then let first \(\ell (x)=\frac {1}{2}[\!\phi _{1}(x)+\phi _{1}(-x)]\) which is even and satisfies $$\int_{-\infty}^{\infty}x^{k}\ell(x)dx=0,~k=0,1,2,\ldots. $$ Next, take g ∗=g+ℓ≠g, which has the same moment sequence as g.(ii) The proof of this case is similar to that of case (i).(iii) If one ϕ j is not odd, then it is done as in (i) (by taking \(\ell (x)=\frac {1}{2}[\!\phi _{j}(x)+\phi _{j}(-x)]\) and g ∗=g+ℓ). Suppose now that both ϕ j are odd, then, by the definition of ϕ, we further have \(\int _{-\infty }^{\infty }\phi (x)e^{itx}dx=0\ \forall ~t\in {\mathbb {R}}.\) Let t>0 be fixed and define the function ψ(x)=ϕ 1(x) sin(tX)+ϕ 2(x) cos(tX) (the imaginary part of ϕ(x)e itX), then $$\int_{-\infty}^{\infty}x^{k}\psi(x)dx=0,~\int_{-\infty}^{\infty}x^{k} \psi(-x)dx=0,~k=0,1,2,\ldots.$$ Take \(m(x)=\frac {1}{2}[\!\psi (x)+\psi (-x)]=\phi _{1}(x)\sin (tx)\ne 0,\) which is even and satisfies $$\int_{-\infty}^{\infty}x^{k}m(x)dx=0,~k=0,1,2,\ldots. $$ We have g ∗=g+m≠g, which is nonnegative and has the same moment sequence as g. The proof is complete. 
□ It should be noted that in the logarithmic integral (2), the argument of the density function f is x 2 rather than x as in (1). Recently, Pedersen (1998) improved Krein's Theorem by the concept of positive lower uniform density sets and proved that it suffices to calculate the Krein integral over the two-sided tail of the density function (instead of the whole line). (Pedersen 1998). Let X∼F on \({\mathbb {R}}\) have a density function f and finite moments of all positive orders. Assume further that the integral $$\begin{array}{@{}rcl@{}} {K}[\!f]=\int_{{|x|\ge c}}\frac{-\log f(x)}{1+x^{2}}dx<\infty~ \text{for some}~ c\ge 0. \end{array} $$ Then X is M-indet on \({\mathbb {R}}\). See also Hörfelt (2005) for Theorem 3 with a different proof (provided by H.L. Pedersen). Pedersen (1998) also showed by giving an example that Krein's condition (1) is sufficient, but not necessary, for a distribution to be M-indet. This corrected the statement (2) in Leipnik (1981) about Krein's condition. On the other hand, Pakes (2001) and Hörfelt (2005) pointed out the counterpart of Pedersen's Theorem for the Stieltjes case. To prove this result, we need Lemma 4 below. (Pakes 2001, Hörfelt 2005). Let X∼F on \({\mathbb {R}}_{+}\) have a density function f and finite moments of all positive orders. Assume further that the integral $$\begin{array}{@{}rcl@{}} {K}[\!f]=\int_{{x\ge c}}\frac{-\log f({x^{2}})}{1+x^{2}}dx< \infty \,\, \text{for some~} c\ge 0. \end{array} $$ Then X is M-indet on \({\mathbb {R}}_{+}\) and hence M-indet on \({\mathbb {R}}.\) Let 0≤X∼F with density f and finite moments of all positive orders. Let Y∼G with density g be the symmetrization of \(\sqrt {X}\). If for some c≥0, $${K}[\!g]=\int_{|x|\ge c}\frac{-\log g(x)}{1+x^{2}}dx<\infty, $$ then X is M-indet on \({\mathbb {R}}_{+}.\) Under the condition on the logarithmic integral of g, Pedersen (1998, Theorem 2.2) proved that the set of polynomials is not dense in L\(^{1}({\mathbb {R}}, g(x)dx)\). This implies that the set of polynomials is not dense in L\(^{2}({\mathbb {R}}, g(x)dx)\) either (see, e.g., Berg and Christensen 1981, or Goffman and Pedrick 2002, p. 162). Then proceeding along the same lines as in the proof of Corollary 1 in Slud (1993), we conclude that the set of polynomials is not dense in L\(^{2}({\mathbb {R}}, f(x)dx)\). Therefore, X is M-indet on \({\mathbb {R}}\), which in turn implies that X is M-indet on \({\mathbb {R}}_{+}\) due to Chihara's (1968) result in Fact A above. The proof is complete. □ Conversely, once we prove Theorem 4, we can extend Lemma 4 as follows. Lemma 4 ∗. If X∼F on \({\mathbb {R}}\) satisfies the conditions in Theorem 3, then X 2 is M-indet. Apply Theorem 4 above and Pakes et al.'s (2001) Theorem 3(i): If X∼F on \({\mathbb {R}}\) satisfies condition (3), then the Krein integral K[ f 2] in (4) of X 2 is finite, where f 2 is the density of X 2. □ For the M-det case, a trivial analogue of Lemma 4∗ is the following. Lemma 4 ∗∗. If X∼F on \({\mathbb {R}}\) satisfies Carleman's condition (h7), then X 2 satisfies Carleman's condition (s6) and is M-det on \({\mathbb {R}}_{+}.\) For simplicity, all the conditions (1) through (4) are called Krein's condition. For illustration of how to use Krein's and Hardy's criteria, we now recover Berg's (1988) results using these powerful criteria (see also Prohorov and Rozanov 1969, p. 167, Pakes and Khattree 1992, Lin and Huang 1997, and Stoyanov 2000). Let X have a normal distribution and α>0. 
For illustration of how to use Krein's and Hardy's criteria, we now recover Berg's (1988) results (see also Prohorov and Rozanov 1969, p. 167, Pakes and Khattree 1992, Lin and Huang 1997, and Stoyanov 2000). Example 2. Let X have a normal distribution and α>0. Then (i) the odd power X 2n+1 is M-indet if n≥1, and (ii) |X|α is M-det iff α≤4. Without loss of generality, we assume that X has a density \(f(x)=\frac {1}{\sqrt {\pi }}\exp \left ({-x^{2}}\right),\ x\in {\mathbb {R}},\) namely, \(\sqrt {2}X\) has a standard normal distribution. We discuss these results in three steps. (I) Berg (1988) proved the moment indeterminacy of distributions by giving examples. For part (i), he calculated first the density of X 2n+1: $$f_{n}(x)=\frac{1}{(2n+1)\sqrt{\pi}}|x|^{-2n/(2n+1)}\exp\left({-{|x|^{2/(2n+1)}}}\right),\ x\in {\mathbb{R}}, $$ and then constructed the density function $$\begin{array}{@{}rcl@{}} f_{r,n}(x)&=&f_{n}(x)\left\{1+r{\left[\cos\left(\beta_{n}|x|^{2/(2n+1)}\right)- \gamma_{n}\sin\left(\beta_{n}|x|^{2/(2n+1)}\right)\right]}\right\}\\ &\equiv &f_{n}(x)\{1+rp_{n}(x)\},\ x\in {\mathbb{R}}, \end{array} $$ where \(|r|\le \sin \frac {\pi }{2(2n+1)},\ \beta _{n}=\tan \frac {\pi }{2n+1}\) and \(\gamma _{n}= \cot \frac {\pi }{2(2n+1)}.\) It is seen that f r,n ≠f n if r≠0 and n≥1, but f r,n and f n have the same moment sequence because the product of the density f n and the function p n defined above has vanishing moments by a tedious calculation: $$\int_{-\infty}^{\infty}x^{k}f_{n}(x){p_{n}(x)}dx=0,\ k=0,1,2,\ldots. $$ This proves part (i). Alternatively, we note that the Krein integral $${{K}[\!f_{n}]=\int_{-\infty}^{\infty}\frac{-\log f_{n}(x)}{1+x^{2}}dx=C+\int_{-\infty}^{\infty}\frac{|x|^{2/(2n+1)}}{1+x^{2}}dx<\infty\ (\text{if}\ n\ge 1),}$$ which implies by Krein's Theorem that the odd power X 2n+1 is M-indet if n≥1. (II) For part (ii), the density of |X|α is $$f_{\alpha}(x)=\frac{2}{\alpha\sqrt{\pi}}x^{1/\alpha-1} \exp\left({-{x^{2/\alpha}}}\right),\ x\ge 0.$$ If α>4, Berg constructed again the density function $$f_{r,\alpha}(x)=f_{\alpha}(x)\left\{1+r{\left[\cos\left(\beta_{\alpha}x^{2/{\alpha}}\right)- \gamma_{\alpha}\sin\left(\beta_{\alpha}x^{2/{\alpha}}\right)\right]}\right\}\equiv f_{\alpha}(x)\{1+rp_{\alpha}(x)\},\ x\ge 0,$$ where |r|≤ sin(π/α), β α= tan(2π/α) and γ α= cot(π/α). Then f r,α ≠f α if r≠0 and α>4, but f r,α and f α have the same moment sequence because $$\int_{0}^{\infty}x^{k}f_{\alpha}(x){p_{\alpha}(x)}dx=0,\ k=0,1,2,\ldots.$$ Therefore, |X|α is M-indet if α>4. Again, we see that the Krein integral (in Stieltjes case) $${{K}[\!f_{\alpha}]=\int_{0}^{\infty}\frac{-\log f_{\alpha}\left(x^{2}\right)}{1+x^{2}}dx=C+\int_{0}^{\infty}\frac{x^{4/{\alpha}}}{1+x^{2}}dx<\infty\ (\text{if}\ {\alpha}> 4).}$$ So the required result follows immediately from Krein's criterion (4). (III) For the rest of part (ii), Berg calculated the kth moment of |X|α: $$m_{\alpha,k}=\int_{0}^{\infty}x^{k}f_{\alpha}(x)dx=\frac{1}{\sqrt{\pi}} \Gamma\left(\frac{\alpha k+1}{2}\right),\ k=0,1,2,\ldots.$$ By Stirling's formula, \(m_{\alpha,k}^{1/k}\approx ck^{\alpha /2}\) as k→∞, and hence the Carleman quantity (in Stieltjes case) is equal to $${C}[\!f_{\alpha}]=\sum_{k=1}^{\infty}m_{\alpha,k}^{-1/(2k)}=\infty\ (\text{if}\ {\alpha}\le 4).$$ This proves the sufficiency part of (ii). Alternatively, we note that if α∈(0,2], the mgf of |X|α exists by its density function above, and hence |X|2α is M-det by Hardy's criterion. There are some ramifications of the moment problem for normal random variables. For example, Slud (1993) investigated the moment problem for polynomial forms in normal random variables, while Hörfelt (2005) studied the moment problem for some Wiener functionals which extend Berg's results in Example 2.
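The vanishing-moment property in step (II) above can also be checked numerically. The sketch below takes α = 6 (my own choice of an exponent larger than 4) and rewrites the kth integral via the substitution u = x^{2/α}, so that the quadrature only has to handle a damped oscillatory Gamma-type integrand; the results are reported relative to the moments m_{α,k}.

```python
# Numerical check of Berg's construction in part (II) for alpha = 6 > 4:
# the integrals  int_0^inf x^k f_alpha(x) p_alpha(x) dx  vanish for every k,
# so f_{r,alpha} and f_alpha share all moments.  The substitution u = x^{2/alpha}
# turns the k-th integral into (1/sqrt(pi)) int_0^inf u^{s-1} e^{-u} p(u) du
# with s = (alpha*k+1)/2, which is convenient for quad.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

alpha   = 6.0
beta_a  = np.tan(2.0*np.pi/alpha)    # beta_alpha in the text
gamma_a = 1.0/np.tan(np.pi/alpha)    # gamma_alpha in the text

def perturbation_integral(k):
    s = (alpha*k + 1.0)/2.0
    integrand = lambda u: (u**(s - 1.0)*np.exp(-u)
                           *(np.cos(beta_a*u) - gamma_a*np.sin(beta_a*u))/np.sqrt(np.pi))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

for k in range(6):
    m_k = Gamma((alpha*k + 1.0)/2.0)/np.sqrt(np.pi)     # k-th moment of |X|^alpha
    print(f"k = {k}:  perturbation integral / m_k = {perturbation_integral(k)/m_k: .2e}")
```

All ratios come out at the level of quadrature noise, which is the numerical counterpart of the "tedious calculation" mentioned in the text.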
In addition, Lin and Huang (1997) treated the double generalized Gamma (DGG) distribution as an extension of the normal one and found the necessary and sufficient conditions for powers of a DGG random variable to be M-det. Stieltjes classes for M-indet distributions Stieltjes (1894) observed that some positive measures, e.g., \(\mu (dx)= e^{-x^{1/4}}dx\) or \(x^{n-\log x}\,dx\) (n is an integer), are not uniquely determined by their moments. This might have been the starting point for T. J. Stieltjes' study of the moment problem (see Kjeldsen 1993). It was C. C. Heyde who first presented this phenomenon in probability language and proved in 1963 that the lognormal distribution is M-indet by giving the example described next. Consider the standard lognormal density $$f(x)=\frac{1}{\sqrt{2\pi}}x^{-1}\exp\left[-\frac{1}{2}(\log x)^{2}\right],\ x>0,$$ with moment sequence \(\{\exp (k^{2}/2)\}_{k=1}^{\infty }.\) Then, for each ε∈ [−1,1], $$\begin{array}{@{}rcl@{}}\int_{0}^{\infty}{f(x)}[\!1+\varepsilon\sin(2\pi\log x)]x^{k}dx =\int_{0}^{\infty}{f(x)}x^{k}dx\ \forall\ k=0,1,2,\ldots \end{array} $$ because the product of the density f(x) and the function sin(2π logx) has vanishing moments: $$\int_{0}^{\infty} f(x)[\!\sin (2\pi \log x)]x^{k}dx=\frac{e^{k^{2}/2}}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-x^{2}/2}\sin(2\pi x)dx=0\,\, \forall\ k=0,1,2,\ldots.$$ There are many other distributions having the same moment sequence as the above lognormal with mean \(\sqrt {e}\), including (i) the ones with density f satisfying the functional equation f(qx)=q −1/2 xf(x), where q=1/e∈(0,1) (see, e.g., López-García 2011, Theorem 1), or, more generally, (ii) the distributions F satisfying \(F(x)=e^{-1/2}\int _{0}^{ex}udF(u),\ x\ge 0\) (Pakes 1996, Section 3). The latter showed that each such F corresponds to a finite measure on the interval (1/e,1] and vice versa. Hence the cardinality of the set of all solutions to the functional equation is \({\aleph }_{2}=2^{\mathbb {R}}.\) All these distributions are called the solutions to the lognormal moment problem (see also Chihara 1970, Leipnik 1982, Pakes 2007 and Christiansen 2003). Recently, Stoyanov (2004) formulated Stieltjes classes for M-indet absolutely continuous distributions as follows. Let X∼F have an M-indet distribution on \({\mathbb {R}}\) with density f. A Stieltjes class \({\mathcal {S}}\) for F is defined by $${\mathcal{S}}={\mathcal{S}}(f, p)=\{f_{\varepsilon}: f_{\varepsilon}(x)=f(x)[\!1+\varepsilon p(x)],\ x\in {\mathbb R},\ \varepsilon\in\; [-1,1]\},$$ where p is a measurable function (called a perturbation function) such that |p(x)|≤1 and $${\int_{-\infty}^{\infty}f(x)p(x)x^{k}dx=0}, k=0,1,2,\ldots. $$ We note that for a given M-indet distribution, the choice of the function p might not be unique. Besides the previous lognormal and normal results of Heyde (1963a) and Berg (1988), some other perturbation functions are given below: If X has a generalized Weibull density \(f(x)=\frac {1}{24}\exp (-x^{1/4}),\,x>0,\) then p(x)= sin(x^{1/4}), x>0 (Stieltjes 1894, Serfling 1980). If X has a density function \(f(x)=c\,x^{-\log x},\ x>0,\) where c is a norming constant, then we choose p(x)= sin(2π logx), x>0 (Stieltjes 1894). If X has a gamma density with parameter α>0, then X β is M-indet provided β> max{2,2α}, and we can choose $${} p(x)=\sin\left(\alpha\pi/\beta\right)\left[\cos\left(\tan(\pi/\beta)x^{1/\beta}\right)- \cot(\alpha\pi/\beta)\sin\left(\tan(\pi/\beta)x^{1/\beta}\right)\right],\ x>0 $$ (Targhetta 1990); a numerical check of this choice is sketched below.
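The following sketch is the numerical check referred to above for Targhetta's perturbation: with a gamma(α) variable X and Y = X^β, the kth integral of f_Y·p reduces, via u = y^{1/β}, to a Gamma-type integral that vanishes for every k, and |p|≤1 so that each f_ε is a genuine density. The particular values α = 1.5 and β = 4 (> max{2,2α}) are my own choices.

```python
# Numerical check of Targhetta's perturbation for Y = X^beta, X ~ gamma(alpha):
# after substituting u = y^{1/beta}, the k-th integral of y^k f_Y(y) p(y) equals
# (1/Gamma(alpha)) int_0^inf u^{s-1} e^{-u} [sin(a)cos(tu) - cos(a)sin(tu)] du
# with s = beta*k + alpha, a = alpha*pi/beta and t = tan(pi/beta); it should be 0.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

alpha, beta = 1.5, 4.0        # my illustrative parameters; beta > max(2, 2*alpha)
a, t = alpha*np.pi/beta, np.tan(np.pi/beta)

def perturbation_integral(k):
    s = beta*k + alpha
    integrand = lambda u: (u**(s - 1.0)*np.exp(-u)
                           *(np.sin(a)*np.cos(t*u) - np.cos(a)*np.sin(t*u))/Gamma(alpha))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

for k in range(5):
    m_k = Gamma(beta*k + alpha)/Gamma(alpha)   # k-th moment of Y = X^beta
    print(f"k = {k}:  perturbation integral / E[Y^{k}] = {perturbation_integral(k)/m_k: .2e}")

# |p| never exceeds 1, so f_eps = f_Y*(1 + eps*p) stays nonnegative for |eps| <= 1:
x_grid = np.linspace(0.01, 50.0, 2000)
p_vals = np.sin(a)*np.cos(t*x_grid**(1.0/beta)) - np.cos(a)*np.sin(t*x_grid**(1.0/beta))
print("max |p| on the grid:", np.abs(p_vals).max())
```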
If X has a density function \(f(x)=c \exp (-\alpha |x|^{\rho }), x\in {\mathbb {R}}\), where α>0, ρ∈(0,1) and c is a norming constant, then we choose \(p(x)=\cos (\alpha |x|^{\rho }),\,x\in {\mathbb {R}}\) (Prohorov and Rozanov 1969, p. 167). For the log-skew-normal distribution with parameter λ>0, we choose the perturbation function $$p(x)=\frac{\ell(x-1)}{\ell(x)}\frac{\sin[\!\pi\log(x-1)]}{\Phi(\lambda\log x)},\ \ \text{if}\ x>1,$$ and p(x)=0, otherwise, where ℓ is the density of standard lognormal LN(0,1) and Φ is the standard normal distribution (Lin and Stoyanov 2009). Several systematic approaches for constructing Stieltjes classes are available. For example, for any M-indet distribution F on (0,∞) with density f bounded from below as f(x)≥A exp(−α x β),x>0, where A>0, α>0 and β∈(0,1/2) are constants, we find first a complex-valued function g satisfying(i) g is analytic in \({\mathbb {C}}_{+}\setminus \{0\},\) where \({\mathbb {C}}_{+}=\{z: \text {Im}\,z\ge 0\}\) is the upper half-plane, and(ii) \(g(x)\in {\mathbb {R}}, x>0,\) and \(|g(z)|\le A\exp \left (-\alpha |z|^{\beta }\right), z\in {\mathbb {C}}_{+}\setminus \{0\}.\)Then choose the perturbation function p(x)=[Im g(−x)]/f(x), x>0 (Ostrovska 2014). On the other hand, given any Stieltjes class \({\mathcal {S}}(f, p)\) defined above and a positive random variable V with distribution H and finite moments of all positive orders, we can construct a new Stieltjes class \({\mathcal {S}}(f^{*}, p^{*})\) by random scaling: Y ε:=VX ε,ε∈ [−1,1], where the random variable X ε has density f ε,V is independent of \(X_{\varepsilon }, f^{*}=f^{*}_{0}\) is the density of Y 0 =VX, and the perturbation function p ∗ satisfies \(f^{*}(x)p^{*}(x)=\int _{0}^{\infty }v^{-1}f({x}/{v})p({x}/{v})dH(v),\ x\in {\mathbb {R}}\) (Pakes 2007, Section 5). For more perturbation functions, see Stoyanov (2004), Stoyanov and Tolmatz (2004,2005), Ostrovska and Stoyanov (2005), Gómez and López-García (2007), Penson et al. (2010), Wang (2012), Kleiber (2013,2014) and Ostrovska (2016). Converse criteria In this section we present some converses to the previous M-(in)det criteria. Recall that for the Stieltjes case, if (s1) above holds true, i.e., \({m_{k+1}}/{m_{k}}={\mathcal {O}}((k+1)^{2})\) as k→∞, then X is M-det on \({\mathbb {R}}_{+}.\) One might guess that if the moments \(\{m_{k}\}_{k=1}^{\infty }\) grow faster, then X becomes M-indet. This is true under one more condition defined below (see Lin 1997, Stirzaker 2015, p. 223, Kopanov and Stoyanov 2017, or Stoyanov and Kopanov 2017). Condition L: Suppose, in the Stieltjes case, that f is a density function on \({\mathbb {R}}_{+}\) such that for some fixed x 0 ≥0, f is strictly positive and differentiable on (x 0 ,∞) and $$L_{f}(x):={-\frac{xf^{\prime}(x)}{f(x)}\nearrow \infty~~\text{as}~~x_{0}<x\rightarrow \infty.} $$ In the Hamburger case we require the density \(f(x), x \in {\mathbb {R}},\) to be symmetric about zero. Let X be a nonnegative random variable with distribution F and let its moments grow fast in the sense that m k+1/m k ≥c(k+1)2+ε for all large k, where c and ε are positive constants. Assume further that X has a density function f which satisfies Condition L. Then X is M-indet. Note that in the above theorem, X is M-indet on \({\mathbb {R}}_{+}\) iff it is M-indet on \({\mathbb {R}}\) because X has a density. For the Hamburger case, we have the following. 
Theorem 6. Suppose the moments of X∼F on \({\mathbb {R}}\) grow fast in the sense that m 2(k+1)/m 2k ≥c(k+1)2+ε for all large k, where c and ε are positive constants. Assume further that X has a density function f which is symmetric about zero and satisfies Condition L. Then X satisfies Krein's condition, and hence both X and X 2 are M-indet. The crucial point in the proofs of Theorems 5 and 6 is to prove that the Krein integral K[ f]<∞ by Condition L and the moment condition. The M-indet property of X 2 in Theorem 6 is due to Lemma 4∗ and Fact A above. Similarly, we have the following results for other criteria (s4) and (h5) (see Lin and Stoyanov 2015,2016, and Stoyanov et al. 2014). Theorem 7 (Stieltjes case). Let X∼F on \({\mathbb {R}}_{+}\) and let its moments grow fast in the sense that m k ≥c k (2+ε)k, k=1,2,…, for some positive constants c and ε. Assume further that X has a density function f which satisfies Condition L. Then X is M-indet. Theorem 8 (Hamburger case). Suppose the moments of X∼F grow fast in the sense that m 2k ≥c(2k)(2+ε)k,k=1,2,…, for some positive constants c and ε. Assume further that X has a density function f which is symmetric about zero and satisfies Condition L. Then X satisfies Krein's condition, and hence both X and X 2 are M-indet. Note that Condition L also applies to the converses of the M-indet criteria, that is, to M-det results. Actually, this is the original purpose of the condition, under which K[ f]=∞ implies C[ F]=∞ (Lin 1997). The M-det property of X 2 in the next result is due to Lemma 4∗∗ and Fact A above. Theorem 9. In Theorem 3 (Hamburger case), if the Krein integral K[ f]=∞ and if f satisfies Condition L, then X satisfies Carleman's condition, and hence both X and X 2 are M-det. Theorem 10. In Theorem 4 (Stieltjes case), if the Krein integral K[ f]= ∞ and if f satisfies Condition L, then X satisfies Carleman's condition and is M-det. In view of Theorems 9 and 10 above, we know that in the class of absolutely continuous distributions with density functions satisfying Condition L, Krein's condition ((3) or (4)) becomes necessary and sufficient for a distribution to be M-indet. In the above converse results, it is possible to replace Condition L by other slightly weaker conditions (mathematically) like those in Pakes (2001) and Gut (2002), but as mentioned before, we focus only on the checkable conditions in this survey. Interestingly, Condition L is closely related to a useful concept in reliability theory. More precisely, if a nonnegative random variable X with density F ′=f satisfies Condition L on \({\mathbb {R}}_{+}\) with x 0 =0, then it has an increasing generalized failure rate (by Theorem 1 in Lariviere 2006), namely, the product function \(xf(x)/\overline {F}(x)\) (of x and the failure rate) increases in x; a small numerical illustration is given below. In addition to the previous problems for normal distributions, we mention here some more variants for general cases, but we are not going to pursue all of these moment problems. To solve these problems, we need to derive new auxiliary tools case by case (like Lemma 5 below). Lin and Stoyanov (2002) and Gut (2003) studied the moment problem for random sums of independent and identically distributed (i.i.d.) random variables. Stoyanov et al. (2014) and Lin and Stoyanov (2015) investigated the moment problem for products of i.i.d. random variables. In the next section we review the recent results about products of independent random variables with different distributions; for details, see Lin and Stoyanov (2016).
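Before moving on to products of random variables, here is the small numerical illustration of Condition L and the generalized failure rate promised above. The test density, a generalized gamma GG(1, 1/2, 3/2), and the evaluation points are my own choices; norming constants cancel in both quantities.

```python
# A quick look at Condition L and the generalized failure rate mentioned above.
# The test density is a generalized gamma GG(1, 1/2, 3/2) (my own choice),
# f(x) proportional to x^{1/2} * exp(-x^{1/2}); constants cancel in both quantities.
import numpy as np
from scipy.integrate import quad

def f(x):                       # unnormalized density (norming constant cancels)
    return np.sqrt(x) * np.exp(-np.sqrt(x))

def L_f(x, h=1e-6):             # Lin's quantity  -x f'(x)/f(x)  via the log-derivative
    return -x * (np.log(f(x + h)) - np.log(f(x - h))) / (2.0*h)

def gfr(x):                     # generalized failure rate  x f(x) / F_bar(x)
    tail, _ = quad(f, x, np.inf, limit=200)
    return x * f(x) / tail

for x in (1.0, 4.0, 25.0, 100.0, 400.0):
    print(f"x = {x:6.1f}   L_f(x) = {L_f(x):8.3f}   x*f(x)/F_bar(x) = {gfr(x):8.3f}")
# Both columns increase without bound: here L_f(x) = -1/2 + sqrt(x)/2, so
# Condition L holds with x_0 = 0, and (by the cited result of Lariviere) the
# generalized failure rate is increasing as well.
```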
Moment problem for products of random variables Products of random variables occur naturally in stochastic modelling of complex random phenomena in areas such as statistical physics, quantum theory, communication theory, reliability theory and financial modelling; especially in modern communications (see, e.g., Chen et al. 2012, Springer 1979, and Galambos and Simonelli 2004). We split the problem in question into three cases: (a) products of nonnegative random variables, (b) products of random variables taking values in \({\mathbb {R}},\) and (c) the mixed case. Moreover, all random variables considered have finite moments of all positive orders. 6.1 Products of nonnegative random variables The M-det result (Theorem 11 below) is an easy consequence of Theorem 2, while the hard part is the M-indet result (Theorem 12) whose proof needs a delicate analysis. Theorem 11. Let ξ 1 ,…,ξ n be independent nonnegative random variables and let the moments \(m_{i,k}=\mathbf {E}\left [\xi _{i}^{k}\right ]\!, \ i=1,\ldots,n,\) satisfy the conditions: $$m_{i,k} = {\mathcal{O}}(k^{{a_{i}}k}) \ \text{as} \ k \to \infty, \ \text{for} \ i=1, \ldots, n, $$ where a 1 ,…,a n are positive constants. If the parameters a i are such that \(\sum _{i=1}^{n}a_{i}\le 2,\) then the product \(Z_{n}=\Pi _{i=1}^{n}\xi _{i}\) satisfies Hardy's condition and is M-det. Theorem 12. Consider n independent nonnegative random variables, ξ i ∼F i , i=1,2,…,n, where n≥2. Suppose that each F i is absolutely continuous and has a positive density f i on (0,∞) and that the following conditions are satisfied: (i) At least one of the densities f 1 (x),…,f n (x) is decreasing in [ x 0 ,∞), where x 0 ≥1 is a constant. (ii) For each i=1,2,…,n, there exists a constant A i >0 such that the density f i and the tail function \(\overline {F_{i}}(x)=1-F_{i}(x)=\Pr (\xi _{i}>x)\) together satisfy the relation $$\begin{array}{@{}rcl@{}} {f_{i}(x)/\overline{F_{i}}(x)\geq A_{i}/x~~\text{for}~~x\geq x_{0},} \end{array} $$ and there exist constants B i >0, α i >0,β i >0 and real γ i such that $$\begin{array}{@{}rcl@{}} {\overline{F_{i}}(x)\geq B_{i}x^{\gamma_{i}}\exp\left({-\alpha_{i} x^{\beta_{i}}}\right)~~ \text{for}~~ x\geq x_{0}.} \end{array} $$ If, in addition to the above, \(\sum _{i=1}^{n}1/{\beta _{i}}>2,\) then the product \(Z_{n}=\Pi _{i=1}^{n}\xi _{i}\) is M-indet. Let us explain the above conditions. In terms of reliability language, the failure rate in (5) and the survival function in (6) cannot approach zero too quickly. In other words, (5) and (6) control the tail (decreasing) behavior of the related distributions in some sense. There are three key steps in the proof of Theorem 12: (i) represent the density function of the product Z n in multiple integral form, (ii) estimate the lower bound of the density function by truncating the two tails of this integral, and (iii) apply Krein's criterion for the Stieltjes case. For the estimation in step (ii), we need the following auxiliary tool which can be proved using integration by parts. Lemma 5. Let F be a distribution on \({\mathbb {R}}\) such that (i) it has density f on the subset [ a,r a], where a>0 and r>1, and (ii) for some constant \(A>0, {f(x)}/{\overline {F}(x)}\ge {A}/{x}\) on [ a,r a]. Then $$\int_{a}^{ra}\frac{f(x)}{x}dx\ge \left(1-\frac{1}{r}\right)\frac{A}{1+A}\frac{\overline{F}(a)}{a}. $$
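A quick numerical sanity check of Lemma 5 is easy to set up; the distribution Exp(1) and the values a = 1, r = 2, A = 1 below are my own choices satisfying hypothesis (ii).

```python
# Numerical sanity check of Lemma 5 with F = Exp(1), a = 1, r = 2 (my choices).
# Here f(x)/F_bar(x) = 1 >= A/x on [1, 2] for A = 1, and the lemma asserts
#   int_a^{ra} f(x)/x dx  >=  (1 - 1/r) * A/(1+A) * F_bar(a)/a.
import math
from scipy.integrate import quad

a, r, A = 1.0, 2.0, 1.0
f = lambda x: math.exp(-x)            # density of Exp(1)
F_bar = lambda x: math.exp(-x)        # survival function of Exp(1)

lhs, _ = quad(lambda x: f(x)/x, a, r*a)
rhs = (1.0 - 1.0/r) * A/(1.0 + A) * F_bar(a)/a
print(f"lhs = {lhs:.5f}  >=  rhs = {rhs:.5f}  : {lhs >= rhs}")
```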
For illustration of how to use Theorems 11 and 12, consider the generalized gamma distributions. We say that ξ∼GG(α,β,γ) if its density is of the form $$\begin{array}{@{}rcl@{}} {f(x)=c\,x^{\gamma-1}\exp({-\alpha x^{\beta}}),\ ~x\geq 0.} \end{array} $$ Here α,β,γ>0, f(0)=0 if γ≠1, and c=β α γ/β/Γ(γ/β) is the norming constant. Then we have the following characterization result (see also Pakes 2014 for a much more general result with a different proof): Suppose that ξ 1 ,…,ξ n are n independent random variables and let ξ i ∼GG(α i ,β i ,γ i ),i=1,…,n. Then the product \(Z_{n}=\Pi _{i=1}^{n}\xi _{i}\) is M-det iff \(\sum _{i=1}^{n}{1}/{\beta _{i}} \leq 2.\) Consider the class of inverse Gaussian distributions. We say that ξ∼IG(μ,λ) if its density is of the form $$\begin{array}{@{}rcl@{}} {f(x)=\left(\frac{\lambda}{2\pi x^{3}}\right)^{1/2}\exp\left[-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right],\ ~x>0,} \end{array} $$ where μ,λ>0 and f(0)=0. It can be shown that the product of two independent random variables is M-det if each one is exponential or inverse Gaussian, while the product of three such random variables is M-indet. For the powers of such random variables and others, see, e.g., Lin and Huang (1997), Stoyanov (1999), Pakes et al. (2001), Stoyanov et al. (2014) and Lin and Stoyanov (2015). Here are some recent results. Let ξ 1 ∼IG(μ 1 ,λ 1 ), ξ 2 ∼IG(μ 2 ,λ 2 ) and η∼Exp(1)=GG(1,1,1) be three independent random variables. Then both the products ξ 1 η and ξ 1 ξ 2 are M-det, while ξ 1 ξ 2 η is M-indet. 6.2 Products of random variables taking values in \({\mathbb {R}}\) For this Hamburger case, we have the counterparts of Theorems 11 and 12 as follows. In the proof of Theorem 14, the symmetry condition on the densities plays a crucial role. Let ξ 1 ,…,ξ n be independent random variables and let the even order moments \(m_{i,2k}=\mathbf {E}\left [\xi _{i}^{2k}\right ]\!, \ i=1,\ldots,n,\) satisfy the conditions: $$m_{i,2k} = {\mathcal{O}}((2k)^{{2a_{i}}k}) \text{~as}~ k \to \infty, ~ \text{for~} i=1, \ldots, n, $$ where a 1 ,…,a n are positive constants. If the parameters a i are such that \(\sum _{i=1}^{n}a_{i}\le 1,\) then the product \(Z_{n}=\Pi _{i=1}^{n}\xi _{i}\) satisfies Cramér's condition and is M-det. Consider n independent random variables ξ i ∼F i ,i=1,…,n, where n≥2. Suppose each F i has a positive density f i on \({\mathbb {R}}\) that is symmetric about zero. Assume further that (i) at least one of the densities f 1 (x),…,f n (x) is decreasing in [ x 0 ,∞), where x 0 ≥1 is a constant, and (ii) for all i, \(f_{i}/\overline {F_{i}}\) satisfies the condition (5): \(f_{i}(x)/\overline {F_{i}}(x)\geq A_{i}/x~~\text {for}~~x\geq x_{0}\), and \(\overline {F_{i}}\) satisfies the condition (6): \(\overline {F_{i}}(x)\geq B_{i}x^{\gamma _{i}}\exp \left ({-\alpha _{i} x^{\beta _{i}}}\right)~~ \text {for}~~ x\geq x_{0}\). If, in addition to the above, \(\sum _{i=1}^{n}1/{\beta _{i}}>1,\) then the product \(Z_{n}=\Pi _{i=1}^{n}\xi _{i}\) satisfies Krein's condition, and hence both Z n and \(Z_{n}^{2}\) are M-indet. Applying Theorems 13 and 14 to the product of double generalized gamma random variables ξ∼DGG(α,β,γ), defined above, yields the following interesting result: Suppose that ξ 1 ,…,ξ n are n independent random variables, and let ξ i ∼DGG(α i ,β i ,γ i ), i=1,2,…,n. Then the product \(Z_{n}=\Pi _{i=1}^{n}\xi _{i}\) is M-det iff \(\sum _{i=1}^{n}{1}/{\beta _{i}}\leq 1\) iff \(Z_{n}^{2}\) is M-det.
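The two characterizations above lend themselves to a one-line check. The helper below, entirely my own illustration, computes ∑ 1/β_i and compares it with the relevant threshold: 2 for a product of nonnegative GG factors (Stieltjes case) and 1 for a product of DGG factors (Hamburger case).

```python
# Determinacy check for products of generalized gamma factors, based on the two
# characterizations above: a product of independent GG(alpha_i, beta_i, gamma_i)
# variables is M-det iff sum(1/beta_i) <= 2, and a product of DGG variables is
# M-det iff sum(1/beta_i) <= 1.  The helper and the examples are illustrative.
def product_is_mdet(betas, case="GG"):
    threshold = 2.0 if case == "GG" else 1.0      # "GG": Stieltjes, "DGG": Hamburger
    return sum(1.0/b for b in betas) <= threshold

# Exp(1) = GG(1,1,1) has beta = 1; a normal law is a DGG law with beta = 2.
print(product_is_mdet([1.0, 1.0]))                  # two exponentials: True (M-det)
print(product_is_mdet([1.0, 1.0, 1.0]))             # three exponentials: False (M-indet)
print(product_is_mdet([2.0, 2.0], case="DGG"))      # product of two normals: True (M-det)
print(product_is_mdet([2.0, 2.0, 2.0], case="DGG")) # three normals: False (M-indet)
```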
6.3 The mixed case Finally, we consider the products of both types of random variables, nonnegative ones and ones taking values in \({\mathbb {R}}\). Recall that this is the Hamburger case; the M-det criterion is similar to Theorem 13 and is omitted. The next result about an M-indet criterion slightly extends Theorem 5.1 of Lin and Stoyanov (2016). The proof is similar and is therefore omitted. Consider n independent random variables divided into two groups. The first group, \(\phantom {\dot {i}\!}\xi _{1}, \ldots, \xi _{n_{0}},\) consists of nonnegative variables, while all the variables in the second group, \(\xi _{n_{0}+1}, \ldots, \xi _{n},\) take values in \({\mathbb {R}},\) where 1≤n 0 <n. Suppose that each ξ i ∼F i has a density f i and that f i , i=1,…,n 0 , are positive on (0,∞), while f j , j=n 0 +1,…,n, are positive on \({\mathbb {R}}\) and symmetric about 0. Assume further that (i) at least one of the densities f j (x),j=1,2,…,n, is decreasing in [ x 0 ,∞), where x 0 ≥1 is a constant, and (ii) for all i, \(f_{i}/\overline {F_{i}}\) satisfies the condition (5): \(f_{i}(x)/\overline {F_{i}}(x)\geq A_{i}/x~~\text {for}~~x\geq x_{0}\), and \(\overline {F_{i}}\) satisfies the condition (6): \(\overline {F_{i}}(x)\geq B_{i}x^{\gamma _{i}}\exp ({-\alpha _{i} x^{\beta _{i}}})~~ \text {for}~~ x\geq x_{0}\). An application of the above theorem leads to the following interesting result: The product of two independent random variables and its square are both M-indet if one random variable is normal and the other is exponential, or chi-square, or inverse Gaussian.

References

Akhiezer, NI: The Classical Problem of Moments and Some Related Questions of Analysis. Oliver & Boyd, Edinburgh (1965). [Original Russian edition: Nauka, Moscow (1961)]. Berg, C: The cube of a normal distribution is indeterminate. Ann. Probab. 16, 910–913 (1988). Berg, C: From discrete to absolutely continuous solutions of indeterminate moment problems. Arab. J. Math. Sci. 4, 1–18 (1998). Berg, C, Chen, Y, Ismail, MEH: Small eigenvalues of large Hankel matrices: the indeterminate case. Math. Scand. 91, 67–81 (2002). Berg, C, Christensen, JPR: Density questions in the classical theory of moments. Ann. Inst. Fourier (Grenoble). 31, 99–114 (1981). Billingsley, P: Probability and Measure. 3rd edn. Wiley, New York (1995). Carleman, T: Les Fonctions Quasi-analytiques. Gauthier-Villars, Paris (1926). Chen, Y, Karagiannidis, GK, Lu, H, Cao, N: Novel approximations to the statistics of products of independent random variables and their applications in wireless communications. IEEE Trans. Veh. Tech. 61, 443–454 (2012). Chihara, TS: On indeterminate Hamburger moment problems. Pacific J. Math. 27, 475–484 (1968). Chihara, TS: A characterization and a class of distribution functions for the Stieltjes–Wigert polynomials. Canad. Math. Bull. 13, 529–532 (1970). Chow, YS, Teicher, H: Probability Theory: Independence, Interchangeability, Martingales. 3rd edn. Springer, New York (1997). Christiansen, JS: The moment problem associated with the Stieltjes–Wigert polynomials. J. Math. Anal. Appl. 277, 218–245 (2003). Fischer, H: A History of the Central Limit Theorem: From Classical to Modern Probability Theory. Springer, New York (2011). Fréchet, M, Shohat, J: A proof of the generalized second limit theorem in the theory of probability. Trans. Amer. Math. Soc. 33, 533–543 (1931). Galambos, J, Simonelli, I: Products of Random Variables: Applications to Problems of Physics and to Arithmetical Functions. Marcel Dekker, New York (2004). Garnett, JB: Bounded Analytic Functions. Springer, New York (1981).
Goffman, C, Pedrick, G: First Course in Functional Analysis. Prentice Hall of India, New Delhi (2002). Gómez, R, López-García, M: A family of heat functions as solutions of indeterminate moment problems. Int. J. Math. Math. Sci. Article ID 41526, 1–11 (2007). Graffi, S, Grecchi, V: Borel summability and indeterminacy of the Stieltjes moment problem: Application to the anharmonic oscillators. J. Math. Phys. 19, 1002–1006 (1978). Gut, A: On the moment problem. Bernoulli. 8, 407–421 (2002). Gut, A: On the moment problem for random sums. J. Appl. Probab. 40, 797–802 (2003). Hamburger, H: Über eine Erweiterung des Stieltjesschen Momentenproblems (Teil I). Math. Ann. 81, 235–319 (1920). Hamburger, H: Über eine Erweiterung des Stieltjesschen Momentenproblems (Teil II). Math. Ann. 82, 120–164 (1921), 168–187 (1921). Hardy, GH: On Stieltjes' 'problème des moments'. Messenger of Math. 46, 175–182 (1917). Hardy, GH: On Stieltjes' 'problème des moments'. Messenger of Math. 47, 81–88 (1918). [Collected Papers of G.H. Hardy, Vol. VII, pp. 75–83, 84–91 (1979) Oxford University Press, Oxford]. Heyde, CC: On a property of the lognormal distribution. J. Roy. Statist. Soc. Ser. B. 25, 392–393 (1963a). Heyde, CC: Some remarks on the moment problem (I). Quart. J. Math. Oxford. 14, 91–96 (1963b). Hörfelt, P: The moment problem for some Wiener functionals: corrections to previous proofs (with an appendix by H. L. Pedersen). J. Appl. Probab. 42, 851–860 (2005). Kjeldsen, TH: The early history of the moment problem. Historia Math. 20, 19–44 (1993). Klebanov, LB, Mkrtchyan, ST: Estimation of the closeness of distributions in terms of identical moments. In: Stability Problems for Stochastic Models, (Proc. Fourth All-Union Sem., Palanga, 1979) (Russian), Zobotarev, VM, Kalashnikov, VV (eds.), pp. 64–72, Moscow (1980). Translations: J. Soviet Math. 32, 54–60 (1986); Selected Translations in Mathematical Statistics and Probability 16, 1–10 (Estimating the proximity of distributions in terms of coinciding moments) (1985). Kleiber, C: On moment indeterminacy of the Benini income distribution. Statist. Papers. 54, 1121–1130 (2013). Kleiber, C: The generalized lognormal distribution and the Stieltjes moment problem. J. Theor. Probab. 27, 1167–1177 (2014). Koosis, P: The Logarithmic Integral I. Cambridge Univ. Press, Cambridge (1988). Kopanov, P, Stoyanov, J: Lin's condition for functions of random variables and moment determinacy of probability distributions. C. R. Bulg. Acad. Sci. 70, 611–618 (2017). Krein, M: On a problem of extrapolation of A.N. Kolmogoroff. Comptes Rendus (Doklady) l'Academie Sci l'URSS XLVI, 306–309 (1945). [Dokl. Akad. Nauk SSSR 46, 339–342, (1944)]. Lariviere, MA: A note on probability distributions with increasing generalized failure rates. Oper. Res. 54, 602–604 (2006). Leipnik, R: The lognormal distribution and strong non-uniqueness of the moment problem. Theory Probab. Appl. 26 863–865 (1981, Russian edition). SIAM version, 850–852 (1982). Lin, GD: On the moment problems. Statist. Probab. Lett. 35, 85–90 (1997). Erratum: ibid 50, 205 (2000). Lin, GD, Huang, JS: The cube of a logistic distribution is indeterminate. Austral. J. Statist. 39, 247–252 (1997). Lin, GD, Stoyanov, J: On the moment determinacy of the distribution of compound geometric sums. J. Appl. Probab. 39, 545–554 (2002). Lin, GD, Stoyanov, J: The logarithmic skew-normal distributions are moment-indeterminate. J. Appl. Probab. 46, 909–916 (2009). 
Lin, GD, Stoyanov, J: Moment determinacy of powers and products of nonnegative random variables. J. Theoret. Probab. 28, 1337–1353 (2015). Lin, GD, Stoyanov, J: On the moment determinacy of products of non-identically distributed random variables. Probab. Math. Statist. 36, 21–33 (2016). López-García, M: Characterization of solutions to the log-normal moment problem. Theory Probab. Appl. 55, 303–307 (2011). Ostrovska, S: Constructing Stieltjes classes for M-indeterminate absolutely continuous probability distributions. ALEA. Lat. Am. J. Probab. Math. Stat. 11, 253–258 (2014). Ostrovska, S: On the powers of polynomial logistic distributions. Braz. J. Probab. Stat. 30, 676–690 (2016). Ostrovska, S, Stoyanov, J: Stieltjes classes for M-indeterminate powers of inverse Gaussian distributions. Statist. Probab. Lett. 71, 165–171 (2005). Pakes, AG: Length biasing and laws equivalent to the log-normal. J. Math. Anal. Appl. 197, 825–854 (1996). Pakes, AG: Remarks on converse Carleman and Krein criteria for the classical moment problem. J. Aust. Math. Soc. 71, 81–104 (2001). Pakes, AG: Structure of Stieltjes classes of moment-equivalent probability laws. J. Math. Anal. Appl. 326, 1268–1290 (2007). Pakes, AG: On generalized stable and related laws. J. Math. Anal. Appl. 411, 201–222 (2014). Pakes, AG, Hung, W-L, Wu, J-W: Criteria for the unique determination of probability distributions by moments. Aust. N.Z. J. Statist. 43, 101–111 (2001). Pakes, AG, Khattree, R: Length-biasing, characterizations of laws and the moment problem. Austral. J. Statist. 34, 307–322 (1992). Pedersen, HL: On Krein's theorem for indeterminacy of the classical moment problem. J. Approx. Theory. 95, 90–100 (1998). Penson, KA, Blasiak, P, Duchamp, GHE, Horzela, A, Solomon, AI: On certain non-unique solutions of the Stieltjes moment problem. Discrete Math. Theor. Comput. Sci. 12, 295–306 (2010). Prohorov, YuV, Rozanov, YuA: Probability Theory. Translated by K. Krickeberg and H. Urmitzer. Springer, New York (1969). Rao, CR, Shanbhag, DN, Sapatinas, T, Rao, MB: Some properties of extreme stable laws and related infinitely divisible random variables. J. Statist. Plann. Inference. 139, 802–813 (2009). Serfling, RJ: Approximation Theorems of Mathematical Statistics. Wiley, New York (1980). Shohat, JA, Tamarkin, JD: The Problem of Moments. Amer. Math. Soc.New York (1943). Slud, EV: The moment problem for polynomial forms in normal random variables. Ann. Probab. 21, 2200–2214 (1993). Springer, MD: The Algebra of Random Variables. Wiley, New York (1979). Stieltjes, TJ: Recherches sur les fractions continues. Ann. Fac. Sci. Univ. Toulouse Math.8(J), 1–122 (1894). 9(A), 1–47 (1895). Also in: Stieltjes, T.J. Oeuvres Completes. Noordhoff, Gröningen 2, 402–566 (1918). Stirzaker, D: The Cambridge Dictionary of Probability and Its Applications. Cambr. Univ. Press, Cambridge (2015). Stoyanov, J: Inverse Gaussian distribution and the moment problem. J. Appl. Statist. Sci. 9, 61–71 (1999). Stoyanov, J: Krein condition in probabilistic moment problems. Bernoulli. 6, 939–949 (2000). Stoyanov, J: Stieltjes classes for moment-indeterminate probability distributions. J. Appl. Probab. 41A, 281–294 (2004). Stoyanov, JM: Counterexamples in Probability. 3rd edn. Dover Publications, New York (2013). [First and second edns: Chichester: Wiley, 1987 and 1997]. Stoyanov, J, Kopanov, P: Lin's condition and moment determinacy of functions of random variables. Submitted (2017). 
Stoyanov, J, Lin, GD: Hardy's condition in the moment problem for probability distributions. Theory Probab. Appl. 57, 811–820 (2012) (Russian edition). SIAM version, 699–708 (2013). Stoyanov, J, Lin, GD, DasGupta, A: Hamburger moment problem for powers and products of random variables. J. Statist. Plann. Inference. 154, 166–177 (2014). Stoyanov, J, Tolmatz, L: New Stieltjes classes involving generalized gamma distributions. Statist. Probab. Lett. 69, 213–219 (2004). Stoyanov, J, Tolmatz, L: Method for constructing Stieltjes classes for M-indeterminate probability distributions. Appl. Math. Comput. 165, 669–685 (2005). Targhetta, ML: On a family of indeterminate distributions. J. Math. Anal. Appl. 147, 477–479 (1990). Wang, J: Constructing Stieltjes classes for power-order M-indeterminate distributions. J. Appl. Probab. Statist. 7, 41–52 (2012).

Acknowledgements

The author would like to thank the Editor and two Referees for helpful comments and suggestions. In particular, one Referee pointed out the result in Lemma 4∗. The paper was presented at (1) the International Waseda Symposium, February 29 – March 3, 2016, held by Waseda University (Japan) and (2) the second International Conference on Statistical Distributions and Applications (ICOSDA), October 14–16, 2016, Niagara Falls, held by Central Michigan University (USA) and Brock University (Canada). The author thanks the organizers, (1) Professor Masanobu Taniguchi and (2) Professors Felix Famoye, Carl Lee and Ejaz Ahmed, for their kind invitations. The comments and suggestions of Professor Murad Taqqu and other audiences are also appreciated.

Competing interests

The author declares that there is no competing interest.

Author information

Gwo Dong Lin, Institute of Statistical Science, Academia Sinica, Taipei 11529, Taiwan, Republic of China. Correspondence to Gwo Dong Lin.