On This Day in Math - December 31

The Difficult Problem, Bogdanov-Belsky

The problem for mental solution, appropriate for today (normally the last day of the year), is \( \frac{10^2+11^2+12^2+13^2+14^2}{365}\)

For other great mathematicians or philosophers, he [Gauss] used the epithets magnus, or clarus, or clarissimus; for Newton alone he kept the prefix summus. ~W.W.R. Ball

Yesterday (the 365th and normally last day of the year) was the sum of two consecutive squares and also of three consecutive squares. Today is the 366th day, and it is the sum of four consecutive squares: \(366 = 8^2 + 9^2 + 10^2 + 11^2\). 365 is a palindrome in base 2 (101101101); it's 555 in base 8, and 16d in hexadecimal (base 16).

1719 When the first Astronomer Royal, John Flamsteed, died on this day (see below) he was serving as Rector of Burstow (just east of Gatwick), and had been for thirty-five years. For some reason, no marker was placed on the grave, and 170 years later it was not clear where the famous astronomer was buried. Finally, in 1888, another astronomer from Greenwich Observatory, Edwin Dunkin, searched for, and found, the burial site mentioned in his wife's will. Today there are several markers in the church at Burstow, including the one below indicating his resting place in the Chancel. Several other images of the church, and markers for Flamsteed, are at the site from which I obtained this note. *Blogs Greenwich http://blogs.greenwich.co.uk/rob-powell/the-grave-of-john-flamsteed/ Stephen Craven - http://www.geograph.org.uk/photo/2786257

1831 Gauss writes to his close friend Wilhelm Olbers regarding an essay published by Laplace: "The essay... is quite unworthy of this great geometer. I find two different, very gross blunders in it. I had always imagined that among geometers of the first rank the calculation was always only the dress in which they present that which they created not by calculation, but by meditation about the subject itself." *Carl Friedrich Gauss: Titan of Science by Guy Waldo Dunnington, Jeremy Gray, Fritz-Egbert Dohse

1915 The Mathematical Association of America was founded in Columbus, Ohio. Starting with 1045 charter members, the Association now has some 34,000 members who are interested in the improvement of mathematical instruction at the collegiate level. *VFR

1935 A patent was issued for the game of Monopoly, assigned to Parker Brothers, Inc., by Charles Darrow of Pennsylvania (No. 2,026,082). The patent titled it a "Board Game Apparatus" and described it as "intended primarily to provide a game of barter, thus involving trading and bargaining" in which "much of the interest in the game lies in trading and in striking shrewd bargains." Illustrations included with the patent showed the playing board and pieces, the cards, and the scrip money. He had invented the game on 7 Mar 1933, though it was preceded by other real-estate board games. *TIS

1961 This was the last day of the year 1961, a strobogrammatic number: if you rotate the number by 180° it still looks the same. The name seems to have been created for the Jan 1961 issue of The Mathematics Magazine by J. M. Howell of Los Angeles City College. The last day of the year is a significant date since it is the last time someone will be living in such a year for a very long time (a quick computational check of just how long appears below). *Mathematics Magazine
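As an aside of mine, here is a minimal sketch (Python; the function name and digit table are my own) of the rotation test, which also confirms how long the wait for the next strobogrammatic year is:

```python
# Digits that remain valid digits when rotated 180 degrees.
ROTATE = {"0": "0", "1": "1", "6": "9", "8": "8", "9": "6"}

def is_strobogrammatic(n: int) -> bool:
    """True if n reads the same after a 180-degree rotation."""
    s = str(n)
    rotated = "".join(ROTATE.get(d, "?") for d in reversed(s))
    return rotated == s

print(is_strobogrammatic(1961))                # True
print(next(y for y in range(1962, 10000)
           if is_strobogrammatic(y)))          # 6009
```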
The one-second insertion was made at 6:59:59 P.M. at the Naval Observatory in Washington, D.C. Just exactly when the proverbial man-in-the-street chose to insert this second was his own business, but in New York's Times Square it was done with much hoopla at midnight. *U.S. Naval Observatory's "Stargazing Notes for December 1987." *VFR

1999 Professor Andrew Wiles is knighted. The Princeton mathematician found fame in October 1994 when he succeeded in proving Fermat's Last Theorem. This was an amazing achievement that had eluded some of the greatest minds since Pierre Fermat conjured up his theorem in the 1630s. His work has received every major honour, and he had the pleasure in 1999 of seeing some of his former pupils crack another of mathematics' great puzzles: the Shimura-Taniyama-Weil conjecture. *BBC

1999 Alan Sugar, the man who founded Amstrad some 30 years ago and now runs Tottenham Hotspur football club (Sugar sold his interest in the Spurs in 2007, according to a comment from Luke Robinson, below), has been knighted. So too has Maurice Wilkes, who developed the world's first practical stored-program computer in 1949. "I'm tickled pink by the news," said Mr Sugar, whose company launched the world's first mass-market word processor built with low-cost components from the Far East. At the height of its success, Amstrad was worth £1.5bn on the FTSE-100 index. Mr Sugar eventually broke Amstrad up, spinning off Viglen Technology, its personal computer business, of which he is now chairman. Maurice Wilkes led the Cambridge University team that developed the EDSAC - Electronic Delay Storage Automatic Calculator. It was a huge contraption that could carry out just 650 instructions per second. Nevertheless, it went down in history as the first truly programmable computer. *BBC

2000 Millennium memorial puzzle at Luppitt. It is made of fine-grained granite, which is an exceptionally hard stone. It was unveiled on 31st December 2000 - just in time for the true Millennium. The puzzles, as described at the puzzle's website: The puzzles include a wordsearch concealing over 30 local placenames, a three-way anamorphic illusion, a completely new idea based on the Tinner's Rabbits, an ancient maze from a French church, a modern Railway Maze (specially designed by Professor Sir Roger Penrose), a Word Anagram, a Letter misplacement puzzle, a traditional Word square puzzle, cryptarithms, hidden mice, and other curiosities and puzzles.

1789 Benoît "Claudius" Crozet (December 31, 1789, Villefranche, France – January 29, 1864) was an educator and civil engineer. After serving in the French military, in 1816 he immigrated to the United States. He taught at the U.S. Military Academy at West Point, New York, and helped found the Virginia Military Institute at Lexington, Virginia. He was Principal Engineer for the Virginia Board of Public Works and oversaw the planning and construction of canals, turnpikes, bridges and railroads in Virginia, including the area which is now West Virginia. He became widely known as the "Pathfinder of the Blue Ridge." On June 7, 1816, in Paris, Crozet married Agathe Decamp. Late in the fall of 1816, Crozet and his bride headed for the United States. Almost immediately after arriving, Crozet began work as a professor of engineering at the U.S. Military Academy at West Point, New York. While at West Point, Crozet is credited by some as being the first to use the chalkboard as an instructional tool.
(Professor Rickey, a math historian at USMA, has written, "old records show that it was introduced at West Point by Mr. George Baron, a civilian teacher, who in the autumn of 1801 gave to Cadet Swift 'a specimen of his mode of teaching at the blackboard'".) He also designed several of the buildings at West Point. Thomas Jefferson referred to Claudius Crozet as "by far the best mathematician in the United States." He also published A Treatise on Descriptive Geometry while at West Point, a copy of which was sent to Jefferson. Jefferson's response on Nov 23, 1821 began, "I thank you, Sir, for your kind attention in sending me a copy of your valuable treatise on Descriptive geometry." He continued the message with praise for both the work and the instructor. The dining hall at the Virginia Military Institute is named in his honor. It has been affectionately nicknamed "Club Crozet" by the Cadets. *Wik & Natl. Archives

1864 Robert Grant Aitken (31 Dec 1864; 29 Oct 1951) American astronomer who specialized in the study of double stars, of which he discovered more than 3,000. He worked at the Lick Observatory from 1895 to 1935, becoming director from 1930. Aitken made systematic surveys of binary stars, measuring their positions visually. His massive New General Catalogue of Double Stars within 120 degrees of the North Pole allowed orbit determinations which increased astronomers' knowledge of stellar masses. He also measured positions of comets and planetary satellites and computed orbits. He wrote an important book on binary stars, and he lectured and wrote widely for the public. *TIS

1896 Carl Ludwig Siegel (December 31, 1896 – April 4, 1981) was a mathematician specializing in number theory and celestial mechanics. He was one of the most important mathematicians of the 20th century. Among his teachers were Max Planck and Ferdinand Georg Frobenius, whose influence made the young Siegel abandon astronomy and turn towards number theory instead. His best student was Jürgen Moser, one of the founders of KAM theory (Kolmogorov-Arnold-Moser), which lies at the foundations of chaos theory. Siegel's work on number theory, diophantine equations, and celestial mechanics in particular won him numerous honours. In 1978, he was awarded the Wolf Prize in Mathematics, one of the most prestigious in the field. Siegel's work spans analytic number theory; and his theorem on the finiteness of the integer points of curves, for genus greater than 1, is historically important as a major general result on diophantine equations, when the field was essentially undeveloped. He worked on L-functions, discovering the (presumed illusory) Siegel zero phenomenon. His work derived from the Hardy-Littlewood circle method on quadratic forms proved very influential on the later adele group theories encompassing the use of theta-functions. The Siegel modular forms are recognised as part of the moduli theory of abelian varieties. In all this work the structural implications of analytic methods show through. André Weil, without hesitation, named Siegel as the greatest mathematician of the first half of the 20th century. In the early 1970s Weil gave a series of seminars on the history of number theory prior to the 20th century, and he remarked that Siegel once told him that when the first person discovered the simplest case of Faulhaber's formula, then, in Siegel's words, "Es gefiel dem lieben Gott." (It pleased the dear Lord.)
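For reference, an aside of mine: the simplest case of Faulhaber's formula, the one in the anecdote, is the familiar closed form for the sum of the first n integers, \( \sum_{k=1}^{n} k = \frac{n(n+1)}{2} \), e.g. \( 1+2+\cdots+100 = \frac{100 \cdot 101}{2} = 5050 \); the general formula expresses \( \sum_{k=1}^{n} k^p \) as a polynomial in n of degree p+1.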
Siegel was a profound student of the history of mathematics and put his studies to good use in such works as the Riemann-Siegel formula. *Wik

1929 Jeremy Bernstein (31 Dec 1929, ) American physicist, educator, and writer widely known for the clarity of his writing for the lay reader on the major issues of modern physics. He was a staff writer for the New Yorker for over 30 years, until 1993. He has held appointments at the Institute for Advanced Study, Brookhaven National Laboratory, CERN, Oxford, the University of Islamabad, and the École Polytechnique. Bernstein has written over 50 technical papers as well as his books popularizing science, including Albert Einstein; Cranks, Quarks, and the Cosmos; and A Theory for Everything. His passion for science was launched after he entered Harvard University, thereafter combining it with a talent as a writer. *TIS

1930 Jaime Alfonso Escalante Gutiérrez (December 31, 1930 – March 30, 2010) was a Bolivian educator well known for teaching students calculus from 1974 to 1991 at Garfield High School, East Los Angeles, California. Escalante was the subject of the 1988 film Stand and Deliver, in which he is portrayed by Edward James Olmos. *Wik

1945 Leonard Max Adleman (December 31, 1945, ) is an American theoretical computer scientist and professor of computer science and molecular biology at the University of Southern California. He is known for being a co-inventor of the RSA (Rivest-Shamir-Adleman) cryptosystem in 1977, and of DNA computing. RSA is in widespread use in security applications. *Wik

1952 Vaughan Frederick Randal Jones (31 Dec 1952, ) is a New Zealand mathematician who was awarded the Fields Medal in 1990 for his study of functional analysis and knot theory. In 1984, Jones discovered a relationship between von Neumann algebras and geometric topology. As a result, he found a new polynomial invariant for knots and links in 3-space. It was a complete surprise because his invariant had been missed completely by topologists, in spite of intense activity in closely related areas during the preceding 60 years. *TIS

1610 Ludolph van Ceulen, a German mathematician who is famed for his calculation of π to 35 places. In Germany π used to be called the Ludolphine number. Because van Ceulen could not read Greek, Jan Cornets de Groot, the burgomaster of Delft and father of the jurist, scholar, statesman and diplomat Hugo Grotius, translated Archimedes' approximation to π for Van Ceulen. This proved a significant point in Van Ceulen's life, for he spent the rest of his life obtaining better approximations to π using Archimedes' method with regular polygons with many sides. *SAU He has π on his memorial stone.
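A sketch of mine of that polygon-doubling method (Python; the starting values are the half-perimeters of regular hexagons about a unit circle, and the doubling recurrences are the classical harmonic/geometric means):

```python
# Archimedes' polygon-doubling bounds for pi, the method van Ceulen
# pushed by hand to 35 decimal places; Python's decimal module lets
# us reach comparable precision.
from decimal import Decimal, getcontext

getcontext().prec = 50            # working precision, with guard digits

a = 2 * Decimal(3).sqrt()         # circumscribed hexagon half-perimeter
b = Decimal(3)                    # inscribed hexagon half-perimeter

for _ in range(60):               # each pass doubles the number of sides
    a = 2 * a * b / (a + b)       # harmonic mean -> circumscribed 2n-gon
    b = (a * b).sqrt()            # geometric mean -> inscribed 2n-gon

print(b)  # lower bound on pi, good to well over 35 digits
```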
1679 Giovanni Alfonso Borelli (28 Jan 1608; 31 Dec 1679) Italian mathematician, physiologist and physicist sometimes called the "father of biomechanics." He was the first to apply the laws of mechanics to the muscular action of the human body. In De motu animalium (Concerning Animal Motion, 1680), he correctly described the skeleton and muscles as a system of levers, and explained the mechanism of bird flight. He calculated the forces required for equilibrium in various joints of the body well before the mechanics of Isaac Newton. In 1649, he published a work on malignant fevers. He repudiated astrological causes of diseases and believed in chemical cures. In 1658, he published Euclidus restitutus. He made anatomical dissections, drew a diver's rebreather, investigated volcanoes, was first to suggest a parabolic path for comets, and considered that Jupiter had an attractive influence on its moons. *TIS

1719 John Flamsteed (19 Aug 1646; 31 Dec 1719) English astronomer who established the Greenwich Observatory. Science historian/blogger Thony Christie writes: "Observational astronomy only produced three significant star catalogues in the two thousand years leading up to the 18th century. The first, the Greek catalogue from Hipparchus and Ptolemaeus published by Ptolemaeus in the 2nd century CE, contained just over 1000 stars mapped with an accuracy that was astounding for the conditions under which it was produced. The second, containing somewhat more than 700 stars plus another 300 borrowed from the Ptolemaeus catalogue, was produced by the Danish astronomer Tycho Brahe in the last quarter of the 16th century, with an accuracy many factors better than his Greek predecessors. Both of these catalogues were produced with naked-eye observations. The first catalogue to be produced using telescopic sights on the measuring instruments was that of John Flamsteed, published posthumously in 1725, which contains more than 3000 stars measured to a much higher degree of accuracy than that of Tycho." He then goes on to correct some misconceptions about Flamsteed's life that are commonly repeated (he did NOT take part in talking Charles II into creating the observatory) and gives a nice description of a complex man. *Renaissance Mathematicus

1894 Thomas Jan Stieltjes, who did pioneering work on the integral. *VFR Thomas Stieltjes worked on almost all branches of analysis, continued fractions and number theory. *SAU

1913 Seth Carlo Chandler, Jr. (17 Sep 1846, 31 Dec 1913) was an American astronomer best known for his discovery (1884-85) of the Chandler Wobble, a complex movement in the Earth's axis of rotation (now referred to as polar motion) that causes latitude to vary with a period of 14 months. His interests were much wider than this single subject, however, and he made substantial contributions to such diverse areas of astronomy as cataloging and monitoring variable stars, the independent discovery of the nova T Coronae, improving the estimate of the constant of aberration, and computing the orbital parameters of minor planets and comets. His publications totaled more than 200. *TIS

1962 Charles G. Darwin was the grandson of the famous biologist and graduated from Cambridge. He lectured on physics at Manchester and, after service in World War I and a period back at Cambridge, he became Professor of Physics at Edinburgh. He left eventually to become head of a Cambridge college. He worked in quantum mechanics and had controversial views on eugenics. *SAU

1982 Kurt Otto Friedrichs (September 28, 1901 – December 31, 1982) was a noted German-American mathematician. He was the co-founder of the Courant Institute at New York University and a recipient of the National Medal of Science. *Wik

It requires a very unusual mind to undertake the analysis of the obvious. ~Alfred North Whitehead

The 365th (and usually last) day of the year; 365 is a centered square number, and thus the sum of two consecutive squares (\(13^2 + 14^2\)) and also one more than four times a triangular number. 365 is the sum of two squares in two ways: \(13^2 + 14^2\) and \(19^2 + 2^2\). *Lord Karl Voldevive

There are 10 days during the year that are the sum of three consecutive squares. This is the last one.
\(365 = 10^2+11^2+12^2\) *jim wilder@wilderlab

365 is the smallest number that can be written as a sum of consecutive squares in more than one way: \(365 = 10^2 + 11^2 + 12^2 = 13^2 + 14^2\).

1610 Galileo, in answer to a question from Father Christoph Clavius, SJ, about why his large aperture was partly covered, answered that he did this for two reasons: The first is to make it possible to work it more accurately, because a large surface is more easily kept in the proper shape than a smaller one. The other reason is that if one wants to see a larger space in one glance, the glass can be uncovered, but it is then necessary to put a less acute glass near the eye and shorten the tube, otherwise the objects will appear very fuzzy. *Albert Van Helden, Galileo and the Telescope; Origins of the Telescope - Royal Netherlands Academy of Arts and Sciences, 2010

In 1873, the American Metrological Society was formed in New York City to improve systems of weights, measures and money. Its activities eventually extended with a committee considering units of force and energy, and another concerned with the adoption of Standard Time for the U.S. On 30 Dec 1884, at the meeting of the American Metrological Society at Columbia College in New York City, Charles S. Peirce read a paper on the determination of gravity. He also participated in a discussion of the adequacy of the standards of weight and measure in the United States and pointed out some of the deficiencies in the current system. As a result of his revelations, the Society passed a resolution recommending the appointment of a committee to advise Congress on the need for establishing an efficient bureau of standards. *TIS

1881 The "Four Fours" problem was first published in Knowledge, a magazine of popular science edited by the astronomer Richard Proctor. The problem is to express whole numbers using exactly four fours and various arithmetical signs. For example, 52 = 44 + 4 + 4. This can be done for the integers from 1 to 112, but 113 is a problem. Variations of the game allow use of factorials, square roots, decimal points (such as .4), etc. (A short brute-force search sketch appears a few entries below.) And if you are interested, before there was a four-fours problem there was a three-threes problem.

1902 Leonard Eugene Dickson married Susan Davis. Later he often said of his honeymoon: "It was a great success, except that I only got two research papers written." In all he published 18 books and hundreds of articles. *VFR

1915 A two-day meeting in Columbus, Ohio began to found a new mathematical organization. The new organization would be called the Mathematical Association of America, and it took over the publishing of the American Mathematical Monthly, which had been in operation for three years. The first president was Professor E. R. Hedrick of the University of Missouri. The Earle Raymond Hedrick lectures were established by the Mathematical Association of America in his honor.
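Here is the promised four-fours sketch (Python, my own construction): a brute-force search using only +, -, *, / and digit concatenation. Factorials, roots and .4 are deliberately left out, which is why 113, and indeed much of 1-112, needs the fancier operations.

```python
# Brute-force search for the "four fours" puzzle with +, -, *, /
# and digit concatenation (44, 444, 4444). Exact arithmetic via
# Fraction avoids floating-point surprises.
from fractions import Fraction
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def values(k: int) -> frozenset:
    """All values expressible with exactly k fours."""
    vals = {Fraction(int("4" * k))}          # concatenation: 4, 44, ...
    for split in range(1, k):
        for a, b in product(values(split), values(k - split)):
            vals.update({a + b, a - b, a * b})
            if b != 0:
                vals.add(a / b)
    return frozenset(vals)

ints = {int(v) for v in values(4) if v.denominator == 1}
print(sorted(n for n in range(1, 21) if n in ints))  # reachable small numbers
print(113 in ints)                                   # False: needs more operators
```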
In 1924, Edwin Hubble announced the existence of another galactic system in addition to the Milky Way. He had found that at least one "island universe," or galaxy of stars, lies outside our own Milky Way. Until then, scientists were not certain whether certain fuzzy clouds of light called "nebulae" that had been seen with telescopes were small clusters of clouds within the Milky Way or separate galaxies. Hubble measured the distance to the Andromeda nebula and showed it to be a hundred thousand times as far away as the nearest stars. This proved it was a separate galaxy, as large as our own Milky Way, but very far away. More galaxies have since been found, some of spiral form like the Milky Way, others spheroidal, others without the spiral arms, or of irregular shape.

1952 Harvard mathematician Andrew Gleason received the Newcomb Cleveland Prize, a $1000 financial award, for his contributions toward the solution of Hilbert's Fifth Problem about Lie groups.

In 1982, a second full moon of the month was visible. Known as a "blue moon," the name does not refer to its color but to a rare event, one that gave rise to the expression "once in a blue moon." This blue moon was more special still, as a total lunar eclipse also occurred (U.S.). Although there were 41 blue moons in the twentieth century, this was one of only four during an eclipse of the moon, and the only total eclipse of a blue moon in the twentieth century. A blue moon happens about every 2.7 years because of a disparity between our calendar and the lunar cycle. The lunar cycle, the time it takes for the moon to complete its cycle of phases, is 29 days, 12 hours, and 44 minutes; twelve such months fall about 11 days short of a calendar year, and \(29.5/11 \approx 2.7\). *TIS The next blue moon will occur on September 30 of 2012.

1985 IBM announced Version 3.2 of its PC-DOS operating system, the version of DOS used on the IBM PC. The system required 128KB RAM and was available on either one 720KB disk or two 5¼" disks. DOS has remained in use since the introduction of the IBM PC in 1981, with PC DOS 2000 being the latest release, in 1998. *CHM

1850 John Milne (30 Dec 1850; 30 Jul 1913) English seismologist who invented the horizontal pendulum seismograph (1894) and was one of the European scientists who helped organize the seismic survey of Japan in the last half of the 1800s. Milne conducted experiments on the propagation of elastic waves from artificial sources, and on building construction. He spent 20 years in Japan, until 1895, when a fire destroyed his property and he returned home to the Isle of Wight. He set up a new laboratory and persuaded the Royal Society to fund initially 20 earthquake observatories around the world, equipped with his seismographs. By 1900, Milne seismographs were established on all of the inhabited continents and he was recognized as the world's leading seismologist. He died of Bright's disease. *TIS

1897 Stanisław Saks (December 30, 1897 – November 23, 1942) was a Polish mathematician and university tutor, known primarily for his membership in the Scottish Café circle, an extensive monograph on the theory of integrals, and his works on measure theory and the Vitali-Hahn-Saks theorem. *Wik

1931 Sir John (Theodore) Houghton (30 Dec 1931, ) Welsh meteorologist who began in the late 1960s drawing attention to the buildup of carbon dioxide in the earth's atmosphere and its result of global warming, now known as the greenhouse effect. As director-general (1983) of the British Meteorological Office, he began tracking changing climate patterns. In 1990, he co-chaired a team of scientists working for the United Nations that produced the first comprehensive report on the science of climate change. This led to the 1997 U.N. Conference on Climate Change in Kyoto, Japan. The Kyoto Protocol that resulted there was a treaty among industrialized and developed nations to combat global warming by voluntarily adhering to progressively stiffening emissions-reduction standards. *TIS
1934 John N. Bahcall (30 Dec 1934, ) American astrophysicist who pioneered the development of neutrino astrophysics in the early 1960s. He theorized that neutrinos (subatomic particles that have no charge and exceedingly weak interaction with matter) can be used to understand how stars shine. They are emitted by the sun and stars during the fusion energy creation process, and most are able to pass through the Earth without being stopped. He calculated the expected output of neutrinos from the sun, which created an experimental challenge to explain the unexpected result. He won the National Medal of Science (1998) for both his contributions to the planning and development of the Hubble Space Telescope and his pioneering research in neutrino astrophysics. *TIS

1691 Robert Boyle (25 Jan 1627, 30 Dec 1691) Anglo-Irish chemist and natural philosopher noted for his pioneering experiments on the properties of gases and his espousal of a corpuscular view of matter that was a forerunner of the modern theory of chemical elements. He was a founding member of the Royal Society of London. From 1656-68, he resided at Oxford, where Robert Hooke helped him to construct the air pump. With this invention, Boyle demonstrated the physical characteristics of air and the necessity of air for combustion, respiration, and the transmission of sound, published in New Experiments Physico-Mechanical, Touching the Spring of the Air and its Effects (1660). In 1661, he reported to the Royal Society on the relationship of the volume of gases and pressure (Boyle's Law). *TIS

1695 Sir Samuel Morland (born 1625, 30 Dec 1695) English mathematician and inventor of mechanical calculators. His first machine added and subtracted English money using eight dials that were moved by a simple stylus. Another could multiply and divide using 30 discs with numbers marked around the edge - circular versions of Napier's linear bones. Five more discs handled finding square and cube roots. His third machine made trigonometric calculations. Morland built a speaking trumpet (1671) he claimed would allow a conversation to be conducted over a distance of 3/4 mile. By 1675, he had developed various pumps for domestic, marine and industrial applications, such as wells, draining ponds or mines, and fire fighting. He also designed iron stoves for marine use, and improved barometers. *TIS

1883 John Henry Dallmeyer (6 Sep 1830, 30 Dec 1883) German-born British inventor and manufacturer of lenses and telescopes. He introduced improvements in both photographic portrait and landscape lenses, in object glasses for the microscope, and in condensers for the optical lantern. Dallmeyer made photoheliographs (telescopes adapted for photographing the Sun) for Harvard Observatory (1864), and the British government (1873). He introduced the "rapid rectilinear" (1866), a lens system composed of two matching doublet lenses, symmetrically placed around the focal aperture to remove many of the aberrations present in more simple constructions. He died on board a ship at sea off New Zealand. *TIS

1932 Eliakim Hastings Moore (January 26, 1862 – December 30, 1932) was an American mathematician. He discovered mathematics through a summer job at the Cincinnati Observatory while in high school. When the University of Chicago opened its doors in 1892, Moore was the first head of its mathematics department, a position he retained until 1931. His first two colleagues were Bolza and Maschke.
The resulting department was the second research-oriented mathematics department in American history, after Johns Hopkins University. Moore first worked in abstract algebra, proving in 1893 the classification of the structure of finite fields (also called Galois fields). Around 1900, he began working on the foundations of geometry. He reformulated Hilbert's axioms for geometry so that points were the only primitive notion, thus turning Hilbert's primitive lines and planes into defined notions. In 1902, he further showed that one of Hilbert's axioms for geometry was redundant. Independently, the twenty-year-old R. L. Moore (no relation) also proved this, but in a more elegant fashion than E. H. Moore used. When E. H. Moore heard of the feat, he arranged for a scholarship that would allow R. L. Moore to study for a doctorate at Chicago. E. H. Moore's work on axiom systems is considered one of the starting points for metamathematics and model theory. After 1906, he turned to the foundations of analysis. The concept of closure operator first appeared in his 1910 Introduction to a Form of General Analysis. He also wrote on algebraic geometry, number theory, and integral equations. At Chicago, Moore supervised 31 doctoral dissertations, including those of George Birkhoff, Leonard Dickson, Robert Lee Moore (no relation), and Oswald Veblen. Birkhoff and Veblen went on to forge and lead the first-rate departments at Harvard and Princeton, respectively. Dickson became the first great American algebraist and number theorist. Robert Moore founded American topology. According to the Mathematics Genealogy Project, as of January 2011, E. H. Moore had over 14,900 known "descendants." Moore convinced the New York Mathematical Society to change its name to the American Mathematical Society, whose Chicago branch he led. He presided over the AMS, 1901-02, and edited the Transactions of the American Mathematical Society, 1899-1907. He was elected to the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society. The American Mathematical Society established a prize in his honor in 2002. *Wik

1947 Alfred North Whitehead (15 Feb 1861, 30 Dec 1947) English mathematician and philosopher who worked in logic, physics, philosophy of science and metaphysics. He is best known for his work with Bertrand Russell on probably one of the most famous books of the century, Principia Mathematica (1910-13), which sought to demonstrate that logic is the basis for all mathematics. In physics (1910-24) his best-known work was a theory of gravity that competed with Einstein's general relativity for many decades. In his later life, from 1924 onward at Harvard, he worked on more general issues in philosophy rather than mathematics, including the development of a comprehensive metaphysical system which has come to be known as process philosophy. *TIS

1956 Heinrich Scholz (17 December 1884 in Berlin – 30 December 1956 in Münster, Westphalia) was a German logician, philosopher and theologian. *Wik

1982 Philip Hall (11 April 1904 in Hampstead, London, England - 30 Dec 1982 in Cambridge, Cambridgeshire, England) Hall was the main impetus behind the British school of group theory, and the growth of group theory to be one of the major mathematical topics of the 20th century was largely due to him. *SAU

Folium of Descartes, *Wiki

Die ganze Zahl schuf der liebe Gott, alles Übrige ist Menschenwerk. God made the integers, all else is the work of man.
~Leopold Kronecker

The 364th day of the year; 364 is the total number of gifts in the Twelve Days of Christmas song: 1 + (2+1) + (3+2+1) + ..., a sum of triangular numbers. The sum of the first n triangular numbers is \( \binom{n+2}{3} \); for n = 12 this gives \( \binom{14}{3} = 364 \).

If you put a standard 8x8 chessboard on each face of a cube, there would be 364* (below) squares. Futility Closet included this note on such a cube: "British puzzle expert Henry Dudeney once set himself the task of devising a complete knight's tour of a cube each of whose sides is a chessboard. He came up with this: If you cut out the figure, fold it into a cube and fasten it using the tabs provided, you'll have a map of the knight's path. It can start anywhere and make its way around the whole cube, visiting each of the 364 squares once and returning to its starting point." (*BTW, I've done the arithmetic on this, and that has to be 384 squares, but I didn't notice the discrepancy at first, so it's still here.)

The number of primes less than 364 is 3*6*4 = 72 (does that ever work again?)

1566 A part of Tycho Brahe's nose was cut off in a duel with another Danish nobleman. The dispute was over a point of mathematics. This he replaced with a prosthesis generally stated to be of silver and gold but containing a high copper content. *VFR On December 10, 1566, Tycho and the Danish blue blood Manderup Parsbjerg were guests at an engagement party at Prof. Bachmeister's in Rostock. The party included a ball, but the festive environment did not keep the two men from starting an argument that went on even over the Christmas period. On December 29, they finished the matter with a rapier duel. During the duel, which started at 7 p.m. in total darkness, a large portion of the nose of Brahe was cut off by his opponent. It was the most famous cut in science, if not the unkindest. *Neatorama

1692 Huygens, in a letter to L'Hospital, gave the first complete sketch of the folium of Descartes. Although the curve was first discussed 23 August 1638, no complete sketch had previously been given, due to a reluctance to use negative numbers as coordinates. *VFR

1763 Nevil Maskelyne wrote his brother Edmund, reporting his safe arrival on 7 November after "an agreeable passage of 6 weeks". He noted that he had been "very sufficiently employed in making the observations recommended to me by the Commissioners of Longitude" and that it was at times "rather too fatiguing". The Princess Louise sailed for Barbados on 23 September. During the voyage Maskelyne and Charles Green took many lunar-distance observations (with Maskelyne later claiming that his final observation was within half a degree of the truth) and struggled a couple of times with the marine chair. Maskelyne's conclusion was that the Jupiter's-satellites method of finding longitude would simply never work at sea because the telescope magnification required was far too high for use on a moving ship. *Board of Longitude project, Greenwich

1746 Euler writes to praise d'Alembert on his proof of the Fundamental Theorem of Algebra, but disagrees with his idea that log(-x) = log(x). Euler and d'Alembert's correspondence had begun on August 3, 1746, but several letters between these two, including the one in which d'Alembert suggests that log(-x) = log(x), have been lost. *Robert E. Bradley, Ed Sandifer; Leonhard Euler: Life, Work and Legacy
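A note of mine for context: Euler settled this dispute a few years later by showing the complex logarithm is multivalued. For positive x, \( \log(-x) = \log x + (2k+1)\pi i \) for integer k, so log(-x) and log(x) differ by an odd multiple of \( \pi i \) rather than being equal.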
1790 Obituary for Thomas "Tom" Fuller in the Columbian Centinel, Boston, Massachusetts. His mathematical ability and its origin became a dueling point between abolitionists and those supporting slavery. "Died- Negro Tom, the famous African Calculator, aged 80 years. He was the property of Mrs. Elizabeth Cox of Alexandria. Tom was a very black man. He was brought to this country at the age of 14, and was sold as a slave.... This man was a prodigy. Though he could never read or write, he had perfectly acquired the art of enumeration.... He could multiply seven into itself, that product by seven, and the products, so produced, by seven, for seven times. He could give the number of months, days, weeks, hours, minutes, and seconds in any period of time that any person chose to mention, allowing in his calculation for all leap years that happened in the time; he would give the number of poles, yards, feet, inches, and barley-corns in any distance, say the diameter of the earth's orbit; and in every calculation he would produce the true answer in less time than ninety-nine men out of a hundred would produce with their pens. And, what was, perhaps, more extraordinary, though interrupted in the progress of his calculation, and engaged in discourse necessary for him to begin again, but he would ... cast up plots of land. He took great notice of the lines of land which he had seen surveyed. He drew just conclusions from facts; surprisingly so, for his opportunities. Had his [Thomas Fuller] opportunity been equal to those of thousands of his fellow-men ... even a NEWTON himself, need not have been ashamed to acknowledge him a Brother in Science." *Univ. of Buffalo Math Dept

In 1927, Krakatoa began a new volcanic eruption on the seafloor along the same line as the cones of previous activity. By 26 Jan 1928, a growing cone had reached sea level and formed a small island called Anak Krakatoa (Child of Krakatoa). Sporadic activity continued until, by 1973, the island had reached a height of 622 ft above sea level. It was still in eruption in the early 1980s. The volcano Krakatoa is on Pulau (island) Rakata in the Sunda Strait between Java and Sumatra, Indonesia. It had been quiet since its previous catastrophic eruption of 1883. That threw pumice 33 miles high, and 36,380 people were killed either by the ash fall or by the resulting tidal wave. The only earlier known eruption was in 1680, and was only moderate. *TIS

1939 Shockley Makes Historic Notebook Entry. William Shockley records in his laboratory notebook that it should be possible to replace vacuum tubes with semiconductors. Eight years later, he, Walter Brattain and John Bardeen at AT&T Bell Laboratories successfully tested the point-contact transistor. Shockley developed much of the theory behind transistor action, and soon postulated the junction transistor, a much more reliable device. It took about ten years after the 1947 discovery before transistors replaced vacuum tubes in computer design, as manufacturers learned to make them reliable and a new generation of engineers learned how to use them. *CHM

1947 George Dantzig announced his discovery of the simplex method at the joint annual meeting of the American Statistical Association and the Institute of Mathematical Statistics. The lecture was poorly attended and the result attracted no interest. *Robert Dorfman, "The discovery of linear programming," Annals of the History of Computing, 6(1984), 283-295, esp. 292.
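To make the idea concrete, here is a tiny linear program of the kind the simplex method solves, sketched by me in Python with SciPy (whose default solver is a modern LP code rather than Dantzig's original tableau simplex, but the problem form is exactly his):

```python
# Solve: maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so we negate the objective.
from scipy.optimize import linprog

res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal vertex [4, 0] and maximized value 12
```

The optimum sits at a vertex of the feasible polygon, which is the geometric fact Dantzig's method exploits by walking from vertex to vertex.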
1979 Edward Lorenz presents a paper at the 139th Annual Meeting of the American Association for the Advancement of Science with the title, "Predictability: Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" *TIS According to Lorenz, upon his failing to provide a title for a talk he was to present at the meeting, Philip Merilees concocted the title. The idea that one butterfly could have a far-reaching ripple effect on subsequent events seems first to have appeared in a 1952 short story by Ray Bradbury about time travel. It seems that Merilees was not familiar with Bradbury's story. *Wik Found this cartoon @NewYorker

1256 Birthdate of Ibn Al-Banna, who studied the magic properties of numbers and letters. *VFR He was an Islamic mathematician who wrote a large number of works, including an introduction to Euclid's Elements, an algebra text and various works on astronomy. *SAU

1796 Johann Christian Poggendorff (29 December 1796 – 24 January 1877) was a German physicist and science historian born in Hamburg. By far the greater and more important part of his work related to electricity and magnetism. Poggendorff is known for his electrostatic motor, which is analogous to Wilhelm Holtz's electrostatic machine. In 1841 he described the use of the potentiometer for measurement of electrical potentials without current draw. Even at this early period he had conceived the idea of founding a physical and chemical scientific journal, and the realization of this plan was hastened by the sudden death of Ludwig Wilhelm Gilbert, the editor of Gilbert's Annalen der Physik, in 1824. Poggendorff immediately put himself in communication with the publisher, Barth of Leipzig. He became editor of Annalen der Physik und Chemie, which was to be a continuation of Gilbert's Annalen on a somewhat extended plan. Poggendorff was admirably qualified for the post, and edited the journal for 52 years, until 1876. In 1826, Poggendorff developed the mirror galvanometer, a device for detecting electric currents. He had an extraordinary memory, well stored with scientific knowledge, both modern and historical, a cool and impartial judgment, and a strong preference for facts as against theory of the speculative kind. He was thus able to throw himself into the spirit of modern experimental science. He possessed in abundant measure the German virtue of orderliness in the arrangement of knowledge and in the conduct of business. Further, he had an engaging geniality of manner and much tact in dealing with men. These qualities soon made Poggendorff's Annalen (abbreviation: Pogg. Ann.) the foremost scientific journal in Europe. In the course of his fifty-two years' editorship of the Annalen, Poggendorff could not fail to acquire an unusual acquaintance with the labors of modern men of science. This knowledge, joined to what he had gathered by historical reading of equally unusual extent, he carefully digested and gave to the world in his Biographisch-literarisches Handwörterbuch zur Geschichte der exacten Wissenschaften, containing notices of the lives and labors of mathematicians, astronomers, physicists, and chemists of all peoples and all ages. This work contains an astounding collection of facts invaluable to the scientific biographer and historian. The first two volumes were published in 1863; after his death a third volume appeared in 1898, covering the period 1858-1883, and a fourth in 1904, coming down to the beginning of the 20th century.
His literary and scientific reputation speedily brought him honorable recognition. In 1830 he was made royal professor, in 1838 Hon. Ph.D. and extraordinary professor in the University of Berlin, and in 1839 member of the Berlin Academy of Sciences. In 1845, he was elected a foreign member of the Royal Swedish Academy of Sciences. Many offers of ordinary professorships were made to him, but he declined them all, devoting himself to his duties as editor of the Annalen, and to the pursuit of his scientific researches. He died at Berlin on 24 January 1877. The Poggendorff illusion is an optical illusion that involves the brain's perception of the interaction between diagonal lines and horizontal and vertical edges. It is named after Poggendorff, who discovered it in the drawing of Johann Karl Friedrich Zöllner, in which Zöllner showed the Zöllner illusion in 1860. In the picture to the right, a straight black line is obscured by a dark gray rectangle. The black line appears disjointed, although it is in fact straight; the second picture illustrates this fact. *Wik

1856 Birth of Thomas Jan Stieltjes, who did pioneering work on the integral. *VFR Thomas Stieltjes worked on almost all branches of analysis, continued fractions and number theory. *SAU

1861 Kurt Hensel (29 Dec 1861 in Königsberg, Prussia (now Kaliningrad, Russia) - 1 June 1941 in Marburg, Germany) invented the p-adic numbers, an algebraic theory which has proved important in later applications. From 1901 Hensel was editor of the prestigious and influential Crelle's Journal. *SAU

1905 Henri-Gaston Busignies (29 Dec 1905; 20 Jun 1981) French-born American electronics engineer whose invention (1936) of high-frequency direction finders (HF/DF, or "Huff Duff") permitted the U.S. Navy during World War II to detect enemy transmissions and quickly pinpoint the direction from which a radio transmission was coming. Busignies invented the radiocompass (1926) while still a student at Jules Ferry College in Versailles, France. In 1934, he started developing the direction finder based on his earlier radiocompass. Busignies also developed the moving target indicator for wartime radar. It scrubbed off the radar screen every echo from stationary objects and left only echoes from moving objects, such as aircraft. *TIS

1911 (Emil) Klaus (Julius) Fuchs (29 Dec 1911; 28 Jan 1988) was a German-born physicist who was convicted as a spy on 1 Mar 1950 for passing nuclear research secrets to Russia. He fled from Nazi Germany to Britain. He was interned on the outbreak of WW II, but Prof. Max Born intervened on his behalf. Fuchs was released in 1942, naturalized in 1942, and joined the British atomic bomb research project. From 1943 he worked on the atom bomb with the Manhattan Project at Los Alamos, U.S. By 1945, he was sending secrets to Russia. In 1946, he became head of theoretical physics at Harwell, UK. He was caught, confessed, tried, and imprisoned for nine years of a 14-year sentence; he was released on 23 Jun 1959, and moved to East Germany, where he resumed nuclear research until 1979. *TIS

1944 Joseph W. Dauben (born 29 December 1944, Santa Monica) is a Herbert H. Lehman Distinguished Professor of History at the Graduate Center of the City University of New York. He obtained his Ph.D. from Harvard University. His fields of expertise are history of science, history of mathematics, the scientific revolution, sociology of science, intellectual history, 17th-18th centuries, history of Chinese science, and the history of botany.
His book Abraham Robinson was reviewed positively by Moshé Machover, but he noted that it avoids discussing any of Robinson's negative aspects, and "in this respect [the book] borders on the hagiographic, painting a portrait without warts." Dauben was a 1980 Guggenheim Fellow and is a Fellow of the American Association for the Advancement of Science, and a Fellow of the New York Academy of Sciences (since 1982). Dauben is an elected member (1991) of the International Academy of the History of Science and an elected foreign member (2001) of the German Academy of Sciences Leopoldina. He delivered an invited lecture at the 1998 International Congress of Mathematicians in Berlin on Karl Marx's mathematical work. *Wik

1720 Maria Winckelmann (Maria Margarethe Winckelmann Kirch) (25 Feb 1670 in Panitzsch, near Leipzig, Germany - 29 Dec 1720 in Berlin, Germany) was a German astronomer who helped her husband with his observations. She was the first woman to discover a comet. *SAU

1731 Brook Taylor (18 Aug 1685, 29 Dec 1731) British mathematician, best known for the Taylor series, a method for expanding functions into infinite series. In 1708, Taylor produced a solution to the problem of the centre of oscillation. His Methodus incrementorum directa et inversa (1715; "Direct and Indirect Methods of Incrementation") introduced what is now called the calculus of finite differences. Using this, he was the first to express mathematically the movement of a vibrating string on the basis of mechanical principles. Methodus also contained Taylor's theorem, later recognized (1772) by Lagrange as the basis of differential calculus. A gifted artist, Taylor also wrote on basic principles of perspective (1715), containing the first general treatment of the principle of vanishing points. *TIS
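For reference (my addition), the series that bears his name expands a suitably smooth function about a point a as

\( f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n = f(a) + f'(a)(x-a) + \frac{f''(a)}{2}(x-a)^2 + \cdots \)

valid within the radius of convergence.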
1737 Joseph Saurin (1659 at Courtaison – December 29, 1737 at Paris) was a French mathematician and a converted Protestant minister. He was the first to show how the tangents at the multiple points of curves could be determined by mathematical analysis. He was accused in 1712 by Jean-Baptiste Rousseau of being the actual author of defamatory verses that gossip had attributed to Rousseau. *Wik

1891 Leopold Kronecker (7 Dec 1823, 29 Dec 1891) died of a bronchial illness in Berlin, in his 69th year. Kronecker's primary contributions were in the theory of equations. *VFR A German mathematician who worked to unify arithmetic, algebra and analysis, with a particular interest in elliptic functions, algebraic equations, theory of numbers, theory of determinants and theory of simple and multiple integrals. However, the topics he studied were restricted by the fact that he believed in the reduction of all mathematics to arguments involving only the integers and a finite number of steps. He believed that mathematics should deal only with finite numbers and with a finite number of operations. He was the first to doubt the significance of non-constructive existence proofs, and believed that transcendental numbers did not exist. The Kronecker delta function is named in his honour. *TIS

1941 William James Macdonald (1851 in Huntly, Aberdeenshire, Scotland - 29 Dec 1941 in Edinburgh, Scotland) graduated from the University of St Andrews. He taught at Madras College St Andrews, at Merchiston Castle School and at Daniel Stewart's College in Edinburgh. He was a pioneer of the introduction of modern geometry to the mathematical curriculum. He was a founder member of the EMS and became the sixth President in 1887. *SAU

1941 Tullio Levi-Civita (29 Mar 1873, 29 Dec 1941) Italian mathematician who was one of the founders of absolute differential calculus (tensor analysis), which had applications to the theory of relativity. In 1887, he published a famous paper in which he developed the calculus of tensors. In 1900 he published, jointly with Ricci, the theory of tensors, Méthodes de calcul différentiel absolu et leurs applications, in a form which was used by Einstein 15 years later. Weyl also used Levi-Civita's ideas to produce a unified theory of gravitation and electromagnetism. In addition to the important contributions his work made in the theory of relativity, Levi-Civita produced a series of papers treating elegantly the problem of a static gravitational field. *TIS

1989 Adrien Albert (19 November 1907, Sydney - 29 December 1989, Canberra) was a leading authority in the development of medicinal chemistry in Australia. Albert also authored many important books on chemistry, including one on selective toxicity. He was awarded a BSc with first class honours and the University Medal in 1932 at the University of Sydney. He gained a PhD in 1937 and a DSc in 1947 from the University of London. His appointments included Lecturer at the University of Sydney (1938-1947), advisor to the Medical Directorate of the Australian Army (1942-1947), research at the Wellcome Research Institute in London (1947-1948) and, in 1948, the Foundation Chair of Medical Chemistry in the John Curtin School of Medical Research at the Australian National University in Canberra, where he established the Department of Medical Chemistry. He was a Fellow of the Australian Academy of Science. He was the author of Selective Toxicity: The Physico-Chemical Basis of Therapy, first published by Chapman and Hall in 1951. The Adrien Albert Laboratory of Medicinal Chemistry at the University of Sydney was established in his honour in 1989. His bequest funds the Adrien Albert Lectureship, awarded every two years by the Royal Society of Chemistry. *Wik

1989 Hermann (Julius) Oberth (25 Jun 1894, 29 Dec 1989) was a German scientist who was one of three founders of space flight (with Tsiolkovsky and Goddard). After injury in WWI, he drafted a proposal for a long-range, liquid-propellant rocket, which the War Ministry dismissed as fanciful. Even his Ph.D. dissertation on his rocket design was rejected by the University of Heidelberg. When he published it as Die Rakete zu den Planetenräumen (1923; "The Rocket into Interplanetary Space"), he gained recognition for its mathematical analysis of the rocket speed that would allow it to escape Earth's gravitational pull. He received a Romanian patent in 1931 for a liquid-propellant rocket design. His first such rocket was launched 7 May 1931, near Berlin. *TIS

Shadow Family in Cowtown

Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. ~John von Neumann
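Von Neumann was poking fun partly at his own "middle-square" generator; here is a minimal sketch of it (Python, my own reconstruction), which squares the seed and keeps the middle digits:

```python
def middle_square(seed: int, n: int, digits: int = 4):
    """Yield n pseudo-random numbers via von Neumann's middle-square method."""
    x = seed
    for _ in range(n):
        sq = str(x * x).zfill(2 * digits)    # pad the square to 2*digits
        mid = len(sq) // 2 - digits // 2
        x = int(sq[mid:mid + digits])        # keep the middle digits
        yield x

print(list(middle_square(1234, 8)))
# [5227, 3215, 3362, 3030, 1809, 2724, 4201, 6484]
```

Sequences like this quickly fall into short cycles or collapse to zero, which is part of why the method, and the quip, are remembered.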
The 363rd day of the year; 363 is the sum of nine consecutive primes and is also the sum of 5 consecutive powers of three (\(3 + 9 + 27 + 81 + 243 = 363\)). It is the last palindrome of the year. 363 is the numerator of the sum of the reciprocals of the first seven integers, \( \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\frac{1}{7}= \frac{363}{140}\)

1612 Galileo observed Neptune, but did not recognize it as a planet. Galileo's drawings show that he first observed Neptune on December 28, 1612, and again on January 27, 1613. On both occasions, Galileo mistook Neptune for a fixed star when it appeared very close - in conjunction - to Jupiter in the night sky; hence, he is not credited with Neptune's discovery. (The official discovery is usually cited as September 23, 1846; Neptune was discovered within 1° of where Le Verrier had predicted it to be.) During the period of his first observation in December 1612, Neptune was stationary in the sky because it had just turned retrograde that very day. This apparent backward motion is created when the orbit of the Earth takes it past an outer planet. Since Neptune was only beginning its yearly retrograde cycle, the motion of the planet was far too slight to be detected with Galileo's small telescope. *Wik

1893 Simon Newcomb gives a speech to the New York Mathematical Society with comments on the fourth dimension: "It is a perfectly legitimate exercise .... if we should not stop at three dimensions in geometry, but construct one for space having four... and there is room for an indefinite number of universes". He also called his speculations on the fourth dimension "the fairyland of geometry." The speech appeared a short time later, on February 1, 1894, in Nature. His comments were also echoed in H. G. Wells's The Time Machine: "But some philosophical people have been asking ... - Why not another direction at right angles to the other three? ... Professor Simon Newcomb was expounding this to the New York Mathematical Society only a month or so ago." *Alfred M. Bork, The Fourth Dimension in Nineteenth-Century Physics, Isis, Sept 1964, pp. 326-338

In 1893, Professor James Dewar gave six well-illustrated lectures on "Air gaseous and liquid," at the Royal Institution, London, 28 Dec 1893 - 9 Jan 1894. Some of the air in the room was liquefied in the presence of the audience, and it remained so for some time, when enclosed in a vacuum jacket. Again, 1 Apr 1898. My favorite stupid joke about thermos bottles: "You put hot stuff in a thermos, it stays hot. You put cold stuff in a thermos, it stays cold. BUT how does the thermos know which is which?"

1895 Wilhelm Conrad Röntgen announces that he has taken an X-ray of his wife's hand in a paper, "Über eine neue Art von Strahlen," to the Würzburg Physical-Medical Society on 28 Dec, and it appeared in their 1895 proceedings. By January he was famous. In the next year some 50 books and 1000 papers appeared on the subject! A journal devoted to the subject was founded in May 1896.

1895 The Lumières held their first public screening of projected motion pictures in 1895. The Lumière brothers, Auguste Marie Louis Nicolas [oɡyst maʁi lwi nikɔla] (19 October 1862, Besançon, France – 10 April 1954, Lyon) and Louis Jean (5 October 1864, Besançon, France – 6 June 1948, Bandol), were among the earliest filmmakers in history. (Appropriately, "lumière" translates as "light" in English.) Their first public screening of films at which admission was charged was held on December 28, 1895, at the Salon Indien du Grand Café in Paris. This history-making presentation featured ten short films, including their first film, Sortie des Usines Lumière à Lyon (Workers Leaving the Lumière Factory). Each film is 17 meters long, which, when hand cranked through a projector, runs approximately 50 seconds. *Wik

1923 George David Birkhoff of Harvard received the first Bôcher Memorial Prize for his paper "Dynamical systems with two degrees of freedom." *VFR

1938 Kurt Gödel lectures to the annual AMS meeting, Williamsburg, on the consistency of the axiom of choice and the generalized continuum hypothesis.
Independence was proved in 1963 by Paul Cohen. *VFR

In 1931, Irène Joliot-Curie reported her study of the unusually penetrating radiation released when beryllium was bombarded by alpha particles, seen by the German physicists Walter Bothe and H. Becker in 1930. Joliot-Curie (daughter of Marie and Pierre Curie) agreed with them that the radiation was energetic gamma rays. She further discovered that if the emitted radiation passed through paraffin (or other hydrogen-containing materials), large numbers of protons were released. Since this was, in fact, a previously unknown result for gamma rays, she lacked an explanation. It was to be the experiments of James Chadwick, performed during 7-17 Feb, that would discover the radiation was in fact new particles - neutrons. *TIS

1973 For a really big ellipse, consider the orbit of the comet Kohoutek, which reached perihelion on this date. The lengths of the major and minor axes are 3,600 and 44 Astronomical Units. The comet's eccentricity is approximately 0.99993. *UMAP Journal, 4(1983), p. 164 Comet Kohoutek is a long-period comet; its previous apparition was about 150,000 years ago, and its next apparition will be in about 75,000 years. The comet was discovered on March 18th on photographic plates taken on March 7th and 9th by Czech astronomer Luboš Kohoutek, for whom the comet is named. *Wik

In 2005, the first in a network of satellites, named Galileo, was launched by a consortium of European governments and companies. By 2011, Galileo will consist of 30 satellites providing worldwide coverage as an alternative to the U.S. monopoly with its Global Positioning System (GPS). At a cost of $4 billion, it's Europe's biggest-ever space project, with one-third contributed by governments and the balance from eight companies. Since the American GPS is controlled by the military, the European satellite network is designed to ensure independence for civilian use, but also offers more precision for a paid service. Customers are expected to include service for small airports, transportation, and mobile phone manufacturers to build in navigation capabilities. *TIS

2009 Longest flight by a paper-only plane: Takuo Toda sets world record. TOKYO, Japan - Using a specially designed 10 cm long paper plane, Japanese origami plane virtuoso Takuo Toda's origami flight in a Japan Airlines hangar near Tokyo's Haneda Airport lasted 26.1 s, setting the world record for the longest flight by a paper-only plane. This one was made strictly in keeping with traditional rules of the ancient Japanese art; only one sheet of paper was folded by hand, with no scissors or glue. He had previously set a record for time aloft with a plane that included tape. *worldrecordsacademy.org

2013 Voyager 1 is a 722-kilogram (1,590 lb) space probe launched by NASA on September 5, 1977 to study the outer Solar System. Operating for 36 years, 3 months, and 23 days as of 28 December 2013, the spacecraft communicates with the Deep Space Network to receive routine commands and return data. At a distance of about 127.21 AU (\(1.903 \times 10^{10}\) km) from the Earth as of 28 December 2013, it is the farthest human-made object from Earth. *Wik
By then, both Friedrich Bessel and Friedrich Struve had been recognized as first for their measurements of stellar parallaxes, announced a few months earlier. Alpha Centauri can be observed from the Cape, though not from Britain. It is now known to be the nearest star to the Sun, but is still so distant that its light takes 4.5 years to reach us. As Scottish Astronomer Royal from 1834, he worked diligently at the Edinburgh observatory for ten years, making over 60,000 observations of star positions before his death in 1844. *TIS
1808 Victoire Louis Athanase Dupré (December 28, 1808 - August 10, 1869) was a French mathematician and physicist. He worked on number theory and, in the 1860s, on thermodynamics; his textbook Théorie mécanique de la chaleur (1869) contributed substantially to the spread of this then-new field of knowledge in France. Together with his son Paul, Dupré carried out experimental research on capillarity and the surface tension of liquids. This work also led to a formulation of Young's equation which is known today as the Young-Dupré equation. *Wik
1828 Henry R. Rowlands becomes the first American to patent a device for walking on water. Since that time there have been at least one hundred other patents approved in the US for similar devices. All seem to be inspired by the earliest known design (Jesus excepted) by Leonardo da Vinci in the late Fifteenth Century.
1873 William Draper Harkins (28 Dec 1873; 7 Mar 1951) American nuclear chemist who was one of the first to investigate the structure and fusion reactions of the nucleus. In 1920, Harkins predicted the existence of the neutron, subsequently discovered by Chadwick's experiment. He made pioneering studies of nuclear reactions with Wilson cloud chambers. In the early 1930s (with M.D. Kamen) he built a cyclotron. Harkins demonstrated that in neutron bombardment reactions the first step in neutron capture is the formation of an "excited nucleus" of measurable lifetime, which subsequently splits into fragments. He also suggested that subatomic energy might provide enough energy to power the Sun over its lifetime. *TIS
1882 Sir Arthur Stanley Eddington (28 Dec 1882; 22 Nov 1944) English astrophysicist and mathematician known for his work on the motion, distribution, evolution and structure of stars. He also interpreted Einstein's general theory of relativity. He was one of the first to suggest (1917) that conversion of matter into radiation powered the stars. In 1919, he led a solar eclipse expedition which confirmed the predicted bending of starlight by gravity. He developed an equation for radiation pressure. In 1924, he derived an important mass-luminosity relation. He also studied pulsations in Cepheid variables, and the very high densities of white dwarfs. He sought fundamental relationships between the principal physical constants. Eddington wrote many books for the general reader, including Stars and Atoms. *TIS
One of my favorite stories about Eddington is this one: Ludwik Silberstein approached Eddington and told him that people believed he was one of only three people in the world who understood general relativity, and that included Einstein. When Eddington didn't respond for a moment he prodded, "Come on, don't be modest," and Eddington replied, "Oh, no. It's not that. I was just trying to figure out who the third was." *Mario Livio, Brilliant Blunders
1898 Carl-Gustaf Arvid Rossby (28 Dec 1898; 19 Aug 1957) Swedish-U.S.
meteorologist who first explained the large-scale motions of the atmosphere in terms of fluid mechanics. His work contributed to developing meteorology as a science. Rossby first theorized about the existence of the jet stream in 1939, and that it governs the easterly movement of most weather. U.S. Army Air Corps pilots flying B-29 bombing missions across the Pacific Ocean during World War II proved the jet stream's existence. The pilots found that when they flew from east to west, they experienced slower arrival times and fuel shortage problems. When flying from west to east, however, they found the opposite to be true. Rossby created mathematical models (Rossby equations) for computerized weather prediction (1950). *TIS
1903 John von Neumann is born in Budapest, Hungary. (28 Dec 1903; 8 Feb 1957) His prodigious abilities were recognized in early childhood. He obtained a degree in chemical engineering, attending the University of Berlin (1921-1923) and the Technische Hochschule in Zurich (1923-1926). *CHM He made important contributions in quantum physics, logic, meteorology, and computer science. He invented game theory, the branch of mathematics that analyses strategy, now widely employed for military and economic purposes. During WW II, he studied the implosion method for bringing nuclear fuel to explosion, and he participated in the development of the hydrogen bomb. He also set quantum theory upon a rigorous mathematical basis. In computer theory, von Neumann did much of the pioneering work in logical design, in the problem of obtaining reliable answers from a machine with unreliable components, the function of "memory," and machine imitation of "randomness." *TIS
1929 Maarten Schmidt (28 Dec 1929, ) Dutch-born American astronomer who in 1963 discovered quasars (quasi-stellar objects). The hydrogen spectrum of these starlike objects shows a huge redshift, which indicates they are more distant than normal stars, travelling away at greater speed, and are among the oldest objects observed. In turn, this indicates they existed only when the universe was very young, and provides evidence against the steady state theory of Fred Hoyle. Schmidt is currently seeking to find the redshift above which there are no quasars, and he also studies X-ray and gamma ray sources. *TIS
DEATHS
1663 Francesco Maria Grimaldi (2 Apr 1618; 28 Dec 1663) Italian mathematician and physicist who studied the diffraction of light. He observed the image on a screen in a darkened room of a tiny beam of sunlight after it passed through a fine screen (or a slit, edge of a screen, wire, hair, fabric or bird feather). The image had iridescent fringes, and deviated from a normal geometrical shadow. He coined the name diffraction for this change of trajectory of the light passing near opaque objects (though, more specifically, it may have been interference from two close sources that he observed). This provided evidence for later physicists to support the wave theory of light. With Riccioli, he investigated the object in free fall (1640-50), and found that the distance of fall was proportional to the square of the time taken. *TIS
1827 Robert Woodhouse (28 April 1773 – 23 December 1827) was an English mathematician. His earliest work, entitled the Principles of Analytical Calculation, was published at Cambridge in 1803.
In this he explained the differential notation and strongly pressed the employment of it; but he severely criticized the methods used by continental writers, and their constant assumption of non-evident principles. This was followed in 1809 by a trigonometry (plane and spherical), and in 1810 by a historical treatise on the calculus of variations and isoperimetrical problems. He next produced an astronomy, of which the first book (usually bound in two volumes), on practical and descriptive astronomy, was issued in 1812, and the second book, containing an account of the treatment of physical astronomy by Pierre-Simon Laplace and other continental writers, was issued in 1818. He became the Lucasian Professor of Mathematics in 1820, and subsequently the Plumian professor in the university. As Plumian Professor he was responsible for installing and adjusting the transit instruments and clocks at the Cambridge Observatory. He held that position until his death in 1827. *Wik
1871 John Henry Pratt (4 June 1809 - 28 December 1871) was a British clergyman and mathematician who devised a theory of crustal balance which would become the basis for the isostasy principle. *Wik
1896 Horatio (Emmons) Hale (3 May 1817; 28 Dec 1896) was an American anthropologist whose contributions to the science of ethnology included his theory of the origin of the diversities of human languages and dialects, a theory suggested by his study of child languages (the languages invented by little children). He emphasized the importance of languages as tests of mental capacity and as criteria for the classification of human groups. Hale was the first to discover that the Tutelos of Virginia belonged to the Siouan family, and to identify the Cherokee as a member of the Iroquoian family of speech. He sailed with the scientific corps of the Wilkes Exploring Expedition (1838-42) collecting linguistic materials. He used the drift of the Polynesian tongue as a clue to the migration of this race. *TIS
1919 Johannes Robert Rydberg ('Janne' to his friends) (November 8, 1854 – December 28, 1919) was a Swedish physicist mainly known for devising the Rydberg formula, in 1888, which is used to predict the wavelengths of photons (of light and other electromagnetic radiation) emitted by changes in the energy level of an electron in a hydrogen atom. The physical constant known as the Rydberg constant is named after him, as is the Rydberg unit. Excited atoms with very high values of the principal quantum number, represented by n in the Rydberg formula, are called Rydberg atoms. Rydberg's anticipation that spectral studies could assist in a theoretical understanding of the atom and its chemical properties was justified in 1913 by the work of Niels Bohr (see hydrogen spectrum). An important spectroscopic constant based on a hypothetical atom of infinite mass is called the Rydberg (R) in his honour. *Wik
1923 Gustave Eiffel (15 Dec 1832; 28 Dec 1923) French civil engineer who specialized in metal structures, known especially for the Eiffel Tower in Paris. He built the first of his iron bridges at Bordeaux (1858) and was among the first engineers to build bridge foundations using compressed-air caissons. His work includes designing the rotatable dome for Nice Observatory on the summit of Mont Gros (1886), and the framework for the Statue of Liberty now in New York Harbor.
After building the Eiffel Tower (1887-9), which he used for scientific research on meteorology, aerodynamics and radio telegraphy, he also built the first aerodynamic laboratory at Auteuil, outside Paris, where he pursued his research work without interruption during WW I. *TIS
1964 Edwin Bidwell Wilson (25 April 1879, Hartford, Connecticut, USA - 28 Dec 1964, Brookline, Massachusetts, USA) Wilson graduated from Yale with a Ph.D. in 1901 and, in the same year, a textbook which he had written on vector analysis was published. Vector Analysis (1901) was based on Gibbs' lectures: "This beautiful work, published when Wilson was only twenty-two years old, had a profound and lasting influence on the notation for and the use of vector analysis." Wilson had been inspired by Gibbs to work on mathematical physics, and he began to write papers on mechanics and the theory of relativity. In 1912 Wilson published the first American advanced calculus text. World War I brought another move in Wilson's research interests, for he had undertaken war work which involved aerodynamics, and this led him to study the effects of gusts of wind on a plane. In 1920 he published his third major text, Aeronautics, and gathered round him a group of students working on this topic. Wilson had already worked in a number of quite distinct areas, and his work on aeronautics did not become the major topic for the rest of his career. Not long after the publication of his important text on aeronautics, his interests moved again, this time towards probability and statistics. He did not study statistics for its own sake, however, but was interested in applying statistics both to astronomy and to biology. He was the first to study confidence intervals, later rediscovered by Neyman. In 1922 Wilson left the Massachusetts Institute of Technology to become Professor of Vital Statistics at the Harvard School of Public Health. He continued to hold this post until he retired in 1945, when he became professor emeritus. After he retired, Wilson spent a year in Glasgow, Scotland, where he was Stevenson Lecturer on Citizenship. From 1948 he was a consultant to the Office of Naval Research in Boston. *SAU
Jacob Bernoulli's tomb marker
At ubi materia, ibi Geometria. Where there is matter, there is geometry. ~Johannes Kepler
The 362nd day of the year; 362 and its double and triple all use the same number of digits in Roman numerals. *What's Special About This Number.
3! + 6! + 2! - 1 = 727 and 3!*6!*2! + 1 = 8641 are both prime. *Prime Curios
In 1831, Charles Darwin set sail from Plymouth harbour on his voyage of scientific discovery aboard the HMS Beagle, a British Navy ship. Captain Robert FitzRoy was sailing to the southern coast of South America in order to complete a government survey. Darwin had an unpaid position as the ship's naturalist, at age 22, just out of university. Originally planned to be at sea for two years, the voyage lasted five years, making stops in Brazil, the Galapagos Islands, and New Zealand. From the observations he made and the specimens he collected on that voyage, Darwin developed his theory of biological evolution through natural selection, which he published 28 years after the Beagle left Plymouth. Darwin laid the foundation of modern evolutionary theory. *TIS
In 1956, the formerly believed "law" of conservation of parity was disproved in the first successful results from an experiment conducted by Madame Chien-Shiung Wu at Columbia University on the beta-decay of cobalt-60.
It had been suggested in a paper published by Lee and Yang on 1 Oct 1956. There had been problems to overcome working with the cobalt sample and detectors in a vacuum at a working temperature of one-hundredth of a kelvin. Wu's team repeated the experiment, doing maintenance on the apparatus as necessary, until on 9 Jan 1957 further measurements confirmed the initial results. Leon Lederman performed an independent test of parity with Columbia's cyclotron. They held a press conference on 15 Jan 1957. *TIS
BIRTHS
1571 Johannes Kepler (27 Dec 1571; 15 Nov 1630) German astronomer who formulated three major laws of planetary motion which enabled Isaac Newton to devise the law of gravitation. Working from the carefully measured positions of the planets recorded by Tycho Brahe, Kepler mathematically deduced three relationships from the data: (1) the planets move in elliptical orbits with the Sun at one focus; (2) the radius vector sweeps out equal areas in equal times; and (3) for two planets the squares of their periods are proportional to the cubes of their mean distances from the sun. Kepler suggested that the tides were caused by the attraction of the moon. He believed that the universe was governed by mathematical rules, but recognized the importance of experimental verification. *TIS
1654 Jacob (Jacques) Bernoulli (27 Dec 1654; 16 Aug 1705) was a Swiss mathematician and astronomer who was one of the first to fully utilize differential calculus and introduced the term integral in integral calculus. Jacob Bernoulli's first important contributions were a pamphlet on the parallels of logic and algebra (1685), work on probability in 1685 and geometry in 1687. His geometry result gave a construction to divide any triangle into four equal parts with two perpendicular lines. By 1689 he had published important work on infinite series and published his law of large numbers in probability theory. He published five treatises on infinite series (1682-1704). Jacob was intrigued by the logarithmic spiral and requested it be carved on his tombstone. He was the first of the Bernoulli family of mathematicians. *TIS (see more about the Bernoulli family at the Renaissance Mathematicus)
Even as the finite encloses an infinite series
And in the unlimited limits appear,
So the soul of immensity dwells in minutia
And in the narrowest limits no limits inhere.
What joy to discern the minute in infinity!
The vast to perceive in the small, what divinity!
~Ars Conjectandi
1773 Sir George Cayley (27 Dec 1773; 15 Dec 1857) (6th Baronet) English aeronautical pioneer who built the first successful man-carrying glider (1853). He made extensive anatomical and functional studies of bird flight. By measuring bird and human muscle masses, he realized it would be impossible for humans to strap on a pair of wings and take to the air. His further studies in the principles of lift, drag and thrust founded the science of aerodynamics, from which he discovered that stabilizing flying craft required both vertical and horizontal tail rudders, that concave wings produced more lift than flat surfaces, and that swept-back wings provided greater stability. Cayley also invented the caterpillar tractor (1825), automatic railroad crossing signals, self-righting lifeboats, and an expansion-air (hot-air) engine.
*TIS (He was a distant cousin of the father of mathematician Arthur Cayley.)
1915 Jacob Lionel Bakst Cooper (27 December 1915, Beaufort West, Cape Province, South Africa; 8 August 1979, London, England) was a South African mathematician who worked in operator theory, transform theory, thermodynamics, functional analysis and differential equations. *Wik
DEATHS
1771 Henri Pitot (3 May 1695; 27 Dec 1771) French hydraulic engineer who invented the Pitot tube (1732), an instrument to measure flow velocity either in liquids or gases. With subsequent improvements by Henri Darcy, its modern form is used to determine the airspeed of aircraft. Although originally a trained mathematician and astronomer, he became involved with an investigation of the velocity of flowing water at different depths, for which purpose he first created the Pitot tube. He disproved the prevailing belief that the velocity of flowing water increased with depth. Pitot became an engineer in charge of maintenance and construction of canals, bridges, and drainage projects, and is particularly remembered for his kilometer-long Roman-arched Saint-Clément Aqueduct (1772) at Montpellier, France. *TIS
1930 Gyula Farkas (28 March 1847, Sárosd, Fejér County, Hungary - 27 Dec 1930, Pestszentlorinc, Hungary) He is remembered for Farkas' theorem, which is used in linear programming, and also for his work on linear inequalities. In 1881 Gyula Farkas published a paper on Farkas Bolyai's iterative solution to the trinomial equation, making a careful study of the convergence of the algorithm. In a paper published three years later, Farkas examined the convergence of more general iterative methods. He also made major contributions to applied mathematics and physics, particularly in the areas of mechanical equilibrium, thermodynamics, and electrodynamics. *SAU
1973 Raymond Woodard Brink (4 Jan 1890, Newark, New Jersey, USA - 27 Dec 1973, La Jolla, California, USA) was an American mathematician who studied at Kansas State University, Harvard and Paris. He taught at the University of Minnesota, though he spent a year in Edinburgh in 1919. He worked on the convergence of series. *SAU
1992 Alfred Hoblitzelle Clifford (July 11, 1908 – December 27, 1992) was an American mathematician who is known for Clifford theory and for his work on semigroups. The Alfred H. Clifford Mathematics Research Library at Tulane University is named after him. *Wik
1995 Boris Vladimirovich Gnedenko (January 1, 1912 - December 27, 1995) was a Soviet mathematician and a student of Andrey Nikolaevich Kolmogorov. He was born in Simbirsk (now Ulyanovsk), Russia, and died in Moscow. He is perhaps best known for his work with Kolmogorov and his contributions to the study of probability theory. Gnedenko was appointed Head of the Physics, Mathematics and Chemistry Section of the Ukrainian Academy of Sciences in 1949, and also became Director of the Kiev Institute of Mathematics in the same year. *Wik
1996 Sister Mary Celine Fasenmyer, R.S.M. (October 4, 1906, Crown, Pennsylvania – December 27, 1996, Erie, Pennsylvania) was a mathematician. She is most noted for her work on hypergeometric functions and linear algebra. *Wik
2006 Peter L. Hammer (December 23, 1936 - December 27, 2006) was a Romanian-born American mathematician.
He contributed to the fields of operations research and applied discrete mathematics through the study of pseudo-Boolean functions and their connections to graph theory and data mining. *Wik
A young man passes from our public schools to the universities, ignorant almost of the elements of every branch of useful knowledge. ~Charles Babbage
The 361st day of the year; $2^{361}$ is an apocalyptic number: its decimal expansion contains 666. $2^{361}$ = 4697085165547666455778961193578674054751365097816639741414581943064418050229216886927397996769537406063869952. That's 109 digits.
One of Ramanujan's many approximations of pi was $\left(9^2 + \frac{19^2}{22}\right)^{1/4}$, and $361 = 19^2$. And since 361 is the last year-day that is a perfect square, it is worth pointing out to students that every perfect square is also the sum of two consecutive triangular numbers: 361 = 171 + 190.
1837 Charles Babbage completed his "Calculating Engine" manuscript. *VFR
1843 John Graves writes to William Rowan Hamilton that he has invented an eight-dimensional normed division algebra he called "Octaves." Within a few months, Hamilton would realize that the octonions were not associative. This would lead to the first use of the term "associative" by Hamilton in 1844. (Except for matrices, which were not generally considered as "numbers," there were no common non-associative systems at that time.) *John Baez, Rankin Lecture of September 17, 2008, Glasgow
The complete Volume Two of the Proceedings of the Royal Irish Academy was released in 1844, but the paper had been read on November 13, 1843, over a full month before Graves's letter. Hamilton created the phrase in explaining that although the quaternions maintained the distributive property, "yet the commutative character is lost," and then adds, "another important property of the old multiplication is preserved ... which may be called the associative character of the operation."
1864 The official seal of MIT was adopted on December 26, 1864. The craftsman at the anvil and the scholar with a book on the seal of the Massachusetts Institute of Technology embody the educational philosophy of William Barton Rogers and other incorporators of MIT as stated in their 1860 proposal Objects and Plan of an Institute of Technology. *MIT History
1898 Radium discovered by Pierre and Marie Curie. *VFR Actually, it seems this was the date of their announcement of the discovery (which must have occurred a few days earlier). They created the name radium for their element. This was their second discovery in the first year of her research on her thesis; they had also discovered polonium earlier in the year.
In 1906, the world's first full-length feature film, the 70-min Story of the Kelly Gang, was presented in the Town Hall at Melbourne, Australia, where it had been filmed at a cost of £450. It preceded D.W. Griffith's The Birth of a Nation by nine years. The subject of the Australian movie was Ned Kelly, a bandit who lived 1855 to 1880. The film toured through Australia for over 20 years, and abroad in New Zealand and Britain. Since some people, including politicians and police, viewed the content of the film as glorifying the criminals, the movie was banned (1907) in Benalla and Wangaratta and also in Victoria (1912). Only fragments totalling about 10 minutes of the original nitrate film have survived to the present. *TIS
1951 Kurt Gödel delivered the Gibbs Lecture, "Some Basic Theorems on the Foundations of Mathematics and their Philosophical Implications," to the annual AMS meeting at Brown University.
*VFR
1982 TIME Names a Non-Human "Man of the Year". TIME magazine's editors selected the Personal Computer for "Machine of the Year," in lieu of their well-known "Man of the Year" award. The computer beat out U.S. President Ronald Reagan, U.K. Prime Minister Margaret Thatcher and Prime Minister of Israel Menachem Begin. The planet Earth became the second non-human recipient of the award, in 1988. The awards have been given since 1927. The magazine's essay reported that in 1982, 80% of Americans expected that "in the fairly near future, home computers will be as commonplace as television sets or dishwashers." In 1980, 724,000 personal computers were sold in the United States, according to Time. The following year, that number doubled to 1.4 million. *CHM
BIRTHS
1532 Wilhelm Xylander (born Wilhelm Holtzman, graecized to Xylander) (December 26, 1532 – February 10, 1576) was a German classical scholar and humanist. Xylander was the author of a number of important works. He translated the first six books of Euclid into German with notes, and the Arithmetica of Diophantus and the De quattuor mathematicis scientiis of Michael Psellus into Latin. *Wik
1780 Mary Fairfax Greig Somerville (26 Dec 1780, Jedburgh, Roxburghshire, Scotland - 29 Nov 1872, Naples, Italy) Somerville wrote many works which influenced Maxwell. Her discussion of a hypothetical planet perturbing Uranus led Adams to his investigation. Somerville College in Oxford was named after her. *SAU
1791 Charles Babbage born. *VFR (26 Dec 1791; 18 Oct 1871) English mathematician and pioneer of mechanical computation, which he pursued to eliminate inaccuracies in mathematical tables. By 1822, he had a small calculating machine able to compute squares. He produced prototypes of portions of a larger Difference Engine. (Georg and Edvard Scheutz later constructed the first working devices to the same design, which were successful in limited applications.) In 1833 he began his programmable Analytical Machine, a forerunner of modern computers. His other inventions include the cowcatcher, dynamometer, standard railroad gauge, uniform postal rates, occulting lights for lighthouses, Greenwich time signals, the heliograph, and the ophthalmoscope. He also had an interest in cyphers and lock-picking. *TIS
1861 Friedrich Engel born in Germany. He became the closest student of the Norwegian mathematician Sophus Lie. Engel was also the first to translate Lobachevsky's work into a Western language (German). *VFR
1900 Antoni Zygmund (26 Dec 1900; 30 May 1992) Polish-born mathematician who created a major analysis research centre at Chicago, and was recognized in 1986 for this with the National Medal for Science. In 1940, he escaped with his wife and son from German-controlled Poland to the USA. He did much work in harmonic analysis, a statistical method for determining the amplitude and period of certain harmonic or wave components in a set of data with the aid of Fourier series. Such techniques can be applied in various fields of science and technology, including natural phenomena such as sea tides. He also did major work in Fourier analysis and its application to partial differential equations. Zygmund's book Trigonometric Series (1935) is a classic, definitive work on the subject. *TIS
1903 Lancelot Stephen Bosanquet (26 Dec 1903, St. Stephen's-by-Saltash, Cornwall, England - 10 Jan 1984, Cambridge, Cambridgeshire, England) Bosanquet wrote many papers on the convergence and summability of Fourier series.
He also wrote on the convergence and summability of Dirichlet series and studied specific kinds of summability such as summability factors for Cesàro means. His later work on integrals includes two major papers on the Laplace-Stieltjes integral, published in 1953 and 1961. Other topics he studied included inequalities, mean-value theorems, Tauberian theorems, and convexity theorems. *SAU
1937 John Horton Conway (born 26 December 1937) is a prolific mathematician active in the theory of finite groups, knot theory, number theory, combinatorial game theory and coding theory. He has also contributed to many branches of recreational mathematics, notably the invention of the cellular automaton called the Game of Life. Conway is currently Professor of Mathematics and John von Neumann Professor in Applied and Computational Mathematics at Princeton University. He studied at Cambridge, where he started research under Harold Davenport. He received the Berwick Prize (1971), was elected a Fellow of the Royal Society (1981), was the first recipient of the Pólya Prize (LMS) (1987), won the Nemmers Prize in Mathematics (1998) and received the Leroy P. Steele Prize for Mathematical Exposition (2000) of the American Mathematical Society. He has an Erdős number of one. *Wik
Conway is known for his sense of humor, and the last proof in his On Numbers and Games is this: "Theorem 100: This is the last Theorem in this book. The Proof is Obvious." I really enjoyed Siobhan Roberts's biography of Conway. You may, too.
DEATHS
1624 Simon Marius (10 Jan 1573; 26 Dec 1624) (Also known as Simon Mayr) German astronomer, pupil of Tycho Brahe, one of the earliest users of the telescope and the first in print to mention the Andromeda nebula (1612). He studied and named the four largest moons of Jupiter as then known: Io, Europa, Ganymede and Callisto (1609), after mythological figures closely involved in love with Jupiter. Although he may have made his discovery independently of Galileo, when Marius claimed to have discovered these satellites of Jupiter (1609), in a dispute over priority, it was Galileo who was credited by other astronomers. However, Marius was the first to prepare tables of the mean periodic motions of these moons. He also observed sunspots in 1611. *TIS You can find a nice blog about the conflict with Galileo by the Renaissance Mathematicus.
1931 Melvil Dewey (10 Dec 1851; 26 Dec 1931) American librarian who developed library science in the U.S., especially with his system of classification, the Dewey Decimal Classification (1876), for library cataloging. His system of classification (1876) uses numbers from 000 to 999 to cover the general fields of knowledge and designates more specific subjects by the use of decimal points. He was an activist in the spelling reform and metric system movements. Dewey invented the vertical office file, winning a gold medal at the 1893 World's Fair. It was essentially an enlarged version of a card catalogue, where paper documents hung vertically in long drawers. *TIS
2006 Martin David Kruskal (September 28, 1925 – December 26, 2006) was an American mathematician and physicist. He made fundamental contributions in many areas of mathematics and science, ranging from plasma physics to general relativity and from nonlinear analysis to asymptotic analysis. His single most celebrated contribution was the discovery and theory of solitons. His Ph.D.
dissertation, written under the direction of Richard Courant and Bernard Friedman at New York University, was on the topic "The Bridge Theorem for Minimal Surfaces." He received his Ph.D. in 1952.
In the 1950s and early 1960s, he worked largely on plasma physics, developing many ideas that are now fundamental in the field. His theory of adiabatic invariants was important in fusion research. Important concepts of plasma physics that bear his name include the Kruskal–Shafranov instability and the Bernstein–Greene–Kruskal (BGK) modes. With I. B. Bernstein, E. A. Frieman, and R. M. Kulsrud, he developed the MHD (magnetohydrodynamic) Energy Principle. His interests extended to plasma astrophysics as well as laboratory plasmas. Martin Kruskal's work in plasma physics is considered by some to be his most outstanding.
In 1960, Kruskal discovered the full classical spacetime structure of the simplest type of black hole in general relativity. A spherically symmetric black hole can be described by the Schwarzschild solution, which was discovered in the early days of general relativity. However, in its original form, this solution only describes the region exterior to the horizon of the black hole. Kruskal (in parallel with George Szekeres) discovered the maximal analytic continuation of the Schwarzschild solution, which he exhibited elegantly using what are now called Kruskal–Szekeres coordinates. This led Kruskal to the astonishing discovery that the interior of the black hole looks like a "wormhole" connecting two identical, asymptotically flat universes. This was the first real example of a wormhole solution in general relativity. The wormhole collapses to a singularity before any observer or signal can travel from one universe to the other. This is now believed to be the general fate of wormholes in general relativity.
Martin Kruskal was married to Laura Kruskal, his wife of 56 years. Laura is well known as a lecturer and writer about origami and the originator of many new models. Martin, who had a great love of games, puzzles, and word play of all kinds, also invented several quite unusual origami models, including an envelope for sending secret messages (anyone who unfolded the envelope to read the message would have great difficulty refolding it to conceal the deed). His mother, Lillian Rose Vorhaus Kruskal Oppenheimer, was an American origami pioneer. She popularized origami in the West starting in the 1950s, and is credited with popularizing the Japanese term origami in English-speaking circles, which gradually supplanted the literal translation paper folding that had been used earlier. In the 1960s she co-wrote several popular books on origami with Shari Lewis. *Wik
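The day-number curiosities quoted in the entries above (the Roman-numeral fact for 362, the two factorial primes, the apocalyptic power $2^{361}$, and Ramanujan's fourth-root approximation of $\pi$) are all easy to check mechanically. Here is a minimal Python sanity check; the `to_roman` helper is a small addition written for the purpose, not anything from the original posts.

```python
from math import factorial, isqrt, pi

def to_roman(n):
    # standard greedy conversion to Roman numerals
    vals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
            (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
            (5, "V"), (4, "IV"), (1, "I")]
    out = ""
    for v, s in vals:
        while n >= v:
            out, n = out + s, n - v
    return out

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

# 362 and its double and triple all need the same number of Roman symbols
print([len(to_roman(k * 362)) for k in (1, 2, 3)])                 # [7, 7, 7]

# 3! + 6! + 2! - 1 = 727 and 3! * 6! * 2! + 1 = 8641 are both prime
print(is_prime(factorial(3) + factorial(6) + factorial(2) - 1))    # True
print(is_prime(factorial(3) * factorial(6) * factorial(2) + 1))    # True

# 2^361 is "apocalyptic": its 109 decimal digits contain the string 666
digits = str(2 ** 361)
print(len(digits), "666" in digits)                                # 109 True

# Ramanujan: (9^2 + 19^2/22)^(1/4) agrees with pi to about 8 decimals
print((9**2 + 19**2 / 22) ** 0.25, pi)
```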
Synchrotron Blob Model of Infrared and X-ray Flares from Sagittarius A*
Masaaki Kusunose and Fumio Takahara. arXiv: High Energy Astrophysical Phenomena. DOI: 10.1088/0004-637X/726/1/54
Abstract: Sagittarius A* in the Galactic center harbors a supermassive black hole and exhibits various active phenomena. Besides quiescent emission in radio and submillimeter radiation, flares in the near infrared (NIR) and X-ray bands are observed to occur frequently. We study a time-dependent model of the flares, assuming that the emission is from a blob ejected from the central object. Electrons obeying a power law with the exponential cutoff are assumed to be injected in the blob for a limited…

Citing papers:
- An inverse Compton scattering origin of X-ray flares from Sgr A*. F. Yusef-Zadeh, M. Wardle, +6 authors, D. Porquet. The X-ray and near-IR emission from Sgr A* is dominated by flaring, while a quiescent component dominates the emission at radio and submillimeter (sub-mm) wavelengths. The spectral energy…
- A Leptonic Model of Steady High-Energy Gamma-Ray Emission from Sgr A*. Recent observations of Sgr A* by Fermi and HESS have detected steady gamma-ray emission in the GeV and TeV bands. We present a new model to explain the GeV gamma-ray emission by inverse Compton…
- Concurrent X-ray, near-infrared, sub-millimeter, and GeV gamma-ray observations of Sagittarius A*. G. Trap, A. Goldwurm, +13 authors, F. Yusef-Zadeh. Aims. The radiative counterpart of the supermassive black hole at the Galactic center (GC), Sgr A*, is subject to frequent flares that are visible simultaneously in X-rays and the near-infrared…
- X-Ray Flares from Sagittarius A* and Black Hole Universe. T. X. Zhang, C. Wilson, M. Schamschula. Sagittarius (Sgr) A* is a massive black hole at the Milky Way center with a mass of about 4.5 million solar masses. It is usually quite faint, emitting steadily at all wavelengths including X-rays.…
- Statistical and theoretical studies of flares from Sagittarius A⋆. Ya-Ping Li, Q. Yuan, +5 authors, J. Dexter (Proceedings of the International Astronomical Union). Multi-wavelength flares have routinely been observed from the supermassive black hole, Sagittarius A⋆ (Sgr A⋆), at our Galactic center. The nature of these flares remains largely unclear,…
- A Chandra/HETGS Census of X-ray Variability from Sgr A* during 2012. J. Neilsen, M. Nowak, +13 authors, F. Baganoff. We present the first systematic analysis of the X-ray variability of Sgr A* during the Chandra X-ray Observatory's 2012 Sgr A* X-ray Visionary Project. With 38 High Energy Transmission Grating…
- A magnetohydrodynamic model for multiwavelength flares from Sagittarius A⋆ (I): model and the near-infrared and X-ray flares. Ya-Ping Li, F. Yuan, Q. Wang. Flares from the supermassive black hole in our Galaxy, Sagittarius A⋆ (Sgr A⋆), have been routinely observed over the last decade or so. Despite numerous observational and theoretical…
- Non-thermal models for infrared flares from Sgr A*. E. A. Petersen, C. Gammie. Recent observations with mm very long baseline interferometry (mm-VLBI) and near-infrared (NIR) interferometry provide mm images and NIR centroid proper motion for Sgr A*. Of particular interest…
- The role of electron heating physics in images and variability of the Galactic Centre black hole Sagittarius A*. A. Chael, M. Rowan, R. Narayan, Michael D. Johnson, L. Sironi (Monthly Notices of the Royal Astronomical Society). The accretion flow around the Galactic Center black hole Sagittarius A* (Sgr A*) is expected to have an electron temperature that is distinct from the ion temperature, due to weak Coulomb coupling in…
- Sgr A* flares: tidal disruption of asteroids and planets? K. Zubovas, S. Nayakshin, S. Markoff. It is speculated that one such disruption may explain the putative increase in Sgr A* luminosity, and it is estimated that asteroids larger than ∼10 km in size are needed to power the observed flares, with a maximum possible luminosity of the order of $10^{39}$ erg s$^{-1}$.

References:
- An X-ray, infrared, and submillimeter flare of Sagittarius A*. D. Marrone, F. Baganoff, +14 authors, G. Bower. Energetic flares are observed in the Galactic supermassive black hole Sagittarius A* from radio to X-ray wavelengths. On a few occasions, simultaneous flares have been detected in IR and X-ray…
- Time-Dependent Models of Flares from Sagittarius A*. K. Dodds-Eden, Prateek Sharma, +4 authors, D. Porquet. The emission from Sgr A*, the supermassive black hole in the Galactic Center, shows order-of-magnitude variability ('flares') a few times a day that is particularly prominent in the near-infrared…
- On the Nature of the Variable Infrared Emission from Sagittarius A*. F. Yuan, E. Quataert, R. Narayan. Recent infrared (IR) observations of the center of our Galaxy indicate that the supermassive black hole (SMBH) source Sgr A* is strongly variable in the IR. The timescale for the variability, ~30…
- The Nature of the 10 kilosecond X-ray flare in Sgr A*. S. Markoff, H. Falcke, F. Yuan, P. Biermann. The X-ray mission Chandra has observed a dramatic X-ray flare (a brightening by a factor of 50 for only three hours) from Sgr A*, the Galactic Center supermassive black hole. Sgr A* has never shown…
- Rapid X-ray flaring from the direction of the supermassive black hole at the Galactic Centre. F. Baganoff, M. Bautz, +8 authors, F. Walter. The discovery of rapid X-ray flaring from the direction of Sagittarius A* provides compelling evidence that the emission is coming from the accretion of gas onto a supermassive black hole at the Galactic Centre.
- X-ray hiccups from Sagittarius A* observed by XMM-Newton: the second brightest flare and three moderate flares caught in half a day. D. Porquet, N. Grosso, +22 authors. Context. Our Galaxy hosts at its dynamical center Sgr A*, the closest supermassive black hole. Surprisingly, its luminosity is several orders of magnitude lower than the Eddington luminosity.…
- Polarimetry of near-infrared flares from Sagittarius A*. A. Eckart, R. Schödel, L. Meyer, S. Trippe, T. Ott, R. Genzel. Context. We report new polarization measurements of the variable near-infrared emission of the Sgr A* counterpart associated with the massive $3$–$4\times10^6\,M_{\odot}$ black hole at the Galactic…
- A constant spectral index for Sagittarius A* during infrared/X-ray intensity variations. S. Hornstein, K. Matthews, +5 authors, F. Baganoff. We report the first time-series of broadband infrared color measurements of Sgr A*, the variable emission source associated with the supermassive black hole at the Galactic Center. Using the laser…
- Nonthermal Electrons in Radiatively Inefficient Accretion Flow Models of Sagittarius A*. We investigate radiatively inefficient accretion flow models for Sgr A*, the supermassive black hole in our Galactic center, in light of new observational constraints. Confirmation of linear…
- Near-infrared flares from accreting gas around the supermassive black hole at the Galactic Centre. R. Genzel, R. Schödel, +5 authors, B. Aschenbach. High-resolution infrared observations of Sagittarius A* reveal 'quiescent' emission and several flares, and trace very energetic electrons or moderately hot gas within the innermost accretion region.
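Purely as an illustration of the functional form named in the abstract above (electrons injected with "a power law with the exponential cutoff"), here is a short Python sketch of that spectrum, $dN/d\gamma \propto \gamma^{-p}\,e^{-\gamma/\gamma_c}$. The index `p` and cutoff `gamma_c` are hypothetical placeholder values; the abstract does not state the paper's actual parameters, and this is not the paper's model code.

```python
import numpy as np

# Injected electron spectrum: dN/dgamma ~ gamma^(-p) * exp(-gamma / gamma_c).
# p and gamma_c are placeholders for illustration, NOT the paper's fitted values.
gamma = np.logspace(1.0, 6.0, 500)   # electron Lorentz factors
p, gamma_c = 2.2, 1.0e5              # hypothetical power-law index and cutoff

dn_dgamma = gamma ** (-p) * np.exp(-gamma / gamma_c)

# crude normalization to one injected electron (trapezoid rule by hand)
norm = np.sum((dn_dgamma[:-1] + dn_dgamma[1:]) / 2 * np.diff(gamma))
dn_dgamma /= norm
print(dn_dgamma[0], dn_dgamma[-1])   # steep power law, suppressed past the cutoff
```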
Don't see the point of the Fundamental Theorem of Calculus. $$\frac{d}{dx}\int_a^xf(t)\,dt$$ I would love to understand what exactly is the point of the FTC. I'm not interested in mechanically churning out solutions to problems. It doesn't state anything that isn't already known. Prior to reading about the FTC, the integral is defined as the anti-derivative. So, it's basically an operator: "Take the anti-derivative by figuring out whose derivative this is!" Simple. So, what is so "fundamental" about redundantly restating the very definition of the integral? (The derivative of the anti-derivative is the function.) This to me is like saying $-(-1) = +1$. Not exactly earth-shattering. Am I missing something with regard to the indefinite vs. definite integral? If we look at a simple example, $$\frac{d}{dx}\int_1^xt^2 \, dt = \cdots =x^2$$ Can we discuss what exactly this is representing? Why would you even write this? Why would you take the rate of change of an area under the curve? Why would you want to take the derivative of an integral? Or, is this just done to prove something else? When would you even come across this situation in math: taking the rate of change of the area under a curve and/or total displacement (the derivative of the definite integral)? Also, what is the significance of using $t$ as a variable? Why would you integrate from a constant to a function in the first place (take the area under the curve or compute total displacement)? I don't understand what the FTC even allows anyone to do. Without the FTC, I can already evaluate definite integrals. Without the FTC, I can already take derivatives. So, with the FTC, I can take an integral then take a derivative? So, what's even the point of the FTC? I really don't see anything "fundamental" whatsoever about this redundant, self-evident "theorem". This is like taking the inverse of an inverse: right back to $f(x)$. That's simply a "neat trick", not a "Fundamental Theorem of Algebra". – asked by JackOfAll (edited by Michael Hardy)

Comments:
- The definition of (one kind of) an integral is the limit of a Riemann sum; how do I know a priori that the given limit is an antiderivative? – Tyler Dec 11 '14 at 0:12
- You should sue whoever taught you that the definition of the integral is an antiderivative. – Bruno Joyal Dec 11 '14 at 0:13
- As for "without FTC I can evaluate integrals": how do you personally evaluate definite integrals without it? Do you compute a limit of Riemann sums every single time? – Nick D. Dec 11 '14 at 0:19
- @NickD. My guess would be that it appears obvious to the OP because he's been taught that an integral is an antiderivative. – David Dec 11 '14 at 0:23
- @JackOfAll You're asking too many questions in the same post (which sounds quite rant-ish as well); narrow down a specific question for better feedback. As for why one would differentiate the "area under a curve," think of physically relevant quantities that are related by integration, e.g. velocity vs. distance traveled. Sometimes you are only given an integral as the definition of one quantity, and it's useful to know how to differentiate to get the other. – Gyu Eun Lee Dec 11 '14 at 0:30

I am guessing that you have been taught that an integral is an antiderivative, and in these terms your complaint is completely justified: this makes the FTC a triviality. However, the "proper" definition of an integral is quite different from this and is based upon Riemann sums.
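(For concreteness, here is a minimal numerical sketch of that Riemann-sum definition; the helper below is purely illustrative, and note that no antiderivative appears anywhere in it.)

```python
# Approximate the area under f(x) = x^2 on [0, 1] with n rectangles
# and watch the sums converge to 1/3 -- no antiderivative involved.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    # left-endpoint rectangles; any sample points work in the limit
    return sum(f(a + i * dx) for i in range(n)) * dx

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
# 10     0.285
# 100    0.32835
# 1000   0.3328335
# 10000  0.333283335
```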
The full construction is too long to explain here, but there will be many references online. Something else you might like to think about, however: the way you have been taught makes it obvious that an integral is the opposite of a derivative. But then, if the integral is the opposite of a derivative, this makes it extremely non-obvious that the integral can be used to calculate areas! Comment: to keep the real experts happy, replace "the proper definition" by "one of the proper definitions" in my second sentence. – David

Comments:
- Riemann is fine, but you could go even further to Lebesgue ;) – Tobias Kienzler Dec 11 '14 at 8:52
- @Tobias Hence the final comment in my answer :) – David Dec 11 '14 at 11:32
- @TobiasKienzler: or then back to sums with the Henstock-Kurzweil integral, which integrates all the Lebesgue-integrable functions and then some, with a much easier definition. – mbork Dec 13 '14 at 21:22
- Neat, I didn't know about that one – Tobias Kienzler Dec 14 '14 at 12:36
- I am the OP. I am truly grateful and humbled by the knowledge shared by this collective brain trust. In the next few days, I am going to do this thread justice and read everything when I can be alone for a few hours next week. Thank you again for this. – JackOfAll Dec 14 '14 at 23:12

You seem to think that you already know that definite integrals have something to do with antidifferentiation. Probably you think this because $\int_a^b f(x) \, dx$ looks remarkably similar to $\int f(x) \, dx$. But, without the FTC, these two things have nothing whatsoever to do with one another. They are two completely unrelated operations which, for some bizarre reason, share a symbol. $\int f(x) \, dx$, as you note, means the antiderivative of $f(x)$. But $\int_a^b f(x) \, dx$ means the area between the curve you get when you graph $f(x)$ and the $x$-axis, over the interval $[a,b]$. Without the FTC, there is no reason to expect this to have anything to do with the antiderivative (or "indefinite integral"). – Kundor

Comments:
- @MackTuesday: Try that for $f(x) = \frac1x$ and see what happens. (Hint: Your definition fixes the integration constant so that $F(0) = 0$; if the antiderivative $F$ of $f$ diverges at zero, this is not possible.) Yes, of course there is a relationship between the two concepts, but then, that relationship pretty much is the FTC. – Ilmari Karonen Dec 11 '14 at 17:03
- @IlmariKaronen ...and yet the point remains that they're not unrelated. Note that without bounds, the antiderivative sign also represents the "indefinite integral"; so of course it's related to the definite integral! – Kyle Strand Dec 12 '14 at 0:55
- The bizarre reason is precisely the fundamental theorem of calculus :) – Roberto Bonvallet Dec 12 '14 at 17:55
- @KyleStrand, there's no "of course the definite integral and the indefinite integral are related" unless you know the FTC. That was Kundor's point. Despite the relationship of the symbols (or the names), they have completely different origins (definitions). People created these symbols and names because they were able to prove the FTC. – Paul Draper Dec 14 '14 at 4:33

As integrals and derivatives are presented in Apostol's Calculus, it becomes quite evident that the relationship between them, the Fundamental Theorem of Calculus, is quite remarkable and a bit unexpected.
Apostol actually introduces the notion of an integral first: the notation $$\int_{x=a}^b f(x) \, dx$$ is intended to represent the signed area enclosed by a function $f(x)$ and the $x$-axis, on the interval $x \in [a,b]$. This idea of "area" is something familiar to us from elementary geometry, and it is not difficult to conceptualize the "area under a curve" as an extension of the areas of more familiar geometric shapes, such as polygons and circles. Thus it seems natural to talk about the area enclosed by the curve of a parabola $f(x) = x^2$ and the $x$-axis on the interval $[0,1]$. Indeed, Archimedes of Syracuse, thousands of years ago, used a method remarkably similar to Riemann sums to obtain areas enclosed by parabolic segments. Now let's switch gears and talk about derivatives: a derivative $f'(x)$ of a function $f(x)$ at a point $x = a$ has the geometric interpretation of the slope of the tangent line to the function at that point. Loosely speaking, the greater this value, the more rapidly the function $f(x)$ is increasing at that point. More formally, $$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x-a}.$$ What makes the integral and the derivative the central concepts in calculus (or analysis, if you prefer) is that both are mathematical ideas involving some kind of limiting process: the (Riemann) integral is understood as the sum of the rectangular areas defined by successively more refined partitions of the interval $[a,b]$, and the derivative is understood as the slope of a secant line as one intersection point approaches the other. Note that in these contexts, it is not at all obvious that the two concepts are related. Yet the Fundamental Theorem of Calculus states (in one form) that $$\int_{x=a}^b f(x) \, dx = F(b) - F(a)$$ where $F(x)$ is some function satisfying $F'(x) = f(x)$. This gives us a means to compute, without resorting to Riemann summation, a definite integral as the difference of the integrand's antiderivative at the interval's endpoints. (In fact, the FTC is a unidimensional special case of Stokes' Theorem and as such holds deeper insights, but that's not in the scope of our discussion.) So, in summary, the FTC is not a trivial result. Apostol does in fact provide a quasi-geometric heuristic "proof" of why this relationship should exist, and it is worth reading. And if we are to have a proper appreciation for calculus, it helps to have the proper pedagogy and motivation that his text provides. But should you desire to understand the foundations of calculus further, then a more rigorous and less computationally oriented treatment is recommended, such as that found in Walter Rudin's Principles of Mathematical Analysis. – heropup

Comments:
- You clarified it for me, thanks! – PatrickT Dec 12 '14 at 17:24
- Ah, so $F(b) - F(a)$ is a form of that theorem? I've been taught that $F(b) - F(a)$ is just "the Newton-Leibniz formula"... – myfreeweb Dec 13 '14 at 19:23
- @myfreeweb Some mathematicians actually DEFINE the integral by the Newton-Leibniz formula. While this may be pedagogically much easier for beginning calculus students to understand than Riemann sums or the even more sophisticated formulations of the integral, it's a very mathematically misleading thing to do, to say the least. – Mathemagician1234 Dec 14 '14 at 5:47
- This made the point most clearly. The integral is actually defined independently of the derivative: it is not merely defined as the anti-derivative, but as the Riemann sum of rectangles under the curve. Later on, one can attempt to tie them together as "inverses" using the FTC. – JackOfAll Aug 26 '15 at 19:24

From an intuitive standpoint, $F(x)=\int_a^xf(t)\,dt$ can be viewed as a cumulative function that tallies up the values of $f$ from $a$ to whatever $x$ is. With this in mind, it shouldn't be surprising that $\frac{d}{dx}F(x)=f(x)$. From a theoretical standpoint, FTC part 2 is the theorem that allows us to write $$\int_a^b f(t)\,dt=\left.F(t)\right|_a^b$$ where $F(t)$ is an antiderivative of $f(t)$. In other words, FTC2 allows us to evaluate definite integrals using indefinite integrals. The FTC also allows us to define new functions by integrating others, such as $$\operatorname{erf}(x)\triangleq\frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt\hspace{30pt}\operatorname{Li}(x)\triangleq\int_2^x \frac{dt}{\ln t}$$ $$\operatorname{C}(x)\triangleq\int_0^x \cos(t^2)\,dt\hspace{30pt}\operatorname{S}(x)\triangleq\int_0^x \sin(t^2)\,dt$$ – Alexander Gruber♦

The fundamental theorem of calculus is just a continuous generalization of telescoping series. Suppose you have a sequence of numbers, $$x_1,~x_2,~x_3,~\dots,~x_n,$$ like, for example, $1,2,5,7,12$. You can consider the sequence of differences between each number and the next one, $$x_2-x_1,~x_3-x_2,~x_4-x_3,~\dots,~x_n-x_{n-1},$$ which in the example would be $1,3,2,5$. If you add up the differences, most of the terms in the sum cancel and you get the total difference between the first and the last number, \begin{align} & (x_2-x_1) + (x_3-x_2) + (x_4-x_3) + \dots +(x_n-x_{n-1}) \\ &= -x_1 + (x_2 - x_2) + (x_3 - x_3) + \dots + (x_{n-1}-x_{n-1}) + x_n \\ &= x_n - x_1. \end{align} In the example this is $1 + 3 + 2 + 5 = 11 = 12 - 1$. In the fundamental theorem of calculus the concept is the same, but with the following replacements:
- Instead of sequences you have functions.
- Instead of differences you have the derivative.
- Instead of sums you have the integral.
– Nick Alger

Comments:
- This explains the theorem, not the point of the theorem. – Jessica B Dec 11 '14 at 7:16
- The point is that it extends basic ideas about discrete things (sums, differences) to the continuous setting... – Nick Alger Dec 11 '14 at 7:19
- Many of the top-voted answers say that the relationship between integrals and derivatives is "surprising". But I feel it is quite intuitive, as this answer demonstrates. Admittedly we have to prove that the idea still works as we take smaller and smaller slices - presumably that is what the FTC proves. But why do they say it is surprising? – joeytwiddle Dec 13 '14 at 8:54

The importance of the fundamental theorem of calculus (and some of the other posters have given correct responses, don't get me wrong) can be best understood in a historical context that goes back to a century before Riemann constructed his precise definition of the integral. The basic idea of the integral is essentially that of areas and volumes, which dates back to Ancient Greece and Archimedes' "method of exhaustion," by which these quantities were computed by inscribing polygons of known area into an arbitrary region, with smaller and smaller areas, until the region is "filled up," and then adding up all the areas. (If this sounds like basically the definition of Riemann sums, except with more general figures than rectangular partitions, well, you're right.
In a lot of ways, taking the limit of a Riemann sum is a rigorous formulation of exhaustion.) As you could well imagine, this was an incredibly cumbersome and lengthy procedure for any but the simplest figures. For example, it took Archimedes months to obtain a reasonable approximation of the area of a circle using this method. Another, much simpler example was Archimedes' use of the procedure to obtain the area under a parabola, which was fairly important in not only mathematics but physics and construction problems. A good discussion of the details can be found here. You can get the idea from these two examples that using this procedure meant that something we usually take for granted from calculus as a relatively simple computation was a Herculean task fit only for geniuses to undertake.
Later mathematicians, from the Renaissance onward, used geometric methods and calculations with limits to obtain areas and volumes. For example, Galileo was able to guess the area of one arch of the cycloid, a curve generated by a circle rolling in the plane, to be $3\pi a^2$, three times the area of the generating circle of radius $a$. A good discussion of the details of how a geometric proof of this fact goes, comparing it to a calculus solution, can be found here. Again, this was somewhat easier, but not much.
The relationship between the tangent and the quadrature problems was first recognized by Isaac Newton's teacher, Isaac Barrow, and fully exploited by Newton, Leibniz, and the Bernoullis. Since most functions that were known from both basic geometry and physics at that point were fairly smooth and had antiderivatives, the new science of calculus, unified by the Fundamental Theorem, made areas and volumes a relatively straightforward computation for all mathematics and science students. Without it, it's hard to imagine that calculus (and, for that matter, most of classical mechanics and subsequent breakthroughs in physics and other hard sciences) would have been possible. Furthermore, if it were possible, it would have taken dozens of centuries to achieve. Also, since most of the later developments in the theory of calculus, such as the Riemann integral and its subsequent refinements, as well as developments in differential equations and functional analysis, were abstracted largely from calculus, none of these things would have been likely either. So you can make a very good case that calculus, and its descendant, analysis, would have died stillborn without some version of the Fundamental Theorem of Calculus. – Mathemagician1234 (edited by Alexander Gruber♦)

There are some cute applications of the fundamental theorem of calculus, and I'm sure some of the other answers will dig them up. But for the most part I agree with you: the FTC, in one dimension, isn't all that exciting! But it's just the tip of the iceberg. You have focused on one form of the fundamental theorem of calculus; there is a second form, namely that $$\int_a^b \frac{df}{dx}\,dx = f(b) - f(a).\tag{1} $$ I wouldn't blame you for thinking this is even more obvious than the first form! But the FTC is one special case, and the most boring one at that, of a general principle called Stokes's Theorem that is much more deserving of the "fundamental" moniker.
Let's take a closer look at what equation 1 is saying. The left-hand side asks you to take some (possibly horrible and complicated) function $f$, take its derivative, and then sum the values of that derivative up over the entire interval $[a,b]$. FTC says you can get the same answer just by looking at $f$ at two values.
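(A quick numerical check of that claim, with an arbitrarily chosen test function and step count, both purely illustrative:)

```python
import numpy as np

# Check equation (1): accumulating f'(x) across [a, b] recovers f(b) - f(a).
# f(x) = sin(x) * exp(x) is an arbitrary smooth test function.
a, b, n = 0.0, 2.0, 100_000
x = np.linspace(a, b, n + 1)
f = lambda t: np.sin(t) * np.exp(t)
df = (np.cos(x) + np.sin(x)) * np.exp(x)       # f'(x), computed by hand

# trapezoid rule, written out so only basic numpy is needed
lhs = np.sum((df[:-1] + df[1:]) / 2) * (b - a) / n
rhs = f(b) - f(a)
print(lhs, rhs)    # the two agree to roughly nine significant figures
```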
The key points here are that: the LHS requires knowing $f$ (and its derivative) everywhere along $[a,b]$, while the RHS needs $f$ only at the boundary of the interval; and the LHS requires being able to take the derivative of $f$, while the right-hand side requires only knowing $f$, and no derivatives. Well duh, you're thinking. Isn't that the entire point of anti-derivatives? Yes, indeed. But it turns out that both of these benefits carry over, in a beautiful way, to higher dimensions, where an equivalent property holds. Let's say you have a region of the plane $\Omega$, and its boundary curve $\partial \Omega$. Then: $$\int_{\Omega} \nabla \cdot v \,dV = \int_{\partial \Omega} v\cdot \hat{n}\,dA.$$ There is some fairly elementary intuition about what these terms mean, but going into it without at least a bit of vector calculus knowledge would take us too far astray... the key point though is that the above lets you turn integrals over areas in the plane into integrals over their one-dimensional boundaries, just like the FTC turns a one-dimensional integration into a zero-dimensional difference of values. It doesn't always work, but it works enough of the time to be extremely powerful. For example, let's say you draw a closed curve $\gamma(s)$ in the plane. What is the area enclosed by the curve? Maybe you've learned some tools for computing this area: some slicing techniques, perhaps. You've also seen that these techniques are a huge pain; even more so when the curve $\gamma$ is complicated with lots of loops and concavities. It turns out you can compute the area enclosed by only integrating around $\gamma$: $$\textrm{Area} = \int \frac{1}{2} \gamma(s) \cdot \gamma'(s)^{\perp}\, ds.$$ The two-dimensional problem has become a one-dimensional problem, and much more tractable both analytically and computationally. You can find similar nice formulas for many other geometric quantities of interest, such as the center of mass of a region $\Omega$, its moment of inertia, etc. -- all quantities that nominally depend on the entire interior of $\Omega$ -- using only integration around the boundary. One final example: let's say I give you a point $p$ in the plane, and a super-complicated closed curve $\gamma$. How can you tell if the point is inside, or outside, the region enclosed by $\gamma$? People use various tricks to do this, for example by drawing a ray from $p$ to a point at infinity, and counting how many times the ray intersects $\gamma$... but you can do it robustly and easily using a boundary integral, $$\int \frac{-1}{2\pi\|\gamma(s) - p\|^2}[\gamma(s)-p]\cdot \gamma'(s)^{\perp}\,ds$$ which will be equal to $1$ if $p$ is inside the region, or $0$ if $p$ is outside (assuming I've not made any mistakes in my calculation). There's more: recall the second key point about the FTC above: the LHS requires computing derivatives, while the RHS does not. This comes up all of the time when doing numerical calculations and simulations. For example, let's say you want to simulate the way that your clothing wrinkles and folds as you dance. It turns out that if you represent your shirt as a surface parameterized by $r(s,t): \mathbb{R}^2 \to \mathbb{R}^3$, the bending energy of the shirt, which is required to correctly compute shirt physics, is given by $$E \propto \int (\Delta r \cdot \hat{n})^2\,dA.$$ Again I won't go too much into the details of the math; the important part is that computing $\Delta r$ requires knowing two derivatives of $r$.
This means that $r$ must be twice-differentiable for the formula to make sense; that's fine in an ideal setting, but what if the geometry of your shirt comes from a Microsoft Kinect, or is inferred from video footage? The shirt surface will be "chunky," or have lots of noise, and you often won't even be able to compute first derivatives, never mind second derivatives. It turns out that Stokes's Theorem can be used to reduce the number of derivatives that are needed, and is behind the cloth animation you see in video games and movie effects. In fact, countless physical simulations, from how the galaxy formed, to how wind flows around an airplane wing, to how your cheek deforms when you get punched in the face, rest on a foundation made up of the "pointless trick" that is the (generalized) FTC. $\begingroup$ "the FTC, in one dimension, isn't all that exciting!" I think the past 500 years of mathematics would disagree with that statement. $\endgroup$ – Daniel McLaury Dec 11 '14 at 6:19 $\begingroup$ So the fact that knowing the forces acting on a moving body allows you to determine its trajectory isn't important to you? What kind of applied math do you do, exactly? $\endgroup$ – Daniel McLaury Dec 11 '14 at 6:34 $\begingroup$ "Second the practice of approximating an integral curve of a Hamiltonian system is an exercise in purely definite integration." Yes, and this is true because of the fundamental theorem of calculus! $\endgroup$ – Daniel McLaury Dec 11 '14 at 6:45 $\begingroup$ Why are people downvoting this answer? It seems a bit anti-intellectual to do so; it's the only one that hints at the enlightening, abstract notion of what an integral means... and best points the way to further learning. You may disagree about whether the 1-dimensional FTC is exciting, but the author clearly indicated that this was an opinion. It's a little disappointing to see all the other answers fixate on the 1-dimensional case, since there's so much more beauty and meaning to the FTC. $\endgroup$ – Max Wallace Dec 12 '14 at 22:00 $\begingroup$ In my opinion, a good answer entails some inference about the mathematical background of the individual asking the question, and furnishes a response in that context. A student who asks the question that was asked is highly unlikely to be in a position to appreciate Stokes' Theorem. That's not what was asked. And for what it's worth, I don't think the one-dimensional case is trivial: that would imply no need for rigorous proof. If someone asks you about the quadratic formula, you don't respond with a treatise on Galois theory and solvability by radicals. $\endgroup$ – heropup Dec 13 '14 at 8:21 $$\frac{d}{dx}\int_a^xf(t)\,dt\tag{1}$$ There's a lot going on here and plenty of good answers already, but I'll chime in question by question. We might write $(1)$ because we are confronted by a function which is defined in terms of an integral, such as $$A(x)=\int_a^x f(t)\,dt\tag{2}$$ that we want to "do" calculus on just like we "did" calculus on a multitude of other functions (polynomials, exponentials, logs, trig, products, quotients, compositions, etc.): find instantaneous rates of change, linear approximations, local and/or global extrema, intervals of increase/decrease, intervals of concavity, inflection points, etc. In order to do those things, we want to get our hands on $A'(x)$. The FTC offers an extremely efficient way to do so. To get a geometric feel for functions of the form of $(2)$, consider $f(t)=\sin t$ (in blue) and $g(x)=\int_0^x \sin t\,dt$ in red.
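If you want to see this picture for yourself, here is a minimal Python sketch (the plotting choices are my own; it uses the closed form $\int_0^x \sin t\,dt = 1-\cos x$):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 400)
f = np.sin(x)      # the integrand f(t) = sin(t), plotted in blue
g = 1 - np.cos(x)  # the accumulator g(x) = integral of sin(t) dt from 0 to x, in red

plt.plot(x, f, 'b', label=r'$f(t)=\sin t$')
plt.plot(x, g, 'r', label=r'$g(x)=\int_0^x \sin t\,dt$')
plt.axhline(0, color='gray', linewidth=0.5)
plt.legend()
plt.show()
```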
As $x$ varies here from $x=0$ to $x=2\pi$, the different amounts of area under the curve $y=f(t)$ are accumulated, as indicated by the red shading. This "accumulator function" $g(x)$ is itself a bona fide function. At each fixed $x$, we can compute the accumulated area under $f$ from $0$ to that $x$, resulting in the number $g(x)$. We can then plot the point $(x,g(x))$ and repeat the process to generate the graph of $g(x)$ in red. Hopefully once you see that functions like $g(x)$ make sense, it is natural to want to do calculus on them. In $(1)$, $t$ is a dummy variable of integration that ranges from $a$ to $x$. If we wrote $$\int_a^x f(x)\,dx,$$ this is a different animal than $(1)$: the $x$ in the integrand here is varying from the lower limit of integration $a$ to the upper limit of integration $x$. Is this what we want to convey? Likely not (at this level). The notation in $(1)$, using a different symbol for the dummy variable of integration than for the independent variable in the upper limit of integration, brings clarity and precision to what it is that we want to communicate. For example, if $v(t)$ is a velocity function for an object on $a\le t\le b$ then $d(t):=\int_a^t v(s)\,ds$ represents the (net) displacement of the object from time $a$ to time $t$. Note that $d(t)$ is indeed a function of the independent variable time $t$ and it is very natural to want to do calculus on $d(t)$, e.g., take its derivative with respect to $t$. JohnD You do not seem to notice that there is a problem: If $f$ is any function on $[a,b]$, then there is an important question: Is there a differentiable function $F$ on $[a,b]$ such that $F'=f$? Obviously if there is one such function, then there are many (add a constant). But is there such a thing in the first place? The symbol $$ \int_a^x f(t) dt $$ is not a new notation for such a function (if one exists at all). This thing might give you a real number for every $x\in [a,b]$, and this works for any continuous $f$ by a limit of sums etc. This definition (limit of sums for partitions) can be given even before one ever talks about derivatives! So you define a new function $F$ out of a given continuous $f$ by $$ F(x):=\int_a^x f(t) dt $$ and now the FTC states that $F'=f$, and this is something you have to prove. Exercise: If you assume that $$ F(x)=\int_0^x e^{t^2} dt $$ is just a function with derivative $e^{x^2}$, please figure out the value $F(1)$. Integration is really, really much more than an operation. What you're missing is that integration is analogous to summation, and that area is just an interpretation of it. I think the easiest way to see that the antiderivative is the area under the curve is with the geometrical interpretation of the fundamental theorem of calculus on Wikipedia: If you think about the function $A(x)$ as the "area under the curve up to $x$", then you'll have: $$A(x+h)-A(x) \approx f(x)\cdot h\implies \frac{A(x+h)-A(x)}{h}\approx f(x)$$ So, when $h$ tends to $0$, geometrically you have a better estimate of the area, and algebraically, you approach the derivative of the area function $A(x)$. The $f(x)$ doesn't depend on $h$, so it stays the same. Therefore you're really saying that: $$\lim_{h\to 0} \frac{A(x+h)-A(x)}{h} = A'(x) = f(x)$$ So the area function of $f$ is the function whose derivative is $f$ itself. This is the antiderivative. But what exactly is area?
What the fundamental theorem of calculus will do is replace "area" by an infinite sum of little rectangles, a method called the Riemann sum, and let their base $\Delta x$ tend to $0$, to get a better approximation of the 'area'. Then, the theorem will replace $A(x)$ by $\int f(x) dx$, where $dx$ means that $\Delta x \to 0$. The theorem will take the derivative of this more precise definition of 'area' and prove its derivative is the function itself. But remember, the power of the integral isn't in the antiderivation process, or the calculation of area. It's really related to infinite sums over a continuous domain. Lucas Zanella $\begingroup$ In the following, why are you able to say A'(x) = f(x)? I missed the connection. Did you take the limit of both sides in the section prior? $$\lim_{h\to 0} \frac{A(x+h)-A(x)}{h} = A'(x) = f(x)$$ $\endgroup$ – JackOfAll Aug 26 '15 at 22:03 $\begingroup$ i.e.: Did you imply this step? $$\frac{A(x+h)-A(x)}{h}\approx f(x)$$ $$\lim_{h\to 0} \frac{A(x+h)-A(x)}{h} \approx \lim_{h\to 0} f(x)$$ $$\lim_{h\to 0} \frac{A(x+h)-A(x)}{h} \approx f(x)$$ $$A'(x) \approx \lim_{h\to 0} f(x)$$ $\endgroup$ – JackOfAll Aug 26 '15 at 22:05 $\begingroup$ And why did you suddenly turn the $\approx$ into $=$ ? $\endgroup$ – JackOfAll Aug 26 '15 at 22:09 $\begingroup$ @JackOfAll I did not give a proof, just an intuition. What I'm saying is that $ \frac{A(x+h)-A(x)}{h}\approx f(x)$ when $h\to 0$, thus, since the derivative is just the limit of this expression, we expect $\lim_{h\to 0} \frac{A(x+h)-A(x)}{h}$ to be $f(x)$. For a formal proof, you can follow the Wikipedia page on the 'fundamental theorem of calculus'. $\endgroup$ – Lucas Zanella Aug 28 '15 at 1:50 I don't know whether the following will persuade you. One of the points of FTC is that a continuous function on an interval is the derivative of SOMETHING, and you can use this to define functions with particular desired properties. For example, the function $f: f(x) = \frac{1}{x}$ is continuous on $(0, \infty)$, so that $\int_{1}^{x} \frac{1}{t} dt$ exists for all $x > 0.$ Now you probably know the (natural) $\log$ function, as a function with familiar properties, but FTC can be used to demonstrate the existence of a function with the right properties from scratch, even if you hadn't known before that there was such a function. For if we set $g(x) = \int_{1}^{x} \frac{1}{t} dt,$ then we know that $g^{\prime}(x) = \frac{1}{x}.$ Also, a change of variables shows that $g(ab) = g(a) + g(b)$ for positive real $a$ and $b$, since $\int_{1}^{ab} \frac{1}{t} dt = \int_{1}^{a} \frac{1}{t} dt +\int_{a}^{ab} \frac{1}{t} dt,$ and setting $u = \frac{ t}{a},$ we obtain $ g(ab) = \int_{1}^{ab} \frac{1}{t} dt = \int_{1}^{a} \frac{1}{t} dt +\int_{1}^{b} \frac{1}{u} du = g(a) + g(b).$ In some books, such as Spivak's "Calculus", this approach is used to define the logarithm, and from it the exponential function, rather than just using a definition via power series. The inverse function $h = g^{-1} : \mathbb{R} \to (0,\infty)$ satisfies $h^{\prime}(x) = h(x)$ for all $x$ and $h(0) =1.$ Then when the theory of Taylor series is developed, it is clear that $h$ has the familiar Taylor series. Geoff Robinson $\begingroup$ It's not true that "a continuous function on an interval is the integral of SOMETHING"; I think you must have made an editing mistake.
$\endgroup$ – ruakh Dec 11 '14 at 4:46 $\begingroup$ @ruakh : Oops, yes, absolutely: I meant "a continuous function is the DERIVATIVE of SOMETHING", which is why I went on to discuss the definition of the log function using the integral of $\frac{1}{x},$ which I still think is a useful example if you have some appreciation of rigour. $\endgroup$ – Geoff Robinson Dec 11 '14 at 20:15 From my perspective, your question is about the Second Fundamental Theorem of Calculus. I had the same doubt! But now I understand the power of this amazing theorem. What is the advantage? Some integrals are impossible to solve without a computer. And what if you are in a test and you cannot use a computer or a calculator? Or what if you are solving a system of differential equations? With this theorem you do not need to solve the integral. You can skip that part! Maybe you are in Calculus I and will not use it much, but in Calculus III and Differential Equations it will be very important. Also in many Physics applications. If the upper limit is x and the lower limit is a constant, the derivative cancels the integral and that is the answer! Please check this link: http://beginnermathstackexchange.blogspot.com/2014/12/second-fundamental-theorem-of-calculus.html The most important cases are when the limits of the integral are not only an x. If the upper limit is a function $f(x)$ and the lower limit is a constant, and we are taking the derivative of the integral of $g(x)$, the solution is $g(f(x))\cdot f'(x)$. In this link you will find a problem from the famous Calculus book written by professors Larson and Edwards. http://beginnermathstackexchange.blogspot.com/2014/12/problem-from-calculus-book-written-by_16.html If we have two different functions as the upper and lower limits, then just apply the formula that is in this link: http://beginnermathstackexchange.blogspot.com/2014/12/second-fundamental-theorem-of-calculus_16.html Beginner $\begingroup$ You should explicitly say if that is your own blog $\endgroup$ – Aditya Hase Dec 17 '14 at 17:32 $\begingroup$ I just created a website to put the images. I did not find another way to do it. It is not really a blog. $\endgroup$ – Beginner Dec 17 '14 at 23:46 $\begingroup$ It is recommended that you type up the equations in your post using LaTeX, rather than link to images of the equations. Here is a link that explains how to do it: meta.math.stackexchange.com/questions/5020/… $\endgroup$ – Nick Alger Dec 25 '14 at 22:52 $\begingroup$ I did not know about LaTeX, but I found it a couple of days ago and I am trying to learn it. However, I really appreciate the link that you sent me! $\endgroup$ – Beginner Dec 26 '14 at 1:54 The only reason you know the definition of an integral as an antiderivative is BECAUSE of the FTC. And there are all sorts of situations where it is useful in the real world. For example, if you integrate force along a path, you get energy. Let's say that that force is gravity. FTC says that no matter whether you push a boulder straight up the side of a mountain or take a bunch of switch-backs, the rock still has the same amount of gravitational potential energy at the top of the mountain.
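To make that last point concrete, here is a small numerical sketch (the mass, height and path shapes are made up) that integrates a constant gravitational force along two very different paths with the same endpoints:

```python
import numpy as np

g, m = 9.81, 100.0          # gravitational acceleration (m/s^2) and boulder mass (kg)
F = np.array([0.0, m * g])  # constant force we exert against gravity (pointing up)

def work(path):
    """Numerically sum F . dr over a piecewise-linear path of (x, y) points."""
    return float(np.sum(np.diff(path, axis=0) @ F))

straight = np.array([[0.0, 0.0], [0.0, 100.0]])              # straight up the mountain
t = np.linspace(0.0, 1.0, 1001)
switchbacks = np.column_stack([50 * np.sin(12 * np.pi * t),  # winding switch-back road
                               100 * t])                     # ending at the same height

print(work(straight), work(switchbacks))  # both ~98100 J, as the gradient theorem predicts
```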
How to calculate the number of binary strings containing the substring "00"? [duplicate] Number of binary numbers with two consecutive zeros 3 answers I need to calculate the number of possible binary strings containing "00" as a substring. I know the length of the binary string. E.g. for a string of length 4, the possible strings are: 0000 0001 0010 0011 0100 1000 1001 1100 I just need the number of possible combinations, not to enumerate all of them. combinatorics binary Douglas S. Stones kBisla marked as duplicate by MJD, Andrey Rekalo, Start wearing purple, Thomas, Amzoti Aug 13 '13 at 17:41 $\begingroup$ possible duplicate of Number of binary numbers with two consecutive zeros; How many $N$ digits binary numbers can be formed where $0$ is not repeated; How many bit-strings of length 7 have exactly 2 consecutive zeros; Probability of having at least $K$ consecutive zeros in a sequence of 0s and 1s $\endgroup$ – MJD Aug 13 '13 at 17:01 Let $a_n$ be the number of strings of length $n$ that do contain $00$, and let $b_n$ be the number that don't; of course $b_n=2^n-a_n$, but it's actually easier to determine $b_n$. Consider a string of length $n$ that does not contain $00$. If it ends in $1$, it can be obtained from a string of length $n-1$ that does not contain $00$ by appending a $1$. If it ends in $0$, it can be obtained from a string of length $n-2$ that does not contain $00$ by appending $10$. Assuming that $n\ge 2$, every string of length $n$ that does not contain $00$ is obtained in exactly one of these two ways, so $b_n=b_{n-1}+b_{n-2}$. Clearly $b_0=1$, since the empty string does not contain $00$, and $b_1=2$. The recurrence is the same as that for the Fibonacci numbers, $b_0=F_2$, and $b_1=F_3$, so in general we have $b_n=F_{n+2}$ and therefore $$a_n=2^n-F_{n+2}\;.$$ Using the closed-form expressions in the linked article, we can write $$a_n=2^n-F_{n+2}=2^n-\frac1{\sqrt5}\left(\varphi^{n+2}-\widehat\varphi^{n+2}\right)=2^n-\left\lfloor\frac{\varphi^{n+2}}{\sqrt5}+\frac12\right\rfloor\;,$$ where $\varphi=\frac12\left(1+\sqrt5\right)$ and $\widehat\varphi=\frac12\left(1-\sqrt5\right)$. Brian M. Scott $\begingroup$ @Czechnology: You're welcome! $\endgroup$ – Brian M. Scott Nov 5 '14 at 17:39
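For small $n$, the closed form is easy to sanity-check by brute force; a minimal sketch:

```python
from itertools import product

def fib(n):
    """Fibonacci numbers with F_1 = F_2 = 1, matching the answer above."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def brute_force(n):
    """Count length-n binary strings that contain '00', by enumeration."""
    return sum('00' in ''.join(bits) for bits in product('01', repeat=n))

for n in range(1, 15):
    assert brute_force(n) == 2**n - fib(n + 2)
print("a_n = 2^n - F_{n+2} verified for n = 1..14")  # e.g. a_4 = 16 - 8 = 8
```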
Antiglycation potential of Indigoferin A, Indigoferin B and Indigoferin C natural products from Indigofera heterantha Brandis Ayesha Khan1, Ajmal Khan2, Manzoor Ahmad3, Mumtaz Ali3, Umar Farooq1, Farhan Ahmad Khan1 & Syed Majid Bukhari1 Clinical Phytoscience volume 7, Article number: 5 (2021) Diabetes is a long-lasting and serious disease that affects the lives of individuals, families, and societies worldwide. The hyperglycemia of diabetes mellitus produces Advanced Glycation End Products (AGEs), which are associated with diabetic complications like neuropathy, nephropathy, retinopathy, and cardiovascular diseases. In this study, the natural products isolated from Indigofera heterantha Brandis, Indigoferin A (S1), Indigoferin B (S2) and Indigoferin C (S3), were evaluated for their in vitro antiglycation activity. The compounds exhibited significant inhibitory activity against the formation of Advanced Glycation End-Products, with IC50 values of 674.25 ± 3.2 μM, 407.03 ± 4.7 μM and 726.41 ± 2.1 μM, respectively. An important structure-activity relationship was observed: intramolecular hydrogen bonding interactions suppressed the antiglycation activity of compound S3. Thus, the study clearly demonstrates that the number and the position of substituents directly influence the inhibitory activity of the natural products by altering the sugar or protein binding affinity. This study explains for the first time the antiglycation inhibitory ability of chemical constituents isolated from I. heterantha, which may be useful against the late diabetic complications mentioned above. Indigofera heterantha Brandis (I. heterantha), belonging to the family Fabaceae, is a small tree or shrub with pinnate leaves, widely distributed in tropical and subtropical regions. The name Indigofera is due to the presence of indigo flowers in most species of this family. The plants of this genus have many important medicinal properties and are thus widely used in folk medicine for the treatment of whooping cough, hepatitis [1], and toothache [2]. Previous studies showed that compounds isolated from this genus act as excellent anti-inflammatory agents, especially for snake bites or insect stings, and possess good antimicrobial, antifungal [3], antibacterial [4], and urease inhibitory activities [5]. Due to the several important medicinal properties exhibited by the compounds isolated from I. heterantha, it was envisioned to investigate the natural products isolated from this species for their in vitro antiglycation potential. The antiglycation activity of natural products from I. heterantha has not been explored previously, which makes them interesting candidates for investigation into their diverse biological properties. Diabetes mellitus causes high blood sugar levels, leading to a condition commonly termed hyperglycemia [6]. Hyperglycemia, when it persists for a longer period of time, facilitates the synthesis of special non-enzymatic glycated products called Advanced Glycation End Products (AGEs) [7]. Previous studies have revealed a positive association between tissue AGEs and microvascular as well as macrovascular complications related to diabetes [8], with glucose acting as a long-term fuel for diabetic complications [9]. Several factors contribute towards the development of AGEs, including the duration and the degree of hyperglycemia, tissue permeability to free blood glucose, and protein half-life [10].
AGEs, through crosslinking the cellular matrix of long-lived proteins, alter tissue function and mechanical properties [11], resulting in the onset of late diabetic complications like neuropathy, nephropathy, retinopathy, and cardiovascular diseases. Currently, various pharmacological and natural antiglycating agents are under investigation to prevent the formation of AGEs. These include aminoguanidine, rutin (Fig. 1), pyridoxamine, antioxidants, aspirin, and RAGE blockers. Several synthetic compounds and plant extracts have also shown significant antiglycation activity [12]. Aged garlic extract exhibited excellent antiglycation activity in vitro [13]. Polysaccharide fractions extracted from pumpkin and Punica granatum have also been reported as good inhibitors of glycation [14, 15]. Structure of Rutin The current study describes the antiglycation activities of three natural compounds, indigoferin A (S1), indigoferin B (S2) and indigoferin C (S3, Fig. 2), isolated from I. heterantha [5]. The detailed extraction and isolation have been published, and the compounds were characterized based on 1D and 2D NMR data. This is the first report on the antiglycating activity of these compounds. Structures of compounds isolated from I. heterantha [5] Materials and methods Precoated aluminum sheets and silica gel 60F-254 were purchased from E. Merck. All solvents, such as methanol (HPLC grade), n-hexane (HPLC grade), chloroform (HPLC grade), ethyl acetate (HPLC grade), n-butanol (HPLC grade) and ceric sulphate reagent (laboratory grade), were purchased from Sigma-Aldrich. Chemicals including dimethylsulfoxide (DMSO, analytical grade), glucose anhydrous (laboratory grade), disodium hydrogen phosphate (Na2HPO4, laboratory grade), sodium azide (NaN3, laboratory grade), sodium dihydrogen phosphate (NaH2PO4, laboratory grade) and methylglyoxal (MG, laboratory grade) were purchased from Sigma-Aldrich, while bovine serum albumin (BSA, laboratory grade) was purchased from Research Organics, Cleveland (USA). I. heterantha Wall. was collected from upper Dir, Khyber Pakhtunkhwa (Pakistan), during the month of April 2005. The plant was identified by Prof. Dr. Jahandar Shah, plant taxonomist, University of Malakand, Chakdara. The voucher specimen number GI-014 was placed in the herbarium of the botany department, University of Malakand, Chakdara, Dir (L), Pakistan. Extraction and isolation The detailed extraction and isolation of the compounds Indigoferin A (S1), Indigoferin B (S2) and Indigoferin C (S3) from I. heterantha were published in our previous article [5]. The collected I. heterantha Wall. (10 kg) was shade dried for 3 weeks, followed by pulverization into fine powder, then soaked in MeOH (80% v/v) with occasional stirring at room temperature. After 2 weeks, the material was filtered 3 times and the filtrate obtained was concentrated in vacuo at 40 °C. The MeOH extract (463.5 g) obtained was then suspended in distilled water and extracted with n-hexane (20.71% w/w), chloroform (15.96% w/w), ethyl acetate (12.94% w/w), and n-butanol (19.41% w/w), and finally the aqueous (30.96% w/w) fraction was obtained. Each organic extract was then evaporated to dryness. In vitro antiglycation assay Buffer was prepared by mixing calculated amounts of Na2HPO4 and NaH2PO4 along with NaN3 (to prevent bacterial growth), and the pH was maintained at 7.4, with concentrations of 67 mM and 3 mM, respectively. BSA (10 mg/mL) and MG (50 mg/mL) solutions were prepared in buffer, while test samples (1 mM) were prepared in DMSO.
In vitro antiglycation activity was performed according to the reported method [16] with a few modifications. The samples (S1, S2 and S3) were prepared in DMSO at 1 mM concentration. For IC50 determination, serial dilutions were used. Samples were assayed in triplicate in a 96-well plate, each well containing a 200 μL reaction mixture: the glycated control contained 20 μL of test compound solution, 50 μL of BSA, 50 μL of MG, and 80 μL of phosphate buffer, while the blank control contained 20 μL of DMSO. The plate was incubated for 9 days, maintaining the temperature at 37 °C. After incubation, the change in fluorescence intensity (excitation at 330 nm and emission at 440 nm) was measured using a microplate ELISA reader, Spectra Max Plus384 (Molecular Devices, CA, USA), at 37 °C. The percentage inhibition for each compound was calculated using the formula: $$ \% \text{ inhibition}=\left( 1-\dfrac{\text{fluorescence}_{\text{test compound}}}{\text{fluorescence}_{\text{control}}}\right)\times 100 $$ Rutin (Fig. 1) was used as a positive control with an IC50 value of 294.5 ± 1.5 μM. Each IC50 value is presented as mean ± S.E.M (standard error of the mean), calculated using the formula $$ S.E.M=\frac{s}{\sqrt{N}} $$ where $s$ = sample standard deviation, $$ s=\sqrt{\frac{1}{N-1}\sum \limits_{i=1}^N{\left({x}_i-\overline{x}\right)}^2}, $$ $x_1, \dots, x_N$ = sample data set and $\overline{x}$ = mean value of the sample data. Compounds Indigoferin A (S1) (89 mg), Indigoferin B (S2) (105 mg) and Indigoferin C (S3) (145 mg) were isolated as a black gummy solid, a brown powder and a yellow amorphous powder, respectively, from the ethyl acetate fraction of the methanolic extract of I. heterantha. The detailed spectroscopic data of all these compounds were published in our previous article [5]. On the basis of the spectroscopic data, the structures of the compounds were determined as Indigoferin-A [6-methyl-1-(4-((2S,3S,4S,5S,6R)-3,4,5-trihydroxy-6-(hydroxymethyl)tetrahydro-2H-pyran-2-yloxy)phenyl)heptan-1-one], Indigoferin-B [(2R,3R,4R,5R,6S)-2-(hydroxymethyl)-6-(4-(5-methylhexyl)phenoxy)tetrahydro-2H-pyran-3,4,5-triol] and Indigoferin-C [6-hydroxy-1-(2,4,6-trihydroxyphenyl)heptan-1-one]. In vitro antiglycation activity To explore the medicinal importance of compounds from I. heterantha, the compounds Indigoferin A (S1), Indigoferin B (S2) and Indigoferin C (S3) were evaluated for their in vitro antiglycation potential. Here, all three natural products were found to be significantly active, with IC50 values of 674.25 ± 3.2 μM, 407.03 ± 4.7 μM and 726.41 ± 2.1 μM, respectively, albeit lower in potency than the standard rutin (Table 1). Although the potencies of S1, S2, and S3 are lower than that of the standard, they are still potent enough to show an inhibitory effect. The difference in potency is attributed to the structural differences between the compounds and the standard rutin: rutin is a disaccharide with an α-substituted chromen-4-one (with hydroxyl and phenyl substituents), while S1 and S2 are monosaccharides with β-phenyl substituents, and S3 lacks the sugar moiety. The compound S2 (407.03 ± 4.7 μM) was identified to possess better inhibitory potential compared to the other two natural products S1 and S3. The compound S3 (726.41 ± 2.1 μM) was found to be the least active among the isolated natural products, which could be attributed to the intramolecular hydrogen bonding between the ortho-substituted hydroxyl group and the carbonyl oxygen (Fig. 3).
The unavailability of this hydroxyl group to interact via intermolecular hydrogen bonding with the protein can be a contributing factor to the lower potency of the natural product S3. The suppressing influence of intramolecular hydrogen bonding on activity was previously published for synthetic compounds [17]; here the phenomenon is discussed for natural compounds for the first time. Table 1 In-vitro antiglycation potential of compounds isolated from I. heterantha (N = 3) Intramolecular hydrogen bonding interaction in compound S3 of I. heterantha The comparison of the inhibitory potentials of S1 (674.25 ± 3.2 μM) and S2 (407.03 ± 4.7 μM) shows a significant difference (Fig. 4), which can be due to the structural differences between the two compounds. Here, the lower inhibitory potential of S1 could be due to the presence of the carbonyl group, which extends resonance, creating a dominant electron-withdrawing effect (Fig. 3) [17, 18]. However, S2 has significant antiglycation activity due to the absence of a carbonyl group and the presence of multiple hydroxyl groups [18, 19]. Thus, the substituent and its position remarkably affect the protein or sugar binding activity of each analogue. Comparison of in-vitro antiglycation activity of I. heterantha compounds with the reference compound (rutin). Number of sample treatments (N) = 3. The results were expressed as mean ± SEM Additionally, it was observed that the presence of a sugar moiety and its stereochemistry also influence the antiglycation potential of the compounds. The absence of a sugar moiety in S3 could be another contributing factor towards the lower inhibitory potential of S3 compared to the other two compounds, which have sugar moieties. The comparison of S1 and S2 clearly exhibits the influence of the stereochemistry of the sugar moiety upon the antiglycation activity of these compounds. Here, S1 with its mannose sugar moiety has lower inhibitory potential than S2, and the change of stereochemistry at positions 3 and 4 of the sugar moiety in S2 led to the better inhibitory potential against glycation (Fig. 2). The hydroxyls at positions 3 and 4 of the sugar moiety could be involved in facilitating the hydrogen bonding interactions with the binding site [17,18,19]. Overall, this study provides a useful insight into the structure-activity relationship and antiglycation potential of natural products from I. heterantha. Here, we report for the first time the antiglycation inhibitory ability of natural products isolated from I. heterantha. The study explains the influence of the stereochemistry of the sugar moiety, the availability of hydroxyl groups for hydrogen bonding interactions with the binding site, and the influence of the electron-withdrawing resonance effect upon the antiglycation potential of the natural products. The compounds exhibit different degrees of potency, influenced by the number, the position, and the stereochemistry of their hydroxyl substituents. Additionally, compound S2 could be utilized as the starting point for structure-activity relationship studies aimed at designing potent antiglycating agents. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Hussain F, Islam M, Zaman A. Ethnobotanical profile of plants of Shawar Valley, district Swat, Pakistan. Int J Biol Biotechnol. 2006;3(2):301–7. Purkayastha SK. A manual of Indian timbers: Sribhumi Publishing Company; 1997. Dahot MU.
Antibacterial and antifungal activity of small protein of Indigofera oblongifolia leaves. J Ethnopharmacol. 1999;64(3):277–82. Vijayan M, Jacob K, Govindaraj Y. Antibacterial activity and mutagenicity of leaves of Indigofera tinctoria Linn. J Exp Integr Med. 2012;2(3):263–9. Tariq SA, Ahmad MN, Khan A, Choudhary MI, Ahmad W, Ahmad M. Urease inhibitors from Indigofera gerardiana Wall. J Enzyme Inhib Med Chem. 2011;26(4):480–4. Miyata T. New aspects in the pathogenesis of dialysis-related amyloidosis: pathophysiology of advanced glycation end products in renal failure. Nihon Jinzo Gakkai Shi. 1996;38(5):191–7. Schmidt AM, Du Yan S, Stern DM. The dark side of glucose. Nat Med. 1995;1(10):1002–4. Brownlee M. Biochemistry and molecular cell biology of diabetic complications. Nature. 2001;414(6865):813–20. Peppa M, Uribarri J, Vlassara H. Glucose, advanced glycation end products, and diabetes complications: what is new and what works. Clin Diabetes. 2003;21(4):186–7. Furth A. Glycated proteins in diabetes. Br J Biomed Sci. 1997;54(3):192–200. Brownlee M, Cerami A, Vlassara H. Advanced glycosylation end products in tissue and the biochemical basis of diabetic complications. N Engl J Med. 1988;318(20):1315–21. Khan A, Khan A, Farooq U, Taha M, Shah SAA, Halim SA, Akram A, Khan MZ, Jan AK, Al-Harrasi A. Oxindole-based chalcones: synthesis and their activity against glycation of proteins. Med Chem Res. 2019;28(6):900–6. Ahmad MS, Pischetsrieder M, Ahmed N. Aged garlic extract and S-allyl cysteine prevent formation of advanced glycation endproducts. Eur J Pharmacol. 2007;561(1):32–8. Wang X, Zhang L-S, Dong L-L. Inhibitory effect of polysaccharides from pumpkin on advanced glycation end-products formation and aldose reductase activity. Food Chem. 2012;130(4):821–5. Rout S, Banerjee R. Free radical scavenging, anti-glycation and tyrosinase inhibition properties of a polysaccharide fraction isolated from the rind of Punica granatum. Bioresour Technol. 2007;98(16):3159–63. Ahmed N. Advanced glycation endproducts—role in pathology of diabetic complications. Diabetes Res Clin Pract. 2005;67(1):3–21. Taha M, Naz H, Rasheed S, Ismail NH, Rahman AA, Yousuf S, Choudhary MI. Synthesis of 4-methoxybenzoylhydrazones and evaluation of their antiglycation activity. Molecules. 2014;19(1):1286–301. Liu H, Wang C, Qi X, Zou J, Sun Z. Antiglycation and antioxidant activities of mogroside extract from Siraitia grosvenorii (Swingle) fruits. J Food Sci Technol. 2018;55(5):1880–8. Yeh W-J, Hsia S-M, Lee W-H, Wu C-H. Polyphenols with antiglycation activity and mechanisms of action: a review of recent findings. J Food Drug Anal. 2017;25(1):84–92. We acknowledge the Higher Education Commission (HEC) of Pakistan, COMSATS University Islamabad Abbottabad Campus and the HEJ Research Institute of Chemistry, University of Karachi, for support. Department of Chemistry, COMSATS University Islamabad Abbottabad Campus, Abbottabad, Pakistan Ayesha Khan, Umar Farooq, Farhan Ahmad Khan & Syed Majid Bukhari Natural and Medical Sciences Research Center, University of Nizwa, 616 Birkat Al Mauz, PO Box 33, Nizwa, Oman Department of Chemistry, University of Malakand, Chakdara, Dir (L), Pakistan Manzoor Ahmad & Mumtaz Ali AK and UF conceived and designed the study. Ayesha K. performed the antiglycation activity assays. MA and M. Ali performed the isolation, and FAH and SMB analyzed the data. AK and Ayesha K. wrote the manuscript with inputs and comments from all co-authors.
All authors have read and approved the final version of the manuscript. Correspondence to Ajmal Khan or Umar Farooq. Khan, A., Khan, A., Ahmad, M. et al. Antiglycation potential of Indigoferin A, Indigoferin B and Indigoferin C natural products from Indigofera heterantha Brandis. Clin Phytosci 7, 5 (2021). https://doi.org/10.1186/s40816-020-00238-0 Keywords: Indigofera heterantha Brandis; Advanced Glycation End-products; Antiglycation activity
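As a purely illustrative aside, the percentage-inhibition and S.E.M formulas from the Methods are simple to compute; the sketch below uses entirely hypothetical fluorescence readings, since the raw assay data are not published:

```python
import statistics

# Hypothetical triplicate fluorescence readings (arbitrary units), NOT real assay data
control = [1000.0, 980.0, 1010.0]  # glycated control: BSA + MG, no inhibitor
treated = [410.0, 425.0, 405.0]    # BSA + MG + test compound

# % inhibition = (1 - F_test / F_control) x 100, computed per replicate
inhibitions = [(1 - t / c) * 100 for t, c in zip(treated, control)]

mean = statistics.mean(inhibitions)
sem = statistics.stdev(inhibitions) / len(inhibitions) ** 0.5  # S.E.M = s / sqrt(N)

print(f"inhibition = {mean:.1f} +/- {sem:.1f} % (mean +/- S.E.M, N = {len(inhibitions)})")
```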
Question 476 income and capital returns, idiom The saying "buy low, sell high" suggests that investors should make a: (a) Positive income return. (b) Positive capital return. (c) Negative income return. (d) Negative capital return. (e) Positive total return. Question 478 income and capital returns Total cash flows can be broken into income and capital cash flows. What is the name given to the income cash flow from owning shares? (a) Dividends. (b) Rent. (c) Coupons. (d) Loan payments. (e) Capital gains. An asset's total expected return over the next year is given by: ###r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0} ### Where ##p_0## is the current price, ##c_1## is the expected income in one year and ##p_1## is the expected price in one year. The total return can be split into the income return and the capital return. Which of the following is the expected capital return? (a) ##c_1## (b) ##p_1-p_0## (c) ##\dfrac{c_1}{p_0} ## (d) ##\dfrac{p_1}{p_0} -1## (e) ##\dfrac{p_1}{p_0} ## A share was bought for $30 (at t=0) and paid its annual dividend of $6 one year later (at t=1). Just after the dividend was paid, the share price fell to $27 (at t=1). What were the total, capital and income returns given as effective annual rates? The choices are given in the same order: ##r_\text{total}## , ##r_\text{capital}## , ##r_\text{dividend}##. (a) -0.1, -0.3, 0.2. (b) -0.1, 0.1, -0.2. (c) 0.1, -0.1, 0.2. (d) 0.1, 0.2, -0.1. (e) 0.2, 0.1, -0.1. Question 404 income and capital returns, real estate One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area. The expected total return on Frank's residential property is 7% pa. He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27. The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on. (a) 3.1029% (b) 3.3201% (c) 3.7235% (d) 3.9841% (e) 7% Question 278 inflation, real and nominal returns and cash flows Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account? Question 353 income and capital returns, inflation, real and nominal returns and cash flows, real estate A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order. (a) 3.9216%, 2.9412%, 0.9804%. (b) 3.9216%, 0.9804%, 2.9412%. (c) 3.9216%, 0.9804%, 0.9804%. (d) 1.9804%, 1.0000%, 0.9804%. (e) 1.9608%, 0.9804%, 0.9804%. Question 525 income and capital returns, real and nominal returns and cash flows, inflation Which of the following statements about cash in the form of notes and coins is NOT correct? Assume that inflation is positive. Notes and coins: (a) Pay no income cash flow. (b) Have a nominal total return of zero. (c) Have a nominal capital return of zero. (d) Have a nominal income return of zero. (e) Have a real total return of zero.
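As a quick numeric check of the income/capital decomposition used in the questions above (using the $30 share that pays a $6 dividend and falls to $27):

```python
p0, p1, c1 = 30.0, 27.0, 6.0    # buy price, price after the dividend, dividend

r_capital = (p1 - p0) / p0      # capital return
r_income = c1 / p0              # income (dividend) return
r_total = r_capital + r_income  # same as (c1 + p1 - p0) / p0

print(f"{r_total:.2f} {r_capital:.2f} {r_income:.2f}")  # 0.10 -0.10 0.20 -> choice (c)
```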
Question 526 real and nominal returns and cash flows, inflation, no explanation How can a nominal cash flow be precisely converted into a real cash flow? (a) ##C_\text{real, t}=C_\text{nominal,t}.(1+r_\text{inflation})^t## (b) ##C_\text{real,t}=\dfrac{C_\text{nominal,t}}{(1+r_\text{inflation})^t} ## (c) ##C_\text{real,t}=\dfrac{C_\text{nominal,t}}{r_\text{inflation}} ## (d) ##C_\text{real,t}=C_\text{nominal,t}.r_\text{inflation} ## (e) ##C_\text{real,t}=C_\text{nominal,t}.r_\text{inflation}.t## You expect a nominal payment of $100 in 5 years. The real discount rate is 10% pa and the inflation rate is 3% pa. Which of the following statements is NOT correct? (a) The nominal cash flow of $100 in 5 years is equivalent to a real cash flow of $86.2609 in 5 years. This means that $86.2609 will buy the same amount of goods and services now as $100 will buy in 5 years. (b) The real discount rate of 10% pa is equivalent to a nominal discount rate of 13.3333% pa. (c) The nominal price of goods and services will increase by 3% every year. (d) The real price of goods and services will increase by 3% every year. (e) The present value of your payment will increase by the nominal discount rate every year. What is the present value of a real payment of $500 in 2 years? The nominal discount rate is 7% pa and the inflation rate is 4% pa. (a) $472.3557 (b) $471.298 (c) $436.7194 (d) $435.7415 (e) $405.8112 On his 20th birthday, a man makes a resolution. He will put $30 cash under his bed at the end of every month starting from today. His birthday today is the first day of the month. So the first addition to his cash stash will be in one month. He will write in his will that when he dies the cash under the bed should be given to charity. If the man lives for another 60 years, how much money will be under his bed if he dies just after making his last (720th) addition? Also, what will be the real value of that cash in today's prices if inflation is expected to be 2.5% pa? Assume that the inflation rate is an effective annual rate and is not expected to change. The answers are given in the same order: the amount of money under his bed in 60 years, and the real value of that money in today's prices. (a) $21,600, $95,035.46 (b) $21,600, $49,515.44 (c) $21,600, $4,909.33 (d) $21,600, $2,557.86 (e) $11,254.05, $2,557.86 Question 221 credit risk You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt. Which is the safest investment? Which will give the highest returns? (a) Junior debt is the safest. Preference shares will have the highest returns. (b) Preference shares are the safest. Ordinary shares will have the highest returns. (c) Senior debt is the safest. Ordinary shares will have the highest returns. (d) Junior debt is the safest. Ordinary shares will have the highest returns. (e) Senior debt is the safest. Junior debt will have the highest returns. Question 466 limited liability, business structure Which business structure or structures have the advantage of limited liability for equity investors? (a) Sole traders. (b) Partnerships. (c) Corporations. (d) All of the above. (e) None of the above. Question 531 bankruptcy or insolvency, capital structure, risk, limited liability Who is most in danger of being personally bankrupt? Assume that all of their businesses' assets are highly liquid and can therefore be sold immediately.
(a) Alice has $6,000 cash, owes $10,000 credit card debt due immediately and 100% owns a sole tradership business with assets worth $10,000 and liabilities of $3,000. (b) Billy has $10,000 cash, owes $6,000 credit card debt due immediately and 100% owns a corporate business with assets worth $3,000 and liabilities of $10,000. (c) Carla has $6,000 cash, owes $10,000 credit card debt due immediately and 100% owns a corporate business with assets worth $10,000 and liabilities of $3,000. (d) Darren has $10,000 cash, owes $6,000 credit card debt due immediately and 100% owns a sole tradership business with assets worth $3,000 and liabilities of $10,000. (e) Ernie has $1,000 cash, lent $3,000 to his friend, and doesn't have any personal debt or own any businesses. Question 467 book and market values Which of the following statements about book and market equity is NOT correct? (a) The market value of equity of a listed company's common stock is equal to the number of common shares multiplied by the share price. (b) The book value of equity is the sum of contributed equity, retained profits and reserves. (c) A company's book value of equity is recorded in its income statement, also known as the 'profit and loss' or the 'statement of financial performance'. (d) A new company's market value of equity equals its book value of equity the moment that its shares are first sold. From then on, the market value changes continuously but the book value, which is recorded at historical cost, tends to only change due to retained profits. (e) To buy all of the firm's shares, generally a price close to the market value of equity will have to be paid. Question 473 market capitalisation of equity The below screenshot of Commonwealth Bank of Australia's (CBA) details was taken from the Google Finance website on 7 Nov 2014. Some information has been deliberately blanked out. What was CBA's market capitalisation of equity? (a) $431.18 billion (b) $429 billion (c) $134.07 billion (d) $8.44 billion (e) $3.21 billion Question 444 investment decision, corporate financial decision theory The investment decision primarily affects which part of a business? (a) Assets. (b) Liabilities and owner's equity. (c) Current assets and current liabilities. (d) Dividends and buy backs. (e) Net income, also known as earnings or net profit after tax. Question 445 financing decision, corporate financial decision theory The financing decision primarily affects which part of a business? Question 443 corporate financial decision theory, investment decision, financing decision, working capital decision, payout policy Business people make lots of important decisions. Which of the following is the most important long term decision? (a) Investment decision. (b) Financing decision. (c) Working capital decision. (d) Payout policy decision. (e) Capital or labour decision. Question 515 corporate financial decision theory, idiom The expression 'you have to spend money to make money' relates to which business decision? (e) Diversification decision. Question 490 expected and historical returns, accounting ratio Which of the following is NOT a synonym of 'required return'? (a) total required yield (b) cost of capital (c) discount rate (d) opportunity cost of capital (e) accounting rate of return Which of the following equations is NOT equal to the total return of an asset? Let ##p_0## be the current price, ##p_1## the expected price in one year and ##c_1## the expected income in one year.
(a) ##r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0} ## (b) ##r_\text{total} = \dfrac{c_1+p_1}{p_0} - 1## (c) ##r_\text{total} = \dfrac{c_1}{p_0} + \dfrac{p_1-p_0}{p_0}## (d) ##r_\text{total} = \dfrac{c_1}{p_0} + \dfrac{p_1}{p_0} ## (e) ##r_\text{total} = \dfrac{c_1}{p_0} + \dfrac{p_1}{p_0} - 1## A stock was bought for $8 and paid a dividend of $0.50 one year later (at t=1 year). Just after the dividend was paid, the stock price was $7 (at t=1 year). What were the total, capital and dividend returns given as effective annual rates? The choices are given in the same order: ##r_\text{total}##, ##r_\text{capital}##, ##r_\text{dividend}##. (a) 0.0625, -0.0625, -0.125. (b) 0.0625, 0.125, -0.0625. (c) -0.0625, 0.0625, -0.125. (d) -0.0625, -0.125, 0.0625. (e) -0.125, -0.1875, 0.0625. Question 21 income and capital returns, bond pricing A fixed coupon bond was bought for $90 and paid its annual coupon of $3 one year later (at t=1 year). Just after the coupon was paid, the bond price was $92 (at t=1 year). What was the total return, capital return and income return? Calculate your answers as effective annual rates. The choices are given in the same order: ## r_\text{total},r_\text{capital},r_\text{income} ##. (a) -0.0556, -0.0222, -0.0333. (b) 0.0222, -0.0111, 0.0333. (c) 0.0333, 0.0556, 0.0222. (d) 0.0556, 0.0222, 0.0333. (e) 0.0556, 0.0333, 0.0222. Question 456 inflation, effective rate In the 'Austin Powers' series of movies, the character Dr. Evil threatens to destroy the world unless the United Nations pays him a ransom. Dr. Evil makes the threat on two separate occasions: in 1969 he demands a ransom of $1 million (##10^6##), and again in 1997 he demands a ransom of $100 billion (##10^{11}##). If Dr. Evil's demands are equivalent in real terms, in other words $1 million will buy the same basket of goods in 1969 as $100 billion would in 1997, what was the implied inflation rate over the 28 years from 1969 to 1997? The answer choices below are given as effective annual rates: (a) 0.5086% pa (b) 1.5086% pa (c) 5.0859% pa (d) 50.8591% pa (e) 150.8591% pa Question 155 inflation, real and nominal returns and cash flows, Loan, effective rate conversion You are a banker about to grant a 2 year loan to a customer. The loan's principal and interest will be repaid in a single payment at maturity, sometimes called a zero-coupon loan, discount loan or bullet loan. You require a real return of 6% pa over the two years, given as an effective annual rate. Inflation is expected to be 2% this year and 4% next year, both given as effective annual rates. You judge that the customer can afford to pay back $1,000,000 in 2 years, given as a nominal cash flow. How much should you lend to her right now? (a) $838,907.00 (b) $838,986.09 (c) $841,754.97 (d) $889,996.44 (e) $944,108.22 Question 295 inflation, real and nominal returns and cash flows, NPV When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation: (I) Discount nominal cash flows by nominal discount rates. (II) Discount nominal cash flows by real discount rates. (III) Discount real cash flows by nominal discount rates. (IV) Discount real cash flows by real discount rates. Which of the above statements is or are correct? (a) I only. (b) III only. (c) IV only. (d) I and IV only. (e) II and III only. What is the present value of a nominal payment of $100 in 5 years? The real discount rate is 10% pa and the inflation rate is 3% pa.
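Hint: either deflate the nominal cash flow and discount at the real rate, or gross the real rate up to a nominal rate with the Fisher relation ##(1+r_\text{nominal})=(1+r_\text{real})(1+r_\text{inflation})## and discount the nominal cash flow. Both routes must agree, as this sketch shows:

```python
# Sketch: PV of a nominal cash flow given a real discount rate and inflation.
c_nominal, t = 100.0, 5
r_real, r_infl = 0.10, 0.03

# Route 1: deflate the cash flow to real terms, discount at the real rate.
pv_real_route = (c_nominal / (1 + r_infl) ** t) / (1 + r_real) ** t

# Route 2: convert the rate to nominal terms, discount the nominal cash flow.
r_nominal = (1 + r_real) * (1 + r_infl) - 1
pv_nominal_route = c_nominal / (1 + r_nominal) ** t

print(round(pv_real_route, 4), round(pv_nominal_route, 4))  # identical PVs
```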
(a) $71.2986 (b) $62.0921 (c) $61.5028 (d) $54.276 (e) $53.5612 What is the present value of a nominal payment of $1,000 in 4 years? The nominal discount rate is 8% pa and the inflation rate is 2% pa. (a) $795.62 (b) $792.0937 Which of the following statements about inflation is NOT correct? (a) Real returns approximately equal nominal returns less the inflation rate. (b) Constant prices are the same as real prices. (c) Current prices are the same as nominal prices. (d) If your nominal wage grows by inflation, then your real wage won't change because you will be able to buy the same amount of goods and services as before. (e) Interest rates advertised at the bank are usually quoted in real terms. Question 120 credit risk, payout policy A newly floated farming company is financed with senior bonds, junior bonds, cumulative non-voting preferred stock and common stock. The new company has no retained profits and due to floods it was unable to record any revenues this year, leading to a loss. The firm is not bankrupt yet since it still has substantial contributed equity (same as paid-up capital). On which securities must it pay interest or dividend payments in this terrible financial year? (a) Preferred stock only. (b) The senior and junior bonds only. (c) Common stock only. (d) The senior and junior bonds and the preferred stock. (e) No payments on any security are required since the firm made a loss. Question 452 limited liability, expected and historical returns What is the lowest and highest expected share price and expected return from owning shares in a company over a finite period of time? Let the current share price be ##p_0##, the expected future share price be ##p_1##, the expected future dividend be ##d_1## and the expected return be ##r##. Define the expected return as: ##r=\dfrac{p_1-p_0+d_1}{p_0} ## The answer choices are stated using inequalities. As an example, the first answer choice "(a) ##0≤p<∞## and ##0≤r< 1##" states that the share price must be larger than or equal to zero and less than positive infinity, and that the return must be larger than or equal to zero and less than one. (a) ##0≤p<∞## and ##0≤r< 1## (b) ##0≤p<∞## and ##-1≤r< ∞## (c) ##0≤p<∞## and ##0≤r< ∞## (d) ##0≤p<∞## and ##-∞≤r< ∞## (e) ##-∞<p<∞## and ##-∞<r< ∞## Question 461 book and market values, ROE, ROA, market efficiency One year ago a pharmaceutical firm floated by selling its 1 million shares for $100 each. Its book and market values of equity were both $100m. Its debt totalled $50m. The required return on the firm's assets was 15%, equity 20% and debt 5% pa. In the year since then, the firm: Earned net income of $29m. Paid dividends totaling $10m. Discovered a valuable new drug that will lead to a massive 1,000 times increase in the firm's net income in 10 years after the research is commercialised. News of the discovery was publicly announced. The firm's systematic risk remains unchanged. Which of the following statements is NOT correct? All statements are about current figures, not figures one year ago. (a) The book value of equity would be larger than the market value of equity. (b) The book ROA from accounting would be larger than the required return on assets from finance. (c) The book ROE from accounting would be larger than the required return on equity from finance. (d) The book ROE would be larger than the book ROA. (e) The required return on equity would be larger than the required return on assets.
Hint: Book return on assets (ROA) and book return on equity (ROE) are ratios that accountants like to use to measure a business's past performance. ###\text{ROA}= \dfrac{\text{Net income}}{\text{Book value of assets}}### ###\text{ROE}= \dfrac{\text{Net income}}{\text{Book value of equity}}### The required return on assets ##r_V## is a return that financiers like to use to estimate a business's future required performance which compensates them for the firm's assets' risks. If the business were to achieve realised historical returns equal to its required returns, then investment into the business's assets would have been a zero-NPV decision, which is neither good nor bad but fair. ###r_\text{V, 0 to 1}= \dfrac{\text{Cash flow from assets}_\text{1}}{\text{Market value of assets}_\text{0}} = \dfrac{CFFA_\text{1}}{V_\text{0}}### Similarly for equity and debt. The below screenshot of Microsoft's (MSFT) details was taken from the Google Finance website on 28 Nov 2014. Some information has been deliberately blanked out. What was MSFT's market capitalisation of equity? (a) $395.11 million (b) $21.01 billion (d) $393.95 billion (e) $1.02935 trillion Question 446 working capital decision, corporate financial decision theory The working capital decision primarily affects which part of a business? Question 447 payout policy, corporate financial decision theory Payout policy is most closely related to which part of a business? The expression 'cash is king' emphasizes the importance of having enough cash to pay your short term debts to avoid bankruptcy. Which business decision is this expression most closely related to? Question 516 corporate financial decision theory Which of the following decisions relates to the current assets and current liabilities of the firm? Question 2 NPV, Annuity Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Will you accept or reject Katya's deal? Question 481 Annuity This annuity formula ##\dfrac{C_1}{r}\left(1-\dfrac{1}{(1+r)^3} \right)## is equivalent to which of the following formulas? Note the 3 in the exponent: the annuity has only 3 annual cash flows. In the below formulas, ##C_t## is a cash flow at time t. All of the cash flows are equal, but paid at different times. (a) ##C_0+C_1+C_2+C_3## (b) ##\dfrac{C_0+C_1+C_2+C_3}{(1+r)^3} ## (c) ##C_0+\dfrac{C_1}{(1+r)^1} +\dfrac{C_2}{(1+r)^2} + \dfrac{C_3}{(1+r)^3} ## (d) ##\dfrac{C_1}{(1+r)^1} +\dfrac{C_2}{(1+r)^2} + \dfrac{C_3}{(1+r)^3} ## (e) ##\dfrac{C_1}{(1+r)^1} + \dfrac{C_2}{(1+r)^2} ## Question 499 NPV, Annuity Some countries' interest rates are so low that they're zero. If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years? In other words, what is the present value of five $10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa? (a) $0 (b) $10 (c) $50 (d) Positive infinity (e) Priceless Question 479 perpetuity with growth, DDM, NPV Discounted cash flow (DCF) valuation prices assets by finding the present value of the asset's future cash flows. The single cash flow, annuity, and perpetuity equations are very useful for this. Which of the following equations is the 'perpetuity with growth' equation?
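Hint: a numerical check helps here. Assuming the first cash flow ##C_1## arrives in one year and grows at g each year thereafter, the closed form ##C_1/(r-g)## should match a long partial sum of discounted cash flows (which only converges when ##r>g##):

```python
# Sketch: verify V0 = C1/(r - g) against a truncated infinite sum.
c1, r, g = 1.0, 0.10, 0.05

closed_form = c1 / (r - g)  # 20.0
partial_sum = sum(c1 * (1 + g) ** (t - 1) / (1 + r) ** t for t in range(1, 2001))

print(closed_form, round(partial_sum, 6))  # the sum converges to the closed form
```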
(a) ##V_0=\dfrac{C_t}{(1+r)^t} ## (b) ##V_0=\dfrac{C_1}{r}.\left(1-\dfrac{1}{(1+r)^T} \right)= \sum\limits_{t=1}^T \left( \dfrac{C_t}{(1+r)^t} \right) ## (c) ##V_0=\dfrac{C_1}{r-g}.\left(1-\left(\dfrac{1+g}{1+r}\right)^T \right)= \sum\limits_{t=1}^T \left( \dfrac{C_t.(1+g)^t}{(1+r)^t} \right) ## (d) ##V_0=\dfrac{C_1}{r} = \sum\limits_{t=1}^\infty \left( \dfrac{C_t}{(1+r)^t} \right) ## (e) ##V_0=\dfrac{C_1}{r-g} = \sum\limits_{t=1}^\infty \left( \dfrac{C_t.(1+g)^t}{(1+r)^t} \right) ## Question 4 DDM For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy Carla's share or politely decline? For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be ##100(1+0.05)^1=$105.00##, and the year after it will be ##100(1+0.05)^2=$110.25## and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline? Question 201 DDM, income and capital returns The following is the Dividend Discount Model (DDM) used to price stocks: ###P_0=\dfrac{C_1}{r-g}### If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected: (a) Dividend growth rate is equal to the long term expected growth rate of the stock price. (b) Dividend growth rate is equal to the long term expected capital return of the stock. (c) Dividend growth rate is equal to the long term expected dividend yield. (d) Total return of the stock is equal to its long term required return. (e) Total return of the stock is equal to the company's long term cost of equity. Question 497 income and capital returns, DDM, ex dividend date A stock will pay you a dividend of $10 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 5% pa, so the next dividend after the $10 one tonight will be $10.50 in one year, then in two years it will be $11.025 and so on. The stock's required return is 10% pa. What is the stock price today and what do you expect the stock price to be tomorrow, approximately? (a) $200 today and $210 tomorrow. (b) $210 today and $220 tomorrow. (c) $220 today and $230 tomorrow. (d) $210 today and $200 tomorrow. (e) $220 today and $210 tomorrow. Question 289 DDM, expected and historical returns, ROE In the dividend discount model: ###P_0 = \dfrac{C_1}{r-g}### The return ##r## is supposed to be the: (a) Expected future total return of the market price of equity. (b) Expected future total return of the book price of equity. (c) Actual historical total return on the market price of equity. (d) Actual historical total return on the book price of equity. (e) Actual historical return on equity (ROE) defined as (Net Income / Owners Equity). Question 40 DDM, perpetuity with growth A stock is expected to pay the following dividends:

Cash Flows of a Stock
Time (yrs)    0     1     2     3     4     ...
Dividend ($)  0.00  1.00  1.05  1.10  1.15  ...

After year 4, the annual dividend will grow in perpetuity at 5% pa, so: the dividend at t=5 will be $1.15(1+0.05), the dividend at t=6 will be $1.15(1+0.05)^2, and so on. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in three and a half years (t = 3.5)? (d) $22.4457 (e) $3.6341 A fairly valued share's current price is $4 and it has a total required return of 30%.
Dividends are paid annually and next year's dividend is expected to be $1. After that, dividends are expected to grow by 5% pa in perpetuity. All rates are effective annual returns. What is the expected dividend income paid at the end of the second year (t=2) and what is the expected capital gain from just after the first dividend (t=1) to just after the second dividend (t=2)? The answers are given in the same order, the dividend and then the capital gain. (a) $1.3, $0.26 (b) $1.25, $0.25 (c) $1.1025, $0.2205 (d) $1.05, $0.21 (e) $1, $0.2 Question 50 DDM, stock pricing, inflation, real and nominal returns and cash flows Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart. You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate. You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity. Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa. What is the current price of a BHP share? Question 535 DDM, real and nominal returns and cash flows, stock pricing You are an equities analyst trying to value the equity of the Australian telecoms company Telstra, with ticker TLS. In Australia, listed companies like Telstra tend to pay dividends every 6 months. The payment around August is called the final dividend and the payment around February is called the interim dividend. Both occur annually. Today is mid-March 2015. TLS's last interim dividend of $0.15 was one month ago in mid-February 2015. TLS's last final dividend of $0.15 was seven months ago in mid-August 2014. Judging by TLS's dividend history and prospects, you estimate that the nominal dividend growth rate will be 1% pa. Assume that TLS's total nominal cost of equity is 6% pa. The dividends are nominal cash flows and the inflation rate is 2.5% pa. All rates are quoted as nominal effective annual rates. Assume that each month is exactly one twelfth (1/12) of a year, so you can ignore the number of days in each month. Calculate the current TLS share price. (a) $6.06 (b) $6.080152 (c) $6.149576 (d) $6.179509 (e) $6.300707 Question 488 income and capital returns, payout policy, payout ratio, DDM Two companies BigDiv and ZeroDiv are exactly the same except for their dividend payouts. BigDiv pays large dividends and ZeroDiv doesn't pay any dividends. Currently the two firms have the same earnings, assets, number of shares, share price, expected total return and risk. Assume a perfect world with no taxes, no transaction costs, no asymmetric information and that all assets including business projects are fairly priced and therefore zero-NPV. All things remaining equal, which of the following statements is NOT correct? (a) BigDiv is expected to have a lower capital return than ZeroDiv in the future. (b) BigDiv is expected to have a lower total return than ZeroDiv in the future. (c) ZeroDiv's assets are likely to grow faster than BigDiv's. (d) ZeroDiv's share price will increase faster than BigDiv's. 
(e) BigDiv currently has a higher payout ratio than ZeroDiv. Question 217 NPV, DDM, multi stage growth model A stock is expected to pay a dividend of $15 in one year (t=1), then $25 for 9 years after that (payments at t=2, 3, ..., 10), and in the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates. What is the price of the stock now? (b) $236.33 (c) $237.93 (d) $348.69 (e) $223.24 Question 348 PE ratio, Multiples valuation Estimate the US bank JP Morgan's share price using a price earnings (PE) multiples approach with the following assumptions and figures only: The major US banks JP Morgan Chase (JPM), Citi Group (C) and Wells Fargo (WFC) are comparable companies; JP Morgan Chase's historical earnings per share (EPS) is $4.37; Citi Group's share price is $50.05 and historical EPS is $4.26; Wells Fargo's share price is $48.98 and historical EPS is $3.89. Note: Figures sourced from Google Finance on 24 March 2014. Question 341 Multiples valuation, PE ratio Estimate Microsoft's (MSFT) share price using a price earnings (PE) multiples approach with the following assumptions and figures only: Apple, Google and Microsoft are comparable companies, Apple's (AAPL) share price is $526.24 and historical EPS is $40.32. Google's (GOOG) share price is $1,215.65 and historical EPS is $36.23. Microsoft's (MSFT) historical earnings per share (EPS) is $2.71. Source: Google Finance 28 Feb 2014. (a) $63.15 (b) $61.67 (c) $30.83 (d) $28.25 (e) $8.60 Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive earnings, disregard firms with negative earnings and therefore negative PE ratios. (a) Illiquid small private companies. (b) High growth technology firms. (c) Firms expected to have temporarily low earnings over the next year, but with higher earnings later. (d) Firms with a very low level of systematic risk. (e) Firms whose assets include a very large proportion of cash. Which firms tend to have high forward-looking price-earnings (PE) ratios? (b) Exchange-listed companies operating in stagnant industries with negative growth prospects. (c) Exchange-listed companies expected to have temporarily high earnings over the next year, but with lower earnings later. (d) Exchange-listed companies operating in high-risk industries with very high required returns on equity. (e) Exchange-listed companies whose assets include a very large proportion of cash. Question 579 price gains and returns over time, time calculation, effective rate How many years will it take for an asset's price to double if the price grows by 10% pa? (a) 1.8182 years (b) 3.3219 years (c) 7.2725 years (d) 11.5267 years (e) 13.7504 years How many years will it take for an asset's price to quadruple (be four times as big, say from $1 to $4) if the price grows by 15% pa? (d) 9.919 years Question 333 DDM, time calculation When using the dividend discount model, care must be taken to avoid using a nominal dividend growth rate that exceeds the country's nominal GDP growth rate. Otherwise the firm is forecast to take over the country since it grows faster than the average business forever. Suppose a firm's nominal dividend grows at 10% pa forever, and nominal GDP growth is 5% pa forever. The firm's total dividends are currently $1 billion (t=0). The country's GDP is currently $1,000 billion (t=0).
In approximately how many years will the company's total dividends be as large as the country's GDP? (a) 1,443 years (b) 1,199 years (c) 955 years (d) 674 years (e) 148 years The following cash flows are expected: 10 yearly payments of $80, with the first payment in 3 years from now (first payment at t=3). 1 payment of $600 in 5 years and 6 months (t=5.5) from now. What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate? (a) $1,006.25 Question 37 IRR If a project's net present value (NPV) is zero, then its internal rate of return (IRR) will be: (a) Positive infinity (##+\infty##) (b) Zero (0). (c) Less than the project's required return. (d) More than the project's required return. (e) Equal to the project's required return. Question 126 IRR What is the Internal Rate of Return (IRR) of the project detailed in the table below? Assume that the cash flows shown in the table are paid all at once at the given point in time. All answers are given as effective annual rates.

Project Cash Flows
Time (yrs)  Cash flow ($)
0           -100

(a) 0.21 (b) 0.105 (c) 0.1111 (d) 0.1 (e) 0 Question 46 NPV, annuity due The phone company Telstra has 2 mobile service plans on offer which both have the same amount of phone call, text message and internet data credit. Both plans have a contract length of 24 months and the monthly cost is payable in advance. The only difference between the two plans is that one is a 'Bring Your Own' (BYO) mobile service plan, costing $50 per month, with no phone included. The other is a 'Bundled' mobile service plan that comes with the latest smart phone, costing $71 per month. Neither plan has any additional payments at the start or end. The only difference between the plans is the phone, so what is the implied cost of the phone as a present value? Assume that the discount rate is 2% per month given as an effective monthly rate, similar to the high interest rates charged on credit cards. Question 465 NPV, perpetuity The boss of WorkingForTheManCorp has a wicked (and unethical) idea. He plans to pay his poor workers one week late so that he can get more interest on his cash in the bank. Every week he is supposed to pay his 1,000 employees $1,000 each. So $1 million is paid to employees every week. The boss was just about to pay his employees today, until he thought of this idea so he will actually pay them one week (7 days) later for the work they did last week and every week in the future, forever. Bank interest rates are 10% pa, given as a real effective annual rate. So ##r_\text{eff annual, real} = 0.1## and the real effective weekly rate is therefore ##r_\text{eff weekly, real} = (1+0.1)^{1/52}-1 = 0.001834569## All rates and cash flows are real, the inflation rate is 3% pa and there are 52 weeks per year. The boss will always pay wages one week late. The business will operate forever with constant real wages and the same number of employees. What is the net present value (NPV) of the boss's decision to pay later? (b) $1,919.39 (c) $13,580.21 (d) $18,295.38 (e) $1,000,000.00 Question 60 pay back period The required return of a project is 10%, given as an effective annual rate. What is the payback period of the project in years? Assume that the cash flows shown in the table are received smoothly over the year. So the $121 at time 2 is actually earned smoothly from t=1 to t=2.
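Hint: a sketch of the method with illustrative cash flows (not the question's own figures, which are in the table referenced above). Accumulate the flows until the initial cost is recovered, and interpolate within the final year because cash arrives smoothly. Note that the simple payback period ignores the time value of money, so the 10% required return never enters the calculation:

```python
# Sketch: payback period when cash is earned smoothly over each year.
# Illustrative flows only, not the question's own figures.
flows = [-100.0, 30.0, 121.0]  # t=0 cost, then cash earned during years 1 and 2

def payback_period(flows):
    cumulative = flows[0]
    for year, cf in enumerate(flows[1:], start=1):
        if cumulative + cf >= 0:
            # Interpolate: fraction of this year needed to recover the rest.
            return (year - 1) + (-cumulative) / cf
        cumulative += cf
    return float("inf")  # the project never pays itself off

print(payback_period(flows))  # 1 + 70/121 ≈ 1.5785 years
```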
(a) 2.7355 (b) 2.3596 (d) 1.2645 (e) 0.2645 Question 190 pay back period A project has the following cash flows: Normally cash flows are assumed to happen at the given time. But here, assume that the cash flows are received smoothly over the year. So the $500 at time 2 is actually earned smoothly from t=1 to t=2. (a) -0.80 (b) 0.80 (c) 1.20 (d) 1.80 (e) 2.20 Question 500 NPV, IRR The below graph shows a project's net present value (NPV) against its annual discount rate. For what discount rate or range of discount rates would you accept and commence the project? All answer choices are given as approximations from reading off the graph. (a) From 0 to 10% pa. (b) From 0 to 5% pa. (c) At 5.5% pa. (d) From 6 to 20% pa. (e) From 0 to 20% pa. Question 501 NPV, IRR, pay back period Which of the following statements is NOT correct? (a) When the project's discount rate is 18% pa, the NPV is approximately -$30m. (b) The payback period is infinite, the project never pays itself off. (c) The addition of the project's cash flows, ignoring the time value of money, is approximately $20m. (d) The project's IRR is approximately 5.5% pa. (e) As the discount rate rises, the NPV falls. Question 489 NPV, IRR, pay back period, DDM A firm is considering a business project which costs $11m now and is expected to pay a constant $1m at the end of every year forever. Assume that the initial $11m cost is funded using the firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about net present value (NPV), internal rate of return (IRR) and payback period is NOT correct? (a) The NPV is negative $1m. (b) The IRR is 9.09% pa, less than the 10% cost of capital. (c) The payback period is infinite, the project will never pay itself off. (d) The project should be rejected. (e) If the project is accepted then the market value of the firm's assets will fall by $1m. A firm is considering a business project which costs $10m now and is expected to pay a single cash flow of $12.1m in two years. (a) The NPV is zero. (b) The IRR is 10% pa, equal to the 10% cost of capital. (c) The payback period is two years assuming that the whole $12.1m cash flow occurs at t=2, or 1.826 years if the $12.1m cash flow is paid smoothly over the second year. (d) The project could be accepted or rejected, the owners would be indifferent. (e) If the project is accepted then the market value of the firm's assets will increase by $2.1m more than it would otherwise if the project was rejected. Question 251 NPV You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end (t=1). How much can you consume at each time? (a) $57,619.0476 (b) $55,000 (c) $53,809.5238 (d) $52,380.9524 (e) $50,000 Question 250 NPV, Loan, arbitrage table Your neighbour asks you for a loan of $100 and offers to pay you back $120 in one year. You don't actually have any money right now, but you can borrow and lend from the bank at a rate of 10% pa. Rates are given as effective annual rates. Assume that your neighbour will definitely pay you back. Ignore interest tax shields and transaction costs. The Net Present Value (NPV) of lending to your neighbour is $9.09. Describe what you would do to actually receive a $9.09 cash flow right now with zero net cash flows in the future. (a) Borrow $109.09 from the bank and lend $100 of it to your neighbour now. 
(b) Borrow $100 from the bank and lend it to your neighbour now. (c) Borrow $209.09 from the bank and lend $100 to your neighbour now. (d) Borrow $120 from the bank and lend $100 of it to your neighbour now. (e) Borrow $90.91 from the bank and lend it to your neighbour now. Question 502 NPV, IRR, mutually exclusive projects An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive. All of the development projects have the same risk and the required return of each is 10% pa. Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of return (IRRs).

Mutually Exclusive Projects
Project         Cost now ($)  Sale price in one year ($)  IRR (% pa)
Petrol station  9,000,000     11,000,000                  22.22
Car wash        800,000       1,100,000                   37.50
Car park        70,000        110,000                     57.14

Which project should the investor accept? (a) Petrol station. (b) Car wash. (c) Car park. (d) None of the projects. (e) All of the projects. Question 532 mutually exclusive projects, NPV, IRR An investor owns a whole level of an old office building which is currently worth $1 million. There are three mutually exclusive projects that can be started by the investor. The office building level can be: Rented out to a tenant for one year at $0.1m paid immediately, and then sold for $0.99m in one year. Refurbished into more modern commercial office rooms at a cost of $1m now, and then sold for $2.4m when the refurbishment is finished in one year. Converted into residential apartments at a cost of $2m now, and then sold for $3.4m when the conversion is finished in one year. All of the development projects have the same risk so the required return of each is 10% pa. The table below shows the estimated cash flows and internal rates of return (IRRs).

Project                                 Cash flow now ($)  Cash flow in one year ($)  IRR (% pa)
Rent then sell as is                    -900,000           990,000                    10
Refurbishment into modern offices       -2,000,000         2,400,000                  20
Conversion into residential apartments  -3,000,000         3,400,000                  13.33

(a) Rent then sell as is. (b) Refurbishment into modern offices. (c) Conversion into residential apartments. (e) Any of the above. Question 505 equivalent annual cash flow A low-quality second-hand car can be bought now for $1,000 and will last for 1 year before it will be scrapped for nothing. A high-quality second-hand car can be bought now for $4,900 and it will last for 5 years before it will be scrapped for nothing. What is the equivalent annual cost of each car? Assume a discount rate of 10% pa, given as an effective annual rate. The answer choices are given as the equivalent annual cost of the low-quality car and then the high quality car. (a) $100, $490 (b) $909.09, $608.5 (c) $1,000, $980 (d) $1,000, $1,578.30 (e) $1,100, $1,292.61 Question 180 equivalent annual cash flow, inflation, real and nominal returns and cash flows Details of two different types of light bulbs are given below: Low-energy light bulbs cost $3.50, have a life of nine years, and use about $1.60 of electricity a year, paid at the end of each year. Conventional light bulbs cost only $0.50, but last only about a year and use about $6.60 of energy a year, paid at the end of each year. The real discount rate is 5%, given as an effective annual rate. Assume that all cash flows are real.
The inflation rate is 3% given as an effective annual rate. Find the Equivalent Annual Cost (EAC) of the low-energy and conventional light bulbs. The below choices are listed in that order. (a) 1.4873, 6.7857 (b) 1.6525, 6.7857 (c) 2.1415, 7.1250 (d) 14.8725, 6.7857 (e) 2.0924, 7.1250 Carlos and Edwin are brothers and they both love Holden Commodore cars. Carlos likes to buy the latest Holden Commodore car for $40,000 every 4 years as soon as the new model is released. As soon as he buys the new car, he sells the old one on the second hand car market for $20,000. Carlos never has to bother with paying for repairs since his cars are brand new. Edwin also likes Commodores, but prefers to buy 4-year old cars for $20,000 and keep them for 11 years until the end of their life (new ones last for 15 years in total but the 4-year old ones only last for another 11 years). Then he sells the old car for $2,000 and buys another 4-year old second hand car, and so on. Every time Edwin buys a second hand 4 year old car he immediately has to spend $1,000 on repairs, and then $1,000 every year after that for the next 10 years. So there are 11 payments in total from when the second hand car is bought at t=0 to the last payment at t=10. One year later (t=11) the old car is at the end of its total 15 year life and can be scrapped for $2,000. Assuming that Carlos and Edwin maintain their love of Commodores and keep up their habits of buying new ones and second hand ones respectively, how much larger is Carlos' equivalent annual cost of car ownership compared with Edwin's? The real discount rate is 10% pa. All cash flows are real and are expected to remain constant. Inflation is forecast to be 3% pa. All rates are effective annual. Ignore capital gains tax and tax savings from depreciation since cars are tax-exempt for individuals. (a) $13,848.99 (b) $13,106.61 (c) $8,547.50 (d) $4,238.08 (e) -$103.85 You own some nice shoes which you use once per week on date nights. You bought them 2 years ago for $500. In your experience, shoes used once per week last for 6 years. So you expect yours to last for another 4 years. Your younger sister said that she wants to borrow your shoes once per week. With the increased use, your shoes will only last for another 2 years rather than 4. What is the present value of the cost of letting your sister use your current shoes for the next 2 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new pair of shoes when your current pair wears out and your sister will not use the new ones; your sister will only use your current shoes so she will only use them for the next 2 years; and the price of new shoes never changes. An industrial chicken farmer grows chickens for their meat. Chickens: Cost $0.50 each to buy as chicks. They are bought on the day they're born, at t=0. Grow at a rate of $0.70 worth of meat per chicken per week for the first 6 weeks (t=0 to t=6). Grow at a rate of $0.40 worth of meat per chicken per week for the next 4 weeks (t=6 to t=10) since they're older and grow more slowly. Feed costs are $0.30 per chicken per week for their whole life. Chicken feed is bought and fed to the chickens once per week at the beginning of the week. So the first amount of feed bought for a chicken at t=0 costs $0.30, and so on. Can be slaughtered (killed for their meat) and sold at no cost at the end of the week. The price received for a chicken is its total value of meat (note that the chicken grows fast then slow, see above).
The required return of the chicken farm is 0.5% given as an effective weekly rate. Ignore taxes and the fixed costs of the factory. Ignore the chicken's welfare and other environmental and ethical concerns. Find the equivalent weekly cash flow of slaughtering a chicken at 6 weeks and at 10 weeks so the farmer can figure out the best time to slaughter his chickens. The choices below are given in the same order, 6 and 10 weeks. (a) $0.3651, $0.2374 (b) $0.3172, $0.3506 (d) $0.3050, $0.2142 (e) $0.0157, $0.0491 Question 128 debt terminology An 'interest payment' is the same thing as a 'coupon payment'. True or false? Question 129 debt terminology An 'interest rate' is the same thing as a 'coupon rate'. True or false? An 'interest rate' is the same thing as a 'yield'. True or false? Which of the following statements is NOT correct? Borrowers: (a) Receive cash at the start and promise to pay cash in the future, as set out in the debt contract. (b) Are debtors. (c) Owe money. (d) Are funded by debt. (e) Buy debt. Which of the following statements is NOT correct? Lenders: (a) Are long debt. (b) Invest in debt. (c) Are owed money. (d) Provide debt funding. (e) Have debt liabilities. Question 290 APR, effective rate, debt terminology Which of the below statements about effective rates and annualised percentage rates (APR's) is NOT correct? (a) An effective annual rate could be called: "a yearly rate compounding per year". (b) An APR compounding monthly could be called: "a yearly rate compounding per month". (c) An effective monthly rate could be called: "a yearly rate compounding per month". (d) An APR compounding daily could be called: "a yearly rate compounding per day". (e) An effective 2-year rate could be called: "a 2-year rate compounding every 2 years". Question 16 credit card, APR, effective rate A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: ### r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily} ### (a) 0.0072, 0.09, 0.0002. (b) 0.0139, 0.18, 0.0005. (d) 0.015, 0.1956, 0.0005. (e) 0.015, 0.1956, 0.006. Question 131 APR, effective rate Calculate the effective annual rates of the following three APR's: A credit card offering an interest rate of 18% pa, compounding monthly. A bond offering a yield of 6% pa, compounding semi-annually. An annual dividend-paying stock offering a return of 10% pa compounding annually. ##r_\text{credit card, eff yrly}##, ##r_\text{bond, eff yrly}##, ##r_\text{stock, eff yrly}## (a) 0.1956, 0.0609, 0.1. (b) 0.015, 0.09, 0.1. (e) 6.2876, 0.1236, 0.1. Question 19 fully amortising loan, APR You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as a fully amortising loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage loan payments are paid in arrears (at the end of the month). (a) 900 (c) 2,722.1 (d) 2,843.71 (e) 34,424.99 Question 134 fully amortising loan, APR You want to buy an apartment worth $400,000. You have saved a deposit of $80,000. The bank has agreed to lend you the $320,000 as a fully amortising mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?
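Hint: mortgage rates like these are normally quoted as APRs compounding monthly, so the effective monthly rate is the quoted rate divided by 12; the payment then comes from inverting the annuity formula. A sketch with this question's figures:

```python
# Sketch: monthly payment on a fully amortising mortgage.
# Assumes the quoted 6% pa is an APR compounding monthly.
principal, apr, years = 320_000.0, 0.06, 30
r = apr / 12    # effective monthly rate
n = years * 12  # number of monthly payments, made in arrears

payment = principal * r / (1 - (1 + r) ** -n)
print(round(payment, 2))  # about 1918.56 per month
```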
(e) $23,247.65 You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $2,000 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 5 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. (a) 246,567.70, 93,351.63 (b) 246,567.70, 235,741.91 (c) 248,563.73, 96,346.75 (d) 248,563.73, 238,323.24 (e) 256,580.38, 245,314.97 How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. (a) 184,925.77, 164,313.82 (c) 186,422.80, 166,717.43 You just agreed to a 30 year fully amortising mortgage loan with monthly payments of $2,500. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. The below choices are given in the same order. (a) $320,725.47, $284,977.19 (b) $310,704.66, $277,862.39 (c) $310,704.66, $197,354.23 (d) $308,209.62, $273,856.37 (e) $308,209.62, $192,529.73 Question 29 interest only loan You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as an interest only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month). Question 107 interest only loan You want to buy an apartment worth $300,000. You have saved a deposit of $60,000. The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) 17,435.74 (b) 1,438.92 (c) 1,414.49 (e) 666.67 Question 509 bond pricing Calculate the price of a newly issued ten year bond with a face value of $100, a yield of 8% pa and a fixed coupon rate of 6% pa, paid annually. So there's only one coupon per year, paid in arrears every year. Calculate the price of a newly issued ten year bond with a face value of $100, a yield of 8% pa and a fixed coupon rate of 6% pa, paid semi-annually. So there are two coupons per year, paid in arrears every six months. Question 23 bond pricing, premium par and discount bonds Bonds X and Y are issued by the same US company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X and Y's coupon rates are 8 and 12% pa respectively. Which of the following statements is true? (a) Bonds X and Y are premium bonds. (b) Bonds X and Y are discount bonds. (c) Bond X is a discount bond but bond Y is a premium bond. (d) Bond X is a premium bond but bond Y is a discount bond. (e) Bonds X and Y are par bonds. Question 48 IRR, NPV, bond pricing, premium par and discount bonds, market efficiency The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct? (a) The internal rate of return (IRR) of buying a fairly priced bond is equal to the bond's yield. (b) The Present Value of a fairly priced bond's coupons and face value is equal to its price. 
(c) If a fairly priced bond's required return rises, its price will fall. (d) Fairly priced premium bonds' yields are less than their coupon rates, prices are more than their face values, and the NPV of buying them is therefore positive. (e) The NPV of buying a fairly priced bond is zero. Question 63 bond pricing, NPV, market efficiency (a) The internal rate of return (IRR) of buying a bond is equal to the bond's yield. (c) If the required return of a bond falls, its price will fall. (d) Fairly priced discount bonds' yield is more than the coupon rate, price is less than face value, and the NPV of buying them is zero. A bond maturing in 10 years has a coupon rate of 4% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value of the bond is $100. What is its price? (e) $85.12 A three year bond has a fixed coupon rate of 12% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value is $100. What is its price? Question 227 bond pricing, premium par and discount bonds Which one of the following bonds is trading at a premium? (a) a ten-year bond with a $4,000 face value whose yield to maturity is 6.0% and coupon rate is 5.9% paid semi-annually. (b) a fifteen-year bond with a $10,000 face value whose yield to maturity is 8.0% and coupon rate is 7.8% paid semi-annually. (c) a five-year bond with a $2,000 face value whose yield to maturity is 7.0% and coupon rate is 7.2% paid semi-annually. (d) a two-year bond with a $50,000 face value whose yield to maturity is 5.2% and coupon rate is 5.2% paid semi-annually. (e) None of the above bonds are premium bonds. An investor bought two bonds issued by the same company: a zero-coupon bond and a 7% pa semi-annual coupon bond. Both bonds have a face value of $1,000, mature in 10 years, and had a yield at the time of purchase of 8% pa. A few years later, yields fell to 6% pa. Which of the following statements is correct? Note that a capital gain is an increase in price. (a) The zero-coupon bond and the 7% semi-annual coupon bond were both discount bonds but now they are both premium bonds. (b) The zero-coupon bond and the 7% semi-annual coupon bond were both premium bonds but now they are both discount bonds. (c) When yields fell, the investor made a capital loss on both bonds. (d) When yields fell, the investor made a capital gain on both bonds. (e) When yields fell, the investor made a capital gain on the zero coupon bond but a loss on the 7% semi-annual coupon bond. In these tough economic times, central banks around the world have cut interest rates so low that they are practically zero. In some countries, government bond yields are also very close to zero. A three year government bond with a face value of $100 and a coupon rate of 2% pa paid semi-annually was just issued at a yield of 0%. What is the price of the bond? (a) 94.20452353 (b) 100 (c) 106 (d) 112 (e) The bond is priceless. A 10 year bond has a face value of $100, a yield of 6% pa and a fixed coupon rate of 8% pa, paid semi-annually. What is its price? (d) $126.628 Below are some statements about loans and bonds. The first descriptive sentence is correct. But one of the second sentences about the loans' or bonds' prices is not correct. Which statement is NOT correct? Assume that interest rates are positive. Note that coupons or interest payments are the periodic payments made throughout a bond or loan's life. The face or par value of a bond or loan is the amount paid at the end when the debt matures.
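Hint: a single pricing function covers most of the bond questions above, and comparing the result with the face value classifies the debt as premium (coupon rate above yield), par (equal) or discount (below). As a sanity check, this sketch reproduces the $85.12 price of the 10-year 4% coupon bond at a 6% yield:

```python
# Sketch: price a fixed-coupon bond; rates are APRs compounding 'freq' times a year.
def bond_price(face, coupon_rate, yield_rate, years, freq=2):
    c = face * coupon_rate / freq  # coupon paid each period
    y = yield_rate / freq          # yield per period
    n = years * freq               # number of periods
    return c * (1 - (1 + y) ** -n) / y + face / (1 + y) ** n

p = bond_price(face=100, coupon_rate=0.04, yield_rate=0.06, years=10)
print(round(p, 2))  # 85.12: price < face value, so it's a discount bond
```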
(a) A bullet loan has no interest payments but it does have a face value. Therefore it's a discount loan. (b) A fully amortising loan has interest payments but does not have a face value. Therefore it's a premium loan. (c) An interest only loan has interest payments and its price and face value are equal. Therefore it's a par loan. (d) A zero coupon bond has no coupon payments but it does have a face value. Therefore it's a premium bond. (e) A balloon loan has interest payments and its face value is more than its price. Therefore it's a discount loan. Question 35 bond pricing, zero coupon bond, term structure of interest rates, forward interest rate A European company just issued two bonds, a 1 year zero coupon bond at a yield of 8% pa, and a 2 year zero coupon bond at a yield of 10% pa. What is the company's forward rate over the second year (from t=1 to t=2)? Give your answer as an effective annual rate, which is how the above bond yields are quoted. Question 143 bond pricing, zero coupon bond, term structure of interest rates, forward interest rate An Australian company just issued two bonds: a 6-month zero coupon bond at a yield of 6% pa, and a 12-month zero coupon bond at a yield of 7% pa. What is the company's forward rate from 6 to 12 months? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted. A 1 year zero coupon bond at a yield of 8% pa, and a 2 year zero coupon bond at a yield of 10% pa. What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted. (a) 6.01% (b) 6.02% (c) 9.20% (d) 12.02% (e) 18.40% Question 254 time calculation, APR Your main expense is fuel for your car which costs $100 per month. You just refueled, so you won't need any more fuel for another month (first payment at t=1 month). You have $2,500 in a bank account which pays interest at a rate of 6% pa, payable monthly. Interest rates are not expected to change. Assuming that you have no income, in how many months time will you not have enough money to fully refuel your car? (a) In 23 months (t=23 months). (b) In 24 months (t=24 months). (c) In 25 months (t=25 months). (d) In 26 months (t=26 months). (e) In 27 months (t=27 months). Question 32 time calculation, APR You really want to go on a backpacking trip to Europe when you finish university. Currently you have $1,500 in the bank. Bank interest rates are 8% pa, given as an APR compounding per month. If the holiday will cost $2,000, how long will it take for your bank account to reach that amount? (a) -3.74 years (b) 1.81 years (c) 3.33 years (d) 3.61 years (e) 3.74 years Question 485 capital budgeting, opportunity cost, sunk cost A young lady is trying to decide if she should attend university or not. The young lady's parents say that she must attend university because otherwise all of her hard work studying and attending school during her childhood was a waste. What's the correct way to classify this item from a capital budgeting perspective when trying to decide whether to attend university? The hard work studying at school in her childhood should be classified as: (a) A sunk cost. (b) An opportunity cost. (c) A negative side effect. (d) A positive side effect. (e) A depreciation expense. A young lady is trying to decide if she should attend university. Her friends say that she should go to university because she is more likely to meet a clever young man than if she begins full time work straight away.
What's the correct way to classify this item from a capital budgeting perspective when trying to find the Net Present Value of going to university rather than working? The opportunity to meet a desirable future spouse should be classified as: A man is thinking about taking a day off from his casual painting job to relax. He just woke up early in the morning and he's about to call his boss to say that he won't be coming in to work. But he's thinking about the hours that he could work today (in the future) which are: (d) A capital expense. A man has taken a day off from his casual painting job to relax. It's the end of the day and he's thinking about the hours that he could have spent working (in the past) which are now: Question 176 CFFA Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula? ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp### (a) CapEx is added in the Net Income (NI) equation so it needs subtracting in the CFFA equation. (b) CapEx is a financing cash flow that needs to be ignored. Therefore it should be subtracted. (c) CapEx is not a cash flow, it's a non-cash expense made up by accountants that needs to be subtracted. (d) CapEx is subtracted to account for the net cash spent on capital assets. (e) CapEx is subtracted because it's too hard to predict, therefore we exclude it. Cash Flow From Assets (CFFA) can be defined as: (a) Cash available to distribute to creditors and stockholders. (b) Cash flow to creditors minus cash flow to stockholders. (c) Net income (or earnings) plus depreciation plus interest expense. (d) Net income minus the increase in net working capital. (e) Net income minus net capital spending minus the increase in net working capital. A firm has forecast its Cash Flow From Assets (CFFA) for this year and management is worried that it is too low. Which one of the following actions will lead to a higher CFFA for this year (t=0 to 1)? Only consider cash flows this year. Do not consider cash flows after one year, or the change in the NPV of the firm. Consider each action in isolation. (a) Buy less land, buildings and trucks than what was planned. Assume that this has no impact on revenue. (b) Pay less cash to creditors by refinancing the firm's existing coupon bonds with zero-coupon bonds that require no interest payments. Assume that there are no transaction costs and that both types of bonds have the same yield to maturity. (c) Change the depreciation method used for tax purposes from diminishing value to straight line, so less depreciation occurs this year and more occurs in later years. Assume that the government's tax department allow this. (d) Buying more inventory than was planned, so there is an increase in net working capital. Assume that there is no increase in sales. (e) Raising new equity through a rights issue. Assume that all of the money raised is spent on new capital assets such as land and trucks, but they will be fitted out and delivered in one year so no new cash will be earned from them. Question 238 CFFA, leverage, interest tax shield A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged. Ignoring the costs of financial distress, which of the following statements is NOT correct: (a) The company is increasing its debt-to-assets and debt-to-equity ratios. These are types of 'leverage' or 'gearing' ratios. (b) The company will pay less tax to the government due to the benefit of interest tax shields. 
(c) The company's net income, also known as earnings or net profit after tax, will fall. (d) The company's expected levered firm free cash flow (FFCF or CFFA) will be higher due to tax shields. (e) The company's expected levered equity free cash flow (EFCF) will not change. Question 349 CFFA, depreciation tax shield Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? ###NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )### ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp### (a) An increase in revenue (Rev). (b) An increase in rent expense (part of fixed costs, FC). (c) An increase in depreciation expense (Depr). (d) A decrease in net working capital (ΔNWC). (e) An increase in dividends. Over the next year, the management of an unlevered company plans to: Achieve firm free cash flow (FFCF or CFFA) of $1m. Pay dividends of $1.8m. Complete a $1.3m share buy-back. Spend $0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above. Assume that: All amounts are received and paid at the end of the year so you can ignore the time value of money. The firm has sufficient retained profits to pay the dividend and complete the buy-back. The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year. How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued? (a) $2.1m (b) $1.3m (c) $0.8m (d) $0.3m (e) No new shares need to be issued, the firm will be sufficiently financed. Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant? ###NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )### ###CFFA=NI+Depr-CapEx - ΔNWC+IntExp### (b) An increase in rent expense (a type of recurring fixed cost, FC). (d) An increase in inventories (a current asset). (e) A decrease in interest expense (IntExp). Find Ching-A-Lings Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

Ching-A-Lings Corp
Income Statement for year ending 30th June 2013
                    $m
Sales               100
COGS                20
Depreciation        20
Rent expense        11
Interest expense    19
Taxable Income      30
Taxes at 30%        9
Net income          21

Balance Sheet as at 30th June
                      2013 ($m)  2012 ($m)
Inventory             49         38
Trade debtors         14         2
Rent paid in advance  5          5
PPE                   400        400
Total assets          468        445
Trade creditors       4          10
Bond liabilities      200        190
Contributed equity    145        145
Retained profits      119        100
Total L and OE        468        445

Note: All figures are given in millions of dollars ($m). The cash flow from assets was: (a) $43m (b) $31m (c) $23m (d) $11m (e) $1m Make $5m in sales, $1.9m in net income and $2m in equity free cash flow (EFCF). Pay dividends of $1m. The firm has sufficient retained profits to legally pay the dividend and complete the buy-back. (a) $2m (b) $1m Question 511 capital budgeting, CFFA Find the cash flow from assets (CFFA) of the following project.
One Year Mining Project Data
Project life                                        1 year
Initial investment in building mine and equipment   $9m
Depreciation of mine and equipment over the year    $8m
Kilograms of gold mined at end of year              1,000
Sale price per kilogram                             $0.05m
Variable cost per kilogram                          $0.03m
Before-tax cost of closing mine at end of year      $4m
Tax rate                                            30%

Note 1: Due to the project, the firm also anticipates finding some rare diamonds which will give before-tax revenues of $1m at the end of the year. Note 2: The land that will be mined actually has thermal springs and a family of koalas that could be sold to an eco-tourist resort for an after-tax amount of $3m right now. However, if the mine goes ahead then this natural beauty will be destroyed. Note 3: The mining equipment will have a book value of $1m at the end of the year for tax purposes. However, the equipment is expected to fetch $2.5m when it is sold. Find the project's CFFA at time zero and one. Answers are given in millions of dollars ($m), with the first cash flow at time zero, and the second at time one. (a) -9, 15.65 (b) -9, 14.3 (c) -12, 16.8 (d) -12, 16.35 (e) -12, 14.3

Project Data
Project life                                          2 years
Initial investment in equipment                       $6m
Depreciation of equipment per year for tax purposes   $1m
Unit sales per year                                   4m
Sale price per unit                                   $8
Variable cost per unit                                $3
Fixed costs per year, paid at the end of each year    $1.5m

Note 1: The equipment will have a book value of $4m at the end of the project for tax purposes. However, the equipment is expected to fetch $0.9 million when it is sold at t=2. Note 2: Due to the project, the firm will have to purchase $0.8m of inventory initially, which it will sell at t=1. The firm will buy another $0.8m at t=1 and sell it all again at t=2 with zero inventory left. The project will have no effect on the firm's current liabilities. Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m). (a) -6, 12.25, 16.68 (b) -6.8, 13.25, 14.05 (c) -6.8, 13.25, 15.88 (d) -6.8, 13.25, 18.51 (e) -6.8, 13.25, 17.71 Question 377 leverage, capital structure Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. True or false? Question 379 leverage, capital structure, payout policy Companies must pay interest and principal payments to debt-holders. They're compulsory. But companies are not forced to pay dividends to shareholders. True or false? Question 94 leverage, capital structure, real estate Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000. In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So ##V=D+E##. If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell. ### r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0} ### where ##r_{0\rightarrow1}## is the return (percentage change) of an asset with price ##p_0## initially, ##p_1## one period later, and paying a cash flow of ##c_1## at time ##t=1##.
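Hint: with the mortgage fixed, every dollar of house-price movement lands on the equity, so the owner's percentage change is the asset's percentage change scaled up by V/E. With the figures above (V = $400,000, E = $80,000), a 10% fall is levered five times:

```python
# Sketch: a fixed debt levers the owner's percentage change in equity.
V0, D = 400_000.0, 320_000.0  # house value and (unchanged) mortgage
E0 = V0 - D                   # initial equity of 80,000

V1 = V0 * (1 - 0.10)          # house prices fall by 10%
E1 = V1 - D                   # the equity absorbs the entire fall

print((E1 - E0) / E0)         # -0.5, i.e. -50%: five times the -10% asset return
```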
Question 377 leverage, capital structure
Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. True or false?

Question 379 leverage, capital structure, payout policy
Companies must pay interest and principal payments to debt-holders. They're compulsory. But companies are not forced to pay dividends to share holders. True or false?

Question 94 leverage, capital structure, real estate
Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000.
In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000.
Let this amount be E, let the value of the mortgage be D and the value of the house be V. So ##V=D+E##.
If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell.
### r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0} ###
where ##r_{0\rightarrow1}## is the return (percentage change) of an asset with price ##p_0## initially, ##p_1## one period later, and paying a cash flow of ##c_1## at time ##t=1##.
(a) -100%
(b) -50%
(c) -12.5%
(d) -10%
(e) -8%

Question 301 leverage, capital structure, real estate
Your friend just bought a house for $1,000,000. He financed it using a $900,000 mortgage loan and a deposit of $100,000.
In the context of residential housing and mortgages, the 'equity' or 'net wealth' tied up in a house is the value of the house less the value of the mortgage loan. Assuming that your friend's only asset is his house, his net wealth is $100,000.
If house prices suddenly fall by 15%, what would be your friend's percentage change in net wealth?
No income (rent) was received from the house during the short time over which house prices fell.
Your friend will not declare bankruptcy, he will always pay off his debts.
(a) -1,000%
(b) -150%
(c) -100%
(e) -10%

Question 406 leverage, WACC, margin loan, portfolio return
One year ago you bought $100,000 of shares partly funded using a margin loan. The margin loan size was $70,000 and the other $30,000 was your own wealth or 'equity' in the share assets.
The interest rate on the margin loan was 7.84% pa.
Over the year, the shares produced a dividend yield of 4% pa and a capital gain of 5% pa.
What was the total return on your wealth? Ignore taxes, assume that all cash flows (interest payments and dividends) were paid and received at the end of the year, and all rates above are effective annual rates.
(e) 11.7067%
Hint: Remember that wealth in this context is your equity (E) in the house asset (V = D+E) which is funded by the loan (D) and your deposit or equity (E).
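For Question 406 above, the levered return on wealth can be checked with a short Python sketch (mine, using only the figures given in the question):

    V, D = 100_000, 70_000
    E = V - D
    dividends = 0.04*V
    capital_gain = 0.05*V
    interest = 0.0784*D
    r_equity = (dividends + capital_gain - interest) / E
    print(r_equity)   # 0.1170666..., i.e. about 11.7067% pa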
Question 67 CFFA, interest tax shield
Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations:
###NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)###
###CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp###
What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and ##r_D## is the cost of debt.
(a) ##D(1+r_D)##
(b) ##D/(1+r_D) ##
(c) ##D.r_D ##
(d) ##D / r_D##
(e) ##NI.r_D##

Question 296 CFFA, interest tax shield
(a) An increase in revenue (##Rev##).
(b) A decrease in revenue (##Rev##).
(c) An increase in rent expense (part of fixed costs, ##FC##).
(d) An increase in interest expense (##IntExp##).

Question 506 leverage, accounting ratio
A firm has a debt-to-equity ratio of 25%. What is its debt-to-assets ratio?
(a) 20%
(b) 36%
(c) 60%
(d) 75%
(e) 37.5%
(f) 6.25%

Question 68 WACC, CFFA, capital budgeting
A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations.
To get the levered value of the project, what should these unlevered cash flows be discounted by? Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to.
(a) The manufacturing firm's before-tax WACC.
(b) The manufacturing firm's after-tax WACC.
(c) A services firm's before-tax WACC, assuming that the services firm has the same debt-to-assets ratio as the manufacturing firm.
(d) A services firm's after-tax WACC, assuming that the services firm has the same debt-to-assets ratio as the manufacturing firm.
(e) The services firm's levered cost of equity.

Question 89 WACC, CFFA, interest tax shield
A retail furniture company buys furniture wholesale and distributes it through its retail stores. The owner believes that she has some good ideas for making stylish new furniture. She is considering a project to buy a factory and employ workers to manufacture the new furniture she's designed. Furniture manufacturing has more systematic risk than furniture retailing.
Her furniture retailing firm's after-tax WACC is 20%. Furniture manufacturing firms have an after-tax WACC of 30%. Both firms are optimally geared. Assume a classical tax system.
Which method(s) will give the correct valuation of the new furniture-making project? Select the most correct answer.
(a) Discount the project's unlevered CFFA by the furniture manufacturing firms' 30% WACC after tax.
(b) Discount the project's unlevered CFFA by the company's 20% WACC after tax.
(c) Discount the project's levered CFFA by the company's 20% WACC after tax.
(d) Discount the project's levered CFFA by the furniture manufacturing firms' 30% WACC after tax.
(e) The methods outlined in answers (a) and (c) will give the same valuations, both are correct.

Question 113 WACC, CFFA, capital budgeting
The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones.
Assume the following:
Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola.
Motorola had a 20% after-tax WACC before it merged with Google.
Google and Motorola have the same level of gearing.
Both companies operate in a classical tax system.
You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer.
The mobile phone manufacturing project's:
(a) Unlevered CFFA should be discounted by Google's 10% WACC after tax.
(b) Unlevered CFFA should be discounted by Motorola's 20% WACC after tax.
(c) Levered CFFA should be discounted by Google's 10% WACC after tax.
(d) Levered CFFA should be discounted by Motorola's 20% WACC after tax.
(e) Unlevered CFFA by 15%, the average of Google and Motorola's WACC after tax.

Question 368 interest tax shield, CFFA
A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following:
###\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + \\ &\space\space\space+ Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned}###
Does this annual FFCF include or exclude the annual interest tax shield?

One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT).
###\begin{aligned} FFCF &= (EBIT)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned} \\###
Does this annual FFCF include or exclude the annual interest tax shield?

One method for calculating a firm's free cash flow (FFCF, or CFFA) is to ignore interest expense. That is, pretend that interest expense ##(IntExp)## is zero:
###\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp \\ &= (Rev - COGS - Depr - FC - 0)(1-t_c) + Depr - CapEx -\Delta NWC - 0\\ \end{aligned}###
Does this annual FFCF with zero interest expense include or exclude the annual interest tax shield?
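These methods differ only in whether the interest tax shield ends up inside the cash flow. A small Python sketch with made-up figures (all assumptions, not from any question here) shows that the EBIT method keeps the shield via its IntExp.t_c term, while Question 368's textbook method and the zero-interest method (and the NOPAT method below) exclude it:

    import math
    # Made-up illustrative figures (assumptions only)
    rev, cogs, fc, depr = 100.0, 40.0, 10.0, 20.0
    int_exp, capex, d_nwc, tc = 8.0, 15.0, 5.0, 0.30
    ebit = rev - cogs - fc - depr

    # EBIT method: the IntExp.t_c term is the interest tax shield
    ffcf_ebit = ebit*(1 - tc) + depr - capex - d_nwc + int_exp*tc
    # Question 368's textbook method: adds back only IntExp.(1-t_c)
    ffcf_368 = (ebit - int_exp)*(1 - tc) + depr - capex - d_nwc + int_exp*(1 - tc)
    # Zero-interest method (identical to the NOPAT method below)
    ffcf_zero = ebit*(1 - tc) + depr - capex - d_nwc

    assert math.isclose(ffcf_368, ffcf_zero)                 # both exclude the shield
    assert math.isclose(ffcf_ebit - ffcf_zero, int_exp*tc)   # they differ by the shield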
One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT).
###\begin{aligned} FFCF &= NOPAT + Depr - CapEx -\Delta NWC \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC \\ \end{aligned} \\###
Does this annual FFCF include or exclude the annual interest tax shield?

Question 413 CFFA, interest tax shield, depreciation tax shield
There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). One method is to use the following formulas to transform net income (NI) into FFCF including interest and depreciation tax shields:
###FFCF=NI + Depr - CapEx -ΔNWC + IntExp###
###NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )###
Another popular method is to use EBITDA rather than net income. EBITDA is defined as:
###EBITDA=Rev - COGS - FC###
One of the below formulas correctly calculates FFCF from EBITDA, including interest and depreciation tax shields, giving an identical answer to that above. Which formula is correct?
(a) ##FFCF=EBITDA+ Depr - CapEx -ΔNWC + IntExp##
(b) ##FFCF=EBITDA.(1-t_c )+Depr- CapEx -ΔNWC##
(c) ##FFCF=EBITDA.(1-t_c )+ Depr.t_c - CapEx -ΔNWC + IntExp.t_c##
(d) ##FFCF=EBITDA.(1-t_c )+Depr.(1-t_c )- CapEx -ΔNWC+IntExp.(1-t_c)##
(e) ##FFCF=EBITDA.(1-t_c )- CapEx -ΔNWC##

Question 69 interest tax shield, capital structure, leverage, WACC
Which statement about risk, required return and capital structure is the most correct?
(a) The before-tax cost of debt is less than the before-tax cost of equity. Therefore debt is a cheaper form of financing than equity so companies should try to finance their projects with debt only.
(b) Debt makes a firm's equity more risky. Therefore the higher the amount of debt, the higher the cost of equity.
(c) The more debt a firm has, the higher its tax shields. Therefore firms should seek to have as much debt and as little equity as possible.
(d) The more debt, the lower the firm's after tax WACC. The after tax WACC is the discount rate that discounts the firm's cash flows, so the lower it is the more the firm is worth. Therefore firms should try to make their after tax WACC as low as possible by using as much debt as possible.
(e) The less debt, the lower the chance of bankruptcy. Therefore firms should try to pay off all of their debt so that they are financed by equity only.

Question 78 WACC, capital structure
A company issues a large amount of bonds to raise money for new projects of similar risk to the company's existing projects. The net present value (NPV) of the new projects is positive but small. Assume a classical tax system. Which statement is NOT correct?
(a) The debt-to-assets (D/V) ratio will increase.
(b) The debt-to-equity ratio (D/E) will increase.
(c) Firm value is likely to have increased due to the higher amount of interest tax shields, assuming that there will not be any costs of financial distress.
(d) The company's after-tax WACC is likely to have decreased.
(e) The company's before-tax WACC is likely to have decreased.

Question 84 WACC, capital structure, capital budgeting
A firm is considering a new project of similar risk to the current risk of the firm. This project will expand its existing business. The cash flows of the project have been calculated assuming that there is no interest expense. In other words, the cash flows assume that the project is all-equity financed.
In fact the firm has a target debt-to-equity ratio of 1, so the project will be financed with 50% debt and 50% equity.
To find the levered value of the firm's assets, what discount rate should be applied to the project's unlevered cash flows? Assume a classical tax system.
(a) The required return on equity, ##r_E##
(b) The required return on debt, ##r_D##
(c) The after-tax required return on debt, ##r_D.(1-t_c)##
(d) The after-tax WACC, ##\text{WACC after tax}=\frac{D}{V_L}.r_D.(1-t_c )+\frac{E_L}{V_L}.r_E##
(e) The pre-tax WACC, ##\text{WACC before tax}=\frac{D}{V_L}.r_D+\frac{E_L}{V_L}.r_E##

A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of equity to raise money for new projects of similar systematic risk to the company's existing projects. Assume a classical tax system. Which statement is correct?
(d) The company's after-tax WACC is likely to stay the same.
(e) The company's before-tax WACC is likely to stay the same.

Question 99 capital structure, interest tax shield, Miller and Modigliani, trade off theory of capital structure
A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged.
The firm and individual investors can borrow at the same rate and have the same tax rates.
The firm's debt and shares are fairly priced and the shares are repurchased at the market price, not at a premium.
There are no market frictions relating to debt such as asymmetric information or transaction costs.
Shareholders' wealth is measured in terms of utility. Shareholders are wealth-maximising and risk-averse. They have a preferred level of overall leverage. Before the firm's capital restructure all shareholders were optimally levered.
According to Miller and Modigliani's theory, which statement is correct?
(a) The firm's share price and shareholder wealth will both decrease. This is because the firm will have more debt and therefore more risk so the discount rate applied to its cash flows will be higher, decreasing the value of the firm and therefore the value of the firm's equity and share price.
(b) The firm's share price and shareholder wealth will both increase. This is because the firm will have more debt which will amplify the returns of equity investors. This will mean that returns on equity can be much higher and investors will pay a premium for this, leading to an increase in the stock price.
(c) The firm's share price and shareholder wealth will both increase since it has more debt and therefore more tax shields.
(d) The firm's share price will increase due to the higher value of tax shields. But shareholder wealth will remain unchanged because capital structure is irrelevant when investors can use home-made leverage to create tax-shields themselves.
(e) The firm's share price and shareholder wealth will both increase. This is because the cost of debt is cheaper than equity, leading to a lower (before and after tax) WACC. This lower WACC will lead to a higher value of the firm and a higher share price.

Question 121 capital structure, leverage, financial distress, interest tax shield
Fill in the missing words in the following sentence:
All things remaining equal, as a firm's amount of debt funding falls, benefits of interest tax shields __________ and the costs of financial distress __________.
(a) Fall, fall.
(b) Fall, rise.
(c) Rise, fall.
(d) Rise, rise.
(e) Remain unchanged, remain unchanged.

Question 411 WACC, capital structure
A firm plans to issue equity and use the cash raised to pay off its debt. No assets will be bought or sold. Ignore the costs of financial distress.
Which of the following statements is NOT correct, all things remaining equal?
(a) The firm's WACC before tax will rise.
(b) The firm's WACC after tax will rise.
(c) The firm's required return on equity will be lower.
(d) The firm's net income will be higher.
(e) The firm's free cash flow will be lower.

Question 559 variance, standard deviation, covariance, correlation
Which of the following statements about standard statistical mathematics notation is NOT correct?
(a) The arithmetic average of variable X is represented by ##\bar{X}##.
(b) The standard deviation of variable X is represented by ##\sigma_X##.
(c) The variance of variable X is represented by ##\sigma_X^2##.
(d) The covariance between variables X and Y is represented by ##\sigma_{X,Y}^2##.
(e) The correlation between variables X and Y is represented by ##\rho_{X,Y}##.

Question 236 diversification, correlation, risk
Diversification in a portfolio of two assets works best when the correlation between their returns is:
(a) -1
(b) -0.5

Question 81 risk, correlation, diversification
Stock A and B's returns have a correlation of 0.3. Which statement is NOT correct?
(a) If stock A's return increases, stock B's return is also expected to increase.
(b) If stock A's return decreases, stock B's return is also expected to decrease.
(c) If stock A and B were combined in a portfolio, there would be no diversification at all since they are positively correlated.
(d) Stock A and B's returns have positive covariance.
(e) a and b.

Question 111 portfolio risk, correlation
All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as:
(a) The correlation between the stocks' returns rises.
(b) The correlation between the stocks' returns declines.
(c) The portfolio standard deviation declines.
(d) Both stocks' individual variances decline.
(e) Both stocks' individual standard deviations decline.

Question 82 portfolio return

Stock   Expected return   Standard deviation   Correlation   Dollars
A       0.1               0.4                  0.5           60
B       0.2               0.6                                140

What is the expected return of the above portfolio?

Question 83 portfolio risk, standard deviation
What is the standard deviation (not variance) of the above portfolio?
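A short Python check of the portfolio return and standard deviation asked for in Questions 82 and 83 (my sketch using the two-asset formulas; the answer options are not reproduced in the source, so the outputs below are just the formulas evaluated):

    import math
    w_a, w_b = 60/200, 140/200          # dollar weights from the table
    r_a, r_b = 0.1, 0.2
    s_a, s_b, rho = 0.4, 0.6, 0.5
    r_p = w_a*r_a + w_b*r_b             # 0.17, i.e. 17% expected return
    var_p = (w_a*s_a)**2 + (w_b*s_b)**2 + 2*w_a*w_b*rho*s_a*s_b
    print(r_p, math.sqrt(var_p))        # 0.17 and about 0.4648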
Question 282 expected and historical returns, income and capital returns
You're the boss of an investment bank's equities research team. Your five analysts are each trying to find the expected total return over the next year of shares in a mining company. The mining firm:
Is regarded as a mature company since it's quite stable in size and was floated around 30 years ago. It is not a high-growth company;
Share price is very sensitive to changes in the price of the market portfolio, economic growth, the exchange rate and commodities prices. Due to this, its standard deviation of total returns is much higher than that of the market index;
Experienced tough times in the last 10 years due to unexpected falls in commodity prices.
Shares are traded in an active liquid market.
Your team of analysts present their findings, and everyone has different views. While there's no definitive true answer, whose calculation of the expected total return is the most plausible?
The analysts' source data is correct and true, but their inferences might be wrong;
All returns and yields are given as effective annual nominal rates.
(a) Alice says 5% pa since she calculated that this was the average total yield on government bonds over the last 10 years. She says that this is also the expected total yield implied by current prices on one year government bonds.
(b) Bob says 4% pa since he calculated that this was the average total return on the mining stock over the last 10 years.
(c) Cate says 3% pa since she calculated that this was the average growth rate of the share price over the last 10 years.
(d) Dave says 6% pa since he calculated that this was the average growth rate of the share market price index (not the accumulation index) over the last 10 years.
(e) Eve says 15% pa since she calculated that this was the discount rate implied by the dividend discount model using the current share price, forecast dividend in one year and a 3% growth rate in dividends thereafter, which is the expected long term inflation rate.

Question 285 covariance, portfolio risk
Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%.
If the variance of stock A increases but the:
Prices and expected returns of each stock stay the same,
Variance of stock B's returns stays the same,
Correlation of returns between the stocks stays the same.
(a) The variance of the portfolio will increase.
(b) The standard deviation of the portfolio will increase.
(c) The covariance of returns between stocks A and B will stay the same.
(d) The portfolio return will stay the same.
(e) The portfolio value will stay the same.

Question 293 covariance, correlation, portfolio risk
All things remaining equal, the higher the correlation of returns between two stocks:
(a) The more diversification is possible when those stocks are combined in a portfolio.
(b) The lower the variance of returns of an equally-weighted portfolio of those stocks.
(c) The lower the volatility of returns of an equal-weighted portfolio of those stocks.
(d) The higher the covariance between those stocks' returns.
(e) The more likely that when one stock has a positive return, the other has a negative return.

Question 279 diversification
Do you think that the following statement is true or false? "Buying a single company stock usually provides a safer return than a stock mutual fund."

Question 294 short selling, portfolio weights
Which of the following statements about short-selling is NOT true?
(a) Short sellers benefit from price falls.
(b) To short sell, you must borrow the asset from person A and sell it to person B, then later on buy an identical asset from person C and return it to person A.
(c) Short selling only works for assets that are 'fungible' which means that there are many that are identical and substitutable, such as shares and bonds and unlike real estate.
(d) An investor who short-sells an asset has a negative weight in that asset.
(e) An investor who short-sells an asset is said to be 'long' that asset.

Question 557 portfolio weights, portfolio return
An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 6% pa.
Stock A has an expected return of 5% pa.
Stock B has an expected return of 10% pa.
What portfolio weights should the investor have in stocks A and B respectively?
(a) 80%, 20%
(b) 60%, 40%
(c) 40%, 60%
(d) 20%, 80%
(e) 20%, 20%

Question 558 portfolio weights, portfolio return, short selling
An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 16% pa.
(a) 200%, -100%
(b) 200%, 100%
(c) -100%, 200%
(d) 100%, 200%
(e) -100%, 100%
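For Question 557 above, the target-return weights solve a single linear equation; a minimal sketch:

    r_a, r_b, target = 0.05, 0.10, 0.06
    w_a = (target - r_b) / (r_a - r_b)   # solves w_a*r_a + (1 - w_a)*r_b = target
    print(w_a, 1 - w_a)                  # 0.8 and 0.2, i.e. 80% in A and 20% in B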
Question 556 portfolio risk, portfolio return, standard deviation
Stock A has an expected return of 10% pa and a standard deviation of 20% pa. Stock B has an expected return of 15% pa and a standard deviation of 30% pa. The correlation coefficient between stock A and B's expected returns is 70%.
Suppose the investor forms a portfolio of stocks A and B with a target expected portfolio return of 12% pa. What will be the annual standard deviation of the portfolio with this 12% pa target return?
(a) 24.28168% pa
(b) 24% pa
(c) 22.126907% pa
(d) 19.697716% pa
(e) 16.970563% pa

Question 562 covariance
What is the covariance of a variable X with itself? The cov(X, X) or ##\sigma_{X,X}## equals:
(a) var(X) or ##\sigma_X^2##
(b) sd(X) or ##\sigma_X##
(e) Mathematically undefined

Question 563 correlation
What is the correlation of a variable X with itself? The corr(X, X) or ##\rho_{X,X}## equals:

What is the covariance of a variable X with a constant C? The cov(X, C) or ##\sigma_{X,C}## equals:

What is the correlation of a variable X with a constant C? The corr(X, C) or ##\rho_{X,C}## equals:

Question 560 standard deviation, variance
The standard deviation and variance of a stock's annual returns are calculated over a number of years. The units of the returns are percent per annum ##(\% pa)##.
What are the units of the standard deviation ##(\sigma)## and variance ##(\sigma^2)## of returns respectively?
(a) Percentage points per annum ##(\text{pp pa})## and percentage points per annum ##(\text{pp pa})##.
(b) Percentage points per annum ##(\text{pp pa})## and percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)##.
(c) Percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)## and percentage points per annum ##(\text{pp pa})##.
(d) Percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)## and percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)##.
(e) Percent per annum ##(\% pa)## and percent per annum ##(\% pa)##.
Hint: Visit Wikipedia to understand the difference between percentage points ##(\text{pp})## and percent ##(\%)##.

Question 561 covariance, correlation
The covariance and correlation of two stocks X and Y's annual returns are calculated over a number of years. The units of the returns are in percent per annum ##(\% pa)##.
What are the units of the covariance ##(\sigma_{X,Y})## and correlation ##(\rho_{X,Y})## of returns respectively?
(e) Percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)## and a pure number with no units.

Question 307 risk, variance
Let the variance of returns for a share per month be ##\sigma_\text{monthly}^2##. What is the formula for the variance of the share's returns per year ##(\sigma_\text{yearly}^2)##?
Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average.
(a) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2##
(b) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times 12##
(c) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times 12^2##
(d) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times \sqrt{12}##
(e) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times {12}^{1/3}##

Question 80 CAPM, risk, diversification
Diversification is achieved by investing in a large amount of stocks. What type of risk is reduced by diversification?
(a) Idiosyncratic risk.
(b) Systematic risk.
(c) Both idiosyncratic and systematic risk.
(d) Market risk.
(e) Beta risk.
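The covariance and correlation identities in Questions 562 and 563, and the iid variance scaling in Question 307, can be checked numerically; a small sketch using simulated data (the numbers are arbitrary assumptions):

    import numpy as np
    x = np.random.default_rng(0).normal(size=100_000)
    c = np.full_like(x, 3.0)                      # a "variable" that is constant
    print(np.cov(x, x)[0, 1], np.var(x, ddof=1))  # cov(X,X) equals var(X)
    print(np.corrcoef(x, x)[0, 1])                # corr(X,X) = 1
    print(np.cov(x, c)[0, 1])                     # cov(X,C) = 0, so corr(X,C) is 0/0, undefined
    sigma2_monthly = np.var(x, ddof=1)
    sigma2_yearly = 12 * sigma2_monthly           # iid returns: variances add across 12 months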
Question 90 CAPM, risk
According to the theory of the Capital Asset Pricing Model (CAPM), total variance can be broken into two components, systematic variance and idiosyncratic variance. Which of the following events would be considered the most diversifiable according to the theory of the CAPM?
(a) Global economic recession.
(b) A major terrorist attack, grounding all commercial aircraft in the US and Europe.
(c) An increase in corporate tax rates.
(d) The outbreak of world war.
(e) A company's poor earnings announcement.

Question 112 CAPM, risk
According to the theory of the Capital Asset Pricing Model (CAPM), total risk can be broken into two components, systematic risk and idiosyncratic risk. Which of the following events would be considered a systematic, undiversifiable event according to the theory of the CAPM?
(a) A decrease in house prices in one city.
(b) An increase in mining industry tax rates.
(d) A case of fraud at a major retailer.
(e) A poor earnings announcement from a major firm.

Question 86 CAPM
Treasury bonds currently have a return of 5% pa. A stock has a beta of 0.5 and the market return is 10% pa. What is the expected return of the stock?
(a) 5% pa
(b) 7.5% pa
(c) 10% pa
(d) 12.5% pa
(e) 20% pa

Question 326 CAPM
A fairly priced stock has an expected return equal to the market's. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the stock's beta?
(b) 0.5

Question 232 CAPM, DDM
A stock has a beta of 0.5. Its next dividend is expected to be $3, paid one year from now. Dividends are expected to be paid annually and grow by 2% pa forever. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. All returns are effective annual rates.
(c) $40.8
(d) $40
(e) $37.5

Question 244 CAPM, SML, NPV, risk
Examine the following graph which shows stocks' betas ##(\beta)## and expected returns ##(\mu)##:
Assume that the CAPM holds and that future expectations of stocks' returns and betas are correctly measured. Which statement is NOT correct?
(a) Asset A is underpriced.
(b) Asset B has a negative alpha (a negative excess return or abnormal return).
(c) Buying asset C would be a positive NPV investment.
(d) Asset D has less systematic variance than the market portfolio (M).
(e) Asset E is fairly priced.

Question 235 SML, NPV, CAPM, risk
The security market line (SML) shows the relationship between beta and expected return.
Investment projects that plot on the SML would have:
(a) A positive NPV and should be accepted.
(b) A zero NPV.
(c) A negative NPV and should be rejected.
(d) A large amount of diversifiable risk.
(e) Zero diversifiable risk.

Question 110 CAPM, SML, NPV
Investment projects that plot above the SML would have:
(a) A positive NPV.
(c) A negative NPV.

Question 72 CAPM, portfolio beta, portfolio risk

Stock   Expected return   Standard deviation   Correlation   Beta   Dollars
A       0.2               0.4                  0.12          0.5    40
B       0.3               0.8                                1.5    80

What is the beta of the above portfolio?
(b) 0.833333333
(d) 1.166666667
(e) 1.4

Stock A has a beta of 0.5 and stock B has a beta of 1. Which statement is NOT correct?
(a) Stock A has less systematic risk than stock B, so stock A's return should be less than stock B's.
(b) Stock B has the same systematic risk as the market, so its return should be the same as the market's.
(c) Stock B has the same beta as the market, so it also has the same total risk as the market.
(d) If stock A and B were combined in a portfolio with weights of 50% each, the beta of the portfolio would be 0.75.
(e) Stocks A and B have more systematic risk than the risk free security (government bonds) so their return should be greater than the risk free rate.
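A quick sketch of the portfolio beta in Question 72 and the CAPM return in Question 86 (my arithmetic, using only the given figures):

    w_a, w_b = 40/120, 80/120
    beta_p = w_a*0.5 + w_b*1.5     # betas combine as a weighted average: about 1.1667
    rf, rm = 0.05, 0.10
    r = rf + 0.5*(rm - rf)         # CAPM: 0.075, i.e. 7.5% pa for a beta of 0.5
    print(beta_p, r)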
Which statement is the most correct?
(a) The risk free rate has zero systematic risk and zero idiosyncratic risk.
(b) The market portfolio has zero idiosyncratic risk.
(c) The market portfolio has zero systematic risk.
(d) a and b are true.
(e) a and c are true.

Question 92 CAPM, SML, CML
Which statement(s) are correct?
(i) All stocks that plot on the Security Market Line (SML) are fairly priced.
(ii) All stocks that plot above the Security Market Line (SML) are overpriced.
(iii) All fairly priced stocks that plot on the Capital Market Line (CML) have zero idiosyncratic risk.
Select the most correct response:
(a) Only (i) is true.
(b) Only (ii) is true.
(c) Only (iii) is true.
(d) All statements (i), (ii) and (iii) are true.
(e) Only statements (i) and (iii) are true.

Question 116 capital structure, CAPM
A firm changes its capital structure by issuing a large amount of equity and using the funds to repay debt. Its assets are unchanged. Ignore interest tax shields.
According to the Capital Asset Pricing Model (CAPM), which statement is correct?
(a) The beta of the firm's assets will increase.
(b) The beta of the firm's assets will decrease.
(c) The beta of the firm's equity will increase.
(d) The beta of the firm's equity will decrease.
(e) The beta of the firm's equity will be unchanged.

Question 248 CAPM, DDM, income and capital returns
The total return of any asset can be broken down in different ways. One possible way is to use the dividend discount model (or Gordon growth model):
###p_0 = \frac{c_1}{r_\text{total}-r_\text{capital}}###
Which, since ##c_1/p_0## is the income return (##r_\text{income}##), can be expressed as:
###r_\text{total}=r_\text{income}+r_\text{capital}###
So the total return of an asset is the income component plus the capital or price growth component.
Another way to break up total return is to use the Capital Asset Pricing Model:
###r_\text{total}=r_\text{f}+β(r_\text{m}- r_\text{f})###
###r_\text{total}=r_\text{time value}+r_\text{risk premium}###
So the risk free rate is the time value of money and the term ##β(r_\text{m}- r_\text{f})## is the compensation for taking on systematic risk.
Using the above theory and your general knowledge, which of the below equations, if any, are correct?
(I) ##r_\text{income}=r_\text{time value}##
(II) ##r_\text{income}=r_\text{risk premium}##
(III) ##r_\text{capital}=r_\text{time value}##
(IV) ##r_\text{capital}=r_\text{risk premium}##
(V) ##r_\text{income}+r_\text{capital}=r_\text{time value}+r_\text{risk premium}##
Which of the equations are correct?
(a) I, IV and V only.
(b) II, III and V only.
(c) V only.
(d) All are true.
(e) None are true.

Question 410 CAPM, capital budgeting
The CAPM can be used to find a business's expected opportunity cost of capital:
###r_i=r_f+β_i (r_m-r_f)###
What should be used as the risk free rate ##r_f##?
(a) The current central bank policy rate (RBA overnight money market rate).
(b) The current 30 day federal government treasury bill rate.
(c) The average historical 30 day federal government treasury bill rate over the last 20 years.
(d) The current 30 year federal government treasury bond rate.
(e) The average historical 30 year federal government treasury bond rate over the last 20 years.

Question 408 leverage, portfolio beta, portfolio risk, real estate, CAPM
You just bought a house worth $1,000,000. You financed it with an $800,000 mortgage loan and a deposit of $200,000.
You estimate that:
The house has a beta of 1;
The mortgage loan has a beta of 0.2.
What is the beta of the equity (the $200,000 deposit) that you have in your house? Also, if the risk free rate is 5% pa and the market portfolio's return is 10% pa, what is the expected return on equity in your house?
Ignore taxes, assume that all cash flows (interest payments and rent) were paid and received at the end of the year, and all rates are effective annual rates.
(a) The beta of equity is 5 and the expected return on equity is 30% pa.
(b) The beta of equity is 4.2 and the expected return on equity is 26% pa.
(c) The beta of equity is 0.86 and the expected return on equity is 9.3% pa.
(d) The beta of equity is 1 and the expected return on equity is 10% pa.
(e) The beta of equity is 0.6 and the expected return on equity is 8% pa.

Question 114 WACC, capital structure, risk
A firm's WACC before tax would decrease due to:
(a) the firm's industry becoming more systematically risky, for example if it was a mining company and commodities prices varied more strongly and were more positively correlated with the market portfolio.
(b) the firm's industry becoming less systematically risky, for example if it was a child care centre and the government announced permanently higher subsidies for parents' child care expenses.
(c) the firm issuing more debt and using the proceeds to repurchase stock.
(d) the firm issuing more equity and using the proceeds to pay off debt holders.
(e) none of the above.

Question 117 WACC
A firm can issue 5 year annual coupon bonds at a yield of 8% pa and a coupon rate of 12% pa.
The beta of its levered equity is 1. Five year government bonds yield 5% pa with a coupon rate of 6% pa. The market's expected dividend return is 4% pa and its expected capital return is 6% pa.
The firm's debt-to-equity ratio is 2:1. The corporate tax rate is 30%.
What is the firm's after-tax WACC? Assume a classical tax system.
(d) 7.80%

Question 302 WACC, CAPM
Which of the following statements about the weighted average cost of capital (WACC) is NOT correct?
(a) WACC before tax ##= r_D.\dfrac{D}{V_L} + r_{EL}.\dfrac{E_L}{V_L}##
(b) WACC before tax ##= r_f + \beta_{VL}.(r_m - r_f)##
(c) WACC after tax ##= r_D.(1-t_c).\dfrac{D}{V_L} + r_{EL}.\dfrac{E_L}{V_L}##
(d) WACC after tax ##= r_f + \beta_{VL}.(r_m - r_f) - \dfrac{r_D.D.t_c}{V_L}##
(e) WACC after tax ##= r_f + \beta_{VL}.(r_m - r_f)##
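A sketch of the two WACC definitions from Question 302, using Question 117's inputs as an illustration. My reading of those inputs (an assumption): the cost of debt is the 8% bond yield, and the CAPM with an equity beta of 1 gives the cost of equity; whether the result matches the intended option depends on the full option list, which is not reproduced above:

    rd, tc = 0.08, 0.30
    rf = 0.05
    rm = 0.04 + 0.06                       # market dividend return + capital return
    re = rf + 1.0*(rm - rf)                # levered equity beta of 1 -> 10% pa
    d_v, e_v = 2/3, 1/3                    # from the 2:1 debt-to-equity ratio
    wacc_before = d_v*rd + e_v*re                 # about 8.67% pa
    wacc_after = d_v*rd*(1 - tc) + e_v*re         # about 7.07% pa
    print(wacc_before, wacc_after)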
Question 303 WACC, CAPM, CFFA
There are many different ways to value a firm's assets. Which of the following will NOT give the correct market value of a levered firm's assets ##(V_L)##?
Assume that:
The firm is financed by listed common stock and vanilla annual fixed coupon bonds, which are both traded in a liquid market.
The bonds' yield is equal to the coupon rate, so the bonds are issued at par. The yield curve is flat and yields are not expected to change. When bonds mature they will be rolled over by issuing the same number of new bonds with the same expected yield and coupon rate, and so on forever.
Tax rates on the dividends and capital gains received by investors are equal, and capital gains tax is paid every year, even on unrealised gains regardless of when the asset is sold.
There is no re-investment of the firm's cash back into the business. All of the firm's excess cash flow is paid out as dividends so real growth is zero.
The firm operates in a mature industry with zero real growth.
All cash flows and rates in the below equations are real (not nominal) and are expected to be stable forever. Therefore the perpetuity equation with no growth is suitable for valuation.
(a) ##V_L = n_\text{shares}.P_\text{share} + n_\text{bonds}.P_\text{bond}##
(b) ##V_L = n_\text{shares}.\dfrac{\text{Dividend per share}}{r_f + \beta_{EL}(r_m - r_f)} + n_\text{bonds}.\dfrac{\text{Coupon per bond}}{r_f + \beta_D(r_m - r_f)}##
(c) ##V_L = \dfrac{\text{CFFA}_{L}}{r_\text{WACC before tax}}##
(d) ##V_L = \dfrac{\text{CFFA}_{U}}{r_\text{WACC after tax}}##
(e) ##V_L = \dfrac{\text{CFFA}_{L}}{r_\text{WACC after tax}}##
Where:
###r_\text{WACC before tax} = r_D.\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital before tax}###
###r_\text{WACC after tax} = r_D.(1-t_c).\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital after tax}###
###NI_L=(Rev-COGS-FC-Depr-\mathbf{IntExp}).(1-t_c) = \text{Net Income Levered}###
###CFFA_L=NI_L+Depr-CapEx - \varDelta NWC+\mathbf{IntExp} = \text{Cash Flow From Assets Levered}###
###NI_U=(Rev-COGS-FC-Depr).(1-t_c) = \text{Net Income Unlevered}###
###CFFA_U=NI_U+Depr-CapEx - \varDelta NWC= \text{Cash Flow From Assets Unlevered}###

Question 100 market efficiency, technical analysis, joint hypothesis problem
A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct?
(I) Weak form market efficiency is broken.
(II) Semi-strong form market efficiency is broken.
(III) Strong form market efficiency is broken.
(IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk.
(a) Only III is true.
(b) Only II and III are true.
(c) Only I, II and III are true.
(d) Only IV is true.
(e) Either I, II and III are true, or IV is true, or they are all true.

Question 242 technical analysis, market efficiency
Select the most correct statement from the following. 'Chartists', also known as 'technical traders', believe that:
(a) Markets are weak-form efficient.
(b) Markets are semi-strong-form efficient.
(c) Past prices cannot be used to predict future prices.
(d) Past returns can be used to predict future returns.
(e) Stock prices reflect all publicly available information.

Question 243 fundamental analysis, market efficiency
Fundamentalists who analyse company financial reports and news announcements (but who don't have inside information) will make positive abnormal returns if:
(a) Markets are weak and semi-strong form efficient but strong-form inefficient.
(b) Markets are weak form efficient but semi-strong and strong-form inefficient.
(c) Technical traders make positive excess returns.
(d) Chartists make negative excess returns.
(e) Insiders make negative excess returns.

Question 339 bond pricing, inflation, market efficiency, income and capital returns
Economic statistics released this morning were a surprise: they show a strong chance of consumer price inflation (CPI) reaching 5% pa over the next 2 years. This is much higher than the previous forecast of 3% pa.
A vanilla fixed-coupon 2-year risk-free government bond was issued at par this morning, just before the economic news was released.
What is the expected change in bond price after the economic news this morning, and in the next 2 years? Assume that:
Inflation remains at 5% over the next 2 years.
Investors demand a constant real bond yield.
The bond price falls by the (after-tax) value of the coupon the night before the ex-coupon date, as in real life.
(a) Today the price would have increased significantly. Over the next 2 years, the bond price is expected to increase, measured on each ex-coupon date.
(b) Today the price would have increased significantly. Over the next 2 years, the bond price is expected to be unchanged, measured on each ex-coupon date.
(c) Today the price would have been unchanged.
(d) Today the price would have decreased significantly.
(e) Today the price would have decreased significantly.

Question 338 market efficiency, CAPM, opportunity cost, technical analysis
A man inherits $500,000 worth of shares.
He believes that by learning the secrets of trading, keeping up with the financial news and doing complex trend analysis with charts, he can quit his job and become a self-employed day trader in the equities markets.
What is the expected gain from doing this over the first year? Measure the net gain in wealth received at the end of this first year due to the decision to become a day trader. Assume the following:
He earns $60,000 pa in his current job, paid in a lump sum at the end of each year.
He enjoys examining share price graphs and day trading just as much as he enjoys his current job.
Stock markets are weak form and semi-strong form efficient.
He has no inside information.
He makes 1 trade every day and there are 250 trading days in the year. Trading costs are $20 per trade. His broker invoices him for the trading costs at the end of the year.
The shares that he currently owns and the shares that he intends to trade have the same level of systematic risk as the market portfolio.
The market portfolio's expected return is 10% pa.
Measure the net gain over the first year as an expected wealth increase at the end of the year.
(a) $110,000
(c) $45,000
(d) -$15,000
(e) -$65,000

Question 65 annuity with growth, needs refinement
Which of the below formulas gives the present value of an annuity with growth?
(a) ##\dfrac{C_1}{r-g}\left(1-\dfrac{1}{(1+r)^T}\right)##
(b) ##\dfrac{C_1(1+g)^T}{r-g}\left(1-\dfrac{1}{(1+r)^T}\right)##
(c) ##\dfrac{C_1}{r-g}\left(1-\dfrac{1+g}{(1+r)^T}\right)##
(d) ##\dfrac{C_1}{r-g}\left(1-\left(\dfrac{1+g}{1+r}\right)^T \right)##
(e) ##\dfrac{C_1(1+g)}{r-g}\left(1-\dfrac{1}{(1+r)^T}\right)##
Hint: The equation of a perpetuity without growth is:
###V_\text{0, perp without growth} = \frac{C_\text{1}}{r}###
The formula for the present value of an annuity without growth is derived from the formula for a perpetuity without growth. The idea is that an annuity with T payments from t=1 to T inclusive is equivalent to a perpetuity starting at t=1 with fixed positive cash flows, plus a perpetuity starting T periods later (t=T+1) with fixed negative cash flows. The positive and negative cash flows after time period T cancel each other out, leaving the positive cash flows between t=1 to T, which is the annuity.
###\begin{aligned} V_\text{0, annuity} &= V_\text{0, perp without growth from t=1} - V_\text{0, perp without growth from t=T+1} \\ &= \dfrac{C_\text{1}}{r} - \dfrac{ \left( \dfrac{C_\text{T+1}}{r} \right) }{(1+r)^T} \\ &= \dfrac{C_\text{1}}{r} - \dfrac{ \left( \dfrac{C_\text{1}}{r} \right) }{(1+r)^T} \\ &= \dfrac{C_\text{1}}{r}\left(1 - \dfrac{1}{(1+r)^T}\right) \\ \end{aligned}###
The equation of a perpetuity with growth is:
###V_\text{0, perp with growth} = \dfrac{C_\text{1}}{r-g}###
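A brute-force check of the growing annuity formula in option (d) above (my sketch; the cash flow, rate and horizon are arbitrary assumptions):

    def pv_growing_annuity(c1, r, g, T):
        """PV of T cash flows growing at g, the first being c1 at t=1."""
        return c1/(r - g) * (1 - ((1 + g)/(1 + r))**T)

    c1, r, g, T = 100.0, 0.10, 0.03, 20
    direct = sum(c1*(1 + g)**(t - 1) / (1 + r)**t for t in range(1, T + 1))
    print(pv_growing_annuity(c1, r, g, T), direct)  # both print the same value (about 1045)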
Question 416 real estate, market efficiency, income and capital returns, DDM, CAPM
A residential real estate investor believes that house prices will grow at a rate of 5% pa and that rents will grow by 2% pa forever. All rates are given as nominal effective annual returns.
Assume that:
His forecast is true.
Real estate is and always will be fairly priced and the capital asset pricing model (CAPM) is true.
Ignore all costs such as taxes, agent fees, maintenance and so on.
All rental income cash flow is paid out to the owner, so there is no re-investment and therefore no additions or improvements made to the property.
The non-monetary benefits of owning real estate and renting remain constant.
Which one of the following statements is NOT correct? Over time:
(a) The rental yield will fall and approach zero.
(b) The total return will fall and approach the capital return (5% pa).
(c) One or all of the following must fall: the systematic risk of real estate, the risk free rate or the market risk premium.
(d) If the country's nominal wealth growth rate is 4% pa and the nominal real estate growth rate is 5% pa then real estate will approach 100% of the country's wealth over time.
(e) If the country's nominal gross domestic production (GDP) growth rate is 4% pa and the nominal real estate rent growth rate is 2% pa then real estate rent will approach 100% of the country's GDP over time.

Question 464 mispriced asset, NPV, DDM, market efficiency
A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Assume that there are no dividend payments so the entire 15% total return is all capital return.
Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% return lasts for the next 100 years (t=0 to 100), then reverts to 10% pa after that time? Also, what is the NPV of the investment if the 15% return lasts forever?
In both cases, assume that the required return of 10% remains constant. All returns are given as effective annual rates.
The answer choices below are given in the same order (15% for 100 years, and 15% forever):
(a) $0, $0
(b) $1,977.19, $2,000
(c) $2,977.19, $3,000
(d) $499.96, $500
(e) $84,214.9, Infinite

Question 569 personal tax
The average weekly earnings of an Australian adult worker before tax was $1,542.40 per week in November 2014 according to the Australian Bureau of Statistics. Therefore average annual earnings before tax were $80,204.80 assuming 52 weeks per year.
Personal income tax rates published by the Australian Tax Office are reproduced for the 2014-2015 financial year in the table below:

Taxable income         Tax on this income
0 – $18,200            Nil
$18,201 – $37,000      19c for each $1 over $18,200
$37,001 – $80,000      $3,572 plus 32.5c for each $1 over $37,000
$80,001 – $180,000     $17,547 plus 37c for each $1 over $80,000
$180,001 and over      $54,547 plus 45c for each $1 over $180,000

The above rates do not include the Medicare levy of 2%. Exclude the Medicare levy from your calculations.
How much personal income tax would you have to pay per year if you earned $80,204.80 per annum before-tax?
(e) $3,638.56
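Question 569's progressive tax arithmetic, as a minimal Python sketch (a check of the table above; only one answer option is reproduced in the source, so no claim is made about which option letter this corresponds to):

    def income_tax(income):
        """2014-15 ATO resident rates from the table above, excluding the Medicare levy."""
        if income <= 18200: return 0.0
        if income <= 37000: return 0.19*(income - 18200)
        if income <= 80000: return 3572 + 0.325*(income - 37000)
        if income <= 180000: return 17547 + 0.37*(income - 80000)
        return 54547 + 0.45*(income - 180000)

    print(income_tax(80204.80))   # 17622.776, i.e. about $17,622.78 per year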
Question 448 franking credit, personal tax on dividends, imputation tax system
A small private company has a single shareholder. This year the firm earned a $100 profit before tax. All of the firm's after tax profits will be paid out as dividends to the owner.
The corporate tax rate is 30% and the sole shareholder's personal marginal tax rate is 45%.
The Australian imputation tax system applies because the company generates all of its income in Australia and pays corporate tax to the Australian Tax Office. Therefore all of the company's dividends are fully franked. The sole shareholder is an Australian for tax purposes and can therefore use the franking credits to offset his personal income tax liability.
What will be the personal tax payable by the shareholder and the corporate tax payable by the company?
(a) Personal tax of $6.43 and corporate tax of $45.
(b) Personal tax of $15 and corporate tax of $30.
(c) Personal tax of $16.5 and corporate tax of $45.
(d) Personal tax of $31.5 and corporate tax of $30.
(e) Personal tax of $45 and corporate tax of $0.
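Question 448's imputation arithmetic as a short sketch (mine, using only the given figures):

    tc, tp = 0.30, 0.45
    profit = 100.0
    company_tax = tc*profit                      # $30 paid by the company
    dividend = profit - company_tax              # $70 fully franked dividend
    grossed_up = dividend + company_tax          # shareholder is assessed on $100
    personal_tax = tp*grossed_up - company_tax   # $45 less the $30 franking credit
    print(company_tax, personal_tax)             # 30.0 and 15.0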
Question 449 personal tax on dividends, classical tax system
The United States' classical tax system applies because the company generates all of its income in the US and pays corporate tax to the Internal Revenue Service. The shareholder is also an American for tax purposes.

Question 309 stock pricing, ex dividend date
A company announces that it will pay a dividend, as the market expected. The company's shares trade on the stock exchange which is open from 10am in the morning to 4pm in the afternoon each weekday.
When would the share price be expected to fall by the amount of the dividend? Ignore taxes.
The share price is expected to fall during the:
(a) Day of the payment date, between the payment date's morning opening price and afternoon closing price.
(b) Night before the payment date, between the previous day's afternoon closing price and the payment date's morning opening price.
(c) Day of the ex-dividend date, between the ex-dividend date's morning opening price and afternoon closing price.
(d) Night before the ex-dividend date, between the last with-dividend date's afternoon closing price and the ex-dividend date's morning opening price.
(e) Day of the last with-dividend date, between the with-dividend date's morning opening price and afternoon closing price.

Question 70 payout policy
Due to floods overseas, there is a cut in the supply of the mineral iron ore and its price increases dramatically. An Australian iron ore mining company therefore expects a large but temporary increase in its profit and cash flows. The mining company does not have any positive NPV projects to begin, so what should it do? Select the most correct answer.
(a) Pay out the excess cash by increasing the regular dividend, and cutting it later.
(b) Pay out a special dividend.
(c) Conduct an on or off-market share repurchase.
(d) Conduct a share dividend (also called a 'bonus issue').
(e) Either b or c.

Question 202 DDM, payout policy
Currently, a mining company has a share price of $6 and pays constant annual dividends of $0.50. The next dividend will be paid in 1 year. Suddenly and unexpectedly the mining company announces that due to higher than expected profits, all of these windfall profits will be paid as a special dividend of $0.30 in 1 year.
If investors believe that the windfall profits and dividend is a one-off event, what will be the new share price? If investors believe that the additional dividend is actually permanent and will continue to be paid, what will be the new share price? Assume that the required return on equity is unchanged.
Choose from the following, where the first share price includes the one-off increase in earnings and dividends for the first year only ##(P_\text{0 one-off})##, and the second assumes that the increase is permanent ##(P_\text{0 permanent})##:
(a) ##P_\text{0 one-off} = 9.6000, \space \space P_\text{0 permanent} = 6.2766##
(b) ##P_\text{0 one-off} = 6.3000, \space \space P_\text{0 permanent} = 6.2769##
(c) ##P_\text{0 one-off} = 9.6000, \space \space P_\text{0 permanent} = 6.3000##
(d) ##P_\text{0 one-off} = 6.2769, \space \space P_\text{0 permanent} = 9.6000##
(e) ##P_\text{0 one-off} = 6.3000, \space \space P_\text{0 permanent} = 9.6000##
Note: When a firm makes excess profits they sometimes pay them out as special dividends. Special dividends are just like ordinary dividends but they are one-off and investors do not expect them to continue, unlike ordinary dividends which are expected to persist.

Question 409 NPV, capital structure, capital budgeting
A pharmaceutical firm has just discovered a valuable new drug. So far the news has been kept a secret.
The net present value of making and commercialising the drug is $200 million, but $600 million of bonds will need to be issued to fund the project and buy the necessary plant and equipment.
The firm will release the news of the discovery and bond raising to shareholders simultaneously in the same announcement. The bonds will be issued shortly after.
Once the announcement is made and the bonds are issued, what is the expected increase in the value of the firm's assets (ΔV), market capitalisation of debt (ΔD) and market cap of equity (ΔE)? The triangle symbol is the Greek letter capital delta which means change or increase in mathematics.
Ignore the benefit of interest tax shields from having more debt.
Remember: ##ΔV = ΔD+ΔE##
(a) ##ΔV=800m, ΔD = 600m, ΔE=200m##
(b) ##ΔV=200m, ΔD = 600m, ΔE= 0##
(c) ##ΔV=200m, ΔD =0m, \quad ΔE=200m##
(d) ##ΔV=400m, ΔD = 600m, ΔE=-200m##
(e) ##ΔV=800m, ΔD = 800m, ΔE= 0##
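The announcement-effect accounting in Question 409 can be sketched as follows (an illustration of the ΔV = ΔD + ΔE identity; it assumes the bonds are fairly priced so the NPV accrues to the existing shareholders):

    npv, debt_raised = 200e6, 600e6
    dV = npv + debt_raised    # $600m of new plant/equipment and cash plus the $200m NPV
    dD = debt_raised          # fairly priced bonds are worth what was paid for them
    dE = dV - dD              # the NPV accrues to the existing shareholders
    print(dV/1e6, dD/1e6, dE/1e6)   # 800.0, 600.0, 200.0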
A mining firm has just discovered a new mine. So far the news has been kept a secret.
The net present value of digging the mine and selling the minerals is $250 million, but $500 million of new equity and $300 million of new bonds will need to be issued to fund the project and buy the necessary plant and equipment.
The firm will release the news of the discovery and equity and bond raising to shareholders simultaneously in the same announcement. The shares and bonds will be issued shortly after.
Once the announcement is made and the new shares and bonds are issued, what is the expected increase in the value of the firm's assets ##(\Delta V)##, market capitalisation of debt ##(\Delta D)## and market cap of equity ##(\Delta E)##? Assume that markets are semi-strong form efficient.
The triangle symbol ##\Delta## is the Greek letter capital delta which means change or increase in mathematics.
Remember: ##\Delta V = \Delta D+ \Delta E##
(a) ##\Delta V = 250m##, ##ΔD = 300m##, ##ΔE= 250m##
(b) ##\Delta V = 250m##, ##ΔD = 300m##, ##ΔE= 750m##
(c) ##\Delta V = 400m##, ##ΔD = 300m##, ##ΔE= -250m##
(d) ##\Delta V = 1,050m##, ##ΔD = 300m##, ##ΔE= 750m##
(e) ##\Delta V = 1,050m##, ##ΔD = 300m##, ##ΔE= 250m##

Question 513 stock split, reverse stock split, stock dividend, bonus issue, rights issue
Which one of the following statements is NOT correct?
(a) A 3 for 2 stock split means that for every 2 existing shares, all shareholders will receive 1 extra share.
(b) A 3 for 10 bonus issue means that for every 10 existing shares, all shareholders will receive 3 extra shares.
(c) A 20% stock dividend means that for every 10 existing shares, all shareholders will receive 2 extra shares.
(d) A 1 for 10 reverse stock split means that for every 10 existing shares, all shareholders will lose 9 shares, so they will only be left with 1 share.
(e) A 3 for 10 rights issue at a subscription price of $8 means that for every 10 existing shares, all shareholders can sell 3 of their shares back to the company at a price of $8 each, so shareholders receive money.

Question 566 capital structure, capital raising, rights issue, on market repurchase, dividend, stock split, bonus issue
A company's share price fell by 20% and its number of shares rose by 25%. Assume that there are no taxes, no signalling effects and no transaction costs.
Which one of the following corporate events may have happened?
(a) $1 cash dividend when the pre-announcement stock price was $5.
(b) On-market buy-back of 20% of the company's outstanding stock.
(c) 5 for 4 stock split.
(d) 1 for 5 bonus issue.
(e) 1 for 4 rights issue at a subscription price of $3 when the pre-announcement stock price was $5.

Question 567 stock split, capital structure
A company conducts a 4 for 3 stock split. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order.
(a) -33.33%, 50%
(b) -25%, 33.33%
(c) -25%, 25%
(d) -20%, 25%
(e) 33.33%, -25%

Question 568 rights issue, capital raising, capital structure
A company conducts a 1 for 5 rights issue at a subscription price of $7 when the pre-announcement stock price was $10. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order. Ignore all taxes, transaction costs and signalling effects.
(b) -5%, 20%
(c) 0%, 20%
(d) 7.14%, 20%
(e) 11.67%, 0%

Question 212 rights issue
In mid 2009 the listed mining company Rio Tinto announced a 21-for-40 renounceable rights issue. Below is the chronology of events:
04/06/2009. Share price opens at $69.00 and closes at $66.90.
05/06/2009. 21-for-40 rights issue announced at a subscription price of $28.29.
16/06/2009. Last day that shares trade cum-rights. Share price opens at $76.40 and closes at $75.50.
17/06/2009. Shares trade ex-rights. Rights trading commences.
All things remaining equal, what would you expect Rio Tinto's stock price to open at on the first day that it trades ex-rights (17/6/2009)? Ignore the time value of money since time is negligibly short. Also ignore taxes.
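The theoretical ex-rights price for the Rio Tinto question is a value-weighted average of the cum-rights shares and the new subscription shares; a minimal sketch (mine):

    old, new = 40, 21                    # 21-for-40 rights issue
    cum_price, sub_price = 75.50, 28.29
    terp = (old*cum_price + new*sub_price) / (old + new)
    print(round(terp, 2))                # about 59.25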
In late 2003 the listed bank ANZ announced a 2-for-11 rights issue to fund the takeover of New Zealand bank NBNZ. Below is the chronology of events:
23/10/2003. Share price closes at $18.30.
24/10/2003. 2-for-11 rights issue announced at a subscription price of $13. The proceeds of the rights issue will be used to acquire New Zealand bank NBNZ. Trading halt announced in morning before market opens.
28/10/2003. Trading halt lifted. Last (and only) day that shares trade cum-rights. Share price opens at $18.00 and closes at $18.14.
29/10/2003. Shares trade ex-rights.
All things remaining equal, what would you expect ANZ's stock price to open at on the first day that it trades ex-rights (29/10/2003)? Ignore the time value of money since time is negligibly short. Also ignore taxes.
(a) 17.3492
(b) 17.2308
(c) 14.8418
(d) 13.7908
(e) 13.7692

Question 310 foreign exchange rate
Is it possible for all countries' exchange rates to appreciate by 5% in the same year? Yes or no?

An American wishes to convert USD 1 million to Australian dollars (AUD). The exchange rate is 0.8 USD per AUD. How much is the USD 1 million worth in AUD?
(a) AUD 0.2 million.
(b) AUD 0.8 million.
(c) AUD 1 million.
(d) AUD 1.25 million.
(e) AUD 1.8 million.

Question 313 foreign exchange rate, American and European terms
If the AUD appreciates against the USD, will the American terms quote of the AUD increase or decrease?
If the USD appreciates against the AUD, will the American terms quote of the AUD increase or decrease?
If the current AUD exchange rate is USD 0.9686 = AUD 1, what is the European terms quote of the AUD against the USD?
(a) 0.9686 USD per AUD
(b) 0.9686 AUD per USD
(c) 1.0324 USD per AUD
(d) 1.0324 AUD per USD
(e) 1.0324 AUD per EUR
If the AUD appreciates against the USD, will the European terms quote of the AUD increase or decrease?
If the USD appreciates against the AUD, will the European terms quote of the AUD increase or decrease?
How is the AUD normally quoted in Australia? Using American or European terms?

Question 323 foreign exchange rate, monetary policy, American and European terms
The market expects the Reserve Bank of Australia (RBA) to increase the policy rate by 25 basis points at their next meeting. As expected, the RBA increases the policy rate by 25 basis points.
What do you expect to happen to Australia's exchange rate in the short term? The Australian dollar will:
(a) Appreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will increase.
(b) Depreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will decrease.
(c) Appreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will decrease.
(d) Depreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will increase.
(e) Be unaffected by the change in the policy rate, so the exchange rate will remain the same.

Investors expect the Reserve Bank of Australia (RBA) to keep the policy rate steady at their next meeting. Then unexpectedly, the RBA announce that they will increase the policy rate by 25 basis points due to fears that the economy is growing too fast and that inflation will be above their target rate of 2 to 3 per cent.
What do you expect to happen to Australia's exchange rate in the short term? The Australian dollar is likely to:
(a) Appreciate against the USD, so the 'European terms' quote of the AUD (AUD per USD) will increase.
(b) Depreciate against the USD, so the 'European terms' quote of the AUD (AUD per USD) will decrease.
(c) Appreciate against the USD, so the 'European terms' quote of the AUD (AUD per USD) will decrease.
(d) Depreciate against the USD, so the 'European terms' quote of the AUD (AUD per USD) will increase.
(e) Appreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will decrease.
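Question 313's American and European terms quotes are reciprocals of each other, and the earlier USD-to-AUD conversion divides by the USD-per-AUD rate; a quick sketch:

    american = 0.9686            # USD per AUD ('American terms' quote of the AUD)
    european = 1 / american      # AUD per USD ('European terms' quote)
    print(round(european, 4))    # 1.0324
    usd = 1_000_000
    print(usd / 0.8)             # USD 1m at 0.8 USD per AUD buys AUD 1,250,000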
Investors expect the Reserve Bank of Australia (RBA) to decrease the overnight cash rate at their next meeting. Then unexpectedly, the RBA announce that they will keep the policy rate unchanged.

Then unexpectedly, the RBA announce that they will increase the policy rate by 50 basis points due to high future GDP and inflation forecasts.

Question 246 (foreign exchange rate, forward foreign exchange rate, cross currency interest rate parity)

Suppose the Australian cash rate is expected to be 8.15% pa and the US federal funds rate is expected to be 3.00% pa over the next 2 years, both given as nominal effective annual rates. The current exchange rate is at parity, so 1 USD = 1 AUD. What is the implied 2 year forward foreign exchange rate?

(a) 1 USD = 1.1025 AUD
(b) 1.1025 USD = 1 AUD
(c) 1 USD = 1.05 AUD
(d) 1 USD = 1.1 AUD
(e) 1.1 USD = 1 AUD

In the 1997 Asian financial crisis many countries' exchange rates depreciated rapidly against the US dollar (USD). The Thai, Indonesian, Malaysian, Korean and Filipino currencies were severely affected. The below graph shows these Asian countries' currencies in USD per one unit of their currency, indexed to 100 in June 1997. Of the statements below, which is NOT correct? The Asian countries':

(a) Exports denominated in domestic currency became cheaper to foreigners.
(b) Imports denominated in domestic currency became more expensive.
(c) Citizens would have to pay more in their own currency when holidaying in the US.
(d) USD interest payments on USD fixed-interest bonds became more expensive in their own currency.
(e) Domestic currency interest payments on fixed-interest bonds denominated in domestic currency became cheaper in their own currency.
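Question 246 above is a direct application of covered interest rate parity: the forward rate offsets the interest rate differential between the two currencies. A minimal sketch, assuming an AUD-per-USD quote so that the higher AUD rate implies more AUD per USD forward:

```python
def forward_rate(spot_aud_per_usd, r_aud, r_usd, years):
    """Covered interest rate parity: F = S * ((1 + r_AUD) / (1 + r_USD))^T."""
    return spot_aud_per_usd * ((1 + r_aud) / (1 + r_usd)) ** years

print(forward_rate(1.0, 0.0815, 0.03, 2))  # ~1.1025 AUD per USD
```

With the spot at parity, the implied 2 year forward is 1 USD = 1.1025 AUD.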
What is the detection threshold of gravitational waves for LIGO?

Since two neutron stars have now been detected merging via gravitational waves, I was wondering what is the current detection threshold that the LIGO detectors can achieve. Considering that the first observed objects were two black holes with a combined mass of more than 60 solar masses, and they have now detected two neutron stars with a combined mass of only about 3 solar masses, I was wondering what threshold these detectors can actually reach. Obviously there are much larger stars out there which orbit each other, but their size and distance from each other make their gravitational waves too difficult to detect. So what masses and at what distances can we expect to be detected in the future?

gravitational-waves — asked by Adwaenyth

I'm afraid this is not straightforward

The amplitude of the gravitational wave strain signal from a merging compact binary (neutron stars or black holes) is
$$h \sim 10^{-22} \left(\frac{M}{2.8M_{\odot}}\right)^{5/3}\left(\frac{0.01{\rm s}}{P}\right)^{2/3}\left(\frac{100 {\rm Mpc}}{d}\right),$$
where $M$ is the total mass of the system in solar masses, $P$ is the instantaneous orbital period in seconds and $d$ is the distance in 100s of Mpc. $h \sim 10^{-22}$ is a reasonable number for the sensitivity of LIGO to gravitational wave strain where it is most sensitive (at frequencies of 30-300 Hz).

So you can see that to increase the observability you can increase the mass, decrease the period or decrease the distance. But here are the complications. LIGO is only sensitive between about 30-300 Hz and the GW frequencies are twice the orbital frequency. Thus you cannot shorten the period to something very small because it would fall outside the LIGO frequency range, and you also cannot increase the mass to something too much bigger than the black holes that have already been seen, because they merge before they can attain high enough orbital frequencies to be seen. (The frequency at merger is $\propto M^{-1}$.)

A further complication is that the evolution of the signals is more rapid at higher masses. That is, the rate of change of frequency and amplitude increases rapidly with total mass. That is why the recent neutron star merger was detectable for 100 s by LIGO, whereas the more massive black hole mergers could only be seen for about 1 second. But what this means is that you have fewer cycles of the black hole signal that can be "added up" to improve the signal to noise, which means that higher mass sources are less detectable than a simple application of the formula I gave above would suggest. A further complication is that there is a geometric factor depending on how the source and detectors are orientated with respect to each other.

OK, these are complications, but the formula can still be used as an approximation. So if we take the GW170817 signal, the total mass was about $2.8M_{\odot}$ and the source was at 40 Mpc, so at frequencies of 200 Hz (corresponding to a period of 0.01 s) you might have expected a strain signal of about $3\times 10^{-22}$. This gave a very readily detectable signal. The discovery paper (Abbott et al. 2017) says the "horizon" for detection was approximately 218 Mpc for LIGO-Livingston and 107 Mpc for LIGO-Hanford. As the source was much closer than these numbers, it is unsurprising that the detection was strong. Taking the formula above and a fixed orbital period of 0.01 s, we can see that the horizon distance will scale as $\sim M^{5/3}$.
So a $10 M_{\odot} + 10 M_{\odot}$ black hole binary might be seen out to $218 \times (20/2.8)^{5/3} \approx 5.7$ Gpc (this will be an overestimate by a factor of a few because of the issue of the rapidity of the evolution towards merger that I discussed above). A more thorough and technical discussion can be read here, although this is a couple of years out of date and LIGO's reach has been extended by about a factor of five since these calculations were done.

– antlersoft: Maybe this should be a separate question, but why doesn't the gravitational wave "brightness" go down by the square of the distance? (Oct 17 '17 at 15:16)
– Rob Jeffries: Because you are measuring the amplitude of the wave, not the power. (Oct 17 '17 at 15:53)

Figure 1 of this paper shows the horizon distance (distance to which a circularly polarised overhead signal would be detected at SNR 8) for larger mass systems up to a total mass of 1000 solar masses, assuming a search with compact binary coalescence templates. For higher masses the signal amplitude is generally larger, but they merge at lower frequencies so the signals are generally shorter-lived in the sensitive band of the detectors. As they're shorter they also, unfortunately, look a lot more like classes of instrumental glitches, so if they're not that strong (just above a threshold of roughly SNR 8) the background level can be large and lead to lower significance of any candidates.

– Matt Pitkin
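To make the scaling above easy to play with, here is a back-of-the-envelope evaluation of the strain relation quoted in the first answer. It reproduces that answer's own estimates; it is only the quoted approximation, not anything resembling a real LIGO analysis:

```python
def strain(total_mass_msun, period_s, distance_mpc):
    """Order-of-magnitude strain from the scaling relation above:
    h ~ 1e-22 * (M/2.8 Msun)^(5/3) * (0.01 s / P)^(2/3) * (100 Mpc / d)."""
    return (1e-22 * (total_mass_msun / 2.8) ** (5 / 3)
                  * (0.01 / period_s) ** (2 / 3)
                  * (100.0 / distance_mpc))

# GW170817-like system: M = 2.8 Msun, P = 0.01 s, d = 40 Mpc
print(strain(2.8, 0.01, 40))        # ~2.5e-22, i.e. "about 3e-22"

# Horizon scaling at fixed orbital period: d_horizon ~ M^(5/3)
print(218 * (20 / 2.8) ** (5 / 3))  # ~5.7e3 Mpc, i.e. ~5.7 Gpc
```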
Why is surface area not simply $2 \pi \int_{a}^{b} (y)\, dx$ instead of $2 \pi \int_a^b (y \cdot \sqrt{1 + y'^2})\, dx$?

Geometrically speaking, it seems to me that if you have for example $y^2=8x$ revolved around the x-axis, taking the limit of the sum of $n$ surfaces of cylinders as $n$ approaches infinity should give you the surface area of that surface of revolution. This is how the author initially derives the formula for finding the volume of solids of revolution. Take a rectangle under the curve over $\Delta x$ and revolve it around the axis to get an approximation of the volume of the solid over that interval. Add up those rectangles over $n$ changes in $x$ and take the limit as $n$ approaches infinity, which is the integral of the function that gives you the $y$ value (radius of that approximating cylinder) for each $x$ value.

Following the same principle, why wouldn't we be able to take those same cylinders, but instead of taking their volume, take their surface area and take the limit as the number of those cylinders approaches infinity? In other words, in this case each $y$ value is given by $y = \sqrt{8x}$, which is the radius of that cylinder of height $\Delta x$, giving an approximation of the surface area over that interval. Why doesn't that work? Why do we need to deal with arc length? I don't understand why it doesn't work in this case; it seems to me that you're still getting a better and better approximation of the surface area as those cylinders get smaller and smaller, eventually getting the exact surface area in the limit as their number goes to infinity.

PS: I saw this Surface area of a solid of revolution: Why does not $ \int_{b}^{a} 2\pi \,f(x) \,dx $ work? but it's still not making sense visually/geometrically.

calculus, integration — asked by jeremy radcliff

– Hans Lundmark: Possible duplicate of Areas versus volumes of revolution: why does the area require approximation by a cone? (Nov 25 '19 at 15:15)

We have an option to cut the solid of revolution (obtained by revolution of $y = f(x)$ between $x = a$ and $x = b$) into multiple slices in the following manner:

1. each slice is a cylinder;
2. each slice is a section of a cone cut by two parallel planes (a frustum of a cone).

Let the desired slicing be done via partition $$P = \{x_{0}, x_{1}, x_{2}, \dots, x_{n}\}$$ of $[a, b]$. We will apply both the approaches mentioned earlier to calculate the surface area as well as the volume of the solid of revolution.

First we deal with volume, which has an easier analysis. If we slice the solid as cylinders then the approximation of volume is given by $$V(P) = \pi\sum_{i = 1}^{n}\{f(x_{i})\}^{2}(x_{i} - x_{i - 1})\tag{1}$$ which is a Riemann sum for the integral $\pi\int_{a}^{b}\{f(x)\}^{2}\,dx$, and this is the desired volume. If we slice the solid into frustums of a cone we get the approximation of volume as $$V(P) = \frac{\pi}{3}\sum_{i = 1}^{n}\left[\{f(x_{i - 1})\}^{2} + f(x_{i - 1})f(x_{i}) + \{f(x_{i})\}^{2}\right](x_{i} - x_{i -1})\tag{2}$$ which is split into 3 terms, and each term is a Riemann sum for $(\pi/3)\int_{a}^{b}\{f(x)\}^{2}\,dx$, so that the desired volume is again $\pi\int_{a}^{b}\{f(x)\}^{2}\,dx$.

Let's now come to the surface area of the solid of revolution. If we slice the solid into cylinders then the surface area is approximated by $$S(P) = 2\pi\sum_{i = 1}^{n}f(x_{i})(x_{i} - x_{i - 1})\tag{3}$$ which tends to $2\pi\int_{a}^{b}f(x)\, dx$.
If we slice the solid into frustums we get the approximation for surface area as $$S(P) = \pi\sum_{i = 1}^{n}\{f(x_{i - 1}) + f(x_{i})\}\sqrt{(x_{i} - x_{i - 1})^{2} + (f(x_{i}) - f(x_{i - 1}))^{2}}\tag{4}$$ which can be simplified by the use of the mean value theorem as $$S(P) = \pi\sum_{i = 1}^{n}\{f(x_{i - 1}) + f(x_{i})\}\sqrt{1 + \{f'(t_{i})\}^{2}}\cdot(x_{i} - x_{i - 1})\tag{5}$$ for some points $t_{i} \in (x_{i - 1}, x_{i})$. This can be split into two sums, each of which is a Riemann sum for $\pi\int_{a}^{b}f(x)\sqrt{1 + \{f'(x)\}^{2}}\,dx$, so that the desired surface area is $2\pi\int_{a}^{b}f(x)\sqrt{1 + \{f'(x)\}^{2}}\,dx$.

We see that in the case of volume both approaches give the same answer. But in the case of surface area the answers obtained by the two methods are different. Further note that out of the two answers we can easily verify which one is correct by using $y = x$, $a = 0$, $b = 1$, so that the solid of revolution is a circular cone. This verification shows that the technique used in equations $(4), (5)$ gives the correct surface area.

The question which OP is asking is this: why do both the approaches (using cylinders and frustums) give the same result for volume but different results for surface area?

The reason is simple. Both the sums in $(3)$ and $(4)$ are trying to approximate the surface area of the solid, but there is a huge difference between them, namely $$\Delta = 2\pi\sum_{i = 1}^{n}f(x_{i})\left[\sqrt{1 + \{f'(t_{i})\}^{2}} - 1\right](x_{i} - x_{i - 1})$$ and this is itself a non-zero sum unless $f'(x)$ is identically zero. So the approximation $(4)$ is trying to take into account some additional surface area which is left out by sum $(3)$, and this additional part is significant unless $f'(x) = 0$ identically. Hence $(4)$ is a better and correct approximation. In the case of volume both the approximations $(1), (2)$ are Riemann sums for the same integral (but are expressed in slightly different ways).

– Paramanand Singh

– jeremy radcliff: Thank you, you identified exactly what I was really asking and explained perfectly step by step; this is extremely helpful. (Mar 11 '16 at 9:39)
– Paramanand Singh: @jeremyradcliff: Glad to know that I was helpful in some way. (Mar 11 '16 at 11:10)
– R004: Perfect. This clears it for me. (Jul 7 '17 at 8:43)
– R004: I guess I can intuit this. In the case of volumes, the difference in the differential volumes (of cylinder and frustum) is very small when compared with each one's differential volume. So, when the differentials are stacked continuously, the entire volume masks (so to speak) the integrated error. In the case of surfaces, however, the difference in the differentials is comparable to each surface if you imagine it. When we stack these differentials, we see that stacking differential frustum surfaces gets us closest to the true area. I can imagine this and I think it is correct. (Jul 7 '17 at 9:37)
– Paramanand Singh: @R004: I think you have very well summarized my rigorous explanation into an intuitive form which may be easier to grasp for many people. Thanks for your comment. (Jul 7 '17 at 14:13)
In your link, it explains precisely why integrating $y$ doesn't work: because the length $ds$ of a little piece of the arc is not $dx$, but $\sqrt{(dx)^2+(dy)^2}$, which can be written as $${\sqrt{(dx)^2+(dy)^2}\over dx}\cdot dx = \sqrt{1 + \left(dy\over dx\right)^2} \cdot dx.$$

It's the same reasoning why the length of a hypotenuse is $\sqrt{a^2+b^2}$ and not $a+b$. Or this "proof" that $\pi=4$: Is value of $\pi = 4$?

– Christopher Carl Heckman

– jeremy radcliff: Yes, but why does it work for volume, then? That's what I didn't understand in the link. Shouldn't you run into the same problem when integrating to find the volume? What's the difference? (Mar 10 '16 at 4:51)
– Christopher Carl Heckman: The integral $\int_a^b f(x)\,dx$ is based on dividing an interval into pieces and multiplying the width of each interval by the value of $f$ at some value in that interval, and adding the values together. This is not an exact answer; there is some error in the approximation. The error analysis for arc length turns out to be different from the error analysis for volume. (Mar 10 '16 at 4:56)
– jeremy radcliff: There is some error in the approximation, but that error goes to zero when taking the limit as the number of those pieces of the interval goes to infinity, at least in the case of volume. Somehow though, this doesn't happen with surface area; I guess as you said the nature of the error is such that the error doesn't go to zero as $n$ goes to infinity. But why is that the case? What is different in the error analysis between volume and surface area? Visually at least, I just can't "see" the difference. (Mar 10 '16 at 5:00)
– Christopher Carl Heckman: This is the sort of thing covered in Real Analysis. I took it a long time ago and became a combinatorialist, so I don't remember everything from it. Sorry; someone else will have to pick up the thread here. (Mar 10 '16 at 5:05)
– Kitegi: Here's a question about a similar problem. Why is $\pi\neq 4$? Just because you're "approximating" your surfaces in 3D doesn't mean that you approximate their area. In 2D, approximating lines doesn't guarantee that you're approximating their lengths. (Mar 10 '16 at 5:45)

I cannot post a comment yet, but I will try to pick up where Carl Heckman left off. The problem with the 'rectangle' approximation in this case is the same as the idea of $\pi = 4$. Consider the line $y=a x$, for example. Let $a$ become very large, and look at the surface area of the surface of revolution from $x=0$ to $x=1/a$. This should approach the area of a disc in the limit of $a \rightarrow \infty$, since we have a cone whose top angle becomes very flat. However, if we were to put cylinders under this line and simply look at the surface area of the boundary of the cylinders, it is clear that we will not get the right answer, since this area will go to $0$ in this limit (the sum of the widths of the cylinders goes to zero). We see that we somehow have to take into account the sides of the cylinders (perpendicular to the x-axis) as well. If we try to do that, however, we run into the problem Carl Heckman described. For example, look at the surface of revolution of a small part of the line around a point (not in the limit $a \rightarrow \infty$).
$y$ does not vary much in a small enough area, so adding the sides of the cylinders as well, we would find that the surface area would be $$2\pi y(\Delta x+\Delta y) = 2\pi y|a+1| \Delta x .$$ The error in this case is similar to the error you make when you try to find the length of the hypotenuse of a right triangle by adding the lengths of the two other sides. Note also that $\sqrt{1+(y')^2} \Delta x$ is the length of the hypotenuse of a very small right triangle, which appears to be the factor we need to get the right surface area.

I hope I managed to make it slightly clearer why multiplying by $\sqrt{1+(y')^2}$ is the right thing to do without rambling on too much.

– Troy

Why is the formula for a cone $ \pi r l $ ($l$ the slant height) and not $ \pi \bar r h$ ($h$ the cone height)?... for the same reason.

– Narasimham
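A quick numerical check of the accepted answer's central point, for the cone generated by $y=x$ on $[0,1]$: refining the cylinder slices converges to $\pi \approx 3.1416$, not to the true lateral area $\pi\sqrt{2} \approx 4.4429$, while the frustum slices give the correct value. A small sketch (my own illustration, not from any of the answers):

```python
import math

def cylinder_sum(f, a, b, n):
    """Sum of lateral areas of n cylinders: 2*pi*f(x_i)*(x_i - x_{i-1})."""
    dx = (b - a) / n
    return sum(2 * math.pi * f(a + i * dx) * dx for i in range(1, n + 1))

def frustum_sum(f, a, b, n):
    """Sum of lateral areas of n frustums: pi*(f(x_{i-1})+f(x_i))*slant."""
    dx = (b - a) / n
    total = 0.0
    for i in range(1, n + 1):
        x0, x1 = a + (i - 1) * dx, a + i * dx
        slant = math.hypot(dx, f(x1) - f(x0))
        total += math.pi * (f(x0) + f(x1)) * slant
    return total

f = lambda x: x  # cone: true lateral area is pi*sqrt(2) ~ 4.4429
for n in (10, 100, 1000):
    print(n, cylinder_sum(f, 0, 1, n), frustum_sum(f, 0, 1, n))
# cylinder sums approach pi ~ 3.1416; frustum sums give pi*sqrt(2)
```

The cylinder sums do converge, just to the wrong limit: the missing factor is exactly the $\sqrt{1+(y')^2} = \sqrt{2}$ coming from the slant length.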
Resilience analytics: coverage and robustness in multi-modal transportation networks

Abdelkader Baggag, Sofiane Abbar, Tahar Zanouda & Jaideep Srivastava

A multi-modal transportation system of a city can be modeled as a multiplex network with different layers corresponding to different transportation modes. These layers include, but are not limited to, the bus network, the metro network, and the road network. Formally, a multiplex network is a multilayer graph in which the same set of nodes are connected by different types of relationships. Intra-layer relationships denote the road segments connecting stations of the same transportation mode, whereas inter-layer relationships represent connections between different transportation modes within the same station. Given a multi-modal transportation system of a city, we are interested in assessing its quality or efficiency by estimating the coverage, i.e., the portion of the city that can be covered by a random walker who navigates through it within a given time budget (number of steps). We are also interested in the robustness of the whole transportation system, which denotes the degree to which the system is able to withstand a random or targeted failure affecting one or more parts of it. Previous approaches proposed a mathematical framework to numerically compute the coverage in multiplex networks, but the solutions are usually based on eigenvalue decomposition, known to be time consuming and hard to obtain in the case of large systems. In this work, we propose MUME, an efficient algorithm for Multi-modal Urban Mobility Estimation that takes advantage of the special structure of the supra-Laplacian matrix of the transportation multiplex to compute the coverage of the system. We conduct a comprehensive series of experiments to demonstrate the effectiveness and efficiency of MUME on both synthetic and real transportation networks of various cities such as Paris, London, New York and Chicago. A future goal is to use this experience to make projections for a fast growing city like Doha.

In the past years scholars have increasingly realized that urban infrastructure modeling cannot be addressed in a decoupled way: transportation networks in big cities are naturally multi-modal, and as such commuters use different modes to move around the city. This implies that congestion in surface (car) commuting has large effects on other modes of transportation, e.g., bus or metro; the other way around, incidents in the metro system (e.g., a temporary power failure in a station) will have severe consequences on the bus and private car systems. Provided that many cities—and particularly large metropolises—offer open data from all sorts of remote sensing devices, it is tempting to dive deep into those data so as to characterize such interwoven layers, and quantify their mutual effect on each other. In this paper, however, we intend to take one step back and address the question from a theoretical perspective to (i) represent the multi-modal transportation system as a multiplex network; (ii) mathematically characterize the random walk coverage of this multiplex, and (iii) assess the robustness of such coverage when the system is confronted with failure. This strategy—setting a theoretical framework—provides an anticipatory understanding, for instance to avoid possible, unforeseen negative side-effects of urban planning decisions.
Also from an urban planning point of view, our proposal ultimately opens the path for a holistic route ranking, helping authorities to prioritize certain navigation strategies over others, in particular during mega events. Regarding point (i) above, our work adds to the literature on multiplex networks, which has gained a lot of momentum in the last five years. As research on complex systems matured, it became essential to move beyond simple graphs and investigate more complicated (but more realistic) frameworks. At first sight, the expansion from "monoplex" to multiplex may be hailed as an easy one—from a network to a "stack" of networks. However, things turned out to be more complicated, and a generalization of "traditional" network theory had to be developed, e.g., see [1]. To begin with, an adjacency matrix can no longer encode the layer-to-layer interactions of multiplex systems, and rather supra-adjacency matrices or adjacency tensors enter the scene, e.g., see [2–4]. This in turn modifies all the underlying algebra that lies at the base of monoplex network analysis, both regarding static descriptors—degree, transitivity, eigenvector centrality, modularity, etc. [5–8]—and dynamic processes [9], such as mobility on urban multiplexes. The latter—which is the focus of this contribution, see the next Section—has been tackled only recently [10–12]. Needless to say, random walk dynamics—and its neighboring problems, e.g., Mean First Passage Time [13, 14] and network coverage [15, 16]—have a long tradition in network theory [17]. We here rely on De Domenico et al. [18], which offers the first theoretical generalization of random walks to the multiplex framework, as applied to navigability processes on multi-modal transportation networks.

Finally, the concept of robustness has been central to network theory from the early 2000s [19, 20], because of its applied significance together with a long-standing tradition under the topic of percolation theory in Statistical Physics [21, 22]. Closer to urban questions, Arcaute et al. [23] have relied on percolation to explore the limits of regions and cities; Li et al. [24] propose an interesting dynamical percolation approach to unveil complex commuting dynamics in cities; and other works [25, 26] focus on the problem of infrastructural robustness and city design from the idea of progressive structure failure (removal of randomly chosen edges). More recently, Romero et al. [27] studied the impact of external stress on the structure of networks applied to social media platforms; and Baggio et al. [28] looked at the robustness of multiplex networks in a social-ecological context. In the multilayer framework, percolation transitions have also been studied from a theoretical perspective, e.g., see [29].

This paper is organized as follows. In the next section, we present the data model used to represent a multi-modal transportation system of a city, and formalize the problem of efficient computation of the coverage using random walkers. Then we introduce the Multi-modal Urban Mobility Estimation algorithm; in the validation section, we present the experimental evaluation of the model; and we close with a conclusion at the end of the paper.

Data model and problems

Many physical realities can be modeled as sets of interconnected entities, and multi-layer networks are used as a representation of these complex systems.
We therefore observe many dynamical processes being studied on top of these networks, such as diffusion processes [30, 31], synchronization [32, 33], percolation [34, 35], etc. We use, in particular, multiplex networks to provide the comprehensive conceptual framework, see e.g., [1, 18, 30, 36–43], and random walks to study the mobility of commuters within a multimodal transportation network in a city. This will allow the development of optimal navigation strategies.

Multiplex networks

Consider a set of $L$ layers, each representing a type of relationship and containing $N$ nodes. The relationship is represented by an edge and can be anything depending on the complex system; e.g., in social networks, it can be "friendship" on one layer, such as Skype, and "professional" on another layer, such as LinkedIn. For multimodal transportation systems, the nodes represent the components of the complex system, e.g., bus stations in the first layer, metro stations in the second layer, etc. Even though the layers are different from each other, commuters use both of them to move in a large city, and therefore it is important to represent their mobility by taking into account the coupling between layers.

A multilayer network is a pair $\mathcal{M} = (\mathcal{G}, \mathcal{C})$, where $\mathcal{G} = \{ \mathcal{G}^{\alpha} ; \alpha \in \{1, \dots, L\} \}$ is a finite sequence of (directed or undirected, weighted or unweighted) intra-layer graphs $\mathcal{G}^{\alpha } = ( \mathcal{V}^{\alpha }, \mathcal{E}^{\alpha } )$, and $\mathcal{C}$ is the set of inter-layer connections between nodes of different layers $\mathcal{G}^{\alpha }$ and $\mathcal{G}^{ \beta }$, i.e.,
$$ \mathcal{C} = \bigl\{ \mathcal{E}_{\alpha \beta } \subseteq \mathcal{V}^{\alpha } \times \mathcal{V}^{\beta } ;\ \alpha , \beta \in \{1, \dots , L\},\ \alpha \neq \beta \bigr\} . $$
A multiplex network is a special type of multilayer network in which $\mathcal{V}^{1} = \mathcal{V}^{2} = \cdots = \mathcal{V}^{L} = \mathcal{V}$, and the only possible type of interlayer connections are those in which a given node is only connected to its counterpart nodes in the rest of the layers, i.e.,
$$\begin{aligned} {\mathcal{E}}_{\alpha \beta } &= \bigcup_{\alpha , ~\beta } \bigl\{ \bigl[ i(\alpha ), i(\beta ) \bigr] \mid i(\alpha ) \in { \mathcal{V}}^{\alpha }, i(\beta ) \in {\mathcal{V}}^{\beta }, \alpha \neq \beta \bigr\} . \end{aligned}$$
Here, a node-layer $i(\alpha)$ means that node $i$ participates in layer $\alpha$. In other words, multiplex networks consist of a fixed set of nodes connected by different types of links, see Fig. 1. The paradigm of multiplex networks is social systems, since these systems can be seen as a superposition of a multitude of complex social networks, where nodes represent individuals and links capture a variety of different social relations. In this study, we consider node-aligned multiplex networks, i.e., inter-layer connections are "diagonal" in the sense that each node is connected only to its counterpart in the other layers, and the inter-layer edges exist only between consecutive layers.

Example of a multiplex configuration. A three layer multiplex network showing the inter-layer and intra-layer correspondences between different nodes

There have been some attempts in the literature for modeling multilayer networks properly by using the concept of tensors, e.g., see [6, 44].
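To make the node-aligned, diagonal coupling concrete, here is a minimal sketch that assembles the supra-adjacency matrix of a two-layer multiplex (its general block form is given in the next subsection). The uniform switching cost and the zero staying cost are simplifying assumptions of mine, not choices taken from the paper:

```python
import numpy as np

def supra_adjacency(W1, W2, switch_cost=1.0):
    """Assemble the supra-adjacency matrix of a 2-layer node-aligned
    multiplex: intra-layer blocks on the diagonal, diagonal coupling
    blocks D^(12) = D^(21) = switch_cost * I off the diagonal.
    The staying cost D^(alpha alpha) is taken as zero here."""
    n = W1.shape[0]
    D = switch_cost * np.eye(n)          # inter-layer coupling block
    top = np.hstack([W1, D])
    bottom = np.hstack([D, W2])
    return np.vstack([top, bottom])      # shape (2n, 2n)

# toy example: a 3-node path in layer 1, a 3-node triangle in layer 2
W1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
W2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A_bar = supra_adjacency(W1, W2)
print(A_bar.shape)  # (6, 6)
```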
In this study, we use a proper matrix representation, and therefore the supra-adjacency matrix of the multiplex network has the general form
$$ \bar{\mathcal{A}} = \begin{pmatrix} \mathbf{W}^{(1)} + \mathbf{D}^{(11)} & \mathbf{D}^{(12)} & \cdots & \mathbf{D}^{(1L)} \\ \mathbf{D}^{(21)} & \mathbf{W}^{(2)} + \mathbf{D}^{(22)} & \cdots & \mathbf{D}^{(2L)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{D}^{(L1)} & \mathbf{D}^{(L2)} & \cdots & \mathbf{W}^{(L)} + \mathbf{D}^{(LL)} \end{pmatrix}, $$
where $\mathbf{W}^{(\alpha )}$ is the adjacency matrix of layer $\alpha$, $\mathbf{D}^{(\alpha \beta )}$ is a diagonal matrix such that $d_{ii}^{\alpha \beta }$ is the cost associated with the inter-layer edge $[ i(\alpha ), i(\beta ) ]$, and $\mathbf{D}^{( \alpha \alpha )}$ is a diagonal matrix such that $d_{ii}^{\alpha \alpha }$ represents the cost of staying in the same node and in the same layer.

Note that multiplex networks allow an easy integration of traversal times by adding weights to the different edges of the network. Weights of edges in the same layer represent the time it takes to go from one station to another, whereas weights of edges connecting the same station in two different layers represent the time it takes to transfer from one mode of transportation to another. The weights of transferring can also take into account the frequency of each line, which is not part of this study. However, in some cases, frequency can be relevant in bus or rail networks.

The spectrum of the supra-adjacency matrix (and its associated supra-Laplacian matrix) is directly related to several dynamical processes that take place on a multilayer network, such as the diffusion dynamics [45], and the guarantee of a unique stationary state of the Markov process, e.g., see [46]. Represented this way, multiplex networks encode significantly more information than their single layers taken separately, since they include correlations between the roles of the nodes in the different layers. For example, a node that is a hub in the metro layer is more likely to be a hub in the bus layer. Therefore, the degree of nodes in the metro layer is positively correlated with that of the bus layer. Negative correlations may also exist, when the hubs of one layer are not the hubs of another layer. One limitation of multiplex networks, when all lines of a transportation mode are put in the same layer, is that they do not account for the cost of transferring between lines (at the same stop), especially when the stop is represented with the same node in that layer. However, there is a study by Aleta et al. that addresses this issue, see, e.g., [47].

Coverage by random walk

Random walks constitute a fundamental mechanism for many dynamics taking place on complex networks, e.g., see [48]. To assess urban mobility in this multiplex transportation system, we model commuters as random walkers, and we determine the coverage of the random walks, defined as the expected number of distinct nodes visited within $t$ steps, regardless of the layer, on a walk that started from node-layer $j(\alpha )$, i.e.,
$$ {\mathcal{C}}_{j(\alpha )} (t) = {\mathbb{E}} \bigl[\text{\# distinct nodes visited at least once, within } t \text{ steps, on a walk that starts at } j(\alpha )\bigr] , $$
i.e., it is the expected value of the number of nodes in the network being visited at least once in a time less than or equal to $t$, regardless of the layer, assuming that walks started from any other node-layer in the network.

A random walk is a Markovian process [49], which means that the transitions between states are historyless, i.e., the probability of transitioning to the next state depends only on the current state, not on any of the other previous states.
Moreover, at each time step, the random walker has three options: the first one is to stay at the same node, the second one is to move to one of the neighboring nodes on the same layer, and the last one is to switch to one of its counterparts on other layers, as illustrated in Fig. 2.

Random walk on a multiplex. An illustration of different possible moves available for a random walker in a multiplex setting

The mathematical model used in this paper is inspired by the study in [18], and was developed by us in [50]. Therefore, given a multiplex transportation system of $N$ nodes and $L$ layers, the discrete-time master equation describing the probability of finding the walker in node-layer $i(\alpha )$ at time $(t+1)$ can be written as, e.g., see [18, 50, 51],
$$\begin{aligned} p_{i(\alpha )} (t+1) &= \mathcal{A}_{ii}^{\alpha \alpha } p_{i(\alpha )} (t) + \sum_{j\neq i}^{N} { \mathcal{A}}_{ji}^{\alpha \alpha } p_{j( \alpha )} (t) \\ &\quad {}+ \sum _{\beta \neq \alpha }^{L} {\mathcal{A}}_{ii}^{\beta \alpha } p _{i(\beta )} (t) + \sum_{\beta \neq \alpha }^{L} \sum _{ j \neq i }^{N} {\mathcal{A}}_{ji}^{\beta \alpha } p_{j(\beta )} (t) , \end{aligned} \tag{4}$$
which can be assembled in matrix form as $\mathbf{P}(t+1) = \mathcal{A}^{T}\, \mathbf{P}(t)$, where $\mathcal{A}$ is the transition supra-matrix (always assumed to be independent of time), and $\mathbf{P} \in {\mathbb{R}}^{NL}$ is a supra-vector containing the probability of finding the walker at any node-layer $i(\alpha )$, such that
$$\begin{aligned} \mathbf{P} = \bigl[\mathbf{p}_{1}^{T} \quad { \mathbf{p}}_{2}^{T} \quad \cdots \quad \mathbf{p}_{L}^{T} \bigr]^{T} \quad \text{and} \quad \mathbf{p}_{\alpha } = [ p_{1(\alpha )} \quad p_{2(\alpha )} \quad \cdots \quad p_{N(\alpha )} ]^{T}. \end{aligned} \tag{5}$$
For a classical random walk, the transition probability of moving from node-layer $i(\alpha )$ to node-layer $j(\alpha )$, i.e., within the same layer $\alpha$, or of switching to the counterpart of vertex $i$ in layer $\beta$, i.e., to node-layer $i(\beta )$, is uniformly distributed. Therefore we have
$$\begin{aligned} \mathcal{A}_{i j}^{\alpha\beta} = \textstyle\begin{cases} \frac{d_{(i)}^{\alpha \alpha }}{k_{i(\alpha )} + c_{i(\alpha )}}&\text{if } i=j\text{ and } \beta = \alpha, \\ \frac{w_{ij}^{\alpha }}{k_{i(\alpha )} + c_{i(\alpha )}}&\text{if }i\neq j\text{ and }\beta = \alpha, \\ \frac{d_{(i)}^{\alpha \beta }}{k_{i(\alpha )} + c_{i(\alpha )}}&\text{if }i = j\text{ and }\beta \neq \alpha, \\ 0&\text{if }i \neq j\text{ and } \beta \neq \alpha, \end{cases}\displaystyle \end{aligned} \tag{6}$$
where $w_{ij}^{\alpha }$ is the weight of the intra-layer edge $[ i(\alpha ), j(\alpha ) ]$, and $d_{(i)}^{\alpha \beta }$ is the weight of the inter-layer edge $[ i(\alpha ), i(\beta ) ]$, i.e., the cost to switch from layer $\alpha$ to layer $\beta$ at node $i$, while $d_{(i)}^{\alpha \alpha }$ quantifies the cost of staying in the same node and in the same layer. These are, respectively, the elements of the matrices $\mathbf{W}^{(\alpha )}$, $\mathbf{D}^{(\alpha \beta )}$, and $\mathbf{D}^{(\alpha \alpha )}$ appearing in the supra-adjacency matrix above. The intra-layer strength of a node-layer $i(\alpha )$ is $k_{i(\alpha )}$, and $c_{i(\alpha )}$ is the inter-layer strength of node $i$ with respect to its connections to its counterparts in different layers. They are defined as
$$ k_{i(\alpha )} = \sum_{j \in {\mathcal{N}} (i) } w_{ij}^{\alpha } \quad \text{and} \quad c_{i(\alpha )} = \sum_{\beta \neq \alpha } d_{(i)}^{\alpha \beta } , \tag{7} $$
so that the total strength of node-layer $i(\alpha )$ is the sum, i.e., $\kappa_{i(\alpha )} = k_{i(\alpha )} + c_{i(\alpha )}$.
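The transition rule (6) amounts to dividing each row of the supra-adjacency matrix by the total strength $\kappa$ of its node-layer. A small numpy sketch, reusing the toy supra_adjacency helper from the earlier sketch (my own illustration, not the authors' code):

```python
import numpy as np

def transition_matrix(A_bar):
    """Row-normalize the supra-adjacency matrix: entry (i, j) becomes the
    probability of moving from node-layer i to node-layer j, i.e.
    A_bar[i, j] / kappa_i, with kappa_i = k_i + c_i the total strength."""
    kappa = A_bar.sum(axis=1)            # total strength of each node-layer
    return A_bar / kappa[:, None]

A = transition_matrix(A_bar)             # A_bar from the previous sketch
print(np.allclose(A.sum(axis=1), 1.0))   # True: each row is a distribution

P = np.zeros(A.shape[0]); P[0] = 1.0     # walker starts at node-layer 1(1)
for _ in range(10):
    P = A.T @ P                          # one step: P(t+1) = A^T P(t)
print(P.sum())                           # 1.0: probability is conserved
```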
Since each node is coupled only with its counterparts in different layers, only the elements of the type $\mathcal{A}_{ii}^{\alpha \beta }$ are different from zero. Jumps to other nodes in the other layers, as in Lévy random walks, are not allowed, and therefore $\mathcal{A}_{ij}^{\alpha \beta } = 0$ for $i\neq j$ and $\alpha \neq \beta$.

Mathematical analysis of the model

In matrix form, it can be shown that the discrete-time master equation (4) can be written as the initial value problem
$$ \mathbf{P}(t+1) - \mathbf{P}(t) = -\bigl(\mathcal{I} - \mathcal{A}^{T}\bigr)\, \mathbf{P}(t), $$
and without loss of generality, we assume that, at $t=0$, the random walker is in the first layer at node-layer $j(1)$, i.e., $\mathbf{P} (t=0) = \mathbf{P}_{j(1)} (0)$; then the (continuous-time analogue of the) initial value problem admits the following solution
$$ \mathbf{P}(t) = e^{-t (\mathcal{I} - \mathcal{A}^{T})}\, \mathbf{P}_{j(1)}(0), $$
where $e^{-t(\mathcal{I} - \mathcal{A}^{T})}$ is the usual matrix exponential, i.e.,
$$ e^{-t(\mathcal{I} - \mathcal{A}^{T})} = \sum_{k=0}^{\infty } \frac{(-t)^{k}\, (\mathcal{I} - \mathcal{A}^{T})^{k}}{k!} . $$
It is easy to see that $\mathbf{P}_{j(1)} (0) = [\mathbf{e}_{j}^{T} \ \mathbf{0}^{T} \ \cdots \ \mathbf{0}^{T}]^{T}$, with $\mathbf{e}_{j} \in {\mathbb{R}}^{N}$ being the canonical vector, and $\mathbf{0} \in {\mathbb{R}}^{N}$ the vector of all zeros. Let $\mathcal{D}$ be the diagonal matrix containing the total strength of all nodes, i.e., $\mathcal{D} = \operatorname{diag}( \bar{\mathcal{A}}\, \mathbf{1} )$, where $\mathbf{1} \in {\mathbb{R}}^{NL}$ is the vector of all ones; then $\mathcal{A} = \mathcal{D}^{-1} \bar{\mathcal{A}}$. Therefore, the matrix $\bar{\mathcal{L}} = \mathcal{I} - \mathcal{A}$ is the normalized supra-Laplacian of the multiplex network (and, since $\bar{\mathcal{A}}$ is symmetric, $\mathcal{I} - \mathcal{A}^{T} = \bar{\mathcal{L}}^{T}$).

The supra-Laplacian of the multiplex network is $\mathcal{L} = \mathcal{D} - \bar{\mathcal{A}}$. Therefore, the matrix $\mathcal{D}^{-1} \mathcal{L} = \mathcal{I} - \mathcal{D}^{-1} \bar{\mathcal{A}} = \mathcal{I} - \mathcal{A} = \bar{\mathcal{L}}$ is the normalized supra-Laplacian. □

The random walker can be at any layer, so let $p_{i} (t)$ be the probability to find the walker in node $i$ at time $t$, regardless of the layer, i.e.,
$$\begin{aligned} p_{i} (t) = \sum_{\alpha =1}^{L}p_{i(\alpha )}(t) &= {\mathbf{E}}_{i} ^{T} {\mathbf{P}} (t), \end{aligned} \tag{8}$$
where $\mathbf{E}_{i} = [{\mathbf{e}}_{i}^{T}\ \cdots\ {\mathbf{e}}_{i}^{T} ]^{T} \in {\mathbb{R}}^{NL}$. Since $\mathbf{P}(t+1) = \mathcal{A}^{T}\, \mathbf{P}(t)$, and using Equations (8) and (7), we get at time $(t+1)$ the following expression for $p_{i} (t+1)$:
$$ p_{i}(t+1) = \mathbf{E}_{i}^{T}\, \mathcal{A}^{T}\, \mathbf{P}(t) = \mathbf{E}_{i}^{T}\, \bigl(\mathcal{A}^{T}\bigr)^{t+1}\, \mathbf{P}_{j(1)}(0). \tag{9} $$
To determine the coverage, defined as in [18], let's find an expression for the probability $\delta_{i, j} (t)$ not to find the walker in vertex $i$ after $t$ time steps, assuming it started in vertex $j$, that is
$$\begin{aligned} \delta_{i, j} (t) &= \bigl[ 1 - p_{j} (0) \bigr] \prod_{\tau =1}^{t} \bigl[ 1 - p_{i} (\tau ) \bigr] . \end{aligned} \tag{10}$$
From (10), we get the recurrence relation $\delta_{i, j} (t+1) = \delta_{i, j} (t) [ 1 - p_{i} (t+1) ]$, thus leading to the initial value problem
$$ \delta_{i, j} (t+1) = \delta_{i, j} (t)\, \bigl[ 1 - p_{i} (t+1) \bigr], \tag{11} $$
with $\delta_{i,j} (0) = 0$ for $j=i$, since the walker started in vertex $j$ and the probability of not finding it in the same vertex is 0. In the case of $j\neq i$, then $\delta_{i,j} (0) = 1$. The solution to the initial value problem (11) is, see [18],
$$ \delta_{i,j}(t) = \delta_{i,j}(0) \prod_{\tau =1}^{t} \bigl[ 1 - p_{i}(\tau ) \bigr] \approx \delta_{i,j}(0)\, \exp \Biggl[ -\sum_{\tau =1}^{t} p_{i}(\tau ) \Biggr] . $$
Therefore the matrix represents the total number of walks from node i to node j, of any length less than or equal to \((t+1)\). Resilience to failures and percolation Significant progress has been made in understanding the percolation properties of multilayer networks. For example, it has been shown that dependency links can have a serious impact on cascading failure events, in particular for interdependent networks. And, in many multilayer networks, some nodes of a layer are interdependent on nodes in other layers. A node is interdependent on another node in a different layer if it needs the other node to function in order to function itself properly. When two or more networks are interdependent, a fraction of node failures in one layer can trigger a cascade of failures that propagate in the multilayer network. This can mean that a network of networks as a whole may be more fragile than its constituent parts taken in isolation. A dramatic real-world example of a cascade of failures is the blackout that affected much of Italy in 2003, where the shutdown of power stations directly led to the failure of nodes in the Internet communication network, which in turn contributed to further breakdown of power stations, see [52]. Also, the work of Brummitt et al. in [53, 54] shows the importance of considering interconnected networks to better understand cascading failures. It is therefore critical to consider interdependent network properties in order to design robust networks. It is now clear that the robustness of multilayer networks can be evaluated by calculating the size of their mutually connected giant component (MCGC) when a random failure affects a fraction of the nodes in the system, see the pioneering work in [52]. The MCGC of a multilayer network is the largest component that remains after the random failure propagates back and forth in the different layers. The MCGC is defined as the set of nodes \(i(\alpha )\) that satisfy the following recursive set of equations, see [55] at least one neighbor \(j(\alpha )\) of node \(i(\alpha )\) in layer α is in the MCGC; all the interdependent nodes \(i(\beta )\) of node \(i(\alpha )\) are in the mutually connected giant component. Network percolation theory has already been exploited in the urban context for purposes other than the ones in this work, e.g., see [24, 56, 57]. With the road networks for dozens of cities at hand, we can now proceed with the percolation dynamics in two different ways. Both of them share the idea of progressive structural deterioration [19, 20, 58], understood either as error or failure (removal of randomly chosen edges); or attack (removal of important edges, where "importance" can be quantified by some descriptor, such as high betweenness of edges, high centrality of nodes, etc.) Note that in this work we focus on bond percolation (the removal of edges) as opposed to site percolation (the removal of nodes). To quantify the robustness of the multimodal transportation system, we use percolation theory [19] to describe the impact of edge failures in the multiplex on the coverage. We iteratively remove edges from the multiplex and compute the new coverage of the resulting network. Computational approach In [18], a numerical approach to estimate the coverage has been proposed. It is based on the eigendecomposition of the normalized supra-Laplacian . 
Computational approach

In [18], a numerical approach to estimate the coverage has been proposed. It is based on the eigendecomposition of the normalized supra-Laplacian $\bar{\mathcal{L}}$. The general form of the coverage has the following expression
$$ {\mathcal{C}} (t) = 1 - \frac{1}{N^{2}} \sum _{i,j=1}^{N} \delta_{i,j} (0) \exp \biggl[ - \sum_{ \ell \in {\boldsymbol{ \Lambda ^{0} }} } C_{i,j} ( \ell ) t - \sum_{ \ell \in {\boldsymbol{ \Lambda ^{+} }} } C_{i,j} ( \ell ) \frac{ e^{ -\lambda_{\ell }t } - 1 }{ -\lambda_{\ell }} \biggr] , $$
where the $C_{i,j} (\ell )$ are constants depending on the vertex, the transition matrix, the eigendecomposition, and the initial conditions. Each supra-matrix of coefficients $C_{i,j}(\ell )$ is obtained from products of the eigenvectors of the normalized supra-Laplacian, and $\boldsymbol{ \Lambda ^{0} }$ and $\boldsymbol{ \Lambda ^{+} }$ indicate the sets of all null and positive eigenvalues of the normalized supra-Laplacian, respectively. Any solution approach based on the eigendecomposition is time consuming and hard to obtain, especially for large matrices. Therefore it should be avoided.

Proposed algorithm

The main kernel in computing the coverage is the evaluation of the exponent in the expression above. For this, we propose the Multi-modal Urban Mobility Estimation (MUME) Algorithm 1. The way the coverage is computed here results in a tremendous saving in computational time, as opposed to the eigendecomposition of the (normalized) supra-Laplacian matrix proposed in [18].

Floating point operation (flop) counting is a simple, machine-independent measure of algorithm complexity. In multi-modal transportation networks, we usually have a small number of layers; for example in our study, $L=2$, since we consider a bus layer and a metro layer. Hence, in the MUME algorithm, we have one matrix-vector product per iteration (Step 6) whose count is $\ll 2 N^{2}$ flops, because of the sparsity of the matrix $\mathcal{A}$, and one addition of $2N$-vectors (Step 7) whose count is $2N$ flops.
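Algorithm 1 itself is not reproduced in this text, so the following Python sketch is my reading of the description above: one sparse matrix-vector product per iteration (Step 6) and one vector update (Step 7), combined with the coverage equations of the previous section. Names and details are mine; the authors' actual implementation may differ:

```python
import numpy as np

def mume_coverage(A, N, L, t_max):
    """Sketch of MUME: estimate coverage C(t) for t = 1..t_max by running,
    for every start node j in layer 1, the recurrence v <- A^T v (Step 6)
    and accumulating exp(-sum_tau p_i(tau)) per node (a Step 7-style
    vector update)."""
    C = np.zeros(t_max)
    E = np.tile(np.eye(N), (L, 1))            # E[:, i] sums node i over layers
    for j in range(N):                        # walker starts at node-layer j(1)
        v = np.zeros(N * L); v[j] = 1.0
        log_delta = np.zeros(N)               # log prob. of never visiting i
        delta0 = np.ones(N); delta0[j] = 0.0  # start node counts as visited
        for t in range(t_max):
            v = A.T @ v                       # Step 6: matrix-vector product
            p = E.T @ v                       # p_i(t): any-layer probability
            log_delta += np.log1p(-np.minimum(p, 1 - 1e-12))  # Step 7: update
            C[t] += np.sum(1.0 - delta0 * np.exp(log_delta))
    return C / N**2

# usage with the toy transition matrix A from the earlier sketches (N=3, L=2):
# print(mume_coverage(A, N=3, L=2, t_max=50))
```

In practice $\mathcal{A}$ would be stored as a sparse matrix, which is what makes the per-iteration cost far below the dense $2N^2$ flop bound quoted above.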
The main objective of this work is to study urban mobility challenges in modern cities, as well as the robustness and resilience of their complex transportation systems. Such work can serve as a basis for an automatic comparative evaluation of the transportation system efficiency of different cities. The multilayer nature of the proposed framework requires data from different modes of transportation. However, it turns out that not many cities have collected, cleaned, and made data about their transportation systems publicly available. We thus limited our experimentation to four big cities: Paris, London, New York City, and Chicago. We also experiment with random graphs of different natures to derive more generalizable conclusions. In what follows, we present an overview of the data and the methods developed to produce the multiplex urban transportation network of every city from raw data. We then summarize and discuss our results for both convergence of coverage and robustness to failures.

At the level of every city, we acquire, parse and combine GTFS (Google Transit Feed Specification, see Note 1) datasets of every transportation mode. Google Transit Feed Specification is a data format created to provide transit schedules and public transport information for a specific geographical location. It is a "standard" developed by Google in order to help public transport agencies to publish and integrate their data with Google Maps. A typical GTFS feed includes information about multiple aspects of a transit system, such as stops, routes, trips, and schedules. Needless to say, the availability of these datasets is a key resource to study the dynamics of transportation systems, e.g., see [59–62].

In our study, we use GTFS datasets to represent the anatomy of the public transportation system in all cities except London, and build a multiplex urban transportation network for every city. In order to process and transform the combined GTFS datasets into a multiplex system, we perform four tasks (a compact code sketch follows below):

1. Merging GTFS datasets from different sources. Since the datasets come from various agencies and transportation companies which have adopted different indexes, the first step towards reliably building the transportation network after merging datasets is to re-index stop locations to avoid any conflicts. To do so, we join stations spatially (using latitude and longitude coordinates). We use text similarity matching techniques applied to stations' names to double check our results.

2. Identifying and extracting routes. As we are interested in identifying connected locations, we start by filtering out occasional trips, such as trips during national holidays, from our dataset. Then, for each trip, we order stop locations based on departure time to identify connected locations.

3. Transportation network as a graph. We construct a graph of every transportation network in the city from the set of ordered stop locations per trip. Every such set of nodes represents a path in the network. As a result, we obtain a network for each mode of transportation in the city. As an example, for the case of Paris, the result of this step for both metro and bus networks is illustrated in Fig. 3.

Bus (top panel) and Metro (bottom panel) networks generated from merged GTFS files for the city of Paris. We can clearly see that the bus network covers a much larger area than the metro network and is much denser than it

4. Building a multiplex network. In our study, we represent the transportation network as a two-layer multiplex: bus network and metro network, as these two transportation modes represent the most significant urban transportation modes. The nodes of each layer represent the stop stations (bus stations or metro stations). As our multiplex system has to be ordinal and diagonal, we establish a connection (a link) between a node in one layer (e.g., bus) and its counterpart in the other layer (e.g., metro). We adopted the assumption that two nodes in two different layers that are within a walking distance radius ($\leq 100$ m) represent the same station (i.e., a station that provides a connection between the two transportation modes). As stated in the modeling section, we assume that the transition probabilities are uniform at each node. That is, at each node, the random walker has the same probability to move through all possible edges, including those connecting to other layers. We eliminate the nodes from layer 1 (respectively layer 2) that don't match any node in layer 2 (respectively layer 1). We make sure to retain connectivity through the removed nodes by connecting their neighbors recursively. Note that by doing so, we could end up with a network that has many more edges than the initial one. So, in order to build a multiplex network, we used a simplified approach by keeping the same number of stations at every layer. We simply run a recursive algorithm to remove the nodes (stations) which do not have a counterpart in the other transportation layer, while retaining the connectivity between the remaining nodes. By removing nodes, the algorithm increases the number of edges between the remaining nodes. This is for instance the case of the Paris bus network (see Table 1).
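Here is a compact sketch of tasks 2–4 above, assuming the standard GTFS file layout (a stop_times.txt with trip_id, departure_time, stop_id columns, and stop tables with stop_lat/stop_lon). The 100 m threshold matches the matching rule described; the helper names are mine, and a spatial index would be preferable at scale:

```python
import math
import pandas as pd

def trip_edges(stop_times_csv):
    """Tasks 2-3: order stops by departure time within each trip and link
    consecutive stops; the union of these links is the layer's edge set."""
    st = pd.read_csv(stop_times_csv).sort_values(["trip_id", "departure_time"])
    edges = set()
    for _, trip in st.groupby("trip_id"):
        stops = trip["stop_id"].tolist()
        edges.update(zip(stops, stops[1:]))
    return edges

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def match_stations(bus_stops, metro_stops, radius_m=100.0):
    """Task 4: a bus stop and a metro stop within 100 m are treated as the
    same multimodal station (brute force over all pairs)."""
    pairs = []
    for _, b in bus_stops.iterrows():
        for _, m in metro_stops.iterrows():
            if haversine_m(b.stop_lat, b.stop_lon, m.stop_lat, m.stop_lon) <= radius_m:
                pairs.append((b.stop_id, m.stop_id))
    return pairs
```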
Table 1 Basic statistics about the different transportation networks used in this study. Bus (initial) is the initial bus network extracted from GTFS files; Bus (multiplex) is the part of the initial bus network that matches the metro network in the city. Edges in Bus (multiplex) are routes (paths) extracted from Bus (initial)

We apply the same process for each of the studied cities, and as a result, we obtain the multiplex representation of the urban transportation for every city. In the case of the city of London, we use both (1) OpenStreetMap (OSM, see Note 2) and (2) the National Public Transport Data Repository (NPTDR, see Note 3). OSM provides an updated map of the different bus and metro stations in the city, whereas NPTDR contains a snapshot of every public transport journey in Great Britain for a selected week in October each year. While the NPTDR database covers Great Britain (England, Scotland, Wales), we focus only on the city of London. First, we filter all the stations from NPTDR that are inside the bounding box of London. Second, we extract all the stop points and trajectories of the two modes of transportation considered, i.e., the bus and metro networks in this case. Then, we use these stop points and trajectories to build the graph of each layer. Next, we identify the inter-layer edges that connect the same nodes residing in both layers. Finally, we build a two-layer transportation multiplex for the city of London by merging both graphs and using the identified inter-layer node pairs.

Note that the hardest part about collecting the data is having to use two sources for the two modes of transportation, as for the metro network and the bus network of the city of London: when we started the resilience analysis, GTFS data was not available for London, so we used other reliable data sources, OpenStreetMap and the NPTDR, in this case. A summary of the characteristics of each network is given in Table 1.

Convergence of the coverage

Given a multi-modal transportation network and its corresponding multiplex representation, we are interested in how much of the network a random walker can visit (cover) within a given budget of time. The time budget can be substituted with a corresponding number of steps or movements that allow the walker to go from one node to one of its neighbors. The faster the coverage (i.e., the fewer the steps), the better. Indeed, the number of steps required to visit the entire network in a completely randomized setting is a very good indicator of the quality of the underlying multi-modal transportation system.

We first run MUME to compute the coverage convergence curve on synthetic graphs. The idea is to build different multiplex networks of two layers to mimic the two transportation modes under study. We also enforce different configurations of the multiplex to capture the heterogeneity observed in the real networks of buses and metros as shown in Fig. 3. Thus we generate random graphs with heterogeneous degree distributions following the Barabási–Albert (BA) model and other graphs with more homogeneous degree distributions following Erdős–Rényi (ER). Based on our empirical observations, we found that metro networks are in general quite similar to BA graphs, whereas the bus graphs we generated resemble ER graphs.
The reason for this interesting distinction resides in the way the bus networks are generated: we only keep bus stations that match metro stations, and then create an edge between any pair of nodes (bus stations) for which there is a shortest path in the initial bus graph that doesn't contain any metro station. This process naturally leads to a much denser graph, as the number of paths in a graph is much higher than the number of its edges. Thus, we create three different multiplex network configurations: BA-BA, BA-ER, and ER-ER. The first and third networks simulate cases where the two transportation modes share similar topological properties, whereas the second case simulates the more realistic situation of transportation modes having different graph topologies. We fix the number of nodes in all graphs to $N=100$ and vary the number of edges. In BA, we requested that each new node connects to two already existing nodes, while in ER we set the density parameter $p=0.4$. Practically, we vary $\tau$, the number of steps, to take values in the $[0, 1000]$ interval. We request MUME to compute the coverage score for each value of $\tau$. We run this process several times and report on averages. Figure 4 plots the coverage curves of the different synthetic multiplex networks.

Coverage convergence of random multiplex networks. We consider three different configurations for a two layered multiplex: BA-BA, BA-ER, and ER-ER with the following settings: BA ($N=100$, $E=196$), ER ($N=100$, $E=2003$)
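The three configurations can be reproduced with networkx; with the stated settings, BA($N=100$, $m=2$) has exactly 196 edges and ER($N=100$, $p=0.4$) has about 1980 edges on average, close to the $E=2003$ instance reported in Fig. 4. A small sketch (my own reconstruction of the setup, not the authors' code):

```python
import networkx as nx

def synthetic_multiplex(kind1, kind2, n=100, ba_m=2, er_p=0.4, seed=0):
    """Build one of the BA-BA / BA-ER / ER-ER two-layer configurations:
    BA(n, m=2) gives (n - m) * m = 196 edges for n = 100, while
    ER(n, p=0.4) gives ~0.4 * n*(n-1)/2 ~ 1980 edges on average."""
    make = {
        "BA": lambda s: nx.barabasi_albert_graph(n, ba_m, seed=s),
        "ER": lambda s: nx.erdos_renyi_graph(n, er_p, seed=s),
    }
    return make[kind1](seed), make[kind2](seed + 1)

G1, G2 = synthetic_multiplex("BA", "ER")
print(G1.number_of_edges(), G2.number_of_edges())  # 196 and ~1980
```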
Another surprising yet interesting observation is the coverage achieved by the London multiplex network, which lays a little bit above 0.2 at \(\tau = 1000\), way behind the performances achieved by the other three cities. This is all the more surprising that both London metro network and bus network are of the same size as Paris networks. The reasons of such under-performance might be due to the fact that London networks have been generated from a different dataset which might be incomplete for the bus network (the metro network has been thoroughly verified by us). Coverage convergence of the multi-modal transportation networks of Paris, London, New York City, and Chicago Robustness to failures Another important qualitative aspect of multi-modal transportation systems is their ability to withstand random failures that may occur in the system. In reality, failures happen more frequently that one could imagine. A heavy traffic jam due to an accident, bad weather condition, or renovation work usually can take down an entire road segment which forces commuters to change their routes, especially in cases where bus drivers for instance cannot take initiatives on their own. It is also the case of metro stations, where frequent closure of segments happen due to electrical shutdowns, suspicious objects on the rails, maintenance work, etc. Thus, understanding the impact of such failures and the way they affect the entire system is of a great importance for cities. To quantify the robustness of the multi-modal transportation system, we use percolation theory [19] that nicely describes the impact of edge failures in the multiplex on the coverage. It is worth noticing that we are dealing with bond percolation as opposed to site percolation in which nodes are removed from the network instead. For all multiplex networks we have created (three synthetic and four real), we iteratively remove a fraction of edges (5%) from both layers of the multiplex, and use MUME to compute the coverage achieved at \(\tau = 1000\) steps of the resulting network. As one could expect, the coverage score should be inversely correlated with the fraction of edges removed, i.e., the more failures there is, the harder it gets for the random walk to reach nodes. Figures 6 and 7 show the degradation of the coverage as a function of the amount of removed edges in both synthetic and real multi-modal transportation networks. Interestingly enough, we see in Fig. 6 that failures affect our three synthetic networks in three completely different ways. The most fragile multiplex network is BA-ER that gets almost completely disconnected with the removal of only 20% of its edges. This is followed by the BA-BA network whose coverage degradation is somewhat linear to the fraction of removed edges. ER-ER on the other hand demonstrates a strong robustness to failures with it securing more than 85% of its coverage when 80% of its edges are removed. While the results of ER-ER and BA-BA can be explained by the relatively high/low densities of their two basic forming graphs BA, ER (the higher the density, the better the robustness of the coverage). It is unclear why having two graphs of different topological structures severely fragilizes the whole integrated multiplex system. Robustness of random multiplexes to random failures. 
Robustness of random multiplexes to random failures. The curves are averaged over three independent runs

Robustness to failure of three big cities: Paris, London, and New York City

Here, it is worth noting that most of the literature regards BA graphs as being more robust to failure. However, most of these studies are "site"-percolation specific; we may cite the works in, e.g., [63, 64], which look (even if only partially) at the relationship between edge failure, robustness, and network topology. Unsurprisingly, the real transportation networks of the four cities behave just like the synthetic BA-ER multiplex network. High fragility is observed, as the networks lose more than 20% of their coverage after removing only 5% of their edges. The coverage tends to zero after the removal of 50% of the edges. Despite this common fragility, one can see that the Paris transportation system is slightly more robust to failures, followed by Chicago, New York City, and London.

The main objective of this study is to better understand and predict urban mobility patterns in the city, and to analyze the robustness of the multi-modal transportation system, i.e., its ability to withstand random and targeted failures. To do so, we model the multi-modal transportation system as a "multiplex network" consisting of several layers that correspond to the different transportation modes available in the city, and we estimate the coverage of the city, which is defined as the average fraction of distinct vertices visited at least once during a time budget. We first developed a mathematical framework to compute the coverage in a multiplex network setting, which we applied to different synthetic and real-life transportation systems built for four different cities, namely Chicago, London, New York, and Paris. Our experiments revealed different convergence patterns of the coverage in multiplex networks that are related to the topological characteristics of their underlying graphs. Dense and homogeneous graphs, for instance, lead to a faster convergence in general. Second, we looked at how different transportation networks react to failures and stress. Failures are simulated by the withdrawal of a small fraction of the edges from different layers, and coverage is computed for each removed fraction. A close inspection of the results showed that, unlike synthetic transportation networks, the four cities we studied behave quite similarly in terms of coverage degradation, with the Paris network being the most robust among all. Moreover, one of the interesting findings of this work is the similarity between real transportation networks and BA-ER simulated networks.

As future work, we intend to expand our mathematical framework to capture the actual commuting dynamics. Our focus will be to estimate the average travel time of commuters in different cities, and how it is affected by failures occurring in the system. We are developing a scalable computational framework to help planners in the city of Doha efficiently manage the flow of people and intelligently handle the capacity of their infrastructure. We hope that the developed computational tool will help the city of Doha identify problems early, predict failures, and design better transportation infrastructure in preparation for the FIFA 2022 World Cup.

https://developers.google.com/transit/gtfs/reference
http://www.openstreetmap.org
https://data.gov.uk/dataset/nptdr

References
De Domenico M, Solé-Ribalta A, Cozzo E, Kivelä M, Moreno Y, Porter MA, Gómez S, Arenas A (2013) Mathematical formulation of multilayer networks.
Phys Rev X 3(4):041022
Dunlavy DM, Kolda TG, Kegelmeyer WP (2011) Multilinear algebra for analyzing data with multiple linkages. In: Graph algorithms in the language of linear algebra, pp 85–114. https://doi.org/10.1137/1.9780898719918.ch7
Kolda TG, Bader BW (2009) Tensor decompositions and applications. SIAM Rev 51(3):455–500. https://doi.org/10.1137/07070111X
Sun J, Tao D, Faloutsos C (2006) Beyond streams and graphs: dynamic tensor analysis. In: Proceedings of the 12th ACM SIGKDD international conference on knowledge discovery and data mining, pp 374–383
De Domenico M, Porter MA, Arenas A (2014) MuxViz: a tool for multilayer analysis and visualization of networks. J Complex Netw 3(2):159–176. https://doi.org/10.1093/comnet/cnu038
Kivelä M, Arenas A, Barthelemy M, Gleeson JP, Moreno Y, Porter MA (2014) Multilayer networks. J Complex Netw 2(3):203–271
Solé-Ribalta A, De Domenico M, Gómez S, Arenas A (2014) Centrality rankings in multiplex networks. In: Proceedings of the 2014 ACM conference on web science, pp 149–155
Solé-Ribalta A, De Domenico M, Gómez S, Arenas A (2016) Random walk centrality in interconnected multilayer networks. Phys D: Nonlinear Phenom 323:73–79
De Domenico M, Granell C, Porter MA, Arenas A (2016) The physics of spreading processes in multilayer networks. Nat Phys 12:901–906. https://doi.org/10.1038/nphys3865
Chodrow PS, Al-Awwad Z, Jiang S, González MC (2016) Demand and congestion in multiplex transportation networks. PLoS ONE 11(9):0161738
Solé-Ribalta A, Gómez S, Arenas A (2016) Congestion induced by the structure of multiplex networks. Phys Rev Lett 116(10):108701
Strano E, Shai S, Dobson S, Barthelemy M (2015) Multiplex networks in metropolitan areas: generic features and local effects. J R Soc Interface 12(111):20150651
Barabasi A-L, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512. https://doi.org/10.1126/science.286.5439.509
Condamin S, Bénichou O, Tejedor V, Voituriez R, Klafter J (2007) First-passage times in complex scale-invariant media. Nature 450(7166):77–80
Yang S-J (2005) Exploring complex networks by walking on them. Phys Rev E 71(1):016107
da Fontoura Costa L, Travieso G (2007) Exploring complex networks through random walks. Phys Rev E 75(1):016102. https://doi.org/10.1103/PhysRevE.75.016102
Noh JD, Rieger H (2004) Random walks on complex networks. Phys Rev Lett 92(11):118701. https://doi.org/10.1103/PhysRevLett.92.118701
De Domenico M, Solé-Ribalta A, Gómez S, Arenas A (2014) Navigability of interconnected networks under random failures. Proc Natl Acad Sci 111(23):8351–8356
Albert R, Jeong H, Barabási A-L (2000) Error and attack tolerance of complex networks. Nature 406(6794):378–382. https://doi.org/10.1038/35019019
Cohen R, Erez K, Ben-Avraham D, Havlin S (2000) Resilience of the Internet to random breakdowns. Phys Rev Lett 85(21):4626–4628. https://doi.org/10.1103/PhysRevLett.85.4626
Molloy M, Reed B (1995) A critical point for random graphs with a given degree sequence. Random Struct Algorithms 6(2–3):161–180
Molloy M, Reed B (1998) The size of the giant component of a random graph with a given degree sequence. Comb Probab Comput 7(03):295–305
Arcaute E, Molinero C, Hatna E, Murcio R, Vargas-Ruiz C, Masucci AP, Batty M (2016) Cities and regions in Britain through hierarchical percolation. Open Sci 3(4):150691
Li D, Fu B, Wang Y, Lu G, Berezin Y, Stanley HE, Havlin S (2015) Percolation transition in dynamical traffic network with evolving critical bottlenecks.
Proc Natl Acad Sci 112(3):669–672
Abbar S, Zanouda T, Borge-Holthoefer J (2016) Robustness and resilience of cities around the world. ArXiv preprint. arXiv:1608.01709
Wang J (2015) Resilience of self-organised and top-down planned cities? A case study on London and Beijing street networks. PLoS ONE 10(12):0141736
Romero DM, Uzzi B, Kleinberg J (2016) Social networks under stress. In: Proceedings of the 25th international conference on world wide web, International World Wide Web Conferences Steering Committee, Montréal, Québec, Canada, April 11–15, 2016. ACM, New York, pp 9–20. https://doi.org/10.1145/2872427.2883063
Baggio JA, BurnSilver SB, Arenas A, Magdanz JS, Kofinas GP, De Domenico M (2016) Multiplex social ecological network analysis reveals how social changes affect community robustness more than resource depletion. Proc Natl Acad Sci 113(48):13708–13713
Bianconi G, Dorogovtsev SN (2014) Multiple percolation transitions in a configuration model of a network of networks. Phys Rev E 89(6):062814
Gomez S, Diaz-Guilera A, Gomez-Gardenes J, Perez-Vicente CJ, Moreno Y, Arenas A (2013) Diffusion dynamics on multiplex networks. Phys Rev Lett 110(2):028701
Watts DJ (2002) A simple model of global cascades on random networks. Proc Natl Acad Sci 99(9):5766–5771
Arenas A, Díaz-Guilera A, Kurths J, Moreno Y, Zhou C (2008) Synchronization in complex networks. Phys Rep 469(3):93–153
Gómez-Gardeñes J, Gómez S, Arenas A, Moreno Y (2011) Explosive synchronization transitions in scale-free networks. Phys Rev Lett 106:128701. https://doi.org/10.1103/PhysRevLett.106.128701
Achlioptas D, D'Souza RM, Spencer J (2009) Explosive percolation in random networks. Science 323(5920):1453–1455. https://doi.org/10.1126/science.1167782
Radicchi FSF (2009) Explosive synchronization transitions in scale-free networks. Phys Rev Lett 106:128701. https://doi.org/10.1103/PhysRevLett.106.128701
Boccaletti S, Bianconi G, Criado R, Del Genio CI, Gómez-Gardeñes J, Romance M, Sendiña-Nadal I, Wang Z, Zanin M (2014) The structure and dynamics of multilayer networks. Phys Rep 544(1):1–122
De Domenico M, Nicosia V, Arenas A, Latora V (2015) Structural reducibility of multilayer networks. Nat Commun 6:6864. https://doi.org/10.1038/ncomms7864
De Domenico M, Solé-Ribalta A, Omodei E, Gómez S, Arenas A (2015) Ranking in interconnected multilayer networks reveals versatile nodes. Nat Commun 6:6868. https://doi.org/10.1038/ncomms7868
Lee K-M, Min B, Goh K-I (2015) Towards real-world complexity: an introduction to multiplex networks. Eur Phys J B 88(2):1–20
Menichetti G, Remondini D, Panzarasa P, Mondragón RJ, Bianconi G (2014) Weighted multiplex networks. PLoS ONE 9(6):97857
Min B, Do Yi S, Lee K-M, Goh K-I (2014) Network robustness of multiplex networks with interlayer degree correlations. Phys Rev E 89(4):042811
Shai S, Dobson S (2013) Coupled adaptive complex networks. Phys Rev E 87(4):042812
Sole-Ribalta A, De Domenico M, Kouvaris NE, Diaz-Guilera A, Gomez S, Arenas A (2013) Spectral properties of the Laplacian of multiplex networks. Phys Rev E 88(3):032807
De Domenico M, Solé A, Gómez S, Arenas A (2013) Random walks on multiplex networks. ArXiv preprint. arXiv:1306.0519
Aleta A, Meloni S, Moreno Y (2017) A multilayer perspective for the analysis of urban transportation systems. Sci Rep 7:44359
Guo Q, Cozzo E, Zheng Z, Moreno Y (2016) Lévy random walks on multiplex networks. Sci Rep 6:37641. https://doi.org/10.1038/srep37641
Wilson RJ (1996) An introduction to graph theory, 4th edn.
Prentice Hall, New York
Baggag A, Abbar S, Zanouda T, Borge-Holthoefer J, Srivastava J (2016) A multiplex approach to urban mobility. In: Cherifi H, Gaito S, Quattrociocchi W, Sala A (eds) The 5th international workshop on complex networks and their applications. Studies in computational intelligence, vol 693. Springer, Cham, pp 551–563. https://doi.org/10.1007/978-3-319-50901-3_44
Guo Q, Cozzo E, Zheng Z, Moreno Y (2016) Levy random walks on multiplex networks. ArXiv preprint. arXiv:1605.07587
Buldyrev SV, Parshani R, Paul G, Stanley HE, Havlin S (2010) Catastrophic cascade of failures in interdependent networks. Nature 464(7291):1025–1028
Brummitt CD, D'Souza RM, Leicht EA (2012) Suppressing cascades of load in interdependent networks. Proc Natl Acad Sci 109(12):680–689
Brummitt CD, Barnett G, D'Souza RM (2015) Coupled catastrophes: sudden shifts cascade and hop among interdependent systems. J R Soc Interface 12(112):20150712
Son S-W, Bizhani G, Christensen C, Grassberger P, Paczuski M (2012) Percolation theory on interdependent networks based on epidemic spreading. Europhys Lett 97(1):16006
Arcaute E, Molinero C, Hatna E, Murcio R, Vargas-Ruiz C, Masucci P, Wang J, Batty M (2015) Hierarchical organisation of Britain through percolation theory. ArXiv preprint. arXiv:1504.08318
Wang J (2015) Resilience of self-organised and top-down planned cities: a case study on London and Beijing street networks. PLoS ONE 10(12):1–20. https://doi.org/10.1371/journal.pone.0141736
Callaway DS, Newman ME, Strogatz SH, Watts DJ (2000) Network robustness and fragility: percolation on random graphs. Phys Rev Lett 85(25):5468–5471
Delling D, Pajor T, Werneck RF (2014) Round-based public transit routing. Transp Sci 49(3):591–604
Catala M, Dowling S, Hayward D (2011) Expanding the Google transit feed specification to support operations and planning. Technical report
Puchalsky CM, Joshi D, Scherr W (2012) Development of a regional forecasting model based on Google transit feed. In: 91st annual meeting of the transportation research board, Washington, DC, USA, pp 1–17
Wong J (2013) Leveraging the general transit feed specification for efficient transit analysis. Transp Res Rec 2338:11–19
Lin Y, Kang R, Wang Z, Zhao Z, Li D, Havlin S (2017) Robustness of networks with dependency topology. Europhys Lett 118(3):36002
Abbas W, Egerstedt M (2012) Robust graph topologies for networked systems. IFAC Proc Vol 45(26):85–90. https://doi.org/10.3182/20120914-2-US-4030.00052

Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.

Availability of data and materials
Detailed bus and metro network data are made available through open data portals. For metro networks, we parse GTFS (Google Transit Feed Specification) datasets for every city. For NYC, we used GTFS data from NYC MTA, see http://web.mta.info/developers/download.html. For Chicago, we used GTFS data from the Chicago Transit Authority, see http://www.transitchicago.com/developers/gtfs.aspx. For Paris, we used GTFS data from RATP, see https://data.ratp.fr/explore/?sort=modified. For London, we used GTFS data from the TFL Open Portal, see https://api.tfl.gov.uk/. These data sets are generally well maintained; however, many properties are often incomplete or missing entirely. For this purpose, we infer the required road characteristics to build realistic and routable road networks using OpenStreetMap, an open-source, crowd-sourced mapping tool.
The source code can be made available to other researchers for reproducibility and to build on top of our work.

Author information
Qatar Computing Research Institute (QCRI), Hamad Bin Khalifa University, Doha, Qatar: Abdelkader Baggag, Sofiane Abbar, Tahar Zanouda & Jaideep Srivastava
The authors contributed equally to this research article. All authors read and approved the final manuscript.
Correspondence to Abdelkader Baggag.

Ethics declarations
The manuscript reports research which does not require any approval by ethics committee(s). All contributing authors declare their consent for the final accepted version of the manuscript to be considered for publication in the journal EPJ Data Science.

Cite this article as: Baggag, A., Abbar, S., Zanouda, T. et al. Resilience analytics: coverage and robustness in multi-modal transportation networks. EPJ Data Sci. 7, 14 (2018). https://doi.org/10.1140/epjds/s13688-018-0139-7
Received: 25 December 2017
Keywords: Multimodal transportation; Random and targeted failures
How do you calculate $ 2^{2^{2^{2^{2}}}} $?

From information I have gathered online, this should be equivalent to $2^{16}$, but when I punch the numbers into this large number calculator, the number comes out to be over a thousand digits. Is the calculator wrong or is my method wrong?

algebra-precalculus arithmetic tetration

Henry R

$\begingroup$ The number is equal to $2^r$ where $r=2^{16}$. $\endgroup$ – Aravind Mar 16 '17 at 17:17
$\begingroup$ This is quite unclear; use parentheses. $\endgroup$ – Dr. Sonnhard Graubner Mar 16 '17 at 17:17
$\begingroup$ If parentheses are not used, it is assumed that exponents are evaluated top-down as opposed to bottom-up. $(a^b)^c=a^{bc}\neq a^{(b^c)}$. Exponentiation is not associative. The answer of $2^{16}$ is what results if it is evaluated bottom-up as (((2^2)^2)^2)^2 instead of top-down as 2^(2^(2^(2^2))), which is much larger. $\endgroup$ – JMoravitz Mar 16 '17 at 17:22
$\begingroup$ Possible duplicate of What is the order when doing $x^{y^z}$ and why? $\endgroup$ – Simply Beautiful Art Mar 16 '17 at 21:11
$\begingroup$ My TI calculator has an inline option of showing this as 2^2^2^2^2, which evaluates as 65536 (left-to-right evaluation). But when I switch to math print mode it shows the tower of powers, which it tries, but fails, to evaluate top-down...overflow. I'm not saying that a TI calculator is the final arbiter of math truth, but it is what I get. $\endgroup$ – paw88789 Mar 17 '17 at 2:22

$$2^{2^{2^{2^2}}}=2^{2^{2^4}}=2^{2^{16}}=2^{65536}\tag1$$ The number of digits: $$\mathcal{A}=1+\lfloor\log_{10}\left(2^{65536}\right)\rfloor=19729\tag2$$

$\begingroup$ This is all true enough, but it doesn't really answer the question as actually asked. $\endgroup$ – hBy2Py Mar 16 '17 at 18:59
$\begingroup$ It currently does answer the question in the title. Equation 1 shows how to evaluate the expression. It also implies, though doesn't explicitly state, that the asker's method (or at least the answer he got using the method) was wrong. $\endgroup$ – Fluidized Pigeon Reactor Mar 16 '17 at 21:30
$\begingroup$ The title is a reference for searching and tracking. The question is the text beneath that. NAA. $\endgroup$ – Nij Mar 17 '17 at 8:43

What you have is a power tower or "tetration" (defined as iterated exponentiation). From the latter link, you would most benefit from this brief excerpt on the difference between iterated powers and iterated exponentials. The comment by JMoravitz really gets to the heart of the matter, namely that exponential towers must be evaluated from top to bottom (or right to left). There actually is a notation for your particular question: ${}^52=2^{2^{2^{2^{2}}}}$. You really need to look at ${}^42$ before you get something meaningful because, unfortunately, $$ {}^32=2^{2^{2}}=2^4=16=4^2=(2^2)^2; $$ however, $$ {}^42=2^{2^{2^{2}}}=2^{2^{4}}=2^{16}\neq2^8=(4^2)^2=((2^2)^2)^2. $$ Hence, your method is wrong, but everything in those links should provide more than enough for you to become comfortable with tetration.

Daniel W. Farlow

$\begingroup$ It seems that Knuth's notation $2\uparrow\uparrow n$ has gained popularity over Rucker's $^n2$ nowadays. $\endgroup$ – zwim Mar 16 '17 at 17:47
$\begingroup$ @Daniel W. Farlow looks like you beat me to posting. Got to love it when that happens. $\endgroup$ – Sentinel135 Mar 16 '17 at 17:53
$\begingroup$ @zwim I actually prefer Rucker's notation, but I do see the appeal of Knuth's very unambiguous notation. $\endgroup$ – Daniel W. Farlow Mar 16 '17 at 17:55
By convention, the meaning of things written $ \displaystyle a^{b^{c^d}} $ without brackets is $ \displaystyle a^{\left(b^{\left(c^d\right)}\right)} $ and not $\left(\left(a^b\right)^c\right)^d$. This is because $\left(\left(a^b\right)^c\right)^d$ equals $a^{b\cdot c\cdot d}$ anyway, so it makes pragmatic sense to reserve the raw power-tower notation $ \displaystyle a^{b^{c^d}} $ for the case that doesn't have an alternative notation without parentheses.

As others have explained, $\displaystyle 2^{2^{2^{2^2}}}$ interpreted with this convention is $2^{65536}$, a horribly huge number, whereas $(((2^2)^2)^2)^2$ is $2^{16}=65536$, as you compute.

hmakholm left over Monica

$\begingroup$ "Horribly huge number" rubs me the wrong way in this context. Huge numbers are horrible when they count something that you don't want. The number's not horrible when it's your bank balance! (Actually it might be. If you put together that many pennies it'd probably collapse into a black hole the size of the Milky Way.) $\endgroup$ – Matt Samuel Mar 16 '17 at 22:44
$\begingroup$ @MattSamuel: Milky Way? This number exceeds the number of Planck volumes in the observable universe ... to the hundredth power! $\endgroup$ – hmakholm left over Monica Mar 17 '17 at 1:24
$\begingroup$ Do you really expect me to do coordinate transformations in my head while strapped to a centrifuge??? $\endgroup$ – Matt Samuel Mar 17 '17 at 1:25
$\begingroup$ 2^65536 really isn't a horribly huge number when you consider that its binary representation fits in a mere 8 KB of memory. I mean, we are already using RSA moduli around 2^4096 already, so this number is only around 10× longer. $\endgroup$ – Nayuki Mar 17 '17 at 3:28
$\begingroup$ Actually all finite numbers are pretty small because all but a finite number of the rest are bigger. Come to that all transfinite numbers are pretty small as well. $\endgroup$ – Martin Rattigan Apr 2 '17 at 21:55

I would calculate it using Maxima (which evaluates repeated exponentiation correctly, right-to-left), since there is no point wasting brain cells on something that a machine can do:

$ maxima
Maxima branch_5_39_base_2_gc9edaee http://maxima.sourceforge.net
using Lisp GNU Common Lisp (GCL) GCL 2.6.12
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
The function bug_report() provides bug reporting information.
(%i1) 2^2^2^2^2;
(%o1) 200352993040684646497907235156025575044782547556975141926501697371089405\
955631145308950613088093334810103823434290726318182294938211881266886950636476\
[... the remaining digits of the 19729-digit result are omitted here ...]
(%i2) bfloat(%);
(%o2) 2.003529930406846b19728
(%i3)

Of course, if I just wanted to estimate the magnitude of the number without resorting to the use of arbitrary precision computer software, I'd note that the exponent is $2^{2^{2^2}}=2^{2^4}=2^{16}=65536$; multiplying it by $\log_{10} 2\sim 0.30103$ gives $19728.302$, so the result is approximately $10^{0.302}\times 10^{19728}\sim 2\times 10^{19728}$.

Viktor Toth

$\begingroup$ Wow. Someone actually posted the digits. $\endgroup$ – Matt Samuel Mar 16 '17 at 23:36
$\begingroup$ There is a mistake in the computed result. The 2 in the middle should be a 3. Probably a typo. $\endgroup$ – augustin Mar 17 '17 at 3:43
$\begingroup$ "2 in the middle"... can you be a bit more specific? $\endgroup$ – Viktor Toth Mar 17 '17 at 3:48
$\begingroup$ @augustin: The digit in the middle is an 8.
And yes, I checked it; with the help of the computer, of course. Given that the total number of digits (19729) is odd, the digit in the middle is well defined (it is the digit which is preceded and followed by the same number of digits). $\endgroup$ – celtschk Mar 17 '17 at 7:59
$\begingroup$ @celtschk now you sir are fun at parties! $\endgroup$ – Pierre Arlaud Mar 17 '17 at 8:44

This looks awfully close to what is known as tetration (a.k.a. a power tower). This is $^{(k)}a=a^{^{(k-1)}a}$ where $^1a=a$. For numbers greater than one, these usually get really big really fast, and faster than exponents do. So in your case, you have $^52=2^{2^{16}}$. Now if you want to see an interesting one, look at $\lim_{k\to \infty}\;^{(k)}(\sqrt{2})$.

Sentinel135

$\begingroup$ Wouldn't that work for any $\sqrt[n]{n}$? $\endgroup$ – Random832 Mar 16 '17 at 20:04
$\begingroup$ Wouldn't what work for any $\sqrt[n]{n}$? I never said the answer. And yes, but it depends on what you think the answer is. ;D $\endgroup$ – Sentinel135 Mar 16 '17 at 20:13
$\begingroup$ Looks like that limit goes to infinity really quickly. $\endgroup$ – Joshua Mar 16 '17 at 20:36
$\begingroup$ @Joshua are you sure? can you try and prove it? $\endgroup$ – Sentinel135 Mar 16 '17 at 20:56
$\begingroup$ Darn. I hate accumulated roundoff. $\endgroup$ – Joshua Mar 16 '17 at 21:24

Your equation can be simplified using Knuth's up arrow notation: \begin{equation*} 2^{2^{2^{2^2}}} = 2 \uparrow\uparrow 5 \end{equation*} (because tetration can be written with Knuth's up arrow notation). By definition of Knuth's up arrow notation, you get \begin{equation*} 2\uparrow\uparrow5 = 2^{(2^{(2^{(2^2)})})}. \end{equation*} And according to web2.0calc, \begin{equation*} 2^{(2^{(2^{2})})} = 65536. \end{equation*} Finally, the answer would be: \begin{equation*} 2^{65536} \end{equation*} (correct me if I'm wrong, this was my first answer on Mathematics SE)

Matthew Roh
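For anyone who wants a quick computational check of the convention discussed in these answers, here is a small Python snippet (added for illustration; it was not part of the original thread). Python's ** operator happens to be right-associative, so it follows the same top-down convention:

# Python's ** is right-associative: 2**2**2**2**2 == 2**(2**(2**(2**2))).
top_down = 2 ** 2 ** 2 ** 2 ** 2
assert top_down == 2 ** 65536

# Bottom-up (left-to-right) evaluation collapses to 2^16 instead.
bottom_up = (((2 ** 2) ** 2) ** 2) ** 2
assert bottom_up == 65536

# Digit count agrees with equation (2): 1 + floor(65536*log10(2)) = 19729.
print(len(str(top_down)))  # prints 19729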
Urban pollution greatly enhances formation of natural aerosols over the Amazon rainforest

Manish Shrivastava1, Meinrat O. Andreae2,3,4, Paulo Artaxo5, Henrique M. J. Barbosa5, Larry K. Berg1, Joel Brito6, Joseph Ching7, Richard C. Easter1, Jiwen Fan1, Jerome D. Fast1, Zhe Feng1, Jose D. Fuentes8, Marianne Glasius9, Allen H. Goldstein10, Eliane Gomes Alves11, Helber Gomes12, Dasa Gu13, Alex Guenther1,13, Shantanu H. Jathar14, Saewung Kim13, Ying Liu1, Sijia Lou1, Scot T. Martin15, V. Faye McNeill16, Adan Medeiros17, Suzane S. de Sá15, John E. Shilling1, Stephen R. Springston18, R. A. F. Souza19, Joel A. Thornton20, Gabriel Isaacman-VanWertz21, Lindsay D. Yee10, Rita Ynoue22, Rahul A. Zaveri1, Alla Zelenyuk1 & Chun Zhao23

Nature Communications volume 10, Article number: 1046 (2019)
Subjects: Atmospheric chemistry; Biogeochemistry

One of the least understood aspects in atmospheric chemistry is how urban emissions influence the formation of natural organic aerosols, which affect Earth's energy budget. The Amazon rainforest, during its wet season, is one of the few remaining places on Earth where atmospheric chemistry transitions between preindustrial and urban-influenced conditions.
Here, we integrate insights from several laboratory measurements and simulate the formation of secondary organic aerosols (SOA) in the Amazon using a high-resolution chemical transport model. Simulations show that emissions of nitrogen-oxides from Manaus, a city of ~2 million people, greatly enhance production of biogenic SOA by 60–200% on average with peak enhancements of 400%, through the increased oxidation of gas-phase organic carbon emitted by the forests. Simulated enhancements agree with aircraft measurements, and are much larger than those reported over other locations. The implication is that increasing anthropogenic emissions in the future might substantially enhance biogenic SOA in pristine locations like the Amazon.

The response of natural systems to anthropogenic emissions remains one of the largest uncertainties in our understanding of the radiative forcing of climate1,2,3. Secondary organic aerosol (SOA) is a ubiquitous component of atmospheric aerosol, which scatters and absorbs solar radiation and also activates to form cloud droplets4,5,6. Over pristine regions such as the Amazon rainforests, SOA formed by oxidation of biogenic volatile organic compound (VOC) precursors accounts for most of the cloud condensation nuclei, especially during the wet season7. Field measurements suggest that much of biogenic SOA mass is formed through mechanisms that are driven/enhanced by anthropogenic emissions8,9,10,11,12. Anthropogenically controlled biogenic SOA refers to SOA formed due to oxidation of biogenic precursors, but that would not be formed in the absence of anthropogenic emissions13. A modeling study over the United States suggested that ~20% of biogenic SOA was controlled by anthropogenic nitrogen-oxides (NOx) and another 30% was controlled by partitioning of SOA within primary organic aerosol (POA)9. Another global modeling study suggested that addition of large amounts of SOA sources (70% of total) that spatially matched anthropogenic pollution was needed to produce the best model-measurement agreement14. Chemical pathways of SOA formation can be broadly classified into two types: [1] Pure gas-phase chemistry, which refers to gas-phase oxidation of volatile organic compounds (VOCs) emitted from terrestrial vegetation and combustion activities (e.g., wildfires, traffic) that results in formation of lower volatility condensable products15,16,17 and has been studied in outdoor chambers as early as 198218, and [2] Multiphase chemistry, which refers to chemistry occurring between gas- and particle-phases, such as acid-catalyzed reactive uptake of organics in the aqueous phase of hygroscopic particles (aqueous aerosols)19,20,21. A significantly improved understanding of pathway 2 has developed only recently over the past decade, e.g., the uptake of isoprene epoxydiols (IEPOX) on aqueous aerosols mediated by SO2 and NOx22,23,24,25,26,27,28,29. IEPOX-SOA constitutes 10–30% of SOA at various locations around the globe25. However, other pathways for anthropogenic-biogenic interactions may be important as well, e.g., nonlinear effects of NOx on both gas- and particle-phase chemistry of SOA, as discussed in a recent review article30. One of the challenges in accurately quantifying anthropogenically controlled or enhanced biogenic SOA through field measurements is the need to establish a baseline biogenic SOA level that would exist in the absence of any anthropogenic perturbations.
This is difficult in large part due to the ubiquitous influence of anthropogenic emissions over most terrestrial locations in the Northern hemisphere, including the continental United States. The vast Amazon rainforest during its wet season is one of the few remaining places on Earth where atmospheric chemistry transitions between pristine-preindustrial and urban pollution-influenced conditions. This region presents a unique natural laboratory to understand how anthropogenic emissions impact biogenic SOA formation31. While there are several observational studies from the Green Ocean Amazon (GoAmazon2014/5) field campaign over the Amazon rainforest31, we present the first dedicated modeling study of SOA formation for this field campaign. We include SOA chemistry pathways observed in several laboratory studies within a high-resolution regional chemical transport model, and provide a holistic view of how natural biogenic SOA formation changes due to its chemical interactions with urban pollution. Manaus, a city of 2 million people located within the forest, represents the only major anthropogenic source within the Amazon during the wet season. In the absence of Manaus emissions, the Amazon atmosphere in the wet season approaches preindustrial conditions7. Measurements demonstrate a sharp contrast in levels of various pollutants (including gases, particles, and oxidants) between air masses in the pristine Amazon and air masses in which Manaus emissions have mixed with this pristine air32.

Observational studies over the Amazon have demonstrated complex interactions between urban pollution and biogenic SOA. By using an oxidation flow reactor, Palm et al.33 showed that additional SOA could be produced from biogenic precursors (available in the ambient air) provided that additional ozone and OH concentrations were available. An analysis of VOC concentrations and variation with NOy by Liu et al.34 indicated a substantial increase in oxidant concentrations in the pollution plume, which, considering the work of Palm et al.33, suggests a significant role for anthropogenic control on SOA production. Statistical and cluster analysis of Aerosol Mass Spectrometer (AMS) and auxiliary datasets by de Sá et al.35 at a surface site showed that the increase in OA ranged from 25% to 200% under polluted compared to background conditions, including contributions from both primary and secondary particulate matter. de Sá et al.23 further showed that the relative contribution of the IEPOX-SOA factor to OA decreased significantly under polluted conditions. Liu et al.36 observed that the afternoon concentrations of organic hydroxyhydroperoxides (ISOPOOH) decreased from 600 pptv under background conditions to <60 pptv under polluted conditions, suggesting important shifts in the gas-phase chemistry that could affect OA production. Aircraft measurements from the Manaus plume on the same day targeted in this study (March 13) found that the composition of the downwind OA became progressively more oxidized, with a conversion from hydrocarbon-like OA to oxygenated OA37. Our present modeling study aims to provide a mechanistic understanding of observed impacts of anthropogenic emissions on SOA formation over the Amazon. Using a high-resolution regional chemical transport model, we contrast SOA production in air masses from the near-pristine background with those affected by the Manaus plume, in order to understand how anthropogenic emissions affect biogenic SOA formation in this region.
Model predictions are evaluated with aircraft measurements of organic aerosols (OA, which is sum of POA and SOA) using a high-resolution Aerosol Mass Spectrometer (AMS)37. Our study focuses on aircraft measurements since the aircraft rapidly measures trace gases and aerosol concentrations over both background and plume-affected locations, concomitantly. Aircraft measurements, thus represent a snapshot of changes that occur in biogenic SOA due to anthropogenic emissions over the otherwise pristine wet-season Amazon. Most of the analyses presented in this Manuscript are for 13 March 2014, a day of mostly sunny skies and no precipitation along the aircraft flight path, which is ideal for studying SOA formation32. We show large enhancements (60–200% on average, 400% maximum) of natural biogenic SOA within the Amazon that are due to substantial increase in oxidants (OH and ozone) promoted by NOx emissions within the urban plume, and are much larger than the enhancements reported in other locations. In the absence of the urban plume, background NOx concentrations are much lower (that can mainly to be attributed to soil NOx emissions) causing lower OH and ozone production, thus decreasing reacted biogenic VOCs and SOA formation. We show that although isoprene dominates the emissions fluxes of biogenic VOCs within the Amazon, it contributes 50% to biogenic SOA formation while terpenes contribute the remaining half. Our results provide a clear mechanistic picture of how anthropogenic emissions are likely to have greatly enhanced biogenic SOA formation since preindustrial times over the Earth. Simulating OA within a regional model Comparing modeled and observed particle concentrations over the Amazon is particularly challenging due to large uncertainties in emissions of biogenic VOCs and a complex wet scavenging environment38. We use the regional Weather Research and Forecasting Model coupled to chemistry (WRF-Chem) model39,40 at high resolution with 2 km grid spacing i.e. at cloud-, chemistry-, and emissions-resolving scales to simulate atmospheric chemistry and SOA formation during GoAmazon2014/5 (Methods). We simulate the atmospheric conditions between 10 and 17 March 2014 with the first 3 days used for spin-up of aerosol and trace gas concentrations, for a region that includes the Amazon basin (Methods, Supplementary Figure 1 and Supplementary Table 1). Due to the large computational costs associated with our SOA parameterizations and high-resolution coupled cloud-chemistry-meteorological WRF-Chem simulations (Methods), we only conduct simulations for a 1-week period. However, the results and conclusions from this study are expected to apply more broadly over the entire wet season period, since both observations and a previous WRF-Chem study show that the sharp contrast between plume and background oxidants is a common feature among several days41. Simulated SOA from pure gas-phase chemistry pathway is represented in the model using a modified volatility basis set (VBS) approach (Methods). The VBS approach represents multiple generations of oxidation of biogenic VOCs that include isoprene, monoterpene, and sesquiterpene compound classes, and anthropogenic and biomass burning precursors using a lumped set of compounds. Initial yields are determined by fitting environmental chamber measurements and generally vary with VOC, NOx, and oxidants (Supplementary Table 2). 
This work also includes several major updates to SOA aging parameterizations (Methods) to gain insights into biogenic SOA formation and its interactions with anthropogenic emissions. Isoprene SOA is formed in the model by two different pathways: a gas-phase chemistry pathway (represented by the VBS) and a multiphase IEPOX-SOA pathway (represented by a simple Gamma model)42, which are coupled to the Model for Simulating Aerosol Interaction and Chemistry (MOSAIC) aerosol module43 within WRF-Chem (Methods).

Aircraft measurements of OA and model predictions

To understand how the Manaus plume affects biogenic SOA formation, we compare results from two model simulations: Default, wherein all emissions, including those from the Manaus urban region and biogenic sources, are on; and a sensitivity simulation wherein biogenic emissions from the forest are on, but anthropogenic Manaus emissions are turned off (including NOx, SO2, anthropogenic VOCs, and primary particulate emissions, e.g., POA, sulfate). This simulation represents background concentrations of OA and trace gases over the Amazon. Figure 1a compares measured variations of OA mass concentrations using the Aerosol Mass Spectrometer (AMS)37 with WRF-Chem predictions along aircraft flight transects on March 13. The periodic rise and fall of OA in Fig. 1a reflects times when the aircraft intersected the plume in transverse patterns (Supplementary Figure 2), thus concomitantly sampling air masses both in-plume and within the local background. Transect 1 is closest to the city (24 km downwind of T1, the urban center), while transects 2, 3, and 4 are farther downwind of the city, spaced approximately equally at intervals of 24 km (Supplementary Figure 2). On this day, the default model (blue lines) also predicts that the plume emitted from the Manaus region (close to the T1 site) passed over the T3 site, which is 70 km downwind of the urban region (Figs. 2 and 3). Both measurements (orange line) and model predictions (blue line) show that OA concentrations are clearly enhanced in the plume compared to the background. Similarly, NOy, ozone, and CO concentrations show a periodic rise and fall as the aircraft moves within and outside of the urban plume (Supplementary Figure 3a, 3b, and 3c, respectively). Simulations (blue line) agree with measurements (orange line) and capture this rise and fall of OA, NOy, CO, and ozone corresponding to the four different aircraft transects.
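Before turning to the transect comparisons, a brief aside on the multiphase pathway introduced above: simple Gamma (reactive-uptake) models of the kind referenced here typically compute a first-order heterogeneous loss rate of gas-phase IEPOX to the particle phase as k = γ·ω·Sa/4. The short sketch below is illustrative only; the γ and surface-area values are assumptions, not the parameter values used in this study (which depend on particle acidity and composition; see Methods).

import math

R = 8.314      # J mol-1 K-1
T = 298.0      # K
M = 0.118      # kg mol-1, molar mass of IEPOX (C5H10O3)
gamma = 1e-3   # reactive uptake coefficient (illustrative assumption)
S_a = 2e-4     # aerosol surface area density, m2 per m3 of air (assumed)

omega = math.sqrt(8 * R * T / (math.pi * M))  # mean molecular speed, m s-1
k_het = gamma * omega * S_a / 4.0             # first-order uptake rate, s-1
print(f"k_het = {k_het:.2e} s-1 -> IEPOX lifetime ~ {1 / k_het / 3600:.0f} h")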
Bars represent measurements while symbols represent model-predicted increases of OA within Manaus plume compared to background conditions WRF-Chem simulated concentrations of biogenic SOA in the presence and absence of Manaus emissions. a Biogenic SOA when all emissions are on b Biogenic SOA when biogenic volatile organic compound (VOC) emissions are on but Manaus (anthropogenic) emissions are turned off c Biogenic SOA enhancement (%) calculated from the two simulations with Manaus emissions turned on/off i.e. (a–b)/b × 100. WRF-Chem predictions are at ~500 m altitude, averaged during the afternoon (16–20 UTC = 12–16 local time) of 13 March 2014 WRF-Chem simulated concentrations of NOx and oxidants with Manaus emissions turned on/off. a, b, and c show simulated NOx, ozone and OH with all emissions on, while d, e and f show simulations with biogenic emissions on but Manaus emissions turned off for an altitude of ~500 m, averaged during the afternoon (16–20 UTC) of 13 March 2014. A comparison of top and bottom panels demonstrates how NOx and oxidants are greatly enhanced by the Manaus plume within the otherwise pristine Amazon The model predicts that natural biogenic SOA formed by oxidation of biogenic VOC emissions (including isoprene, monoterpenes, and sesquiterpenes, emitted by the forest, denoted by pink shaded region in Fig. 1a) dominates over anthropogenic OA (black shaded region in Fig. 1a). Anthropogenic OA is significant only within plume. The default model (blue) agrees with measured OA loadings (orange) during three of the four flight transects, however, the sensitivity simulation with Manaus emissions turned off (green) predicts much lower OA loadings, representing the regional background biogenic SOA concentrations (~0.5 μg m−3). The simulated background OA agrees with aircraft measurements (Fig. 1a and Supplementary Table 3). Differences between our two model simulations represented by the blue and green lines in Fig. 1a represents the enhancement of OA due to anthropogenic emissions. In-plume model simulated OA agrees with measured OA (within 15%) for transects 1, 2, and 3. However, for transect 4, the model overestimates OA by ~50% compared to measurements. Since transect 4 is farthest from the city center, differences between locations of the observed and simulated plume are greatest for transect 4 compared to transects 1, 2, and 3, as observed from CO comparisons for this day (Supplementary Figure 3c). Consistently, Fig. 1a also shows that simulated OA (blue line) drops to the background value of ~0.5 µg m−3 when the aircraft moves out of the plume during flight transects 1–3 (open circles in Fig. 1a). However, simulated OA does not drop to the background value when the aircraft moves out of plume during transect 4 due to model-measurement differences in plume location and dispersion (e.g. at ~4000 s flight time in Fig. 1a). These differences in plume location and dispersion farther downwind of Manaus partly explain the increased model-measurement differences in simulated OA for transect 4 compared to the other transects (Supplementary Table 3). Therefore, our approach focuses on identifying the shifted urban plume in the model and contrasting plume-to-background concentrations, as described below. Traditional statistical techniques for model evaluation would not be expected to work well in this study because shifts in plume location could be an issue, especially in high-resolution model simulations. 
To quantify the plume-to-background enhancement in OA concentrations, we classify in-plume and background locations in measurements based on NOy thresholds (background: NOy < 0.5 ppb, in-plume: NOy > 2 ppb)36. In model simulations, we define the plume based on NOy threshold (NOy > 2 ppb), similar to measurements. Background concentrations in the model are obtained from our sensitivity simulation where anthropogenic (Manaus) emissions are turned off. Both measurements and model simulations show that NOy increases from sub-ppb levels in the background to several ppbs in-plume i.e. more than an order of magnitude increase (Supplementary Figure 3a). Since the simulated plume could be shifted compared to measurements, we determine in-plume locations in the model by scanning all radial grid-points downwind of Manaus (T1 site) that exceed the NOy threshold of 2 ppb. The model is sampled at the same time, altitude, and radial distance as corresponding aircraft measurements. Percentage enhancement factors in the model are calculated as the ratio of difference between plume and background OA (i.e. plume OA-background OA) to background OA concentrations for all locations averaged across each flight transect, similar to measurements. Percentage in-plume enhancement factors of OA compared to its immediate background are calculated individually for four different flight transects of the plume (Fig. 1b). Measurements (orange bars in Fig. 1b) indicate that OA is enhanced by an average of 100–200% in-plume with peak enhancements (calculated as the difference between largest in-plume OA concentrations to background levels) of ~400% compared to background on March 13. These enhancements are much larger than those reported in previous studies over other regions in the Northern hemisphere9,14. We attribute these large enhancements mostly to the sharp increase in oxidants within Manaus plume compared to the background Amazon, as discussed later. The model shows excellent agreement with observed enhancements for transects 1, 2, and 3 (within 20%) (Fig. 1b). For transect 4, the model overestimates enhancement by ~75% compared to observations due to overestimation of simulated in-plume OA for this transect. Although the focus of this study is on March 13, which represented a golden day due to sunny conditions and clear evolution of the plume downwind of Manaus, OA was also enhanced on other days. For example, we calculated enhancements of OA in plume-affected locations on 2 other days i.e. March 14 and 16 during the simulated period based on measured and simulated CO/NOy (Supplementary Figure 4). The model moderately overestimates OA enhancements on both days (by ~50–60%). Some of these differences between simulated and measured enhancements are due to differences in plume location and dispersion. Our simulations indicate that biogenic SOA is the dominant contributor to total OA downwind of the city (Fig. 1a and Supplementary Figure 5). This result explains the dominant oxygenated organic aerosol (OOA) contribution to total OA downwind, suggested by AMS factor analysis37. Consistent with our findings of OA enhancement from aircraft, a recent study also found 25–200% enhancement in submicron particles observed over the T3 ground site during the entire wet season35. Simulated enhancement in biogenic SOA Isoprene and other biogenic VOCs are emitted throughout the Amazonian rainforest as diffuse area sources. Manaus emissions interact with these biogenic sources, increasing oxidants and SOA formation. 
WRF-Chem simulated spatial distributions of total biogenic SOA (the sum of SOA formed by oxidation of isoprene, monoterpenes, and sesquiterpenes) from the default simulation (all emissions on) and the simulation with Manaus emissions off are shown in Fig. 2a, b, respectively. Biogenic SOA formation is enhanced both in the plume and in its outflow regions by 100–400% on average during the afternoon of 13 March 2014, as indicated in Fig. 2c, consistent with the enhancement in total OA shown for the flight transects in Fig. 1.

Background versus in-plume oxidants

In the wet season, locations not affected by Manaus can approach conditions characteristic of preindustrial times. WRF-Chem predicts that background oxidants are mainly sustained by catalytic effects of natural NO emissions (soil NOx, described below) on OH concentrations through reactions of NO with hydroperoxyl radicals (HO2) and organic peroxy radicals (RO2) during the daytime, and this chemistry also affects ozone. Additional OH recycling mechanisms have been suggested in the literature44; however, these recycling mechanisms often cause substantial overestimation of observed OH45. Therefore, no additional OH recycling mechanisms are included in the model.

Soil NOx as the driver of background oxidizing capacity

The dominant natural background source of NOx is emissions from soils, which we include here as an effective soil NO emissions flux of 8.3 × 10⁹ molecules cm⁻² s⁻¹ within WRF-Chem (Methods). This value is close to the soil NOx emissions range suggested by field measurements over Amazon rainforests, as discussed by Liu et al.36. Under background Amazonian conditions, the relative reaction rate of isoprene peroxy radicals (ISOPOO) with NO to that with HO2 is suggested by analysis of measurements to be approximately unity36. Global chemical transport models often predict a much smaller relative reaction rate of ISOPOO with NO compared to HO2 over the Amazon (~0.2), which was attributed to their order-of-magnitude lower soil NO emissions compared to measurements36. However, our WRF-Chem simulations predict that the ratio of reaction rates of ISOPOO with NO to that with HO2 is ~1.0 over the background Amazon (averaged across the inner model domain during the local afternoons, 16–20 UTC, of the simulated period), which increases confidence in the model's ability to simulate the variation of isoprene oxidation products over the Amazon.

Enhancement of NOx and oxidants by the Manaus plume

Figure 3 shows that Manaus emissions significantly increase NOx and oxidant levels within the Amazon compared to the background. Under polluted conditions, the model indicates that urban NOx emissions are more than an order of magnitude higher than soil NO emissions. When Manaus emissions are turned on (top panels in Fig. 3), WRF-Chem also simulates an order of magnitude higher OH radical concentrations and significantly enhanced ozone compared to when Manaus emissions are turned off (bottom panels in Fig. 3). Consistent with our simulations, NO measurements aboard the G-1 aircraft show that the in-plume NO concentration (1.3 ppb) is more than an order of magnitude higher than that observed over the background (0.04 ppb) (averaged across all aircraft transects during the simulated period). The model also predicts that, within the Manaus plume, the reaction rate of ISOPOO with NO exceeds that with HO2 by a factor of 3.
Thus, the model predicts that urban NO emissions greatly increase the oxidizing capacity of the atmosphere and shift the atmospheric oxidation cycle towards the formation of nitrogen compounds46. Model predictions of this increased oxidation capacity within the plume are consistent with an analysis of measurements at the T3 site, which indicated that urban NOx amplifies OH concentrations by ~250% compared to the background47. Similarly, measurements suggest that ozone is also enhanced by a factor of 1.5–3 in plume-affected locations compared to the background on March 13 (Supplementary Table 4), while the model predicts a somewhat higher enhancement (factor of 2–3). A sharp increase in ozone and other pollutants between plume and background was also reported in two other recent WRF-Chem modeling studies in the Amazon basin41,48.

We attribute the large observed enhancement of biogenic SOA within the urban plume (Fig. 1b) over the Amazon to the increase in oxidants due to NOx emissions. In comparison to the Amazon, most other regions of the Northern hemisphere have much higher NOx levels49. Smaller plume-to-background differences in NOx concentrations over the Northern hemisphere could explain the smaller effects of NOx on biogenic SOA reported previously9. Previous studies have also reported a larger sensitivity of SOA to POA, which promotes condensation of semi-volatile SOA species9,50. In this study, we assumed that condensation of SOA is independent of POA. While this is a conservative assumption, simulated POA concentrations are much smaller than both biogenic and anthropogenic SOA (Supplementary Figure 5), so SOA formation in our simulations is not sensitive to this mixing assumption. Consistently, both ground-based and aircraft measurements using AMS have shown that the oxygenated organic aerosol (OOA) factor, which can be related to SOA, dominates over the primary organic aerosol factor (HOA) in the background Amazon35,37.

Biogenic SOA in the Amazon

Figure 4 schematically illustrates how urban NOx emissions increase the reacted forest carbon (biogenic VOCs) over the Amazon, thereby enhancing biogenic SOA formation. In the absence of Manaus urban emissions, soil NOx emissions drive the oxidant cycling but lead to much lower SOA formation due to sub-ppb background NOx levels. Emission fluxes of biogenic VOC in the Amazon are modeled to be 80% isoprene, 17% monoterpenes, and 3% sesquiterpenes on a mass basis (Fig. 4a)51. The average daytime isoprene flux simulated by WRF-Chem (~5 mg m⁻² h⁻¹) agrees within 20% with the average wet-season isoprene emissions flux estimate (~6 mg m⁻² h⁻¹) derived from aircraft measurements using the eddy covariance technique, as reported by Gu et al.52. Note that although Gu et al.52 found a strong correlation of isoprene emissions with terrain elevation during the dry season, the wet season did not exhibit this dependence. Since our study focuses on the wet season, the elevation dependence of isoprene emissions is not relevant here.

Fig. 4: Schematic illustrating how NOx emissions from Manaus greatly enhance formation of biogenic SOA within the urban plume. NOx emitted by Manaus greatly increases oxidants (OH and ozone; brown arrows), which promote reaction of forest carbon (emitted as isoprene and terpenes; green arrows). In the absence of the urban plume, background soil NOx emissions (purple arrows) drive the oxidant cycling but are much smaller than the NOx emitted from Manaus.
Lower background NOx causes smaller OH and ozone production, thus decreasing reacted biogenic VOCs and SOA formation. The pie charts indicate WRF-Chem simulated domain-averaged components of (a) mass emissions fluxes of biogenic VOCs, (b) background biogenic SOA, and (c) in-plume biogenic SOA at 500 m altitude during the afternoon (16–20 UTC) of 13 March 2014. Biogenic SOA consists of two parts: gas-phase chemistry of isoprene, monoterpenes, and sesquiterpenes represented by the VBS approach (~70% of total SOA), and multiphase chemistry driven by IEPOX uptake into SOA, as described in the text.

Simulated concentrations of isoprene and its first-generation oxidation products (ISOPOOH, methacrolein, and methyl vinyl ketone) agree with Proton Transfer Reaction Mass Spectrometer (PTR-MS) measurements aboard the aircraft within a factor of 2 (Supplementary Figure 3d). A comparison of the model with aircraft measurements showed a factor of ~2 difference in monoterpene concentrations between model and measurements on March 13. However, both model and measurements show that monoterpene concentrations drop by a factor of 3 or more in the plume compared to background levels, due to enhanced in-plume oxidation of monoterpenes. Notably, monoterpenes measured by PTR-MS over the aircraft flight track were often close to their detection limit (~0.2 ppbv) on March 13. The model-simulated background monoterpene concentrations presented in this study (with a median simulated value of 0.6 ppb) are, however, well within the range of other measurements over the Amazon (0.1–1 ppbv), as summarized by Alves et al.53. Due to challenges in the measurement and identification of sesquiterpenes, significant uncertainties remain about the emissions fluxes of these species54. However, as discussed later in the manuscript, WRF-Chem simulated SOA from sesquiterpenes agrees with another observational study55. While the emission uncertainties of biogenic VOCs are considerable, and spatial heterogeneity will result in regional differences, the aircraft measurements demonstrate that our model simulation is a reasonable scenario for representing biogenic VOC emissions in the Amazon region.

Here, we use the WRF-Chem model to understand the contributions of different biogenic SOA precursors over the Amazon. The model predicts that ~70% of the total biogenic SOA is formed through the pure gas-phase chemistry pathway represented by the VBS approach (Fig. 4b, c). All biogenic SOA types simulated by the VBS are predicted to be enhanced in-plume. However, simulations indicate a greater in-plume enhancement of isoprene SOA (~180%) and sesquiterpene SOA (~160%) compared to monoterpene SOA (~60%). The greater enhancement of isoprene and sesquiterpene SOA compared to monoterpene SOA can be explained by two additive effects of NOx on SOA formation: (1) NOx increases oxidants (OH and ozone), which increase the amount of reacted carbon for all biogenic VOCs, thus increasing SOA formation. (2) On a per-reacted-carbon basis, SOA yields may increase or decrease with NOx depending on the specific VOC and oxidant types. For example, sesquiterpene photooxidation SOA yields increase while monoterpene SOA yields decrease as NOx increases56. This is most likely explained by the increased likelihood, relative to monoterpenes, of sesquiterpene oxidation forming multifunctional products of low volatility via isomerization, as well as non-volatile organic nitrates, under high NOx conditions56.
Despite the decrease of monoterpene SOA yields with increasing NOx, the model predicts an enhancement of monoterpene SOA in plumes (~60%) because the competing increase in reacted monoterpenes from the NOx-promoted oxidant increase (effect 1 above) is larger. Consistently, a recent molecular-level analysis of measured SOA over the southeastern United States showed that monoterpene SOA increases with NOz (processed NOx), even as the ratio of measured fragmentation to functionalization products increases during the daytime8.

Isoprene SOA contributes ~50% to the total biogenic SOA (Fig. 4b, c) and comprises gas-phase oxidation and multiphase IEPOX-SOA formation pathways. The model-predicted IEPOX-SOA contribution of ~30% (Methods) is within the range of previous analyses of measurements over the Amazon23,25. Under low NOx background conditions, isoprene photooxidation could contribute to SOA formation by pathways other than IEPOX uptake, e.g., through the formation of the low volatility dihydroxy dihydroperoxide ISOP(OOH)2 (ref. 57). We tested the potential of this pathway for isoprene SOA formation, but it was not found to be important over pristine Amazonia, due to a competing isomerization reaction of the peroxy radical precursor, which results in the formation of a higher volatility product58 (reactions listed in Supplementary Table 5). Thus, in our approach, simulated isoprene SOA is the net effect of gas-phase chemistry pathways at high and low NOx conditions, together with multiphase IEPOX chemistry. Our results imply that under high NOx conditions within the Manaus plume, isoprene photooxidation SOA yields could be similar to or higher than under background low NOx conditions59,60. Thus, effects (1) and (2) are also additive for isoprene, increasing isoprene SOA from gas-phase oxidation pathways in-plume. Terpenes (the sum of mono- and sesquiterpenes) together contribute the remaining 50% of the biogenic SOA (Fig. 4b, c). Although sesquiterpenes have much smaller emissions fluxes than monoterpenes and isoprene (Fig. 4a), they have higher yields. Under background conditions, the model predicts a domain-average sesquiterpene SOA contribution of 6%. This estimate is consistent with observations by Yee et al.55 that sesquiterpene oxidation contributes at least 5% to total submicron OA mass in the background Amazon, based on measurements of molecular tracers of sesquiterpene oxidation at T3 during the modeled period.

Discussion

The vast Amazon rainforest transitions between pristine (preindustrial-like) conditions and urban-influenced polluted regions due to rapid development, with increasing electricity and transport demands and also deforestation for agricultural purposes61. This region provides a unique lens to investigate how chemical pathways of SOA formation have transitioned from preindustrial to urban-influenced present-day conditions. By combining analyses using a high-resolution regional model with laboratory and field measurements during the GoAmazon2014/5 field campaign, we investigate how anthropogenic emissions enhance different pathways of biogenic SOA formation over the Amazon region. Both aircraft measurements and model predictions indicate ~60–200% average enhancements in OA concentrations, with peak enhancements of ~400%, in the Manaus plume compared to background regions. These SOA enhancements over the Amazon are much larger than those previously reported over more polluted regions like the continental United States9,14.
A major factor contributing to this enhancement over the Amazon is the sharp transition in NOx from sub-ppb levels in the pristine background to several ppb within the urban plume (an order of magnitude increase), which greatly increases biogenic SOA. Simulations indicate that the large enhancement in biogenic SOA observed during GoAmazon2014/5 can mostly be explained by urban NOx emissions that increase oxidants, and no additional OH recycling mechanisms are needed to explain this enhancement. Our study also highlights that the contribution of terpenes to biogenic SOA formation is as important as that of isoprene, even within the isoprene-dominated Amazonian forests. Our simulations demonstrate major shifts in biogenic VOC chemistry and SOA formation within locations affected by the urban plumes. On a per-reacted-carbon basis, SOA yields can increase or decrease as NOx increases due to complex VOC- and oxidant-dependent chemistry. However, the overall amount of reacted carbon increases through the acceleration of oxidant cycling promoted by NOx, thus increasing SOA formation over the Amazon. In addition, we show that although anthropogenic OA is a minor contributor to total OA over the Amazon, the major effects of urban pollution are manifested in changing the chemical pathways and greatly increasing natural biogenic SOA formation over this region. Our results provide a clear picture of how anthropogenic emissions are likely to have greatly modified biogenic SOA formation over the Earth since preindustrial times, and imply that rapid urbanization in future years might substantially enhance biogenic SOA formation in the pristine forested regions of the Amazon.

Methods

We used the community regional Weather Research and Forecasting model coupled to chemistry (WRF-Chem version 3.5.1)39,40 to generate the modeling results in this manuscript. WRF-Chem is a community model and is accessible to users. Specific WRF-Chem configurations and modifications to gas- and particle-phase chemistry parameterizations used to generate results in this study are described below.

WRF-Chem setup

We use the regional WRF-Chem model39,40 at cloud-, chemistry-, and emissions-resolving scales, i.e., at 2 km grid spacing, which is a much higher resolution than that used in previous global modeling studies (typically hundreds of km)62. Since high-resolution simulations explicitly resolve features in clouds, emissions, and chemistry, they do not suffer from uncertainties in the parameterizations needed to represent these features in coarser-resolution global models. Trace gases, aerosols, and clouds are simulated simultaneously with meteorology40. Biogenic VOC emissions are predicted using the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.1)51, which is coupled to the Community Land Model (CLM). CLM is run at the same grid spacing as WRF-Chem. We use a nested grid configuration with an outer 10 km grid spacing domain covering 1500 × 1000 km and an inner 2 km grid spacing domain covering 450 × 300 km centered over Manaus City.
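For orientation, the grid dimensions implied by these domain sizes can be checked with a few lines of Python; the grid-point counts below are inferred from the stated extents and spacings, not taken from the actual WRF namelist.

```python
# Approximate grid dimensions implied by the stated domain extents.
domains = {
    "outer (10 km)": ((1500, 1000), 10),  # extent in km, spacing in km
    "inner (2 km)": ((450, 300), 2),
}
for name, ((lx, ly), dx) in domains.items():
    nx, ny = lx // dx + 1, ly // dx + 1
    print(f"{name}: about {nx} x {ny} grid points")
```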
Meteorological and chemical boundary conditions, the land-surface scheme, and the radiation scheme used for configuring the WRF-Chem runs in this work are listed in Supplementary Table 1. The land surface data and the emissions of trace gases and aerosols used for the simulations were the best available products for South America. The surface albedo, vegetation, and green fraction used in this study are documented in Beck et al.63. All model predictions analysed in this study are for the high-resolution inner domain, which better resolves emissions, chemistry, and clouds compared to the outer domain. Also, the 2 km grid spacing inner domain explicitly resolves deep convective clouds, so no convective cloud parameterization is used for the inner domain. The National Centers for Environmental Prediction (NCEP) Climate Forecast System Version 2 (CFSv2) reanalysis data (CFSR)64 provide the meteorological initial and boundary conditions. Meteorological conditions were spun up for 24 h, followed by 72 h of simulation, while the trace gas and aerosol species from the previous simulation were used as initial conditions. We conducted concatenated 4-day simulations, following the approach of Medeiros et al.41 for this region. The chemical boundary conditions for trace gases and aerosols over the outer domain are provided by a quasi-global WRF-Chem simulation for 201465, while the inner domain received boundary conditions from the outer domain.

Meteorological fields

Supplementary Figure 6 shows that the model reasonably reproduces the observed multi-day variations of several meteorological fields, including surface temperature, specific humidity, wind speeds, boundary layer height, downwelling solar radiation, and surface latent heat flux. The surface temperature, specific humidity, and wind speeds are averaged over three sites around Manaus and downwind areas (the T1-Manaus, T2, and T3 sites); boundary layer height and downwelling solar radiation are taken from the T3 site, and surface latent heat flux is taken from the T0k site. The model is randomly sampled at 1000 grid points over land within a 50 km radius centered at T3 and Manaus for all the meteorological fields except latent heat flux. For latent heat flux, the model is randomly sampled at 200 grid points within a 30 km radius of the T0k site, where latent heat flux above the forest canopy is available from a tower measurement. A random sampling strategy is applied to the model output to mimic the large spatial variability that a few single-point observations cannot capture during the relatively short study period. Surface meteorology measurements at T3 are from the ARM MET datastream66, and surface radiative flux measurements are from the ARM RADFLUX product67. Boundary layer height at the T3 site was derived using the vertical velocity statistics from the ARM DLPROFWSTATS4NEWS product68. The method follows Tucker et al.69 in using profiles of Doppler-lidar-measured vertical velocity variance as a measure of the turbulence within the boundary layer. Starting from the surface, the first vertical height level where the vertical velocity variance drops below 0.04 m² s⁻² is designated as the boundary layer height.
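The threshold method just described reduces to a simple first-crossing search. Below is a minimal sketch, with a made-up variance profile standing in for the Doppler lidar data.

```python
import numpy as np

def boundary_layer_height(z, w_var, threshold=0.04):
    """First height (scanning upward from the surface) where the vertical
    velocity variance (m2 s-2) drops below the threshold."""
    for zi, var in zip(z, w_var):
        if var < threshold:
            return zi
    return z[-1]  # variance stayed above threshold through the profile

z = np.arange(100, 3100, 100)           # heights in m (hypothetical)
w_var = 0.6 * np.exp(-z / 900.0)        # synthetic variance profile
print(boundary_layer_height(z, w_var))  # -> 2500 m for this profile
```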
Emissions of trace gases and aerosols

Since our WRF-Chem simulations are conducted at high resolution, including emissions of trace gases and aerosols was challenging for the Amazon, because detailed high-resolution emission inventories are scarce for this region. We combined several emissions inventories from different sources to obtain reasonable estimates of trace gas and aerosol emissions. We include primary emissions of gases such as CO, non-methane volatile organic compounds (NMVOC), sulfur dioxide (SO2), ammonia (NH3), and oxides of nitrogen (NOx), and of aerosols, including organic carbon (OC), black carbon (BC), and sulfate (SO4), from anthropogenic and biomass burning sources. Emissions of aerosols and gases from the traffic sector were included from a detailed high-resolution 2 km × 2 km gridded emissions inventory developed for this region based on the methodology described in a previous study70. Based on a recent study41, we also included point-source emissions of CO, NOx, SO2, VOCs, and particulate OC, BC, and SO4 from power plants over the Manaus region, reflecting the mix of fuel oil, diesel, and natural gas used for electricity generation in 2014, and from a large oil refinery. Additional anthropogenic SO2 and SO4 area emissions were also included based on VOCA (http://bio.cgrer.uiowa.edu/VOCA_emis/) and the Emissions Database for Global Atmospheric Research (EDGAR v4.1), respectively. NH3 emissions from the industry, energy, residential, and agriculture sectors are from the Hemispheric Transport of Air Pollution (HTAP_v2.2) 2010 emissions inventory71.

Biogenic and biomass burning emissions

We included biomass burning emissions of both gases and aerosols from the 2007 Fire Inventory from NCAR (FINN07)72. FINN07 particulate emissions include organic carbon (converted to OA using an OA/OC ratio of 1.4), black carbon, PM2.5, and PM10. NMVOC emissions from both anthropogenic and biomass burning sources are speciated according to the SAPRC-99 mechanism. We also include emissions of biogenic volatile organic compounds (BVOC). BVOC emissions are derived from the latest version of the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.1), which has recently been coupled to the land surface scheme CLM4 (Community Land Model version 4.0) in WRF-Chem73. The 138 biogenic species from MEGAN are lumped into 3 biogenic VOC classes: isoprene (ISOP), terpenes (TERP), and sesquiterpenes (SESQ).

Unspeciated organic emissions

Unspeciated organic emissions are traditionally not included in emission inventories but are important for anthropogenic SOA formation74,75,76. About 10–20% of total non-methane organic gas (NMOG) emissions are not routinely included in emissions inventories74. These unspeciated emissions have significant potential to form SOA since they are semi-volatile or intermediate-volatility organics (SIVOCs). We represent all unspeciated NMOG emissions from biomass burning and fossil-fuel sources as an intermediate-volatility gas-phase species (C* = 10⁴ µg m−3), referred to as IV-POA (g). Emissions of IV-POA (g) are assumed to be 20% of the total NMOG emissions for both biomass burning and fossil-fuel sources, based on the unspeciated fraction of NMOG emissions reported in Jathar et al.74. In addition, in our model, we assume that 50% of the emitted POA evaporates instantaneously and contributes to IV-POA (g), consistent with Jathar et al.74, while the remaining 50% is assumed to be non-volatile. This reduces the number of POA tracers that need to be advected in the model and increases computational efficiency, since our focus is mainly on SOA formation. Oxidation of the evaporated POA also contributes to anthropogenic SOA formation, as described later.
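A minimal sketch of this emissions bookkeeping is given below; the emission totals are made-up numbers, and only the 20% and 50% splits come from the text.

```python
# Hypothetical per-grid-cell emission totals (kg h-1).
nmog_total = 100.0  # total non-methane organic gas emissions
poa_total = 10.0    # primary organic aerosol emissions

iv_poa_g = 0.20 * nmog_total        # unspeciated NMOG -> IV-POA (g)
iv_poa_g += 0.50 * poa_total        # half of emitted POA evaporates
poa_nonvolatile = 0.50 * poa_total  # remainder stays non-volatile POA

print(f"IV-POA (g): {iv_poa_g} kg h-1, "
      f"non-volatile POA: {poa_nonvolatile} kg h-1")
```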
Effects of soil NO emissions

We included sources of NO emissions from soils within WRF-Chem. Previous studies suggest soil NO emissions for tropical forests in the range 20–60 µg NO m−2 h−1 77,78,79. However, much of this NO reacts with ozone within the canopy and does not enter the above-canopy atmosphere. This in-canopy reduction of NO reduces the effective flux of NO to the above-canopy atmosphere by ~75%. We choose the upper bound of the soil NO emissions range and reduce it by 75% to obtain an effective NO emissions flux of 15 µg NO m−2 h−1 (8.3 × 10⁹ molecules cm−2 s−1). This value is close to the soil NOx emissions range suggested by field measurements over Amazon rainforests (1.2 to 7.0 × 10⁹ molecules cm−2 s−1), as discussed by Liu et al.36. Under background Amazonian conditions, Liu et al.36 suggested that the relative reaction rate of isoprene peroxy radicals (ISOPOO) with HO2 to that with NO is approximately unity. Indeed, our WRF-Chem simulations show that the ratio of reaction rates of ISOPOO with NO to that with HO2 is unity under background conditions. This increases confidence in the ability of the model to simulate the relative reaction rates of isoprene peroxy radicals. In contrast, a previous study using the global model GEOS-Chem predicted a much smaller relative reaction rate of ISOPOO with NO compared to HO2, which was attributed to its order of magnitude lower soil NOx emissions compared to measurements36.
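The quoted flux conversion is easy to verify; the sketch below reproduces it from the molar mass of NO and Avogadro's number.

```python
# 60 ug NO m-2 h-1 (upper bound), reduced by 75% for in-canopy loss,
# converted to molecules cm-2 s-1.
AVOGADRO = 6.022e23
M_NO = 30.0  # g mol-1

flux_ug_m2_h = 60.0 * 0.25                       # -> 15 ug NO m-2 h-1
flux_g_cm2_s = flux_ug_m2_h * 1e-6 / 1e4 / 3600  # ug->g, m-2->cm-2, h->s
flux_molec = flux_g_cm2_s / M_NO * AVOGADRO
print(f"{flux_molec:.2e} molecules cm-2 s-1")    # ~8.3e9, as quoted
```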
Background sources of sulfate in the Amazon

In addition to soil NOx emissions, we also included emissions of dimethyl sulfide (DMS) of 0.8 ng m−2 s−1 from local soil and plant emissions within the Amazon rainforest, based on a recent study80. DMS is also advected from the oceans within our modeling domain. Oxidation of DMS results in the formation of SO2, which is a background sulfate source. However, model simulations indicate that local DMS emissions are a minor source of sulfate, while the Manaus plume is a major source, affecting both in-plume and background sulfate concentrations. Simulated background sulfate of ~0.1 µg m−3 agrees with aircraft measurements (e.g., on 13 March). The model simulates the increasing trend of sulfate within plumes compared to the background (not shown). However, in-plume sulfate simulated by the model is a factor of 2 higher than the observed sulfate, which is within the expected uncertainties of sulfate emissions sources within the Amazon.

Simulating SOA using the VBS approach

SOA from the pure gas-phase chemistry pathway is represented in the model using a volatility basis set (VBS) approach. The VBS approach represents multiple generations of oxidation of biogenic VOCs, which include the isoprene, monoterpene, and sesquiterpene compound classes, and of anthropogenic and biomass burning precursors, using a lumped set of compounds. Initial yields are determined by fitting environmental chamber measurements and generally vary with VOC, NOx, and oxidants (Supplementary Table 2). We modified the VBS approach to include further aging of organics on timescales longer than those observed in environmental chambers, as described later in this section.

Simulating anthropogenic SOA from unspeciated NMOG emissions

Oxidation of anthropogenic IV-POA (g) by OH radicals results in the formation of semi-volatile SOA species, which can be represented by fitting environmental chamber measurements using a VBS approach. Semi-volatile SOA formation yields due to oxidation of anthropogenic IV-POA (g) emissions were assumed to be the same as those reported for on- and off-road diesel vehicle sources and biomass/wood burning in Table S3 of Jathar et al.74, as shown below:

$$\mathrm{IV\text{-}POA(g)} + \mathrm{OH} \to 0.044\,\mathrm{SVOC_1} + 0.071\,\mathrm{SVOC_2} + 0.41\,\mathrm{SVOC_3} + 0.30\,\mathrm{SVOC_4}.$$

SVOC1, SVOC2, SVOC3, and SVOC4 represent lumped VBS species with C* of 0.1, 1, 10, and 100 μg m−3, respectively. These initial yields represent the first few generations of chemistry measured in chamber experiments. The sum of the particle-phase concentrations of SVOC1, SVOC2, SVOC3, and SVOC4 comprises anthropogenic SOA (Supplementary Figure 5e) in our study.
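To make the partitioning step concrete, here is a minimal absorptive-partitioning sketch for these four SVOC products. The reacted IV-POA (g) mass and the seed OA are assumed values, and the yields above are treated as approximate mass yields for simplicity; the actual model solves this partitioning dynamically within MOSAIC.

```python
import numpy as np

cstar = np.array([0.1, 1.0, 10.0, 100.0])      # saturation conc., ug m-3
yields = np.array([0.044, 0.071, 0.41, 0.30])  # from the reaction above

reacted = 5.0               # ug m-3 of oxidized IV-POA (g), assumed
c_total = yields * reacted  # total (gas + particle) product mass per bin
seed = 1.0                  # pre-existing absorbing OA, assumed

c_oa = seed
for _ in range(50):  # fixed-point iteration on the partitioning equation
    c_oa = seed + np.sum(c_total / (1.0 + cstar / c_oa))
print(f"anthropogenic SOA at equilibrium: {c_oa - seed:.2f} ug m-3")
```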
Simulating natural biogenic SOA

Since the simulations are for the wet season and the Amazon is low in OA (measured OA ~1–2 μg m−3), we rely primarily on chamber studies that measured SOA yields at low concentrations56,59,81,82,83,84,85. The SOA yields used in the model are determined by fitting chamber measurements of SOA mass evolution (documented in Supplementary Table 2). We selected the available chamber studies that measured SOA yields at low concentrations, so that they could represent conditions over the Amazon. To the extent that extremely low volatility organic compounds (ELVOCs)86 and low volatility organic compounds (LVOCs) are not lost to the walls in the chamber experiments, these yields implicitly include the lower volatility compounds. The key here is the choice of measurements that determined SOA yields at low OA loadings. For example, using measurements in a continuous-flow chamber, Shilling et al.85 found an SOA yield of 0.09 when 1.9 ppbv of α-pinene reacted to produce OA loadings of 0.9 µg m−3. Importantly, this yield (0.09) remained constant at smaller OA loadings, and the yield curve had no inflection point towards null yield for OA loadings as small as 0.15 µg m−3. This result indicates the formation of products with effective saturation concentrations below 0.15 µg m−3. More recent studies found that the formation of ELVOCs likely explains the absence of an inflection towards null yield for α-pinene ozonolysis SOA at smaller OA loadings86. Thus, yields of these lower volatility compounds are implicitly captured by the 4-product volatility basis set fits with C* of 0.1, 1, 10, and 100 µg m−3 applied in this study. However, because it is difficult to run chamber experiments at very small SOA loadings (<1 µg m−3), the fits to the chamber data will be insensitive to the specific value of the lowest C* bin chosen in the fits, and sensitive only to the fact that one such bin is included. In other words, fits to typical chamber experiments are not capable of distinguishing between products in a C* bin of 0.1 or 0.01. For this reason, most chamber fits choose a lower bound on the C* bin of 0.1, which also effectively captures mass in lower C* bins. This is an inherent limitation of the laboratory experiments and the yield parameterizations. Yields vary with precursor type, oxidant (OH, ozone, or nitrate, i.e., NO3 radicals), and NOx levels during the measurements. The overall NOx-dependent yield is calculated as a sum of the high and low NOx yields weighted by the NOx branching ratio87 at each model grid point and time. We include additional reactions for the VBS bins within the SAPRC-99 mechanism:

$$\mathrm{BVOC(g)} + \mathrm{OH}\;(\text{or ozone, nitrate radical}) \to \sum_{i = 1}^{4} a_i\,\mathrm{BVSOA(g)}_i, \qquad (1)$$

$$a_i = a_{i,\mathrm{high}}\,B + a_{i,\mathrm{low}}\,(1 - B), \qquad (2)$$

where BVOC(g) are the primary biogenic gas species (isoprene, terpenes, or sesquiterpenes), BVSOA(g)i represents the SOA precursor species formed after photochemical oxidation of the BVOC(g), i is the volatility bin (i = 1, …, 4, corresponding to C* = 0.1, 1, 10, and 100 µg m−3), ai is the overall NOx-dependent molar yield calculated from eq. (2), ai,high and ai,low are the molar yields under high and low NOx conditions, respectively, as shown in Supplementary Table 2, and B is the NOx branching ratio as defined by Lane et al.87. In this work, we also included further NOx-dependent multigenerational aging of both biogenic SOA and anthropogenic organics, as described below.

Further aging of VBS organics

In the atmosphere, longer-timescale aging (beyond that observed in chambers) can change SOA yields compared to those determined from chamber measurements. Multigenerational aging results in both functionalization (decreasing volatility) and fragmentation (increasing volatility) reactions. In our previous studies88,89, we showed that gas-phase fragmentation processes, which are often neglected in chemical transport SOA modeling parameterizations, could have large effects on both regional and global SOA loadings. In addition, the branching ratio between fragmentation and functionalization is reported to vary with the relative reaction rates between NOx, HO2, and RO2 radicals. Gas-phase fragmentation is reportedly more prevalent under high NOx compared to low NOx conditions90,91. In this study, we assume that the probability of fragmentation equals the ratio of the peroxy–NO reaction rate to the sum of all peroxy radical reaction rates (including peroxy–peroxy and peroxy–NOx reactions). However, we assign an upper limit of 75% fragmentation based on our previous sensitivity studies that varied this branching ratio (but without an explicit NOx dependence)88,89. Each generation of aging of the VBS SOA species results in both functionalization and fragmentation reactions as a function of the peroxy–NOx branching ratio, calculated at each WRF-Chem grid point and time step. In addition, we assume that a small fraction of organics fragments to species of much higher volatility, which are not tracked. The maximum fraction of organics that is moved outside the VBS range is assumed to be 10% by mass, corresponding to the maximum fragmentation branching of 75%88,89. A sensitivity simulation in which this additional aging was turned off showed a minor decrease in simulated mass concentrations of background SOA over the Amazon compared to the default simulation (not shown). We expect that the effect of NOx-dependent multigenerational aging is less pronounced over the Amazon than over more polluted locations (such as the continental United States), likely due to smaller background oxidant concentrations over the Amazon. Thus, the added multigenerational aging does not affect the main results and conclusions of this study.
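The yield weighting of eq. (2) and the capped fragmentation branching can be expressed in a few lines; the numerical inputs below are illustrative only.

```python
def overall_yield(a_high, a_low, branching):
    """Eq. (2): NOx-dependent yield as a branching-ratio-weighted mix of
    high- and low-NOx chamber yields (branching = B in the text)."""
    return a_high * branching + a_low * (1.0 - branching)

def fragmentation_probability(r_no, r_other, cap=0.75):
    """Fragmentation probability during aging: the RO2+NO share of all
    RO2 reaction rates, capped at 75% as described above."""
    return min(r_no / (r_no + r_other), cap)

# Assumed yields and pseudo-first-order RO2 loss rates (s-1):
print(overall_yield(a_high=0.05, a_low=0.12, branching=0.8))  # 0.064
print(fragmentation_probability(r_no=0.3, r_other=0.01))      # hits the cap
```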
Aerosol treatments in the MOSAIC module

The condensation of low volatility gases (H2SO4 and CH3SO3H) and the dynamic partitioning of semi-volatile inorganic gases (HNO3, HCl, and NH3) to size-distributed liquid, mixed-phase, and solid atmospheric aerosols are represented by the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) aerosol module43. In this study, the aerosol species simulated in MOSAIC include sulfate, nitrate, ammonium, other inorganics (OIN), elemental carbon, organic carbon, and aerosol water. We represent aerosols by four size sections with dry particle diameter ranges of 0.039–0.156, 0.156–0.624, 0.624–2.5, and 2.5–10.0 µm. Both interstitial and activated (cloud-borne) species corresponding to all aerosol chemical components are included and advected. Each simulated size bin also includes both particle number and mass. The MOSAIC aerosol module includes treatments of nucleation, coagulation, and condensation, as described in previous studies43. The size-dependent dry deposition of particles (both number and mass) is based on the approach of Zhang et al.92. In addition, both in-cloud and below-cloud wet removal of trace gases and aerosols is simulated following Easter et al.93.

Gas-phase chemistry

Gas-phase chemistry in this study is based on the Statewide Air Pollution Research Center (SAPRC-99) mechanism94, which includes 211 reactions of 56 gases and 18 free radicals. This mechanism is updated to include gas-phase photochemical oxidation of organic species to form SOA particles. We include SOA formed by oxidation of semi-volatile and intermediate volatility organic compounds (S/IVOC) emitted from anthropogenic and biomass burning sources (SI-SOA) and traditional SOA (V-SOA) formed by oxidation of volatile organic compound (VOC) precursors from biogenic emissions. We also extended this gas-phase chemistry mechanism to include isoprene epoxydiol (IEPOX) formation (Supplementary Table 5). VOC oxidation and the catalytic effects of NOx on the oxidant cycle sustain the atmospheric oxidation capacity95. NO is necessary for HOx cycling and for the formation of ozone and OH radicals. Additional OH recycling mechanisms have been suggested in the literature44; however, these recycling mechanisms often cause substantial overestimation of observed OH45. Therefore, in this study, we do not include additional OH recycling mechanisms in the model other than reactions between HO2 and NO.

Multi-phase IEPOX chemistry

Multiphase SOA formation from isoprene oxidation is simulated using new aqueous chemistry modules that we added within WRF-Chem based on the simpleGAMMA model42. These aqueous chemistry modules are coupled to MOSAIC, which simulates the key inorganic species (such as sulfate, nitrate, and ammonium ions), particle acidity, and water needed by the simpleGAMMA model43. The uptake of IEPOX within aqueous aerosols is determined by its solubility (Henry's law constant, HIEPOX), followed by its reaction in the particle phase42. Here, we set HIEPOX to 1.7 × 10⁸ M atm−1 following Gaston et al.24, which represents the higher end of HIEPOX values suggested in the literature24,96,97,98,99,100. Thus, the IEPOX-SOA simulated in this study most likely represents an upper-bound estimate. Only a fraction of the epoxide reactively taken up by particles contributes to IEPOX-SOA formation101.
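For scale, the equilibrium amount of dissolved IEPOX implied by this Henry's law constant can be estimated as follows. The gas-phase IEPOX mixing ratio and the aerosol liquid water content are assumed values, and this simple solubility estimate ignores the particle-phase reaction that the actual module treats.

```python
H_IEPOX = 1.7e8   # M atm-1, the upper-end value used in the model
p_iepox = 0.5e-9  # atm, i.e., 0.5 ppb gas-phase IEPOX (assumed)
lwc = 5.0         # ug of aerosol liquid water per m3 of air (assumed)

c_aq = H_IEPOX * p_iepox            # mol per liter of particle water
liters_per_m3 = lwc * 1e-6 / 1000   # grams of water -> liters per m3 air
ug_m3 = c_aq * liters_per_m3 * 118.13 * 1e6  # molar mass ~118 g mol-1
print(f"dissolved IEPOX at equilibrium: {ug_m3:.3f} ug m-3")  # ~0.05
```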
The fraction of low volatility accretion products of IEPOX-SOA could vary significantly between locations due to variable chemistry and partitioning. In this study, following measurements during GoAmazon2014/5 by Isaacman-VanWertz et al.26, we constrained this fraction to 0.4, i.e., only 40% of IEPOX-SOA products persist in the particle phase due to their low volatility in our simulations. Products of IEPOX reactive uptake that are semi-volatile evaporate from particles, leaving only low volatility accretion products as IEPOX-SOA and organosulfates27.

Key factors affecting the computational cost of simulations

Our simulations use detailed SOA parameterizations represented by the VBS approach, and a number of gas- and particle-phase VBS species need to be replicated for different source categories, including anthropogenic and biogenic classes (isoprene, terpene, and sesquiterpene classes), to resolve their individual contributions. Particle-phase species also multiply with the number of size bins and additionally need to be replicated as interstitial and cloud-borne species that are advected in the model. Thus, a large number of gas- and particle-phase species (420 in total) are advected in the model, greatly increasing the computational cost compared to chemistry packages without SOA within WRF-Chem. In addition, the high-resolution nested grid configuration (2 km grid spacing) also increases WRF-Chem computational costs compared to global modeling studies that use much coarser grid spacings (~100–200 km).

Simulations with Manaus emissions on/off

We compare WRF-Chem simulations with Manaus emissions on and off to quantify how Manaus emissions amplify oxidant cycling and biogenic SOA formation over the Amazon. Plume locations simulated by the model can be shifted compared to observations due to minor errors in simulated wind direction and dispersion. We conducted a careful analysis to identify the shifts in the model-simulated plume compared to aircraft measurements. Figure 1 and Supplementary Figures 3 and 4 show that OA, ozone, CO, and NOy concentrations along measured and simulated flight transects can be used to accurately diagnose the shifts in the simulated plume compared to measurements. The simulated CO baseline has some uncertainty depending on the boundary conditions (from global WRF-Chem simulations) and was adjusted by a constant value of ~30 ppb for better visual comparison with measurements. The key point is that in-plume CO values are substantially larger than the background. Over the Amazon, NOy increases sharply within urban plumes, by more than an order of magnitude compared to background locations, and is used to identify the shifted plume in the model compared to observations. Our analysis in this study focuses on aircraft transects at ~500 m altitude, since they are within the mixed boundary layer during the daytime. Aircraft measurements represent a snapshot of the changes that occur in biogenic SOA due to anthropogenic emissions over the otherwise pristine wet-season Amazon.

Data availability

All data analyzed during the current study are included in this published article and its Supplementary Information. Aircraft measurements during the GoAmazon2014/5 field campaign used in this study are publicly available on the Atmospheric Radiation Measurement (ARM) website: http://campaign.arm.gov/goamazon2014/observations/. Model outputs from WRF-Chem that are used to generate figures in this study are available from the corresponding author on reasonable request.

References
Seinfeld, J. H. et al. Improving our fundamental understanding of the role of aerosol-cloud interactions in the climate system. Proc. Natl Acad. Sci. USA 113, 5781–5790 (2016).
Stevens, B. & Feingold, G. Untangling aerosol effects on clouds and precipitation in a buffered system. Nature 461, 607–613 (2009).
Stocker, T. F. et al. IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2013).
Bond, T. C. & Bergstrom, R. W. Light absorption by carbonaceous particles: an investigative review. Aerosol Sci. Technol. 40, 27–67 (2006).
Murphy, D. M. et al. Single-particle mass spectrometry of tropospheric aerosol particles. J. Geophys. Res.-Atmos. 111, D23S32 (2006).
Zhang, Q. et al. Ubiquity and dominance of oxygenated species in organic aerosols in anthropogenically-influenced Northern Hemisphere midlatitudes. Geophys. Res. Lett. 34, L13801 (2007).
Pöschl, U. et al. Rainforest aerosols as biogenic nuclei of clouds and precipitation in the Amazon. Science 329, 1513–1516 (2010).
Zhang, H. et al. Monoterpenes are the largest source of summertime organic aerosol in the southeastern United States. Proc. Natl Acad. Sci. USA 115, 2038–2043 (2018).
Carlton, A. G., Pinder, R. W., Bhave, P. V. & Pouliot, G. A. To what extent can biogenic SOA be controlled? Environ. Sci. Technol. 44, 3376–3380 (2010).
Goldstein, A. H., Koven, C. D., Heald, C. L. & Fung, I. Y. Biogenic carbon and anthropogenic pollutants combine to form a cooling haze over the southeastern United States. Proc. Natl Acad. Sci. USA 106, 8835–8840 (2009).
Shilling, J. E. et al. Enhanced SOA formation from mixed anthropogenic and biogenic emissions during the CARES campaign. Atmos. Chem. Phys. 13, 2091–2113 (2013).
Weber, R. J. et al. A study of secondary organic aerosol formation in the anthropogenic-influenced southeastern United States. J. Geophys. Res.-Atmos. 112, D13302 (2007).
Hoyle, C. R. et al. A review of the anthropogenic influence on biogenic secondary organic aerosol. Atmos. Chem. Phys. 11, 321–343 (2011).
Spracklen, D. V. et al. Aerosol mass spectrometer constraint on the global secondary organic aerosol budget. Atmos. Chem. Phys. 11, 12109–12136 (2011).
Carlton, A. G., Wiedinmyer, C. & Kroll, J. H. A review of Secondary Organic Aerosol (SOA) formation from isoprene. Atmos. Chem. Phys. 9, 4987–5005 (2009).
Hallquist, M. et al. The formation, properties and impact of secondary organic aerosol: current and emerging issues. Atmos. Chem. Phys. 9, 5155–5236 (2009).
Kroll, J. H. & Seinfeld, J. H. Chemistry of secondary organic aerosol: formation and evolution of low-volatility organics in the atmosphere. Atmos. Environ. 42, 3593–3624 (2008).
Kamens, R. M., Gery, M. W., Jeffries, H. E., Jackson, M. & Cole, E. I. Ozone–isoprene reactions: product formation and aerosol potential. Int. J. Chem. Kinet. 14, 955–975 (1982).
Blando, J. D. & Turpin, B. J. Secondary organic aerosol formation in cloud and fog droplets: a literature evaluation of plausibility. Atmos. Environ. 34, 1623–1632 (2000).
Ervens, B., Turpin, B. J. & Weber, R. J. Secondary organic aerosol formation in cloud droplets and aqueous particles (aqSOA): a review of laboratory, field and model studies. Atmos. Chem. Phys. 11, 11069–11102 (2011).
McNeill, V. F. Aqueous organic chemistry in the atmosphere: sources and chemical processing of organic aerosols. Environ. Sci. Technol. 49, 1237–1244 (2015).
Budisulistiorini, S. H. et al. Real-time continuous characterization of secondary organic aerosol derived from isoprene epoxydiols in Downtown Atlanta, Georgia, using the Aerodyne aerosol chemical speciation monitor. Environ. Sci. Technol. 47, 5686–5694 (2013).
de Sá, S. S. et al. Influence of urban pollution on the production of organic particulate matter from isoprene epoxydiols in central Amazonia. Atmos. Chem. Phys. 17, 6611–6629 (2017).
Gaston, C. J. et al. Reactive uptake of an isoprene-derived epoxydiol to submicron aerosol particles. Environ. Sci. Technol. 48, 11178–11186 (2014).
Hu, W. W. et al. Characterization of a real-time tracer for isoprene epoxydiols-derived secondary organic aerosol (IEPOX-SOA) from aerosol mass spectrometer measurements. Atmos. Chem. Phys. 15, 11807–11833 (2015).
Isaacman-VanWertz, G. et al. Ambient gas-particle partitioning of tracers for biogenic oxidation. Environ. Sci. Technol. https://doi.org/10.1021/acs.est.6b01674 (2016).
Lopez-Hilfiker, F. D. et al. Molecular composition and volatility of organic aerosol in the southeastern U.S.: implications for IEPOX derived SOA. Environ. Sci. Technol. 50, 2200–2209 (2016).
Paulot, F. et al. Unexpected epoxide formation in the gas-phase photooxidation of isoprene. Science 325, 730–733 (2009).
Surratt, J. D. et al. Reactive intermediates revealed in secondary organic aerosol formation from isoprene. Proc. Natl Acad. Sci. USA 107, 6640–6645 (2010).
Shrivastava, M. et al. Recent advances in understanding secondary organic aerosol: implications for global climate forcing. Rev. Geophys. 55, 509–559 (2017).
Martin, S. T. et al. Introduction: observations and modeling of the green ocean Amazon (GoAmazon2014/5). Atmos. Chem. Phys. 16, 4785–4797 (2016).
Martin, S. T. et al. The Green Ocean Amazon experiment (GoAmazon2014/5) observes pollution affecting gases, aerosols, clouds, and rainfall over the rain forest. Bull. Am. Meteorol. Soc. 98, 981–997 (2017).
Palm, B. B. et al. Secondary organic aerosol formation from ambient air in an oxidation flow reactor in central Amazonia. Atmos. Chem. Phys. 18, 467–493 (2018).
Liu, Y. et al. Isoprene photo-oxidation products quantify the effect of pollution on hydroxyl radicals over Amazonia. Sci. Adv. 4, eaar2547 (2018).
de Sá, S. S. et al. Urban influence on the concentration and composition of submicron particulate matter in central Amazonia. Atmos. Chem. Phys. 18, 12185–12206 (2018).
Liu, Y. et al. Isoprene photochemistry over the Amazon rainforest. Proc. Natl Acad. Sci. USA 113, 6125–6130 (2016).
Shilling, J. E. et al. Aircraft observations of the chemical composition and aging of aerosol in the Manaus urban plume during GoAmazon 2014/5. Atmos. Chem. Phys. 18, 10773–10797 (2018).
Gordon, H. et al. Reduced anthropogenic aerosol radiative forcing caused by biogenic new particle formation. Proc. Natl Acad. Sci. USA 113, 12053–12058 (2016).
Fast, J. D. et al. Evolution of ozone, particulates, and aerosol direct radiative forcing in the vicinity of Houston using a fully coupled meteorology-chemistry-aerosol model. J. Geophys. Res.-Atmos. 111, D21305 (2006).
Grell, G. A. et al. Fully coupled "online" chemistry within the WRF model. Atmos. Environ. 39, 6957–6975 (2005).
Medeiros, A. S. S. et al. Power plant fuel switching and air quality in a tropical, forested environment. Atmos. Chem. Phys. 17, 8987–8998 (2017).
Woo, J. L. & McNeill, V. F. simpleGAMMA v1.0—a reduced model of secondary organic aerosol formation in the aqueous aerosol phase (aaSOA). Geosci. Model Dev. 8, 1821–1829 (2015).
Zaveri, R. A., Easter, R. C., Fast, J. D. & Peters, L. K. Model for simulating aerosol interactions and chemistry (MOSAIC). J. Geophys. Res.-Atmos. 113, D13204 (2008).
Rohrer, F. et al. Maximum efficiency in the hydroxyl-radical-based self-cleansing of the troposphere. Nat. Geosci. 7, 559–563 (2014).
Feiner, P. A. et al. Testing atmospheric oxidation in an Alabama forest. J. Atmos. Sci. 73, 4699–4710 (2016).
Perring, A. E., Pusede, S. E. & Cohen, R. C. An observational perspective on the atmospheric impacts of alkyl and multifunctional nitrates on ozone and secondary organic aerosol. Chem. Rev. 113, 5848–5870 (2013).
Liu, Y. et al. Isoprene photo-oxidation products quantify the effect of pollution on hydroxyl radicals over Amazonia. Sci. Adv. 4, eaar2547 (2018).
Abou Rafee, S. A. et al. Contributions of mobile, stationary and biogenic sources to air pollution in the Amazon rainforest: a numerical study with the WRF-Chem model. Atmos. Chem. Phys. 17, 7977–7995 (2017).
Pye, H. O. T., Chan, A. W. H., Barkley, M. P. & Seinfeld, J. H. Global modeling of organic aerosol: the importance of reactive nitrogen (NOx and NO3). Atmos. Chem. Phys. 10, 11261–11276 (2010).
Hoyle, C. R., Myhre, G., Berntsen, T. K. & Isaksen, I. S. A. Anthropogenic influence on SOA and the resulting radiative forcing. Atmos. Chem. Phys. 9, 2715–2728 (2009).
Guenther, A. B. et al. The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1): an extended and updated framework for modeling biogenic emissions. Geosci. Model Dev. 5, 1471–1492 (2012).
Gu, D. et al. Airborne observations reveal elevational gradient in tropical forest isoprene emissions. Nat. Commun. 8, 15541 (2017).
Alves, E. G. et al. Seasonality of isoprenoid emissions from a primary rainforest in central Amazonia. Atmos. Chem. Phys. 16, 3903–3925 (2016).
Kesselmeier, J., Guenther, A., Hoffmann, T., Piedade, M. T. & Warnke, J. Natural volatile organic compound emissions from plants and their roles in oxidant balance and particle formation. In Amazonia and Global Change (American Geophysical Union, 2013).
Yee, L. D. et al. Observations of sesquiterpenes and their oxidation products in central Amazonia during the wet and dry seasons. Atmos. Chem. Phys. https://doi.org/10.5194/acp-2018-191 (2018).
Ng, N. L. et al. Effect of NOx level on secondary organic aerosol (SOA) formation from the photooxidation of terpenes. Atmos. Chem. Phys. 7, 5159–5174 (2007).
Liu, J. et al. Efficient isoprene secondary organic aerosol formation from a non-IEPOX pathway. Environ. Sci. Technol. 50, 9872–9880 (2016).
D'Ambro, E. L. et al. Isomerization of second-generation isoprene peroxy radicals: epoxide formation and implications for secondary organic aerosol yields. Environ. Sci. Technol. 51, 4978–4987 (2017).
Chan, A. et al. Role of aldehyde chemistry and NOx concentrations in secondary organic aerosol formation. Atmos. Chem. Phys. 10, 7169–7188 (2010).
Kroll, J. H., Ng, N. L., Murphy, S. M., Flagan, R. C. & Seinfeld, J. H. Secondary organic aerosol formation from isoprene photooxidation. Environ. Sci. Technol. 40, 1869–1877 (2006).
Davidson, E. A. et al. The Amazon basin in transition. Nature 481, 321–328 (2012).
Tsigaridis, K. et al. The AeroCom evaluation and intercomparison of organic aerosol in global models. Atmos. Chem. Phys. 14, 10845–10895 (2014).
Beck, V. et al. WRF-Chem simulations in the Amazon region during wet and dry season transitions: evaluation of methane models and wetland inundation maps. Atmos. Chem. Phys. 13, 7961–7982 (2013).
Saha, S. et al. The NCEP Climate Forecast System Reanalysis. Bull. Am. Meteorol. Soc. 91, 1015–1057 (2010).
Hu, Z. Y. et al. Trans-Pacific transport and evolution of aerosols: evaluation of quasi-global WRF-Chem simulation with multiple observations. Geosci. Model Dev. 9, 1725–1746 (2016).
Atmospheric Radiation Measurement (ARM) Climate Research Facility. Surface Meteorological Instrumentation (MET), 2014-01-01 to 2015-12-01, ARM Mobile Facility (MAO) Manacapuru, Amazonas, Brazil; AMF1 (M1). ARM Data Archive: Oak Ridge, Tennessee, USA. Accessed 2016-01-01 at https://doi.org/10.5439/1025220 (2013, updated hourly).
Atmospheric Radiation Measurement (ARM) Climate Research Facility. Radiative Flux Analysis (RADFLUX1LONG). ARM Data Archive: Oak Ridge, Tennessee, USA. Accessed 2017-03-06 at https://doi.org/10.5439/1157585 (2013, updated hourly).
Atmospheric Radiation Measurement (ARM) Climate Research Facility. Doppler Lidar Profiles (DLPROFWSTATS4NEWS), 2014-01-01 to 2015-12-01, ARM Mobile Facility (MAO) Manacapuru, Amazonas, Brazil; AMF1 (M1). ARM Data Archive: Oak Ridge, Tennessee, USA. Accessed 2016-04-08 (2013, updated hourly).
Tucker, S. C. et al. Doppler lidar estimation of mixing height using turbulence, shear, and aerosol profiles. J. Atmos. Ocean Technol. 26, 673–688 (2009).
Andrade, M. d. F. et al. Air quality forecasting system for Southeastern Brazil. Front. Environ. Sci. 3, https://doi.org/10.3389/fenvs.2015.00009 (2015).
Janssens-Maenhout, G. et al. HTAP_v2.2: a mosaic of regional and global emission grid maps for 2008 and 2010 to study hemispheric transport of air pollution. Atmos. Chem. Phys. 15, 11411–11432 (2015).
Wiedinmyer, C. et al. The Fire INventory from NCAR (FINN): a high resolution global model to estimate the emissions from open burning. Geosci. Model Dev. 4, 625–641 (2011).
Zhao, C. et al. Sensitivity of biogenic volatile organic compounds to land surface parameterizations and vegetation distributions in California. Geosci. Model Dev. 9, 1959–1976 (2016).
Jathar, S. H. et al. Unspeciated organic emissions from combustion sources and their influence on the secondary organic aerosol budget in the United States. Proc. Natl Acad. Sci. USA 111, 10473–10478 (2014).
Robinson, A. L. et al. Rethinking organic aerosols: semivolatile emissions and photochemical aging. Science 315, 1259–1262 (2007).
Shrivastava, M., Lane, T. E., Donahue, N. M., Pandis, S. N. & Robinson, A. L. Effects of gas particle partitioning and aging of primary emissions on urban and regional organic aerosol concentrations. J. Geophys. Res.-Atmos. 113, D18301 (2008).
Rummel, U., Ammann, C., Gut, A., Meixner, F. X. & Andreae, M. O. Eddy covariance measurements of nitric oxide flux within an Amazonian rain forest. J. Geophys. Res.-Atmos. 107, 9 (2002).
Steinkamp, J. & Lawrence, M. G. Improvement and evaluation of simulated global biogenic soil NO emissions in an AC-GCM. Atmos. Chem. Phys. 11, 6063–6082 (2011).
Yienger, J. J. & Levy, H. Empirical model of global soil-biogenic NOx emissions. J. Geophys. Res.-Atmos. 100, 11447–11464 (1995).
Jardine, K. et al. Dimethyl sulfide in the Amazon rain forest. Glob. Biogeochem. Cycles 29, 19–32 (2015).
Boyd, C. M. et al. Secondary organic aerosol formation from the β-pinene+NO3 system: effect of humidity and peroxy radical fate. Atmos. Chem. Phys. 15, 7497–7522 (2015).
Chen, Q., Liu, Y. J., Donahue, N. M., Shilling, J. E. & Martin, S. T. Particle-phase chemistry of secondary organic material: modeled compared to measured O:C and H:C elemental ratios provide constraints. Environ. Sci. Technol. 45, 4763–4770 (2011).
Kleindienst, T. E., Lewandowski, M., Offenberg, J. H., Jaoui, M. & Edney, E. O. Ozone-isoprene reaction: re-examination of the formation of secondary organic aerosol. Geophys. Res. Lett. 34, L01805 (2007).
Ng, N. L. et al. Secondary organic aerosol (SOA) formation from reaction of isoprene with nitrate radicals (NO3). Atmos. Chem. Phys. 8, 4117–4140 (2008).
Shilling, J. E. et al. Particle mass yield in secondary organic aerosol formed by the dark ozonolysis of α-pinene. Atmos. Chem. Phys. 8, 2073–2088 (2008).
Ehn, M. et al. A large source of low-volatility secondary organic aerosol. Nature 506, 476–479 (2014).
Lane, T. E., Donahue, N. M. & Pandis, S. N. Effect of NOx on secondary organic aerosol concentrations. Environ. Sci. Technol. 42, 6022–6027 (2008).
Shrivastava, M. et al. Global transformation and fate of SOA: implications of low-volatility SOA and gas-phase fragmentation reactions. J. Geophys. Res.-Atmos. 120, 4169–4195 (2015).
Shrivastava, M. et al. Implications of low volatility SOA and gas-phase fragmentation reactions on SOA loadings and their spatial and temporal evolution in the atmosphere. J. Geophys. Res.-Atmos. 118, 3328–3342 (2013).
Xu, L., Kollman, M. S., Song, C., Shilling, J. E. & Ng, N. L. Effects of NOx on the volatility of secondary organic aerosol from isoprene photooxidation. Environ. Sci. Technol. 48, 2253–2262 (2014).
Loza, C. L. et al. Secondary organic aerosol yields of 12-carbon alkanes. Atmos. Chem. Phys. 14, 1423–1439 (2014).
Zhang, L. M., Gong, S. L., Padro, J. & Barrie, L. A size-segregated particle dry deposition scheme for an atmospheric aerosol module. Atmos. Environ. 35, 549–560 (2001).
Easter, R. C. et al. MIRAGE: model description and evaluation of aerosols and trace gases. J. Geophys. Res.-Atmos. 109, D20210 (2004).
Carter, W. P. L. SAPRC-99 mechanism files and associated programs and examples: http://www.cert.ucr.edu/carter/SAPRC99/, last updated 30 March 2010 (2010).
Seinfeld, J. H. & Pandis, S. N. Atmospheric Chemistry and Physics: From Air Pollution to Climate Change (Wiley-Interscience, 1998).
Budisulistiorini, S. et al. Examining the effects of anthropogenic emissions on isoprene-derived secondary organic aerosol formation during the 2013 Southern Oxidant and Aerosol Study (SOAS) at the Look Rock, Tennessee, ground site. Atmos. Chem. Phys. 15, 8871–8888 (2015).
Chan, M. N. et al. Characterization and quantification of isoprene-derived epoxydiols in ambient aerosol in the southeastern United States. Environ. Sci. Technol. 44, 4590–4596 (2010).
Eddingsaas, N. C., VanderVelde, D. G. & Wennberg, P. O. Kinetics and products of the acid-catalyzed ring-opening of atmospherically relevant butyl epoxy alcohols. J. Phys. Chem. A 114, 8106–8113 (2010).
Nguyen, T. B. et al. Organic aerosol formation from the reactive uptake of isoprene epoxydiols (IEPOX) onto non-acidified inorganic seeds. Atmos. Chem. Phys. 14, 3497–3510 (2014).
Pye, H. O. T. et al. Epoxide pathways improve model predictions of isoprene markers and reveal key role of acidity in aerosol formation. Environ. Sci. Technol. 47, 11056–11064 (2013).
Riedel, T. P. et al. Heterogeneous reactions of isoprene-derived epoxides: reaction probabilities and molar secondary organic aerosol yield estimates. Environ. Sci. Technol. Lett. 2, 38–42 (2015).

Acknowledgements

This work was supported by the U.S. Department of Energy (DOE), Office of Science, Office of Biological and Environmental Research's Atmospheric System Research (ASR) program. Dr. Shrivastava was also supported by the U.S. DOE, Office of Science, Office of Biological and Environmental Research through the Early Career Research Program. The authors thank the G-1 flight and ground crews for supporting the GoAmazon2014/5 mission. Funding for data collection onboard the G-1 aircraft and at the ground sites was provided by the Atmospheric Radiation Measurement (ARM) Climate Research Facility, a U.S. Department of Energy Office of Science user facility sponsored by the Office of Biological and Environmental Research. The Pacific Northwest National Laboratory is operated for DOE by Battelle Memorial Institute under contract DE-AC06-76RL01830. R.Y.'s support at PNNL was provided by the US Department of Energy under the GoAmazon2014/5 project (Proc. no. 13/50521-7). J.A.T. was supported through a grant from the U.S. Department of Energy Office of Science, DE-SC0018221. We acknowledge the support from the Central Office of the Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA), the Instituto Nacional de Pesquisas da Amazonia (INPA), the Instituto Nacional de Pesquisas Espaciais (INPE), and the Universidade do Estado do Amazonas (UEA and FAPEAM/GOAMAZON). P.A. was supported by FAPESP grants 2013/05014-0 and 2017/17047-0. The work was conducted under licenses 001030/2012-4 and 001262/2012-2 of the Brazilian National Council for Scientific and Technological Development (CNPq). Computational resources for the simulations were provided by the PNNL Institutional Computing (PIC) facility and EMSL (a DOE Office of Science User Facility sponsored by the Office of Biological and Environmental Research located at PNNL).

Author affiliations

Pacific Northwest National Laboratory, Richland, WA, 99352, USA: Manish Shrivastava, Larry K. Berg, Richard C. Easter, Jiwen Fan, Jerome D. Fast, Zhe Feng, Alex Guenther, Ying Liu, Sijia Lou, John E. Shilling, Rahul A. Zaveri & Alla Zelenyuk
Department of Geology and Geophysics, King Saud University, Riyadh 11451, Saudi Arabia: Meinrat O. Andreae
Scripps Institution of Oceanography, University of California San Diego, La Jolla, CA, 92093-0230, USA, and Max Planck Institute for Chemistry, P.O. Box 3060, Mainz, D-55020, Germany (additional affiliations of Meinrat O. Andreae)
Institute of Physics, University of São Paulo, São Paulo, 05508-090, Brazil: Paulo Artaxo & Henrique M. J. Barbosa
IMT Lille Douai, University of Lille, SAGE, Lille, 59000, France: Joel Brito
Meteorological Research Institute, Japan Meteorological Agency, 1-1, Nagamine, Tsukuba, 305-0052, Ibaraki, Japan: Joseph Ching
Department of Meteorology and Atmospheric Science, Penn State University, University Park, PA, 16802, USA: Jose D. Fuentes
Department of Chemistry, Aarhus University, Aarhus, 8000, Denmark: Marianne Glasius
Department of Environmental Science, Policy, and Management, University of California, Berkeley, 94720, USA: Allen H. Goldstein & Lindsay D. Yee
André Araújo, Manaus, AM, 69.060-000, Brazil Eliane Gomes Alves Institute of Atmospheric Sciences, Federal University of Alagoas, Maceió, AL, 57072-900, Brazil Helber Gomes Department of Earth System Science, University of California, Irvine, CA, 92697, USA Dasa Gu, Alex Guenther & Saewung Kim Department of Mechanical Engineering, Colorado State University, Fort Collins, 80523, USA Shantanu H. Jathar School of Engineering and Applied Sciences and Department of Earth and Planetary Sciences, Harvard University, Cambridge, MA, 02138, USA Scot T. Martin & Suzane S. de Sá Department of Chemical Engineering, Columbia University, New York, NY, 10027, USA V. Faye McNeill Amazonas State University, Center of Superior Studies of Tefé, R. Brasília, Tefé, AM, 69470000, Brazil Adan Medeiros Environmental and Climate Sciences Department, Brookhaven National Laboratory, Brookhaven, NY, 11973, USA Stephen R. Springston Amazonas State University, Superior School of Technology, Av Darcy Vargas, Manaus, AM, 69050020, Brazil R. A. F. Souza Department of Atmospheric Sciences, University of Washington, Seattle, 98195, USA Joel A. Thornton Department of Civil and Environmental Engineering, Virginia Tech, Blacksburg, VA, 24061, USA Gabriel Isaacman-VanWertz Department of Atmospheric Sciences, Institute of Astronomy, Geophysics and Atmospheric Sciences, University of Sao Paulo, Sao Paulo, 05508090, Brazil Rita Ynoue School of Earth and Space Sciences, University of Science and Technology of China, Hefei, 230026, China Chun Zhao Manish Shrivastava Paulo Artaxo Henrique M. J. Barbosa Larry K. Berg Richard C. Easter Jiwen Fan Jerome D. Fast Zhe Feng Allen H. Goldstein Dasa Gu Alex Guenther Saewung Kim Ying Liu Sijia Lou Scot T. Martin Suzane S. de Sá John E. Shilling Lindsay D. Yee Rahul A. Zaveri Alla Zelenyuk M.S., S.T.M. and A.Z. designed research, M.S., S.L., J.E.S., S.R.S., Z.F., J.C., R.Y., Y.L. and C.Z. processed data and performed analyses, and M.S., M.O.A., P.A., H.M.J.B., L.K.B., J.B., R.C.E., J.F., J.D.F., Z.F., J.D.F., M.G., A.H.G., E.G.A., H.G., D.G., A.G., S.H.J., S.K., S.T.M., V.F.M., A.M., S.S.S., J.E.S., R.A.F.S., J.A.T., G.I.V.W., L.Y., R.A.Z., A.Z. and C.Z. wrote the paper. Correspondence to Manish Shrivastava. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Shrivastava, M., Andreae, M.O., Artaxo, P. et al. Urban pollution greatly enhances formation of natural aerosols over the Amazon rainforest. Nat Commun 10, 1046 (2019). https://doi.org/10.1038/s41467-019-08909-4 Review of Secondary Aerosol Formation and Its Contribution in Air Pollution Load of Delhi NCR Manisha Mishra Sunil Gulia Umesh C. Kulshrestha Water, Air, & Soil Pollution (2023) James Weber Scott Archer-Nicholls Alex T. Archibald Suppression of anthropogenic secondary organic aerosol formation by isoprene Kangwei Li Xin Zhang Zhipeng Bai npj Climate and Atmospheric Science (2022) Masayuki Takeuchi Thomas Berkemeier Nga Lee Ng Extensive urban air pollution footprint evidenced by submicron organic aerosols molecular composition Christian Mark Salvador Charles C.-K. Chou T.-C. Su
CommonCrawl
Model theory of monadic predicate logic with the infinity quantifier

Facundo Carreiro, Alessandro Facchini, Yde Venema & Fabio Zanasi

Archive for Mathematical Logic (2021)

This paper establishes model-theoretic properties of \(\texttt {M} \texttt {E} ^{\infty }\), a variation of monadic first-order logic that features the generalised quantifier \(\exists ^\infty \) ('there are infinitely many'). We will also prove analogous versions of these results in the simpler setting of monadic first-order logic with and without equality (\(\texttt {M} \texttt {E} \) and \(\texttt {M} \), respectively). For each logic \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\) we will show the following. We provide syntactically defined fragments of \(\texttt {L} \) characterising four different semantic properties of \(\texttt {L} \)-sentences: (1) being monotone and (2) (Scott) continuous in a given set of monadic predicates; (3) having truth preserved under taking submodels or (4) being truth invariant under taking quotients. In each case, we produce an effectively defined map that translates an arbitrary sentence \(\varphi \) to a sentence \(\varphi ^\mathsf{p}\) belonging to the corresponding syntactic fragment, with the property that \(\varphi \) is equivalent to \(\varphi ^\mathsf{p}\) precisely when it has the associated semantic property. As a corollary of our developments, we obtain that the four semantic properties above are decidable for \(\texttt {L} \)-sentences.

Model theory investigates the relationship between formal languages and semantics. From this perspective, among the most important results are the so-called preservation or characterisation theorems, linking the syntactic shape of formulas to some semantic property. Typically, these results characterise a certain language as the fragment of another, richer language consisting of those formulas that satisfy the given model-theoretic property. In the case of classical first-order logic, notable examples are the Łoś–Tarski theorem, stating that a first-order formula is equivalent to a universal one if and only if the class of its models is closed under taking submodels, and Lyndon's theorem, stating that a first-order formula is equivalent to one in which each occurrence of a relation symbol R is positive if and only if it is monotone with respect to the interpretation of R (see e.g. [17]). The aim of this paper is to show that similar results also hold for the predicate logic \(\texttt {M} \texttt {E} ^{\infty }\) that allows only monadic predicate symbols and no function symbols, but that goes beyond standard first-order logic with equality in that it features the generalised quantifier 'there are infinitely many'. Generalised quantifiers were introduced by Mostowski in [24], and in a more general sense by Lindström in [21], the main motivation being the observation that the standard first-order quantifiers 'there are some' and 'for all' are not sufficient for expressing some basic mathematical concepts.
Since then, they have attracted a lot of interest, so much so that their study constitutes nowadays a well-established field of logic with important ramifications in disciplines such as linguistics and computer science. Despite the fact that the absence of polyadic predicates clearly restricts its expressive power, monadic first-order logic (with identity) displays nice properties, both from a computational and a model-theoretic point of view. Indeed, the satisfiability problem becomes decidable [4, 22], and, in addition to an immediate application of the Łoś–Tarski and Lyndon theorems, one can also obtain a Lindström-like characterisation result [26]. Moreover, adding the possibility of quantifying over predicates does not increase the expressiveness of the language [2], meaning that when restricted to monadic predicates, monadic second-order logic collapses into first-order logic. Concerning monadic first-order logic extended with an infinity quantifier, Mostowski [24] already proved a decidability result, whereas from work of Väänänen [27] we know that its expressive power coincides with that of weak monadic second-order logic restricted to monadic predicates, that is, monadic first-order logic extended with a second-order quantifier ranging over finite sets.

Characterisation results and proof outline

A characterisation result involves some fragment \(\texttt {L} _{\mathfrak {P}}\) of a given yardstick logic \(\texttt {L} \), related to a certain semantic property \(\mathfrak {P}\). It is usually formulated as

$$\begin{aligned} \varphi \in \texttt {L} \text { has the property } \mathfrak {P} \text { iff } \varphi \text { is equivalent to some } \varphi ' \in \texttt {L} _{\mathfrak {P}}. \end{aligned}$$

(1)

In this work, our main yardstick logic will be \(\texttt {M} \texttt {E} ^{\infty }\). Table 1 summarises the semantic properties (\(\mathfrak {P}\)) we are going to consider, the corresponding expressively complete fragment (\(\texttt {L} _{\mathfrak {P}}\)) and the actual characterisation theorem.

[Table 1: A summary of our characterisation theorems.]

The proof of each characterisation theorem is composed of two parts. The first, simpler one concerns the claim that each sentence in the fragment satisfies the property in question. It is usually proved by a straightforward induction on the structure of the formula. The other direction is the expressive completeness statement, stating that within the considered logic, the fragment is expressively complete for the property. Its verification generally requires more effort. In this paper, we will actually verify a stronger expressive completeness statement. Namely, for each semantic property \(\mathfrak {P}\) and corresponding fragment \(\texttt {L} _{\mathfrak {P}}\) from Table 1, we are going to provide an effective translation operation \((\cdot )^\mathsf{p}: \texttt {M} \texttt {E} ^{\infty }\rightarrow \texttt {L} _{\mathfrak {P}}\) such that

$$\begin{aligned} \text {if }\varphi \in \texttt {M} \texttt {E} ^{\infty }\text { has the property } \mathfrak {P} \text { then } \varphi \text { is equivalent to } \varphi ^\mathsf{p}. \end{aligned}$$

(2)

The proof of each instance of (2) will follow a uniform pattern, analogous to the one employed to obtain similar results in the context of the modal \(\mu \)-calculus [11, 15, 19]. The crux of the adopted proof method is the following.
Extending known results on monadic first-order logic and using an appropriate version of Ehrenfeucht–Fraïssé games, for each sentence \(\varphi \) in \( \texttt {M} \texttt {E} ^{\infty }\) it is possible to compute a logically equivalent sentence in basic normal form. Such normal forms will take the shape of a disjunction \(\bigvee \nabla _{\texttt {M} \texttt {E} ^{\infty }}\), where each disjunct \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}\) characterises a class of models of \(\varphi \) satisfying the same set of \({\texttt {M} \texttt {E} ^{\infty }}\)-sentences of equal quantifier rank as \(\varphi \). Based on this, it will therefore be enough to define an effective translation \((\cdot )^\mathsf{p}\) for sentences in normal form, point-wise in each disjunct \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}\), and then verify that it indeed satisfies (2). As a corollary of the employed proof method, we obtain effective normal forms for sentences satisfying the considered property. In addition to \(\texttt {M} \texttt {E} ^{\infty }\), in this paper we also consider monadic first-order logic with and without equality, denoted by \(\texttt {M} \texttt {E} \) and \(\texttt {M} \), respectively. Table 2 shows a summary of the expressive completeness and normal form results presented in this paper.

[Table 2: An overview of our expressive completeness and normal form results.]

Since the satisfiability problem for \(\texttt {M} \texttt {E} ^{\infty }\) is decidable and the translation \((\cdot )^\mathsf{p}\) is effectively computable, we obtain, as an immediate corollary of (2), that for each property \(\mathfrak {P}\) listed in Table 1

$$\begin{aligned} \text {the problem whether an } \texttt {M} \texttt {E} ^{\infty }\text {-sentence satisfies property } \mathfrak {P} \text { or not is decidable.} \end{aligned}$$

(3)

We consider these decidability results as a byproduct of our characterisation results, and we do not explore, for instance, computational complexity questions. Addressing these would involve a study of the complexity of the procedure that brings a formula \(\varphi \) into normal form and then translates it into a formula \((\varphi )^\mathsf{p}\) of the required shape. There are easier ways to prove the mentioned decidability results, and these may be useful as well to obtain complexity results.

Application of obtained results: the companion paper

Our original motivation to study characterisation results for these logics stems from our interest in so-called parity automata: these are finite-state systems that play a crucial role in obtaining decidability and expressiveness results in fixpoint logics and monadic second-order logics over trees and labelled transition systems (see e.g. [30]). Parity automata are specified by a finite set of states A, a distinguished initial state \(a \in A\), a function \(\Omega \) assigning to each state a priority (a natural number), and a transition map \(\varDelta \). In various interesting cases, the co-domain of this transition map is given by a monadic logic in which the set of (monadic) predicates coincides with A. Hence, each monadic logic \(\texttt {L} \) induces its own class of automata \(\texttt {Aut} (\texttt {L} )\).
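To fix intuitions, the data of such an automaton can be transcribed directly. The following minimal Python sketch is ours and purely illustrative (it is not taken from [19] or [32]); it represents the one-step formulas in the co-domain of \(\varDelta \) simply as strings.

from dataclasses import dataclass
from typing import Dict, FrozenSet

# A sketch of the data (A, a, Omega, Delta) of a parity automaton, assuming
# one-step formulas over the names in A are represented as plain strings.
# All names here are illustrative, not from the literature.
@dataclass
class ParityAutomaton:
    states: FrozenSet[str]     # the finite set A of states
    initial: str               # the distinguished initial state a in A
    priority: Dict[str, int]   # the priority map Omega: A -> N
    delta: Dict[str, str]      # the transition map Delta: A -> L(A)

# Example: a two-state automaton whose transitions are one-step sentences
# of positive monadic first-order logic over the predicates {'a0', 'a1'}.
aut = ParityAutomaton(
    states=frozenset({'a0', 'a1'}),
    initial='a0',
    priority={'a0': 0, 'a1': 1},
    delta={'a0': 'exists x. a1(x)', 'a1': 'forall x. (a0(x) or a1(x))'},
)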
A landmark result in this area is Janin and Walukiewicz's theorem stating that the bisimulation-invariant fragment of monadic second-order logic coincides with the modal \(\mu \)-calculus [19], and the proof of this result is an interesting mix of the theory of parity automata and the model theory of monadic predicate logic. First, normal form results and characterisation theorems are used to verify that (on tree models) \(\texttt {Aut} (\texttt {Pos} _{}(\texttt {M} \texttt {E} ))\) is the class of automata characterising the expressive power of monadic second-order logic [32], whereas \(\texttt {Aut} (\texttt {Pos} _{}(\texttt {M} ))\) corresponds to the modal \(\mu \)-calculus [18], where \(\texttt {Pos} _{}(\texttt {L} )\) denotes the positive fragment of the monadic logic \(\texttt {L} \). Then, Janin and Walukiewicz's expressiveness theorem is a consequence of these automata characterisations and the fact that positive monadic first-order logic without equality provides the quotient-invariant fragment of positive monadic first-order logic with equality (see Theorem 7). In our companion paper [10], among other things we provide a Janin–Walukiewicz type characterisation result for weak monadic second-order logic. Analogously to the case of full monadic second-order logic discussed previously, our proof crucially employs normal form results and characterisation theorems for \(\texttt {M} \texttt {E} ^{\infty }\), as listed in Tables 1 and 2. Results in this paper first appeared in the first author's PhD thesis ([8, Chapter 5]); this journal version largely expands material first published as part of the conference papers [9, 14]. In particular, the whole of Sect. 6 below contains new results.

In this section we provide the basic definitions of the monadic predicate logics that we study in this paper. Throughout this paper we fix a finite set A of objects that we shall refer to as (monadic) predicate symbols or names. We shall also assume an infinite set \(\mathsf {iVar}\) of individual variables.

Definition 1 Given a finite set A we define a (monadic) model to be a pair \(\mathbb {D}= (D,V )\) consisting of a set D, which we call the domain of \(\mathbb {D}\), and an interpretation or valuation \(V : A \rightarrow \wp (D)\). The class of all models will be denoted by \(\mathfrak {M}\).

Note that we make the somewhat unusual choice of allowing the domain of a monadic model to be empty. In view of the applications of our results to automata theory (see Sect. 1) this choice is very natural, even if it means that some of our proofs here become more laborious in requiring an extra check. Observe that there is exactly one monadic model based on the empty domain; we shall denote this model as \({\mathbb {D}_\varnothing }{:=}(\varnothing , \varnothing )\). Observe that a valuation \(V: A \rightarrow \wp (D)\) can equivalently be presented via its associated colouring \(V^{\flat }:D \rightarrow \wp (A)\) given by

$$\begin{aligned} V^{\flat }(d) {:=}\{a \in A \mid d \in V(a)\}. \end{aligned}$$

We will use these perspectives interchangeably, calling the set \(V^{\flat }(d) \subseteq A\) the colour or type of d. In case \(D = \varnothing \), \(V^{\flat }\) is simply the empty map.
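As a running illustration (ours, not part of the paper), a finite monadic model and its associated colouring can be set up in a few lines of Python:

from typing import Dict, FrozenSet, Set

# A toy instance of Definition 1: a monadic model (D, V) over A = {a, b},
# together with the colouring V_flat(d) = { a in A | d in V(a) }.
A: Set[str] = {'a', 'b'}
D: Set[int] = {0, 1, 2}
V: Dict[str, Set[int]] = {'a': {0, 1}, 'b': {1}}

def colouring(d: int) -> FrozenSet[str]:
    """The colour (type) of the element d."""
    return frozenset(a for a in A if d in V[a])

assert colouring(1) == frozenset({'a', 'b'})
assert colouring(2) == frozenset()   # element 2 realises the empty type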
In this paper we study three languages of monadic predicate logic: the languages \(\texttt {M} \texttt {E} \) and \(\texttt {M} \) of monadic first-order logic with and without equality, respectively, and the extension \(\texttt {M} \texttt {E} ^{\infty }\) of \(\texttt {M} \texttt {E} \) with the generalised quantifiers \(\exists ^\infty \) and \(\forall ^\infty \). Probably the most concise definition of the full language of monadic predicate logic would be given by the following grammar:

$$\begin{aligned} \varphi \mathrel {::=}a(x) \mid x \approx y \mid \lnot \varphi \mid (\varphi \vee \varphi ) \mid \exists x.\varphi \mid \exists ^\infty x.\varphi , \end{aligned}$$

where \(a \in A\) and x and y belong to the set \(\mathsf {iVar}\) of individual variables. In this set-up we would need to introduce the quantifiers \(\forall \) and \(\forall ^\infty \) as abbreviations of \(\lnot \exists \lnot \) and \(\lnot \exists ^\infty \lnot \), respectively. However, for our purposes it will be more convenient to work with a variant of this language where all formulas are in negation normal form; that is, we only permit the occurrence of the negation symbol \(\lnot \) in front of an atomic formula. In addition, for technical reasons we will add \(\bot \) and \(\top \) as constants, and we will write \(\lnot (x \approx y)\) as \(x \not \approx y\). Thus we arrive at the following definition of our syntax. The set \(\texttt {M} \texttt {E} ^{\infty }(A)\) of monadic formulas is given by the following grammar:

$$\begin{aligned} \varphi&\mathrel {::=}\top \mid \bot \mid a(x) \mid \lnot a(x) \mid x \approx y \mid x \not \approx y \mid (\varphi \vee \varphi ) \mid (\varphi \wedge \varphi ) \\&\quad \quad \mid \exists x.\varphi \mid \forall x.\varphi \mid \exists ^\infty x.\varphi \mid \forall ^\infty x.\varphi \end{aligned}$$

where \(a \in A\) and \(x,y\in \mathsf {iVar}\). The language \(\texttt {M} \texttt {E} (A)\) of monadic first-order logic with equality is defined as the fragment of \(\texttt {M} \texttt {E} ^{\infty }(A)\) where occurrences of the generalised quantifiers \(\exists ^\infty \) and \(\forall ^\infty \) are not allowed:

$$\begin{aligned} \varphi \mathrel {::=}\top \mid \bot \mid a(x) \mid \lnot a(x) \mid x \approx y \mid x \not \approx y \mid (\varphi \vee \varphi ) \mid (\varphi \wedge \varphi ) \mid \exists x.\varphi \mid \forall x.\varphi \end{aligned}$$

Finally, the language \(\texttt {M} (A)\) of monadic first-order logic is the equality-free fragment of \(\texttt {M} \texttt {E} (A)\); that is, atomic formulas of the form \(x \approx y\) and \(x \not \approx y\) are not permitted either:

$$\begin{aligned} \varphi \mathrel {::=}\top \mid \bot \mid a(x) \mid \lnot a(x) \mid (\varphi \vee \varphi ) \mid (\varphi \wedge \varphi ) \mid \exists x.\varphi \mid \forall x.\varphi \end{aligned}$$

In all three languages we use the standard definition of free and bound variables, and we call a formula a sentence if it has no free variables. In the sequel we will often use the symbol \(\texttt {L} \) to denote either of the languages \(\texttt {M} \), \(\texttt {M} \texttt {E} \) or \(\texttt {M} \texttt {E} ^{\infty }\). For each of the languages \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\), we define the positive fragment \(\texttt {L} ^{+}\) of \(\texttt {L} \) as the language obtained by almost the same grammar as for \(\texttt {L} \), but with the difference that we do not allow negative formulas of the form \(\lnot a(x)\).
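For later illustrations it is convenient to have a concrete representation of this negation normal form syntax. The following Python sketch is ours (the constructor names are not the paper's) and will be reused in the examples below.

from dataclasses import dataclass
from typing import Union

# An abstract syntax tree for ME^infinity(A) in negation normal form.
# The Python booleans True/False play the role of the constants ⊤ and ⊥.
@dataclass
class Pred:              # a(x) if positive, ¬a(x) otherwise
    name: str
    var: str
    positive: bool = True

@dataclass
class Eq:                # x ≈ y if positive, x ≉ y otherwise
    left: str
    right: str
    positive: bool = True

@dataclass
class Bin:               # binary connective
    op: str              # 'and' or 'or'
    left: 'Formula'
    right: 'Formula'

@dataclass
class Quant:             # one of the four quantifiers
    q: str               # 'E', 'A', 'Einf' or 'Ainf'
    var: str
    body: 'Formula'

Formula = Union[bool, Pred, Eq, Bin, Quant]

# Example: the ME^infinity-sentence  ∃^∞ x. (a(x) ∧ ¬b(x)).
phi_example = Quant('Einf', 'x',
                    Bin('and', Pred('a', 'x'), Pred('b', 'x', positive=False)))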
To define the semantics of these languages we need to make a case distinction. For non-empty models we use the standard truth definition, which applies to arbitrary formulas since we can introduce the notion of an assignment, mapping individual variables to elements of the domain. In the case of the empty model, however, it is not possible to define assignments, so here we restrict the truth definition to sentences. The meaning of sentences in the languages \(\texttt {M} , \texttt {M} \texttt {E} \) and \(\texttt {M} \texttt {E} ^{\infty }\) is given in the form of a truth relation \(\models \). To define this truth relation on a model \(\mathbb {D}= (D ,V)\), we distinguish cases.

Case \(D=\varnothing \): We define the truth relation \(\models \) on the empty model \({\mathbb {D}_\varnothing }\) for all formulas that are Boolean combinations of sentences of the form \(Qx. \varphi \), where \(Q \in \{ \exists , \exists ^\infty , \forall , \forall ^\infty \}\) is a quantifier. The definition is by induction on the complexity of such sentences; the "atomic" clauses, where the sentence is of the form \(Qx. \varphi \), are as follows:

$$\begin{aligned} \begin{array}{lll} {\mathbb {D}_\varnothing }\not \models Qx. \varphi &{} \text {if}\quad Q \in \{ \exists , \exists ^\infty \},&{} \\ {\mathbb {D}_\varnothing }\models Qx. \varphi &{} \text {if}\quad Q \in \{ \forall , \forall ^\infty \}.&{} \end{array} \end{aligned}$$

The clauses for the Boolean connectives are standard.

Case \(D \ne \varnothing \): In the case of a non-empty model \(\mathbb {D}\), we extend the truth relation to arbitrary formulas in a standard way, involving assignments of individual variables to elements of the domain. That is, given a model \(\mathbb {D}= (D,V)\), an assignment \(g :\mathsf {iVar}\rightarrow D\) and a formula \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\) we define the truth relation \(\models \) by a straightforward induction on the complexity of \(\varphi \). Below we explicitly provide the clauses of the quantifiers:

$$\begin{aligned}\begin{array}{llll} &{}\mathbb {D},g \models \exists x.\varphi &{} \text {iff}\quad \mathbb {D},g [x\mapsto d] \models \varphi \text { for some }d\in D,\\ &{}\mathbb {D},g \models \forall x.\varphi &{} \text {iff}\quad \mathbb {D},g [x\mapsto d] \models \varphi \text { for all }d\in D,\\ &{}\mathbb {D},g \models \exists ^\infty x.\varphi &{} \text {iff}\quad \mathbb {D},g [x\mapsto d] \models \varphi \text { for infinitely many }d\in D,\\ &{}\mathbb {D},g \models \forall ^\infty x.\varphi &{} \text {iff}\quad \mathbb {D},g [x\mapsto d] \models \varphi \text { for all but at most finitely many }d\in D. \end{array} \end{aligned}$$

The clauses for the atomic formulas and for the Boolean connectives are standard. In what follows, when discussing the truth of \(\varphi \) on the empty model, we always implicitly assume that \(\varphi \) is a sentence. As mentioned in the introduction, generalised quantifiers such as \(\exists ^\infty \) and \(\forall ^\infty \) were introduced by Mostowski [24], who proved the decidability of the language obtained by extending \(\texttt {M} \) with such quantifiers. The decidability of the full language \(\texttt {M} \texttt {E} ^{\infty }\) was then proved by Slomson in [25].
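Restricted to non-empty finite models, the truth definition can be transcribed directly on top of the syntax trees sketched above; on a finite domain \(\exists ^\infty \) is never satisfied and \(\forall ^\infty \) always is, as the last two clauses make explicit. This is an illustrative toy of ours, not an implementation of the decision procedures of [24, 25].

def holds(phi, D, V, g):
    """Truth of phi in the finite model (D, V) under the assignment g (a dict
    from variables to elements); a direct transcription of the clauses above."""
    if isinstance(phi, bool):
        return phi
    if isinstance(phi, Pred):
        return (g[phi.var] in V[phi.name]) == phi.positive
    if isinstance(phi, Eq):
        return (g[phi.left] == g[phi.right]) == phi.positive
    if isinstance(phi, Bin):
        l, r = holds(phi.left, D, V, g), holds(phi.right, D, V, g)
        return (l and r) if phi.op == 'and' else (l or r)
    if isinstance(phi, Quant):
        witnesses = [d for d in D if holds(phi.body, D, V, {**g, phi.var: d})]
        if phi.q == 'E':
            return len(witnesses) > 0
        if phi.q == 'A':
            return len(witnesses) == len(D)
        if phi.q == 'Einf':
            return False   # a finite domain has no infinite set of witnesses
        if phi.q == 'Ainf':
            return True    # 'all but finitely many' holds trivially on finite D
    raise ValueError(phi)

# On the toy model (D, V) defined earlier: ∃x. b(x) holds, ∃^∞ x. b(x) does not.
assert holds(Quant('E', 'x', Pred('b', 'x')), D, V, {})
assert not holds(Quant('Einf', 'x', Pred('b', 'x')), D, V, {})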
Fact 1 For each logic \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\), the problem of whether a given \(\texttt {L} \)-sentence \(\varphi \) is satisfiable is decidable.

In the remainder of the section we fix some further definitions and notations, starting with some useful syntactic abbreviations. Given a list \(\overline{{\mathbf {y}}} = y_1\ldots y_n\) of individual variables, we use the formula

$$\begin{aligned} \text {diff}(\overline{{\mathbf {y}}}) {:=}\bigwedge _{1\le m < m^{\prime } \le n} (y_m \not \approx y_{m^{\prime }}) \end{aligned}$$

to state that the elements \(\overline{{\mathbf {y}}}\) are all distinct. An A-type is a formula of the form

$$\begin{aligned} \tau _{S}(x) {:=}\bigwedge _{a\in S}a(x) \wedge \bigwedge _{a\in A\setminus S}\lnot a(x), \end{aligned}$$

where \(S \subseteq A\). Here and elsewhere we use the convention that \(\bigwedge \varnothing \!=\! \top \) (and \(\bigvee \varnothing \!=\! \bot \)). The positive A-type \(\tau _{S}^+(x)\) only bears positive information, and is defined as

$$\begin{aligned} \tau _{S}^+(x) {:=}\bigwedge _{a\in S}a(x). \end{aligned}$$

Given a monadic model \(\mathbb {D}= (D,V)\) and a subset S of A, we define

$$\begin{aligned} |S|_\mathbb {D}{:=}|\{d\in D \mid \mathbb {D}\models \tau _S(d) \}| \end{aligned}$$

as the number of elements of \(\mathbb {D}\) that realise the type \(\tau _{S}\). We often blur the distinction between the formula \(\tau _{S}(x)\) and the subset \(S \subseteq A\), calling S an A-type as well. Note that we have \(\mathbb {D}\models \tau _S(d)\) iff \(V^{\flat }(d) = S\), so that we may indeed refer to \(V^{\flat }(d)\) as the type of \(d \in D\). The quantifier rank \(\texttt {qr} (\varphi )\) of a formula \(\varphi \in \texttt {M} \texttt {E} ^{\infty }\) (hence also for \(\texttt {M} \) and \(\texttt {M} \texttt {E} \)) is defined as follows:

$$\begin{aligned} \begin{array}{llll} \texttt {qr} (\varphi ) &{}{:=}&{} 0 &{} \text {if }\varphi \text { is atomic},\\ \texttt {qr} (\lnot \psi ) &{} {:=}&{} \texttt {qr} (\psi ) \\ \texttt {qr} (\psi _{1}\mathrel {\heartsuit }\psi _{2}) &{} {:=}&{} \max \{\texttt {qr} (\psi _1),\texttt {qr} (\psi _2)\} &{} \text {where } \heartsuit \in \{ \wedge ,\vee \}\\ \texttt {qr} (Qx.\psi ) &{} {:=}&{} 1+\texttt {qr} (\psi ), &{} \text {where } Q \in \{\exists ,\forall ,\exists ^\infty ,\forall ^\infty \} \end{array} \end{aligned}$$

Given a monadic logic \(\texttt {L} \) we write \(\mathbb {D}\equiv _k^{\texttt {L} } \mathbb {D}'\) to indicate that the models \(\mathbb {D}\) and \(\mathbb {D}'\) satisfy exactly the same sentences \(\varphi \in \texttt {L} \) with \(\texttt {qr} (\varphi ) \le k\). We write \(\mathbb {D}\equiv ^{\texttt {L} } \mathbb {D}'\) if \(\mathbb {D}\equiv _k^{\texttt {L} } \mathbb {D}'\) for all k. When clear from context, we may omit explicit reference to \(\texttt {L} \). A partial isomorphism between two models (D, V) and \((D',V ')\) is a partial function \(f: D \rightharpoonup D'\) which is injective and satisfies \(d \in V (a) \Leftrightarrow f(d) \in V '(a)\) for all \(a\in A\) and \( d\in \mathsf {Dom}(f)\). Given two sequences \(\overline{{\mathbf {d}}} \in D^k\) and \(\overline{{\mathbf {d'}}} \in {D'}^k\) we use \(f:\overline{{\mathbf {d}}} \mapsto \overline{{\mathbf {d'}}}\) to denote the partial function \(f:D\rightharpoonup D'\) defined as \(f(d_i) {:=}d'_i\). We will take care to avoid cases where there exist \(d_i,d_j\) such that \(d_i = d_j\) but \(d'_i \ne d'_j\).
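The quantifier rank recursion displayed above translates verbatim into the toy representation introduced earlier (again, an illustration of ours):

def qr(phi) -> int:
    """Quantifier rank, following the recursion displayed above."""
    if isinstance(phi, (bool, Pred, Eq)):   # atomic formulas
        return 0
    if isinstance(phi, Bin):                # conjunction / disjunction
        return max(qr(phi.left), qr(phi.right))
    if isinstance(phi, Quant):              # any of the four quantifiers
        return 1 + qr(phi.body)
    raise ValueError(phi)

assert qr(phi_example) == 1   # ∃^∞ x. (a(x) ∧ ¬b(x)) has quantifier rank 1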
Finally, for future reference we briefly discuss the notion of Boolean duals. We first give a concrete definition of a dualisation operator on the set of monadic formulas. The (Boolean) dual \(\varphi ^{\delta } \in {\texttt {M} \texttt {E} ^{\infty }}(A)\) of \(\varphi \in {\texttt {M} \texttt {E} ^{\infty }}(A)\) is the formula given by:

$$\begin{aligned} (a(x))^{\delta }&{:=}a(x)&(\lnot a(x))^{\delta }&{:=}\lnot a(x)\\ (\top )^{\delta }&{:=}\bot&(\bot )^{\delta }&{:=}\top \\ (x \approx y)^{\delta }&{:=}x \not \approx y&(x \not \approx y)^{\delta }&{:=}x \approx y \\ (\varphi \wedge \psi )^{\delta }&{:=}\varphi ^{\delta } \vee \psi ^{\delta }&(\varphi \vee \psi )^{\delta }&{:=}\varphi ^{\delta } \wedge \psi ^{\delta }\\ (\exists x.\psi )^{\delta }&{:=}\forall x.\psi ^{\delta }&(\forall x.\psi )^{\delta }&{:=}\exists x.\psi ^{\delta }\\ (\exists ^{\infty } x.\psi )^{\delta }&{:=}\forall ^{\infty } x.\psi ^{\delta }&(\forall ^{\infty } x.\psi )^{\delta }&{:=}\exists ^{\infty } x.\psi ^{\delta } \end{aligned}$$

Where \(\texttt {L} \in \{\texttt {M} ,\texttt {M} \texttt {E} ,\texttt {M} \texttt {E} ^{\infty }\}\), observe that if \(\varphi \in \texttt {L} (A)\) then \(\varphi ^{\delta } \in \texttt {L} (A)\). Moreover, the operator preserves positivity of the predicates, that is, if \(\varphi \in \texttt {L} ^+(A)\) then \(\varphi ^{\delta } \in \texttt {L} ^+(A)\). The following proposition states that the formulas \(\varphi \) and \(\varphi ^{\delta }\) are Boolean duals. We omit its proof, which is a routine check.

Proposition 1 Let \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\) be a monadic formula. Then \(\varphi \) and \(\varphi ^{\delta }\) are indeed Boolean duals, in the sense that for every monadic model (D, V) we have that

$$\begin{aligned} (D,V) \models \varphi \text { iff } (D,V^{c}) \not \models \varphi ^{\delta }, \end{aligned}$$

where \(V^{c}: A \rightarrow \wp (D)\) is the valuation given by \(V^{c}(a) {:=}D \setminus V(a)\).

Normal forms

In this section we provide, for each of the logics \(\texttt {M} \), \(\texttt {M} \texttt {E} \) and \(\texttt {M} \texttt {E} ^{\infty }\), normal forms that will be pivotal for characterising the different fragments of these logics in later sections. Our approach will be game-theoretic, based on Ehrenfeucht–Fraïssé style model comparison games. These games were introduced by Ehrenfeucht [13] to study Fraïssé's analysis of first-order logic using so-called back-and-forth systems. Over the years, similar games have been introduced for various other logics, including extensions of first-order logic with generalised quantifiers [20]. As an important application of Ehrenfeucht–Fraïssé games one may use the notion of a winning strategy to obtain certain normal forms for formulas in the formalism under scrutiny. In the case of monadic first-order logic, one may extract relatively simple normal forms; this observation goes back to (at least) the work of Walukiewicz [32]. Our contribution here is that we use the method to obtain normal forms for the logic \(\texttt {M} \texttt {E} ^{\infty }\). Here and in the sequel it will often be convenient to blur the distinction between lists and sets. For instance, identifying the list \(\overline{{\mathbf {T}}} = T_{1}\ldots T_{n}\) with the set \(\{ T_{1}, \ldots , T_{n} \}\), we may write statements like \(S \in \overline{{\mathbf {T}}}\) or \(\Pi \subseteq \overline{{\mathbf {T}}}\).
Moreover, given a finite set \(\varPhi = \{\varphi _1, \ldots , \varphi _n\}\), we write \(\varphi _1 \wedge \cdots \wedge \varphi _n\) as \(\bigwedge \varPhi \), and \(\varphi _1 \vee \cdots \vee \varphi _n\) as \(\bigvee \varPhi \). If \(\varPhi \) is empty, we set as usual \(\bigwedge \varPhi = \top \) and \(\bigvee \varPhi = \bot \). Finally, notice that we write \(\bigvee _{1\le m < m^{\prime } \le n} (y_m \approx y_{m^{\prime }}) \vee \psi \) as \(\text {diff}(\overline{{\mathbf {y}}}) \rightarrow \psi \).

Normal form for \(\texttt {M} \)

We start by introducing a normal form for monadic first-order logic without equality. Given sets of types \(\Sigma , \Pi \subseteq \wp (A)\), we define the following formulas:

$$\begin{aligned} \begin{array}{lll} \nabla _{\texttt {M} }(\Sigma ,\Pi ) &{}{:=}&{} \bigwedge _{S\in \Sigma } \exists x. \tau _S(x) \wedge \forall x. \bigvee _{S\in \Pi } \tau _S(x)\\ \nabla _{\texttt {M} }(\Sigma ) &{}{:=}&{} \nabla _{\texttt {M} }(\Sigma ,\Sigma ) \end{array} \end{aligned}$$

A sentence of \(\texttt {M} (A)\) is in basic form if it is a disjunction of formulas of the form \(\nabla _{\texttt {M} }(\Sigma )\). Observe that \(\nabla _{\texttt {M} }(\Sigma ,\Pi ) \equiv \bot \) in case \(\Sigma \not \subseteq \Pi \) and that \(\nabla _{\texttt {M} }(\Sigma ,\Pi ) = \nabla _{\texttt {M} }(\Sigma ) = \forall x. \bot \) if \(\Sigma =\Pi = \varnothing \). The meaning of the formula \(\nabla _{\texttt {M} }(\Sigma )\) is that \(\Sigma \) is a complete description of the collection of types that are realised in a monadic model. The formula \(\nabla _{\texttt {M} }(\varnothing )\) distinguishes the empty model from the non-empty ones.

Every \(\texttt {M} \)-sentence is effectively equivalent to a formula in basic form: there is an effective procedure that transforms an arbitrary \(\texttt {M} \)-sentence \(\varphi \) into an equivalent formula \(\varphi ^{*}\) in basic form. This observation is easy to prove using Ehrenfeucht–Fraïssé games (proof sketches can be found in [16, Lemma 16.23] and [31, Proposition 4.14]), and the decidability of the satisfiability problem for \(\texttt {M} \) (Fact 1). We omit a full proof because it is very similar to the following more complex cases.

Normal form for \(\texttt {M} \texttt {E} \)

Due to the additional expressive power provided by the (in-)equalities, the basic normal forms of \(\texttt {M} \texttt {E} \) take a more involved shape than those of \(\texttt {M} \).

Definition 10 We say that a formula \(\varphi \in \texttt {M} \texttt {E} (A)\) is in basic form if \(\varphi = \bigvee \nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) where each disjunct is of the form

$$\begin{aligned} \nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi ) = \exists \overline{{\mathbf {x}}}.\big (\text {diff}(\overline{{\mathbf {x}}}) \wedge \bigwedge _i \tau _{T_i}(x_i) \wedge \forall z.(\text {diff}(\overline{{\mathbf {x}}},z) \rightarrow \bigvee _{S\in \Pi } \tau _S(z))\big ) \end{aligned}$$

with \(\overline{{\mathbf {T}}} \in \wp (A)^k\) for some k and \(\Pi \subseteq \overline{{\mathbf {T}}}\).

We prove that every sentence of monadic first-order logic with equality is equivalent to a formula in basic form. Although this result seems to be folklore, we provide a detailed proof because some of its ingredients will be used later, when we give a normal form for \(\texttt {M} \texttt {E} ^{\infty }\). We start by defining the following relation between monadic models.
For every \(k \in \mathbb {N}\) we define the relation \(\sim ^=_k\) on the class \(\mathfrak {M}\) of monadic models by putting

$$\begin{aligned} \mathbb {D}\sim ^=_k \mathbb {D}' \Longleftrightarrow \forall S\subseteq A \ \big ( |S|_\mathbb {D}= |S|_{\mathbb {D}'} < k \text { or } |S|_\mathbb {D},|S|_{\mathbb {D}'} \ge k \big ), \end{aligned}$$

where \(\mathbb {D}\) and \(\mathbb {D}'\) are arbitrary monadic models. Intuitively, two models are related by \(\sim ^=_k\) when their type information coincides 'modulo k'. Later on we prove that this is the same as saying that they cannot be distinguished by a sentence of \(\texttt {M} \texttt {E} \) with quantifier rank at most k. As a special case, observe that any two monadic models are related by \(\sim ^{=}_{0}\). For the moment, we record the following properties of these relations.

Proposition 2 The following hold:

(1) The relation \(\sim ^=_k\) is an equivalence relation of finite index.

(2) Every \(E \in \mathfrak {M}/{\sim ^=_k}\) is characterised by a sentence \(\varphi ^=_E \in \texttt {M} \texttt {E} (A)\) with \(\texttt {qr} (\varphi ^=_E) = k\).

We only prove the second statement, and first we consider the case where \(k=0\). The equivalence relation \(\sim ^{=}_{0}\) has the class \(\mathfrak {M}\) of all monadic models as its unique equivalence class, so here we may define \(\varphi ^{=}_{\mathfrak {M}} {:=}\top \). From now on we assume that \(k>0\). Take some equivalence class \(E \in \mathfrak {M}/{\sim ^=_k}\), and some representative \(\mathbb {D}\in E\). Let \(S_1,\ldots ,S_n \subseteq A\) be the types such that \(|S_i|_\mathbb {D}= l_i<k\) and let \(S'_1,\ldots ,S'_m \subseteq A\) be those satisfying \(|S'_i|_\mathbb {D}\ge k\). Note that the union of all the \(S_i\) and \(S'_i\) yields all the possible A-types, and that if a type \(S_{j}\) is not realised at all, we take \(l_j = 0\). Now define

$$\begin{aligned} \varphi ^=_E \quad {:=}\quad&\bigwedge _{i\le n} \Big (\exists x_1,\ldots ,x_{l_i}. \text {diff}(x_1,\ldots ,x_{l_i}) \ \wedge \ \bigwedge _{j\le l_i} \tau _{S_i}(x_j)\\&\qquad \qquad \qquad \wedge \forall z. \text {diff}(x_1,\ldots ,x_{l_i},z) \rightarrow \lnot \tau _{S_i}(z)\Big )\ \\&\wedge \bigwedge _{i\le m} \big (\exists x_1,\ldots ,x_k.\text {diff}(x_1,\ldots ,x_k) \wedge \bigwedge _{j\le k} \tau _{S'_i}(x_j) \big ), \end{aligned}$$

where we understand that any conjunct of the form \(\exists x_1, \ldots ,x_{l}.\psi \) with \(l = 0\) is simply omitted (or, to the same effect, defined as \(\top \)). It is easy to see that \(\texttt {qr} (\varphi ^=_E) = k\) and that \(\mathbb {D}' \models \varphi ^=_E\) iff \(\mathbb {D}' \in E\). Intuitively, \(\varphi ^=_E\) gives a specification of E "type by type"; in particular observe that \(\varphi ^=_{\mathbb {D}_\varnothing }\equiv \forall x. \bot \). \(\square \)

Next we recall a (standard) notion of Ehrenfeucht–Fraïssé game for \(\texttt {M} \texttt {E} \) which will be used to establish the connection between \({\sim ^=_k}\) and \(\equiv _k^{\texttt {M} \texttt {E} }\). Let \(\mathbb {D}_0 = (D_0,V_0)\) and \(\mathbb {D}_1 = (D_1,V_1)\) be monadic models. We define the game \(\text {EF}^=_k(\mathbb {D}_0,\mathbb {D}_1)\) between \(\forall \) and \(\exists \). If \(\mathbb {D}_i\) is one of the models we use \(\mathbb {D}_{-i}\) to denote the other model. A position in this game is a pair of sequences \(\overline{{\mathbf {s_0}}} \in D_0^n\) and \(\overline{{\mathbf {s_1}}} \in D_1^n\) with \(n \le k\). The game consists of k rounds.
To describe a single round of the game, assume that n rounds have passed (where \(0 \le n < k\)); round \(n+1\) then consists of the following steps: \(\forall \) chooses an element \(d_i\) in one of the \(\mathbb {D}_i\); \(\exists \) responds with an element \(d_{-i}\) in the model \(\mathbb {D}_{-i}\). In this way, the sequences \(\overline{{\mathbf {s_i}}} \in D_i^n\) of elements chosen up to round n are extended to \({\overline{{\mathbf {s_i}}}' {:=}\overline{{\mathbf {s_i}}}\cdot d_i} \in D_i^{n+1}\). Player \(\exists \) survives the round iff she does not get stuck and the function \(f_{n+1}: \overline{{\mathbf {s_0}}}' \mapsto \overline{{\mathbf {s_1}}}'\) is a partial isomorphism of monadic models. Finally, player \(\exists \) wins the match iff she survives all k rounds. Given \(n\le k\) and \(\overline{{\mathbf {s_i}}} \in D_i^n\) such that \(f_n:\overline{{\mathbf {s_0}}}\mapsto \overline{{\mathbf {s_1}}}\) is a partial isomorphism, we write \(\text {EF}_{k}^=(\mathbb {D}_0, \mathbb {D}_1)@(\overline{{\mathbf {s_0}}},\overline{{\mathbf {s_1}}})\) to denote the (initialised) game where n moves have been played and \(k-n\) moves are left to be played.

Proposition 3 The following are equivalent:

(1) \(\mathbb {D}_0 \equiv _k^{\texttt {M} \texttt {E} } \mathbb {D}_1\);

(2) \(\mathbb {D}_0 \sim _k^= \mathbb {D}_1\);

(3) \(\exists \) has a winning strategy in \(\text {EF}_k^=(\mathbb {D}_0,\mathbb {D}_1)\).

The implication from (1) to (2) is direct by Proposition 2. For the implication from (2) to (3) we give a winning strategy for \(\exists \) in \(\text {EF}_k^=(\mathbb {D}_0,\mathbb {D}_1)\) by showing the following claim.

Claim 1 Let \(\mathbb {D}_0 \sim _k^= \mathbb {D}_1\) and \(\overline{{\mathbf {s_i}}} \in D_i^n\) be such that \(n<k\) and \(f_n:\overline{{\mathbf {s_0}}}\mapsto \overline{{\mathbf {s_1}}}\) is a partial isomorphism; then \(\exists \) can survive one more round in \(\text {EF}_{k}^=(\mathbb {D}_0,\mathbb {D}_1)@(\overline{{\mathbf {s_0}}},\overline{{\mathbf {s_1}}})\).

Proof of Claim 1 Let \(\forall \) pick \(d_i\in D_i\) such that the type of \(d_i\) is \(T \subseteq A\). If \(d_i\) had already been played then \(\exists \) picks the same element as before and \(f_{n+1} = f_n\). If \(d_i\) is new and \(|T|_{\mathbb {D}_i} \ge k\) then, as at most \(n<k\) elements have been played, there is always some new \(d_{-i} \in D_{-i}\) that \(\exists \) can choose to match \(d_i\). If \(|T|_{\mathbb {D}_i} = m < k\) then we know that \(|T|_{\mathbb {D}_{-i}} = m\). Therefore, as \(d_i\) is new and \(f_n\) is injective, there must be a \(d_{-i} \in D_{-i}\) that \(\exists \) can choose. \(\square \)

The implication from (3) to (1) is a standard result [12, Corollary 2.2.9], which we prove anyway because we will need to extend it later. We prove the following loaded statement.

Let \(\overline{{\mathbf {s_i}}} \in D_i^n\) and \(\varphi (z_1,\ldots ,z_n) \in \texttt {M} \texttt {E} (A)\) be such that \(\texttt {qr} (\varphi ) \le k-n\). If \(\exists \) has a winning strategy in the game \(\text {EF}_k^=(\mathbb {D}_0,\mathbb {D}_1)@(\overline{{\mathbf {s_0}}},\overline{{\mathbf {s_1}}})\) then \(\mathbb {D}_0 \models \varphi (\overline{{\mathbf {s_0}}})\) iff \(\mathbb {D}_1 \models \varphi (\overline{{\mathbf {s_1}}})\).

If \(\varphi \) is atomic the claim holds because of \(f_n:\overline{{\mathbf {s_0}}}\mapsto \overline{{\mathbf {s_1}}}\) being a partial isomorphism. The Boolean cases are straightforward. Let \(\varphi (z_1,\ldots ,z_n) = \exists x.
\psi (z_1,\ldots ,z_n,x)\) and suppose \(\mathbb {D}_0 \models \varphi (\overline{{\mathbf {s_0}}})\). Hence, there exists \(d_0 \in D_0\) such that \(\mathbb {D}_0 \models \psi (\overline{{\mathbf {s_0}}},d_0)\). By hypothesis we know that \(\exists \) has a winning strategy for \(\text {EF}_k^=(\mathbb {D}_0,\mathbb {D}_1)@(\overline{{\mathbf {s_0}}},\overline{{\mathbf {s_1}}})\). Therefore, if \(\forall \) picks \(d_0\in D_0\) she can respond with some \(d_1\in D_1\) and have a winning strategy for \(\text {EF}_{k}^=(\mathbb {D}_0,\mathbb {D}_1) @(\overline{{\mathbf {s_0}}}{\cdot }d_0,\overline{{\mathbf {s_1}}}{\cdot }d_1)\). By induction hypothesis, because \(\texttt {qr} (\psi ) \le k- (n+1)\), we have that \(\mathbb {D}_0 \models \psi (\overline{{\mathbf {s_0}}},d_0)\) iff \(\mathbb {D}_1 \models \psi (\overline{{\mathbf {s_1}}},d_1)\) and hence \(\mathbb {D}_1 \models \exists x.\psi (\overline{{\mathbf {s_1}}},x)\). The opposite direction is proved by a symmetric argument. \(\square \)

We finish the proof of the proposition by combining these two claims. \(\square \)

Theorem 1 There is an effective procedure that transforms an arbitrary \(\texttt {M} \texttt {E} \)-sentence \(\varphi \) into an equivalent formula \(\varphi ^{*}\) in basic form.

Let \(\texttt {qr} (\psi ) = k\) and let \(\llbracket \psi \rrbracket \) be the class of models satisfying \(\psi \). As \(\mathfrak {M}/{\equiv _k^{\texttt {M} \texttt {E} }}\) is the same as \(\mathfrak {M}/{\sim _k^=}\) by Proposition 3, it is easy to see that \(\psi \) is equivalent to \(\bigvee \{ \varphi ^=_E \mid E \in \llbracket \psi \rrbracket /{\sim _k^=} \}\). Now it only remains to see that each \(\varphi ^=_E\) is equivalent to the sentence \(\nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) for some \(\overline{{\mathbf {T}}},\Pi \subseteq \wp (A)\) with \(\Pi \subseteq \overline{{\mathbf {T}}}\). The crucial observation is that we will use \(\overline{{\mathbf {T}}}\) and \(\Pi \) to give a specification of the types "element by element". Take some representative \(\mathbb {D}\) of the equivalence class E. Let \(S_1,\ldots ,S_n \subseteq A\) be the types such that \(|S_i|_\mathbb {D}= l_i < k\) and \(S'_1,\ldots ,S'_m \subseteq A\) those satisfying \(|S'_j|_\mathbb {D}\ge k\). The size of the sequence \(\overline{{\mathbf {T}}}\) is defined to be \((\sum _{i=1}^n l_i) + k\times m\), where \(\overline{{\mathbf {T}}}\) contains exactly \(l_i\) occurrences of type \(S_i\) and at least k occurrences of each \(S'_j\). On the other hand we set \(\Pi {:=}\{S'_1,\ldots ,S'_m\}\). It is straightforward to check that \(\Pi \subseteq \overline{{\mathbf {T}}}\) and that \(\varphi ^=_E\) is equivalent to \(\nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\). (Observe, however, that the quantifier rank of the latter is only bounded by \(k\times 2^{|A|} + 1\).) In particular \(\varphi ^=_{\mathbb {D}_\varnothing }\equiv \nabla _{\texttt {M} \texttt {E} }(\varnothing ,\varnothing ) = \forall x. \bot \). The effectiveness of the procedure follows from the fact that, given the previous bound on the size of a normal form, it is possible to non-deterministically guess the number of disjuncts, types and associated parameters for each conjunct and repeatedly check whether the formulas \(\varphi \) and \(\bigvee \nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) are equivalent, this latter problem being decidable by Fact 1.
\(\square \)

Normal form for \(\texttt {M} \texttt {E} ^{\infty }\)

The logic \(\texttt {M} \texttt {E} ^{\infty }\) extends \(\texttt {M} \texttt {E} \) with the capacity to tear apart finite and infinite sets of elements. This is reflected in the normal form for \(\texttt {M} \texttt {E} ^{\infty }\) by adding extra information to the normal form of \(\texttt {M} \texttt {E} \). We say that a formula \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\) is in basic form if \(\varphi = \bigvee \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) where each disjunct is of the form

$$\begin{aligned} \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) {:=}\nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi ) \wedge \nabla _{\!\!\infty }(\Sigma ) \end{aligned}$$

where \(\nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) is as in Definition 10, and

$$\begin{aligned} \nabla _{\!\!\infty }(\Sigma ) {:=}\bigwedge _{S\in \Sigma } \exists ^\infty y.\tau _S(y) \wedge \forall ^\infty y.\bigvee _{S\in \Sigma } \tau _S(y). \end{aligned}$$

Here \(\overline{{\mathbf {T}}} \in \wp (A)^{k}\) for some k, and \(\Pi ,\Sigma \subseteq \wp (A)\) are such that \(\Sigma \subseteq \Pi \subseteq \overline{{\mathbf {T}}}\). Observe that basic formulas of \(\texttt {M} \texttt {E} \) are not basic formulas of \(\texttt {M} \texttt {E} ^{\infty }\). Intuitively, the formula \(\nabla _{\!\!\infty }(\Sigma )\) says that (1) for every type \(S\in \Sigma \), there are infinitely many elements satisfying S and (2) only finitely many elements do not satisfy any type in \(\Sigma \). As a special case, the formula \(\nabla _{\!\!\infty }(\varnothing )\) expresses that the model is finite. A short argument reveals that, intuitively, every disjunct of the form \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) expresses that any monadic model satisfying it admits a partition of its domain in three parts: (i) distinct elements \(t_1,\ldots ,t_n\) with respective types \(T_1,\ldots ,T_n\), (ii) finitely many elements whose types belong to \(\Pi \), and (iii) for each \(S\in \Sigma \), infinitely many elements with type S. Note that this partition is not necessarily unique, unless we modify item (ii) so that it mentions finitely many elements whose type belongs to \(\Pi \setminus \Sigma \).

In the same way as before, we define an equivalence relation \(\sim ^\infty _k\) on monadic models which refines \(\sim ^=_{k}\) by adding information about the (in-)finiteness of the types. For every \(k \in \mathbb {N}\) we define the relation \(\sim ^{\infty }_k\) on the class \(\mathfrak {M}\) of monadic models by putting

$$\begin{aligned} \mathbb {D}\sim ^\infty _{k} \mathbb {D}' \Longleftrightarrow \forall S\subseteq A \ \big ( |S|_\mathbb {D}= |S|_{\mathbb {D}'}< k \text { or } k \le |S|_\mathbb {D},|S|_{\mathbb {D}'} < \omega \text { or } |S|_\mathbb {D},|S|_{\mathbb {D}'} \ge \omega \big ). \end{aligned}$$

As before, with this definition we find that \(\mathbb {D}\sim ^\infty _0 \mathbb {D}'\) holds always.

Proposition 4 The following hold, for every \(k \in \mathbb {N}\):

(1) The relation \(\sim ^\infty _k\) is an equivalence relation of finite index.

(2) The relation \(\sim ^\infty _k\) is a refinement of \(\sim ^=_k\).

(3) Every \(E \in \mathfrak {M}/{\sim ^\infty _k}\) is characterised by a sentence \(\varphi ^\infty _E \in \texttt {M} \texttt {E} ^{\infty }(A)\) with \(\texttt {qr} (\varphi ^\infty _E) = k\).
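Before turning to the proof, note that \(\sim ^\infty _k\) depends only on the function \(S \mapsto |S|_\mathbb {D}\) with values in \(\mathbb {N} \cup \{\omega \}\). Deciding it from such 'type-count profiles' is a case distinction per type, as the following sketch of ours shows; dropping the distinction between the second and third cases yields \(\sim ^=_k\).

import math
from typing import Dict, FrozenSet

# A model is summarised by its profile: the map S |-> |S|_D, where a type S
# is a frozenset of names and math.inf stands for omega.
Profile = Dict[FrozenSet, float]

def sim_inf(p1: Profile, p2: Profile, k: int) -> bool:
    """Decide whether two profiles are related by ~^infinity_k."""
    for S in set(p1) | set(p2):
        n1, n2 = p1.get(S, 0), p2.get(S, 0)
        small = (n1 == n2 and n1 < k)                         # equal, below k
        medium = (k <= n1 < math.inf and k <= n2 < math.inf)  # both large, finite
        large = (n1 == math.inf and n2 == math.inf)           # both infinite
        if not (small or medium or large):
            return False
    return True

# Two models agreeing on all types 'modulo' the threshold k = 2:
assert sim_inf({frozenset({'a'}): 5}, {frozenset({'a'}): 7}, 2)
assert not sim_inf({frozenset({'a'}): 5}, {frozenset({'a'}): math.inf}, 2)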
We only prove the last item, for \(k>0\). Let \(E \in \mathfrak {M}/{\sim ^\infty _k}\) and let \(\mathbb {D}\in E\) be a representative of the class. Let \(E' \in \mathfrak {M}/{\sim ^=_k}\) be the equivalence class of \(\mathbb {D}\) with respect to \(\sim ^=_k\). Let \(S_1,\ldots ,S_n \subseteq A\) be all the types such that \(|S_i|_\mathbb {D}\ge \omega \), and define

$$\begin{aligned} \varphi ^\infty _E {:=}\varphi ^=_{E'} \wedge \nabla _{\!\!\infty }(\{S_1,\ldots ,S_n\}). \end{aligned}$$

It is not difficult to see that \(\texttt {qr} (\varphi ^\infty _E) = k\) and that \(\mathbb {D}' \models \varphi ^\infty _E\) iff \(\mathbb {D}' \in E\). In particular \(\varphi ^\infty _{\mathbb {D}_\varnothing }\equiv \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\varnothing ,\varnothing ,\varnothing ) = \forall x. \bot \wedge \forall ^\infty y. \bot \). \(\square \)

Now we give a version of the Ehrenfeucht–Fraïssé game for \(\texttt {M} \texttt {E} ^{\infty }\). This game, which extends \(\text {EF}^=_k\) with moves for \(\exists ^\infty \), is the adaptation of the Ehrenfeucht–Fraïssé game for monotone generalised quantifiers found in [20] to the case of full monadic first-order logic. Let \(\mathbb {D}_0 = (D_0,V_0)\) and \(\mathbb {D}_1 = (D_1,V_1)\) be monadic models. We define the game \(\text {EF}^\infty _k(\mathbb {D}_0,\mathbb {D}_1)\) between \(\forall \) and \(\exists \). A position in this game is a pair of sequences \(\overline{{\mathbf {s_0}}} \in D_0^n\) and \(\overline{{\mathbf {s_1}}} \in D_1^n\) with \(n \le k\). The game consists of k rounds. To describe a single round of the game, assume that n rounds have passed (where \(0 \le n < k\)); round \(n+1\) then consists of the following steps. First \(\forall \) chooses to perform one of the following two types of moves:

second-order move: \(\forall \) chooses an infinite set \(X_i \subseteq D_i\); \(\exists \) responds with an infinite set \(X_{-i} \subseteq D_{-i}\); \(\forall \) chooses an element \(d_{-i} \in X_{-i}\); \(\exists \) responds with an element \(d_i \in X_i\).

first-order move: \(\forall \) chooses an element \(d_i \in D_i\); \(\exists \) responds with an element \(d_{-i} \in D_{-i}\).

The sequences \(\overline{{\mathbf {s_i}}} \in D_i^n\) of elements chosen up to round n are then extended to \({\overline{{\mathbf {s_i}}}' {:=}\overline{{\mathbf {s_i}}}\cdot d_i} \in D_{i}^{n+1}\). \(\exists \) survives the round iff she does not get stuck and the function \(f_{n+1}: \overline{{\mathbf {s_0}}}' \mapsto \overline{{\mathbf {s_1}}}'\) is a partial isomorphism of monadic models. Note that the only items that are recorded in a play of this game are the objects picked by the players; the subsets that are picked in a round starting with a second-order move by \(\forall \) are forgotten as soon as the players have selected inhabitants of these sets (Fig. 1).

Proposition 5 The following are equivalent:

(1) \(\mathbb {D}_0 \equiv _k^{\texttt {M} \texttt {E} ^{\infty }} \mathbb {D}_1\);

(2) \(\mathbb {D}_0 \sim _k^\infty \mathbb {D}_1\);

(3) \(\exists \) has a winning strategy in \(\text {EF}_k^\infty (\mathbb {D}_0,\mathbb {D}_1)\).

The implication from (1) to (2) is direct by Proposition 4. For the implication from (2) to (3) we show the following. Let \(\mathbb {D}_0 \sim _k^\infty \mathbb {D}_1\) and \(\overline{{\mathbf {s_i}}} \in D_i^n\) be such that \(n<k\) and \(f_n:\overline{{\mathbf {s_0}}}\mapsto \overline{{\mathbf {s_1}}}\) is a partial isomorphism. Then \(\exists \) can survive one more round in \(\text {EF}_{k}^\infty (\mathbb {D}_0, \mathbb {D}_1)@(\overline{{\mathbf {s_0}}},\overline{{\mathbf {s_1}}})\).
[Fig. 1: Elements of type S have a coloured background.]

We focus on the second-order moves because the first-order moves are the same as in the corresponding claim of Proposition 3. Let \(\forall \) choose an infinite set \(X_i \subseteq D_i\); we would like \(\exists \) to choose an infinite set \(X_{-i} \subseteq D_{-i}\) such that the following conditions hold:

(a) The map \(f_n\) is a well-defined partial isomorphism between the restricted monadic models \(\mathbb {D}_0{\upharpoonright }X_0\) and \(\mathbb {D}_1{\upharpoonright }X_1\);

(b) For every type S there is an element \(d\in X_i\) of type S which is not connected by \(f_n\) iff there is such an element in \(X_{-i}\).

First we prove that such a set \(X_{-i}\) exists. To satisfy item (a), \(\exists \) just needs to add to \(X_{-i}\) the elements connected to \(X_i\) by \(f_n\); this is not a problem. For item (b) we proceed as follows: for every type S such that there is an element \(d\in X_i\) of type S, we add a new element \(d'\in D_{-i}\) of type S to \(X_{-i}\). To see that this is always possible, observe first that \(\mathbb {D}_0 \sim _k^\infty \mathbb {D}_1\) implies \(\mathbb {D}_0 \sim _k^= \mathbb {D}_1\). Using the properties of this relation, we distinguish two cases:

If \(|S|_{D_i} \ge k\) we know that \(|S|_{D_{-i}} \ge k\) as well. From the elements of \(D_{-i}\) of type S, at most \(n<k\) are used by \(f_n\). Hence, there is at least one \(d'\in D_{-i}\) of type S to choose from.

If \(|S|_{D_i} < k\) we know that \(|S|_{D_{i}} = |S|_{D_{-i}}\). From the elements of \(D_{i}\) of type S, at most \(|S|_{D_{i}}-1\) are used by \(f_n\). (The reason for the '\(-1\)' is that we are assuming that we have just chosen a \(d\in X_i\) which is not in \(f_n\).) Using that \(|S|_{D_{i}} = |S|_{D_{-i}}\) and that \(f_n\) is a partial isomorphism we can again conclude that there is at least one \(d'\in D_{-i}\) of type S to choose from.

Finally, we need to show that \(\exists \) can choose \(X_{-i}\) to be infinite. To see this, observe that \(X_{i}\) is infinite, while there are only finitely many types. Hence there must be some S such that \(|S|_{X_i} \ge \omega \). It is then safe to add infinitely many elements for S in \(X_{-i}\) while considering point (b). Moreover, the existence of infinitely many elements satisfying S in \(D_{-i}\) is guaranteed by \(\mathbb {D}_0 \sim _k^\infty \mathbb {D}_1\). Having shown that \(\exists \) can choose a set \(X_{-i}\) satisfying the above conditions, it is now clear that using point (b) \(\exists \) can survive the "first-order part" of the second-order move we were considering. This finishes the proof of the claim. \(\square \)

Returning to the proof of Proposition 5, for the implication from (3) to (1) we prove the following. Let \(\overline{{\mathbf {s_i}}} \in D_i^n\) and \(\varphi (z_1,\ldots ,z_n) \in \texttt {M} \texttt {E} ^{\infty }(A)\) be such that \(\texttt {qr} (\varphi ) \le k-n\). If \(\exists \) has a winning strategy in \(\text {EF}_k^\infty (\mathbb {D}_0,\mathbb {D}_1)@(\overline{{\mathbf {s_0}}},\overline{{\mathbf {s_1}}})\) then \(\mathbb {D}_0 \models \varphi (\overline{{\mathbf {s_0}}})\) iff \(\mathbb {D}_1 \models \varphi (\overline{{\mathbf {s_1}}})\). All the cases involving operators of \(\texttt {M} \texttt {E} \) are the same as in Proposition 3. We prove the inductive case for the generalised quantifier. Let \(\varphi (z_1,\ldots ,z_n)\) be of the form \(\exists ^\infty x.\psi (z_1,\ldots ,z_n,x)\) and let \(\mathbb {D}_0 \models \varphi (\overline{{\mathbf {s_0}}})\).
Hence, the set \(X_{0} {:=}\{ d_{0} \in D_{0} \mid \mathbb {D}_0 \models \psi (\overline{{\mathbf {s_0}}},d_0) \}\) is infinite. By assumption \(\exists \) has a winning strategy in \(\text {EF}_k^\infty (\mathbb {D}_0,\mathbb {D}_1)@(\overline{{\mathbf {s_0}}},\overline{{\mathbf {s_1}}})\). Therefore, if \(\forall \) plays a second-order move by picking \(X_0 \subseteq D_0\) she can respond with some infinite set \(X_1 \subseteq D_1\). We claim that \(\mathbb {D}_1 \models \psi (\overline{{\mathbf {s_1}}},d_1)\) for every \(d_1\in X_1\). First observe that if this holds then the set \(X'_1 {:=}\{ d_1 \in D_1 \mid \mathbb {D}_1 \models \psi (\overline{{\mathbf {s_1}}},d_1)\}\) must be infinite, and hence \(\mathbb {D}_1 \models \exists ^\infty x.\psi (\overline{{\mathbf {s_1}}},x)\). Assume, for a contradiction, that \(\mathbb {D}_1 \not \models \psi (\overline{{\mathbf {s_1}}},d'_1)\) for some \(d'_1\in X_1\). Let \(\forall \) play this \(d'_1\) as the second part of his move. Then, as \(\exists \) has a winning strategy, she will respond with some \(d'_0 \in X_0\) for which she has a winning strategy in \(\text {EF}_{k}^\infty (\mathbb {D}_0,\mathbb {D}_1) @(\overline{{\mathbf {s_0}}}{\cdot }d'_0,\overline{{\mathbf {s_1}}}{\cdot }d'_1)\). But then by our induction hypothesis, which applies since \(\texttt {qr} (\psi ) \le k-(n+1)\), we may infer from \(\mathbb {D}_1 \not \models \psi (\overline{{\mathbf {s_1}}},d'_1)\) that \(\mathbb {D}_0 \not \models \psi (\overline{{\mathbf {s_0}}},d'_0)\). This clearly contradicts the fact that \(d'_{0} \in X_{0}\). \(\square \)

Combining the claims finishes the proof of the proposition. \(\square \)

Theorem 2 There is an effective procedure that transforms an arbitrary \(\texttt {M} \texttt {E} ^{\infty }\)-sentence \(\varphi \) into an equivalent formula \(\varphi ^{*}\) in basic form.

This can be proved using the same argument as in Theorem 1 but based on Proposition 5. Hence we only focus on showing that \(\varphi _E^\infty \equiv \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) for some \(\overline{{\mathbf {T}}},\Pi ,\Sigma \subseteq \wp (A)\) such that \(\Sigma \subseteq \Pi \subseteq \overline{{\mathbf {T}}}\), where \(\varphi _E^\infty \) is the sentence characterising \(E \in \mathfrak {M}/{\sim ^\infty _k}\) from Proposition 4(3). Recall that

$$\begin{aligned} \varphi ^\infty _E = \varphi ^=_{E'} \wedge \nabla _{\!\!\infty }(\Sigma ) \end{aligned}$$

where \(\Sigma \) is the collection of types that are realised by infinitely many elements. Using Theorem 1 on \(\varphi ^=_{E'}\) we know that this is equivalent to

$$\begin{aligned} \varphi ^\infty _E = \nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi ) \wedge \nabla _{\!\!\infty }(\Sigma ) \end{aligned}$$

where \(\Pi \subseteq \overline{{\mathbf {T}}}\). Observe that we may assume that \(\Sigma \subseteq \Pi \), otherwise the formula would be inconsistent. We can then conclude that \(\varphi ^\infty _E \equiv \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\). \(\square \)

Monotonicity

In this section we provide our first characterisation result, which concerns the notion of monotonicity.

Let V and \(V'\) be two A-valuations on the same domain D, and \(B \subseteq A\). Then we say that \(V'\) is a B-extension of V, notation: \(V \le _{B} V'\), if \(V(b) \subseteq V'(b)\) for every \(b \in B\), and \(V(a) = V'(a)\) for every \(a \in A \setminus B\).
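To illustrate (with the toy evaluator sketched earlier, and example data of our own choosing): enlarging the interpretation of b only gives a \(\{b\}\)-extension, and the truth of the sentence \(\exists x. b(x)\), in which b occurs only positively, persists along it.

# V2 <=_{b} V2p: the interpretation of b grows, that of a stays fixed.
D2 = {0, 1}
V2 = {'a': {0}, 'b': {0}}
V2p = {'a': {0}, 'b': {0, 1}}

phi_b = Quant('E', 'x', Pred('b', 'x'))   # exists x. b(x)

assert holds(phi_b, D2, V2, {})
assert holds(phi_b, D2, V2p, {})          # truth persists under the {b}-extension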
Given a monadic logic \(\texttt {L} \) and a formula \(\varphi \in \texttt {L} (A)\) we say that \(\varphi \) is monotone in \(B \subseteq A\) if $$\begin{aligned} (D,V),g \models \varphi \text { and } V \le _{B} V' \text { imply } (D,V'),g \models \varphi , \end{aligned}$$ for every pair of monadic models (D, V) and \((D,V')\) and every assignment \(g:\mathsf {iVar}\rightarrow D\). It is easy to prove that a formula is monotone in \(B \subseteq A\) if and only if it is monotone in every \(b \in B\). The semantic property of monotonicity can usually be linked to the syntactic notion of positivity. Indeed, for many logics, a formula \(\varphi \) is monotone in \(a \in A\) iff \(\varphi \) is equivalent to a formula where all occurrences of a have a positive polarity, that is, they are situated in the scope of an even number of negations. For \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} \}\) we define the fragment of A-formulas that are positive in all predicates in B, in short: the B-positive formulas by the following grammar: $$\begin{aligned} \varphi \mathrel {::=}\psi \mid b(x) \mid (\varphi \wedge \varphi ) \mid (\varphi \vee \varphi ) \mid \exists x.\varphi \mid \forall x.\varphi , \end{aligned}$$ where \(b \in B\) and \(\psi \in \texttt {L} (A\setminus B)\) (that is, there are no occurrences of any \(b \in B\) in \(\psi \)). Similarly, the B-positive fragment of \(\texttt {M} \texttt {E} ^{\infty }\) is given by $$\begin{aligned} \varphi \mathrel {::=}\psi \mid b(x) \mid (\varphi \wedge \varphi ) \mid (\varphi \vee \varphi ) \mid \exists x.\varphi \mid \forall x.\varphi \mid \exists ^\infty x.\varphi \mid \forall ^\infty x.\varphi , \end{aligned}$$ where \(b\in B\) and \(\psi \in \texttt {M} \texttt {E} ^{\infty }(A\setminus B)\). In all three cases, we let \(\texttt {Pos} _{B}(\texttt {L} (A))\) denote the set of B-positive sentences. Note that the difference between the fragments \(\texttt {Pos} _{B}(\texttt {M} (A))\) and \(\texttt {Pos} _{B}(\texttt {M} \texttt {E} (A))\) lies in the fact that in the latter case, the 'B-free' formulas \(\psi \) may contain the equality symbol, both positively (\(\approx \)) and negatively (\(\not \approx \)). Clearly \(\texttt {Pos} _{A}(\texttt {L} (A))= \texttt {L} ^+\). Perhaps a more natural presentation of the fragment \(\texttt {Pos} _{B}(\texttt {L} (A))\) would be via the following grammar (in the case of \(\texttt {M} \), the other cases would be similar): $$\begin{aligned} \varphi \mathrel {::=}\top \mid \bot \mid a(x) \mid \lnot a(x) \mid b(x) \mid (\varphi \wedge \varphi ) \mid (\varphi \vee \varphi ) \mid \exists x.\varphi \mid \forall x.\varphi , \end{aligned}$$ where \(a \in A \setminus B\) and \(b \in B\). It is not difficult to see that the above grammar produces the same formulas as the one in Definition 17. The latter presentation, however, is more convenient in the context of our companion paper [10], and in line with the definition of the fragments \(\texttt {Con} _{B}(\texttt {M} \texttt {E} ^{\infty }(A))\) studied in the next section. Let \(\varphi \) be a sentence of the monadic logic \(\texttt {L} (A)\), where \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\). Then \(\varphi \) is monotone in a set \(B \subseteq A\) if and only if there is an equivalent formula \(\varphi ^{\oslash } \in \texttt {Pos} _{B}(\texttt {L} (A))\). Furthermore, it is decidable whether a sentence \(\varphi \in \texttt {L} (A)\) has this property or not. 
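By way of illustration, take \(A = \{a,b\}\) and consider the sentence $$\begin{aligned} \varphi = \forall x. (\lnot a(x) \vee b(x)). \end{aligned}$$ Enlarging the interpretation of b can never falsify \(\varphi \), and indeed \(\varphi \) is already \(\{b\}\)-positive. By contrast, \(\varphi \) is not monotone in a: on a one-element domain where both predicates are empty \(\varphi \) holds, but adding the element to a falsifies it. By the theorem, \(\varphi \) therefore has no \(\{a\}\)-positive equivalent.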
The 'easy' direction of the first claim of the theorem is taken care of by the following proposition. Every formula \(\varphi \in \texttt {Pos} _{B}(\texttt {L} (A))\) is monotone in B, where \(\texttt {L} \) is one of the logics \(\{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\). The case for \(D= \varnothing \) being immediate, we assume \(D \ne \varnothing \). The proof is a routine argument by induction on the complexity of \(\varphi \). That is, we show by induction that any formula \(\varphi \) in the B-positive fragment (which need not be a sentence) satisfies (4), for every monadic model (D, V), valuation \(V' \ge _{B} V\) and assignment \({g:\mathsf {iVar}\rightarrow D}\). We focus on the generalised quantifiers. Let \((D,V),g \models \varphi \) and \(V \le _{B} V'\). Case \(\varphi = \exists ^\infty x.\varphi '(x)\). By definition there exists an infinite set \(I\subseteq D\) such that for all \(d\in I\) we have \((D,V),g[x\mapsto d] \models \varphi '(x)\). By induction hypothesis \((D,V'),g[x\mapsto d] \models \varphi '(x)\) for all \(d \in I\). Therefore \((D,V'),g \models \exists ^\infty x.\varphi '(x)\). Case \(\varphi = \forall ^\infty x.\varphi '(x)\). By definition there exists a set \(C\subseteq D\) such that for all \(d\in C\) we have \((D,V),g[x\mapsto d] \models \varphi '(x)\) and \(D\setminus C\) is finite. By induction hypothesis \((D,V'),g[x\mapsto d] \models \varphi '(x)\) for all \(d \in C\). Therefore \((D,V'),g \models \forall ^\infty x.\varphi '(x)\). This finishes the proof. \(\square \) The 'hard' direction of the first claim of Theorem 3 states that the fragment \(\texttt {Pos} _{B}(\texttt {L} )\) is complete for monotonicity in B. In order to prove this, we need to show that every sentence which is monotone in B is equivalent to some formula in \(\texttt {Pos} _{B}(\texttt {L} )\). We actually are going to prove a stronger result. Let \(\texttt {L} \) be one of the logics \(\{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\). There exists an effective translation \((-)^{\oslash }:\texttt {L} (A) \rightarrow \texttt {Pos} _{B}(\texttt {L} (A))\) such that a sentence \({\varphi \in \texttt {L} (A)}\) is monotone in \(B \subseteq A\) if and only if \(\varphi \equiv \varphi ^\oslash \). We prove the three manifestations of Proposition 7 separately, in three respective subsections. Proof of Theorem 3 The first claim of the Theorem is an immediate consequence of Proposition 7. By effectiveness of the translation and Fact 1, it is decidable whether a sentence \(\varphi \in \texttt {L} (A)\) is monotone in \(B \subseteq A\) or not. \(\square \) The following definition will be used throughout the remainder of the section. Given \(S \subseteq A\) and \(B \subseteq A\) we use the following notation $$\begin{aligned} \tau ^{B}_S(x) {:=}\bigwedge _{b\in S} b(x) \wedge \bigwedge _{b\in A\setminus (S\cup B)}\lnot b(x), \end{aligned}$$ for what we call the B-positive A-type \(\tau ^{B}_S\). Intuitively, \(\tau ^{B}_S\) works almost like the A-type \(\tau _S\), the difference being that \(\tau ^{B}_S\) discards the negative information for the names in B. If \(B = \{a\}\) we write \(\tau ^a_S\) instead of \(\tau ^{\{a\}}_S\). Observe that with this notation, \(\tau ^+_S\) is equivalent to \(\tau ^A_S\). Monotone fragment of \(\texttt {M} \) In this subsection we prove the \(\texttt {M} \)-variant of Proposition 7.
That is, we give a translation that constructively maps arbitrary sentences into \(\texttt {Pos} _{B}(\texttt {M} )\) and that is truth-preserving precisely when the given sentence is monotone in B. To formulate the translation we need to introduce some new notation. Let \(B\subseteq A\) be a finite set of names and \(\Sigma \subseteq \wp (A)\) be a set of types. The B-positive variant of \(\nabla _{\texttt {M} }(\Sigma )\) is given as follows: $$\begin{aligned} \nabla ^{B}_\texttt {M} (\Sigma ) {:=}\bigwedge _{S\in \Sigma } \exists x. \tau ^{B}_S(x) \wedge \forall x. \bigvee _{S\in \Sigma } \tau ^{B}_S(x). \end{aligned}$$ We also introduce the following generalised forms of the above notation, with types \(\Pi \subseteq \wp (A)\): $$\begin{aligned} \nabla ^{B}_\texttt {M} (\Sigma ,\Pi ) {:=}\bigwedge _{S\in \Sigma } \exists x. \tau ^{B}_S(x) \wedge \forall x. \bigvee _{S\in \Pi } \tau ^{B}_S(x). \end{aligned}$$ The positive variants of the above notations are defined as \(\nabla ^+_{\texttt {M} }(\Sigma ) {:=}\nabla ^{A}_\texttt {M} (\Sigma )\) and \(\nabla ^+_{\texttt {M} }(\Sigma ,\Pi ) {:=}\nabla ^{A}_\texttt {M} (\Sigma ,\Pi )\). There exists an effective translation \((-)^\oslash :\texttt {M} (A) \rightarrow \texttt {Pos} _{B}(\texttt {M} (A))\) such that a sentence \({\varphi \in \texttt {M} (A)}\) is monotone in \(B\subseteq A\) if and only if \(\varphi \equiv \varphi ^\oslash \). To define the translation, by Fact 2, we may assume without loss of generality that \(\varphi \) is in the normal form \(\bigvee \nabla _{\texttt {M} }(\Sigma )\). We define the translation as $$\begin{aligned} \left( \bigvee \nabla _{\texttt {M} }(\Sigma )\right) ^\oslash {:=}\bigvee \nabla ^{B}_\texttt {M} (\Sigma ). \end{aligned}$$ From the construction it is clear that \(\varphi ^\oslash \in \texttt {Pos} _{B}(\texttt {M} (A))\). Then, the right-to-left direction of the proposition is immediate by Proposition 6. For the left-to-right direction, assume that \(\varphi \) is monotone in B. It suffices to prove that \((D,V) \models \varphi \) if and only if \((D,V) \models \varphi ^\oslash \). The left-to-right direction is trivial. For the converse, assume \((D,V) \models \varphi ^\oslash \) and let \(\Sigma \) be such that \((D,V) \models \nabla ^{B}_\texttt {M} (\Sigma )\). If \(D = \varnothing \), then \(\Sigma = \varnothing \) and \(\nabla ^{B}_\texttt {M} (\Sigma )= \nabla _{\texttt {M} }(\Sigma )\). Hence, assume \(D \ne \varnothing \), and clearly \(\Sigma \ne \varnothing \). We claim the existence of a surjective map \(T: D \rightarrow \Sigma \) such that \((D,V) \models \tau ^{B}_{T_{d}}(d)\), for every d in D. To see this, first note that, because of the existential part of \(\nabla ^{B}_\texttt {M} (\Sigma )\), every type \(S \in \Sigma \) has a 'B-witness' in \(\mathbb {D}\), that is, an element \(d_{S} \in D\) such that \((D,V) \models \tau ^{B}_{S}(d_{S})\). It is in fact safe to assume that all these witnesses are distinct (this is because (D, V) can be proved to be \(\texttt {M} \)-equivalent to such a model, cf. Proposition 18). This means that we may define \(T: d_{S} \mapsto S\), which will ensure the surjectivity of T. It remains to extend the definition of T to the elements of D that are not of the form \(d_{S}\) for some \(S \in \Sigma \). But this is easy: because of the universal part of \(\nabla ^{B}_\texttt {M} (\Sigma )\), we may find for every element d in D some type \(S_{d}\) in \(\Sigma \) such that \((D,V) \models \tau ^{B}_{S_{d}}(d)\).
Putting these observations together, it should be clear that the map \(T: D \rightarrow \wp (A)\) given by \(T(d) {:=}S\) if \(d = d_{S}\) for some \(S \in \Sigma \), and \(T(d) {:=}S_{d}\) otherwise, satisfies the requirements. Now let \(U: A \rightarrow \wp (D)\) be the valuation of which T is the associated colouring, cf. Definition 2. That is, we put \(U(a) {:=}\{ d \in D \mid a \in T_{d} \}\). The definition of U is tailored towards the claim that $$\begin{aligned} (D,U) \models \nabla _{\texttt {M} }(\Sigma ). \end{aligned}$$ To see why this is the case, first take an arbitrary \(d \in D\); it is immediate by the definitions that \((D,U) \models \tau _{T_{d}}(d)\), and since \(T_{d} \in \Sigma \), this takes care of the universal conjunct of the formula \(\nabla _{\texttt {M} }(\Sigma )\). Now take an arbitrary \(S \in \Sigma \). It follows by the surjectivity of T that there is a \(d \in D\) such that \(S = T_{d}\); and since we saw that \((D,U) \models \tau _{T_{d}}(d)\), this takes care of the existential part. Clearly it follows from (6) that \((D,U) \models \varphi \). But then by monotonicity of \(\varphi \), we are done if we can show that $$\begin{aligned} U \le _{B} V. \end{aligned}$$ To see this, observe that for \(a \in A \setminus B\) we have the following equivalences: $$\begin{aligned} d \in U(a) \iff a \in T_{d} \iff (D,V) \models a(d) \iff d \in V(a), \end{aligned}$$ while for \(b \in B\) we can prove $$\begin{aligned} d \in U(b) \iff b \in T_{d} \Longrightarrow (D,V) \models b(d) \iff d \in V(b). \end{aligned}$$ This suffices to prove (7), and finishes the proof of the Proposition. \(\square \) A careful analysis of the translation gives us the following corollary, providing normal forms for the monotone fragment of \(\texttt {M} \). For any sentence \(\varphi \in \texttt {M} (A)\), the following hold. The formula \(\varphi \) is monotone in \(B \subseteq A\) iff it is equivalent to a formula in the basic form \(\bigvee \nabla ^{B}_\texttt {M} (\Sigma )\) for some types \(\Sigma \subseteq \wp (A)\). The formula \(\varphi \) is monotone in every \(a\in A\) iff \(\varphi \) is equivalent to a formula \(\bigvee \nabla ^+_{\texttt {M} }(\Sigma )\) for some types \(\Sigma \subseteq \wp (A)\). In both cases the normal forms are effective. Monotone fragment of \(\texttt {M} \texttt {E} \) In order to prove the \(\texttt {M} \texttt {E} \)-variant of Proposition 7, we need to introduce some new notation. Let \(B\subseteq A\) be a finite set of names, \(\Pi \subseteq \wp (A)\) be some types, and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) some list of types. The B-positive variant of \(\nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) is given as follows: $$\begin{aligned} \nabla ^{B}_{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )&{:=}\exists \overline{{\mathbf {x}}}.\big (\text {diff}(\overline{{\mathbf {x}}}) \wedge \bigwedge _i \tau ^{B}_{T_i}(x_i) \wedge \forall z.(\text {diff}(\overline{{\mathbf {x}}},z) \rightarrow \bigvee _{S\in \Pi } \tau ^{B}_S(z)) \big ). \end{aligned}$$ When the set B is a singleton \(\{a\}\) we will write a instead of B. The positive variant \(\nabla ^+_{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) of \(\nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) is defined as above but with \(+\) in place of B.
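For concreteness, the B-positive types and the translation of Proposition 8 are easy to compute symbolically. The following Python sketch (our own illustrative encoding, not part of the paper's formal development: names are strings, types are frozensets of names) renders the literals of \(\tau ^{B}_S(x)\) and the formula \(\nabla ^{B}_\texttt {M} (\Sigma )\); the \(\texttt {M} \texttt {E} \)-variant \(\nabla ^{B}_{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) differs only in the \(\text {diff}(\overline{{\mathbf {x}}})\) bookkeeping.

def tau_B(S, A, B):
    # Literals of the B-positive type tau^B_S(x): positive literals for
    # every name in S, negative literals only for names outside S and B.
    return [f"{a}(x)" for a in sorted(S)] + \
           [f"~{a}(x)" for a in sorted(A - (S | B))]

def nabla_B_M(Sigma, A, B):
    # Renders nabla^B_M(Sigma): one existential conjunct per type in Sigma,
    # plus a universal conjunct; assumes Sigma is nonempty.
    tau = lambda S: " & ".join(tau_B(S, A, B)) or "true"
    conjuncts = [f"exists x.({tau(S)})" for S in Sigma]
    conjuncts.append("forall x.(" + " | ".join(f"({tau(S)})" for S in Sigma) + ")")
    return " & ".join(conjuncts)

print(nabla_B_M([frozenset({'a'}), frozenset()], A={'a', 'b'}, B={'b'}))
# exists x.(a(x)) & exists x.(~a(x)) & forall x.((a(x)) | (~a(x)))

Note how, in the printed output, the negative literal \(\lnot b(x)\) that a full type would contribute has been discarded, exactly as \(\tau ^{B}_S\) prescribes.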
There exists an effective translation \((-)^\oslash :\texttt {M} \texttt {E} (A) \rightarrow \texttt {Pos} _{B}(\texttt {M} \texttt {E} (A))\) such that a sentence \({\varphi \in \texttt {M} \texttt {E} (A)}\) is monotone in B if and only if \(\varphi \equiv \varphi ^\oslash \). In Proposition 10 this result is proved for \(\texttt {M} \texttt {E} ^{\infty }\) (i.e., \(\texttt {M} \texttt {E} \) extended with generalised quantifiers). It is not difficult to adapt the proof to \(\texttt {M} \texttt {E} \). The translation is defined as follows. By Theorem 1 we may assume without loss of generality that \(\varphi \) is in basic normal form \(\bigvee \nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\). Then \(\varphi ^\oslash {:=}\bigvee \nabla ^{B}_{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\). \(\square \) Combining the normal form for \(\texttt {M} \texttt {E} \) and the proof of the above proposition, we obtain a normal form for the monotone fragment of \(\texttt {M} \texttt {E} \). For any sentence \(\varphi \in \texttt {M} \texttt {E} (A)\), the following hold. The formula \(\varphi \) is monotone in \(B\subseteq A\) iff it is equivalent to a formula in the basic form \(\bigvee \nabla ^{B}_{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) where for each disjunct we have \(\overline{{\mathbf {T}}} \in \wp (A)^k\) for some k and \(\Pi \subseteq \overline{{\mathbf {T}}}\). The formula \(\varphi \) is monotone in all \(a\in A\) iff it is equivalent to a formula in the basic form \(\bigvee \nabla ^+_{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi )\) where for each disjunct we have \(\overline{{\mathbf {T}}} \in \wp (A)^k\) for some k and \(\Pi \subseteq \overline{{\mathbf {T}}}\). In both cases, normal forms are effective. Monotone fragment of \(\texttt {M} \texttt {E} ^{\infty }\) First, in this case too we introduce some notation for the positive variant of a sentence in normal form. Let \(B\subseteq A\) be a finite set of names, \(\Sigma , \Pi \subseteq \wp (A)\) be some types, and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) some list of types. The B-positive variant of \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) is given as follows: $$\begin{aligned} \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )&{:=}\nabla ^{B}_{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi ) \wedge \nabla ^{B}_\infty (\Sigma )\\ \nabla ^{B}_\infty (\Sigma )&{:=}\bigwedge _{S\in \Sigma } \exists ^\infty y.\tau ^{B}_S(y) \wedge \forall ^\infty y.\bigvee _{S\in \Sigma } \tau ^{B}_S(y). \end{aligned}$$ When the set B is a singleton \(\{a\}\) we will write a instead of B. The positive variant of \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) is defined as \(\nabla ^+_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) {:=}\nabla ^{A}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\). We are now ready to proceed with the proof of the \(\texttt {M} \texttt {E} ^{\infty }\)-variant of Proposition 7 and to give the translation. There is an effective translation \((-)^\oslash :\texttt {M} \texttt {E} ^{\infty }(A) \rightarrow \texttt {Pos} _{B}(\texttt {M} \texttt {E} ^{\infty }(A))\) such that a sentence \({\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)}\) is monotone in B if and only if \(\varphi \equiv \varphi ^\oslash \).
By Theorem 2, we assume that \(\varphi \) is in the normal form \(\bigvee \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\), where each disjunct satisfies \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) = \nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi \cup \Sigma ) \wedge \nabla _{\!\!\infty }(\Sigma )\) for some sets and list of types \(\Pi ,\Sigma , \overline{{\mathbf {T}}} \subseteq \wp (A)\) with \(\Sigma \subseteq \Pi \subseteq \overline{{\mathbf {T}}}\). For the translation we define $$\begin{aligned} \Big (\bigvee \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\Big )^\oslash {:=}\bigvee \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ). \end{aligned}$$ From the construction it is clear that \(\varphi ^\oslash \in \texttt {Pos} _{B}(\texttt {M} \texttt {E} ^{\infty }(A))\). Then, the right-to-left direction of the proposition is immediate by Proposition 6. For the left-to-right direction, assume that \(\varphi \) is monotone in B; we have to prove that \((D,V) \models \varphi \) if and only if \((D,V) \models \varphi ^\oslash \). Assume \((D,V) \models \varphi ^\oslash \), and in particular that \((D,V) \models \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\). If \(D = \varnothing \), then \(\Sigma = \Pi = \overline{{\mathbf {T}}} = \varnothing \) and \(\nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )= \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\). Hence, assume \(D \ne \varnothing \). Observe that the elements of D can be partitioned in the following way: (a) distinct elements \(t_i \in D\) such that each \(t_i\) satisfies \(\tau ^{B}_{T_i}(x)\); (b) for every \(S \in \Sigma \) an infinite set \(D_S\), such that every \(d \in D_S\) satisfies \(\tau ^{B}_{S}\); (c) a finite set \(D_\Pi \) of elements, each satisfying one of the B-positive types \(\tau ^{B}_{S}\) with \(S \in \Pi \setminus \Sigma \). Following this partition, with every element \(d\in D\) we may associate a type \(S_{d}\) in, respectively, (a) \(\overline{{\mathbf {T}}}\), (b) \(\Sigma \), or (c) \(\Pi \setminus \Sigma \), such that d satisfies \(\tau ^{B}_{S_{d}}\). As in the proof of Proposition 8, we now consider the valuation U defined by its colouring \(U^{\flat }(d) {:=}S_d\), and as before we can show that \(U \le _{B} V\). Finally, it easily follows from the definitions that \((D,U) \models \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\), implying that \((D,U) \models \varphi \). But then by the assumed B-monotonicity of \(\varphi \) it is immediate that \((D,V) \models \varphi \), as required. \(\square \) As with the previous two cases, the translation provides normal forms for the monotone fragment of \(\texttt {M} \texttt {E} ^{\infty }\). For any sentence \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\), the following hold: The formula \(\varphi \) is monotone in \(B \subseteq A\) iff it is equivalent to a formula \(\bigvee \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) for \(\Sigma \subseteq \Pi \subseteq \wp (A)\) and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) for some k.
The formula \(\varphi \) is monotone in every \(a\in A\) iff it is equivalent to a formula in the basic form \(\bigvee \nabla ^+_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) for types \(\Sigma \subseteq \Pi \subseteq \wp (A)\) and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) for some k. In this section we study the sentences that are continuous in some set B of monadic predicate symbols. Let U and V be two A-valuations on the same domain D. For a set \(B \subseteq A\), we write \(U \le ^{\omega }_{B} V\) if \(U \le _{B} V\) and U(b) is finite, for every \(b \in B\). Given a monadic logic \(\texttt {L} \) and a formula \(\varphi \in \texttt {L} (A)\) we say that \(\varphi \) is continuous in \(B \subseteq A\) if \(\varphi \) is monotone in B and satisfies the following: $$\begin{aligned} \text {if } (D, V), g \models \varphi \text { then } (D, U), g \models \varphi \text { for some } U \le ^{\omega }_{B} V, \end{aligned}$$ for every monadic model (D, V) and every assignment \(g:\mathsf {iVar}\rightarrow D\). As for monotonicity it is straightforward to show that a formula \(\varphi \) is continuous in a set B iff it is continuous in every \(b \in B\). What explains both the name and the importance of this property is its equivalence to so-called Scott continuity. To understand it, we may formalise the dependence of the meaning of a monadic formula \(\varphi \) with m free variables \(\overline{{\mathbf {x}}}\) in a monadic model \(\mathbb {D}= (D,V )\) on a fixed name \(b \in A\) as a map \(\varphi ^\mathbb {D}_b : \wp (D) \rightarrow \wp (D^m)\) defined by $$\begin{aligned} X \subseteq D \mapsto \{ \overline{{\mathbf {d}}} \in D^m \mid (D,V[b \mapsto X]) \models \varphi (\overline{{\mathbf {d}}}) \}. \end{aligned}$$ One can then verify that a formula \(\varphi \) is continuous in b if and only if the operation \(\varphi ^\mathbb {D}_b\) is continuous with respect to the Scott topology on the powerset algebras. Scott continuity is of key importance in many areas of theoretical computer science where ordered structures play a role, such as domain theory (see e.g. [1]). Similarly to the case of monotonicity, the semantic property of continuity can be provided with a syntactic characterisation. Let \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} \}\). The fragment of \(\texttt {L} (A)\)-formulas that are syntactically continuous in a subset \(B \subseteq A\) is defined by the following grammar: $$\begin{aligned} \varphi \mathrel {::=}\psi \mid b(x) \mid (\varphi \wedge \varphi ) \mid (\varphi \vee \varphi ) \mid \exists x.\varphi , \end{aligned}$$ where \(b\in B\) and \(\psi \in \texttt {L} (A\setminus B)\). In both cases, we let \(\texttt {Con} _{B}(\texttt {L} (A))\) denote the set of B-continuous sentences. To define the syntactically continuous fragment of \(\texttt {M} \texttt {E} ^{\infty }\), we first introduce the following binary generalised quantifier \(\mathbf {W} \): given two formulas \(\varphi (x)\) and \(\psi (x)\), we set $$\begin{aligned} \mathbf {W} x.(\varphi ,\psi ) {:=}\forall x.(\varphi (x) \vee \psi (x)) \wedge \forall ^\infty x.\psi (x). \end{aligned}$$ The intuition behind \(\mathbf {W} \) is the following. If \((D,V),g \models \mathbf {W} x.(\varphi , \psi )\), then because of the second conjunct there are only finitely many \(d \in D\) refuting \(\psi \).
The point is that this weakens the universal quantification of the first conjunct to the effect that only the finitely many mentioned elements refuting \(\psi \) need to satisfy \(\varphi \). The fragment of \(\texttt {M} \texttt {E} ^{\infty }(A)\)-formulas that are syntactically continuous in a subset \(B \subseteq A\) is given by the following grammar: $$\begin{aligned} \varphi \mathrel {::=}\psi \mid b(x) \mid (\varphi \wedge \varphi ) \mid (\varphi \vee \varphi ) \mid \exists x.\varphi \mid \mathbf {W} x.(\varphi ,\psi ), \end{aligned}$$ where \(b\in B\) and \(\psi \in \texttt {M} \texttt {E} ^{\infty }(A\setminus B)\). We let \(\texttt {Con} _{B}(\texttt {M} \texttt {E} ^{\infty }(A))\) denote the set of B-continuous \(\texttt {M} \texttt {E} ^{\infty }\)-sentences. For \(\texttt {M} \) and \(\texttt {M} \texttt {E} \), the equivalence between the semantic and syntactic properties of continuity was established by van Benthem in [5]. To keep this paper self-contained, we give a sketch of this proof, which is based on a compactness argument. Let \(\varphi \) be a sentence of the monadic logic \(\texttt {L} (A)\), where \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} \}\). Then \(\varphi \) is continuous in a set \(B \subseteq A\) if and only if there is an equivalent sentence \(\varphi ^{\ominus } \in \texttt {Con} _{B}(\texttt {L} (A))\). The direction from right to left is covered by Proposition 12 below, so we immediately turn to the completeness part of the statement. The case of \(\texttt {M} \) being treated in Sect. 5.1, we only discuss the statement for \(\texttt {M} \texttt {E} \). Hence, let \(\varphi \in \texttt {M} \texttt {E} (A)\) be continuous in B. For simplicity of exposition, we assume \(B=\{b\}\); the case of an arbitrary B can easily be generalised from what follows. Let \(y_0, y_{1}, \ldots \) be an infinite list of variables not occurring in \(\varphi \). For \(k \in \omega \), consider the formula $$\begin{aligned} \varphi _k {:=}\exists y_{0} \cdots \exists y_{k-1}\, \left( \bigwedge _{\ell <k}b(y_{\ell }) \wedge \varphi ([\overline{{\mathbf {y}}}/b])\right) , \end{aligned}$$ where \(\varphi ([\overline{{\mathbf {y}}}/b])\) is obtained from \(\varphi \) by substituting each occurrence of an atomic formula of the form b(x) with the formula \(\bigvee _{\ell < k} x \approx y_\ell \). Intuitively, \(\varphi _{k}\) expresses that \(\varphi \) holds if we reduce the current interpretation of b to some subset of size at most k. Define \(\varPhi {:=}\{ \varphi _k \mid k \in \omega \} \cup \{ \varphi _{{\mathbb {D}_\varnothing }}\}\), where \( \varphi _{{\mathbb {D}_\varnothing }} {:=}\forall x. \bot \) if \({\mathbb {D}_\varnothing }\models \varphi \) and \( \varphi _{{\mathbb {D}_\varnothing }} {:=}\exists x. \bot \) otherwise. Then by construction \(\varPhi \subset \texttt {Con} _{B}(\texttt {M} \texttt {E} (A))\). Now by continuity of \(\varphi \) we find that $$\begin{aligned} \varphi \models \bigvee \varPhi , \end{aligned}$$ that is, any non-empty monadic model that validates \(\varphi \) must validate one of the \(\varphi _{k}\). But then by compactness of first-order logic, there is an \(n \in \omega \) such that \(\varphi \models \bigvee _{k<n} \varphi _k \vee \varphi _{{\mathbb {D}_\varnothing }}\). By monotonicity, \(\varphi _{k} \models \varphi \), for every \(k \in \omega \), and by definition \(\varphi _{{\mathbb {D}_\varnothing }} \models \varphi \).
We therefore conclude that \(\varphi \equiv \bigvee _{k<n} \varphi _k \vee \varphi _{{\mathbb {D}_\varnothing }}\). As \(\texttt {Con} _{B}(\texttt {M} \texttt {E} (A))\) is closed under disjunctions, this ends the proof of the statement. \(\square \) In this paper, we extend such a characterisation to \(\texttt {M} \texttt {E} ^{\infty }\). Moreover, analogously to what we did in the previous section, for \(\texttt {M} \) and \(\texttt {M} \texttt {E} ^{\infty }\) we provide both an explicit translation and a decidability result. The corresponding results in the case of \(\texttt {M} \texttt {E} \) remain open. Let \(\varphi \) be a sentence of the monadic logic \(\texttt {L} (A)\), where \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} ^{\infty }\}\). Then \(\varphi \) is continuous in a set \(B \subseteq A\) if and only if there is an equivalent sentence \(\varphi ^{\ominus } \in \texttt {Con} _{B}(\texttt {L} (A))\). Furthermore, it is decidable whether a sentence \(\varphi \in \texttt {L} (A)\) has this property or not. Analogously to the previous case of monotonicity, the proof of the theorem is composed of two parts. We start with the right-to-left implication of the first claim (the preservation statement), which also holds for \(\texttt {M} \texttt {E} \). Every sentence \(\varphi \in \texttt {Con} _{B}(\texttt {L} (A))\) is continuous in B, where \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\). First observe that \(\varphi \) is monotone in B by Proposition 6. The case for \(D=\varnothing \) being clear, we assume \(D\ne \varnothing \). We show, by induction, that any formula \(\varphi \) in the fragment satisfies (8), for every non-empty monadic model (D, V) and assignment \({ g:\mathsf {iVar}\rightarrow D}\). If \(\varphi = \psi \in \texttt {L} (A\setminus B)\), changes in the B part of the valuation will not affect the truth value of \(\varphi \) and hence the condition is trivial. Case \(\varphi = b(x)\) for some \(b \in B\): if \((D, V), g \models b(x)\) then \(g(x)\in V(b)\). Let U be the valuation given by \(U(b) {:=}\{ g(x) \}\), \(U(a) {:=}\varnothing \) for \(a \in B \setminus \{b \}\) and \(U(a) {:=}V(a)\) for \(a \in A \setminus B\). Then it is obvious that \((D, U), g \models b(x)\), while it is immediate by the definitions that \(U \le ^{\omega }_{B} V\). Case \(\varphi = \varphi _1 \vee \varphi _2\): assume \((D, V), g \models \varphi \). Without loss of generality we can assume that \((D, V), g \models \varphi _1\) and hence by induction hypothesis there is \(U \le ^{\omega }_{B} V\) such that \((D, U), g \models \varphi _1\) which clearly implies \((D, U), g \models \varphi \). Case \(\varphi = \varphi _1 \wedge \varphi _2\): assume \((D, V), g \models \varphi \). By induction hypothesis we have \(U_1,U_2 \le ^{\omega }_{B} V\) such that \((D,U_{1}), g \models \varphi _1\) and \((D, U_2), g \models \varphi _2\). Let U be the valuation defined by putting \(U(a) {:=}U_{1}(a) \cup U_{2}(a)\); then clearly we have \(U \le ^{\omega }_{B} V\), while it follows by monotonicity that \((D,U), g \models \varphi _1\) and \((D, U), g \models \varphi _2\). Clearly then \((D, U), g \models \varphi \). Case \(\varphi = \exists x.\varphi '(x)\) and \((D, V), g \models \varphi \). By definition there exists \(d\in D\) such that \((D, V), g[x\mapsto d] \models \varphi '(x)\).
By induction hypothesis there is a valuation \(U \le ^{\omega }_{B} V\) such that \((D, U), g[x\mapsto d] \models \varphi '(x)\) and hence \((D, U), g \models \exists x.\varphi '(x)\). Case \(\varphi = \mathbf {W} x.(\varphi ',\psi )\in \texttt {Con} _{B}(\texttt {M} \texttt {E} ^{\infty }(A))\) and \((D, V), g \models \varphi \). Define the formulas \(\alpha (x)\) and \(\beta \) as follows: $$\begin{aligned} \varphi = \forall x.\underbrace{(\varphi '(x) \vee \psi (x))}_{\alpha (x)} \wedge \underbrace{\forall ^\infty x.\psi (x)}_\beta . \end{aligned}$$ Suppose that \((D, V), g \models \varphi \). By the induction hypothesis, for every \(d \in D\) which satisfies \((D, V), g_d \models \alpha (x)\) (where we write \( g_d {:=}g[x\mapsto d]\)) there is a valuation \(U_d \le ^{\omega }_{B} V\) such that \((D, U_d), g_d \models \alpha (x)\). The crucial observation is that because of \(\beta \), only finitely many elements of D refute \(\psi (x)\). Let U be the valuation defined by putting \(U(a) {:=}\bigcup \{U_d(a) \mid (D, V), g_d \not \models \psi (x) \}\). Note that for each \(b \in B\), the set U(b) is a finite union of finite sets, and hence finite itself; it follows that \(U \le ^{\omega }_{B} V\). We claim that $$\begin{aligned} (D, U), g \models \varphi . \end{aligned}$$ It is clear that \((D, U), g \models \beta \) because \(\psi \) (and hence \(\beta \)) is B-free. To prove that \((D, U), g \models \forall x\, \alpha (x)\), we have to show that \((D, U), g_d \models \varphi '(x) \vee \psi (x)\) for any \(d \in D\). We consider two cases: If \((D, V), g_d \models \psi (x)\) we are done, again because \(\psi \) is B-free. On the other hand, if \((D, V), g_d \not \models \psi (x)\), then \((D, U_d), g_d \models \alpha (x)\) by assumption on \(U_{d}\), while it is obvious that \(U_{d} \le _{B} U\); but then it follows by monotonicity of \(\alpha \) that \((D, U), g_d \models \alpha (x)\). This proves the claim, and finishes the proof. \(\square \) The second part of the proof of the theorem is constituted by the following stronger version of the expressive completeness result that provides, as a corollary, normal forms for the syntactically continuous fragments. Let \(\texttt {L} \) be one of the logics \(\{ \texttt {M} , \texttt {M} \texttt {E} ^{\infty }\}\). There exists an effective translation \((-)^\ominus :\texttt {L} (A) \rightarrow \texttt {Con} _{B}(\texttt {L} (A))\) such that a sentence \({\varphi \in \texttt {L} (A)}\) is continuous in \(B \subseteq A\) if and only if \(\varphi \equiv \varphi ^\ominus \). We prove the two manifestations of Proposition 13 separately, in two respective subsections. By putting together the two propositions above, we are able to conclude. The first claim follows from Proposition 13. Hence, by applying Fact 1 to Proposition 13, it is decidable whether a sentence \(\varphi \in \texttt {L} (A)\) is continuous in \(B \subseteq A\) or not. \(\square \) We conjecture that Proposition 13, and therefore Theorem 4, holds also for \(\texttt {L} =\texttt {M} \texttt {E} \). Continuous fragment of \(\texttt {M} \) Since continuity implies monotonicity, by Theorem 3, in order to verify the \(\texttt {M} \)-variant of Proposition 13, it is enough to prove the following result. There is an effective translation \((-)^\ominus :\texttt {Pos} _{B}(\texttt {M} (A)) \rightarrow \texttt {Con} _{B}(\texttt {M} (A))\) such that a sentence \(\varphi \in \texttt {Pos} _{B}(\texttt {M} (A))\) is continuous in \(B \subseteq A\) if and only if \(\varphi \equiv \varphi ^\ominus \).
By Corollary 1, to define the translation we may assume without loss of generality that \(\varphi \) is in the basic form \(\bigvee \nabla ^{B}_\texttt {M} (\Sigma )\). For the translation, let $$\begin{aligned} \left( \bigvee \nabla ^{B}_\texttt {M} (\Sigma )\right) ^\ominus {:=}\bigvee \nabla ^{B}_\texttt {M} (\Sigma ,\Sigma ^{-}_{B}) \end{aligned}$$ where \(\Sigma ^{-}_{B} {:=}\{S\in \Sigma \mid B \cap S = \varnothing \}\). From the construction, it is clear that \(\varphi ^\ominus \in \texttt {Con} _{B}(\texttt {M} (A))\). Then the right-to-left direction of the proposition is immediate by Proposition 12. For the left-to-right direction, assume that \(\varphi \) is continuous in B. We have to prove that \((D, V) \models \varphi \) iff \((D, V) \models \varphi ^\ominus \), for every monadic model (D, V). Our proof strategy consists of proving the same equivalence for the model \((D\times \omega , V_\pi )\), where \(D\times \omega \) consists of \(\omega \) many copies of each element in D and \(V_\pi \) is the valuation given by \(V_{\pi }(a) {:=}\{(d,k) \mid d\in V(a), k\in \omega \}\). It is easy to see that \((D, V) \equiv ^{\texttt {M} } (D\times \omega , V_\pi )\) (see Proposition 18) and so it suffices indeed to prove that $$\begin{aligned} (D\times \omega , V_\pi ) \models \varphi \text { iff } (D\times \omega , V_\pi ) \models \varphi ^\ominus . \end{aligned}$$ Consider first the case where \(D= \varnothing \). Then \((D\times \omega , V_\pi ) = {\mathbb {D}_\varnothing }\), and the claim is true since \(\nabla ^{B}_\texttt {M} (\varnothing ) = \nabla ^{B}_\texttt {M} (\varnothing ,\varnothing ^{-}_{B})\) and \({\mathbb {D}_\varnothing }\models \nabla ^{B}_\texttt {M} (\Sigma )\) iff \(\Sigma =\varnothing \). In the remainder of the proof we focus on the case where \(D \ne \varnothing \). Let \((D\times \omega , V_\pi ) \models \varphi \). As \(\varphi \) is continuous in B there is a valuation \(U \le ^{\omega }_{B} V_\pi \) satisfying \((D\times \omega , U) \models \varphi \). This means that \((D\times \omega , U) \models \nabla ^{B}_\texttt {M} (\Sigma )\) for some disjunct \(\nabla ^{B}_\texttt {M} (\Sigma )\) of \(\varphi \). Below we will use the following fact (which can easily be verified): $$\begin{aligned} (D\times \omega ),U \models \tau ^{B}_{S}(d,k) \text { iff } U^{\flat }(d,k) \setminus B = S \setminus B \text { and } S \cap B \subseteq U^{\flat }(d,k) \cap B. \end{aligned}$$ Our claim is now that \((D\times \omega , U) \models \nabla ^{B}_\texttt {M} (\Sigma ,\Sigma ^{-}_{B})\). The existential part of \(\nabla ^{B}_\texttt {M} (\Sigma ,\Sigma ^{-}_{B})\) is trivially true. To cover the universal part, it remains to show that every element of \((D\times \omega , U)\) realizes a B-positive type in \(\Sigma ^{-}_{B}\). Take an arbitrary pair \((d,k) \in D\times \omega \) and let T be the (full) type of (d, k), that is, let \(T {:=}U^{\flat }(d,k)\). If \(B \cap T = \varnothing \), then \(T\in \Sigma ^{-}_{B}\) and we are done: indeed, by the universal part of \(\nabla ^{B}_\texttt {M} (\Sigma )\) there is some \(S \in \Sigma \) with \((D\times \omega ),U \models \tau ^{B}_{S}(d,k)\), and (10) then yields \(S \setminus B = T \setminus B = T\) and \(S \cap B \subseteq T \cap B = \varnothing \), so that \(S = T \in \Sigma \). So suppose \(B \cap T \ne \varnothing \). Observe that in \(D\times \omega \) we have infinitely many copies of \(d\in D\). Hence, as U(b) is finite for every \(b \in B\), there must be some \((d,k')\) with type \(U^{\flat }(d,k') = V_{\pi }^{\flat }(d,k') \setminus B = V_{\pi }^{\flat }(d,k) \setminus B = T \setminus B\).
It follows from \((D\times \omega , U) \models \nabla ^{B}_\texttt {M} (\Sigma )\) and (10) that there is some \(S \in \Sigma \) such that \(S \setminus B = U^{\flat }(d,k') \setminus B = U^{\flat }(d,k')\) and \(S \cap B \subseteq U^{\flat }(d,k') \cap B = \varnothing \). From this we easily derive that \(S = U^{\flat }(d,k')\) and \(S \in \Sigma ^{-}_{B}\). Finally, we observe that \(S \setminus B = U^{\flat }(d,k') \setminus B = U^{\flat }(d,k) \setminus B\) and \(S \cap B = \varnothing \subseteq U^{\flat }(d,k) \cap B\), so that by (10) we find that \((D \times \omega ,U) \models \tau ^{B}_{S}(d,k)\) indeed. Then, by monotonicity, it directly follows from \((D\times \omega , U) \models \nabla ^{B}_\texttt {M} (\Sigma ,\Sigma ^{-}_{B})\) that \((D\times \omega , V_{\pi }) \models \nabla ^{B}_\texttt {M} (\Sigma ,\Sigma ^{-}_{B})\), and from this it is immediate that \((D\times \omega , V_\pi ) \models \varphi ^\ominus \). For the converse direction, let \((D\times \omega , V_\pi ) \models \nabla ^{B}_\texttt {M} (\Sigma ,\Sigma ^{-}_{B})\) for some disjunct of \(\varphi ^\ominus \). To show that \((D\times \omega , V_\pi ) \models \nabla ^{B}_\texttt {M} (\Sigma )\), the existential part is trivial. For the universal part just observe that \(\Sigma ^{-}_{B} \subseteq \Sigma \). \(\square \) A careful analysis of the translation provides us with normal forms for the continuous fragment of \(\texttt {M} \). We also formulate a version of this result which holds when we restrict to the positive fragment of \(\texttt {M} \); this version, which can be proved in the same manner as the main result, will be needed in our companion paper. The formula \(\varphi \) is continuous in \(B \subseteq A\) iff it is equivalent to a formula \(\bigvee \nabla ^{B}_\texttt {M} (\Sigma ,\Sigma ^{-}_{B})\) for some types \(\Sigma \subseteq \wp (A)\), where \(\Sigma ^{-}_{B} {:=}\{S\in \Sigma \mid B \cap S = \varnothing \}\). If \(\varphi \) is positive in A (i.e., \(\varphi \in {\texttt {M} }^+(A)\)) then \(\varphi \) is continuous in \(B \subseteq A\) iff it is equivalent to a formula in the basic form \(\bigvee \nabla ^+_{\texttt {M} }(\Sigma ,\Sigma ^{-}_{B})\) for some types \(\Sigma \subseteq \wp (A)\), where \(\Sigma ^{-}_{B} {:=}\{S\in \Sigma \mid B \cap S = \varnothing \}\). Continuous fragment of \(\texttt {M} \texttt {E} ^{\infty }\) As for the previous case, the \(\texttt {M} \texttt {E} ^{\infty }\)-variant of Proposition 13 is an immediate consequence of Theorem 3 and the following proposition. There is an effective translation \((-)^\ominus :\texttt {Pos} _{B}(\texttt {M} \texttt {E} ^{\infty }(A)) \rightarrow \texttt {Con} _{B}(\texttt {M} \texttt {E} ^{\infty }(A))\) such that a sentence \(\varphi \in \texttt {Pos} _{B}(\texttt {M} \texttt {E} ^{\infty }(A))\) is continuous in B if and only if \(\varphi \equiv \varphi ^\ominus \). By Corollary 3, we may assume that \(\varphi \) is in basic normal form, i.e., \(\varphi = \bigvee \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\), with \(\Sigma \subseteq \Pi \subseteq \overline{{\mathbf {T}}}\).
For the translation let \(\big (\bigvee \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\big )^\ominus {:=}\bigvee \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )^\ominus \) where $$\begin{aligned} \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )^\ominus {:=}{\left\{ \begin{array}{ll} \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) &{} \text { if } B \cap \bigcup \Sigma = \varnothing \\ \bot &{}\text { if } B \cap \bigcup \Sigma \ne \varnothing . \end{array}\right. } \end{aligned}$$ First we prove the right-to-left direction of the proposition. By Proposition 12 it is enough to show that \(\varphi ^\ominus \in \texttt {Con} _{B}(\texttt {M} \texttt {E} ^{\infty }(A))\). We focus on the disjuncts of \(\varphi ^\ominus \). The interesting case is where \(B \cap \bigcup \Sigma = \varnothing \). Define the formulas \(\varphi '(\overline{{\mathbf {x}}},z)\) and \(\psi (z)\) as follows: $$\begin{aligned} \begin{array}{lll} \varphi '(\overline{{\mathbf {x}}},z) &{} {:=}&{} \lnot \text {diff}(\overline{{\mathbf {x}}},z) \vee \bigvee _{S\in \Pi \setminus \Sigma } \tau ^B_S(z)\\ \psi (z) &{} {:=}&{} \bigvee _{S\in \Sigma } \tau ^B_S(z). \end{array} \end{aligned}$$ Then we may rearrange the internal structure of the formula \(\nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) somewhat, arriving at the following: $$\begin{aligned}&\exists \overline{{\mathbf {x}}}.\Big ( \text {diff}(\overline{{\mathbf {x}}}) \wedge \bigwedge _i \tau ^B_{T_i}(x_i)\ \wedge \forall z.(\underbrace{\lnot \text {diff}(\overline{{\mathbf {x}}},z) \vee \bigvee _{S\in \Pi \setminus \Sigma } \tau ^B_S(z)}_{\varphi '(\overline{{\mathbf {x}}},z)} \vee \underbrace{\bigvee _{S\in \Sigma } \tau ^B_S(z)}_{\psi (z)})\\&\quad \wedge \forall ^\infty y.\underbrace{\bigvee _{S\in \Sigma } \tau ^B_S(y)}_{\psi (y)} \Big ) \wedge \bigwedge _{S\in \Sigma } \exists ^\infty y.\tau ^B_S(y), \end{aligned}$$ so that we find $$\begin{aligned} \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) \equiv \exists \overline{{\mathbf {x}}}.\Big (\text {diff}(\overline{{\mathbf {x}}}) \wedge \bigwedge _i \tau ^B_{T_i}(x_i) \wedge \mathbf {W} z.(\varphi '(\overline{{\mathbf {x}}},z),\psi (z)) \Big ) \wedge \bigwedge _{S\in \Sigma } \exists ^\infty y.\tau ^B_S(y), \end{aligned}$$ which belongs to the required fragment because \(B \cap \bigcup \Sigma = \varnothing \) guarantees that \(\psi \) and the conjuncts \(\exists ^\infty y.\tau ^B_S(y)\) (with \(S\in \Sigma \)) are B-free. For the left-to-right direction of the proposition, we have to prove that \(\varphi \equiv \varphi ^\ominus \). Let \((D, V) \models \varphi \). Because \(\varphi \) is continuous in B we may assume that V(b) is finite, for all \(b \in B\). Let \(\nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) be a disjunct of \(\varphi \) such that \((D, V) \models \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\). If \(D = \varnothing \), then \( {\overline{{\mathbf {T}}}}={\Pi }={\Sigma }=\varnothing \), and \(\nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) = (\nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ))^\ominus \). Hence, let \(D \ne \varnothing \). Suppose for contradiction that \(B \cap \bigcup \Sigma \ne \varnothing \); then there must be some \(S\in \Sigma \) with \(B \cap S \ne \varnothing \).
Because \((D, V) \models \nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) we have, in particular, that \((D, V) \models \exists ^\infty x.\tau ^B_S(x)\) and hence V(b) must be infinite, for any \(b \in B \cap S\), which is absurd. It follows that \(B \cap \bigcup \Sigma = \varnothing \), but then we trivially conclude that \((D, V) \models \varphi ^\ominus \) because the disjunct remains unchanged. Let \((D, V) \models \varphi ^\ominus \). This direction is trivial, because the only difference between \(\varphi \) and \(\varphi ^\ominus \) is that some disjuncts may have been replaced by \(\bot \). \(\square \) We conclude the section by stating the following corollary, providing normal forms for the continuous fragment of \(\texttt {M} \texttt {E} ^{\infty }\). As in the case of \(\texttt {M} \) we formulate, for future reference, a variation of this result which applies to the positive fragment of \(\texttt {M} \texttt {E} ^{\infty }\). For any sentence \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\), the following hold. The formula \(\varphi \) is continuous in \(B \subseteq A\) iff \(\varphi \) is equivalent to a formula, effectively obtainable from \(\varphi \), which is a disjunction of formulas \(\nabla ^{B}_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) where \(\Sigma , \Pi \subseteq \wp (A)\) and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) are such that \(\Sigma \subseteq \Pi \subseteq \overline{{\mathbf {T}}}\) and \(B \cap \bigcup \Sigma = \varnothing \). If \(\varphi \) is positive (i.e., \(\varphi \in {\texttt {M} \texttt {E} ^{\infty }}^+(A)\)) then \(\varphi \) is continuous in \(B \subseteq A\) iff it is equivalent to a formula, effectively obtainable from \(\varphi \), which is a disjunction of formulas \(\nabla ^+_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\), where \(\Sigma , \Pi \subseteq \wp (A)\) and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) are such that \(\Sigma \subseteq \Pi \subseteq \overline{{\mathbf {T}}}\) and \(B \cap \bigcup \Sigma = \varnothing \). Submodels and quotients There are various natural notions of morphism between monadic models; the one we will be interested in here is that of a (strong) homomorphism. Let \(\mathbb {D}= (D,V)\) and \(\mathbb {D}' = (D',V')\) be two monadic models. A map \(f: D \rightarrow D'\) is a homomorphism from \(\mathbb {D}\) to \(\mathbb {D}'\), notation: \(f: \mathbb {D}\rightarrow \mathbb {D}'\), if we have \(d \in V(a)\) iff \(f(d) \in V'(a)\), for all \(a \in A\) and \(d \in D\). In this section we will be interested in the sentences of \(\texttt {M} , \texttt {M} \texttt {E} \) and \(\texttt {M} \texttt {E} ^{\infty }\) that are preserved under taking submodels and the ones that are invariant under quotients. Let \(\mathbb {D}= (D,V)\) and \(\mathbb {D}' = (D',V')\) be two monadic models. We call \(\mathbb {D}\) a submodel of \(\mathbb {D}'\) if \(D \subseteq D'\) and the inclusion map \(\iota _{DD'}: D \hookrightarrow D'\) is a homomorphism, and we say that \(\mathbb {D}'\) is a quotient of \(\mathbb {D}\) if there is a surjective homomorphism \(f: \mathbb {D}\rightarrow \mathbb {D}'\). Now let \(\varphi \) be an \(\texttt {L} \)-sentence, where \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\).
We say that \(\varphi \) is preserved under taking submodels if \(\mathbb {D}\models \varphi \) implies \(\mathbb {D}' \models \varphi \), whenever \(\mathbb {D}'\) is a submodel of \(\mathbb {D}\). Similarly, \(\varphi \) is invariant under taking quotients if we have \(\mathbb {D}\models \varphi \) iff \(\mathbb {D}' \models \varphi \), whenever \(\mathbb {D}'\) is a quotient of \(\mathbb {D}\). The first of these properties (preservation under taking submodels) is well known from classical model theory—it is for instance the topic of the Łoś-Tarski Theorem. When it comes to quotients, in model theory one is usually more interested in the formulas that are preserved under surjective homomorphisms (and the definition of homomorphism may also differ from ours). For instance, this is the topic of Lyndon's Theorem [23] which characterises the formulas that are preserved under a weaker notion of homomorphism as the ones that are positive in all predicates occurring in the formula. Our preference for the notion of invariance under quotients stems from the fact that this property plays a key role in characterising the bisimulation-invariant fragments of various monadic second-order logics, as is explained in our companion paper [10]. Preservation under submodels In this subsection we characterise the fragments of our predicate logics consisting of the sentences that are preserved under taking submodels. That is, the main result of this subsection is a Łoś-Tarski Theorem for \(\texttt {M} \texttt {E} ^{\infty }\). The universal fragment of the set \(\texttt {M} \texttt {E} ^{\infty }(A)\) is the collection \(\texttt {Univ} (\texttt {M} \texttt {E} ^{\infty }(A))\) of formulas given by the following grammar: $$\begin{aligned} \varphi \mathrel {::=}\top \mid \bot \mid a(x) \mid \lnot a(x) \mid x \approx y \mid x \not \approx y \mid (\varphi \vee \varphi ) \mid (\varphi \wedge \varphi ) \mid \forall x.\varphi \mid \forall ^\infty x.\varphi \end{aligned}$$ where \(x,y\in \mathsf {iVar}\) and \(a \in A\). The universal fragment \(\texttt {Univ} (\texttt {M} \texttt {E} (A))\) is obtained by deleting the clause for \(\forall ^\infty \) from this grammar, and we obtain the universal fragment \(\texttt {Univ} (\texttt {M} (A))\) by further deleting both clauses involving the equality symbol. Let \(\varphi \) be a sentence of the monadic logic \(\texttt {L} (A)\), where \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\). Then \(\varphi \) is preserved under taking submodels if and only if there is an equivalent formula \(\varphi ^{\otimes } \in \texttt {Univ} (\texttt {L} (A))\). Furthermore, it is decidable whether a sentence \(\varphi \in \texttt {L} (A)\) has this property or not. We start by verifying that universal formulas satisfy the property. Let \(\varphi \in \texttt {Univ} (\texttt {L} (A))\) be a universal sentence of the monadic logic \(\texttt {L} (A)\), where \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\). Then \(\varphi \) is preserved under taking submodels. It is enough to directly consider the case \(\texttt {L} = \texttt {M} \texttt {E} ^{\infty }\). Let \((D',V')\) be a submodel of the monadic model (D, V). The case for \(D = \varnothing \) being immediate, let us assume \(D \ne \varnothing \).
By induction on the complexity of a formula \(\varphi \in \texttt {Univ} (\texttt {M} \texttt {E} ^{\infty }(A))\) we will show that for any assignment \(g: \mathsf {iVar}\rightarrow D'\) we have $$\begin{aligned} (D,V),g' \models \varphi \text { implies } (D',V'),g \models \varphi , \end{aligned}$$ where \(g':= \iota _{D'D} \circ g\). We will only consider the inductive step of the proof where \(\varphi \) is of the form \(\forall ^\infty x. \psi \). Define \(X_{D,V} {:=}\{ d \in D \mid (D,V), g'[x \mapsto d] \models \psi \}\), and similarly, \(X_{D',V'} {:=}\{ d \in D' \mid (D',V'), g[x \mapsto d] \models \psi \}\). By the inductive hypothesis we have that \(X_{D,V} \cap D' \subseteq X_{D',V'}\), implying that \(D' \setminus X_{D',V'} \subseteq D \setminus X_{D,V}\). But from this we immediately obtain that $$\begin{aligned} |D \setminus X_{D,V}|< \omega \text { implies } |D' \setminus X_{D',V'}| < \omega , \end{aligned}$$ which means that \((D,V),g' \models \varphi \) implies \( (D',V'),g \models \varphi \), as required. \(\square \) Turning to the much harder verification of the opposite implication of the theorem, we first define the appropriate translations from each monadic logic into its universal fragment. We start by defining the translations for sentences in basic normal forms. Let \(\Sigma , \Pi \subseteq \wp (A)\) be some types and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) some list of types. For \(\texttt {M} \)-sentences in basic form we first set $$\begin{aligned} \Big ( \nabla _{\texttt {M} }(\Sigma ) \Big )^{\otimes } {:=}\forall z \bigvee _{S \in \Sigma } \tau _{S}(z), \end{aligned}$$ in the case of \(\texttt {M} \texttt {E} \) we define $$\begin{aligned} \Big ( \nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi ) \Big )^{\otimes } {:=}\forall z \bigvee _{S \in \overline{{\mathbf {T}}}\cup \Pi } \tau _{S}(z), \end{aligned}$$ while for basic formulas of \(\texttt {M} \texttt {E} ^{\infty }\), the translation \((-)^\otimes \) is given as follows: $$\begin{aligned} (\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ))^{\otimes } {:=}\forall z \bigvee _{S \in \overline{{\mathbf {T}}}\cup \Pi } \tau _{S}(z) \wedge \forall ^\infty z \bigvee _{S \in \Sigma } \tau _{S}(z). \end{aligned}$$ Second, in each case we define \((\bigvee _{i} \varphi _{i})^{\otimes } {:=}\bigvee \varphi _{i}^{\otimes }\). Finally, for each \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\), we extend the translation \((-)^{\otimes }\) to the collection of all sentences by defining \(\varphi ^{\otimes } {:=}(\varphi ^{*})^{\otimes }\), where \(\varphi ^{*}\) is the basic normal form of \(\varphi \) as given by Fact 2 (in the case of \(\texttt {M} \)), by Theorem 1 (in the case of \(\texttt {M} \texttt {E} \)), and by Theorem 2 (in the case of \(\texttt {M} \texttt {E} ^{\infty }\)). The missing part in the proof of the theorem is covered by the following result. For any monadic logic \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\) there is an effective translation \((-)^{\otimes } : \texttt {L} (A) \rightarrow \texttt {Univ} (\texttt {L} (A))\) such that a sentence \(\varphi \in \texttt {L} (A)\) is preserved under taking submodels if and only if \(\varphi \equiv \varphi ^{\otimes }\). We only consider the case where \(\texttt {L} = \texttt {M} \texttt {E} ^{\infty }\), leaving the other cases to the reader.
It is easy to see that \(\varphi ^{\otimes } \in \texttt {Univ} (\texttt {M} \texttt {E} ^{\infty }(A))\), for every sentence \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\); but then it is immediate by Proposition 16 that \(\varphi \) is preserved under taking submodels if \(\varphi \equiv \varphi ^{\otimes }\). For the left-to-right direction, assume that \(\varphi \) is preserved under taking submodels. It is easy to see that \(\varphi \) implies \(\varphi ^{\otimes }\), so we focus on proving the opposite. That is, we suppose that \((D,V) \models \varphi ^{\otimes }\), and aim to show that \((D,V) \models \varphi \). By Theorem 2 we may assume without loss of generality that \(\varphi \) is a disjunction of sentences of the form \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\), where \(\Sigma \subseteq \Pi \subseteq \overline{{\mathbf {T}}}\). It follows that (D, V) satisfies some disjunct \(\Big (\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\Big )^{\otimes } = \forall z \bigvee _{S \in \overline{{\mathbf {T}}}\cup \Pi } \tau _{S}(z) \wedge \forall ^\infty z \bigvee _{S \in \Sigma } \tau _{S}(z)\) of \(\varphi ^{\otimes }\). Expand D with finitely many elements \(\overline{{\mathbf {d}}}\), in one-one correspondence with \(\overline{{\mathbf {T}}}\), and ensure that the type of each \(d_{i}\) is \(T_{i}\). In addition, add, for each \(S \in \Sigma \), infinitely many elements \(\{ e^{S}_{n} \mid n \in \omega \}\), each of type S. Call the resulting monadic model \(\mathbb {D}' = (D',V')\). This construction is tailored to ensure that \((D',V') \models \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\), and so we obtain \((D',V') \models \varphi \). But obviously, \(\mathbb {D}\) is a submodel of \(\mathbb {D}'\). This implies that \((D,V) \models \varphi \), by our assumption on \(\varphi \). \(\square \) The first part of the theorem is an immediate consequence of Proposition 17. By applying Fact 1 to Proposition 17 we finally obtain that for the three concerned formalisms the problem of deciding whether a sentence is preserved under taking submodels is decidable. \(\square \) As an immediate consequence of the proof of the previous Proposition 17, we get effective normal forms for the universal fragments. A sentence \(\varphi \in \texttt {M} (A)\) is preserved under taking submodels iff it is equivalent to a formula \(\bigvee \big ( \forall z \bigvee _{S \in \Sigma } \tau _{S}(z)\big )\), for types \(\Sigma \subseteq \wp (A)\). A sentence \(\varphi \in \texttt {M} \texttt {E} (A)\) is preserved under taking submodels iff it is equivalent to a formula \(\bigvee \big ( \forall z \bigvee _{S \in \overline{{\mathbf {T}}}\cup \Pi } \tau _{S}(z)\big )\), for types \(\Pi \subseteq \wp (A)\) and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) for some k. A sentence \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\) is preserved under taking submodels iff it is equivalent to a formula \(\bigvee \big (\forall z \bigvee _{S \in \overline{{\mathbf {T}}}\cup \Pi } \tau _{S}(z) \wedge \forall ^\infty z \bigvee _{S \in \Sigma } \tau _{S}(z)\big )\) for types \(\Sigma \subseteq \Pi \subseteq \wp (A)\) and \(\overline{{\mathbf {T}}} \in \wp (A)^k\) for some k. In all three cases, normal forms are effective.
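Continuing the toy symbolic encoding used earlier (our own illustration: types as frozensets of names), the \((-)^{\otimes }\) translation of a basic-form \(\texttt {M} \texttt {E} ^{\infty }\) disjunct can be read off directly from the definition; the following Python sketch merely illustrates the shape of the normal forms in the corollary above.

def full_type(S, A):
    # Literals of the full A-type tau_S(z).
    lits = [f"{a}(z)" for a in sorted(S)] + [f"~{a}(z)" for a in sorted(A - S)]
    return " & ".join(lits) if lits else "true"

def otimes(T, Pi, Sigma, A):
    # (nabla_ME^inf(T, Pi, Sigma))^otimes: keep only the universal
    # information of the disjunct, discarding the existential witnesses.
    disj = lambda types: " | ".join(f"({full_type(S, A)})" for S in types) or "false"
    return (f"forall z.({disj(set(T) | set(Pi))}) & "
            f"forall^inf z.({disj(Sigma)})")

T = (frozenset({'a'}),)
print(otimes(T, Pi={frozenset({'a'})}, Sigma={frozenset({'a'})}, A={'a'}))
# forall z.((a(z))) & forall^inf z.((a(z)))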
Invariance under quotients

The following theorem states that monadic first-order logic without equality (\(\texttt {M} \)) provides the quotient-invariant fragment both of monadic first-order logic with equality (\(\texttt {M} \texttt {E} \)) and of infinite-monadic predicate logic (\(\texttt {M} \texttt {E} ^{\infty }\)). Recall that a formula \(\varphi \) is invariant under taking quotients if it satisfies the condition that \(\mathbb {D}\models \varphi \) iff \(\mathbb {D}' \models \varphi \), for any monadic model \(\mathbb {D}\) and any quotient \(\mathbb {D}'\) of \(\mathbb {D}\).

Let \(\varphi \) be a sentence of the monadic logic \(\texttt {L} (A)\), where \(\texttt {L} \in \{ \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\). Then \(\varphi \) is invariant under taking quotients if and only if there is an equivalent sentence in \(\texttt {M} \). Furthermore, it is decidable whether a sentence \(\varphi \in \texttt {L} (A)\) has this property or not.

We first state the 'easy' part of the first claim of the theorem. Note that, in fact, we have already been using this observation in earlier parts of the paper. Every sentence in \(\texttt {M} \) is invariant under taking quotients.

Let \(f: D \rightarrow D'\) provide a surjective homomorphism between the models (D, V) and \((D',V')\), and observe that for any assignment \(g: \mathsf {iVar}\rightarrow D\) on D, the composition \(f \circ g: \mathsf {iVar}\rightarrow D'\) is an assignment on \(D'\). In order to prove the proposition one may show that, for an arbitrary \(\texttt {M} \)-formula \(\varphi \) and an arbitrary assignment \(g: \mathsf {iVar}\rightarrow D\), we have $$\begin{aligned} (D,V),g \models \varphi \text { iff } (D',V'), f \circ g \models \varphi . \end{aligned}$$ We leave the proof of (11), which proceeds by a straightforward induction on the complexity of \(\varphi \), as an exercise to the reader. \(\square \)

To prove the remaining part of Theorem 6, we start by providing translations from \(\texttt {M} \texttt {E} \) and from \(\texttt {M} \texttt {E} ^{\infty }\), respectively, to \(\texttt {M} \). For \(\texttt {M} \texttt {E} \)-sentences in basic form we first define $$\begin{aligned} \Big ( \nabla _{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi ) \Big )^{\circ } {:=}\bigwedge _{i} \exists x_i. \tau _{T_i}(x_i) \wedge \forall x. \bigvee _{S\in \Pi } \tau _S(x), \end{aligned}$$ whereas for \(\texttt {M} \texttt {E} ^{\infty }\)-sentences in basic form we start by defining $$\begin{aligned} \Big ( \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) \Big )^{\bullet } {:=}\bigwedge _{i} \exists x_i. \tau _{T_i}(x_i) \wedge \forall x. \bigvee _{S\in \Sigma } \tau _S(x). \end{aligned}$$ In both cases, the translation is then extended to the full language as in Definition 28. Note that the two maps may give different translations for \(\texttt {M} \texttt {E} \)-sentences. Also observe that the \(\Pi \) 'disappears' in the translation of the formula \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\).

The key property of these translations is the following. For every monadic model (D, V) and every \(\varphi \in \texttt {M} \texttt {E} (A)\) we have $$\begin{aligned} (D,V) \models \varphi ^{\circ } \text { iff } (D\times \omega ,V_\pi ) \models \varphi .
\end{aligned}$$ For every monadic model (D, V) and every \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\) we have $$\begin{aligned} (D,V) \models \varphi ^{\bullet } \text { iff } (D\times \omega ,V_\pi ) \models \varphi . \end{aligned}$$ Here \(V_{\pi }\) is the induced valuation given by \(V_{\pi }(a) {:=}\{ (d,k) \mid d \in V(a), k\in \omega \}\).

We only prove the claim for \(\texttt {M} \texttt {E} ^{\infty }\) (i.e., the second part of the proposition), the case for \(\texttt {M} \texttt {E} \) being similar. Clearly it suffices to prove (13) for formulas of the form \(\varphi = \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\). First of all, if \(\mathbb {D}\) is the empty model, we find \({\overline{{\mathbf {T}}}}={\Pi } = {\Sigma } = \varnothing \), \((D, V) = (D\times \omega , V_\pi )\), and \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) = (\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ))^\bullet \). In other words, in this case there is nothing to prove. In the sequel we assume that \(D \ne \varnothing \).

Assume \((D, V) \models \varphi ^{\bullet }\); we will show that \((D\times \omega , V_\pi ) \models \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\). Let \(d_i\) be such that \(V^{\flat }(d_i) = T_{i}\) in (D, V). It is clear that the \((d_i,i)\) provide distinct elements, with each \((d_i,i)\) satisfying \(\tau _{T_i}\) in \((D\times \omega , V_{\pi })\). Thus, the first-order existential part of \(\varphi \) is satisfied. With a similar argument it is straightforward to verify that the \(\exists ^\infty \)-part of \(\varphi \) is also satisfied; here we critically use the observation that \(\Sigma \subseteq \overline{{\mathbf {T}}}\), so that every type in \(\Sigma \) is witnessed in the model (D, V), and hence witnessed infinitely many times in \((D\times \omega , V_\pi )\). For the universal parts of \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\) it is enough to observe that, because of the universal part of \(\varphi ^\bullet \), every \(d\in D\) realizes a type in \(\Sigma \). By construction, the same applies to \((D\times \omega , V_{\pi })\). This takes care of both universal quantifiers.

Conversely, assuming that \((D\times \omega , V_\pi ) \models \nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\), we will show that \((D, V) \models \varphi ^\bullet \). The existential part of \(\varphi ^{\bullet }\) is trivial. For the universal part we have to show that every element of D realizes a type in \(\Sigma \). Suppose not, and let \(d\in D\) be such that \(\lnot \tau _S(d)\) for all \(S\in \Sigma \). Then we have \((D\times \omega , V_\pi ) \not \models \tau _S(d,k)\) for all k. That is, there are infinitely many elements not realising any type in \(\Sigma \), and hence \((D\times \omega , V_\pi ) \not \models \forall ^\infty y.\bigvee _{S\in \Sigma } \tau _S(y)\). This is absurd, because this formula is a conjunct of \(\nabla _{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\). \(\square \)
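In the same illustrative spirit as the earlier sketch, the translation \((\cdot )^{\bullet }\) on basic forms admits a direct rendering. Again this is our own sketch, not the authors' code; it simply makes visible that \(\Pi \) plays no role in the output.

```python
from typing import FrozenSet, List, Sequence

Type = FrozenSet[str]

def tau(S: Type, A: Sequence[str], x: str) -> str:
    """tau_S(x): x satisfies exactly the predicates in S."""
    return "(" + " & ".join(f"{a}({x})" if a in S else f"~{a}({x})" for a in A) + ")"

def nabla_bullet(T_list: List[Type], Pi: List[Type], Sigma: List[Type],
                 A: Sequence[str]) -> str:
    """(nabla_{ME^inf}(T, Pi, Sigma))^{bullet}: an existential witness for each
    T_i plus a universal clause over Sigma; the argument Pi is ignored."""
    clauses = [f"(exists x{i}. {tau(T, A, f'x{i}')})" for i, T in enumerate(T_list)]
    clauses.append("(forall x. " + " | ".join(tau(S, A, "x") for S in Sigma) + ")")
    return " & ".join(clauses)
```

Proposition 19 above is exactly the statement that this output, evaluated in (D, V), behaves like the input evaluated in the \(\omega \)-product \((D\times \omega , V_\pi )\).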
We will now show how the theorem follows from this. First of all we verify that in both cases \(\texttt {M} \) is expressively complete for the property of being invariant under taking quotients.

For any monadic logic \(\texttt {L} \in \{ \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\) there is an effective translation into \(\texttt {M} \) such that a sentence \(\varphi \in \texttt {L} (A)\) is invariant under taking quotients if and only if \(\varphi \) is equivalent to its translation.

Let \(\varphi \) be a sentence of \(\texttt {M} \texttt {E} ^{\infty }\), and let \(\varphi ^{\bullet }\) be its translation (we only cover the case of \(\texttt {L} = \texttt {M} \texttt {E} ^{\infty }\); the case for \(\texttt {L} = \texttt {M} \texttt {E} \) is similar, just take \(\varphi ^{\circ }\) instead). We will show that

\(\varphi \) is invariant under taking quotients iff \(\varphi \equiv \varphi ^{\bullet }\). (14)

The direction from right to left is immediate by Proposition 18. For the other direction it suffices to observe that any model (D, V) is a quotient of its '\(\omega \)-product' \((D\times \omega , V_\pi )\), and to reason as follows: $$\begin{aligned} (D,V) \models \varphi&\text { iff } (D\times \omega , V_\pi ) \models \varphi&(\text {assumption on }\varphi ) \\&\text { iff } (D,V) \models \varphi ^{\bullet }&(\text {Proposition }19) \end{aligned}$$ \(\square \)

Hence we can conclude. The theorem is an immediate consequence of Proposition 20. Finally, the effectiveness of the translation \((\cdot )^{\bullet }\), the decidability of \(\texttt {M} \texttt {E} ^{\infty }\) (Fact 1) and (14) yield that it is decidable whether a given \(\texttt {M} \texttt {E} ^{\infty }\)-sentence \(\varphi \) is invariant under taking quotients or not. \(\square \)

As a corollary, we obtain: Let \(\varphi \) be a sentence of the monadic logic \(\texttt {L} (A)\), where \(\texttt {L} \in \{ \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\). Then \(\varphi \) is invariant under taking quotients if and only if there is an equivalent sentence \(\nabla _{\texttt {M} }(\Sigma )\) for types \(\Sigma \subseteq \wp (A)\). Moreover, such a normal form is effective.

In our companion paper [10] on automata, we need versions of these results for the monotone and the continuous fragment. For this purpose we define some slight modifications of the translations \((\cdot )^{\circ }\) and \((\cdot )^{\bullet }\) which restrict to positive and syntactically continuous sentences.

There are effective translations \((\cdot )^{\circ }: \texttt {M} \texttt {E} ^+ \rightarrow \texttt {M} ^{+} \) and \((\cdot )^{\bullet }: {\texttt {M} \texttt {E} ^{\infty }}^+ \rightarrow \texttt {M} ^+\) such that \(\varphi \equiv \varphi ^{\circ }\) (respectively, \(\varphi \equiv \varphi ^{\bullet }\)) iff \(\varphi \) is invariant under quotients. Moreover, we may assume that \((\cdot )^{\bullet }: \texttt {Con} _{B}({\texttt {M} \texttt {E} ^{\infty }}(A))\cap {\texttt {M} \texttt {E} ^{\infty }}^+ \rightarrow \texttt {Con} _{B}(\texttt {M} (A)) \cap \texttt {M} ^+\), for any \(B \subseteq A\).

We define translations \((\cdot )^{\circ }:\texttt {M} \texttt {E} ^+ \rightarrow \texttt {M} ^+\) and \((\cdot )^{\bullet }: {\texttt {M} \texttt {E} ^{\infty }}^+ \rightarrow \texttt {M} ^+\) as follows. For \(\texttt {M} \texttt {E} ^{+},{\texttt {M} \texttt {E} ^{\infty }}^{+}\)-sentences in simple basic form we define $$\begin{aligned} \begin{array}{lll} \Big ( \nabla ^+_{\texttt {M} \texttt {E} }(\overline{{\mathbf {T}}},\Pi ) \Big )^{\circ } &{} {:=}&{} \bigwedge _{i} \exists x_i. \tau ^+_{T_i}(x_i) \wedge \forall x. \bigvee _{S\in \Pi } \tau ^+_S(x),\\ \Big ( \nabla ^+_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma ) \Big )^{\bullet } &{} {:=}&{} \bigwedge _{i} \exists x_i. \tau ^+_{T_i}(x_i) \wedge \forall x.
\bigvee _{S\in \Sigma } \tau ^+_S(x), \end{array} \end{aligned}$$ and then we use, respectively, the Corollaries 2 and 3 to extend these translations to the full positive fragments \(\texttt {M} \texttt {E} ^{+}\) and \({\texttt {M} \texttt {E} ^{\infty }}^{+}\), as we did in Definition 29 for the full language. We leave it as an exercise for the reader to prove the analogue of Proposition 19 for these translations, and to show how the first statements of the theorem follow from this.

Finally, to see why we may assume that \((\cdot )^{\bullet }\) restricts to a map from the syntactically B-continuous fragment of \({\texttt {M} \texttt {E} ^{\infty }}^+(A)\) to the syntactically B-continuous fragment of \({\texttt {M} }^+(A)\), assume that \(\varphi \in \texttt {M} \texttt {E} ^{\infty }(A)\) is continuous in \(B \subseteq A\). By Corollary 5 we may assume that \(\varphi \) is a disjunction of formulas of the form \(\nabla ^+_{\texttt {M} \texttt {E} ^{\infty }}(\overline{{\mathbf {T}}},\Pi ,\Sigma )\), where \(B \cap \bigcup \Sigma = \varnothing \). This implies that in the formula \(\varphi ^{\bullet }\) no predicate symbol \(b \in B\) occurs in the scope of a universal quantifier, and so \(\varphi ^{\bullet }\) is indeed syntactically continuous in B. \(\square \)

In this paper we established some model-theoretic results about the logic \(\texttt {M} \texttt {E} ^{\infty }\), a variation of monadic first-order logic that features the generalised quantifier \(\exists ^\infty \) ('there are infinitely many'), and about its classical fragments \(\texttt {M} \texttt {E} \) and \(\texttt {M} \), consisting of, respectively, monadic first-order logic with and without equality. For each logic \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\) we used the method of Ehrenfeucht–Fraïssé games to show that arbitrary sentences can be effectively rewritten into some normal form. We subsequently used these normal forms to prove a number of characterisation theorems, covering some well-known semantic properties, viz., monotonicity and preservation under submodels, but also some properties of more specific interest, viz., continuity and invariance under quotients.

In all cases we actually proved a stronger result than a mere characterisation theorem: we provided a map, effectively translating arbitrary sentences into sentences of the required syntactic shape, and we showed that an arbitrary sentence in \(\texttt {L} \) has the semantic property under scrutiny iff it is equivalent to its translation. As a consequence of this result and the fact that each \(\texttt {L} \in \{ \texttt {M} , \texttt {M} \texttt {E} , \texttt {M} \texttt {E} ^{\infty }\}\) has a decidable satisfiability problem, we showed that each of the mentioned properties is decidable for monadic first-order sentences.

Our main interest concerned the language \(\texttt {M} \texttt {E} ^{\infty }\) with the infinity quantifier. Since this operator does not make sense in finite models, we did not explicitly investigate which of our results on the other languages, \(\texttt {M} \) and \(\texttt {M} \texttt {E} \), hold as well in the setting of finite model theory. We claim, however, that all of our results on normal forms, and on characterisations of monotonicity, preservation under submodels, and invariance under quotients, hold in this setting as well, with only minor adaptations of the proofs.
(The remaining property of continuity is obviously not of interest in the setting of finite models.) For instance, some of our proofs use a model-theoretic copying construction that turns an arbitrary monadic model (D, V) into its \(\omega \)-fold copy \((D\times \omega , V_{\pi })\). In the setting of finite model theory, this construction needs to be replaced with a more fine-grained k-fold copying construction, with k a finite number depending on the sentence under investigation.

We finish by mentioning some suggestions for further research. First, given that many semantic properties of monadic predicate logics turn out to be decidable, a natural follow-up question would be to investigate the computational complexity of these problems. Second, by van Benthem's result (cf. Proposition 11), a sentence \(\varphi \in \texttt {M} \texttt {E} (A)\) is continuous in a set \(B \subseteq A\) if and only if it is equivalent to some \(\varphi ^{\ominus }\) in the syntactic fragment \(\texttt {Con} _{B}(\texttt {L} (A))\). Intriguingly, we did not manage to prove this result using the normal form method; we conjecture, however, that the obvious analogues of Proposition 13 and Theorem 4 do hold for \(\texttt {L} =\texttt {M} \texttt {E} \). Finally, one perspective on our work is that it bears further witness to the fact that failure of compactness is in itself not an obstacle for the development of model theory, as is well known, of course, from the area of finite model theory that we just mentioned. It would be interesting to see which of our characterisation results still hold if we drop the restriction to monadic predicate logic, and investigate the full language \(\texttt {FOE} ^{\infty }\) of first-order logic (with equality) extended with the infinity quantifier \(\exists ^\infty \), or fragments of \(\texttt {FOE} ^{\infty }\) that are more expressive than \(\texttt {M} \texttt {E} ^{\infty }\). A first step in this direction was taken by Ignacio Bellas Acosta, who wrote, under the supervision of the third author, an MSc thesis [3] on the modal fragment of \(\texttt {FOE} ^{\infty }\), establishing, among other things, a van Benthem-style bisimulation invariance result.

Footnotes:

Although it is quite common to refer to these results as preservation theorems, in this paper we shall exclusively use the terminology characterisation theorem, reserving the term 'preservation' for the easier part of a characterisation result, which states that formulas in the given syntactic shape have the semantic property.

For an overview see e.g. [6, 28, 33]. For an introduction to the model theory of generalised quantifiers, the interested reader can consult for instance [29, Chapter 10].

Extensions of monadic first-order logic with other generalised quantifiers have also been studied (see e.g. [7, 25]). In particular, A. Rabinovitch suggested to us (in personal communication) that our decidability results can be obtained by formulating semantic properties of our monadic predicate logic formulas in certain propositional languages.

The argument in [25] is given in terms of the so-called Chang quantifier \(Q_{C}\), given by \((D,V) \models Q_{C}x. \varphi \) iff the set \(\{ d \in D \mid (D,V) \models \varphi (d) \}\) of objects that satisfy \(\varphi \) has the same cardinality as D itself. The proof is easily seen to work also for \(\exists ^\infty \) and \(\forall ^\infty \), however.

Both Mostowski's and Slomson's decidability results can be extended to the case of the empty domain.
The interested reader is referred to [15, Sec. 8] for a more precise discussion of the connection.

References

1. Abramsky, S., Jung, A.: Domain theory. In: Abramsky, S., Gabbay, D.M., Maibaum, T.S.E. (eds.) Handbook of Logic in Computer Science, vol. 3, pp. 2–168. Oxford University Press (1994)
2. Ackermann, W.: Solvable Cases of the Decision Problem. North-Holland Publishing Company, Amsterdam (1954)
3. Acosta, I.B.: Studies in the extension of standard modal logic with an infinite modality. Master's thesis, Institute for Logic, Language and Computation, Universiteit van Amsterdam (2020)
4. Behmann, H.: Beiträge zur Algebra der Logik, insbesondere zum Entscheidungsproblem. Mathematische Annalen (1922)
5. van Benthem, J.: Dynamic bits and pieces. ILLC preprint LP-1997-01 (1997)
6. van Benthem, J., Westerståhl, D.: Directions in generalized quantifier theory. Stud. Log. 55(3), 389–419 (1995)
7. Caicedo, X.: On extensions of \({L}_{ \omega \omega }({Q}_1)\). Notre Dame J. Form. Log. 22(1), 85–93 (1981)
8. Carreiro, F.: Fragments of fixpoint logics. Ph.D. thesis, Institute for Logic, Language and Computation, Universiteit van Amsterdam (2015)
9. Carreiro, F., Facchini, A., Venema, Y., Zanasi, F.: Weak MSO: automata and expressiveness modulo bisimilarity. In: Proceedings of the Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), pp. 1–27. ACM (2014)
10. Carreiro, F., Facchini, A., Venema, Y., Zanasi, F.: The power of the weak. ACM Trans. Comput. Log. 21(2) (2020). https://doi.org/10.1145/3372392
11. D'Agostino, G., Hollenberg, M.: Logical questions concerning the \(\mu \)-calculus: interpolation, Lyndon and Łoś–Tarski. J. Symb. Log. 65(1), 310–332 (2000)
12. Ebbinghaus, H.D., Flum, J.: Finite Model Theory. Perspectives in Mathematical Logic. Springer, Berlin (1995)
13. Ehrenfeucht, A.: An application of games to the completeness problem of formalized theories. Fundam. Math. 49, 129–141 (1961)
14. Facchini, A., Venema, Y., Zanasi, F.: A characterization theorem for the alternation-free fragment of the modal \(\mu \)-calculus. In: LICS, pp. 478–487. IEEE Computer Society (2013)
15. Fontaine, G., Venema, Y.: Some model theory for the modal \(\mu \)-calculus: syntactic characterisations of semantic properties. Log. Methods Comput. Sci. 14(1) (2018)
16. Grädel, E., Thomas, W., Wilke, T. (eds.): Automata, Logics, and Infinite Games: A Guide to Current Research. Lecture Notes in Computer Science, vol. 2500. Springer, Berlin (2002)
17. Hodges, W.: Model Theory. Cambridge University Press, Cambridge (1993)
18. Janin, D., Walukiewicz, I.: Automata for the modal \(\mu \)-calculus and related results. In: MFCS, pp. 552–562 (1995)
19. Janin, D., Walukiewicz, I.: On the expressive completeness of the propositional \(\mu \)-calculus with respect to monadic second order logic. In: Proceedings of the 7th International Conference on Concurrency Theory, CONCUR '96, pp. 263–277. Springer, London (1996). http://portal.acm.org/citation.cfm?id=646731.703838
20. Krawczyk, A., Krynicki, M.: Ehrenfeucht games for generalized quantifiers. In: Marek, W., Srebrny, M., Zarach, A. (eds.) Set Theory and Hierarchy Theory: A Memorial Tribute to Andrzej Mostowski. Lecture Notes in Mathematics, vol. 537, pp. 145–152. Springer (1976)
21. Lindström, P.: First order predicate logic with generalized quantifiers. Theoria 32(3), 186–195 (1966)
22. Löwenheim, L.: Über Möglichkeiten im Relativkalkül. Math. Ann. 76(4), 447–470 (1915)
23. Lyndon, R.C.: Properties preserved under homomorphism. Pac. J. Math. 9, 129–142 (1959)
24. Mostowski, A.: On a generalization of quantifiers. Fundam. Math. 44(1), 12–36 (1957)
25. Slomson, A.B.: The monadic fragment of predicate calculus with the Chang quantifier and equality. In: Proceedings of the Summer School in Logic, Leeds 1967, pp. 279–301. Springer (1968)
26. Tharp, L.H.: The characterization of monadic logic. J. Symb. Log. 38(3), 481–488 (1973)
27. Väänänen, J.: Remarks on generalized quantifiers and second-order logics. In: Set Theory and Hierarchy Theory, pp. 117–123. Prace Naukowe Instytutu Matematyki Politechniki Wroclawskiej, Wroclaw (1977)
28. Väänänen, J.: Generalized quantifiers. Bull. EATCS 62, 115–136 (1997)
29. Väänänen, J.: Models and Games, vol. 132. Cambridge University Press, Cambridge (2011)
30. Vardi, M.Y., Wilke, T.: Automata: from logics to algorithms. In: Flum, J., Grädel, E., Wilke, T. (eds.) Logic and Automata: History and Perspectives, Texts in Logic and Games, vol. 2, pp. 629–736. Amsterdam University Press (2008)
31. Venema, Y.: Expressiveness modulo bisimilarity: a coalgebraic perspective. In: Johan van Benthem on Logic and Information Dynamics, pp. 33–65. Springer (2014)
32. Walukiewicz, I.: Monadic second order logic on tree-like structures. In: Puech, C., Reischuk, R. (eds.) STACS. Lecture Notes in Computer Science, vol. 1046, pp. 401–413. Springer, Berlin (1996)
33. Westerståhl, D.: Generalized quantifiers. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy, Winter 2016 edn. Metaphysics Research Lab, Stanford University (2016)

Acknowledgements. We are very grateful to the anonymous referee for many useful suggestions for improving the paper. Open Access funding provided by SUPSI - University of Applied Sciences and Arts of Southern Switzerland.

Author affiliations: Institute for Logic, Language and Computation, Universiteit van Amsterdam, P.O. Box 94242, 1090 GE, Amsterdam, The Netherlands (Facundo Carreiro & Yde Venema); Dalle Molle Institute for Artificial Intelligence USI-SUPSI, Polo universitario Lugano - Campus Est, Via la Santa 1, 6962, Lugano-Viganello, Switzerland (Alessandro Facchini); University College London, 66-72 Gower Street, London, WC1E 6EA, UK (Fabio Zanasi). Correspondence to Alessandro Facchini.

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Carreiro, F., Facchini, A., Venema, Y. et al.: Model theory of monadic predicate logic with the infinity quantifier. Arch. Math. Logic (2021). https://doi.org/10.1007/s00153-021-00797-0

Keywords: Monadic first-order logic; Generalised quantifier; Infinity quantifier; Characterisation theorem; Preservation theorem
Experimental Study of Self-heating Ignition of Lithium-Ion Batteries During Storage: Effect of the Number of Cells

Xuanze He, Francesco Restuccia, Yue Zhang, Zhenwen Hu, Xinyan Huang, Jun Fang, Guillermo Rein

Fire Technology, Issue 6/2020, 03-08-2020, Open Access

1 Introduction

Lithium-ion batteries (LIBs) are an important type of energy storage device with high specific energy, high power, and a long cycle life. Due to their advantages, LIBs have been widely used in commercial applications, such as laptops, mobile phones and electric vehicles. Because of the fast development of electric vehicle technology and the increasing demand for electric vehicles, the global market for LIBs is predicted to keep growing, reaching USD 93.1 billion by 2025 [1]. However, LIBs pose a new safety hazard because of their tendency to ignite and burn. Many fires resulting in economic losses and casualties have been reported [2]. For example, two LIB fires happened in two Boeing 787 Dreamliners in January 2013 [3]. In 2016, a series of LIB fires in Samsung mobile phones led to a worldwide recall of the Galaxy Note 7, leading to a large falloff in sales.

Ignition of LIBs can be triggered by abuse conditions, including mechanical abuse (crushing, penetration), electrical abuse (external short circuit, overcharge), thermal abuse (overheating) or internal short circuit [4]. All of these can initiate thermal runaway leading to fires. Unlike the first three abuse conditions, which require external factors, an internal short circuit occurs inside the LIB, leading to spontaneous ignition. Since the Boeing 787 Dreamliner battery fires reported in 2013 [5], spontaneous ignition of LIBs has been under closer scrutiny. The cause of spontaneous ignition had been thought to be internal short circuit only [4-6], until a recent study found that spontaneous ignition can occur without an internal short circuit, because of internal chemical reactions [7].

Another possibility for spontaneous ignition of LIBs is by self-heating in open circuit condition, particularly when they are stacked forming a large pile in a warehouse or a cargo. Self-heating is the tendency of certain materials to undergo spontaneous internal exothermic reactions causing an increase in their temperature [8-10]. Self-heating ignition has been studied in many organic materials, such as coal [11] and shale [12]. For large sizes of these materials, self-heating ignition can occur at low ambient temperatures [8, 9]. This is because heat generation due to chemical reactions is proportional to sample volume, while heat losses are proportional to sample surface area. Therefore, when the sample size is relatively large, the heat generation rate can be higher than the heat dissipation rate, resulting in spontaneous ignition caused by self-heating [9].

LIB ignition caused by various abuse conditions has been studied at both the small component scale and the single-cell scale.
By studying the chemical reactions using different combinations of components [13, 14], the reactions of LIB thermal runaway have been identified [15]. In order of onset temperature from low to high, these reactions include: SEI (solid electrolyte interphase) decomposition, the reaction of intercalated lithium with electrolyte, positive active material decomposition and electrolyte decomposition. The kinetics of these reactions have been studied [16] and employed in simulations of a single cell [17].

Tobishima and Yamaki [18] first experimentally studied self-heating ignition of a cylindrical LiCoO2 LIB using oven experiments. After this, the effects of the state of charge (SOC) [19], cathode materials [20] and the aging process [21] on this onset temperature were investigated. In terms of spontaneous ignition of LIBs, the internal short circuit issue has been considered [4-6]. The formation and detection of the internal short circuit, and how it causes spontaneous ignition, are three key research topics [22-24]. Once an internal short circuit happens, the temperature of the LIB increases rapidly because of Joule heating. The temperature increase triggers the chemical reactions, leading to spontaneous ignition [4]. Some works [25-27] have studied the critical self-ignition temperature of a single cell using non-dimensional analysis based on self-heating ignition theories. However, as only one cell was used in previous experiments [13-27], the number of cells and the consequent effects of heat transfer were neglected.

The ignition of a LIB box has been numerically investigated by Hu et al. [28]. Results show that insulating materials could decrease the critical temperature of self-heating ignition, because these materials reduce the heat dissipation of the cells. When multiple cells are stacked together during storage or transport, the critical self-heating ignition temperature could be lower than the temperature for one cell. We attempt to show the key symptom of self-heating, which is that the ignition temperature decreases as the number of cells increases. This unique symptom demonstrates the possibility of self-heating ignition of LIBs.

In order to verify whether LIB fires can start by self-heating, isothermal oven experiments on bench-scale samples are recommended [8]. Other, faster methods employed in LIB ignition investigations include differential scanning calorimetry (DSC) [29], the C80 micro-calorimeter [14], the vent sizing package 2 (VSP2) adiabatic calorimeter [29] and the Copper Slug Battery Calorimeter (CSBC) [30]. These methods are used to study the component scale or single-battery scale. Accelerating rate calorimetry (ARC) [16] is another method to investigate self-heating of bench-scale samples, but in an adiabatic environment that does not consider heat transfer among samples. Additionally, considering the low onset temperature of self-heating and the low reaction rate of self-heating reactions [8, 31], the kinetics obtained by these methods do not correspond to the kinetics of slow self-heating ignition. In comparison, an oven is large enough to accommodate bench-scale samples and can provide constant-temperature heating to study the heat transfer effects.

In the current study, for the first time in the literature, the effect of the number of cells on the self-heating behaviour of LiCoO2 LIBs at 30% SOC has been studied using oven experiments.
The effective kinetics and effective thermal properties of the LIBs are extracted based on self-heating ignition theory, and are used to predict self-heating ignition of LIBs at real storage sizes and its dependence on the ambient temperature.

2 Self-heating Ignition Theory

The first theory to describe the self-heating phenomenon was put forward by Semenov [8, 31]. This theory assumes a uniform temperature of the system, ignores the consumption of materials and assumes that heat generation is due to one global chemical reaction. These assumptions limit the wider utilization of Semenov theory, because the temperature profile of most solid materials is not uniform. However, Semenov theory can effectively describe the self-heating problem of liquids. In order to describe a more realistic temperature distribution in solids, Frank-Kamenetskii proposed a model that incorporated the heat conduction of Fourier's law [8, 31]. As the temperature variation within the substance itself can be calculated, Frank-Kamenetskii theory has been widely employed to investigate the characteristics of substance self-heating ignition [12]. This theory also neglects fuel consumption and assumes that heat production is from a global chemical reaction following the Arrhenius law. According to these assumptions, the energy conservation equation of the Frank-Kamenetskii theory is shown in Eq. (1):

$$k\nabla^{2} T + f(t)\Delta H_{c} \exp \left( {\frac{ - E}{RT}} \right) = \rho c\frac{\partial T}{\partial t}$$

where \(k\) is the thermal conductivity of the fuel, \(T\) is the temperature of the fuel at a location, \(f\left( t \right)\) is the mass action law that depends on the concentration of reactants at any time, \(\Delta H_{c}\) is the effective heat of reaction of the fuel, \(E\) is the effective activation energy describing the global reaction, \(R\) is the universal gas constant, \(\rho\) is the density of the fuel, \(c\) is the heat capacity of the fuel and \(t\) is time.

Frank-Kamenetskii theory solves this transient heat conduction equation at steady state, as it assumes both the heat of reaction and the effective activation energy of the material are large enough that a steady state can be reached [9, 12, 31]. In practical cases this is a well-approximated assumption, as the temperature of the material is stable before ignition [12]. As a result, the right-hand side of Eq. (1) is equal to zero. To solve Eq. (1) at steady state, Frank-Kamenetskii defined a dimensionless heat generation number \(\delta\) [8, 31], also known as the Damkohler number, shown in Eq. (2):

$$\delta = \frac{{EL^{2} f_{0}\Delta H_{c} }}{{kRT_{a}^{2} }}e^{{ - E/\left( {RT_{a} } \right)}}$$

where \(L\) is the characteristic length, \(T_{\text{a}}\) is the ambient temperature, and \(f_{0}\) is the value of the mass action law at the initial time, which is a constant as the consumption of materials is ignored. As can be seen, \(\delta\) increases as the characteristic length \(L\) increases or as the ambient temperature \(T_{\text{a}}\) increases. Frank-Kamenetskii theory finds that when \(\delta\) is higher than a critical value \(\delta_{c}\), thermal runaway occurs, leading to ignition [8, 31]. \(\delta_{c}\) is only related to the geometry of the substance when the boundary condition \(T = T_{a}\) is satisfied. This boundary condition is easy to reach when convection is large. Values of \(\delta_{c}\) can be found in the literature [8, 31]; for example, its value for an infinite slab is 0.878, and for a cube it is 2.52.
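To make the ignition criterion concrete, here is a minimal Python sketch of Eq. (2) and of the comparison against \(\delta_{c}\). This is our own illustration, not code from the paper; in particular, the lumped parameter f0_dHc stands for the product \(f_{0}\Delta H_{c}\), and all argument values would have to come from measurements.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def damkohler(E, L, f0_dHc, k, T_a):
    """Frank-Kamenetskii number delta of Eq. (2).
    E: effective activation energy (J/mol); L: characteristic length (m);
    f0_dHc: volumetric heat-release prefactor f_0 * dH_c (W/m^3);
    k: thermal conductivity (W/m K); T_a: ambient temperature (K)."""
    return (E * L**2 * f0_dHc) / (k * R * T_a**2) * math.exp(-E / (R * T_a))

def ignites(E, L, f0_dHc, k, T_a, delta_c):
    """Self-heating ignition is predicted when delta reaches the critical
    value delta_c for the given geometry (e.g. 2.52 for a cube)."""
    return damkohler(E, L, f0_dHc, k, T_a) >= delta_c
```

Because \(\delta\) grows with both \(L\) and \(T_{a}\), a larger body ignites at a lower ambient temperature, which is the trend the experiments below are designed to expose.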
In this study, prismatic batteries have been used to study LIB self-heating ignition, so the geometry of the LIBs can be regarded as a cuboid. The \(\delta_{c}\) value of a cuboid is not a constant but depends on the lengths of its three sides. \(\delta_{c}\) can be calculated using the rectangular brick equation [8], shown in Eq. (3):

$$\delta_{c} \left( {a,b,c} \right) = 0.84\left( {1 + 1/\left( {b/a} \right)^{2} + 1/\left( {c/a} \right)^{2} } \right)$$

where a, b and c are the half-lengths of the three sides, with a < b, c.

At the critical ignition condition, the dependence on the critical ambient temperature can be obtained by rearranging Eq. (2) and taking the logarithm, as shown in Eq. (4):

$$\ln \left( {\frac{{\delta_{c} T_{a,c}^{2} }}{{L^{2} }}} \right) = \ln \left( {\frac{E}{R}\frac{{f\Delta H_{c} }}{k}} \right) - \frac{E}{R}\frac{1}{{T_{a,c} }}$$

where \(T_{{{\text{a}},c}}\) is the minimum ambient temperature at which ignition of the given sample size will occur. By plotting \(\ln \left( {\frac{{\delta_{c} T_{{{\text{a}},c}}^{2} }}{{L^{2} }}} \right)\) against \(\left( {\frac{1}{{T_{{{\text{a}},c}} }}} \right)\), a straight-line correlation is obtained if the one-step global Arrhenius reaction assumption is appropriate for LIBs, which shows that self-heating ignition can be modelled by Frank-Kamenetskii theory. The slope of the straight line is \(- \frac{E}{R}\), while the intercept corresponds to \(\frac{E}{R}\cdot\frac{{f\Delta H_{c} }}{k}\). Thus, the effective kinetics and thermo-physical parameters can be acquired.

3.1 LIB Samples

Sanyo UF103450P prismatic batteries with graphite anode and LiCoO2 cathode were selected for the experiments, due to their widespread use in consumer electronics and ease of purchase. The cell has a nominal voltage of 3.7 V and a nominal capacity of 1880 mAh, with dimensions of 34 mm x 10 mm x 50 mm. Each cell has a burst disc as a safety vent on the positive side. When the internal pressure is higher than a threshold, this safety vent releases gases. A state of charge (SOC) of 30% was selected for the experiments, as this is the maximum SOC allowed when batteries are shipped by air according to Packing Instruction 965 (UN 3480) of IATA. Before the experiments, in order to measure the actual electrical capacity and ensure the same SOC, each cell was cycled three times at 0.2 C rate for 5 h in each charge or discharge process, with the final cycle ending at 30% SOC. After this, cells were rested for 5 h to avoid internal heat effects due to cycling.

The experimental setup employed to determine the critical minimum ambient temperature for self-heating ignition of LIB cells was based on the procedure in the British Standard EN 15188:2007. Figure 1 shows the overall experimental setup for studying self-heating ignition with different numbers of cells. Stacks of 1, 2, 3 and 4 cells were selected. Cells were stacked into a cuboid using wires around them to fix the shape, as this shape is easy to stack, makes it possible to calculate \(\delta_{c}\), and ensures internal conductive heat transfer. The physical dimensions of the different numbers of cells are shown in Table 1 to illustrate how they were stacked. According to Eq. (3), \(\delta_{c}\) and \(\delta_{c} /L^{2}\) are calculated and shown in Table 1, which demonstrates that as the number of cells increases, \(\delta_{c} /L^{2}\) decreases and \(\delta_{c}\) increases.
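The Table 1 entries can be recomputed from the cell dimensions. The sketch below is our own computation (not values copied from the paper, so the printed numbers may differ from Table 1 in rounding): it applies Eq. (3) to the four stacks, built from 34 mm x 10 mm x 50 mm cells stacked along the 10 mm side, takes the characteristic length L as the smallest half-length, and also evaluates the Biot number Bi = h_c L/k discussed in the next paragraph, using the h_c = 11 W/m²K and k = 1.08 W/m K reported there.

```python
h_c = 11.0   # heat transfer coefficient, W/(m^2 K), from Sect. 4.3
k = 1.08     # effective conductivity of a LiCoO2 cell, W/(m K) [33]

def delta_c_brick(a, b, c):
    """Critical Damkohler number of a rectangular brick, Eq. (3);
    a, b, c are the half-lengths of the sides, with a the smallest."""
    return 0.84 * (1 + (a / b)**2 + (a / c)**2)

for n in range(1, 5):
    sides = sorted([0.034, 0.010 * n, 0.050])   # full side lengths, m
    a, b, c = [s / 2 for s in sides]            # half-lengths, a smallest
    dc = delta_c_brick(a, b, c)
    Bi = h_c * a / k                            # Biot number with L = a
    print(f"{n} cell(s): delta_c = {dc:.2f}, "
          f"delta_c/L^2 = {dc / a**2:.0f} 1/m^2, Bi = {Bi:.2f}")
```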
Bi numbers \(\left( {Bi = \frac{{h_{c} L}}{k}} \right)\) are evaluated and shown in Table 1 to determine whether the lumped capacitance method can be used [32], and to justify which self-heating ignition theory should be chosen [8]. The heat transfer coefficient h_c = 11 W/m²K is calculated from the experimental results (see Sect. 4.3). The effective conductivity of the LIB, k = 1.08 W/m K, was previously measured experimentally in [33] for LiCoO2. The characteristic length L is half the length of the smallest side. If Bi < 0.1, the thermal resistance and the temperature gradient of the cells are negligible, and therefore the lumped capacitance assumption can be employed [8] and the Semenov theory should be chosen [32]. Otherwise, when Bi > 0.1, the lumped capacitance method is not satisfied [8], and the Frank-Kamenetskii theory should be selected [32].

Figure 1: Experimental setup for studying self-heating ignition of 30% SOC cells. Cells were placed at the centre of a metal mesh cage in a mechanically ventilated 136 L oven, with attached thermocouples for measuring the ambient (T_a), centre (T_c) and surface (T_s) temperatures, and connected wires to measure voltage. The error of the ambient temperature is ±1°C due to the error of the thermocouple. The four stacks used for the experiments are shown at the right side.

Table 1: Physical dimensions of the cells at different sizes, and their \(\delta_{c}\), \(\delta_{c} /L^{2}\) and Bi (columns: width a (mm), length b (mm), height c (mm), \(\delta_{c}\), \(\delta_{c} /L^{2}\), Bi).

Cells were placed at the centre of a metal mesh cage in a thermostatically controlled 136 L oven, which has mechanically forced air circulation to prevent thermal stratification. Cells were strapped and fastened together using fine wires to fix the geometry and keep the cells in contact with each other. This helps avoid the effect of thermal contact resistance due to swelling. Three thermocouples were employed to measure temperatures: one at the surface of the central cell (T_c), the second attached to the surface of one of the outermost cells (T_s), and the third used to monitor the ambient temperature (T_a). In order to monitor the voltage history during the experiments, one of the central batteries was welded with nickel strips on both terminals, which were connected to the cycler using high-temperature resistant wires. The metal mesh cage was used to reduce the effects of airflow on the results, and to prevent fires and projectiles from destroying the oven.

The minimum critical temperature, \(T_{{{\text{a}},c}}\), is defined as the minimum ambient temperature that allows thermal runaway to happen, causing ignition. When conducting an experiment, if the cells failed to ignite, the experiment was repeated with fresh cells at a 10°C higher temperature. If the cells reached ignition, the experiment was repeated with fresh cells at a 10°C lower temperature. The experiments were conducted until \(T_{{{\text{a}},c}}\) was identified with a maximum error of ±5°C for each number of cells. Then, the critical-temperature experiments were repeated twice to decrease the error range. The experiments carried out are summarised in Table 2. In total, 35 experiments corresponding to 158 h of oven run time were conducted.
Table 2: Total number of experiments carried out for the different numbers of cells.

4.1 Self-heating Ignition Phenomenon

In general, for the LiCoO2 cells at 30% SOC in our experiments, the self-heating ignition behaviour can be summarized into the following three stages: heating up, self-heating and thermal runaway. Taking a 1-cell experiment at 173°C as an example, Fig. 2 presents the three-stage self-heating ignition phenomenon and the corresponding temperature profile. Table 3 also shows the criteria and observations of the three stages.

Figure 2: Three stages of the 30% SOC 1-cell self-heating ignition phenomenon and corresponding temperature and voltage characteristics at the ambient temperature of T_a = 173°C. The typical LIB appearances in the different stages are also shown, including cell swelling, electrolyte leakage, self-heating, and thermal runaway.

Table 3: Criteria and observations of the 3 stages in self-heating ignition.

- Stage I, heating up. Criterion: T_c increases significantly above its initial temperature. Observations: (1) slight swelling; (2) fast T_c increase; (3) slow voltage decrease and rapid fluctuations (1st drop); (4) electrolyte leakage, if T_a ≫ T_a,cr.
- Stage II, self-heating. Criterion: crossover, T_c > T_a. Observations: (1) no obvious swelling; (2) electrolyte leakage; (3) colour of cathode gradually changes from white to yellow; (4) crossover: T_c increases over T_a, followed by a slight drop and a very slow increase; (5) 2nd voltage drop to zero, followed by recovery.
- Stage III, thermal runaway. Criterion: T_c increases sharply. Observations: (1) rapid swelling in 2-3 s; (2) plastic coating near cathode melting; (3) no further colour change at cathode; (4) venting (T_a > T_a,cr) and smoke; no flare, no fire and no sparks observed in any of our experiments; (5) fast T_c increase; (6) 3rd voltage drop to zero.

Stage I: Heating up. The first stage starts when a cell is heated significantly above its initial temperature once it has been placed into the oven. The cell temperature increased from the ambient temperature to the oven temperature. In all experiments, cells initially began swelling slightly from their middle wall, due to thermal expansion. Once the temperature was higher than the onset temperature of SEI decomposition, this reaction started to generate gases, leading to further swelling of the cells. For the experiments where T_a ≫ T_a,cr, electrolyte leakage was observed in this stage.

Stage II: Self-heating. The second stage is characterised by the sample temperature exceeding the ambient temperature. As there is not a significant temperature increase in this stage, no obvious swelling was observed. Additionally, electrolyte leakage was often observed in this stage, in which the electrolyte began to leak out from the positive side, where there is a safety vent. This leakage leads to a gradual colour change of the cathode from white to yellow. The temperature increases over the ambient temperature due to self-heating, followed by a slight decrease because of the heat losses caused by the electrolyte leakage. After this, the cell temperature started to increase very slowly. When the electrolyte leakage was over, the cell appearance did not change further, but its temperature kept increasing. The heat accumulation in this stage may be due to SEI decomposition, the reaction of intercalated lithium with electrolyte, cathode positive material decomposition [4, 15], or the chemical crossover between anode and cathode [7].

Stage III: Thermal runaway. As the cell temperature increased, thermal runaway happened, leading to ignition.
The cell rapidly swelled in 2-3 s, due to the fast internal gas generation. When the internal pressure exceeded the threshold, venting happened, as shown in the stage III image in Fig. 2. Some smoke could be seen, but no flare, fire or sparks were observed during any of the experiments. Moreover, for the first time in the literature, we find that self-heating ignition does not always cause venting. As shown in Fig. 3, with cell images after the experiments, when the ambient temperature decreased to 169°C for a 1-cell experiment, cell self-heating ignition was also captured based on the temperature profile, but no venting happened. In all our experiments, ignition without venting only happened in the 1-cell and 2-cell experiments at their critical ignition temperatures.

Figure 3: Cell images after experiments. Both thermal runaway and venting happened at T_a = 173°C (left), but thermal runaway happened at T_a = 169°C without venting (right). This is the first time the occurrence of LIB thermal runaway due to self-heating without venting has been reported in the literature.

In order to fix the shape of the stacks and keep cells in contact with each other, wires were used to fasten the cells in all experiments. This method caused venting to happen prior to thermal runaway in the 3-cell and 4-cell experiments, as the wires limit the swelling of the cells, causing external pressure on the cell surface. Heat and mass losses due to the venting add an additional source of uncertainty to the experiments, but according to the critical temperatures we obtained, these losses do not affect the results significantly. Without the fastening of the cells using wires, self-heating ignition in the 3-cell and 4-cell experiments did not happen even at the 2-cell critical ambient temperature. This is because the swelling of a cell makes its surface curved, decreasing the physical contact areas between the cells and reducing heat transfer, so that the cells do not behave as one body. Additionally, in the 1-cell and 2-cell experiments, because of the small total deformation and swelling, the wire fastening did not affect the experiments in any visible way.

4.2 Temperature

Figure 4a, b shows an example of ignition and no-ignition of a 1-cell configuration to explain how to identify \(T_{{{\text{a}},c}}\) using temperature data. Cells failed to ignite at an ambient temperature of 162°C, but ignited at an ambient temperature of 169°C. In the no-ignition cases, the cell temperature slightly exceeds the oven temperature at first, and then it is cooled down to the oven temperature. This is because this oven temperature is the highest subcritical ambient temperature: the heat generation due to chemical reactions, which is proportional to sample volume, is still slightly lower than the heat losses, which are proportional to sample surface area. In the ignition case, thermal runaway occurs at 106 min, indicating the cell ignited at the oven temperature of 169°C, which is the lowest supercritical ambient temperature. Therefore, the \(T_{{{\text{a}},c}}\) of 1 cell is 165.5 ± 3.5°C.

Figure 4: The temperature and voltage of the 1-4 cell experiments at 30% SOC for both critical ignition and no-ignition cases. The left column shows the cases of the maximum ambient temperatures for no-ignition, while the right column shows the cases of the minimum ambient temperatures for ignition for 1-4 cells.
The temperature shown for 1 cell is the surface temperature T_s; the other temperatures are the central temperature (the temperature between the two central cells) T_c.

The experiments at the maximum ambient temperatures for no-ignition (left) and at the minimum ambient temperatures for ignition (right) for 1-4 cells are shown in Fig. 4. As the number of cells increases, the peak cell temperature and the minimum ambient temperature for ignition decrease. Additionally, according to the ignition cases in Fig. 4, the cell surface temperature in the self-heating stage is equal to the ambient temperature, \(T_{s} = T_{a}\), which satisfies the boundary condition of Frank-Kamenetskii theory.

The time to thermal runaway, and the times of stages I and II, are shown in Fig. 5. The time to thermal runaway equals the sum of the times of stages I and II. As the number of cells increases, the time of stage I increases linearly, while the time of stage II and the time to thermal runaway increase non-linearly.

Figure 5: The time to thermal runaway and the times of the different stages. The time to thermal runaway is the sum of the times of stages I and II.

4.3 The Heat Transfer Coefficient

The effective heat transfer coefficient can be estimated using battery temperature data from the heating-up stage in Fig. 4. According to Table 1, only 1 cell and 2 cells have Bi < 0.1. In these conditions, based on the lumped capacitance method [32], we have \(\dot{Q} = Sh\left( {T_{a} - T_{s} } \right) = mc\left( {dT_{s} /dt} \right)\), so the heat transfer coefficient is \(h = mc\left( {dT_{s} /dt} \right)/\left[ S\left( {T_{a} - T_{s} } \right) \right]\). Figure 6 presents the plots of dT_s/dt vs T_a - T_s for the critical ignition cases of 1 and 2 cells. The slopes correspond to \(hS/mc\), which can be used to extract the heat transfer coefficient. The surface area \(S\) is calculated using the three side lengths, the specific heat capacity \(c\) is 990 J/kg-K from previous experimental measurements of the same cell [27], and the cell mass \(m\) is 36.8 g. The heat transfer coefficients for the different numbers of cells can therefore be calculated; they are presented in Table 4. The final heat transfer coefficient selected to calculate the Bi number is 11 W/m²K.

Figure 6: Extracting the heat transfer coefficient \(h\) from plots of dT_s/dt vs T_a - T_s, for the cases of 1 cell (left) and 2 cells (right). The slopes are proportional to \(h\).

Table 4: Heat transfer coefficient (W/m²K) for the different numbers of cells.
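The fitting procedure of Sect. 4.3 can be sketched in a few lines. The following is our own illustration, not the authors' code: the temperature trace is hypothetical synthetic data, generated as an exponential approach to the oven temperature with time constant mc/(hS), inserted only to make the script runnable.

```python
import numpy as np

m, c = 0.0368, 990.0  # cell mass (kg) and specific heat (J/kg K) [27]
S = 2 * (0.034*0.010 + 0.034*0.050 + 0.010*0.050)  # surface area of one cell, m^2

# Hypothetical heating-up trace: exponential approach to the oven temperature,
# with time constant m*c/(h*S) ~ 652 s for h = 11 W/m2K (synthetic, not measured).
T_a = 169.0                                 # oven temperature, degC
t = np.arange(0.0, 1001.0, 100.0)           # time, s
T_s = T_a - 149.0 * np.exp(-t / 652.0)      # surface temperature, degC

dTs_dt = np.gradient(T_s, t)                # heating rate, K/s
x = T_a - T_s                               # driving temperature difference, K
# Fit dT_s/dt vs (T_a - T_s); drop the one-sided endpoint gradient estimates.
slope = np.polyfit(x[1:-1], dTs_dt[1:-1], 1)[0]  # slope = hS/(mc)
h = slope * m * c / S
print(f"h = {h:.1f} W/m2K")                 # ~11.0 for this synthetic trace
```

The same fit applied to the measured 1-cell and 2-cell traces is what produces the h = 11 W/m²K adopted in the Bi calculation.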
4.4 Voltage

Figure 4 shows the voltage characteristics across the three stages, with different voltage histories for the no-ignition and ignition cases respectively. In the ignition cases, the voltages exhibit similar trends across the experiments. In the first stage, the voltage decreases as the cell temperature increases, because the high temperature can speed up the degradation of the cells [34]. There is always a fluctuation followed by the first voltage drop in this stage, which could be a signal of the start of an internal side reaction, namely SEI decomposition, as this has been regarded as the first side reaction during thermal runaway [4]. Figure 7 gives the time to voltage fluctuation for the experiments, and the corresponding cell temperatures at that time. As the ambient temperature increases, the time to voltage fluctuation decreases. This is because it takes a longer time to heat more cells at a lower ambient temperature. However, no matter how many cells were used and what the ambient temperature was, the cell temperatures at the time of the voltage fluctuation were all around 130°C, which is close to the onset temperature of SEI decomposition in previous studies [4, 15].

Figure 7: (a) The time to voltage fluctuation of the 1-4 cell experiments, and (b) the cell temperature at that time. Cell temperatures were all around 130°C, which is the onset temperature of side reactions.

In the second stage, the voltage suddenly decreases to zero right after the electrolyte leakage. When the electrolyte leakage finishes, the cell voltage can be detected again in the self-heating stage. Figure 8 gives the relationship between the time to electrolyte leakage and the time to the 2nd voltage drop for three 1-cell experiments. The time to electrolyte leakage is defined as when we first observed the electrolyte leakage, and these values were always slightly smaller than the time to the 2nd voltage drop.

Figure 8: Relationship between the time to electrolyte leakage and the time to the 2nd voltage drop for three 1-cell experiments. The time to electrolyte leakage was always slightly smaller than the time to the 2nd voltage drop, which shows that electrolyte leakage can lead to an internal short circuit of the cells.

After the 2nd voltage drop, the voltage decreases slowly. This may be caused by anode and cathode side reactions at high temperatures, which could increase the internal resistance by continuing to consume intercalated lithium, generating further gases and impurities [4]. In the third stage, when the temperature starts to increase rapidly, the voltage sharply decreases to zero again, which can be regarded as a signal that the cell has ignited.

4.5 Critical Ignition Temperature

Based on the ambient temperature data in Fig. 4, the critical temperatures of cell self-heating ignition are identified. The values for 1, 2, 3 and 4 cells are 165.5 ± 3.5°C, 157 ± 2°C, 155 ± 2°C and 153 ± 2°C, respectively. In this work a clear trend is shown, namely that the ambient temperature required for cell self-heating ignition decreases as the number of cells increases, due to the heat transfer effects presented in the theory section. This trend should hold not only for the prismatic cells used here, but also for any other cell shape, such as cylindrical cells. This is because, although the conductive contact area between cylindrical cells is smaller, heat transfer still takes place among the cells by conduction and radiation in the air gaps. The critical temperature for 4 cells is 153°C, which is still very high compared with ambient temperature. However, when cells are stacked in warehouses or shipped in cargoes, the number of cells is relatively large, and therefore, based on this critical ambient temperature trend, cell self-heating ignition could happen and lead to fires.

4.6 Effective Kinetics and Thermal Properties

In order to quantify the effective kinetics and thermal properties, we assume that the boundary condition is \(T_{s} = T_{a}\), which is a good assumption in this work as the temperature of the cell is approximately steady before ignition, as shown in Fig. 4. Using the critical ignition temperatures for 1-4 cells from Fig. 9, a plot of \(ln\left( {{{\updelta }}_{c} T_{a}^{2} /L^{2} } \right)\) vs \(1000/T_{a}\) is made. The best linear fit is calculated in the figure, with an R-squared value of 0.981. Figure 10 shows a typical Frank-Kamenetskii plot, which validates that the assumptions of Frank-Kamenetskii theory and a one-step global Arrhenius reaction can be applied. The Frank-Kamenetskii plot also confirms that the ignition is caused by self-heating.
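The fit in Fig. 10 can be reproduced from the four critical temperatures and the stack geometry. The following Python sketch is our own computation rather than the authors' code; it recovers an activation energy close to the E = 230.78 kJ/mol reported in Table 5 below, the small difference being attributable to rounding of the inputs.

```python
import numpy as np

R = 8.314  # J/(mol K)

def delta_c_brick(a, b, c):
    """Critical Damkohler number of a rectangular brick, Eq. (3)."""
    return 0.84 * (1 + (a / b)**2 + (a / c)**2)

T_crit = np.array([165.5, 157.0, 155.0, 153.0]) + 273.15  # K, for 1-4 cells
x, y = [], []
for n, T in zip(range(1, 5), T_crit):
    a, b, c = [s / 2 for s in sorted([0.034, 0.010 * n, 0.050])]  # half-lengths, m
    x.append(1.0 / T)                                       # abscissa of Fig. 10
    y.append(np.log(delta_c_brick(a, b, c) * T**2 / a**2))  # ordinate, Eq. (4)

slope, intercept = np.polyfit(x, y, 1)
E = -slope * R  # effective activation energy, J/mol
print(f"E = {E / 1000:.0f} kJ/mol, intercept = {intercept:.1f}")  # ~230, ~86
```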
Figure 9: The critical ignition temperature identified for the different numbers of cells. The values for 1, 2, 3 and 4 batteries are 165.5 ± 3.5°C, 157 ± 2°C, 155 ± 2°C and 153 ± 2°C, respectively.

Figure 10: Frank-Kamenetskii plot for cells with LiCoO2 cathode material. A linear fit is plotted in order to extract the effective kinetics and thermophysical parameters.

The slope of the straight line corresponds to \(- \frac{E}{R}\), while the y-intercept is \(ln\left( {\frac{E}{R}\cdot\frac{{f\Delta H_{c} }}{k}} \right).\) The effective conductivity \(k\) of cells is highly related to the cathode material. For the LiCoO2 cathode material, the effective conductivity \(k\) is 1.08 W/mK [33]. Based on this, the effective kinetics and thermal properties of the cell are extracted, as shown in Table 5. The errors are also shown in the table, using the fits that give the highest and the lowest possible effective kinetics and thermal properties from the experimental data. The data found in this work can contribute to predicting cell self-heating ignition behaviour.

Table 5: Effective activation energy \(E\) and \(ln\left( {\frac{{\Delta H_{c} fE}}{Rk}} \right)\) of the cell at 30% SOC, extracted from the Frank-Kamenetskii plot, with the effective conductivity \(k\) = 1.08 W/m K taken from the literature for LiCoO2 [33]: \(E\) = 230.78 kJ/mol (error: -38.97, +83.40), \(ln\left( {\frac{{\Delta H_{c} fE}}{Rk}} \right)\) = 86.03 K²/m², \(R^{2}\) = 0.981.

The kinetics quantified here are for 30% SOC; the effective kinetics and thermophysical properties will differ if the same LIB has a higher SOC. Previous studies [19, 30] show a LIB has higher reactivity when its SOC is larger, and hence a LIB with a higher SOC is more likely to self-ignite.

5 Upscaling Study

In order to predict self-heating ignition of the cells used in this work at lower ambient temperatures, the properties in Table 5 are employed in the Frank-Kamenetskii theory to upscale the laboratory results. In these predictions, we use the 1-step effective kinetics of the cell. This method is widely used to predict the self-heating phenomenon [8, 12]. The 1-step effective kinetics quantified from Frank-Kamenetskii theory include the effect of multi-step kinetics, and the prediction based on effective kinetics will need to be validated when large-scale experiments become available. The upscaling assumes cells are stacked into a cube, so that the characteristic length equals the ratio of volume to area and \(\delta_{c} = 2.52\). Figure 11 shows the 1D upscaled results for the LiCoO2 cell at 30% SOC. The critical temperature decreases significantly as the ratio of volume to area increases. For a 1000-litre recycling bin (with a characteristic length of 0.5 m), a size commonly used for collecting used LIBs, the critical self-heating ignition temperature is 114°C (± 11°C), which is 50°C lower than that for a single cell. As LIBs in a recycling bin could be damaged rather than pristine, the critical self-heating ignition temperature could be much lower than this prediction.

Figure 11: Upscaled results for the LIB used in this work based on Frank-Kamenetskii theory. Uncertainty (the maximum and minimum ignition temperatures) is represented by the shaded regions, based on the experimental errors (Table 5).

According to the results in Fig. 11, even this LIB type at 30% SOC can become hazardous at the highest credible ambient temperature of 40°C when its volume-to-area ratio is higher than 52 m.
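The curve in Fig. 11 can be sketched by inverting Eq. (4) for the critical ambient temperature at a given characteristic length. The following is a minimal Python illustration of ours, using the Table 5 values and \(\delta_{c} = 2.52\) for a cube; the bisection bracket is an arbitrary choice of ours.

```python
import math

R = 8.314          # J/(mol K)
E = 230.78e3       # effective activation energy, J/mol (Table 5)
INTERCEPT = 86.03  # ln(dHc*f*E/(R*k)), Table 5
DELTA_C = 2.52     # critical Damkohler number of a cube

def g(T, L):
    """g = LHS - RHS of Eq. (4); g <= 0 means delta >= delta_c (ignition)."""
    return math.log(DELTA_C * T**2 / L**2) - (INTERCEPT - E / (R * T))

def critical_temperature(L, lo=250.0, hi=500.0):
    """Bisection for the root of g, i.e. the critical T_a,c (K) at
    characteristic length L (m); g is monotone decreasing on this bracket."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo, L) * g(mid, L) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(f"T_a,c = {critical_temperature(0.5) - 273.15:.1f} C")
# ~113-114 C for the 0.5 m (1000 litre) bin, matching Fig. 11 within rounding
```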
This is unlikely to happen when the LIB is free from manufacturing flaws or defects, however, because a cube with a side length of 52 m is far greater than any realistic value, even for large rack storage. Other LIB types, or this type with manufacturing defects, can become a hazard if their self-heating reactivity is higher. Additionally, the SOC in this study is only 30%, and the critical temperature of self-heating ignition may decrease with increasing SOC, as a higher SOC potentially increases the reactivity of the LIB. The upscaled results in Fig. 11 are only for the cells and do not include the effects of packaging and boxes for storage, in which LIBs will be further insulated by separations and cushions; this could decrease the critical temperature of self-heating ignition [28].

This study presents: (i) experimental proof that LIB self-heating during storage is possible and that its likelihood increases with the size of the ensemble, (ii) a method to upscale laboratory experiments to study self-heating of any battery type, and (iii) evidence that the cell used in this work at 30% SOC will not self-heat to ignition even at large scales when it is free from manufacturing flaws.

6 Conclusions

In this study, the effect of the number of cells on the possibility of self-heating ignition of LIBs has been investigated using oven experiments. Sanyo prismatic LiCoO\(_2\) cells at 30% SOC were used in the experiments. Results show that the self-heating ignition behaviour has three stages: heating up, self-heating, and thermal runaway. A previous study showed that the critical ignition temperature for one 18650 LiCoO\(_2\) cell was 155°C [17]. However, this study has found that the critical ignition temperature decreases as the number of cells increases, which implies that self-heating ignition is possible for large LIB ensembles. As the number of cells increases from 1 to 4, the critical ambient temperature for self-heating ignition decreases from 165.5°C to 153°C. A Frank-Kamenetskii analysis using these critical temperatures shows a very good linear fit between the thermal properties and the inverse critical ambient temperature, with an R-squared value of 0.981, which confirms that this is self-heating ignition. The effective activation energy \(E\) is 230.78 kJ/mol and the effective thermal-property term \(\ln\left( {\frac{{\Delta H_{c} fE}}{Rk}} \right)\) is 86.03 K²/m². These parameters are then used in the prediction of self-heating ignition of the cells during storage. The upscaling results show that this specific LIB type at 30% SOC is not particularly hazardous in terms of self-heating because the critical ambient temperature is high. However, other LIB types, with defects, at higher SOC, or being recycled, could have a much larger reactivity; these LIBs could self-heat to ignition when they are stacked in a large enough ensemble during storage or transport. This work provides the first experimental study on self-heating ignition of LIBs in open circuit, contributing to understanding and predicting the onset of self-heating ignition of LIBs.

Acknowledgements

The authors would like to thank the China Scholarship Council (CSC) for its support of Xuanze He and Zhenwen Hu. Many thanks to Dr Gregory Offer, Dr Yatish Patel, Yan Zhao and Eirik Christensen (Imperial College London) for their advice and help, and to Dr Jingwu Wang and Yang Peng (University of Science and Technology of China) for their help with the experiments.
References

[1] Grand View Research (2017) Lithium-ion battery market worth $93.1 billion by 2025 | CAGR: 17.0%. https://www.grandviewresearch.com/press-release/global-lithium-ion-battery-market
[2] Sun P et al (2020) A review of battery fires in electric vehicles. Fire Technol. https://doi.org/10.1016/j.pecs.2020.100832
[3] Williard N et al (2013) Lessons learned from the 787 Dreamliner issue on lithium-ion battery reliability. Energies 6(9):4682–4695. https://doi.org/10.3390/en6094682
[4] Feng X et al (2018) Thermal runaway mechanism of lithium ion battery for electric vehicles: a review. Energy Storage Mater 10:246–267. https://doi.org/10.1016/j.ensm.2017.05.013
[5] Aircraft incident report: auxiliary power unit battery fire, Japan Airlines Boeing 787, JA 829J, Boston, Massachusetts, 7 January 2013. National Transportation Safety Board, DC, Rep. No. PB2014-108867, 21 November 2014
[6] Maleki H, Howard JN (2009) Internal short circuit in Li-ion cells. J Power Sources 191(2):568–574. https://doi.org/10.1016/j.jpowsour.2009.02.070
[7] Liu X et al (2018) Thermal runaway of lithium-ion batteries without internal short circuit. Joule 2(10):2047–2064. https://doi.org/10.1016/j.joule.2018.06.015
[8] Babrauskas V (2003) Ignition handbook, vol 318. Fire Science Publishers, Issaquah
[9] Bowes PC (1984) Self-heating: evaluating and controlling the hazards. HMSO, London
[10] Sun Q et al (2020) Assessment on thermal hazards of reactive chemicals in industry: state of the art and perspectives. Prog Energy Combust Sci 78:100832. https://doi.org/10.1016/j.pecs.2020.100832
[11] Joshi KA et al (2012) An experimental study of coal dust ignition in wedge shaped hot plate configurations. Combust Flame 159(1):376–384. https://doi.org/10.1016/j.combustflame.2011.06.003
[12] Restuccia F et al (2017) Self-heating behavior and ignition of shale rock. Combust Flame 176:213–219. https://doi.org/10.1016/j.combustflame.2016.09.025
[13] Maleki H et al (1999) Thermal stability studies of Li-ion cells and components. J Electrochem Soc 146(9):3224–3229. https://doi.org/10.1149/1.1392458
[14] Wang Q et al (2007) Thermal stability of delithiated LiMn2O4 with electrolyte for lithium-ion batteries. J Electrochem Soc 154(4):A263–A267. https://doi.org/10.1149/1.2433698
[15] Wang Q et al (2012) Thermal runaway caused fire and explosion of lithium ion battery. J Power Sources 208:210–224. https://doi.org/10.1016/j.jpowsour.2012.02.038
[16] MacNeil DD, Dahn JR (2001) Test of reaction kinetics using both differential scanning and accelerating rate calorimetries as applied to the reaction of LixCoO2 in non-aqueous electrolyte. J Phys Chem A 105(18):4430–4439. https://doi.org/10.1021/jp001187j
[17] Hatchard TD et al (2001) Thermal model of cylindrical and prismatic lithium-ion cells. J Electrochem Soc 148(7):A755–A761. https://doi.org/10.1149/1.1377592
[18] Tobishima S, Yamaki J (1999) A consideration of lithium cell safety. J Power Sources 81:882–886. https://doi.org/10.1016/S0378-7753(98)00240-7
[19] Roth EP et al (2004) Thermal abuse performance of high-power 18650 Li-ion cells. J Power Sources 128(2):308–318. https://doi.org/10.1016/j.jpowsour.2003.09.068
[20] Mendoza Hernandez OS et al (2015) Cathode material comparison of thermal runaway behavior of Li-ion cells at different state of charges including over charge. J Power Sources 280:499–504. https://doi.org/10.1016/j.jpowsour.2015.01.143
[21] Larsson F et al (2018) Gas explosions and thermal runaways during external heating abuse of commercial lithium-ion graphite-LiCoO2 cells at different levels of ageing. J Power Sources 373:220–231. https://doi.org/10.1016/j.jpowsour.2017.10.085
[22] Santhanagopalan S et al (2009) Analysis of internal short-circuit in a lithium ion cell. J Power Sources 194:550–557. https://doi.org/10.1016/j.jpowsour.2009.05.002
[23] Orendorff CJ et al (2011) Experimental triggers for internal short circuits in lithium-ion cells. J Power Sources 196:6554–6558. https://doi.org/10.1016/j.jpowsour.2011.03.035
[24] Feng X et al (2016) Online internal short circuit detection for a large format lithium ion battery. Appl Energy 161:168–180. https://doi.org/10.1016/j.apenergy.2015.10.019
[25] Huang P et al (2019) Non-dimensional analysis of the criticality of Li-ion battery thermal runaway behavior. J Hazard Mater 369:268–278. https://doi.org/10.1016/j.jhazmat.2019.01.049
[26] Huang P et al (2016) Experimental and modeling analysis of thermal runaway propagation over the large format energy storage battery module with Li4Ti5O12 anode. Appl Energy 183:659–673. https://doi.org/10.1016/j.apenergy.2016.08.160
[27] Shah K et al (2016) Experimental and theoretical analysis of a method to predict thermal runaway in Li-ion cells. J Power Sources 330:167–174. https://doi.org/10.1016/j.jpowsour.2016.08.133
[28] Hu Z et al (2020) Numerical study of self-heating ignition of a box of lithium ion batteries during storage. Fire Technol. https://doi.org/10.1007/s10694-020-00998-8
[29] Wen C-Y et al (2012) Thermal runaway features of 18650 lithium-ion batteries for LiFePO4 cathode material by DSC and VSP2. J Therm Anal Calorim 109(3):1297–1302. https://doi.org/10.1007/s10973-012-2573-2
[30] Said AO et al (2019) Simultaneous measurement of multiple thermal hazards associated with a failure of prismatic lithium ion battery. Proc Combust Inst 37(3):4173–4180. https://doi.org/10.1016/j.proci.2018.05.066
[31] Gray B (2002) Spontaneous combustion and self-heating, 3rd edn. In: DiNenno PJ (ed) SFPE handbook of fire protection engineering, pp 211–228
[32] Incropera FP et al (2007) Fundamentals of heat and mass transfer. Wiley, Hoboken
[33] Werner D et al (2017) Thermal conductivity of Li-ion batteries and their electrode configurations—a novel combination of modelling and experimental approach. J Power Sources 364:72–83. https://doi.org/10.1016/j.jpowsour.2017.07.105
[34] Takashi U et al (2011) Self-discharge behavior and its temperature dependence of carbon electrodes in lithium-ion batteries. J Power Sources 196:8598–8603. https://doi.org/10.1016/j.jpowsour.2011.05.066

Xuanze He, Francesco Restuccia, Yue Zhang, Zhenwen Hu, Xinyan Huang, Jun Fang, Guillermo Rein. https://doi.org/10.1007/s10694-020-01011-y
The gradient

1 Overview of differentiation
2 Gradients vs. vector fields
3 The change of a function of several variables: the difference
4 The rate of change of a function of several variables: the gradient
5 Algebraic properties of the difference quotients and the gradients
6 Compositions and the Chain Rule
7 The gradient is perpendicular to the level curves
8 Monotonicity of functions of several variables
9 Differentiation and anti-differentiation
10 When is anti-differentiation possible?
11 When is a vector field a gradient?

Overview of differentiation

Where are we in our study of functions in high dimensions? Once again, we provide a diagram that captures all types of functions we have seen so far as well as those we haven't seen yet. They are placed on the $xy$-plane with the $x$-axis and the $y$-axis representing the dimensions of the input space and the output space. The first column consists of all parametric curves and the first row of all functions of several variables. The two have one cell in common; that is the numerical functions. This time we will see how everything is interconnected. The red arrows show, for each type of function, what type of functions its difference quotients and its derivatives are. In Chapter 7 and beyond, we faced only numerical functions, and we implicitly used the fact that differentiation will not make us leave the confines of this environment. In particular, every function defined on the edges of a partition is the difference quotient of some function defined on the nodes of the partition. Furthermore, every continuous function is integrable and, therefore, is somebody's derivative. In this sense, the arrow can be reversed. More recently, we defined the difference quotient and the derivative of a parametric curve in ${\bf R}^n$ and those are again parametric curves in ${\bf R}^n$ (think location vs. velocity). That's why we have arrows that come back to the same cell. Once again, every continuous parametric curve is somebody's derivative. In this sense, the arrow can be reversed. The study of functions of several variables would be incomplete without understanding their rates of change! What we know so far is that we can compute the rate of change of such a function of two variables in the two main directions. The result is given by a vector called its gradient. For each point on the plane, we have a single vector, but what if we carry out this computation over the whole plane? What if, in order to keep track of the correspondence, we attach this vector to the point it came from? The result is a vector field. It is a function from ${\bf R}^2$ to ${\bf R}^2$ and it is placed on the diagonal of our table. The same happens to functions of three variables and so on. Every gradient is a vector field but not every vector field is a gradient. In this sense, the arrow cannot be reversed! The situation seems to mimic the one with numerical functions: there are non-integrable functions. However, the arrow in the first cell is reversible if we limit ourselves to smooth (i.e., infinitely many times differentiable) functions. The problem is more profound with vector fields, as we shall see later.

Gradients vs. vector fields

Vector fields are just functions with: the input consisting of two numbers and the output consisting of two numbers. We just choose to treat the former as a point on the plane and the latter as a vector attached to that point.
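This attachment of arrows to points is easy to automate. Here is a minimal Python sketch (numpy and matplotlib assumed available; not part of the original text) that plots the first vector field examined below:

```python
import numpy as np
import matplotlib.pyplot as plt

# a grid of locations on the plane
x, y = np.meshgrid(np.linspace(-2, 2, 11), np.linspace(-2, 2, 11))

# the vector field V(x,y) = <x, y>: one arrow attached to each location
plt.quiver(x, y, x, y)
plt.gca().set_aspect('equal')
plt.title('V(x,y) = <x, y>')
plt.show()
```

Replacing the last two arguments of `quiver` with any other two functions of `x` and `y` plots any other vector field the same way.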
Attaching the vector to the point it came from is just a clever way to visualize such a complex -- in comparison to the ones we have seen so far -- function. It's a location-dependent vector! Let's plot some vector fields by hand and then analyze them.

Example (accelerated outflow). Let's consider this simple vector field: $$V(x,y)=<x,y>.$$ A vector field is just two functions of two variables: $$V(x,y)=<p(x,y),\ q(x,y)> \text{ with } p(x,y)=x,\ q(x,y)=y.$$ Plotting those two functions separately, as before, does not produce a useful visualization. However, we will still follow the same pattern: we pick a few points on the plane, compute the output for each, and assign it to that point. The difference is that instead of a single number we have two, and instead of a vertical bar that we erect at that point to visualize this number, we draw an arrow. We carry this out for these nine points around the origin: $$\begin{array}{ccccc} (-1,1)&(0,1)&(1,1)\\ (-1,0)&(0,0)&(1,0)\\ (-1,-1)&(0,-1)&(1,-1) \end{array}\quad\leadsto\quad\begin{array}{ccccc} <-1,1>&<0,1>&<1,1>\\ <-1,0>&<0,0>&<1,0>\\ <-1,-1>&<0,-1>&<1,-1> \end{array}$$ Each vector on the right starts at the corresponding point on the left. What about the rest? We can guess that the magnitudes increase as we move away from the origin while the directions remain the same: away from the origin. For each point $(x,y)$ we copy the vector that ends there, i.e., $<x,y>$, and place it at this location. Now, if the vectors represent velocities of particles, what kind of flow is this? This isn't a fountain or an explosion (the particles would go slower away from the source). The answer is: this is a flow on a surface -- under gravity -- that gets steeper and steeper away from the origin! This surface would look like a paraboloid. Can we be more specific? Well, this surface looks like the graph of a function of two variables and the flow seems to follow the line of fastest descent; maybe our vector field is the gradient of this function? We will find out, but first let's take a look at the vector field visualized as a system of pipes. We recognize this as a discrete $1$-form. Now, the question above becomes: is it possible to produce this pattern of flow in the pipes by controlling the pressure at the joints? So, is this vector field -- when limited to the edges of a grid -- the difference of a function of two variables? Let's take the latter question; we need to find such a function $z=f(x,y)$ that $$\Delta f=<x,y>.$$ The latter is just an abbreviation; the actual differences are $x$ on the horizontal edges and $y$ on the vertical. Let's concentrate on just one cell: $$[x,x+\Delta x]\times [y,y+\Delta y].$$ We choose the secondary node of an edge to be the primary node at the beginning of the edge: horizontal: $(x,y)$ in $[x,x+\Delta x]\times \{y\}$, etc.; vertical: $(x,y)$ in $\{x\}\times [y,y+\Delta y]$, etc. Then our equation develops as follows: $$\Longrightarrow\ \begin{cases} \Delta_x f&=x\\ \Delta_y f&=y \end{cases}\ \Longrightarrow\ \begin{cases} f(x+\Delta x,y)-f(x,y)&=x\\ f(x,y+\Delta y)-f(x,y)&=y \end{cases}\ \Longrightarrow\ \begin{cases} f(x+\Delta x,y)&=f(x,y)+x\\ f(x,y+\Delta y)&=f(x,y)+y \end{cases}$$ These recursive relations allow us to construct $f$ one node at a time. We use the first one to progress horizontally and the second to progress vertically: $$\begin{array}{cccc} \uparrow& &\uparrow\\ (x,y+\Delta y)&\to&(x+\Delta x,y+\Delta y)&\to\\ \uparrow& &\uparrow\\ (x,y)&\to&(x+\Delta x,y)&\to \end{array}$$ The problem is solved!
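Here is a minimal sketch (Python with numpy, assumed available; the grid spacing and the starting value are arbitrary choices of ours) that builds $f$ from the two recursive relations and confirms that the two routes to the opposite corner agree:

```python
import numpy as np

dx = dy = 1.0           # grid spacing (an arbitrary choice)
nx, ny = 5, 5
f = np.zeros((nx, ny))  # f(0,0) = 0, an arbitrary "constant of integration"

xs = np.arange(nx) * dx  # x-coordinates of the nodes
ys = np.arange(ny) * dy  # y-coordinates of the nodes

# march right along the bottom row: f(x+dx, y) = f(x, y) + x
for i in range(1, nx):
    f[i, 0] = f[i - 1, 0] + xs[i - 1]
# then march up each column: f(x, y+dy) = f(x, y) + y
for j in range(1, ny):
    f[:, j] = f[:, j - 1] + ys[j - 1]

# path-independence check: going up first and then right gives the same values
g = np.zeros((nx, ny))
for j in range(1, ny):
    g[0, j] = g[0, j - 1] + ys[j - 1]
for i in range(1, nx):
    g[i, :] = g[i - 1, :] + xs[i - 1]

print(np.allclose(f, g))   # True: no conflict, the construction is consistent
```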
However, there may be a conflict: what if we apply these two formulas consecutively but in a different order? Fortunately, going horizontally then vertically produces the same outcome as going vertically then horizontally: $$f(x+\Delta x,y+\Delta y)=f(x,y)+x+y.$$ This is the first instance of path-independence. Now the continuous case. Suppose $V$ is the gradient of some differentiable function of two variables $z=f(x,y)$. The result of this assumption is a vector equation that breaks into two: $$V=\nabla f\ \Longrightarrow\ V(x,y)=<x,y>=<f_x(x,y),\, f_y(x,y)>\ \Longrightarrow\ \begin{cases} x&=f_x(x,y),\\y&=f_y(x,y).\end{cases}$$ We now integrate one variable at a time: $$\begin{array}{lll} x&=f_x(x,y)&\Longrightarrow &f(x,y)=\int x\, dx &=\frac{x^2}{2}+C &\quad =\frac{x^2}{2}+C(y);\\ y&=f_y(x,y)&\Longrightarrow &f(x,y)=\int y\, dy &=\frac{y^2}{2}+K &\quad =\frac{y^2}{2}+K(x). \end{array}$$ Note that in either case we add the familiar constants of integration "$+C$" and "$+K$" (different for the two different integrations)... however, these constants are only constant relative to $x$ and $y$ respectively. That makes them functions of $y$ and $x$ respectively! Putting the two together, we have the following restriction on the two unknown functions: $$f(x,y)=\frac{x^2}{2}+C(y)=\frac{y^2}{2}+K(x).$$ Can we find such functions $C$ and $K$? If we group the terms, the choice becomes obvious: $$\begin{array}{lll} \frac{x^2}{2}\\+C(y)\\ \end{array}=\begin{array}{lll} K(x)\\+\frac{y^2}{2}\\ \end{array}$$ If (or when) the choice is not obvious, we can just plug some values into this equation and examine the results: $$\begin{array}{lll} x=0&\Longrightarrow & C(y)=\frac{y^2}{2}+K(0)&\Longrightarrow & C(y)=\frac{y^2}{2}+\text{ constant};\\ y=0&\Longrightarrow & C(0)+\frac{x^2}{2}=K(x)&\Longrightarrow & K(x)=\frac{x^2}{2}+\text{ constant}.\\ \end{array}$$ So, either of the functions $C(y)$ and $K(x)$ differs from the corresponding expression by a constant. Therefore, we have: $$f(x,y)=\frac{x^2}{2}+\frac{y^2}{2}+L,$$ for some constant $L$. The surface is indeed a paraboloid of revolution. $\square$

When a vector field is the gradient of some function of two variables, this function is called a potential function, or simply a potential, of the vector field. Note that finding for a given vector field $V$ a function $f$ such that $\nabla f=V$ amounts to anti-differentiation. The following is an analog of several familiar results: any two potential functions of the same vector field defined on an open disk differ by a constant. So, you've found one -- you've found all, just like in Chapters 9 and 11. The proof is exactly the same as before, but it relies, just as before, on the properties of the derivatives (i.e., gradients) discussed in the next section. The graphs of these functions then differ by a vertical shift. It is as if the floor and the ceiling in a cave have the exact same slope in all directions at each location; then the height of the ceiling is the same throughout the cave.
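This kind of anti-differentiation can also be delegated to a computer algebra system. A minimal sketch with sympy (assumed available; not part of the original text), following the partial-integration steps above for $V=<x,y>$:

```python
import sympy as sp

x, y = sp.symbols('x y')
p, q = x, y                       # the components of V(x,y) = <x, y>

# integrate p with respect to x; the "constant" may still depend on y
F = sp.integrate(p, x)            # x**2/2
# what is missing must account for the rest of q;
# q - dF/dy is free of x here, so integrating it in y is legitimate
C = sp.integrate(q - sp.diff(F, y), y)   # y**2/2
f = F + C

print(f)                                                               # x**2/2 + y**2/2
print(sp.simplify(sp.diff(f, x) - p), sp.simplify(sp.diff(f, y) - q))  # 0 0
```

For the rotational field of the next example, the same procedure breaks down: after the first integration, what remains to be integrated with respect to $y$ still depends on $x$.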
Example (rotational flow). We consider this vector field again: $$V(x,y)=<y,\, -x>.$$ Our two functions of two variables are: $$V(x,y)=<p(x,y),\, q(x,y)> \text{ with } p(x,y)=y,\ q(x,y)=-x.$$ We pick a few points on the plane, compute the output for each, and assign it to that point: $$\begin{array}{ccccc} (-1,1)&(0,1)&(1,1)\\ (-1,0)&(0,0)&(1,0)\\ (-1,-1)&(0,-1)&(1,-1) \end{array}\quad\leadsto\quad\begin{array}{ccccc} <1,1>&<1,0>&<1,-1>\\ <0,1>&<0,0>&<0,-1>\\ <-1,1>&<-1,0>&<-1,-1> \end{array}$$ Each vector on the right starts at the corresponding point on the left. Now, if the vectors represent velocities of particles, what kind of flow is this? It doesn't look like the water is flowing away from the center. Is it a whirl? Let's plot some more. Some of these lie on the axes and they are all perpendicular to those axes. We realize that there is a pattern: $V(x,y)$ is perpendicular to $<x,y>$. Indeed, $$<y,-x>\cdot <x,y>=yx-xy=0.$$ From what we know about parametric curves, to follow these arrows a curve would be rounding the origin, never getting closer to or farther away from it; this must be a rotation. Now, is this a flow on a surface produced by gravity like last time? If we visualize the vector field as a system of pipes, the question above becomes: is it possible to produce this pattern of flow in the pipes by controlling the pressure at the joints? Let's find out. We will try to solve this equation for $z=f(x,y)$: $$\Delta f=<y,\, -x>.$$ Just as in the last example, we choose the secondary node to be the primary node at the beginning of the edge. We have: $$\Longrightarrow\ \begin{cases} \Delta_x f&=y\\ \Delta_y f&=-x \end{cases}\ \Longrightarrow\ \begin{cases} f(x+\Delta x,y)-f(x,y)&=y\\ f(x,y+\Delta y)-f(x,y)&=-x \end{cases}\ \Longrightarrow\ \begin{cases} f(x+\Delta x,y)&=f(x,y)+y\\ f(x,y+\Delta y )&=f(x,y)-x \end{cases}$$ Can we use these recursive formulas to construct $f$? Is there a conflict: if we start at $(x,y)$ and then get to $(x+\Delta x,y+\Delta y)$ in the two different ways, will we have the same outcome? $$\begin{array}{cccc} \uparrow& &\uparrow\\ (x,y+\Delta y)&\to&(x+\Delta x,y+\Delta y)&\to\\ \uparrow& &\uparrow\\ (x,y)&\to&(x+\Delta x,y)&\to \end{array}$$ Unfortunately, the outcome is not the same: $$\begin{array}{ccc} & f(x+\Delta x,y+\Delta y)&=f(x,y)+y-(x+\Delta x)&=f(x,y)-x+y-\Delta x\\ \ne &f(x+\Delta x,y+\Delta y)&=f(x,y)-x+(y+\Delta y)&=f(x,y)-x+y+\Delta y. \end{array}$$ This is path-dependence! Suppose $V$ is the gradient of some function of two variables $f$: $$V=\nabla f\ \Longrightarrow\ V(x,y)=<y,\, -x>=<f_x(x,y),\, f_y(x,y)>\ \Longrightarrow\ \begin{cases} y&=f_x(x,y),\\-x&=f_y(x,y).\end{cases}$$ What do we do with those? They are partial derivatives, so let's solve these equations by partial integration, one variable at a time: $$\begin{array}{lll} y&=f_x(x,y)&\Longrightarrow &f(x,y)=\int y\, dx &=xy+C ;\\ -x&=f_y(x,y)&\Longrightarrow &f(x,y)=\int -x\, dy &=-xy+K . \end{array}$$ Putting the two together, we have $$f(x,y)=xy+C(y)=-xy+K(x).$$ Can we find such functions $C$ and $K$? If we try to group the terms, they don't group well: $$\begin{array}{lll} xy\\+C(y)\\ \end{array}=\begin{array}{lll} -xy\\+K(x)\\ \end{array}$$ To confirm that there is a problem, let's plug some values into this equation: $$\begin{array}{lll} x=0&\Longrightarrow & C(y)=K(0)&\Longrightarrow & C(y)=\text{ constant};\\ y=0&\Longrightarrow & C(0)=K(x)&\Longrightarrow & K(x)=\text{ constant}.\\ \end{array}$$ So, both $C$ and $K$ are constant functions, which is impossible!
Indeed, on the left we have a function of two variables and on the right a constant: $$2xy=-C+K=\text{ constant}.$$ This contradiction proves that our assumption that $V$ has a potential function was wrong; there is no such $f$. We may even say that the vector field isn't "integrable"! Geometrically, there is no surface on which a flow of water would produce this pattern. $\square$

An insightful if informal argument to the same effect is as follows. Suppose we travel along the arrows of a vector field. Suppose that eventually we arrive at our original location. Is it possible that this vector field has a potential function? Is it the gradient of some function of two variables? If it is, we have followed the direction of the (fastest) increase of this function... but once we have come back, what is the elevation? After all this climbing, it can't be the same! This function therefore cannot be continuous at this location. Then it also cannot be differentiable, a contradiction! Our conclusion that some continuous vector fields on the plane aren't derivatives has no analog in the $1$-dimensional, numerical case discussed in Parts I and II.

Example. Three-dimensional vector fields are more complex. The one below is similar to the first example above: $$V(x,y,z)=<x,y,z>.$$ The vectors point directly away from the origin. Just as for its two-dimensional analog, there is a potential function: $$f(x,y,z)=\frac{x^2}{2}+\frac{y^2}{2}+\frac{z^2}{2}.$$ The level surfaces of this function are concentric spheres. $\square$

The change of a function of several variables: the difference

We consider functions of $n$ variables again. First let's look at the point-slope form of linear functions: $$l(x_1,...,x_n)=p+m_1(x_1-a_1)+...+m_n(x_n-a_n),$$ where $p$ is the $z$-intercept, $m_1,...,m_n$ are the chosen slopes of the plane along the axes, and $a_1,...,a_n$ are the coordinates of the chosen point in ${\bf R}^n$. Let's recast this expression, as before, in terms of the dot product with the increment of the independent variable: $$l(X)=p+M\cdot (X-A),$$ where $M=<m_1,...,m_n>$ is the vector of slopes, $A=(a_1,...,a_n)$ is the point in ${\bf R}^n$, $X=(x_1,...,x_n)$ is our variable point in ${\bf R}^n$, and $X-A$ is how far we step away from our point of interest $A$. Then we can say that the vector $N=<m_1,...,m_n,-1>$ is perpendicular to the graph of this function (a "plane" in ${\bf R}^{n+1}$). The conclusion holds independently of any choice of a coordinate system! Suppose now we have an arbitrary function $z=f(X)$ of $n$ variables, i.e., $z$ is a real number and $X=(x_1,...,x_n)$ is in ${\bf R}^n$. We start with the discrete case. Suppose ${\bf R}^n$ is equipped with a rectangular grid with sides $\Delta x_k$ along the axis of each variable $x_k$. It serves as a partition with secondary nodes provided on the edges of the grid. We only consider the increment of $X$ in one of those directions: $$\Delta X_k=<0,...,0,\Delta x_k,0,...,0>.$$

Definition. The partial difference of $z=f(X)=f(x_1,...,x_n)$ with respect to $x_k$ at $X$ is defined to be the change of $z$ with respect to $x_k$, denoted by: $$\Delta_k f\, (C)=f(X+\Delta X_k)-f(X),$$ where $C$ is a secondary node on the edge between the nodes $X$ and $X+\Delta X_k$.

We collect these changes into one function!

Definition.
The difference of $z=f(X)=f(x_1,...,x_n)$ at $X$ is defined to be the function on the edges of the grid whose value at the secondary node $C$ of the edge between $X$ and $X+\Delta X_k$ is the corresponding partial difference: $$\Delta f\, (C)=f(X+\Delta X_k)-f(X).$$

Definition. The partial difference quotient of $z=f(X)=f(x_1,...,x_n)$ with respect to $x_k$ at $X$ is defined to be the rate of change of $z$ with respect to $x_k$, denoted by: $$\frac{\Delta f}{\Delta x_k}(C)=\frac{f(X+\Delta X_k)-f(X)}{\Delta x_k},$$ where $C$ is a secondary node on the edge between $X$ and $X+\Delta X_k$.

Definition. The gradient of $z=f(X)=f(x_1,...,x_n)$ at $X$ is the vector field whose $k$th component is equal to the partial difference quotient of $z=f(X)$ with respect to $x_k$ at $X$: $$\operatorname{grad}f\, (X)=\left<\frac{f(X+\Delta X_1)-f(X)}{\Delta x_1},...,\frac{f(X+\Delta X_n)-f(X)}{\Delta x_n}\right>.$$

Example (gradient ascent/descent).

The rate of change of a function of several variables: the gradient

For the continuous case, we focus on one point $A=(a_1,...,a_n)$ in ${\bf R}^n$.

Definition. The partial derivative of $z=f(X)=f(x_1,...,x_n)$ with respect to $x_k$ at $X=A=(a_1,...,a_n)$ is defined to be the limit of the difference quotient with respect to $x_k$ at $x_k=a_k$, if it exists, denoted by: $$\frac{\partial f}{\partial x_k}(A) =\lim_{\Delta x_k\to 0}\frac{\Delta f}{\Delta x_k}(A),$$ or $f_k'(A)$.

The following is an obvious conclusion.

Theorem. The partial derivative of $z=f(X)$ with respect to $x_k$ at $X=A=(a_1,...,a_n)$ is found as the derivative of the numerical function $$g(x)=f(a_1,...,a_{k-1},x,a_{k+1},...,a_n),$$ evaluated at $x=a_k$; i.e., $$\frac{\partial f}{\partial x_k}(A) = \frac{d}{dx}f(a_1,...,a_{k-1},x,a_{k+1},...,a_n)\bigg|_{x=a_k}.$$

So, this is the derivative of $z=f(x_1,...,x_n)$ with respect to $x_k$ with the rest of the variables fixed. These are, of course, just the slopes along these edges of the graph.

Definition. Suppose $z=f(X)$ is defined at $X=A$ and $$l(X)=f(A)+M\cdot (X-A)$$ is any of its linear approximations at that point. Then, $z=l(X)$ is called the best linear approximation of $f$ at $X=A$ if the following is satisfied: $$\lim_{X\to A} \frac{ f(X) -l(X) }{||X-A||}=0.$$ In that case, the function $f$ is called differentiable at $X=A$, and the vector $M$ is called the gradient or the derivative of $f$ at $A$.

The numerator in the formula is the error of the approximation and the denominator is the length of the "run". The result is the tangent line in dimension $1$ and the tangent plane in dimension $2$. So, we stick with the functions of $n$ variables the graphs of which -- on a small scale -- look like lines in dimension $n=1$, like planes in dimension $n=2$, and generally like ${\bf R}^n$! But there is more to it: what about the level curves? Below is a visualization of a differentiable function of two variables and its level curves: not only do these curves look like straight lines when we zoom in, they also progress at a uniform rate. For example, this function is not differentiable.

Notation. There are multiple ways to write the gradient. First is the Lagrange style: $$f'(A),$$ and the Leibniz style: $$\frac{df}{dX}(A) \text{ and }\frac{dz}{dX}(A).$$ The following is also very common in science and engineering: $$\nabla f(A) \text{ and } \operatorname{grad}f(A).$$ Note that the gradient notation is to be read as: $$\big(\nabla f\big)(A),\ \big(\operatorname{grad}f\big)(A),$$ i.e., the gradient is computed and then evaluated at $X=A$.
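The slice recipe of the theorem above is easy to confirm with a computer algebra system. A minimal sympy sketch (assumed available; the sample function is our choice):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * sp.sin(y)          # a sample function of two variables
a, b = 1, sp.pi / 2           # the point A = (a, b)

# partial derivative with respect to x: differentiate the slice y = b
slice_x = f.subs(y, b)                 # x**2
print(sp.diff(slice_x, x).subs(x, a))  # 2

# the gradient collects all the partial derivatives into one vector
grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
print(grad.subs({x: a, y: b}))         # Matrix([[2], [0]])
```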
Theorem. For a function differentiable at $X=A$, there is only one best linear approximation at $A$.

Proof. By contradiction. Suppose we have two such functions $$l(X)=f(A)+M\cdot (X-A)\ \text{ with }\ \lim_{X\to A} \frac{ f(X) -l(X) }{||X-A||}=0,$$ and $$q(X)=f(A)+Q\cdot (X-A)\ \text{ with }\ \lim_{X\to A} \frac{ f(X) -q(X) }{||X-A||}=0.$$ Then we have $$\lim_{X\to A} \bigg( \frac{ f(X) -f(A) }{||X-A||} -\frac{ M\cdot (X-A)}{||X-A||} \bigg)=0,$$ and $$\lim_{X\to A} \bigg( \frac{ f(X) -f(A) }{||X-A||} -\frac{ Q\cdot (X-A)}{||X-A||} \bigg)=0.$$ Therefore, subtracting the two, by the Sum Rule we have: $$\lim_{X\to A} \bigg( \frac{ Q\cdot (X-A)}{||X-A||} -\frac{ M\cdot (X-A)}{||X-A||} \bigg)=0,$$ or $$\lim_{X\to A} \frac{ (Q-M)\cdot (X-A)}{||X-A||} =0,$$ or $$\lim_{X\to A} (Q-M)\cdot\frac{ X-A}{||X-A||} =0.$$ Unless $Q-M=0$, however, this limit does not exist: as $X$ approaches $A$ along the direction of $Q-M$, the expression is constantly $||Q-M||\ne 0$, while along the opposite direction it is $-||Q-M||$. Therefore, $M=Q$. $\blacksquare$

Below is a visualization of a differentiable function of three variables given by its level surfaces: not only do these surfaces look like planes when we zoom in, they also progress at a uniform rate. For example, this function is not differentiable.

Theorem. If $$l(X)=f(A)+M\cdot (X-A)$$ is the best linear approximation of $z=f(X)$ at $X=A$, then $$M=\operatorname{grad}f(A)=\left<\frac{\partial f}{\partial x_1}(A),..., \frac{\partial f}{\partial x_n}(A)\right>.$$

Now, suppose we carry out this procedure of linear approximation at each location throughout the domain of the function. We will have a vector at each location. This is a vector field! The gradient serves as the derivative of a differentiable function.

Warning: When the function is not differentiable, combining its variables, say $x$ and $y$, into one, $X=(x,y)$, may be ill-advised even when the partial derivatives make sense.

Algebraic properties of the difference quotients and the gradients

Just as in dimension $1$, differentiation is a special kind of function too, a function of functions: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccc} f & \mapsto & \begin{array}{|c|}\hline\quad \frac{d}{dX} \quad \\ \hline\end{array} & \mapsto & G=f' \end{array}$$ The main difference is that the domain and the range of this function are different. We need to understand how this function operates.

Warning: Even though the derivative of a parametric curve in ${\bf R}^n$ at a point and the derivative of a function of $n$ variables at a point are both vectors in ${\bf R}^n$, this doesn't make the two derivatives similar.

We start with linear functions. After all, they serve as good-enough substitutes for the functions around a fixed point. This is the "Linear Sum Rule": $$\begin{array}{ll|l} &\text{linear function}&\text{its gradient}\\ \hline f(X)&=p+M\cdot(X-A)&M\\ +\\ g(X)&=q+N\cdot(X-A)&N\\ \hline f(X)+g(X)&=p+q+(M+N)\cdot(X-A)&M+N\\ \end{array}$$ We used the linearity of the dot product. The "Linear Constant Multiple Rule" relies on the same property: $$\begin{array}{ll|l} &\text{linear function}&\text{its gradient}\\ \hline f(X)&=p+M\cdot(X-A)&M\\ \cdot k\\ \hline k\cdot f(X)&=kp+(kM)\cdot(X-A)&kM\\ \end{array}$$ Now, the rules for the general case follow from these two via limits and the corresponding rules for limits.
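Before stating the general rules, here is a quick symbolic spot-check of the first two of them on sample functions of our choice (sympy, assumed available):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y
g = sp.sin(x) + y**3

def grad(h):
    """The gradient of a function of two variables as a column vector."""
    return sp.Matrix([sp.diff(h, x), sp.diff(h, y)])

# Sum Rule: grad(f + g) = grad f + grad g
print(sp.simplify(grad(f + g) - (grad(f) + grad(g))))   # zero matrix

# Constant Multiple Rule: grad(5 f) = 5 grad f
print(sp.simplify(grad(5 * f) - 5 * grad(f)))           # zero matrix
```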
Theorem (Sum Rule). (A) The difference and the difference quotient of the sum of two functions are the sum of their differences and difference quotients respectively; i.e., for any two functions of several variables $f,g$ defined at the nodes $X$ and $X+\Delta X$ of the partition, the differences and difference quotients defined at the corresponding secondary node $C$ satisfy: $$\Delta(f+g)\, (C)=\Delta f\, (C)+\Delta g\, (C),$$ and $$\frac{\Delta(f+g)}{\Delta X}(C)=\frac{\Delta f}{\Delta X}(C)+\frac{\Delta g}{\Delta X}(C).$$ (B) The sum of two functions differentiable at a point is differentiable at that point and its derivative is equal to the sum of their derivatives; i.e., for any two functions $f,g$ differentiable at $X=A$, we have: $$\frac{d(f+g)}{dX}(A)= \frac{df}{dX}(A) + \frac{dg}{dX}(A).$$

Proof. Suppose $$l(X)=f(A)+M\cdot (X-A)\text{ and } k(X)=g(A)+N\cdot (X-A)$$ are the best linear approximations at $A$ of $f$ and $g$ respectively. Then, the following is satisfied: $$\lim_{X\to A} \frac{ f(X)-l(X) }{||X-A||}=0\text{ and }\lim_{X\to A} \frac{ g(X)-k(X) }{||X-A||}=0.$$ We can add the two limits together, as allowed by the Sum Rule for Limits, and then manipulate the expression: $$\begin{array}{lll} 0&=\lim_{X\to A} \frac{ f(X)-l(X) }{||X-A||}+\lim_{X\to A} \frac{ g(X)-k(X) }{||X-A||}\\ &=\lim_{X\to A} \frac{ (f+g)(X)-(l+k)(X) }{||X-A||}.\\ \end{array}$$ According to the definition, $$l(X)+k(X)=f(A)+g(A)+(M+N)\cdot (X-A)$$ is, therefore, the best linear approximation of $f+g$ at $A$, and its gradient is $M+N$. $\blacksquare$

Theorem (Constant Multiple Rule). (A) The difference and the difference quotient of a multiple of a function are the multiple of the difference and the difference quotient respectively; i.e., for any real $k$ and any function $f$ defined at the nodes $X$ and $X+\Delta X$ of the partition, the differences and the difference quotients defined at the corresponding secondary node $C$ satisfy: $$\Delta(k\cdot f)\, (C)=k\cdot \Delta f\, (C),$$ and $$\frac{\Delta(k\cdot f)}{\Delta X}(C)=k\cdot \frac{\Delta f}{\Delta X}(C).$$ (B) A multiple of a function differentiable at a point is differentiable at that point and its derivative is equal to the multiple of the function's derivative; i.e., for any real $k$ and any function $f$ differentiable at $X=A$, we have: $$\frac{d(k\cdot f)}{dX}(A) = k\cdot \frac{df}{dX}(A).$$

Unfortunately, multiplication and division of linear functions do not produce linear functions...

Warning: Just as in the case of numerical functions, we face and reject the "naive" product rule: the derivative of the product is not the product of the derivatives! Not only do the units not match; it's worse this time: all three of the derivatives are vectors and the product of two can't give us the third...

Theorem (Product Rule).
(A) The difference and the difference quotient of the product of two functions are found as a combination of these functions and their differences and difference quotients respectively; for any two functions $f,g$ defined at the nodes $X$ and $X+\Delta X$ of the partition, the differences and the difference quotients defined at the corresponding secondary node $C$ satisfy: $$\Delta (f\cdot g)\, (C)=f(X+\Delta X) \cdot \Delta g\, (C) + \Delta f\, (C) \cdot g(X),$$ and $$\frac{\Delta (f\cdot g)}{\Delta X}(C)=f(X+\Delta X) \cdot \frac{\Delta g}{\Delta X}(C) + \frac{\Delta f}{\Delta X}(C) \cdot g(X).$$ (B) The product of two functions differentiable at a point is differentiable at that point and its derivative is found as a combination of these functions and their derivatives; specifically, given two functions $f,g$ differentiable at $X=A$, we have: $$\frac{d(f\cdot g)}{dX}(A) = f(A)\cdot \frac{dg}{dX}(A) + \frac{df}{dX}(A)\cdot g(A).$$

The formula is identical to that for numerical functions, but we have to examine it carefully; some things have changed! Indeed, on the right-hand side either term is the product of the value of one of the functions (a number) and the value of the gradient of the other (a vector). Furthermore, we have a vector at the end of the computation: $$\begin{array}{ccccc} \text{scalar}&&\text{vector}&&\text{vector}&&\text{scalar}\\ f(A)& \cdot&\nabla g\, (A)& +& \nabla f\, (A)&\cdot &g(A)\\ &\text{vector}&&&&\text{vector}&\\ &&&\text{vector} \end{array}$$ It matches the left-hand side. Moreover, when $A$ varies, the formula takes the form with the algebraic operations discussed in the last section: $$(f \cdot g)' = f\cdot g' + f'\cdot g.$$ Here, either term is the product of one of the functions, a scalar function, and the gradient of the other, a vector field. Such a product is again a vector field and so is their sum. It matches the left-hand side.

Example. Let's differentiate a sample product (a choice of our own): $$h(x,y)=x^2\cdot y.$$ By the Product Rule with $f(x,y)=x^2$ and $g(x,y)=y$: $$\nabla h=f\, \nabla g+\nabla f\, g=x^2<0,1>+<2x,0>y=<2xy,\, x^2>,$$ which matches the direct computation of the two partial derivatives. $\square$

Theorem (Quotient Rule). (A) The difference and the difference quotient of the quotient of two functions are found as a combination of these functions and their differences and difference quotients; for any two functions $f,g$ defined at the nodes $X$ and $X+\Delta X$ of the partition, the difference quotients defined at the corresponding secondary node $C$ satisfy: $$\Delta (f/ g)\, (C)=\frac{\Delta f\, (C) \cdot g(X+\Delta X)-f(X+\Delta X) \cdot\Delta g\,(C)}{g(X)g(X+\Delta X)},$$ and $$\frac{\Delta (f/ g)}{\Delta X}(C)=\frac{\frac{\Delta f}{\Delta X}(C) \cdot g(X+\Delta X)-f(X+\Delta X) \cdot \frac{\Delta g}{\Delta X}(C)}{g(X)g(X+\Delta X)},$$ provided $g(X),g(X+\Delta X) \ne 0$. (B) The quotient of two functions differentiable at a point is differentiable at that point and its derivative is found as a combination of these functions and their derivatives; specifically, given two functions $f,g$ differentiable at $X=A$, we have: $$\frac{d(f/g)}{dX}(A) = \frac{\frac{df}{dX}(A)\cdot g(A) - f(A)\cdot \frac{dg}{dX}(A)}{g(A)^2},$$ provided $g(A) \ne 0$.

Similar to the previous theorem, either term in the numerator is the product of a scalar function and a vector field. Their sum is a vector field and it's still a vector field when we divide by a scalar function.
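Both rules can be spot-checked the same way; a minimal sympy sketch (the sample functions are our choice):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y
g = sp.exp(x * y)

def grad(h):
    """The gradient of a function of two variables as a column vector."""
    return sp.Matrix([sp.diff(h, x), sp.diff(h, y)])

# Product Rule: grad(fg) = f grad g + grad f g
print(sp.simplify(grad(f * g) - (f * grad(g) + g * grad(f))))        # zero matrix

# Quotient Rule: grad(f/g) = (grad f g - f grad g) / g^2
print(sp.simplify(grad(f / g) - (g * grad(f) - f * grad(g)) / g**2)) # zero matrix
```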
This is the summary of the four properties re-stated in the gradient notation: $$\begin{array}{|ll|ll|} \hline \text{SR: }& \nabla(f+g)=\nabla f\, +\nabla g & \text{CMR: }& \nabla (kf)=k\, \nabla f& \text{ for any real }k\\ \hline \text{PR: }& \nabla(fg)=\nabla f\, g+f\, \nabla g& \text{QR: }& \nabla (f/g)=\frac{\nabla f\, g-f\, \nabla g}{g^2} &\text{ wherever }g\ne 0\\ \hline \end{array}$$

Compositions and the Chain Rule

How does one learn the terrain around him without the ability to fly? By taking hikes around the area! Mathematically, the former is a function of two variables and the latter is a parametric curve. Furthermore, we examine the surface of the graph of this function via its composition with these parametric curves. There are two functions with which to compose a function of several variables: a parametric curve before or a numerical function after. This is the former: $$\begin{array}{|ccccc|} \hline &\text{trip map} & & \bigg|\\ \hline t&\longrightarrow & (x,y) &\longrightarrow & z\\ \hline &\bigg| & &\text{terrain map}\\ \hline \end{array}$$ Recall how we interpret this composition. We imagine creating a trip plan as a parametric curve $X=F(t)$: the times and the places put on a simple automotive map, and then bring in the terrain map of the area as a function of two variables $z=f(x,y)$. The former gives us the location for every moment of time and the latter the elevation for every location. Their composition gives us the elevation for every moment of time. To understand how the derivatives of these two functions are combined, we start with linear functions. In other words, what if we travel along a straight line on a flat, not necessarily horizontal, surface (maybe a roof)? After this simple substitution, the derivatives are found by direct examination: $$\begin{array}{lll|ll} &&\text{linear function}&\text{its derivative}\\ \hline \text{parametric curve:}&X=F(t)&=A+D(t-a)&D&\text{ in } {\bf R}^n\\ &\circ \\ \text{function of several variables: }&z=f(X)&=p+M\cdot (X-A)&M&\text{ in } {\bf R}^n\\ \hline \text{numerical function: }&f(F(t))&=p+M\cdot ( A+D(t-a) -A)\\ &&=p+(M\cdot D)(t-a)&M\cdot D&\text{ in } {\bf R} \end{array}$$ Thus, the derivative of the composition is the dot product of the two derivatives. We use this result for the general case of arbitrary differentiable functions via their linear approximations. The result is understood in the same way as in dimension $1$: it follows that the speed of the climb is proportional to both our horizontal speed and the steepness of the terrain. This number is computed as the dot product of: the derivative of the parametric curve $F$ of the trip, i.e., the horizontal velocity $\left< \frac{dx}{dt}, \frac{dy}{dt} \right>$, and the gradient of the terrain function $f$, i.e., $\left< \frac{\partial z}{\partial x}, \frac{\partial z}{\partial y} \right>$. For the discrete case, we need the parametric curve $X=F(t)$ to map the partition for $t$ to the partition for $X$. In other words, it has to follow the grid.
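Before the formal statement below, here is a symbolic spot-check of this dot-product formula on a sample terrain and a sample trip (sympy, assumed available; the functions are our choice):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
f = x**2 + 3 * y          # the terrain: a function of two variables
F = (sp.cos(t), t**2)     # the trip: a parametric curve (x, y) = F(t)

# left-hand side: differentiate the composition f(F(t)) directly
lhs = sp.diff(f.subs({x: F[0], y: F[1]}), t)

# right-hand side: gradient of f along the curve, dotted with the velocity
grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)]).subs({x: F[0], y: F[1]})
velocity = sp.Matrix([sp.diff(F[0], t), sp.diff(F[1], t)])
rhs = grad_f.dot(velocity)

print(sp.simplify(lhs - rhs))   # 0: the two sides agree for all t
```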
Theorem (Chain Rule I). (A) The difference quotient of the composition of two functions is found as the product of the two difference quotients; i.e., for any parametric curve $X=F(t)$ defined at adjacent nodes $t$ and $t+\Delta t$ of a partition and any function of several variables $z=f(X)$ defined at the adjacent nodes $X=F(t)$ and $X+\Delta X=F(t+\Delta t)$ of a partition, the differences and the difference quotients (defined at the secondary nodes $a$ and $A=F(a)$ within these edges of the two partitions respectively) satisfy: $$\Delta (f\circ F)(a)= \Delta f\, (A),$$ and $$\frac{\Delta (f\circ F)}{\Delta t}(a)= \frac{\Delta f}{\Delta X}(A) \cdot \frac{\Delta F}{\Delta t}(a).$$ (B) The composition of a function differentiable at a point and a function differentiable at the image of that point is differentiable at that point and its derivative is found as the product of the two derivatives. In other words, if a parametric curve $X=F(t)$ is differentiable at $t=a$ and a function of several variables $z=f(X)$ is differentiable at $X=A=F(a)$, then we have: $$\frac{d (f\circ F)}{dt}(a)= \frac{df}{dX}(A) \cdot \frac{dF}{dt}(a).$$

Note: While the right-hand side in part (B) involves a dot product, the one in part (A) is a scalar product.

A function of several variables may appear in another context... This is the meaning of the composition when our function of several variables is followed by a numerical function: $$\begin{array}{|ccccc|} \hline &\text{terrain map} & & \bigg|\\ \hline (x,y)&\longrightarrow & z &\longrightarrow & u\\ \hline &\bigg| & &\text{pressure}\\ \hline \end{array}$$ Recall how we interpret this composition. In addition to the terrain map of the area as a function of two variables $z=f(x,y)$, we have the atmospheric pressure dependent on the elevation (above sea level) as a numerical function. The former gives us the elevation for every location and the latter the pressure for every elevation. Their composition gives us the pressure for every location. To understand how the derivatives of these two functions are combined, we start with linear functions. After this simple substitution, the derivatives are found by direct examination: $$\begin{array}{lll|ll} &&\text{linear function}&\text{its derivative}\\ \hline \text{function of several variables: }&z=f(X)&=p+M\cdot (X-A)&M&\text{ in } {\bf R}^n\\ &\circ \\ \text{numerical function:}&u=g(z)&=q+m(z-p)&m&\text{ in } {\bf R}\\ \hline \text{function of several variables: }&g(f(X))&=q+m(p+M\cdot (X-A)-p)\\ &&=q+(mM)\cdot (X-A)&mM&\text{ in } {\bf R}^n \end{array}$$ Thus, the derivative of the composition is the product of the two derivatives: the vector $M$ multiplied by the number $m$. Once again, the function $z=f(X)$ has to map the partition for $X$ to the partition for $z$.

Theorem (Chain Rule II).
(A) The difference quotient of the composition of two functions is found as the product of the two difference quotients; i.e., for any function of several variables $z=f(X)$ defined at adjacent nodes $X$ and $X+\Delta X$ of a partition and any numerical function $u=g(z)$ defined at the adjacent nodes $z=f(X)$ and $z+\Delta z=f(X+\Delta X)$ of a partition, the differences and the difference quotients (defined at the secondary nodes $A$ and $a=f(A)$ within these edges of the two partitions respectively) satisfy: $$\Delta (g\circ f)(A)= \Delta g\, (a),$$ and $$\frac{\Delta (g\circ f)}{\Delta X}(A)= \frac{\Delta g}{\Delta z}(a) \cdot \frac{\Delta f}{\Delta X}(A).$$ (B) The composition of a function differentiable at a point and a function differentiable at the image of that point is differentiable at that point and its derivative is found as the product of the two derivatives. In other words, if a function of several variables $z=f(X)$ is differentiable at $X=A$ and a numerical function $u=g(z)$ is differentiable at $a=f(A)$, then we have: $$\frac{d (g\circ f)}{dX}(A)= \frac{dg}{dz}(a) \cdot \frac{df}{dX}(A).$$

Note: While the right-hand side in part (B) involves a scalar product, the one in part (A) is a product of two numbers.

Notice how the intermediate variable is "cancelled" in the Leibniz notation in both of the two forms of the Chain Rule; first: $$\frac{dz}{\not{dX}}\cdot\frac{\not{dX}}{dt}=\frac{dz}{dt};$$ and second: $$\frac{du}{\not{dz}}\cdot\frac{\not{dz}}{dX}=\frac{du}{dX}.$$ Thus, in spite of the fact that these two compositions are very different, the Chain Rule has a somewhat informal -- but single -- verbal interpretation: the derivative of the composition of two functions is the product of the two derivatives. The word "product", as we just saw, is also ambiguous. We saw the multiplication of two numbers in the beginning of the book, then the dot product of two vectors, and finally the product of a vector and a number: $$\begin{array}{lll} (f\circ g)'(x)&=f'(g(x))&\cdot g'(x),\\ (f\circ F)'(t)&=\nabla f(F(t))&\cdot F'(t),\\ \nabla (g\circ f)(X)&=g'(f(X))&\cdot \nabla f(X).\\ \end{array}$$ The context determines the meaning, and this ambiguity serves a purpose: we will see later how this wording is, in a rigorous way, applicable to the composition of any two functions.

The gradient is perpendicular to the level curves

The result we have been alluding to is that the direction of the gradient is the direction of the fastest growth of the function. It is proven later, but here we just consider the relation between the gradient and the level curves, i.e., the curves of constant value, of the function.

Example. A function defined at the nodes on the plane is shown in the first column with its level curve visualized. In the second column, the difference quotient is computed and then visualized below it. This curve and this vector are perpendicular. $\square$

Exercise. Consider other possible arrangements of the values of the function and confirm the conjecture.

Example. In the familiar example of a plane: $$f(x,y)=2x+3y,$$ the gradient is a constant vector field: $$\nabla f(x,y)=<2,3>.$$ Meanwhile, its level curves are parallel straight lines: $$2x+3y=c.$$ Their slope is $-2/3$, which makes them perpendicular to the gradient vector $<2,3>$! $\square$

We then conjecture that the gradient and the level curves are perpendicular to each other. Let's consider a general linear function of two variables: $$z=f(x,y)=c+m(x-a)+n(y-b),$$ and let $M=\nabla f=<m,n>$ be the gradient of $f$.
Let's consider a general linear function of two variables: $$z=f(x,y)=c+m(x-a)+n(y-b),$$ and let $M=\nabla f=<m,n>$ be its gradient. Let's pick a simple vector $D=<-n,m>$ perpendicular to $M=<m,n>$. Consider this straight line with $D$ as a direction vector: $$F(t)=(a,b)+<-n,m>t.$$ We substitute it into $f$: $$f(F(t))=f(a-nt,b+mt)=c+m(-nt)+n(mt)=c.$$ The composition is constant and, therefore, the line stays within a level curve of $f$. The conjecture is confirmed. Example. In the familiar example of a circular paraboloid: $$f(x,y)=x^2+y^2,$$ the gradient consists of the radial vectors: $$\nabla f(x,y)=<2x,2y>.$$ Meanwhile, its level curves are circles: $$x^2+y^2=c \text{ for }c>0.$$ The radii of a circle are known (and are seen above) to be perpendicular to the circle! $\square$ We need to make our conjecture precise before proving it. First, the level curves of a function of two variables aren't necessarily curves. They are just sets in the plane. For example, when $f$ is constant, all the level sets are empty but one, which is the whole plane: $$f(x,y)=c\ \Longrightarrow\ \{(x,y):f(x,y)=b\}=\emptyset \text{ when }b\ne c\text{ and } \{(x,y):f(x,y)=c\}={\bf R}^2.$$ Furthermore, even when a level curve is a curve, it's an implicit curve and isn't represented by a function. Note: the question of when exactly level curves are curves will be discussed later. How do we sort this out? Just as before, we study the terrain by taking these hikes -- parametric curves -- and this time we choose an easy one: no climbing. We stay at the same elevation. In other words, our function $z=f(X)$ does not change along this parametric curve $X=F(t)$, i.e., their composition is constant: $$f(F(t))=\text{ constant }.$$ As you can see, this doesn't mean that the path of $F$ is a level set but simply its subset. We can go slow or fast and we can go in either direction... Second, what do we mean when we say a parametric curve and a vector are perpendicular to each other? The direction of a curve at a point is its tangent vector at that point, by definition! We are then concerned with: $$\text{ the angle between } \nabla f(A) \text{ and } F'(a),\text{ where }A=F(a),$$ and, therefore, with their dot product: $$\nabla f(F(a))\cdot F'(a).$$ Is it zero? But we just saw this expression in the last section! It's the right-hand side of the Chain Rule: $$(f\circ F)'(a)=\nabla f(F(a))\cdot F'(a).$$ Why is the left-hand side zero? Because it's the derivative of a constant function! Indeed, the path of $F$ lies within a level curve of $f$. So, we have: $$0=\frac{d}{dt}f(F(t))\bigg|_{t=a}=(f\circ F)'(a)=\nabla f(F(a))\cdot F'(a).$$ So, we have demonstrated that level curves and the gradient vectors are perpendicular: $$\nabla f(A) \perp F'(a).$$ All that remains are a few caveats. First, the functions have to be differentiable for the derivatives to make sense. Second, neither of these derivatives should be zero or the angle between them will be undefined ($\nabla f(A)\ne 0$ and $F'(a)\ne 0$). Theorem. (A) Suppose a function $z=f(X)$ of several variables is defined at the adjacent nodes $X$ and $X+\Delta X\ne X$ of a partition. Then, if these two nodes lie within a level set of $z=f(X)$, i.e., $f(X)=f(X+\Delta X)$, then $$\frac{\Delta f}{\Delta X}(A)=0,$$ where $A$ is the secondary node of this edge. (B) Suppose a function of several variables $z=f(X)$ is differentiable at $X=A$ and a parametric curve $X=F(t)$ is differentiable on an open interval $I$ that contains $a$ with $F(a)=A$. Then, if the path of $X=F(t)$ lies within a level set of $z=f(X)$, then $$\frac{df}{dX}(A)\perp\frac{dF}{dt}(a),$$ provided both are non-zero. Exercise. What about the converse?
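Here is the same verification carried out numerically for the paraboloid example. This is a sketch using the standard parametrization of the circle (not from the text):

```python
import numpy as np

# A sketch verifying the theorem on f(x, y) = x**2 + y**2: the circle
# F(t) = (cos t, sin t) stays within the level curve f = 1, and the
# gradient is perpendicular to the tangent vector at every t.
grad_f = lambda x, y: np.array([2*x, 2*y])
for t in np.linspace(0, 2*np.pi, 7):
    F = np.array([np.cos(t), np.sin(t)])         # a point on the level curve
    F_prime = np.array([-np.sin(t), np.cos(t)])  # tangent vector of the curve
    print(round(np.dot(grad_f(*F), F_prime), 12))  # 0.0 every time
```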
We now demonstrate the following result that mixes the discrete and the continuous. Corollary. Suppose a function of several variables $z=f(X)$ is differentiable on an open set $U$ in ${\bf R}^n$. Suppose a parametric curve $X=F(t)$ is defined at adjacent nodes $t$ and $t+\Delta t$ of a partition. Suppose the points $P=F(t)$ and $Q=F(t+\Delta t)$ are distinct and lie within a level set of $z=f(X)$, i.e., $f(P)=f(Q)$, and the segment $PQ$ between them lies entirely within $U$. Then, for some point $A$ on $PQ$ and a secondary node $a$ of $[t,t+\Delta t]$, we have $$\frac{df}{dX}(A)\perp\frac{\Delta F}{\Delta t}(a),$$ provided the gradient is non-zero in $U$. Proof. Let $X=L(t)$ be the linear parametric curve with $L(t)=P$ and $L(t+\Delta t)=Q$. Then, $$\frac{dL}{dt}(a)=\frac{\Delta F}{\Delta t}(a),$$ for any choice of a secondary node $a$ of the interval $[t,t+\Delta t]$. We define a new numerical function: $$h=f\circ L.$$ Then, by the Mean Value Theorem, there is a secondary node $a$ such that: $$\frac{\Delta h}{\Delta t}(a)=\frac{dh}{dt}(a).$$ Since the former is zero by the assumption that $f(P)=f(Q)$, we can apply the Chain Rule and conclude the following about the latter: $$0=\frac{dh}{dt}(a)= \frac{df}{dX}(A)\cdot\frac{dL}{dt}(a)=\frac{df}{dX}(A)\cdot\frac{\Delta F}{\Delta t}(a),$$ where $A=L(a)$. $\blacksquare$ The theorem remains valid no matter how a parametric curve traces the level curve as long as it doesn't stop. There are then only two main ways -- back and forth -- that a parametric curve can follow the level curve. But wait a minute -- doesn't the theorem speak of more than just functions of two variables? It seems to apply to level surfaces of functions of three variables. Indeed. The basic idea is the same: every such parametric curve is perpendicular to the gradient, even though there are infinitely many directions for a curve to go through the point. With all the variety of angles between their tangents, they all have the same angle with the gradient. In this exact sense we speak of the gradient being perpendicular to the level surface. This result is a free gift courtesy of abstract thinking and the vector notation! Example. The radial vector field, $$V(x,y,z)=<x,y,z>,$$ as well as the gravitational vector field, $$W(x,y,z)=-\frac{c}{||<x,y,z>||^3}<x,y,z>,$$ are gradients of functions whose level surfaces are concentric spheres. The gradient vectors point away from the origin in the former case and towards it in the latter. One can imagine how, no matter what path you take on the surface of the Earth, your body will point away from the center. $\square$ With this theorem we can interpret the idea that the gradient points in the direction of the fastest growth of the function: this is the shortest path toward the "next" level curve. This informal explanation isn't good enough anymore. We will make the terms in this statement fully precise next.
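First, though, here is a small numerical illustration of the corollary. This is a sketch with sample data (not from the text): two points on a level set of $f(x,y)=x^2+y^2$, and bisection locating a point on the chord where the gradient is perpendicular to it.

```python
import numpy as np

# A sketch of the corollary: P and Q lie on the level set f = 1 of
# f(x, y) = x**2 + y**2; we locate a point A on the segment PQ where
# the gradient is perpendicular to the chord Q - P.
grad_f = lambda X: 2*X
P, Q = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def g(s):  # dot product of the gradient along the segment with the chord
    return np.dot(grad_f(P + s*(Q - P)), Q - P)

lo, hi = 0.0, 1.0          # g(0) = -2 < 0 < 2 = g(1), so bisection applies
for _ in range(50):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
print(P + lo*(Q - P))      # approximately [0.5, 0.5]
```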
Monotonicity of functions of several variables Suppose $z=f(X)$ is a function of $n$ variables. Suppose $A$ is a point in ${\bf R}^n$ and $V=\nabla f(A)\ne 0$. As a vector, $V$ has a direction. More precisely, the direction of $V$ is its normalization, the unit vector $V/||V||$. Thus, the first part of the statement is well understood. But what does "the direction of the fastest growth of the function" mean? First, the direction will be chosen from among all possible directions, i.e., all unit vectors. Then, what does the "growth of the function in the direction of a unit vector" mean? Let's first take a look at dimension $n=1$. There are only two unit vectors, $i$ and $-i$, along the $x$-axis. Therefore, if $f'(A)>0$, then $i$ is the direction of the fastest growth; meanwhile, if $f'(A)<0$, it's $-i$. For higher dimensions, we certainly know what this statement means when the direction coincides with the direction of one of the axes: it's the partial derivative (vectors $i,\ -i$, $j,\ -j$ etc.). However, if we are exploring the terrain represented by a function of two variables, going only north-south or east-west is not enough. The idea comes from the earlier part of this section: we, again, take various trips around this terrain. This time we don't have to go far or follow any complex routes: we'll go along straight lines. Also, in order to compare the results, we will travel at the same speed, $1$, during all trips. We will consider all parametric curves $X=F_U(t)$ that start at $X=A$, i.e., $F_U(0)=A$, are linear, i.e., $F_U(t)=A+tU$, and, furthermore, have unit direction vector $||U||=1$. Warning: we are able to ignore non-linear parametric curves only under the assumption that $f$ is differentiable. Now we compare the rate of growth of $f$ along these parametric curves by considering their composition with $f$: $$h_U(t)=f(F_U(t)).$$ So, the rate of growth we are after is this: $$h'_U(0)=\frac{d}{dt}f(F_U(t))\bigg|_{t=0}=\nabla f(F_U(t))\cdot F_U'(t)\bigg|_{t=0}=\nabla f(A)\cdot U,$$ according to the Chain Rule. There is a convenient term for this quantity. The directional derivative of a function $z=f(X)$ at point $X=A$ in the direction of a unit vector $U$ is defined to be $$D_U(f,A)=\nabla f(A)\cdot U.$$ We continue: $$D_U(f,A)=||\nabla f(A)||\cdot ||U||\cos \alpha=||\nabla f(A)||\cos \alpha,$$ where $\alpha$ is the angle between $\nabla f(A)$ and $U$. As the gradient is known and fixed, the directional derivative in a particular direction depends only on its angle with the gradient, as expected. Now, this expression is easy to maximize over all choices of $U$. Which direction, i.e., which unit vector $U$, provides the highest value of $D_U(f,A)$? Only $\cos \alpha$ matters and it reaches its maximum value, which is $1$, at $\alpha =0$. In other words, the maximum is reached when the direction coincides with the gradient! Theorem (Monotonicity Theorem). Suppose $z=f(X)$ is a function of $n$ variables differentiable at a point $A$ in ${\bf R}^n$. Then the directional derivative $D_U(f,A)$ reaches its maximum in the direction $U$ of the gradient $\nabla f(A)$ of $f$ at $A$; this maximum value is $||\nabla f(A)||$. This is the summary of the theorem and the rest of the analysis: Exercise. Explain the diagram. Theorem. The directional derivative of a function $z=f(X)$ at point $X=A$ in the direction of a unit vector $U$ is also found as the following limit: $$D_U(f,A)=\lim_{h\to 0}\frac{f(A+hU)-f(A)}{h}.$$ Exercise. Represent each partial derivative as a directional derivative. This is what happens with functions of $3$ variables: all vectors on one side of the level surface are directions of increasing values of the function and all those on the other side, of decreasing values.
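The conclusion of the Monotonicity Theorem is easy to test numerically. Below is a sketch (the function and the point are sample choices, not from the text): scanning the directional derivative over a thousand unit vectors, the maximum shows up in the direction of the gradient, with value $||\nabla f(A)||$.

```python
import numpy as np

# A sketch of the Monotonicity Theorem for f(x, y) = x**2 + 3*y at A = (1, 2).
A = np.array([1.0, 2.0])
grad = np.array([2*A[0], 3.0])             # gradient of f at A, by hand

angles = np.linspace(0, 2*np.pi, 1000)
directions = np.column_stack([np.cos(angles), np.sin(angles)])  # unit vectors U
D = directions @ grad                      # directional derivatives D_U(f, A)

best = directions[np.argmax(D)]
print(best, grad / np.linalg.norm(grad))   # nearly identical unit vectors
print(D.max(), np.linalg.norm(grad))       # both approximately sqrt(13)
```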
Differentiation and anti-differentiation Let's review the algebraic properties of differentiation of functions of several variables. The properties are the same as before! Theorem (Algebra of Derivatives). For any differentiable functions, we have in the gradient notation: $$\begin{array}{|ll|ll|} \hline \text{SR: }& \nabla(f+g)=\nabla f+\nabla g & \text{CMR: }& \nabla (cf)=c\nabla f& \text{ for any real }c\\ \text{PR: }& \nabla(fg)=\nabla f\, g+f\nabla g& \text{QR: }& \nabla (f/g)=\frac{\nabla f\, g-f \nabla g}{g^2} &\text{ wherever }g\ne 0\\ \text{CR1: }& (f\circ F)'=\nabla f\cdot F'& \text{CR2: }& (g\circ f)'=g'\nabla f\\ \hline \end{array}$$ The Mean Value Theorem (Chapter 9) will help us to derive facts about the function from the facts about its gradient. For example: $$\begin{array}{l|l|ll} \text{info about }f &&\text{ info about }\nabla f\\ \hline f\text{ is constant }&\Longrightarrow &\nabla f \text{ is zero}\\ &\overset{?}{\Longleftarrow}&\\ \hline f\text{ is linear}&\Longrightarrow &\nabla f \text{ is constant}\\ &\overset{?}{\Longleftarrow}&\\ \hline \end{array}$$ Are these arrows reversible? If the derivative of the function is zero, does it mean that the function is constant? At this time, we have a tool to prove this fact. Consider this simple statement about terrains: "if there is no sloping anywhere in the terrain, it's flat". If $z=f(X)$ represents the elevation, we can restate this mathematically. Theorem (Constant). (A) If a function defined at the nodes of a partition of a cell in ${\bf R}^n$ has a zero difference throughout the partition, then this function is constant over the nodes; i.e., $$\Delta f\,(C) = 0 \ \Longrightarrow\ f=\text{ constant }.$$ (B) If a function differentiable on an open path-connected set $I$ in ${\bf R}^n$ has a zero gradient for all $X$ in $I$, then this function is constant on $I$; i.e., $$\frac{df}{dX}=0 \ \Longrightarrow\ f=\text{ constant }.$$ Proof. (A) If $X$ and $Y$ are two nodes connected by an edge with a secondary node $C$, then we have: $$\Delta f\,(C) = 0 \ \Longrightarrow\ f(X)-f(Y)=0\ \Longrightarrow\ f(X)=f(Y).$$ In a cell, any two nodes can be connected by a sequence of adjacent nodes, with no change in the value of $f$. (B) Suppose two points $A,B$ inside $I$ are given. Then there is a differentiable parametric curve $X=P(t)$ with its path that goes from $A$ to $B$ and lies entirely in $I$: $$P(a)=A,\ P(b)=B,\ P(t)\text{ in }I.$$ Define a new numerical function: $$h(t)=f(P(t)).$$ Then, by the Chain Rule we have: $$\frac{dh}{dt}(t)=\frac{d}{dt}\big( f(P(t)) \big)= \nabla f\, (P(t))\cdot P'(t)=0\cdot P'(t)=0.$$ Then, by the corollary to the Mean Value Theorem in Chapter 9, $h$ is a constant function. In particular, we have $$f(A) = f(B).$$ We will see later that the differentiability requirement is unnecessary. $\blacksquare$ Exercise. What if $\nabla f=0$ on a set that isn't path-connected? Is it still true that $$\nabla f=0 \ \Longrightarrow\ f=\text{ constant }?$$ Just as in dimension $1$, the openness of the domain is crucial. The problem then becomes one of recovering the function $f$ from its derivative (i.e., gradient) $\nabla f$, the process we have called anti-differentiation. In other words, we reconstruct the function from a "field of tangent lines or planes". Now, even if we can recover the function $f$ from its derivative $\nabla f$, there are many others with the same derivative, such as $g=f+C$ for any constant $C$. Are there others? No.
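Both halves of this discussion can be illustrated with a short symbolic computation. This is a sketch with sample functions (not from the text), using sympy for the algebra: two functions differing by a constant have identical gradients, and a potential can be recovered from its gradient by integration, up to that constant.

```python
import sympy as sp

# A sketch: same gradient for f and g = f + 5, and recovery of a potential.
x, y = sp.symbols('x y')
f = x**2*y + 3*x
g = f + 5
print(sp.simplify(sp.Matrix([f, g]).jacobian([x, y])))  # identical rows

# Recover a potential for the gradient <2*x*y + 3, x**2> by integrating
# the first component in x and matching the second component.
p, q = 2*x*y + 3, x**2
candidate = sp.integrate(p, x)                 # x**2*y + 3*x, plus an unknown C(y)
print(sp.simplify(sp.diff(candidate, y) - q))  # 0, so C(y) is a constant
```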
Theorem (Anti-differentiation). (A) If two functions defined at the nodes of a partition of a cell in ${\bf R}^n$ have the same difference, they differ by a constant; i.e., $$\Delta f\,(C) = \Delta g\,(C) \ \Longrightarrow\ f(X) - g(X)=\text{ constant }.$$ (B) If two functions differentiable on an open path-connected set $I$ in ${\bf R}^n$ have the same gradient, they differ by a constant; i.e., $$\frac{df}{dX} =\frac{dg}{dX} \ \Longrightarrow\ f - g=\text{ constant }.$$ Proof. (B) Define $$h(X) = f(X) - g(X).$$ Then, by SR, we have: $$\nabla h\, (X) = \nabla \left( f(X)-g(X) \right)=\nabla f\, (X)-\nabla g\, (X) =0,$$ for all $X$. Then $h$ is constant, by the Constant Theorem. $\blacksquare$ Geometrically, $$\nabla f =\nabla g \ \Longrightarrow\ f - g=\text{ constant },$$ means that the graph of $f$ shifted vertically gives us the graph of $g$. We can cut the list of algebraic rules down to the most important ones: $$\begin{array}{|ll|ll|} \hline \text{Linearity Rule: }& \nabla(\lambda f+\mu g)=\lambda\nabla f+\mu\nabla g \text{ for all real }\lambda, \mu\\ \text{Chain Rule: }& (f\circ F)'=\nabla f\cdot F'\\ \hline \end{array}$$ When is anti-differentiation possible? Recall the diagram of partial differentiation of a function of two variables. It produces the difference and the difference quotient (aka the gradient), both of which are functions defined at the secondary nodes of a partition: $$\begin{array}{cccccccccccc} &&&& &f\\ &&&&\swarrow_x&&_y\searrow\\ &&\Delta f=<&\Delta_x f& &,& &\Delta_y f&>\\ \end{array}$$ Definition. A function $G$ defined on the secondary nodes of a partition is called exact if $\Delta f=G$, for some function $f$ defined on the nodes of the partition. When the secondary nodes aren't specified, we speak of an exact $1$-form. Definition. A vector field $F$ defined on the secondary nodes of a partition is called gradient if $F(N)\cdot N=G(N)$ for some exact function $G$ and any secondary node $N$. Not all vector fields are gradient; the example we saw was $F(x,y)=<y,-x>$. Note that the Anti-differentiation Theorem is an analog of the familiar result from Chapters 8 and 9: $$\Delta f= \Delta g \ \Longrightarrow\ f-g=\text{constant}.$$ How do we know that a given function defined at the secondary nodes is exact? In other words, is it the difference of some function? We have previously solved this problem by finding, or trying and failing to find, such a function. The examples required producing the recursive formulas for $x$ and $y$ and then matching their applications in reverse order. The methods only work when the functions are simple enough. The familiar theorem below gives us a better tool. Surprisingly, this tool is further partial differentiation. We continue the above diagram: $$\begin{array}{ccc} &&&& &f\\ &&&&\swarrow_x&&_y\searrow\\ &&\Delta f=<&\Delta_x f& &,& &\Delta_y f&>\\ &&\swarrow_x &&_y\searrow&&\swarrow_x &&_y\searrow\\ &\Delta^2_{xx}f&&&&\Delta^2_{yx} f=\Delta^2_{xy} f &&&&\Delta^2_{yy} f\\ \end{array}$$ Recall the following result from Chapter 18. Theorem (Discrete Clairaut's Theorem). Over a partition in ${\bf R}^n$, first, the mixed second differences with respect to any two variables are equal to each other: $$\Delta_{yx}^2 f=\Delta_{xy}^2 f;$$ and, second, the mixed second difference quotients are equal to each other: $$\frac{\Delta^2 f}{\Delta y \Delta x}=\frac{\Delta^2 f}{\Delta x \Delta y}.$$
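The first claim is easy to see numerically: for any table of values on a grid, the two orders of differencing agree exactly. This is a sketch (the sample values are arbitrary), using numpy's diff along the two axes:

```python
import numpy as np

# A sketch of the Discrete Clairaut's Theorem: mixed second differences
# of a function sampled on a grid commute, whatever the values are.
rng = np.random.default_rng(0)
f = rng.standard_normal((5, 7))                   # values of f at the nodes

dx_then_dy = np.diff(np.diff(f, axis=0), axis=1)  # difference in x, then in y
dy_then_dx = np.diff(np.diff(f, axis=1), axis=0)  # difference in y, then in x
print(np.allclose(dx_then_dy, dy_then_dx))        # True
```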
Thanks to this theorem, we can draw conclusions from the assumption that a given function is a difference. So, the plan is, instead of trying to reverse the arrows in the first row of the diagram, we continue down and see whether we have a match: $$\Delta_y p=\Delta_x q.$$ As a summary, consider an arbitrary function on secondary nodes. It has arbitrary component functions, with no relations between them whatsoever! Everything changes once we make the assumption that it is exact. The theorem ensures that the diagram of the differences of the component functions of some $V=<p,q>$, on the left, turns -- under this assumption -- into something rigid, on the right: $$\begin{array}{cccccccccccc} &&& &..\\ &&&..&&..\\ &V=<&p& &,& &q&>\\ &&&_y\searrow&&\swarrow_x&&\\ &&&&\Delta_{y} p\, ...\, \Delta_x q \end{array}\leadsto\begin{array}{cccccccccccc} && &f\\ &&\swarrow_x&&_y\searrow\\ &p=\Delta_x f& && &q=\Delta_y f\\ &&_y\searrow&&\swarrow_x&&\\ &&&\Delta_y p=\Delta^2_{yx}f=\Delta^2_{xy}f=\Delta_x q \end{array}$$ This rigidity of the diagram means that the two trips from the top to the bottom produce the same result. We have described this property as commutativity. Indeed, it's about interchanging the order of the two operations of partial differentiation: $$\Delta_x\Delta_y=\Delta_y\Delta_x.$$ Theorem (Exactness Test dimension $2$). If $G$ is exact on a partition of a rectangle in the $xy$-plane with component functions $p$ and $q$, then $$\Delta_y p=\Delta_x q.$$ Corollary (Gradient Test dimension $2$). Suppose a vector field $V$ is defined on the secondary nodes of a partition of a rectangle in the $xy$-plane with component functions $p$ and $q$. If $V$ is gradient, then $$\frac{\Delta p}{\Delta y}=\frac{\Delta q}{\Delta x}.$$ The quantity that vanishes when the function is exact is called its rotor. It is a real-valued function defined on the faces of the partition: $$\Delta_y p-\Delta_x q.$$ For three variables, we just consider two at a time with the third kept fixed. Below is the diagram of the partial differences for three variables with only the mixed partial differences shown: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} % \begin{array}{ccc} &&&&&\la{}&\la{}& \la{} &f& \ra{} & \ra{}& \ra{}\\ &&&&\swarrow_x&&&&\downarrow_y&&&&_z\searrow\\ &&&\Delta_x f & &&& &\Delta _y f & &&& &\Delta_z f \\ &&&\downarrow_y&_z\searrow&&&\swarrow_x&&_z\searrow&&&\swarrow_x&\downarrow_y&\\ &&&\Delta^2_{yx} f &&\Delta^2_{zx}f & \Delta^2_{xy}f &&&&\Delta^2_{zy}f& \Delta^2_{xz}f &&\Delta^2_{yz}f &&\\ &&& &\searrow&\swarrow&\searrow &\to&=&\leftarrow&\swarrow&\searrow&\swarrow&\\ \end{array}$$ The six that are left are paired up according to Discrete Clairaut's Theorem above. If $$V=<p,q,r> =<\Delta_x f,\Delta_y f,\Delta_z f>=\Delta f,$$ we can trace the differences of the component functions in the diagram. $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} % \begin{array}{ccc} &&&p& &&& &q& &&& &r\\ &&&\downarrow_y&_z\searrow&&&\swarrow_x&&_z\searrow&&&\swarrow_x&\downarrow_y&\\ &&&\Delta_{y}p&&\Delta_{z}p& \Delta_{x}q&&&&\Delta_{z}q& \Delta_{x}r&&\Delta_{y}r&&\\ &&& &\searrow&\swarrow&\searrow &\to&=&\leftarrow&\swarrow&\searrow&\swarrow&\\ \end{array}$$ We write down the results below. Theorem (Exactness Test dimension $3$).
If $G$ is exact on a partition of a box in the $xyz$-space with component functions $p$, $q$, and $r$, then $$\Delta_y p=\Delta_x q,\ \Delta_z q=\Delta_y r,\ \Delta_x r=\Delta_z p.$$ Corollary (Gradient Test dimension $3$). Suppose a vector field $V$ is defined on the secondary nodes of a partition of a box in the $xyz$-space with component functions $p$, $q$, and $r$. If $V$ is gradient, then $$\frac{\Delta p}{\Delta y}=\frac{\Delta q}{\Delta x},\ \frac{\Delta q}{\Delta z}=\frac{\Delta r}{\Delta y} ,\ \frac{\Delta r}{\Delta x}=\frac{\Delta p}{\Delta z}.$$ Notice the following simple pattern. First the variables and the components are arranged around a triangle: $$\begin{array}{cccc} && & &x& & &&\\ && &\nearrow& &\searrow& &&\\ &&z& &\leftarrow& &y&&\\ \end{array}\quad \begin{array}{ccc} && & &p& & &&\\ && &\nearrow& &\searrow& &&\\ &&r& &\leftarrow& &q&&\\ \end{array}$$ Then one of the variables is omitted and the difference over the other two is set to $0$: $$\begin{array}{ccc} && & &\cdot& & &&\\ && && && &&\\ &&\bullet& &\Delta_z q=\Delta_y r& &\bullet&&\\ \end{array}\quad \begin{array}{ccc} && & &\bullet& & &&\\ && && &\Delta_y p=\Delta_x q& &&\\ &&\cdot& && &\bullet&&\\ \end{array}\quad \begin{array}{ccc} && & &\bullet& & &&\\ && &\Delta_x r=\Delta_z p& && &&\\ &&\bullet& && &\cdot&&\\ \end{array}\quad$$ The conditions of these two theorems, and their analogs in higher dimensions, put severe limitations on what functions can be exact. When is a vector field a gradient? Recall the diagram of partial differentiation of a function of two variables that produces the vector field of the gradient: $$\begin{array}{ccc} &&&& &f\\ &&&&\swarrow_x&&_y\searrow\\ &&\nabla f=<&f_x& &,& &f_y&>\\ \end{array}$$ Definition. A vector field that is the gradient of some function of several variables is called a gradient vector field. This function is then called a potential function of the vector field. Note that finding for a given vector field $V$ a function $f$ such that $\nabla f=V$ amounts to anti-differentiation as we try to reverse the arrows in the above diagram. The Anti-differentiation Theorem is an analog of a familiar result from Chapters 8 and 9: $$\nabla f= \nabla g\ \Longrightarrow\ f-g=\text{constant}.$$ Corollary. Any two potential functions of the same vector field defined on an open path-connected set differ by a constant within this set. Not all vector fields are gradient. The example we saw was $V(x,y)=<y,-x>$. There are many more... Example. Consider the spiral below. Can it be a level curve of a function of two variables? How do we know that a given vector field is gradient? We have previously solved this problem by finding, or trying and failing to find, a potential function for the vector field in dimension $2$. The examples required integration with respect to both variables and then matching the results. The methods only work when the functions are simple enough. The familiar theorem below gives us a better tool. Surprisingly, this tool is further partial differentiation. We continue the above diagram: $$\begin{array}{cccccccccccc} &&&& &f\\ &&&&\swarrow_x&&_y\searrow\\ &&\nabla f=<&f_x& &,& &f_y&>\\ &&\swarrow_x &&_y\searrow&&\swarrow_x &&_y\searrow\\ &f_{xx}&&&&f_{yx}=f_{xy}&&&&f_{yy}\\ \end{array}$$ Note that the last row is, in a sense, the derivative of the gradient, which is a vector field... Recall the following result from Chapter 18 that gives us the equality of the mixed second derivatives. Theorem (Clairaut's Theorem).
The mixed second derivatives of a function $f$ of two variables with continuous second partial derivatives at a point $(a,b)$ are equal to each other; i.e., $$f_{xy}(a,b) = f_{yx}(a,b).$$ Thanks to this theorem we can draw conclusions from the assumption that a given vector field $V=<p,q>$ is gradient -- as long as its component functions $p$ and $q$ are continuously differentiable. So, the plan is, instead of trying to reverse the arrows in the first row of the diagram and find $f$ with $\nabla f=V$, we continue down and see whether we have a match: $$p_y=q_x.$$ Example. It's easy. For $V=<x,y>$, we have $$\begin{array}{lll} p=x&\Longrightarrow &p_y=0\\ q=y&\Longrightarrow &q_x=0 \end{array}\ \Longrightarrow\ \text{ match!}$$ The test is passed! So what? What do we conclude from that? Nothing. On the other hand, for $V=<y,-x>$, we have $$\begin{array}{lll} p=y&\Longrightarrow &p_y=1\\ q=-x&\Longrightarrow &q_x=-1 \end{array}\ \Longrightarrow\ \text{ no match!}$$ The test is failed. It's not gradient! $\square$ When the test is passed, we draw no conclusion; we would still have to integrate to find out whether the vector field is gradient and, at the same time, try to find a potential function. Meanwhile, the failure to satisfy the test proves the vector field is not gradient. As a summary, consider an arbitrary vector field. It has arbitrary component functions, with no relations between them whatsoever! Everything changes once we make the assumption that this is a gradient vector field. The theorem ensures that the diagram of the derivatives of the component functions of an arbitrary vector field $V=<p,q>$, on the left, turns -- under the assumption that it's gradient -- into something rigid, on the right: $$\begin{array}{cccccccccccc} &&& &..\\ &&&..&&..\\ &V=<&p& &,& &q&>\\ &&&_y\searrow&&\swarrow_x&&\\ &&&&p_{y}\ ...\ q_x&&&& \end{array}\leadsto\begin{array}{cccccccccccc} && &f\\ &&\swarrow_x&&_y\searrow\\ &p=f_x& && &q=f_y\\ &&_y\searrow&&\swarrow_x&&\\ &&&p_y=f_{yx}=f_{xy}=q_x&&&& \end{array}$$ This rigidity of the diagram means that the two trips from the top to the bottom produce the same result. We have described this property as commutativity. Indeed, it's about interchanging the order of the two operations of partial differentiation: $$\frac{\partial}{\partial x}\frac{\partial}{\partial y}=\frac{\partial}{\partial y}\frac{\partial}{\partial x}.$$ Theorem (Gradient Test dimension $2$). Suppose $V=<p,q>$ is a vector field whose component functions $p$ and $q$ are continuously differentiable on an open disk in ${\bf R}^2$. If $V$ is gradient, then $$p_y = q_x.$$ The quantity that vanishes when the vector field is gradient is called the rotor of the vector field. It is a function of two variables: $$p_y - q_x.$$ We will see later how the rotor is used to measure how close the vector field is to being gradient.
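The test is mechanical enough to automate. Below is a sketch (the helper name gradient_test is introduced here for illustration, not taken from the text) that computes the rotor $p_y-q_x$ symbolically and reproduces the example above:

```python
import sympy as sp

# A sketch automating the Gradient Test in dimension 2: the rotor
# p_y - q_x must vanish identically if <p, q> is a gradient vector field.
x, y = sp.symbols('x y')

def gradient_test(p, q):
    return sp.simplify(sp.diff(p, y) - sp.diff(q, x))

print(gradient_test(x, y))    # 0: the test is passed (no conclusion)
print(gradient_test(y, -x))   # 2: the test fails, so <y, -x> is not gradient
```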
Below is the diagram of the partial derivatives for three variables with only the mixed derivatives shown: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} % \begin{array}{cccccccccccc} &&&&&\la{}&\la{}& \la{} &f& \ra{} & \ra{}& \ra{}\\ &&&&\swarrow_x&&&&\downarrow_y&&&&_z\searrow\\ &&&f_x& &&& &f_y& &&& &f_z\\ &&&\downarrow_y&_z\searrow&&&\swarrow_x&&_z\searrow&&&\swarrow_x&\downarrow_y&\\ &&&f_{yx}&&f_{zx}& f_{xy}&&&&f_{zy}& f_{xz}&&f_{yz}&&\\ &&& &\searrow&\swarrow&\searrow &\to&=&\leftarrow&\swarrow&\searrow&\swarrow&\\ \end{array}$$ The six that are left are paired up according to Clairaut's theorem. If $$V=<p,q,r> =<f_x,f_y,f_z>=\nabla f,$$ we can trace the derivatives of the component functions in the diagram. $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} % \begin{array}{ccc} &&&p& &&& &q& &&& &r\\ &&&\downarrow_y&_z\searrow&&&\swarrow_x&&_z\searrow&&&\swarrow_x&\downarrow_y&\\ &&&p_{y}&&p_{z}& q_{x}&&&&q_{z}& r_{x}&&r_{y}&&\\ &&& &\searrow&\swarrow&\searrow &\to&=&\leftarrow&\swarrow&\searrow&\swarrow&\\ \end{array}$$ We write down the results below. Theorem (Gradient Test dimension $3$). Suppose $V=<p,q,r>$ is a vector field whose component functions $p$, $q$, and $r$ are continuously differentiable on an open ball in ${\bf R}^3$. If $V$ is gradient, then $$p_y=q_x,\ q_z=r_y,\ r_x=p_z.$$ Notice the following simple pattern. First the variables and the components are arranged around a triangle: $$\begin{array}{cccc} && & &x& & &&\\ && &\nearrow& &\searrow& &&\\ &&z& &\leftarrow& &y&&\\ \end{array}\quad \begin{array}{cccc} && & &p& & &&\\ && &\nearrow& &\searrow& &&\\ &&r& &\leftarrow& &q&&\\ \end{array}$$ Then one of the variables is omitted and the rotor over the other two is set to $0$: $$\begin{array}{cccc} && & &\cdot& & &&\\ && && && &&\\ &&\bullet& &q_z=r_y& &\bullet&&\\ \end{array}\quad \begin{array}{cccc} && & &\bullet& & &&\\ && && &p_y=q_x& &&\\ &&\cdot& && &\bullet&&\\ \end{array}\quad \begin{array}{cccc} && & &\bullet& & &&\\ && &r_x=p_z& && &&\\ &&\bullet& && &\cdot&&\\ \end{array}\quad$$ All three of these quantities: $p_y-q_x,\ q_z-r_y,\ r_x-p_z$, vanish when the vector field is gradient. In order to have only one object, we use them as the components of a new vector field, called the curl of the vector field. The conditions of these two theorems, and their analogs in higher dimensions, put severe limitations on what vector fields can be gradient. The source of these limitations is the topology of the Euclidean spaces of dimension $2$ and higher. They are to be discussed later. Back to dimension $2$. Example. The rotational vector field, $$V=<y,-x>,$$ is not gradient as it fails the Gradient Test. Let's consider its normalization: $$U=\frac{V}{||V||}=\frac{1}{\sqrt{x^2+y^2}}<y,\, -x>=\left< \frac{y}{\sqrt{x^2+y^2}},\ -\frac{x}{\sqrt{x^2+y^2}}\right>=<p,q>.$$ All vectors are unit vectors with the same directions as in the last vector field. Let's test the condition of the Gradient Test: $$\begin{array}{lll} p_y=\frac{\partial}{\partial y}\frac{y}{\sqrt{x^2+y^2}}=\frac{1\cdot \sqrt{x^2+y^2}-y\frac{y}{\sqrt{x^2+y^2}}}{x^2+y^2}\\ q_x=\frac{\partial}{\partial x}\frac{-x}{\sqrt{x^2+y^2}}=-\frac{1\cdot \sqrt{x^2+y^2}-x\frac{x}{\sqrt{x^2+y^2}}}{x^2+y^2}\\ \end{array}\quad\text{ ...no match!}$$ This vector field also fails the test and, therefore, isn't gradient.
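The by-hand computation above can be double-checked symbolically, using the same gradient_test sketch introduced earlier:

```python
import sympy as sp

# Double-checking the computation above: the normalized rotational
# field fails the test as well.
x, y = sp.symbols('x y')
r = sp.sqrt(x**2 + y**2)
p, q = y/r, -x/r
rotor = sp.simplify(sp.diff(p, y) - sp.diff(q, x))
print(rotor)   # 1/sqrt(x**2 + y**2) (up to equivalent form), not identically zero
```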
Let's take this one step further: $$W=\frac{V}{||V||^2}=\frac{1}{x^2+y^2}<y,\ -x>=\left< \frac{y}{x^2+y^2},\ -\frac{x}{x^2+y^2}\right>=<p,q>.$$ The new vector field has the same directions but the magnitude varies; it approaches $0$ as we move farther away from the origin and grows without bound as we approach the origin; i.e., we have: $$W(X)\to 0\text{ as } ||X||\to \infty \text{ and } ||W(X)||\to \infty \text{ as } X\to 0.$$ Let's test the condition: $$\begin{array}{lll} p_y=\frac{\partial}{\partial y}\frac{y}{x^2+y^2}=\frac{1\cdot (x^2+y^2)-y\cdot 2y}{(x^2+y^2)^2}=\frac{x^2-y^2}{(x^2+y^2)^2}\\ q_x=\frac{\partial}{\partial x}\frac{-x}{x^2+y^2}=-\frac{1\cdot (x^2+y^2)-x\cdot 2x}{(x^2+y^2)^2}=-\frac{y^2-x^2}{(x^2+y^2)^2}\\ \end{array}\quad\text{ ...match!}$$ The vector field passes the test! Does it mean that it is gradient then? No, it doesn't, and we will demonstrate that it is not! The idea is the same one we started with in the beginning of the chapter: a round trip along the gradients is impossible as it leads to a net increase of the value of the function according to the Monotonicity Theorem. It is crucial that the vector field is undefined at the origin. $\square$ We will show later that the converse of the Gradient Test for dimension $2$ isn't true: $$p_y=q_x\ \not\Longrightarrow\ <p,q>=\nabla f,$$ unless a certain further restriction is placed. This restriction is topological: there can be no holes in the domain. Furthermore, integrating $p_y-q_x$ will be used to measure how close the vector field is to being gradient.
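Here is a numerical preview of that demonstration (a sketch; the circulation integral is approximated by a trapezoid sum): the round trip along the unit circle yields $-2\pi$ rather than $0$, which is impossible for a gradient field.

```python
import numpy as np

# A sketch: W = <y, -x>/(x**2 + y**2) passes the test, yet its circulation
# around the unit circle is -2*pi; a gradient field would have to net zero.
t = np.linspace(0, 2*np.pi, 100001)
x, y = np.cos(t), np.sin(t)
p, q = y/(x**2 + y**2), -x/(x**2 + y**2)
dx, dy = np.gradient(x, t), np.gradient(y, t)   # numerical dx/dt and dy/dt
integrand = p*dx + q*dy
circulation = np.sum(0.5*(integrand[:-1] + integrand[1:])*np.diff(t))
print(circulation, -2*np.pi)                    # both approximately -6.2832
```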
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지), Pages 1104-1113, Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)

Citrus Pulp as a Dietary Source of Antioxidants for Lactating Holstein Cows Fed Highly Polyunsaturated Fatty Acid Diets

Santos, G.T.; Lima, L.S.; Schogor, A.L.B.; Romero, J.V.; De Marchi, F.E.; Grande, P.A.; Santos, N.W. (Departamento de Zootecnia, Universidade Estadual de Maringa); Kazama, R. (Departamento de Zootecnia e Desenvolvimento Rural, Universidade Federal de Santa Catarina)

https://doi.org/10.5713/ajas.2013.13836

The effects of feeding pelleted citrus pulp (PCP) as a natural antioxidant source on the performance and milk quality of dairy cows fed highly polyunsaturated fatty acid (FA) diets were evaluated. Four lactating Holstein cows were assigned to a $4{\times}4$ Latin square. Treatments, on a dry matter (DM) basis, were: i) control diet; ii) 3% soybean oil; iii) 3% soybean oil and 9% PCP; and iv) 3% soybean oil and 18% PCP. When cows were fed citrus pulp, DM intake tended to decrease. The total tract apparent digestibility of DM and ether extract decreased when cows were fed the control diet compared to the other diets. Cows fed PCP had higher polyphenol and flavonoid content and higher total ferric reducing antioxidant power (FRAP) in milk compared to those fed no pelleted citrus pulp. Cows fed 18% PCP showed higher monounsaturated FA and lower saturated FA in milk fat compared with cows fed the other diets. The lowest n-6 FA proportion was in milk fat from cows fed the control diet. The present study suggests that pelleted citrus pulp added at 9% to 18% of DM increases total polyphenol and flavonoid concentration, and the FRAP, in milk.

Keywords: Flavonoids; Polyphenols; Ferric Reducing Antioxidant Power
You are planning to save for retirement over the next 25 years. To do this, you will invest $880 per month in a stock account and $480 per month in a bond account. The return of the stock account is expected to be 10.8 percent, and the bond account will earn 6.8 percent. When you retire, you will combine your money into an account with an annual return of 7.8 percent. Assume the returns are expressed as APRs. How much can you withdraw each month from your account assuming a 20-year withdrawal period?

IRA: Various individual retirement accounts (IRAs) exist that allow individuals to allocate their savings to different investments depending on their tolerance for risk. The riskier the investment, the higher its return on average.

Let r be the interest rate and n = 25 * 12 = 300 be the number of periods. The interest rate for the stock account is 10.8% / 12 = 0.9%. The future value after 25 years of the $880 payments in the stock account is: {eq}FV=Payment*\frac{(1+r)^{n}-1}{r}\\ FV=880*\frac{(1+0.009)^{300}-1}{0.009}\\ FV=\$1,339,663\\ {/eq} The interest rate on the bond account is 6.8% / 12 ≈ 0.57%. The future value of the $480 payments is: {eq}FV=Payment*\frac{(1+r)^{n}-1}{r}\\ FV=480*\frac{(1+0.0057)^{300}-1}{0.0057}\\ FV=\$379,131\\ {/eq} Combining the two amounts: Total amount = 1,339,663 + 379,131 = $1,718,794. This amount must be equal to the present value of the withdrawals over the next 20 years at an interest rate of 7.8% / 12 = 0.65%: {eq}PV=Withdrawal*\frac{1-(1+r)^{-n}}{r}\\Withdrawal=\frac{PV*r}{1-(1+r)^{-n}}\\Withdrawal=\frac{1,718,794*0.0065}{1-(1+0.0065)^{-240}}\\Withdrawal=14,163\\ {/eq} The monthly withdrawals are $14,163.
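The arithmetic can be double-checked with a short script. This is a sketch using the exact monthly rates; the solution above rounds the bond rate up to 0.57%, so the final figure differs by a few dollars.

```python
# A sketch double-checking the solution with exact monthly rates
# (10.8%/12, 6.8%/12, 7.8%/12).
def fv_annuity(payment, r, n):
    # future value of n end-of-period payments at periodic rate r
    return payment * ((1 + r)**n - 1) / r

def level_withdrawal(pv, r, n):
    # level withdrawal that exhausts present value pv over n periods
    return pv * r / (1 - (1 + r)**(-n))

stock = fv_annuity(880, 0.108/12, 300)   # about 1,339,662
bond = fv_annuity(480, 0.068/12, 300)    # about 376,747
total = stock + bond                     # about 1,716,409
print(round(level_withdrawal(total, 0.078/12, 240), 2))  # about 14,144 (posted: 14,163)
```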
Dynamics of competitive systems with a single common limiting factor

Ryusuke Kon, Faculty of Engineering, University of Miyazaki, Gakuen Kibanadai Nishi 1-1, Miyazaki 889-2192, Japan

Mathematical Biosciences & Engineering, 2015, 12(1): 71-81. doi: 10.3934/mbe.2015.12.71
Received April 2014; Revised October 2014; Published December 2014

The concept of limiting factors (or regulating factors) succeeded in formulating the well-known principle of competitive exclusion. This paper shows that the concept of limiting factors is helpful not only for formulating the competitive exclusion principle, but also for obtaining other ecological insights. To this end, by focusing on a specific community structure, we study the dynamics of Kolmogorov equations and show that it is possible to derive an ecologically insightful result only from the information about interactions between species and limiting factors. Furthermore, we find that the derived result is a generalization of the preceding work by Shigesada, Kawasaki, and Teramoto (1984), who examined a certain Lotka-Volterra equation in a different context.

Keywords: nonlinear complementarity problem, saturated equilibrium, Lotka-Volterra equation, P-function, P-matrix.

Mathematics Subject Classification: Primary: 34D20; Secondary: 92B05.

Citation: Ryusuke Kon. Dynamics of competitive systems with a single common limiting factor. Mathematical Biosciences & Engineering, 2015, 12(1): 71-81. doi: 10.3934/mbe.2015.12.71

References:

R. A. Armstrong and R. McGehee, Coexistence of species competing for shared resources, Theoretical Population Biology, 9 (1976), 317. doi: 10.1016/0040-5809(76)90051-4.
R. A. Armstrong and R. McGehee, Coexistence of two competitors on one resource, Journal of Theoretical Biology, 56 (1976), 499. doi: 10.1016/S0022-5193(76)80089-6.
R. A. Armstrong and R. McGehee, Competitive exclusion, The American Naturalist, 115 (1980), 151. doi: 10.1086/283553.
M. Hirsch and H. Smith, Monotone dynamical systems, in Handbook of Differential Equations: Ordinary Differential Equations (A. Cañada et al., eds.), II (2005), 239.
J. Hofbauer, An index theorem for dissipative semiflows, Rocky Mountain J. Math., 20 (1990), 1017. doi: 10.1216/rmjm/1181073059.
J. Hofbauer and K. Sigmund, The Theory of Evolution and Dynamical Systems: Mathematical Aspects of Selection, Cambridge University Press, Cambridge, 1988.
J. Hofbauer and K. Sigmund, Evolutionary Games and Population Dynamics, Cambridge University Press, 1998. doi: 10.1017/CBO9781139173179.
R. D. Holt, J. Grover and D. Tilman, Simple rules for interspecific dominance in systems with exploitative and apparent competition, American Naturalist, 144 (1994), 741. doi: 10.1086/285705.
S. A. Levin, Community equilibria and stability, and an extension of the competitive exclusion principle, The American Naturalist, 104 (1970), 413. doi: 10.1086/282676.
D. Logofet, Matrices and Graphs: Stability Problems in Mathematical Ecology, CRC Press, 1993.
R. McGehee and R. A. Armstrong, Some mathematical problems concerning the ecological principle of competitive exclusion, Journal of Differential Equations, 23 (1977), 30. doi: 10.1016/0022-0396(77)90135-8.
J. Moré and W.
Rheinboldt, On P- and S-functions and related classes of n-dimensional nonlinear mappings, Linear Algebra and its Applications, 6 (1973), 45. doi: 10.1016/0024-3795(73)90006-2.
J. J. Moré, Classes of functions and feasibility conditions in nonlinear complementarity problems, Mathematical Programming, 6 (1974), 327. doi: 10.1007/BF01580248.
F. Scudo and J. Ziegler, Lecture Notes in Biomathematics, volume 22, Springer, 1978.
N. Shigesada, K. Kawasaki and E. Teramoto, The effects of interference competition on stability, structure and invasion of a multispecies system, J. Math. Biol., 21 (1984), 97. doi: 10.1007/BF00277664.
H. L. Smith, Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems, Mathematical Surveys and Monographs, American Mathematical Society, 1995.
Y. Takeuchi and N. Adachi, The existence of globally stable equilibria of ecosystems of the generalized Volterra type, J. Math. Biol., 10 (1980), 401. doi: 10.1007/BF00276098.
Y. Takeuchi and N. Adachi, Existence of stable equilibrium point for dynamical systems of Volterra type, J. Math. Anal. Appl., 79 (1981), 141. doi: 10.1016/0022-247X(81)90015-9.
Y. Takeuchi, N. Adachi and H. Tokumaru, Global stability of ecosystems of the generalized Volterra type, Math. Biosci., 42 (1978), 119. doi: 10.1016/0025-5564(78)90010-X.
Y. Takeuchi, N. Adachi and H. Tokumaru, The stability of generalized Volterra equations, J. Math. Anal. Appl., 62 (1978), 453. doi: 10.1016/0022-247X(78)90139-7.
How do you experimentally calculate the ionization constant of the carbonate ion?

Given sodium hydrogen carbonate, $\ce{NaHCO3}$, as well as hydrochloric acid, $\ce{HCl}$, how would you experimentally determine the ionization constant for the hydrogen carbonate ion, $\ce{HCO3-}$? By experimentally, I mean by actually doing the reaction and measuring the pH and such, and ignoring the already known theoretical values.

Here's what I've got so far. The sodium hydrogen carbonate dissociates in water:

$$\ce{NaHCO3 -> Na+_{(aq)} + HCO3-_{(aq)}}$$

Then, titrate the ion with the hydrochloric acid, a strong acid:

$$\ce{HCO3-_{(aq)} + HCl_{(aq)} <=> H2CO3_{(aq)} + Cl-_{(aq)}}$$

It's at this point that I would measure the $\mathrm{pH}$, determine the necessary concentrations, and plug them into the formula

$$K = \frac{[\ce{H2CO3}]}{[\ce{HCO3-}][\ce{H3O+}]}$$

But I'm wondering if I need to go further with the $\ce{H2CO3}$, since it would hydrolyze in water:

$$\ce{H2CO3_{(aq)} + H2O_{(l)} <=> H3O+_{(aq)} + HCO3-_{(aq)}}$$

inorganic-chemistry acid-base experimental-chemistry aqueous-solution titration

Kyle Anderson

So far, so good. You'll have to graph the $\mathrm{pH}$ of your system against the volume of titrant you've used to construct what's called a titration curve. When you plot the titration curve for this experiment, depending on the relative strengths of the bases involved, you will see one or two inflection points signaling that the equilibrium conditions have changed. This will tell you whether you need to take the second ionization into account: if there is a second inflection point, that's the second ionization. Without actually looking up the $\mathrm{p}K_\mathrm{a}$ of $\ce{H2CO3}$, I think for most computations it could be omitted, since the second ionization has a much lower constant than the first. That is, unless you're looking for zwitterionic states or something like that.

Bernard Jude Gutierrez
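One way to read the constant straight off that titration curve, assuming the usual dilute-solution approximation (activities taken as concentrations), is via the Henderson-Hasselbalch relation for the $\ce{H2CO3}$ / $\ce{HCO3-}$ pair:

$$\mathrm{pH} = \mathrm{p}K_\mathrm{a1} + \log\frac{[\ce{HCO3-}]}{[\ce{H2CO3}]}$$

At the half-equivalence point of the titration, $[\ce{HCO3-}] = [\ce{H2CO3}]$, so the measured pH equals $\mathrm{p}K_\mathrm{a1}$ of carbonic acid directly. The base ionization constant of $\ce{HCO3-}$ then follows from $K_\mathrm{a1} \cdot K_\mathrm{b} = K_\mathrm{w}$.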
SCNrank: spectral clustering for network-based ranking to reveal potential drug targets and its application in pancreatic ductal adenocarcinoma

Enze Liu1, Zhuang Zhuang Zhang2, Xiaolin Cheng3, Xiaoqi Liu2 & Lijun Cheng4

Pancreatic ductal adenocarcinoma (PDAC) is the most common pancreatic malignancy. Due to its wide heterogeneity, PDAC acts aggressively and responds poorly to most chemotherapies, creating an urgent need for the development of new therapeutic strategies. Cell lines have been used as the foundation for drug development and disease modeling. CRISPR-Cas9 plays a key role in every step of drug discovery: from target identification and validation to preclinical cancer cell testing. Using cell-line models and CRISPR-Cas9 technology together makes drug target prediction feasible. However, there is still a large gap between predicted results and actionable targets in real tumors. Biological network models provide a great means of mimicking genetic interactions in real biological systems, which can benefit gene perturbation studies and potential target identification for treating PDAC. Nevertheless, building a network model that takes cell-line data and CRISPR-Cas9 data as input to accurately predict potential targets that will respond well in real tissue remains unsolved.

We developed a novel algorithm, 'Spectral Clustering for Network-based target Ranking' (SCNrank), that systematically integrates three types of data: expression profiles from tumor tissue, normal tissue and cell-line PDAC; the protein-protein interaction (PPI) network; and CRISPR-Cas9 data, to prioritize potential drug targets for PDAC. The whole algorithm can be divided into three steps: (1) using the STRING PPI network skeleton, SCNrank constructs tissue-specific networks from the expression profiles of PDAC tumor and normal pancreas tissues; (2) with the same network skeleton, SCNrank constructs cell-line-specific networks using the cell-line PDAC expression profiles and CRISPR-Cas9 data from pancreatic cancer cell-lines; (3) SCNrank applies a novel spectral clustering approach to reduce data dimension and generate gene clusters that carry common features from both networks. Finally, SCNrank applies a scoring scheme called the 'Target Influence score' (TI), which estimates a given target's influence towards the cluster it belongs to, to score and rank each drug target.

We applied SCNrank to analyze 263 expression profiles, CRISPR-Cas9 data from 22 different pancreatic cancer cell-lines and the STRING protein-protein interaction (PPI) network. With SCNrank, we successfully constructed an integrated tissue PDAC network and an integrated cell-line PDAC network, both of which contain 4414 selected genes that are overexpressed in tumor tissue samples. After clustering, the 4414 genes are distributed into 198 clusters, which include 367 targets of FDA-approved drugs. These drug targets are all scored and ranked by their TI scores, which we defined to measure their influence towards the network. We validated the top-ranked targets in three ways: firstly, by mapping them onto the existing clinical drug targets of PDAC to measure the concordance; secondly, by performing enrichment analysis on these drug targets and the clusters they belong to, to reveal functional associations between clusters and PDAC; and thirdly, by performing survival analysis for the top-ranked targets to connect targets with clinical outcomes.
Survival analysis reveals that overexpression of three top-ranked genes, PGK1, HMMR and POLE2, significantly increases the risk of death in PDAC patients. SCNrank is an unbiased algorithm that systematically integrates multiple types of omics data to select and rank potential drug targets. SCNrank shows great capability in predicting drug targets for PDAC. Pancreatic cancer-associated gene candidates predicted by our SCNrank approach have the potential to guide genetics-based anti-pancreatic-cancer drug discovery.

Pancreatic cancer is the third leading cause of cancer death in the United States. The American Cancer Society estimates that 53,070 Americans will be diagnosed with pancreatic cancer in 2017, and that 41,780 will die from the disease [1]. About 85% of pancreatic cancers are pancreatic ductal adenocarcinomas (PDACs). Despite decades of effort, PDAC has the shortest survival time of all major cancers, and the five-year survival rate is only ~8%. PDAC is usually diagnosed at an advanced stage, when tumor cells have already spread into the lymphatic system and vicinal organs, which limits the choice of effective treatments [2]. Another challenge in treating PDAC is its treatment-recalcitrant character [3, 4], which often leads to insensitivity towards many chemotherapeutic and target-based drugs [5]. Even though drug combinations such as Gemcitabine plus the epidermal growth factor receptor (EGFR) inhibitor Erlotinib, or Gemcitabine plus Nab-paclitaxel, have been widely applied in the clinical setting, survival is only modestly improved [3]. Therefore, identifying novel drug targets for treating PDAC is an urgent need.

The establishment of cell lines from human tumors is largely responsible for our early progress in cancer research. Cancer cell models show immense potential for cancer medicine by linking cellular variation to genomic features. However, the complexity of modeling cancer in cells has increased the difficulty of observing and manipulating a complex PDAC process in a manner that cannot be performed in patients [6]. In recent years, CRISPR-Cas9 genome editing technology has become a reliable tool for discovering therapeutic targets in cancer cells and for validating them in large-scale preclinical testing on cancer cells [7]. The ease of constructing CRISPR libraries enables large-scale screens that target all (or a desired subset of) the protein-coding genes encoded in a whole genome via a microarray-based platform [8]. The capabilities of CRISPR-based genetic screens offer great opportunities to observe cell variation, which further benefits essential gene selection and effective target identification in cancer cells. On the other hand, recent advances in high-throughput microarrays have produced a wealth of information concerning pancreatic cancer mechanisms. Whole-genome profiling has allowed the simultaneous identification of hundreds of genes that are perturbed in pancreatic cancer patients. Substantial progress has been made in our understanding of the biology of pancreatic cancer at the molecular level, including cancer-associated genes as drug targets in PDAC [9]. However, it remains a challenge to identify potential targets by building upon cancer-cell CRISPR/Cas9 genetic perturbation screen data and transcriptome data collected from patients and cancer cells. Network-based analysis has greatly benefited cancer biology.
Patterns that reflect important cancer-related processes and mechanisms can be revealed in a large-scale complex network in which genes, proteins and other components interact with each other. A better understanding of the associations/regulations of genes or proteins from a network perspective can provide valuable insights for target selection in developing novel cancer treatments [10]. So far, biological networks have been widely used in numerous studies for identifying genes related to certain therapies, through curated databases, specialized drug-protein networks [11] or protein-disease networks [12, 13]. (1) Curated databases, such as the STRING protein-protein interaction network [14] and the KEGG pathway network [15], provide complete genome-wide networks that contain entire gene regulations, signal transductions and gene-protein associations. However, these resources are not built for specific cancer types, making them too generalized, and it is difficult to analyze them as a whole. (2) A drug-protein network is often used to investigate the mechanism of drug action and drug target prioritization [16]. For instance, Isik et al. provided drug target identification based on perturbed gene expression from the Connectivity Map (CMAP) [17] and protein-protein interaction (PPI) network information. However, these technologies did not directly connect a drug with disease genes. (3) Constructing protein-disease networks is another approach to identify gene-disease associations for selecting therapeutic targets in cancer [18]. Ferrero et al. proposed a semi-supervised network approach, which evaluates disease association evidence and makes de novo predictions of potential therapeutic targets based on it [19]. These types of methods fail to incorporate target information in their models to accurately predict drug targets. CRISPR-Cas9 genome-wide perturbation data provide the opportunity to find genes vital to pancreatic cancer by looking at the lethality of perturbing an individual gene, observed through the expression variation of cancer cells [20]. However, gene perturbation data alone cannot resolve the target-ranking problem. Moreover, identifying drug targets that actually work on living tissues from gene perturbation data is still challenging.

In this paper, we propose a method called 'SCNrank' that systematically utilizes expression data from tissue and cell-lines, along with gene perturbation data and PPI networks, to select and rank druggable targets that effectively work on tissues. SCNrank systematically compares the network structure of a PDAC tissue-specific network and a PDAC cell-line-specific network to identify similarities commonly existing in the two networks. SCNrank then utilizes CRISPR-Cas9 data to score and rank targets from these similarities. To our knowledge, this is the first time a model has been proposed that systematically scores and ranks potential targets by considering network similarities between tumor networks and cell-line networks. In addition, we validated the ranked drug targets by 1) mapping them onto existing PDAC drug targets; 2) applying pathway analysis to the drug targets and the clusters they lie within, to show their functional associations with PDAC; and 3) performing survival analysis for the top-ranked drug targets.
This study aimed to identify perturbed genes based on gene expression datasets representing distinct states of tumor tissues and adjacent normal tissues, and then to align them with the integrated network generated from cell-line PDAC expression and CRISPR-Cas9 perturbation data for target selection (Fig. 1). Gene expression profiles from 263 samples, CRISPR-Cas9 data from 22 pancreatic cancer cell-lines, the STRING protein-protein interaction (PPI) network consisting of 19,056 proteins and 116,009,230 PPIs, and 1317 targets corresponding to all FDA-approved drugs are included in this study. We developed a subnetwork target identification algorithm called Spectral Clustering for Network-based target Ranking ('SCNrank'). The core idea of SCNrank is to align tissue PDAC patterns to cell-line PDAC patterns, and then incorporate gene perturbation (CRISPR-Cas9) data to score and rank targets based on these patterns.

Fig. 1. Workflow of this study. (a) Constructing an integrated tissue-specific PDAC network with weighted nodes and weighted edges using the tissue PDAC expression profile, the normal pancreas expression profile and PPI network data. (b) Constructing an integrated cell-line-specific PDAC network with weighted nodes and weighted edges using the cell-line PDAC expression profile, CRISPR data and PPI network data. (c) Spectral clustering of the integrated tissue-specific PDAC network. (d) Aligning the clustering results on the integrated cell-line-specific PDAC network and ranking targets with a scoring scheme (TI score). (e) Validation of top-ranked targets.

SCNrank uses the STRING PPI network [14] as a skeleton and expression data from PDAC tissue and cell-lines as complements to construct two networks, one for tissue and one for cell-line, both of which share the same PPI skeleton but have entirely different node and edge weights, which carry their unique characteristics. We took advantage of dimension-reduction approaches to decompose the networks into clusters, so as to better capture common features in the tissue network and detect optimal targets. Finally, we aligned the clusters of interest from the tissue network to the cell-line network and then applied a customized Dijkstra path-searching algorithm to search for and rank all possible targets within each cluster. SCNrank includes four steps (see Fig. 1a-d). For the cell-line PDAC and tumor PDAC data respectively, the algorithm generates integrated networks and maps them onto the STRING PPI network so that they become comparable (Fig. 1a-b). Subnetwork partition (Fig. 1c) and a scoring scheme for aligned subnetworks (Fig. 1d) are the two key methods of SCNrank. Validations of the ranked targets are included in this study (Fig. 1e). The detailed SCNrank algorithm is illustrated in Fig. 2.

Fig. 2. Workflow of SCNrank. (a) Constructing the integrated tissue PDAC network; (b) constructing the integrated cell-line PDAC network; (c) spectral clustering for subnetwork partitioning; (d) aligning clusters between the tissue network and the cell-line network, and then calculating the TI score for targets to rank them.

Subnetwork partitioning is typically used to subdivide large networks into smaller, more tractable subnetworks. Subnetworks can reflect important cancer-related gene regulation processes and modular mechanisms. Associations/regulations of genes or proteins within subnetworks can provide valuable insights for target selection in developing novel treatments for pancreatic cancer [10]. Spectral clustering [21] is a dimension-reduction and graph-clustering approach.
It first reduces the data dimension so that core features are revealed, then performs clustering analysis on the simplified data, which categorizes the data better than approaches that cluster the complete data directly. In our study, spectral clustering is used first to reduce the data dimension of the integrated networks and then to cluster them. Here, spectral clustering is designed to reduce data dimension and identify perturbed gene subnetworks of pancreatic tumors, where node weights originate from the degree of dysregulation between gene expression datasets representing distinct states (tumors and adjacent normal tissue), and edge weights come from the correlation coefficients of the tumor gene expression profiles.

Subnetwork alignment score for priority targets

Numerous graph alignment approaches based on certain features or conditions have been developed. Typically, algorithms follow seed-based or score-based strategies. SubNet [22] first applies seed genes with the PageRank algorithm to identify aligned subnetworks [23]. Score-based strategies rely on scoring schemes for either edges or nodes. Guo et al. proposed a condition-specific subnetwork selection algorithm that scores only edges [24]. Dezso et al. developed an algorithm that scores nodes to extract disease-specific subnetworks [25]. However, these methods use only part of the graph information, such as nodes or edges alone, which is not enough to observe network topology variation [26]. IODNE deploys a minimum-spanning-tree search algorithm and simultaneously scores edges and nodes to select the most dysregulated subnetworks for potential disease-gene targets, and has been successfully applied to breast cancer subnetwork identification [27]. However, it does not provide direct evidence of actionable drug targets. To overcome these drawbacks, we developed a scoring scheme that simultaneously takes node weights and edge weights into account, and a Dijkstra shortest-path algorithm is used to score subnetworks and rank targets.

SCNrank algorithm

SCNrank takes multiple types of omics data from tissue and cell-lines as input to rank druggable targets. SCNrank consists mainly of four steps (shown as subgraphs A, B, C and D in Fig. 2).

STEP A: construct an integrated network for tissue PDAC

The algorithm first compares tumor tissue and normal tissue expression profiles to select the genes overexpressed in tumors. Since the tumor and normal tissue groups have unequal sample sizes, we performed an unpaired t-test with a p-value cut-off of 0.05. Log fold changes between tumor and normal tissue samples are calculated for all significantly overexpressed genes. The algorithm then constructs a correlation network, using the Pearson correlation coefficients as edge weights and the log fold changes as node weights. The algorithm then maps the integrated network onto the STRING PPI network and selects the overlapping subnetwork. The rationale for the mapping is that (1) high correlations among genes that are also reflected at the protein level are more likely to be true, and (2) mapping both the tissue integrated network and the cell-line integrated network onto the same PPI network makes them comparable via the PPI network. Eventually, a network is constructed whose skeleton comes from the PPI network, whose edge weights come from pairwise gene correlations, and whose node weights come from the tumor-versus-normal log fold changes.
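As a concrete illustration of STEP A, the sketch below builds the weighted tissue network. This is a minimal sketch, not the authors' actual implementation: it assumes log2-scale expression matrices (genes × samples) and a pre-built set of PPI edges, and all names are illustrative.

```python
import numpy as np
from scipy import stats

def build_tissue_network(tumor, normal, genes, ppi_edges, alpha=0.05):
    """STEP A sketch: tumor-overexpressed genes become nodes weighted by log
    fold change; PPI-backed gene pairs become edges weighted by Pearson CC."""
    _, pvals = stats.ttest_ind(tumor, normal, axis=1)   # unpaired t-test per gene
    logfc = tumor.mean(axis=1) - normal.mean(axis=1)    # log2 data: difference = log FC
    keep = np.where((pvals < alpha) & (logfc > 0))[0]   # significantly overexpressed
    node_weight = {genes[i]: logfc[i] for i in keep}
    cc = np.corrcoef(tumor[keep])                       # pairwise Pearson correlations
    edge_weight = {}
    for a in range(len(keep)):
        for b in range(a + 1, len(keep)):
            pair = (genes[keep[a]], genes[keep[b]])
            if pair in ppi_edges or pair[::-1] in ppi_edges:  # keep the PPI skeleton only
                edge_weight[pair] = cc[a, b]
    return node_weight, edge_weight
```

Fed instead with cell-line expression and essentiality values as node weights, the same routine would yield the STEP B network described next.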
STEP B: construct an integrated perturbation network of pancreatic cancer cells

Only the genes selected in STEP A are picked from the cell-line expression profile for integrated network construction. Similarly, the pairwise Pearson correlation coefficients of these genes are calculated to build a correlation network. The network is then mapped onto the STRING PPI network and only the overlapping subnetwork is kept. The gene essentiality values (CRISPR-Cas9 data) are then integrated into the network as node (gene) weights. Finally, the two constructed networks share the same nodes and edges but have entirely different node weights and edge weights.

STEP C: dimension reduction and network partition

Spectral clustering [21] is a dimension-reduction scheme that divides a network into pieces based on the spectrum (eigenvalues) of the corresponding similarity matrix. In the clustering process, the high-dimensional network is reduced to low-dimensional clusters, since common features among variables can be better captured from a graph perspective. Given a graph G with n nodes and k categories, the objective function of spectral clustering can be written as:

$$ \min \operatorname{cut}\left(A_1,\dots,A_k\right)=\frac{1}{2}\sum_{i=1}^{k}W\left(A_i,\overline{A_i}\right) \tag{1} $$

where $W\left(A_i,\overline{A_i}\right)$ is the weight between cluster $A_i$ and its complement set $\overline{A_i}$. However, this has been proven to be an NP-hard discrete problem. In this study, we applied a widely used spectral approach called RatioCut [28] to make the optimal cut by solving the following objective function:

$$ \min_{A_1,\dots,A_k} \mathrm{Tr}\left(A^{\prime}LA\right)\quad \text{subject to } A^{\prime}A=I \tag{2} $$

where L is the normalized Laplacian matrix (defined in formula (7)), $A=Y\left(Y^TY\right)^{-\frac{1}{2}}$ is a scaled partition matrix, and Y is a partition matrix indicating a clustering scheme. The general steps of spectral clustering are:

1. For n variables (nodes), construct an affinity matrix S,

$$ S=\left(\begin{array}{ccc}s_{11}& \cdots & s_{1n}\\ \vdots & \ddots & \vdots \\ s_{n1}& \cdots & s_{nn}\end{array}\right) \tag{3} $$

where $s_{ab}$ indicates the connectivity between variables a and b in the network.

2. Construct a diagonal matrix D as a degree matrix,

$$ D=\left(\begin{array}{ccc}d_1& \cdots & 0\\ \vdots & \ddots & \vdots \\ 0& \cdots & d_n\end{array}\right) \tag{4} $$

where $d_a$ indicates the degree (total edge weight) of variable a in the network. Clearly,

$$ d_a=\sum_{k=1}^{n}S_{ak} \tag{5} $$

3. Construct the Laplacian matrix

$$ L^{\prime}=D-S \tag{6} $$

4. Normalize the Laplacian matrix

$$ L=D^{-\frac{1}{2}}L^{\prime}D^{-\frac{1}{2}} \tag{7} $$

5. Perform singular value decomposition of matrix L.

6. Pick the top K eigenvalues and their corresponding eigenvectors to generate an N × K matrix.

7. Perform K-means clustering [29] on the extracted matrix.

Clearly, the Laplacian matrix L carries two types of node information: local information, namely node connectivity towards its neighbors in matrix S, and global information, namely node degrees, or 'influence' towards the entire network. Hence, the clustering strategy can be thought of as grouping similar nodes based on their local and global similarities. Inspired by this idea, we used the Pearson correlation coefficient (CC) among nodes instead of the 0/1 connectivity value in the affinity matrix to measure the local similarities among genes.
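The generic pipeline above can be sketched compactly. This is a minimal illustration, not the authors' code: it assumes a symmetric affinity matrix with nonnegative entries (e.g., absolute correlations), and it takes the eigenvectors of the k smallest eigenvalues of the normalized Laplacian, the usual RatioCut convention.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_partition(S, k):
    """Affinity -> degree -> normalized Laplacian -> eigenvectors -> k-means."""
    d = S.sum(axis=1)                          # degrees (formula 5)
    L = np.diag(d) - S                         # Laplacian L' = D - S (formula 6)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_norm = D_inv_sqrt @ L @ D_inv_sqrt       # normalized Laplacian (formula 7)
    _, eigvecs = np.linalg.eigh(L_norm)        # eigenvalues in ascending order
    U = eigvecs[:, :k]                         # n x k spectral embedding
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```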
In addition to the CC-based affinity, we plugged the log fold change of tumor-versus-normal expression into the degree matrix to indicate the global influence of genes. Hence, the matrices S and D become:

$$ S^{\prime}=\left(\begin{array}{ccc}r_{11}& \cdots & r_{1n}\\ \vdots & \ddots & \vdots \\ r_{n1}& \cdots & r_{nn}\end{array}\right) \tag{8} $$

$$ D^{\prime}=\left(\begin{array}{ccc}FC_1& \cdots & 0\\ \vdots & \ddots & \vdots \\ 0& \cdots & FC_n\end{array}\right) \tag{9} $$

where $r_{ab}$ is the CC between genes a and b in the expression profile when constructing the integrated tumor network and the integrated cell-line network. $FC_a$ is the log fold change of the gene when comparing its expression in the tumor group to its value in the normal group while constructing the integrated tumor network; in the cell-line network, $FC_a$ represents the gene essentiality value (CRISPR-Cas9 value). To fulfill formula (5), S needs to be normalized to:

$$ S^{\prime\prime}=\left(\begin{array}{ccc}\frac{r_{11}FC_1}{\sum_{k=1}^n r_{1k}}& \cdots & \frac{r_{1n}FC_1}{\sum_{k=1}^n r_{1k}}\\ \vdots & \ddots & \vdots \\ \frac{r_{n1}FC_n}{\sum_{k=1}^n r_{nk}}& \cdots & \frac{r_{nn}FC_n}{\sum_{k=1}^n r_{nk}}\end{array}\right) \tag{10} $$

Hence, the final Laplacian becomes $L^{\prime} = D^{\prime} - S^{\prime\prime}$, and the normalized Laplacian becomes

$$ L=D^{\prime -\frac{1}{2}}L^{\prime}D^{\prime -\frac{1}{2}} \tag{11} $$

For K-means clustering, picking the optimal K can be arbitrary. In our case, K equals the number of eigenvalues the algorithm picks; too many or too few eigenvalues result in overfitting and underfitting, respectively. Hence, we applied an intuitive approach: for K from 1 to the total number of variables, we performed K-means clustering and calculated Hartigan's number, a measure of clustering quality obtained by comparing two consecutive clustering results. For a K-means clustering, if the number is greater than 10, then moving to (K+1)-means clustering is worthwhile [30]. We selected the K at which Hartigan's number first falls below 10. We acknowledge that this scheme of picking K does not guarantee a global optimum.

STEP D: graph-structure similarity alignment between subnetworks of dysregulated genes in tumors and perturbation networks in cancer cells, and scoring to rank priority potential targets

We applied spectral clustering to the tissue integrated network to look for genes that show common features. We then mapped the 1317 target genes of all FDA-approved drugs onto the clusters. For each successfully mapped drug target, we examined the influence that the target might have over its whole cluster, under the assumption that a drug target's 'influence' is limited to its cluster. In that case, a drug target's influence on any node is determined by the paths between them. Hence, given a graph G(V, E), where V and E are the node and edge sets, let W be the node weight set and Y the edge weight set. For a drug target x, its maximum 'influence' towards all other nodes can be described as:

$$ \sum_{k\in V}W_k\prod_{i=k}^{x}Y_{i,i_{\mathrm{next}}},\qquad Y_{i,i_{\mathrm{next}}}\in E \tag{12} $$

where $\prod_{i=k}^{x}Y_{i,i_{\mathrm{next}}}$ indicates the influence transmitted from target x to a node k via one possible path. Obviously, to maximize term (12), for every other node i we need to find the most correlated path between x and i.
Thus, the total influence of x becomes:

$$ TI=\sum_{k\in V}W_k\max\left(\prod_{i=k}^{x}Y_{i,i_{\mathrm{next}}}\right) \tag{13} $$

Here, the term $\max\left(\prod_{i=k}^{x}Y_{i,i_{\mathrm{next}}}\right)$ represents the most correlated path between x and i, and we define term (13) as the Target Influence (TI) score. We then developed a scoring scheme for calculating the TI of all 367 drug targets; the details are described in Table 1.

Table 1. Scoring scheme for identifying druggable targets from a clustered graph.

Given a source node and a graph, the well-known Dijkstra algorithm [31] can find the shortest paths (paths of minimum total weight) between the source node and all other nodes. By taking the reciprocal of all edge weights, Dijkstra can instead be used to find the 'heaviest', i.e. most correlated, paths between the source node and all other member nodes in a cluster (a subnetwork). Thus, we applied the Dijkstra algorithm to find the most correlated paths between drug targets and all other genes within a cluster. The hypothesis behind this is that when a drug target is hit, its influence is transmitted to the other nodes via the most correlated paths. Moreover, in the cell-line-specific network, two genes might be either positively or negatively correlated; hence, we multiply these correlations along the path to allow the drug target to be positively or negatively associated with the other nodes. This multiplied coefficient is then multiplied by the node weight (gene essentiality value) to represent the node's reaction to the knockdown/knockout of the drug target, giving the node's influence score. Finally, all influence scores within the cluster are summed as the total influence score of the drug target for the entire cluster. Since most drug targets are highly regulated in tumors, we record the maximum score that a target can have towards its cluster. If multiple targets fall in one cluster, we report the target that causes the maximum influence on its cluster as the druggable target for that cluster. Finally, we ranked all targets by these influence scores.

Expression data of PDAC

Expression data gathered from 263 samples across three groups are used in this study, including 92 PDAC cell-line samples, 113 PDAC tissue samples and 58 adjacent normal pancreas tissue samples. These data are all from the Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) database and are all generated from the Affymetrix Human Genome U133 Plus 2.0 Array, which contains 54,675 probes pointing to over 20,000 genes. The complete annotation of all samples can be found in Additional file 1 (Table 2).

Table 2. Gene expression data used in this study along with their accession numbers in the GEO database.

Protein-protein interaction network

STRING [14] is a comprehensive public pathway database (https://string-db.org/) that accumulates prior knowledge of biological pathways and protein-protein interactions. We included the STRING network protein-links version 11 data in our analysis.

Genome-wide CRISPR-Cas9 screening data and gene essentiality value

To measure gene essentiality, we used CRISPR-Cas9 v3.3.8 screening data from 'Project Achilles' [32,33,34] (https://portals.broadinstitute.org/achilles), which includes genome-wide CRISPR-Cas9 screening data affecting cell survival across 43 tumorous cell lines as well as genome-wide RNAi screening data over 501 cell-lines.
We chose CRISPR-Cas9 over RNAi because recent studies have indicated that, compared to RNAi, CRISPR-Cas9 has fewer off-target effects and is thus better suited to cancer drug-target research [35]. In total, gene perturbation data of 74,222 sgRNAs covering 17,733 genes across 22 PDAC cell-lines are included in this study.

FDA-approved drug targets

We downloaded all FDA-approved drugs and their targets from DrugBank [5]. In total, all targets map onto 1317 genes, of which 283 are cancer drug targets.

Data preprocessing

Gene expression profile preprocessing

We converted the raw data (.cel files) to expression values in three steps: background correction, normalization and summarization. We then normalized all samples to make them comparable. Probe-based expression values are converted to gene-based expression values by sequentially applying the following settings: 1. a probe containing more than 20% missing data is eliminated; 2. a K-nearest-neighbor (KNN) approach is applied to infer the missing data, estimating each missing value as the average of its K = 10 nearest neighbors; 3. probes are converted to genes using the Affymetrix U133 Plus 2.0 annotation file as a reference, which can be downloaded from the official Affymetrix website. For probes that point to the same gene, their values are averaged to represent the expression value of that gene.

Expression data normalization

To make the expression profiles from different samples comparable, we applied the Microarray Suite 5 (MAS 5.0) normalization algorithm [36], which is implemented in the R package 'affy' available from Bioconductor (http://bioconductor.org). We then applied quantile normalization to all expression samples to reduce batch effects. Finally, all values were log2-transformed for analysis.

CRISPR-Cas9 gene essentiality

In CRISPR-Cas9 screening data, each single-guide RNA (sgRNA) targeting a gene has its own fold change (before knockout versus after knockout), indicating the gene's importance to cell survival; each gene might be covered by multiple sgRNAs. Since we were looking for drug targets at the gene level, we converted sgRNA-level fold changes to gene-level fold changes so that each gene is directly linked to cell survival. The conversion scheme is as follows: for genes targeted by only one guide RNA, we simply used its fold change as the fold change of the gene; for genes targeted by multiple sgRNAs, we took the average of the fold changes of these sgRNAs to represent the overall fold change of the gene. We define the gene-level fold change as the 'gene essentiality value' in this study. We then calculated the average gene essentiality value of each gene across all 22 pancreatic cell-lines.

Overlapping

15,664 common genes among the 263 gene expression profiles for tumor tissue, normal tissue and cell-lines are included in the SCNrank analysis, among which 7376 genes are significantly dysregulated by an unpaired t-test with a p-value less than 0.05. Of these 7376 genes, 4584 are significantly overexpressed in the tumor tissue group compared to the normal tissue group. We then mapped the 4584 genes onto the STRING human PPI network; 4144 genes overlapped with the PPI network, 367 of which are targets of FDA-approved drugs.
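The two averaging steps described above (probe-level expression to gene-level expression, and sgRNA-level fold changes to gene-level essentiality) can both be expressed with one group-and-average helper. This is a sketch for illustration only; the input names are hypothetical, not from the original pipeline.

```python
import pandas as pd

def collapse_to_genes(mapping: pd.Series, values: pd.DataFrame) -> pd.DataFrame:
    """Average all rows (probes or sgRNAs) that map to the same gene symbol."""
    return values.groupby(mapping).mean()

# Hypothetical inputs:
#   expr:     probe x sample log2 expression;  probe2gene: probe -> gene symbol
#   guide_fc: sgRNA x cell-line fold changes;  guide2gene: sgRNA -> gene symbol
# gene_expr    = collapse_to_genes(probe2gene, expr)
# essentiality = collapse_to_genes(guide2gene, guide_fc).mean(axis=1)  # mean over 22 cell-lines
```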
In total, 4144 genes, their associated 931,288 pairs of gene-gene interactions, and 367 targets of FDA-approved drugs (including 90 cancer drug targets) were input into the SCNrank algorithm to seek potential targets for PDAC patients.

Potential target subnetworks and targets ranked by SCNrank

In the target-ranking process, we selected the top 40 eigenvalues for K-means clustering, which led to 198 clusters (subnetworks) for PDAC patients. The 198 complete clusters and their members can be found in Additional file 1. All 367 targets are scored and ranked by the SCNrank system. Table 3 shows the top ten targets and two well-known PDAC drug targets ranked by SCNrank, of which POLE2 and DHFR are known cancer drug targets, and ERBB2 and MTOR are PDAC drug targets. A complete ranked list can be found in Additional file 1.

Table 3. Statistics of top-ranked drug targets. Column 2: rank by SCNrank. Column 3: cancer drug target information. Column 4: average expression value in tumor tissue samples. Column 5: average expression value in normal tissue samples. Column 6: log2 fold change of expression between the tumor and normal groups. Column 7: T value from the t-test between the tumor and normal groups. Column 8: P-value from the t-test between the tumor and normal groups. Column 9: gene essentiality value (cell survival rate at T3 versus T0); positive and negative values indicate enhanced and reduced cell survival in vitro, respectively.

The 12 selected genes are all highly expressed in tumor tissue compared to normal tissue. Moreover, the loss of any of the 12 genes causes reduced cell survival. Among them, two widely accepted targets for treating PDAC, ERBB2 and MTOR, are captured by the SCNrank algorithm. PGK1, POLE2 and HMMR are the top three ranked targets. PGK1 is in a cluster of 41 genes; POLE2 and HMMR are together in a cluster of 67 genes. Figure 3 shows the expression levels of the two clusters containing the top three ranked targets in tumor tissue, normal tissue and cell lines. These genes show concordantly higher expression in the cell-line and tumor groups than in the normal group.

Fig. 3. Heatmap of the PGK1 and POLE2-HMMR clusters in three different expression profiles. Clusters 1 and 2 refer to the PGK1 cluster and the POLE2-HMMR cluster, respectively. Tumor, Normal and Cell-line indicate tumor samples, normal samples and cell-line samples, respectively. Red and blue in the panel label indicate over-expression and under-expression of genes, respectively.

Phosphoglycerate kinase 1 (PGK1) codes for a glycolytic enzyme that catalyzes the synthesis of 3-phosphoglycerate. Its functions and mechanisms are not yet completely understood. As an inhibitor, PGK1 inhibits the secretion of vascular endothelial growth factor (VEGF) and interleukin-8, thus inhibiting angiogenesis [37]. However, multiple studies have suggested that in metastatic tumor cells PGK1 plays a completely contrary role: overexpression of PGK1 facilitates not only tumor growth and interaction with the microenvironment, but also tumor invasion and metastasis in liver, gastric and prostate cancer [38, 39]. In this study, PGK1 has been identified as the target that can cause the highest influence on its cluster (shown in Fig. 4a). It interacts not only with the greatest number of genes but also with the greatest number of other targets in the cluster, and most of its correlations with its neighbors are positive.
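The 'influence' that ranks PGK1 first is the TI score of formula (13) and Table 1. The sketch below is a minimal illustration of that computation, not the authors' code: it assumes each cluster is a networkx graph whose edges carry the signed correlation as attribute 'cc' and whose nodes carry the gene essentiality value as attribute 'ess' (illustrative names), and it replaces the paper's reciprocal-edge-weight trick with the standard -log|cc| transform, under which Dijkstra's additive shortest path maximizes the product of correlation magnitudes.

```python
import math
import networkx as nx

def ti_score(G: nx.Graph, target: str) -> float:
    """TI sketch (formula 13): sum, over cluster members, of essentiality times
    the signed correlation product along the most correlated path from target."""
    H = nx.Graph()
    for u, v, data in G.edges(data=True):
        # Minimizing sum(-log|cc|) == maximizing prod(|cc|) along the path.
        H.add_edge(u, v, w=-math.log(abs(data["cc"]) + 1e-12), cc=data["cc"])
    paths = nx.single_source_dijkstra_path(H, target, weight="w")
    score = 0.0
    for node, path in paths.items():
        if node == target:
            continue
        transmitted = 1.0
        for a, b in zip(path, path[1:]):
            transmitted *= H[a][b]["cc"]           # signed influence along the path
        score += G.nodes[node]["ess"] * transmitted
    return score
```

Per cluster, the FDA-approved target with the maximum TI score would then be reported as that cluster's druggable target.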
Fig. 4. Top three ranked drug targets, their interactions with other nodes in the corresponding clusters of the cell-line integrated network, and survival analysis of them. In (a) and (b), cube nodes indicate known targets while circle nodes indicate other genes; red and blue lines indicate positive and negative correlations, respectively; line shade indicates correlation intensity; nodes are placed in clockwise order by their degrees. (a) Top-ranked drug target PGK1 and the subnetwork of its cluster; PGK1 is the node with the highest number of connections. (b) Second- and third-ranked drug targets POLE2 and HMMR and the subnetwork of their common cluster; yellow-highlighted genes are common to HMMR and POLE2; RAD51 is the node with the highest number of connections. (c) Survival curves for high versus low expression of PGK1. (d) Survival curves for high versus low expression of HMMR. (e) Survival curves for high versus low expression of POLE2.

DNA polymerase epsilon 2, accessory subunit (POLE2) is highly involved in DNA repair and replication. It has previously been reported to be highly associated with colorectal cancer [40]. In this study, POLE2 is ranked as the second-highest target. Even though its cluster is much larger than the cluster of PGK1 (shown in Fig. 4b), the influence of POLE2 on its whole cluster is not as strong as that of PGK1. Hyaluronan-mediated motility receptor (HMMR), the target with the third-highest score, is highly involved in cell motility. HMMR forms a complex with BRCA1 and BRCA2 and has thus been identified as a high-risk factor in multiple cancer types such as breast cancer and fibrosarcoma [41, 42]. Interestingly, HMMR is in the same cluster as POLE2 (shown in Fig. 4b). Their degrees and ranks are very similar, implying an equal influence on the whole cluster.

Pathway enrichment analysis for the top three ranked targets and their clusters

For all 198 clusters, we performed pathway enrichment analysis with Gene Set Enrichment Analysis (GSEA) [43]. We used the 'C5 GO biological process (BP)' gene set collection, version 6.2, which contains 4436 gene sets annotated with GO terms and their functions, as the reference, and performed functional analysis for each cluster at a significance level of P < 0.05. GSEA requires a ranked gene list, so we ranked the genes using the log fold change of tumor versus normal tissue as their weights. The complete enriched pathway results and related gene lists can be found in Additional file 1. Our top-ranked gene, PGK1, together with its cluster, significantly enriched 'CARBOHYDRATE_CATABOLIC_PROCESS'. The second- and third-ranked genes, HMMR and POLE2, together with their cluster, significantly enriched multiple pathways such as 'CELL CYCLE' and 'MITOSIS'. These pathways are all highly related to the cell cycle and cell division, suggesting that these two genes, along with their cluster members, are critical components in regulating cell cycles. Moreover, HMMR and POLE2 enriched 8 of the 11 pathways enriched by the entire cluster, suggesting common functional activities.

Ranked target validation by clinical outcomes

We performed survival analysis for differential expression of PGK1, HMMR and POLE2 using GEPIA (http://gepia.cancer-pku.cn/), a public database [44] containing 9736 tumor and 8857 normal samples from the TCGA [45] and GTEx [46] projects.
In Fig. 4c, d and e, all three targets show a significant difference in patient survival (hazard ratio P-value < 0.01). Low expression of these three genes confers significantly better survival than high expression. The survival curves of all three genes show a similar pattern at around 20 months, at which point the low-expression curves begin to clearly separate from the high-expression curves.

Target concordance between clinical drug treatment in pancreatic cancer and selection by the SCNrank algorithm

Amanam and Chung systematically investigated all currently available targeted therapies and drug targets for pancreatic cancer [47]. Many studies have reported HER2 overexpression in up to 45% of patients with PDAC [48], owing to the fact that HER2 amplifications often occur in PDAC [49]. We mapped the known drug targets onto our ranking system and list the results in Table 4.

Table 4. Currently available drugs and drug targets for pancreatic cancer, compared with the associated target ranks from the SCNrank algorithm.

In this study, HER2 is ranked 14th by SCNrank. SCNrank covered five targets commonly used in the clinical setting, of which ERBB2 and MTOR are highly ranked (rank 14 and rank 32, respectively). All the missing targets were absent from the 4414 genes used to construct the integrated networks at the start.

Research in pancreatic cancer target selection

Recently, drug target selection has been extensively studied and various methods have been developed. For instance, the Connectivity Map project (C-map) curated expression profiles of human cells exposed to thousands of drugs, which can serve for drug repositioning [17]. Ma et al. developed an algorithm named 'Met-express' that combines a gene co-expression network with the human metabolic network to predict drug targets for pancreatic cancer. However, these methods use only expression data as their foundation, supplemented by other biological knowledge, to predict targets. Most drugs ultimately act at the protein level, and expression-level regulation may not be reflected at the protein level. Furthermore, such analyses lack the support of cell-survival phenotypes that directly reflect the effects of gene knockdown/knockout experiments. To our knowledge, SCNrank is the first algorithm that can incorporate expression data, PPI data and gene perturbation data (CRISPR or RNAi) for selecting and ranking drug targets.

The novelty of the SCNrank algorithm lies mainly in the following: (i) SCNrank is the first algorithm that takes advantage of dimension-reduction methods to integrate three different types of omics data into a comprehensive network for drug target selection; (ii) SCNrank utilizes CRISPR data, which mimic the real response to drugs, to benefit target selection; (iii) SCNrank uses spectral clustering to reduce data dimensions, capturing features in tissue-based omics data while ranking drug targets on cell-line omics data, which makes the target selection process more reliable. Spectral clustering was initially introduced to cancer biology for identifying novel subtypes of triple-negative breast cancer (TNBC) [50]. To our knowledge, it has never before been used for selecting genotypic features from an integrated network.

Despite these advantages, there is still room for SCNrank to improve. Possible future work includes: (i) incorporating pathway information into the target selection process for PDAC. Pathway information provides a different perspective on the progression and treatment of PDAC [45, 51, 52].
Targeting cancer-related pathways can be a highly effective strategy for treating PDAC, so it is necessary to incorporate pathway information into the drug-target ranking and selection process; ii. incorporating functional information into the target-selection process. The SCNrank algorithm ranks drug targets mainly by differential expression, protein-protein interaction and tissue-target concordance. However, different proteins may have different docking capacities, which directly affect their potential to become druggable targets; unfortunately, SCNrank does not take this information into account when ranking targets, and integrating it into the whole process is necessary.

Clinical drug targets in pancreatic cancer
Tumor cells prefer glycolysis to oxidative phosphorylation for providing energy during proliferation and metastasis. This phenomenon, called the 'Warburg effect' [53], often occurs in certain tumor types such as brain cancer, liver cancer and pancreatic cancer. PGK1 is an important enzyme in these metabolic pathways. Recent studies have revealed that PGK1 can promote cell proliferation and tumorigenesis by enhancing the Warburg effect. For instance, Li et al. showed that PGK1 functions as a protein kinase phosphorylating PDHK1, which further promotes the Warburg effect in brain tumorigenesis [54]. Hu et al. recently reported that acetylation of PGK1 can promote cell proliferation and tumorigenesis in liver cancer via glycolysis pathways [55]. Xie et al. pointed out that PGK1 is highly involved in MYC-induced metabolic reprogramming, which further reinforces the Warburg effect [56]. From the pathway-analysis results in Section 3.3, we also observed a significantly enriched 'cellular metabolic process' pathway, which implies an activated Warburg effect in our PDAC samples. There are already studies that focus on targeting the Warburg effect to treat pancreatic cancer. Rajeshkumar et al. selected a small molecule, FX11, which inhibits lactate dehydrogenase-A (LDH-A), a critical enzyme in pyruvate metabolism, to block the Warburg effect [57]; they observed that in TP53-mutant cells their approach significantly increased tumor-cell apoptosis. These studies demonstrate the possibility of targeting the Warburg effect to treat PDAC. Hence, together with the survival-analysis results shown in Fig. 4c, our findings suggest that PGK1 is a potential target that alternatively targets the Warburg effect and is thus worth further experimental validation.

'DNA polymerase epsilon 2' (POLE2) and 'hyaluronan-mediated motility receptor' (HMMR) have previously been reported as significantly hyper-expressed in both PDAC tissues and cell-line expression profiles [58]. Studies have linked HMMR and its product, the 'receptor for hyaluronan-mediated motility' (RHAMM), to a variety of hematological malignancies and other solid tumors [59,60,61]. This is because RHAMM, working in concert with BRCA1 and BRCA2, can significantly promote tumor growth and metastasis in pancreatic cancer [62] in vivo, and in multiple other cancer types such as basal-like breast cancer [63] and glioma [64] in vivo. Hence, Willemen et al. pointed out that HMMR/RHAMM is a considerable potential target for cancer immunotherapy [65]. Moreover, Li, Ji and Wang targeted HMMR via a long noncoding RNA (lncRNA) and successfully suppressed glioblastoma in a mouse xenograft model [66].
This evidence suggests that HMMR and its product RHAMM are worth further study for their potential as a PDAC drug target. POLE2 is highly involved in DNA repair and replication, but targeting POLE2 to treat cancer is rarely reported. Li et al. used β-elemene, a type of elemane sesquiterpenoid, to suppress POLE2 expression and restrain the malignant phenotypes of lung adenocarcinoma cells in vitro [67], which could serve as evidence for treating pancreatic adenocarcinoma (PDAC) by targeting POLE2.

Conclusions
In this study, we developed an algorithm called SCNrank that links cell-line CRISPR data with gene expression profiles and the PPI network to score and rank drug targets for PDAC. We utilized cutting-edge dimension-reduction and network-analysis methods to identify the potential targets. We systematically explored the molecular mechanisms and roles of potential disease genes in PDAC by performing pathway enrichment analysis. We validated our top-ranked genes by comparing them with existing pancreatic cancer drug targets and by performing survival analysis on the top-ranked targets to predict their clinical outcomes. We showed that the top-ranked target, PGK1, plays a key role in tumor-cell glycolysis in PDAC and has high potential as a target for treating PDAC. Our second- and third-ranked targets, POLE2 and HMMR, have been shown to promote PDAC and various other cancer types. Moreover, HMMR has been extensively studied as a target for treating lung adenocarcinoma and glioma, which may serve as evidence for using HMMR as a novel drug target for PDAC. Taken together, the results provide new guidance for future clinical treatments.

References
1. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2017. CA Cancer J Clin. 2017;67(1):7–30.
2. Iovanna J, et al. Current knowledge on pancreatic cancer. Front Oncol. 2012;2:6.
3. Kruger S, et al. Translational research in pancreatic ductal adenocarcinoma: current evidence and future concepts. World J Gastroenterol. 2014;20(31):10769–77.
4. Kamisawa T, et al. Pancreatic cancer. Lancet. 2016;388(10039):73–85.
5. Adamska A, Domenichini A, Falasca M. Pancreatic ductal adenocarcinoma: current and evolving therapies. Int J Mol Sci. 2017;18(7):1338.
6. Frese KK, Tuveson DA. Maximizing mouse cancer models. Nat Rev Cancer. 2007;7(9):654.
7. Shi J, et al. Discovery of cancer drug targets by CRISPR-Cas9 screening of protein domains. Nat Biotechnol. 2015;33(6):661.
8. Wang T, Lander ES, Sabatini DM. Large-scale single guide RNA library construction and use for CRISPR-Cas9-based genetic screens. Cold Spring Harb Protoc. 2016;2016(3):pdb.top086892.
9. Vincent A, et al. Pancreatic cancer. Lancet. 2011;378(9791):607–20.
10. Barabási A-L, Gulbahce N, Loscalzo J. Network medicine: a network-based approach to human disease. Nat Rev Genet. 2011;12(1):56.
11. Luo Y, et al. A network integration approach for drug-target interaction prediction and computational drug repositioning from heterogeneous information. Nat Commun. 2017;8(1):573.
12. Dimitrakopoulos C, et al. Network-based integration of multi-omics data for prioritizing cancer genes. Bioinformatics. 2018;34(14):2441–8.
13. Ritchie MD, et al. Methods of integrating data to uncover genotype-phenotype interactions. Nat Rev Genet. 2015;16(2):85–97.
14. Szklarczyk D, et al. The STRING database in 2017: quality-controlled protein-protein association networks, made broadly accessible. Nucleic Acids Res. 2017;45(D1):D362–8.
15. Kanehisa M, Goto S. KEGG: Kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000;28(1):27–30.
16. Nielsen TE, Schreiber SL. Towards the optimal screening collection: a synthesis strategy. Angew Chem Int Ed Engl. 2008;47(1):48–56.
17. Lamb J, et al. The Connectivity Map: using gene-expression signatures to connect small molecules, genes, and disease. Science. 2006;313(5795):1929–35.
18. Wang S, Peng J. Network-assisted target identification for haploinsufficiency and homozygous profiling screens. PLoS Comput Biol. 2017;13(6):e1005553.
19. Ferrero E, Dunham I, Sanseau P. In silico prediction of novel therapeutic targets using gene–disease association data. J Transl Med. 2017;15(1):182.
20. Jiang P, et al. Network analysis of gene essentiality in functional genomics experiments. Genome Biol. 2015;16(1):239.
21. Shi J, Malik J. Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell. 2000;22(8):888–905.
22. Lemetre C, Zhang Q, Zhang ZD. SubNet: a Java application for subnetwork extraction. Bioinformatics. 2013;29(19):2509–11.
23. Jiang B, Gribskov M. Assessment of subnetwork detection methods for breast cancer. Cancer Inform. 2014;13(Suppl 6):15–23.
24. Guo Z, et al. Edge-based scoring and searching method for identifying condition-responsive protein-protein interaction sub-network. Bioinformatics. 2007;23(16):2121–8.
25. Dezso Z, et al. Identifying disease-specific genes based on their topological significance in protein networks. BMC Syst Biol. 2009;3:36.
26. Grechkin M, et al. Identifying network perturbation in cancer. PLoS Comput Biol. 2016;12(5):e1004888.
27. Mounika Inavolu S, et al. IODNE: an integrated optimization method for identifying the deregulated subnetwork for precision medicine in cancer. CPT Pharmacometrics Syst Pharmacol. 2017;6(3):168–76.
28. Wei Y-C, Cheng C-K. Towards efficient hierarchical designs by ratio cut partitioning. In: 1989 IEEE International Conference on Computer-Aided Design, Digest of Technical Papers. IEEE; 1989.
29. Hartigan JA, Wong MA. Algorithm AS 136: a k-means clustering algorithm. J R Stat Soc Ser C (Appl Stat). 1979;28(1):100–8.
30. Chiang MM-T, Mirkin B. Intelligent choice of the number of clusters in k-means clustering: an experimental study with different cluster spreads. J Classif. 2010;27(1):3–40.
31. Dijkstra EW. A note on two problems in connexion with graphs. Numer Math. 1959;1(1):269–71.
32. Tsherniak A, et al. Defining a cancer dependency map. Cell. 2017;170(3):564–576.e16.
33. Aguirre AJ, et al. Genomic copy number dictates a gene-independent cell response to CRISPR/Cas9 targeting. Cancer Discov. 2016;6(8):914–29.
34. Cowley GS, et al. Parallel genome-scale loss of function screens in 216 cancer cell lines for the identification of context-specific genetic dependencies. Sci Data. 2014;1:140035.
35. Lin A, et al. CRISPR/Cas9 mutagenesis invalidates a putative cancer dependency targeted in on-going clinical trials. Elife. 2017;6:e24179.
36. Gautier L, et al. affy—analysis of Affymetrix GeneChip data at the probe level. Bioinformatics. 2004;20(3):307–15.
37. Wang J, et al. A glycolytic mechanism regulating an angiogenic switch in prostate cancer. Cancer Res. 2007;67(1):149–59.
38. Zieker D, et al. Phosphoglycerate kinase 1 a promoting enzyme for peritoneal dissemination in gastric cancer. Int J Cancer. 2010;126(6):1513–20.
39. Wang J, et al. Characterization of phosphoglycerate kinase-1 expression of stromal cells derived from tumor microenvironment in prostate cancer progression. Cancer Res. 2010;70(2):471–80.
40. Punjabi P, Murday A. Successful surgical repair of a false aneurysm of the ascending aorta following orthotopic cardiac transplantation: a case report. Eur J Cardiothorac Surg. 1997;11(6):1174–5.
41. Kalmyrzaev B, et al. Hyaluronan-mediated motility receptor gene single nucleotide polymorphisms and risk of breast cancer. Cancer Epidemiol Biomark Prev. 2008;17(12):3618–20.
42. Shigeishi H, et al. Overexpression of the receptor for hyaluronan-mediated motility, correlates with expression of microtubule-associated protein in human oral squamous cell carcinomas. Int J Oncol. 2009;34(6):1565–71.
43. Mootha VK, et al. PGC-1alpha-responsive genes involved in oxidative phosphorylation are coordinately downregulated in human diabetes. Nat Genet. 2003;34(3):267–73.
44. Tang Z, et al. GEPIA: a web server for cancer and normal gene expression profiling and interactive analyses. Nucleic Acids Res. 2017;45(W1):W98–W102.
45. Cancer Genome Atlas Research Network, et al. The Cancer Genome Atlas Pan-Cancer analysis project. Nat Genet. 2013;45(10):1113–20.
46. Carithers LJ, et al. A novel approach to high-quality postmortem tissue procurement: the GTEx project. Biopreserv Biobank. 2015;13(5):311–9.
47. Amanam I, Chung V. Targeted therapies for pancreatic cancer. Cancers (Basel). 2018;10(2):36.
48. Yamanaka Y, et al. Overexpression of HER2/neu oncogene in human pancreatic carcinoma. Hum Pathol. 1993;24(10):1127–34.
49. Chou A, et al. Clinical and molecular characterization of HER2 amplified-pancreatic cancer. Genome Med. 2013;5(8):78.
50. Wang B, et al. Similarity network fusion for aggregating data types on a genomic scale. Nat Methods. 2014;11(3):333–7.
51. Eser S, et al. Oncogenic KRAS signalling in pancreatic cancer. Br J Cancer. 2014;111(5):817.
52. Neuzillet C, et al. Targeting the TGFβ pathway for cancer therapy. Pharmacol Ther. 2015;147:22–31.
53. Vander Heiden MG, Cantley LC, Thompson CB. Understanding the Warburg effect: the metabolic requirements of cell proliferation. Science. 2009;324(5930):1029–33.
54. Li X, et al. Mitochondria-translocated PGK1 functions as a protein kinase to coordinate glycolysis and the TCA cycle in tumorigenesis. Mol Cell. 2016;61(5):705–19.
55. Hu H, et al. Acetylation of PGK1 promotes liver cancer cell proliferation and tumorigenesis. Hepatology. 2017;65(2):515–28.
56. Xie H, et al. PGK1 drives hepatocellular carcinoma metastasis by enhancing metabolic process. Int J Mol Sci. 2017;18(8):1630.
57. Rajeshkumar NV, et al. Therapeutic targeting of the Warburg effect in pancreatic cancer relies on an absence of p53 function. Cancer Res. 2015;75(16):3355–64.
58. Grutzmann R, et al. Gene expression profiling of microdissected pancreatic ductal carcinomas using high-density DNA microarrays. Neoplasia. 2004;6(5):611–22.
59. Tzankov A, et al. In situ RHAMM protein expression in acute myeloid leukemia blasts suggests poor overall survival. Ann Hematol. 2011;90(8):901–9.
60. Yamano Y, et al. Hyaluronan-mediated motility: a target in oral squamous cell carcinoma. Int J Oncol. 2008;32(5):1001–9.
61. Ishigami S, et al. Prognostic impact of CD168 expression in gastric cancer. BMC Cancer. 2011;11:106.
62. Du YC, et al. Receptor for hyaluronan-mediated motility isoform B promotes liver metastasis in a mouse model of multistep tumorigenesis and a tail vein assay for metastasis. Proc Natl Acad Sci U S A. 2011;108(40):16753–8.
63. Maxwell CA, et al. Interplay between BRCA1 and RHAMM regulates epithelial apicobasal polarization and may influence risk of breast cancer. PLoS Biol. 2011;9(11):e1001199.
64. Amano T, et al. Antitumor effects of vaccination with dendritic cells transfected with modified receptor for hyaluronan-mediated motility mRNA in a mouse glioma model. J Neurosurg. 2007;106(4):638–45.
65. Willemen Y, et al. The tumor-associated antigen RHAMM (HMMR/CD168) is expressed by monocyte-derived dendritic cells and presented to T cells. Oncotarget. 2016;7(45):73960–70.
66. Li J, Ji X, Wang H. Targeting long noncoding RNA HMMR-AS1 suppresses and radiosensitizes glioblastoma. Neoplasia. 2018;20(5):456–66.
67. Li J, et al. Knockdown of POLE2 expression suppresses lung adenocarcinoma cell malignant phenotypes in vitro. Oncol Rep. 2018;40(5):2477–86.

This article has been published as part of BMC Medical Genomics Volume 13 Supplement 5, 2020: The International Conference on Intelligent Biology and Medicine (ICIBM) 2019: Computational methods and application in medical genomics (part 1). The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-13-supplement-5.

Author information:
Department of BioHealth Informatics, School of Informatics and Computing, Indiana University-Purdue University, Indianapolis, IN 46202, USA
Department of Toxicology and Cancer Biology, College of Medicine, University of Kentucky, Lexington, KY 40536, USA (Zhuang Zhuang Zhang & Xiaoqi Liu)
College of Pharmacy, Division of Medicinal Chemistry and Pharmacognosy, The Ohio State University, Columbus, OH 43210, USA (Xiaolin Cheng)
Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, OH 43210, USA

Contributions: LC and XL formulated the question and designed the study plan. LC and EL designed the model. EL gathered the data, implemented the model and wrote the paper. ZZ and XC helped refine the paper and provided methods for validating the results. All authors read and approved the final manuscript.
Correspondence to Xiaoqi Liu or Lijun Cheng. The authors declare that they have no competing interests.
Additional file 1: information on all samples, identified clusters and ranked targets. S1: sample annotation; S2: complete ranked target list; S3: identified clusters and their members; S4: targets with clusters.

Liu, E., Zhang, Z.Z., Cheng, X. et al. SCNrank: spectral clustering for network-based ranking to reveal potential drug targets and its application in pancreatic ductal adenocarcinoma. BMC Med Genomics 13, 50 (2020). https://doi.org/10.1186/s12920-020-0681-6
Keywords: Integrated network; Spectral clustering; Drug target ranking
Cylinder absolute games on solenoids
doi: 10.3934/dcds.2020352
L. Singhal
Beijing International Center for Mathematical Research, Peking University, Beijing, 100871, China
Current address: Yau Mathematical Sciences Center, Tsinghua University, Beijing, 100084, China
Received August 2019; Published October 2020
Fund Project: Parts of this work first appeared in a slightly different avatar in the author's PhD thesis submitted to the Tata Institute of Fundamental Research, Bombay in 2017. For a portion of that duration, financial support from CSIR, Government of India under SPM-07/858(0199)/2014-EMR-I is duly acknowledged.

Let $A$ be any affine surjective endomorphism of a solenoid $\Sigma_{\mathcal{P}}$ over the circle $S^1$ which is not an infinite-order translation of $\Sigma_{\mathcal{P}}$. We prove the existence of a cylinder absolute winning (CAW) subset $F \subseteq \Sigma_{\mathcal{P}}$ with the property that for any $x \in F$, the orbit closure $\overline{\{A^{\ell} x \mid \ell \in \mathbb{N}\}}$ does not contain any periodic orbits. A measure $\mu$ on a metric space is said to be Federer if, for all small enough balls around any generic point with respect to $\mu$, the measure grows by at most some constant multiple on doubling the radius of the ball. The class of infinite solenoids considered in this paper provides, to the best of our knowledge, some of the early natural examples of non-Federer spaces where absolute games can be played and won. Dimension maximality and incompressibility of CAW sets is also discussed for a number of possibilities, in addition to their winning nature for the games known from before.

Keywords: Dynamical systems, Hausdorff dimension, Incompressible sets, Non-dense orbits, Schmidt's game.
Mathematics Subject Classification: Primary: 11J61; Secondary: 28A80, 37C45.
Citation: L. Singhal. Cylinder absolute games on solenoids. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2020352
Representation learning in intraoperative vital signs for heart failure risk prediction
Yuwen Chen (ORCID: 0000-0003-4032-5937) & Baolian Qi
BMC Medical Informatics and Decision Making, volume 19, Article number: 260 (2019)

Abstract
The probability of heart failure during the perioperative period is 2% on average, and as high as 17% when accompanied by cardiovascular disease in China. It has become the most significant cause of postoperative death of patients. During an operation, however, the patient is managed by a flow of information so large that it can be difficult for medical staff to identify the information relevant to patient care, and there remain major practical and technical barriers to understanding perioperative complications. In this work, we present three machine learning methods to estimate the risk of heart failure, which transform intraoperative vital-sign monitoring data into different modal representations (a statistical learning representation, a text learning representation and an image learning representation). Firstly, we extracted features from the vital-sign monitoring data of surgical patients by statistical analysis. Secondly, the vital-sign data were converted into text information by Piecewise Aggregate Approximation (PAA) and Symbolic Aggregate Approximation (SAX), and a Latent Dirichlet Allocation (LDA) model was then used to extract per-patient text topics for heart failure prediction. Thirdly, the vital-sign monitoring time series of each surgical patient were converted into a grid image by the grid representation, and a convolutional neural network was then used directly on the grid image for heart failure prediction. We evaluated the proposed methods on monitoring data of real patients during the perioperative period. Our experimental results demonstrate that the Gradient Boosting Decision Tree (GBDT) classifier achieves the best heart failure prediction from the statistical feature representation: its sensitivity, specificity and accuracy reach 83, 85 and 84%, respectively. The experimental results further demonstrate that representation learning on the vital-sign monitoring data of intraoperative patients can effectively capture the physiological characteristics of postoperative heart failure.

Background
Heart failure occurs when the heart is unable to pump sufficiently to maintain blood flow to meet the body's needs. Signs and symptoms commonly include shortness of breath, excessive tiredness and leg swelling. It is considered one of the deadliest human diseases worldwide, and accurate prediction of this risk would be vital for heart failure prevention and treatment. The "Report on Cardiovascular Disease in China, 2018" of the China Cardiovascular Center estimates that more than 290 million people suffer from cardiovascular disease; cardiovascular disease has become the leading cause of death of residents, accounting for more than 40% of the total. Data from the China Health Yearbook 2018 indicate that there are over 50 million operations each year in China, and the rate of perioperative adverse cardiac events has reached 2%. The incidence of adverse events during surgery is 2–17% in patients with heart failure, significantly higher than in other patients (0.1–0.2%), which makes heart failure the most important cause of perioperative complications and mortality.
At present, there is a lack of early intraoperative prediction techniques for perioperative adverse cardiac events. In addition to basic electrocardiograph (ECG), ST-segment and ABP monitoring, researchers have also utilized experimental indicators such as BMP9, the neutrophil-lymphocyte ratio and creatine kinase isoenzyme stratification, which have a certain evaluative value for postoperative adverse cardiac events. However, because of their obvious hysteresis these indicators are difficult to use for early diagnosis and prediction, so they are often employed in the postoperative diagnosis of adverse events. The early clinical diagnosis of adverse heart failure events therefore still relies on the clinical experience of anesthesiologists and physicians.

Currently, research on heart failure is mainly based on data from patients' medical records, physical characteristics, auxiliary examinations and treatment plans, and algorithms are used to build models for studying, analyzing and classifying diagnoses and predictions. Most studies have analyzed the characteristics of electrocardiogram data and built diagnostic models of heart failure [1,2,3,4,5,6]. Choi et al. [7] used a recurrent neural network to analyze the diagnostic data of patients with heart failure, including time series of doctor's orders, spatial density and other characteristics, to build a diagnostic model of heart failure, and verified experimentally that the area under the curve (AUC) of this model's diagnosis was 0.883. Koulaouzidis [8] used a Naive Bayes algorithm to analyze the last hospitalization and remote monitoring data of patients with heart failure, including the patient's condition, cause of heart failure, complications, examinations, the New York Heart Association (NYHA) functional classification, treatment, and remote monitoring data (e.g., vital signs, body weight, treatment, alcohol consumption and general situation), and built a prediction model for the readmission of patients with heart failure; the predicted AUC reached 0.82 after a follow-up of (286 ± 281) days. Shameer et al. [9] also utilized a Naive Bayes algorithm to analyze data variables of patients with heart failure, including diagnosis data, treatment data, examination data, records of doctor's orders and vital-sign data, and built a model for predicting the readmission of patients with heart failure, with a predicted AUC of 0.78. Zheng et al. [10] presented a method that used a support vector machine algorithm to analyze data of patients with heart failure, including age, type of medical insurance, sensitivity assessment (audio-visual and thinking), complications, emergency treatment, drug-induced risks and the period of the last hospitalization, and built a prediction model for the readmission of patients with heart failure, with a prediction accuracy of 78.4%. Chen et al. [11] analyzed the 24-h dynamic electrocardiograms of heart failure patients and healthy controls using a support vector machine (SVM) based on a non-equilibrium decision tree. Their paper first cut the electrocardiogram into segments of more than 5 min, then analyzed heart rate variability from the RR-interval series and built a model of heart failure severity classification, which achieved a classification accuracy of 96.61%. As far as we know, there is no research on predicting the perioperative heart failure risk of patients directly from intraoperative vital-sign monitoring data.
However, previous studies have shown that directly monitored intraoperative data have significant value for early diagnosis and early warning after preprocessing and analysis of the time-series data. Matthew et al. [12] reported that 30% of critical cardiovascular events show abnormal monitoring signs within the 24 h preceding the event. Another study [13] analyzed five vital signs of patients and showed that the deterioration of these indicators could warn physicians of impending respiratory failure. Petersen provided a model that uses monitoring data to predict a patient's further treatment in the ICU, with an early-warning sensitivity of 0.42 [14]. Therefore, we used intraoperative vital-sign monitoring data to predict the risk of perioperative heart failure. However, clinical information is produced at a rate and volume far beyond the processing capacity of the human brain, while the patient's condition changes rapidly, so the sheer amount of clinical information can make it difficult for medical staff to identify the information relevant to patient care. Since machine learning is a class of algorithms that automatically analyze data, obtain rules from them and use those rules to predict unknown data, we used machine learning to build the model for heart failure risk prediction. In this paper, we mainly used five indicators (the intraoperatively monitored heart rate, diastolic blood pressure, systolic blood pressure, blood oxygen saturation and pulse pressure difference) to learn a statistical feature representation, a text feature representation and an image feature representation of the vital-sign monitoring data; these features were then input into classifiers to predict perioperative heart failure.

Our main contributions are in two areas: 1) to our knowledge, ours is the first study to predict perioperative heart failure using only intraoperative vital-sign monitoring data, unlike other studies that used ECG data and biomarkers as classifier input; 2) our methods create meaningful representations of vital-sign monitoring data, and we present three examples of representation learning, with a focus on representations that work for heart failure prediction.

The rest of this paper is organized as follows: Section 2 discusses the preliminaries, related technology and methodology of this paper; Section 3 reports the experimental results; Section 4 discusses the implications and highlights the limitations of the study; and Section 5 concludes.

Preliminary and related technology
In order to provide a common understanding throughout the text, this section describes the concepts of the PAA, SAX, LDA, GRTS and CNN algorithms, which are utilized as feature-extraction techniques and time-series classification algorithms and are implemented in the proposed approach.

Time series classification (TSC)
Classification of unlabeled time series into existing classes is a traditional data mining task. All classification methods start by establishing a classification model based on labeled time series; here, "labeled time series" means that the model is built using a training dataset with the correct classification of the observations or time series. The model is then used to predict new, unlabeled observations or time series. Prediction of heart failure risk is cast as a multidimensional time-series classification problem. TSC is an important and challenging problem in data mining.
With the increase of time-series data availability, hundreds of TSC algorithms have been proposed [15, 16]. A time-series classification method generally consists of a time-series feature representation and a machine learning classification algorithm. The methods used in this paper are the decision tree algorithm [17, 18], the gradient boosting machine algorithm [19, 20], the logistic regression algorithm [21], the Bayesian algorithm [22], SVM [23], random forest [24] and popular deep learning methods [25, 26].

Piecewise aggregate approximation (PAA)
Piecewise Aggregate Approximation was originally a time-series representation method proposed by Lin et al. [27]. It can significantly reduce the dimensionality of the data while maintaining a lower bound on the distance measure in Euclidean space. Assume that the original time series is \( C = \{x_1, x_2, \dots, x_N\} \); its PAA is the sequence \( \bar{C} = \{\bar{x}_1, \bar{x}_2, \dots, \bar{x}_w\} \), whose elements are given by Eq. 1. Figure 1 shows the PAA of a patient heart-rate time series in this article.

$$ \bar{x}_i = \frac{w}{N} \sum_{j=\frac{N}{w}(i-1)+1}^{\frac{N}{w} i} x_j \qquad (1) $$

Fig. 1: The PAA representation of time series data

Symbolic aggregate approximation (SAX)
Symbolic Aggregate Approximation [27] is a time-series representation with which Lin et al. extended the PAA-based method: the PAA coefficients of a time series are discretized into symbols, yielding a symbolic representation that retains the features of the series. Figure 2 shows the SAX representation of a patient's heart rate; the red line shows the data after aggregation with PAA, and each coefficient is assigned the letter associated with its region.

Fig. 2: The SAX representation of time series data
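As a concrete illustration of these two transforms, the following is a minimal NumPy/SciPy sketch of the PAA mapping of Eq. (1) and of the SAX discretization; the segment count w, the alphabet size and the equiprobable Gaussian breakpoints are illustrative choices here, not the settings used in this study.

```python
import numpy as np
from scipy.stats import norm

def paa(x, w):
    """Piecewise Aggregate Approximation (Eq. 1): mean of each of w equal segments."""
    x = np.asarray(x, dtype=float)
    usable = len(x) // w * w          # trim so the length is divisible by w
    return x[:usable].reshape(w, -1).mean(axis=1)

def sax(x, w, alphabet="abcd"):
    """SAX: z-normalize, apply PAA, then map each segment mean to a letter
    using Gaussian breakpoints, so each letter is roughly equiprobable."""
    x = (np.asarray(x, dtype=float) - np.mean(x)) / (np.std(x) + 1e-8)
    a = len(alphabet)
    breakpoints = norm.ppf(np.arange(1, a) / a)   # [-0.67, 0.0, 0.67] for a = 4
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa(x, w)))

# Toy heart-rate trace, one sample per second over three minutes
rng = np.random.default_rng(0)
hr = 75 + 5 * np.sin(np.linspace(0, 4 * np.pi, 180)) + rng.normal(0, 1, 180)
print(sax(hr, w=6))   # a 6-letter word such as 'cdbacd'
```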
Latent Dirichlet allocation (LDA)
Latent Dirichlet Allocation [28] was proposed by Blei et al. in 2003 to estimate the topic distribution of documents. It assigns each document in a collection a probability distribution over topics, so that by extracting the topic distributions of documents one can cluster the topics or classify the texts. See Eq. 2 and Fig. 3. Here k is the number of topics (fixed on initialization of the LDA model), M is the number of documents, and N is the number of words in a document, the document itself being represented by the bag-of-words vector w. The βk are the multinomial word distributions representing the topics and are drawn from a Dirichlet prior with parameter η. Similarly, the topic distribution θd is drawn from a Dirichlet prior with parameter α. The zij is the topic most likely to have generated wij, the j-th word in the i-th document. In this paper, the topic model is used to extract text features from the patients' vital-sign monitoring data. Specifically, the vital-sign time series are converted into symbols by SAX, the symbols are transformed into human-readable text using high-level semantic abstraction, and the LDA model is then used to extract per-patient text topics for heart failure prediction (see Section 3 for details).

$$ p(\theta, \mathbf{z} \mid \mathbf{w}, \alpha, \beta) = \frac{p(\theta, \mathbf{z}, \mathbf{w} \mid \alpha, \beta)}{p(\mathbf{w} \mid \alpha, \beta)} \qquad (2) $$

Fig. 3: The plate model representation of LDA

Grid representation for time series (GRTS)
The time-series grid representation is an algorithm for converting time-series data into images. It introduces an m × n grid structure to partition a time series: according to their time stamps and values, the points of the series are assigned to the corresponding rectangles. The grid is then compiled into a matrix in which each element is the number of points in the corresponding rectangle. The matrix form not only reflects the point-distribution characteristics of the sequence but also improves computational efficiency through sparse matrix operations; see [29] for the details of the algorithm. Figure 4 shows the conversion of a patient's heart rate, diastolic blood pressure, systolic blood pressure and pulse pressure difference time series into grid representations.

Fig. 4: Grid representation for time series

Convolutional neural network (CNN)
In recent years, deep learning (DL) models have achieved high recognition rates in computer vision [30, 31] and speech recognition [32]. A convolutional neural network is one of the most popular DL models. Unlike the traditional feature-based classification framework, a CNN does not require hand-crafted features: the feature learning and classification parts are integrated in one model and learned together, so their performances are mutually enhanced. Related CNN algorithms can be found in [33]. The two most essential components of a CNN are the convolution (Conv) layer and the pooling (Pool) layer. Figure 5a shows the convolution operation, which extracts image features by computing the inner product of the input image matrix and a kernel matrix. Figure 5b shows the pooling (sub-sampling) layer, which retains only part of the data after the convolution layer, reducing the number of features extracted by the convolution layer and refining the retained features. In this paper, a CNN is used to extract image features from the vital-sign monitoring data of surgical patients.

Fig. 5: (a) The convolution operation of a convolutional neural network. (b) The pooling operation of a convolutional neural network

Representation learning for heart failure risk prediction
This section demonstrates how the different time-series feature representations of vital signs recorded during surgery are used to predict the risk of postoperative heart failure with the techniques described above. A general overview of the workflow is given first (Fig. 6); each component is then described in more detail in an individual subsection.

Fig. 6: The overall workflow of the proposed method

The overall workflow of our presented method consists of three representation techniques for heart failure prediction, described in more detail in the following sections:
- Statistical representation of vital signs data: statistical analysis of the vital-sign monitoring data of surgical patients to extract features for heart failure prediction.
- Text representation of vital signs data: the vital-sign time series are first converted into symbols by PAA and SAX, the symbols are then transformed into human-readable text using high-level semantic abstraction, and the LDA model is finally used to extract per-patient text topics for heart failure prediction.
- Image representation of vital signs data: the vital-sign monitoring time series of each surgical patient are converted into a grid image using the grid representation, and a convolutional neural network is then used directly on the grid image for heart failure prediction.

Perioperative heart failure prediction is based only on the vital-sign monitoring data of intraoperative patients. The indicators include heart rate (HR/hr), systolic blood pressure (NISYSBP/nisysbp), diastolic blood pressure (NIDIASBP/nidiasbp), SpO2 (spo2) and pulse pressure difference (PP/pp). The learning window is defined as the duration of continuous monitoring during surgery, and the prediction window as the patient's perioperative period, as shown in Fig. 7.

Fig. 7: Learning and prediction diagram

Statistical representation of vital signs data
In order to capture the trends of the patient monitoring data along several statistical dimensions, the mean (mean), standard deviation (std), minimum (min), maximum (max), 25% (perc25), 50% (perc50) and 75% (perc75) quantiles, skewness (skew) and kurtosis (kurt) of each monitoring indicator were calculated, both on the raw series and on the derived variable given by its first-order difference (diff). In total, 90 statistical parameters were thus obtained as derived variables. The individual derived feature variables are shown in Table 1 and their calculation in Eq. 3 (a code sketch follows below); finally, a classifier is used on these features to predict heart failure. The feature-variable names in Table 1 join the relevant abbreviations with "_": for example, "mean_hr" is the mean of the heart rate (hr), "min_diff_hr" is the minimum of the first-order difference of the heart rate, and "perc25_nisysbp" is the 25% quantile of the systolic blood pressure.

Table 1: Overview of the non-invasive physiological parameters and related feature variables

$$ \mu = \frac{1}{T}\sum_{i=1}^{T} x_i, \qquad \sigma^2 = \frac{1}{T}\sum_{i=1}^{T} (x_i - \mu)^2 $$
$$ \mathrm{skewness}(X) = E\left[\left(\frac{X-\mu}{\sigma}\right)^{3}\right] = \frac{1}{T}\sum_{i=1}^{T}\frac{(x_i-\mu)^3}{\sigma^3}, \qquad \mathrm{kurtosis}(X) = E\left[\left(\frac{X-\mu}{\sigma}\right)^{4}\right] = \frac{1}{T}\sum_{i=1}^{T}\frac{(x_i-\mu)^4}{\sigma^4} $$
$$ Q_{25\%} = \frac{n+1}{4}, \qquad Q_{50\%} = \frac{2(n+1)}{4} = \frac{n+1}{2}, \qquad Q_{75\%} = \frac{3(n+1)}{4} \qquad (3) $$
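This feature construction can be carried out mechanically; below is a minimal sketch of it, where the preprocessing of the raw monitoring streams and the toy data are our assumptions, and the column names follow the naming convention of Table 1.

```python
import numpy as np
import pandas as pd

STATS = {
    "mean": np.mean, "std": np.std, "min": np.min, "max": np.max,
    "perc25": lambda v: np.percentile(v, 25),
    "perc50": lambda v: np.percentile(v, 50),
    "perc75": lambda v: np.percentile(v, 75),
    "skew": lambda v: pd.Series(v).skew(),
    "kurt": lambda v: pd.Series(v).kurt(),
}

def vital_sign_features(signals):
    """9 statistics on each raw series plus the same 9 on its first-order
    difference, for the 5 indicators: 9 x 2 x 5 = 90 features (Table 1)."""
    feats = {}
    for name, series in signals.items():               # e.g. 'hr', 'nisysbp', ...
        series = np.asarray(series, dtype=float)
        for variant, values in ((name, series), ("diff_" + name, np.diff(series))):
            for stat, fn in STATS.items():
                feats[stat + "_" + variant] = fn(values)   # e.g. 'min_diff_hr'
    return feats

# Toy monitoring streams for one patient (200 samples per indicator)
rng = np.random.default_rng(0)
signals = {"hr": rng.normal(75, 8, 200), "nisysbp": rng.normal(120, 10, 200),
           "nidiasbp": rng.normal(75, 8, 200), "spo2": rng.normal(98, 1, 200),
           "pp": rng.normal(45, 6, 200)}
X = pd.DataFrame([vital_sign_features(signals)])
print(X.shape)   # (1, 90)
```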
Text representation of vital signs data
The second method in this paper predicts heart failure from textual features of the patient monitoring data. The specific process is shown in Fig. 8 and includes the following steps (a code sketch of the pipeline follows this section):
Normalization: normalize the sign data to mean 0 and variance 1.
Segmentation: segment the patient vital-sign data using PAA.
Alphabetization of symbols: symbolize the patient vital-sign data using SAX.
Textualization: convert the alphabetized symbolic data into text using the rule engine.
Topic clustering: cluster the text data of all patients into topics using LDA.
Prediction: predict heart failure from the topic probability distribution of each patient.

Fig. 8: Prediction of heart failure risk based on text features

The advantage of textualization is that the results of the analysis are easier for humans to understand. Though the alphabetized symbols obtained from SAX pattern extraction give a representation of the shape of the data within a time frame, SAX strings are not intuitively understood and still have to be interpreted. Furthermore, by considering the statistics of the time frame in the abstraction process, we are able to represent more information in the text than just the shape. Therefore, we use a rule-based engine that uses the SAX patterns and the statistical information of the time frame to produce text that is understandable to humans. The general form of the rules is given in Eq. 4, where <pattern> is the SAX pattern, <l> is the level, <f> is the feature, <mod> is a modifier for the pattern movement and <pm> is the pattern movement. Eq. 5 shows the possible values that the individual output variables can take.

$$ \{<\mathrm{pattern}>\} = \{<l>\ <f>\ <\mathrm{mod}>\ <\mathrm{pm}>\} \qquad (4) $$

<l> = ['low', 'medium', 'high']
<f> = the feature names shown in Table 1
<mod> = ['slowly', 'rapidly', 'upward', 'downward']
<pm> = ['decreasing', 'increasing', 'steady', 'peak', 'varying']    (5)

The heart rate, diastolic blood pressure, systolic blood pressure, SpO2 and pulse pressure difference of the surgical patients are converted into text semantics in this way (see Fig. 9). The text topics of each patient are then extracted through LDA, and finally the risk of heart failure is predicted by a classifier.

Fig. 9: The text representation of vital signs data
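To make the textualization and topic-clustering steps concrete, here is a compressed sketch: a toy rule in the spirit of Eq. (4) that renders one SAX window as readable text (the study's full rule set is richer and also derives the level <l> from the window statistics), followed by topic extraction with scikit-learn's LDA. The 5-topic setting matches the experiment described later; everything else, including the toy documents, is illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def textualize(level, feature, sax_word):
    """Toy rule engine in the spirit of Eq. 4: describe the movement of one
    SAX window, e.g. 'high hr rapidly increasing'."""
    letters = list(sax_word)
    if letters == sorted(letters) and letters[0] != letters[-1]:
        mod, pm = "rapidly", "increasing"
    elif letters == sorted(letters, reverse=True) and letters[0] != letters[-1]:
        mod, pm = "rapidly", "decreasing"
    else:
        mod, pm = "slowly", "varying"
    return f"{level} {feature} {mod} {pm}"

# One pseudo-document per patient: concatenated window descriptions
docs = ["high hr rapidly increasing low spo2 slowly varying",
        "medium nisysbp rapidly decreasing medium hr slowly varying",
        "low pp slowly varying high hr rapidly increasing"]
tf = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(tf)
theta = lda.transform(tf)    # per-patient topic distribution; each row sums to 1
# theta (n_patients x 5) is the feature matrix handed to the classifiers
print(textualize("high", "hr", "abc"), theta.shape)
```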
The converted time-series grid image (see Fig. 4) is fused at the channel level as input to the convolutional neural network for heart failure prediction. The data used in this paper is from the Department of Anesthesiology, Southwest Hospital. All data were gathered from the surgical patients from June 2018 to October 2018. A total of 14,449 operations include 99 cases of postoperative heart failure, 46 cases of liver failure, 61 cases of death, renal failure 54,49 cases of respiratory failure and 31 cases of sepsis. The remaining is uncomplicated patients. 15 out of 99 patients with heart failure had incomplete monitoring data. These patients were removed from the experiment and the remaining 84 patients were positive. 168 cases of negative data were randomly selected from the normal data set for the experiment. The training set is 80% and testing set is 20%, we used 10-fold cross validation in the experiment. Particularly, we divided the training set into training set (9 sets) and validation set (1 set), then used the test set to evaluate our model. The data screening diagram is as Fig. 11. The data screening diagram Experiments based on statistical representation The statistical features have a total of 90 variables, and the data has to be selected before prediction. In order to reduce calculation complexity, features with lower importance should be removed. In this paper, the correlation was analyzed that calculating the Pearson CorrelationCoefficient of each feature, then the features with importance of 0 were removed. Figure 12 shows the correlation of each feature, in which the regions with dark color tend to have a strong correlation and vice versa. The correlation of each feature Models were built from these statistical features using 8 different classifiers: Adaboost, Decision Tree (DT), Support Vector Machine (SVM), Logistic regression (LR), naive Bayes (NB), Random forest (RF), Multiple perception machine (MLP), Gradient Boosting Decision Tree (GBDT). Because the sklearn library of python includes these machine learning methods, we used the sklearn library to build these models. The core principle of AdaBoost is to fit a sequence of weak learners (i.e., small decision trees) on repeatedly modified versions of the data. All the predictions are then combined by weighted majority voting (or summation) to produce the final prediction. The data modification for each so-called boosting iteration involves applying weights to each of the training sample. The parameter of Adaboost was: n_estimators is 100. Decision Tree is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features, where "DecisionTreeClassifier" of scikit-learn is a class capable of performing multi-class classification on a dataset. The parameters of DT were: criterion is "gini", min_samples_split is 2, min_samples_leaf is 1, min_weight_fraction_leaf is 0.0. SVM is a set of supervised learning methods used for classification, regression and outliers detection. SVM in scikit-learn supports both dense ("numpy.ndarray" and convertible to that by "numpy.asarray") and sparse (any "scipy.sparse") sample vectors as input. The parameter of SVM was: kernel is "rbf". In the model of Logistic regression, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function. Logistic regression is implemented in LogisticRegression. This implementation can fit binary, One-vs-Rest, or multinomial logistic regression with l2. 
Naive Bayes methods are a set of supervised learning algorithms based on Bayes theorem, whose "naive" assumption is the conditional independence between each pair of features of a given class variable value. Random forests achieve a reduced variance by combining diverse trees, sometimes at the cost of a slight increase in bias. In practice the variance reduction is often significant hence yielding an overall better model. In RF, each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. Furthermore, when splitting each node during the construction of a tree, the best split is found either from all input features or a random subset of size max_features. The parameter of RF was: n_estimators is 100. The MLP is a supervised learning algorithm that learns a function f(·) : Rm → Ro by training on a dataset, where m is the number of dimensions for input and o is the number of dimensions for output. Given a set of features X= x1, x2, x1, …xm and a target y, it can learn a non-linear function approximator for either classification or regression. It is different from logistic regression, in that between the input and the output layer, there can be one or more non-linear layers, called hidden layers. The parameter of MLP was: hidden_layer_sizes is (5, 2). The GBDT is a generalization of boosting to arbitrary differentiable loss functions. GBDT is an accurate and effective off-the-shelf procedure that can be used for both regression and classification problems. The module "sklearn.ensemble" provides methods for both classification and regression via gradient boosted regression trees. The parameter of the GBDT was: n_estimators is 200. The other parameters of these models were the default parameters, see the Appendix for details. The results are shown in Table 2, and the Receiver Operating Characteristic (ROC) is shown in Fig. 13. Table 2 Sensitivity (TPR), specificity (TNR), F1 score, accuracy (ACC) of various classifiers The ROC curve of 8 classifiers based on Statistical Representation Experiments based on text representation Figure 9 provides a general overview of our experimental process. First, we convert the patient's vital signs monitoring data for 3 min into alphabetic symbols and convert consecutive 3 alphabetic symbols to text based on the rule engine. The LDA was used to unsupervised cluster all patient's text representation into 5 topics. We chose 5 topics after varying the number from 2 to 10, because it was noted that validation set accuracy did not improve after 5, so that each patient's vital signs monitoring data is represented by a 5-dimensional vector, summing to 1. Finally, we performed heart failure prediction based on the representation of the topic probability distribution using the same classifier and parameters as the Statistical Representation. The experimental results are shown in Table 2, and the ROC curve of the experiment is shown in Fig. 14. The ROC curve of 8 classifiers based on Text Representation Experiments based on image representation In this experiment, we first convert the patient's heart rate, diastolic blood pressure, systolic blood pressure, spo2, and pulse pressure difference into the grid image, and fuse the five images in the channel layer as input to the convolutional neural network (see the network structure designed in the previous section. See Fig. 11) to extract image features. Finally, heart failure is classified by softmax. 
Experiments based on image representation

In this experiment, we first convert the patient's heart rate, diastolic blood pressure, systolic blood pressure, SpO2, and pulse pressure difference into grid images, and fuse the five images at the channel level as input to the convolutional neural network (see the network structure designed in the previous section, Fig. 11) to extract image features. Finally, heart failure is classified by softmax.

$$ \left(5, L, 1\right) \Rightarrow \left(5, m, n\right) $$

See Formula 6, where L is the length of the monitoring time-series data, and (m, n) are the width and length of the grid image; each converted image thus has an associated length and width. The five grid maps of each patient are simultaneously input into the convolutional neural network for heart failure recognition. The experimental results are shown in Table 2, and the ROC curve of the experiment is shown in Fig. 15. Figures 16 and 17 show the loss and accuracy of training and validation of the convolutional neural network.

The ROC curve of the CNN based on the image representation

The loss of training and validation of the convolutional neural network

The accuracy of training and validation of the convolutional neural network
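The conversion and channel-level fusion can be sketched as follows. The network here is a generic placeholder rather than the exact architecture designed in the paper, and the dimensions (L = 120, m = 10, n = 12) are illustrative assumptions only.

import torch
import torch.nn as nn

L, m, n = 120, 10, 12                    # illustrative sizes with L = m * n
signals = torch.randn(5, L)              # hr, sysbp, diasbp, spo2, pp (synthetic)
grids = signals.reshape(5, m, n)         # (5, L, 1) => (5, m, n), as in Formula 6
x = grids.unsqueeze(0)                   # batch of one patient: (1, 5, m, n)

cnn = nn.Sequential(
    nn.Conv2d(5, 16, kernel_size=3, padding=1),  # five fused input channels
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * m * n, 2),            # two classes: heart failure or not
)
probs = torch.softmax(cnn(x), dim=1)     # softmax classification
print(probs)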
Predictive results of the various feature representations are presented in Table 2. These results demonstrate that the GBDT classifier achieves the best results in predicting heart failure from the statistical feature representation, with sensitivity, specificity and accuracy of 83%, 85% and 84%, respectively. The NB classifier achieves the best results in predicting heart failure from the text feature representation, with sensitivity, specificity and accuracy of 84%, 73% and 79%, respectively. The sensitivity, specificity and accuracy of the convolutional neural network prediction based on the image feature representation reached 89%, 78% and 89%, respectively. It can be seen from the ROC curves (Figs. 13, 14 and 15) that the AUC values of the three feature-representation approaches are 0.92, 0.82 and 0.83, respectively. Therefore, from the overall results, the patient's intraoperative vital-signs monitoring data are able to capture precursory information about heart failure during the perioperative period.

Among the three feature representations, the method based on statistical representations achieves the best results. This is because we did extensive feature engineering before the model prediction, removing the low-importance features and retaining only the relevant ones. In addition, the total sample size of the experiment is only 252 cases (positive: 84, negative: 168). On small samples, methods based on traditional feature engineering can achieve better classification results, whereas the text and image feature representations based on LDA and the convolutional neural network are likely to underfit on such a small training set. There should therefore be considerable room to improve the experimental results.

Heart failure in the perioperative period is one of the most significant causes of postoperative death. At present, because the valuable diagnostic indices of heart failure have a lagged effect, they are often used only for differential diagnosis after adverse events have occurred and are difficult to use for early diagnosis and prediction; the early clinical diagnosis of heart failure events therefore still relies on the clinical experience of anesthesiologists and physicians. There is thus a lack of early intraoperative prediction techniques for perioperative adverse cardiac events. Previous studies have shown that direct intraoperative monitoring data have early-diagnosis and early-warning value after preprocessing and time-series analysis. However, as far as we know, intraoperative monitoring data have not previously been used directly for perioperative heart failure risk prediction. Thus, ours is the first study to predict perioperative heart failure using only intraoperative monitoring of vital signs.

At present, much of the literature on heart failure prediction and diagnosis has focused on using ECG data and biomarkers as input to a classifier. Because heart failure prediction is more difficult than diagnosis, diagnostic methods usually achieve better performance, such as an AUC of 0.883 (Choi et al. [7]) or a classification accuracy of 96.61% (Chen et al. [11]), while prediction methods usually achieve poorer performance, such as a sensitivity of 0.42 (Petersen et al. [14]), a predicted AUC of 0.82 (Koulaouzidis [8]), a predicted AUC of 0.78 (Shameer et al. [9]), and a prediction accuracy of 78.4% (Zheng et al. [10]). Our work differs in that we consider only intraoperative monitoring of vital signs to predict the risk of heart failure, and the sensitivity, specificity and accuracy of the best method reach 83%, 85% and 84%, respectively. This demonstrates that intraoperative vital-signs monitoring data alone can predict the risk of heart failure to a large extent and with high accuracy, and shows the valuable potential of intraoperative vital-signs monitoring for saving the lives of heart failure patients.

There are several limitations of this body of work. Firstly, the prediction methods based on text and image features are ineffective because of the small number of experimental samples. Secondly, the model proposed in this paper cannot clearly determine the specific correlation between intraoperative vital-signs monitoring data and heart failure. Future directions for this work should include new models to clarify this correlation, and we could also improve the prediction quality of our model with additional features, such as relevant preoperative examination indicators. In the future, we hope that such methods will provide medical staff with support to improve decision making for surgeons.

In this work, we proposed three machine learning methods, based on statistical, text and image representation learning, to process vital-signs monitoring data (heart rate, systolic pressure, diastolic pressure, blood oxygen saturation and pulse pressure) for estimating the risk of heart failure. The methods were evaluated on monitoring data of perioperative patients in the Department of Anesthesiology, Southwest Hospital. The results of our experiments demonstrate that representation learning on the vital-signs monitoring data of intraoperative patients can capture the physiological characteristics of heart failure in the perioperative period. Additionally, these results show that the GBDT classifier achieves the best results in predicting heart failure from statistical characteristics, with sensitivity, specificity and accuracy of 83%, 85% and 84%, respectively. We can therefore conclude that the patient's intraoperative vital-signs monitoring data are able to capture precursor information about heart failure in the perioperative period, which is important for reducing the risk of heart failure and improving patient safety. Furthermore, this paper shows the valuable potential of using the vital-signs monitoring data of intraoperative patients for risk prediction of perioperative adverse cardiac events in modern medical diagnosis and treatment.
The raw data required to reproduce these findings cannot be shared at this time, as the data also form part of an ongoing study.

Abbreviations

AUC: Area under the curve; CNN: Convolutional neural network; Conv: Convolution; diff: Difference; DL: Deep learning; DT: Decision Tree; ECG: Electrocardiograph; GBDT: Gradient Boosting Decision Tree; GRTS: Grid representation of time series; HR/hr.: Heart rate; kurt: Kurtosis; LDA: Latent Dirichlet Allocation; LR: Logistic regression; MLP: Multilayer Perceptron; NIDIASBP/nidiasbp: Non-invasive diastolic blood pressure; NISYSBP/nisysbp: Non-invasive systolic blood pressure; NYHA: New York Heart Association; PAA: Piecewise Approximate Aggregation; perc25: 25th percentile; PP/pp.: Pulse pressure difference; ROC: Receiver Operating Characteristic curve; SAX: Symbolic Aggregate Approximation; skew: Skewness; std.: Standard deviation; SVM: Support Vector Machine; TNR: True negative rate (specificity); TPR: True positive rate (sensitivity); TSC: Time Series Classification

References

1. Thuraisingham RA. A classification system to detect congestive heart failure using second-order difference plot of RR intervals. Cardiol Res Pract. 2009;2009:807379.
2. Isler Y, Kuntalp M. Combining classical HRV indices with wavelet entropy measures improves to performance in diagnosing congestive heart failure. Comput Biol Med. 2007;37(10):1502–10.
3. Yu SN, Lee MY. Conditional mutual information-based feature selection for congestive heart failure recognition using heart rate variability. Comput Methods Prog Biomed. 2012;108(1):299–309.
4. Masetic Z, Subasi A. Congestive heart failure detection using random forest classifier. Comput Methods Prog Biomed. 2016;130:54–64.
5. Melillo P, Fusco R, Sansone M, Bracale M, Pecchia L. Discrimination power of long-term heart rate variability measures for chronic heart failure detection. Med Biol Eng Comput. 2011;49(1):67–74.
6. Pecchia L, Melillo P, Sansone M, Bracale M. Discrimination power of short-term heart rate variability measures for CHF assessment. IEEE Trans Inf Technol Biomed. 2011;15(1):40–6.
7. Choi E, Schuetz A, Stewart WF, Sun J. Using recurrent neural network models for early detection of heart failure onset. J Am Med Inform Assoc. 2017;24(2):361–70.
8. Koulaouzidis G, Iakovidis DK, Clark AL. Telemonitoring predicts in advance heart failure admissions. Int J Cardiol. 2016;216:78–84.
9. Shameer K, Johnson KW, Yahi A, Miotto R, Li L, Ricks D, Jebakaran J, Kovatch P, Sengupta PP, Gelijns S, et al. Predictive modeling of hospital readmission rates using electronic medical record-wide machine learning: a case study using Mount Sinai heart failure cohort. Pac Symp Biocomput. 2016;22:276–87.
10. Zheng B, Zhang J, Yoon SW, Lam SS, Khasawneh M, Poranki S. Predictive modeling of hospital readmissions using metaheuristics and data mining. Expert Syst Appl. 2015;42(20):7110–20.
11. Chen W, Zheng L, Li K, Wang Q, Liu G, Jiang Q. A novel and effective method for congestive heart failure detection and quantification using dynamic heart rate variability measurement. PLoS One. 2016;11(11):e0165304.
12. Churpek MM, Yuen TC, Park SY, Meltzer DO, Hall JB, Edelson DP. Derivation of a cardiac arrest prediction model using ward vital signs. Crit Care Med. 2012;40(7):2102–8.
13. Fox A, Elliott N. Early warning scores: a sign of deterioration in patients and systems.
14. Petersen JA, Antonsen K, Rasmussen LS. Frequency of early warning score assessment and clinical deterioration in hospitalized patients: a randomized trial. Resuscitation. 2016;101:91–6.
15. Geurts P. Pattern extraction for time series classification. In: Principles of Data Mining and Knowledge Discovery. Berlin, Heidelberg: Springer; 2001. p. 115–27.
16. Wei WW. Time series analysis. In: The Oxford Handbook of Quantitative Methods in Psychology, Vol. 2; 2006.
17. Rokach L, Maimon O. Data Mining with Decision Trees: Theory and Applications. World Scientific; 2007.
18. Quinlan JR. Induction of decision trees. Mach Learn. 1986;1(1):81–106.
19. Ye J, Chow J-H, Chen J, Zheng Z. Stochastic gradient boosted distributed decision trees. In: Proceedings of the 18th ACM Conference on Information and Knowledge Management. ACM; 2009. p. 2061–4.
20. Friedman JH. Stochastic gradient boosting. Comput Stat Data Anal. 2002;38(4):367–78.
21. Pregibon D. Logistic regression diagnostics. Ann Stat. 1981;9(4):705–24.
22. Pelikan M, Goldberg DE, Cantú-Paz E. BOA: the Bayesian optimization algorithm. In: Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation, Vol. 1. Morgan Kaufmann; 1999. p. 525–32.
23. Joachims T. Making large-scale SVM learning practical. Technical report, SFB 475: Komplexitätsreduktion in Multivariaten …; 1998.
24. Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
25. Ismail Fawaz H, Forestier G, Weber J, Idoumghar L, Muller P-A. Deep learning for time series classification: a review. Data Min Knowl Disc. 2019;33(4):917–63.
26. Chen Y, Sun QL, Zhong K. Semi-supervised spatio-temporal CNN for recognition of surgical workflow. EURASIP J Image Video Process. 2018;2018(1):76.
27. Lin J, Keogh E, Lonardi S, Chiu B. A symbolic representation of time series, with implications for streaming algorithms. In: Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery. ACM; 2003. p. 2–11.
28. Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. J Mach Learn Res. 2003;3:993–1022.
29. Ye Y, Jiang J, Ge B, Dou Y, Yang K. Similarity measures for time series data classification using grid representation and matrix distance. Knowl Inf Syst. 2019;60(2):1105–34.
30. Karpathy A, Toderici G, Shetty S, Leung T, Sukthankar R, Fei-Fei L. Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2014. p. 1725–32.
31. Liu S, Deng W. Very deep convolutional neural network based image classification using small training sample size. In: 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR); 2015. p. 730–4.
32. Deng L, Li J, Huang J, Yao K, Yu D, Seide F, Seltzer M, Zweig G, He X, Williams J, et al. Recent advances in deep learning for speech research at Microsoft. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing; 2013. p. 8604–8.
33. Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. In: European Conference on Computer Vision; 2014. p. 818–33.

This work is supported by the National Key Research & Development Plan of China (2018YFC0116704), which provided environmental and financial support for data collection. In addition, it is supported by the Chongqing Technology Innovation and Application Research and Development Project (cstc2019jscx-msxmx0237), which provided technical consultation and financial support for model design and experiments.

Yuwen Chen and Baolian Qi contributed equally to this work.

Author affiliations: Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu, China (Yuwen Chen & Baolian Qi); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China; University of Chinese Academy of Sciences, Beijing, China.

YC and BQ contributed equally to this study. YC conceived the study and performed the experiments. YC and BQ wrote the paper and drafted or substantively revised the work. YC reviewed and edited the manuscript.
All authors read and approved the manuscript.

Correspondence to Yuwen Chen.

This study was approved by the Ethics Committee of the First Affiliated Hospital of Army Medical University, PLA (ethics approval No. KY201936). All participants voluntarily took part in the study and signed informed consent.

Table 3 The model parameters

Chen Y, Qi B. Representation learning in intraoperative vital signs for heart failure risk prediction. BMC Med Inform Decis Mak. 2019;19:260. doi:10.1186/s12911-019-0978-6

Keywords: Perioperative period
Jonathan Ramkissoon

Bayesian Changepoint Detection of COVID-19 Cases in Pyro

With the current global pandemic and its associated resources (data, analyses, etc.), I've been trying for some time to come up with an interesting COVID-19 problem to attack with statistics. After looking at the number of confirmed cases for some countries, it was clear that at some date, the number of new cases stopped being exponential and its distribution changed. However, this date was different for each country (obviously). This post introduces and discusses a Bayesian model for estimating the date that the distribution of new COVID-19 cases in a particular country changes.

An important reminder before we get into it is that all models are wrong, but some are useful. This model is useful for estimating the date of change, not for predicting what will happen with COVID-19. It should not be mistaken for an amazing epidemiology model that will tell us when the quarantine will end, but instead a way of describing what we have already observed with probability distributions. All the code for this post can be found here.

We want to describe $y$, log of the number of new COVID-19 cases in a particular country each day, as a function of $t$, the number of days since the virus started in that country. We'll do this using a segmented regression model. The point at which we segment will be determined by a learned parameter, $\tau$. This model is written below:

Likelihood: \[\begin{equation*} \begin{split} y = wt + b + \epsilon \end{split} \text{, } \qquad \qquad \begin{split} \epsilon \sim N(0, \sigma^2) \\[10pt] p(y \mid w, b, \sigma) \sim N(wt + b, \sigma^2) \end{split} \\[15pt] \end{equation*}\] \[\begin{equation*} \begin{split} \text{Where: } \qquad \qquad \end{split} \begin{split} w &= \begin{cases} w_1 & \text{if } t \le \tau \\ w_2 & \text{if } t \gt \tau \\ \end{cases} \\ b &= \begin{cases} b_1 & \text{if } t \le \tau \\ b_2 & \text{if } t \gt \tau \\ \end{cases} \end{split} \\[10pt] \end{equation*}\] Priors: \[\begin{equation*} w_1 \sim N(\mu_{w_1}, \sigma_{w_1}^2) \qquad \qquad w_2 \sim N(\mu_{w_2}, \sigma_{w_2}^2) \\[10pt] b_1 \sim N(\mu_{b_1}, \sigma_{b_1}^2) \qquad \qquad b_2 \sim N(\mu_{b_2}, \sigma_{b_2}^2) \\[10pt] \tau \sim Beta(\alpha, \beta) \qquad \qquad \sigma \sim U(0, 3) \end{equation*}\]

In other words, $y$ will be modeled as $w_1t + b_1$ for days up until day $\tau$. After that it will be modeled as $w_2t + b_2$. The model was written in Pyro, a probabilistic programming language built on PyTorch. Chunks of the code are included in this post, but the majority of the code is in this notebook.

The data used was downloaded from Kaggle. Available to us is the number of daily confirmed cases in each country, and Figure 1 shows this data for Italy. It is clear that there are some inconsistencies in how the data is reported; for example, in Italy there are no new confirmed cases on March 12th, but nearly double the expected cases on March 13th. In cases like this, the data was split between the two days. The virus also starts at different times in different countries. Because we have a regression model, it is inappropriate to include data prior to the virus being in a particular country. This date is chosen by hand for each country based on the progression of new cases, and is never the date the first patient is recorded. The "start" date is better interpreted as the date the virus started to consistently grow, as opposed to the date patient 0 was recorded.
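A rough sketch of this preprocessing is below. The file and column names are assumptions about the Kaggle dataset's layout, and the start date is a placeholder for the hand-picked one.

import numpy as np
import pandas as pd

# Assumed layout of the Kaggle CSV: one row per country/date, with a
# cumulative "Confirmed" column.
df = pd.read_csv("covid_19_data.csv", parse_dates=["ObservationDate"])
italy = (df[df["Country/Region"] == "Italy"]
         .groupby("ObservationDate")["Confirmed"].sum()
         .sort_index())

daily = italy.diff().fillna(italy.iloc[0])  # daily new cases from cumulative totals

# Split a double-counted day evenly with the preceding zero-report day,
# as with Italy's March 12th/13th.
for d in daily.index[daily == 0]:
    nxt = d + pd.Timedelta(days=1)
    if nxt in daily.index:
        daily[d] = daily[nxt] / 2.0
        daily[nxt] = daily[nxt] / 2.0

start = "2020-02-22"  # placeholder for the hand-picked start date
y = np.log(daily[daily.index >= start].clip(lower=1.0))
t = np.arange(len(y))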
Total confirmed COVID-19 cases in Italy on the left and daily cases on the right, from January 1st to March 15th 2020

Prior Specification

Virus growth is sensitive to the population dynamics of individual countries, and we are limited in the amount of data available, so it is important to supplement the model with appropriate priors. Starting with $w_1$ and $w_2$, these parameters can be loosely interpreted as the growth rate of the virus before and after the date change. We know that the growth will be positive in the beginning and is not likely to be larger than $1$. With these assumptions, $w_1 \sim N(0.5, 0.25)$ is a suitable prior. We'll use similar logic for $p(w_2)$, but will have to keep flexibility in mind. Without a flexible enough prior here, the model won't do well in cases where there is no real change point in the data. In these cases, $w_2 \approx w_1$, and we'll see an example of this in the Results section. For now, we want $p(w_2)$ to be symmetric about $0$, with the majority of values lying between $(-0.5, 0.5)$. We'll use $w_2 \sim N(0, 0.25)$.

Next are the bias terms, $b_1$ and $b_2$. Priors for these parameters are especially sensitive to country characteristics. Countries that are more exposed to COVID-19 (for whatever reason) will have more confirmed cases at the peak than countries that are less exposed. This directly affects the posterior distribution of $b_2$ (the bias term for the second regression). In order to automatically adapt this parameter to different countries, we use the mean of the first and fourth quartiles of $y$ as $\mu_{b_1}$ and $\mu_{b_2}$ respectively. The standard deviation for $b_1$ is taken as $1$, which makes $p(b_1)$ a relatively flat prior. The standard deviation of $p(b_2)$ is taken as $\frac{\mu_{b_2}}{4}$ so that the prior scales with larger values of $\mu_{b_2}$.

\[b_1 \sim N(\mu_{q_1}, 1) \qquad \qquad b_2 \sim N(\mu_{q_4}, \frac{\mu_{q_4}}{4}) \notag\]

As for $\tau$, since at this time we don't have access to all the data (the virus is ongoing), we're unable to use a completely flat prior and have the model estimate it. Instead, the assumption is made that the change is more likely to occur in the second half of the date range at hand, so we use $\tau \sim Beta(4, 3)$.

import numpy as np
import torch
import torch.nn as nn
import pyro
import pyro.distributions as dist
from pyro.nn import PyroModule, PyroSample

class COVID_change(PyroModule):
    def __init__(self, in_features, out_features, b1_mu, b2_mu):
        super().__init__()
        # first regression line, for days before the change point
        self.linear1 = PyroModule[nn.Linear](in_features, out_features, bias = False)
        self.linear1.weight = PyroSample(dist.Normal(0.5, 0.25).expand([1, 1]).to_event(1))
        self.linear1.bias = PyroSample(dist.Normal(b1_mu, 1.))
        # second regression line, for days after the change point
        self.linear2 = PyroModule[nn.Linear](in_features, out_features, bias = False)
        self.linear2.weight = PyroSample(dist.Normal(0., 0.25).expand([1, 1]))  # .to_event(1)
        self.linear2.bias = PyroSample(dist.Normal(b2_mu, b2_mu/4))

    def forward(self, x, y=None):
        tau = pyro.sample("tau", dist.Beta(4, 3))
        sigma = pyro.sample("sigma", dist.Uniform(0., 3.))
        # fit the two regression lines to the data, split at tau
        sep = int(np.ceil(tau.detach().numpy() * len(x)))
        mean1 = self.linear1(x[:sep]).squeeze(-1)
        mean2 = self.linear2(x[sep:]).squeeze(-1)
        mean = torch.cat((mean1, mean2))
        obs = pyro.sample("obs", dist.Normal(mean, sigma), obs=y)
        return mean

Hamiltonian Monte Carlo is used for posterior sampling. The code for this is shown below.
from pyro.infer import MCMC, NUTS

model = COVID_change(1, 1, b1_mu = bias_1_mean, b2_mu = bias_2_mean)
num_samples = 800
# mcmc
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel,
            num_samples = num_samples,
            warmup_steps = 100,
            num_chains = 4)
mcmc.run(x_data, y_data)
samples = mcmc.get_samples()

Since I live in Canada and have exposure to the dates precautions started, modeling will start here. We'll use February 27th as the date the virus "started".

\[w_1, w_2 \sim N(0, 0.5) \qquad b_1 \sim N(1.1, 1) \qquad b_2 \sim N(7.2, 1) \notag\]

Posterior Distributions

Posterior distributions for each parameter in our model using Canada's COVID-19 data. Notice that the posteriors for $w_1$ and $w_2$ don't overlap

Starting with the posteriors for $w_1$ and $w_2$: if there were no change in the data, we would expect to see these two distributions close to each other, as they govern the growth rate of the virus. It is a good sign that these distributions, along with the posteriors for $b_1$ and $b_2$, don't overlap. This is evidence that the change point estimated by our model is real. This change point was estimated as: 2020-03-28.

As a side note, with no science attached, my company issued a mandatory work from home policy on March 16th. Around this date is when most companies in Toronto would have issued mandatory work from home policies where applicable. Assuming the reported incubation period of the virus is up to 14 days, this estimated date change makes sense, as it is 12 days after widespread social distancing measures began!

The model fit along with 95% credible interval bands can be seen in the plot below. On the left is the log of the number of daily cases, which is what we used to fit the model, and on the right is the true number of daily cases. It is very difficult to visually determine a change point by simply looking at the number of daily cases, and even more difficult by looking at the total number of confirmed cases.

Left: log(daily confirmed cases) with the estimated date that the curve started to flatten (March 28th) and the 90% credible interval. Right: Raw data for the daily cases each day, along with a 90% credible interval for the day the curve started to flatten

Assessing Convergence

When running these experiments, the most important step is to diagnose the MCMC for convergence. I adopt 3 ways of assessing convergence for this model: observing mixing and stationarity of the chains, and $\hat{R}$. $\hat{R}$ is the factor by which the scale of each posterior distribution will be reduced as the number of samples tends to infinity. A perfect $\hat{R}$ value is 1, and values less than $1.1$ are indicative of convergence. We observe mixing and stationarity of the Markov chains in order to know whether the HMC is producing appropriate posterior samples. Below are trace plots for each parameter. Each chain is stationary and mixes well. Additionally, all $\hat{R}$ values are less than $1.1$.

Trace plots and $\hat{R}$ values for all posterior samples, plotted for MCMC diagnostics.

After convergence, the last thing to check before moving on to other examples is how appropriate the model is for the data. Is it consistent with the assumptions made earlier? To test this we'll use a residual plot and a QQ-plot, as shown below. I've outlined the estimated change point in order to compare residuals before and after the change to test for homoscedasticity. The residuals follow a Normal distribution with zero mean and show no dependence on time, before and after the date of change.

Residual and QQ-plots validating our error assumption.
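The reported change date comes from mapping the posterior draws of $\tau$ back onto the calendar. A small sketch of this step, assuming dates is the pandas DatetimeIndex of the modeled series and samples comes from mcmc.get_samples() above:

import numpy as np

tau_samples = samples["tau"].numpy()
idx = (tau_samples * len(dates)).astype(int)      # change index implied by each draw

change_date = dates[int(np.median(idx))]          # point estimate of the change date
lo, hi = np.percentile(idx, [5, 95]).astype(int)  # 90% credible interval
print(change_date, dates[lo], dates[hi])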
What About no Change?

To test the model's robustness to a country that has not begun to flatten the curve, we'll look at data from Canada up until March 28th. This is the day the model estimated curve flattening began in Canada. Just because there isn't a true change date doesn't mean the model will output "No change". We'll have to use the posterior distributions to reason that the change date provided by the model is inappropriate, and consequently that there is no change in the data.

\[w_1, w_2 \sim N(0, 0.5) \qquad b_1 \sim N(0.9, 1) \qquad b_2 \sim N(6.4, 1.6) \notag\]

Posterior plots for parameters after selecting a date range where the COVID-19 curve has not begun to flatten. Notice the distributions for $w_1$ and $w_2$ overlap

The posteriors for $w_1$ and $w_2$ have significant overlap, indicating that the growth rate of the virus hasn't changed significantly. The posteriors for $b_1$ and $b_2$ are also overlapping. These show that the model is struggling to estimate a reasonable $\tau$, which is good validation for us that the priors aren't too strong. Although we have already concluded that there is no change date for this data, we'll still plot the model out of curiosity. Similar to the previous example, the MCMC has converged. The trace plots below show sufficient mixing and stationarity of the chains, and most $\hat{R}$ values are less than $1.1$.

Next Steps and Open Questions

This model is able to describe the data well enough to produce a reliable estimate of the day flattening of the curve started. An interesting byproduct of this is the coefficient of the 2nd regression line, $w_2$. By calculating $w_2$ and $b_2$ for different countries, we can compare how effective their social distancing measures were. The logical next modeling step would be to fit a hierarchical model in order to use partial pooling of data between countries. Thank you for reading, and definitely reach out to me by e-mail or other means if you have suggestions or recommendations, or even just to chat!
Rationalizability of menu preferences

Christopher J. Tyson (ORCID: orcid.org/0000-0002-4546-7136)

Economic Theory volume 65, pages 917–934 (2018)

The class of preferences over opportunity sets ("menus") rationalizable by underlying preferences over the alternatives is characterized for the general case in which the dataset is unrestricted. In particular, both the universal set of alternatives and the domain of menus over which preferences are asserted by the decision maker are arbitrary. The key "Cover Dominance" axiom states that any menu strictly preferred to a collection of menus must be strictly preferred to any menu covered by the collection. The method of characterization relies upon transitivity of menu preferences, but completeness can be relaxed.

The rationalizability question

This paper studies the question of when observed preferences over opportunity sets ("menus") can be rationalized by underlying preferences over the alternatives they contain ("meals"). In the simplest environment, with finitely many options and a weak preference asserted by the decision maker between each pair of subsets of the universal menu, conditions for rationalizability were given by Kreps (1979, pp. 565–566) as a benchmark for his axiomatization of "preference for flexibility." Yet despite a large subsequent literature that incorporates into menu preferences various other tastes and influences on behavior, the rationalizability question remains unanswered in the general case.Footnote 1

We shall generalize the environment in Kreps (1979) along both dimensions mentioned in the previous paragraph. Firstly, any nonempty universal set of alternatives will be permitted, whether finite or (countably or uncountably) infinite. This will allow our framework to accommodate the many economic contexts in which finiteness is not a natural assumption, such as choice among consumption bundles, production plans, lotteries, and asset allocations.Footnote 2 Secondly, the domain of menus over which preferences are asserted will be permitted to be any nonempty subset of the set of conceivable choice problems. This will make our findings applicable to arbitrary datasets, including those that arise from observational data or laboratory experiments; or from structured settings where all menus are, for example, budget or production sets.

At the heart of the rationalizability question is the need to translate meal preferences into menu preferences and vice versa. On the one hand, any meal-preference relation induces a menu-preference relation via a simple rule (see Definition 1): One menu is weakly preferred to another if each meal on the second menu is weakly inferior to some meal on the first menu. This relationship, which formalizes the concept of a rationalization, expresses the intuition that a menu is as good as the best meal it contains.

Moving in the opposite direction is trivial in full-domain environments, where meal preferences coincide with the observed preferences over singleton menus. But in our general framework, singleton menus need not be included in the domain, and therefore a more reliable notion of revealed meal preference is required. We propose the following conception (see Definition 3), which in a sense reverses the rule in the previous paragraph: One meal is weakly preferred to another if each menu containing the first meal is weakly superior to some menu containing the second meal.
This revealed relation captures the intuition that a meal is as bad as the worst menu containing it, and will be used to replicate the observed menu preferences in proving our results. Axioms for rationalizability In the finite, full-domain environment, Kreps (1979) characterizes the class of menu preferences rationalizable by complete and transitive meal preferences. The first of his two axioms, which we label Menu Order, states simply that the observed menu preferences are themselves complete and transitive. This is a straightforward consequence of the ordering properties imposed on the rationalizing meal-preference relation, and the argument does not depend on finiteness or the full-domain assumption (see Corollary 1). Hence, we inherit Menu Order from Kreps as a necessary condition. The second axiom used by Kreps (1979, p. 566), which we label Kreps Consistency, states that the more preferred of any two menus is indifferent to their union. This condition is clearly unsuitable for our framework, in which the domain of the menu-preference relation need not be closed under union. We therefore replace Kreps Consistency with a new and somewhat stronger axiom, Cover Dominance, which is appropriate for the general case.Footnote 3 The Cover Dominance axiom. The menus \(A,B_{1},B_{2},D \subset X\) are in the domain of the menu-preference relation \(\succsim \), with \(D \subset B_{1} \cup B_{2}\). If \(A \mathrel {\succ }B_{1}\) and \(A \mathrel {\succ }B_{2}\), then Cover Dominance requires that \(A \mathrel {\succ }D\) To understand the content of Cover Dominance, consider the situation in Fig. 1. Here \(\succsim \) is the weak menu-preference relation (with associated strict relation \(\mathrel {\succ }\) and indifference relation \(\mathrel {\sim }\)), X is the universal set of alternatives, and the menus \(A,B_{1},B_{2},D \subset X\) are in the domain of \(\succsim \). If both \(A \mathrel {\succ }B_{1}\) and \(A \mathrel {\succ }B_{2}\) and if \(\succsim \) is rationalizable by meal preferences, then each meal in \(B_{1} \cup B_{2}\) should be strictly inferior to a meal in A. If also \(D \subset B_{1} \cup B_{2}\), then each meal in D (the "covered" menu) should likewise be strictly inferior to a meal in A. This leads us to anticipate that \(A \mathrel {\succ }D\), which is the conclusion mandated by Cover Dominance.Footnote 4 Our main result (Theorem 1) thus characterizes rationalizability by means of the Menu Order and Cover Dominance conditions. In proving sufficiency of this axiom system, we also establish two of its implications that are of interest in their own right. One condition, Implicit Optima, states that each menu contains an alternative whose presence on any other menu guarantees that the second menu is no worse than the first. In terms of the rationalization, this means that even on infinite menus there can be found a greatest option with respect to the meal-preference relation. The second implied condition, Weak Cover Dominance, replaces strict with weak preference in both the hypotheses and the conclusion of Cover Dominance. This alternate version of the cover dominance property plays a role in the proof of sufficiency, as well as in linking our axiom system to that of Kreps (1979). Preferences over budget sets: an example For a concrete illustration of our framework and characterization result, let \(X = \mathfrak {R}^{2}_{+}\) and imagine a consumer with endowment \(\langle 1,1 \rangle \) who may face a variety of different relative prices. 
Imagine further that the consumer asserts preferences over the four price vectors \(\langle 1,1 \rangle , \langle 1,2 \rangle , \langle 2,1 \rangle \), and \(\langle 1,4 \rangle \); with respective budget sets \(B_{1}, B_{1/2}, B_{2}\), and \(B_{1/4}\). This situation is depicted in Fig. 2, where the points \(x^{1} , x^{2} , \ldots , x^{7}\) represent arbitrary consumption bundles in different regions of X. An example of preferences over the budget sets \(B_{1}, B_{1/2}, B_{2}\), and \(B_{1/4}\). The order \(B_{1/2} \mathrel {\succ }B_{2} \mathrel {\succ }B_{1} \mathrel {\succ }B_{1/4}\) violates Cover Dominance and thus is not rationalizable. In contrast, the order \(B_{1/4} \mathrel {\succ }B_{2} \mathrel {\succ }B_{1} \mathrel {\succ }B_{1/2}\) satisfies Cover Dominance and thus is rationalizable Suppose first that \(B_{1/2} \mathrel {\succ }B_{2} \mathrel {\succ }B_{1} \mathrel {\succ }B_{1/4}\). Since \(B_{1/2} \subset B_{1} \cup B_{1/4}\) and both \(B_{2} \mathrel {\succ }B_{1}\) and \(B_{2} \mathrel {\succ }B_{1/4}\), Cover Dominance requires that \(B_{2} \mathrel {\succ }B_{1/2}\). But this contradicts the observed preference \(B_{1/2} \mathrel {\succ }B_{2}\), so Cover Dominance fails in this case and we can conclude that \(\succsim \) is not rationalizable by complete and transitive meal preferences. Now, suppose instead that \(B_{1/4} \mathrel {\succ }B_{2} \mathrel {\succ }B_{1} \mathrel {\succ }B_{1/2}\). Since \(B_{1} \subset B_{2} \cup B_{1/2}\) and both \(B_{1/4} \mathrel {\succ }B_{2}\) and \(B_{1/4} \mathrel {\succ }B_{1/2}\), Cover Dominance implies \(B_{1/4} \mathrel {\succ }B_{1}\). This agrees with the observed preferences, and it is straightforward to verify that no other violations of Cover Dominance can be found in the dataset. Of course Menu Order also holds, and thus \(\succsim \) can be rationalized in this case. Note that, according to the preferences in the previous paragraph, each of the menus containing \(x^{5}\) (namely, \(B_{1}\) and \(B_{2}\)) is strictly superior to some menu containing \(x^{4}\) (namely, \(B_{1/2}\)). Evaluating each meal by the worst menu containing it, our revealed preference relation therefore considers \(x^{5}\) strictly superior to \(x^{4}\). Similarly, our relation considers \(x^{7}\) strictly superior to \(x^{5}\); \(x^{6}\) strictly superior to \(x^{7}\); and \(x^{1}, x^{2}, x^{3}\), and \(x^{4}\) indifferent to each other since they are all members of the lowest ranked menu (\(B_{1/2}\)). The meal preferences that rationalize a given menu-preference relation will not in general be unique.Footnote 5 Indeed, writing a generic consumption bundle in the present context as \(z = \langle z_{1},z_{2} \rangle \), the preferences \(B_{1/4} \mathrel {\succ }B_{2} \mathrel {\succ }B_{1} \mathrel {\succ }B_{1/2}\) can be rationalized by the two distinct meal-preference relations represented by the utility functions \(u(z) = 19 z_{1} + 30 z_{2}\) and \(v(z) = \max \{ 10 z_{1} z_{2}^{3} , z_{1}^{4} z_{2} \}\). Neither coincides with our revealed meal-preference relation, which in this case has just four indifference classes (corresponding to the four menus in the dataset). Concretely, the alternatives \({\tilde{x}}^{1} = \langle 0.2,1.0 \rangle \) and \({\tilde{x}}^{4} = \langle 2.0,0.1 \rangle \) are ranked as indifferent by our revealed relation, while the two utility functions yield opposing strict preferences computed as \(u({\tilde{x}}^{1}) = 33.8 < 41.0 = u({\tilde{x}}^{4})\) and \(v({\tilde{x}}^{1}) = 2.0 > 1.6 = v({\tilde{x}}^{4})\). 
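For concreteness, these orderings can be checked directly. Maximizing each utility function over the four budget sets (via the obvious corner and interior solutions; this verification is not in the original text) gives

\[\begin{aligned} &\max_{z \in B_{1/4}} u = u(5,0) = 95> \max_{z \in B_{2}} u = u(0,3) = 90> \max_{z \in B_{1}} u = u(0,2) = 60> \max_{z \in B_{1/2}} u = u(3,0) = 57, \\ &\max_{z \in B_{1/4}} v = v(4,\tfrac{1}{4}) = 64> \max_{z \in B_{2}} v = v(\tfrac{3}{8},\tfrac{9}{4}) \approx 42.7> \max_{z \in B_{1}} v = v(\tfrac{1}{2},\tfrac{3}{2}) \approx 16.9> \max_{z \in B_{1/2}} v = v(\tfrac{3}{4},\tfrac{9}{8}) \approx 10.7, \end{aligned}\]

so both utility functions indeed induce \(B_{1/4} \mathrel {\succ }B_{2} \mathrel {\succ }B_{1} \mathrel {\succ }B_{1/2}\).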
This example makes clear that deducing the decision maker's true meal preferences from arbitrary menu-preference data will not typically be possible; and doing so is not our purpose in this paper. We wish, rather, to find concise and transparent conditions that characterize rationalizability without the help of domain assumptions and thus allow us to test this hypothesis under more realistic circumstances. The remainder of the paper is structured as follows. Section 2 describes how menu preferences are induced by meal preferences and how meal preferences are revealed by menu preferences, and introduces the Menu Order axiom. Section 3 proceeds to develop the Cover Dominance axiom and to state and outline the proof of our main result. Section 4 discusses rationalizability by incomplete meal preferences, shows how Kreps's original characterization can be derived as a corollary of our result, and demonstrates how the theory of rationalizable menu preferences parallels the theory of rationalizable choice functions. Proofs are in the "Appendix". Meal and menu preferences Let X be a nonempty set of alternatives (also called "options" or "meals"), write \({\mathfrak {X}}\) for the power set of X, and fix both a nonempty domain \({\mathfrak {D}}\subset {\mathfrak {X}}{\setminus }\{ \emptyset \}\) of menus and a relation \(\succsim \) on \({\mathfrak {D}}\). Our primitives are thus \(\langle X,{\mathfrak {D}},\succsim \rangle \). Write \({\mathfrak {D}}_{x} = \{ A \in {\mathfrak {D}}: x \in A \}\) for the set of menus that contain option x. Given a relation \(\mathrm {R}\) on X, write \(G(A,\mathrm {R}) = \{ x \in A : \forall y \in A \quad x \mathrm {R}y \}\) for the set of \(\mathrm {R}\)-greatest alternatives on menu A. A relation is a preorder if it is both reflexive and transitive. For brevity, a complete preorder will be referred to simply as an order.Footnote 6 As usual, we write \(A \mathrel {\sim }B\) when \(A \succsim B \succsim A\) and \(A \mathrel {\succ }B\) when \(A \succsim B \not \succsim A\). Likewise, we write \(x \mathrm {I}y\) when \(x \mathrm {R}y \mathrm {R}x\) and \(x \mathrm {P}y\) when both \(x \mathrm {R}y\) and \(\lnot y \mathrm {R}x\). Induced menu preferences Kreps (1979, p. 565) uses preferences over alternatives to define preferences over menus "in the obvious fashion." Definition 1 Given a relation \(\mathrm {R}\) on X, define a relation on \({\mathfrak {D}}\) as follows: For each \(A,B \in {\mathfrak {D}}\), let if and only if \(\forall y \in B\), there exists an \(x \in A\) such that \(x \mathrm {R}y\). In words, the induced relation weakly prefers menu A to B if each option on B is weakly inferior, according to the meal-preference relation \(\mathrm {R}\), to some option on A. This is consistent with the standard model of choice, in which the decision maker will eventually select from each menu a preference-maximal option according to which the menu itself may be valued. We write \(A \mathrel {\mathrel {\sim }_{\mathrm {R}}} B\) when and \(A \mathrel {\mathrel {\succ }_{\mathrm {R}}} B\) when . For complete \(\mathrm {R}\), Definition 1 can then be expressed as \(B \mathrel {\mathrel {\succ }_{\mathrm {R}}} A\) if and only if \(\exists y \in B\) such that \(\forall x \in A\) we have \(y \mathrm {P}x\). That is, a strict menu preference for B over A is induced by \(\mathrm {R}\) if some option on B is strictly better than every option on A. 
An important consequence of Definition 1 is that the induced relation inherits a number of ordering properties from \(\mathrm {R}\). A. If \(\mathrm {R}\) is reflexive, then is reflexive. B. If \(\mathrm {R}\) is complete, then is complete. C. If \(\mathrm {R}\) is transitive, then is transitive. Finally, we can use induced menu preferences to formalize our concept of rationalizability. A rationalization of \(\succsim \) is a relation \(\mathrm {R}\) on X such that . If the unobserved meal-preference relation is complete and transitive, then it follows from Proposition 1 that the induced menu-preference relation exhibits the same properties. This yields a necessary condition for rationalizability by an order in the general case. Condition 1 (Menu Order) The relation \(\succsim \) is an order. Corollary 1 If \(\succsim \) is rationalized by an order, then Menu Order holds. Let \(X = wxyz\) and \({\mathfrak {D}}= \{ z,wx,wz,xy,xz,yz,xyz \}\). Then, the order \(w \mathrm {P}x \mathrm {I}y \mathrm {P}z\) on meals induces the order \(wx \mathrel {\mathrel {\sim }_{\mathrm {R}}} wz \mathrel {\mathrel {\succ }_{\mathrm {R}}} xy \mathrel {\mathrel {\sim }_{\mathrm {R}}} xz \mathrel {\mathrel {\sim }_{\mathrm {R}}} yz \mathrel {\mathrel {\sim }_{\mathrm {R}}} xyz \mathrel {\mathrel {\succ }_{\mathrm {R}}} z\) on menus. For instance, we have that since \(w \mathrm {R}x\) and \(w \mathrm {R}y\), while since \(\lnot x \mathrm {R}w\) and \(\lnot y \mathrm {R}w\).Footnote 7 Revealed meal preferences In order to achieve the desired characterization, we will also need to be able to translate the decision maker's tastes from the menu-preference relation \(\succsim \) to a revealed meal-preference relation. This is accomplished by the following construction. Define a relation \({\hat{\mathrm {R}}}\) on X as follows: For each \(x,y \in X\), let \(x {\hat{\mathrm {R}}}y\) if and only if \(\forall A \in {\mathfrak {D}}_{x}\), there exists a \(B \in {\mathfrak {D}}_{y}\) such that \(A \succsim B\). Here, the revealed relation \({\hat{\mathrm {R}}}\) weakly prefers option x to y if each menu containing x is no worse, according to the primitive relation \(\succsim \), than some menu containing y. We write \(x {\hat{\mathrm {I}}}y\) when \(x {\hat{\mathrm {R}}}y {\hat{\mathrm {R}}}x\) and \(x {\hat{\mathrm {P}}}y\) when both \(x {\hat{\mathrm {R}}}y\) and \(\lnot y {\hat{\mathrm {R}}}x\). For complete \(\succsim \), Definition 3 can then be expressed as \(y {\hat{\mathrm {P}}}x\) if and only if \(\exists A \in {\mathfrak {D}}_{x}\) such that \(\forall B \in {\mathfrak {D}}_{y}\) we have \(B \mathrel {\succ }A\). That is, a strict meal preference for y over x is revealed by \(\succsim \) if some menu containing x is strictly worse than every menu containing y. The latter paraphrasing of Definition 3 conveys the rationale behind the revealed meal-preference relation \({\hat{\mathrm {R}}}\): If even the worst menu B containing y is strictly preferred to some menu A containing x, this suggests that y itself is strictly better than everything in A and in particular strictly better than x. The expression \(x {\hat{\mathrm {R}}}y\) records the absence of this situation, where the evidence from \(\succsim \) indicates instead that x is at least as good as y. Our next result is the meal-preference analog of Proposition 1, establishing that \({\hat{\mathrm {R}}}\) inherits the same ordering properties from \(\succsim \). A. If \(\succsim \) is reflexive, then \({\hat{\mathrm {R}}}\) is reflexive. B. 
If \(\succsim \) is complete, then \({\hat{\mathrm {R}}}\) is complete. C. If \(\succsim \) is transitive, then \({\hat{\mathrm {R}}}\) is transitive. Menu Order implies that \({\hat{\mathrm {R}}}\) is an order. For the domain defined in Example 1, we have \({\mathfrak {D}}_{w} = \{ wx,wz \}, {\mathfrak {D}}_{x} = \{ wx,xy,xz,xyz \}, {\mathfrak {D}}_{y} = \{ xy,yz,xyz \}\), and \({\mathfrak {D}}_{z} = \{ z,wz,xz,yz,xyz \}\). In this case, the order \(wx \mathrel {\sim }wz \mathrel {\succ }xy \mathrel {\sim }xz \mathrel {\sim }yz \mathrel {\sim }xyz \mathrel {\succ }z\) on menus (identical to the induced preferences in Example 1) reveals the original order \(w {\hat{\mathrm {P}}}x {\hat{\mathrm {I}}}y {\hat{\mathrm {P}}}z\) on meals. For instance, we have that \(w {\hat{\mathrm {R}}}x\) since \(wx \succsim xy\) and \(wz \succsim xy\), while \(\lnot x {\hat{\mathrm {R}}}w\) since \(xy \not \succsim wx\) and \(xy \not \succsim wz\). Main result Characterization of rationalizability We know from Corollary 2 that Menu Order is sufficient for \({\hat{\mathrm {R}}}\) to be complete and transitive. Hence, what is needed is a further condition that together with Menu Order will guarantee that this relation rationalizes the observed \(\succsim \). To construct the required axiom, we shall use the concept of a covering set of menus. The set \({\mathfrak {B}}\subset {\mathfrak {D}}\) is said to cover \(A \in {\mathfrak {D}}\) if \(A \subset \bigcup {\mathfrak {B}}:= \bigcup _{B \in {\mathfrak {B}}} B\). Our condition then states that any menu strictly preferred to the elements of a cover must be strictly preferred to the target of the cover. (Cover Dominance) Let \(A,D \in {\mathfrak {D}}\) and let \({\mathfrak {B}}\subset {\mathfrak {D}}\) cover D. If for each \(B \in {\mathfrak {B}}\) we have \(A \mathrel {\succ }B\), then \(A \mathrel {\succ }D\). Here, the intuition is that \({\mathfrak {B}}\) collectively should be no worse than D, so Cover Dominance has the flavor of a transitivity condition. Note, however, that \(\bigcup {\mathfrak {B}}\) may or may not be in \({\mathfrak {D}}\), so we cannot argue simply that \(A \mathrel {\succ }\bigcup {\mathfrak {B}}\succsim D\) and hence \(A \mathrel {\succ }D\). To show necessity of our new axiom, we shall need the set of \(\mathrm {R}\)-greatest elements of each menu to be nonempty. Implicitly, this is of course the set of eventual choices from the menu, and hence the additional structure required amounts to an assumption of nonempty-valued choice. If \(\succsim \) is rationalized by an order \(\mathrm {R}\) with \(G(\cdot ,\mathrm {R})\) nonempty, then Cover Dominance holds. Our main result combines the assumptions on meal preference and the conditions on menu preference across Corollary 1 and Proposition 3. Theorem 1 The relation \(\succsim \) is rationalized by an order \(\mathrm {R}\) with \(G(\cdot ,\mathrm {R})\) nonempty if and only if Menu Order and Cover Dominance hold. The menu preferences in Example 2 are rationalized by an order and hence satisfy both Menu Order and Cover Dominance. For instance, we have \(xz \subset xyz = xy \cup yz, wz \mathrel {\succ }xy\), and \(wz \mathrel {\succ }yz\), so Cover Dominance requires that \(wz \mathrel {\succ }xz\) (which is in fact the case). 
In contrast, the preferences \(wx \mathrel {\sim }wz \mathrel {\sim }xz \mathrel {\succ }xy \mathrel {\sim }yz \mathrel {\sim }xyz \mathrel {\succ }z\) fail Cover Dominance and so cannot be rationalized by an order.Footnote 8

Sufficiency of axioms

To achieve our characterization, it remains to show that the axioms in Theorem 1 are sufficient for \({\hat{\mathrm {R}}}\) to rationalize \(\succsim \) and generate a nonempty \(G(\cdot ,{\hat{\mathrm {R}}})\). The former property means both that all observed preferences are faithfully reproduced by \({\hat{\mathrm {R}}}\), written \(\succsim \mathrel {\subset } \mathrel{\succsim _{{\hat{\mathrm {R}}}}}\); and that all preferences induced by \({\hat{\mathrm {R}}}\) are genuine, written \(\mathrel{\succsim _{{\hat{\mathrm {R}}}}} \mathrel {\subset } \succsim \).Footnote 9

We shall verify the required properties of \({\hat{\mathrm {R}}}\) with the help of two auxiliary conditions implied by our axiom system. The first asserts the existence within each menu of an "implicit optimum" whose appearance on any other menu ensures weak menu-superiority.Footnote 10

(Implicit Optima) For each \(A \in {\mathfrak {D}}\), there exists an \(x \in A\) such that \(\forall B \in {\mathfrak {D}}_{x}\) we have \(B \succsim A\).

A. If \(\succsim \) is complete, then Cover Dominance implies Implicit Optima. B. Menu Order and Implicit Optima imply Cover Dominance.

This condition yields the desired nonemptiness property of \({\hat{\mathrm {R}}}\).

Implicit Optima implies that \(G(\cdot ,{\hat{\mathrm {R}}})\) is nonempty.

Conveniently, it can also be used to prove the faithful-reproduction property.

If \(\succsim \) is transitive, then Implicit Optima implies that \(\succsim \mathrel {\subset } \mathrel{\succsim _{{\hat{\mathrm {R}}}}}\).

Recall the menu-preference order \(\succsim \) defined in Example 3, for which Cover Dominance fails. Here, alternative w is an implicit optimum for the menus wx and wz, alternative x for the menus xy and xyz, alternative y for the menu yz, and alternative z for the menu z. The menu xz contains no implicit optimum, since \(x \in xy \prec xz\) and \(z \in yz \prec xz\).

Our second auxiliary condition is a weak-preference counterpart of Cover Dominance and has a similar intuition in terms of the cover \({\mathfrak {B}}\) supplying a bridge between menu A and the (now weakly) inferior menu D.

(Weak Cover Dominance) Let \(A,D \in {\mathfrak {D}}\) and let \({\mathfrak {B}}\subset {\mathfrak {D}}\) cover D. If for each \(B \in {\mathfrak {B}}\) we have \(A \succsim B\), then \(A \succsim D\).

If \(\succsim \) is transitive, then Implicit Optima implies Weak Cover Dominance.

This condition can be used to prove the genuineness property of \({\hat{\mathrm {R}}}\).

Weak Cover Dominance implies that \(\mathrel{\succsim _{{\hat{\mathrm {R}}}}} \mathrel {\subset } \succsim \).

Note that Weak Cover Dominance is not in general strong enough to yield Implicit Optima, even in the presence of Menu Order. To support this claim, we offer the following example.

Let \(X = x_{1} y_{1} x_{2} y_{2} x_{3} y_{3} \ldots , A = x_{1} x_{2} x_{3} \ldots , B_{k} = x_{k} y_{k}\) for \(k \ge 1\), and \({\mathfrak {D}}= \{ A,B_{1},B_{2},B_{3},\ldots \}\). Moreover, let \(B_{1} \prec B_{2} \prec B_{3} \prec \ldots \), and let \(A \mathrel {\succ }B_{k}\) for \(k \ge 1\). While these preferences satisfy both Menu Order and Weak Cover Dominance, they fail Implicit Optima. Indeed, the menu A contains no implicit optimum since \(x_{k} \in B_{k} \prec A\) for \(k \ge 1\). Note that, in view of Proposition 4A, \(\succsim \) must fail Cover Dominance as well. This can be verified by observing that \(A \subset \bigcup _{k=1} ^{\infty } B_{k}\) and \(A \mathrel {\succ }B_{k}\) for \(k \ge 1\), while the conclusion \(A \mathrel {\succ }A\) is obviously false.
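Returning to the finite domain of the examples above, conditions like Cover Dominance can also be checked mechanically. The following brute-force sketch (an illustration, not part of the formal development) encodes each menu as a string over \(X = wxyz\), represents a menu-preference order by integer ranks, and searches for violations:

from itertools import combinations

D = ["z", "wx", "wz", "xy", "xz", "yz", "xyz"]

def cover_dominance_violation(rank):
    # Search for a menu A, a cover, and a covered target menu such that
    # A is strictly above every cover element but not above the target.
    for A in D:
        for target in D:
            for r in range(1, len(D) + 1):
                for cover in combinations(D, r):
                    covered = set(target) <= set().union(*map(set, cover))
                    strictly_above = all(rank[A] > rank[B] for B in cover)
                    if covered and strictly_above and not rank[A] > rank[target]:
                        return (A, cover, target)
    return None

# The order of Example 2 (rationalizable): wx ~ wz > xy ~ xz ~ yz ~ xyz > z.
ok = {"wx": 3, "wz": 3, "xy": 2, "xz": 2, "yz": 2, "xyz": 2, "z": 1}
# The order of Example 3 (not rationalizable): wx ~ wz ~ xz > xy ~ yz ~ xyz > z.
bad = {"wx": 3, "wz": 3, "xz": 3, "xy": 2, "yz": 2, "xyz": 2, "z": 1}

print(cover_dominance_violation(ok))   # None
print(cover_dominance_violation(bad))  # ('wx', ('xyz',), 'xz'): wx > xyz but wx ~ xz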
To show that \({\hat{\mathrm {R}}}\) rationalizes \(\succsim \), we need the full strength of Implicit Optima and not just Weak Cover Dominance (see Propositions 6–8). Under Menu Order, we know that Implicit Optima and Cover Dominance are equivalent (see Proposition 4), but to ensure these conditions hold rationalizability alone is insufficient—nonemptiness is also needed (see Proposition 3). Fortunately, nonemptiness of \(G(\cdot ,{\hat{\mathrm {R}}})\) is guaranteed by Implicit Optima (see Proposition 5), making possible the construction of a two-way result in Theorem 1.Footnote 11 Additional results and discussion Incomplete preferences Theorem 1 can be adapted relatively easily to accommodate incompleteness of the primitive relation \(\succsim \) and the rationalizing relation \(\mathrm {R}\). Propositions 1–2 show that the properties of a preorder (namely, reflexivity and transitivity) transfer between meal and menu preferences independently of completeness, and hence the following condition is a suitable adaptation. (Menu Preorder) The relation \(\succsim \) is a preorder. A. If \(\succsim \) is rationalized by a preorder, then Menu Preorder holds. B. Menu Preorder implies that \({\hat{\mathrm {R}}}\) is a preorder. As for Cover Dominance, scrutiny of the proof of Theorem 1 reveals that a slightly different version of the axiom yields a valid characterization with or without the completeness assumption.Footnote 12 (Negative Cover Dominance) Let \(A,D \in {\mathfrak {D}}\) and let \({\mathfrak {B}}\subset {\mathfrak {D}}\) cover D. If for each \(B \in {\mathfrak {B}}\) we have \(B \not \succsim A\), then \(D \not \succsim A\). When \(\succsim \) is complete, we have both \(B \not \succsim A \Longleftrightarrow [A \succsim B \wedge B \not \succsim A] \Longleftrightarrow A \mathrel {\succ }B\) and \(D \not \succsim A \Longleftrightarrow [A \succsim D \wedge D \not \succsim A] \Longleftrightarrow A \mathrel {\succ }D\); and so the two versions of the condition are logically equivalent. In this case, we favor Cover Dominance since it is the more transparent and readily interpretable form of the axiom, but adapting our result to the incomplete case calls for the alternative form. The relation \(\succsim \) is rationalized by a preorder \(\mathrm {R}\) with \(G(\cdot ,\mathrm {R})\) nonempty if and only if Menu Preorder and Negative Cover Dominance hold. The proof of this result requires only minor changes to that of Theorem 1 and is therefore left to the reader. Domain restrictions Our main result characterizes rationalizability of menu preferences using the Menu Order and Cover Dominance axioms. To establish sufficiency, we have shown that in the presence of Menu Order, Cover Dominance is equivalent to Implicit Optima, which in turn implies Weak Cover Dominance. But Weak Cover Dominance is not in general strong enough to yield Implicit Optima, even with Menu Order. For this to be the case, we need to assume that the domain \({\mathfrak {D}}\) is finite, as established by the following proposition. Let \({\mathfrak {D}}\) be finite. Then, Menu Order and Weak Cover Dominance imply Implicit Optima. Assume now that the domain \({\mathfrak {D}}\) is both finite and closed under union.Footnote 13 This takes us into a setting where rationalizability is captured by the axiom originally proposed by Kreps (1979, p. 565). (Kreps Consistency) Let \(A,B \in {\mathfrak {D}}\) be such that \(A \cup B \in {\mathfrak {D}}\). If \(A \succsim B\), then \(A \mathrel {\sim }A \cup B\). 
If \(\succsim \) is reflexive, then Weak Cover Dominance implies Kreps Consistency. A straightforward implication of Kreps's condition is monotonicity with respect to set inclusion (called "desire for flexibility" in Kreps 1979). (Monotonicity) Let \(A,B \in {\mathfrak {D}}\). If \(A \subset B\), then \(B \succsim A\). If \(\succsim \) is complete, then Kreps Consistency implies Monotonicity. The latter fact is useful in proving the following converse to Proposition 10. Let \({\mathfrak {D}}\) be finite and closed under union. Then, Menu Order and Kreps Consistency imply Weak Cover Dominance. We can then state a version of our result that has Kreps's full-domain characterization as an immediate corollary.Footnote 14 Let \({\mathfrak {D}}\) be finite and closed under union. Then, \(\succsim \) is rationalized by an order \(\mathrm {R}\) with \(G(\cdot ,\mathrm {R})\) nonempty if and only if Menu Order and Kreps Consistency hold. (Kreps 1979) Let X be finite and \({\mathfrak {D}}= {\mathfrak {X}}{\setminus } \{ \emptyset \}\). Then, \(\succsim \) is rationalized by an order if and only if Menu Order and Kreps Consistency hold. Logical relationships between selected axioms. Each implication is labeled with the relevant result (e.g., "Prop. 9"), the required assumptions on menu preferences (e.g., "\(\succsim \) [is an] order"), and any necessary restrictions on the domain (e.g., "\({\mathfrak {D}}\) [is] finite") Selected axioms and implications are summarized in Fig. 3. When the domain is both finite and closed under union, any of the four conditions shown suffices (together with Menu Order) to characterize rationalizability. If the domain is not closed under union, then Kreps Consistency no longer suffices, and if \({\mathfrak {D}}\) is not finite then Weak Cover Dominance too is inadequate. For general domains, the desired axiomatization is supplied by either Cover Dominance or Implicit Optima. The latter condition employs an existential quantifier and so can be seen as less attractive in terms of falsifiability. For this reason, we use Cover Dominance in the statement of Theorem 1.Footnote 15 Analogy with rationalizability of choice functions Several aspects of our investigation of rationalizable menu preferences have counterparts in the theory of rationalizable choice functions. Here, we briefly outline the analogy between these two frameworks, assuming in the text for expository purposes that the domain \({\mathfrak {D}}\) is both finite and closed under union. A choice function over \({\mathfrak {D}}\) is a \(C : {\mathfrak {D}}\rightarrow {\mathfrak {X}}{\setminus } \{ \emptyset \}\) such that \(\forall A \in {\mathfrak {D}}\) we have \(C(A) \subset A\). The members of C(A) are interpreted as the options chosen from menu A. A rationalization of C is a relation \(\mathrm {R}\) on X such that \(C = G(\cdot ,\mathrm {R})\). Define the revealed meal-preference relation \(\bar{\mathrm {R}}\) by \(x \bar{\mathrm {R}}y\) if and only if \(\exists A \in {\mathfrak {D}}_{y}\) such that \(x \in C(A)\). Recall that Weak Cover Dominance requires any menu weakly preferred to the elements of a cover to be weakly preferred to the target of the cover. The choice-theoretic counterpart of this requirement is a condition referred to by Tyson (2013, p. 
955) as "extraction consistency": Any alternative chosen from the elements of a cover must be chosen (if available) from the target of the cover.Footnote 16 Extraction consistency is necessary and sufficient for C to admit a rationalization, just as Weak Cover Dominance is necessary and sufficient for an order \(\succsim \) to admit a rationalization. Indeed, extraction consistency holds if and only if \(C = G(\cdot ,\bar{\mathrm {R}})\), just as Weak Cover Dominance holds if and only if . Extraction consistency is equivalent to the conjunction of two conditions: The first, "contraction consistency," says that any meal chosen from a larger menu must be chosen (if available) from a smaller menu, and is the analog of Monotonicity. The second, "weak expansion consistency," says that any meal chosen from each menu in a collection must be chosen from the union of the collection, and is the analog of the following menu-preference axiom.Footnote 17 (Weak Union Dominance) Let \(A \in {\mathfrak {D}}\) and let \({\mathfrak {B}}\subset {\mathfrak {D}}\) be such that \(\bigcup {\mathfrak {B}}\in {\mathfrak {D}}\). If for each \(B \in {\mathfrak {B}}\) we have \(A \succsim B\), then \(A \succsim \bigcup {\mathfrak {B}}\). We can now state a counterpart to the equivalence result for choice functions. If \({\mathfrak {D}}\) is closed under union and \(\succsim \) is a preorder, then Weak Cover Dominance is equivalent to the conjunction of Monotonicity and Weak Union Dominance. With regard to rationalizability, there are two notable differences between the menu-preference and choice-function frameworks. The first concerns the ordering properties of the rationalizing relation \(\mathrm {R}\). Transitivity of \(\mathrm {R}\) is needed for our main result, in contrast to the characterization of rationalizable choice functions via extraction consistency. Moreover, in view of Propositions 1–2 we can ensure that \(\mathrm {R}\) has the relevant ordering properties simply by imposing these same properties on \(\succsim \), without modifying Cover Dominance. This differs from the choice-function setting, where extraction consistency must be strengthened to guarantee the existence of an order rationalization.Footnote 18 The second difference concerns the existence of an \(\mathrm {R}\)-greatest alternative on each menu. In the menu-preference setting, we deal with this issue directly, proving (in Proposition 4A) that the Implicit Optima condition follows from our axiomatization and including nonemptiness of \(G(\cdot ,\mathrm {R})\) in the statement of our results. Indeed, it is to capture precisely this requirement that we use Cover Dominance in Theorem 1 rather than Weak Cover Dominance (the more direct analog of extraction consistency). In the choice-function setting, on the other hand, nonemptiness of \(G(\cdot ,\mathrm {R})\) is ensured by nonemptiness of the primitive C together with the definition of a rationalization, independently of any axioms imposed. Despite these differences, the theories of rationalizability for menu preferences and for choice functions have a considerable amount in common when formulated to allow arbitrary datasets, and this analogy may prove fruitful for future work in both areas. The original article has been updated due to typesetting mistakes in the equations. Barbera et al. (2004) survey the menu-preference literature, while models of temptation, in particular, are surveyed by Lipman and Pesendorfer (2013). 
Among the numerous more recent papers are those of Ahn and Sarver (2013), Dekel et al. (2009), Epstein et al. (2007), Olszewski (2007), and Stovall (2010). As mentioned by Dekel et al. (2009, p. 938), "[a] menu can be interpreted either literally or as an action which affects subsequent opportunities." Note that we deliberately avoid "modeling the set of alternatives as lotteries and utilizing the resulting linear structure by imposing the von Neumann–Morgenstern axioms" (Gul and Pesendorfer 2001, p. 1406); a practice pioneered by Dekel et al. (2001) and Gul and Pesendorfer (2001) and adopted in much of the ensuing menu-preference literature. (Some exceptions include Ergin 2003, Gul and Pesendorfer 2005, and Nehring 1999). While it has the advantage of facilitating precise identification of model components, such as the subjective state space in Dekel et al. (2001), the lottery formulation can be viewed as a purely technical device to the extent that objective risk is not essential to the phenomenon of interest (e.g., temptation). Moreover, this formulation requires more of the decision maker, who must rank menus of lotteries over outcomes rather than simply menus of outcomes. To be precise, Cover Dominance is logically stronger than Kreps Consistency in the presence of Menu Order (see Fig. 3). Observe that menu A is never compared directly to \(B_{1} \cup B_{2}\), which need not be in the domain of \(\succsim \). Moreover, note that Cover Dominance allows arbitrary (not only binary) unions of covering menus. This is a natural consequence of our objective of characterizing rationalizability over arbitrary domains, as evidenced by the similar non-uniqueness seen in Richter (1966), Bossert et al. (2006), Tyson (2013), and other contributions that share this goal. Recall that a binary relation \(\mathrm {R}\) on X is reflexive if \(\forall x \in X\) we have \(x \mathrm {R}x\); transitive if \(\forall x,y,z \in X\) we have \(x \mathrm {R}y \mathrm {R}z \Longrightarrow x \mathrm {R}z\); and complete if \(\forall x,y \in X\) we have \(\lnot x \mathrm {R}y \Longrightarrow y \mathrm {R}x\). Note in this example the multiplicative notation for enumerated sets, which we use when convenient. To see this, note that for any order rationalization \(\mathrm {R}\) we have that \(wx \mathrel {\succ }xy \Longrightarrow w \mathrm {P}x\) and \(wz \mathrel {\succ }z \Longrightarrow w \mathrm {P}z\). But this would imply that \(wx \mathrel {\succ }xz\) (which is in fact not the case). This is different from the statement that \({\hat{\mathrm {R}}}\) coincides with the decision maker's true but unobserved meal-preference relation \(\mathrm {R}\). Even when and we have successfully replicated the agent's menu preferences, we cannot be certain that either \(\mathrm {R}\subset {\hat{\mathrm {R}}}\) or \({\hat{\mathrm {R}}}\subset \mathrm {R}\). Indeed, the failure of these inclusions in general is made clear by the example of preferences over budget sets in Sect. 1.3. This condition strengthens the Desire for Commitment axiom used by Dekel et al. (2009, p. 946) to study "temptation-driven preferences." In the full-domain environment, Desire for Commitment requires that for each \(A \in {\mathfrak {D}}\) there exists an \(x \in A\) such that \(\{ x \} \succsim A\). Here, alternative x can be interpreted as an implicit optimum for menu A, but since Dekel et al. allow for temptation they do not require \(B \succsim A\) for menus \(B \in {\mathfrak {D}}_{x}\) other than the singleton \(\{ x \}\). 
Example 5 illustrates why Theorem 1 imposes the nonemptiness condition. At a somewhat deeper level, this condition is needed because our theory of rationalizable menu preferences parallels the theory of rationalizable choice functions (see Sect. 4.3). In the latter context, nonemptiness of each set of maximal alternatives is typically imposed as a background assumption, whereas we state the property explicitly. On the other hand, there is little prospect of relaxing transitivity, which is used heavily in the proof of Theorem 1. For instance, transitivity is employed to establish the necessity of Cover Dominance (in Proposition 3) and to show the faithful-reproduction property of \({\hat{\mathrm {R}}}\) (in Proposition 6). A referee points out that when \({\mathfrak {D}}\) is finite, the assumption that it is closed under union is substantially less restrictive. This is because a rationalization \(\mathrm {R}\) of menu preferences over any finite \({\mathfrak {D}}\) can be extended to the closure of \({\mathfrak {D}}\) under union, and moreover, nonemptiness of \(G(\cdot ,\mathrm {R})\) will survive this extension. The same is not true for infinite \({\mathfrak {D}}\) (cf. Example 5). Theorem 3 follows from Theorem 1 together with Propositions 4, 7, 9, 10, and 12. For a penetrating analysis of the structure of axioms and falsifiability of the associated theories, see Chambers et al. (2014). Formally, for each \(D \in {\mathfrak {D}}\) and for any cover \({\mathfrak {B}}\subset {\mathfrak {D}}\) of D, we have \([\bigcap _{B \in {\mathfrak {B}}} C (B)] \cap D \subset C (D)\). This is equivalent to the "V-Axiom" in Richter (1971, p. 33), apparently the first statement of the condition. Formally, contraction consistency requires that for each \(A,B \in {\mathfrak {D}}\) with \(A \subset B\) we have \(C(B) \cap A \subset C (A)\), while weak expansion consistency requires that for each \({\mathfrak {B}}\subset {\mathfrak {D}}\) with \(\bigcup {\mathfrak {B}}\in {\mathfrak {D}}\) we have \(\bigcap _{B \in {\mathfrak {B}}} C (B) \subset C (\bigcup {\mathfrak {B}})\). These conditions are, respectively, "Property \(\alpha \)" in Sen (1969, p. 384) and "Property \(\gamma \)" in Sen (1971, p. 314). The Congruence Axiom in Richter (1966, p. 637) is the classical condition achieving this goal. Ahn, D.S., Sarver, T.: Preference for flexibility and random choice. Econometrica 81, 341–361 (2013) Barbera, S., Bossert, W., Pattanaik, P.K.: Ranking sets of objects. In: Barbera, S., Hammond, P., Seidl, C. (eds.) Handbook of Utility Theory, Chapter 17, vol. 2, pp. 893–977. Springer, New York (2004) Bossert, W., Sprumont, Y., Suzumura, K.: Rationalizability of choice functions on general domains without full transitivity. Soc. Choice Welf. 27, 435–458 (2006) Chambers, C.P., Echenique, F., Shmaya, E.: The axiomatic structure of empirical content. Am. Econ. Rev. 104, 2303–2319 (2014) Dekel, E., Lipman, B.L., Rustichini, A.: Representing preferences with a unique subjective state space. Econometrica 69, 891–934 (2001) Dekel, E., Lipman, B.L., Rustichini, A.: Temptation-driven preferences. Rev. Econ. Stud. 76, 937–971 (2009) Epstein, L.G., Marinacci, M., Seo, K.: Coarse contingencies and ambiguity. Theor. Econ. 2, 355–394 (2007) Ergin, H.: Costly contemplation. Unpublished manuscript (2003) Gul, F., Pesendorfer, W.: Temptation and self-control. Econometrica 69, 1403–1435 (2001) Gul, F., Pesendorfer, W.: The simple theory of temptation and self-control. 
Unpublished manuscript (2005) Kreps, D.M.: A representation theorem for 'preference for flexibility'. Econometrica 47, 565–577 (1979) Lipman, B.L., Pesendorfer, W.: Temptation. In: Acemoglu, D., Arellano, M., Dekel, E. (eds.) Advances in Economics and Econometrics: Tenth World Congress, Chapter 8, vol. 1, pp. 243–288. Cambridge University Press, New York (2013) Nehring, K.: Preference for flexibility in a Savage framework. Econometrica 67, 101–119 (1999) Olszewski, W.: Preferences over sets of lotteries. Rev. Econ. Stud. 74, 567–595 (2007) Richter, M.K.: Revealed preference theory. Econometrica 34, 635–645 (1966) Richter, M.K.: Rational choice. In: Chipman, J.S., Hurwicz, L., Richter, M.K., Sonnenschein, H.F. (eds.) Preferences, Utility, and Demand, Chapter 2, pp. 29–58. Harcourt Brace Jovanovic, New York (1971) Sen, A.K.: Quasi-transitivity, rational choice, and collective decisions. Rev. Econ. Stud. 36, 381–393 (1969) Sen, A.K.: Choice functions and revealed preference. Rev. Econ. Stud. 38, 307–317 (1971) Stovall, J.E.: Multiple temptations. Econometrica 78, 349–376 (2010) Tyson, C.J.: Behavioral implications of shortlisting procedures. Soc. Choice Welf. 41, 941–963 (2013) The author would like to thank Andrew Ellis, Marco Mariotti, Sujoy Mukerji, and an anonymous referee for useful comments and suggestions. School of Economics and Finance, Queen Mary University of London, London, E1 4NS, UK Christopher J. Tyson Correspondence to Christopher J. Tyson. The original version of this article was revised: Author correction was misinterpreted and both the symbols (binary relations and negations) have been changed to negation symbols. Now, they have been corrected. Appendix: Proofs Proof of Proposition 1 A. For all \(A \in {\mathfrak {D}}\) and \(\forall x \in A\), we have \(x \mathrm {R}x\), since \(\mathrm {R}\) is reflexive, and hence . Thus, is reflexive. B. For all \(A,B \in {\mathfrak {D}}\), we have where the third implication uses the completeness of \(\mathrm {R}\). Thus, is complete. C. For all \(A,B,D \in {\mathfrak {D}}\), we have where the second implication assigns \(y=w\) and the third uses the transitivity of \(\mathrm {R}\). Thus, is transitive. \(\square \) A. For all \(x \in X\) and \(\forall A \in {\mathfrak {D}}_{x}\), we have \(A \succsim A\), since \(\succsim \) is reflexive, and hence \(x {\hat{\mathrm {R}}}x\). Thus, \({\hat{\mathrm {R}}}\) is reflexive. B. For all \(x,y \in X\), we have $$\begin{aligned} \lnot x {\hat{\mathrm {R}}}y \Longleftrightarrow&\lnot \forall A \in {\mathfrak {D}}_{x} \quad \exists B \in {\mathfrak {D}}_{y} \quad A \succsim B \\ \Longleftrightarrow&\exists A \in {\mathfrak {D}}_{x} \quad \forall B \in {\mathfrak {D}}_{y} \quad \lnot A \succsim B \\ \Longrightarrow&\exists A \in {\mathfrak {D}}_{x} \quad \forall B \in {\mathfrak {D}}_{y} \quad B \succsim A \\ \Longrightarrow&\forall B \in {\mathfrak {D}}_{y} \quad \exists A \in {\mathfrak {D}}_{x} \quad B \succsim A \\ \Longleftrightarrow&y {\hat{\mathrm {R}}}x , \end{aligned}$$ where the third implication uses the completeness of \(\succsim \). Thus, \({\hat{\mathrm {R}}}\) is complete. C. 
For all \(x,y,z \in X\) we have $$\begin{aligned} x {\hat{\mathrm {R}}}y {\hat{\mathrm {R}}}z \Longleftrightarrow&[\forall A \in {\mathfrak {D}}_{x} \quad \exists B \in {\mathfrak {D}}_{y} \quad A \succsim B] \wedge [\forall D \in {\mathfrak {D}}_{y} \quad \exists E \in {\mathfrak {D}}_{z} \quad D \succsim E] \\ \Longrightarrow&\forall A \in {\mathfrak {D}}_{x} \quad \exists B \in {\mathfrak {D}}_{y} \quad \exists E \in {\mathfrak {D}}_{z} \quad A \succsim B \succsim E \\ \Longrightarrow&\forall A \in {\mathfrak {D}}_{x} \quad \exists E \in {\mathfrak {D}}_{z} \quad A \succsim E \\ \Longleftrightarrow&x {\hat{\mathrm {R}}}z , \end{aligned}$$ where the second implication assigns \(D = B\) and the third uses the transitivity of \(\succsim \). Thus, \({\hat{\mathrm {R}}}\) is transitive. \(\square \) For all \(A,D \in {\mathfrak {D}}\) and \({\mathfrak {B}}\subset {\mathfrak {D}}\) that covers D, we have where the fifth implication follows from \(G(A,\mathrm {R}) \ne \emptyset \), the sixth from the transitivity of \(\mathrm {R}\), the seventh from the fact that \({\mathfrak {B}}\) covers D, and the eleventh from the completeness of \(\mathrm {R}\). Hence, Cover Dominance holds. \(\square \) A. Suppose that Implicit Optima fails, in which case \(\exists A \in {\mathfrak {D}}\) such that \(\forall x \in A\) we can find a \(B_{x} \in {\mathfrak {D}}_{x}\) with \(B_{x} \not \succsim A\). Since \(\succsim \) is complete, we have \(A \succsim B_{x}\) and thus \(A \mathrel {\succ }B_{x}\). We have also \(A \subset \bigcup _{x \in A} B_{x}\), and so Cover Dominance implies that \(A \mathrel {\succ }A\), a contradiction. B. For all \(A,D \in {\mathfrak {D}}\) and \({\mathfrak {B}}\subset {\mathfrak {D}}\) that covers D, we have $$\begin{aligned} \forall B \in {\mathfrak {B}}\quad A \mathrel {\succ }B \Longrightarrow&\exists x \in D \quad \exists B_{x} \in {\mathfrak {B}}\cap {\mathfrak {D}}_{x} \quad A \mathrel {\succ }B_{x} \succsim D \\ \Longleftrightarrow&\exists x \in D \quad \exists B_{x} \in {\mathfrak {B}}\cap {\mathfrak {D}}_{x} \quad [A \succsim B_{x} \succsim D \wedge B_{x} \not \succsim A] \\ \Longrightarrow&D \not \succsim A \\ \Longleftrightarrow&[A \succsim D \wedge D \not \succsim A] \\ \Longleftrightarrow&A \mathrel {\succ }D , \end{aligned}$$ where the first implication follows from Implicit Optima, the third from the transitivity of \(\succsim \), and the fourth from the completeness of \(\succsim \). Hence, Cover Dominance holds. \(\square \) We have that $$\begin{aligned}&\forall A \in {\mathfrak {D}}\quad \exists x \in A \quad \forall B \in {\mathfrak {D}}_{x} \quad B \succsim A \\&\quad \Longrightarrow \forall A \in {\mathfrak {D}}\quad \exists x \in A \quad \forall B \in {\mathfrak {D}}_{x} \quad \forall y \in A \quad \exists E \in {\mathfrak {D}}_{y} \quad B \succsim E \\&\quad \Longleftrightarrow \forall A \in {\mathfrak {D}}\quad \exists x \in A \quad \forall y \in A \quad \forall B \in {\mathfrak {D}}_{x} \quad \exists E \in {\mathfrak {D}}_{y} \quad B \succsim E \\&\quad \Longleftrightarrow \forall A \in {\mathfrak {D}}\quad \exists x \in A \quad \forall y \in A \quad x {\hat{\mathrm {R}}}y \\&\quad \Longleftrightarrow \forall A \in {\mathfrak {D}}\quad G(A,{\hat{\mathrm {R}}}) \ne \emptyset , \end{aligned}$$ where the initial assertion is Implicit Optima. Hence, \(G(\cdot ,{\hat{\mathrm {R}}})\) is nonempty. 
\(\square \) For all \(A,B \in {\mathfrak {D}}\), we have where the first implication follows from Implicit Optima and the second from the transitivity of \(\succsim \). Hence, we have . \(\square \) $$\begin{aligned} \forall B \in {\mathfrak {B}}\quad A \succsim B \Longrightarrow \exists x \in D \quad \exists B_{x} \in {\mathfrak {B}}\cap {\mathfrak {D}}_{x} \quad A \succsim B_{x} \succsim D \Longrightarrow A \succsim D , \end{aligned}$$ where the first implication follows from Implicit Optima and the second from the transitivity of \(\succsim \). Hence, Weak Cover Dominance holds. \(\square \) where the fourth implication follows from Weak Cover Dominance. Hence, we have . \(\square \) Proof of Theorem 1 If \(\succsim \) is rationalized by an order \(\mathrm {R}\) with \(G(\cdot ,\mathrm {R})\) nonempty, then Menu Order holds by Corollary 1 and Cover Dominance holds by Proposition 3. Conversely, if Menu Order and Cover Dominance hold, then Implicit Optima holds by Proposition 4A, Weak Cover Dominance holds by Proposition 7, \({\hat{\mathrm {R}}}\) rationalizes \(\succsim \) by Propositions 6 and 8 , \({\hat{\mathrm {R}}}\) is an order by Corollary 2, and \(G(\cdot ,{\hat{\mathrm {R}}})\) is nonempty by Proposition 5. \(\square \) Suppose Implicit Optima fails, in which case \(\exists A \in {\mathfrak {D}}\) such that \(\forall x \in A\) we can find a \(B_{x} \in {\mathfrak {D}}_{x}\) with \(B_{x} \not \succsim A\). Since \({\mathfrak {D}}\) is finite, the set \({\mathfrak {B}}= \{ B_{x} : x \in A \} \subset {\mathfrak {D}}\) is also finite. Moreover, since \(\succsim \) is an order, \(\exists y \in A\) such that \(\forall x \in A\) we have \(B_{y} \succsim B_{x}\). Observing that \(A \subset \bigcup {\mathfrak {B}}\), Weak Cover Dominance now implies that \(B_{y} \succsim A\), contradicting \(B_{y} \not \succsim A\). \(\square \) Proof of Proposition 10 Let \(A,B \in {\mathfrak {D}}\) be such that \(A \cup B \in {\mathfrak {D}}\) and \(A \succsim B\). We have \(A \succsim A\) since \(\succsim \) is reflexive and therefore \(A \succsim A \cup B\) by Weak Cover Dominance. Moreover, we have \(A \cup B \succsim A \cup B \supset A\) since \(\succsim \) is reflexive, and it follows that \(A \cup B \succsim A\) by Weak Cover Dominance. Thus, \(A \mathrel {\sim }A \cup B\), and Kreps Consistency holds. \(\square \) Given \(A,B \in {\mathfrak {D}}\) with \(A \subset B\), suppose that \(B \not \succsim A\). Then, \(A \succsim B\) since \(\succsim \) is complete and \(A \cup B = B \in {\mathfrak {D}}\) since \(A \subset B\), so that \(A \mathrel {\sim }A \cup B = B\) by Kreps Consistency. But this contradicts \(B \not \succsim A\). \(\square \) Given \(A,D \in {\mathfrak {D}}\) and \({\mathfrak {B}}\subset {\mathfrak {D}}\) that covers D, suppose that \(\forall B \in {\mathfrak {B}}\) we have \(A \succsim B\). Since \({\mathfrak {D}}\) is finite, \({\mathfrak {B}}\subset {\mathfrak {D}}\) is finite and can be enumerated as \({\mathfrak {B}}= \{ B_{1},\ldots ,B_{n} \}\). For each \(k \le n\), write \(E_{k} : = \bigcup _{i=1} ^{k} B_{i}\) and note that both \(E_{k} \in {\mathfrak {D}}\) and \(A \cup E_{k} \in {\mathfrak {D}}\) since \({\mathfrak {D}}\) is closed under union. Since \(A \succsim B_{1}\), we have \(A \mathrel {\sim }A \cup B_{1} = A \cup E_{1}\) by Kreps Consistency. [Inductive step begins.] Suppose that for some \(k < n\) we have \(A \mathrel {\sim }A \cup E_{k}\). Since \(A \succsim B_{k+1}\) and \(\succsim \) is transitive, it follows that \(A \cup E_{k} \succsim B_{k+1}\). 
But then $$\begin{aligned} A \mathrel {\sim }A \cup E_{k} \mathrel {\sim }[A \cup E_{k}] \cup B_{k+1} = A \cup E_{k+1} , \end{aligned}$$ using Kreps Consistency. [Inductive step ends.] By induction, we can conclude that \(A \mathrel {\sim }A \cup E_{n}\). Since \(D \subset \bigcup {\mathfrak {B}}= E_{n} \subset A \cup E_{n}\) and \(\succsim \) is complete, we have also \(A \cup E_{n} \succsim D\) by Proposition 11, and so \(A \succsim D\) since \(\succsim \) is transitive. Hence, Weak Cover Dominance holds. \(\square \) If Weak Cover Dominance holds, then Weak Union Dominance is immediate. Moreover, \(\forall A,B \in {\mathfrak {D}}\) if \(A \subset B\) then since \(\succsim \) is reflexive we have \(B \succsim B\) and thus \(B \succsim A\) by Weak Cover Dominance. Hence, Monotonicity holds. For the converse, suppose Monotonicity and Weak Union Dominance hold and take any \(A,D \in {\mathfrak {D}}\) and \({\mathfrak {B}}\subset {\mathfrak {D}}\) that covers D. Since \({\mathfrak {D}}\) is closed under union, we have \(\bigcup {\mathfrak {B}}\in {\mathfrak {D}}\). If for each \(B \in {\mathfrak {B}}\) we have \(A \succsim B\), then \(A \succsim \bigcup {\mathfrak {B}}\) by Weak Union Dominance, and since \(D \subset \bigcup {\mathfrak {B}}\) we have \(\bigcup {\mathfrak {B}}\succsim D\) by Monotonicity. But then \(A \succsim \bigcup {\mathfrak {B}}\succsim D\), and so \(A \succsim D\) since \(\succsim \) is transitive. Hence, Weak Cover Dominance holds. \(\square \) Tyson, C.J. Rationalizability of menu preferences. Econ Theory 65, 917–934 (2018). https://doi.org/10.1007/s00199-017-1043-2 General domains Opportunity sets Revealed preference
Simple metadynamics simulation using the coordination numbers as variables

First task: dynamics of two HNO3 molecules over a graphene sheet
Second task: metadynamics of the dissociation of HNO3 over a graphene sheet
Third task: dynamics of Si6H8
Fourth task: Lagrangian MTD of the atomic rearrangement of Si6H8

Problem: dissociation reaction of nitric acid on graphene and atomic rearrangements of a Si6H8 cluster described using coordination numbers
Original author: Marcella Iannuzzi
Complete source and output files: MTD1.tar.xz

For this tutorial some input and output files are given in order to present a complete procedure to solve the given problem. Some hints are also given to help in the analysis of the results. In order to be able to run these examples, some paths need to be correctly set in the input files (i.e., set the variables LIBPATH, XYZPATH, RUNPATH). The coordinates are always read from xyz files. All the coordinate files needed for these exercises are collected in XYZ, whereas LIB_TOOLS contains the PP, basis sets and DFTB parameter files. The results presented in these examples are to be considered only as toy cases. The obtained processes are not very accurate, because no optimal parameters have been selected, in order to speed up the calculations a bit.

The tasks to be completed are:
1. Set up and run preliminary simulations to learn about the dynamics of nitric acid on graphene, as obtained at the DFTB level of theory
2. Run a metadynamics simulation to trigger the dissociation of nitric acid, by following changes of three different coordination numbers
3. Set up and run preliminary simulations to learn about the dynamics of the selected small Si cluster saturated by H atoms
4. Run a metadynamics simulation aimed at observing atomic rearrangements of the cluster by changing the coordination of both Si and H species

First task: dynamics of two HNO3 molecules over a graphene sheet

The examples on this system are in the directory GR_2HNO3. The goal is to simulate the dissociation of the HNO3 molecules with formation of products like H2O and/or NO or NO2 fragments. These reactions can occur in the gas phase. However, the reaction should be catalyzed in the presence of C particles (soot). In the proposed example, the molecules are located in the vicinity of graphene, which should mimic the role of soot. Graphene indeed is not very reactive; better models can be considered using defective or functionalized graphene. The initial coordinates are given in XYZ/grly5x3_2hno3.xyz. The study is started with a simple MD simulation at constant temperature (300 K), to learn about the dynamics of the two molecules over graphene. The DFTB description is employed to speed up the simulation, even if this might not be the optimal choice to faithfully describe the dissociation reaction. The provided input file GR_2HNO3/gr2hno3_nvt.inp contains all the instructions to run with either DFT, DFTB, or PM6. It is sufficient to set the input variable METHOD_TO_USE to the name of the method to be used, and the parts of the input related to the selected method are activated. In the present case DFTB is selected, and the relevant part of the input is

  @IF ( ${METHOD_TO_USE} == DFTB )
    &DFT
      &QS
        METHOD DFTB
        &DFTB
          SELF_CONSISTENT T
          DO_EWALD T
          DISPERSION T
          &PARAMETER
            PARAM_FILE_PATH ${LIBPATH}/scc
            PARAM_FILE_NAME scc_parameter
            UFF_FORCE_FIELD uff_table
          &END PARAMETER
        &END DFTB
      &END QS
  PERIODIC XYZ

The system is fully periodic, and enough space is left above the graphene layer in order to avoid interactions with the images along $z$.
This is a very simple model of the type of particles that might trigger the dissociation reaction, and we are not interested in the dynamics of the layer itself. Therefore, a few atoms of the layer are constrained to fixed positions by

  &CONSTRAINT
    &FIXED_ATOMS
      LIST 48 51 54 57 60 45 59 44 58 43

Otherwise the MOTION section is quite standard for NVT simulations with a Nose-Hoover thermostat. Since the total number of degrees of freedom is small, even with the thermostat the equilibration to the desired temperature might prove difficult. Hence, the rescaling of the temperature is activated by the MOTION/MD#TEMP_TOL keyword, whenever the difference between the instantaneous temperature and the desired value exceeds the given tolerance. The output of this short simulation (5 ps) is stored in DFTB_NVT. Kinetic energy (3rd col.), temperature (4th col.), potential energy (5th col.), and total energy (6th col.) can be monitored from GR_2HNO3/DFTB_NVT/gr2hno3_nvt-1.ener. By visualizing the short trajectory, it is observed that the two molecules move very fast and explore a large space of configurations, being often far from each other and far from graphene. In order to restrict the exploration to the regions where the dissociation catalyzed by graphene might occur, and thus avoid configurations that are not interesting for the specific study, it is necessary to limit the movement of the two molecules. To this purpose, an external confining potential centered on the center of the system coordinates is added, which acts only on the two molecules. In order to simplify the definition of the external potential, the coordinates are first centered at zero (XYZ/grly5x3_2hno3_cc.xyz). The new input file invoking the interaction with the confining potential is GR_2HNO3/gr2hno3_nvt_epot.inp, where the only difference in the FORCE_EVAL section is

  &EXTERNAL_POTENTIAL
    ATOMS_LIST 61..70
    FUNCTION 0.000000000001*(Z^2)^4
    FUNCTION 0.0000000000001*(X^2)^4
    FUNCTION 0.0000000000001*(Y^2)^4

Along the resulting 10 ps trajectory, the two molecules remain close to graphene, where they should be. The next step is to set up a few collective variables (CVs) that can later be used for metadynamics (MTD) simulations. It is important to select good CVs that can describe the relevant configurations along the reaction path. Moreover, it is useful to learn about the typical behavior of the selected CVs along an unbiased MD run. Hence, after selecting a set of CVs, preliminary runs should be performed in order to monitor the dynamics of these variables. This can be done by setting up MTD simulations where in fact no bias is added. The evolution of the variables is then monitored while the system explores the configurations around the initial one, i.e. those belonging to the same (initial) basin of attraction on the free energy surface (FES). The evaluation of the typical fluctuation amplitudes of the CVs is particularly important in order to set the width of the Gaussian hills that are going to build up the penalty potential along the "real" MTD run. Moreover, it is important to learn which variations in the CVs can occur spontaneously, i.e. do not need any bias, and where the CVs cannot move without activation. The input GR_2HNO3/gr2hno3_mtd_4cv_h0_p1.inp has been prepared with the definition of four CVs.
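Before turning to the four CVs, it can be instructive to evaluate the confining potential defined above numerically, to see at what distance the wall becomes repulsive. The short Python sketch below is not part of the tutorial files; it simply tabulates the same functional form as the three FUNCTION lines, and it assumes CP2K's internal atomic units (coordinates in bohr, energy in hartree), which should be checked against the manual for EXTERNAL_POTENTIAL.

  import numpy as np

  def confining_potential(x, y, z):
      # Same functional form as the FUNCTION lines above:
      # 1.0e-12*(Z^2)^4 + 1.0e-13*(X^2)^4 + 1.0e-13*(Y^2)^4
      return 1.0e-12 * z**8 + 1.0e-13 * x**8 + 1.0e-13 * y**8

  for z in (5.0, 10.0, 15.0, 20.0, 25.0):
      v = confining_potential(0.0, 0.0, z)
      print(f"z = {z:5.1f}  V = {v:.3e} hartree ({v * 627.5:.2f} kcal/mol)")

With these prefactors the potential is negligible within roughly 15 units of the center and rises steeply beyond 20, which matches its purpose: keeping the molecules near the graphene layer without perturbing the dynamics close to it.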
The first CV is the coordination number (CN) of O to graphene:

  &COORDINATION
    KINDS_FROM O
    KINDS_TO C
    R_0 [angstrom] 1.8
    NN 8
    ND 14
  &END COORDINATION

where NN and ND determine the curvature of the function used to compute the CN and R_0 is the reference OC distance:

$CN_{\text{ OC}} = \frac{1}{N_{\text{ O}}} \sum_{i_{\text{O}}} \sum_{j_{\text{C}}} \frac{1-(\frac{r_{ij}}{R_0})^{nn}}{1-(\frac{r_{ij}}{R_0})^{nd}}$

This CV describes the interaction between graphene and the O atoms that may dissociate from N, since adsorption might occur on the layer. It should be approximately zero when the molecules are far from the layer. It becomes larger than zero, but always smaller than one, when one or more O atoms get closer to the layer. With the given parameters, the CN starts being larger than zero for OC distances below 4 Å.

The second CV is the CN of N to O, which is about 3 for non-dissociated molecules and smaller once the dissociation begins:

  KINDS_FROM N
  KINDS_TO O

The third CV is the CN between H and graphene, since H can also be lost from the molecules and adsorbed on graphene:

  KINDS_FROM H

Finally, the fourth CV is the distance between a point and a plane, where the point is the geometric center between the two N atoms of the system, while the plane is determined by the coordinates of three C atoms of graphene:

  &DISTANCE_POINT_PLANE
    &POINT
      TYPE GEO_CENTER
      ATOMS 1
      ATOMS 48
      ATOMS 69 70
    ATOMS_PLANE 1 2 3
    ATOM_POINT 4
  &END DISTANCE_POINT_PLANE

This last CV controls the distance of the molecules from the layer, which is an important factor in determining whether the dissociation is somehow favored by the presence of graphene. The output of the unbiased MD that monitors the behavior of these 4 CVs is in DFTB_MTD_4CV_H0. It is obtained by invoking an MTD run where no penalty potential is added. Hence, the FREE_ENERGY subsection is added within the MOTION section. The MTD run is controlled from the MOTION/FREE_ENERGY/METADYN subsection. In the present example, where no bias has to be added, the MOTION/FREE_ENERGY/METADYN section contains very few parameters:

  &FREE_ENERGY
    &METADYN
      DO_HILLS .FALSE.
      &METAVAR
        SCALE 0.08
        COLVAR 1
      &END METAVAR
      SCALE 0.3
      COMMON_ITERATION_LEVELS 3
      MD 1
    &END METADYN
  &END FREE_ENERGY

With MOTION/FREE_ENERGY/METADYN#DO_HILLS set to .FALSE., it is specified that no bias is added. Then for each defined COLVAR an MTD variable is initialized. The PRINT%COLVAR section controls the printing of the COLVAR output file, containing the instantaneous values of the CVs as well as other parameters when needed. For the run without bias, no other information is needed, and the only interesting data in the output GR_2HNO3/DFTB_MTD_4CV_H0/gr2hno3_mtd_4cv_h0_p1-COLVAR.metadynLog are the second, third, fourth, and fifth columns, which are the instantaneous values of the CVs at the indicated time (in fs, first column). By plotting the CVs as recorded along the short MD trajectory (3 ps), the amplitude of the equilibrium fluctuations can be evaluated and then used to set the size of the Gaussian hills that build up the biasing potential. The first CV fluctuates close to zero, with fluctuations smaller than 0.2. The second is around 2.8; its fluctuations are smaller due to the stiffness of the three NO bonds. The coordination of H to C is also typically zero, but it can change a lot when the molecules approach the layer, even if there is no dissociation of H and no binding to C.
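To make the role of the R_0, NN and ND parameters concrete, the following minimal Python sketch (not part of the tutorial files) tabulates one pairwise term of the coordination function defined above, for the O-C case:

  def cn_term(r, r0=1.8, nn=8, nd=14):
      # One pairwise term of the CN: (1 - (r/r0)^nn) / (1 - (r/r0)^nd).
      # At r exactly equal to r0 the expression is 0/0; its limit is nn/nd.
      x = r / r0
      if abs(x - 1.0) < 1e-12:
          return nn / nd
      return (1.0 - x**nn) / (1.0 - x**nd)

  for r in (1.0, 1.5, 1.8, 2.0, 3.0, 4.0, 5.0):
      print(f"r = {r:3.1f} A   term = {cn_term(r):.3f}")

The term is essentially 1 well below R_0, about nn/nd at R_0, and then decays with a long tail (roughly 0.05 at 3 Å and below 0.01 at 4 Å), which is why the CN only starts to rise for O-C distances under about 4 Å. Smaller exponents make the tail longer, larger ones make the switch sharper.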
These large swings in the H-to-C coordination indicate that this variable is difficult to control and might turn out to be tricky to use to distinguish among different states of the reaction process. The point-to-plane distance shows quite large fluctuations and is clearly not suited to distinguish a specific state along the reaction path. Moreover, its minima, when the two molecules are closer to the layer, correspond to the maxima of the third CV, i.e. the CN of H to C. At least before dissociation, the information that this variable provides is redundant. It might be interesting to run this preliminary simulation again after modifying the definition of the CVs. For example, by changing the two exponents or even the reference distance of the CN, the range of the function can be made shorter or longer. It is perhaps important to remember that the function defining the CV must have a gradient different from zero in order to affect the behavior of the system in an MTD run. Namely, the MTD force term affecting the dynamics of the atoms involved in the definition of the CV is proportional to the gradient of the CV function.

The presented MTD run employs as CVs only the three CNs described above. The related input file is GR_2HNO3/gr2hno3_mtd_3cv_p1.inp and the output is stored in DFTB_MTD_3CV. The MOTION/FREE_ENERGY/METADYN input section has been modified to activate the MTD algorithm:

  DO_HILLS
  NT_HILLS 100
  WW 3.0e-3
  &HILLS

One Gaussian hill is spawned every NT_HILLS MD steps, while the height of the hill in hartree is given by WW. These parameters, together with the width of the Gaussian hills, are important in determining the accuracy of the description of the FES through the MTD biasing potential. Since each variable has, in principle, different dimensions and different dynamics, the shape of the hills filling up the $N_{\text{CV}}$-dimensional configuration space, as defined by the selected CVs, is not isotropic. The parameter SCALE associated with the $i$-th MTD variable determines the amplitude of the Gaussian in the $i$-th space direction of the $N_{\text{CV}}$-dimensional configuration space. This parameter, as well as the hill height and the frequency of collocation, can be changed along the same MTD run by restarting with different values in the input. This feature is useful when the dynamics of some variable changes after some event has occurred (e.g., the fluctuations of a distance become larger after a bond breaking), or to modulate the resolution of the biasing potential (coarser or finer) in different regions of the space (e.g., coarser at the bottom of the FES basin and finer in the vicinity of the transition state). For an efficient exploration of the configuration space, it is important to spawn hills that are not too big, otherwise important features of the topography of the FES might not be sufficiently well resolved, or the MTD trajectory could even follow the wrong path, missing the minimum energy path (MEP). On the other hand, filling up the whole space with too-small hills might require an excessively long simulation time. Given the hill size (height and width) and knowing approximately the size of the space spanned by the CVs and the barrier height, it is possible to estimate the number of hills needed to fill the basin of the FES and move to the next minimum.
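As a rough illustration of that estimate, the Python sketch below divides the energy-times-CV-space volume of the basin by the volume of a single Gaussian hill. All numbers are chosen for illustration only; in particular the barrier height, the hill widths, and the accessible basin volume are assumptions, not values taken from this tutorial.

  import numpy as np

  ww = 3.0e-3                           # hill height WW (hartree)
  widths = np.array([0.2, 0.2, 0.2])    # assumed hill widths in the 3 CVs
  barrier = 0.05                        # assumed barrier height (hartree)
  basin_volume = 1.0                    # assumed basin volume in CV units

  # "Volume" of one Gaussian hill in energy x CV-space units.
  hill_volume = ww * np.prod(np.sqrt(2.0 * np.pi) * widths)
  n_hills = basin_volume * barrier / hill_volume
  print(f"hills needed ~ {n_hills:.0f}")
  print(f"MD steps     ~ {n_hills * 100:.0f}  (one hill every NT_HILLS = 100 steps)")

With these assumptions one obtains on the order of a hundred hills, i.e. roughly ten picoseconds of MD; the actual run below needed about 100 ps, reflecting the fact that the accessible CV space is larger and diffusion through it is slow.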
The MOTION/FREE_ENERGY/METADYN/PRINT/HILLS print key controls the printing of the HILLS file where the information on the spawned hills is stored: timestep, coordinates of the center in the CV space (3 CVs = 3 columns), width in each dimension of the space (3 more columns), and height (last column). The provided trajectory is about 100 ps long and indeed shows the dissociation of the two molecules into NO2 and OH, with the OH fragments tending to interact with the graphene layer. From the behavior of the first and second CNs the evolution of the system can be deduced to some extent. In particular, there are clear changes in the NO coordination, which becomes larger when the two molecules get closer and smaller when the OH is dissociated. Soon after the dissociation, the coordination is again close to 3, because the lost O is compensated by the fact that the two NO2 fragments stay close together, i.e. each N sees the O of the other fragment. The coordination of O to C becomes larger when the molecules are closer to the layer, and fluctuates a lot due to the rapid movement of the molecules. After the dissociation, higher values of the CN are kept for a longer time, indicating some more stable interaction of O with C. A better choice of the parameters defining this CN might help in resolving the two states, O interacting and O not interacting with the layer, more clearly. As predicted, the very large fluctuations of the H-to-C CN are difficult to interpret and make this variable not very useful for the description of the process. A CN of N to H, describing the dissociation of H from HNO3, would be a better choice as the third CV. Other quantities that can be monitored from the MOTION/FREE_ENERGY/METADYN/PRINT/COLVAR output, besides the instantaneous values of the three CVs (2nd, 3rd, and 4th col.), are: the instantaneous gradients of the bias potential computed with respect to the CVs (5th, 6th, 7th col.), the gradients with respect to the CVs of the wall potentials, if present (8th, 9th, 10th col.), the instantaneous value of the bias potential, and the instantaneous values of the wall potentials.

Third task: dynamics of Si6H8

The data files for this example are in SI6_CLU. In this case, a small Si cluster of 6 Si atoms saturated by 8 H atoms is studied. Si clusters show different arrangements. The equilibrium structure should be such that the Si atoms keep the preferred tetrahedral coordination. In the presence of H saturating the dangling Si bonds, the structure can be open, like the chair structure used here as the starting conformation. By losing H atoms, through the formation of molecular hydrogen, the cluster undergoes some rearrangement. The structure should become more compact in order to saturate the Si coordination shell. As in the previous example, preliminary MD runs are carried out to study the dynamics of the system. The electronic structure is computed at the DFT level, using the PBE functional. A standard constant-temperature MD is simulated by running the input SI6_CLU/si6_clu_nvt.inp. The temperature is set at 300 K and controlled by applying the Nose-Hoover thermostat and temperature rescaling (the small number of degrees of freedom does not allow thermalization within a few ps). The output of this simulation is in NVT. The energy curve over the 6 ps of simulation time and the visualization of the trajectory confirm that the cluster with the initial stoichiometry can be easily equilibrated in the chair structure.
Possible changes in stoichiometry (losing H) and structure are going to induce variations in the coordination shell of the Si atoms. Therefore, three CNs are selected as CVs: Si to Si, Si to H, and H to H. These variables are monitored over the equilibrium trajectory by running the input SI6_CLU/si6_clu_mtd_h0_p1.inp. As in the previous example, this is an MTD input where the variable MOTION/FREE_ENERGY/METADYN#DO_HILLS is set to false, so that no biasing potential is added. The output obtained by running this test is in MTD_H0. By monitoring the three CNs along the 10 ps long trajectory (2nd, 3rd and 4th col. of SI6_CLU/MTD_H0/si6_clu_mtd_h0-COLVAR.metadynLog), it is observed that the three variables are well equilibrated, with relatively small fluctuations around the average. Given the chosen parameters

  KINDS_FROM Si
  KINDS_TO Si
  R_0 [angstrom] 2.55

the Si-Si CN oscillates around 2. Actually, in the chair configuration 4 Si atoms are threefold coordinated with neighboring Si atoms and 2 are twofold coordinated. The bond length is about 2.4 Å. By changing the curvature of the function, the average value of the CN can easily be moved towards 3, and it can become more sensitive to fluctuations of the Si-Si bond length. The fluctuations of the two other CNs are even smaller. Si-H fluctuates around 1.4, and H-H is very close to zero, since in the initial configuration the H atoms do not see each other.

The output of a second simulation that monitors the same three CVs is in MTD_L_H0, and the corresponding input is SI6_CLU/si6_clu_mtd_l_h0_p1.inp. Nothing has been changed in the definition of the CVs, but the Lagrangian MTD formalism has been used. With this scheme, an auxiliary variable is associated with each CV, and when the biasing potential is added, it is defined as a function of the auxiliary variables rather than of the CVs. The auxiliary variable behaves as an additional degree of freedom. Therefore, an inertial mass is associated with it and its dynamics is determined by integrating the same type of equations of motion as for all the other degrees of freedom. The variable is coupled to the corresponding CV through a harmonic potential, and the forces acting on it are those derived from the harmonic potential and from the MTD biasing potential, when present. Hence, in the MOTION/FREE_ENERGY/METADYN/METAVAR input section, two additional parameters are needed: the mass of the auxiliary variable and the coupling constant of the harmonic potential:

  LAMBDA 2.5
  MASS 30.

A temperature is associated with the auxiliary variables and can be controlled by temperature rescaling. The use of thermostats for so few degrees of freedom is questionable. The Lagrangian MTD formalism is used in order to better control the kinetics of the CVs. This control is obtained through the coupling to the auxiliary variables, whose dynamics depends on the mass and the temperature, besides the intensity of the two contributions to the force. Hence, by properly tuning the coupling constant and the mass, the desired effect can be obtained. This can become important for collecting the correct probability distribution in the configuration space defined by the CVs: the system must visit all the accessible conformations. By controlling the dynamics of the selected variables, a sort of adiabatic separation can be imposed between the relevant reaction parameters and all the other degrees of freedom.
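The following self-contained Python sketch mimics the extended-Lagrangian coupling for a single variable. It uses the LAMBDA and MASS values from the input above but a synthetic CV signal; it is an illustration of the mechanics of the coupling, not a reproduction of the tutorial output.

  import numpy as np

  k, m, dt = 2.5, 30.0, 0.5        # LAMBDA, MASS, time step (arbitrary units)
  t = np.arange(0.0, 400.0, dt)
  theta = 2.0 + 0.2 * np.sin(0.05 * t)   # synthetic CV trajectory

  s, v = theta[0], 0.0             # auxiliary variable and its velocity
  lag = 0.0
  for th in theta:
      # Velocity-Verlet step with the harmonic force -k*(s - theta);
      # in a biased run the force -dV_bias/ds would be added here as well.
      a = -k * (s - th) / m
      v += 0.5 * dt * a
      s += dt * v
      a = -k * (s - th) / m
      v += 0.5 * dt * a
      lag = max(lag, abs(s - th))

  print(f"natural frequency sqrt(k/m) = {np.sqrt(k / m):.3f}")
  print(f"max |s - CV| along the run  = {lag:.3f}")

Increasing MASS lowers the natural frequency sqrt(k/m) of the auxiliary variable; once this drops below the typical fluctuation frequency of the CV, the auxiliary variable can no longer follow the fast CV motion and effectively averages over it, which is exactly the adiabatic separation mentioned above.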
Whenever a MOTION/FREE_ENERGY/METADYN/METAVAR is defined (also in a non-Lagrangian scheme), it is possible to limit the range of values that are going to be explored by setting a so-called WALL potential. This is typically done to avoid the time-consuming exploration of regions of the configuration space that are not relevant for the process under investigation. When auxiliary variables are employed, the WALL potential can also be activated in order to avoid the exploration of unphysical values, which can result from too-large fluctuations away from the corresponding CV. In the case of the CNs, negative values are unphysical and must be avoided. For this reason the subsection MOTION/FREE_ENERGY/METADYN/METAVAR/WALL is added to the MOTION/FREE_ENERGY/METADYN/METAVAR coupled to the H-H CN, which is known to oscillate close to zero in the initial state:

  &WALL
    POSITION 0.0
    TYPE QUARTIC
    &QUARTIC
      DIRECTION WALL_MINUS
      K 100.0

In this case the potential function is $f(s)=K (s-s_0) ^4$, and it is activated whenever the variable becomes lower (DIRECTION WALL_MINUS) than the given $s_0$ (POSITION). When such a Lagrangian scheme is used, more columns appear in the COLVAR file, containing all the relevant information. The 1st column is always the time in fs, the next $N_{\text{CV}}$ columns are the instantaneous values of the auxiliary variables, followed by $N_{\text{CV}}$ columns where the instantaneous values of the CVs are reported. The next columns report the values of the potential gradients: $N_{\text{CV}}$ columns for the gradients of the harmonic potential, $N_{\text{CV}}$ for the gradients of the MTD biasing potential, and $N_{\text{CV}}$ for the gradients of the WALL potential (these are zeros when the corresponding potential is not activated). The following $N_{\text{CV}}$ are the velocities of the auxiliary variables. Then there are the instantaneous values of the harmonic potential, of the MTD potential, and of the WALL potential. The last column is the temperature of the auxiliary variables.

The dynamics of the CVs along the two simulations, with and without the Lagrangian MTD scheme, is equivalent. The CVs and the auxiliary variables in the Lagrangian MTD simulation closely follow each other, which points to a strong enough coupling (it could also be a bit looser). The masses assigned to the auxiliary variables seem not to affect the time evolution at equilibrium significantly, i.e. the inertia effect is quite small, also because the temperature of the auxiliary variables is likewise set at 300 K. In order to slow down the oscillations of the three CNs, and better sample the accessible configurations at each point in the CV space, the parameters to be tuned are then the mass and the temperature of the auxiliary variables. In the input SI6_CLU/si6_clu_mtd_l_h0_p2.inp, the definition of the Si-Si CN has been slightly changed, and the mass of the auxiliary variables has been increased. The effect of these changes can be investigated by running this input and comparing the results with the previous ones (both inputs can be run at a lower level of theory, just to explore the effects of the different parameters on the dynamics of the CVs). A third input is proposed, where the MTD temperature is reduced to 100 K: SI6_CLU/si6_clu_mtd_l_h0_p3.inp. MTD_L_P2 contains the output of the MTD run performed with the parameters tested by running SI6_CLU/si6_clu_mtd_l_h0_p2.inp. The corresponding input is SI6_CLU/si6_clu_mtd_l_p2.inp.
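A quick numerical look at the quartic wall introduced above shows how sharply it switches on below s0. This is a sketch only, and it assumes that K is given in CP2K's internal energy unit of hartree, which should be verified against the manual.

  def wall(s, s0=0.0, k=100.0):
      # QUARTIC wall with DIRECTION WALL_MINUS: active only for s < s0.
      return k * (s - s0)**4 if s < s0 else 0.0

  for s in (0.10, 0.00, -0.05, -0.10, -0.20):
      print(f"s = {s:6.2f}   V_wall = {wall(s):.4f} hartree")

Under this assumption the wall is essentially invisible down to about -0.05 and already costs several kcal/mol at -0.1, so the H-H coordination can fluctuate around zero but cannot drift to clearly negative, unphysical values.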
The MOTION/FREE_ENERGY/METADYN/METAVAR#SCALE parameter is the same for the three variables, since the fluctuations of the three CNs are going to be quite similar. The collocation rate is one hill every 100 MD steps, which is quite often but reasonable, also because the hill size is not too big (about 1 kcal/mol for the height).
A bi-filtering method for processing single nucleotide polymorphism array data improves the quality of genetic map and accuracy of quantitative trait locus mapping in doubled haploid populations of polyploid Brassica napus

Guangqin Cai, Qingyong Yang, Bin Yi, Chuchuan Fan, Chunyu Zhang, David Edwards, Jacqueline Batley and Yongming Zhou

© Cai et al.; licensee BioMed Central. 2015. Received: 1 July 2014

Abstract

Single nucleotide polymorphism (SNP) markers have a wide range of applications in crop genetics and genomics. Due to their polyploid nature, many important crops, such as wheat, cotton and rapeseed, contain a large amount of repeat and homoeologous sequences in their genomes, which imposes a huge challenge for high-throughput genotyping with sequencing and/or array technologies. Allotetraploid Brassica napus (AACC, 2n = 4x = 38) comprises two highly homoeologous sub-genomes derived from its progenitor species B. rapa (AA, 2n = 2x = 20) and B. oleracea (CC, 2n = 2x = 18), and is an ideal species in which to develop methods for reducing the interference of extensive inter-homoeologue polymorphisms (mHemi-SNPs and Pseudo-simple SNPs) between closely related sub-genomes.

Based on a recent B. napus 6K SNP array, we developed a bi-filtering procedure to identify unauthentic lines in a DH population, as well as mHemi-SNPs and Pseudo-simple SNPs in an array data matrix. The procedure utilized both monomorphic and polymorphic SNPs in the DH population and could effectively distinguish the mHemi-SNPs and Pseudo-simple SNPs that result from the superposition of the signals from multiple SNPs. Compared with the conventional procedure for array data processing, the bi-filtering method could minimize the pseudo linkage relationships caused by the mHemi-SNPs and Pseudo-simple SNPs, thus improving the quality of the SNP genetic map. Furthermore, the improved genetic map could increase the accuracy of QTL mapping, as demonstrated by the ability to eliminate non-real QTLs in the mapping population. The bi-filtering analysis of the SNP array data represents a novel approach to effectively assigning multi-loci SNP genotypes in polyploid B. napus and may find wide application in SNP analyses of polyploid crops.

Keywords: Brassica napus; SNP array; Bi-filtering analysis; QTL mapping

Oilseed rape (Brassica napus L., AACC, 2n = 38) is one of the most important oil crops in the world, providing not only edible oil but also raw materials for bio-energy applications. B. napus is an allotetraploid that was generated from the natural hybridization of its two progenitor diploids, Brassica rapa (AA, 2n = 20) and Brassica oleracea (CC, 2n = 18), about 7,500 years ago [1,2]. B. rapa and B. oleracea were produced by extensive triploidization of their ancestral species at the genomic level [3-5]. The two B. napus subgenomes, An and Cn, are largely collinear (93%) with the corresponding diploid Ar (B. rapa) and Co (B. oleracea) genomes [2]. The three species are believed to share a common ancestor with Arabidopsis thaliana [2,4-6]. Thus, on average, one orthologous Arabidopsis gene has about four homologous copies in the B. napus genome [2,4,5,7]. Most orthologous gene pairs in B. rapa and B. oleracea remain as homoeologous pairs in the B. napus An and Cn subgenomes (an orthologous gene in the An genome in most cases has a highly homologous copy of the sequence in the Cn genome) [2,5].
Single nucleotide polymorphism (SNP) markers have a wide range of applications in the construction of genetic maps, mapping and cloning of quantitative trait loci (QTL), linkage analysis, molecular marker-assisted selection (MAS), and molecular breeding of crops [8-12]. Edwards et al. and Hayward et al. estimated that there was one SNP in every 600 bp of the B. napus genome, for a total of approximately 1.7 million SNPs [13,14]. Westermeier et al. and Durstewitz et al. identified 87 and 604 SNPs in B. napus using an amplicon sequencing method, respectively [15,16]. Recently, Trick et al. identified 23,330 and 41,593 SNPs in the two cultivars, Ningyou7 and Tapidor, using Solexa transcriptome sequencing [17-19]. Bus et al. identified more than 20,000 SNPs in 8 B. napus inbred lines using a next-generation restriction-site associated DNA (RAD) sequencing method [20]. A total of 7,322 genic SNPs were selected from publicly available information for Illumina Infinium genotyping by Delourme et al. [21], and an ultrahigh-density SNP bin map containing 8,780 SNPs was constructed using a modified ddRADseq technique for two B. napus inbred lines and their 91 doubled haploid (DH) progenies [22]. Several methods have been used to successfully genotype B. napus with SNP markers, including mini-sequencing [15], Illumina GoldenGate genotyping [16], SNaPshot [23,24], Invader® [25] assays and SNAP primer amplification [7]. Recently, high-throughput 6K and 60K SNP arrays for B. napus based on the Illumina Infinium HD Assay have been developed and used for QTL mapping [26,27], genome-wide association studies [28], and genome structure analysis [29,30]. Compared with diploid species, such as rice [12,31], maize [32], tomato [33], chickpea [34], sorghum [35], and apple [36], the large-scale identification of SNPs and genotyping in B. napus face more challenges due to the species' complex genome structure [2,37,38]. For instance, SNP identification by transcriptome sequencing or using known EST sequence data showed that approximately 90% of identified SNP loci correspond to hemi-SNPs [17], resulting in a large number of heterozygous signals in genotyping analyses with SNP arrays [16,38]. Because the generation of heterozygous signals is due mainly to the binding of the SNP probe to two or more different genomic sequences (i.e., non-specific binding), the detected signal may not represent the genotype corresponding to the SNP probe itself. Two traditional solutions to this problem are (i) to remove the SNPs with heterozygous signals from further analysis, or (ii) to code signals with the same P1 value as the A genotype, those with the same P2 value as the B genotype, and those with a non-parental value as missing values (recorded as "-") in a segregating population [26,27]. The first method results in a low usage of the SNP array data, while the premise of the second method is the uniqueness of the binding site for the SNP probe in the genome. However, it is difficult for a considerable number of probes to meet this requirement in the B. napus genome [2,4,5], which leads to improper utilization of SNP data in many cases. Different parameters have been proposed for the quality evaluation of SNP arrays in diploid species (e.g., rice, maize and apple). For instance, the cluster separation score (CSS) is often used to select high-quality SNP probes (CSS > 0.3).
However, the CSS is not always suitable for the determination of SNP loci [39,40], as it only describes the degree of separation between the homozygous clusters and the heterozygous cluster, rather than the separation between the two homozygous clusters [39,40]. For populations consisting of pure individuals, such as DH lines and recombinant inbred lines (RIL), probes with a high heterozygous proportion (>5%) and low minor allele frequencies (MAF < 0.01) may be filtered out, and materials/lines with high missing data (>20%) or a low call rate (<0.7) may be discarded from further analysis [36,40-44]. Due to the high frequency of multi-loci SNPs (hemi-SNPs) in polyploid species such as B. napus, it is not suitable to simply apply the parameters and criteria developed for diploid species to evaluating the quality of SNP array probes in polyploid species. Therefore, there is a need to develop effective procedures to assess the quality of SNP array probes and to make full use of SNP genotyping data in polyploid species.

In this study, a 6K SNP array (Illumina Infinium HD Assay) [27] for B. napus was applied to genotyping a DH population and its parents [30]. A procedure called bi-filtering analysis was developed to improve the efficiency and accuracy of SNP array data analysis. The procedure first calculates the percentage of non-parental genotypes (PNPG), based on monomorphic loci, in a segregating population. Subsequently, the difference in PNPG between single-locus SNPs (Simple SNPs and sHemi-SNPs) and multi-loci SNPs (mHemi-SNPs and Pseudo-simple SNPs) was compared to filter out multi-loci SNPs among the SNP loci and unauthentic lines in the DH population. Such bi-directional filtering can optimize the population and eliminate multi-loci SNP interference, thus improving the quality of the genetic map and the accuracy of QTL mapping in polyploid B. napus.

Plant materials, field trials and trait evaluation

The HJ DH population was produced from microspore culture of F1 buds of the cross between Huashuang 5 (Hua5), a semi-winter type B. napus variety, and J7005, a winter-type B. napus pure line. The two parents were purified by microspore culture before hybridization. Detailed information about this population was described in Wu et al. [45] and Cai et al. [30]. The DH lines, together with their parental lines and the F1 and RF1 hybrids, were grown in a semi-winter rapeseed crop area (Wuhan in 2009–2010 and 2010–2011, and Huanggang in 2010–2011) and in a spring rapeseed crop area (Gansu in 2011). The field experiment followed a randomized complete block design with three replications. Each line was planted in two rows, and 10–11 plants were maintained in each row, with a distance of 17 cm between plants within each row and 30 cm between rows. The parental line Hua5 was grown every 20 lines as a control. The field management followed essentially regular breeding practice.

Molecular marker and SNP array genotyping

Primer sequences for the SSR markers used for genetic mapping were described by Fan et al. [46], and the sequence information of all SSR markers is provided by Cai et al. [30]. The genotyping of SNPs was performed using a 6K Illumina Infinium HD Assay SNP array of B. napus (Illumina Inc., San Diego, CA) developed by the University of Queensland. The SNP genotyping was conducted following the instructions from the Infinium HD Assay Ultra Protocol Guide (http://www.illumina.com/).
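For concreteness, here is a minimal Python sketch of the conventional, diploid-style filter quoted above (heterozygous proportion >5%, MAF < 0.01, missing data >20%, call rate <0.7). The DataFrame geno is hypothetical (rows = SNP probes, columns = lines), with calls coded as "AA", "BB", "AB" or "-" for missing; this is the baseline against which the bi-filtering procedure is contrasted, not part of the study's software.

  import numpy as np
  import pandas as pd

  def diploid_qc(geno: pd.DataFrame) -> pd.DataFrame:
      het = (geno == "AB").mean(axis=1)          # heterozygous proportion per probe
      miss = (geno == "-").mean(axis=1)          # missing rate per probe
      aa = (geno == "AA").sum(axis=1)
      bb = (geno == "BB").sum(axis=1)
      maf = np.minimum(aa, bb) / (aa + bb).clip(lower=1)  # MAF from homozygote counts
      keep_probes = (het <= 0.05) & (miss <= 0.20) & (maf >= 0.01)
      call_rate = (geno != "-").mean(axis=0)     # call rate per line
      keep_lines = call_rate >= 0.70
      return geno.loc[keep_probes, keep_lines]

In a polyploid like B. napus such a filter would remove most of the array (as quantified in the Results below), which is precisely the motivation for the bi-filtering procedure.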
All the SNP array data were clustered and visualized for further analysis using the Illumina GenomeStudio software (Illumina Inc., San Diego, CA). Each SNP was re-checked manually to determine whether any error had occurred during the clustering analysis. Detailed information about SNP array genotyping and data processing was described in Cai et al. [30].

Construction of linkage map and QTL mapping

The method for genetic linkage map construction was described by Cai et al. [30]. QTLs were detected using the composite interval mapping (CIM) procedure with the software QTL Cartographer V2.5 [47]. A significance threshold for QTLs at the P = 0.05 level was determined through permutation analysis using 1000 repetitions. The other parameters and methods for QTL mapping were as described by Feng et al. [48].

The majority of polymorphic SNP loci exhibited heterozygous signals in B. napus

Previously, the HJ-DH population and its parental lines were genotyped with SSR markers and the 6K SNP array for B. napus [29,30], and the call rate of all 5,306 SNP loci on the array for all 192 samples was >0.7 [30]. There were 578 probes (10.9%) that were detected in less than 80% of the samples and were thus not included in further analysis. The remaining 4,728 SNPs were used for cluster analysis using the GenomeStudio software [30]. Among the 4,728 SNPs, 521 (11%) had a CSS <0.3 (Figure 1a). As doubled haploids, all the DH lines should have only their parental genotypes, with two expected homozygous clusters (AA and BB, Figure 1b). However, other types of genotyping data were observed after clustering, including SNPs with CSS <0.3 but with clear clusters (Figure 1c), SNPs with one of the parental genotypes being heterozygous (Figure 1d) or not detected (no call, Figure 1e), and SNPs with a high frequency of non-parental genotypes (NPG, i.e. a genotype at a SNP locus of a DH line that differs from both parental lines) in the progeny population (Figure 1f). There were 155 polymorphic SNPs among the 521 SNPs with CSS <0.3, most of which had clear clusters (Figure 1c). After checking all the calls manually, the SNPs falling between two clusters were rescored as missing data ("-", Figure 1).

Figure 1. Different types of single nucleotide polymorphism (SNP) probes as clustered by the GenomeStudio software in the HJ-DH population. (a) Distribution of the cluster separation score (CSS) for all 5,306 SNPs; (b)-(f) scoring of SNP genotyping data from different types of SNP probes. The three highlighted clusters denote the areas where the three different genotypes of homozygous allele AA (red), heterozygous AB (purple) and homozygous allele BB (blue) are called. Allele calls that are ambiguously located in the lighter colored areas between or below these areas are scored as "no call" (NC). Ellipses are used to mark the positions of the cluster calling areas. The dots with black circles are calls that needed to be manually re-checked and re-scored as missing data ("-"). (b) Typical score from probe bna1131 with the two expected homozygous clusters (AA and BB); (c) the score from SNP bna1686 with CSS <0.3 but with two clear parental genotype clusters; (d) the score from SNP bna4154 with one parent being heterozygous (AB); (e) the score from SNP bna4116 with one parent being NC; (f) the score from SNP bna2547 with three obvious clusters (AA, AB, BB), in which the non-parental genotype AB cannot be re-clustered to any homozygous cluster manually.
Among the 1,850 polymorphic loci out of the 4,728 probes (39.1%) [30], 1,149 (62%) were detected as heterozygous signals in one parent (Table 1). There were also 1,005 SNPs (54.3%) that had three clear clusters, with at least one DH line per cluster, in the DH population (Table 1). In SNP arrays for diploid species, these two types of calls (heterozygous SNPs and SNPs with three clear clusters in the DH population) would frequently be discarded [40-44]. If a similar treatment were followed in this study, only 158 polymorphic SNP loci with non-heterozygous calls and two clear parental clusters in the population would be left for further genotyping analysis (Table 1), accounting for only 3.0% of the SNPs on the array. Such a choice would significantly compromise the high-throughput advantage of SNP arrays.

Table 1. Types and numbers of the polymorphism combinations in the two parental lines. (Only the header and footnote survived extraction: the columns distinguished homozygous signals from homozygous/heterozygous signals for Hua5 and J7005; the table body is not recoverable.) N: the number of SNP loci. The number in brackets designates the number of SNP loci that exhibit segregation into three clear clusters (each cluster containing at least one DH line) in the DH population.

The above results revealed that nearly 40% of the SNPs were polymorphic between the two parental lines, consistent with findings in maize and other crops [32,40]. However, data from monomorphic loci, which account for a large portion of the SNP array, were directly discarded in previous studies [26,27,30], resulting in a potential loss of information from both the array and the genotyped samples.

Monomorphic SNP loci can be used to genotype the mapping population and assess SNP detection errors

Because the genotype at a given locus in each DH line can, in theory, be inferred from the parental genotypes, we hypothesized that the monomorphic SNP loci could be used to evaluate the authenticity of each DH line, as well as the stability of and error in the SNP array detection. For that purpose, a two-dimensional matrix was established to genotype the individual lines of the DH population (Figure 2, Additional file 1: Table S1). This matrix lists the genotypes of each SNP locus in all DH lines horizontally and the genotypes of each DH line in all SNP loci vertically. In such a matrix, the occurrence of an NPG might be due to an error in the SNP detection system or to the DH line itself (e.g. mechanical or biological contamination of the sample). Quantifying the percentage of non-parental genotypes (PNPG) in these SNPs for each DH line in the vertical direction can accurately identify the authenticity of each DH line in the population. After removing the potentially unauthentic DH lines, the remaining differences can be used to assess the reliability and stability of SNP detection in the array.

Schematic diagram of a two-dimensional matrix for analyzing monomorphic single nucleotide polymorphisms (SNPs) in the HJ-DH population. The matrix lists the genotypes of each SNP locus in all doubled haploid (DH) lines horizontally and the genotypes of each DH line in all SNP loci vertically. The blank and black squares represent the parental genotype and non-parental genotype in the population, respectively. The PNPG_SNP and PNPG_DH are calculated by the formulas described in the Results and Discussion section.
For the DH lines listed in the vertical direction, the PNPG of DH lines (PNPG_DH) can be calculated with the following formula (a computational sketch follows at the end of this passage):

$$ PNPG_{\_DH} = 1 - \frac{PG_j}{M_j} $$

where PNPG_DH represents the percentage of NPGs among all detected SNP loci of the jth DH line, PG_j represents the number of parental genotypes (PGs) among all SNP loci of the jth line, and M_j represents the number of detected SNP loci of the jth line. For a true DH line, the PNPG value should theoretically be zero. However, many factors, such as genetic mutations, the stability of the SNP detection system, and mechanical or biological contamination of the samples, can affect genotyping results. Nevertheless, the probability that all of these factors will have a significant impact on the genotyping results is small. Based on these considerations, in the subsequent analysis PNPG = 0.05 was set as a threshold value to determine the authenticity of a given DH line. The PNPG values were calculated for each DH line based on 3,456 monomorphic SNP loci, including the SNPs that had more than 20% missing data (Additional file 1: Table S1). The average PNPG for the population was 0.039, with a PNPG <0.03 in 179 lines and >0.05 in 11 lines (Figure 3, Additional file 1: Table S1). After removing these 11 lines, the average PNPG for the DH population decreased to 0.016. The remaining 179 DH lines were considered to be true genetic offspring of the two parents and were used for the subsequent analysis. We also checked the genotypes of these 11 unauthentic DH lines based on the 473 polymorphic SSR loci, and could not find any abnormality or error. One might consider a similar examination with monomorphic SSR markers. However, it seems feasible to apply only monomorphic SNPs for such a purpose, since monomorphic SSR markers are normally not retained for genotyping a segregating population in a regular SSR genotyping experiment, and the throughput of SSR genotyping is obviously much lower than that of SNPs. In this regard, high-throughput SNPs are more powerful than other regular markers in evaluating the authenticity of the DH population offspring. The above results showed that the PNPG value of a DH line could be used to evaluate the structure of the population and the authenticity of the offspring. After excluding the unauthentic DH lines from the population, the stability and error of the SNP array detection system could be further assessed.

Frequency of the percentage of non-parental genotype (PNPG) measured by monomorphic single nucleotide polymorphisms (SNPs) in the HJ-DH population. The PNPG of each doubled haploid (DH) line is calculated by the formula described in the Results and Discussion section.

Due to the existence of a large number of inter-homoeologues in the A and C subgenomes of B. napus [17,49], it is difficult to ensure that a SNP probe binds only to a particular genomic sequence/region when designing SNP probes. Such a lack of specificity could result in a large number of heterozygous signals in SNP detection in B. napus. In this study, 62% of the detected SNPs were loci with heterozygous signals in one of the parents (Table 1), although both parental lines were homozygous (doubled haploids produced through microspore culture). Such non-specific binding of SNP probes, and the consequent heterozygous signals in the array analysis, could result in the appearance of NPGs in the DH population.
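As a brief aside, the PNPG_DH calculation over the two-dimensional matrix can be illustrated with a minimal sketch in Python/pandas. The genotype coding ("AA"/"BB"/"AB" with "NC" for no call) and all data here are hypothetical toy values, not the study's actual matrix.

```python
import pandas as pd

def pnpg_per_line(geno, p1, p2):
    """PNPG_DH = 1 - PG_j / M_j for each DH line (a column of `geno`),
    computed over monomorphic loci (rows where both parents agree)."""
    mono = (p1 == p2) & (p1 != "NC")                  # monomorphic, callable loci
    g = geno.loc[mono]
    detected = g != "NC"                              # M_j: detected loci per line
    parental = g.eq(p1[mono], axis=0) & detected      # PG_j: parental-genotype calls
    return 1 - parental.sum(axis=0) / detected.sum(axis=0)

# Toy example: 5 loci x 4 lines; loci s1-s4 are monomorphic (both parents "AA").
loci = [f"s{i}" for i in range(1, 6)]
p1 = pd.Series(["AA", "AA", "AA", "AA", "AA"], index=loci)
p2 = pd.Series(["AA", "AA", "AA", "AA", "BB"], index=loci)
geno = pd.DataFrame(
    {"DH1": ["AA", "AA", "AA", "AA", "BB"],
     "DH2": ["AA", "BB", "AA", "NC", "AA"],   # one non-parental call of 3 detected
     "DH3": ["AA", "AA", "AB", "AB", "AA"],   # two non-parental calls of 4 detected
     "DH4": ["AA", "AA", "AA", "AA", "AA"]},
    index=loci,
)
pnpg = pnpg_per_line(geno, p1, p2)
print(pnpg)   # lines with PNPG > 0.05 would be flagged as unauthentic
```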
To test this hypothesis and to understand the cause of the heterozygous signals in the SNP array, we followed a procedure similar to that described by Trick et al. [17], in which an unambiguous allelic SNP was termed a "Simple SNP" and an allelic polymorphism due to the presence of homoeologous sequences a "Hemi-SNP" [17]. In this way, we classified SNPs into Simple SNPs, Hemi-SNPs, and Pseudo-simple SNPs according to the allelic SNP type, the presence of inter-homoeologues, and consequently the number of loci a probe can bind to. A Simple SNP refers to a typical allelic SNP, which can only be targeted by its specific probe (a single locus). Such a detection generates AA/BB/NC but no AB signal in both the parental lines and their offspring DH lines. A Hemi-SNP refers to an incomplete allelic polymorphism due to the presence of homoeologous sequences in the B. napus genome. A Pseudo-simple SNP refers to an allelic SNP derived from two homoeologues that possess inter-homoeologous polymorphisms in the two parental lines. At a Hemi-SNP locus, the existence of mismatched bases would result in a difference in probe binding capacity [50-53]. For instance, if the probe can bind two loci in P1 as shown in Figure 4 (right), P1_Locus1 is of genotype A, which has 100% binding capacity to the SNP probe, whereas P1_Locus2 is of genotype B, whose binding capacity would be decreased by as few as three mismatch sites in a 50 bp-long probe [50-53]. Such a binding difference would result in a heterozygous AB signal (Figure 4). On the other hand, if the probe failed to bind Locus2 due to competition with Locus1, there would be an incorrect classification as AA (the genotype of Locus1). To analyze the possibility of such an error occurring, we set out to assess the stability and error of the SNP detection system by calculating the PNPG at SNP loci. In the two-dimensional matrix described above, the PNPG of the horizontal SNP loci (PNPG_SNP) can be calculated with the following formula:

Possible genotypes derived from inter-homoeologues targeted by a given SNP probe and their frequency in the HJ-DH population. Considering the two inter-homoeologous sequences YY and RR in P1 and their alleles yy and rr in P2 as two independent loci in the genome, the DH population is expected to contain four genotypes, YYRR, YYrr, yyRR, and yyrr, each with a frequency of 1/4 (top left). Fluorescence signals of the parental lines are assigned as AA (C/G base, red), BB (A/T base, green) and AB (heterozygous, orange), respectively. In the case of a null locus, the missing signal is assigned as NC (grey). In the Pseudo-simple SNP type (lower left), only the same set of signals as a Simple SNP is detected in the parental lines, but there will be non-parental genotype (NPG) segregation in the DH population in groups 1–5. In the sHemi-SNP type, an AB (heterozygous) signal will be detected due to the presence of a hemi-SNP, but there will be no NPG in the DH population (top right). In the mHemi-SNP type, the signal values are similar to the sHemi-SNP in the parental lines, but NPG signals will be detected in the DH population due to multiple mismatched nucleotides within the inter-homoeologous sequences (lower right). The color bars can be used for calculation of signal values in the DH population (expected frequency). "*" marks the NPGs that occurred in the DH population. N is the number of polymorphic SNPs in the indicated group(s). The numbers of each signal for the corresponding SNP group in the 179 DH lines are listed in the column AA:AB:BB:NC.
The number in brackets refers to the ratio for each signal (genotype).

$$ PNPG_{\_SNP} = 1 - \frac{PG_i}{M_i} $$

where PNPG_SNP represents the percentage of NPGs at the ith SNP locus across all DH lines, PG_i represents the number of PGs at the ith SNP locus across all DH lines, and M_i represents the number of DH lines detected at the ith SNP locus (a companion sketch follows at the end of this passage). After excluding the unauthentic DH lines, if the genotypes of both parents are AA at a SNP locus, the genotype of all DH lines should theoretically be AA at this locus. If a different genotype (such as AB or BB) is detected, it is very likely the result of a detection error. In this case, a PNPG value of 0.05 was again used as the threshold to determine the reliability of calls at a SNP locus. Next, the PNPGs for the 3,456 monomorphic SNPs in the horizontal direction were analyzed. A PNPG ≥0.05 was found at 108 (3.13%) SNP loci, indicating that the detection of most of the SNP loci was reliable. After excluding these 108 SNP loci, the average PNPG for the remaining SNP loci was 1.60E-03 (Table 2). The analysis of the remaining 3,348 SNP loci showed that if the genotypes of both parents were homozygous (AA or BB), the rate of detecting an NPG in the population was <0.005; if the genotypes of both parents were heterozygous (AB), the rate of detecting a homozygous genotype in the population was <0.05; and if both parents were detected as "no call" (NC), the PNPG was even lower (Table 2). These results suggest that if the PNPG is >0.05 at a SNP locus of a B. napus SNP array, it is most likely caused by inter-homoeologue polymorphism or signal superposition of multiple SNP loci. The generation of these heterozygous signals (AB) is due to the complexity of the genome of allotetraploid B. napus.

Table 2. The proportion of each theoretically possible genotype at the monomorphic SNP loci. (Only fragments of the table survived extraction, e.g. a parental-genotype percentage of 0.998 and a PNPG of 8.91E-05; the full table body is not recoverable.) Percentages of parental and non-parental genotypes are calculated as the number of each detected genotype / (179 analyzed DH lines × probe number for each genotype); percentages of parental genotypes were bolded in the original table. PNPG: percentage of non-parental genotype.

The above analysis showed that, using a two-dimensional matrix constructed from the genotypes of the monomorphic SNPs in the DH population together with PNPG analysis, unauthentic DH lines could be excluded (using the columns of the matrix), and the error and stability of the system could be estimated (using the rows). These analyses can improve the quality and utilization efficiency of SNP array data.

Bi-filtering analysis can reduce the interference of mHemi-SNP and Pseudo-simple SNP loci

Previously, the assignment of polymorphic SNP loci was conducted using two methods. The first method is to simply remove the loci that exhibit heterozygous signals in one of the parents; the other is to score signals in the segregating population that have the same value as P1 as genotype A, signals that have the same value as P2 as genotype B, and non-parental signal values as missing ("-") [26,27,30]. The premise of the second method is the specific binding of a SNP probe to a single locus in the genome. Due to the existence of a large number of inter-homoeologues in the genome of B. napus, a considerable number of the SNP probes cannot meet this requirement. To reduce the impact of multi-loci SNPs on the subsequent genetic linkage analysis, we further divided Hemi-SNPs into two sub-groups, sHemi-SNPs and mHemi-SNPs, according to whether an NPG can be identified in the DH population.
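The row-wise statistic is the mirror image of the line-wise one; here is a minimal sketch, again with toy data and the same hypothetical "AA"/"BB"/"AB"/"NC" coding, in which a call counts as parental if it matches either parent.

```python
import pandas as pd

def pnpg_per_snp(geno, p1, p2):
    """PNPG_SNP = 1 - PG_i / M_i for each SNP locus (a row of `geno`)."""
    detected = geno != "NC"                                       # M_i per locus
    parental = (geno.eq(p1, axis=0) | geno.eq(p2, axis=0)) & detected  # PG_i
    return 1 - parental.sum(axis=1) / detected.sum(axis=1)

loci = ["s1", "s2"]
p1 = pd.Series(["AA", "AA"], index=loci)
p2 = pd.Series(["BB", "AA"], index=loci)
geno = pd.DataFrame(
    {"DH1": ["AA", "AA"], "DH2": ["BB", "AB"], "DH3": ["AB", "AA"]},
    index=loci,
)
print(pnpg_per_snp(geno, p1, p2))   # each locus: 1/3; loci above 0.05 are flagged
```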
A so-called sHemi-SNP refers to a probe that generates a heterozygous signal (AB) in one of the two parental lines and for which the parental genotypes can be detected, but no NPG is detected in the offspring DH lines. As illustrated in Figure 4 (groups 8–11), a sHemi-SNP may include two of the three possible signals (genotypes AA/BB/AB) in the parental lines, and for each group of parental genotypes, the offspring DH lines can only produce parental genotypes. Furthermore, such a segregation of the two different parental genotypes fits the expected frequency. In contrast, an mHemi-SNP produces an extra non-parental signal in the offspring DH lines in addition to the parent-type signals (genotypes), due to more mismatched bases in the inter-homoeologue (Figure 4, groups 12–16), which may result in no hybridization signal and consequently a null detection for one of the inter-homoeologous sequences. Obviously, the genotypes of mHemi-SNPs and Pseudo-simple SNPs are a superposition of the signals from multiple SNP loci, which cannot represent the corresponding genotype of the probe itself (Figure 4). Therefore, the mHemi-SNPs and Pseudo-simple SNPs should be removed to avoid any impact on the calculation of linkage between these loci. To identify the differences between mHemi-SNPs and Pseudo-simple SNPs on the one hand and the other two types of SNPs (Simple SNPs and sHemi-SNPs) on the other, a method similar to the one used above for the analysis of monomorphic SNPs was applied to analyze the PNPG values of polymorphic SNP loci. When the parental genotypes were AB/BB, the PNPG values of the two SNP types sHemi-SNP and mHemi-SNP differed in the DH population (Figure 4, groups 10, 11, 14, and 15). Considering the two inter-homoeologous sequences YY and RR in P1 and their alleles yy and rr in P2 as two independent loci in the genome, the DH population is expected to contain four genotypes, YYRR, YYrr, yyRR, and yyrr, each with a fixed frequency of 1/4 through haploid production (Figure 4, top left). In the Pseudo-simple SNP type (Figure 4, lower left), only the same set of signals as a Simple SNP (Additional file 2: Figure S1a-S1b) is detected in the parental lines, but there will be NPG segregation in the DH population in the first five groups (Figure 4, groups 1–5; Additional file 2: Figure S1c-S1f). In the sHemi-SNP type, an AB (heterozygous) signal will be detected due to the presence of a hemi-SNP, but there will be no NPG in the DH population (Figure 4, top right; Additional file 2: Figure S1i-S1j). In the mHemi-SNP type, the signal values are similar to the sHemi-SNP in the parental lines, but NPG signals will be detected in the DH population due to multiple mismatched nucleotides within the inter-homoeologous sequences (Figure 4, lower right; Additional file 2: Figure S1k-S1m). Furthermore, assuming that Locus1 and Locus2 have no linkage relationship, the expected frequencies of signal values (reflecting the corresponding genotypes) in the DH population can be deduced from the parental signals (Figure 4; a toy illustration follows below). Once we have the expected frequencies for all four possible genotypes in the DH population, we can easily distinguish the different types of SNPs listed in Figure 4. Since we can calculate the expected frequencies of PGs and NPGs in each group, we introduced a chi-squared (χ2) test of the observed versus expected frequencies of NPG appearance in the DH population. We used PNPG = 0.05 as the threshold to judge the presence or absence of NPGs.
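To see how expected signal frequencies can be deduced from the superposition logic of Figure 4, here is a toy model under simplifying assumptions: each of the two inter-homoeologous loci contributes an allele "A", "B", or nothing (a null/no-binding locus), the combined signal is AB whenever both allele types are present, and the two loci segregate independently 1:1 in the DH population. The particular allele assignments are hypothetical, chosen to reproduce a Pseudo-simple case.

```python
from collections import Counter
from itertools import product

def combined_signal(a1, a2):
    """Superposed two-locus signal under a toy binding model: each locus
    contributes allele 'A', 'B', or None (null / no binding)."""
    alleles = {a for a in (a1, a2) if a is not None}
    if not alleles:
        return "NC"
    if alleles == {"A"}:
        return "AA"
    if alleles == {"B"}:
        return "BB"
    return "AB"

# Hypothetical Pseudo-simple case: at Locus1 the P1 allele reads 'A' and the
# P2 allele 'B'; at Locus2 the P1 allele is null and the P2 allele reads 'B'.
locus1 = ["A", "B"]      # P1 allele vs P2 allele at Locus1
locus2 = [None, "B"]     # P1 allele vs P2 allele at Locus2
freq = Counter(combined_signal(a, b) for a, b in product(locus1, locus2))
print({k: v / 4 for k, v in freq.items()})   # {'AA': 0.25, 'AB': 0.25, 'BB': 0.5}
# The parents read AA (A + null) and BB (B + B), like a Simple SNP, yet the
# DH population shows AB at frequency 1/4: a non-parental genotype.
```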
There was an exception to the above analysis in the Pseudo-simple SNP type, where the two polymorphic genotype combinations AA/NC and BB/NC (Figure 4, groups 6 and 7; Additional file 2: Figure S1g-S1h) cannot be distinguished from a typical Simple SNP through the PNPG values. However, the frequencies of the two parental signal values in the DH population were 0.5 and 0.5 for Simple SNPs of AA/NC and BB/NC, versus 0.75 and 0.25 for Pseudo-simple SNPs (Figure 4, groups 6 and 7; Additional file 2: Figure S1g-S1h). Therefore, the frequencies of these two polymorphic genotypes AA/NC and BB/NC can be used to determine whether a SNP is a Simple SNP (0.5/0.5) or a Pseudo-simple SNP (0.75/0.25). It was noted that two parents with AB signals also generated three clear clusters of AA, AB, and BB signals (Additional file 2: Figure S1n). Based on the PNPG values of the SNP loci, the genotype data for several other polymorphic genotype combinations were identified as single-locus sHemi-SNPs (PNPG <0.05). There were 175 (9.5%) SNP loci with the two polymorphic genotype combinations AA/NC and BB/NC, including 53 loci with PNPG values >0.05; the remaining 122 loci (6.6% of the total polymorphic loci) could be separated into 80 Simple SNPs (P = 0.4391) and 42 Pseudo-simple SNPs (P = 0.0012) by the χ2 test (a sketch of this classification follows at the end of this passage). Based on the above analysis, the SNP loci with PNPG values >0.05 can be considered multi-loci SNPs (mHemi-SNPs and Pseudo-simple SNPs; the AA/NC and BB/NC genotype combinations could be distinguished by examining their segregation ratios), whereas the SNP loci with PNPG values <0.05 were considered single-locus SNPs (Simple SNPs and sHemi-SNPs). Based on this standard, 1,573 SNP loci (85.0%) were screened from the 1,850 polymorphic SNP loci for the subsequent analysis. Using the PNPG value and the χ2 test to extract single-locus SNPs (Simple SNPs and sHemi-SNPs) from the SNP array data can maximize the utilization of the total SNP loci and remove the multi-loci SNPs (mHemi-SNPs and Pseudo-simple SNPs), which were difficult to identify in previous studies. It is worth pointing out that homoeologous recombination between the A and C genomes might result in non-parental genotype calls. There are two possible consequences if such recombination events happen. First, if a given SNP locus is located within the homoeologous recombination fragment, its genotype will be identified as at a regular locus, no matter where the fragment is located in the B. napus genome. In this case, the locus will not be assigned an NPG count. Second, if such a SNP locus is located right at the breakpoint of the homoeologous recombination fragments, the locus will not be identified, resulting in a false NPG count. It is now known that homoeologous recombination between the A and C genomes occurs at a relatively low frequency at the level of large fragments [2,30]. The probability that a given SNP locus is located exactly at a breakpoint is very low. Therefore, it is reasonable to consider such recombination events negligible. The signal values of the 1,573 valid SNP loci were converted to genotype values. Genotypes that were the same as that of parent P1 were recorded as "A", genotypes that were the same as that of parent P2 were recorded as "B", and non-parental genotypes were treated as missing ("-"). We named this method of using PNPG values to filter out unauthentic lines and multi-loci SNPs from SNP array data "bi-filtering analysis".
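As a sketch of the χ2 classification for the AA/NC (or BB/NC) case, the observed counts of the two signals can be tested against the 0.5/0.5 expectation of a Simple SNP and the 0.75/0.25 expectation of a Pseudo-simple SNP. The paper does not give its exact implementation, so the decision rule and the counts below are illustrative.

```python
from scipy.stats import chisquare

def classify_aa_nc_locus(n_aa, n_nc, alpha=0.05):
    """Goodness-of-fit of observed (AA, NC) counts against the two expected
    segregation ratios; reject one hypothesis and retain the other."""
    n = n_aa + n_nc
    p_simple = chisquare([n_aa, n_nc], f_exp=[0.5 * n, 0.5 * n]).pvalue
    p_pseudo = chisquare([n_aa, n_nc], f_exp=[0.75 * n, 0.25 * n]).pvalue
    if p_simple >= alpha and p_pseudo < alpha:
        return "Simple SNP"
    if p_pseudo >= alpha and p_simple < alpha:
        return "Pseudo-simple SNP"
    return "ambiguous"

print(classify_aa_nc_locus(92, 87))   # close to 1:1 -> Simple SNP
print(classify_aa_nc_locus(135, 44))  # close to 3:1 -> Pseudo-simple SNP
```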
In brief, the bi-filtering method can be summarized as follows (a compact sketch combining the two passes appears after the Figure 5 caption below). First, we used the monomorphic SNPs to calculate the PNPG value of a given DH line (the number of SNPs with non-parental genotypes in a given DH line divided by the total number of genotyped SNPs in that line) to filter out the unauthentic lines (Additional file 3: Figure S2). Second, we used the polymorphic SNPs to calculate the PNPG value of a given SNP locus (the number of DH lines with non-parental genotypes at a given SNP divided by the number of DH lines genotyped at that SNP) to filter out the mHemi-SNPs and Pseudo-simple SNPs (Additional file 3: Figure S2). The bi-filtering method not only makes use of the monomorphic SNPs to identify unauthentic lines and to assess possible errors in SNP array detection, but also uses the polymorphic SNPs more accurately. The method can thus improve the efficiency and accuracy of SNP array data containing a large portion of heterozygous signals, which is common in high-throughput genotyping of polyploid species [16,54]. The bi-filtering method is also suitable for analyzing genotyping data obtained by re-sequencing a population and its parents. A flow diagram was constructed for analyzing high-throughput genotyping data (re-sequencing and SNP array) of bi-parental populations (Additional file 3: Figure S2). However, more work will be needed to verify the effectiveness of the bi-filtering method for analyzing re-sequencing data.

The bi-filtering analysis improves the quality of the genetic linkage map

Previously, we constructed a genetic map (Map C) with 190 DH lines and 2,323 polymorphic markers (1,850 SNPs and 473 SSRs) [30] by means of the conventional method, which uses simple substitution of genotypes based on the signal values of the parents [26,27]. Linkage analysis mapped 2,115 markers in 19 linkage groups (LGs) on the Map C, which was 2,477.4 cM in length with an average spacing of 1.27 cM between markers [30]. To assess the effect of the SNP array data processed with bi-filtering analysis on the quality of the genetic map, we constructed a new version of the genetic map with the processed data (Figure 5, Additional file 4: Table S2, and Additional file 5: Table S3) and compared this map with the Map C. After a bi-filtering analysis of both the mapping population and the SNP markers as described above, 179 DH lines and 2,046 polymorphic loci (1,573 SNPs and 473 SSRs) were used to produce a genetic map. Linkage analysis finally placed 2,014 loci onto 19 LGs (Figure 5, Additional file 4: Table S2, and Additional file 5: Table S3), resulting in a new version of the genetic map (Map B) with a total length of 2,020.3 cM and an average spacing of 1.00 cM (Additional file 4: Table S2 and Additional file 5: Table S3).

Comparison of the Map B and Map C with the HJ-DH population. The left and right vertical bars of each panel represent the linkage groups (LGs) of the Map B and Map C, respectively. Each LG is represented by a vertical bar and each marker by a transverse line. The same markers between the LGs of the Map B and Map C are connected with black lines. The simple sequence repeat (SSR) markers, single nucleotide polymorphism (SNP) markers, mHemi-SNPs and Pseudo-simple SNPs, and the markers that could only be assigned on the Map B are shown with black, red, blue and yellow transverse lines, respectively. The Map C data come from Cai et al. [30].
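Combining the two passes, here is a compact sketch of the whole bi-filtering procedure, using the same hypothetical genotype coding as the earlier sketches; this illustrates the logic, not the authors' code. In the paper's terms, pass 1 works on the columns of the two-dimensional matrix and pass 2 on its rows, with the same 0.05 PNPG cutoff in both directions.

```python
import pandas as pd

def bi_filter(geno, p1, p2, thresh=0.05):
    """Bi-filtering sketch: (1) drop unauthentic lines by PNPG over monomorphic
    loci; (2) drop multi-loci SNPs by PNPG over polymorphic loci.
    geno: loci x lines DataFrame; p1/p2: per-locus parental genotype Series."""
    detected = geno != "NC"
    parental = (geno.eq(p1, axis=0) | geno.eq(p2, axis=0)) & detected

    # Pass 1 (matrix columns): authenticity of each DH line.
    mono = (p1 == p2) & (p1 != "NC")
    pnpg_dh = 1 - parental[mono].sum(axis=0) / detected[mono].sum(axis=0)
    keep_lines = pnpg_dh.index[pnpg_dh <= thresh]

    # Pass 2 (matrix rows): single-locus vs multi-loci SNPs, on kept lines only.
    poly = (p1 != p2) & (p1 != "NC") & (p2 != "NC")
    g = geno.loc[poly, keep_lines]
    d = g != "NC"
    par = (g.eq(p1[poly], axis=0) | g.eq(p2[poly], axis=0)) & d
    pnpg_snp = 1 - par.sum(axis=1) / d.sum(axis=1)
    keep_snps = pnpg_snp.index[pnpg_snp <= thresh]

    return geno.loc[keep_snps, keep_lines]

# Usage: filtered = bi_filter(geno, p1, p2), with objects shaped as in the
# earlier toy examples.
```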
Compared with the Map C, the Map B had an increased marker density after bi-filtering out both the unauthentic DH lines (11 lines) from the mapping population and the mHemi-SNP and Pseudo-simple SNP markers from the SNP array (Figure 5, Additional file 4: Table S2, and Additional file 5: Table S3). Previously, 208 markers (9.0% of the total polymorphic markers) could not be located on the genetic map, while this ratio was reduced to 1.6% (32 markers) on the Map B (Figure 5, Additional file 4: Table S2, and Additional file 5: Table S3). Interestingly, all 1,573 filtered SNPs were mapped on the Map B. There were 132 mHemi-SNP and Pseudo-simple SNP markers included in the Map C that were excluded from the Map B (Figure 5, Additional file 4: Table S2, and Additional file 5: Table S3). On the LGs with fewer mHemi-SNPs and Pseudo-simple SNPs, such as LGs A04, A05, A09, C03 and C05, the two maps showed good consistency (Figure 5, and Additional file 5: Table S3). However, other LGs exhibited obvious inconsistencies, especially in the regions harboring mHemi-SNPs and Pseudo-simple SNPs, suggesting that mHemi-SNPs, Pseudo-simple SNPs, and unauthentic DH lines affected the mapping quality. First, the unauthentic DH lines may cause exchanges of marker positions. For instance, several regions on LGs A06 (30.1–69.7 cM), A08 (29–61 cM), C07 (0–18.2 cM) and C08 (86–107.7 cM), or their neighboring regions, contained few mHemi-SNPs and Pseudo-simple SNPs in the Map C (Figure 5, Additional file 6: Figure S3, Additional file 4: Table S2, and Additional file 5: Table S3), but there were obvious marker rearrangements and inversions in these regions (indicated by crossed lines in Figure 5 and Additional file 6: Figure S3). The positions and orders of the markers in these regions became consistent with the Map B after removal of the 11 unauthentic DH lines (Additional file 6: Figure S3). Second, the mHemi-SNPs and Pseudo-simple SNPs could result in a lower marker density in the Map C. There were obvious marker rearrangements and inversions in the regions of LGs A03 (0–132.3 cM; 9 mHemi-SNPs and Pseudo-simple SNPs), A07 (0–155.9 cM; 16 mHemi-SNPs and Pseudo-simple SNPs) and C01 (0–190.1 cM; 18 mHemi-SNPs and Pseudo-simple SNPs) of the Map C due to the existence of mHemi-SNPs and Pseudo-simple SNPs (Figure 5 and Additional file 7: Figure S4). However, the positions and orders of the markers in these regions became consistent between the two maps when the mHemi-SNPs and Pseudo-simple SNPs in these regions were removed (with the 11 unauthentic DH lines retained; Figure 5 and Additional file 7: Figure S4). Third, mHemi-SNPs, Pseudo-simple SNPs, and the unauthentic DH lines may exert their influences jointly. For instance, in the regions of LGs A02 (62.8–122.8 cM; 6 mHemi-SNPs and Pseudo-simple SNPs), C02 (41.2–62.2 cM; 5 mHemi-SNPs and Pseudo-simple SNPs), C04 (138.8–174.1 cM; 4 mHemi-SNPs and Pseudo-simple SNPs) and C06 (21.2–61.7 cM; 7 mHemi-SNPs and Pseudo-simple SNPs) of the Map C, which contained more mHemi-SNPs and Pseudo-simple SNPs, the two maps showed obvious inconsistency (Figure 5, Additional file 4: Table S2, and Additional file 5: Table S3). It was found that mHemi-SNPs and Pseudo-simple SNPs could create pseudo genetic linkage relationships between these loci and other markers.
In total, 87 mHemi-SNPs and Pseudo-simple SNPs were located on LGs A02, A03, A07, C01, C04, C06 and C08 of the Map C, which resulted in excess fragments or markers on the Map C (Figure 5, Additional file 4: Table S2, and Additional file 5: Table S3). The other mHemi-SNPs and Pseudo-simple SNPs were dispersed across the Map C and thus resulted in a decrease in mapping density. Due to such interference, 77 Simple SNPs could not be mapped on the genetic map, whereas all of these Simple SNPs were linked to the genetic map after bi-filtering analysis (Figure 5, and Additional file 5: Table S3). Since the linear relationships of the SSR marker loci on each of the LGs in both the Map B and the Map C have been validated in different maps [45,46,55-58], a framework map of SSRs could serve as a reference to evaluate the linear relationships of the SNP loci. To further compare the differences between the two maps, the graphical genotype of each DH line was constructed with the genotyping data from the SSR markers and from the SNP markers processed with the bi-filtering and conventional methods, respectively. The graphical genotype of each DH line exhibited good collinearity between the framework map of SSRs and the Map B (Additional file 8: Figure S5). However, the graphical genotypes based on the Map C showed pseudo exchange fragments (caused by inversions, translocations and pseudo chromosome fragments) in some DH lines, especially in the LGs with more mHemi-SNPs and Pseudo-simple SNPs (Additional file 8: Figure S5). Based on the above analysis, we concluded that screening out mHemi-SNPs, Pseudo-simple SNPs, and unauthentic DH lines is important for the construction of genetic maps. The bi-filtering analysis can remove mHemi-SNPs, Pseudo-simple SNPs, and unauthentic DH lines, thus improving the quality of a genetic map, as observed in the Map B. With more loci included in future higher-density SNP arrays, such as 60K SNP arrays [26,28,29], more mHemi-SNPs and Pseudo-simple SNPs are expected to be filtered out, further improving the mapping quality.

The bi-filtering analysis increases the accuracy of QTL mapping

Next, we analyzed whether the mHemi-SNP and Pseudo-simple SNP loci could have any adverse effects on QTL mapping. Results of QTL mapping of 20 agronomic traits in four environments were compared between the two maps. On the whole, 346 and 364 QTLs for the 20 traits were identified with the Map B and Map C, respectively. There were 36 QTLs located on the excess fragments of LGs A03, A07, C01, C06 and C08 in the Map C, which explained 2.68–21.56% of the phenotypic variation (R²; 9.96% on average), with logarithm of odds (LOD) scores of 3.66–21.1 (7.07 on average). However, these QTLs could not be identified with the Map B, because the pseudo fragments had been filtered out and the pseudo-QTLs resulting from the mHemi-SNPs and Pseudo-simple SNPs eliminated. Moreover, the mHemi-SNPs and Pseudo-simple SNPs dispersed along the different LGs of the Map C also affected QTL identification. There were 18 QTLs that could be identified with the Map B but not with the Map C. These data thus illustrate that the mapping accuracy of QTLs can be affected to a significant extent by mHemi-SNPs and Pseudo-simple SNPs. To illustrate this effect more clearly, we focused on the QTLs on LGs A07 and C01 (Figure 6).
In the Map C, these two LGs contained 16 and 18 mHemi-SNPs and Pseudo-simple SNPs, respectively (Additional file 4: Table S2), which caused an inversion in the upper portion and an extra fragment of approximately 82 cM in length in the lower portion of LG C01. In this region, a major QTL for seed protein content (PC), with a LOD score of 11.9 and a contribution to the phenotype of up to 18.1%, was detected. However, this QTL was no longer detectable in LG C01 of the Map B, in which these mHemi-SNPs and Pseudo-simple SNPs had been eliminated. Such a difference suggested that these mHemi-SNPs and Pseudo-simple SNPs could cause the erroneous detection of QTLs. Similarly, for LG A07, the presence of 16 mHemi-SNPs and Pseudo-simple SNPs led to disorder in the linkage relationships of the markers on A07 (Figure 6). There were 6 pseudo QTLs identified in this region on the Map C, among which a QTL for silique density (SD) with a LOD score of 7.66 and a phenotypic contribution of up to 11.7% was mapped, but no corresponding QTL was detectable in the Map B. Taken together, these results indicate that the mHemi-SNPs and Pseudo-simple SNPs could interfere with the establishment of the linkage relationships between markers and consequently affect QTL mapping, as well as candidate gene analysis, even though they only accounted for 6.1% of the total number of markers in the Map C. Therefore, the removal of unauthentic lines, mHemi-SNPs, and Pseudo-simple SNPs from the data using the bi-filtering method can improve the accuracy of a genetic map, which is crucial for subsequent analyses.

Comparison of genetic map construction and quantitative trait locus (QTL) mapping on the linkage groups (LGs) A07 and C01 of the HJ-DH population with the Map B and Map C. Only a sub-set of molecular markers is presented for each LG. The single nucleotide polymorphism (SNP) markers underlined in red are the mHemi-SNPs and Pseudo-simple SNPs. Detailed information about the LGs of the Map B is given in Additional file 5: Table S3, and the Map C data come from Cai et al. [30]. The same markers in the depicted sub-sets of the two LGs are aligned with black lines. PC: protein content; SD: silique density. Significance thresholds for QTLs at the level P = 0.05 are estimated based on 1,000 permutations.

We have developed a novel bi-filtering method to effectively identify unauthentic DH lines as well as mHemi-SNP and Pseudo-simple SNP loci resulting from the superposition of signals from multiple SNP loci in SNP arrays. Such a bi-filtering analysis can maximize the accurate use of SNP array data in polyploid species, to which many important crops belong. The power of the method should be even more obvious for higher-density arrays, where manual filtering becomes difficult.

Guangqin Cai and Qingyong Yang contributed equally to this work.

Abbreviations: CSS: Cluster separation score; MAF: Minor allele frequency; PG: Parental genotype; NPG: Non-parental genotype; PNPG: Percentage of non-parental genotype; sHemi-SNP: Hemi-SNP from a single locus; mHemi-SNP: Hemi-SNP from multiple loci; PC: Protein content; SD: Silique density.

We thank Drs. Lingling Chen and Weibo Xie at the College of Life Science and Technology, Huazhong Agricultural University, China for critical reading of the manuscript. We are grateful to two anonymous reviewers for their valuable comments and suggestions on the manuscript.
The work was financially supported by funding from the Ministry of Science and Technology of China (Grant nos. 2014DFA32210 and 2012BAD49G00), the Ministry of Agriculture of China (nycytx-00503 and 948 project (2011-G23)), the National Natural Science Foundation of China (31371659, 31301005), the China Postdoctoral Science Foundation (2013M542033), and Huazhong Agricultural University (STSIF 2010YB05).

Additional file 1: Table S1. Detailed information on the PNPG of the DH population, the monomorphic SNP loci, and the two-dimensional matrix of the monomorphic SNP locus genotypes in the HJ-DH population.

Additional file 2: Figure S1. Clusters of different types of SNPs in the DH population without the 11 unauthentic DH lines. The SNP classification and group number for each cluster are the same as those in Figure 4.

Additional file 3: Figure S2. A flow diagram for analyzing the high-throughput genotyping data (re-sequencing and SNP array) of bi-parental populations.

Additional file 4: Table S2. Parameters of the two genetic maps constructed by the bi-filtering analysis method (Map B) and the conventional method (Map C). The Map C data come from Cai et al. [30].

Additional file 5: Table S3. Detailed information on the genetic linkage maps of the HJ-DH population constructed by the bi-filtering analysis method (Map B), and the homoeologous loci and homoeologous collinear fragments identified in B. rapa and B. oleracea.

Additional file 6: Figure S3. Effects of unauthentic DH lines on the localization of the SNP markers on linkage groups (LGs) A06, A08, C07, and C08. The left, middle and right vertical bars of each panel represent the LGs constructed with the data from the Map C without the 11 unauthentic DH lines, the Map B, and the Map C with the 11 unauthentic DH lines, respectively. Each LG is represented by a vertical bar and each marker by a transverse line. The same markers between the LGs of these three maps are connected with black lines. The simple sequence repeat (SSR) markers, single nucleotide polymorphism (SNP) markers, mHemi-SNPs and Pseudo-simple SNPs, and the markers that could only be assigned on the Map B are shown with black, red, blue and yellow transverse lines, respectively. The Map C data were taken from Cai et al. [30].

Additional file 7: Figure S4. Effects of mHemi-SNPs and Pseudo-simple SNPs on the localization of the SNP markers on linkage groups (LGs) A03, A07, and C01. The left, middle and right vertical bars of each panel represent the LGs constructed with the data from the Map C without the mHemi-SNPs and Pseudo-simple SNPs in 190 DH lines, the Map B, and the Map C with the mHemi-SNPs and Pseudo-simple SNPs in 190 DH lines, respectively. Each LG is represented by a vertical bar and each marker by a transverse line. The same markers between the LGs of these three maps are connected with black lines. The simple sequence repeat (SSR) markers, single nucleotide polymorphism (SNP) markers, mHemi-SNPs and Pseudo-simple SNPs, and the markers that could only be assigned on the Map B are shown with black, red, blue and yellow transverse lines, respectively. The Map C data were taken from Cai et al. [30].

Additional file 8: Figure S5. The graphical genotypes of three DH lines (DH2, DH3 and DH81) on LGs A07, C01 and C04 constructed by the SSR, bi-filtering and conventional methods, respectively. In each panel, the left, middle and right LGs were constructed by the SSR, bi-filtering and conventional methods, respectively. The Map C data come from Cai et al. [30].
In each LG, the horizontal bars represent molecular markers; red, blue and black represent the P1, P2 and missing genotypes, respectively. Arrows indicate the pseudo fragments in the genetic map constructed by the conventional method, which do not exist in the maps constructed by the SSR and bi-filtering methods.

YZ and GC conceived the research. GC and QY performed the SNP array data analysis and comparative mapping. JB and DE designed and developed the SNP array. BY carried out the SNP genotyping. GC, CF, and CZ performed the genetic map construction, graphical genotype analysis and QTL mapping. GC, YZ, JB, and DE wrote the paper. All authors have read, commented on, and approved the final manuscript.

National Key Laboratory of Crop Genetic Improvement, Huazhong Agricultural University, Wuhan, 430070, China
Key Laboratory of Rapeseed Genetics and Breeding of Agriculture Ministry of China, Huazhong Agricultural University, Wuhan, 430070, China
School of Agriculture and Food Sciences, University of Queensland, St Lucia, QLD, Australia

References

Nagaharu U. Genome analysis in Brassica with special reference to the experimental formation of B. napus and peculiar mode of fertilization. Jap J Bot. 1935;7:389–452.
Chalhoub B, Denoeud F, Liu S, Parkin IA, Tang H, Wang X, et al. Early allopolyploid evolution in the post-Neolithic Brassica napus oilseed genome. Science. 2014;345:950–3.
Lysak MA, Koch MA, Pecinka A, Schubert I. Chromosome triplication found across the tribe Brassiceae. Genome Res. 2005;15:516–25.
Wang X, Wang H, Wang J, Sun R, Wu J, Liu S, et al. The genome of the mesopolyploid crop species Brassica rapa. Nat Genet. 2011;43:1035–9.
Liu S, Liu Y, Yang X, Tong C, Edwards D, Parkin IA, et al. The Brassica oleracea genome reveals the asymmetrical evolution of polyploid genomes. Nat Commun. 2014;5:3930.
Schranz ME, Lysak MA, Mitchell-Olds T. The ABC's of comparative genomics in the Brassicaceae: building blocks of crucifer genomes. Trends Plant Sci. 2006;11:535–42.
Yang Q, Fan C, Guo Z, Qin J, Wu J, Li Q, et al. Identification of FAD2 and FAD3 genes in Brassica napus genome and development of allele-specific markers for high oleic and low linolenic acid contents. Theor Appl Genet. 2012;125:715–29.
Syvanen A. Accessing genetic variation: genotyping single nucleotide polymorphisms. Nat Rev Genet. 2001;2:930–42.
Morgante M, Salamini F. From plant genomics to breeding practice. Curr Opin Biotechnol. 2003;14:214–9.
Schmid KJ, Sörensen TR, Stracke R, Törjék O, Altmann T, Mitchell-Olds T, et al. Large-scale identification and analysis of genome-wide single-nucleotide polymorphisms for mapping in Arabidopsis thaliana. Genome Res. 2003;13:1250–7.
Zhu Y, Song Q, Hyten D, Van Tassell C, Matukumalli L, Grimm D, et al. Single-nucleotide polymorphisms in soybean. Genetics. 2003;163:1123–34.
Yu H, Xie W, Li J, Zhou F, Zhang Q. A whole-genome SNP array (RICE6K) for genomic breeding in rice. Plant Biotechnol J. 2013;12:28–37.
Edwards D, Forster JW, Cogan NOI, Batley J, Chagné D. Single nucleotide polymorphism discovery. In: Oraguzie NC, Rikkerink EHA, Gardiner SE, De Silva HN, editors. Association mapping in plants. New York: Springer; 2007. p. 53–76.
Hayward A, Morgan JD, Edwards D. SNP discovery and applications in Brassica napus. J Plant Biochem Biotechnol. 2012;39:49–61.
Westermeier P, Wenzel G, Mohler V. Development and evaluation of single-nucleotide polymorphism markers in allotetraploid rapeseed (Brassica napus L.). Theor Appl Genet. 2009;119:1301–11.
Durstewitz G, Polley A, Plieske J, Luerssen H, Graner E, Wieseke R, et al. SNP discovery by amplicon sequencing and multiplex SNP genotyping in the allopolyploid species Brassica napus. Genome. 2010;53:948–56.
Trick M, Long Y, Meng J, Bancroft I. Single nucleotide polymorphism (SNP) discovery in the polyploid Brassica napus using Solexa transcriptome sequencing. Plant Biotechnol J. 2009;7:334–46.
Bancroft I, Morgan C, Fraser F, Higgins J, Wells R, Clissold L, et al. Dissecting the genome of the polyploid crop oilseed rape by transcriptome sequencing. Nat Biotechnol. 2011;29:762–6.
Harper AL, Trick M, Higgins J, Fraser F, Clissold L, Wells R, et al. Associative transcriptomics of traits in the polyploid crop species Brassica napus. Nat Biotechnol. 2012;30:798–802.
Bus A, Hecht J, Huettel B, Reinhardt R, Stich B. High-throughput polymorphism detection and genotyping in Brassica napus using next-generation RAD sequencing. BMC Genomics. 2012;13:281.
Delourme R, Falentin C, Fomeju BF, Boillot M, Lassalle G, André I, et al. High-density SNP-based genetic map development and linkage disequilibrium assessment in Brassica napus L. BMC Genomics. 2013;14:120.
Chen X, Li X, Zhang B, Xu J, Wu Z, Wang B, et al. Detection and genotyping of restriction fragment associated polymorphisms in polyploid crops with a pseudo-reference sequence: a case study in allotetraploid Brassica napus. BMC Genomics. 2013;14:346.
Mikolajczyk K, Dabert M, Karlowski W, Spasibionek S, Nowakowska J. Allele-specific SNP markers for the new low linolenic mutant genotype of winter oilseed rape. Plant Breed. 2010;129:502–7.
Rahman M, Li G, Schroeder D, McVetty PBE. Inheritance of seed coat color genes in Brassica napus (L.) and tagging the genes using SRAP, SCAR and SNP molecular markers. Mol Breed. 2010;26:439–53.
Hu X, Sullivan-Gilbert M, Gupta M, Thompson SA. Mapping of the loci controlling oleic and linolenic acid contents and development of fad2 and fad3 allele-specific markers in canola (Brassica napus L.). Theor Appl Genet. 2006;113:497–507.
Liu L, Qu C, Wittkop B, Yi B, Xiao Y, He Y, et al. A high-density SNP map for accurate mapping of seed fibre QTL in Brassica napus L. PLoS One. 2013;8:e83052.
Raman H, Dalton-Morgan J, Diffey S, Raman R, Alamery S, Edwards D, et al. SNP markers-based map construction and genome-wide linkage analysis in Brassica napus. Plant Biotechnol J. 2014. doi:10.1111/pbi.12186.
Li F, Chen B, Xu K, Wu J, Song W, Bancroft I, et al. Genome-wide association study dissects the genetic architecture of seed weight and seed quality in rapeseed (Brassica napus L.). DNA Res. 2014;21:355–67.
Mason AS, Batley J, Bayer PE, Hayward A, Cowling WA, Nelson MN. High-resolution molecular karyotyping uncovers pairing between ancestrally related Brassica chromosomes. New Phytol. 2014;202:964–74.
Cai G, Yang Q, Yi B, Fan C, Edwards D, Batley J, et al. A complex recombination pattern in the genome of allotetraploid Brassica napus as revealed by a high-density genetic map. PLoS One. 2014;9:e109910.
Chen H, Xie W, He H, Yu H, Chen W, Li J, et al. A high-density SNP genotyping array for rice biology and molecular breeding. Mol Plant. 2014;7:541–53.
Ganal MW, Durstewitz G, Polley A, Bérard A, Buckler ES, Charcosset A, et al. A large maize (Zea mays L.) SNP genotyping array: development and germplasm genotyping, and genetic mapping to compare with the B73 reference genome. PLoS One. 2011;6:e28334.
Sim S-C, Durstewitz G, Plieske J, Wieseke R, Ganal MW, Van Deynze A, et al. Development of a large SNP genotyping array and generation of high-density genetic maps in tomato. PLoS One. 2012;7:e40563.
Hiremath PJ, Kumar A, Penmetsa RV, Farmer A, Schlueter JA, Chamarthi SK, et al. Large-scale development of cost-effective SNP marker assays for diversity assessment and genetic mapping in chickpea and comparative mapping in legumes. Plant Biotechnol J. 2012;10:716–32.
Bekele WA, Wieckhorst S, Friedt W, Snowdon RJ. High-throughput genomics in sorghum: from whole-genome resequencing to a SNP screening array. Plant Biotechnol J. 2013;11:1112–25.
Chagné D, Crowhurst RN, Troggio M, Davey MW, Gilmore B, Lawley C, et al. Genome-wide SNP detection, validation, and development of an 8 K SNP array for apple. PLoS One. 2012;7:e31745.
Ganal MW, Altmann T, Röder MS. SNP identification in crop plants. Curr Opin Plant Biol. 2009;12:211–7.
Ganal MW, Polley A, Graner EM, Plieske J, Wieseke R, Luerssen H, et al. Large SNP arrays for genotyping in crop plants. J Biosciences. 2012;37:821–8.
Hyten DL, Song Q, Choi I-Y, Yoon M-S, Specht JE, Matukumalli LK, et al. High-throughput genotyping with the GoldenGate assay in the complex genome of soybean. Theor Appl Genet. 2008;116:945–52.
Yan J, Yang X, Shah T, Sánchez-Villeda H, Li J, Warburton M, et al. High-throughput SNP genotyping with the GoldenGate assay in maize. Mol Breed. 2010;25:441–51.
Myles S, Boyko AR, Owens CL, Brown PJ, Grassi F, Aradhya MK, et al. Genetic structure and domestication history of the grape. Proc Natl Acad Sci U S A. 2011;108:3530–5.
Zhao K, Tung CW, Eizenga GC, Wright MH, Ali ML, Price AH, et al. Genome-wide association mapping reveals a rich genetic architecture of complex traits in Oryza sativa. Nat Commun. 2011;2:467.
Bachlava E, Taylor CA, Tang S, Bowers JE, Mandel JR, Burke JM, et al. SNP discovery and development of a high-density genotyping array for sunflower. PLoS One. 2012;7:e29814.
Martínez-García PJ, Parfitt DE, Ogundiwin EA, Fass J, Chan HM, Ahmad R, et al. High density SNP mapping and QTL analysis for fruit quality characteristics in peach (Prunus persica L.). Tree Genet Genomes. 2013;9:19–36.
Wu J, Cai G, Tu J, Li L, Liu S, Luo X, et al. Identification of QTLs for resistance to Sclerotinia stem rot and BnaC.IGMT5.a as a candidate gene of the major resistant QTL SRC6 in Brassica napus. PLoS One. 2013;8:e67740.
Fan C, Cai G, Qin J, Li Q, Yang M, Wu J, et al. Mapping of quantitative trait loci and development of allele-specific markers for seed weight in Brassica napus. Theor Appl Genet. 2010;121:1289–301.
Windows QTL Cartographer 2.5 [http://statgen.ncsu.edu/qtlcart/WQTLCart.htm]
Feng J, Long Y, Shi L, Shi J, Barker G, Meng J. Characterization of metabolite quantitative trait loci and metabolic networks that control glucosinolate concentration in the seeds and leaves of Brassica napus. New Phytol. 2012;193:96–108.
Edwards D, Batley J, Snowdon RJ. Accessing complex crop genomes with next-generation sequencing. Theor Appl Genet. 2013;126:1–11.
Borevitz JO, Liang D, Plouffe D, Chang H-S, Zhu T, Weigel D, et al. Large-scale identification of single-feature polymorphisms in complex genomes. Genome Res. 2003;13:513–23.
Borevitz JO, Hazen SP, Michael TP, Morris GP, Baxter IR, Hu TT, et al. Genome-wide patterns of single-feature polymorphism in Arabidopsis thaliana. Proc Natl Acad Sci U S A. 2007;104:12057–62.
Zhu T, Salmeron J. High-definition genome profiling for genetic marker discovery. Trends Plant Sci. 2007;12:196–202.
Xie W, Chen Y, Zhou G, Wang L, Zhang C, Zhang J, et al. Single feature polymorphisms between two rice cultivars detected using a median polish method. Theor Appl Genet. 2009;119:151–64.
Akhunov E, Nicolet C, Dvorak J. Single nucleotide polymorphism genotyping in polyploid wheat with the Illumina GoldenGate assay. Theor Appl Genet. 2009;119:507–17.
Cai G, Yang Q, Yang Q, Zhao Z, Chen H, Wu J, et al. Identification of candidate genes of QTLs for seed weight in Brassica napus through comparative mapping among Arabidopsis and Brassica species. BMC Genet. 2012;13:105.
Piquemal J, Cinquin E, Couton F, Rondeau C, Seignoret E, Doucet I, et al. Construction of an oilseed rape (Brassica napus L.) genetic map with SSR markers. Theor Appl Genet. 2005;111:1514–23.
Xu J, Qian X, Wang X, Li R, Cheng X, Yang Y, et al. Construction of an integrated genetic linkage map for the A genome of Brassica napus using SSR markers derived from sequenced BACs in B. rapa. BMC Genomics. 2010;11:594.
Cheng X, Xu J, Xia S, Gu J, Yang Y, Fu J, et al. Development and genetic mapping of microsatellite markers from genome survey sequences in Brassica napus. Theor Appl Genet. 2009;118:1121–31.
On the other hand, sometimes you'll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I've done enough hacking of my brain that I've gotten over that programming… and that when I use nootropics they help me help people. Drugs and catastrophe are seemingly never far apart, whether in laboratories, real life or Limitless. Downsides are all but unavoidable: if a drug enhances one particular cognitive function, the price may be paid by other functions. To enhance one dimension of cognition, you'll need to appropriate resources that would otherwise be available for others. Maj. Jamie Schwandt, USAR, is a logistics officer and has served as an operations officer, planner and commander. He is certified as a Department of the Army Lean Six Sigma Master Black Belt, certified Red Team Member, and holds a doctorate from Kansas State University. This article represents his own personal views, which are not necessarily those of the Department of the Army. The evidence? Ritalin is FDA-approved to treat ADHD. It has also been shown to help patients with traumatic brain injury concentrate for longer periods, but does not improve memory in those patients, according to a 2016 meta-analysis of several trials. A study published in 2012 found that low doses of methylphenidate improved cognitive performance, including working memory, in healthy adult volunteers, but high doses impaired cognitive performance and a person's ability to focus. (Since the brains of teens have been found to be more sensitive to the drug's effect, it's possible that methylphenidate in lower doses could have adverse effects on working memory and cognitive functions.) Nootropics are also sought out by consumers because of their ability to enhance mood and relieve stress and anxiety. Nootropics like bacopa monnieri and L-theanine are backed by research as stress-relieving options. Lion's mane mushroom is also well-studied for its ability to boost the nerve growth factor, thereby leading to a balanced and bright mood.14 "You know how they say that we can only access 20% of our brain?" says the man who offers stressed-out writer Eddie Morra a fateful pill in the 2011 film Limitless. "Well, what this does, it lets you access all of it." Morra is instantly transformed into a superhuman by the fictitious drug NZT-48. Granted access to all cognitive areas, he learns to play the piano in three days, finishes writing his book in four, and swiftly makes himself a millionaire. Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try. 
The price is not as good as multivitamins or melatonin. The studies showing effects generally use pretty high dosages, 1-4g daily. I took 4 capsules a day for roughly 4g of omega acids. The jar of 400 is 100 days' worth, and costs ~$17, or around 17¢ a day. The general health benefits push me over the edge of favoring its indefinite use, but I am looking to economize. Usually, small amounts of packaged substances are more expensive than bulk unprocessed material, so in lieu of a membership somewhere or some other price-break, I looked at fish oil fluid products; unsurprisingly, liquid is more cost-effective than pills (though, as with the powders, straight fish oil isn't very appetizing). I bought 4 bottles (16 fluid ounces each) for $53.31 total (thanks to coupons & sales), and each bottle lasts around a month and a half, for perhaps half a year, or ~$100 for a year's supply. (As it turned out, the 4 bottles lasted from 4 December 2010 to 17 June 2011, or 195 days.) My next batch lasted 19 August 2011-20 February 2012, and cost $58.27. Since I needed to buy empty 00 capsules (for my lithium experiment) and a book (Stanovich 2010, for SIAI work) from Amazon, I bought 4 more bottles of 16fl oz Nature's Answer (lemon-lime) at $48.44, which I began using 27 February 2012. So call it ~$70 a year. I almost resigned myself to buying patches to cut (and let the nicotine evaporate) and hope they would still stick on well enough afterwards to be indistinguishable from a fresh patch, when late one sleepless night I realized that a piece of nicotine gum hanging around on my desktop for a week proved useless when I tried it, and that was the answer: if nicotine evaporates from patches, then it must evaporate from gum as well, and if gum does evaporate, then to make a perfect placebo all I had to do was cut some gum into proper sizes and let the pieces sit out for a while. (A while later, I lost a piece of gum overnight and consumed the full 4mg to no subjective effect.) Google searches led to nothing indicating I might be fooling myself, and suggested that evaporation started within minutes in patches and a patch was useless within a day. Just a day is pushing it (who knows how much is left in a useless patch?), so I decided to build in a very large safety factor and let the gum sit for around a month rather than a single day. Nicotine's stimulant effects are general and do not come with the same tweakiness and aggression associated with the amphetamines, and subjectively are much cleaner with less of a crash. I would say that its stimulant effects are fairly strong, around that of modafinil. Another advantage is that nicotine operates through nicotinic receptors and so doesn't cross-tolerate with dopaminergic stimulants (hence one could hypothetically cycle through nicotine, modafinil, amphetamines, and caffeine, hitting different receptors each time). This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour, for 32 hours total. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.)
Add in an hour for analysis & writeup, and that suggests >38 hours of work; valuing that time at $7.25 an hour, 38 × $7.25 = $275.50. 12,000 pills is roughly $12.80 per thousand, or $154; 120 potassium iodide pills is ~$9, so (365.25/120) × $9 × 5 ≈ $137.

My worry about the MP variable is that, plausible or not, it does seem relatively weak against manipulation; other variables I could look at, like arbtt window-tracking of how I spend my computer time, the number or size of edits to my files, or spaced-repetition performance, would be harder to manipulate. If it's all due to MP, then if I remove the MP and LLLT variables, and summarize all the other variables with factor analysis into 2 or 3 variables, then I should see no increases in them when I put LLLT back in and look for a correlation between the factors & LLLT with a multivariate regression.

A television advertisement goes: "It's time to let Focus Factor be your memory-fog lifter." But is this supplement up to the task? Focus Factor wastes no time, whether in paid airtime or free online presence: it claims to be America's #1 selling brain health supplement, with more than 4 million bottles sold and millions across the country actively caring for their brain health. It deems itself instrumental in helping anyone stay focused and on top of his game at home, work, or school.

The greatly increased variance, but only somewhat increased mean, is consistent with nicotine operating on me with an inverted-U dosage/performance curve (the Yerkes-Dodson law): on good days, 1mg nicotine is too much and degrades performance (perhaps I am overstimulated and find it hard to focus on something as boring as n-back), while on bad days, nicotine is just right and improves n-back performance.

But like any other supplement, there are some safety concerns: negative studies like "Fish oil fails to hold off heart arrhythmia", other reports casting doubt on a protective effect against dementia, "Fish Oil Use in Pregnancy Didn't Make Babies Smart" (WSJ) (an early promise, but one that faded a bit later), or "…Supplementation with DHA compared with placebo did not slow the rate of cognitive and functional decline in patients with mild to moderate Alzheimer disease."

Nootropics, also known as 'brain boosters,' 'brain supplements' or 'cognitive enhancers', are made up of a variety of artificial and natural compounds. These compounds help in enhancing the cognitive activities of the brain by regulating or altering the production of neurochemicals and neurotransmitters in the brain. They improve blood flow, stimulate neurogenesis (the process by which neurons are produced in the body by neural stem cells), enhance nerve growth rate, modify synapses, and improve cell membrane fluidity. Thus, positive changes are created within your body, which helps you to function optimally irrespective of your current lifestyle and individual needs.

Dexedrine is often associated with Ritalin and Adderall because they are all CNS stimulants and are prescribed for the treatment of similar brain-related conditions. In the past, ADHD patients reported prolonged attention while studying upon Dexedrine consumption, which is why this smart pill is further studied for its concentration- and motivation-boosting properties.

A similar pill from HQ Inc. (Palmetto, Fla.), called the CorTemp Ingestible Core Body Temperature Sensor, transmits real-time body temperature. Firefighters, football players, soldiers and astronauts use it to ensure that they do not overheat in high temperatures. HQ Inc.
is working on a consumer version, to be available in 2018, that would wirelessly communicate with a smartphone app.

The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: one for the null-hypothesis files; Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A.

These are some of the best nootropics for focus, along with the other benefits they bring. They might intrigue you enough to try one to boost your brain's power. However, you need to do your research before choosing the right nootropic. One way of doing so is by consulting a doctor about the best nootropic for you; another is choosing one with clinically tested natural nootropic substances. There are many sources where you can find the right kind of nootropics for your needs, and one of them is AlternaScript.

A 2015 review of various nutrients and dietary supplements found no convincing evidence of improvements in cognitive performance. While there are "plausible mechanisms" linking these and other food-sourced nutrients to better brain function, "supplements cannot replicate the complexity of natural food and provide all its potential benefits," says Dr. David Hogan, author of that review and a professor of medicine at the University of Calgary in Canada.

Core body temperature, local pH and internal pressure are important indicators of patient well-being. While a thermometer can give an accurate reading during regular checkups, the monitoring of professionals in high-intensity situations requires a more accurate inner-body temperature sensor. An ingestible chemical sensor can record acidity and pH levels along the gastrointestinal tract to screen for ulcers or tumors. Sensors can also be built into medications to track compliance.

Today piracetam is a favourite with students and young professionals looking for a way to boost their performance, though decades after Giurgea's discovery, there still isn't much evidence that it can improve the mental abilities of healthy people. It's a prescription drug in the UK, though it's not approved for medical use by the US Food and Drug Administration and can't be sold as a dietary supplement either.

Smart drugs offer significant memory-enhancing benefits. Clinical studies of the best memory pills have shown gains to focus and memory. Individuals seek the best quality supplements to perform better for higher grades in college courses or to become more efficient, productive, and focused at work for career advancement. It is important to choose a high-quality supplement to get the results you want.

Research on animals has shown that intermittent fasting (limiting caloric intake at least two days a week) can help improve neural connections in the hippocampus and protect against the accumulation of plaque, a protein prevalent in the brains of people with Alzheimer's disease. Research has also shown that intermittent fasting helped reduce anxiety in mice.

So is there a future in smart drugs? Some scientists are more optimistic than others.
Gary Lynch, a professor in the School of Medicine at the University of California, Irvine, argues that recent advances in neuroscience have opened the way for the smart design of drugs configured for specific biological targets in the brain. "Memory enhancement is not very far off," he says, although the prospects for other kinds of mental enhancement are "very difficult to know… To me, there's an inevitability to the thing, but a timeline is difficult."

Natural-sourced ingredients can also help to enhance your brain. Superfood, herbal or amino-acid cognitive enhancers are more natural and are largely derived directly from food or plants. Panax ginseng, matcha tea and choline (found in foods like broccoli) are included under this umbrella. There are dozens of different natural ingredients/herbs purported to help cognition, many of which have been used medicinally for hundreds of years.

Critics will often highlight ethical issues and the lack of scientific evidence for these drugs. Ethical arguments typically take the form of "tampering with nature." Alena Buyx discusses this argument in a neuroethics project called Smart Drugs: Ethical Issues. She says that critics typically ask if it is ethically superior to accept what is "given" instead of striving for what is "made". My response to this is simple: just because it is natural does not mean it is superior.

Regardless, while in the absence of piracetam I did notice some stimulant effects (somewhat negative; more aggressive than usual while driving) and effects similar to piracetam's, I did not notice any mental performance beyond piracetam when using them both. The most I can say is that on some nights I seemed to be less easily tired when writing or editing or n-backing (and I felt less tired at ICON 2011 than at ICON 2010), but those were also often nights I was trying out all the other things I had gotten in that order from Smart Powders, and I am still disentangling what was responsible. (Probably the l-theanine or sulbutiamine.)

Smart pills containing Aniracetam may also improve communication between the brain's hemispheres. This benefit makes Aniracetam supplements ideal for enhancing creativity and stabilizing mood. But the anxiolytic effects of Aniracetam may be too potent for some; there are reports of users who find that it causes them to feel unmotivated or sedated. Though, it may not be an issue if you only seek the anti-stress and anxiety-reducing effects.

Because these drugs modulate important neurotransmitter systems such as dopamine and noradrenaline, users take significant risks with unregulated use. There has not yet been any definitive research into modafinil's addictive potential, how its effects might change with prolonged sleep deprivation, or what side effects are likely at doses outside the prescribed range.

Analyzing the results is a little tricky because I was simultaneously running the first magnesium citrate self-experiment, which turned out to cause a quite complex result (it looks like a gradually accumulating overdose negating an initial benefit, for net harm), and also toying with LLLT, which turned out to have a strong correlation with benefits. So for the potential small Noopept effect not to be swamped, I need to include those in the analysis.
I designed the experiment to try to find the best dose level, so I want to look at an average Noopept effect but also the estimated effect at each dose size in case some are negative (especially in the case of 5 pills/60mg); I included the pilot experiment data as 10mg doses, since they were also blind & randomized. Finally, missingness affects the analysis: because not every variable is recorded for each date (what was the value of the variable for the blind randomized magnesium citrate before and after I finished that experiment? what value do you assign the Magtein variable before I bought it and after I used it all up?), just running a linear regression may not work exactly as one expects, as various days get omitted because part of the data was missing (a minimal illustration follows at the end of this section).

Nootropics. You might have heard of them. The "limitless pill" that keeps billionaires rich. The 'smart drugs' that students are taking to help boost their hyperfocus. The cognitive enhancers that give corporate executives an advantage. All very exciting. But as always, the media are way behind the curve. Yes, for the past few decades, cognitive enhancers were largely sketchy substances that people used to grasp at a short-term edge at the expense of their health and well-being. But the days of taking prescription pills to pull an all-nighter are so 2010. The better, safer path isn't with these stimulants but with nootropics. Nootropics consist of dietary supplements and substances which enhance your cognition, in particular when it comes to motivation, creativity, memory, and other executive functions. They play an important role in supporting memory and promoting optimal brain function.
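As a minimal illustration of the missing-data pitfall mentioned above (the column names and simulated data here are hypothetical, not the actual self-experiment dataset), note that the formula interface of statsmodels silently drops any row with a missing covariate:

```python
# Hedged sketch of the missing-data pitfall: hypothetical columns, simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "noopept_mg": rng.choice([0, 10, 30, 60], size=n),
    "magnesium":  rng.choice([0.0, 1.0], size=n),
    "lllt":       rng.choice([0.0, 1.0], size=n),
})
# Simulated daily productivity score with small treatment effects plus noise
df["mp"] = 3 + 0.02 * df.noopept_mg + 0.5 * df.lllt + rng.normal(0, 1, n)

# Magnesium wasn't tracked before its experiment started: mark those days missing
df.loc[:99, "magnesium"] = np.nan

# The default missing='drop' silently removes the 100 incomplete rows,
# so a third of the data vanishes from the fit -- exactly the problem noted above.
fit = smf.ols("mp ~ C(noopept_mg) + magnesium + lllt", data=df).fit()
print(fit.nobs)     # 200.0, not 300
print(fit.params)
```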
European Journal of Environment and Public Health, 2019 - Volume 3 Issue 1, Article No: em0024. https://doi.org/10.29333/ejeph/5838. Published Online: 11 Jul 2019.

Differences between Arkansas and the United States in Prevalence of Risk Factors Explain Variations in Ischemic Heart Disease Mortality Rates among Pre-Medicare (45-64) and Medicare (65-84) Age Groups

Robert Delongchamp 1 2 *, Abby Holt 2, M. F. Faramawi 1, Appathurai Balamurugan 1 2, Gordon Reeve 2, Namvar Zohoori 1 2, Joseph Bates 1 2
1 University of Arkansas for Medical Sciences (UAMS), U.S.A.
2 Arkansas Department of Health, U.S.A.

Arkansas (AR) consistently has higher ischemic heart disease (IHD) death rates than the US, which is believed to be due to a higher prevalence in AR of major, modifiable risk factors.
We examined the difference in IHD death rates between AR and the US as a consequence of differences in the prevalence of nine risk factors in the pre-Medicare (45-64) and Medicare (65-84) age groups. We modeled IHD deaths attributable to differential prevalence between AR and the US using mortality and prevalence data from AR and the US for the years 2000-2010, and relative risk measures obtained from the INTERHEART and Atherosclerosis Risk in Communities studies. From 2000-2010, our study showed that if the prevalence of significant risk factors in AR were reduced to US levels, AR IHD deaths would fall by 26.6% in the pre-Medicare age group and 15.9% in the Medicare age group. Most of the increased mortality was explained by the higher prevalence of smoking and hypertension in AR. Other socioeconomic factors that contributed to an increased risk of poor health outcomes were education, income, and the lack of health insurance, with AR having worse outcomes than the US for the pre-Medicare age group. The importance of risk factors depended on race, sex, and age. The excess mortality in AR relative to the US for the two age groups can largely be explained by prevalence differences in smoking, hypertension, cholesterol, education and income.

Keywords: risk factors, mortality, ischemic heart disease

Although death rates from heart disease in the United States (US) and Arkansas (AR) have declined rapidly since the 1970s, the burden remains high (Centers for Disease Control and Prevention, 2013; Gillespie et al., 2013; Go et al., 2014). In 2013, heart disease was the leading cause of death in both the US and AR, with ischemic heart disease (IHD) contributing the majority of these deaths (Centers for Disease Control and Prevention, 2014). AR has substantially higher IHD death rates than the US (Centers for Disease Control and Prevention, 2014). Major modifiable risk factors for heart disease have been extensively studied and elucidated; however, their contribution to the excess mortality of high-burden states has not. These risk factors include tobacco use, physical inactivity, poor diet, diabetes, obesity, hypertension, and dyslipidemia; managing these risks can prevent the onset of IHD and premature death (American Heart Association, 2018; Go et al., 2014). In a recent study by Patel and coworkers, several of these major preventable risks combined accounted for half of the cardiovascular disease deaths in US adults aged 45-79 (Patel et al., 2015). That study suggests that nearly 10% of the nation's cardiovascular disease deaths would be prevented if risk factors were reduced to the levels in the lowest-risk states. It is the only published national cardiovascular study that we know of that measured the decrease in cardiovascular mortality if all states were to reduce modifiable risks to a specified target.

In addition to public health efforts focused on reductions in modifiable risk factors, access to healthcare could be important in reducing IHD mortality (Cutler and Meara, 2004). Arkansans in the pre-Medicare age group (age 45-64) not only have higher IHD death rates than the nation, but many have also historically lacked health insurance as compared to the nation. A recent study by Case and Deaton found a marked increase in all-cause mortality in middle-aged white men and women in the US between 1999 and 2013, reversing decades of progress in mortality (Case and Deaton, 2015).
The percent of the population under the age of 65 without health insurance in AR significantly exceeded the percent in the US in 2013 [19.1% (90% confidence interval [CI]: 18.4-19.8) versus 16.5% (90% CI: 16.3-16.5), respectively (US Department of Health & Human Services, 2015)], making this sub-population particularly vulnerable. We investigated the contribution of differences between Arkansas and the US in the prevalence of major, modifiable risk factors, including health insurance, in the pre-Medicare (45-64 years of age) and Medicare (65-84 years of age) age groups, to explore the effects that such reductions could have on the excess IHD mortality in Arkansas.

The prevalence of each modifiable risk factor was estimated from the Behavioral Risk Factor Surveillance System (BRFSS) (Centers for Disease Control and Prevention, 2018a). IHD mortality rates were obtained from CDC WONDER, and population attributable fractions were calculated using the BRFSS prevalence estimates and published relative risks (Anand et al., 2008; Fowler-Brown et al., 2007; Lynch et al., 1996; Qureshi et al., 2003; Rasmussen et al., 2006; Tonne et al., 2005). The details are described in the sections labeled Mortality, Prevalence and Adjustment. For major modifiable risk factors, AR IHD mortality was adjusted by a population attributable fraction that assumed AR prevalence was reduced to US levels. Mortality adjustments were computed for the years 2000 to 2010. IHD mortality rates were obtained for the US and AR, and the prevalence of each risk factor was estimated for each year for both the US and AR. Within each year, we computed age-standardized mortality rates and age-standardized prevalence rates for eight demographic groups representing combinations of age (Older and Young), race (Black and White), and sex (Female and Male). Older represents individuals in the Medicare age group and Young represents adults between 45-64 years of age. Each group is labeled using a three-letter code: Y means younger (age 45-64), O means older (65-84), W means non-Hispanic white, B means non-Hispanic black, M means male, and F means female.

Mortality rates for IHD (International Classification of Diseases, revision 10, codes I20-I25) in AR and in the US were obtained from the CDC WONDER compressed mortality files from 2000 to 2010 (Centers for Disease Control and Prevention, 2014). In Arkansas, a total of 11,272 IHD deaths among the 45-64 age group and 26,430 IHD deaths among the 65-84 age group were included in the study. In the US, a total of 716,405 IHD deaths among the 45-64 age group and 2,097,502 IHD deaths among the 65-84 age group were included. We downloaded age-specific IHD mortality rates for five-year age intervals and computed annual age-standardized rates in AR and the US for each of the eight race-sex-age groups. Rates were adjusted in 5-year age groups to the 2000 US standard million.

Survey results from the BRFSS were used to assess the prevalence of nine risk factors in the eight demographic groups in the US and AR (Centers for Disease Control and Prevention, 2018a). The annual prevalence of personal health behaviors was estimated for the years 2000 to 2010. Included IHD risk factors were the prevalence of cigarette smoking, hypertension awareness, high cholesterol awareness, obesity, physical inactivity, diabetes, income, educational attainment, and health insurance status. The specific definitions of risk factor categories are given in Table 1. Demographic information on sex, age, and race was obtained from BRFSS.
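As a concrete illustration of the direct age standardization described above, here is a minimal sketch; the 5-year age groups shown and their standard-million weights and rates are illustrative placeholders, not the study's actual inputs:

```python
# Minimal sketch of direct age standardization to a standard million.
# Weights and rates below are illustrative, not the study's data.

standard_pop = {"45-49": 72000, "50-54": 62000, "55-59": 48000}   # hypothetical weights
rates = {"45-49": 60.0, "50-54": 110.0, "55-59": 190.0}           # deaths per 100,000

total_weight = sum(standard_pop.values())
age_adjusted = sum(rates[a] * standard_pop[a] for a in rates) / total_weight
print(f"Age-adjusted rate: {age_adjusted:.1f} per 100,000")
```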
All other items were obtained from the core BRFSS survey for all years 2000-2010, with the exception of hypertension, cholesterol awareness and physical inactivity; these items were obtained from rotating core questions asked every other year, during 2001, 2003, 2005, 2007, and 2009. Because BRFSS data collection methods change over time, modifications to the variable names and methods of collection were assessed and recoded when needed for consistency throughout all years (Centers for Disease Control and Prevention, 2015).

Table 1. Risk factor categories defined from the CDC Behavioral Risk Factor Surveillance System (BRFSS)

- Smoking (never, former, current): Never smoker = respondent who reported smoking less than 100 cigarettes in their lifetime. Former smoker = respondent who reported smoking at least 100 cigarettes in their lifetime and who did not smoke at the time of the survey. Current smoker = respondent who reported smoking at least 100 cigarettes in their lifetime and who smoked every day or some days at the time of the survey.
- Hypertension (no, yes): Respondent who has been told by a doctor, nurse, or other health professional that they have high blood pressure.
- Cholesterol (no, yes): Respondent who has been told by a doctor, nurse, or other health professional that their blood cholesterol is high.
- Obesity (BMI < 30, BMI ≥ 30): Body mass index (BMI) calculated by the CDC from a respondent's reported height and weight.
- Physical Activity (no, yes): Respondent who reported they participated in either moderate physical activity, defined as 30 or more minutes per day on 5 or more days per week, or vigorous activity for 20 minutes per day on 3 or more days per week.
- Diabetes (no, yes): Respondent who has been told by a doctor that they have diabetes.
- Income (< $15,000 per year, ≥ $15,000): Respondent with an annual household income from all sources falling within categories ranging from less than $15,000 per year and $15,000 to less than $25,000, up to $50,000 or more.
- Education (< high school graduate, ≥ high school graduate): Respondent who reported the highest grade or year of school that they completed; tabulated by the CDC in categories from "did not graduate high school" through "attended college or technical school" and "college or technical school graduate".
- Health Insurance Status (no, yes): Respondent who had any kind of health care coverage, including health insurance, prepaid plans such as HMOs, or government plans such as Medicare.

Hypertension and diabetes may be underreported because participants are unaware that they have these conditions. To adjust for this, results from a state-level randomized survey conducted from 2006 to 2008, the Arkansas Cardiovascular Health Examination Survey (ARCHES), which collected self-reported and clinical measures, were used (Zohoori et al., 2011). ARCHES methodology was similar to the National Health and Nutrition Examination Survey (NHANES). NHANES 2007-2008 reported that 80.7% of respondents who had hypertension were aware of their condition, while 72.6% of ARCHES respondents were aware of their hypertension. Among respondents with diabetes, 67.9% of NHANES respondents and 76.7% of ARCHES respondents were aware of their condition (Centers for Disease Control and Prevention, 2018c; Zohoori et al., 2011).
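The paper does not spell out the exact formula used for this awareness adjustment, but a heavily hedged sketch of the general idea, scaling self-reported prevalence by the awareness fraction, might look like this (the prevalence figure is hypothetical; the awareness fraction is the ARCHES value quoted above):

```python
# Hypothetical illustration of an awareness adjustment: scale self-reported
# prevalence by the fraction of true cases who are aware of their condition.
# NOT the paper's stated method, just the general idea.

reported_hypertension = 0.38   # hypothetical self-reported prevalence
awareness_arches = 0.726       # 72.6% of ARCHES hypertensives were aware (from the text)

adjusted = reported_hypertension / awareness_arches
print(f"awareness-adjusted prevalence: {adjusted:.3f}")
```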
Table 2. Excess relative risk used in population attributable risk calculations, by sex and age (the fourth column header, lost in extraction, is inferred from the row structure)

Risk factor                              Males ≥ 65   Males 45-64   Females ≥ 65   Females 45-64
Smoking status (current + former)(a)     1.28 + .50   2.42 + .70    1.29 + .17     3.40 + .40
Hypertension(a)                          1.02         1.63          1.84           3.00
High cholesterol(a)                      1.02         2.63          1.65           3.85
Obesity(a)                               0.84         1.51          1.03           1.56
Physical inactivity(a)                   1.61         1.25          1.85           1.67
Diabetes(a)                              1.47         1.93          2.71           4.69
Education(b)                             1.0          1.0           1.0            1.0
Income(b)                                1.0          1.0           1.0            1.0
No health insurance(c)                   .22          .22           .22            .22

(a) RR from the INTERHEART study (11). (b) RR set to 2.0 (15-18). (c) RR from the Atherosclerosis Risk in Communities Study (12).

The majority of the relative risk measures (smoking, hypertension, obesity, physical inactivity, diabetes) were obtained as odds ratios from the INTERHEART study, a worldwide case-control study of factors associated with acute myocardial infarction, as set out in Table 2 (Anand et al., 2008). Income, educational attainment, and health insurance status were not measured in that study; therefore, the Atherosclerosis Risk in Communities (ARIC) study was used to determine the RR of myocardial infarction for lack of health insurance (Table 2) (Fowler-Brown et al., 2007). Based on the results of several studies that assessed the effect of socioeconomic status on cardiovascular disease risk, an RR of 2.0 was assigned to income and educational attainment (Lynch et al., 1996; Qureshi et al., 2003; Rasmussen et al., 2006; Tonne et al., 2005).

A generalization of the population attributable fraction was used to compute the anticipated change in the mortality rate with a change in prevalence from p to q:

\[\theta = \frac{(R - 1)(p - q)}{1 + (R - 1)p}\] (1)

where \(R\) is the relative risk in the exposed. Note that the usual definition of the population attributable fraction is the case \(q = 0\). The adjusted mortality estimate is \(\mu(q) = (1 - \theta)\,\mu(p)\). Equation (1) was extended to accommodate multiple exposure levels, such as the joint contribution of smoking and hypertension, by cross-classifying the two risk factors (details available from the corresponding author). For K risk factors with respective estimates of the population attributable fraction \(\{\theta_{k}: k = 1, \ldots, K\}\), we computed the adjusted mortality as

\[(1 - \theta^{*})\,\mu\] (2)

where \(\theta^{*} = 1 - \prod_{k = 1}^{K}(1 - \theta_{k})\) is the combined population attributable fraction. We estimated the population attributable fraction \(\theta^{*}\) and its variance from 1,000 bootstrap samples of each survey (50 states over 11 years) (Efron and Tibshirani, 1994). The bootstrap approach provided estimates of variance within the domains needed for age-adjusted sex-race categories.

In all groups, AR had markedly higher IHD mortality than the nation during 2000 through 2010. To succinctly describe these disparities, we plotted trends in the relative risks (Figure 1a) and the excess absolute risks (Figure 1b). The relative risk and excess absolute risk increased over the years, with the exception of black women. Since about 2004, rate disparities among black women declined such that their 2010 disparity was the least among the race-sex groups. Nonetheless, rates in all groups differed significantly from national rates. Further, the relative risks in the pre-Medicare groups (YWM, YWF, YBM, YBF) were higher as compared with those in the Medicare groups (OWM, OWF, OBM, OBF) (Figure 1a).
Figure 1. Disparities in age-standardized IHD mortality rates between AR and US: (a) relative risks; (b) excess absolute risks. Disparities in age-standardized mortality after adjusting AR to US prevalence: (c) relative risks; (d) excess absolute risks.

Conceptually, the disparities between AR and the US (Figure 1a, 1b) arose from differences in the prevalence of modifiable factors. We evaluated the potential role of the prevalence of nine major, modifiable risk factors for IHD by computing a population attributable fraction (PAF) associated with a change from the prevalence observed in AR to the prevalence observed in the US. Table 3 ranks the risk factors for which the PAF was statistically significant. Smoking and hypertension were the major contributors. It is important to note that a disparity in socioeconomic status (i.e., education and income) contributed to IHD mortality differences between AR and the US for most race, sex, and age groups, younger black males (YBM) being the exception (Table 3). Obesity and no health insurance were important in the 45-64 age groups, but not in the 65-84 age groups. Diabetes did not contribute to the excess, primarily because the ARCHES-adjusted prevalence was lower in AR than the corresponding US prevalence. Note that the order of the risk factors is a function of the difference between the prevalence in AR and the US, in addition to the excess relative risk in Table 2.

Table 3. Rank(a) of significant risk factors as an explanation of the IHD mortality disparity between AR and the US, and population attributable fraction/potential deaths averted in AR. [Most of this table's cells were lost in extraction. Its columns were the eight demographic groups (OBF, YBF, OBM, YBM, OWF, YWF, OWM, YWM); its rows ranked the significant risk factors: hypertension (c), smoking and hypertension jointly (c, d), cholesterol (b), physical inactivity (b), diabetes (b), obesity (b), education (e), income (e), and no health insurance (f); the final rows reported the combined population attributable fraction % (95% CI), total IHD deaths in AR, and potential deaths averted in AR. Only fragments of the estimates survive, e.g., 25.0 (18.3-31.2), (21.3-31.9), and (9.1-14.1).]

Note that empty cells (-) were not ranked because AR prevalence was not significantly greater than the US. Abbreviations: 95% CI, 95% confidence interval; IHD, ischemic heart disease; OBF, older black females (65-84); YBF, younger black females (45-64); OBM, older black males (65-84); YBM, younger black males (45-64); OWF, older white females (65-84); YWF, younger white females (45-64); OWM, older white males (65-84); YWM, younger white males (45-64). (a) Ranking ordered by the magnitude of the statistically significant PAF. (b) RR from the INTERHEART study (11). (c) Adjusted using data from the Arkansas Cardiovascular Health Examination Survey (ARCHES), 2006 to 2008 (13). (d) Joint effect of smoking and hypertension. (f) RR from the Atherosclerosis Risk in Communities Study (12). (g) Smoking was not included for OBF, YBF because smoking rates in AR were lower than in the US.

Table 3 also reports the combined PAF of the significant risk factors. Adjusting the relative risk (Figure 1c) or the excess absolute risk (Figure 1d) by the PAF associated with these risk factors explained most of the disparity between AR and US IHD mortality rates. Over the period studied, 2000-2010, the greatest IHD mortality difference between AR and the US was in 2010. Table 4 elaborates the 2010 excess absolute risks (EAR) and relative risks (RR) depicted in Figure 1 by reporting 95% confidence intervals. In 2010, the observed RR and observed EAR were elevated for all groups except OBF, and after the PAF adjustment the disparities were largely removed except for YWF, YWM, and OWM. Older black males (OBM) showed a significant decreasing trend in Table 4.
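To make equations (1) and (2) concrete, here is a minimal sketch of the PAF calculation; the relative risks and prevalences below are illustrative placeholders, not the study's estimates:

```python
# Sketch of equations (1)-(2): PAF for a prevalence change p -> q, and the
# combined PAF across risk factors. All inputs are illustrative, not the
# study's actual values.

def paf(R, p, q):
    """Attributable fraction when prevalence drops from p to q (eq. 1)."""
    return (R - 1) * (p - q) / (1 + (R - 1) * p)

# Hypothetical examples: a smoking-like factor and a hypertension-like factor
thetas = [paf(R=2.28, p=0.27, q=0.21), paf(R=2.63, p=0.38, q=0.32)]

# Combined PAF (eq. 2): theta* = 1 - prod(1 - theta_k)
theta_star = 1.0
for t in thetas:
    theta_star *= (1 - t)
theta_star = 1 - theta_star

deaths = 11272  # AR IHD deaths, ages 45-64, 2000-2010 (from the Methods above)
print(f"combined PAF = {theta_star:.3f}; deaths averted ~ {theta_star * deaths:.0f}")
```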
During the period 2000 to 2010, AR had 11,242 IHD deaths in the 45-64 age group and 26,430 IHD deaths in the 65-84 age group. Applying the PAF estimates in Table 3 to these deaths suggests that 26.6% of the deaths in the 45-64 group (2,990 deaths) and 15.9% of the deaths in the 65-84 age group (4,190 deaths) can be attributed to disparities between AR and the US in the prevalence of modifiable risk factors.

Table 4. 2010 estimates and 95% confidence bounds of the relative risk (RR) and excess absolute risk per 100,000 (EAR). [The row labels and most cells of this table were lost in extraction. The columns were RR (95% CI), adjusted RR(a) (95% CI), EAR (95% CI), and adjusted EAR(a) (95% CI), with one row per demographic group. Surviving fragments include: 1.17 (1.00-1.36), 0.87 (0.73-1.04), 77.8 (-6.4 to 162.1), -58.8 (-131.6 to 13.9), 33.9 (10.9-56.9), 5.0 (-12.8 to 22.6), 142.5 (16.9-268.0), -198.1 (-317.9 to -78.2), -7.6 (-35.0 to 19.8), 14.9 (8.8-20.9).] Abbreviations: 95% CI, 95% confidence interval; RR, relative risk; EAR, excess absolute risk; OBF, older black females (65-84); YBF, younger black females (45-64); OBM, older black males (65-84); YBM, younger black males (45-64); OWF, older white females (65-84); YWF, younger white females (45-64); OWM, older white males (65-84); YWM, younger white males (45-64). (a) AR rates adjusted to expected rates if risk factor prevalence were reduced to US levels.

This study shows that if Arkansans had had the same level of risk factors as the US national average, there would potentially have been 26.6% fewer deaths from IHD in the 45-64 age group and 15.9% fewer deaths in the 65-84 age group during the period 2000 to 2010. The total number of deaths averted from IHD would have been 7,180, and almost 3,000 of these would have been in the 45-64 age group, an age group representing a significant proportion of the workforce and those with parental responsibilities. The modifiable risks evaluated included cigarette smoking, obesity, hypertension, high blood cholesterol, diabetes, physical inactivity, educational attainment, income level, and health insurance access.

The modeling of IHD mortality in this study accounted for the attributable risk from differential prevalence between AR and the US after adjusting for confounders such as age, race, and gender. This methodology focuses on modifiable factors that have been reduced in other states relative to AR, and provides a ranking of risk factors by their potential to reduce IHD rates in AR. Smoking and hypertension explained a large portion of the disparity between the AR and US IHD death rates, which implies that other states have reduced smoking and hypertension rates substantially more than AR. Another national study found the greatest reduction in cardiovascular disease mortality would occur in the southern states if risk factors were reduced to levels in lower-risk western states (Patel et al., 2015). Our state-specific IHD study supports these findings and the need to reduce the prevalence of IHD risk behaviors.

In this study, the mortality rate disparity between AR and the US increased over time, especially in the pre-Medicare age group. The importance of specific risk factors depended on race, sex, and age. Most of these increases in IHD death rates can be attributed to a higher prevalence of several risk factors in AR (Table 3). Taken together, these factors can probably explain the majority of the difference in IHD deaths between AR and the US (Figure 1, Table 4). However, it is problematic to account for correlations among risk factors when computing a cumulative contribution.
This is primarily because BRFSS sample sizes were too small to estimate prevalence within cross-classifications of risk factors. The importance of specific risk factors depended on race, sex, and age, but hypertension was among the highest ranked in all demographic groups, and smoking also ranked high among the factors. Because of their high ranks, we computed prevalence for their joint distribution. The ARCHES-adjusted combination of hypertension and smoking contributed to the disparity in IHD mortality between AR and the US for all groups except black females (OBF, YBF) (Table 3). Among black females in AR, hypertension prevalence is higher but smoking prevalence is lower than in the US. After adjusting AR mortality rates using the PAF (36.2) in Table 3, IHD mortality rates among black males in the Medicare group dropped significantly below US rates (Figure 1c and Table 4). Table 3 disregarded risk factors for which the AR prevalence was less than the US prevalence. AR prevalence estimates for diabetes and obesity were significantly lower in OBM, and this may contribute to the low adjusted estimate.

Differences between AR and the US in socioeconomic factors and health insurance may play a significant role in these disparities (Alter et al., 2013; Baker et al., 2006; Fowler-Brown et al., 2007; McWilliams et al., 2004). In previous work, Balamurugan and coworkers estimated the relative risk of death from acute myocardial infarction (the major component of IHD deaths) in census block groups as a function of education (the proportion of the block group population over 25 years old who did not graduate from high school) and poverty (the proportion of the block group population living below the federal poverty level) (Balamurugan et al., 2016). Both of these covariates explained significant amounts of the differences in IHD mortality among Arkansas's block groups. This result implies that there are substantial differences in IHD mortality among block groups that are associated with socioeconomic measures.

While having no insurance explains some of the premature IHD deaths among younger Arkansans compared to the US, it does not have an effect on the 65-84 age group, where essentially all have access to Medicare. A 10-year prospective health study, the Health and Retirement Study (HRS), followed adults aged 55 to 64 and found that those without insurance at the start of the study had a 35% higher all-cause mortality rate than those who reported having private insurance (McWilliams et al., 2004). Some of this risk may be related to the lack of preventive care and interventions not available to the uninsured (Brooks et al., 2010). Investigators of the health insurance expansion under the Affordable Care Act (ACA) estimated that a 5.1% increase in treatment of hypertension among adults aged 25 to 64 would prevent 95,000 cardiovascular disease deaths in this group by 2050 (Li et al., 2015).

BRFSS allows us to address the prevalence differences between the US and AR in income, education, and health insurance status. However, a substantial part of the risk is presumed to be confounded by other risk factors. For example, both hypertension and smoking prevalence were correlated with measures of education and poverty (Luepker et al., 1993). Because BRFSS is a self-reported survey, responses to questions about risk factors such as smoking and obesity may be biased by societal perceptions (Gebreab et al., 2015; Li et al., 2012; Nelson et al., 2003; Pierannunzi et al., 2013).
For risk factors such as hypertension and diabetes, many individuals do not realize they have the condition. Results from other studies were incorporated to adjust for the underreporting of these chronic conditions (Centers for Disease Control and Prevention, 2018c; Zohoori et al., 2011). Our modeling also estimated the variation inherent in the prevalence estimates, which allows us to include confidence intervals.

Limitations of the study include the relatively small samples among ages 45-64 and 65-84, which limit precision and thus were especially problematic for the black population. The limited sample sizes in BRFSS also restricted our ability to examine interactions. For example, an interaction may occur between hypertension, smoking and cholesterol, but creating subsets to account for this effect modification reduces the sample further, resulting in less precise outcomes, which limits the power to evaluate effects. In addition, we would need estimates of the RR for the interaction subsets, which are largely unavailable. The INTERHEART case-control study determined odds ratios among cases who were alive after their first MI event (Anand et al., 2008). We used these odds ratios as a risk measure of IHD death rather than MI incidence. Also, IHD includes heart diseases such as angina pectoris, other acute ischemic heart disease, and chronic ischemic heart disease besides acute and subsequent MI. There were also some differences in the way some of the risk factors were measured between BRFSS and the INTERHEART study, which may have led to inconsistent RR measures. Additional RRs for education, income, and health insurance were not available in the INTERHEART study. An excess RR for education and income was inferred from several studies, and the RR for health insurance was taken from the Atherosclerosis Risk in Communities Study (Fowler-Brown et al., 2007; Lynch et al., 1996; Qureshi et al., 2003; Rasmussen et al., 2006; Tonne et al., 2005).

Our findings suggest which program interventions would have the greatest benefit in reducing IHD rates. Targeting these interventions could support efforts of the CDC and CMS Million Hearts initiative to prevent 1 million cardiovascular disease deaths by 2022 (Centers for Disease Control and Prevention, 2018b).

No financial support was obtained for the work in this manuscript.

Alter, D. A., Franklin, B., Ko, D. T., Austin, P. C., Lee, D. S., Oh, P. I., . . . Tu, J. V. (2013). Socioeconomic status, functional recovery, and long-term mortality among patients surviving acute myocardial infarction. PLoS One, 8, e65130. https://doi.org/10.1371/journal.pone.0065130

American Heart Association. (2018). Understand Your Risks to Prevent a Heart Attack. Available at: http://www.heart.org/en/health-topics/heart-attack/understand-your-risks-to-prevent-a-heart-attack (Accessed 15 October 2018)

Anand, S. S., Islam, S., Rosengren, A., Franzosi, M. G., Steyn, K., Yusufali, A. H., . . . Yusuf, S. (2008). Risk factors for myocardial infarction in women and men: insights from the INTERHEART study. European Heart Journal, 29, 932-940. https://doi.org/10.1093/eurheartj/ehn018

Baker, D. W., Sudano, J. J., Durazo-Arvizu, R., Feinglass, J., Witt, W. P. and Thompson, J. (2006). Health insurance coverage and the risk of decline in overall health and death among the near elderly, 1992-2002. Medical Care, 44, 277-282. https://doi.org/10.1097/01.mlr.0000199696.41480.45

Balamurugan, A., Delongchamp, R., Im, L., Bates, J. and Mehta, J. L. (2016).
Neighborhood and Acute Myocardial Infarction Mortality as Related to the Driving Time to Percutaneous Coronary Intervention-Capable Hospital. Journal of the American Heart Association, 5, e002378. https://doi.org/10.1161/JAHA.115.002378

Brooks, E. L., Preis, S. R., Hwang, S. J., Murabito, J. M., Benjamin, E. J., Kelly-Hayes, M., . . . Levy, D. (2010). Health insurance and cardiovascular disease risk factors. The American Journal of Medicine, 123, 741-747. https://doi.org/10.1016/j.amjmed.2010.02.013

Case, A. and Deaton, A. (2015). Rising morbidity and mortality in midlife among white non-Hispanic Americans in the 21st century. Proceedings of the National Academy of Sciences of the United States of America, 112, 15078-15083. https://doi.org/10.1073/pnas.1518393112

Centers for Disease Control and Prevention. (2013). Vital signs: avoidable deaths from heart disease, stroke, and hypertensive disease - United States, 2001-2010. MMWR. Morbidity and Mortality Weekly Report, 62, 721-727.

Centers for Disease Control and Prevention. (2014). Underlying Cause of Death 1999-2011 on CDC WONDER Online Database, released 2014. Data are from the Multiple Cause of Death Files, 1999-2011, as compiled from data provided by the 57 vital statistics jurisdictions through the Vital Statistics Cooperative Program. Available at: http://wonder.cdc.gov/ucd-icd10.html

Centers for Disease Control and Prevention. (2015). Comparability of data: BRFSS 2011. Available at: http://www.cdc.gov/brfss/annual_data/2011/pdf/compare_11_20121212.pdf (Accessed 10 June 2015)

Centers for Disease Control and Prevention. (2018a). Behavioral Risk Factor Surveillance System (BRFSS) Prevalence Data (2010 and prior). Available at: https://healthdata.gov/dataset/behavioral-risk-factor-surveillance-system-brfss-prevalence-data-2010-and-prior (Accessed 15 October 2018)

Centers for Disease Control and Prevention. (2018b). Million Hearts. Available at: http://millionhearts.hhs.gov (Accessed 15 October 2018)

Centers for Disease Control and Prevention. (2018c). National Health and Nutrition Examination Survey Data. Hyattsville, MD: U.S. Department of Health and Human Services. Available at: https://wwwn.cdc.gov/nchs/nhanes/ContinuousNhanes/Default.aspx?BeginYear=2007 (Accessed 15 October 2018)

Cutler, D. M. and Meara, E. (2004). Changes in the Age Distribution of Mortality over the Twentieth Century. In D. A. Wise (Ed.), Perspectives on the Economics of Aging (pp. 333-365). University of Chicago Press. https://doi.org/10.7208/chicago/9780226903286.003.0010

Efron, B. and Tibshirani, R. J. (1994). An Introduction to the Bootstrap. Chapman & Hall.

Fowler-Brown, A., Corbie-Smith, G., Garrett, J. and Lurie, N. (2007). Risk of cardiovascular events and death--does insurance matter? Journal of General Internal Medicine, 22, 502-507. https://doi.org/10.1007/s11606-007-0127-2

Gebreab, S. Y., Davis, S. K., Symanzik, J., Mensah, G. A., Gibbons, G. H. and Diez-Roux, A. V. (2015). Geographic variations in cardiovascular health in the United States: contributions of state- and individual-level factors. Journal of the American Heart Association, 4. https://doi.org/10.1161/JAHA.114.001673

Gillespie, C. D., Wigington, C. and Hong, Y. (2013). Coronary heart disease and stroke deaths - United States, 2009. Morbidity and Mortality Weekly Report. Surveillance Summaries (Washington, D.C.: 2002), 62 Suppl 3, 157-160.

Go, A. S., Mozaffarian, D., Roger, V. L., Benjamin, E. J., Berry, J. D., Blaha, M. J., …, Stroke Statistics Subcommittee. (2014).
Heart disease and stroke statistics--2014 update: a report from the American Heart Association. Circulation, 129, e28-e292. https://doi.org/10.1161/01.cir.0000441139.02102.80

Li, C., Balluz, L. S., Ford, E. S., Okoro, C. A., Zhao, G. and Pierannunzi, C. (2012). A comparison of prevalence estimates for selected health indicators and chronic diseases or conditions from the Behavioral Risk Factor Surveillance System, the National Health Interview Survey, and the National Health and Nutrition Examination Survey, 2007-2008. Preventive Medicine, 54, 381-387. https://doi.org/10.1016/j.ypmed.2012.04.003

Li, S., Bruen, B. K., Lantz, P. M. and Mendez, D. (2015). Impact of Health Insurance Expansions on Nonelderly Adults with Hypertension. Preventing Chronic Disease, 12, E105. https://doi.org/10.5888/pcd12.150111

Luepker, R. V., Rosamond, W. D., Murphy, R., Sprafka, J. M., Folsom, A. R., McGovern, P. G. and Blackburn, H. (1993). Socioeconomic status and coronary heart disease risk factor trends. The Minnesota Heart Survey. Circulation, 88, 2172-2179. https://doi.org/10.1161/01.CIR.88.5.2172

Lynch, J. W., Kaplan, G. A., Cohen, R. D., Tuomilehto, J. and Salonen, J. T. (1996). Do cardiovascular risk factors explain the relation between socioeconomic status, risk of all-cause mortality, cardiovascular mortality, and acute myocardial infarction? American Journal of Epidemiology, 144, 934-942. https://doi.org/10.1093/oxfordjournals.aje.a008863

McWilliams, J. M., Zaslavsky, A. M., Meara, E. and Ayanian, J. Z. (2004). Health insurance coverage and mortality among the near-elderly. Health Affairs (Project Hope), 23, 223-233. https://doi.org/10.1377/hlthaff.23.4.223

Nelson, D. E., Powell-Griner, E., Town, M. and Kovar, M. G. (2003). A comparison of national estimates from the National Health Interview Survey and the Behavioral Risk Factor Surveillance System. American Journal of Public Health, 93, 1335-1341. https://doi.org/10.2105/AJPH.93.8.1335

Patel, S. A., Winkel, M., Ali, M. K., Narayan, K. M. and Mehta, N. K. (2015). Cardiovascular Mortality Associated With 5 Leading Risk Factors: National and State Preventable Fractions Estimated From Survey Data. Annals of Internal Medicine. https://doi.org/10.7326/M14-1753

Pierannunzi, C., Hu, S. S. and Balluz, L. (2013). A systematic review of publications assessing reliability and validity of the Behavioral Risk Factor Surveillance System (BRFSS), 2004-2011. BMC Medical Research Methodology, 13, 49. https://doi.org/10.1186/1471-2288-13-49

Qureshi, A. I., Suri, M. F. K., Saad, M. and Hopkins, L. N. (2003). Educational attainment and risk of stroke and myocardial infarction. Medical Science Monitor, 9, 466-473.

Rasmussen, J. N., Rasmussen, S., Gislason, G. H., Buch, P., Abildstrom, S. Z., Køber, L., …, Madsen, M. (2006). Mortality after acute myocardial infarction according to income and education. Journal of Epidemiology & Community Health, 60, 351-356. https://doi.org/10.1136/jech.200X.040972

Tonne, C., Schwartz, J., Mittleman, M., Melly, S., Suh, H. and Goldberg, R. (2005). Long-term survival after acute myocardial infarction is lower in more deprived neighborhoods. Circulation, 111, 3063-3070. https://doi.org/10.1161/CIRCULATIONAHA.104.496174

US Department of Health & Human Services. (2015). Key features of the Affordable Care Act by year. Available at: http://www.hhs.gov/healthcare/facts-and-features/key-features-of-aca-by-year/index.html (Accessed 8 July 2015)

Zohoori, N., Pulley, L., Jones, C., Senner, J., Shoob, H. and Merritt, R. K. (2011).
Conducting a statewide health examination survey: the Arkansas Cardiovascular Health Examination Survey (ARCHES). Preventing chronic disease, 8, A67. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Why are these periods the same: a low earth orbit and oscillations through the center of the earth?

Related: Why does earth have a minimum orbital period?

I was learning about GPS satellite orbits and came across the fact that low Earth orbits (LEO) have a period of about 88 minutes at an altitude of 160 km. When I took a mechanics course a couple of years ago, we were assigned a problem that asked: if one could drill a hole through the middle of the Earth and then drop an object into it, what would the period of oscillation be? It just happens to be a number that I remembered, and it was 84.5 minutes (see Hyperphysics). So if I fine-tuned the LEO orbit to a vanishing altitude, in theory, I could get its period to be 84.5 minutes as well. Of course, I am ignoring air drag. My question is: why are these two periods (oscillating through the earth and a zero-altitude LEO) the same? I am sure that there is some fundamental physical reason that I am missing here. Help.

newtonian-mechanics newtonian-gravity orbital-motion

Carlos

"Wait a moment. You say 'If I fine-tuned the LEO orbit to be 84.5 min' and then you wonder why it would be exactly 84.5 min?" – ACuriousMind ♦

"@ACuriousMind: I think my question is comparing two different oscillations: (1) the period of the person oscillating through a hole in the earth and (2) the period of a LEO orbit fine-tuned to an altitude that resulted in a period of 84.5 min. Is this clearer? I modified the question to reflect your comments." – Carlos

"Not really. You are comparing two oscillations, one of which you adjusted to be precisely of the same period as the other. I really don't get the question." – ACuriousMind ♦

"Note that you aren't just ignoring air drag, you're ignoring resistance from the trees and stuff that the satellite would be crashing through, since it's orbiting at surface level ;-)" – Steve Jessop

"I don't think this qualifies as an answer, so: I think the intuition here is that objects in orbit are free-falling toward the centre of the earth, much the same way the dropped ball is free-falling toward the centre of the earth. So the time it takes an object to do a quarter turn around the earth should be about as long as it takes the ball to reach the centre of the earth. Come to think of it, maybe the ball should be slower, since halfway to the core it has the earth above it pulling it back up. The orbit is always free-falling." – Cruncher

Intuitive explanation

Suppose you drill two perpendicular holes through the center of the Earth. You drop an object through one, then drop an object through the other at precisely the time the first object passes through the center. What you have now are two objects oscillating in just one dimension, but they do so in quadrature. That is, if we were to plot the altitude of each object, one would be something like $\sin(t)$ and the other would be $\cos(t)$.

Now consider the motion of a circular orbit, but think about the left-right movement and the up-down movement separately. You will see it is doing the same thing as your two objects falling through the center of the Earth, but it is doing them simultaneously.

Caveat: an important assumption here is an Earth of uniform density and perfect spherical symmetry, and a frictionless orbit right at the surface. Of course all those things are significant deviations from reality.
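A quick numerical check of this picture (my own sketch, not part of the original thread; it adopts the same uniform-density, frictionless idealization) integrates the tunnel fall and compares it with the grazing-orbit period:

```python
import math

g, R = 9.81, 6.371e6   # surface gravity (m/s^2), Earth radius (m)

# Circular orbit skimming the surface: centripetal acceleration v^2/R = g
T_orbit = 2 * math.pi * math.sqrt(R / g)

# Fall through the tunnel: inside a uniform sphere, a(x) = -(g/R) * x.
# Integrate a quarter oscillation (surface to center) with semi-implicit Euler.
x, v, t, dt = R, 0.0, 0.0, 0.01
while x > 0:
    v += -(g / R) * x * dt
    x += v * dt
    t += dt
T_tunnel = 4 * t       # full period by symmetry

print(T_orbit / 60, T_tunnel / 60)   # both come out near 84.4 minutes
```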
Mathematical proof

Let's consider just the vertical acceleration of two points, one inside the planet and another on the surface, at equal vertical distance ($h$) from the planet's center:

$R$ is the radius of the planet
$g$ is the gravitational acceleration at the surface
$a_p$ and $a_q$ are just the vertical components of the acceleration on each point

If we can demonstrate that these vertical accelerations are equal, then we demonstrate that the differing horizontal positions have no relevance to the vertical motion of the points. Then we can free ourselves to think of vertical and horizontal motion independently, as in the intuitive explanation.

Calculating $a_q$ is simple trigonometry. It's at the surface, so the magnitude of its acceleration must be $g$. Just the vertical component is simply:

$$ a_q = g \sin \theta $$

If you have worked through the "dropping an object through a tunnel in Earth" problem, then you already know that in the case of $p$, its acceleration decreases linearly with its distance from the center of the planet (this is why the "uniform density" assumption is important):

$$ a_p = g \frac{h}{R} $$

$h$ is equal for our two points, and finding it is again simple trigonometry:

$$ h = R \sin \theta $$

$$ \require{cancel} a_p = g \frac{\cancel{R} \sin \theta}{\cancel{R}} = g \sin \theta = a_q $$

This also gives some insight into an unfortunate consequence: this method can be applied only to orbits on or inside the surface of the planet. Outside of the planet, $p$ no longer experiences an acceleration proportional to the distance from the center of mass ($a_p \propto h$), but instead proportional to the inverse square of the distance ($a_p \propto 1/h^2$), according to Newton's law of universal gravitation.

Phil Frost

"To add to this: mathematically, the $z$ coordinate of the projectile is governed by the same equation of motion whether it is orbiting the earth or passing straight through (in the $z$ direction)."

"Great answer Phil! I knew it was something fundamental. This really shows that circular motion and linear periodic motion are really one and the same."

"This shows that the projection of a circular orbit is indeed sinusoidal. But it doesn't show that such a projection is a solution to an object oscillating through the center of the Earth." – BMS

"@BMS I've added a proof." – Phil Frost

"+1, but: 'acceleration linearly decreases with its distance from the center of the planet' — you should probably point out that this only holds for uniform densities, especially given that this assumption is fairly off. I realize you pointed this out right before starting your proof, but it'd be good for future readers to know where exactly this comes into play." – ticster

Phil's answer, while beautifully illustrated, is a little incomplete. It relies on the fact that in the case of the tunnel you're solving the one-dimensional projection of the low earth orbit satellite, but doesn't prove this. I do this below. The force applied on the object, for a sphere of uniform density, is actually:

\begin{eqnarray} F &=& - \frac{4}{3} \pi \frac{G m \rho r^3}{r^2} \\ &=& - m g \frac{r}{R_{earth}} \\ &=& - k r \end{eqnarray}

where $k = \frac{mg}{R_{earth}}$. This is equivalent to a spring problem, whose solution will indeed be sinusoidal with period $2 \pi \sqrt{\frac{R_{earth}}{g}}$, the same as a low earth orbit period.
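As a quick sanity check of that spring analogy (my own sketch, not from the thread), solving the same equation of motion symbolically recovers the sinusoid and its period:

```python
# Symbolic check of x'' = -(g/R) x, the spring equation derived above.
import sympy as sp

t = sp.symbols("t")
g, R = sp.symbols("g R", positive=True)
x = sp.Function("x")

ode = sp.Eq(x(t).diff(t, 2), -(g / R) * x(t))
sol = sp.dsolve(ode, x(t), ics={x(0): R, x(t).diff(t).subs(t, 0): 0})

print(sol)                          # x(t) = R*cos(sqrt(g/R)*t)
print(2 * sp.pi * sp.sqrt(R / g))   # angular frequency sqrt(g/R) => period 2*pi*sqrt(R/g)
```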
An alternate explanation (which really is the same as the answer from @Phil): as per Kepler's laws, an orbit is an ellipse, and the orbital period is determined by the semi-major axis of the ellipse ($T \propto a^{3/2}$). A satellite in the lowest orbit will try to follow a special kind of ellipse (namely, a circle), whose semi-major axis is really the Earth radius (this is the "lowest orbit" because the satellite grazes the ground -- we ignore the atmosphere here). The oscillation in the hole is really another orbit -- it is a degenerate ellipse which has been flattened to a line. Yet its semi-major axis is still the Earth radius. Same semi-major axis, hence same period.

Edit: as was pointed out, that explanation is bogus in two ways:

The degenerate case for a "flattened" ellipse would be a half-diameter. If all the Earth's mass were concentrated at its center, an orbit starting from "ground" level (6300 km or so from the center) with (almost) no lateral velocity would be an accelerated fall toward the center; when close to the center, the object would miss it "by mere inches" and quickly swing around it, before climbing back up to the initial position at ground level. Furthermore, that "flattened ellipse" would have a semi-major axis of length about 3150 km (half the radius), for a period smaller than that of the low orbit by a factor of $2^{3/2} \approx 2.8$.

The Earth's mass is not concentrated at its center. In fact you get an "oscillator" trajectory, which allows you to emerge in New Zealand if you started from England, precisely because the "Earth mass at a single point" model is not the one used in this thought experiment.

While it is understandable that the low orbit and the oscillator end up with periods of the same order of magnitude (they both are kinds of "free fall" against an Earth of the same mass, starting at ground level), that hand-waving remark would have been equally applicable with an oscillator period being twice or half that of the low orbit. They seem to end up quite close to each other, and I now have no idea whether this is mere coincidence or due to some fundamental reason. – Thomas Pornin

Don't Kepler's laws assume that the center of the potential is at one focus of the ellipse, though? (I honestly don't remember.) The foci of a degenerate ellipse are at the ends. – David Z

There's one very small fault with this explanation. Kepler's laws assume the mass of the larger body to be concentrated at its center. When you go through the hole, the effective mass of the earth decreases as you approach the center, so your orbit through the center of the earth is not a Keplerian ellipse. – Tristan

Most of what I know about orbital mechanics I learned from Kerbal Space Program, but I think it's true that the "flattened ellipse" path is not Keplerian. There is a Keplerian orbit for a flattened ellipse: a radial elliptic trajectory, but it's a quite different thing.

"My question is: why are these two periods (oscillating through the earth and a LEO) the same? I am sure that there is some fundamental physical reason that I am missing here. Help."

It's a result of the (flawed) assumption of a uniform-density Earth. The Earth is anything but a constant-density object: the Earth's core is about five times as dense as surface rock.
Gravitational acceleration reaches a maximum of over 10 m/s² at the core-mantle boundary, which is a bit less than halfway to the center of the Earth. A uniform-density model implies that gravitational acceleration is about half the surface value at this depth.

A better model of the Earth is to assume that the acceleration due to gravity is a constant 10 m/s² from the surface to halfway to the center of the Earth and then drops linearly to zero at the center. This yields a period of 76.41 minutes, rather than the 84.3-minute period of a 6371 km orbit (obviously ignoring air drag).

An even better model is to use numerical integration with the Preliminary Reference Earth Model (A. Dziewonski and D. Anderson (1981), "Preliminary reference Earth model," Physics of the Earth and Planetary Interiors, 25:4, 297-356; tabular data at http://geophysics.ou.edu/solid_earth/prem.html). This yields a period of 76.38 minutes, which is very close to the simple model described above. – David Hammen

Phil Frost's argument in his answer (v4) is correct. Assuming a spherical Earth with constant density $\rho$ (and assuming for simplicity that the object can for some reason move freely$^1$ through the Earth, so that there is no air drag, so that we can skip all the tunnel drilling, and so that we need not worry about the Earth's rotation pressing the object against the tunnel wall; and assuming that we use an Earth-Centered Inertial (ECI) coordinate system, so that there are no fictitious forces; etc.), the governing 3D vector-valued ODE (derived from Newton's laws) is
$$\tag{1} \frac{d^2{\bf r}}{dt^2}~=~-\frac{4\pi G\rho}{3}{\bf r}, \qquad\qquad {\bf r}~\equiv~\left(\begin{array}{c}x\cr y\cr z\end{array}\right).$$
This ODE (1) separates into three independent SHOs for the $x$, $y$ and $z$ coordinates, with common characteristic period
$$\tag{2} T~=~\sqrt{\frac{3\pi}{G\rho}}~\approx~ 84~ {\rm min}.$$
In particular, for an arbitrary trajectory with $|{\bf r}|\leq R$ (the radius of the spherical Earth), the period is independent of the initial position and initial velocity.

$^1$More precisely: move freely apart from gravity. – Qmechanic ♦
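To cross-check the numbers in the two answers above, here is a minimal integration sketch (mine, not from the thread) of the radial fall for Hammen's two-zone gravity model (g constant at 10 m/s² down to half the radius, then decreasing linearly to zero) and for the uniform-density model; the semi-implicit Euler scheme and the step size are arbitrary choices. The full oscillation period is four times the surface-to-center fall time.

    import math

    R = 6.371e6   # Earth radius, m

    def g_two_zone(r):
        """Hammen's piecewise model: 10 m/s^2 for r > R/2, linear to 0 below."""
        return 10.0 if r > R / 2 else 10.0 * r / (R / 2)

    def fall_time(gravity, dt=0.01):
        """Time to fall from the surface to the center (a quarter period)."""
        r, v, t = R, 0.0, 0.0
        while r > 0.0:
            v -= gravity(r) * dt   # semi-implicit Euler: update velocity first
            r += v * dt
            t += dt
        return t

    print(f"two-zone model : {4 * fall_time(g_two_zone) / 60:.2f} min")              # ~76.4
    print(f"uniform model  : {4 * fall_time(lambda r: 9.81 * r / R) / 60:.2f} min")  # ~84.4

The two-zone run reproduces the 76.4 minutes quoted above, and the uniform-density run reproduces the 84-minute figure from the rest of the thread.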
Error function and complementary error function (calculator notes)

The error function, denoted erf, is defined by the integral erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt; it is also known as the Gauss error function. Since the integrand exp(−t²) is an even function, erf is odd: erf(−x) = −erf(x). To use approximations stated for non-negative x at negative arguments, use this odd symmetry. On the real axis, erf(z) approaches unity as z → +∞ and −1 as z → −∞. Erf is closely related to the normal probability curve: the cumulative distribution function of a normally distributed random variable X is CDF(X) = 0.5 + 0.5 erf[(X − μ)/(σ√2)], where μ is the mean and σ the standard deviation. The error function is a special case of the Mittag-Leffler function and can also be expressed as a confluent hypergeometric function (Kummer's function).

The inverse error function is usually defined with domain (−1, 1), and it is restricted to this domain in many computer algebra systems. For |z| < 1 we have erf(erf⁻¹(z)) = z; conversely, for −1 < x < 1 there is a unique real number denoted erf⁻¹(x) satisfying erf(erf⁻¹(x)) = x. The coefficients of its series expansion are catalogued in OEIS A002067 and A007019. If you don't have access to an error function calculator, erf can be approximated by elementary formulas; using the value a ≈ 0.147 in one well-known approximation reduces the maximum error to about 0.00012, and that approximation can be inverted to approximate erf⁻¹(x). A classical rational approximation uses the constant p = 0.47047 with coefficients a₁, a₂, a₃, and erfc can be computed with an error of less than 1×10⁻⁷ by a Chebyshev approximation (see Numerical Recipes in C, p. 176). For any real x, Newton's method can also be used to compute erf⁻¹(x), and likewise the inverse imaginary error function erfi⁻¹(x).

In mathematics, the complementary error function (also known as the Gauss complementary error function) is defined as erfc(x) = 1 − erf(x). Tables showing the values of erf(x) and erfc(x) for a range of x are commonly provided, and another form of erfc(x) for non-negative x is known as Craig's formula. Some authors discuss the more general functions
$$ E_n(x) = \frac{n!}{\sqrt{\pi}} \int_0^x e^{-t^n}\,dt, $$
which can equivalently be expressed for x > 0 using the Gamma function and the incomplete Gamma function. When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function w(z).

Implementations are widely available. Mathematica implements erf as Erf and Erfc for real and complex arguments (also available in Wolfram|Alpha); Maxima provides both erf and erfc for real and complex arguments; one widely used implementation is based on W. J. Cody's algorithm; Java's Apache commons-math provides implementations of erf and erfc for real arguments; and a D package exists providing efficient and accurate implementations of complex error functions, along with the Dawson, Faddeeva, and Voigt functions. In C, for complex double arguments the function names cerf and cerfc are "reserved for future use"; the missing implementation is provided by the open-source project libcerf, which is based on the Faddeeva package.

In Python, a Stack Overflow exchange on computing erf over numpy arrays notes that runtimes only exceed a second when one naively loops over the elements; the vectorized call is far faster:

    import numpy as np
    from scipy.special import erf

    def vectorized(n):
        x = np.random.randn(n)
        return erf(x)                # one vectorized call into scipy

    def loopstyle(n):
        x = np.random.randn(n)
        return [erf(v) for v in x]   # Python-level loop: much slower

    # In IPython:
    # %timeit vectorized(10**6)
    # %timeit loopstyle(10**6)
The stability of bifurcating steady states of several classes of chemotaxis systems

Discrete & Continuous Dynamical Systems - B, January 2015, 20(1): 231-248. doi: 10.3934/dcdsb.2015.20.231

Qian Xu, Department of Basic Courses, Beijing Union University, Beijing 100101

Received: October 2013; Revised: July 2014; Published: November 2014

Abstract: This paper concerns the stability of the bifurcating steady states obtained in [13] for several chemotaxis systems. By spectral analysis and the principle of linearized stability, we prove that the bifurcating steady states are stable when the parameters satisfy certain conditions.

Keywords: Stability, spectral analysis, bifurcating steady states, chemotaxis systems, expansion.

Mathematics Subject Classification: Primary: 35B32, 35B35; Secondary: 92C1.

Citation: Qian Xu. The stability of bifurcating steady states of several classes of chemotaxis systems. Discrete & Continuous Dynamical Systems - B, 2015, 20(1): 231-248. doi: 10.3934/dcdsb.2015.20.231

References:
[1] X. Chen, J. Hao, X. Wang, Y. Wu and Y. Zhang, Stability of spiky solution of the Keller-Segel's minimal chemotaxis model, Journal of Differential Equations, 257 (2014), 3102-3134. doi: 10.1016/j.jde.2014.06.008.
[2] A. Chertock, A. Kurganov, X. Wang and Y. Wu, On a chemotaxis model with saturated chemotactic flux, Kinetic and Related Models, 5 (2012), 51-95. doi: 10.3934/krm.2012.5.51.
[3] M. G. Crandall and P. H. Rabinowitz, Bifurcation from simple eigenvalues, J. Functional Analysis, 8 (1971), 321-340. doi: 10.1016/0022-1236(71)90015-2.
[4] M. Crandall and P. Rabinowitz, Bifurcation, perturbation of simple eigenvalues and linearized stability, Arch. Rational Mech. Anal., 52 (1973), 161-180.
[5] T. Hillen and K. J. Painter, A user's guide to PDE models for chemotaxis, J. Math. Biol., 58 (2009), 183-217. doi: 10.1007/s00285-008-0201-3.
[6] D. Horstmann, From 1970 until now: The Keller-Segel model in chemotaxis and its consequences I, Jahresber. DMV, 105 (2003), 103-165.
[7] D. Horstmann, From 1970 until now: The Keller-Segel model in chemotaxis and its consequences II, Jahresber. DMV, 106 (2004), 51-69.
[8] E. Keller and L. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theoret. Biol., 26 (1970), 399-415. doi: 10.1016/0022-5193(70)90092-5.
[9] X. Lai, X. Chen, C. Qin and Y. Zhang, Existence, uniqueness, and stability of bubble solutions of a chemotaxis model, preprint.
[10] A. B. Potapov and T. Hillen, Metastability in chemotaxis models, J. of Dynamics and Diff. Eqs., 17 (2005), 293-330. doi: 10.1007/s10884-005-2938-3.
[11] R. Schaaf, Stationary solutions of chemotaxis systems, Trans. Amer. Math. Soc., 292 (1985), 531-556. doi: 10.1090/S0002-9947-1985-0808736-1.
[12] B. Sleeman, M. Ward and J. Wei, The existence, stability, and dynamics of spike patterns in a chemotaxis model, SIAM J. Appl. Math., 65 (2005), 790-817. doi: 10.1137/S0036139902415117.
[13] X. Wang and Q. Xu, Spiky and transition layer steady states of chemotaxis systems via global bifurcation and Helly's compactness theorem, J. Math. Biol., 66 (2013), 1241-1266. doi: 10.1007/s00285-012-0533-x.
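For readers outside the field, the "principle of linearized stability" invoked in the abstract can be stated as follows (a standard formulation, not quoted from the paper): for an evolution equation with a steady state,
$$ u_t = F(u), \qquad F(u^*) = 0, \qquad \mathcal{L} := DF(u^*), $$
$$ \sup\,\{\operatorname{Re}\lambda : \lambda \in \sigma(\mathcal{L})\} < 0 \ \Longrightarrow\ u^* \text{ is locally asymptotically stable,} $$
and the spectral analysis consists in locating the spectrum $\sigma(\mathcal{L})$ of the linearization about each bifurcating steady state.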
Analysis and PDE Seminar

The Analysis and PDE seminar takes place weekly on Tuesdays at 14:15. Location for Fall 2020: Auditorium 3, Realfagbygget, Allégaten 41, and Zoom.

NEXT SEMINAR:

Date and place: October 20, 2020, Zoom, 14:15. Contact the seminar organizer for the link.
Speaker: Mario Maurelli, Università degli Studi di Milano
Title: Regularization by noise
Abstract: We say that a regularization by noise phenomenon occurs if a possibly ill-posed ordinary or partial differential equation becomes well-posed by adding a suitable noise term. This phenomenon is counter-intuitive at first (one adds an irregular noise term to an irregular deterministic part and gets well-posedness). Nevertheless, it has been shown for a wide class of ODEs and some PDEs, and it has attracted a lot of attention in recent years, with the long-term goal of proving regularization by noise for equations coming from physics, especially fluid dynamics (see e.g. [Flandoli, St. Flour Lect. Notes, 2015]). In the first part of the talk, I will review some results and techniques of regularization by noise for ordinary differential equations. In the second part of the talk, I will review regularization by noise for linear PDEs of transport type, and I will show a regularization by noise result for a scalar conservation law with space-irregular drift (the latter is joint work with Benjamin Gess).
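As a concrete illustration of the phenomenon in the abstract above, here is a minimal simulation sketch (my own, not from the talk). The ODE x' = sign(x)·√|x| has infinitely many solutions from x(0) = 0 (for instance x ≡ 0 and x(t) = t²/4, a classical Peano-type non-uniqueness), while adding a small additive Brownian noise is the textbook route to restoring uniqueness; the drift, noise level ε, and step size below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def drift(x):
        # x' = sign(x) * sqrt(|x|): from x(0)=0 both x(t)=0 and x(t)=t^2/4 solve it
        return np.sign(x) * np.sqrt(np.abs(x))

    def euler_maruyama(eps, T=2.0, n=2000, paths=5):
        """Euler-Maruyama for dX = drift(X) dt + eps dW, started at X(0) = 0."""
        dt = T / n
        x = np.zeros(paths)
        for _ in range(n):
            x += drift(x) * dt + eps * np.sqrt(dt) * rng.standard_normal(paths)
        return x

    print("eps = 0   :", euler_maruyama(0.0))   # numerics stay frozen on the 0 solution
    print("eps = 0.1 :", euler_maruyama(0.1))   # paths escape 0 and track +/- t^2/4, up to noise
    print("t^2/4 at T:", 2.0**2 / 4)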
UPCOMING SEMINARS:

Date and place: October 27: break due to the workshop "Analysis and Geometry in Norway" on Friday, October 30.

Date and place: November 3, 2020, Auditorium 3
Speaker: Razvan Mosincat, University of Bergen

Date and place: November 10: break due to a session of MAT331: Infinite-dimensional geometry.

Date and place: December 10, 2020, Auditorium 3
Speaker: Mathias Palmstrøm, University of Bergen

RECENT SEMINARS:

Date and place: October 13: break due to a session of MAT331: Infinite-dimensional geometry.

Date and place: October 6, 2020, Zoom
Speaker: Stefan Sommer, University of Copenhagen
Title: Stochastic Shape Analysis and Probabilistic Geometric Statistics
Abstract: Analysis and statistics of shape variation can be formulated in a geometric setting with geodesics modelling transitions between shapes. In the talk, I will show how such smooth geodesic models can be extended to account for noise, resulting in stochastic shape evolutions and stochastic shape matching algorithms. I will connect these ideas to geometric statistics, the statistical analysis of general manifold-valued data. Taking a probabilistic approach to geometric statistics leads to a geometric version of principal component analysis, and most probable paths for the resulting stochastic flows can be identified as geodesics for a sub-Riemannian metric on the frame bundle of the underlying manifold.

Date and place: September 29, 2020, 14:15-16:00, Zoom
Speaker: Erlend Grong, University of Bergen
Title: On the equivalence problem on sub-Riemannian manifolds
Abstract: How do we determine whether two objects are "the same"? Every topic of mathematics has its own notion of equivalence, usually referring to the existence of a certain map preserving all of the properties we are interested in: isomorphisms in algebra, homeomorphisms in topology, isometries in differential geometry. However, given two objects from any of these examples, it is usually not evident whether or not such an "equivalence map" exists. We will concern ourselves with this problem in differential geometry. In the first hour, we will give an introduction and discuss the problem in Riemannian geometry. We will in particular look at the role curvature plays in this question. For the second hour, we will look at Cartan geometry and its applications to sub-Riemannian manifolds.

Date and place: September 22, 2020, 14:15-15:00, Zoom
Speaker: Francesca Tripaldi, University of Bern
Title: Rumin complex on nilpotent Lie groups and applications
Abstract: The present work focuses on introducing the tools needed to extend the construction of the Rumin complex to arbitrary nilpotent Lie groups (not necessarily gradable ones). This then enables the direct application of non-vanishing results for the $\ell^{q,p}$ cohomology to all nilpotent Lie groups.

Date and place: September 15: break due to a session of MAT331: Infinite-dimensional geometry.

Date and place: September 8, 2020, 14:15-16:00, Auditorium 3
Speaker: Didier Pilod, University of Bergen
Title: On the unique continuation of solutions to nonlocal nonlinear dispersive equations
Abstract: The first part of this talk is an introduction to the unique continuation problem in PDE. We will focus on the elliptic problem and explain how to deal with nonlocal equations through the Caffarelli-Silvestre extension. In the second part, we explain how these ideas apply to a large class of nonlocal dispersive equations. If time allows, we will also discuss unique continuation properties for the water-wave equations. This talk is based on a joint work with Carlos Kenig (Chicago), Gustavo Ponce (Santa Barbara) and Luis Vega (Bilbao).

Speaker: Irina Markina, University of Bergen
Title: One-parametric family of geodesics on the Stiefel manifold
Abstract: We start from a very mild introduction to the family of orthogonal and skew-symmetric matrices. We introduce a one-parametric family of metrics on the direct product of orthogonal matrices. Then we explain what the Stiefel manifold is and how it is related to the group of orthogonal matrices. The final goal is to explain how geodesics on the Stiefel manifold can be found by making use of the geodesics on the group of orthogonal matrices. The constructed family of geodesics for the introduced one-parametric family of metrics includes various known cases used in applied mathematics. This is a joint work with K. Hueper (University of Würzburg) and F. Silva Leite (University of Coimbra).

The Analysis and PDE seminar is cancelled for the rest of the semester. See you in the fall of 2020.

Date and place: March 10, 2020, Seminar room Sigma
Speaker: Adan Corcho, Federal University of Rio de Janeiro, and Miguel Alejo, University of Cordoba
Title: Stability of nonlinear patterns in low dimensional Bose gases
Abstract: In this talk we will present recent results on the study of the orbital stability properties of the simplest nonlinear pattern in low dimensional Bose gases, the black soliton solution. In the first part of the talk, we will introduce basic notions and concepts related to this quantum model, as well as physical and mathematical motivations for approaching the problem. In the second part of the talk, we will present a more detailed scheme of the main result of this work on the stability of the black soliton. This is a solution of a one-dimensional nonintegrable defocusing Schrödinger model, represented by the quintic Gross-Pitaevskii equation (5GP).
Once the black soliton is characterized as a critical point of the associated Ginzburg-Landau energy of the 5GP, I will show some coercivity properties of that energy around the black (and dark) soliton. We will also explain how to impose suitable orthogonality conditions and how to control the growth of some modulation parameters to finally prove that perturbations generated by the symmetries of the 5GP stay close to the black soliton in the energy space.

Date and place: February 25, 2020, Seminar room Sigma
Speaker: Frédéric Vallet, Université de Strasbourg
Title: On the multi-solitons of the Zakharov-Kuznetsov equations
Abstract: In the field of dispersive equations, traveling waves are among the most fundamental objects. These waves, also called solitons, keep their velocity and form for all time, and are considered the elementary bricks of dispersive equations. The soliton resolution conjecture states that, in long time, a solution of the Zakharov-Kuznetsov equations (ZK) can be decomposed into a sum of solitons plus a small remainder. In the first talk, I will introduce the equations (ZK) and their context, then substantiate the existence and properties of solitons, and conclude with the existence and uniqueness of solutions behaving in long time like a sum of decoupled solitons: the multi-solitons. The second talk will be dedicated to the construction of multi-solitons.

Date and place: February 18, 2020, at 14:15, Seminar room Sigma
Speaker: Jacek Jendrej, University Paris 13
Title: Strongly interacting kink-antikink pairs for scalar fields on a line
Abstract: I will present recent joint work with Michał Kowalczyk and Andrew Lawrie. A nonlinear wave equation with a double-well potential in 1+1 dimension admits stationary solutions called kinks and antikinks, which are minimal-energy solutions connecting the two minima of the potential. We study solutions whose energy is equal to twice the energy of a kink, which is the threshold energy for the formation of a kink-antikink pair. We prove that, up to translations in space and time, there is exactly one kink-antikink pair having this threshold energy. I will explain the main ingredients of the proof.

Date and place: February 18, 2020, 15:15, Seminar room Sigma
Speaker: Gianmarco Molino, University of Connecticut
Title: Comparison Theorems on H-type Foliations: an Invitation to sub-Riemannian Geometry
Abstract: Sub-Riemannian geometry is a generalization of Riemannian geometry to spaces that have a notion of distance but have restrictions on the valid directions of motion. These arise in a natural way in remarkably many settings. This talk will include a review of Riemannian geometry and an introduction to sub-Riemannian geometry. We'll then introduce the notion of H-type foliations; these are a family of sub-Riemannian manifolds that generalize both the K-contact structures arising in contact geometry and the H-type group structures. Our main focus will be recent results giving uniform comparison theorems for the Hessian and Laplacian on a family of Riemannian metrics converging to sub-Riemannian ones. From this we can conclude a sharp sub-Riemannian Bonnet-Myers-type theorem.

Date and place: February 11, 2020, Seminar room Sigma
Speaker: Torstein Nilssen, University of Agder
Title: Introduction to rough paths. Introductory part of the workshop "Young researchers between geometry and stochastic analysis".

Date and place: January 28, 2020, Seminar room Delta
Title: Functions of random variables, inequalities on path space and geometry
Abstract: We will give a quick introduction to functions with random inputs as functions on path space. We describe how to develop a functional analysis of such functions, first over flat space and then over curved space. We will end by describing the relationship between bounded curvature and functional inequalities on path space, and by presenting some new results relating functional inequalities on path space to the curvature of sub-Riemannian spaces. The results are obtained in collaboration with Li-Juan Cheng and Anton Thalmaier (arXiv:1912.03575).

Speaker: Zhenyu Wang, University of Bergen and Harbin Institute of Technology at Weihai
Title: Numerical simulations for stochastic differential equations on manifolds by the stochastic symmetric projection method
Abstract: The stochastic standard projection technique, an efficient approach to simulating stochastic differential equations on manifolds, is widely used in practical applications. However, stochastic standard projection methods usually destroy geometric properties (such as symplecticity or reversibility), even when the underlying methods are symplectic or symmetric, which seriously affects the long-time behavior of the numerical solutions. In this talk, a modification of stochastic standard projection methods for stochastic differential equations on manifolds is presented. The modified methods, called stochastic symmetric projection methods, retain the symmetry and the ρ-reversibility of the underlying methods and keep the numerical solutions on the correct manifolds. The mean-square convergence order of these methods is proved to be the same as that of the underlying methods. Numerical experiments verify the theoretical results and show the superiority of the stochastic symmetric projection methods over the stochastic standard projection methods.
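To illustrate the standard projection idea that the talk above takes as its starting point, here is a minimal sketch (mine, not from the talk): an Euler-Maruyama step for a stochastic flow in R³ followed by orthogonal projection back onto the unit sphere. The chosen drift field, noise level, and step size are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    def em_projection_step(x, dt):
        """One Euler-Maruyama step followed by projection onto the unit sphere."""
        drift = np.cross(np.array([0.0, 0.0, 1.0]), x)   # tangent rotation field
        noise = rng.standard_normal(3) * np.sqrt(dt)
        y = x + drift * dt + 0.2 * noise                 # step in ambient space
        return y / np.linalg.norm(y)                     # standard projection

    x = np.array([1.0, 0.0, 0.0])
    for _ in range(10_000):
        x = em_projection_step(x, 1e-3)

    print(x, np.linalg.norm(x))   # the iterate stays on the sphere to machine precision

The symmetric projection methods of the talk refine exactly this last projection step so that reversibility of the underlying scheme is preserved.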
Date and place: January 28, 2020, Seminar room Delta Title: Functions of random variable, inequalities on path space and geometry. We will give a quick introduction of functions with random inputs as functions on path space.We describe how to develop a functional analysis of such functions, first over flat space and then over curves space.We will end by describing the relationship between bounded curvature and functional inequalities on path space.We will end with presenting some new results relating functional inequalities on path space and curvature of sub-Riemannian spaces.The results are obtained in collaboration with Li-Juan Cheng and Anton Thalmaier (arXiv:1912.03575). Speaker: Zhenyu Wang, University of Bergen and Harbin Institute of Technology at Weiha Title: Numerical simulations for stochastic differential equations on manifolds by stochastic symmetric projection method. Stochastic standard projection technique, as an efficient approach to simulate stochastic differential equations on manifolds, is widely used in practical applications. However, stochastic standard projection methods usually destroy the geometric properties (such as symplecticity or reversibility), even though the underlying methods are symplectic or symmetric, which seriously affect long-time behavior of the numerical solutions. In this talk, a modification of stochastic standard projection methods for stochastic differential equations on manifolds is presented. The modified methods, called the stochastic symmetric projection methods, remain the symmetry and the ρ -reversibility of the underlying methods and maintain the numerical solutions on the correct manifolds. The mean square convergence order of these methods are proved to be the same as the underlying methods'. Numerical experiments are implemented to verify the theoretical results and show the superiority of the stochastic symmetric projection methods over the stochastic standard projection methods. Date and place: November 26, 2019, Seminar room Sigma Speaker: Jonatan Stava, University of Bergen Title: Cartan Connection in Sub-Riemannian Geometry. If we can associate a Cartan geometry with a sub-Riemannian manifold, the Cartan connection will give a notion of curvature. In the seminar we will look at how we can associate a Lie algebra to each point of a bracket generating sub-Riemannian manifold which is called the sub-Riemannian symbol of the manifold. In a paper by T. Morimoto (2006), he describes how one can obtain a Cartan geometry from a sub-Riemannian manifold with constant symbol in a canonical way. We will see how this method apply to sub-Riemannian manifolds with the Heisenberg Lie algebra as constant symbol. Title: Crash course in Brownian motion and stochastic integration, Part VI Speaker: Achenef Temesgen, University of Bergen Title: Dispersive estimates for the fractal wave equation Title: Crash course in Brownian motion and stochastic integration, Part V Speaker: Evgueni Dinvay, University of Bergen Title: The Whitham solitary waves. The Whitham equation was proposed as an alternative to the Korteweg-de Vries equation. Having the same nonlinearity as the latter, it featuresthe same linear dispersion relation as the full water-wave problem. 
Date and place: November 6, 2019, Seminar room Sigma
Speaker: Razvan Mosincat, University of Bergen
Title: Crash course in Brownian motion and stochastic integration, Part IV

Date and place: October 29, 2019, Seminar room Sigma
Title: Crash course in Brownian motion and stochastic integration, Part III

Speaker: Niels Martin Møller, University of Copenhagen
Title: Mean curvature flow and Liouville-type theorems
Abstract: In the first part we review the basics of mean curvature flow and its important solitons, which are model singularities for the flow, with a view towards minimal surface theory and elliptic PDEs. These solitons have been studied since the first examples were found by Mullins in 1956, and one may consider the more general class of ancient flows, which arise as singularity models by blow-up. Insights from gluing constructions indicate that classifying them as such is not viable, except e.g. under various curvature assumptions. In the talk's second part, however, without restrictions on curvature, we will show that if one applies certain "forgetful" operations - discard the time coordinate and take the convex hull - then there are only four types of behavior. To show this, we prove a natural new "wedge theorem" for proper ancient flows, which adds to a long story: it is reminiscent of a Liouville theorem (as for holomorphic functions), and it generalizes our own wedge theorem for self-translaters from 2018 (a main motivating example throughout the talk), which implies the minimal surface case by Hoffman-Meeks (1990), which in turn contains the classical theorems of Omori (1967) and Nitsche (1965). This is joint work with Francesco Chini (U Copenhagen).

Speaker: Alexander Schmeding
Title: Crash course in Brownian motion and stochastic integration, Part II

Title: Crash course in Brownian motion and stochastic integration, Part I

Date and place: October 1, 2019, Seminar room Sigma
Speaker: Pavel Gumenyuk, University of Stavanger
Title: Univalent functions with quasiconformal extensions
Abstract: Univalent functions (i.e. conformal mappings) admitting quasiconformal extensions are a classical topic in Geometric Function Theory, closely related to Teichmüller theory. We consider the class S(k), 0<k<1, of all univalent functions in the unit disk (suitably normalized at the origin) which are restrictions of k-quasiconformal automorphisms of the complex plane. One of the basic tools for finding sufficient conditions for membership in S(k) is a construction of quasiconformal extensions based on Loewner's parametric method, discovered by Jochen Becker in 1972. Becker's extensions have some special properties not shared by generic quasiconformal mappings; in particular, the corresponding class S^B(k) is a proper subset of S(k).
This talk is based on recent joint works with István Prause and with Ikkei Hotta. We give a complete characterization of Becker's extensions in terms of the Beltrami coefficient. This result sheds some light on the relationship between the classes S^B(k) and S(k). Our special interest in the class S^B(k) is due to the fact that it admits a parametric representation; unfortunately, no similar results are known for the whole class S(k). Sharp estimation of the Taylor coefficients in classes of holomorphic functions is an old problem. For the class S(k), it is open for all coefficients a_n, n>2. R. Kühnau and W. Niske in 1977 raised the question of whether there exists k_0>0 such that the minimum of |a_3| in S(k) equals k for all 0<k<k_0. Using Loewner's parametric representation of S^B(k), we show that such a k_0 does not exist. (This disproves a previously known result in this direction by S. Krushkal.)

Date and place: September 17, 2019, Seminar room Sigma
Speaker: Luc Molinet, Université de Tours
Title: On the asymptotic stability of the Camassa-Holm peakons
Abstract: The Camassa-Holm equation possesses peaked solitary waves called peakons. We prove a rigidity result for uniformly almost localized (up to translations) H^1-global solutions of the Camassa-Holm equation with a momentum density that is a non-negative finite measure. More precisely, we show that such a solution has to be a peakon. As a consequence, we prove that peakons are asymptotically stable in the class of H^1-functions with a momentum density that is a non-negative finite measure.

Title: Unconditional uniqueness of solutions to the Benjamin-Ono equation
Abstract: The Benjamin-Ono equation (BO) arises as a model PDE for the propagation of long one-dimensional waves at the interface of two layers of fluids with different densities. From the analytical point of view, it poses technical difficulties due to its quasilinear character. The global well-posedness in L^2 of BO was first shown by Ionescu and Kenig using an intricate functional setting. Later, Molinet and Pilod, and more recently Ifrim and Tataru, gave different and simpler proofs. In this talk, we are interested in the unconditional uniqueness of solutions to BO. Namely, for a given initial datum we establish that there is only one solution, without requiring any auxiliary condition on the solution itself. To this purpose we will use a method based on normal form reductions.

Date and place: September 3, 2019, Seminar room Sigma
Speaker: Alexander Schmeding, UiB
Title: An invitation to infinite dimensional geometry
Abstract: Many objects in differential geometry are intimately linked with infinite dimensional structures. For example, to a manifold one can associate its diffeomorphism group, which turns out to be an infinite dimensional Lie group. It carries geometric information which is of relevance in problems from fluid dynamics. After a short introduction to infinite-dimensional structures, I will discuss some connections between finite and infinite dimensional differential geometry. As a main example we will then consider the Euler equation of an incompressible fluid. Due to an observation by Arnold and the work of Ebin and Marsden, one can reformulate this partial differential equation as an ordinary differential equation, but on an infinite dimensional manifold. Using geometric techniques, local well-posedness of the Euler equation can be established. If time permits we will then discuss a stochastic version of these results, which is recent work together with M.
Maurelli (Milano) and K. Modin (Chalmers, Gothenburg). The talk is supposed to give an introduction to these topics, so we will suppose familiarity neither with infinite-dimensional manifolds and their geometry nor with stochastic analysis.

Date and place: August 27, 2019, Seminar room Sigma
Speaker: Adán J. Corcho, Universidade Federal do Rio de Janeiro
Title: On the global dynamics for some dispersive systems in nonlinear optics
Abstract: We consider two families of coupled equations in the context of nonlinear optics, whose coupling terms are given by quadratic nonlinearities. The first system is a perturbation of the classic cubic nonlinear Schrödinger equation by a dissipation delay term induced by the medium (the Schrödinger-Debye system). In the H^1-critical dimension, we present recent results about an alternative between the possible existence of blow-up solutions and the growth of the Sobolev norm with high regularity with respect to the delay parameter of the system. The problem of the formation of singularities in finite or infinite time remains open for this system. The second model is given by the nonlinear coupling of two Schrödinger equations, and we will show the formation of singularities in the L^2-critical and supercritical cases using the dynamics coming from the Hamiltonian structure. Furthermore, we derive some stability and instability results concerning the ground-state solutions of this model.

Place: the seminar room 4A9f (\delta)
Speaker: Didier Pilod, BFS Researcher, Mathematical Department, UiB

Speaker: Eirik Berge, PhD student, Mathematical Department, NTNU
Title: Decomposition Spaces From a Metric Geometry Standpoint
Abstract: In this talk, I will introduce decomposition (function) spaces and discuss a few concrete examples. These are function spaces which appear in different subgenres of analysis, such as harmonic analysis, time-frequency analysis and PDEs. I will explain how one can use metric space geometry (more precisely, large scale geometry) to understand and unify these spaces. Finally, if time permits, I will discuss how one decomposition space can (or cannot) embed in a geometric way into another decomposition space, and how this can be detected by utilizing metric geometry.

Speaker: Erlend Grong, Postdoc, UiB, University of Paris Sud, France

Speaker: Eivind Schneider, PhD student, University of Tromsø

Speaker: Razvan Mosincat, Postdoc, UiB

Speaker: Claudio Muñoz, Associate Professor, University of Chile, Santiago, Chile

Speaker: Mauricio Godoy Molina, Assistant Professor, Universidad de La Frontera, Temuco, Chile
Title: The Volterra equation on manifolds
Abstract: The analysis of integro-differential equations has been in vogue for many years, and many results have been produced by changing slightly the domain of parameters, changing slightly the space of functions, or changing slightly the notion of derivative. This talk will deal with the latter, for reasons that will be discussed at length in the talk. On the application side, these equations appear in models of sub-diffusive processes; but for the pure mathematician, if we need to apply convolutions, we had better work on the real line instead of a manifold. In this talk, I will discuss some of the ideas we have been pondering. The aim is to extend some of the results obtained for Euclidean space to Riemannian manifolds, and to do that we need to fill in many technical analytic details. This is a work in progress with Juan Carlos Pozo (Universidad de La Frontera).
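As a toy illustration of the integro-differential equations mentioned in the abstract above, here is a minimal sketch (mine, not from the talk) of a scalar Volterra equation u'(t) = -(k * u)(t) discretized with a simple rectangle rule; the kernels, step size, and midpoint evaluation (which sidesteps the kernel singularity) are arbitrary choices.

    import math

    def solve_volterra(kernel, dt, n_steps, u0=1.0):
        """Explicit rectangle-rule scheme for u'(t) = -(k * u)(t), u(0) = u0."""
        u = [u0]
        for n in range(n_steps):
            integral = dt * sum(kernel((n - j + 0.5) * dt) * u[j] for j in range(n + 1))
            u.append(u[-1] - dt * integral)
        return u

    # Sanity check: k = 1 gives u'' = -u, i.e. u(t) = cos(t).
    u = solve_volterra(lambda tau: 1.0, dt=1e-2, n_steps=314)
    print(u[-1], math.cos(3.14))   # both close to -1

    # A weakly singular kernel tau^(-1/2) / Gamma(1/2) models the memory effects
    # behind sub-diffusive behavior:
    u_sub = solve_volterra(lambda tau: tau ** -0.5 / math.gamma(0.5), dt=1e-2, n_steps=314)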
Speaker: Irina Markina, Professor, Mathematical Department, UiB
Title: On normal and abnormal geodesics in sub-Riemannian geometry
Abstract: This is a lecture oriented towards the master students in the Analysis and PDE group. We will revise the notion of a Riemannian geodesic and the Hamiltonian formalism, and show what new type of geodesics appears in sub-Riemannian geometry.

Speaker: Arnaud Eychenne, PhD student, Mathematical Department, UiB
Title: On the stability of 2D dipolar Bose-Einstein condensates
Abstract: We study the existence of energy minimizers for a Bose-Einstein condensate with dipole-dipole interactions, tightly confined to a plane. The problem is critical in that the kinetic energy and the (partially attractive) interaction energy behave the same under mass-preserving scalings of the wave function. We obtain a sharp criterion for the existence of ground states, involving the optimal constant of a certain generalized Gagliardo-Nirenberg inequality.

Speaker: Luca Galimberti, Postdoc, University of Oslo
Title: Well-posedness theory for stochastically forced conservation laws on Riemannian manifolds
Abstract: We are given an n-dimensional smooth closed manifold M, endowed with a smooth Riemannian metric h. We study the Cauchy problem for a first-order scalar conservation law with stochastic forcing given by a cylindrical Wiener process W. After providing a reasonable notion of solution, we prove an existence and uniqueness result for our Cauchy problem by showing convergence of a suitable parabolic approximation of it. This is achieved thanks to a generalized Itô formula for weak solutions of a wide class of stochastic partial differential equations on Riemannian manifolds. This is a joint work with K.H. Karlsen (UiO).

Speaker: Evgueni Dinvay, PhD student, Mathematical Department, UiB
Title: Global well-posedness for the BBM equation
Abstract: The regularized long-wave or BBM equation describes the unidirectional propagation of long surface water waves. We will regard an initial value problem for the BBM equation. It will be shown how to prove its local well-posedness in time in the Sobolev spaces H^s on the real line, applying the fixed point argument. Due to conservation of the H^1 norm of solutions, we automatically get global well-posedness in H^1. From H^1 the global result will be extended to the case 0<s<1.

Speaker: Jonatan Stava, master student, Mathematical Department, UiB
Title: On de Rham cohomology
Abstract: A smooth introduction to de Rham cohomology, understandable for master students, will be given.

Speaker: Luis Marin, master student, Mathematical Department, UiB
Title: Geodesics on Generalized Damek-Ricci spaces
Abstract: Damek-Ricci spaces, also called harmonic NA groups, are harmonic extensions of H-type groups and have been studied in great detail in harmonic analysis. We wish to generalize the notion of Damek-Ricci spaces by loosening the restriction on the metric and allowing it to be not only positive definite. This talk will start by giving the basic definition of a pseudo H-type algebra and discussing their existence and how they are related to Clifford algebras. Further, we define the generalized Damek-Ricci spaces as the semi-direct product of a pseudo H-type group with an abelian group and discuss some properties of this space.
From here we can furnish this space with a left-invariant metric and consider Damek-Ricci spaces as Riemannian manifolds, where we want to use the Hamiltonian formalism to derive a system of equations whose solutions give us the geodesics on the Damek-Ricci space.

Speaker: Sven I. Bokn, master student, Mathematical Department, UiB
Title: On the elastica problem
Abstract: The problem of the elastica was first proposed, and partially solved, by James Bernoulli in the late 1600s. The complete solution was attributed to Euler in the mid 1700s for his detailed description. The solution set is a family of curves that appear in many natural phenomena. Loosely speaking, the problem of the elastica is to find a curve of fixed length and boundary conditions that has minimal curvature. In this talk we will derive solutions to the elastica problem using basic concepts from differential geometry and the calculus of variations. If time permits, we will look at how we might address the problem of the elastica as an optimal control problem on Lie groups. Furthermore, we will look at similarities between this problem and the problem of the rolling sphere.

Speaker: Razvan Mosincat, PhD student, University of Edinburgh, UK
Title: Low-regularity well-posedness for the derivative nonlinear Schrödinger equation
Abstract: Harmonic analysis has played an instrumental role in advancing the study of nonlinear dispersive PDEs such as the nonlinear Schrödinger equation. In this talk, we present a method to prove well-posedness of nonlinear dispersive PDEs which avoids heavy harmonic-analytic machinery. As a primary example, we study the Cauchy problem for the derivative nonlinear Schrödinger equation (DNLS) on the real line. We implement an infinite iteration of normal form reductions (namely, integration by parts in time) and reformulate the equation in terms of an infinite series of multilinear terms. This allows us to prove the unconditional uniqueness of solutions to DNLS in an almost end-point space. This is joint work with Haewon Yoon (National Taiwan University). We will also discuss normal form reductions as used in the so-called I-method introduced by Colliander, Keel, Staffilani, Takaoka, and Tao. In particular, we consider DNLS on the torus and prove global well-posedness in an end-point space. We also use a coercivity property in the spirit of Guo and Wu to improve the mass threshold under which the solutions exist globally in time.
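Several abstracts above concern nonlinear Schrödinger-type equations. As a concrete numerical companion, here is a minimal split-step Fourier sketch (mine, not from any of the talks) for the focusing cubic NLS iu_t + u_xx + 2|u|²u = 0 on a periodic box, a simpler relative of the DNLS equation discussed above; the grid, step size, and initial datum are arbitrary choices, and both substeps are exact for their respective pieces of the equation.

    import numpy as np

    N, L, dt, steps = 256, 40.0, 1e-3, 5000
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

    u = 1.0 / np.cosh(x) + 0j   # exact soliton profile of i u_t + u_xx + 2|u|^2 u = 0

    for _ in range(steps):
        u = u * np.exp(2j * dt * np.abs(u) ** 2)                   # nonlinear substep
        u = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))   # linear substep

    print(np.max(np.abs(u)))   # stays ~1: the soliton keeps its shape, as theory predicts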
The manifold is assumed to be compact and connected. If it is homogeneous, then it is possible to obtain an additional information on the intersection of nodal sets: a construction for a prescribed finite subset in a nodal set, estimates of their Hausdorff measures, and some other relating results. Place: the seminar room 4A5d (\sigma) Speaker: Researcher Didier Jacques Francois Pilod Title: On the local well-posedness for a full dispersion Boussinesq system with surface tension Abstract: We will prove local-in-time well-posedness for a fully dispersive Boussinesq system arising in the context of free surface water waves in two and three spatial dimensions.Those systems can be seen as a weak nonlocal dispersive perturbation of the shallow-water system. Our method of proof relies on energy estimates and a compactness argument. However, due to the lack of symmetry of the nonlinear part, those traditional methods have to be supplementedwith the use of a modified energy in order to close the a priori estimates. This talk is based on a joint work with Henrik Kalisch (University of Bergen) Speaker: Professor Jean-Claude Saut, Universite Paris Saclay, France Title: Existence and properties of solitary waves for some two-layer systems Abstract: We consider different classes of two-layer systems describing the propagation of internal waves, namely the Boussinesq-Full dispersion systemsand the (one-dimensional) two-way versions of the Benjamin-Ono and Intermediate Long Wave equations. After a brief survey on the derivation of asymptotic models for internal waves,we will establish the existence of solitary wave solutions and prove their regularity and decay properties. This is a joint work with Jaime Angulo Pava. Speaker: Eirik Berge, master student, Department of Mathematics, UiB Title: Sub-Riemannian Model Spaces of Step and Rank Three Abstract: The development of Riemannian geometry has been highly influencedby certain spaces with maximal symmetry called model spaces. Their ubiquitypresents itself throughout differential geometry from the classicalGaussian map for surfaces to comparison theorems based on volume, theLaplacian, or Jacobi fields. We will in this talk describe a generalizationof the classical model spaces in Riemannian geometry to the sub-Riemanniansetting introduced by Erlend Grong. We will discuss the Riemannian settingfirst to make the presentation (hopefully) accessible to non-experts. Thenwe move towards giving a quick description of the essential concepts neededin sub-Riemannian geometry before turning to the sub-Riemannian modelspaces. Theory regarding Carnot groups and tangent cones will be used toinvoke a powerful invariant of sub-Riemannian model spaces. These toolswill be used to study the classification of sub-Riemannian model spaces.Finally, we will restrict our focus to model spaces with step and rankequal to three and provide their complete classification. Speaker: Miguel Alejo, Federal University of Santa Catarina, Department of Mathematics, Florianopolis-Santa Catarina, Brasil Title: On the stability properties of some breather solutions Abstract: Breathers are localized vibrational wave packets that appear innonlinear systems, that is almost any physical system, when theperturbations are large enough for the linear approximation to bevalid. To be observed in a physical system, breathers should be stable. 
In this talk, some results will be presented about the stability properties of breather solutions of different continuous models driven by nonlinear PDEs. It will be shown how to characterize variationally the breather solutions of some nonlinear PDEs, both on the line and in periodic settings. Two specific variational characterizations will be analyzed:
a) the mKdV equation: it models waves in shallow water and the evolution of closed curves and vortex patches;
b) the sine-Gordon equation: it describes phenomena in particle physics, gravitation, materials and many other systems.
Finally, it will be explained how to prove that breather solutions of the Gardner equation are also stable in the Sobolev space H^2.

Speaker: Yevhen Sevostianov, Professor, Zhytomyr Ivan Franko State University, Ukraine
Title: Geometric Approach in the Theory of Spatial Mappings
Abstract: Space mappings with unbounded characteristics of quasiconformality are investigated. In particular, we mean the so-called mappings with finite distortion, which have been intensively investigated by leading mathematicians in the last decade. A series of properties of the so-called Q-mappings and ring Q-mappings is obtained. The above mappings are a subtype of the mappings with finite distortion and include the mappings with bounded distortion in the sense of Reshetnyak. In particular, the properties of differentiability and ACL, and analogues of the theorems of Casorati-Sokhotski-Weierstrass, Liouville, Picard, Iversen etc. are obtained for the above mappings.

Speaker: Matteo Rafaelli, PhD student, Technical University of Denmark
Title: Flat approximations of surfaces along curves
Abstract: Given a (smooth) curve on a surface S isometrically embedded in Euclidean three-space, we present a method for constructing a flat (i.e., developable) surface H which is tangent to S at all points of the curve. In the beginning of the talk we will revise the classical concepts of the Frenet-Serret frame and the Darboux frame, on which the construction is based. We will conclude by briefly discussing how the method generalizes to the case of Euclidean hypersurfaces.

Speaker: Irina Markina, Professor, Department of Mathematics, UiB
Title: Geodesic equations in sub-Riemannian geometry
Abstract: At the beginning of the talk we will revise the notion of the Levi-Civita connection and the relation between geodesics and curves minimizing the distance function on a Riemannian manifold. After a short definition of a sub-Riemannian manifold we consider the example of the geodesic equation on the Heisenberg group. If time allows we will discuss some possible ways of generalizing the equations that arise as first variations.

Speaker: Henrik Kalisch, Professor, Department of Mathematics, UiB
Title: Existence and uniqueness of singular solutions to a conservation law arising in magnetohydrodynamics
Abstract: Existence and admissibility of singular delta-shock solutions is discussed for hyperbolic systems of conservation laws, with a focus on systems which do not admit classical Lax-admissible solutions to certain Riemann problems. One such system is the so-called Brio system arising in magnetohydrodynamics. For this system, we introduce a nonlinear change of variables which can be used to define a framework in which any Riemann problem can be solved uniquely using a combination of rarefaction waves, classical shock waves and singular shocks.
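For orientation (an editorial addition, not part of the abstract): one form of the Brio system that appears in the literature, together with Riemann initial data of the kind mentioned above, reads

\[
u_t + \Big(\frac{u^2+v^2}{2}\Big)_x = 0, \qquad
v_t + \big(v(u-1)\big)_x = 0, \qquad
(u,v)(x,0) =
\begin{cases}
(u_l, v_l), & x < 0,\\
(u_r, v_r), & x > 0.
\end{cases}
\]

A Riemann problem asks for a self-similar solution depending on x/t; for some pairs of left and right states no Lax-admissible combination of shocks and rarefaction waves exists, and this is exactly where singular (delta) shocks enter. The normalization used in the talk may differ.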
Speaker: Vincent Teyekpiti, PhD student, Department of Mathematics, UiB
Title: Riemann Problem for a Hyperbolic System With Vanishing Buoyancy
Abstract: In this talk, we shall study a triangular system of hyperbolic equations which is derived as a model for internal waves at the interface of a two-fluid system. The focus will be on a shallow-water system for interfacial waves in the case of a neutrally buoyant two-layer fluid system. Such a situation arises in the case of large underwater lakes of compressible liquids such as CO2 in the deep ocean, which may occur naturally or be man-made. Depending on temperature and depth, such deposits may be either stable, unstable or neutrally stable, and in this talk the neutrally stable case is considered. The motion of long waves at the interface can be described by a shallow-water system which becomes triangular in the neutrally stable case. In this case, the system ceases to be strictly hyperbolic, and the standard theory of hyperbolic conservation laws may not be used to solve the Riemann problem. It will be shown that the Riemann problem can still be solved uniquely. In order to solve the system, the introduction of singular shocks containing Dirac delta distributions travelling with the shock is required, and the solutions are characterized in integrated form using Heaviside functions. We shall also characterize the solutions in terms of vanishing viscosity regularization and show that the two solution concepts coincide.

Speaker: Didier Pilod, BFS researcher, Department of Mathematics, UiB
Title: Construction of a minimal mass blow up solution of the modified Benjamin-Ono equation
Abstract: We construct a minimal mass blow up solution of the modified Benjamin-Ono equation (mBO), a classical one-dimensional nonlinear dispersive model. Let Q in H^{1/2} be the unique positive ground state solution associated to mBO. We show the existence of a solution S of mBO satisfying \|S\| = \|Q\| in L^2 and some asymptotic relations as time approaches 0. This existence result is analogous to the one obtained by Martel, Merle and Raphael (J. Eur. Math. Soc., 17 (2015)) for the mass critical generalized Korteweg-de Vries equation (gKdV). However, in contrast with the gKdV equation, for which the blow up problem is now well-understood in a neighborhood of the ground state, S is the first example of a blow up solution for mBO. The proof involves the construction of a blow up profile, energy estimates as well as refined localization arguments, developed in the context of Benjamin-Ono type equations by Kenig, Martel and Robbiano (Ann. Inst. H. Poincaré, Anal. Non Lin., 28 (2011)). Due to the lack of information on the mBO flow around the ground state, the energy estimates have to be considerably sharpened here. This talk is based on a joint work with Yvan Martel (Ecole Polytechnique).

Title: A survey on the generalized KdV equations
Abstract: At the end of the 19th century, Boussinesq, and Korteweg and de Vries introduced a partial differential equation (PDE), today known as the Korteweg-de Vries (KdV) equation, to model the propagation of long waves in shallow water. The KdV equation is a nonlinear dispersive PDE admitting solitary wave solutions, also called solitons, which play an important role in fluid mechanics as well as in other fields of science, such as plasma physics. In this talk, we will focus on the generalized KdV equations (gKdV) to explain the different types of mathematical questions arising in the field of nonlinear dispersive equations.
We will describe the techniques of modern analysis, from harmonic analysis to spectral theory, introduced to solve them, and talk about some related open problems. Finally, at the end of the talk, we will introduce the problem of minimal mass blow-up solutions, which will be discussed in more detail the following week.

Speaker: Stine Marie Berge, PhD student, NTNU
Title: Frequency of harmonic functions
Abstract: In the talk we will look at the frequency of a harmonic function. When the harmonic function is a homogeneous harmonic polynomial, the frequency simply coincides with the degree of the polynomial. The main goal is to show how increasing frequency implies that harmonic functions satisfy a kind of doubling property. We will show this by introducing the concept of log-convexity.

October 31 and November 07, 2017
Speaker: Jorge Luis Lopez Marin, master student, Mathematical Department, University of Bergen
Title: Introduction to the Damek-Ricci space
Abstract: In these two talks, we will introduce the notions of Lie algebras, Lie groups and Damek-Ricci spaces. The first talk will go through preliminaries to Damek-Ricci spaces and the second talk will deal with the Damek-Ricci spaces themselves. The first talk will start by introducing Lie algebras and looking at examples. From there we introduce Lie groups as smooth manifolds with additional algebraic structure and look at examples of such smooth manifolds. Then we look at how these two mathematical objects are connected, namely by the Lie exponential map. We are particularly interested in Heisenberg-type algebras and groups, as they play an important role in Damek-Ricci spaces, and will therefore introduce these as well. The second talk will take the notions from the first talk and use them to construct Damek-Ricci spaces. We will look at two different realizations of Damek-Ricci spaces.

Speaker: Sven I. Bokn, bachelor student, Mathematical Department, University of Bergen
Title: Rolling of a ball
Abstract: In this talk we will discuss the rolling of a ball over a plane or over another ball. We will introduce the necessary geometric background, such as the notion of a surface, frame, orientation, the group of orientation preserving rotations and its Lie algebra. We will deduce the kinematic equations of the rolling motion without slipping and twisting for both cases.

Speaker: Anja Eidsheim, master student, Mathematical Department, University of Bergen
Title: Module and extremal length in the plane
Abstract: This talk aims to give a thorough introduction to the notions of the module of a family of curves and extremal length in the plane. Relations between module and extremal length, and other interesting classical results, will be mentioned. Starting out in the complex plane, the module of two types of classical canonical domains in the theory of conformal mappings, namely quadrilaterals and ring domains, will be explained. The module of curve families provides a natural transition from the theory of conformal maps in the complex plane to more general environments. As a first step towards the generalizations to module and capacity in Euclidean n-space and even further, the focus in the talk will move from the conformal module in the complex plane to the module of a family of curves in R^2. Examples of how to find the extremal metric and calculate the module of curve families in both annulus and distorted annulus domains in R^2 will be shown. As students are the intended audience for this talk, no prior knowledge of the topic will be required.
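To make the central definition of the last abstract concrete (an editorial addition, with one common normalization; conventions differ by factors of 2\pi): the module of a family \(\Gamma\) of curves in the plane is

\[
M(\Gamma)=\inf_{\rho}\int_{\mathbb{R}^2}\rho^2\,dA,
\]

where the infimum is taken over all Borel measurable \(\rho\ge 0\) such that \(\int_\gamma \rho\,ds\ge 1\) for every \(\gamma\in\Gamma\), and the extremal length is \(\lambda(\Gamma)=1/M(\Gamma)\). For the annulus \(\{r<|z|<R\}\) and \(\Gamma\) the family of curves joining the two boundary circles, the extremal metric is \(\rho(z)=\big(|z|\log(R/r)\big)^{-1}\), which gives

\[
M(\Gamma)=\frac{2\pi}{\log(R/r)}, \qquad \lambda(\Gamma)=\frac{\log(R/r)}{2\pi}.
\]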
Speaker: Emanuele Bodon, exchange student from the Department of Mathematics, University of Genova, Italy
Title: Separability for Banach Spaces of Continuous Functions
Abstract: In this talk, we will introduce Banach spaces of continuous functions, i.e. we will consider the Banach space of continuous functions from a compact topological space to the real or complex numbers with the sup norm (and, more generally, we will consider the space of bounded continuous functions on a not necessarily compact topological space). After introducing some important examples, we will deal with the problem of understanding whether such a Banach space is separable or not. We will start by discussing the problem in the mentioned examples and then give some sufficient conditions and (under some assumptions on the topological space) also a characterization; doing this will require introducing some classical theorems of analysis and topology (the Stone-Weierstrass theorem, the Urysohn metrization theorem, partitions of unity).

Speaker: Erlend Grong, postdoc, Université Paris Sud, Laboratoire des Signaux et Systèmes (L2S) Supélec, CNRS, Université Paris-Saclay and the Mathematical Department, UiB
Title: Comparison theorems for the sub-Laplacian
Abstract: One of the main ways of observing curvature in a Riemannian manifold is to look at how the distance between two points changes as we move along geodesics. Namely, the second variation of the distance can be determined by using Jacobi fields, which are themselves controlled by curvature. As a result, we can get an estimate for the Laplacian of the distance if we have a lower bound for the Ricci curvature. This result is called the Laplacian comparison theorem (a model statement is sketched below). Applying the same idea to a sub-elliptic operator and its corresponding distance has turned out to be difficult. There have been some attempts to define analogues of Jacobi fields, but these definitions lead to difficult computations. Using Riemannian Jacobi fields and an approximation argument, we are able to obtain comparison theorems in a wide range of cases, which are sharp in the case of sub-Riemannian Sasakian manifolds. Applications of these results are a Bonnet-Myers theorem and the measure contraction property. These results are from joint work with Baudoin, Kuwada and Thalmaier.

Speaker: Eirik Berge, master student, Mathematical Department, UiB
Title: Principal Bundles and Their Geometry (Part 2)
Abstract: We will investigate an additional piece of information one can put on a principal bundle: a connection. We will use the fundamental vector field developed last time to obtain equivalent formulations and see how this will lead us to connection and curvature forms on a principal bundle. Lastly, if time permits, I will go through the Stiefel manifold and discuss how Irina's talk on horizontal lifts from the Stiefel manifold can be put into the principal bundle framework.

Abstract (Part 1): In this talk, we will introduce principal bundles and understand a few key examples. An additional piece of information, namely a principal connection, will be introduced to "lift" the geometry of the base manifold to the principal bundle. If time permits, we will discuss curvature and holonomy, and how they are related to the geometry of the base manifold through the frame bundle. The talk will be elementary and will not require many prerequisites in differential topology or geometry.
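For reference (an editorial addition; this is the standard Riemannian statement whose sub-elliptic analogue is discussed in Erlend Grong's abstract above): if a complete n-dimensional manifold satisfies \(\mathrm{Ric}\ge (n-1)K\) and \(r(x)=d(x_0,x)\) denotes the distance from a fixed point, then, at points where r is smooth,

\[
\Delta r \le (n-1)\,\mathrm{ct}_K(r), \qquad
\mathrm{ct}_K(r)=
\begin{cases}
\sqrt{K}\,\cot(\sqrt{K}\,r), & K>0,\\
1/r, & K=0,\\
\sqrt{-K}\,\coth(\sqrt{-K}\,r), & K<0,
\end{cases}
\]

with equality for the simply connected model space of constant curvature K.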
Speaker: Professor Aroldo Kaplan, CONICET-Argentina, University of Massachusetts, Amherst
Title: The Basic Holographic Correspondence
Abstract: The correspondence between Einstein metrics on an open manifold and conformal structures on some boundary has become a subject of renewed interest after Maldacena's elaboration of it into the AdS-CFT correspondence. In this talk we will describe some of the mathematics in the case of the hyperbolic spaces, which already leads to previously unknown solutions to Einstein's equations. Only basic Riemannian geometry will be required for most of the talk.

August 29 and September 5, 2017
Speaker: Professor Irina Markina, Mathematical Department, UiB
Title: Geodesics on the Stiefel manifold
Abstract: Geodesics on the Stiefel manifold can be calculated in different ways. We will show how to do it by making use of the sub-Riemannian geodesics on the orthogonal group. We introduce the orthogonal group and the Stiefel manifold as a homogeneous manifold of the orthogonal group. We discuss the sub-Riemannian structure induced by the projection map and prove a theorem stating the general form of sub-Riemannian geodesics. I will try to be gentle with the audience and make the exposition accessible to master students.

Speaker: Kim-Erling Bolstad-Larssen, master student, University of Bergen
Title: Quadratic forms and their relation to Hurwitz's problem, the Radon-Hurwitz function and Clifford algebras
Abstract: In the seminar, we would like to explain the relation between several objects. Namely, we will reveal how the composition of quadratic forms is related to the Hurwitz problem. We also explain how it leads to orthogonal designs and Clifford algebras. The classical Radon-Hurwitz function, related to the Clifford algebras generated by a vector space with a positive definite scalar product, was extended by Wolfe to the Clifford algebras generated by a vector space with an arbitrary indefinite non-degenerate scalar product. We present the formula and show the algorithm for its calculation. If time allows, we will also show the relation of the above-mentioned objects to some special Lie algebras.

Speaker: Bhagyashri Nilesh Ingale, master student, University of Bergen
Title: Extremality of the spiral stretch map and the Teichmüller map
Abstract: We would like to explain how the extremal function in the class of homeomorphic mappings with finite distortion is related to Teichmüller theory. We start with an introduction to the theory of quasiconformal maps and the notion of the modulus of a family of curves. We define the spiral stretch map and explain its extremality property in the class of mappings with finite distortion. We show that the spiral stretch map is the Teichmüller map by finding the corresponding quadratic differential.

Speaker: Stine Marie Eik, master student, University of Bergen
Title: Riemannian and sub-Riemannian Lichnerowicz estimates
Abstract: We will begin by recalling some of the definitions from differential geometry before defining the Laplace operator on manifolds. Thereafter we will state some of the most important properties of the Laplace operator and then move on to (hopefully) proving the Bochner formula and the Lichnerowicz estimate in the Riemannian case. This gives us a bound on the first eigenvalue of the Laplacian for manifolds with positive Ricci curvature. In the second part of the talk we move to sub-Riemannian geometry, where we will define the (rough) sub-Laplacian and discuss the generalization of the Bochner formula and Lichnerowicz estimate.
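The two Riemannian facts referred to in the last abstract can be recorded explicitly (an editorial addition; standard formulations). For a smooth function f on an n-dimensional Riemannian manifold, the Bochner formula states

\[
\tfrac{1}{2}\,\Delta|\nabla f|^2 = |\mathrm{Hess}\, f|^2 + \langle\nabla f, \nabla\Delta f\rangle + \mathrm{Ric}(\nabla f,\nabla f),
\]

and if the manifold is compact with \(\mathrm{Ric}\ge (n-1)K\) for some \(K>0\), the Lichnerowicz estimate bounds the first nonzero eigenvalue of \(-\Delta\) by

\[
\lambda_1 \ge nK.
\]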
Speaker: Eirik Berge, master student, University of Bergen
Title: Invertibility of Fredholm operators in the Calkin algebra
Abstract: The aim of the talk is to present the connection between compact and Fredholm operators on a Banach space. It will begin with an introduction to the theory of compact operators. We will discuss the Fredholm alternative and how it can be used to solve integral equations. Then we will focus on the invertibility of Fredholm operators in the Calkin algebra and derive properties of Fredholm operators through their relation to compact operators. Lastly, if time allows, we will describe the most important invariant of Fredholm operators: their index.

Speaker: Wolfram Bauer, Professor, Analysis Institute, Leibniz University of Hanover
Title: The sub-Laplacian on nilpotent Lie groups – heat kernel and spectral zeta function
Abstract: We recall the notion of the sub-Laplacian on nilpotent Lie groups and their homogeneous spaces by a cocompact lattice (nilmanifolds). In the case of step 2 nilpotent groups, different methods are known for deriving the heat kernel explicitly. Such formulas can be used to study the spectral zeta function and heat trace asymptotics of the operator and to extract geometric information from analytic objects. One may as well consider these operators on differential forms. We recall a matrix representation of the form Laplacian on the Heisenberg group. In the case of one-forms, the spectral decomposition and the corresponding heat operator will be derived in the talk by André Hänel.

Speaker: Eugenia Malinnikova, Professor, Mathematical Department, NTNU
Title: Frequency of harmonic functions and zero sets of Laplace eigenfunctions on Riemannian manifolds
Abstract: We will discuss a combinatorial approach to the distribution of the frequency of harmonic functions and its application to estimates of the area of zero sets of Laplace eigenfunctions in dimensions two and three. The talk is based on a joint work with A. Logunov.

Speaker: Nikolay Kuznetsov, Researcher, Laboratory for Mathematical Modelling of Wave Phenomena, Institute for Problems in Mechanical Engineering, Russian Academy of Sciences, St. Petersburg
Title: Babenko's equation for periodic gravity waves on water of finite depth
Abstract: For the nonlinear two-dimensional problem describing periodic steady waves on water of finite depth in the absence of surface tension, a single pseudo-differential operator equation (Babenko's equation) is considered. This equation has the same form as the equation for waves on infinitely deep water; the latter had been proposed by Babenko in 1987 and studied in detail by Buffoni, Dancer and Toland in 2000. Unlike the equation for deep water, which involves just the $2\pi$-periodic Hilbert transform C, the equation to be presented in the talk contains an operator which is the sum of C and a compact operator depending on the depth of water.

Speaker: Clara Aldana, research fellow, University of Luxembourg, Luxembourg
Title: Spectral geometry on surfaces
Abstract: I start the talk by introducing some basic concepts in spectral geometry. One considers the Laplace operator on a manifold and its spectrum. Two manifolds are isospectral if their Laplace spectra are the same. The isospectral problem asks whether the Laplace spectrum determines the metric of the manifold: "Can one hear the shape of a drum?" I will mention some of the known results and the classical compactness theorem for isospectral metrics on surfaces proved by B. Osgood, R. Phillips and P. Sarnak.
Next, I will define the determinant of the Laplacian and explain how it is an important global spectral invariant. I will explain the problems that appear when one wants to study determinants and isospectrality on open surfaces and on surfaces whose metrics are singular. To finish, I will present some of my results in this area.

Speaker: Erlend Grong, Postdoc, Université Paris Sud and University of Bergen
Title: The geometry of second order differential operators
Abstract: Let L be a second order partial differential operator. We want to show that such operators are associated with a certain shape. If L is an operator in two variables, this shape is a surface. In general, such shapes are objects called Riemannian manifolds. By studying the geometry of these shapes, we are able to get results for L and its heat operator. The above framework depends on the assumption that L is elliptic, and is well understood in this case. We will end the talk by discussing how we can get geometric results for L even when it is not elliptic.

Speaker: Achenef Tesfahun Temesgen, postdoc, Mathematical Department, University of Bergen
Title: Small data scattering for semi-relativistic equations with Hartree-type nonlinearity
Abstract: I will talk about well-posedness for semi-relativistic equations with Hartree-type nonlinearity, and scattering of their solutions to free waves asymptotically as t -> \infty. To do so, I will first talk about the dispersive properties of free waves, and Strichartz estimates for the linear wave and Klein-Gordon equations.

Joint analysis and PDE, and algebraic geometry seminar
Speaker: Viktor Gonzalez Aguilera, Technological University of Santa Maria, Valparaiso, Chile
Title: Limit points in the Deligne-Mumford moduli space
Abstract: Let M_g be the moduli space of smooth curves of genus g defined over the complex numbers and Md_g be the set of stable curves of genus g. A well-known result of Deligne and Mumford states that the set Md_g of stable curves of genus g can be endowed with the structure of a projective complex variety containing M_g as a dense open subvariety. Stable curves can also be seen, from the point of view of Bers, as Riemann surfaces with nodes; thus an element of Md_g can be considered as a stable curve or as a Riemann surface with nodes. The singular locus of M_g (the branch locus) is stratified by equisymmetric strata M_g(Γ, s, Φ, G) that, when nonempty, are smooth connected locally closed algebraic subvarieties of M_g. In this talk we present some of the work of A. Costa, R. Díaz and myself describing the "limit" points of these strata in terms of their associated dual graphs. We give some explicit examples and their projective realization as families of stable curves.

Speaker: Irina Markina, Professor, University of Bergen
Title: Hypo-elliptic partial differential operators and the Hörmander theorem
Abstract: In the talk we introduce the notion of hypo-ellipticity for partial differential operators. First we discuss the hypo-ellipticity of differential operators with constant coefficients. Then we consider a second order differential operator generated by arbitrary vector fields defined in a domain of n-dimensional Euclidean space. We will start to prove the Hörmander theorem, stating that if the mentioned vector fields are such that among their commutators there are always n linearly independent ones (possibly different at different points of the domain), then the operator is hypo-elliptic.
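A standard example illustrating the Hörmander condition (an editorial addition): on R^3 with coordinates (x,y,z), consider the vector fields

\[
X=\partial_x-\frac{y}{2}\,\partial_z, \qquad Y=\partial_y+\frac{x}{2}\,\partial_z, \qquad [X,Y]=\partial_z .
\]

At every point the fields X, Y and [X,Y] span R^3, so the commutator condition of the theorem holds, and the operator L = X^2 + Y^2 (the sub-Laplacian of the Heisenberg group) is hypo-elliptic even though it fails to be elliptic.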
Speaker: Erlend Grong, Postdoc, Université Paris Sud and University of Bergen
Title: Model spaces in sub-Riemannian geometry
Abstract: The spheres, hyperbolic spaces and Euclidean space are important reference spaces for understanding Riemannian geometry in general. They also play an important role in comparison results such as the Laplacian comparison theorem and the volume comparison theorem. These reference or model spaces are characterized by constant sectional curvature and by their abundance of symmetries. In recent years, there have been several attempts to define an analogue of curvature for sub-Riemannian manifolds, based either on their geodesic flow or on related hypoelliptic partial differential operators. However, it has not been clear what the reference spaces should be in this geometry, against which we can test these definitions of curvature. We want to introduce model spaces by looking at sub-Riemannian spaces with a 'maximal' group of isometries. These turn out to have a rich geometric structure and exhibit many properties not found in their Riemannian analogues.

January 23, 2017 and January 30, 2017
Speaker: Mauricio Antonio Godoy Molina, Associate Professor, Department of Mathematics, Universidad de La Frontera, Temuco, Chile
Title: Harmonic maps between Riemannian manifolds
Abstract: Given two Riemannian manifolds, a harmonic map between them is, loosely speaking, a smooth map that is a critical point of an ad hoc energy functional. Intuitively, they are the maps that have the least "stretching". With this 'definition', one expects geodesics and minimal surfaces to be included as examples, and indeed they are. The aims of these talks are to define what harmonic maps in Riemannian geometry are, why they are interesting to some people, and what we can say about them. In particular, we will spend some time filling in the analytic and geometric prerequisites to study the question of existence of harmonic maps within a given homotopy class.

Speaker: Evgueni Dinvay, Department of Mathematics, UiB
Title: The spectral theorem in functional analysis
Abstract: I am going to continue with the spectral theory of linear operators acting in Hilbert spaces. This time I am going to formulate and prove the spectral theorem for unitary operators. These lectures should be regarded as an addition to the usual functional analysis course (MAT 311) and are directed first of all to master and Ph.D. students.

Abstract: I am going to continue with the spectral theory of linear operators acting in Hilbert spaces. This time I am introducing the notion of a spectral measure space. These lectures should be regarded as an addition to the usual functional analysis course (MAT 311) and are directed first of all to master and Ph.D. students.

Abstract: I am going to give some basics of the spectral theory of linear operators acting in Hilbert spaces. We treat the spectral theorem for unitary operators in particular. These lectures should be regarded as an addition to the usual functional analysis course and are directed first of all to master and Ph.D. students.

Speaker: Alexander Vasiliev, Professor, UiB
Title: Ribbon graphs and Jenkins-Strebel quadratic differentials
Abstract: This will be the final part of my previous talk.

Speaker: Professor Igor Trushin (Research Center for Pure and Applied Math., Tohoku University, Japan)
Title: On inverse scattering on star-shaped and sun-type graphs
Abstract: We investigate the inverse scattering problem for the Sturm-Liouville (1-D Schrödinger) operator on a graph consisting of a finite number of half-lines joined with either a circle or a finite number of finite intervals. Uniqueness of the reconstruction of the potential and a reconstruction procedure on the semi-infinite lines are established. This is a joint work with Prof. K. Mochizuki.

Speaker: Alexander Vasiliev, Dept. Math, UiB
Abstract: Ribbon (fat) graphs became a famous tool after Kontsevich used a combinatorial description of the moduli spaces of curves in terms of them, which led him to a proof of the Witten conjecture about intersection numbers of stable classes on the moduli space. We want to give some basics on ribbon graphs and their relation to Jenkins-Strebel quadratic differentials on Riemann surfaces.

Speaker: Anastasia Frolova, PhD student, Department of Mathematics, UiB
Title: Quasiconformal mappings
Abstract: Quasiconformal mappings are a natural generalization of conformal mappings and are used in different areas of mathematics. We give a short introduction to quasiconformal mappings in the plane and discuss their geometric and analytic properties.

Speaker: Eric Schippers, Department of Mathematics, University of Manitoba, Winnipeg, Canada
Title: The rigged moduli space of CFT and quasiconformal Teichmüller theory
Abstract: A central object of two-dimensional conformal field theory is the Friedan/Shenker/Segal/Vafa moduli space of Riemann surfaces with boundary parameterizations. D. Radnell and I showed that this moduli space can be identified with the (infinite-dimensional) Teichmüller space of bordered surfaces up to a discontinuous group action. In this talk I will give an overview of joint results with Radnell and Staubach, in which we apply the correspondence between the moduli spaces to both conformal field theory and Teichmüller theory. We will also discuss the relation with the so-called Weil-Petersson class Teichmüller space.

Speaker: Eirik Berge, master student, Mathematical Department, UiB
Title: Fréchet spaces as modeling spaces for diffeomorphism groups
Abstract: I will give an introduction to Fréchet spaces with motivation towards studying the diffeomorphism group of a compact manifold. Once the notion of smooth maps between Fréchet spaces is developed, Fréchet manifolds and Lie-Fréchet groups will be defined. It turns out that the diffeomorphism group can be given a Lie-Fréchet group structure.

Time: 15.15-16.00
Place: the same
Speaker: Stine Marie Eik, master student, Mathematical Department, UiB
Title: The bad behavior of the exponential map for the diffeomorphism group of the circle
Abstract: I will discuss constructions on diffeomorphism groups of compact manifolds and describe their Lie algebras. Moreover, I shall define the exponential map for the diffeomorphism group of the circle and prove that it is not a local diffeomorphism, in contrast with the finite dimensional case. Hence, one of the most powerful tools for studying Lie groups is significantly weakened when we generalize to Lie-Fréchet groups.

Speaker: Evgueni Dinvay, PhD student, Dept. of Math., University of Bergen
Title: Eigenvalue asymptotics for second order operators with discontinuous weight on the unit interval
Abstract: We consider a second order differential operator on the unit interval with Dirichlet-type boundary conditions. At the beginning of the presentation we will review the well-known results on the subject.
We will give eigenvalue asymptotics and a trace formula for this operator, and formulate an inverse spectral problem. Then we discuss which results might be extended to the corresponding operator with discontinuous weight.

Speaker: Professor Alexander Vasil'ev, Dept. of Math., University of Bergen
Title: Moduli of families of curves on Riemann surfaces and Jenkins-Strebel differentials
Abstract: We give a comprehensive review of the development of the method of extremal lengths (or their reciprocals, called moduli) of families of curves on Riemann surfaces, a basic example of which is a punctured complex plane. Dual to it is the problem of the extremal partition of the Riemann surface. It turns out that the meeting point of these two problems is achieved by quadratic differentials, which provide the extremal functions in the moduli problem and, at the same time, define the extremal partitions.

Speaker: Professor Irina Markina, Dept. of Math., University of Bergen
Title: Caccioppoli sets
Abstract: In this seminar we will, surprisingly, see that the characteristic function of the rational numbers is continuous (in some sense) and that rather wild sets (like the coast of Norway) still have finite length. The talk is an introductory talk on the theory of sets of finite perimeter, or Caccioppoli sets. We will start by revising the notion of a function of bounded variation of one variable and see its natural generalisation to n-dimensional Euclidean space. Later we will use BV functions to define the perimeter of an arbitrary measurable set. We compare the perimeter to the surface area measure. At the end we will see how Caccioppoli sets can be defined in an arbitrary metric space.

Speaker: Professor Santiago Díaz-Madrigal, University of Seville, Spain
Title: Fixed Points in Loewner Theory
Abstract: Starting from the case of semigroups, we analyze (and compare) fixed points of evolution families as well as critical points of the associated vector fields. A number of examples are also shown to clarify the role of the different conditions assumed in the main theorems.

Speaker: Professor Manuel Domingo Contreras Márquez, University of Seville, Spain
Title: Integral operators mapping into the space of bounded analytic functions

Speaker: Mauricio Godoy Molina, Assistant Professor, Universidad de La Frontera, Chile
Title: Tanaka prolongation of pseudo H-type algebras
Abstract: The old problem of describing infinitesimal symmetries of distributions still presents many interesting questions. A way of encoding these symmetries for the special situation of graded nilpotent Lie algebras was developed by N. Tanaka in the 70's. This technique consists of extending or "prolonging" the algebra to one containing the original algebra in a natural manner, but which is no longer nilpotent. When this prolongation is finite dimensional, the algebra we started with is called "rigid", and otherwise it is said to be of "infinite type". The goal of this talk is to extend a result by Ottazzi and Warhurst from 2011, to show that a certain class of 2-step nilpotent Lie algebras (the pseudo $H$-type algebras) are rigid if and only if their center has dimension greater than or equal to three. This is a joint work with B. Kruglikov (Tromsø), I. Markina and A. Vasiliev (Bergen).

Speaker: Alexey Tochin (UiB)
Title: A general approach to Schramm-Löwner Evolution and its coupling to conformal field theory
Abstract: Schramm-Löwner Evolution (SLE) is a stochastic process that has made it possible to describe analytically the scaling limits of several two-dimensional lattice models in statistical physics. We consider a generalized version of SLE and then its coupling with another random object, called the Gaussian free field, introduced recently by S. Sheffield and J. Dubedat. We investigate what other types of generalized SLE can be coupled in a similar manner.

November 24th and December 1st
Title: Polynomial lemniscates, trees and braids
Abstract: Following the papers "Polynomial lemniscates, trees and braids" (Catanese, Paluszny) and "The fundamental group of generic polynomials" (Catanese, Wajnryb), we discuss the structure of the set of lemniscate-generic polynomials, i.e. polynomials whose critical level sets contain figure-eights. We describe a characterization of such polynomials by trees and the action of the braid group on them.

Speaker: Pavel Gumenyuk, Associate Professor, University of Stavanger
Title: Loewner-type representation for conformal self-maps of the disk with prescribed boundary fixed points
Abstract: It is well known that the classical Loewner Theory provides the so-called Parametric Representation for the much-studied class S of all normalized univalent holomorphic functions in the unit disk via solutions of a controllable ODE, known as the (radial) Loewner differential equation. It is less widely known that the radial Loewner equation also gives a representation of all univalent holomorphic self-maps of the unit disk with a fixed point at the origin. The cornerstone of this representation is the fact that such maps form a semigroup with respect to composition. Representations using the same heuristic scheme have been obtained for some other semigroups. The main problem in making this into a more or less general theory is that no method is known to determine whether the subsemigroup formed by all representable elements coincides with the original semigroup. Hence, it would be interesting to analyze Loewner's scheme in many different concrete examples. In this talk, we consider semigroups of univalent holomorphic self-maps with prescribed boundary regular fixed points (BRFPs). Probably the first attempt to construct a Loewner-type parametric representation for the case of one BRFP was made by H. Unkelbach [Math. Z. 46 (1940) 329-336]. In a rigorous way it was established only in 2011 by V. V. Goryainov [Mat. Sb.]. We discuss the case of several BRFPs, in which the approach by Goryainov cannot be applied.

Speaker: Olga Vasilieva, Department of Mathematics, Universidad del Valle, Cali, Colombia
Title: Optimal Control Theory and Dengue Fever
Abstract: Dengue is a viral disease principally transmitted by Aedes aegypti mosquitoes. There is no vaccine to protect against dengue; therefore, dengue morbidity can only be reduced by appropriate vector control measures, such as:
- suppression of the mosquito population,
- reduction of the disease transmissibility.
This presentation will be focused on the implementation of these external control actions using the frameworks of mathematical modeling and the control theory approach. In the first part, I will present an endemo-epidemic model derived from registered dengue cases in Cali, Colombia, and then propose a set of optimal strategies for dengue prevention and control.
In the second part, I will present an alternative and unconventional vector control technique based on the use of a biological control agent (Wolbachia) and formulate a decision-making model for Wolbachia transinfection in wild Aedes aegypti populations.

Speaker: Dmitry Khavinson, Distinguished Professor, Department of Mathematics, University of South Florida, Tampa, USA
Title: Isoperimetric "sandwiches" and some free boundary problems via approximation by analytic and harmonic functions
Abstract: The isoperimetric problem, posed by the Greeks, proposes to find among all simple closed curves the one that surrounds the largest area. The isoperimetric theorem then states that the curve is a circle. It is first mentioned in the writings of Pappus in the third century A.D. and is attributed there to Zenodorus. However, a rigorous proof was only achieved towards the end of the 19th century! I will start by discussing some of the history of the problem and several classical proofs of the isoperimetric inequality (e.g., those due to Steiner, Hurwitz and Carleman). Then we shall move on to a larger variety of isoperimetric inequalities, as, e.g., in Polya and Szego's classic book of 1949, but deal with them via a relatively novel approach based on approximation theory. Roughly speaking, this approach can be characterized by the recently coined term "sandwiches". A certain quantity is introduced, usually as a degree of approximation of a given simple function, e.g., $\bar z$, |x|^2, by either analytic or harmonic functions in some norm. Then, estimates from below and above of the approximate distance are obtained in terms of simple geometric characteristics of the set, e.g., area, perimeter, capacity, torsional rigidity, etc. The resulting "sandwich" yields the relevant isoperimetric inequality. Many of the classical isoperimetric problems studied this way lead to natural free boundary problems for PDE, many of which remain unsolved today. Then, as an example, I will talk about some applications to the study of shapes of electrified droplets and small air bubbles in fluid flow. During the talks I will try not only to survey the known results and methods but focus especially on the many open problems that remain. This series of talks is going to be accessible to first year graduate students, or advanced undergraduates majoring in mathematics and physics, who have had a semester course in complex analysis and a routine course in advanced calculus.

Speaker: Catherine Beneteau, Associate Professor, Mathematical Department, University of South Florida, Tampa, USA
Title: Polynomial Solutions to an Optimization Problem in Classical Analytic Function Spaces
Abstract: In this series of talks, I will introduce some classical spaces of analytic functions in the unit disk in the complex plane called Dirichlet-type spaces. Examples of these spaces include the Hardy space (functions whose coefficients are square summable), the Bergman space (functions whose modulus squared is integrable with respect to area measure over the whole disk), and the (classical) Dirichlet space (functions whose image has finite area, counting multiplicity). I will discuss polynomials that solve an optimization problem that I will describe. These polynomials are intimately connected to some classical tools in analysis: reproducing kernels and orthogonal polynomials. In particular, I will examine the clusters of the zeros of these optimal polynomials and show how their location depends on the space being considered.
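For concreteness (an editorial addition; one standard normalization of these spaces): the Dirichlet-type space \(D_\alpha\) consists of analytic functions \(f(z)=\sum_{n\ge 0} a_n z^n\) on the unit disk with

\[
\|f\|_{D_\alpha}^2=\sum_{n=0}^{\infty}(n+1)^{\alpha}|a_n|^2<\infty,
\]

so that \(\alpha=-1\) gives the Bergman space, \(\alpha=0\) the Hardy space, and \(\alpha=1\) the classical Dirichlet space mentioned in the abstract.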
I will begin by introducing all notation and terms. The series of talks will be accessible to advanced undergraduate and beginning graduate students.

Speaker: Catherine Beneteau, Associate Professor, Mathematical Department, University of South Florida, Tampa, USA. (Second talk in the series; abstract as above.)

Speaker: Alex Himonas, University of Notre Dame, Notre Dame, USA
Title: The Cauchy problem for weakly dispersive and dispersive equations with analytic initial data
Abstract: This talk presents an Ovsyannikov-type theorem for an autonomous abstract Cauchy problem in a scale of decreasing Banach spaces, which in addition to existence and uniqueness of the solution provides an estimate of the analytic lifespan of the solution. Then, using this theorem, it discusses the Cauchy problem for Camassa-Holm type equations and systems with initial data in spaces of analytic functions on both the circle and the line. Also, it studies the continuity of the data-to-solution map in spaces of analytic functions. Finally, it compares these results with corresponding results for KdV type equations.

Speaker: Kenro Furutani, Professor, Department of Mathematics, Tokyo University of Science
Title: Geometry of Symmetric Operators
Abstract: I will introduce two quantities: one is the Maslov index and the other is the spectral flow. The former can be thought of as a classical mechanical quantity. The latter can be seen as a quantum mechanical quantity, so that it exists only in infinite dimensions. Then I will explain their coincidence in the context of a self-adjoint elliptic boundary value problem, which is the meaning of the title.

September 1st
Speaker: Irina Markina, Professor, Mathematical Department, UiB
Title: Definition of the boundary complex and the corresponding box operator
Abstract: Last time we introduced the Dolbeault complex and the box operator on a complex manifold. We will use the same ideas to introduce the boundary box operator for CR manifolds. After discussing the general ideas we will make concrete calculations for the boundary of the Siegel upper half space, which is isomorphic to the Heisenberg group. We will calculate the boundary box operator in terms of left invariant vector fields of the Heisenberg group and discuss the fundamental solution of the boundary box operator.

Title: Solution of the inhomogeneous Cauchy-Riemann equation
Abstract: This is an introductory lecture on the method of "orthogonal projections" used in the theory of inhomogeneous Cauchy-Riemann equations on complex manifolds. We will introduce the Dolbeault complex and calculate the box operator, which is an analogue of the Laplace operator.
This lecture will be used as an introduction to the boundary complex and the solution of similar problems on CR manifolds.

May 7th
Speaker: Armen Sergeev, Professor, Steklov Mathematical Institute, Moscow, Russia
Title: Harmonic maps and Yang-Mills fields
Abstract: We consider a connection between harmonic maps of Riemann surfaces and Yang-Mills fields on R^4. A harmonic map from a Riemann surface into a Riemannian manifold is an extremal of the energy functional given by the Dirichlet integral. Such maps satisfy nonlinear elliptic equations of second order, generalizing the Laplace-Beltrami equation. In the case when the target Riemannian manifold is Kaehler, i.e. provided with a complex structure compatible with the Riemannian metric, the holomorphic and anti-holomorphic maps realize local minima of the energy. We are especially interested in harmonic maps of the Riemann sphere, called briefly harmonic spheres. The Yang-Mills fields on R^4 are the extremals of the Yang-Mills action functional. Local minima of this functional are given by instantons and anti-instantons. There is an evident formal similarity between the Yang-Mills fields and harmonic maps, and after Atiyah's paper of 1984 it became clear that there is a deep reason for such a similarity. Namely, Atiyah proved that for any compact Lie group G there is a bijective correspondence between the gauge classes of G-instantons on R^4 and based holomorphic spheres in the loop space ΩG of G. This theorem motivates the harmonic spheres conjecture, stating that there should exist a bijective correspondence between the gauge classes of Yang-Mills G-fields on R^4 and based harmonic spheres in ΩG. In our talk we discuss this conjecture and possible ways of proving it.

Speaker: Erlend Grong, Postdoc, Mathematical Department, University of Luxembourg
Title: Horizontal holonomy with applications
Abstract: We look at holonomy groups defined by parallel transport along curves that are tangent to a given subbundle. This simple idea turns out to have powerful applications to the theory of foliations and connections on fiber bundles. It is also computable, in the sense that we can formulate a generalization of the Ambrose-Singer theorem and the Ozeki theorem for this holonomy. These results are based on joint work with Yacine Chitour (L2S, Paris XI), Frédéric Jean (ENSTA ParisTech) and Petri Kokkonen (Varian Medical Systems, Helsinki).

First speaker: Stine Marie Eik, student, Mathematical Department, UiB
Title: Rademacher's Theorem
Abstract: The theorem of Rademacher states that a special class of continuous functions, namely Lipschitz continuous functions, is differentiable almost everywhere. In this talk we will begin by developing the necessary theory to state the theorem, and thereafter give a sketch of the proof.

Second speaker: Anja Eidsheim, student, Mathematical Department, UiB
Title: Hausdorff Measure and the Dimension of Pretty Pictures
Abstract: This talk will start out by reviewing the Hausdorff outer measure and a few of its properties as a function of some important parameters, particularly the Hausdorff dimension. We will proceed to look at some sets in R^n that have fractional dimension, and how the dimension of such fractal sets is calculated. If time permits, estimating the fractal dimension of naturally occurring objects studied in fields other than mathematics might also be mentioned as a group of applications of this theory.

Speaker: Anastasia Frolova, PhD student, Mathematical Department, UiB
Title: Quadratic differentials, graphs and Stasheff polyhedra
Abstract: We give a short overview of properties of rational quadratic differentials, which give solutions to certain problems in potential and approximation theory. An important problem in this context is to characterize and describe quadratic differentials with short trajectories. For this purpose we introduce a graph representation of rational quadratic differentials with one pole and classify the graphs corresponding to quadratic differentials with short trajectories. We show a connection between the graphs and Stasheff polyhedra, which leads to a description of the combinatorial structure of the set of quadratic differentials with short trajectories.

Speaker: Mauricio Antonio Godoy Molina, Postdoc, Mathematical Department, UiB
Title: Riemannian and sub-Riemannian geodesic flows
Abstract: Sub-Riemannian (sR) geometry is very different from Riemannian (R) geometry in many senses, and not just by an extra "sub-" in the name. Besides striking differences between their metric structures, many of the geometric invariants for R and sR manifolds that might look quite similar in spirit sometimes have completely unrelated behaviors. This is indeed the case for the (R and sR) geodesic flows, although in some well-studied situations (e.g., certain kinds of Lie group actions) the extra structure implies very nice relations between these flows. The goal of this talk is to show that the geodesic flows of an sR metric and of a Riemannian extension commute if and only if the extended metric is parallel with respect to a certain connection. This helps us describe the geodesic flow of sub-Riemannian metrics on totally geodesic Riemannian submersions. As a consequence we can characterize sub-Riemannian geodesics as the horizontal lifts of projections of Riemannian geodesics. This talk is based on a joint preprint with E. Grong available at http://arxiv.org/abs/1502.06018.

Speaker: Sigmund Selberg, Professor, Mathematical Department, UiB
Title: The Dirac-Klein-Gordon equations: their non-linear structure and the regularity of their solutions
Abstract: The Dirac-Klein-Gordon system is a basic model of particle interactions in physics. Mathematically this model can be studied as a system of non-linear dispersive PDEs. In this talk I will discuss the non-linear structure of these equations and how this structure enters into the analysis of the regularity properties of the solutions. If time permits I may also touch upon some themes related to Fourier restriction.

Speaker: Alexey Tochin, PhD student, UiB
Title: Rigged Hilbert spaces
Abstract: One of the basic results of finite-dimensional linear algebra states that for any self-adjoint or unitary operator there exists a complete system of eigenvectors. The situation becomes more complicated upon passing to the infinite-dimensional case. We consider some examples and possible solutions that involve a construction called a rigged Hilbert space. The same construction appears in attempts to define a normally distributed Gaussian random law in an infinite-dimensional linear space. We will try to make the talk understandable for bachelor and master level students.

Speaker: Alexander Vasiliev, Professor, Mathematical Department, UiB
Title: Analysis and topology of polynomial lemniscates III
Abstract: The first part of my talk is dedicated to fingerprinting polynomial lemniscates. It turns out that they represent a good tool for image analysis and can be used for approximation of planar shapes.
We study their fingerprints (in Mumford's terminology) by conformal welding and reveal the geometric meaning of their inflection points. The second part concerns the topological aspects of lemniscates. In particular, they carry the structure of an operad.

Title: Analysis and topology of polynomial lemniscates II

Title: Analysis and topology of polynomial lemniscates I

Title: Rectifiable sets in the Euclidean space III
Abstract: In the last lecture on rectifiable sets, we will review the fundamental theorem of calculus in several variables and see its generalisation to m-rectifiable sets. We will also discuss the area and co-area formulas, which can be considered as a general form of the Fubini theorem.

Title: Rectifiable sets in the Euclidean space II
Abstract: Today we recall the properties of Lipschitz maps from m-dimensional Euclidean space to n-dimensional Euclidean space and prove that any m-rectifiable set in R^n is "almost" a C^1-smooth manifold.

Title: Rectifiable sets in the Euclidean space I
Abstract: In the next series of lectures (up to three) we will study rectifiable sets in Euclidean space. In the first lecture we recall the notion of the rectifiability of a curve and show that the one-dimensional Hausdorff measure of a Jordan curve is the length of this curve. Then we revise the properties of Lipschitz maps in Euclidean space and define rectifiable sets.

Speaker: Mauricio Godoy Molina, Postdoc, Mathematical Department, UiB
Title: Capacity and energy III
Abstract: To wrap up the contents of the previous seminars, I'll try to show some of the ideas behind the proof of Frostman's lemma, which relates Hausdorff measure and capacity, and the existence of sets of "arbitrary" Hausdorff dimension.

Title: Capacity and energy
Abstract: The aim of this week's seminar is to relate the $s$-capacity introduced last week to some notions in potential theory (à la Choquet), thus giving more intuitive reasons to calculate it. The fundamental idea is to characterize how big the set of singularities of a superharmonic function can be (surprise, surprise: it has capacity zero). If time permits, we will take a closer look at the proofs of some of the results mentioned last week.

Abstract: The aim of this week's seminar is to introduce the Riesz $s$-capacity of a subset of Euclidean space and use it to deduce some good behavior of "small" sets; i.e., geometrically speaking, Radon measures shouldn't be too concentrated on small regions. As a consequence of this capacitarian approach (plus some technical details left behind in prior seminars), we will obtain formulas for the Hausdorff dimension of products of sets, and we will show that for any set $A$ of Radon measure $t>0$ and any $0<s<t$, there exists a subset of $A$ with Radon measure $s$.

Speaker: Christian Autenried, PhD, Mathematical Department, UiB
Title: Numerical methods in mathematical finance illustrated by the Monte Carlo method
Abstract: We introduce the idea and application of the Monte Carlo method to approximate integrals. For that purpose we construct a pseudo-random number generator for uniformly distributed random variables. Furthermore, we present the method of control variates, which is among the most effective and broadly applicable techniques for improving the efficiency of Monte Carlo simulation. (A small numerical sketch of the control-variates idea appears at the end of this page.)

Speaker: Mauricio Godoy Molina, Postdoc, Mathematical Department, UiB
Title: Lipschitz maps
Abstract: To conclude this semester's series of lectures presenting some basic concepts in geometric measure theory, I will focus on Lipschitz maps. I will explain what role Lipschitz maps play in this measure-theoretic context, how far they are from being differentiable, what the measure of the set of critical points of a Lipschitz map is, and what the relation between them and the Hausdorff measure is.

October 29th and 30th
Speaker: Bruno Franchi, Professor, University of Bologna
Title: Differential forms in Carnot groups
Abstract: The aim of these talks is to present a comprehensive introduction to the theory of differential forms in Carnot groups (the so-called Rumin complex). Main topics will be:
- left invariant differential forms in Carnot groups and the notion of weight;
- the algebraic part of the differential and its pseudo-inverse;
- Rumin's classes $E_0^*$;
- the complex $(E^*,d)$ of "lifted forms";
- the Rumin differential $d_c$;
- Rumin's complex $(E_0^*,d_c)$ is homotopic to the de Rham complex;
- examples: Heisenberg groups, Engel's group, free Carnot groups;
- the intrinsic Laplacian on forms.

Speaker: Dante Kalise, Johann Radon Institute for Computational and Applied Mathematics (RICAM), Linz, Austria
Title: Hamilton-Jacobi equations in optimal control: theory and numerics
Abstract: In this talk we will review some classical and recent results concerning the link between Hamilton-Jacobi equations and optimal control, their numerical approximation, and different applications. A standard tool for the solution of optimal control problems is the application of the Dynamic Programming Principle proposed by Bellman in the 50's. In this context, the value function of the optimal control problem is characterized as the solution of a first-order, fully nonlinear Hamilton-Jacobi-Bellman (HJB) equation. The solution is understood in the viscosity solution sense introduced by Crandall and Lions. A major advantage of the approach is that a feedback mapping connecting the current state of the system and the optimal control can be obtained by means of the Pontryagin principle. However, since the HJB equation has to be solved in a state space of the same dimension as the system dynamics, the approach is only feasible for low-dimensional dynamics. In the first part of the talk, we will present the main results related to HJB equations, viscosity solutions and links to optimal control. The second part will be devoted to the construction of efficient and accurate numerical schemes for the approximation of HJB equations.

Speaker: Alexey Tochin, PhD student, Department of Mathematics, UiB
Title: The standard Gaussian distribution on infinite-dimensional linear spaces
Abstract: On the way to constructing an infinite-dimensional analog of the standard Gaussian measure one encounters difficulties. In particular, it is not possible in a countable-dimensional Hilbert space. We consider rigged spaces and Gaussian Hilbert spaces as possible solutions. In the second hour we introduce the so-called nuclear spaces, which are a sort of limit of a sequence of Hilbert spaces. This concept is used in the Bochner-Minlos theorem, which gives a very general approach to defining not only Gaussian but even more advanced distributions. This is one of the mathematical tools of Euclidean quantum field theory.

Speaker: Anastasia Frolova, PhD student, UiB
Title: Quadratic differentials, graphs and Laguerre polynomials
Abstract: We give a short introduction to rational quadratic differentials.
We present a graph representation of a particular type of such quadratic differentials. We show how rational quadratic differentials can be applied to the study of the limit zero distribution of Laguerre polynomials.

First speaker: Torleif Anstensrud, PhD student, Department of Engineering Cybernetics, NTNU
Title: Periodic solutions of nonlinear dynamic systems: Applications in legged robotics.
Abstract: The talk will focus on the role of periodic solutions of nonlinear differential equations in the search for walking patterns for legged robots. I will talk briefly about periodic trajectories in general, and then specifically go into detail about periodic solutions of hybrid systems (systems having both continuous and discrete dynamics) and their relationship to stable walking. Following this, the method of virtual holonomic constraints will be introduced as a tool for searching for certain periodic trajectories. The method will be demonstrated in broad terms on one of the simplest walking machines, the 2D passive biped.

Second speaker: Sergey Kolyubin, Research Fellow, Department of Engineering Cybernetics, NTNU
Title: In Pursuit of the Optimal Trajectory for a Robotic Pitcher
Abstract: We will discuss how optimization can be used in programming the robotic ball-pitching world champion. The main challenge is planning the longest possible pitch given range, speed, and torque constraints for every joint. The original approach will be presented as a tool for trajectory generation. The KUKA LWR compliant and redundant robotic arm is considered as a test bed for implementation. Finally, I will offer suggestions on how the results can be advanced and how you can participate. See also the attached file for the robot model.

Title: Hausdorff measure.
Abstract: We will introduce Carathéodory's general construction of outer measures and show that it yields a Borel regular measure. A particular case of that construction is the Hausdorff measure in an arbitrary metric space. We compare the Hausdorff measure with the Lebesgue measure in Euclidean space and introduce the Hausdorff dimension of a set. As an application we calculate the dimension of the Koch snowflake (a short self-similarity version of this computation is sketched after the abstracts below).

Title: Differentiation of measures.
Abstract: We will define the derivative of one measure with respect to another and study when this derivative exists. As a corollary we obtain a generalised fundamental theorem of calculus and the Radon-Nikodym theorem. We will also discuss the Hardy-Littlewood maximal function and the Carathéodory construction of outer measures, which in particular leads to the Hausdorff measure.

Title: Review of measure theory.
Abstract: Last time we spoke mostly about the Lebesgue measure in Euclidean space. We need several notions, such as Borel and Radon measures, regular measures, and others, to extend Vitali's covering lemma from the Lebesgue measure to more general measures, such as Radon measures. I will give the necessary definitions and examples. If we have time, we will start to speak about differentiation of one measure with respect to another.

Title: Covering lemmas in Euclidean space.
Abstract: This semester we organise within the analysis seminar a special course in geometric measure theory following the book by Pertti Mattila, "Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability". In the first lecture we will consider the Vitali and Besicovitch covering lemmas, which allow one to prove theorems about the differentiability of Lebesgue and Radon measures. I will introduce the necessary material gently, so that less prepared students can follow the course.
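A quick aside, filling in the Koch-snowflake computation promised in the Hausdorff-measure abstract above (a standard self-similarity argument, not the lecture's proof):

```latex
% The Koch curve K is the union of N = 4 copies of itself scaled by r = 1/3:
%   K = S_1(K) \cup \dots \cup S_4(K),  |S_i(x) - S_i(y)| = |x - y|/3.
% The similarity dimension s solves N r^s = 1; under the open set condition
% it coincides with the Hausdorff dimension:
4 \cdot \left(\tfrac{1}{3}\right)^{s} = 1
\quad\Longrightarrow\quad
s = \dim_H K = \frac{\log 4}{\log 3} \approx 1.2619 .
```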
Speaker: Timothy Candy, Chapman Fellow, Imperial College London, UK
Title: Critical well-posedness for the cubic Dirac equation
Abstract: We outline recent work towards a global well-posedness theory for the massless cubic Dirac equation for small, scale-invariant data in spatial dimensions n = 2, 3. The main difficulty is the lack of available Strichartz estimates for the Dirac equation in low dimensions. To overcome this, there are two main steps. The first is a construction of the null frame spaces of Tataru adapted to the Dirac equation, which form a suitable replacement for certain missing endpoint Strichartz estimates. The second is a number of bilinear and trilinear estimates that exploit subtle cancellations in the structure of the cubic nonlinearity. This is joint work with Nikolaos Bournaveas.

First speaker: Kenro Furutani, Professor, Department of Mathematics, Tokyo University of Science, Tokyo, Japan
Title: Towards a construction of the heat kernel for a higher step Grushin operator
Abstract: I start this talk with a geometric and group-theoretical introduction to Grushin-type operators and a possible integral form of their heat kernel. Then I explain a construction of an action function and a candidate for a volume function associated to a higher step Grushin operator by means of the complex Hamilton-Jacobi method.

Second speaker: Mitsuji Tamura, Assistant Professor, Department of Mathematics, Tokyo University of Science, Tokyo, Japan
Title: On the global Carleman estimate and its applications.
Abstract: In this talk, we consider the global Carleman estimate for the time-dependent inhomogeneous Schroedinger operator. We reveal the relation between the strong and weak pseudoconvexity conditions and the geometry of the domain of definition of the Schroedinger operator. We mention applications of this inequality to the inverse problem, to the unique continuation property (UCP) for the boundary value problem for the Schroedinger equation, and to control problems.

Speaker: Georgy Ivanov, PhD student, University of Bergen.
Title: Gaussian free field and slit stochastic flows.
Abstract: Connections between the Gaussian free field and SLE were first established by Schramm and Sheffield, and have since been extensively studied in the literature. It was realized recently that the chordal, radial and dipolar SLEs are special cases of slit holomorphic stochastic flows. We investigate what other types of general slit holomorphic stochastic flows can be related to the Gaussian free field in a similar manner.

Speaker: Melkana A. Brakalova, Associate Professor, Department of Mathematics, Fordham University, New York, USA
Title: Conformal invariants and Teichmüller's Modulsatz in the plane
Abstract: The module of a quadrilateral or a doubly connected domain and the extremal length of a family of curves are two interconnected conformal invariants that have played a fundamental role in the study of analytic and geometric properties of quasiconformal mappings in the plane. I will introduce these notions, discuss some of their properties and methods of evaluation, as well as examples of the impact they have had. In the second part of the talk I will state and prove Teichmüller's Modulsatz using two methods, one based on conformal mapping techniques and the other on an appropriate admissible function, which opens the possibility of extending the Modulsatz to more general settings.
Applications of the Modulsatz may also be discussed.

First speaker: Giovani L. Vasconcelos, Professor, Department of Mathematics, Imperial College London, and Department of Physics, Federal University of Pernambuco, Recife, Brazil.
Title: Conformal geometry in multiply connected domains: a new era of conformal mapping
Abstract: Many important problems in two-dimensional physics can be conveniently formulated as boundary-value problems for analytic functions in the complex plane. If the domain of interest is simply or doubly connected, the problem can often be solved exactly by standard conformal mapping techniques. The situation is much more complicated, however, in the case of domains with higher connectivity, because conformal mappings for such domains are notoriously difficult to obtain. In this talk, I will describe a large class of conformal mappings from a bounded circular domain to multiple-slit domains which are relevant for several physical systems. The slit maps are written explicitly in terms of the primary and secondary Schottky-Klein prime functions defined by the Schottky group associated with the circular domain and its subgroups. As a first application of our theory, I will compute exact solutions for the free boundary problem corresponding to the steady motion of multiple bubbles in a Hele-Shaw cell. Time-dependent solutions for multiple Hele-Shaw bubbles will also be presented. Other possible applications in fluid dynamics (e.g. vortex dynamics around multiple obstacles), growth models (e.g. Loewner evolution in multiply connected domains), and 2D string theory (e.g. multi-loop diagrams) will be briefly discussed.

Second speaker: Bruno Carneiro da Cunha, Professor, Federal University of Pernambuco, Recife, Brazil
Title: Liouville Field Theory Applied to Boundary Problems
Abstract: Riemann's mapping theorem allows one to associate a conformal map to a simply connected two-dimensional domain. The idea of "averaging over conformal maps" -- Liouville Field Theory -- has had many applications in critical phenomena and string theory. In this talk I will review some applications of Liouville field theory to boundary problems, some of them surprisingly related to two-dimensional quantum gravity, and the role of boundary conditions.

Speaker: Donatella Danielli, Purdue University, West Lafayette, USA.
Title: Frequency functions, monotonicity formulas, and the free boundary in the thin obstacle problem.
Abstract: Monotonicity formulas play a pervasive role in the study of variational inequalities and free boundary problems. In this talk we will describe a new approach to a classical problem, namely the thin obstacle (or Signorini) problem, based on monotonicity properties for a family of so-called frequency functions.

Speaker: István Prause, professor, University of Helsinki.
Title: Bilipschitz maps, logarithmic spirals and complex interpolation.
Abstract: How much can a bilipschitz map spiral? We explore two complementary aspects: how fast and how often. Quasiconformal techniques turn out to be effective for studying this problem. In many ways, rotational phenomena for bilipschitz maps are dual to stretching properties of quasiconformal maps. I will contrast these two and explain what links them together. The talk is based on joint work with K. Astala, T. Iwaniec and E. Saksman.

Speaker: Mark Agranovsky, professor, Bar-Ilan University (Israel).
Title: Common nodal surfaces in Euclidean space.
Abstract: Nodal sets are the zero sets of Laplace eigenfunctions.
They describe wave propagation fairly well and are the subject of strong interest. While the global structure of a single nodal set can hardly be well understood, one may hope that common nodal sets of large families of eigenfunctions must have a rather special geometry. In particular, it was conjectured that common nodal hypersurfaces of eigenfunctions, arising as the spectrum of a compactly supported function, are cones: translates of the zero sets of nonzero homogeneous harmonic polynomials (spatial harmonics). This is confirmed in 2D and is still open in higher dimensions. The approaches and the current status of the problem will be discussed.

Speaker: Christian Autenried, PhD student, UiB
Title: Classification of 2-step nilpotent Lie algebras
Abstract: We will describe a metric approach to the study of Lie algebras that are nilpotent of step 2. I will try to make my talk understandable for bachelor- and master-level students.

Speaker: Alexey Tochin, PhD student, UiB
Title: Gaussian free field and Schramm-Loewner evolution
Abstract: This is a continuation of the talk about the Gaussian free field presented in Geilo. We begin by recalling the definition and basic properties. Then we see how the zero level line of an approximation of the Gaussian free field generates the same random law as the one arising from the Schramm-Loewner evolution (O. Schramm and S. Sheffield, 2005). We continue with the Markov property of the Gaussian free field to illustrate that fact. In the end the so-called Ward identities will be discussed.

Speaker: Ragnar Winther, professor, University of Oslo.
Title: Local bounded cochain projections and the bubble transform.
Abstract: The study of discretizations of Hodge Laplace problems in finite element exterior calculus unifies the theory of mixed finite element approximations of a number of problems in areas like electromagnetism and fluid flow. The key tool for the stability analysis of these discretizations is the construction of projection operators which commute with the exterior derivative and at the same time are bounded in the proper Sobolev norms. Such projections are referred to as bounded cochain projections. The canonical projections, constructed directly from the degrees of freedom, commute with the exterior derivative, but unfortunately they are not properly bounded. On the other hand, bounded cochain projections have been constructed by combining a smoothing operator and the unbounded canonical projection. However, an undesired property of these smoothed projections is that, in contrast to the canonical projections, they are nonlocal. Therefore, we have recently proposed an alternative construction of bounded cochain projections which is also local. This construction can be seen as a variant of the well-known Clément operator, and it utilizes a double complex structure defined on the macroelements associated with the subsimplexes of the grid. In addition, we will also discuss a new tool for the analysis of finite element methods, referred to as the bubble transform. In contrast to all the projection operators above, this transform leads to projections with bounds independent of the polynomial degree of the finite element spaces. As a consequence, it can potentially simplify the analysis of the so-called p-method.

Speaker: Mauricio Godoy Molina, postdoc, UiB
Title: Abstracting the rolling problem.
Abstract: The aim of this seminar is to present a generalization of the rolling system to the abstract framework of Cartan geometries, which are the most general environment in which the notion of "development" can be carried out. In this new context, many of the seemingly ad hoc geometric concepts introduced for the rolling system become somewhat more natural, albeit less intuitive. This talk is based on joint work with Y. Chitour and P. Kokkonen from Paris XI.

Speaker: Alexander Vasiliev, professor, University of Bergen.
Title: Loewner equation and integrable systems.
Abstract: We argue that the Loewner equation serves as a background tool for some integrable systems. In particular, splitting time leads to the Vlasov equation for the distribution function of plasma, and to the Benney hierarchy. We also show that the solution to the Loewner equation with infinite-dimensional time gives the Lax function which solves the dispersionless KP hierarchy, in which the Benney equations may be recovered as the second equations of the dKP hierarchy. Joint work with Dmitri Prokhorov and Maxim Pavlov.

Speaker: Victor Gichev, Sobolev Institute of Mathematics (Omsk Branch), Omsk, Russia.
Title: Invariant cone fields and semigroups in Lie groups.
Abstract: I shall briefly describe some areas of mathematics related to the objects of the title and concentrate on one of them, bi-invariant orderings in Lie groups. I'll formulate a theorem which characterizes the cones corresponding to the ''good'' orderings and begin preparations for its proof. This includes a theorem on reachable sets of an invariant control system in a nilpotent Lie group extended by R.

Speaker: Nam-Gyu Kang, Seoul National University, Republic of Korea.
Title: Gaussian Free Field, Conformal Field Theory, and Schramm–Loewner Evolution.
Abstract: I will present an elementary introduction to conformal field theory in the context of complex analysis and probability theory. Introducing the Ward functional as an insertion operator under which the correlation functions are transformed into their Lie derivatives, I will explain several formulas in conformal field theory, including Ward's equations. This presentation will also include relations between conformal field theory and Schramm–Loewner evolutions in various conformal types. Some recent work on the case of multiple SLE curves and their classical limits will be discussed. This is joint work with Nikolai Makarov, Hee-Joon Tak, Dapeng Zhan, and Tom Alberts.

Speaker: Dmitri Prokhorov, professor, Saratov State University, Russia.
Title: On a ratio of harmonic measures of slit sides.
Abstract: The talk is devoted to estimates of a ratio of harmonic measures of slit sides depending on geometric properties of the slit. For a domain $\Omega$ slit along a curve $\gamma=\gamma[0,t]$ and for a point $a\in\Omega\setminus\gamma$, define $m_k(t)$, $k=1,2$, to be the harmonic measures of $\gamma_k[0,t]$ at $a$ with respect to $\Omega$, where $\gamma_1[0,t],\gamma_2[0,t]$ are the two sides of $\gamma[0,t]$. We estimate asymptotically $$\frac{m_1(t)}{m_2(t)},\qquad t\to+0,$$ taking into account the geometry of $\gamma$.

Speaker: Alexey Tochin, PhD student, University of Bergen.
Title: General 1-Slit Loewner Equation.
Abstract: We introduce a family of equations which, in a certain sense, generalize various versions of the Loewner equation. We start with the definitions of very general objects on an arbitrary Riemann surface. Using them, we introduce and analyze the 1-Slit Loewner Equation. We restrict our attention to the case of the hyperbolic Riemann surface (the unit disk) and explain some of our new results. This is part of our ongoing joint work with G. Ivanov and A. Vasiliev.
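Since slit Loewner evolutions recur throughout this page, a minimal numerical sketch may help orient newcomers (my own illustration, not material from the talks; the discretization by elementary vertical-slit maps is the standard one):

```python
import numpy as np

# Chordal Loewner equation: d g_t(z)/dt = 2 / (g_t(z) - lam(t)), g_0(z) = z.
# Freezing the driving value lam over a short step dt gives the elementary
# map g(z) = lam + sqrt((z - lam)^2 + 4 dt); composing the inverse maps
#   f(w) = lam + sqrt((w - lam)^2 - 4 dt)      (branch with Im >= 0)
# from time t_k back to 0 recovers the tip of the slit at time t_k.

def csqrt_up(z):
    w = complex(z) ** 0.5
    return w if w.imag >= 0 else -w

def loewner_trace(lam, dt):
    tips = []
    for k in range(len(lam)):
        w = complex(lam[k])
        for j in range(k, -1, -1):
            w = lam[j] + csqrt_up((w - lam[j]) ** 2 - 4 * dt)
        tips.append(w)
    return np.array(tips)

# Brownian driving with variance parameter kappa: approximate SLE(kappa).
rng = np.random.default_rng(0)
n, T, kappa = 300, 1.0, 2.0
dt = T / n
lam = np.cumsum(np.sqrt(kappa * dt) * rng.standard_normal(n))
print(loewner_trace(lam, dt)[:3])   # first points of the approximate trace
```

With a deterministic driving function the same loop draws smooth slits; the Brownian driving used here gives a crude approximate SLE(kappa) trace.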
Speaker: Hans Martin Reimann, University of Bern, Switzerland
Title: The mathematics of hearing
Abstract: Many mathematical problems arise in the study of the auditory pathway. The focus will be on two basic topics: How is the signal processing done in the inner ear and in the peripheral acoustic centers? Is there a calculus that suitably describes the neuronal processes?

Speaker: Georgy Ivanov, PhD student, Mathematical Department, UiB
Title: Stochastic holomorphic semiflows in the unit disk.
Abstract: In 1984, H. Kunita considered stochastic flows on smooth paracompact manifolds. In particular, he showed that if the vector fields X_1, ..., X_m defining a Stratonovich SDE are complete and generate a finite-dimensional Lie algebra G, then the corresponding flow is in fact a flow of diffeomorphisms, taking values in G. We restrict ourselves to the case of holomorphic flows on the unit disk, but allow the vector field at "dt" to be semicomplete. In this case, general Loewner theory provides an immediate proof of the fact that the corresponding stochastic flow is a flow of holomorphic maps of the unit disk into itself (a holomorphic semiflow). The results can be extended to multiply connected domains, as well as to general complex hyperbolic manifolds. This is part of our ongoing joint work with A. Tochin and A. Vasiliev.

Speaker: Victor Kiselev, PhD student, Mathematical Department, UiB.
Title: A fast segmentation method for color images.
Abstract: Image segmentation has always been one of the central questions in image processing. To segment an image means to divide it into non-overlapping "meaningful" regions. We will discuss some methods used for such problems and a particular semi-supervised segmentation method, which is based on feature extraction from different color spaces and employs a variational framework.

Speaker: Christian Autenried (PhD student, UiB).
Title: Classification of H-type Lie algebras
Abstract: In the talk we show that the extension of an H-type Lie algebra n_{r,s}, induced by a Clifford algebra Cl_{r,s}, by n_{8,0}, n_{4,4} or n_{0,8} preserves isomorphisms. This yields a method which reduces the classification of an arbitrary H-type Lie algebra n_{r,s} to the classification of n_{t,u} with 0 \leq t,u \leq 8. Furthermore, we give an overview of the current state of research on the classification of n_{t,u} with 0 \leq t,u \leq 8 and some methods to classify the H-type Lie algebras n_{t,u}.

Speaker: Irina Markina, professor, University of Bergen.
Title: Integer lattices on pseudo-$H$ groups
Abstract: During the spring semester I presented our latest result with A. Korolko and M. Godoy about the definition of pseudo $H$-type groups (or general $H$-type groups). In the present seminar I will explain why these groups admit an integer lattice. The existence of a lattice on a Lie group is equivalent to the existence of a basis of the corresponding Lie algebra with rational structure constants. I will give all the necessary definitions and present the construction of a concrete basis in one of the pseudo $H$-type Lie algebras. This is joint work with Professor Kenro Furutani from the Tokyo University of Science, see arXiv:1305.6814.

Speaker: Mahdi Khajeh Salehani, postdoc, University of Bergen.
Title: A geometric approach to nonholonomic dynamics.
Abstract: The Euler-Lagrange equations, while universal, are not always effective for analyzing the dynamics of mechanical systems. For example, it is difficult to study the motion of a simple mechanical system like the Euler top using the Euler-Lagrange equations, either intrinsically or in generalized coordinates. In fact, Euler (1752) discovered that the equations of motion for the rigid body become significantly simpler if one uses, instead of the generalized velocities, the angular velocity components relative to a body frame. There actually exist a number of variational principles one may use to derive the equations of constrained mechanical systems. In this talk, we study some of these principles and give a geometric interpretation of the derived equations of motion in both holonomic and nonholonomic settings, generalizing the ideas pioneered by Euler and further developed by Lagrange (1788) and Poincaré (1901).

Speaker: Nikolay Kuznetsov (Laboratory for Mathematical Modelling of Wave Phenomena, Institute for Problems in Mechanical Engineering, Russian Academy of Sciences, St Petersburg).
Title: Steady water waves with vorticity: spatial Hamiltonian structure.
Abstract: Spatial dynamical systems are obtained for two-dimensional steady gravity waves with vorticity on water of finite depth. These systems have a Hamiltonian structure, and the Hamiltonian is essentially the flow-force invariant.

Speaker: Alexander Vasiliev, professor, University of Bergen
Title: Extremal metrics in the modulus problem for some families of curves and surfaces.
Abstract: The modulus of families of curves (or extremal length) is a powerful method in analysis introduced originally by Grötzsch (1928) and developed by Beurling and Ahlfors (1950). It enjoys conformal invariance and uniqueness of the extremal metric. However, the existence of the latter is a difficult problem, and its explicit expression is known only in a few cases. In 1974, Rodin proposed a method for calculating the extremal metric in the case when the family of curves is the image of another family in the plane for which the extremal metric is known. We extend this theorem to Euclidean space and to polarizable groups, and propose some applications to integral inequalities. This is joint work with Irina Markina (Bergen) and Melkana Brakalova (New York).

Speaker: Anastasia Frolova, PhD student, University of Bergen
Title: Cowen-Pommerenke type inequalities for univalent functions.
Abstract: We present a new estimate for angular derivatives of univalent maps at their fixed points. The method we use is based on the properties of the reduced modulus of a digon and the problem of extremal partition of a Riemann surface.

Title: One-parameter semigroups in the unit disk and estimates for angular derivatives.
Abstract: We introduce a technique which allows us to deduce estimates for general holomorphic functions from estimates for univalent ones. The method relies on the theory of semigroups of holomorphic self-mappings of the unit disc.

Speaker: Alexander Vasiliev, University of Bergen
Title: Boundary distortion under conformal maps
Abstract: We survey some results on boundary distortion under conformal self-maps of the unit disk. In particular, we review Cowen-Pommerenke and Anderson-Vasiliev type inequalities making use of the moduli method.
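For readers new to the moduli method used in the preceding abstracts, the one case where everything is explicit is worth recording (standard textbook material, not from the talks):

```latex
% Modulus of the family Gamma of curves joining the boundary components of
% the annulus A = { z : r < |z| < R }. A metric rho is admissible if every
% such curve has rho-length at least 1; the extremal metric and modulus are
\rho_0(z) = \frac{1}{|z|\,\log(R/r)},
\qquad
M(\Gamma) = \int_A \rho_0^2 \, dm = \frac{2\pi}{\log(R/r)} .
% Radial segments realize equality in the length condition, which is the
% usual first step in proving the extremality of rho_0.
```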
Speaker: Irina Markina, professor, University of Bergen
Title: Relation between the module of a family of curves and a family of surfaces on Carnot groups.
Abstract: I will start by reviewing the relation between the module of a family of curves connecting two compacts and a family of surfaces separating these two compacts. I will present the extremal metrics and the extremal families of curves and surfaces. Then I introduce the analogous notions of module on Carnot groups, pointing out the novelties and difficulties. The main aim is to understand the extremal families and metrics in the geometric setting of Carnot groups.

Speaker: Galina Filipuk, Professor, University of Warsaw
Title: Multiple orthogonal polynomials and their properties.
Abstract: In this talk I shall speak about multiple orthogonal polynomials: their definition, the raising and lowering operators, and the differential equations they satisfy. I shall also present a few examples: the multiple orthogonal polynomials with exponential cubic weight, their zeros, and the properties of Wronskians of multiple orthogonal polynomials. This is joint work with W. Van Assche and L. Zhang (KU Leuven, Belgium).

Abstract: We consider conformal maps from the unit disk into itself which have two fixed points on the unit circle and are conformal at these points. We obtain an estimate of the product of the angular derivatives of such maps at the fixed points. The method we use is based on the properties of the reduced modulus of a digon and the problem of extremal partition of a Riemann surface. (Joint work with Alexander Vasil'ev)

Speaker: Georgy Ivanov, PhD student, University of Bergen
Title: Random walk and PdE on graphs
Abstract: The deep connections between Brownian motion and partial differential equations are well known. In this lecture we consider the discrete counterparts of these concepts: random walks and partial difference equations on graphs. This allows us to illustrate the main ideas of the continuous theory while requiring only an elementary mathematical background (a first course in probability and basic notions of measure theory and discrete mathematics should suffice). A toy computation in this spirit is sketched after the abstracts below.

Speaker: Bruno Franchi (Dipartimento di Matematica, Universita' di Bologna, Bologna, Italy)
Title: Intrinsic graphs in Carnot groups
Abstract: The aim of this talk is to provide an introduction to the theory of intrinsic graphs in Carnot groups and, in particular, to that of intrinsic Lipschitz graphs. The simple idea of an intrinsic graph is the following: let M, H be complementary homogeneous subgroups of a group G; then the intrinsic (left) graph of f: A\subset M\to H is the set graph f = {g . f(g): g\in A}. This notion deserves the adjective ``intrinsic'' since it is invariant under left translations and homogeneous automorphisms of the group (dilations in particular). We stress that Euclidean graphs are not necessarily intrinsic graphs, nor the opposite. Intrinsic graphs appeared naturally in the study of non-critical level sets of differentiable functions from a Carnot group G to the Euclidean space R^k. Indeed, implicit function theorems for groups can be rephrased by stating precisely that these level sets are always, locally, intrinsic graphs. We shall also discuss a remarkably deep relationship between intrinsic graphs associated with a group decomposition and the so-called Rumin complex (E_0^*,d_c) of differential forms in a Carnot group G.
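The toy computation promised in the random-walk abstract above (my own illustration; the graph and boundary data are made up):

```python
import numpy as np

# Discrete Dirichlet problem on the path graph {0, 1, ..., N}:
# h(0) = 0, h(N) = 1, and h(k) = (h(k-1) + h(k+1)) / 2 at interior vertices
# (discrete harmonicity). The exact solution is h(k) = k / N, and it equals
# the probability that simple random walk from k hits N before 0.

N = 10

def walk_estimate(k, n_walks=20000, rng=np.random.default_rng(0)):
    """Estimate h(k) by running simple random walks until they hit 0 or N."""
    hits = 0
    for _ in range(n_walks):
        pos = k
        while 0 < pos < N:
            pos += rng.choice((-1, 1))
        hits += (pos == N)
    return hits / n_walks

for k in (2, 5, 8):
    print(k, walk_estimate(k), k / N)   # Monte Carlo vs. exact value
```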
Speaker: Alexey Tochin (PhD student, UiB)
Title: "A generalization of SLE"
Abstract: This seminar is a continuation of the two previous ones. After recalling the key points we will proceed to the main subject, namely the generalized SLE with one slit, which was introduced at the very end of the last seminar. In essence, this is a two-parametric family of equations that contains the well-known radial, dipolar and chordal SLE as 3 special cases. The properties of all of these new SLE equations are very similar to those of the classical ones. We will not prove them (the work is still in progress), but there will be a lot of numerical simulations illustrating them. (Joint work with Georgy Ivanov and Alexander Vasil'ev)

Speaker: Georgy Ivanov (PhD student, UiB)
Abstract: Radial, chordal and dipolar SLE (Schramm-Loewner evolution) can be defined as families of conformally invariant measures on curves possessing the domain Markov property. The domain Markov property is closely related to the fact that the governing equations can be represented as time-homogeneous diffusion equations. We use general Loewner theory (Bracci, Contreras, Diaz-Madrigal, Gumenyuk) and consider a more general class of diffusions which generate slit evolutions. We use SLE with an attractive boundary point (constructed in our earlier paper) as a model example for this class of measures. (Joint work with Alexey Tochin and Alexander Vasil'ev).

Speaker: Mahdi Khajeh Salehani (Postdoc, UiB)
Title: Classical nonholonomic vs. vakonomic mechanics: a report on the 'debate'
Abstract: To study constrained mechanical systems, there are at least two approaches one may take, namely the "classical nonholonomic approach", which is based on the Lagrange-d'Alembert principle and is not variational in nature, and a variational axiomatic one known as the "vakonomic approach". In fact, there are some fascinating differences between these two procedures, e.g., they do not always give the same equations of motion. The distinction between the two procedures has a long and distinguished history going back to Korteweg (1899), and has been discussed in a more modern context by Arnold, Kozlov and Neishtadt since 1983. In this seminar, we present the classical nonholonomic mechanics and the vakonomic mechanics of systems with constraints, and compare them in order to see when the two mechanics are equivalent, i.e., when they give the same system of equations. For the class of mechanical systems where they are not, we determine which of these approaches is the appropriate one for deriving the equations of (mechanically possible) motions.

Speaker: Christian Autenried (PhD student, UiB)
Title: Clifford modules and admissible metrics
Abstract: In this seminar we recall the definition of Clifford modules and present their classification according to the metric that makes the representations skew-symmetric. Furthermore, we introduce pseudo-H-type algebras and consider three particular examples.

Speaker: Arne Stray (professor, UiB)
Title: Approximation by polynomials and translates of the Riemann zeta function
Abstract: We discuss some recent work involving Mergelyan's theorem and certain properties of Riemann's famous function.

Speaker: Alexander Vasiliev
Title: Introduction to Neurogeometry
Abstract: This will be an informal, comprehensive introduction to the rather recent area of neurogeometry. In particular, we address the problems of inpainting by anisotropic diffusion and the underlying sub-Riemannian geometry of the first visual cortex.
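As a very crude illustration of inpainting by diffusion, mentioned in the neurogeometry abstract above, here is an isotropic toy version (my own sketch; the talk concerns much more refined anisotropic, sub-Riemannian diffusions):

```python
import numpy as np

# Toy diffusion inpainting: fill a masked region of an image by running
# the heat equation inside the hole while keeping the known pixels fixed.
# An anisotropic method would replace the Laplacian below with a
# direction-dependent operator adapted to image contours.

img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # synthetic test image
mask = np.zeros_like(img, dtype=bool)
mask[24:40, 24:40] = True                            # region to restore

u = img.copy()
u[mask] = 0.5                                        # arbitrary initial fill
for _ in range(500):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u[mask] += 0.2 * lap[mask]                       # diffuse only in the hole

print(np.abs(u - img)[mask].max())   # error of the restored region
```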
Speaker: Irina Markina, Professor, UiB
Title: Algebras of Heisenberg type and possible generalisations
Abstract: In the first seminar we introduced the so-called general H-type Lie algebras. In the second seminar I will reveal the relation between these Lie algebras and compositions of quadratic forms and Clifford modules.

Abstract: We introduce a special class of nilpotent Lie groups of step 2 that generalises the so-called Heisenberg-type groups, defined by A. Kaplan in 1980. We replace the inner product by an arbitrary scalar product and relate the construction to compositions of quadratic forms and Clifford modules. We present geodesic equations for the sub-semi-Riemannian metric on nilpotent Lie groups of step 2 and solve them for the case of general H-type groups. We discuss possible classifications of these groups.

Speaker: Prof. Boris Kruglikov (University of Tromsø, Norway)
Title: A tale of two G2
Abstract: The exceptional Lie group G2 is a beautiful 14-dimensional continuous group, related to such diverse notions as triality, the 7-dimensional cross product and exceptional holonomy. It was found abstractly by Killing in 1887 (complex case) and then realized as a symmetry group by Engel and Cartan in 1894 (real split case). Later, in 1910, Cartan returned to the topic and realized split G2 as the maximal finite-dimensional symmetry algebra of a rank 2 (non-holonomic) distribution in dimension 5. This follows from Cartan's analysis of the symmetry groups of Monge equations of the form y'=f(x,y,z,z',z"). I will discuss the higher-dimensional generalization of this fact, based on joint work with Ian Anderson. The compact real form of G2 was realized by Cartan as the automorphism group of the octonions in 1914. In the talk I will also explain how to realize this G2 as the maximal symmetry group of a geometric object (a non-degenerate almost complex structure in dimension 6) and discuss what other symmetry groups are allowed.

Speaker: Prof. Sergey Favorov (Kharkov National University, Ukraine)
Title: Blaschke-type conditions on unbounded domains, generalized convexity, and applications in perturbation theory
Abstract: We introduce a notion of $r$-convexity for subsets of the complex plane. It is a purely geometric characteristic that generalizes the usual notion of convexity. For example, each compact subset of any Jordan curve is $r$-convex. Further, we investigate subharmonic functions that grow near the boundary in unbounded domains with $r$-convex compact complement. We obtain Blaschke-type bounds for their Riesz measure and, in particular, for the zeros of unbounded analytic functions in unbounded domains. These results are based on certain estimates for Green functions on complements of some neighborhoods of an $r$-convex compact set. We apply our results in the perturbation theory of linear operators in a Hilbert space. Namely, let $A$ be a bounded linear operator with an $r$-convex spectrum such that the complement of its essential spectrum $\sigma_{ess}(A)$ is connected, and let $B$ be a linear operator in the Schatten-von Neumann class $S_q$. We find quantitative estimates for the rate of condensation of the discrete spectrum $\sigma_d(A+B)$ near the essential spectrum $\sigma_{ess}(A)$ (note that under our condition $\sigma_{ess}(A+B)=\sigma_{ess}(A)$).

Speaker: Anastasia Frolova (PhD student, UiB)
Title: Extremal length method and estimates of angular derivatives of conformal mappings.
Abstract: We introduce the notion of the reduced modulus of a digon and use it to solve the following extremal problem for conformal mappings. We consider conformal maps from the unit disk into itself which have a fixed point on the unit circle and are conformal at it. We obtain an estimate of the angular derivative of such maps.

Title: On the connection between the Gaussian Free Field (GFF), the Stochastic Loewner Evolution (SLE) and Conformal Field Theory (CFT).
Abstract: The main purpose of the talk is to show connections between stochastic processes, measures on curves, random 2-dimensional distributions, operator-valued distributions and representations of the Virasoro algebra. We begin with a simple problem, the Harmonic Explorer, and show its natural connection to SLE(4), the interface of the GFF and a representation of the Virasoro algebra. We will give the necessary definitions via discrete versions of these objects as well as explore the continuous approach. Then, turning to the general case of the chordal SLE, we will give a brief introduction to Conformal Field Theory. We finish with the approach of Makarov and Kang designed to merge together all three concepts given in the title.

Title: Sub-Riemannian geometry of Stiefel manifolds.
Abstract: In the talk we consider the Stiefel manifold V(n;k) as a principal U(k)-bundle over the Grassmann manifold and study the cut locus from the unit element. We will give a complete description of this cut locus on V(n;1) and present a sufficient condition for the general case.

Title: SLE and CFT
Abstract: We review connections between the Stochastic Loewner Evolution, the Gaussian Free Field and Conformal Field Theory following a recent preprint by Makarov and Kang.

Speaker: Erlend Grong (associate professor, HiB)
Title: Submersions, lifted Hamiltonian systems and rolling manifolds
Abstract: A submersion is a map between two manifolds $\pi:Q \to M$ that is surjective on each tangent space. The kernel of this map gives us a sub-bundle called the vertical bundle. A chosen complement $H$ to this bundle is called an Ehresmann connection. If the submersion $\pi$ is between two Riemannian manifolds and in addition has the property of being a fiberwise isometry when restricted to $H$, then geodesics in $M$ are just projections of geodesics in $Q$ which are horizontal to $H$. Conversely, if the same conditions hold for $\pi$ and $Q$ is a principal $G$-bundle over $M$ with a "sufficiently nice metric", then the projections of the geodesics in $Q$ are the trajectories of gauge-charged particles under the influence of a magnetic field. This magnetic field is represented by the curvature of $H$. We will generalize these ideas by looking at Hamiltonian systems on $M$ and constructing a lifting of them to $Q$. Then we look at what we can learn about the solutions in $M$ by looking at the solutions in $Q$, and conversely, how we can describe the solutions in $Q$ by their projections to $M$. This description relies on a new idea of parallel transport of vertical vector fields. In order to show an application of this new method, we apply it to try to describe optimal curves of the rolling manifold problem.

Title: Dominating sets and simultaneous approximation in the unit disc
Abstract: For a space H of analytic functions in the unit disc D, we look for a geometric characterization of the subsets F such that the sup of |f| over F is equal to the sup of |f| over D for all f in H. This problem is related to problems in simultaneous approximation that will also be discussed.
Speaker: Georgi Raikov (Pontificia Universidad Católica de Chile)
Title: Resonances and spectral shift function singularities for a magnetic Schroedinger operator
Abstract: Let H0 be the 3D Schroedinger operator with constant magnetic field, V be an electric potential which decays sufficiently fast at infinity, and H = H0 + V. First, we consider the asymptotic behaviour of the Krein spectral shift function (SSF) for the operator pair (H,H0) near the Landau levels, which play the role of thresholds in the spectrum of H0. We show that the SSF has singularities near the Landau levels, and describe these singularities in terms of appropriate Berezin-Toeplitz operators. Further, we define the resonances for the operator H and investigate their asymptotic distribution near the Landau levels. We show that under suitable assumptions on the potential V there are infinitely many resonances near every fixed Landau level. We find the main asymptotic term of the corresponding resonance counting function, which again is expressed in terms of the Berezin-Toeplitz operators arising in the description of the SSF singularities. The talk is based on joint works with J.-F. Bony (Bordeaux), V. Bruneau (Bordeaux), and C. Fernández (Santiago de Chile). Partially supported by the Chilean Science Foundation Fondecyt under Grant 1090467.

Speaker: Oles Kutovyi (University of Bielefeld, Germany and MIT, USA)
Title: Stochastic evolutions in ecological models and their scalings
Abstract: We analyze an interacting particle system with a Markov evolution of birth-and-death type in the continuum. We study the corresponding Vlasov-type scaling, which is based on a proper scaling of the corresponding Markov generators and has an algorithmic realization in terms of related hierarchical chains of equations for the correlation functions. The existence of rescaled and limiting evolutions of correlation functions as well as convergence to the limiting evolution are shown.

Speaker: Olga Vasilieva (Universidad del Valle, Cali, Colombia)
Title: Catch-to-stock dependence: the case of small pelagic fish with bounded fishing effort
Abstract: Small pelagic fish (such as herring, anchovies, capelin, smelts, sardines or pilchards) are characterized by a high reproduction rate and a rather short life cycle. Additionally, pelagic fish stocks have strong recurrent cycles of abundance and scarcity and may provide high catch yields per unit of fishing effort even within the scarcity periods. The latter may provoke a collapse of the fish stock, since our ability to predict their periods of abundance and/or scarcity is very limited. Empirical evidence and biological characteristics of pelagic fish suggest that, in contradiction with traditional fishery models, the marginal catch of pelagic species does not react in a linear way to changes in the stock level. In this presentation, we allow non-linearity in the catch-to-stock parameter and propose another variant of a single-stock harvesting economic model, focusing on the dependence of stationary solutions upon this non-linear parameter. Our principal interest consists in finding an optimal fishing effort leading to stationary solutions that prevent a fishing collapse and help to avoid the species' extinction. To do so, we first formulate a social planner's problem in terms of optimal control on an infinite horizon, then analyze its formal solution by applying Pontryagin's maximum principle, and finally examine the possibility of the appearance of a singular arc.
In conclusion, we also examine some core properties of the stationary equilibrium reachable by means of a singular optimal control and prove the existence and uniqueness of steady states under some additional assumptions. This is joint work with Erica Cruz-Rivera (Universidad del Valle, Colombia) and Hector Ramirez-Cabrera (CMM, Universidad de Chile, Chile) within the framework of the Research Project C.I. 7807, 2010-2012.

Title: Self-intersections, corners and cusps of Loewner slits.
Abstract: In 2010 Lind, Marshall and Rohde gave a characterization of Loewner driving terms generating Loewner traces with self-intersections and infinite spirals. Using a similar technique, we characterize driving terms generating slits with corners, and propose a way to characterize driving terms generating tangent slits and slits with cusps.

Title: "Controllability on infinite-dimensional manifolds"
Abstract: One of the fundamental problems in control theory is that of controllability: the question of whether one can drive the system from one point to another with a given class of controls. A classic result in the control theory of finite-dimensional systems is the Rashevsky-Chow theorem, which gives a sufficient condition for controllability on any connected manifold of finite dimension (its classical bracket-generating hypothesis is illustrated after the next abstract). This result was proved independently and almost simultaneously by Rashevsky (1938) and Chow (1939). In this seminar, following the unified approach of A. Kriegl and P.W. Michor (1997) for a treatment of global analysis on a class of locally convex spaces known as convenient, we give a generalization of the Rashevsky-Chow theorem for control systems in regular connected manifolds modeled on convenient (infinite-dimensional) locally convex spaces which are not necessarily normable. This is joint work with Prof. Irina Markina.

Speaker: Vladimir Maz'ya (Professor, University of Liverpool (UK) and University of Linköping (Sweden))
Title: "Higher Order Elliptic Problems in Non-Smooth Domains"
Abstract: We discuss sharp continuity and regularity results for solutions of the polyharmonic equation in an arbitrary open set. The absence of information about the geometry of the domain puts the question of regularity properties beyond the scope of applicability of the methods devised previously, which typically rely on specific geometric assumptions. Positive results have been available only when the domain is sufficiently smooth, Lipschitz or diffeomorphic to a polyhedron. The techniques developed recently allow us to establish the boundedness of derivatives of solutions to the Dirichlet problem for the polyharmonic equation under no restrictions on the underlying domain, and to show that the order of the derivatives is maximal. An appropriate notion of polyharmonic capacity is introduced which allows one to describe the precise correlation between the smoothness of solutions and the geometry of the domain. We also study the 3D Lamé system and establish its weighted positive definiteness for a certain range of elastic constants. By modifying the general theory developed by Maz'ya (Duke, 2002), we then show, under the assumption of weighted positive definiteness, that the divergence of the classical Wiener integral for a boundary point guarantees the continuity of solutions to the Lamé system at this point.
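The classical instance of the bracket-generating hypothesis referred to in the controllability abstract above (standard material, not from the talk):

```latex
% Bracket-generating pair of vector fields on R^3 (the Heisenberg case):
X = \partial_x - \tfrac{y}{2}\,\partial_z, \qquad
Y = \partial_y + \tfrac{x}{2}\,\partial_z, \qquad
[X,Y] = \partial_z .
% X, Y and [X,Y] span the tangent space at every point, so by the
% Rashevsky-Chow theorem any two points of R^3 can be joined by a curve
% tangent to span{X, Y}.
```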
Speaker: Simon G. Gindikin (Rutgers University, USA)
Title: "Holomorphic language for Cauchy-Riemann cohomology"
Abstract: In multidimensional complex analysis it is not possible to work just with holomorphic functions: we also need to consider higher Cauchy-Riemann cohomology. Usually its consideration requires going outside of holomorphic analysis. It turns out that there is a purely holomorphic language for cohomology. We will talk about this language and discuss several situations in Fourier analysis, representation theory and differential equations where it is natural to work with cohomology.

Speaker: Anastasia Frolova (MSc student, UiB)
Title: "Critical measures and quadratic differentials"
Abstract: In this talk, we will show how the theory of quadratic differentials is applicable to the problem of describing critical measures, which provide critical points of the weighted logarithmic energy on the complex plane. We will also overview the connection between critical measures and solutions to the Lamé equation. The talk is based on the master's thesis of A. Frolova.

Speaker: Xue-Cheng Tai (Professor, UiB)
Title: "Partitioning of domains as a mathematical problem: numerical algorithms and applications"
Abstract: This talk is devoted to the optimization problem of continuous multi-partitioning, or multi-labeling, which is based on a convex relaxation of the continuous Potts model, in contrast to previous efforts, which try to tackle the optimal labeling problem in a direct manner. Some algorithms will be presented to solve these problems numerically and efficiently. In the end, we will also present several recent algorithms for computing global minimizers based on graph-cut algorithms and augmented Lagrangian approaches.

Speaker: Wolfram Bauer (Georg-August-Universität Göttingen, Germany)
Title: "Commutative Toeplitz algebras on weighted Bergman spaces over the unit ball"
Abstract: We recall the notion of Toeplitz operators acting on the Hardy space over the unit circle S1 and on weighted Bergman spaces over a domain Ω ⊂ Cn, respectively. Then we discuss the analysis of the corresponding C∗- and Banach algebras which are generated by Toeplitz operators (we call them Toeplitz algebras). In the case where Ω = D ⊂ C is the open unit disc, we describe classes of commutative C∗-algebras that are induced by automorphisms of D. The results can be generalized to the higher-dimensional setting of standard weighted Bergman spaces over the unit ball in Cn, where n > 1. However, in this case new types of commutative Toeplitz Banach algebras appear that are not ∗-invariant and have no counterpart in the one-dimensional situation. If there is time we will explicitly describe the structure of the simplest type of such an algebra, which arises in dimension n = 2. Some of the results have been obtained recently in joint work with N. Vasilevski.

Title: "Chow's theorem"
Abstract: We will discuss Wei-Liang Chow's paper "Über Systeme von linearen partiellen Differentialgleichungen erster Ordnung" (1939). This paper includes Chow's version of the Rashevsky-Chow theorem. Our aim is to introduce Chow's approach and to prove his theorem. Furthermore, you will get a translation of the paper, which was previously available only in German.

Speaker: Yacine Chitour (Laboratoire des signaux et systèmes, Université Paris-Sud 11)
Title: "Rolling on a space form"
Abstract: In this talk, we present generalizations of the classical development operation introduced by E.
Cartan to define holonomy, which consists of rolling a Riemannian manifold M onto a tangent plane with no slipping or spinning. In particular, we will consider the case of a Riemannian manifold M rolling onto a space form. We prove the existence of a principal bundle connection associated to this rolling problem, which enables us to address controllability issues without any Lie bracket computations, but instead by computing some holonomy groups. This is joint work with M. Godoy Molina and P. Kokkonen.

Speaker: Anton Thalmaier (University of Luxembourg)
Title: "Brownian motion with respect to evolving metrics and Perelman's entropy formula"
Abstract: We discuss aspects of stochastic differential geometry in the case when the underlying manifold evolves along a geometric flow. Special interest lies in entropy formulas for positive solutions of the heat equation (or conjugate heat equation) under the Ricci flow.

Speaker: Stephan Wojtowytsch (Master's student, ERASMUS)
Title: "The Alexandrov topology in sub-Lorentzian geometry"
Abstract: We will introduce the notion of the Alexandrov topology connected to the causal structure of spacetimes in Lorentzian geometry and general relativity, and deduce some of its properties. Then we investigate how it carries over to the more general sub-Lorentzian setting. Here, due to the existence of singular curves, on which we cannot use the calculus of variations, the situation becomes more complex. Time permitting, we will touch upon the subject of length-maximizing curves and the sub-Lorentzian time separation function. At all points we will try to contrast the phenomena present with the corresponding results from Riemannian and sub-Riemannian geometry.

Title: "Loewner evolution driven by a stochastic boundary point"
Abstract: The seminar is based on the paper G. Ivanov, A. Vasil'ev, "Loewner evolution driven by a stochastic boundary point", Analysis and Mathematical Physics, 1:387-412, 2012. In that paper we use ideas of general Loewner theory to construct a class of processes having invariance properties similar to those of SLE.

Title: "Stochastic Loewner evolution and Conformal Field Theory"
Abstract: We introduce the basics of SLE (Stochastic Loewner evolution). One of the important problems in this theory is the calculation of martingales as conservation (in mean) laws of this stochastic dynamical process. It turns out that this problem is related to the well-known calculation of correlators in Conformal Field Theory, in particular to singular representations of the Virasoro algebra. We review these relations in a comprehensive way based on a series of papers by Bernard, Bauer, Werner, and Friedrich.

Speaker: Erlend Grong (PhD student, UiB)
Title: "Stochastic integration and stochastic differential equations with applications"
Abstract: The aim of the presentation is to give an introduction to the concept of random processes (also called stochastic processes), their integration and the notion of martingales. We look at the construction of the Itô integral and compare it to the construction of the integral with respect to a measure. We review some of the basic theorems and properties related to this. We end by discussing applications to stochastic differential equations (SDEs). The talk is supposed to be understandable for an audience that is not very familiar with measure theory.
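As a companion to the abstract on stochastic integration and SDEs above, here is a minimal sketch of the simplest numerical scheme (standard textbook material, not from the talk; geometric Brownian motion is chosen because its exact solution provides a check):

```python
import numpy as np

# Euler-Maruyama for geometric Brownian motion
#   dX_t = mu * X_t dt + sigma * X_t dB_t,  X_0 = 1,
# compared against the exact solution
#   X_T = exp((mu - sigma^2/2) T + sigma * B_T).

rng = np.random.default_rng(3)
mu, sigma, T, n = 0.5, 0.3, 1.0, 1000
dt = T / n
dB = np.sqrt(dt) * rng.standard_normal(n)   # Brownian increments

x = 1.0
for db in dB:
    x += mu * x * dt + sigma * x * db       # one Euler-Maruyama step

exact = np.exp((mu - 0.5 * sigma**2) * T + sigma * dB.sum())
print(x, exact)   # agreement up to the O(sqrt(dt)) discretization error
```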
Speaker: Alexander Vasiliev (Professor, UiB)
Title: "Evolution of 2D-shapes"
Abstract: The study of 2D-shapes is a central problem in computer vision. Classification and recognition of objects from their observed silhouette (shape) is crucial. We give an overview of the analysis of 2D-shapes via conformal welding and infinite-dimensional geometry.

January 31st, February 7th
Speaker: Chengbo Li (Tianjin University)
Title: "Curvature invariants in contact sub-Riemannian structures and applications (I-II)"
Abstract: We construct curvature-type invariants of contact sub-Riemannian structures based on the study of the differential geometry of curves in Lagrange Grassmannians, in which we construct a complete system of symplectic invariants. The bridge between them is the so-called "Jacobi curve" associated with an extremal of the normal sub-Riemannian geodesic problem. The curvature invariants can be applied to the estimation of the number of conjugate points of the extremal. If time permits, we compare our construction of curvature invariants with that obtained by the method of equivalence.

Speaker: Mahdi K. Salehani (Postdoc, UiB)
Title: "A geometric study of the three-body problem"
Abstract: The "Newtonian three-body problem" is the mathematical study of how three heavenly bodies move in settings where the dynamics are dictated by Newton's law of motion. Like many mathematical problems, the simplicity of its statement belies the complexity of its solution. In fact, the problem has historically served as a source of mathematical discovery and new problems since 1687, the year of publication of Newton's "Principia Mathematica". In this seminar, I shall present some results of my two recent works. Taking a differential-geometric approach to the three-body problem, due to Wu-Yi Hsiang and Eldar Straume (2007, 2008), first a new family of periodic orbits for the planar three-body problem with non-uniform mass distributions will be exhibited. Then, applying an extension of Hamilton's principle to non-holonomic three-body systems, we obtain the generalized Euler-Lagrange equations of non-planar three-body motions; as an application of the derived dynamical equations, we answer the question raised by A. Wintner on classifying the "constant inclination solutions" of the three-body problem.

2011, May 10th
Speaker: Anastasia Frolova (Master's student, UiB)
Title: "Limit zero distributions of Heine-Stieltjes polynomials"
Abstract: The seminar is based on the paper "Critical measures, Quadratic Differentials, and Weak Limits of Zeros of Stieltjes polynomials" by A. Martínez-Finkelshtein and E. A. Rakhmanov. We consider Heine-Stieltjes polynomials, i.e. polynomial solutions of the Lamé equation. We define extremal and critical measures in order to study the limit zero distributions of such polynomials. We investigate connections of quadratic differentials with critical measures.

2011, May 3rd
Speaker: Elena Belyaeva (Master's student, UiB)
Title: "Modulus method and its application to the theory of univalent functions"
Abstract: We define the modulus of a family of curves according to the definition of Tamrazov and recall the notion of a quadratic differential on a Riemann surface. We consider the problem of determining the trajectory structure of a quadratic differential depending on a parameter. We also consider one extremal problem which we solve using the modulus method.

2011, April 26th
Speaker: Ksenia Lavrichenko (Master's student, UiB)
Title: "Moduli of systems of measures on the Heisenberg group"
Abstract: We shall define the p-module of a system of measures according to the classical definition of B. Fuglede.
We also recall our previous considerations of the p-modulus of a family of curves joining the boundary components of a ring R in the Heisenberg group. We explain the idea of a result of W. Ziemer and F. Gehring relating the conformal capacity of R to the extremal length of a family of surfaces that separate the boundary components of R, in the setting of the Heisenberg group.

Title: "Loewner equation with moving boundary attractive point"
Abstract: A general version of the Loewner equation has been developed since 2008 by Bracci, Contreras, Diaz-Madrigal and Gumenyuk. It was shown that there exists a 1-1 correspondence between Loewner-type evolution families and Herglotz vector fields. We study the case when the attractive point of the Herglotz field moves along the boundary of the unit disc. In the deterministic case we let the point move with constant radial speed. In the stochastic case it realizes Brownian motion on the circle.

2011, March 29th
Title: "Two mathematical problems from high energy physics and quantum field theory"
Abstract: We discuss two mathematically independent problems. The first one is related to meromorphic functions of two (or more) variables and their applications in relativistic quantum scattering theory. The condition of polynomial boundedness leads to a very hard restriction on the function parameters, which can be compared with experimental data. The second part will be devoted to the functional integral. One of the most physically important approaches to it is connected with a formal extension to perturbative series. It gives the so-called Feynman graphs and Feynman rules, which play a critical role in high-energy and elementary-particle physics. The functional integral and the corresponding series admit invariants, which will also be a subject of our discussion.

Title: "Parametrization of the Loewner-Kufarev evolution in Sato's Grassmannian"
Abstract: We discuss complex and Cauchy-Riemann structures of the Virasoro algebra and of the Virasoro-Bott group in relation to the Loewner-Kufarev evolution. Based on the Hamiltonian formulation of this evolution, we obtain an infinite number of conserved quantities and provide an embedding of the Loewner-Kufarev evolution into Sato's Grassmannian.

Speaker: Qifan Li (Master's student, UiB)
Title: "The Carleson-Hunt theorem"
Abstract: Carleson's famous paper of 1966 proved that the Fourier series of square-integrable functions converge almost everywhere. As indicated in Hunt's paper of 1967, Carleson's method can be modified to deal with functions in Lp-spaces with p>1. In addition to Carleson's work, Fefferman provided another approach to this problem in 1971. His proof relies on the almost-orthogonality property of the maximal Carleson operator on the time-frequency plane. This inspired the development of the theory of time-frequency analysis. The joint paper of Lacey and Thiele in 2000 showed that the maximal Carleson operator can be decomposed in the time-frequency plane in terms of wave packets, and they provided a new proof of Carleson's theorem. We will follow Carleson's approach in this talk and discuss the iteration arguments and the construction of exceptional sets.

Title: "Infinite dimensional sub-Riemannian geometry"
Abstract: We will talk about different attempts to study sub-Riemannian geometry on infinite-dimensional manifolds. We will first look at the development of the metric approach to the study of the space of shapes.
Then I will talk about my recent work (joint with Irina and Alexander), where I try to use the previous ideas in order to study the space of holomorphic functions. It turns out that many of the properties of sub-Riemannian geometry on finite-dimensional principal bundles generalize to this case.

Speaker: Christian Autenried (Master's student, UiB)
Title: "Universal Grassmannian (introduction and continuation)"
Abstract: We shall define some dense submanifolds of the Universal Grassmannian and consider their properties. Then we shall study the stratification that gives us a better understanding of the structure of the Grassmannian. The next step is to define the Plücker coordinates and the embedding of the Grassmannian into the projective space of L2. Finally we shall see how the rotation group acts on the Grassmannian and how this action is related to the stratification structure.

Title: "Universal Grassmannian (introduction)"
Abstract: This is the first lecture, where the definition of the infinite-dimensional Grassmannian will be given. The simplest properties, such as the manifold structure and the action of the group, will be considered.

Speaker: Irina Markina (professor, UiB)
Abstract: In the following three lectures we will give the notion of an infinite-dimensional analogue of the Grassmann manifold, which has received the name Universal Grassmannian. In the first lecture I shall give auxiliary definitions from functional analysis, such as the spaces of Hilbert-Schmidt and Fredholm operators and the restricted general linear group, and will provide elementary proofs and examples. In the following two lectures Christian Autenried will define the Universal Grassmannian as a manifold and present its properties.

Title: "The Q space and the Triebel conjecture"
Abstract: This talk is based on the paper http://arxiv.org/abs/0908.4380, which describes the Littlewood-Paley characterization of the Q space and proves that the Q space is exactly the space connected with the conjecture of Hans Triebel regarding an isomorphism theorem for elliptic operators in the BMO space. We refer to Wen Yuan, Winfried Sickel and Dachun Yang, Morrey and Campanato meet Besov, Lizorkin and Triebel, Lecture Notes Math. 2005 (2010), Springer, for the recent progress in this area.

Title: "Polar coordinates on Carnot groups"
Abstract: We describe a procedure for constructing "polar coordinates" in a certain class of Carnot groups, elaborated by Z. M. Balogh and J. T. Tyson (2002). We give explicit formulae for this construction in the setting of the Heisenberg group. The construction makes use of nonlinear potential theory, specifically the fundamental solutions of the p-sub-Laplace operators. One of the applications of this result is an exact capacity (module) estimate. Reference: Balogh, Zoltán M. (CH-BERN-IM); Tyson, Jeremy T. (1-SUNYS). Polar coordinates in Carnot groups. (English summary) Math. Z. 241 (2002), no. 4, 697-730.

Speaker: Anna Korolko (PhD student, UiB)
Title: "Variational Calculus"

Title: "Gaussian free field"
Abstract: This seminar is an overview of the survey "Gaussian free field for mathematicians" by S. Sheffield (arXiv:math/0312099 [math.PR]). The Gaussian free field (GFF), known in physics as the Euclidean bosonic massless free field, is an analog of Brownian motion for the case of d-dimensional time. It is an important object in many constructions of statistical physics. Due to its conformal invariance, the 2-dimensional GFF is a useful tool for studying Schramm-Loewner evolution (SLE).
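The discrete construction behind the GFF lends itself to a short numerical illustration. The sketch below (my own addition, not part of the seminar) samples a discrete GFF on a square grid with zero boundary values by expanding white noise in the sine eigenbasis of the discrete Dirichlet Laplacian; function names are ad hoc.

```python
import numpy as np

def sample_dgff(n, seed=None):
    """Sample a discrete Gaussian free field on the (n-1)x(n-1) interior
    of an n x n grid with zero (Dirichlet) boundary, via the sine
    eigenbasis of the discrete Laplacian."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, n)
    lam = 4 * np.sin(np.pi * j / (2 * n)) ** 2        # 1D Laplacian eigenvalues
    lam2d = lam[:, None] + lam[None, :]               # 2D eigenvalues
    # i.i.d. standard Gaussian coefficients, scaled by 1/sqrt(eigenvalue)
    coef = rng.standard_normal((n - 1, n - 1)) / np.sqrt(lam2d)
    # orthonormal sine modes: rows are eigenvectors of the 1D Laplacian
    S = np.sqrt(2 / n) * np.sin(np.pi * np.outer(j, j) / n)
    return S.T @ coef @ S                             # covariance = (-Laplacian)^{-1}

field = sample_dgff(64, seed=0)
print(field.shape, round(field.std(), 3))
```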
Title: "Gaussian free field and conformal welding" Abstract: This seminar is based on the paper «Random curves by conformal welding» by K.Astala, P.Jones, A.Kupiainen, E.Saksman (2010). The authors construct a conformally invariant random family of Jordan curves in the plane by welding random homeomorphisms of the unit circle generated by the exponential of the trace of the 2-dimensional Gaussian Free Field (GFF). This construction is in a certain sense an analog of Schramm-Loewner Evolution (SLE) for the case of closed curves. We will start by giving the definitions of the trace of GFF and of the problem of conformal welding. Then we will give an outline of the construction and, if time permits, some of the technical details. Speaker: Mauricio Godoy (PhD student, UiB) Title: "On Gromov's theorem on group growth" Abstract: One of the most celebrated results of M. Gromov is the characterization of finitely generated groups of polynomial growth as groups with a nilpotent subgroup of finite index, called almost nilpotent groups. The proof and some related results of this theorem have a strong "sub-Riemannian flavour". For example, the degree of the polynomial growth, given by the Bass-Milnor-Wolf formula, also known as Bass-Guivarch formula, is surprisingly similar to Mitchell's formula for the Hausdorff dimension of a sub-Riemannian manifold at a regular point. The aim of this talk is to present the necessary definitions, to sketch the proof of Gromov's theorem from a sub-Riemannian point of view and to study some examples. Title: "Modifications of the Schwarz lemma for regular functions on a free domain" Abstract: We consider regular functions defined on an arbitrary open subset of the unit disk containing zero. Using classical properties of conformal maps and a sufficient condition for univalence for such functions we get special modifications of the Schwarz lemma and inequalities for the functions' coefficients. In particular, we apply such estimates to algebraic polynomials. Title: "Rolling and controllability" Abstract: Earlier this year, me and some friends (Mauricio, Irina and Fatima) submitted a paper where we have been working on an intrinsic formulation of the problem of rolling manifolds. Our work was inspired by ideas of Agrachev and Sachkov on two dimensional manifolds. Their result about controllability explains how the Gaussian curvature determines which points on the manifold can be reached, given an initial configuration of two rolling bodies. I will present now the notion that acts like an analogue for controllability condition in higher dimensions when one works with rolling problem. Title: "On the Rumin-Ge complex" Abstract: On the famous survey "Carnot-Carathéodory spaces seen from within", Mikhail Gromov proposes, among other ideas, a theory of horizontal differential forms for contact manifolds. This approach was subsequently explored by Michel Rumin, and extended (in a "non-canonical" way) to more general classes of sub-Riemannian manifolds by Zhong Ge. In this seminar I will present some of the basic constructions and ideas behind the theory and we will see how these give a natural environment for a Hodge theory on sufficiently nice sub-Riemannian manifolds. 
Speaker: Takaharu Yaguchi (The University of Tokyo)
Title: "The Discrete Variational Derivative Method Based on Discrete Differential Forms"
Abstract: As is well known, for PDEs that enjoy a conservation or dissipation property, numerical schemes that inherit the property are often advantageous in that the schemes are fairly stable and give qualitatively better numerical solutions in practice. Lately, Furihata and Matsuo have developed the so-called "discrete variational derivative method" that automatically constructs energy-preserving or dissipative finite difference schemes. Although this method was originally developed on uniform meshes, the use of non-uniform meshes is of importance for multi-dimensional problems. In this talk, we will show an extension of this method to triangular meshes. This extension is achieved by combining this method with the theory of discrete differential forms of Bochev and Hyman.

Speaker: Marek Grochowski (Cardinal Stefan Wyszynski University in Warsaw)
Title: "An 'algorithm' for computing reachable sets for some sub-Lorentzian structures on R3"
Abstract: The aim of my talk is to show a kind of algorithm allowing one to construct functions defining reachable sets for certain sub-Lorentzian structures on R3, including contact and Martinet sub-Lorentzian structures. The number of functions (which can be equal to 2 or 4) needed for describing the (future) nonspacelike reachable set from a point q depends on whether or not there exists a timelike abnormal curve contained in the boundary of the reachable set from q.

Title: "Sub-Lorentzian geometry"
Abstract: My task will be to explain basic facts and main definitions from sub-Lorentzian geometry.

Speaker: Professor Alexander Vasiliev (UiB)
Title: "Tangential properties of trajectories for holomorphic dynamics in the unit disk"
Abstract: We consider dynamics of holomorphic self-maps of the unit disk with a Denjoy-Wolff (DW) point of hyperbolic type at the boundary. Contreras and Díaz-Madrigal proved that if two dynamics have the same DW point and are such that every point of the unit disk approaches the DW point under iteration with the same tangent line at the DW point, then they are the same. Bracci conjectured that this property needs to be checked only at a finite number of points in the unit disk. We disprove this conjecture.

Speaker: Professor Alexander Olevskii (Tel Aviv University, Israel)
Title: "Wiener's "closure of translates" problem"
Abstract: Wiener characterized cyclic vectors (with respect to translations) in L1(R) and L2(R) in terms of the zero sets of Fourier transforms. He conjectured that a similar characterization should hold for Lp(R), 1<p

Speaker: Professor Vladislav Poplavskii (Saratov State University)
Title: "On determinants of Boolean matrices"
Abstract: We introduce the notion of the determinant of square matrices over a Boolean algebra. We present applications of the determinant to the theory of rank functions and to the solution of linear systems of inequalities and equations.

Speaker: Professor Dmitri Prokhorov (Saratov State University, Russia)
Title: "Integrability cases of the Loewner equation"
Abstract: We give particular cases of the Loewner equation that can be integrated in quadratures. The corresponding mapping properties are described.
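The abstract does not state the equation, but for readers unfamiliar with it, the chordal Loewner equation is $\partial_t g_t(z) = 2/(g_t(z) - \lambda(t))$ with $g_0(z) = z$, and constant driving is one of the cases integrable in quadratures. A minimal numerical check (my own illustration, not from the seminar):

```python
import numpy as np

def chordal_loewner(z0, driving, t_max, n_steps=200_000):
    """Euler integration of the chordal Loewner ODE dg/dt = 2/(g - lambda(t))
    for a single point z0 in the upper half-plane."""
    g, dt = complex(z0), t_max / n_steps
    for k in range(n_steps):
        g += dt * 2.0 / (g - driving(k * dt))
    return g

# Constant driving lambda(t) = 0: the exact solution is g_t(z) = sqrt(z^2 + 4t).
z0, t = 1.0 + 1.0j, 0.5
print(chordal_loewner(z0, lambda s: 0.0, t))  # numerical value
print(np.sqrt(z0**2 + 4 * t))                 # exact value, principal branch
```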
Speaker: Professor Martin Schlichenmaier (University of Luxembourg)
Title: "Krichever-Novikov type algebras - an overview"
Abstract: The Witt algebra, its central extension the Virasoro algebra, and the affine Lie algebras play an important role in a number of fields. From the geometric point of view they are infinite-dimensional Lie algebras of meromorphic objects associated to the Riemann sphere. Coming from applications, there is a need for similar constructions for higher genus Riemann surfaces. They are given by Krichever-Novikov type algebras. In this talk we will introduce them and discuss their properties.

Topic: "Differential equations in matrices and the matrix exponential"
Abstract: The exponential function for matrices will be introduced and one-parameter subgroups of matrix groups will be studied. We will show how these ideas can be used in the solution of certain types of differential equations.

Speaker: Qifan Li (Master's student, UiB)
Title: "Proof of Astala's conjecture" (continued)
Abstract: Qifan will continue and present the proofs of a couple of lemmas used in the proof of the main theorem.

Title: "Proof of Astala's conjecture"
Abstract: We will discuss the proof of Astala's conjecture, which is the ground-breaking work of Michael T. Lacey, Eric T. Sawyer and Ignacio Uriarte-Tuero. The new idea in the paper is the proof of the boundedness of certain Calderón-Zygmund operators on spaces with non-doubling weights, which will also be discussed in the presentation. Reference: Michael T. Lacey, Eric T. Sawyer, Ignacio Uriarte-Tuero: Astala's Conjecture on Distortion of Hausdorff Measures under Quasiconformal Maps in the Plane. arXiv:0805.4711v3 [math.CV]

Speaker: Ksenia Lavrichenko (Master's student, UiB)
Title: "Liebermann theorem for a particular case of the Heisenberg group"
Abstract: We consider contact transformations on the three-dimensional Heisenberg group. It is well known that, for example, the group SU(1,2) belongs to the class of contact transformations on the Heisenberg group. The question arises: are there any more? We shall discuss the theorem that gives the conditions under which one can produce contact map flows by vector fields of a special form. Reference: A. Koranyi, H. M. Reimann, "Quasiconformal mappings on the Heisenberg group", 1985.

Speaker: Elena Belyaeva (Master's student, UiB)
Title: "Quadratic differentials on a Riemann surface"
Abstract: A quadratic differential on a Riemann surface is locally represented by a meromorphic function that changes by multiplication by the square of the derivative under a conformal change of the parameter. It defines, in a natural way, a field of line elements on the surface, with singularities at the critical points of the differential, i.e., its zeros and poles. The integral curves of this field are called the trajectories of the differential. We consider the local and global trajectory structure of quadratic differentials and completely determine the structure of trajectories in a special case.

Speaker: Vendula Exnerova
Title: "Bifurcation along a non-degenerate eigenvalue"
Abstract: After an introduction to basic bifurcation terminology, I will go through the Lyapunov-Schmidt reduction. With some preparation, I will prove the theorem about bifurcation along a non-degenerate eigenvalue.

Title: "Quantum harmonic oscillator and the Bloch sphere" (continued)
Abstract: We shall discuss some quantum mechanics underlying the Heisenberg uncertainty and the Hopf principal bundle.
We start with the simplest quantum harmonic oscillator. The symmetry is given by the energy conservation law. Then we turn to a closed system of N interacting particles with symmetries given by angular momentum conservation. We shall discuss similarities and differences in these two models.

Title: "Quantum harmonic oscillator and the Bloch sphere"

Speaker: Professor Irina Markina (UiB)
Title: "The Virasoro group as a complex manifold" (continued)
The speaker will recall the definitions and prove that the group Diff has a CR-structure.

Title: "The Virasoro group as a complex manifold"
Abstract: The main purpose of the talk is to discuss the geometric structure of the group Diff of sense-preserving diffeomorphisms of the unit circle S. It appears that it is an infinite-dimensional CR-manifold in some complex Fréchet space. I shall provide all necessary definitions. The Virasoro group Vir is a central extension of Diff by the real numbers. We will see that the map Vir to Diff/S is a holomorphically trivial principal C*-bundle.

Speaker: Henning Abbedissen Alsaker (Master's student, UiB)
Title: "Multipliers of the Dirichlet space"
Abstract: We define and study the Dirichlet space and some related spaces of analytic functions. We then address the problem of characterizing the multipliers of these spaces. Finally, if time allows, we consider the multipliers as a Banach algebra and state some results and pose some questions in this direction.

Speaker: PhD student Anna Korolko (UiB)
Title: "Generalized Heisenberg Groups"
Abstract: We will discuss two-step nilpotent Lie groups with a natural left-invariant metric and consider some of their geometry. These groups constitute a natural generalization of the Heisenberg group.

Speaker: Georgy Ivanov (Master's student, UiB)
Title: "One-slit dynamics of domains and the norms of a driving term in the Loewner-Kufarev equation"
Abstract: It has been known since 1923 that every single-slit mapping which satisfies certain normalization conditions can be represented as a solution of the Loewner equation with an appropriately chosen driving term, which is a continuous real-valued function. In 1947 Kufarev gave an example showing that the converse is not true, i.e., there exists a continuous driving term which generates a non-slit mapping. He also found a sufficient condition for a driving term to generate a one-slit mapping, namely the boundedness of the driving term's first derivative. The second known sufficient condition was given by Marshall, Rohde and Lind in 2005. They showed that if the Lip(1/2)-norm of the driving term is less than 4, then the Loewner equation generates a slit map. We construct a family of examples of non-slit solutions which includes Kufarev's example as a trivial case. This family contains examples where the Lip(1/2)-norms are arbitrarily large, as well as examples where they approach 4 from above arbitrarily closely.

Speaker: Postdoc Pavel Gumenyuk (UiB)
Title: "Geometry behind Loewner chains"
Abstract: This talk is a continuation of the previous seminar held by Prof. Santiago Díaz-Madrigal on Tuesday last week. Recent results will be presented on the admissible geometry for Loewner chains of chordal type, in the most general case as well as in the special cases considered by V. V. Goryainov and I. Ba (1992) and by R. O. Bauer (2005). These results were achieved in collaboration with Prof. Manuel D. Contreras and Prof. Santiago Díaz-Madrigal of the University of Seville, Spain.

Speaker: Dr.
Yu-Lin Lin (Institute of Mathematics, Academia Sinica, Taipei, Taiwan)
Title: "Large-time rescaling behaviors of the Hele-Shaw problem driven by injection"
Abstract: This talk addresses the large-time rescaling behavior of Hele-Shaw cells for large initial domains. The Polubarinova-Galin equation is the reformulation of zero-surface-tension Hele-Shaw flows with injection at the origin in two dimensions, obtained by considering the moving domain $\Omega(t)=f(B_{1}(0),t)$ for some Riemann mapping f(z,t). We give a sharp large-time rescaling behavior of global strong polynomial solutions to this equation and of the corresponding moving boundary in terms of the invariant complex moments. Furthermore, by proving a perturbation theorem for polynomial solutions, we also show that a small perturbation of the initial function of a global strong polynomial solution gives rise to a global strong solution, and a large-time rescaling behavior of the moving domain is shown as well.

Speaker: Professor Santiago Díaz-Madrigal (joint work with Professor Manuel Contreras), University of Seville
Title: "Generalized Loewner theory in the unit disk"
Abstract: We introduce a general version of the notion of Loewner chains and Loewner differential equations which extends and unifies the classical cases of the radial and chordal variants of the Loewner differential equation as well as the theory of semigroups of analytic functions. In this very general setting, we establish a deep correspondence between these chains and the weak solutions of some specific non-autonomous differential equations. Among other things, we show that, up to a Riemann map, such a correspondence is one-to-one. In a similar way as in classical Loewner theory, we prove that these chains are also solutions of a certain partial differential equation which resembles (and includes as a very particular case) the classical Loewner-Kufarev PDE.

Speaker: PhD student Mauricio Godoy Molina (UiB)
Title: "Looking for (sR)geodesics and (sR)Laplacians on spheres"
Abstract: In this talk I will present some of our attempts at finding "convenient" distributions on odd-dimensional spheres, and some consequences of their existence. Our primary goals are describing the sub-Riemannian geodesics and the intrinsic sub-Riemannian Laplacian induced by these distributions. A more important goal (but considerably harder) is finding the sub-Riemannian heat kernel, which would eventually lead to a closed expression for the associated Carnot-Carathéodory distance. This last part promises to be sketchy and incomplete, but motivational.

Title: "Quasiconformal mappings on the Heisenberg group" (continuation)
Abstract: In the first part of the talk we showed how the one-dimensional Heisenberg group appears in the Bruhat decomposition of the group SU(1,2). The second part will be devoted to the definitions of contact and quasiconformal mappings on the Heisenberg group. After formulating some properties of quasiconformal mappings, we prove the existence of a flow of contact maps on the Heisenberg group.

Title: "Quasiconformal mappings on the Heisenberg group"
Abstract: In the first part of the talk we show how the one-dimensional Heisenberg group appears in the Bruhat decomposition of the group SU(1,2). The second part will be devoted to contact and quasiconformal mappings on the Heisenberg group. After formulating some properties of quasiconformal mappings, we prove the existence of a flow of contact maps on the Heisenberg group.
The 2nd talk in the mini-course "Quantum underdamped dissipative harmonic oscillator"
Title: "Quantum underdamped dissipative harmonic oscillator"
Abstract: We give some basics of quantum mechanics, arriving at the classical and quantum harmonic oscillator. We shall analyze the simplest example of a mixed divergence-curl system, i.e., an underdamped dissipative harmonic oscillator, and present its first quantization using the complex form of the Hamiltonian.

Speaker: Master's student Elena Belyaeva (UiB)
Title: "Nash equilibrium in games with ordered outcomes"
This work is devoted to a special class of games studied in game theory. The subject matter of this theory is situations in which several sides participate, each side pursuing its own goal. The result, or final state of the situation, is determined by the joint actions of all sides. Such situations are called games. Game theory explores the possibilities of the colliding sides and attempts to define, for every player, a strategy such that the result of the whole game is best in a certain sense, called a principle of optimality (we consider the Nash principle of optimality). The main aim of this work is to find criteria for the existence of Nash equilibrium situations in the mixed extension of a game with ordered outcomes. In Part I we establish a connection between Nash equilibrium situations and balanced submatrices of the payoff function's matrix. In Part II we find necessary and sufficient conditions for a balanced matrix. The appendix contains a program for finding Nash equilibrium situations in an arbitrary finite two-player game with ordered outcomes.

Speaker: Master's student Ksenia Lavrichenko (UiB)
Title: "Investigation of phase portraits of three-dimensional models of gene networks"
Motivation: Prediction of the regimes of molecular-genetic system functioning from the structural and functional organization of the system is one of the key problems in the fields of bioinformatics studying gene network functioning. To address this problem, it is necessary to perform theoretical studies of the functioning of gene networks' regulatory contours and to reveal their general regularities, which determine the presence or absence of the ability to support stationary, cyclic, or other, more complex regimes of functioning.
Results: The presence and stability of limit cycles and stationary points of small amplitude resulting from the Andronov–Hopf bifurcation were studied in a system of ordinary differential equations which describes the behavior of a three-dimensional hypothetical gene regulatory network.

Speaker: PhD student Erlend Grong, University of Bergen, Norway
Title: "Sub-Riemannian and sub-Lorentzian geometry on SU(1,1) and its universal covering"
Abstract: We discuss the example of SU(1,1) with the pseudometric induced by the Killing form. Choosing different types of distributions, we get a sub-Riemannian and a sub-Lorentzian manifold. We also lift these structures to the universal cover CSU(1,1). In the sub-Riemannian case, we find the distance function and describe the number of geodesics on SU(1,1) and CSU(1,1) completely. This example is important because, unlike the Heisenberg group, the cut and conjugate loci do not coincide. Furthermore, we describe the sub-Lorentzian geodesics and compare them to the Lorentzian ones. This example is important because CSU(1,1) with the induced Lorentzian metric is isometric to the anti-de Sitter space (or the universal cover of it, depending on how you define it).
Speaker: Professor David Shoikhet, Department of Mathematics, ORT Braude College, Karmiel, Israel
Title: "A flower structure of backward flow invariant domains"
Abstract: We study conditions which ensure the existence of backward-flow invariant domains for semigroups of holomorphic self-mappings of a simply connected domain $D$. More precisely, the problem is the following. Given a one-parameter semigroup $S$ on $D$, find a simply connected subset $\Omega\subset D$ such that each element of $S$ is an automorphism of $\Omega$; in other words, such that $S$ forms a one-parameter group on $\Omega$.

Speaker: Fátima Silva Leite, Department of Mathematics and Institute of Systems and Robotics, University of Coimbra, Portugal
Title: "The geometry of rolling maps"
Abstract: Rolling maps describe how one smooth manifold rolls on another, without twist or slip. We will focus on the geometry of rolling a Riemannian manifold on its affine tangent space at a point. Both manifolds are considered to be equipped with the metric induced by the Euclidean metric of some embedding space. The kinematic equations of a rolling motion can be described by a control system with constraints on velocities, evolving on a subgroup of the Euclidean group of rigid motions, describing simultaneously rotations and translations in space. Choosing the controls is equivalent to choosing one of the curves along which the two manifolds touch. Issues like controllability and optimal control of rolling motions will be addressed and illustrated for the best studied of these nonholonomic mechanical systems, the rolling sphere. Other interesting geometric features of rolling motions will be highlighted.

Speaker: Alexander Vasiliev, University of Bergen, Norway
Title: "Quantization of dissipative systems and complex Hamiltonians"
Abstract: We start with the classical notion of first quantization and give the Dirac scheme using ladder operators. Then we suggest a general approach to the quantization of dissipative systems, in which the imaginary part of the complex Hamiltonian plays the role of entropy. The damped harmonic oscillator is considered as a typical example.

Speaker: Irina Markina, University of Bergen, Norway
Title: "Why is sub-Riemannian geometry applicable?"
Abstract: A sub-Riemannian geometry of the 3D sphere can be defined by means of the Hopf fibration. We will give all necessary definitions and describe a sub-Riemannian structure on the 3D sphere using the Hopf map and the Ehresmann connection. Then we describe states and state vectors of two-level quantum systems (qubits) and show how they lead to the Hopf map. At the end, we discuss adiabatic transport of the state vectors over curves in the Bloch sphere that are sub-Riemannian geodesics in geometric language.

Speaker: Arne Stray, University of Bergen, Norway
Title: "Extremal solutions to the Nevanlinna-Pick problem"

Speaker: Henrik Kalisch, University of Bergen, Norway
Title: "Non-existence of solitary water waves in three dimensions"
Abstract: This talk will be about a paper of Walter Craig concerning the non-existence of localized solitary-wave solutions in three dimensions. References: MR1949966 (2003m:76011) Craig, Walter. Non-existence of solitary water waves in three dimensions. Recent developments in the mathematical theory of water waves (Oberwolfach, 2001). R. Soc. Lond. Philos. Trans. Ser. A Math. Phys. Eng. Sci. 360 (2002), no. 1799, 2127-2135. (Reviewer: Nikolay G.
Kuznetsov) 76B03 (35J65 35Q51 35Q53 76B15 76B25)

Speaker: Mauricio Godoy Molina, University of Bergen, Norway
Title: "Sub-Riemannian geodesics of odd-dimensional spheres"
Abstract: In this short talk, two interesting results will be presented: one concerning normal sub-Riemannian geodesics when the manifold is a principal G-bundle (for a suitable G), and the other concerning the construction of Popp's measure for odd-dimensional spheres. The first theorem will be applied to determine all possible normal (and thus all) sub-Riemannian geodesics when G=S1 and G=S3, and the second one will be applied to determining the intrinsic hypoelliptic Laplacian for S7, when the horizontal distribution has rank 6.

Tuesday, December 9, 2008, Aud. Pi, 14:15
Speaker: Roland Friedrich (Max-Planck-Institut für Mathematik, Bonn, Germany)
"Aspects of the Global Geometry underlying Stochastic Loewner Evolutions"

Tuesday, December 2, 2008, room 526, 14:15
Speaker: Pavel Gumenyuk (UiB)
"Loewner chains in the unit disk"

Tuesday, November 18, 2008, room 526, 14:15
Speaker: Mauricio Godoy (UiB)
"Global Sub-Riemannian Geometry of Spheres"

Speaker: Anna Korolko (UiB)
"Sub-semi-Riemannian geometry"

Tuesday, October 28 and November 4, 2008, room 526, 14:15
Speaker: Erlend Grong (UiB)
"Optimal control and geodesics on anti-de Sitter space"

Tuesday, October 21, 2008, room 640, 14:15
Speaker: Dante Kalise (UiB)
"Numerical approximation of an optimal control problem in a strongly damped wave equation"

Speaker: Georgy Ivanov (UiB)
"Martingales with applications to Brownian motion and Walsh series"

Tuesday, September 30 and October 7, 2008, room 526, 14:15
Speaker: Irina Markina (UiB)
"Rashevskii theorem"

Tuesday, September 23, 2008, room 526, 14:15
Speaker: Alexander Vasiliev (UiB)
"Slit-solutions to the Loewner-Kufarev equation"

Tuesday, April 15, 2008, room 534, 15:00
Speaker: Peter A. Clarkson (Kent University, UK)
"Rational solutions of soliton equations"

Tuesday, January 29, 2008, room 534, 14:15
"The pointwise inequalities for Sobolev functions on Carnot groups"
"Virasoro Algebra and Loewner Chains"
"From Hele-Shaw flows to Integrable Systems. Historical Overview"

Tuesday, October 2 and 9, 2007, room 534
"Rotations, unit S^3 sphere, and Hopf fibration"

Joint seminar (Analysis and Image Processing Groups)
Tuesday, September 18, 2007, room 534
Speaker: Dominique Manchon (Blaise Pascal University, France)
"Dendriform algebras and a pre-Lie Magnus type expansion" (joint work with Kurusch Ebrahimi-Fard)

Speaker: Arne Stray (UiB)
"Restrictions of the disc algebra described locally"

Tuesday, September 4, 2007, room 534
Speaker: Pavel Gumenyuk (UiB, Norway; Saratov State University, Russia)
"Siegel disks and basins of attraction"

Tuesday, April 24, 2007, room 526
"On the distortion of the conformal radius under quasiconformal map"

Wednesday, April 18, 2007, aud. "Pi"
Speaker: Semen Nasyrov (Kazan State University, Russia)
"Lavrentiev problem for an airfoil"

Wednesday, March 14, 2007, aud. "Pi"
Speaker: Yurii Semenov (NTNU, Trondheim)
"Complex variables in the water entry problem"

Wednesday, February 7, 2007, aud. "Pi"
Wednesday, February 14, 2007, aud. "Pi" (continuation)
Wednesday, February 21, 2007, aud.
"Pi" (final part) "Virasoro Algebra: Analysis, Geometry, Integrability" Thursday, September 28, 2006, room 510 Thursday, October 19, 2006, room 534 (continuation) "Some interesting examples of Heisenberg-type homogeneous groups" Wednesday, September 13, 2006, Auditorium Pi Joint Seminar of Pure Mathematics Groups Speaker: Rubén Hidalgo (Universidad Técnica Federico Santa María, Valparaíso, Chile) "Extended Schottky groups" Wednesday, September 6, 2006, room 508 "Lower Schwarz-Pick estimates and angular derivatives" Wednesday, August 16, 2006, Auditorium Pi Analysis Seminar and Department's Colloquium Speaker: Dmitri Prokhorov (Saratov State University, Russia) "Dynamical systems and the Loewner equation" Wednesday, May 10, 2006, Auditorium Pi Speaker: J. Milne Anderson (University College London, UK) "Cauchy transform of point masses" Wednesday, April 26, 2006, room 526 Speaker: Yurii Lyubarskii (NTNU, Trondheim) "On decay of holomorphic functions" Wednesday, March 29, 2006, room 526 Title: "About Heisenberg group" (final talk) Title: "About Heisenberg group" (continuation) Wednesday, March 15, 2006, auditorium Pi Joint Analysis seminar and Department's colloquium Speaker: Björn Gustafsson (KTH, Stockholm) Title: "On inverse balayage and potential theoretic skeletons" Wednesday, March 8, 2006, room 526 Title: "About Heisenberg group" Thursday, February 2 and 16, 2006, room 510 Title: "Bosonic strings and subordination evolution" Thursday, December 8, 2005, room 526 Title: "A problem about harmonic functions" Thursday, October 13, 2005, room 526 Speaker: Alexander Vasil'ev (UiB) Title: "Modeling 2-D flows in Hele-Shaw cells by conformal maps" Speaker: Giuseppe Coclite (CMA Oslo and University of Bari, Italy) Title: "Global Weak Solutions to a Generalized Hyperelastic-Rod Wave Equation" Erlend Grong [email protected]+47 55 58 28 38 Universitetet i Bergen Matematisk institutt Realfagbygget, Allégt. 41 View campus map
State the principle of moments: It states that if an object is in equilibrium under the action of a number of forces acting on it, then the sum of the moments of all the forces about any point is zero, i.e. clockwise moment = anticlockwise moment.

Consider a rod pivoted at a point O such that it is in equilibrium under the action of forces f1, f2, f3, f4, with f1 and f2 acting on one side of O at distances OA and OB, and f3 and f4 on the other side at distances OC and OD. Then, at equilibrium,
$$f_1\cdot OA + f_2\cdot OB - f_3\cdot OC - f_4\cdot OD = 0$$
$${\rm or,}\quad f_1\cdot OA + f_2\cdot OB = f_3\cdot OC + f_4\cdot OD$$
Or, anticlockwise moment = clockwise moment.

Couple and moment of a couple: A couple is a pair of forces, equal in magnitude, oppositely directed, and separated by a perpendicular distance. The product of a force and its perpendicular distance from the axis of rotation is called the moment of the force, also called torque. So,
Moment of force = F × r
The SI unit of the moment of force is N m. Dimensions: $\left[ {{\rm{M}}{{\rm{L}}^2}{{\rm{T}}^{ - 2}}} \right]$
The torque on a body is measured by the product of the force applied and the torque arm (r), i.e., torque (τ) = force (F) × torque arm (r).

Torque due to a couple: Two equal, unlike, parallel forces acting on a body at different points constitute a couple. The moment of the couple is called torque. Hence, the torque due to a couple is Γ = force × couple arm = F × l.

Centre of mass (C.M.): The centre of mass is the point of an object at which an applied force produces acceleration without rotation. Generally the centre of mass lies within the object, but in some objects no mass lies at the centre of mass. In a ring or a hollow sphere, no mass lies at the centre of mass because the centre of mass of such objects lies at the geometric centre, which is empty. Generally, the centre of gravity and the centre of mass coincide.

If two objects rotate about their centre of mass, then we have $m_1x_1 = m_2x_2$. For a rigid body regarded as a system of particles, the coordinates of the centre of mass are
$$X = \frac{m_1x_1 + m_2x_2 + \ldots + m_nx_n}{m_1 + m_2 + \ldots + m_n} = \frac{\sum mx}{\sum m} = \frac{\sum mx}{M}
\qquad\text{and}\qquad
Y = \frac{\sum my}{M}$$

Centre of mass of two bodies: Suppose two bodies of masses m1 and m2 are connected by a rod, and let their centre of mass be C, at distances x1 and x2 from the respective masses. Then
$$m_1x_1 = m_2x_2, \quad\text{or}\quad x_1 = x_2\frac{m_2}{m_1}, \quad x_2 = x_1\frac{m_1}{m_2}$$
So the greater the mass of a body, the nearer the centre of mass lies to it.

Lami's theorem: If a body is in equilibrium under the action of three forces, then each of the forces is proportional to the sine of the angle between the other two forces. Consider three forces P, Q, R acting on a particle A such that the particle is in equilibrium. If $\alpha, \beta, \gamma$ are the angles between the three forces as shown in the figure, then from Lami's theorem, mathematically, we have
$$\frac{P}{\sin\alpha} = \frac{Q}{\sin\beta} = \frac{R}{\sin\gamma}$$
if the body is in equilibrium.
Proof: If the three forces in equilibrium are represented in magnitude and direction by the sides AB, BC, CA of a triangle taken in order, then the interior angles of the triangle are $180^\circ-\alpha$, $180^\circ-\beta$, $180^\circ-\gamma$, and the sine rule gives
$$\frac{AB}{\sin(180^\circ-\alpha)} = \frac{BC}{\sin(180^\circ-\beta)} = \frac{CA}{\sin(180^\circ-\gamma)}$$
Since $\sin(180^\circ-\theta)=\sin\theta$, this yields
$$\frac{P}{\sin\alpha} = \frac{Q}{\sin\beta} = \frac{R}{\sin\gamma}$$
Hence Lami's theorem is proved.

The conditions under which a rigid body remains in equilibrium under the action of a set of coplanar forces: Generally, the centre of gravity of the object should lie as low as possible, so that the object is more stable. Similarly, the area of the base should be large, and the vertical line passing through the C.G. should fall within the area of the base. Under these three conditions the object is stable.
A rigid body will be in equilibrium if the following two conditions are met:
1. The vector sum of the forces acting on the body must be zero.
2. The net torque acting on the body must be zero.

Forces acting in a single plane, that is, in the same plane, are called coplanar forces. If only two forces act through a point, they must be coplanar. However, two non-parallel forces that do not act through a common point cannot be coplanar. Three or more non-parallel forces acting through a point may not be coplanar.

Coplanar forces: If the lines of action of all the forces lie in one plane, these forces are called coplanar. If the sum of two forces is equal and opposite to the third force, then their resultant is zero, but all the forces must be coplanar. Let $\vec P$, $\vec Q$ and $\vec R$ be three coplanar forces such that $\vec P + \vec Q = -\vec R$; then their resultant, or sum, is zero, i.e. $\vec P + \vec Q + \vec R = 0$.

It is easier to open or close a door by pulling from a point nearer its outermost edge than by pulling nearer the hinge: the turning effect of a force is its moment, the product of the force and its perpendicular distance from the axis of rotation (the hinge). The farther from the hinge the force is applied, the smaller the force needed for the same moment.

It is easier to stand on two legs than on one leg: a body is stable if its base area is large and the C.G. of the body lies above the base area, as low as possible. The base area of two legs on the ground is much larger than that of one leg.

Two unequal coplanar forces acting together cannot produce a condition of equilibrium: for a body to be in equilibrium under two forces, the forces must be equal in magnitude, opposite in direction, and act along the same line.

It is easier to hold a load with the arm folded than outstretched: the weight of the load effectively acts at its centre of mass, so with the arm folded this point is nearer the shoulder joint (the axis of rotation), the moment of the load about the joint is smaller, and less muscular effort is required.
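As a quick numeric illustration of the two formulas above (the balance of moments and the centre-of-mass relation m1x1 = m2x2), here is a small sketch; the force and mass values are made up for the example.

```python
def net_moment(forces):
    """forces: (F, r) pairs, with r the signed perpendicular arm from the
    pivot; positive arms give anticlockwise moments, negative clockwise."""
    return sum(F * r for F, r in forces)

def centre_of_mass(masses):
    """masses: (m, x) pairs along a line; returns X = sum(m*x) / sum(m)."""
    total = sum(m for m, _ in masses)
    return sum(m * x for m, x in masses) / total

# A 2 N force 0.3 m on one side of the pivot balances a 3 N force 0.2 m
# on the other side: net moment is zero, so the rod is in equilibrium.
print(net_moment([(2.0, +0.3), (3.0, -0.2)]))    # 0.0

# Masses 1 kg at x = 0 and 3 kg at x = 2: the C.M. lies nearer the larger
# mass, and m1*x1 = 1*1.5 equals m2*x2 = 3*0.5.
print(centre_of_mass([(1.0, 0.0), (3.0, 2.0)]))  # 1.5
```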
Derivation of the Curl formula in Cartesian coordinates. By calculating the circulation per area of a vector field $$F(x,y,z) = F_x(x,y,z)\vec{x} + F_y(x,y,z)\vec{y} + F_z(x,y,z)\vec{z}$$ in a small rectangle around $(x_0, y_0, z_0)$ on the $xy$ plane, it can be shown the limit as the sides of the rectangle approach zero is $$\left(\frac{\partial F_y(x_0, y_0, z_0)}{\partial x} - \frac{\partial F_x(x_0, y_0, z_0)}{\partial y}\right)$$ The same calculation however is not that straightforward if the rectangle does not lie in the $xy$, $yz$, or $xz$ planes. Now if $\vec{n}$ is the normal of the plane, I thought that by performing a change of basis such that $\vec{n} \rightarrow \vec{z'} $ and by following the previous calculations we could show that the limit of the circulation per area is $$ \left(\frac{\partial F_{y'}(x'_0, y'_0, z'_0)}{\partial x'} - \frac{\partial F_{x'}(x'_0, y'_0, z'_0)}{\partial y'}\right) $$ This is also the inner product of the curl of the vector field and the normal $\vec{n}$. As such the two should be equal: $$\left(\frac{\partial F_{y'}(x'_0, y'_0, z'_0)}{\partial x'} - \frac{\partial F_{x'}(x'_0, y'_0, z'_0)}{\partial y'}\right) = \\ \left[\left(\frac{\partial F_z(x_0, y_0, z_0)}{\partial y} - \frac{\partial F_y(x_0, y_0, z_0)}{\partial z} \right)\vec{x} + \left(\frac{\partial F_x(x_0, y_0, z_0)}{\partial z} - \frac{\partial F_z(x_0, y_0, z_0)}{\partial x} \right)\vec{y} + \left(\frac{\partial F_y(x_0, y_0, z_0)}{\partial x} - \frac{\partial F_x(x_0, y_0, z_0)}{\partial y} \right)\vec{z}\right] \cdot \vec{n} $$ I've been trying to prove the above equality for some time without success; specifically, I am not sure how to handle the transformations correctly. Any help with this is much appreciated!

Tags: linear-algebra, multivariable-calculus, derivatives, differential-topology. Asked by Veritas.

Answer (Han de Bruijn): A simpler approach is via integral theorems. As stated in the question, the special cases for a rectangle in the $xy$ , $yz$ , $zx$ planes are well understood. According to Green's theorem : $$ \begin{cases} \iint_{xy} \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) dx\, dy = \oint_{xy} \left( F_x\, dx + F_y\, dy \right) \\ \iint_{yz} \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) dy\, dz = \oint_{yz} \left( F_y\, dy + F_z\, dz \right) \\ \iint_{zx} \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) dz\, dx = \oint_{zx} \left( F_z\, dz + F_x\, dx \right) \end{cases} $$ But instead of rectangles, we take half rectangles, or better: the triangles $OAB$ , $OBC$ , $OAC$ respectively (figure omitted). Thanks to Green's theorem we can replace area integrals by line integrals; mind that they are counter-clockwise.
Then it is clear that, irrespective of any further content: $$ \oint_{OAB} + \oint_{OBC} + \oint_{OAC} + \oint_{ABC} = 0 $$ Assuming that the operator rot(ation) is not defined yet in general, this means that we now have an expression for it: $$ 2 \iint_{ABC} \vec{\operatorname{rot}}(\vec{F}) \cdot \vec{n}\, dA = \\ - \iint_{xy} \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) dx\, dy - \iint_{yz} \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) dy\, dz - \iint_{zx} \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) dz\, dx $$ Continuing with infinitesimal volumes / areas and flipping normals on the right hand side, so that they become the components of the normal at the left hand side: $$ \vec{\operatorname{rot}}(\vec{F}) \cdot \vec{n}\, \Delta A = \\ \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\cdot n_x\, \Delta A + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right)\cdot n_y\, \Delta A + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\cdot n_z\, \Delta A $$ Leaving out the infinitesimal area $\,\Delta A\,$ gives us the same answer as found by the OP themselves. A somewhat neater approach is to calculate mean values and let the area of the (red) triangle go to zero: $$ \vec{\operatorname{rot}}(\vec{F}) \cdot \vec{n} = \lim_{ABC \to 0} \frac{\iint_{ABC} \vec{\operatorname{rot}}(\vec{F}) \cdot \vec{n}\, dA}{\iint_{ABC} dA} $$ Note. I've encountered essentially the same method at several places elsewhere in physics (I think it's with stress and strain). Anyway, a related subject is: What does shear mean? (Han de Bruijn)

Comment (Veritas, Sep 29 '16 at 21:37): Great answer! This is the same approach taken here file.scirp.org/pdf/APM20120100008_94595561.pdf no? I decided to go with rectangle loops because it is possible to rigorously prove Stokes' theorem using Riemann sums if we prove the density limit for rectangle loops. The problem with triangles for me is that there is a fundamental restriction on the triangle forms for each plane and it's harder to prove that the surface can be appropriately triangulated.

Comment (Han de Bruijn, Sep 30 '16 at 13:46): Yes, I think that publication reflects some essentials of the method. But what physicists usually do is: simply delete the volume integral signs by an "infinitesimal" argument, as is shown in my last step. Being a physicist by education myself, I find this sufficient rigor already; mathematicians may have a different opinion about it. Though it employs triangles, I don't think my argument has anything to do with "triangulation".

Another answer: We start by applying a rotation around the x and y axes $$ \left(\begin{array}{c} x' \\ y' \\ z' \end{array}\right) = \left(\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i\end{array}\right) \cdot \left(\begin{array}{c} x \\ y \\ z\end{array}\right) $$ This rotates the surface so that its normal at the required point points upwards.
This means, $$\left(\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right) = \left(\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i\end{array}\right) \cdot \vec{n} $$ and by using the inversion property of rotation matrices, $$ \vec{n} = \left(\begin{array}{ccc} a & d & g \\ b & e & h \\ c & f & i\end{array}\right) \cdot \left(\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right) = \left(\begin{array}{c} g \\ h \\ i \end{array}\right) $$ Notice that the normal $\vec{n}$ is the last row of our rotation matrix. Since the first and second row are also unit vectors orthogonal to $\vec{n}$, $$ n = \left|\begin{array}{ccc} \vec{i} & \vec{j} & \vec{k} \\ a & b & c \\ d & e & f \end{array}\right| \\ n_x = \left|\begin{array}{cc}b & c \\ e & f \end{array}\right|, n_y = \left|\begin{array}{cc}d & f \\ a & c \end{array}\right|, n_z = \left|\begin{array}{cc}a & b \\ d & e \end{array}\right|$$ We also want to rotate our vector field appropriately: $$ \left(\begin{array}{c} F_{x'}(P') \\ F_{y'}(P') \\ F_{z'}(P') \end{array}\right) = \left(\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i\end{array}\right) \cdot \left(\begin{array}{c} F_x(P) \\ F_y(P) \\ F_z(P)\end{array}\right) $$ Finally, $$\begin{gather}\frac{\partial F_{y'}}{\partial x'} - \frac{\partial F_{x'}}{\partial y'} \end{gather} = \\ \left|\begin{array}{cc} \frac{\partial}{\partial x'} & \frac{\partial}{\partial y'} \\ F_{x'} & F_{y'} \end{array}\right| = \\ \left|\begin{array}{ccc} a\frac{\partial}{\partial x} + b\frac{\partial}{\partial y} + c\frac{\partial}{\partial z} & d\frac{\partial}{\partial x} + e\frac{\partial}{\partial y} + f\frac{\partial}{\partial z} \\ aF_{x} + bF_{y} + cF_{z} & dF_{x} + eF_{y} + fF_{z} \end{array}\right| = \\ \left| \begin{array}{cc} a\frac{\partial}{\partial x} & e\frac{\partial}{\partial y} \\ aF_x & eF_y\end{array}\right| + \left| \begin{array}{cc} a\frac{\partial}{\partial x} & f\frac{\partial}{\partial z} \\ aF_x & fF_z\end{array}\right| + \left| \begin{array}{cc} b\frac{\partial}{\partial y} & d\frac{\partial}{\partial x} \\ bF_y & dF_x\end{array}\right| + \\ \left| \begin{array}{cc} b\frac{\partial}{\partial y} & f\frac{\partial}{\partial z} \\ bF_y & fF_z\end{array}\right| + \left| \begin{array}{cc} c\frac{\partial}{\partial z} & d\frac{\partial}{\partial x} \\ cF_z & dF_x\end{array}\right| + \left| \begin{array}{cc} c\frac{\partial}{\partial z} & e\frac{\partial}{\partial y} \\ cF_z & eF_y\end{array}\right| = \\ (bf-ce)\left| \begin{array}{cc} \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_y & F_z \end{array}\right| + (af-cd)\left| \begin{array}{cc} \frac{\partial}{\partial x} & \frac{\partial}{\partial z} \\ F_x & F_z \end{array}\right| + (ae-db)\left| \begin{array}{cc} \frac{\partial}{\partial x} & \frac{\partial}{\partial y} \\ F_x & F_y \end{array}\right| = \\ \left|\begin{array}{cc}b & c \\ e & f \end{array}\right|\left| \begin{array}{cc} \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_y & F_z \end{array}\right| - \left|\begin{array}{cc}d & f \\ a & c \end{array}\right|\left| \begin{array}{cc} \frac{\partial}{\partial x} & \frac{\partial}{\partial z} \\ F_x & F_z \end{array}\right| + \left|\begin{array}{cc}a & b \\ d & e \end{array}\right|\left| \begin{array}{cc} \frac{\partial}{\partial x} & \frac{\partial}{\partial y} \\ F_x & F_y \end{array}\right| = \\ n_x\left| \begin{array}{cc} \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_y & F_z \end{array}\right| - n_y\left| \begin{array}{cc} \frac{\partial}{\partial x} & \frac{\partial}{\partial z} \\ F_x & 
F_z \end{array}\right| + n_z\left| \begin{array}{cc} \frac{\partial}{\partial x} & \frac{\partial}{\partial y} \\ F_x & F_y \end{array}\right| = \\ \left| \begin{array}{ccc} \vec{i} & \vec{j} & \vec{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_x & F_y & F_z \end{array}\right| \cdot \vec{n}$$
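As a sanity check of the identity just derived (my own addition, not part of the original thread), one can verify it symbolically for a concrete rotation and an arbitrary test field; the field F below is chosen arbitrarily.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x*y*z, sp.sin(x) + z**2, x + y**2])     # arbitrary test field

# A concrete rotation: 30 degrees about z, then 40 degrees about x.
t1, t2 = sp.rad(30), sp.rad(40)
Rz = sp.Matrix([[sp.cos(t1), -sp.sin(t1), 0],
                [sp.sin(t1),  sp.cos(t1), 0],
                [0, 0, 1]])
Rx = sp.Matrix([[1, 0, 0],
                [0, sp.cos(t2), -sp.sin(t2)],
                [0, sp.sin(t2),  sp.cos(t2)]])
R = Rx * Rz
n = R.row(2).T                                         # normal = last row of R

# Primed coordinates and the rotated field F'(x') = R F(R^T x').
xp = sp.Matrix(sp.symbols('u v w'))
Fp = (R * F).subs(list(zip((x, y, z), R.T * xp)))

lhs = sp.diff(Fp[1], xp[0]) - sp.diff(Fp[0], xp[1])    # dF'_y/dx' - dF'_x/dy'
lhs = lhs.subs(list(zip(xp, R * sp.Matrix([x, y, z]))))
curl = sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                  sp.diff(F[0], z) - sp.diff(F[2], x),
                  sp.diff(F[1], x) - sp.diff(F[0], y)])
print(sp.simplify(lhs - curl.dot(n)))                  # expected: 0
```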
Logical calculus A formalization of a meaningful logical theory. The derivable objects of a logical calculus are interpreted as statements, formed from the simplest ones (generally speaking, having subject-predicate structure) by means of propositional connectives and quantifiers. The most frequently used connectives are "not", "and", "or", "if …, then …", and the existential and universal quantifiers. Logical calculi are distinguished from arbitrary calculi (cf. Calculus) by the purely logical character of interpretations and derivation rules, and from logico-mathematical calculi (cf. Logico-mathematical calculus) by the absence in the language of symbols for specific mathematical predicates and functions (except for the symbol "=", the addition of which is interpreted as the introduction of equality and is usually supposed not to violate the logical character of the calculus). These differences have a relative character, since logical calculi remain pure formal systems, and the semantics of any possible interpretation of them must be regarded as something external, having heuristic but not conclusive value in the study of properties of the calculus. One of the most important logical calculi is the classical predicate calculus with function symbols. The language of this calculus, apart from parentheses and the logical symbols $\neg$, $\&$, $\lor$, $\supset$, $\exists$, $\forall$, contains three potentially infinite lists: lists of object variables, predicate variables and function variables. (Each of the predicate and function variables is endowed with information about its dimension, where for predicate variables the least dimension is 1 and for function variables the least dimension is 0.) Terms are defined as follows: 1) any object variable and any function variable of dimension 0 is a term; 2) if $T_1,\ldots,T_l$ are terms and $f$ is a function variable of dimension $l$, then $f(T_1,\ldots,T_l)$ is also a term. If $T_1,\ldots,T_k$ are terms and $P$ is a predicate variable of dimension $k$, then $P(T_1,\ldots,T_k)$ is called an atomic formula. Formulas are defined as follows: 1) any atomic formula is a formula; 2) if $F$ and $G$ are formulas and $x$ is an object variable, then the expressions $$\neg F,\quad(F\mathbin{\&}G),\quad(F\lor G),\quad(F\supset G),\quad\exists xF,\quad\forall xF$$ are also formulas. In the last two formulas all occurrences of the variable $x$ are said to be bound; occurrences of variables that are not associated with quantifiers in the process of constructing a formula are called free. A term $T$ is said to be free for $x$ in $F$ if no free occurrence of $x$ in $F$ is in a subformula of the form $\exists yG$ or $\forall yG$, where $y$ is one of the variables that occurs in $T$; $[F]_T^x$ denotes the result of substituting $T$ for all free occurrences of $x$ in $F$. Let $x$ be an arbitrary object variable, let $A,B,C,D$ be arbitrary formulas, where $D$ does not contain $x$ freely, and let $T$ be an arbitrary term, free for $x$ in $A$. 
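The definitions of terms, formulas, freedom, and substitution translate directly into code. Below is a compact sketch (the representation and all names are my own, not from the article) with terms and formulas as nested tuples, the "free for" test, and the substitution $[F]_T^x$.

```python
# Terms:    ('var', x)  or  ('fun', f, [terms])      (dimension = len(args))
# Formulas: ('pred', P, [terms]), ('not', F), ('and'|'or'|'imp', F, G),
#           ('forall'|'exists', x, F)

def term_vars(t):
    return {t[1]} if t[0] == 'var' else {v for s in t[2] for v in term_vars(s)}

def free_vars(F):
    tag = F[0]
    if tag == 'pred':
        return {v for t in F[2] for v in term_vars(t)}
    if tag == 'not':
        return free_vars(F[1])
    if tag in ('and', 'or', 'imp'):
        return free_vars(F[1]) | free_vars(F[2])
    return free_vars(F[2]) - {F[1]}                  # quantifier binds F[1]

def free_for(T, x, F):
    """True if the term T is free for x in F: no free occurrence of x in F
    lies inside a subformula quantifying over a variable of T."""
    tag = F[0]
    if tag == 'pred':
        return True
    if tag == 'not':
        return free_for(T, x, F[1])
    if tag in ('and', 'or', 'imp'):
        return free_for(T, x, F[1]) and free_for(T, x, F[2])
    y, G = F[1], F[2]
    if y == x or x not in free_vars(G):              # no free x below here
        return True
    return y not in term_vars(T) and free_for(T, x, G)

def subst(T, x, F):
    """[F]_T^x: substitute T for all free occurrences of x in F."""
    def in_term(t):
        if t[0] == 'var':
            return T if t[1] == x else t
        return ('fun', t[1], [in_term(s) for s in t[2]])
    tag = F[0]
    if tag == 'pred':
        return ('pred', F[1], [in_term(t) for t in F[2]])
    if tag == 'not':
        return ('not', subst(T, x, F[1]))
    if tag in ('and', 'or', 'imp'):
        return (tag, subst(T, x, F[1]), subst(T, x, F[2]))
    return F if F[1] == x else (tag, F[1], subst(T, x, F[2]))

# f(y) is not free for x in  exists y P(x, y)  -- y would be captured:
F = ('exists', 'y', ('pred', 'P', [('var', 'x'), ('var', 'y')]))
print(free_for(('fun', 'f', [('var', 'y')]), 'x', F))   # False
print(subst(('var', 'z'), 'x', F))                      # z is free for x here
```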
The axioms of the calculus in question are all formulas of the following 10 kinds (each of which is called an axiom scheme): 1) $(A\supset(B\supset A))$, 2) $((A\supset B)\supset((A\supset(B\supset C))\supset(A\supset C)))$, 3) $(A\supset(B\supset(A\mathbin{\&}B)))$, 4a) $((A\mathbin{\&}B)\supset A)$, 4b) $((A\mathbin{\&}B)\supset B)$, 5a) $(A\supset(A\lor B))$, 5b) $(B\supset(A\lor B))$, 6) $((A\supset C)\supset((B\supset C)\supset((A\lor B)\supset C)))$, 7) $((A\supset B)\supset((A\supset\neg B)\supset\neg A))$, 8) $(\neg\neg A\supset A)$, 9) $(\forall xA\supset[A]_T^x)$, 10) $([A]_T^x\supset\exists xA)$. In addition, this calculus has three derivation rules: "from $A$ and $(A\supset B)$ one can obtain $B$"; "from $(D\supset A)$ one can obtain $(D\supset\forall xA)$"; and "from $(A\supset D)$ one can obtain $(\exists xA\supset D)$". Provable formulas (or theorems) of the calculus in question are any formulas that can be obtained from the axioms of the calculus as a result of applying (possibly repeatedly) the given rules (see Derivation, logical).

A basic interpretation of the predicate calculus. The domain of values of object variables is a non-empty set of objects $M$, that of function variables consists of functions from $M^l$ into $M$, and that of predicate variables consists of functions from $M^k$ into $\{0,1\}$ (one of the values is interpreted as "truth", the other as "falsehood"), where the number $k$ corresponds to the dimension of the predicate variable. Now for any atomic formula, fixing the value of the predicate variables in it and the values of the object and function variables that occur in it, one can talk of the truth or falsehood of this formula. Similarly, using truth tables for propositional connectives and the usual interpretation of quantifiers (as infinite conjunctions and disjunctions), one can judge the truth of an arbitrary formula of the language in question for the chosen $M$ and the chosen values of the predicate, function and free object variables that occur in it. A formula is said to be universally valid (generally valid) if it is true for any such choice. Thus, whatever the values of a two-place predicate variable $P$ and a one-place function variable $f$, from the fact that for some $x$ and any $y$ the formula $P(f(x),f(y))$ is true it follows that there is a $z$ for which $P(z,f(z))$ is true. Consequently, the formula $$(\exists x\forall yP(f(x),f(y))\supset\exists zP(z,f(z)))$$ is universally valid. One can prove that a formula is derivable in the calculus thus constructed if and only if it is universally valid (the so-called Gödel completeness theorem). This interpretation relies on rather complicated set-theoretic abstractions and is therefore inadmissible from the point of view of certain philosophies of mathematics and meta-mathematical theories (for example, intuitionism; finitism; constructive mathematics). In the framework of these theories one can obtain a completeness theorem by changing the semantics of the logical calculus.

Numerous logical calculi are obtained by modification of the logical calculus constructed above. Thus, the addition to the language of the symbol "=" together with the schemes 11) $(T=T)$, 12) $((T_1=T_2)\supset([A]_{T_1}^x\supset[A]_{T_2}^x))$ (here $A$ and $T$ are arbitrary, and $T_1$ and $T_2$ are free for $x$ in $A$) leads to the classical predicate calculus with equality. Exclusion from the language of function variables leads to the pure (or narrow or restricted) predicate calculus.
The axiom schemes 1)–8) in conjunction with the first derivation rule give the classical propositional calculus; since the subject-predicate structure of statements cannot be analyzed by the tools of propositional calculi, instead of various types of variables the language of these calculi usually needs only one type: propositional variables, each of which acts as an atomic formula. The rejection of scheme 8) from all the calculi mentioned above leads to minimal logical calculi, and the rejection of schemes 7) and 8) leads to positive logical calculi. Other partial logical calculi are possible, for example those obtained by fixing part of the logical symbols or part of the variables of the language (in combination with a possible reconstruction of the system of axioms) while preserving all classically-derivable formulas consisting of these symbols and variables; these are the implicative propositional calculus (the only symbol is $\supset$), the pure one-place (monadic) predicate calculus (in the language there are only object variables and one-place predicate variables), etc. More meaningful examples of partial logical calculi are the intuitionistic (constructive) calculi, which are obtained from the classical calculi mentioned above by replacing scheme 8) by the scheme $8'$) $(A\supset(\neg A\supset B))$. The names of logical calculi are naturally formed from the terms mentioned; thus, the schemes 1)–7), $8'$), 11), 12) define the intuitionistic propositional calculus with equality. One also considers many-sorted logical calculi (and terms), where the substitution of terms of one kind for those of another is not allowed. In simple cases the domains of values of terms of different kinds are interpreted as different sets of objects (thus, convenient formalizations of plane geometry can be based on logical calculi with object variables of two kinds: "points" and "straight lines"). But one can successively consider first a calculus with a unique domain of objects, then a calculus with an additional domain of objects, namely predicates over the first domain (that is, in the second calculus one admits quantifiers with respect to the predicate variables of the first), etc. Thus there arise higher-order logical calculi (the logical calculi mentioned earlier are of the first order). The tendency to formalize logical theories with a more powerful supply of concepts leads to a number of generalizations of logical calculi. The consideration, together with "truth" and "falsehood", of various degrees of indeterminacy leads to various formalizations of many-valued logics (cf. Many-valued logic) and calculi of partial predicates. The latter are closely related to calculi of logical consequences and the strict implication calculus, which arose as a result of attempts to formalize the common use of the expression "A implies B" by removing the paradoxes of material implication and rejecting its definition in the form of a table. Modal logical calculi serve to formalize the distinction, studied in modal logic, between "necessary", "possible" and "contingent" assertions. Together with the specification of a logical calculus in terms of axiom schemes, one often meets formulations with finitely many specific axioms, but with the addition of various rules of substitution for variables. Reformulations of a logical calculus in the form of a Gentzen formal system are convenient in many questions of proof theory.
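Since the propositional schemes 1)–8) are sound for the truth-table semantics, each of them must be a classical tautology, and this is easy to confirm mechanically. A small check (my own illustration, not from the article):

```python
from itertools import product

def imp(p, q):            # material implication as a truth function
    return (not p) or q

schemes = {
    '1':  lambda A, B, C: imp(A, imp(B, A)),
    '2':  lambda A, B, C: imp(imp(A, B), imp(imp(A, imp(B, C)), imp(A, C))),
    '3':  lambda A, B, C: imp(A, imp(B, A and B)),
    '4a': lambda A, B, C: imp(A and B, A),
    '4b': lambda A, B, C: imp(A and B, B),
    '5a': lambda A, B, C: imp(A, A or B),
    '5b': lambda A, B, C: imp(B, A or B),
    '6':  lambda A, B, C: imp(imp(A, C), imp(imp(B, C), imp(A or B, C))),
    '7':  lambda A, B, C: imp(imp(A, B), imp(imp(A, not B), not A)),
    '8':  lambda A, B, C: imp(not not A, A),
}

# Evaluate every scheme at all 2^3 truth assignments of A, B, C.
for name, scheme in schemes.items():
    assert all(scheme(*v) for v in product((False, True), repeat=3)), name
print("schemes 1)-8) are classical tautologies")
```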
A calculus is an adequate formalization of a theory if derivability of a formula in it is equivalent to the formula's identical truth in the basic interpretation. The truth of all derivable formulas is connected with the consistency (soundness, cf. Sound rule) of the calculus, and the derivability of all true formulas is connected with its completeness. All logical calculi mentioned above are sound, and many of them are complete in one sense or another (see Gödel completeness theorem). An important property of logical calculi is decidability (see Decision problem): almost all propositional calculi that have been constructed are decidable; on the other hand, all the predicate calculi mentioned above (except the monadic one) are undecidable. Nevertheless, there are algorithms for undecidable logical calculi that establish the derivability of every derivable formula but need not terminate on certain underivable formulas.

Logical calculus. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Logical_calculus&oldid=44633. This article was adapted from an original article by S.Yu. Maslov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.
Vol. 7, Issue 2, 2020 (published August 18, 2020 EDT)

Sternal-Wound Infections following Coronary Artery Bypass Graft: Could Implementing Value-Based Purchasing be Beneficial?

Dominique Brandt, Maximilian Blüher, Julie Lankiewicz, Peter J. Mallow, Rhodri Saunders

Keywords: coronary artery bypass graft; LOS; hospital costs; health care costs; SWI

Copyright: CC BY 4.0 • https://doi.org/10.36469/jheor.2020.13687

Citation: Brandt D, Blüher M, Lankiewicz J, Mallow PJ, Saunders R. Sternal-Wound Infections following Coronary Artery Bypass Graft: Could Implementing Value-Based Purchasing be Beneficial? JHEOR. 2020;7(2):130-138. doi:10.36469/jheor.2020.13687

Abstract

Background/Objectives: Sternal-wound infections (SWIs) are rare but consequential healthcare-associated infections following coronary artery bypass graft surgery (CABG). The impact of SWIs on the cost of health care provision is unknown. The aim of this study was to quantify the burden of CABG-related SWIs across countries with mature health care systems and estimate value-based purchasing (VBP) levels based on the local burden.

Methods: A structured literature review identified relevant data for 14 countries (the Netherlands, France, Germany, Australia, the United Kingdom, Canada, Italy, Japan, Spain, the United States, Brazil, Israel, Taiwan, and Thailand). Data, including SWI rates, CABG volume, and length of stay, were used to populate a previously published Markov model that simulates the patient's CABG-care pathway and estimates the economic (US$) and care burden of SWIs for each country. Based on this burden, scenarios for VBP were explored for each country, and a feasible cost of intervention per patient was calculated for an intervention providing a 20% reduction in the SWI rate.

Results: The SWI burden varied considerably between settings, with SWIs occurring in 2.8% (the United Kingdom) to 10.4% (the Netherlands) of CABG procedures, while the costs per SWI varied between US$8172 (Brazil) and US$54 180 (Japan). Additional length of stay after SWI was the largest cost driver. The overall highest annual burden was identified in the United States (US$336 million) at a mean cost of US$36 769 per SWI. Given the SWI burden, the median cost of intervention per patient that a hospital could afford ranged from US$20 (US$13 to US$42) in France to US$111 (US$65 to US$183) in Japan.

Conclusions: SWIs represent a large burden with a median cost of US$13 995 per case and US$900 per CABG procedure. By tackling SWIs, there is potential to simultaneously reduce the burden on health care systems and improve outcomes for patients. Mutually beneficial VBP agreements might be one method to promote uptake of novel methods of SWI prevention.
Healthcare-associated infections (HAIs), which are infections contracted in hospital while in care for another condition, represent a significant clinical and economic burden to hospitals and their patients.1 Collecting data on HAIs is complex, and currently there are no established standards for systematically reporting HAIs, making it difficult to estimate their global burden.1,2 According to the WHO, HAIs affect 5.1% to 19% of hospitalized patients worldwide, and the prevalence and nature of HAIs are closely linked to economic development and quality of care. In the United States, the annual economic burden of HAIs has been estimated to range between US$28 billion and US$45 billion, with HAIs affecting 2 million patients and causing 90 000 deaths yearly.3 Costs arise from prolonged inpatient stays, long-term disability, lost productivity, and death.1,4 Increasingly pertinent are the additional use of antibiotics and the development of antimicrobial resistance, despite increased emphasis on handwashing, sterilization, and terminal cleaning.1,4

Many developed health care systems have started to move away from fee-for-service payment models toward payment-for-performance, in the hope of improving patient care and curbing costs.5–7 In the United States, the Centers for Medicare & Medicaid Services (CMS), which oversees United States federal health care programs, is now denying payment for treatment of certain HAIs.8 Since 2012, CMS has applied a hospital value-based purchasing (VBP) model, under which acute-care hospitals are paid according to their performance.9 Hospitals are compared to benchmarks defined for four domains: clinical care; person and community engagement; safety; and efficiency and cost reduction. Hospitals ranking in the worst-performing quartile are sanctioned through a 1% reduction in payment.10 Furthermore, payments are denied for readmission following certain procedures, such as coronary artery bypass graft surgery (CABG).9,11 Similar measures have also been undertaken by private payers in Australia, where Medibank introduced quality-of-service requirements in 2015 under which payments for additional costs due to HAIs are denied.12 In the United Kingdom, readmissions within 30 days due to a surgical-site infection are not reimbursed.13

Shifting (at least partially) the cost impact from payers to providers may reduce national spending on health care, but it puts additional strain on hospitals and their budgets. Such performance penalties are complex to forecast and challenging to account for in the budget. With a poor performance rating also affecting patient confidence, hospitals are searching for affordable ways to reduce HAIs.14–17 Sternal-wound infections (SWIs), which can occur after cardiac surgery, are a major and well-defined contributor to the burden of HAIs.18 Unlike most other HAIs and surgical-site infections, SWIs are defined relatively consistently across health care systems.
About 0.5% to 8% of patients develop a superficial SWI, involving the pectoralis fascia, the subcutaneous tissue, and the skin.19–21 Superficial SWIs are often easily treatable with topical wound care and antibiotics.22 However, more severe infections, such as mediastinitis or deep SWIs (DSWIs), are associated with high morbidity and mortality.23 According to a 2015 review by Cotogni et al, 0.5% to 6.8% of cardiac surgery patients develop DSWIs, with in-hospital mortality rates ranging from 7% to 35%.20 SWIs come at a great cost, with increased length of stay (LOS), high readmission rates, and reduced patient quality of life, which can fall below presurgical levels.24

CABG is one of the most commonly performed cardiac surgeries. It is a universally accepted, widely reported, and complex surgical procedure, and is thus well suited for comparisons of global SWI rates. CABG is also a suitable setting for exploring the introduction of new interventions that could reduce the burden of SWIs. The introduction of beneficial interventions can, however, be impaired by cost concerns. VBP involves a (generally low) base purchase price and a final price that is calculated from the value the product provides to the purchaser (in this case, hospitals) after extended use. VBP helps to ensure low financial risk to hospitals. Any additional savings derived from the new intervention can be shared between the purchaser and the seller; if no benefit is derived, the seller receives no additional payment beyond the base purchase price.

The focus of this study was to evaluate the global burden of SWIs after CABG across comparable mature health care systems. The premise was to provide hospitals with a foundation for considering strategies to reduce their incidence of SWIs and the corresponding costs under a VBP model.

Methods

This study assessed the burden of SWIs after CABG in countries with mature health care systems, as defined by the 2017 Global Access to Healthcare report of The Economist Intelligence Unit.25 The report identified 15 countries with mature health care systems according to their Global Access to Healthcare Index: the Netherlands, France, Germany, Australia, the United Kingdom, Canada, Cuba, Italy, Japan, Spain, the United States, Brazil, Israel, Taiwan, and Thailand. We defined the burden of SWIs in three ways: (1) additional length of hospital stay, in the intensive care unit (ICU) or general ward (GW); (2) readmissions; and (3) additional cost of hospital care.

To estimate the burden in each country, a Markov model with states representing the CABG-care pathway was used. This Markov model was adapted from a US-specific model published by Saunders and Lankiewicz26 and generalized here for countries with similarly mature health care systems. The analysis proceeded as follows: a patient entered the model at surgery and was then taken to the ICU, where they received mechanical ventilation. During the time in the ICU, the patient recovered sufficiently to be taken off the ventilator, was discharged to the GW, and was then discharged home. At all times, the patient was at risk of developing either a superficial SWI or a DSWI, and of dying.
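To make the structure of the modeled pathway concrete, here is a minimal per-patient simulation sketch in the spirit of the Markov model described above. All state names, transition probabilities, and the per-cycle SWI risk are illustrative placeholders, not the study's country-specific, calibrated inputs.

```python
import random

# Illustrative per-cycle transition probabilities for the CABG-care pathway
# (placeholders; the published model uses country-specific, calibrated values).
P_NEXT = {
    "ICU_ventilated": [("ICU", 0.90), ("dead", 0.02), ("ICU_ventilated", 0.08)],
    "ICU":            [("general_ward", 0.85), ("dead", 0.01), ("ICU", 0.14)],
    "general_ward":   [("home", 0.80), ("dead", 0.005), ("general_ward", 0.195)],
}

P_SWI_PER_CYCLE = 0.004  # placeholder per-cycle risk of an SWI while in care

def simulate_patient(rng: random.Random) -> bool:
    """Run one patient through the pathway; return True if an SWI occurred."""
    state, swi = "ICU_ventilated", False
    while state not in ("home", "dead"):
        if rng.random() < P_SWI_PER_CYCLE:
            swi = True  # in the full model an SWI adds LOS, costs, readmissions
        r, acc = rng.random(), 0.0
        for nxt, p in P_NEXT[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return swi

rng = random.Random(42)
n = 10_000
print(f"crude SWI rate: {sum(simulate_patient(rng) for _ in range(n)) / n:.3%}")
```

This loop shows the per-patient view; the published cohort analysis instead tracks state occupancy over time and attaches country-specific costs to ICU days, GW days, and readmissions.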
The cumulative incidence of SWI over time postsurgery was taken from Lankiewicz et al.27 The cumulative-incidence curve was modeled using a dose–response Hill curve, $$\text{Cumulative SWI} = \alpha + \frac{\theta x^{\eta}}{\kappa^{\eta} + x^{\eta}},$$ found to be most representative of the input data using CurveExpert Pro (Hyams Development, Chattanooga, TN), with $\alpha = 2.96\times10^{-30}$, $\theta = 6.93$, $\eta = 1.08$, and $\kappa = 23.73$. The curve was assumed to have the same shape across countries, but it was scaled up or down by a percentage factor for each country so that it reproduced that country's incidence of SWIs. Specifically, if country-specific data had an SWI incidence of 2.4% at 30 days, and the cumulative-incidence curve gave an incidence of 4.8% at 30 days, then each value derived from the curve would be multiplied by 0.5 (2.4/4.8). Probabilities of progression through the different stages of the modeled pathway were country specific, as were the additional LOS associated with SWIs, the cost of ICU and GW days, and the number of CABG procedures.27 All costs were converted to 2017 US$ using the mid-year market exchange rate.

Model Inputs

To identify country- and hospital-level data to populate the model, a review of the literature was performed for CABG surgeries, SWIs, DSWIs, SWI follow-up, LOS, and daily costs of ICU and GW care. The literature review was performed in PubMed and Google Scholar by DB, MB, and PJM, with extracted country data checked for accuracy by a second author (RS, MB, DB, or PJM). Countries for which at least 5 (of the 10 country-specific) parameters were identified were included in the model. Missing data for the retained countries were imputed using the median and interquartile ranges (IQRs) of values from countries for which data were available. Where more than one study was found for a value, the midpoint was used. The robustness of the model was assessed using probabilistic sensitivity analysis with 52 iterations per country.

VBP

VBP levels were estimated using three assumptions: (1) no purchasing hospital would commit all of the estimated cost burden to preventative measures; (2) available funds would be split over multiple interventions; and (3) any purchase agreement would include a cost of intervention per patient (CIPP) and a share in the savings generated from reduced SWIs. The last item is the VBP, which is paid out only if the intervention meets its stated target. The CIPP and VBP were calculated using the following formulas:

\[ \text{CIPP} = \frac{\left( 1 - \text{RR}_{\text{SWI}}^{\text{INT}} \right) \bullet C^{\text{SWI}} \bullet \text{r}^{\text{SWI}} \bullet \frac{S_{\text{VBP}}^{\text{PUR}}}{100}}{\text{N}_{\text{VBP}}^{\text{INT}}} \]

\[ \text{VBP} = \frac{(\text{r}^{\text{SWI}} \bullet \text{N}^{\text{PT}} \bullet \text{C}^{\text{SWI}} \bullet \left( 1 - \text{RR}_{\text{SWI}}^{\text{INT}} \right)) - (\text{CIPP} \bullet \text{N}_{\text{VBP}}^{\text{INT}} \bullet \text{N}^{\text{PT}})}{\frac{100 \bullet \text{N}_{\text{VBP}}^{\text{INT}}}{\text{S}_{\text{VBP}}^{\text{SAV}}}} \]

where, for the CIPP formula, $\text{RR}_{\text{SWI}}^{\text{INT}}$ is the relative risk of SWI events when using the intervention; $\text{C}^{\text{SWI}}$ is the cost per SWI; $\text{S}_{\text{VBP}}^{\text{PUR}}$ is the share of the SWI cost allocated to purchase of interventions under the VBP scheme; and $\text{N}_{\text{VBP}}^{\text{INT}}$ is the number of interventions being considered.
When calculating the VBP, $\text{r}^{\text{SWI}}$ is the SWI rate at the hospital (inpatient events plus readmissions); $\text{N}^{\text{PT}}$ is the number of patients in the target population; and $\text{S}_{\text{VBP}}^{\text{SAV}}$ is the share of the cost savings committed if an intervention meets its target under the VBP scheme. The following scenarios were considered: (1) 50% of the cost burden is made available for VBP of two interventions, with a 15% savings share on success; and (2) 30% of the cost burden is made available for VBP of two interventions, with a 25% savings share on success. All CIPP and VBP calculations assumed a hospital performing 1000 CABG procedures per year, were performed on the results of the probabilistic sensitivity analysis, and are presented as the median and range.

Results

Required data were identified for France, Germany, the Netherlands, the United Kingdom, and the United States (Figure 1). One parameter was missing for Australia, Canada, Italy, Japan, Spain, and Taiwan; two were missing for Israel and Brazil; and three were missing for Thailand. No data were identified for Cuba. The extracted data showed high variability in the prevalence of CABG procedures between countries: Germany had the highest rate, with 61.4 CABG procedures per 100 000 population, whereas Taiwan had the lowest, with 6.4 per 100 000.28,29 Similarly, the additional LOS associated with superficial SWIs was highly variable, ranging from 2 days in Spain to 49 days in Japan.28,30 The prevalence of DSWIs, which affected LOS and readmission rates, ranged between 3.4% (the Netherlands) and 0.8% (the United Kingdom and Thailand; Figure 2). The data used in the analysis can be found in the Supplementary Material, Table S1.

Figure 1. Data Availability for the Target Countries. Country color indicates the level of data availability, with countries highlighted in black having country-specific data for all 10 parameters. Those in dark gray were missing one, two, or three parameters; Cuba (light gray) had no data identified. Countries in white were not investigated in our analysis.

Figure 2. Prevalence of Superficial and Deep SWIs in Analyzed Countries. Abbreviations: CABG, coronary artery bypass graft surgery; DSWI, deep SWI; SWI, sternal-wound infection; SSWI, superficial SWI. Results for Israel are not shown in the graph, as information on DSWIs was not available; Israel had a total of 3.6% SWIs per CABG procedure.

The model estimated the total burden of CABG-related SWIs in the target countries to be US$557.7 million, with 60% of the burden (US$336.0 million, Table 1) located in the United States. Taiwan had the lowest burden, estimated at US$1.5 million. The cost per SWI was highest in Japan (US$54 180) and lowest in Brazil (US$8172; Table 1). The median cost per SWI across the analyzed countries was US$13 995 (IQR US$8172 to US$23 590). When the total SWI burden was normalized by procedure volume, the burden was highest in Japan (US$2795 per procedure; Table 1) and the United States (US$2113), and lowest in the United Kingdom (US$436) and France (US$441). In Japan and the United States, costs per day for ICU and GW care were much higher than in the other countries in the model.
Table 1. Burden of CABG-Related SWIs by Country with Base Case Parameters

Country | Procedures | SWI Events | SWI Burden, US$ | Mean Cost per SWI, US$ | Mean SWI Cost per CABG, US$
France | 19 280 | 717 | 8 496 817 | 11 845 | 441
Germany | 50 472 | 2836 | 36 876 758 | 13 003 | 731
Italy | 20 930 | 1378 | 18 940 618 | 13 741 | 905
United Kingdom | 16 529 | 539 | 7 200 434 | 13 357 | 436
Netherlands | 9685 | 814 | 18 365 837 | 22 551 | 1896
Spain | 8294 | 726 | 8 715 780 | 12 008 | 1051
Australia | 13 063 | 535 | 11 683 094 | 21 831 | 894
Israel | 4037 | 150 | 2 323 092 | 15 508 | 575
Japan | 21 313 | 1099 | 59 566 162 | 54 180 | 2795
Brazil | 20 198 | 1223 | 9 996 520 | 8172 | 495
Canada | 20 868 | 1239 | 33 091 101 | 26 707 | 1586
Taiwan | 1510 | 103 | 1 467 327 | 14 249 | 971
Thailand | 6581 | 518 | 4 972 120 | 9600 | 756
United States | 159 063 | 9139 | 336 028 904 | 36 768 | 2113

Abbreviations: CABG, coronary artery bypass graft; SWI, sternal-wound infection. Results are rounded to the nearest whole number. Mean SWI cost per CABG is the SWI burden divided by the number of CABG procedures.

The results were generally robust to changes in model parameters during sensitivity analysis. The estimated cost of an SWI per CABG was consistently higher in Japan and the United States than in the other countries analyzed (Figure 3); the lower bound of the IQR for these two countries exceeded the upper bound for all other countries. Even at the lowest estimate, the overall cost and resource-use burden of SWIs following CABG in the United States was far greater than in any other country (Table 2). The total median burden over all analyzed countries was US$529 million, 57 994 ICU days, 321 973 GW days, and 9418 readmissions.

Figure 3. Estimated Cost of SWIs per CABG in Evaluated Countries. Abbreviations: AUS, Australia; BRA, Brazil; CABG, coronary artery bypass graft surgery; CAN, Canada; DEU, Germany; ESP, Spain; FRA, France; GBR, Great Britain; ISR, Israel; ITA, Italy; JPN, Japan; NLD, the Netherlands; SWI, sternal-wound infection; THA, Thailand; TWN, Taiwan; USA, the United States. The cost (in 2017 US$) that SWIs add to each CABG procedure is depicted as a box plot for each country. The shaded box indicates the interquartile range, with the whiskers being the standard deviation; additional points plotted singularly are considered outliers. Within the shaded box, the line represents the median value and the cross the mean. The box plot is informed by 52 simulations.
Table 2. Median Annual Burden (Range) of SWIs by Country

Country | Cost Burden, US$ in Millions | ICU Burden, Care Days | GW Burden, Care Days | Readmission Burden, Events
France | 12.97 (8.32 to 27.18) | 1900 (800 to 3900) | 9500 (5100 to 23 500) | 430 (260 to 840)
Germany | 43.57 (28.06 to 81.82) | 6300 (4000 to 12 500) | 33 500 (20 900 to 75 500) | 1450 (1050 to 2130)
Italy | 20.37 (11.63 to 33.93) | 3100 (1600 to 5000) | 17 000 (7500 to 31 100) | 600 (380 to 1030)
United Kingdom | 12.34 (8.47 to 18.93) | 1800 (1200 to 2800) | 9800 (5800 to 15 600) | 290 (200 to 400)
Netherlands | 12.97 (9.24 to 21.41) | 2200 (1400 to 3400) | 12 300 (7800 to 19 900) | 230 (150 to 320)
Spain | 8.69 (4.81 to 13.24) | 1200 (700 to 2200) | 7000 (2900 to 12 700) | 280 (150 to 380)
Australia | 13.13 (6.63 to 17.35) | 1900 (900 to 2800) | 10 800 (4600 to 15 200) | 260 (200 to 340)
Israel | 3.24 (2.24 to 5.61) | 600 (400 to 1000) | 3400 (2200 to 5500) | 70 (40 to 110)
Japan | 47.25 (27.78 to 77.85) | 8500 (6000 to 12 600) | 50 000 (36 700 to 72 000) | 320 (200 to 440)
Brazil | 16.12 (9.86 to 23.33) | 2900 (1800 to 4600) | 16 300 (8700 to 27 500) | 520 (270 to 640)
Canada | 29.49 (19.42 to 50.31) | 3200 (2000 to 5600) | 17 300 (9000 to 34 800) | 490 (340 to 710)
Taiwan | 1.85 (1.29 to 3.44) | 400 (300 to 600) | 2300 (1300 to 3800) | 40 (30 to 50)
Thailand | 7.58 (3.43 to 12.58) | 1400 (600 to 2400) | 7900 (2800 to 15 100) | 160 (120 to 220)
United States | 299.40 (179.03 to 528.94) | 22 600 (12 500 to 33 900) | 125 100 (75 100 to 205 200) | 4280 (2740 to 6100)

Abbreviations: GW, general ward; ICU, intensive care unit; SWI, sternal-wound infection.

The potential for VBP to help combat postsurgical infections was assessed based on the burden of SWIs following CABG. We provide a worked example for Spain, where the cost of an SWI was estimated at US$12 008 ($\text{C}^{\text{SWI}}$, Table 1). Consider a Spanish hospital with an inpatient SWI rate of 4% plus 1.6% readmissions, for a total of 5.6% ($\text{r}^{\text{SWI}} = 0.056$), that is looking to invest 30% of the potential savings ($\text{S}_{\text{VBP}}^{\text{PUR}} = 30$) in three interventions ($\text{N}_{\text{VBP}}^{\text{INT}} = 3$), targeting a 25% reduction in SWIs ($\text{RR}_{\text{SWI}}^{\text{INT}} = 0.75$). It would have $$\text{CIPP} = \frac{\left( 1 - 0.75 \right) \bullet 12\,008 \bullet 0.056 \bullet \frac{30}{100}}{3} = \text{US\$}16.81.$$ Given that the hospital performs 1200 CABG procedures per year ($\text{N}^{\text{PT}} = 1200$) and is offering a further 30% of any savings as a VBP ($\text{S}_{\text{VBP}}^{\text{SAV}} = 30$), the potential per-intervention $$\text{VBP} = \frac{(0.056 \bullet 1200 \bullet 12\,008 \bullet \left( 1 - 0.75 \right)) - (16.81 \bullet 3 \bullet 1200)}{\frac{100 \bullet 3}{30}} = \text{US\$}14\,121.84.$$ If successful, the hospital would realize savings of US$87 036 per year.

Under scenario 1, the median (range) CIPP for an intervention reducing the SWI rate by 20% ranged from US$34 (US$22 to US$70) in France to US$111 (US$65 to US$183) in Japan (Table 3). If the intervention succeeded in reducing the SWI rate by 20%, a median VBP of between US$5044 (France) and US$16 629 (Japan) would be received; the median hospital saving would be between US$57 165 (France, US$57 per patient) and US$188 460 (Japan, US$188 per patient).
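The worked example above can be reproduced directly from the two formulas. Here is a small illustrative sketch (the function names are mine; the CIPP is rounded to cents before the VBP step, matching the worked example):

```python
def cipp(rr_int: float, c_swi: float, r_swi: float, s_pur: float,
         n_int: int) -> float:
    """Cost of intervention per patient: the purchase share s_pur (%) of the
    expected per-patient SWI saving, split across n_int interventions."""
    return (1 - rr_int) * c_swi * r_swi * (s_pur / 100) / n_int

def vbp(r_swi: float, n_pt: int, c_swi: float, rr_int: float,
        cipp_value: float, n_int: int, s_sav: float) -> float:
    """Per-intervention value-based payment if the SWI target is met."""
    gross = r_swi * n_pt * c_swi * (1 - rr_int)  # expected cost savings
    net = gross - cipp_value * n_int * n_pt      # net of purchase costs
    return net / ((100 * n_int) / s_sav)

# Spain worked example: C_SWI = 12 008, r_SWI = 0.056, 3 interventions,
# 25% SWI reduction (RR = 0.75), 30% purchase share, 1200 CABG per year,
# 30% savings share.
c = round(cipp(0.75, 12_008, 0.056, 30, 3), 2)
print(c)                                                   # 16.81
print(round(vbp(0.056, 1200, 12_008, 0.75, c, 3, 30), 2))  # 14121.84
```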
Table 3. Potential for VBP for a Hospital Performing 1000 CABG Procedures per Year Across Analyzed Countries

Country | Scenario 1 CIPP, US$ | Scenario 1 VBP, US$ | Scenario 1 Hospital Savings, US$
France | 34 (22 to 70) | 5044 (3237 to 10 574) | 57 165 (36 689 to 119 839)
Germany | … | … | …
Italy | 49 (28 to 81) | 7298 | …
United Kingdom | … | … | …
Netherlands | … | … | …
Spain | 52 (29 to 80) | 7860 | …
Australia | 50 (25 to 66) | 7536 | …
Israel | 40 (28 to 69) | 6012 | …
Japan | 111 (65 to 183) | 16 629 | 188 460 (110 795 to 310 502)
Brazil | 40 (24 to 58) | 5987 | …
Canada | 71 (47 to 121) | 10 598 | …
Taiwan | 61 (43 to 114) | 9178 | …
Thailand | 58 (26 to 96) | 8644 | …
United States | 94 (56 to 166) | 14 117 | …

Abbreviations: CABG, coronary artery bypass graft surgery; CIPP, cost of intervention per patient; SWI, sternal-wound infection; VBP, value-based purchasing. All values are the median (range) calculated from the probabilistic sensitivity analysis results. The hospital savings assume that both new interventions implemented achieve the target of a 20% reduction in the SWI rate.

Under purchasing scenario 2, the median CIPP was lower than in scenario 1, ranging from US$20 (US$13 to US$42) in France to US$67 (US$39 to US$110) in Japan (Table 3). The VBP was generally higher, as was the hospital saving, if interventions met their target under scenario 2. With a 20% reduction in the SWI rate, the median hospital savings ranged from US$70 616 (US$71 per patient) in France to US$232 804 (US$232 per patient) in Japan (Table 3).

Discussion

SWIs after CABG pose a heavy clinical and economic burden on hospitals. Our study found that countries with mature health care systems incurred a median cost of US$13 995 (IQR US$8172 to US$23 590) per SWI. Costs varied according to the individual countries' care pathways; in Japan and Australia, costs were largely driven by the extended LOS needed to treat DSWIs (averages of 66 and 53.2 days, respectively).31,32 In the United States, in addition to the high cost of care, readmission costs were 10 times higher than in France and Canada and three times higher than in Germany. The cost per SWI was highest in the United States, at US$36 768 per case, similar to hospital-acquired Clostridium difficile infection, which has been reported to cost US$34 157 (90% CI: US$33 134 to US$35 180).33

As CMS moves forward with the Hospital-Acquired Condition Reduction Program, thereby reducing payments to the worst-performing quartile of hospitals with regard to their hospital-acquired conditions score, hospitals may choose to focus on infection-reduction measures. Similar systems are in place around the globe, and there will be debate within hospitals as to whether reduction should focus on specific areas with achievable goals or on hospital-wide systems. There has been published success in reducing severe post-CABG infections, with studies demonstrating substantial reductions in DSWIs. By implementing a quality-improvement process, a regional US medical center managed to achieve close to zero DSWIs.34 The authors used a bundled approach that included multidisciplinary collaboration and a change in care pathways.
The interventions included standardization of processes, a new suture technique with braided triclosan-coated suture, a silver-coated midsternal dressing, disposable electrocardiogram leads and wires, an insulin infusion protocol, chlorhexidine mouthwash, preoperative vancomycin, a preoperative bath, and patient education.34 Similar achievements were made in Israel with an implemented wound-care protocol, the use of chlorhexidine–alcohol, and the exclusion of obese and diabetic women from bilateral internal thoracic artery grafting.35 In both studies, it was a combination of changes that led to success. Such extensive updates of the care pathway may not be feasible in all institutions. Any intervention to reduce the burden of SWIs would, however, be of benefit if it were priced appropriately. Studies have shown that a single intervention can be effective at reducing infection rates, for example introducing 24-hour IV antibiotic prophylaxis,36 local gentamicin sponges,37,38 interlocking figure-eight and nitinol flexigrip closure,36,39,40 and single-use ECG cables and leads.26,34

Using the equations provided in the methods section and our estimates of the SWI burden, providers can calculate how much they could pay on a per-patient basis for implementing one or more of these interventions. Given the high cost burden of SWIs, assigning 30% of the estimated savings toward purchasing two new interventions resulted in a viable cost of between US$11 (lowest estimate, France) and US$110 (highest estimate, Japan) per patient. The lowest of these costs per patient likely already covers the standard purchase cost of a number of available options. With VBP, however, the simple purchase price is not the end of the story: providers need to monitor and track their progress on infection rates so that the benefits of value-based and risk-sharing contracts can be leveraged, and sellers need to remain engaged, promoting continuing education and correct use of the product, to see any additional value returned.

Our results were drawn from a simulation model and do not capture all aspects and subtleties of real-life care. The probabilities of moving from one health state to another were taken from published peer-reviewed studies, and for feasibility it was assumed that average patient characteristics and risk factors for developing an SWI were the same between countries. Risk factors modeled included morbid obesity (BMI > 35 kg/m²) and the presence of diabetes; less prevalent comorbidities, such as chronic obstructive pulmonary disease, kidney disease, or peripheral vascular disease, were not included.20 There was uncertainty about some model parameters, with limited data available for the number of days to treat SWIs (3 of 14 countries had missing values) and DSWIs (5 of 14 countries), as well as for the cost per day of care on the GW (3 of 14 countries had missing values). The latter may be because the analyzed countries have their own hospital data-collection systems intended only for policy-implementation purposes, and because our search was limited to the English language.41,42 Finally, our model assumed an equivalent care pathway in all settings, but care delivery likely varies (if only slightly) among health care systems, hospitals, and care units.6,43–45

Conclusions

SWIs and DSWIs have a high cost, with a median of US$13 995 per case and US$900 per CABG procedure. The overall cost was largely due to the increased cost of care and LOS; the cost of readmissions was also a considerable concern.
As hospitals become more and more accountable for their outcomes, they may need to rethink care-delivery pathways and invest in new procedures and equipment. Reduction of DSWIs is possible but requires investment in both processes and infection-prevention products. SWI is an area of care where VBP could be implemented, making care improvement possible with limited financial risk to hospitals.

Disclosures: DB has no competing interests to declare. JL was an employee of Cardinal Health, the research funder, at the time of writing. PJM has consulted with Cardinal Health. RS is the owner and MB is an employee of Coreva Scientific, a health-economics consultancy that received fees for developing the Markov model and undertaking this research.

References

1. Report on the Burden of Endemic Health Care-Associated Infection Worldwide. World Health Organization; 2011. Accessed July 29, 2020. https://apps.who.int/iris/bitstream/handle/10665/80135/9789241501507_eng.pdf
2. van Mourik MSM, van Duijn PJ, Moons KGM, Bonten MJM, Lee GM. Accuracy of administrative data for surveillance of healthcare-associated infections: a systematic review. BMJ Open. 2015;5(8):e008424. doi:10.1136/bmjopen-2015-008424
3. Stone PW. Economic burden of healthcare-associated infections: an American perspective. Expert Rev Pharmacoecon Outcomes Res. 2009;9(5):417-422. doi:10.1586/erp.09.53
4. Arefian H, Hagel S, Heublein S, et al. Extra length of stay and costs because of health care-associated infections at a German university hospital. Am J Infect Control. 2016;44(2):160-166. doi:10.1016/j.ajic.2015.09.005
5. World Health Organization: Regional Office for Europe. Guidelines on Core Components of Infection Prevention and Control Programmes at the National and Acute Health Care Facility Level. World Health Organization; 2016.
6. Healthcare-Associated Infections. CDC. Accessed September 15, 2019. https://www.cdc.gov/hai/index.html
7. Vlaanderen FP, Tanke MA, Bloem BR, et al. Design and effects of outcome-based payment models in healthcare: a systematic review. Eur J Health Econ. 2019;20(2):217-232. doi:10.1007/s10198-018-0989-8
8. Stone PW, Glied SA, McNair PD, et al. CMS changes in reimbursement for HAIs. Med Care. 2010;48(5):433-439. doi:10.1097/mlr.0b013e3181d5fb3f
9. Hospital Value-Based Purchasing. Centers for Medicare & Medicaid Services. Accessed September 12, 2019. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Hospital-Value-Based-Purchasing-.html
10. Hospital-Acquired Condition Reduction Program (HACRP). Centers for Medicare & Medicaid Services. Accessed July 1, 2018. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/HAC-Reduction-Program.html
11. Hospital Readmissions Reduction Program (HRRP). Centers for Medicare & Medicaid Services. Accessed December 9, 2019. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program
12. Magid B, Murphy C, Lankiewicz J, Lawandi N, Poulton A. Pricing for safety and quality in healthcare: a discussion paper. Infect Dis Health. 2018;23(1):49-53. doi:10.1016/j.idh.2017.10.001
13. Calderwood MS, Kleinman K, Huang SS, Murphy MV, Yokoe DS, Platt R. Surgical site infections. Med Care. 2017;55(1):79-85. doi:10.1097/mlr.0000000000000620
14. Bazzoli GJ, Thompson MP, Waters TM. Medicare payment penalties and safety net hospital profitability: minimal impact on these vulnerable hospitals. Health Serv Res. 2018;53(5):3495-3506. doi:10.1111/1475-6773.12833
15. Bai G, Anderson GF. A more detailed understanding of factors associated with hospital profitability. Health Aff (Millwood). 2016;35(5):889-897. doi:10.1377/hlthaff.2015.1193
16. Stein SM, Day M, Karia R, Hutzler L, Bosco JAI. Patients' perceptions of care are associated with quality of hospital care: a survey of 4605 hospitals. Am J Med Qual. 2015;30(4):382-388. doi:10.1177/1062860614530773
17. Isaac T, Zaslavsky AM, Cleary PD, Landon BE. The relationship between patients' perception of care and measures of hospital quality and safety. Health Serv Res. 2010;45(4):1024-1040. doi:10.1111/j.1475-6773.2010.01122.x
18. Anderson DJ, Podgorny K, Berríos-Torres SI, et al. Strategies to prevent surgical site infections in acute care hospitals: 2014 update. Infect Control Hosp Epidemiol. 2014;35(6):605-627. doi:10.1086/676022
19. Gulack BC, Kirkwood KA, Shi W, et al. Secondary surgical site infection after coronary artery bypass grafting: a multi-institutional prospective cohort study. J Thorac Cardiovasc Surg. 2018;155(4):1555-1562.
20. Cotogni P, Barbero C, Rinaldi M. Deep sternal wound infection after cardiac surgery: evidences and controversies. World J Crit Care Med. 2015;4(4):265. doi:10.5492/wjccm.v4.i4.265
21. Greco G, Shi W, Michler RE, et al. Costs associated with health care-associated infections in cardiac surgery. J Am Coll Cardiol. 2015;65(1):15-23. doi:10.1016/j.jacc.2014.09.079
22. Singh K, Anderson E, Harper JG. Overview and management of sternal wound infection. Semin Plast Surg. 2011;25(1):25-33. doi:10.1055/s-0031-1275168
23. Meszaros K, Fuehrer U, Grogg S, et al. Risk factors for sternal wound infection after open heart operations vary according to type of operation. Ann Thorac Surg. 2016;101(4):1418-1425. doi:10.1016/j.athoracsur.2015.09.010
24. Colombier S, Kessler U, Ferrari E, von Segesser LK, Berdajs DA. Influence of deep sternal wound infection on long-term survival after cardiac surgery. Med Sci Monit. 2013;19:668-673.
25. The Economist Intelligence Unit. Global access to healthcare index poster 2017. Accessed September 6, 2019. https://ukshop.economist.com/products/the-economist-intelligence-unit-healthcare-poster-2017
26. Saunders R, Lankiewicz J. The cost effectiveness of single-patient-use electrocardiograph cable and lead systems in monitoring for coronary artery bypass graft surgery. Front Cardiovasc Med. 2019;6:61. doi:10.3389/fcvm.2019.00061
27. Lankiewicz JD, Wong T, Moucharite M. The relationship between a single-patient-use electrocardiograph cable and lead system and coronary artery bypass graft surgical site infection within a Medicare population. Am J Infect Control. 2018;32(8):775-783. doi:10.1016/j.ajic.2018.01.023
28. Heart diseases 2016. Eurostat. Accessed December 9, 2019. https://ec.europa.eu/eurostat/news/themes-in-the-spotlight/heart-diseases-2016
29. Lee C, Cheng C, Yang YK, et al. Trends in the incidence and management of acute myocardial infarction from 1999 to 2008: Get With The Guidelines performance measures in Taiwan. J Am Heart Assoc. 2014;3(4):e001066. doi:10.1161/jaha.114.001066
30. Hurley MP, Schoemaker L, Morton JM, et al. Geographic variation in surgical outcomes and cost between the United States and Japan. Am J Manag Care. 2016;22(9):600-607.
31. Masuda M, Kuwano H, Okumura M, et al. Thoracic and cardiovascular surgery in Japan during 2012. Gen Thorac Cardiovasc Surg. 2014;62(12):734-764. doi:10.1007/s11748-014-0464-0
32. Lonie S, Hallam J, Yii M, et al. Changes in the management of deep sternal wound infections: a 12-year review. ANZ J Surg. 2015;85(11):878-881. doi:10.1111/ans.13279
33. Zhang S, Palazuelos-Munoz S, Balsells EM, Nair H, Chit A, Kyaw MH. Cost of hospital management of Clostridium difficile infection in United States: a meta-analysis and modelling study. BMC Infect Dis. 2016;16(1):447. doi:10.1186/s12879-016-1786-6
34. Kles CL, Murrah CP, Smith K, Baugus-Wellmeier E, Hurry T, Morris CD. Achieving and sustaining zero: preventing surgical site infections after isolated coronary artery bypass with saphenous vein harvest site through implementation of a staff-driven quality improvement process. Dimens Crit Care Nurs. 2015;34(5):265-272. doi:10.1097/dcc.0000000000000131
35. Kieser TM, Rose MS, Aluthman U, Montgomery M, Louie T, Belenkie I. Toward zero: deep sternal wound infection after 1001 consecutive coronary artery bypass procedures using arterial grafts: implications for diabetic patients. J Thorac Cardiovasc Surg. 2014;148(5):1887-1895. doi:10.1016/j.jtcvs.2014.02.022
36. Vos RJ, Van Putte BP, Kloppenburg GTL. Prevention of deep sternal wound infection in cardiac surgery: a literature review. J Hosp Infect. 2018;100(4):411-420. doi:10.1016/j.jhin.2018.05.026
37. Schimmer C, Özkur M, Sinha B, et al. Gentamicin-collagen sponge reduces sternal wound complications after heart surgery: a controlled, prospectively randomized, double-blind study. J Thorac Cardiovasc Surg. 2012;143(1):194-200. doi:10.1016/j.jtcvs.2011.05.035
38. Bennett-Guerrero E. Effect of an implantable gentamicin-collagen sponge on sternal wound infections following cardiac surgery: a randomized trial. JAMA. 2010;304(7):755. doi:10.1001/jama.2010.1152
39. Bottio T, Rizzoli G, Vida V, Casarotto D, Gerosa G. Double crisscross sternal wiring and chest wound infections: a prospective randomized study. J Thorac Cardiovasc Surg. 2003;126(5):1352-1356. doi:10.1016/s0022-5223(03)00945-0
40. Bejko J, Bottio T, Tarzia V, et al. Nitinol flexigrip sternal closure system and standard sternal steel wiring: insight from a matched comparative analysis. J Cardiovasc Med. 2015;16(2):134-138. doi:10.2459/jcm.0000000000000025
41. Hamajima N, Sugimoto T, Hasebe R, et al. Medical facility statistics in Japan. Nagoya J Med Sci. 2017;79(4):515-525.
42. Kusachi S, Kashimura N, Konishi T, et al. Length of stay and cost for surgical site infection after abdominal and cardiac surgery in Japanese hospitals: multi-center surveillance. Surg Infect. 2012;13(4):257-265. doi:10.1089/sur.2011.007
43. Schrijvers G, van Hoorn A, Huiskes N. The care pathway: concepts and theories: an introduction. Int J Integr Care. 2012;12(Special Edition Integrated Care Pathways):e192. doi:10.5334/ijic.812
44. The pros and cons of inpatient and outpatient care. Japan Today. Published September 15, 2015. Accessed July 29, 2020. https://japantoday.com/category/features/opinions/the-pros-and-cons-of-inpatient-and-outpatient-care
45. Chawla A, Westrich K, Dai A, Mantels S, DuBois RW. Care pathways in US healthcare settings: current successes and limitations, and future challenges. AJMC. 2019;22:S260. doi:10.1016/j.jval.2019.04.1224
What is a "codon" in grammatical evolution?

The term codon is used in the context of grammatical evolution (GE), sometimes without being explicitly defined. For example, it is used in this paper, which introduces and describes PonyGE 2, a Python library for GE, but it's not clearly defined. So, what is a codon?

Tags: terminology, evolutionary-algorithms, genetic-programming, grammatical-evolution, codon (asked by nbro♦)

Grammatical evolution

To understand what a codon is, we need to understand what GE is, so let me first provide a brief description of this approach.

Grammatical evolution (GE) is an approach to genetic programming where the genotypes are binary (or integer) arrays, which are mapped to the phenotypes (i.e. the actual solutions, which can be represented as trees, which, in turn, represent programs or functions) using a grammar (for example, expressed in Backus-Naur form). So, the genotypes (i.e. what is mutated, combined, or searched) and the phenotypes (the actual solutions, which are programs) are different in GE, and the genotype needs to be mapped to the phenotype to get the actual solution (or program). This is not the case in all GP approaches: for example, in tree-based GP, the genotype and the phenotype are the same, i.e. trees, which represent functions.

Codons

In GE, a codon is a subsequence of $m$ bits of the genotype (assuming that genotypes are binary arrays). For example, let's say that we have only two symbols in our grammar, i.e. a (a number) and b (another number). In this case, we only need 1 bit to differentiate the two. If we had 3 symbols, a, b and + (the addition operator), we would need at least 2 bits to encode each symbol. So, in this case, we could have the following mapping

a is represented by 00 (or the integer 0),
b is represented by 01 (or 1), and
+ is represented by 10 (or 2)

The operation a+b could then be represented by the binary sequence 001001 (or the integer sequence 021). The 2-bit subsequences 00, 01 and 10 (or their integer counterparts) are the codons.

What do we need codons for?

In GE, codons are used to index the specific choice of a production rule. To understand this, let's define a simple grammar, which is composed of a set of non-terminals (e.g. functions) $N = \{ \langle \text{expr} \rangle, \langle \text{op} \rangle, \langle \text{operand} \rangle, \langle \text{var} \rangle \}$, a set of terminals (e.g. specific numbers or letters) $\mathrm{T}=\{1,2,3,4,+,-, /, *, \mathrm{x}, \mathrm{y}\}$, a set of production rules $P$, and an initial production rule $S = \langle \text{expr} \rangle$. In this case, the set of production rules $P$ is defined as follows

\begin{align} \langle \text{expr} \rangle & ::= \langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle \; | \; \langle \text{operand} \rangle \\ \langle \text{op} \rangle & ::= + \; | \; - \; | \; * \; | \; / \\ \langle \text{operand} \rangle & ::= 1 \; | \; 2 \; | \; 3 \; | \; 4 \; | \; \langle \text{var} \rangle \\ \langle \text{var} \rangle & ::= \mathrm{x} \; | \; \mathrm{y} \end{align}

So, there are four production rules. To be clear, $\langle \text{var} \rangle ::= \mathrm{x} \; | \; \mathrm{y}$ is a production rule. The symbol $|$ means "or", so the left-hand side of each production is a non-terminal (note that all non-terminals are denoted with angle brackets $\langle \cdot \rangle$), which is defined as (or can be replaced with) one of the right-hand choices, each of which can contain non-terminals or terminals.
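To see the decoding step in code, here is a tiny illustrative sketch (not taken from PonyGE 2) that splits a binary genotype into $m$-bit codons and decodes each as an integer:

```python
def to_codons(bits: str, m: int = 2) -> list[int]:
    # Split a binary genotype into m-bit codons; decode each as an integer.
    return [int(bits[i:i + m], 2) for i in range(0, len(bits), m)]

print(to_codons("001001"))  # [0, 2, 1], i.e. a, +, b under the mapping above
```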
The first choice of each production rule is at index $0$, the second choice at index $1$, and so on. So, for example, in the case of the production $\langle \text{var} \rangle ::= \mathrm{x} \; | \; \mathrm{y}$, $\mathrm{x}$ is the choice at index $0$ and $\mathrm{y}$ is the choice at index $1$.

The codons are the indices that we use to select a production rule's choice while transforming (or mapping) the genotype into a phenotype (an actual program). So, we start with the first production, in this case $S = \langle \text{expr} \rangle$. If it's a non-terminal, then we replace it with one of its right-hand side choices. In this case, there are two choices

$\langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$ (choice at index $0$)
$\langle \text{operand} \rangle$ (choice at index $1$)

If our genotype (integer representation) is, for example, $01$ (note that this is a sequence of integers), we would replace $\langle \text{expr} \rangle$ with $\langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$, then we would replace the first $\langle \text{expr} \rangle$ in $\langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$ with $\langle \text{operand} \rangle$, so we would get $\langle \text{operand} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$, and so on and so forth, until we get an expression that only contains terminals or the genotype is exhausted. There can be other ways of mapping the genotype to the phenotype, but this is the purpose of codons; a code sketch of this mapping is given at the end of this answer.

Codons in biology

The term codon has its origins in biology: a subsequence of 3 nucleotides is known as a codon, which is mapped to an amino acid in order to produce proteins. The set of all mappings from codons to amino acids is known as the genetic code. Take a look at this article for a gentle introduction to the subject. So, in GE, codons have a role similar to that of codons in biology, i.e. they are used to build the actual phenotypes (in biology, the phenotypes would be the proteins or, ultimately, the organism).

Codons do not have to be 2-bit subsequences; they can be $m$-bit subsequences, for some arbitrary $m$. The term codon is similar to the term gene, which is also often used to refer to specific subsequences of the genotype (for example, in genetic algorithms), although they may not be synonymous (at least in biology, genes are made up of sequences of codons, so they are not synonymous there). Moreover, the binary $m$-bit codons can first be mapped to integers, so codons can also just be integers, as e.g. used here or illustrated in figure 2.2 of this chapter.

You can find more info about codons in the book Genetic Programming: An Introduction by Wolfgang Banzhaf et al., specifically sections 9.2.2 (p. 255), 9.2.3 (an example) and 2.3 (p. 39), or in chapter 2 of the book Foundations in Grammatical Evolution by Dempsey et al.
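Putting the pieces together, here is a minimal, self-contained sketch of the genotype-to-phenotype mapping for the example grammar above. It is illustrative only (the names and structure are mine, not PonyGE 2's); it expands the leftmost non-terminal and wraps around the codon sequence when it runs out, as is common in GE:

```python
# The example grammar above: each non-terminal maps to its ordered choices.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<operand>"]],
    "<op>": [["+"], ["-"], ["*"], ["/"]],
    "<operand>": [["1"], ["2"], ["3"], ["4"], ["<var>"]],
    "<var>": [["x"], ["y"]],
}

def map_genotype(codons, start="<expr>", max_steps=100):
    """Map integer codons to a phenotype by repeatedly expanding the leftmost
    non-terminal; each codon modulo the number of choices picks a choice."""
    symbols, i = [start], 0
    for _ in range(max_steps):
        nt = next((k for k, s in enumerate(symbols) if s in GRAMMAR), None)
        if nt is None:
            return "".join(symbols)  # only terminals left: mapping complete
        choices = GRAMMAR[symbols[nt]]
        pick = choices[codons[i % len(codons)] % len(choices)]  # wrap codons
        symbols[nt:nt + 1] = pick
        i += 1
    return None  # the genotype failed to terminate within max_steps

print(map_genotype([0, 1, 2, 1, 1, 3]))  # 3-4
```

Tracing it: codon 0 expands $\langle \text{expr} \rangle$ into $\langle \text{expr} \rangle \langle \text{op} \rangle \langle \text{expr} \rangle$, codon 1 picks $\langle \text{operand} \rangle$, codon 2 picks the terminal 3, codon 1 picks -, codon 1 picks $\langle \text{operand} \rangle$ again, and codon 3 picks 4, giving the phenotype 3-4.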
2018 Fall – MAT 2071 Proofs and Logic – Reitz

Week 14 Assignments
December 2, 2019 / Jonas Reitz

Written work, due Tuesday, 12/4, in class:
Chapter 10 p167: 1, 2, 5, 10, 15
Day 25 Handout: Theorems NT 5.2, 5.3
WeBWorK – none
OpenLab – none

Project Deadlines:
Final Draft of paper due in class on Tuesday 12/4.
Group Presentations on Tuesday 12/4 and Thursday 12/6.
Ethan Alexander (USATT#: 1170667)
15th Si & Patty Wasserman Jr. Open Table Tennis Championships
5 Nov 2021 - 7 Nov 2021

This page explains how Ethan Alexander (USATT# 1170667)'s rating went from 1854 to 1848 at the 15th Si & Patty Wasserman Jr. Open Table Tennis Championships held on 5 Nov 2021 - 7 Nov 2021. These ratings are calculated by the ratings processor, which makes 4 passes over the match-results data for a tournament. You can click here to view a table of all the resultant values from each of the 4 passes (and the initial rating) of the ratings processor for all of the 172 players in this tournament. The sections below give further details on the initial rating and the 4 passes of the ratings processor.

Note: We use mathematical notation to express the exact operations carried out in each pass of the ratings processor below. Whenever you see a variable/symbol such as, for example, $X_i^3$, we are following the convention that the superscript part of the variable (in this case "3") indicates an index (such as in a series), and it should not be misconstrued as an exponent (which is how it is used by default).

Initial Rating

The initial rating of a player for a tournament is the rating the player received at the end of the most recent tournament prior to the current tournament. If this is the first tournament the player has ever participated in (based on our records), then the player has no initial rating. The initial rating for the 15th Si & Patty Wasserman Jr. Open Table Tennis Championships (5 Nov 2021 - 7 Nov 2021) for Ethan Alexander, and its source tournament, are as follows:

Initial Rating | From Tournament | Start Day | End Day
1854 | $6000 Presper Financial Architects Open | 8 Oct 2021 | 9 Oct 2021

Click here to view the details of the initial ratings for all the players in this tournament.

Pass 1 Rating

In Pass 1, we consider only the players that come into this tournament with an initial rating, ignoring all the unrated players. If a rated player has a match against an unrated player, then that match result is ignored in the Pass 1 calculations as well. We apply the point exchange table shown below to all the matches played by the rated players:

Point Spread | Expected Result | Upset Result
0 - 12 | 8 | 8
13 - 37 | 7 | 10
38 - 62 | 6 | 13
63 - 87 | 5 | 16
88 - 112 | 4 | 20
113 - 137 | 3 | 25
138 - 162 | 2 | 30
163 - 187 | 2 | 35
188 - 212 | 1 | 40
213 - 237 | 1 | 45
238 and up | 0 | 50

Suppose player A has an initial rating of 2000 and player B has an initial rating of 2064, and they played a match against each other. When computing the impact of this match on their ratings, the "Point Spread" (as it is referred to in the table above) between these two players is the absolute value of the difference of their initial ratings. When the player with the higher rating wins, presumably the better player won, which is the expected outcome of a match, and therefore the "Expected Result" column applies. If the player with the lower rating wins the match, then presumably this is not expected, and therefore it is deemed an "Upset Result" and the value from that column in the table above is used. So, in our example of player A vs player B, if player B wins the match, then the expected outcome happens, and 5 points are added to player B's rating and 5 points are deducted from player A's rating.
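The table lookup is easy to express in code. A small illustrative sketch (the helper names are mine; the row boundaries follow the table above):

```python
# (point-spread upper bound, expected-result points, upset-result points)
EXCHANGE = [
    (12, 8, 8), (37, 7, 10), (62, 6, 13), (87, 5, 16), (112, 4, 20),
    (137, 3, 25), (162, 2, 30), (187, 2, 35), (212, 1, 40), (237, 1, 45),
]

def points_exchanged(winner_rating: int, loser_rating: int) -> int:
    """Points the winner gains and the loser drops for a single match."""
    spread = abs(winner_rating - loser_rating)
    upset = winner_rating < loser_rating  # the lower-rated player won
    for bound, expected, upset_pts in EXCHANGE:
        if spread <= bound:
            return upset_pts if upset else expected
    return 50 if upset else 0  # point spread of 238 and up

# The example above: player B (2064) beats player A (2000); spread 64,
# expected outcome, so 5 points change hands.
print(points_exchanged(2064, 2000))  # 5
```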
Looking at Ethan Alexander's match results and applying the point exchange table gives us the following result:

Ethan Alexander's Wins

Spread | Outcome | Gain | Winner (USATT#, Rating) | Loser (USATT#, Rating)
300 | EXPECTED | 0 | Ethan Alexander (1170667, 1854) | Eduardo Granda (1170850, 1554)
363 | EXPECTED | 0 | Ethan Alexander (1170667, 1854) | Brad Balmer (4037, 1491)
619 | EXPECTED | 0 | Ethan Alexander (1170667, 1854) | Timothy Richard Doerr (5407, 1235)
216 | EXPECTED | 1 | Ethan Alexander (1170667, 1854) | Jayden Cai (262852, 1638)
420 | EXPECTED | 0 | Ethan Alexander (1170667, 1854) | Rick C. C. Dennie (71688, 1434)
307 | EXPECTED | 0 | Ethan Alexander (1170667, 1854) | Jim Gableman (82695, 1547)
0 | - | 0 | Ethan Alexander (1170667, 0) | Jose Peralta (1173273, 0)
77 | EXPECTED | 5 | Ethan Alexander (1170667, 1854) | Aswin Kumar (203011, 1777)

(The Jose Peralta row shows zeros because he came into the tournament unrated, so that match is ignored in Pass 1.)

Ethan Alexander's Losses

Spread | Outcome | Gain | Winner (USATT#, Rating) | Loser (USATT#, Rating)
84 | EXPECTED | -5 | Albert S Yang (94028, 1938) | Ethan Alexander (1170667, 1854)
672 | EXPECTED | -0 | Nandan Naresh (84904, 2526) | Ethan Alexander (1170667, 1854)
559 | EXPECTED | -0 | Aditya Godhwani (82768, 2413) | Ethan Alexander (1170667, 1854)
5 | UPSET | -8 | Rohit Kalra (230575, 1849) | Ethan Alexander (1170667, 1854)
478 | EXPECTED | -0 | Andrew Cao (211982, 2332) | Ethan Alexander (1170667, 1854)
180 | EXPECTED | -2 | Winston Wu (222837, 2034) | Ethan Alexander (1170667, 1854)

You can click here to view a table of outcomes and points gained/lost from all the matches with all the players in this tournament. The "Outcome" column above shows whether the match had an expected (player with the higher rating wins) or an upset (player with the higher rating loses) outcome. Based on this outcome, and using both players' initial ratings, we apply the point exchange table from above and show the rating points earned and lost by Ethan Alexander in the "Gain" column. Matches are separated into two tables for wins and losses, where points are gained and lost respectively. We get the following math to calculate the Pass 1 Rating for Ethan Alexander:

Gains/Losses: 1854 - 5 + 0 + 0 + 0 + 1 + 0 + 0 + 0 + 0 + 0 + 0 + 5 - 8 + 0 - 2 = 1845

You can click here to view a table of Pass 1 calculations for all the rated players in this tournament.
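The Pass 1 computation is then a simple fold over the match list. Here is a hedged sketch, reusing the exchanged_points helper from the previous block; the data layout is our own assumption:

```python
# Sketch of the Pass 1 fold (data layout assumed); reuses exchanged_points
# from the previous sketch. Matches against unrated opponents are omitted.
def pass1_rating(initial: int, matches: list[tuple[int, bool]]) -> int:
    """matches holds (opponent_initial_rating, won) pairs, rated opponents only."""
    rating = initial
    for opponent, won in matches:
        if won:
            rating += exchanged_points(initial, opponent)
        else:
            rating -= exchanged_points(opponent, initial)
    return rating

# Ethan Alexander's rated matches from the tables above:
wins = [1554, 1491, 1235, 1638, 1434, 1547, 1777]
losses = [1938, 2526, 2413, 1849, 2332, 2034]
matches = [(r, True) for r in wins] + [(r, False) for r in losses]
assert pass1_rating(1854, matches) == 1845
```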
Pass 2 Rating

The purpose of this pass is solely to determine ratings for unrated players. To do this, we first look at the ratings of rated players that came out of Pass 1 to determine a "Pass 2 Adjustment". The logic for this is as follows.

We calculate the points gained in Pass 1. Points gained is simply the difference between the Pass 1 Rating and the Initial Rating of a player:

$$\rho_i^2 = P_i^1 - P_i^0$$

where:
- $P_i^0$, with $P_i^0 \in \mathbb{Z}^+$, is the initial rating of the $i$-th player. We use the symbol $P$ with superscript 0 because we sometimes refer to the process of identifying the initial rating of a given player as Pass 0 of the ratings processor.
- $P_i^1$, with $P_i^1 \in \mathbb{Z}^+$, is the Pass 1 rating of the $i$-th player.
- $\rho_i^2$, with $\rho_i^2 \in \mathbb{Z}$, is the points gained by the $i$-th player in this tournament. The superscript 2 denotes that this value is calculated and used in Pass 2 of the ratings processor. Further, $\rho_i^2$ only exists for players who have a well-defined Pass 1 Rating; players with an undefined Pass 1 Rating (unrated players) have an undefined $\rho_i^2$.
- $i$, with $i \in [1,172] \cap \mathbb{Z}$, is the index of the player under consideration. $i$ can be as small as 1 or as large as 172 for this tournament, and the $i$-th player must be a rated player.

For rated players, the Pass 1 points gained, $\rho_i^2$, is used to calculate the Pass 2 Adjustment in the following way:

- If a player gained fewer than 50 points in Pass 1, then we set that player's Pass 2 Adjustment to his/her Initial Rating.
- If a player gained between 50 and 74 points (inclusive) in Pass 1, then we set the player's Pass 2 Adjustment to his/her Pass 1 Rating.
- If a player gained 75 or more points in Pass 1, then the following applies:
  - If the player has won at least one match and lost at least one match in the tournament, then the player's Pass 2 Adjustment is the average of his/her Pass 1 Rating and the average of the opponents' ratings from the best win and the worst loss:

$$\alpha_i^2 = \left\lfloor \frac{P_i^1 + \frac{B_i + W_i}{2}}{2} \right\rfloor$$

where $\alpha_i^2$ is the Pass 2 Adjustment for the current player, $P_i^1$ is the Pass 1 Rating, $B_i$ is the rating of the highest-rated opponent against whom the current player won a match, and $W_i$ is the rating of the lowest-rated opponent against whom the current player lost a match.
  - If a player has not lost any of his/her matches in the current tournament, the mathematical median (rounded down to the nearest integer) of all of the player's opponents' initial ratings is used as his/her Pass 2 Adjustment:

$$\alpha_i^2 = \left\lfloor \operatorname{median}\left(\{P_k^0\}\right) \right\rfloor$$

where $P_k^0$ is the initial rating of the player who was the $i$-th player's opponent in the $k$-th match.

Additional symbols:
- $q$, with $q \in [1,827] \cap \mathbb{Z}$, is the index of the match result under consideration. $q$ can be as small as 1 or as large as 827 for this tournament, and the $q$-th match must have both opponents rated.
- $g$, with $g \in [1,5] \cap \mathbb{Z}$, is the $g$-th game of the current match result under consideration. $g$ can be as small as 1 or as large as 5 for this tournament, assuming players play up to 5 games in a match.
- $P_k^0$, with $P_k^0 \in \mathbb{Z}^+$, is the initial rating of the $i$-th player's opponent from the $k$-th match.

Therefore, the Pass 2 Adjustment for Ethan Alexander is calculated as follows: Given the initial rating of 1854 and the Pass 1 rating of 1845, the Pass 1 gain is 1845 - 1854 = -9. Since the Pass 1 gain of -9 is less than 50, the Pass 2 Rating (also referred to as the Pass 2 Adjustment) is reset back to the initial rating. Therefore the Pass 2 Adjustment for Ethan Alexander is 1854. You can click here to view a table of Pass 2 Adjustments for all the rated players in this tournament.
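A compact sketch of these Pass 2 Adjustment rules follows; the helper and parameter names are our own, and the nested floor $\lfloor (P + \frac{B+W}{2})/2 \rfloor$ is computed exactly as $\lfloor (2P + B + W)/4 \rfloor$:

```python
import statistics

# Sketch of the Pass 2 Adjustment rules above (names are ours, not USATT's).
def pass2_adjustment(initial, pass1, best_win=None, worst_loss=None, opponents=()):
    gain = pass1 - initial
    if gain < 50:                # gained fewer than 50 points
        return initial
    if gain <= 74:               # gained between 50 and 74 points
        return pass1
    if best_win is not None and worst_loss is not None:
        # gained 75+ with at least one win and one loss
        return (2 * pass1 + best_win + worst_loss) // 4
    # gained 75+ without a loss: floor of median of opponents' initial ratings
    return int(statistics.median(opponents))

# Ethan Alexander: gain = 1845 - 1854 = -9 < 50, so the adjustment is 1854.
assert pass2_adjustment(1854, 1845) == 1854
```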
After calculating the Pass 2 Adjustment for all the rated players as described above, we can now calculate the Pass 2 Rating for all the unrated players in this tournament (which is the main purpose of Pass 2). The Pass 2 Rating is calculated as follows:

- If all of the matches of an unrated player are against other unrated players, then the Pass 2 Rating for that player is simply set to 1200. You can click here to view the players who received a 1200 Pass 2 Rating. Not all of Ethan Alexander's matches were against unrated players, so this rule does not apply to him.
- For unrated players with both wins and losses, where at least one of the opponents has an initial rating, the Pass 2 Rating is the average of the best win and the worst loss (using the Pass 2 Adjustments of the rated players):

$$P_i^2 = \left\lfloor \frac{B_i^2 + W_i^2}{2} \right\rfloor$$

where $P_i^2$ is the Pass 2 Rating for the $i$-th player, $B_i^2$ is the largest Pass 2 Adjustment (best win) among opponents against whom the $i$-th player won a match, and $W_i^2$ is the smallest Pass 2 Adjustment (worst loss) among opponents against whom the $i$-th player lost a match.
- For unrated players with all wins and no losses, where at least one of the opponents has an initial rating, the Pass 2 Rating is calculated using:

$$P_i^2 = B_i^2 + \sum_{k=0}^{M_i-1} I(B_i^2 - \alpha_k^2)$$

where the function $I(x)$ is defined as

\begin{equation} I(x)=\left\{ \begin{array}{ll} 10, & \text{if}\ 1 \le x \le 50 \\ 5, & \text{if}\ 51 \le x \le 100 \\ 1, & \text{if}\ 101 \le x \le 150 \\ 0, & \text{otherwise} \end{array}\right. \end{equation}

and:
- $P_i^2$, with $P_i^2 \in \mathbb{Z}^+$, is the Pass 2 Rating of the $i$-th player in this tournament; it is only applicable to unrated players, for whom $P_i^0$ is not defined.
- $B_i^2$, with $B_i^2 \in \mathbb{Z}^+$, is the largest of the Pass 2 Adjustments of opponents of the $i$-th player against whom he/she won a match.
- $\alpha_k^2$, with $\alpha_k^2 \in \mathbb{Z}^+$, is the Pass 2 Adjustment of the player who was the opponent of the $i$-th player in the $k$-th match.
- $I(x)$, with $I:\mathbb{Z}\mapsto\mathbb{Z}^+$, is a function that maps every integer to one of the values 0, 1, 5, or 10.
- $M_i$, with $M_i \in \mathbb{Z}^+$, is the total number of matches played by the $i$-th player in this tournament.
- $k$, with $k \in [0,M_i-1] \cap \mathbb{Z}^+$, is the index of the match of the $i$-th player, ranging from 0 to $M_i-1$.
- For unrated players with all losses and no wins, where at least one of the opponents has an initial rating, the Pass 2 Rating is calculated using:

$$P_i^2 = W_i^2 + \sum_{k=0}^{M_i-1} I(W_i^2 - \alpha_k^2)$$

where $I(x)$ is defined above and $W_i^2$, with $W_i^2 \in \mathbb{Z}^+$, is the smallest of the Pass 2 Adjustments of opponents of the $i$-th player against whom he/she lost a match.

For the rated players, all the work done in Pass 1 and Pass 2 is undone, and their ratings are reset back to their initial ratings, while the unrated players keep the rating assigned above as their final Pass 2 Rating. Since Ethan Alexander is a rated player, his Pass 2 Adjustment of 1854 is ignored, along with his Pass 1 Rating of 1845, and his Pass 2 Rating is set to his initial rating of 1854, with which he came into this tournament. Click here to see detailed information about the Pass 2 Ratings of all the other players in this tournament.
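The unrated-player assignment, including the $I(x)$ bonus function, can be sketched as follows. This is again an illustration under an assumed data layout, not the ratings processor's actual code:

```python
# Sketch of the unrated-player Pass 2 assignment above (data layout assumed).
def I(x: int) -> int:
    """Bonus points as a function of the gap to an opponent's Pass 2 Adjustment."""
    if 1 <= x <= 50:
        return 10
    if 51 <= x <= 100:
        return 5
    if 101 <= x <= 150:
        return 1
    return 0

def unrated_pass2_rating(wins: list[int], losses: list[int]) -> int:
    """wins/losses hold the Pass 2 Adjustments of rated opponents beaten/lost to."""
    opponents = wins + losses
    if not opponents:                       # only faced other unrated players
        return 1200
    if wins and losses:                     # average of best win and worst loss
        return (max(wins) + min(losses)) // 2
    if wins:                                # all wins, no losses
        best = max(wins)
        return best + sum(I(best - a) for a in opponents)
    worst = min(losses)                     # all losses, no wins
    return worst + sum(I(worst - a) for a in opponents)
```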
Pass 3 Rating

Any unrated players who have all wins or all losses are skipped in Pass 3. Since Ethan Alexander has an initial rating of 1854, he is not an unrated player, and therefore this rule does not apply to him. You can click here to view the list of all the players that are skipped in Pass 3. The Pass 3 Rating is calculated in the 2 steps described below.

In the first part of Pass 3, we apply the point exchange table described in Pass 1 above, except this time using all the players' Pass 2 Ratings. Looking at Ethan Alexander's wins and losses and applying the point exchange table gives the corresponding result; you can click here to view a table of outcomes and points gained/lost from all the matches with all the players in this tournament for Pass 3 Part 1. The "Outcome" column shows whether the match had an expected (player with the higher rating wins) or an upset (player with the higher rating loses) outcome. Based on this outcome, and using both players' Pass 2 Ratings, we apply the point exchange table from above and show the rating points earned and lost by Ethan Alexander in the "Gain" column. Matches are divided into two tables for wins and losses, where points are gained for wins and lost for losses. Putting all the gains and losses together yields the Pass 3 Part 1 Rating. You can click here to view a table of these calculations for all the players in this tournament.

Given the Pass 3 Part 1 rating calculated above, the second part of Pass 3 looks very similar to the part of Pass 2 that deals with rated players, where we calculate their Pass 2 Adjustment. First, we calculate the points gained in Pass 3 Part 1. Points gained is simply the difference between the Pass 3 Part 1 Rating and the Pass 2 Rating of a player:

$$\rho_i^3 = p_i^3 - P_i^2$$

where:
- $p_i^3$, with $p_i^3 \in \mathbb{Z}^+$, is the Pass 3 Part 1 rating of the $i$-th player. (Note that since this is an intermediate result, we use a lowercase $p$ instead of the uppercase $P$ that we use to indicate the final result of each pass of the ratings processor.)
- $\rho_i^3$, with $\rho_i^3 \in \mathbb{Z}$, is the points gained by the $i$-th player in this tournament in Pass 3.
- $i$, with $i \in [1,172] \cap \mathbb{Z}$, is the index of the player under consideration. $i$ can be as small as 1 or as large as 172 for this tournament.

The Pass 3 points gained, $\rho_i^3$, is then used to calculate the Pass 3 Part 2 Rating in the following way:

- If a player gained fewer than 50 points in Pass 3 Part 1, then we set that player's Pass 3 Part 2 Rating to his/her Pass 2 Rating.
- If a player gained between 50 and 74 points (inclusive) in Pass 3 Part 1, then we set the player's Pass 3 Part 2 Rating to his/her Pass 3 Part 1 Rating.
- If a player gained 75 or more points in Pass 3 Part 1, then the following applies:
  - If the player has won at least one match and lost at least one match in the tournament, then the player's Pass 3 Part 2 Rating is the average of his/her Pass 3 Part 1 Rating and the average of the opponents' ratings from the best win and the worst loss:

$$\alpha_i^3 = \left\lfloor \frac{p_i^3 + \frac{B_i^3 + W_i^3}{2}}{2} \right\rfloor$$

where $\alpha_i^3$ is the Pass 3 Part 2 Rating for the current player, $p_i^3$ is the Pass 3 Part 1 Rating, $B_i^3$ is the rating of the highest-rated opponent against whom the current player won a match, and $W_i^3$ is the rating of the lowest-rated opponent against whom the current player lost a match.
  - If a player has not lost any of his/her matches in the current tournament, the mathematical median (rounded down to the nearest integer) of all of the player's opponents' ratings is used as his/her Pass 3 Part 2 Rating:

$$\alpha_i^3 = \left\lfloor \operatorname{median}\left(\{p_k^3\}\right) \right\rfloor$$

where $p_k^3$ is the Pass 3 Part 1 Rating of the $i$-th player's opponent from the $k$-th match.

Therefore, the Pass 3 Part 2 Rating for Ethan Alexander is calculated as follows: Given the Pass 2 Rating of 1854 and the Pass 3 Part 1 rating of 1845, the Pass 3 Part 1 gain is 1845 - 1854 = -9. Since the Pass 3 gain of -9 is less than 50, the Pass 3 Part 2 Rating is reset back to the Pass 2 Rating. Therefore the Pass 3 Part 2 Rating for Ethan Alexander is 1854.

The Pass 3 Part 2 rating becomes the final Pass 3 rating (also referred to as the Pass 3 Adjustment), with one exception: in the cases where the Pass 3 Part 2 rating is less than the player's initial rating $P_i^0$, the Pass 3 rating is reset back to that player's initial rating. Ethan Alexander's Pass 3 Part 2 Rating came out to 1854. Since this value is not less than his initial rating of 1854, his Pass 3 Adjustment is set to his Pass 3 Part 2 Rating of 1854.

It is possible for the admin of this tournament to override the Pass 3 Adjustment calculated above with a value they deem appropriate. Ethan Alexander does not have a manually overridden value for his Pass 3 Adjustment, so the value remains at 1854. You can click here to view a table of Pass 3 Part 2 Ratings for all the players in this tournament, along with any manually overridden values.
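Pass 3 Part 2 mirrors the Pass 2 Adjustment with one extra clamp against the initial rating; Pass 4, described next, then simply replays the point exchange table with the adjusted ratings, as in the Pass 1 sketch. A hedged sketch with our own names:

```python
import statistics

# Sketch of Pass 3 Part 2 (names ours): identical in shape to the Pass 2
# Adjustment, plus a final clamp so the result never drops below the
# player's initial rating.
def pass3_adjustment(initial, pass2, part1, best_win=None, worst_loss=None,
                     opponents_part1=()):
    gain = part1 - pass2
    if gain < 50:
        part2 = pass2
    elif gain <= 74:
        part2 = part1
    elif best_win is not None and worst_loss is not None:
        part2 = (2 * part1 + best_win + worst_loss) // 4
    else:
        part2 = int(statistics.median(opponents_part1))
    return max(part2, initial)  # reset to the initial rating if lower

# Ethan Alexander: gain = 1845 - 1854 = -9 < 50 -> 1854, already >= initial.
assert pass3_adjustment(1854, 1854, 1845) == 1854
```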
Pass 4 Rating

Pass 4 is the final pass of the ratings processor. In this pass, we take the adjusted ratings (Pass 3 Adjustment) of all the rated players and the assigned ratings (Pass 2 Rating) of the unrated players, and apply the point exchange table to the match results based on these ratings to arrive at a final rating. Looking at Ethan Alexander's match results and applying the point exchange table gives us, for example, the following rows (note that Jose Peralta now carries his assigned Pass 2 Rating of 1536, and Albert S Yang his Pass 3 Adjustment of 2036):

Spread | Outcome | Gain | Winner (USATT#, Rating) | Loser (USATT#, Rating)
318 | EXPECTED | 0 | Ethan Alexander (1170667, 1854) | Jose Peralta (1173273, 1536)
182 | EXPECTED | -2 | Albert S Yang (94028, 2036) | Ethan Alexander (1170667, 1854)

The "Outcome" column above shows whether the match had an expected (player with the higher rating wins) or an upset (player with the higher rating loses) outcome. Based on this outcome, and using both players' Pass 3 Adjustments (or Pass 2 Ratings for unrated players), we apply the point exchange table from above and show the rating points earned and lost by Ethan Alexander in the "Gain" and "Loss" columns. Matches are separated into two tables for wins and losses, where points are gained and lost respectively. We get the following math to calculate the Pass 4 Rating for Ethan Alexander: starting from his Pass 3 Adjustment of 1854, the exchanged points across all his matches sum to -6, giving a final Pass 4 Rating of 1848. You can click here to view a table of Pass 4 calculations for all the players in this tournament.

Summary of calculations

15th Si & Patty Wasserman Jr. Open Table Tennis Championships - 5 Nov 2021 - 7 Nov 2021

Each row below lists: Player, USATT#, Initial Rating, Pass 1, Pass 2, Pass 3, Final Rating (Pass 4).

Veera Chandrika 216249 1450 1400 1450 1450 1434 Julio Andres Gonzales 271802 2010 2008 2010 2010 2017 Duane Searles 81975 1828 1812 1828 1828 1812 Carmen Yu 217532 1961 2032 1961 2032 2062 Tiana Piyadasa 1170759 1388 1575 1388 1502 1600 Fiona Dubina 214975 1143 1170 1143 1143 1191 Ben Swislow 11716 1905 1905 1905 1905 1905 Baron Lip 214979 1670 1669 1670 1670 1669 Wei Hou 1167438 1554 1493 1554 1554 1543 Justin To 220356 1836 1835 1836 1836 1837 Lev Petryshyn 266183 483 501 483 483 514 Aziz Zarehbin 91906 2445 2470 2445 2445 2470 Jayden Cai 262852 1638 1625 1638 1638 1627 Craig Osikowicz 266436 1433 1525 1433 1519 1532 Scott Czarnecki 82310 1557 1541 1557 1557 1540 Kai Zarehbin 91905 2459 2432 2459 2459 2432 Payam Zarehbin 95794 1293 1186 1293 1293 1194 Roman Petryshyn 266184 1299 1331 1299 1299 1354 Vinay S Chandra 34825 2097 2137 2097 2097 2152 Terry Thibault 1164631 1105 1137 1105 1105 1143 Mark J. J. Hoffman 24438 1571 1558 1571 1571 1564 Ryan Mahoney 72369 1502 1497 1502 1502 1497 Jim Biggs 216789 1596 1551 1596 1596 1580 Medha Krishna 218837 1974 1972 1974 1974 1974 Yukinari Nakamura 271575 Unrated n/a 2261 2261 2253 Tay Nguyen 83113 1733 1692 1733 1733 1697 Bogdan Plugowski 69537 2032 1918 2032 2032 1945 Lee Seibold 77799 2053 2020 2053 2053 2020 Luis Miguel Rivera-Perez 263389 1959 1961 1959 1959 1961 Darya Tenenbaum 97281 1460 1478 1460 1460 1486 Luke Chilson 218083 2090 2072 2090 2090 2084 Jeevith Veera 222694 1038 1083 1038 1038 1085 Albert S Yang 94028 1938 2046 1938 2036 2064 Rick D. Green 30824 1141 1077 1141 1141 1081 Joshua Dinu Joseph 1157745 1526 1612 1526 1588 1620 Adhrit V Kini 1167478 904 954 904 957 1000 Michio Morita 214515 2067 2036 2067 2067 2059 Nathan To 221939 1387 1528 1387 1533 1607 Alex Uganski 81971 1640 1581 1640 1640 1621 Timothy J.
Vandervest 28607 1816 1797 1816 1816 1802 SayeVikram Karthikeyan 201720 1497 1553 1497 1553 1584 Oluwole Ayangade 64766 2158 2132 2158 2158 2146 Mandy Yu 222971 2017 2078 2017 2078 2075 Bridget Maul 229627 573 552 573 573 540 Frank Aguilera 87881 817 817 817 817 817 Sarah Isabel Jalli 94697 2507 2504 2507 2507 2504 Borton Szeto 1164934 875 830 875 875 872 Robi Lexeme Tan Castillejo 269822 1653 1658 1653 1653 1662 Titus Dubina 1169799 203 203 203 203 203 Jedidiah Chung 91189 1742 1784 1742 1742 1784 Nitin Fuldeore 1164933 1112 1072 1112 1112 1082 Aswin Kumar 203011 1777 1783 1777 1777 1783 Aaron Yoon 1171592 482 629 482 775 826 Ricardo Reid 30165 1500 1484 1500 1500 1502 Christopher Wright 217090 666 665 666 666 666 Darren Tang 23413 1963 1892 1963 1963 1918 Nicholas Sherman 216331 1115 1291 1115 1378 1402 Gary Ng 230156 1329 1378 1329 1329 1378 Leon Li 14682 2287 2228 2287 2287 2264 Andrew Cao 211982 2332 2336 2332 2332 2336 Quinton Smith 223248 520 507 520 520 508 Sirat Mokha 212242 1875 1898 1875 1875 1898 Anwita Aneesh 1173151 172 172 172 172 172 Eduardo Granda 1170850 1554 1495 1554 1554 1502 Katy Lee 224029 409 408 409 409 408 Vignesh Iyer 270366 1998 1996 1998 1998 2002 Hammed-Taiwo Adeyinka 222239 2551 2573 2551 2551 2573 Sylwester Sobota 84043 1999 1966 1999 1999 1975 Jian Lin Tang 22998 1477 1332 1477 1477 1394 Mohamad Alzein 215073 1988 2027 1988 1988 2042 Rachel Wang 96633 2237 2268 2237 2237 2270 Alex Conrow 1171123 1213 1334 1213 1383 1418 Tashiya Piyadasa 222763 1883 1941 1883 1941 1955 Joon Lee 216874 1608 1610 1608 1608 1610 Alex Luo 202029 2022 2061 2022 2022 2089 Anya Shanbhag 1171127 1098 1278 1098 1291 1371 Jorge A. Vanegas 24999 1899 1898 1899 1899 1898 Brandon Popma 1151161 1335 1305 1335 1335 1324 Risheetha Bhagawatula 223797 1988 2041 1988 2041 2061 Lia Morales 1170878 1747 1735 1747 1747 1763 Casey Sheridan 267322 606 604 606 606 604 Shaoxiong Zheng 1173187 1163 1100 1163 1163 1112 Dylan Lewis 267065 486 576 486 714 739 Vivaan Chandra 1170374 818 755 818 818 791 Ronnie Coleman 97654 1808 1815 1808 1808 1822 Dion Payne Miller 93351 2061 2069 2061 2061 2082 Vlad Razvan Farcas 218690 2376 2378 2376 2376 2381 Kareem Azrak 269633 1763 1765 1763 1763 1765 Rick C. C. Dennie 71688 1434 1327 1434 1434 1350 Arcot Naresh 81883 2017 1982 2017 2017 1994 Aneesh Sreekumar 86584 1476 1475 1476 1476 1475 Nandan Naresh 84904 2526 2496 2526 2526 2496 Sid Naresh 84903 2593 2589 2593 2593 2589 Dell Sweeris 10415 2098 2088 2098 2098 2096 Kenzie Dubina 223311 895 892 895 895 899 Jace Bennett 268620 643 636 643 643 703 Maxwell Liu 221774 2198 2107 2198 2198 2107 Jenning Li 269648 Unrated n/a 1657 1657 1665 Frank Yin 1168345 1725 1694 1725 1725 1718 Doug Wruck 10604 1729 1709 1729 1729 1719 Faeq Zaman 93219 1293 1297 1293 1293 1298 Tony Miller 27215 1625 1619 1625 1625 1620 Brad Balmer 4037 1491 1507 1491 1491 1507 Mohammed A. A. Zaman 35195 1431 1430 1431 1431 1430 Shariq Zaman 93218 1088 1137 1088 1088 1162 Pawel Gluchowski 13394 2051 2045 2051 2051 2049 Tao Li 271451 714 813 714 916 999 Ben E. 
Ritter 8981 1238 1257 1238 1238 1266 Ethan Alexander 1170667 1854 1845 1854 1854 1848 Aditya Godhwani 82768 2413 2450 2413 2413 2450 Qi Cai 223588 1444 1436 1444 1444 1441 Gbenga Kayode 1166828 2347 2318 2347 2347 2318 Aarthi Loganathan 217446 2026 1928 2026 2026 1971 Aurimas Zemaitaitis 216168 1861 1927 1861 1927 1936 Miguel Yu 230248 745 685 745 745 734 Eva Harrison 219500 1186 1317 1186 1317 1364 Aarudharan A 271215 Unrated n/a 641 641 626 Tim Stoyanov 265072 1936 1869 1936 1936 1869 Jon Lee Freels 54635 1431 1490 1431 1490 1550 Winston Wu 222837 2034 2062 2034 2034 2082 William Wu 222838 2112 2090 2112 2112 2103 Brent Lacheta 216443 1111 1097 1111 1111 1102 Gabriel J Perez 92744 2359 2332 2359 2359 2332 Phillip Tam 61048 2156 2067 2156 2156 2108 Ryan Lin 220032 2067 2099 2067 2067 2106 Jim A. Engstrom 70172 1427 1380 1427 1427 1440 Satoshi Takano 1168655 1960 1901 1960 1960 1901 Geetha Krishna 224135 1710 1706 1710 1710 1709 Isabella Joy Xu 98155 2150 2182 2150 2150 2184 Tsetsen Batkhuyag 220297 1912 1928 1912 1912 1930 Linda Shu 94600 2261 2200 2261 2261 2218 Kelvin Lee 216968 1149 1095 1149 1149 1135 Xinyi Cai 1173267 Unrated n/a 1165 1165 1165 Amina Batkhuyag 220298 2060 2041 2060 2060 2052 Jeffrey Karras 1173270 Unrated n/a 808 808 801 Julius Karras 1173271 Unrated n/a 643 643 633 Matthew Lehmann 80125 2279 2279 2279 2279 2279 Mike Doyle 1173268 Unrated n/a 1130 1130 1129 Abigail YU 269965 1253 1257 1253 1253 1257 Jack Eaves 1173269 Unrated n/a 1149 1149 1137 Shahabul Arfeen 109456 Unrated n/a 809 809 804 Timothy Pop 1173274 Unrated n/a 528 528 533 Al Rowls 1173275 Unrated n/a 1988 1988 1980 Charles Shen 217235 2066 2244 2066 2259 2281 Walter Marion 1173272 Unrated n/a 1232 1232 1217 Jose Peralta 1173273 Unrated n/a 1536 1536 1518 Ferit Akova 214933 1684 1585 1684 1684 1616 Paul Williams 1173278 Unrated n/a 2012 2012 1997 Sam Burns 214932 1489 1382 1489 1489 1420 Daniel Cochran 10117 1413 1364 1413 1413 1389 Sophie Sharon 1173276 Unrated n/a 693 693 683 Steve Gonzales 54676 1850 1869 1850 1850 1872 James Sims 1173277 Unrated n/a 1477 1477 1487 Philip Schmucker 36154 1567 1564 1567 1567 1564 Tiffany Ke 89217 2301 2273 2301 2301 2273 Kary Fang 211608 2034 2024 2034 2034 2040 Kaye Chen 220824 2155 2113 2155 2155 2116 Vivek Kini 84684 2078 2070 2078 2078 2076 Jun Kobayashi 223642 1764 1817 1764 1817 1855 Jacob Karras 219292 1711 1616 1711 1711 1636 Harrison Ngo 89471 1934 1853 1934 1934 1853 Thomas Yu 62089 2220 2212 2220 2220 2246 Freddie (Zheyuan) Fan 1131823 1691 1581 1691 1691 1622 Lawer Dixon Jr 81533 1223 1294 1223 1304 1363 Walter Alomar 21175 2145 2121 2145 2145 2122 Isabella Luo 267433 1151 1186 1151 1151 1220 Rohit Kalra 230575 1849 1867 1849 1849 1867 Timothy Richard Doerr 5407 1235 1229 1235 1235 1230 Hannah Song 203187 2158 2161 2158 2158 2162 Marko Stambuk 218804 1267 1244 1267 1267 1247 Varin Chandra 1170751 809 858 809 877 933 Jim Gableman 82695 1547 1488 1547 1547 1501 Initial Ratings Veera Chandrika 216249 1450 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Julio Andres Gonzales 271802 2010 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Duane Searles 81975 1828 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Carmen Yu 217532 1961 LYTTC October Giant RR 2021 16 Oct 2021 17 Oct 2021 Tiana Piyadasa 1170759 1388 888 TTC $15K Butterfly / XIOM Labor Day Challenge 3 Sep 2021 6 Sep 2021 Fiona Dubina 214975 1143 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Ben Swislow 11716 1905 Illinois State Championships 
25 Jun 2016 26 Jun 2016 Baron Lip 214979 1670 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Wei Hou 1167438 1554 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Justin To 220356 1836 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Lev Petryshyn 266183 483 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Aziz Zarehbin 91906 2445 888 TTC $15K Butterfly / XIOM Labor Day Challenge 3 Sep 2021 6 Sep 2021 Jayden Cai 262852 1638 2021 HITTA Butterfly Halloween Open 29 Oct 2021 31 Oct 2021 Craig Osikowicz 266436 1433 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Scott Czarnecki 82310 1557 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Kai Zarehbin 91905 2459 888 TTC $15K Butterfly / XIOM Labor Day Challenge 3 Sep 2021 6 Sep 2021 Payam Zarehbin 95794 1293 US Nationals 2 Jul 2018 7 Jul 2018 Roman Petryshyn 266184 1299 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Vinay S Chandra 34825 2097 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Terry Thibault 1164631 1105 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Mark J. J. Hoffman 24438 1571 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Ryan Mahoney 72369 1502 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Jim Biggs 216789 1596 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Medha Krishna 218837 1974 Butterfly 2021 September BTTC Open 24 Sep 2021 26 Sep 2021 Yukinari Nakamura 271575 Unrated Tay Nguyen 83113 1733 SpinBlock Two-Player Team Tournament 18 Sep 2021 18 Sep 2021 Bogdan Plugowski 69537 2032 2020 Aurora Cup 17 Jan 2020 17 Jan 2020 Lee Seibold 77799 2053 $6000 Nittaku Ohio Open 13 Aug 2021 14 Aug 2021 Luis Miguel Rivera-Perez 263389 1959 America's Team Championship 25 May 2019 25 May 2019 Darya Tenenbaum 97281 1460 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Luke Chilson 218083 2090 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Jeevith Veera 222694 1038 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Albert S Yang 94028 1938 Westchester 2021 September Open 25 Sep 2021 26 Sep 2021 Rick D. Green 30824 1141 2019 South Shore Sports Butterfly Open 16 Nov 2019 17 Nov 2019 Joshua Dinu Joseph 1157745 1526 2021 HITTA Butterfly Halloween Open 29 Oct 2021 31 Oct 2021 Adhrit V Kini 1167478 904 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Michio Morita 214515 2067 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Nathan To 221939 1387 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Alex Uganski 81971 1640 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Timothy J. 
Vandervest 28607 1816 2019 South Shore Sports Butterfly Open 16 Nov 2019 17 Nov 2019 SayeVikram Karthikeyan 201720 1497 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Oluwole Ayangade 64766 2158 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Mandy Yu 222971 2017 2021 Butterfly Cup 3 Sep 2021 5 Sep 2021 Bridget Maul 229627 573 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Frank Aguilera 87881 817 2020 Arnold Table Tennis Challenge 6 Mar 2020 8 Mar 2020 Sarah Isabel Jalli 94697 2507 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Borton Szeto 1164934 875 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Robi Lexeme Tan Castillejo 269822 1653 Paddle Palace Atlanta Fall Open 11 Sep 2021 12 Sep 2021 Titus Dubina 1169799 203 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Jedidiah Chung 91189 1742 2019 Aurora Summer Open 6 Jul 2019 6 Jul 2019 Nitin Fuldeore 1164933 1112 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Aswin Kumar 203011 1777 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Aaron Yoon 1171592 482 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Ricardo Reid 30165 1500 SpinBlock July Open - Giant Round Robin 17 Jul 2021 17 Jul 2021 Christopher Wright 217090 666 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Darren Tang 23413 1963 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Nicholas Sherman 216331 1115 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Gary Ng 230156 1329 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Leon Li 14682 2287 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Andrew Cao 211982 2332 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Quinton Smith 223248 520 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Sirat Mokha 212242 1875 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Anwita Aneesh 1173151 172 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Eduardo Granda 1170850 1554 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Katy Lee 224029 409 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Vignesh Iyer 270366 1998 ICC JOOLA Fall Open 2021 28 Aug 2021 29 Aug 2021 Hammed-Taiwo Adeyinka 222239 2551 2021 Atlanta Cup 23 Oct 2021 24 Oct 2021 Sylwester Sobota 84043 1999 Patty and Si Wasserman Junior Table Tennis Tournament 12 Mar 2021 13 Mar 2021 Jian Lin Tang 22998 1477 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Mohamad Alzein 215073 1988 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Rachel Wang 96633 2237 2021 Butterfly/Sonesta Select Invitational Tournament 7 Oct 2021 10 Oct 2021 Alex Conrow 1171123 1213 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Tashiya Piyadasa 222763 1883 888 TTC $15K Butterfly / XIOM Labor Day Challenge 3 Sep 2021 6 Sep 2021 Joon Lee 216874 1608 Butterfly Spin &Smash October Open 1 Oct 2021 3 Oct 2021 Alex Luo 202029 2022 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Anya Shanbhag 1171127 1098 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Jorge A. Vanegas 24999 1899 2019 South Shore Sports Butterfly Open 16 Nov 2019 17 Nov 2019 Brandon Popma 1151161 1335 56th RoboPong St. 
Joseph Valley Open 28 May 2021 30 May 2021 Risheetha Bhagawatula 223797 1988 2021 Butterfly Florida Open 23 Jul 2021 25 Jul 2021 Lia Morales 1170878 1747 2021 HITTA Butterfly Halloween Open 29 Oct 2021 31 Oct 2021 Casey Sheridan 267322 606 2019 South Shore Sports Butterfly Open 16 Nov 2019 17 Nov 2019 Shaoxiong Zheng 1173187 1163 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Dylan Lewis 267065 486 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Vivaan Chandra 1170374 818 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Ronnie Coleman 97654 1808 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Dion Payne Miller 93351 2061 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Vlad Razvan Farcas 218690 2376 2021 HITTA Butterfly Halloween Open 29 Oct 2021 31 Oct 2021 Kareem Azrak 269633 1763 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Rick C. C. Dennie 71688 1434 2019 - 55th St. Joseph Valley Open 16 Mar 2019 16 Mar 2019 Arcot Naresh 81883 2017 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Aneesh Sreekumar 86584 1476 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Nandan Naresh 84904 2526 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Sid Naresh 84903 2593 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Dell Sweeris 10415 2098 2019 US Nationals 30 Jun 2019 5 Jul 2019 Kenzie Dubina 223311 895 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Jace Bennett 268620 643 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Maxwell Liu 221774 2198 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Jenning Li 269648 Unrated Frank Yin 1168345 1725 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Doug Wruck 10604 1729 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Faeq Zaman 93219 1293 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Tony Miller 27215 1625 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Brad Balmer 4037 1491 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Mohammed A. A. Zaman 35195 1431 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Shariq Zaman 93218 1088 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Pawel Gluchowski 13394 2051 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Tao Li 271451 714 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Ben E. Ritter 8981 1238 56th RoboPong St. 
Joseph Valley Open 28 May 2021 30 May 2021 Ethan Alexander 1170667 1854 $6000 Presper Financial Architects Open 8 Oct 2021 9 Oct 2021 Aditya Godhwani 82768 2413 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Qi Cai 223588 1444 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Gbenga Kayode 1166828 2347 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Aarthi Loganathan 217446 2026 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Aurimas Zemaitaitis 216168 1861 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Miguel Yu 230248 745 Edgeball Chicago International Table Tennis Open 26 Oct 2019 27 Oct 2019 Eva Harrison 219500 1186 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Aarudharan A 271215 Unrated Tim Stoyanov 265072 1936 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Jon Lee Freels 54635 1431 2021 Athens Fall Upside Down Tournament 2 Oct 2021 2 Oct 2021 Winston Wu 222837 2034 2021 MDTTC October Open 9 Oct 2021 10 Oct 2021 William Wu 222838 2112 2021 MDTTC October Open 9 Oct 2021 10 Oct 2021 Brent Lacheta 216443 1111 Edgeball Chicago International Table Tennis Open 27 Oct 2018 28 Oct 2018 Gabriel J Perez 92744 2359 $6000 Nittaku Ohio Open 13 Aug 2021 14 Aug 2021 Phillip Tam 61048 2156 2020 Aurora Cup 17 Jan 2020 17 Jan 2020 Ryan Lin 220032 2067 2021 MDTTC October Open 9 Oct 2021 10 Oct 2021 Jim A. Engstrom 70172 1427 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Satoshi Takano 1168655 1960 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Geetha Krishna 224135 1710 Butterfly 2021 September BTTC Open 24 Sep 2021 26 Sep 2021 Isabella Joy Xu 98155 2150 2021 Butterfly/Sonesta Select Invitational Tournament 7 Oct 2021 10 Oct 2021 Tsetsen Batkhuyag 220297 1912 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Linda Shu 94600 2261 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Kelvin Lee 216968 1149 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Xinyi Cai 1173267 Unrated Amina Batkhuyag 220298 2060 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Jeffrey Karras 1173270 Unrated Julius Karras 1173271 Unrated Matthew Lehmann 80125 2279 $6000 Nittaku Ohio Open 13 Aug 2021 14 Aug 2021 Mike Doyle 1173268 Unrated Abigail YU 269965 1253 2021 Butterfly Cup 3 Sep 2021 5 Sep 2021 Jack Eaves 1173269 Unrated Shahabul Arfeen 109456 Unrated Timothy Pop 1173274 Unrated Al Rowls 1173275 Unrated Charles Shen 217235 2066 Westchester 2021 September Open 25 Sep 2021 26 Sep 2021 Walter Marion 1173272 Unrated Jose Peralta 1173273 Unrated Ferit Akova 214933 1684 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Paul Williams 1173278 Unrated Sam Burns 214932 1489 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Daniel Cochran 10117 1413 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Sophie Sharon 1173276 Unrated Steve Gonzales 54676 1850 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 James Sims 1173277 Unrated Philip Schmucker 36154 1567 56th RoboPong St. 
Joseph Valley Open 28 May 2021 30 May 2021 Tiffany Ke 89217 2301 ICC JOOLA Fall Open 2021 28 Aug 2021 29 Aug 2021 Kary Fang 211608 2034 2021 Butterfly Florida Open 23 Jul 2021 25 Jul 2021 Kaye Chen 220824 2155 2021 Butterfly Florida Open 23 Jul 2021 25 Jul 2021 Vivek Kini 84684 2078 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Jun Kobayashi 223642 1764 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Jacob Karras 219292 1711 2021 US National Table Tennis Championships 4 Jul 2021 9 Jul 2021 Harrison Ngo 89471 1934 2021 Athens GA Table Tennis Summer Open 26 Jun 2021 26 Jun 2021 Thomas Yu 62089 2220 2021 MDTTC October Open 9 Oct 2021 10 Oct 2021 Freddie (Zheyuan) Fan 1131823 1691 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Lawer Dixon Jr 81533 1223 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Walter Alomar 21175 2145 Butterfly Spin & Smash August Open 6 Aug 2021 8 Aug 2021 Isabella Luo 267433 1151 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Rohit Kalra 230575 1849 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Timothy Richard Doerr 5407 1235 56th RoboPong St. Joseph Valley Open 28 May 2021 30 May 2021 Hannah Song 203187 2158 2021 HITTA Butterfly Halloween Open 29 Oct 2021 31 Oct 2021 Marko Stambuk 218804 1267 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Varin Chandra 1170751 809 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Jim Gableman 82695 1547 2021 Edgeball Chicago International Open 30 Oct 2021 31 Oct 2021 Pass 1 Ratings Amina Batkhuyag 220298 2060 - 7 - 7 - 1 + 0 - 8 + 4 + 0 - 6 + 6 + 0 + 0 =2041 Ethan Alexander 1170667 1854 - 5 + 0 + 0 + 0 + 1 + 0 + 0 + 0 + 0 + 0 + 0 + 5 - 8 + 0 - 2 =1845 Jayden Cai 262852 1638 - 3 + 0 + 2 + 0 + 1 - 1 - 16 + 0 + 0 + 4 + 0 + 0 + 0 + 0 =1625 SayeVikram Karthikeyan 201720 1497 + 0 - 4 + 0 + 16 + 0 + 0 + 5 + 45 - 6 =1553 Phillip Tam 61048 2156 + 4 - 40 + 0 - 20 - 35 + 0 + 2 =2067 Vivek Kini 84684 2078 + 0 - 8 =2070 Gbenga Kayode 1166828 2347 - 25 + 0 + 0 - 4 =2318 Sid Naresh 84903 2593 + 3 - 13 + 2 + 0 + 2 + 0 + 2 =2589 Aaron Yoon 1171592 482 + 0 + 50 - 1 + 50 + 0 + 0 - 2 + 50 + 0 + 0 + 0 + 0 =629 Bridget Maul 229627 573 - 20 + 0 - 1 =552 Titus Dubina 1169799 203 + 0 + 0 + 0 + 0 + 0 =203 Nathan To 221939 1387 + 50 + 0 + 0 - 2 + 50 + 1 + 50 - 8 + 0 + 0 + 0 =1528 Joshua Dinu Joseph 1157745 1526 + 0 + 10 + 0 + 0 + 2 + 0 + 30 + 0 - 30 - 20 + 16 + 0 - 3 + 45 + 40 - 4 + 0 + 0 + 0 =1612 Vignesh Iyer 270366 1998 + 5 - 2 - 2 + 0 + 10 - 7 + 0 + 0 + 0 + 0 - 1 + 0 - 5 =1996 Daniel Cochran 10117 1413 - 45 - 3 - 1 =1364 Justin To 220356 1836 + 0 + 0 - 3 + 0 - 1 - 1 + 0 + 4 =1835 Robi Lexeme Tan Castillejo 269822 1653 + 0 + 0 + 0 + 0 - 1 + 10 - 1 + 1 - 4 + 0 + 0 + 0 + 0 + 0 + 0 =1658 Abigail YU 269965 1253 + 0 - 1 + 0 + 0 - 6 + 13 + 0 + 0 + 0 - 6 + 0 + 0 + 0 + 0 + 4 =1257 Kai Zarehbin 91905 2459 - 3 + 1 + 0 + 0 + 0 - 10 + 3 + 0 + 0 - 13 - 5 =2432 Jim Biggs 216789 1596 + 0 + 4 - 1 - 3 + 0 + 0 + 6 - 35 + 0 - 16 =1551 Christopher Wright 217090 666 + 0 + 0 + 0 + 0 + 2 + 0 - 1 - 2 =665 Duane Searles 81975 1828 + 0 - 16 + 0 =1812 Shariq Zaman 93218 1088 + 0 + 30 - 1 + 20 + 0 + 0 =1137 Wei Hou 1167438 1554 + 16 + 1 + 0 + 0 + 0 + 0 - 25 + 0 + 0 + 2 - 25 + 0 - 35 + 4 - 5 + 6 =1493 Jon Lee Freels 54635 1431 + 0 + 0 + 0 + 1 - 5 + 50 + 0 + 25 + 0 + 0 + 25 + 8 - 45 =1490 Qi Cai 223588 1444 + 0 + 0 - 8 =1436 Freddie (Zheyuan) Fan 1131823 1691 - 50 - 5 + 0 - 50 + 0 + 0 + 0 + 1 - 5 + 5 - 6 =1581 Tiffany Ke 89217 2301 - 1 + 6 + 2 + 2 + 
0 - 30 + 0 - 7 =2273 Alex Uganski 81971 1640 + 0 + 20 - 3 + 0 + 0 - 4 - 40 - 50 + 1 + 0 + 4 + 5 + 3 + 5 + 0 =1581 Miguel Yu 230248 745 + 0 - 5 + 0 - 50 + 0 - 20 + 2 + 0 + 16 - 3 =685 Darya Tenenbaum 97281 1460 + 8 + 0 + 7 + 0 + 0 - 4 + 7 =1478 Sam Burns 214932 1489 - 50 - 3 + 0 + 6 - 1 + 0 + 4 - 50 - 13 + 0 =1382 Jian Lin Tang 22998 1477 - 50 - 1 + 0 - 50 + 0 + 0 - 20 + 2 + 0 + 0 + 0 + 0 + 0 - 13 - 13 =1332 Gary Ng 230156 1329 + 0 + 0 + 0 + 4 + 45 =1378 Matthew Lehmann 80125 2279 - 3 + 0 + 0 + 0 + 0 + 3 =2279 Sirat Mokha 212242 1875 - 1 - 1 + 25 + 3 - 3 =1898 Tashiya Piyadasa 222763 1883 - 2 + 30 - 1 + 0 - 3 + 7 + 1 + 16 + 1 - 6 - 2 + 0 - 2 + 0 - 6 + 25 + 3 + 2 - 5 + 0 + 0 + 0 + 0 =1941 Tony Miller 27215 1625 - 2 - 1 + 0 - 3 + 0 + 0 + 0 =1619 Quinton Smith 223248 520 + 0 - 10 + 0 + 0 - 3 + 0 + 0 + 0 =507 Brad Balmer 4037 1491 + 0 - 50 + 0 + 16 + 0 + 0 + 50 + 0 =1507 Eduardo Granda 1170850 1554 - 16 + 0 + 0 + 0 + 8 + 0 - 6 - 45 + 0 =1495 Linda Shu 94600 2261 + 0 - 6 + 0 + 0 + 0 - 13 + 0 + 0 + 0 + 1 + 1 - 35 + 0 - 10 + 1 =2200 Vinay S Chandra 34825 2097 - 3 + 0 + 5 + 1 - 10 + 7 + 35 + 0 + 1 + 10 + 3 + 2 + 5 + 10 - 10 - 16 + 0 =2137 Mark J. J. Hoffman 24438 1571 - 16 + 0 + 0 + 0 + 0 + 8 - 5 =1558 Mandy Yu 222971 2017 - 3 + 0 + 6 + 40 + 30 - 10 + 0 + 0 - 8 + 6 - 6 + 6 - 7 - 1 + 0 + 0 + 0 - 8 + 16 =2078 Brandon Popma 1151161 1335 - 25 - 1 + 0 + 0 - 4 + 0 =1305 Vlad Razvan Farcas 218690 2376 + 0 + 0 + 0 + 2 =2378 Kelvin Lee 216968 1149 + 0 + 0 - 50 + 0 + 0 - 10 + 6 =1095 Risheetha Bhagawatula 223797 1988 + 13 + 0 + 0 + 0 - 3 + 10 + 35 - 6 + 0 + 0 - 2 - 7 + 13 =2041 Jeevith Veera 222694 1038 + 0 + 0 + 50 - 5 =1083 Charles Shen 217235 2066 + 10 - 1 + 0 + 8 + 20 + 1 + 0 + 0 + 8 + 7 + 2 + 0 + 1 + 13 + 0 + 8 + 0 + 45 + 30 + 8 + 8 + 10 =2244 Alex Luo 202029 2022 + 0 - 3 - 1 + 7 + 0 + 5 + 2 + 25 + 0 - 8 - 16 + 0 + 8 + 7 + 13 =2061 Scott Czarnecki 82310 1557 + 0 - 10 + 0 + 0 - 6 =1541 Nandan Naresh 84904 2526 - 16 + 0 + 0 + 0 + 0 + 0 + 0 - 25 + 1 + 0 + 0 + 0 + 5 + 5 =2496 Albert S Yang 94028 1938 + 0 + 5 + 0 + 0 + 20 + 1 + 0 + 5 + 3 + 6 + 25 - 1 - 3 - 3 + 6 + 13 + 5 - 6 + 16 + 0 + 16 =2046 Jim Gableman 82695 1547 + 0 - 8 + 0 + 3 - 50 - 4 =1488 Sylwester Sobota 84043 1999 + 8 + 0 + 3 + 2 - 10 + 0 + 0 + 0 - 8 - 25 - 13 + 1 + 10 - 1 =1966 Jedidiah Chung 91189 1742 + 0 + 4 + 0 + 45 + 0 + 0 + 0 + 0 - 3 - 4 =1784 Harrison Ngo 89471 1934 - 35 - 5 - 40 + 0 + 0 + 0 + 3 + 0 + 0 + 2 + 0 - 6 =1853 Rick C. C. 
Dennie 71688 1434 + 0 + 0 + 1 - 1 + 0 - 50 - 50 + 0 - 7 + 0 + 0 =1327 Dylan Lewis 267065 486 + 0 + 10 + 0 + 50 + 0 + 0 + 0 + 30 + 0 =576 Walter Alomar 21175 2145 + 3 - 2 - 25 + 0 =2121 Aditya Godhwani 82768 2413 - 2 + 3 + 0 + 0 + 25 + 13 - 2 =2450 Casey Sheridan 267322 606 + 1 - 2 - 1 =604 Lawer Dixon Jr 81533 1223 + 0 + 0 + 0 + 0 + 50 + 0 - 1 - 20 + 0 + 0 + 0 + 50 + 0 - 2 - 1 - 5 + 0 =1294 Jacob Karras 219292 1711 - 45 - 50 =1616 Ferit Akova 214933 1684 - 1 + 0 + 0 - 10 - 2 - 50 + 0 - 30 - 6 + 0 + 0 =1585 Katy Lee 224029 409 + 0 + 0 + 0 - 1 + 0 + 0 =408 Isabella Joy Xu 98155 2150 + 3 - 2 + 0 + 2 + 6 + 0 + 3 + 0 + 0 + 16 + 8 + 2 - 6 + 0 =2182 Pawel Gluchowski 13394 2051 - 4 - 2 + 5 + 0 - 8 + 3 =2045 Varin Chandra 1170751 809 + 5 + 0 + 50 + 0 + 0 + 1 - 5 + 0 + 0 + 0 + 0 + 0 + 0 + 2 + 0 + 0 - 4 + 0 =858 Hannah Song 203187 2158 + 0 + 3 + 3 - 5 + 1 + 0 + 0 - 20 + 30 - 5 - 8 + 2 + 2 =2161 Vivaan Chandra 1170374 818 + 0 + 0 + 0 - 50 + 1 + 0 - 16 + 0 + 0 + 0 + 0 + 0 + 2 + 0 =755 Ronnie Coleman 97654 1808 + 2 + 0 - 13 - 6 - 3 + 1 + 0 - 3 - 1 + 0 + 2 + 0 - 16 - 1 + 45 - 2 + 0 + 0 - 6 + 8 + 0 =1815 Dion Payne Miller 93351 2061 + 7 - 30 + 0 - 8 + 0 + 0 + 2 + 0 + 20 + 7 + 1 + 0 + 8 + 3 - 1 + 10 + 0 - 13 - 8 + 10 =2069 Fiona Dubina 214975 1143 + 0 + 0 + 0 - 10 + 50 + 0 - 13 =1170 Jim A. Engstrom 70172 1427 + 0 + 0 + 3 - 50 - 5 + 0 + 4 + 0 + 0 + 13 + 1 + 0 - 13 - 8 + 13 - 25 + 20 =1380 Kary Fang 211608 2034 + 2 - 3 + 6 + 3 - 5 - 3 + 0 - 16 + 7 - 1 =2024 Winston Wu 222837 2034 + 0 - 7 + 0 + 6 + 0 + 1 + 0 + 5 + 4 + 35 - 7 + 0 + 8 + 2 + 0 - 6 - 13 =2062 Rachel Wang 96633 2237 + 0 - 1 + 0 + 5 + 1 - 16 + 5 + 0 + 0 + 1 + 1 + 0 + 1 + 10 - 1 + 25 =2268 Satoshi Takano 1168655 1960 - 45 - 4 - 1 - 6 - 3 =1901 Rick D. Green 30824 1141 - 8 - 50 + 0 - 6 =1077 Philip Schmucker 36154 1567 + 0 + 0 + 0 + 5 + 0 + 0 + 0 + 0 - 8 =1564 Terry Thibault 1164631 1105 + 0 + 0 + 0 + 0 + 0 - 8 + 0 + 0 + 40 + 0 + 0 =1137 Aneesh Sreekumar 86584 1476 + 0 + 1 - 2 + 0 + 0 =1475 Thomas Yu 62089 2220 + 0 + 25 + 16 + 3 + 0 + 2 + 13 + 0 - 40 + 0 + 0 + 0 + 1 + 0 + 2 - 4 + 3 - 30 + 1 =2212 Timothy J. Vandervest 28607 1816 + 0 + 0 + 0 - 13 - 2 + 0 - 8 + 4 =1797 Timothy Richard Doerr 5407 1235 + 0 + 0 + 0 + 0 + 0 + 0 + 0 - 1 + 0 + 0 - 4 + 0 - 1 + 0 + 0 =1229 Jorge A. Vanegas 24999 1899 - 1 =1898 Kaye Chen 220824 2155 + 2 + 0 + 1 + 0 - 20 + 0 - 25 + 0 =2113 Veera Chandrika 216249 1450 + 0 - 8 - 50 + 0 + 8 =1400 Faeq Zaman 93219 1293 + 0 + 0 + 6 - 2 =1297 Payam Zarehbin 95794 1293 - 3 + 0 + 1 - 50 - 2 + 0 + 0 - 40 - 13 + 0 + 0 =1186 Michio Morita 214515 2067 - 25 + 2 + 20 + 0 + 0 - 20 - 8 =2036 Gabriel J Perez 92744 2359 - 2 + 0 + 0 + 0 + 0 + 0 - 25 =2332 Tay Nguyen 83113 1733 + 0 + 10 + 0 + 0 - 1 - 50 =1692 Luis Miguel Rivera-Perez 263389 1959 + 7 - 5 =1961 Aziz Zarehbin 91906 2445 + 16 - 2 + 4 + 0 + 0 + 0 + 0 + 0 + 10 + 0 - 2 + 3 + 1 - 5 + 0 + 0 =2470 Geetha Krishna 224135 1710 + 0 + 0 + 0 + 0 + 0 - 2 + 0 - 2 + 0 + 0 + 0 + 0 + 0 =1706 Eva Harrison 219500 1186 + 0 + 50 + 0 + 6 + 0 + 45 + 50 + 0 - 20 + 0 + 0 =1317 Arcot Naresh 81883 2017 - 3 + 0 + 0 + 0 + 0 + 5 - 10 + 0 - 2 - 25 =1982 Jace Bennett 268620 643 + 2 + 0 + 20 + 3 + 0 + 0 - 30 + 0 - 2 + 0 =636 Tsetsen Batkhuyag 220297 1912 + 30 - 2 + 0 - 5 + 0 + 0 - 3 - 2 + 0 - 2 =1928 Marko Stambuk 218804 1267 + 2 + 0 - 25 =1244 Mohammed A. A. 
Zaman 35195 1431 + 0 - 1 =1430 Andrew Cao 211982 2332 - 1 + 0 + 0 + 1 - 3 + 0 + 0 - 1 + 0 + 0 + 0 + 4 - 3 + 7 =2336 Dell Sweeris 10415 2098 - 1 + 0 + 1 - 10 =2088 Nicholas Sherman 216331 1115 + 0 + 13 + 0 + 0 + 0 + 0 + 0 + 20 + 50 + 0 + 0 + 0 + 0 + 8 + 0 + 0 + 0 + 50 + 10 + 10 + 5 + 10 =1291 Ben E. Ritter 8981 1238 + 0 + 50 + 0 - 30 - 1 =1257 Lee Seibold 77799 2053 - 2 - 16 - 13 - 2 =2020 Mohamad Alzein 215073 1988 + 0 + 0 + 0 + 0 - 8 - 1 + 0 - 5 + 0 + 0 + 1 + 6 + 40 + 6 =2027 Ben Swislow 11716 1905 + 0 + 0 + 2 - 2 + 0 + 0 =1905 Nitin Fuldeore 1164933 1112 + 0 + 0 - 40 =1072 Shaoxiong Zheng 1173187 1163 - 50 - 13 =1100 Doug Wruck 10604 1729 + 0 - 20 - 7 + 40 + 3 - 2 + 0 + 0 + 16 + 0 - 50 + 0 =1709 Hammed-Taiwo Adeyinka 222239 2551 + 2 + 1 + 13 + 6 =2573 Oluwole Ayangade 64766 2158 + 1 + 0 + 2 + 0 - 30 + 4 - 3 =2132 Maxwell Liu 221774 2198 + 1 + 1 + 2 + 0 + 0 + 0 - 35 + 0 - 25 + 1 + 0 + 6 - 2 - 40 + 0 =2107 Tao Li 271451 714 + 50 + 0 + 1 - 2 + 0 + 50 + 0 + 0 =813 Aswin Kumar 203011 1777 + 0 + 2 + 0 + 0 + 5 + 0 - 1 - 5 + 5 =1783 Adhrit V Kini 1167478 904 + 0 + 0 + 40 + 0 + 0 - 1 + 8 - 1 + 0 + 4 + 0 =954 Darren Tang 23413 1963 + 10 + 0 + 0 - 16 - 25 + 0 + 3 + 2 - 40 - 4 - 2 + 1 + 20 - 20 =1892 Tim Stoyanov 265072 1936 + 0 - 50 - 1 - 5 - 7 - 4 =1869 Alex Conrow 1171123 1213 + 0 + 0 + 4 + 50 + 25 + 0 + 0 - 2 + 0 + 0 - 1 + 45 + 0 + 0 + 0 =1334 Baron Lip 214979 1670 + 0 + 1 - 2 =1669 Frank Yin 1168345 1725 + 0 - 3 + 0 + 0 + 0 + 0 + 0 + 6 + 0 + 1 + 0 + 13 - 4 - 40 + 0 + 0 - 4 =1694 Julio Andres Gonzales 271802 2010 + 0 + 13 - 1 - 10 + 8 + 0 + 3 - 5 + 6 - 16 =2008 Lia Morales 1170878 1747 + 13 + 25 + 3 + 0 - 1 + 50 - 10 + 0 + 0 + 0 - 3 - 45 + 6 - 50 + 0 + 0 + 0 + 0 + 0 =1735 Leon Li 14682 2287 - 16 + 1 + 0 + 0 + 1 - 45 =2228 Steve Gonzales 54676 1850 - 2 - 7 + 25 + 1 + 2 =1869 Bogdan Plugowski 69537 2032 + 0 + 0 + 0 - 20 - 35 + 0 + 0 + 1 - 50 - 10 =1918 Brent Lacheta 216443 1111 + 0 - 2 - 4 + 0 - 8 + 0 + 0 =1097 Rohit Kalra 230575 1849 + 0 + 0 + 0 + 0 + 4 + 6 + 8 =1867 Aurimas Zemaitaitis 216168 1861 + 0 + 0 - 25 + 1 + 6 + 3 + 0 + 35 - 5 - 1 + 2 + 35 - 5 + 20 =1927 William Wu 222838 2112 + 0 + 0 + 3 - 1 - 10 + 1 + 2 + 0 + 6 + 0 + 0 + 0 - 13 + 0 - 10 =2090 Ricardo Reid 30165 1500 + 5 + 0 - 1 + 0 + 0 - 20 =1484 Carmen Yu 217532 1961 + 40 + 25 + 2 - 5 - 6 + 16 - 6 + 0 + 5 + 0 =2032 Jun Kobayashi 223642 1764 + 13 + 40 + 0 + 0 =1817 Craig Osikowicz 266436 1433 + 40 - 13 + 0 + 0 + 1 + 1 + 35 + 50 - 6 + 0 + 0 + 0 + 0 + 13 + 1 - 25 + 2 + 0 - 7 =1525 Sarah Isabel Jalli 94697 2507 + 2 + 0 - 6 + 1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 =2504 Luke Chilson 218083 2090 + 7 - 6 + 0 + 5 - 25 + 0 + 5 + 3 - 3 + 6 - 10 =2072 Borton Szeto 1164934 875 + 0 + 2 - 50 + 0 + 0 + 1 + 3 - 1 =830 Isabella Luo 267433 1151 + 0 + 8 + 0 + 50 - 13 + 0 + 0 + 0 - 1 - 50 + 0 + 0 + 25 + 30 - 10 + 0 + 0 + 0 + 0 - 4 =1186 Kareem Azrak 269633 1763 + 35 + 0 + 7 + 3 + 3 + 0 + 0 + 0 + 4 + 0 + 16 + 0 + 0 - 50 - 1 - 13 - 2 + 0 + 0 + 0 + 0 =1765 Tiana Piyadasa 1170759 1388 + 50 + 13 + 50 + 8 + 0 + 20 - 4 + 2 + 0 + 13 + 2 + 0 + 20 - 2 + 35 + 30 + 50 + 0 + 0 - 50 + 50 + 0 - 50 + 0 - 50 =1575 Anwita Aneesh 1173151 172 + 0 =172 Joon Lee 216874 1608 + 0 + 3 - 2 + 0 + 0 + 6 - 5 =1610 Anya Shanbhag 1171127 1098 + 0 + 13 + 50 - 1 + 0 + 0 + 0 + 0 + 0 + 8 + 0 + 1 + 1 - 6 + 0 + 0 + 1 + 50 + 0 + 0 + 0 + 50 + 13 =1278 Ryan Mahoney 72369 1502 - 5 + 0 + 0 + 0 + 0 + 0 + 0 =1497 Ryan Lin 220032 2067 + 8 + 2 + 0 + 0 + 4 + 0 - 4 + 0 - 8 + 0 + 0 + 0 + 25 + 0 + 5 =2099 Medha Krishna 218837 1974 - 2 + 0 + 0 - 6 + 0 + 16 - 1 + 10 - 6 - 3 - 2 - 2 + 0 - 6 + 0 =1972 Lev 
Petryshyn 266183 483 + 20 + 0 + 0 - 2 + 0 + 0 + 0 + 0 + 0 =501 Aarthi Loganathan 217446 2026 - 13 - 30 - 5 + 0 + 8 + 0 - 7 + 0 - 35 - 5 - 45 + 5 + 6 - 1 - 1 + 0 + 25 + 0 =1928 Roman Petryshyn 266184 1299 + 1 + 6 + 0 + 0 + 0 + 5 - 30 + 25 + 25 =1331 Frank Aguilera 87881 817 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 =817 Kenzie Dubina 223311 895 + 0 + 0 + 5 - 8 + 0 + 0 + 0 =892 Points Exchanged 148 EXPECTED Sarah Isabel Jalli 94697 2507 2 Gabriel J Perez 92744 2359 2 138 EXPECTED Hammed-Taiwo Adeyinka 222239 2551 2 Aditya Godhwani 82768 2413 2 134 EXPECTED Sid Naresh 84903 2593 3 Kai Zarehbin 91905 2459 3 81 UPSET Aziz Zarehbin 91906 2445 16 Nandan Naresh 84904 2526 16 134 EXPECTED Aditya Godhwani 82768 2413 3 Matthew Lehmann 80125 2279 3 219 EXPECTED Hammed-Taiwo Adeyinka 222239 2551 1 Andrew Cao 211982 2332 1 287 EXPECTED Sarah Isabel Jalli 94697 2507 0 Thomas Yu 62089 2220 0 337 EXPECTED Gabriel J Perez 92744 2359 0 Alex Luo 202029 2022 0 38 UPSET Risheetha Bhagawatula 223797 1988 13 Aarthi Loganathan 217446 2026 13 133 EXPECTED Isabella Joy Xu 98155 2150 3 Mandy Yu 222971 2017 3 151 EXPECTED Kary Fang 211608 2034 2 Tashiya Piyadasa 222763 1883 2 551 EXPECTED Linda Shu 94600 2261 0 Geetha Krishna 224135 1710 0 42 UPSET Hammed-Taiwo Adeyinka 222239 2551 13 Sid Naresh 84903 2593 13 181 EXPECTED Kaye Chen 220824 2155 2 Medha Krishna 218837 1974 2 148 EXPECTED Sid Naresh 84903 2593 2 Aziz Zarehbin 91906 2445 2 44 EXPECTED Hammed-Taiwo Adeyinka 222239 2551 6 Sarah Isabel Jalli 94697 2507 6 206 EXPECTED Sarah Isabel Jalli 94697 2507 1 Tiffany Ke 89217 2301 1 349 EXPECTED Sarah Isabel Jalli 94697 2507 0 Hannah Song 203187 2158 0 40 EXPECTED Tiffany Ke 89217 2301 6 Linda Shu 94600 2261 6 273 EXPECTED Linda Shu 94600 2261 0 Risheetha Bhagawatula 223797 1988 0 151 EXPECTED Tiffany Ke 89217 2301 2 Isabella Joy Xu 98155 2150 2 352 EXPECTED Sarah Isabel Jalli 94697 2507 0 Kaye Chen 220824 2155 0 124 EXPECTED Hannah Song 203187 2158 3 Kary Fang 211608 2034 3 127 UPSET Thomas Yu 62089 2220 25 Gbenga Kayode 1166828 2347 25 67 UPSET Thomas Yu 62089 2220 16 Leon Li 14682 2287 16 105 EXPECTED Phillip Tam 61048 2156 4 Pawel Gluchowski 13394 2051 4 189 EXPECTED Leon Li 14682 2287 1 Dell Sweeris 10415 2098 1 359 EXPECTED Gbenga Kayode 1166828 2347 0 Mohamad Alzein 215073 1988 0 123 EXPECTED Thomas Yu 62089 2220 3 Vinay S Chandra 34825 2097 3 359 EXPECTED Thomas Yu 62089 2220 0 Aurimas Zemaitaitis 216168 1861 0 993 EXPECTED Dell Sweeris 10415 2098 0 Terry Thibault 1164631 1105 0 370 EXPECTED William Wu 222838 2112 0 Jedidiah Chung 91189 1742 0 167 EXPECTED Thomas Yu 62089 2220 2 Lee Seibold 77799 2053 2 401 EXPECTED Rachel Wang 96633 2237 0 Justin To 220356 1836 0 98 EXPECTED Aziz Zarehbin 91906 2445 4 Gbenga Kayode 1166828 2347 4 414 EXPECTED Nandan Naresh 84904 2526 0 William Wu 222838 2112 0 332 EXPECTED Sid Naresh 84903 2593
ACM Transactions on Cyber-Physical Systems (TCPS)

The premier journal for the publication of high-quality original research papers and survey papers that advance the scientific and technological understanding of the interactions of information processing, networking, and physical processes.

Physical Layer Key Generation: Securing Wireless Communication in Automotive Cyber-Physical Systems
Jiang Wan, Anthony Lopez, Mohammad Abdullah Al Faruque
Modern automotive Cyber-Physical Systems (CPS) are increasingly adopting wireless communications for Intra-Vehicular, Vehicle-to-Vehicle (V2V), and Vehicle-to-Infrastructure (V2I) protocols as a promising solution for challenges such as the wire harnessing problem, collision detection and avoidance, traffic control, and environmental...

Tradeoffs in Neuroevolutionary Learning-Based Real-Time Robotic Task Design in the Imprecise Computation Framework
Pei-Chi Huang, Luis Sentis, Joel Lehman, Chien-Liang Fok, Aloysius K. Mok, Risto Miikkulainen
A cyberphysical avatar is a semi-autonomous robot that adjusts to an unstructured environment and...

TORUS: Scalable Requirements Traceability for Large-Scale Cyber-Physical Systems
Roopak Sinha, Barry Dowdeswell, Gulnara Zhabelova, Valeriy Vyatkin
Cyber-Physical Systems (CPS) contain intertwined and distributed software, hardware, and physical components to control complex physical processes. They find wide application in industrial systems, such as smart grid protection systems, which face increasingly complex communication and computation needs. Due to the scale and complexity of the...

Anonymous, Fault-Tolerant Distributed Queries for Smart Devices
Edward Tremel, Ken Birman, Robert Kleinberg, Márk Jelasity
Applications that aggregate and query data from distributed embedded devices are of interest in many settings, such as smart buildings and cities, the...

Inferring Smart Schedules for Dumb Thermostats
Srinivasan Iyengar, Sandeep Kalra, Anushree Ghosh, David Irwin, Prashant Shenoy, Benjamin Marlin
Heating, ventilation, and air conditioning (HVAC) accounts for over 50% of a typical home's energy usage. A thermostat generally controls HVAC usage in a home to ensure user comfort. In this article, we focus on making existing "dumb" programmable thermostats smart by applying energy analytics on smart meter data to infer...

Threat Analysis in Systems-of-Systems: An Emergence-Oriented Approach
Andrea Ceccarelli, Tommaso Zoppi, Alexandr Vasenev, Marco Mori, Dan Ionita, Lorena Montoya, Andrea Bondavalli
Cyber-physical Systems of Systems (SoSs) are large-scale systems made of independent and autonomous cyber-physical Constituent Systems (CSs), which may interoperate to achieve high-level goals, also with the intervention of humans. Providing security in such SoSs means, among other features, forecasting and anticipating evolving SoS functionalities,...

Model-Based Quantitative Evaluation of Repair Procedures in Gas Distribution Networks
Marco Biagi, Laura Carnevali, Fabio Tarani, Enrico Vicario
We propose an approach for assessing the impact of multi-phased repair procedures on gas distribution networks, capturing load profiles that can...

Looking Under the Hood of Z-Wave: Volatile Memory Introspection for the ZW0301 Transceiver
C. W. Badenhop, S. R. Graham, B. E. Mullins, L. O.
Mailloux
Z-Wave is a proprietary Internet of Things substrate providing distributed home and office automation services. The proprietary nature of Z-Wave devices makes it difficult to determine their security aptitude. While there are a variety of open source tools for analyzing Z-Wave frames, inspecting non-volatile memory, and disassembling firmware,...

National-scale Traffic Model Calibration in Real Time with Multi-source Incomplete Data
Desheng Zhang, Tian He, Fan Zhang
Real-time traffic modeling at national scale is essential to many applications, but its calibration is extremely challenging due to its large spatial...

Reinforcement Learning for UAV Attitude Control
William Koch, Renato Mancuso, Richard West, Azer Bestavros
Autopilot systems are typically composed of an "inner loop" providing stability and control, whereas an "outer loop" is responsible for mission-level objectives, such as way-point navigation. Autopilot systems for unmanned aerial vehicles are predominantly implemented using...

Building Virtual Power Meters for Online Load Tracking
Sean Barker, Sandeep Kalra, David Irwin, Prashant Shenoy
Many energy optimizations require fine-grained, load-level energy data collected in real time, most typically by a plug-level energy meter. Online...

CFP: Special Issue on Security and Privacy for Connected Cyber-Physical Systems
This special issue focuses on security and privacy aspects of emerging trends and applications involving Machine-to-Machine Cyber-Physical Systems (M2M CPSs), in both generic and specific domains of interest. We invite original research articles proposing innovative solutions to improve IoT security and privacy, taking into account the low-resource characteristics of CPS components, the distributed nature of CPSs, and the connectivity constraints of IoT devices. For more information, visit the Special Issue webpage.

CFP: Special Issue on Time for CPS
Timing is crucial for the safety, security, and responsiveness of Cyber-Physical Systems (CPS). This special issue invites manuscripts that study any aspect of the interaction of CPS and its timing. For more information, visit the Special Issue webpage.

CFP: Special Issue on User-Centric Security and Safety for Cyber-Physical Systems
This special issue focuses on user-centric security and safety aspects of cyber-physical systems (CPS), with the aim of filling gaps between user behaviour and the design of complex cyber-physical systems. For more information, visit the Special Issue webpage.

CFP: Special Issue on Human-Interaction-Aware Data Analytics for Cyber-Physical Systems
This special issue focuses on fundamental problems involving human-interaction-aware data analytics with future CPS. The aim of this special issue is to provide a platform for researchers and practitioners from academia, government, and industry to present their state-of-the-art research results in the area of human-interaction-aware data analytics for CPS. For more information, visit the Special Issue webpage.

CFP: Special Issue on Self-Awareness in Resource-Constrained Cyber-Physical Systems
This special issue seeks original manuscripts covering recent developments in methods, architecture, design, validation, and application of resource-constrained cyber-physical systems that exhibit a degree of self-awareness. For more information, visit the Special Issue webpage.
CFP: Special Issue on Real-Time Aspects in Cyber-Physical Systems
This special issue invites original, high-quality work that reports the latest advances in real-time aspects of CPSs. Featured articles should present novel strategies that address real-time issues in different aspects of CPS design and implementation, including theory, system software, middleware, applications, networks, tool chains, test beds, and case studies.

CFP: Special Issue on Transportation Cyber-Physical Systems
The aim of this special issue is to feature articles on new technologies that will impact future transportation systems. They might span vehicular technologies (such as autonomous vehicles, vehicle platooning and electric cars), communication technologies to enable vehicle-to-vehicle and vehicle-to-infrastructure communication, security mechanisms, infrastructure-level technologies to support transportation, as well as management systems and policies such as traffic light control, intersection management, dynamic toll pricing and parking management. In addition to terrestrial transportation, traffic control and autonomous management of aerial vehicles and maritime ships are also of interest.

Cyber-Physical Systems (CPS) has emerged as a unifying name for systems where the cyber parts, i.e., the computing and communication parts, and the physical parts are tightly integrated, both at design time and during operation. Such systems use computations and communication deeply embedded in and interacting with physical processes to add new capabilities to physical systems. These cyber-physical systems range from the minuscule (pacemakers) to the large-scale (a national power grid). There is an emerging consensus that new methodologies and tools need to be developed to support cyber-physical systems.

Introduction to the Special Issue on Human-Interaction-Aware Data Analytics for Cyber-Physical Systems
Tongquan Wei (East China Normal University); Junlong Zhou (Nanjing University of Science and Technology); Rajiv Ranjan (Newcastle University); Isaac Triguero (University of Nottingham); Huafeng Yu (Boeing Research & Technology); Chun Jason Xue (City University of Hong Kong); Schahram Dustdar (TU Wien)

Improving the Security of Visual Challenges
Junia Valente (The University of Texas at Dallas); Kanchan Bahirat (The University of Texas at Dallas); Kelly Venechanos (The University of Texas at Dallas); Alvaro Cárdenas (The University of Texas at Dallas); Balakrishnan Prabhakaran (The University of Texas at Dallas)
This paper proposes new tools to detect the tampering of video feeds from surveillance cameras. Our proposal illustrates the unique cyber-physical properties that sensor devices can leverage for their cyber-security. While traditional authentication and attestation algorithms exchange digital challenges between devices authenticating each other, our work instead proposes challenges that manifest physically in the field of view of the camera (e.g., a QR code on a display, a change of color in lighting, an infrared light, etc.). This physical (challenge) and cyber (verification) attestation mechanism can help protect systems even when the sensors (cameras) and actuators (displays, IR LEDs, color lightbulbs) are compromised.
Designing a controller with image-based pipelined sensing and additive uncertainties
Robinson Medina (Technische Universiteit Eindhoven); Juan Valencia (Technische Universiteit Eindhoven); Sander Stuijk (Technische Universiteit Eindhoven); Dip Goswami (Technische Universiteit Eindhoven); Twan Basten (Technische Universiteit Eindhoven)
Pipelined control is an image-based control approach that uses parallel instances of its image-processing algorithm in a pipelined fashion to improve the quality of control. A higher number of pipes improves the controller settling time, resulting in a trade-off between resources and control performance. In real-life applications, it is common to have a continuous-time model with additive uncertainties in one or more parameters that may affect the controller performance and, therefore, the trade-off analysis. We consider models with uncertainties denoted by matrices with a single non-zero element, potentially caused by multiple uncertain parameters in the model. We analyse the impact of such uncertainties on the aforementioned trade-off. To do so, we introduce a discretization technique for the uncertain model. Next, we use the discretized model with uncertainties to analyse the robustness of a pipelined controller designed to enhance performance. Such an analysis captures the relationship between resource usage, control performance, and robustness. Our results show that the tolerable uncertainties for a pipelined controller decrease when increasing the number of pipes. We also show the feasibility of our technique by implementing a realistic example in a Hardware-in-the-Loop simulation.

Publish or Drop Traffic Event Alerts? Quality-aware Decision Making in Participatory Sensing-based Vehicular CPS
Rajesh P Barnwal (Central Mechanical Engineering Research Institute CSIR); Nirnay Ghosh (Singapore University of Technology and Design); Soumya K Ghosh (Indian Institute of Technology Kharagpur); Sajal K Das (Missouri University of Science and Technology)
Vehicular cyber-physical systems (VCPS), among several other applications, may help in addressing the ever-increasing problem of congestion in large cities. Nevertheless, this may be hindered by the problem of data falsification, which results from either wrong perception of a traffic event or generation of fake information by the participating vehicles. Such information fabrication may cause re-routing of vehicles and artificial congestion, leading to economic, public safety, environmental, and health hazards. Thus, it is imperative to infer truthful traffic information in real time to restore the operational reliability of the VCPS. In this work, we propose a novel reputation scoring and decision support framework, called Spoofed and False Report Eradicator (SAFE), which offers a cost-effective and efficient solution to handle the data falsification problem in the VCPS domain. It includes humans in the sensing loop by exploiting the paradigm of participatory sensing, a concept of mobile security agents (MSA) to nullify the effects of deliberate false contributions, and a variant of the distance-bounding mechanism to thwart location-spoofing attacks. A regression-based model integrates these effects to generate the expected truthfulness of a participant's contribution. To determine whether a contribution is true or not, a generalized linear model is used to transform expected truthfulness into a Quality of Contribution (QoC) score. The QoC of different contributions is aggregated to compute the user reputation.
Such reputation enables classification of different participation behaviors. Finally, an Expected Utility Theory (EUT)-based decision model is proposed which utilizes the reputation score to determine whether a piece of information should be published or dropped. To evaluate SAFE through experimental study, we compare the reputation-based user segregation performance achieved by our framework with that generated by state-of-the-art reputation mechanisms. Experimental results demonstrate that SAFE is able to better capture subtle differences in user behaviors based on quality, quantity and location accuracy, and significantly improves operational reliability through accurate publishing of only legitimate information.

Socially-Aware Path Planning for a Flying Robot in Close Proximity of Humans
Hyung-Jin Yoon (University of Illinois at Urbana-Champaign); Christopher Widdowson (University of Illinois at Urbana-Champaign); Thiago Marinho (University of Illinois at Urbana-Champaign); Ranxiao Frances Wang (University of Illinois at Urbana-Champaign); Naira Hovakimyan (University of Illinois at Urbana-Champaign)
In this article, we describe a motion planning framework in a cyber-physical system (CPS) that takes into account the human's safety perception in the presence of a flying robot. We use virtual reality (VR) as a safe testing environment to collect psychological signals from test subjects experiencing a flying robot in their vicinity. The collected data show that the sensor signals from the physical part (human) of the CPS are influenced by unknown factors, because the human's attention is focused not only on the robot but also on other stimuli. To overcome this issue, we propose to model the change of focus in the human's attention as a latent discrete random variable, which clusters the data samples into two groups of relevant and irrelevant samples. The proposed model improves the likelihood over the Gaussian noise model, which only minimizes the squared error. We also present a numerical optimal path planning method that ensures spatial separation from the obstacle despite the time discretization in the CPS. Optimal paths generated using the proposed model result in a reasonable safety distance from the human. In contrast, the paths generated by the standard regression model with the Gaussian noise assumption have undesirable shapes due to over-fitting.

Catering to Your Concerns: Automatic Generation of Personalised Security-Centric Descriptions for Android Apps
Tingmin Wu; Lihong Tang; Rongjunchen Zhang; Sheng Wen; Cecile Paris; Surya Nepal; Marthie Grobler; Yang Xiang
Android users are increasingly concerned with the privacy of their data and the security of their devices. To improve the security awareness of users, recent automatic techniques produce security-centric descriptions by performing program analysis. However, the generated text does not always address users' concerns, as it is generally too technical to be understood by ordinary users. Moreover, different users have varied linguistic preferences, which do not match the text. Motivated by this challenge, we develop an innovative scheme to help users avoid malware and privacy-breaching apps by generating security descriptions that explain the privacy and security related aspects of an Android app in clear and understandable terms. We implement a prototype system, PERSCRIPTION, to generate personalised security-centric descriptions that automatically learn users'
security concerns and linguistic preferences in order to produce user-oriented descriptions. We evaluate our scheme through experiments and user studies. The results clearly demonstrate the improvement in readability and users' security awareness achieved by PERSCRIPTION's descriptions compared to existing description generators.

Introduction to the Special Issue on Real-Time Aspects in Cyber-Physical Systems
Luis Almeida; Bjorn Andersson; Jen-Wei Hsieh; Li-Pin Chang; Xiaobo Sharon Hu

Modeling and Optimization for Self-Power Non-Volatile IoT Edge Devices with Ultra-Low Harvesting Power
Chen Pan (University of Pittsburgh); Mimi Xie (University of Pittsburgh); Song Han (University of Connecticut); Zhihong Mao (University of Pittsburgh); Jingtong Hu (University of Pittsburgh)
Energy harvesters are becoming increasingly popular as power sources for IoT edge devices. However, one of the intrinsic problems of energy harvesters is that the harvested power is often weak and frequently interrupted. Therefore, energy-harvesting-powered edge devices have to work intermittently. To maintain execution progress, execution states need to be checkpointed into non-volatile memory before each power failure. In this way, previous execution states can be resumed after power comes back again. Nevertheless, frequent checkpointing and low charging efficiency generate significant energy overhead. To alleviate these problems, this paper conducts a thorough energy efficiency analysis and proposes three algorithms to maximize the energy efficiency of program execution. First, a non-volatile processor (NVP) aware task scheduling (NTS) algorithm is proposed to reduce the size of checkpointing data. Second, a tentative checkpointing avoidance (TCA) technique is proposed to avoid checkpointing for further reduction of checkpointing overhead. Finally, a dynamic wake-up strategy (DWS) is proposed to wake up the edge device at proper voltages where the total hardware and software overhead is minimized for further energy efficiency maximization. The experiments on a real testbed demonstrate that, with the proposed algorithms, an edge device is resilient to an extremely weak and intermittent power supply, and its energy efficiency is 2× as high as that of the baseline technique.

Improved LDA Dimension Reduction Based Behavior Learning with Commodity WiFi for Cyber-Physical Systems
Fu Xiao (Nanjing University of Posts and Telecommunications); Jing Chen (Nanjing University of Posts and Telecommunications); Zhetao Li (Xiangtan University); Haiping Huang (Nanjing University of Posts and Telecommunications); Lijuan Sun (Nanjing University of Posts and Telecommunications)
In recent years, the rapid development of sensing and computing has led to very large data sets. There is an urgent demand for innovative data analysis and processing techniques that are secure, privacy-protected and sustainable. In this paper, taking human activities and interactions with Cyber-Physical Systems (CPS) into consideration, we propose a human behavior learning system based on Channel State Information (CSI) that utilizes a series of algorithms for data analysis and processing. Aiming to recognize a set of gestures, our system is designed on the observation that different gestures have different effects on signals and that specific gesture signals have a unique energy spectrum. Specifically, an improved Linear Discriminant Analysis algorithm (I-LDA) is devised to reduce the dimension of human behavior signals and lower computational cost.
Additionally, behaviors are learned by a Logistic Regression Algorithm (LRA) in which bandwidth ratios in the energy spectrum are selected as features to eliminate the impact of different speeds. We implement our system on commercial off-the-shelf WiFi devices and conduct a large number of experiments in a typical indoor environment to evaluate its performance. Experimental results show that our system is robust, with an average recognition accuracy of up to 96%.

Energy-Efficient ECG Signal Compression for User Data Input in Cyber-Physical Systems by Leveraging Empirical Mode Decomposition
Hui Huang (Michigan Technological University); Shiyan Hu (Michigan Technological University); Ye Sun (Michigan Technological University)
Human physiological data are naturalistic and objective user data inputs for a great number of cyber-physical systems (CPS). The electrocardiogram (ECG), a widely used physiological golden indicator for certain human states and disease diagnosis, is often used as user data input for various CPS such as medical CPS and human-machine interaction. Wireless transmission and wearable technology enable long-term continuous ECG data acquisition for human-CPS interaction; however, these emerging technologies bring the challenges of storing and wirelessly transmitting huge amounts of ECG data, leading to energy efficiency issues for wearable sensors. ECG signal compression provides a promising solution to these challenges by decreasing the ECG data size. In this study, we develop the first scheme leveraging empirical mode decomposition (EMD) on ECG signals for sparse feature modeling and compression, and we further propose a new ECG signal compression framework based on an EMD-constructed feature dictionary. The proposed method features compressing ECG signals using a very limited number of feature bases with low computation cost, which significantly improves the compression performance and energy efficiency. Our method is validated with ECG data from the MIT-BIH arrhythmia database and compared with existing methods. The results show that our method achieves a compression ratio (CR) of up to 164 with a root mean square error (RMSE) of 3.48%, and an average CR of 88.08 with an RMSE of 5.66%, which is more than twice the average CR of state-of-the-art methods with a similar recovery error rate of around 5%. From the diagnostic distortion perspective, our method achieves high QRS detection performance with a sensitivity (SE) of 99.8% and a specificity (SP) of 99.6%, which shows that our ECG compression method can preserve almost all the QRS features and has no impact on the diagnosis process. In addition, the energy consumption of our method is only 30% of that of other methods when compared under the same recovery error rate.

A Sustainable and User Behavior Aware Cyber-Physical System for Home Energy Management
Wei Li (The University of Sydney); Xiaomin Chang (The University of Sydney); Junwei Cao (Tsinghua University); Ting Yang (Tianjin University); Yaojie Sun (Fudan University); Albert Y. Zomaya (The University of Sydney)
There is a growing trend of employing cyber-physical systems to help smart homes improve the comfort of residents. However, a residential cyber-physical system differs from a common cyber-physical system in that it directly involves human interaction, which is full of uncertainty. The existing solutions can be effective for performance enhancement in cases where no inherent and dominant human factors are involved.
Besides, the rapidly rising deployment of cyber-physical systems at home does not normally integrate with energy management schemes, which is a central issue that smart homes have to face. In this paper, we propose a cyber-physical-system-based energy management framework to enable a sustainable edge computing paradigm while meeting the needs of home energy management and residents. This framework aims to enable the full use of renewable energy while reducing electricity bills for households. A prototype system was implemented using real-world hardware. The experimental results demonstrated that renewable energy is fully capable of supporting the reliable running of home appliances most of the time and that electricity bills could be cut by up to 60% when our proposed framework was employed.

Combining Detection and Verification for Secure Vehicular Cooperation Groups
Mikael Asplund (Linköping University)
Coordinated vehicles for intelligent traffic management are instances of cyber-physical systems with strict correctness requirements. A key building block for these systems is the ability to establish a group membership view that accurately captures the locations of all vehicles in a particular area of interest. We formally define view correctness in terms of soundness and completeness and establish theoretical bounds for the ability to verify view correctness. Moreover, we present an architecture for an online view detection and verification process that uses the information available locally to a vehicle. This architecture uses an SMT solver to automatically prove view correctness. We evaluate this architecture and demonstrate that the ability to verify view correctness is on par with the ability to detect view violations.

A Distributed Tensor-Train Decomposition Method for Cyber-Physical-Social Services
Xiaokang Wang (University of Electronic Science and Technology of China); Laurence T. Yang (St. Francis Xavier University); Yihao Wang (University of Electronic Science and Technology of China); Xingang Liu (University of Electronic Science and Technology of China); Qingxia Zhang (Fudan University); Jamal Deen (University of Electronic Science and Technology of China)
Cyber-Physical-Social Systems (CPSS), which integrate the cyber, physical and social worlds, are a key technology for providing proactive and personalized services to humans. In this paper, we study CPSS by taking human-interaction-aware big data (HIBD) as the starting point. However, the HIBD collected from all aspects of our daily lives are of high order and large scale, which brings ever-increasing challenges for their cleaning, integration, processing and interpretation. Therefore, new strategies for representing and processing HIBD become increasingly important in the provision of CPSS services. As an emerging technique, the tensor is proving to be a suitable and promising representation and processing tool for HIBD. In particular, tensor networks, as a significant kind of tensor decomposition, bring advantages in the computing, storage and application of HIBD. Furthermore, the Tensor-Train (TT), one kind of tensor network, is particularly well suited for representing and processing high-order data by decomposing a high-order tensor into a series of low-order tensors. However, at present, there is still a need for an efficient Tensor-Train decomposition method for massive data. Therefore, for larger-scale HIBD, a highly efficient computational method for the Tensor-Train decomposition is required.
In this paper, a distributed Tensor-Train (DTT) decomposition method is proposed to process high-order and large-scale HIBD. The high performance of the proposed DTT, in terms of metrics such as execution time, is demonstrated with a case study on typical CPSS data: CT (Computed Tomography) image data. Furthermore, recognition, as a typical CPSS application for HIBD, was carried out in TT format to illustrate the advantage of DTT.

A Crowdsensing-based Cyber-physical System for Drone Surveillance Using Random Finite Set Theory
Chaoqun Yang (Zhejiang University); Li Feng (Macau University of Science and Technology); Zhiguo Shi (Zhejiang University); Rongxing Lu (University of New Brunswick); Kim-Kwang Raymond Choo (University of Texas at San Antonio)
Given the popularity of drones for leisure, commercial and government (e.g. military) usage, there is increasing focus on drone regulation. For example, how can a city council or some government agency detect and track drones more efficiently and effectively, say in a city, to ensure that the drones are not engaged in unauthorized activities? Therefore, in this paper, we propose a crowdsensing-based cyber-physical system for drone surveillance. The proposed system, CSDrone, utilizes surveillance data captured and sent from citizens' mobile devices (e.g., Android and iOS devices, as well as other image or video capturing devices) to facilitate joint drone detection and tracking. Our system uses random finite set (RFS) theory and an RFS-based Bayesian filter. We also evaluate CSDrone's effectiveness in drone detection and tracking. The findings demonstrate that in comparison to existing drone surveillance systems, CSDrone has a lower cost, and is more flexible and scalable.

Human-Interaction-Aware Adaptive Functional Safety Processing for Multi-Functional Automotive Cyber-Physical Systems
Guoqi Xie (Hunan University); Wei Wu (Hunan University); Yang Bai (Hunan University); Yanwen Li (China Automotive Technology and Research Center); Renfa Li (Hunan University); Keqin Li (State University of New York)
Functional safety for automotive cyber-physical systems (ACPS) has been studied in recent years; however, these studies merely consider the change in the exposure of the functional safety classification and assume that the driver's controllability in the functional safety classification is always fixed and uncontrollable. In fact, the driver's controllability is variable during the runtime phase, such that the execution process of safety-critical automotive functions is a human-interaction-aware process between the driver and the ACPS. To adapt to changes in the driver's controllability, this paper studies human-interaction-aware adaptive functional safety processing for multi-functional ACPS in two main phases. In the design phase, where the driver's controllability is fixed at the highest level (i.e., C3), we obtain an approximately optimal priority sequence of safety-critical functions, without exhausting all sequences, by proposing a refined exploration method. In the runtime phase, where the driver's controllability level is variable (i.e., C0, C1, C2, or C3), we propose a human-interaction-aware task remapping method to autonomously respond to changes in the driver's controllability.
Examples and experiments confirm that the proposed adaptive functional safety processing can reduce the overall task redundancy of safety-critical automotive functions while meeting their functional safety requirements, shorten the overall response time of safety-critical automotive functions, and increase the slack time for non-safety-critical automotive functions.

Efficient Multi-Factor User Authentication Protocol with Forward Secrecy for Real-Time Data Access in WSNs
Ding Wang (Peking University); Ping Wang (Peking University); Chenyu Wang (Beijing University of Posts and Telecommunications)
It is challenging to design a secure and efficient multi-factor authentication scheme for real-time user data access in wireless sensor networks (WSNs). On the one hand, such real-time applications are generally security-critical, and various security goals need to be met. On the other hand, sensor nodes and users' mobile devices are typically of a resource-constrained nature, and expensive cryptographic primitives cannot be used. In this work, we first revisit four foremost multi-factor authentication schemes, i.e., Srinivas et al.'s (IEEE TDSC'18), Amin et al.'s (JNCA'18), Li et al.'s (JNCA'18) and Li et al.'s (IEEE TII'18) schemes, and use them as case studies to reveal the difficulties and challenges in getting the design of a multi-factor authentication scheme for WSNs right. We identify the root causes of their failures to achieve truly multi-factor security and forward secrecy. We further propose a robust multi-factor authentication scheme that makes use of the imbalanced computational nature of the RSA cryptosystem, particularly suitable for scenarios where sensor nodes (but not the user's device) are the main energy bottleneck. Comparison results demonstrate the superiority of our scheme. As far as we know, it is the first one that can satisfy all twelve criteria of the state-of-the-art evaluation metric under the harshest adversary model so far.

Resilient Clock Synchronization using Power Grid Voltage
Dima Rabadi (Institute for Infocomm Research, A*STAR); Rui Tan (Nanyang Technological University); David K.Y. Yau (Singapore University of Technology and Design); Sreejaya Viswanathan (Advanced Digital Science Center); Hao Zheng (Zhejiang University); Peng Cheng (Zhejiang University)
Many clock synchronization protocols based on message passing, e.g., the Network Time Protocol (NTP), assume symmetric network delays to estimate the one-way packet transmission time as half of the round-trip time. As a result, asymmetric network delays caused by either network congestion or malicious packet delays can cause significant synchronization errors. This paper exploits the sinusoidal voltage signals of an alternating current (ac) power grid to limit the impact of asymmetric network delays on these clock synchronization protocols. Our extensive measurements show that the voltage signals at geographically distributed locations in a city are highly synchronized. Leveraging calibrated voltage phases, we develop a new clock synchronization protocol, which we call the Grid Time Protocol (GTP), that allows direct measurement of one-way packet transmission times between its slave and master nodes, subject to an analytic condition that can be easily verified in practice. The direct measurements render GTP resilient against asymmetric network delays under this condition.
A prototype implementation of GTP maintains sub-ms synchronization accuracy for two nodes tens of kilometers apart, in Singapore and Hangzhou, China, respectively, in the presence of malicious packet delays. Simulations driven by real network delay measurements between Singapore and Hangzhou under both normal and congested network conditions also show the synchronization accuracy improvement achieved by GTP. We believe that GTP is suitable for grid-connected distributed systems that are currently served by NTP but desire higher resilience against unfavorable network dynamics and packet delay attacks.

Test Specification and Generation for Connected and Autonomous Vehicle in Virtual Environment
BaekGyu Kim (Toyota InfoTechnology Center, U.S.A.); Takoto Masuda (Toyota InfoTechnology Center, U.S.A.); Shinichi Shiraishi (Toyota InfoTechnology Center)
The trend of connected/autonomous features adds significant complexity to traditional automotive systems in order to improve driving safety and comfort. Engineers are facing significant challenges in designing test environments that are more complex than ever. We propose a test framework that allows one to automatically generate various virtual road environments from a path specification and a behavior specification. The path specification characterizes the geometric paths over which an environmental object (e.g., roadways or pedestrians) needs to be visualized or to move. We characterize this aspect in the form of linear or nonlinear constraints on 3-dimensional coordinates. Then, we introduce a test coverage measure, called area coverage, to quantify the quality of generated paths in terms of how wide an area the generated paths can cover. We propose an algorithm that automatically generates such paths using an SMT (Satisfiability Modulo Theories) solver. On the other hand, the behavioral specification characterizes how an environmental object changes its mode over time by interacting with other objects (e.g., a pedestrian waits for a signal or starts crossing). We characterize this aspect in the form of timed automata. Then, we introduce a test coverage measure, called edge/location coverage, to quantify the quality of the generated mode changes in terms of how many modes or transitions are visited. We propose a method that automatically generates many different mode changes using a model-checking method. To demonstrate the test framework, we developed a right-turn pedestrian warning system for intersection scenarios and generated many different types of pedestrian paths and behaviors to analyze the effectiveness of the system.
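The clock-synchronization abstract above turns on a single assumption worth making concrete: message-passing protocols such as NTP estimate the one-way packet transmission time as half of the round-trip time, so any delay asymmetry biases the computed clock offset. The sketch below is a minimal, illustrative rendering of that standard four-timestamp computation; it is not part of GTP or any paper listed here, and the timestamp values are hypothetical.

```python
# Minimal sketch of NTP-style offset estimation (illustrative only; not GTP).
# t1: client send time, t2: server receive time,
# t3: server send time,  t4: client receive time.

def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Return (clock_offset, round_trip_delay) under the symmetric-delay assumption."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # biased when delays are asymmetric
    delay = (t4 - t1) - (t3 - t2)            # total network round-trip time
    return offset, delay

# Hypothetical timestamps (seconds): the true clock offset is +0.050 s, but the
# forward path takes 0.030 s while the return path takes only 0.010 s.
offset, delay = ntp_offset_and_delay(t1=0.000, t2=0.080, t3=0.081, t4=0.041)
print(offset, delay)  # offset = 0.060 s, not the true 0.050 s: a 10 ms bias
```

The 10 ms error equals half the delay asymmetry, (0.030 − 0.010) / 2, which is exactly the failure mode that GTP's direct one-way measurements are designed to sidestep.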
Marginal Propensity to Save (MPS)
By Julia Kagan

What Is the Marginal Propensity to Save (MPS)?
In Keynesian economic theory, the marginal propensity to save (MPS) refers to the proportion of an aggregate raise in income that a consumer saves rather than spends on the consumption of goods and services. Put differently, the marginal propensity to save is the proportion of each added dollar of income that is saved rather than spent. MPS is a component of Keynesian macroeconomic theory and is calculated as the change in savings divided by the change in income, or as the complement of the marginal propensity to consume (MPC):

$$\text{Marginal Propensity to Save} = \frac{\text{Change in Saving}}{\text{Change in Income}} = 1 - \text{MPC}$$

MPS is depicted by a savings line: a sloped line created by plotting change in savings on the vertical y-axis and change in income on the horizontal x-axis.

Key takeaways:
- Marginal propensity to save is the proportion of an increase in income that gets saved instead of spent on consumption.
- MPS varies by income level; it is typically higher at higher incomes.
- MPS helps determine the Keynesian multiplier, which describes the effect of increased investment or government spending as an economic stimulus.

Understanding the Marginal Propensity to Save (MPS)
Suppose you receive a $500 bonus with your paycheck. You suddenly have $500 more in income than you did before. If you decide to spend $400 of this marginal increase on a new business suit and save the remaining $100, your marginal propensity to save is 0.2 ($100 change in saving divided by $500 change in income). The other side of the marginal propensity to save is the marginal propensity to consume, which shows how much a change in income affects purchasing levels. The two always sum to one:

$$\text{Marginal Propensity to Consume} + \text{Marginal Propensity to Save} = 1$$

In this example, where you spent $400 of your $500 bonus, the marginal propensity to consume is 0.8 ($400 divided by $500). Adding MPS (0.2) to MPC (0.8) equals 1. The marginal propensity to save is generally assumed to be higher for wealthier individuals than it is for poorer individuals. Given data on household income and household saving, economists can calculate households' MPS by income level. This calculation is important because MPS is not constant; it varies by income level. Typically, the higher the income, the higher the MPS, because as wealth increases, so does the ability to satisfy needs and wants, and so each additional dollar is less likely to go toward additional spending. However, the possibility remains that a consumer might alter savings and consumption habits with an increase in pay. Naturally, with an increase in salary comes the ability to cover household expenses more easily, allowing for more leeway to save. With a higher salary also comes access to goods and services that require greater expenditures. This may include the procurement of higher-end or luxury vehicles or relocation to a new, pricier residence. If economists know what consumers' MPS is, they can determine how increases in government spending or investment spending will influence saving. MPS is used to calculate the expenditures multiplier using the formula: 1/MPS.
The expenditures multiplier tells us how changes in consumers' marginal propensity to save influence the rest of the economy. The smaller the MPS, the larger the multiplier and the more economic impact a change in government spending or investment will have.
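As a quick illustration of the relationships above, here is a small Python sketch using the article's own $500 bonus example; the function name is just for illustration and is not part of any standard library.

```python
# Illustrative sketch of the MPS/MPC relationships described above.

def marginal_propensity_to_save(change_in_saving: float, change_in_income: float) -> float:
    """MPS = change in saving divided by change in income."""
    return change_in_saving / change_in_income

# The article's example: a $500 bonus, of which $400 is spent and $100 saved.
mps = marginal_propensity_to_save(change_in_saving=100, change_in_income=500)
mpc = 1 - mps                # MPC + MPS = 1
multiplier = 1 / mps         # expenditures (Keynesian) multiplier

print(mps, mpc, multiplier)  # 0.2, 0.8, 5.0
```

With an MPS of 0.2, each dollar of new government or investment spending would, in this simple Keynesian model, support five dollars of total output, which is why a smaller MPS implies a larger multiplier.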
Mupirocin-resistant Staphylococcus aureus in Africa: a systematic review and meta-analysis Adebayo O. Shittu1, Mamadou Kaba2,3, Shima M. Abdulgader2, Yewande O. Ajao1, Mujibat O. Abiola1 & Ayodele O. Olatimehin1 Mupirocin is widely used for nasal decolonization of Staphylococcus aureus to prevent subsequent staphylococcal infection in patients and healthcare personnel. However, the prolonged and unrestricted use has led to the emergence of mupirocin-resistant (mupR) S. aureus. The aim of this systematic review was to investigate the prevalence, phenotypic and molecular characteristics, and geographic spread of mupR S. aureus in Africa. We examined five electronic databases (EBSCOhost, Google Scholar, ISI Web of Science, MEDLINE, and Scopus) for relevant English articles on screening for mupR S. aureus from various samples in Africa. In addition, we performed random effects meta-analysis of proportions to determine the pooled prevalence of mupR S. aureus in Africa. The search was conducted until 3 August 2016. We identified 43 eligible studies of which 11 (26%) were obtained only through Google Scholar. Most of the eligible studies (28/43; 65%) were conducted in Nigeria (10/43; 23%), Egypt (7/43; 16%), South Africa (6/43; 14%) and Tunisia (5/43; 12%). Overall, screening for mupR S. aureus was described in only 12 of 54 (22%) African countries. The disk diffusion method was the widely used technique (67%; 29/43) for the detection of mupR S. aureus in Africa. The mupA-positive S. aureus isolates were identified in five studies conducted in Egypt (n = 2), South Africa (n = 2), and Nigeria (n = 1). Low-level resistance (LmupR) and high-level resistance (HmupR) were both reported in six human studies from South Africa (n = 3), Egypt (n = 2) and Libya (n = 1). Data on mupR-MRSA was available in 11 studies from five countries, including Egypt, Ghana, Libya, Nigeria and South Africa. The pooled prevalence (based on 11 human studies) of mupR S. aureus in Africa was 14% (95% CI =6.8 to 23.2%). The proportion of mupA-positive S. aureus in Africa ranged between 0.5 and 8%. Furthermore, the frequency of S. aureus isolates that exhibited LmupR, HmupR and mupR-MRSA in Africa were 4 and 47%, 0.5 and 38%, 5 and 50%, respectively. The prevalence of mupR S. aureus in Africa (14%) is worrisome and there is a need for data on administration and use of mupirocin. The disk diffusion method which is widely utilized in Africa could be an important method for the screening and identification of mupR S. aureus. Moreover, we advocate for surveillance studies with appropriate guidelines for screening mupR S. aureus in Africa. Staphylococcus aureus is a well-recognized human pathogen that is implicated in a wide array of superficial, invasive and toxigenic infections [1]. Meta-analyses of published studies have provided evidence that S. aureus nasal carriage is an important risk factor for subsequent infection among patients with surgical site infections and atopic dermatitis [2, 3]. Other high-risk groups include patients colonized with methicillin-resistant Staphylococcus aureus (MRSA) undergoing dialysis, and patients admitted in the intensive care unit [4, 5]. Consequently, infection prevention strategies such as nasal decolonization are employed to minimize the occurrence of staphylococcal infection and reduce the risk of transmission in healthcare settings [6, 7]. 
Mupirocin (2%) nasal ointment, alone or in combination with 4% chlorhexidine (CHG) based body wash, is considered the main decolonization strategy for S. aureus carriage [8, 9]. Mupirocin is a naturally occurring antibiotic produced by Pseudomonas fluorescens that interferes with protein synthesis by competitive inhibition of the bacterial isoleucyl-tRNA synthetase (IRS) [10, 11]. It gained prominence in the mid-1990s for the eradication of S. aureus nasal carriage due to its effectiveness, safety and cost [12]. Mupirocin-resistant (mupR) S. aureus was first reported in the United Kingdom in 1987 [13]. Since then, it has been reported in several countries worldwide [14,15,16,17]. The emergence of mupR S. aureus has been associated with unrestricted policies and use of mupirocin for long periods in health care settings [8, 18]. Decolonization failure in patients with S. aureus carriage is associated with high-level mupirocin resistance (HmupR; minimum inhibitory concentration [MIC] ≥ 512 μg/ml), whereas the significance of low-level mupirocin resistance (LmupR; MIC 8–64 μg/ml) is still unclear [7, 19]. LmupR is mediated through point mutation (largely V588F and V631F) in the native isoleucyl-tRNA synthetase (ileS) gene [20]. In contrast, HmupR is mainly attributed to the acquisition of plasmids with the mupA (or ileS2) gene encoding an additional IRS with no affinity for mupirocin [11, 21]. Another determinant for HmupR is the acquisition of a plasmid-mediated mupB gene [22]. There is no data summarizing reports on screening, prevalence, characterization, and geographic spread of mupR S. aureus in Africa. This systematic review evaluated published articles that assessed for mupirocin resistance in African S. aureus isolates. The findings from this systematic review highlight the need to develop an early warning system, including harmonized strategies for the prompt screening and identification of mupR S. aureus in Africa.

Methods
Literature search strategy
The relevant English articles from human and animal investigations were retrieved by three authors (YA, SA, and AS) from five electronic databases (EBSCOhost, Google Scholar, ISI Web of Science, MEDLINE, and Scopus). The search terms for each database are reported in Table 1. The literature search was concluded on 3 August 2016.

Table 1. Keywords used to identify eligible studies available in five biomedical databases.

Eligible article identification
The identification of the eligible articles was conducted according to the guidelines for preferred reporting items for systematic reviews and meta-analyses (PRISMA) [23]. We defined an eligible article as a peer-reviewed publication that (i) included mupirocin in the antibiotic susceptibility testing of S. aureus isolates, and (ii) employed phenotypic (disc diffusion, E-test, minimum inhibitory concentration (MIC), VITEK and other automated methods) and/or molecular (conventional or real-time polymerase chain reaction (PCR)) techniques. International multicentre studies that included African countries were also eligible for inclusion.

Data extraction and analysis
The relevant data were extracted from each of the eligible articles included in this systematic review. A study that analysed S. aureus isolates from another investigation but answered a different research question was considered, together with that investigation, as one study (Table 2). We performed three levels of analysis (Fig. 1). First, to understand the characteristics and geographic spread of mupR S.
aureus in Africa, studies that included mupirocin in the antibiotic susceptibility testing and employed phenotypic and/or molecular techniques were identified. Secondly, the prevalence of S. aureus with the mupA gene, isolates that expressed LmupR and HmupR, and mupR-MRSA in Africa were derived from each eligible study as follows:

$$\text{mupA-positive } S.\ aureus = \frac{\text{number of mupA-positive } S.\ aureus \text{ isolates}}{\text{total number of isolates screened with mupirocin}}$$

$$S.\ aureus \text{ that expressed LmupR} = \frac{\text{number of } S.\ aureus \text{ isolates with LmupR}}{\text{total number of isolates screened with mupirocin}}$$

$$S.\ aureus \text{ that expressed HmupR} = \frac{\text{number of } S.\ aureus \text{ isolates with HmupR}}{\text{total number of isolates screened with mupirocin}}$$

$$\text{mupR-MRSA} = \frac{\text{number of mupR-MRSA isolates}}{\text{total number of isolates screened with mupirocin}}$$

Table 2. Characteristics of the 43 eligible studies on screening for mupirocin resistance in Staphylococcus aureus from various sources in Africa.
Fig. 1. The Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) flow diagram.

Thirdly, to estimate the prevalence of mupR S. aureus in humans, studies that employed at least one of the screening methods with a defined breakpoint for mupirocin resistance were included in the meta-analysis. The StatsDirect statistical software version 3.0.165 (StatsDirect Ltd., England, 2016) was utilized to assess the heterogeneity of the eligible studies included in the meta-analysis (Cochran Q-test) [24], and to ascertain the inconsistency across the studies (I² statistic) [25]. The random effects model was used to determine the pooled prevalence of mupR S. aureus in Africa. The criterion for statistical significance for heterogeneity was set at alpha = 0.05. The risk of publication bias was assessed and visualized by a funnel plot [26, 27].

Results
Eligible studies from electronic database search
We identified 43 reports (Table 1), of which 34 studies investigated only human samples. The remaining nine studies assessed samples from only animals (n = 5), human and environmental sources (n = 2), human and animal sources (n = 1), and cockroaches (n = 1). Most of the eligible studies (32/43; 74%) were obtained from EBSCOhost, ISI Web of Science, MEDLINE, and Scopus. The remaining studies (11/43; 26%) were obtained only through Google Scholar and consisted of studies conducted in Egypt [28,29,30,31], South Africa [32,33,34], Nigeria [35, 36], Ethiopia [37] and Kenya [38].

Screening and identification of mupR S. aureus in Africa
Only 12 of the 54 (22%) African countries reported data on screening for mupR S. aureus (Fig. 2). The first published article indicated that mupirocin had been in use in Africa at least from the late 1980s [39]. Most of these studies (28/43; 65%) were conducted in Nigeria (10/43; 23%), Egypt (7/43; 16%), South Africa (6/43; 14%) and Tunisia (5/43; 12%) (Fig. 2). MupR S.
aureus was mainly identified through the disk diffusion method (29/43; 67%). The guidelines by the Clinical and Laboratory Standards Institute (CLSI), previously known as the National Committee for Clinical Laboratory Standards (NCCLS), were broadly used in Africa (Table 2). However, a number of studies [28, 29, 31, 33, 36, 40,41,42,43,44,45,46] utilized the disk diffusion method with CLSI guidelines that had no zone diameter breakpoint for mupirocin. Moreover, some studies [47,48,49] did not provide information on the year of publication of the CLSI guidelines. MupR S. aureus was reported in six African countries, including South Africa [32,33,34, 46, 50, 51], Egypt [29,30,31, 52], Nigeria [36, 44, 53], Ghana [54, 55], Libya [56, 57] and Ethiopia [37] (Fig. 2; Table 2). MupA-positive S. aureus was detected in five studies from Egypt [30, 52], South Africa [33, 50] and Nigeria [53]. LmupR and HmupR were both reported in six human studies conducted in South Africa [32, 33, 50], Egypt [30, 52] and Libya [57]. The mupR-MRSA isolates were identified in South Africa [32, 34, 50, 51], Egypt [30, 31, 52], Libya [56, 57], Ghana [55] and Nigeria [36] (Table 3). MupR-MRSA was not reported from MRSA isolates recovered from studies conducted in Egypt [28, 58], Tunisia [59, 60] and Algeria [47].

Fig. 2. Studies on screening for mupirocin-resistant Staphylococcus aureus in Africa.
Table 3. Prevalence of mupirocin-resistant S. aureus from various sources in Africa based on phenotypic and molecular methods.

An assessment of data on mupR S. aureus at the regional level is described as follows (Fig. 3).

Fig. 3. Geographic distribution of mupirocin-resistant (mupR) Staphylococcus aureus in Africa. Countries (in green) in which mupR S. aureus has been investigated but not reported. Countries (in red) in which mupR S. aureus has been investigated and reported.

Seventeen eligible studies were recorded from North Africa, including Egypt [28,29,30,31, 40, 52, 58], Tunisia [41,42,43, 59, 60], Libya [56, 57, 61], Algeria [47] and Morocco [62]. MupR S. aureus was reported in six studies conducted in two North African countries: Egypt [29,30,31, 52] and Libya [56, 57]. PCR detection of the mupA gene was performed in only two studies conducted in Egypt [30, 52]. In addition, one of the reports identified two mupA-positive MRSA isolates that exhibited LmupR [30]. MupR S. aureus was not detected in Tunisia [41,42,43, 59, 60], Algeria [47], and Morocco [62]. In West and Central Africa, S. aureus resistance to mupirocin was investigated in Nigeria [35, 36, 44, 48, 49, 53, 63,64,65,66] and Ghana [54, 55, 67, 68]. Only two studies from Ghana reported on mupR S. aureus [54, 55]. In Nigeria, three studies (two from only human sources and one from both animal and human samples) reported on S. aureus isolates that demonstrated HmupR [36, 44, 53]. MupR S. aureus was not detected in studies conducted in Gabon [69], and São Tomé and Príncipe [70]. In East Africa, we identified four eligible studies, conducted in Kenya [38, 71, 72] and Ethiopia [37]. A report on the role of cockroaches as potential vectors of foodborne pathogens in Ethiopia identified 17 mupR S. aureus isolates [37]. All the S. aureus isolates (one animal and two human studies) from Kenya were susceptible to mupirocin [38, 71, 72]. The six studies reported from Southern Africa were from South Africa and consisted of two single-centre studies [34, 46] and four multicentre studies [32, 33, 50, 51]. MupR S. aureus was identified in all the reports, while mupA-positive S.
aureus isolates were noted in only two studies [33, 50].

Prevalence of mupR S. aureus in Africa
The random-effects pooled prevalence of mupR S. aureus in Africa is 14% (95% CI = 6.8 to 23.2%). This was calculated based on 11 heterogeneous human studies (Figs. 4 and 5) conducted in South Africa [32, 33, 50, 51], Ghana [54, 55], Egypt [30, 52], Libya [56, 57] and Nigeria [53]. In Africa, the proportion of S. aureus isolates with the mupA gene, and of those that expressed LmupR and HmupR, ranged between 0.5 and 8%, 4 and 47%, and 0.5 and 38%, respectively. The frequency of mupR-MRSA isolates ranged between 5 and 50% (Table 3).

Fig. 4. Bias assessment (funnel) plot for studies assessing rates of mupirocin-resistant Staphylococcus aureus in Africa. Random effects (DerSimonian-Laird). Pooled proportion = 0.139303 (95% CI = 0.067511 to 0.23165). Bias indicators: Begg-Mazumdar, Kendall's tau = 0.2, P = 0.4454; Egger, bias = 4.771137 (95% CI = −2.517874 to 12.060148), P = 0.1728; Harbord, bias = 2.014783 (92.5% CI = −5.90181 to 9.931377), P = 0.6208.
Fig. 5. Pooled estimate of proportions (human studies) for mupirocin-resistant Staphylococcus aureus in Africa.

Association of mupR S. aureus with mupirocin use in Africa
There is no data on the use of mupirocin as an agent for S. aureus decolonization and its association with mupR S. aureus in Africa.

MupR S. aureus and biofilm production
A report from Egypt noted that mupR-MRSA were moderate to strong biofilm producers [52].

MupR S. aureus and co-resistance to other antibiotics
In this systematic review, two studies (conducted in Egypt and South Africa) showed that mupR S. aureus was associated with multi-drug resistance [30, 33].

Molecular characterization of mupR S. aureus in Africa
Only three studies provided molecular data on mupR S. aureus in Africa [45, 54, 55]. A report provided evidence of a 35 kb (non-conjugative) and a 41.1 kb (conjugative) plasmid encoding mupA in S. aureus isolates from Nigeria and South Africa [45]. It also described an MRSA clone that demonstrated LmupR in South Africa. LmupR was also identified among MRSA isolates assigned to ST36, ST88, and ST789 in Ghana [55]. A cross-sectional S. aureus study identified a methicillin-susceptible S. aureus (MSSA) strain with HmupR from a 51-year-old member of hospital staff in Ghana [54]. Molecular characterization indicated that the strain (spa type t4805) was PVL-positive.

Discussion
This is the first systematic review on mupR S. aureus in Africa, and it clearly showed the paucity of data on the continent. Nevertheless, this study indicated a high prevalence (14%; 95% CI = 6.8 to 23.2%) of mupR S. aureus in Africa. These observations support the need for mupR S. aureus surveillance data to provide information on its epidemiology and clinical significance in Africa. It is noteworthy that Google Scholar was valuable in the identification of several eligible studies [28,29,30,31,32,33,34,35,36,37,38]. We observed that 26% (11/43) of the eligible studies were identified from African journals which were not indexed in commonly used electronic databases. Google Scholar has been considered a useful supplement to other electronic databases for systematic review searches [73], including recent meta-analyses of published studies on S. aureus in Africa [74, 75]. The phenotypic methods for the screening and identification of mupR S. aureus include disc diffusion (two-disc strategy: 5 μg and 200 μg), agar dilution, broth micro-dilution and E-test [19].
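For readers who want to reproduce the kind of random-effects pooling reported in Figs. 4 and 5, the sketch below implements a standard DerSimonian-Laird pooled proportion together with Cochran's Q and the I² statistic. It is a minimal illustration only, not the StatsDirect implementation used in this review, and the per-study counts are hypothetical rather than the 11 studies actually pooled here.

```python
import math

def dersimonian_laird_pooled_proportion(events, totals, z=1.96):
    """DerSimonian-Laird random-effects pooled proportion (minimal sketch).

    events/totals: per-study counts of mupR S. aureus isolates and of
    isolates screened. For simplicity this uses the raw proportion with
    binomial variance p(1-p)/n (so it assumes no study has 0% or 100%).
    """
    p = [e / n for e, n in zip(events, totals)]
    v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]   # within-study variance
    w = [1 / vi for vi in v]                              # fixed-effect weights

    # Fixed-effect pooled estimate and Cochran's Q heterogeneity statistic.
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))
    df = len(p) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Between-study variance (tau^2), then random-effects re-weighting.
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - z * se, pooled + z * se), q, i_squared

# Hypothetical counts for five studies (NOT data from this review).
events = [12, 3, 40, 7, 25]
totals = [60, 90, 120, 50, 200]
pooled, ci, q, i2 = dersimonian_laird_pooled_proportion(events, totals)
print(f"pooled = {pooled:.3f}, 95% CI = {ci[0]:.3f} to {ci[1]:.3f}, "
      f"Q = {q:.1f} on {len(events) - 1} df, I² = {i2:.0f}%")
```

A large Q relative to its degrees of freedom (equivalently, a high I²) is what justifies the random-effects model chosen in this review, since it inflates the between-study variance term and widens the pooled confidence interval accordingly.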
In this study, the disk diffusion method and the CLSI (formerly NCCLS) guidelines were the strategies mainly applied to detect mupR S. aureus in Africa. However, we observed certain inconsistencies [28, 29, 31, 33, 36, 40,41,42,43,44,45,46,47,48,49]. For instance, a number of studies [28, 29, 31, 33, 36, 40,41,42, 44,45,46] applied the disk diffusion method with CLSI guidelines that had no breakpoint values for mupirocin. The 2017 CLSI guidelines recommend the use of the 200 μg disk to differentiate between HmupR and the absence of HmupR (i.e. no zone = HmupR; any zone = absence of HmupR) [76]. The 200 μg disk with a different breakpoint (susceptible ≥ 30 mm, resistant < 18 mm) is also endorsed for the differentiation between HmupR and the absence of HmupR in the latest versions (accessed 28 May 2018) of the European Committee on Antimicrobial Susceptibility Testing (EUCAST) and the Comité de l'antibiogramme de la Société Française de Microbiologie (CA-SFM) [77, 78]. The breakpoint values for the detection of LmupR and its differentiation from HmupR are not provided in these documents (CA-SFM, CLSI, and EUCAST). Despite this limitation, the disk diffusion method in conjunction with any of these guidelines could at least be valuable for the preliminary screening and identification of HmupR S. aureus in Africa. MRSA decolonization failure is of clinical significance as it is often attributed to persistence or re-colonization associated with isolates exhibiting HmupR, while the significance of LmupR is not clear [7, 19, 79]. In this review, the prevalence of S. aureus that exhibited LmupR, HmupR and mupR-MRSA in Africa was predicated on a range of methods using different guidelines. We suggest that surveillance data from Africa be established on harmonized guidelines to enhance quality assurance and comparison at the continental and global level. We noted a prevalence of mupR-MRSA ranging between 5 and 50% in Africa (Table 3). This is of serious concern. Specifically, the relationship between mupirocin resistance and MRSA has important consequences for infection control measures and the effectiveness of decolonization strategies [8]. MupR-MRSA could limit the choices available for the control and prevention of healthcare-associated MRSA infections [7, 8]. Therefore, surveillance studies are important to investigate the emergence and spread of mupirocin resistance in hospital settings in Africa. This is particularly important among patients at high risk of MRSA infections, including patients in dermatology, dialysis and intensive care units. In addition, there is the need for more data on the molecular characterization of mupR S. aureus in Africa [45, 54, 55]. For instance, whole genome sequencing (WGS) will assist in understanding the transmission dynamics of mupR S. aureus in Africa. Moreover, WGS data will allow comprehensive investigation of the genetic basis for LmupR mutations (which are largely due to V588F and V631F in the native gene (ileS)) and mupB-positive S. aureus in Africa. Language bias was the main limitation of this systematic review, as we did not include studies published in French, Portuguese, Arabic and Spanish. This study showed the need for more epidemiological data to understand the transmission, burden and risk factors associated with mupR S. aureus in Africa. In addition, there is a need for data on the administration and use of mupirocin in community and hospital settings in Africa. This is important in antibiotic stewardship to mitigate the emergence and spread of mupR S. aureus in Africa.
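Referring back to the disk-diffusion reading rules discussed above (the CLSI 200 μg disk rule and the EUCAST/CA-SFM breakpoints of susceptible ≥ 30 mm and resistant < 18 mm), the snippet below encodes those two interpretations side by side. It is only an illustrative sketch of the published breakpoints as cited in this review, not a clinical decision tool; in particular, labelling the 18-29 mm EUCAST/CA-SFM range as "indeterminate" is our simplifying assumption, since the source guidelines cited here give no rule for that span.

```python
def interpret_mupirocin_200ug(zone_mm: float, guideline: str = "CLSI") -> str:
    """Interpret a 200 ug mupirocin disk zone diameter (illustrative sketch).

    CLSI (2017): no inhibition zone = high-level resistance (HmupR);
                 any zone = absence of HmupR.
    EUCAST/CA-SFM: susceptible >= 30 mm; resistant < 18 mm; values in
                   between are reported here as indeterminate (assumption).
    """
    if guideline == "CLSI":
        return "HmupR" if zone_mm == 0 else "absence of HmupR"
    if guideline in ("EUCAST", "CA-SFM"):
        if zone_mm >= 30:
            return "susceptible"
        if zone_mm < 18:
            return "resistant (HmupR)"
        return "indeterminate"
    raise ValueError(f"unknown guideline: {guideline}")

print(interpret_mupirocin_200ug(0))             # HmupR
print(interpret_mupirocin_200ug(25, "EUCAST"))  # indeterminate
```

Note also that neither interpretation can detect LmupR, which is exactly the gap in the current guidelines that this review highlights.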
Finally, this systematic review highlighted the need for harmonized guidelines to facilitate the comparison of data on mupR S. aureus from Africa.

Abbreviations
HmupR: High-level mupirocin resistance; LmupR: Low-level mupirocin resistance; MRSA: Methicillin-resistant Staphylococcus aureus; MSSA: Methicillin-susceptible S. aureus; mupR: Mupirocin-resistant; PCR: Polymerase chain reaction; PVL: Panton-Valentine Leucocidin; S. aureus: Staphylococcus aureus

References
1. Lowy FD. Staphylococcus aureus infections. N Engl J Med. 1998;339:520–32.
2. Levy PY, Ollivier M, Drancourt M, Raoult D, Argenson JN. Relation between nasal carriage of Staphylococcus aureus and surgical site infection in orthopedic surgery: the role of nasal contamination. A systematic literature review and meta-analysis. Orthop Traumatol Surg Res. 2013;99:645–51. https://doi.org/10.1016/j.otsr.2013.03.030.
3. Totté JE, van der Feltz WT, Hennekam M, van Belkum A, van Zuuren EJ, Pasmans SG. Prevalence and odds of Staphylococcus aureus carriage in atopic dermatitis: a systematic review and meta-analysis. Br J Dermatol. 2016;175:687–95. https://doi.org/10.1111/bjd.14566.
4. Zacharioudakis IM, Zervou FN, Ziakas PD, Mylonakis E. Meta-analysis of methicillin-resistant Staphylococcus aureus colonization and risk of infection in dialysis patients. J Am Soc Nephrol. 2014;25:2131–41. https://doi.org/10.1681/ASN.2013091028.
5. Ziakas PD, Anagnostou T, Mylonakis E. The prevalence and significance of methicillin-resistant Staphylococcus aureus colonization at admission in the general ICU setting: a meta-analysis of published studies. Crit Care Med. 2014;42:433–44. https://doi.org/10.1097/CCM.0b013e3182a66bb8.
6. Wertheim HF, Melles DC, Vos MC, van Leeuwen W, van Belkum A, Verbrugh HA, Nouwen JL. The role of nasal carriage in Staphylococcus aureus infections. Lancet Infect Dis. 2005;5:751–62. https://doi.org/10.1016/S1473-3099(05)70295-4.
7. Septimus EJ, Schweizer ML. Decolonization in prevention of health-care associated infections. Clin Microbiol Rev. 2016;29:201–22. https://doi.org/10.1128/CMR.00049-15.
8. Poovelikunnel T, Gethin G, Humphreys H. Mupirocin resistance: clinical implications and potential alternatives for the eradication of MRSA. J Antimicrob Chemother. 2015;70:2681–92. https://doi.org/10.1093/jac/dkv169.
9. Global guidelines for the prevention of surgical site infection. World Health Organization, Geneva. 2016. http://www.who.int/gpsc/ssi-prevention-guidelines/en/ Accessed 15 June 2017.
10. Fuller AT, Mellows G, Woolford M, Banks GT, Barrow KD, Chain EB. Pseudomonic acid: an antibiotic produced by Pseudomonas fluorescens. Nature. 1971;234:416–7.
11. Gilbart J, Perry CR, Slocombe B. High-level mupirocin resistance in Staphylococcus aureus: evidence for two distinct isoleucyl-tRNA synthetases. Antimicrob Agents Chemother. 1993;37:32–8.
12. Perl TM, Golub JE. New approaches to reduce Staphylococcus aureus nosocomial infection rates: treating S. aureus nasal carriage. Ann Pharmacother. 1998;32:S7–16.
13. Rahman M, Noble WC, Cookson B. Mupirocin resistant Staphylococcus aureus. Lancet. 1987;330:387–8. https://doi.org/10.1016/S0140-6736(87)92398-1.
14. Hughes J, Stabler R, Gaunt M, Karadag T, Desai N, Betley J, Ioannou A, Aryee A, Hearn P, Marbach H, Patel A, Otter JA, Edgeworth JD, Tosas AO. Clonal variation in high- and low-level phenotypic and genotypic mupirocin resistance of MRSA isolates in south-east London. J Antimicrob Chemother. 2015;70:3191–9. https://doi.org/10.1093/jac/dkv248.
15. Boswihi SS, Udo EE, Al-Sweih N.
Hayden MK, Lolans K, Haffenreffer K, Avery TR, Kleinman K, Li H, Kaganov RE, Lankiewicz J, Moody J, Septimus E, Weinstein RA, Hickok J, Jernigan J, Perlin JB, Platt R, Huang SS. Chlorhexidine and mupirocin susceptibility of methicillin-resistant Staphylococcus aureus isolates in the REDUCE-MRSA trial. J Clin Microbiol. 2016;54:2735–42.
Gostev V, Kruglov A, Kalinogorskaya O, Dmitrenko O, Khokhlova O, Yamamoto T, Lobzin Y, Ryabchenko I, Sidorenko S. Molecular epidemiology and antibiotic resistance of methicillin-resistant Staphylococcus aureus circulating in the Russian Federation. Infect Genet Evol. 2017;53:189–94. https://doi.org/10.1016/j.meegid.2017.06.006.
Hetem DJ, Bonten MJ. Clinical relevance of mupirocin resistance in Staphylococcus aureus. J Hosp Infect. 2013;85:249–56. https://doi.org/10.1016/j.jhin.2013.09.006.
Swenson JM, Wong B, Simor AE, Thomson RB, Ferraro MJ, Hardy DJ, Hindler J, Jorgensen J, Reller LB, Traczewski M, McDougal LK, Patel JB. Multicenter study to determine disk diffusion and broth microdilution criteria for prediction of high- and low-level mupirocin resistance in Staphylococcus aureus. J Clin Microbiol. 2010;48:2469–75. https://doi.org/10.1128/JCM.00340-10.
Antonio M, McFerran N, Pallen MJ. Mutations affecting the Rossman fold of isoleucyl-tRNA synthetase are correlated with low-level mupirocin resistance in Staphylococcus aureus. Antimicrob Agents Chemother. 2002;46:438–42. https://doi.org/10.1128/AAC.46.2.438-442.2002.
Hodgson JE, Curnock SP, Dyke KG, Morris R, Sylvester DR, Gross MS. Molecular characterization of the gene encoding high-level mupirocin resistance in Staphylococcus aureus J2870. Antimicrob Agents Chemother. 1994;38:1205–8. https://doi.org/10.1128/AAC.38.5.1205.
Seah C, Alexander DC, Louie L, Simor A, Low DE, Longtin J, Melano RG. MupB, a new high-level mupirocin resistance mechanism in Staphylococcus aureus. Antimicrob Agents Chemother. 2012;56:1916–20. https://doi.org/10.1128/AAC.05325-11.
Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6:e1000097. https://doi.org/10.1371/journal.pmed.1000097.
Cochran WG. The combination of estimates from different experiments. Biometrics. 1954;10:101–29.
Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327:557–60. https://doi.org/10.1136/bmj.327.7414.557.
Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315:629–34. https://doi.org/10.1136/bmj.315.7109.629.
Sterne JA, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J, Carpenter J, Rücker G, Harbord RM, Schmid CH, Tetzlaff J, Deeks JJ, Peters J, Macaskill P, Schwarzer G, Duval S, Altman DG, Moher D, Higgins JP. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011;342:1–8. https://doi.org/10.1136/bmj.d4002.
Salama MF. Comparative molecular analysis of community or health care associated methicillin-resistant Staphylococcus aureus. Egypt J Med Microbiol. 2006;15:371–80.
Taher S, Roshdy H. Prevalence of Panton-Valentine Leukocidin genes among Staphylococcus aureus isolates in Mansoura University hospitals. Egypt J Med Microbiol. 2009;18:97–108.
Wali I, Ouda N, El-Seidi E. Mupirocin resistance among methicillin resistant Staphylococcus aureus isolates in an Egyptian hospital. Egypt J Med Lab Sci. 2011;20:1–11.
Melake N, Zakaria AS, Ibrahim NH, Salama M, Mahmoud AZ. Prevalence of agr specificity groups among in vitro biofilm forming methicillin resistant Staphylococcus aureus strains isolated from nasal carriers. Int J Microbiol Res. 2014;5:76–84. https://doi.org/10.5829/idosi.ijmr.2014.5.2.83184.
Marais E, Aithma N, Perovic O, Oosthuyen WF, Musenge E, Dusé AG. Antimicrobial susceptibility of methicillin-resistant Staphylococcus aureus isolates from South Africa. S Afr Med J. 2009;99:170–3.
Wasserman E, Orth H, Senekal M, Harvey K. High prevalence of mupirocin resistance associated with resistance to other antimicrobial agents in Staphylococcus aureus isolated from patients in private health care, Western Cape. South Afr J Infect Dis. 2014;29:126–32.
Swe Swe K, Naidoo N, Jaglal P. Molecular epidemiology of a suspected methicillin-resistant Staphylococcus aureus outbreak in a renal unit of a central academic hospital in KwaZulu-Natal, South Africa. South Afr J Infect Dis. 2015;30:6–10.
Bamaiyi PH, Aniesona AT. Prevalence and antimicrobial susceptibility patterns of bovine and ovine Staphylococcus aureus isolates in Maiduguri, Nigeria. Adv Anim Vet Sci. 2013;1:59–64.
Mai-siyama IB, Okon KO, Adamu NB, Askira UM, Isyaka TM, Adamu SG, Mohammed A. Methicillin-resistant Staphylococcus aureus (MRSA) colonization rate among ruminant animals slaughtered for human consumption and contact persons in Maiduguri, Nigeria. Afr J Microbiol Res. 2014;8:2643–9. https://doi.org/10.5897/AJMR2014.6855.
Tachbele E, Erku W, Gebre-Michael T, Ashenafi M. Cockroach-associated food-borne bacterial pathogens from some hospitals and restaurants in Addis Ababa, Ethiopia: distribution and antibiograms. JRTPH. 2006;5:34–41.
Njage PMK, Dolci S, Jans C, Wangoh J, Lacroix C, Meile L. Phenotypic and genotypic antibiotic resistance patterns of Staphylococcus aureus from raw and spontaneously fermented camel milk. BJAST. 2013;3(3):87–98.
Rode H, Hanslo D, de Wet PM, Millar AJW, Cywes S. Efficacy of mupirocin in methicillin-resistant Staphylococcus aureus burn wound infection. Antimicrob Agents Chemother. 1989;33:1358–61.
Salem-Bekhit M. Phenotypic and genotypic characterization of nosocomial isolates of Staphylococcus aureus with reference to methicillin resistance. Trop J Pharm Res. 2014;13:1239–46. https://doi.org/10.4314/tjpr.v13i8.7.
Ben Slama K, Gharsa H, Klibi N, Jouini A, Lozano C, Gómez-Sanz E, Zarazaga M, Boudabous A, Torres C. Nasal carriage of Staphylococcus aureus in healthy humans with different levels of contact with animals in Tunisia: genetic lineages, methicillin resistance, and virulence factors. Eur J Clin Microbiol Infect Dis. 2011;30:499–508. https://doi.org/10.1007/s10096-010-1109-6.
Gharsa H, Slama KB, Lozano C, Gomez-Sanz E, Klibi N, Sallem RB, Gomez P, Zarazaga M, Boudabous A, Torres C. Prevalence, antibiotic resistance, virulence traits and genetic lineages of Staphylococcus aureus in healthy sheep in Tunisia. Vet Microbiol. 2012;156:367–73. https://doi.org/10.1016/j.vetmic.2011.11.009.
Gharsa H, Sallem RB, Slama KB, Gomez-Sanz E, Lazano C, Jouini A, Klibi N, Zarazaga M, Boudabous A, Torres C. High diversity of genetic lineages and virulence genes in nasal Staphylococcus aureus isolates from donkeys destined to food consumption in Tunisia with predominance of the ruminant associated CC133 lineage. BMC Vet Res. 2012;8:203. https://doi.org/10.1186/1746-6148-8-203.
Olonitola OS, Inabo HI, Olayinka BO, Bugo ID. Nasal carriage of methicillin-resistant Staphylococcus aureus by primary school pupils in a university staff school, Zaria, Nigeria. Int J Biol Chem Sci. 2007;1:71–5. https://doi.org/10.4314/ijbcs.v1i1.39701.
Shittu AO, Udo EE, Lin J. Phenotypic and molecular characterization of Staphylococcus aureus isolates expressing low- and high-level mupirocin resistance in Nigeria and South Africa. BMC Infect Dis. 2009;9:10. https://doi.org/10.1186/1471-2334-9-10.
Zinn CS, Westh H, Rosdahl VT; SARISA Study Group. An international multicenter study of antimicrobial resistance and typing of hospital Staphylococcus aureus isolates from 21 laboratories in 19 countries or states. Microb Drug Resist. 2004;10:160–8. https://doi.org/10.1089/1076629041310055.
Ouchenane Z, Smati F, Rolain J-M, Raoult D. Molecular characterization of methicillin-resistant Staphylococcus aureus isolates in Algeria. Pathol Biol (Paris). 2011;59:e129–32. https://doi.org/10.1016/j.patbio.2009.11.004.
Okon KO, Basset P, Uba A, Lin J, Oyawoye B, Shittu AO, Blanc DS. Co-occurrence of predominant Panton-Valentine Leukocidin-positive sequence type (ST) 152 and multidrug-resistant ST 241 Staphylococcus aureus clones in Nigerian hospitals. J Clin Microbiol. 2009;47:3000–3. https://doi.org/10.1128/JCM.01119-09.
Raji A, Ojemhen O, Umejiburu U, Ogunleye A, Blanc D, Basset P. High genetic diversity of Staphylococcus aureus in a tertiary care hospital in southwest Nigeria. Diagn Microbiol Infect Dis. 2013;77:367–9. https://doi.org/10.1016/j.diagmicrobio.2013.08.030.
Shittu AO, Lin J. Antimicrobial susceptibility patterns and characterization of clinical isolates of Staphylococcus aureus in KwaZulu-Natal province, South Africa. BMC Infect Dis. 2006;6:125. https://doi.org/10.1186/1471-2334-6-125.
Perovic O, Iyaloo S, Kularatne R, Lowman W, Bosman N, Wadula J, Seetharam S, Duse A, Mbelle N, Bamford C, Dawood H, Mahabeer Y, Bhola P, Abrahams S, Singh-Moodley A. Prevalence and trends of Staphylococcus aureus bacteraemia in hospitalized patients in South Africa, 2010–2012: laboratory-based surveillance mapping of antimicrobial resistance and molecular epidemiology. PLoS One. 2015;10:e0145429. https://doi.org/10.1371/journal.pone.0145429.
Barakat GI, Nabil YM. Correlation of mupirocin resistance with biofilm production in methicillin-resistant Staphylococcus aureus from surgical site infections in a tertiary centre, Egypt. J Glob Antimicrob Resist. 2016;4:16–20. https://doi.org/10.1016/j.jgar.2015.11.010.
Shittu A, Lin J, Kolawole D. Antimicrobial susceptibility patterns of Staphylococcus aureus and characterization of MRSA in southwestern Nigeria. Wounds. 2006;18:77–84.
Egyir B, Guardabassi L, Nielsen SS, Larsen J, Addo KK, Newman MJ, Larsen AR. Prevalence of nasal carriage and diversity of Staphylococcus aureus among inpatients and hospital staff at Korle Bu Teaching Hospital, Ghana. J Glob Antimicrob Resist. 2013;1:189–93. https://doi.org/10.1016/j.jgar.2013.05.006.
Egyir B, Guardabassi L, Monecke S, Addo KK, Newman MJ, Larsen AR. Methicillin-resistant Staphylococcus aureus strains from Ghana include USA300. J Glob Antimicrob Resist. 2015;3:26–30. https://doi.org/10.1016/j.jgar.2014.11.006.
Ahmed MO, Abuzweda AR, Alghazali MH, Elramalli AK, Amri SG, Aghila ES, Abouzeed YM. Misidentification of methicillin-resistant Staphylococcus aureus (MRSA) in hospitals in Tripoli, Libya. Libyan J Med. 2010;5:5230. https://doi.org/10.3402/ljm.v5i0.5230.
Ahmed MO, Elramalli AK, Amri SG, Abuzweda AR, Abouzeed YM. Isolation and screening of methicillin-resistant Staphylococcus aureus from health care workers in Libyan hospitals. EMHJ. 2012;18:37–42.
Enany S, Yaoita E, Yoshida Y, Enany M, Yamamoto T. Molecular characterization of Panton-Valentine Leukocidin-positive community-acquired methicillin-resistant Staphylococcus aureus isolates in Egypt. Microbiol Res. 2010;165:152–62. https://doi.org/10.1016/j.micres.2009.03.005.
Ben Nejma MB, Mastouri M, Jrad BBH, Nour M. Characterization of ST80 Panton-Valentine Leukocidin-positive community-acquired methicillin-resistant Staphylococcus aureus clone in Tunisia. Diagn Microbiol Infect Dis. 2013;77:20–4. https://doi.org/10.1016/j.diagmicrobio.2008.02.010.
Ben Nejma MB, Merghni A, Mastouri M. Genotyping of methicillin resistant Staphylococcus aureus strains isolated from hospitalized children. Int J Pediatr. 2014;2014:314316. https://doi.org/10.1155/2014/314316.
Ferghani NEL. An open study of mupirocin in Libyan patients with skin infections. J Int Med Res. 1995;23:508–17. https://doi.org/10.1177/030006059502300615.
Souly K, Ait el Kadi M, Lhmadi K, Biougnach H, Boughaidi A, Zouhdi M, Benasila S, Elyoussefi Z, Bouattar T, Zbiti N, Skalli Z, Rhou H, Ouzeddoun N, Bayahia R, Benamar L. Epidemiology and prevention of Staphylococcus aureus nasal carriage in hemodialyzed patients. Med Mal Infect. 2011;41:469–74. https://doi.org/10.1016/j.medmal.2011.05.005.
Shittu AO, Okon K, Adesida S, Oyedara O, Witte W, Strommenger B, Layer F, Nübel U. Antibiotic resistance and molecular epidemiology of Staphylococcus aureus in Nigeria. BMC Microbiol. 2011;11:92. https://doi.org/10.1186/1471-2180-11-92.
Shittu A, Oyedara O, Abegunrin F, Okon K, Raji A, Taiwo S, Ogunsola F, Onyedibe K, Elisha G. Characterization of methicillin-susceptible and -resistant staphylococci in the clinical setting: a multicentre study in Nigeria. BMC Infect Dis. 2012;12:286. https://doi.org/10.1186/1471-2334-12-286.
Ayepola OO, Olasupo NA, Egwari LO, Becker K, Schaumburg F. Molecular characterization and antimicrobial susceptibility of Staphylococcus aureus isolates from clinical infection and asymptomatic carriers in Southwest Nigeria. PLoS One. 2015;10:e0137531. https://doi.org/10.1371/journal.pone.0137531.
Akobi B, Aboderin O, Sasaki T, Shittu A. Characterization of Staphylococcus aureus isolates from faecal samples of the Straw-Coloured Fruit Bat (Eidolon helvum) in Obafemi Awolowo University (OAU), Nigeria. BMC Microbiol. 2012;12:279. https://doi.org/10.1186/1471-2180-12-279.
Egyir B, Guardabassi L, Esson J, Nielsen SS, Newman MJ, Addo KK, Larsen AR. Insights into nasal carriage of Staphylococcus aureus in an urban and a rural community in Ghana. PLoS One. 2014;9:e96119. https://doi.org/10.1371/journal.pone.0096119.
Amissah NA, Glasner C, Ablordey A, Tetteh CS, Kotey NK, Prah I, van der Werf TS, Rossen JW, van Dijl JM, Stienstra Y. Genetic diversity of Staphylococcus aureus in Buruli ulcer. PLoS Negl Trop Dis. 2015;9:e0003421. https://doi.org/10.1371/journal.pntd.0003421.
Ngoa UA, Schaumburg F, Adegnika AA, Kösters K, Möller T, Fernandes JF, Alabi A, Issifou S, Becker K, Grobusch MP, Kremsner PG, Lell B. Epidemiology and population structure of Staphylococcus aureus in various population groups from a rural and semi-urban area in Gabon, Central Africa. Acta Trop. 2012;124:42–7. https://doi.org/10.1016/j.actatropica.2012.06.005.
Conceição T, Silva IS, de Lencastre H, Aires-de-Sousa M. Staphylococcus aureus nasal carriage among patients and health care workers in São Tomé and Príncipe. Microb Drug Resist. 2014;20:57–66. https://doi.org/10.1089/mdr.2013.0136.
Aiken AM, Mutuku IM, Sabat AJ, Akkerboom V, Mwangi J, Scott JAG, Morpeth SC, Friedrich AW, Grundmann H. Carriage of Staphylococcus aureus in Thika Level 5 Hospital, Kenya: a cross-sectional study. Antimicrob Resist Infect Control. 2014;3:22. https://doi.org/10.1186/2047-2994-3-22.
Omuse G, Kabera B, Revathi G. Low prevalence of methicillin resistant Staphylococcus aureus as determined by an automated identification system in two private hospitals in Nairobi, Kenya: a cross sectional study. BMC Infect Dis. 2014;14:669. https://doi.org/10.1186/s12879-014-0669-y.
Haddaway NR, Collins AM, Coughlin D, Kirk S. The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PLoS One. 2015;10:e0138237. https://doi.org/10.1371/journal.pone.0138237.
Eshetie S, Tarekegn F, Moges F, Amsalu A, Birhan W, Huruy K. Methicillin resistant Staphylococcus aureus in Ethiopia: a meta-analysis. BMC Infect Dis. 2016;16:689. https://doi.org/10.1186/s12879-016-2014-0.
Deyno S, Fekadu S, Astatkie A. Resistance of Staphylococcus aureus to antimicrobial agents in Ethiopia: a meta-analysis. Antimicrob Resist Infect Control. 2017;6:85. https://doi.org/10.1186/s13756-017-0243-7.
Clinical and Laboratory Standards Institute (CLSI). Performance standards for antimicrobial susceptibility testing. 27th ed. CLSI supplement M100. Wayne, PA: CLSI; 2017.
The European Committee on Antimicrobial Susceptibility Testing. Breakpoint tables for interpretation of MICs and zone diameters, version 8.1, 2018. http://www.eucast.org. Accessed 28 May 2018.
Comité de l'antibiogramme de la Société Française de Microbiologie. Recommandations 2018, v.1.0, mai 2018. http://www.sfm-microbiologie.org. Accessed 28 May 2018.
Hurdle JG, O'Neill AJ, Mody L, Chopra I, Bradley SF. In vivo transfer of high-level mupirocin resistance from Staphylococcus epidermidis to methicillin-resistant Staphylococcus aureus associated with failure of mupirocin prophylaxis. J Antimicrob Chemother. 2005;56:1166–8.
Shittu AO, Lin J, Morrison D, Kolawole DO. Isolation and molecular confirmation of a multiresistant catalase-negative Staphylococcus aureus in Nigeria. J Infect. 2003;46:203–4. https://doi.org/10.1053/jinf.2002.1106.

Acknowledgements

SMA was supported by the Organization for Women in Science in the Developing World (OWSD). AOS received funding through the Deutscher Akademischer Austauschdienst (DAAD) Staff Exchange Programme (2016). MK was a Wellcome Trust (UK) Fellow (102429/Z/13/Z). His research is currently supported by the Carnegie Corporation of New York (USA) early-career fellowship, the CIHR CTN International Fellowship (Canada), and the US National Institutes of Health (1R01HD093578-01). We appreciate the kind assistance of Oluwafemi Daramola in the preparation of the manuscript.

Funding

This review received support through the Deutscher Akademischer Austauschdienst (DAAD) Staff Exchange Programme (2016). The opinions expressed in this review are, however, those of the authors.

Availability of data and materials

All supporting materials (Figures and Tables) are included in the manuscript.

Author information

Department of Microbiology, Obafemi Awolowo University, Ile-Ife, Osun State, 22005, Nigeria: Adebayo O. Shittu, Yewande O. Ajao, Mujibat O. Abiola & Ayodele O. Olatimehin
Division of Medical Microbiology, Department of Pathology, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa: Mamadou Kaba & Shima M. Abdulgader
Institute of Infectious Disease and Molecular Medicine, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa: Mamadou Kaba

Authors' contributions

AOS conceived the project. YOA, SMA, and AOS extracted the data and reviewed the articles. MOA and AOO wrote the initial draft of the manuscript. AOS, SMA, YOA, and MK wrote the subsequent draft. All the authors reviewed and agreed on the final version of the manuscript before submission for publication. Correspondence to Adebayo O. Shittu.

Competing interests

The authors declare that there are no competing interests.

Citation: Shittu AO, Kaba M, Abdulgader SM, et al. Mupirocin-resistant Staphylococcus aureus in Africa: a systematic review and meta-analysis. Antimicrob Resist Infect Control. 2018;7:101. https://doi.org/10.1186/s13756-018-0382-5. Received: 17 January 2018.
Prescribed fire limits wildfire severity without altering ecological importance for birds

Quresh S. Latif (ORCID: orcid.org/0000-0003-2925-5042), Victoria A. Saab & Jonathan G. Dudley

Abstract

Fire suppression and anthropogenic land use have increased the severity of wildfire in western U.S. dry conifer forests. Managers use fuels reduction methods (e.g., prescribed fire) to limit high-severity wildfire and restore ecological function to these fire-adapted forests. Many avian species that evolved in these forests, however, are adapted to conditions created by high-severity wildfire. To fully understand the ecological implications of fuels reduction treatments, we need to understand direct treatment effects and how treatments modulate subsequent wildfire effects on natural communities. We studied bird population and community patterns over nine years at six study units: unburned (2002–2003), after prescribed fire (2004–2007), and after wildfire (2008–2010). We used a before-after, control-impact (BACI) approach to analyze shifts in species occupancy and richness in treated units following prescribed fire and again in relation to burn severity following wildfire. We found examples of both positive and negative effects of wildfire and prescribed fire on bird species occupancy, depending on and largely consistent with their life history traits; several woodpecker species, secondary cavity-nesting species, aerial insectivores, and understory species exhibited positive effects, whereas open-cup canopy-nesting species and foliage- or bark-gleaning insectivores exhibited negative effects. Wildfire affected more species, more consistently through time, than did prescribed fire. Wildfire burned units initially treated with prescribed fire less severely than untreated units, but the slopes of wildfire effects on species occupancy were similar regardless of prior prescribed fire treatment. Our results suggest managers can employ prescribed fire to reduce wildfire severity without necessarily altering the ecological importance of wildfire to birds (i.e., the identity of species exhibiting negative versus positive responses). Additional study of the ecological implications of various fuels reduction practices, representing a range of intensities and fire regimes, would further inform forest management that includes biodiversity objectives.
Background

Wildfire strongly shapes the amount and distribution of biodiversity in western North American forests. Some species occur more frequently and others less frequently in recently burned forest, causing community composition to vary with burn severity (Saab et al. 2005; Kalies et al. 2010; Fontaine and Kennedy 2012). Landscapes containing a diversity of forest stands varying in fire history are therefore expected to support the greatest array of species (Clarke 2008; Fontaine et al. 2009; Fontaine and Kennedy 2012).

Within the last ~100 years, anthropogenic fire suppression, logging, development, livestock grazing, and climate change have caused fuel accumulation and homogenization of vegetation structure in many lower elevation dry conifer forests of the western USA (Covington and Moore 1992; Agee 1993; Brown et al. 2004; Schoennagel et al. 2004). These changes have shifted fire regimes towards less frequent but larger and more severe wildfire, with potential negative consequences for the economic and esthetic values of forests, human safety, and wildlife diversity (Dale et al. 2001; Brown et al. 2004; Schoennagel et al. 2004).

Forest managers widely implement fuel reduction treatments, i.e., prescribed fire, timber harvest, or some combination of both, to limit wildfire size and severity, with the ultimate goal of restoring historical vegetation structure and composition to mitigate anthropogenic impacts (Fulé et al. 2012). Empirical studies confirm expected reductions in wildfire severity in treated areas for a limited number of years following treatment, particularly when fuel loads are greatly reduced (Pollet and Omi 2002; Fulé et al. 2012; Prichard and Kennedy 2014; Fernandes 2015).
Thus, strategically placed treatments could help managers reduce the extent of subsequent wildfire (Stevens et al. 2014). Some expect this approach to restore historical conditions and ecological function to many dry conifer forests (Walker et al. 2018). Desirable historical conditions are difficult to achieve, however, because of climate-induced ecological changes (McKelvey et al. 2021) and because they vary regionally and by spatial scale (Schoennagel et al. 2004; Bock and Block 2005; Illán et al. 2014). Animal ecologists therefore suggest treatments could be ineffective or inappropriate in regions like the central Rocky Mountains, where historical levels of diversity were associated with relatively heterogeneous landscapes maintained by mixed-severity fire (Saab et al. 2005; Latif et al. 2016b).

Birds are valuable as focal organisms for understanding faunal community relationships with wildfire and forest management. Surveys do not require specialized equipment (Sutherland et al. 2004), allowing changes in bird population densities, species distributions, and community structure to readily inform management strategies aimed at biological conservation (Saab and Powell 2005; Saab et al. 2005). Additionally, hierarchical occupancy models facilitate analysis of survey data to evaluate population and community relationships with environmental disturbance and management treatments (Dorazio et al. 2006; Russell et al. 2009; Russell et al. 2015; Latif et al. 2016b).

Bird responses to disturbance depend on species ecology and life history traits (Saab and Powell 2005; Smucker et al. 2005; Kotliar et al. 2007; Fontaine and Kennedy 2012; Seavy and Alexander 2014). Wildfire opens the canopy, which can stimulate understory vegetative growth and improve foraging and nesting opportunities for shrub-nesting and ground-foraging species, and creates snags that provide important nesting and foraging resources for cavity-nesting species (Hutto 1995; Kotliar et al. 2002; Saab et al. 2009). In contrast, tree mortality after wildfire reduces resources for canopy-nesting species and species that forage on live trees (Kotliar et al. 2007; Fontaine et al. 2009). As with wildlife communities in general, ecologists expect landscapes representing the historical range of fire conditions to support the greatest array of bird species (see reviews by Kalies et al. 2010; Fontaine and Kennedy 2012).

Fuel reduction treatments could shape bird diversity in various ways. Birds may respond directly to treatment-induced changes in forest structure (Russell et al. 2009; Gaines et al. 2010; Fontaine and Kennedy 2012; White et al. 2013). Managers sometimes look to fuel treatments to provide a surrogate for wildfire (McIver et al. 2013), but severity and scale limit the potential for treatment effects to emulate wildfire effects on birds (Fontaine and Kennedy 2012). Instead, the current debate focuses more on how treatments modulate burn severity and consequent wildfire effects on birds (Hutto et al. 2014). By altering subsequent wildfire behavior, fuel reduction treatments may change the ecological significance of wildfire for birds. For example, by limiting high-severity crown fire, wildfire in treated areas may not generate enough snags to benefit cavity-nesting species or open the canopy sufficiently to benefit understory species (Hutto et al. 2015). Conversely, reduced tree mortality may result in limited negative impacts of wildfire for canopy-nesting and foliage-gleaning species.
Studies comparing wildfire behavior and bird responses to wildfire in treated versus untreated stands are needed to test these hypotheses. Knowledge of how fuel reduction treatments directly and indirectly influence avian populations and communities will inform forest management activities that incorporate habitat conservation for avian diversity.

We studied avian relationships with prescribed fire and wildfire in the Payette National Forest (NF), a lower elevation dry conifer forest in the central Rocky Mountains historically associated with a mixed-severity fire regime. We surveyed birds in paired treatment and control study units before (2002–2003) and after (2004–2007) prescribed fire and following wildfire (2008–2010). We evaluated two primary hypotheses: (1) wildfire burn severity would be lower in units initially treated with prescribed fire, and consequently, (2) birds would respond differently to wildfire in treated compared to untreated units. Secondarily, because wildfire is more severe and extensive, we expected it to have stronger effects on bird populations (inferred from changes in species occupancy) than prescribed fire. We also built on published literature and evaluated hypotheses therein regarding expected responses to wildfire and prescribed fire for particular life histories (Russell et al. 2009; Latif et al. 2016b). We primarily evaluated our hypotheses by looking for temporal shifts in species occupancy of sites varying in burn severity following disturbance, providing relatively strong inference of disturbance effects (Popescu et al. 2012; Russell et al. 2015). We considered implications of observed patterns for forest management with objectives that include conservation of avian diversity.

Methods

Study system

The Payette NF is in the central Rocky Mountains of western North America (45° 00′ 30″ N, 116° 02′ 30″ W; elevation 1127–2075 m). The East Zone Complex Fire burned the Payette NF in July–October 2007 (95,100 ha; Fig. 1). About 60 years prior to this study, forest managers began suppressing wildfire and managing for multiple uses, including timber harvest, mining, recreation, livestock grazing, wildlife habitat, and watershed management (Hollenbeck et al. 2013). Following the classification scheme of Miller and Thode (2007), burn severity within the East Zone Complex Fire perimeter was classified as 9% unburned, 19% low severity, 26% moderate severity, and 46% high severity (see also Latif et al. 2016b).

Fig. 1 Study area and units where forest bird data were collected in the Payette National Forest (ID, USA) in relation to prescribed fire treatments and subsequent wildfire. The fire perimeter is for the 2007 East Zone Complex wildfire

The canopy was dominated by large (≥ 23 cm dbh) ponderosa pine (Pinus ponderosa) trees (> 65%; Hollenbeck et al. 2013). Other tree species included Douglas-fir (Pseudotsuga menziesii), lodgepole pine (Pinus contorta), and small patches (< 10 ha) of quaking aspen (Populus tremuloides) in snowmelt drainages. Common understory species included snowberry (Symphoricarpos albus), spirea (Spirea betulifolia), Saskatoon serviceberry (Amelanchier alnifolia), and chokecherry (Prunus virginiana).

Study units

We established six study units distributed across ~20,000 ha (Fig. 1). Study units were delineated in pairs so that members of each pair were similar in vegetation and topography, and one member of each pair was randomly selected for prescribed fire treatment (Table 1) (for additional details, see Saab et al. 2007).
Forest managers applied prescribed fire treatments in spring (April–early May), prior to the breeding season for most bird species; one unit was treated in 2004 and the other two in 2006. The East Zone Complex Fire subsequently burned five units (two treatment, three control).

Table 1 Treatment timing and sampling for study units used to examine forest bird occupancy relationships with prescribed fire and wildfire at the Payette National Forest, Idaho. Members of unit pairs represented similar pre-fire environmental conditions

Bird surveys

We surveyed birds at 110 point survey stations distributed across the six study units (Table 1, Fig. 1). We spaced survey points at least 150 m apart (mean = 277 m, SD = 68 m) within study unit boundaries. For statistical independence, we spaced most points ≥ 200 m apart, but we were forced to space a minority of points (32%) in closer proximity due to steep topography with limited access for humans. We surveyed each point twice per year between 23 May and 3 July in the years it was surveyed. We began surveys just after the dawn chorus and completed them within 5 h. Observers recorded all birds detected during a 5-min count and estimated distances to each detected individual. We only included detections recorded within 100 m of the surveyor in this analysis. Our sampling design was a robust design (Pollock 1982), with years as the primary periods and visits within years as secondary periods. We surveyed birds in one unit pair through 2006 and the remaining two pairs through 2007, and then continued monitoring the five units burned by wildfire in 2008–2010. Thus, we obtained data representing 2–4 years before and 2–3 years after prescribed fire treatment and 3 years of post-wildfire data (Table 1).

Burn severity measurements and analysis

We measured burn severity using the composite burn index (CBI; Key and Benson 2006), modified to accommodate our study area and objectives. We calculated a CBI value for each survey point (0–3 range) representing the mean of up to 11 components quantifying aspects of canopy structure, understory cover, and downed woody fuels (for details, see Additional file 1: Appendix A). We derived components from field measurements of these attributes before fire (2002–2003), after prescribed fire (2004–2007), and after wildfire (2008–2010). Components represented either changes in these attributes from before to after disturbance or aspects of burn severity apparent after disturbance (e.g., extent of char). We only measured points within burned units and assumed CBI = 0 for units that were not burned during our study. Others describe in detail how CBI values correspond with changes in various aspects of vegetation structure (Key and Benson 2006; Saab et al. 2006). In short, CBI = 0, 0 < CBI < 1.25, 1.25 < CBI < 2.25, and CBI > 2.25 are interpretable as unburned, low severity, moderate severity, and high severity, respectively. In general, low-severity fire primarily affects understory vegetation with minimal canopy mortality (< 40%), whereas high-severity fire results in much greater canopy mortality (> 70%). We quantified prescribed fire CBI (hereafter CBI_PF) using environmental data collected before (2002–2003) versus after (2004–2007) prescribed fire, and wildfire CBI (hereafter CBI_WF) with data from immediately before (2004–2007) versus after wildfire (2008–2010). Unfortunately, wildfire burned one treated study unit (Fitsum Creek) before we could measure post-treatment prescribed fire conditions (2004–2007).
For this unit, we imputed CBI_WF by (1) calculating CBI_Total, representing overall burn severity (i.e., changes from 2002–2003 to 2008–2010); (2) regressing CBI_WF as a linear function of CBI_Total at units where both were available (Buckhorn, Dutch Oven, Williams, and Deadman); and (3) using the resulting regression model (CBI_WF = β0 + β1 × CBI_Total, with estimates β0 [SE] = −0.04623 [0.07653] and β1 = 0.98415 [0.04271]) to impute the missing data. As a covariate of occupancy, we imputed missing CBI_WF values using a normally distributed prior with mean and SD representing model-predicted CBI_WF. Data for calculating CBI_PF were relatively limited, so we did not use CBI_PF as a covariate of occupancy. Rather, we modeled occupancy with a categorical treatment effect (Trt_PF = 0 or 1 for survey points in untreated versus treated units, respectively). We then summarized CBI_PF values where available (Dutch Oven, Parks Creek) to inform inference and to compare with CBI_WF. We compared CBI_WF between treated and untreated points within treatment-control unit pairs, and compared CBI_WF with CBI_PF where available, to evaluate the effect of prescribed fire treatments on subsequent wildfire severity.
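To make the imputation step concrete, here is a minimal R sketch. The data are fabricated stand-ins, the column names are ours, and using the prediction standard error as the prior SD is our assumption; the text specifies only a normal prior centered on model-predicted CBI_WF.

```r
# Minimal sketch of the CBI_WF imputation described above; data and names
# are hypothetical stand-ins for the unit-level CBI measurements.
cbi <- data.frame(
  CBI_Total = c(0.5, 1.2, 2.1, 2.8, 1.6, 0.9),
  CBI_WF    = c(0.4, 1.1, 2.0, 2.7, NA,  NA)  # NA = unmeasured unit
)
fit <- lm(CBI_WF ~ CBI_Total, data = cbi)     # lm() drops NA rows by default
coef(fit)  # the study reports beta0 = -0.046 (SE 0.077), beta1 = 0.984 (SE 0.043)

miss <- is.na(cbi$CBI_WF)
pred <- predict(fit, newdata = cbi[miss, ], se.fit = TRUE)
# Each missing CBI_WF then enters the occupancy model with a normal prior,
# e.g., CBI_WF[j] ~ Normal(pred$fit[j], pred$se.fit[j]).
cbind(mean = pred$fit, sd = pred$se.fit)
```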
Occupancy models

We analyzed avian relationships with prescribed fire using community occupancy models formulated within a Bayesian hierarchical modeling framework (Dorazio et al. 2006; Russell et al. 2009). Occupancy models leverage repeat-survey data to estimate species detectability (p) conditional upon occupancy (species occurrence within a specified time period), allowing unbiased estimation of occupancy probabilities (ψ) given sufficient data and adherence to model assumptions (MacKenzie et al. 2002; MacKenzie et al. 2006). We assumed that the occupancy states of species could change among years but not between visits within a year. We estimated species-specific parameters as random variables governed by community-level parameters. The use of a common distribution among species improves the precision of species-specific parameter estimates, particularly for rare species (Dorazio et al. 2006; Russell et al. 2009). We excluded raptors, owls, and grouse from analysis because they were not readily detectable with our survey methods. Additionally, we only included species that bred in our study areas.

For mobile animals such as birds, detectability (p) estimated with surveys repeated over a season quantifies both within-season movement and the observation process (i.e., availability and perceptibility; sensu Chandler and Andrew Royle 2013; Amundson et al. 2014). In principle, occupancy probabilities thereby estimated the probability of a surveyed point intersecting ≥ 1 home range for a given species (Efford and Dawson 2012; Latif et al. 2016a).

We compiled a three-dimensional data matrix y, where element y_ijt was the sum of binary indicators of species detection (Sanderlin et al. 2014). A binary indicator x_ijkt = 1 denoted detection of species i (i = 1,…,N) at survey point j (j = 1,…,J) during visit k (k = 1,…,K) in year t (t = 1,…,T; T = 4). Because we did not have covariates that differed for detection between visits, we analyzed the sum of all binary detections for species i over all visits at each survey point j in year t, where \( y_{ijt}=\sum_{k=1}^{K}x_{ijkt} \) and \( y_{ijt}\in \{0,1,\ldots,K\} \).

We modeled these data given probability of detection p_i and occupancy latent state z_ijt using a binomial distribution with probability of success p_i × z_ijt:

$$ \left[y_{ijt}\mid p_i,z_{ijt}\right]\sim \mathrm{Bin}\left(K,\,p_i\times z_{ijt}\right) \quad (1) $$

where the latent variable z_ijt for occupancy, given probability of occupancy ψ_ijt, was modeled as

$$ \left[z_{ijt}\mid \psi_{ijt}\right]\sim \mathrm{Bern}\left(\psi_{ijt}\right) \quad (2) $$

We analyzed changes in species occupancy patterns using a model that leveraged our before-after, control-impact (BACI) sampling for examining disturbance effects (Popescu et al. 2012; Russell et al. 2015). For prescribed fire effects, we modeled occupancy (ψ_ijt) as a function of treatment (Trt_PF,j), period (Per_PF,t = 0 or 1 for years before or after survey point j was treated, respectively), and the interaction of the two:

$$ \mathrm{logit}\left(\psi_{ijt}\right)=\beta_{0,i}+\beta_{Per_{\mathrm{PF}},i}\times Per_{\mathrm{PF},t}+\beta_{Trt_{\mathrm{PF}},i}\times Trt_{\mathrm{PF},j}+\beta_{Per_{\mathrm{PF}}\times Trt_{\mathrm{PF}},i}\times Per_{\mathrm{PF},t}\times Trt_{\mathrm{PF},j} \quad (3) $$

where β_0,i is the intercept and the remaining β parameters describe the additive and interactive effects of Per_PF,t and Trt_PF,j on occupancy of species i at survey point j in year t. We restricted analysis of prescribed fire effects to data collected before wildfire (2002–2007).

For wildfire effects, we analyzed data collected 2 years before and 3 years after wildfire (2006–2010) using two models. The first analyzed overall wildfire effects:

$$ \mathrm{logit}\left(\psi_{ijt}\right)=\beta_{0,i}+\beta_{Per_{\mathrm{WF}},i}\times Per_{\mathrm{WF},t}+\beta_{CBI_{\mathrm{WF}},i}\times CBI_{\mathrm{WF},j}+\beta_{Per_{\mathrm{WF}}\times CBI_{\mathrm{WF}},i}\times Per_{\mathrm{WF},t}\times CBI_{\mathrm{WF},j} \quad (4) $$

The second analyzed differences in wildfire effects between units treated versus untreated with prescribed fire:

$$ \mathrm{logit}\left(\psi_{ijt}\right)=\beta_{0,i}+\beta_{Per_{\mathrm{WF}},i}\times Per_{\mathrm{WF},t}+\beta_{Trt_{\mathrm{PF}},i}\times Trt_{\mathrm{PF},j}+\beta_{CBI_{\mathrm{WF}},i}\times CBI_{\mathrm{WF},j}+\beta_{Per_{\mathrm{WF}}\times CBI_{\mathrm{WF}},i}\times Per_{\mathrm{WF},t}\times CBI_{\mathrm{WF},j}+\beta_{Trt_{\mathrm{PF}}\times CBI_{\mathrm{WF}},i}\times Trt_{\mathrm{PF},j}\times CBI_{\mathrm{WF},j}+\beta_{Per_{\mathrm{WF}}\times Trt_{\mathrm{PF}}\times CBI_{\mathrm{WF}},i}\times Per_{\mathrm{WF},t}\times Trt_{\mathrm{PF},j}\times CBI_{\mathrm{WF},j} \quad (5) $$

As in Eq. 3, β_0,i is the intercept and all remaining β parameters describe additive and interactive effects of covariates on avian occupancy in Eqs. 4 and 5. All estimated parameters were species-specific normal random effects. For numerical purposes, we rescaled CBI_WF values to mean = 0 and SD = 1 prior to analysis.

For all three models above (Eqs. 3, 4, and 5), we drew inference about disturbance (prescribed fire or wildfire) effects from the extent to which occupancy shifted towards or away from burned (or unburned) survey points following disturbance. Interaction parameters in Eqs. 3, 4, and 5 quantified these shifts, whereas additive parameters controlled for potentially confounding environmental variation among survey points and time periods (Popescu et al. 2012).
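The hierarchical structure of Eqs. 1–3 maps directly onto BUGS/JAGS syntax. The following is a minimal sketch under stated assumptions, not the authors' implementation: hyperprior values stand in for the non-informative priors described below, the occupancy-detection correlation (ρ, described below) is omitted, and all data object names are illustrative.

```r
# Minimal JAGS sketch of the community BACI occupancy model (Eqs. 1-3),
# with species richness as a derived quantity. Illustrative only.
baci_model <- "model {
  for (m in 1:4) {               # community-level hyperpriors (placeholders)
    mu[m]  ~ dnorm(0, 0.1)
    tau[m] ~ dgamma(0.1, 0.1)
  }
  mu.b0  ~ dnorm(0, 0.1)
  tau.b0 ~ dgamma(0.1, 0.1)
  for (i in 1:n.species) {
    for (m in 1:4) { beta[i, m] ~ dnorm(mu[m], tau[m]) }  # species random effects
    b0[i] ~ dnorm(mu.b0, tau.b0)   # detection intercept (cf. Eq. 7 below)
    logit(p[i]) <- b0[i]
    for (j in 1:n.points) {
      for (t in 1:n.years) {
        logit(psi[i, j, t]) <- beta[i, 1] +
          beta[i, 2] * Per[j, t] +                # before/after period
          beta[i, 3] * Trt[j] +                   # treated vs. control unit
          beta[i, 4] * Per[j, t] * Trt[j]         # BACI interaction (Eq. 3)
        z[i, j, t] ~ dbern(psi[i, j, t])          # occupancy state (Eq. 2)
        y[i, j, t] ~ dbin(p[i] * z[i, j, t], K)   # detections over K visits (Eq. 1)
      }
    }
  }
  for (j in 1:n.points) {          # species richness (described below)
    for (t in 1:n.years) { N.rich[j, t] <- sum(z[1:n.species, j, t]) }
  }
}"
# The wildfire models (Eqs. 4 and 5) follow the same template, with CBI_WF
# and its interactions replacing or augmenting the terms above; models of
# this form would be passed to JAGS from R (e.g., via rjags) using the run
# settings given below.
```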
We considered statistically supported interaction parameters (90% Bayesian credible interval [BCI] excluding zero) strong evidence for disturbance effects.

We used one additional model to analyze annual changes in bird occupancy and time-dependent disturbance effects with all available data (2002–2010). This model included a random year effect and year-specific prescribed fire and wildfire effects:

$$ \mathrm{logit}\left(\psi_{ijt}\right)=\beta_{0,it}+\beta_{Trt_{\mathrm{PF}},i,t_{\mathrm{PF}}}\times Trt_{\mathrm{PF},j}+\beta_{CBI_{\mathrm{WF}},i,t_{\mathrm{WF}}}\times CBI_{\mathrm{WF},j} \quad (6) $$

The intercept, β_0,it, varied with species and year according to nested normal random effects (year within species). Prescribed fire effects (\( \beta_{Trt_{\mathrm{PF}},i,t_{\mathrm{PF}}} \)) were estimated separately for four distinct time periods: pre-treatment (t_PF = 0) and 1–3 years post-treatment (t_PF = 1–3, respectively). Similarly, wildfire effects (\( \beta_{CBI_{\mathrm{WF}},i,t_{\mathrm{WF}}} \)) were estimated for four time periods: pre-fire (2006–2007; t_WF = 0) and 1–3 years post-fire (2008–2010; t_WF = 1–3, respectively). For comparability with the other models (Eqs. 3, 4, and 5), \( \beta_{Trt_{\mathrm{PF}},i,t_{\mathrm{PF}}} \) was not estimated for 2008–2010 and \( \beta_{CBI_{\mathrm{WF}},i,t_{\mathrm{WF}}} \) was not estimated for 2002–2005. We used this model to look for time-dependencies in disturbance effects, i.e., cases where the 95% BCI for \( \beta_{Trt_{\mathrm{PF}},i,t_{\mathrm{PF}}\in \{1,2,3\}}-\beta_{Trt_{\mathrm{PF}},i,t_{\mathrm{PF}}=0} \) or \( \beta_{CBI_{\mathrm{WF}},i,t_{\mathrm{WF}}\in \{1,2,3\}}-\beta_{CBI_{\mathrm{WF}},i,t_{\mathrm{WF}}=0} \) excluded zero (Eq. 6). Additionally, we scanned yearly occupancy estimates for surveyed sites (\( \psi^{\prime}_{it}=\sum_{j=1}^{J}z_{ijt}/J \)) to identify notable changes among pre-treatment (2002–2003), post-treatment (2004–2007), and post-wildfire (2008–2010) periods. All sites surveyed after wildfire were burned by wildfire to some degree (minimum CBI = 0.39; see the "Results" section), so we expected some changes in overall occupancy for species with similar responses to low- versus high-severity wildfire. We considered inference from changes in overall occupancy weaker, however, because estimates of these changes did not control for potentially confounding factors as did shifts in occupancy with respect to CBI (see above).

In addition to species-specific relationships, we plotted emergent changes in species richness with treatment condition. We estimated species richness (N_jt) at each survey point j and year t as \( N_{jt}=\sum_{i=1}^{N}z_{ijt} \), where N is the number of observed species. Similar to some (Russell et al. 2009; Latif et al. 2016b) and unlike others (Dorazio et al. 2006; Kéry et al. 2009), we did not augment the data to represent unobserved species, so community-level inferences were restricted to the subset of species observed at least once during our studies.
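Both the time-dependent contrasts above and derived richness are simple functions of posterior draws. A small runnable R sketch (with simulated values standing in for actual MCMC output; all object names are ours) illustrates the contrast test:

```r
# Illustrative only: testing time-dependence (Eq. 6) from posterior draws.
# Simulated values stand in for MCMC output; names are hypothetical.
set.seed(1)
draws <- data.frame(
  bTrtPF_y0 = rnorm(1000, 0.0, 0.3),  # pre-treatment effect (t_PF = 0)
  bTrtPF_y2 = rnorm(1000, 0.8, 0.3)   # year-2 post-treatment effect
)
contrast <- draws$bTrtPF_y2 - draws$bTrtPF_y0  # posterior of the year-2 shift
quantile(contrast, c(0.025, 0.975))  # 95% BCI; evidence if it excludes zero
median(contrast)                     # posterior median estimate
```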
We modeled detectability as a species-specific normal random effect b_0,i:

$$ \mathrm{logit}\left(p_i\right)=b_{0,i} \quad (7) $$

where p_i is the probability of detecting species i when surveying a given survey point in a given year when the species was present (i.e., ≥ 1 home range intersected the 100-m point neighborhood). Unlike others (Russell et al. 2015), we did not consider treatment effects on detectability. Estimated effects on detectability from preliminary analyses were imprecise (credible intervals overlapped 0 for all species) and model convergence was difficult to achieve, suggesting the additional complexity strained the limits of the data (Q. Latif, unpublished data). We therefore only modeled heterogeneity in detectability among species and assumed detectability did not change with treatment condition. We modeled heterogeneity among species using a correlation term (ρ) that related species intercepts for detection probability (b_0,i) with those for occupancy probability (β_0,i) (Dorazio and Royle 2005; Kéry et al. 2009).

We sampled posterior parameter distributions for all models using JAGS v4 (Plummer 2003) run from R (R Core Team 2013; Su and Yajima 2014). We used independent non-informative priors for all parameters (for priors, see Additional file 1: Appendix B). For each model, we ran six parallel MCMC chains of 100,000 iterations, with a burn-in of 10,000 iterations and a thinning rate of 100, to sample posterior distributions. We verified that \( n_{\mathrm{effective}}\ge 100 \) and \( \hat{R}\le 1.1 \) for all parameters (Gelman and Hill 2007). We examined model goodness-of-fit (GOF) using posterior predictive testing (Gelman and Hill 2007). Specifically, we calculated a Bayesian p value representing the proportion of simulated datasets, drawn from model posterior predictive distributions, with deviance higher than that of the observed data, where p < 0.05 or p > 0.95 constituted evidence for lack of fit.

Results

We detected 60 species across all survey points and years (Table 2). The five most frequently detected species were Yellow-rumped Warbler, Western Tanager, Chipping Sparrow, Hammond's Flycatcher, and Red-breasted Nuthatch. The distributions of CBIs for wildfire and prescribed fire at surveyed points broadly overlapped, but on average, wildfire was more severe, especially in areas not initially treated with prescribed fire (Fig. 2). Severe wildfire (CBI > 2) was extensive at untreated units (10 of 30 points), in contrast with wildfire and prescribed fire at treated units (1 of 40 points for each; Fig. 2). Species-specific detection probability estimates varied and correlated moderately with occupancy (Additional file 1: Appendix C). We found no evidence for lack of fit for the community occupancy models (GOF p values ranged 0.49–0.50 for all four models).

Table 2 Number of detections (no. of point × year occasions detected) recorded for species observed 2002–2010 in the Payette National Forest, Idaho. The maximum number of detections possible for each fire condition (unburned, treated units after prescribed fire, and after wildfire) is noted in the header parentheses

Fig. 2 Box plots depicting distributions of composite burn index (CBI) values for survey points treated by prescribed fire and burned by wildfire. Wildfire CBIs are shown separately for units previously treated versus not treated with prescribed fire. Boxes delineate the 25th, 50th, and 75th percentiles; whiskers denote the distance to observations furthest from the nearest quartiles (i.e., 25th and 75th percentiles) that are also within 1.5 × the inter-quartile range from the nearest quartile; and dots are observations further than 1.5 × the inter-quartile range from the nearest quartile

Overall prescribed fire and wildfire effects

We found statistically supported prescribed fire effects for 2 species and wildfire effects for 7 species (Fig. 3).
House Wren, Hairy Woodpecker, Olive-sided Flycatcher, and Brewer's Sparrow occupancy shifted towards high-severity burned points following wildfire (Fig. 4). Conversely, Cassin's Vireo, Townsend's Warbler, and Warbling Vireo occupancy shifted towards lower severity burned points (Fig. 5). Rock Wren and American Three-toed Woodpecker shifted towards treated units after prescribed fire (Fig. 6). These prescribed fire effects translated into smaller changes in occupancy than did wildfire effects (compare Fig. 6 with Figs. 4 and 5).

Fig. 3 Posterior median estimates (dots) with 90% credible intervals (error bars) for wildfire and prescribed fire effects on avian species occupancy at the Payette National Forest (ID, USA). Full species names are provided in Table 2. Error bars are color coded based on statistical support (credible intervals excluding zero) and direction (orange = positive; blue = negative). Positive versus negative values indicate occupancy shifts towards versus away from (respectively) treated (β_Prescribed fire treatment) or high-severity burned (β_Wildfire CBI) sites. β_Wildfire CBI represents \( \beta_{Per_{\mathrm{WF}}\times CBI_{\mathrm{WF}},i} \) in Eq. 4; β_Prescribed fire treatment represents \( \beta_{Per_{\mathrm{PF}}\times Trt_{\mathrm{PF}},i} \) in Eq. 3

Fig. 4 Predicted occupancy probabilities along a wildfire burn severity gradient (composite burn index; CBI) for species exhibiting statistically supported positive wildfire effects, i.e., where occupancy shifted towards high-severity burned sites after wildfire. Occupancy relationships are depicted before (gray) and after (black) wildfire. Species are House Wren (HOWR), Hairy Woodpecker (HAWO), Olive-sided Flycatcher (OSFL), and Brewer's Sparrow (BRSP)

Fig. 5 Predicted occupancy probabilities along a wildfire burn severity gradient (composite burn index; CBI) for species exhibiting statistically supported negative wildfire effects, i.e., where occupancy shifted towards low-severity or unburned sites after wildfire. Occupancy relationships are depicted before (gray) and after (black) wildfire. Species are Cassin's Vireo (CAVI), Townsend's Warbler (TOWA), and Warbling Vireo (WAVI)

Fig. 6 Predicted occupancy probabilities for treatment versus control units before (gray) and after (black) prescribed fire for species exhibiting statistically supported prescribed fire effects, i.e., where occupancy shifted towards or away from treated sites. Species depicted are American Three-toed Woodpecker (ATTW) and Rock Wren (ROWR)

Wildfire effects on species occupancy were similar in units previously treated with prescribed fire compared to untreated units (Fig. 7). The data did not definitively support differences in wildfire effects between treated and untreated units for any species (all 95% BCIs for \( \beta_{Per_{\mathrm{WF}}\times Trt_{\mathrm{PF}}\times CBI_{\mathrm{WF}},i} \) from Eq. 5 included zero). Additionally, the species exhibiting the strongest shifts towards or away from high-severity burned points after wildfire were the same in units initially treated versus untreated with prescribed fire (Fig. 7). For certain species, statistical support for wildfire effects differed with prior treatment (see Mountain Chickadee, Cassin's Vireo, and Warbling Vireo). Nevertheless, species never exhibited completely contradictory effects in treated versus untreated units.
Fig. 7 Posterior median estimates (dots) with 90% credible intervals (error bars) for wildfire effects on avian species occupancy at the Payette National Forest (ID, USA). Positive versus negative values indicate occupancy shifts towards versus away from (respectively) high-severity burned sites after wildfire in units previously treated with prescribed fire. Full species names are provided in Table 2. Error bars are color coded based on statistical support (credible intervals excluding zero) and direction (orange = positive; blue = negative). β_Wildfire CBI (untreated) represents \( \beta_{Per_{\mathrm{WF}}\times CBI_{\mathrm{WF}},i} \) and β_Wildfire CBI (treated) represents \( \beta_{Per_{\mathrm{WF}}\times CBI_{\mathrm{WF}},i}+\beta_{Per_{\mathrm{WF}}\times Trt_{\mathrm{PF}}\times CBI_{\mathrm{WF}},i} \) in Eq. 5

Annual changes in occupancy and time-dependent effects

Twenty-four species exhibited disturbance effects that were statistically supported overall or time-dependent, or notable changes in annual occupancy (including the 9 species highlighted above; Figs. 8, 9, and 10). Black-backed Woodpecker, American Three-toed Woodpecker, Rock Wren, and Cassin's Finch exhibited statistically supported shifts towards treated units in year 2 after prescribed fire (Fig. 8; see also a similar but less statistically supported pattern for Brewer's Sparrow). Hermit Thrush shifted towards treated units in years 1 and 2 and then shifted back in year 3 following prescribed fire. Hammond's Flycatcher, Chipping Sparrow, Ruby-crowned Kinglet, and Calliope Hummingbird exhibited lagged shifts towards untreated units in years 2 or 3 following prescribed fire.

Fig. 8 Posterior median estimates (dots) with 90% credible intervals (error bars) for time-dependent prescribed fire effects on avian species occupancy at the Payette National Forest (ID, USA). Estimates are for all 24 species with statistically supported time-dependent disturbance effects or notable changes in annual occupancy overall (see full species names in Table 2). Error bars are color coded based on statistical support (credible intervals excluding zero) and direction (orange = positive; blue = negative). Positive versus negative values indicate occupancy shifts towards versus away from (respectively) treated sites after treatment. β_Prescribed fire treatment represents \( \beta_{Trt_{\mathrm{PF}},i,t_{\mathrm{PF}}\in \{1,2,3\}}-\beta_{Trt_{\mathrm{PF}},i,t_{\mathrm{PF}}=0} \) in Eq. 6, where t_PF = 1, 2, and 3 correspond with year 1 (Y1), year 2 (Y2), and year 3 (Y3), respectively, after prescribed fire

Fig. 9 Posterior median estimates (dots) with 90% credible intervals (error bars) for time-dependent wildfire effects on avian species occupancy at the Payette National Forest (ID, USA). Estimates are for all 24 species with statistically supported time-dependent disturbance effects or notable changes in annual occupancy overall (see full species names in Table 2). Error bars are color coded based on statistical support (credible intervals excluding zero) and direction (orange = positive; blue = negative). Positive versus negative values indicate occupancy shifts towards versus away from (respectively) high-severity burned sites after wildfire. β_Wildfire CBI represents \( \beta_{CBI_{\mathrm{WF}},i,t_{\mathrm{WF}}\in \{1,2,3\}}-\beta_{CBI_{\mathrm{WF}},i,t_{\mathrm{WF}}=0} \) in Eq. 6, where t_WF = 1, 2, and 3 correspond with 2008 (Y1), 2009 (Y2), and 2010 (Y3)
Fig. 10 Posterior median estimates (dots) with 90% credible intervals (error bars) for annual occupancy of surveyed sites for bird species at the Payette National Forest (ID, USA). Estimates are for all 24 species with statistically supported time-dependent disturbance effects or notable changes in annual occupancy overall (see full species names in Table 2). The vertical dashed lines demark when prescribed fire treatments were applied (one unit in 2004; two units in 2006), and the vertical solid line demarks when wildfire occurred

The five species with positive wildfire effects that were statistically supported overall or time-dependent (Hairy Woodpecker, Olive-sided Flycatcher, House Wren, White-breasted Nuthatch, and Brewer's Sparrow) all shifted towards higher severity burned points, primarily in years 2–3 after wildfire (Fig. 9). Occupancy for the three species with overall negative wildfire effects (Cassin's Vireo, Warbling Vireo, and Townsend's Warbler; see Figs. 3 and 5) shifted immediately towards lower severity burned points and remained there in all 3 years following wildfire (Fig. 9). Red-breasted Nuthatch, White-breasted Nuthatch, Hammond's Flycatcher, Lazuli Bunting, and Pine Siskin initially shifted towards lower severity burned points in year 1 but then shifted back in years 2–3 after wildfire. Of particular note, White-breasted Nuthatch occupancy was distributed in completely opposite directions with respect to wildfire burn severity in years 1 (negative) versus 3 (positive). Dusky Flycatcher, Brewer's Sparrow, MacGillivray's Warbler, and Hermit Thrush also exhibited ephemeral shifts towards lower severity burned points in year 1, although these shifts received relatively weak statistical support.

Annual occupancy varied notably among pre-treatment (2002–2003), post-treatment (2004–2007), and post-wildfire (2008–2010) years for many species, providing further insight into disturbance effects (Fig. 10). In addition to shifting towards occupying high-severity burned points, Hairy Woodpecker and House Wren occupancy increased overall after wildfire. Mountain Bluebird, Dusky Flycatcher, Black-headed Grosbeak, Lazuli Bunting, MacGillivray's Warbler, Cassin's Finch, and Pine Siskin also exhibited notable, albeit sometimes lagged, increases in occupancy (Fig. 10) despite weak or negative shifts in occupancy with respect to burn severity following disturbance (Figs. 8 and 9). Hammond's Flycatcher and Hermit Thrush exhibited declines in occupancy following disturbance (Fig. 10).

Species richness

We observed no definitive effects of prescribed fire or wildfire on species richness. Species richness increased overall by ~5 species following wildfire, but 95% BCIs for site-specific richness estimates overlapped considerably (Fig. 11, top row). Additionally, high-severity burned sites did not become any more or less species rich than lower severity sites following wildfire (i.e., a slight negative relationship with CBI was maintained; Fig. 11, top row). Overall species richness did not substantially change following prescribed fire, nor did the difference in species richness between treated and untreated units (Fig. 11, bottom row).

Fig. 11 Species richness posterior estimates (median with 90% BCIs) for point × year survey occasions plotted against wildfire burn severity (CBI; top panels) and within prescribed fire treatment versus control units (bottom panels), before (left) versus after (right) disturbance. Best-fit lines show mean species richness trends for posterior median estimates
Best-fit lines show mean species richness trends for posterior median estimates.

Our results suggest prescribed fire does not necessarily change the short-term ecological importance of wildfire to birds, even while limiting wildfire burn severity. Prescribed fire limited subsequent wildfire burn severity within treated units (see also Pollet and Omi 2002; Prichard and Kennedy 2014; Fernandes 2015; Cary et al. 2017) but did not substantially modulate avian responses to wildfire. Species exhibiting the strongest shifts in occupancy towards or away from high-severity burned points after wildfire were similar in units initially treated versus not treated with prescribed fire.

Our study also highlights the limited ability of prescribed fire to emulate wildfire effects on birds. Wildfire strongly affected shifts in occupancy in relation to burn severity for multiple bird species. Differing metrics prevented us from quantitatively comparing wildfire with prescribed fire effects. Nevertheless, prescribed fire qualitatively affected a different and smaller set of bird species than did wildfire. Several species exhibited dramatic changes in overall occupancy following wildfire but not prescribed fire. Prescribed fire effects tended to be relatively time-dependent and brief compared to wildfire effects. Finally, prescribed fire treatments would probably not extend across areas comparable to those burned by wildfire. These differences likely reflected differences in the extent and magnitude of how wildfire versus prescribed fire affected vegetation structure and composition, and ultimately habitat conditions for species.

Wildfire affected communities primarily by altering the distribution of individual species rather than overall richness. Species richness increased somewhat after wildfire, but the magnitude of this change was small compared to variation among sites burned at similar severity, and wildfire did not substantially change which sites were most speciose. Rather, wildfire effects were primarily apparent for individual species modulated by life history (discussed further below), implying changes in species composition. Thus, the significance of wildfire to bird communities and avian diversity depends on the extent, distribution, timing, and severity of wildfire across landscapes (Kalies et al. 2010; Fontaine and Kennedy 2012; Latif et al. 2016b).

We note that although prescribed fire did not substantially alter the slopes of occupancy relationships with wildfire burn severity, prescribed fire did limit subsequent wildfire burn severity. Thus, prescribed fire may limit the magnitude of species occupancy changes following wildfire insofar as those changes depend on burn severity, as they do in our model. Nevertheless, we show that the character of avian responses to wildfire (as represented by slopes of species occupancy relationships) was not substantially altered by previous prescribed fire.

Fire effects depend on species life history, population response, and resource dynamics

Disturbance effects on species occupancy were generally consistent with species life histories and patterns reported in the literature (Smucker et al. 2005; Kotliar et al. 2007; Russell et al. 2009; Fontaine and Kennedy 2012; Latif et al. 2016b).
Positive fire effects (i.e., overall or time-dependent shifts towards burned sites after wildfire or prescribed fire) on bark-drilling woodpeckers (Hairy, American Three-toed, and Black-backed Woodpecker) were congruent with their reliance on standing dead wood for nesting and on bark (e.g., Scolytidae) and wood-boring (Cerambycidae and Buprestidae) beetle larvae for food (Covert-Bratland et al. 2006; Kotliar et al. 2008). More pronounced effects of prescribed fire than of wildfire for the disturbance specialists Black-backed and American Three-toed Woodpecker were unexpected, but sample sizes for these species were low, and a lack of unburned sites (CBI = 0) may have limited power for estimating wildfire effects. Data describing specific life activities (e.g., nesting, foraging, and dispersal) may more effectively resolve relationships with wildfire for these species (Kotliar et al. 2008; Saab et al. 2009; Latif et al. 2013).

Additional species with positive fire effects included secondary cavity nesters (House Wren, White-breasted Nuthatch), aerial insectivores (Olive-sided Flycatcher), and species that nest or forage in the understory or on the ground (Brewer's Sparrow, Rock Wren). These effects likely reflect increased nesting opportunities in snags generated by wildfire (secondary cavity nesters), increased foraging opportunities in canopy openings (aerial insectivores), and improved habitat quality with understory revegetation (understory species) (Kotliar et al. 2002; Saab et al. 2005; Smucker et al. 2005; Fontaine and Kennedy 2012; Latif et al. 2016b). Species exhibiting negative fire effects (shifts away from burned sites) included open-cup canopy nesters and bark- and foliage-gleaning insectivores (Mountain Chickadee, Cassin's Vireo, Warbling Vireo, and Townsend's Warbler), reflecting expected net losses in resources for these species (Kotliar et al. 2002; Saab et al. 2005; Smucker et al. 2005; Fontaine and Kennedy 2012; Latif et al. 2016b).

Time-dependent fire effects and overall changes in occupancy suggest some potential nuances in population responses or in how fire affects resources. For bark-drilling woodpeckers, wood-boring beetle prey primarily colonize burned forests in year 2, following lags in tree mortality (Ray et al. 2019). Increased cavity availability for secondary cavity-nesting species follows excavation by woodpeckers in initial years (Norris and Martin 2010). More generally, lagged positive effects may reflect greater recruitment of young in subsequent years produced initially by a relatively small number of burned-site colonists or residents. Conversely, site fidelity in the early post-fire years may delay negative population responses for species reliant on green foliage for nesting and foraging. Following wildfire in temperate forests, soil nutrient releases are typical and herbaceous vegetation regrowth begins within 1 year (Boerner 1982). Shrubs tend to dominate for the next 5–6 years (Schlesinger and Gill 1980; Boerner 1982), a process benefiting a variety of avian species (e.g., Hannon and Drapeau 2005; Saab and Powell 2005; Fontaine and Kennedy 2012). In contrast, brief positive effects of prescribed fire suggest relatively short-lived resource pulses for affected species (e.g., Cassin's Finch and Hermit Thrush). Resources may be greatest initially at lower severity burned sites after wildfire for species that nest or forage in the understory or on the ground.
As time since fire progresses, however, resource availability may increase with increasing productivity at high-severity sites. Overall increases in occupancy after wildfire, coupled with short-lived negative effects followed by positive effects in subsequent years, suggested such resource dynamics for some species (e.g., Lazuli Bunting and Pine Siskin).

Study limitations and future directions

We did not include prescribed fire or wildfire effects on detectability, so fire effects on occupancy reported here do not explicitly control for potential spatial heterogeneity in detectability. In preliminary analyses, we did not find statistical support for such effects (Q. Latif and V. Saab unpublished data), potentially reflecting a lack of such effects or limitations in statistical power. Given our sampling design, however, detectability could include information on movement between replicate surveys (Latif et al. 2016a) or heterogeneity in abundance (Royle and Nichols 2003), both of which are ecologically relevant. By ignoring fire effects on detectability, we forced any fire effects on movement or local abundance to be reflected in occupancy rather than detectability estimates (sensu Latif et al. 2018). Nevertheless, further study of fire or habitat effects on observer error could provide additional insight for interpreting our results.

Unlike others (Russell et al. 2009), we did not include persistence effects relating occupancy to the prior year's occupancy state (see also Russell et al. 2015). Having used BACI to control for potentially confounding spatial and temporal variation, we sought to avoid including further complexity and to maximize information for estimating disturbance effects. Future studies employing models representing occupancy dynamics (e.g., colonization, persistence, turnover) may yield additional insights into mechanisms underlying patterns observed here.

Our results are limited to fuel reduction treatments consisting exclusively of prescribed fire of primarily low severity in dry conifer forests characterized by mixed-severity fire regimes. Birds have been shown to respond more strongly to prescribed fire treatments elsewhere, likely due to greater treatment severity (e.g., Russell et al. 2009; Bagne and Purcell 2011). Greater treatment severity may elicit responses that more closely resemble responses to wildfire, but such treatments may also alter the behavior and ecological significance of subsequent wildfire. Insofar as warming temperatures or drought affect prescribed fire treatment severity, patterns observed here may depend on climate. Selective timber harvest may further alter direct and indirect treatment implications for birds by removing substantial standing woody biomass (Sallabanks et al. 2000; Perry and Thill 2013). Direct and indirect ecological implications of fuel treatments may further depend on historical fire regime (sensu Latif et al. 2016b). Studies examining interactions of fuel treatments, wildfire, and birds across a range of treatment severities, types, and sizes are needed to fully inform the management of fire-adapted forests with objectives that include conservation of avian diversity.

Management implications

Our results suggest managers can use prescribed fire to limit burn severity of subsequent wildfire without completely compromising the value of wildfire to fire-associated species (e.g., woodpeckers, secondary cavity-nesting species, aerial insectivores, and understory species) within burned areas.
Given similar species relationships with burn severity in areas previously treated with prescribed fire (as observed here), we expect wildfire to generate some habitat for fire-associated species, albeit potentially less than in untreated areas insofar as burn severity is limited. In contrast, prescribed fire is unlikely to resemble wildfire in its ecological value for fire-associated species, limiting the value of prescribed fire as a surrogate for wildfire. Instead, the implications of prescribed fire for avian diversity in dry conifer forests may hinge more on whether and how it shapes the spatial extent of subsequent wildfire. Where the extent of treatment units is dwarfed by subsequent wildfire extent (e.g., Fig. 1), treatments may never be extensive enough to limit potential wildfire severity across entire landscapes. Nevertheless, prescribed fire treatments arranged strategically could break up landscapes and limit wildfire spread, particularly in conjunction with other fuel treatments and fire control measures (Arkle et al. 2012; Hunter and Robles 2020). In multiple-use forests, such fire management strategies would ideally allow sufficient wildfire to maintain biodiversity while limiting wildfire extent enough to meet other objectives, such as human safety and infrastructure protection.

We leveraged a rare opportunity to study the serial effects of prescribed fire and subsequent wildfire on small landbirds. To the extent that wildfire affects resources for other species similarly, our study provides evidence that prescribed fire does not necessarily compromise the ecological value of subsequent wildfire for wildlife. Conversely, prescribed fire could help counteract the effects of climate warming by limiting burn severity. To the extent that climate warming compromises the ecological value of wildfire by increasing its extent and severity (e.g., Jones et al. 2021), the effective application of prescribed fire could help mitigate these impacts.

The datasets used and/or analyzed here are available from the corresponding author on reasonable request.

Agee, J.K. 1993. Ponderosa pine and lodgepole pine forests. In Fire Ecology of Pacific Northwest Forests, ed. J.K. Agee, 320–350. Island Press.
Amundson, Courtney L., J. Andrew Royle, and Colleen M. Handel. 2014. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts. Auk 131 (4): 476–494. https://doi.org/10.1642/AUK-14-11.1.
Arkle, Robert S., David S. Pilliod, and Justin L. Welty. 2012. Pattern and process of prescribed fires influence effectiveness at reducing wildfire severity in dry coniferous forests. Forest Ecology and Management 276: 174–184. https://doi.org/10.1016/j.foreco.2012.04.002.
Bagne, Karen E., and Kathryn L. Purcell. 2011. Short-term responses of birds to prescribed fire in fire-suppressed forests of California. The Journal of Wildlife Management 75 (5): 1051–1060. https://doi.org/10.1002/jwmg.128.
Bock, Carl E., and William M. Block. 2005. Fire and birds in the southwestern United States. Studies in Avian Biology 30: 14–32.
Boerner, Ralph E.J. 1982. Fire and nutrient cycling in temperate ecosystems. BioScience 32 (3): 187–192. https://doi.org/10.2307/1308941.
Brown, Richard T., James K. Agee, and Jerry F. Franklin. 2004. Forest restoration and fire: principles in the context of place. Conservation Biology 18 (4): 903–912. https://doi.org/10.1111/j.1523-1739.2004.521_1.x.
Cary, Geoffrey J., Ian D. Davies, Ross A. Bradstock, Robert E. Keane, and Mike D. Flannigan. 2017. Importance of fuel treatment for limiting moderate-to-high intensity fire: findings from comparative fire modelling. Landscape Ecology 32 (7): 1473–1483. https://doi.org/10.1007/s10980-016-0420-8.
Chandler, Richard B., and J. Andrew Royle. 2013. Spatially explicit models for inference about density in unmarked or partially marked populations. Annals of Applied Statistics 7 (2): 936–954. https://doi.org/10.1214/12-AOAS610.
Clarke, M.F. 2008. Catering for the needs of fauna in fire management: science or just wishful thinking? Wildlife Research 35 (5): 385–394. https://doi.org/10.1071/WR07137.
Covert-Bratland, Kristin A., William M. Block, and Tad C. Theimer. 2006. Hairy woodpecker winter ecology in ponderosa pine forests representing different ages since wildfire. The Journal of Wildlife Management 70 (5): 1379–1392. https://doi.org/10.2307/4128059.
Covington, W.W., and M.M. Moore. 1992. Postsettlement changes in natural fire regimes: implications for restoration of old-growth ponderosa pine forests. In Old-growth forests in the southwest and Rocky Mountain regions: proceedings of a workshop, ed. Merrill R. Kaufmann, W.H. Moir, and Richard L. Bassett, 81–99. Portal: U.S. Department of Agriculture, Forest Service, Rocky Mountain Forest and Range Experiment Station.
Dale, Virginia H., Linda A. Joyce, Steve McNulty, Ronald P. Neilson, Matthew P. Ayres, Michael D. Flannigan, Paul J. Hanson, Lloyd C. Irland, Ariel E. Lugo, Chris J. Peterson, Daniel Simberloff, Frederick J. Swanson, Brian J. Stocks, and B. Michael Wotton. 2001. Climate change and forest disturbances. BioScience 51 (9): 723–734. https://doi.org/10.1641/0006-3568(2001)051[0723:CCAFD]2.0.CO;2.
Dorazio, R.M., and J.A. Royle. 2005. Estimating size and composition of biological communities by modeling the occurrence of species. Journal of the American Statistical Association 100: 389–398.
Dorazio, Robert M., J. Andrew Royle, Bo Söderström, and Anders Glimskär. 2006. Estimating species richness and accumulation by modeling species occurrence and detectability. Ecology 87 (4): 842–854. https://doi.org/10.1890/0012-9658(2006)87[842:esraab]2.0.co;2.
Efford, Murray G., and Deanna K. Dawson. 2012. Occupancy in continuous habitat. Ecosphere 3 (4): article 32. https://doi.org/10.1890/ES11-00308.1.
Fernandes, Paulo M. 2015. Empirical support for the use of prescribed burning as a fuel treatment. Current Forestry Reports 1 (2): 118–127. https://doi.org/10.1007/s40725-015-0010-z.
Fontaine, Joseph B., Daniel C. Donato, W. Douglas Robinson, Beverly E. Law, and J. Boone Kauffman. 2009. Bird communities following high-severity fire: response to single and repeat fires in a mixed-evergreen forest, Oregon, USA. Forest Ecology and Management 257 (6): 1496–1504. https://doi.org/10.1016/j.foreco.2008.12.030.
Fontaine, Joseph B., and Patricia L. Kennedy. 2012. Meta-analysis of avian and small-mammal response to fire severity and fire surrogate treatments in U.S. fire-prone forests. Ecological Applications 22 (5): 1547–1561. https://doi.org/10.1890/12-0009.1.
Fulé, Peter Z., Joseph E. Crouse, John Paul Roccaforte, and Elizabeth L. Kalies. 2012. Do thinning and/or burning treatments in western USA ponderosa or Jeffrey pine-dominated forests help restore natural fire behavior? Forest Ecology and Management 269: 68–81. https://doi.org/10.1016/j.foreco.2011.12.025.
Gaines, William, Maryellen Haggard, James Begley, John Lehmkuhl, and Andrea Lyons. 2010. Short-term effects of thinning and burning restoration treatments on avian community composition, density, and nest survival in the eastern Cascades dry forests, Washington. Forest Science 56 (1): 88–99.
Gelman, A., and J. Hill. 2007. Data analysis using regression and multilevel/hierarchical models. Cambridge University Press, New York, NY.
Hannon, S.J., and P. Drapeau. 2005. Burns, birds, and the boreal forest. Studies in Avian Biology 30: 97–115.
Hollenbeck, Jeff P., Lisa J. Bate, Victoria A. Saab, and John F. Lehmkuhl. 2013. Snag distributions in relation to human access in ponderosa pine forests. Wildlife Society Bulletin 37 (2): 256–266. https://doi.org/10.1002/wsb.252.
Hunter, Molly E., and Marcos D. Robles. 2020. The effects of prescribed fire on wildfire regimes and impacts: a framework for comparison. Forest Ecology and Management 475: 118435. https://doi.org/10.1016/j.foreco.2020.118435.
Hutto, R.L. 1995. Composition of bird communities following stand-replacement fires in Northern Rocky Mountain (U.S.A.) conifer forests. Conservation Biology 9 (5): 1041–1058. https://doi.org/10.1046/j.1523-1739.1995.9051033.x-i1.
Hutto, Richard L., Monica L. Bond, and Dominick A. DellaSala. 2015. Using bird ecology to learn about the benefits of severe fire. In The ecological importance of mixed-severity fires, ed. Dominick A. DellaSala and Chad T. Hanson, 55–88. Elsevier. https://doi.org/10.1016/B978-0-12-802749-3.00003-7.
Hutto, Richard L., Aaron D. Flesch, and Megan A. Fylling. 2014. A bird's-eye view of forest restoration: do changes reflect success? Forest Ecology and Management 327: 1–9. https://doi.org/10.1016/j.foreco.2014.04.034.
Illán, Javier Gutiérrez, Chris D. Thomas, Julia A. Jones, Weng-Keen Wong, Susan M. Shirley, and Matthew G. Betts. 2014. Precipitation and winter temperature predict long-term range-scale abundance changes in Western North American birds. Global Change Biology 20 (11): 3351–3364. https://doi.org/10.1111/gcb.12642.
Jones, G.M., H.A. Kramer, W.J. Berigan, S.A. Whitmore, R.J. Gutiérrez, and M.Z. Peery. 2021. Megafire causes persistent loss of an old-forest species. Animal Conservation. https://doi.org/10.1111/acv.12697.
Kalies, E.L., C.L. Chambers, and W.W. Covington. 2010. Wildlife responses to thinning and burning treatments in southwestern conifer forests: a meta-analysis. Forest Ecology and Management 259 (3): 333–342. https://doi.org/10.1016/j.foreco.2009.10.024.
Kéry, M., J.A. Royle, M. Plattner, and R.M. Dorazio. 2009. Species richness and occupancy estimation in communities subject to temporary emigration. Ecology 90: 1279–1290.
Key, Carl H., and Nathan C. Benson. 2006. Landscape assessment. Sampling and analysis methods, 55. USDA Forest Service General Technical Report RMRS-GTR-164-CD.
Kotliar, Natasha B., Sallie J. Hejl, Richard L. Hutto, Vicki A. Saab, C.P. Mellcher, and M.E. McFadzen. 2002. Effects of fire and post-fire salvage logging on avian communities in conifer-dominated forests of the western United States. Studies in Avian Biology 25: 49–64.
Kotliar, Natasha B., Patricia L. Kennedy, and Kimberly Ferree. 2007. Avifaunal responses to fire in southwestern montane forests along a burn severity gradient. Ecological Applications 17 (2): 491–507. https://doi.org/10.1890/06-0253.
Kotliar, Natasha B., Elizabeth W. Reynolds, and Douglas H. Deutschman. 2008. American three-toed woodpecker response to burn severity and prey availability at multiple spatial scales. Fire Ecology 4 (2): 26–45. https://doi.org/10.4996/fireecology.0402026.
Latif, Quresh S., Martha M. Ellis, and Courtney L. Amundson. 2016a. A broader definition of occupancy: comment on Hayes and Monfils. The Journal of Wildlife Management 80 (2): 192–194. https://doi.org/10.1002/jwmg.1022.
Latif, Quresh S., Martha M. Ellis, Victoria A. Saab, and Kim Mellen-McLean. 2018. Simulations inform design of regional occupancy-based monitoring for a sparsely distributed, territorial species. Ecology and Evolution 8 (2): 1171–1185. https://doi.org/10.1002/ece3.3725.
Latif, Quresh S., Victoria A. Saab, Jonathan G. Dudley, and Jeff P. Hollenbeck. 2013. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist. Ecology and Evolution 3 (13): 4348–4364. https://doi.org/10.1002/ece3.790.
Latif, Quresh S., Jamie S. Sanderlin, Victoria A. Saab, William M. Block, and Jonathan G. Dudley. 2016b. Avian relationships with wildfire at two dry forest locations with different historical fire regimes. Ecosphere 7 (5): e01346. https://doi.org/10.1002/ecs2.1346.
MacKenzie, Darryl I., James D. Nichols, G.B. Lachman, S. Droege, J. Andrew Royle, and C.A. Langtimm. 2002. Estimating site occupancy rates when detection probabilities are less than one. Ecology 83 (8): 2248–2255. https://doi.org/10.1890/0012-9658(2002)083[2248:ESORWD]2.0.CO;2.
MacKenzie, Darryl I., James D. Nichols, J. Andrew Royle, Kenneth H. Pollock, Larissa L. Bailey, and James E. Hines. 2006. Occupancy estimation and modeling. Elsevier Inc.
McIver, James D., Scott L. Stephens, James K. Agee, Jamie Barbour, Ralph E.J. Boerner, Carl B. Edminster, Karen L. Erickson, Kerry L. Farris, Christopher J. Fettig, Carl E. Fiedler, Sally Haase, Stephen C. Hart, Jon E. Keeley, Eric E. Knapp, John F. Lehmkuhl, Jason J. Moghaddas, William Otrosina, Kenneth W. Outcalt, Dylan W. Schwilk, Carl N. Skinner, Thomas A. Waldrop, C. Phillip Weatherspoon, Daniel A. Yaussy, Andrew Youngblood, and Steve Zack. 2013. Ecological effects of alternative fuel-reduction treatments: highlights of the National Fire and Fire Surrogate study (FFS). International Journal of Wildland Fire 22 (1): 63–82. https://doi.org/10.1071/WF11130.
McKelvey, Kevin S., William M. Block, Theresa B. Jain, Charles H. Luce, Deborah S. Page-Dumroese, Bryce A. Richardson, Victoria A. Saab, Anna W. Schoettle, Carolyn H. Sieg, and Daniel R. Williams. 2021. Adapting research, management, and governance to confront socioecological uncertainties in novel ecosystems. Frontiers in Forests and Global Change 4 (14). https://doi.org/10.3389/ffgc.2021.644696.
Miller, Jay D., and Andrea E. Thode. 2007. Quantifying burn severity in a heterogeneous landscape with a relative version of the delta Normalized Burn Ratio (dNBR). Remote Sensing of Environment 109 (1): 66–80. https://doi.org/10.1016/j.rse.2006.12.006.
Norris, Andrea R., and Kathy Martin. 2010. The perils of plasticity: dual resource pulses increase facilitation but destabilize populations of small-bodied cavity-nesters. Oikos 119 (7): 1126–1135. https://doi.org/10.1111/j.1600-0706.2009.18122.x.
Perry, Roger W., and Ronald E. Thill. 2013. Long-term responses of disturbance-associated birds after different timber harvests. Forest Ecology and Management 307: 274–283. https://doi.org/10.1016/j.foreco.2013.07.026.
Plummer, Martyn. 2003. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), March 20–22, Vienna, Austria.
Pollet, Jolie, and Philip N. Omi. 2002. Effect of thinning and prescribed burning on crown fire severity in ponderosa pine forests. International Journal of Wildland Fire 11 (1): 1–10. https://doi.org/10.1071/WF01045.
Pollock, Kenneth H. 1982. A capture-recapture design robust to unequal probability of capture. The Journal of Wildlife Management 46 (3): 752–757. https://doi.org/10.2307/3808568.
Popescu, Viorel D., Perry de Valpine, Douglas Tempel, and M. Zachariah Peery. 2012. Estimating population impacts via dynamic occupancy analysis of Before-After Control-Impact studies. Ecological Applications 22 (4): 1389–1404. https://doi.org/10.1890/11-1669.1.
Prichard, Susan J., and Maureen C. Kennedy. 2014. Fuel treatments and landform modify landscape patterns of burn severity in an extreme fire event. Ecological Applications 24 (3): 571–590. https://doi.org/10.1890/13-0343.1.
R Core Team. 2013. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. URL https://www.R-project.org/.
Ray, Chris, Daniel R. Cluck, Robert L. Wilkerson, Rodney B. Siegel, Angela M. White, Gina L. Tarbill, Sarah C. Sawyer, and Christine A. Howell. 2019. Patterns of woodboring beetle activity following fires and bark beetle outbreaks in montane forests of California, USA. Fire Ecology 15 (1): 21. https://doi.org/10.1186/s42408-019-0040-1.
Royle, J. Andrew, and James D. Nichols. 2003. Estimating abundance from repeated presence-absence data or point counts. Ecology 84 (3): 777–790. https://doi.org/10.1890/0012-9658(2003)084[0777:eafrpa]2.0.co;2.
Russell, James C., Martin Stjernman, Åke Lindström, and Henrik G. Smith. 2015. Community occupancy before-after-control-impact (CO-BACI) analysis of Hurricane Gudrun on Swedish forest birds. Ecological Applications 25 (3): 685–694. https://doi.org/10.1890/14-0645.1.
Russell, Robin E., J. Andrew Royle, Victoria A. Saab, John F. Lehmkuhl, William M. Block, and John R. Sauer. 2009. Modeling the effects of environmental disturbance on wildlife communities: avian responses to prescribed fire. Ecological Applications 19 (5): 1253–1263. https://doi.org/10.1890/08-0910.1.
Saab, V., Lisa J. Bate, John Lehmkuhl, Brett G. Dickson, Scott Story, S. Jentsch, and William M. Block. 2006. Changes in downed wood and forest structure after prescribed fire in ponderosa pine forests. In Fuels management - how to measure success: conference proceedings, 28–30 March 2006, Portland, OR, USA. Proceedings RMRS-P-41. Fort Collins: USDA Forest Service, Rocky Mountain Research Station.
Saab, Victoria A., William M. Block, Robin E. Russell, John F. Lehmkuhl, Lisa Bate, and Rachel White. 2007. Birds and burns of the Interior West. Pacific Northwest Research Station, U.S. Forest Service. PNW-GTR-712.
Saab, Victoria A., and Hugh D.W. Powell. 2005. Fire and avian ecology in North America: process influencing pattern. Studies in Avian Biology 30: 1–9.
Saab, Victoria A., Hugh D.W. Powell, Natasha B. Kotliar, and Karen R. Newlon. 2005. Variation in fire regimes of the Rocky Mountains: implications for avian communities and fire management. Studies in Avian Biology 30: 76–96.
Saab, Victoria A., Robin E. Russell, and Jonathan G. Dudley. 2009. Nest-site selection by cavity-nesting birds in relation to postfire salvage logging. Forest Ecology and Management 257 (1): 151–159. https://doi.org/10.1016/j.foreco.2008.08.028.
Sallabanks, Rex, Edward B. Arnett, and John M. Marzluff. 2000. An evaluation of research on the effects of timber harvest on bird populations. Wildlife Society Bulletin 28 (4): 1144–1155.
Sanderlin, Jamie S., William M. Block, and Joseph L. Ganey. 2014. Optimizing study design for multi-species avian monitoring programmes. Journal of Applied Ecology 51 (4): 860–870. https://doi.org/10.1111/1365-2664.12252.
Schlesinger, William H., and David S. Gill. 1980. Biomass, production, and changes in the availability of light, water, and nutrients during the development of pure stands of the chaparral shrub, Ceanothus megacarpus, after fire. Ecology 61 (4): 781–789. https://doi.org/10.2307/1936748.
Schoennagel, T., T.T. Veblen, and William H. Romme. 2004. The interaction of fire, fuels, and climate across Rocky Mountain forests. BioScience 54 (7): 661–676. https://doi.org/10.1641/0006-3568(2004)054[0661:TIOFFA]2.0.CO;2.
Seavy, Nathaniel E., and John D. Alexander. 2014. Songbird response to wildfire in mixed-conifer forest in south-western Oregon. International Journal of Wildland Fire 23 (2): 246–258. https://doi.org/10.1071/WF12081.
Smucker, Kristina M., Richard L. Hutto, and Brian M. Steele. 2005. Changes in bird abundance after wildfire: importance of fire severity and time since fire. Ecological Applications 15 (5): 1535–1549. https://doi.org/10.1890/04-1353.
Stevens, Jens T., Hugh D. Safford, and Andrew M. Latimer. 2014. Wildfire-contingent effects of fuel treatments can promote ecological resilience in seasonally dry conifer forests. Canadian Journal of Forest Research 44 (8): 843–854. https://doi.org/10.1139/cjfr-2013-0460.
Su, Yu-Sung, and Masanao Yajima. 2014. R2jags: a package for running JAGS from R. R package version 3.3.0. http://CRAN.R-project.org/package=R2jags.
Sutherland, W.J., I. Newton, and Rhys E. Green. 2004. Bird ecology and conservation: a handbook of techniques. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198520863.001.0001.
Walker, Ryan B., Jonathan D. Coop, Sean A. Parks, and Laura Trader. 2018. Fire regimes approaching historic norms reduce wildfire-facilitated conversion from forest to non-forest. Ecosphere 9 (4): e02182. https://doi.org/10.1002/ecs2.2182.
White, Angela M., Elise F. Zipkin, Patricia N. Manley, and Matthew D. Schlesinger. 2013. Simulating avian species and foraging group responses to fuel reduction treatments in coniferous forests. Forest Ecology and Management 304: 261–274. https://doi.org/10.1016/j.foreco.2013.04.039.

National Fire Plan and Payette National Forest provided funding and logistical support for this work. We thank field crews for conducting bird and vegetation surveys. In particular, A. Newhouse, D. Ramos, S. Story, and S. Copeland helped with field work on the Payette study area. The Rocky Mountain Research Station and Bird Conservancy of the Rockies supported author time during manuscript preparation.

Quresh S. Latif: Bird Conservancy of the Rockies, 14500 Lark Bunting Lane, Brighton, CO 80603, USA
Victoria A. Saab: Rocky Mountain Research Station, USDA Forest Service, 1648 South 7th Avenue, MSU Campus, Bozeman, MT 59717, USA
Jonathan G. Dudley: Rocky Mountain Research Station, USDA Forest Service, 322 East Front Street, Suite 401, Boise, ID 83702, USA

VAS designed the study and obtained funding. VAS and JGD organized and oversaw data collection. QSL and VAS developed the analysis approach. QSL implemented the analysis and drafted the manuscript. VAS and JGD contributed editorial input during manuscript preparation. The authors read and approved the final manuscript. Correspondence to Quresh S. Latif. Appendices A, B, and C.

Latif, Q.S., Saab, V.A., & Dudley, J.G. Prescribed fire limits wildfire severity without altering ecological importance for birds. Fire Ecology 17, 37 (2021). https://doi.org/10.1186/s42408-021-00123-2
Keywords: Community composition; Dry conifer forest; Western North America
Complex Refractive Index

\( \newcommand{\diff}{\mathrm{d}} \renewcommand{\deg}{^{\circ}} \newcommand{\e}{\mathrm{e}} \newcommand{\A}{\unicode{x212B}} \newcommand{\pprime}{^{\prime\prime}} \)

Generally, optics based on electromagnetics is described with the refractive index, whereas this index barely appears in X-ray diffraction (XRD) treated by the kinematical approach, presumably because of the different historical backgrounds of the two fields; however, it is of great importance to see their relationship in order to understand the diffraction anomalous fine structure (DAFS) method. For example, the energy dependence of the \(f\pprime\) term is not completely equivalent to the linear absorption coefficient, \(\mu\); \(f\pprime\) should be divided by the photon energy in order to treat it as an absorption spectrum equivalent to \(\mu\) in the theoretical framework and analyses of the XAFS field. Thus, the following section briefly describes the relationship between the complex atomic scattering factor derived before and the conventional complex refractive index.

The refractive index is defined as the ratio of the wave numbers in a material and in a vacuum as follows:
\[
\tilde{n} \equiv k / K = 1 - \delta + i\beta = n + i\beta,
\label{Eq:refractive_index} \tag{1}
\]
where \(K\) and \(k\) are the wave numbers in the vacuum and the material, respectively, \(n\) is the real part of the refractive index, \(\delta\) is its deviation from 1, and \(\beta\) is the imaginary part of the refractive index. Note that the sign of the imaginary part depends on the definition of the wavefunction; the above description is based on the wavefunction of \(\exp \left\{ i \left( \vec{k}\cdot \vec{r} - \omega t \right) \right\} \). Thus, the wave number in the material is written as
\[
k = \tilde{n} K,
\label{Eq:wave_number_in_material}
\]
and consequently the electric field in the material is calculated as
\begin{align}
E &= E_{0} \exp \left\{ i (kx - \omega t)\right\} \notag \\
&= E_{0} \exp \left\{ i(Kx-\omega t) \right\} \exp (-i \delta K x) \exp (- \beta K x).
\label{Eq:wave_in_material}
\end{align}
The second and the third factors in the last line of the above equation indicate the phase shift and the absorption, respectively. The absorption is further described with \(\beta\) in the intensity (i.e., proportional to the square of the electric field) as
\[
I(x) = I(0) \exp (- 2\beta K x).
\label{Eq:absorption_due_to_refractive_index}
\]
Since the absorption is also written as \(I(x) = I(0) \exp (- \mu x) \) with the linear absorption coefficient, \(\mu\), the relationship between \(\beta\) and \(\mu\) is obtained by comparing the exponents as
\[
\beta = \frac{\mu}{2K} = \frac{\lambda}{4\pi} \mu.
\label{Eq:absorption_coefficient}
\]
The complex refractive index is also written in the form of
\[
\tilde{n} = \sqrt{\frac{\epsilon \mu_{m}}{\epsilon_{0} \mu_{m0}}},
\label{Eq:full_refractive_index}
\]
where \(\epsilon\) and \(\mu_{m}\) are the dielectric constant and the magnetic permeability, respectively, and a subscript 0 denotes the values in the vacuum. At X-ray frequencies the material is magnetically equivalent to the vacuum; thus, since \(\mu_{m} = \mu_{m0}\), the above index reduces to
\[
\tilde{n} = \sqrt{\frac{\epsilon }{\epsilon_{0}}}.
\]
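To make the phase-shift and attenuation factors above concrete, here is a minimal numerical sketch (not part of the original article). The wavelength, \(\delta\), \(\beta\), and thickness are illustrative order-of-magnitude assumptions for a light material at about 10 keV, not tabulated values.

```python
import numpy as np

# Sketch of Eq. (1) and the attenuation/phase relations above.
# delta and beta are assumed order-of-magnitude values, not tabulated data.
wavelength = 1.24e-10      # m, roughly a 10 keV photon
delta, beta = 5e-6, 7e-8   # assumed real and imaginary parts of 1 - n~
thickness = 10e-6          # m, slab thickness x

K = 2 * np.pi / wavelength   # vacuum wave number
mu = 2 * beta * K            # linear absorption coefficient, mu = 4*pi*beta/lambda

transmission = np.exp(-mu * thickness)   # I(x)/I(0) = exp(-2*beta*K*x)
phase_shift = -delta * K * thickness     # phase from the factor exp(-i*delta*K*x), in rad

print(f"mu = {mu:.3e} m^-1")
print(f"transmission through {thickness * 1e6:.0f} um: {transmission:.4f}")
print(f"accumulated phase shift: {phase_shift:.1f} rad")
```

Even though \(\delta\) and \(\beta\) are both tiny for hard x-rays, the accumulated phase \(-\delta K x\) amounts to hundreds to thousands of radians over micrometer path lengths while the attenuation stays modest; this asymmetry is exactly what Eq. \eqref{Eq:wave_in_material} separates into the two exponential factors.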
The dielectric constant, \(\epsilon\), is also related to the electric susceptibility, \(\chi_{e}\), through
\[
\epsilon = \epsilon_{0} (1 + \chi_{e}).
\]

Connection between the refractive index and the scattering factors

The refractive index is related to the atomic scattering factor as follows. The electric dipole moment, \(P_{e}\), is written as \(P_{e}= \epsilon_{0} \chi_{e} E\). At the same time, \(P_{e}\) is also described as \(-n_{s} ex\), where \(n_{s}\) is the volume density of the dipoles. Then,
\[
P_{e} = - n_{s} e x_{0} = \epsilon_{0} \chi_{e} E_{0}.
\]
Therefore, \(\chi_{e}\) is further calculated with the amplitude of the forced oscillator described in Resonant Scattering as follows:
\begin{align}
\chi_{e} &= \frac{-n_{s} e}{\epsilon_{0} E_{0}} x_{0} \notag \\
&= \frac{-n_{s} e}{\epsilon_{0} E_{0}} \left(- \frac{eE_{0}}{m} \frac{1}{\omega_{s}^{2} - \omega^{2} -i\omega\Gamma} \right) \notag \\
&= \frac{n_{s} e^{2} }{\epsilon_{0} m} \frac{1}{\omega_{s}^{2} - \omega^{2} -i\omega\Gamma} \notag \\
&= -\frac{n_{s} e^{2} }{\epsilon_{0} m \omega^{2}} \frac{\omega^{2}}{\omega^{2} - \omega_{s}^{2} + i\omega\Gamma}.
\end{align}
Furthermore, the last factor can be replaced by the atomic scattering factor of the single oscillator. Then,
\begin{align}
\chi_{e} & = - \frac{n_{s} e^{2} \lambda^{2}}{\epsilon_{0} m (2\pi c)^{2}} f_{s} \notag \\
& = - \frac{r_{0} }{\pi} n_{s} \lambda^{2} f_{s}.
\end{align}
Thus, we obtain the relationship between the electric susceptibility and the atomic scattering factor (and hence its dispersion correction terms). On the other hand, the complex refractive index is written with the electric susceptibility, \(\chi_{e}\), by assuming \(\chi_{e} \ll 1\), and consequently with the scattering factor, as follows:
\begin{align}
\tilde{n} &= \left( \frac{\epsilon}{\epsilon_{0}} \right)^{\frac{1}{2}} = \left( 1 + \chi_{e} \right)^{\frac{1}{2}} \notag \\
&\sim 1 + \frac{1}{2} \chi_{e} = 1- \frac{r_0}{2\pi} \lambda^{2} n_{s} f_{s}.
\label{Eq:refractive_index_with_fs}
\end{align}
Within the scope of the single forced oscillator model, this refractive index is expressed in the form of
\[
\tilde{n} = 1- \frac{2\pi r_{0} n_{s} c^{2}}{\omega^{2} - \omega_{s}^{2} + i\Gamma \omega}.
\label{Eq:refractive_index_of_oscillator_model}
\]
Again, the refractive index is also affected by the binding of the electron to the nucleus, as seen in the atomic scattering factor. Practically, \(f_{s}\) is replaced by the complex atomic scattering factor determined from experiments, i.e., \(f_j (\vec{Q}, E) = f^{0}_{j}(\vec{Q}) + f'_{j}(E) + i f\pprime_{j}(E) \). When assuming forward scattering, i.e., \(\vec{Q} = 0\), the \(f^{0}\) value is identical to the atomic number, \(Z\), and then
\[
\tilde{n}= 1- \frac{r_0}{2\pi} \lambda^{2} \sum_j n_{j} \left( Z_{j} + f'_{j} + if\pprime_{j} \right),
\label{Eq:refractive_index_with_scattering_factor}
\]
where \(n_{j}\) denotes the number of atoms of element \(j\) in a unit volume. If the material is a crystal, Eq. \eqref{Eq:refractive_index_with_scattering_factor} is also rewritten with the unit cell volume, \(v_{c}\), as
\[
\tilde{n}= 1- \frac{r_0}{2\pi v_{c}} \lambda^{2} F(Q=0, E) =1- \frac{r_0}{2\pi v_{c}} \lambda^{2} \sum_j \left( Z_{j} + f'_{j} + if\pprime_{j} \right).
\label{Eq:refractive_index_with_structure_factor} \tag{10}
\]
Thus, the real and imaginary parts of the complex refractive index are obtained by comparing each part of the above equation with Eq.
\eqref{Eq:refractive_index}:
\[
\delta = \frac{r_0}{2\pi v_{c}} \lambda^{2} \sum_j \left( Z_{j} + f'_{j}\right),\qquad
\beta = -\frac{r_0}{2\pi v_{c}} \lambda^{2} \sum_j f\pprime_{j}.
\label{Eq:each_parts_of_refractive_index_with_structure_factor}
\]
Furthermore, \(\mu\) is linked with \(f\pprime\) through Eq. \eqref{Eq:absorption_coefficient} as
\[
\mu = \frac{4\pi}{\lambda} \beta = \frac{2\lambda r_{0}}{v_{c}} \sum_j \left( -f\pprime_{j} \right).
\label{Eq:mu_and_f_double_prime}
\]
Therefore, the \(f\pprime\) obtained from the DAFS method becomes completely equivalent to \(\mu\) by multiplying \(f\pprime\) by the wavelength, \(\lambda\) (or, equivalently, dividing it by the photon energy). Importantly, \(f\pprime\) is a negative value because \(\mu\) is positive from Eq. \eqref{Eq:mu_and_f_double_prime}. Conventionally, \(f\pprime\) appears as a positive value in many textbooks and/or tables. This causes no problem as long as we discuss the diffraction intensity, where only the square of \(f\pprime\) appears; however, it is important for the DAFS method to distinguish the sign of the \(f\pprime\) values, because we need to extract the site- and/or phase-dependent \(f\pprime\) value itself by directly solving the simultaneous equations of the weighted \(f\pprime\) values.

For further study… (this article was written based on the following books):

J. Als-Nielsen and D. McMorrow, Elements of Modern X-ray Physics, 2nd ed., John Wiley & Sons, 2011.
菊田惺志 (S. Kikuta), X線散乱と放射光科学 基礎編 (X-ray Scattering and Synchrotron Radiation Science: Basics; English title translated by T.K.), 東京大学出版 (University of Tokyo Press), 2011.

10/07/2018 by tkawaguchi

Resonant Scattering

Resonant scattering term

In the previous article "scattering by one electron", the classical Thomson scattering from an extended distribution of free electrons was derived as \(-r_{0}f^{0}(\vec{Q})\), where \(-r_{0}\) is the Thomson scattering length of a single electron and \(f^{0}(\vec{Q})\) is the atomic form factor. The atomic form factor is a Fourier transform of the electron distribution in an atom; therefore, it is a real number and independent of photon energy. In contrast, there exist an absorption edge and a fine structure in an absorption spectrum in the x-ray region. Thus, the absorption term should be included in the scattering length as an imaginary part, which is proportional to the absorption cross-section, by assuming a more elaborate model than that of a cloud of free electrons. As the energy-dependent terms will be derived from the forced oscillator model in the following discussion, the atomic scattering factor consists of real and imaginary energy-dependent terms as well as the conventional atomic form factor, as follows:
\[
f(\vec{Q}, E) = f^{0} (\vec{Q}) + f'(E) + i f\pprime(E),
\label{Eq:complex_scattering_factor}
\]
where \(f'\) and \(f\pprime\) are the real and imaginary parts of the dispersion corrections. These terms are called resonant scattering terms, while at one time they were conventionally referred to as anomalous scattering factors. Usually the \(\vec{Q}\) dependence of the resonant scattering terms is negligible, because the dispersion corrections are dominated by electrons in the core shells, such as the \(K\) shell, which are spatially confined around the atomic nucleus.

Fig. 1. Resonant terms, \(f'\) and \(f\pprime\), of a bare Ni atom from the theoretical calculation.

Fig. 1 shows the theoretical curves of the resonant scattering terms of a bare Ni atom.
\(f'\) shows local minimum cusps at each absorption edge, while \(f\pprime\), which should be, strictly speaking, negative, has absorption edges as described in the XAFS section. These features of the resonant terms enable us to carry out various sophisticated x-ray scattering techniques, e.g., Multi-wavelength Anomalous Diffraction (MAD) for the determination of a unique crystalline structure without suffering from the phase problem [1]. Furthermore, by using the polarization and azimuthal dependences of the resonant terms, resonant scattering techniques also contribute to the determination of the spin and orbital orders seen in strongly-correlated electron systems and in enantiomeric materials such as quartz [2, 3]. Structural analysis is usually carried out with hard x-rays (typically >5 keV), where the \(K\) absorption edges of the 3d transition-metal elements are located, as shown in Fig. 2.

Fig. 2. Resonant terms of Fe, Co and Ni.

Thus, we can evaluate the occupations of similar elements such as Fe, Co and Ni in a crystal by the resonant scattering technique, thanks to the characteristic steep decrease in \(f'\) at each absorption edge, while the nonresonant x-ray diffraction technique hardly distinguishes the contributions from similar elements to a certain crystallographic site. This kind of approach, extracting the structural information of a specific element, is also applicable to the structural analysis of amorphous materials by total scattering measurements [4, 5, 6]. Importantly, the diffraction anomalous fine structure (DAFS) method is also one of the measurement techniques utilizing this resonant feature. In this method, we observe the \(f\pprime\) and \(f'\) spectra, which also reflect the fine structure as seen in an X-ray absorption fine structure (XAFS) spectrum, through the scattering channel.

In this article, a forced charged oscillator model is introduced to explain the basic principles behind how \(f'\) and \(f\pprime\) appear in the atomic scattering factor. This model is obviously classical and a crude approximation; however, it can help us to understand the relationship between \(f'\) and \(f\pprime\), i.e., between scattering and absorption.

The forced charged oscillator model

Suppose that an electron bound in an atom is subjected to the electric field of an incident x-ray beam, \(\vec{E}_{in} = \hat{\vec{x}} E_{0} \e^{-i\omega t}\), which is linearly polarized along the \(x\) axis with amplitude \(E_{0}\) and frequency \(\omega\). The equation of motion for this electron is
\[
\ddot{x} + \Gamma \dot{x} + \omega_{s}^{2}x = - \left( \frac{eE_{0}}{m}\right) \e^{-i\omega t},
\label{Eq:forced_motion_electron}
\]
where \(\Gamma \dot{x}\) is the velocity-dependent damping term corresponding to the dissipation of energy from the applied electric field due to re-radiation, and \(\omega_{s}\) is the resonant frequency, usually much larger than the damping constant, \(\Gamma\). The solution of this differential equation is described as \(x(t) = x_{0}\e^{-i\omega t}\), and consequently the amplitude of the forced oscillation is
\[
x_{0} = - \left( \frac{eE_{0}}{m}\right) \frac{1}{\omega_{s}^{2} - \omega^{2} -i\omega\Gamma}.
\label{Eq:coefficient_of_solution}
\]
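As a quick sanity check on Eq. \eqref{Eq:coefficient_of_solution} (this sketch is not part of the original article), one can integrate the driven oscillator numerically and compare the late-time amplitude with \(|x_{0}|\). Units are dimensionless and all parameter values are arbitrary assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Driven, damped oscillator in dimensionless units (omega_s = 1).
# a0 stands for the drive amplitude eE0/m; all values are assumptions.
omega_s, gamma, a0 = 1.0, 0.1, 1.0
omega = 1.2  # driving frequency

def rhs(t, y):
    x, v = y
    # x'' + gamma*x' + omega_s^2*x = -a0*cos(omega*t): real part of the drive
    return [v, -gamma * v - omega_s**2 * x - a0 * np.cos(omega * t)]

# Integrate well past the transient, which decays on a time scale ~2/gamma.
sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0], max_step=0.05)
numeric_amp = np.max(np.abs(sol.y[0][sol.t > 300.0]))

# Analytic steady-state amplitude from the formula above.
x0 = -a0 / (omega_s**2 - omega**2 - 1j * omega * gamma)
print(f"numerical amplitude = {numeric_amp:.4f}, |x0| = {abs(x0):.4f}")
```

The two numbers should agree closely, confirming that the steady-state motion is entirely characterized by the complex amplitude \(x_{0}\).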
The radiated field for an observer at a distance \(R\) and at time \(t\) is proportional to \(\ddot{x}(t-R/c)\) at the earlier time \(t' = t -R/c\); therefore,
\[
E_{\mathrm{rad}}(R,t) = \left( \frac{e}{4\pi \epsilon_{0} c^{2} R}\right) \ddot{x}(t-R/c),
\]
where the polarization factor \(\hat{\vec{\epsilon}}\cdot \hat{\vec{\epsilon}}'\) is assumed to be 1. By inserting the specific value of \(\ddot{x}(t-R/c)\) calculated from \(x(t) = x_{0}\e^{-i\omega t}\) and Eq. \eqref{Eq:coefficient_of_solution}, the above equation is expanded to
\[
E_{\mathrm{rad}}(R,t) = \frac{\omega^{2}}{\omega_{s}^{2} - \omega^{2} -i\omega\Gamma} \left( \frac{e^{2}}{4\pi \epsilon_{0} m c^{2} }\right) E_{0} \e^{-i\omega t} \left( \frac{\e^{ikR}}{R} \right),
\]
or equivalently
\[
\frac{E_{\mathrm{rad}}(R,t)}{E_{\mathrm{in}}} = -r_{0}\frac{\omega^{2}}{\omega^{2} - \omega_{s}^{2} + i\omega\Gamma} \left( \frac{\e^{ikR}}{R} \right).
\]
The atomic scattering length, \(f_{s}\), is defined to be the amplitude of the outgoing spherical wave, \((\e^{ikR}/R)\) (cf. scattering by one electron). Thus, \(f_{s}\) in units of \(-r_{0}\) is
\[
f_{s} = \frac{\omega^{2}}{\omega^{2} - \omega_{s}^{2} + i\omega\Gamma},
\label{Eq:resonant_terms_of_single_oscillator}
\]
where the subscript \(s\) denotes the "single oscillator". For frequencies much larger than the resonant frequency, i.e., \(\omega \gg \omega_{s}\), the value of \(f_{s}\) should approach the Thomson scattering length of 1. Thus, the following reduction makes the equation clearer for understanding the resonant scattering terms:
\begin{align}
f_{s} &= \frac{\omega^{2} - \omega_{s}^{2} + i\omega \Gamma + \omega_{s}^{2} - i\omega \Gamma}{\omega^{2} - \omega_{s}^{2} + i\omega\Gamma} = 1 + \frac{ \omega_{s}^{2} - i\omega \Gamma}{\omega^{2} - \omega_{s}^{2} + i\omega\Gamma} \notag \\
&\sim 1 + \frac{ \omega_{s}^{2}}{\omega^{2} - \omega_{s}^{2} + i\omega\Gamma},
\label{Eq:fs_reduction}
\end{align}
where the last line follows from the fact that \(\Gamma\) is usually much less than \(\omega_{s}\). Eq. \eqref{Eq:fs_reduction} clearly shows that the second term corresponds to the dispersion correction to the scattering factor. When the dispersion correction is written as \(\Delta f (\omega)\), it is described as
\[
\Delta f ( \omega) = f'_{s} + if\pprime_{s} = \frac{ \omega_{s}^{2}}{\omega^{2} - \omega_{s}^{2} + i\omega\Gamma},
\]
with the real part given by
\[
f'_{s} = \frac{ \omega_{s}^{2} ( \omega^{2} - \omega_{s}^{2} )}{(\omega^{2} - \omega_{s}^{2} )^{2} + (\omega\Gamma)^{2}}
\label{Eq:dispersion_correction_real}
\]
and the imaginary part given by
\[
f\pprime_{s} = - \frac{ \omega_{s}^{2} \omega \Gamma}{(\omega^{2} - \omega_{s}^{2})^{2} + (\omega\Gamma)^{2}}.
\label{Eq:dispersion_correction_imaginary}
\]

Fig. 3. The real and imaginary parts of the dispersion corrections calculated from Eqs. (\ref{Eq:dispersion_correction_real}, \ref{Eq:dispersion_correction_imaginary}). The damping factor, \(\Gamma\), is assumed to be \(0.1\omega_{s}\).

The dispersion correction terms calculated from the forced oscillator model are shown in Fig. 3. The imaginary part of the dispersion correction, \(f\pprime\), corresponds to the absorption, showing a peak profile at \(\omega = \omega_{s}\). In contrast, the absorption spectrum of a real material is like an edge rather than a peak; the discrepancy between the two spectra shows the limitation of the single forced oscillator model.
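The following short script (an illustration added here, not from the original article) evaluates Eqs. \eqref{Eq:dispersion_correction_real} and \eqref{Eq:dispersion_correction_imaginary} on a frequency grid with \(\Gamma = 0.1\,\omega_{s}\), the same damping assumed in the Fig. 3 caption.

```python
import numpy as np

# Single-oscillator dispersion corrections f'_s and f''_s,
# with Gamma = 0.1*omega_s as assumed in the Fig. 3 caption.
omega_s = 1.0
gamma = 0.1 * omega_s
omega = np.linspace(0.5, 1.5, 11) * omega_s

den = (omega**2 - omega_s**2) ** 2 + (omega * gamma) ** 2
f1 = omega_s**2 * (omega**2 - omega_s**2) / den   # f'_s
f2 = -omega_s**2 * omega * gamma / den            # f''_s (negative by this convention)

for w, a, b in zip(omega / omega_s, f1, f2):
    print(f"omega/omega_s = {w:.1f}:  f' = {a:+7.3f}  f'' = {b:+7.3f}")
```

Plotting f1 and f2 against omega reproduces the dispersive (sign-changing) shape of \(f'_{s}\) and the negative peak of \(f\pprime_{s}\) at \(\omega = \omega_{s}\) shown in Fig. 3, in contrast to the edge-like absorption of real materials noted above.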
In order to model this behavior, we need to take into account the so-called oscillator strength, \(g_{o}(\omega_{s})\), which gives the population of the single oscillators as a function of photon energy, to compensate for the gap between the model and a real material; however, it still gives no explanation of XAFS (the fine oscillation observed in condensed materials). Eventually, though the single oscillator model is not adequate for a quantitative understanding of the resonant scattering terms, it is helpful for understanding the emergence of the dispersion correction terms due to the binding of the electron to the nucleus. The quantitative explanation and evaluation require the more sophisticated approach of quantum mechanics, where \(f'\) and \(f\pprime\) are derived from the 1st- and 2nd-order perturbation theory of the interaction Hamiltonian \((e \vec{A} \cdot \vec{p} / m)\).

[1] W. A. Hendrickson, "Determination of macromolecular structures from anomalous diffraction of synchrotron radiation," Science, vol. 254, iss. 5028, pp. 51–58, 1991.
[2] Y. Tanaka, T. Kojima, Y. Takata, A. Chainani, S. W. Lovesey, K. S. Knight, T. Takeuchi, M. Oura, Y. Senba, H. Ohashi, and S. Shin, "Determination of structural chirality of berlinite and quartz using resonant x-ray diffraction with circularly polarized x-rays," Phys. Rev. B, vol. 81, iss. 14, p. 144104, 2010.
[3] Y. Tanaka, T. Takeuchi, S. Lovesey, K. Knight, A. Chainani, Y. Takata, M. Oura, Y. Senba, H. Ohashi, and S. Shin, "Right Handed or Left Handed? Forbidden X-Ray Diffraction Reveals Chirality," Phys. Rev. Lett., vol. 100, iss. 14, p. 145502, 2008.
[4] E. Matsubara and Y. Waseda, "Structural studies of oxide thin films, solutions and quasicrystals by anomalous x-ray scattering method," in Resonant Anomalous X-ray Scattering, ed. G. Materlik, C. J. Sparks, and K. Fischer, Amsterdam, 1994, p. 345.
[5] S. Hosokawa, W.-C. Pilgrim, A. Höhle, D. Szubrin, N. Boudet, J.-F. Bérar, and K. Maruyama, "Key experimental information on intermediate-range atomic structures in amorphous Ge2Sb2Te5 phase change material," J. Appl. Phys., vol. 111, iss. 8, p. 083517, 2012.
[6] K. Ohara, L. Temleitner, K. Sugimoto, S. Kohara, T. Matsunaga, L. Pusztai, M. Itou, H. Ohsumi, R. Kojima, N. Yamada, T. Usuki, A. Fujiwara, and M. Takata, "The Roles of the Ge-Te Core Network and the Sb-Te Pseudo Network During Rapid Nucleation-Dominated Crystallization of Amorphous Ge2Sb2Te5," Adv. Funct. Mater., vol. 22, iss. 11, pp. 2251–2257, 2012.

Calculation of a structure factor

A structure factor is calculated by summing up the scattering factors of each atom, multiplied by the phase at each atomic position in a unit cell, as described previously in an equation.
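Before turning to the site-based bookkeeping below, a minimal sketch of this direct summation may help (this code is not from the original article; the scattering factors are placeholder constants, whereas real \(f_j\) values depend on \(\vec{Q}\) and energy):

```python
import numpy as np

# Direct summation F(hkl) = sum_j f_j * exp(2*pi*i*(h*x_j + k*y_j + l*z_j)).
# The f values below are placeholders (roughly the atomic numbers at Q = 0).
def structure_factor(hkl, atoms):
    return sum(f * np.exp(2j * np.pi * np.dot(hkl, r)) for f, r in atoms)

bcc  = [(26.0, (0.0, 0.0, 0.0)), (26.0, (0.5, 0.5, 0.5))]  # e.g., Fe on both sites
cscl = [(17.0, (0.0, 0.0, 0.0)), (55.0, (0.5, 0.5, 0.5))]  # Cl and Cs: inequivalent

for hkl in [(1, 0, 0), (1, 1, 0)]:
    print(hkl,
          f"BCC |F| = {abs(structure_factor(hkl, bcc)):5.1f},",
          f"CsCl |F| = {abs(structure_factor(hkl, cscl)):5.1f}")
# BCC: |F(100)| = 0 (h+k+l odd is extinct); CsCl: |F(100)| = |f_Cs - f_Cl| != 0.
```

The contrast between the two outputs for (100) is exactly the equivalent-versus-inequivalent site distinction discussed next.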
It is a facile approach to calculate the structure factor in this way; however, the calculation becomes complicated when the unit cell includes a large number of atoms. Furthermore, it is frequently difficult to distinguish the equivalent and non-equivalent sites in this approach. For example, in a body-centered cubic (BCC) metal, atoms occupy the (0, 0, 0) and (1/2, 1/2, 1/2) sites, and all of them are equivalent. Thus, these atoms must have exactly the same local/electronic structure. In contrast, in the cesium chloride structure, (0, 0, 0) and (1/2, 1/2, 1/2) are occupied by chloride and cesium atoms, respectively, and are inequivalent to each other; therefore, the local/electronic structures must be different. The "site-distinguished" analysis of the diffraction anomalous fine structure (DAFS) method provides individual XAFS spectra for the inequivalent sites in the material, as seen in the latter case. These relations and concepts are comprehensively understood in the framework of the space group, which is well described in the International Tables for Crystallography Vol. A [1]. Any crystalline material except for a quasicrystal belongs to a certain space group, and the structure factor of each space group has been calculated and included in the international table [2]. (Note: the book title of [2] is International Tables for X-ray Crystallography, Vol. I, which is an earlier series of International Tables for Crystallography Vol. A. The structure factors for a general position in each space group are given only in this older version.) Thus, the calculation of the structure factor should be carried out based on the crystallographic sites in the space group, which is more versatile and convenient.

The table provides the structure factor as the values "\(A(\vec{G}\cdot \vec{r}_{m})\)" and "\(B(\vec{G}\cdot \vec{r}_{m})\)", whose definitions are as follows:
\begin{align}
A(\vec{G}\cdot \vec{r}_{m}) &= \sum_{e} \cos (\vec{G}\cdot \vec{r}_{m}) \\
B(\vec{G}\cdot \vec{r}_{m}) &= \sum_{e} \sin (\vec{G}\cdot \vec{r}_{m}),
\label{Eq:def_A_and_B}
\end{align}
where \(\sum_{e}\) denotes the summation over the equivalent positions belonging to the site in a unit cell. Then, the structure factor is described with these \(A\) and \(B\) values as
\[
F(\vec{G}) = \sum_{j} f_{j}A_{j} + i \sum_{j} f_{j}B_{j},
\label{Eq:A_B_based_structure_factor}
\]
where \(\sum_{j}\) is the summation over the independent sites, and \(f_{j}\) is the atomic scattering factor of the atom at site \(j\). Note that both \(A\) and \(B\) values should be multiplied by the ratio of the numbers of atoms at the general and special positions. For example, when calculating the structure factor of the \(32e\) site (Wyckoff letter for a special position) in space group \(F d \bar{3} m\) (No. 227), the factor of 32/192 (\(192i\) is the general position in this space group) should be multiplied in order to take into account the overlap of the atoms at the special position. Also note that the value of \(B\) disappears in a space group with centrosymmetry, because \(\sum_{e} \sin (\vec{G}\cdot \vec{r}_{m})\) becomes 0 when the same element is located at \(\vec{r}_{m}\) and \(-\vec{r}_{m}\), due to the odd nature of the \(\sin\) function.

[1] International Tables for Crystallography Volume A, 5th ed., T. Hahn, Ed., Springer, 2006.
[2] International Tables for X-ray Crystallography Volume I, N. F. M. Henry and K. Lonsdale, Eds., Birmingham: Kynoch Press, 1952.
[1] International Tables for Crystallography Volume A, 5th ed., T. Hahn, Ed., Springer, 2006.

[2] International Tables for X-ray Crystallography Volume I, N. F. M. Henry and K. Lonsdale, Eds., Birmingham: Kynoch Press, 1952.

Lorentz factor

In the previous articles, the scattered intensity and the related factors were derived. In an actual experiment of diffraction anomalous fine structure (DAFS), as well as in conventional X-ray diffraction (XRD), we need to "scan" a sample and/or a finite-size detector to obtain the whole shape of a diffraction peak, whose area is proportional to the structure factor described above. Thus, a factor concerning the \(\vec{Q}\) step and/or the detected fraction of the diffraction, which depends on the scattering angle and also on the energy, should be introduced for the evaluation of the scattered intensity; this is called the Lorentz factor. (Note: we usually carry out an XRD measurement with a constant angle step, but this produces unequal intervals in \(Q\) space, because \(\diff Q / \diff \theta = 4 \pi \cos \theta / \lambda\).) This factor is essentially different from the other factors described above, i.e., the atomic scattering factor, the structure factor and the Debye-Waller factor, because the Lorentz factor carries no structural information about the crystalline sample; it derives purely from the experimental geometry. The effect of the Lorentz factor should therefore be corrected for in the subsequent structural and spectroscopic analyses using X-ray diffraction.

In textbooks and articles on conventional XRD, only the angular dependence of the Lorentz factor is discussed, presumably because such a measurement is carried out with a single monochromatic X-ray. In contrast, the correction of the energy dependence of the Lorentz factor is necessary in a spectroscopic analysis like the DAFS method. The following therefore briefly describes the derivation of the Lorentz factor, including its energy dependence as well as that of the scattering angle.

Fig. 1. Scattering from a small crystal for the evaluation of the Lorentz factor. The incident beam is assumed to be monochromatic and collimated, and to fully illuminate the crystalline particle. The scattered intensity \(I_{\mathrm{SC}}\) is proportional to the flux \(\Phi_{0}\) and to the differential cross section of the sample.

Let us start from the case of a single crystal. A schematic of the experimental setup to measure the integrated intensity of diffraction from a single-crystal particle is shown in Fig. 1. In the calculation, it is assumed that the incident and scattered beams are monochromatic, and that the incident beam is perfectly collimated while the scattered beam is not necessarily perfectly collimated, because the number of lattice points \(N\) is finite and the beam has some divergence. The left-hand side of Fig. 1 shows a schematic of the reciprocal space around a reciprocal lattice point, where only the portion of the point lying on the Ewald sphere, the sphere traced by the possible end points of the outgoing wave vector \(\vec{k}'\), is observed at a certain incident angle (purple lines). Thus, the crystal has to be rotated on the \(\theta\) axis (i.e., a rocking scan) to obtain the integrated intensity from the reciprocal lattice point, drawn with light-blue, purple and pink lines.
As shown in the previous article, the Laue function becomes the Dirac delta function when the number of lattice points is sufficiently large; therefore, the integrated intensity involves

\begin{equation}
\int \diff \hat{\vec{k}}' \, \delta (\vec{Q}-\vec{G} ) = \int \diff \hat{\vec{k}}' \, \delta(\vec{k} - \vec{k}' - \vec{G}),
\label{Eq:integration_of_k_LF}
\end{equation}

where \(\hat{\vec{k}}'\) is a unit vector along \(\vec{k}'\). The element of solid angle \(\diff \hat{\vec{k}}'\) is two-dimensional, meaning an integration over all scattering directions at a fixed incident angle. For the calculation, the vector \(\vec{s} = s \hat{\vec{s}}\), where \(\hat{\vec{s}}\) is a unit vector, is introduced instead of \(\vec{k}'\) (see Fig. 2). The integration is then transformed by inserting an integral equal to unity (with the change of variable \(t = s^{2} - k'^{2}\), one finds \(\int s^{2} \delta (s^{2} - k'^{2}) \diff s = \int \sqrt{t + k'^{2}}\, \delta (t)/2 \diff t = k'/2\), hence \(1 = (2/k') \int s^{2} \delta (s^{2} - k'^{2}) \diff s\)):

\begin{equation}
\int \diff \hat{\vec{k}}' \, \delta (\vec{k} - \vec{k}' -\vec{G} ) = \overbrace{\frac{2}{k'} \int s^{2} \delta(s^{2} - k'^{2}) \diff s}^{1}\int \delta(\vec{k} - \vec{s} - \vec{G} ) \diff \hat{\vec{s}},
\label{Eq:addition_one_value_integration_LF}
\end{equation}

where \(\vec{k}'\) is replaced by \(\vec{s}\) in the second integration. The trick of inserting this unit integral and changing variables converts the two-dimensional integration into a three-dimensional one.

Fig. 2. Left: schematic of the reciprocal lattice point (gray ellipse) and the scattering and wave number vectors. Right: transformation of the integration parameters.

Based on the schematic of the integration parameters in Fig. 2, the above equation is further transformed into

\begin{align}
\int \diff \hat{\vec{k}}' \, \delta (\vec{k} - \vec{k}' -\vec{G} ) &= \frac{2}{k'} \int \delta(s^{2} - k'^{2}) \delta(\vec{k} - \vec{s} - \vec{G} ) \diff \hat{\vec{s}}\diff s \notag \\
&= \frac{2}{k'} \int \delta(s^{2} - k'^{2}) \delta(\vec{k} - \vec{s} - \vec{G} ) \diff \vec{s}.
\label{Eq:int_para_change_LF}
\end{align}

The second delta function picks out \(\vec{s} = \vec{k} - \vec{G}\), so the integration reduces to

\begin{align}
\int \diff \hat{\vec{k}}' \, \delta (\vec{k} - \vec{k}' -\vec{G} ) &= \frac{2}{k'} \delta((\vec{k} - \vec{G})\cdot (\vec{k} - \vec{G}) - k'^{2}) \notag \\
&= \frac{2}{k} \delta(G^2 - 2kG \sin \theta ).
\label{Eq:int_result_LF}
\end{align}

This completes the preparation for the integration over \(\theta\), i.e., the evaluation of the integrated intensity under the rocking scan. The differential cross section of the diffraction is described by using the result above as

\begin{equation}
\left( \frac{\diff \sigma}{\diff \Omega}\right)_{\mathrm{int.\ over\ } \vec{k}'} = r_{0}^{2}P |F(\vec{Q})|^{2} N v_{c}^{*} \frac{2}{k}\delta(G^{2} -2kG \sin \theta).
\label{Eq:cross_section_LF}
\end{equation}

Since the integration of the delta function over \(\theta\) gives (using \(G = 2k\sin\theta\) at the diffraction condition)

\begin{equation}
\int \delta(G^{2} -2kG \sin \theta) \diff \theta = \left[ \frac{1}{|2kG \cos \theta|} \right]_{G = 2k\sin\theta} = \frac{1}{2k^{2} \sin 2 \theta},
\label{Eq:int_delta_func_LF}
\end{equation}

the cross section is further derived as

\begin{align}
\left( \frac{\diff \sigma}{\diff \Omega}\right)_{\mathrm{int.\ over\ } \vec{k}', \theta} &= r_{0}^{2}P |F(\vec{Q})|^{2} N v_{c}^{*} \frac{2}{k} \frac{1}{2 k^{2} \sin 2 \theta} \notag \\
&= r_{0}^{2}P |F(\vec{Q})|^{2} N \frac{\lambda^{3}}{v_{c}} \frac{1}{\sin 2 \theta}
\label{Eq:cross_section_result_LF}
\end{align}

and the detected intensity is

\begin{equation}
I_{\mathrm{SC}}\left( \mathrm{ \frac{photons}{sec}} \right) = \Phi_{0} \left( \mathrm{ \frac{photons}{unit\ area \times sec}} \right) r_{0}^{2}P |F(\vec{Q})|^{2} N \frac{\lambda^{3}}{v_{c}} \frac{1}{\sin 2 \theta}.
\label{Eq:scat_intensity_result_LF}
\end{equation}

Therefore, when we discuss the energy dependence of a DAFS spectrum of a single crystal, we need to correct for the factor \(\lambda^{3}/\sin 2 \theta\), i.e., \(1/(E^{3} \sin 2 \theta)\).

For powder diffraction, the Lorentz factor is decomposed into three parts:

\begin{equation}
L(\theta, E) = L_{1}L_{2}L_{3},
\label{Eq:three_LF_in_XRPD}
\end{equation}

where \(L_{1} = 1/ ( E^{3} \sin 2\theta )\) is the same as in the single-crystal case, and \(L_{2}\) and \(L_{3}\) are additional Lorentz factors for powder diffraction, introduced below.

\(L_{2}\) derives from the angle dependence of the number of observable crystalline particles. A sample for X-ray powder diffraction (XRPD) consists of randomly oriented crystallites, whose scattered intensity is a simple sum of the scattering intensities from the individual small crystalline particles. The scattered intensity therefore depends on the number of reciprocal lattice points observable at the same time. The sphere of the numerous reciprocal lattice points, whose radius is \(|\vec{G}| = 2\pi / d \equiv G\), and the observable area on this "reciprocal sphere", drawn as a ribbon, are shown in Fig. 3. Supposing that the number of crystalline particles is \(N\) and that their reciprocal lattice points are homogeneously distributed on the sphere, the angle dependence of the area of the ribbon corresponds to the \(L_{2}\) value. The straight line CP is the perpendicular to the lattice plane of the crystalline particle we observe, and \(\Delta \theta\) is the acceptance angle of the diffraction, set by the divergence and the energy width of the incident beam. Namely, the particles whose reciprocal lattice points fall within the range \(\Delta \theta\) satisfy the diffraction condition.

Fig. 3. Reciprocal lattice point "sphere" and the acceptance angle.

From the geometric consideration of Fig. 3, the number of such particles, \(\Delta N\), is

\begin{align}
\Delta N &= G \Delta \theta \, 2 \pi G \sin (\pi/2 - \theta) \notag \\
&= 2\pi G^{2} \Delta \theta \cos \theta .
\label{Eq:number_of_particle_on_the_ribbon}
\end{align}

Since the area of the whole sphere is \(4\pi G^{2}\), the fraction of observable particles is

\begin{equation}
\frac{\Delta N}{N} = \frac{\Delta \theta \cos \theta}{2}.
\label{Eq:ratio_of_the_number_of_particles}
\end{equation}

Therefore, the integrated intensity of XRPD is proportional to the factor

\begin{equation}
L_{2} = \cos \theta.
\label{Eq:L2_factor}
\end{equation}

The factor \(L_{3}\) arises from the observation technique of XRPD, where we usually observe a portion of the Debye-Scherrer ring by scanning a detector with a finite-size sensitive area. When the camera length, i.e., the distance between the detector and the sample, is \(R\), the radius of the Debye-Scherrer ring at the detector position is \(R \sin 2 \theta\), and consequently its circumference is \(2 \pi R \sin 2 \theta\), as shown in Fig. 4.
Fig. 4. Geometry of the observation of the Debye-Scherrer ring.

If we observe a portion \(\delta R\) of the Debye-Scherrer ring, the ratio of these lengths, i.e., \(\delta R/ (2\pi R \sin 2\theta)\), corresponds to the observable fraction of the scattered intensity. Thus, the integrated intensity is proportional to the factor

\begin{equation}
L_{3} = \frac{1}{\sin 2 \theta}.
\end{equation}

Finally, we obtain the complete Lorentz factor for XRPD and powder-DAFS as follows:

\begin{align}
L(\theta, E) &= L_{1} L_{2} L_{3} \notag \\
&= \frac{1}{E^{3}\sin 2\theta} \cos \theta \frac{1}{\sin 2\theta} \notag \\
&= \frac{1}{4 E^{3}\sin ^{2} \theta \cos \theta}.
\label{Eq:full_lorentz_factor_for_XRPD}
\end{align}

For further study (this article was written based on the following book): J. Als-Nielsen and D. McMorrow, Elements of Modern X-ray Physics, 2nd ed., John Wiley & Sons, 2011.
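As a worked illustration of Eq. \eqref{Eq:full_lorentz_factor_for_XRPD}, the sketch below (my own, not from the book) removes the smooth \(L(\theta, E)\) trend from a powder-DAFS energy scan of a fixed reflection, recomputing the Bragg angle at every energy; the lattice spacing, energies and intensities are placeholder values.

```python
import numpy as np

HC_KEV_A = 12.398  # hc in keV*Angstrom (approximate)

def bragg_theta(E_keV, d_A):
    """Bragg angle (rad) of a reflection with lattice spacing d at photon energy E."""
    return np.arcsin(HC_KEV_A / (2.0 * d_A * E_keV))

def lorentz_powder(E_keV, d_A):
    """L(theta, E) = 1 / (4 E^3 sin^2(theta) cos(theta)) for powder diffraction."""
    th = bragg_theta(E_keV, d_A)
    return 1.0 / (4.0 * E_keV**3 * np.sin(th) ** 2 * np.cos(th))

E = np.linspace(7.0, 8.0, 5)      # placeholder energy scan (keV) across an edge
I_obs = np.ones_like(E)           # placeholder measured integrated intensities
I_corr = I_obs / lorentz_powder(E, d_A=3.5)   # Lorentz-corrected spectrum
print(I_corr / I_corr[0])         # the smooth trend the correction removes
```

Only the relative energy dependence matters for the spectroscopic analysis, so the overall scale of \(L\) is irrelevant here.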
Debye-Waller factor

The lattice was assumed to be "perfectly rigid" in the evaluation of the scattering amplitude from a crystal in the previous article; in a real material, however, the atoms vibrate, due to two distinct causes. The first is the uncertainty principle of quantum mechanics, which is independent of temperature and operates even at 0 K; this is called zero-point fluctuation. The second is the elastic waves and/or phonons in the crystal, which depend on temperature. Whichever mechanism causes the vibration, the atomic vibration reduces the magnitude of the interference between the waves scattered by different atoms, owing to the "ambiguity" of the atomic positions, and eventually decreases the scattering amplitude. This attenuation factor is known as the Debye-Waller factor in X-ray diffraction.

The Debye-Waller factor is affected by several factors. The magnitude of the attenuation basically depends on the element; a heavier element shows a smaller attenuation at a given temperature. Furthermore, the attenuation magnitude also depends on the crystallographic site, even when the same element occupies it. In addition, the vibration effect is enhanced at higher scattering angles, i.e., smaller lattice spacings, because scattering at a higher angle is more sensitive to the phase difference than scattering at a lower angle. Usually, the Debye-Waller factor is implemented in the structure factor by multiplying by an exponential attenuation term, whose derivation is given below. The site selectivity of the diffraction anomalous fine structure (DAFS) method originates from the difference in the contributions of the atoms to a certain diffraction, as described in the derivation of the structure factor. This factor should therefore be included in the DAFS analysis to accurately separate the contribution of each atom at the different crystallographic sites.

For simplicity of the derivation, the scattering amplitude of a crystal consisting of a single element with some displacement from the average positions is evaluated as follows:

\begin{equation}
F^{\mathrm{crystal}}(\vec{Q}) = \sum_{n} f(\vec{Q})\e^{i\vec{Q}\cdot (\vec{R}_{n} + \vec{u}_{n})},
\label{Eq:DW_cal_displacement}
\end{equation}

where \(\vec{R}_{n} + \vec{u}_{n}\) is the instantaneous position of the atom, \(\vec{R}_{n}\) is the time-averaged mean position, and \(\vec{u}_{n}\) is the displacement, whose temporal average, \(\left< \vec{u}_{n} \right>\), is zero by definition.

Since the scattering intensity is calculated by taking the product of the scattering amplitude and its complex conjugate, the time-averaged scattering intensity is

\begin{align}
I &= \left\langle \sum_{m} f(\vec{Q})\e^{i\vec{Q}\cdot (\vec{R}_{m} + \vec{u}_{m})} \sum_{n} f^{*}(\vec{Q})\e^{-i\vec{Q}\cdot (\vec{R}_{n} + \vec{u}_{n})} \right\rangle \notag \\
&= \sum_{m} \sum_{n} f(\vec{Q}) f^{*}(\vec{Q}) \e^{i\vec{Q}\cdot (\vec{R}_{m} -\vec{R}_{n})} \left\langle \e^{i\vec{Q}\cdot (\vec{u}_{m} -\vec{u}_{n})} \right\rangle.
\label{Eq:time_average_intensity}
\end{align}

For the further calculation, the last factor in the second line is rewritten as

\begin{equation}
\left\langle \e^{i\vec{Q}\cdot (\vec{u}_{m} -\vec{u}_{n})} \right\rangle = \left\langle \e^{iQ (u_{Qm} - u_{Qn})} \right\rangle,
\label{Eq:DW_dimension_reduction}
\end{equation}

where \(u_{Qn}\) is the component of \(\vec{u}_{n}\) parallel to \(\vec{Q}\) for the \(n\)-th atom. By using the Baker-Hausdorff theorem,

\begin{equation}
\left\langle \e^{ix} \right\rangle = \e^{-\frac{1}{2} \left\langle x^{2} \right\rangle},
\label{Eq:BH_theory}
\end{equation}

the right-hand side of Eq. \eqref{Eq:DW_dimension_reduction} reduces to

\begin{align}
\left\langle \e^{iQ (u_{Qm} - u_{Qn})} \right\rangle &= \e^{-\frac{1}{2} \left\langle Q^{2}(u_{Qm}-u_{Qn})^{2} \right\rangle} \notag \\
&= \e^{-\frac{1}{2} Q^{2} \left\langle (u_{Qm}-u_{Qn})^{2} \right\rangle} \notag \\
&= \e^{-\frac{1}{2} Q^{2} \left\langle u_{Qm}^{2} \right\rangle} \e^{-\frac{1}{2} Q^{2} \left\langle u_{Qn}^{2} \right\rangle} \e^{Q^{2} \left\langle u_{Qm}u_{Qn} \right\rangle}.
\label{Eq:reduction_of_u}
\end{align}

Because of the translational symmetry, \(\langle u_{Qn}^{2}\rangle = \langle u_{Qm}^{2}\rangle\), and this value is simply written \(\langle u_{Q}^{2}\rangle\). In addition, \(\e^{-Q^{2} \left\langle u_{Q}^{2} \right\rangle /2}\) is written \(\e^{-M}\) in the following derivation. In order to separate the scattering intensity into two terms, the last factor of Eq. \eqref{Eq:reduction_of_u} is written as

\begin{equation}
\e^{Q^{2} \left\langle u_{Qm}u_{Qn} \right\rangle} = 1 + \left\{ \e^{Q^{2} \left\langle u_{Qm}u_{Qn} \right\rangle} - 1 \right\}.
\label{Eq:correlated_vibration}
\end{equation}

Then, the scattered intensity decomposes into two terms:

\begin{align}
I &= \sum_{m} \sum_{n} f(\vec{Q}) \e^{-M} \e^{i\vec{Q}\cdot \vec{R}_{m}} f^{*}(\vec{Q}) \e^{-M} \e^{-i\vec{Q}\cdot \vec{R}_{n}} \notag \\
&+ \sum_{m} \sum_{n} f(\vec{Q}) \e^{-M} \e^{i\vec{Q}\cdot \vec{R}_{m}} f^{*}(\vec{Q}) \e^{-M} \e^{-i\vec{Q}\cdot \vec{R}_{n}} \left\{ \e^{Q^{2} \left\langle u_{Qm}u_{Qn} \right\rangle} - 1 \right\}.
\label{Eq:decomposed_scattered_intensity}
\end{align}

The first term is the elastic scattering from the lattice, i.e., X-ray diffraction; however, the scattering amplitude is weakened by the factor \(\e^{-M} (< 1)\) (the intensity by \(\e^{-2M}\)), which is known as the Debye-Waller factor. This factor can generally be introduced by replacing the atomic scattering factor by

\begin{equation}
f^{\mathrm{atom}} = f^{0}(\vec{Q}) \e^{- \frac{1}{2} Q^{2} \langle u_{Q}^{2} \rangle} \equiv f^{0}(\vec{Q}) \e^{-M}.
\label{Eq:introduction_of_DW_factor}
\end{equation}
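A short numerical sketch (with an assumed, order-of-magnitude mean-square displacement, not a measured value) shows how strongly \(\e^{-M}\) damps the scattering as \(Q\) grows:

```python
import numpy as np

def debye_waller(Q, u2):
    """Amplitude attenuation e^{-M}, M = Q^2 <u_Q^2> / 2 (Q in 1/A, <u_Q^2> in A^2)."""
    return np.exp(-0.5 * np.asarray(Q, dtype=float) ** 2 * u2)

for Q in (1.0, 5.0, 10.0):                 # low- to high-angle reflections
    print(Q, debye_waller(Q, u2=0.01))     # assumed <u_Q^2> = 0.01 A^2
```

The damping is negligible at small \(Q\) but substantial at high angles, which is why high-\(Q\) reflections are the most sensitive to thermal vibration.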
Conventionally, the magnitude of the Debye-Waller factor is given and discussed in the form of \(B_{T}\),

\begin{equation}
M = \frac{1}{2}Q^{2}\langle u_{Q}^{2} \rangle = \frac{1}{2} \left( \frac{4\pi \sin \theta}{\lambda} \right)^{2} \langle u_{Q}^{2} \rangle = B_{T} \left( \frac{\sin \theta}{\lambda} \right)^{2},
\label{Eq:introduction_of_B}
\end{equation}

\begin{equation}
B_{T} \equiv 8\pi^{2} \langle u_{Q}^{2} \rangle,
\label{Eq:definition_of_B}
\end{equation}

for the traditional reason that in XRD descriptions the angle dependence of a parameter is preferably expressed as a function of \((\sin \theta / \lambda)\) rather than of \(Q\) (for example, the atomic form factor is also given in this form in the previous article). If the atoms vibrate isotropically, \(\langle u^{2} \rangle = \langle u^{2}_{x} + u^{2}_{y} + u^{2}_{z} \rangle = 3 \langle u^{2}_{x} \rangle = 3 \langle u_{Q}^{2} \rangle \), and then

\begin{equation}
B_{T, \mathrm{isotropic}} = \frac{8 \pi^{2}}{3}\langle u^{2} \rangle.
\label{Eq:isotropic_DW_factor}
\end{equation}

Though the derivation above assumed a single element, the structure factor for plural elements is analogously derived as

\begin{align}
F &= \sum_{m} f_{m} \exp \left( -M_{m}\right) \exp \left( i\vec{Q}\cdot \vec{r}_{m}\right) \notag \\
&= \sum_{m} f_{m} \exp \left\{ -B_{T, m} \left( \frac{\sin \theta}{\lambda}\right)^{2} \right\} \exp \left( i\vec{Q}\cdot \vec{r}_{m}\right).
\label{Eq:structure_factor_with_DW_factor}
\end{align}

The magnitude of the Debye-Waller factor of each element can be evaluated by a preliminary XRD analysis such as a Rietveld analysis based on XRPD. Typical values are available in International Tables for X-ray Crystallography Vol. II, ranging from 0 to 2. The refined value should be used for the site separation of the absorption spectra obtained by the DAFS method.

Scattering from a crystal

In a crystal, atoms or molecules form a periodic structure with translational symmetry, which is frequently called long-range order. The waves scattered by the individual atoms therefore interfere and create a diffraction pattern, which reflects the crystalline structure. Since each diffraction peak receives a different contribution from each atom, the diffraction anomalous fine structure (DAFS) method is capable of distinguishing crystalline-site-specific spectroscopic information by measuring the energy dependence of the diffraction intensity. This article provides a brief review of the description of conventional X-ray diffraction.

As in the discussion of the scattering from an atom, the scattering amplitude from plural atoms, i.e., a crystal, is calculated from the sum of the phase factors multiplied by the atomic scattering factor of each atom. First, the position of an atom in the crystal is described, taking the translational symmetry into account, as

\begin{equation}
\vec{r}_{l}= \vec{R}_{n} + \vec{r}_{m},
\label{Eq:position_vector}
\end{equation}

\begin{align}
\vec{R}_{n} &\equiv n_{1}\vec{a}_{1}+n_{2}\vec{a}_{2}+n_{3}\vec{a}_{3} \\
\vec{r}_{m} &\equiv x_{m}\vec{a}_{1}+y_{m}\vec{a}_{2}+z_{m}\vec{a}_{3},
\label{Eq:atomic_position_vector_definition}
\end{align}

where \(\vec{R}_{n}\) is the vector specifying the \(n\)-th unit cell, \(\vec{r}_{m}\) is the vector indicating the \(m\)-th atomic position in the unit cell, \(\vec{a}_{1}\), \(\vec{a}_{2}\) and \(\vec{a}_{3}\) are the lattice vectors along the \(a\), \(b\) and \(c\) axes, and \(x_{m}\), \(y_{m}\) and \(z_{m}\) are the fractional coordinates of the \(m\)-th atom.
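Eq. \eqref{Eq:position_vector} translates directly into code; the sketch below builds \(\vec{r}_{l}\) from cell indices and fractional coordinates for a hypothetical orthorhombic cell (the lattice parameters are arbitrary example values):

```python
import numpy as np

# Lattice vectors a1, a2, a3 as the rows of A (a made-up orthorhombic cell, in Angstrom)
A = np.array([[4.0, 0.0, 0.0],
              [0.0, 5.0, 0.0],
              [0.0, 0.0, 6.0]])

def atom_position(n, x):
    """r_l = R_n + r_m, with R_n = n1 a1 + n2 a2 + n3 a3 (cell indices)
    and r_m = x a1 + y a2 + z a3 (fractional coordinates)."""
    return (np.asarray(n, dtype=float) + np.asarray(x, dtype=float)) @ A

print(atom_position((2, 0, 1), (0.5, 0.5, 0.0)))  # -> [10.   2.5  6. ]
```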
Thus, the scattering amplitude from the crystal can be described as

\begin{equation}
F^{\mathrm{crystal}}(\vec{Q}) = \sum^{\mathrm{All\ atoms}}_{l}f_{l}(\vec{Q})\e^{i\vec{Q}\cdot \vec{r}_{l}},
\label{Eq:scat_amp_all_atom}
\end{equation}

where \(f_{l}(\vec{Q})\) is the atomic form factor of the atom placed at position \(\vec{r}_{l}\). (This factor will be replaced by the complex atomic scattering factor including the resonant dispersion terms, i.e., \(f(\vec{Q}) \to f(\vec{Q}, E) = f^{0}(\vec{Q}) + f'(E) + if''(E)\), in the description of resonant scattering and DAFS; for simplicity, these energy-dependent terms are ignored in this derivation.) With Eq. \eqref{Eq:position_vector}, the scattering amplitude is readily decomposed into a contribution from the lattice and one from the inside of the unit cell:

\begin{equation}
F^{\mathrm{crystal}}(\vec{Q}) = \sum^{\mathrm{All\ atoms}}_{\vec{R}_{n}+\vec{r}_{m}} f_{m}(\vec{Q})\e^{i\vec{Q}\cdot(\vec{R}_{n}+\vec{r}_{m})} = \overbrace{\sum_{n}\e^{i\vec{Q}\cdot\vec{R}_{n}}}^{\mathrm{Lattice}} \overbrace{\sum_{m}f_{m}(\vec{Q})\e^{i\vec{Q}\cdot\vec{r}_{m}}}^{\mathrm{Unit\ cell}}.
\label{Eq:decomposition_of_scattering_amplitude}
\end{equation}

The first factor of Eq. \eqref{Eq:decomposition_of_scattering_amplitude} is the sum of the scattering from the lattice, while the second is the sum over the atoms in the unit cell, known as the structure factor.

The diffraction from a crystal is observed under the diffraction condition, conventionally written \(2d\sin \theta = \lambda\). The equivalent diffraction condition in vector form is

\begin{equation}
\vec{Q} = \vec{G},
\label{Eq:laue_condition}
\end{equation}

where \(\vec{G}\) is a reciprocal lattice vector defined as

\begin{equation}
\vec{G} = h\vec{a}^{*}_{1}+k\vec{a}^{*}_{2}+l\vec{a}^{*}_{3}.
\label{Eq:def_G}
\end{equation}

\(h\), \(k\) and \(l\) are integers called the "Miller indices". \(\vec{a}^{*}_{1}\), \(\vec{a}^{*}_{2}\) and \(\vec{a}^{*}_{3}\) are the basis vectors of the reciprocal space, defined as follows (whether the factor \(2\pi\) is included depends on the textbook and the background of its author; in this article I consistently use the definition of the reciprocal lattice vectors with the \(2\pi\) factor):

\begin{align}
\vec{a}^{*}_{1} &= 2\pi \frac{\vec{a}_{2}\times \vec{a}_{3}}{v_{c}}\\
\vec{a}^{*}_{2} &= 2\pi \frac{\vec{a}_{3}\times \vec{a}_{1}}{v_{c}}\\
\vec{a}^{*}_{3} &= 2\pi \frac{\vec{a}_{1}\times \vec{a}_{2}}{v_{c}},
\end{align}

where \(v_{c} = \vec{a}_{1}\cdot (\vec{a}_{2}\times \vec{a}_{3}) = \vec{a}_{2}\cdot (\vec{a}_{3}\times \vec{a}_{1}) = \vec{a}_{3}\cdot (\vec{a}_{1}\times \vec{a}_{2})\) is the volume of the unit cell in real space. These vectors fulfill the condition

\begin{equation}
\vec{a}^{*}_{i} \cdot \vec{a}_{j} = 2\pi \delta_{ij},
\label{Eq:relation_between_real_and_reciprocal_vector}
\end{equation}

where \(\delta_{ij}\) is the Kronecker delta. The diffraction condition derives from the behavior of the first factor on the right-hand side of Eq. \eqref{Eq:decomposition_of_scattering_amplitude}, i.e., the scattering from the lattice:

\begin{equation}
\left| \sum_{n}\e^{i\vec{Q}\cdot\vec{R}_{n}} \right|^{2} \to Nv^{*}_c \sum_{\vec{G}}\delta (\vec{Q}-\vec{G}) \qquad \text{as} \quad N \to \infty,
\label{Eq:laue_function}
\end{equation}

where \(\delta\) is the Dirac delta function, \(v^{*}_{c} = (2\pi)^{3}/v_{c}\) is the volume of the unit cell in reciprocal space, and \(N\) is the total number of unit cells, i.e., \(N_{1}\times N_{2}\times N_{3}\), with \(N_{1}\), \(N_{2}\) and \(N_{3}\) the numbers of unit cells along \(\vec{a}_{1}\), \(\vec{a}_{2}\) and \(\vec{a}_{3}\), respectively.
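The reciprocal basis and the orthogonality relation \(\vec{a}^{*}_{i} \cdot \vec{a}_{j} = 2\pi \delta_{ij}\) can be checked numerically; this sketch reuses the made-up orthorhombic cell from the previous snippet:

```python
import numpy as np

def reciprocal_basis(A):
    """Rows of the returned matrix are a*_1, a*_2, a*_3 (2*pi convention)."""
    a1, a2, a3 = A
    v_c = np.dot(a1, np.cross(a2, a3))   # real-space unit-cell volume
    return 2.0 * np.pi * np.array([np.cross(a2, a3),
                                   np.cross(a3, a1),
                                   np.cross(a1, a2)]) / v_c

A = np.array([[4.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 0.0, 6.0]])
B = reciprocal_basis(A)
print(B @ A.T / (2.0 * np.pi))             # identity matrix: a*_i . a_j = 2*pi delta_ij
G = np.array([1, 1, 0], dtype=float) @ B   # G for the (110) reflection
print(np.linalg.norm(G))                   # |G| = 2*pi/d_110
```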
Therefore, the diffraction intensity, \(\left| F^{\mathrm{crystal}}(\vec{Q}) \right|^{2}\), is observed only when \(\vec{Q} = \vec{G}\), as described in Eq. \eqref{Eq:laue_condition}, since the Dirac delta function \(\delta(x)\) is non-zero only at \(x=0\). Under the diffraction condition, the structure factor reduces as follows:

\begin{align}
\sum_{m}f_{m}(\vec{Q})\exp \left( i\vec{Q}\cdot\vec{r}_{m} \right) &= \sum_{m}f_{m}(\vec{G})\exp \left( i\vec{G}\cdot\vec{r}_{m} \right) \notag \\
&= \sum_{m}f_{m}(\vec{G})\exp \left\{ 2\pi i \left( x_{m}h + y_{m}k + z_{m}l \right) \right\},
\end{align}

where \(\sum_{m}\) is a summation over all atoms in the unit cell. (The calculation of the structure factor is sometimes complicated when many atoms occupy the unit cell; a more sophisticated calculation technique based on the site symmetry is described in the other article.) Therefore, the diffraction intensity from a crystal under the diffraction condition is finally derived to be

\begin{equation}
I(\vec{Q} = \vec{G}) \propto \left| \sum_{m}f_{m}(\vec{G})\exp \left\{ 2\pi i \left( x_{m}h + y_{m}k + z_{m}l \right) \right\} \right|^{2}.
\label{Eq:derived_scattering_amplitude}
\end{equation}

The essence of the site-distinguished analysis by the DAFS method lies in this structure factor, where the atomic scattering factor is multiplied by the phase factor \(\exp \left\{ 2\pi i \left( x_{m}h + y_{m}k + z_{m}l \right) \right\}\). This factor causes a given element to contribute differently to each diffraction line, consequently enabling us to analyze, site-selectively, the energy dependence of the atomic scattering factor of the same element at different sites. (The energy dependence has not yet been introduced in this article; the atomic scattering factor actually depends on the incident photon energy, and the objective of the DAFS analysis is to extract this energy dependence at each crystallographic site.)

Scattering by an atom

In the description of scattering by an atom, we need to take into account the interference of the X-rays radiated from different positions within the atom, since in the quantum-mechanical picture the electron spreads around the atomic nucleus even when the number of electrons is one.

Fig. 1. Configuration of the scattering process by an atom.

Figure 1 shows the configuration of the scattering process by an atom of atomic number \(Z\). \(\vec{k}\) and \(\vec{k}'\) are the wave number vectors, whose lengths are equal, i.e., \(|\vec{k}| = |\vec{k}'| = k = 2\pi/\lambda\); \(\vec{r}\) is the position where we evaluate the interference of the X-rays; \(\rho(\vec{r})\) is the electron density at \(\vec{r}\); and \(\diff V\) is the volume element at \(\vec{r}\). The phase difference \(\Delta \phi(\vec{r})\) between the X-rays radiated at the origin and at position \(\vec{r}\) is

\begin{equation}
\Delta \phi(\vec{r}) = \left( \vec{k} - \vec{k}' \right) \cdot \vec{r} = \vec{Q} \cdot \vec{r},
\label{Eq:phase_difference_in_an_atom}
\end{equation}

\begin{equation}
\vec{Q} = \vec{k} - \vec{k}'.
\label{Eq:dif_Q_vector}
\end{equation}

This interference occurs over the whole atom; therefore, the scattering amplitude is

\begin{equation}
-r_{0}\int \rho(\vec{r}) \e^{i \Delta \phi (\vec{r})} \diff V = -r_{0}\int \rho(\vec{r}) \e^{i \vec{Q}\cdot \vec{r}} \diff V \equiv -r_{0}f^{0}(\vec{Q}),
\label{Eq:integral_of_phases}
\end{equation}

where the integration is carried out over the whole atom and \(f^{0}(\vec{Q})\) is known as the "atomic form factor". From this definition, \(f^{0}(\vec{Q} = 0)\) is identical to the atomic number, i.e., the number of electrons, \(Z\).
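The statement \(f^{0}(\vec{Q}=0) = Z\) can be verified numerically for a simple model density; the sketch below integrates a hydrogen-like 1s density (in atomic units, an assumption chosen only because the result is known analytically):

```python
import numpy as np
from scipy.integrate import quad

Z = 1.0
rho = lambda r: Z**3 / np.pi * np.exp(-2.0 * Z * r)   # hydrogen-like 1s density

# f0(0) = integral of rho over the whole atom = number of electrons
f0_at_zero = quad(lambda r: 4.0 * np.pi * r**2 * rho(r), 0.0, np.inf)[0]
print(f0_at_zero)   # -> 1.0 = Z
```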
Thus, the \(f^{0}(\vec{Q})\) value is a scattering power expressed in units of the number of electrons, called "electron units (eu)", which are frequently used to discuss amplitudes in the DAFS method as well as in the conventional diffraction technique.

When we assume a spherical electron density, i.e., \(\rho(\vec{r}) \to \rho(r)\), the atomic form factor takes a simpler form, because the electron density is then a function of the distance \(r\) only. Using the absolute value \(|\vec{Q}| = 4\pi \sin \theta / \lambda\) (which follows from \(|\vec{Q}|^{2} = |\vec{k} - \vec{k}'|^{2} = 2k^{2} - 2k^{2}\cos 2\theta = 16 \pi^{2} \sin^{2} \theta /\lambda^{2}\)) and carrying out the angular integrations in three-dimensional polar coordinates, the atomic form factor reduces to

\begin{equation}
f^{0}(Q) = \int_{0}^{\infty} 4\pi r^{2} \rho(r) \frac{\sin Qr}{Qr} \diff r.
\label{Eq:integration_f_zero}
\end{equation}

Thus, if one knows the electron density \(\rho(r)\) from a theoretical calculation, the atomic form factor of each element and ion can be evaluated and used for structure analysis by the X-ray diffraction technique. As seen in Eq. \eqref{Eq:integration_f_zero}, the atomic form factor is a function of \(Q\) only; the values determined from electron densities calculated by quantum-mechanical approaches such as Hartree-Fock or Thomas-Fermi-Dirac are therefore available, as functions of \((\sin \theta/ \lambda)\), in International Tables for Crystallography Vol. C [1], in the following form:

\begin{equation}
f^{0}\left(\frac{\sin \theta}{\lambda}\right) = \sum_{i =1}^{4 \mathrm{\ or\ } 5} a_{i} \exp \left\{ -b_{i} \left( \frac{\sin \theta}{\lambda} \right)^{2} \right\} + c,
\label{Eq:model_atomic_form_factor}
\end{equation}

where the \(a_{i}\), \(b_{i}\) and \(c\) values of each element and ion are given in the book.

[1] International Tables For Crystallography Volume C, 3rd ed., E. Prince, Ed., Wiley, 2004.
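In code, the tabulated parametrization of Eq. \eqref{Eq:model_atomic_form_factor} is a one-liner; the coefficients below are made-up placeholders, not values from the International Tables (the only real constraint is \(\sum_i a_i + c = Z\) at \(\sin\theta/\lambda = 0\)):

```python
import numpy as np

def f0_model(s, a, b, c):
    """f0(s) = sum_i a_i exp(-b_i s^2) + c, with s = sin(theta)/lambda."""
    s2 = np.asarray(s, dtype=float) ** 2
    return sum(ai * np.exp(-bi * s2) for ai, bi in zip(a, b)) + c

a = [2.0, 1.5, 1.0, 0.5]    # placeholder coefficients (4-Gaussian form)
b = [10.0, 4.0, 1.0, 0.2]
c = 1.0
print(f0_model(0.0, a, b, c))              # Q = 0 limit: sum(a) + c = 6 "electrons"
print(f0_model([0.2, 0.5, 1.0], a, b, c))  # monotonic fall-off with sin(theta)/lambda
```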
Scattering by one electron

The ability of an electron to scatter an X-ray is described in terms of the differential scattering length, defined as

\begin{equation}
\left( \frac{\diff \sigma}{\diff \Omega}\right) \equiv \frac{I_{\rm{SC}}}{\Phi_0 \Delta \Omega},
\label{Eq:dif_scattering_length}
\end{equation}

where \(\Phi_0\) is the strength of the incident beam (the number of photons passing through a unit area per second), \(I_{\rm{SC}}\) is the number of scattered photons recorded per second in a detector positioned at a distance \(R\) from the object, and \(\Delta \Omega\) is the solid angle of the detector.

Fig.: Configuration of the scattering process by one electron.

The right-hand side of Eq. \eqref{Eq:dif_scattering_length} can also be described by the electric fields of the incoming and scattered X-rays, with \(\Phi_0 = c \left| \vec{E}_{\rm{in}}\right|^2 /\hbar \omega\) and \(I_{\rm{SC}} = cR^2\Delta \Omega \left| \vec{E}_{\rm{rad}}\right|^2 /\hbar \omega\) (here \(\left| \vec{E}\right|^2 /\hbar \omega\) counts the number of photons of energy \(\hbar \omega\) crossing a unit area):

\begin{equation}
\left( \frac{\diff \sigma}{\diff \Omega}\right) = \frac{\left| \vec{E}_{\rm{rad}}\right|^2 R^2}{\left| \vec{E}_{\rm{in}}\right|^2}.
\label{Eq:dif_scattering_length_electric_field}
\end{equation}

In the classical model of elastic X-ray scattering, the scattered X-ray is generated by the electron being forced to vibrate by the electric field of the incoming X-ray. (The scattering process is modeled more precisely by quantum field theory [1].) The radiated field is proportional to the charge of the electron, \(-e\), and to the acceleration \(a_X(t')\) caused by the electric field of the incident X-ray, here linearly polarized in the \(x\)-\(z\) plane, evaluated at a time \(t'\) earlier than the observation time \(t\), since the speed of light has the finite value \(c\). Thus, the electric field of the radiated X-ray is written as

\begin{equation}
E_{\rm{rad}}(R,t) \propto \frac{-e}{R}a_{X}(t')\sin \Psi,
\label{Eq:rad_electric_field}
\end{equation}

where \(t' = t - R/c\). The acceleration due to the force on the electron is evaluated with Newton's equation of motion as

\begin{equation}
a_{X}(t') =\frac{-e E_{0}\e^{-i\omega t'}}{m} = \frac{-e}{m}E_{\rm{in}}\e^{i\omega (R/c)} = \frac{-e}{m}E_{\rm{in}}\e^{ikR},
\label{Eq:acceleration}
\end{equation}

where \(E_{\mathrm{in}} = E_{0}\e^{-i\omega t}\) is the electric field of the incoming X-ray. Therefore, Eq. \eqref{Eq:rad_electric_field} can be rearranged to

\begin{equation}
\frac{E_{\mathrm{rad}}(R,t)}{E_{\mathrm{in}}} \propto \left( \frac{e^2}{m} \right)\frac{\e^{ikR}}{R} \sin \Psi.
\label{Eq:ratio_of_electric_field}
\end{equation}

To complete the derivation of the differential cross section of the electron, it is necessary to check the dimensions of both sides of Eq. \eqref{Eq:ratio_of_electric_field}. The left-hand side is dimensionless, while the dimension of \(\e^{ikR}/R\) is an inverse length; therefore, the proportionality coefficient of Eq. \eqref{Eq:ratio_of_electric_field} must have units of length. Noting that in SI units the Coulomb energy at a distance \(r\) from a point charge \(-e\) is \(e^2/(4\pi \epsilon_{0}r)\), while dimensionally an energy is also given by the form \(mc^2\), the proportionality coefficient \(r_{0}\) is written as

\begin{equation}
r_{0} = \frac{e^2}{4\pi \epsilon_{0} m c^{2}} = 2.82 \times 10^{-5} \ \text{\AA}.
\label{Eq:classic_radius_of_electron}
\end{equation}

This value is known as the Thomson scattering length, or the classical radius of the electron.
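The numerical value of \(r_{0}\) follows directly from the CODATA constants; a quick check with scipy:

```python
from scipy.constants import e, epsilon_0, m_e, c, pi

# Thomson scattering length: r0 = e^2 / (4 pi eps0 m c^2)
r0 = e**2 / (4.0 * pi * epsilon_0 * m_e * c**2)
print(r0)          # ~2.818e-15 m
print(r0 * 1e10)   # ~2.82e-5 Angstrom, as quoted above
```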
By generalizing the relationship between the electric fields of the incident and radiated X-rays, the field ratio of Eq. \eqref{Eq:ratio_of_electric_field} becomes

\begin{equation}
\frac{E_{\mathrm{rad}}(R,t)}{E_{\mathrm{in}}} = -r_{0} \frac{\e^{ikR}}{R} \left| \hat{\vec{\epsilon}} \cdot \hat{\vec{\epsilon}}'\right|,
\label{Eq:complete_ratio_of_electric_field}
\end{equation}

where the "\(-\)" sign indicates that the radiated X-ray is shifted in phase by 180° with respect to the incident X-ray, because the charge of the electron is negative, and \(\hat{\vec{\epsilon}}, \hat{\vec{\epsilon}}'\) are the unit vectors of the electric fields of the incident and radiated X-rays. Therefore, the differential cross section becomes

\begin{equation}
\left( \frac{\diff\sigma}{\diff\Omega} \right) = r_{0}^{2}\left| \hat{\vec{\epsilon}} \cdot \hat{\vec{\epsilon}}'\right|^{2}.
\label{Eq:derived_cross_section}
\end{equation}

The factor \(\left| \hat{\vec{\epsilon}} \cdot \hat{\vec{\epsilon}}'\right|^{2}\) is called the polarization factor, and its value depends on the polarization of the incoming X-ray and on the experimental geometry:

\begin{equation}
P = \left| \hat{\vec{\epsilon}} \cdot \hat{\vec{\epsilon}}'\right|^{2} =
\begin{cases}
1 & \text{horizontal linear polarization, vertical scattering plane} \\
\cos^{2} \Psi & \text{horizontal linear polarization, horizontal scattering plane}\\
\left( 1 + \cos^{2} \Psi \right)/2 & \text{unpolarized source, e.g., an x-ray tube.}
\end{cases}
\label{Eq:polarization_factor}
\end{equation}

This result predicts that the scattered intensity becomes very weak when \(\Psi\) is around 90° and the detector is scanned in the horizontal plane with a horizontally polarized source, which is definitely unfavorable for the usual scattering measurement. (In contrast, a detector for fluorescence X-ray absorption spectroscopy is favorably placed at exactly this position, because of the low contribution from the elastic scattering.) This is the reason why the detector is scanned vertically at a synchrotron radiation facility, whose polarization is generally linear in the horizontal plane, owing to the electron orbit in the storage ring.
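The three geometries of Eq. \eqref{Eq:polarization_factor} are easy to compare numerically; a small sketch (the function and mode names are my own, not a standard API):

```python
import numpy as np

def polarization_factor(psi, mode):
    """|eps . eps'|^2 for the three cases above; psi in radians."""
    psi = np.asarray(psi, dtype=float)
    if mode == "vertical_plane":      # horizontal polarization, vertical scattering plane
        return np.ones_like(psi)
    if mode == "horizontal_plane":    # horizontal polarization, horizontal scattering plane
        return np.cos(psi) ** 2
    if mode == "unpolarized":         # e.g. a laboratory x-ray tube
        return (1.0 + np.cos(psi) ** 2) / 2.0
    raise ValueError(mode)

psi = np.radians(90.0)
print(polarization_factor(psi, "horizontal_plane"))  # ~0: why synchrotron detectors scan vertically
print(polarization_factor(psi, "vertical_plane"))    # 1.0
```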
[1] J. Als-Nielsen and D. McMorrow, Elements of Modern X-ray Physics, 2nd ed., John Wiley & Sons, 2011.

Curing our addiction to oil and stabilizing human-induced climate change are overriding issues of energy policy around the world. Electric vehicles (EVs) are one of the promising technologies for tackling these problems, thanks to their higher energy efficiency and their compatibility with green energy sources in comparison with conventional gasoline cars. A key technology of the EV is the rechargeable battery, such as the lithium ion battery (LIB) already used in our mobile devices; however, further development of the battery is still necessary for the dissemination of EVs.

Schematic of the electrochemical reaction in a lithium ion battery.

A LIB is a kind of rechargeable (secondary) battery, first released in the 1990s by the Japanese company Sony, using LiCoO2, proposed by Mizushima et al. (Prof. Goodenough's group) [1], as the positive electrode (PE) and graphite as the negative electrode (NE). Since then, LIBs have contributed to the development of mobile devices thanks to their high energy density and power and their long cycle life in comparison with the other popularized secondary batteries, such as the lead-acid storage battery, the NiCd battery and the nickel-metal hydride battery [2, 3].

Recently, polyvalent rechargeable batteries [4], in which polyvalent cations, e.g., Mg2+, Ca2+, Al3+, etc., are used as the "guest ion" instead of Li ions, and dual-salt batteries [5] have also attracted attention for their relatively higher energy density and safety compared with LIBs; however, the science of LIBs is still of great importance for its wide applications and as a model system of intercalation chemistry.

In LIBs, the essential electrochemical reaction is Li insertion/extraction (frequently called lithiation/delithiation) between the solid-state electrodes and the liquid electrolyte in a typical cell. During discharge, Li atoms are extracted from the NE, conventionally graphite, into the electrolyte, while Li ions in the electrolyte are inserted into the PE. At the same time, electrons are extracted from the NE and inserted into the PE through the outer circuit, which is connected to a device such as a bulb, or a motor in an EV. In the PE, on the basis of the conventional understanding of solid-state electrochemistry, the valence state of the transition metal, i.e., the Co ion in the figure, changes by receiving the Li ion and the electron as charge compensation, which is one of the important electrochemical reactions in the PE. The above electrochemical reactions are summarized as follows:

\begin{align}
\mathrm{Co(IV)O_{2} + Li^{+} + e^{-} } &\to \mathrm{LiCo(III)O_{2}} \\
\mathrm{LiC_{6}} &\to \mathrm{ C_{6} + Li^{+} + e^{-} }
\end{align}

Thus, in the discharging process the reduction reaction proceeds at the PE, while the oxidation reaction proceeds at the NE. When a Li metal is used as the reference electrode in a three-electrode cell in the same electrolyte, the open circuit voltage (OCV) of the positive and negative electrodes corresponds to the electrode potential (V vs. Li+/Li), which directly corresponds to the chemical potential of Li atoms in each electrode, the chemical potential of Li in Li metal being defined as the standard state. Since the electromotive force is the difference between the electrode potentials of the PE and the NE, a material with a high redox potential is suitable for the PE, whereas one with a low redox potential suits the NE. Such systems are often called "rocking chair type batteries", because the Li ions move back and forth between the NE and the PE through the electrolyte, and the total amount of Li ions in the electrolyte is always constant during battery operation. Thus, any combination of PE and NE materials is allowed in LIBs as long as the candidate material accommodates a considerable amount of Li at a reasonable potential, which may make the science of LIBs fascinating. In current LIBs, the capability of the PE in terms of energy density, which is the product of potential and capacity, is the bottleneck of the whole battery system and needs further development; I would therefore like to focus on the structural chemistry of the PEs.

In the era of the "Li (primary) battery", MnO2 was one of the conventional electrode materials: Li ions are inserted into the MnO2 structure, whereas re-extraction of the inserted Li ions is difficult. In contrast, TiS2 and MoS2 are capable of reversible lithiation/delithiation; they were therefore used as PEs in the "Li (secondary, rechargeable) battery", in which Li metal was used as the NE. Unfortunately, this kind of battery did not become common, because the use of Li metal in a rechargeable battery was risky: short circuits are caused by the dendritic plating nature of pure Li metal.
Through the development period of Li batteries, the emergence of the "Li Ion Battery" from Sony started a new era of Li secondary batteries, in which graphite and Li transition-metal complex oxides are used as the electrodes.

Structures of the typical positive electrodes: (a) LiCoO2, (b) LiMn2O4, (c) LiFePO4.

The most conventional PE material is LiCoO2 (LCO) [1]. Its crystal structure is categorized as the α-NaFeO2-type layered rock-salt structure, where Li and Co atoms occupy the octahedral sites of the face-centered cubic (FCC) lattice of oxygen atoms. The name "layered" derives from the layer-by-layer cation ordering along the [111] direction of the cubic lattice, which reduces the symmetry from cubic to hexagonal. Consequently, the structure of LiCoO2 is understood as a layered compound, where Li occupies the interlayer gallery between CoO2 sheets (see the figure). LiNiO2 (LNO) belongs to this family of electrodes, and various related electrodes have been reported, e.g., LiNi1/2Mn1/2O2 [6, 7] and LiNi1/3Mn1/3Co1/3O2 [8]. In these modified materials, the amount of Li available in the layered structure is significantly improved, whereas only about 0.6 Li can be extracted from LCO while keeping a good charge-discharge cycle life, i.e., cyclability.

LiMn2O4 (LMO) and LiFePO4 (LFP) are also important materials, reported by the same group as LCO [9, 10]. LMO has the normal spinel structure, where Li and Mn occupy the tetrahedral and octahedral sites of the oxygen FCC framework, respectively. The cyclability of LMO is known to be relatively low, because the Jahn-Teller-active Mn3+ ion is unstable in the crystal and dissolves into the electrolyte during cycling. Thus, partial substitution of Mn with other transition elements such as Cr or Ni was conducted in order to keep the valence state of Mn at IV, which effectively improved the cyclability and the potential of the spinel-type electrode [11]; today these are known as high-voltage electrode materials. In both LCO and LMO, only half of the possible valence change of the transition metal is used for the charge compensation accompanying lithiation/delithiation, because of the limited amount of available Li in the layered rock-salt structure and the high Mn/Li atomic ratio in the spinel electrode, respectively. The structure of LiFePO4, which was also reported by Goodenough's group [10], can thus be understood as follows: half of the transition metal in the composition of LiCoO2 is exchanged for phosphorus in order to enhance the structural stability and the electrode potential. This is called the ordered-olivine structure and is categorized among the polyanion compounds. In recent years, LFP has become one of the most promising electrodes thanks to its high thermal stability, low cost and relatively high capacity. Furthermore, LFP is a favorable model material for analyses of the electrochemically driven structural phase transition [12, 13, 14, 15, 16, 17, 18, 19], owing to its simple two-phase reaction over a wide Li composition range. Since the phase transition of LFP involves a considerable volume change of about 5%-8%, the effects of the strain energy on the electrochemical capabilities of the electrode have been intensively studied by our group with phase-field micromechanical simulations [20, 21] and with in-situ simultaneous XRD and XAFS measurements [22].
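Since the energy density is the product of potential and capacity (as noted above), it is useful to recall how a theoretical capacity is estimated; the sketch below uses the standard textbook formula \(Q = nF/(3.6\,M)\) in mAh g\(^{-1}\), which is general electrochemistry rather than anything specific to this article:

```python
from scipy.constants import physical_constants

F = physical_constants["Faraday constant"][0]   # C/mol

def capacity_mAh_per_g(n_electrons, molar_mass_g_mol):
    """Theoretical gravimetric capacity Q = n*F / (3.6*M) in mAh/g."""
    return n_electrons * F / (3.6 * molar_mass_g_mol)

# LiFePO4: one-electron reaction, M ~ 157.8 g/mol -> the familiar ~170 mAh/g
print(capacity_mAh_per_g(1, 157.8))
```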
As further developed electrode materials based on polyanionic LFP, high-capacity electrodes such as Li2FeSiO4 [23, 24, 25, 26, 27], Li2FePO4F [28, 29, 30], Li2MP2O7 (M = Fe, Mn) [31] and Li4NiTeO6 [32] have been reported, in which a two-Li reaction is available and would contribute to a significant improvement of the energy density of the battery. On the other hand, the electrode with the layered rock-salt structure was developed into the Li-rich layered electrode material, in which LiMO2 (M = Mn, Co, Ni) is stabilized by adding electrochemically inactive Li2MnO3 and shows a capacity of >200 mAh g-1 through the extraction of almost one Li from the structure [33, 34], while conventional LiMO2 is destabilized by the extraction of only 0.6-0.7 Li. As described above, Li2MnO3 is electrochemically inactive because the valence state of Mn in this material is Mn4+ and further oxidation by electrochemical delithiation is generally difficult; however, substitution of Mn by other (4d) transition metals such as Ru and Mo opens a new way to utilize these Li-rich layered electrodes [35, 36, 37, 38]. Furthermore, the redox reaction of oxygen has been found to be available in this family of electrodes, as demonstrated by substituting Sn in Li2RuO3 [39], while the importance of the redox contribution of oxygen had already been pointed out by ab initio calculations [40, 41]. Ab initio calculations also contribute greatly to the material design of electrodes for LIBs [42, 43, 44, 45].

Cation mixing is also a very common phenomenon in the above electrode materials, as well as in LNO; it affects the diffusion, the electrode potential, the phase transition, and the electric and magnetic properties, and eventually the whole electrode performance. Furthermore, cation mixing is driven by repetitive cycling, which degrades the energy density of the electrode in the long term. In some cases, complex electrode materials have plural sites for a given transition metal, where the electronic/local structural changes accompanying lithiation/delithiation would differ from site to site. Thus, the demonstration of the site-selective analysis by the DAFS method in LNO is of great importance for showing how to understand the relationship between cation mixing and the electrochemical properties of electrode materials for LIBs.

[1] K. Mizushima, P. C. Jones, P. J. Wiseman, and J. B. Goodenough, "A new cathode material for batteries of high energy density," Mater. Res. Bull., vol. 15, p. 783–789, 1980.

[2] M. Armand and J.-M. Tarascon, "Building better batteries," Nature, vol. 451, iss. 7179, p. 652–657, 2008.
[3] J.-M. Tarascon and M. Armand, "Issues and challenges facing rechargeable lithium batteries," Nature, vol. 414, iss. 6861, p. 359–367, 2001.

[4] D. Aurbach, Z. Lu, A. Schechter, Y. Gofer, H. Gizbar, R. Turgeman, Y. Cohen, M. Moshkovich, and E. Levi, "Prototype systems for rechargeable magnesium batteries," Nature, vol. 407, iss. 6805, p. 724–727, 2000.
[5] S. Yagi, T. Ichitsubo, Y. Shirai, S. Yanai, T. Doi, K. Murase, and E. Matsubara, "A concept of dual-salt polyvalent-metal storage battery," J. Mater. Chem. A, vol. 2, iss. 4, p. 1144–1149, 2014.

[6] K. Kang, C. Chen, B. J. Hwang, and G. Ceder, "Synthesis, electrochemical properties, and phase stability of Li2NiO2 with the Immm structure," Chem. Mater., vol. 16, iss. 13, p. 2685–2690, 2004.
[7] Z. Lu, Z. Chen, and J. R. Dahn, "Lack of cation clustering in Li[NixLi1/3-2x/3Mn2/3-x/3]O2 (0 &lt; x ≤ 1/2) and Li[CrxLi(1-x)/3Mn(2-2x)/3]O2 (0 &lt; x &lt; 1)," Chem. Mater., vol. 15, iss. 16, p. 3214–3220, 2003.
[8] T. Ohzuku and Y. Makimura, "Layered lithium insertion material of LiCo1/3Ni1/3Mn1/3O2 for lithium-ion batteries," Chem. Lett., vol. 30, iss. 7, p. 642–643, 2001.

[9] M. M. Thackeray, P. J. Johnson, L. A. de Picciotto, P. G. Bruce, and J. B. Goodenough, "Electrochemical extraction of lithium from LiMn2O4," Mater. Res. Bull., vol. 19, iss. 2, p. 179–187, 1984.
and Goodenough, J.B.}, doi = {10.1016/0025-5408(84)90088-6}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Materials Research Bulletin/Thackeray et al.\_Materials Research Bulletin\_1984.pdf:pdf}, journal = {Mater. Res. Bull.}, title = {{Electrochemical extraction of lithium from LiMn2O4}}, url = {http://www.sciencedirect.com/science/article/pii/0025540884900886}, [10] A. K. Padhi, K. S. Nanjundaswamy, and J. B. Goodenough, "Phospho‐olivines as Positive‐Electrode Materials for Rechargeable Lithium Batteries," J. electrochem. soc., vol. 144, iss. 4, p. 1–7, 1997. @article{Padhi1997, author = {Padhi, A.K and Nanjundaswamy, K.S and Goodenough, J.B}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/J. Electrochem. Soc/Padhi, Nanjundaswamy, Goodenough\_J. Electrochem. Soc.\_1997.pdf:pdf}, journal = {J. Electrochem. Soc.}, title = {{Phospho‐olivines as Positive‐Electrode Materials for Rechargeable Lithium Batteries}}, url = {http://link.aip.org/link/?JESOAN/144/1188/1}, C. Sigala, D. Guyomard, A. Verbare, Y. Piffard, and M. Tournoux, "Positive electrode materials with high operating voltage for lithium batteries: LiCryMn2 − yO4 (0 ≤ y ≤ 1)," Solid state ionics, vol. 81, iss. 3-4, p. 167–170, 1995. @article{SIGALA1995, abstract = {Reversible lithium deintercalation of chromium-substituted spinel manganese oxides LiCryMn2 − yO4 (0 ≤ y ≤ 1) in the voltage range 3.4–5.4 V versus Li, occurs in two main steps for 0 < y < 1: one at about 4.9 V and the other at about 4 V. The 4.9 V process capacity increases with the chromium content while the 4 V process capacity decreases at the same time. Excellent cyclability was observed for y ≤ 0.5 while materials with y ≥ 0.75 were loosing capacity rapidly upon cycling. Changing the chromium composition of these materials enables the control of the average intercalation voltage in the range 4.05–4.5 V versus Li, a voltage range where no material was known before. A low manganese to chromium substitution rate in LiMn2O4 was found to be beneficial to the specific capacity and energy and to the cyclability of the spinel materials. Due to the selected electrolyte composition with high stability against oxidation, extra capacity due to electrolyte oxidation at each cycle remained very low even though the charge voltage was highly oxidative.}, author = {Sigala, C and Guyomard, D and Verbare, A and Piffard, Y and Tournoux, M}, doi = {10.1016/0167-2738(95)00163-Z}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Solid State Ionics/Sigala et al.\_Solid State Ionics\_1995.pdf:pdf}, journal = {Solid State Ionics}, keywords = {Cathode material,High voltage material,Li battery,Li intercalation,Lithium chromium manganese oxide,Spinel oxides}, number = {3-4}, title = {{Positive electrode materials with high operating voltage for lithium batteries: LiCryMn2 − yO4 (0 ≤ y ≤ 1)}}, url = {http://www.sciencedirect.com/science/article/pii/016727389500163Z}, H. Liu, F. C. Strobridge, O. J. Borkiewicz, K. M. Wiaderek, K. W. Chapman, P. J. Chupas, and C. P. Grey, "Batteries. Capturing metastable structures during high-rate cycling of LiFePO₄ nanoparticle electrodes.," Science, vol. 344, iss. 6191, p. 1252817, 2014. @article{Liu2014, abstract = {The absence of a phase transformation involving substantial structural rearrangements and large volume changes is generally considered to be a key characteristic underpinning the high-rate capability of any battery electrode material. 
In apparent contradiction, nanoparticulate LiFePO4, a commercially important cathode material, displays exceptionally high rates, whereas its lithium-composition phase diagram indicates that it should react via a kinetically limited, two-phase nucleation and growth process. Knowledge concerning the equilibrium phases is therefore insufficient, and direct investigation of the dynamic process is required. Using time-resolved in situ x-ray powder diffraction, we reveal the existence of a continuous metastable solid solution phase during rapid lithium extraction and insertion. This nonequilibrium facile phase transformation route provides a mechanism for realizing high-rate capability of electrode materials that operate via two-phase reactions.}, author = {Liu, Hao and Strobridge, Fiona C and Borkiewicz, Olaf J and Wiaderek, Kamila M and Chapman, Karena W and Chupas, Peter J and Grey, Clare P}, doi = {10.1126/science.1252817}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Science (New York, N.Y.)/Liu et al.\_Science (New York, N.Y.)\_2014.pdf:pdf}, pages = {1252817}, title = {{Batteries. Capturing metastable structures during high-rate cycling of LiFePO₄ nanoparticle electrodes.}}, url = {http://www.sciencemag.org/content/344/6191/1252817}, R. Malik, F. Zhou, and G. Ceder, "Kinetics of non-equilibrium lithium incorporation in LiFePO4," Nat. mater., vol. 10, iss. 8, p. 587–590, 2011. @article{Malik2011, author = {Malik, Rahul and Zhou, Fei and Ceder, G.}, doi = {10.1038/nmat3065}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nat. Mater/Malik, Zhou, Ceder\_Nat. Mater.\_2011.pdf:pdf}, journal = {Nat. Mater.}, title = {{Kinetics of non-equilibrium lithium incorporation in LiFePO4}}, url = {http://www.nature.com/doifinder/10.1038/nmat3065}, C. Delmas, M. Maccario, L. Croguennec, F. {Le Cras}, and F. Weill, "Lithium deintercalation in LiFePO4 nanoparticles via a domino-cascade model.," Nat. mater., vol. 7, p. 665–671, 2008. @article{Delmas2008a, abstract = {Lithium iron phosphate is one of the most promising positive-electrode materials for the next generation of lithium-ion batteries that will be used in electric and plug-in hybrid vehicles. Lithium deintercalation (intercalation) proceeds through a two-phase reaction between compositions very close to LiFePO(4) and FePO(4). As both endmember phases are very poor ionic and electronic conductors, it is difficult to understand the intercalation mechanism at the microscopic scale. Here, we report a characterization of electrochemically deintercalated nanomaterials by X-ray diffraction and electron microscopy that shows the coexistence of fully intercalated and fully deintercalated individual particles. This result indicates that the growth reaction is considerably faster than its nucleation. The reaction mechanism is described by a 'domino-cascade model' and is explained by the existence of structural constraints occurring just at the reaction interface: the minimization of the elastic energy enhances the deintercalation (intercalation) process that occurs as a wave moving through the entire crystal. This model opens new perspectives in the search for new electrode materials even with poor ionic and electronic conductivities.}, author = {Delmas, C and Maccario, M and Croguennec, L and {Le Cras}, F and Weill, F}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nat. Mater/Delmas et al.\_Nat. 
Mater.\_2008.pdf:pdf}, title = {{Lithium deintercalation in LiFePO4 nanoparticles via a domino-cascade model.}}, C. Delacourt, P. Poizot, J. Tarascon, and C. Masquelier, "The existence of a temperature-driven solid solution in LixFePO4 for 0 ≤ x ≤ 1," Nat. mater., vol. 4, iss. 3, p. 254–260, 2005. @article{Delacourt2005, author = {Delacourt, Charles and Poizot, Philippe and Tarascon, Jean-Marie and Masquelier, Christian}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nature Materials/Delacourt et al.\_Nature Materials\_2005.pdf:pdf}, title = {{The existence of a temperature-driven solid solution in LixFePO4 for 0 ≤ x ≤ 1}}, B. Kang and G. Ceder, "Battery materials for ultrafast charging and discharging," Nature, vol. 458, iss. 7235, p. 190–193, 2009. author = {Kang, Byoungwoo and Ceder, Gerbrand}, doi = {10.1038/nature07853}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nature/Kang, Ceder\_Nature\_2009.pdf:pdf}, publisher = {Macmillan Magazines Ltd, Brunel Rd, Houndsmills, Basingstoke, Hants, RG 21 2 XS, UK}, title = {{Battery materials for ultrafast charging and discharging}}, url = {http://dx.doi.org/10.1038/nature07853 http://burgaz.mit.edu/PUBLICATIONS/nature07853.pdf}, W. Dreyer, J. Jamnik, C. Guhlke, R. Huth, J. Moškon, and M. Gaberšček, "The thermodynamic origin of hysteresis in insertion batteries," Nat. mater., vol. 9, iss. 5, p. 448–453, 2010. @article{Dreyer2010, abstract = {Lithium batteries are considered the key storage devices for most emerging green technologies such as wind and solar technologies or hybrid and plug-in electric vehicles. Despite the tremendous recent advances in battery research, surprisingly, several fundamental issues of increasing practical importance have not been adequately tackled. One such issue concerns the energy efficiency. Generally, charging of 10(10)-10(17) electrode particles constituting a modern battery electrode proceeds at (much) higher voltages than discharging. Most importantly, the hysteresis between the charge and discharge voltage seems not to disappear as the charging/discharging current vanishes. Herein we present, for the first time, a general explanation of the occurrence of inherent hysteretic behaviour in insertion storage systems containing multiple particles. In a broader sense, the model also predicts the existence of apparent equilibria in battery electrodes, the sequential particle-by-particle charging/discharging mechanism and the disappearance of two-phase behaviour at special experimental conditions.}, annote = {From Duplicate 1 ( The thermodynamic origin of hysteresis in insertion batteries. - Dreyer, Wolfgang; Jamnik, Janko; Guhlke, Clemens; Huth, Robert; Mo\v{s}kon, Jo\v{z}e; Gaber\v{s}\v{c}ek, Miran )}, author = {Dreyer, Wolfgang and Jamnik, Janko and Guhlke, Clemens and Huth, Robert and Mo\v{s}kon, Jo\v{z}e and Gaber\v{s}\v{c}ek, Miran}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nat. Mater/Dreyer et al.\_Nat. Mater.\_2010.pdf:pdf}, publisher = {Nature Publishing Group}, title = {{The thermodynamic origin of hysteresis in insertion batteries}}, url = {http://www.ncbi.nlm.nih.gov/pubmed/20383130 http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat2730.html}, Y. Asari, Y. Suwa, and T. Hamada, "Formation and diffusion of vacancy-polaron complex in olivine-type LiMnPO_\4\ and LiFePO_\4\," Phys. rev. b, vol. 84, iss. 13, p. 134113, 2011. 
@article{Asari2011, author = {Asari, Yusuke and Suwa, Yuji and Hamada, Tomoyuki}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Phys. Rev. B/Asari, Suwa, Hamada\_Phys. Rev. B\_2011.pdf:pdf}, title = {{Formation and diffusion of vacancy-polaron complex in olivine-type LiMnPO\_\{4\} and LiFePO\_\{4\}}}, Y. Orikasa, T. Maeda, Y. Koyama, H. Murayama, K. Fukuda, H. Tanida, H. Arai, E. Matsubara, Y. Uchimoto, and Z. Ogumi, "Direct observation of a metastable crystal phase of Li(x)FePO4 under electrochemical phase transition.," J. am. chem. soc., vol. 135, iss. 15, p. 5497–500, 2013. @article{Orikasa2013a, abstract = {The phase transition between LiFePO4 and FePO4 during nonequilibrium battery operation was tracked in real time using time-resolved X-ray diffraction. In conjunction with increasing current density, a metastable crystal phase appears in addition to the thermodynamically stable LiFePO4 and FePO4 phases. The metastable phase gradually diminishes under open-circuit conditions following electrochemical cycling. We propose a phase transition path that passes through the metastable phase and posit the new phase's role in decreasing the nucleation energy, accounting for the excellent rate capability of LiFePO4. This study is the first to report the measurement of a metastable crystal phase during the electrochemical phase transition of LixFePO4.}, author = {Orikasa, Yuki and Maeda, Takehiro and Koyama, Yukinori and Murayama, Haruno and Fukuda, Katsutoshi and Tanida, Hajime and Arai, Hajime and Matsubara, Eiichiro and Uchimoto, Yoshiharu and Ogumi, Zempachi}, doi = {10.1021/ja312527x}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/J. Am. Chem. Soc/Orikasa et al.\_J. Am. Chem. Soc.\_2013.pdf:pdf}, journal = {J. Am. Chem. Soc.}, pages = {5497--500}, title = {{Direct observation of a metastable crystal phase of Li(x)FePO4 under electrochemical phase transition.}}, url = {http://dx.doi.org/10.1021/ja312527x}, T. Ichitsubo, K. Tokuda, S. Yagi, M. Kawamori, T. Kawaguchi, T. Doi, M. Oishi, and E. Matsubara, "Elastically constrained phase-separation dynamics competing with charge process in LiFePO4/FePO4 system," J. mater. chem. a, vol. 1, p. 2567–2577, 2013. @article{Ichitsubo2013, author = {Ichitsubo, Tetsu and Tokuda, Kazuya and Yagi, Shunsuke and Kawamori, Makoto and Kawaguchi, Tomoya and Doi, Takayuki and Oishi, Masatsugu and Matsubara, Eiichiro}, doi = {10.1039/c2ta01102f}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Journal of Materials Chemistry A/Ichitsubo et al.\_Journal of Materials Chemistry A\_2013(3).pdf:pdf;:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Journal of Materials Chemistry A/Ichitsubo et al.\_Journal of Materials Chemistry A\_2013.pdf:pdf}, title = {{Elastically constrained phase-separation dynamics competing with charge process in LiFePO4/FePO4 system}}, url = {http://pubs.rsc.org/en/content/articlehtml/2013/ta/c2ta01102f}, T. Ichitsubo, T. Doi, K. Tokuda, E. Matsubara, T. Kida, T. Kawaguchi, S. Yagi, S. Okada, and J. Yamaki, "What determines the critical size for phase separation in LiFePO4 in lithium ion batteries?," J. mater. chem. a, vol. 1, iss. 46, p. 14532–14537, 2013. 
@article{Ichitsubo2013c, author = {Ichitsubo, Tetsu and Doi, Takayuki and Tokuda, Kazuya and Matsubara, Eiichiro and Kida, Tetsuya and Kawaguchi, Tomoya and Yagi, Shunsuke and Okada, Shigeto and Yamaki, Jun-ichi}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Journal of Materials Chemistry A/Ichitsubo et al.\_Journal of Materials Chemistry A\_2013.pdf:pdf}, pages = {14532--14537}, publisher = {Royal Society of Chemistry}, title = {{What determines the critical size for phase separation in LiFePO4 in lithium ion batteries?}}, K. Tokuda, T. Kawaguchi, K. Fukuda, T. Ichitsubo, and E. Matsubara, "Retardation and acceleration of phase separation evaluated from observation of imbalance between structure and valence in LiFePO4/FePO4 electrode," Apl mater., vol. 2, iss. 7, p. 70701, 2014. @article{Tokuda2014, author = {Tokuda, Kazuya and Kawaguchi, Tomoya and Fukuda, Katsutoshi and Ichitsubo, Tetsu and Matsubara, Eiichiro}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/APL Materials/Tokuda et al.\_APL Materials\_2014.pdf:pdf}, journal = {APL Mater.}, title = {{Retardation and acceleration of phase separation evaluated from observation of imbalance between structure and valence in LiFePO4/FePO4 electrode}}, url = {http://scitation.aip.org/content/aip/journal/aplmater/2/7/10.1063/1.4886555}, S. Nishimura, S. Hayase, R. Kanno, M. Yashima, N. Nakayama, and A. Yamada, "Structure of Li2FeSiO4.," J. am. chem. soc., vol. 130, iss. 40, p. 13212–13213, 2008. @article{Nishimura2008a, abstract = {A large-scale lithium-ion battery is the key technology toward a greener society. A lithium iron silicate system is rapidly attracting much attention as the new important developmental platform of cathode material with abundant elements and possible multielectron reactions. The hitherto unsolved crystal structure of the typical composition Li2FeSiO4 has now been determined using high-resolution synchrotron X-ray diffraction and electron diffraction experiments. The structure has a 2 times larger superlattice compared to the previous beta-Li3PO4-based model, and its origin is the periodic modulation of coordination tetrahedra.}, author = {Nishimura, Shinchi and Hayase, Shogo and Kanno, Ryoji and Yashima, Masatomo and Nakayama, Noriaki and Yamada, Atsuo}, doi = {10.1021/ja805543p}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/J. Am. Chem. Soc/Nishimura et al.\_J. Am. Chem. Soc.\_2008.pdf:pdf}, title = {{Structure of Li2FeSiO4.}}, url = {http://dx.doi.org/10.1021/ja805543p}, D. Rangappa, K. D. Murukanahally, T. Tomai, A. Unemoto, and I. Honma, "Ultrathin nanosheets of Li2MSiO4 (M = Fe, Mn) as high-capacity Li-ion battery electrode.," Nano lett., vol. 12, iss. 3, p. 1146–1151, 2012. @article{Rangappa2012, abstract = {Novel ultrathin Li(2)MnSiO(4) nanosheets have been prepared in a rapid one pot supercritical fluid synthesis method. Nanosheets structured cathode material exhibits a discharge capacity of \~{}340 mAh/g at 45 ± 5 °C. This result shows two lithium extraction/insertion performances with good cycle ability without any structural instability up to 20 cycles. 
The two-dimensional nanosheets structure enables us to overcome structural instability problem in the lithium metal silicate based cathode materials and allows successful insertion/extraction of two complete lithium ions.}, author = {Rangappa, Dinesh and Murukanahally, Kempaiah Devaraju and Tomai, Takaaki and Unemoto, Atsushi and Honma, Itaru}, doi = {10.1021/nl202681b}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nano letters/Rangappa et al.\_Nano letters\_2012.pdf:pdf}, journal = {Nano Lett.}, keywords = {Artificial,Electric Power Supplies,Electrodes,Energy Transfer,Equipment Design,Equipment Failure Analysis,Ions,Lithium,Lithium Compounds,Lithium Compounds: chemistry,Lithium: chemistry,Manganese,Manganese: chemistry,Membranes,Nanostructures,Nanostructures: chemistry,Nanostructures: ultrastructure,Oxides,Oxides: chemistry,Particle Size,Sulfates,Sulfates: chemistry}, title = {{Ultrathin nanosheets of Li2MSiO4 (M = Fe, Mn) as high-capacity Li-ion battery electrode.}}, R. Dominko, M. Bele, M. Gaberšček, a. Meden, M. Remškar, and J. Jamnik, "Structure and electrochemical performance of Li2MnSiO4 and Li2FeSiO4 as potential Li-battery cathode materials," Electrochem. commun., vol. 8, iss. 2, p. 217–222, 2006. @article{Dominko2006, author = {Dominko, R. and Bele, M. and Gaber\v{s}\v{c}ek, M. and Meden, a. and Rem\v{s}kar, M. and Jamnik, J.}, doi = {10.1016/j.elecom.2005.11.010}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Electrochem. Commun/Dominko et al.\_Electrochem. Commun.\_2006.pdf:pdf}, journal = {Electrochem. Commun.}, keywords = {Li2FeSiO4,cathode material,crystal structure,iron silicate,lithium-ion battery,manganese silicate}, mendeley-tags = {Li2FeSiO4}, title = {{Structure and electrochemical performance of Li2MnSiO4 and Li2FeSiO4 as potential Li-battery cathode materials}}, url = {http://linkinghub.elsevier.com/retrieve/pii/S1388248105003607}, T. Masese, Y. Orikasa, T. Mori, K. Yamamoto, T. Ina, T. Minato, K. Nakanishi, T. Ohta, C. Tassel, Y. Kobayashi, H. Kageyama, H. Arai, Z. Ogumi, and Y. Uchimoto, "Local structural change in Li2FeSiO4 polyanion cathode material during initial cycling," Solid state ionics, vol. 262, p. 110–114, 2014. @article{Masese2014a, abstract = {To elucidate the Li+ extraction and insertion mechanism for Li2FeSiO4 nanoparticles, at the atomic scale, X-ray absorption spectroscopy (XAS) measurements at Fe and Si K-edges were performed. Fe K-edge XAS spectra suggest irreversible changes occurring in the local and electronic environment of iron which can be attributable to the characteristic shift in potential plateau during initial cycling of Li2−xFeSiO4 system. While the local environment around Fe atoms significantly changes upon initial cycling, the local SiO environment is mostly maintained.}, author = {Masese, Titus and Orikasa, Yuki and Mori, Takuya and Yamamoto, Kentaro and Ina, Toshiaki and Minato, Taketoshi and Nakanishi, Koji and Ohta, Toshiaki and Tassel, C\'{e}dric and Kobayashi, Yoji and Kageyama, Hiroshi and Arai, Hajime and Ogumi, Zempachi and Uchimoto, Yoshiharu}, doi = {10.1016/j.ssi.2013.11.018}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Solid State Ionics/Masese et al.\_Solid State Ionics\_2014.pdf:pdf}, keywords = {Li2FeSiO4,Lithium-ion battery,X-ray absorption spectroscopy (XAS)}, title = {{Local structural change in Li2FeSiO4 polyanion cathode material during initial cycling}}, T. Masese, Y. Orikasa, C. Tassel, J. Kim, T. Minato, H. Arai, T. Mori, K. Yamamoto, Y. Kobayashi, H. 
Kageyama, Z. Ogumi, and Y. Uchimoto, "Relationship between Phase Transition Involving Cationic Exchange and Charge–Discharge Rate in Li 2 FeSiO 4," Chem. mater., vol. 26, iss. 3, p. 1380–1384, 2014. @article{Masese2014, abstract = {Li2FeSiO4 is considered a promising cathode material for the next-generation Li-ion battery systems owing to its high theoretical capacity and low cost. Li2FeSiO4 exhibits complex polymorphism and undergoes significant phase transformations during charge and discharge reaction. To elucidate the phase transformation mechanism, crystal structural changes during charge and discharge processes of Li2FeSiO4 at different rates were investigated by X-ray diffraction measurements. The C/50 rate of lithium extraction upon initial cycling leads to a complete transformation from a monoclinic Li2FeSiO4 to a thermodynamically stable orthorhombic LiFeSiO4, concomitant with the occurrence of significant Li/Fe antisite mixing. The C/10 rate of lithium extraction and insertion, however, leads to retention of the parent Li2FeSiO4 (with the monoclinic structure as a metastable phase) with little cationic mixing. Here, we experimentally show the presence of metastable and stable LiFeSiO4 polymorphic phases caused by lithium extraction and insertion. Li2FeSiO4 is considered a promising cathode material for the next-generation Li-ion battery systems owing to its high theoretical capacity and low cost. Li2FeSiO4 exhibits complex polymorphism and undergoes significant phase transformations during charge and discharge reaction. To elucidate the phase transformation mechanism, crystal structural changes during charge and discharge processes of Li2FeSiO4 at different rates were investigated by X-ray diffraction measurements. The C/50 rate of lithium extraction upon initial cycling leads to a complete transformation from a monoclinic Li2FeSiO4 to a thermodynamically stable orthorhombic LiFeSiO4, concomitant with the occurrence of significant Li/Fe antisite mixing. The C/10 rate of lithium extraction and insertion, however, leads to retention of the parent Li2FeSiO4 (with the monoclinic structure as a metastable phase) with little cationic mixing. Here, we experimentally show the presence of metastable and stable LiFeSiO4 polymorphic phases caused by lithium extraction and insertion.}, author = {Masese, Titus and Orikasa, Yuki and Tassel, C\'{e}dric and Kim, Jungeun and Minato, Taketoshi and Arai, Hajime and Mori, Takuya and Yamamoto, Kentaro and Kobayashi, Yoji and Kageyama, Hiroshi and Ogumi, Zempachi and Uchimoto, Yoshiharu}, doi = {10.1021/cm403134q}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Chemistry of Materials/Masese et al.\_Chemistry of Materials\_2014.pdf:pdf}, title = {{Relationship between Phase Transition Involving Cationic Exchange and Charge–Discharge Rate in Li 2 FeSiO 4}}, url = {http://dx.doi.org/10.1021/cm403134q}, B. L. Ellis, W. R. M. Makahnouk, Y. Makimura, K. Toghill, and L. F. Nazar, "A multifunctional 3.5 V iron-based phosphate cathode for rechargeable batteries," Nat. mater., vol. 6, iss. 10, p. 749–753, 2007. @article{Ellis2007a, abstract = {In the search for new positive-electrode materials for lithium-ion batteries, recent research has focused on nanostructured lithium transition-metal phosphates that exhibit desirable properties such as high energy storage capacity combined with electrochemical stability. Only one member of this class--the olivine LiFePO(4) (ref. 
3)--has risen to prominence so far, owing to its other characteristics, which include low cost, low environmental impact and safety. These are critical for large-capacity systems such as plug-in hybrid electric vehicles. Nonetheless, olivine has some inherent shortcomings, including one-dimensional lithium-ion transport and a two-phase redox reaction that together limit the mobility of the phase boundary. Thus, nanocrystallites are key to enable fast rate behaviour. It has also been suggested that the long-term economic viability of large-scale Li-ion energy storage systems could be ultimately limited by global lithium reserves, although this remains speculative at present. (Current proven world reserves should be sufficient for the hybrid electric vehicle market, although plug-in hybrid electric vehicle and electric vehicle expansion would put considerable strain on resources and hence cost effectiveness.) Here, we report on a sodium/lithium iron phosphate, A(2)FePO(4)F (A=Na, Li), that could serve as a cathode in either Li-ion or Na-ion cells. Furthermore, it possesses facile two-dimensional pathways for Li+ transport, and the structural changes on reduction-oxidation are minimal. This results in a volume change of only 3.7\% that--unlike the olivine--contributes to the absence of distinct two-phase behaviour during redox, and a reversible capacity that is 85\% of theoretical.}, author = {Ellis, B L and Makahnouk, W R M and Makimura, Y and Toghill, K and Nazar, L F}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nat. Mater/Ellis et al.\_Nat. Mater.\_2007.pdf:pdf}, shorttitle = {Nat Mater}, title = {{A multifunctional 3.5 V iron-based phosphate cathode for rechargeable batteries}}, url = {http://dx.doi.org/10.1038/nmat2007}, B. L. Ellis, T. N. Ramesh, W. N. Rowan-Weetaluktuk, D. H. Ryan, and L. F. Nazar, "Solvothermal synthesis of electroactive lithium iron tavorites and structure of Li2FePO4F," J. mater. chem., vol. 22, iss. 11, p. 4759–4766, 2012. @article{Ellis2012, author = {Ellis, B. L. and Ramesh, T. N. and Rowan-Weetaluktuk, W. N. and Ryan, D. H. and Nazar, L. F.}, doi = {10.1039/c2jm15273h}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Journal of Materials Chemistry/Ellis et al.\_Journal of Materials Chemistry\_2012.pdf:pdf}, journal = {J. Mater. Chem.}, title = {{Solvothermal synthesis of electroactive lithium iron tavorites and structure of Li2FePO4F}}, url = {http://xlink.rsc.org/?DOI=c2jm15273h}, N. Recham, J-N. Chotard, L. Dupont, C. Delacourt, W. Walker, M. Armand, and J. -M. Tarascon, "A 3.6 V lithium-based fluorosulphate insertion positive electrode for lithium-ion batteries," Nat. mater., vol. 9, iss. 1, p. 68–74, 2010. @article{Recham2010a, abstract = {Li-ion batteries have contributed to the commercial success of portable electronics, and are now in a position to influence higher-volume applications such as plug-in hybrid electric vehicles. Most commercial Li-ion batteries use positive electrodes based on lithium cobalt oxides. Despite showing a lower voltage than cobalt-based systems (3.45 V versus 4 V) and a lower energy density, LiFePO(4) has emerged as a promising contender owing to the cost sensitivity of higher-volume markets. LiFePO(4) also shows intrinsically low ionic and electronic transport, necessitating nanosizing and/or carbon coating. Clearly, there is a need for inexpensive materials with higher energy densities. 
Although this could in principle be achieved by introducing fluorine and by replacing phosphate groups with more electron-withdrawing sulphate groups, this avenue has remained unexplored. Herein, we synthesize and show promising electrode performance for LiFeSO(4)F. This material shows a slightly higher voltage (3.6 V versus Li) than LiFePO(4) and suppresses the need for nanosizing or carbon coating while sharing the same cost advantage. This work not only provides a positive-electrode contender to rival LiFePO(4), but also suggests that broad classes of fluoro-oxyanion materials could be discovered.}, author = {Recham, N and Chotard, J-N and Dupont, L and Delacourt, C and Walker, W and Armand, M and Tarascon, J.-M.}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nat. Mater/Recham et al.\_Nat. Mater.\_2010.pdf:pdf}, title = {{A 3.6 V lithium-based fluorosulphate insertion positive electrode for lithium-ion batteries}}, S. Nishimura, M. Nakamura, R. Natsui, and A. Yamada, "New lithium iron pyrophosphate as 3.5 V class cathode material for lithium ion battery," J. am. chem. soc., vol. 132, iss. 39, p. 13596–13597, 2010. abstract = {A new pyrophosphate compound Li(2)FeP(2)O(7) was synthesized by a conventional solid-state reaction, and its crystal structure was determined. Its reversible electrode operation at ca. 3.5 V vs Li was identified with the capacity of a one-electron theoretical value of 110 mAh g(-1) even for ca. 1 $\mu$m particles without any special efforts such as nanosizing or carbon coating. Li(2)FeP(2)O(7) and its derivatives should provide a new platform for related lithium battery electrode research and could be potential competitors to commercial olivine LiFePO(4), which has been recognized as the most promising positive cathode for a lithium-ion battery system for large-scale applications, such as plug-in hybrid electric vehicles.}, author = {Nishimura, S and Nakamura, M and Natsui, R and Yamada, A}, doi = {10.1021/ja106297a}, keywords = {Diphosphates,Diphosphates: chemical synthesis,Diphosphates: chemistry,Electric Power Supplies,Electrochemistry,Electrodes,Iron,Iron: chemistry,Lithium,Lithium: chemistry,Models,Molecular}, title = {{New lithium iron pyrophosphate as 3.5 V class cathode material for lithium ion battery}}, M. Sathiya, K. Ramesha, G. Rousse, D. Foix, D. Gonbeau, K. Guruprakash, A. S. Prakash, M. L. Doublet, and J-M. Tarascon, "Li4NiTeO6 as a positive electrode for Li-ion batteries.," Chem. commun., vol. 49, p. 11376–11378, 2013. @article{Sathiya2013d, abstract = {Layered Li4NiTeO6 was shown to reversibly release/uptake ∼2 lithium ions per formula unit with fair capacity retention upon long cycling. The Li electrochemical reactivity mechanism differs from that of Li2MO3 and is rooted in the Ni(4+)/Ni(2+) redox couple, that takes place at a higher potential than conventional LiNi1-xMnxO2 compounds. We explain this in terms of inductive effect due to Te(6+) ions (or the TeO6(6-) moiety).}, author = {Sathiya, M and Ramesha, K and Rousse, G and Foix, D and Gonbeau, D and Guruprakash, K and Prakash, A S and Doublet, M L and Tarascon, J-M}, doi = {10.1039/c3cc46842a}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Chemical communications/Sathiya et al.\_Chemical communications\_2013.pdf:pdf}, journal = {Chem. Commun.}, title = {{Li4NiTeO6 as a positive electrode for Li-ion batteries.}}, url = {http://pubs.rsc.org/en/content/articlehtml/2013/cc/c3cc46842a}, M. M. Thacheray, C. S. Johnson, J. T. Vaughey, N. Li, and S. A. 
Hackney, "Advances in manganese-oxide 'composite' electrodes for lithium-ion batteries," J. mater. chem., vol. 15, iss. 23, p. 2257–2267, 2005. @article{Thacheray2005, abstract = {Recent advances to develop manganese-rich electrodes derived from 'composite' structures in which a Li2MnO3 (layered) component is structurally integrated with either a layered LiMO2 component or a spinel LiM2O4 component, in which M is predominantly Mn and Ni, are reviewed. The electrodes, which can be represented in two-component notation as xLi2MnO3·(1 − x)LiMO2 and xLi2MnO3·(1 − x)LiM2O4, are activated by lithia (Li2O) and/or lithium removal from the Li2MnO3, LiMO2 and LiM2O4 components. The electrodes provide an initial capacity >250 mAh g−1 when discharged between 5 and 2.0 V vs. Li0 and a rechargeable capacity up to 250 mAh g−1 over the same potential window. Electrochemical charge and discharge reactions are followed on compositional phase diagrams. The data bode well for the development and exploitation of high capacity electrodes for the next generation of lithium-ion batteries.}, annote = {とりあえずcomposite electrodeと認識しているLi-rich系のレビュー}, author = {Thacheray, M. M. and Johnson, Christopher S. and Vaughey, John T. and Li, N. and Hackney, Stephen A.}, doi = {10.1039/b417616m}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Journal of Materials Chemistry/Thacheray et al.\_Journal of Materials Chemistry\_2005.pdf:pdf}, title = {{Advances in manganese-oxide 'composite' electrodes for lithium-ion batteries}}, url = {http://pubs.rsc.org/en/content/articlehtml/2005/jm/b417616m}, M. M. Thackeray, S. Kang, C. S. Johnson, J. T. Vaughey, R. Benedek, and S. A. Hackney, "Li2MnO3-stabilized LiMO2 (M = Mn, Ni, Co) electrodes for lithium-ion batteries," J. mater. chem., vol. 17, iss. 30, p. 3112, 2007. @article{Thackeray2007b, author = {Thackeray, Michael M. and Kang, Sun-Ho and Johnson, Christopher S. and Vaughey, John T. and Benedek, Roy and Hackney, S. A.}, doi = {10.1039/b702425h}, title = {{Li2MnO3-stabilized LiMO2 (M = Mn, Ni, Co) electrodes for lithium-ion batteries}}, url = {http://xlink.rsc.org/?DOI=b702425h}, J. Lee, A. Urban, X. Li, D. Su, G. Hautier, and G. Ceder, "Unlocking the potential of cation-disordered oxides for rechargeable lithium batteries.," Science, vol. 343, iss. 6170, p. 519–22, 2014. @article{Lee2014a, abstract = {Nearly all high-energy density cathodes for rechargeable lithium batteries are well-ordered materials in which lithium and other cations occupy distinct sites. Cation-disordered materials are generally disregarded as cathodes because lithium diffusion tends to be limited by their structures. The performance of Li1.211Mo0.467Cr0.3O2 shows that lithium diffusion can be facile in disordered materials. Using ab initio computations, we demonstrate that this unexpected behavior is due to percolation of a certain type of active diffusion channels in disordered Li-excess materials. A unified understanding of high performance in both layered and Li-excess materials may enable the design of disordered-electrode materials with high capacity and high energy density.}, author = {Lee, Jinhyuk and Urban, Alexander and Li, Xin and Su, Dong and Hautier, Geoffroy and Ceder, Gerbrand}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Science (New York, N.Y.)/Lee et al.\_Science (New York, N.Y.)\_2014.pdf:pdf}, title = {{Unlocking the potential of cation-disordered oxides for rechargeable lithium batteries.}}, url = {http://www.sciencemag.org/content/343/6170/519}, H. 
Kobayashi, "Structure and lithium deintercalation of Li2−xRuO3," Solid state ionics, vol. 82, iss. 1-2, p. 25–31, 1995. @article{Kobayashi1995, abstract = {Lithium deintercalation process of lithium ruthenium oxide, Li2RuO3, was characterized by X-ray diffraction and electrochemical measurements. The deintercalation proceeded from x = 0.0 to 1.3 with two-phasic reactions for 0 < x ≤ 0.5 and 0.7 ≤ x ≤ 1.0. Monophasic properties were observed for the compositions, Li1.4RuO3 and Li0.9RuO3; Li1.4RuO3 has a monoclinic cell isostructural to Li2RuO3, and Li0.9RuO3 has rhombohedral symmetry with the ilmenite-related structure. The lithium deintercalation from the lithium layer caused the rearrangement of the oxide-ion array from a cubic close packed (ccp) to a hexagonal close packed (hcp) structure. Further, electrical and magnetic properties were discussed on the basis of electrical and magnetic measurements.}, author = {Kobayashi, H}, doi = {10.1016/0167-2738(95)00135-S}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Solid State Ionics/Kobayashi\_Solid State Ionics\_1995.pdf:pdf}, keywords = {li2−xruo3,lithium deintercalation,ruthenium oxide}, title = {{Structure and lithium deintercalation of Li2−xRuO3}}, url = {http://dx.doi.org/10.1016/0167-2738(95)00135-S}, J. Ma, Y. Zhou, Y. Gao, X. Yu, Q. Kong, L. Gu, Z. Wang, X. Yang, and L. Chen, "Feasibility of Using Li 2 MoO 3 in Constructing Li-Rich High Energy Density Cathode Materials," Chem. mater., vol. 26, iss. 10, p. 3256–3262, 2014. @article{Ma2014, abstract = {Layer-structured xLi2MnO3·(1 ? x)LiMO2 are promising cathode materials for high energy-density Li-ion batteries because they deliver high capacities due to the stabilizing effect of Li2MnO3. However, the inherent disadvantages of Li2MnO3 make these materials suffer from drawbacks such as fast energy-density decay, poor rate performance and safety hazard. In this paper, we propose to replace Li2MnO3 with Li2MoO3 for constructing novel Li-rich cathode materials and evaluate its feasibility. Comprehensive studies by X-ray diffraction, X-ray absorption spectroscopy, and spherical-aberration-corrected scanning transmission electron microscopy clarify its lithium extraction/insertion mechanism and shows that the Mo4+/Mo6+ redox couple in Li2MoO3 can accomplish the task of charge compensation upon Li removal. Other properties of Li2MoO3 such as the nearly reversible Mo-ion migration to/from the Li vacancies, absence of oxygen evolution, and reversible phase transition during initial (de)lithiation indicate that Li2MoO3 meets the requirements to an ideal replacement of Li2MnO3 in constructing Li2MoO3-based Li-rich cathode materials with superior performances. Layer-structured xLi2MnO3·(1 ? x)LiMO2 are promising cathode materials for high energy-density Li-ion batteries because they deliver high capacities due to the stabilizing effect of Li2MnO3. However, the inherent disadvantages of Li2MnO3 make these materials suffer from drawbacks such as fast energy-density decay, poor rate performance and safety hazard. In this paper, we propose to replace Li2MnO3 with Li2MoO3 for constructing novel Li-rich cathode materials and evaluate its feasibility. Comprehensive studies by X-ray diffraction, X-ray absorption spectroscopy, and spherical-aberration-corrected scanning transmission electron microscopy clarify its lithium extraction/insertion mechanism and shows that the Mo4+/Mo6+ redox couple in Li2MoO3 can accomplish the task of charge compensation upon Li removal. 
Other properties of Li2MoO3 such as the nearly reversible Mo-ion migration to/from the Li vacancies, absence of oxygen evolution, and reversible phase transition during initial (de)lithiation indicate that Li2MoO3 meets the requirements to an ideal replacement of Li2MnO3 in constructing Li2MoO3-based Li-rich cathode materials with superior performances.}, author = {Ma, Jun and Zhou, Yong-Ning and Gao, Yurui and Yu, Xiqian and Kong, Qingyu and Gu, Lin and Wang, Zhaoxiang and Yang, Xiao-Qing and Chen, Liquan}, doi = {10.1021/cm501025r}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Chemistry of Materials/Ma et al.\_Chemistry of Materials\_2014.pdf:pdf}, title = {{Feasibility of Using Li 2 MoO 3 in Constructing Li-Rich High Energy Density Cathode Materials}}, url = {http://dx.doi.org/10.1021/cm501025r}, [38] M. Sathiya, K. Ramesha, G. Rousse, D. Foix, D. Gonbeau, A. S. Prakash, M. L. Doublet, K. Hemalatha, and J. -M. Tarascon, "High Performance Li2Ru1–yMnyO3 (0.2 ≤ y ≤ 0.8) Cathode Materials for Rechargeable Lithium-Ion Batteries: Their Understanding," Chem. mater., vol. 25, p. 1121–1131, 2013. @article{Sathiya2013a, author = {Sathiya, M and Ramesha, K and Rousse, G. and Foix, D and Gonbeau, D and Prakash, A.S. and Doublet, M. L. and Hemalatha, K. and Tarascon, J.-M.}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Chemistry of Materials/Sathiya et al.\_Chemistry of Materials\_2013.pdf:pdf}, title = {{High Performance Li2Ru1–yMnyO3 (0.2 ≤ y ≤ 0.8) Cathode Materials for Rechargeable Lithium-Ion Batteries: Their Understanding}}, M. Sathiya, G. Rousse, K. Ramesha, C. P. Laisa, H. Vezin, M. T. Sougrati, M-L. Doublet, D. Foix, D. Gonbeau, W. Walker, A. S. Prakash, M. {Ben Hassine}, L. Dupont, and J-M. Tarascon, "Reversible anionic redox chemistry in high-capacity layered-oxide electrodes," Nat. mater., vol. advance on, 2013. @article{Sathiya2013, author = {Sathiya, M. and Rousse, G. and Ramesha, K. and Laisa, C. P. and Vezin, H. and Sougrati, M. T. and Doublet, M-L. and Foix, D. and Gonbeau, D. and Walker, W. and Prakash, A. S. and {Ben Hassine}, M. and Dupont, L. and Tarascon, J-M.}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nat. Mater/Sathiya et al.\_Nat. Mater.\_2013.pdf:pdf}, title = {{Reversible anionic redox chemistry in high-capacity layered-oxide electrodes}}, volume = {advance on}, M. Aydinol, A. Kohan, G. Ceder, K. Cho, and J. Joannopoulos, "Ab initio study of lithium intercalation in metal oxides and metal dichalcogenides," Phys. rev. b, vol. 56, iss. 3, p. 1354–1365, 1997. @article{Aydinol1997b, author = {Aydinol, M. and Kohan, A. and Ceder, Gerbrand and Cho, K. and Joannopoulos, J.}, doi = {10.1103/PhysRevB.56.1354}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Physical Review B/Aydinol et al.\_Physical Review B\_1997.pdf:pdf}, title = {{Ab initio study of lithium intercalation in metal oxides and metal dichalcogenides}}, url = {http://link.aps.org/doi/10.1103/PhysRevB.56.1354}, G. Ceder, Y. -M. Chiang, D. R. Sadoway, M. K. Aydinol, Y. -I. Jang, and B. Huang, "Identification of cathode materials for lithium batteries guided by first-principles calculations," Nature, vol. 392, iss. 6677, p. 694–696, 1998. @article{Ceder1998c, abstract = {Lithium batteries have the highest energy density of all rechargeable batteries and are favoured in applications where low weight or small volume are desired — for example, laptop computers, cellular telephones and electric vehicles1. 
One of the limitations of present commercial lithium batteries is the high cost of the LiCoO2 cathode material. Searches for a replacement material that, like LiCoO2, intercalates lithium ions reversibly have covered most of the known lithium/transition-metal oxides, but the number of possible mixtures of these2, 3, 4, 5 is almost limitless, making an empirical search labourious and expensive. Here we show that first-principles calculations can instead direct the search for possible cathode materials. Through such calculations we identify a large class of new candidate materials in which non-transition metals are substituted for transition metals. The replacement with non-transition metals is driven by the realization that oxygen, rather than transition-metal ions, function as the electron acceptor upon insertion of Li. For one such material, Li(Co,Al)O2, we predict and verify experimentally that aluminium substitution raises the cell voltage while decreasing both the density of the material and its cost.}, author = {Ceder, Gerbrand and Chiang, Y.-M. and Sadoway, D. R. and Aydinol, M. K. and Jang, Y.-I. and Huang, B.}, doi = {10.1038/33647}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Nature/Ceder et al.\_Nature\_1998.pdf:pdf}, title = {{Identification of cathode materials for lithium batteries guided by first-principles calculations}}, url = {http://dx.doi.org/10.1038/33647}, [42] L. Wang, T. Maxisch, and G. Ceder, "Oxidation energies of transition metal oxides within the GGA+U framework," Phys. rev. b, vol. 73, p. 195107, 2006. @article{Wang2006, author = {Wang, Lei and Maxisch, Thomas and Ceder, Gerbrand}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Phys. Rev. B/Wang, Maxisch, Ceder\_Phys. Rev. B\_2006.pdf:pdf}, keywords = {calculation}, mendeley-tags = {calculation}, title = {{Oxidation energies of transition metal oxides within the GGA+U framework}}, Y. Koyama, Y. Makimura, I. Tanaka, H. Adachi, and T. Ohzuku, "Systematic Research on Insertion Materials Based on Superlattice Models in a Phase Triangle of LiCoO[sub 2]-LiNiO[sub 2]-LiMnO[sub 2]," J. electrochem. soc., vol. 151, iss. 9, p. A1499, 2004. @article{Koyama2004, author = {Koyama, Yukinori and Makimura, Yoshinari and Tanaka, Isao and Adachi, Hirohiko and Ohzuku, Tsutomu}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/J. Electrochem. Soc/Koyama et al.\_J. Electrochem. Soc.\_2004.pdf:pdf}, pages = {A1499}, publisher = {The Electrochemical Society}, title = {{Systematic Research on Insertion Materials Based on Superlattice Models in a Phase Triangle of LiCoO[sub 2]-LiNiO[sub 2]-LiMnO[sub 2]}}, url = {http://jes.ecsdl.org/content/151/9/A1499.full}, S. Mishra and G. Ceder, "Structural stability of lithium manganese oxides," Phys. rev. b, vol. 59, iss. 9, p. 6120–6130, 1999. @article{Mishra1999, abstract = {We have studied stability of lithium-manganese oxides using density functional theory in the local density and generalized gradient approximation (GGA). In particular, the effect of spin-polarization and magnetic ordering on the relative stability of various structures is investigated. At all lithium compositions the effect of spin polarization is large, although it does not affect different structures to the same extent. At composition LiMnO2, globally stable Jahn-Teller distortions could only be obtained in the spin-polarized GGA approximation, and antiferromagnetic spin ordering was critical to reproduce the orthorhombic LiMnO2 structure as ground state. 
We also investigate the effect of magnetism on the Li intercalation potential, an important property for rechargeable Li batteries.}, author = {Mishra, S. and Ceder, Gerbrand}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Physical Review B/Mishra, Ceder\_Physical Review B\_1999.pdf:pdf}, publisher = {American Physical Society}, shorttitle = {Phys. Rev. B}, title = {{Structural stability of lithium manganese oxides}}, J. Reed and G. Ceder, "Role of Electronic Structure in the Susceptibility of Metastable Transition-Metal Oxide Structures to Transformation," Chem. rev., vol. 104, iss. 10, p. 4513–4534, 2004. @article{Reed2004, author = {Reed, John and Ceder, Gerbrand}, doi = {10.1021/cr020733x}, file = {:C$\backslash$:/Users/Tomoya/Documents/Mendeley Desktop/Chemical Reviews/Reed, Ceder\_Chemical Reviews\_2004.pdf:pdf}, journal = {Chem. Rev.}, title = {{Role of Electronic Structure in the Susceptibility of Metastable Transition-Metal Oxide Structures to Transformation}}, url = {http://dx.doi.org/10.1021/cr020733x}, Past talk:15th JRCC seminar I gave a talk about brief introduction of materials science of rechargeable battery study. Thank you for coming. I will give a brief introduction of my study at 15th seminar by Japanese Reserchers Crossing in Chicago (JRCC). The talk will be given in Japanese. The detail information is as follows: Date: Saturday, October 27th 1:00pm~ Venue: Japan Information Center, Consulate-General of Japan in Chicago 737 North Michigan Avenue 10th floor, Chicago, Illinois, 60611 Organizer: Japanese Researchers Crossing in Chicago (JRCC) Supported by Japan Information Center, Consulate-General of Japan in Chicago About a site logo A site logo consisting of hexagons on top of this page represents a motif of an oxide-crystal structure, in which a metal and six oxygen ions form a coordinate octahedron. The colors indicate a subtle difference between the chemical states of the elements, which my developed analysis technique has the advantage to elucidate. Acknoeledgment This web site was developed by WordPress based on Pique theme. The author thanks to people who support and develop this respectable culture. The crystalline-structure models were drown using VESTA. 11/18/2019 Update Publication (four papers added) etc. 12/2/2018 Update a front photo. 12/2/2018 Update Publication (two papers added). 10/7/2018 Post an article (Complex refractive index). 9/30/2018 Post an article (Resonant scattering). 9/18/2018 Post an article (Lorentz factor). 9/17/2018 Post an article (Debye Waller factor). 9/17/2018 Post an article (scattering from a crystal). 9/16/2018 Post an article (scattering by an atom). 9/16/2018 Post an article (scattering from an electron). 8/30/2018 Post an article (lithium-ion battery). 8/16/2018 The site was launched. © 2018 Tomoya Kawaguchi. All rights reserved. Privacy Policy
Iran's Land Suitability for Agriculture

Mohsen B. Mesgaran, Kaveh Madani, Hossein Hashemi & Pooya Azadi

Scientific Reports volume 7, Article number: 7670 (2017)

Increasing population has posed insurmountable challenges to agriculture in the provision of future food security, particularly in the Middle East and North Africa (MENA) region where biophysical conditions are not well-suited for agriculture. Iran, as a major agricultural country in the MENA region, has long been in the quest for food self-sufficiency; however, the capability of its land and water resources to realize this goal is largely unknown. Using very high-resolution spatial data sets, we evaluated the capacity of Iran's land for sustainable crop production based on the soil properties, topography, and climate conditions. We classified Iran's land suitability for cropping as (million ha): very good 0.4% (0.6), good 2.2% (3.6), medium 7.9% (12.8), poor 11.4% (18.5), very poor 6.3% (10.2), unsuitable 60.0% (97.4), and excluded areas 11.9% (19.3). In addition to overarching limitations caused by low precipitation, low soil organic carbon, steep slope, and high soil sodium content were the predominant soil and terrain factors limiting the agricultural land suitability in Iran. About 50% of Iran's existing croplands are located in low-quality lands, representing an unsustainable practice. There is little room for cropland expansion to increase production, but redistribution of cropland to more suitable areas may improve sustainability and reduce pressure on water resources, land, and ecosystems in Iran.

Increasing population and consumption have raised concerns about the capability of agriculture in the provision of future food security1, 2. The overarching effects of climate change pose further threats to the sustainability of agricultural systems3, 4. Recent estimates suggested that global agricultural production should increase by 70% to meet the food demands of a world populated with ca. 9.1 billion people in 20505. Food security is particularly concerning in developing countries, as production should double to provide sufficient food for their rapidly growing populations5, 6. Whether there are enough land and water resources to realize the production growth needed in the future has been the subject of several global-scale assessments7,8,9. The increase in crop production can be achieved through extensification (i.e. allocating additional land to crop production) and/or intensification (i.e. producing a higher yield per unit of land)7. At the global scale, almost 90% of the gain in production is expected to be derived from improvement in the yield, but in developing countries, land expansion (by 120 million ha) would remain a significant contributor to the production growth5, 10. Land suitability evaluations10, yield gap analysis8, 11, and dynamic crop models9 have suggested that sustainable intensification alone or in conjunction with land expansion could fulfil the society's growing food needs in the future. Although the world as a whole is posited to produce enough food for the projected future population, this envisioned food security holds little promise for individual countries, as there exist immense disparities between regions and countries in the availability of land and water resources and in socio-economic development.
Global Agro-Ecological Zone (GAEZ v3.0) analysis12 suggests that there are vast acreages of suitable but unused land in the world (about 1.4 billion ha) that can potentially be exploited for crop production; however, these lands are distributed very unevenly across the globe, with some regions, such as the Middle East and North Africa (MENA), deemed to have very little or no land for expansion. Likewise, globally available fresh water resources exceed current agricultural needs, but due to their patchy distribution, an increasing number of countries, particularly in the MENA region, are experiencing severe water scarcity10. Owing to these regional differences, location-specific analyses are necessary to examine whether the available land and water resources in each country will suffice to meet the future food requirements of its nation, particularly if the country is still experiencing significant population growth.

As a preeminent agricultural country in the MENA region13, Iran has long been pursuing an ambitious plan to achieve food self-sufficiency. Iran's self-sufficiency program for wheat started in 199014, but the low rate of production increase (Supplementary Fig. S1) has never sustainably alleviated the need for grain imports. Currently, Iran's agriculture supplies about 90% of the domestic food demands, but at the cost of consuming 92% of the available freshwater15,16,17,18,19. In rough terms, the net value of agricultural imports (i.e. ~$5B) is equal to 14% of Iran's current oil export gross revenue20. Located in a dry climatic zone, Iran is currently experiencing unprecedented water shortage problems which adversely, and in some cases irreversibly, affect the country's economy, ecosystem functions, and the lives of many people21, 22. The mean annual precipitation is below 250 mm in about 70% of the country, and only 3% of Iran, i.e. 4.7 million ha, receives above 500 mm yr−1 precipitation (Supplementary Fig. S2). The geographical distribution of Iran's croplands (Supplementary Fig. S3) shows that the majority of Iran's cropping activities take place in the west, northwest, and northern parts of the country where annual precipitation exceeds 250 mm (Supplementary Fig. S2). However, irrigated cropping is practiced in regions with precipitation as low as 200 mm year−1, or even below 100 mm year−1. To support agriculture, irrigated farming has been implemented unbridled, which has aggravated the water scarcity problem22, 23. The increase in agricultural production has never been able to keep pace with rising demands propelled by drastic population growth over the past few decades, leading to a negative net international trade for Iran in the agricultural sector, with a declining trend in recent years (Supplementary Fig. S1).

Although justified on geopolitical merits, Iran's self-sufficiency agenda has remained an issue of controversy for both agro-ecological and economic reasons. Natural potentials and constraints for crop production need to be assessed to ensure both suitability and productivity of agricultural systems. However, the extent to which the land and water resources of Iran can meet the nation's future food demand while simultaneously maintaining environmental integrity is not well understood. With recent advances in GIS technology and the availability of geospatial soil and climate data, land suitability analysis can now be conducted to gain insight into the capability of land for agricultural activities at both regional24, 25 and global scales26, 27.
Land evaluation in Iran has been conducted only at local, small scales28 and based on the specific requirements of a small number of crops such as wheat29, rice30, and faba bean31. However, there is no large-scale, country-wide analysis quantifying the suitability of Iran's land for agricultural use. Herein, we systematically evaluated the capacity of Iran's land for agriculture based on the soil properties, topography, and climate conditions that are widely known for their relevance to agricultural suitability. Our main objectives were to: (i) quantify and map the suitability of Iran's land resources for cropping, and (ii) examine whether a further increase in production can be achieved through agricultural expansion and/or the redistribution of croplands without expansion. The analyses were carried out using a large number of geospatial datasets at very high spatial resolutions of 850 m (for soil properties and climate) and 28 m (for topography). Our results will be useful for estimating Iran's future food production capacity and hence have profound implications for the country's food self-sufficiency program and international agricultural trade. Although the focus of this study is Iran, our approach is transferable to other countries, especially to those in the MENA region that are facing similar challenges: providing domestic food to a rapidly growing population on a thirsty land.

We classified Iran's land into six suitability categories based on the soil, topography, and climate variables (see Methods) known to be important for practicing profitable and sustainable agriculture. These suitability classes were unsuitable, very poor, poor, medium, good, and very good (see Methods for details). This classification provides a relative measure for comparing potential crop yields across different lands. Four major land uses that were excluded from the suitability analysis comprised 19.3 million ha (12%) of Iran's land (Supplementary Table S1), leaving 142.8 million ha available for agricultural capability evaluation (Table 1).

Table 1 Area (million ha) and percentage of Iran's land within agricultural suitability classes based on three suitability analysis criteria. Also shown is the total area of lands excluded from the analysis.

Land suitability irrespective of climate limitations

When land suitability was evaluated solely based on the soil and topographic constraints (i.e. excluding climate variables), 120 million ha (74%) of land was found to have a poor or lower suitability rank (Table 2). Lands with a medium suitability cover 17.2 million ha (11%), whilst high-quality lands (good and very good classes) do not exceed 5.8 million ha (4%) (Table 1).

Table 2 List of GIS data used for the suitability analysis of Iran's land for crop production.

The spatial distribution of suitability classes shows that the vast majority of lands in the center, east, and southeast of Iran have a low potential for agriculture irrespective of water availability and other climate variables (Fig. 1). As shown in Fig. 2, the potential agricultural productivity in these regions is mainly constrained by the low amount of organic carbon (OC) and high levels of sodium content (ESP). Based on soil data32, Iran's soil is poor in organic matter, with 67% of the land area estimated to have less than 1% OC. Saline soils, defined by FAO33 as soils with electrical conductivity (EC) > 4 dS/m and pH < 8.2, are common in 41 million ha (25%) of Iran.
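To make the FAO screening concrete, the following minimal sketch labels co-registered soil raster cells with the definitions quoted above. It is an illustration rather than the paper's actual procedure; the function name, the numpy arrays, and the cell-wise workflow are assumptions.

```python
import numpy as np

def classify_soil_chemistry(ec, esp, ph):
    """Label co-registered soil raster cells using the FAO definitions
    quoted in the text: saline soils have EC > 4 dS/m and pH < 8.2;
    sodic soils have ESP > 15% and pH > 8.2.

    Parameters (numpy arrays of identical shape):
    ec  -- electrical conductivity of the saturation extract (dS/m)
    esp -- exchangeable sodium percentage (%)
    ph  -- soil pH
    """
    labels = np.full(ec.shape, "normal", dtype=object)
    labels[(ec > 4.0) & (ph < 8.2)] = "saline"
    labels[(esp > 15.0) & (ph > 8.2)] = "sodic"
    return labels
```

Note that under these exact definitions the two pH conditions are mutually exclusive, so no cell can be flagged as both saline and sodic; representing saline-sodic soils would require a different pH treatment.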
Although many plants are adversely affected by saline soils (EC > 4 dS/m), tolerant crops such as barley and sugar beet can grow almost satisfactorily in soils with ECs as high as 20 dS/m34, which was used as the upper limit of EC in this analysis (see Supplementary Table S1). Although sodic soils (ESP > 15% and pH > 8.2 as per FAO's definition)33 are less abundant in Iran (~0.5 million ha), soils that merely have high ESP (i.e. regardless of pH) cover ~30 million ha (18% of lands). We used an ESP of 45% as the upper limit for cropping since tolerant crops such as sugar beet and olive can produce acceptable yields at such high ESP levels34. As shown in Fig. 2, EC is not listed among the limiting factors, even though soil salinity is known to be a major issue for agriculture in Iran. This discrepancy can be explained by the higher prevalence of soils with ESP > 45% compared to those with EC > 20 dS/m, which can spatially mask saline soils. That is, the total area of soils with EC > 20 dS/m was estimated to be about 6.4 million ha (4% of lands), while soils exceeding the ESP threshold of 45% constituted ~12 million ha (7%), i.e. almost double the area of saline soils.

Fig. 1: Iran's land suitability for agriculture based on soil and topographic variables. See Table 3 for the definitions of suitability classes. Map was generated using QGIS 2.18.

Fig. 2: Edaphic and topographic constraints of agriculture in Iran. Geographical distribution of the limiting soil and topographic factors for lands classified as unsuitable, very poor, and poor as shown in Fig. 1. Suitability > 0.4 corresponds to medium, good, and very good lands (see Table 3). Acronyms: Cation Exchange Capacity, CEC; Organic Carbon, OC; Base Saturation, BS; Exchangeable Sodium Percentage, ESP; Available Water Content, AWC. Map was generated using QGIS 2.18.

Iran's high-quality lands for cropping (good and very good classes) are confined to a narrow strip along the Caspian Sea (Golestan, Mazandaran and Gilan provinces) and a few other provinces in the west and northwest (e.g. West Azerbaijan, Kurdistan, and Kermanshah) (Fig. 1). In the latter provinces, the main agricultural limitations arise from high altitude and steep slopes (Fig. 2), as these regions intersect with the two major mountain ranges in the north (Alborz) and west (Zagros).

Land suitability for rainfed farming

Thus far, the land suitability analysis was based on soil and terrain conditions only, reflecting the potential agricultural productivity of Iran's land without the additional limitations imposed by water availability and climatic variables. However, Iran is located in one of the driest areas of the world, where water scarcity is recognized as the main constraint on agricultural production. Based on the aridity index35 (see Methods), our analysis showed that 98% of Iran can be classified as hyper-arid, arid, or semi-arid (Supplementary Fig. S4). August and January are the driest and wettest months in Iran, respectively, as shown in Fig. 3. Over half of the country experiences hyper-arid climate conditions for five successive months starting from June (Supplementary Fig. S5). This temporal pattern of the aridity index has dire consequences for summer-grown crops, as the amount of available water and/or the rate of water uptake by the crop may not meet the atmospheric evaporative demand during these months, giving rise to very low yields or total crop failure.
It must be noted that a high ratio of precipitation (P) to potential evapotranspiration (PET) in humid regions can also result from low temperature rather than high precipitation. There is a high degree of overlap between regions that experience wetter conditions for most of the year (Supplementary Fig. S5) and those identified as suitable for agriculture based on their soil and terrain conditions (Fig. 1). This spatial overlap suggests that some of the land features relevant to cropping might be correlated with the climate parameters. In fact, soil organic carbon has been found to be positively correlated with precipitation in several studies36,37,38.

Fig. 3: Spatial distribution of the length of the growing period (months) in Iran. The length of the moist growing period was defined as the number of consecutive months wherein precipitation exceeds half the PET39 (see Table 2 for the source of data and Methods for more details). Map was generated using QGIS 2.18.

To incorporate climate variables into our land suitability analysis, we used monthly precipitation and PET as measures of both the overall availability and the temporal variability of water. From the monthly precipitation and PET data, we derived the length of the growing period across Iran (Fig. 3). The growing period was defined as the number of consecutive months wherein precipitation exceeds half the PET39. As shown in Fig. 3, areas where moisture conditions allow a prolonged growing period are predominantly situated in northern, northwestern, and western Iran, with Gilan province exhibiting the longest growing period of 9 months. For over 50% of the lands in Iran, the length of the moist growing period is too short (≤2 months)34 to support any cropping unless additional water is provided through irrigation (Fig. 3). These areas, located in central, eastern, and southeastern Iran, moreover suffer from a shortage of surface and groundwater resources with which to support irrigated farming in a sustainable manner. Taking into account daily climate data and all types of locally available water resources could improve the accuracy of the growing-period estimates. The productivity of rainfed farming is also affected by the selection of planting date40, which often depends on the timing of the first effective rainfall events.
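To make the growing-period rule concrete, here is a small sketch (my own illustration, not the authors' code) of computing LGP from monthly data. Whether runs of moist months are allowed to wrap across the calendar year is not stated in the text, so this sketch assumes they are:

```python
def length_of_growing_period(precip, pet):
    """precip, pet: 12 monthly values (mm). Returns the LGP in months,
    i.e. the longest run of consecutive months in which precipitation
    exceeds half the potential evapotranspiration."""
    moist = [p > 0.5 * e for p, e in zip(precip, pet)]
    doubled, best, run = moist * 2, 0, 0  # doubling handles Dec-Jan runs
    for m in doubled:
        run = run + 1 if m else 0
        best = max(best, run)
    return min(best, 12)

# Illustrative values only (wet winter, dry summer):
precip = [45, 40, 35, 20, 8, 1, 0, 0, 2, 10, 30, 40]
pet    = [30, 35, 60, 90, 140, 190, 220, 210, 160, 100, 55, 35]
print(length_of_growing_period(precip, pet))  # 5 (November-March)
```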
For this joint soil-terrain-climate analysis, all regions with a growing season of two months or shorter were assigned a suitability value of zero and thus classified as unsuitable for agriculture. We then evaluated the capacity of land for rainfed farming by using a precipitation cut-off of 250 mm year−1, which is often regarded as the minimum threshold for rainfed farming (see Supplementary Fig. S6). As shown in Table 1, including the length of the growing period and the precipitation threshold in the analysis only slightly reduced the total area of high-quality lands (good and very good classes), from 5.8 to 5.4 million ha. This implies that most lands with suitable soil and terrain conditions also receive a sufficient amount of moisture to sustain rainfed agriculture. In contrast, the area of unsuitable lands increased from 39.7 to 112.9 million ha when the precipitation and growing-season duration thresholds were superimposed on the soil and topographic constraints. This increase in unsuitable acreage was mainly driven by the demotion of lands from the very poor class to the unsuitable class (Table 1). The addition of moisture constraints also reduced the area of medium-suitability lands by 4.8 million ha.

In summary, in the rainfed farming suitability analysis, 125 million ha (77%) of Iran's land falls into the poor or lower ranks, whilst only 18 million ha (11%) meet the conditions required for the medium or higher suitability classes (Table 1). The geographical distribution of these land classes is mapped in Fig. 4. Almost the entire central part of Iran (Yazd, Semnan, Markazi, and Esfahan), and the vast majority of land in the eastern (South Khorasan and the southern part of Khorasan Razavi), southeastern (Sistan and Baluchistan, and Kerman) and southern (Hormozgan and Bushehr) provinces were found to be unsuitable for rainfed farming. Almost half the area of Khuzestan and three-quarters of Fars province were also characterized as unsuitable. Across the entire east, only the northern part of Khorasan Razavi province contains a belt of marginally suitable lands satisfying the requirements of a potentially prosperous rainfed agriculture (Fig. 4).

Fig. 4: Land suitability for rainfed agriculture. Iran's land suitability for rainfed agriculture was assessed based on soil properties, terrain, and a minimum precipitation threshold of 250 mm year−1. See Table 3 for the definitions of suitability classes. Map was generated using QGIS 2.18.

Land suitability under both rainfed and irrigated conditions

In the next step of the analysis, the suitability of land was scaled with annual precipitation over the range of 100 mm year−1 (minimum level) to 500 mm year−1 (optimal level). The lower limit (100 mm year−1) is deemed to exclude desert areas from agricultural use41, whilst the upper limit (500 mm year−1) represents a benign moisture environment for the growth of many crops34, 42 (see Supplementary Fig. S6). This last analysis, hereafter referred to as the precipitation scaling method, makes no assumption as to whether cropping relies on rainfall or irrigation to satisfy crop water requirements and may thus represent a more comprehensive approach to agricultural suitability assessment. The same minimum length of growing period (>2 months) and the same soil/topographic constraints as in the two previous methods were used in this analysis. Compared to the rainfed agriculture analysis, the precipitation scaling method mainly changed the distribution of lands within the lower suitability classes (Table 1). For example, a great proportion of lands within the unsuitable class shifted up to the very poor and poor classes. This implies that, to a limited extent, irrigation can compensate for below-threshold precipitation (i.e. < 250 mm year−1). Nevertheless, water availability does not necessarily justify agriculture in areas with low soil and topographic suitability. This has an important implication for water management in Iran, which has a long record of making water available to drier areas through groundwater pumping, water transfer, and dam construction.
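A minimal sketch of this scaling step may help fix ideas (my own illustration; I assume a simple linear ramp between the two limits, whereas the paper uses the stepwise function of Supplementary Fig. S7, whose exact breakpoints are not given in the text). Combining factors by taking the minimum follows Liebig's law of the minimum, formalized in the Methods:

```python
def precip_suitability(p_mm_per_year):
    """Scale annual precipitation to a 0-1 suitability value."""
    if p_mm_per_year <= 100:   # lower limit: desert areas excluded
        return 0.0
    if p_mm_per_year >= 500:   # optimal level for most crops
        return 1.0
    return (p_mm_per_year - 100) / (500 - 100)  # assumed linear ramp

def overall_suitability(soil_terrain_scores, p_mm_per_year, lgp_months):
    if lgp_months <= 2:        # growing period too short for any crop
        return 0.0
    # Liebig's law of the minimum: the most limiting factor governs.
    return min(min(soil_terrain_scores), precip_suitability(p_mm_per_year))

print(overall_suitability([0.9, 0.7, 0.8], p_mm_per_year=300, lgp_months=4))
# 0.5 -- precipitation is the limiting factor in this example
```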
The majority of high-quality lands (good and very good classes) that also retain sufficient levels of moisture are located in the western and northern provinces of Iran (Fig. 5). Kermanshah province accommodates the largest area of such lands (763,000 ha), followed by Kurdistan (644,000 ha). High-quality lands were estimated to cover 33% and 21% of these two provinces, respectively. Other provinces with high percentages of high-quality lands were Gilan (24%), Mazandaran (16%), West Azerbaijan (14%), and Lorestan (14%). For 17 provinces, however, high-quality lands covered less than 1% of their total area (Fig. 5).

Fig. 5: Land suitability based on the precipitation scaling method. Iran's agricultural land suitability based on soil properties, terrain, and climate conditions. In this analysis, the suitability of land increases with annual precipitation over the range of 100 to 500 mm year−1 (see Methods for details and Table 3 for the definitions of suitability classes). Map was generated using QGIS 2.18.

Suitability of Iran's existing croplands

To estimate the total area of croplands within each suitability class, we visually inspected 1.2 million ha of Iran's land by randomly sampling images from Google Earth (see Methods). The proportion of land used for cropping increased almost linearly with the suitability values obtained from the precipitation scaling method (Fig. 6). The total cropping area (harvested, fallow, and abandoned) in Iran was estimated to be about 24.6 million ha, which is greater than the value reported by Iran's Ministry of Agriculture (14.5 million ha)17, 18. That authority reports the harvested area; hence, fallow or abandoned lands (i.e. those that might once have been cultivated) are not included in its calculation of active agricultural area. Our visual method, however, captures all lands that are currently under cultivation or were used for cropping in the recent past and are now fallow or set aside (but still retain the landmarks of cultivated land, such as furrows).

Fig. 6: Land suitability of existing croplands. Distribution of Iran's agricultural lands (cultivated or uncultivated) among the suitability classes of the precipitation scaling method (Fig. 5). The left panel shows the percentage of the land within each suitability class that has been used for cropping. The donut chart (right) shows the proportion of Iran's total agricultural area that falls within each suitability class. The slope, intercept, and R2 values for the linear regression model (dashed line) are 108.8, 6.2 and 0.98, respectively.

The relative distribution of croplands amongst the suitability classes (Fig. 6) shows that about 52% (13 million ha) of the croplands in Iran are located in areas of poor suitability or lower ranks as identified by the precipitation scaling method. Particularly concerning are the 4.2 million ha of lands (i.e. 17% of the total agricultural area) that fall within the unsuitable class. Approximately 3.4 million ha (14%) of cropping areas occur on good and very good lands (Fig. 6). However, no agricultural expansion can be practiced in these areas, as all available lands in these suitability classes have already been fully exploited. Medium-quality lands comprise 12.8 million ha (8%) of Iran's land surface area (Table 1), of which about 8.6 million ha (67%) have already been allocated to agriculture (Fig. 6). Nevertheless, due to their sparse spatial distribution and lack of proper access, only a small portion of the unused lands with medium suitability (4.2 million ha) can practically be deployed for agriculture.

Using FAO's spatial data on rainfed wheat yield in Iran12, we estimated the mean yield for wheat cropping areas located within each of the six suitability classes. As shown in Fig. 7, the yield of rainfed wheat increased proportionally with the suitability index, showing that our suitability index adequately translates to crop yield. Using the observed yield-suitability relationship (Fig. 7), we estimated that 0.8 million tonnes of wheat grain (~8% of Iran's wheat production in 2014–2015) could be produced per year by allocating 1 million ha of the unused lands in the medium suitability class to rainfed wheat cropping.

Fig. 7: Rainfed wheat yield as related to land suitability. Georeferenced data on rainfed wheat yield in Iran, obtained from FAO12, showed a linear relationship with land suitability values. The slope, intercept, and R2 values for the linear regression model (dashed line) are 1.46, 0.12 and 0.98, respectively.
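As a quick arithmetic check on the 0.8-million-tonne estimate (a back-of-envelope calculation assuming a representative suitability value of about 0.5 for the medium class, together with the Fig. 7 regression):

$$\hat{Y}\approx 1.46\times 0.5+0.12\approx 0.85\,\mathrm{t\,ha^{-1}},\qquad 0.85\,\mathrm{t\,ha^{-1}}\times 10^{6}\,\mathrm{ha}\approx 0.8\times 10^{6}\,\mathrm{t}$$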
Whilst the insufficiency of water resources has long been recognized as a major impediment to developing productive agriculture in Iran, our study highlights the additional limitations imposed by the paucity of suitable land resources. Environmental pressures will further limit the scope for land expansion. For instance, Iran, as a member of the Convention on Biological Diversity, is obliged to fulfil the Aichi Biodiversity Targets, whose Target 11 requires Iran to expand its protected areas to 17% of its territory by 202043, almost double the current extent of protected areas in Iran (Supplementary Table S1). Agriculture also has to compete with other types of land use, with urbanization being an important driver of agricultural land loss44. Desertification, which converts arable land into barren desert, is a growing global concern, particularly in the MENA region45 and in Iran46.

The redistribution of croplands from low-quality lands to more suitable ones has the potential to improve crop yields and the sustainability of agriculture in Iran. A recent global-scale study concluded that by reallocating croplands to suitable environmental conditions, global biomass production could increase by 30% even without any land expansion9. However, reallocation planning requires accurate mapping of croplands, which is not currently available for Iran. Inefficient agricultural practices on unsuitable lands need to be avoided, as they produce little yield at the cost of exacerbating land degradation and water scarcity. Our estimates show that rainfed wheat production from a small acreage of 1.0 million ha in the medium suitability class can equal that from 5.5 million ha of lands in unsuitable or very poor areas (Table 3). Although this conclusion may not hold for other crops grown in Iran, wheat is a reasonable candidate for such a generalization: it is the most widely cultivated crop in the country (50% of the total harvested area)17 and is considered a low-demanding plant adapted to a broad range of contrasting environments.

Table 3: Conversion of suitability values to suitability classes.

Redistribution of croplands, however, will not be a trivial task for either Iranian decision makers or stakeholders, owing to various socio-economic and logistical barriers. Lands found suitable for agriculture may not be easily accessible if they are sparsely scattered or located in remote areas. Given the land and water limitations, increasing crop production in Iran needs to be achieved through sustainable intensification, which several global-scale studies have found to be a promising approach for ensuring food security7, 8. As such, it is of vital importance for Iran to use its limited agricultural lands properly, improve water use efficiency, optimize crop pattern distribution, and adopt modern cultivation techniques.
Practicing certain industrial agriculture methods on the unsuitable lands might be a viable strategy to sustainably maintain these lands in the agricultural sector while avoiding the potential socio-economic and political costs associated with redistributing agricultural lands and farming populations. For example, protected agriculture (e.g. hydroponic greenhouse facilities) can be established at some of these locations to cope with both land suitability and water availability constraints47. While water insufficiency is a major limiting factor for both field and protected farming, the latter is affected to a lesser extent.

Our suitability assessment is based on a general set of requirements known to affect the productivity of a large number of crops, but there are crops with exceptional adaptive traits that can grow under less favourable conditions. Although we used the most up-to-date geospatial data at the finest available resolution, the results of our suitability analysis should be interpreted in light of the reliability and quality of the original data. For example, whereas the GlobCover database48 reliably maps the distribution of forests and rangelands in Iran, our visual inspection of satellite images (see Supplementary Fig. S8) showed that its method sometimes lacks the precision required to distinguish cultivated from uncultivated croplands. Although soil erosion was not directly incorporated into our analysis, the use of slope at very high resolution (~28 m) implicitly accounts for this effect. The interactions between variables and the quality of the subsoil are among the other factors that could be considered in future studies. This study used precipitation as the only water availability factor; including surface water and groundwater availability could further improve the adequacy of the land evaluation. Given the good correlation between water availability and land suitability for agriculture, the general findings of this study are not expected to change significantly with the inclusion of water availability conditions. Nevertheless, given the current water shortage constraints across the country21, the potential agricultural capacity of the country is likely to decrease when water availability is added to the analysis. Although global projections suggest that suitable lands may expand with climate change26, how these changes, particularly in precipitation patterns, would affect the suitability of Iran's land for crop production in the future is subject to a high degree of uncertainty and needs further work.

We examined the suitability of Iran's land for agriculture based on a large number of soil attributes and on terrain and climate conditions at very high resolution. We found that, on top of the well-known water limitations, land resources also pose significant barriers to sustainable agriculture in Iran. A sizeable acreage of current farmland falls in the unsuitable and very poor suitability ranks. Production from these lands is not only low but can also cause environmental damage, and is hence subject to further decline in the future. Land expansion is unlikely to add significantly to Iran's food production capacity. However, redistribution of croplands from lower suitability ranks to more suitable lands can partially improve the overall sustainability of Iran's agriculture.
Increased food production capacity should, therefore, be achieved through the adoption of certain modern agricultural practices (e.g. greenhouse farming, advanced irrigation systems, and improved germplasm), particularly in areas where land suitability is not necessarily high. In pursuit of food sovereignty, Iran needs to balance its interest in increased food security against water sustainability. This conclusion may hold true for most countries in the MENA region, as their water resources are too scarce to support irrigated farming over the long term.

We evaluated the potential suitability and limitations of Iran's land for crop production using a parametric method. According to FAO49, crop production is defined as the "actual harvested production from the field or orchard and gardens". We therefore used "crop" in a broader sense than the Iranian Ministry of Agriculture does, excluding any specifications regarding the plant's taxonomy, life cycle, type of use, and commodity. For example, Iran's Ministry of Agriculture distinguishes field crops17 (e.g. wheat and rice) from horticultural crops18 (e.g. orchards and vegetables) and provides separate reports for each of these two categories; our analysis made no such distinction. Throughout this report, we use cropping and agriculture interchangeably, although agriculture has a broader definition that also includes animal production such as fisheries and livestock.

Georeferenced data related to soil properties (~850 m resolution), topography (~28 m resolution), climate (~850 m resolution), and land cover (~300 m resolution) were collated from various sources, as listed in Table 2. The grid cells of the coarser GIS layers were resampled to match the resolution of the finest layer, i.e. the topography layer (~28 m), using the gdalwarp function in QGIS. Provincial data on agricultural crop production, area, and yield were extracted from the latest reports provided by Iran's Ministry of Agriculture16,17,18. Inland water bodies, protected areas, urbanized areas, and natural forests and pastures were excluded from the analysis.

We used 15 major soil properties that characterize the fertility (e.g. cation exchange capacity, CEC), toxicity (e.g. CaCO3), salinity (e.g. electrical conductivity, EC), sodicity (e.g. exchangeable sodium percentage, ESP), workability and rooting conditions (e.g. soil texture), and water holding capacity (available water content, AWC) of the soil. These soil parameters are known for their large effects on plant growth and have been used in previous land evaluation studies26, 50. The terrain was characterized by slope and elevation. Steep terrains are not suitable for cropping as they can limit the functionality of machinery and pose high risks of soil erosion. For each grid cell, we estimated the maximum slope from a digital elevation model (DEM, see Table 2) using QGIS (version 2.14.3 Essen). We used altitude merely as a surrogate for mountainous areas (rather than as a limiting factor per se) and assumed that areas with an elevation greater than 2,750 m above mean sea level are unsuitable for agriculture51, 52.

The aridity index, AI (annual and monthly), was estimated from precipitation and potential evapotranspiration (PET) data using35: $$AI=\frac{Precipitation}{PET},$$ which was then classified into five categories according to UNESCO35: hyper-arid AI < 0.03, arid 0.03 < AI < 0.2, semi-arid 0.2 < AI < 0.5, sub-humid 0.5 < AI < 0.65, and humid AI > 0.65. Both precipitation and PET data are based on long-term (1960–1990) mean annual data (Table 2).
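As a small worked example of this classification (my own sketch; the handling of values falling exactly on a boundary is an assumption, since the UNESCO intervals above leave the endpoints ambiguous):

```python
def aridity_class(precip_mm, pet_mm):
    """Classify a location by the UNESCO aridity index AI = P / PET."""
    ai = precip_mm / pet_mm
    if ai < 0.03:
        return "hyper-arid"
    if ai < 0.2:
        return "arid"
    if ai < 0.5:
        return "semi-arid"
    if ai < 0.65:
        return "sub-humid"
    return "humid"

print(aridity_class(250, 1800))  # AI ~ 0.14 -> 'arid'
```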
Suitability Analysis

We first evaluated land suitability based on the soil and topographic variables only, which reflects the potential capacity of land resources for cropping. The limitations imposed by climate were then incorporated into the land suitability analysis by using both annual and monthly precipitation and PET data. From the monthly precipitation and PET data, we determined the length of the growing period, LGP, as the number of consecutive months wherein precipitation exceeded half the PET39. The use of LGP enabled us to account both for the total amount of precipitation and for its distribution over time, which may be equally important for productive farming. We assumed an LGP ≤ 2 months to be too short to let a crop complete its life cycle; the analysis therefore assigned a suitability index of zero to all regions with such short LGPs. Only very few crops, such as radish, can mature within a growing period of two months34.

To evaluate the suitability of land for rainfed farming, we used a mean annual precipitation cut-off of 250 mm year−1, which is often considered the minimum precipitation required for satisfactory rainfed cropping (see Supplementary Fig. S6). All regions with precipitation lower than 250 mm year−1 were therefore characterized as unsuitable for rainfed farming, whilst the suitability of the remaining lands (those with precipitation greater than 250 mm year−1) was evaluated based on their soil and topographic properties. In addition to the rainfed cut-off method, we also used a more general modelling approach wherein the suitability of land was assumed to increase progressively with mean annual precipitation following a stepwise function, as in Supplementary Fig. S7. We used 100 mm year−1 as the lower limit of precipitation for cropping, as this threshold is deemed to delineate the desert areas of Iran41. For most crops evaluated by FAO34, 42, a minimum of 500 mm year−1 is required to achieve reasonable economic yields; we therefore used this value as the upper threshold in our stepwise function (Supplementary Fig. S7). The same LGP threshold (>2 months) and soil/topographic constraints were used in this analysis.

Three types of mathematical functions were used to transform each soil, topographic, and precipitation variable into a suitability value varying from 0 (unsuitable) to 1 (optimum or highly suitable). A Z-shaped response function was used for variables that are positively correlated with crop growth (Supplementary Fig. S7a), such as OC, CEC, and BS (Supplementary Table S2). The mathematical expression for this type of relationship can be formulated as follows: $$S(V)=\{\begin{array}{cc}0 & \,if\,V\le {V}_{min}\\ \frac{V-{V}_{min}}{{V}_{ol}-{V}_{min}} & \,if\,{V}_{min} < V < {V}_{ol}\\ 1 & if\,V\ge {V}_{ol}\end{array}$$ where \(S(V)\) is the suitability index as a function of the individual variable \(V\); the parameter \({V}_{min}\) indicates the minimum value of \(V\) required for crop growth; and \({V}_{ol}\) is the lowest optimum value of \(V\) at or beyond which the highest suitability is obtained. As an example, \({V}_{min}\) = 0.20 was used for OC, as soil with an OC content lower than 0.20% is not suitable for agriculture34. The suitability of soil increases with increasing OC (assumed here to be linear), and for most crops an OC content of 1.8% provides optimal conditions for growth57, i.e. \({V}_{ol}\) = 1.8%.
Where a variable was inversely correlated with growth suitability, e.g. slope and calcium carbonate content (Supplementary Table S2), we used a "mirrored-Z" response shape (Supplementary Fig. S7b) to quantify its suitability index: $$S(V)=\{\begin{array}{cc}1 & if\,V\le {V}_{oU}\\ \frac{{V}_{max}-V}{{V}_{max}-{V}_{oU}} & \,if\,{V}_{oU} < V < {V}_{max}\\ 0 & if\,V\ge {V}_{max}\end{array}$$ where \({V}_{max}\) is the maximum value of variable \(V\) beyond which no cropping is possible, and \({V}_{oU}\) is the uppermost optimum value of \(V\) for cropping. For example, a slope of 0 to 5% represents a range in which cropping can be done with no limitation as regards steepness, the optimal upper bound (\({V}_{oU}\)) being 5%. For some variables, e.g. pH (Supplementary Table S2), there is an optimal range outside of which suitability decreases toward either extreme (Supplementary Fig. S7c). This type of relationship gives rise to a "dent-shaped" response and can be formulated as follows: $$S(V)=\{\begin{array}{ll}\frac{V-{V}_{min}}{{V}_{ol}-{V}_{min}} & if\,{V}_{min} < V < {V}_{ol}\\ 1 & if\,{V}_{ol}\le V \le {V}_{oU}\\ \frac{{V}_{max}-V}{{V}_{max}-{V}_{oU}} & if\,{V}_{oU} < V < {V}_{max}\\ 0\, & else\end{array}$$ The threshold values for the above equations were obtained from various databases and the literature33, 34, 42, 57. Similar functional responses have been used in other studies24,25,26. The suitability of each of the 12 soil textures as related to nutrient availability, workability, and rooting conditions was obtained from FAO57 (Supplementary Table S3). The soil textures of Iran's land were derived from the soil sand, silt, and clay contents32 according to the USDA soil classification system58.

Once the suitability of a grid cell with respect to the individual soil, topographic, and precipitation variables had been calculated, the overall suitability of the cell was estimated based on Liebig's law of the minimum, i.e. growth is controlled by the scarcest resource or most limiting factor59: $$S{I}_{i}=min(S({V}_{j}))$$ where \(S{I}_{i}\) is the suitability value for grid cell \(i\) over all variables \({V}_{j}\), with \(j=\{1,\ldots ,n\}\) and \(n\) being the total number of variables used in the analysis. The variable with the lowest suitability value was identified as the most limiting factor for cropping (Fig. 2). While SI provides a relative measure for comparing the suitability of different lands for cropping, the productivity and sustainability of agriculture decline with decreasing SI (see Fig. 7). The suitability index (SI) was then classified into six categories, as shown in Table 3.

We verified the adequacy of our land evaluation approach by investigating the relation between the suitability index and estimated crop yields. We obtained georeferenced data on rainfed wheat yield in Iran from FAO12 for the year 2000 and calculated the mean crop yield for each of the six suitability classes. As shown in Fig. 7, yield increases proportionally with improving land suitability, implying that our suitability values translate to crop performance very well. Our visual estimation of agricultural areas (see below) shows that there are unused lands in the medium suitability class. We therefore used the relationship between land suitability and crop yield to estimate the potential gain in wheat production if a specific portion of these lands were used for rainfed wheat cropping.
As there is no reliable georeferenced dataset on agricultural areas in Iran (see Supplementary Fig. S8), the distribution of croplands among the suitability classes was determined by randomly inspecting 1.2 million ha of land imagery from Google Earth. We visually estimated the proportion of each image occupied by agricultural areas and summed these up to estimate the proportion and total area of croplands and orchards within each suitability class.

References

Godfray, H. C. J. et al. Food Security: The challenge of feeding 9 billion people. Science 327, 812–818 (2010).
Gregory, P. J. & George, T. S. Feeding nine billion: the challenge to sustainable crop production. J. of Exp. Bot. 62, 5233–5239 (2011).
Lobell, D. B. et al. Prioritizing climate change adaptation needs for food security in 2030. Science 319, 607–610 (2008).
Vermeulen, S. et al. Climate change, agriculture and food security: a global partnership to link research and action for low-income agricultural producers and consumers. Cur. Opi. in Env. Sust. 4, 128–133 (2012).
Bruinsma, J. The resources outlook: by how much do land, water and crop yields need to increase by 2050? In: (ed. Conforti, P.) Looking ahead in world food and agriculture: Perspectives to 2050. Rome: FAO (2011).
Alexandratos, N. & Bruinsma, J. World agriculture towards 2030/2050: ESA Working Paper No 12-03. Rome: FAO (2012).
Tilman, D., Balzer, C., Hill, J. & Befort, B. L. Global food demand and the sustainable intensification of agriculture. Proc. of the Nat. Aca. of Sci. 108, 20260–20264 (2011).
Mueller, N. D. et al. Closing yield gaps through nutrient and water management. Nature 490, 254–257 (2012).
Mauser, W. et al. Global biomass production potentials exceed expected future demand without the need for cropland expansion. Nat. Commun. 6, 8946 (2015).
Fischer, G., Hizsnyik, E., Prieler, S. & Wiberg, D. Scarcity and abundance of land resources: competing uses and the shrinking land resource base. SOLAW Background Thematic Report. FAO (2011).
Foley, J. A. et al. Solutions for a cultivated planet. Nature 478, 337–342 (2011).
FAO/IIASA, Global Agro-ecological Zones (GAEZ v3.0) Data Portal, FAO, Rome, Italy and IIASA, Laxenburg, Austria (2012).
Hashemi, H. Climate Change and the Future of Water Management in Iran, Middle East Critique, 128, 1–17 (2015).
Food and Agriculture Organization of the United Nations. Iran country fact sheet on food and agriculture policy trends (2014).
The World Bank Data, www.data.worldbank.org (2016).
A Statistical Overview of Field Crops Harvested Area and Production in the Past 36 Years, Iranian Ministry of Agriculture (in Farsi) (2015).
Agriculture Statistics: Volume 1, Field Crops, Iranian Ministry of Agriculture (2013–2014).
Agriculture Statistics: Volume 2, Horticultural Crops, Iranian Ministry of Agriculture (2013).
FAOSTAT, Food and Agriculture Organization of the United Nations, www.fao.org/faostat.
Azadi, P., Dehghanpour, H., Sohrabi, M. & Madani, K. The Future of Iran's Oil and Its Economic Implications, Working Paper 1, Stanford Iran 2040 Project, Stanford University, October 2016, https://purl.stanford.edu/mp473rm5524 (2016).
Madani, K., Aghakouchak, A. & Mirchi, A. Iran's Socio-economic Drought: Challenges of a Water-Bankrupt Nation, Iranian Studies 49 (2016).
Madani, K. Water management in Iran: what is causing the looming crisis? J. Envi. Stud. & Scie. 4, 315–328 (2014).
Keshavarz, A., Ashrafi, S., Hydari, N., Pouran, M. & Farzaneh, E. Water allocation and pricing in agriculture of Iran. In Water conservation, reuse, and recycling: proceedings of an Iranian American workshop, The National Academies Press: Washington, DC, 153–172 (2005).
Baja, S., Chapman, D. M. & Dragovich, D. A Conceptual Model for Defining and Assessing Land Management Units Using a Fuzzy Modeling Approach in GIS Environment. Envi. Manag. 29, 647–661 (2002).
Elsheikh, R. et al. Agriculture Land Suitability Evaluator (ALSE): A decision and planning support tool for tropical and subtropical crops. Computers and Electronics in Agriculture 93, 98–110 (2013).
Zabel, F., Putzenlechner, B. & Mauser, W. Global agricultural land resources – a high resolution suitability evaluation and its perspectives until 2100 under climate change conditions. PLoS ONE 9, e107522 (2014).
van Velthuizen, H. et al. Mapping biophysical factors that influence agricultural production and rural vulnerability. Food and Agriculture Organization of the United Nations and International Institute for Applied Systems Analysis, Rome (2007).
Shahbazi, F. et al. Land use planning in Ahar area (Iran) using MicroLEIS DSS. International Agrophysics 22, 277–286 (2008).
Bagherzadeh, A. & Mansouri Daneshvar, M. R. Physical land suitability evaluation for specific cereal crops using GIS at Mashhad Plain, Northeast of Iran. Front. Agric. China 5, 504–513 (2011).
Maddahi, Z., Jalalian, A., Kheirkah Zarkesh, M. M. & Honarjo, N. Land suitability analysis for rice cultivation using a GIS-based fuzzy multi-criteria decision making approach: central part of Amol district, Iran. Soil & Water Res. 12, 29–38 (2017).
Kazemi, H., Sadeghi, S. & Halil, A. Developing a land evaluation model for faba bean cultivation using geographic information system and multi-criteria analysis (A case study: Gonbad-Kavous region, Iran). Ecological Indicators 63, 37–47 (2016).
Hengl, T., de Jesus, J. M., MacMillan, R. A., Batjes, N. H. & Heuvelink, G. B. M. SoilGrids1km — Global Soil Information Based on Automated Mapping. PLoS ONE 9, e105992 (2014).
Abrol, I. P., Yadav, J. S. P. & Massoud, F. I. Salt-affected soils and their management. U.N. Food and Agric. Organ. Soils Bull. Rome 39, 131 (1988).
Sys, C., van Ranst, E., Debaveye, J. & Beernaert, F. Land evaluation. Part III: Crop requirements. Agric. Publ. 7. Administration for Dev. Coop., Brussels, Belgium (1993).
United Nations Educational, Scientific and Cultural Organization (UNESCO), Map of the world distribution of arid regions: Map at scale 1:25,000,000 with explanatory note. MAB Technical Notes 7, UNESCO, Paris (1979).
Hontoria, C., Saa, A. & Rodríguez-Murillo, J. C. Relationships between soil organic carbon and site characteristics in peninsular Spain. Soi. Sci. Soc. Ame. J. 63, 614–621 (1999).
Martins, A. A. A., Madeira, M. V. & Refega, A. A. G. Influence of rainfall on properties of soils developed on granite in Portugal. Arid Land Res. & Manag. 9, 353–366 (1995).
Tate, K. R. Assessment, based on a climosequence of soils in tussock grasslands, of soil carbon storage and release in response to global warming. J. Soil Scie. 43, 697–707 (1992).
Report on the Agro-ecological Zones Project. Vol. 1, Methodology and results for Africa. World Soil Resources Report 48/1, FAO, Rome.
Bannayan, M., Rezaei, E. E. & Hoogenboom, G. Determining optimum planting dates for rainfed wheat using the precipitation uncertainty model and adjusted crop evapotranspiration. Agri. Wat. Manag. 126, 56–63 (2013).
Khosroshahi, M., Khashki, M. T. & Moghaddam, T. E. Determination of climatological deserts in Iran. Iran. J. Ran. & Des. Res. 16, 96–113 (2009).
Food and Agriculture Organization of the United Nations, EcoCrop Database, FAO, Rome, Italy, www.ecocrop.fao.org (2013).
The Fifth National Report to the Convention on Biological Diversity. www.cbd.int/countries/?country=ir (2015).
Wang, C., Gao, Q., Wang, X. & Yu, M. Spatially differentiated trends in urbanization, agricultural land abandonment and reclamation, and woodland recovery in Northern China. Sci. Rep. 6, 37658 (2016).
El Shaer, H. M. Land desertification and restoration in Middle East and North Africa (MENA) region. Scien. in Col. & Arid Reg. 7, 0007–0015 (2015).
Amiraslani, F. & Deirdre, D. Combating desertification in Iran over the last 50 years: an overview of changing approaches. J. Envi. Manag. 92, 1–13 (2011).
de Anda, J. & Shear, H. Potential of Vertical Hydroponic Agriculture in Mexico. Sustainability 9, 140 (2017).
Bontemps, S., Defourny, P., Bogaert, E., Arino, O. & Kalogirou, V. GLOBCOVER, Products Description and Validation Report. ESA, Université catholique de Louvain (2009).
FAO Statistical Pocketbook - World Food and Agriculture, Food and Agriculture Organization of the United Nations, Rome, Italy, 231 (2015).
Fischer, G., van Velthuizen, H., Shah, M. & Nachtergaele, F. Global Agro-Ecological Assessment for Agriculture in the 21st Century: Methodology and Results, IIASA Research Report RR-02-02. IIASA, Laxenburg, Austria (2002).
Singh, J. & Dhillon, S. S. Agricultural Geography. New Delhi: Tata McGraw-Hill (1984).
Baker, N. T. & Capel, P. D. Environmental factors that influence the location of crop agriculture in the conterminous United States. No. 2011-5108, US Geological Survey (2011).
UNEP-WCMC, Protected Area Profile for Iran (Islamic Republic of) from the World Database of Protected Areas, www.protectedplanet.net (2016).
Shangguan, W., Dai, Y., Duan, Q., Liu, B. & Yuan, H. A Global Soil Data Set for Earth System Modeling. J. Adv. Mod. Eart. Syst. 6, 249–263 (2014).
Land Processes Distributed Active Archive Center (LP DAAC), located at USGS/EROS, Sioux Falls, SD. http://lpdaac.usgs.gov (2016).
Zomer, R. J. et al. Climate Change Mitigation: A Spatial Analysis of Global Land Suitability for Clean Development Mechanism Afforestation and Reforestation. Agric. Ecos. & Envir. 126, 67–80 (2008).
Fischer, G. et al. Global Agro-ecological Zones (GAEZ v3.0) – Model Documentation. Laxenburg, Austria: International Institute for Applied Systems Analysis (2012).
USDA (United States Department of Agriculture), Soil Mechanics Level I – Module 3: USDA Textural Classification Study Guide. National Employee Development Staff, Soil Conservation Service, USDA (1987).
Salisbury, F. Plant Physiology (4th ed.). Belmont: Wadsworth (1992).
Hijmans, R. J., Cameron, S. E., Parra, J. L., Jones, P. G. & Jarvis, A. Very high resolution interpolated climate surfaces for global land areas. Int. J. Clim. 25, 1965–1978 (2005).

Author affiliations:
Stanford Iran 2040 Project, Hamid and Christina Program in Iranian Studies, Stanford University, Stanford, CA 94305, USA: Mohsen B. Mesgaran & Pooya Azadi.
Centre for Environmental Policy, Imperial College London, London SW7 2AZ, UK: Kaveh Madani.
School of Earth, Energy, and Environmental Sciences, Geophysics Department, Stanford University, Stanford, CA 94305, USA: Hossein Hashemi.
Center for Middle Eastern Studies and Department of Water Resources Engineering, Lund University, Lund, Sweden.
Author contributions: M.B.M. conceived the study and collated the data. M.B.M. and P.A. ran the model and generated the figures. K.M. and H.H. supervised the precipitation and water consumption analyses. All authors contributed to the writing and reviewing of the manuscript. Correspondence to Pooya Azadi.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Citation: Mesgaran, M.B., Madani, K., Hashemi, H. et al. Iran's Land Suitability for Agriculture. Sci Rep 7, 7670 (2017). https://doi.org/10.1038/s41598-017-08066-y
Overview of Interoperable Private Attribution

Posted by ekr on 15 Feb 2022

Note: this post contains a bunch of LaTeX math notation rendered in MathJax, but it doesn't show up right in the newsletter version. You should mostly be able to follow along anyway except for the "Technical Details" section and the Appendix (which is part of why it's an appendix) so you may want to instead read the version on the site.

Erik Taubeneck (Meta), Ben Savage (Meta), and Martin Thomson (Mozilla) recently published a new technique for measuring the effectiveness of online ads called Interoperable Private Attribution (IPA). This has received a fair amount of attention—including some not so positive comments on Hacker News. I've written before about how to use a variant of this technology to measure vaccine doses, but I thought it would be useful to walk through how IPA works in its intended setting.

Attribution and Conversion Measurement #

For obvious reasons, advertisers and publishers want to know how effective their ads are. The basic tool for this is what's called "attribution" or "conversion measurement". Suppose I see an ad for a product on a news site and click on it, taking me to the merchant, where I subsequently make a purchase. This is called a conversion, and advertisers want to know which ads convert—and how often—and which ones do not. At the moment, conversion measurement is mostly done with cookies, as shown in the figure below:

Let's walk through this in pieces. First, the client visits the publisher site. The publisher serves the client a Web page with an IFRAME from the advertiser[1] (reminder: an IFRAME is an HTML element that allows one Web page to display inside another Web page, even from two different sites). When the advertiser sends the page, it also sends a tracking cookie to the client, in this case 1234. The user views the ad (an impression) and clicks through, which takes them to the merchant. In this case, they just make an immediate purchase, but they might also shop around on the site or even go away and come back later. Eventually, the user makes a purchase ("converts"). When the merchant sends the confirmation page it includes a tracking pixel (an invisible image) served off of the advertiser's site. When the browser retrieves the pixel, it sends the advertiser's cookie (1234) back to the advertiser. The cookie allows the advertiser to connect the original click and the resulting purchase, thus measuring the conversion. You'll note that what's technically being measured in this example is the conversion from the impression to the purchase. If you wanted to measure the click instead, there are a number of ways to do this, such as having the ad click redirect through the advertiser or having a Javascript hook that informs the advertiser of the click.

The problem with this technique is that it involves the advertiser tracking you across the Internet: it sees which Web site you are on every time it shows you an ad, and for a big ad network this can be a pretty appreciable fraction of your browsing history. This is a serious privacy problem and browsers are gradually deploying techniques to prevent this kind of tracking, such as Firefox's Enhanced Tracking Protection and Safari's Intelligent Tracking Prevention. Those technologies are good for user privacy but interfere with conversion measurement. IPA is a mechanism designed to provide conversion measurement without degrading user privacy.
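To make the cookie-based join concrete, here is a toy sketch of the bookkeeping the ad server does (all names and values are hypothetical, not any real ad server's code):

```python
# Toy illustration of cookie-based attribution: the ad server logs
# impressions and conversions keyed by its tracking cookie, then joins
# them to count conversions per ad.
from collections import defaultdict

events = defaultdict(list)  # cookie -> list of (event_type, ad_id)

def log_impression(cookie, ad_id):
    events[cookie].append(("impression", ad_id))

def log_conversion(cookie):
    events[cookie].append(("conversion", None))

def conversions_per_ad():
    counts = defaultdict(int)
    for history in events.values():
        seen_ads = {ad for kind, ad in history if kind == "impression"}
        if any(kind == "conversion" for kind, _ in history):
            for ad in seen_ads:
                counts[ad] += 1
    return dict(counts)

log_impression("1234", ad_id="shoes-42")  # cookie set with the IFRAME
log_conversion("1234")                    # tracking pixel returns the cookie
print(conversions_per_ad())               # {'shoes-42': 1}
```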
The Basic Idea #

The main idea behind IPA is to replace cookie-based linkage with linkage based on an anonymous identifier. Let's assume that each client $i$ has a single unique identifier $I_i$ (I'll discuss how this identifier is assigned below). This identifier can't be read directly off the client but instead has to be accessed via an API (e.g., getIPAEvent()) that produces an encrypted version of the identifier $E(I_i)$. The encryption is randomized so that each time the identifier is encrypted, the ciphertext is different, preventing linkage of the encrypted identifiers. To represent that, we use the notation $E(R_j, I_i)$ where $R_j$ is the randomizing value. Two encrypted values $E(R_j, I_i)$ and $E(R_{j'}, I_{i'})$ will with high probability be different unless both the identifier and the randomizer are the same. However, by use of an appropriate service they can be decrypted and matched up.

If we go back to the conversion scenario described above, but instead use IPA, it would look like this:

Everything is the same up to the point where the ad is displayed, except that along with the ad the advertiser also sends some Javascript code that calls getIPAEvent()[2]. The browser responds by providing an encrypted version of the identifier, with random value $R_1$: $E(R_1, I_i)$. The advertiser just stores this information on a list of the impressions for this ad (note that, as before, we are measuring impressions). When the user actually buys the product, the merchant calls getIPAEvent() and gets a new encrypted version of the identifier, this time with a different randomizer, $R_2$: $E(R_2, I_i)$. The merchant sends the encrypted value it receives to the advertiser. However, even though the identifiers are the same, because the randomizers are different, the encrypted values are different, thus preventing either the advertiser or the merchant from linking them. The only thing that the advertiser knows is that there has been one impression (because it saw it directly) and one purchase (because the merchant told it about it). It's important to note that this is all information that the merchant and the ad server knew already: the only secret information is the identifier, and that's encrypted.

In order to decrypt the identifiers and match up these events, you need to use the IPA decryption and blinding service. The basic idea behind the service is that the advertiser (or merchant) has a set of encrypted identifiers that it sends to the service, and the service returns information about the number of matches. So, for instance, you might send in 20 encrypted identifiers and get back something like:

Unmatched impressions: 2
Unmatched purchases: 3
Impression/purchase pairs: 6
Two impressions/one purchase: 1

Note: it's important that the IPA service only operate on batches of reports and produce aggregate reports about the batch; otherwise the advertiser could just send in small numbers of reports at a time. More on this below.
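Here is a sketch of the kind of batch summary the service might compute (my own illustration; the category names mirror the example above, and the blinded identifiers are stand-in strings since the actual blinding is described later):

```python
# Count matches over a batch of labeled, blinded reports.
from collections import Counter, defaultdict

def summarize(batch):
    """batch: list of (blinded_id, label), label in {'impression', 'purchase'}."""
    per_id = defaultdict(Counter)
    for blinded_id, label in batch:
        per_id[blinded_id][label] += 1
    summary = Counter()
    for counts in per_id.values():
        i, p = counts["impression"], counts["purchase"]
        if i and p:
            summary[f"{i} impression(s) / {p} purchase(s)"] += 1
        elif i:
            summary["unmatched impressions"] += i
        else:
            summary["unmatched purchases"] += p
    return dict(summary)

batch = [("x7", "impression"), ("q2", "impression"),
         ("x7", "purchase"), ("m9", "purchase")]
print(summarize(batch))
# {'1 impression(s) / 1 purchase(s)': 1,
#  'unmatched impressions': 1, 'unmatched purchases': 1}
```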
Internally, the service works by having a pair of servers which cooperate to decrypt and blind the input values. The advertiser (or merchant) sends its values to the first server, which decrypts, blinds, and shuffles them, and then passes them on to the second server, which does the same thing, as shown in the diagram below (I've used a different color for each identifier to help make it easier to follow).

In this example, the advertiser has two encrypted impressions and two encrypted purchases (it knows which are which because that information was available when the API was called, so it can just label them). One of the impressions and one of the purchases line up, but it doesn't know that. It passes all of its data in a batch to the first server of the IPA service (A), which partially decrypts the values, blinds them with its secret, and then passes them to server B. Server B decrypts them the rest of the way and applies its own blinding key. At this point server B has a list of blinded identifiers labeled with whether they were impressions or purchases. Because the blinding keys are constant, each time identifier $I_1$ is blinded the blinded value is the same, and so server B can match up the impression and purchase for $I_1$ (both shown in blue). However, because the values are blinded, it can't match them up to the input reports. Given this information, the server can then produce a report to the advertiser to the effect that there was one pair, one unmatched impression, and one unmatched purchase.

Multi-Device #

One of the main requirements for the design of IPA is that it allow for linking activity across multiple devices. For instance, I might see an ad on my mobile device but make the purchase on my desktop machine. Obviously, advertisers and publishers want to be able to measure the impact of their ads. With the current cookie-based system it's possible under some circumstances to associate those events. For instance, if Facebook is displaying the ad and you're logged into Facebook, then your Facebook account ID can be used to link them up. A number of the proposed private conversion measurement systems (e.g., Apple's Private Click Measurement) do not allow for this use case, which is clearly a big part of Meta's motivation for proposing IPA, as a lot of their usage is on mobile.

IPA handles this case in a straightforward fashion, via the per-client identifier. Earlier I just assumed that each client $i$ had an identifier $I_i$ but didn't say how it was assigned. If instead we arrange that each user has the same identifier across all of their devices, then IPA just naturally links up impressions on device A and device B without any extra work. This of course reduces to the problem of how to get a per-user identifier synchronized across devices. One obvious approach would be to have the devices synchronize it, much as browsers can sync history across devices. However, there are a number of cases where this won't work, for instance if you use Chrome on your Android device and Firefox on your desktop,[3] or if the impression came from something other than a browser, like an app or a smart TV (I'm no happier than you are about ads on my smart TV, let alone having their conversion measured).

IPA addresses this issue in a clever but counterintuitive fashion: it allows any domain (e.g., example.com or more likely facebook.com) to set a per-domain identifier (which IPA calls a "match key") that can be used by any domain. The idea here is that when you log into some system (e.g., Facebook), it sets an identifier that is tied to your account and is therefore the same across all your devices. The identifier can be used by any advertiser or merchant (via the getIPAEvent() API), no matter which domain they are on, thus preventing Facebook from being the only ones who can do attribution via the Facebook account.

Key to making this work is that the identifier is write-only: nobody—including the original domain—can access it, except by using the API, which of course only produces an unlinkable, encrypted value. This prevents the identifier from being used directly for tracking, as would otherwise be the case for a world-readable value. In fact, you can't even ask whether the identifier was set, because then it would leak one bit. Of course, the original domain knows the identifier for a given user (because it generated it) and it can set a cookie on the client to remember that it set the identifier, but if the cookie is deleted, then it doesn't know either.
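A minimal sketch of such a write-only store might look like this (the method names here are mine, not the proposal's exact API surface; getIPAEvent is the only name taken from the post). Encrypting a random value when no key has been set is an assumption on my part, in the same spirit as the "unusable report" idea discussed later:

```python
# Write-only match-key store: domains can set a key, nothing can read it.
import os

class MatchKeyStore:
    def __init__(self, encrypt_to_service):
        self._keys = {}               # provider domain -> match key
        self._encrypt = encrypt_to_service

    def set_match_key(self, provider, value):
        self._keys[provider] = value  # write-only: no getter, no "is it set?"

    def get_ipa_event(self, provider):
        # An absent key must be indistinguishable from a set one, so we
        # encrypt a random value rather than failing (assumed behavior).
        value = self._keys.get(provider, os.urandom(16))
        return self._encrypt(value)   # fresh randomizer on every call
```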
IPA Technical Details #

This section provides technical details on how the IPA service works. I've attempted to make them mostly accessible; they can be understood with high school math[4], and they can also be skipped if necessary. If you don't care about the details—or you already waded through this in my post on linking up vaccine doses—you can skip this section and still be fine.

Note: in ordinary integer math, given $g^a$ and $g$ it's easy to compute $a$, but we're going to be doing this in an elliptic curve where that computation is hard. Everything else is pretty much the same, but just remember that part.[5]

The service is implemented by having a pair of servers, $A$ and $B$. Each has a Diffie-Hellman key pair, which is to say a secret value $x$ and a public value computed as $g^x$. We'll call $A$'s key pair $(a, g^a)$ and $B$'s pair $(b, g^b)$. Each server also has a secret blinding key, $K_a$ and $K_b$ respectively. These servers are operated by different entities who are trusted not to collude; as long as either server behaves correctly, you're OK. The service then publishes a combined public key $g^{a+b}$ which can be computed by multiplying the public keys: $g^a * g^b$ (if you remember your high school math!).

In order to submit an ID $I$, the sender first encrypts it. It generates a random secret $x$ and computes $g^{x(a+b)} = {(g^{a+b})}^x$. Note that we're using the service's combined public key and the sender's private value $x$, so the result is secret from attackers who don't know either $x$ or $a+b$. It then multiplies $I$ by this value and sends the pair of values (this is just classic ElGamal encryption, but to the key $g^{a+b}$): $$g^x, I * g^{x(a+b)}$$ Importantly, this second term can be broken up into a part involving only $a$ and a part involving only $b$. I.e., $$I * g^{x(a+b)} = I * g^{xa} * g^{xb}$$ Again, this is just high school math.

These values then get sent to $A$ (or $B$, it doesn't matter), who computes $g^{xa} = {(g^{x})}^a$ (recall that it knows $a$). It then divides the second part by $g^{xa}$: $$I * g^{xb} = \frac{I * \cancel{g^{xa}} * g^{xb}}{\cancel{g^{xa}}}$$ This cancels out the $g^{xa}$ term, leaving you with just a term that involves $b$, and thus the pair: $$g^x, I * g^{xb}$$ $A$ then blinds this value by exponentiating both values to $K_a$, giving: $$(g^x)^{K_a}, (I * g^{xb})^{K_a}$$ We can flatten this out to give: $$g^{x K_a}, I^{K_a} * g^{(xb)(K_a)}$$ $A$ batches these values up with other inputs it has received, shuffles them, and sends them to $B$. $B$ takes the first term and computes $(g^{x K_a})^b = g^{x K_a b} = g^{(xb)(K_a)}$.
Privacy Properties #
The two basic privacy properties we are trying to achieve here are:
1. Neither the advertiser nor the merchant is able to associate a specific input report with a specific output report, even with the help of one of the servers (because you need both $K_a$ and $K_b$). This is true even if they also know the identifiers, which are not even required to be high entropy (e.g., they can be e-mail addresses).
2. Neither the advertiser nor the merchant is able to determine which users are represented in a given set of reports or are associated with a given piece of additional data (see below).
As far as I know, no attacks on property (1) are known (though see the above caveat about insufficient analysis), but we do know of an attack on property (2) (see the appendix). The basic situation is that the advertiser can collude with whoever issued the match keys and with one of the servers to determine if a given user is incorporated in a set of reports. However, if both servers are honest, this attack will not work. This is not the desired privacy target, which is that you only have to trust that at least one server is honest, but it's where things currently stand.
In any case, the second server learns more than the first server because it knows which reports match up with which other reports. However, it still doesn't know which ones match up to which input reports because it doesn't know $K_a$. This is still a somewhat weird asymmetry, and when we look at additional data in the next section, we'll remove it.
Importantly, the summaries that are provided to the advertiser can still leak data. For instance, suppose that the advertiser wants to know if impression A and purchase B are from the same user: it can send them in together with a bunch of fake reports which have random non-matching identifiers. If the report that comes back lists any matches, then it knows that A and B match. This is a general problem in any aggregate reporting system, which I covered in some detail previously, and there are a variety of potential defenses, including trying to ensure that data comes from "valid" clients and adding noise to the output. The IPA proposal contemplates some kind of noise injection along with budgeting for the number of queries but doesn't really include a complete design. A minimal sketch of what such noise injection could look like appears below.
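The proposal doesn't pin down a mechanism, but the standard tool for this kind of defense is differential privacy: perturb each released count with calibrated random noise so that any single report's presence is statistically masked. A minimal sketch, assuming a plain Laplace mechanism and an operator-chosen epsilon (both of which are my assumptions, not part of the IPA documents):

```python
import numpy as np

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a match count with Laplace noise.

    Each user contributes at most `sensitivity` to the count, so adding
    Laplace(sensitivity / epsilon) noise gives epsilon-differential privacy
    for a single query. A query budget would split epsilon across queries.
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g., the "one matched pair" count from the example at the top:
print(noisy_count(1, epsilon=0.5))
```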
Although this system provides a fair degree of privacy if you trust the servers, there will of course be people who don't trust them, or just don't want to send their data on principle. One question I've seen asked is whether it will be possible to configure your software not to participate. However, from a privacy perspective, it's actually undesirable to have the API call just fail, because then you have sent some information to the server that might be used to track you (as most people will not disable the API). A better approach technically is just to send an unusable report, e.g., the encryption of a randomly selected ID. This should not be possible to distinguish from a valid report without the cooperation of both servers and knowledge of what valid identifiers look like. Obviously, whether there is such a configuration knob depends on the software you are using.

Additional Data #
So far the system we have described just lets us count matches, but what if we want to record more than matches, for instance the total amount of money spent by customers via a given ad campaign? This turns out to be a somewhat tricky problem to solve because we need to make sure that that information doesn't turn into a mechanism for tracking reports through the system. For instance, in the diagram above, I had the advertiser label each report as either an impression or a purchase; this is mostly fine as long as we only have those two labels, because if there are a reasonable number of each you don't know much about whether a given output and a given input match up. However, if we let the advertiser attach arbitrary labels, this would obviously be a problem, because then it could collude with one of the servers to track a given input through the process (this is of course the same reason you have to shuffle). Naively, suppose that the merchant adds the customer's email address to the report; if that pops out the other end, you have a real problem.
IPA doesn't contain a complete proposal for this, but does have some handwaving. The general idea is that the client, not the advertiser or merchant, would attach "additional data" (the cute name for this is a "sidecar") to its report. This data would be supplied by the server, which would say something like "make a report that says that this purchase was for 100 dollars". This additional data would also be multiply encrypted so that neither server could individually decrypt it, but once it had been shuffled, the second server would get it along with the blinded identifier. Note that this additional data would not be blinded, because otherwise you wouldn't be able to add up the results; it just appears unmodified in the output.
But wait, you say: if we just let the advertiser provide arbitrary data, then it can provide a user identifier of its own which will then show up in the output, and we're back where we started. The proposed fix is that instead of just reporting the value directly, the client reports it via some secret-sharing mechanism like Prio. Of course, this means that the client actually has to submit two reports, one that is processed by server A then server B and one that is processed by server B then server A, as shown below:
As shown here, the client generates two reports, each of which contains a Prio share for the value provided by the advertiser. When the advertiser is ready, it sends one report share to Server A and one report share to Server B. In this case, I've shown reports from two clients, each with one share. As described above, each server partly decrypts its reports, shuffles them, and then passes them to the other server. The other server completes the decryption, correlates the matching reports, and aggregates (e.g., adds up) the additional data.[6] Finally, Server A sends its aggregated additional data to Server B, which combines it with its own aggregated additional data and sends the result back to the advertiser (see my post on Prio for more details on how this part of the process works).
So far so good, except that I haven't specified how the additional data is encrypted. This part turns out to be somewhat tricky and the IPA authors don't have a published design for it at the moment, so this piece is still a hard hat area. The secret-sharing part itself, though, is simple, as the sketch below shows.
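The core of Prio-style aggregation is additive secret sharing: each value is split into two shares, each of which is individually just a uniformly random number, and the servers only ever combine aggregates. A toy sketch (omitting Prio's zero-knowledge proofs that each share-pair encodes a well-formed value):

```python
# Toy additive secret sharing in the style of Prio (illustration only).
import random

MOD = 2**31 - 1   # arithmetic is done mod a public prime

def share(value):
    """Split `value` into two shares, one per server."""
    share_a = random.randrange(MOD)
    share_b = (value - share_a) % MOD
    return share_a, share_b      # each share alone reveals nothing

# Two clients report purchase amounts of $100 and $40.
a1, b1 = share(100)
a2, b2 = share(40)

# Each server sums the shares it holds; neither learns any individual value.
sum_a = (a1 + a2) % MOD
sum_b = (b1 + b2) % MOD

# Combining the two aggregates reveals only the total: 140.
assert (sum_a + sum_b) % MOD == 140
```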
Status of IPA #
So what's the status of IPA? This has been the source of some confusion, perhaps in part because Google has implemented some of their "Privacy Sandbox" proposals in Chrome and has already done or proposed to do "origin trials" (a kind of limited access test) for them. At present, however, IPA is just a proposal. It has been submitted to the W3C Private Advertising Technology Community Group for consideration but has yet to be adopted, let alone shipped by anyone. In other words, it's a potentially interesting idea but not something that is finished or ready to standardize.

Appendix: Linear Relation Attacks #
The IPA authors describe a few known attacks on the system (though more analysis is needed). The most interesting one is what they term "linear relation" attacks. The basic idea behind this kind of attack is to use the blinding process as an oracle to determine whether a given user was in the report set. Recall that the result of the blinding process for identity $I_i$ is $I_i^{K_a K_b}$. So if you have two identities $I_1$ and $I_2$, their blinded versions are of course $I_1^{K_a K_b}$ and $I_2^{K_a K_b}$. These have the interesting property that:
$$(I_1^{K_a K_b})(I_2^{K_a K_b}) = (I_1 I_2)^{K_a K_b}$$
Updated 2022-02-16: oops, fixed a subscript
If the advertiser knows a user's identifier and has the cooperation of one of the servers, it can use this fact to determine whether a given user was in a set of reports. If the target user has identifier $I_t$, it creates two fake reports $I_x$ and $I_y$ such that $I_y = I_t I_x$. When these are blinded, the results are:
$I_x^{K_a K_b}$
$I_y^{K_a K_b} = (I_x I_t)^{K_a K_b} = (I_x^{K_a K_b})(I_t^{K_a K_b})$
And if a report from the target was included, then the reports will also include the blinded version of $I_t$, which is $I_t^{K_a K_b}$. The colluding server then looks to see whether there is a triplet of blinded values $(B_1, B_2, B_3)$ such that $B_1 = B_2 * B_3$. If there is, then they know that $B_1$ corresponds to $I_y$ and that one of $B_2$ or $B_3$ corresponds to $I_t$.[7] As I said above, this is a known attack and the authors are working on ideas to address it. Note also that this attack depends on knowing users' identifiers, so it can't be done by just any site, but only by (or with the help of) the one issuing the identifiers. A toy demonstration of the check is below.
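Continuing the toy-group sketch from the technical details section (it reuses `p`, `encrypt`, `server_A`, and `server_B` from there), the attack amounts to the colluding server scanning the batch of blinded values for a multiplicative triple. The identifiers are made up:

```python
# Linear relation attack demo, continuing the earlier toy-group sketch.
from itertools import permutations

def blind(I):
    # Shorthand for the full encrypt -> A -> B pipeline defined earlier.
    return server_B(*server_A(*encrypt(I)))

def find_linear_relation(blinded):
    """Look for a triple B1 = B2 * B3 (mod p) among blinded values."""
    for b1, b2, b3 in permutations(blinded, 3):
        if b1 == (b2 * b3) % p:
            return b1, b2, b3
    return None

I_t = 123456789                   # target's identifier (known to attacker)
I_x = 987654321                   # arbitrary fake identifier
I_y = (I_t * I_x) % p             # crafted so that I_y = I_t * I_x

# The target's own report is in the batch, so the relation shows up:
batch = [blind(I_y), blind(I_x), blind(I_t)]
assert find_linear_relation(batch) is not None
```

The scan is quadratic-to-cubic in the batch size in this naive form; with a hash table of blinded values the server only needs to test each pair's product, but the principle is the same.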
Usually this is from an ad network of some kind, but I'm simplifying. ↩︎
The actual proposal uses different names for the impression and the purchase, but that's not necessary for this simple example. ↩︎
Yes, it's bad that sync between browsers of different manufacturers doesn't work, but that's a whole different story. ↩︎
In particular, the facts that $(g^a)(g^b) = g^{a+b}$ and $(g^a)^b = g^{ab}$. ↩︎
Yes, I know I'm using exponential notation. It's easier to follow for people not used to EC notation. ↩︎
I've omitted the discussion of the Prio proofs for simplicity. ↩︎
Note that another way to execute this is to just create a new identity that is the product of two existing identities; this lets you learn if both are in a set of reports. ↩︎
Highly-Parallel Hardwired Deep Convolutional Neural Network for 1-ms Dual-Hand Tracking
Peiqi Zhang, Tingting Hu, Dingli Luo, Songlin Du, Takeshi Ikenaga
Graduate School of Information, Production and Systems
1-ms vision systems represent an extreme case of temporal development in video sensing techniques. Moreover, a 1-ms dual-hand tracking system leverages the dexterous functionality of hands and thus serves as a seamless and intuitive interface for Human-Computer Interaction. Deep CNN is promising for high tracking robustness; however, neither GPU-based nor FPGA-based implementations address the tracking task at ultra-high speed. This paper proposes: (a) a paradigm to directly map a deep CNN as a hardwired circuit, so that the entire network runs in parallel and high processing speed is obtained. The network is exempted from memory access since all intermediate neural values are implicitly represented in hardware states, and condensed binarization is used to reduce resource utilization; (b) a hardware design of the hardwired network on FPGA, inside which kernel-adapted convolutional trees are devised to maximize parallelism. The speed bottleneck of the network is therefore removed by implementing convolutional layers as fine-grained pipelines with unified components; (c) FPGA-GPU hetero complementation, which utilizes an auxiliary GPU network to compensate for the accuracy of the FPGA network without affecting its speed. The quick primary results on FPGA are intermittently refined using delayed but accurate hints from the GPU. Implementation results show that the proposed method reaches 973 fps and consumes merely 1.30 ms to process 640×480 images, while the accuracy is only 4.7% lower compared with the general method on test sequences. Video demonstrations are available at https://wcms.waseda.jp/em/5f9d020f136e7.
IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 12, Dec. 2022, pp. 8192-8203. https://doi.org/10.1109/TCSVT.2021.3103784
Keywords: 1-ms vision system, FPGA system, hand tracking, hardwired circuit, neural networks
Remote estimation of rapeseed yield with unmanned aerial vehicle (UAV) imaging and spectral mixture analysis
Yan Gong, Bo Duan, Shenghui Fang, Renshan Zhu, Xianting Wu, Yi Ma & Yi Peng
The accurate quantification of yield in rapeseed is important for evaluating the supply of vegetable oil, especially at regional scales. This study developed an approach to estimate rapeseed yield with remotely sensed canopy spectra and abundance data derived from spectral mixture analysis. A six-band image of the studied rapeseed plots was obtained by an unmanned aerial vehicle (UAV) system during the rapeseed flowering stage. Several widely used vegetation indices (VIs) were calculated from canopy reflectance derived from the UAV image, and the plot-level abundance of flower, leaf and soil, indicating the fraction of different components within the plot, was retrieved based on spectral mixture analysis of the six-band image and endmember spectra collected in situ for the different components. The results showed that, for all tested indices, VI multiplied by leaf-related abundance was closely related to rapeseed yield. The product of the Normalized Difference Vegetation Index and short-stalk-leaf abundance was the most accurate for estimating yield in rapeseed under different nitrogen treatments, with estimation errors below 13%. This study gives an important indication that spectral mixture analysis needs to be considered when estimating yield by remotely sensed VI, especially for images containing components with obviously different spectra, or for crops which have conspicuous flowers or fruits with spectra significantly different from their leaves.
Rapeseed is an important cash crop cultivated primarily for its oil-rich seeds, which can be processed into edible oil used all over the world. The byproducts of rapeseed are also widely used for animal feed, biofuel and medicine [1]. It is reported that in the last decade rapeseed displayed the highest production rise amongst oil crops [2], due to the long-term increase of global food and fuel demands. The accurate estimation of rapeseed yield, especially at regional scale, is of significance for evaluating the supply of vegetable oil and helping enhance food security.
Remote sensing techniques can efficiently obtain canopy spectral data from space, which carry valuable information indicating the canopy's interaction with solar radiation, such as vegetation absorption and scattering [3]. Many methods have been developed to relate vegetation spectra to optical properties for evaluating vegetation growth. Leaf pigments strongly absorb visible light, reducing vegetation reflectance in the visible range [4], while vegetation reflectance in the near-infrared (NIR) range is affected by thick plant tissues and canopy structures [5]. Optical vegetation indices (VIs), calculated from reflectance in different spectral ranges [6], have been developed to retrieve biophysical parameters such as leaf area index [7, 8], chlorophyll content [9, 10] and biomass [11, 12]. Instead of establishing regression algorithms that use VI to estimate a vegetation parameter, machine-learning methods employ more sophisticated statistical techniques to develop relationships between vegetation spectra and biophysical parameters [13]. For example, Bacour et al. [14] applied a neural network to estimate leaf area index and vegetation fraction with MERIS satellite reflectance at 11 bands. Verrelst et al.
[15] used Gaussian process machine-learning techniques to retrieve chlorophyll content with 62-band CHRIS satellite images. These methods can make full use of the spectral information in all bands and are able to approximate complex non-linear functions, so they often appear more robust and adaptive than VI-based algorithms, especially for hyperspectral data [16]. Despite spectral information, vegetation structure-related features can also be remotely estimated. Yue et al. [17] constructed crop 3D models using images taken from different positions over the same area, which clearly showed the height variations in wheat under different nitrogen and water treatments. Generally, VI-based methods are the mainstream approach for estimating biophysical parameters in various terrestrial ecosystems [18] and from various remote sensing platforms [19,20,21]. For multispectral data, which are available from most current sensors, many experiments showed that machine-learning methods only slightly improved estimation accuracy compared with VI-based methods. The use of an appropriate VI can give performance comparable to complex machine-learning methods but with much greater efficiency and feasibility [13].
The increase or decrease of crop photosynthetic capacity, which can be captured through spectral measures (e.g., VIs), directly affects plant development, thus determining the ultimate yield. Thus VI shows good potential as a basic and simple approach for remote estimation of crop yield at large scales [22, 23]. Becker-Reshef et al. [24] found that in winter wheat the maximum Normalized Difference Vegetation Index (NDVI) derived from MODIS satellite data of each season closely followed the yield variations, with a correlation coefficient above 0.74; Rahman et al. [25] utilized AVHRR-satellite-based NDVI and temperature data to model annual yield in rice, with residual values in individual years around 4%; Sakamoto et al. [26] mapped U.S. corn yields successfully using the Wide Dynamic Range Vegetation Index (WDRVI) derived from time-series MODIS data, with the estimation error below 30% at the state level; Liang et al. [27] reported a good relationship between grape yield and NDVI derived from Landsat data, with a correlation coefficient above 0.64. Remote sensing is able to offer spatial and temporal information on the study site in a timely and economical way, and its application to crop yield evaluation has been demonstrated across a wide range of scales and geographic locations [28,29,30,31].
Due to the limitation of spatial resolution as well as landscape fragmentation, there may be a considerable discrepancy between the pixel sizes of the remotely sensed images used and the much smaller sizes of the studied croplands. For example, MODIS satellite data, which are freely available and widely used all over the world, provide daily global observations at spatial resolutions of 0.25–1 km, while the smallholder farms in China, which account for 98% of the total farm area in China, are typically smaller than 0.002 km² [32, 33]. In this case, one pixel of an image encompasses several land cover types. Even for high-resolution data, the signal of one pixel can include contributions from multiple cropland components (e.g., soil, leaf, flower and fruit) that have significantly different spectra [34]. VI derived from the spectra of such mixed pixels may include data from components not or only weakly related to yield, which introduces unexpected uncertainties into yield estimation.
This problem is more obvious in rapeseed. Unlike in grain crops, conspicuous flowers appear on top of the rapeseed canopy at the early reproductive stage, and the flowering period may last more than 30 days [35]. Rapeseed flowers are bright yellow with dense petals that can scatter radiation in all directions, while rapeseed leaves are green and oriented nearly horizontally. With the same vegetation cover, it has been observed that the canopy reflectance of rapeseed during the flowering stage is twice as high as during the green-up stage, especially in the green and NIR spectral ranges [36]. When the remotely detected canopy spectra are strongly mixed between flower and leaf spectra, the accuracy of estimating vegetation parameters with pixel-level VI decreases. Behrens et al. [37] showed weak correlations between NDVI and rapeseed biomass, with a correlation coefficient below 0.1; Fang et al. [36] reported that uncertainties increased by 50% when using VI to estimate vegetation fraction in rapeseed during its flowering season. Canopy reflectance sensed from space is confounded by the different components of rapeseed cropland, and there is a need to consider the spectral mixture that will influence the yield estimates, especially during the flowering period.
Many studies have used spectral mixture analysis to quantify the spectral contributions from different components within a pixel [38,39,40]. It assumes that an individual pixel is mixed from a few dominant components with different proportions appearing in the studied scene, and that these components spectrally contribute to the total pixel signal at the sub-pixel scale [41]. Endmembers, the dominant components of the image scene that are not themselves mixed from other components, are first identified. A set of pure spectra of these endmembers is measured as field data, and the fraction of each endmember within a pixel can then be estimated by comparing the pixel spectra with the field-collected endmember spectra in multiple bands [42]. This method is commonly applied to assess vegetation properties. Based on measured spectra of two endmembers (bare soil and dense vegetation), Gitelson et al. [43] developed an approach to estimate vegetation fraction in sampling zones; Li and Strahler [44] proposed a model separating pixel reflectance into the reflectance of four components (sunlit ground, sunlit crown, shadowed ground and shadowed crown), and this model was further extended to estimate tree density in woodland using Landsat satellite data [45]. However, how spectral mixture affects yield estimates and how to select appropriate endmembers for yield estimation in rapeseed have not been adequately addressed.
Recently, Unmanned Aerial Vehicles (UAVs) have been increasingly used as an innovative remote sensing platform for environmental applications [46, 47]. Unlike field-collected data, UAVs can fly over a predetermined area to obtain images efficiently with very high spatial (e.g., centimeters) and temporal (e.g., daily observations) resolutions, which greatly reduces labor and time costs [48]. In comparison to most satellite and airborne platforms, the availability of customizable sensors on UAVs, as well as the flexibility of changing flight altitude and attitude, gives easy access to data with the spatial and spectral resolutions required by users [49].
This is particularly beneficial for precision agriculture, offering images with resolutions appropriate for detailed observation of in-field crop growth. For example, Jin et al. [50] developed a method to estimate wheat density using images taken from a hexacopter flying at very low altitude (3–7 m); López-Granados et al. [51] mapped weed distributions in croplands based on images collected by UAV at different heights; Zhou et al. [52] predicted rice yield using multi-temporal images acquired by two cameras with different spectral ranges mounted on a UAV system. UAV-collected data are becoming a promising tool for monitoring crop growth and assisting in field management.
This study explores how to improve the VI-based approach for estimating rapeseed yield by considering spectral mixture. The image of the study site was remotely obtained by a UAV system. The first objective is to compare and evaluate several widely used VIs for rapeseed yield estimation. The second objective is to identify and analyze the endmembers that appear in the remotely sensed scene and are most related to rapeseed yield. The final objective is to develop an approach for the accurate estimation of rapeseed yield with VI data and spectral mixture analysis.
In this investigation, we studied 24 rapeseed plots located at the Rapeseed Experiment and Research Base (30.1127°N, 115.5894°E), Central China Agricultural University, Wuxue, Hubei, China. They were about 15 m × 2 m in size and all planted with the same rapeseed hybrid (Huayouza No.9) [53]. The field management for these plots was similar except that different amounts of nitrogen fertilizer were applied. Eight nitrogen (N) rates (0, 45, 90, 135, 180, 225, 270 and 360 kg/ha) were utilized, and each rate was repeated on three randomly distributed plots (Fig. 1). All the plots were irrigated and weeded regularly. The growing season for the studied rapeseed was from Sept. 2014 to the following May. One UAV flight was arranged to obtain the image of the study area on Mar. 21, 2015, during the early flowering stage of the rapeseed. In this period, rapeseed is at the stage when plants increase photosynthetic rates due to the strong carbon sink of developing flowers and fruits [37]; thus, the image obtained at this stage probably corresponded to the maximum photosynthetic capacity of the rapeseed plants, which is indicative of final yield. For all 24 plots, half of each plot was sampled periodically for crop growth evaluations while the other half was kept intact until the harvest date for yield determination.
Study area and the nitrogen fertilizer applications in the 24 rapeseed plots
Rapeseed yield determination
The 24 rapeseed plots were harvested on 5 May, 2015. In each plot, half of the above-ground plant material (around 15 m²) was cut for yield determination. The harvested materials were exposed to the sun for 10 days before the seeds were threshed. The seeds were then cleaned and put into an oven at 60 °C until their weight did not change (around 4 days). All the dry seeds were weighed together, and the plot yield was calculated as the ratio of this total weight to the ground area (kg/ha). The final yield of the 24 plots varied from 1000 to 3500 kg/ha, representing a wide range of yield variation.
Canopy reflectance and VI derived from UAV data
The UAV flight was carried out on Mar.
21, 2015, between 10:00 and 13:00 local time, when changes in solar zenith angle were minimal and the weather was clear with low cloud cover. The Mini-MCA system (Mini-MCA 6, Tetracam Inc., Chatsworth, CA, USA) was mounted on a UAV (S1000, SZ DJI Technology Co., Ltd, Shenzhen, China) to obtain images of the studied area. The Mini-MCA consists of six individual miniature digital cameras, and each camera lens was equipped with a customer-specified band-pass filter centered at a wavelength of 490, 550, 670, 720, 800 or 900 nm, with a bandwidth of 10 nm. These bands were selected since they are commonly used for estimating vegetation photosynthesis-related parameters [37, 54, 55]. Prior to the flight, the six cameras were co-registered in the laboratory using a camera distortion correction model [56] so that the corresponding pixels of each lens were spatially overlapped in the same focal plane. During the flight, a gimbal-stabilized platform was used to keep the camera system pointing close to nadir [57], which minimized fluctuations in the collected reflectance due to variations in observation azimuth angle. The flight altitude was kept at 50 m above the ground to acquire images at a spatial resolution of around 2.5 cm. For each exposure, the six cameras simultaneously took a picture to produce a six-band composite image of the study area.
In this study, the image digital numbers (DN) were converted to surface reflectance using the empirical line approach [58, 59]. Four calibration ground targets, providing a relatively flat response to incident radiation throughout the visible to NIR spectral ranges, were placed in the cameras' field of view as a standard for image radiometric corrections. The calibration targets used in this study are made of highly durable woven polyester fabric at the size of 0.4 m × 0.6 m, having relatively constant reflectances of 6%, 24%, 48% and 100%, respectively (more details can be found at: http://www.tetracam.com/Products_Ground_Calibration_Panels.htm). Assuming a linear relationship between surface reflectance and DN values, canopy surface reflectance $\rho(\lambda)$ can be calculated as [60, 61]:

$$\rho(\lambda) = DN(\lambda) \times G_{\lambda} + B_{\lambda} \quad (\lambda = 490, 550, 670, 720, 800\ \text{and}\ 900\ \text{nm})$$

where $DN(\lambda)$ is the digital number of a given pixel at wavelength $\lambda$, and $B_{\lambda}$ and $G_{\lambda}$ are the bias and gain of the sensor at wavelength $\lambda$.
For each wavelength, B and G can be calculated from the DN values of pixels from the four calibration targets (denoted $DN_{0.06}$, $DN_{0.24}$, $DN_{0.48}$ and $DN_{1.00}$) by ordinary least squares:

$$\begin{bmatrix} B \\ G \end{bmatrix} = \left( X^{T} X \right)^{-1} X^{T} \begin{bmatrix} 0.06 \\ 0.24 \\ 0.48 \\ 1.00 \end{bmatrix}, \quad \text{where } X = \begin{bmatrix} 1 & DN_{0.06} \\ 1 & DN_{0.24} \\ 1 & DN_{0.48} \\ 1 & DN_{1.00} \end{bmatrix}$$

Within each of the 24 plots, we defined the maximum rectangle fitting the plot (including around 30,000 pixels), and the plot-level reflectance was calculated as the average value of all pixels within the defined rectangle. Plot-level VI was then retrieved from plot-level canopy reflectance (Table 1).
Table 1 Vegetation indices tested in this study
Spectral mixture analysis and endmember abundance
To analyze the spectral mixture within a pixel, five endmembers were considered in this study: (1) flower (FL), (2) sessile leaf (SE-LF), (3) short-stalk leaf (SS-LF), (4) wet soil (W-soil) and (5) dry soil (D-soil). They were the dominant components visible in the studied scene (Fig. 2). Samples of each component were collected from the study area and their spectra were immediately measured in situ using a hyperspectral radiometer (Analytical Spectral Devices Inc., Boulder, CO, USA). This radiometer was equipped with a 25° field-of-view optical fiber that obtained sample reflectance in the range of 350–1100 nm at a spectral resolution of around 1 nm. The measurements of W-soil and D-soil spectra were conducted in all plots (at least six sampling areas per plot) with the ASD fiber pointed at the area from an appropriate height to make sure the instantaneous field of view was fully covered by wet or dry soil with no vegetation; the averaged spectra were used as the soil spectra. The leaf spectra were taken using the ASD with a self-illuminated leaf clip, for sessile leaves and short-stalk leaves respectively. For each leaf, spectral reflectance was scanned at 5 positions randomly distributed on the leaf adaxial side, and six leaves were sampled per plot. The average of all spectral scans was then used as the leaf reflectance. Since rapeseed flowers are small and narrow, the sample flowers were gathered together on a black background and arranged to fully cover the sensor's field of view, to make sure that the radiometer collected the pure spectra of flowers. In this way, the reference endmember reflectances of the five components were obtained: $\rho_{FL}$, $\rho_{SE\text{-}LF}$, $\rho_{SS\text{-}LF}$, $\rho_{W\text{-}soil}$ and $\rho_{D\text{-}soil}$.
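As a concrete illustration of Eqs. 1–2, the following sketch fits the per-band bias and gain from the four calibration targets and applies them to a band; the DN values here are invented for illustration and are not taken from the study:

```python
import numpy as np

# Known reflectances of the four calibration targets (6%, 24%, 48%, 100%).
target_refl = np.array([0.06, 0.24, 0.48, 1.00])

def empirical_line(target_dn, image_dn):
    """Fit reflectance = G * DN + B by least squares and apply to a band.

    target_dn: mean DN of each calibration target in this band (4 values)
    image_dn:  DN array of the whole band
    """
    X = np.column_stack([np.ones(4), target_dn])       # rows [1, DN], as in Eq. 2
    bias, gain = np.linalg.lstsq(X, target_refl, rcond=None)[0]
    return gain * image_dn + bias

# Hypothetical target DNs and a hypothetical 12-bit band (e.g., 800 nm):
dn_targets = np.array([210.0, 820.0, 1650.0, 3400.0])
band_800 = np.random.randint(0, 4096, size=(480, 640)).astype(float)
refl_800 = empirical_line(dn_targets, band_800)

# Plot-level NDVI would then use the plot-mean reflectance at 670 and 800 nm:
# ndvi = (rho_800 - rho_670) / (rho_800 + rho_670)
```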
Endmembers selected in this study
For the spectral mixture analysis, the linear spectral mixing model [70] was used to estimate the fractional abundance of each spectral endmember. It assumes that the acquired image can be represented as a linear mixture of a few dominant spectral endmembers. For a given pixel at wavelength $\lambda$, the pixel reflectance $\rho(\lambda)$ can be approximated as:

$$\rho(\lambda) = \sum_{i=1}^{N} Abd_{i}\,\rho_{i}(\lambda), \quad 0 \le Abd_{i} \le 1, \quad \sum_{i=1}^{N} Abd_{i} = 1$$

where $N$ is the number of selected endmembers, $Abd_{i}$ is the fractional abundance of endmember $i$, and $\rho_{i}(\lambda)$ is the reference reflectance of endmember $i$ at band $\lambda$. The abundance is constrained between 0 and 1, and for each pixel the abundances of all endmembers sum to 1. An abundance of 0 indicates no spectral contribution from the particular endmember, while an abundance of 1 means the pixel spectrum is identical to the pure spectrum of that endmember. In this study, we selected flower, sessile leaf, short-stalk leaf, wet soil and dry soil as the five endmembers. According to Eq. 3, the abundances of the five selected components for each pixel can be retrieved from the six-band UAV image of the study site [71,72,73] (run in MATLAB 7.5) as:

$$\begin{bmatrix} \rho(\lambda_{1}) \\ \rho(\lambda_{2}) \\ \vdots \\ \rho(\lambda_{6}) \end{bmatrix} = \begin{bmatrix} \rho_{FL}(\lambda_{1}) & \rho_{SE\text{-}LF}(\lambda_{1}) & \rho_{SS\text{-}LF}(\lambda_{1}) & \rho_{W\text{-}soil}(\lambda_{1}) & \rho_{D\text{-}soil}(\lambda_{1}) \\ \rho_{FL}(\lambda_{2}) & \rho_{SE\text{-}LF}(\lambda_{2}) & \rho_{SS\text{-}LF}(\lambda_{2}) & \rho_{W\text{-}soil}(\lambda_{2}) & \rho_{D\text{-}soil}(\lambda_{2}) \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \rho_{FL}(\lambda_{6}) & \rho_{SE\text{-}LF}(\lambda_{6}) & \rho_{SS\text{-}LF}(\lambda_{6}) & \rho_{W\text{-}soil}(\lambda_{6}) & \rho_{D\text{-}soil}(\lambda_{6}) \end{bmatrix} \begin{bmatrix} Abd_{FL} \\ Abd_{SE\text{-}LF} \\ Abd_{SS\text{-}LF} \\ Abd_{W\text{-}soil} \\ Abd_{D\text{-}soil} \end{bmatrix}$$

where $\rho(\lambda_{i})$ is the surface reflectance of the given pixel at band $\lambda_{i}$ ($i = 1, 2, \ldots, 6$); $\rho_{FL}(\lambda_{i})$, $\rho_{SE\text{-}LF}(\lambda_{i})$, $\rho_{SS\text{-}LF}(\lambda_{i})$, $\rho_{W\text{-}soil}(\lambda_{i})$ and $\rho_{D\text{-}soil}(\lambda_{i})$ are the endmember reflectances at band $\lambda_{i}$ for flower, sessile leaf, short-stalk leaf, wet soil and dry soil, respectively; and $Abd_{FL}$, $Abd_{SE\text{-}LF}$, $Abd_{SS\text{-}LF}$, $Abd_{W\text{-}soil}$ and $Abd_{D\text{-}soil}$ are the corresponding abundances, referring to the fraction of each component within the pixel. A sketch of this constrained unmixing step is given below.
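The unmixing in Eq. 4 is a small constrained least-squares problem per pixel. The paper reports that it was run in MATLAB; the sketch below is an equivalent formulation in Python, with the sum-to-one constraint enforced softly through a heavily weighted extra row (a common trick) and the endmember spectra replaced by placeholders:

```python
import numpy as np
from scipy.optimize import lsq_linear

# E: 6-band reference spectra of the five endmembers (columns), as in Eq. 4.
# Placeholder values; in the study they come from the in-situ ASD spectra.
E = np.random.rand(6, 5)                              # shape: (bands, endmembers)
pixel = E @ np.array([0.4, 0.1, 0.3, 0.1, 0.1])       # synthetic mixed pixel

def unmix(E, pixel, weight=1e3):
    """Solve Eq. 4 for abundances with 0 <= Abd <= 1 and sum(Abd) = 1.

    The sum-to-one constraint is enforced softly by appending a heavily
    weighted row of ones to the system.
    """
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(pixel, weight * 1.0)
    return lsq_linear(A, b, bounds=(0.0, 1.0)).x

abundances = unmix(E, pixel)      # recovers approximately [0.4, 0.1, 0.3, 0.1, 0.1]
```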
Pixel by pixel, the abundance images of the five endmembers were then constructed. For each abundance image, the rectangle previously defined in each of the 24 plots for calculating plot-level VI was used to retrieve plot-level abundance, by averaging the abundance values of all pixels within the given rectangle.
Yield estimation in rapeseed using VI and abundance data
In this study, plot-level VI was first correlated with rapeseed yield directly. Since leaves are the main organ for photosynthesis in rapeseed, which determines its production, and the seed number largely depends on the number of flowers that will later be translated into pods, plot-level VI was also multiplied by plot-level leaf or flower abundance for relating to rapeseed yield. As linear relationships are easy to implement and sensitive to a wide range of variation in the dependent variable [74], four linear relationships were developed using the 24 samples: (1) yield versus VI, (2) yield versus VI × AbdFL, (3) yield versus VI × (AbdSE-LF + AbdSS-LF) and (4) yield versus VI × AbdSS-LF. Coefficients of determination (R2) and coefficients of variation (CV) were analyzed and compared.
Algorithm establishment using leave-one-out cross-validation
This study used the leave-one-out cross-validation approach [75] to establish the algorithm for rapeseed yield estimation. The samples were trained and tested K times (K is the number of samples; K = 24 in this study). In each iteration i, K − 1 samples were used as training data for calibrating the coefficients ($Coef_i$) of the algorithm, with the accuracy given by the coefficient of determination ($R^2_i$), and the remaining single sample was used for validation to obtain the estimation error ($E_i$). This procedure was repeated K times, with all samples used for both calibration and validation and each sample used exactly once as validation data. From the K iterations, the final algorithm and its accuracy (R2 and root mean square error, RMSE) were produced as:

$$Coef = \frac{\sum_{i=1}^{K} Coef_{i}}{K}, \quad R^{2} = \frac{\sum_{i=1}^{K} R^{2}_{i}}{K}, \quad RMSE = \sqrt{\frac{\sum_{i=1}^{K} E_{i}^{2}}{K}}$$

A compact sketch of this procedure is given below.
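A minimal implementation of this leave-one-out procedure, with placeholder data standing in for the 24 plot-level measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

# x: plot-level VI * Abd_SS-LF for the 24 plots; y: measured yield (kg/ha).
# Values below are synthetic placeholders, not the study's data.
x = np.random.rand(24, 1)
y = 1000 + 2500 * x.ravel() + np.random.normal(0, 150, 24)

coefs, errors = [], []
for train, test in LeaveOneOut().split(x):
    model = LinearRegression().fit(x[train], y[train])  # calibrate on K-1 samples
    coefs.append((model.intercept_, model.coef_[0]))
    errors.append(y[test][0] - model.predict(x[test])[0])  # held-out error E_i

final_coef = np.mean(coefs, axis=0)            # Eq. 5: averaged coefficients
rmse = np.sqrt(np.mean(np.square(errors)))     # Eq. 5: RMSE over the K folds
```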
Relationship of VI versus yield in rapeseed
In this study, the yield was first correlated with several widely used VIs. Among the tested indices, CIred edge, EVI, DVI, RDVI, TVI and SAVI showed significant correlations with yield (R2 > 0.7) in rapeseed, while NDVI, RVI, VARI and CIgreen had weak correlations with rapeseed yield (R2 below 0.52; Table 2). In addition, the relationships of NDVI and VARI versus yield appeared nonlinear. As shown in Fig. 3, NDVI and VARI saturated under moderate to high yield variations, when rapeseed yield exceeded 2000 kg/ha, but SAVI and CIred edge related to yield almost linearly.
Table 2 Coefficients of determination (R2) between VI and yield in rapeseed
The relationships of yield and a CIgreen, b RVI, c NDVI, d VARI, e CIred edge and f SAVI
Image-based abundance analysis
In order to improve the accuracy of the yield estimates, spectral mixture was considered as a factor affecting yield in the developed approach. Figure 4 presents the measured spectra of the five endmembers appearing in the studied rapeseed plots. As soil moisture increased, soil reflectance decreased at all wavelengths. Obvious spectral differences were observed among flower, sessile leaf and short-stalk leaf in the rapeseed plant. Flower reflectance was less than half of the leaf reflectance in the blue band (3% vs. 8%), but much higher than leaf reflectance in the green, red and NIR bands. Compared to short-stalk leaf, sessile leaf had much lower green reflectance but slightly higher NIR reflectance.
Pure spectral reflectance of flower, sessile leaf, short stalk leaf, dry soil and wet soil in the studied rapeseed plots
Based on the spectra of the selected endmembers, an abundance image of each component was derived for the study area. The abundance images of flower, short-stalk leaf, sessile leaf, wet soil and dry soil are given in Fig. 5. Generally, among the five abundance images, the flower abundance image appeared the brightest. The abundance image of short-stalk leaf was overall brighter than that of sessile leaf, and the brightness of the dry and wet soil abundance images was relatively low (Fig. 5b–f). Pixels located at the ridges between the plots were bright in the soil abundance images but dark in the flower/leaf abundance images. Note that obvious brightness heterogeneity existed among different plots in the images, and these heterogeneity patterns were quite different between the flower abundance image and the leaf abundance images.
a The six-band image of the study area obtained by the UAV system (true color shown). Abundance images derived from spectral mixture analysis on the UAV six-band image for b flower, c sessile leaf, d short stalk leaf, e dry soil and f wet soil
Yield estimation using VI and abundance data
Since flower and leaf are the most important organs for rapeseed photosynthesis and production, the proposed approach used plot-level flower abundance (AbdFL), leaf (sessile leaf and short-stalk leaf together) abundance (AbdSE-LF + AbdSS-LF) and short-stalk-leaf abundance (AbdSS-LF) to evaluate the yield in rapeseed. Generally, using VI × AbdFL to estimate rapeseed yield was less accurate than using VI alone, with higher CV and lower R2 values, except for CIgreen, VARI and RVI. For all tested indices, multiplying by leaf-related abundance (VI × (AbdSE-LF + AbdSS-LF) and VI × AbdSS-LF) increased the accuracy of yield estimation (Table 3). As shown in Fig. 6, using the product of leaf-related abundance and VI was able to estimate yield accurately, with R2 above 0.7 and CV below 17%. Especially for the indices which had weak correlations with yield (such as NDVI, CIgreen, VARI and RVI), the yield estimation accuracy was greatly improved when using VI × (AbdSE-LF + AbdSS-LF) and VI × AbdSS-LF, with R2 increased by 0.3 and CV decreased by 8%. Also note that, for all indices, VI × AbdSS-LF consistently gave better estimation results than VI × (AbdSE-LF + AbdSS-LF). Algorithms were established using the leave-one-out cross-validation approach for NDVI × AbdSS-LF, CIred edge × AbdSS-LF, TVI × AbdSS-LF and SAVI × AbdSS-LF, which had the highest correlations with yield (Table 4).
They estimated yield in rapeseed accurately, with RMSE below 303 kg/ha and CV below 13.1% (Fig. 7). Moreover, the relationship of yield versus NDVI × AbdSS-LF appeared much more linear than the relationship of yield versus NDVI (Figs. 3 and 7).
Table 3 The coefficients of determination (R2) and coefficients of variation (CV) of the relationships of yield versus VI, yield versus VI × AbdFL, yield versus VI × (AbdSE-LF + AbdSS-LF) and yield versus VI × AbdSS-LF
The comparison of a coefficients of determination (R2) and b coefficients of variation (CV) for relationships of (1) yield versus VI, (2) yield versus VI × AbdFL, (3) yield versus VI × (AbdSE-LF + AbdSS-LF) and (4) yield versus VI × AbdSS-LF for the studied indices
Table 4 The algorithms for estimating rapeseed yield using the product of vegetation index and short-stalk-leaf abundance
Validation of algorithms, established using the leave-one-out cross-validation approach, for estimating rapeseed yield in 24 plots under different nitrogen treatments by a NDVI × AbdSS-LF, b CIred edge × AbdSS-LF, c TVI × AbdSS-LF and d SAVI × AbdSS-LF
Discussion
The indices tested in this study were mostly developed originally for estimating vegetation greenness-related parameters such as chlorophyll content, leaf area index and vegetation fraction. Crop greenness during the mature growing stage has been found to be indicative of crop yield, and some indices have been successfully used for yield estimation in maize and soybean [76]. However, they did not work accurately for yield estimates in rapeseed (Table 2, Fig. 3). Especially for indices using green reflectance (CIgreen and VARI), the relationships of VI versus yield were weak, with R2 below 0.43. This is consistent with the finding of Sulik and Long [77] that the correlation between NDVI and yield was only 0.22 in spring canola during flowering seasons in Oregon, USA; they thus proposed a yellowness index which was linearly and strongly related to canola yield, with a correlation coefficient around 0.76. Unlike grain crops (e.g., maize or soybean), rapeseed during the early mature stage has conspicuous flowers which may occupy the top of the canopy for more than 30 days. The flowers are numerous, aligned in racemes, and bright yellow. In this case, canopy reflectance in green bands would be more affected by flower absorption and scattering. On the other hand, plot-level VI was calculated from mixed components including flower, leaf and soil. Each component contributed differently to rapeseed yield, so using VI alone for yield regression may introduce unexpected uncertainties. Thus, the abundance images of each component were produced, trying to associate VI with the component most relevant to rapeseed yield. Among the five abundance images, flower abundance was the brightest (Fig. 5), indicating that flowers occupied the largest proportion in the view of the sensor. This is not surprising since the rapeseed was blooming in the studied period, and flowers were growing on the top of the canopy, thus easily seen by the sensor. Note that the abundance of short-stalk leaf was generally higher than that of sessile leaf. In the rapeseed plant, the sessile leaf is quite small and vertically oriented (Fig. 2), and thus likely to be hidden underneath the flower petals. Although the short-stalk leaf develops underneath the sessile leaf, it is much bigger and horizontally expanded, thus appearing more visible in the view of the sensor.
Due to the different nitrogen treatments applied in the 24 plots, the greenness of rapeseed in different plots varied when the images were taken, and different plots would have contrasting yields thereafter, ranging from 1000 to 3500 kg/ha. It was observed that the flower abundance image was quite homogeneous across the 24 plots, but obvious differences in leaf abundance existed among plots (Fig. 5). This indicates that leaf abundance was more sensitive than flower abundance to variations in nitrogen usage.
Compared to using VI alone to estimate yield, the accuracy of yield estimation increased when using VI × AbdSS-LF and VI × (AbdSE-LF + AbdSS-LF) for all indices, but for most indices the accuracy decreased when using VI × AbdFL (Fig. 6, Table 3). The flowering period of rapeseed can last for more than 30 days. The plants begin to flower first at the main stems, then on the upper branches, followed by the lower branches. The studied image was taken in the early flowering period, so flowers that would bloom later were missing at the observation moment. Only one observation (or even several) during the relatively long flowering period cannot record the complete information of all possible flowers of all plants. So multiplying by the flower abundance at one moment weakened the relationship of VI versus yield.
During the flowering period, plant leaves were fully developed and their greenness remained quite stable. Many studies have shown that rapeseed leaves are mainly responsible for the photosynthesis which is crucial to final yield, and that leaf status at the plant's mature stage is representative of crop potential yield [78]. The product of VI and leaf-related abundance may to some extent exclude components that are not closely related to rapeseed yield (e.g., soil, and flower at one moment). For all tested VIs, VI × (AbdSE-LF + AbdSS-LF) related to rapeseed yield closely, with R2 above 0.7. Moreover, multiplying by the abundance of short-stalk leaf further increased the accuracy for all indices. Wang et al.'s [35] experiments evaluated and compared the contributions of short-stalk leaves and sessile leaves to rapeseed yield. They found that the removal of sessile leaves obviously decreased the number of rapeseed pods, while the removal of short-stalk leaves decreased not only the number of pods but also the number of seeds per pod. As shown in Fig. 6, by multiplying by AbdSS-LF all indices were able to estimate yield quite accurately, with R2 above 0.75 and CV below 15.7%. Even for VIs that appeared weakly related to rapeseed yield, such as CIgreen, VARI and NDVI, the use of AbdSS-LF enabled them to achieve accuracy comparable with the other indices. VI × AbdSS-LF associated VI with the fraction of short-stalk leaf in a plot, which is the component most relevant to rapeseed yield, thus resulting in higher accuracy for yield estimation than VI alone. This indicates that the model yield ∝ VI × AbdSS-LF may be applicable to all greenness-related VIs and is not restricted to indices with specific spectral bands or sophisticated formulations, which greatly expands the range of choices for yield estimation using remotely sensed images with conventional and few bands.
This study developed an approach to estimate rapeseed yield using the product of vegetation index and leaf abundance retrieved from the UAV image. The approach is simple but gives an important indication that spectral mixture analysis needs to be considered when estimating yield by remotely sensed VI, especially for images containing components with obviously different spectra.
For all tested VIs in rapeseed, the product of VI and leaf abundance was capable of estimating yield, especially for those VIs that seemed weakly related to rapeseed yield in many studies [79, 80]. Instead of creating a new spectral index requiring specific spectral bands or sophisticated formulations, an effective and simple alternative may be to relate VI to the abundance of the plant components most relevant to final yield. The results of this work can provide a conceptual background for using satellite data for which spectral mixing may be an issue. The endmembers proposed in this study are specific to rapeseed yield estimation and are not applicable to other crops. However, this work may offer a theoretical framework for yield estimation in crops that have conspicuous flowers or fruits with spectra significantly different from their leaves (e.g., rapeseed, cotton). Our future work is to apply this approach to real satellite data and to other crop species. In addition, we would like to test this approach on crops planted in various regions under different weather conditions, in order to explore the robustness of our approach to changes in meteorological parameters such as temperature, humidity, precipitation and wind speed. In this study, we developed an approach to estimate rapeseed yield using UAV-obtained canopy reflectance and abundance data. Canopy reflectance collected during the rapeseed flowering period is mixed and confounded by the reflectance of flower, leaf and soil. Thus, spectral mixture analysis was conducted to estimate the fractional abundance within a pixel of the different components appearing in the studied scene. Flower, sessile leaf, short stalk leaf, wet soil and dry soil were selected as endmembers, and abundance images of these components were produced based on the six-band UAV image. For all tested indices, the product of plot-level VI and leaf-related abundance was closely related to rapeseed yield, with R2 above 0.75. Among the tested VIs, multiplying NDVI, CIred edge, TVI and SAVI by short-stalk-leaf abundance was the most accurate for yield estimates in rapeseed under different nitrogen fertilizer treatments, with estimation errors below 13.1%. USDA. Oilseeds: world markets and trade. http://apps.fas.usda.gov/psdonline/circulars/oilseeds.pdf (2014). Accessed 9 May 2014. Marchand L, Pelosi C, González-Centeno MR, et al. Trace element bioavailability, yield and seed quality of rapeseed (Brassica napus L.) modulated by biochar incorporation into a contaminated technosol. Chemosphere. 2016;156:150–62. Thenkabail PS, Lyon JG, Huete A. Hyperspectral remote sensing of vegetation. Boca Raton: CRC Press; 2011. p. 1943–61. Woolley JT. Reflectance and transmittance of light by leaves. Plant Physiol. 1971;47(5):656–62. Gausman HW, Allen WA, Cardenas R. Reflectance of cotton leaves and their structure 1. Remote Sens Environ. 1969;1(1):19–22. Hatfield JL, Gitelson AA, Schepers JS, et al. Application of spectral remote sensing for agronomic decisions. Agron J. 2008;100(3):117–31. Viña A, Gitelson AA, Nguy-Robertson AL, et al. Comparison of different vegetation indices for the remote assessment of green leaf area index of crops. Remote Sens Environ. 2011;115(12):3468–78. Gitelson AA, Viña A, Arkebauer TJ, et al. Remote estimation of leaf area index and green leaf biomass in maize canopies. Geophys Res Lett. 2003;30(5):1248. https://doi.org/10.1029/2002GL016450. Peng Y, Nguy-Robertson A, Arkebauer T, et al.
Assessment of canopy chlorophyll content retrieval in maize and soybean: implications of hysteresis on the development of generic algorithms. Remote Sens. 2017;9(3):226. Schlemmer M, Gitelson A, Schepers J, et al. Remote estimation of nitrogen and chlorophyll contents in maize at leaf and canopy levels. Int J Appl Earth Obs Geoinf. 2013;25:47–54. Hansen PM, Schjoerring JK. Reflectance measurement of canopy biomass and nitrogen status in wheat crops using normalized difference vegetation indices and partial least squares regression. Remote Sens Environ. 2003;86(4):542–53. Mutanga O, Adam E, Cho MA. High density biomass estimation for wetland vegetation using WorldView-2 imagery and random forest regression algorithm. Int J Appl Earth Obs Geoinf. 2012;18:399–406. Gholizadeh H, Rahman AF. Comparing the performance of multispectral vegetation indices and machine-learning algorithms for remote estimation of chlorophyll content: a case study in the Sundarbans mangrove forest. Milton Park: Taylor & Francis, Inc.; 2015. Bacour C, Baret F, Béal D, et al. Neural network estimation of LAI, fAPAR, fCover, and LAI × Cab, from top of canopy MERIS reflectance data: principles and validation. Remote Sens Environ. 2006;105(4):313–25. Verrelst J, Alonso L, Camps-Valls G, et al. Retrieval of vegetation biophysical parameters using Gaussian process techniques. IEEE Trans Geosci Remote Sens. 2012;50(5):1832–43. Krasnopolsky VM, Schiller H. Some neural network applications in environmental sciences. Part I: forward and inverse problems in geophysical remote measurements. Neural Netw. 2003;16(3–4):321–34. Yue J, Yang G, Li C, et al. Estimation of winter wheat above-ground biomass using unmanned aerial vehicle-based snapshot hyperspectral sensor and crop height improved models. Remote Sens. 2017;9(7):708. Damm A, Guanter L, Paul-Limoges E, et al. Far-red sun-induced chlorophyll fluorescence shows ecosystem-specific relationships to gross primary production: an assessment based on observational and modeling approaches. Remote Sens Environ. 2015;166:91–105. Haboudane D, Miller JR, Tremblay N, et al. Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture. Remote Sens Environ. 2002;81(2–3):416–26. Chianucci F, Disperati L, Guzzi D, et al. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. Int J Appl Earth Obs Geoinf. 2016;47:60–8. Peng Y, Gitelson AA, Sakamoto T. Remote estimation of gross primary productivity in crops using MODIS 250 m data. Remote Sens Environ. 2013;128(1):186–96. Becker-Reshef I, Vermote E, Lindeman M, et al. A generalized regression-based model for forecasting winter wheat yields in Kansas and Ukraine using MODIS data. Remote Sens Environ. 2010;114(6):1312–23. Bolton DK, Friedl MA. Forecasting crop yield using remotely sensed vegetation indices and crop phenology metrics. Agric For Meteorol. 2013;173:74–84. Becker-Reshef I, Justice C, Sullivan M, et al. Monitoring global croplands with coarse resolution earth observations: the Global Agriculture Monitoring (GLAM) project. Remote Sens. 2010;2(6):1589–609. Rahman A, Khan K, Krakauer NY, et al. Use of remote sensing data for estimation of Aman rice yield. Int J Agric For. 2012;2(1):101–7. Sakamoto T, Gitelson AA, Arkebauer TJ. Near real-time prediction of US corn yields based on time-series MODIS data. Remote Sens Environ. 2014;147:219–31. Sun L, Gao F, Anderson MC, et al.
Daily mapping of 30 m LAI and NDVI for Grape yield prediction in California Vineyards. Remote Sens. 2017;9(4):317. Funk C, Budde ME. Phenologically-tuned MODIS NDVI-based production anomaly estimates for Zimbabwe. Remote Sens Environ. 2009;113(1):115–25. Rojas O. Operational maize yield model development and validation based on remote sensing and agro-meteorological data in Kenya. Int J Remote Sens. 2007;28(17):3775–93. Salazar L, Kogan F, Roytman L. Use of remote sensing data for estimation of winter wheat yield in the United States. Int J Remote Sens. 2007;28(17):3795–811. Kastens JH, Kastens TL, Kastens DLA, et al. Image masking for crop yield forecasting using AVHRR NDVI time series imagery. Remote Sens Environ. 2005;99(3):341–56. CSAC (The Office of China's Second Agricultural Census). Compilation of China's Second Agricultural Census. Beijing: China Statistics Press; 2009. Ju X, Gu B, Wu Y, Galloway JN. Reducing China's fertilizer use by increasing farm size. Glob Environ Change. 2016;41:26–32. Gilabert MA, Garcíaharo FJ, Meliá J. A mixture modeling approach to estimate vegetation parameters for heterogeneous canopies in remote sensing. Remote Sens Environ. 2000;72(3):328–45. Wang C, Hai J, Yang J, et al. Influence of leaf and silique photosynthesis on seeds yield and seeds oil quality of oilseed rape (Brassica napus L.). Eur J Agron. 2016;74:112–8. Fang S, Tang W, Peng Y, et al. Remote estimation of vegetation fraction and flower fraction in oilseed rape with unmanned aerial vehicle data. Remote Sens. 2016;8(5):416. Behrens T, Müller J, Diepenbrock W. Utilization of canopy reflectance to predict properties of oilseed rape (Brassica napus L.) and barley (Hordeum vulgare L.) during ontogenesis. Eur J Agron. 2006;25(4):345–55. Yang J, Weisberg PJ, Bristow NA. Landsat remote sensing approaches for monitoring long-term tree cover dynamics in semi-arid woodlands: comparison of vegetation indices and spectral mixture analysis. Remote Sens Environ. 2012;119:62–71. Tooke TR, Coops NC, Goodwin NR, et al. Extracting urban vegetation characteristics using spectral mixture analysis and decision tree classifications. Remote Sens Environ. 2009;113(2):398–407. Franke J, Roberts DA, Halligan K, et al. Hierarchical multiple endmember spectral mixture analysis (MESMA) of hyperspectral imagery for urban environments. Remote Sens Environ. 2009;113(8):1712–23. Smith MO, Ustin SL, Adams JB, et al. Vegetation in deserts: I. A regional measure of abundance from multispectral images. Remote Sens Environ. 1990;31(1):1–26. Smith MO, Johnson PE, Adams JB. Quantitative determination of mineral types and abundances from reflectance spectra using principal components analysis. J Geophys Res Solid Earth. 1985;90(S02):C797–C804. Gitelson AA, Kaufman YJ, Stark R, et al. Novel algorithms for remote estimation of vegetation fraction. Remote Sens Environ. 2002;80(1):76–87. Li X, Strahler AH. Geometric-optical bidirectional reflectance modeling of the discrete crown vegetation canopy: effect of crown shape and mutual shadowing. IEEE Trans Geosci Remote Sens. 1992;30(2):276–92. Franklin J, Strahler AH. Invertible canopy reflectance modeling of vegetation structure in semiarid woodland. IEEE Trans Geosci Remote Sens. 1988;26(6):809–25. Elarab M, Ticlavilca AM, Torres-Rua AF, et al. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture. Int J Appl Earth Obs Geoinf. 2015;43:32–42. 
Berni JAJ, Zarco-Tejada PJ, Suárez L, et al. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Trans Geosci Remote Sens. 2009;47(3):722–38. Du M, Noguchi N. Monitoring of wheat growth status and mapping of wheat yield's within-field spatial variations using color images acquired from UAV-camera system. Remote Sens. 2017;9(3):289. Holman F, Riche A, Michalski A, et al. High throughput field phenotyping of wheat plant height and growth rate in field plot trials using UAV based remote sensing. Remote Sens. 2016;8(12):1031. Jin X, Liu S, Baret F, et al. Estimates of plant density of wheat crops at emergence from very low altitude UAV imagery. Remote Sens Environ. 2017;198. López-Granados F, Torres-Sánchez J, Castro AID, et al. Object-based early monitoring of a grass weed in a grass crop using high resolution UAV imagery. Agron Sustain Dev. 2016;36(4):67. Zhou X, Zheng HB, Xu XQ, et al. Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J Photogramm Remote Sens. 2017;130:246–55. Ma N, Yuan J, Li M, et al. Ideotype population exploration: growth, photosynthesis, and yield components at different planting densities in winter oilseed rape (Brassica napus L.). PLoS ONE. 2014;9(12):e114232. Kira O, Linker R, Gitelson A. Non-destructive estimation of foliar chlorophyll and carotenoid contents: focus on informative spectral bands. Int J Appl Earth Obs Geoinf. 2015;38:251–60. Ray SS, Jain N, Miglani A, et al. Defining optimum spectral narrow bands and bandwidths for agricultural applications. Curr Sci. 2010;98(10). Zhang W, Li Y, Li D, et al. Distortion correction algorithm for UAV remote sensing image based on CUDA. In: 35th International Symposium on Remote Sensing of Environment (ISRSE35), IOP Conference Series: Earth and Environmental Science. 2014, p. 17. Turner D, Lucieer A, Malenovský Z, et al. Spatial co-registration of ultra-high resolution visible, multispectral and thermal images acquired with a micro-UAV over Antarctic moss beds. Remote Sens. 2014;6(5):4003–24. Dwyer JL, Kruse FA, Lefkoff AB. Effects of empirical versus model-based reflectance calibration on automated analysis of imaging spectrometer data: a case study from the Drum Mountains, Utah. Photogramm Eng Remote Sens. 1995;61(10):1247–54. Laliberte AS, Goforth MA, Steele CM, et al. Multispectral remote sensing from unmanned aircraft: image processing workflows and applications for rangeland environments. Remote Sens. 2011;3(11):2529–51. Farrand WH, Singer RB, Merényi E. Retrieval of apparent surface reflectance from AVIRIS data: a comparison of empirical line, radiative transfer, and spectral mixture methods. Remote Sens Environ. 1994;47(3):311–21. Wang C, Myint SW. A simplified empirical line method of radiometric calibration for small unmanned aircraft systems-based remote sensing. IEEE J Sel Top Appl Earth Obs Remote Sens. 2015;8(5):1876–85. Rouse JWJ, Haas RH, Schell JA, et al. Monitoring vegetation systems in the Great Plains with ERTS. NASA Spec Publ. 1974;351:309. Gitelson AA, Viña A, Ciganda V, et al. Remote estimation of canopy chlorophyll content in crops. Geophys Res Lett. 2005;32(8):93–114. Jordan CF. Derivation of leaf-area index from quality of light on the forest floor. Ecology. 1969;50(4):663–6. Richardson AJ, Wiegand CL. Distinguishing vegetation from soil background information. Photogramm Eng Remote Sens. 1977;43(12):1541–52. Roujean JL, Breon FM.
Estimating PAR absorbed by vegetation from bidirectional reflectance measurements. Remote Sens Environ. 1995;51(3):375–84. Liu HQ, Huete A. A feedback based modification of the NDVI to minimize canopy background and atmospheric noise. IEEE Trans Geosci Remote Sens. 1995;33(2):457–65. Broge NH, Leblanc E. Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density. Remote Sens Environ. 2001;76(2):156–72. Huete AR. A soil-adjusted vegetation index (SAVI). Remote Sens Environ. 1988;25(3):295–309. Singer RB, McCord TB. Mars—large scale mixing of bright and dark surface materials and implications for analysis of spectral reflectance. In: Lunar and Planetary Science Conference Proceedings. 1979. p. 1835–48. Heinz D, Chang CI, Althouse MLG. Fully constrained least-squares based linear unmixing [hyperspectral image classification]. In: Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International, vol. 2. IEEE. 2002. p. 1401–3. Pan B, Shi Z, An Z, et al. A novel spectral-unmixing-based green algae area estimation method for GOCI data. IEEE J Sel Top Appl Earth Obs Remote Sens. 2016;PP(99):1–13. Pu H, Chen Z, Wang B, et al. Constrained least squares algorithms for nonlinear unmixing of hyperspectral imagery. IEEE Trans Geosci Remote Sens. 2014;53(3):1287–303. Nguy-Robertson A, Gitelson A, Peng Y, et al. Green leaf area index estimation in maize and soybean: combining vegetation indices to achieve maximal sensitivity. Agron J. 2012;104(5):1336–47. Fielding AH, Bell JF. A review of methods for the assessment of prediction errors in conservation presence/absence models. Environ Conserv. 1997;24(1):38–49. Kohavi R. The power of decision tables. In: Machine Learning: ECML-95. Berlin: Springer; 1995. p. 174–89. Vina A, Gitelson AA, Rundquist DC, et al. Monitoring maize (Zea mays L.) phenology with remote sensing. Agron J. 2004;96(4):1139–47. Sulik JJ, Long DS. Spectral considerations for modeling yield of canola. Remote Sens Environ. 2016;184:161–74. Habekotte B. Evaluation of seed yield determining factors of winter oilseed rape (Brassica napus L.) by means of crop growth modelling. Field Crops Res. 1997;54(2):137–51. Basnyat P, McConkey B, Lafond GP, et al. Optimal time for remote sensing to relate to crop grain yield on the Canadian prairies. Can J Plant Sci. 2004;84(1):97–103. Piekarczyk J, Sulewska H, Szymańska G. Winter oilseed-rape yield estimates from hyperspectral radiometer measurements. Quaest Geogr. 2011;30(1):77–84. All authors have made significant contributions to this research. SF conceived of the research ideas. YG and YP designed the experiments, conducted the data analysis and contributed to the writing of this paper. BD performed the majority of the data processing, and YM provided rapeseed yield data. RZ and XW provided important insights and suggestions on this research from the perspective of agronomists. All authors read and approved the final manuscript. We acknowledge the support and use of facilities and equipment provided by the Lab for Remote Sensing of Crop Phenotyping Institute, School of Remote Sensing and Information Engineering and College of Life Sciences, Wuhan University, China. We are very thankful to the research groups led by Dr. Jianwei Lu and Dr.
Shanqin Wang, College of Resources and Environment, Huazhong Agricultural University, China, for their hard work in collecting the yield data and their generosity in sharing it. We also appreciate Dr. Can Dai from the School of Resources and Environmental Science, Hubei University, China, for her help with the revisions concerning rapeseed biology. The remotely sensed data used in this study are available upon the approval of Dr. Shenghui Fang from the School of Remote Sensing and Information Engineering, Wuhan University, China. The rapeseed yield data in this study are available upon the approval of Dr. Jianwei Lu from the College of Resources and Environment, Huazhong Agricultural University, China. All authors agreed to publish this manuscript. All authors read and approved the manuscript. This research was supported by the National Natural Science Foundation of China (41771381), the China High Resolution Earth Observation System Project (30-Y20A29-9003-15/17), the National 863 Project of China (2013AA102401), and the Fundamental Research Funds for the Central Universities (2042017kf0236). School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, 430079, China Yan Gong, Bo Duan, Shenghui Fang, Yi Ma & Yi Peng College of Life Sciences, Wuhan University, Wuhan, 430072, China Renshan Zhu & Xianting Wu Lab for Remote Sensing of Crop Phenotyping, Wuhan University, Wuhan, 430079, China Yan Gong, Shenghui Fang, Renshan Zhu, Xianting Wu & Yi Peng Correspondence to Yi Peng. Gong, Y., Duan, B., Fang, S. et al. Remote estimation of rapeseed yield with unmanned aerial vehicle (UAV) imaging and spectral mixture analysis. Plant Methods 14, 70 (2018). https://doi.org/10.1186/s13007-018-0338-z Yield estimation Canopy reflectance Spectral mixture analysis Plants in computer vision
CommonCrawl
The average separation between the proton and the electron
The average separation between the proton and the electron in a hydrogen atom in the ground state is $5.3 \times 10^{-11} \mathrm{~m}$. (a) Calculate the Coulomb force between them at this separation. (b) When the atom goes into its first excited state, the average separation between the proton and the electron increases to four times its ground-state value. What is the Coulomb force in this state?
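A quick numerical check of both parts, assuming the standard values $k \approx 8.99 \times 10^{9} \mathrm{~N\,m^2/C^2}$ and $e \approx 1.602 \times 10^{-19} \mathrm{~C}$:

# Coulomb force between the proton and the electron in hydrogen.
k = 8.9875e9          # Coulomb constant, N m^2 / C^2
e = 1.602e-19         # elementary charge, C
r_ground = 5.3e-11    # ground-state separation, m

f_ground = k * e**2 / r_ground**2
f_excited = k * e**2 / (4 * r_ground)**2   # separation quadruples -> force / 16

print(f"(a) F_ground  ~ {f_ground:.2e} N")    # ~ 8.2e-08 N
print(f"(b) F_excited ~ {f_excited:.2e} N")   # ~ 5.1e-09 N

Since the force scales as $1/r^2$, quadrupling the separation reduces it by a factor of 16: roughly $8.2 \times 10^{-8} \mathrm{~N}$ in the ground state and $5.1 \times 10^{-9} \mathrm{~N}$ in the first excited state.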
CommonCrawl
Variations of the peak positions in the longitudinal profile of noon-time equatorial electrojet Zié Tuo1 (ORCID: orcid.org/0000-0002-0837-3675), Vafi Doumbia1, Pierdavide Coïsson2, N'Guessan Kouassi1 & Abdel Aziz Kassamba1 In this study, the seasonal variations of the EEJ longitudinal profiles were examined based on the full CHAMP satellite magnetic measurements from 2001 to 2010. A total of 7537 satellite noon-time passes across the magnetic dip-equator were analyzed. On the average, the EEJ exhibits the wave-four longitudinal pattern with four maxima located, respectively, around 170° W, 80° W, 10° W and 100° E longitudes. However, a detailed analysis of the monthly averages yielded the classification of the longitudinal profiles into two types. Profiles with three main maxima located, respectively, around 150° W, 0° and 120° E, were observed in December solstice (D) of the Lloyd seasons. In addition, a secondary maximum observed near 90° W in November, December and January strengthens from March to October to establish the wave-four patterns of the EEJ longitudinal variation. These wave-four patterns were divided into two groups: a group of transition, which includes the equinox months March, April and October and May in the June solstice; and another group of well-established wave-four pattern, which covers June, July and August of the June solstice and the month of September in September equinox. For the first time, the seasonal motions of the various maxima of the EEJ noon-time longitudinal profiles have been clearly highlighted. The equatorial electrojet (EEJ) is a daytime ionospheric current that flows eastward along the magnetic equator at about 105 km altitude (Chapman 1951). Most of the EEJ characteristics, like day-to-day, seasonal, latitudinal and longitudinal variability and the counter-electrojet phenomenon, have been described through its magnetic effect recorded on the ground as well as onboard polar orbiting satellites (Cain and Sweeney 1973; Gouin 1967; Gurubaran 2002; Langel et al. 1993). During the International Equatorial Electrojet Year (IEEY), simultaneous measurements were carried out in the longitude sectors of Asia, Africa and South America (Amory-Mazaudier et al. 1993; Arora et al. 1993). Magnetic data recorded along station chains across the dip-equator resulted in important advances in understanding the EEJ characteristics. Based on this dataset, Doumouya et al. (2003) established the longitudinal profile of the EEJ, which was found to be inversely correlated with the geomagnetic main field intensity. However, Doumouya and Cohen (2004) noticed a relatively amplified EEJ intensity in the longitude sector around 100° E when they included the magnetic data from Baclieu (105.44° E, 9.25° N, 1.35° dip-lat) in Vietnam. This observation was confirmed by the CHAllenging Minisatellite Payload (CHAMP) satellite magnetic data (Doumouya and Cohen 2004). The geomagnetic field measurements performed onboard the Satelite de Aplicaciones Cientificas-C (SAC-C), Oersted and CHAMP satellites resulted in improved descriptions of the EEJ longitudinal variation (Alken and Maus 2007; Doumouya and Cohen 2004; Jadhav et al. 2002). Thus, the EEJ longitudinal profiles are now known to exhibit up to three or four maxima located approximately around − 90° E, 0°, 100° E and 180° E (Alken and Maus 2007; Doumbia et al. 2007; Doumbia and Grodji 2016; Doumouya and Cohen 2004; Jadhav et al. 2002).
According to Alken and Maus (2007), Doumbia et al. (2007) and Doumbia and Grodji (2016), these longitudinal structures of the EEJ can be subject to seasonal variations. Indeed, it was shown that the EEJ longitude profiles with three maxima are observed during the December solstice, while the profiles with four maxima are observed during equinoxes and the June solstice. However, the transitions between the EEJ longitudinal patterns with three maxima and those with four maxima, and the physical processes behind them, are not well understood. In the present study, the seasonal variations of the EEJ longitudinal structures are examined. For that purpose, the longitudinal variation of the EEJ is revisited from the full CHAMP satellite magnetic data recorded from 2001 to 2010. In particular, the progressive changes from three maxima to four maxima, and vice versa, of the EEJ longitude patterns in the course of the year are analyzed on the basis of the average monthly longitude profiles. The motions of the above-mentioned maxima with season are also examined. Data and data processing The present work is based on CHAMP satellite OVerhauser Magnetometer (OVM) data that were recorded from 2001 to 2010 (Rother and Michaelis 2019) (https://isdc-old.gfz-potsdam.de/index.php). CHAMP was a near-polar orbiting satellite that was launched on July 15, 2000 onto a low altitude (about 460 km) circular orbit, with an inclination of 87.3° and orbital period of 93.55 min (Alken and Maus 2007; Lühr et al. 2004; Lühr and Maus 2006). The satellite was deorbited on September 19, 2010. One of the advantages of the CHAMP orbit is that it provides good latitudinal and local time coverage, allowing accurate studies of the ionospheric current systems. The geomagnetic force F was recorded with the OVM magnetometer at a sampling rate of 1 s, in the range from 18,000 to 65,000 nT, with 10 pT resolution and a noise level of 50 pT. The absolute error is estimated at about 0.5 nT. The magnetic data recorded on board the CHAMP satellite are composed of the sum of the geomagnetic main field, the crustal anomaly fields, and the magnetic effects of ionospheric and magnetospheric currents and their induced effects in the ground. Thus, any study of one of these sources requires its contribution to be isolated from that of the other magnetic sources. The purpose of the present work is to study the EEJ by analyzing its magnetic effect, extracted from the total observed magnetic force F. The geomagnetic main field, which represents about 99% of the total measured field, is estimated and removed by using the International Geomagnetic Reference Field (IGRF-12) model (Thébault et al. 2015). The remaining residual field, shown in Fig. 1, is designated as the total residuals \((\Delta F)\). \((\Delta F)\) is expected to include the crustal fields and the magnetic effects of ionospheric and magnetospheric currents. The EEJ magnetic effect is confined to a relatively narrow latitude band and produces a V-shaped depression at the magnetic dip-equator. This effect overlaps a long-wavelength background signal, from which it must be isolated. The approach consists of polynomial fitting of the background signal (Doumouya and Cohen 2004).
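As an illustration of that approach, here is a minimal sketch (not the authors' code) of removing a polynomial background from a single synthetic pass. The 12-degree fit matches the degree reported later in the text; excluding the equatorial band from the fit is an assumption made here for stability rather than a detail given in the paper.

import numpy as np

# Synthetic stand-in for one noon-time pass: a smooth long-wavelength
# background plus a narrow V-shaped EEJ depression at the dip-equator.
lat = np.linspace(-40.0, 40.0, 801)                # latitude, degrees
background = 30.0 * np.cos(np.radians(lat)) - 5.0  # long-wavelength signal, nT
eej = -25.0 * np.exp(-(lat / 4.0) ** 2)            # EEJ depression, nT
delta_f = background + eej                         # "total residuals"

# Fit the background with a 12-degree polynomial and subtract it; the
# equatorial band is excluded from the fit so that the polynomial tracks
# only the long-wavelength part (an assumption of this sketch).
outside = np.abs(lat) > 12.0
coeffs = np.polyfit(lat[outside], delta_f[outside], 12)
delta_f_eej = delta_f - np.polyval(coeffs, lat)

print(f"extracted EEJ minimum: {delta_f_eej.min():.1f} nT")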
The total residuals \((\Delta F)\) isolated from the CHAMP satellite observed total force \((F)\) by subtracting the geomagnetic main field estimated with the IGRF model For this study, magnetically quiet time data are selected according to the values of the Kp index, which is required to be smaller than 3+. Noon-time data for satellite passes between 11 and 13 LT are considered to estimate the EEJ strength \((\Delta F_{\text{eej}})\). Table 1 depicts the numbers of satellite noon-time passes selected per month. Figure 2 shows the monthly distribution of satellite passes per year. A total number of 7537 noon-time satellite passes have been selected. Table 1. Monthly distribution of selected noon-time satellite passes between 11 LT and 13 LT during geomagnetically quiet days with Kp < 3+. Monthly distribution of satellite noon-time passes per year from 2001 to 2010. Each year is identified by a specific color Extracting the EEJ magnetic effect by using a polynomial fitting The magnetic effect of the EEJ was extracted from the total magnetic residuals by subtracting the background signal (Fig. 3). The background signal was fitted with a 12-degree polynomial. This degree was chosen by trial-and-error testing of polynomial degrees from 6 to 30 (Doumbia and Grodji 2016; Doumouya and Cohen 2004). The solid line represents the total residuals and the dashed line represents the polynomial fitting of the background signal. The right panel of Fig. 3 shows the latitudinal profile of the EEJ magnetic effect extracted for a CHAMP satellite pass across the dip-equator at 1° E on 17 September 2001. The EEJ magnetic effect exhibits a sharp depression with a minimum at the dip-equator. This depression is flanked by two maxima that are located on average at about \(\pm\,7^\circ\) on either side of the magnetic dip-equator. The difference in amplitudes for the same day may be due to the longitudinal dependence of the EEJ; for different days, this difference also includes the day-to-day variability of the EEJ (Doumouya and Cohen 2004; Thomas et al. 2017). The magnetic signature of the equatorial electrojet (EEJ). The left panel shows the total residuals \(\Delta F\) (solid line) and the background long-wavelength signal (dashed line). The right panel shows the EEJ magnetic signature isolated from the total residuals Correction of satellite altitude effects on the EEJ strength The CHAMP satellite mission lasted about 10 years, from 2000 to 2010. This duration is close to the length of one solar cycle. During this period, the satellite orbit continuously drifted from the initial altitude of 460 km to 250 km at the end of the mission (Fig. 4). The decreasing altitude of CHAMP gradually moved it closer to the EEJ. As a consequence, the observed EEJ effects may have gradually increased due to the decreasing distance between the satellite and the EEJ current, located at about 105 km altitude. Thus, variations of the observed EEJ strength can be expected to include both the effects of the solar cycle and those of the satellite altitude variations. In this section, we reduce the effects of satellite altitude variations. The altitude effect correction consists of normalizing the measurements to 400 km altitude by applying Eq. (1). This method is based on the appendix of Le Mouël et al. (2006).
$$\Delta F_{400} = \frac{\Delta h_{\text{sat}}}{\Delta h_{400}}\,\Delta F_{\text{sat}},$$ where \(\Delta F_{400}\) is the EEJ strength at \(h_{400} = 400\) km, \(\Delta F_{\text{sat}}\) is the EEJ strength at a given altitude \(h_{\text{sat}}\), \(\Delta h_{\text{sat}}\) is the distance between the satellite and the EEJ altitude, and \(\Delta h_{400}\) is the distance between the normalized altitude and the EEJ altitude. CHAMP satellite altitude variations from 2001 to 2010 Longitudinal variation of EEJ The EEJ strength \((\Delta F_{\text{eej}})\) is estimated from the latitudinal profiles by the difference between the minimum at the dip-equator and the average of the maxima on either side for each satellite pass across the magnetic equator. Figure 5 depicts the average longitudinal variation of the EEJ during September equinox. The dots represent \(\Delta F_{\text{eej}}\) for single satellite passes across the dip-equator. The solid red line shows the median values of \(\Delta F_{\text{eej}}\) over every 15-degree longitude interval from − 180° E to 180° E, and the smoothed black line is obtained by spline interpolation of the median values. The four maxima located, respectively, at about − 170° E, − 80° E, − 10° E and 100° E longitudes confirm the wave-four structure of the EEJ longitudinal variation, shown in previous studies (Alken and Maus 2007; Doumbia et al. 2007; Le Mouël et al. 2006; Yamazaki and Maute 2017). This structure is susceptible to seasonal variations, which are examined in the next section. Longitudinal variation of the EEJ during September equinox. The dots represent the EEJ strength estimated from single noon-time passes, the solid lines represent linear interpolation (red line) and spline interpolation (black line) of the median values of the EEJ strength in every 15° longitude Seasonal variation of EEJ longitudinal profiles Figure 6 shows the monthly averages of the EEJ longitudinal variations. The blue and green curves represent, respectively, the first quartile (25% of data) and third quartile (75% of data). The solid red line and the smoothed black line are the same as in Fig. 5. The patterns of the EEJ longitudinal profiles evolve from month to month. These patterns can be divided into two main kinds. Profiles with three maxima are observed from November to February, while those with four maxima, referred to as "wave-four" patterns, are observed from March to October. The patterns with three maxima are mainly observed during the December solstice, while the wave-four patterns include the June solstice and the March and September equinoxes. However, the patterns observed from March to May and in October seem to be transition phases between the three-maxima and wave-four patterns. According to these remarks, the longitudinal profiles of the EEJ are classified into three categories in Fig. 7. In the top panel, the EEJ longitudinal profiles with three maxima are shown. These profiles exhibit three main maxima that are located, respectively, from left to right, near − 150° E, 0° E and 120° E, which are referred to as L1, L3 and L5. However, on the west side, one can observe a secondary maximum near − 90° E (L2) in November and December, which is slightly visible in January but has totally vanished by February. On the east side, another secondary maximum can be observed near 70° E (L4) in January and February.
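For concreteness, the processing steps described above (the EEJ strength from a latitudinal profile, the altitude normalization of Eq. (1), and the 15° median binning behind Figs. 5 and 6) could look like the following minimal sketch. The function names and the assumption of synthetic inputs are mine, not the authors'.

import numpy as np

def eej_strength(lat, df_eej):
    # Strength = average of the two flanking maxima minus the minimum at
    # the dip-equator (lat is assumed centered on the dip-equator).
    i_eq = np.argmin(np.abs(lat))
    north_max = df_eej[lat > 2.0].max()
    south_max = df_eej[lat < -2.0].max()
    return 0.5 * (north_max + south_max) - df_eej[i_eq]

def normalize_to_400km(strength, h_sat_km, h_eej_km=105.0, h_ref_km=400.0):
    # Eq. (1): scale by the ratio of the satellite-to-EEJ and
    # reference-to-EEJ distances (EEJ altitude of ~105 km from the text).
    return strength * (h_sat_km - h_eej_km) / (h_ref_km - h_eej_km)

def median_longitude_profile(lon, strength, bin_deg=15.0):
    # Median EEJ strength in every 15-degree longitude bin, -180 to 180.
    edges = np.arange(-180.0, 180.0 + bin_deg, bin_deg)
    centers = 0.5 * (edges[:-1] + edges[1:])
    medians = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (lon >= lo) & (lon < hi)
        medians.append(np.median(strength[in_bin]) if in_bin.any() else np.nan)
    return centers, np.array(medians)

Applying these three steps to each quiet-time noon pass, then grouping the binned profiles by month, would reproduce the kind of monthly longitudinal profiles discussed here.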
In the middle panel, in addition to the three main maxima observed above, the west side secondary maximum (L2) stabilizes and matures at about − 80° E, while the east side secondary maximum (L4) is slightly visible in March and April but totally vanishes in May, and it does not appear in October. It is as if this secondary maximum (L4) and the east side main maximum (L5) combine to form a single maximum around 120° E. This process completes the establishment of the wave-four pattern, as depicted in the bottom panel. In summary, we have two west side maxima at about − 150° E and − 80° E, one maximum around 0° E and another maximum around 100° E. Monthly average noon-time longitudinal variations of the EEJ. The dots represent the EEJ strength estimated from single noon-time passes, the solid lines represent linear interpolation (red line) and spline interpolation (black line) of the median values of the EEJ strength in every 15° longitude. The blue and green lines represent, respectively, the spline interpolations of the first and third quartile values of the EEJ strength in every 15° longitude Different configurations of the EEJ longitudinal structures. In the top panel, the EEJ longitudinal profiles with three main maxima corresponding to December solstice are shown. The middle panel shows the wave-four patterns of the EEJ longitude profiles in the transition phases during March, April, May and October. The bottom panel exhibits well-established wave-four patterns from June to September. L1, L2, L3, L4 and L5 and the plus symbols indicate the locations of the various maxima Figure 8 shows the motions of the maxima in longitude over the course of a year. It is to be noticed that the locations L1, L3 and L5 of the three main maxima move, respectively, over 50°, 20° and 60° of longitude in the course of the year. They oscillate, respectively, around the meridians − 160° E, − 10° E and 120° E. L1 and L5 move in the same phase from the east sides to the west sides of the − 160° E and 120° E meridians, respectively, while L3 moves from the west side to the east side of the − 10° E meridian, in opposite phase with respect to L1 and L5. However, during the transition phase, while L1 seems to stabilize at − 160° E, L5 moves across the 120° E meridian westward from March to May and eastward in October. L2 is almost stable at the − 80° E meridian in the transition phase and moves very little eastward during the rest of the year. L4 appears in January at 70° E and moves to 55° E in February before combining with L5 during March and April. Motions of the maxima in longitude in the course of a year. The color zones depict the three categories of EEJ longitudinal profiles shown in the figure. The yellow zone corresponds to the periods of dominant three maxima (NDJF), the blue to the periods of transition (MAMO) and the pink to the periods of dominant wave-four patterns (JJAS) In this study, the seasonal variations of the EEJ longitudinal profiles were examined based on CHAMP satellite magnetic measurements from 2001 to 2010. A total of 7537 satellite noon-time passes across the magnetic dip-equator were analyzed. The EEJ strength was estimated from the latitudinal profiles of its magnetic signatures, with dense coverage of all longitude sectors. Based on these results, the EEJ longitudinal variation was revisited. On the average, the EEJ exhibits the wave-four longitudinal pattern with four maxima located, respectively, around − 170° E, − 80° E, − 10° E and 100° E longitudes.
This confirms the results obtained in previous studies (Alken and Maus 2007; Doumbia et al. 2007; Doumbia and Grodji 2016; Doumouya and Cohen 2004; Jadhav et al. 2002; Lühr et al. 2004). However, a detailed analysis of the monthly averages yielded the classification of the longitudinal profiles into two types. Profiles with three main maxima located, respectively, around − 150° E, 0° E and 120° E were observed in November, December, January and February. In addition, a secondary maximum near − 90° E appeared in November and December and slightly in January, finally establishing the wave-four patterns from March to October. It is to be noticed that the period of wave-four patterns includes the months of the equinoxes (E) and June solstice (J) of the Lloyd seasons. According to our observations, this period was divided into a period of transition (March, April, May and October) and a period of well-established wave-four structure (June, July, August and September). The period of transition includes two phases. The first phase consists of the transition from three to four maxima in March, April and May, and the second phase consists of a short transition from four to three maxima during October. In summary, the patterns of the EEJ longitudinal variation have been divided into three groups of 4 months each: (i) the group of three maxima, (ii) the group of transition and (iii) the group of well-established wave-four pattern. While the first group coincides with the December solstice (D) of the Lloyd seasons, the second group spans partially the equinoxes (March, April and October) and the June solstice (May), and the third group covers partially the June solstice (June, July and August) and the equinox (September). The locations of the three main maxima of the EEJ longitudinal profiles, identified from west to east, respectively, as L1, L3 and L5, have been found to clearly oscillate around average positions in longitude. Thus, L1 and L5 move from the east sides to the west sides of, respectively, the − 160° E and 120° E meridians, while L3 moves from the west side to the east side of the − 10° E meridian. During the transition phase, L1 stabilizes at − 160° E and L5 moves westward across the 120° E meridian from March to May and eastward in October. In the transition phase, L2 almost stabilizes at the − 80° E meridian, moving very little eastward during the rest of the year. Another secondary maximum (L4) was also observed near 70° E, but only in January and February. The results above clearly demonstrate the dependence of the EEJ longitudinal structures on season and confirm the findings of previous studies by Alken and Maus (2007), Doumbia et al. (2007) and Doumbia and Grodji (2016). Indeed, those studies have shown that the EEJ longitudinal profiles with three maxima were observed during the December solstice, while the profiles with four maxima were observed during the equinoxes and the June solstice of the Lloyd seasons. However, the results are slightly different for the transition phases and well-established wave-four structures, which are instead inter-seasonal. In addition, it is the first time that the motions of the various maxima of the EEJ longitudinal structures have been clearly highlighted. The original features of the present work can be summarized as follows: The full CHAMP satellite 10-year magnetic database was used, which statistically better supports the kind of detailed analysis conducted in this manuscript. Previous works (Alken and Maus 2007; Doumbia et al. 2007; Doumbia and Grodji 2016; Doumouya and Cohen 2004; Jadhav et al. 2002; Lühr et al.
2004) made only broad remarks on the EEJ longitudinal variations, attributing three maxima to the December solstice and four maxima to the other seasons. In the present study, these features of the EEJ longitude profiles are captured more finely on a monthly basis. The present study yielded for the first time a special classification of the EEJ longitude profiles into three main categories, as shown in this manuscript. In addition, we have shown how transitions are made from one structure to the other. Our classification shows that most of the features of the EEJ longitude profiles are inter-seasonal, instead of coinciding with a single particular season. The motions in longitude of the different maxima of the EEJ longitude profiles over the course of the year were examined for the first time. These new features open the way to better perspectives in the analysis of the physical processes that govern the EEJ longitudinal variation, especially the roles of thermospheric winds and their seasonal behaviors in this longitudinal variation (England et al. 2006; Immel et al. 2006; Lühr et al. 2008). The structures and seasonal dependence of the EEJ longitudinal variation have been considered to be linked with the wave structures of the thermospheric winds (Doumbia et al. 2007; Doumbia and Grodji 2016; Immel et al. 2006; Kil et al. 2007; Lühr et al. 2008; Lühr and Maus 2006). In the ionosphere, winds and electric fields are known to be modulated by the tidal excitations that propagate upward from lower atmospheric layers. Doumbia et al. (2007) simulated such tidal excitations for the diurnal and semi-diurnal components of migrating tides based on the National Center for Atmospheric Research Thermosphere–Ionosphere Electrodynamics General Circulation Model (NCAR TIEGCM). Furthermore, the wave structures of the thermospheric winds described in many studies (Häusler et al. 2007; Häusler and Lühr 2009; Immel et al. 2006) have been found to exhibit similar longitudinal variations. In a companion paper, the influence of thermospheric winds will be examined with combined migrating and non-migrating tidal excitations, for a better understanding of the background physical processes involved in the transition between the various EEJ longitudinal patterns. The CHAMP satellite magnetic data and the Kp index are available from the GeoForschungsZentrum (GFZ) database (https://isdc-old.gfz-potsdam.de/index.php). EEJ: Equatorial electrojet IEEY: International Equatorial Electrojet Year OVM: OVerhauser Magnetometer CHAMP: CHAllenging Minisatellite Payload SAC-C: Satelite de Aplicaciones Cientificas-C Alken P, Maus S (2007) Spatio-temporal characterization of the equatorial electrojet from CHAMP, Ørsted, and SAC-C satellite magnetic measurements: empirical model of the EEJ. J Geophys Res Space Phys. https://doi.org/10.1029/2007JA012524 Amory-Mazaudier C, Vila P, Achache J, Achy-Seka A, Albouy Y, Blanc E, Boka K, Bouvet J, Cohen Y, Dukhan M, Doumouya V, Fambitakoye O, Gendrin R, Goutelard C, Hamoudi M, Hanbaba R, Hougninou E, Hu Cc, Kakou K, Kobea Toka A, Lassudrie Duchesne P, Mbipom E, Menvielle M, Ogunade SO, Onwumechili CA, Oyinloye JO, Rees D, Richmond A, Sambou E, Schmucker E, Tireford J, Vassal J (1993) International equatorial electrojet year: the African sector. Braz J Geophys 11:303–317 Arora BR (1993) Indian IEEY geomagnetic observational program and some preliminary results. Braz J Geophys 11:365–386 Cain JC, Sweeney RE (1973) The POGO data. J Atmos Terr Phys 35:1231–1247.
https://doi.org/10.1016/0021-9169(73)90021-4 Chapman S (1951) The equatorial electrojet as detected from the abnormal electric current distribution above Huancayo, Peru, and elsewhere. Archiv für Meteorologie Geophysik und Bioklimatologie Serie A 4:368–390. https://doi.org/10.1007/BF02246814 Doumbia V, Grodji ODF (2016) On the longitudinal dependence of the equatorial electrojet. In: Fuller-Rowell T, Yizengaw E, Doherty PH, Basu S (eds) Geophysical monograph series. Wiley, Hoboken, pp 115–125. https://doi.org/10.1002/9781118929216.ch10 Doumbia V, Maute A, Richmond AD (2007) Simulation of equatorial electrojet magnetic effects with the thermosphere-ionosphere-electrodynamics general circulation model: equatorial electrojet magnetic effects. J Geophys Res Space Phys. https://doi.org/10.1029/2007JA012308 Doumouya V, Cohen Y (2004) Improving and testing the empirical equatorial electrojet model with CHAMP satellite data. Ann Geophys 22:3323–3333. https://doi.org/10.5194/angeo-22-3323-2004 Doumouya V, Cohen Y, Arora BR, Yumoto K (2003) Local time and longitude dependence of the equatorial electrojet magnetic effects. J Atmos Sol Terr Phys 65:1265–1282. https://doi.org/10.1016/j.jastp.2003.08.014 England SL, Maus S, Immel TJ, Mende SB (2006) Longitudinal variation of the E-region electric fields caused by atmospheric tides. Geophys Res Lett. https://doi.org/10.1029/2006GL027465 Gouin P (1967) A propos de l'existence possible d'un contre electrojet aux latitudes magnetiques equatorials. Ann Geophys 23:41–47 Gurubaran S (2002) The equatorial counter electrojet: part of a worldwide current system? Geophys Res Lett. https://doi.org/10.1029/2001GL014519 Häusler K, Lühr H (2009) Nonmigrating tidal signals in the upper thermospheric zonal wind at equatorial latitudes as observed by CHAMP. Ann Geophys 27:2643–2652. https://doi.org/10.5194/angeo-27-2643-2009 Häusler K, Lühr H, Rentz S, Köhler W (2007) A statistical analysis of longitudinal dependences of upper thermospheric zonal winds at dip equator latitudes derived from CHAMP. J Atmos Sol Terr Phys 69:1419–1430. https://doi.org/10.1016/j.jastp.2007.04.004 Immel TJ, Sagawa E, England SL, Henderson SB, Hagan ME, Mende SB, Frey HU, Swenson CM, Paxton LJ (2006) Control of equatorial ionospheric morphology by atmospheric tides. Geophys Res Lett. https://doi.org/10.1029/2006GL026161 Jadhav G, Rajaram M, Rajaram R (2002) A detailed study of equatorial electrojet phenomenon using Ørsted satellite observations. J Geophys Res Space Phys. https://doi.org/10.1029/2001JA000183 Kil H, Oh S-J, Kelley MC, Paxton LJ, England SL, Talaat E, Min K-W, Su S-Y (2007) Longitudinal structure of the vertical E × B drift and ion density seen from ROCSAT-1. Geophys Res Lett. https://doi.org/10.1029/2007GL030018 Langel RA, Purucker M, Rajaram M (1993) The equatorial electrojet and associated currents as seen in Magsat data. J Atmos Terr Phys 55:1233–1269. https://doi.org/10.1016/0021-9169(93)90050-9 Le Mouël J-L, Shebalin P, Chulliat A (2006) The field of the equatorial electrojet from CHAMP data. Ann Geophys 24:515–527. https://doi.org/10.5194/angeo-24-515-2006 Lühr H, Maus S (2006) Direct observation of the F region dynamo currents and the spatial structure of the EEJ by CHAMP. Geophys Res Lett. https://doi.org/10.1029/2006GL028374 Lühr H, Rother M, Köhler W, Ritter P, Grunwaldt L (2004) Thermospheric up-welling in the cusp region: evidence from CHAMP observations. Geophys Res Lett. 
https://doi.org/10.1029/2003GL019314 Lühr H, Rother M, Häusler K, Alken P, Maus S (2008) The influence of nonmigrating tides on the longitudinal variation of the equatorial electrojet: modulation of the EEJ by non-migrating tides. J Geophys Res Space Phys. https://doi.org/10.1029/2008JA013064 Rother M, Michaelis I (2019) CH-ME-3-MAG-CHAMP 1 Hz combined magnetic field time series (level 3). GFZ Data Services. https://doi.org/10.5880/GFZ.2.3.2019.004 Thébault E, Finlay C, Toh H (2015) Special issue "international geomagnetic reference field—the twelfth generation." Earth Planets Space. https://doi.org/10.1186/s40623-015-0313-0 Thomas N, Vichare G, Sinha AK (2017) Characteristics of equatorial electrojet derived from Swarm satellites. Adv Space Res 59:1526–1538. https://doi.org/10.1016/j.asr.2016.12.019 We are grateful to the German Aerospace Center (DLR) and the GeoForschungsZentrum (GFZ) for making CHAMP data available for this study. This work was partially carried out at the Institut de physique du globe de Paris (IPGP) during a visit by Mr Tuo Zié that was financially supported by PASRES (Programme d'Appui Stratégique à la Recherche Scientifique). PASRES (Programme d'Appui Stratégique à la Recherche Scientifique) supported a 5-month stay at the Institut de physique du globe de Paris (IPGP) in Paris for part of this work. Laboratoire de Physique de l'Atmosphère et de Mécanique des fluides, UFR-SSMT, Université Felix Houphouet Boigny, Abidjan, Côte d'Ivoire Zié Tuo, Vafi Doumbia, N'Guessan Kouassi & Abdel Aziz Kassamba Université de Paris, Institut de Physique du Globe de Paris, CNRS, 75005, Paris, France Pierdavide Coïsson The present work was performed in the framework of the Ph.D. thesis of Zié Tuo, under the supervision of Professor Vafi Doumbia in collaboration with Dr. Pierdavide Coïsson. Zié Tuo, Vafi Doumbia and Pierdavide Coïsson contributed to the data processing and analysis. Vafi Doumbia and Pierdavide Coïsson verified the analytical methods and the findings of the manuscript and contributed to the discussions of the results. All the authors contributed to the final manuscript submitted. All authors read and approved the final manuscript. Correspondence to Zié Tuo. Tuo, Z., Doumbia, V., Coïsson, P. et al. Variations of the peak positions in the longitudinal profile of noon-time equatorial electrojet. Earth Planets Space 72, 174 (2020). https://doi.org/10.1186/s40623-020-01305-z Longitudinal variation Seasonal dependence Aeronomy
CommonCrawl
APS March Meeting 2015 Monday–Friday, March 2–6, 2015; San Antonio, Texas Session T12: Focus Session: Non-Oxide Nanostructures and Artificially Structured Materials and Related Phenomena Sponsoring Units: DMP Chair: Pratibha Dev, Naval Research Laboratory Room: 007C T12.00001: ABSTRACT WITHDRAWN T12.00002: Artificially-Engineered III-Nitride Digital Alloy for Solar Energy Harvesting Wei Sun, Chee-Keong Tan, Nelson Tansu The pursuit of III-Nitride based solar cells has been primarily driven by the attribute of broad solar spectrum coverage through the use of InGaN material. However, the phase separation in high In-content InGaN alloy has been one of the largest barriers in the pursuit of nitride-based solar cells. Thus, a new approach to extending the bandgap coverage of nitride-based alloys needs to be pursued. In this work, we propose a novel artificially engineered III-Nitride based digital alloy structure to overcome the limitation presented by the epitaxy of phase-separated InGaN material with high In-content. The InGaN digital alloy structure is a short-period superlattice formed by alternating GaN and InN thin-film layers, in which the thickness of each layer is expressed as a number of monolayers (MLs). By adjusting the thickness of the GaN layer (m MLs) and the InN layer (n MLs), the In-content and the band structure of the InGaN digital alloy can be engineered correspondingly. The use of these digital alloys demonstrated the suitability of this method for extending the bandgap coverage in nitride-based semiconductors. T12.00003: Simulation of Epitaxial Growth of DNA-nanoparticle Superlattices on Pre-patterned Substrates Saijie Pan, Ting Li, Monica Olvera de la Cruz DNA self-assembly is a well-developed approach towards the construction of a great variety of nanoarchitectures. E-beam lithography is widely used for high-resolution nanoscale patterning. Recently, a new technique combining the two methods was developed to epitaxially grow DNA-mediated nanoparticle superlattices on a pre-patterned surface [1]. Here we use multi-scale simulations to study and predict the formation and defects of the adsorbed superlattice monolayer. We demonstrate that the epitaxial growth is enthalpy driven and show that the anisotropy of the DNA-mediated substrates leads to structural defects. We develop design rules to dramatically reduce defects of the attached layer. Ultimately, with the assistance of our simulations, this technique will open the door for the construction of well-ordered, three-dimensional novel metamaterials. [1] H. Atwater, et al. Nano Lett. 2013, 13, 6084. T12.00004: Theory of Energy Level Tuning in Quantum Dots by Surfactants Danylo Zherebetskyy, Lin-Wang Wang Besides quantum confinement that provides control of the quantum dot (QD) band gap, surface ligands allow control of the absolute energy levels. We theoretically investigate energy level tuning in PbS QDs by surfactant exchange. We perform direct calculations of real-size QDs with various surfactants within the framework of density functional theory and explicitly analyze the influence of the surfactants on the electronic properties of the QD. This work provides a hint for predictable control of the absolute energy levels and their fine tuning within a 3 eV range by modification of big and small surfactants that simultaneously passivate the QD surface.
T12.00005: Crystalline (Al$_{1-x}$B$_{x})$PSi$_{3}$ and~(Al$_{1-x}$B$_{x})$AsSi$_{3}$ tetrahedral phases via reaction of Al(BH$_{4})_{3}$ and M(SiH$_{3})_{3}$ (M$=$P, As) Patrick Sims, Andrew White, Toshihiro Aoki, Jose Menendez, John Kouvetakis Crystalline (Al$_{1-x}$B$_{x})$PSi$_{3}$ alloys ($x =$ 0.04-0.06) are grown lattice-matched on Si(100) by reactions of P(SiH3)3 and Al(BH4)3 using low-pressure CVD. The materials have been characterized by ellipsometry, XRD, XTEM, EELS and EDS, indicating the formation of single-phase monocrystalline layers with tetrahedral structures based on AlPSi$_{3}$. The latter comprises interlinked AlPSi$_{3}$ tetrahedra in which Al-P pairs are isolated within a Si matrix. Raman scattering of Al$_{1-x}$B$_{x}$PSi$_{3}$ films supports the presence of substitutional B in place of Al and provides evidence that B is bonded to P. The substitution of B atoms is desirable for promoting lattice matching, as required for Si-based solar cell designs. Analogous reactions of As(SiH3)3 with Al(BH4)3 produce (Al$_{1-x}$B$_{x})$AsSi$_{3}$ in which the B incorporation is limited to doping concentrations of 10$^{20}$ cm$^{-3}$. In both cases the Al(BH4)3 efficiently delivers Al to create crystalline group IV-III-V materials comprising light, earth-abundant elements, with possible applications in photovoltaics and light-element refractory solids. T12.00006: Optical trends in InP polytypic superlattices Guilherme Sipahi, Tiago de Campos, Paulo Eduardo de Faria Junior Recent advances in growth techniques have allowed the fabrication of semiconductor nanostructures with mixed wurtzite/zinc-blende crystal phases. Although the optical characterization of these polytypic structures is well reported in the literature, a deeper theoretical understanding of how crystal phase mixing and quantum confinement change the output linear light polarization is still needed. Here, we theoretically investigate the effects of these polytypic homojunctions on the interband absorption of an InP superlattice [1]. Using a single 8x8 k.p Hamiltonian that describes both crystal phases [1,2] together with the effects of quantum and optical confinement, we were able to explain the recent optical experimental results obtained on polytypic InP [3]. In summary, we have shown how the interplay of crystal phase mixing and quantum confinement can be used for light polarization engineering in polytypic homojunctions. [1] P. E. Faria Junior, T. Campos and G. M. Sipahi, J. Appl. Phys. 2014 in press, arXiv:1409.6836. [2] P. E. Faria Junior and G. M. Sipahi, J. Appl. Phys. 112, 103716 (2012). [3] E. G. Gadret, et al., Phys. Rev. B 82, 125327 (2010). T12.00007: Erbium doped Aluminum Nitride Nanoparticles for Nano-Thermometer Applications Sneha G. Pandya, Martin E. Kordesch We have synthesized nanoparticles (NPs) of aluminum nitride (AlN) doped \textit{in situ} with erbium (Er) using the inert gas condensation technique. These NPs have optical properties that make them good candidates for nanoscale temperature sensors. The Photoluminescence (PL) spectrum of Er$^{3+}$ in these NPs shows two emission peaks in the green region at around 540 nm and 560 nm. The ratio of the intensities of these luminescence peaks is related to temperature. Using the Boltzmann distribution, the temperature of the NP and its surroundings can be calculated. The NPs were directly deposited on (111) p-type Silicon wafers, TEM grids and glass cover slips.
XRD and HRTEM studies indicate that most of the NPs have the crystalline hexagonal AlN structure. An enhancement of the luminescence from these NPs was observed after heating in air at 770 K for 3 hours. The sample was then heated in air using a scanning optical microscope laser, and the corresponding change in the PL peak intensities of the NPs was recorded for laser powers ranging from 0.2 to 15.1 mW. The temperature calculated using the Boltzmann distribution was in the range of 320–470 K. This temperature range is of interest for semiconductor device heating and for thermal treatment of cancerous cells, for example.

T12.00008: Controlled formation of GeSi nanostructures on pillar-patterned Si substrate
Tong Zhou, Ceng Zeng, Yongliang Fan, Zuimin Jiang, Jinsong Xia, Zhenyang Zhong
GeSi quantum nanostructures (QNs) have potential applications in optoelectronic devices due to their unique properties and compatibility with sophisticated Si technology. However, the poor quantum efficiency of GeSi QNs on flat Si (001) substrates hinders their optoelectronic applications. Numerous growth strategies have been proposed to control the formation of GeSi QNs in the hope of improving their optoelectronic performance. One approach is to fabricate GeSi QNs on patterned substrates, where the QNs can be manipulated in size, shape, composition, orientation and arrangement. Here, self-assembled GeSi QNs on periodic Si (001) sub-micron pillars (SMPs) are systematically studied. By controlling the growth conditions and the diameters of the SMPs, different GeSi QNs, including circularly arranged quantum dots (QDs), quantum rings (QRs), and quantum dot molecules (QDMs), are realized at the top edge of the SMPs. Meanwhile, fourfold-symmetric GeSi QDMs can also be obtained at the base edges of the SMPs. The promising features of the self-assembled GeSi QNs are explained in terms of the surface chemical potential, which discloses the critical effect of surface morphology on the diffusion and aggregation of Ge adatoms.

T12.00010: Utilizing Ballistic Electron Emission Microscopy to Study Sidewall Scattering of Electrons
Westly Nolting, Chris Durcan, Robert Balsano, Vincent LaBella
Sidewall scattering of electrons in aggressively scaled integrated devices dramatically increases resistance, since the dimensions are approaching the mean free path of electrons in a metal ($\sim$40 nm). Ballistic electron emission microscopy (BEEM) can be utilized to study hot-electron scattering in metal films. In this presentation, BEEM is performed on a lithographically patterned interface between a metal and a semiconductor to determine its potential for measuring sidewall scattering. This is accomplished by acquiring spectra on a regularly spaced grid and then fitting the spectra to determine both the Schottky barrier height and the slope of the spectra. The position-dependent maps of these two parameters are then related to the scattering at the interface due to the underlying pattern.

T12.00011: Novel size effects on magneto-optics in spherical quantum dots
M. Kushwaha
We investigate the magneto-optical absorption in spherical quantum dots completely confined by a harmonic potential and exposed to an applied magnetic field in the symmetric gauge. This is done within the framework of the Bohm-Pines RPA, which enables us to derive and discuss the full Dyson equation that takes proper account of the Coulomb interactions.
Intensifying the confinement or the magnetic field, or reducing the dot size, yields a blue-shift in the absorption peaks; the size effects are seen to be predominant in this role. The magnetic field tends to maximize the localization of the particle but leaves the peak position of the radial distribution intact. The intra-Landau-level transitions are forbidden.

T12.00012: Silicene, germanene and tinene: Modeling of IR absorbance and topological states
Friedhelm Bechstedt, Lars Matthes, Olivia Pulci, Paola Gori
The graphene-like but Si-, Ge- or Sn-derived group-IV honeycomb crystals [1] have attracted much attention due to their unique properties and their recent realization in experiments [2]. We study their electronic and optical properties by means of ab initio electronic-structure calculations. Conical valence and conduction bands and a vanishing electronic band gap have enormous consequences. Independent of the group-IV element and the degree of hybridization, a universal absorbance ruled by the Sommerfeld fine-structure constant appears [3,4]. This result is, however, influenced by spin-orbit coupling, which also plays an important role for germanene and tinene nanoribbons. Topological metallic edge states appear if the edges are non-magnetic [5].
[1] L. Matthes et al., J. Phys. CM 25, 395305 (2013)
[2] P. Vogt et al., PRL 108, 155501 (2012)
[3] F. Bechstedt et al., APL 100, 261906 (2012)
[4] L. Matthes et al., PRB 87, 035438 (2013); New J. Phys. 16, 105007 (2014)
[5] L. Matthes, F. Bechstedt, PRB 90, 165431 (2014)

T12.00013: Internal Strain in Nano-Diamond and Boron Nitride
William Mattson, Donald Johnson
Nanodiamond surfaces undergo reconstruction, imposing stress on the nanoparticle (NP) core and possibly storing strain energy. The unique way in which these NPs store energy may lead to useful applications, but a greater understanding of strain energy storage and release is needed. In the current work, density functional theory methods are employed to predict structural properties and energetics of C (diamond) and cubic-BN NPs. The goal is to quantify NP core stress and its relationship to surface rearrangement, particle size, and material composition. Initial results suggest that different chemical factors drive surface rearrangement, leading to compressive stress in C and tensile stress in BN.

T12.00014: Plasmon Enhanced Raman Scattering in Ag-CdTe Core-Shell Nanostructures
Sheng Wang, Dexiong Liu, Jiang Zeng, Hua Zhang, Deliang Wang, Zhenyu Zhang
Surface-enhanced Raman scattering (SERS) has been a powerful technique for investigating the properties of semiconductors. For semiconductor thin films, plasmon resonance and photoluminescence (PL) are two important factors in determining the SERS signal. Here we carry out a combined experimental and theoretical study of the optical properties of metal-semiconductor hybrid nanosystems using SERS. First, we fabricate Ag-CdTe core-shell nanostructures by depositing CdTe on Ag nanoparticle arrays. By varying the thickness of the CdTe shell, one plasmon peak is tuned to the wavelength of the incident light for resonant absorption, which is further verified by our finite-difference time-domain simulations. The coupling between the plasmons and excitons at the interface quenches the radiative PL process, while the non-radiative Raman scattering process is unaffected.
Furthermore, the importance of multi-phonon resonance Raman scattering in these systems is investigated.

T12.00015: Band Gaps in InN/GaN Superlattices: Polar and Nonpolar Growth Directions
Niels Christensen, Izabela Gorczyca, Kamila Skrobas, Tadeusz Suski, Axel Svane
The electronic structures of short-period superlattices (SLs) consisting of $m$InN/$n$GaN unit cells with composition ($m$,$n$) have been calculated within density-functional theory, including corrections for the "LDA gap error". The variation of the gaps with SL composition and the dependence on the growth direction, the polar (c) and nonpolar (a, m) directions in the wurtzite structure, are compared. The band gaps calculated for the polar SLs are much smaller than those found for non-polar SLs, due to the electric polarization fields in the (c) SLs. For the (1,$m$) class of polar samples, photoluminescence measurements yield energy-gap values that are much larger than the calculated values. The reason for this is that the structure of the samples differs from the assumed ideal composition: transmission electron microscopy studies of the nominally polar 1InN/$n$GaN SLs show that the real structure is 1In$_{x}$Ga$_{1-x}$N/GaN with In content x $=$ 0.33. New calculations for such SLs are in perfect agreement with the photoluminescence experiments.
Prediction of protein self-interactions using stacked long short-term memory from protein sequences information

Volume 12 Supplement 8: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2018: systems biology
Yan-Bin Wang, Zhu-Hong You, Xiao Li, Tong-Hai Jiang, Li Cheng & Zhan-Heng Chen
BMC Systems Biology volume 12, Article number: 129 (2018)

Self-interacting proteins (SIPs) play a critical role in a wide range of cellular functions, and research on SIPs is an important part of molecular biology. Although experimental methods have provided numerous SIPs data, they are labor-intensive, time-consuming and costly, and can yield only limited results relative to real-world needs. Hence, it is urgent to develop an efficient computational SIPs prediction method to fill the gap. Deep learning technologies have produced disruptive performance improvements in many areas, but the effectiveness of deep learning methods for SIPs prediction had not been verified. We developed a deep learning model for predicting SIPs by constructing a Stacked Long Short-Term Memory (SLSTM) neural network that contains "dropout". We extracted features from protein sequences using a novel feature extraction scheme that combines Zernike Moments (ZMs) with the Position Specific Weight Matrix (PSWM). The capability of the proposed approach was assessed on the S. cerevisiae and Human SIPs datasets. The results indicate that the deep-learning-based approach can effectively resist data skew and achieves good accuracies of 95.69 and 97.88%, respectively. To demonstrate the advantage of deep learning, we compared the results of the SLSTM-based method with those of the well-known Support Vector Machine (SVM) method and several other established methods on the same datasets. The results show that our method is overall superior to the existing state-of-the-art techniques. To the best of our knowledge, this study is the first to apply a deep learning method to predict SIPs, and the experimental results reveal its potential in SIPs identification.

Proteins do not act in isolation; most processes in the cell are completed through protein interactions, and protein-protein interactions (PPIs) have long been a focus in the study of biological processes. SIPs are a special kind of protein interaction in which the two interacting partners share the same amino acid sequence, leading to the formation of homodimers. Previous studies have shown that SIPs play a leading role in uncovering the principles of cellular life and in the evolution of protein interaction networks (PINs) [1]. Understanding whether a protein can interact with itself helps clarify protein function, gives insight into the regulation of protein function, and can help predict or prevent disease. Homo-oligomerization has proven to play a significant role in wide-ranging biological processes, for instance immune response, signal transduction, enzyme activation, and regulation of gene expression [2,3,4,5]. SIPs have been found to be a major means of regulating protein function allosterically. Many studies have shown that protein diversity can be extended by SIPs without growing the genome size. In addition, self-interaction helps to increase stability and prevent denaturation by reducing a protein's surface area. SIPs also have the potential to interact with many other proteins; hence, they occupy a significant position in cellular systems.
Many experimental methods have been used to detect protein self-interactions, but these methods have clear drawbacks and limitations, so it is desirable to develop an effective and reliable new approach for predicting SIPs. In recent years, several computational systems have been designed for predicting PPIs. Zaki et al. [6] proposed a scheme for predicting PPIs that uses only the protein primary structure, based on pairwise similarity. Zahiri et al. [7] introduced an approach called PPIevo that predicts PPIs using an evolutionary feature extraction algorithm. You et al. [8] presented a method called PCA-ELM that shows great ability in predicting PPIs. M. G. Shi et al. [9] presented a powerful method that combines the correlation coefficient (CC) with a support vector machine (SVM) and gives satisfactory PPI predictions. These methods generally rely on information about protein pairs, for instance co-localization, co-expression and co-evolution. Nevertheless, such features are not applicable to SIPs, and the PPI data sets adopted in the above approaches do not cover SIPs. Hence, these computational methods are not suitable for predicting SIPs. In earlier research, Liu et al. [10] developed a prediction model for SIPs, named SLIPPER, by combining several typical known attributes. However, this model has a major limitation: it cannot deal with proteins that are not included in the current human interactome. Given the limits of the above-mentioned approaches, a more practical computational method for identifying SIPs is needed.

In this study, a novel computational scheme based on deep learning, named ZMs-SLSTM, is proposed for detecting SIPs from protein sequences. We first convert each protein sequence into a Position Specific Weight Matrix (PSWM). Second, a feature extraction approach based on Zernike moments (ZMs) is adopted to generate a feature vector from the PSWM. Then, we build a Stacked Long Short-Term Memory (SLSTM) network to predict SIPs. The proposed model was executed on the S. cerevisiae and Human SIPs data sets, and satisfactory results were obtained, with high accuracies of 95.69 and 97.88%, respectively. The method was also compared with a Support Vector Machine (SVM) classifier and with six other methods (SLIPPER, CRS, SPAR, DXECPPI, PPIevo and LocFuse). The results show that the ZMs-SLSTM method performs better than any of these methods. To the best of our knowledge, our study is the first to adopt deep learning to predict SIPs, and the experimental results show that our method can effectively resist data skew and improves prediction performance relative to existing techniques.

We downloaded 20,199 human protein sequences from the UniProt database [11]. The PPIs data come from various resource libraries, including MatrixDB, BioGRID, DIP, IntAct and InnateDB [12,13,14,15,16]. To obtain the SIPs data set, the PPI entries in which a protein interacts with itself were collected. Accordingly, we obtained 2,994 human SIPs sequences.
To build the datasets rigorously, the human SIPs dataset was screened by the following steps [17]: (1) protein sequences longer than 5000 residues or shorter than 50 residues were removed from the whole set of human protein sequences; (2) for the construction of the positive data set, a selected SIP had to meet one of the following conditions: (a) the self-interaction was revealed by at least two mass-scale experiments or one small-scale experiment; (b) the protein is annotated as a homooligomer in UniProt; (c) the self-interaction has been reported by more than one publication; (3) to establish the negative data set, all known SIPs were deleted from the whole human proteome. As a result, 1441 human SIPs were selected to build the positive data set, and 15,938 non-interacting human proteins were selected to build the negative data set. In addition, to better verify the usefulness of the designed scheme, we constructed an S. cerevisiae SIPs dataset covering 710 SIPs and 5511 non-SIPs using the same strategy.

Position specific weight matrix

The PSWM [18] was first adopted for detecting distantly related proteins, and it has been applied successfully in bioinformatics, including protein disulfide connectivity, protein structural classes, subnuclear localization, and DNA or RNA binding sites [19,20,21,22,23]. In this study, we use the PSWM for predicting SIPs. A PSWM for a query protein is a Y×20 matrix M = {m_{uv}: u = 1 ⋯ Y and v = 1 ⋯ 20}, where Y is the length of the protein sequence and the 20 columns of M correspond to the 20 amino acids. To construct the PSWM, a position frequency matrix is first created by counting the occurrences of each amino acid at each position; this frequency matrix can be represented as p(u, k), where u denotes the position and k denotes the kth amino acid. The PSWM can then be expressed as \( {m}_{uv}={\sum}_{k=1}^{20}p\left(u,k\right)\times w\left(v,k\right) \), where w(v, k) is a matrix whose elements represent the substitution (mutation) scores between two different amino acids. Consequently, high scores represent highly conserved positions, and low scores represent weakly conserved positions. In this paper, the PSWM of each protein sequence was generated using Position-Specific Iterated BLAST (PSI-BLAST) [24]; to obtain high and broad homologous information, we used three iterations and set the e-value to 0.001.

Zernike moments

In this paper, Zernike moments are introduced to extract meaningful information from the protein sequence and generate feature vectors [25,26,27,28,29,30]. We first introduce the Zernike polynomials in order to define the Zernike moments. Zernike introduced a set of complex polynomials that form a complete orthogonal set within the unit circle. These polynomials, denoted V_{xy}(n, m), have the following form:

$$ {V}_{xy}\left(n,m\right)={V}_{xy}\left(\rho, \theta \right)={R}_{xy}\left(\rho \right){e}^{jy\theta}\kern1em \mathrm{for}\ \rho \le 1 $$

where x is a non-negative integer, y is an integer satisfying |y| ≤ x with x − |y| even, ρ is the distance from the origin (0, 0) to the pixel (n, m), and θ is the angle between the vector ρ and the n axis, measured counterclockwise. The radial polynomial R_{xy}(ρ) is

$$ {R}_{xy}\left(\rho \right)=\sum \limits_{s=0}^{\left(x-|y|\right)/2}{\left(-1\right)}^s\frac{\left(x-s\right)!}{s!\left(\frac{x+\left|y\right|}{2}-s\right)!\left(\frac{x-\left|y\right|}{2}-s\right)!}{\rho}^{x-2s} $$

From equation (2), we can see that R_{x,−y}(ρ) = R_{xy}(ρ).
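As a concreteness check, Eq. (2) can be evaluated directly. The following is a minimal Python sketch, not code from the paper: the helper names are illustrative, the PSI-BLAST invocation in the comment is one plausible way to produce the PSWM (file paths are assumed), and the final assertion verifies the sign symmetry noted above.

import numpy as np
from math import factorial

# The PSWM itself can be generated with PSI-BLAST, e.g. (assumed paths):
#   psiblast -query protein.fasta -db swissprot -num_iterations 3 \
#            -evalue 0.001 -out_ascii_pssm protein.pssm

def radial_poly(x, y, rho):
    # Zernike radial polynomial R_xy(rho) of Eq. (2).
    y = abs(y)
    assert y <= x and (x - y) % 2 == 0, "requires |y| <= x and x - |y| even"
    rho = np.asarray(rho, dtype=float)
    R = np.zeros_like(rho)
    for s in range((x - y) // 2 + 1):
        c = ((-1) ** s * factorial(x - s)
             / (factorial(s)
                * factorial((x + y) // 2 - s)
                * factorial((x - y) // 2 - s)))
        R += c * rho ** (x - 2 * s)
    return R

rho = np.linspace(0.0, 1.0, 5)
# Symmetry in the sign of y, as noted after Eq. (2):
assert np.allclose(radial_poly(4, 2, rho), radial_poly(4, -2, rho))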
These polynomials satisfy the orthogonality relation:

$$ \underset{0}{\overset{2\pi }{\int }}{\int}_0^1{V}_{xy}^{\ast}\left(\rho, \theta \right){V}_{pq}\left(\rho, \theta \right)\rho d\rho d\theta =\frac{\pi }{x+1}{\delta}_{xp}{\delta}_{yq} $$

$$ {\delta}_{ab}=\begin{cases}1 & a=b\\ 0 & \mathrm{otherwise}\end{cases} $$

The Zernike moments are then obtained by calculating (5):

$$ {A}_{xy}=\frac{x+1}{\pi }{\sum}_{\left(\rho, \theta \right)\in \mathrm{unit}\ \mathrm{circle}} f\left(\rho, \theta \right){V}_{xy}^{\ast}\left(\rho, \theta \right) $$

To calculate the ZMs of a protein sequence represented by a PSWM, the origin is placed at the center of the matrix and the points of the matrix are mapped inside the unit circle, i.e., n² + m² ≤ 1; values falling outside the unit circle are not used [31,32,33,34,35]. Note that \( {A}_{xy}^{\ast }={A}_{x,-y} \).

To sum up, Zernike moments can extract important information, but one question must be considered when using them: how large should x_max be set? Lower-order moments extract coarse features, while higher-order moments capture fine details. Figure 1 shows the magnitude plots of the low-order Zernike moments. Since we need enough information for accurate classification but must also control the feature dimension to limit the computational cost, x_max is set to 30 in this experiment [36,37,38,39,40]. The moment magnitudes constitute the feature vector of a protein sequence:

$$ \overrightarrow{F}={\left[\left|{A}_{11}\right|,\left|{A}_{22}\right|,\dots \dots, \left|{A}_{NM}\right|\right]}^T $$

where |A_{xy}| denotes the absolute value of a Zernike moment. The zeroth-order moments are not computed because they do not contain any valuable information, and moments with y < 0 are not considered, since they are inferred through \( {A}_{x,-y}={A}_{xy}^{\ast } \).

Figure 1: Plots of the magnitude of the low-order Zernike moments

Finally, to eliminate noise as much as possible and to reduce the computational complexity, the feature dimension was reduced from 240 to 150 by the principal component analysis (PCA) method [41].

Long short-term memory

Long Short-Term Memory (LSTM), a special kind of recurrent neural network, performs much better than standard recurrent neural networks on many tasks; almost all the exciting results based on recurrent neural networks have been achieved with LSTMs. In this work, a deep LSTM structure is introduced for the first time to predict self-interacting proteins. The main difference between an LSTM network and other networks is its use of complex memory blocks instead of ordinary neurons. A memory block contains three multiplicative "gate" units (the input, forget, and output gates) along with one or more memory cells. The gate units control the information flow, and the memory cells store the historical information [42,43,44]. The structure of the memory block is shown in Fig. 2; to make the work of the gate units easier to follow, the memory cells themselves are not drawn in the figure. The gates remove or add information to the cell state by controlling the information flow. More specifically, the input and output of the information flow are handled by the input and output gates, respectively, while the forget gate determines how much of the previous unit's information is retained in the current unit.
In addition, to enable memory blocks to store earlier information, we add a peephole to the block that connects the memory cell to the gates [45, 46].

Figure 2: The structure of memory blocks in SLSTM networks

The information flow passing through a memory block undergoes the following operations to complete the mapping from input x to output h:

$$ {i}_t= sigm\left({W}_i\bullet \left[{C}_{t-1},{x}_t,{h}_{t-1}\right]+{b}_i\right) $$

$$ {f}_t= sigm\left({W}_f\bullet \left[{C}_{t-1},{x}_t,{h}_{t-1}\right]+{b}_f\right) $$

$$ {o}_t= sigm\left({W}_o\bullet \left[{C}_t,{x}_t,{h}_{t-1}\right]+{b}_o\right) $$

$$ {\overset{\check{} }{C}}_t=\mathit{\tanh}\left({W}_C\bullet \left[{x}_t,{h}_{t-1}\right]+{b}_C\right) $$

$$ {C}_t={C}_{t-1}\ast {f}_t+{\overset{\check{} }{C}}_t\ast {i}_t $$

$$ {h}_t={o}_t\ast \tanh \left({C}_t\right) $$

Here, the symbols related to C are the cell activation vectors; f, i and o denote the forget, input and output gates, respectively. The terms W_i, W_f, W_o and W_C are weight matrices, b_i, b_f, b_o and b_C are biases, sigm(·) is the sigmoid function, and ∗ is the element-wise product of vectors.

Stacked long short-term memory

A large body of theoretical and practical results supports the view that deep hierarchical network models are more competent for complex tasks than shallow ones. We construct the Stacked Long Short-Term Memory (SLSTM) network by stacking multiple LSTM hidden layers on top of each other; the network contains one input layer, three LSTM hidden layers, and one output layer. Figure 3 shows an SLSTM network. The number of neurons in the input layer equals the dimension of the input data, each LSTM hidden layer consists of 16 memory blocks, and the number of neurons in the output layer equals the number of classes. The numbers of neurons or memory blocks in the layers of the network are therefore 200–16–16–16–2. In the output layer, the softmax function is used to generate probabilistic results.

Figure 3: A Stacked Long Short-Term Memory network

Preventing overfitting

Overfitting occurs in many prediction and classification models, and even high-performing deep learning models are no exception. A great deal of theoretical and practical work has shown that overfitting can be reduced or avoided by adding a "dropout" operation to the neural network. Dropout provides a way to approximately combine exponentially many different neural network architectures [47]. More specifically, dropout involves two important operations: 1) during training, it randomly discards hidden units, together with the edges connected to them, with a fixed probability for each training case; 2) at test time, it integrates the multiple neural networks generated during training. The first operation produces a different network for almost every training case, and these different networks share the same weights for the hidden units. Figure 4 shows a network model after applying dropout. At test time, all hidden-layer neurons are used without dropout, but the weights of the network are scaled-down versions of the trained weights; the scaling factor equals the probability that the unit was retained [48]. Through this weight scaling, a large number of dropout networks can be merged into a single neural network that provides performance similar to averaging over all the networks [49].
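To make the architecture concrete, the following is a minimal sketch of the 200–16–16–16–2 stacked LSTM with dropout in Python/Keras. It is an illustration under stated assumptions, not the authors' code: the dropout rate and the way each feature vector is shaped into a (timesteps, features) input are not specified in the paper and are assumed here, and standard Keras LSTM cells lack the peephole connections described above. The Nadam optimizer and early stopping mirror the training choices reported in the next section.

from tensorflow import keras
from tensorflow.keras import layers

def build_slstm(input_dim=200, timesteps=1, dropout_rate=0.5):
    # Three stacked LSTM layers of 16 memory blocks each, dropout
    # between layers, and a 2-way softmax output (SIP / non-SIP).
    model = keras.Sequential([
        keras.Input(shape=(timesteps, input_dim)),
        layers.LSTM(16, return_sequences=True),
        layers.Dropout(dropout_rate),
        layers.LSTM(16, return_sequences=True),
        layers.Dropout(dropout_rate),
        layers.LSTM(16),
        layers.Dropout(dropout_rate),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="nadam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_slstm()
# Early stopping on the validation loss, as described in the text:
stopper = keras.callbacks.EarlyStopping(monitor="val_loss",
                                        restore_best_weights=True)
# model.fit(X_train, y_train, epochs=200,
#           validation_data=(X_val, y_val), callbacks=[stopper])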
Figure 4: Network structure after using dropout

To evaluate the methods presented in this paper, we used several commonly used indicators: the accuracy (ACC), true positive rate (TPR), positive predictive value (PPV), specificity (SPC), and Matthews correlation coefficient (MCC), defined as follows:

$$ \mathrm{ACC}=\frac{TN+ TP}{TN+ FN+ TP+ FP} $$

$$ \mathrm{TPR}=\frac{TP}{FN+ TP} $$

$$ \mathrm{PPV}=\frac{TP}{TP+ FP} $$

$$ \mathrm{SPC}=\frac{TN}{TN+ FP} $$

$$ \mathrm{MCC}=\frac{\left( TP\times TN\right)-\left( FP\times FN\right)}{\sqrt{\left( TP+ FN\right)\times \left( TP+ FP\right)\times \left( TN+ FN\right)\times \left( TN+ FP\right)}} $$

where TP is the number of interacting samples that are predicted correctly, FP the number of truly non-interacting samples that are judged to be interacting, TN the number of truly non-interacting samples that are predicted correctly, and FN the number of truly interacting samples that are judged to be non-interacting. Furthermore, the receiver operating characteristic (ROC) curve is plotted to appraise the performance of a set of classification results, and the area under the curve (AUC) is computed as an important evaluation indicator [50, 51].

Assessment of prediction

The proposed method was validated on the two standard SIPs datasets. Each dataset was divided into three parts: the training set, accounting for 40% of the data; the validation set, accounting for 30%; and the test set, accounting for the remaining 30%. The training sets are used to fit the weights of the connections between memory blocks in the SLSTM network. The validation sets are used to fine-tune the model parameters and select the best-performing model; another function of the validation set is to prevent overfitting through early stopping: when the error on the validation set begins to increase, training stops, because this is a sign of overfitting. The test set is used for an unbiased evaluation of the trained model. We train the model for only 200 epochs using the Nadam optimization method, which places more constraints on the learning rate and has a more direct impact on the gradient update.

As Table 1 shows, the accuracy obtained by the ZMs-SLSTM is 95.69% for the S. cerevisiae dataset and 97.88% for the Human dataset. Beyond that, several other evaluation indicators also show the potential of our approach. More specifically, on S. cerevisiae, the proposed method achieved a TPR of 92.97%, SPC of 95.94%, PPV of 67.23%, MCC of 77.43% and AUC of 0.9828. For the Human dataset, with more samples, this method produces even better results, with a TPR of 88.00%, SPC of 98.70%, PPV of 84.93%, MCC of 85.60% and AUC of 0.9908. The ROC curves achieved by the proposed ZMs-SLSTM method are shown in Fig. 5.

Table 1: The results produced by the proposed method and the SVM-based method on the SIPs datasets
Figure 5: ROC curves achieved by the proposed approach

The performance of the SVM-based approach

We verified the performance of our classifier by comparing it with an SVM (Support Vector Machine) classifier, representing the established state of the art. In this experiment, we applied the same feature extraction process to the S. cerevisiae and Human datasets, and used the LIBSVM tools [52] to implement the SVM classification. The SVM parameters c and g were set to 0.5 and 0.6 by the grid-search method. As Table 1 indicates, our ZMs-SLSTM method is significantly superior to the SVM-based method, particularly for predicting true self-interacting protein pairs.
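Before turning to the detailed per-dataset comparison, note that the indicators defined above (with the MCC in its standard signed form) can be computed directly from the four confusion-matrix counts. The following is a minimal sketch with illustrative function names, not code from the paper:

import math

def confusion_counts(y_true, y_pred):
    # Binary labels: 1 = self-interacting, 0 = non-interacting.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def metrics(tp, fp, tn, fn):
    acc = (tp + tn) / (tp + fp + tn + fn)
    tpr = tp / (tp + fn)      # sensitivity / recall
    ppv = tp / (tp + fp)      # precision
    spc = tn / (tn + fp)      # specificity
    mcc = ((tp * tn - fp * fn)
           / math.sqrt((tp + fn) * (tp + fp) * (tn + fn) * (tn + fp)))
    return {"ACC": acc, "TPR": tpr, "PPV": ppv, "SPC": spc, "MCC": mcc}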
Focusing on the S. cerevisiae dataset, the 95.69% ACC, 92.97% TPR, 77.43% MCC and 0.9828 AUC of the ZMs-SLSTM are much higher than the corresponding values for the SVM predictor, which achieves 93.06% ACC, 57.22% TPR, 64.59% MCC and 0.9345 AUC. A similar situation appears on the Human dataset, where the ZMs-SLSTM method performs better, with 97.88% ACC, 88.00% TPR, 98.70% SPC, 84.93% PPV, 85.60% MCC and 0.9908 AUC versus 95.30% ACC, 54.26% TPR, 99.01% SPC, 83.27% PPV, 66.07% MCC and 0.9261 AUC, respectively. In particular, the higher TPR (92.97% on the S. cerevisiae dataset and 88.00% on the Human dataset) indicates that our method gives more accurate results than the SVM-based approach (57.22% and 54.26%, respectively) in predicting true SIPs.

Comparison with other methods

To further evaluate our proposed approach, we also compared it with six existing methods (SLIPPER, CRS, SPAR, DXECPPI, PPIevo and LocFuse). Table 2 presents the results of these methods on the S. cerevisiae and Human datasets. As Table 2 shows, compared with the other methods, our method significantly improves the overall performance of SIPs prediction. In addition, SLIPPER has some inherent restrictions: it integrates a large amount of prior knowledge, such as GO terms, PINs, drug targets, and enzymes, and in particular the degree of a protein in the PIN makes a significant contribution to its SIP predictions. For unknown or artificial proteins in practical applications, however, all of this information is difficult to access directly; a method such as ours, which needs only the protein sequence, is therefore a necessary complement for improved SIP prediction. DXECPPI is a PPI predictor, and traditional PPI predictors use correlation information between two proteins, such as co-expression, co-evolution and co-localization, which cannot be used effectively for SIP prediction; our method can thus also serve as a necessary supplement to PPI prediction. On the S. cerevisiae dataset, the method presented in this paper achieves the best accuracy of 95.69%, which is much higher than that of the other methods, and even clearer improvements are seen in TPR, MCC, and AUC: the 92.97% TPR achieved by the ZMs-SLSTM approach is more than three times that of the DXECPPI method, the 77.43% MCC is more than four times that of the PPIevo method, and the 0.9828 AUC is 37% higher than the average of the other methods. The high TPR shows that our method makes few errors in identifying self-interacting proteins, while the high MCC and AUC show that our model is robust, practical, and able to effectively resist data skew. SIP prediction on the Human dataset (Table 2) is also greatly improved by our approach: the 97.88% ACC, 85.60% MCC and 0.9908 AUC of the ZMs-SLSTM are well above the corresponding values of the other methods. In addition, comparing the results of the SVM-based method (Table 1) with those of the six existing methods (SLIPPER, CRS, SPAR, DXECPPI, PPIevo and LocFuse) shows that even the SVM baseline is overall superior to the six existing predictors, which indicates that the feature extraction strategy proposed in this paper is efficient and useful and plays an important role in the SIPs prediction model. The results of this study illustrate that the ZMs-SLSTM approach is capable of effectively improving the prediction performance for SIPs.
Table 2: Performance comparison of seven approaches on both the S. cerevisiae and Human datasets

This method produces good results mainly for two reasons: an effective feature extraction strategy and a reliable classifier. The protein feature extraction scheme consisting of the PSWM and ZMs effectively captures the evolutionary information of a protein and produces highly discriminative features that improve the classifier's ability to distinguish unknown samples in the testing phase. The robust and efficient SLSTM deep neural network also contributes greatly to the accuracy improvement, providing stronger classification performance than traditional machine learning methods in interaction pattern recognition. The performance improvement brought by the SLSTM comes mainly from the following: 1) compared with traditional machine learning methods, the hierarchical structure of deep learning algorithms can process more complex data and automatically learn abstract, more useful features; 2) two mechanisms for preventing overfitting, dropout and early stopping, make the trained prediction model more reliable and robust; 3) in the testing phase, we merge all the dropout networks generated during training, which leads to a better result; 4) the SLSTM network uses memory blocks instead of simple neurons, which allows the network to learn more knowledge about self-interacting proteins during training.

In recent years, the rise of deep learning technology has been reshaping many fields, yet the ability of deep learning techniques to predict self-interacting proteins had not been demonstrated. In this work, an SLSTM neural network was constructed as a deep learning model to predict SIPs using only protein sequences. The method was applied to two standard data sets, and the results show that it is reliable, stable and accurate for predicting SIPs. The contribution of the proposed approach comes mainly from three technologies: the SLSTM network, the ZMs feature extractor, and the PSWM. Specifically, each protein sequence is converted into a PSWM using PSI-BLAST; the ZMs are then adopted to capture the valuable information in the PSWM and form the feature vectors that serve as classifier input; finally, the SLSTM deep network is used to predict SIPs. To further measure the performance of the ZMs-SLSTM method, a ZMs-SVM approach and six other methods were run on the S. cerevisiae and Human data sets for comparison. The results of these experiments indicate that the SIPs detection capability of the proposed scheme is overall ahead of both the earlier methods and the SVM-based approach. The performance improvement depends mainly on the use of an excellent deep learning model and a novel, high-performance feature extraction scheme. To the best of our knowledge, this study is the first to build a deep learning model for SIP prediction from protein sequences, and the results demonstrate that our method is strong and practical.

Abbreviations
MCC: Matthews correlation coefficient; PINs: protein interaction networks; PPIs: protein-protein interactions; PPV: positive predictive value; PSWM: position specific weight matrix; SIPs: self-interacting proteins; SLSTM: stacked long short-term memory; SPC: specificity; SVM: support vector machine; TPR: true positive rate; ZMs: Zernike moments

References
1. Ispolatov I, Yuryev A, Mazo I, Maslov S. Binding properties and evolution of homodimers in protein–protein interaction networks. Nucleic Acids Res. 2005;33(11):3629–35.
2. Park HK, Lee JE, Lim J, Jo DE, Park SA, Suh PG, Kang BH.
Combination treatment with doxorubicin and gamitrinib synergistically augments anticancer activity through enhanced activation of Bim. BMC Cancer. 2014;14(1):431.
3. Katsamba P, Carroll K, Ahlsen G, Bahna F, Vendome J, Posy S, Rajebhosale M, Price S, Jessell TM, Ben-Shaul A. Linking molecular affinity and cellular specificity in cadherin-mediated adhesion. Proc Natl Acad Sci. 2009;106(28):11594.
4. Baisamy L, Jurisch N, Diviani D. Leucine zipper-mediated homo-oligomerization regulates the rho-GEF activity of AKAP-Lbc. J Biol Chem. 2005;280(15):15405–12.
5. Koike R, Kidera A, Ota M. Alteration of oligomeric state and domain architecture is essential for functional transformation between transferase and hydrolase with the same scaffold. Protein Sci. 2009;18(10):2060–6.
6. Nazar Z, Sanja LM, Wassim EH, Piers C. Protein-protein interaction based on pairwise similarity. BMC Bioinformatics. 2009;10(1):1–12.
7. Zahiri J, Yaghoubi O, Mohammad-Noori M, Ebrahimpour R, Masoudi-Nejad A. PPIevo: protein-protein interaction prediction from PSSM based evolutionary information. Genomics. 2013;102(4):237–42.
8. You ZH, Lei YK, Zhu L, Xia J, Wang B. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis. BMC Bioinformatics. 2013;14(8):1–11.
9. Shi MG, Xia JF, Li XL, Huang D. Predicting protein–protein interactions from sequence using correlation coefficient and high-quality interaction dataset. Amino Acids. 2010;38(3):891.
10. Liu Z, Guo F, Zhang J, Wang J, Lu L, Li D, He F. Proteome-wide prediction of self-interacting proteins based on multiple properties. Mol Cell Proteomics. 2013;12(6):1689.
11. Consortium UP. UniProt: a hub for protein information. Nucleic Acids Res. 2015;43(Database issue):204–12.
12. Chatr-Aryamontri A, Breitkreutz BJ, Oughtred R, Boucher L, Heinicke S, Chen D, Stark C, Breitkreutz A, Kolas N, O'Donnell L. The BioGRID interaction database: 2015 update. Nucleic Acids Res. 2011;43(Database issue):D470.
13. Xenarios I, Rice DW, Salwinski L, Baron MK, Marcotte EM, Eisenberg D. DIP: the database of interacting proteins: 2001 update. Nucleic Acids Res. 2000;32(1):D449.
14. Orchard S, Ammari M, Aranda B, Breuza L, Briganti L, Broackes-Carter F, Campbell NH, Chavali G, Chen C, Del-Toro N. The MIntAct project--IntAct as a common curation platform for 11 molecular interaction databases. Nucleic Acids Res. 2014;42:358–63.
15. Launay G, Salza R, Multedo D, Thierrymieg N, Ricardblum S. MatrixDB, the extracellular matrix interaction database: updated content, a new navigator and expanded functionalities. Nucleic Acids Res. 2014;43(Database issue):321–7.
16. Breuer K, Foroushani AK, Laird MR, Chen C, Sribnaia A, Lo R, Winsor GL, Hancock REW, Brinkman FSL, Lynn DJ. InnateDB: systems biology of innate immunity and beyond—recent updates and continuing curation. Nucleic Acids Res. 2013;41(Database issue):D1228.
17. Liu X, Yang S, Li C, Zhang Z, Song J. SPAR: a random forest-based predictor for self-interacting proteins with fine-grained domain information. Amino Acids. 2016;48(7):1655.
18. Bailey TL, Gribskov M. Methods and statistics for combining motif match scores. J Comput Biol. 1998;5(2):211–21.
19. Delorenzi M, Speed T. An HMM model for coiled-coil domains and a comparison with PSSM-based predictions. Bioinformatics. 2002;18(4):617–25.
20. Liang Y, Liu S, Zhang S. Prediction of protein structural classes for low-similarity sequences based on consensus sequence and segmented PSSM. Comput Math Methods Med.
2015;2015(2):1–9.
21. Wang J, Wang C, Cao J, Liu X, Yao Y, Dai Q. Prediction of protein structural classes for low-similarity sequences using reduced PSSM and position-based secondary structural features. Gene. 2015;554(2):241–8.
22. Chen K, Kurgan L. Computational prediction of secondary and supersecondary structures. Humana Press; 2013.
23. Tomii K, Kanehisa M. Analysis of amino acid indices and mutation matrices for sequence comparison and structure prediction of proteins. Protein Eng. 1996;9(1):27.
24. Lobo I. Basic local alignment search tool (BLAST). J Mol Biol. 2008;215(3):403–10.
25. Chen Z, Sun SK. A Zernike moment phase-based descriptor for local image representation and matching. IEEE Trans Image Process. 2010;19(1):205–19.
26. Chong CW, Raveendran P, Mukundan R. A comparative analysis of algorithms for fast computation of Zernike moments. Pattern Recogn. 2003;36(3):731–42.
27. Farzam M, Shirani S. A robust multimedia watermarking technique using Zernike transform. In: Multimedia Signal Processing, IEEE Fourth Workshop on; 2001. p. 529–34.
28. Hse H, Newton AR. Sketched symbol recognition using Zernike moments. 2004;1:367–70.
29. Hwang SK, Billinghurst M, Kim WY. Local descriptor by Zernike moments for real-time keypoint matching. In: Image and Signal Processing, Congress on; 2008. p. 781–5.
30. Khotanzad A, Hong YH. Invariant image recognition by Zernike moments. IEEE Trans Pattern Anal Mach Intell. 1990;12(5):489–97.
31. Kim WY, Kim YS. Sig Proc Image Commun. 2000;16(1–2):95–102.
32. Li S, Lee MC, Pun CM. Complex Zernike moments features for shape-based image retrieval. IEEE Trans Syst Man Cybern Part A Syst Hum. 2009;39(1):227–37.
33. Liao SX, Pawlak M. On the accuracy of Zernike moments for image analysis. IEEE Trans Pattern Anal Mach Intell. 1998;20(12):1358–64.
34. Liao SX, Pawlak M. A study of Zernike moment computing; 2006.
35. Mukundan R, Ramakrishnan KR. Fast computation of Legendre and Zernike moments. Pattern Recogn. 1995;28(9):1433–42.
36. Noll RJ. Zernike polynomials and atmospheric turbulence. J Opt Soc Am. 1976;66(3):207–11.
37. Schwiegerling J, Greivenkamp JE, Miller JM. Representation of videokeratoscopic height data with Zernike polynomials. J Opt Soc Am A Opt Image Sci Vis. 1995;12(10):2105–13.
38. Singh C, Walia E, Upneja R. Accurate calculation of Zernike moments. Inf Sci. 2013;233:255–75.
39. Turney JL, Mudge TN, Volz RA. Invariant image recognition by Zernike moments. IEEE Trans Pattern Anal Mach Intell. 1990;12(5):489–97.
40. Wang JY, Silva DE. Wave-front interpretation with Zernike polynomials. Appl Opt. 1980;19(9):1510–8.
41. Mika S, Schölkopf B, Smola A, Müller KR, Scholz M, Rätsch G. Kernel PCA and de-noising in feature spaces. In: Conference on Advances in Neural Information Processing Systems II; 1999. p. 536–42.
42. Sak H, Senior A, Beaufays F. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. Com Sci. 2014:338–42.
43. Tai KS, Socher R, Manning CD. Improved semantic representations from tree-structured long short-term memory networks. Com Sci. 2015;5(1):36.
44. Dyer C, Ballesteros M, Ling W, Matthews A, Smith NA. Transition-based dependency parsing with stack long short-term memory. Com Sci. 2015;37(2):321–32.
45. Wollmer M, Schuller B, Eyben F, Rigoll G. Combining long short-term memory and dynamic Bayesian networks for incremental emotion-sensitive artificial listening. IEEE J Sel Topics Signal Proc. 2010;4(5):867–81.
46. Sainath TN, Vinyals O, Senior A, Sak H.
Convolutional, long short-term memory, fully connected deep neural networks. In: IEEE International Conference on Acoustics, Speech and Signal Processing; 2015. p. 4580–4.
47. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15(1):1929–58.
48. Dahl GE, Sainath TN, Hinton GE. Improving deep neural networks for LVCSR using rectified linear units and dropout. In: IEEE International Conference on Acoustics, Speech and Signal Processing; 2013. p. 8609–13.
49. Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR. Improving neural networks by preventing co-adaptation of feature detectors. Com Sci. 2012;3(4):212–23.
50. Hanley JA, Mcneil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143(1):29.
51. Huang J, Ling CX. Using AUC and accuracy in evaluating learning algorithms. IEEE Trans Knowledge Data Eng. 2005;17(3):299–310.
52. Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2011;2(3):1–27.

Acknowledgements
Publication of this article was sponsored in part by the National Science Foundation of China, under Grants 61722212 and 61572506, and in part by the Pioneer Hundred Talents Program of the Chinese Academy of Sciences. The authors would like to thank all anonymous reviewers for their constructive advice.

Availability of data: https://figshare.com/s/0d99da1a33850136e2cf

About this supplement
This article has been published as part of BMC Systems Biology Volume 12 Supplement 8, 2018: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2018: systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-12-supplement-8.

Author information
Yan-Bin Wang and Zhu-Hong You contributed equally to this work.
Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi, 830011, China: Yan-Bin Wang, Zhu-Hong You, Xiao Li, Tong-Hai Jiang, Li Cheng & Zhan-Heng Chen
University of Chinese Academy of Sciences, Beijing, 100049, China: Yan-Bin Wang & Zhan-Heng Chen

Contributions
YBW and ZHY conceived the algorithm, carried out the analyses, arranged the data sets, carried out the experiments, and wrote the manuscript. XL, THJ, LC and ZHC designed, performed and analyzed experiments. All authors read and approved the final manuscript. Correspondence to Zhu-Hong You or Xiao Li.

Wang, YB., You, ZH., Li, X. et al. Prediction of protein self-interactions using stacked long short-term memory from protein sequences information. BMC Syst Biol 12 (Suppl 8), 129 (2018). https://doi.org/10.1186/s12918-018-0647-x
MicroRNAs synthesis, mechanism, function, and recent clinical trials
Posted on September 30, 2019 by micr4174

The time of administration of each condition was similar to the recommended time of intake provided on the product label, while a recent study using GlycoCarn® for performance improvement had subjects consume this condition 90 minutes prior to exercise [12]. Our rationale for the change to 60 minutes prior to exercise was based on our inclusion of maltodextrin with the GlycoCarn® in the current design and the fact that the added carbohydrate may have enhanced uptake of the GlycoCarn®, as well as the fact that we wanted to maintain as much similarity in the treatment protocol as possible. Prior to using any of the above five conditions, all subjects underwent an identical test protocol using water only. This served as a baseline familiarization trial for the protocol, as we have previously noted that, even in well-trained men, such a protocol as used in the present design requires one session in order to fully familiarize subjects with the exercise movements and the volume of exercise (unpublished findings). Hence, a total of six sessions of the exercise protocol were performed by all subjects. It should be noted that the baseline condition, although presented within the results section for comparison purposes, was not used in the statistical analysis. Figure 1: Supplement 1 ingredients (per one serving). Figure 2: Supplement 2 ingredients (per one serving). Figure 3: Supplement 3 ingredients (per one serving). All conditions were provided in powder form and were fruit punch flavor. The placebo and GlycoCarn® conditions were produced and then packaged into individual servings by Tishcon Corporation (Westbury, NY). The three supplements used for comparison were purchased in containers from a local General Nutrition Center store. To ensure precision of dosing, each of these three conditions was weighed on a laboratory-grade balance prior to mixing in water. Again, two servings of each condition were used in this design. Our rationale for this was based on the fact that the majority of users of such supplements use 2–3 servings rather than one; in fact, the label instructions for use of these products indicate a serving size between 1 and 3 servings. Unlike GlycoCarn®, which is a single ingredient (mixed with maltodextrin in the present design), the supplements contained numerous ingredients (as can be seen in Figures 1, 2, and 3), some of which are stimulants.

Exercise Test Protocol
For all six test days, subjects reported to the lab following a minimum eight-hour overnight fast. After arrival at the lab, a blood sample was obtained following a 10-minute period of rest. Subjects then rated their perceived and subjective level of muscle "pump" in the upper body using a visual analog scale (0 = no pump; 10 = the most intense pump ever experienced).

M, 1 kb DNA ladder (Fermentas); M2, 1 kb DNA ladder (Roche). Figure 3: Schematic representation of new IS711 loci found in B. abortus field isolates, B12 (upper panel) and B16 and its related isolates (lower panel). The full-length 842 bp IS711 elements and their overlapping ORFs appear in grey. The Bru-RS1 element is shown as a hatched box. The duplicated TA at the consensus YTAR site is shown below. Small black arrows represent the positions of site-specific primers.
Numbers between primers indicate the molecular size of PCR products. The coordinates are based on the B. abortus 9-941 annotation. ORFs BruAb1_0734, BruAb1_0735 and BruAb1_0736 encode hypothetical proteins; lldP, L-lactate permease (BruAb1_0737); BruAb2_462 encodes a putative D-amino acid oxidase family protein; asnC, transcriptional regulator of the AsnC family (BruAb2_0459). The x-B12 and x-B16 IS711 sequences were nearly identical to that of IS711_1a and showed only changes in a few nucleotides (Figure 4A). On the basis of the high IS711 sequence similarity across sequenced B. abortus strains, we performed a cluster analysis between the IS711 copies of B. abortus 9-941 and the additional ones found in the 2308, RB51, B12 and B16 strains to gain insight into their origin (Figure 4B). Although, as expected, the analysis disclosed only low sequence dissimilarity, it suggested that the new copies might derive from IS711_1a. Since previous work has shown that the IS711_xa in the B. abortus alkB locus and the IS711_x-08 in strain 2308 are identical to IS711_1a [3], the inclusion of IS711_x-B12 and IS711_x-B16 in the same cluster supports the hypothesis that IS711_1a is more active than other copies in the B. abortus genome and can transpose into new sites, or even into sites shared with related species. Figure 4: Sequence analysis of IS711 copies found in B. abortus strains. (A) Sequence alignment (IS711_1a is from B. abortus 9-941); single nucleotide polymorphisms are shaded and numbered according to IS ORF coordinates. (B) Clustering of full-length B. abortus IS711 copies found in B. abortus 9-941 (the truncated 5a copy was excluded), the additional IS711 copy carried by B. abortus 2308 (x-08) and B. abortus RB51 (x-RB51, accession no. M94960), and the additional copies found in field isolates (x-B12, x-B16). IS transposition can disrupt genes and produce negative polar effects, but it can also cause beneficial changes by remodeling genomes through long-range recombination [15]. In the case of strain B12, it is uncertain whether the intergenic position of IS711 disturbs the expression of nearby genes. Most IS711 copies studied in detail (1a, 2a, 3a, 5a, 6a, xa and x-08) are also located within intergenic regions, showing that transposition is mostly viable when it occurs into neutral sites.

Recently, a unique alkane monooxygenase that belongs to the luciferase family was reported for G. thermodenitrificans [12]. Here, we report that two novel membrane proteins and superoxide dismutase, catalase, and acyl-CoA oxidase activities were dramatically increased in cells of G. thermoleovorans B23 when they were grown on alkanes. Induction of the above enzymatic activities upon alkane degradation has never been reported for bacteria, but it has been reported for yeasts such as C. tropicalis [13, 14]. This result suggests that the alkane degradation pathway is at least partly shared by eukaryotes and deep-subsurface thermophilic bacteria.

Results and Discussion
Microscopic observations
The shape of G. thermoleovorans B23 cells before and after cultivation in the presence of alkanes was compared using a scanning electron microscope (Fig. 1a, b). It was found that the cells became longer and thicker after 14-day growth on alkanes.
No such swelling was observed for cells grown in the absence of alkanes (picture not shown). This dynamic change of cell shape prompted us to analyze the cellular proteins produced in relation to alkane degradation. Figure 1: Scanning electron micrographs of strain B23 cells before (a) and after (b) cultivation on LBM supplemented with 0.1% (v/v) alkanes. Cells were grown without shaking at 70°C for 14 days. The bars indicate a size of 5 μm. The background of the cells is cellulose fibers of the filter paper on which the cells are adsorbed and fixed.

Induction of protein production by alkanes
Comparative analysis of proteins by SDS-PAGE showed that the production levels of at least three kinds of proteins were increased after 10-day cultivation with alkanes (Fig. 2a). These were 24 kDa, 21 kDa and 16 kDa proteins, designated P24, P21 and P16, respectively. Although a protein band at 40 kDa (P40) also seems to increase in Fig. 2a, the reproducibility of this phenomenon was not high (see Fig. 3) and therefore no further work was performed on this protein. When the cells were simultaneously exposed to alkanes in rich nutrient L-broth, where catabolite repression would probably have prevented the alkane degradation genes from being expressed, induction of these proteins was not observed. It is of interest that the increase in the production level of these three proteins became significant at the time when alkane degradation started (Fig. 2b). When we tested other hydrophobic substrates, no such induction was observed for palmitic acid, tributyrin, trimyristin, or dicyclopropylketone (DCPK), which is an inducer of alkane degradation gene expression in P. oleovorans. Figure 2: (a) Induction of P24, P21 and P16 production in G. thermoleovorans B23. Cells were cultivated in LBM supplemented with 0.1% (v/v) alkane mixtures for 14 days at 70°C. Total cell fractions were loaded on an SDS-12% polyacrylamide gel.

All organisms that encode a pfor also encode a Fd-dependent hydrogenase (H2ase), a bifurcating H2ase, and/or a NADH:Fd oxidoreductase (NFO), and are thus capable of reoxidizing the reduced Fd produced by PFOR. Conversely, G. thermoglucosidasius and B. cereus, which encode pdh but not pfor, do not encode enzymes capable of reoxidizing reduced Fd, and thus do not produce H2. While the presence of PDH allows for additional NADH production that could be used for ethanol production, the end-product profiles of G. thermoglucosidasius and B. cereus suggest that this NADH is preferentially reoxidized through lactate production rather than ethanol production. Pyruvate decarboxylase, a homotetrameric enzyme that catalyzes the decarboxylation of pyruvate to acetaldehyde, was not encoded by any of the species considered in this study. Given the requirement of reduced electron carriers for the production of ethanol/H2, the oxidative decarboxylation of pyruvate via PDH/PFOR is favorable over PFL for the production of these biofuels. Genome analyses revealed that a number of organisms, including P. furiosus, Ta. pseudethanolicus, Cal. subterraneus subsp. tencongensis, and all Caldicellulosiruptor and Thermotoga species considered, did not encode PFL. In each of these species, the production of formate has neither been detected nor reported. Unfortunately, many studies do not report formate production, despite the presence of PFL. This may be a consequence of the quantification methods used for volatile fatty acid detection. When formate is not produced, the total oxidation value of 2 CO2 per mole glucose (+4) must be balanced by the production of H2 and/or ethanol. Thus, the total molar reduction value of the reduced end-products (H2 + ethanol), termed RVEP, should be −4, provided that all carbon and electron flux is directed towards end-product formation and not biosynthesis. Indeed, RVEP values were usually greater than 3.5 in magnitude in organisms that do not encode pfl (T. maritima, Ca. saccharolyticus), and below 3.5 in those that do encode pfl (C. phytofermentans, C. thermocellum, G. thermoglucosidasius, and B. cereus; Table 2). In some studies, RVEP values were low due to a large amount of carbon and electron flux directed towards biosynthesis. In G. thermoglucosidasius and B. cereus, RVEP values for H2 plus ethanol ranged from 0.4 to 0.8 due to higher reported formate yields. The large differences in formate yields between organisms that encode pfl may be due to the regulation of pfl. In Escherichia coli [82, 83] and Streptococcus bovis [84, 85], pfl expression has been shown to be negatively regulated by AdhE. Thus, the presence of pfl alone is not a good indicator of formate yields.

Genes involved in acetyl-CoA catabolism, acetate production, and ethanol production
The acetyl-CoA/acetate/ethanol node represents the third major branch-point that dictates how carbon and electrons flow towards end-products (Figure 1).
This may be a consequence of the quantification methods used for volatile fatty acid detection. When formate is not produced, the total oxidation value of 2 CO2 per mole glucose (+4), must be balanced with the production of H2 and/or ethanol. Thus, the "total molar reduction values of reduced end-products (H2 + ethanol)", termed RV EP , should be −4, providing that all carbon and electron flux is directed towards end-product formation and not biosynthesis. Indeed, RV EP 's were usually greater than 3.5 in organisms that do not encode pfl (T. maritima, Ca. saccharolyticus), and below 3.5 in those that do encode pfl Cobimetinib mouse (C. phytofermentans, C. thermocellum, G. thermoglucosidasius, and B. cereus; Table 2). In some studies, RV EP 's were low due to a large amount of carbon and electron flux directed towards biosynthesis. In G. thermoglucosidasius and B. cereus RV EP 's of H2 plus ethanol ranged from 0.4 to 0.8 due to higher reported formate yields. The large differences in formate yields between organisms that encode pfl may be due to regulation of pfl. In Escherichia coli[82, 83] and Streptococcus bovis[84, 85], pfl expression has been shown to be negatively regulated by AdhE. Thus presence of pfl alone is not a good indicator of formate yields. Genes involved in acetyl-CoA catabolism, acetate production, and ethanol production The acetyl-CoA/acetate/ethanol node represents the third major branch-point that dictates how carbon and electrons flow towards end-products (Figure 1). Patients were excluded if, on the study day, they required hospit Patients were excluded if, on the study day, they required hospitalisation for an acute illness. Patients were otherwise eligible if they were outpatients in the community, electively admitted for diagnostic tests or were inpatients for physical rehabilitation. Age, sex, weight, height, dabigatran etexilate dose rates, co-prescribed medications and comorbidities were recorded. Using these data, we calculated each individual's CHA2DS2-VASc (1 point for each of Congestive heart failure, Hypertension, Diabetes mellitus, Vascular disease, Age 65–74 years, Female sex, 2 points for each of Age ≥75 years, Previous stroke) and HAS-BLED (1 point for each of Hypertension, Abnormal renal/liver function, Stroke, Bleeding history or predisposition, Labile international normalized ratio, Elderly, Drugs/alcohol concomitantly) scores, which estimate thromboembolic and haemorrhagic risks, respectively P5091 cell line [33, 34]. GFR was estimated for each individual using the four equations listed in Table 2. The results from the various CKD-EPI equations were converted from units of mL/min per 1.73 m2 to mL/min according to Eq. 1: $$ \textGFR_\textmL/min = \textGFR_\textmL/min\,per 1.73\,\textm^2 \times \frac\textBSA1.73\,\textm^2 $$ (1)where the body surface area of the individual (BSA) was calculated using Mosteller's equation [35–39]. 2.3 Sample Collection and Laboratory Analysis Each patient provided a set of venous blood samples 10–16 hours post-dose for SCH727965 in vivo measuring plasma creatinine and cystatin C concentrations, plasma free thyroxine and thyroid-stimulating hormone (TSH) concentrations (BD Vacutainer® lithium heparin tubes); Pictilisib Hemoclot® Thrombin Inhibitor times (HTI, Hyphen BioMed, Neuville-sur-Oise, France) (BD Vacutainer® citrate tubes); plasma dabigatran concentrations (BD Vacutainer® K2 ethylene diamine tetraacetic acid [EDTA] tubes). Blood cells from the EDTA tubes were used for genotyping. 
Serum creatinine and cystatin C concentrations were only measured at a single point in time for each participant, as the intra-individual variance (coefficient of variation, CV) of these biomarker concentrations has been reported to be around 7 % in clinically stable individuals [40]. Serum creatinine was measured using an Abbott® Aeroset analyser (Abbott Park, IL, USA) by the modified Jaffe reaction. This was IDMS-aligned for the period of this study and had an inter-day CV of <4.0 %. Serum cystatin C was measured using a particle-enhanced nephelometric immunoassay on a Behring Nephelometer II analyser (Siemens Diagnostics, Marburg, Germany), with a CV <4.5 % [41]. The use of a contemporary Siemens assay for cystatin C is consistent with the recommendations by Shlipak et al. [42].

In conclusion, in this study we demonstrated the expression of D2R, MGMT and VEGF in 197 pituitary adenomas of different histological subtypes, and analyzed the relationships between D2R, MGMT and VEGF expression and the association of D2R, MGMT and VEGF expression with PA clinical features including patient sex, tumor growth pattern, tumor recurrence, tumor size, tumor tissue texture and bromocriptine application. Our data revealed that PRL- and GH-secreting PAs exhibit high expression of D2R, responding to dopamine agonists; most PAs exhibit low expression of MGMT and high expression of VEGF, so TMZ or bevacizumab treatment could be applied under the premise of indications.

Acknowledgements We thank the Department of Pathology of Jinling Hospital, School of Medicine, Nanjing University, for technical support. This study was supported by the National Natural Science Foundation of China (No. 30801178).

References
1. Bianchi A, Valentini F, Iuorio R, Poggi M, Baldelli R, Passeri M, Giampietro A, Tartaglione L, Chiloiro S, Appetecchia M, Gargiulo P, Fabbri A, Toscano V, Pontecorvi A, De Marinis L: Long-term treatment of somatostatin analog-refractory growth hormone-secreting pituitary tumors with pegvisomant alone or combined with long-acting somatostatin analogs: a retrospective analysis of clinical practice and outcomes. J Exp Clin Cancer Res 2013, 32:40. doi:10.1186/1756-9966-32-40
2. Wan H, Chihiro O, Yuan S: MASEP gamma knife radiosurgery for secretory pituitary adenomas: experience in 347 consecutive cases. J Exp Clin Cancer Res 2009, 28:36. doi:10.1186/1756-9966-28-36
3. Mantovani A, Macrì A: Endocrine effects in the hazard assessment of drugs used in animal production. J Exp Clin Cancer Res 2002, 21:445–456
4. Colao A, Pivonello R, Di Somma C, Savastano S, Grasso LF, Lombardi G: Medical therapy of pituitary adenomas: effects on tumor shrinkage. Rev Endocr Metab Disord 2009, 10:111–123
5. Takeshita A, Inoshita N, Taguchi M, Okuda C, Fukuhara N, Oyama K, Ohashi K, Sano T, Takeuchi Y, Yamada S: High incidence of low O(6)-methylguanine DNA methyltransferase expression in invasive macroadenomas of Cushing's disease. Eur J Endocrinol 2009, 161:553–559
6. Ortiz LD, Syro LV, Scheithauer BW, Ersen A, Uribe H, Fadul CE, Rotondo F, Horvath E, Kovacs K: Anti-VEGF therapy in pituitary carcinoma. Pituitary 2012, 15:445–449
7. Fadul CE, Kominsky AL, Meyer LP, Kingman LS, Kinlaw WB, Rhodes CH, Eskey CJ, Simmons NE: Long-term response of pituitary carcinoma to temozolomide. Report of two cases. J Neurosurg 2006, 105:621–626
With the quartz tube, we were able to confine the evaporated material and maintain a uniform gas pressure in the vicinity of the evaporation source. A molybdenum boat was used as an evaporation source. For depositing the thin films, the glass substrate was pasted at the top of the tube. Film thickness was measured with a quartz crystal thickness monitor (FTM 7, BOC Edwards, West Sussex, UK). After loading the glass substrate and the source material, the chamber was evacuated to 10^-5 Torr. The inert gas (Ar) at 0.1 Torr pressure was injected into the sub-chamber, and the same gas pressure was maintained throughout the evaporation process. Once a thickness of 500 Å was attained, the evaporation source was covered with a shutter, which was operated from outside. After the process was over, the thin films were taken out of the chamber and were analyzed for structural and optical properties. X-ray diffraction patterns of thin films of a-SexTe100-x nanorods were obtained with the help of an Ultima-IV (Rigaku, Tokyo, Japan) diffractometer (λ = 1.5418 Å wavelength CuKα radiation at 40 kV accelerating voltage and 30 mA current), using parallel beam geometry with a multipurpose thin film attachment. X-ray diffraction (XRD) patterns for all the studied thin films were recorded in θ–2θ scans with a grazing incidence angle of 1°, an angular interval of 20° to 80°, a step size of 0.05°, and a count time of 2 s per step. Field emission scanning electron microscopic (FESEM) images of these thin films containing aligned nanorods were obtained using a Quanta FEI SEM (FEI Co., Hillsboro, OR, USA) operated at 30 kV. A 120-kV transmission electron microscope (TEM; JEM-1400, JEOL, Tokyo, Japan) was employed to study the microstructure of these aligned nanorods. Energy-dispersive spectroscopy (EDS) was employed to study the composition of these as-deposited films using EDAX (Ametek, Berwyn, PA, USA) operated at an accelerating voltage of 15 kV for 120 s. To study the optical properties of these samples, we deposited the a-SexTe100-x thin films on the glass substrates at room temperature using a modified thermal evaporation system. The thickness of the films was kept fixed at 500 Å, which was measured using the quartz crystal thickness monitor (FTM 7, BOC Edwards). The experimental data on optical absorption, reflection, and transmission were recorded using a computer-controlled Jasco V-500 UV/Vis/NIR spectrophotometer (Jasco Analytical Instruments, Easton, MD, USA). It is well known that we normally measure optical density with the instrument and divide this optical density by the thickness of the film to get the value of the absorption coefficient. To neutralize the absorbance of the glass, we used the glass substrate as a reference, as our thin films were deposited on the glass substrate. The optical absorption, reflection, and transmission were recorded as a function of incident photon energy for a wavelength range of 400 to 900 nm.
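As a minimal illustration of the absorption-coefficient calculation described above (optical density divided by film thickness), the following Python sketch uses hypothetical spectrophotometer readings; the numbers are invented for demonstration, and note that some conventions additionally multiply by ln 10 to convert base-10 optical density to a base-e coefficient.

```python
import numpy as np

# Hypothetical absorbance spectrum (optical density, dimensionless) vs wavelength (nm)
wavelength_nm = np.array([400.0, 500.0, 600.0, 700.0, 800.0, 900.0])
optical_density = np.array([1.20, 0.95, 0.70, 0.45, 0.30, 0.20])

thickness_cm = 500e-8  # film thickness: 500 Angstrom = 500e-8 cm

# Absorption coefficient (cm^-1) exactly as described in the text:
# optical density divided by film thickness.
alpha = optical_density / thickness_cm

for lam, a in zip(wavelength_nm, alpha):
    print(f"{lam:5.0f} nm : alpha = {a:.2e} cm^-1")
```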
For these different gases, we examined the etch rate and pattern transfer anisotropy to get all the parameters for obtaining the designed pattern.

PAA mask formation

The PAA thin films used in this work were formed in oxalic acid aqueous solution (5 wt.%) at a constant voltage of 40 V. The initial Al thickness was 1.3 μm, deposited by e-gun evaporation. Some of the samples were subjected to an annealing step before anodization (at 500°C for 30 min). In all cases, the anodization was performed in two steps and under the same experimental conditions for all samples. The final PAA thickness was different from one sample to another, depending on the thickness of the sacrificial layer formed during the first anodization step. Three layer thicknesses were used: 390, 400, and 560 nm. The sample characteristics are summarized in Table 1.

Table 1 Characteristics of the PAA layers in the three different samples used in this work
Sample      PAA thickness (nm)   Pore size (nm) after pore widening for 40 min   Annealing
Sample 1    390                  35–45                                           No
Sample 2    560                  35–55                                           Yes
Sample 3    400                  35–45                                           Yes

All samples were subjected to pore widening and removal of the barrier layer from the pore base to get vertical pores that reach the Si substrate. An example of an SEM image of the surface of an optimized PAA film used in this work is depicted in Figure 2. In this sample, the Al film was not annealed before anodization. The average pore size was 45 nm, and the PAA film thickness was 390 nm.

Figure 2 High magnification top view SEM image of sample 1. The PAA film thickness of sample 1 is 390 nm, and the average pore diameter is about 45 nm.

Reactive ion etching of Si through the PAA mask

The mechanisms involved in reactive ion etching combine physical (sputtering) and chemical etching. The gases or mixture of gases used and the RIE power and gas pressure are critical parameters that determine the etch rate. The etch rate is also different on large Si surface areas compared to the etch rate through a mask with nanometric openings. In this work, the PAA mask used showed hexagonally arranged pores with sizes in the range of 30 to 50 nm and an interpore distance of around 30 nm. Three different gases or gas mixtures were used: SF6 (25 sccm), a mixture of SF6/O2 (25 sccm/2.8 sccm), and a mixture of SF6/CHF3 (25 sccm/37.5 sccm). In the first case, the etching of Si is known to be isotropic, while in the last two cases, it is more or less anisotropic. Separate experiments were performed for each gas mixture. In all cases, we used three different etching times, namely, 20, 40, and 60 s. The conditions used for the RIE were as follows: power 400 W and gas pressure 10 mTorr. An example of SEM images from sample 1 after RIE for 20 s in the three different gases/gas mixtures is shown in Figure 3.

Research carried out in Europe and Asia has begun to address this question with various culture-based studies. Researchers from Taiwan, Finland, Sweden, Denmark and the Netherlands have examined various dog populations and have been able to culture C. jejuni, C. coli, C. upsaliensis, C. helveticus, C. lari and other Campylobacter spp. from canine fecal samples using various growth conditions and media [13–17]. Reported carriage rates of Campylobacter spp. in domestic dogs ranged from 2.7% to 100% of dogs tested [13, 16], with some studies reporting isolation of multiple species of Campylobacter from a single dog [15, 17]. A major influence on our understanding of Campylobacter ecology in dogs has been our reliance on culture-based methods.
Various selective media have been used for Campylobacter isolation [18], with most relying on a cocktail of antibiotics in a rich basal medium to selectively isolate Campylobacter. However, it has been recognized that Campylobacter species other than C. coli, C. jejuni, and C. lari are often sensitive to the antibiotics in these media [19]. Filter-based methods, in combination with nonselective media, have been shown to result in the isolation of a greater diversity of Campylobacter species [20], but these approaches are more labour-intensive, less selective and prone to overgrowth of fecal contaminants [19]. As our understanding of campylobacters, both pathogenic and non-pathogenic, expands beyond C. jejuni and C. coli, so must our detection methods. The goal of this study was to take a culture-independent approach to the profiling of Campylobacter species in domestic pet dogs in an effort to evaluate this zoonotic reservoir and describe changes in fecal Campylobacter populations associated with diarrhea. Established species-specific quantitative PCR (qPCR) assays targeting the 60 kDa chaperonin (cpn60) gene of C. coli, C. concisus, C. curvus, C. fetus, C. gracilis, C. helveticus, C. hyointestinalis, C. jejuni, C. lari, C. mucosalis, C. rectus, C. showae, C. sputorum, and C. upsaliensis [21] were used to determine the Campylobacter profiles of 70 healthy dogs and 65 dogs with diarrhea. This study represents the largest culture-independent, quantitative investigation of Campylobacter in pet dogs conducted to date and is one of only a few studies to focus on North American animals.

Results

Campylobacter profiles from healthy and diarrheic dog fecal samples

Total bacterial DNA was extracted from the feces of 70 healthy dogs (from 52 households) and 65 dogs with diarrhea (from 60 households) (Additional file 1: Table S1) and tested for the presence of 14 Campylobacter species. Each sample was tested for an individual species in four reactions (duplicate reactions within an assay, and each assay run twice). If a sample did not yield three or four detectable test values (above the assay cut-off of 10^3 organisms/g of feces [21]), the sample was defined as undetectable for that test.
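A minimal sketch of the detection rule described above, with hypothetical replicate values; only the 3-of-4 criterion and the 10^3 organisms/g cut-off are taken from the text.

```python
DETECTION_CUTOFF = 1e3  # assay cut-off, organisms per gram of feces

def species_detected(quantities) -> bool:
    """Apply the rule from the text: a species is called detected only if
    at least 3 of the 4 replicate qPCR values exceed the cut-off."""
    if len(quantities) != 4:
        raise ValueError("expected four replicate measurements per sample")
    n_positive = sum(q > DETECTION_CUTOFF for q in quantities)
    return n_positive >= 3

# Hypothetical replicate quantities (organisms/g) for one dog and one species
print(species_detected([4.2e3, 3.9e3, 5.1e3, 8.0e2]))  # True  (3 of 4 above cut-off)
print(species_detected([4.2e3, 9.0e2, 5.1e3, 8.0e2]))  # False (only 2 of 4)
```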
A limitation of this study is the low response rate. Those who were invited and agreed to participate returned their informed consent form or agreed by email or phone. This approach may have attracted the most ideal workers, although it may also have attracted the least healthy fire fighters. In the Netherlands, WHS in this sector was performed on a voluntary basis. Therefore, the study population reported herein is thought to be a reflection of the future participants in WHS. For the determination of the odds ratios, it is more important to have no specific selection within one of the subgroups in the comparison, for example in professionals or volunteers, because that could cause a change in the odds ratio. We found no reason to assume that specific selection within one of the subgroups occurred. From these results, it can be concluded that certain subgroups (gender, professionalism and age) of fire fighters are more prone to at least one specific work-related diminished health requirement. Therefore, specific parts of the WHS can be given more attention in high-risk groups. To determine the additional value of using the high-risk group approach for fire fighters, the long-term benefits of using the high-risk and general approaches to keep fire fighters healthy and with good performance in their jobs should be studied in the future.

Acknowledgments We thank the fire departments and fire fighters for their cooperation in this study. This work was supported by a grant from 'A + O fonds Gemeenten'.

Conflict of interest The authors declare that they have no conflict of interest.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

References
Åstrand P, Rodahl K, Dahl H, Strømme SB (2003) Textbook of work physiology. Physiological bases of exercise. Human Kinetics, Champaign
Cooney M, Dudina A, Whincup P, Capewell S, Menotti A, Jousilahti P et al (2009) Re-evaluating the Rose approach: comparative benefits of the population and high-risk preventive strategies. Eur J Cardiovasc Prev Rehabil 16:541–549
de Beurs E, Zitman F (2005) Brief symptom inventory (BSI): reliability and validity of a practical alternative for SCL-90 [In Dutch: de brief symptom inventory (BSI): De betrouwbaarheid en validiteit van een handzaam alternatief voor de SCL-90]. Leiden, LUMC: department Psychiatry; Report No. 8
Eekhof JAH, van Weert HCPM, Spies TH, Hufman PW, Hoftijzer NP, Mul M, Meulenberg F, Burgers JS (2002) Dutch society of general practitioners standard for hearing impairment (In Dutch: NHG-standard slechthorendheid)
Graham I, Atar D, Borch-Johnsen K, Boysen G, Burell G, Cifkova R et al (2007) European guidelines on cardiovascular disease prevention in clinical practice: executive summary.
Killing form

2010 Mathematics Subject Classification: Primary: 17B

The Killing form is a certain bilinear form on a finite-dimensional Lie algebra, introduced by W. Killing [Ki]. Let $\def\f#1{\mathfrak{#1}}\f G$ be a finite-dimensional Lie algebra over a field $k$. By the Killing form on $\f G$ is meant the bilinear form
$$\def\tr{\textrm{tr}\;}\def\ad{\textrm{ad}\;}B(x,y) = \tr(\ad x \cdot \ad y),\quad x,y\in \f G$$
where $\tr$ denotes the trace of a linear operator, and $\ad x$ is the image of $x$ under the adjoint representation of $\f G$ (cf. also Adjoint representation of a Lie group), i.e. the linear operator on the vector space $\f G$ defined by the rule $z\mapsto [z,x]$, where $[\;,\;]$ is the commutation operator in the Lie algebra $\f G$. The Killing form is symmetric. The operators $\ad x$, $x\in \f G$, are skew-symmetric with respect to the Killing form, that is,
$$B([x,y],z) = B(x,[y,z])\quad \textrm{ for all } x,y,z\in \f G.$$
If $\f G_0$ is an ideal of $\f G$, then the restriction of the Killing form to $\f G_0$ is the same as the Killing form of $\f G_0$. Each commutative ideal $\f G_0$ is contained in the kernel of the Killing form. If the Killing form is non-degenerate, then the algebra $\f G$ is semi-simple (cf. Lie algebra, semi-simple).

Suppose that the characteristic of the field $k$ is 0. Then the radical of $\f G$ is the same as the orthocomplement with respect to the Killing form of the derived subalgebra $\f G' = [\f G,\f G]$. The algebra $\f G$ is solvable (cf. Lie algebra, solvable) if and only if $\f G\perp \f G'$, i.e. when $B([x,y],z) = 0$ for all $x,y,z\in \f G$ (Cartan's solvability criterion). If $\f G$ is nilpotent (cf. Lie algebra, nilpotent), then $B(x,y) = 0$ for all $x,y\in\f G$. The algebra $\f G$ is semi-simple if and only if the Killing form is non-degenerate (Cartan's semi-simplicity criterion). Every complex semi-simple Lie algebra contains a real form $\Gamma$ (the compact Weyl form, see Complexification of a Lie algebra) on which the Killing form is negative definite. The Killing form is a key tool in the Killing–Cartan classification of semi-simple Lie algebras over fields $k$ of characteristic 0. If $\textrm{char}\; k \ne 0$, the Killing form on a semi-simple Lie algebra may be degenerate. The Killing form is also called the Cartan–Killing form.

Let $X_1,\dots,X_n$ be a basis for the Lie algebra $\f G$, and let the corresponding structure constants be $\def\g{\gamma}\g_{ij}^k$, so that $[X_i,X_j] = \g_{ij}^k X_k$ (summation convention). Then in terms of these structure constants the Killing form is given by
$$B(X_a,X_b) = g_{ab} = \g_{ac}^d\g_{bd}^c$$
The metric (tensor) $g_{ab}$ is called the Cartan metric, especially in the theoretical physics literature. Using $g_{ab}$ one can lower indices (cf. Tensor on a vector space) to obtain "structure constants" $\g_{abc} = g_{da} \g_{bc}^d$ which are completely anti-symmetric in their indices. (This is a direct consequence of the Jacobi identity and is equivalent to the anti-symmetry of the operator $\ad y$ with respect to $B(x,z)$; cf. above.)
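As a worked example of the trace-form definition above, the following Python sketch computes the Killing form of $\mathfrak{sl}_2$ numerically from its adjoint operators; the basis and helper names are chosen for illustration.

```python
import numpy as np

# Basis of sl(2): e, h, f with [h, e] = 2e, [h, f] = -2f, [e, f] = h.
e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def bracket(x, y):
    return x @ y - y @ x

def coords(m):
    """Coordinates of a traceless 2x2 matrix m = a*e + b*h + c*f in the basis (e, h, f)."""
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad(x):
    """Matrix of ad x = [x, .] in the basis (e, h, f)."""
    return np.column_stack([coords(bracket(x, b)) for b in basis])

# Killing form B(x, y) = tr(ad x . ad y), evaluated on all basis pairs
B = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
print(B)
# [[0. 0. 4.]
#  [0. 8. 0.]
#  [4. 0. 0.]]  -> non-degenerate, so sl(2) is semi-simple by Cartan's criterion
```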
Kaplansky, "Lie algebras and locally compact groups", Chicago Univ. Press (1971) MR0276398 Zbl 0223.17001 [Ki] W. Killing, "Die Zusammensetzung der stetigen endlichen Transformationsgruppen I" Math. Ann., 31 (1888) pp. 252–290 JFM Zbl 20.0368.03 [Ki2] W. Killing, "Die Zusammensetzung der stetigen endlichen Transformationsgruppen II" Math. Ann., 33 (1889) pp. 1–48 JFM Zbl 20.0368.03 [Ki3] W. Killing, "Die Zusammensetzung der stetigen endlichen Transformationsgruppen III" Math. Ann., 34 (1889) pp. 57–122 JFM Zbl 21.0376.01 [Ki4] W. Killing, "Die Zusammensetzung der stetigen endlichen Transformationsgruppen IV" Math. Ann., 36 (1890) pp. 161–189 [Na] M.A. Naimark, "Theory of group representations", Springer (1982) (Translated from Russian) MR0793377 Zbl 0484.22018 [Va] V.S. Varadarajan, "Lie groups, Lie algebras and their representations", Springer, reprint (1984) MR0746308 Zbl 0955.22500 Killing form. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Killing_form&oldid=42303 This article was adapted from an original article by D.P. Zhelobenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article Retrieved from "https://encyclopediaofmath.org/index.php?title=Killing_form&oldid=42303" Nonassociative rings and algebras
Impact of air–sea coupling on the probability of occurrence of heat waves in Japan

Akira Hasegawa, Yukiko Imada, Hideo Shiogama, Masato Mori, Hiroaki Tatebe & Masahiro Watanabe

In extreme event attribution, which aims to answer whether and to what extent a particular extreme weather event can be attributed to global warming, the probability of an event is generally estimated through large ensemble simulations using an atmospheric general circulation model (AGCM). In islands such as Japan, it has been considered that surface air temperature (SAT) can be significantly affected by the surrounding sea surface temperature (SST), which in turn is mostly affected by atmospheric circulation at mid- and high-latitudes. Therefore, the absence of SST responses to atmospheric variability in AGCMs impacts the estimation of the occurrence of extreme events, such as heat waves in Japan. In this study, we examined the impact of air–sea coupling on the probability of occurrence of the severe heat waves that occurred in Japan in the summer of 2010 by analyzing the probability differences obtained from AGCM and coupled general circulation model (CGCM) large-ensemble experiments. The observed ocean temperature, salinity, and sea ice were assimilated in the 100-member CGCM experiments, while the observed SST and sea ice were assigned as boundary conditions in the 100-member AGCM experiments. The SAT around Japan in the northern summer is largely related to the Bonin high, whose interannual variability is largely affected by the Silk Road and Pacific-Japan (PJ) pattern teleconnections in the real world. The SAT anomaly over Japan was related to the pressure variability due to the Silk Road and PJ patterns in the CGCM experiment. By contrast, the SAT over Japan simulated by the AGCM was less sensitive to such pressure variability, and the SAT ensemble spread became narrower in the AGCM. The results suggest that the probability of occurrence of the 2010 heat wave in Japan would tend to be underestimated by the AGCM ensemble compared to the CGCM ensemble, provided that the ensemble averages of the SAT anomalies were equal between the CGCM and AGCM experiments. This study raises the issue that the absence of SST responses to atmospheric variability in AGCMs can critically impact the estimation of extreme event probability, particularly in mid-latitude islands such as Japan.

Extreme event attribution (EA) has been developed over the past decade to address questions regarding the impact of global warming on specific extreme weather events. Various types of approaches have been used to conduct EA, including observational analyses, model analyses, and multi-method studies (Stott et al. 2016; Easterling et al. 2016; Otto 2017). Shepherd (2016) highlighted two aspects of attribution questions: how anthropogenic climate change has altered the probability of occurrence of individual extreme events (risk-based approach), and how the severity (magnitude or intensity) of a particular event has changed due to climate change (storyline approach). Generally, the risk-based approach is based on two model experiments: one for current conditions and one for a counterfactual world without anthropogenic emissions (with pre-industrial concentrations of CO2 and other greenhouse gases, as well as aerosols), as described by Allen (2003).
By preparing large ensembles to simulate the current climate and a counterfactual climate without anthropogenic climate change, no presumption about the shape of the extremal distribution is required to compare the probabilities. The selection of a numerical model depends on the most relevant type of natural variability for the case analyzed. In many cases, an atmospheric general circulation model (AGCM) has been used for EA studies because the timescale of the most concerning extreme weather events is usually within 1 month. Compared to this timescale, dominant intrinsic oceanic or sea ice variability has a considerably longer timescale, represented by interannual, decadal, and multi-decadal variability. Thus, the state of the ocean or sea ice during an extreme event can be considered a boundary condition for the AGCM, representing the background climate condition of the extreme event in focus. However, in mid- and high-latitude oceans, the atmospheric circulation is rather a driver of the ocean circulation and considerably affects surface heat fluxes on shorter time scales. Prescribing the ocean state in an atmospheric model leads to unrealistic heat fluxes at the air–sea interface (Yu and Mechoso 1999) and to the absence of important feedback processes (Kitoh and Arakawa 1999). These issues raise the question whether air–sea interaction critically affects the estimation of event probability in the EA approach, particularly for islands surrounded by ocean, such as Japan. Fischer et al. (2018) showed that SST-forced models underestimate the temperature variability, whereas Uhe et al. (2016) indicated that the effect of air–sea interaction on extreme events is considerably smaller. Therefore, whether air–sea coupling critically affects an EA framework remains unclear; it also appears to depend on the location of focus.

In this study, we propose a new framework to estimate the probability of extreme events, based on a conceptually more realistic approach that considers both the longer time-scale background situation and the shorter air–sea coupling processes around mid-latitude small islands. It should be noted that the true probabilistic distribution of an observed event remains unknown because the observation is only a single realization among infinite possibilities. Thus, a numerical simulation is needed to deduce the distribution. Figure 1 shows a conceptual illustration explaining the composing elements of the variance of a probability density function (PDF) for a certain extreme index. When we use long-term Coupled Model Intercomparison Project (CMIP)-type historical simulations (see Section 2) over some decades to draw a PDF (Fig. 1a, assuming a single model), its variance is composed of the intrinsic natural variability of the ocean and atmosphere and the climatological tendency in response to external forcings, such as greenhouse gases and aerosols. When a large ensemble of CMIP-type historical simulations is available, we can draw a PDF using only the data of a certain period (e.g., a certain month of a certain year) in which an extreme event occurred (Fig. 1b). In this case, the external forcings are fixed to a specific level, and the variance of the PDF is composed of atmospheric and oceanic (sea ice) natural variabilities.
Figure 1 Schematic illustration of the probability density functions for SAT and SST, compared between a a CGCM historical experiment over some decades, b a CGCM historical experiment for a certain year, c a CGCM assimilation experiment, and d an AMIP-type experiment for a certain year

By contrast, when we use AGCM simulations with fixed boundary conditions for the observed ocean and sea ice, focusing on the period of a certain extreme event (Fig. 1d), the variance of the PDF is composed only of atmospheric intrinsic natural variability; this configuration has been the most widely used in EA studies. On shorter time scales, which can be perceived by humans, probabilistic behaviors that can trigger extreme weather are induced by atmospheric intrinsic variability, whereas the effect of fast oceanic responses to the atmospheric stochastic variability should also be considered as a component of the probabilistic behaviors in mid-latitude coastal regions. Given a fixed SST, the coastal air temperature might have a narrower variance. Thus, the conventional approach using an AGCM might be inadequate to consider those probabilistic behaviors of the ocean. In this study, we tested the estimation of extreme event probabilities using large-ensemble simulations of ocean data assimilation with a coupled general circulation model (CGCM), which can consider both short-term oceanic responses to atmospheric variability and the prescribed long-term oceanic intrinsic variability (Fig. 1c). By comparing event probabilities between large ensembles of AGCM and assimilated CGCM runs, we evaluated the impact of air–sea coupling on the shape of a PDF, particularly on its tail. The difference would be critical to the estimation of a risk ratio in EA studies. In this paper, we do not discuss the impact of air–sea coupling on the risk ratio but focus on the shape of the PDFs, thus using only factual simulations.

As an example, we compared the probability of extreme events between large ensemble simulations of AGCM and CGCM, focusing on the extreme warm event that occurred in 2010 in the Japanese islands, which are surrounded by the ocean. The surface air temperature (SAT) around Japan in the northern summer is largely related to the Bonin high. Two types of teleconnection patterns, the Pacific-Japan (PJ) pattern (Nitta 1987; Kosaka and Nakamura 2010) and the Silk Road pattern (Enomoto et al. 2003; Enomoto 2004), largely affect the interannual variability of the Bonin high (e.g., Wakabayashi and Kawamura 2004; Yasunaka and Hanawa 2006). Thus, we focused on the representation of the causal relationship between SAT variability around Japan and the Silk Road and PJ teleconnections in the AGCM and CGCM simulations. The data, methods, and experimental design are described in the following section. The impact of air–sea coupling on the probability of the Japanese heat wave event in 2010 is described in Section 3. The different contributions of the PJ and Silk Road teleconnections between the AGCM and CGCM simulations are discussed in Section 4, and the conclusion is presented in Section 5.

Methods and experimental design

We used the sixth version of the Model for Interdisciplinary Research on Climate (MIROC6), described in more detail by Tatebe et al. (2019). MIROC6 is composed of a T85 L81 atmosphere and a nominally 1° L62 ocean. It is one of the CMIP phase 6 (CMIP6; Eyring et al. 2016) models. To estimate the probabilities of extreme events, we prepared two types of large ensemble simulations with the atmospheric component of MIROC6 (MIROC6-AGCM) and the coupled version of MIROC6 (MIROC6-CGCM).
Using MIROC6-AGCM, a 20-member ensemble of the Atmospheric Model Intercomparison Project (AMIP)-type experiment was conducted for 1979–2014, driven by the observed SST, sea ice concentration (COBE-SST2; Hirahara et al. 2014), and historical anthropogenic and natural external forcing agents. The probability of the Japanese heat wave was subsequently evaluated by increasing the ensemble size to 100 only for 2010, which corresponds to a factual simulation of a traditional EA framework. To reproduce seasonal and interannual variabilities in the CGCM framework, we adopted 10-member full-field assimilation experiments, in which observed ocean temperature, salinity, and sea ice were assimilated for 1951–2014; these were conducted as an initialization process for the CMIP6 Decadal Climate Prediction Project (DCPP; Kataoka et al. 2020, under review). To increase the number of ensemble members from 10 to 100, we reran the full-field assimilation runs from February to December 2010 with 100 different combinations of atmosphere and ocean restart files. This full-field assimilation experiment is termed CAssm. For reference, CGCM runs without any data assimilation were also analyzed: a 50-member ensemble of the CMIP6 historical experiment (hereafter denoted as CHist) was available from 1851 to 2014 as a no-assimilation CGCM case. The anthropogenic and natural external forcing agents are common among the AMIP, CHist, and CAssm experiments. The SST and sea ice concentrations assimilated in CAssm are the same as the boundary conditions in the AMIP runs. For verification, we used COBE-SST2 (Hirahara et al. 2014), the Global Precipitation Climatology Project dataset (GPCP; Adler et al. 2003), and the Japanese 55-year reanalysis dataset (JRA-55; Kobayashi et al. 2015).

We defined the SST around Japan (SSTJP) as the area average of SST in the longitude–latitude box (125–150° E, 25–50° N). The SAT averaged over the land of the Japanese islands is denoted as SATJP. The Silk Road pattern index is defined as the second mode of an empirical orthogonal function (EOF) analysis of detrended 200 hPa geopotential height in the box (30–50° N, 30–130° E) for each 30-year ensemble simulation and for JRA-55. The first EOF mode corresponded to an Arctic Oscillation-like mode (not shown). The Pacific-Japan pattern index is defined as the second mode of an EOF analysis of 850 hPa relative vorticity in the box (0–60° N, 100–160° E). We assumed a multiple linear regression of the SATJP anomaly (y) as the response variable onto each teleconnection pattern, as follows:
$$ y = \beta_0 + \beta_{\mathrm{PJ}} \, x_{\mathrm{PJ}} + \beta_{\mathrm{SR}} \, x_{\mathrm{SR}} + \varepsilon $$
where $x_{\mathrm{PJ}}$ and $x_{\mathrm{SR}}$ are the PJ and Silk Road pattern indices defined above, $\beta_{\mathrm{PJ}}$ and $\beta_{\mathrm{SR}}$ are the multiple regression coefficients for the PJ and Silk Road patterns, and $\beta_0$ and $\varepsilon$ are the intercept and error terms, respectively.
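A minimal sketch of how such indices and regression coefficients can be computed, using synthetic data in place of the model output; the array shapes, variable names, and the use of a plain SVD (without the area weighting one would apply to real gridded data) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- EOF analysis (sketch): z200 anomalies arranged as (time, space) ---
nt, nspace = 350, 500                  # e.g., 350 ensemble-years, grid points in the EOF box
z200 = rng.standard_normal((nt, nspace))
z200 -= z200.mean(axis=0)              # anomalies
u, s, vt = np.linalg.svd(z200, full_matrices=False)
pc2 = u[:, 1] * s[1]                   # second principal component ~ Silk Road index
pc2 /= pc2.std()                       # normalize by its standard deviation

# A PJ-like index would be obtained in the same way from 850 hPa relative
# vorticity; a random placeholder is used here.
pc_pj = rng.standard_normal(nt)

# --- Multiple linear regression: y = b0 + b_PJ * x_PJ + b_SR * x_SR + eps ---
sat_jp = 0.3 * pc_pj + 0.1 * pc2 + 0.5 * rng.standard_normal(nt)  # synthetic SAT*JP
X = np.column_stack([np.ones(nt), pc_pj, pc2])
beta, *_ = np.linalg.lstsq(X, sat_jp, rcond=None)
print("b0, b_PJ, b_SR =", np.round(beta, 3))
```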
Thirty-year simulations

Figure 2a–c shows the time series of SSTJP anomalies in August from 1981 to 2010 for the AMIP, CAssm, and CHist experiments, respectively. The black line in each panel indicates the SSTJP anomaly in the COBE-SST2 dataset, whereas the colored thick lines represent ensemble averages, and the colored thin lines represent the 20-, 10-, and 50-member ensembles for the AMIP, CAssm, and CHist experiments, respectively. With regard to the ensemble mean response, as shown in Table 1, CAssm shows a relatively high reproducibility of the observed interannual SSTJP variability (R ~ 0.74) due to the ocean temperature assimilation, compared to CHist (R ~ 0.30) without assimilation. Table 1 also shows that the averages and standard deviations of SSTJP in both CAssm and CHist are relatively lower than those in COBE-SST2, which is used as a boundary condition for JRA-55 and the AMIP experiment.

Figure 2 Time series of SST anomalies around the Japanese islands (SSTJP, left) and SAT anomalies over land of the Japanese islands (SATJP, right) for August 1981 to 2010, obtained from a, d AMIP, b, e CAssm, and c, f CHist experiments. Black lines of SSTJP and SATJP anomalies mean those in COBE-SST2 and JRA-55, respectively. Colored thin and thick lines show the anomalies of each ensemble member and the ensemble average, respectively

Table 1 Averages and standard deviations of SSTJP for August 1981 to 2010 and correlation coefficients of SSTJP with COBE-SST2 for August over the 30 years. Numbers in parentheses represent the number of samples for the 30 years

Figure 2d–f shows the time series of SATJP anomalies in August from 1981 to 2010 for the AMIP, CAssm, and CHist experiments, respectively. The black lines indicate the SATJP anomaly in the JRA-55 reanalysis dataset (Kobayashi et al. 2015). The number of ensemble members in AMIP (20) was twice that of CAssm (10). Nevertheless, SATJP in the AMIP ensemble shows a relatively smaller variance compared to that in the CAssm ensemble, indicating relatively small noise in the AMIP ensemble. The ensemble mean response of SATJP in AMIP showed interannual variability similar to that found in the JRA-55 dataset, as shown in Fig. 2d, whereas CAssm showed a relatively weak coherency with JRA-55, as shown in Fig. 2e. Table 2 shows that the correlation coefficients of SATJP with JRA-55 are approximately 0.65 and 0.27 for AMIP and CAssm, respectively. Based on the comparison of the correlation coefficients, AMIP seems to reproduce the observations better. However, to deduce the PDF of extreme events, it can be said that AMIP overestimates the signal-to-noise ratio (underestimates the noise) because of the absence of the intraseasonal responses of the ocean to the atmospheric internal variability, which can affect the probability of SATJP extremes. Because of the absence of the oceanic intraseasonal responses to the atmosphere, SATJP in AMIP appears to be strongly constrained around the observed SST field. Hence, we deduced the range of noise created by the intraseasonal internal variabilities of both the atmosphere and the ocean to estimate the probability of extreme events from large-ensemble simulations. The absence of noise induced by the ocean could lead to an overestimation of the signal-to-noise ratio, resulting in a larger correlation in AMIP. Thus, we assume that CAssm, with relatively larger noise and a smaller correlation, would capture a more realistic probabilistic distribution. By contrast, the interannual variability simulated by CHist is significantly different from that in JRA-55 (R ~ 0.10) because of the model's own internal variability, which is free from the long-term observed variation in the ocean and sea ice.
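The SSTJP box average and its correlation with observations can be computed along the following lines; this is a sketch with synthetic stand-in fields, assuming a regular 1° grid and cosine-latitude weighting, and the field values are placeholders rather than model output.

```python
import numpy as np

def area_average(field, lat, lon, lat_range=(25.0, 50.0), lon_range=(125.0, 150.0)):
    """Cosine-latitude weighted average of field(time, lat, lon) over a box
    (defaults: the SSTJP box of 125-150E, 25-50N used in the text)."""
    jj = (lat >= lat_range[0]) & (lat <= lat_range[1])
    ii = (lon >= lon_range[0]) & (lon <= lon_range[1])
    w = np.cos(np.deg2rad(lat[jj]))[:, None] * np.ones(ii.sum())[None, :]
    sub = field[:, jj][:, :, ii]
    return (sub * w).sum(axis=(1, 2)) / w.sum()

# Synthetic stand-ins: 30 Augusts of ensemble-mean model SST and observed SST
rng = np.random.default_rng(1)
lat = np.arange(-89.5, 90.0, 1.0)
lon = np.arange(0.5, 360.0, 1.0)
obs = rng.standard_normal((30, lat.size, lon.size))
model = obs + 0.8 * rng.standard_normal(obs.shape)   # imperfect reproduction

r = np.corrcoef(area_average(model, lat, lon), area_average(obs, lat, lon))[0, 1]
print(f"interannual correlation of SSTJP: {r:.2f}")
```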
Table 2 Averages and standard deviations of SATJP for August 1981–2010 and correlation coefficients of SATJP with JRA-55 for August over the 30 years. Numbers in parentheses represent the number of samples for the 30 years

Figure 3a–i shows the SAT, sea level pressure (slp), and 200 hPa geopotential height (z200) anomalies for August 2010 (reference period of climatology: 1981–2010) in JRA-55, AMIP, and CAssm, respectively. Both the observed and simulated SAT showed a positive anomaly around Japan in August 2010. In this month, the Japanese islands were covered by positive pressure anomalies at the lower (slp) and upper (z200) levels (Fig. 3d–i). In the ensemble mean fields (Fig. 3e, f, h, i), the simulated slp anomalies near Japan are displaced to the south, and the wave trains along the subtropical jet are unclear. This is because noise components cancel in the ensemble mean, whereas the observed field includes atmospheric noise. When we focus on one specific member whose PJ and Silk Road indices are similar to those of the observation, the circulation patterns are comparable with JRA-55 (not shown).

Figure 3 Anomaly maps of the a–c SAT, d–f sea level pressure, g–i 200-hPa geopotential height, and j–l precipitation for August 2010 obtained from JRA-55 and GPCP (left column), MIROC6 AMIP (center column), and CAssm (right column) experiments, respectively

Figure 3j shows that GPCP had a relatively low precipitation anomaly around Japan in August 2010. In Fig. 3k and l, low precipitation anomalies over the Japanese islands are also shown in AMIP and CAssm, respectively, but they are unclear. Both AMIP and CAssm represent the dry region on the east side of the Philippines, associated with the cold SST region (not shown) in Fig. 3k and l. The contrast between the dry and wet regions of the maritime continent is more enhanced in AMIP than in CAssm and JRA-55. The correlation coefficient between SST and precipitation in August 1981–2010 is mostly positive over the western North Pacific in AMIP, and negative in CAssm and the observation (not shown), as indicated by Wang et al. (2005). These differences between AMIP and CAssm indicate that CAssm could reproduce the water and energy cycles more reasonably than AMIP in the Asian-Pacific summer monsoon region.

The climatological error of the simulated SATJP in August 1981–2010 against the SATJP in JRA-55 was +0.13 K for AMIP and CAssm, and −0.28 K for CHist, as shown in Table 2. The standard deviation of SATJP for August 1981–2010 was approximately 0.92 K in JRA-55 and 0.63 K in AMIP, CAssm, and CHist. Therefore, MIROC6 tended to underestimate the temperature variability around Japan in August compared to the reanalysis, regardless of the atmosphere–ocean coupling. When comparing the simulation results with the observation or reanalysis, the anomaly from the climatological average was used to avoid climatological errors in this study. The bias of the interannual variance was not corrected.

Next, we investigated the relationship of the interannual variability of SATJP in August with the Silk Road and PJ patterns in the AGCM and CGCM simulations. Regression patterns of the August 200 hPa geopotential height onto the August SATJP, obtained from the 350-year samples (a 10-member ensemble over the 35 years 1980–2014), showed wave trains over Eurasia forming a positive anomaly over Japan, with a 98% significance level for both AMIP and CAssm, as shown in Fig. 4a and b. It is known that the Tibetan high extends eastward and brings heat waves over Japan, associated with the Silk Road teleconnection (Imada et al. 2019). The difference between Fig. 4a and b shows that the Silk Road pattern is emphasized in CAssm compared to AMIP (Fig. 4c).
Figure 4 Regression maps of 200 hPa geopotential height (z200; upper panels) and 850 hPa relative vorticity (vor850; lower panels) onto SATJP obtained from the 350-year (10 members from 1980 to 2014) ensemble simulations using a, d AMIP and b, e CAssm experiments. Differences between the CAssm and AMIP experiments are shown for the regressions of c z200 and f vor850, respectively. The red boxes show the target areas of the EOF analysis for the a Silk Road and d PJ pattern indices, respectively

By contrast, regression patterns of the August 850 hPa relative vorticity (vor850) onto the August SATJP, obtained from the 350-year samples, showed a negative (anticyclonic) anomaly over Japan and positive (cyclonic) anomalies to the north and south of Japan with a 98% significance level for both AMIP and CAssm, as shown in Fig. 4d and e. This is consistent with the PJ pattern, which is often observed during Japanese high-temperature events (Imada et al. 2019).

Figure 5 shows the multiple linear regression of the detrended SATJP (SAT*JP) anomaly in August for the 350-year ensemble simulations of AMIP and CAssm. The horizontal and vertical axes indicate the PJ and Silk Road indices, respectively. Positive (negative) values on the horizontal and vertical axes of Fig. 5 mean that anticyclonic (cyclonic) anomalies cover the Japanese islands, associated with the PJ and Silk Road patterns, respectively. The PJ and Silk Road pattern indices in Fig. 5 are normalized by their respective standard deviations. The 350 scattered circles show the SAT*JP anomaly colored by temperature level for each experiment, and the background tiles show the average of the SAT*JP anomalies within them. Figure 5a shows that SAT*JP is only weakly related to the PJ and Silk Road pattern teleconnections in the 350-year ensemble simulations of AMIP. By contrast, hot (cold) SAT*JP cases are located at the upper-right (lower-left) side of the diagram in CAssm (Fig. 5b). Thus, the anticyclonic (cyclonic) anomaly over the Japanese islands, associated with positive (negative) values of the PJ and Silk Road pattern indices, makes the detrended SATJP hotter (colder) in August 1980–2014 in CAssm, as we expect in the real world. The multiple regression coefficients βPJ and βSR are listed in Fig. 5, with the 95% confidence intervals given as the numbers within parentheses. Although βPJ is not significant in AMIP, it is significantly positive in CAssm for August 1980–2014. The detrended SAT over the Japanese islands is more sensitive to the PJ pattern in CAssm than in AMIP. Nonetheless, βSR is significant but small (~0.09) in both AMIP and CAssm. The contribution of the Silk Road pattern to SAT*JP is similar regardless of the air–sea coupling.

Figure 5 Multiple linear regression of the detrended SATJP anomalies against the PJ and Silk Road pattern indices obtained from the 350-year (10 members for 1980 to 2014) ensemble simulations using a AMIP and b CAssm experiments. Each circle shows the detrended SATJP anomaly for each year, and the background tiles show the average of the detrended SATJP anomalies within them. The gray tiles mean there are no circles within them. βPJ and βSR are the regression coefficients of the PJ and Silk Road pattern indices with 95% confidence intervals as the numbers within parentheses, respectively
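The confidence intervals quoted in Fig. 5 can be obtained from the ordinary-least-squares covariance of the estimator. The following sketch uses synthetic indices and a normal-approximation z value of 1.96; it is illustrative and not a reproduction of the authors' exact procedure.

```python
import numpy as np

def ols_with_ci(X, y, z=1.96):
    """OLS coefficients with approximate 95% confidence intervals
    (normal approximation; a Student-t quantile would be used for small samples)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)            # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)       # covariance of the estimator
    se = np.sqrt(np.diag(cov))
    return beta, z * se                         # estimate and CI half-width

rng = np.random.default_rng(2)
n = 350                                         # 10 members x 35 years, as in the text
x_pj, x_sr = rng.standard_normal(n), rng.standard_normal(n)
y = 0.25 * x_pj + 0.09 * x_sr + 0.5 * rng.standard_normal(n)   # synthetic SAT*JP
X = np.column_stack([np.ones(n), x_pj, x_sr])
beta, half = ols_with_ci(X, y)
for name, b, hw in zip(["b0", "b_PJ", "b_SR"], beta, half):
    print(f"{name}: {b:+.3f} +/- {hw:.3f}")
```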
We also calculated the multiple linear regression of the SAT*JP anomaly against both teleconnection indices in August 1980–2014 using JRA-55. The multiple regression coefficients βPJ and βSR are not significant in the JRA-55 dataset. As the number of samples is much smaller in the reanalysis than in the large ensemble simulations, the relationship between SAT*JP and the PJ and Silk Road teleconnection patterns is not clear in JRA-55 (figures not shown). The PJ and Silk Road pattern indices are approximately −0.59 and 1.07 in August 2010 in JRA-55, respectively. Thus, the positive contribution of the Silk Road teleconnection to SATJP is partly canceled by the negative effect of the PJ pattern in August 2010 in the reanalysis dataset.

Large ensemble simulations for August 2010

Figure 6a and b shows the PDFs of the SATJP and SSTJP anomalies in August 2010 for each experiment, respectively, estimated by the kernel method (Silverman 1986; Kimoto and Ghil 1993).

Figure 6 Probability density functions of a SATJP and b SSTJP anomalies for August 2010 obtained from CAssm (red), AAssm (green), and AMIP (blue) experiments, estimated by the kernel method (Silverman 1986; Kimoto and Ghil 1993). Vertical lines at approximately a 1.94 K and b 0.86 K show the SATJP and SSTJP anomalies obtained from JRA-55 and COBE-SST2, respectively

Table 3 shows the comparison of SSTJP for August 2010 between COBE-SST2 and the 100-member CAssm experiments. The 100-member ensemble average of SSTJP in CAssm was approximately 0.76 K smaller than the SSTJP in COBE-SST2 for August 2010. In CAssm, we can estimate the possible SST variance induced by the atmospheric intrinsic variability using the 100-member ensemble, whereas this is not possible with a single realization of the observation. The standard deviation of SSTJP in August 2010 is approximately 0.22 K in CAssm. Figure 6b shows the PDFs of SSTJP in August 2010 for COBE-SST2, AMIP, and CAssm. The black and blue dashed lines are fixed at approximately 0.86 K because COBE-SST2 is used as a fixed boundary condition for the 100-member AMIP simulations. The exceedance probability of the observed SSTJP anomaly in August 2010 was approximately 9.1% in CAssm.

Table 3 Averages and standard deviations of SSTJP for August 2010, and exceedance probabilities over ΔSSTJP in COBE-SST2 for August 2010 in CAssm. Numbers in parentheses represent the number of samples for August 2010

Table 4 shows that the 100-member ensemble averages of SATJP in AMIP and CAssm were approximately 0.79 and 1.26 K smaller than that in JRA-55 for August 2010, respectively. One reason for this is the systematic biases of SATJP in MIROC6, as shown in Table 2. The other reason is the difference between the observation (including the stochastic noise) and the ensemble mean values (without the stochastic noise). The ensemble average of SATJP in AMIP is higher than that in CAssm in August 2010 because the AMIP simulations are strongly constrained by the observed SST. Table 4 also shows that the standard deviation of SATJP among the 100-member ensemble was larger in CAssm (approximately 0.49 K) than in AMIP (approximately 0.37 K).

Table 4 Averages and standard deviations of SATJP for August 2010, and exceedance probabilities over ΔSATJP in JRA-55 for August 2010 in the AMIP, AAssm, and CAssm experiments. Numbers in parentheses represent the number of samples for August 2010

Figure 6a shows the PDFs of the SATJP anomalies in August 2010 for JRA-55, AMIP, and CAssm. The black line at 1.94 K indicates the SATJP anomaly in JRA-55. The exceedance probabilities over the threshold of 1.94 K were estimated for each experiment (Fig. 6a and Table 4). The exceedance probability was approximately 1.1% and 0.18% in AMIP and CAssm, respectively.
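A minimal sketch of estimating such an exceedance probability from a finite ensemble with a Gaussian kernel density estimate, using Silverman's rule-of-thumb bandwidth; the synthetic 100-member ensemble and the evaluation grid are placeholders, and only the 1.94 K threshold is taken from the text.

```python
import numpy as np

def gaussian_kde_pdf(samples, grid):
    """Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth."""
    n = samples.size
    h = 1.06 * samples.std(ddof=1) * n ** (-1.0 / 5.0)   # Silverman (1986)
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(3)
sat_anom = rng.normal(loc=0.7, scale=0.49, size=100)     # synthetic 100-member ensemble
grid = np.linspace(-2.0, 4.0, 601)
pdf = gaussian_kde_pdf(sat_anom, grid)

threshold = 1.94                                         # observed 2010 anomaly (K)
dx = grid[1] - grid[0]
p_exceed = pdf[grid >= threshold].sum() * dx             # tail integral of the KDE
print(f"P(SAT_JP anomaly > {threshold} K) ~ {100 * p_exceed:.2f}%")
```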
The difference in the shape of the PDFs between AMIP (narrow and tall) and CAssm (wide and short) was similar to that expected from Fig. 1. Figure 6a also shows a large difference in the PDF peaks between AMIP and CAssm. These differences imply that the AGCM framework used in the conventional EA approach might underestimate the variance of the PDF and be strongly constrained by the observed value, resulting in an overestimation of the probability of occurrence of an extreme event compared to the CAssm framework, because of the absence of the internal variability associated with the air–sea coupling.

We could not detect a pure effect of the atmosphere–ocean interaction by comparing AMIP and CAssm because the SST boundary condition of AMIP is a single realization among the possible perturbed SST patterns used in CAssm. The differences in the PDFs between AMIP and CAssm could be strongly affected by the different SST patterns (a one-way direct impact from SST). To examine the effect of pure air–sea coupling (two-way interaction), we conducted additional 100-member ensemble experiments using the AGCM, obtaining the daily boundary conditions of SST and sea ice concentration from the 100-member outputs of CAssm. This additional 100-member experiment is denoted as AAssm in this study. We compared the PDFs of SATJP between CAssm and AAssm in Fig. 6. The anomalies in AAssm were defined by the deviation from their climatological averages in CAssm. In Fig. 6a, the shape of the SATJP PDF curve for CAssm is wider and shorter than that of AAssm. Figure 6a and Table 4 show that the exceedance probability was slightly smaller in AAssm as a result of the smaller variance due to the absence of air–sea interaction. Furthermore, the PDF of CAssm showed negative skewness compared to that of AAssm. This can be understood from the relation with the PJ and Silk Road pattern indices.

Figure 7 shows the multiple linear regression of the SATJP anomalies from the 100-member ensemble averages (δSATJP) against the PJ and Silk Road pattern indices for August 2010 of AMIP, AAssm, and CAssm, in the same manner as Fig. 5. The PJ and Silk Road pattern indices for both AAssm and CAssm in Fig. 7 are commonly normalized by the standard deviation for the 350-year ensemble simulations of CAssm, while those for AMIP in Fig. 7a are normalized by the standard deviation for the 350-year ensemble simulations of AMIP.

Figure 7 Multiple linear regression of SATJP anomalies against the PJ and Silk Road pattern indices for August 2010 from a AMIP, b AAssm, and c CAssm experiments. βPJ and βSR are the regression coefficients of the PJ and Silk Road pattern indices with 95% confidence intervals as the numbers within parentheses, respectively
The SATJP variance among the 100-member ensemble was larger in CAssm than in AAssm for August 2010. The standard deviations are approximately 0.39 and 0.49 for AAssm and CAssm, respectively, as shown in Table 4. Figure 6a shows that the enhanced variance of SATJP in CAssm is mainly due to the lower SATJP cases, which are located at the lower-left side in Fig. 7c. Therefore, the SAT variance over Japan could be increased in association with the increase of negative skewness induced by atmospheric teleconnection patterns and atmosphere–ocean coupling. Because anticyclonic anomalies change the surface temperature through the adiabatic heating and less cloudiness, the nonlinear variation of cloud covers is one of the reasons of the negative skewness. This study investigated the impact of air–sea interaction on the probability of occurrence of extreme events in mid-latitude small islands surrounded by the ocean. We compared the event probability, estimated by AGCM and CGCM large-ensemble experiments, of the heat waves that occurred in Japan in August 2010. The observed ocean temperature, salinity, and sea ice were assimilated into 100-member CGCM experiments. We compared the CGCM assimilation ensemble (CAssm) for August 2010 with two types of 100-member AGCM experiments: the AGCM ensemble with the boundary conditions from the CGCM assimilation experiment (AAssm), and the AMIP-type ensemble with the single boundary condition from the observation (AMIP). The AMIP-type ensemble has generally been used in the study of event attribution. We can interpret the difference of SAT anomaly around Japan between AMIP and CAssm experiments, illustrated in Fig. 1c and d, by decomposing the difference into two parts: one between AMIP and AAssm, and the other between AAssm and CAssm experiments. One difference between AMIP and AAssm is the ensemble average shift of the SAT anomalies due to the SST distribution difference between the observation and assimilated fields. The ensemble-averaged intensity of the Japanese heat wave in 2010 in AAssm becomes smaller than that in AMIP, as shown in Fig. 6a. The other difference between AMIP and AAssm is the ensemble variance of the SAT due to the different variance of the SST field. The SAT ensemble spread under the 100-kind boundary conditions in AAssm becomes larger than that under the single boundary condition in AMIP. These are the main sources of the difference in the SAT PDFs between the two types of AGCM experiments in Fig. 6, and both differences depend on the assimilation intensity in the CGCM. The difference between AAssm and CAssm is induced by the pure effect of the air–sea coupling. The assimilated CGCM experiment can reproduce the SAT variability due to the pressure variability related with the Silk Road and PJ pattern teleconnections, as expected in the real world. Such a mechanism could expand the ensemble spread of the SAT over Japan under the air–sea coupled condition. However, the absence of air–sea coupling in AAssm could distort the response mechanism of the SAT over Japan to such atmospheric internal variability. Even if the SST fields are common between AGCM and CGCM ensembles, the ensemble spread of the SAT over Japan in August 2010 is reduced in the AGCM experiment. Note that we did not investigate whether the air–sea coupling modified the PJ and Silk Road pattern teleconnections themselves in this study. 
The results showed that the ensemble spread of the SAT over Japan in the CGCM experiment was larger than those in the two types of AGCM experiments for August 2010. If the ensemble average of the SAT anomaly is equal in CGCM and AGCM ensembles, the probability of occurrence of the heat wave over Japan in August 2010 could be estimated to be smaller by AMIP ensemble, compared to CAssm. However, as the ensemble average of the SAT anomaly was large in AMIP, the probability of occurrence of the heat wave was estimated to be smaller in CAssm ensemble, compared to AMIP. Further analysis suggested that the SAT anomaly over Japan was well related to the pressure variability due to the Silk Road and PJ pattern indices in the CGCM assimilation experiment, as reported for the real world. By contrast, the simulated SAT over Japan by AGCM was less sensitive to these atmospheric internal variabilities, and the ensemble spread became narrower in the AGCM experiment. In many extreme EA studies, the probability of an event has been estimated using large ensemble simulations performed by AGCMs. This study raised the issue of the absence of SST response to atmospheric variability in AGCMs, which can critically impact the estimation of extreme event probability, particularly in mid-latitude islands, such as Japan. The new framework using the CGCM assimilation system proposed in this study conceptually provides a more realistic probability distribution. Our next step is to produce a counterfactual ensemble without anthropogenic climate change by applying the CGCM assimilation system and verify the impact of air–sea coupling on the estimated probability ratio from the two ensembles. A part of ensemble members of CHist are available as the historical experiment in CMIP6. The other data supporting the conclusions of this article are available upon request. AGCM: Atmospheric general circulation model AMIP: Atmospheric model intercomparison project CGCM: Coupled general circulation model CMIP: Coupled model intercomparison project Event attribution Probability density function Surface air temperature SST: Adler RF, Huffman GJ, Chang A, Ferraro R, Xie P-P, Janowiak J, Rudolf B, Schneider U, Curtis S, Bolvin D, Gruber A, Susskind J, Arkin P, Nelkin E (2003) The Version-2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979-Present). J Hydrometeorol 4:1147–1167 Allen MR (2003) Liability for climate change. Nature 421:891–892 Easterling DR, Kunkel KE, Wehner MF, Sun L (2016) Detection and attribution of climate extremes in the observed record. Weather Clim Extremes 11:17–27. https://doi.org/10.1016/j.wace.2016.01.001 Enomoto T (2004) Interannual variability of the Bonin high associated with the propagation of Rossby waves along the Asian jet. J Meteor Soc Jpn 82:1019–1034. https://doi.org/10.2151/jmsj.2004.1019 Enomoto T, Hoskins BJ, Matsuda Y (2003) The formation mechanism of the Bonin high in August. Quart J Roy Meteor Soc 129:157–178. https://doi.org/10.1256/qj.01.211 Eyring V, Bony S, Meehl GA, Senior CA, Stevens B, Stouffer RJ, Taylor KE (2016) Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci Model Dev 9:1937–58. https://doi.org/10.5194/gmd-9-1937-2016 Fischer EM, Beyerle U, Schleussner CF, King AD, Knutti R (2018) Biased estimates of changes in climate extremes from prescribed SST simulations. Geophys Res Lett 45:8500–8509. 
A part of the ensemble members of CHist is available as the historical experiment in CMIP6. The other data supporting the conclusions of this article are available upon request.
AGCM: atmospheric general circulation model; AMIP: Atmospheric Model Intercomparison Project; CGCM: coupled general circulation model; CMIP: Coupled Model Intercomparison Project; EA: event attribution; PDF: probability density function; SAT: surface air temperature; SST: sea surface temperature.
We thank the support from the TOUGOU program funded by MEXT, Japan.
GPCP precipitation data were procured from NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their website at https://www.esrl.noaa.gov/psd/. The authors are also grateful to the Japan Meteorological Agency for providing the JRA-55 and COBE-SST2 datasets. This study was supported by the Integrated Research Program for Advancing Climate Models (TOUGOU program), Grant Number JPMXD0717935457, of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
Atmosphere and Ocean Research Institute, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8564, Japan (Akira Hasegawa, Hideo Shiogama & Masahiro Watanabe); Meteorological Research Institute, Japan Meteorological Agency, Tsukuba, Ibaraki, Japan (Yukiko Imada); Center for Global Environmental Research, National Institute for Environmental Studies, Tsukuba, Ibaraki, Japan (Hideo Shiogama); Research Institute for Applied Mechanics, Kyushu University, Fukuoka, Japan (Masato Mori); Research Center for Environmental Modeling and Application, Japan Agency for Marine-Earth Science and Technology, Yokohama, Kanagawa, Japan (Hiroaki Tatebe).
AH performed the experimental study and analyzed the data. YI proposed and designed the study and analyzed the data. HS proposed and designed the study. MM prepared the MIROC6 AGCM system and related datasets. HT created the full-field assimilation system of MIROC6 and the long-term assimilation simulation. MW assisted in its interpretation. All authors have read and approved the final manuscript. MW serves as the area representative of Area Theme A "Prediction and Projection of Large-Scale Climate Changes Based on Advanced Model Development" under the TOUGOU program (Integrated Research Program for Advancing Climate Models). HT serves as the representative of subject (i)-a "Near-future climate change predictions and promotion of CMIP6 experiments" in Area Theme A of the TOUGOU program. MW also serves as the representative of subject (ii)-b "Analysis of factors in past climate changes and unusual weather and future projections" in Area Theme A. YI, HS, MM, and AH have been studying event attribution under subject (ii)-b of Area Theme A of the TOUGOU program.
Correspondence to Akira Hasegawa.
Hasegawa, A., Imada, Y., Shiogama, H. et al. Impact of air–sea coupling on the probability of occurrence of heat waves in Japan. Prog Earth Planet Sci 7, 78 (2020). https://doi.org/10.1186/s40645-020-00390-8
Atmosphere–ocean coupling; Heat waves; Atmospheric internal variability; Silk Road wave train; Pacific–Japan pattern
Partitioning of thermostable glucoamylase in polyethyleneglycol/salt aqueous two-phase system
Vinayagam Ramesh & Vytla Ramachandra Murty
A major challenge in downstream processing is the separation and purification of a target biomolecule from the fermentation broth, which is a cocktail of various biomolecules present as impurities. Aqueous two-phase systems (ATPS) can address this issue to a great extent, so that the separation and partial purification of a target biomolecule can be integrated into a single step. In the food industry, starch processing is carried out using thermostable glucoamylase. Humicola grisea serves as an attractive source for extracellular production of glucoamylase. In the present investigation, the possibility of using polyethylene glycol (PEG)/salt-based ATPS for the partitioning of glucoamylase from H. grisea was investigated for the first time. Experiments were conducted based on a one-variable-at-a-time approach in which independent parameters such as PEG molecular weight, type of phase-forming salt, tie line length, phase volume ratio, and neutral salt concentration were optimized. It was found that the PEG 4000/potassium phosphate system was suitable for the extraction of glucoamylase from the fermentation broth. From the results, it was observed that, at a phase composition of 22 % w/w PEG 4000 and 12 % w/w phosphate in the presence of 2 % w/w NaCl and at pH 8, glucoamylase was partitioned into the salt-rich phase with a maximum yield of 85.81 %. A range of parameters had a significant influence on aqueous two-phase extraction of glucoamylase from H. grisea. The feasibility of using aqueous two-phase extraction (ATPE) as a preliminary step for the partial purification of glucoamylase was clearly proven.
Glucoamylase (EC 3.2.1.3) is a hydrolytic enzyme that degrades starch and related oligosaccharides, leading to the production of β-d-glucose. Other sectors that benefit from glucoamylase include the brewing, textile, food, paper, and pharmaceutical industries [1]. Glucoamylase is obtained from different microbial sources, including bacteria, yeasts, and fungi. The commercial production of glucoamylase has been mainly carried out using the genera Aspergillus and Rhizopus [2]. For the manufacture of high-fructose corn syrups, starch first needs to be converted to glucose by high-temperature liquefaction and saccharification [3]. Much attention is currently focused on the high thermostability of the glucoamylase used in starch processing. Hence, a highly thermostable and environmentally compatible glucoamylase is essential for industrial purposes [4]. The main benefits of using thermostable enzymes in the starch processing industry include increased reaction rates, decreased contamination risk, and lower cooling costs [5, 6]. The thermophilic fungus Humicola grisea possesses an efficient hydrolytic system for the production of glucoamylase. Moreover, the enzyme is stable when exposed to high temperature for long durations. In view of these advantages, glucoamylase derived from the thermophilic fungus H. grisea MTCC 352 was used in the current study [3]. A variety of downstream processing techniques such as ion exchange chromatography, hydrophobic interaction chromatography, and gel filtration chromatography have been exploited for the purification of glucoamylase [1, 7–11]. However, these procedures are expensive, time-consuming, and often multistep, low-yield protocols that are not suitable for large-scale production.
In this regard, the use of aqueous two-phase systems (ATPSs) for the extraction and purification of glucoamylase was attempted in the present investigation. Aqueous two-phase extraction (ATPE) has been widely used as a rapid and economical method for the separation and partial purification of many intracellular and extracellular enzymes [12–15]. An ATPS can be formulated by mixing appropriate quantities of two hydrophilic polymers, or of a hydrophilic polymer and a salt. However, ATPSs based on a hydrophilic polymer and a salt have attracted many researchers because of the following advantages: ease of separation, low cost, ease of scale-up and operation, biocompatibility, and high water content. Moreover, ATPE has high capacity and yield [16]. Protein partitioning in any ATPS depends on many factors, such as hydrophobic interactions, hydrogen bonding, ionic interactions, and van der Waals forces. Therefore, the partitioning behavior varies with the type of polymer, polymer molecular weight and concentration, type and concentration of salt, tie line length (TLL), phase volume ratio (V_R), and other processing parameters such as pH, temperature, and the presence of neutral salts [17, 18]. Over the years, ATPSs have been widely used in the purification of monoclonal antibodies, extractive fermentation, and the recovery of industrial enzymes [18]. Recent studies have employed ATPS (polyethylene glycol (PEG)/potassium phosphate) for biomolecule extraction and primary purification to a great extent. Nandini and Rastogi [19] dealt with the partitioning of lactoperoxidase from milk whey and studied the effect of phase-forming salt, PEG molecular weight, pH, TLL, and V_R, resulting in a purification fold (PF) of 2.31. Ratanapongleka [20] studied the partitioning behavior of laccase from Lentinus polychrous Lev., examining the effect of PEG molecular weight and concentration, salt concentration, pH, and NaCl, leading to 99 % yield and a PF of 3. Babu et al. [21] studied the extraction of polyphenol oxidase from pineapple with respect to PEG molecular weight and concentration, salt concentration, and pH, which gave 90 % recovery and a PF of 2.7. Naganagouda and Mulimani [22] carried out ATPE of α-galactosidase from Aspergillus oryzae and studied the effect of PEG molecular weight, salt concentration, pH, and NaCl, resulting in a PF of 3.6 and a recovery of 87.71 %. The partitioning of glucoamylase from Aspergillus awamori NRRL 3112 was studied by Minami and Kilikian [23] using a two-step ATPE consisting of PEG/phosphate systems, achieving a threefold PF. Glucoamylase from the same organism was partitioned using bioaffinity extraction with starch as a free bioligand by de Gouveia and Kilikian [24]. To the best of our knowledge, there are no available studies on ATPE of glucoamylase from any known thermophilic fungus. The present investigation was carried out to understand and improve the partitioning of glucoamylase. Accordingly, studies were systematically carried out by varying the stated parameters through the one-variable-at-a-time approach. In the current study, the phase-forming salt was first selected, followed by the molecular weight of PEG (fixing the concentration of PEG and salt at a constant level). Next, the influence of process parameters such as tie line length, phase volume ratio, and pH was investigated. Finally, the effect of the presence of a neutral salt (sodium chloride) on the partitioning behavior of glucoamylase was studied.
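Such a one-variable-at-a-time screen can be organized as a simple loop that scans one factor at a time while holding the others at their current best levels. The sketch below is only a schematic of that workflow, not the authors' laboratory protocol; run_atps_experiment is a synthetic stand-in for a wet-lab purification-factor measurement, and the factor levels simply mirror those reported later in the text.

```python
# Schematic of a one-variable-at-a-time (OVAT) screen: each factor is scanned
# in turn while the others are held at their current best levels.
# `run_atps_experiment` is a synthetic scoring function standing in for an
# actual PF measurement; it peaks near the optimum reported in this paper.

def run_atps_experiment(settings):
    target = {"peg_mw": 4000, "tll_pct": 30.62, "v_r": 1.37, "ph": 8, "nacl_pct": 2}
    return -sum(abs(settings[k] - target[k]) / (abs(target[k]) + 1.0) for k in target)

factors = {
    "peg_mw":   [1000, 2000, 4000, 6000],
    "tll_pct":  [22.91, 26.0, 30.62, 31.61],
    "v_r":      [0.41, 0.53, 1.0, 1.37, 1.57],
    "ph":       [6, 7, 8, 9],
    "nacl_pct": [0, 1, 2, 3, 4, 5],
}

best = {"peg_mw": 1000, "tll_pct": 22.91, "v_r": 1.0, "ph": 7, "nacl_pct": 0}

for name, levels in factors.items():
    scores = {level: run_atps_experiment({**best, name: level}) for level in levels}
    best[name] = max(scores, key=scores.get)   # keep the level with the best score

print(best)   # converges factor by factor toward the assumed optimum
```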
Polyethylene glycol (molecular weight (MW) 1000, 2000, 4000, and 6000), dipotassium hydrogen orthophosphate, potassium dihydrogen orthophosphate, trisodium citrate, tripotassium citrate, magnesium sulfate, magnesium sulfate heptahydrate, sodium chloride, and calcium chloride were obtained from Merck (India). Potato dextrose agar, yeast extract, and soluble starch were obtained from Hi Media Laboratories Pvt. Ltd (India). The glucose oxidase/peroxidase (GOD-POD) assay kit was obtained from Agappe Diagnostics Ltd (India). All chemicals were of analytical grade. The fungus H. grisea MTCC 352 was obtained from the Microbial Type Culture Collection, Chandigarh, India.
Enzyme production and preparation of crude enzyme
The microorganism was maintained on potato dextrose agar (PDA) slants, grown at 45 °C for 10 days before being stored at 4 °C. Glucoamylase was produced through submerged cultivation in a chemically defined medium. The medium consisted of 2.84 g soluble starch, 0.96 g yeast extract, 0.05 g KH2PO4, 0.24 g K2HPO4, 0.05 g NaCl, 0.05 g CaCl2, 0.19 g MgSO4·7H2O, and 0.1 mL of Vogel's trace elements solution. The pH of the medium was adjusted to 6 [3]. Cultures were incubated with agitation at 150 rpm at 45 °C for 4 days. The fermented broth was filtered using Whatman No. 1 filter paper, and the filtrate was centrifuged at 10,000 rpm for 10 min to remove residual fungal mycelia. The cell-free supernatant was referred to as the crude enzyme and was used throughout the experiments.
Partitioning studies in aqueous two-phase system
Aqueous two-phase systems were prepared by mixing the requisite amounts of PEG and the various salts (trisodium citrate, tripotassium citrate, magnesium sulfate, and mono/dibasic potassium phosphate). The total weight of each system was 10 g, and the crude enzyme amounted to 10 % of the total system. The tubes were vigorously vortexed and centrifuged at 3000 rpm for 10 min to speed up the separation process. Phase equilibration was achieved by overnight incubation of the tubes, after which samples were withdrawn from the individual phases and analyzed for total protein and glucoamylase activity. Samples were analyzed against blanks of identical phase composition without enzyme, to avoid interference from the phase components.
Glucoamylase activity
An appropriate amount of the crude enzyme was allowed to react with 1 % (w/v) soluble starch solution in 50 mM citrate buffer (pH 5.5) at 60 °C for 10 min. The concentration of the glucose produced was estimated by the GOD-POD method using a standard glucose curve prepared under similar conditions. One unit of glucoamylase activity was defined as the amount of enzyme that releases 1 μmol of glucose from soluble starch per minute under assay conditions. The total protein was estimated as described by Bradford [25], using bovine serum albumin as a standard.
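To make the unit definition above concrete, the short sketch below converts a GOD-POD glucose reading into activity units and a specific activity. All numeric inputs are invented examples for illustration, not measurements from this study.

```python
# Worked example of the activity-unit definition above: 1 U = 1 umol of
# glucose released per minute. All numbers are invented for illustration.

GLUCOSE_MW = 180.16                 # g/mol

def activity_u_per_ml(glucose_mg_per_ml, reaction_min, dilution=1.0):
    """Glucoamylase activity (U/mL) from the glucose liberated in the assay."""
    umol_glucose = glucose_mg_per_ml * 1000.0 / GLUCOSE_MW   # mg/mL -> umol/mL
    return umol_glucose * dilution / reaction_min

units = activity_u_per_ml(glucose_mg_per_ml=0.9, reaction_min=10, dilution=5)
protein_mg_per_ml = 0.42            # hypothetical Bradford result
print(f"{units:.2f} U/mL; specific activity {units / protein_mg_per_ml:.1f} U/mg")
```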
Estimation of partition parameters
The partitioning parameters in the ATPS were calculated as follows. The phase volume ratio (V_R) was defined as the ratio of the volume of the top phase (V_T) to that of the bottom phase (V_B).
$$ V_{\mathrm{R}} = \frac{V_{\mathrm{T}}}{V_{\mathrm{B}}} $$
The partition coefficient for glucoamylase (K_GA) was defined as the ratio of the glucoamylase activity in the top phase (A_T) to that in the bottom phase (A_B).
$$ K_{\mathrm{GA}} = \frac{A_{\mathrm{T}}}{A_{\mathrm{B}}} $$
The partition coefficient for total protein (K_TP) was defined as the ratio of the protein concentration in the top phase (C_T) to that in the bottom phase (C_B).
$$ K_{\mathrm{TP}} = \frac{C_{\mathrm{T}}}{C_{\mathrm{B}}} $$
The specific activity (SA) was defined as the ratio of glucoamylase activity (A) to protein concentration (C) in the respective phase.
$$ \mathrm{SA} = \frac{A}{C} $$
The purification factor (PF) was calculated as the ratio of the specific activity in the bottom phase (SA_B) to the specific activity in the crude extract (SA_F).
$$ \mathrm{PF} = \frac{\mathrm{SA}_{\mathrm{B}}}{\mathrm{SA}_{\mathrm{F}}} $$
The glucoamylase yield in the bottom phase is given by the following equation.
$$ \mathrm{Yield}\ (\%) = \frac{100}{1 + V_{\mathrm{R}} K_{\mathrm{GA}}} $$
The TLL is defined as
$$ \mathrm{TLL}\ (\%) = \sqrt{(C_{\mathrm{PT}} - C_{\mathrm{PB}})^2 + (C_{\mathrm{SB}} - C_{\mathrm{ST}})^2} $$
where C_PT and C_PB are the PEG concentrations (% w/w) in the top and bottom phases, respectively, and C_ST and C_SB are the salt concentrations (% w/w) in the top and bottom phases, respectively.
The essence of ATPE lies in the differential partitioning of the target biomolecule to one phase and the contaminants to the other; it is this mechanism that leads to the purification of a target biomolecule. Predicting extraction behavior in an ATPS theoretically is difficult, primarily because a complex set of parameters decides the extent of partitioning. These include the properties of the biomolecule (size, charge, and hydrophobicity) and the properties of the system, such as (i) type and concentration of the phase-forming salt, (ii) concentration and molecular weight of the phase-forming polymer, (iii) tie line length, (iv) phase volume ratio, (v) pH of the system, and (vi) concentration of neutral salts. Details of the selection of each of these parameters and their effect on the partitioning of glucoamylase are presented in the following sections.
Effect of phase-forming salts
Due to the significant influence of the phase-forming salt on the system environment, its selection has a direct consequence for the separation, concentration, and purification of a given biomolecule in ATPE [26]. In order to identify the most appropriate salt for the recovery of glucoamylase and ensure its efficient extraction, ATPE experiments were performed with phase systems of PEG (MW 4000) and four different phase-forming salts: trisodium citrate, tripotassium citrate, magnesium sulfate, and mono/dibasic potassium phosphate. The partition coefficients of H. grisea-derived glucoamylase and total protein using 15 % (w/w) PEG 4000 + 15 % (w/w) salt are shown in Fig. 1. In all the phase systems studied, the values of K_GA and K_TP were less than 1. Glucoamylase preferentially partitioned to the bottom phase, indicating a strong preference of glucoamylase for the bottom phase and resulting in low partition coefficients in the range 0.28–0.78. This is in agreement with the ATPE studies of Minami and Kilikian [23] on glucoamylase from A. awamori. The disparities in the values of K_GA across the systems are caused by the non-uniform distribution of the salt ions between the top and bottom phases. They are also due to differences in the electric potential, which drives protein mobility to the other phase through electrostatic repulsion/attraction, as well as to the hydrophobicity and size of the salt ions [18, 27].
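For reference, the partition parameters defined in Eqs. (1)–(7) can be computed directly from raw phase measurements. The following is a minimal sketch; all input values are illustrative placeholders rather than data from this work.

```python
import math

# Minimal sketch of the partition-parameter calculations in Eqs. (1)-(7).
# All input values are illustrative placeholders, not measured data.

def partition_metrics(v_top, v_bot, act_top, act_bot, prot_top, prot_bot, sa_feed):
    v_r    = v_top / v_bot                     # Eq. (1): phase volume ratio
    k_ga   = act_top / act_bot                 # Eq. (2): enzyme partition coeff.
    k_tp   = prot_top / prot_bot               # Eq. (3): protein partition coeff.
    sa_bot = act_bot / prot_bot                # Eq. (4): specific activity, bottom
    pf     = sa_bot / sa_feed                  # Eq. (5): purification factor
    yld    = 100.0 / (1.0 + v_r * k_ga)        # Eq. (6): bottom-phase yield (%)
    return {"V_R": v_r, "K_GA": k_ga, "K_TP": k_tp, "PF": pf, "Yield_%": yld}

def tie_line_length(c_pt, c_pb, c_st, c_sb):
    """Eq. (7): TLL (% w/w) from PEG and salt concentrations in each phase."""
    return math.hypot(c_pt - c_pb, c_sb - c_st)

print(partition_metrics(v_top=4.0, v_bot=3.0, act_top=12.0, act_bot=95.0,
                        prot_top=0.8, prot_bot=1.1, sa_feed=33.0))
print(f"TLL = {tie_line_length(38.0, 8.0, 2.0, 18.0):.2f} %")
```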
The specific activities of the top and bottom phases, together with the yield and purification factor for the bottom phase, in the systems with different phase-forming salts are presented in Table 1. The yield was higher in the bottom phase (69.73–82.48 %) for all the phase-forming salts. Except for the magnesium salt, the specific activity (U/mg) was higher in the bottom phase, and so was the purification factor. Trisodium citrate and potassium phosphate exhibited relatively higher yields of 82.23 and 82.48 %, respectively. However, based on the purification factor, potassium phosphate resulted in the higher PF (1.46). The potassium phosphate system has also been found more effective for lactoperoxidase [19], laccase [20], polyphenol oxidase [21], and α-galactosidase [22]. Based on these preliminary results, the PEG/potassium phosphate (K2HPO4 and KH2PO4) system was used for further studies.
Fig. 1 Effect of phase-forming salt on the partitioning of glucoamylase
Table 1 Effect of phase-forming salts on glucoamylase partitioning
Effect of PEG molecular weight
The molecular weight of PEG determines the extent of partitioning of the target biomolecule and of the other molecules in the extract. As the chain length of PEG increases, the volume exclusion effect generally increases. In the presence of salt, the hydrophobicity of the polymer-rich top phase increases with chain length [18]. The extraction efficiency is influenced by the composition of the phases and the number of polymer–protein interactions, factors governed by polymers of different degrees of polymerization [28]. In order to identify the most appropriate molecular weight of PEG for the recovery of glucoamylase, partitioning studies were carried out employing the PEG/KH2PO4–K2HPO4 system with different molecular weights of PEG (1000, 2000, 4000, and 6000). The phase composition and pH were maintained at constant values throughout (15 % (w/w) PEG + 15 % (w/w) potassium phosphate, pH 7). The partition coefficients of glucoamylase and total protein are shown in Fig. 2. The partition coefficients of glucoamylase (K_GA) and total protein (K_TP) were found to decrease with an increase in PEG molecular weight. This decrease could be ascribed to the volume exclusion effect, which increases with the molecular weight of the polymer; as a result, the biomolecules selectively partition to the bottom phase. Similar results were observed by Nandini and Rastogi [26], Priyanka et al. [29], and Lakshmi et al. [30]. The specific activities of the top and bottom phases, together with the yield and purification factor for the bottom phase, in the systems with different PEGs (MW 1000, 2000, 4000, and 6000) are shown in Table 2. From the experimental runs, we observed that the specific activity of the bottom phase was greater than that of the top phase, irrespective of the molecular weight of the polymer. With a rise in the molecular weight of PEG, the yield of glucoamylase in the bottom phase increased. This trend can be explained by the increase in top-phase hydrophobicity.
As the chain length of PEG increases, the polymer has a deficiency of hydroxyl groups at the same polymer concentration [12]. The specific activity of the enzyme in the bottom phase rose as the molecular weight increased from 1000 to 4000; thereafter, with PEG 6000, it dipped, as did the purification factor. This behavior is due to the fact that the bottom phase may be reaching its solubility limit with respect to glucoamylase; the resulting salting-out effect therefore tends to push the enzyme to the top phase. Similar results were observed by Yuzugullu and Duman [31] and by Madhusudhan and Raghavarao [32]. Thus, the shift of the yield and purification factor towards higher values was observed at PEG 4000, and on this basis PEG 4000 was chosen for further studies.
Fig. 2 Effect of PEG molecular weight on the partitioning of glucoamylase
Table 2 Effect of PEG molecular weight on glucoamylase partitioning
Effect of TLL
The effect of TLL (22.91–31.61 %) on glucoamylase partitioning was investigated in PEG 4000/potassium phosphate systems. The compositions of the PEG–salt systems within the specified TLL range were obtained from the liquid–liquid equilibrium data provided by Carvalho et al. [33]. The phase volume ratio was maintained at 1 for this set of experiments. The partition coefficient values of both glucoamylase and total protein increased with an increase in TLL (Fig. 3). This could be because of the decrease in the relative free volume in the bottom phase and the subsequent decrease in the solubility of the biomolecules [32, 34]. As depicted in Fig. 4, the increase in the partition coefficient of glucoamylase with increasing TLL at constant volume ratio results in a decrease in glucoamylase yield (Eq. 6). The purification factor increased and reached a maximum value of 1.72 at a TLL of 30.62 %. A decrease in the purification factor was observed for a further increase in TLL, which may be due to the high salt concentration in the bottom phase affecting the solubility of glucoamylase [30].
Fig. 3 Effect of TLL on the partitioning of glucoamylase
Fig. 4 Effect of TLL on the recovery of glucoamylase
Effect of phase volume ratio
In order to further purify the enzyme, various volume ratios (0.41–1.57) were selected at the TLL of 30.62 %, and their effect on PF and yield was investigated. Increasing the phase volume ratio reduces the bottom-phase volume [32] (Fig. 5). A lower PF was observed at lower V_R because the larger bottom-phase volume at lower V_R promotes the partitioning of contaminant proteins to the bottom phase. A maximum PF of 1.84 was observed at a V_R of 1.37, and a further increase resulted in a decrease in PF. In contrast, the yield decreased with increasing phase volume ratio, as expected from Eq. 6 (a numeric illustration of this trade-off is given in the sketch below). A similar result was observed by Chethana et al. [35].
Fig. 5 Effect of V_R on the recovery of glucoamylase
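Eq. (6) makes the yield–V_R trade-off explicit: at a fixed K_GA, the bottom-phase yield falls monotonically as V_R grows. A short numeric illustration follows; the K_GA value used is an assumed example, not a measurement from this study.

```python
# Numeric illustration of the yield-vs-V_R trade-off in Eq. (6):
# Yield(%) = 100 / (1 + V_R * K_GA). K_GA = 0.3 is an assumed example value.

K_GA = 0.3

for v_r in (0.41, 0.53, 1.0, 1.37, 1.57):
    y = 100.0 / (1.0 + v_r * K_GA)
    print(f"V_R = {v_r:4.2f} -> bottom-phase yield = {y:5.1f} %")
```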
Effect of pH
One of the significant factors governing the partitioning behavior of biomolecules in an ATPS is the pH at which the process is carried out, since any change in pH can alter the charge of the solute or the ratio of charged molecules. The phase system selected in the previous step (22 % (w/w) PEG 4000 and 12 % (w/w) potassium phosphate) was subjected to pH changes from 6 to 9. The variation of the partition coefficients with the pH of the system is shown in Fig. 6. The increase in pH improved the migration of contaminant proteins to the top phase, and consequently the partition coefficient of glucoamylase was found to decrease. This phenomenon enhanced the purification factor and yield in the bottom phase. The results are in accordance with the reported literature (Nandini and Rastogi [19]; Naganagouda and Mulimani [22]). As can be seen from Fig. 7, the increase in pH had a positive effect on glucoamylase yield, which reached a maximum of 82.62 % at pH 9. The PF, however, reached a maximum of 2.61 at pH 8 and decreased thereafter; the low stability of glucoamylase at higher pH could be a possible reason for this reduction [4]. It is well known that the pH of the ATPS has a profound effect on the partitioning of biomolecules, since it may change the charge of the biomolecule or the ratio of charged biomolecules; the partitioning depends on the system pH and on the isoelectric point of glucoamylase. The literature reveals that the isoelectric point of glucoamylase from H. grisea is greater than 8 [8, 36, 37]. A decrease in pH makes glucoamylase more positively charged and leads to a stronger interaction between glucoamylase and the polymer, which drives more of the enzyme to the PEG-rich phase. Similar results were observed by Nandini and Rastogi [19] and Ratanapongleka [20].
Fig. 6 Effect of pH on the partitioning of glucoamylase
Fig. 7 Effect of pH on the recovery of glucoamylase
Effect of NaCl
One of the definitive methods of arriving at optimum selectivity and yield is the addition of neutral salts to the ATPS [16]. To examine the effect of a neutral salt on the partitioning of the enzyme, the NaCl concentration was varied from 0 to 5 % w/w in the system selected in the previous step (22 % w/w PEG 4000 and 12 % w/w phosphate, pH 8.0). In general, the addition of neutral salts to an ATPS changes the partitioning behavior of proteins by changing the electrostatic potential difference between the phases or by increasing the hydrophobic interactions [38]. Because of the change in the electrostatic potential difference, increasing the NaCl concentration promoted more partitioning of glucoamylase to the bottom phase, and the lowest partition coefficient of 0.126 was obtained at 2 % NaCl (Fig. 8). This system resulted in a PF of 2.68 and a yield of 85.81 %. As shown in Fig. 8, a further increase in NaCl concentration reduced the PF, which could be a consequence of increased hydrophobic interactions between the protein and PEG in the top phase [39].
Fig. 8 Effect of NaCl on the recovery of glucoamylase
Based on the above observations, it is clear that the PEG 4000/KH2PO4–K2HPO4 phase system can be used as a potential technique for the separation and partial purification of glucoamylase. The recovery of glucoamylase from a thermophilic fungal source using aqueous two-phase extraction is reported here for the first time. The influence of various parameters on the separation and partial purification of glucoamylase from H. grisea in aqueous two-phase systems was revealed. The PEG 4000/potassium phosphate phase system was found to be the most efficient for the extraction of glucoamylase when compared to the other salt systems, and glucoamylase preferentially partitioned to the salt-rich bottom phase.
The optimized conditions were a tie line length of 30.62 %, phase volume ratio of 0.53, pH 8, and 2 % w/w NaCl. These conditions provided a maximum yield of 85.81 % and a 2.68-fold purification compared to the crude extract. Overall, the results demonstrated the feasibility of using ATPE as a preliminary step for the partial purification of glucoamylase.
Riaz M, Perveen R, Javed MR, Nadeem HU, Rashid MH (2007) Kinetic and thermodynamic properties of novel glucoamylase from Humicola sp. Enzyme Microb Tech 41:558–564
Pandey A (1995) Glucoamylase research: an overview. Starch 47:439–445
Ramesh V, Murty VR (2014) Sequential statistical optimization of media components for the production of glucoamylase by thermophilic fungus Humicola grisea MTCC 352. Enzyme Res. http://www.hindawi.com/journals/er/2014/317940/
Gomes E, Souza SR, Grandi RP, Da Silva R (2005) Production of thermostable glucoamylase by Aspergillus flavus A 1.1 and Thermomyces Lanuginosus A 13.37. Braz J Microbiol 36:75–82
Kaur P, Satyanarayana T (2004) Production and starch saccharification by a thermostable and neutral glucoamylase of a thermophilic mould Thermomucor indicae-seudaticae. World J Microbiol Biotechnol 20:419–425
Koç O, Metin K (2010) Purification and characterization of a thermostable glucoamylase produced by Aspergillus flavus HBF34. African J Biotechnol 9(23):3414–3424
Ferreira-Nozawa MS, Rezende JL, Guimarães LHS, Terenzi HF, Jorge JA, Polizeli MLTM (2008) Mycelial glucoamylases produced by the thermophilic fungus Scytalidium thermophilum strains 15.1 and 15.8. Purification and biochemical characterization. Braz J Microbiol 39(2):344–352
Campos L, Felix CR (1995) Purification and characterization of a glucoamylase from Humicola grisea. Appl Env Microbiol 61(6):2436–2438
Nguyen QD, Rezessy-Szabó JM, Claeyssens M, Stals I, Hoschke A (2002) Purification and characterization of amylolytic enzymes from thermophilic fungus Thermomyces lanuginosus strain ATCC 34626. Enzyme Microb Tech 31:345–352
Thorsen TS, Johnsen AH, Josefsen K, Jensen B (2006) Identification and characterization of glucoamylase from the fungus Thermomyces lanuginosus. Biochim Biophys Acta 1764(4):671–676
Negi S, Gupta S, Banerjee R (2011) Extraction and purification of glucoamylase and protease produced by Aspergillus awamori in a single-stage fermentation. Food Technol Biotechnol 49:310–315
Gautam S, Simon L (2006) Partitioning of β-glucosidase from Trichoderma reesei in poly(ethylene glycol) and potassium phosphate aqueous two-phase systems: influence of pH and temperature. Biochem Eng J 30:104–108
Madhusudhan MC, Raghavarao KSMS, Nene S (2008) Integrated process for extraction and purification of alcohol dehydrogenase from baker's yeast involving precipitation and aqueous two phase extraction. Biochem Eng J 38:414–420
Kammoun R, Chouayekh H, Abid H, Naili B, Bejar S (2009) Purification of CBS 819.72 α-amylase by aqueous two-phase systems: modelling using response surface methodology. Biochem Eng J 46:306–312
Kianmehr A, Pooraskari M, Mousavikoodehi B, Mostafavi SS (2014) Recombinant D-galactose dehydrogenase partitioning in aqueous two-phase systems: effect of pH and concentration of PEG and ammonium sulfate. Bioresource Bioprocess 1:6
Albertsson PA (1987) Partitioning of cell particles and macromolecules, 3rd edn. John Wiley and Sons, New York
Benavides J, Rito-Palomares M (2008) Practical experiences from the development of aqueous two-phase processes for the recovery of high value biological products. J Chem Technol Biotechnol 83:133–142
Raja S, Murty VR, Thivaharan V, Rajasekar V, Ramesh V (2011) Aqueous two phase systems for the recovery of biomolecules—a review. Science Technol 1:7–16
Nandini KE, Rastogi NK (2011) Integrated downstream processing of lactoperoxidase from milk whey involving aqueous two-phase extraction and ultrasound-assisted ultrafiltration. Appl Biochem Biotechnol 163:173–185
Ratanapongleka K (2012) Partitioning behavior of laccase from Lentinus polychrous Lev in aqueous two phase systems. Songklanakarin J Sci Technol 34(1):69–76
Babu BR, Rastogi NK, Raghavarao KSMS (2008) Liquid–liquid extraction of bromelain and polyphenol oxidase using aqueous two-phase system. Chem Eng Process 47:83–89
Naganagouda K, Mulimani VH (2008) Aqueous two-phase extraction (ATPE): an attractive and economically viable technology for downstream processing of Aspergillus oryzae α-galactosidase. Process Biochem 43:1293–1299
Minami NM, Kilikian BV (1998) Separation and purification of glucoamylase in aqueous two-phase systems by a two-step extraction. J Chromatogr B 711:309–312
de Gouveia T, Kilikian BV (2000) Bioaffinity extraction of glucoamylase in aqueous two-phase systems using starch as free bioligand. J Chromatogr B 743:241–246
Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72:248–254
Nandini KE, Rastogi NK (2011) Liquid–liquid extraction of lipase using aqueous two-phase system. Food Bioprocess Technol 4:295–303
Nagaraja VH, Iyyaswami R (2015) Aqueous two phase partitioning of fish proteins: partitioning studies and ATPS evaluation. J Food Sci Technol 52(6):3539–3548. http://www.ncbi.nlm.nih.gov/pubmed/26028736
Mohamadi HS, Omidinia E (2007) Purification of recombinant phenylalanine dehydrogenase by partitioning in aqueous two-phase systems. J Chromatogr B 854:273–278
Priyanka BS, Rastogi NK, Raghavarao KSMS, Thakur MS (2012) Downstream processing of luciferase from fireflies (Photinus pyralis) using aqueous two-phase extraction. Process Biochem 47:1358–1363
Lakshmi MC, Madhusudhan MC, Raghavarao KSMS (2012) Extraction and purification of lipoxygenase from soybean using aqueous two-phase system. Food Bioprocess Technol 5:193–199
Yuzugullu Y, Duman YA (2015) Aqueous two-phase (PEG4000/Na2SO4) extraction and characterization of an acid invertase from potato tuber (Solanum tuberosum). Prep Biochem Biotechnol 45(7):696–711. http://www.ncbi.nlm.nih.gov/pubmed/25127162
Madhusudhan MC, Raghavarao KSMS (2011) Aqueous two phase extraction of invertase from baker's yeast: effect of process parameters on partitioning. Process Biochem 46:2014–2020
Carvalho CP, Coimbra JSR, Costa IAF, Minim LA, Silva LHM, Maffia MC (2007) Equilibrium data for PEG 4000 plus salt plus water systems from (278.15 to 318.15) K. J Chem Eng Data 52:351–356
Selvakumar P, Ling TC, Walker S, Lyddiatt A (2012) Recovery of glyceraldehyde 3-phosphate dehydrogenase from an unclarified disrupted yeast using aqueous two-phase systems facilitated by distribution analysis of radiolabelled analytes. Sep Purif Technol 85:28–34
Chethana S, Nayak CA, Raghavarao KSMS (2007) Aqueous two phase extraction for purification and concentration of betalains. J Food Eng 81:679–687
Cereia M, Guimaraes LHS, Nogueira SCP, Jorge JA, Terenzi HF, Greene LJ, Polieli MLTM (2006) Glucoamylase isoform (GAII) purified from a thermophilic fungus Scytalidium thermophilum 15.8 with biotechnological potential. African J Biotechnol 5(12):1239–1245
Aquino ACMM, Jorge JA, Terenzi HF, Polizeli MLTM (2001) Thermostable glucose-tolerant glucoamylase produced by thermophilic fungus Scytalidium thermophilum. Folia Microbiol 46(1):11–16
Kavakçıoğlu B, Tarhan L (2013) Initial purification of catalase from Phanerochaete chrysosporium by partitioning in poly(ethylene glycol)/salt aqueous two phase systems. Sep Purif Technol 105:8–14
Raja S, Murty VR (2013) Optimization of aqueous two-phase systems for the recovery of soluble proteins from tannery wastewater using response surface methodology. J Eng. http://www.hindawi.com/journals/je/2013/217483/
The authors gratefully acknowledge the Department of Biotechnology, MIT, Manipal University for providing the facilities to carry out the research work.
Department of Biotechnology, Manipal Institute of Technology, Manipal University, Manipal, 576104, Karnataka, India (Vinayagam Ramesh & Vytla Ramachandra Murty)
Correspondence to Vinayagam Ramesh.
Both authors actively participated in the implementation and analysis of the present study. Ramesh performed the research protocols and wrote the manuscript. Both authors have read and approved the final version of the manuscript.
Ramesh, V., Murty, V.R. Partitioning of thermostable glucoamylase in polyethyleneglycol/salt aqueous two-phase system. Bioresour Bioprocess 2, 25 (2015). https://doi.org/10.1186/s40643-015-0056-6
Aqueous two-phase systems (ATPS); Glucoamylase; Humicola grisea
Design and application of air to fuel ratio controller for LPG fueled vehicles at typical down-way
Suroto Munahar, Bagiyo Condro Purnomo, Muji Setiyo, Aris Triwiyatno & Joga Dharma Setiawan
First Online: 06 December 2019
This article presents an investigation of air–fuel ratio (AFR) controllers applied to liquefied petroleum gas (LPG) fuelled vehicles with second-generation LPG kits. When a vehicle is running on a down-way, the mixture tends to become rich because of the increased vacuum in the intake manifold. Therefore, an AFR controller was developed that works from a vehicle tilt sensor combined with an oxygen sensor. The AFR controller regulates the injectors to form leaner mixtures. We tested the performance of the AFR controller at typical down-way angles of 10°, 15°, and 20°. As a result, the AFR controller was able to increase the AFR value from an average of 14.5 (without the controller) to 15.5–16.2, depending on the gear position and down-way angle. Furthermore, a greater road slope was observed to produce a greater AFR. This AFR controller is very promising for application to vehicles operating in mountainous areas.
LPG vehicle · AFR controller · Down-way
In the past few decades, environmental factors have become the main orientation in technological development, especially concerning health issues. In addition to the industrial sector, transportation is one of the sectors targeted to reduce global warming, air pollution, and emissions [1, 2, 3]. Therefore, the design of vehicle technology needs to consider emission factors [4, 5]. From another perspective, there is also a potential global energy crisis, and this calls for the design of technology that improves fuel efficiency for new and operating vehicles [6]. Electric vehicles (EVs) and fuel cell vehicles (FCVs) are very promising for reducing fuel consumption and emissions, even to zero, in the future. However, the implementation of EVs and FCVs is constrained in developing countries due to uncompetitive prices and limited mileage [7]. In EVs, the battery requires a long time to charge with high input power [8], while FCVs are limited by the infrastructure needed to produce hydrogen. In the medium term, hybrid vehicles (HVs), which combine internal combustion engines (gasoline/diesel) with electric motors, are a reasonable choice [8, 9]. However, this technology is also not yet widely accepted due to the relatively high total cost of ownership (TCO). Therefore, in the short term, controlling the air to fuel ratio (AFR) is an alternative method to reduce fuel consumption and emissions. This field has progressed rapidly, including the use of proportional–integral–derivative (PID) control for stoichiometric operation [10]. Neural networks, intelligent control systems modeled loosely on the brain, have also been applied to control the AFR [11]. Several other studies have processed the signals generated by oxygen sensors [12] and applied genetic algorithms [13], fuzzy logic controllers (FLC) [14, 15, 16], diagonal recurrent neural networks (DRNN) [17], and brake control systems [18]. Moreover, other methods to reduce emissions have been researched, including the application of alternative fuels such as ethanol, methanol, compressed natural gas (CNG), and LPG [19, 20].
Ethanol produces good efficiency and reduces emissions, but it cannot be produced in large quantities unless a country has a reliable policy on allocating agricultural land between food and energy [21]. Therefore, LPG is considered an alternative of choice in several countries due to advantages such as a high octane rating, lower exhaust emissions, and availability. Research on several aspects of LPG as an alternative fuel has been conducted. For example, Morganti [22] tested the research octane number (RON) and motor octane number (MON) of iso-butane, propylene, n-butane, and propane, followed by observations of the auto-ignition of a propane–butane mixture. In another study, Chikhi [23] investigated the CO, HC, NOx, and CO2 emissions produced by 17 bi-fuel vehicles using LPG to replace gasoline and diesel. Moreover, it is also possible to control the sulfur and toxic gases produced by LPG vehicles to achieve better emissions [24, 25]. Other studies focus on iso-octane and air mixtures [26], the performance characteristics of LPG, CNG, and LNG vehicles [27], direct-injection application with lean combustion methods [28], and risk analysis of the safety of LPG-fueled cars [29]. Meanwhile, several research works have also been conducted on the control of LPG. In 2015, Erkus [30] developed an LPG control system to be applied to carburetor-based engines; the results confirmed an increase in engine performance and better exhaust emissions compared with the carburetor system. Other studies include the fuel cut-off method, which cuts off the LPG flow to the engine during deceleration by controlling the solenoid on the vaporizer [31, 32], an emission comparison using a control system on liquid phase injection (LPI) and direct injection (DI) [33], and the characteristics of injection duration and control [34, 35, 36]. This has led to the development of intelligent control systems to support fuel efficiency. However, the studies conducted have not considered road contours such as up-ways and down-ways. When a vehicle passes through a down-way, kinetic force and gravity affect its movement; when the vehicle accelerates on a down-way, the fuel can be reduced or even cut off. Furthermore, even though LPG kit technology is now on a par with GDI technology, in practice most LPG vehicles use second-generation LPG kits (vapor phase injection, VPI) without strict AFR and emission control [37]. With second-generation LPG kits, AFR stoichiometry is only obtained under partial conditions; when the vehicle accelerates on a down-way, the mixture tends toward a low AFR rather than a high AFR. Therefore, we developed an AFR control system for LPG vehicles that takes the slope of the road into account. This control system works based on primary information from a tilt sensor.
2.1 Vehicle specification
This research was conducted on a gasoline car converted to LPG with a vapor phase injection (VPI) system. The vehicle and injector specifications are presented in Tables 1 and 2, respectively.
Table 1 Vehicle specification — Engine: 5A-FE; Bore × stroke: 78.7 × 77 mm; Valve mechanism: DOHC, 4 valves per cylinder; Maximum power output: 77 kW @ 6000 rpm; Maximum torque: 135 Nm @ 4800 rpm
Table 2 Injector specification — Working voltage: 12–15 V
With a naturally aspirated system, to achieve a stoichiometric mixture the amount of injected LPG depends on the air entering the combustion chamber. Therefore, the stoichiometric mixture (ṁ_stoich) in the cylinder is highly dependent on the mass of LPG injected, the number of cylinders (i_cyl), and the engine speed (n).
The formula for the stoichiometric mixture \((\lambda = 1)\) is presented in the following equation.
$$\lambda = \frac{2\dot{m}_{air}}{i_{cyl} \cdot M_{LPG} \cdot m_{LPG} \cdot n}$$
In the VPI system, LPG passes through two phases, liquid and gas: LPG from the tank to the vaporizer is in the liquid phase, while LPG injected into the intake manifold from the vaporizer is in the gas phase. The injected LPG depends on the effective flow area μA (mm²), the injection duration t_LPG (s), and the gas overpressure Δp_LPG (Pa) at intake manifold temperature T_0 (K) and pressure p_0 (Pa). In addition, LPG has a c_p of 1750 J/kg K and an R_LPG of 161.26 J/kg K; therefore, the mass of the injected LPG is given by the following equation [36].
$$m_{LPG} = (\mu A)_{LPG} \cdot \frac{p_{0} + \Delta p_{LPG}}{R_{LPG}\, T_{0}} \left( \frac{p_{0}}{p_{0} + \Delta p_{LPG}} \right)^{\frac{c_{p} - R_{LPG}}{c_{p}}} \sqrt{2\, c_{p}\, T_{0} \left[ 1 - \left( \frac{p_{0}}{p_{0} + \Delta p_{LPG}} \right)^{\frac{R_{LPG}}{c_{p}}} \right]} \cdot t_{LPG}$$
2.2 AFR controller
The AFR controller developed in this study works by manipulating the signal generated by the throttle position sensor (TPS). The controller circuit was installed between the TPS and the ECU, which regulates the AFR based on information from the TPS, the camshaft position sensor, and signals from other sensors. The injector sprays LPG in the gas phase, regulated using the pulse width modulation (PWM) method. The angle sensor, as a road tilt detector, was connected to the AFR controller, while the AFR meter obtains its data from an oxygen sensor attached to the exhaust manifold. The concept of the LPG controller together with the AFR data acquisition is presented in Fig. 1.
Fig. 1 Set-up of the LPG controller and AFR measurement
3.1 Prototyping
The parts of the AFR controller circuit work in an integrated manner, as shown in Fig. 2. The TPS detects the throttle valve position and sends signals to the ECU. The controller consists of a power supply, angle sensor, transistor, relay, capacitor, and a variable resistor. The power supply, as the voltage source, activates the relay and the angle sensor, while the NPN transistor activates the relay when triggered by the angle sensor. The capacitor holds the relay closed just after the trigger from the angle sensor is sent to the transistor, and the LPG controller is activated by adjusting the sensitivity of the angle sensor. The relay contacts at the normally closed terminals are connected to the ECU and the TPS, while the normally open terminals are connected to the ECU and the variable resistor.
Fig. 2 Wiring diagram of the AFR controller
The AFR controller works based on the attitude of the vehicle. When the vehicle is running on a down-way, the angle sensor produces a signal that triggers the transistor to activate the relay. This reduces the voltage from the TPS to the ECU through the variable resistor, and the ECU in turn reduces the flow of LPG entering the engine through the injectors. An oxygen-sensor-based AFR meter is used to determine the amount of LPG injected, as shown in Fig. 3a. Finally, the clinometer installed to measure the angle of inclination is shown in Fig. 3b. We used a clinometer because it is more practical and highly accurate (a maximum deviation of only 0.2° compared with measurements using a bevel gauge).
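The control decision just described can also be expressed in software. The sketch below is only an illustration of the logic; the tilt threshold and the target AFR values are assumptions for demonstration, and the real prototype implements this behavior with an analog relay/variable-resistor circuit rather than code.

```python
# Software sketch of the control decision described above: when the tilt
# sensor reports a down-way steeper than a threshold, the TPS voltage seen
# by the ECU is attenuated so that the ECU shortens the injection duration.
# Threshold and target AFR values are illustrative assumptions.

TILT_THRESHOLD_DEG = 8.0            # assumed trigger angle for a down-way
AFR_BASE, AFR_LEAN = 14.5, 15.8     # average AFRs without/with the controller

def tps_voltage_to_ecu(tps_volts: float, tilt_deg: float) -> float:
    """Return the (possibly attenuated) TPS voltage forwarded to the ECU."""
    if tilt_deg >= TILT_THRESHOLD_DEG:
        # Lean the mixture: fuel mass scales roughly with AFR_BASE / AFR_LEAN
        return tps_volts * (AFR_BASE / AFR_LEAN)
    return tps_volts

for tilt in (0.0, 10.0, 20.0):
    print(f"tilt {tilt:4.1f} deg -> TPS voltage to ECU: "
          f"{tps_voltage_to_ecu(3.0, tilt):.2f} V")
```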
Fig. 3 Installation of the oxygen sensor (a), clinometer (b), LPG injectors (c), and testing of the LPG controller on a down-way (d)
The ECU controls the LPG injectors based on information from the sensors: the CMP sensor for engine speed, the TPS for the throttle valve position, and the AFR meter attached inside the exhaust manifold to monitor the quality of the LPG/air mixture entering the engine. The LPG controller block diagram is presented in Fig. 4.
Fig. 4 Block diagram of the LPG controller
3.2 AFR measurement
The AFR was measured at road slopes of 10°, 15°, and 20°. The AFR data for speed gear positions 1, 2, and 3, without and with the LPG controller, at slope angles of 10°, 15°, and 20° are presented in Table 3. Tests were carried out on the down-way for 10 s each. For each road slope, the vehicle was set to start running at 0 s; the AFR controller became active at 2 s and was deactivated at 9 s after the vehicle started. Vehicle speed was left to develop naturally according to the road slope. The AFR controller was intentionally set to act 2 s after the start of the run to ensure that the tilt of the vehicle corresponded to a down-way, not to a speed bump or a road pit. The AFR value after the 9th second therefore returns to its initial value, as the vehicle is stopped. If the vehicle travels on a long down-way, the high AFR persists for as long as tilt is detected. The graphs of the test results are presented in Figs. 5, 6, and 7, respectively.
Table 3 AFR measurement data by speed gear position, without and with the AFR controller (measurement scatter approximately ± 0.4 without and ± 0.35 with the controller)
Fig. 5 AFR profile at various road slopes in 1st speed gear
Fig. 6 AFR profile at various road slopes in 2nd speed gear
Fig. 7 AFR profile at various road slopes in 3rd speed gear
From Table 3 and Figs. 5, 6, and 7, the results show that a greater road slope produced a higher AFR. This indicates that the kinetic force and vehicle gravity can be used as input parameters to control the AFR. In previous studies [31, 32], AFR controller systems in LPG vehicles were applied based on deceleration, with input parameters from the engine, brakes, and vehicle speed. In this study, the AFR controller developed has the ability to work on descending roads at low vehicle and engine speeds. Based on the data gathered during the observations, this AFR controller has great potential for application to modified LPG vehicles operating in mountainous areas, although it makes only a small contribution in areas where most roads are flat.
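As a rough consistency check of the measured shift, raising the AFR from about 14.5 to 15.5–16.2 at approximately unchanged airflow implies that the injected fuel mass must shrink by the inverse ratio. The short calculation below uses the average AFR values quoted in the text; the constant-airflow assumption is ours.

```python
# Back-of-envelope check of the shift reported in Table 3: at (approximately)
# unchanged airflow, fuel mass scales with the inverse of the AFR change.
# AFR values are the averages quoted in the text.

afr_without = 14.5
for afr_with in (15.5, 16.2):
    fuel_scale = afr_without / afr_with           # m_fuel_new / m_fuel_old
    print(f"AFR {afr_without} -> {afr_with}: fuel mass x {fuel_scale:.3f} "
          f"(about {(1 - fuel_scale) * 100:.1f} % less LPG per cycle)")
```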
The results showed that the kinetic force, gravity, vehicle weight, and road slope have the potential to be used as input signals by the AFR controller to improve fuel efficiency. The AFR controller developed was able to increase the AFR value from an average of 14.5 to 15.5–16.2, depending on the down-way angle, while the gear position had no measurable effect. Furthermore, a greater road slope was observed to produce a greater AFR. In conclusion, the AFR controller has the ability to increase the AFR and is very suitable for modified LPG vehicles with first- and second-generation converter kits that are not yet equipped with lambda sensors, especially for LPG vehicles operating in mountainous areas.
This research is part of an environmentally friendly vehicle development project at the Automotive Laboratory of Universitas Muhammadiyah Magelang. The researchers appreciate the technicians involved in this study. The authors declare that there is no conflict of interest regarding the publication of this article.
The Clean Air Initiative for Asian Cities Center (CAI-Asia Center). Improving vehicle fuel economy in the ASEAN region (2010). https://cleanairasia.org/, https://cleanairasia.org/improving-vehicle-fuel-economy-in-the-asean-region/. Accessed 10 Oct 2016
Santos G (2017) Road transport and CO2 emissions: what are the challenges? Transp Policy 59:71–74
Colvile R, Hutchinson E, Mindell J, Warren R (2001) The transport sector as a source of air pollution. Atmos Environ 35(9):1537–1565
Karagiorgis S, Glover K, Collings N (2007) Control challenges in automotive engine management. Eur J Control 13(2–3):92–104
Michalek JJ, Papalambros PY, Skerlos SJ (2004) A study of fuel efficiency and emission policy impact on optimal vehicle design decisions. J Mech Des 126(6):1062–1070
Tverberg GE (2012) Oil supply limits and the continuing financial crisis. Energy 37:27–34
Messagie M, Lebeau K, Coosemans T, Macharis C, Van Mierlo J (2013) Environmental and financial evaluation of passenger vehicle technologies in Belgium. Sustainability (Switzerland) 5(12):5020–5033
Deilami S et al (2011) Real-time coordination of plug-in electric vehicle charging in smart grids to minimize power losses and improve voltage profile. IEEE Trans Smart Grid 2(3):456–467
Setiawan IC (2019) Policy simulation of electricity-based vehicle utilization in Indonesia (electrified vehicle—HEV, PHEV, BEV and FCEV). Automot Exp 2(1):1–8
Iliev S (2015) A comparison of ethanol and methanol blending with gasoline using a 1-D engine model. Procedia Eng 100:1013–1022
Zhai Y-J, Yu D-L (2009) Neural network model-based automotive engine air/fuel ratio control and robustness evaluation. Eng Appl Artif Intell 22(2):171–180
Cavina N, Corti E, Moro D (2008) Closed-loop individual cylinder air–fuel ratio control via UEGO signal spectral analysis. IFAC Proc Vol 41(2):2049–2056
Zhao J, Xu M (2013) Fuel economy optimization of an Atkinson cycle engine using genetic algorithm. Appl Energy 105:335–348
Wu T, Karkoub M, Chen H, Yu W, Her M (2015) Robust tracking observer-based adaptive fuzzy control design for uncertain nonlinear MIMO systems with time delayed states. Inf Sci 290:86–105
Bouarar T, Guelton K, Manamanni N (2010) Robust fuzzy Lyapunov stabilization for uncertain and disturbed Takagi–Sugeno descriptors. ISA Trans 49(4):447–461
Jansri A, Sooraksa P (2012) Enhanced model and fuzzy strategy of air to fuel ratio control for spark ignition engines. Comput Math Appl 64(5):922–933
Zhai Y, Yu D, Guo H, Yu DL (2010) Robust air/fuel ratio control with adaptive DRNN model and AD tuning. Eng Appl Artif Intell 23(2):283–289
Triwiyatno A, Sinuraya EW, Setiawan JD, Munahar S (2016) Smart controller design of air to fuel ratio (AFR) and brake control system on gasoline engine. In: 2nd International Conference on Information Technology, Computer, and Electrical Engineering, pp 233–238
Masum BM, Masjuki HH, Kalam MA, Palash SM, Habibullah M (2015) Effect of alcohol-gasoline blends optimization on fuel properties, performance and emissions of a SI engine. J Clean Prod 86:230–237
Elfasakhany A (2015) Investigations on the effects of ethanol–methanol–gasoline blends in a spark-ignition engine: performance and emissions analysis. Int J Eng Sci Technol 18(4):713–719
Hulwan SVJDB (2018) Multizone model study for DI diesel engine running on diesel ethanol biodiesel blends of high ethanol fraction. Int J Automot Mech Eng 15(3):5451–5467
Morganti KJ, Foong TM, Brear MJ, Da Silva G, Yang Y, Dryer FL (2013) The research and motor octane numbers of liquefied petroleum gas (LPG). Fuel 108:797–811
Chikhi S, Boughedaoui M, Kerbachi R, Joumard R (2014) On-board measurement of emissions from liquefied petroleum gas, gasoline and diesel powered passenger cars in Algeria. J Environ Sci 26(8):1651–1659
Cho CP, Kwon OS, Lee YJ (2014) Effects of the sulfur content of liquefied petroleum gas on regulated and unregulated emissions from liquefied petroleum gas vehicle. Fuel 137:328–334
Myung C, Choi K, Kim J, Lim Y, Lee J, Park S (2012) Comparative study of regulated and unregulated toxic emissions characteristics from a spark ignition direct injection light-duty vehicle fueled with gasoline and liquid phase LPG (liquefied petroleum gas). Energy 44(1):189–196
Assanis D, Wagnon SW (2015) An experimental study of flame and autoignition interactions of iso-octane and air mixtures. Combust Flame 162(4):1214–1224
Raslavičius L, Mockus S, Keršienė N, Starevičius M (2014) Liquefied petroleum gas (LPG) as a medium-term option in the transition to sustainable fuels and transport. Renew Sustain Energy Rev 32:513–525
Kim J, Kim K, Oh S (2016) An assessment of the ultra-lean combustion direct-injection LPG (liquefied petroleum gas) engine for passenger-car applications under the FTP-75 mode. Fuel Process Technol 154:219–226
Van Den Schoor F, Middha P, Van Den Bulck E (2013) Risk analysis of LPG (liquefied petroleum gas) vehicles in enclosed car parks. Fire Saf J 57:58–68
Erkus B, Karamangil MI, Surmen A (2015) Designing a prototype LPG injection electronic control unit for a carburetted gasoline engine. Uludağ Univ J Fac Eng 20(2):141–153
Setiyo M, Munahar S (2017) Modeling of deceleration fuel cut-off for LPG fuelled engine using fuzzy logic controller. Int J Veh Struct Syst 9(4):261–265
Setiyo M, Munahar S (2017) AFR and fuel cut-off modeling of LPG-fueled engine based on engine, transmission, and brake system using fuzzy logic controller (FLC). J Mechatron Electr Power Veh Technol 8:50–59
Myung CL, Kim J, Choi K, Hwang IG, Park S (2012) Comparative study of engine control strategies for particulate emissions from direct injection light-duty vehicle fueled with gasoline and liquid phase liquefied petroleum gas (LPG). Fuel 94:348–355
Jaworski A, Kuszewski H, Lejda K, Ustrzycki A (2016) The effect of injection timing on the environmental performances of the engine fueled by LPG in the liquid phase. In: Lejda K, Woś P (eds) Internal combustion engines system, vol i. IntechOpen, London, p 13
PradeepBhasker J, Porpatham E (2016) LPG gaseous phase electronic port injection on performance, emission and combustion characteristics of lean burn SI engine. IOP Conf Ser Earth Environ Sci 40(1):0–11
Mitukiewicz G, Dychto R, Leyko J (2015) Relationship between LPG fuel and gasoline injection duration for gasoline direct injection engines. Fuel 153:526–534
World LPG Association (2017) Autogas incentive policies, 2017 edition. Neuilly-sur-Seine
© Springer Nature Switzerland AG 2019
1. Department of Automotive Engineering, Universitas Muhammadiyah Magelang, Magelang, Indonesia
2. Department of Electrical Engineering, Universitas Diponegoro, Semarang, Indonesia
3. Department of Mechanical Engineering, Universitas Diponegoro, Semarang, Indonesia
Munahar, S., Purnomo, B.C., Setiyo, M. et al. SN Appl. Sci. (2020) 2: 37. https://doi.org/10.1007/s42452-019-1839-8
Received 05 August 2019; Accepted 03 December 2019
Discrete and Continuous Dynamical Systems - B 2016, Volume 21, Issue 10: 3301–3314. DOI: 10.3934/dcdsb.2016098
Complex dynamics in the segmented disc dynamo
Jianghong Bao, School of Mathematics, South China University of Technology, Guangzhou, Guangdong
Received: May 31, 2015
The present work is devoted to giving new insights into the segmented disc dynamo. The integrability of the system is studied. The paper provides its first integrals for the parameter $r=0$. For $r>0$, the system has neither polynomial first integrals nor exponential factors, and it is further proved not to be Darboux integrable. In addition, by choosing an appropriate bifurcation parameter, the paper proves that Hopf bifurcations occur in the system and presents the formulae for determining the direction of the Hopf bifurcations and the stability of the bifurcating periodic solutions.
Keywords: Segmented disc dynamo, Darboux integrability, Darboux first integrals, exponential factors, Hopf bifurcation.
Mathematics Subject Classification: Primary: 37G10, 37J30; Secondary: 34C23.
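For readers unfamiliar with the bifurcation machinery the abstract relies on, the generic Hopf conditions can be stated as follows; this is standard textbook material, not reproduced from the paper itself (whose system equations are omitted here). Writing $\lambda_{1,2}(\mu) = \alpha(\mu) \pm i\beta(\mu)$ for the pair of complex-conjugate eigenvalues of the Jacobian at an equilibrium, a Hopf bifurcation occurs at the parameter value $\mu = \mu_0$ when
$$\alpha(\mu_0) = 0, \qquad \beta(\mu_0) = \omega_0 \neq 0, \qquad \left. \frac{d\alpha}{d\mu} \right|_{\mu = \mu_0} \neq 0,$$
that is, when the eigenvalue pair crosses the imaginary axis with nonzero speed. The sign of the first Lyapunov coefficient $\ell_1(\mu_0)$ then determines the direction of the bifurcation and the stability of the bifurcating periodic solutions: $\ell_1(\mu_0) < 0$ gives a supercritical bifurcation with a stable limit cycle, while $\ell_1(\mu_0) > 0$ gives a subcritical one.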
Are Disasters a Risk to Regional Fiscal Balance? Evidence from Indonesia
Astrid Wiyanti1,2 & Alin Halimatussadiah3
International Journal of Disaster Risk Science volume 12, pages 839–853 (2021)
Indonesia is an archipelago country and is fairly vulnerable to disasters. While disasters generally affect government revenue and expenditure, their effects likely vary by country. This study examines the effect of disasters on the fiscal balance, revenue, and expenditure of local governments. We used panel data and fixed effects methods to estimate the degree to which disaster severity influences budgetary solvency at the district and provincial levels in Indonesia between 2010 and 2018. This study revealed that disasters can strain fiscal balance at the district and provincial levels due to a decrease in own-source revenue and increases in social assistance expenditure, capital expenditure, consumption expenditure, and unexpected expenditure. The district expenditure most threatened by disasters is consumption expenditure, while the provincial expenditure most threatened is unexpected expenditure. We also found that an increase in capital expenditure can create a financial burden through delays to planned projects or post-disaster reconstruction. Based on these findings, it is clear that some form of insurance or other financing scheme is necessary to mitigate the adverse impacts of disasters on regional fiscal balance.
The impacts of disasters, such as human fatalities, infrastructure damage, and welfare losses, tend to affect macroeconomic and fiscal conditions. Shabnam (2014), Bergholt and Lujala (2012), and Wu et al. (2018) all found that disasters impact economic growth. Disasters also pose a risk to state revenues and expenditures (Lis and Nickel 2010; Lee et al. 2018; Medina 2018). Disasters often result in increased emergency response expenditure, capital expenditure for reconstruction, and social assistance expenditure. They also tend to reduce productivity and individual income, leading to a decline in government revenues.
The effect of disasters on government revenues and expenditures varies by country. Klomp and Valckx (2014) found that the impact of disasters on a country's fiscal conditions is inversely related to its level of development. Miao et al. (2018), looking at the United States, found that disasters have no effect on government revenues or operational expenditure but negatively impact capital expenditure. Lee et al. (2018) found that disasters lead to decreased revenues and increased expenditures for governments in least-developed countries (LDCs).
This study examined the effect of disasters on the fiscal balance, revenue, and expenditure of local governments, using Indonesia as an example, in order to fill gaps in the literature. First, this study used disaggregated data at the district and provincial levels. Similar studies have largely been conducted at the national level (Lis and Nickel 2010; Noy and Nualsri 2011; Lee et al. 2018; Ouattara et al. 2018; Benali et al. 2019). Such studies generally consider total expenditure without distinguishing between types of expenditure. As a result, government expenditure that is little or not at all affected by disasters, such as some employee expenditure, is included in those studies' estimation models. Second, this study considered all disasters, regardless of the size of their impact. Third, this study used fiscal balance indicators to determine fiscal health in the regions.
Studies about the fiscal effects of disasters at the state level have been conducted by Miao et al. (2018) and Unterberger (2017). Miao et al. (2018) used disaggregated, state-level data from the United States, although the United States and Indonesia differ in many ways in their approach to revenue generation. For example, Indonesia uses a value added tax, which is collected by the central government, whereas the United States uses a sales tax, which is collected by state governments. Unterberger (2017) conducted research specifically on flooding, so further studies are needed to examine the fiscal effects of disasters involving other hazards.
This study is quantitative research that aims to examine the impact of disasters on fiscal balance at the provincial and district levels. We employed the fixed effects method and used the budgetary solvency ratio as the primary indicator at the district and provincial levels from 2010 to 2018 for Indonesia. We analyzed each budgetary solvency component, including own-source revenue, unexpected expenditure, capital expenditure, and consumption expenditure. Independent variables used to measure the effects of disasters include the number of dead and missing people. The affected people variable includes injured and displaced people. The number of damaged buildings includes houses, public buildings (for example, schools, places of worship, and health facilities), and private buildings (for example, factories and offices). We also used the length of damaged roads and the area of damaged forest to capture disaster severity. We used secondary data provided by related ministries/national entities for all variables. This research can serve as a basis for policy formulation for disaster mitigation and disaster financing.
Section 2 explains the concept of fiscal balance and how it is impacted by disasters. Section 3 explains our dataset and the econometric model. Section 4 presents our results, a discussion, and the limitations of this study.
Economic and Fiscal Impacts of Disasters
Botzen et al. (2019) classified the economic and fiscal impacts of disasters as either direct or indirect. The direct fiscal impacts usually arise on the expenditure side. There are three types of post-disaster financing in Indonesia: emergency response, rehabilitation, and reconstruction (Ministry of Finance 2018). Emergency response financing is used to search for and rescue victims, construct basic infrastructure for displaced people, and provide the materials necessary to meet people's basic needs. Rehabilitation financing is used to restore the social and economic conditions of the community. Reconstruction financing goes to repairing damaged public facilities. Local governments must prioritize the reconstruction of health facilities and educational facilities, as both fulfill basic community needs and significantly impact economic growth (Noy and Edmonds 2019).
The increased expenditure necessary in the aftermath of disasters impacts both local and central governments. Unterberger (2017) and Benali et al. (2019) found that disasters may increase the debt of local governments with limited fiscal capacity. In countries that embrace a decentralized system, this can be addressed through transfers from the central government. Deryugina (2017) and Miao et al. (2018) also found that disasters lead to significantly higher rates of transfers from central governments to regional governments.
Gross domestic product (GDP) is often used to quantify the indirect economic impact of disasters (Botzen et al. 2019). Based on previous studies, disasters can have varying effects, both positive and negative, on GDP (Chhibber and Laajaj 2008; Panwar and Sen 2019; Strulik and Trimborn 2019). If a disaster causes severe damage, GDP tends to decrease; however, if the damage from a disaster is minor, GDP tends to increase (Chhibber and Laajaj 2008). Strulik and Trimborn (2019) found that the effects of disasters on GDP depend on a community's welfare spending. If a disaster occurs in a country with high welfare spending, GDP may grow as a result of citizens' increased consumption to replace assets that were destroyed.
Another economic indicator commonly used to assess the indirect impact of disasters is government revenue. Unsurprisingly, the effect of disasters on budget revenue is largely negative, as they result in welfare losses and fewer funds available to finance projects, repay debts, or create reserves in the short term (Unterberger 2017). In turn, revenues stemming from sales, property, income, and consumption taxes, and from local levies, decrease (Noy and Nualsri 2011; Miao et al. 2018).
The effect of disasters on government revenue can be examined in both the short and long terms. Using the panel vector auto-regressive method, Ouattara et al. (2018) found that tropical storms in the Caribbean resulted in decreased government revenues over the year in which the disaster occurred: a USD 1 increase in damage caused by disasters led to a reduction of USD 8 in government revenue. Using the same method, Benali et al. (2019) found that disasters involving flooding, earthquakes, and hurricanes significantly reduced government revenues in developing countries over a two-year period and in developed countries over a one-year period.
Several related studies have been conducted with different types of data and study locations. Lis and Nickel (2010) and Noy and Nualsri (2011) both conducted studies in developed and developing countries using aggregated data. Lis and Nickel (2010) used the debt-to-GDP ratio as their dependent variable, while Noy and Nualsri (2011) used state revenue and spending policy as their dependent variables. Lee et al. (2018) looked at LDCs using aggregated data and found that disasters could increase government expenditure and reduce state revenue. Miao et al. (2018) and Deryugina (2017) both used disaggregated data to assess the impact of disasters on local government expenditure in the United States and found that disasters can increase the level of central government transfers to regions.
Miao et al. (2018) analyzed the effects of disasters on the revenue and expenditure of state governments in the United States. They found that revenue from sales, property, and income taxes was not affected by disasters. This is related to government tax policy, such as the provision of income tax relief for disaster-affected people and the reappraisal of property values following a disaster, which can result in a lower property tax base. In line with Noy and Nualsri's (2011) findings, this may happen because central governments in developed countries tend to lower tax rates and increase spending to restore fiscal balance and stabilize long-term tax revenues. In terms of expenditure, their study shows that disasters lead to increased local government spending to finance disaster-recovery programs but do not lead to increased capital expenditure.
Several disaster impact studies have also been conducted in LDCs and small island developing states (SIDS). With the random effects method, Lee et al. (2018), who used aggregated data from 12 LDCs, found that disasters negatively and significantly affect economic growth, fiscal balance, and trade balance. Noy and Edmonds (2019) found a robust negative correlation between disasters and fiscal conditions in five SIDS. Disaster insurance schemes, such as the Pacific Catastrophe Risk Insurance, did not necessarily help.
Demographics play a key role in the impact of disasters on government revenue and expenditure. Skidmore and Toya (2013) used population density and fertility rates as control variables. Their study showed a strong positive correlation between population growth and government revenue. Fertility rates, however, were insignificant.
In this section, we discuss the data and estimation model that were employed. In the data section, we describe the various types of data and data sources in detail. In the model estimation section, we describe the dependent and independent variables as well as the method and equations. We also present an overview of the selected data and the relevant literature used to determine the model's specifications.
We used data at the district and provincial levels from 2010 to 2018. The law that currently regulates local taxes and levies in Indonesia was enacted at the end of 2009. To limit bias caused by excessive variation, we did not use data from before 2010. We used unbalanced panel data, as regional expansion occurred during the period under review. We used two kinds of data: fiscal data and disaster data. Fiscal data consist of local own-source revenue, capital expenditure, consumption expenditure, social assistance expenditure, and unexpected expenditure. All fiscal data were provided by the Directorate-General for Financial Balance at the Indonesian Ministry of Finance (MoF). In line with Ouattara et al. (2018), we converted all values into natural logged form.
Local own-source revenue includes local taxes, levies, and other authorized sources such as profits from the sale of local assets, goods, and services; interest income from savings; currency exchange profits; and compensation. Different types of taxes are levied at the district and provincial government levels. Parking, hotel and restaurant, property, and other taxes are examples of local tax sources at the district level, while motor vehicle, cigarette, and surface water taxes are local tax sources at the provincial level.
The primary dependent variable used to measure fiscal balance is the budgetary solvency ratio. Fiscal balance in government is a condition under which government revenues can meet government expenditures (Chapman 2008). Fiscal balance is essential for local governments to maintain the level and quality of public services, sustain expenditures, and avoid a widening deficit during recessions (Honadle et al. 2004; Jimenez 2009). There are two classifications of fiscal balance at the local government level: standardized fiscal health and actual fiscal health (Honadle et al. 2004). Standardized fiscal balance denotes the government's ability to manage its revenues and expenditures without assistance from the central government; actual fiscal balance, in contrast, considers assistance from the central government. We used standardized fiscal balance to capture the capacity of each region to respond to disasters.
Budgetary solvency is the government's ability to generate revenue in order to meet its expenditure needs (Bisogno et al. 2019). Budgetary solvency comprises aspects of revenue capacity and expenditure pressure. The value is essential for examining the independence and sustainability of fiscal conditions at the local government level. A budgetary solvency of one or greater indicates good fiscal capability, as the government can finance its expenditures from its own revenue (Alam and Hoque 2019). Using Eq. 1, we calculated the budgetary solvency ratio by dividing local own-source revenue by disaster-related expenditures. Other fiscal factors that we used as dependent variables to explain the effect of disasters on budgetary solvency include local own-source revenue, capital expenditure, consumption expenditure, social assistance expenditure, and unexpected expenditure.
$$\text{Budgetary Solvency}_{i,t} = \frac{\text{Local Own-Source Revenue}_{i,t}}{\text{Capital Expenditure}_{i,t} + \text{Consumption Expenditure}_{i,t} + \text{Social Assistance Expenditure}_{i,t} + \text{Unexpected Expenditure}_{i,t}} \tag{1}$$
Data often used to capture the magnitude of disasters include the number of people killed, injured, and affected, and the economic damages (Shabnam 2014; Unterberger 2017; Miao et al. 2018; Tang et al. 2019). Due to the limited availability of data at the district level, we used the number of damaged buildings instead of the economic damages. The data on disaster severity were provided by the National Agency for Disaster Management (NADM). These data include the death toll and missing people; the number of people injured and displaced; the number of damaged houses; the number of damaged public buildings, including schools, health facilities, and places of worship; the number of damaged private buildings, including offices and factories; the length of damaged roads; and the area of damaged forests. In line with Noy and Nualsri (2011), Wu et al. (2018), and Tang et al. (2019), the severity data are normalized by dividing each measure by its total exposure or a related economic denominator.
We only used damaged forest data at the provincial level, as estimation at the district level contains a bias on both the revenue and expenditure sides. On the revenue side, the area of damaged forests is biased at the district level because forest fires are closely associated with the forest area owned by the region. Districts with large forest areas tend to have higher local own-source revenue than those with small forest areas. Local own-source revenue is derived from taxes and levies related to forest management, such as forest produce fees and timber utilization permits. On the expenditure side, bias is closely related to government authority in forest fire control. Under Indonesian regulation, when the effects of a forest fire are limited to a single district, that district is responsible for managing the fire. However, if other districts are impacted, responsibility lies at the provincial level or even with the central government. As forest fires produce smog, which affects the surrounding area, forest fire management is generally the responsibility of provincial governments. As a result, data on damaged forests are only appropriate at the provincial level.
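As a concrete illustration of Eq. 1, the sketch below computes the budgetary solvency ratio from a small panel. The column names are hypothetical stand-ins for the MoF series, since the authors' processing scripts are not published.

```python
import pandas as pd

# Hypothetical column names standing in for the Ministry of Finance series;
# each row is one district (or province) in one fiscal year.
df = pd.DataFrame({
    "region": ["A", "A", "B", "B"],
    "year": [2017, 2018, 2017, 2018],
    "own_source_revenue": [120.0, 130.0, 80.0, 70.0],
    "capital_exp": [60.0, 65.0, 50.0, 55.0],
    "consumption_exp": [40.0, 42.0, 30.0, 31.0],
    "social_assistance_exp": [5.0, 6.0, 4.0, 4.5],
    "unexpected_exp": [1.0, 2.0, 0.5, 3.0],
})

# Eq. 1: own-source revenue divided by the four disaster-related expenditures.
df["budgetary_solvency"] = df["own_source_revenue"] / (
    df["capital_exp"]
    + df["consumption_exp"]
    + df["social_assistance_exp"]
    + df["unexpected_exp"]
)

# A ratio >= 1 flags a region that can cover these expenditures from its own revenue.
print(df[["region", "year", "budgetary_solvency"]])
```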
The impact of disasters on fiscal balance is closely related to demographic and economic conditions. Therefore, we used population density (POPDEN) and gross regional domestic product per capita (GRDPCAP) as control variables, in line with Deryugina (2017) and Unterberger (2017). We used nominal GRDP, as the fiscal data that we used contain elements of inflation. These data were published by the Indonesian Central Bureau of Statistics (CBS). The values of the POPDEN and GRDPCAP variables were converted into natural logged form.
Estimation Model
We employed the fixed effects method to reduce the risk of omitted variable bias. This method was also used by Bergholt and Lujala (2012) to investigate the impact of disasters on economic growth and by Leppänen et al. (2017) to investigate the impact of climate change on regional expenditure. We applied a robust estimation to address the heteroscedasticity issue; this approach also helps to address autocorrelation (Cermeño and Grier 2006). The post-disaster financing process for rehabilitation and reconstruction should be viewed in the medium/long term, since the reconstruction of buildings and the procurement of goods following a disaster are time-consuming processes. To accommodate the delay in post-disaster financing, we used an estimation model with one- and two-year lags. To test robustness, we also included a time fixed effect (in two additional models). The models used in this study are:
Model 1, which uses Eq. 2, is a severity function in which each regressor interacts partially with the dependent variables, in line with Tang et al. (2019).
$$y_{i,t}^{l,m,k} = \alpha_{0} + \sum_{j = 0}^{2} \beta_{j} DM_{k,i,t - j} + \sum_{g = 1}^{2} \gamma_{g} X_{g,i,t} + \theta_{i} + \varepsilon_{it} \tag{2}$$
Model 2, which uses Eq. 3, is a severity function in which all regressors interact simultaneously with the dependent variables, in line with Shabnam (2014) and Unterberger (2017).
$$y_{i,t}^{l,m} = \alpha_{0} + \sum_{j = 0}^{2} \sum_{k = 1}^{7} \sigma_{j} DM_{k,i,t - j} + \sum_{g = 1}^{2} \gamma_{g} X_{g,i,t} + \theta_{i} + \varepsilon_{it} \tag{3}$$
Here $y_{i,t}$ is the dependent variable in region $i$ and period $t$. The index $l = 1,\dots,6$ identifies the dependent variable: (1) budgetary solvency, (2) local own-source revenue, (3) social assistance expenditure, (4) unexpected expenditure, (5) capital expenditure, and (6) consumption expenditure. The index $m = 1, 2$ is the level of government, where 1 = district level and 2 = provincial level. $\alpha_{0}$ is the usual constant term, and $j$ is the lag in years. $DM$ is a variable that indicates the damage or severity of disasters, with $k = 1,\dots,7$ indexing the type of damage: (1) fatalities, (2) affected people, (3) damaged houses, (4) damaged public buildings, (5) damaged private buildings, (6) damaged roads, and (7) damaged forests. All data were normalized: fatalities, affected people, and damaged houses were divided by total population; damaged public buildings and damaged private buildings were divided by GRDP; damaged roads were divided by population density; and damaged forests were divided by total area. $X_{g}$ is a control variable, where $g = 1$ for population density (POPDEN) and $g = 2$ for GRDP per capita (GRDPCAP). $\theta_{i}$ is a region fixed effect, and $\varepsilon_{it}$ is an error term.
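A minimal sketch of the Model 1 regression in Eq. 2 is given below, using the linearmodels package on synthetic data. The variable names, the choice of fatalities per capita as the damage measure DM, and the simulated data frame are all assumptions for illustration, not the authors' actual code.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Build a synthetic region-year panel with (hypothetical) columns mirroring
# the paper's variables: budgetary solvency, normalized fatalities, controls.
rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [[f"district_{i}" for i in range(50)], range(2010, 2019)],
    names=["region", "year"],
)
df = pd.DataFrame({
    "budgetary_solvency": rng.gamma(2.0, 0.3, len(idx)),
    "fatalities_pc": rng.exponential(1e-5, len(idx)),  # fatalities / population
    "popden": rng.lognormal(5, 1, len(idx)),
    "grdp_cap": rng.lognormal(10, 0.5, len(idx)),
}, index=idx)

# Contemporaneous value plus one- and two-year lags of the damage measure,
# mirroring the j = 0, 1, 2 terms in Eq. 2.
df["fatalities_pc_l1"] = df.groupby(level="region")["fatalities_pc"].shift(1)
df["fatalities_pc_l2"] = df.groupby(level="region")["fatalities_pc"].shift(2)
df["ln_popden"] = np.log(df["popden"])
df["ln_grdp_cap"] = np.log(df["grdp_cap"])

# EntityEffects corresponds to the region fixed effect theta_i in Eq. 2.
model = PanelOLS.from_formula(
    "budgetary_solvency ~ 1 + fatalities_pc + fatalities_pc_l1 + fatalities_pc_l2"
    " + ln_popden + ln_grdp_cap + EntityEffects",
    data=df.dropna(),
)
print(model.fit(cov_type="robust"))  # heteroscedasticity-robust standard errors
```

The robust covariance mirrors the paper's heteroscedasticity correction, and adding TimeEffects to the formula would reproduce the time-fixed-effect robustness check.
We divided this section into three parts: descriptive statistics, estimation results using the regression model, and limitations of this study.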
We used descriptive statistics to examine the data in general, focusing on the classification of each indicator used, such as the fiscal indicators and the disaster severity indicators. We present our main results in the estimation results section. This study can still be developed further; thus, in the final part of this section, we discuss its limitations and potential extensions.
Statistics of Fiscal and Disaster Severity Indicators
Disaster impacts from 2010–2018 are shown in Figs. 1 and 2. The two figures show that there is no significant difference in the pattern of the distribution of the casualties (fatalities and affected people) parameter and the building damage parameter, except in the central Kalimantan area, where the casualties parameter is low but the building damage parameter is high. There is a similar pattern between casualties and building damage in other areas. These two parameters are evenly distributed throughout Java and Sulawesi. They are equally distributed in the southwest of Sumatra. On Papua, they are higher in the north than in the south.
Fig. 1 Total ratio of casualties (fatalities and affected people) to population in Indonesia in 2010–2018. Source National Agency for Disaster Management (NADM) and Indonesian Central Bureau of Statistics (CBS); authors' calculations
Fig. 2 Total ratio of building damage (public and private) to gross regional domestic product (GRDP) in Indonesia in 2010–2018.
Multiple previous studies have shown that geographical conditions influence the impact of disasters on the economy (Lis and Nickel 2010; Skidmore and Toya 2013; Deryugina 2017). Regional elevation may be used as one geographical indicator to assess the impact of disasters. Escaleras and Register (2012) and Skidmore and Toya (2013) employed regional average elevation as an independent variable in their models. We did not include elevation as an independent variable in our model, since topography in Indonesia varies even within the same district. However, we classified each district into lowland and highland categories to determine whether there is a statistical relationship between elevation and disaster severity. Following the criteria used by the Indonesia Geospatial Information Agency, districts were categorized as lowland (elevation less than 500 m above sea level) or highland (elevation more than 500 m above sea level); we adopted this classification using the data provided by the CBS (Table 1).
Table 1 Disaster impact at the district level by elevation category in Indonesia in 2010–2018
The averages for fatalities, buildings damaged, and roads damaged in lowland districts are higher than those in highland districts (see Table 1). This finding contrasts with that of Skidmore and Toya (2013), who found that the death toll in highland countries is higher than that in lowland countries. Their study looked at various types of hazards in both types of countries. Common hazards in lowland countries include floods and landslides, while hazards in highland countries include volcanic eruptions and earthquakes. In the context of Indonesia, the frequency of disasters is higher in the lowland regions than in the highland regions. The types of disasters that generate significant numbers of casualties and damaged buildings, such as those involving earthquakes and tsunamis, are more common in lowland regions.
The impact of disasters on fiscal balance depends on population density (Ibarraran et al. 2009; Deryugina 2017; Bisogno et al. 2019). Java is Indonesia's most populous island, with a population density eight times the national average; the island covers just 6.75% of the overall area of Indonesia, yet according to the 2020 population census conducted by CBS, 56.1% of Indonesia's 270.2 million people live on Java. Based on these facts, we divided our data into two segments: on Java and outside Java (Table 2). The averages for disaster frequency, casualties, and buildings damaged are higher on Java than outside Java. This finding is in line with Ibarraran et al. (2009), who found that higher population density leads to higher death tolls. However, the ratio of casualties to population (the total of the fatalities and affected people variables) and the ratio of buildings damaged (the total of the houses damaged, private buildings damaged, and public buildings damaged variables) to GRDP are higher outside Java than on Java. Importantly, disasters with a high degree of severity are more common outside Java than on Java.
Table 2 Statistics of all variables at the district level on Java and outside Java in 2010–2018
Additionally, fatalities may be influenced by the level of trade openness, education, and inequality (Ibarraran et al. 2009; Skidmore and Toya 2013). People with low incomes and education levels generally lack the ability to adapt to and cope with disasters. Moreover, poor people who live in disaster-prone areas may be unable to migrate due to financial constraints. Because trade openness is linked to technology transfer, it can also affect the number of disaster victims (Skidmore and Toya 2013; Panwar and Sen 2019). Countries with a high level of trade openness find it easier to import technology from more technologically advanced countries, and they can use that technology to improve disaster-related safety tools and infrastructure, thereby reducing disaster-related fatalities.
The average budgetary solvency ratio is higher on Java than outside Java, as Java's large population produces higher local own-source revenue than districts outside Java. Budgetary solvency outside Java has a very wide range of values, from a minimum of 0.003 to a maximum of 3.345. This suggests disparities in fiscal capacity between districts outside Java.
The numbers of casualties and buildings damaged by disasters in Indonesia have fluctuated but show a rising trend over the last four years (Fig. 3). The proportion of casualties in the population shows an increasing trend between 2015 and 2018. The ratio of total buildings damaged to total disasters shows a similar trend.
Fig. 3 Trend of the number of casualties (fatalities and affected people) and buildings damaged by disasters in Indonesia in 2010–2018. Source National Agency for Disaster Management (NADM); authors' calculations
Disasters in Indonesia most commonly involve flooding; the most impactful disasters, however, involve a combination of earthquake and tsunami (Table 3). Flooding resulted in the highest number of affected people, but combined earthquakes and tsunamis resulted in the highest death toll. While this combination occurred just five times between 2010 and 2018, it resulted in enormous numbers of deaths, affected people, and damaged buildings.
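The Java / outside-Java segmentation behind Table 2 amounts to a grouped summary of normalized ratios. A minimal pandas sketch with made-up numbers is shown below; the real NADM/CBS dataset is not public, so the districts and figures here are purely illustrative.

```python
import pandas as pd

# Illustrative rows only; the island flag and all figures are invented.
df = pd.DataFrame({
    "district": ["Bandung", "Sleman", "Padang", "Palu"],
    "island": ["Java", "Java", "Sumatra", "Sulawesi"],
    "casualties": [1200, 800, 900, 5000],
    "population": [2_500_000, 1_100_000, 950_000, 370_000],
})

# Segment districts exactly as in Table 2: on Java vs. outside Java.
df["segment"] = (df["island"] == "Java").map({True: "Java", False: "Outside Java"})

# Normalize casualties by population before comparing segments.
df["casualty_ratio"] = df["casualties"] / df["population"]
print(df.groupby("segment")["casualty_ratio"].describe())
```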
Table 3 Number of casualties and building damage stemming from each type of disaster in Indonesia in 2010–2018
Estimation Results
This research used the fixed effects method. The results of the estimation model examining the effect of disasters on fiscal balance at the district level are shown in Table 4, while the effect at the provincial level is shown in Table 5. Almost all disaster severity variables show significant negative values for fiscal balance at both the district and provincial levels. Several variables, including fatalities, affected people, damaged houses, damaged private buildings, and damaged roads, negatively affect fiscal balance at the district level. At the provincial level, damaged roads and damaged forests both show a significant negative value for budgetary solvency. Only the affected people variable indicates a significantly positive effect on budgetary solvency. To examine the cause of this difference, we estimated each fiscal component (see Table 4).
Table 4 Estimated effects of disaster severity on budgetary solvency variables at the district level in Indonesia in 2010–2018
Table 5 Estimated effects of disaster severity on budgetary solvency variables at the provincial level in Indonesia in 2010–2018
We regressed all independent variables simultaneously (Model 2) to test robustness; this is shown in Table 4 (column 7) for data at the district level and Table 5 (column 8) for data at the provincial level. Our estimated results suggest that the directions of the effects in the simultaneous model are consistent with those in the partial model. The damaged roads variable consistently has a significant negative impact on budgetary solvency at the district level. The damaged roads and damaged forests variables consistently have a significant negative impact on budgetary solvency at the provincial level.
Population density and GRDP per capita are used as control variables at all government levels. Both districts and provinces show significant values, though in different directions. For data at the district level, population density and GRDP per capita show a positive direction toward budgetary solvency. For data at the provincial level, population density shows a positive direction toward budgetary solvency, but GRDP per capita shows a negative direction. This directional difference likely stems from the differences in tax types between the two government levels. Provincial government taxes are mostly derived from taxes related to motor vehicles, and an increase in GRDP per capita is not always accompanied by an increase in the number of vehicles or the use of fuel.
Fatalities significantly affect fiscal balance at the district level in the year in which disasters occur (see Table 4, column 1); however, this significant relationship does not exist at the provincial level. The risk to fiscal balance at the district level arises from the decrease in local own-source revenue and the increase in spending on goods and services. Although fiscal balance at the provincial level does not show a significant value (see Table 5, column 1), fatalities significantly lower local own-source revenue and increase unexpected spending. This decline in own-source revenue is in line with Miao et al. (2018), who found that disasters significantly reduce government revenues from property and sales taxes due to a decrease in income and consumption. Miao et al. (2018) argued that disasters can also lead to the migration of people from disaster-prone areas to other regions.
This finding is consistent with Benali et al. (2019), who found that economic contraction stemming from disasters may decrease the government's capacity to earn revenue from standard tax collection. The increase in local own-source revenue in the year following a disaster is in line with Benson and Clay's (2004) assessment that disasters cause those who are affected to postpone their tax payments.
The affected people variable negatively and significantly affects fiscal balance at the district level in the year in which disasters occur (see Table 4, column 2); it positively and significantly affects fiscal balance at the provincial level one year after the disaster occurred. Increased consumption expenditure harms fiscal balance, resulting in the negative effect at the district level. The positive effect of disasters on fiscal balance at the provincial level stems from an increase in local own-source revenue and a decrease in consumption expenditure at a one-year lag (see Table 5, column 2). The decline in consumption expenditure is thought to be due to the reallocation of funds to other expenditures, such as social assistance and capital (although neither effect is significant).
Houses damaged by disasters significantly and negatively affect fiscal balance at the district level in the year in which disasters occur (see Table 4, column 3); however, they do not significantly affect fiscal balance at the provincial level (see Table 5, column 3). The decrease at the district level is due to increased consumption expenditure. While the effect on fiscal balance at the provincial level is insignificant, our model shows that disasters increase unexpected expenditure at the provincial level.
The damaged public buildings variable has no significant effect on fiscal balance at either the district level or the provincial level (see Table 4, column 4; Table 5, column 4). Capital expenditure is closely related to financing for the reconstruction process. Based on the estimated results, the number of damaged public buildings significantly lowers capital expenditure at the district level. This suggests that the fiscal burden for reconstruction is borne by the provincial level, or even the central level, rather than the district level. Capital expenditure at the provincial level shows a positive direction at a one-year lag, though the value is insignificant. This may indicate two things: first, the reconstruction cost is not very high relative to the total cost of development at the provincial level; second, the reconstruction process takes a long time, as it consists of both planning and implementation (Benson and Clay 2004; Mochizuki et al. 2015). As a result, funding for reconstruction cannot be captured in our model, which uses a two-year lag. A lag greater than two years would reduce the number of observations, so further research using a longer observation period would be needed.
The damaged private buildings variable reduces fiscal balance at the district level in the same year and in the two years that follow (see Table 4, column 5); however, it does not significantly affect fiscal balance at the provincial level (see Table 5, column 5). The decrease in fiscal balance at the district level stems from a decrease in local own-source revenue and an increase in consumption expenditure. Provincial governments also experience a decline in local own-source revenue due to the decrease in output borne by the private sector.
These results are consistent with Narayan (2003), who found that the 2003 Cyclone Ami in Fiji prompted a decrease in private-sector economic activity, reflected in declining investment and exports.
The damaged roads variable significantly and negatively affects fiscal balance at the district and provincial levels (see Table 4, column 6; Table 5, column 6) through an increase in all types of expenditure, including social assistance expenditure, unexpected expenditure, capital expenditure, and consumption expenditure. Capital expenditure on road reconstruction increases significantly during the year in which a disaster occurs. This finding suggests that local governments at both the district and provincial levels are quite responsive in terms of road restoration.
The damaged forests variable significantly and negatively affects fiscal balance at the provincial level (see Table 5, column 7). This negative impact stems from a decrease in local own-source revenue and increases in both social expenditure and capital expenditure. This is in line with Calkin et al. (2005), who found that spending by the central government in the United States between 1970 and 2002 was proportional to the increase in forest area burned. To prevent the adverse effects of forest fires on fiscal balance, local governments need fire-prevention efforts conducted jointly with landowners and local communities.
Disasters may increase the burden of capital expenditure for two reasons. First, disasters can delay development projects, potentially burdening fiscal balance in the year following a disaster. Our estimation in Table 6, panel 1 (fatalities) shows that disasters significantly lower capital expenditure during the year in which a disaster occurs but increase capital expenditure the following year. This suggests that disasters commonly delay preplanned development projects (Benson and Clay 2004). Second, disasters may increase capital expenditure due to the need for reconstruction and repair. The road reconstruction process can be completed quickly by district and provincial governments. This result is illustrated in Table 6, panel 6: the damaged roads variable positively and significantly affects capital expenditure during the year in which a disaster occurs. However, the reconstruction processes for other forms of public infrastructure may take far longer (Benson and Clay 2004). This result can be seen in Table 6, panel 2 (affected people variable), which shows that disasters significantly increase capital expenditure at the district level in the year following a disaster. These results differ from those of Ouattara et al. (2018), who found that disasters do not significantly affect capital expenditure in the Caribbean. Other resources, such as international aid, are a likely reason for the differences between the two studies.
Table 6 Estimated result of disaster severity indicators on all fiscal variables (partial model) in Indonesia in 2010–2018
In addition to the partial model, we also ran a simultaneous model using Eq. 3. The results of this model can be seen in Table 4, column 7 for the district level and Table 5, column 8 for the provincial level. Our findings suggest that disasters have a strong negative impact on fiscal balance at both the district and provincial levels. These effects can be seen in the damaged roads variable. Based on these findings, local governments should adjust their fiscal policies in order to minimize higher disaster-related expenditures.
One policy option is to provide disaster insurance. Another is for local governments to issue municipal bonds. Disaster bonds may also be issued by central governments to finance post-disaster reconstruction.
We also estimated the fixed effects model including a time fixed effect to check robustness for the data at the district level; the results are presented in Table 7. The results of the model without the time fixed effect are consistent with those of the model that includes it, even though the number of significant variables decreases in the model that includes the time fixed effect. Fatalities, private buildings damaged, and roads damaged show the same direction in the estimation model with or without the time fixed effect.
Table 7 Estimated effects of disaster severity on budgetary solvency variables at the district level (includes time fixed effect)
Limitations of this Study
This study has some limitations. First, we used data on the number of buildings damaged without calculating their economic value. Similar research has used the economic value of physical damage, which might more accurately reflect the magnitude of the disaster. Nevertheless, without detailed information, it is difficult to quantify the economic value of building damage. Second, we used annual data due to limited data availability. The results would have been more precise if we had used monthly data. Disasters that occur at the beginning of a year have a larger impact on fiscal conditions in that year than those that occur at the end of a year, whose impacts would be seen in the following fiscal year (Benali et al. 2019).
For Indonesia, disasters are shown to strain fiscal balance at both the district and provincial levels. At the district and provincial levels, the damaged roads variable demonstrated robust negative effects on fiscal balance. This decline in fiscal balance is largely due to a decrease in local own-source revenue and increases in social assistance expenditure, unexpected expenditure, capital expenditure, and consumption expenditure. At the district level, the riskiest type of expenditure to be increased is consumption expenditure; at the provincial level, it is unexpected expenditure.
The findings of this study should encourage local governments to develop financing mechanisms to stabilize their fiscal operations amid disasters. Disaster-prevention planning through medium-term or long-term financing schemes is necessary to mitigate the adverse impacts of disasters on fiscal balance. Other financing instruments, such as disaster insurance or bond issuance, should be considered by all governments. While local governments have several options for post-disaster finance, the process of rehabilitation and reconstruction should not be delayed, so that output recovers faster (Bevan and Cook 2015).
Alam, M., and A. Hoque. 2019. Spending pressure, revenue capacity and financial condition in municipal organizations: An empirical study. The Journal of Developing Areas 53(1): 243–256.
Benali, N., M.B. Mbarek, and R. Feki. 2019. Natural disaster, government revenues and expenditures: Evidence from high and middle-income countries. Journal of the Knowledge Economy 10(2): 695–710.
Benson, C., and E.J. Clay. 2004. Understanding the economic and financial impacts of natural disasters. Disaster Risk Management Series no. 4. Washington, DC: World Bank.
Bergholt, D., and P. Lujala. 2012. Climate-related natural disasters, economic growth, and armed civil conflict. Journal of Peace Research 49(1): 147–162.
Bevan, D., and S. Cook. 2015. Public expenditure following disasters. World Bank Policy Research Working Paper no. 7355. https://doi.org/10.1596/1813-9450-7355. Accessed 9 Sept 2021.
Bisogno, M., B. Cuadrado-Ballesteros, S. Santis, and F. Citro. 2019. Budgetary solvency of Italian local governments: An assessment. International Journal of Public Sector Management 32(2): 122–141.
Botzen, W.J.W., O. Deschenes, and M. Sanders. 2019. The economic impacts of natural disasters: A review of models and empirical studies. Review of Environmental Economics and Policy 13(2): 167–188.
Calkin, D.E., K.M. Gebert, J.G. Jones, and R.P. Neilson. 2005. Forest service large fire area burned and suppression expenditure trends, 1970–2002. Journal of Forestry 103(4): 179–183.
Cermeño, R., and K.B. Grier. 2006. Conditional heteroskedasticity and cross-sectional dependence in panel data: An empirical study of inflation uncertainty in the G7 countries. In Panel data econometrics: Theoretical contributions and empirical applications, ed. B.H. Baltagi, 259–277. Amsterdam: Elsevier.
Chapman, J.I. 2008. State and local fiscal sustainability: The challenges. Public Administration Review 68(1): 115–131.
Chhibber, A., and R. Laajaj. 2008. Disasters, climate change and economic development in Sub-Saharan Africa: Lessons and directions. Journal of African Economies 17(Supplement 2): ii7–ii49.
Deryugina, T. 2017. The fiscal cost of hurricanes: Disaster aid versus social insurance. American Economic Journal: Economic Policy 9(3): 168–198.
Escaleras, M., and C.A. Register. 2012. Fiscal decentralization and natural hazard risks. Public Choice 151(1–2): 165–183.
Honadle, B.W., J.M. Costa, and B.A. Cigler. 2004. Fiscal health for local governments. Amsterdam: Elsevier.
Ibarraran, M.E., M. Ruth, S. Ahmad, and M. London. 2009. Climate change and natural disasters: Macroeconomic performance and distributional impacts. Environment, Development and Sustainability 11(3): 549–569.
Jimenez, B.S. 2009. Fiscal stress and the allocation of expenditure responsibilities between state and local governments: An exploratory study. State and Local Government Review 41(2): 81–94.
Klomp, J., and K. Valckx. 2014. Natural disasters and economic growth: A meta-analysis. Global Environmental Change 26: 183–195.
Lee, D., H. Zhang, and C. Nguyen. 2018. The economic impact of natural disasters in Pacific island countries: Adaptation and preparedness. IMF Working Papers Vol. 18. Washington, DC: International Monetary Fund. https://doi.org/10.5089/9781484353288.001. Accessed 6 Sept 2020.
Leppänen, S., L. Solanko, and R. Kosonen. 2017. The impact of climate change on regional government expenditures: Evidence from Russia. Environmental and Resource Economics 67(1): 67–92.
Lis, E.M., and C. Nickel. 2010. The impact of extreme weather events on budget balances. International Tax and Public Finance 17(4): 378–399.
Medina, L. 2018. Assessing fiscal risks in Bangladesh. Asian Development Review 35(1): 196–222.
Miao, Q., Y. Hou, and M. Abrigo. 2018. Measuring the financial shocks of natural disasters: A panel study of U.S. states. National Tax Journal 71(1): 11–44.
Ministry of Finance. 2018. Disaster risk financing and insurance strategy. Jakarta, Indonesia: Fiscal Policy Agency.
Mochizuki, J., S. Vitoontus, B. Wickramarachchi, S. Hochrainer-Stigler, K. Williges, R. Mechler, and R. Sovann. 2015. Operationalizing iterative risk management under limited information: Fiscal and economic risks due to natural disasters in Cambodia. International Journal of Disaster Risk Science 6(4): 321–334.
Narayan, P.K. 2003. Macroeconomic impact of natural disasters on a small island economy: Evidence from a CGE model. Applied Economics Letters 10(11): 721–723.
Noy, I., and C. Edmonds. 2019. Increasing fiscal resilience to disasters in the Pacific. Natural Hazards 97(3): 1375–1393.
Noy, I., and A. Nualsri. 2011. Fiscal storms: Public spending and revenues in the aftermath of natural disasters. Environment and Development Economics 16(1): 113–128.
Ouattara, B., E. Strobl, J. Vermeiren, and S. Yearwood. 2018. Fiscal shortage risk and the potential role for tropical storm insurance: Evidence from the Caribbean. Environment and Development Economics 23(6): 702–720.
Panwar, V., and S. Sen. 2019. Economic impact of natural disasters: An empirical re-examination. Margin: The Journal of Applied Economic Research 13(1): 109–139.
Shabnam, N. 2014. Natural disasters and economic growth: A review. International Journal of Disaster Risk Science 5(2): 157–163.
Skidmore, M., and H. Toya. 2013. Natural disaster impacts and fiscal decentralization. Land Economics 89(1): 101–117.
Strulik, H., and T. Trimborn. 2019. Natural disasters and macroeconomic performance. Environmental and Resource Economics 72(4): 1069–1098.
Tang, R., J. Wu, M. Ye, and W. Liu. 2019. Impact of economic development levels and disaster types on the short-term macroeconomic consequences of natural hazard-induced disasters in China. International Journal of Disaster Risk Science 10(3): 371–385.
Unterberger, C. 2017. How flood damages to public infrastructure affect municipal budget indicators. Economics of Disasters and Climate Change 2(1): 5–20.
Wu, J., G. Han, H. Zhou, and N. Li. 2018. Economic development and declining vulnerability to climate-related disasters in China. Environmental Research Letters 13(3): 034013.
We thank the Indonesian Endowment Fund for Education (LPDP) for funding this research.
Astrid Wiyanti: Economic Planning and Development Policy, University of Indonesia, Jakarta 10440, Indonesia; Center for Climate Finance and Multilateral Policy, Fiscal Policy Agency, Ministry of Finance of Indonesia, Jakarta 10710, Indonesia.
Alin Halimatussadiah: Department of Economics, University of Indonesia, Depok 16424, Indonesia.
Correspondence to Astrid Wiyanti.
Wiyanti, A., Halimatussadiah, A. Are Disasters a Risk to Regional Fiscal Balance? Evidence from Indonesia. Int J Disaster Risk Sci 12, 839–853 (2021). https://doi.org/10.1007/s13753-021-00374-2
Issue Date: December 2021
Keywords: Budgetary solvency; Disaster impacts; Disaster insurance; Fiscal balance
Is the inductive effect always measured relative to hydrogen?
Wikipedia defines the inductive effect thus: "In chemistry and physics, the inductive effect is an experimentally observable effect of the transmission of charge through a chain of atoms in a molecule, resulting in a permanent dipole in a bond."
Recently, I learned from a teacher that the inductive effect of any atom/group is always measured with respect to hydrogen. Under this definition, we would ideally attach the atom/group to an atom of hydrogen and observe the direction of the dipole moment caused by the difference in electronegativity between hydrogen and the atom/group in question. If the electrons, initially positioned at the middle of the bond, are pushed toward hydrogen, the atom/group is termed a "$+I$", electron-donating group. On the other hand, if the electrons are pulled toward the atom/group, it is termed a "$-I$", electron-withdrawing group. Thereafter, this atom/group will always exhibit this type of inductive effect. Even if a situation arises in which it appears as though an atom/group that exhibits the $-I$ effect with hydrogen is exhibiting the $+I$ effect with another atom/group, this must be caused by other dominating effects such as the mesomeric effect; in those cases too, it is said that the atom exhibits the $-I$ and not the $+I$ effect.
To illustrate the distinction between the definition put forward by the teacher and the one on Wikipedia, let us consider a molecule of $\ce{Cl2}$. Under the Wikipedia definition, $\ce{Cl2}$ is clearly a non-polar molecule (i.e., no permanent dipole moment has been experimentally observed in this molecule). Hence, no inductive effect is in play, and neither of the two $\ce{Cl}$ atoms shows either the $+I$ or $-I$ effect. On the other hand, if we work with the definition put forward by the teacher, we observe that an atom of $\ce{Cl}$ exhibits the $-I$ effect when attached to an atom of hydrogen, being more electronegative than it. Therefore, in the $\ce{Cl2}$ molecule, no permanent dipole exists because the $-I$ effects of the two chlorine atoms cancel from opposite directions.
It appears that only one of these two definitions can be correct, and I am confused as to which one that is.
organic-chemistry electronegativity
Why should one Cl atom exert an inductive effect on the other? They are both equally electronegative, so they will pull the shared electron pair with equal 'force' and so there would be no inductive effect. – Shoubhik Raj Maiti Nov 26 '16 at 6:29
I think this is in principle a problem in terminology. You will always have a (net) inductive effect when you have different electronegativities. The terms +I/-I should only be used in organic chemistry, and there you would indeed compare them to the standard CH bond. – Martin - マーチン♦ Nov 30 '16 at 13:13
@Martin-マーチン Is it possible for you to perhaps elaborate your point and write an answer? – user33789 Dec 1 '16 at 0:29
Your definition is falling apart because you are over-applying it. In the definition, you can see that it says:
...transmission of charge through a chain of atoms in a molecule...
It doesn't make sense to apply this in the case of a two-atom molecule, because the inductive effect is used to measure how one bond affects the electron density of other bonds in the molecule.
You can assign any bond a dipole moment, but inductive effects are used specifically to explain phenomena observed in organic chemistry, where chains of atoms are exceedingly common.

To answer your other question: yes, inductive effects are always measured relative to hydrogen. Linear free-energy relationships are a quantifiable way of measuring a reaction's sensitivity with respect to a certain parameter. The Hammett equation is used to study how sensitive a molecule in a given reaction is to a change in its substituent's inductive and resonance effects, and the reference substituent in all cases is a hydrogen atom. This is perhaps because the $\ce{C-H}$ bond is the most common bond in all of organic chemistry.

Linear free-energy relationships are particularly useful because, similar to what you said, substituent groups will always have the same relative inductive effects no matter the molecule. A quaternary ammonium group will always have a larger $-I$ effect (and larger $\sigma$ value) than a nitro group, which in turn has a larger one than a trifluoromethyl group. A molecule's response to these changes can give great mechanistic insight into organic reactions and the pathways through which they proceed.

[Figure omitted: a table of the different types of Hammett relations and their reference reactions, reproduced from ref. [1].]

[1] Anslyn, E. V.; Dougherty, D. A. Modern Physical Organic Chemistry; University Science: Sausalito, CA, 2006.

Comment (user33789, Dec 2 '16): I'm afraid I don't quite understand the last bit: "Linear free-energy relationships are a quantifiable way of measuring a reaction's sensitivity with respect to a certain parameter." Is it possible to simplify this so that a high-school student can also understand it?

Comment (ringo, Dec 2 '16): The Hammett equation, for instance, measures how sensitive the ionization of various substances is to changing the substituent para to the carboxyl group. There are other relationships that measure things such as how leaving groups or nucleophiles are affected by changing substituents, how the choice of solvent affects the reaction... the list goes on. If you are interested, you can read more in the link I added.
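For reference, the Hammett relation discussed in the answer above can be written out explicitly. The display below is the standard textbook form, supplied here for clarity rather than quoted from the answer; $K_X$ and $K_\mathrm{H}$ are the equilibrium (or rate) constants for the substituted compound and for the unsubstituted (hydrogen) reference, and $\rho$ is the reaction constant:

$$\log_{10}\frac{K_X}{K_\mathrm{H}} = \rho\,\sigma_X$$

Since $\sigma_\mathrm{H} = 0$ by definition, every $\sigma$ value is automatically measured relative to hydrogen, which is exactly the convention the question asks about.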
Answer (Jan):

I myself am having a little bit of trouble wrapping my head around the Wikipedia definition, which they cite from an organic chemistry textbook. I am specifically troubled by the phrase "resulting in a permanent dipole moment." According to all definitions of the inductive effect I know, a methyl group is electron-donating ($+I$). Yet, if you consider para-xylene, the molecule suddenly has a centre of inversion (point group $C_\mathrm{2h}$). Therefore, there is no permanent dipole in para-xylene. (Depending on the methyl rotamers, it might also have $C_\mathrm{2v}$ symmetry, in which case the dipole would be perpendicular to the benzene ring plane; and since there is almost no vertical structure, ignoring the three aliphatic $\ce{C-H}$ bonds, it is predicted to be very small.)

There is no reason to assume that methyl groups suddenly lose their $+I$ effect just because two of them are bonded to two different sides of a benzene ring. The NMR supports the assignment of a double $+I$ substitution, with the aromatic protons of para-xylene displaying chemical shifts of $7.05~\mathrm{ppm}$ (compared to $7.36~\mathrm{ppm}$ in benzene; same solvent, $\ce{CDCl3}$). Obviously, additional shielding is present in spite of there being no observable permanent dipole, which can only be attributed to the methyl groups' $+I$ effect.

The same case can be made for other symmetrically para-substituted benzenes such as para-nitrobenzene ($8.41~\mathrm{ppm}$ in $\ce{C6D12 + C3D6O}$), para-dichlorobenzene ($7.26~\mathrm{ppm}$ in $\ce{CDCl3}$) and others. In these cases, especially dichlorobenzene, the point group is actually predicted to be strictly $D_\mathrm{2h}$, strictly containing inversion.

Thus, when having to choose between your teacher's and Wikipedia's definitions, I would go with your teacher's, because it better explains what we discuss in basically every seminar.

Answer (krishna mohan .M):

On the Pauling electronegativity scale, H is the standard (reference) atom. The electron pairs shared between carbon atoms in a carbon chain carry a certain excess of electron charge from the $\ce{C-H}$ sigma-bonded MOs. Another atom's or group's influence on this excess share can either strengthen or weaken the electron-cloud density on the carbon chain. All $+I$ groups increase this density to a considerable extent on the immediately bonded atom, and the effect dissipates rapidly as you go farther along the carbon chain. Similarly, $-I$ groups have the opposite effect. I hope I have conveyed the idea!
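As a rough numerical companion to the Hammett discussion above, here is a small Python sketch that predicts the pKa of some para-substituted benzoic acids from sigma-para constants. The sigma values and the benzoic acid pKa are approximate figures from standard tables, assumed here rather than taken from this thread; $\rho = 1$ holds by convention for the defining ionization reaction:

```python
# Hammett relation: log10(K_X / K_H) = rho * sigma_X,
# hence pKa_X = pKa_H - rho * sigma_X (hydrogen is the reference, sigma_H = 0).

pKa_H = 4.20  # pKa of benzoic acid in water at 25 C (standard value)
rho = 1.00    # benzoic acid ionization defines the scale, so rho = 1

# Approximate sigma_para constants from standard tables (illustrative values)
sigma_p = {"H": 0.00, "OCH3": -0.27, "CH3": -0.17, "Cl": 0.23, "NO2": 0.78}

for group, sigma in sigma_p.items():
    pKa = pKa_H - rho * sigma
    name = "benzoic acid" if group == "H" else f"4-{group}-benzoic acid"
    print(f"{name}: predicted pKa = {pKa:.2f}")
```

Electron-withdrawing ($-I$) groups have positive $\sigma$ and make the acid stronger (lower pKa); electron-donating ($+I$) groups do the opposite, which is the sign convention the first answer relies on.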
Results for 'Nina Liu' (1-50 of 999)

Dynamic Characteristics of Metro Tunnel Closely Parallel to a Ground Fissure. Nina Liu, Quanzhong Lu, Xiaoyang Feng, Wen Fan, Jianbing Peng, Weiliang Liu & Xin Kang - 2019 - Complexity 2019:1-11.

Liu, Liangjian 劉梁劍, Heaven, Humans, and Boundary: An Exposition of Wang Chuanshan's Metaphysics 天·人·際·對王船山的形上學闡明. [REVIEW] JeeLoo Liu - 2009 - Dao: A Journal of Comparative Philosophy 8 (1):105-108.

Interpreting Chinese Education: A Review of Liu, Ross, and Kelly's The Ethnographic Eye: An Interpretive Study of Education in China. [REVIEW] X. Liu - 2004 - Journal of Thought 39 (1):147-150.

Liu Ben Wen Ji. Ben Liu - 2008 - Zhongguo She Hui Ke Xue Chu Ban She.

Liu Shiying Ji: Luo Ji Yu Zhe Xue Yan Jiu. Shiying Liu - 2010 - Xian Zhuang Shu Ju.

Liu Shipei Ru Xue Lun Ji. Shipei Liu - 2010 - Sichuan da Xue Chu Ban She.

Liu Xianqi Wen Ji. Xianqi Liu - 2008 - Jilin da Xue Chu Ban She.

Liu Xianxin Xue Shu Lun Ji. Xianxin Liu - 2010 - Guangxi Shi Fan da Xue Chu Ban She.

Ru Xue, Wen Hua Yu Zong Jiao: Liu Shuxian Xian Sheng Qi Zhi Shou Qing Lun Wen Ji. Shuxian Liu, Minghui Li, Haiyan Ye & Zongyi Zheng (eds.) - 2006 - Taiwan Xue Sheng Shu Ju.

Xiu Yuan Zhi Lu: Xianggang Zhong Wen da Xue Zhe Xue Xi Liu Shi Zhou Nian Xi Qing Lun Wen Ji. Guoying Liu & Canhui Zhang (eds.) - 2009 - Zhong Wen da Xue Chu Ban She.

Consciousness and the Self: New Essays. JeeLoo Liu & John Perry (eds.) - 2011 - Cambridge University Press.
"I never can catch myself at any time without a perception, and never can observe any thing but the perception." These famous words of David Hume, on his inability to perceive the self, set the stage for JeeLoo Liu and John Perry's collection of essays on self-awareness and self-knowledge. This volume connects recent scientific studies on consciousness with the traditional issues about the self explored by Descartes, Locke and Hume. Experts in the field offer contrasting perspectives on matters such as the relation between consciousness and self-awareness, the notion of personhood and the epistemic access to one's own thoughts, desires or attitudes. The volume will be of interest to philosophers, psychologists, neuroscientists, cognitive scientists and others working on the central topics of consciousness and the self.

Using Reinforcement Learning to Examine Dynamic Attention Allocation During Reading. Yanping Liu, Erik D. Reichle & Ding-Guo Gao - 2013 - Cognitive Science 37 (8):1507-1540.
A fundamental question in reading research concerns whether attention is allocated strictly serially, supporting lexical processing of one word at a time, or in parallel, supporting concurrent lexical processing of two or more words (Reichle, Liversedge, Pollatsek, & Rayner, 2009). The origins of this debate are reviewed. We then report three simulations to address this question using artificial reading agents (Liu & Reichle, 2010; Reichle & Laurent, 2006) that learn to dynamically allocate attention to 1–4 words to "read" as efficiently as possible. These simulation results indicate that the agents strongly preferred serial word processing, although they occasionally attended to more than one word concurrently. The reason for this preference is discussed, along with implications for the debate about how humans allocate attention during reading.

Translating and Transforming Utopia Into the Mandarin Context: Case Studies From China and Taiwan. Yi-Chun Liu - 2016 - Utopian Studies 27 (2):333.
While the English translation of Thomas More's Utopia first appeared in 1551, and enjoyed periodic cycles of reincarnation through several retranslations in the centuries to come, a Mandarin edition was not attempted until the mid-1930s. As of 2016 (five hundred years after Utopia was first published) ten editions and nine translators have been involved in various efforts to transfer More's canonical work into the Mandarin linguistic and cultural context. The very first Mandarin Chinese edition was translated by Liú Línshēng, published in 1935 by the Taiwan Commercial Press. This edition inaugurated the upsurge in translations of Utopia in the mid-twentieth century. Three editions were produced at this time: Dài...

Phil 225 Philosophy of the Arts, Fall 1995. JeeLoo Liu - manuscript.
Class meeting time: T R 9:55-11:10 AM. Instructor: JeeLoo Liu. Office location: Welles 107. Office hours: M W 2-4 PM. E-mail: ...

Aesthetics and Marxism: Chinese Aesthetic Marxists and Their Western Contemporaries. Kang Liu - 2000 - Duke University Press.
Although Chinese Marxism (primarily represented by Maoism) is generally seen by Western intellectuals as monolithic, Liu Kang argues that its practices and projects are as diverse as those in Western Marxism, particularly in the area of aesthetics. In this comparative study of European and Chinese Marxist traditions, Liu reveals the extent to which Chinese Marxists incorporate ideas about aesthetics and culture in their theories and practices. In doing so, he constructs a wholly new understanding of Chinese Marxism. Far from being secondary considerations in Chinese Marxism, aesthetics and culture are in fact principal concerns. In this respect, such Marxists are similar to their Western counterparts, although Europeans have had little understanding of the Chinese experience. Liu traces the genealogy of aesthetic discourse in both modern China and the West since the era of classical German thought, showing where conceptual modifications and divergences have occurred in the two traditions. He examines the work of Mao Zedong, Lu Xun, Li Zehou, Qu Qiubai, and others in China, and from the West he discusses Kant, Schiller, Schopenhauer, and Marxist theorists including Horkheimer, Adorno, Benjamin, and Marcuse. While stressing the diversity of Marxist positions within China as well as in the West, Liu explains how ideas of culture and aesthetics have offered a constructive vision for a postrevolutionary society and have affected a wide field of issues involving the problems of modernity. Forcefully argued and theoretically sophisticated, this book will appeal to students and scholars of contemporary Marxism, cultural studies, aesthetics, and modern Chinese culture, politics, and ideology.

My Humanist Detour From China to the United States. Wendy Liu - 2012 - Essays in the Philosophy of Humanism 20 (1):57-68.
I would describe myself as an accidental humanist, if not atheist. That was very much how I felt when I found myself on June 4, 2010, standing at the podium of the sixty-ninth annual conference of the American Humanist Association. I was receiving the Humanist Pioneer Award. But what did I do to deserve the honor? The golden letters on the beautifully crafted award said: "To Wendy Liu for her pioneering work that advances Humanist values and critical thought through cross cultural perspectives on American Society." The "pioneering work" presumably meant my writings on US-China related topics, especially the collection of essays on my understanding of America from a Chinese and non-religious angle. As an aspiring writer, I was happy to be recognized for anything, not to say that particular angle. But that angle, with which I stumbled my way to the San Jose conference, was not an accident. It had come a long way with me on a journey starting in Xian, China, my hometown. Talking about Xian, the terracotta warriors of Qin Shihuang, the first Emperor of China, would probably come to one's mind. Visitors have marveled at the work of ancient artisans, especially the rendering of individual facial features of the clay soldiers in eternity. In contrast to that humanistic touch was the cruelty of Emperor Qin, who ordered that upon his death the entrance to the underground mausoleum be sealed on completion, entombing the laborers inside to keep it secret. This is a brief but telling picture of humanism vs. tyranny in the once-upon-a-time China.

Book Review. [REVIEW] Jeeloo Liu - 2009 - Dao: A Journal of Comparative Philosophy 8:105-108.
Reviews Liu, Liangjian 劉梁劍, Heaven, Humans, and Boundary: An Exposition of Wang Chuanshan's Metaphysics 天·人·際·對王船山的形上學闡明. Shanghai 上海: Shanghai Renmin Chubanshe 上海人民出版社, 2007, 12+271 pages.

A Robust Defence of the Doctrine of Doing and Allowing. Xiaofei Liu - 2012 - Utilitas 24 (1):63-81.
Philosophers debate over the truth of the Doctrine of Doing and Allowing, the thesis that there is a morally significant difference between doing harm and merely allowing harm to happen. Deontologists tend to accept this doctrine, whereas consequentialists tend to reject it. A robust defence of this doctrine would require a conceptual distinction between doing and allowing that both matches our ordinary use of the concepts in a wide range of cases and enables a justification for the alleged moral difference. In this article, I argue not only that a robust defence of this doctrine is available, but also that it is available within a consequentialist framework.

Dynamic Logic of Preference Upgrade. Johan van Benthem & Fenrong Liu - 2007 - Journal of Applied Non-Classical Logics 17 (2):157-182.
Statements not only update our current knowledge, but also have other dynamic effects. In particular, suggestions or commands 'upgrade' our preferences by changing the current order among worlds. We present a complete logic of knowledge update plus preference upgrade that works with dynamic-epistemic-style reduction axioms. This system can model changing obligations, conflicting commands, or 'regret'. We then show how to derive reduction axioms from arbitrary definable relation changes. This style of analysis also has a product update version with preferences between actions, as well as worlds. Some illustrations are presented involving defaults and obligations. We conclude that our dynamic framework is viable, while admitting a further extension to more numerical 'utility update'.

Love of Money and Unethical Behavior Intention: Does an Authentic Supervisor's Personal Integrity and Character Make a Difference? [REVIEW] Thomas Li-Ping Tang & Hsi Liu - 2012 - Journal of Business Ethics 107 (3):295-312.
We investigate the extent to which perceptions of the authenticity of supervisor's personal integrity and character (ASPIRE) moderate the relationship between people's love of money (LOM) and propensity to engage in unethical behavior (PUB) among 266 part-time employees who were also business students in a five-wave panel study. We found that a high level of ASPIRE perceptions was related to high love-of-money orientation, high self-esteem, but low unethical behavior intention (PUB). Unethical behavior intention (PUB) was significantly correlated with their high Machiavellianism, low self-esteem, and low intrinsic religiosity. Our counterintuitive results revealed that the main effect of LOM on PUB was not significant, but the main effect of ASPIRE on PUB was significant. Further, the significant interaction effect between LOM and ASPIRE on unethical behavior intention provided profoundly interesting findings: High LOM was related to high unethical behavior intention for people with low ASPIRE, but was related to low unethical intention for those with high ASPIRE. People with high LOM and low ASPIRE had the highest unethical behavior intention, whereas those with high LOM and high ASPIRE had the lowest. We discuss results in light of individual differences, ethical environment, and perceived demand characteristics.

Reasoning About Agent Types and the Hardest Logic Puzzle Ever. Fenrong Liu & Yanjing Wang - 2013 - Minds and Machines 23 (1):123-161.
In this paper, we first propose a simple formal language to specify types of agents in terms of necessary conditions for their announcements. Based on this language, types of agents are treated as 'first-class citizens' and studied extensively in various dynamic epistemic frameworks which are suitable for reasoning about knowledge and agent types via announcements and questions. To demonstrate our approach, we discuss various versions of Smullyan's Knights and Knaves puzzles, including the Hardest Logic Puzzle Ever (HLPE) proposed by Boolos (in Harv Rev Philos 6:62–65, 1996). In particular, we formalize HLPE and verify a classic solution to it. Moreover, we propose a spectrum of new puzzles based on HLPE by considering subjective (knowledge-based) agent types and relaxing the implicit epistemic assumptions in the original puzzle. The new puzzles are harder than the previously proposed ones in the literature, in the sense that they require deeper epistemic reasoning. Surprisingly, we also show that a version of HLPE in which the agents do not know the others' types does not have a solution at all. Our formalism paves the way for studying these new puzzles using automatic model checking techniques.

Filiality Versus Sociality and Individuality: On Confucianism as "Consanguinitism". Qingping Liu - 2003 - Philosophy East and West 53 (2):234-250.
Confucianism is often valued as a doctrine that highlights both the individual and social dimensions of the ideal person, for it indeed puts special emphasis on such lofty goals as loving all humanity and cultivating the self. Through a close and critical analysis of the texts of the Analects and the Mencius, however, it is attempted to demonstrate that because Confucius and Mencius always take filial piety, or, more generally, consanguineous affection, as not only the foundation but also the supreme principle of human life, the individual and social dimensions are inevitably subordinated to and substantially negated by the filial precisely within the Confucian framework, with the result that Confucianism in essence is neither collectivism nor individualism, but "consanguinitism."

Explaining the Emergence of Cooperative Phenomena. Chuang Liu - 1999 - Philosophy of Science 66 (3):106.
Phase transitions are well-understood phenomena in thermodynamics (TD), but it turns out that they are mathematically impossible in finite SM systems. Hence, phase transitions are truly emergent properties. They appear again at the thermodynamic limit (TL), i.e., in infinite systems. However, most, if not all, systems in which they occur are finite, so whence comes the justification for taking TL? The problem is then traced back to the TD characterization of phase transitions, and it turns out that the characterization is the result of serious idealizations which under suitable circumstances approximate actual conditions.

Specificity of Face Processing Without Awareness. Guomei Zhou, Lingxiao Zhang, Jinting Liu, Jiaoteng Yang & Zhe Qu - 2010 - Consciousness and Cognition 19 (1):408-412.
The recognition memory for inverted faces is especially difficult when compared with that for non-face stimuli. This face inversion effect has often been used as a marker of face-specific holistic processing. However, whether face processing without awareness is still specific remains unknown. The present study addressed this issue by examining the face inversion effect with the technique of binocular rivalry. Results showed that invisible upright faces could break suppression faster than invisible inverted faces. Nevertheless, no difference was found for invisible upright houses and invisible inverted houses. This suggested that face processing without awareness is still specific. Some face-specific information can be processed by high-level brain areas even when that information is invisible.

Laws and Models in a Theory of Idealization. Chuang Liu - 2004 - Synthese 138 (3):363-385.
I first give a brief summary of a critique of the traditional theories of approximation and idealization; and after identifying one of the major roles of idealization as detaching component processes or systems from their joints, a detailed analysis is given of idealized laws (which are discoverable and/or applicable) in such processes and systems (i.e., idealized model systems). Then, I argue that dispositional properties should be regarded as admissible properties for laws and that such an inclusion supplies the much needed connection between idealized models and the laws they 'produce' or 'accommodate'. And I then argue that idealized law-statements so produced or accommodated in the models may be either true simpliciter or true approximately, but the latter is not because of the idealizations involved. I argue that the kind of limiting-case idealizations that produce approximate truth is best regarded as approximation; and finally I compare my theory with some existing theories of laws of nature. "We seem to trace [in King Lear] ... the tendency of imagination to analyse and abstract, to decompose human nature into its constituent factors, and then to construct beings in whom one or more of these factors is absent or atrophied or only incipient."

Measuring the Process of Quality of Care for ST-Segment Elevation Acute Myocardial Infarction Through Data-Mining of the Electronic Discharge Notes. Sheng-Nan Chang, Jou-Wei Lin, Shi-Chi Liu & Juey-Jen Hwang - 2008 - Journal of Evaluation in Clinical Practice 14 (1):116-120.

RT₂² Does Not Imply WKL₀. Jiayi Liu - 2012 - Journal of Symbolic Logic 77 (2):609-620.
We prove that $\mathsf{RCA}_0 + \mathsf{RT}^2_2 \nvdash \mathsf{WKL}_0$ by showing that for any set $C$ not of PA-degree and any set $A$, there exists an infinite subset $G$ of $A$ or $\bar{A}$, such that $G \oplus C$ is also not of PA-degree.

Does Female Directorship on Independent Audit Committees Constrain Earnings Management? Jerry Sun, Guoping Liu & George Lan - 2011 - Journal of Business Ethics 99 (3):369-382.
This study examines whether the gender of the directors on fully independent audit committees affects the ability of the committees in constraining earnings management and thus their effectiveness in overseeing the financial reporting process. Using a sample of 525 firm-year observations over the period 2003 to 2005, we are unable to identify an association between the proportion of female directors on audit committees and the extent of earnings management.

Students' Academic Cheating in Chinese Universities: Prevalence, Influencing Factors, and Proposed Action. [REVIEW] Yuchao Ma, Donald L. McCabe & Ruizhi Liu - 2013 - Journal of Academic Ethics 11 (3):169-184.
Quantitative research about academic cheating among Chinese college students is minimal. This paper discusses a large survey conducted in Chinese colleges and universities which examined the prevalence of different kinds of student cheating and explored factors that influence cheating behavior. A structural equation model was used to analyze the data. Results indicate that organizational deterrence and individual performance have a negative impact on cheating while individual perceived pressure, peers' cheating, and extracurricular activities have a positive impact. Recommendations are proposed to reduce the level of academic cheating in China. Many of these are universal in nature and applicable outside of China as well.

Approximation, Idealization, and Laws of Nature. Chang Liu - 1999 - Synthese 118 (2):229-256.
Traditional theories construe approximate truth or truthlikeness as a measure of closeness to facts, singular facts, and idealization as an act of either assuming zero or otherwise very small differences from facts or imagining ideal conditions under which scientific laws are either approximately true or will be so when the conditions are relaxed. I first explain the serious but not insurmountable difficulties for the theories of approximation, and then argue that more serious and perhaps insurmountable difficulties for the theory of idealization force us to sever its close tie to approximation. This leads to an appreciation of lawlikeness as a measure of closeness to laws, which I argue is the real measure of idealization whose main purpose is to carve nature at its joints.

Impacts of Instrumental Versus Relational Centered Logic on Cause-Related Marketing Decision Making. Gordon Liu - 2013 - Journal of Business Ethics 113 (2):243-263.
The purpose of cause-related marketing is to capitalise on a firm's social engagement initiatives to achieve a positive return on a firm's social investment. This article discusses two strategic perspectives of cause-related marketing and their impact on a firm's decision-making regarding campaign development. The instrumental dominant logic of cause-related marketing focuses on attracting customers' attention in order to generate sales. The relational dominant logic of cause-related marketing focuses on building relationships with the target stakeholders through the enhancement of a firm's legitimacy. The combination of these two types of logic gives rise to four types of cause-related marketing: altruistic, commercial, social and integrative. This paper uses the qualitative method to explore a firm's marketing decision choices regarding campaign-related decision dimensions (campaign duration, geographical scope, cause selection, and implementation strategy) for each type of cause-related marketing. The finding provides theoretical, managerial and public policy implications.

Three Time Scales of Neural Self-Organization Underlying Basic and Nonbasic Emotions. Marc D. Lewis & Zhong-xu Liu - 2011 - Emotion Review 3 (4):416-423.
Our model integrates the nativist assumption of prespecified neural structures underpinning basic emotions with the constructionist view that emotions are assembled from psychological constituents. From a dynamic systems perspective, the nervous system self-organizes in different ways at different time scales, in relation to functions served by emotions. At the evolutionary scale, brain parts and their connections are specified by selective pressures. At the scale of development, connectivity is revised through synaptic shaping. At the scale of real time, temporary networks of synchronized activity mediate responses to situations. To the degree that humans share common emotional functions, neural structuration is similar across scales, giving rise to "basic" emotions. However, unique developmental and situational factors select for neural configurations mediating emotional variants.

The Effect of Guanxi on Audit Quality in China. Jihong Liu, Yaping Wang & Liansheng Wu - 2011 - Journal of Business Ethics 103 (4):621-638.
Two types of guanxi have a close association with auditor independence in China: firm-level connections derived from state ownership and personal connections developed through management affiliations with external auditors. This article examines the effects of these two types of connection and their joint effect on audit quality. We find that state ownership and management affiliations with the external auditor both increase the probability of receiving a clean audit opinion in China. Furthermore, the probability increment brought by management affiliations for non-state-owned enterprises (NSOEs) is greater than that for state-owned enterprises (SOEs). These results suggest that state ownership and management affiliations are two important types of connection that impair auditor independence, and that management affiliations are of greater importance to private-sector firms than to SOEs.

Unconscious Processing of Dichoptically Masked Words. Anthony G. Greenwald, M. R. Klinger & T. J. Liu - 1989 - Memory and Cognition 17:35-47.

Framework for a Protein Ontology. Darren A. Natale, Cecilia N. Arighi, Winona Barker, Judith Blake, Ti-Cheng Chang, Zhangzhi Hu, Hongfang Liu, Barry Smith & Cathy H. Wu - 2007 - BMC Bioinformatics 8 (Suppl 9):S1.
Biomedical ontologies are emerging as critical tools in genomic and proteomic research where complex data in disparate resources need to be integrated. A number of ontologies exist that describe the properties that can be attributed to proteins; for example, protein functions are described by Gene Ontology, while human diseases are described by Disease Ontology. There is, however, a gap in the current set of ontologies: one that describes the protein entities themselves and their relationships. We have designed a PRotein Ontology (PRO) to facilitate protein annotation and to guide new experiments. The components of PRO extend from the classification of proteins on the basis of evolutionary relationships to the representation of the multiple protein forms of a gene (products generated by genetic variation, alternative splicing, proteolytic cleavage, and other post-translational modification). PRO will allow the specification of relationships between PRO, GO and other OBO Foundry ontologies. Here we describe the initial development of PRO, illustrated using human proteins from the TGF-beta signaling pathway (http://pir.georgetown.edu/pro).

Does Relationship Quality Matter in Consumer Ethical Decision Making? Evidence From China. Zhiqiang Liu, Fue Zeng & Chenting Su - 2009 - Journal of Business Ethics 88 (3):483-496.
This study explores the linear logic between consumer ethical beliefs (CEBs) and consumer unethical behavior (CUB) in a Chinese context. A relational view helps fill the belief-behavior gap by exploring the moderating role of relationship quality in reducing CUBs. Specifically, when consumers are more receptive to a set of actions that may be deemed inappropriate by moral principles, they are more likely to engage in unethical behaviors. However, when consumers perceive their misconduct as possibly damaging to the relationship developed with the seller, they tend to refrain from unethical behaviors. CEBs and relationship quality also combine to affect unethical behaviors. Although consumers find the misconduct acceptable according to their ethical beliefs, they become less likely to conduct the behavior if they have a close relationship with the seller. The results contribute to a better understanding of the simplistic logic that connects CEBs and their unethical behaviors and shed light on how close relationships with consumers help contain CUBs.

A Two-Level Perspective on Preference. Fenrong Liu - 2011 - Journal of Philosophical Logic 40 (3):421-439.
This paper proposes a two-level modeling perspective which combines intrinsic 'betterness' and reason-based extrinsic preference, and develops its static and dynamic logic in tandem. Our technical results extend, integrate, and re-interpret earlier theorems on preference representation and update in the literature on preference change.

Re-Inflating the Conception of Scientific Representation. Chuang Liu - 2015 - International Studies in the Philosophy of Science 29 (1):41-59.
This article argues for an anti-deflationist view of scientific representation. Our discussion begins with an analysis of the recent Callender-Cohen deflationary view on scientific representation. We then argue that there are at least two radically different ways in which a thing can be represented: one is purely symbolic, and therefore conventional, and the other is epistemic. The failure to recognize that scientific models are epistemic vehicles rather than symbolic ones has led to the mistaken view that whatever distinguishes scientific models from other representational vehicles must merely be a matter of pragmatics. It is then argued that even though epistemic vehicles also contain conventional elements, they do their job of demonstration in spite of such elements.

Attention Alters the Appearance of Motion Coherence. T. Liu, S. Fuller & M. Carrasco - 2006 - Psychonomic Bulletin and Review 13 (6):1091-1096.

Love Life: Aristotle on Living Together with Friends. Irene Liu - 2010 - Inquiry: An Interdisciplinary Journal of Philosophy 53 (6):579-601.
According to Aristotle, the most characteristic activity of friendship is "living together" [to suzên]. This paper seeks to understand living together in the light of his famous, foundational claim that humans are social by nature. Based on an interpretation of Nicomachean Ethics 9.9, I explain our need for friends in terms of a more fundamental human need to appreciate one's life as a whole. I then argue that friendship is built into the very structure of human life itself such that human living is living together.

Confirming Idealized Theories and Scientific Realism. Chuang Liu - unknown.
Two types of idealization in theory construction are distinguished, and the distinction is used to give a critique of Ron Laymon's account of confirming idealized theories and his argument for scientific realism.

Priority Structures in Deontic Logic. Johan Benthem, Davide Grossi & Fenrong Liu - 2014 - Theoria 80 (2):116-152.
This article proposes a systematic application of recent developments in the logic of preference to a number of topics in deontic logic. The key junction is the well-known Hansson conditional for dyadic obligations. These conditionals are generalized by pairing them with reasoning about syntactic priority structures. The resulting two-level approach to obligations is tested first against standard scenarios of contrary-to-duty obligations, leading also to a generalization for the Kanger-Anderson reduction of deontic logic. Next, the priority framework is applied to model two intuitively different sorts of deontic dynamics of obligations, based on information changes and on genuine normative events. In this two-level setting, we also offer novel takes on vexed issues such as the Chisholm paradox and modelling strong permission. Finally, the priority framework is shown to provide a unifying setting for the study of operations on norms as such, in particular, adding or deleting individual norms, and even merging whole norm systems in different manners.

Prioritized Imperatives and Normative Conflicts. Fengkui Ju & Fenrong Liu - 2011 - European Journal of Analytic Philosophy 7 (2):35-58.
Imperatives occur ubiquitously in natural languages. They produce forces which change the addressee's cognitive state and regulate her actions accordingly. In real life we often receive conflicting orders, typically issued by various authorities with different ranks. A new update semantics is proposed in this paper to formalize this idea. The general properties of this semantics, as well as its background ideas, are discussed extensively. In addition, we compare our framework with other approaches of deontic logics in the context of normative conflicts.

Von Wright's "The Logic of Preference" Revisited. Fenrong Liu - 2010 - Synthese 175 (1):69-88.
Preference is a key area where analytic philosophy meets philosophical logic. I start with two related issues: reasons for preference, and changes in preference, first mentioned in von Wright's book The Logic of Preference but not thoroughly explored there. I show how these two issues can be handled together in one dynamic logical framework, working with structured two-level models, and I investigate the resulting dynamics of reason-based preference in some detail. Next, I study the foundational issue of entanglement between preference and beliefs, and relate the resulting richer logics to belief revision theory and decision theory.

Approximations, Idealizations, and Models in Statistical Mechanics. Chuang Liu - 2001 - Erkenntnis 60 (2):235-263.
In this paper, a criticism of the traditional theories of approximation and idealization is given as a summary of previous works. After identifying the real purpose and measure of idealization in the practice of science, it is argued that the best way to characterize idealization is not to formulate a logical model (something analogous to Hempel's D-N model for explanation) but to study its different guises in the praxis of science. A case study of it is then made in thermostatistical physics. After a brief sketch of the theories for phase transitions and critical phenomena, I examine the various idealizations that go into the making of models at three different levels. The intended result is to induce a deeper appreciation of the complexity and fruitfulness of idealization in the praxis of model-building, not to give an abstract theory of it.

Gauge Gravity and the Unification of Natural Forces. Chuang Liu - 2001 - International Studies in the Philosophy of Science 17 (2):143-159.
Physics seems to tell us that there are four fundamental force-fields in nature: the gravitational, the electromagnetic, the weak, and the strong (or interactions). But it also seems to tell us that gravity cannot possibly be a force-field, in the same sense as the other three are. And yet the search for a grand unification of all four force-fields is today one of the hottest pursuits. Is this the result of a simple confusion? This article aims at clarifying this situation by (i) reviewing the gauge-field programme and its conception of unification of force-fields, (ii) examining the various attempts at a gauge theory of gravity, and (iii) articulating the nature of "gauging" and using it to explain the difference between gravity and the other force-fields.

Instability, Modus Ponens and Uncertainty of Deduction. Huajie Liu - 2006 - Frontiers of Philosophy in China 1 (4):658-674.
Considering the instability of nonlinear dynamics, the deductive inference rule modus ponens itself is not enough to guarantee the validity of reasoning sequences in the real physical world, and similar results cannot necessarily be obtained from similar causes. Some kind of stability hypothesis should be added in order to draw meaningful conclusions. Hence, the uncertainty of deductive inference appears to be like that of inductive inference, and the asymmetry between deduction and induction becomes unrecognizable such as to undermine the basis for the fundamental cleavage between analytic truth and synthetic truth, as W. V. O. Quine pointed out. Induction is not inferior to deduction from a pragmatic point of view.

Investigating the Relationship Between Protestant Work Ethic and Confucian Dynamism: An Empirical Test in Mainland China. [REVIEW] Suchuan Zhang, Weiqi Liu & Xiaolang Liu - 2012 - Journal of Business Ethics 106 (2):243-252.
This study examined the relationship between the Protestant Work Ethic (PWE) and Confucian Dynamism in a sample of 1,757 respondents from several provinces in mainland China. Mirels and Garrett's (J Consult Clin Psychol 36:40-44, 1971) PWE Scale and Robertson's (Manag Int Rev 40:253-268, 2000) Confucian Dynamism Scale were used to measure the work ethics. The 16 items of the PWE Scale and eight items of the Confucian Dynamism Scale were initially subjected to a principal components analysis. Factor analysis produced four factors of the PWE, which were labeled as follows: hard work, internal motive, admiration of work itself, and negative attitude to leisure; and three factors of Confucian Dynamism, which were labeled: long-term orientation, short-term orientation, and guanxi orientation. The results of a multiple regression analysis indicated that all the dimensions of PWE were positively related to Confucian Dynamism, but negatively to guanxi orientation. The results also indicated that three PWE dimensions ("hard work," "internal motive," and "admiration of work itself") were positively and significantly related to long-term orientation, but two of them were related negatively and significantly to the short-term orientation of Confucian Dynamism. In addition, the results showed that the admiration-of-work-itself dimension of PWE was significantly and negatively associated with the guanxi orientation, but significantly and positively with the short-term orientation.

Anticipating Intentional Actions: The Effect of Eye Gaze Direction on the Judgment of Head Rotation. Matthew Hudson, Chang Hong Liu & Tjeerd Jellema - 2009 - Cognition 112 (3):423-434.

Moral Reason, Moral Sentiments and the Realization of Altruism: A Motivational Theory of Altruism. JeeLoo Liu - 2012 - Asian Philosophy 22 (2):93-119.
This paper begins with Thomas Nagel's (1970) investigation of the possibility of altruism to further examine how to motivate altruism. When the pursuit of the gratification of one's own desires generally has an immediate causal efficacy, how can one also be motivated to care for others and to act towards the well-being of others? A successful motivational theory of altruism must explain how altruism is possible under all these motivational interferences. The paper will begin with an exposition of Nagel's proposal, and see where it is insufficient with regard to this further issue. It will then introduce the views of Zhang Zai and Wang Fuzhi, and see which one could offer a better motivational theory of altruism. All three philosophers offer different insights on the role of human reason/reflection and human sentiments in moral motivation. The paper will end with a proposal for a socioethical moral program that incorporates both moral reason and moral sentiments as motivation.
Atomic Number of Oxygen

Oxygen is the chemical element with the symbol O and atomic number 8. The atomic number (symbol Z, from the German word Zahl) is the number of protons in the nucleus of an atom; it uniquely identifies a chemical element and never changes within an element: if you were somehow able to change the proton number of an oxygen atom to 7, even with everything else unchanged, the atom would no longer be oxygen but nitrogen. In a neutral (non-ionized) atom the number of electrons equals the number of protons, so an oxygen atom has 8 protons and 8 electrons, and a sodium atom (atomic number 11) has 11 electrons. Moving one place along the periodic table adds one proton and, in the neutral atom, one electron: beryllium (Group II) has one more of each than lithium. Thus the atomic number of H is 1, of O is 8, and of F is 9, and the elements of the periodic table were eventually rearranged by atomic number rather than by atomic weight.

Oxygen's electron configuration is 1s² 2s² 2p⁴: the n = 1 energy level holds two electrons and the n = 2 level holds six. Because it needs two more electrons to fill its s and p subshells, oxygen readily gains two, forming the oxide anion O²⁻; its common oxidation state is -2, and the compound Na₂O, for example, consists of two sodium cations and one oxygen anion. In the periodic table oxygen is a p-block element in group 16 (the chalcogens) and period 2; it is a highly reactive nonmetal and an oxidizing agent that forms bonds with almost every other element, the result being oxidation.

Isotopes are atoms of the same element with the same atomic number (the same number of protons) but different mass numbers, that is, different numbers of neutrons. The mass number A is the sum of the protons and neutrons, so subtracting the atomic number from the mass number gives the neutron count. One common way of naming isotopes is the element name followed by the mass number. Every oxygen atom, whatever the isotope, has exactly 8 protons, but its stable isotopes have 8, 9, or 10 neutrons (oxygen-16, oxygen-17, oxygen-18); the rest of its isotopes are unstable, and an atom with 8 protons, 8 electrons, and 9 neutrons is oxygen-17. Other elements behave the same way: nitrogen is 99.632% N-14, with N-15 its other stable isotope, and all isotopes of carbon (atomic number 6) have six protons but differing numbers of neutrons.

Atomic mass is defined on a relative scale on which the mass of ¹²C is exactly 12 u; an atomic mass of 16 had been assigned to oxygen before the unified atomic mass unit was based on ¹²C. The atomic mass reported on a periodic table is almost never an integer, because it is the weighted average of all of the element's naturally occurring isotopes; for oxygen it is 15.999 u, and for hydrogen 1.0079 u. Oxygen's value falls slightly below 16 because the dominant isotope, oxygen-16, has a mass of 15.9949 u and the heavier isotopes are rare, which answers the common question of why the relative atomic mass of oxygen is less than 16. A classic exercise of the same kind is to calculate the average atomic mass of copper from its two stable isotopes; a worked version is given below. A pair of oxygen atoms makes an oxygen molecule, O₂; since oxygen has a molar mass of 15.999 g/mol, 16.0 g of atomic oxygen contains about 6.022 × 10²³ atoms.

By mass, oxygen is the third most abundant element in the universe, after hydrogen and helium, and the most common component of the Earth's crust, at about 49% by weight; silicon-oxygen minerals make up most of the crust. Air is about 21% oxygen and 78% nitrogen. Almost two-thirds of the weight of living things comes from oxygen, mainly because living things contain a lot of water and 88.9 percent of water's weight is oxygen. Carl Wilhelm Scheele found a gas that enhances combustion while heating several compounds, including mercury oxide, manganese oxide, and potassium nitrate; the downfall of the phlogiston theory then required a new name for the gas, which Lavoisier provided.

Oxygen is a colourless, odourless, tasteless gas essential to living organisms: it is taken up by animals, which convert it to carbon dioxide, while plants use carbon dioxide as a source of carbon and return the oxygen to the atmosphere. It melts at -218.4 °C and changes from a gas to a liquid at about -183 °C. Under ordinary conditions the pure element exists as oxygen gas (O₂) and also as ozone (O₃); common compounds include water (H₂O) and oxygen difluoride (OF₂). The gas can be isolated and sold in pure form for an assortment of uses; in medical use, some people require oxygen therapy only while sleeping, while others require it 24 hours a day.

Oxygen chemistry also runs through much current research. Antioxidants protect against oxidative stress and the damage caused by reactive oxygen species (ROS), otherwise known as free radicals, and in biological systems the selective reduction of oxygen to water, rather than to hydrogen peroxide, is crucial for cellular metabolism. Regulating catalytic activity in the oxygen reduction reaction (ORR) is significant to the development of metal-air batteries and other oxygen-involving energy conversion devices. In low Earth orbit, spacecraft surfaces are exposed to reactive atomic oxygen: the atomic-oxygen fluence a surface receives is the flux (atoms/cm²/sec) times the exposure period (seconds), with the flux defined as the number density of atomic oxygen (atoms/cm³) times the orbital velocity (cm/s), and erosion is related to the number of oxygen atoms arriving per unit surface and time; an oxide growth model for silver exposed to energetic atomic oxygen has to take these points into account. Atomic oxygen also has many biomedical applications, some already investigated and many more still to be explored: it can be used to texture surfaces for use in glucose monitors and to texture the surface of polymers that may fuse with bone.

In chemistry, the formula weight of a compound is computed by multiplying the atomic weight of each element in the chemical formula by the number of atoms of that element present in the formula, then adding all of these products together. H₂O, for example, indicates that a water molecule comprises exactly two atoms of hydrogen and one atom of oxygen, and a mole of sulphuric acid contains 6.022 × 10²³ molecules of H₂SO₄, each of which contains one sulphur atom, two hydrogen atoms and four oxygen atoms. For elements that exist as molecules, such as O₂, it is best to state explicitly whether molecules or atoms are meant. More generally, the number of atoms or molecules \(n\) in a mass \(m\) of a pure material of molar mass \(M\) is

\( n = \dfrac{m\,N_A}{M} \)   (1)

where \(N_A = 6.022 \times 10^{23}\) atoms or molecules per gram-mole is Avogadro's number; dividing both sides by volume, a material of density \(\rho\) has number density \(\rho N_A / M\). Suppose we start with UO₂ fuel: knowing the density and the enrichment, we can calculate the number densities of the constituents, which is the starting point for reactor physics (absorption of one neutron in ²³⁸U or ²³⁵U forms the compound nucleus ²³⁹U* or ²³⁶U*).
If you were somehow able to change the proton number of this atom to 7, even if everything else remained the same, it would no longer be an oxygen atom, it would be nitrogen. Carbon has 6 protons and an atomic number of 6; oxygen has 8 protons and thus and atomic number of 8. However, expressing the reaction in terms of gas volumes following Gay-Lussac's law of combining gas volumes, two volumes of hydrogen react with one volume of oxygen to produce two volumes of water, suggesting (correctly) that the atomic weight of oxygen is sixteen. Watch full episodes of Oxygen true crime shows including Snapped, Killer Couples, and Three Days to Live. The atomic number of oxygen is 8, because oxygen has A. 61805 eV) Ref. Number of protons in Oxygen is 8. Where more than one isotope exists, the value given is the abundance weighted average. Carbon has atomic number 6. Quizengines Biology What is the maximum number of covalent bonds an element with atomic number 8 can make with Oxygen has an atomic number of 8 and a mass number. Oxygen's symbol is O and atomic number is 8. Symbol O A nonmetallic element constituting 21 percent of the atmosphere by volume that occurs as a diatomic gas, O2, and in many compounds such as water. Is it an Isotope? Yes. Thus, all atoms that have the same number of protons--the atomic number--are atoms of the same element. (The atomic number for oxygen is 8, and the atomic mass is 15. Nov 06, 2011 · The Question: - "Oxygen forms three separate ions. 999 u and Symbol: "O" Fun fact:- Almost two-thirds of the weight of living things comes from oxygen, mainly because living things contain a lot of water and 88. Show the distribution of electrons in oxygen atom (atomic number 8) using the orbital Login. Atoms contain protons, neutrons and electrons. Atomic Number Atomic Mass Electron Configuration Number of Neutrons Melting Point Boiling Point Date of Discovery Crystal Structure. The atomic number of oxygen is the number of protons it has; so it is 8. The oxide growth model for silver exposed to energetic atomic oxygen has to take the above points. Relative atomic mass The mass of an atom relative to that of carbon-12. The atomic number of an element never changes, meaning that the number of protons in the nucleus of every atom in an element is always the same. You will find a link at the bottom of the. A number in parentheses indicates the uncertainty in the last digit of the atomic weight. Molecular oxygen (O 2) (often called free oxygen) on Earth is thermodynamically unstable. No two different elements will have the atomic number. Here is a collection of facts about this essential element. 999 u or g/mol. This energy level model shows two electrons on the first energy level and six electrons on the second energy level. That box on the left has all of the information you need to know about one element. Daniel Rutherford discovered this non-metal element in 1772. Atomic Number of Oxygen is 8. 93% and 13 C which forms the remaining form of carbon on earth. 022 \times 10^{23}$ molecules of $\ce{H2SO4}$, each of which contains one sulphur atom, two hydrogen atoms and four oxygen atoms. Care must be taken in these types of determinations however. 9994 amu Melting Point:-218. 3 - Electron Configuration for Atoms of the First 20 Elements When the electrons are arranged in their lowest energy state, the atom is in the ground state. Because all of the isotopes of an element have the same atomic number, the atomic number is often left off the isotope notation. 
Atoms want to have the same number of neutrons and protons but the number of neutrons can change. And you can find the atomic number on the periodic table. In our website you will find the solution for Homophone for the atomic number of oxygen crossword clue crossword clue. Atomic number is defined as the number of protons or number of electrons that are present in an atom. 2%, by weight, of the earth's crust. On the Periodic Table of Elements it is located with the nonmetals. Atomic Number of Oxygen. Oxygen-mask is attested from 1920. (2) The word oxygen is derived from the Greek words 'oxys' meaning acid and 'genes' meaning forming. C) approximately 16 grams. There is a pronounced peak in abundance in the vicinity of iron (Fe). It is the supporter of combustion in air and was the standard of atomic, combining, and molecular weights until 1961, when carbon 12 became the new standard. click on any element's name for further information on chemical properties, environmental data or health effects. A very common science class activity is building 3D models of atoms. Atomic Number of Oxygen is 8. Graphic in a new window. Visit Crime Time for breaking crime news and listen to the Martinis & Murder podcast. The only intention that I created this website was to help others for the solutions of the New York Times Crossword. Oxygen atomic orbital and chemical bonding information. The atomic number of oxygen is 8, because oxygen has. Atomic Symbols, Atomic Numbers, and Mass Numbers By Debbie McClinton Dr. It is an active, life-sustaining component of the atmosphere, constituting nearly 21% of volume of the air we breathe. What is the atomic mass/mass number of the atom in the diagram above? How many protons are in the nucleus of an atom with an atomic number of 15? How many electrons are in the nucleus of an atom with an atomic number of 20?. Banks, Kim K. In writing the electron configuration for oxygen the first two electrons will go in the 1s orbital. Since physicists referred to 16 O only, while chemists meant the naturally-abundant mixture of isotopes, this led to slightly different mass scales between the two disciplines. This means that an atom of oxygen, regardless of the isotope, will have exactly 8 protons in its nucleus. The atomic number of oxygen is 8 and the atomic mass of one isotope of oxygen is 17. 02 cm-1 (13. Oct 21, 2019 · Atomic images reveal unusually many neighbors for some oxygen atoms has studied the bonding of a large number of nitrogen and oxygen atoms using state-of-the-art scanning transmission electron. That is a very good question, David. 022 x 10 23 O atoms; "1 mole of oxygen atoms" means 6. This is approximately the sum of the number of protons and neutrons in the nucleus. Some isotopes are radioactive-meaning they "radiate" energy as they decay to a more stable form, perhaps another element half-life: time required for half of the atoms of an element to decay into stable form. Chemical elements listed by atomic mass The elemenents of the periodic table sorted by atomic mass. The atomic number of oxygen is 8 because oxygen has A)eight protons in the nucleaus )electrons in eight shells C)a - Answered by a verified Tutor We use cookies to give you the best possible experience on our website. The value of J, appended as a right subscript, defines the level. Atomic oxygen can be used in medical applications, such as texturing surfaces for use in glucose monitors. Oxidation States:-2. The number of protons in the nucleus of an atom is its atomic number (Z). 
022×1023 atoms or molecules per mole):. Atomic Orbits The nature of atoms and the manner in which electrons interact and move about their nucleus has puzzled scientists for a long time. Atomic Number of Oxygen is 8. Shell number one can only hold 2 electrons. 9994 grams per mole. The n = 1 energy level has two electrons, and the n = 2 energy level has six electrons. ATOMIC NUMBER DENSITY Number of Atoms (n) and Number Density (N) The number of atoms or molecules (n) in a mass (m) of a pure material having atomic or molecular weight (M) is easily computed from the following equation using Avogadro's number (NA = 6. Electron shell. The elements of the periodic table are listed in order of increasing atomic number. 02 cm-1 (13. When these two atoms react, both become stable. The H-O-O bond angle in this molecule is only slightly larger than the angle between a pair of adjacent 2p atomic orbitals on the oxygen atom, and the angle between the planes that form the molecule is slightly larger than the tetrahedral angle. This is the defining trait of an element: Its value determines the identity of the atom. Why do we specify 12C? We do not simply state that the mass of a C atom is 12 AMU because elements exist as a variety of isotopes. electrons in eight shells. 022⋅1023 =1. Atomic number of an element never changes: for example, the atomic number of oxygen is always 8, and the atomic number of Chlorine is always 18. I have a problem I can't figure out. Oxygen Atomic Structure. The chemical element oxygen is a highly reactive non-metal. Life support will start high-pressure flood of oxygen, and release some bubbles. Which is, once again, 1. Oct 18, 2011 · Every element has separate atomic number, and no element has the same atomic number.
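As a quick check of the mole arithmetic above, here is a minimal Python sketch of n = m * N_A / M; the function name and the 16 g example are our own illustrative choices.

```python
# Minimal sketch: atoms in a given mass of oxygen via Avogadro's number,
# n = m * N_A / M (the 16 g example below is illustrative).
AVOGADRO = 6.022e23   # atoms (or molecules) per gram-mole
M_OXYGEN = 15.999     # g/mol, standard atomic weight of oxygen

def atoms_in_mass(mass_g: float, molar_mass: float = M_OXYGEN) -> float:
    """Number of atoms n in a mass m (grams): n = m * N_A / M."""
    return mass_g * AVOGADRO / molar_mass

if __name__ == "__main__":
    # 16 g of atomic oxygen is roughly one gram-mole, i.e. ~6.02e23 atoms.
    print(f"{atoms_in_mass(16.0):.3e} atoms in 16 g of oxygen")
```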
Development of multi-pitch tool path in computer-controlled optical surfacing processes Jing Hou1,2, Defeng Liao1,2 & Hongxiang Wang1 Tool path in computer-controlled optical surfacing (CCOS) processes has a great effect on middle spatial frequency error in terms of residual ripples. The raster tool path of uniform path pitch is one of the most commonly adopted paths, in which a smaller path pitch is always desired for restraining residual ripple errors. However, too-dense paths cause excessive material removal in lower-removal regions, deteriorating the form convergence. With this in view, we propose a novel tool path planning method named multi-pitch path. With this path, the material removal map is divided into several regions with varied path pitches according to the desired removal depth in each region. The path pitch is designed larger in low-removal regions and smaller in high-removal regions, and the feeding velocity of the tool is maintained at a high level when scanning the whole surface. Results and conclusions Experiments were conducted to demonstrate this novel tool path planning method, and the results indicate that it can successfully restrain the residual ripples and meanwhile guarantee a favorable convergence rate of form error. Large optics have been widely used in interferometers, telescopes, high-power lasers and other optical systems. In these systems, the optics are required to meet stringent specifications on low, middle and high spatial frequency errors [1, 2]. Various CCOS processes have been developed which can provide good solutions for the fabrication of these optics because of their high convergence rates of low frequency error (i.e. surface form) [3,4,5]. Nowadays, more and more attention has been paid to the middle spatial frequency (MSF) error, which is crucial for image performance and beam quality [6]. MSF error is primarily introduced during the CCOS processes, and it is hard to restrain. It is reported that MSF error is mainly affected by the initial surface error distribution (spatial and frequency domain), the removal function characteristics (profile, removal efficiency and stability) and the adopted paths [7, 8]. During CCOS, the tool is numerically controlled to traverse a path with a varied feeding velocity to obtain the desired removal map. The tool path plays an important role in the deterministic removal process, and it has to cover the whole optic surface. There are several tool paths utilized in CCOS processes, such as the regular raster and spiral paths, and several kinds of random path [9,10,11]. The random path is claimed to be useful for reducing the MSF error [12], but it is hard to achieve a high-precision surface form with it because of the difficulty in tool speed management [7]. The spiral and raster paths are more prone to generating MSF error in terms of residual ripples due to their inherent regular pattern. Spiral paths are suitable for circular optics, as the tool is driven to traverse a radius while the optic, mounted on a turntable, rotates simultaneously [13]. The raster path is usually adopted for the fabrication of square-shaped optics. During polishing with a raster path, the tool feeds along a straight line and then translates to another parallel line. This process is repeated to cover the whole surface. The pitch between adjacent path lines is commonly set identical (i.e. uniform pitch) on the whole surface, and the feeding velocity along each path is instantaneously controlled based on the local removal [14].
It is obvious that for a uniform removal map of a certain removal amount, the smaller the path pitch, the larger the feeding velocity. However, if the desired feeding velocity is larger than the largest one allowed by the machine, the actual feeding velocity has to be changed to the largest one, which will introduce extra dwell time leading to material over-removal. As the tool feeds fast in lower regions and slowly in higher regions with the uniform pitch path, the smallest removal region (lowest region) will commonly suffer such over-removal. A decreased pitch is propitious to the restraint of MSF errors [15]; on the other hand, it will greatly increase over-removal, especially in the lowest region, deteriorating the form correction precision. Hence, an optimized tool path planning method is needed to solve this problem. A novel tool path planning method, named multi-pitch path, is developed in this paper. With this method, the material removal map is divided into several regions with varied path pitches according to the desired removal depth in each region. The path pitch is designed larger in low-removal regions, so as to bring much less over-removal, and smaller in high-removal regions, so as to decrease MSF error in terms of the residual ripples. The path has an obvious advantage in the restraint of residual ripples, and meanwhile can guarantee the convergence rate of form errors. In the following section II, the correlation between the ripple and MSF error is analyzed to verify the rationality of characterizing the MSF error by ripples. In section III, the factors impacting ripple errors, including the removal amount and path pitch, are discussed. In section IV, the multi-pitch path and the polishing procedure with the path are detailed and the experimental validation is conducted. Verification of characterizing MSF errors with the residual ripple The spatial frequency of surface errors is divided into several separate bands in the field of high power lasers [2]: surface figure (>33 mm), MSF error (0.12 ~ 33 mm) and surface roughness (0.01 ~ 0.12 mm). There are two types of specification for MSF error; one is the RMS value after band pass filtering, and the other is a not-to-exceed line for the power spectral density (PSD) as a function of spatial frequency [16]. In the following, we select the RMS after band pass filtering over the 0.12 ~ 33 mm range for evaluation of the MSF error. MSF errors induced by CCOS processes are commonly in the form of residual ripples. Thus, in order to quantitatively specify the correlation between residual ripple error and MSF error, we formulated a series of sinusoidal surface forms with variable spatial frequency and magnitude (see Fig. 1). The surface forms are sinusoidally distributed in the x direction and uniformly distributed in the y direction. Surface forms of this shape are fairly similar to the local regions of surface forms practically corrected by CCOS processes, which are nearly sinusoidally distributed in the scanning direction and nearly uniformly distributed in the feeding direction. Herein, the MSF error, in terms of the RMS value in the mid-spatial frequency band (0.0303 ~ 8.33 mm^-1), is calculated for all the surface forms as shown in Fig. 2. It is revealed that the spatial frequency has little effect on the RMS value, while there is a good linear relationship between the spatial magnitude and the RMS value. Thus, we should focus on the residual ripple magnitude rather than the frequency while restraining MSF errors.
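The band-pass RMS evaluation just described is easy to reproduce numerically. The following minimal Python sketch (the profile length, sampling pitch and function name are our own illustrative choices, not from the paper) filters a 1-D profile to the 0.12 ~ 33 mm band via FFT masking and returns the RMS; for a pure sinusoid of magnitude A whose frequency lies inside the band it returns A divided by the square root of 2, which is consistent with the observation that the RMS scales linearly with ripple magnitude and is insensitive to frequency.

```python
import numpy as np

def bandpass_rms(profile_um, dx_mm, f_lo=1 / 33.0, f_hi=1 / 0.12):
    """RMS of a 1-D surface profile after band-pass filtering in the
    mid-spatial-frequency band [f_lo, f_hi] (cycles/mm), via FFT masking."""
    n = len(profile_um)
    spec = np.fft.rfft(profile_um - np.mean(profile_um))
    freq = np.fft.rfftfreq(n, d=dx_mm)           # spatial frequencies, mm^-1
    spec[(freq < f_lo) | (freq > f_hi)] = 0.0    # keep only the MSF band
    filtered = np.fft.irfft(spec, n)
    return np.sqrt(np.mean(filtered ** 2))

# Example: a sinusoidal ripple of 0.05 um magnitude and 2 mm wavelength
# on a 200 mm profile sampled every 0.1 mm.
x = np.arange(0, 200, 0.1)
ripple = 0.05 * np.sin(2 * np.pi * x / 2.0)
print(bandpass_rms(ripple, 0.1))                 # ~0.035 um (= 0.05 / sqrt(2))
```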
Fig. 1 One example of sinusoidally distributed residual ripples, (a) contour map and (b) one-dimensional distribution

Fig. 2 The relationships between the MSF error and the ripple features. (a) RMS and ripple frequency and (b) RMS and ripple magnitude (RMS after band pass: 0.12-33 mm)

Influencing factors of residual ripple errors As revealed above, residual ripple error can be characterized by the ripple amplitude, i.e. the peak-to-valley value of the ripple (PVe). We introduce a normalized PV value of residual ripple (PVn), which is derived from PVe divided by the average removal depth (r). PVn represents the residual error PVe while achieving unit removal (see Eq. 1).

$$ \mathrm{PV}_{\mathrm{n}}=\mathrm{PV}_{\mathrm{e}}/r \quad (1) $$

Primary factors impacting the residual ripple errors include the scanning pitch (i.e., path pitch), removal depth, and tool influence function (TIF) features. Without loss of generality, we modelled a variable scanning pitch resulting in a uniform removal map, as well as a variable removal map under the same scanning pitch, to reveal the effects of the scanning pitch and removal depth on the residual ripple and MSF error. Herein, we consider a Magnetorheological Finishing (MRF) TIF traversing a uniform pitch raster path, under the condition that the feeding direction is set perpendicular to the fluid flow direction, as shown in Figs. 3 and 4. As the TIF traverses a single line path with a constant feeding velocity of v, the removal is uniformly distributed along the feeding direction, while the removal distribution R_j in the perpendicular direction can be obtained by Eq. 2 (see Fig. 5), in which the TIF matrix (R, unit in um/s) has s rows and k columns as shown in Eq. 3, and the pixel size is p (unit in mm).

$$ R_j=\frac{p}{v}\cdot \sum_{i=1}^k r_{i,j},\quad j=1,\dots,l. \quad (2) $$

$$ R=\left[\begin{array}{cccc} r_{11} & r_{12} & \cdots & r_{1k}\\ r_{21} & r_{22} & \cdots & r_{2k}\\ \vdots & \vdots & r_{ij} & \vdots\\ r_{s1} & r_{s2} & \cdots & r_{sk} \end{array}\right] \quad (3) $$

Fig. 3 Uniform pitch raster path for the modeling

Fig. 4 MRF TIF chosen for the following simulation and experiments

Fig. 5 Removal amount by a single path in the scanning direction

Figure 6 shows the local removal amount distribution in the scanning direction while correcting uniformly-distributed form errors. The blue sections represent the removal amount of each independent single path and the red one is the convolved removal amount. It is obvious that the convolved removal amount is periodically distributed, and the spatial wavelength is identical to the scanning pitch. It is confirmed that surface form correction by a small-sized TIF inevitably induces residual ripple error.

Fig. 6 Removal distribution in the scanning direction

Figure 7a shows that PVe grows linearly with the removal amount. It is suggested that a smaller removal amount is propitious to the restraint of PVe. Figure 7b shows the PVn value as a function of scanning pitch. PVn increases as the scanning pitch is increased. It is noticeable that PVn increases slowly until the scanning pitch reaches ~1.1 mm, and then increases sharply. It is revealed that while correcting the surface form of optics by sub-aperture polishing, it is desired to adopt a smaller tool-path pitch for restraint of residual ripple.
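The ripple simulation underlying Figs. 6 and 7 can be sketched by superposing shifted single-path removal profiles. In the minimal Python sketch below, a Gaussian-shaped spot stands in for the actual MRF TIF, and the function names and all numerical values are our own illustrative assumptions; the sketch sums the TIF along the feeding direction (Eq. 2), replicates the resulting cross-section at the scanning pitch, and reads off PVe in the steady central region.

```python
import numpy as np

def single_path_profile(tif, p, v):
    """Cross-section of material removed by one straight path (Eq. 2):
    the TIF (um/s) is summed along the feeding direction and scaled by p/v."""
    return (p / v) * tif.sum(axis=0)

def raster_ripple(tif, p, v, pitch_mm, n_paths=20):
    """Superpose single-path profiles shifted by the scanning pitch and
    return the peak-to-valley ripple PVe away from the edge regions."""
    prof = single_path_profile(tif, p, v)
    step = int(round(pitch_mm / p))
    total = np.zeros(len(prof) + step * n_paths)
    for i in range(n_paths):
        total[i * step:i * step + len(prof)] += prof
    mid = total[len(prof):-len(prof)]            # fully overlapped region only
    return mid.max() - mid.min()

# Hypothetical Gaussian-like spot (um/s) standing in for the MRF TIF,
# sampled on a 0.1 mm pixel grid over a 10 mm square.
p = 0.1
y = np.arange(-5, 5, p)
g = np.exp(-y ** 2 / 4)
tif = np.outer(g, g)
print(raster_ripple(tif, p, v=50.0, pitch_mm=0.8))   # PVe for a 0.8 mm pitch
```

Re-running the last line with smaller pitch values reproduces the qualitative behaviour of Fig. 7b: the ripple PV shrinks rapidly as the pitch is reduced.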
Fig. 7 Residual error as a function of (a) removal amount and (b) scanning pitch

Development of the multi-pitch tool path Correction of the form error by CCOS processes aims to polish every region to a desired plane of absolute flatness, which is commonly located at the lowest point on the surface, as shown in Fig. 8. In fact, a lower plane has to be selected due to the maximum motion speed of the tool. Such a removal map, introducing extra removal, isn't propitious to the restraint of residual ripples, as revealed above.

Fig. 8 Schematic of material removal distribution by the CCOS process

If the desired plane is selected at the lowest point, the desired removal amount at that point would be zero. As the tool traverses across the point, it inevitably removes material, deteriorating figure convergence; thus the tool is commonly driven at the maximum velocity allowable for the machine. Furthermore, the path pitch within the lowest regions should be as large as possible so as to introduce less over-removal, but in a uniform pitch tool path, a large pitch would deteriorate the residual ripple errors. Therefore, we develop a multi-pitch tool path which has a large pitch in low-removal regions, reducing over-removal, and a small pitch in high-removal regions, so as to decrease the residual ripples while guaranteeing the figure convergence. The polishing procedure with the multi-pitch tool path is shown in Fig. 9. First, we generate the removal map according to the actual surface figure and the desired surface figure. Then, the removal map is divided into several subregions based on the removal variance. After that we calculate the scanning path pitch and generate the path for each subregion. The spacing between adjacent dwell points along each path line, i.e. the feeding pitch, is also determined. The feeding pitch can be adopted within a wide range, yet a value equal to the scanning pitch is recommended. After determination of the scanning and feeding pitches, we then acquire the dwell points on the whole surface. The polishing time at each dwell point (i.e. the dwell map) can be solved with various algorithms such as the discrete convolution model, the linear equation model and so forth [17]. Finally, the CNC code can be generated according to the dwell points on the path and the dwell time map.

Fig. 9 Polishing procedure with multi-pitch path

Determination of the path pitch in each subregion While generating the multi-pitch tool path, we first divide the optic surface into several subregions according to the removal map. The whole material removal scope between the maximum and minimum removals is divided into several ranges, and then each removal range determines the corresponding subregions. The number of removal ranges or subregions depends on the whole removal scope; the larger the removal, the more the ranges or subregions. Generally, 3 ~ 6 removal ranges or subregions are appropriate for most cases. Assuming a removal map has a maximum removal of r and a minimum removal of 0, and it is divided into m subregions such that the removal variance in each subregion has the same value Δr, then the removal range of each subregion can be derived by Eqs. 4-5.

$$ \left(k-1\right)\cdot \varDelta r\le r_k<k\cdot \varDelta r,\quad k=1,\dots,m. \quad (4) $$

$$ \varDelta r=r/m \quad (5) $$

While determining the path pitch in a subregion, a dwell point P which has a removal of r and covers a tiny square area in the subregion is considered, as shown in Fig. 10.
The removal is almost uniformly distributed in the tiny square area, and then the correlation among the removal depth (r), path pitch (d) and feeding velocity (v) can be obtained by Eq. 6. It is revealed that a certain d can be calculated for a given s, r and v_max, as shown in Eq. 7.

$$ s=r\cdot d\cdot v \quad (6) $$

$$ d=s/\left(r\cdot v_{\mathrm{max}}\right) \quad (7) $$

Fig. 10 Dwell point in the uniform removal map

In Eqs. 6-7, v_max is the largest feeding velocity allowed by the machine, and s is the volume removal rate of the TIF; the feeding pitch h (the spacing between adjacent dwell points along the path) cancels out of the derivation, since the volume removed around the dwell point is r·d·h while the dwell time is h/v. The volume removal rate can be derived by Eq. 8:

$$ s=p^2\cdot \sum_{j=1}^l\sum_{i=1}^k R_{i,j} \quad (8) $$

where p (unit in mm) is the pixel size of the TIF, and R_{i,j} (unit in um/s) is the TIF removal rate. As revealed in the previous section, minimum path pitches are desired for the restraint of residual ripple errors. Eq. 7 indicates that the feeding velocity is inversely proportional to the pitch; thus, we can adopt the maximum feeding velocity allowed by the machine so as to decrease the pitch. However, increasing the feeding velocity has a significant impact on the stability of the TIF. A too-large feeding velocity will result in alteration of the TIF, and hence deteriorate the efficiency of figure correction as well as the MSF errors. Further, the machine imposes restrictions on the moving velocity and acceleration of every movable component. Hence, there is a favorable maximum velocity allowed for each polishing machine. Herein, the largest feeding velocity (v_max) allowed by the machine can be adopted in practice so as to reduce the pitch and hence the PVe. As each subregion is determined within a material removal range, we adopt the minimum removal depth in each region for calculation of the corresponding path pitch (see Eq. 9), which will prevent the feeding velocity from exceeding the specified maximum value. Then, the pitch in each region can be obtained by Eq. 10.

$$ r_1=0.5\cdot \varDelta r,\quad r_k=\left(k-1\right)\cdot \varDelta r,\quad k=2,\dots,m. \quad (9) $$

$$ d_k=s/\left(r_k\cdot v_{\mathrm{max}}\right) \quad (10) $$

In CCOS processes, the scanning path pitch should be restricted within a range in practice. The minimum value of the pitch is determined by the positioning and moving precision of the polishing machine. The maximum one is primarily dependent on the TIF size (i.e. less than 1/6 of the TIF size). Further, a too-large path pitch isn't propitious to correcting the form error and restraining the ripple error. Solution and implementation of the dwell time map The dwell time map, in terms of the polishing time at each dwell point, provides the time that the tool dwells at the corresponding position to obtain the desired removal. In the multi-pitch tool path, the dwell points are allocated along each path line with a feeding pitch. The feeding pitch can be specified according to the scanning pitch. Then, the dwell time map is solved by any developed algorithm, such as the discrete convolution method, the linear equation method and so forth. The local feeding velocity (v_f) can be derived from the pitch and the local removal (r_f) at the corresponding point, as revealed in Eqs. 11-12. In the multi-pitch tool path, the removal variance in each region is greatly decreased compared to the conventional tool path with a constant pitch on the whole optic surface. As the tool scans the path lines in any subregion, the path pitch is decreased as much as possible in every region, which is prone to improving the implementation precision of the dwell time.
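The pitch selection of Eqs. 9-10 is a small computation that can be sketched directly. In the Python sketch below, the function name, the clipping limits and the value of the volume removal rate s_vol are our own illustrative assumptions (s_vol is chosen so the resulting numbers land near the pitches reported later for the 1# optic); the representative minimum removals r_k follow Eq. 9 and the pitches follow Eq. 10.

```python
import numpy as np

def region_pitches(r_max_um, m, s_vol, v_max, d_min=0.05, d_max=1.0):
    """Scanning pitch d_k for each of m removal subregions (Eqs. 9-10).
    r_1 = 0.5*dr and r_k = (k-1)*dr are the representative minimum removals;
    pitches are clipped to assumed machine/TIF limits [d_min, d_max]."""
    dr = r_max_um / m
    r = np.array([0.5 * dr] + [(k - 1) * dr for k in range(2, m + 1)])
    d = s_vol / (r * v_max)                      # Eq. 10: d_k = s / (r_k * v_max)
    return np.clip(d, d_min, d_max)

# Hypothetical numbers: a 0.443 um removal scope split into 4 subregions,
# a TIF volume removal rate s_vol and a 50 mm/s velocity cap.
print(region_pitches(r_max_um=0.443, m=4, s_vol=2.2, v_max=50.0))
# -> roughly [0.79, 0.40, 0.20, 0.13] mm: large pitch where removal is small.
```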
$$ s\cdot t=r_f\cdot d\cdot v_f\cdot t \quad (11) $$

$$ v_f=s/\left(d\cdot r_f\right) \quad (12) $$

During generation of the multi-pitch tool path, the optic surface is divided into several regions. In each region, the tool scans a raster path with a featured constant pitch. The pitch is dependent on the removal in the region: the larger the removal, the smaller the pitch. During implementation of the dwell time map, the tool will traverse all the paths generated to cover the whole surface. Herein, we suggest that each region be scanned individually. In each region, adjacent path lines can be interconnected at the ends during implementation of the dwell-time map. As the tool traverses a path line and reaches the end, it translates to the nearby end of the next line and traverses this line (see Fig. 11). The translation stroke from one line to another may introduce extra dwell time, which will cause undesired removal and deteriorate the convergence rate of the surface form. If the tool lifts up after completing the last feeding segment in each path line, it will inevitably introduce extra removal during the lifting process. It is suggested that the tool lift up while traversing the last feeding segment, within a period longer than the determined dwell time. Under this condition, the increased actual dwell time will compensate for the decreased removal function, achieving approximately the desired removal. Similarly, the tool descends while traversing the first feeding segment of the next path line. After the tool has covered one subregion, it also lifts off the optic and translates above to the first dwell point of another subregion. Then it descends to accomplish the subsequent dwell time. The lifting of the tool during the translation process doesn't bring extra removal.

Fig. 11 Translation stroke of the multi-pitch tool path

Herein, we utilize the multi-pitch tool path and the regular uniform pitch tool path for figure correction with the MRF process. The two paths are compared through simulation and experiments. The MRF process is a typical CCOS process characterized by a stable TIF and a deterministic figuring procedure. The MRF machine has x, y, z axes for translation motions, a C axis for rotation motion and an A axis for swing motion. The maximum translating velocity of the x, y, z axes allows for 50 mm/s. The diameter of the wheel is 300 mm. The spotting and figuring processes are conducted under the conditions: wheel speed 200 rpm, MR fluid ribbon height 1.6 mm, and penetration depth of the optic into the ribbon 0.4 mm. The magnetic field strength applied to the MR fluid ribbon is also stably controlled. The TIF obtained by the spotting process is shown in Fig. 4. We used two 200 mm × 200 mm sized optics (1# and 2#). The optics were previously ground and polished with a continuous polishing process. They both have a favorable initial MSF error specification, because continuous polishing has a distinct advantage in the restraint of MSF errors. Figures of both are similarly distributed with a PV value of approximately 0.443 um, as shown in Fig. 12. In the following, we employed the TIF to correct the optic figures respectively.

Fig. 12 Initial figures of the optics polished with (a) multi-pitch and (b) uniform pitch tool path

The practical feeding velocity is set to 50 mm/s for determining the pitches. We then calculate the desired pitch for each removal depth by Eq. 7. Herein, the 1# optic is polished with the multi-pitch tool path, while the 2# optic is polished with the uniform pitch tool path for comparison.
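Before turning to the results, the dwell-point kinematics of Eqs. 11-12 can be sketched in a few lines. In this minimal Python sketch the function name and the numerical inputs are our own illustrative assumptions; it computes the local feeding velocity v_f = s / (d * r_f) and caps it at the machine limit, which is where the extra dwell time discussed above would otherwise appear.

```python
def feeding_velocity(s_vol, pitch_mm, removal_um, v_max=50.0):
    """Local feeding velocity v_f = s / (d * r_f) (Eq. 12), capped at the
    machine limit. With the subregion pitch chosen from the minimum removal
    (Eq. 9), the cap is reached only at the lowest points of the region."""
    v = s_vol / (pitch_mm * removal_um)
    return min(v, v_max)

# Hypothetical dwell point: pitch 0.4 mm, local removal 0.15 um,
# and the same assumed volume removal rate used in the pitch sketch.
print(feeding_velocity(s_vol=2.2, pitch_mm=0.4, removal_um=0.15))  # ~36.7 mm/s
```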
The removal map of the 1# optic is divided into 4 regions, with removal depths as follows: 0 ~ 0.0633 um, 0.0633 ~ 0.190 um, 0.190 ~ 0.316 um, and 0.316 ~ 0.443 um; the pitches in the regions are then obtained as 0.8, 0.395, 0.132 and 0.099 mm. The pitch of the 2# optic is set at 0.8 mm on the whole surface. The dwell points are generated with a feeding pitch of 0.3 mm along every path, and the dwell time map is solved by the common discrete convolution algorithm. Then the CNC code for controlling the kinematics of the MRF machine is generated based on the dwell points and dwell times. The simulation and experimental results are shown in Figs. 13, 14 and 15.

Fig. 13 Simulation results of the 1# and 2# optic figures with (a) multi-pitch and (b) uniform pitch tool paths. PVe of S1 ~ S4 are all smaller than 0.01 λ, while PVe of V1, V2, V3, V4 are approximately 0.008, 0.02, 0.04, 0.06 λ

Fig. 14 Polishing results of the 1# and 2# optic figures with (a) multi-pitch and (b) uniform pitch tool paths

Fig. 15 Residual profiles and PSD errors of the simulation and polishing results; the sampling area is part of the subregions. (a) Residual profile of S4 simulation, (b) PSD error of S4 simulation, (c) residual profile of S4 polishing results, (d) PSD error of S4 polishing results, (e) residual profile of V4 simulation, (f) PSD error of V4 simulation, (g) residual profile of V4 polishing results, (h) PSD error of V4 polishing results

Both optics have a surface form of approximately 0.095 um PV after polishing with the multi-pitch and uniform pitch tool paths respectively, in simulation and experiments. With the uniform pitch path, the residual ripples are fairly large and non-uniformly distributed depending on the local removal. The regions with more removal have larger residual ripples. In contrast, the multi-pitch polished optic exhibits superiority in restraining residual ripples. As the higher-removal regions are scanned with a much smaller path pitch, the residual ripples there are restrained. It is noticeable that the optic polished with the multi-pitch path has a slight depression at the edge between adjacent regions, because the tool translation stroke from one path line to another introduces extra removal. However, the depression is so small that it has little effect on the figure error.

Conclusions A multi-pitch tool path was developed for CCOS processes. With this tool path, the removal map is divided into several subregions, and the pitch in each subregion is set individually. In small-removal subregions, the pitch is larger, introducing less extra removal so as to guarantee the convergence of the figure correction, while in large-removal subregions the pitch is smaller so as to decrease the residual ripples. The multi-pitch tool path has been verified to be beneficial for restraining the ripples while maintaining the convergence of the figure correction.

CCOS: Computer-controlled optical surfacing
MRF: Magnetorheological Finishing
MSF: Middle spatial frequency
PSD: Power spectral density
PVe: Peak-to-valley value of the ripple
PVn: Normalized PV value of residual ripple
TIF: Tool influence function

Betti, R., Hurricane, O.A.: Inertial-confinement fusion with lasers [J]. Nat. Phys. 12(5), 435–448 (2016)
Pohl, M., Börret, R.: Simulation of mid-spatials from the grinding process [J]. J. Eur. Opt. Society-Rapid Publ. 11, (2016)
Almeida, R., Börret, R., Rimkus, W., et al.: Polishing material removal correlation on PMMA–FEM simulation [J]. J. Eur. Opt. Society-Rapid Publ.
11, (2016)
Wang, C.J., Cheung, C.F., Ho, L.T., et al.: A novel multi-jet polishing process and tool for high-efficiency polishing [J]. Int. J. Mach. Tools Manuf. 115, 60–73 (2017)
Arnold, T., Boehm, G., Paetzelt, H.: New freeform manufacturing chain based on atmospheric plasma jet machining [J]. J. Eur. Opt. Society-Rapid Publ. 11, (2016)
Tamkin, J.M., Milster, T.D.: Effects of structured mid-spatial frequency surface errors on image performance [J]. Appl. Opt. 49(33), 6522–6536 (2010)
Hu, H., Dai, Y., Peng, X.: Restraint of tool path ripple based on surface error distribution and process parameters in deterministic finishing [J]. Opt. Express 18(22), 22973–22981 (2010)
Wang, C., Yang, W., Ye, S., et al.: Restraint of tool path ripple based on the optimization of tool step size for sub-aperture deterministic polishing [J]. Int. J. Adv. Manuf. Technol. 75(9–12), 1431–1438 (2014)
Dai, Y.F., Shi, F., Peng, X.Q., et al.: Restraint of mid-spatial frequency error in magneto-rheological finishing (MRF) process by maximum entropy method [J]. Sci. China Ser. E: Technol. Sci. 52(10), 3092–3097 (2009)
Wang, C., Wang, Z., Xu, Q.: Unicursal random maze tool path for computer-controlled optical surfacing [J]. Appl. Opt. 54(34), 10128–10136 (2015)
Yu, G., Li, H., Walker, D.: Removal of mid spatial-frequency features in mirror segments [J]. J. Eur. Opt. Society-Rapid Publ. 6, (2011)
Dunn, C.R., Walker, D.D.: Pseudo-random tool paths for CNC sub-aperture polishing and other applications [J]. Opt. Express 16(23), 18942–18949 (2008)
Walker, D.D., Yu, G., Bibby, M., et al.: Robotic automation in computer controlled polishing [J]. J. Eur. Opt. Society-Rapid Publ. 11, (2016)
Zhang, X., Yu, J., Zhang, Z., et al.: Analysis of residual fabrication errors for computer controlled polishing aspherical mirrors [J]. Opt. Eng. 36(12), 3386–3391 (1997)
Cheng, H.B.: Independent variables for optical surfacing systems [M], p. 76. Springer-Verlag, Berlin (2014)
Spaeth, M.L., Manes, K.R., Widmayer, C.C., et al.: The National Ignition Facility wavefront requirements and optical architecture [C]. SPIE 5341, 25–42 (2004)
Wang, C., Yang, W., Wang, Z., et al.: Dwell-time algorithm for polishing large optics [J]. Appl. Opt. 53(21), 4752–4760 (2014)

This work was supported by Science Challenge Project of China, No. JCKY2016212A506–0501. Data will be shared after publication.

School of Mechatronics Engineering, Harbin Institute of Technology, Harbin, 150001, China: Jing Hou, Defeng Liao & Hongxiang Wang
Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang, 621900, China: Jing Hou & Defeng Liao

DL and JH developed the multi-pitch tool path; HW assisted in conducting the experiments. All authors read and approved the final manuscript. Correspondence to Defeng Liao.

Hou, J., Liao, D. & Wang, H. Development of multi-pitch tool path in computer-controlled optical surfacing processes. J. Eur. Opt. Soc.-Rapid Publ. 13, 22 (2017). https://doi.org/10.1186/s41476-017-0050-z

Keywords: Multi-pitch tool path; Middle spatial frequency error; Residual ripple; Removal regions
Spatial and temporal patterns of smoking prevalence in Ontario Gang Meng1, K Stephen Brown1,2,3 & Mary E Thompson3 Smoking prevalence varies over time and place due to various social, environmental and policy influences. However, its spatio-temporal patterns at small-area level are poorly understood. This paper attempts to describe spatio-temporal patterns of adult (age > 18) and youth (age 12–18) smoking prevalence at the municipality level in Ontario, Canada and identify potential socio-demographic, environmental, and policy factors that may affect the patterns. Multilevel temporal and spatio-temporal models were fitted to the Canadian Community Health Surveys (2000–2008) data. In total, approximately 160,000 respondents 12 years of age and over living in Ontario were used for this analysis. The results indicate that during the time period 2003–2008, age-sex stratified smoking prevalence dropped for both the adult and youth populations in Ontario. The tendency is more obvious for youth than for adults. Smoking restriction at home is a leading factor associated with the decline of adult smoking prevalence, but does not play the same role for youth smoking. Despite the overall reduction, smoking prevalence varies considerably across the province and inequalities among municipalities have increased. Clusters of high and low smoking prevalence are both found within the study region. The identified spatial and temporal variations help to indicate problems at the local level and suggest future research directions. Identifying these variations helps to strengthen surveillance and monitoring of smoking behaviours and the evaluation of policy and program development at the small-area level. Tobacco use persists as the number one cause of preventable disease and death in many parts of the world, including Ontario [1]. In association with increased recognition of the harmful health consequences of smoking and increased legislation and policies against smoking, smoking prevalence has decreased consistently in the United States and Canada in recent years. On the other hand, international evidence shows that, in response to increased marketing restrictions, tobacco companies have increased availability of outlets selling tobacco in socially deprived neighbourhoods [2,3], and promotion of tobacco products in specific areas [4]. In Canada, the production and sale of contraband tobacco products has become widespread [5,6]. All these trends may undermine the effectiveness of tobacco control policies and result in a rebound or a halt to the decline of smoking prevalence. Accurate estimation of smoking prevalence over time and over small areas is important for measuring progress towards anti-smoking objectives, revealing underlying social and environmental determinants, evaluating current anti-smoking campaigns and policies, and planning for specific area-based anti-smoking programs. Previous studies have identified that smoking behaviours are not only determined by numerous individual-level factors, but are also affected by various social, economic, environmental and policy factors. For example, it is found that supermarkets and convenience stores (the major retailers of tobacco) are more accessible [7] and more concentrated [8] in socially deprived neighbourhoods, and more tobacco advertisements are found in lower socioeconomic communities [9]. Tobacco companies target their advertising to more predominantly minority communities [10,11]. 
The number of agents displaying no-smoking signs or providing information discouraging smoking may affect the smoking rate in a jurisdiction [12]. Neighbourhood violence [13], socio-economic disadvantage [14,15] and social disorganization [16] may be associated with high smoking prevalence. Urban–rural differences [17] and ethnic spatial segregation [18] may result in significantly different rates of smoking. Tobacco control interventions and policies, such as smoking restrictions in workplaces [19], schools [20], communities and homes [21], restrictions on sales to minors [22], health warnings on tobacco products [23], cigarette price increases [24], and community anti-smoking programs [25], may all lead to smoking behavioural changes. These identified social, environmental and policy-related determinants suggest that smoking prevalence may vary significantly over time and space. However, unequal changes of small-area patterns of smoking prevalence over time and the extent to which social and environmental determinants may affect the inequalities remain largely under-explored. Only a few studies have attempted to describe the contribution of geography to the total variation of smoking in Canada, and potential explanations of such variation by individual, socioeconomic, demographic characteristics, and family anti-smoking norms [26,27]. The purpose of the paper is to evaluate current adult (age over 18) and youth (age 12 to 18) smoking prevalence and spatio-temporal trends over recent years at the municipality-level in Ontario, Canada, and to identify socio-demographic, environmental, and policy determinants that may affect the patterns. The revealed patterns and potential determinants may not only depict the status quo of smoking behaviours in Ontario, but may also predict the risky areas and point out directions for policy decision making to reduce the prevalence and inequalities of smoking among small areas. Data on 165,372 respondents from 2000 to 2008 in Ontario, Canada were collected in the Canadian Community Health Surveys (CCHS) (cycles 1.1, 2.1, 3.1, 2007, and 2008). The CCHS is a repeated cross-sectional survey that collects information related to health status, health behaviours (including smoking), community-oriented health determinants and health care utilization for the Canadian population. The first cycle of CCHS started in 2000 and the data were collected for both 2000 and 2001. The second cycle data were collected in 2003 while the third cycle data were collected in 2005. The surveys after 2006 were conducted yearly. In Ontario, about half of the sample respondents were selected from an area frame and the other half from a list frame of telephone numbers. A stratified two-stage design established for the Canadian Labour Force Survey (LFS) was used for the area frame, while a random sampling process was used given a telephone list in each health region. A full description of the sampling methods is available online at Statistics Canada's website [28]. Based on this sampling design, although samples are not uniformly distributed among small areal units (smaller than health regions), almost all the census sub-divisions (CSDs) contain enough respondents for the estimation of smoking prevalence at this level. Since CSDs are deemed to be equivalent to municipalities of Canada, the data provide an important opportunity to examine the spatial and temporal patterns and determinants of smoking prevalence among municipalities. 
Respondents' ages in the collected CCHS data range from 12 to 102. Smokers were defined as individuals who had smoked more than 100 cigarettes in their lifetimes, and smoked at least once in the previous 30 days. In addition to smoking status, the data contain age, gender, socio-demographic factors, psycho-social factors, policy-related variables, geographical locations, and geographical identifiers (postal codes). Variables used in the current analysis are described in Table 1. Since this is a secondary analysis of Statistics Canada data, no ethics clearance is required by the Office of Research Ethics at the University of Waterloo. All security procedures required by Statistics Canada to access and use the data for analysis were followed.

Table 1 Variable description

Temporal and spatio-temporal analyses To analyze the seemingly downward overall time trend of smoking prevalence in Ontario and potential affecting factors, multi-level temporal models were constructed and fitted using the SAS v9.2 GLIMMIX procedure. Since adult and youth smoking behaviours may be affected by different risk factors, adult (age 19 and over, including 147,118 respondents) and youth (age 12-18, including 18,254 respondents) populations were analyzed separately. Assuming that the time trend of smoking prevalence is not linear over the years, the full temporal models are defined as follows.

For adult i in census subdivision j:

$$ Adult\ smoking\ status \sim binary\left(p_{ij}\right) $$

Level 1 (person level):

$$ logit\left(p_{ij}\right) = \beta_{0j} + \beta_1 AGE_{ij} + \beta_2 SEX_{ij} + \beta_3 MS_{ij} + \beta_4 INCOME_{ij} + \beta_5 UNEMPLOY_{ij} + \beta_6 LOWEDU_{ij} + \beta_7 PLS_{ij} + \beta_8 SBC_{ij} + \beta_9 SMKRWC_{ij} + \beta_{10} SMKRWP_{ij} + \beta_{11} HOME\_RESTRIC_{ij} + \beta_{12} GEO + \beta_{13} YEAR_{ij} + \beta_{14} YEAR_{ij}^2 + \beta_{15} YEAR_{ij}\times HOME\_RESTRIC_{ij} + \beta_{16} YEAR_{ij}^2\times HOME\_RESTRIC_{ij} \quad (1) $$

Level 2 (census subdivision level):

$$ \beta_{0j} = \gamma_0 + v_{0j} $$

For youth i in census subdivision j:

$$ Youth\ smoking\ status \sim binary\left(p_{ij}\right) $$

Level 1 (person level):

$$ logit\left(p_{ij}\right) = \beta_{0j} + \beta_1 AGE_{ij} + \beta_2 SEX_{ij} + \beta_3 INCOME_{ij} + \beta_4 PLS_{ij} + \beta_5 SBC_{ij} + \beta_6 HOME\_RESTRIC_{ij} + \beta_7 GEO + \beta_8 YEAR_{ij} + \beta_9 YEAR_{ij}^2 + \beta_{10} YEAR_{ij}\times HOME\_RESTRIC_{ij} + \beta_{11} YEAR_{ij}^2\times HOME\_RESTRIC_{ij} \quad (2) $$

Level 2 (census subdivision level):

$$ \beta_{0j} = \gamma_0 + v_{0j} $$

where smoking status has a binary distribution. The log odds of the smoking probabilities are regressed on year and year squared. For adults, the model at level 1 (individual level) also includes age, sex, marital status (MS), family income (INCOME), unemployment (UNEMPLOY), low education (LOWEDU), perceived life stress (PLS), sense of belonging to communities (SBC), complete and partial workplace smoking restrictions (SMKRWC and SMKRWP), home smoking restrictions (HOME_RESTRIC), and geographic location (GEO). The GEO variable is included to control for any variations of smoking prevalence between large urban (the Greater Toronto Area), other urban and rural areas.
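To make the link function concrete, the following minimal Python sketch inverts the logit of Equation (1) to turn a fitted linear predictor into a smoking probability; the function name and all coefficient and covariate values are hypothetical, chosen only for illustration.

```python
import math

def smoking_probability(beta0_j, coefs, x):
    """Invert the logit link of Equation (1): p = 1 / (1 + exp(-eta)),
    where eta = beta_0j + sum_k beta_k * x_k and beta_0j already contains
    the CSD-level random intercept gamma_0 + v_0j."""
    eta = beta0_j + sum(b * v for b, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical adult in some CSD: intercept, three coefficients and the
# matching covariates (age, an indicator, an indicator), purely illustrative.
print(smoking_probability(-1.2, [0.02, -0.3, -0.85], [40, 1, 1]))  # ~0.18
```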
For youth, the model includes age, sex, family income, PLS, SBC, home smoking restriction (HOME_RESTRIC), and GEO. Assuming that smoking prevalence is different among municipalities, a random intercept was constructed at the census subdivision level with a fixed average effect γ_0 and a random effect v_0j, which has a normal distribution with a mean of 0. The time trend was tested by incrementally adding explanatory variables in the above models. The overall time trend was first tested by adding in only the time variables and controlling for age and sex (Model 1). The socio-demographic, socio-economic (SES), psycho-social, and workplace smoking restriction variables were then added to the model to test whether or not these variables may have potential impacts on the time trend (Model 2). The variable of home smoking restrictions was further added (Model 3), followed by adding in the interaction terms of time and home smoking restriction (Model 4) to test the potential impact of home smoking restriction on the time trends. Since only smokers were asked the question on home smoking restrictions in the 2000 and 2001 surveys and all respondents were asked the same question in the 2003-2008 surveys, the above models were fitted using the 2003-2008 data only, which include 112,848 adult and 13,863 youth respondents. To test how spatial dependencies are modeled and whether or not there are remaining spatial autocorrelations, spatial dependencies at the area level were also calculated using the global Moran's I [29] on the CSD-level residual, v_0j, after Equations (1) and (2) were fitted. Previous research suggests that the extent of home smoking restrictions is one of the most powerful determinants of cessation [21] and may therefore be an important predictor for smoking reduction. To test the association between smoking restriction and adult smoking cessation, a model similar to that of Equation (1) was also constructed with the variable of successful cessation as the outcome and the year variables removed. Based on the results of the above analysis, the distributions of smoking prevalence among municipalities and the changes of these patterns over time were further constructed and tested using multi-level spatio-temporal modeling (WinBUGS 1.4.3) [30]. The models for adult and youth were constructed as follows.

$$ Smoking\ status \sim binary\left(p_{ij}\right) $$

For adults, Level 1 (person level):

$$ logit\left(p_{ij}\right) = \beta_{0j} + \beta_1 AGE_{ij} + \beta_2 SEX_{ij} + \beta_3 MS_{ij} + \beta_4 INCOME_{ij} + \beta_5 UNEMPLOY_{ij} + \beta_6 LOWEDU_{ij} + \beta_7 PLS_{ij} + \beta_8 SBC_{ij} + \beta_9 SMKRWC_{ij} + \beta_{10} SMKRWP_{ij} + \beta_{11j} YEAR_{ij} + \beta_{12j} HOME\_RESTRIC_{ij} \quad (3) $$

Level 2 (CSD level):

$$ \beta_{0j} = \gamma_0 + v_{0j} + u_{0j},\quad \beta_{11j} = \gamma_1 + v_{1j} + u_{1j},\quad \beta_{12j} = \gamma_2 + v_{2j} + u_{2j} $$

For youth, Level 1 (person level):

$$ logit\left(p_{ij}\right) = \beta_{0j} + \beta_1 AGE_{ij} + \beta_2 SEX_{ij} + \beta_3 INCOME_{ij} + \beta_4 PLS_{ij} + \beta_5 SBC_{ij} + \beta_6 HOME\_RESTRIC_{ij} + \beta_{7j} YEAR_{ij} \quad (4) $$

Level 2 (CSD level):

$$ \beta_{0j} = \gamma_0 + v_{0j} + u_{0j},\quad \beta_{7j} = \gamma_1 + v_{1j} + u_{1j} $$
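As the next paragraph explains, the spatially correlated effects u_0j, u_1j and u_2j in Equations (3)-(4) receive an intrinsic conditional autoregression (CAR) prior with a contiguity neighbourhood structure. A minimal NumPy sketch of that prior's unnormalized log-density is given below; the function name, the toy adjacency matrix W and the precision value tau are our own illustrative assumptions, not part of the paper's WinBUGS code.

```python
import numpy as np

def icar_logdensity(u, W, tau):
    """Unnormalized log-density of an intrinsic CAR prior:
    log p(u) = -(tau / 2) * sum over adjacent pairs i~j of (u_i - u_j)^2,
    which shrinks each u_j toward the mean of its contiguous neighbours."""
    i, j = np.nonzero(np.triu(W))               # each adjacent pair counted once
    return -0.5 * tau * np.sum((u[i] - u[j]) ** 2)

# Toy example: four areas in a row, adjacent pairs weighted 1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
smooth = np.array([0.10, 0.12, 0.14, 0.16])      # spatially smooth surface
rough = np.array([0.10, -0.10, 0.10, -0.10])     # alternating surface
print(icar_logdensity(smooth, W, tau=10.0) > icar_logdensity(rough, W, tau=10.0))
# True: the prior favours spatially smooth random-effect surfaces.
```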
Since the time trend after controlling for the identified variables was almost linear (see the Results section), only a single YEAR variable (rather than YEAR and YEAR²) is included in Equations (3) and (4) for simplicity. The GEO variable is omitted, since the effects of geographical location are already absorbed by \(u_{0j}\), \(u_{1j}\) and \(u_{2j}\). At the CSD level, based on the results of the above temporal models, it is assumed that smoking prevalence, the time influence, and smoking restrictions at home may vary among municipalities for adults, and that smoking prevalence and the time influence may vary among municipalities for youth. The fixed average effects \(\gamma_0\), \(\gamma_1\) and \(\gamma_2\), the uncorrelated random effects \(v_{0j}\), \(v_{1j}\) and \(v_{2j}\), and the spatially correlated random effects \(u_{0j}\), \(u_{1j}\) and \(u_{2j}\) were used for smoking prevalence, the time influence, and smoking restrictions at home, respectively, to analyze the municipal-level variations. Given the generally large size of municipalities, spatial dependencies likely exist only among adjacent municipalities. Therefore, an intrinsic conditional autoregression (CAR) model with a contiguity neighbourhood structure (assuming only adjacent neighbourhoods are spatially auto-correlated) was used for \(u_{0j}\), \(u_{1j}\) and \(u_{2j}\) to model the spatial dependencies at the municipal level; under this prior, each spatial effect, conditional on its neighbours, is normally distributed around the average of the adjacent municipalities' values, with variance inversely proportional to the number of neighbours. After these models were fitted, the spatial variation of smoking prevalence, the time influence, and smoking restrictions at home can be described using \(v_{0j}+u_{0j}\), \(v_{1j}+u_{1j}\), and \(v_{2j}+u_{2j}\), respectively. Since WinBUGS models allow missing data to be treated as stochastic nodes (values to be estimated), all the data obtained from 2000 to 2008 were used to fit the models. The posterior means of the fixed and random effects were used to estimate the spatio-temporal patterns of smoking prevalence. Note that the spatial and temporal interactions are explicitly measured by the spatially dependent coefficient of the YEAR variable, namely \(\beta_{11j}\) for adults and \(\beta_{7j}\) for youth. This coefficient allows spatially unequal changes in smoking prevalence over time to be mapped and dramatic changes to be identified. Since the CCHS is a repeated cross-sectional survey, survey weights were also adjusted for the proposed analysis, which pools data from different cycles. The adjusted weight is constructed as follows:

$$ W = WTS\_M \times \frac{\text{sample size of the current cycle}}{\text{sum of sample sizes over all pooled cycles}} $$

where WTS_M is the CCHS survey weight. This adjustment makes samples from different cycles comparable. The adjusted weights were applied to the temporal models (Equations 1 and 2) so that the estimates are representative of the population in the study area. Because the Bayesian models in WinBUGS cannot incorporate survey weights, the weights were not applied to the spatio-temporal models in Equations (3) and (4).

Temporal and spatio-temporal patterns of adult smoking prevalence

Table 2 shows that the weight-adjusted smoking prevalence dropped from 26.2% in 2000 to 21.3% in 2008. To examine potential determinants of smoking prevalence and the downward trend, the models described in Equations (1) and (2) were fitted, and the results are presented in Table 3. The Moran's I test of global spatial autocorrelation on the CSD-level residuals shows that the spatial autocorrelations for the four adult model residuals are small but statistically significant.
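Two of the computational steps just described can be sketched briefly; both snippets are hypothetical illustrations in Python, with made-up file and column names, not the study's actual code. First, the cycle-pooling weight adjustment defined above:

```python
import pandas as pd

# Hypothetical pooled file with a `cycle` label and the CCHS weight WTS_M.
df = pd.read_csv("cchs_pooled.csv")

cycle_sizes = df.groupby("cycle")["wts_m"].transform("size")  # sample size of each cycle
total_size = len(df)                                          # sum of sample sizes
df["w_adj"] = df["wts_m"] * cycle_sizes / total_size          # adjusted weight W
```

Second, a global Moran's I on CSD-level residuals of the kind reported above could be computed with the PySAL libraries; the boundary shapefile, its row ordering, and the residuals file are likewise assumptions.

```python
import pandas as pd
import libpysal
from esda.moran import Moran

# Contiguity ("queen") weights built from a hypothetical CSD boundary file.
w = libpysal.weights.Queen.from_shapefile("ontario_csd.shp")
w.transform = "r"  # row-standardize

# One fitted residual (v_0j) per CSD, assumed to be in the same order as the shapefile.
resid = pd.read_csv("csd_residuals.csv")["v0j"].to_numpy()

mi = Moran(resid, w)  # permutation-based inference by default
print(f"Moran's I = {mi.I:.3f}, permutation p-value = {mi.p_sim:.3f}")
```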
These small but significant autocorrelations may not affect the model fit, but they indicate the potential existence of local clusters that need to be examined further.

Table 2 Prevalence of smoking and smoking restrictions by year, Ontario (2000–2008, CCHS)

Table 3 Test results on factors affecting the smoking trend (2003–2008)

Using the odds of smoking prevalence in 2003 as the baseline and the fixed coefficient estimates for YEAR and YEAR², Figures 1 and 2 present the temporal trends for adult and youth smoking prevalence between 2003 and 2008. In these figures, five estimated time trends are presented. The predicted percentage changes reported hereafter are calculated from the balanced prevalence, i.e. the predicted smoking prevalence averaged across all levels of the corresponding controlling variables.

Figure 1 Temporal trends for adult smoking prevalence (2003–2008)

Figure 2 Temporal trends for youth smoking prevalence (2003–2008)

Figure 1 shows that the rate of decline of adult smoking prevalence slowed over the years. From 2003 to 2008, the odds ratio falls to 0.92, representing a reduction in the balanced adult smoking prevalence of 1.54%. The downward trend is somewhat reduced, to 0.7% (odds ratio 0.96), after controlling for the SES, psycho-social, and workplace restriction variables, indicating some potential impact of these variables on the reduction in smoking prevalence. The downward trend reverses to a 0.6% increase (odds ratio 1.03) after further controlling for the home smoking restriction variable. This indicates that home smoking restrictions may account for about 1.3% of the adult smoking reduction over the five years between 2003 and 2008. In Model 4 of Equation (1), the interaction terms between the two time variables (YEAR and YEAR²) and smoking restrictions at home are statistically significant, indicating some potential change over the years in the impact of home smoking restrictions on adult smoking prevalence. However, compared to the main effect (−0.8458), the interactions are relatively small. The two interaction terms (YEAR × HOME_RESTRIC and YEAR² × HOME_RESTRIC) largely offset each other, making the changing impact relatively even across the years. For adults with smoking restrictions at home, the odds ratio is nearly the same between 2003 and 2008, indicating no obvious change in smoking prevalence over these years for this group. For adults without smoking restrictions at home, the odds ratio rises by 0.1 from 2003 to 2008, representing a 2.1% increase in smoking prevalence. Therefore, since smoking prevalence did not change for adults in an environment with home smoking restrictions but increased in an environment without them, the overall downward trend of adult smoking prevalence must be associated with the increase in smoke-restricted homes over these years. The data (Table 2) also show that home smoking restrictions increased from 69.6% in 2003 to 78.5% in 2008. Given the above results suggesting that home smoking restrictions may explain the downward trend of adult smoking, a further test of the association between smoking restrictions and cessation was conducted. The results show that partial workplace smoking restrictions (0.155, P < 0.001), complete workplace smoking restrictions (0.036, P < 0.0001), and home smoking restrictions (0.82, P < 0.0001) are all positively associated with successful adult cessation after controlling for age, sex, SES, marital status, psycho-social factors, and geography.
While these results confirm the associations between smoking restrictions and successful cessation, the downward trend of smoking prevalence is associated only with smoking restrictions at home, possibly because the prevalence of smoking restrictions increased at home, but not at the workplace, during the period under study. Table 2 confirms that while home smoking restrictions increased, there was no obvious change in workplace smoking restrictions over the years 2003–2008. To investigate how adult smoking prevalence and the impact of home smoking restrictions change over time and space, the spatio-temporal pattern of adult smoking was estimated using Equation (3). The spatial distributions of the estimated random effects for the YEAR parameter, \(v_{1j}+u_{1j}\), without and with the home smoking restriction variable, are shown in the two maps in Figure 3. The spatial patterns in the first map of Figure 3 show that the smoking rate changes differently from municipality to municipality. After controlling for known factors, adult smoking reduction is found largely around the large metropolitan areas, including the GTA and Ottawa, and in the northwestern part of Ontario. The northwestern area with a relatively light colour on the map is Rainy River and several surrounding CSDs, which contain 1105 adult respondents in the data. A potential "route" of increased smoking rates can be observed starting from the eastern corner of Ontario (the city of Cornwall) and extending along the Ottawa Valley to Northern Ontario (around the city of Greater Sudbury). A few other areas of increased smoking rates can also be observed in south-western Ontario along Lake Erie.

Figure 3 Quintile distribution of CSD-level random time impacts on adult smoking without and with controlling for home smoking restrictions (2000–2008)

Comparing the two maps in Figure 3, although the changes in smoking rates differ among municipalities, the spatial patterns are almost the same in both maps, suggesting no particular clustering of the effect of home smoking restrictions in certain areas. A relatively higher value in each category is seen in the latter map compared to the former. This is consistent with the result in Table 3 that smoking restrictions have a potential impact contributing to the changes in adult smoking prevalence over the years. Thus, smoking restrictions at home may have increased evenly among municipalities over the years. Figure 4 shows how these time changes affect the pattern of adult smoking prevalence from 2000 to 2008. The overall trend shows that smoking prevalence gradually increases as location moves north. The lowest smoking rates remain around the GTA and Ottawa, and the Rainy River area still shows a relatively low smoking rate. As shown in the time-influence map (Figure 3), the highest smoking prevalence had moved toward the Ottawa Valley area by 2008. However, the pattern in 2008 is not as clear as in 2000. Smoking inequalities among CSDs increased even though overall smoking rates decreased. The random effect (\(v_{0j}+u_{0j}\)) ranges from −0.678 to 0.813, representing a variation in the balanced smoking prevalence (the predicted smoking prevalence averaged across all levels of the explanatory variables) from 15.4% to 44.7%.

Figure 4 CSD-level predicted distributions of adult smoking prevalence in 2000 and 2008, without controlling for home smoking restrictions
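To make the link between these log-odds random effects and prevalence figures concrete, applying the inverse-logit transform to a baseline log-odds plus the extreme random effects reproduces the reported range. The baseline value used below (about −1.03, i.e. a balanced prevalence near 26%) is inferred for illustration and is not stated in the paper.

```python
import math

def inv_logit(x: float) -> float:
    """Map log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

base = -1.03  # assumed balanced baseline log-odds (prevalence ~ 26%)
for v_u in (-0.678, 0.0, 0.813):  # extreme and zero CSD-level random effects
    print(f"random effect {v_u:+.3f} -> prevalence {inv_logit(base + v_u):.1%}")
# prints roughly 15%, 26%, and 45%, consistent with the reported 15.4%-44.7% range
```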
Although home smoking restrictions may have increased evenly among municipalities, and their impact on smoking rates may not have changed over the years, that impact may still differ from municipality to municipality. Figure 5 shows the distribution of the random effect of home smoking restrictions on adult smoking prevalence among municipalities. The pattern is somewhat consistent with the pattern of adult smoking prevalence in 2008 (Figure 4) and with the pattern of time impacts (Figure 3). While Figure 3 indicates that the presence of home smoking restrictions does not affect the time influence on adult smoking, the spatial distribution of home smoking restrictions is related to the distribution of smoking prevalence across municipalities. The similarity of Figures 3 and 5 may indicate that some factors affect both smoking rates and smoking restrictions at home.

Figure 5 Quintile distribution of the impact of smoking restrictions at home on adult smoking (2000–2008)

Temporal and spatio-temporal patterns of youth smoking prevalence

For youth smoking, Table 2 shows that the weight-adjusted smoking prevalence dropped from 13.8% in 2000 to 7.2% in 2008. Models similar to those for adults were fitted, and the results are presented in Table 3. The global Moran's I test shows that the spatial autocorrelations for the four youth model residuals are not statistically significant, indicating that the models account well for spatial dependencies. The curves in Figure 2 show a similar pattern of decrease for youth smoking prevalence. The balanced prevalence falls by 2% (odds ratio 0.6) from 2003 to 2008. However, the downward trend barely changes after adding household income, sense of belonging to the local community, perceived life stress, and home smoking restrictions (odds ratio = 0.63, a balanced prevalence decrease of 1.9%). The two interaction terms, YEAR × HOME_RESTRIC and YEAR² × HOME_RESTRIC, do show a potential increase over the years in the effect of home smoking restrictions on youth smoking prevalence. For youth with home smoking restrictions, prevalence falls by 2.9% (odds ratio = 0.51) from 2003 to 2008. For youth without home smoking restrictions, prevalence first falls and then rises again, to a net increase of 0.4% (odds ratio = 1.08) by 2008. Although home smoking restrictions do not explain the downward trend of youth smoking, the potential restrictive impact of a smoke-restricted home environment on youth smoking did increase over these years. The spatial distribution of the random time influence, \(v_{1j}+u_{1j}\), estimated using Equation (4), is mapped in Figure 6. The map shows somewhat different patterns from the adult time influence (Figure 3). The highest youth smoking reductions over the years are around the GTA, Essex County, the City of Kingston, the City of Timmins, and the Town of Rainy River. Several areas show the highest youth smoking increases, including the area around Brantford (where reserves marketing cigarettes are located), the counties of Hastings and Prince Edward, and a few other areas in Northern Ontario.

Figure 6 Quintile distribution of CSD-level random time impacts on youth smoking (2000–2008)

Figure 7 shows the CSD-level changes in youth smoking prevalence from 2000 to 2008. While overall smoking prevalence is reduced, the pattern does not change markedly.
The overall pattern shows that youth smoking rates are lower in the south than in the north. In 2008, higher smoking rates can be found in the Thunder Bay and Algoma districts, around the Brantford area, and somewhat along the Ottawa Valley. The range of the log-odds differences in smoking rates is from −1.57 to 0.3, representing a balanced prevalence range from 2.7% to 15.4%. Unlike adult smoking prevalence, youth smoking shows somewhat reduced inequality over the years. This may indicate the success of provincial-level youth smoking intervention programs or policies.

Figure 7 CSD-level predicted distributions of youth smoking prevalence in 2000 and 2008

The case study analysis shows that both adult and youth smoking prevalence declined in Ontario over the recent decade (Table 2). Beyond the raw prevalence, comparison of the solid black lines (Model 1) in Figures 1 and 2 shows that youth prevalence fell faster than adult prevalence. This trend may indicate the success of youth smoking prevention strategies, programs, or policies in Ontario in recent years [31]. Current cessation systems in Ontario have difficulty reaching youth and young adults, and the proportion of youth smokers who tried to quit in the past 12 months has declined since 1999 [31]. Despite that, youth smoking prevalence still fell more than adult prevalence, indicating the success of youth smoking prevention in Ontario. This fact may also indicate the relative impact of prevention programs in comparison with cessation programs. Since smoking is addictive, cessation is difficult to achieve. Even if the same programs are available to youth and adults, youth may benefit more from never starting to smoke. Smoking restrictions at home are a leading factor associated with the decline of adult smoking prevalence, but they do not appear to be a factor in youth smoking changes. While the analysis does indicate that smoking restrictions at home are associated with more attempts to quit smoking, the causal relationship needs further testing, since some social, environmental, or policy determinants may drive both the reduction in smoking rates and the increase in home smoking restrictions. For example, quitters may ban smoking in their homes as an aid to staying quit. Nevertheless, since home smoking restrictions are not yet part of provincial legislation, the increase in smoking restrictions at home reflects an overall improvement in the public's conception of the harm and social unacceptability of smoking. This conceptual change may be the underlying reason for both smoking reduction and stricter rules on smoke-free homes. Further evidence for this explanation is that home smoking restrictions increased faster than smoking prevalence decreased (Table 2), indicating that the conceptual change arrived before the change in smoking behaviours. Although home smoking restrictions do not account for the drop in youth smoking (possibly because youth rarely smoke at home), the analysis shows that the impact of home smoking restrictions on youth smoking has increased (Model 4 in Table 3). This may be another indication of the potential success of youth smoking interventions over the years. As discussed earlier, current comprehensive youth tobacco control programs typically focus on reducing the initiation and prevalence of smoking among children and youth.
Innovative multi-media campaigns have also been launched to prevent smoking among youth in Ontario [32]. These interventions may all be potential reasons for the drop in youth smoking prevalence and the increased impact of a smoke-free home environment on youth smoking behaviours. Future research is needed to evaluate the impacts of youth smoking policies on local youth smoking behaviours and prevalence. Geographically, the overall patterns show that northern Ontario residents, both adults and youth, have higher smoking prevalence than their southern Ontario counterparts. Since these patterns were obtained after controlling for SES, psycho-social factors, and smoking restrictions, potential reasons may include the large proportion of Aboriginal population in northern communities and/or differences between the southern Ontario population and the relatively remote northern communities in how the social and health impacts of smoking are perceived. Future research may be needed to characterize these conceptual differences between northern and southern Ontario residents, or between residents of geographically connected and remote areas. While the drop in youth smoking rates was not explained by known factors such as SES, psycho-social factors, and smoking restrictions, the map in Figure 6 does show where the highest reductions occurred. It is suspected that the reduction of smoking prevalence in these areas may be due to the successful implementation of local anti-smoking programs or policies, such as school-based programs [31]. The effective implementation of provincial anti-smoking policies and health promotion strategies relies on local Public Health Units to educate, provide appropriate resources to, and communicate with the public through various smoking prevention, protection, and cessation programs. These, together with other local interventions, mass-media campaigns, and/or tobacco promotions, may lead to the variation in local smoking prevalence. It can be observed in Figures 4 and 7 that in some places where adult and youth smoking prevalence was high in 2000, the prevalence was even higher in 2008. This suggests that there are areas where existing policies have had no effect. The maps also show that, compared to the adult and youth smoking prevalences in 2000, smoking inequalities among municipalities had increased by 2008, although overall smoking rates decreased. Some identified clusters of high smoking prevalence, such as the route starting from Cornwall and running along the Ottawa Valley, may indicate potential routes of contraband sales [33]. This phenomenon is somewhat consistent with the temporal models, which also indicate that adult and youth smoking prevalence rose from 2003 to 2008 in homes without smoking restrictions (Figures 1 and 2). These upward trends of smoking prevalence in pro-smoking environments are not explained in the current study after controlling for demographic, socio-economic, psycho-social, and smoking restriction factors. However, setting aside the impacts of the above factors, they may indicate that tobacco sellers' efforts to promote tobacco products have never stopped, and such efforts may be a potential explanation for these prevalence increases. Further research may be needed to explore the interaction of tobacco sales and pro-smoking environments on smoking behaviours.
Large metropolitan areas, such as the GTA and Ottawa, have the lowest smoking prevalence, while smaller cities have relatively higher smoking prevalence than rural areas. Unlike in other areas, although the GTA shows relatively more smoking reduction, smoking restriction at home is not a leading factor there. This may suggest that the reduction of smoking rates in the GTA is not driven by people's consciousness of the harm of smoking. The credit is often given to recent immigrants, since the GTA has the largest immigrant population in Canada and recent immigrants have lower smoking rates than non-immigrants [34]. These rural-urban and large-small urban differences need to be addressed in future research. The study illustrates a more general phenomenon: the decline in adult and youth smoking prevalence (as shown in Table 2) is actually the averaged result of a dynamic process in which both increasing and decreasing trends exist at different times and places. The temporal and spatio-temporal analyses used in this research provide an effective method for mapping the variation and interaction of time and place in their impacts on smoking prevalence. The identified spatial and temporal variations help to indicate problems at the local level and suggest future research directions. Identifying these variations helps to strengthen the surveillance and monitoring of smoking behaviours and the evaluation of policy and program development at the small-area level. The identified clusters of higher or lower smoking prevalence in particular times and places may help in identifying best practices and area-specific programs for future smoking reduction.

References

1. Ministry of Health Promotion. Building on our gains, taking action now: Ontario's tobacco control strategy for 2011–2016. 2011. http://www.mhp.gov.on.ca/en/smoke-free/TSAG%20Report.pdf
2. Pearce J, Hiscock R, Moon G, Barnett R. The neighbourhood effects of geographical access to tobacco retailers on individual smoking behaviour. J Epidemiol Community Health. 2009;63:69–77.
3. Hyland A, Travers MJ, Cummings KM, Bauer J, Alford T, Wieczorek WF. Tobacco outlet density and demographics in Erie County, New York. Am J Public Health. 2003;93:1075–6.
4. Reid RJ, Peterson NA, Lowe JB, Hughey J. Tobacco outlet density and smoking prevalence: does racial concentration matter? Drugs Educ Prev Policy. 2005;12:233–8.
5. Sweanor DT, Martial LR. The smuggling of tobacco products: lessons from Canada. Ottawa: Non-Smokers' Rights Association/Smoking and Health Action Foundation; 1994.
6. Global Tobacco Control Forum. Canada's implementation of the framework convention on tobacco control: a civil society's 'shadow report'. 2010. http://www.smoke-free.ca/pdf_1/FCTC-Shadow-2010-Canada.pdf
7. Pearce J, Witten K, Hiscock R, Blakely T. Are socially disadvantaged neighbourhoods deprived of health-related community resources? Int J Epidemiol. 2007;36:348–55.
8. Pearce J, Day P, Witten K. Neighbourhood provision of food and alcohol retailing and social deprivation in urban New Zealand. Urban Policy Res. 2008;26:213–27.
9. Barbeau E, Wolin K, Naumova E, Balbach E. Tobacco advertising in communities: associations with race and class. Prev Med. 2005;40:16–22.
10. Hackbarth D, Schnopp-Wyatt D, Katz D, Williams J, Silvestri B, Pfleger M. Collaborative research and action to control geographic placement of outdoor advertising of alcohol and tobacco products in Chicago. Public Health Rep. 2001;116:558–67.
11. Luke D, Esmundo E, Bloom Y. Smoke signs: patterns of tobacco billboard advertising in a metropolitan region. Tob Control. 2000;9:16–23.
12. Frohlich KL, Potvin L, Gauvin L, Chabot P. Youth smoking initiation: disentangling context from composition. Health Place. 2002;8:155–66.
13. Ganz M. The relationship between external threats and smoking in central Harlem. Am J Public Health. 2000;90:367–71.
14. Reijneveld SA. Neighbourhood socioeconomic context and self-reported health and smoking: a secondary analysis of data on seven cities. J Epidemiol Community Health. 2002;56:935–42.
15. Ross CE. Walking, exercise, and smoking: does neighborhood matter? Soc Sci Med. 2000;51:265–74.
16. Chuang YC, Li YS, Wu YH, Chao HJ. A multilevel analysis of neighborhood and individual effects on individual smoking and drinking in Taiwan. BMC Public Health. 2007;7:151.
17. Volzke H, Neuhauser H, Moebus S, Baumert J, Berger K, Stang A, et al. Urban-rural disparities in smoking behaviour in Germany. BMC Public Health. 2006;6:146.
18. Landrine H, Klonoff EA. Racial segregation and cigarette smoking among Blacks: findings at the individual level. J Health Psychol. 2000;5:211–9.
19. Parry O, Platt S. Smokers at risk: implications of an institutionally bordered risk-reduced environment. Health Place. 2000;6:117–23.
20. Kumar R, O'Malley PM, Johnston LD. School tobacco control policies related to students' smoking and attitudes toward smoking: national survey results, 1999–2000. Health Educ Behav. 2005;32:780–94.
21. Shopland DR, Anderson CM, Burns DM. Association between home smoking restrictions and changes in smoking behaviour among employed women. J Epidemiol Community Health. 2006;60 Suppl 2:44–50.
22. Altman DG, Wheelis AY, McFarlane M, Lee HR, Fortmann SP. The relationship between tobacco access and use among adolescents: a four community study. Soc Sci Med. 1999;48:759–75.
23. Koval JJ, Aubut JA, Pederson LL, O'Hegarty M, Chan SSH. The potential effectiveness of warning labels on cigarette packages: the perceptions of young adult Canadians. Can J Public Health. 2005;96:353–6.
24. Colman G, Remler DK. Vertical equity consequences of very high cigarette tax increases: if the poor are the ones smoking, how could cigarette tax increases be progressive? Cambridge, MA: National Bureau of Economic Research; 2004. Working Paper 10906.
25. Vartiainen E, Pallonen U, McAlister A, Puska P. Eight-year follow-up results of an adolescent smoking prevention program: the North Karelia Youth Project. Am J Public Health. 1990;80:78–9.
26. Corsi DJ, Chow CK, Lear SA, Subramanian SV, Teo KK, Boyle MH. Smoking in context: a multilevel analysis of 49,088 communities in Canada. Am J Prev Med. 2012;43:601–10.
27. Corsi DJ, Lear SA, Chow CK, Subramanian SV, Boyle MH, Teo KK. Socioeconomic and geographic patterning of smoking behaviour in Canada: a cross-sectional multilevel analysis. PLoS One. 2013;8(2):e57646. doi:10.1371/journal.pone.0057646.
28. Statistics Canada. Other reference periods - Canadian Community Health Survey - Annual Component (CCHS). 2014. http://www23.statcan.gc.ca/imdb/p2SV.pl?Function=getInstanceList&SDDS=3226&InstaId=15282&SurvId=144171. Accessed November 3, 2014.
29. Moran PAP. The interpretation of statistical maps. J Royal Stat Soc B. 1948;10:243–51.
30. Lunn DJ, Thomas A, Best N, Spiegelhalter D. WinBUGS - a Bayesian modelling framework: concepts, structure, and extensibility. Stat Comput. 2000;10:325–37.
31. Smoke-Free Ontario Scientific Advisory Committee. Evidence to guide action: comprehensive tobacco control in Ontario. Ontario: Ontario Agency for Health Protection and Promotion; 2010.
32. Ministry of Health Promotion. Smoke-Free Ontario Strategy. 2006. http://hnhu.org/wp-content/uploads/smoke_free_ontario_strategy1.pdf. Accessed October 30, 2014.
33. Royal Canadian Mounted Police. Contraband Tobacco Enforcement Strategy. Catalogue no. PS61-11/2010. 2011. http://www.rcmp-grc.gc.ca/pubs/tobac-tabac/2011-contr-strat/2011-eng.pdf. Accessed November 3, 2014.
34. Toronto Public Health. Toronto's Health Status Indicator Series: Smoking by immigrant status. 2011. http://www.toronto.ca/health/map/index.htm. Accessed November 3, 2014.

Acknowledgements
The South-Western Ontario Research Data Centre of Statistics Canada allowed access to the CCHS data.

Author information
PROPEL Center for Population Health Impact, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1, Canada: Gang Meng & K Stephen Brown
Ontario Tobacco Research Unit, Toronto, Ontario, Canada: K Stephen Brown
Department of Statistics and Actuarial Science, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1, Canada: K Stephen Brown & Mary E Thompson
Correspondence to Gang Meng.

Authors' contributions
GM contributed to the study design and drafting, and designed and performed the statistical analysis. KSB contributed to the study design, drafting, and revision of the article. MET contributed to the study design, drafting, and revision of the article. All authors read and approved the final manuscript.

Meng, G., Brown, K.S. & Thompson, M.E. Spatial and temporal patterns of smoking prevalence in Ontario. BMC Public Health 15, 182 (2015). https://doi.org/10.1186/s12889-015-1526-7

Keywords: Spatio-temporal modeling; Canadian Community Health Surveys
Innate immune response in insects: recognition of bacterial peptidoglycan and amplification of its recognition signal
Kim, Chan-Hee; Park, Ji-Won; Ha, Nam-Chul; Kang, Hee-Jung; Lee, Bok-Luel (p. 93)
The major cell wall components of bacteria are lipopolysaccharide, peptidoglycan, and teichoic acid. These molecules are known to trigger strong innate immune responses in the host. The molecular mechanisms by which the host recognizes the peptidoglycan of Gram-positive bacteria and amplifies these peptidoglycan recognition signals to mount an immune response remain largely unclear. Recent, elegant genetic and biochemical studies are revealing details of the molecular recognition mechanism and the signalling pathways triggered by bacterial peptidoglycan. Here we review recent progress in elucidating the molecular details of peptidoglycan recognition and its signalling pathways in insects. We also attempt to evaluate the importance of this issue for understanding innate immunity.

CLIP-domain serine proteases in Drosophila innate immunity
Jang, In-Hwan; Nam, Hyuck-Jin; Lee, Won-Jae (p. 102)
Extracellular proteases play an important role in a wide range of host physiological events, such as food digestion, extracellular matrix degradation, coagulation and immunity. Among the large extracellular protease family, serine proteases that contain a "paper clip"-like domain, and are therefore referred to as CLIP-domain serine proteases (clip-SPs), have been found to be involved in unique biological processes, such as immunity and development. Despite the increasing amount of biochemical information available regarding the structure and function of clip-SPs, their in vivo physiological significance is not well known due to a lack of genetic studies. Recently, Drosophila has been shown to be a powerful genetic model system for dissecting the biological functions of the clip-SPs at the organism level. Here, the current knowledge regarding Drosophila clip-SPs is summarized, and future research directions for evaluating the role that clip-SPs play in Drosophila immunity are discussed.

Proteomic analysis of heat-stable proteins in Escherichia coli
Kwon, Soon-Bok; Jung, Yun-A; Lim, Dong-Bin (p. 108)
Some proteins of E. coli are stable at temperatures significantly higher than 49°C, the maximum temperature at which the organism can grow. The heat stability of such proteins may be a property inherent to their structures, or it may have been acquired through evolution for their specialized functions. In this study, we describe the identification of 17 heat-stable proteins from E. coli. Approximately one-third of these proteins were recognized as having functions in the protection of other proteins against denaturation. These included chaperonins (GroEL and GroES), molecular chaperones (DnaK and FkpA) and peptidyl prolyl isomerases (trigger factor and FkpA). Another common feature was that five of these proteins (GroEL, GroES, AhpC, RibH and ferritin) have been shown to form macromolecular structures. These results indicate that the heat stability of certain proteins may have evolved for their specialized functions, allowing them to cope with harsh environments, including high temperatures.
Molecular cloning and characterization of 1-hydroxy-2-methyl-2-(E)-butenyl 4-diphosphate reductase (CaHDR) from Camptotheca acuminata and its functional identification in Escherichia coli
Wang, Qian; Pi, Yan; Hou, Rong; Jiang, Keji; Huang, Zhuoshi; Hsieh, Ming-shiun; Sun, Xiaofen; Tang, Kexuan (p. 112)
Camptothecin is an anti-cancer monoterpene indole alkaloid. The gene encoding 1-hydroxy-2-methyl-2-(E)-butenyl 4-diphosphate reductase (designated CaHDR), the last catalytic enzyme of the MEP pathway for terpenoid biosynthesis, was isolated from camptothecin-producing Camptotheca acuminata. The full-length cDNA of CaHDR was 1686 bp, encoding 459 amino acids. Comparison of the cDNA and genomic DNA of CaHDR revealed that there is no intron in genomic CaHDR. Southern blot analysis indicated that CaHDR belongs to a low-copy gene family. RT-PCR analysis revealed that CaHDR is expressed constitutively in all tested plant organs, with the highest expression level in flowers, and that its expression could be induced in the callus of C. acuminata by 100 μM methyl jasmonate (MeJA) but not by 100 mg/L salicylic acid (SA). Complementation of the Escherichia coli ispH mutant MG1655 by CaHDR demonstrated its function.

Cytotoxic activity and probable apoptotic effect of Sph2, a sphingomyelinase hemolysin from Leptospira interrogans strain Lai
Zhang, Yi-xuan; Geng, Yan; Yang, Jun-wei; Guo, Xiao-kui; Zhao, Guo-ping (p. 119)
Our previous work confirmed that Sph2/LA1029 is a sphingomyelinase-like hemolysin of Leptospira interrogans serogroup Icterohaemorrhagiae serovar Lai. Characteristics of both the hemolytic and cytotoxic activities of Sph2 are reported in this paper. Sph2 is a heat-labile neutral hemolysin with hemolytic behavior on sheep erythrocytes similar to that of the typical sphingomyelinase C of Staphylococcus aureus. The cytotoxic activity of Sph2 was shown in mammalian cells such as BALB/c mouse lymphocytes and macrophages, as well as human L-02 liver cells. Transmission electron microscopy showed that Sph2-treated BALB/c mouse lymphocytes were swollen and ruptured, with membrane breakage. They also demonstrated condensed chromatin as a high-density area. Cytoskeletal changes were observed via fluorescence confocal microscopy in Sph2-treated BALB/c mouse lymphocytes and macrophages, in which both of the cytokines IL-1β and IL-6 were induced. In addition, typical apoptotic morphological features were observed in Sph2-treated L-02 cells via transmission electron microscopy, and the percentage of apoptotic cells increased after Sph2 treatment, as detected by flow cytometry. Therefore, Sph2 is likely an apoptosis-inducing factor in human L-02 liver cells.

Detection for folding of the thrombin binding aptamer using label-free electrochemical methods
Cho, Min-Seon; Kim, Yeon-Wha; Han, Se-Young; Min, Kyung-In; Rahman, Md. Aminur; Shim, Yoon-Bo; Ban, Chang-Ill (p. 126)
The folding of an aptamer immobilized on an Au electrode was successfully detected using label-free electrochemical methods. A thrombin-binding DNA aptamer was used as a model system in the presence of various monovalent cations. Impedance spectra showed that the extent to which monovalent cations assist folding of the aptamer is ordered as K⁺ > NH₄⁺ > Na⁺ > Cs⁺. Our XPS analysis also showed that K⁺ and NH₄⁺ cause a conformational change of the aptamer in which it forms a stable complex with these monovalent ions.
Impedance results for the interaction between the aptamer and thrombin indicated that thrombin interacts more with the folded aptamer than with the unfolded aptamer. The EQCM technique provided a quantitative analysis of these results. In particular, the present impedance results showed that thrombin participates to some extent in folding of the aptamer, and XPS analysis confirmed that thrombin stabilizes and induces the folding of the aptamer.

AtbZIP16 and AtbZIP68, two new members of GBFs, can interact with other G group bZIPs in Arabidopsis thaliana
Shen, Huaishun; Cao, Kaiming; Wang, Xiping (p. 132)
AtbZIP16 and AtbZIP68 are two putative G group bZIP transcription factors in Arabidopsis thaliana; the other three members of the G group bZIPs are GBF1–3, which can bind the G-box. Members of the G group share a conserved protein structure: a highly homologous basic region and a proline-rich domain in the N-terminal region. Here, we report that AtbZIP16 and AtbZIP68 could bind cis elements with an ACGT core, such as the G-box, Hex, C-box and As-1, but with different binding affinities, ordered from high to low as G-box > Hex > C-box > As-1. AtbZIP16 and AtbZIP68 could form homodimers and form heterodimers with other members of the G group. The N-terminal proline-rich domain of AtbZIP16 had transactivation activity in yeast cells, while that of AtbZIP68 did not. AtbZIP16 and AtbZIP68 GFP fusion proteins localized in the nucleus of onion epidermal cells. These results indicate that AtbZIP16 and AtbZIP68 are two new members of the GBFs. In Arabidopsis, AtbZIP16 and AtbZIP68 may also participate in the light-responsive process in which GBF1–3 are involved.

Guinea pig cysteinyl leukotriene receptor 2 (gpCysLT2) mediates cell proliferation and intracellular calcium mobilization by LTC4 and LTD4
Ito, Yoshiyuki; Hirano, Minoru; Umemoto, Noriko; Zang, Liqing; Wang, Zhipeng; Oka, Takehiko; Shimada, Yasuhito; Nishimura, Yuhei; Kurokawa, Ichiro; Mizutani, Hitoshi; Tanaka, Toshio (p. 139)
We cloned and pharmacologically characterized the guinea pig cysteinyl leukotriene (CysLT) 2 receptor (gpCysLT2). gpCysLT2 consists of 317 amino acids with 75.3%, 75.2%, and 73.3% identity to those of humans, mice and rats, respectively. The gpCysLT2 gene is highly expressed in the lung, moderately in eosinophils, skin, spleen, stomach and colon, and modestly in the small intestine. CysLTs accelerated the proliferation of gpCysLT2-expressing HEK293 cells. Leukotriene C4 (LTC4) and leukotriene D4 (LTD4) enhanced cell proliferation more strongly than Bay-u9773, a CysLT2-selective partial agonist and a nonselective antagonist for CysLT receptors. Bay-u9773 did not antagonize the cell proliferation induced by LTC4 and LTD4. Despite the equipotency of the mitogenic effect among these chemicals, calcium mobilization (CM) levels were variable (LTC4 > LTD4 >> Bay-u9773), and Bay-u9773 antagonized the CM induced by LTC4. Moreover, the Gi/o inhibitor pertussis toxin completely inhibited agonist-induced cell proliferation. These results reveal that cell proliferation via CysLT2 signaling is mediated by Gi/o signaling but is independent of calcium mobilization.

Expression profile identifies novel genes involved in neuronal differentiation
Kim, Jung-Hee; Lee, Tae-Young; Yoo, Kyung-Hyun; Lee, Hyo-Soo; Cho, Sun-A; Park, Jong-Hoon (p. 146)
In the presence of NGF, PC12 cells extend neuronal processes, cease cell division, become electrically excitable, and undergo several biochemical changes that are detectable in developing sympathetic neurons.
We investigated the expression pattern of apoptosis-related genes at each stage of neuronal differentiation using a cDNA microarray containing 320 apoptosis-related rat genes. By comparing the expression patterns through time-series analysis, we identified candidate genes that appear to regulate neuronal differentiation. Among the candidate genes, HO2 was selected by real-time PCR and Western blot analysis. To identify the roles of the selected genes in the stages of neuronal differentiation, transfection of HO2 siRNA into PC12 cells was performed. Down-regulation of HO2 expression caused a reduction in neuronal differentiation in PC12 cells. Our results suggest that the HO2 gene could be related to the regulation of neuronal differentiation levels.

Increased expression of the F1Fo ATP synthase in response to iron in heart mitochondria
Kim, Mi-Sun; Kim, Jin-Sun; Cheon, Choong-Ill; Cho, Dae-Ho; Park, Jong-Hoon; Kim, Keun-Il; Lee, Kyo-Young; Song, Eun-Sook (p. 153)
The objective of the present study was to identify mitochondrial components associated with the damage caused by iron to the rat heart. Decreased cell viability was assessed by the increased presence of lactate dehydrogenase (LDH) in serum. To assess the functional integrity of mitochondria, reactive oxygen species (ROS), the respiratory control ratio (RCR), ATP and chelatable iron content were measured in the heart. Chelatable iron increased 15-fold in the mitochondria, and ROS increased by 59%. Deterioration of mitochondrial function in the presence of iron was demonstrated by a low RCR (46% decrease) and low ATP content (96% decrease). Using two-dimensional gel electrophoresis (2DE), we identified alterations in 21 mitochondrial proteins triggered by iron overload. Significantly, expression of the α, β, and d subunits of F1Fo ATP synthase increased along with the loss of ATP. This suggests that the F1Fo ATP synthase participates in iron metabolism.

Hepatitis B virus X protein enhances NFκB activity through cooperating with VBP1
Kim, Sang-Yong; Kim, Jin-Chul; Kim, Jeong-Ki; Kim, Hye-Jin; Lee, Hee-Min; Choi, Mi-Sun; Maeng, Pil-Jae; Ahn, Jeong-Keun (p. 158)
Hepatitis B virus X protein (HBx) is essential for hepatitis B virus infection and exerts pleiotropic effects on various cellular machineries. HBx has also been shown to act as an indirect transcriptional transactivator of various viral and cellular promoters. In addition, HBx is involved in the development of various liver diseases, including hepatocellular carcinoma. However, the mechanism of HBx in hepatocellular carcinogenesis remains largely unknown. In this study, to identify possible new cellular proteins interacting with HBx, we carried out a yeast two-hybrid assay. We obtained several possible cellular partners, including VBP1, a binding factor for the VHL tumor suppressor protein. The direct physical interaction between HBx and VBP1 in vitro and in vivo was confirmed by immunoprecipitation assay. In addition, we found that VBP1 facilitates HBx-induced NFκB activation and cell proliferation. These results implicate an important role for HBx in the development of hepatocellular carcinoma through its interaction with VBP1.

Protective effect of p53 in vascular smooth muscle cells against nitric oxide-induced apoptosis is mediated by up-regulation of heme oxygenase-2
Kim, Young-Myeong; Choi, Byung-Min; Kim, Yong-Seok; Kwon, Young-Guen; Kibbe, Melina R.; Billiar, Timothy R.; Tzeng, Edith (p. 164)
The tumor suppressor gene p53 regulates apoptotic cell death and the cell cycle.
In this study, we investigated the role of p53 in nitric oxide (NO)-induced apoptosis in vascular smooth muscle cells (VSMCs). We found that the NO donor S-nitroso-N-acetyl-penicillamine (SNAP) increased apoptotic cell death in p53-deficient VSMCs compared with wild-type cells. The heme oxygenase (HO) inhibitor tin protoporphyrin IX reduced the resistance of wild-type VSMCs to SNAP-induced cell death. SNAP promoted HO-1 expression in both cell types. HO-2 protein was increased only in wild-type VSMCs following SNAP treatment; however, similar levels of HO-2 mRNA were detected in both cell types. SNAP significantly increased the levels of non-heme iron and dinitrosyl iron-sulfur clusters in wild-type VSMCs compared with p53-deficient VSMCs. Moreover, pretreatment with FeSO4 or the carbon monoxide donor CORM-2, but not biliverdin, significantly protected p53-deficient cells from SNAP-induced cell death compared with normal cells. These results suggest that wild-type VSMCs are more resistant to NO-mediated apoptosis than p53-deficient VSMCs through p53-dependent up-regulation of HO-2.

Protein transduction of an antioxidant enzyme: subcellular localization of superoxide dismutase fusion protein in cells
Kim, Dae-Won; Kim, So-Young; Lee, Hwa; Lee, Yeum-Pyo; Lee, Min-Jung; Jeong, Min-Seop; Jang, Sang-Ho; Park, Jin-Seu; Lee, Kil-Soo; Kang, Tae-Cheon; Won, Moo-Ho; Cho, Sung-Woo; Kwon, Oh-Shin; Eum, Won-Sik; Choi, Soo-Young (p. 170)
In protein therapy, it is important for an exogenous protein to be delivered to its target subcellular location. To transduce a therapeutic protein into a specific subcellular location, we synthesized nuclear localization signal (NLS) and membrane translocation sequence (MTS) peptides and produced genetic in-frame SOD fusion proteins. The purified SOD fusion proteins were efficiently transduced into mammalian cells with enzymatic activity. Immunofluorescence and Western blot analysis revealed that the SOD fusion proteins successfully transduced into the nucleus and the cytosol of the cells. The viability of cells treated with paraquat was markedly increased by the transduced fusion proteins. Thus, our results suggest that these peptides should be useful for targeting the specific localization of therapeutic proteins in various human diseases.
Multidimensional construct of life satisfaction in older adults in Korea: a six-year follow-up study
Hyun Ja Lim, Dae Kee Min, Lilian Thorpe & Chel Hee Lee

Aging raises wide-ranging issues within social, economic, welfare, and health care systems. Life satisfaction (LS) is regarded as an indicator of quality of life which, in turn, is associated with mortality and morbidity in older adults. The objective of this study was to identify the relevant predictors of life satisfaction and to investigate changes in a multidimensional construct of LS over time. This analysis utilized data from the large-scale, nationally representative Korean Retirement and Income Study (KReIS), a longitudinal survey conducted biennially from 2005 to 2011. Outcome measures were the degree of satisfaction with health, economic status, housing, neighbor relationships, and family relationships. GEE models were used to investigate changes in satisfaction within each of the five domains. Of a total of 3531 individuals aged 65 or older, 2083 (59%) were women, and the mean age was 72 (s.d. = ±6) years. The majority had a spouse (60.8%) and lived in a rural area (58%). The analysis showed that physical and mental health were consistently and significantly associated with satisfaction in each of the domains after adjusting for potential confounders. Living in a rural area and living with a spouse were related to satisfaction with economic status, housing, family relationships, and neighbor relationships, compared to living in an urban area and living without a spouse; the only outcome unrelated to these predictors was health satisfaction. Female and rural residents reported greater economic satisfaction than male and urban residents. Living in an apartment was associated with 1.32 times greater odds of economic satisfaction compared to living in a detached house (95% CI: 1.14–1.53; p < 0.0001). Economic satisfaction was also 1.62 times more likely among individuals living with a spouse than in single households (95% CI: 1.35–1.96; p < 0.0001). The financial stress index was found to be a significant predictor of satisfaction with family relationships. Our study indicates that examining a single domain of LS, or overall LS alone, will miss many important aspects of LS, as age-related LS is multi-faceted and complicated. While most studies focus on overall life satisfaction, considering life satisfaction as multidimensional is essential to gaining a complete picture.

In 2015, people aged 60 or over made up 12.3% (901 million) of the 7.3 billion global population, a proportion that is growing at a rate of 3.26% per year [1]. This number is projected to rise to 1.4 billion by 2030 and 2.1 billion by 2050. Compared to other nations, Asian countries such as Japan, China, and South Korea are recognized to be aging more rapidly [1]. Global aging raises wide-ranging issues within social, economic, welfare, and health care systems, which impact older adults and their families [2]. Life satisfaction (LS) is a form of subjective well-being and is regarded as an indicator of quality of life. LS is influenced by individual demographic and clinical characteristics, as well as age [3–5]. Especially in the older population, LS should be considered a multidimensional construct, including domains such as physical health, mental health, socio-economic status, social and family relationships, and the environment [6, 7]. As these domains are known to impact health, LS might be used to predict mortality and morbidity in older adults [8–10].
Since both LS and health are frequently thought to decline with age, LS is a popular outcome variable for evaluating older people's lives and typically reflects broad domains in community-based and population-based studies of older adults [6, 11]. Although the assumption that LS declines in older age seems self-evident, particularly as health conditions deteriorate and living environments change, research to date has been less definitive. Age- and sex-specific changes in LS among older adults remain unclear, and studies show inconsistent results. Some studies have found that age was positively correlated with LS [12–15], while other studies have detected a significant decline in LS over time [4, 16–19]. Still other studies have found stable levels of LS [20, 21]. Older women have been found by some to have lower levels of LS than older men [3, 22–24]. However, a few studies have also found that neither age nor gender was associated with LS [5, 25]. Physical and mental health have been significantly associated with LS in the older population [26–31]. Older adults who have retained their physical abilities and can perform activities of daily living tend to have higher LS, while those who perceive their health as poor tend to have lower LS. This mirrors much of the literature on depression in older adults, which suggests that those with serious medical illnesses, injuries, disability, isolation, and recent relocation appear to be more vulnerable to depression [32], whereas older adults in general, especially the younger ones, may have lower rates than young adults [33–36]. Depressive symptoms have been negatively correlated with LS in the older population, especially among older adults who live alone [26, 30, 37, 38]. Marital status, family status and household composition have also been associated with LS among older adults. Older adults who live with their spouse, with children, or in other types of cohabitation have been reported to have greater LS than those who live alone [39–44]. These findings of poorer LS among socially isolated older adults may stem from inadequate financial and emotional support, a lack of caregivers, or negative public perceptions that lead to poor mental health. Financial security is an essential component of LS and is significantly associated with LS in the older population. Many studies suggest that financial difficulty in older individuals is related to depression and low LS [28, 31, 45, 46]. It is plausible that older adults with financial security have greater LS because they have financial resources to mitigate life's challenges. However, a meta-analysis showed that the association between income and LS is relatively small, as quality of life in older people is not necessarily reduced by reduced income [47]: these individuals were found to be able to adjust their needs and desires to their financial situation. Social support from friends and neighbors, as well as family, has also been significantly associated with the LS of older adults [23, 31, 42, 48–50]. Many studies have found that place of residence is associated with LS among the older population; typically, place of residence is considered broadly as either urban or rural. The living environment is relevant to older adults' well-being and aging well, partly because it enables social engagement, but studies show inconsistent results.
Some studies show that urban residents have higher life satisfaction than rural residents [46, 51], while other studies were conducted in either rural or urban areas only, so comparisons were not possible [41, 44, 52]. Most studies examined place of residence in association with health, but very few studied LS. Huang found that a majority of older people in urban areas have a pension and enjoy other social welfare privileges, and that urban older adults therefore have higher life satisfaction than rural older adults [53]. Millward [51] also found that life satisfaction varied significantly by urban-rural zone, including the inner city, suburbs, inner commuter belt, and outer commuter belt; in that study, older adults in the inner city had the highest LS [51]. Other studies have found the opposite. Rural communities still lag behind in income distribution, access to affordable healthcare, social welfare programs and benefits, and education [54]. Nevertheless, older adults living in a rural environment presented higher LS scores than those living in urban settings, because older adults living within a relatively steady social network, which provides regular contact over time, have high LS [55, 56]. Overall, the literature related to LS in older adults is somewhat lacking. Most studies of LS in older adults are limited by their focus on a single aspect of LS. Additionally, many studies use cross-sectional designs, which offer little understanding of how LS changes over time. It is clear that consideration of LS as a multidimensional construct is essential to obtaining a complete picture of an individual's state of LS. The aims of our study were to investigate changes in a multidimensional construct of life satisfaction (including satisfaction with physical health, mental health, economic status, housing, family relationships, and neighbor relationships) among older adults, and to further elucidate relationships between each component of life satisfaction and relevant predictors, using a longitudinal study. To address the study objectives, we analyzed data from the six-year follow-up Korean Retirement and Income Study (KReIS) using GEE models. The conceptual framework for this study derives from previous concepts, the life course perspective, and socio-ecological models explaining life satisfaction in older adults. Our study adopts the theoretical framework of Cummins as its foundation [57, 58]. For the general population, Cummins proposed the Comprehensive Quality of Life Scale on both empirical and theoretical grounds; it has been found to be valid, reliable and sensitive. It specifies seven domains: material well-being, emotional well-being, health, productivity, intimacy, safety, and community. An individual's well-being can be efficiently and comprehensively measured through these seven domains, which can be summed to yield a single measure of well-being. Since official productivity (i.e. paid employment) is not relevant for the older population, only the remaining six domains were used. Emotional well-being can be assessed in part by evaluation of leisure activities, leisure time, or spiritual well-being. Such emotional well-being predicts increased psychological well-being and lower depressive symptoms [59, 60]. Therefore, helping older adults to maintain participation in informal leisure pursuits has important implications for promoting well-being in later life [60]. Our study adopted revised multidimensional domains of LS in the older population from Cummins' conceptual model (Fig. 1).
Unfortunately, indicators of emotional well-being, such as leisure activity, were not available in the data set on which our study was based. LS is a multidimensional construct in our study, with five satisfaction domains: physical and mental health, economic status, housing, family relationships, and neighbor relationships. To our knowledge, no previous study has assessed which factors are important predictors of change over time in each component of life satisfaction in older adults. Our primary hypothesis was that LS changes over time in older adults. We also expected to find common but differing predictors across the multidimensional LS domains. Thus, our second hypothesis was that demographic and environmental characteristics are predictors of each component of life satisfaction.

Fig. 1 Multidimensional domains: revised conceptual model for life satisfaction in the older population (Cummins, [58])

Data and sample

Korea is a country with a rapidly growing percentage of older adults and a relatively recently instituted national pension system (in place since 1988), which does not yet cover, or provide sufficiently for, most older adults [61]. In 2014, the poverty rate among Korean adults over 65 years old reached 49.6%, the highest level among the 34 OECD countries [62]. Due to forced early voluntary retirement, mean retirement ages are earlier than in Western countries, and as such, financial insecurity is likely to be a major contributor to life satisfaction. The data for this study come from the Korean Retirement and Income Study (KReIS), a longitudinal survey conducted biennially from 2005 to 2011. The KReIS used a stratified sampling frame taken from the Korean Population and Housing Census in 2000. A total of 8567 individuals aged 50 or older participated in the initial survey in 2005. The core survey questions covered a wide range of topics, including demographic aspects, economic status, housing, retirement, health status, and satisfaction with life. For our study, baseline responses from individuals aged 65 or older at the initial survey were examined, as were their subsequent responses in each following wave, as long as answers to the satisfaction items were provided. A total of 3531 individuals in the initial 2005 survey met our study criteria, with subjects at follow-up numbering 3041, 2697, and 2330 in 2007, 2009, and 2011, respectively.

Outcome measures in this study were satisfaction with health, economic status, housing, neighbor relationships, and family relationships. Satisfaction with each item was originally assessed on a 5-point scale asking, "To what extent are you satisfied with the item below?", with responses ranging from 1 to 5 (very unsatisfactory = 1, unsatisfactory = 2, fair = 3, satisfactory = 4, very satisfactory = 5). Satisfaction outcomes in this study were dichotomized, combining 'very satisfactory' and 'satisfactory' into 'satisfactory', and 'very unsatisfactory', 'unsatisfactory', and 'fair' into 'not satisfactory'. In general, people in Korea are culturally hesitant to use the extreme answers, so the proportions of extreme responses in our study were very small. The 5-point LS items were therefore converted to a binary outcome, satisfactory vs. not satisfactory, even though this can result in a loss of information about the original rating distributions. To investigate determinants of successful aging related to LS, Rowe & Kahn distinguished "usual" aging (non-pathologic but high risk) from "successful" aging (low risk and high function) [63].
In our study, the LS outcome grouping 'Very satisfactory/Satisfactory' represents "successful aging". Other studies of older populations have also dichotomized these items in the same way [42, 46].

Predictor variables were gender, age, education, presence of spouse, residential area, number of family members in the household, household composition type, housing type, current physical and mental health status, private health insurance, household income, and household expense. Age was recorded in years at the time of the baseline 2005 survey and was categorized into groups aged 65–69, 70–74, 75–79, and 80 years and older. Sex was coded 0 = male and 1 = female. Information about education was coded as 0 = no education, 1 = elementary school (Grade 1–6), and 2 = middle school (Grade 7–9) or higher. Residential area was categorized into two areas by population size: urban (population ≥ 50,000) was coded as 0 and rural (population < 50,000) was coded as 1. Household composition type was categorized as 1 = living alone, 2 = living with a spouse, and 3 = mixed arrangements. Housing type was categorized as 1 = detached house, 2 = apartment, and 3 = other types. Current physical and mental health status were dichotomized as 1 = good or very good and 0 = very poor, poor, or fair. Using household income and household expenses, the household financial stress index (%) was calculated as

\( \frac{\text{household income} \; - \; \text{household expense}}{\text{household income}} \times 100. \)

The household financial index indicates the level of financial adequacy in a household: a positive value indicates financial adequacy, while a negative value indicates financial difficulty.

Data were first analyzed to examine distributions and checked for outliers. Descriptive statistics were used to summarize the baseline characteristics of the study subjects. Student's t-test and ANOVA were used for group comparison of continuous variables. For group comparison of categorical variables, the Chi-square test was used. Cross-sectional satisfaction outcomes were first analyzed by year, and the Cochran-Armitage test was then applied to assess the trend in the proportion of respondents who were satisfied within each satisfaction outcome during the 6-year follow-up period. Correlation analysis between LS outcomes and covariates was also conducted. In addition to the univariate and multivariate analyses, a generalized estimating equations (GEE) model was used to adjust for repeated measurements among the study participants. The GEE model accounts for all available data points, such that respondents with incomplete data sets are not excluded from the analysis, under the assumption that data are missing at random [64]. Briefly, for the GEE model, let $y_{ij}$ be the $j$th outcome for the $i$th subject and $x_i$ the corresponding covariate vector. Then the GEE model can be written as $g(E[y_{ij} \mid x_i]) = \mathbf{X}_{ij}\boldsymbol{\beta}$, where $g(\cdot)$ is a link function. For our binary outcome, let $\pi_{ij} = E(y_{ij})$ be the expected probability of Satisfactory LS for subject $i$ at the $j$th measurement. Then, with a logit link function, the GEE model is

$$ \log \left(\frac{\pi_{ij}}{1-\pi_{ij}}\right) = \log \left(\frac{P(y_{ij}=1 \mid x_i)}{P(y_{ij}=0 \mid x_i)}\right) = \mathbf{X}_{ij}\boldsymbol{\beta}. $$
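For readers who want to reproduce this kind of model, a logistic GEE with an exchangeable working correlation can be fit along the following lines. This is a minimal sketch only: the paper's analysis was done in SAS 9.4, and the file, variable, and column names here are hypothetical.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("kreis_long.csv")  # hypothetical long-format panel file

# Household financial stress index (%), as defined above.
df["fin_stress"] = (df["hh_income"] - df["hh_expense"]) / df["hh_income"] * 100

# Logistic GEE: binary satisfaction outcome, repeated measures clustered by subject.
model = smf.gee(
    "health_satisfied ~ C(sex) + C(age_group) + C(education) + fin_stress",
    groups="subject_id",
    data=df,
    family=sm.families.Binomial(),            # logit link by default
    cov_struct=sm.cov_struct.Exchangeable(),  # working correlation structure
)
result = model.fit()
print(result.summary())
print(result.qic())  # QIC-type goodness-of-fit (available in recent statsmodels versions), analogous to the paper's QICu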
The GEE method is an efficient and flexible analytic technique for estimating model parameters while controlling for the within-subject correlation in longitudinal data [64]. Using the GEE method, the multiple outcome measurements are pooled so that the LS outcome measurement from the previous time period can be controlled for. Univariate and multivariate logistic GEE models were developed using logit links; the correlation between repeated assessments was examined prior to selecting the most appropriate correlation structure. The dependent variable in the model was the study participant's satisfaction outcome (1 = satisfactory, 0 = not satisfactory). The GEE models included the covariates of sex, age, and education at the time of the 2005 survey as time-invariant, while the other predictors were regarded as time-varying covariates. In the model building process, only significant predictors with p < 0.1 from the univariate GEE model were considered for the multivariate models. In the final models, interactions among the main predictors were also examined. For the GEE model goodness-of-fit, the QICu (Quasilikelihood under the Independence model Criterion) statistic was used [65]. Odds ratios (OR) and 95% confidence intervals (CI) were calculated. All reported p-values were 2-tailed, and α = 0.05 was set for statistical significance. All statistical analyses were carried out using SAS version 9.4 (SAS Institute, Cary, NC, USA).

In this study, a total of 2083 subjects (59%) were women, and the mean age at baseline was 72 (SD = 6) years (72.4 ± 6.2 years for women and 71.4 ± 5.6 years for men). Of the total study sample, the majority had a spouse (60.8%) and had received no education or only elementary schooling (71%). In terms of household composition type, 40% were living with a spouse and 60% were either single or in a mixed arrangement living with others. About 58% of the study population lived in rural areas with a population under 50,000, and 59% lived in a detached house. Regarding economic status, very few had private health insurance (6.1%), and the mean household financial index value was -151 (SD = 1780). Of the study sample, 2065 (58.5%) had a negative household financial index, i.e., household expenses exceeded household income. Only small proportions of subjects had good physical and mental health status (18.3 and 28.9%, respectively). Table 1 further presents descriptive statistics of baseline characteristics. Table 2 shows the correlation between the five dimensions of life satisfaction, overall life satisfaction, and the time-variant covariates.

Table 1 Baseline demographic characteristics of the study subjects (N = 3531)

Table 2 Correlation between the five dimensions of life satisfaction, overall life satisfaction, and the time-variant covariates

Descriptive statistics showed that satisfaction with family relationships, neighbor relationships, and housing ranged between 43 and 66%, whereas the proportions satisfied with health and economic status were small and relatively stable (Fig. 2). These temporal patterns were observed in both men and women and in both rural and urban areas. Except in regard to neighbor relationships, the proportion expressing satisfaction was consistently higher in men than in women, especially in regard to health, where the proportion was twice as high (Fig. 3). Comparing residential areas, rural participants were more frequently satisfied than urban participants, except for the outcome of health satisfaction (Fig. 4).
The data showed no differences in satisfaction among age groups regarding economic status, housing, and family relationships. However, subjects aged 65–69 were more likely to be satisfied with their health, whereas those aged 80 or older were less likely to be satisfied with neighbor relationships compared to the other age groups (Fig. 2). To further detail the associations between each satisfaction outcome and subject characteristics, results from the GEE models are presented.

Fig. 2 Proportion of subjects at each survey time point who reported being satisfied within each of the five satisfaction domains, all subjects combined and within each specific domain by age group. Note that values are the proportion of "Very Satisfied/Satisfied" responses for each LS dimension

Fig. 3 Proportion of subjects at each survey time point who reported being satisfied within each of the five satisfaction domains, stratified by sex. Note that values are the proportion of "Very Satisfied/Satisfied" responses for each LS dimension

Fig. 4 Proportion of subjects at each survey time point who reported being satisfied within each of the five satisfaction domains, stratified by residential area. Note that values are the proportion of "Very Satisfied/Satisfied" responses for each LS dimension

Health satisfaction

The GEE model showed that sex, presence of a spouse, education level, physical health status, and mental health status were significantly related to health satisfaction, whereas aging was not (Table 3). Not living with a spouse was associated with a 25% reduction in the odds of health satisfaction compared to living with a spouse (OR = 0.746; 95% CI: 0.634–0.891; p = 0.001). There was an interaction between sex and mental health (p = 0.0008). Men and women who reported good mental health were 2.71 and 4.29 times more likely, respectively, to report satisfaction with their health compared to men and women with poor mental health (p < 0.0001). Among persons with good mental health, no difference between men and women was observed in health satisfaction (p = 0.933). However, among subjects with poor mental health status, women were less likely to be satisfied with their health compared to men (OR = 0.636; 95% CI: 0.503–0.804; p = 0.0002).

Table 3 Health satisfaction. Estimation of odds ratio (OR), 95% confidence interval (CI), and p-value from the longitudinal random effects model

Economic satisfaction

Sex, age, education, residential area, housing, household composition type, physical health status, mental health status, and financial stress index were significantly associated with economic satisfaction (Table 4). Female and rural residents were more likely to report economic satisfaction compared to male and urban residents. Subjects living in an apartment were 1.32 times more likely to experience economic satisfaction compared to those living in a detached house (95% CI: 1.14–1.53; p < 0.0001). Coupled households were associated with 1.62 times greater odds of economic satisfaction compared to single households (95% CI: 1.35–1.96; p < 0.0001). Good physical and mental health were significantly associated with economic satisfaction (p < 0.0001). Higher education and a positive financial stress index were also associated with higher economic satisfaction. There was an interaction between age and residential area (p = 0.0001).
Comparisons of economic satisfaction between rural and urban residents were significant only for those aged 65–69 (p < 0.0001), but not for the other age groups. Among urban residents, the older age groups were more likely to experience economic satisfaction; this trend was not observed among rural residents.

Table 4 Financial satisfaction. Estimation of odds ratio (OR), 95% confidence interval (CI), and p-value from the longitudinal random effects model

Satisfaction with housing

Age, education, residential area, house type, household composition type, private insurance, physical health status, and mental health status were significantly associated with satisfaction with housing (Table 5). Rural residents were more likely to experience satisfaction with housing compared to urban residents (OR = 1.307; 95% CI: 1.184–1.441; p < 0.0001). Having private insurance was also associated with a greater likelihood of experiencing satisfaction with housing compared to no private insurance (OR = 1.374; 95% CI: 1.152–1.639; p = 0.0004). There was no difference in housing satisfaction between males and females. Good physical and mental health were significantly associated with satisfaction with housing (p < 0.0001), as were increased age and higher education. An interaction between house type and household composition type was found (p < 0.0001); single subjects or couples living in an apartment had greater odds of satisfaction with housing than those living in detached houses or other housing types.

Table 5 Housing satisfaction. Estimation of odds ratio (OR), 95% confidence interval (CI), and p-value from the longitudinal random effects model

Satisfaction with family relationships

Sex, education, residential area, house type, household composition type, physical health status, mental health status, and financial stress index were significant factors in satisfaction with family relationships (Table 6). Female subjects and subjects who lived in an apartment were more likely to experience satisfaction in family relationships compared to male subjects and those living in detached houses (OR = 1.239; 95% CI: 1.111–1.338; p = 0.0001 and OR = 1.19; 95% CI: 1.063–1.333; p = 0.0026, respectively). Good physical and mental health were significantly associated with satisfaction with family relationships (p < 0.0001). Satisfaction with family relationships showed an interaction between residential area and household composition type (p < 0.0001). For singles living in rural areas, the odds of satisfaction with family relationships were higher than for singles living in urban areas (OR = 2.095; 95% CI: 1.813–2.421; p < 0.0001). However, satisfaction with family relations did not differ between rural and urban areas for coupled and other household compositions. Aging was not a significant factor in satisfaction with family relations.

Table 6 Satisfaction with family relationships. Estimation of odds ratio (OR), 95% confidence interval (CI), and p-value from the longitudinal random effects model

Satisfaction with neighbor relationships

Sex, age, residential area, housing type, household composition type, physical health status, and mental health status were significant factors in satisfaction with neighbor relationships (Table 7). Rural residents had odds of satisfaction with their neighbor relationships 1.729 times those of urban residents (95% CI: 1.576–1.895; p < 0.0001).
Subjects living in an apartment or other types of housing were significantly less likely to experience satisfaction with neighbor relationships compared to those living in detached houses (p < 0.0001). Good physical and mental health were significantly associated with satisfaction with neighbor relationships (p < 0.0001). However, individuals aged 80 or older were significantly less likely to indicate satisfaction with neighbor relationships compared to the other age groups. Satisfaction with neighbor relationships also showed an interaction between sex and household composition type (p = 0.002). Among couples and singles, females were 1.22 and 1.873 times more likely, respectively, to be satisfied with neighbor relationships compared to males (p = 0.052 and p < 0.0001), but no such difference was found for other types of household composition (p = 0.121).

Table 7 Satisfaction with neighbor relationships. Estimation of odds ratio (OR), 95% confidence interval (CI), and p-value from the longitudinal random effects model

Our study aimed to determine the factors that are significantly associated with the five domains of life satisfaction: health, economic status, housing, family relations, and neighbor relations. The findings are consistent with previous studies indicating the importance of physical and mental health, financial strain, residential area, housing type, and living environment for LS among the older population.

Our study found that physical and mental health were consistently and significantly associated with satisfaction in each of these domains after adjusting for potential confounders. This finding aligns with many other studies [24, 28, 29, 31]. Many studies have also found that mental health symptoms such as depression, anxiety, and psychosomatic problems are associated with lower life satisfaction [25, 38, 46, 50]. However, our data only contained general mental health status information and did not provide specific mental health symptoms, clinical examination findings, chronic conditions, medication use, physical activities, or activities of daily living.

Living in a rural area and living with a spouse were associated with being satisfied with economic status, housing, family relations, and neighbor relations, but these factors were not connected to satisfaction with health. Living in a rural area may provide a relatively steady and close social network through regular contact over time, which provides support and satisfaction in multiple aspects of LS. This finding on residential area also supports previous studies [56, 66]. Even though our study showed that living in a rural area was not associated with health satisfaction, other studies found higher levels of life satisfaction in urban elders than rural elders because of greater access to basic social and medical services [37, 46, 51].

Connections between the physical environment, the social environment, and life satisfaction were also observed in housing type and living arrangements. Compared to living in a detached house, living in an apartment was associated with satisfaction in economic status, housing, and family relationships, but a lack of satisfaction in neighbor relationships. Living alone appears to result in less frequent satisfaction than living with a spouse or in other household composition types, which is consistent with previous studies [26, 37, 39–43]. It is known that living arrangements influence life satisfaction, as living alone increases anxiety around situations of sickness and financial difficulty.
A study in China showed that those living in single-generation households had lower psychological well-being than those living in three-generation households or skipped-generation households [52]. In our study, having enough financial resources provided significantly higher economic satisfaction and satisfaction with family relationships; however, this factor did not significantly affect satisfaction with health, neighbor relationships, or housing. Nonetheless, having private health insurance, a factor associated with financial stability, was associated with housing satisfaction.

Some recent studies have shown that older age predicted an increase in life satisfaction [14, 15], but others suggested that life satisfaction peaked at the age of 65 and then decreased [12]. Others yet suggest that there is a very late, age-related decline in life satisfaction in the oldest age groups [18, 19]. However, our study did not show any of these patterns. The results from the eight-year and nine-year longitudinal studies by Gana [67] and Röcke [68] showed that life satisfaction was rather stable. Similarly, our study also indicated that life satisfaction among individuals up to the age of 80 years remains relatively constant in terms of health and family relationships. This may be due in part to prolonged survival of those with a genetic predisposition to good health and those with strong family supports, who can report satisfaction in these areas into their more advanced years. In contrast, a rapid decline in satisfaction with neighbor relationships across all ages was also seen. Our findings provide valuable scientific evidence both for understanding why many studies have presented inconsistent results and for proposing that measuring a single dimension of life satisfaction is not appropriate.

Our study showed gender differences in satisfaction with health, economic status, and family and neighbor relationships, but no difference in housing satisfaction. As expected, women were observed to be less often satisfied with their health than men, given that women tend to outlive men and subsequently experience more health-related problems and loneliness. This finding is consistent with other studies [23, 50]. Our study also showed that satisfaction with family and neighbor relationships was higher among women than men. Oshio [42] found gender differences in the associations of LS with family and social relations in Japan. For example, family relations were of more importance to men compared to women. In older men whose marital status remained stable, LS was also constant, while for women there was a decline in LS. In addition, LS in men increased with marriage, while marriage had no significant role for women [42]. Social relationships are also a stronger determinant of life satisfaction in older women than in older men [42, 69]. Compared with older men, older women are more likely to rely on friends as associates, give more support in order to maintain friendships, and maintain contact with extended family members as well as with friends [70, 71]. This may be attributable to women being more actively connected to family members, friends, and neighbors, whereas most older men rely on their wives for social support and rewarding relationships [72]. Another possibility is that more traditional patriarchal roles in the older population adversely affect mood in older married women, as suggested by Jang et al. [73].
Women's LS also increases with a greater number of social activities and a larger circle of friends, factors that are not significant predictors of LS among men [44].

A major strength of our study was the ability to examine changes in the multidimensional construct of life satisfaction over a period of six years using a large sample of longitudinal data. Most of the studies to date have been cross-sectional and have used a single measure of life satisfaction, which is of questionable validity. However, our study used multiple domains of life satisfaction, resulting in a more comprehensive assessment. The second strength of our work is the generalizability of the study results. As the findings are based on a nationally representative longitudinal sample, they can easily be generalized to Korean older adults. A third strength is the utilization of the financial stress index. For individual economic status, measures of income are often not precise, and employing a valid measure of income is difficult. In our study, application of the financial stress index provided an accurate measurement of a subject's economic status, and, to the best of our knowledge, no other study has used it. The fourth strength of our study is that we used the GEE modeling approach, which allows us to deal effectively with missing values and to take into account correlations between an individual's repeated measurements.

This study also has several limitations. The variables in our study do not cover important potential factors related to the domains of life satisfaction, such as activities of daily living. Another limitation is that the data do not contain detailed health-related variables; additional medically based health measures, including chronic disease, anxiety, depression, etc., would have further improved our understanding of life satisfaction in older adults. In addition, as indicated in many studies, social support and family support measures are additional important factors associated with life satisfaction in older adults. Unfortunately, such variables were not available for our study. In the GEE model, we assumed that data are missing at random; however, this assumption is technically difficult to verify.

While most studies have focused on overall life satisfaction, considering multidimensional life satisfaction is essential to gaining a complete picture. Our study showed that physical and mental health status was most significantly associated with the multidimensional construct of life satisfaction among Korean older adults. Our study also showed that, depending on the domain, aging is negatively or positively related to life satisfaction. This indicates that a single domain of LS or overall LS will miss many important aspects of LS, because age-related LS is multifaceted and complicated. Thus, using a single dimension or a simplified overall LS might not be appropriate for drawing conclusions when studying older adults. Further research including personal behaviors, social networks, and medical, psychological, and environmental variables is needed to comprehensively understand, and subsequently improve, life satisfaction in the older population.

GEE: Generalized estimating equations; KReIS: Korean Retirement and Income Study; LS: Life satisfaction

United Nations (Department of Economic and Social Affairs, Population Division). Report: living arrangements of older persons around the world. Economic and social correlates of living arrangements. 2005.
http://www.un.org/esa/population/publications/livingarrangement/chapter3.pdf (accessed 10 Feb 2016). United Nations. World population prospects. 2015. http://esa.un.org/unpd/wpp/Publications/Files/Key_Findings_WPP_2015.pdf (accessed 10 Feb 2016). Ferring FD, Balducci C, Burholt V, Wenger C, Thissen F, Weber G, Hallberg I. Life satisfaction of older people in six European countries: findings from the European study on adult well-being. Eur J Aging. 2004;1:15–25. Baird BM, Lucas RE, Donnellan MB. Life satisfaction across the lifespan: findings from two nationally representative panel studies. Soc Indic Res. 2010;99:183–203. Subasi F, Hayran O. Evaluation of life satisfaction index of the elderly people living in nursing homes. Arch Gerontol Geriatr. 2005;41:23–9. Efklides A, Maria K, Grace C. Subjective quality of life in old age in Greece, the effect of demographic factors, emotional state, and adaptation to aging. Eur Psychol. 2003;8:178–91. Lee SG, Jeon SY. The relations of socioeconomic status to health status, health behaviors in the elderly. J Prev Med Public Health. 2005;38:154–62. Berg AI, Hoffman L, Hassing LB, McClearn GE, Johansson B. What matters, and what matters most, for change in life satisfaction in the oldest-old? A study over 6 years among individuals 80+. Aging Ment Health. 2009;13:191–201. Collins AL, Glei DA, Goldman N. The role of life satisfaction and depressive symptoms in all-cause mortality. Psychol Aging. 2009;24:696–702. Kimm H, Sull JW, Gombojav B, Yi SW, Ohrr H. Life satisfaction and mortality in elderly people: the Kangwha cohort study. BMC Public Health. 2012;12:54. Grann JD. Assessment of emotions in older adults: mood disorders, anxiety, psychological well-being, and hope. In: Kane RA, Kane RL, editors. Assessing older persons: measures, meaning, and practical application. New York: Oxford University Press; 2000. p. 129–69. Mroczek DK, Spiro A. Change in life satisfaction during adulthood: findings from the Veterans Affairs Normative Aging Study. J Pers Soc Psychol. 2005;88:189–202. Blanchflower DG, Oswald AJ. Is well-being U-shaped over the life cycle? Soc Sci Med. 2008;66:1733–49. Gaymu J, Springer S. Living conditions and life satisfaction of older Europeans living alone: a gender and cross-country analysis. Aging Soc. 2010;30:1153–75. Stone A, Schwartz JE, Broderick JE, Deaton A. A snapshot of the age distribution of psychological well-being in the United States. Proc Natl Acad Sci U S A. 2010;107:9985–90. Chen C. Aging and life satisfaction. Soc Indic Res. 2001;54:57–79. Fujita F, Diener E. Life satisfaction set point: stability and change. J Pers Soc Psychol. 2005;88:158–64. Gerstorf D, Ram N, Estabrook R, Schupp J, et al. Life satisfaction shows terminal decline in old age: longitudinal evidence from the German Socio-Economic Panel Study (SOEP). Dev Psychol. 2008;44:1148–59. Gerstorf D, Ram N, Röcke C, Lindenberger U, Smith J. Decline in life satisfaction in old age: longitudinal evidence for links to distance-to-death. Psychol Aging. 2008;23:154–68. Diener E, Suh EM. Subjective well-being and age: An international analysis. In: Schaie KW, Lawton MP, editors. Annual review of gerontology and geriatrics: focus on emotion and adult development, vol. 17. New York: Springer; 1998. p. 304–24. Hamarat E, Thompson D, Steele D, Matheny K, Simons C. Age differences in coping resources and satisfaction with life among middle-aged, young-old, and oldest-old adults. J Genet Psychol. 2002;163:360–7. Smith J, Baltes MM.
The role of gender in very old age: profiles of functioning and everyday life patterns. Psychol Aging. 1998;13:676–95. Pinquart M, Sorensen S. Influences on loneliness in older adults: a meta-analysis. Basic Appl Soc Psychol. 2001;23:245–66. Carmel S, Bernstein JH. Gender differences in physical health and psychosocial wellbeing among four age-groups of elderly people in Israel. Int J Aging Hum Dev. 2003;56:113–31. Won M, Choi Y. Are Koreans prepared for the rapid increase of the single-household elderly? Life satisfaction and depression of the single-household elderly in Korea. Sci World J. 2013;2013:972194. doi:10.1155/2013/972194. Berg AI, Hassing LB, McClearn GE, Johansson B. What matters for life satisfaction in the oldest-old? Aging Ment Health. 2006;10:257–64. Borg C, Fagerström C, Balducci C, Burholt V, et al. Life satisfaction in 6 European countries: the relationship to health, self-esteem, and social and financial resources among people (aged 65–89) with reduced functional capacity. Geriatr Nurs. 2008;29:48–57. Chou KL, Chi I. Financial strain and life satisfaction in Hong Kong elderly Chinese: moderating effect of life management strategies including selection, optimization, and compensation. Aging Ment Health. 2002;6:172–7. Gwozdz W, Sousa-Poza A. Aging, health and life satisfaction of the oldest old: an analysis for Germany. Soc Indic Res. 2010;97:397–417. An J, An K, O'Conner L, Wexler S. Life satisfaction, self-esteem, and perceived health status among elder Korean women: focus on living arrangements. J Transcult Nurs. 2008;19:151–60. Lacruz ME, Emeny RT, Baumert J, Ladwig KH. Prospective association between self-reported life satisfaction and mortality: results from the MONICA/KORA Augsburg S3 survey cohort study. BMC Public Health. 2011;11:579. Katon WJ. Epidemiology and treatment of depression in patients with chronic medical illness. Dialogues Clin Neurosci. 2011;13:7–23. Regier DA, Boyd JH, Burke Jr JD, Rae DS, et al. One-month prevalence of mental disorders in the United States: based on five Epidemiologic Catchment Area sites. Arch Gen Psychiatry. 1988;45:977–86. Murphy JM, Laird NM, Monson RR, Sobol AM, Leighton AH. A 40-year perspective on the prevalence of depression: the Stirling County Study. Arch Gen Psychiatry. 2000;57:209–15. Mojtabai R, Olfson M. Major depression in community-dwelling middle-aged and older adults: prevalence and 2- and 4-year follow-up symptoms. Psychol Med. 2004;34:623–34. Patten SB, Wang JL, Williams JV, Currie S, et al. Descriptive epidemiology of major depression in Canada. Can J Psychiatry. 2006;51:84–90. Zhang W, Liu G. Childlessness, psychological well-being, and life satisfaction among the elderly in China. J Cross Cult Gerontol. 2007;22:185–203. Van Der Horst RK, Mclaren S. Social relationships as predictors of depression and suicidal ideation in older adults. Aging Ment Health. 2005;9:517–25. Hansen-Kyle L. A concept analysis of healthy aging. Nurs Forum. 2005;40:45–57. Kooshiar H, Yahaya N, Hamid TA, Samah A, SedaghatJou V. Living arrangement and life satisfaction in older Malaysians: the mediating role of social support function. PLoS ONE. 2012;7:e43125. Banjare P, Dwivedi R, Pradhan J. Factors associated with the life satisfaction amongst the rural elderly in Odisha, India. Health Qual Life Outcomes. 2015;13:201. Oshio T. Gender differences in the associations of life satisfaction with family and social relations among the Japanese elderly. J Cross Cult Gerontol. 2012;27:259–74. Al-Kandari Y, Crews DE.
Social support and health among elderly Kuwaitis. J Biosoc Sci. 2014;46:518–30. Zhou Y, Zhou L, Fu C, et al. Socio-economic factors related with the subjective well-being of the rural elderly people living independently in China. Int J Equity Health. 2015;14:5. Yamaoka K. Social capital and health and well-being in East Asia: a population-based study. Soc Sci Med. 2008;66:885–99. Li C, Chi I, Zhang X, Cheng Z, Zhang L, Chen G. Urban and rural factors associated with life satisfaction among older Chinese adults. Aging Ment Health. 2015;19:947–54. Pinquart M, Sorensen S. Influence of socioeconomic status, social support, and competence on subjective well-being in later life: a meta-analysis. Psychol Aging. 2001;15:187–224. Bennett KM. Psychological wellbeing in later life: the longitudinal effects of marriage, widowhood and marital status change. Int J Geriatr Psychiatry. 2005;20:280–4. Shankar A, Rafnsson SB, Steptoe A. Longitudinal associations between social connections and subjective wellbeing in the English Longitudinal Study of Aging. Psychol Health. 2015;30:686–98. Victor C, Scambler S, Bond J, Bowling A. Being alone in later life: loneliness, social isolation and living alone. Rev Clin Gerontol. 2000;10:407–17. Millward H, Spinney J. Urban–rural variation in satisfaction with life: demographic, health, and geographic predictors in Halifax, Canada. Appl Res Qual Life. 2013;8:279–97. Silverstein M, Cong Z, Li S. Intergenerational transfers and living arrangements of older people in rural China: consequences for psychological well-being. J Gerontol Ser B Psychol Sci Soc Sci. 2006;61:256–66. Huang H, Humphreys BR. Sports participation and happiness: evidence from US microdata. J Econ Psychol. 2012;33:776–93. Zimmer Z, Kwong J. Socioeconomic status and health among older adults in rural and urban China. J Aging Health. 2004;16:44–70. von Humboldt S, Leal I, Pimenta F. Sense of coherence, sociodemographic, lifestyle, and health-related factors in older adults' subjective well-being. Int J Gerontol. 2015;9:15–9. Lang FR, Heckhausen J. Perceived control over development and subjective well-being. J Pers Soc Psychol. 2001;81:509–23. Cummins RA. On the trail of the gold standard for life satisfaction. Soc Indic Res. 1995;35:179–200. Cummins RA. The domains of life satisfaction: an attempt to order chaos. Soc Indic Res. 1996;38:303–28. Everard KM, Lach HW, Fisher EB, Baum MC. Relationship of activity and social support to the functional health of older adults. J Gerontol B Psychol Sci Soc Sci. 2000;55:S208–12. Janke M, Payne L, Van Puymbroeck M. The role of informal and formal leisure activities in the disablement process. Int J Aging Hum Dev. 2008;67:231–57. Korea National Pension System. http://www.nps.or.kr/jsppage/main.jsp (accessed 10 Feb 2016). Pensions at a Glance 2015. http://www.oecd.org/publications/oecd-pensions-at-a-glance-19991363.htm (accessed 10 Feb 2016). Rowe JW, Kahn RL. Human aging: usual and successful. Science. 1987;237:143–9. Singer J, Willett J. Applied longitudinal data analysis. Oxford: Oxford University Press; 2003. Pan W. Akaike's information criterion in generalized estimating equations. Biometrics. 2001;57:120–5. von Humboldt S, Leal I, Pimenta F. Living well in later life: the influence of sense of coherence, and socio-demographic, lifestyle and health-related factors on older adults' satisfaction with life. Appl Res Qual Life. 2014;9:631–42. Gana K, Bailly N, Saada Y, Joulain M, et al.
Does life satisfaction change in old age: results from an 8-year longitudinal study. J Gerontol B Psychol Sci Soc Sci. 2013;68:540–52. Röcke C, Lachman ME. Perceived trajectories of life satisfaction across past, present, and future: profiles and correlates of subjective change in young, middle-aged, and older adults. Psychol Aging. 2008;23:833–47. Cheng ST, Chan AC. Relationship with others and life satisfaction in later life: do gender and widowhood make a difference? J Gerontol B Psychol Sci Soc Sci. 2006;61:46–53. Chappell NL, Badger M. Social isolation and well-being. J Gerontol. 1989;44:S169–76. Gurung RAR, Taylor SE, Seeman TE. Accounting for changes in social support among married older adults: insights from the MacArthur Studies of Successful Aging. Psychol Aging. 2003;18:487–96. Park NS, Jang Y, Lee BS, Haley WE, Chiriboga DA. The mediating role of loneliness in the relation between social engagement and depressive symptoms among older Korean Americans: do men and women differ? J Gerontol Ser B Psychol Sci Soc Sci. 2013;68:193–201. Jang SN, Kawachi I, Chang J, Boo K, Shin HG, Lee H, Cho SI. Marital status, gender, and depression: analysis of the baseline survey of the Korean Longitudinal Study of Ageing (KLoSA). Soc Sci Med. 2009;69:1608–15.

The authors thank all the study participants for generously joining this survey. The authors also thank all the research staff who carried out data collection and recruitment of participants. The authors thank the reviewers for their helpful comments. The authors declare that they received no funding support for this study. The study data can be obtained from the website: http://www.nps.or.kr/jsppage/research/panel/panel_05.jsp.

HL conceived the original study idea, led the study, and wrote the first draft of the manuscript. DM provided input for statistical analysis and interpretation of data. LT provided clinical input to the study and interpretation of the results. CL performed the statistical analysis. All authors read and approved the final manuscript. This study does not require ethics approval because the data are publicly available.

Department of Community Health & Epidemiology, College of Medicine, University of Saskatchewan, 107 Wiggins Road, Saskatoon, SK, S7N 5E5, Canada: Hyun Ja Lim & Lilian Thorpe. Department of Information Statistics, Duksung Women's University, Seoul, Korea: Dae Kee Min. Clinical Research Support Unit, College of Medicine, University of Saskatchewan, Saskatoon, Canada: Chel Hee Lee. Correspondence to Hyun Ja Lim.

Lim, H.J., Min, D.K., Thorpe, L. et al. Multidimensional construct of life satisfaction in older adults in Korea: a six-year follow-up study. BMC Geriatr 16, 197 (2016). https://doi.org/10.1186/s12877-016-0369-0

Keywords: Longitudinal study; Korean Retirement and Income Study (KReIS); GEE model
Hamiltonian Lie algebroids over presymplectic and Poisson manifolds

Alan Weinstein, University of California Berkeley

Friday, July 20, 2018 - 11:30am to 12:20pm

Earth Sciences Centre, Room 1050 (Reichman Family Lecture Hall)

The fact that an action of a Lie algebra ${\frak g}$ on a (pre)symplectic manifold $(M,\omega)$ is hamiltonian can be interpreted in terms of the "action Lie algebroid" $\frak g \times M$. I will report on work in progress with Christian Blohmann in which we are developing a theory of "hamiltonian Lie algebroids" which extends this interpretation to general Lie algebroids $A$. Here, a vector bundle connection on $A$ replaces the role played by the natural trivialization of $\frak g \times M$. Our work was originally inspired by the problem of understanding in symplectic terms why the initial value constraint manifold in general relativity is coisotropic, but the general theory has turned out to be very rich in itself. Among other things, it extends to Lie algebroids the Atiyah-Bott relation between momentum maps and the Weil model of equivariant cohomology. The problem of determining when the tangent bundle of a symplectic manifold is hamiltonian leads to a simple but unanswered question in pure symplectic topology: does every exact symplectic manifold admit a nowhere vanishing Liouville vector field? I will also say something about work which we are beginning on a Poisson version, which suggests that there should eventually be a common extension to a notion of hamiltonian Lie algebroids over Dirac manifolds.

Poisson 2018 - International Conference on Poisson Geometry
Formulas and substitution

Out in the real world, all sorts of amazing relationships play out every day: Air temperatures change with ocean temperatures. Populations of species rise and fall depending on seasons, food availability and the number of predators. The surface area of a human body can even be measured fairly accurately according to your height and weight. One of the most powerful things about mathematics is its ability to describe and measure these patterns and relationships exactly. Given a mathematical formula for the relationship between, say, the weight of a patient and how much medication they should be given, we can find one quantity by substituting a value for the other.

We have come across so many different formulas in mathematics that allow us to measure quantities such as area, volume, speed etc. Let's have a look at the process of substituting values into these formulas to find a particular unknown.

The perimeter of a triangle is defined by the formula $P=x+y+z$. Find $P$ if the lengths of its three sides are $x=5$ cm, $y=6$ cm and $z=3$ cm.

By inserting the number values of $x$, $y$ and $z$ we have a new equation that we can use to find the value of $P$:

$P=5+6+3$

$P=14$ cm

The area of a square with side $a$ is given by the formula $A=a^2$. Find $A$ if $a=6$ cm.

From the information above, we know that we are finding the area of a square where each side measures $6$ cm. Substituting our value for $a$ into the formula:

$A=6^2$

$A=36$ cm$^2$

The simple interest generated by an investment is given by the formula $I=\frac{P\times R\times T}{100}$. Given that $P=1000$, $R=6$ and $T=7$, find the interest generated.

The surface area of a rectangular prism is given by the formula $S=2\left(lw+wh+lh\right)$, where $l$, $w$ and $h$ are the dimensions of the prism. Given that a rectangular prism has a length of $8$ cm, a width of $7$ cm and a height of $9$ cm, find its surface area.

Literal equations

Solving for a quantity of interest in a literal equation is an important skill to learn. It can come in very handy when you know the value of one algebraic symbol but not another. For example, in the formula $A=pb+y$, the value $A$ is by itself on the left-hand side of the equals sign. In common language, we might say that it has been "solved for" because it is by itself, even though we do not yet know its value. When we previously tried to solve equations, we took steps to get the variable by itself. When solving for a quantity of interest, we might have more than one variable, but we still use a similar process:

Group any like terms

Simplify using the inverse of addition or subtraction.

Simplify further by using the inverse of multiplication or division.

Solve for $x$ in the following equation: $y=\frac{x}{4}$

Solve for $R$ in the following equation: $V=IR-E$

$\frac{x}{9}+\frac{n}{2}=5$
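Substitution is exactly what a computer does when it evaluates a function, so the worked examples above can be checked with a short Python sketch like the following (purely illustrative; the function names are our own):

def perimeter_triangle(x, y, z):
    """P = x + y + z"""
    return x + y + z

def area_square(a):
    """A = a^2"""
    return a ** 2

def simple_interest(P, R, T):
    """I = P * R * T / 100"""
    return P * R * T / 100

def surface_area_prism(l, w, h):
    """S = 2(lw + wh + lh)"""
    return 2 * (l * w + w * h + l * h)

print(perimeter_triangle(5, 6, 3))  # 14 (cm)
print(area_square(6))               # 36 (cm^2)
print(simple_interest(1000, 6, 7))  # 420.0
print(surface_area_prism(8, 7, 9))  # 382 (cm^2)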
A Covid-19 Vulnerability Index

New Zealand COVID-19

If Covid-19 gets into the community, which countries have the vaccinations and health system capacity to handle it?

Mitchell Palmer https://mitchellpalmer.nz (Yale-NUS College)

One of the greatest dangers of Covid-19 is that it can overwhelm healthcare systems, thus degrading the standard of care received by all patients. Assuming an outbreak occurs, a country's vulnerability to such an outcome can be approximated as a function of three factors:

How many vaccinations have been given
How effective those vaccinations are (against hospitalisation or serious illness)
How much capacity the country's healthcare system has

Ideally, a vulnerability index would include a variety of other factors – especially the age distribution of the population, the number of people with natural immunity to Covid-19 already acquired, and the ability of the healthcare system to scale at speed – but such factors are difficult to summarise in a single number available for a large number of countries. As such, this brief blog post limits itself to a much simpler, more tractable model.

First, we consider \(n\) different vaccines administered in a country (\(c\)) with a population (\(p_c\)), each with an effectiveness against hospitalization (\(e_i\)), a recommended number of doses for a full course (i.e., 2 for Pfizer and 1 for J&J) (\(f_i\)), and a number of doses of the vaccine given (\(d_i\)). With those figures, we can create an 'efficacy-weighted vaccination rate' (\(r_c\)). This weighting accounts for the obvious fact that lower-efficacy vaccines need higher coverage to reach equivalent protection.

\[ r_c = \frac{\sum^{n}_{i=1}e_i \frac{d_i}{f_i}}{p_c} \]

When we combine that vaccination rate (or, in fact, \((1-r_c)\) to proxy for the number of cases likely to escape, given the vaccination rate and efficacy) with a suitable proxy for healthcare system capacity – in this case, the number of hospital beds \(b_c\) per head – we create a vulnerability metric \(v_c\) which essentially proxies for how long/severe an outbreak would have to be, in the absence of non-pharmaceutical interventions like lockdowns, to overwhelm the health system.

\[ v_c = \frac{1-r_c}{\left(\frac{b_c}{p_c}\right)} \]

The Practical Stuff

The best source for COVID-19 data is Our World in Data, which has an incredible collection of data on many aspects of the pandemic. Unfortunately, however, its by-manufacturer data is relatively scarce. Due to my particular interest in these four countries, I have manually added the latest data for Singapore (Pfizer/Moderna is assumed at 50% share each, together with 100% of non-government vaccines being Sinovac), Australia (sourced from the TGA safety reports), New Zealand (from the Ministry of Health), and the United Kingdom (from the MHRA safety data) to their dataset.

Estimating efficacy-weighted vaccination rates

We start with OWID's vaccine-by-manufacturer data (as at 16 September 2021, 11pm Singapore time) with the aforementioned countries' data added.
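Concretely, once the doses, efficacy, and beds data are joined, the two formulas above reduce to a few lines of code. The post's own analysis appears to live in the linked Github repository; the sketch below is a separate Python illustration with hypothetical column names and made-up numbers, not the original source:

import pandas as pd

# Hypothetical tidy table: one row per (country, vaccine).
doses = pd.DataFrame({
    "country":    ["NZ", "NZ", "AU", "AU"],
    "vaccine":    ["Pfizer/BioNTech", "Oxford/AstraZeneca"] * 2,
    "doses":      [4_000_000, 100_000, 20_000_000, 12_000_000],
    "efficacy":   [0.96, 0.92, 0.96, 0.92],  # vs hospitalisation (Delta)
    "fullcourse": [2, 2, 2, 2],              # doses per full course
})
population = {"NZ": 5_100_000, "AU": 25_700_000}
beds_per_1000 = {"NZ": 2.6, "AU": 3.8}       # illustrative values only

# Efficacy-weighted vaccination rate: r_c = sum_i e_i * (d_i / f_i) / p_c
doses["weighted_courses"] = doses["efficacy"] * doses["doses"] / doses["fullcourse"]
r = doses.groupby("country")["weighted_courses"].sum() / pd.Series(population)

# Vulnerability: v_c = (1 - r_c) / (beds per head)
v = (1 - r) / (pd.Series(beds_per_1000) / 1000)
print(v.sort_values(ascending=False))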
Then we join that data with data on the efficacy of the various vaccines against hospitalization caused by the Delta variant from various sources (this data is reproduced at the end of the post). Where data for the Delta variant is not available, we adjust the efficacy rate down by 10% (not percentage points), which is the percentage by which one study found AstraZeneca's efficacy fell when confronted by Delta rather than other variants or the original disease. Population data is then taken from the World Bank. We now have a useful metric for how well-vaccinated a population is against the risk of hospitalization due to the Delta variant.

Estimating vulnerability

Now, following from the formula above, we integrate the World Bank's data on the number of hospital beds available per capita to come to a conclusion about the vulnerability of the health system to an outbreak. Now, there are a variety of limitations to this data (notably that World Bank beds-per-capita data can be up to 10 years old [it isn't for most countries in this dataset though] and is based on capacity, not spare capacity at any given moment), but the broad conclusion should be deeply worrying for my New Zealand compatriots: With our current level of vaccination and the current capacity of our health care system, New Zealand would be the most vulnerable rich country for which data is available were the Delta variant to enter the community. On my crude metric, we would be roughly 50% more vulnerable than Australia.

This seemingly validates the Prime Minister's decision to move very quickly into lockdown given the most recent outbreak – and offers some explanation for why NSW took longer to do so. Moreover, it emphasises why it is so important for New Zealand's vaccination programme to succeed, despite its incredibly slovenly start.

There is a variety of complicated modelling (for instance, using a susceptible-infected-recovered model to account for natural immunity and exponential spread) which could likely improve the specific applicability of this model to decision-making, but, as a starting point and an intuition-builder, I think it is useful. Please let me know if you have any thoughts on how I could improve it! (Source code is available at Github)

Vaccine Effectiveness Data

Vaccine | Efficacy | Full course (doses) | Delta-specific | Source | Adjusted efficacy
Johnson&Johnson | 0.71 | 1 | TRUE | https://www.wsj.com/articles/j-j-vaccine-highly-effective-against-delta-variant-in-south-african-trial-11628292645 | 0.710
Oxford/AstraZeneca | 0.92 | 2 | TRUE | https://www.gov.uk/government/news/vaccines-highly-effective-against-hospitalisation-from-delta-variant | 0.920
Sinovac | 0.88 | 2 | FALSE | https://doi.org/10.1056/NEJMoa2107715 | 0.792
Sputnik V | 0.81 | 2 | TRUE | https://www.science.org/news/2021/08/russia-s-sputnik-v-protects-against-severe-covid-19-delta-variant-study-shows | 0.810
Moderna | 0.95 | 2 | TRUE | https://www.cdc.gov/mmwr/volumes/70/wr/mm7037e2.htm?s_cid=mm7037e2_w | 0.950
Pfizer/BioNTech | 0.96 | 2 | TRUE | https://www.gov.uk/government/news/vaccines-highly-effective-against-hospitalisation-from-delta-variant | 0.960
CanSino | 0.91 | 1 | FALSE | https://www.straitstimes.com/asia/south-asia/cansinobios-covid-19-vaccine-657-per-cent-effective-in-global-trial-pakistan-health | 0.819
Sinopharm/Beijing | 0.79 | 2 | FALSE | https://www.who.int/news-room/feature-stories/detail/the-sinopharm-covid-19-vaccine-what-you-need-to-know | 0.711

If you see mistakes or want to suggest changes, please create an issue on the source repository.
CommonCrawl
Coding Ground C++ Programming 8085 Microprocessor Prime Packs UPSC IAS Exams Notes Developer's Best Practices Effective Resume Writing HR Interview Questions Program to check if two given matrices are identical in C++ C++Server Side ProgrammingProgramming Given two matrix M1[r][c] and M2[r][c] with 'r' number of rows and 'c' number of columns, we have to check that the both given matrices are identical or not. If they are identical then print "Matrices are identical" else print "Matrices are not identical" Identical Matrix Two matrices M1 and M2 are be called identical when − Number of rows and columns of both matrices are same. The values of M1[i][j] are equal to M2[i][j]. Like in the given figure below both matrices m1 and m2 of 3x3 are identical − $$M1[3][3]=\begin{bmatrix} 1 & 2 & 3 \ 4 & 5 & 6 \ 7 & 8 & 9 \ \end {bmatrix} \:\:\:\:M2[3][3] =\begin{bmatrix} 1 & 2 & 3 \ 4 & 5 & 6 \ 7 & 8 & 9 \ \end{bmatrix} $$ Input: a[n][n] = { {2, 2, 2, 2}, {2, 2, 2, 2}, {3,3, 3, 3}, {3,3, 3, 3}}; b[n][n]= { {2, 2, 2, 2}, {3, 3, 3, 3}}; Output: matrices are identical Output: matrices are not identical Iterate both matrices a[i][j] and b[i][j], and check a[i][j]==b[i][j] if true for all then print they are identical else print they are not identical. Step 1 -> define macro as #define n 4 Step 2 -> Declare function to check matrix is same or not int check(int a[][n], int b[][n]) declare int i, j Loop For i = 0 and i < n and i++ Loop For j = 0 and j < n and j++ IF (a[i][j] != b[i][j]) Step 3 -> In main() Declare variable asint a[n][n] = { {2, 2, 2, 2}, {3, 3, 3, 3}} Declare another variable as int b[n][n] = { {2, 2, 2, 2}, IF (check(a, b)) Print matrices are identical Print matrices are not identical #define n 4 // check matrix is same or not int check(int a[][n], int b[][n]){ for (i = 0; i < n; i++) for (j = 0; j < n; j++) int a[n][n] = { {2, 2, 2, 2}, int b[n][n] = { {2, 2, 2, 2}, cout << "matrices are identical"; cout << "matrices are not identical"; matrices are identical Sunidhi Bansal Updated on 23-Sep-2019 10:55:03 Python Program to check if two given matrices are identical Java program to check if two given matrices are identical C# program to check if two matrices are identical Check if two lists are identical in Python C++ Program to Check Multiplicability of Two Matrices How to check if two matrices are equal in R? Python program to check whether two lists are circularly identical Check if two list of tuples are identical in Python C program to compare if the two matrices are equal or not Program to multiply two matrices in C++ C# program to add two matrices C# program to multiply two matrices Write Code to Determine if Two Trees are Identical in C++ C++ Program to check if given numbers are coprime or not Check if tuple and list are identical in Python Enjoy unlimited access on 5500+ Hand Picked Quality Video Courses Training for a Team Affordable solution to train a team and make them project ready. Submit Demo Request We make use of First and third party cookies to improve our user experience. By using this website, you agree with our Cookies Policy. Agree Learn more
How is gravitational potential energy $mgh$?

I know the derivation that $W=Fd$, hence $F=mg$ and $d=h$, so the energy gained by the body is $mgh$, considering the body on the ground to have $0$ gravitational potential energy. But the definition of work is (as given in my book): "Work done is the product of force and displacement caused by it in the same direction." That means work done on a body to lift it against gravity to a certain height should be equal to the potential energy gained by it, right? My book also states that: $mg$ is the minimum force required to lift a body against earth's gravity (without acceleration). But how does that make sense? Suppose a body is kept on the ground, and we apply a force $mg$ on it, won't the force of gravity and this external force cancel out and ultimately result in no movement of the body? How is the derivation of $U=mgh$ thus obtained?

Tags: forces, energy, newtonian-gravity, work, potential-energy – Mehmer

Part of the problem is to distinguish between the work done by a particular force and the net work done by all the forces. The second is to notice that the work done on an object depends on the process undergone. The third is to understand that the relationship between work and potential energy is that the work done by a conservative force is proportional to the change in the potential energy.

Let's walk through the scenario. A block of mass $m$ sits on the ground at position $y=0$. There are two forces acting: the gravitational force downward and the normal force upward. Newton's 2nd Law tells us that
$$ m\vec{a}=\vec{F}_{\textrm{net on object}} = \vec{F}_{\textrm{G, on object by Earth}} +\vec{N}_{\textrm{on object by ground}}\,. $$
We'll abbreviate these as $\vec{F}_{\textrm{net}}$, $\vec{F}_{\textrm{G}}$, and $\vec{N}$. In the case where the object is just sitting on the ground, the acceleration is clearly zero, and the normal and gravitational force cancel each other out. The block doesn't move, and so the net work done by either force must be zero:
$$ W_{\textrm{by G}}=\int_i^f\vec{F}_{\textrm{G}}\cdot d\vec{r}=\vec{F}_{\textrm{G}}\cdot\Delta\vec{r} = 0\,, $$
where the second equality holds because the gravitational force is constant near the surface of the Earth, and the third holds because the net displacement is zero.

Now, someone grabs the block, accelerates it upwards, and then starts lifting the block upwards at constant speed. Ignoring the acceleration part, as the block moves up at constant speed, the net force on it must be zero, and so the gravitational force and normal force acting must cancel, as they did above, although now $\vec{N} = \vec{N}_{\textrm{by person}}$, which we'll just call $\vec{N}$. The work done by gravity and the work done by the person lifting the block can be computed as follows:
$$ W_{\textrm{by G}}=-mg(y_f-y_i)\,, $$
where $y_i$ and $y_f$ are the initial and final heights of the object, and
$$ W_{\textrm{by N}}=N_{\textrm{by hand}}(y_f-y_i)\,. $$
Note that these two works are equal and opposite, and so the net work done is zero, as it must be because the kinetic energy isn't changing! However, the works done by the individual forces are non-zero.
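To make the bookkeeping concrete, here is a small worked example (numbers chosen purely for illustration): lifting a $2\ \textrm{kg}$ block through $\Delta y = 3\ \textrm{m}$ at constant speed, with $g \approx 9.8\ \textrm{m/s}^2$,
$$ W_{\textrm{by G}} = -mg\,\Delta y = -(2)(9.8)(3)\ \textrm{J} = -58.8\ \textrm{J}\,,\qquad W_{\textrm{by N}} = +58.8\ \textrm{J}\,, $$
so the net work is zero while the potential energy of the Earth-block system increases by $\Delta U = -W_{\textrm{by G}} = mg\,\Delta y = 58.8\ \textrm{J}$.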
Looking at the $W_{\textrm{by G}}$, we can see that we can alternatively define it as
$$ W_{\textrm{by G}} = -(U_f-U_i)\,, $$
where we define $U = mgy$ to be the potential energy when the object is at height $y$. Then, $U_f-U_i = mgy_f - mgy_i$ is just the change in potential energy as the object is lifted from height $y_i$ to height $y_f$. We could write this as $mgy_f - mgy_i = mgh$, where $h$ is the change in height, but this isn't a great way to do things, because $h$ could be negative (if the block moves downward), and it's easy to confuse a position with a change in position if it's not notated correctly. I would write this as $mgy_f - mgy_i = mg\Delta y$.

To tie this in with the OP's specific questions, then, note that while the block is sitting on the ground, the potential energy is constant because its position doesn't change. The value of the potential energy itself is a meaningless quantity; it's only changes in potential energy that matter, via $W = -\Delta U$. We derive $U= mgy$ by considering the work done during a process in which the position of the object changes.

Last important note: the third bullet point requires a change in perspective, and without this change in perspective, things can go wrong (mixed-up understandings and incorrect calculations). In our analysis above, we chose the system to be the ball, and we computed the change in kinetic energy of the ball by computing the works done by all forces acting on the ball. If these works cancel, then the net change in kinetic energy is zero. If instead we move to a potential energy language, we have to reconsider what we call our system. Instead of thinking about the work done by the Earth via gravity on the ball, we consider a new system composed of both the Earth and the ball. In that case, we replace the work done by the Earth on the ball by the change in potential energy of the Earth-ball system, i.e.,
\begin{align} \Delta KE_{\textrm{ball}} &= W_{\textrm{N}}+W_{\textrm{G}} = W_{\textrm{N}}-\Delta PE_{\textrm{G}} \Longrightarrow \\ W_{\textrm{N}} &= \Delta KE_{\textrm{ball}} + \Delta PE_{\textrm{G}} \end{align}
Since the kinetic energy of the Earth doesn't change,
$$ \Delta KE_{\textrm{system}} = \Delta KE_{\textrm{ball}} + \Delta KE_{\textrm{Earth}} = \Delta KE_{\textrm{ball}}\,, $$
and so we can write
$$ W_{\textrm{ext}} = \Delta KE_{\textrm{system}} + \Delta PE_{\textrm{system}}\,, $$
where $W_{\textrm{ext}}$ is the work done by objects outside the system on objects inside the system, or work done by external forces. In this case, that is the work done by the person in lifting the ball. – march

Everything is cool but the problem is I haven't learnt calculus yet as it's not in 10th grade, so I just gotta take your word for it now :P Thanks though, appreciate the effort – Mehmer

You don't need calculus, though. The only part where that comes in is in the general definition of the work done. Since all these forces are constant, the work reduces to the product of the component of the force in the direction of the displacement times the displacement. So you can completely ignore the integral! Everything else is fine.

@JavaMonke Nope! The speed is in fact irrelevant, since when you compute $F_{\textrm{G},y}(y_f-y_i)$, the velocity (or even speed) doesn't even come into it!
The "ignoring" the initial acceleration part was in service of understanding how the net work might be zero while the works done by individual forces aren't. $\endgroup$ $\begingroup$ Alright, so $W_N= \Delta PE_G$ since $\Delta KE=0$? $\endgroup$ $\begingroup$ This is exactly why I recommended not writing down $W_N=\Delta PE_G$! That only holds for the very specific situation I was considering, in which the normal and gravitational forces are equal and opposite! What we have is that $W_G = -\Delta PE_G = -mg(y_f-y_i)$, no matter what the situation, and then $W_{F_H}=\Delta K +\Delta PE_G = \Delta K + \Delta PE_G$. In this case, you actually need to compute $W_{F_H}$, which means you need to know exactly what force $F_H$ was acting during the process. $\endgroup$ Suppose a body is kept on the ground, and we apply a force 𝑚𝑔 on it, won't the force of gravity and this external force cancel ... Yes, they will cancel. Net force = 0, acceleration = 0. ...our and ultimately result in no movement of the body? acceleration = 0 does not mean velocity = 0. If we could get the block moving for a moment, then the $mg$ force would be sufficient to maintain that motion against gravity. And since it is moving, there is now a non-zero displacement that can be applied. BowlOfRedBowlOfRed $\begingroup$ I considered the same, but won't that minimum extra force required to give the body a certain velocity be considered too when measuring the energy gained by the ball? $\endgroup$ $\begingroup$ You can create scenarios where it does not. First of all there is no minimum speed, so you can make the KE arbitrarily small (much smaller than the potential energy gained). Also, you can apply it before consideration. Start the object moving before your reference point and only consider the work done after the reference point. $\endgroup$ – BowlOfRed $\begingroup$ @JavaMonke Yes, and that extra force gives it kinetic energy. Then, in order for the object to stop at height $h$, you need to dampen the upward force a little, to convert that kinetic energy into potential energy. So from "unmoving at height $0$" to "unmoving at height $h$", the lifting force has gone from slightly above $mg$, to $mg$, to slightly below $mg$, averaging out to exactly $mg$. $\endgroup$ – Arthur $\begingroup$ Thanks! @Arthur $\endgroup$ That is called positive work. That's the work you do lifting the body because the force you apply is in the same direction as the displacement of the body. Positive work transfers energy to the body. But at the same time you are doing positive work, gravity is doing negative work since its force is opposite to the direction of the displacement. Negative work takes energy away from the body. In this case, gravity takes part or all of the energy you supply the body and stores it as gravitational potential energy (GPE) of the Earth-body system. That means work done on a body to lift it against gravity to a certain height should be equal to the potential energy gained by it, right? Yes, if the body is to have no change in kinetic energy, i.e., $\Delta KE=0$, between $0$ and the height $h$. That will be the case if the body begins and ends at rest and the net work done is $mgh-mgh=0$. The underlying principle here is the work energy theorem which states: The net work done on an object equals its change in kinetic energyat the height. My book also states that: But how does that make sense? 
Since the body starts at rest on the ground, in order to get it moving you need to apply a force $>mg$ to give it an initial acceleration. But in order for all your work to wind up as GPE, before reaching the height $h$ you need to apply a force $<mg$ to decelerate the body and bring it to rest at $h$, for a $\Delta KE$ of zero. What happens in between $0$ and $h$ does not matter, as long as the object begins and ends at rest so that $\Delta KE=0$. – Bob D
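Bob D's point above (push slightly harder than $mg$ to start, slightly softer to stop) can be checked with a short sketch. The mass, height and force offset are illustrative assumptions:

```python
# Minimal sketch: lift with F > mg for the first half of the climb and
# F < mg for the second half. The block arrives at rest and the lifting
# force's total work still comes out to m*g*h. Numbers are illustrative.
g, m, h = 9.81, 2.0, 1.0
eps = 0.5                # small extra/deficit force, N
F_up = m * g + eps       # applied during the first half of the climb
F_down = m * g - eps     # applied during the second half

# Constant-acceleration kinematics on each half:
a1 = (F_up - m * g) / m                  # = +eps/m
v_mid_sq = 2 * a1 * (h / 2)              # speed^2 at the midpoint
a2 = (F_down - m * g) / m                # = -eps/m
v_top_sq = v_mid_sq + 2 * a2 * (h / 2)   # = 0: the block arrives at rest

W_person = F_up * (h / 2) + F_down * (h / 2)
print(f"speed^2 at top: {v_top_sq:.3e}  (zero -> Delta KE = 0)")
print(f"work by person: {W_person:.3f} J vs m*g*h = {m * g * h:.3f} J")
```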
How are heavier elements such as carbon and silicon distributed within the Sun?

In a previous question I asked about the source of carbon and silicate dust that Solar Probe Plus will encounter in its close flyby of the Sun. It seems likely that most sources would include infall from outside the solar system or fragments resulting from collisions inside the solar system. But the Sun is believed to be about 0.4% carbon and 0.1% silicon by mass (see here and here). I believe these represent the relative abundances during the formation of the Sun, but do not account for nucleosynthesis during the Sun's life so far. Is the distribution of carbon and silicon in the Sun believed to be uniform, or does it have a radially dependent distribution? Is there any kind of gravitational settling, i.e. heavier atoms tending to move toward the center, as happens in planets? Since we can only measure elemental abundances directly near the surface using spectroscopic techniques, would the radial distribution come from some model or calculation?

the-sun elemental-abundances

– I believe the heavier elements go to the center of the core, just as happens with heavier stars. – Free Consulting Jul 28 '16 at 21:13

A page from the Institute for Advanced Study links to data from various modifications to the Standard Solar Model. The newest given there is from Bahcall$^1$ et al. (2005), which I'll use as an example. The authors' calculations depend on data from the Opacity Project. Data from two models are available through the IAS link: the BS05(AGS, OP) model and the BS05(OP) model; the difference lies in the main thing you're interested in: heavy element abundances. BS05(AGS, OP) has, for instance, a central$^2$ $^{12}\text{C}$ mass fraction of $7.79\times10^{-6}$, while BS05(OP) has a central $^{12}\text{C}$ mass fraction of $1.05\times10^{-5}$. All of the central values of the major thermodynamic variables (e.g. pressure and temperature), as well as the mass fractions, agree to well within an order of magnitude between the two models.

That said, the general heavy element trend is the same in both models. $^{12}\text{C}$ increases by two to three orders of magnitude going outward from the center. $^3\text{He}$ (rather than the more common $^4\text{He}$, which has a mass fraction given by $Y$) increases by one to two orders of magnitude. $^{14}\text{N}$ actually decreases by two orders of magnitude, while $^{16}\text{O}$ remains relatively constant. The model gives no data for silicon. Other new models agree (though not all), to less than an order of magnitude or so, with the data from Bahcall et al. Graphical representations can be found in, e.g., this paper.

I'm unsure of the reason behind the $^3\text{He}$ spike; that may warrant a follow-up question. My one guess is that this could be due to its role as an intermediate nucleus in the second step of the p-p chain. However, the same trends for each element are followed, although the changes in mass fraction are not constant at all radii $r$.

As for your question about how the central values were derived, the answer is that helioseismological measurements and neutrino fluxes are some of the best indicators of composition. This was used as the basis for the BS05(AGS, OP) and BS05(OP) models, as well as, in fact, most variants of the Standard Solar Model.

$^1$ This is Bahcall's page, so, naturally, these are some of his papers. However, the results appear to be consistent with the calculations of others.
$^2$ Technically, this is from about 0.0016 solar radii, but this distance step is comparatively small. – HDE 226868

– Wow, thanks for taking the time to put this together. Those plots are exactly what I hoped to see - there's so much happening there! Indeed this suggests further questions. Thank you! The footnotes are appreciated as well; I'm pretty sure John Bahcall's papers can be trusted in this context :-). – uhoh Aug 3 '16 at 2:45

– I noticed that there is relatively little overlap between the carbon and nitrogen fraction curves. Might this imply that the CNO cycle is a less common mode of fusion in the Sun? – dualredlaugh Aug 3 '16 at 3:05

– @dualredlaugh The CNO cycle is relatively uncommon in the Sun; it starts becoming dominant at about $1.7\times10^{7}$ K, a bit higher than the Sun's core temperature. So yes, the p-p chain is far more prevalent than the CNO cycle in Sun-like stars. – HDE 226868 Aug 3 '16 at 3:07

– @dualredlaugh The CNO cycle is energetically unimportant (1%), but is responsible for turning almost all the C into N in the core. The N to O (and back to C) step is much slower. – Rob Jeffries Aug 3 '16 at 6:57
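For anyone who wants to reproduce plots like those referenced in the answer, a minimal sketch follows. The file name and column names are hypothetical placeholders; the actual BS05 tables distributed on the IAS page have their own column layout, which you would need to match:

```python
# Minimal sketch: plot heavy-element mass fractions vs. fractional radius
# from a standard-solar-model table. The file name and column names here
# are hypothetical placeholders -- check the real model file's header.
import numpy as np
import matplotlib.pyplot as plt

data = np.genfromtxt("bs05_model.txt",
                     names=("R", "X_He3", "X_C12", "X_N14", "X_O16"))

plt.figure()
for col in ("X_He3", "X_C12", "X_N14", "X_O16"):
    plt.semilogy(data["R"], data[col], label=col)  # log scale: fractions span decades
plt.xlabel("r / R_sun")
plt.ylabel("mass fraction")
plt.legend()
plt.show()
```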
Which type of star would be best used for a Shkadov thruster to reach Andromeda as soon as possible?

I had originally thought of using Nicoll-Dyson beams to propel probes and small ships from the Milky Way to as many galaxies as they can reach, to turn stars in those galaxies into Shkadov thrusters for the return to our galaxy. The problem with Shkadov thrusters is that they have extremely slow initial acceleration due to the mass of the star, but after around a billion years of thrusting a star like our Sun can have moved 35,000 light-years and will be moving at 20 km/s. Nicoll-Dyson beams also have the issue of the beam spreading and becoming less effective at propelling a ship once we get to intergalactic distances. So I thought that since they are arriving home on Shkadov thrusters, they could also have left the Milky Way the same way; at least then your massive fuel source isn't millions of light-years away.

For the return journey the majority of stars will be red dwarfs, due to their trillion-year life spans, but they are extremely slow-moving stars because of their low energy output. A massive star may be the best bet: even though such stars are much more massive and harder to move, their energy output is orders of magnitude higher. The major problem with these stars is their short life span. What makes this calculation even harder is that the two galaxies are moving towards each other, and it could be asked why bother traveling there when we will collide in 4 billion years; but I would like to arrive in Andromeda long before the merger.

Is there a type of star that could get to Andromeda as a Shkadov thruster long before the merger, or could even a Nicoll-Dyson beam using the correct star type propel a probe or small ship all the way to Andromeda in the shorter travel times I am after?

Edit: someone had done some calculations and said it is not possible to reach Andromeda with a massive star. Their calculations put our Sun's output as being able to reach it in 20 billion years and a 10 solar mass star as reaching it in 1 billion years, both travel times being far longer than the respective life spans. They did, however, have an interesting suggestion of riding the supernova blast for the remaining journey.

space-constructs

– 'As soon as possible'? A metric that is very hard to qualify. – Justin Thyme the Second

– @JustinThymetheSecond Well, I did have a question I was going to add, which is: when would be the best time to leave? Should you leave as soon as possible to arrive in the fastest time, or would waiting until the speed at which the galaxies are moving together increases actually be the best time to leave?

– @JustinThymetheSecond The aim is to arrive as soon from now as possible using a single star, and as mentioned in the answers the star's output vs. life span is the issue.

– Solar system: "That's it, I'm moving out when I grow up!" Milky Way: "Oh no you don't, Sun."

– The thing is, the star doesn't have to last the entire journey. Conservation of momentum. Once the star accelerates you to a certain velocity, you keep going at that velocity. So 'best' could imply using up the star as quickly as possible, for the greatest speed. But do you want to decelerate at the end, or do you want any part of the star to be left over? Artillery shells do not need to decelerate at the end of their trip, so 'best time' for them is different than 'best time' for an airplane.

You want to use light to push the star.
The more light the star emits, the more push it can produce. But to produce more light the star needs more mass, which will affect your acceleration. Where is the sweet spot?

According to Wikipedia, the mass-luminosity relationship can be written as $${L \over L_{s}}=p\left({M\over M_s}\right)^q$$ with: if $M<0.43M_s$ then $q=2.3$, $p=0.23$; if $0.43M_s < M<2M_s$ then $q=4$, $p=1$; if $2M_s < M<55M_s$ then $q=3.5$, $p=1.4$; if $M>55M_s$ then $q=1$, $p=32000$.

If we assume that thrust is proportional to luminosity, the above gives us the dependence of thrust on mass and thus allows us to calculate the maximum acceleration we can get, assuming that in the non-relativistic regime we have $a=F/m$. We get $$a = {p L_s \over {M_s}^q}M^{q-1}\,.$$ Finding the maximum of the above function versus $M$ will give you the optimum thruster. As a crude engineer I have plotted a chart of the acceleration vs. the mass of the star [chart: acceleration vs. stellar mass], which tells us that the best thruster is a star with 55 solar masses. Bigger than that will not give you more acceleration.

If you are interested in maximum $\Delta v$ instead of maximum acceleration, you now have to combine the thrust with the amount of time it can act, given by the star's lifetime. This table gives an indication of a star's lifetime based on its mass; combining it with $\Delta v = a\cdot t$ yields the following [table: $\Delta v$ vs. stellar mass]. It is evident that the maximum $\Delta v$ will be provided by a star with 60 solar masses: a lot of push for a very short amount of time. – L.Dutch

– Thanks, do you know the life span of a 55 solar mass star? The calculations are a bit over my head.

– I just read that for a 40 solar mass star the life span is a million years, so that wouldn't reach Andromeda.

– @RandySavage, I have now included the lifetime of the star in the calculation. – L.Dutch

– I think our answers differ because of your assumption that $L\propto F$ and that the proportionality factor is constant. While the former is true, I think that just using that ignores that $L=Fv$, and so there's a quadratic relationship between $L\Delta t/M$ and $v$, not the linear one you're proposing. I guess we also differ in that you've considered using stars that are quite massive and I've implicitly ignored them as too volatile (and I say this for the benefit of anyone reading the answers and wondering why they're so different!). – HDE 226868

– It looks like you dropped a factor of 1000 in the sub-3-solar-mass star lifetimes in the delta-V table. This changes the results. – AI0867
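To see where the two optima quoted above come from numerically, here is a small sketch that evaluates the piecewise relation. The $\tau \propto M^{-2.5}$ lifetime power law is borrowed from the next answer as a stand-in for the lifetime table the answer used, so the peak positions are approximate and differ slightly from the table-based 60 solar mass figure (see also AI0867's comment):

```python
# Numerical sketch of the answer's piecewise mass-luminosity argument.
# Units are arbitrary; only the location of the maxima matters here.
import numpy as np

def luminosity(M):
    """L/L_sun = p * (M/M_sun)^q, piecewise as quoted in the answer."""
    if M < 0.43:
        p, q = 0.23, 2.3
    elif M < 2:
        p, q = 1.0, 4.0
    elif M < 55:
        p, q = 1.4, 3.5
    else:
        p, q = 32000.0, 1.0
    return p * M ** q

masses = np.linspace(2, 100, 2000)                     # solar masses, M > 2
accel = np.array([luminosity(M) / M for M in masses])  # a ~ L/M
delta_v = accel * masses ** -2.5                       # dv ~ a * tau, tau ~ M^-2.5

print("a  peaks at M ~ %.0f M_sun" % masses[np.argmax(accel)])    # ~55
print("dv peaks at M ~ %.0f M_sun" % masses[np.argmax(delta_v)])  # just above 55
```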
Ideally, a star of $6\text{-}8$ solar masses.

Very massive stars are not the best choice, for two reasons. The first one is that these stars tend to be quite violent during their lives, with strong stellar winds and sometimes energetic non-thermal radiation, like x-rays. Adding shielding to a megastructure like a Shkadov thruster might be possible, but it's a pain. Plus, after some millions of years, if the star is heavier than 8 solar masses, it'll explode in a supernova, and there's a very good chance that your thruster will simply be destroyed in intergalactic space.

The second reason is that for stars above $2M_{\odot}$, the final velocity a star can produce throughout the entirety of its lifetime is essentially independent of mass, for a reasonable mass-luminosity relation.$^{\dagger}$ We can actually do these calculations by simply invoking conservation of energy, following the method of Hooper 2018, who applied it to propelling stars using energy gathered by Dyson spheres. The final velocity $v$ after a thruster has operated for time $\Delta t$ is, for stars of $M>2M_{\odot}$, $$v=0.034c\;\left(\frac{\Delta t}{1\;\text{Gyr}}\right)^{1/2}\left(\frac{M}{2M_{\odot}}\right)^{1.25}\left(\frac{\eta}{1}\right)^{1/2}$$ where $\eta$ is some efficiency factor. Let's assume our star will die before we reach Andromeda, an assumption I think should hold for all stars of $M>6M_{\odot}$.$^{\ddagger}$ The lifetime of the star scales as $\tau\propto M^{-2.5}$, and so if we assume that $\Delta t=\tau$, we see that the mass dependence for $v$ actually drops right out!

Let's assume, then, that the mass of the star is unimportant for the stars of the masses we're interested in. I then argue that we should pick a star in the range $6M_{\odot}<M<8M_{\odot}$. Why? There are a couple of reasons: A star more massive than $8M_{\odot}$ will undergo a supernova prior to arriving at Andromeda. A star less massive than $6M_{\odot}$ will not have had its full energy tapped by the time it arrives at Andromeda. A less massive star lives for a longer period of time, and therefore it can be a source of auxiliary power for other thruster functions for longer. Stars in this mass range are much less likely to have outbursts and eruptions than the massive stars others have argued for.

In short, pick a star of moderate mass, and you'll reach Andromeda efficiently and, most importantly, without having been incinerated by a supernova.

$^{\dagger}$ L.Dutch notes a break in the mass-luminosity relation for $M>55M_{\odot}$, though I'm not sure that this is widely used, and at any rate, these stars are extremely rare.

$^{\ddagger}$ I got this value by assuming that all stars of $M>2M_{\odot}$ reach terminal speeds of $v_{\text{max}}\approx0.045c$ (which you can see by a quick calculation using the above formula) and would have mean speeds of approximately half that. The travel time to Andromeda is then roughly 114 million years, and a star of mass $M=6M_{\odot}$ would leave the main sequence after that time - I neglect main-sequence evolution. – HDE 226868

– Thanks, do you know how long the journey would take for a 20 solar mass star, and at what point in the journey it would go supernova? I was imagining an epic scene with loads of supernovae and red giants filling the sky of the galaxy they were arriving at.

– But would a 9 solar mass star take much longer to cover the same distance? Does the 57 million years include the star going supernova and coasting the rest of the way on momentum?

– @RandySavage I once again made an incorrect assumption about the time it would take - it should be greater than that by a factor of 2. The total travel time does take into account the time spent coasting, yes, for those stars that would die after reaching Andromeda.

– Note you'll need the star to decelerate before reaching the destination, so "jettisoning" is not really an option, is it? – StephenG
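A short numerical companion to the formula above, assuming $\eta = 1$, a crude $10\,\text{Gyr}\,(M/M_{\odot})^{-2.5}$ lifetime normalization, and a 2.5 million light-year distance to Andromeda; it reproduces the roughly $0.045c$ terminal speed and the ~114 Myr travel-time estimate from the footnote:

```python
# Back-of-the-envelope numbers for the quoted formula (Hooper 2018 form),
# assuming eta = 1. Distance and lifetime normalization are rough assumptions.
D_ANDROMEDA_LY = 2.5e6   # distance in light-years (approximate)

def v_final_c(dt_gyr, M_solar, eta=1.0):
    """Terminal speed (fraction of c) for M > 2 M_sun after thrusting dt_gyr."""
    return 0.034 * dt_gyr ** 0.5 * (M_solar / 2.0) ** 1.25 * eta ** 0.5

def lifetime_gyr(M_solar):
    """Crude main-sequence lifetime: ~10 Gyr * (M/M_sun)^-2.5."""
    return 10.0 * M_solar ** -2.5

for M in (2, 6, 8, 20):
    tau = lifetime_gyr(M)
    v = v_final_c(tau, M)                            # thrust for the whole lifetime
    travel_myr = D_ANDROMEDA_LY / (0.5 * v) / 1e6    # mean speed ~ v/2
    print(f"M={M:>2} M_sun: tau={tau * 1e3:7.1f} Myr, "
          f"v_max={v:.3f} c, travel ~ {travel_myr:5.0f} Myr")
```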
– @StephenG The OP specifically mentioned "probes and small ships" in the question, so I suspect the payload is actually quite low. Nothing unfeasible about, say, storing energy from the star along the way to power a set of ion thrusters or some such - it's peanuts in comparison to the star's total output. The only issue is storing electricity, and I would be surprised if that's a serious problem.

Instead of using a Shkadov thruster, use... the Caplan thruster!

A hypothetical megastructure that essentially acts as an immense rocket, shooting stuff one way to propel yourself the other way. This requires a basic Dyson swarm first. Since your civilisation can construct Shkadov thrusters through probes alone, I'm going to assume that it has the capability to create a Dyson swarm. A Caplan thruster is a space-station-like megastructure pointing towards the sun that draws on energy from the Dyson swarm and gathers solar matter, powering nuclear fusion which ejects particles from its 'thruster' at around 1% the speed of light. A secondary thruster fires a second jet of particles at the sun, pushing it forward so the power of the primary thruster doesn't cause the Caplan megastructure to impact the Sun. To quote from the paper (linked below): 'A jet with the mass loss rate $\dot m$ and average speed $\langle v\rangle$ gives the sun an acceleration of $\dot m\langle v\rangle/M_\odot$.'

To maximise the acceleration $a$, you must increase $\dot m$ and $\langle v\rangle$ without $\dot m$ being large enough to impact the lifespan of the star. The Caplan thruster uses immense electromagnetic fields to gather hydrogen and helium from the sun, since it requires millions of tons of fuel a second. However, this sparse outflowing matter is not enough to power the Caplan thruster alone. This is where we use the Dyson swarm. The swarm focuses sunlight onto the star itself, heating these areas to incredible temperatures and causing millions upon millions of tons of matter to rise from the star, which is funneled into the Caplan thruster using its electromagnetic fields. The helium and hydrogen are separated; the helium is used in thermonuclear fusion reactors, with the primary thruster expelling radioactive oxygen at a billion degrees. The secondary thruster works by using particle accelerators to fire the collected hydrogen back at the sun, balancing out the Caplan thruster to prevent it crashing into the surface. The star can be moved 50 light-years in only one million years. The use of stellar matter will also extend the lifespan of the star, since smaller stars undergo fusion at a slower rate.

If we assume a perfectly efficient Dyson swarm, in only 5 megayears the star could reach velocities of up to 200 km/s, as opposed to the 20 km/s that Shkadov thrusters reach after an even longer period of time. However, the mass loss rate limits the usage of the star to 100 megayears before the star is depleted enough to limit performance and shrink. It's more viable to redirect the star onto the trajectory you wish it to travel, firing the Caplan thruster for only 10 megayears in that direction.

I know this isn't directly answering your question, but I think that a Caplan thruster is currently the best way to go about stellar engines. Link to the paper: link. – Jefferey Dawson

– I hadn't even heard of this type of thruster, that's really cool, thanks.
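The quoted 50-light-years-per-million-years figure can be turned around to ask what mass flow the relation $a = \dot m\langle v\rangle/M_\odot$ implies. This back-of-the-envelope sketch uses made-up round numbers (1% of c exhaust speed, constant acceleration from rest), not the paper's actual engine parameters:

```python
# Rough feel for the Caplan-thruster scaling a = mdot * <v> / M_sun.
# All inputs are illustrative assumptions, not numbers from the paper.
M_SUN = 1.989e30           # kg
LY = 9.461e15              # m
MYR = 3.156e13             # s
v_exhaust = 0.01 * 3.0e8   # m/s, "around 1% the speed of light"

# Acceleration needed to cover 50 ly from rest in 1 Myr (d = a t^2 / 2):
a_needed = 2 * 50 * LY / (1 * MYR) ** 2
mdot_needed = a_needed * M_SUN / v_exhaust   # kg/s
print(f"a needed   : {a_needed:.2e} m/s^2")
# 1 billion tonnes = 1e12 kg, so divide by 1e12 for billions of tonnes/s:
print(f"mdot needed: {mdot_needed:.2e} kg/s "
      f"(~{mdot_needed / 1e12:.0f} billion tonnes/s)")
```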
An unnatural one. L.Dutch's answer is a good start. If you just want to find a natural star to ride along with, something in the 55-60 solar mass range is fine. And indeed, that is a good place to start... But you can do much better than just finding a natural star and riding along. After all, you've already got the technology to build a Shkadov thruster, and you've got millions of years and a whole stellar system of resources to continue developing.

Stars increase in luminosity throughout their lifespans, as the core becomes more compact and fusion gets faster. The final supernova is kind of just the endpoint of that continuous process... and kind of a huge waste, as well. If you can lift material off the star as it ages during the journey, you can arrest the luminosity increase and extend its lifetime. That mass then has a variety of uses. You can use it as reaction mass to improve your propulsion efficiency and get to Andromeda faster. You can use it to slowly build a companion star that will provide additional power output and improved thrust. Or you can save it to feed back into the original star later, when it starts to actually run out of fuel. – Logan R. Kearsley

– Thanks, I had thought about lifting to control the star, but I wasn't sure how the lower-mass star later in the journey would affect the speed. Having a secondary star would look really cool; do you think they would start orbiting each other as a binary and cause trouble for the ship and the direction of travel?

– @RandySavage They would have to be built to orbit each other. As long as the secondary artificial star is built to thrust in the same direction, that should not cause any problems. – Logan R. Kearsley

– I am not sure the stellar composition won't have some influence on parts of that process. The lifted material will be higher in heavy elements and not be the best start for a new star. On the other hand, at that level you might as well fully manage the nuclear reaction, i.e. lift out the used-up fusion products and occasionally throw in some brown dwarfs full of hydrogen you bring along. – mlk

– @mlk That's an excellent start for a new star - higher metallicity means it will burn brighter from the start, giving you a better power-to-mass ratio.
Proceedings of the American Mathematical Society

Published by the American Mathematical Society, the Proceedings of the American Mathematical Society (PROC) is devoted to research articles of the highest quality in all areas of pure and applied mathematics. The 2020 MCQ for Proceedings of the American Mathematical Society is 0.85.

A best constant and the Gaussian curvature
by Chong Wei Hong
Proc. Amer. Math. Soc. 97 (1986), 737-747

Abstract: For axisymmetric $f \in C^{\infty}(S^2)$ we find conditions to make $f$ the scalar curvature of a metric pointwise conformal to the standard metric of $S^2$. Closely related to these results, we prove that in the inequality (Moser [8]) \[ \int_{S^2} e^{u} \le C\, e^{\|\nabla u\|_2^2/16\pi} \quad \forall u \in H_1^2(S^2) \text{ with } \int_{S^2} u = 0, \] the best constant is $C = \operatorname{Vol}(S^2)$.

References:
Shing Tung Yau, Survey on partial differential equations in differential geometry, Seminar on Differential Geometry, Ann. of Math. Stud., vol. 102, Princeton Univ. Press, Princeton, N.J., 1982, pp. 3–71. MR 645729
Thierry Aubin, Nonlinear analysis on manifolds. Monge-Ampère equations, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 252, Springer-Verlag, New York, 1982. MR 681859, DOI 10.1007/978-1-4612-5734-9
J. Kazdan, Gaussian and scalar curvature, an update, Seminar on Differential Geometry (S. T. Yau, ed.), Princeton Univ. Press, Princeton, N.J., 1982, pp. 185-191.
J. Moser, On a nonlinear problem in differential geometry, Dynamical systems (Proc. Sympos., Univ. Bahia, Salvador, 1971), Academic Press, New York, 1973, pp. 273–280. MR 0339258
Jerry L. Kazdan and F. W. Warner, Existence and conformal deformation of metrics with prescribed Gaussian and scalar curvatures, Ann. of Math. (2) 101 (1975), 317–331. MR 375153, DOI 10.2307/1970993
Thierry Aubin, Meilleures constantes dans le théorème d'inclusion de Sobolev et un théorème de Fredholm non linéaire pour la transformation conforme de la courbure scalaire, J. Functional Analysis 32 (1979), no. 2, 148–174 (French, with English summary). MR 534672, DOI 10.1016/0022-1236(79)90052-1
Jerry L. Kazdan and F. W. Warner, Curvature functions for compact $2$-manifolds, Ann. of Math. (2) 99 (1974), 14–47. MR 343205, DOI 10.2307/1971012
J. Moser, A sharp form of an inequality by N. Trudinger, Indiana Univ. Math. J. 20 (1970/71), 1077–1092. MR 301504, DOI 10.1512/iumj.1971.20.20101
Haïm Brézis and Louis Nirenberg, Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents, Comm. Pure Appl. Math. 36 (1983), no. 4, 437–477. MR 709644, DOI 10.1002/cpa.3160360405
E. Kamke, Differentialgleichungen, Lösungsmethoden und Lösungen. I, Gewöhnliche Differentialgleichungen, Akademische Verlagsgesellschaft, Leipzig, 1967.

MSC: Primary 58G30; Secondary 35B45, 53C20, 58E99
Journal: Proc. Amer. Math. Soc. 97 (1986), 737-747
MathSciNet review: 845999
Demonstration of a length control system for ALPS II with a high finesse 9.2 m cavity

Jan H. Põld (ORCID: 0000-0002-1863-9625)$^{1,2}$ and Aaron D. Spector (ORCID: 0000-0002-6575-8192)$^{3}$

Light-shining-through-a-wall experiments represent a new experimental approach in the search for undiscovered elementary particles not accessible with accelerator-based experiments. The next generation of these experiments, such as ALPS II, require high finesse, long baseline optical cavities with fast length control. In this paper we report on a length stabilization control loop used to keep a 9.2 m cavity resonant. The finesse of this cavity was measured to be 101,300 ±500 for 1064 nm light. Fluctuations in the differential cavity length as seen with 1064 nm and 532 nm light were measured. Such fluctuations are of high relevance, since 532 nm light will be used to sense the length of the ALPS II regeneration cavity. Limiting noise sources and different control strategies are discussed, in order to fulfill the length stability requirements for ALPS II.

Axion-like particles [1] represent an extension to the standard model of particle physics that could explain a number of astrophysical phenomena, including the transparency of the universe for highly energetic photons [2] as well as excesses in stellar cooling [3]. These particles are characterized by their low mass, $m < 1\,\mathrm{meV}$, and weak coupling to two photons, $g < 10^{-10}\,\mathrm{GeV}^{-1}$. The most prominent axion-like particle is the axion itself, which is predicted to preserve the so-called charge-parity conservation of quantum chromodynamics [4]. Axions and axion-like particles are also excellent candidates to explain the dark matter in our universe [5].

Light-shining-through-a-wall experiments attempt to measure the interaction between axion-like particles and photons by shining a laser through a strong magnetic field at an optical barrier. This will generate a flux of axion-like particles traveling through the optical barrier to another region of strong magnetic field on the other side of the barrier. Here, some of the axion-like particles will reconvert to photons that can be measured. Any Light Particle Search (ALPS) II [6] is a light-shining-through-a-wall experiment that is currently being set up at DESY in Hamburg. It uses strong, superconducting dipole magnets and a high power laser with 122 m cavities on either side of the optical barrier to boost the conversion probability of photons to axion-like particles and vice versa. The cavity before the barrier is called the Production Cavity (PC), while the cavity after the barrier is called the Regeneration Cavity (RC). In order for ALPS II to reach a sensitivity necessary to probe the photon couplings predicted by the aforementioned astrophysical phenomena, the experiment must employ long baseline, high finesse cavities. This is because increasing the number of photons in the PC increases the axion-like particle flux, while the finesse of the RC amplifies the probability that axion-like particles will reconvert to photons [7]. A demonstration of the optical subsystems for ALPS II is currently taking place in a 20 m test facility, referred to as ALPS IIa [8], whereas the 245 m full-scale experiment will be called ALPS IIc. In the current ALPS IIc design, the PC will be seeded with 30 W generated from a high power laser operating at 1064 nm [9]. The cavities will be stabilized using the Pound-Drever-Hall (PDH) technique [10, 11].
With a power buildup factor of 5000 the PC will achieve a nominal circulating power of 150 kW. For the resonant enhancement of the reconversion process it is crucial that the light circulating inside the PC is simultaneously resonant in the RC. Active stabilization systems will be required to suppress the differential length noise between the cavities and maintain the dual resonance condition. Two detection methods with very different systematic uncertainties are planned for ALPS II. First a heterodyne detection scheme will be implemented [12]. Then, the optical system will be adapted to accommodate a transition edge sensor (TES) capable of measuring individual reconverted photons [13]. The two detectors cannot be operated in parallel due to the different optical systems that the experiment must employ in order to use them. For the TES the length sensing of the RC cannot use 1064 nm light to generate an error signal for the feedback control loop, as this would be indistinguishable from the regenerated light. Instead, 1064 nm light that is offset phase locked to the light transmitted by the PC will be frequency doubled in front of the optical barrier, and the length stabilization system will utilize 532 nm light. According to the ALPS IIc design, the optical system must ensure that the power buildup for the regenerated photons stays within 90 % of its value on resonance [14]. This is what we refer to as the dual resonance condition. To check that this condition is satisfied, the optical barrier will be equipped with a shutter that can be opened to allow light transmitted by the PC to couple directly to the RC. By measuring the power of the PC light that is transmitted by the RC, the coupling efficiency, and hence the field overlap between the PC circulating field and RC eigenmode, can be calculated. Even though a seismically quiet environment is chosen for the ALPS II experiments, this sets challenging requirements on the bandwidth of the length control loop and requires a custom made, piezo controlled length actuator. The length stability requirement calls for a differential length noise with an RMS value of less than 0.6 pm between the PC and the RC [14]. The ALPS IIc RC will have a finesse of ∼120,000 for 1064 nm light and a linewidth of 10 Hz. Circulating fields in each of the cavities will propagate through 560 Tm of magnetic field length. Considering all of the parameters given above, ALPS IIc will achieve a sensitivity of $g_{a\gamma\gamma} = 2\times10^{-11}\,\mathrm{GeV}^{-1}$ for the coupling constant of photons to axion-like particles with masses up to 0.1 meV and a measurement time of 20 days [14]. While this means that ALPS II will not be sensitive to the QCD axion, it will probe an important region of the axion-like particle parameter space, searching for particles related to the aforementioned astrophysical hints. A detailed overview and status report on ALPS II is given in [6] and [15]. This paper focuses on the implementation and characterization of the length stabilization system of the ALPS IIa RC. The ALPS IIa RC is being characterized with two figures of merit: finesse for 1064 nm light and differential length noise. For the characterization of the differential length noise a high bandwidth control loop with 532 nm light stabilizes the length of the RC. The error point noise of this setup can be calibrated to provide an in-loop measurement of the suppressed length noise of the cavity.
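As a rough cross-check of how the 0.6 pm figure relates to the 90 % buildup condition, one can use the usual Lorentzian approximation of a two-mirror cavity resonance. This is only a sketch using the design finesse quoted above, not the collaboration's actual requirement derivation:

```python
# Rough check: length detuning dL that keeps the resonant buildup above 90%.
# Near resonance, P(dL)/P(0) ~ 1 / (1 + (4*F*dL/lambda)^2) for a linear cavity.
import numpy as np

wavelength = 1064e-9   # m
finesse = 120_000      # design finesse of the ALPS IIc RC

# Solve 1/(1+x^2) = 0.9 with x = 4*F*dL/lambda:
x = np.sqrt(1 / 0.9 - 1)
dL = x * wavelength / (4 * finesse)
print(f"allowed detuning ~ {dL * 1e12:.2f} pm")  # ~0.7 pm, same scale as 0.6 pm
```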
Furthermore, locking a separate 1064 nm laser to the cavity revealed noise sources that were not observable with the in-loop measurement. From here on the terms infrared and green light will refer to 1064 nm and 532 nm light, respectively. The finesse of the RC for 1064 nm light is characterized by measuring the cavity storage time. A 500 mW non-planar ring oscillator L1 at a wavelength of 1064 nm is used to implement the length lock of the RC. It seeds a periodically poled potassium titanyl phosphate crystal which generates 100 μW of 532 nm light in a single-pass second harmonic generation (SHG) stage (see schematic in Fig. 1). An electro-optic modulator (EOM) adds phase modulation sidebands before the light enters the optical cavity.

Experimental setup. Length control of the RC: The cavity consists of the mirrors RCI and RCO as well as a PDH feedback control loop. A high bandwidth length actuator is attached to RCO. The laser beam from laser L1 is frequency doubled in a temperature controlled SHG. An EOM imprints phase modulation sidebands on the laser beam, and the photodetector PDr_g is used to sense the PDH error signal. Laser frequency feedback: For the high finesse cavity operation with 1064 nm light, the frequency of laser L2 follows the RC. The feedback control loop uses PDr_ir as a sensor, and photodetector PDt in transmission of the cavity is used to measure the storage time. Beatnote frequency measurement: The beatnote signal between L1 and L2 is measured with a high bandwidth photodetector (PDbn).

The two cavity mirrors are mounted on separate optical tables 9.2 m apart from each other and within a common vacuum system. A rubber material in the feet of the optical tables provides damping above 100 Hz. For the measurements the system was pumped down to $1\times10^{-5}$ mbar in order to minimize acoustic couplings. The entire experiment is located in a clean and temperature controlled environment, which is similar to the conditions we anticipate for the ALPS IIc experiment. The cavity input mirror RCI is flat, while the cavity end mirror RCO has a radius of curvature of 19.7 ±0.1 m. This configuration yields a beam radius on RCI of 1.82 ±0.01 mm and on RCO of 2.51 ±0.01 mm for 1064 nm light. Each mirror has a diameter of 50.8 mm with a mass of 43 g and features a dichroic coating. The mirror size was chosen to avoid diffraction losses in ALPS IIc. RCI has a nominal power transmission of 25 ppm for 1064 nm and 5 % for 532 nm light. The RCO coating has a power transmission of 3 ppm for 1064 nm and 1 % for 532 nm light. The free spectral range is 16.2 MHz. A second laser L2 (see Fig. 1) seeds the cavity with infrared light. Photodetectors PDr_g and PDr_ir sense the beat signal between the directly reflected field of the cavity and a fraction of the circulating field that is transmitted through RCI, for green and infrared light respectively. Each signal at the output of the photodetector is demodulated, amplified in the PDH servo electronics and sent to the actuator. PDbn senses the beatnote signal of L1 and L2. In addition, photodetector PDt monitors the power in transmission of the cavity and is also used to perform a measurement of the storage time.

High finesse cavity characterization

State-of-the-art optics with ultra low losses are required to construct a cavity with a finesse of ∼120,000 for the ALPS II RC [6]. These types of cavities must be set up in vacuum to avoid dust particles contaminating the mirror surfaces and to avoid scattering of the intra-cavity light.
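The storage-time measurement described next amounts to fitting an exponential to the transmitted-power ring-down after the shutter closes. A minimal sketch of such a fit and of the conversion to finesse follows; synthetic data stand in for the recorded photodetector trace, with the 2 ms decay constant and 16.2 MHz free spectral range reused from this paper purely for illustration:

```python
# Sketch: fit P(t) = P0 * exp(-2 t / tau_storage) to a transmitted-power
# ring-down and convert the storage time to a finesse. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

FSR = 16.2e6  # free spectral range of the 9.2 m cavity, Hz

def decay(t, P0, tau):
    return P0 * np.exp(-2 * t / tau)

# Stand-in for a recorded ring-down trace (tau_storage = 2 ms, a little noise):
t = np.linspace(0, 10e-3, 2000)
P = decay(t, 1.0, 2.0e-3) + np.random.normal(0, 1e-3, t.size)

(P0_fit, tau_fit), _ = curve_fit(decay, t, P, p0=(1.0, 1e-3))

# Power decay time is tau/2; linewidth (FWHM) = 1/(2*pi*(tau/2)); F = FSR/FWHM.
fwhm = 1 / (2 * np.pi * tau_fit / 2)
print(f"tau_storage = {tau_fit * 1e3:.2f} ms, finesse ~ {FSR / fwhm:,.0f}")
```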
Once the laser is frequency locked to the cavity, the input light is blocked by suddenly closing the laser shutter. Then the exponential decay of the transmitted power is measured to determine the cavity storage time. The following function was fit to the data [16]: $$ P_{\text{trans}}(t)=P_{0} G T_{\text{in}} T_{\text{out}}\exp\left(-\frac{2t}{\tau_{\text{storage}}}\right) $$ In this equation $G$ is the cavity gain factor, $P_0$ is the initial power, and $T_{\text{in}}$ and $T_{\text{out}}$ are the power transmissivities of the input and output mirror, respectively. Figure 2 shows the result of one of the storage time measurements. An average of ten measurements yielded a storage time $\tau_{\text{storage}}$ of 1.99 ±0.01 ms. The fit considers data points after the power in the cavity has dropped by a factor of two, since it takes some time until the shutter has blocked the entire input beam. Applying equations from reference [16] yields a finesse of 101,300 ±500, and the roundtrip losses are 33 ±1 ppm. This does not include the transmissivities of the mirrors. We believe that most of the losses are due to scattering caused by low spatial frequency surface roughness of the mirrors. The result of the measurement strongly depended on the position of the beam spot on the mirrors. To find the position with the highest finesse, the position of the circulating field was scanned over the area of the mirror within the free aperture of the mount. The measurement reported here was taken at the position at which the highest finesse was measured. The 0.01 ms uncertainty is related to the statistical uncertainty of the measurements made at this position.

Storage time measurement. The cavity storage time is a measurement of the exponential decay of the transmitted power when the laser shutter is closed.

High bandwidth cavity lock

One of the key parameters for the ALPS II sensitivity is the differential length stability between the PC and the RC. Differential length noise refers to differential length changes between the PC and the RC after the dual resonance condition has been established. The differential RMS length noise between these two cavities must be suppressed to less than 0.6 pm in order to maintain the dual resonance condition. As mentioned earlier, the PDH error signal for the RC is generated using 532 nm light. Based on the transmission values of the cavity mirrors for 532 nm light, the finesse is 102 and the linewidth 158.6 kHz in ALPS IIa. The low finesse for 532 nm light was chosen such that only a minimum amount of light is circulating in the RC. A conditionally stable control loop design with two integrators is used to suppress the noise as much as possible. In order to smooth the transfer function of the piezo actuator attached to RCO and reduce the impact of the piezo resonances, a digital filter was inserted into the control loop. The filter coefficients were chosen such that they inverted the piezo transfer function. Consequently, this optimized the phase and gain margin of the control loop. A unity-gain frequency of 4 kHz was achieved with a phase margin of 20 deg. The length actuator is a piezo ceramic (Physik Instrumente GmbH & Co. KG). We designed a custom mount to hold a stack consisting of the piezo, the cavity end mirror RCO and a wave washer. The stack is kept in place by exerting pressure on the wave washer with a retaining ring that is screwed into the mount. This also has the effect of preloading the piezo. The force exerted on the stack was optimized such that the resonances of the system were pushed as high as possible.
It was also important not to overtighten the retaining ring, as this reduced the performance of the length actuator. In the optimized setup the first resonance is at 4.9 kHz.

In-loop measurement

Figure 3 shows a spectral density of the green control (solid blue trace) and error signal (solid red trace) displayed in terms of length noise of the cavity. As already mentioned in [8], the control signal is dominated by seismic noise up to 1 kHz and by laser frequency noise above 1 kHz. The error signal represents an in-loop measurement of the suppressed length noise. Electronics noise from the digital controller affected the measurement below 10 Hz. This will be addressed by using a different digital control system; however, this noise still does not prevent the in-loop measurements from meeting the requirements.

Length noise measurement. Amplitude spectral densities of green control (solid blue trace) and error (solid red trace) signal of the PDH control loop calibrated in length noise, as well as the beatnote frequency measurement (solid purple trace) representing the differential length noise. For a comparison with the ALPS II requirements the error signal and the beatnote frequency measurement are filtered by a cavity pole frequency of 6 Hz in post processing, and the corresponding integrated RMS is shown with the dashed lines.

Cavities exhibit a passive low pass filter property for their circulating fields. Hence, the frequency noise of the input field is suppressed at Fourier frequencies above the cavity pole [17]. In order to predict the impact on ALPS IIc, the error signal noise is therefore filtered in post processing by the expected filter property of the ALPS IIc RC. This consists of a low pass with a pole frequency of 6 Hz, assuming a finesse of 100,000 as reported in the previous section. The RMS projection (dashed red trace) shows that the control loop has sufficient gain to meet the length noise requirements for ALPS IIc, assuming uncontrolled length noise conditions similar to those in the ALPS IIa lab [18].

Beatnote frequency measurement

Since the measurement in the previous section was an in-loop measurement, it was important to confirm the result with an out-of-loop measurement. This was performed with a second 1064 nm laser (L2). While the cavity length is locked to the frequency doubled laser L1, the frequency of the second laser L2 is locked to the cavity in order to simulate the light that comes from the PC and is phase locked to L1. The ∼50 MHz infrared beatnote frequency is monitored with a fast photodetector PDbn and demodulated down to 100 kHz by mixing it with a stable reference. A time series of the 100 kHz signal is then recorded and its frequency noise is analyzed in post-processing. The measurement is displayed in Fig. 3 (solid purple trace). Unexplained out-of-loop noise enters below 25 Hz and above 200 Hz. The filtered RMS noise, displayed in the corresponding dashed line, exceeds the ALPS II length noise requirements by a factor of roughly two. In order to address the out-of-loop noise below 25 Hz, the control signal of the length control loop was fed back to the laser frequency of L1 instead. Thus the bandwidth of the loop could be significantly increased, to 40 kHz. Electronics noise, which limited the in-loop measurement for the length lock below 10 Hz, was substantially lower, as the digital controller was not required for this type of control loop.
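A minimal sketch of this kind of post-processing, turning a recorded beatnote time series into an amplitude spectral density and calibrating it to cavity length noise, is shown below. The sampling rate and the synthetic input are placeholders; only the calibration relation $\delta L = \delta\nu \cdot L/\nu$ is taken from standard cavity physics:

```python
# Sketch of the post-processing step: turn a demodulated beatnote phase
# record into an amplitude spectral density of frequency (and length) noise.
# Sampling rate and input array are stand-ins for the recorded data.
import numpy as np
from scipy.signal import welch

fs = 1e4                                                  # assumed sampling rate, Hz
phase = np.cumsum(np.random.normal(0, 1e-3, int(1e5)))   # stand-in phase record, rad

# Instantaneous frequency deviation from the phase derivative:
freq_dev = np.gradient(phase) * fs / (2 * np.pi)          # Hz

f, psd = welch(freq_dev, fs=fs, nperseg=2**14)
asd = np.sqrt(psd)                                        # Hz / sqrt(Hz)

# Convert frequency noise to cavity length noise: dL/L = df/nu => dL = df * L / nu
L_cav, nu = 9.2, 2.818e14   # cavity length (m), optical frequency of 1064 nm (Hz)
asd_length = asd * L_cav / nu                             # m / sqrt(Hz)
```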
Figure 4 shows the data below 40 Hz for the length (solid blue trace) and frequency feedback (solid red trace), respectively. The measurement was loop gain limited above 40 Hz. While the length stabilization crosses the requirements at 17 Hz and increases further to an RMS value of 3.5 pm at 1 mHz, the frequency stabilization RMS meets the requirements down to 1.3 mHz.

Out-of-loop measurement. Beatnote frequency measurement for the length (solid blue trace) and frequency (solid red trace) feedback and corresponding RMS filtered by the infrared RC cavity pole for ALPS IIc.

It is apparent that the actuation on the piezo increases the out-of-loop noise. We believe this noise is due to differential changes of the optical path length inside the cavity for 532 nm and 1064 nm light. The cause of this noise will be the subject of further investigation.

In ALPS IIa we demonstrated a control loop actuating on the length of a 9.2 m cavity. This system will be capable of maintaining the length stability at a level below the requirements in the ALPS IIc environment. A customized, high bandwidth length actuator that moves a 50.8 mm mirror with a control bandwidth of 4 kHz was an essential component of this work. The discovery of the additional out-of-loop noise indicates that it might be necessary to change the control concept for ALPS II such that the PC length will be actuated on. The PC length sensing will be done with infrared light, which avoids differential effects for the green and infrared eigenmodes. Furthermore, an out-of-loop measurement of the differential length noise of the RC with feedback to the laser frequency confirmed that the length stability requirements of ALPS IIc should be maintained over time scales of at least 1000 s. If the out-of-loop noise is not reduced in the future, it is an option to open a shutter in the light tight wall roughly every 1000 s to ensure that the resonance condition for infrared light is still met. This would of course require a thorough characterization of the ALPS IIc RC with the shutter open, to ensure that over the time scales that the shutter is closed we can be confident that the cavities are dually resonant. In this case ALPS IIc could be set up without a dedicated seismic isolation system for the cavity mirrors, as measurements of the seismic noise environment in ALPS IIc are similar to those taken in ALPS IIa [18]. In addition, the finesse of the ALPS IIa RC was measured to be 101,300 ±500, with a storage time of 1.99 ±0.01 ms. These results are comparable to experiments that employ long baseline, high finesse optical cavities, such as gravitational wave detectors [19, 20], filter cavities for non-classical light [21] and vacuum magnetic birefringence experiments [22]. These results represent a major advance for ALPS II over the previous work [8]. The next steps will be towards the identification of the out-of-loop noise sources and their mitigation.

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations: ALPS: Any Light Particle Search; EOM: electro-optic modulator; PC: production cavity; PDH: Pound-Drever-Hall; RC: regeneration cavity; SHG: second harmonic generation.

References:
Patrignani C, et al. (Particle Data Group). Review of Particle Physics. Chin Phys C. 2016; 40(10):100001.
Meyer M, Horns D, Raue M. First lower limits on the photon-axion-like particle coupling from very high energy gamma-ray observations. Phys Rev D. 2013; 5:035027.
Giannotti M, Irastorza I, Redondo J, Ringwald A. Cool WISPs for stellar cooling excesses. J Cosmol Astrop Phys. 2016; 5:57.
Peccei RD, Quinn HR. CP Conservation in the Presence of Pseudoparticles. Phys Rev Lett. 1977; 38:1440.
Abbott LF, Sikivie P. A cosmological bound on the invisible axion. Phys Lett B. 1983; 1:133–6.
Bähre R, Döbrich B, Dreyling-Eschweiler J, Ghazaryan S, Hodajerdi R, Horns D, Januschek F, Knabbe E-A, Lindner A, Notz D, Ringwald A, von Seggern JE, Stromhagen R, Trines D, Willke B. Any light particle search II - Technical Design Report. J Inst. 2013; 8(9):T09001.
Hoogeveen F, Ziegenhagen T. Production and detection of light bosons using optical resonators. Nucl Phys B. 1991; 358:3–26.
Spector AD, Põld JH, Bähre R, Lindner A, Willke B. Characterization of optical systems for the ALPS II experiment. Opt Express. 2016; 24:29237–45.
Frede M, Schulz B, Wilhelm R, Kwee P, Seifert F, Willke B, Kracht D. Fundamental mode, single-frequency laser amplifier for gravitational wave detectors. Opt Express. 2007; 15(2):459–65.
Drever RWP, Hall JL, Kowalski FV, Hough J, Ford GM, Munley AJ, Ward H. Laser phase and frequency stabilization using an optical resonator. Appl Phys B. 1983; 31(2):97–105.
Black ED. An introduction to Pound-Drever-Hall laser frequency stabilization. Am J Phys. 2001; 69(1):79–87. https://doi.org/10.1119/1.1286663.
Bush Z, Barke S, Hollis H, Spector AD, Hallal A, Messineo G, Tanner DB, Mueller G. Coherent detection of ultraweak electromagnetic fields. Phys Rev D. 2019; 99:022001.
Dreyling-Eschweiler J, Bastidon N, Döbrich BD, Horns D, Januschek F, Lindner A. Characterization, 1064 nm photon signals and background events of a tungsten TES detector for the ALPS experiment. J Mod Opt. 2015; 62(14):1132–40.
Põld JH, Grote H. ALPS II – design requirement document. 2019. Internal note D00000008263751.
Spector AD, for the ALPS collaboration. ALPS II status report. 2019. arXiv:1906.09011.
Isogai T, Miller J, Kwee P, Barsotti L, Evans M. Loss in long-storage-time optical cavities. Opt Express. 2013; 21(24):30114–25.
Mueller CL, Arain MA, Ciani G, DeRosa RT, Effler A, Feldbaum D, Frolov VV, Fulda P, Gleason J, Heintze M, Kawabe K, King EJ, Kokeyama K, Korth WZ, Martin RM, Mullavey A, Peold J, Quetschke V, Reitze DH, Tanner DB, Vorvick C, Williams LF, Mueller G. The advanced LIGO input optics. Rev Sci Instrum. 2016; 87:014502.
Miller D. Seismic noise analysis and isolation exemplary shown for the ALPS experiment at DESY. PhD thesis. Hannover: Leibniz Universität; 2019.
The LIGO Scientific Collaboration. Advanced LIGO. Classical Quant Grav. 2015; 32:074001.
Sato S, Miyoki S, Ohashi M, Fujimoto M, Yamazaki T, Fukushima M, Ueda A, Ueda K, Watanabe K, Nakamura K, Etoh K, Kitajima N, Ito K, Kataoka I. Loss factors of mirrors for a gravitational wave antenna. Appl Opt. 1999; 38:2880–5.
Evans M, Barsotti L, Kwee P, Harms J, Miao H. Realistic filter cavities for advanced gravitational wave detectors. Phys Rev D. 2013; 88(2):022002.
Della Valle F, Milotti E, Ejlli A, Gastaldi U, Messineo G, Piemontese L, Zavattini G, Pengo R, Ruoso G. Extremely long decay time optical cavity. Opt Express. 2014; 22:11570–7.

The authors would like to thank the other members of the ALPS collaboration for valuable discussions and support, especially Axel Lindner and Benno Willke. This work would not have been possible without the wealth of expertise and hands-on support of the technical infrastructure groups at DESY.
Funding: Deutsche Forschungsgemeinschaft (DFG) (SFB 676); Volkswagen Stiftung.

Author information: Jan H. Põld - Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Callinstraße 38, Hannover, 30167, Germany, and Leibniz Universität Hannover, Callinstraße 38, Hannover, 30167, Germany. Aaron D. Spector - Deutsches Elektronen-Synchrotron (DESY), Notkestraße 85, Hamburg, 22607, Germany.

Contributions: JP and AS designed and conducted the experiment. Both authors read and approved the final manuscript. Correspondence to Jan H. Põld.

Cite this article: Põld, J.H., Spector, A.D. Demonstration of a length control system for ALPS II with a high finesse 9.2 m cavity. EPJ Techn Instrum 7, 1 (2020). https://doi.org/10.1140/epjti/s40485-020-0054-8

Keywords: Optical resonators; Precision interferometry; Axion searches
Divide A Rectangle Into Thirds

He shaded a fraction of one rectangle and a fraction of the other rectangle. You need to find the area of the largest rectangle found in the given histogram (a stack-based sketch of this appears below). Let this value be \(z\). The rectangle with area 5 is sharing its length with the rectangle with unknown area. In simple geometric terms, the golden rectangle is formed by dividing a square in half and using the diagonal of the half square, extended out, to form a rectangle with a ratio of 1:1.618 (the golden ratio). For example, let's divide 178 by 3 using long division. The Third Compendium is the title given to the collection of manuscripts containing the third phase of writings regarding the Holy Rectangle. Write the name of the fractional unit on the line below the shape. You are done. The algorithm then divides the solution space into smaller hyper-rectangles, using, for example, the size of the hyper-rectangle. Question from Darlene, a parent: A farmer has 10,000 meters of fencing to use to create a rectangular field. In this lesson students will identify and partition equal shares of a rectangle. Divide one wedge into thirds equally, cutting from the edge toward the center. Shade one part. This is a great geometric proof. What is the area, in square meters, of the third section of Phoebe's garden? If you want to divide your canvas into 3 equal columns using Guides, go into the View menu and choose "New Guide…"; in the New Guide dialog box enter "33.33%" in the position field and press OK. You now have three columns of equal size. The length of the rectangle diagonal is 20 cm. Let this value be \(w\). If we set up an equation for each rectangle, we get the following: bottom left rectangle: \(xy = 3\); bottom right rectangle: …
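For the largest-rectangle-in-a-histogram task mentioned above, the standard O(n) stack-based approach looks like this (the bar heights in the demo are arbitrary):

```python
# Largest rectangle in a histogram via a monotonic stack of bar indices.
def largest_rectangle(heights):
    stack = []   # indices of bars with increasing heights
    best = 0
    for i, h in enumerate(heights + [0]):   # sentinel 0 flushes the stack
        while stack and heights[stack[-1]] >= h:
            top = stack.pop()
            height = heights[top]
            width = i if not stack else i - stack[-1] - 1
            best = max(best, height * width)
        stack.append(i)
    return best

print(largest_rectangle([2, 1, 5, 6, 2, 3]))   # 10 (the bars of height 5 and 6)
```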
Question from Darlene, a parent: A farmer has 10,000 meters of fencing to use to create a rectangular field. He plans on using some of the fencing to divide the rectangular field into two plots of land by constructing a fence inside the rectangle that is parallel to one of the sides. In addition to the enclosing fence, another fence divides the field into two parts, running parallel to two of the sides. A related problem: if it takes 294 yards of fencing to enclose the field and divide it into the two parcels, find the dimensions of the field.

Divide by counting equal groups. For example, if you use the MEASURE command to divide a line of length 9 units into segments of 2 units, the line will be divided into four equal parts of 2 units each, but the last segment will be only 1 unit long (a short sketch of this behavior appears below).

A pair of intersecting rectangles is usually found several times. Four such identical hyper-pyramids can be assembled into a hyper-cube.

Disclosed is a folding collapsible rectangular storage box in which the box body has a flexible bottom panel, two short upright peripheral panels and two long upright peripheral panels extending perpendicularly upward from the four sides of the rectangular bottom panel, the two long panels each having a vertically extending folding line.

Opposite angles formed at the point where the diagonals meet are congruent. A rectangle is a quadrilateral polygon with 4 sides and 4 right angles.

But if, as I interpreted it, you are just trying to divide a rectangle into 7 different equally sized sections: here is a proof of why there is no solution for n = 6.

STEP 2: Model the rectangle cut into 1/3-size parts. Find the area of a rectangle by dividing it into two smaller rectangles. We can choose x such that \(4x^2 = 4yz\), so that the area of the rectangle equals the area of the square.

You can divide an edge into multiple equal segments by right-clicking on it and choosing Divide; check "Into equal parts" and write the number of parts you want to split into. You now have three columns of equal size.

Re-cropping to use the rule of thirds can improve composition, though this is an example of breaking the rule of thirds that works.

To fold a letter into thirds along its 11 inch (27.9 cm) side, simply divide 11 by 3. This video reminded me of this nice little hack, so I'm passing it along.

One section is a square with a side of 7 meters and the second section is a square with a side length of 5 meters; what is the area, in square meters, of the third section of Phoebe's garden?

So the shaded area represents one half of the big rectangle in both cases. Now divide each whole rectangle into five equal parts to illustrate.

This prompt is enough for someone to say, "Draw a 4 by 3 rectangle!" Bingo! I'm drawing these with you. Start by following the basic process for fraction multiplication, turning any mixed fractions into improper ones. What fractional part is not shaded?
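A small sketch of the DIVIDE-versus-MEASURE distinction described above (equal parts versus fixed-length segments plus a shorter remainder). The function names are mine; this only mimics the CAD behavior quoted in the text, it is not AutoCAD's API:

```python
def divide(length, n):
    """DIVIDE-style: n equal parts."""
    return [length / n] * n

def measure(length, segment):
    """MEASURE-style: fixed-length segments plus a shorter remainder."""
    full = int(length // segment)
    parts = [segment] * full
    remainder = length - full * segment
    if remainder:
        parts.append(remainder)
    return parts

print(divide(9, 3))   # [3.0, 3.0, 3.0]
print(measure(9, 2))  # [2, 2, 2, 2, 1]: four 2-unit parts, one 1-unit leftover
```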
If you begin with a single small golden rectangle, and attach to it a square with its sides equal in length to the long side of the rectangle, you get a new rectangle with the same proportions as the first, albeit larger. Adding a square equal to the length of the longest side of the rectangle gets you increasingly closer to a Golden Rectangle and the Golden Ratio; a numeric sketch of this convergence appears below.

The triangular faces of a rectangular pyramid which are not the rectangular base are called lateral faces and meet at a point called the vertex or apex.

Videos, examples, lessons, songs, and solutions help Grade 2 students learn to partition a rectangle into rows and columns of same-size squares and count to find the total number of them.

Remember that a rectangle is a type of parallelogram, so rectangles get all of the parallelogram properties. Since the diagonals of a rectangle are congruent, MO = 26, and because the diagonals bisect each other, MZ = ½(26) = 13.

Dividing a Square Cake into Five Equal Pieces [07/28/2001]: How can you divide a square-topped cake that is a rectangular solid and is frosted on all faces into five pieces so that everyone receives the same amount of cake and icing? Dividing a Square in Thirds [03/27/2001].

Cut each equal part in half. Students color parts to illustrate fractions, write fractions from visual models and from number lines, and learn to draw pie models for some common fractions.

To find the area of a rectangle, multiply the length by the width: 4 mm × 6 mm = 24 mm², hence the formula A = lw.

The length of the rectangle's diagonal is 20 cm; the perimeter of the same rectangle is 52 cm.

C-42 Three Midsegments Conjecture: the three midsegments of a triangle divide it into four congruent triangles.

Many of the quotients in this program can be arrived at by inspecting the images. I set it to 6 equal parts, but using the Divide tool only makes two halves of the circle.

In long division we divide, multiply, subtract, bring down the digit in the next place value position, and repeat.

TCS Numerical Ability question: a rectangle is divided into four parts of different-sized rectangles.

Cut your own rectangle in half, and demonstrate that if the halves are equal, they will be able to stack neatly on top of each other.

Interpret whole-number quotients of whole numbers: for example, interpret 56 ÷ 8 as the number of objects in each share when 56 objects are partitioned equally into 8 shares, or as the number of shares when 56 objects are partitioned into equal shares of 8 objects each.

For example, we can let a rectangle represent one whole, and then divide it into equal parts as shown below. The Split Into Grid command lets you divide one or more objects into multiple rectangular objects arranged in rows and columns.

Using the multiplication principle to multiply each side of an equation by 1/2 is the same as dividing each side of the equation by 2.

The central rectangle on the far right is potentially optimal. Rectangles have four sides and four right angles. Fold and unfold a sheet of paper up and down.

Divide a rectangular image into a fixed number of random-sized rectangles plus one fixed-size rectangle. We can cut the prism into layers, each of length 1 cm.
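A quick numeric look at the square-attaching process mentioned at the start of this section. This is a minimal sketch under the usual assumption that attaching a square turns a short-by-long rectangle into a long-by-(short + long) one, which is the Fibonacci recurrence; the side ratio then approaches the golden ratio, about 1.618:

```python
short, long = 1.0, 1.0  # start from a square
for step in range(12):
    short, long = long, short + long  # attach a square on the long side
    print(f"step {step + 1}: {long:.0f} x {short:.0f}, ratio = {long / short:.6f}")
# the printed ratio converges to (1 + 5 ** 0.5) / 2 = 1.6180339887...
```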
The third step is an online node insertion procedure; its objective is to preserve optimality when searching for paths in the symmetry-reduced grid map.

A rectangular coil with resistance R has N turns, each of length ℓ and width w, as shown in Figure P31. The polygons in this feature class represent the smallest division to the sixteenth that has been defined for the first division.

A chip part according to the present invention includes a substrate having a penetrating hole and a pair of electrodes formed on the front surface of the substrate.

We've completed dividing the paper into fifths on the horizontal side! Divide Square Paper into Fifths, Step 9: fold G-H to I-J. Step 10: let's work on the vertical side.

Let's consider the following situation: here's their tent, which is in the shape of a pyramid.

Hmmm… what dimensions should our rectangles have so it's easy to divide into fourths and thirds? This is a lesson for 3rd grade math about the concept of a fraction. I can draw rows and columns of equal size in a rectangle. Thirds: 1/3 is blue, 1/3 is yellow, and 1/3 is green. Recognize that equal shares of identical wholes need not have the same shape. Take some time to figure out why; even better, find a reason that would work on a nine-year-old.

The root-3 rectangle is also called the sixton [5, 6], and its short and longer sides are proportionally equivalent to the side and diameter of a hexagon. Overlay a square fundamental domain for the larger torus to get a way to divide a square into 5 smaller squares.

Step 4: Measure the length of the rectangle along your base line, and the height of your rectangle along the perpendicular line you constructed, and mark with points. Step 5: Using the second point on your base line, repeat step 3 to find your third side.

Each minute is further divided into 60 equal parts called seconds; for instance, 2 degrees 5 minutes 30 seconds is written 2° 5' 30". (The value 3.14 refers to pi, a mathematical constant.)

A rectangle can also be divided into three parts this way. Have you ever tried to divide a piece of paper into thirds? It's difficult.

To counteract symmetry, the "Rule of Thirds" can follow two concepts: first, we can divide the image into two distinctive areas which cover 1/3 and 2/3 of the size of the picture.
A rectangle has area 18 square centimeters. Since the area of the rectangle is equal to its length multiplied by its width (A = lw), and the area of the rectangle is given, the equation lw = 18 must be true.

Write a sentence telling what "equal parts" means. Use a drawing to represent the portion of the playground that is the play structure. Name the fractional unit, and then count and tell how many of those units are shaded. Partition the circle into three equal shares. How many small squares did you make?

You don't have to pre-select the edge to get the option to divide it, but you can; the menu items in the context menu depend on the entity you right-click. You can continue to work from the points on the circle for each of these slices and divide the circle as many times as you need.

(This works for 8.5 inch × 11 inch paper as well.) Fold the paper in half on the diagonal axis.

For irregular-shaped pavements and slabs, create a scale drawing of the project. For fans with circular outlets, the outlet shall be divided into 8 equal sectors by vertical, horizontal and 45° lines.

Multiply to find the area of a rectangle made of unit squares; from there, students can count the total number of tiles inside the rectangle to find the answer.

Logically I would do this by dividing the slide into quarters, creating the content on one quarter, then copying the content to the other three quarters; I'm doing this to create quarter-page-sized marketing handouts.

But each rectangle is a third part of the whole; therefore those two rectangles together are two third parts of the whole.

The bars are placed in the exact same sequence as given in the array.

A yellow equilateral triangle can be divided into thirds: when all the angles are bisected and the bisectors meet in the centre, we get three obtuse-angled isosceles triangles.

The outside fence costs $10 per running foot installed, and the dividers cost $20 per running foot installed.

A total of fifteen pennies are put into four piles so that each pile has a different number of pennies.

Take piece A and place it on the sheet labeled Halves and Not Halves of Rectangles according to the label.

To find the diagonal, square the width and height of the rectangle and add the squared values, then take the square root of the sum; a worked sketch follows below. The rectangular form uses horizontal and vertical components. Rectangle 3-4-5: the sides of the rectangle are in a ratio of 3:4, and the diagonal completes the 3-4-5 right triangle.
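A worked version of that diagonal rule (my own sketch; it is simply the Pythagorean theorem applied to the right triangle formed by the width, the height, and the diagonal):

```python
import math

def diagonal(width, height):
    """Diagonal of a width x height rectangle via the Pythagorean theorem."""
    return math.sqrt(width ** 2 + height ** 2)

print(diagonal(3, 4))    # 5.0, the classic 3-4-5 right triangle
print(diagonal(12, 16))  # 20.0, a 3:4 rectangle with a 20 cm diagonal
```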
Which of the following correctly models and gives the quotient of 2/3 ÷ 1/6? A rectangle model divided into 6 equal sections, with one section labeled one-sixth and colored dark, then the next section, and so on; a short check with exact fractions appears after this passage.

Two of the three rectangles have now been shaded. Let that long rectangle represent 3, and let us divide it into thirds, that is, into three equal parts, and let us shade one of them: 1 out of 3.

Click on the "Divide slice" option and a "Divide slice" box will open.

Partition circles and rectangles into two, three, or four equal shares; describe the shares using the words halves, thirds, half of, a third of, and so on.

We already know how to divide a decimal by a whole number, and it was pretty easy. So let's just turn these problems into the easy kind! How do we do that? It looks like it's just a trick, but I'll show you why it works.

Ask the students which rectangle has been divided into parts that are the same size and shape. We will look at pictures of partitioned shapes. Shade three parts.

If a straight line divides the sides of a triangle, or those sides produced, into proportional segments, it is parallel to the remaining side.

If I have a rectangle divided into 10 sections, how do I shade in 4/5 of the shape? (Shade 8 of the 10 sections, since 4/5 = 8/10.)

This rectangle is a cube, therefore division is identical to that of the initial sampling phase; we obtain a 4-dimensional figure.

Some methods are easier than others; the method shown here is fairly easy but will leave 2 crease marks.

But often, bringing fractions into the process is, well, completely unnecessary.

How many rectangles with different shapes satisfy these conditions? Solution: using cubes from 2 to 30, students can make only one rectangle with 2, 3, 5, 7, 11, 13, 17, 19, 23, and 29 (10 possibilities).

If you show 3 × 4 by placing four items into each of 3 sections (3 groups of 4), you see that you have 12 in all, and you also see that 12 divided into 3 groups gives you 4 in each group.

This is a circle divided into two pieces, but they're not equal, so this isn't divided into halves. Remember that a fraction is the number of shaded parts divided by the number of equal parts.

Illustrator will take any object and split it into a specified number of equal-sized rectangles. Dividing a rectangle into 4 parts in the ratio 1:2:3:4 can be done with only 2 lines.

To solve the example, we will need to define the length in terms of the width: 160 = (2 × 50) + 2w, so 160 = 100 + 2w; subtract 100 from each side to get 60 = 2w, and divide both sides by 2 to get w = 30.

Pat the dough into a rectangle about 12 by 6 inches and about 1/2 inch thick.
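As a quick check of the 2/3 ÷ 1/6 quotient modeled above, using Python's exact-arithmetic fractions module (the snippet is mine, not part of the worksheet):

```python
from fractions import Fraction

quotient = Fraction(2, 3) / Fraction(1, 6)
print(quotient)  # 4: there are four 1/6-pieces in 2/3

# invert-and-multiply gives the same result: (2/3) * (6/1) = 12/3 = 4
print(Fraction(2, 3) * Fraction(6, 1))  # 4
```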
Divide the rectangle into thirds. This rectangle idea is reinforced by talking about fractions in terms of what a fraction means. Divide each whole using a different fractional unit, and round up to the nearest hundredth where needed.

Each title is divided into chapters which usually bear the name of the issuing agency.

To find the volume of a composite solid, separate the solid into rectangular prisms (see the dotted line in the figure). Step 2: use l × w × h to find the volume of each prism. Step 3: add the volumes of each prism; in the example shown, the volume of the solid is 28 + 16 = 44 in³.

Rearrange the new equal shares to create different polygons. Another way to look at the solution is as a sum of parts.

You can approximate this by dividing one half of the cake into three equal pieces.

This helps learners visualize and create the area of a figure. To find the area of a rectangle, multiply the length by the width. (Directions made using HTML5 canvas and JavaScript.)

The way you divide fractions is very similar to the way fractions are multiplied, with a simple twist in the middle.

Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts.

In each case, the big rectangle has been divided into two equal pieces, one shaded and one unshaded. If it is divided into more, they are not independent and can be represented in terms of other components.

Roll the dough into a rectangle and again fold into thirds, finishing the second turn.
When a root-N rectangle is divided into N congruent rectangles by dividing the longer edge into N segments, the resulting figures keep the root-N proportion (as illustrated above).

This is a great technique for dividing paper (or anything else) into equal parts. I learned this trick some years ago from a workshop instructor and have used it for years.

To cut a cube into 64 pieces, note that 4 × 4 × 4 = 64, so you need 4 parts on each side, which takes 3 cuts in each of the three directions.

Okay, here's the "why": following Robert Larson and HSB, draw lines between opposite corners; this gives four triangles all the same area. Dividing one side of the circle will automatically give you the correct line for the opposite pie slice in the circle. Draw a circle on a horizontal line, and bisect it.

Use this maths mastery teaching pack to deepen year 4 children's understanding of how to calculate the perimeter of rectangles, including adding all sides together, or adding the length and width together and then multiplying by 2.

Remove from the fridge and unwrap the dough rectangles. Step 3: grate the remaining frozen butter over the bottom two thirds of the dough.

Represent the problem with multiplication, then distribute the variable into the parentheses.

Right-clicking on one of the sides of a rectangle will allow you to add marks and text to the middle of edges, including the ability to automatically mark parallel sides or equal-length sides.

Each colored box is 1/10 of the total. A monitor can be divided into 9 sections by 2 vertical and 2 horizontal lines.

If they end up with very unequal halves, encourage them to stick the two halves back together and try again.

The fraction 3/4 means that we are to take some thing (the above rectangle), divide it into 4 equal parts (which has been done horizontally), and do something with 3 of those 4 equal parts. Three-fourths divided by two-thirds.

Calculate the MAD of this data set: 6, 10, 5, 14, 10. Quartiles divide an ordered data set into equal parts.

In the third picture, each side of the rectangle has been divided in half, but the small rectangle does not represent one half of the area of the large rectangle. We will only illustrate one below.
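A minimal numeric check of the root-N property quoted at the start of this passage (my own sketch): a root-N rectangle has sides 1 and sqrt(N); cutting the longer edge into N equal segments gives pieces of size 1 by sqrt(N)/N, which is again a 1 : sqrt(N) shape.

```python
import math

for n in (2, 3, 4, 5):
    long_side, short_side = math.sqrt(n), 1.0
    piece_long, piece_short = 1.0, long_side / n  # each piece is 1 x (sqrt(n)/n)
    print(n, round(long_side / short_side, 6), round(piece_long / piece_short, 6))
    # both printed ratios equal sqrt(n): the pieces keep the root-N proportion
```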
The Third Compendium is preceded by the First Compendium and the Second Compendium.

The partition of a rectangle into squares problem: if there were such a piece, then the remaining n − 1 pieces would form a sub-rectangle of the figure.

Can you fold a square into a square of one-third the area?

Now you have 6 shaded parts out of 8 equal parts in the whole.

For keyframes, typing +=3 adds 3 to the selected keyframes, while typing 3 sets the value to exactly 3.

In the simplest box plot, the central rectangle spans the first quartile to the third quartile (the interquartile range, or IQR).

Example: Split the rectangle into 2 rows and 4 columns; a short sketch of the computation appears below.

Quadrilaterals: the diagonal of a quadrilateral, types of quadrilaterals (parallelogram, rectangle, square, rhombus, trapezium, kite, irregular quadrilateral), the angle sum of a quadrilateral, and applying properties of quadrilaterals to solve problems.

If you take a rectangle and divide it into four equal pieces with two perpendicular lines, you have created quadrants. But what if you divide the rectangle into nine equal pieces with two sets of two mutually parallel lines: is there a name for these pieces? An example sentence would be: "The dart has struck the middle ____."

Set it up: move the decimals to the right on both numbers until you are dividing by a whole number.
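A minimal sketch of the rows-and-columns split from the example above (Illustrator's Split Into Grid does the equivalent interactively; the function name and top-left coordinate convention are mine):

```python
def split_into_grid(width, height, rows, cols):
    """Return (x, y, w, h) tuples for rows x cols equal cells, origin top-left."""
    cell_w, cell_h = width / cols, height / rows
    return [(c * cell_w, r * cell_h, cell_w, cell_h)
            for r in range(rows) for c in range(cols)]

for cell in split_into_grid(400, 200, 2, 4):
    print(cell)  # eight equal 100 x 100 cells
```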
So maybe we should think about the size of the rectangle more carefully. We refer to one side of the rectangle as the length, L, and its adjacent side as the width, W.

Divide a square or rectangle into 3rds, 5ths or 7ths. Make sure you select the correct base and height lengths for each rectangle.

Figure-4D is the same rectangle as Figure-4C with the construction lines erased to make it more understandable.

Students must choose the shape that is divided into equal parts, or identify the fractional pieces of the given shape, such as halves, thirds, fourths, fifths, etc. In this picture the entire rectangle has been divided into 12 equal-sized pieces. An interactive math lesson about fourths.

The third point need not be on the rectangle. The x-axis and y-axis divide a rectangular coordinate system into four areas, called quadrants.

How do you divide a line or circle into equal segments? I'm sure I've done this before, but I can't remember how to divide a circle and a line into equal segments using CorelDraw X3.

Divide the butter in half, then cut half the softened butter into slices 1/4 inch thick.

This is a collection of articles I found on the web explaining how to divide paper into different equal parts.

Here is the issue: my mother and my brother both want to keep the condo for a couple of reasons; they both hope the market will bounce back and that waiting would yield a higher sale price.
You can also "drop a diagonal" whereby you create a line segment divided into as many divisions as needed using perpendicular segments (ex: 17 inch line for 17 divisions), group all that and rotate that divided line until each endpoint hits the outer lines of the shape to be divided. Since the perimeter is just the distance around the rectangle, we find the sum of the lengths of its four sides—the sum of two lengths and two widths. The Third Compendium is divided into two parts: "Lore and Research" and "Rules". Partition of a rectangle into squares problem. Third, would be, monumental works, such as the pyramids, great tombs in the Valley of the Kings, and vast palaces. EASY BRAINLIEST! Mia plans to build a fence to divide her rectangular garden into two triangular areas. divide the x-y plane over which these rectangles exist into the minimum number of rows plus columns, such that each resulting cell intersects at most one rectangle. The fraction 3/4 means that we are to take some thing (the above rectangle), divide it into 4 equal parts (which has been done horizontally) and do something with 3 of those 4 equal parts. _____ fourths. I decided to dive deeper into the subject of mechanical construction and dividing a straight line into equal segments. A rectangular pyramid is a three-dimensional object with a rectangle for a base and a triangular face coresponding to each side of the base. Use the table above to calculate how many cubic yards of concrete would be needed for each rectangle in the series. Rotate dough a quarter turn to the right. You can continue to work from the points on the circle for each of these slice and divide the circle as many times as you need. This is the second side of your rectangle shape. Diagonal of Parallelogram Formula. Once placed, Rectangle Sketch Tool markups behave very much like Rectangle markups. If there were such a piece, then the remaining n-1 pieces would form a sub-rectangle of the figure. A diagram is shown below (not to scale). The third strip gets divided into three sections, by measuring out three 4" sections. He needs to lay a piece of pipe that will run along the diagonal of the lawn. A rectangular area is to be enclosed and divided into thirds. Alternatively, if the rectangle is expanded into the third dimention, it could be a brick. 3j Both of the statements above are equal. I can draw rows and columns of equal size in a rectangle. The rectangle with area 4 is sharing its width with the rectangle with unknown area. So it's not thirds. Each shape is 1 whole. ,divide,add,and subtract fractions, Step by step instructions on how to do algrebra using Operations, multiplying and dividing with negative scientific notation, maths calculator fractions into decimals. C-42 Three Midsegments Conjecture - The three midsegments of a triangle divide it into four congruent triangles. Directions made using HTML5 canvas and JavaScript. 1 whole circle can be divided into three equal parts. To find the area of a rectangle, multiply the length by the width. Be sure each part is the same size. Divide the rectangle into thirds. Color the rest green. Use the third line to draw a parallel line to any of the sides but not passing through the centre. Rectangle Picture. The unit is 1 fourth. We can cut the prism into layers, each of length of 1 cm. If you show 3 x 4 by placing four items into each of 3 sections (3 groups of 4), you see that you have 12 in all, and you also see that 12 divided into 3 groups gives you 4 in each group. 
The rule of thirds is a "rule of thumb" or guideline which applies to the process of composing visual images such as designs, films, paintings, and photographs. You could also divide the vertical space of the frame into thirds; magazine covers work well for practice.

When we want to find the maximum value of something, we take the derivative and find the critical points. Let's do that: A' = 225 − 2W.

Polar vs Rectangular: the two different types of notation used in the previous examples were polar and rectangular.

Each shape is a whole divided into equal parts; be sure each part is the same size. Enter mixed numbers with a space.

This means the folds should be 3 2/3 inches apart.

Article sources: How to divide a square into Thirds [PDF], Darren Scott; How to divide a square into Thirds (3 methods) [Video on YouTube], Sara Adams; How to divide a square into Fifths [PDF], Darren Scott.

"Divide a square into 7 triangles. One triangle must have edge lengths in the ratio 3:4:5. The other 6 triangles must be arrangeable into another square!" He asks if the same task can be done with fewer than seven triangles. Solutions were found by Wei-Hwa Huang, Junk Kato, Andrew Cook, and Livio Zucca.

Model the rectangle cut into 1/2-size parts; the rectangle should have two halves. Divide the Circle in Ten, Step 1.

Unlike in the previous task, there are no given dimensions for any of the rectangles. We use the formula A = lw. To determine the area of a square, we could use the rectangle formula, or we can use a special formula: A = s².

Students will partition rectangles into halves, thirds, and fourths. The denominator of 5/6 is 6. Make a template with another piece of paper, the same size as the paper used for the model.
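A worked check of that critical-point step. The quoted derivative A' = 225 − 2W corresponds to an area function A(W) = 225W − W²; that back-integration is my assumption, since the original setup of the problem was lost in extraction:

```python
# A(W) = 225*W - W**2, so A'(W) = 225 - 2*W
W_critical = 225 / 2            # set A'(W) = 0  ->  W = 112.5
A_max = 225 * W_critical - W_critical ** 2
print(W_critical, A_max)        # 112.5, 12656.25

# numeric confirmation that nearby widths give smaller areas
for w in (112.0, 112.5, 113.0):
    print(w, 225 * w - w * w)
```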
Run a vertical line through each of those points to divide the rectangle into three equal sections, and label the three sections 1/3, 2/3, and 3/3.

Lightly roll the dough into a 16 x 8 inch rectangle.

Estimate to divide each shape into equal parts. What fractional part is not shaded? Make three squares and snap them next to each other.

Multiply denominators: 4 x 4 = 16; divide that result into 640 to get 40.

How to divide the edge into Fifths (guesstimation method) [PDF], Anna Kastlunger; How to divide a square into Fifths (3 methods) [Video on YouTube], Sara Adams; How to divide a square into Sevenths [PDF], Darren Scott; How to fold angles of 30 and 60 degrees, Ian Harrison; How to Trisect an angle.

For example, let's divide 178 by 3 using long division: we divide, multiply, subtract, bring down the digit in the next place value position, and repeat; a short sketch follows below.
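A minimal digit-by-digit sketch of that long-division procedure (my own implementation, not from the lesson):

```python
def long_division(dividend, divisor):
    """Digit-by-digit long division; returns (quotient, remainder)."""
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)      # bring down the next digit
        quotient = quotient * 10 + remainder // divisor  # divide
        remainder = remainder % divisor                  # multiply and subtract
    return quotient, remainder

print(long_division(178, 3))  # (59, 1): 178 / 3 = 59 remainder 1
```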
When a ribbon is divided into 3 equal parts, the parts are longer than when the ribbon is divided into 5 equal parts.

This requires you to draw 2 equidistant horizontal lines.

DIVIDE places points along a line, polyline, arc, or circle, dividing it into the specified number of equal parts; the B option uses a specified block to divide the object instead of a point.

To divide a rectangle into 7 parts using 3 lines: use 2 lines to draw the two diagonals, then divide it with 1 vertical line down the middle.

Create rectangles with a given area.

My own doubts about this started creeping in when I began my own investigation of how period furniture was designed in the 18th century.

Commentary: we can think of dividing binomials in multiple ways: as a quotient function via polynomial long division; as the graph of a rational function with vertical and horizontal asymptotes; or, also via long division, as the graph of (1) a horizontal line with a hole in it, or (2) a rectangular hyperbola that has been translated.

Draw a large rectangle on the board. The formula is A = L * W, where A is the area, L is the length, W is the width, and * means multiply.

Split Into Grid. Color one quarter orange and one eighth red.

That means you divide the rectangle into 6 equal parts; 1 is one third of 3. Shade three parts: this is three equal sections.
Divide P into two equal size sets based on x-coordinate and find the maximum empty rectangle within each part; then we need to consider the maximum empty rectangle that crosses the middle.

Stretch bands around the pegs to form line segments and polygons, and make discoveries about perimeter, area, angles, congruence, fractions, and more.

Divide the rectangle into quarters. Draw a picture of a candy bar with two equal halves. 1. Divide this rectangle into two equal parts. 2. Circle the word to the right that makes…

40 x $2,500 per acre = $100,000 sales price; $100,000 x 5% (0.05) = $5,000 commission.

On a lightly floured work surface, place 1 rectangle of dough and sprinkle half of the chocolate evenly over the rectangle. Fold the bottom third of the rectangle up and the top third down, as when folding a business letter, making a 5-1/2 x 8 inch rectangle.

The relationship between multiplication and division can (and probably should) be illustrated with virtually every hands-on problem every day. Each shape is 1 whole.
CommonCrawl
Flexi Quads: A quadrilateral changes shape with the edge lengths constant. Show the scalar product of the diagonals is constant. If the diagonals are perpendicular in one position, are they always perpendicular?

Flexi Quad Tan: As a quadrilateral Q is deformed (keeping the edge lengths constant) the diagonals and the angle X between them change. Prove that the area of Q is proportional to tan X.

Find the distance of the shortest air route at an altitude of 6000 metres between London and Cape Town given the latitudes and longitudes. A simple application of scalar products of vectors.

Pythagoras on a Sphere: Many thanks Andrei from Tudor Vianu National College, Bucharest, Romania for another excellent solution.

To solve the problem I have used the hint, so that all notations are from the hint. I have associated to the sphere a system of Cartesian coordinates, as shown in the sketch. Without loss of generality, I have assumed that $A$ is situated on $Oz$, and has coordinates (0, 0, 1). As $A$ is a right angle, I can assume that $B$ is situated in the plane $yOz$ and $C$ in the plane $xOz$ respectively. Let the angle $xOC$ be $u$, and the angle $yOB$ be $v$. So, the Cartesian coordinates of the three points, which correspond to the vectors OA, OB and OC, are: $$A(0, 0, 1),\ B(0, \cos v, \sin v),\ C(\cos u, 0, \sin u).$$ Arcs $AB$, $BC$ and $CA$ are arcs on the three great circles (of radius unity), so their lengths are equal to the angles at the centre in the corresponding great circle (expressed in radians). So, as shown in the figure: $$\angle BOC = a,\ \angle AOC = b = {\pi \over 2} - u, \ \angle AOB = c = {\pi \over 2} - v. \quad (1)$$ To calculate the length of arc $BC$ I use the same procedure as in the problem "Flight path". I first calculate the straight-line distance between $B$ and $C$ through the interior of the sphere: $$BC^2 = \cos^2 u + \cos^2 v + \sin^2 u + \sin^2 v - 2\sin u \sin v = 2 (1 - \sin u \sin v). \quad (2)$$ But from (1) I observe that $\sin u = \cos b$ and $\sin v = \cos c$. Using these and (2), I obtain $BC^2$: $$BC^2 = 2 (1 - \cos b \cos c). \quad (3)$$ Applying the cosine theorem in triangle $BOC$, I obtain the measure of $\angle BOC$: $$BC^2 = BO^2 + CO^2 - 2 BO\times CO \cos a = 2(1 - \cos a). \quad (4)$$ From (3) and (4) we get Pythagoras' Theorem on the sphere: $$\cos a = \cos b \cos c.$$

An alternative proof of Pythagoras' Theorem on the sphere uses scalar products as follows. Since ${\bf OA, OB}$ and ${\bf OC}$ are unit vectors, the angles between the vectors, and hence the lengths of the sides of triangle $ABC$, are given by the scalar products: $$\eqalign{ a &=\cos^{-1}{\bf OB.OC}= \cos^{-1}(\sin u \sin v)\cr b &= \cos^{-1}{\bf OA.OC}= \cos^{-1}\sin u\cr c &= \cos^{-1}{\bf OB.OA}= \cos^{-1}\sin v.}$$ Hence $$\cos a = \cos b \cos c.$$

For the second part of the problem I observe that the triangle with vertex coordinates (0, 0, 1), (0, 1, 0) and (1, 0, 0) has 3 right angles. The lengths of its sides are all $\pi/2$. Now I shall prove that all spherical triangles with 3 right angles are equilateral with side $\pi/2$. All the following relations follow from the version of Pythagoras' Theorem proved above, since the angles $A = B = C = \pi/2$: \begin{eqnarray} \cos a &= \cos b \cos c. \quad (5)\\ \cos b &= \cos c \cos a. \quad (6)\\ \cos c &= \cos a \cos b. \quad (7)\\ \end{eqnarray} Multiplying (5), (6) and (7), I obtain: $$\cos a \cos b \cos c = (\cos a \cos b \cos c)^2.$$ If $\cos a, \cos b, \cos c \neq 0$, then $\cos a \cos b \cos c = 1$.
But $-1 \leq \cos a, \cos b, \cos c \leq 1$, so $\cos a \cos b \cos c = 1$ forces either $\cos a = \cos b = \cos c = 1$, which means $a = b = c = 0$ (impossible for a triangle), or two of the cosines equal $-1$ and the third equals $1$ (for instance $\cos a = \cos b = -1$ and $\cos c = 1$, or any other combination), which is also impossible, since a side of a spherical triangle lies strictly between $0$ and $\pi$. So at least one of $\cos a, \cos b, \cos c$ is 0. But then equations (5), (6) and (7) force the other two to vanish as well: if, say, $\cos a = 0$, then (6) gives $\cos b = 0$ and (7) gives $\cos c = 0$. Hence $\cos a = \cos b = \cos c = 0$, so $a = b = c = \pi/2$, and all triangles with this property are congruent.
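A quick numerical sanity check of $\cos a = \cos b \cos c$, using the coordinates from the solution above (the angles u and v are arbitrary test values of mine):

```python
import math

u, v = 0.7, 1.1  # arbitrary angles defining C and B on the unit sphere
A = (0.0, 0.0, 1.0)
B = (0.0, math.cos(v), math.sin(v))
C = (math.cos(u), 0.0, math.sin(u))

def side(p, q):
    """Spherical side length = central angle between unit vectors p and q."""
    return math.acos(sum(pi * qi for pi, qi in zip(p, q)))

a, b, c = side(B, C), side(A, C), side(A, B)
print(math.cos(a), math.cos(b) * math.cos(c))  # the two values agree
```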
CommonCrawl
Total Debt-to-Capitalization Ratio: Definition and Calculation

Will Kenton is an expert on the economy and investing laws and regulations. He previously held senior editorial roles at Investopedia and Kapitall Wire and holds an MA in Economics from The New School for Social Research and a Doctor of Philosophy in English literature from NYU.

What Is the Total Debt-to-Capitalization Ratio?

The total debt-to-capitalization ratio is a tool that measures the total amount of outstanding company debt as a percentage of the firm's total capitalization. The ratio is an indicator of the company's leverage, which is debt used to purchase assets. Companies with higher debt must manage it carefully, ensuring enough cash flow is on hand to cover principal and interest payments on debt. Higher debt as a percentage of total capital means a company has a higher risk of insolvency.

The Formula for the Total Debt-to-Capitalization Ratio Is

$$\text{Total debt to capitalization} = \frac{SD + LTD}{SD + LTD + SE}$$

where $SD$ is short-term debt, $LTD$ is long-term debt, and $SE$ is shareholders' equity.

What Does the Total Debt-to-Capitalization Ratio Tell You?

Every business uses assets to generate sales and profits, and capitalization refers to the amount of money raised to purchase assets. A business can raise money by issuing debt to creditors or by selling stock to shareholders. The amount of capital raised is reported in the long-term debt and stockholders' equity accounts on a company's balance sheet. The total debt-to-capitalization ratio is a solvency measure that shows the proportion of debt a company uses to finance its assets, relative to the amount of equity used for the same purpose. A higher ratio result means that a company is more highly leveraged, which carries a higher risk of insolvency.

Examples of the Total Debt-to-Capitalization Ratio in Use

Assume, for example, that company ABC has short-term debt of $10 million, long-term debt of $30 million and shareholders' equity of $60 million. The company's debt-to-capitalization ratio is calculated as follows:

$$\frac{\$10\text{M} + \$30\text{M}}{\$10\text{M} + \$30\text{M} + \$60\text{M}} = 0.4 = 40\%$$

This ratio indicates that 40% of the company's capital structure consists of debt.

Consider the capital structure of another company, XYZ, which has short-term debt of $5 million, long-term debt of $20 million and shareholders' equity of $15 million. The firm's debt-to-capitalization ratio would be computed as follows:

$$\frac{\$5\text{M} + \$20\text{M}}{\$5\text{M} + \$20\text{M} + \$15\text{M}} = 0.625 = 62.5\%$$

Although XYZ has a lower dollar amount of total debt compared to ABC ($25 million versus $40 million), debt comprises a significantly larger part of its capital structure. In the event of an economic downturn, XYZ may have a difficult time making the interest payments on its debt, compared to firm ABC. The acceptable level of total debt for a company depends on the industry in which it operates. While companies in capital-intensive sectors such as utilities, pipelines, and telecommunications are typically highly leveraged, their cash flows have a greater degree of predictability than companies in other sectors that generate less consistent earnings.
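The calculation is simple enough to script; here is a minimal sketch reproducing both worked examples (the function and variable names are mine, not Investopedia's):

```python
def debt_to_capitalization(short_debt, long_debt, equity):
    """(SD + LTD) / (SD + LTD + SE), returned as a fraction of total capital."""
    total_debt = short_debt + long_debt
    return total_debt / (total_debt + equity)

# figures in $ millions, from the ABC and XYZ examples above
print(f"ABC: {debt_to_capitalization(10, 30, 60):.1%}")  # ABC: 40.0%
print(f"XYZ: {debt_to_capitalization(5, 20, 15):.1%}")   # XYZ: 62.5%
```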
A combined experimental and numerical study on upper airway dosimetry of inhaled nanoparticles from an electrical discharge machine shop

Lin Tian1, Yidan Shang1, Rui Chen2, Ru Bai2, Chunying Chen2, Kiao Inthavong1 & Jiyuan Tu1,3

Exposure to nanoparticles in the workplace is a health concern to occupational workers, who face an increased risk of developing respiratory, cardiovascular, and neurological disorders. Based on animal inhalation studies and human lung tumor risk extrapolation, current authoritative recommendations on exposure limits address either total mass or total number concentrations. The effects of particle size distribution and their implications for regional airway dosages are not elaborated. Real-time particle concentrations and size distributions in the range from 5.52 to 98.2 nm were recorded in a wire-cut electrical discharge machine (WEDM) shop during a typical working day. Under this realistic exposure condition, human inhalation simulations were performed in a physiologically realistic nasal and upper airway replica. The combined experimental and numerical study is the first to establish a realistic exposure condition under which detailed dose metric studies can be performed. In addition to the mass-concentration-guided exposure limit, inhalation risks from nano-pollutants were reexamined accounting for the actual particle size distribution and deposition statistics. Detailed dosimetries of the inhaled nano-pollutants in the human nasal and upper airways with respect to particle number, mass and surface area are discussed, and empirical equations were developed.

An astonishing enhancement of human airway dosages in the WEDM machine shop was detected by the combined experimental and numerical study: up to 33-fold increases in mass, 27-fold in surface area and 8-fold in number dosages were detected during working hours in comparison to the background dosimetry measured at midnight. The real-time particle concentration measurements showed substantial emission of nano-pollutants by the WEDM machining activity, and the combined experimental and numerical study provided extraordinary detail on human inhalation dosimetry. Human inhalation dosimetry was found to be extremely sensitive to the real-time particle concentration and size distribution; particle concentrations averaged over a 24-h period will inevitably misrepresent information critical for realistic inhalation risk assessment. The particle size distribution carries very important information in determining human airway dosimetry. A pure number or mass concentration recommendation on the workplace exposure limit is insufficient: a particle size distribution, together with the deposition equations, is critical to recognizing the actual exposure risks. In addition, human airway dosimetry in number, mass and surface area varies significantly, and a complete inhalation risk assessment requires knowledge of the toxicity mechanisms in response to each individual metric. Further improvements in these areas are needed.

Exposure to nanoparticles in the workplace is a health concern to occupational workers, with an increased risk of developing respiratory, cardiovascular, and neurological disorders [1]. Confirmed inhalation hazards include the notorious asbestos, which causes severe health consequences even at low doses [2].
The onset of "manganism", a clinically diagnosed neurological disorder caused by high-level exposure to manganese-containing particles, was reported in occupational workers conducting mining, ore grinding and smelting activities [3, 4]. In addition to confirmed cases, there have been discussions on the link between sub-clinical human functional impairment and chronic low-dose metal particle exposure [5,6,7]. Similar concerns were also reported in the office environment, where the increased usage of modern electrophotography machines elevates the health risks of office workers through inhalation exposure to the nanoparticles emitted during xerographic processes [8, 9].

Electrical discharge machining (EDM) is one of the most important manufacturing processes in the die and mold industry for delicate concave shapes which traditional machining cannot achieve [10]. Rather than using mechanical forces, EDM applies a high voltage between the "wire" electrode and the conductive metal piece to cause high-energy sparks which remove the material by melting and erosion. This high-energy electrophysical process is more likely to generate pollutant by-products at the nanoscale [11].

Based on animal inhalation studies and human lung tumor risk extrapolation, the National Institute for Occupational Safety and Health (NIOSH, USA) [12] recommended exposure limits for fine (diameter > 0.1 μm) and ultrafine (diameter ≤ 0.1 μm) titanium dioxide particles of 2.4 mg/m3 and 0.3 mg/m3 in normal working conditions. The German Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) [13] established benchmark limits for ultrafine particle concentrations in workplaces based on state-of-the-art knowledge of measurement and exposure risks. It states that, for ultrafine (1 to 100 nm) metal, metal oxide and other biopersistent granular nanomaterials with a density > 6000 kg/m3, a particle number concentration of 20,000 particles/cm3 should not be exceeded. For ultrafine particles with a density below 6000 kg/m3, a particle number concentration below 40,000 particles/cm3 should be imposed. Other recommendations include 10 mg/m3 by the American Conference of Governmental Industrial Hygienists (ACGIH) [14] and 15 mg/m3 by the Occupational Safety and Health Administration (OSHA, USA) [15] for total inhalable particles (diameter ≤ 100 μm). For respirable particles that can penetrate to the alveolar region (diameter ≤ 10 μm), ACGIH and OSHA refine the exposure limits to 3 and 5 mg/m3, respectively [12]. In summary, current exposure standards are focused on ultrafine nanoparticles of 1 to 100 nm, and the recommendations address either total mass or total number concentrations. The effects of the particle size distribution and their implications for regional airway dosages are not elaborated.

In addition to animal and experimental studies, computational fluid dynamics (CFD) is frequently used for investigating detailed human inhalation and particulate transport processes. Compared to experiments, computer simulations are significantly less restrictive from time, cost and ethical perspectives. They allow decomposition of the complex physical phenomenon into focused areas where details of the particle-pulmonary interactions can be derived and integrated. Respiratory anatomy, airflow, and particle transport and deposition are the main focus areas in which a broad range of CFD studies have been reported over the past two decades. Heistracher and Hofmann (1995) proposed a physiologically realistic human bronchial airway bifurcation model [16].
Tian and Ahmadi (2012) extended the model to multi-level bronchial bifurcations from which the entire lung can be constructed sequentially [17]. For nasal airways, Zamankhan et al. (2006) and Inthavong et al. (2009) presented methodologies for reconstructing human nasal cavities from casts and CT scans, respectively [18, 19]. Detailed flow patterns and particle transport characteristics around the human body can be found in the work of Kennedy and Hinds (2002), Anthony and Flynn (2006), Se et al. (2010), Inthavong et al. (2012, 2013), and Ghalati et al. (2012) [20,21,22,23,24,25]. Inside the human respiratory system, Katz and Martonen (1996), Zhang and Kleinstreuer (2001), Hofmann et al. (2003), Tian and Ahmadi (2012, 2013), and Inthavong et al. (2010) [17, 26,27,28,29] employed computational models to investigate the airflow and particle transport and deposition in human tracheobronchial airways. Subramaniam et al. (1998), Matida et al. (2003), Zamankhan et al. (2006), Xi and Longest (2008), Inthavong et al. (2011), Ge et al. (2012) and Tian et al. (2016) [18, 30,31,32,33,34,35] applied CFD methods to the human nasal/head airways for airflow and particle transport analysis. To evaluate the influence of breathing pattern on particle deposition, Häußermann et al. (2002) and Inthavong et al. (2010) performed particle transport modeling in nasal and tracheobronchial airways, respectively [36, 37]. CFD also plays an important role in the study of non-spherical particle transport behavior in human airways. Tian et al. (2013, 2016), Inthavong et al. (2008), and Dastan et al. (2013) were among the few who investigated fibrous and agglomerated particle deposition in human nasal and tracheobronchial airways using the CFD-DPM method [29, 34, 53, 38–40]. Recently, CFD-DEM has gained growing interest for studying non-spherical particle dynamics [41], and it has the potential to be applied in human inhalation studies. These computational investigations provide detailed descriptions of flow and particle features and allow wider coverage of flow and particle conditions, which would otherwise be difficult to infer from experimental measurements.

While most of these computational analyses provide valuable information on detailed flow patterns, particle trajectories and deposition statistics, no study has incorporated a realistic inhalation profile that accounts for the exposure particle size distribution in inhaled dosimetry. In this research, a combined experimental and numerical study of the upper airway dosimetry of ultrafine particles in an electrical discharge machine shop was performed. The real-time evolution of particle concentration and size distribution in the range from 5.52 to 98.2 nm during normal operation of electrical discharge machining was measured over a typical working day. Under these conditions, human inhalation simulations were performed in a physiologically realistic nasal and upper airway replica. Respiratory health risk was determined from regional dosimetries in the context of exposure limits recommended by NIOSH, ACGIH, OSHA and IFA. In addition, dose metric relationships with respect to particle number, mass and surface area were analyzed. Based on the simulation data, empirical equations were developed to predict the local dosimetry of the inhaled nanoparticles in the human nasal, laryngeal and deeper airways. The combined experimental and numerical study is the first to establish a realistic exposure condition under which detailed dose metric studies were performed.
The developed empirical equations will be useful for future nanoparticle dose-deposition prediction in inhalation risk assessments.

Particle measurement in an electrical discharge machine shop

Located in Beijing, China, the 3.8 m high machine shop hosts five wire-cut electrical discharge machines (WEDM) manufacturing hardened metal pieces of desired shape during regular working hours (Fig. 1) (WEDM#1 – Beijing AgieCharmills Industrial Electronics FW2; WEDM#2 and #3 – Shanghai Troop Group Photoelectric Technology TP-25ZT and TP3271; WEDM#5 – Beijing Ninva NH7120ND; WEDM#4 – out of service). The high voltage between the "wire" electrode and the conductive metal piece causes high-energy sparks which remove the material by melting and eroding processes. A dielectric liquid (DIC-206, Beijing Hua Ye Oil Limited, China) was used to flush out particle debris as well as restore the electrode potentials. Particle measurements took place during normal working hours from 8:00 to 17:30 in winter. Due to the cold weather, the window in the machine shop was closed during working hours but left open overnight. The door was normally closed, with occasional opening by the single machine operator during breaks, including a regular lunch break from 12:00 to 13:30. There was no mechanical ventilation in the workshop and no personal protective equipment was used by the operator due to minimal visible fume emission. The sampling station was located in the breathing zone, about 1.5 m high and 1.2 to 5.0 m away from the electrical discharge machines.

Floor plan of the WEDM machine shop (dimensions not to scale)

The sampling station hosts a suite of aerosol instruments, and the ultrafine particle concentration was measured by a Scanning Mobility Particle Sizer (SMPS, TSI Model 3936, USA), consisting of a Water-Based Condensation Particle Counter (CPC, TSI Model 3788, USA), an Electrostatic Classifier (EC, TSI Model 3080, USA), a Nano Differential Mobility Analyzer (DMA, TSI Model 3085, USA) and a long DMA (TSI Model 3081, USA). The Nano and long DMAs detect particle concentrations up to 10^8 particles/cm3 in real time, over size ranges from 2 to 100 nm and from 14 to 675 nm, respectively. Larger particles (>1 μm) were eliminated by a pre-conditioner impactor at a setting of 0.0457 cm. The DMA sheath flow was 7 L/min. Before each field measurement, "zero" calibration was conducted using a high efficiency particulate air (HEPA) filter. A diffusion loss correction was applied to account for nanoparticle losses in the sample lines, based on a previously described method [42]. In addition to the particle concentration instrumentation, a Micro-Orifice Uniform-Deposit Impactor (MOUDI, Model 125B NanoMoudi-II, MSP Corporation, USA) was used to collect aerosol particle samples for morphology analysis. The airborne particles were captured on polycarbonate filters (Φ 47 mm, 0.22 μm, Munktell Inc., Sweden), two scanning electron microscopy (SEM) wafers (4 mm × 4 mm, Zhongxing Bairui Inc., China), and two transmission electron microscopy (TEM) grids (200-mesh molybdenum with carbon film, Zhongxing Bairui Inc., China) at a steady flow rate of 10 L/min using a vacuum pump (Sogevac SV10-16 B, Leybold Vacuum GmbH Co., Ltd., Germany). Offline examination of the wafers was performed with SEM (S-4800 N, HITACHI Inc., Japan). Further detail of the WEDM measurement is given in [43].
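Throughout the analysis below, SMPS number concentrations are related to mass assuming spherical particles (cf. Eqs. (14)-(16) later in the text) with a density of 2700 kg/m3, the value assumed in this paper. A minimal sketch of that conversion for a single size bin; the function name and example values are illustrative:

```python
import math

RHO_P = 2700.0  # particle density (kg/m^3), close to aluminum, as assumed in the paper

def bin_mass_concentration(d_nm: float, number_conc_cm3: float) -> float:
    """Mass concentration (ug/m^3) of one SMPS size bin with midpoint
    diameter d_nm (nm) and number concentration number_conc_cm3 (#/cm^3),
    assuming spherical particles: mass per particle = rho * pi/6 * d^3."""
    d_m = d_nm * 1e-9                                   # nm -> m
    mass_per_particle = RHO_P * math.pi / 6.0 * d_m**3  # kg
    n_per_m3 = number_conc_cm3 * 1e6                    # #/cm^3 -> #/m^3
    return mass_per_particle * n_per_m3 * 1e9           # kg/m^3 -> ug/m^3

# Example: 10,000 #/cm^3 of 70 nm particles
print(bin_mass_concentration(70.0, 1.0e4))  # ~4.85 ug/m^3
```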
Human nasal and upper respiratory airway modeling

A CFD model of the upper respiratory airway containing the facial features, nasal cavity, larynx, trachea and first bifurcation of the bronchial airway tree was developed from CT scans [36, 44, 45] (Fig. 2). Each part of the respiratory airway was connected via the nostrils to form a contiguous path from the external space to the end of the laryngeal region. The laryngeal region was extended to the first lung bifurcation to allow sufficient flow recovery and improve numerical convergence in the CFD solution. The respiratory airway was attached to a realistic human face exposed to the external surroundings containing airborne particles from the electrical machine shop. Shang et al. (2015) [45] showed that the airflow has negligible influence on particle trajectories outside the breathing zone. In this study, particles were uniformly released on a hemisphere (of radius 3 cm) centered at the nose tip, resembling the release condition of Doorly et al. (2008) [46]. A high quality mesh (minimum orthogonality >0.1) incorporating prism layers was applied at the bounding respiratory walls, and tetrahedral unstructured mesh filled the airway passage. The final model, shown in Fig. 2, consists of 7 million cells. Further detail of the computational model is given in [22].

Human nasal and upper respiratory airway model

Fluid flow simulation

The current study employed a steady inhalation model under the assumption that particle deposition mainly occurs during the inhalation phase [36]. It is worth noting that the breathing pattern has been shown to affect deposition for micron-range particles between 1 and 5 μm [37]; however, its effect on nanoparticle deposition is still not fully understood and requires further investigation. A mild cardiac load was assumed, as the machine operator was mainly standing with occasional walking to attend to the metal pieces. Laminar flow conditions were considered and inspiration flow rates from 3 to 15 L/min were included. The wide coverage of breathing rates is to facilitate the development of the empirical equations. The airflow was simulated using Ansys-Fluent v16.2. The surrounding walls were set to atmospheric pressure and inhalation was initiated by a negative pressure difference at the bronchial bifurcation outlet. This allowed the ambient flow field to be influenced only by the inhaled air. The continuity and momentum equations of the fluid flow are:

$$ \frac{\partial }{\partial {x}_i}\left(\rho {u}_i\right)=0, $$

$$ \rho\;{u}_j\frac{\partial {u}_i}{\partial {x}_j}=-\frac{\partial p}{\partial {x}_i}+\frac{\partial }{\partial {x}_j}\left[\mu \frac{\partial {u}_i}{\partial {x}_j}\right]. $$

where ρ, u and p are the density, velocity and pressure of the air, respectively. A second order upwind scheme was used to approximate the momentum equation, while the pressure–velocity coupling was handled through the SIMPLE method. Further detail of the fluid flow modeling is given in [47].

Particle simulation

The Lagrangian particle tracking method is used, where each particle's trajectory is computed. The particle equation of motion is:

$$ \frac{d{\boldsymbol{u}}_{\boldsymbol{p}}}{dt}=\frac{1}{C_c}{\boldsymbol{F}}_{\boldsymbol{D}}+\frac{\boldsymbol{g}\left({\rho}_p-\rho \right)}{\rho_p}+{\boldsymbol{F}}_{\boldsymbol{L}}+{\boldsymbol{F}}_{\boldsymbol{B}} $$

here u_p is the particle velocity, t is the time, g is the gravitational constant, and ρ_p is the particle density.
In this study, both gravitational and buoyancy forces can be neglected. F_D is the drag force, given by 18μ(u − u_p)/(ρ_p d^2), where d is the particle diameter. C_c in Eq. (3) is the Cunningham correction, given by:

$$ {C}_c=1+\frac{2\lambda }{d}\left(1.257+0.4{e}^{\left(-1.1d/2\lambda \right)}\right), $$

where λ is the molecular mean free path. F_L in Eq. (3) is the Saffman lift force, and F_B is the Brownian diffusion force with amplitude \( \zeta \sqrt{\pi {S}_0/\Delta t} \), where ζ is a zero-mean, unit-variance independent Gaussian random number, ∆t is the time-step for particle integration and S_0 is a spectral intensity function [48]:

$$ {S}_0=\frac{216\nu kT}{\pi^2\rho\;{d}^5{\left(\frac{\rho_p}{\rho}\right)}^2{C}_c}. $$

Here ν is the fluid kinematic viscosity, k is the Boltzmann constant, and T is the absolute temperature of the inspiratory air in the nasal cavity. The simulation was carried out with the Ansys-Fluent v16.2 discrete phase model (DPM). With the closed window and door and the lack of mechanical ventilation during the machining process, a homogeneous dispersion of the airborne particles was assumed in the breathing zone. For this study, 100,000 statistically independent, uniformly distributed monodispersed airborne particles were released for each particle size on a hemispheric profile (Fig. 2). Particles of 1, 1.5, 2, 3, 5, 10, 15, 20, 30, 40, 50, 70 and 100 nm were included in the study. All particles entered the human nasal airway. Deposition onto the respiratory walls occurred when a particle was within d/2 of the surface, where d is the particle diameter.

The particle equation (3) was solved by stepwise integration over discrete time steps, yielding a new particle velocity at each time step. Inthavong et al. (2016) [49] identified the sensitivity of nanoparticle diffusion behavior in Lagrangian tracking to the integration time step factor, mesh size and flow condition. A methodology for selecting the most appropriate time step factors to achieve optimal Lagrangian tracking outcomes was proposed and verified in a pipe and a human pharynx model [49]. In Ansys-Fluent, the length scale factor of integration, L_s, controls the integration time step size, and Δt is a function of the particle velocity and the continuous airflow phase velocity:

$$ \Delta t=\frac{L_s}{u_p+u} $$

This means that the length scale factor is proportional to the integration time step, equivalent to the distance the particle travels before its equations are solved again and its trajectory updated. A smaller value of the length scale increases the number of calculations per distance length. Its selection must reproduce the diffusion dispersion mechanism for nanoparticles [49]. A standard geometry in the form of a pipe (Fig. 3a) with an analytical solution by Ingham (1975) [50] was used to validate the particle dispersion. Fully developed flows of 1 L/min and 5 L/min were used, with corresponding Re = 312 and Re = 1560, respectively. The particles were introduced into the pipe with a mass flow rate distributed with a fully developed profile:

$$ \dot{m}(r)={\dot{m}}_0\left(1-\frac{r^2}{R^2}\right) $$

Brownian diffusion validation testing in a pipe geometry. a meshing scheme; b comparison with analytical solution by Ingham (1975) at pipe flow of 1 L/min; c comparison with analytical solution by Ingham (1975) at pipe flow of 5 L/min

where ṁ_0 is the maximum mass flow rate at the pipe centerline, r is the radial position from the pipe centerline, and R is the pipe radius.
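For reference, Eqs. (4) and (5), together with the Stokes-Einstein diffusivity that presumably underlies Table 2, can be evaluated directly. A minimal sketch; the air properties at 288.16 K (viscosity, density, mean free path) are assumed values, not quoted from the paper:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant (J/K)
T = 288.16           # air temperature (K), as in Table 2
MU = 1.79e-5         # assumed dynamic viscosity of air (Pa*s)
NU = 1.47e-5         # assumed kinematic viscosity of air (m^2/s)
RHO = 1.22           # assumed air density (kg/m^3)
MFP = 65.0e-9        # assumed molecular mean free path of air (m)

def cunningham(d: float) -> float:
    """Slip correction factor Cc, Eq. (4); d in metres."""
    return 1.0 + (2.0 * MFP / d) * (1.257 + 0.4 * math.exp(-1.1 * d / (2.0 * MFP)))

def diffusivity(d: float) -> float:
    """Stokes-Einstein diffusivity D = kB*T*Cc / (3*pi*mu*d), in m^2/s."""
    return K_B * T * cunningham(d) / (3.0 * math.pi * MU * d)

def brownian_spectral_intensity(d: float, rho_p: float = 2700.0) -> float:
    """Spectral intensity S0 of the Brownian force, Eq. (5)."""
    return (216.0 * NU * K_B * T /
            (math.pi**2 * RHO * d**5 * (rho_p / RHO)**2 * cunningham(d)))

print(diffusivity(70e-9))  # ~1.3e-9 m^2/s for a 70 nm particle
```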
Particle deposition in the pipe over a distance of 0.09 m was compared with the deposition efficiency (DE) correlation by Ingham (1975) [50]:

$$ DE=1-\left(0.819{\mathrm{e}}^{-14.63\Delta}+0.0976{\mathrm{e}}^{-89.22\Delta}+0.0325{\mathrm{e}}^{-228\Delta}+0.0509{\mathrm{e}}^{-125.9{\Delta}^{2/3}}\right) $$

$$ \Delta =\frac{DL_{\mathrm{pipe}}}{4{U}_{\mathrm{inlet}}{R}^2} $$

Particle deposition in a pipe length of 0.9 m was compared for length scale factors of 5e-5 m, 1e-5 m, and 5e-6 m, which showed that the deposition was best described using a value of 1e-5 m. Applying the method to a human pharynx model with 10 different length scale factors, an optimal value of 2e-5 m was identified. Further detail of the methodology is given in [49].

Particle size distribution and concentration

A typical ambient environment contains polydispersed particles, where the number concentration (number of particles per unit volume) is closely related to the size distribution n(d, r, t), given as:

$$ dN=n\left(d,\overrightarrow{r},t\right)d(d) $$

Here n is the particle size distribution function, r is the position, t is time and d is the particle diameter. Accordingly, the total number of particles per unit volume can be obtained as:

$$ N={\int}_0^{\infty }n\left(d,\overrightarrow{r},t\right)d(d) $$

Due to emission, migration and particle coagulation, N in the ambient environment is a function of time and space. In a time domain from t_1 to t_2, the averaged size distribution function is given as:

$$ \overline{n}\left(d,\overrightarrow{r}\right)=\frac{1}{\left({t}_2-{t}_1\right)}{\int}_{t_1}^{t_2}n\left(d,\overrightarrow{r},t\right)dt $$

Therefore, the total number of particles per unit volume in the timeframe can be obtained as:

$$ {N}_{t_1\hbox{-} {t}_2}=\left({t}_2-{t}_1\right){\int}_0^{\infty}\overline{n}\left(d,\overrightarrow{r}\right)d(d)=\left({t}_2-{t}_1\right){\int}_0^{\infty}\left[\frac{1}{\left({t}_2-{t}_1\right)}{\int}_{t_1}^{t_2}n\left(d,\overrightarrow{r},t\right)dt\right]d(d) $$

Given the size distribution function n(d, r, t) (Eq. (10)), the particle surface area, volume and mass concentrations can readily be obtained for spherical particles:

$$ dA=\pi {d}^2\;n\left(d,\overrightarrow{r},t\right)d(d) $$

$$ dV=\frac{\pi }{6}{d}^3\;n\left(d,\overrightarrow{r},t\right)d(d), $$

$$ dM={\rho}_p\frac{\pi }{6}{d}^3\;n\left(d,\overrightarrow{r},t\right)d(d). $$
Similarly to Eq. (13), the total surface area, volume and mass of particles per unit volume in the timeframe from t_1 to t_2 can be obtained as:

$$ {A}_{t_1\hbox{-} {t}_2}=\left({t}_2-{t}_1\right){\int}_0^{\infty}\pi {d}^2\overline{n}\left(d,\overrightarrow{r}\right)d(d)=\pi \left({t}_2-{t}_1\right){\int}_0^{\infty}\left[\frac{1}{\left({t}_2-{t}_1\right)}{\int}_{t_1}^{t_2}{d}^2n\left(d,\overrightarrow{r},t\right)dt\right]d(d) $$

$$ {V}_{t_1\hbox{-} {t}_2}=\left({t}_2-{t}_1\right){\int}_0^{\infty}\frac{\pi }{6}{d}^3\overline{n}\left(d,\overrightarrow{r}\right)d(d)=\frac{\pi }{6}\left({t}_2-{t}_1\right){\int}_0^{\infty}\left[\frac{1}{\left({t}_2-{t}_1\right)}{\int}_{t_1}^{t_2}{d}^3n\left(d,\overrightarrow{r},t\right)dt\right]d(d) $$

$$ {M}_{t_1\hbox{-} {t}_2}=\left({t}_2-{t}_1\right){\int}_0^{\infty }{\rho}_p\frac{\pi }{6}{d}^3\overline{n}\left(d,\overrightarrow{r}\right)d(d)={\rho}_p\frac{\pi }{6}\left({t}_2-{t}_1\right){\int}_0^{\infty}\left[\frac{1}{\left({t}_2-{t}_1\right)}{\int}_{t_1}^{t_2}{d}^3n\left(d,\overrightarrow{r},t\right)dt\right]d(d) $$

Deposition efficiency and particle dosimetry

Particle deposition efficiency (DE) is defined as the ratio of the number of deposited particles in a region to the total number entering that region; that is:

$$ DE=\frac{Number\ of\ Deposited\ Particles}{Total\ Number\ of\ Particles\ Entering\ the\ Region} $$

It is an important parameter characterizing the regional filtering capacity and particle penetration rate. Deposition efficiency (DE) is closely related to the transport mechanisms; for nanoparticles, size, diffusivity and airflow rate are identified as the dominant parameters. Due to the geometric complexity of human airways, no analytical expression is available for the deposition efficiency (DE). Frequently, empirically fitted deposition equations are used to relate the measured data (DE) to the governing parameters. Given the particle size distribution and an airway deposition equation, particle dosimetries by number, surface area, volume and mass can be readily obtained as:

$$ {Dose}_{number}=\underset{t_1}{\overset{t_2}{\int }}\underset{d_1}{\overset{d_2}{\int }}n\left(d,\overrightarrow{r},t\right)(DE)d(d)dt $$

$$ {Dose}_{surface\_ area}=\underset{t_1}{\overset{t_2}{\int }}\underset{d_1}{\overset{d_2}{\int }}\pi {d}^2n\left(d,\overrightarrow{r},t\right)(DE)d(d)dt $$

$$ {Dose}_{volume}=\underset{t_1}{\overset{t_2}{\int }}\underset{d_1}{\overset{d_2}{\int }}\frac{\pi }{6}{d}^3n\left(d,\overrightarrow{r},t\right)(DE)d(d)dt $$

$$ {Dose}_{mass}=\underset{t_1}{\overset{t_2}{\int }}\underset{d_1}{\overset{d_2}{\int }}{\rho}_p\frac{\pi }{6}{d}^3n\left(d,\overrightarrow{r},t\right)(DE)d(d)dt $$

Here (t_1, t_2) and (d_1, d_2) are the time interval and particle size range, respectively.

Particle morphology

Sample SEM and TEM images of the airborne particles are shown in Fig. 4. The aerosol particles from the production activity were largely smaller than 100 nm and typically captured on the filter stage covering the size range from 56 nm to 100 nm. A mixture of iron, aluminum, copper, and trace elements of Mg, Mn, Mo, Zn, Ni and Cr was detected in the particle composition. For simplicity, a particle density of 2700 kg/m3, close to that of aluminum, was assumed in the current numerical simulation. While the larger particles appear to be compact and closer to spherical in shape, the smaller ones are more agglomerate-like, formed by clusters of smaller spheres.
Since the bipolar charger and the particle classification in the SMPS utilize both a spherical particle model and an idealized aggregate mobility model, both will be considered in the inhalation study. This study focuses on the methodology of the combined study using the spherical assumption; the effect of agglomeration on particle measurement and inhalation risks will be investigated in a subsequent paper.

Sample morphologies of collected airborne particles by MOUDI 125B in the WEDM machine shop

Particle distribution in the machine shop

Figure 5 shows the measured ultrafine particle (5.52 to 98.2 nm) concentrations in the electrical discharge machine shop during a 24-h period on a typical working day. The total mass and number concentrations correlate with the working hours, which start at 8:00 and end around 17:30. The particle total mass concentration rose sharply (from 2.25 μg/m3) shortly after 8:00, peaked (at 27 μg/m3) around 9:30, and maintained this high level until the lunch break. The total mass concentration decreased steadily during the lunch break and a minimum value of 4.5 μg/m3 was reached before 13:00. The total mass concentration rose sharply again at the beginning of the afternoon shift, reaching a high of 29.25 μg/m3 around 14:00, and dropped to 9 μg/m3 around 15:00. The particle concentration was maintained at this level until 20:30 before it finally dropped to the background level. A similar trend of variation (from 30,000 to 139,000 particles/cm3) is observed in Fig. 5b for the total ultrafine particle number; however, a persistently high concentration was maintained throughout the working period, and it was less affected by micro-activities such as the lunch break. Figure 5 shows that both the total mass and total number concentrations of the ultrafine particles in the machine shop correspond to the production activity. A high particle inhalation exposure of the machine operator is clearly demonstrated.

Ultrafine particle (5.52 to 98.2 nm) concentrations during a typical working day: a real time ultrafine particle total mass distribution; b real time ultrafine particle total number distribution

To evaluate the evolution of the particle size distribution, Figs. 6 and 7 show the size-resolved particle concentrations at a series of representative high-production phases (9:30, 11:00 and 14:30). The background concentration was taken at midnight, when a minimum and steady particle concentration was observed. In these figures, the increase of particle concentration over the background is presented to allow a focused analysis of the emissions produced by the machining activity. Figure 6a shows that, across all size groups, a large number of particles (on the order of 10^4 particles/cm3) were generated by the production. In general, the particle number concentration increase was higher in the smaller size groups (5–30 nm) than in the larger ones (>30 nm). However, from the percentage increase perspective (Fig. 6b), the production generated a significantly higher number of particles in the larger size groups (>30 nm), monotonically related to particle size. The background particle number concentration is shown in Fig. 6c for comparison. Figure 6 implies a high number concentration of ultrafine particles in the lower size range (<30 nm), both in the background and in the machine shop during production.
Relative to the background, the production activity most effectively increased the number of ultrafine particles in the larger size range (>30 nm).

Increase of the ultrafine particle number concentration due to the production activities: a particle number increase from the background; b percentage of particle number increase from the background

Increase of the ultrafine particle mass concentration due to the production activities: a particle mass concentration increase from the background; b percentage of particle mass concentration increase from the background

Contrary to the particle number count, the increase of the particle mass concentration due to production is clearly positively related to particle size, from both absolute and percentage perspectives (Fig. 7). The mass concentration increase for the smaller particle size groups (<20 nm) was almost negligible. The mass concentration for the larger particle size groups increased monotonically with particle size. Figures 6 and 7 imply that although the particle number was higher for the smaller sizes, the mass concentration was dominated by the larger size groups. The mass increase due to production emission was predominantly contributed by ultrafine particles in the larger size range (>50 nm).

Breathing airflow pattern

Light breathing at flow rates of 3 to 12 L/min was included in the simulation. The corresponding Reynolds numbers at the nostrils are given in Table 1. Key features of the airflow pattern were similar across flow rates, conforming to the geometric details of the airway. Figure 8 displays the stream-wise and axial airflow patterns in the nasal and upper airways at selected locations. Ambient air enters the nostril in an upward direction and turns 90° into the middle and inferior nasal meatus before a second 90° turn at the posterior nasopharynx. High velocities were observed at the nostril entrance, downstream of the nasal valve and at the larynx. The bulk of the air passes through the middle and inferior meatus, while the superior meatus, which includes the olfactory region, carries very little flow. The airflow pattern evolves rapidly in the laryngeal region, with high velocity streams shifting from the posterior to the anterior wall, implying significant secondary flow along the airway passage. Since the inhaled airborne particles are transported by the moving fluid, regions with higher velocity imply high particle concentrations. The flow pattern thus provides a valuable indication of the potential deposition of the inhaled particles. Airflow changes, such as a sharp turn or a sudden contraction or expansion of the cross-sectional area, may have profound consequences for particle deposition.

Table 1 Airflow rate and Reynolds number

Stream-wise and axial air flow pattern in the nasal and upper airways at selected locations

Particle deposition pattern and deposition equations in human upper airways

Figure 9 shows sample deposition patterns of the inhaled nanoparticles (1 and 100 nm) in the nasal cavity and laryngeal region. Here the particle size range is slightly expanded to give the developed deposition equations wider coverage in future applications. To elucidate the obscured regions in the 3D domain (Fig. 9a), a surface mapping technique [51] was applied in which the 3D bounding surface is unwrapped to a 2D surface, shown in Fig. 9b-d. High deposition was observed in the nasal vestibule, on the anterior septum before the first 90° turn, and in the posterior nasal cavity following the second 90° turn.
In the main nasal cavity, the majority of the deposition occurred in the middle meatus. A small fraction was scattered across the superior meatus onto the olfactory mucosa. The left and right nasal chamber geometries were asymmetric, with the right chamber slightly wider. The particle deposition pattern was affected by particle size, with significantly more deposition and a more random distribution observed for 1 nm particles. The sporadic, streak-patterned deposition of the 100 nm particles (in the laryngeal region, Fig. 9d) implies a lower level of Brownian diffusion. The nasal cavity was shown to effectively filter the 1 nm particles, while 100 nm particles were more likely to penetrate through and show higher deposition in the laryngeal region.

Particle deposition pattern in the nasal and laryngeal region (Q = 10 L/min): a nasal and laryngeal region (3D); b nasal cavity – left (unwrapped 2D); c nasal cavity – right (unwrapped 2D); d laryngeal region (unwrapped 2D)

Nasal deposition efficiency is defined as the ratio of the number of particles deposited in the nasal cavity to the total number entering through the nostrils. It is an important parameter characterizing the nasal filtering efficiency. Figure 10a presents the current simulation results and the comparison with literature data [35, 52,53,54,55]. In the nano range (d < 100 nm), nasal deposition decreased monotonically with increasing particle size, and the current simulation agreed well with the experimental data. The observed variations, within the tolerance of accuracy, were due to experimental scatter, geometry variation between inhalation subjects, and variation in particle inhalation profiles (far field versus nostril, [56]). Based on the simulation, empirical equations for nasal and laryngeal deposition, as functions of the flow rate Q (m^3/s) and the particle diffusivity D (m^2/s), were developed (Table 2). We find that the correlation parameter D^0.510/Q^0.318 provided the best curve fit of the sampled data for breathing rates of 3 to 12 L/min and particle sizes from 1 to 100 nm in developing the empirical equations (Fig. 10a and b). Similar trends were reported in prior studies, e.g. the experiments of Cheng (2003) and the simulations of Xi et al. (2008), with the nasal deposition data conforming to D^0.510/Q^0.280 and D^0.500/Q^0.125, respectively [35, 57]. The empirical equations are given as:

Comparison of nasal deposition efficiencies: a nasal deposition efficiencies (Q = 10 L/min); b nasal deposition equation; c laryngeal deposition equation

Table 2 Particle diffusivity D (288.16K)

$$ {DE}_{nasal}=\left(1-0.9793{e}^{-36.51\frac{D^{0.510}}{Q^{0.318}}}\right)\times 100 $$

$$ {DE}_{laryngeal}=\left(1-0.9604{\mathrm{e}}^{-10.73\frac{D^{0.510}}{Q^{0.318}}}\right)\times 100 $$

In Eq. (26), the regional laryngeal deposition efficiency is defined as the ratio of the number of particles deposited in the laryngeal region to the total number that entered the region. It is worth noting that Eq. (26) applies to air flow rates up to 12 L/min; beyond that, laryngeally induced turbulence starts to form, which enhances laryngeal deposition.

Particle dosimetry in human upper airways in the machine shop
Substituting Eqs. (25) and (26) into Eq. (21), the particle number dosimetry in the human upper airways can be obtained as:

$$ \begin{array}{l}{Dose}_{nasal\_ number}=\underset{t_1}{\overset{t_2}{\int }}\underset{d_1}{\overset{d_2}{\int }}n\left(d,\overrightarrow{r},t\right)\frac{DE_{nasal}}{100}d(d)dt\\ {}\kern5em =\underset{t_1}{\overset{t_2}{\int }}\underset{d_1}{\overset{d_2}{\int }}n\left(d,\overrightarrow{r},t\right)\left(1-0.9793{e}^{-36.51\frac{D^{0.510}}{Q^{0.318}}}\right)d(d)dt\end{array} $$

$$ \begin{array}{l}{Dose}_{laryngeal\_ number}=\underset{t_1}{\overset{t_2}{\int }}\underset{d_1}{\overset{d_2}{\int }}n\left(d,\overrightarrow{r},t\right)\left(1-\frac{DE_{nasal}}{100}\right)\left(\frac{DE_{laryngeal}}{100}\right)d(d)dt\\ {}=\underset{t_1}{\overset{t_2}{\int }}\underset{d_1}{\overset{d_2}{\int }}n\left(d,\overrightarrow{r},t\right)\left(0.9793{e}^{-36.51\frac{D^{0.510}}{Q^{0.318}}}\right)\left(1-0.9604{e}^{-10.73\frac{D^{0.510}}{Q^{0.318}}}\right)d(d)dt\end{array} $$

Similarly, the particle dosimetry by surface area and mass can be obtained by substituting Eqs. (25) and (26) into Eqs. (22) and (24), respectively. In the current study, a log-normal particle size distribution (n) was detected in the background; however, the particle size distribution was transient during working hours due to the machining processes; therefore, a time-averaged particle distribution function based on the real-time measurements during the 8-h working period was used (Eq. (12)). Figure 11 shows the particle distribution measured by SMPS during production, together with the background concentration. The fitted equations shown in Fig. 11 are of the form:

$$ \overline{n}(d)=a_1\,{e}^{b_1 d}+c_1\,{e}^{d_1 d} $$

with, during production (8-h working period, Eq. (29)):

a_1 = 3197, b_1 = -0.03849, c_1 = 130.7, d_1 = 0.006897

and at midnight (background, Eq. (30)):

a_1 = 1792, b_1 = -0.1035, c_1 = 0.7465, d_1 = 0.03032

Time averaged particle distribution function in the WEDM machine shop

Eqs. (29) and (30) thus give the coefficients of the particle size distribution function during production (the 8-h working period) and at midnight (background), respectively. Based on the SMPS measurements (Eq. (29)), the human upper airway dosimetry in the WEDM machine shop (d = 5.52–98.2 nm) was calculated and is presented in Table 3. The dosages are based on an 8-h period covering breathing rates of 3 to 12 L/min. Particle penetration, closely related to deep lung dosimetry, is also provided.

Table 3 Human upper airway dosages and penetration of nanoparticles from 5.52 to 98.2 nm in the WEDM in a typical working day

Table 3 shows a strong monotonic increase of the human upper airway dosage (of inhaled nanoparticles) with airflow rate across all metrics. This is simply the result of increased particle exposure due to the larger air exchange. The slight decrease in particle deposition at higher flow rates in the diffusion regime (Eqs. (25) and (26)) was insignificant to the actual dosimetry. While the nasal cavity had a higher number dosage than the laryngeal region, surprisingly, the particle mass and surface area dosages in the laryngeal region were higher than those in the nasal cavity. This difference has been overlooked by traditional airway deposition studies, where real particle concentrations and size distributions were not available. Further examination showed that the slightly higher deposition rate of larger particles in the laryngeal region was the cause. For example, at a breathing rate of 12 L/min, the deposition efficiencies for 70 nm particles are 3.63% in the nasal cavity and 4.37% in the laryngeal region, respectively (see the sketch below).
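A minimal sketch of Eqs. (25) and (26); the 70 nm diffusivity used in the demonstration is a Stokes-Einstein estimate at 288.16 K (an assumption, so the printed efficiencies land close to, but not exactly at, the quoted 3.63% and 4.37%):

```python
import math

def de_nasal(D: float, Q: float) -> float:
    """Nasal deposition efficiency (%), Eq. (25); D in m^2/s, Q in m^3/s."""
    return (1.0 - 0.9793 * math.exp(-36.51 * D**0.510 / Q**0.318)) * 100.0

def de_laryngeal(D: float, Q: float) -> float:
    """Laryngeal deposition efficiency (%), Eq. (26); valid up to 12 L/min."""
    return (1.0 - 0.9604 * math.exp(-10.73 * D**0.510 / Q**0.318)) * 100.0

Q = 12.0e-3 / 60.0   # 12 L/min expressed in m^3/s
D_70NM = 1.26e-9     # estimated diffusivity of a 70 nm particle (m^2/s)

print(de_nasal(D_70NM, Q))      # ~3.6 %
print(de_laryngeal(D_70NM, Q))  # ~4.4 %
```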
This implies that the mass- and surface-area-carrying particles (larger in size) were more likely to pass through the nasal cavity and either deposit in a high-impact region (e.g., the larynx) or penetrate deep into the lung. It should be noted that the empirical fitting could contribute to the increased laryngeal deposition, as a slight underprediction in the nasal region and overprediction in the laryngeal region were observed in the fittings (Fig. 10b and c) for low-diffusivity particles at low breathing rates (Q = 3 and 5 L/min). Further research is needed on particle deposition in the transition region, where deposition is extremely low and sensitive to the various transport mechanisms. Here "transition region" refers to the particle size range where the dominant transport mechanism changes from Brownian diffusion to inertia, as suggested by the extremely low particle diffusivity and inertia during the transition phase. More details can be found in the work of Tian and Ahmadi (2007) [58].

Figure 12 compares the percentages of the dosage (number, mass and surface area) in the human upper airway in the WEDM machine shop. Nasal and laryngeal doses and the penetration rate (an indication of the deep lung dosage) were considered. As clearly shown in Fig. 12, the majority of the particles (d = 5.52–98.2 nm) penetrated the nasal and upper airway. The nasal barrier was most effective in reducing particle number intake, but least efficient in trapping the mass-carrying particles. On the other hand, the laryngeal region filtered particles consistently across all evaluated metrics (number, mass and surface area). Breathing rate had minimal influence on the relative dosimetry: the laryngeal region was the least sensitive to breathing rate, while the nasal dosage by particle number count was the most affected.

Human upper airway dosage and penetration percentage in the WEDM machine shop: a number dosage; b mass dosage; c surface area dosage

To examine the effect of production, Table 4 displays the human upper airway dosimetry for the measured background concentration over 8 h of the working day (d = 5.52–98.2 nm). As expected, the background dosage estimated from the nanoparticle concentration at midnight was significantly lower than the dosage estimated from the nanoparticle concentration during working hours (Table 3). To quantify the difference, Fig. 13 presents the percentage increase of the airway doses in the machine shop with respect to the background concentration at a breathing rate of 12 L/min. A remarkable 3100% increase in mass dosage was observed in the laryngeal region, while an even higher percentage increase was seen for the penetrated dose; meanwhile, a 2664% increase was detected in the nasal cavity. Increases of 1626 to 2633% in surface area dose and 451 to 752% in number dose were also seen. Overall, the mass dosage was the most enhanced, and the WEDM production activity had the most profound effect on particle dosage across all regions and all metrics, especially in the laryngeal and downstream airways.

Table 4 Human upper airway dosages and penetration of nanoparticles from 5.52 to 98.2 nm with background concentration during 8 h in a day

Human airway dosage increase due to production over 8-h shift (Q = 12 L/min)

The combined experimental and numerical study showed an astonishing enhancement of human airway dosages as a result of the electrical discharge wire-cutting in the machine shop.
At a breathing rate of 12 L/min during a typical 8-h shift, the mass dosages to the nasal and laryngeal regions increased from 0.06 μg to 1.69 μg and from 0.11 μg to 3.42 μg, i.e., 28- and 31-fold, respectively. At the same time, the mass dose penetrating deep into the lung increased from 2.28 μg to 74.85 μg, or 33-fold, implying a significant increase of the exposure risks to the lower respiratory airways. Though on a relatively milder scale, the enhancement of the surface area and number dosages due to production was still significant (6- to 25-fold). The real-time increases of particle number and mass concentration over the background in the WEDM machine shop (Figs. 6 and 7) have an intrinsic effect on the airway dosages, one that is disproportionate to the measured concentration increases when viewed from the different metric perspectives. This finding implies that a pure number or mass concentration recommendation on the workplace exposure limit is insufficient. A particle size distribution, together with the deposition equations (e.g., Eqs. (25) and (26)), is critical to recognizing the actual exposure risks. In addition, human inhalation dosimetry is extremely sensitive to the real-time particle concentration and size distribution; particle concentrations averaged over a 24-h period will inevitably misrepresent information critical for realistic inhalation risk assessment.

Currently, the most stringent recommendations for ultrafine particle exposure are from the National Institute for Occupational Safety and Health (NIOSH, USA) and the German Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA). According to NIOSH (2011), the exposure limit for ultrafine titanium dioxide particles (d ≤ 100 nm) is recommended not to exceed 0.3 mg/m3 in normal working conditions. IFA (2009) set the benchmark limit for the ultrafine particle (1 nm ≤ d ≤ 100 nm, density ≤ 6000 kg/m3) concentration at the workplace at 40,000 particles/cm3. In the current measurements, an average mass concentration of 0.013881 mg/m3 and a number concentration of 82,884 particles/cm3 were detected in the WEDM machine shop during working hours. The working condition therefore met the NIOSH specification but failed the IFA benchmark.
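That comparison reduces to a simple check against the two limits as quoted above (note that the NIOSH value strictly applies to ultrafine titanium dioxide; applying it to the mixed-metal aerosol here follows the paper's reading):

```python
# 8-h working-hour averages measured in the WEDM shop (from the text)
MASS_CONC = 0.013881    # mg/m^3
NUMBER_CONC = 82884     # particles/cm^3

NIOSH_ULTRAFINE_TIO2 = 0.3   # mg/m^3, NIOSH (2011) ultrafine TiO2 limit
IFA_BENCHMARK = 40000        # particles/cm^3, IFA (2009), density < 6000 kg/m^3

print("NIOSH mass limit met:    ", MASS_CONC <= NIOSH_ULTRAFINE_TIO2)  # True
print("IFA number benchmark met:", NUMBER_CONC <= IFA_BENCHMARK)       # False
```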
Together with the deposition equations, powerful and accurate prediction of regional dosages with respect to the various dose metrics (e.g. number, mass, and surface area) can be made. An astonishing enhancement of human airway dosages in the WEDM machine shop was detected by the combined experimental and numerical study. Up to 33 folds in mass, 27 folds in surface area and 8 folds in number doses, penetrating to deeper airways, were detected compared to the background dosimetry. The real time particle concentration measurement showed substantial emission of nano-pollutants by WEDM machining activity, and the combined experimental and numerical study provided extraordinary details on human inhalation dosimetry. It was found out that human inhalation dosimetry is extremely sensitive to real time particle concentration and size distribution. Averaged particle concentration over 24-h period will inevitably misrepresent the sensible information critical for realistic inhalation risk assessment. In the WEDM machine shop, nanoparticle number concentration is dominated by the extremely small scales (d ≤ 30 nm) while mass and surface area concentration is dominated by larger scales (d ≥ 60 nm). Nasal barrier is most effective in reducing particle number intake; however least efficient in catching mass carrying particles. Laryngeal region is consistent in catching particles in all evaluated metrics (number, mass and surface). Majority of the particles (>84% in number, 92% in mass and surface area) (d = 5.52 – 98.2 nm) penetrate into deeper airways. Human upper airway dosages monotonically increase with the breathing rate as a result of the increased particle exposure due to larger air exchange. Human airway dosimetry in number, mass and surface area varies significantly. A complete inhalation risk assessment requires the knowledge of toxicity mechanisms in response to each individual metric. A pure number or mass concentration recommendation on the exposure limit at workplace is insufficient. A particle size distribution, together with the deposition equations, is critical to recognize the actual exposure risks. For ultrafine nanoparticles (d ≤ 100 nm), all current exposure limit recommendations are either on the total mass or number concentrations, and effects of the particle size distribution and the implication to regional airway dosages, critical for inhalation risk assessment, are not included. Further improvements in these areas are needed. ACGIH: American conference of governmental industrial hygienists CFD: CPC: Condensation particle counter DMA: Differential mobility analyzers DPM: Discrete phase model EC: Electrostatic classifier EDM: HEPA: High efficiency particular air filter IFA: German Institute for Occupational Safety and Health of the German Social Accident Insurance MOUDI: Micro-orifice uniform-deposit step size impactors NIOSH: OSHA: Scanning electron microscopic SMPS: Scanning mobility particle sizer WEDM: Wire-cut electrical discharge machine Oberdorster G, Oberdorster E, Oberdorster J. Nanotoxicology: an emerging discipline evolving from studies of ultrafine particles. Environ Health Perspect. 2005;113:823–39. World Health Organization (WHO). Asbestos (Chrysotile, Amosite, Crocidolite, Tremolite, Actinolite, and Anthophyllite). International Agency for Research on Cancer (IARC) Monographs on the Evaluation of Carcinogenic Risks to Humans. 2012;Volume 100c. Couper J. On the effects of black oxide of manganese when inhaled into the lungs. Br Ann Med Pharmacol. 1837;1:41–2. 
Wang JD, Huang CC, Hwang YH, Chiang JR, Lin JM, Chen JS. Manganese induced parkinsonism: an outbreak due to an unrepaired ventilation control system in a ferromanganese smelter. Br J Ind Med. 1989;46:856–9.
Gorell JM, Johnson CC, Rybicki BA, Peterson EL, Kortsha GX, Brown GG, Richardson RJ. Occupational exposures to metals as risk factors for Parkinson's disease. Neurology. 1997;48(3):650–8.
Meyer-Baron M, Schäper M, Knapp G, Thriel CV. Occupational aluminum exposure: evidence in support of its neurobehavioral impact. Neurotoxicology. 2007;28:1068–78.
Lucchini RG, Martin CJ, Doney BC. From Manganism to manganese-induced parkinsonism: a conceptual model based on the evolution of exposure. NeuroMolecular Med. 2009;11:311–21.
Destaillats H, Maddalena RL, Singer BC, Hodgson AT, McKone TE. Indoor pollutants emitted by office equipment: a review of reported data and information need. Atmos Environ. 2008;42:1371–88.
Shi XF, Chen R, Huo LL, Zhao L, Bai R, Long DX, Pui DYH, Rang WQ, Chen CY. Evaluation of nanoparticles emitted from printers in a clean chamber, a copy center and office rooms: health risks of indoor air quality. J Nanosci Nanotechnol. 2015;15:9554–64.
Tönshoff HK, Egger R, Klocke F. Environmental and safety aspects of electrophysical and electrochemical processes. CIRP Ann: Manuf Techn. 1996;45(2):553–68.
Sivapirakasam SP, Mathew J, Surianarayanan M. Constituent analysis of aerosol generated from die sinking electrical discharge machining process. Process Saf Environ Prot. 2011;89(2):141–50.
NIOSH. Current Intelligence Bulletin 63: Occupational Exposure to Titanium Dioxide. Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health. 2011. (http://www.cdc.gov/niosh/docs/2011-160/pdfs/2011-160.pdf).
IFA – Institut für Arbeitsschutz der Deutschen Gesetzlichen Unfallversicherungen. Criteria for assessment of the effectiveness of protective measures; 2009 (http://www.dguv.de/ifa/Fachinfos/Nanopartikel-am-Arbeitsplatz/Beurteilung-von-Schutzma%C3%9Fnahmen/index-2.jsp).
ACGIH. 2009 TLVs® and BEIs® based on the documentation of the threshold limit values for chemical substances and physical agents and biological exposure indices. Cincinnati: American Conference of Governmental Industrial Hygienists; 2009.
OSHA. Metal and metalloid particulates in workplace atmospheres (atomic absorption). Washington, DC: U.S. Department of Labor, Occupational Safety and Health Administration; 2002. (http://www.osha.gov/dts/sltc/methods/inorganic/id121/id121.html).
Heistracher T, Hofmann W. Physiologically realistic models of bronchial airway bifurcations. J Aerosol Sci. 1995;26(3):497–509.
Tian L, Ahmadi G. Transport and deposition of micro- and nano-particles in human tracheobronchial tree by an asymmetric multi-level bifurcation model. J Comput Multiphase Flows. 2012;4(2):159–82.
Zamankhan P, Ahmadi G, Wang Z, Hopke PH, Cheng YS, Su WC, Leonard D. Airflow and deposition of nanoparticles in a human nasal cavity. Aerosol Sci Technol. 2006;40:463–76.
Inthavong K, Wen J, Tu JY, Tian ZF. From CT scans to CFD modeling – fluid and heat transfer in a realistic human nasal cavity. Eng Appl Comput Fluid Mech. 2009;3(3):321–35.
Kennedy NJ, Hinds WC. Inhalability of large solid particles. J Aerosol Sci. 2002;33:237–55.
Anthony TR, Flynn MR. CFD model for a 3-D inhaling mannequin: verification and validation. Ann Occup Hyg. 2006;50(2):157–73.
Inthavong K, Ge QJ, Li XD, Tu JY. Detailed predictions of particle aspiration affected by respiratory inhalation and airflow. Atmos Environ.
2012;62:107–17.
Se CMK, Inthavong K, Tu JY. Inhalability of micron particles through the nose and mouth. Inhal Toxicol. 2010;22(4):287–300.
Inthavong K, Ge QJ, Li A, Tu JY. Source and trajectories of inhaled particles from a surrounding environment and its deposition in the respiratory airway. Inhal Toxicol. 2013;25(5):280–91.
Ghalati PF, Keshavarzian E, Abouali O, Faramarzi A, Tu JY, Shakibafard A. Numerical analysis of micro- and nano-particle deposition in a realistic human upper airway. Comput Biol Med. 2012;42:39–49.
Katz IM, Martonen T. Three-dimensional fluid particle trajectories in the human larynx and trachea. J Aerosol Med. 1996;9(4):513–20.
Zhang Z, Kleinstreuer C. Effect of particle inlet distributions on deposition in a triple bifurcation lung airway model. J Aerosol Med. 2001;14:13–29.
Hofmann W, Golser R, Balásházy I. Inspiratory deposition efficiency of ultrafine particles in a human airway bifurcation model. Aerosol Sci Technol. 2003;37(12):988–94.
Tian L, Ahmadi G. Fiber transport and deposition in human upper tracheobronchial airways. J Aerosol Sci. 2013;60:1–20.
Subramaniam RP, Richardson RB, Morgan KT, Kimbell JS, Guilmette RA. Computational fluid dynamics simulations of inspiratory airflow in the human nose and nasopharynx. Inhal Toxicol. 1998;10(2):91–120.
Matida EA, Dehaan WH, Finlay WH, Lange CF. Simulation of particle deposition in an idealized mouth with different small diameter inlets. Aerosol Sci Technol. 2003;37(11):924–32.
Inthavong K, Tu JY, Heschl C. Micron particle deposition in the nasal cavity using the v2–f model. Comput Fluids. 2011;51(1):184–8.
Ge QJ, Inthavong K, Tu JY. Local deposition fractions of ultrafine particles in a human nasal-sinus cavity CFD model. Inhal Toxicol. 2012;24(8):492–505.
Tian L, Inthavong K, Lidén G, Shang YD, Tu JY. Transport and deposition of welding fume agglomerates in a realistic human nasal airway. Ann Occup Hyg. 2016;1–17. (doi:10.1093/annhyg/mew018).
Xi J, Longest PW. Numerical predictions of submicrometer aerosol deposition in the nasal cavity using a novel drift flux approach. Int J Heat Mass Transf. 2008;51:5562–77.
Inthavong K, Choi L, Ji T, Ding S, Thien F. Micron particle deposition in a tracheobronchial airway model under different breathing conditions. Med Eng Phys. 2010;32(10):1198–212.
Häußermann S, Bailey AG, Bailey MR, Etherington G, Youngman M. The influence of breathing patterns on particle deposition in a nasal replica cast. J Aerosol Sci. 2002;33:923–33.
Tian L, Ahmadi G. Transport and deposition of nano-fibers in human upper tracheobronchial airways. J Aerosol Sci. 2016;91:22–32.
Dastan A, Abouali O, Ahmadi G. CFD simulation of total and regional fiber deposition in human nasal cavities. J Aerosol Sci. 2013;69:132–49.
Inthavong K, Wen J, Tian Z, Tu JY. Numerical study of fiber deposition in a human nasal cavity. J Aerosol Sci. 2008;39(3):253–65.
Zhong WQ, Yu AB, Liu XJ, Tong ZB, Zhang H. DEM/CFD-DEM modelling of non-spherical particulate systems: theoretical developments and applications. Powder Technol. 2016;302:108–52.
Hinds WC. Aerosol technology: properties, behavior, and measurement of airborne particles. 2nd ed. Wiley; 1999.
Chen R, Shi XF, Bai R, Rang WQ, Huo LL, Zhao L, Long DX, Pui DYH, Chen CY. Airborne nanoparticle pollution in a wire electrical discharge machining workshop and potential health risks. Aerosol Air Qual Res. 2015;15:284–94.
Inthavong K, Tu JY, Ahmadi G. Computational modelling of gas-particle flows with different particle morphology in the human nasal cavity.
J Comput Multiphase Flows. 2009;1(1):57–82. Shang YD, Inthavong K, Tu JY. Detailed micro-particle deposition patterns in the human nasal cavity influenced by the breathing zone. Comput Fluids. 2015;114:141–50. Doorly DJ, Taylor DJ, Schroter RC. Mechanics of airflow in the human nasal airways. Respir Physiol Neurobiol. 2008;163:100–10. Wen J, Inthavong K, Tu JY, Wang S. Numerical simulations for detailed airflow dynamics in human nasal cavity. Respir Physiol Neurobiol. 2008;161:125–35. Li L, Ahmadi G. Dispersion and deposition of spherical particles from point sources in a turbulent channel flow. J Comput Multiphase Flows. 1992;4(2):159–82. Inthavong K, Tian LP, Tu JY. Lagrangian particle modelling of spherical nanoparticle dispersion and deposition in confined flows. J Aerosol Sci. 2016;96:56–68. Ingham DB. Diffusion of aerosols from a stream flowing through a cylindrical tube. J Aerosol Sci. 1975;6(2):125–32. Inthavong K, Shang YD, Tu JY. Surface mapping for visualization of wall stresses during inhalation in a human nasal cavity. Respir Physiol Neurobiol. 2014;190:54–61. Kelly JT, Asgharian B, Kimbell JS, Wong B. Particle deposition in human nasal airway replicas manufactured by different methods. Part II: ultrafine particles. Aerosol Sci Technol. 2004;38:1072–9. Cheng KH, Cheng YS, Yeh HC, Swift D. Deposition of ultrafine aerosols in the head airways during natural breathing and during simulated breath holding using replicate human upper airway casts. Aerosol Sci Technol. 1995;23:465–74. Swift DL, Montassier N, Hopke PK, Karpen-Hayes K, Cheng YS, Su YF, Yeh HC, Strong JC. Inspiratory deposition of ultrafine particles in human nasal replicate cast. J Aerosol Sci. 1992;23(1):65–72. Swift DL, Strong JC. Nasal deposition of ultrafine 218Po aerosols in human subjects. J Aerosol Sci. 1996;27(7):1125–32. Naseri A, Abouali O, Ghalati PF, Ahmadi G. Numerical investigation of regional particle deposition in the upper airway of a standing male mannequin in calm air surroundings. Comput Biol Med. 2014;52:73–81. Cheng YS. Aerosol deposition in the extrathoracic region. Aerosol Sci Technol. 2003;37:659–71. Tian L, Ahmadi G. Particle deposition in turbulent duct flows – comparisons of different model predictions. J Aerosol Sci. 2007;38:377–97. Schmida O, Stoegera T. Surface area is the biologically most effective dose metric for acute nanoparticle toxicity in the lung. J Aerosol Sci. 2016;99:133–43. The financial supports provided by the National Natural Science Foundation of China (Grant No. 91643102, 21277080, 21477029, 91543206), and Australian Research Council (Grant No. DP160101953) are gratefully acknowledged. The work was supported by the National Natural Science Foundation of China (Grant No. 91643102, 21277080, 21477029, 91543206), and Australian Research Council (Grant No. DP160101953). The datasets supporting the conclusions of this article are included within the article. 
School of Engineering – Mechanical and Automotive, RMIT University, Bundoora, VIC, Australia Lin Tian, Yidan Shang, Kiao Inthavong & Jiyuan Tu CAS Key Lab for Biomedical Effects of Nanomaterials and Nanosafety & CAS Center for Excellence in Nanoscience, Beijing Key Laboratory of Ambient Particles Health Effects and Prevention Techniques, National Center for Nanoscience and Technology of China, Beijing, China Rui Chen, Ru Bai & Chunying Chen Key Laboratory of Ministry of Education for Advanced Reactor Engineering and Safety, Institute of Nuclear and New Energy Technology Tsinghua University, PO Box 1021, Beijing, 100086, China Jiyuan Tu Lin Tian Yidan Shang Rui Chen Ru Bai Chunying Chen Kiao Inthavong LT, RC, JYT, CYC, KI designed the combined experimental and numerical study. LT and YDS designed and implemented the computational modeling, interpreted the results, developed the empirical models, and wrote the manuscript. KI and JYT contributed in numerical model design and validation, and assisted in writing the manuscript. RC, RB and CYC designed and implemented the experimental measurement in the WEDM machine shop, interpreted the results, and assisted in writing the manuscript. All authors read and approved the final manuscript. Correspondence to Chunying Chen or Jiyuan Tu. Tian, L., Shang, Y., Chen, R. et al. A combined experimental and numerical study on upper airway dosimetry of inhaled nanoparticles from an electrical discharge machine shop. Part Fibre Toxicol 14, 24 (2017). https://doi.org/10.1186/s12989-017-0203-7 Inhalation toxicity Human upper airways Particle dosimetry
A recent survey of 7 social networking sites has a mean of 14.69 million visitors for a specified month. The standard deviation was 4.4 million. Find the 95% confidence interval of the true mean. Assume the variable is normally distributed. Round your answers to two decimal places. — Annette Sabin, 2021-12-20 (Answered)

Jeremy Merritt

\(n = 7\), \(\bar{x} = 14.69\), \(s = 4.4\)

95% CI of the mean: \(\mu = \bar{x} \pm \frac{z \cdot s}{\sqrt{n}}\)

\(\mu = 14.69 \pm \frac{1.96 \cdot 4.4}{\sqrt{7}} = 14.69 \pm 3.26 = (11.43,\ 17.95)\)

Steve Hirano

Why \(z = 1.96\)? Because for a 95% CI, \(z = 1.96\). (A numerical check of this interval, including the t-based alternative for small samples, appears in the code sketch at the end of this thread.)

Related questions:

In a survey of 3307 adults, 1464 say they started paying bills online in the last year. Construct a \(99\%\) confidence interval for the population proportion. Interpret your results. Choose the correct answer below.
A) With \(95\%\) confidence, it can be said that the population proportion of adults who say they have started paying bills online in the last year is between the endpoints of the given confidence interval.
B) The endpoints of the given confidence interval show that adults pay bills online \(99\%\) of the time.
C) With \(99\%\) confidence, it can be said that the sample proportion of adults who say they have started paying bills online in the last year is between the endpoints of the given confidence interval.

In a survey of 2695 adults, 1446 say they have started paying bills online in the last year.
A) With \(95\%\) confidence, it can be said that the sample proportion of adults who say they have started paying bills online in the last year is between the endpoints of the given confidence interval.
B) With \(99\%\) confidence, it can be said that the population proportion of adults who say they have started paying bills online in the last year is between the endpoints of the given confidence interval.
C) The endpoints of the given confidence interval show that adults pay bills online \(99\%\) of the time.

In a survey of 2085 adults in a certain country conducted during a period of economic uncertainty, \(63\%\) thought that wages paid to workers in industry were too low. The margin of error was 8 percentage points with \(90\%\) confidence. For parts (1) through (4) below, indicate which represent a reasonable interpretation of the survey results. For those that are not reasonable, explain the flaw.
1) We are \(90\%\) confident \(63\%\) of adults in the country during the period of economic uncertainty felt wages paid to workers in industry were too low.
A) The interpretation is reasonable.
B) The interpretation is flawed. The interpretation provides no interval about the population proportion.
C) The interpretation is flawed. The interpretation suggests that this interval sets the standard for all the other intervals, which is not true.
D) The interpretation is flawed. The interpretation indicates that the level of confidence is varying.
2) We are \(82\%\) to \(98\%\) confident \(63\%\) of adults in the country during the period of economic uncertainty felt wages paid to workers in industry were too low. Is the interpretation reasonable?
3) We are \(90\%\) confident that the interval from 0.55 to 0.71 contains the true proportion of adults in the country during the period of economic uncertainty who believed wages paid to workers in industry were too low.
4) In \(90\%\) of samples of adults in the country during the period of economic uncertainty, the proportion who believed wages paid to workers in industry were too low is between 0.55 and 0.71.

A certain report stated that in a survey of 2006 American adults, \(24\%\) said they believed in astrology.
a) Calculate a confidence interval at the \(99\%\) confidence level for the proportion of all adult Americans who believe in astrology. (Round your answers to three decimal places.) (_______, _______)
b) What sample size would be required for the width of a \(99\%\) CI to be at most 0.05, irrespective of the value of \(\hat{p}\)? (Round your answer up to the nearest integer.)

A survey of several 10 to 11 year olds recorded the following amounts spent on a trip to the mall: $19.17, $21.18, $20.38, $25.08. Construct the \(90\%\) confidence interval for the average amount spent by 10 to 11 year olds on a trip to the mall. Assume the population is approximately normal.
Step 4 of 4: Construct the \(90\%\) confidence interval. Round your answer to two decimal places.

Survey A asked 1000 people how they liked the new movie Avengers: Endgame and \(88\%\) said they did enjoy it. Survey B also concluded that \(82\%\) of people liked the movie but they asked 1600 total moviegoers. Which of the following is true about this comparison?
1. The margin of error is the same for both.
2. The confidence interval is smaller for Survey B.
3. Increasing the number of people asked does not change the \(95\%\) confidence interval.
4. Survey A is more accurate since the percentage is higher.
5. Survey A has more approvals than Survey B.
6. Survey A is better than Survey B since it has a higher percentage.
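A minimal numerical check of the answered interval above, sketched in Python (assuming NumPy/SciPy are available). The z-based interval reproduces the posted answer; the t-based interval shows what changes if the 4.4 million is treated as a sample standard deviation from only n = 7 sites.

```python
import math
from scipy import stats

# Survey data from the question above
n, xbar, s = 7, 14.69, 4.4

# z-based 95% CI (treats 4.4 million as a known population SD,
# as in the posted answer)
z = stats.norm.ppf(0.975)            # 1.96
half_z = z * s / math.sqrt(n)
print(f"z interval: ({xbar - half_z:.2f}, {xbar + half_z:.2f})")  # (11.43, 17.95)

# t-based 95% CI (appropriate if 4.4 is a *sample* SD with n = 7)
t = stats.t.ppf(0.975, df=n - 1)     # 2.447 with 6 degrees of freedom
half_t = t * s / math.sqrt(n)
print(f"t interval: ({xbar - half_t:.2f}, {xbar + half_t:.2f})")  # (10.62, 18.76)
```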
Technical advance | Open Access | Open Peer Review | Published: 26 November 2018

Using the Beta distribution in group-based trajectory models

Jonathan Elmer1, Bobby L. Jones2 & Daniel S. Nagin3

BMC Medical Research Methodology, volume 18, Article number: 152 (2018)

We demonstrate an application of Group-Based Trajectory Modeling (GBTM) based on the beta distribution. It is offered as an alternative to the normal distribution for modeling continuous longitudinal data that are poorly fit by the normal distribution even with censoring. The primary advantage of the beta distribution is the flexibility of the shape of the density function. GBTM is a specialized application of finite mixture modeling designed to identify clusters of individuals who follow similar trajectories. Like all finite mixture models, GBTM requires that the distribution of the data composing the mixture be specified. To our knowledge this is the first demonstration of the use of the beta distribution in GBTM. A case study of a beta-based GBTM analyzes data on the neurological activity of comatose cardiac arrest patients. The case study shows that the summary measure of neurological activity, the suppression ratio, is not well fit by the normal distribution, but due to the flexibility of the shape of the beta density function, the distribution of the suppression ratio by trajectory appears to be well matched by the estimated beta distribution by group. The addition of the beta distribution to the already available distributional alternatives in software for estimating GBTM is a valuable augmentation to extant distributional alternatives.

A trajectory describes the evolution of a behavior, biomarker, or some other repeated measure of interest over time. Group-based trajectory modeling (GBTM) [1], also called growth mixture modeling [2], is a specialized application of finite mixture modeling designed to identify clusters of individuals who follow similar trajectories. Originally developed to study the developmental course of criminal behavior [3], GBTM is now widely applied in biomedical research in such diverse application domains as chronic kidney disease progression [4], obesity [5, 6], pain [7], smoking [8], medication adoption and adherence [9, 10], and concussion symptoms [11]. Like all finite mixture models, GBTM requires that the distribution of the data composing the mixture be specified, although there are no theoretical limits on the distributions that could be used. In GBTM, parameters of the specified distribution (e.g. mean and variance of a normal distribution) are allowed to vary across trajectory groups. To our knowledge, previously published applications have all specified the normal distribution, perhaps with censoring, the Poisson distribution, perhaps with zero-inflation, or the binary logit function. Real-world continuous biomedical data are frequently not normally distributed even after allowing censoring. This is particularly true of biomarker data, which are generally positive, right skewed, and often zero-inflated. This creates a need for flexible alternatives to the Gaussian distribution [12]. In this article, we demonstrate an application of GBTM based on the beta distribution. It is offered as an alternative to the normal distribution for modeling continuous longitudinal data that are poorly fit by other distributions. The primary advantage of the beta distribution is the flexibility of the shape of the density function.
The normal density function, even in its censored form, must follow some portion of its familiar bell-shaped form, whereas the shape of the beta distribution is far less constrained. The disadvantage of the beta distribution is that the data under study must be transformable to a 0–1 scale. The beta distribution can be parameterized in several different ways. One which is particularly useful for our purposes was proposed by [12]. Let y denote a beta distributed random variable:

$$ P\left(y;\mu,\phi\right)=\frac{\Gamma\left(\phi\right)}{\Gamma\left(\mu\phi\right)\,\Gamma\left(\left(1-\mu\right)\phi\right)}\,y^{\mu\phi-1}\left(1-y\right)^{\left(1-\mu\right)\phi-1} $$

where 0 < y < 1, 0 < μ < 1 and ϕ > 0. Under this parameterization E(y) = μ and Var(y) = μ(1 − μ)/(1 + ϕ). The parameter ϕ is known as the precision parameter, because for any μ a larger value of ϕ results in a smaller Var(y). We turn now to incorporating the beta distribution into GBTM. In describing a GBTM, we denote the distribution of trajectories by P(Yi), where the random vector Yi = (yi1, yi2, …, yiT) represents individual i's longitudinal sequence of measurements over T measurement occasions. The GBTM assumes that the population distribution of trajectories arises from a finite mixture composed of J groups. The likelihood for each individual i, conditional on the number of groups J, may be written as:

$$ P\left({Y}_i\right)=\sum\limits_{j=1}^{J}{\pi}_j\, P\left({Y}_i \mid j;{\theta}_j\right) $$

where πj is the probability of membership in group j, and the conditional distribution of Yi given membership in j is indexed by the unknown parameter vector θj. Typically, the trajectory is modeled by a polynomial function of time (or age). For the case where P(Yi | j; θj) is assumed to follow the beta distribution, its mean at time t for group j, μjt, is linked to time as follows:

$$ {\mu}_{jt}={\beta}_{0j}+{\beta}_{1j}t+{\beta}_{2j}{t}^2+\dots $$

where, in principle, the polynomial can be of any order (Footnote 1). Note that the parameters linking μjt to time are trajectory group specific, thus allowing the shapes of trajectories to vary freely across groups. Also associated with each trajectory group is a group specific precision parameter, ϕj. The remaining components of θj pertain to the parameterization of πj, which in this case is specified to follow a multinomial logistic function. For a given j, conditional independence is assumed. In other words, except as explained by individual i's trajectory group membership, serial observations in the random vector Yi are assumed to be independent of one another. Thus, we may write:

$$ P\left({Y}_i \mid j;{\theta}_j\right)=\prod\limits_{t=1}^{T}p\left({y}_{it} \mid j;{\theta}_j\right) $$

While conditional independence is assumed at the level of the latent trajectory group, at the population level outcomes are not conditionally independent because they depend on a latent construct, trajectory group membership. See chapter 2 of [1] for a discussion of the conditional independence assumption. The GBTM modeling framework does not require that the random vector Yi be complete for all individuals. For the baseline GBTM specified above, missing values in Yi are assumed missing at random. However, for applications such as that described below where measurement ends due to some external event (in this case the death of the patient or the patient awakening from coma), an extension of GBTM described in [13] may be used to account for non-random dropout.
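The (μ, ϕ) parameterization above maps to the standard two-shape-parameter beta distribution via α = μϕ and β = (1 − μ)ϕ. A minimal Python sketch of this mapping, verifying the mean and variance formulas numerically (this is an illustration, not code from the paper):

```python
import numpy as np
from scipy import stats

def beta_from_mu_phi(mu, phi):
    """Map the (mu, phi) parameterization to standard Beta(alpha, beta)
    shape parameters: alpha = mu*phi, beta = (1 - mu)*phi."""
    return mu * phi, (1.0 - mu) * phi

mu, phi = 0.42, 0.77
a, b = beta_from_mu_phi(mu, phi)
dist = stats.beta(a, b)

# Check E(y) = mu and Var(y) = mu*(1 - mu)/(1 + phi)
print(dist.mean(), mu)                        # both 0.42
print(dist.var(), mu * (1 - mu) / (1 + phi))  # both ~0.1376

y = dist.rvs(size=10_000, random_state=1)     # simulated draws on (0, 1)
```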
Detailed discussion of the methods to approach selection of J, the number of latent groups in the population, and the order of the polynomial specifying each group's trajectory are beyond the scope of this paper and have been previously described [1]. Briefly, no test statistic identifies the number of components in a finite mixture [14, 15]. Also, as argued in [1], in most application domains of GBTM the population is not literally composed of a finite mixture of groups. Instead the finite mixture is intended to approximate an underlying unknown continuous distribution of trajectories for the purpose of identifying and summarizing its salient features. As described in [14, 16], finite mixture models are a valuable tool for approximating an unknown continuous distribution. In this paradigm, model selection is performed by combining test statistics such as AIC and BIC, which can guide the statistician to identify which model best fits the data, with expert knowledge of which model best reveals distinctive trajectory groups that are substantively interesting. The order of the polynomial used to model each group's trajectory is typically determined by starting with an assumed maximum order for each trajectory group, then successively reducing the order if the highest order term is statistically insignificant. All models are estimated with software that is freely available at https://www.andrew.cmu.edu/user/bjones/. The maximization is performed using a general quasi-Newton procedure [17, 18] and the variance-covariance of parameter estimates is estimated by the inverse of the information matrix.

We demonstrate use of the beta distribution in a GBTM of data quantifying brain activity of 396 comatose patients resuscitated from cardiac arrest. The University of Pittsburgh Institutional Review Board approved all aspects of this study. The data result from an observational cohort study of consecutive comatose patients hospitalized at a single academic center from April 2010 to October 2014 that underwent continuous electroencephalographic (EEG) monitoring for at least 6 h after resuscitation from cardiac arrest. Not included are patients that arrested from trauma or a catastrophic neurological event, and those who awakened, died or were transitioned to comfort care within 6 h of hospital arrival. The point of departure for our demonstration is prior work that applied GBTM to an indicator of brain activity, the suppression ratio, a quantitative measure of the proportion of a given EEG epoch that is suppressed below a particular voltage threshold for activity [19]. In the first hours after cardiac arrest, many patients' EEGs are quite suppressed (50–80 %). Prior work [19] showed that patients with persistently low or rapidly improving suppression ratios often make good recoveries, while persistent suppression over the first 36 h is ominous. Our main concern with the prior application was the assumption that the suppression ratio followed a censored normal distribution with a minimum of 0 and a maximum of 1. To illustrate the basis for our concern, consider Fig. 1, which reports a histogram of the median suppression ratio at hour 12. It has two spikes close to the minimum of 0 and the maximum of 1. In between, the suppression ratio is approximately uniformly distributed. The histogram bears no resemblance to the normal distribution. While it is possible for a mixture of censored normal distributions to approximate the histogram in Fig. 1, the distribution of suppression ratio data within the four groups reported in [19] does not resemble the normal distribution. By contrast, overlying the histogram is a beta distribution with μ = 0.42 and ϕ = 0.77, which closely resembles the observed distribution of the suppression ratio.

Fig. 1. The Distribution of Hour 12 Suppression Ratio Data with the Best Fitting Beta Distribution. *The sum of the heights of the relative frequency density bars multiplied by their width sum to 1.0 so as to conform with the estimated beta density
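A fit like the one shown in Fig. 1 (μ = 0.42, ϕ = 0.77) can be obtained by the method of moments, which follows directly from the mean and variance formulas given earlier: μ̂ is the sample mean and ϕ̂ = μ̂(1 − μ̂)/s² − 1. A minimal sketch, with simulated data standing in for the suppression ratio measurements:

```python
import numpy as np

def beta_method_of_moments(y):
    """Moment estimates under the (mu, phi) parameterization:
    E(y) = mu and Var(y) = mu*(1 - mu)/(1 + phi), so
    mu_hat = mean(y) and phi_hat = mu_hat*(1 - mu_hat)/var(y) - 1."""
    mu_hat = np.mean(y)
    phi_hat = mu_hat * (1.0 - mu_hat) / np.var(y) - 1.0
    return mu_hat, phi_hat

# Example: recover the parameters from simulated beta draws
rng = np.random.default_rng(0)
mu, phi = 0.42, 0.77
y = rng.beta(mu * phi, (1 - mu) * phi, size=50_000)
print(beta_method_of_moments(y))  # approximately (0.42, 0.77)
```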
Figure 2 shows a three group, beta-based trajectory model over the first 48 h of suppression ratio measurements (Footnote 2). Because EEG monitoring may be ended either because the patient dies or awakens, the model accounted for non-random subject attrition as described in [13]. The three group model was selected because it optimized both BIC and AIC compared to fewer groups, and models with four or more groups were sometimes unstable and did not identify additional trajectory groups that were clinically interesting in terms of their survival prospects. For the three group model, group 1 is specified to follow a cubic function of time, and groups 2 and 3 are specified to follow quadratic functions of time because, as discussed above, the cubic terms of these trajectories were statistically insignificant at the .05 level. As was found in the prior analysis based on the censored normal assumption, trajectory group is strongly associated with survival probability. Overall, only about a third of patients survive to hospital discharge. However, among group 3 patients, who account for an estimated 32.0 % of the cohort and follow a persistently high suppression ratio trajectory, only an estimated 2.3 % survive. By contrast group 1, which accounts for an estimated 26.8 % of patients, follows a persistently low suppression ratio trajectory. For this group survival probability is an estimated 69.8 %. In between are group 2 patients.

Fig. 2. Three Group Trajectory Model with Beta Distributed Suppression Ratio

How well do these beta distribution-based trajectories fit the data? Fig. 3 overlays the actual distribution of the suppression ratio data by trajectory group with the predicted distribution according to the beta distribution at hour 24. Inspection of the figure reveals that for each trajectory group the actual and predicted values correspond closely even though across trajectory groups the distributions of the suppression ratio are quite different. Trajectory group 1 (Fig. 3a) and trajectory group 2 (Fig. 3b) have right skewed suppression ratio distributions, whereas the distribution for trajectory group 3 (Fig. 3c) is left skewed. Moreover, the right skews of groups 1 and 2 are distinctly different, with group 1's skew far more extreme than group 2's. The fit between the actual and predicted data distributions by trajectory group is similarly good for other hours.

Fig. 3. Distribution of 24 h suppression ratio data with the best-fitting beta distribution for Group 1 (a), Group 2 (b) and Group 3 (c). *The sum of the heights of the relative frequency density bars multiplied by their width sum to 1.0 so as to conform with the estimated beta density

We note that the use of the beta distribution does require an adjustment for boundary observations, namely data equal to 0 or 1, which are formally not feasible for a beta distributed random variable. For boundary observations we follow the suggestion of [20] and add/subtract to/from 0/1 data points a small amount equal to 0.5 divided by the number of subjects, 396.
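The boundary adjustment just described is a one-line transformation. A minimal sketch, assuming the suppression ratio values arrive as an array on [0, 1]:

```python
import numpy as np

def adjust_boundaries(y, n_subjects):
    """Nudge boundary observations into (0, 1) as described above:
    add/subtract 0.5 / n_subjects at the 0/1 boundaries ([20])."""
    eps = 0.5 / n_subjects
    y = np.asarray(y, dtype=float).copy()
    y[y <= 0.0] = eps
    y[y >= 1.0] = 1.0 - eps
    return y

sr = np.array([0.0, 0.13, 0.97, 1.0])
print(adjust_boundaries(sr, n_subjects=396))
# [0.0012626  0.13       0.97       0.9987374]
```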
However, a useful generalization to avoid this ad hoc adjustment would be the addition of the equivalent of the zero-inflation factor in the Poisson distribution to account for data at the boundary values of the beta distribution. We have demonstrated an extension of GBTM that adds the beta distribution to the heretofore commonly applied distributions for modeling trajectories: the censored normal, zero-inflated Poisson, and binary logit. The beta option provides an alternative to the censored normal distribution for modeling continuous or approximately continuous outcomes measured over age or time. Figure 1 makes clear that the normal distribution poorly fits the suppression ratio data whereas the beta distribution provides a far better fit. Figure 3 also makes clear that due to the flexibility of the beta distribution a beta-based GBTM can accommodate differences in the distribution of the suppression ratio across trajectory groups and over time that are not readily accommodated by the normal distribution.

Footnote 1: Up to 5th order polynomials can be estimated in the software used to estimate the models reported in the case study.

Footnote 2: The call to the Stata-based trajectory estimation used to estimate this model was as follows: traj, var(srt1-srt48) indep(t1-t48) model(beta) order(3 2 2) dropout(0 0 0), where srt* is the median suppression ratio at hour * and t* is the hour of measurement from 1 to 48, and the "dropout" component of the call activates the generalization of GBTM to account for nonrandom subject attrition.
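To make the data-generating process implied by this model concrete, the following Python sketch simulates one subject's 48-hour suppression ratio trajectory from a three group beta-based GBTM. The group proportions loosely echo those reported above (26.8 %, roughly 41 %, 32.0 %), but the polynomial coefficients and precision values are hypothetical placeholders, not the fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical group parameters: group 1 cubic, groups 2-3 quadratic in time.
# These values are illustrative only, NOT the estimates reported in the paper.
groups = [
    {"pi": 0.27, "beta": [0.08, -0.010, 0.0004, -4e-6], "phi": 5.0},  # low SR
    {"pi": 0.41, "beta": [0.55, -0.015, 0.0002],        "phi": 4.0},  # declining
    {"pi": 0.32, "beta": [0.85, -0.002, 0.00002],       "phi": 6.0},  # high SR
]
hours = np.arange(1, 49)

def simulate_subject(g):
    # polyval expects highest-order coefficient first, so reverse the list;
    # clip keeps the linear mean link inside the beta distribution's support
    mu = np.clip(np.polyval(g["beta"][::-1], hours), 0.01, 0.99)
    a, b = mu * g["phi"], (1 - mu) * g["phi"]
    return rng.beta(a, b)  # one suppression-ratio trajectory, hours 1..48

j = rng.choice(len(groups), p=[g["pi"] for g in groups])  # latent group draw
trajectory = simulate_subject(groups[j])
```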
Abbreviations. EEG: electroencephalography; GBTM: group-based trajectory modeling; ML: maximum likelihood; PPGM: posterior probability of group membership; SR: suppression ratio

References
1. Nagin D. Group-based modeling of development. Cambridge, Mass: Harvard University Press; 2005.
2. Muthen B. Latent variable analysis. In: Kaplan D, editor. SAGE handbook of quantitative methodology for the social sciences. Thousand Oaks: SAGE Publications, Inc.; 2004. p. 345.
3. Nagin DS, Land KC. Age, criminal careers, and population heterogeneity: specification and estimation of a nonparametric, mixed Poisson model. Criminology. 1993;31:327–62.
4. Burckhardt P, Nagin DS, Padman R. Multi-trajectory models of chronic kidney disease progression. AMIA Annu Symp Proc. 2016;2016:1737–46.
5. Malhotra R, Ostbye T, Riley CM, Finkelstein EA. Young adult weight trajectories through midlife by body mass category. Obesity (Silver Spring). 2013;21:1923–34.
6. Reinders I, Murphy RA, Martin KR, Brouwer IA, Visser M, White DK, Newman AB, Houston DK, Kanaya AM, Nagin DS, Harris TB; Health, Aging and Body Composition Study. Body mass index trajectories in relation to change in lean mass and physical function: the Health, Aging and Body Composition Study. J Am Geriatr Soc. 2015;63:1615–21.
7. Nicholls E, Thomas E, van der Windt DA, Croft PR, Peat G. Pain trajectory groups in persons with, or at high risk of, knee osteoarthritis: findings from the knee clinical assessment study and the osteoarthritis initiative. Osteoarthr Cartil. 2014;22:2041–50.
8. Lessov-Schlaggar CN, Kristjansson SD, Bucholz KK, Heath AC, Madden PA. Genetic influences on developmental smoking trajectories. Addiction. 2012;107:1696–704.
9. Lo-Ciganic WH, Gellad WF, Huskamp HA, Choudhry NK, Chang CC, Zhang R, Jones BL, Guclu H, Richards-Shubik S, Donohue JM. Who were the early adopters of dabigatran?: an application of group-based trajectory models. Med Care. 2016;54:725–32.
10. Juarez DT, Williams AE, Chen C, Daida YG, Tanaka SK, Trinacty CM, Vogt TM. Factors affecting medication adherence trajectories for patients with heart failure. Am J Manag Care. 2015;21:e197–205.
11. Yeates KO, Taylor HG, Rusin J, Bangert B, Dietrich A, Nuss K, Wright M, Nagin DS, Jones BL. Longitudinal trajectories of postconcussive symptoms in children with mild traumatic brain injuries and their relationship to acute clinical status. Pediatrics. 2009;123:735–43.
12. Ferrari S, Cribari-Neto F. Beta regression for modelling rates and proportions. J Appl Stat. 2004;31:799–815.
13. Haviland AM, Jones BL, Nagin DS. Group-based trajectory modeling extended to account for nonrandom participant attrition. Sociol Methods Res. 2011;40:367–90.
14. Everitt B, Hand DJ. Finite mixture distributions. London; New York: Chapman and Hall; 1981.
15. Titterington DM, Smith AFM, Makov UE. Statistical analysis of finite mixture distributions. Chichester; New York: Wiley; 1985.
16. Heckman J, Singer B. A method for minimizing the impact of distributional assumptions in econometric models for duration data. Econometrica. 1984;52:271–320.
17. Dennis JE, Gay DM, Welsch RE. An adaptive nonlinear least-squares algorithm. ACM Trans Math Softw. 1981;7:348–68.
18. Dennis JE, Mei HHW. Two new unconstrained optimization algorithms which use function and gradient values. J Optim Theory Appl. 1979;28:453–82.
19. Elmer J, Gianakas JJ, Rittenberger JC, Baldwin ME, Faro J, Plummer C, Shutter LA, Wassel CL, Callaway CW, Fabio A; Pittsburgh Post-Cardiac Arrest Service. Group-based trajectory modeling of suppression ratio after cardiac arrest. Neurocrit Care. 2016;25:415–23.
20. Verkuilen J, Smithson M. Mixed and mixture regression models for continuous bounded responses using the beta distribution. J Educ Behav Stat. 2012;37:82–113.

Acknowledgements: Support for this research was provided by the Center for Machine Learning and Health, Carnegie Mellon University. Dr. Elmer's research time is supported by the NIH through grant 5K23NS097629.

Author information: Jonathan Elmer: Department of Emergency Medicine, Critical Care Medicine and Neurology, University of Pittsburgh, Pittsburgh, PA, USA. Bobby L. Jones: University of Pittsburgh Medical Center, Pittsburgh, PA, USA. Daniel S. Nagin: Heinz College, Carnegie Mellon University, Pittsburgh, PA, 15206, USA.

Authors' contributions: JE, BLJ and DSN each made substantial contributions to the conception and design of the work, jointly performed analysis of the data, have given approval for its publication and take responsibility for the work. JE was responsible for data acquisition. DSN and JE drafted the manuscript, and BLJ provided critical revisions of important intellectual content. All authors read and approved the final manuscript. Correspondence to Daniel S. Nagin.

Ethics: The University of Pittsburgh Institutional Review Board approved all aspects of this study. Consent to participate is not applicable.

Keywords: Beta distribution
Projected economic evaluation of the national implementation of a hypothetical HIV vaccination program among adolescents in South Africa, 2012

Nishila Moodley1,2,3, Glenda Gray4,5 & Melanie Bertram6

Adolescents in South Africa are at high risk of acquiring HIV. The HIV vaccination of adolescents could reduce HIV incidence and mortality. The potential impact and cost-effectiveness of a national school-based HIV vaccination program among adolescents was determined. The national HIV disease and cost burden was compared with (intervention) and without HIV vaccination (comparator) given to school-going adolescents using a semi-Markov model. Life table analysis was conducted to determine the impact of the intervention on life expectancy. Model inputs included measures of disease and cost burden and hypothetical assumptions of vaccine characteristics. The base-case HIV vaccine modelled cost US$ 12 per dose, had a vaccine efficacy of 50 %, and conferred a duration of protection of 10 years achieved at a coverage rate of 60 %, with annual boosters required. Incremental cost-effectiveness ratios (ICER) were calculated with life years gained (LYG) serving as the outcome measure. Sensitivity analyses were conducted on the vaccine characteristics to assess parameter uncertainty. The HIV vaccination model yielded an ICER of US$ 5 per LYG (95 % CI US$ 2.77–11.61) compared with the comparator, which is considerably less than the national willingness-to-pay threshold of cost-effectiveness. This translated to an 11 % increase in per capita costs, from US$ 80 to US$ 89. National implementation of this intervention could potentially result in an estimated cumulative gain of 23.6 million years of life (95 % CI 8.48–34.3 million years) among vaccinated adolescents aged 10–19 years. The 10 year absolute risk reduction projected by vaccine implementation was 0.42 % for HIV incidence and 0.41 % for HIV mortality, with an increase in life expectancy noted across all age groups. The ICER was sensitive to vaccine efficacy, coverage and vaccine pricing in the sensitivity analysis. A national HIV vaccination program would be cost-effective and would avert new HIV infections and decrease the mortality and morbidity associated with HIV disease. Decision makers would have to discern how these findings, derived from local data and reflective of the South African epidemic, can be integrated into national long term health planning should a HIV vaccine become available.

South Africa has the largest human immunodeficiency virus (HIV) epidemic in the world [1]. In 2012, 6.4 million South Africans were living with HIV; 203,000 individuals had lost their lives to it and another 395,000 South Africans had acquired the infection [2, 3]. South Africa's life expectancy was understandably adversely affected by the considerable burden of HIV disease [4]. However, life expectancy has since increased from 53 years in 2006 to 61 years in 2012, and ensuring its continued improvement remains a priority of the national department of health [5]. The gains made in improving life expectancy are in no small part attributable to 'the largest antiretroviral (ART) rollout in the world' that South Africa has managed to achieve [6]. To sustain this achievement is no mean feat. The growing number of patients previously initiated on ART need to be retained in care.
While the public sector retention rate approximates 75 % after one year on treatment, South Africa needs to continuously enroll in excess of 500 000 new patients onto ART annually to maintain an ART enrolment ratio exceeding 1.3 [4]. This brings into question the long term sustainability of the ART program, considering the massive financial and human resource implications that the expansion of the ART program entails [7]. Data suggest that close to 25 % of all new HIV infections occurred among young women aged 15–24 years, emphasizing this group as a major driver of the epidemic [2]. The HIV prevalence in this age group is important as it serves as a proxy for HIV incidence. HIV prevalence declined by 18 % in this age group from 2008 to 2012, from 8.7 % to 7.1 %; however there remains a need for intensified prevention efforts [8]. Despite massive accomplishments made in establishing the ART program, women aged 15–24 years persist as the group with the poorest access to this life-saving treatment. The barriers that young people face in accessing public health services have been well documented [9]. Issues concerning lack of confidentiality and privacy, unfriendly and judgmental attitudes of health care staff and inaccessible clinic hours persist [10, 11]. It was against this backdrop that the re-engineering of primary health care in South Africa targeted the development of a school-based sexual and reproductive health service as a priority [12]. The current HIV prevention program has enjoyed limited success in tackling the high rate of new infections in South Africa, highlighting the need for an alternative intervention. Vaccines are regarded as the most cost-effective prevention intervention in the world [13]. Rerks-Ngarm et al tested the first HIV vaccine regimen (RV144/Thai trial) to show moderate vaccine efficacy in humans, in Thailand (2009) [14]. The study evaluated a prime-boost strategy: a recombinant canarypox vector (ALVAC-HIV [vCP1521]) was administered at baseline and at weeks 4, 12 and 24, with recombinant glycoprotein 120 subunit vaccine (AIDSVAX B/E) boosts given together with the ALVAC at weeks 12 and 24. The prime-boost HIV vaccine regimen used resulted in modest efficacy of 31 % over 3.5 years [14]. While the effects were not durable, they were indeed promising. After the HIV vaccine regimen was modified to make it Clade C specific and to change the protein and adjuvant, a potential vaccine regimen was entered into Phase I/IIb clinical trials at six major South African centers to assess safety and immunogenicity (the HIV Vaccine Trials Network (HVTN) 100 study) [15]. Additionally, a pivotal phase IIb/III HIV vaccine efficacy trial, designated HVTN 702, is planned to take place in South Africa and will evaluate the same regimen as HVTN 100, should HVTN 100 prove to be immunogenic. The aim of this analysis was to guide decision makers in assessing the value of national implementation of a potential HIV vaccine among school-based adolescents in South Africa. The work determined the impact of vaccination on HIV disease burden and associated health costs, and evaluated the cost-effectiveness and potential changes in life expectancy, based on the premise that school-based care would address the issues of equity and accessibility in health care that adolescent South Africa faces. The study methodology was compliant with the reporting guidelines of the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement [16].
Ten year old adolescents attending South African schools in 2012 were considered for vaccination. This intervention program was introduced as part of the national health initiative to introduce school-based sexual and reproductive health services [12], and targeted learners prior to the onset of sexual activity. The cohort was modelled through a lifetime horizon of 70 years inclusive, which exceeded the current estimated life expectancy of 60.6 years in South Africa [3]. The rationale for this was that life expectancy is rapidly changing in the South African environment and this cohort was considered likely to have a greater life expectancy. The assumption made was that the HIV vaccine would be incorporated into the South African Expanded Program of Immunization and would be administered at school level. The health service provider (provider) perspective was adopted as the information generated was intended to inform national health decision making. The hypothetical HIV vaccine was modelled as a prevention strategy that reduced the HIV disease burden and associated mortality. The vaccine strategies were considered against the system of HIV counselling and testing (HCT) and the national rollout of ART that constituted the standard of care (comparator model) in South Africa [17, 18]. The intervention model combined the current standard of care with the HIV vaccination strategy, as both programs would be delivered simultaneously. A discount rate of 3 % was applied to the economic costs and health outcomes, as recommended by the World Health Organization CHOosing Interventions that are Cost-Effective (WHO-CHOICE) guidelines [19]. The epidemiology of the South African epidemic is described in Table 1.

Table 1. South African population by age groups exploring ARV treatment access. The HIV epidemiology of South Africa is described. The treatment shortfall represents those eligible for ART but unable to access it.

Life years gained (LYG) was measured in terms of its impact on mortality. The LYG concept represents a modified mortality measure which considers remaining life expectancy. More weight is accrued to the life of a young child than an elderly person, because saving the life of a young child will accrue more life years than saving the life of an elderly person. The life years are calculated as the "remaining life expectancy at the point of each averted death" [20]. Life tables are generally setting specific or standardized for a geographic area. Using the information generated in these life tables, we are able to derive life expectancies for a specific population. The HIV vaccine described for implementation was hypothetical, as it is currently undergoing Phase I/II clinical trials. The HIV vaccine characteristics were determined by the target product profile formulated by the Pox-Protein Public-Private Partnership (P5), developed to build on the success of the RV144/Thai trial and evaluate potential HIV vaccine candidates to determine their public health impact [21]. The regimen included in this economic evaluation mirrored the ongoing HVTN 100 study, which adapted the ALVAC prime with ALVAC/gp120/adjuvant boost of the RV144/Thai trial but added an additional ALVAC/gp120/adjuvant boost at month 12. This boost at month 12 was added to circumvent the waning of the immune response documented in the RV144/Thai trial a year after initial vaccine administration.
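Both evaluation conventions noted above (LYG as the outcome measure and 3 % discounting per WHO-CHOICE) reduce to a few lines of code. A minimal sketch, with a hypothetical stream of life years standing in for the model outputs:

```python
def discounted_life_years(years_lived, rate=0.03):
    """Discount a stream of life years (one value per annual cycle)
    at the WHO-CHOICE rate of 3% applied in this evaluation."""
    return sum(ly / (1.0 + rate) ** t for t, ly in enumerate(years_lived))

# Example: 10 full life years gained, starting in the current cycle
print(discounted_life_years([1.0] * 10))  # ~8.79 discounted life years
```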
The estimated vaccination coverage was 60 % (range: 40–70 %). This represents a slight underestimation of the 68 % reported for coverage of the 3rd dose of diphtheria, tetanus and pertussis toxoid (DTP3), which has been validated as a proxy for national immunization performance [22]. The base-case HIV vaccine modelled cost US$ 12 per dose (range: US$ 2–24), had a vaccine efficacy of 50 % (range: 30–70 %) and a duration of protection of 10 years (achieved through the administration of annual boosters). The declining immunity reported in the RV144/Thai trial (particularly in the year following administration) reaffirmed the need for booster injections. Annual boosters may be far from pragmatic but merely represented an overestimation of costs in this evaluation. The vaccine price of US$ 12 was roughly based on the human papillomavirus (HPV) vaccine available on government tender at US$ 17. Markedly reduced vaccine prices were deemed plausible given the strides made in negotiating lower priced ART medications and HPV vaccines in the public sector [23, 24]. Pooled utilities relating to HIV/AIDS (acquired immunodeficiency syndrome) were derived from a meta-analysis and were used for the cost-effectiveness analyses of HIV related interventions [25].

Study inputs

Input parameters are shown in Table 2. Estimated vaccination coverage of 60 % of adolescents approximated 6 million individuals receiving the initial course. Delivery of health services was conducted at the schools. HIV related costs were estimated from the 2013 national HIV treatment guideline [18]. Patients would be consulted by primary health care (PHC) nurses and more complicated cases would be referred. Pharmaceutical costs included ART, treatment of sexually transmitted infections (STI) and condoms. In addition to the costs accumulated in the comparator group, the intervention included the vaccine and its delivery. Laboratory tests conducted by the National Health Laboratory Service, costing of medication, consumables and additional pharmaceuticals, and valuations of medical personnel cost based on the Uniform Patient Fee Schedule (UPFS) were sourced from the National Department of Health. All costs were adjusted to the common year 2012. Costs were converted from South African rand (ZAR) to United States dollars (US$) using the average exchange rate for 2012, thus allowing for international comparison (US$ 1 = ZAR 8.21) [26]. HIV related disease transition probabilities were obtained from the South African literature and are shown in Table 3.

Table 2. Parameter costs and economic considerations. The estimates were obtained from relevant South African literature for the year 2012.

Table 3. Disease transition probabilities showing annual progression risk. The possibility of transition from one HIV health state to the next is described. The estimates were obtained from relevant South African literature for the year 2012.
Model based economic evaluation: semi-Markov model development

Data capture and analysis was conducted in Microsoft Excel® (Version 2010) (Microsoft Corp., Redmond, WA). Ersatz version 1.2 (www.epigear.com), a bootstrap add-in application for Excel, was used to perform the uncertainty analysis. The simulation ran a semi-Markov model with annual cycles (Fig. 1). Tunnel states could be added to the semi-Markov model to counter the 'memoryless' nature inherent in Markov models. The vaccine was offered on a voluntary basis to adolescents from the age of ten years. The model comprised eight health states. All individuals were considered HIV negative and healthy at the start of the model (State 1). The coverage rate determined who moved into a vaccinated (State 2) or unvaccinated (State 3) state. All individuals may transition into an asymptomatic HIV state (State 4). Individuals who seroconverted to HIV positive were started on ART when eligible. Asymptomatic individuals may progress to a symptomatic (State 5) or AIDS (State 6) state. Every HIV infected individual may enter the treatment pool (State 7), which was sub-classified into 1st, 2nd and 3rd line ART regimens. Every aforementioned health state may transition to death (State 8). Each cycle carries a probability of remaining in the current health state or transitioning to another, with the arrows representing the transition probabilities from one state to another. Once the vaccine had been stopped, event rates were assumed to be the same for both arms of the study.

Fig. 1. Model depicting the semi-Markov model of the HIV vaccination strategy. Healthy vaccinated and unvaccinated individuals may enter into a HIV positive state. They can progress from a HIV infection state to the HIV treatment pool. All states may progress to a death state at a rate specific to the state they were currently in.

One-way sensitivity analyses evaluated the impact of single assumptions on costs and outcomes. Probabilistic sensitivity analysis (PSA) with a bootstrapping technique of 1000 iterations was used to explore the uncertainty in the model and evaluate the robustness of the results. These results were presented as cost-effectiveness scatter plots and cost-effectiveness acceptability curves. The PSA data generated were used to determine if the intervention fell below the willingness-to-pay (WTP) threshold. As South Africa does not have a pre-defined WTP threshold, the Gross Domestic Product (GDP) per capita (2012) was used as a proxy in accordance with the WHO Guide to Cost-Effectiveness Analysis [19, 27]. The WTP threshold was thus defined as US$ 7 508 (ZAR 61 641) per quality adjusted life year (QALY) gained. The GDP per capita range was adapted from the 'value of statistical life' literature and is theoretically the value of an additional healthy life year [28]. It is used in the context of this study against LYG (rather than the conventional QALY) as there is no other alternative available to indicate cost-effectiveness in South Africa.

Cost and cost-effectiveness of a national HIV vaccination program

The programmatic costs and health implications of a vaccine implemented at US$ 12 per dose and 60 % coverage were determined, and this was considered the base case. Using PSA techniques, we were able to estimate the change in cost per capita, the approximate cost per LYG and finally the cost per death averted at different vaccine prices per dose. The change from the base cost for the program was compared with the baseline vaccine implementation at US$ 12 per dose.

Life table analysis

A multi-state life table approach was used to describe the differential morbidity and mortality of a population under two alternative interventions [29]. The alternatives were a reference population displaying the HIV associated mortality experienced by South African adolescents under the comparator model, compared with the outcomes for the adolescent population when exposed to the intervention (the vaccine strategy in addition to the comparator model). Disease related mortality was referenced from the literature (Table 3).
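For intuition, the kind of annual-cycle cohort trace such a model produces can be sketched in a few lines of Python. The sketch below collapses the eight states to four and uses hypothetical transition probabilities, not the Table 3 values:

```python
import numpy as np

# Illustrative 4-state collapse of the model (healthy, HIV+, on ART, dead);
# the transition probabilities below are hypothetical placeholders.
P = np.array([
    [0.985, 0.010, 0.000, 0.005],   # healthy
    [0.000, 0.880, 0.100, 0.020],   # HIV+, untreated
    [0.000, 0.000, 0.990, 0.010],   # on ART
    [0.000, 0.000, 0.000, 1.000],   # dead (absorbing)
])

cohort = np.array([1.0, 0.0, 0.0, 0.0])    # everyone starts healthy
trace = [cohort]
for year in range(60):                      # annual cycles over the horizon
    cohort = cohort @ P
    trace.append(cohort)

alive_each_year = [1.0 - row[3] for row in trace]
undiscounted_lys = sum(alive_each_year)     # person-years lived per person
```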
The study used a cohort life table methodology which calculated the probability of death of a generation (cohort) over the course of their lifetime. Cohort life tables use age-specific mortality rates related to specific cohorts, which allow for known and projected changes in mortality [30]. Within a standard life table, the disease related mortality was separated from mortality from all other causes (as shown in Eq. 1):

$$ M_{tot} = M_{dis} + M_{other} $$

where M_tot is the total mortality identified in the age/sex group, M_dis is the mortality attributed to the disease state and M_other is the mortality attributed to all other causes. The prevalence estimates for HIV were obtained from the South African National HIV Prevalence, Incidence and Behaviour Survey, 2012 [2]. The ratio between the comparator and the intervention groups was used to calculate the relative reduction in HIV related mortality attributable to the intervention (reflected in Eq. 2). This reduction was applied in the life table, allowing for comparisons to be made including the life expectancy, individuals surviving and the cumulative years lived.

$$ RR_m = \frac{M_i}{M_c} $$

where RR_m is the mortality risk reduction, M_i is the mortality risk in the intervention group and M_c is the mortality risk in the comparator group. Values were entered into a life table to estimate the impact of the intervention on life expectancy and the number of life years gained. Generally, a life table estimates the mortality experience of a population and calculates the life expectancy from birth [31]. The life expectancy calculated from a life table is represented by the following formula (Eq. 3) [32]:

$$ e_x = \frac{T_x}{l_x} $$

where e_x is the life expectancy at age x, T_x is the cumulative person years lived after age x and l_x is the number of individuals alive at the beginning of age x. The difference in cumulative years lived between the intervention and comparator groups was used in the incremental cost-effectiveness ratio (ICER) calculations. The ICER represents the difference in costs between strategies divided by the difference in effects (e.g. LYG) between strategies (Eq. 4). The unit of measurement of the ICER is US$ per LYG.

$$ ICER = \frac{C_2 - C_1}{E_2 - E_1} = \frac{\Delta C}{\Delta E} $$

where C_1 and E_1 are the costs and effects of the standard of care (comparator), and C_2 and E_2 are the costs and effects of the intervention.

Years of potential life lost

The years of potential life lost (YPLL) is used to measure the incidence of 'premature' mortality that occurs within a population up to an age at which death is considered untimely [33, 34]. The YPLL concept quantifies social and economic loss as a result of premature death, and has been useful in assessing specific causes of death targeting younger age groups [35]. The principle of YPLL incorporates the age at death, and the calculation mathematically weights the total deaths by applying values to death at each age (Eq. 5) [34–36].

$$ YPLL = \sum_x \left({}_{n}d_{x}\right) \times \left[70 - \left(x + \tfrac{n}{2}\right)\right] $$

where ₙdₓ is the number of deaths due to HIV/AIDS from age x to age x + n, n is the width of the age interval (in this study ten-year age intervals were used, so n/2 = 5 is the number of years to the midpoint of the interval), and 70 is the age up to which death is considered premature.
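The three quantities above (Eq. 3 to Eq. 5) are straightforward to compute. A minimal sketch follows; the numbers in the examples are hypothetical inputs, not values from the study:

```python
def life_expectancy(T_x, l_x):
    """Eq. 3: e_x = T_x / l_x."""
    return T_x / l_x

def icer(cost_int, cost_comp, effect_int, effect_comp):
    """Eq. 4: incremental cost per unit of incremental effect (US$ per LYG)."""
    return (cost_int - cost_comp) / (effect_int - effect_comp)

def ypll(deaths_by_age_group, interval=10, limit=70):
    """Eq. 5: weight deaths in each age interval [x, x+n) by the years
    remaining to the age limit from the interval midpoint, limit - (x + n/2)."""
    total = 0.0
    for x, deaths in deaths_by_age_group.items():   # x = interval start age
        weight = max(limit - (x + interval / 2), 0)
        total += deaths * weight
    return total

print(life_expectancy(T_x=3_200_000, l_x=80_000))       # 40.0 (hypothetical)
print(icer(1_000_000, 900_000, 25_000, 5_000))          # 5.0 (hypothetical)
print(ypll({10: 100, 20: 50}))                          # 100*55 + 50*45 = 7750
```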
Cost consequence analysis

The absolute risk reduction (ARR) was measured as a percentage. This represented the change in the risk of an outcome with the intervention in comparison to the comparator. It was calculated as the difference in the mean values of the parameter of interest; an example of the calculation is shown in Eq. 6.

$$ \mathrm{HIV\ incidence}_{comparator} - \mathrm{HIV\ incidence}_{intervention} = \mathrm{ARR}\ [\%] $$

where HIV incidence (comparator) and HIV incidence (intervention) represent mean percentages and the difference in values is the absolute risk reduction percentage. The difference in per capita costs with and without the intervention was then divided by the ARR values obtained for HIV incidence and HIV mortality to yield the cost per percentage reduction in disease. The outcomes for both the ARR and the cost per percentage reduction in disease burden were described by gender to highlight the areas of greatest impact.

Model assumptions

All participants entering the model were considered sexually naïve. Drop-out rates were not accounted for, as all children of school-going age were assumed to be attending school. The model assumed that the rollout and uptake of HIV counselling and testing (HCT) strategies and the national rollout of the HIV vaccination strategy occurred within school-based health services that provided comprehensive care to learners at all socio-economic levels. Finally, the model assumed good uptake of school-based health services given the provision of care in a familiar and safe environment with no encroachment on school attendance. As no formal pilot studies have been reported, there remains no validation of this assumption. Ethical approval for the study was obtained from the Human Research Ethics Committee (Medical) of the University of the Witwatersrand.

Costs of models

The annual per capita cost of the comparator was US$ 80. The annual HIV vaccination per capita cost was calculated at US$ 89, representing an 11 % increase in costs. Table 4 describes the complete breakdown of these costs. There is no appreciable difference in human resources and laboratory costs associated with the vaccine intervention, though the intervention does represent a saving on both these costs. However, the intervention does predict an increase (31 %) in pharmaceutical costs, driven by the need for vaccine boosters to attain durable protection. The vaccine price considered in Table 4 was US$ 12.

Table 4. Model components and cost comparison of the HIV vaccination program (US$). Complete breakdown of costs relating to the intervention and the comparator. The intervention comprises both the vaccine strategy and the comparator costs.

Uncertainty analysis: the cost and cost-effectiveness of a national HIV vaccination program

Implementing a South African national HIV vaccination program at the base vaccine cost of US$ 12 per dose (Table 5) would be considered cost-effective at US$ 5 per LYG. When benchmarking this against the WHO cost-effectiveness criteria (US$ 7 508 per QALY gained), a HIV vaccine at US$ 12 is deemed highly cost-effective. However, introduction of the HIV vaccine at a considerably reduced price per dose would significantly improve the future sustainability of the program. At the low vaccine cost of US$ 6, the program cost would be reduced by 5 % (US$ 52 million) relative to the base vaccination program, and would result in an ICER of US$ 2 per LYG.
The very low vaccine price of US$ 2 would yield even better results: an ICER of US$ 1 per LYG with a 9 % reduction (US$ 84 million) in the program costs compared with the baseline vaccination strategy.

Table 5. Cost-effectiveness of a national HIV vaccination program at varied vaccine prices, 2012. The programmatic cost implications of varying the vaccine cost per dose were examined. The cost values reflect annual expenditure. At baseline (shaded), a vaccine at the cost of US$ 12 per dose would result in an annual cost of approximately US$ 1017 million. This represents a US$ 9 increase from the base cost per capita (Table 4). All other values have been calculated relative to the base vaccination strategy.

Impact of coverage on cost and life expectancy

Table 6 explores the impact of initial vaccine and subsequent booster coverage on cost and life expectancy. To vary the coverage annually would be computationally challenging, hence the combination of the initial vaccine and the administration of the annual booster was considered as a continuum, i.e. if the initial vaccine coverage was 40 %, then the booster coverage considered was also 40 %. Increasing the vaccine coverage would require significantly increased financial investment. However, increased coverage also translated to improved life expectancy. The increased cost has to be weighed against the improved health outcomes before the strategy is deemed cost-effective. There would also have to be consideration of the impact of other vaccine characteristics.

Table 6. One-way sensitivity analysis of coverage on health outcomes. By varying the coverage rates, we are able to demonstrate how an increased number of doses drives the intervention costs up.

Probabilistic sensitivity analysis: ICER and WTP results

The uncertainty around the ICER was assessed using probabilistic sensitivity analysis. The HIV vaccine intervention yielded an ICER of US$ 4.98 per LYG (95 % CI US$ 2.77–11.61). National projections of the intervention program were estimated to cost US$ 1017 million annually. This represents a US$ 104 million (11 %) increase on the comparator cost of US$ 913 million. Aside from the need for boosters driving the cost, it should be borne in mind that the vaccine is anticipated to reach approximately 6 million HIV negative 10–19 year old adolescents, compared with the comparator strategy providing ART to 78 126 adolescents of the same age group. The intervention, however, would translate to a mean cumulative gain of 23.6 million LYG (95 % CI 8.48–34.3 million years) in the population. Apart from demonstrating the cost-effectiveness of the vaccine intervention, Fig. 2 was designed to evaluate the impact of differing vaccine efficacies on the ICER. At a vaccine efficacy of 30 %, the iterations lie on either side of the WTP threshold, indicating that the intervention may not be cost-effective. However, at vaccine efficacies of 50 % and 70 %, most iterations were considerably below the GDP per capita of South Africa, 2012. Based on this GDP, the intervention would be considered below the WTP threshold defined by the World Health Organization (WHO) and thus deemed to be highly cost-effective [19].

Fig. 2. Willingness-to-pay analysis explored by varying vaccine efficacy. This figure shows the scatter plot of the costs and health outcomes from the probabilistic sensitivity analysis. The incremental cost is the difference in costs between the current treatment program and the vaccine program. Similarly, the incremental effect reflects the difference in health outcomes between the vaccine program and the current treatment program. The health outcomes are measured in years of life saved.
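The bootstrap mechanics described above (1000 iterations, a percentile interval for the ICER, and a cost-effectiveness acceptability curve against the WTP threshold) can be sketched as follows. The distributions of incremental cost and effect below are hypothetical stand-ins, not the model's outputs:

```python
import numpy as np

rng = np.random.default_rng(7)
n_iter = 1000                     # bootstrap iterations, as in the analysis
wtp = 7508.0                      # WTP threshold, US$ per LYG

# Hypothetical joint draws of incremental cost (US$) and effect (LYG)
d_cost = rng.normal(104.0, 20.0, n_iter)
d_effect = rng.normal(21.0, 5.0, n_iter)

icers = d_cost / d_effect
print(np.percentile(icers, [2.5, 50, 97.5]))   # bootstrap CI for the ICER

# Acceptability curve: share of iterations cost-effective at each threshold,
# using the net-benefit formulation t * effect - cost >= 0
thresholds = np.linspace(0, 50, 101)
ceac = [(t * d_effect - d_cost >= 0).mean() for t in thresholds]
print((wtp * d_effect - d_cost >= 0).mean())   # probability CE at the WTP
```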
Similarly, the incremental effect reflects the difference in health outcomes between the vaccine program and the current treatment program. The health outcomes are measured in years of life saved

Life expectancy and potential years of life lost

The results of the life table simulation are presented in Table 7. Application of the intervention in the 10–19 year age group resulted in a 2.5 year increase in life expectancy, as well as a significant increase in the cumulative gain of years lived in the age group. Importantly, as a result of the increase in life expectancy noted in the 10–19 year group, there was an increase documented in the subsequent age groups. The PYLL from HIV/AIDS contributing to 'premature' death is also given in Table 7. It is here that the impact of the vaccine is demonstrated, as the years of life lost without the vaccine (70 640) are considerably higher than the years lost with the vaccine intervention (48 400).

Table 7 Life table analysis and YPLL for 10–19 year age group

Cost consequence results

The 10 year absolute risk reductions in HIV-associated mortality and incidence potentially offered by the HIV vaccine intervention were projected using the modelled data. Table 8 describes a detailed breakdown of costs to highlight the differences in vaccine impact between the genders. While all scenarios reflected an improvement in HIV-related health outcomes, the reduction in HIV incidence among females was notable (0.53 %), particularly given their high burden of disease.

Table 8 Disease risk reduction and cost consequences. The absolute risk reduction was estimated over a 10 year period

The study aimed to assess the cost-effectiveness of a national rollout of the hypothetical HIV vaccine to school-based adolescents. The South African HIV epidemic is widely acknowledged to be generalized, with adolescents and young adults disproportionately at risk of HIV [37]. In 2013, South Africa accounted for 16 % of the global HIV incidence despite concerted efforts at the national level, ranging from increasing ART distribution by 75 % between 2009 and 2011 to boasting the largest and most established condom distribution program in the world [2, 38]. This earmarked adolescents as a key population to be reached if HIV prevention strategies are to impact incidence and if HIV mortality rates are to be significantly curtailed [37]. While the introduction of a potential HIV vaccine in schools represents a significant financial investment, the health outcomes in terms of improved life expectancy, markedly decreased potential years of life lost and decreases in HIV mortality and incidence are substantive. Life expectancy was equally influenced by vaccine coverage rates, while the assessment of cost-effectiveness was found to be sensitive to the vaccine efficacy. The life table findings, together with the conventionally accepted thresholds for cost-effectiveness being met, demonstrate the financial plausibility of HIV vaccine implementation [19]. Importantly, the vaccine remained cost-effective even at the higher prices per dose examined, but at substantially greater programmatic costs. Annual HIV vaccination represents a substantial increase in costs per capita at the base coverage of 60 % of HIV-negative adolescents. This constitutes a significant investment considering the intense competition among several burdens of disease on a constrained South African health budget [39].
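As an illustration of the PYLL measure reported in Table 7, the following sketch sums the years lost before a chosen reference age; the ages, death counts, and reference age below are hypothetical, not the study's life table inputs.

```python
# Sketch of potential years of life lost (PYLL): the sum over deaths of
# (reference age - age at death), counting only 'premature' deaths.
# Ages and death counts below are hypothetical.

def pyll(deaths_by_age, reference_age):
    return sum(count * (reference_age - age)
               for age, count in deaths_by_age.items()
               if age < reference_age)

print(pyll({12: 40, 15: 120, 18: 95}, reference_age=65))   # -> 12585
```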
As much as the long-term financial sustainability of the burgeoning ART program has been brought into question, the implementation of an HIV vaccine program over several decades may prove equally daunting. It is important to bear in mind that the comparator cost reflects those currently on treatment (excluding the treatment shortfall of approximately 58 % [1]) and thus represents a gross underestimation of what we should be paying if those unable to access treatment were indeed able to access it. Another major consideration is that the upscaling of ART may not impact the HIV incidence as definitively as a primary preventative strategy may. It must be remembered that while averting infections has a cost attached from a government perspective, it may also give rise to the substantial financial gains of reducing the demand for ART [40]. South Africa has successfully negotiated reduced pricing for ART and HPV vaccines in the past, and this bodes well for the future procurement of HIV vaccines [23, 24], as the price is undetermined at this point. If vaccine development fails to reduce the number of annual boosters required to maintain protection, then the pricing represents a key factor in deciding the cost-effectiveness of the intervention. Apart from the economic impact, HIV vaccine implementation has the capacity to influence long-term health outcomes. The mean cumulative gain of LYG could support efforts to improve life expectancy in the country, an area identified as a strategic output of the National Service Delivery Agreement [5]. The South African epidemic is predominantly heterosexual. This work represents an over-simplification of the rather complex sexual networking structures at play in the South African HIV epidemic. Nonetheless, individuals at high risk may still acquire infection through repeated risk exposures, despite the protection conferred by the vaccine, compared with those at low risk. At a population level, the premise remains that a partially effective vaccine may still avert or delay infection even if it is unable to completely prevent an infection from establishing [41]. Assessment of a partially effective vaccine in the United States of America (USA) emphasizes that even modest and temporary reductions in HIV infections have important benefits at the population level [42]. Andersson et al. demonstrated similar health benefits to the USA study when modelling the RV144/Thai trial vaccine in South Africa, but cautioned that a vaccine of limited duration could only be effective with high coverage levels, which translated to millions of doses [43]. Adolescents are a critical target for this intervention. Apart from being a key population identified in the transmission of HIV, adolescents in a school environment appear more easily accessible as a target group, considering that more commonly identified high-risk groups such as commercial sex workers are often harder to reach due to stigma and marginalization [43]. However, adolescents have historically encountered barriers in trying to access health services in South Africa, from confidentiality issues to the judgmental attitudes of staff. It is not surprising that they often do not return for follow-up care [9]. The school environment could be deemed a "safe space" for peer discussion and accessibility of relevant health services. Neglecting the comprehensive health needs and barriers to care of this adolescent population has the potential to undermine the success of HIV prevention initiatives [44].
Further, low social acceptability of HIV vaccines, fueled by the fear of vaccines and poor side-effect profiles, presents potential deterrents to uptake and coverage [45]. It is understandably difficult for hypothetical scenarios to emulate real-life behavioral changes, but knowledge of these factors underscores the need for comprehensive sexual education and risk reduction counselling, which could prove more plausible in the school environment [46]. This study had several limitations. Firstly, it is unclear to what degree behavioral disinhibition may occur following vaccination, as this was not assessed in the model. Changes in sexual risk behavior post HIV vaccination are poorly understood in the African setting [46]. In high HIV prevalence communities like South Africa, a decrease in condom use even with stable partners would likely result in an increase in HIV rates [46]. In fact, South African data suggest that poor comprehension of the 'low-efficacy' concept was associated with a reported potential decrease in condom use. It is further postulated that the degree of behavioral disinhibition may depend largely on the manner in which the vaccine effects are marketed to the public and vaccine recipients alike [40]. The impact of risk compensation becomes critical when considering the low efficacy displayed by the candidate vaccines thus far [46]. Secondly, the study was unable to assess the effects of herd immunity. Notably, Long et al. alluded to partially effective vaccines providing some benefits to the unvaccinated population through herd immunity [42]. This is particularly important considering the low coverage rates of childhood vaccinations in South Africa, as it speaks directly to the country's capacity to introduce and implement an HIV vaccine [22]. At 60 % coverage, this program calls for an unprecedented 5.9 million adolescents to be vaccinated. Given this, it is not surprising that implementation costs are high. Thirdly, the provider perspective was considered, as the largest burden of direct medical program costs will be borne by the healthcare sector. Although the societal costs were not analyzed, their contribution would be substantial and could improve the overall cost-effectiveness of the vaccine. Fourthly, booster vaccinations were not assessed in the original RV144/Thai trial work [42]. Therefore, the assumption that booster vaccination would provide the same protective effects as the initial vaccination was hypothetical. There has been limited description of this in the literature [46]. Additionally, administration costs would drastically increase the program costs given the need for annual boosters. This is the key cost factor implicated in the difference between the comparator and intervention costs. However, it is hoped that attrition rates of vaccine recipients would be minimized by targeting the relatively stable school population. Lastly, this study considered HIV vaccination as an isolated intervention, apart from the ART rollout and condom distribution. In the clinical setting, this intervention would probably work synergistically with other prevention strategies such as medical male circumcision, and an optimal combination of strategies should be better defined once data become available [40, 42]. As discussed earlier, the limited success achieved in curbing the national HIV incidence by the current public sector HIV prevention strategies warrants the evaluation of strategies on their individual merits.
In conclusion, these findings suggest that a national HIV vaccine program administered to adolescents in South Africa would be a cost-effective means of reducing the massive disease and economic burden of HIV. The implications for health outcomes are significant, with reductions in HIV-associated mortality and incidence and improved life expectancy demonstrated by the model. However, a vaccine with more durable protection and requiring fewer boosters would considerably reduce costs. While this work provides decision makers with objective baseline data for considering the adoption of the potential HIV vaccination intervention nationally, more realistic estimates of cost and disease burden should be gauged once the efficacy, duration of protection and vaccine cost are determined.

Abbreviations: GDP: gross domestic product; HAART: highly active antiretroviral therapy; HIV: human immunodeficiency virus; ICER: incremental cost-effectiveness ratio; PHC: primary healthcare; PSA: probabilistic sensitivity analysis; QALY: quality adjusted life year; UPFS: uniform patient fee schedule; US$: United States dollar; ZAR: South African rand

References

1. Joint United Nations Programme on HIV/AIDS. The Gap Report. Geneva, Switzerland: UNAIDS; 2014. http://www.unaids.org/en/resources/documents/2014/20140716_UNAIDS_gap_report. Accessed 10 Nov 2014.
2. Shisana O, Rehle T, Simbayi LC, Zuma K, Jooste S, Zungu N, Labadarios D, Onoya D. South African National HIV Prevalence, Incidence and Behaviour Survey, 2012. Cape Town, South Africa: HSRC Press; 2014. http://www.hsrc.ac.za/en/research-data/view/6871. Accessed 8 Dec 2014.
3. Statistics South Africa. Mid-year population estimates 2014. Pretoria, South Africa: Statistics South Africa; 2014. http://www.statssa.gov.za/publications/P0302/P03022014.pdf. Accessed 9 Sep 2015.
4. South African National AIDS Council. Progress Report on the National Strategic Plan for HIV, TB and STIs (2012–2016). Pretoria, South Africa: SANAC; 2014. http://sanac.org.za/2015/05/25/progress-report-national-strategic-plan-on-hiv-stis-and-tb-2012-2016/. Accessed 19 Jul 2015.
5. National Department of Health. National Service Delivery Agreement. Pretoria, South Africa: National Department of Health; 2010. http://www.thepresidency.gov.za/MediaLib/Downloads/Home/Ministries/DepartmentofPerformanceMonitoringandEvaluation3/TheOutcomesApproach/Health%20Sector%20NSDA.pdf. Accessed 23 Mar 2015.
6. Moorhouse M. Closer to zero: reflections on ten years of ART rollout. S Afr J HIV Med. 2014;15(1):9.
7. Gray A, Conradie F, Crowley T, Gaede B, Gils T, Shroufi A, Hwang B, Kegakilwe D, Nash J, Pillay P, et al. Improving access to antiretrovirals in rural South Africa - a call to action. S Afr Med J. 2015;105(8):638–9.
8. National Department of Health. National Strategic Plan on HIV, STIs and TB 2012–2016. Pretoria, South Africa: National Department of Health; 2011. http://www.thepresidency.gov.za/MediaLib/Downloads/Home/Publications/SANACCallforNominations/A5summary12-12.pdf. Accessed 13 Jan 2012.
9. Nkala B, Khunwane M, Dietrich J, Otwombe K, Sekoane I, Sonqishe B, Gray G. Kganya Motsha Adolescent Centre: a model for adolescent friendly HIV management and reproductive health for adolescents in Soweto, South Africa. AIDS Care. 2015;27(6):697–702.
10. Ashton J, Dickson K, Pleaner M. Evolution of the national Adolescent-friendly Clinic Initiative in South Africa. Geneva, Switzerland: WHO; 2009. http://apps.who.int/iris/bitstream/10665/44154/1/9789241598361_eng.pdf. Accessed 6 Nov 2011.
11. Lesedi C, Hoque ME, Ntuli-Ngcobo B. Youth's Perception towards Sexual and Reproductive Health Services at Family Welfare Association Centres in Botswana. J Soc Sci. 2011;28(2):137–43.
12. National Department of Health. Provincial Guidelines for the Implementation of the Three Streams of PHC Re-engineering. Pretoria, South Africa: National Department of Health; 2011. http://www.cmt.org.za/wp-content/uploads/2011/09/GUIDELINES-FOR-THE-IMPLEMENTATION-OF-THE-THREE-STREAMS-OF-PHC-4-Sept-2.pdf. Accessed 6 Oct 2011.
13. Ozawa S, Mirelman A, Stack ML, Walker DG, Levine OS. Cost-effectiveness and economic benefits of vaccines in low- and middle-income countries: a systematic review. Vaccine. 2012;31(1):96–108.
14. Rerks-Ngarm S, Pitisuttithum P, Nitayaphan S, Kaewkungwal J, Paris R, Premsri N, Namwat C, de Souza M, Adams E, Benenson M, et al. Vaccination with ALVAC and AIDSVAX to Prevent HIV-1 Infection in Thailand. N Engl J Med. 2009;361(23):2209–20.
15. Liefman LS. NIH-Sponsored HIV Vaccine Trial Launches in South Africa - Early Stage Trial Aims to Build on RV144 Results. In: National Institute of Allergy and Infectious Diseases. U.S. Department of Health and Human Services; 2015. http://www.nih.gov/news-events/news-releases/nih-sponsored-hiv-vaccine-trial-launches-south-africa. Accessed 16 Apr 2015.
16. Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, Augustovski F, Briggs AH, Mauskopf J, Loder E. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. Cost Eff Resour Alloc. 2013;11(1):1478–7547.
17. National Department of Health. HIV Counselling and Testing (HCT) Policy Guidelines. Pretoria, South Africa: National Department of Health; 2010. http://www.genderjustice.org.za/publication/national-hiv-counselling-and-testing-hct-policy-guidelines/. Accessed 11 Nov 2010.
18. National Department of Health. The South African Antiretroviral Treatment Guidelines 2013. Pretoria, South Africa: National Department of Health; 2013. http://www.sahivsoc.org/upload/documents/2013%20ART%20Guidelines-Short%20Combined%20FINAL%20draft%20guidelines%2014%20March%202013.pdf. Accessed 6 Apr 2014.
19. Tan-Torres Edejer T, Baltussen R, Adam T, Hutubessy R, Acharya A, Evans DB, Murray CJL. Making choices in health: WHO Guide to Cost Effectiveness analysis. Geneva, Switzerland: World Health Organization; 2003. http://www.who.int/choice/publications/p_2003_generalised_cea.pdf. Accessed 16 Mar 2015.
20. Robberstad B. QALYs vs DALYs vs LYs gained: What are the differences, and what difference do they make for health care priority setting? Norsk Epidemiologi. 2005;15(2):183–91.
21. Russell N, Marovich M. P5 Update and GAC Progress Report. In: P5 Global Access Committee RSA Summit, Cape Town International Conference Centre; October 2015.
22. World Health Organisation. Annual WHO/UNICEF Joint Reporting Form and WHO Regional office reports (Updates of 2013/July/13). Edited by Immunization Vaccines and Biologicals. Geneva, Switzerland: World Health Organization; 2013. http://www.who.int/immunization/monitoring_surveillance/Immunization_Summary_2013.pdf. Accessed 23 Nov 2015.
23. Kardas-Nelson M, Goswami S. Upping the competition. NSP Review 2013;6(May–June):26–29. http://www.nspreview.org/wp-content/uploads/2014/06/NSP-review-6-web.pdf. Accessed 9 Jan 2014.
24. Nguyen A, Datta SD, Schwalbe N, Summers D, Adlide G. Working towards affordable pricing for HPV vaccines for developing countries: The role of GAVI. In: Harvard Global Equity Initiative GTFCCC Working paper and Background Series, No 3. 2011.
25. Tengs TO, Lin TH. A Meta-Analysis of Utility Estimates for HIV/AIDS. Med Decis Making. 2002;22(6):475–81.
26. The World Bank. http://data.worldbank.org. Accessed 12 Jun 2015.
27. Badri M, Maartens G, Mandalia S, Bekker LG, Penrod JR, Platt RW, Wood R, Beck EJ. Cost-effectiveness of highly active antiretroviral therapy in South Africa. PLoS Med. 2006;3(1):e4.
28. Sachs JD. Macroeconomics and Health: Investing in Health for Economic Development. Report of the Commission on Macroeconomics and Health. Geneva, Switzerland: World Health Organization; 2001. http://apps.who.int/iris/bitstream/10665/42435/1/924154550X.pdf. Accessed 23 Nov 2015.
29. Salomon JA, Mathers CD, Murray CJL, Ferguson B. Methods for life expectancy and healthy life expectancy uncertainty analysis. Global Programme on Evidence for Health Policy Working Paper No. 10. Geneva: World Health Organization; 2001.
30. Mills J. Historic and Projected Mortality Data from the Period and Cohort Life Tables, 2012-based, UK, 1981–2062: Statistical Bulletin. United Kingdom: Office for National Statistics; 2013.
31. Murray CJL, Ahmad OB, Lopez AD, Salomon JA. WHO System of Model Life Tables. GPE Discussion Paper Series: No 8. Geneva, Switzerland: World Health Organization; 2000. http://www.who.int/healthinfo/paper08.pdf. Accessed 1 Nov 2015.
32. World Health Organization. WHO methods for life expectancy and healthy life expectancy. Global Health Estimates Technical Paper WHO/HIS/HSI/GHE/2014.5. Geneva, Switzerland: World Health Organisation; 2014. http://www.who.int/healthinfo/statistics/LT_method.pdf. Accessed 1 Nov 2015.
33. De Wet N, Oluwaseyi S, Odimegwu C. Youth mortality due to HIV/AIDS in South Africa, 2001-2009: an analysis of the levels of mortality using life table techniques. Afr J AIDS Res. 2014;13(1):13–20.
34. Dranger E, Remington P. YPLL: A Summary Measure of Premature Mortality Used in Measuring the Health of Communities. Wisconsin Public Health & Health Policy Institute Issue Brief 2004, 5(7). https://uwphi.pophealth.wisc.edu/publications/issue-briefs/issueBriefv05n07.pdf. Accessed 23 Nov 2015.
35. Gardner JW, Sanborn JS. Years of potential life lost (YPLL)--what does it measure? Epidemiology. 1990;1(4):322–9.
36. Jain SK. Recent trends in mortality in Australia--an analysis of the causes of death through the application of life table techniques. J Aust Popul Assoc. 1992;9(1):1–23.
37. Bekker LG, Johnson L, Wallace M, Hosek S. Building our youth for the future. J Int AIDS Soc. 2015;18(2 Suppl 1):20027.
38. Beksinska ME, Smit JA, Mantell JE. Progress and challenges to male and female condom use in South Africa. Sex Health. 2012;9(1):51–8.
39. Mayosi BM, Flisher AJ, Lalloo UG, Sitas F, Tollman SM, Bradshaw D. The burden of non-communicable diseases in South Africa. Lancet. 2009;374:934–47.
40. Nagelkerke NJD, Hontelez JAC, de Vlas S. The potential impact of an HIV vaccine with limited protection on HIV incidence in Thailand: A modeling study. Vaccine. 2011;29:6079–85.
41. Schneider K, Kerr CC, Hoare A, Wilson DP. Expected epidemiological impacts of introducing an HIV vaccine in Thailand: A model-based analysis. Vaccine. 2011;29:6086–91.
42. Long EF, Owens DK. The cost-effectiveness of a modestly effective HIV vaccine in the United States. Vaccine. 2011;29(36):6113–24.
43. Andersson KM, Stover J. The potential impact of a moderately effective HIV vaccine with rapidly waning protection in South Africa and Thailand. Vaccine. 2011;29(36):6092–9.
44. Sawyer SM, Afifi RA, Bearinger LH, Blakemore SJ, Dick B, Ezeh AC, Patton C. Adolescence: a foundation for future health. Lancet. 2012;379:1630–40.
45. Newman PA, Logie C. HIV vaccine acceptability: a systematic review and meta-analysis. AIDS. 2010;24(11):1749–56.
46. Andersson KM, Vardas E, Niccolai LM, Van Niekerk RM, Mogale MM, Holdsworth IM, Bogoshi M, McIntyre JA, Gray GE. Anticipated changes in sexual behaviour following vaccination with a low-efficacy HIV vaccine: survey results from a South African township. Int J STD AIDS. 2012;23(10):736–41.
47. Lehtinen M, Paavonen J, Wheeler CM, Jaisamrarn U, Garland SM, Castellsague X, Skinner SR, Apter D, Naud P, Salmeron J, et al. Overall efficacy of HPV-16/18 AS04-adjuvanted vaccine against grade 3 or greater cervical intraepithelial neoplasia: 4-year end-of-study analysis of the randomised, double-blind PATRICIA trial. Lancet Oncol. 2012;13(1):89–99.
48. National Department of Health. HM08-2013SYR: The supply and delivery of hypodermic syringes, needles and bloodletting devices to the Department of Health for the period 01 December 2013 to November 2015. Pretoria, South Africa: National Department of Health; 2013. http://www.health.gov.za/tender/docs/contructs/HM08-2013SYR.pdf. Accessed 23 Mar 2015.
49. National Department of Health. HM09-2014RTK: Supply and delivery of rapid test kits to the Department of Health for the period 1 April 2014 to 31 March 2017. Pretoria, South Africa: National Department of Health; 2014. http://www.health.gov.za/tender/docs/contructs/HM09-2014RTKCONTRACTCIRCULAR.pdf. Accessed 23 Mar 2015.
50. National Department of Health. Approved UPFS 2014 Fee Schedule for Externally Funded Patients Treated at Differentiated Amenities (Private Wards at Public Health Care Facilities). Pretoria, South Africa: National Department of Health; 2014. http://www.healthinquiry.net/Public%20Submissions/BHF%20AnnexureF.pdf. Accessed 23 Mar 2015.
51. National Department of Health. HM01-2012CNDM: Supply and delivery of male and female condoms to the Department of Health from 1 December 2012 to 30 November 2014. Pretoria, South Africa: National Department of Health; 2012. http://www.health.gov.za/tender/docs/contructs/HM012012CNDM02Contracts.pdf. Accessed 23 Mar 2015.
52. National Department of Health. HP03-2013FP: Supply and delivery of family planning agents to the Department of Health for the period 1 October 2013 to 30 September 2015. Pretoria, South Africa: National Department of Health; 2013. http://www.health.gov.za/tender/docs/contructs/HP03-2013FP.pdf. Accessed 23 Mar 2015.
53. National Health Laboratory Services. State pricing catalogue 2013. Pretoria, South Africa; 2013. www.nhls.ac.za. Accessed 23 Nov 2015.
54. Johnson LF. Access to antiretroviral treatment in South Africa, 2004–2011. The Southern African Journal of HIV Medicine. 2012;13(1):22–7.
55. Fox MP, Cutsem GV, Giddy J, Maskew M, Keiser O, Prozesky H, Wood R, Hernan MA, Sterne JA, Egger M, et al. Rates and predictors of failure of first-line antiretroviral therapy and switch to second-line ART in South Africa. J Acquir Immune Defic Syndr. 2012;60(4):428–37.
56. Murphy RA, Sunpath H, Castilla C, Ebrahim S, Court R, Nguyen H, Kuritzkes DR, Marconi VC, Nachega JB. Second-line antiretroviral therapy: long-term outcomes in South Africa. J Acquir Immune Defic Syndr. 2012;61(2):158–63.

This work was supported by the National Institute of Allergy and Infectious Diseases (NIAID) U.S. Public Health Service Grants UM1 AI068614 [LOC: HIV Vaccine Trials Network] as part of the South African HVTN AIDS Vaccine Early Stage Investigator Program (SHAPe). The support of the DST-NRF Centre of Excellence in Epidemiological Modelling and Analysis towards this research is hereby acknowledged.
Opinions expressed and conclusions arrived at are those of the author and are not necessarily to be attributed to SACEMA.

Perinatal HIV Research Unit, Faculty of Health Sciences, University of the Witwatersrand, PO Box 114, Diepkloof 1864, Johannesburg, South Africa
Nishila Moodley
South African HVTN AIDS Vaccine Early Stage Investigator Program (SHAPe), Seattle, WA, United States
The South African Department of Science and Technology/National Research Foundation (DST/NRF) Centre of Excellence in Epidemiological Modelling and Analysis (SACEMA), University of Stellenbosch, Stellenbosch, South Africa
South African Medical Research Council, Tygerberg, South Africa
Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Centre, Seattle, WA, USA
Health Systems Governance and Finance, World Health Organization, Geneva, Switzerland
Melanie Bertram

Correspondence to Nishila Moodley.

NM, MYB and GEG contributed to the conception of the study. MYB provided guidance and supported the co-ordination of the study. NM and MYB contributed to the statistical analysis. NM was responsible for the overall drafting of the manuscript. All authors contributed to critically revising its content. All authors read and approved the final manuscript.

Moodley, N., Gray, G. & Bertram, M. Projected economic evaluation of the national implementation of a hypothetical HIV vaccination program among adolescents in South Africa, 2012. BMC Public Health 16, 330 (2016). https://doi.org/10.1186/s12889-016-2959-3

Keywords: Antiretroviral therapy (ART)
Cubical calculus

Calculus and cochains

Suppose $I$ is a closed interval. In the analysis context, the definite integral -- with the interval of integration fixed -- is often thought of as a real-valued function of the integrand. This idea is revealed in the usual function notation: $$G\big( f \big):= \displaystyle\int_I f(x) dx \in {\bf R}.$$ This point of view is understandable: after all, the Riemann integral is introduced in calculus as the limit of the Riemann sums of $f$. The student then discovers that this function is linear: $$G \big( sf+tg \big)=sG\big( f \big) +tG \big( g \big) ,$$ with $s,t\in {\bf R}$. However, this notation might obscure another important property of the integral, the additivity: $$\displaystyle\int_{[a,b]\cup [c,d]} f(x) dx = \displaystyle\int_{[a,b]} f(x) dx+ \displaystyle\int_{[c,d]} f(x) dx,$$ for $a < b \le c < d$. We then realize that we can also look at the integral as a function of the interval -- with the integrand fixed -- as follows: $$H \big( I \big) := \displaystyle\int_I f(x)dx.$$ In higher dimensions, the intervals are replaced with surfaces and solids while the expression $f(x)dx$ is replaced with $f(x,y)dxdy$ and $f(x,y,z)dxdydz$, etc. These "expressions" are called differential forms and each of them determines such a new function. That's why we further modify the notation as follows: $$\omega \big( I \big) =\displaystyle\int_I \omega.$$ This is an indirect definition of a differential form of dimension $1$ -- it is a function of intervals. Moreover, it is a function of $1$-chains such as $[a,b]+[c,d]$. We can see this idea in the new form of the additivity property: $$\omega \big( I+J \big) = \omega \big( I \big) + \omega \big( J \big).$$ We recognize this function as a $1$-cochain!

In light of this approach, let's take a look at the integral theorems of vector calculus. There are many of them and, with at least one for each dimension, maybe too many... Let's proceed from dimension $3$, look at the formulas, and see what they have in common.

Gauss' Theorem: $$\displaystyle \iiint_{R} \operatorname{div}F dV = \displaystyle \iint_{\partial R} F \cdot N dA.$$ Here, the integrals' domains are a solid and its boundary surface respectively.

Green's Theorem: $$\displaystyle \iint_{S} \left( \frac{\partial q}{\partial x} - \frac{\partial p}{\partial y} \right) dA = \displaystyle\int_{\partial S} p dx + q dy. $$ The domains of integration are a plane region and its boundary curve.

Fundamental Theorem of Calculus: $$\displaystyle\int_{[a,b]} F' dx = F \Big|_{a}^b.$$ In the left-hand side, the integrand is $F' dx$. We think of the right-hand side as an integral too: the integrand is $F$. Then the domains of integration are a segment and its two endpoints: $$[a,b] \text{ and } \{a, b\}= \partial [a,b].$$

What do these three have in common? Setting aside possible connections between the integrands, the pattern of the domains of integration is clear. The relation is the same in all these formulas: a region on the left and its boundary on the right. Now, there must be some kind of a relation for the integrands too. The Fundamental Theorem of Calculus suggests a possible answer: a function on the right and its derivative is on the left.
Clearly, for the other two theorems, this simple relation can't possibly apply. We can, however, make sense of this relation if we treat those integrands as differential forms. Then the form on the left is what we call the exterior derivative of the form on the right. Consequently, the theorem can be turned into a definition of this new form. Thus, we have just one general formula which includes all three (and many more):

Stokes Theorem: $$\displaystyle\int_R d \omega = \displaystyle\int_{\partial R} \omega.$$

The relation between $R$ and $\partial R$ is a matter of topology. The relation between $d \omega$ and $\omega$ is a matter of calculus, the calculus of differential forms. Furthermore, as we shall see, the transition from topology to calculus is just algebra!

Visualizing cubical cochains

In calculus, the quantities to be studied are typically real numbers. We choose our ring of coefficients to be $R={\bf R}$. Meanwhile, the locus is typically the Euclidean space ${\bf R}^n$. We choose for now to concentrate on the cubical grid ${\mathbb R}^n$, i.e., the infinite cubical complex acquired by dividing the Euclidean space into cubes. In ${\mathbb R}^1$, these pieces are: points and (closed) intervals, the $0$-cells: $...,\ -3,\ -2, \ -1, \ 0, \ 1, \ 2, \ 3, \ ...$, and the $1$-cells: $...,\ [-2,-1], \ [-1,0],\ [0,1], \ [1,2], \ ...$. In ${\mathbb R}^2$, these parts are: points, intervals, and squares ("pixels"): Moreover, in ${\mathbb R}^2$, we have these cells represented as products: $0$-cells: $\{(0,0)\}, \{(0,1)\}, ...;$ $1$-cells: $[0,1] \times \{0\}$, $\{0\} \times [0,1], ...;$ $2$-cells: $[0,1] \times [0,1], ....$ Recall that within each of these pieces, a cochain is unchanged; i.e., it's a single number. Then, the following is the simplest way to understand these cochains.

Definition. A cubical $k$-cochain is a real-valued function defined on $k$-cells of ${\mathbb R}^n$.

This is how we plot the graphs of cochains in ${\mathbb R}^1$: And these are $0$-, $1$-, and $2$-cochains in ${\mathbb R}^2$: To emphasize the nature of a cochain as a function, we can use arrows: Here we have two cochains: a $0$-cochain with $0\mapsto 2,\ 1\mapsto 4,\ 2\mapsto 3, ...$; and a $1$-cochain with $[0,1]\mapsto 3,\ [1,2]\mapsto .5,\ [2,3]\mapsto 1, ...$. In function notation, these are: a $0$-cochain $Q$ with $Q(0)=2,\ Q(1)=4,\ Q(2)=3, ...$; and a $1$-cochain $s$ with $s\Big([0,1] \Big)=3,\ s\Big([1,2] \Big)=.5,\ s\Big([2,3] \Big)=1, ...$. We can also use letters to label the cells, just as before. Each cell is then assigned two symbols: one is its name (a letter) and the other is the value of the cochain at that location (a number): Here we have: $Q(A)=2,\ Q(B)=4,\ Q(C)=3, ...$; $s(AB)=3,\ s(BC)=.5,\ s(CD)=1, ...$. We can simply label the cells with numbers, as follows:

Exercise. Another way to visualize cochains is with color. Implement this idea with a spreadsheet.

Cochains as integrands

It is common for a student to overlook the distinction between chains and cochains and to speak of the latter as linear combinations of cells. The confusion is understandable because they "look" identical. Frequently, one just assigns numbers to cells in a complex as we did above. The difference is that these numbers aren't the coefficients of the cells in some chain but the values of the $1$-cochain on these cells. The idea becomes explicit when we think in calculus terms: cochains are integrands, and chains are domains of integration. In the simplest setting, we deal with the intervals in the complex of the real line ${\mathbb R}$. Then the cochain assigns a number to each interval to indicate the values to be integrated and the chain indicates how many times the interval will appear in the integral, typically once: Here, we have: $$\begin{array}{lllllllll} h(a)&= \displaystyle\int _a h \\ &=\displaystyle\int _{[0,1]} h &+ \displaystyle\int _{[1,2]} h &+\displaystyle\int _{[2,3]} h &+\displaystyle\int _{[3,4]} h&+\displaystyle\int _{[4,5]} h\\ &=3&+.5&+1&+2&+1. \end{array}$$
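Before moving on, a small Python sketch (added here for illustration) shows how the cochains $Q$ and $s$ above can be stored as functions on cells and evaluated on chains by linearity; the dictionary representation is an implementation choice, not part of the original notation.

```python
# 0-cells of R^1 are the integers m; the 1-cell [m, m+1] is keyed by m.
Q = {0: 2, 1: 4, 2: 3}      # 0-cochain: Q(0)=2, Q(1)=4, Q(2)=3
s = {0: 3, 1: 0.5, 2: 1}    # 1-cochain: s([0,1])=3, s([1,2])=.5, s([2,3])=1

def evaluate(cochain, chain):
    """Evaluate a cochain on a chain {cell: coefficient}, by linearity."""
    return sum(coeff * cochain[cell] for cell, coeff in chain.items())

print(evaluate(s, {0: 1, 1: 1, 2: 1}))   # integral of s over [0,3] -> 4.5
```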
The simplest cochain of this kind is the cochain that assigns $1$ to each interval in the complex ${\mathbb R}$. We call this cochain $dx$. Then any cochain $h$ can be built from $dx$ by multiplying -- cell by cell -- by a discrete function that takes values $3,.5,1,2,1$ on these cells: The main property of this new cochain is: $$\displaystyle\int _{[A,B]}dx=B-A.$$

Exercise. What is the antiderivative of $dx$?

Exercise. Show that every $1$-cochain in ${\mathbb R}^1$ is a "multiple" of $dx$: $h=Pdx$.

Next, ${\mathbb R}^2$: In the diagram, the names of the cells are given in the first row; the values of the cochain on these cells are given in the second row; and the algebraic representation of the cochains is in the third. The second row gives one a compact representation of the cochain when you don't want to name the cells. Cochains are real-valued, linear functions defined on chains: One should recognize the second line as a line integral: $$\psi (h)= \displaystyle\int _h \psi .$$

What is $dx$ in ${\mathbb R}^2$? Naturally, its values on the edges parallel to the $x$-axis are $1$'s and on the ones parallel to the $y$-axis are $0$'s: Of course, $dy$ is the exact opposite. Algebraically, their representations are as follows: $dx\Big([m,m+1]\times \{n\}\Big)=1,\ dx\Big(\{m\} \times [n,n+1] \Big)=0$; $dy\Big([m,m+1]\times \{n\}\Big)=0,\ dy\Big(\{m\} \times [n,n+1] \Big)=1$.

Now we consider a general $1$-cochain: $$P dx + Q dy,$$ where $P,Q$ are discrete functions, not just numbers, that may vary from cell to cell. For example, this could be $P$:

Exercise. Show that every $1$-cochain in ${\mathbb R}^2$ is such a "linear combination" of $dx$ and $dy$.

At this point, we can integrate this cochain. As an example, suppose $S$ is the chain that represents the $2\times 2$ square in this picture going clockwise. The edges are oriented, as always, along the axes. Let's consider the line integral computed along this curve one cell at a time starting at the left lower corner: $$\displaystyle\int _S Pdx = 0\cdot 0 + 1\cdot 0 + (-1)\cdot 1 + 1\cdot 1 + 0\cdot 0 + 2\cdot 0 + 3\cdot (-1) + 1\cdot (-1).$$ We can also compute: $$\displaystyle\int _S Pdy = 0\cdot 1 + 1\cdot 1 + (-1)\cdot 0 + 1\cdot 0 + 0\cdot (-1) + 2\cdot (-1) + 3\cdot 0 + 1\cdot 0.$$ If $Q$ is also provided, the integral $$\displaystyle\int _S Pdx+Qdy$$ is a similar sum.

Next, we illustrate $2$-cochains in ${\mathbb R}^2$: The double integral over this square, $S$, is $$\displaystyle\int _S Adxdy = 1+2+0-1=2.$$ And we can understand $dx \hspace{1pt} dy$ as a $2$-cochain that takes the value of $1$ on each cell:

Exercise. Evaluate $\int_S dxdy$, where $S$ is an arbitrary collection of $2$-cells.
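The cell-by-cell evaluation of $\displaystyle\int _S Pdx+Qdy$ can be mirrored in a few lines of Python; since the figure is not reproduced here, the edge values and orientation coefficients below are arbitrary stand-ins, not the values from the picture.

```python
# Each edge of the 1-chain carries the values of P and Q on that edge, the
# values of dx, dy there (1,0 for a horizontal edge; 0,1 for a vertical one),
# and an orientation coefficient +1/-1.

def line_integral(edges):
    return sum(c * (P * dx + Q * dy) for (P, Q, dx, dy, c) in edges)

square = [
    (0, 1, 1, 0, +1),   # bottom edge, traversed along the x-axis
    (2, 0, 0, 1, +1),   # right edge, traversed along the y-axis
    (3, 1, 1, 0, -1),   # top edge, traversed against the x-axis
    (1, 2, 0, 1, -1),   # left edge, traversed against the y-axis
]
print(line_integral(square))   # 0 + 0 + (-1)*3 + (-1)*2 = -5
```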
The algebra of cochains

We already know that the cochains are organized into vector spaces, one for each degree/dimension. Let's review this first. If $p,q$ are two cochains of the same degree $k$, it is easy to define algebraic operations on them. First, their addition. The sum $p + q$ is a cochain of degree $k$ too and is computed as follows: $$(p+q)(a) := p(a) + q(a),$$ for every $k$-cell $a$. As an example, consider two $1$-cochains, $p,q$. Suppose these are their values defined on the $1$-cells (in green): Then $p+q$ is found by $$1+1=2,\ -1+1=0,\ 0+2=2,\ 3+0=3,$$ as we compute the four values of the new cochain one cell at a time. Next, scalar multiplication is also carried out cell by cell: $$(\lambda p)(a) := \lambda p(a), \ \lambda \in {\bf R},$$ for every $k$-cell $a$. We know that these operations satisfy the required properties: associativity, commutativity, distributivity, etc. Consequently, we have a vector space: $$C^k=C^k({\mathbb R}^n),$$ the space of $k$-cochains on the cubical grid of ${\bf R}^n$.

There is, however, an operation on cochains that we haven't seen yet. Can we make $dxdy$ from $dx$ and $dy$? The answer is provided by the wedge product of cochains: $$dxdy=dx\wedge dy.$$ Here we have: a $1$-cochain $dx \in C^1({\mathbb R}_x)$ defined on the horizontal edges, a $1$-cochain $dy \in C^1({\mathbb R}_y)$ defined on the vertical edges, and a $2$-cochain $dxdy \in C^2({\mathbb R}^2)$ defined on the squares. But squares are products of edges: $$\alpha=a \times b.$$ Then we simply set: $$(dx\wedge dy) (a\times b):=dx(a)\cdot dy(b).$$ What about $dydx$? To match what we know from calculus: $$\displaystyle\int _\alpha dy dx=-\displaystyle\int_\alpha dxdy,$$ we require anti-commutativity of cubical cochains under wedge products: $$dy\wedge dx =-dx\wedge dy.$$

Now, suppose we have two arbitrary $1$-cochains $p,q$ and we want to define their wedge product on the square $\alpha:= a\times b$. We can't use the simplest definition: $$(p \wedge q)(a \times b) \stackrel{?}{=} p(a) \cdot q(b) ,$$ as it fails to be anti-commutative: $$(q \wedge p)(a \times b) = q(a) \cdot p(b) = p(b) \cdot q(a).$$ Since we need both of these terms: $$p (a) q(b) \quad p (b) q(a),$$ let's combine them.

Definition. The wedge product of two $1$-cochains is a $2$-cochain given by $$(p \wedge q)(a \times b):=p (a) q(b) - p (b) q(a).$$

The minus sign is what gives us the anti-commutativity: $$(q \wedge p)(a \times b)=q (a) p(b) - q (b) p(a)=-(p (a) q(b) - p (b) q(a)).$$

Proposition. $$dx \wedge dx=0,\ dy \wedge dy=0.$$

Here is an illustration of the relation between the product of cubical chains and the wedge product of cubical cochains: The general definition is as follows. Recall that, for our cubical grid ${\mathbb R}^n$, the cells are the cubes given as products: $$Q=\displaystyle\prod _{k=1}^nA _k,$$ with each $A_k$ either a vertex or an edge in the $k$th component of the space. We can derive the formula for the wedge product in terms of these components. If we omit the vertices, a $(p+q)$-cube can be rewritten as $$Q=\displaystyle\prod _{i=1}^{p}I _i \times \displaystyle\prod _{i=p+1}^{p+q}I _i,$$ where $I_i$ is its $i$th edge. The two factors are a $p$-cube and a $q$-cube respectively and can be the inputs of a $p$-cochain and a $q$-cochain respectively.

Definition. The wedge product of a $p$-cochain and a $q$-cochain is a $(p+q)$-cochain given by its value on the $(p+q)$-cube, as follows: $$\big( \varphi ^p \wedge \psi ^q \big)(Q):= \displaystyle\sum _s (-1)^{\pi (s)}\varphi ^p\Big(\displaystyle\prod _{i=1}^{p}I _{s(i)}\Big) \cdot \psi ^q\Big(\displaystyle\prod _{i=p+1}^{p+q}I _{s(i)}\Big),$$ with summation over all permutations $s\in {\mathcal S}_{p+q}$ with $\pi (s)$ the parity of $s$ (the superscripts are the degrees of the cochains).

Exercise. Verify that $Pdx=P\wedge dx$. Hint: what is the dimension of the space?
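A quick numerical check of the definition of the wedge product of two $1$-cochains, with arbitrary values on the edges $a$ (horizontal) and $b$ (vertical) of a single square:

```python
# (p ^ q)(a x b) = p(a) q(b) - p(b) q(a) for 1-cochains p, q on the
# horizontal edge a and vertical edge b of the square a x b.

def wedge(p, q, a, b):
    return p[a] * q[b] - p[b] * q[a]

p = {'a': 1, 'b': 2}
q = {'a': 3, 'b': -1}
print(wedge(p, q, 'a', 'b'))   # 1*(-1) - 2*3 = -7
print(wedge(q, p, 'a', 'b'))   # +7: anti-commutativity
print(wedge(p, p, 'a', 'b'))   # 0:  p ^ p = 0
```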
Proposition. The wedge product satisfies the skew-commutativity: $$\varphi ^m \wedge \psi ^n= (-1)^{mn} \psi ^n \wedge \varphi ^m.$$

Under this formula, we have the anti-commutativity when $m=n=1$, as above. Unfortunately, the wedge product isn't associative!

Exercise. (a) Give an example of this: $$\phi ^1 \wedge (\psi ^1 \wedge \theta ^1) \ne (\phi ^1 \wedge \psi ^1) \wedge \theta ^1 .$$ (b) For what class of cochains is the wedge product associative?

The crucial difference between the linear operations and the wedge product is that the former two act within the space of $k$-cochains: $$+,\cdot : C^k \times C^k \to C^k;$$ while the latter acts outside: $$\wedge : C^k \times C^m \to C^{k+m}.$$ We can make both operate within the same space if we define them on the graded space of all cochains: $$C^*:=C^0 \oplus C^1 \oplus...$$

The exterior derivative of $0$- and $1$-cochains in dimension $2$

Next we consider the case of the space of dimension $2$ and cochains of degrees $0$ and $1$. Given a $0$-cochain $f$ (in red), we compute its exterior derivative $df$ (in green): Once again, it is computed by taking differences. Let's make this computation more specific. We consider the differences horizontally (orange) and vertically (green): According to our definition, we have: (orange) $df\Big([a,a+1] \times \{b\}\Big) := f\Big(\{(a+1,b)\}\Big) - f\Big(\{(a,b)\}\Big)$ (green) $df\Big(\{a \} \times [b,b+1] \Big) := f\Big(\{(a, b+1)\}\Big) - f\Big(\{(a,b)\}\Big)$. Therefore, we have: $$df = \langle \operatorname{grad} f , dA \rangle,$$ where $$dA := (dx,dy), \quad \operatorname{grad} f := (d_xf,d_yf).$$ The notation is justified if we interpret the above as "partial exterior derivatives": $d_xf \Big([a,a+1] \times \{b\}\Big) := f(a+1,b) - f(a,b),$ $d_yf \Big(\{a \} \times [b,b+1] \Big) := f(a, b+1) - f(a,b)$.

What about the higher degree cochains? Let's start with $1$-cochains in ${\mathbb R}^2$. The exterior derivative is meant to represent the change of the values of the cochain as we move around the space. This time, we have possible changes as we move in both horizontal and vertical directions. Then we will be able to express these quantities by a single number as a combination of the changes: the horizontal change $\ \pm \ $ the vertical change. If we concentrate on a single square, these differences are computed on the opposite edges of the square. Just as in the last subsection, the question arises: where to assign this value? Conveniently, the resulting value can be given to the square itself. We will justify the negative sign in the formula below. With each $2$-cell given a number in this fashion, the exterior derivative of a $1$-cochain is a $2$-cochain.

Exercise. Define and compute the exterior derivative of $1$-cochains in ${\mathbb R}$.

Let's consider the exterior derivative for a $1$-cochain defined on the edges of this square oriented along the $x$- and $y$-axes:

Definition. The exterior derivative $d\varphi$ of a $1$-cochain $\varphi$ is defined by its value at each $2$-cell $\tau$ as the difference of the changes of $\varphi$ with respect to $x$ and $y$ along the edges of $\tau$; i.e., $$d \varphi(\tau) = \Big(\varphi(c) - \varphi(a) \Big) - \Big( \varphi(b) - \varphi(d) \Big).$$

Why minus? Let's rearrange the terms: $$d \varphi(\tau) = \varphi(d) + \varphi(c) - \varphi(b) - \varphi(a).$$ What we see is that we go full circle around $\tau$, counterclockwise with the correct orientations. Of course, we recognize this as a line integral. We can read this formula as follows: $$\displaystyle\int_{\tau}d \varphi=\displaystyle\int _{\partial \tau} \varphi.$$ Algebraically, it is simple: $$d \varphi(\tau) = \varphi(d) + \varphi(c) + \varphi(-b) + \varphi(-a)= \varphi(d+c-b -a) = \varphi(\partial\tau). $$ Thus, the resulting interaction of the operators of exterior derivative and boundary takes the same form as for dimension $1$ discussed above: $$d\varphi =\varphi\partial.$$ Once again, it is an instance of the Stokes Theorem, which is used as the definition of $d$.
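The computation $d\varphi(\tau)=\varphi(\partial\tau)$ can be spot-checked in code; the labeling below (vertical edges $a$, $c$ and horizontal edges $d$, $b$) follows the formula above, and the edge values are arbitrary since the figure is not reproduced here.

```python
# d phi on a square tau with vertical edges a, c and horizontal edges d, b:
# d phi(tau) = (phi(c) - phi(a)) - (phi(b) - phi(d)) = phi(d + c - b - a).

def d_phi(phi):
    return (phi['c'] - phi['a']) - (phi['b'] - phi['d'])

phi = {'a': 1.0, 'b': 0.5, 'c': 2.0, 'd': -1.0}
print(d_phi(phi))                                    # -0.5
print(phi['d'] + phi['c'] - phi['b'] - phi['a'])     # same: phi(boundary)
```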
Let's represent our $1$-cochain as $$\varphi = A dx + B dy,$$ where $A,B$ are the coefficient functions of $\varphi$: $A$ gives the numbers assigned to the horizontal edges: $\varphi (b),\varphi (d)$, and $B$ gives the numbers assigned to the vertical edges: $\varphi (a),\varphi (c)$. Now, if we think one axis at a time, we use the last subsection and conclude that $A$ is a $0$-cochain with respect to $y$ and $dA=\big(\varphi(b) - \varphi(d) \big)dy$, and $B$ is a $0$-cochain with respect to $x$ and $dB=\big(\varphi(c) - \varphi(a) \big)dx$. Now, from the definition we have: $$\begin{array}{llllllll} d \varphi &= \Big(\big(\varphi(c) - \varphi(a) \big) - \big( \varphi(b) - \varphi(d) \big)\Big)dxdy\\ &= \big(\varphi(c) - \varphi(a) \big)dxdy - \big( \varphi(b) - \varphi(d) \big)dxdy\\ &= \big(\varphi(c) - \varphi(a) \big)dxdy + \big( \varphi(b) - \varphi(d) \big)dydx\\ &= \Big( \big(\varphi(c) - \varphi(a) \big)dx\Big)dy + \Big(\big( \varphi(b) - \varphi(d) \big)dy\Big)dx\\ &= dB\wedge dy+dA \wedge dx. \end{array}$$ We have proven the following.

Theorem. $$d (A dx + B dy) = dA \wedge dx + dB \wedge dy.$$

Exercise. Show how the result matches Green's Theorem.

In these two subsections, we see the same pattern: if $\varphi \in C^k$ then $d \varphi \in C^{k+1}$ and $d \varphi$ is obtained from $\varphi$ by applying $d$ to each of the coefficient functions involved in $\varphi$.

Representing cubical cochains with a spreadsheet

This is how $0$-, $1$-, and $2$-cochains are presented in a spreadsheet: The difference of $k$-cochains from $k$-chains is only that this time there are no blank $k$-cells! The exterior derivative in dimensions $1$ and $2$ can be easily computed according to the formulas provided above. The only difference from the algebra we have seen is that here we have to present the results in terms of the coordinates with respect to the cells. They are listed at the top and on the left. The case of ${\mathbb R}$ is explained below. The computation is shown on the right and explained on the left: The Excel formulas are hidden but only these two need to be explained: first, "$B=\partial a$, $0$-chain numbers $B\underline{\hspace{.2cm}}i$ assigned to $0$-cells, differences of adjacent values of a" is computed by $$\texttt{ = R[-4]C - R[-4]C[-1]}$$ second, "$df\underline{\hspace{.2cm}}i=df(a\underline{\hspace{.2cm}}i)$, $1$-cochain, differences of adjacent values of $f$ -- the output" is computed by $$\texttt{ = R[-16]C - R[-16]C[1]}$$ Thus, the exterior derivative is computed in two ways. We can see how the results match.

Exercise. Create a spreadsheet for "antidifferentiation".

Exercise. (a) Create a spreadsheet that computes the exterior derivative of $1$-cochains in ${\mathbb R}^2$ directly. (b) Combine it with the spreadsheet for the boundary operator to confirm the Stokes Theorem.

A bird's-eye view of calculus

We now have access to a bird's-eye view of the topological part of discrete calculus, as follows.
Suppose we are given the cubical grid ${\mathbb R}^n$ of ${\bf R}^n$. On this complex, we have the vector spaces of $k$-chains $C_k$. Combined with the boundary operator $\partial$, they form the chain complex $\{C_*,\partial\}$ of ${\mathbb R}^n$: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{rrrrrrrrrrrrr} 0& \ra{\partial_{n+1}=0} & C_n & \ra{\partial_n}& ... &\ra{\partial_1} & C_0 &\ra{\partial_0=0} & 0 . \end{array} $$ The next layer is the cochain complex $\{C^*,d\}$, formed by the vector spaces of cochains $C^k=(C_k)^*,\ k=0,1, ...$: $$\newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{rrrrrrrrrrrrr} 0& \la{d_n=0} & C^n & \la{d_{n-1}} & ... & \la{d_0} & C^0 &\la{d_{-1}=0} &0 . \end{array} $$ Here $d$ is the exterior derivative. The latter diagram is the "dualization" of the former as explained above: $$d\varphi (x)=\varphi\partial (x).$$ The shortest version of this formula is as follows.

Theorem (Stokes Theorem). The exterior derivative is the dual of the boundary operator: $$\begin{array}{|c|} \hline \\ \quad \partial ^*=d \quad \\ \\ \hline \end{array}$$

Rather than using it as a theorem, we have used it as a formula that defines the exterior derivative. The main properties of the exterior derivative follow.

Theorem. The operator $d: C^k \to C^{k+1}$ is linear.

Theorem (Product Rule - Leibniz Rule). For a $k$-cochain $\varphi$, $$d(\varphi \wedge \psi) = d \varphi \wedge \psi + (-1)^k \varphi \wedge d \psi .$$

Exercise. Prove the theorem for dimension $2$.

Theorem (Double Derivative Identity). $dd : C^k \to C^{k+2}$ is zero.

Proof. We prove only $dd=0 : C^0({\mathbb R}^2) \to C^2({\mathbb R}^2)$. Suppose $A,B,C,D$ are the values of a $0$-cochain $h$ at these vertices: We compute the values of $dh$ on these edges, as differences. We have: $$-(B-A) + (C-D) + (B-C) - (A-D) = 0,$$ where the first two are vertical and the second two are horizontal. $\blacksquare$

In general, the property follows from the double boundary identity. The proof indicates that the two mixed partial derivatives are equal: $$\Phi_{xy} = \Phi_{yx},$$ just as in Clairaut's Theorem.

Exercise. Prove $dd : C^1({\mathbb R}^3) \to C^3({\mathbb R}^3)$ is zero.

Exercise. Compute $dd : C^1 \to C^3$ for the following cochain:
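As a closing illustration, the double derivative identity for $dd: C^0({\mathbb R}^2)\to C^2({\mathbb R}^2)$ can be spot-checked numerically; the corner values are arbitrary, and the expression is exactly the one used in the proof above.

```python
# dd h on a square vanishes for any corner values A, B, C, D of a 0-cochain h,
# since the edge differences cancel around the square.
A, B, C, D = 2.0, 4.0, 3.0, -1.0
print(-(B - A) + (C - D) + (B - C) - (A - D))   # always 0.0
```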
Bolzano-Weierstrass Theorem/Proof 2

Theorem

Every bounded sequence of real numbers has a convergent subsequence.

Proof

Let $\left \langle {x_n} \right \rangle_{n \in \N}$ be a bounded sequence in $\R$. By definition there are real numbers $c, C \in \R$ such that $c < x_n < C$ for all $n \in \N$. Then at least one of the sets: $\left\{{x_n : c < x_n < \dfrac{c + C} 2 }\right\}, \left\{{x_n : \dfrac{c + C} 2 < x_n < C }\right\}, \left\{{x_n : x_n = \dfrac{c + C} 2 }\right\}$ contains infinitely many elements. If the set $\left\{{x_n : x_n = \dfrac{c + C} 2 }\right\}$ is infinite there's nothing to prove, as it yields a constant, hence convergent, subsequence. If this is not the case, choose the first element of the sequence lying in one of the infinite sets, say $x_{k_1}$. Repeat this process for $\left \langle {x_n} \right \rangle_{n > k_1}$, with $[c, C]$ replaced by the half-interval containing infinitely many elements. As a result we obtain a subsequence $\left \langle {x_{k_n}} \right \rangle_{n \in \N}$. By construction $\left \langle {x_{k_n}} \right \rangle_{n \in \N}$ is a Cauchy sequence, since the lengths of the nested intervals halve at each step, and therefore converges. $\blacksquare$

Also see: Heine-Borel Theorem

Source of Name

This entry was named for Bernhard Bolzano and Karl Weierstrass.
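The halving argument can be illustrated computationally; the sketch below runs the bisection on a finite prefix of a bounded sequence and heuristically treats "holds more remaining terms" as a stand-in for "contains infinitely many elements", so it is a demonstration of the mechanics, not a proof.

```python
# Illustrative only: a finite-horizon simulation of the halving argument.
import math

x = [math.sin(n) for n in range(10_000)]   # a bounded sequence in (-1, 1)
c, C = -1.0, 1.0
picks = []
last = -1
for _ in range(15):
    m = (c + C) / 2
    left = [i for i in range(last + 1, len(x)) if c <= x[i] <= m]
    right = [i for i in range(last + 1, len(x)) if m < x[i] <= C]
    pool = left if len(left) >= len(right) else right
    if not pool:
        break
    if pool is left:
        C = m          # keep the lower half-interval
    else:
        c = m          # keep the upper half-interval
    last = pool[0]     # first term of the sequence in the kept half
    picks.append(x[last])
print(picks[-3:])      # successive picks cluster in the shrinking interval
```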
A fast source-oriented image clustering method for digital forensics
Chang-Tsun Li & Xufeng Lin (ORCID: orcid.org/0000-0002-3400-8700)
EURASIP Journal on Image and Video Processing volume 2017, Article number: 69 (2017)

We present in this paper an algorithm that is capable of clustering images taken by an unknown number of unknown digital cameras into groups, such that each contains only images taken by the same source camera. It first extracts a sensor pattern noise (SPN) from each image, which serves as the fingerprint of the camera that has taken the image. The image clustering is performed based on the pairwise correlations between camera fingerprints extracted from images. During this process, each SPN is treated as a random variable and a Markov random field (MRF) approach is employed to iteratively assign a class label to each SPN (i.e., random variable). The clustering process requires no a priori knowledge about the dataset from the user. A concise yet effective cost function is formulated to allow different "neighbors" different voting power in determining the class label of the image in question depending on their similarities. Comparative experiments were carried out on the Dresden image database to demonstrate the advantages of the proposed clustering algorithm.

Nowadays, digital imaging devices, especially mobile phones with built-in cameras, have become an essential part of modern life. They enable us to record every detail of our life anytime and anywhere. Meanwhile, the rise of social media, such as Facebook, Twitter, and Instagram, has fostered and stimulated our interest in sharing photos and videos of life moments over social networks using mobile imaging devices. On the one hand, social media affords us a new way to express friendship, intimacy, and community. But on the other hand, the difficulty of verifying the profiles or identities of users on social networks also gives rise to cyber crime. A typical circumstance is that a number of images are collected under proper legal procedures from social networks for forensic analysis, but the devices which have been used to take these images are not available. If those images can be clustered into a number of groups, each including the images acquired by the same camera, the forensic investigators will be able to link the images to particular devices and be in a better position to associate different social media accounts belonging to a person of interest. We refer to this task as source-oriented image clustering. This can be particularly useful in a variety of forensic cases, e.g., identifying fake user profiles, finding stolen camera devices, or defending against Internet defamation. Fortunately, with the advances in multimedia forensics, we are able to extract "device fingerprints" from images and videos and trace back to their source device. By resorting to device fingerprints extracted from images, source-oriented image clustering can be divided into two main sequential operations: the extraction of device fingerprints from images followed by an image clustering operation based on the device fingerprints. The main challenges in this scenario are: (1) the investigator does not have the cameras that have taken the photos to generate quality reference device fingerprints; (2) no prior knowledge about the number and types of the cameras is available; and (3) given the sheer number of photos, analyzing each image in its full size is computationally prohibitive.
The challenges of source-oriented image clustering and related works

There are many factors that affect the performance of the clustering system. One is the accuracy of the fingerprints extracted from images. Various forms of device fingerprints such as sensor pattern noise (SPN) [1–12], camera response function [13], re-sampling artifacts [14], color filter array (CFA) interpolation artifacts [15, 16], JPEG compression [17], and lens aberration [12, 18] have been proposed in recent years. Other device and image attributes such as binary similarity measures, image quality measures, and higher order wavelet statistics have also been adopted for identifying source imaging devices [19–22]. While many methods [13–16] make specific assumptions in their applications, SPN-based methods [1–12] do not require such assumptions to be satisfied and thus have drawn much more attention. Another merit of SPN is that it is unique to each device, which means it is capable of differentiating individual devices of the same model [1, 3, 5, 11]. These merits make SPN a good candidate for various digital forensic applications.

Another factor is the system's effectiveness in clustering images based on device fingerprints. The main objective in clustering applications is to group samples into clusters of similar features (e.g., the SPNs). Among a wide variety of methods, k-means [23, 24] and fuzzy c-means [25–27] have been intensively employed in various applications. However, classical k-means and fuzzy c-means clustering methods rely on the user to provide the number of clusters and initial centroids. Moreover, they are sensitive to outliers, and the computational complexities are very high for high-dimensional data, which make them unsuitable for clustering high-dimensional camera fingerprints. The difficulty of specifying an appropriate cluster number also exists in graph clustering-based methods, such as [28–30]. In [31], the clustering of camera fingerprints is formulated as a weighted graph clustering problem, where SPNs are considered as the vertices in a graph, while the weight of each edge is represented by the correlation between the SPN pair connected by the edge. A k-class spectral clustering algorithm [32] is employed to group the vertices into a number of partitions. To determine the optimal cluster number, the same spectral clustering algorithm is repeated for different values of k until the smallest size of the resultant clusters equals 1, i.e., one singleton cluster is generated. However, it is easy to form singleton clusters when some SPNs are severely contaminated by other interferences. So the feasibility of such a manner of determining the optimal cluster number is still an issue. To work without knowing the number of clusters, the agglomerative hierarchical clustering algorithms [33, 34] were adopted to cluster SPNs. Starting with the pairwise correlation matrix, the algorithms initially consider each SPN as a cluster and iteratively merge the two most similar clusters according to the average linkage criterion. At each iteration, an overall silhouette coefficient, which measures the cohesion inside clusters and the separation among clusters, is calculated to measure the quality of partition. This process stops when all SPNs have been merged into one cluster and the partition corresponding to the best clustering quality is deemed as the final partition. These two algorithms are relatively slow, because their time complexity is \(\mathcal {O}(N^{2}\log N)\), where N is the number of SPNs.
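A simplified sketch of the merging step of these hierarchical algorithms is given below; it implements only the average-linkage merge on a toy correlation matrix and omits the silhouette-based stopping rule of [33, 34].

```python
# Repeatedly merge the pair of clusters with the highest average linkage
# (mean pairwise correlation across the two clusters). Toy data only.
import numpy as np

def average_linkage_merge(corr, n_merges):
    clusters = [[i] for i in range(corr.shape[0])]
    for _ in range(n_merges):
        best, pair = -np.inf, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                link = np.mean([corr[a, b] for a in clusters[i]
                                           for b in clusters[j]])
                if link > best:
                    best, pair = link, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)   # merge the most similar pair
    return clusters

rng = np.random.default_rng(0)
corr = rng.random((6, 6)); corr = (corr + corr.T) / 2   # toy symmetric matrix
print(average_linkage_merge(corr, 3))
```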
Another limitation, which also exists in other more advanced hierarchical clustering-based algorithms such as CURE [35], ROCK [36], and CHAMELEON [37], is that once an object is assigned to a cluster, it will not be considered again in the ensuing process [38]. In the context of SPN clustering, a misclassification at an early stage is likely to induce error propagation in the succeeding merges and produce large clusters containing SPNs of different cameras. Since the intrinsic quality of SPNs depends on many complex factors [11, 12, 39, 40], the average correlation between SPNs of one camera may be significantly different from that of other cameras. Therefore, SPN-based image clustering is a typical problem of finding clusters of different densities. The classical density-based algorithms, such as DBSCAN [41] and DENCLUE [42], are not applicable in this scenario, because their density-based definition of core points cannot identify the core points of clusters of varying density. To overcome this problem, a shared nearest neighbor (SNN)-based clustering algorithm was proposed in [43] to find clusters of different sizes and densities. However, choosing appropriate parameters for the algorithm is not easy if the data to be clustered is not well understood.

Considering the fact that the estimation of an SPN improves as more images from the same camera are involved in the calculation, Bloy [44] presented an ad hoc algorithm for clustering images based on SPN. The algorithm starts by selecting two images at random whose SPN correlation is greater than an adaptive threshold that gradually increases with the number of SPNs N in the cluster. The average SPN of this cluster is used as the cluster centroid to attract more images whose SPN correlation with the centroid is greater than the adaptive threshold. This procedure repeats until the current cluster has grown to a pre-specified size (i.e., 50) or the entire dataset has been exhausted. If the cluster grows to the pre-specified size before the entire dataset is exhausted, a second pass through the dataset is conducted to include the images with similar SPNs into the cluster without updating the centroid and the threshold. Once a cluster is formed, the algorithm repeats to form new clusters until no further clustering is possible. The algorithm allows the threshold to increase, but the adaptive threshold is calculated from a quadratic curve whose parameters are obtained by fitting the correlation values of four Canon cameras, and the threshold's quadratic dependence on the number of SPNs is questionable. A clustering algorithm that requires no a priori knowledge about the nature of the SPNs and the threshold is certainly more desirable.

To overcome the infeasibility of the manner of determining the optimal cluster number in [31], Amerini et al. [45] proposed a blind SPN clustering algorithm based on the normalized cut criterion [46]. Similar to [31], SPNs are considered as the vertices in a graph, and the weight of each edge measures the similarity between the two vertices connected by the edge. With the pairwise similarities between SPNs, the graph is bipartitioned recursively by finding the splitting point that minimizes the corresponding normalized cut. This recursive bipartition terminates when the mean value of intra-cluster weights is less than a pre-defined threshold \(T_h\) for all clusters. \(T_h\) is experimentally set to the value giving the best average performance on five datasets in terms of ROC curves.
This normalized cut-based algorithm is fast and was reported to have better performance than [31] and [33] on datasets composed of hundreds of images taken by a few cameras. More recently, Marra et al. introduced a two-step clustering algorithm in [47]. In the first step, the pairwise correlation matrix of SPNs is adjusted by subtracting a constant \(\alpha=\mu_{0}+3\sigma_{0}\), where \(\mu_{0}\) and \(\sigma_{0}\) are the mean and standard deviation, respectively, of the inter-camera correlations obtained from a training set. Then, the adjusted correlation matrix is fed into the correlation clustering algorithm [48] to generate a large number of over-partitioned clusters. In the second step, an ad hoc refinement procedure is performed to progressively merge the clusters generated in the first step. The refinement step separates the clusters into two sets, a set of "large" clusters and a set of "small" clusters. If the majority of the SPNs in a small cluster are similar to (i.e., by comparison to a pre-defined threshold β) the centroid of a large cluster, the small cluster is merged into the large cluster and the centroid is updated accordingly. This process continues until no further merge can be performed. This algorithm was reported to outperform the state-of-the-art algorithms almost uniformly [47], but it requires all the SPNs to be retained in the RAM for efficiently updating the centroids of clusters, which makes it unsuitable for relatively large datasets.

We presented our preliminary study in [49], where each SPN is treated as a random variable and a Markov random field (MRF) is used to iteratively update the class labels. Based on the pairwise correlation matrix, a reference similarity is determined using the k-means (k=2) clustering algorithm, and a membership committee, which consists of the most similar SPNs of each SPN, is established. The similarity values and the class labels assigned to the members of the membership committee are used to estimate the likelihood probability of assigning each class label to the corresponding SPN. Then, the class label with the highest probability is assigned to the SPN. This process terminates when there are no more class label changes in two consecutive iterations. The algorithm performs well on small datasets, but its performance deteriorates as the size of the dataset grows. Moreover, it is very slow, because the likelihood probability involves all the class labels in the membership committee and has to be calculated for every SPN in every iteration. The time complexity is nearly \(\mathcal{O}(N^{3})\) in the first iteration, which makes it computationally prohibitive for large datasets. Therefore, a faster and more reliable algorithm that can handle large datasets is desirable for source-oriented image clustering.

In view of the aforementioned challenges in the context of device fingerprint-based image clustering, we conduct an in-depth study based on the work in [49] and propose a fast clustering framework for images of unknown sources. It makes several major contributions. First, we propose a fast and reliable algorithm for clustering camera fingerprints. Aiming at overcoming the limitations of the work in [49], the proposed algorithm makes the following improvements: (1) redefining the similarity in terms of the shared nearest neighbors; (2) speeding up the calculation of the reference similarity; (3) refining the determination of the membership committee; (4) reducing the complexity of calculations in each iteration; and (5) accelerating the speed of convergence.
Not only is the presentation of the clustering methodology more comprehensive and detailed in this work, but the proposed algorithm is also much more efficient and reliable than that in [49]. Second, we discuss in detail the related SPN clustering algorithms, namely the spectral, the hierarchical, the shared nearest neighbor, the normalized cut, and our previous MRF-based clustering methods [49]. These algorithms are evaluated and compared on real-world databases to provide insight into the pros and cons of each algorithm and offer a valuable reference for practical applications. Finally, we evaluate the proposed algorithm on a large and challenging image database which contains 7400 images taken by 74 cameras, covering 27 camera models and 14 brands, while the database used in [49] includes only six cameras. Furthermore, the quality of clustering is characterized by the F1-measure and the Adjusted Rand Index, which are more suitable for evaluating clustering results than only the true positive rate or accuracy used in [31, 33, 34, 49].

Outline of this paper

The remainder of this work is organized as follows. The formulation and discussion of the proposed algorithm are given in Sections 2 and 3, respectively. The parameter selection of the proposed algorithm as well as the comparison with other related works is presented in Section 4. Finally, Section 5 concludes this work.

The proposed method

To facilitate the clustering, the SPN of a small block at the center of each of the given N images is extracted. An N×N correlation matrix is established, with element (i,j) representing the correlation between the SPNs of images i and j. Then, an alternative similarity matrix in terms of shared nearest neighbors is constructed from the correlation matrix. By making the pairwise similarities available in the matrix, the system does not have to repeat the similarity calculation when the similarity of the same pair of images is required again in the iterative clustering process. Although the number of image classes (cameras) can be much greater than 2, for each image there are only two types of similarity: intra-class and inter-class. Based on the similarity matrix, each SPN is treated as a random variable to be assigned a class label, and a reference similarity r is estimated to serve as a rough boundary between the intra- and inter-class similarities in order to encode a cost function using a Markov random field (MRF). Separating the similarities into intra- and inter-class similarities enables us to find clusters of different densities, because the average intra-class similarity indicates the "density" of the cluster that each SPN belongs to. In the following subsections, we will provide the details of the proposed algorithm.

SPN extraction

Given an image I, the following equation is used to extract the SPN, n, from a block of the size specified by the user at the center of the image:

$$ n=I-\mathcal{F}(I), $$

where \(\mathcal{F}\) is the denoising algorithm proposed in [50]. Each SPN is further preprocessed by Wiener filtering (WF) in the DFT domain [2] to suppress the non-unique artifacts. Note that the reason we do not use our recent preprocessing scheme in [11] is that the peaks in the DFT spectrum of a single SPN are not as distinct as those in the spectrum of a clean reference SPN.
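A minimal Python sketch of this extraction step follows. Only the central cropping and the subtraction come directly from the text; the denoiser below is a generic wavelet filter from scikit-image standing in for the algorithm \(\mathcal{F}\) of [50], and the DFT-domain Wiener step is omitted.

```python
import numpy as np
from skimage.restoration import denoise_wavelet  # stand-in for the denoiser F of [50]

def extract_spn(image, block=1024):
    """Estimate the sensor pattern noise n = I - F(I) from a central block.

    `image` is a 2-D float array (e.g., the green channel). The paper applies
    the denoising filter of [50] plus DFT-domain Wiener filtering [2]; the
    wavelet denoiser here is a generic substitute for F."""
    h, w = image.shape
    top, left = (h - block) // 2, (w - block) // 2
    crop = image[top:top + block, left:left + block].astype(np.float64)
    return crop - denoise_wavelet(crop, rescale_sigma=True)
```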
Establishment of similarity matrix

During the process of clustering, similarities between SPNs are used to determine the class membership of each image (or SPN). As will be seen in Section 2.3.4, the process of class label update, which involves the calculation of similarities between SPNs, has to be iterated until the stop criterion is met. However, repeating the similarity calculation for the same SPN pairs is time-consuming. Therefore, the purpose of establishing an N×N similarity matrix is to calculate the similarity only once for each SPN pair. When a similarity value is needed at any stage, it is retrieved from the similarity matrix. The similarity between any two SPNs \(n_i\) and \(n_j\) is initially measured by the normalized cross correlation (NCC)

$$ \rho_{ij}=\frac{(n_{i}-\bar{n}_{i})\cdot(n_{j}-\bar{n}_{j})}{\| n_{i}-\bar{n}_{i}\| \cdot \| n_{j}-\bar{n}_{j}\|}, \quad i,j \in [1,N], $$

where ∥·∥ is the \(L_2\) norm and the mean value is denoted with a bar. In this way, we establish an N×N correlation matrix ρ, with element \(\rho_{ij}\) indicating the closeness between SPNs \(n_i\) and \(n_j\). Because the correlation matrix is symmetrical and the elements (self-correlations) along the diagonal are always 1, only N×(N−1)/2 correlations need to be calculated. However, due to the varying qualities of SPNs of different cameras, the average correlation between SPNs of one camera may be different from that of another camera. As exemplified in Fig. 1a, the average correlation of the class highlighted by the green rectangle is higher than that of the class highlighted by the blue rectangle. This problem makes the clustering of SPNs more challenging. An alternative definition of similarity in terms of shared nearest neighbors, as proposed in [36, 43, 51], is a promising way to overcome this problem. Specifically, the similarity \(W_{ij}\) between two SPNs \(n_i\) and \(n_j\) is redefined as

$$ W_{ij}=|\mathbb{N}(n_{i})\cap \mathbb{N}(n_{j})|, $$

where \(\mathbb{N}(n_{i})\) and \(\mathbb{N}(n_{j})\) are, respectively, the κ-nearest neighbors of \(n_i\) and \(n_j\) constructed from the correlation matrix ρ. So \(W_{ij}\) measures the number of κ-nearest neighbors shared by \(n_i\) and \(n_j\). The constructed similarity matrix in terms of shared nearest neighbors (SNN) is shown in Fig. 1b, where the divergences of similarities in different classes have been significantly reduced. Also note that, even when the SNN similarity is applied, the intra-class connectivity remains weak for the images taken by the Casio EX-Z150, as highlighted in the red rectangle. The underlying reason is the irregular geometric distortion related to the different focal length settings used when capturing different images, as reported in [52].

Fig. 1 Pairwise similarities of 1000 images taken by 25 cameras (each responsible for 40 images). a Correlation matrix ρ. b Similarity matrix W in terms of shared κ-nearest neighbors (κ=15)
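A sketch of both similarity computations in Python, assuming the SPNs fit in memory as flattened vectors; the first function computes the NCC matrix ρ of Eq. (2) and the second the SNN similarity W of Eq. (3).

```python
import numpy as np

def ncc_matrix(spns):
    """N x N matrix of normalized cross correlations between flattened SPNs."""
    X = np.stack([s.ravel() - s.ravel().mean() for s in spns])  # zero-mean rows
    X /= np.linalg.norm(X, axis=1, keepdims=True)               # unit L2 norm
    return X @ X.T                                              # rho[i, j]

def snn_similarity(rho, kappa=15):
    """Shared nearest-neighbor similarity W[i, j] = |N(n_i) ∩ N(n_j)|."""
    r = rho.copy()
    np.fill_diagonal(r, -np.inf)               # exclude self from the neighbor lists
    knn = np.argsort(-r, axis=1)[:, :kappa]    # kappa most correlated SPNs per row
    N = r.shape[0]
    member = np.zeros((N, N), dtype=np.int32)
    member[np.arange(N)[:, None], knn] = 1     # row i marks the neighbors of n_i
    return member @ member.T                   # counts of shared neighbors
```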
Taking the similarity matrix W as input, the task of this step is to identify image groups such that each group corresponds to one camera. Our previous experience of using the Markov random field (MRF) approach for image segmentation [53, 54] and many others' successful applications of MRFs [9, 55–57] suggest that the local characteristics of MRFs (also known as Markovianity) allow global optimization problems to be solved iteratively by taking local information into account. Suppose there are K classes of images in the dataset, with the value of K unknown, and denote D={\(d_k\)|k=1,2,…,K} as the set of class labels and \(f_i\)∈D as the class label of SPN \(n_i\). By considering the label \(f_i\) of each SPN \(n_i\) as a random variable, the objective of clustering is to assign an optimal class label \(d_k\) to each random variable \(n_i\) in an iterative manner until the stop criterion is met. The pseudo code of the clustering is shown in Procedure 1, and the details are explained as follows.

Assign unique initial class labels

Because the number of classes K is unknown, before the first iteration of the labeling process starts, each SPN \(n_i\) is treated as a singleton cluster and assigned a unique random class label, as shown in step 1 of Procedure 1. That is to say, K=N and \(f_i=d_i\), i∈{1,2,…,N}. The class label of each SPN in question will be updated iteratively in step 14 of Procedure 1 based on (1) the similarities between the SPN in question and the SPNs in its membership committee and (2) the current class labels of the SPNs in the membership committee. So eventually, when the algorithm converges or the stop criterion is met, images taken by the same camera will have been given the same class label. By doing so, the algorithm starts with a set of N singleton clusters without requiring the user to specify the number of clusters.

Calculate reference similarity

Although the actual number of classes, K, is unknown, we can expect that normally the similarities between SPNs of the same class (called intra-class similarities) are greater than the similarities between SPNs of different classes (called inter-class similarities). So for each SPN \(n_i\), its inter-class and intra-class similarities are expected to be separable. In [49], a simple k-means clustering method (k=2) is used to cluster the N−1 similarity values into two groups (one intra-class and the other inter-class). Then, the average of the centroids of the two clusters is taken as a reference similarity r to separate the two distributions. However, the general-purpose k-means is slow and quickly becomes inefficient for large datasets. Since we are dealing with the binary separation of one-dimensional data, and we have prior knowledge of the approximate range in which the cutoff point should lie, we propose a fast method, which shares the same essence as k-means, to iteratively search for the appropriate cutoff point, as shown in step 2 of Procedure 1, where \(W_{i:}\) denotes the similarities between \(n_i\) and the other (N−1) SPNs. The details of the algorithm are given in Procedure 2, and a sketch of the search follows this subsection. We assume that the reference similarity r to be determined lies within [low,high], so the sums and sizes of the similarities in (0,low) and (high,N) can be pre-calculated before the iterative update. The purposes of limiting the search range to [low,high] are twofold. First, it narrows down the search range and therefore speeds up the search process. Second, it forces the optimal r to fall within an appropriate range so as to alleviate the problem of "local minima". The search process is further sped up by specifying a value ini as the initial r. In step 6 of Procedure 2, \(\mathcal{I}\) represents the "binary" (0 or 1) class labels of the similarities in [low,high]. The midpoint of the means of the two classes is used to update r, as provided in step 9 of Procedure 2. The update terminates when the label assignments no longer change. Incorporating ini, low, and high to facilitate the determination of r makes the search process faster and more flexible. In some cases, such prior information is already known to the user. In our experiments, low, ini, and high were set to 0, 1, and 5, respectively.
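The following sketch reconstructs the search of Procedure 2 from the description above. The numeric convergence tolerance is our assumption, since the text only states that the update stops when the label assignment no longer changes.

```python
import numpy as np

def reference_similarity(w_row, low=0.0, ini=1.0, high=5.0, max_iter=100):
    """Iterative search for the cutoff r between the inter- and intra-class
    similarities of one SPN (a sketch of Procedure 2).

    w_row holds the N-1 similarities W_{i:}; the sums and sizes of the two
    tails outside [low, high] are pre-computed, as described above."""
    w = np.asarray(w_row, dtype=float)
    lo_mask, hi_mask = w < low, w > high
    lo_sum, lo_n = w[lo_mask].sum(), lo_mask.sum()
    hi_sum, hi_n = w[hi_mask].sum(), hi_mask.sum()
    mid = w[~lo_mask & ~hi_mask]            # only these values are re-labeled
    r = ini
    for _ in range(max_iter):
        below = mid < r                     # binary labels of the mid-range values
        m0 = (lo_sum + mid[below].sum()) / max(lo_n + below.sum(), 1)
        m1 = (hi_sum + mid[~below].sum()) / max(hi_n + (~below).sum(), 1)
        r_new = 0.5 * (m0 + m1)             # midpoint of the two class means
        if abs(r_new - r) < 1e-9:           # assumed tolerance: labels are stable
            break
        r = r_new
    return r
```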
The output r of Procedure 2 serves the purpose of dividing the intra- and inter-class similarities and can be used to encode the cost function to be defined in Eq. (6). Although the similarities are both scene- and device-dependent, we can expect that most intra-class similarities are greater than r, while most inter-class similarities are less than r. It is also intuitive that, in most cases, a similarity value farther away from r on the left-hand side indicates a higher probability that the two corresponding images were taken by different devices. On the other hand, we have higher confidence that a similarity value farther away from r on the right-hand side indicates that the two corresponding images were taken by the same device. The closer to r, the less confidence we have in what the similarity value tells us. This suggests that, if we treat classification as an optimization problem, the distance between a similarity \(W_{ij}\) and r can be used to encode an objective function for guiding the search for the optimal class label of each image. We will explain how we make use of this information in Section 2.3.4.

Establish membership committee

When determining the class label for each SPN \(n_i\), instead of involving the entire dataset in the decision-making process, the theory of Markov random fields allows us to involve only a small local "neighborhood" of that SPN. As displayed in step 3 of Procedure 1, we establish a "neighborhood" \(C_i\) (i.e., the membership committee (MC) in [49] and this work) with the m key members that are most similar to \(n_i\). The membership committee can be established efficiently by partially selecting the m SPNs with the largest similarities in each row of W (e.g., using the partial sorting algorithm proposed in [58]); a sketch of this selection is given below. Note that the reason we use the term "membership committee", instead of "neighborhood", is that information such as the similarities and current class labels of the SPNs within the membership committee determines the class label (i.e., membership) of the SPN in question. The m key members contribute "positive" votes (i.e., class labels) which tell the system what the most likely labels are, while the similarity value encoded in the cost function and the associated probability tell the system how reliable each committee member is. In so doing, we ensure that the main feature of Markov random fields, the local characteristics [53], is exploited in the clustering process.
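A sketch of the committee construction using NumPy's partial selection, analogous in spirit to the partial sorting of [58]; the exact data layout is our assumption.

```python
import numpy as np

def build_committees(W, m=15):
    """For each SPN, select the indices of its m most similar SPNs.

    np.argpartition performs the partial selection, so each row costs O(N)
    rather than the O(N log N) of a full sort."""
    Wc = W.astype(float).copy()
    np.fill_diagonal(Wc, -np.inf)                  # an SPN does not sit on its own committee
    return np.argpartition(-Wc, m, axis=1)[:, :m]  # unordered top-m indices per row
```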
Update class labels using MRF

During the clustering process, each SPN is iteratively visited and re-labeled until the stop criterion is met. In terms of Markov random fields, when an SPN \(n_i\) is being visited, the probability p(·) of assigning it each class label l currently assigned to the members of \(C_i\) is calculated using

$$ p(f_{i}=l|W_{C_{i}},L_{i})=\frac{1}{Z_{i}}e^{-U_{i}(l,W_{C_{i}},L_{i})}, $$

where \(f_i\) is the class label of SPN \(n_i\), \(W_{C_{i}}\) is the set of similarities between SPN \(n_i\) and the corresponding members of \(C_i\), i.e., \(W_{C_{i}}=\{W_{ij}|j\in C_{i}\}\), and \(L_i\) is the set of class labels currently assigned to the members of \(C_i\), i.e., \(L_{i}=\{f_{j}|j\in C_{i}\}\), with l∈\(L_i\). \(Z_i\) is the partition function [25]

$$ Z_{i}=\sum_{l\in L_{i}}e^{-U_{i}(l,W_{C_{i}},L_{i})}, $$

where \(U_{i}(l,W_{C_{i}},L_{i})\) is the cost of assigning label l to \(n_i\) given \(W_{C_{i}}\) and \(L_i\). It is defined as

$$ U_{i}(l,W_{C_{i}},L_{i})=\sum_{j\in C_{i}}s(l,f_{j})(W_{ij}-r_{i}), $$

where \(W_{ij}\) is the similarity (see Eq. (2)) between \(n_i\) and \(n_j\) (j∈\(C_i\)), \(r_i\) is the reference similarity described in Section 2.3.2, and \(s(l,f_{j})\) is a sign function defined as

$$ s(l,f_{j})= \left\{ \begin{array}{ll} +1, & l \neq f_{j} \\ -1, & l = f_{j}. \end{array} \right. $$

It is clear that the probability of each label l being assigned to \(f_i\) is based on the observed data and the current local class configuration \(L_i\). From the cost function U(·) in Eq. (6), we can see that the closer the similarity \(W_{ij}\) is to \(r_i\), the less significant \(n_j\) is in determining the class label for SPN \(n_i\). From the sign function in Eq. (7) and its role in Eq. (6), we can see that the formulation of Eq. (6) encourages appropriate label assignments with a reward (i.e., a negative cost U(·) that increases the probability of that label). By the same token, it penalizes inappropriate decisions by imposing a positive cost U(·), which reduces the probability of assigning an inappropriate label. The following explains these two cases, each with two different scenarios.

Rewarding appropriate label assignments

Scenario 1: If \(W_{ij}<r_{i}\) (i.e., SPNs \(n_i\) and \(n_j\) belong to different classes) and the label l under investigation is different from \(f_j\) (i.e., l≠\(f_j\)), then \(s(l,f_{j})=+1\) will be used in Eq. (6). As a result, a negative value of \(s(l,f_{j})(W_{ij}-r_{i})\) is contributed to U(·), which will in turn increase the probability \(p(f_{i}=l|W_{C_{i}},L_{i})\) in Eq. (4).

Scenario 2: If \(W_{ij}>r_{i}\) (i.e., SPNs \(n_i\) and \(n_j\) belong to the same class) and the label l under investigation is the same as \(f_j\) (i.e., l=\(f_j\)), then \(s(l,f_{j})=-1\) will be used in Eq. (6). A negative value of \(s(l,f_{j})(W_{ij}-r_{i})\) is contributed to U(·), which will in turn increase the probability \(p(f_{i}=l|W_{C_{i}},L_{i})\) in Eq. (4).

Penalizing inappropriate label assignments

Scenario 3: If \(W_{ij}<r_{i}\) (i.e., SPNs \(n_i\) and \(n_j\) belong to different classes) but the label l under investigation is the same as \(f_j\) (i.e., l=\(f_j\)), then \(s(l,f_{j})=-1\) will be used in Eq. (6). As a result, a positive value of \(s(l,f_{j})(W_{ij}-r_{i})\) is contributed to U(·), which will in turn reduce the probability \(p(f_{i}=l|W_{C_{i}},L_{i})\) in Eq. (4).

Scenario 4: If \(W_{ij}>r_{i}\) (i.e., SPNs \(n_i\) and \(n_j\) belong to the same class) but the label l under investigation is different from \(f_j\) (i.e., l≠\(f_j\)), then \(s(l,f_{j})=+1\) will be used in Eq. (6). A positive value of \(s(l,f_{j})(W_{ij}-r_{i})\) is contributed to U(·), which will in turn reduce the probability \(p(f_{i}=l|W_{C_{i}},L_{i})\) in Eq. (4).

From these four scenarios, we can also see that the farther away \(W_{ij}\) is from \(r_i\), the greater the reward (penalty) will be when an appropriate (inappropriate) decision is made. As in other MRF approaches to optimization problems, deterministic or stochastic relaxation [54] can be used to pick a new label l for \(f_i\) based on \(p(f_{i}=l|W_{C_{i}},L_{i})\). Because of the low convergence rate of stochastic relaxation, we pick label l in a deterministic sense according to

$$ \hat{f}_{i}=\underset{l\in L_{i}}{\text{arg}\,\text{max}}~{p(f_{i}=l|W_{C_{i}},L_{i})}. $$

Since \(Z_i\) is the same for all class labels in \(L_i\), maximizing \(p(f_{i}=l|W_{C_{i}},L_{i})\) is equivalent to minimizing \(U_{i}(l,W_{C_{i}},L_{i})\), as implemented in step 12 of Procedure 1.
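A compact sketch of the minimization in step 12 and the relabeling in step 14 of Procedure 1, reconstructed from Eqs. (6)-(8); the array-based bookkeeping is our assumption.

```python
import numpy as np

def update_label(i, labels, W, committees, r):
    """Relabel SPN i by minimizing U_i(l) of Eq. (6) over the committee labels,
    which is equivalent to the argmax of Eq. (8) since Z_i is label-independent."""
    C = committees[i]
    evidence = W[i, C] - r[i]                    # (W_ij - r_i) per committee member
    best_label, best_cost = labels[i], np.inf
    for l in set(labels[C]):                     # candidate labels L_i
        s = np.where(labels[C] == l, -1.0, 1.0)  # sign function of Eq. (7)
        cost = float(np.sum(s * evidence))       # U_i(l, W_Ci, L_i)
        if cost < best_cost:
            best_label, best_cost = l, cost
    return best_label
```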
The m most similar SPNs in the membership committee play a decisive role in determining the class label, so once the label of an SPN has been determined by Eq. (8), it triggers a convergence process among the SPNs in its membership committee. As a consequence, the class labels of most SPNs quickly become stable, and continually updating those labels helps little in improving the performance. Therefore, we stop updating the class label of an SPN if its label does not change in two consecutive iterations, as shown in step 7 of Procedure 1. This configuration significantly reduces the number of SPNs updated in each iteration and has little effect on the performance. Finally, the stop criterion we employ is that there are no changes of class labels for any SPNs in two consecutive iterations, or the iteration number reaches 50.

The reasons why the classifier can work without the user specifying the reference similarity r and the number of classes K can be summarized as follows:

- The fact that the similarity values between each SPN and the rest of the dataset can be grouped into intra-class and inter-class, as described in Section 2.3.2, facilitates the adaptive and automatic determination of the reference similarity r. This adaptability also frees the algorithm from tricky threshold specification (e.g., the similarity threshold used in [44] and the binarization threshold in [59]).
- The clustering process starts with a class label space as big as the entire dataset (i.e., the worst case, with each SPN \(n_i\) as a singleton cluster), and the most similar SPNs are always kept in \(n_i\)'s membership committee \(C_i\), so the clusters can merge and converge to a certain number of final clusters quickly.
- The term \(W_{ij}-r_{i}\) in Eq. (6) also provides adaptability and helps the clustering to converge, because it gives more say to the SPNs whose similarity values are farther away from the reference similarity r in determining the class label for the SPN in question.

Experimental setup

We conducted the experiments on the Dresden image database [60]. 7400 images acquired in JPEG format by 74 cameras (each responsible for 100 images), covering 27 camera models and 14 manufacturers, were involved in the experiments. We only considered the green channel of each image and tested our proposed algorithm on image blocks of three different sizes, namely s=1024×1024, s=512×512, and s=512×256 pixels. All the experiments were performed on a laptop with an Intel(R) Core(TM) i7-6600U CPU @ 2.6 GHz and 16 GB of RAM.
Evaluation measures

We used the ground-truth class labels to evaluate the clustering results. To avoid confusion, we will refer to the images from the same camera as a class, and to those grouped together by the clustering algorithm as a cluster. Suppose Ω={ω_1,ω_2,…,ω_j,…,ω_J} is the set of ground-truth classes, and the N images are partitioned into a set of clusters, C={c_1,c_2,…,c_i,…,c_I}, by the clustering algorithm. We used several measures to evaluate the quality of clustering; a sketch of their computation follows this subsection. The first measure is the F1-measure

$$ \mathcal{F}=2\cdot \frac{\mathcal{P}\cdot\mathcal{R}}{\mathcal{P}+\mathcal{R}}, $$

where the average precision rate \(\mathcal{P}\) and the average recall rate \(\mathcal{R}\) are defined as

$$ \left\{ \begin{array}{l} \mathcal{P} = \sum_{i}{\vert c_{i}\cap\omega_{j_{i}}\vert}/\sum_{i}{\vert c_{i}\vert} \\ \mathcal{R} = \sum_{i}{\vert c_{i}\cap\omega_{j_{i}}\vert}/\sum_{i}{\vert \omega_{j_{i}}\vert}. \end{array} \right. $$

Here, \(\vert c_{i}\vert\) is the size of cluster \(c_i\), and \(\vert \omega_{j_{i}}\vert\) is the size of the most frequent class, \(\omega_{j_{i}}\), in cluster \(c_i\). Another popular measure of clustering quality is the Rand Index [61], which measures the agreement between C and Ω. Among the \(\binom{N}{2}\) distinct pairs of images, there are four different types:

- True positive pair: the images in the pair fall in the same class in Ω and in the same cluster in C.
- True negative pair: the images in the pair fall in different classes in Ω and in different clusters in C.
- False positive pair: the images in the pair fall in different classes in Ω but in the same cluster in C.
- False negative pair: the images in the pair fall in the same class in Ω but in different clusters in C.

The Rand Index RI is defined as

$$ RI=\frac{TP+TN}{TP+FP+TN+FN}, $$

where TP, TN, FP, and FN are the numbers of true positive, true negative, false positive, and false negative pairs, respectively. RI ranges from 0 to 1, but its expectation \(\overline{RI}\) does not equal 0. To remove this bias, we adopted the Adjusted Rand Index [62]:

$$ \mathcal{A}=\frac{RI-\overline{RI}}{1-\overline{RI}}. $$

The last measure we used is the ratio of the number of discovered clusters to the number of ground-truth classes:

$$ \mathcal{N}=\frac{n_{d}}{n_{g}}, $$

where \(n_d\) is the number of discovered clusters and \(n_g\) is the number of ground-truth classes. We will refer to \(\mathcal{N}\) as the cluster-to-class ratio in the rest of this paper. Note that for \(\mathcal{F}\in [0,1]\) and the Adjusted Rand Index \(\mathcal{A}\in [-1,1]\), a higher value indicates better clustering performance. For \(\mathcal{N}\), a value close to 1 does not necessarily indicate good performance, but a value much larger than 1 does indicate that the clustering algorithm produces a large number of small or even singleton clusters.
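For reference, these measures can be computed as follows (a sketch): scikit-learn's adjusted_rand_score implements the Adjusted Rand Index of [62], while the F1-measure is coded directly from the definitions above.

```python
import numpy as np
from collections import Counter
from sklearn.metrics import adjusted_rand_score  # Adjusted Rand Index of [62]

def f1_measure(clusters, classes):
    """Average precision/recall and F1 as defined above.

    `clusters` and `classes` are integer arrays giving, per image, the
    discovered cluster id and the ground-truth class id, respectively."""
    tp = n_in_clusters = n_in_dominant = 0
    for c in np.unique(clusters):
        members = classes[clusters == c]
        dom_class, dom_count = Counter(members.tolist()).most_common(1)[0]
        tp += dom_count                                     # |c_i ∩ ω_{j_i}|
        n_in_clusters += members.size                       # accumulates Σ|c_i|
        n_in_dominant += int((classes == dom_class).sum())  # accumulates Σ|ω_{j_i}|
    P, R = tp / n_in_clusters, tp / n_in_dominant
    return 2 * P * R / (P + R)

# Usage: F = f1_measure(pred, truth); A = adjusted_rand_score(truth, pred)
```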
Parameter settings

Two parameters need to be set for our proposed algorithm: the size of the nearest neighborhood κ and the size of the membership committee m. κ determines the highest possible SNN similarity between SPNs, because two SPNs can share at most κ nearest neighbors according to Eq. (3). If κ is too small, even two dissimilar SPNs are likely to have a similarity of κ, and the difference between the "similar" and "dissimilar" pairs will be obscured. On the other hand, if κ is too large, the SNN similarity is insensitive to local variations, and the algorithm tends to produce large clusters containing the images from different actual "classes". Similarly, if m is too small, there will not be enough information for determining the class labels of SPNs. As a consequence, the assigned labels remain random and unreliable, which hinders the convergence of the algorithm and results in many singleton clusters. If m is too large, more and more dissimilar SPNs will be involved in the calculation and therefore mislead the algorithm into making wrong decisions. To see how κ and m affect the clustering quality, we applied the proposed clustering algorithm to two small subsets randomly sampled from the Dresden database. The first subset (i.e., \(\mathcal{D}_{4}\) in Section 4.4.1) consists of 1000 images taken by 50 cameras, with the number of images acquired by different cameras ranging from 10 to 60, and the second subset (i.e., \(\mathcal{D}_{5}\) in Section 4.4.1) consists of 1024 images, with 24 additional singleton images (i.e., from 24 different cameras) added to the first subset.

We varied κ from 1 to 50 and m from 5 to 50. The results on the first and the second subset are shown in the first and second row of Fig. 2, respectively. As can be seen, if κ is too small (e.g., <5), the algorithm produces many small clusters and results in a very low recall rate (see Fig. 2b, f). As κ increases, the clusters belonging to different classes are likely to be merged together, which gives rise to a lower precision rate (see Fig. 2a, e). For the size of the membership committee, a small m leads to a low recall rate (see Fig. 2a, e). As m goes up to a point where "enough" similar SPNs can help to make trustworthy decisions, it strikes a good balance between the precision rate and the recall rate, and therefore achieves a favorable F1-measure and Adjusted Rand Index (see Fig. 2c, d, g, and h). But if we keep increasing m, the precision rate may decrease (see Fig. 2a, e) due to the misleading information provided by the membership committee. The results on the first and the second subset share very similar patterns and trends, but the areas corresponding to high clustering quality (i.e., the areas highlighted in dark red in Fig. 2d, h) shrink towards the bottom-left corner. This indicates that a relatively smaller κ or m is preferable when singleton images are present in the database. In our following experiments, both κ and m are set to 15.

Fig. 2 How κ and m affect the clustering quality. a Precision rates on \(\mathcal{D}_{4}\), b recall rates on \(\mathcal{D}_{4}\), c F1-measures on \(\mathcal{D}_{4}\), d Adjusted Rand Indexes on \(\mathcal{D}_{4}\), e precision rates on \(\mathcal{D}_{5}\), f recall rates on \(\mathcal{D}_{5}\), g F1-measures on \(\mathcal{D}_{5}\), and h Adjusted Rand Indexes on \(\mathcal{D}_{5}\)

Comparisons and analyses

To illustrate the advantages of our proposed algorithm, we compared it with five other clustering methods: (1) the multi-class spectral clustering (SC) method [31], (2) the hierarchical clustering (HC) method [34], (3) the shared nearest neighbor clustering (SNNC) method [43], (4) the normalized cut-based clustering (NCUT) method [45], and (5) the Markov random field-based clustering (MRF) method [49]. We did not include Bloy's algorithm [44] and Marra's algorithm [47], because both algorithms retain the fingerprints in the RAM for updating the centroids of clusters, which makes them unsuitable for relatively large datasets. Moreover, for Marra's algorithm [47], the selection of β, the average level of correlation for same-camera residuals, can be tricky, since β varies across different cameras. For a fair comparison, we used a fully connected graph for SC rather than the sparse k-nearest neighbor graph in [31]. Also, note that we did not use the hierarchical clustering proposed in [33] for comparison, because we found in experiments that the algorithm in [34] performs slightly faster and better than that in [33]. For SNNC, there are three parameters: the size of the nearest neighborhood κ, the similarity threshold Eps for calculating the SNN density, and the density threshold MinPts for finding the core points. We set κ to the same value as in our proposed algorithm, i.e., κ=15, and set Eps and MinPts to 2 and 10, respectively. For NCUT, we set the aggregation threshold \(T_h\) to 0.0037 rather than the 0.037 used in [45], which results in many singleton clusters. For MRF, to avoid going into infinite iterations when the algorithm does not converge, we set the maximum number of iterations to 50.
We will conduct two experiments: one on datasets of fixed size with varying class distributions and different levels of clustering difficulty, to test the adaptability of the algorithms, and the other on datasets of varying sizes, to test their scalability.

Clustering on datasets of fixed size

In this experiment, we first set up four datasets of fixed size based on the Dresden database. Images acquired by cameras of the same model may undergo the same or similar image processing pipelines. As a result, the non-unique artifacts left in the images make them more difficult to distinguish from each other. We therefore categorize the clustering difficulty into easy and hard levels. On the easy level, the images in different classes are taken by cameras of different models, while on the hard level, images in some of the different classes are taken by devices of the same model. Additionally, it is common in practical situations that the numbers of images vary widely across devices. So we categorize the distributions of images into symmetric and asymmetric. Based on these considerations, we set up the following four datasets:

- \(\mathcal{D}_{1}\): easy symmetric dataset, which consists of 1000 images taken by 25 cameras of different models (each accounting for 40 images). It covers nearly all the popular camera manufacturers, such as Canon, Nikon, Olympus, Pentax, Samsung, and Sony.
- \(\mathcal{D}_{2}\): easy asymmetric dataset. 20, 30, 40, 50, and 60 images are alternately selected from the images taken by each of the 25 cameras in \(\mathcal{D}_{1}\) to make up a total of 1000 images.
- \(\mathcal{D}_{3}\): hard symmetric dataset, which consists of 1000 images taken by 50 cameras (each accounting for 20 images). The 50 cameras only cover 12 models, so some of them are of the same model.
- \(\mathcal{D}_{4}\): hard asymmetric dataset. 10, 15, 20, 25, and 30 images are alternately selected from the images taken by each of the 50 cameras in \(\mathcal{D}_{3}\) to make up a total of 1000 images.

Our proposed clustering algorithm essentially exploits the affinities between neighboring images. Thus, it would be interesting to see how the proposed algorithm deals with singleton classes, i.e., classes composed of only one single image, for which no other images from the same camera are present in the database. Recall that the images in our entire database are from 74 cameras and that the images in dataset \(\mathcal{D}_{4}\) are from 50 of them. We therefore randomly selected one image from those taken by each of the remaining 24 cameras and added them to \(\mathcal{D}_{4}\) to form an extra database, \(\mathcal{D}_{5}\), consisting of 1024 images. This setting allows us to investigate the influence of singleton classes by comparing the performance on \(\mathcal{D}_{4}\) and \(\mathcal{D}_{5}\).

We tested the six algorithms on \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), \(\mathcal{D}_{3}\), \(\mathcal{D}_{4}\), and \(\mathcal{D}_{5}\). For each dataset, three pairwise correlation matrices were calculated using SPNs of three different sizes, namely 1024×1024, 512×512, and 512×256 pixels. The results on \(\mathcal{D}_{1}\)–\(\mathcal{D}_{5}\) are listed in Tables 1, 2, 3, 4, and 5, respectively. The best F1-measures, Adjusted Rand Indexes, and cluster-to-class ratios are highlighted in bold.
Table 1 Comparison of clustering algorithms on \(\mathcal{D}_{1}\)

As can be seen, SC performs poorly on the challenging datasets \(\mathcal{D}_{3}\), \(\mathcal{D}_{4}\), and \(\mathcal{D}_{5}\), even using SPNs of 1024×1024 pixels, with \(\mathcal{F}=0.18, \mathcal{A}=0.06\) in Table 3 and even worse in Tables 4 and 5. However, SC performs surprisingly better with the smaller block size of 512×512 pixels. These rather contradictory results are due to the stop criterion of SC: the algorithm terminates when the size of the smallest cluster equals 1. The smallest class size of \(\mathcal{D}_{3}\) and \(\mathcal{D}_{4}\) is no larger than 20, so it is easy to form singleton clusters, resulting in premature termination of the algorithm, while the more ambiguous information in the SPNs extracted from smaller image blocks impedes the separation of images taken by different cameras and therefore forces the algorithm to try out more possible partitions. For example, the optimal number of partitions determined by SC is 5 when using SPNs of 1024×1024 pixels on \(\mathcal{D}_{3}\), but when using SPNs of 512×512 pixels, the number of partitions increases to 9, which is closer to the ground-truth class number of 50 and therefore ends up with a better performance. Because of the larger class size and easier separation of different classes, SC is able to produce much better results on \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), with \(\mathcal{F}=0.53, \mathcal{A}=0.27\) for \(\mathcal{D}_{1}\) and \(\mathcal{F}=0.51, \mathcal{A}=0.17\) for \(\mathcal{D}_{2}\).

The performance of HC is generally good in terms of F1-measure, but it is not as good as reported in [33] and [34], where the datasets used were less challenging in terms of both the number of cameras and the number of images captured by each camera. An interesting observation is that when using SPNs of 512×512 and 512×256 pixels, HC achieves the highest precision rates in most cases. But as can be seen in Tables 1, 3, 4, and 5, the high \(\mathcal{P}\) comes at the expense of a low \(\mathcal{R}\), which means HC tends to over-partition the datasets. This is also reflected in the values of \(\mathcal{N}\), which are much larger than 1 in the corresponding rows.

SNNC performs well on the easy datasets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), but its performance on \(\mathcal{D}_{3}\)–\(\mathcal{D}_{5}\) is still far from satisfactory. We found that SNNC is very sensitive to its parameters. For example, if we increase Eps from 2 to 4, the performance drops dramatically for \(\mathcal{D}_{1}\) (with \(\mathcal{F}=0.40, \mathcal{A}=0.34\)) and \(\mathcal{D}_{2}\) (with \(\mathcal{F}=0.33, \mathcal{A}=0.30\)). Another drawback of SNNC is that it does not cluster all data points, because it discards the non-core data points that are not within a radius of Eps of a core point (i.e., the noise points in [43]). When the parameters are not set appropriately, a large fraction of data points may be identified as noise points. Taking dataset \(\mathcal{D}_{3}\) for example, about 25% of the SPNs are identified as noise and discarded when Eps is set to 4. However, it is difficult to determine the "right" parameters applicable to different datasets. Similar to HC, NCUT tends to over-partition the datasets, which results in high precision rates, low recall rates, and cluster-to-class ratios much higher than 1. It is also worth noting that there are some inconsistencies between the \(\mathcal{F}\) and \(\mathcal{A}\) measures.
Taking the measures on dataset \(\mathcal{D}_{4}\) using SPNs of 1024×1024 pixels (i.e., Table 4) for example, the \(\mathcal{F}=0.22\) of NCUT is the second worst among the six methods, but its \(\mathcal{A}=0.52\) turns out to be the second best. The main reason is that the measure \(\mathcal{F}\) tends to favor clusters of large granularity. To see this, let us consider clustering a dataset of 1000 images taken by 10 cameras, each responsible for 100 images. If all 1000 images are grouped into one cluster, \(\mathcal{A}\) gives a measure of 0 while \(\mathcal{F}\) gives a measure of 0.18. If, for each camera, 80 of its images form a cluster and the remaining 20 images form 20 singleton clusters, then \(\mathcal{A}\) gives a measure of 0.76, but \(\mathcal{F}\) only gives a measure of 0.09. So \(\mathcal{F}\) is more prone to heavily penalizing singleton clusters.

By resorting to the MRF approach and the shared κ-nearest neighbor technique, our proposed algorithm is able to find high-quality clusters. Using SPNs of 1024×1024 pixels, it outperforms the other five algorithms in terms of both the F1-measure and the Adjusted Rand Index. It achieves \(\mathcal{F}\geq 0.86, \mathcal{A}\geq 0.91\) on the easy datasets (\(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\)) and \(\mathcal{F}\geq 0.80, \mathcal{A}\geq 0.74\) on the hard datasets (\(\mathcal{D}_{3}\)–\(\mathcal{D}_{5}\)). Even using the SPNs of 512×256 pixels, \(\mathcal{F}\) and \(\mathcal{A}\) can be as high as 0.72 on the two easy datasets. Compared with the MRF method in [49], the proposed algorithm shows a significantly better performance. On the challenging datasets (\(\mathcal{D}_{3}\)–\(\mathcal{D}_{5}\)), the improvement can be as high as 80% (e.g., on \(\mathcal{D}_{3}\)) and 170% (e.g., on \(\mathcal{D}_{5}\)) in terms of \(\mathcal{F}\) and \(\mathcal{A}\), respectively. But the 24 singleton classes added to \(\mathcal{D}_{5}\) do decrease the precision rate, as some of the singleton classes may be wrongly attributed to the clusters close to them. One attractive feature of our proposed algorithm is that it is able to find clusters with a high precision rate (i.e., high purity). The high precision rate is important and preferable in the context of forensic investigation, because the false attribution error (i.e., \(1-\mathcal{P}\)) can cause more serious problems, such as accusing an innocent person.

Clustering on datasets of varying sizes

In the second experiment, we aim to compare the time complexities and the clustering qualities of the six algorithms on datasets of varying sizes. To generate the datasets, we incrementally added 1000 images captured by 10 cameras (100 images per camera) to an initially empty dataset until all the 74 cameras in the Dresden database had been covered. The six algorithms were run and evaluated on each of these datasets. The log-scale running time (in seconds) is shown in Fig. 3. Our proposed algorithm requires the calculation of the SNN similarity before clustering, so the time used to calculate the SNN similarity is highlighted in green in the stacked bar. Since the running times obtained using SPNs of different lengths exhibit the same trend, we only show the running time for SPNs of 1024×1024 pixels. As can be observed in Fig. 3, MRF is the slowest, followed by HC and SC.
Although SC repeats the spectral clustering process several times to search for the optimal number of clusters, the time complexity of each clustering process is only \(\mathcal{O}\left(N^{\frac{3}{2}}K+NK^{2}\right)\) (K is the number of partitions), which is lower than the time complexity \(\mathcal{O}(N^{2}\log N)\) of HC when N≫m. Our proposed algorithm is slightly slower than NCUT and SNNC but is much faster than the other three algorithms. By reducing the complexity of the calculation in each iteration and accelerating the convergence, the speed of the algorithm has been significantly improved compared to our preliminary study in [49]. Most of the running time of our proposed algorithm is spent on constructing the SNN similarity matrix. Once the SNN similarity matrix is available, the actual clustering process takes less than 40 s (the orange bar beneath the green bar in Fig. 3), even for the dataset containing 7400 images.

Fig. 3 Comparison of the running time (in seconds) of six clustering algorithms using SPNs of 1024×1024 pixels

The clustering qualities of the different algorithms are illustrated in Fig. 4. As can be seen, our proposed algorithm performs apparently better than the other five algorithms in terms of both the F1-measure and the Adjusted Rand Index. In particular, when using the SPNs of 1024×1024 pixels, our proposed algorithm delivers a 23% higher \(\mathcal{F}\) and a 13% higher \(\mathcal{A}\) on average than the second best algorithm (see Fig. 4g, j). Its precision rate consistently stays at a very high level (>96%). Even using the SPNs of 512×256 pixels, the precision rate can reach about 80%. The high precision rate makes the proposed algorithm attractive in forensic applications. Another important observation is that while the performances of the other five algorithms, especially SC and SNNC, decline considerably in terms of \(\mathcal{F}\) or \(\mathcal{A}\) as the size of the dataset increases, the performance of our proposed algorithm remains quite stable in terms of \(\mathcal{F}\), \(\mathcal{A}\), and \(\mathcal{N}\). This high stability is preferable in practice when applying the algorithm to new databases.

Fig. 4 Comparison of different clustering algorithms on datasets of varying sizes. a Precision rates, image block size s=1024×1024 pixels; b precision rates, s=512×512; c precision rates, s=512×256; d recall rates, s=1024×1024; e recall rates, s=512×512; f recall rates, s=512×256; g F1-measures, s=1024×1024; h F1-measures, s=512×512; i F1-measures, s=512×256; j Adjusted Rand Indexes, s=1024×1024; k Adjusted Rand Indexes, s=512×512; l Adjusted Rand Indexes, s=512×256; m cluster-to-class ratios, s=1024×1024; n cluster-to-class ratios, s=512×512; and o cluster-to-class ratios, s=512×256

Conclusions

In this work, we have proposed a novel algorithm for clustering images taken by an unknown number of unknown types of digital cameras, based on the sensor pattern noises extracted from the images. The clustering algorithm infers the class memberships of the images from a random initial membership configuration of the dataset. By giving different "neighbors" different voting power in the concise yet effective cost function, depending on their similarity to the image in question, the algorithm is able to converge to the optimal cluster configuration accurately and efficiently. The experiments on the Dresden image database show that the proposed clustering scheme is fast and delivers very good performance.
Despite the present advances, the most time-consuming step for image clustering based on SPNs is the calculation of the pairwise similarity matrix, due to the high dimension of SPNs. We are currently working towards formulating a compact representation of SPNs in order to facilitate large-scale source-oriented image clustering.

References

1. J Lukas, J Fridrich, M Goljan, Digital camera identification from sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 1(2), 205–214 (2006)
2. N Khanna, GT-C Chiu, JP Allebach, EJ Delp, Forensic techniques for classifying scanner, computer generated and digital camera images, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (Las Vegas, 2008), pp. 1653–1656
3. C-T Li, Source camera identification using enhanced sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 5(2), 280–287 (2010)
4. C-T Li, Y Li, Color-decoupled photo response non-uniformity for digital image forensics. IEEE Trans. Circ. Syst. Video Technol. 22(2), 260–271 (2012)
5. X Kang, Y Li, Z Qu, J Huang, Enhancing source camera identification performance with a camera reference phase sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 7(2), 393–402 (2012)
6. Y Tomioka, Y Ito, H Kitazawa, Robust digital camera identification based on pairwise magnitude relations of clustered sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 8(12), 1986–1995 (2013)
7. A Pande, S Chen, P Mohapatra, J Zambreno, Hardware architecture for video authentication using sensor pattern noise. IEEE Trans. Circ. Syst. Video Technol. 24(1), 157–167 (2014)
8. TH Thai, R Cogranne, F Retraint, Camera model identification based on the heteroscedastic noise model. IEEE Trans. Image Process. 23(1), 250–263 (2014)
9. G Chierchia, G Poggi, C Sansone, L Verdoliva, A Bayesian-MRF approach for PRNU-based image forgery detection. IEEE Trans. Inf. Forensics Secur. 9(4), 554–567 (2014)
10. S Chen, A Pande, K Zeng, P Mohapatra, Live video forensics: source identification in lossy wireless networks. IEEE Trans. Inf. Forensics Secur. 10(1), 28–39 (2015)
11. X Lin, C-T Li, Preprocessing reference sensor pattern noise via spectrum equalization. IEEE Trans. Inf. Forensics Secur. 11(1), 126–140 (2016)
12. X Lin, C-T Li, Enhancing sensor pattern noise via filtering distortion removal. IEEE Signal Process. Lett. 23(3), 381–385 (2016)
13. Y-F Hsu, S-F Chang, Image splicing detection using camera response function consistency and automatic segmentation, in Proc. IEEE Int. Conf. Multimedia and Expo (Beijing, 2007), pp. 28–31
14. AC Popescu, H Farid, Exposing digital forgeries by detecting traces of resampling. IEEE Trans. Signal Process. 53(2), 758–767 (2005)
15. H Cao, AC Kot, Accurate detection of demosaicing regularity for digital image forensics. IEEE Trans. Inf. Forensics Secur. 4(4), 899–910 (2009)
16. A Swaminathan, M Wu, KR Liu, Nonintrusive component forensics of visual sensors using output images. IEEE Trans. Inf. Forensics Secur. 2(1), 91–106 (2007)
17. MJ Sorell, Conditions for effective detection and identification of primary quantization of re-quantized JPEG images, in Proc. Int. Conf. Forensic Appl. and Tech. in Telecom., Inf. and Multimedia (ICST, Adelaide, 2008), p. 18
18. K San Choi, EY Lam, KK Wong, Source camera identification using footprints from lens aberration, in Proc. Electron. Imag. (International Society for Optics and Photonics, San Jose, 2006), pp. 60690–60690
19. O Celiktutan, B Sankur, I Avcibas, Blind identification of source cell-phone model. IEEE Trans. Inf. Forensics Secur. 3(3), 553–566 (2008)
20. G Xu, S Gao, YQ Shi, R Hu, W Su, Camera-model identification using Markovian transition probability matrix, in Proc. the 8th Int. Workshop Digit. Watermarking (2009), pp. 294–307
21. P Sutthiwan, J Ye, YQ Shi, An enhanced statistical approach to identifying photorealistic images, in Proc. the 8th Int. Workshop Digit. Watermarking (University of Surrey, Guildford, 2009), pp. 323–335
22. I Amerini, R Becarelli, B Bertini, R Caldelli, Acquisition source identification through a blind image classification. IET Image Process. 9(4), 329–337 (2015)
23. R-Z Wang, Y-D Tsai, An image-hiding method with high hiding capacity based on best-block matching and k-means clustering. Pattern Recogn. 40(2), 398–409 (2007)
24. W Zhong, G Altun, R Harrison, PC Tai, Y Pan, Improved k-means clustering algorithm for exploring local protein sequence motifs representing common structural property. IEEE Trans. NanoBiosci. 4(3), 255–265 (2005)
25. J Cui, J Loewy, EJ Kendall, Automated search for arthritic patterns in infrared spectra of synovial fluid using adaptive wavelets and fuzzy c-means analysis. IEEE Trans. Biomed. Eng. 53(5), 800–809 (2006)
26. D Dembélé, P Kastner, Fuzzy c-means method for clustering microarray data. Bioinformatics 19(8), 973–980 (2003)
27. NR Pal, K Pal, JM Keller, JC Bezdek, A possibilistic fuzzy c-means clustering algorithm. IEEE Trans. Fuzzy Syst. 13(4), 517–530 (2005)
28. B Hendrickson, RW Leland, A multi-level algorithm for partitioning graphs, in Proc. Supercomputing '95 (1995), p. 28
29. G Karypis, V Kumar, A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput. 20(1), 359–392 (1998)
30. U Von Luxburg, A tutorial on spectral clustering. Stat. Comput. 17(4), 395–416 (2007)
31. B Liu, H-K Lee, Y Hu, C-H Choi, On classification of source cameras: a graph based approach, in Proc. IEEE Int. Workshop Inf. Forensics Security (Seattle, 2010), pp. 1–5
32. SX Yu, J Shi, Multiclass spectral clustering, in Proc. IEEE Int. Conf. Comput. Vision (Nice, 2003), pp. 313–319
33. R Caldelli, I Amerini, F Picchioni, M Innocenti, Fast image clustering of unknown source images, in Proc. IEEE Int. Workshop Inf. Forensics Security (Seattle, 2010), pp. 1–5
34. LJG Villalba, ALS Orozco, JR Corripio, Smartphone image clustering. Expert Syst. Appl. 42(4), 1927–1940 (2015)
35. S Guha, R Rastogi, K Shim, CURE: an efficient clustering algorithm for large databases, in ACM SIGMOD Record, vol. 27 (ACM, Seattle, 1998), pp. 73–84
36. S Guha, R Rastogi, K Shim, ROCK: a robust clustering algorithm for categorical attributes, in Proc. Int. Conf. Data Eng. (IEEE, Sydney, 1999), pp. 512–521
37. G Karypis, E-H Han, V Kumar, Chameleon: hierarchical clustering using dynamic modeling. Computer 32(8), 68–75 (1999)
38. R Xu, D Wunsch, Survey of clustering algorithms. IEEE Trans. Neural Netw. 16(3), 645–678 (2005)
39. M Chen, J Fridrich, M Goljan, J Lukás, Determining image origin and integrity using sensor noise. IEEE Trans. Inf. Forensics Secur. 3(1), 74–90 (2008)
40. C-T Li, R Satta, Empirical investigation into the correlation between vignetting effect and the quality of sensor pattern noise. IET Comput. Vision 6(6), 560–566 (2012)
41. M Ester, H-P Kriegel, J Sander, X Xu, et al., A density-based algorithm for discovering clusters in large spatial databases with noise, in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining (Portland, 1996), pp. 226–231
42. A Hinneburg, DA Keim, An efficient approach to clustering in large multimedia databases with noise, in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining (New York, 1998), pp. 58–65
43. L Ertöz, M Steinbach, V Kumar, Finding clusters of different sizes, shapes, and densities in noisy, high dimensional data, in Proc. Int. Conf. Data Mining (SIAM, San Francisco, 2003), pp. 47–58
44. GJ Bloy, Blind camera fingerprinting and image clustering. IEEE Trans. Pattern Anal. Mach. Intell. 30(3), 532–534 (2007)
45. I Amerini, R Caldelli, P Crescenzi, AD Mastio, A Marino, Blind image clustering based on the normalized cuts criterion for camera identification. Signal Process. Image Commun. 29(8), 831–843 (2014)
46. J Shi, J Malik, Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 888–905 (2000)
47. F Marra, G Poggi, C Sansone, L Verdoliva, Correlation clustering for PRNU-based blind image source identification, in Proc. Int. Workshop on Inf. Forensics and Security (WIFS) (Abu Dhabi, 2016), pp. 1–6
48. N Bansal, A Blum, S Chawla, Correlation clustering. Mach. Learn. 56(1-3), 89–113 (2004)
49. C-T Li, Unsupervised classification of digital images using enhanced sensor pattern noise, in Proc. IEEE Int. Symp. Circuits Syst. (Paris, 2010), pp. 3429–3432
50. K Dabov, A Foi, V Katkovnik, K Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
51. RA Jarvis, EA Patrick, Clustering using a similarity measure based on shared near neighbors. IEEE Trans. Comput. C-22(11), 1025–1034 (1973)
52. T Gloe, S Pfennig, M Kirchner, Unexpected artefacts in PRNU-based camera identification: a 'Dresden Image Database' case-study, in Proc. ACM Workshop Multimedia Security (Coventry, 2012), pp. 109–114
53. R Wilson, C-T Li, A class of discrete multiresolution random fields and its application to image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 25(1), 42–56 (2003)
54. C-T Li, Multiresolution image segmentation integrating Gibbs sampler and region merging algorithm. Signal Process. 83(1), 67–78 (2003)
55. M Vignes, F Forbes, Gene clustering via integrated Markov models combining individual and pairwise features. IEEE/ACM Trans. Comput. Biol. Bioinf. 6(2), 260–270 (2009)
56. X Zhang, X Hu, X Hu, E Park, X Zhou, Utilizing different link types to enhance document clustering based on Markov random field model with relaxation labeling. IEEE Trans. Syst. Man Cybern. Part A: Syst. Humans 42(5), 1167–1182 (2012)
57. AL Varna, M Wu, Modeling content fingerprints using Markov random fields, in Proc. IEEE Int. Workshop Inf. Forensics Security (London, 2009), pp. 111–115
58. C Martínez, Partial quicksort, in Proc. ACM-SIAM Workshop on Algorithm Engineering and Experiments and 1st ACM-SIAM Workshop on Analytic Algorithmics and Combinatorics (New Orleans, 2004), pp. 224–228
59. X Lin, C-T Li, Large-scale image clustering based on camera fingerprints. IEEE Trans. Inf. Forensics Secur. 12(4), 793–808 (2017)
60. T Gloe, R Böhme, The Dresden image database for benchmarking digital image forensics. J. Digit. Forensic Practice 3(2-4), 150–159 (2010)
61. WM Rand, Objective criteria for the evaluation of clustering methods. J. Am. Stat. Assoc. 66(336), 846–850 (1971)
62. L Hubert, P Arabie, Comparing partitions. J. Classification 2(1), 193–218 (1985)

Acknowledgements

This work is partly supported by the EU project, Computer Vision Enabled Multimedia Forensics and People Identification (Project no. 690907; Acronym: IDENTITY), funded through the EU Horizon 2020 Marie Skłodowska-Curie Actions - Research and Innovation Staff Exchange action.
Funding: This study received funding from the EU project, Computer Vision Enabled Multimedia Forensics and People Identification (Project no. 690907; Acronym: IDENTITY), funded through the EU Horizon 2020 Marie Skłodowska-Curie Actions Research and Innovation Staff Exchange action.
Availability of data and materials: The datasets analyzed during the current study are available in the Dresden Image Database, [http://forensics.inf.tu-dresden.de/ddimgdb/].
Author affiliations:
School of Computing and Mathematics, Charles Sturt University, Boorooma Street, Wagga Wagga, Australia: Chang-Tsun Li and Xufeng Lin
Department of Computer Science, University of Warwick, Gibbet Hill Road, Coventry, CV4 7AL, UK: Chang-Tsun Li
Contributions: First, we propose a fast and reliable algorithm for clustering camera fingerprints. Aiming at overcoming the limitations of the work in [49], the proposed algorithm makes the following improvements: (1) redefining the similarity in terms of the shared nearest neighbors; (2) speeding up the calculation of the reference similarity; (3) refining the determination of the membership committee; (4) reducing the complexity of the calculations in each iteration; and (5) accelerating the speed of convergence. Not only is the presentation of the clustering methodology more comprehensive and detailed in this work, but the proposed algorithm is also much more efficient and reliable than that in [49]. Second, we discuss in detail the related SPN clustering algorithms, namely the spectral, hierarchical, shared nearest neighbor, normalized cut, and our previous MRF-based clustering methods [49]. These algorithms are evaluated and compared on real-world databases to provide insight into the pros and cons of each algorithm and to offer a valuable reference for practical applications. Finally, we evaluate the proposed algorithm on a large and challenging image database which contains 7400 images taken by 74 cameras, covering 27 camera models and 14 brands, while the database used in [49] includes only six cameras. Furthermore, the quality of clustering is characterized by the F1-measure and the adjusted Rand index, which are more suitable for evaluating clustering results than the true positive rate or accuracy alone used in [31, 33, 34, 49]. Both authors read and approved the final manuscript.
Correspondence to Xufeng Lin.
Citation: Li, C-T., Lin, X. A fast source-oriented image clustering method for digital forensics. J Image Video Proc. 2017, 69 (2017). https://doi.org/10.1186/s13640-017-0217-y
Keywords: Image clustering; Markov random fields; Sensor pattern noise; Multimedia forensics; Image and Video Forensics for Social Media analysis
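As an aside on the two evaluation measures just mentioned, both are straightforward to compute from ground-truth and predicted labels. A minimal sketch (Python; adjusted_rand_score is scikit-learn's implementation of the adjusted Rand index, while the pairwise F1 below is one common definition and may differ in detail from the measure used in the paper):

    from itertools import combinations
    from sklearn.metrics import adjusted_rand_score

    def pairwise_f1(labels_true, labels_pred):
        # Count image pairs placed in the same cluster (predicted) vs. shot by
        # the same camera (ground truth), then combine precision and recall.
        tp = fp = fn = 0
        for i, j in combinations(range(len(labels_true)), 2):
            same_true = labels_true[i] == labels_true[j]
            same_pred = labels_pred[i] == labels_pred[j]
            if same_pred and same_true:
                tp += 1
            elif same_pred:
                fp += 1
            elif same_true:
                fn += 1
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    labels_true = [0, 0, 1, 1, 2, 2]   # toy ground-truth camera labels
    labels_pred = [0, 0, 1, 2, 2, 2]   # toy clustering output
    print("pairwise F1:", pairwise_f1(labels_true, labels_pred))
    print("adjusted Rand index:", adjusted_rand_score(labels_true, labels_pred))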
On the stability of the Lagrangian homographic solutions in a curved three-body problem on $\mathbb{S}^2$
Regina Martínez (Departament de Matemàtiques, Universitat Autònoma de Barcelona, Bellaterra, Barcelona) and Carles Simó (Departament de Matemàtica Aplicada i Anàlisi, Universitat de Barcelona, Gran Via 585, 08007 Barcelona)
Discrete & Continuous Dynamical Systems - A, March 2013, 33(3): 1157-1175. doi: 10.3934/dcds.2013.33.1157
Received May 2011; Revised November 2011; Published October 2012
Abstract: The problem of three bodies with equal masses in $\mathbb{S}^2$ is known to have Lagrangian homographic orbits. We study the linear stability and also a "practical" (or effective) stability of these orbits on the unit sphere.
Keywords: Curved 3-body problem, homographic orbits, stability of solutions, practical stability.
Mathematics Subject Classification: Primary: 70F07, 70H12; Secondary: 34D08, 37J2.
Citation: Regina Martínez, Carles Simó. On the stability of the Lagrangian homographic solutions in a curved three-body problem on $\mathbb{S}^2$. Discrete & Continuous Dynamical Systems - A, 2013, 33(3): 1157-1175. doi: 10.3934/dcds.2013.33.1157
Singular traveling wave solutions for Boussinesq equation with power law nonlinearity and dual dispersion
Shan Zheng, Zhengyong Ouyang (ORCID: orcid.org/0000-0002-4037-6331) & Kuilin Wu
In this paper we study the Boussinesq equation with power law nonlinearity and dual dispersion, which arises in fluid dynamics. A particular kind of product of distributions is introduced and applied to obtain non-smooth solutions of this equation. It is proved that, under certain conditions, a distributional solution in the form of a singular Dirac delta function exists for this model. For the first time, this kind of product of distributions is used to deal with a fourth-order nonlinear partial differential equation.
In 1872, Boussinesq [1] derived a one-dimensional nonlinear water wave equation under the assumptions that the horizontal velocity is constant along the water depth and that the vertical velocity varies linearly along the water depth; it is called the Boussinesq equation. The classical Boussinesq equation combines conservation of mass with the momentum equation of an incompressible inviscid fluid. It has the following properties: (1) the governing equation is expressed in terms of water depth and velocity and satisfies the conservation of mass and momentum in any case, so it can describe wave refraction, diffraction, and the interaction between incident and reflected waves; (2) the Boussinesq equation is weakly dispersive and nonlinear, so it is only suitable for shallow water areas; (3) the classical Boussinesq equation cannot be used to deal with the strong nonlinearity of wave breaking or the influence of ambient currents. More details about the advantages, disadvantages, and applications of the Boussinesq equation can be found in [2,3,4,5,6,7,8,9,10]. However, some characteristics of the classical Boussinesq equation limit its range of application. In order to extend the range of water depths covered by the equation, many researchers have studied modified or generalized forms of the Boussinesq equation, so that it can be applied to deep water areas. For example, a velocity variation or a higher-derivative term is introduced to adjust the linear dispersion properties of the equation. This paper is devoted to the study of the Boussinesq equation with power law nonlinearity and dual dispersion that is investigated in fluid dynamics [11,12,13]:
$$ u_{tt}-k^{2}u_{xx}+a \bigl(u^{2n} \bigr)_{xx}+b_{1}u_{xxxx}+b_{2}u_{xxtt}=0, $$
where \(u(x,t)\) represents the wave profile, x and t are the spatial and temporal variables, respectively, and k, a, and \(b_{j}\) for \(j=1,2\) are real-valued constants. The first term is the evolution term, and the first two terms form the wave operator; the term with coefficient a represents the nonlinear action, where n is the power law nonlinearity parameter. The two terms with coefficients \(b_{j}\) (\(j=1,2\)) are the dispersion terms: the first one is the regular dispersion, while the second one arises from the surface tension [14].
Special solutions play an important role in the study of partial differential equations; they can be used to describe and explain many phenomena in physics, engineering, and beyond. It is therefore interesting to consider different kinds of exact solutions of (1.1). Some results on exact solutions have been obtained; here we give a brief review. Equation (1.1) was studied in [11] in order to look for exact solutions; there, three integration tools were adopted to extract soliton solutions.
The methods used in [11] are the traveling wave hypothesis, the ansatz method, and the semi-inverse variational principle. Shock waves and singular soliton solutions to (1.1) were obtained, and the wave profiles were also displayed numerically. Besides, the connection between singular solitons and solitary waves was established. The conserved quantities were also obtained with the aid of the multiplier method in Lie symmetry analysis. Soliton solutions of (1.1) in two forms were considered in [12]. The solitary wave ansatz was used to carry out the integration of these equations. Two of the conserved quantities were laid down. Finally, numerical simulation was carried out for these two equations as well. In [13], again from the viewpoint of integrability, the Boussinesq equation with power law nonlinearity and dual dispersion (1.1) was studied; three additional algorithms were used to search for solutions, and as a result exact expressions for solitary wave solutions, singular solitary waves, shock waves, plane waves, and singular periodic solutions were obtained. There are also many meaningful results for the fractional Boussinesq equation and generalized Boussinesq systems; readers can refer to [15,16,17,18,19,20] for details.
However, to the best of our knowledge, other special solutions such as the singular traveling delta wave have not yet been considered for (1.1). In [21,22,23,24,25,26], in order to deal with non-smooth or distributional solutions of some nonlinear partial differential equations, such as the delta function, the Heaviside function, etc., the authors constructed a very suitable definition of products of distributions so that the result of any product of distributions remains a distribution. It is a reasonable and effective extension of the product of classical functions, or of a distribution by a smooth function, and it reduces to the classical product if both factors are classical functions. We introduce the details of these products of distributions later. It is worth noticing that in [21,22,23,24,25,26] only first-order partial differential models were studied. So far, higher-order partial differential equations, even second-order ones, have not been considered in this way. It is a new attempt to use the methods in the above references to study distributional solutions of a fourth-order equation like (1.1). So, in this paper, we use the corresponding definitions and approach to products of distributions therein to investigate some specific aspects of the propagation of delta waves for (1.1). It is proved that, in the sense of the products of distributions defined in [21,22,23,24,25,26], under certain conditions the traveling delta wave
$$ u(x,t)=m\delta(x-ct) $$
is a solution of (1.1), where δ stands for the Dirac measure concentrated at the origin.
First of all, it is necessary to give an overview of products of distributions, because we rely on such products to obtain our results. Non-smooth or singular functions can be regarded as distributions, or generalized functions. We have to turn to distributions when we want to obtain non-smooth or singular solutions of nonlinear partial differential equations, because of the nonlinearity. Now we recall some results on products of distributions. Firstly, Maslov and his collaborators [27,28,29,30] introduced several distribution algebras, and later Rosinger [31,32,33,34] did similar work.
These works brought to light the algebraic structures involved in embedding the space of distributions \(D'\) into certain quotient algebras. The article of Egorov [35] is a very good guide for a preliminary review of these types of approaches to products of distributions. Later, several more products of distributions were introduced; the most popular one is that of Colombeau [36, 37], which is especially related to the framework of Rosinger. The book of Oberguggenberger [38] is a good reference in this direction. As is well known, unfortunately, some distributional products cannot multiply distributions with a strong singularity at a given point, for instance the product δδ of two Dirac delta measures. Other approaches obtain such products at the price of leaving the space of distributions. For example, δδ is an element of Colombeau's algebra G, but this element has no associated distribution. Consequently, from the mathematical point of view, δδ is well defined but difficult to interpret at the level of theoretical physics; some indeterminacies also arise. The approach in [21, 22] is a general theory that provides a distribution as the outcome of any product of distributions, once a certain function α is fixed. Such a function quantifies the indeterminacy inherent in the products, and, once fixed, its physical interpretation becomes clear. The authors stress that this indeterminacy is in general not avoidable, and it plays an essential role in many questions; concerning this point, we refer to Sect. 6 in [22] and also to [39,40,41]. For instance, within their framework, they have explicitly exhibited [21, 25] Dirac delta wave solutions (and also solutions which are not measures) for the turbulent model ruled by the nonconservative Burgers equation, and some phenomena, like the "infinitely narrow soliton solutions" obtained by Maslov and his collaborators, arise directly in distributional form [25] as a particular case. Also in the same setting, for a model ruled by a singular perturbation of the conservative Burgers equation, they proved [26] that delta waves under collision behave as classical soliton collisions (as in the Korteweg–de Vries equation).
The rest of this paper is organized as follows. In Sect. 2 we give a review of the delta distribution and some of its properties used later. We then introduce the product of distributions in a particular sense, together with some arithmetic rules, in Sect. 3. In Sect. 4 we define the concept of α-solution and show that it is a particular extension of the classical solution. Finally, under some conditions, we prove that (1.1) possesses traveling delta wave solutions in Sect. 5.
Delta function and some of its properties
In mathematics, the Dirac delta function (δ function) is a generalized function, or distribution. It is used to model the density of an idealized point mass or point charge as a function equal to zero everywhere except at zero and whose integral over the entire real line is equal to one. That is,
$$ \delta(x)= \textstyle\begin{cases} +\infty,& x=0,\\ 0,& x\neq0, \end{cases} $$
and it is also constrained to satisfy the identity
$$ \int_{-\infty}^{+\infty}\delta(x)\,dx=1. $$
As there is no function with these properties, the computations made by theoretical physicists appeared to mathematicians as nonsense until the introduction of distributions by Laurent Schwartz formalized and validated them.
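These two defining properties can be checked numerically on a smooth approximation of δ. The following standalone sketch (Python, not part of the paper) approximates δ by a narrow normalized Gaussian and verifies both the unit integral and the sifting property:

    import numpy as np

    # Approximate delta by a narrow normalized Gaussian delta_eps and check:
    # (i) its integral is ~1, (ii) it "sifts" out f(0) under the integral sign.
    x = np.linspace(-1.0, 1.0, 200001)
    dx = x[1] - x[0]
    eps = 1e-3
    delta_eps = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

    f = np.cos(3 * x)                  # any smooth test function, f(0) = 1
    print(delta_eps.sum() * dx)        # ~= 1
    print((delta_eps * f).sum() * dx)  # ~= f(0) = 1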
As a distribution, the Dirac delta function is a linear functional that maps every test function to its value at zero. Here we present some properties that will be used later. The delta function satisfies the following scaling property for a nonzero scalar μ:
$$ \int_{-\infty}^{+\infty}\delta(\mu x)\,dx= \int_{-\infty}^{+\infty}\delta(y)\frac{dy}{ \vert \mu \vert }= \frac{1}{ \vert \mu \vert }, $$
that is,
$$ \delta(\mu x)=\frac{\delta(x)}{ \vert \mu \vert } $$
(we write the scalar as μ to avoid confusion with the function α used in the products of distributions below). In particular, the delta function is an even distribution, in the sense that
$$ \delta(-x)=\delta(x). $$
The distributional derivative of the Dirac delta distribution is the distribution \(\delta'\) defined on compactly supported smooth test functions φ by
$$ \delta'[\varphi]=-\delta \bigl[\varphi' \bigr]=- \varphi'(0). $$
The above equality is a kind of integration by parts, for if δ were a true function, then
$$ { \int_{-\infty}^{\infty}\delta'(x)\varphi(x)\,dx=- \int_{-\infty}^{\infty}\delta(x)\varphi'(x)\,dx.} $$
The kth derivative of δ is defined similarly as the distribution given on test functions by
$$ \delta^{(k)}[\varphi]=(-1)^{k}\varphi^{(k)}(0). $$
In particular, δ is an infinitely differentiable distribution. Furthermore, the convolutions of δ and \(\delta'\) with a compactly supported smooth function f are
$$ \delta * f=f $$
and
$$ \delta' * f=\delta * f'=f', $$
respectively, which follows from the properties of the distributional derivative of a convolution.
Product of distributions
This section introduces the product of distributions defined in [21, 22]. Let \(\mathcal{D}\) be the space of compactly supported infinitely differentiable complex-valued functions defined on R, let \(\mathcal{D}'\) be the space of Schwartz distributions, and let \(\alpha\in\mathcal{D}\) be even with \(\int_{-\infty}^{\infty}\alpha= 1\). In the theory of products in [21, 22], for computing the α-product \(T_{\dot{\alpha}} S \), one arrives at a relation of the form
$$ T_{\dot{\alpha}} S = T \beta+ ( T \ast\alpha) f $$
for \(T\in\mathcal{D}'\) and \(S=\beta+f\in C^{p}\oplus\mathcal{D}'_{\mu}\), where \(p\in\{0,1,2,\ldots,\infty\}\), \(\mathcal{D}'^{p}\) is the space of distributions of order p in the sense of Schwartz (\(\mathcal{D}'^{\infty}\) means \(\mathcal{D}'\)), \(\mathcal{D}'_{\mu}\) is the space of distributions whose support has measure zero in the sense of Lebesgue, and Tβ is the usual Schwartz product of a \(\mathcal{D}'^{p}\) distribution by a \(C^{p}\)-function.
Remark 3.1 The α-product is a generalization of the classical product of functions in the distribution sense; therefore, the weak solution of a nonlinear PDE is related to this product. It is clear to see that, in (3.1), if the functions T and S are classical functions, then \(S=\beta+f\) with \(f=0\), so \(T_{\dot{\alpha}} S = T \beta+ ( T \ast \alpha) f=T\beta\). Hence the α-product is equivalent to the classical product. For instance, if δ stands for the Dirac measure, we have
$$\begin{aligned}& \delta_{\dot{\alpha}} \delta=\delta_{\dot{\alpha}} (0+\delta )=(\delta\ast \alpha)\delta=\alpha\delta= \alpha(0)\delta, \end{aligned}$$
$$\begin{aligned}& \delta_{\dot{\alpha}} (D\delta)=(\delta\ast\alpha) (D\delta )=\alpha(0) (D \delta)-\alpha'(0)\delta=\alpha(0) (D\delta), \end{aligned}$$
$$\begin{aligned}& (D\delta)_{\dot{\alpha}} \delta= \bigl((D\delta)\ast\alpha \bigr)\delta = \bigl( \delta\ast\alpha' \bigr)\delta=\alpha'(0)\delta= 0, \end{aligned}$$
where D denotes the generalized derivative.
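A detail in the last two examples is worth making explicit: since α is even by assumption, its derivative \(\alpha'\) is odd, so \(\alpha'(0)=0\). This is exactly what makes the two products differ and exhibits the non-commutativity of the α-product:
$$\begin{aligned} \delta_{\dot{\alpha}} (D\delta) &=(\delta\ast\alpha) (D\delta)=\alpha(D\delta)=\alpha(0) (D\delta)-\alpha'(0)\delta=\alpha(0) (D\delta), \\ (D\delta)_{\dot{\alpha}} \delta &= \bigl((D\delta)\ast\alpha \bigr)\delta= \bigl(\delta\ast\alpha' \bigr)\delta=\alpha'\delta=\alpha'(0)\delta=0, \end{aligned}$$
so \(\delta_{\dot{\alpha}}(D\delta)=\alpha(0)(D\delta)\neq0=(D\delta)_{\dot{\alpha}}\delta\) whenever \(\alpha(0)\neq0\).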
It is easy to define the product of a distribution by a smooth function. A limitation of the theory of distributions is that there is no associative product of two distributions extending the product of a distribution by a smooth function, as was proved by Laurent Schwartz in the 1950s [42, 43]. Consequently, the properties of this kind of product of distributions are quite different from those of the pointwise product of classical functions. The α-product is bilinear, has a unit element (the constant function taking the value 1, viewed as a distribution), and is invariant under translations and also under the action of the transformation \(t \to-t\) from R onto R. In general, this product is neither associative nor commutative; however,
$$ \int_{R}T _{\dot{\alpha}} S= \int_{R}S_{\dot{\alpha}} T $$
for any α if \(T,S \in\mathcal{D}'_{\mu}\) and T or S is compactly supported. In general, α-products cannot be completely localized. This becomes clear by noticing that \(\operatorname{supp}(T _{\dot{\alpha}} S) \subset \operatorname{supp} S\) (as for ordinary functions), but it can happen that \(\operatorname{supp}(T _{\dot{\alpha}} S) \not\subset \operatorname{supp} T\). Thus, in the following, the α-product is regarded as a global product, and when we apply the product to differential equations, the solutions are naturally viewed as global solutions.
Product (3.1) is consistent with the Schwartz product of \(\mathcal{D}'^{p}\)-distributions by \(C^{p}\)-functions (if these functions are placed on the right-hand side) and satisfies the standard differential rules. The Leibniz formula must be represented in the form
$$ D ( T _{\dot{\alpha}} S ) = ( D T ) _{\dot{\alpha}} S + T _{\dot {\alpha}} ( D S ) , $$
where D is the derivative operator in the distributional sense. Besides, we can use the α-products (3.1) to define powers of some distributions. Thus, if \(T=\beta+f\in C^{p}\oplus(\mathcal{D}'_{\mu}\cap\mathcal{D}'^{p})\), then
$$ T_{\dot{\alpha}} T = \beta^{2}+ \bigl[\beta+(\beta * \alpha)+(f * \alpha) \bigr]f, $$
because \(T\in\mathcal{D}'^{p}\cap(C^{p}\oplus\mathcal{D}'_{\mu})\). Since \(T_{\dot{\alpha}} T \in C^{p}\oplus(\mathcal{D}'_{\mu}\cap \mathcal{D}'^{p})\), we can define the α-powers \(T_{\alpha}^{n}\) (\(n\geq0\) an integer) by the recurrence formula
$$\begin{aligned}& T_{\alpha}^{0}=1, \end{aligned}$$
$$\begin{aligned}& T_{\alpha}^{n}= \bigl(T_{\alpha}^{n-1} \bigr)_{\dot{\alpha}} T. \end{aligned}$$
Since the distributional products (3.1) are consistent with the Schwartz products of distributions by functions (when the functions are placed on the right-hand side), we have \(\beta_{\alpha}^{n}=\beta^{n}\) for all \(\beta\in C^{p}\), which proves the consistency of this definition with the ordinary powers of \(C^{p}\)-functions. For instance, if \(m \in \mathbb{C}\), then \((m\delta)^{0}_{\alpha}= 1\) and \((m\delta)^{n}_{\alpha}= m^{n}[\alpha(0)]^{n-1}\delta\) for \(n\geq2\), which can readily be seen by induction (see the worked computation below). We also have \((\tau_{a}T)_{\alpha}^{n}=\tau_{a}(T)_{\alpha}^{n}\) in the distributional sense, where \(\tau_{a}\) is the translation operator defined by \(a \in R\). Thus, in what follows, we shall write \(T^{n}\) instead of \(T_{\alpha}^{n}\) (supposing that α is fixed), which also simplifies the notation.
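To make the power formula just quoted explicit, take \(T=m\delta=0+m\delta\) (so \(\beta=0\) and \(f=m\delta\)) in (3.7); since \(\delta\ast\alpha=\alpha\),
$$ (m\delta)^{2}_{\alpha}= \bigl[(m\delta)\ast\alpha \bigr](m\delta)=m^{2}\alpha\delta=m^{2}\alpha(0)\delta, $$
and if \((m\delta)^{n-1}_{\alpha}=m^{n-1}[\alpha(0)]^{n-2}\delta\) for some \(n\geq3\), then by (3.1) and the recurrence,
$$ (m\delta)^{n}_{\alpha}= \bigl(m^{n-1} \bigl[\alpha(0) \bigr]^{n-2}\delta \bigr)_{\dot{\alpha}}(m\delta)=m^{n-1} \bigl[\alpha(0) \bigr]^{n-2}(\delta\ast\alpha) (m\delta)=m^{n} \bigl[\alpha(0) \bigr]^{n-1}\delta, $$
which is the induction step behind \((m\delta)^{n}_{\alpha}=m^{n}[\alpha(0)]^{n-1}\delta\).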
Notice that, under this definition of the product of distributions, if \(\phi(u)\) is an entire function of u, then \(\phi\circ u\) is well defined (here \(\phi\circ u\) denotes the expression of \(\phi(u)\) involving the product of distributions), and we have the following result.
Lemma 3.1. If \(\phi(u)\) is an entire function of u, then
$$ \phi\circ(m\delta)= \textstyle\begin{cases} \phi(0)+\phi'(0)m\delta &\textit{if }\alpha(0)=0,\\ \phi(0)+\frac{\phi[m\alpha(0)]-\phi(0)}{\alpha(0)}\delta& \textit{if }\alpha(0)\neq0. \end{cases} $$
Proof. If \(\phi(u)\) is an entire function of u, then
$$ \phi(u)=a_{0}+a_{1}u+a_{2}u^{2}+ \cdots, $$
where \(a_{n}=\frac{\phi^{(n)}(0)}{n!}\) for \(n=0,1,2,\dots\). For \(T\in C^{p}\oplus(\mathcal{D}'^{p}\cup\mathcal{D}'_{\mu})\), we define the composition \(\phi\circ T\) as follows:
$$ \phi\circ T=a_{0}+a_{1}T+a_{2}T^{2}+ \cdots $$
provided this series converges in \(\mathcal{D}'\). This is clearly a consistent definition, and we have \(\tau_{a}(\phi\circ T) = \phi\circ(\tau_{a}T)\) if \(\phi\circ T\) or \(\phi\circ(\tau_{a}T)\) is well defined. Recall that \(\phi\circ T\) depends on α in general. Now we shall show that \(\phi\circ(m\delta)\) is a distribution for all \(m\in\mathbb{C}\). We have \((m\delta)^{0} =1\) and \((m\delta)^{1} =m\delta\) and, for \(n\geqslant2\),
$$ (m\delta)^{n}=m^{n} \bigl[\alpha (0) \bigr]^{n-1}\delta, $$
as we have already seen. Then, according to (3.12),
$$ \phi\circ(m\delta)=a_{0}+a_{1}m\delta+a_{2}(m \delta)^{2}+\cdots, $$
because, as we shall see, this series is convergent in \(\mathcal{D}'\). Indeed, by (3.13), we have
$$ \phi\circ(m\delta)=a_{0}+a_{1}m\delta+a_{2}m^{2} \alpha (0)\delta+a_{3}m^{3} \bigl[\alpha (0) \bigr]^{2} \delta+ \cdots, $$
and thus, if \(\alpha (0) = 0\), then \(\phi\circ(m\delta)= a_{0} + a_{1}m\delta\), while if \(\alpha (0)\neq0\), then
$$ \alpha (0) \bigl[\phi\circ(m\delta)-a_{0} \bigr]=a_{1}\alpha (0)m \delta+a_{2}m^{2} \bigl[\alpha (0) \bigr]^{2} \delta+a_{3}m^{3} \bigl[\alpha (0) \bigr]^{3}\delta+ \cdots, $$
which is equivalent to
$$ \alpha (0) \bigl[\phi\circ(m\delta)-a_{0} \bigr]= \bigl[{a_{1} \alpha (0)m+a_{2}m^{2} \bigl[\alpha (0) \bigr]^{2}+a_{3}m^{3} \bigl[\alpha (0) \bigr]^{3}+\cdots} \bigr]\delta, $$
because, by (3.11), the series in brackets converges to \(\phi (m\alpha (0))-a_{0}\). In this case,
$$ \alpha (0) \bigl[\phi\circ(m\delta)-a_{0} \bigr]= \bigl[{\phi \bigl(m\alpha (0) \bigr)-a_{0}} \bigr]\delta, $$
and, since \(a_{0}=\phi(0)\), we obtain
$$ \phi\circ(m\delta)=\phi(0)+\frac{\phi[m\alpha(0)]-\phi (0)}{\alpha(0)}\delta. $$
This completes the proof. □
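For instance, taking \(\phi(u)=e^{u}\) in Lemma 3.1 (so that \(\phi(0)=1\)) gives the concrete composition
$$ \exp\circ(m\delta)= \textstyle\begin{cases} 1+m\delta &\text{if }\alpha(0)=0,\\ 1+\frac{e^{m\alpha(0)}-1}{\alpha(0)}\delta &\text{if }\alpha(0)\neq0, \end{cases} $$
and the second case reduces to the first as \(\alpha(0)\to0\), since \((e^{m\alpha(0)}-1)/\alpha(0)\to m\).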
The concept of α-solution
For simplicity, we first deal with the case \(n=1\) of (1.1), that is,
$$ u_{tt}-k^{2}u_{xx}+a \bigl(u^{2} \bigr)_{xx}+b_{1}u_{xxxx}+b_{2}u_{xxtt}=0. $$
Let us consider equation (4.1). By a classical solution of (4.1) we mean a fourth-order continuously differentiable complex function \((x,t)\to u(x,t)\) which satisfies (4.1) at every point of its domain. Let I be an interval of R with nonempty interior, and let \(F(I)\) be the space of twice continuously differentiable mappings \(\tilde{u}: I \to\mathcal{D}'\) in the sense of the topology of \(\mathcal{D}'\). For \(t \in I\), the notation \([\tilde{u}(t)](x)\) is sometimes used to stress that the distribution \(\tilde{u}(t)\) acts on functions \(\xi\in\mathcal{D}\) that depend on x.
Definition 4.1. The mapping \(\widetilde{u}\in F(I)\) is said to be an α-solution of (4.1) if and only if there is an α such that, for all \(t\in I\),
$$ \bigl(1+b_{2}D^{(2)} \bigr)\frac{d^{2}\widetilde{u} (t) }{dt^{2}}-k^{2}D^{(2)} \widetilde {u}(t) +aD^{(2)} \bigl(\widetilde{u}(t) _{\dot{\alpha}} \widetilde {u}(t) \bigr)+b_{1}D^{(4)}\widetilde{u}(t) =0, $$
where \(D^{(n)}\) (\(n=2\) or 4) stands for the nth distributional derivative.
Theorem 4.1. If u is a global classical solution of equation (4.1) on \(R\times I\), then, for any α, the map ũ defined by \([\widetilde{u}(t)](x)=u(x,t)\) is a global α-solution of (4.1).
Theorem 4.2. If \(u:R\times[0,+\infty)\to \mathbb{C}\) is a \(C^{4}\)-function and \(\widetilde{u}:[0,+\infty)\to \mathcal{D}'\) defined by \([\widetilde{u}(t)](x) = u(x, t)\) is a global α-solution of (4.1), then u is a global classical solution of (4.1).
For the proof, it is sufficient to note that a \(C^{4}\)-function \(u(x,t)\) can be treated as a continuously differentiable mapping \(\widetilde{u}\in F(I)\) defined by \([\widetilde{u}(t)](x) = u(x,t)\) and to use the consistency of the α-products with the classical ones. Theorems 4.1 and 4.2 show that the α-solution is a particular extension of the classical solution.
The propagation of a wave profile \(T\in\mathcal{D}'\)
Definition 5.1. Let \(\tau_{ct}\) be the translation operator satisfying \(\tau_{ct}T(\cdot)=T(\cdot-ct)\). We say that \(T \in \mathcal{D}'\) α-propagates with the speed c, according to (4.1), if and only if the mapping \(\widetilde{u}\in F(I)\) defined by \(\widetilde{u}(t) = \tau_{ct}T\) is an α-solution of (4.1).
Theorem 5.1. Let \(T \in \mathcal{D}'\) be a nonconstant distribution. Then T α-propagates with the speed c, according to (4.1), if and only if
$$ c^{2} \bigl(1+b_{2}D^{(2)} \bigr)D^{(2)}T-k^{2}D^{(2)}T +aD^{(2)}(T_{\dot{\alpha }}T)+b_{1}D^{(4)}T =0. $$
Proof. Assume that T α-propagates with the speed c. By Definitions 4.1 and 5.1 we have
$$ \bigl(1+b_{2}D^{(2)} \bigr)\frac{d^{2}(\tau_{ct}T )}{dt^{2}}-k^{2}D^{(2)}( \tau_{ct}T )+aD^{(2)} \bigl((\tau_{ct}T) _{\dot{\alpha}}(\tau _{ct}T) \bigr)+b_{1}D^{(4)}( \tau_{ct}T)=0 $$
for all \(t\in I\). According to [13, p. 648], we have \(\frac{d^{2}(\tau_{ct}T )}{dt^{2}}=c^{2}D^{(2)}(\tau_{ct}T )\), so the above equation can be rewritten as
$$ c^{2} \bigl(1+b_{2}D^{(2)} \bigr)D^{(2)}( \tau_{ct}T )-k^{2}D^{(2)}(\tau_{ct}T )+aD^{(2)} \bigl((\tau_{ct}T) _{\dot{\alpha}}(\tau _{ct}T) \bigr)+b_{1}D^{(4)}(\tau_{ct}T)=0. $$
Applying the translation operator \(\tau_{-ct}\) to the above equation, we have
$$ c^{2} \bigl(1+b_{2}D^{(2)} \bigr)D^{(2)}T-k^{2}D^{(2)}T +aD^{(2)} \bigl[\tau_{-ct} \bigl((\tau _{ct}T) _{\dot{\alpha}}(\tau_{ct}T) \bigr) \bigr]+b_{1}D^{(4)}T =0. $$
Again, since \(\tau_{ct}(T_{\dot{\alpha}}T)=(\tau_{ct}T)_{\dot{\alpha}}(\tau_{ct}T)\), it follows that (5.1) holds. This completes the proof. □
In particular, if \(T=m\delta(x)\), it can be verified directly that
$$ \frac{d^{2}(\tau_{ct}m\delta(x))}{dt^{2}}=c^{2}D^{(2)} \bigl(\tau_{ct}m \delta (x) \bigr). $$
Let \(\xi\in\mathcal{D}\) be a test function; in fact,
$$\begin{aligned} \biggl\langle \frac{d(\tau_{ct}m\delta(x))}{dt}, \xi \biggr\rangle =& \biggl\langle \frac{d(m\delta(x-ct))}{dt},\xi \biggr\rangle \\ =& \biggl\langle \lim_{h\to0}\frac{m\delta(x-c(t+h))-m\delta(x-ct)}{h},\xi \biggr\rangle \\ =&\lim_{h\to0}\frac{1}{h} \bigl[ \bigl\langle m\delta \bigl(x-c(t+h) \bigr),\xi \bigr\rangle - \bigl\langle m\delta (x-ct),\xi \bigr\rangle \bigr] \\ =&\lim_{h\to0}\frac{1}{h} \bigl[m\xi \bigl(c(t+h) \bigr)-m \xi(ct) \bigr] \\ =&cm\xi'(ct) \\ =&cm \bigl\langle \delta(x-ct),\xi'(x) \bigr\rangle \\ =&-cm \bigl\langle \delta'(x-ct),\xi(x) \bigr\rangle \\ =& \bigl\langle -cm\delta'(x-ct),\xi(x) \bigr\rangle , \end{aligned}$$
so
$$ \frac{d(\tau_{ct}m\delta(x))}{dt}=-mc\delta'(x-ct), $$
where the prime on δ denotes the distributional derivative. Similarly, using \(\langle\delta'(x-a),\xi\rangle=-\xi'(a)\), we have
$$\begin{aligned} \biggl\langle \frac{d^{2}(\tau_{ct}m\delta(x))}{dt^{2}}, \xi\biggr\rangle =&\biggl\langle \frac{d^{2}(m\delta(x-ct))}{dt^{2}},\xi\biggr\rangle \\ =&\biggl\langle \lim_{h\to0}\frac{-cm\delta'(x-c(t+h))+cm\delta'(x-ct)}{h},\xi \biggr\rangle \\ =&\lim_{h\to0}\frac{1}{h} \bigl[\bigl\langle cm\delta'(x-ct),\xi\bigr\rangle -\bigl\langle cm \delta' \bigl(x-c(t+h) \bigr),\xi\bigr\rangle \bigr] \\ =&\lim_{h\to0}\frac{1}{h} \bigl[cm\xi' \bigl(c(t+h) \bigr)-cm\xi'(ct) \bigr] \\ =&c^{2}m\xi''(ct) \\ =&c^{2}m\bigl\langle \delta(x-ct),\xi''(x) \bigr\rangle \\ =&-c^{2}m\bigl\langle \delta'(x-ct),\xi'(x) \bigr\rangle \\ =&\bigl\langle c^{2}m\delta''(x-ct),\xi(x) \bigr\rangle , \end{aligned}$$
so
$$ \frac{d^{2}(\tau_{ct}m\delta(x))}{dt^{2}}=c^{2}D^{(2)} \bigl(m\delta (x-ct) \bigr)=c^{2}D^{(2)} \bigl(\tau_{ct}m\delta(x) \bigr). $$
Now we show that a Dirac delta wave \(T=m\delta\) α-propagating with speed c is a solution of (4.1), where \(m\in \mathbb{C}\) is a nonzero constant.
Theorem 5.2. The Dirac delta wave \(T=m\delta\) α-propagates with speed c according to (4.1) if and only if one of the following two conditions is satisfied:
If \(\alpha(0)=0\), then \(c^{2}=k^{2}=-\frac{b_{1}}{b_{2}}\).
If \(\alpha(0)\neq0\), the wave speed c satisfies \(c^{2}=-\frac{b_{1}}{b_{2}}\), and the α function should be chosen with \(\alpha(0)=\frac{b_{1}+k^{2}b_{2}}{ab_{2}m}\).
Proof. According to the definition of the product of distributions and Lemma 3.1 (with \(\phi(u)=u^{2}\)), calculating directly, we have
$$ \phi\circ(m\delta)=m\delta(x)_{\dot{\alpha}}m\delta(x)= \textstyle\begin{cases} 0 &\text{if }\alpha(0)=0,\\ m^{2}\alpha(0)\delta(x) &\text{if }\alpha(0)\neq0. \end{cases} $$
By using Theorem 5.1 and substituting (5.25) into (5.1): if \(\alpha(0)\neq0\), we have
$$ c^{2} \bigl(1+b_{2}D^{(2)} \bigr)mD^{(2)} \delta(x)-k^{2}mD^{(2)}\delta(x) +aD^{(2)} \bigl(m^{2}\alpha(0)\delta(x) \bigr)+b_{1}mD^{(4)} \delta(x) =0, $$
that is,
$$ \bigl[c^{2}-k^{2}+am\alpha(0) \bigr]D^{(2)} \delta(x)+ \bigl(c^{2}b_{2}+b_{1} \bigr)D^{(4)}\delta(x) =0, $$
while if \(\alpha(0)=0\), we have
$$ c^{2} \bigl(1+b_{2}D^{(2)} \bigr)mD^{(2)} \delta(x)-k^{2}mD^{(2)}\delta(x) +b_{1}mD^{(4)} \delta(x) =0, $$
that is,
$$ \bigl(c^{2}-k^{2} \bigr)D^{(2)}\delta(x)+ \bigl(c^{2}b_{2}+b_{1} \bigr)D^{(4)} \delta(x) =0. $$
Since \(D^{(2)}\delta\) and \(D^{(4)}\delta\) are linearly independent, these equations hold if and only if \(c^{2}=-\frac{b_{1}}{b_{2}}\) and \(\alpha(0)=\frac{b_{1}+k^{2}b_{2}}{ab_{2}m}\), or \(c^{2}=k^{2}=-\frac{b_{1}}{b_{2}}\), respectively. □
When \(n=2\), equation (1.1) becomes
$$ u_{tt}-k^{2}u_{xx}+a \bigl(u^{4} \bigr)_{xx}+b_{1}u_{xxxx}+b_{2}u_{xxtt}=0. $$
Following the above steps, we can obtain the δ solution for (5.30) by the following theorem.
Theorem 5.3. The Dirac delta wave \(T=m\delta\) α-propagates with speed c according to (5.30) if and only if one of the following two conditions is satisfied:
If \(\alpha(0)=0\), then \(c^{2}=k^{2}=-\frac{b_{1}}{b_{2}}\).
If \(\alpha(0)\neq0\), the wave speed c satisfies \(c^{2}=-\frac{b_{1}}{b_{2}}\), and the α function should be chosen with \(\alpha(0)=\frac{1}{m}\sqrt[3]{\frac{b_{1}+k^{2}b_{2}}{ab_{2}}}\).
Proof. On the basis of the definition of the product of distributions and Lemma 3.1, calculating directly, we have
$$ \bigl[m\delta(x) \bigr]^{4}_{\dot{\alpha}}= \textstyle\begin{cases} 0 &\text{if }\alpha(0)=0,\\ m^{4}[\alpha(0)]^{3}\delta(x) &\text{if }\alpha(0)\neq0, \end{cases} $$
where \([m\delta(x)]^{4}_{\dot{\alpha}}=m\delta(x)_{\dot{\alpha }}m\delta(x)_{\dot{\alpha}}m\delta(x)_{\dot{\alpha}}m\delta(x)\). Then, substituting (5.31) into the equation
$$ c^{2} \bigl(1+b_{2}D^{(2)} \bigr)D^{(2)}T-k^{2}D^{(2)}T +aD^{(2)} \bigl(T^{4}_{\dot{\alpha }} \bigr)+b_{1}D^{(4)}T =0, $$
where \(T^{4}_{\dot{\alpha}}=T_{\dot{\alpha}}T_{\dot{\alpha}}T_{\dot {\alpha}}T\), we have, for \(\alpha(0)\neq0\),
$$ c^{2} \bigl(1+b_{2}D^{(2)} \bigr)mD^{(2)} \delta(x)-k^{2}mD^{(2)}\delta(x) +aD^{(2)} \bigl(m^{4} \bigl[\alpha(0) \bigr]^{3}\delta(x) \bigr)+b_{1}mD^{(4)}\delta(x) =0, $$
that is,
$$ \bigl(c^{2}-k^{2}+a \bigl[m\alpha(0) \bigr]^{3} \bigr)D^{(2)}\delta(x)+ \bigl(c^{2}b_{2}+b_{1} \bigr)D^{(4)}\delta (x) =0, $$
which (together with the case \(\alpha(0)=0\), in which the a-term drops out) holds if and only if \(c^{2}=-\frac{b_{1}}{b_{2}}\) and \(\alpha(0)=\frac{1}{m}\sqrt[3]{\frac{b_{1}+k^{2}b_{2}}{ab_{2}}}\), or \(c^{2}=k^{2}=-\frac{b_{1}}{b_{2}}\). □
Furthermore, with basically similar steps, we can obtain the traveling delta wave solution of (1.1) for any positive integer n.
Theorem 5.4. The Dirac delta wave \(T=m\delta\) α-propagates with speed c according to (1.1) if and only if one of the following two conditions is satisfied:
If \(\alpha(0)=0\), then \(c^{2}=k^{2}=-\frac{b_{1}}{b_{2}}\).
If \(\alpha(0)\neq0\), the wave speed c satisfies \(c^{2}=-\frac{b_{1}}{b_{2}}\), and the α function should be chosen with \(\alpha(0)=\frac{1}{m}\sqrt[2n-1]{\frac{b_{1}+k^{2}b_{2}}{ab_{2}}}\).
Up to now, only first- and second-order nonlinear partial differential equations had been investigated with this kind of product of distributions. This paper has extended the application of such products of distributions to a higher-order nonlinear partial differential equation; under certain conditions, it is verified that a Dirac delta function translated at speed c is a singular solution of (1.1). The result of this paper shows that further higher-order nonlinear models can be dealt with in this way.
1. Boussinesq, J.: Theory of wave and swells propagated in long horizontal rectangular canal and imparting to the liquid contained in this canal. J. Math. Pures Appl. 17(2), 55–108 (1872)
2. Zhang, X., Shu, T., Cao, H., et al.: The general solution for impulsive differential equations with Hadamard fractional derivative of order \(q \in(1, 2)\). Adv. Differ. Equ. 2016(1), Article ID 14 (2016)
3. Agarwal, P., Dragomir, S.S., Jleli, M., Samet, B.: Advances in Mathematical Inequalities and Applications. Trends in Mathematics (2019)
4. Tariboon, J., Ntouyas, S.K., Sutthasin, B.: Impulsive fractional quantum Hahn difference boundary value problems. Adv. Differ. Equ. 2019(1), Article ID 220 (2019)
5. Sitho, S., Ntouyas, S.K., Agarwal, P., et al.: Noninstantaneous impulsive inequalities via conformable fractional calculus. J. Inequal. Appl. 2018(1), Article ID 261 (2018)
6. Ruzhansky, M., Je, C.Y., Agarwal, P.: Advances in Real and Complex Analysis with Applications. Trends in Mathematics (2017)
7. Agarwal, P., Ibrahim, I.H., Yousry, F.M.: G-stability one-leg hybrid methods for solving DAEs. Adv. Differ. Equ. 2019(1), Article ID 103 (2019)
8. Saoudi, K., Agarwal, P., Kumam, P., et al.: The Nehari manifold for a boundary value problem involving Riemann–Liouville fractional derivative. Adv. Differ. Equ. 2018(1), Article ID 263 (2018)
9. Hammouch, Z., Mekkaoui, T., Agarwal, P.: Optical solitons for the Calogero–Bogoyavlenskii–Schiff equation in \((2+1)\) dimensions with time-fractional conformable derivative. Eur. Phys. J. Plus 133(7), Article ID 248 (2018)
10. Saad, K., Iyiola, S., Agarwal, P.: An effective homotopy analysis method to solve the cubic isothermal auto-catalytic chemical system. AIMS Math. 3(1), 183–194 (2018)
11. Biswas, A., Song, M., Triki, H., Kara, A.H., Ahmed, B.S., Strong, A., Hama, A.: Solitons, shock waves, conservation laws and bifurcation analysis of Boussinesq equation with power law nonlinearity and dual dispersion. Appl. Math. Inf. Sci. 8, 949–957 (2014)
12. Biswas, A., Milovic, D., Ranasinghe, A.: Solitary waves of Boussinesq equation in a power law media. Commun. Nonlinear Sci. Numer. Simul. 14, 3738–3742 (2009)
13. Ekici, M., Mirzazadeh, M., Eslami, M.: Solitons and other solutions to Boussinesq equation with power law nonlinearity and dual dispersion. Nonlinear Dyn. 84, 669–676 (2016)
14. Polat, N., Piskin, E.: Existence and asymptotic behavior of solution of the Cauchy problem for the damped sixth-order Boussinesq equation. Acta Math. Appl. Sin. Engl. Ser. 31, 735–746 (2015)
15. Baleanu, D., Inc, M., Aliyu, A.I., Yusuf, A.: The investigation of soliton solutions and conservation laws to the coupled generalized Schrödinger–Boussinesq system. Waves Random Complex Media 29(1), 77–92 (2018)
16. Tchier, F., Yusuf, A., Aliyu, A.I., Baleanu, D.: Time fractional third-order variant Boussinesq system: symmetry analysis, explicit solutions, conservation laws and numerical approximations. Eur. Phys. J. Plus 133(6), Article ID 240 (2018)
17. Javeed, S., Saif, S., Waheed, A., Baleanu, D.: Exact solutions of fractional mBBM equation and coupled system of fractional Boussinesq–Burgers. Results Phys. 9, 1275–1281 (2018)
18. Osman, M.S., Machado, J.A.T., Baleanu, D.: On nonautonomous complex wave solutions described by the coupled Schrödinger–Boussinesq equation with variable coefficients. Opt. Quantum Electron. 50(2), Article ID 73 (2018)
19. Yang, X.J., Machado, J.A.T., Baleanu, D.: Exact traveling-wave solution for local fractional Boussinesq equation in fractal domain. Fractals 25(4), Article ID 1740006 (2017)
20. Kumar, S., Kumar, A., Baleanu, D.: Two analytical methods for time-fractional nonlinear coupled Boussinesq–Burgers equations arising in propagation of shallow water waves. Nonlinear Dyn. 85(2), 699–715 (2016)
21. Sarrico, C.O.R.: Distributional products and global solutions for nonconservative inviscid Burgers equation. J. Math. Anal. Appl. 281, 641–656 (2003)
22. Sarrico, C.O.R.: Products of distributions and singular travelling waves as solutions of advection–reaction equations. Russ. J. Math. Phys. 19, 244–255 (2012)
23. Sarrico, C.O.R.: About a family of distributional products important in the applications. Port. Math. 45, 295–316 (1988)
24. Sarrico, C.O.R.: Distributional products with invariance for the action of unimodular groups. Riv. Mat. Univ. Parma 4, 79–99 (1995)
25. Sarrico, C.O.R.: New solutions for the one-dimensional nonconservative inviscid Burgers equation. J. Math. Anal. Appl. 317, 496–509 (2006)
26. Sarrico, C.O.R.: Collision of delta-waves in a turbulent model studied via a distribution product. Nonlinear Anal. 73, 2868–2875 (2010)
27. Danilov, V.G., Maslov, V.P., Shelkovich, V.M.: Algebras of singularities of singular solutions to first-order quasi-linear strictly hyperbolic systems. Teor. Mat. Fiz. 114(1), 3–55 (1998). English translation: Theor. Math. Phys. 114(1), 1–42 (1998)
28. Danilov, V.G., Shelkovich, V.M.: Generalized solutions of nonlinear differential equations and the Maslov algebras of distributions. Integral Transforms Spec. Funct. 6(1–4), 171–180 (1998)
29. Maslov, V.P.: Nonstandard characteristics in asymptotical problems. Usp. Mat. Nauk 38(6), 3–36 (1983). English translation: Russ. Math. Surv. 38(6), 1–42 (1983)
30. Maslov, V.P., Tsupin, V.A.: Necessary conditions for existence of infinitely narrow solitons in gas dynamics. Dokl. Akad. Nauk SSSR 246(2), 298–300 (1979). English translation: Sov. Phys. Dokl. 24(5), 354–356 (1979)
31. Rosinger, E.E.: Distributions and Nonlinear Partial Differential Equations. Lecture Notes Math., vol. 684. Springer, Berlin (1978)
32. Rosinger, E.E.: Nonlinear Partial Differential Equations. Sequential and Weak Solutions. North-Holland, Amsterdam (1980)
33. Rosinger, E.E.: Generalized Solutions of Nonlinear Partial Differential Equations. North-Holland, Amsterdam (1987)
34. Rosinger, E.E.: Nonlinear Partial Differential Equations. An Algebraic View of Generalized Solutions. North-Holland, Amsterdam (1990)
35. Egorov, Y.V.: On the theory of generalized functions. Usp. Mat. Nauk 45(5), 3–40 (1990). English translation: Russ. Math. Surv. 45(5), 1–49 (1990)
36. Colombeau, J.F.: New Generalized Functions and Multiplication of Distributions. North-Holland, Amsterdam (1985)
37. Colombeau, J.F.: Elementary Introduction to New Generalized Functions. North-Holland, Amsterdam (1985)
38. Oberguggenberger, M.: Multiplication of Distributions and Applications to Partial Differential Equations. Longman, Harlow (1992)
39. Bressan, A., Rampazzo, F.: On differential systems with vector-valued impulsive controls. Boll. Unione Mat. Ital. 2B(7), 641–656 (1988)
40. Colombeau, J.F., Roux, A.L.: Multiplication of distributions in elasticity and hydrodynamics. J. Math. Phys. 29, 315–319 (1988)
41. Maso, G.D., LeFloch, P., Murat, F.: Definitions and weak stability of nonconservative products. J. Math. Pures Appl. 74, 483–548 (1995)
42. Schwartz, L.: Théorie des Distributions, vol. I (1950)
43. Schwartz, L.: Théorie des Distributions, vol. II (1951)
Acknowledgements: The authors would like to thank the reviewers for their helpful comments and the editors for their hard work.
Funding: This work was supported by the Natural Science Foundation of China (11771151).
Author affiliations:
Department of Basic Courses, Guangzhou Maritime University, Guangzhou, P.R. China: Shan Zheng
Department of Mathematics, Foshan University, Foshan, P.R. China: Zhengyong Ouyang
Department of Mathematics, Guizhou University, Guiyang, P.R. China: Kuilin Wu
Contributions: The authors contributed equally to this paper. All authors read and approved the final manuscript.
Correspondence to Zhengyong Ouyang.
Citation: Zheng, S., Ouyang, Z., Wu, K.: Singular traveling wave solutions for Boussinesq equation with power law nonlinearity and dual dispersion. Adv. Differ. Equ. 2019, 501 (2019). https://doi.org/10.1186/s13662-019-2428-2
Keywords: Boussinesq equation; Dual dispersion; Traveling delta wave solution
Many-Body Kernels for TDDFT Calculations
by Caterina Cocchi & Santiago Rigamonti for exciting boron-10
Purpose: In this tutorial we will learn how to perform a time-dependent density-functional theory (TDDFT) calculation using different xc kernels. Three examples will be proposed: one for the BSE-derived xc kernel, one for the LRC kernel, and another for the RBO (RPA-bootstrap) kernel. As a test case, the optical spectrum of LiF will be studied.
1. Calculation setup
Additional details about the calculation workflow
2. Theoretical background
3. TDDFT calculations using the BSE-derived xc kernel
4. TDDFT calculations using the LRC and RBO kernels
1. Calculation setup
Before starting, be sure that the relevant environment variables are already defined as specified in How to set environment variables for tutorials scripts.
As a preliminary step for this excited-state calculation, a ground-state calculation will be performed. In this tutorial we consider LiF as an example. Create a directory named LiF-TDDFT-kernels and move into it.

$ mkdir LiF-TDDFT-kernels
$ cd LiF-TDDFT-kernels

Inside the directory LiF-TDDFT-kernels we create a sub-directory GS where we perform the preliminary ground-state calculation:

$ mkdir GS
$ cd GS

Inside the GS sub-directory we create the input file for LiF. In the structure element we include the lattice parameter and basis vectors of LiF, which has a rock-salt cubic lattice, as well as the positions of the Li and F atoms. In the groundstate element, we include a 10$\times$10$\times$10 k-point mesh (ngridk) and a value of 14.0 for gmaxvr. This value, which is larger than the default, is needed in view of the excited-state calculation (for details on this we refer to Excited States from BSE). The resulting input file is the following (the basis vectors and atomic positions below are the standard rock-salt setup just described):

<input>
   <title>LiF-BSE</title>
   <structure speciespath="$EXCITINGROOT/species">
      <crystal scale="7.608">
         <basevect>0.5 0.5 0.0</basevect>
         <basevect>0.5 0.0 0.5</basevect>
         <basevect>0.0 0.5 0.5</basevect>
      </crystal>
      <species speciesfile="Li.xml">
         <atom coord="0.0 0.0 0.0"/>
      </species>
      <species speciesfile="F.xml">
         <atom coord="0.5 0.5 0.5"/>
      </species>
   </structure>
   <groundstate ngridk="10 10 10" gmaxvr="14.0"/>
</input>

N. B.: Do not forget to replace in the input.xml the string "$EXCITINGROOT" by the actual value of the environment variable $EXCITINGROOT.
Now start the ground-state SCF calculation and check that it finishes gracefully.

$ time excitingser &

In case of a successful run the files STATE.OUT and EFERMI.OUT should be present in the directory. These two files are needed as a starting point for the excited-state calculation.
Additional details about the calculation workflow
The workflow of the algorithm is a combination of the TDDFT linear-response calculation (see Excited states from TDDFT) and the calculation of the direct term of the BSE Hamiltonian, which is then used to set up an MBPT-derived kernel to first order. This kernel then enters the Dyson equation for the response function in the last stage of the TDDFT formalism.
2. Theoretical background
There is a large literature dealing with the inclusion of many-body effects into TDDFT kernels in order to correctly reproduce excitonic features (for further details we refer to the seminal review ORR-2002). In the following, we will present examples related to two different approaches. In the first one we will present a TDDFT calculation of the optical spectrum of LiF performed with an xc kernel derived from the solution of the Bethe-Salpeter equation (BSE). In the second part of the tutorial, we will deal with kernels including the so-called long-range correction (LRC).
In the example treating the BSE-derived xc kernel, the scheme proposed in MDR-2003 is adopted. In this approach, a nonlocal exchange-correlation functional is derived by requiring TDDFT to reproduce the many-body diagrammatic expansion of the Bethe-Salpeter polarization function.
In this way, it is shown that the TDDFT kernel is able to capture the excitonic features in solids, which are otherwise missing in simpler approximations for the kernel. For further details about the implementation in the code, see SAD-2009.
LRC kernels include the long-range component, which is missing at the level of the adiabatic local-density approximation (ALDA). The first model for an LRC kernel was proposed in REI-2002:
\begin{align} f_{\rm xc}(\mathbf{q},\omega) = - \frac{\alpha}{\mathbf{q}^2}\;. \end{align}
This kernel is static, non-local, and includes the long-range Coulomb tail. In this model, $\alpha$ is a material-dependent parameter. An improvement of this kernel is given in BOT-2005, where a dynamical LRC kernel was developed, which explicitly carries a frequency dependence:
\begin{align} f_{\rm xc}(\mathbf{q},\omega)=-\frac{1}{\mathbf{q}^2}\left(\alpha+\beta \omega^2\right)\;. \end{align}
The values of $\alpha$ and $\beta$ are also material dependent and have to be tuned in order to correctly reproduce the experimental data for the excitons. For further details about the model we refer to the original paper BOT-2005.
The RBO kernel (see SHA-2011 and RIG-2015) shares similar characteristics with the LRC kernel: it is long-ranged and static. An important difference is, however, that it does not depend on adjustable parameters, like the parameter $\alpha$, but is fully ab initio. It is explicitly given by (RIG-2015)
\begin{align} f_{\rm xc}^{RBO}(\mathbf{q}) = \frac{1}{\varepsilon_M^{RPA}(\mathbf{q},\omega=0)\overline{\chi}^{RPA}(\mathbf{q},\omega=0)}\;, \end{align}
where $\varepsilon_M^{RPA}$ and $\overline{\chi}^{RPA}$ are, respectively, the macroscopic dielectric function and the modified density-response function (ORR-2002) in the random-phase approximation (RPA).
3. TDDFT calculations using the BSE-derived xc kernel
We are now ready to set up the TDDFT calculation of LiF using a BSE-derived xc kernel. First of all, we move back to the parent directory. There, we create a new directory called BSE-kernel and move into it:

$ cd ..
$ mkdir BSE-kernel
$ cd BSE-kernel

We copy into it the files STATE.OUT and EFERMI.OUT, as well as the input file from the GS folder. Then, in the file input.xml the ground-state calculation can be skipped. To do so, set the attribute do = "skip" inside the groundstate element. Then, paste the following xs block (only the attributes quoted in this page are shown; the block is otherwise set up as in Excited States from BSE):

<xs xstype="TDDFT" nempty="3" scissor="0.2095">
   <screening screentype="full"/>
   <BSE bsetype="singlet"/>
   <tddft fxctype="MB1" aresdf="false" aresfxc="false"/>
</xs>

This block is very similar to the one presented in Excited States from BSE, to which we refer for an exhaustive description of the input attributes. In the following, we discuss only the parameters relevant for the TDDFT calculation with a BSE-derived kernel:
In the xs element we set the attribute xstype = "TDDFT", as we are performing a TDDFT calculation;
We decrease the number of empty states with respect to the example in Excited States from BSE by choosing nempty = "3" to speed up the calculation;
Both the tddft and bse elements appear in the input file;
Inside the tddft element, we specify the parameters related to TDDFT calculations. We refer to Excited States from TDDFT for additional details about this element. In this case we choose a many-body, BSE-derived kernel, fxctype = "MB1", and we set the attributes aresdf and aresfxc to "false" to speed up the calculation (for further details see the Input Reference);
The bse element must be specified to generate the kernel.
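Optionally, before launching the run, you can verify that the edited input.xml is still well-formed XML. The one-liner below uses xmllint from libxml2, which is assumed to be installed on your system (it is not part of exciting); no output means the file parses cleanly:

$ xmllint --noout input.xml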
To run the calculation, simply type:

$ time excitingser &

Once the run is completed (it should take a few minutes), we can analyze the results. As in any TDDFT calculation, a number of output files are created (see also Excited States from TDDFT for additional details). Here we are interested in the files named EPSILON_NAR_FXCMB1_OCYY_QMT001.OUT (YY = 11, 22, and 33) and specifically in the imaginary part of the macroscopic dielectric function, corresponding to the optical absorption spectrum. In order to visualize, e.g., the "11" component of the spectrum (notice that, as LiF has a cubic crystal structure, the three diagonal components yield the same result!), type:

$ PLOT-spectra.py EPSILON_NAR_FXCMB1_OC11_QMT001.OUT.xml

This will generate a file named PLOT.png as output, which you can visualize with any PNG viewer (e.g., "okular", "display" from ImageMagick, "gwenview", etc.).
The main features of the optical spectrum are clearly visible in the resulting plot. The intense excitonic peak at about 13 eV dominates the low-energy part of the spectrum, and another strong peak is found above 20 eV. This result is in agreement with the spectrum obtained by solving the BSE equation (see Excited States from BSE). For further comparison with the literature, we refer to MDR-2003.
4. TDDFT calculations using the LRC and RBO kernels
As usual, we recommend moving to the parent directory and creating a new folder in which to run the TDDFT calculations with the LRC and RBO kernels:

$ cd ..
$ mkdir LRC-RBO-kernel
$ cd LRC-RBO-kernel

We copy into it the files STATE.OUT and EFERMI.OUT, as well as the input file from the GS folder. In the file input.xml the following xs block should be inserted (again, only the attributes quoted in this page are shown):

<xs xstype="TDDFT">
   <tddft fxctype="LRCstatic" alphalrc="10.0"/>
</xs>

With respect to the previous example with the BSE-derived kernel, first of all we notice that the bse element has disappeared: this is a purely TDDFT calculation. Inside the tddft element we have changed the attribute fxctype to LRCstatic: in this way we set the exchange-correlation kernel to be the static LRC one (REI-2002). The crucial parameter in this calculation is alphalrc, which determines the value of $\alpha$ in the expression of the kernel $f_{xc}(\mathbf{q},\omega) = - \alpha/\mathbf{q}^2$. This is a dimensionless number, which we choose here to be equal to 10.0.
Run the calculation (note that it will take much less time than the calculation with the BSE-derived kernel!) and plot the resulting optical absorption spectrum. In the resulting plot, the strong excitonic peak at about 12.5 eV characterizing the spectrum of LiF is correctly reproduced by the TDDFT calculation with the static LRC kernel. Compared to the result obtained with the BSE-derived kernel, the main differences appear in the higher-energy region of the spectrum, above 20 eV. However, the purpose of correctly reproducing the intense bound exciton of LiF is fulfilled.
Next, we compare this result with that obtained with the dynamical LRC kernel. In this case, the xs block in the input file is the following:

<xs xstype="TDDFT">
   <tddft fxctype="LRCdyn" alphalrcdyn="2.0" betalrcdyn="35.6"/>
</xs>

Only the tddft element is modified compared to the static case. The attribute fxctype is set to "LRCdyn" to choose the dynamical $f_{xc}$ quoted in BOT-2005. In this case, two parameters must be tuned, namely alphalrcdyn and betalrcdyn, which correspond respectively to $\alpha$ (dimensionless) and $\beta$ (Ha$^{-2}$) in the model kernel of Eq. (2).
(1) and (2) are chosen such that the calculated BSE energy peak coincides with the experimental one. For more details we refer to the original paper BOT-2005. As a rule of thumb, the parameters alphalrcdyn and betalrcdyn can be set so as to mimic the behavior of alphalrc in the static LRC kernel:
\begin{align} \alpha_{\rm dyn} + \beta\; \omega_{\rm peak}^2 = \alpha_{\rm static}\;, \end{align}
where $\omega_{\rm peak}$ indicates the position of the first excitonic peak of LiF, at 12.9 eV. Choosing a value of 2.0 for alphalrcdyn, as suggested in BOT-2005, one obtains 35.6 for betalrcdyn. Using these parameters in the calculation, the first intense excitonic peak is again well reproduced by the LRC kernel in the resulting optical spectrum.

Finally, we calculate the spectrum with the RBO kernel. In this case, the tddft element in the input file is the following:

<tddft
   fxctype="RBO"/>

As you can see, the tddft element now does not contain any empirical parameters, in accordance with the defining Eq. (3) above. The obtained spectrum shows a bound excitonic peak inside the band gap, whose binding energy is in good agreement with experiment. In order to compare the RBO result with the static LRC result and the BSE-derived xc kernel, we can generate a plot containing all the spectra together by executing:

$ PLOT-spectra.py EPSILON_FXCLRCstatic_OC11_QMT001.OUT.xml EPSILON_FXCRBO_OC11_QMT001.OUT.xml ../BSE-kernel/EPSILON_NAR_FXCMB1_OC11_QMT001.OUT.xml

The generated PLOT.png file shows the three spectra in a single plot.

Some further exercises:
- If you have already done the tutorial Excited States from TDDFT, calculate the optical absorption spectrum of LiF using the RPA and ALDA kernels. What happens to the excitonic peak?
- Decrease the parameter alphalrc in the calculation with the static LRC kernel and check what happens to the excitonic peak. Compare your results with the onset of the spectrum obtained from the RPA calculation.
- Tune the parameters alphalrcdyn and betalrcdyn, following the rule of thumb suggested above. What happens to the spectrum?

ORR-2002: G. Onida, L. Reining, and A. Rubio, Rev. Mod. Phys. 74, 601 (2002).
MDR-2003: A. Marini, R. Del Sole, and A. Rubio, Phys. Rev. Lett. 91, 256402 (2003).
SAD-2009: S. Sagmeister and C. Ambrosch-Draxl, Phys. Chem. Chem. Phys. 11, 4451 (2009).
REI-2002: L. Reining, V. Olevano, A. Rubio, and G. Onida, Phys. Rev. Lett. 88, 066404 (2002).
BOT-2005: S. Botti, A. Fourreau, F. Nguyen, Y.-O. Renault, F. Sottile, and L. Reining, Phys. Rev. B 72, 125203 (2005).
SHA-2011: S. Sharma, J. K. Dewhurst, A. Sanna, and E. K. U. Gross, Phys. Rev. Lett. 107, 186401 (2011).
RIG-2015: S. Rigamonti, S. Botti, V. Veniard, C. Draxl, L. Reining, and F. Sottile, Phys. Rev. Lett. 114, 146402 (2015).
A nonhomogeneous quasi-birth-death process approach for an $(s, S)$ policy for a perishable inventory system with retrial demands

Sung-Seok Ko
Department of Industrial Engineering, Konkuk University, Seoul, Korea

Journal of Industrial & Management Optimization, May 2020, 16(3): 1415-1433. doi: 10.3934/jimo.2019009
Received March 2018; Revised October 2018; Early access March 2019; Published May 2020

In this paper, an $(s, S)$ continuous-review inventory model with perishable items and retrial demands is proposed. In addition, replenishment lead times that are independent and identically distributed according to a phase-type distribution are implemented. The proposed system is modeled as a three-dimensional Markov process using a level-dependent quasi-birth-death (QBD) process. The ergodicity of the modeled Markov system is demonstrated, and an efficient method for approximating the steady-state distribution of the inventory level is determined. This paper also provides formulas for performance measures based on the steady-state distribution obtained with the proposed approximation method. Furthermore, in order to minimize the system cost, the optimal values of $s$ and $S$ are determined numerically, and a sensitivity analysis is performed on the main parameters.

Keywords: Inventory, perishable item, retrial demand, level-dependent QBD process.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Figure 1. Inventory Model
Figure 2. Contour Plot of TCR
Figure 3. The effect of $\lambda$
Figure 4. The effect of $\mu$

Table 1. Total Cost Rate (TCR) as a function of $s$, for $S = 15$:

  s     1       2       3       4       5       6       7       8       9       10
 TCR  367.40  363.25  361.55  362.46  366.06  372.45  381.92  394.31  409.75  429.18
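To make the QBD machinery in the abstract a little more concrete, the sketch below computes the stationary distribution of a small truncated birth-death chain with a level-dependent rate by solving pi Q = 0 with a normalization condition. Everything here (the rates, the truncation level, and the dense linear solve) is a toy illustration; the paper's actual model is a three-dimensional level-dependent QBD handled with specialized approximation methods, not this brute-force approach.

import numpy as np

N = 20                                 # truncation level
lam = lambda n: 1.0 + 0.05 * n         # level-dependent "birth" rate (toy)
mu = lambda n: 2.0                     # "death" rate (toy)

# Build the generator Q of the truncated chain on states 0..N.
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = lam(n)
    if n > 0:
        Q[n, n - 1] = mu(n)
    Q[n, n] = -Q[n].sum()              # rows of a generator sum to zero

# Solve pi Q = 0, sum(pi) = 1: replace one balance equation by normalization.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(N + 1)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi.round(4), pi.sum())           # stationary probabilities; sums to 1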
Symmetry in Physical Laws
Gaurav Tiwari, updated: April 5, 2014

'Symmetry' has a special meaning in physics. A picture is said to be symmetrical if one side is somehow the same as the other side. Precisely, a thing is symmetrical if one can subject it to a certain operation and it appears exactly the same after the operation. For example, if we look at a vase that is left-right symmetrical, then turn it 180° around the vertical axis, it looks the same. Newton's laws of motion do not alter when the position coordinates are altered, that is, when they are moved (linearly) from one place to another. This is equally true for almost all other physical laws. Therefore, we can say that (almost all) laws of physics are symmetrical under linear displacements. The same is true for rotational displacements. Not only Newton's laws: all the other laws of physics known so far are symmetric under translation and rotation of axes. Using these concepts of symmetry, new mathematical techniques have been developed for writing and using physical laws, for example tensor analysis.

Remark: We use several other terms for symmetry whenever needed; for example, in high school we use the term conservation, and at the graduate level it becomes invariance, invariant, or symmetry itself.

The main symmetry (conservation) operations in physical laws are the following:
• Symmetry in Matter and Energy, or Conservation of Mass (Matter) and Conservation of Energy
• Conservation of Linear Momentum
• Conservation of Angular Momentum
• Conservation of Electric Charge
• Conservation of Baryon Number
• Conservation of Lepton Number
• Conservation of Strangeness
• Conservation of Hypercharge
• Conservation of Iso-spin
• Conservation of Charge Conjugation
• Conservation of Parity

Conservation of Mass & Energy

This conservation involves the following two definitions and one hypothesis by Einstein.

Definition 1 (Conservation of Mass/Matter): Matter can never be created or destroyed, but it can convert itself into several other forms of either matter or energy or both.

Definition 2 (Conservation of Energy): Energy can never be created or destroyed, but it can convert itself into other forms of matter and energy.

In practice, we see that if we burn coal, it emits heat and leaves ash behind. Scientifically, the coal (matter) is converted into heat (energy) and a precipitate (matter). This is a balanced conversion in which matter converts into energy. Similarly, we can generate a great deal of energy through nuclear fission, in which matter is also converted directly into energy. We have also seen energy forming different kinds of unstable matter in nature. Physics' famous equation $E=mc^2$, given by Einstein, says the same thing: $E$ (energy) is directly related to $m$ (mass). Matter (mass) and energy are both conserved through their interconversions, and the total value of mass + energy has been constant since the origin of the universe. The complete hypothesis was due to Albert Einstein. Combining the two definitions and the hypothesis, we have: mass and energy can neither be produced nor destroyed, but they can be converted from one form to another.

Conservation of Linear Momentum

The linear momentum of a system is constant if there are no external forces acting on the system of physical bodies.

Conservation of Angular Momentum

The angular momentum of a system remains constant if there are no external (angular) torques acting on the system.

Conservation of Electric Charge

The electric charge can neither be created nor destroyed. The net algebraic sum of positive and negative electric charges is constant.
Conservation of Baryon Number

In any nuclear reaction, the number of baryon particles must remain the same, at least until the reaction completes.

Conservation of Lepton Number

The lepton number, i.e., the algebraic sum of the numbers of leptons and anti-leptons, remains constant throughout a nuclear reaction.

Conservation of Strangeness

The algebraic sum of the numbers of kaons and hyperons, called strangeness, remains constant in electromagnetic and strong interactions.

Conservation of Hypercharge

The flavor of quarks remains the same throughout an internuclear interaction.

Conservation of Isospin

The isospin of hadrons is constant in strong interactions.

Conservation of Charge Conjugation

Remark: Charge conjugation $C$ is the operation of changing a fundamental particle into its antiparticle. It is something like applying an inverse function to a value. For example, let $C$ be the charge conjugation operator; then:
• $C (\pi^+) = \pi^-$ (i.e., the $\pi$-meson is converted into its antiparticle), and
• $C (x^2-3x+5) = 3x-5-x^2$. $\Box$

The charge conjugation operator is conserved in strong and electromagnetic interactions.

Conservation of Parity

Remark: The parity operation $P$ is the reflection of all coordinates through the origin. That is, in a two-dimensional X-Y coordinate system, $P(x, y) = (-x, -y)$, or $P(\mathbf{r})=-\mathbf{r}$.

The parity of any wave function describing an elementary particle is conserved.
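As a tiny concrete check of the two remarks above (an added illustration; the small antiparticle table is a minimal stand-in), both the parity operation P and charge conjugation C behave as involutions: applying either twice restores the original.

def P(r):
    """Parity: reflect all coordinates through the origin, P(r) = -r."""
    return tuple(-x for x in r)

antiparticle = {"pi+": "pi-", "pi-": "pi+", "e-": "e+", "e+": "e-"}

def C(particle):
    """Charge conjugation: swap a particle with its antiparticle."""
    return antiparticle[particle]

r = (3.0, -2.0)
print(P(r))                  # (-3.0, 2.0), i.e. P(x, y) = (-x, -y)
assert P(P(r)) == r          # P^2 = identity
assert C(C("pi+")) == "pi+"  # C(pi+) = pi-, and C^2 = identity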
Discrete & Continuous Dynamical Systems - S
December 2011, Volume 4, Issue 6
Issue on biomathematics: Newly developed applied mathematics and new mathematics arising from biosciences

Preface
Jianjun Paul Tian, Murray R. Bremner, Reinhard Laubenbacher and Banghe Li
2011, 4(6): i-ii. doi: 10.3934/dcdss.2011.4.6i
The First Joint Meeting of the American Mathematical Society and the Chinese Mathematical Society took place in Shanghai, China, December 17-21, 2008. It was organized by the Shanghai Mathematical Society and hosted by Fudan University in Shanghai. Leading researchers from China and the United States participated in the conference. The conference was a major event for advancing mathematical research, and especially for developing international communication and cooperation among mathematicians from China and the United States. The conference program consisted of seven plenary talks, invited talks in eighteen special sessions, and many contributed talks. The topics of the special sessions covered a wide range of mathematics, applied mathematics, and mathematical biology.

Derivations in power-associative algebras
Joseph Bayara, André Conseibo, Artibano Micali and Moussa Ouattara
2011, 4(6): 1359-1370. doi: 10.3934/dcdss.2011.4.1359
In this paper we investigate derivations of a commutative power-associative algebra. Particular cases of stable and partially stable algebras are inspected. Some attention is paid to the Jordan case. Further results are given. In particular, we show that the core of an $n^{th}$-order Bernstein algebra which is power-associative is a Jordan algebra.

Train algebras of degree 2 and exponent 3
Joseph Bayara, André Conseibo, Moussa Ouattara and Artibano Micali
2011, 4(6): 1371-1386. doi: 10.3934/dcdss.2011.4.1371
In this paper we investigate the structure of weighted algebras satisfying the equation $(x^3)^2 = \omega(x)^3x^3$, a class of algebras properly containing the class of Bernstein algebras. We give the classification of these algebras in dimension three. Some results about the structure of algebras satisfying the more general equation $(x^n)^2 = \omega(x)^nx^n$, for $n\geq 2$, are also obtained.

Polynomial identities for ternary intermolecular recombination
Murray R. Bremner
2011, 4(6): 1387-1399. doi: 10.3934/dcdss.2011.4.1387
The operation of binary intermolecular recombination, originating in the theory of DNA computing, permits a natural generalization to $n$-ary operations which perform simultaneous recombination of $n$ molecules. In the case $n = 3$, we use computer algebra to determine the polynomial identities of degree $\le 9$ satisfied by this trilinear nonassociative operation. Our approach requires computing a basis for the nullspace of a large integer matrix, and for this we compare two methods: the row canonical form, and the Hermite normal form with lattice basis reduction.
In the conclusion, we formulate some conjectures for the general case of $n$-ary intermolecular recombination.

Topological symmetry groups of $K_{4r+3}$
Dwayne Chambers, Erica Flapan and John D. O'Brien
2011, 4(6): 1401-1411. doi: 10.3934/dcdss.2011.4.1401
We present the concept of the topological symmetry group as a way to analyze the symmetries of non-rigid molecules. Then we characterize all of the groups which can occur as the topological symmetry group of an embedding of a complete graph of the form $K_{4r+3}$ in $S^3$.

Multiple stable steady states of a reaction-diffusion model on zebrafish dorsal-ventral patterning
Wenrui Hao, Jonathan D. Hauenstein, Bei Hu, Yuan Liu, Andrew J. Sommese and Yong-Tao Zhang
2011, 4(6): 1413-1428. doi: 10.3934/dcdss.2011.4.1413
The reaction-diffusion system modeling the dorsal-ventral patterning during zebrafish embryo development, developed in [Y.-T. Zhang, A.D. Lander, Q. Nie, Journal of Theoretical Biology, 248 (2007), 579-589], has multiple steady state solutions. In this paper, we describe the computation of seven steady state solutions found by discretizing the boundary value problem using a finite difference scheme and solving the resulting polynomial system using algorithms from numerical algebraic geometry. The stability of each of these steady state solutions is studied by mathematical analysis and by numerical simulations via a time-marching approach. The results of this paper show that three of the seven steady state solutions are stable, and that the location of the organizer of a zebrafish embryo determines which stable steady state pattern the multi-stability system converges to. Numerical simulations also show that the system is robust with respect to changes in the organizer size.

Equilibrium submanifold for a biological system
Hongyu He and Naohiro Kato
2011, 4(6): 1429-1441. doi: 10.3934/dcdss.2011.4.1429
The complexity of a biological system may be caused both by the number of variables involved and by the number of system constants that can vary. A biological system at the subcellular level often stabilizes after a certain period of time. Its asymptote can then be described as an equilibrium under certain continuity assumptions. The biological quantities at the equilibrium can be detected by experiments, and they obey some mathematical equations. The purpose of this paper is to study the equilibrium submanifold of vesicle trafficking in a two-compartment system. We compute the equilibrium submanifold under some fairly general assumptions on the system constants. The disconnectedness of the equilibrium submanifold may have biological implications. We show that, unlike many other systems, the equilibrium is determined largely by system constants rather than the initial state. In particular, the equilibrium submanifold is locally a real algebraic variety, with small generic dimension and large degenerate dimension. Our result suggests that some biological systems may be studied by algebraic or geometric methods.
Boolean models of bistable biological systems
Franziska Hinkelmann and Reinhard Laubenbacher
2011, 4(6): 1443-1456. doi: 10.3934/dcdss.2011.4.1443
This paper presents an algorithm for approximating certain types of dynamical systems, given by a system of ordinary delay differential equations, by a Boolean network model. Often Boolean models are much simpler to understand than complex differential equation models. The motivation for this work comes from mathematical systems biology. While Boolean mechanisms do not provide information about exact concentration rates or time scales, they are often sufficient to capture steady states and other key dynamics. Due to their intuitive nature, such models are very appealing to researchers in the life sciences. This paper is focused on dynamical systems that exhibit bistability and are described by delay equations. It is shown that if a certain motif including a feedback loop is present in the wiring diagram of the system, the Boolean model captures the bistability of molecular switches. The method is applied to two examples from biology, the lac operon and the phage $\lambda$ lysis/lysogeny switch.

The dynamics of zeroth-order ultrasensitivity: A critical phenomenon in cell biology
Qingdao Huang and Hong Qian
2011, 4(6): 1457-1464. doi: 10.3934/dcdss.2011.4.1457
It has been well known since the pioneering work of Goldbeter and Koshland [Proc. Natl. Acad. Sci. USA, vol. 78, pp. 6840-6844 (1981)] that the cellular phosphorylation-dephosphorylation cycle (PdPC), catalyzed by kinase and phosphatase under saturated conditions with zeroth-order enzyme kinetics, exhibits ultrasensitivity, a sharp transition. We analyze the dynamic aspects of the zeroth-order PdPC kinetics and show a critical slowdown akin to a phase transition in condensed matter physics. We demonstrate that an extremely simple, though somewhat mathematically "singular", model is a faithful representation of the ultrasensitivity phenomenon. The simplified mathematical model will be valuable, as a component, in developing complex cellular signaling network theory, as well as having pedagogic value.

An enzyme kinetics model of tumor dormancy, regulation of secondary metastases
Yangjin Kim and Khalid Boushaba
2011, 4(6): 1465-1498. doi: 10.3934/dcdss.2011.4.1465
In this paper we study 1-dimensional (1D) and 2D extended versions of a two-compartment model for tumor dormancy suggested by Boushaba et al. [3]. The model is based on the idea that the vascularization of a secondary tumor can be suppressed by an inhibitor originating from a larger primary tumor. The emergence of a polypoid melanoma at a site remote from a primary polypoid melanoma has been observed after excision of the latter. The authors observed no recurrence of the melanoma at the primary site, but did observe secondary tumors at secondary sites five to seven centimeters from the primary site within a period of one month after the excision of the primary site.
1D and 2D simulations show that when the tumors are sufficiently remote, the primary tumor will not influence the secondary tumors, while if they are too close together, the primary tumor can effectively prevent the growth of the secondary tumors, even after it is removed. A sensitivity analysis was carried out for the 1D model. It has long been observed that surgery should be followed by other treatment options such as chemotherapy. The 2D simulations suggest possible treatment options with different dosage schedules after surgery in order to achieve a better clinical outcome.

A computational study of avian influenza
Shu Liao, Jin Wang and Jianjun Paul Tian
2011, 4(6): 1499-1509. doi: 10.3934/dcdss.2011.4.1499
We propose a PDE model and conduct numerical simulations to study the temporal and spatial dynamics of avian influenza, and we investigate its epidemic and possibly pandemic effects on both the bird and human populations. We present several numerical examples to carefully study the population dynamics under small initial perturbations. Our results show that in the absence of external controls, any small amount of initial infection would lead to an outbreak of the influenza with considerably high death rates in both birds and human beings.

Nongeneric bifurcations near heterodimensional cycles with inclination flip in $\mathbb{R}^4$
Dan Liu, Shigui Ruan and Deming Zhu
2011, 4(6): 1511-1532. doi: 10.3934/dcdss.2011.4.1511
Nongeneric bifurcation analysis near rough heterodimensional cycles associated with two saddles in $\mathbb{R}^4$ is presented under inclination flip. By setting up local moving frame systems in some tubular neighborhood of the unperturbed heterodimensional cycles, we construct a Poincaré return map under the nongeneric conditions and further obtain the bifurcation equations. The coexistence of a heterodimensional cycle and a unique periodic orbit after perturbation is proved. New features produced by the inclination flip, namely that heterodimensional cycles and homoclinic orbits coexist on the same bifurcation surface, are shown. It is also conjectured that homoclinic orbits associated with different equilibria coexist.

Update sequence stability in graph dynamical systems
Matthew Macauley and Henning S. Mortveit
2011, 4(6): 1533-1541. doi: 10.3934/dcdss.2011.4.1533
In this article, we study finite dynamical systems defined over graphs, where the functions are applied asynchronously. Our goal is to quantify and understand the stability of the dynamics with respect to the update sequence, and to relate this to structural properties of the graph. We introduce and analyze three different notions of update sequence stability, each capturing different aspects of the dynamics. When compared to each other, these stability concepts yield different conclusions regarding the relationship between stability and graph structure, painting a more complete picture of update sequence stability.
Conjectures for the existence of an idempotent in $\omega$-polynomial algebras
Michelle Nourigat and Richard Varro
2011, 4(6): 1543-1551. doi: 10.3934/dcdss.2011.4.1543
The existence of idempotent elements in baric algebras defined by $\omega$-polynomial identities ($\omega$-PI algebras) is an important problem for the study of genetic algebras. We conjecture here two criteria for the existence of an idempotent. These criteria are based on the existence of 1/2 as a double root of a polynomial built from the identity defining an $\omega$-PI algebra. We show that these criteria hold in all the algebras studied so far for which there are results concerning the existence of idempotent elements.

Backward problems of nonlinear dynamical equations on time scales
Yunfei Peng, X. Xiang and W. Wei
2011, 4(6): 1553-1564. doi: 10.3934/dcdss.2011.4.1553
In this paper, the backward problem of nonlinear dynamical equations on time scales is considered. Introducing a reasonable weak solution of the nonlinear backward problem, the existence of weak solutions for nonlinear dynamical equations on time scales, and their properties, are presented.

Topology and dynamics of boolean networks with strong inhibition
Yongwu Rong, Chen Zeng, Christina Evans, Hao Chen and Guanyu Wang
2011, 4(6): 1565-1575. doi: 10.3934/dcdss.2011.4.1565
A major challenge in systems biology is to understand interactions within biological systems. Such a system often consists of units with various levels of activities that evolve over time, mathematically represented by the dynamics of the system. The interaction between units is mathematically represented by the topology of the system. We carry out some mathematical analysis of the connections between the topology and the dynamics of such networks. We focus on a specific Boolean network model, the Strong Inhibition Model. This model defines a natural map from the space of all possible topologies on the network to the space of all possible dynamics on the same network. We prove that this map is neither surjective nor injective. We introduce the notions of "redundant edges" and "dormant vertices" which capture the non-injectiveness of the map. Using these, we determine exactly when two different topologies yield the same dynamics, and we provide an algorithm that determines all possible network solutions given a dynamics.

Algebraic model of non-Mendelian inheritance
Jianjun Paul Tian
2011, 4(6): 1577-1586. doi: 10.3934/dcdss.2011.4.1577
Evolution algebra theory is used to study non-Mendelian inheritance, particularly organelle heredity and the population genetics of Phytophthora infestans. We not only explain a puzzling feature of the establishment of homoplasmy from a heteroplasmic cell population and the coexistence of mitochondrial triplasmy, but also predict all mechanisms that form the homoplasmy of cell populations, which are hypothetical mechanisms in current mitochondrial disease research.
The algebras also provide a way to easily find different genetically dynamic patterns within the complexity of the progenies of Phytophthora infestans, which causes the late blight of potatoes and tomatoes. Certain suggestions for pathologists are made as well.

Periodic solutions of a model for tumor virotherapy
Daniel Vasiliu and Jianjun Paul Tian
2011, 4(6): 1587-1597. doi: 10.3934/dcdss.2011.4.1587
In this article we study periodic solutions of a mathematical model for brain tumor virotherapy by finding Hopf bifurcations with respect to a biologically significant parameter, the burst size of the oncolytic virus. The model is derived from a PDE free boundary problem. Our model is an ODE system with six variables, five of which represent different cell or virus populations, and one of which represents the tumor radius. We prove the existence of Hopf bifurcations and periodic solutions in a certain interval of values of the burst size. The evolution of the tumor radius is much influenced by the value of the burst size. We also provide a numerical confirmation.

Novel dynamics of a simple Daphnia-microparasite model with dose-dependent infection
Kaifa Wang and Yang Kuang
2011, 4(6): 1599-1610. doi: 10.3934/dcdss.2011.4.1599
Many experiments reveal that Daphnia and its microparasite populations vary strongly in density and typically go through pronounced cycles. To better understand such dynamics, we formulate a simple two-dimensional autonomous ordinary differential equation model for Daphnia magna-microparasite infection with dose-dependent infection. This model has a basic parasite production number $R_0=0$, yet its dynamics are much richer than those of the classical mathematical models for host-parasite interactions. In particular, Hopf bifurcation, stable limit cycles, and homoclinic and heteroclinic orbits can be produced with suitable parameter values. The model indicates that intermediate levels of parasite virulence or host growth rate generate more complex infection dynamics.

On fuzzy filters of Heyting-algebras
Wei Wang and Xiao-Long Xin
2011, 4(6): 1611-1619. doi: 10.3934/dcdss.2011.4.1611
The concept of fuzzy filters of Heyting-algebras is introduced and some important properties are discussed. Some special kinds of fuzzy filters are defined, and we prove that fuzzy Boolean filters are equivalent to fuzzy implicative filters in Heyting-algebras. Relations among the fuzzy filters are also proposed.

Turing instability in a coupled predator-prey model with different Holling type functional responses
Zhifu Xie
2011, 4(6): 1621-1628. doi: 10.3934/dcdss.2011.4.1621
In a reaction-diffusion system, diffusion can induce the instability of a positive equilibrium which is stable with respect to constant perturbations; therefore, diffusion may create new patterns where the corresponding system without diffusion fails to, as shown by Turing in the 1950s.
In this paper we study a coupled predator-prey model with different Holling type functional responses, where cross-diffusion is included in such a way that the prey runs away from the predator and the predator chases the prey. We conduct the Turing instability analysis for each Holling functional response. We prove that if a positive equilibrium solution is linearly stable with respect to the ODE system of the predator-prey model, then it is also linearly stable with respect to the full model. So diffusion and cross-diffusion in the predator-prey model with the Holling type functional responses given in this paper cannot drive Turing instability. However, diffusion and cross-diffusion can still create non-constant positive solutions for the model.

Dynamics of boolean networks
Yi Ming Zou
2011, 4(6): 1629-1640. doi: 10.3934/dcdss.2011.4.1629
Boolean networks are special types of finite-state, time-discrete dynamical systems. A Boolean network can be described by a function from an $n$-dimensional vector space over the field of two elements to itself. A fundamental problem in studying these dynamical systems is to link their long-term behaviors to the structures of the functions that define them. In this paper, a method for deriving a Boolean network's dynamical information via its disjunctive normal form is explained. For a given Boolean network, a matrix with entries $0$ and $1$ is associated with the polynomial function that represents the network; the information on the fixed points and the limit cycles is then derived by analyzing the matrix. The described method provides an algorithm for the determination of the fixed points from the polynomial expression of a Boolean network. The method can also be used to construct Boolean networks with prescribed limit cycles and fixed points. Examples are provided to explain the algorithm.
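As a toy companion to the last abstract, the snippet below finds the fixed points of a small Boolean network by plain exhaustive search over all 2^n states; this is not the matrix/disjunctive-normal-form method the paper develops, and the three update rules are made up for illustration.

from itertools import product

def step(x):
    """One synchronous update of a made-up 3-node Boolean network."""
    x1, x2, x3 = x
    return (x2, x1, x1 and x2)

fixed = [x for x in product((False, True), repeat=3) if step(x) == x]
print(fixed)   # [(False, False, False), (True, True, True)]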
Sylvester's Problem, Steinberg's Solution

The Sylvester Problem was posed by James Joseph Sylvester in 1893 in the Educational Times: Let $n$ given points have the property that the line joining any two of them passes through a third point of the set. Must the $n$ points all lie on one line?

R. Steinberg's was actually the first published solution to Sylvester's problem. Given the set $\Pi$ of noncollinear points, consider the set of lines $\Sigma$ that pass through at least two points of $\Pi.$ Such lines are said to be connecting. Among the connecting lines, those that pass through exactly two points of $\Pi$ are called ordinary. We consider the configuration in the projective plane. Let $p$ be any point of $\Pi.$ If $p$ lies on an ordinary line we are done, so we may assume that $p$ lies on no ordinary line. Let $t$ be a line (in the plane) through $p$ but not through any other point of $\Pi.$ The lines in $\Sigma$ not through $p$ meet $t$ in points $x_{1},x_{2},\ldots,x_{k},$ say, named in cyclic order so that one of the two segments determined by $p$ and $x_1$ contains none of the points $x_{2},\ldots,x_{k}$ within it.

Let $s$ be a line of $\Sigma$ through $x_1.$ Then $s$ must be ordinary! For otherwise there would be three or more points of $\Pi$ on $s,$ say $p_{1},p_{2},p_{3},$ named so that $p_1$ and $x_1$ are separated by $p_2$ and $p_3.$ The connecting line through $p$ and $p_1$ would have to contain a further point of $\Pi$ (remember, $p$ lies on no ordinary line), say $p_4,$ and then one of the two connecting lines $p_{2}p_{4},$ $p_{3}p_{4}$ would meet the "forbidden" segment $px_1.$

P. Borwein and W. O. J. Moser, A survey of Sylvester's problem and its generalizations, Aequationes Mathematicae 40 (1990), 111-135.
H. S. M. Coxeter, A Problem of Collinear Points, The American Mathematical Monthly, Vol. 55, No. 1 (Jan. 1948), pp. 26-28.
P. Erdös and R. Steinberg, Problem 4065 [1943, 65], proposed by P. Erdös, Princeton, N. J.; solution by Robert Steinberg, student, University of Toronto, The American Mathematical Monthly, Vol. 51, No. 3 (Mar. 1944), pp. 169-171.
J. J. Sylvester, Educational Times, Mathematical Question 11851, vol. 59 (1893), p. 98.
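As a quick computational companion to the theorem (an added sketch, independent of Steinberg's argument), the following Python code lists the ordinary lines of a finite point set by brute force. On the 3x3 integer grid, 24 of the 36 point pairs lie on one of the 8 three-point lines, so the code reports 12 ordinary lines.

from itertools import combinations

def collinear(p, q, r):
    # Integer cross-product test: exact for integer coordinates.
    return (q[0]-p[0]) * (r[1]-p[1]) == (q[1]-p[1]) * (r[0]-p[0])

def ordinary_lines(pts):
    out = []
    for p, q in combinations(pts, 2):
        on_line = [r for r in pts if collinear(p, q, r)]
        if len(on_line) == 2:          # only p and q lie on this line
            out.append((p, q))
    return out

pts = [(x, y) for x in range(3) for y in range(3)]   # the 3x3 grid
print(len(ordinary_lines(pts)))                      # 12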
Wearable sensors based on colloidal nanocrystals

Woo Seok Lee, Sanghyun Jeon & Soong Ju Oh

In recent times, wearable sensors have attracted significant attention in various research fields and industries. The rapid growth of wearable-sensor-related research and industry has led to the development of new devices and advanced applications such as bio-integrated devices, wearable health care systems, soft robotics, and electronic skins, among others. Nanocrystals (NCs) are promising building blocks for the design of novel wearable sensors, due to their solution processability and tunable properties. In this paper, an overview of NC synthesis, NC thin film fabrication, and the functionalization of NCs for wearable applications (strain sensors, pressure sensors, and temperature sensors) is provided. The recent development of NC-based strain, pressure, and temperature sensors is reviewed, and a discussion of their strategies and operating principles is presented. Finally, the current limitations of NC-based wearable sensors are discussed, in addition to methods to overcome these limitations.

With the rapid development of the internet of things (IoT), wearable electronic devices have attracted significant attention in research fields and industry, as they can be used for remote health care monitoring and human–machine interfaces [1,2,3,4,5]. They are commonly integrated into clothes, glasses, and watches, and directly attached to human skin to collect physical, chemical, and biological signals generated by humans or their surroundings [6, 7]. Among the various components of wearable devices, strain, pressure, and temperature sensors are critical for the monitoring of human motion, health or physiological information, and external stimuli [8,9,10,11,12,13]. Significant research effort has been directed toward the enhancement of the performance of the abovementioned wearable sensors using various materials such as graphene, carbon nanotubes, organic materials, and silicon nanomembranes, and/or by designing unique device structures [14,15,16,17]. However, costly and complex high-temperature and/or high-vacuum processes such as sputtering, reactive-ion etching, and thermal deposition are generally required to synthesize the functional materials and/or manufacture the devices [18,19,20,21,22]. This results in a high production cost, which limits their commercialization.

Colloidal nanocrystals (NCs) are considered promising building blocks for the next generation of wearable sensors, as they provide the following advantages. First, NCs can be synthesized at a large scale using wet chemical methods, and the resulting NC inks can be deposited onto various substrates over large areas, at room temperature and in an atmospheric environment, using solution-based processes such as roll-to-roll printing, drop casting, spin-coating, and inkjet printing [23,24,25,26,27,28,29,30]. Second, the electronic, optical, and magnetic properties of NCs can be easily controlled by adjusting their size, shape, composition, and surface state, thus enabling them to demonstrate application-specific functionality [31,32,33,34,35,36,37]. Based on these advantages, significant research effort has been directed toward the realization of high-performance NC-based strain, pressure, and temperature sensors by the control of the interparticle distance between the NCs, or by the design of new NC structures [38,39,40,41,42,43,44,45,46,47].
In this brief review, the ligand exchange strategy of NCs for the development of conductive and functional NC thin films with application-specific properties for strain, pressure, and temperature sensors is discussed. Thereafter, a summary of recently reported NC-based strain, pressure, and temperature sensors is presented, in addition to a brief explanation of their strategies, operating principles, and practical applications. Moreover, the review includes an overview of the current challenges, and a perspective on future methods for the realization of advanced NC-based wearable sensors.

Surface ligand exchange of NCs for specific applications

Nanocrystals (NCs) are composed of hundreds to thousands of atoms, with diameters smaller than 100 nm [27]. Moreover, NCs are generally synthesized with long organic chains such as oleic acid and oleylamine as their surface capping ligands, using wet chemical methods [27]. These long organic ligands control the size and shape of the NCs during the synthesis, and enable the dispersion of the NCs in organic solvents after the synthesis and washing procedures [27, 28]. The resulting NC inks allow for the formation of NC thin films on various substrates using solution-based processes such as spin-coating, drop casting, inkjet printing, and roll-to-roll printing [32]. The as-synthesized NC thin films are electrical insulators, given that the long original ligands result in long interparticle distances, which limit efficient charge transport and effective coupling between the NCs. Thus, a ligand exchange strategy is generally used to improve the electrical properties and provide functionality [24, 32]. The original long ligands are replaced with short organic or inorganic ligands by immersing the as-synthesized NC thin films in a ligand exchange solution. It is well known that the resistivity of NC thin films varies from $10^{12}$ to $10^{-6}$ Ω cm depending on the lengths of the surface ligands, which determine the interparticle distance [48,49,50]. In addition, the overall properties of NCs are governed by their surface ligand chemistry, due to their high surface-to-volume ratio [32, 48]. Therefore, application-specific properties can be realized by selecting appropriate types of surface ligands for the ligand exchange process. This enables NCs with the same composition to be used as active materials for different applications such as strain, pressure, and temperature sensors by adjusting their surface chemistry through the ligand exchange process (Fig. 1) [49,50,51].

Fig. 1 Schematic of the synthesis, thin film fabrication, and surface ligand exchange processes of NCs

NC-based strain sensors

Strain sensors are devices that measure the electromechanical deformation of objects. Strain gauge sensors are key components in wearable electronic devices, as they are able to measure the breathing rate, heartbeat, and pulse for applications in wearable health care systems, in addition to a wide range of human motions for human–machine interfaces [8,9,10]. The sensitivity of strain sensors is referred to as the gauge factor, which is defined by the following equation:
$$ G = \left( \Delta R/R_{0} \right)/\varepsilon $$
where $\Delta R$ is the change in resistance, $R_0$ is the base resistance, and $\varepsilon$ is the applied strain. Commercial strain sensors based on metal thin films have a limited gauge factor of ~3 [52]. In contrast, NC-based strain sensors have larger gauge factors, higher than 10 [39].
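A quick numerical reading of this definition (the 0.2% strain matches the cycle tests discussed below; the gauge factors of ~3 and ~30 are the metal-film and MPA-treated-film values quoted in the text, and the base resistance is an invented round number):

strain = 0.002                        # 0.2% applied strain
for G in (3.0, 30.0):
    rel_change = G * strain           # Delta R / R0 = G * epsilon
    print(f"G = {G:>4}: Delta R / R0 = {rel_change:.1%}")
# A metal-film gauge (G ~ 3) gives a 0.6% resistance change, while an
# NC film with G ~ 30 gives 6%, i.e. a ten times larger signal.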
This is attributed to the unique hopping or tunneling transport mechanism in NC thin films, which can be expressed by the following equation:
$$ \sigma = \sigma_{0} \exp\left( -\beta d \right) \exp\left( - \frac{E_{a}}{kT} \right) $$
where $\sigma_0$ is the intrinsic conductivity, $\beta$ is the tunneling decay constant, $d$ is the interparticle distance, $k$ is the Boltzmann constant, $T$ is the temperature, and $E_a$ is the activation energy. Thus, external strain increases the interparticle distance and exponentially decreases the conductance of NC thin films, according to Eq. (2) (Fig. 2a). The hopping or tunneling transport mechanism promotes the sensitivity of the NC thin films to the applied strain, and results in a high gauge factor when compared with conventional metal thin films [44, 45].

Significant effort has been directed toward the enhancement of the sensitivity of NC thin films by adjusting the parameters in Eq. (2). First, several studies were conducted on the effects of controlling the composition of NCs using pure metals, metal alloys, metal oxides, and semiconducting materials [39, 40, 50]. Second, several researchers adjusted the size, shape, and morphology of NCs to improve the sensitivity [53,54,55,56]. Third, surface ligand modification using various inorganic or organic ligands was carried out to control the interparticle distance, tunneling decay term, and activation energy between the NCs [53, 57].

Lee et al. investigated the electrical and electromechanical properties of Ag NC thin films with respect to the type of surface ligands (Fig. 2b) [50]. The long organic ligands of the as-synthesized Ag NC thin films were replaced with the short inorganic ligands ammonium chloride (NH4Cl) and tetrabutylammonium bromide (TBAB), and the short organic ligands 3-mercaptopropionic acid (MPA) and 1,2-ethanedithiol (EDT). The NH4Cl- and TBAB-treated Ag NC thin films exhibited a significant decrease in resistivity ($10^{-5}$ Ω cm) and a notably low gauge factor of ~1, given that the short inorganic ligand treatment resulted in a minimal interparticle distance, or even contact between the NCs. In contrast, the MPA- and EDT-treated Ag NC thin films exhibited a relatively high resistivity of over 1 Ω cm and a high gauge factor of ~30. Although NC-based strain sensors have higher gauge factors than typical metal-thin-film-based strain gauges, these gauge factors are still too low for the detection of subtle bio-signals, which limits their use in advanced applications such as bio-integrated devices [58].

Fig. 2 a Schematic of NC thin films after applied strain. b Cycle tests with application of 0.2% strain on MPA- (purple), EDT- (green), Cl- (red), and Br- (blue) treated Ag NC thin films. c Conductivity and gauge factor of NC thin films as a function of initial interparticle distance. d Schematics of the crack formation strategy. e SEM images of Ag NC thin films after crack formation. f Cycle tests with application of 0.2% strain on MPA-treated Ag NC thin films before (red) and after (black) crack formation. g Schematic of the current path change of Ag NC thin films with cracks after strain application. (Figure reproduced from a, b, and d–f [50], Copyright 2017, Royal Society of Chemistry; c [59], Copyright 2017, Wiley-VCH; g [60], Copyright 2014, Royal Society of Chemistry)

There is a theoretical limit to the gauge factor of NC thin films, which is predicted by Eq. (2) [59].
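To see why Eq. (2) ties the sensitivity to the interparticle distance, consider a minimal toy model (our illustration, not a calculation from the cited papers): if strain stretches the gap as d = d0(1 + ε), the film resistance grows as R(ε) = R0 exp(β d0 ε), so the small-strain gauge factor is roughly β·d0. Both β and d0 below are assumed, illustrative values.

import numpy as np

beta = 10.0        # tunneling decay constant, 1/nm (assumed)
eps = 0.002        # 0.2% strain

for d0 in (0.5, 1.0, 3.0):   # initial interparticle distance, nm (assumed)
    G = (np.exp(beta * d0 * eps) - 1) / eps      # gauge factor ~ beta * d0
    rel_sigma = np.exp(-beta * d0)               # conductivity vs. a zero-gap film
    print(f"d0 = {d0} nm: G ~ {G:.1f}, relative conductivity ~ {rel_sigma:.1e}")
# Increasing d0 raises the gauge factor roughly linearly (about 5 -> 31 here),
# while the conductivity prefactor exp(-beta*d0) collapses by more than ten
# orders of magnitude - the intrinsic trade-off discussed in the text.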
Although the gauge factor can be improved by increasing the initial interparticle distance or the tunneling decay term, the initial conductivity of the NC thin films then decreases exponentially, which limits the practical applications of NC-based strain sensors (Fig. 2c). To overcome this intrinsic limitation of NC thin films, novel strategies such as artificial crack formation and an NC heterostructure design were developed [50, 59]. Lee et al. introduced artificial nanocracks into MPA-treated Ag NC thin films by applying a high pre-strain to the NC thin films (Fig. 2d, e) [50]. The external strain opens the closed cracks, which results in a significant increase in resistance. Using this approach, a high gauge factor of over 300 was achieved after the crack formation (Fig. 2f). Lee et al. also demonstrated micro-crack-based strain sensors made of Ag NC thin films, which exhibit high stretchability, durability, stability, and sensitivity (Fig. 2g) [60]. Besides the crack formation strategy, a percolation strategy was developed to improve the sensitivity of NC-based strain sensors by designing a metal–insulator hetero-NC structure [59, 61]. The metal–insulator structure exhibited a unique electrical resistance behavior that depends on the ratio of the metal to the insulator, according to percolation theory [62]. In particular, the conductivity increases sharply as the ratio of metallic components approaches the percolation threshold, where external perturbations such as strain can induce significant changes in resistance [63]. Lee et al. designed metal–insulator structure based strain sensors using Au and CdSe NCs as the metallic and insulating components, respectively (Fig. 3a) [59]. The resistivity and gauge factor increased as the fraction of the insulating CdSe NCs in the heterostructure increased (Fig. 3b). Artificial nanocracks were created in the NC heterostructure to further enhance the sensitivity, thus achieving a high gauge factor of over 1000. To clarify the origin of the high gauge factor in hetero-NC thin films with cracks, a site and bond percolation model was developed by considering Au and CdSe NCs as occupied and empty sites, and the bridging EDT ligands and open cracks as connected and disconnected bonds, respectively (Fig. 3c).

Fig. 3 (Figure reproduced from a–c [59], Copyright 2017, Wiley-VCH; d, e [61], Copyright 2018, Royal Society of Chemistry; f, g [64], Copyright 2018, American Chemical Society) a TEM images of (left) pure Au NC and (right) Au-CdSe hybrid NC thin films. b Resistivity and gauge factor of Au-CdSe NC hybrid thin films with cracks depending on the fraction of CdSe NCs. c Schematic of square lattice structures for Au-CdSe NC hybrid thin films with cracks according to the site and bond percolation model. The conductivities and gauge factor of d homogeneous arrangement shell binary NC materials (SBNM) and e heterogeneous arrangement SBNM. f Schematic of the structural transformation of NC thin films during ligand exchange. g TCR and gauge factor of Ag NC thin films depending on ligand exchange time

Zhang et al. designed homogeneous and heterogeneous arrays of NCs with different surface capping ligands, and then evaluated their electrical and electromechanical properties (Fig. 3d, e) [61]. As demonstrated, the gauge factor of the hybrid structures can be tuned from 1 to 96 by adjusting the volume ratio of each NC according to percolation theory. Lee et al.
implemented the partial ligand exchange strategy to induce cracks and to simultaneously place NC thin films in a metal–insulator transition state for strain sensor applications (Fig. 3f) [64]. The conventional ligand exchange process is conducted with a treatment time sufficient for the formation of fully ligand exchanged functional NC thin films. In the case of Ag NC thin films treated with TBAB for over 60 s, fully ligand exchanged, highly conductive, and strain-insensitive NC thin films were formed. In contrast, partially ligand exchanged Ag NC thin films with naturally formed cracks were obtained when the as-synthesized Ag NC thin films were treated with TBAB for 15 s; these exhibited an intermediate conductivity and a high gauge factor of up to 300 (Fig. 3g). Owing to the advantages of solution-processable materials, in addition to their high sensitivity, NC-based strain sensors can be used in various practical applications. Figure 4a presents the detection results for different finger bending motions using NC-based strain sensors. Figure 4b illustrates that NC-based strain sensors can be used for human body design by measuring the curvature of a human arm. Moreover, NC-based strain sensors can be used for voice recognition (Fig. 4c). By attaching sensors to a human neck and measuring the resistance of the sensors with respect to the movement of the vocal cords, the words spoken by a person wearing the sensors can be distinguished. Another potential application of NC-based strain sensors is wearable health care monitoring. Figure 4d illustrates that NC-based sensors attached to a human wrist can measure the pulse in real time.

Fig. 4 (Figure reproduced from a, c, and d [59], Copyright 2017, Wiley-VCH; b [50], Copyright 2017, Royal Society of Chemistry) Applications of NC-based strain sensors for a human motion detection, b human body design, c voice recognition, and d pulse monitoring

NC-based pressure sensors

A pressure sensor is a device that detects a force applied to a specific area; together with strain gauges, it is one of the most important mechanical sensors. Pressure sensors have attracted considerable attention in various research fields, as they can be used for medical diagnoses, touch screens, health care monitoring, and industrial applications [65,66,67,68]. Among the various types of pressure sensors, capacitive or resistive type pressure sensors, which convert an applied pressure into an electrical signal, are the most efficient and cost-effective [69, 70]. Significant effort has been directed toward improving the performance of pressure sensors, such as their stretchability, sensitivity, durability, reliability, linearity, and detection range, using various materials and unique device architectures [11, 71, 72]. In particular, nanoscale/microscale bumpy structures such as pyramids or hemispheres are generally used to improve the sensitivity and enlarge the detection range of pressure sensors [11, 21, 73]. However, complex and toxic processes such as e-beam lithography or chemical etching are generally required for the fabrication of these structures. Kim et al. instead produced such nanoscale/microscale structures from Ag NCs by controlling their surface chemistry and developing unique hybrid NC structures using a solution process (Fig. 5a) [51]. The pressure sensor consists of top and bottom electrodes. The bottom electrodes were pre-patterned with a separation gap of 1 mm between two conductive electrodes.
The top electrodes were designed as a hybrid metal–insulator structure made of conductive NH4Cl-treated Ag NC thin films and insulating as-synthesized Ag NC thin films. The insulating NC thin films act as a spacer between the top and bottom electrodes, thus preventing contact between the two electrodes in the absence of pressure (Fig. 5b). Under load, new contact points are formed and/or the existing contact area is enlarged, which increases the conductance of the pressure sensors with the magnitude of the applied pressure. By optimizing the thickness and uniformity of the as-synthesized Ag NC thin films on the conductive NH4Cl-treated Ag NC thin films, a sensitivity of over 500 kPa^−1 and a wide pressure detection range of 0.01–100 kPa were achieved (Fig. 5c, d).

Fig. 5 (Figure reproduced from a–d, g, and h [51], Copyright 2018, American Chemical Society; e, f [60], Copyright 2014, Royal Society of Chemistry) a Schematic and b operating principles of hybrid NC-based pressure sensors. c C-AFM profile of hybrid Ag NC thin films with respect to the amount of as-synthesized Ag NC thin films. d Relative current change of hybrid Ag NC-based pressure sensors with respect to the applied pressure. e Schematic of NC-based pressure sensors with cracks and their operating principles. f Relative change in resistance of NC-based pressure sensors with cracks after the application of pressure. g Real-time pulse monitoring using NC-based pressure sensors. h Images of high-pixel tactile NC-based pressure sensors

Lee et al. demonstrated flexible pressure sensors based on NC thin films with micro-cracks (Fig. 5e) [60]. Pressure applied to the bottom of the devices induces a positive strain on the NC films with cracks, which increases the resistance. It was shown that the sensitivity and pressure detection range can be adjusted by controlling the thickness of the substrates (Fig. 5f). High performance NC-based pressure sensors can be utilized in various applications; their practicality and functionality are demonstrated in applications that involve pulse monitoring and tactile sensing (Fig. 5g, h).

NC-based temperature sensors

The demand for high performance temperature sensors is continuously increasing, given that accurate temperature measurement is very important in industry, medicine, and research. The recent advancement of wearable technology promotes the rapid development of wearable temperature sensors, as they are essential components of wearable devices for health care monitoring or disease diagnosis based on body temperature measurements [13, 14]. Several studies were conducted to improve the sensitivity, stability, and durability of wearable temperature sensors, and to enlarge their temperature detection range, using carbon materials, polymers, and thin metal films [74, 75]. However, complex multi-step procedures, which include high temperature and high vacuum processes, are mostly used for the fabrication of these wearable temperature sensors. NCs, in contrast, can be synthesized at large scale and deposited onto various substrates at low cost using solution-based processes [27, 32]. Thus, to develop cost-effective and highly sensitive NC-based temperature sensors, the temperature-dependent electrical characteristics of NC thin films were investigated by several researchers [38, 49, 76]. Joh et al. evaluated the electrical properties of Ag NC thin films with respect to temperature by engineering the surface chemistry using ligand exchange methods [49].
The Ag NC thin films exhibited different charge transport mechanisms depending on the surface ligands of the NCs. Ag NC thin films capped with organic ligands such as MPA or EDT exhibited interparticle distances of approximately 1 nm and followed the hopping transport mechanism of Eq. (2) (Fig. 6a). From the combination of Eqs. (1) and (2) and Ohm's law, the following equation was obtained for the change in resistance as a function of the temperature and strain: $$ \frac{\Delta R}{R_{0}} = e^{-\frac{E_{a}}{k_{B}}\,\Delta \left( \frac{1}{T}\right) }\, e^{G\varepsilon } - 1 $$

Fig. 6 (Figure reproduced from a–f [49], Copyright 2017, Wiley-VCH; g, h [76], Copyright 2017, Wiley-VCH) TEM images of a EDT- and b TBAB-treated Ag NC thin films (scale bar: 100 nm). c Arrhenius plot of resistance and temperature of (top) EDT- and (bottom) MPA-treated Ag NC thin films. d Change in resistance of (top) TBAB- and (bottom) NH4Cl-treated Ag NC thin films with respect to temperature. e Change in resistance of (top) EDT-, (middle) MPA-, and (bottom) MPA + EDT-treated Ag NC thin films during temperature cycle tests. f Change in resistance of (red) TBAB- and (blue) NH4Cl-treated Ag NC thin films during temperature cycle tests. Temperature-dependent electrical behavior of Au NC thin films with respect to the g type of surface ligands and h NC size

In contrast, when the as-synthesized Ag NC thin films were treated with short inorganic ligands such as NH4Cl and TBAB, a significant decrease in the interparticle distance and active sintering of adjacent NCs were observed (Fig. 6b). The sintered Ag NCs allow charge carriers to move with a metallic or band transport behavior, which is expressed by the following equation: $$ \frac{\Delta R}{R_{0}} = \alpha \Delta T + G\varepsilon $$ where α is the temperature coefficient of resistance (TCR). The MPA- or EDT-treated Ag NC thin films exhibited a negative resistance change with increasing temperature (negative TCR), whereas a positive resistance change was observed for the NH4Cl- or TBAB-treated Ag NC thin films, with positive TCR values of 1.03 × 10^−3 K^−1 and 1.34 × 10^−3 K^−1, respectively (Fig. 6c–f). Segev-Bar et al. investigated the temperature-dependent electrical properties of Au NC thin films with respect to their size and organic surface ligands. As observed, the temperature sensitivity increased as the NC size and the length of the surface ligands increased (Fig. 6g, h) [76]. The decoupling of strain and temperature is critical for wearable temperature sensors, as a measured change in resistance may be due to a change in strain and/or in temperature [67, 77]. For an accurate measurement of the real body temperature, the effect of the strain generated during natural body movement on the change in resistance must be eliminated, which requires a fundamental understanding of the charge transport in the active materials with respect to changes in strain and temperature, in addition to a suitable device structure. Unfortunately, as predicted by Eqs. (3) and (4), the resistance of the NC thin films treated with organic and inorganic ligands is influenced by the strain, in addition to the temperature.
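To make the simultaneous use of Eqs. (3) and (4) concrete, the following minimal Python sketch inverts the pair of resistance readings of two co-integrated films, one hopping-type and one metallic-type, for the temperature and strain. All material parameters here (activation energy, gauge factors, TCR) are round-number assumptions for illustration only, not the values reported in [49] or [76], and the sign convention of Eq. (3) is taken so that the resistance of the hopping-type film falls as the temperature rises (negative TCR), as described above.

```python
import numpy as np
from scipy.optimize import fsolve

T0 = 300.0                   # reference temperature (K)
kB = 8.617e-5                # Boltzmann constant (eV/K)
Ea, G1 = 0.05, 30.0          # assumed: organic-capped film, Eq. (3)
alpha_tcr, G2 = 1.2e-3, 1.0  # assumed: inorganic-treated film, Eq. (4)

def responses(T, eps):
    """Relative resistance changes of the two films under (T, eps)."""
    r1 = np.exp((Ea / kB) * (1.0 / T - 1.0 / T0)) * np.exp(G1 * eps) - 1.0  # Eq. (3)
    r2 = alpha_tcr * (T - T0) + G2 * eps                                    # Eq. (4)
    return r1, r2

# Synthetic "measurement": true state T = 303.5 K, eps = 0.16 %
r1_meas, r2_meas = responses(303.5, 0.0016)

def residual(x):             # x = (T, eps)
    r1, r2 = responses(x[0], x[1])
    return [r1 - r1_meas, r2 - r2_meas]

T_est, eps_est = fsolve(residual, x0=[T0, 0.0])
print(f"T = {T_est:.2f} K, strain = {100 * eps_est:.3f} %")   # 303.50 K, 0.160 %
```

Because the two films see the same temperature and strain but respond with different coefficients, the two readings determine the two unknowns, which is exactly the strategy used in the integrated device discussed next.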
This coupling limits the accuracy of the body temperature measurement when NC-based sensors are attached to human skin, owing to the strain generated during body movement. Joh et al. solved this problem by integrating MPA + EDT- and TBAB-treated Ag NC thin films into a single device using a solution-based process (Fig. 7a) [49]. Given that the MPA + EDT- and TBAB-treated Ag NC thin films have negative and positive TCRs, respectively, as well as different gauge factors, the temperature and strain can be measured simultaneously by solving Eqs. (3) and (4). For verification, the relative changes in resistance of the strain–temperature sensor mounted on a human finger were recorded and compared with the theoretical values (Fig. 7b, c). The real temperature of the finger measured using an infrared (IR) sensor ranged between 303.4 and 303.6 K, and the strain calculated from the bending radius of the finger was approximately 0.16%. The temperature and strain measured using the Ag NC-based temperature–strain sensors were 303.15 K and 0.162%, respectively, which confirms the high sensitivity and accuracy of the sensors.

Fig. 7 (Figure reproduced from a–c [49], Copyright 2017, Wiley-VCH) a Schematic of the fabrication process for the strain–temperature sensors. b Images of the sensor on a human finger (top) in a flat state and (bottom) in a bent state. c Simulation and experimental results of the sensors (top) in a flat state and (bottom) in a bent state

Conclusion and perspective

Wearable electronics have attracted significant attention, as they can be utilized in remote health care systems, human–machine interfaces, and soft robotics, among other applications. NCs can overcome the limitations of conventional wearable devices owing to their solution processability and tunable properties. Based on these advantages, significant research effort has been directed toward improving the performance of NC-based wearable sensors (strain, pressure, and temperature sensors), as discussed above. However, NC-based wearable sensors can be further improved. First, conformal contact with human skin is a critical requirement for wearable electronics, for the efficient and accurate detection of human signals [78, 79]. It is therefore necessary to design NC-based wearable sensors using soft elastomers that have stiffnesses similar to that of human skin. Moreover, the combined effects of strain, pressure, and temperature on the wearable sensors should be considered, given that all of these external perturbations can induce changes in the resistance of wearable sensors [49, 67, 76]. For example, changes in the applied pressure and temperature can modify the resistance of strain sensors, thus limiting the accuracy of the real strain measurement. Therefore, novel methods to decouple unwanted stimuli should be developed for the realization of NC-based wearable sensors with high accuracy. Furthermore, a power supply should be considered to fully realize NC-based skin-mountable wearable sensors, given that conventional heavy and bulky batteries cannot be used in such a system [80, 81]. Hence, self-powered NC-based wearable sensors should be developed to realize the next generation of wearable technology. Finally, while considerable achievements in printing and patterning methods, such as transfer printing, have been demonstrated, there is still room for improvement in the manufacturing of NC-based wearable sensors [82].
For example, multiple steps of mask alignment, light exposure, and development using photoresists are generally required in conventional patterning methods. To reduce the fabrication steps and costs, advanced patterning techniques such as direct optical lithography using light-responsive ligands without photoresists should be developed for the realization of practical and cost-efficient NC-based wearable devices [83].

References

W. Gao, S. Emaminejad, H.Y.Y. Nyein, S. Challa, K. Chen, A. Peck, H.M. Fahad, H. Ota, H. Shiraki, D. Kiriya, D.H. Lien, G.A. Brooks, R.W. Davis, A. Javey, Fully integrated wearable sensor arrays for multiplexed in situ perspiration analysis. Nature 529, 509–514 (2016). https://doi.org/10.1038/nature16521 D. Kim, D. Kim, H. Lee, Y.R. Jeong, S.-J. Lee, G. Yang, H. Kim, G. Lee, S. Jeon, G. Zi, J. Kim, J.S. Ha, Body-attachable and stretchable multisensors integrated with wirelessly rechargeable energy storage devices. Adv. Mater. 28, 748–756 (2016). https://doi.org/10.1002/adma.201504335 J. Park, J. Kim, K. Kim, S.-Y. Kim, W.H. Cheong, K. Park, J.H. Song, G. Namgoong, J.J. Kim, J. Heo, F. Bien, J.-U. Park, Wearable, wireless gas sensors using highly stretchable and transparent structures of nanowires and graphene. Nanoscale 8, 10591–10597 (2016). https://doi.org/10.1039/C6NR01468B M.S. Kang, H. Joh, H. Kim, H.-W. Yun, D. Kim, H.K. Woo, W.S. Lee, S.-H. Hong, S.J. Oh, Synergetic effects of ligand exchange and reduction process enhancing both electrical and optical properties of Ag nanocrystals for multifunctional transparent electrodes. Nanoscale 10, 18415–18422 (2018). https://doi.org/10.1039/C8NR05212C M.F. El-Kady, R.B. Kaner, Scalable fabrication of high-power graphene micro-supercapacitors for flexible and on-chip energy storage. Nat. Commun. 4, 1475 (2013). https://doi.org/10.1038/ncomms2446 M. Amjadi, A. Pichitpajongkit, S. Lee, S. Ryu, I. Park, Highly stretchable and sensitive strain sensor based on silver nanowire-elastomer nanocomposite. ACS Nano 8, 5154–5163 (2014). https://doi.org/10.1021/nn501204t D.-H. Kim, R. Ghaffari, N. Lu, J.A. Rogers, Flexible and stretchable electronics for biointegrated devices. Annu. Rev. Biomed. Eng. 14, 113–128 (2012). https://doi.org/10.1146/annurev-bioeng-071811-150018 M. Amjadi, Y.J. Yoon, I. Park, Ultra-stretchable and skin-mountable strain sensors using carbon nanotubes-Ecoflex nanocomposites. Nanotechnology 26, 375501 (2015). https://doi.org/10.1088/0957-4484/26/37/375501 N. Lu, C. Lu, S. Yang, J. Rogers, Highly sensitive skin-mountable strain gauges based entirely on elastomers. Adv. Funct. Mater. 22, 4044–4050 (2012). https://doi.org/10.1002/adfm.201200498 C.-J. Lee, K.H. Park, C.J. Han, M.S. Oh, B. You, Y.-S. Kim, J.-W. Kim, Crack-induced Ag nanowire networks for transparent, stretchable, and highly sensitive strain sensors. Sci. Rep. 7, 7959 (2017). https://doi.org/10.1038/s41598-017-08484-y S.C.B. Mannsfeld, B.C.K. Tee, R.M. Stoltenberg, C.V.H.-H. Chen, S. Barman, B.V.O. Muir, A.N. Sokolov, C. Reese, Z. Bao, Highly sensitive flexible pressure sensors with microstructured rubber dielectric layers. Nat. Mater. 9, 859–864 (2010). https://doi.org/10.1038/nmat2834 S. Xu, Y. Zhang, L. Jia, K.E. Mathewson, K.-I. Jang, J. Kim, H. Fu, X. Huang, P. Chava, R. Wang, S. Bhole, L. Wang, Y.J. Na, Y. Guan, M. Flavin, Z. Han, Y. Huang, J.A. Rogers, Soft microfluidic assemblies of sensors, circuits, and radios for the skin. Science 344, 70–74 (2014). https://doi.org/10.1126/science.1250169 W. Honda, S. Harada, T. Arie, S. Akita, K. Takei.
Printed wearable temperature sensor for health monitoring. In: SENSORS, 2014 IEEE, 2227–2229 (2014). https://doi.org/10.1109/icsens.2014.6985483 T. Someya, Y. Kato, T. Sekitani, S. Iba, Y. Noguchi, Y. Murase, H. Kawaguchi, T. Sakurai, Conformable, flexible, large-area networks of pressure and thermal sensors with organic transistor active matrixes. Proc. Natl. Acad. Sci. 102, 12321–12325 (2005). https://doi.org/10.1073/pnas.0502392102 T.Q. Trung, S. Ramasundaram, B.-U. Hwang, N.-E. Lee, An all-elastomeric transparent and stretchable temperature sensor for body-attachable wearable electronics. Adv. Mater. 28, 502–509 (2016). https://doi.org/10.1002/adma.201504441 M. Jian, K. Xia, Q. Wang, Z. Yin, H. Wang, C. Wang, H. Xie, M. Zhang, Y. Zhang, Flexible and highly sensitive pressure sensors based on bionic hierarchical structures. Adv. Funct. Mater. 27, 1606066 (2017). https://doi.org/10.1002/adfm.201606066 S. Park, H. Kim, M. Vosgueritchian, S. Cheon, H. Kim, J.H. Koo, T.R. Kim, S. Lee, G. Schwartz, H. Chang, Z. Bao, Stretchable energy-harvesting tactile electronic skin capable of differentiating multiple mechanical stimuli modes. Adv. Mater. 26, 7324–7332 (2014). https://doi.org/10.1002/adma.201402574 C. Wang, D. Hwang, Z. Yu, K. Takei, J. Park, T. Chen, B. Ma, A. Javey, User-interactive electronic skin for instantaneous pressure visualization. Nat. Mater. 12, 899–904 (2013). https://doi.org/10.1038/nmat3711 J. Yang, D. Wei, L. Tang, X. Song, W. Luo, J. Chu, T. Gao, H. Shi, C. Du, Wearable temperature sensor based on graphene nanowalls. RSC Adv. 5, 25609–25615 (2015). https://doi.org/10.1039/C5RA00871A L. Lin, S. Liu, Q. Zhang, X. Li, M. Ji, H. Deng, Q. Fu, Towards tunable sensitivity of electrical property to strain for conductive polymer composites based on thermoplastic elastomer. ACS Appl. Mater. Interfaces 5, 5815–5824 (2013). https://doi.org/10.1021/am401402x L. Pan, A. Chortos, G. Yu, Y. Wang, S. Isaacson, R. Allen, Y. Shi, R. Dauskardt, Z. Bao, An ultra-sensitive resistive pressure sensor based on hollow-sphere microstructure induced elasticity in conducting polymer film. Nat. Commun. 5, 3002 (2014). https://doi.org/10.1038/ncomms4002 D.-H. Kim, J.-H. Ahn, M.C. Won, H.-S. Kim, T.-H. Kim, J. Song, Y.Y. Huang, Z. Liu, C. Lu, J.A. Rogers, Stretchable and foldable silicon integrated circuits. Science 320, 507–511 (2008). https://doi.org/10.1126/science.1154367 Z. Ning, O. Voznyy, J. Pan, S. Hoogland, V. Adinolfi, J. Xu, M. Li, A.R. Kirmani, J.-P. Sun, J. Minor, K.W. Kemp, H. Dong, L. Rollny, A. Labelle, G. Carey, B. Sutherland, I. Hill, A. Amassian, H. Liu, J. Tang, O.M. Bakr, E.H. Sargent, Air-stable n-type colloidal quantum dot solids. Nat. Mater. 13, 822–828 (2014). https://doi.org/10.1038/nmat4007 J.-S. Lee, M.V. Kovalenko, J. Huang, D.S. Chung, D.V. Talapin, Band-like transport, high electron mobility and high photoconductivity in all-inorganic nanocrystal arrays. Nat. Nanotechnol. 6, 348–352 (2011). https://doi.org/10.1038/nnano.2011.46 Y. Liu, M. Gibbs, J. Puthussery, S. Gaik, R. Ihly, H.W. Hillhouse, M. Law, Dependence of carrier mobility on nanocrystal size and ligand length in pbse nanocrystal solids. Nano Lett. 10, 1960–1969 (2010). https://doi.org/10.1021/nl101284k D.S. Chung, J.-S. Lee, J. Huang, A. Nag, S. Ithurria, D.V. Talapin, Low voltage, hysteresis free, and high mobility transistors from All-inorganic colloidal nanocrystals. Nano Lett. 12, 1813–1820 (2012). https://doi.org/10.1021/nl203949n H. Shen, H. Wang, Z. Tang, J.Z. Niu, S. Lou, Z. Du, L.S. 
Li, High quality synthesis of monodisperse zinc-blende CdSe and CdSe/ZnS nanocrystals with a phosphine-free method. CrystEngComm 11, 1733–1738 (2009). https://doi.org/10.1039/B909063K S. Sapra, A.L. Rogach, J. Feldmann, Phosphine-free synthesis of monodisperse CdSe nanocrystals in olive oil. J. Mater. Chem. 16, 3391–3395 (2006). https://doi.org/10.1039/B607022A Z. Deng, L. Cao, F. Tang, B. Zou, A new route to zinc-blende CdSe nanocrystals: mechanism and synthesis. J. Phys. Chem. B 109, 16671–16675 (2005). https://doi.org/10.1021/jp052484x E.A. Gaulding, B.T. Diroll, E.D. Goodwin, Z.J. Vrtis, C.R. Kagan, C.B. Murray, Deposition of wafer-scale single-component and binary nanocrystal superlattice thin films via dip-coating. Adv. Mater. 27, 2846–2851 (2015). https://doi.org/10.1002/adma.201405575 M.J. Greaney, E. Couderc, J. Zhao, B.A. Nail, M. Mecklenburg, W. Thornbury, F.E. Osterloh, S.E. Bradforth, R.L. Brutchey, Controlling the trap state landscape of colloidal CdSe nanocrystals with cadmium halide ligands. Chem. Mater. 27, 744–756 (2015). https://doi.org/10.1021/cm503529j S.J. Oh, N.E. Berry, J.-H. Choi, E.A. Gaulding, H. Lin, T. Paik, B.T. Diroll, S. Muramoto, C.B. Murray, C.R. Kagan, Designing high-performance PbS and PbSe nanocrystal electronic devices through stepwise, post-synthesis, colloidal atomic layer deposition. Nano Lett. 14, 1559–1566 (2014). https://doi.org/10.1021/nl404818z D.M. Kroupa, G.F. Pach, M. Vörös, F. Giberti, B.D. Chernomordik, R.W. Crisp, A.J. Nozik, J.C. Johnson, R. Singh, V.I. Klimov, G. Galli, M.C. Beard, Enhanced multiple exciton generation in PbS|CdS janus-like heterostructured nanocrystals. ACS Nano 12, 10084–10094 (2018). https://doi.org/10.1021/acsnano.8b04850 K. Lu, Y. Wang, Z. Liu, L. Han, G. Shi, H. Fang, J. Chen, X. Ye, S. Chen, F. Yang, A.G. Shulga, T. Wu, M. Gu, S. Zhou, J. Fan, M.A. Loi, W. Ma, High-efficiency PbS quantum-dot solar cells with greatly simplified fabrication processing via "solvent-curing". Adv. Mater. 30, 1707572 (2018). https://doi.org/10.1002/adma.201707572 J.-H. Choi, S.J. Oh, Y. Lai, D.K. Kim, T. Zhao, A.T. Fafarman, B.T. Diroll, C.B. Murray, C.R. Kagan, In situ repair of high-performance, flexible nanocrystal electronics for large-area fabrication and operation in air. ACS Nano 7, 8275–8283 (2013). https://doi.org/10.1021/nn403752d Y. Wang, K. Lu, L. Han, Z. Liu, G. Shi, H. Fang, S. Chen, T. Wu, F. Yang, M. Gu, S. Zhou, X. Ling, X. Tang, J. Zheng, M.A. Loi, W. Ma, In situ passivation for efficient PbS quantum dot solar cells by precursor engineering. Adv. Mater. 30, 1704871 (2018). https://doi.org/10.1002/adma.201704871 W.S. Lee, D. Kim, B. Park, H. Joh, H.K. Woo, Y.-K. Hong, T. Kim, D.-H. Ha, S.J. Oh, Multiaxial and transparent strain sensors based on synergetically reinforced and orthogonally cracked hetero-nanocrystal solids. Adv. Funct. Mater. 29, 1806714 (2019). https://doi.org/10.1002/adfm.201806714 M. Segev-Bar, H. Haick, Flexible sensors based on nanoparticles. ACS Nano 7, 8366–8378 (2013). https://doi.org/10.1021/nn402728g N. Olichwer, E.W. Leib, A.H. Halfar, A. Petrov, T. Vossmeyer, Cross-linked gold nanoparticles on polyethylene: resistive responses to tensile strain and vapors. ACS Appl. Mater. Interfaces 4, 6151–6161 (2012). https://doi.org/10.1021/am301780b H. Moreira, J. Grisolia, N.M. Sangeetha, N. Decorde, C. Farcau, B. Viallet, K. Chen, G. Viau, L. Ressier, Electron transport in gold colloidal nanoparticle-based strain gauges. Nanotechnology 24, 095701 (2013). https://doi.org/10.1088/0957-4484/24/9/095701 M. 
Segev-Bar, A. Landman, M. Nir-Shapira, G. Shuster, H. Haick, Tunable touch sensor and combined sensing platform: toward nanoparticle-based electronic skin. ACS Appl. Mater. Interfaces 5, 5531–5541 (2013). https://doi.org/10.1021/am400757q D. Ryu, K.J. Loh, R. Ireland, M. Karimzada, F. Yaghmaie, A.M. Gusman, In situ reduction of gold nanoparticles in PDMS matrices and applications for large strain sensing. Smart Struct. Syst. 8, 471–486 (2011). https://doi.org/10.12989/sss.2011.8.5.471 E. Skotadis, D. Mousadakos, K. Katsabrokou, S. Stathopoulos, D. Tsoukalas, Flexible polyimide chemical sensors using platinum nanoparticles. Sensors Actuators B Chem. 189, 106–112 (2013). https://doi.org/10.1016/j.snb.2013.01.046 C.M. Guédon, J. Zonneveld, H. Valkenier, J.C. Hummelen, S.J. Van Der Molen, Controlling the interparticle distance in a 2D molecule-nanoparticle network. Nanotechnology 22, 125205 (2011). https://doi.org/10.1088/0957-4484/22/12/125205 J. Herrmann, K.H. Müller, T. Reda, G.R. Baxter, B. Raguse, G.J.J.B. De Groot, R. Chai, M. Roberts, L. Wieczorek, Nanoparticle films as sensitive strain gauges. Appl. Phys. Lett. 91, 183105 (2007). https://doi.org/10.1063/1.2805026 A.N. Shipway, E. Katz, I. Willner, Nanoparticle arrays on surfaces for electronic, optical, and sensor applications. ChemPhysChem 1, 18–52 (2000). https://doi.org/10.1002/1439-7641(20000804)1:1%3c18:AID-CPHC18%3e3.0.CO;2-L B. Radha, A.A. Sagade, G.U. Kulkarni, Flexible and semitransparent strain sensors based on micromolded Pd nanoparticle–carbon μ-stripes. ACS Appl. Mater. Interfaces 3, 2173–2178 (2011). https://doi.org/10.1021/am2002873 M. Seong, S.-W. Lee, H. Joh, W.S. Lee, T. Paik, S.J. Oh, Designing highly conductive and stable silver nanocrystal thin films with tunable work functions through solution-based surface engineering with gold coating process. J. Alloys Compd. 698, 400–409 (2017). https://doi.org/10.1016/j.jallcom.2016.12.157 H. Joh, S.-W. Lee, M. Seong, W.S. Lee, S.J. Oh, Engineering the charge transport of Ag nanocrystals for highly accurate, wearable temperature sensors through all-solution processes. Small 13, 1700247 (2017). https://doi.org/10.1002/smll.201700247 S.-W. Lee, H. Joh, M. Seong, W.S. Lee, J.-H. Choi, S.J. Oh, Engineering surface ligands of nanocrystals to design high performance strain sensor arrays through solution processes. J. Mater. Chem. C 5, 2442–2450 (2017). https://doi.org/10.1039/C7TC00230K H. Kim, S.-W. Lee, H. Joh, M. Seong, W.S. Lee, M.S. Kang, J.B. Pyo, S.J. Oh, Chemically designed metallic/insulating hybrid nanostructures with silver nanocrystals for highly sensitive wearable pressure sensors. ACS Appl. Mater. Interfaces 10, 1389–1398 (2018). https://doi.org/10.1021/acsami.7b15566 K.I. Arshak, F. Ansari, D. Collins, R. Perrem, Characterisation of a thin-film/thick-film strain gauge sensor on stainless steel. Mater. Sci. Eng. B 26, 13–17 (1994). https://doi.org/10.1016/0921-5107(94)90180-5 J.L. Tanner, D. Mousadakos, K. Giannakopoulos, E. Skotadis, D. Tsoukalas, High strain sensitivity controlled by the surface density of platinum nanoparticles. Nanotechnology 23, 285501 (2012). https://doi.org/10.1088/0957-4484/23/28/285501 C. Farcau, N.M. Sangeetha, H. Moreira, B. Viallet, J. Grisolia, D. Ciuculescu-Pradines, L. Ressier, High-sensitivity strain gauge based on a single wire of gold nanoparticles fabricated by stop-and-go convective self-assembly. ACS Nano 5, 7137–7143 (2011). https://doi.org/10.1021/nn201833y C. Farcau, H. Moreira, B. Viallet, J. Grisolia, D. 
Ciuculescu-Pradines, C. Amiens, L. Ressier, Monolayered wires of gold colloidal nanoparticles for high-sensitivity strain sensing. J. Phys. Chem. C 115, 14494–14499 (2011). https://doi.org/10.1021/jp202166s N.M. Sangeetha, N. Decorde, B. Viallet, G. Viau, L. Ressier, Nanoparticle-based strain gauges fabricated by convective self assembly: strain sensitivity and hysteresis with respect to nanoparticle sizes. J. Phys. Chem. C 117, 1935–1940 (2013). https://doi.org/10.1021/jp310077r J. Yin, P. Hu, J. Luo, L. Wang, M.F. Cohen, C.-J. Zhong, Molecularly mediated thin film assembly of nanoparticles on flexible devices: electrical conductivity versus device strains in different gas/vapor environment. ACS Nano 5, 6516–6526 (2011). https://doi.org/10.1021/nn201858c B. Park, J. Kim, D. Kang, C. Jeong, K.S. Kim, J.U. Kim, P.J. Yoo, T.-I. Kim, Dramatically enhanced mechanosensitivity and signal-to-noise ratio of nanoscale crack-based sensors: effect of crack depth. Adv. Mater. 28, 8130–8137 (2016). https://doi.org/10.1002/adma.201602425 W.S. Lee, S.-W. Lee, H. Joh, M. Seong, H. Kim, M.S. Kang, K.-H. Cho, Y.-M. Sung, S.J. Oh, Designing metallic and insulating nanocrystal heterostructures to fabricate highly sensitive and solution processed strain gauges for wearable sensors. Small 13, 1702534 (2017). https://doi.org/10.1002/smll.201702534 J. Lee, S. Kim, J. Lee, D. Yang, B.C. Park, S. Ryu, I. Park, A stretchable strain sensor based on a metal nanoparticle thin film for human motion detection. Nanoscale 6, 11932–11939 (2014). https://doi.org/10.1039/C4NR03295K P. Zhang, H. Bousack, Y. Dai, A. Offenhäusser, D. Mayer, Shell-binary nanoparticle materials with variable electrical and electro-mechanical properties. Nanoscale 10, 992–1003 (2018). https://doi.org/10.1039/C7NR07912E B.J. Last, D.J. Thouless, Percolation theory and electrical conductivity. Phys. Rev. Lett. 27, 1719 (1971). https://doi.org/10.1103/PhysRevLett.27.1719 T. Das Gupta, T. Gacoin, A.C.H. Rowe, Piezoresistive properties of Ag/silica nano-composite thin films close to the percolation threshold. Adv. Funct. Mater. 24, 4522–4527 (2014). https://doi.org/10.1002/adfm.201303775 S.-W. Lee, H. Joh, M. Seong, W.S. Lee, J.-H. Choi, S.J. Oh, Transition states of nanocrystal thin films during ligand-exchange processes for potential applications in wearable sensors. ACS Appl. Mater. Interfaces 10, 25502–25510 (2018). https://doi.org/10.1021/acsami.8b06754 M. Knite, V. Teteris, A. Kiploka, J. Kaupuzs, Polyisoprene-carbon black nanocomposites as tensile strain and pressure sensor materials. Sens. Actuat. A Phys. 110, 142–149 (2004). https://doi.org/10.1016/j.sna.2003.08.006 V. Maheshwari, R.F. Saraf, High-resolution thin film device to sense texture by touch. Science 312, 1501–1504 (2006). https://doi.org/10.1126/science.1126216 N.T. Tien, S. Jeon, D.-I. Kim, T.Q. Trung, M. Jang, B.-U. Hwang, K.-E. Byun, J. Bae, E. Lee, J.B.-H. Tok, Z. Bao, N.-E. Lee, J.-J. Park, A flexible bimodal sensor array for simultaneous sensing of pressure and temperature. Adv. Mater. 26, 796–804 (2014). https://doi.org/10.1002/adma.201302869 Y. Zang, F. Zhang, C.-A. Di, D. Zhu, Advances of flexible pressure sensors toward artificial intelligence and health care applications. Mater. Horiz. 2, 140–156 (2015). https://doi.org/10.1039/C4MH00147H B.S. Kang, J. Kim, S. Jang, F. Ren, J.W. Johnson, R.J. Therrien, P. Rajagopal, J.C. Roberts, E.L. Piner, K.J. Linthicum, S.N.G. Chu, K. Baik, B.P. Gila, C.R. Abernathy, S.J. 
Pearton, Capacitance pressure sensor based on GaN high-electron-mobility transistor-on-Si membrane. Appl. Phys. Lett. 86, 253502 (2005). https://doi.org/10.1063/1.1952568 S.E. Zhu, M. Krishna Ghatkesar, C. Zhang, G.C.A.M. Janssen, Graphene based piezoresistive pressure sensor. Appl. Phys. Lett. 102, 161904 (2013). https://doi.org/10.1063/1.4802799 G. Schwartz, B.C.-K. Tee, J. Mei, A.L. Appleton, D.H. Kim, H. Wang, Z. Bao, Flexible polymer transistors with high pressure sensitivity for application in electronic skin and health monitoring. Nat. Commun. 4, 1859 (2013). https://doi.org/10.1038/ncomms2832 S. Gong, W. Schwalb, Y. Wang, Y. Chen, Y. Tang, J. Si, B. Shirinzadeh, W. Cheng, A wearable and highly sensitive pressure sensor with ultrathin gold nanowires. Nat. Commun. 5, 3132 (2014). https://doi.org/10.1038/ncomms4132 C.-L. Choong, M.-B. Shim, B.-S. Lee, S. Jeon, D.S. Ko, T.-H. Kang, J. Bae, S.H. Lee, K.-E. Byun, J. Im, Y.J. Jeong, C.E. Park, J.-J. Park, U.-I. Chung, Highly stretchable resistive pressure sensors using a conductive elastomeric composite on a micropyramid array. Adv. Mater. 26, 3451–3458 (2014). https://doi.org/10.1002/adma.201305182 Y. Zhang, R.C. Webb, H. Luo, Y. Xue, J. Kurniawan, N.H. Cho, S. Krishnan, Y. Li, Y. Huang, J.A. Rogers, Theoretical and experimental studies of epidermal heat flux sensors for measurements of core body temperature. Adv. Healthc. Mater. 5, 119–127 (2016). https://doi.org/10.1002/adhm.201500110 D.-H. Kim, N. Lu, R. Ma, Y.-S. Kim, R.-H. Kim, S. Wang, J. Wu, S.M. Won, H. Tao, A. Islam, K.J. Yu, T.-I. Kim, R. Chowdhury, M. Ying, L. Xu, M. Li, H.J. Chung, H. Keum, M. McCormick, P. Liu, Y.W. Zhang, F.G. Omenetto, Y. Huang, T. Coleman, J.A. Rogers, Epidermal electronics. Science 333, 838–843 (2011). https://doi.org/10.1126/science.1206157 M. Segev-Bar, N. Bachar, Y. Wolf, B. Ukrainsky, L. Sarraf, H. Haick, Multi-parametric sensing platforms based on nanoparticles. Adv. Mater. Technol. 2, 1600206 (2017). https://doi.org/10.1002/admt.201600206 S. Harada, W. Honda, T. Arie, S. Akita, K. Takei, Fully printed, highly sensitive multifunctional artificial electronic whisker arrays integrated with strain and temperature sensors. ACS Nano 8, 3921–3927 (2014). https://doi.org/10.1021/nn500845a J. Heikenfeld, A. Jajack, J. Rogers, P. Gutruf, L. Tian, T. Pan, R. Li, M. Khine, J. Kim, J. Wang, J. Kim, Wearable sensors: modalities, challenges, and prospects. Lab. Chip 18, 217–248 (2018). https://doi.org/10.1039/C7LC00914C S. Yao, A. Myers, A. Malhotra, F. Lin, A. Bozkurt, J.F. Muth, Y. Zhu, A wearable hydration sensor with conformal nanowire electrodes. Adv. Healthc. Mater. 6, 1601159 (2017). https://doi.org/10.1002/adhm.201601159 M. Ha, J. Park, Y. Lee, H. Ko, Triboelectric generators and sensors for self-powered wearable electronics. ACS Nano 9, 3421–3427 (2015). https://doi.org/10.1021/acsnano.5b01478 Z. Lou, L. Li, L. Wang, G. Shen, Recent progress of self-powered sensing systems for wearable electronics. Small 13, 1701791 (2017). https://doi.org/10.1002/smll.201701791 M.K. Choi, J. Yang, K. Kang, D.C. Kim, C. Choi, C. Park, S.J. Kim, S.I. Chae, T.-H. Kim, J.H. Kim, T. Hyeon, D.-H. Kim, Wearable red-green-blue quantum dot light-emitting diode array using high-resolution intaglio transfer printing. Nat. Commun. 6, 7149 (2015). https://doi.org/10.1038/ncomms8149 Y. Wang, I. Fedin, H. Zhang, D.V. Talapin, Direct optical lithography of functional inorganic nanomaterials. Science 357, 385 (2017).
https://doi.org/10.1126/science.aan2958

WSL and SJO wrote the manuscript. WSL, SJ, and SJO designed the figures. All authors read and approved the final manuscript. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. This research was supported by the Basic Science Research Program through the National Research Foundation (NRF) funded by the Ministry of Science, ICT and Future Planning (2016R1C1B2006534), and by the Creative Materials Discovery Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2018M3D1A1059001). This research was also supported by Korea Electric Power Corporation (R18XA06-02).

Department of Materials Science and Engineering, Korea University, Seoul, 02841, Republic of Korea: Woo Seok Lee, Sanghyun Jeon & Soong Ju Oh. Correspondence to Soong Ju Oh.

Lee, W.S., Jeon, S. & Oh, S.J. Wearable sensors based on colloidal nanocrystals. Nano Convergence 6, 10 (2019). https://doi.org/10.1186/s40580-019-0180-7 Received: 08 January 2019
Enumeration of the twin-prime pairs from 1e16 to 2e16

Thomas R. Nicely, http://www.trnicely.net
Freeware copyright (c) 2010 Thomas R. Nicely. Released into the public domain by the author, who disclaims any legal liability arising from its use. Last updated 1000 GMT 18 January 2010.

This is an extended table of values of pi_2(x), for 1e16 <= x <= 2e16, the counts of twin-prime pairs (q, q+2) such that q <= x. Also provided are the values of the related functions delta_2(x), S_2(x), and F_2(x); see "Enumeration of the twin-prime pairs to 1e16" for an explanation of these symbols and additional notes. Complete counts and reciprocal sums of the prime constellations from Nicely's computations (1993-2009), including the twin-prime pairs, are also available. These data files are very large (over 60 MB each, even for the zipped versions), including more than two million data points from 0 to 2e16 at intervals of 1e10 or better.

x             pi_2(x)          delta_2(x)      S_2(x)                   F_2(x)
1.000000e+16  10304195697298   -3142802.2329   1.83048442465833848374   1.902160583104720

Related pages:
- Enumeration of the twin-prime pairs to 1e16 (table)
- Complete counts and reciprocal sums of the prime constellations from Nicely's computations (tables)
- Latest counts of the prime constellations and Brun's constants
- A new error analysis for Brun's constant (paper)
- Enumeration to $1.6 \times 10^{15}$ of the twin primes and Brun's constant (unpublished paper)
- Enumeration to $10^{14}$ of the twin primes and Brun's constant (paper)
- Tomás Oliveira e Silva's tabulations of pi_2(x)
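The counting convention is easy to reproduce at small scale. The following Python sketch (an illustration written for this page's definitions, not Nicely's production code, which used far more efficient segmented sieves to reach 2e16) counts the twin-prime pairs (q, q+2) with q <= x and accumulates the sum of reciprocals 1/q + 1/(q+2) over those pairs, the quantity whose limit defines Brun's constant; whether this plain partial sum agrees with the tabulated S_2(x) in every normalization detail is an assumption, see the explanation page linked above.

```python
def twin_prime_stats(x):
    """pi_2(x): number of prime pairs (q, q+2) with q <= x, and the
    partial sum of 1/q + 1/(q+2) over those pairs (small x only)."""
    limit = x + 2                          # need primality of q + 2 for q <= x
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            is_prime[n * n::n] = bytearray(len(is_prime[n * n::n]))
    pi2, s2 = 0, 0.0
    for q in range(3, x + 1, 2):           # every twin pair has odd q
        if is_prime[q] and is_prime[q + 2]:
            pi2 += 1
            s2 += 1.0 / q + 1.0 / (q + 2)
    return pi2, s2

print(twin_prime_stats(100))   # (8, ~1.33): (3,5), (5,7), ..., (71,73)
```

Note that 5 is counted in both (3, 5) and (5, 7), which is the usual convention in Brun-constant computations.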
Penalty algorithm adapted for the spectral element discretization of the Darcy equations

Mohamed Abdelwahed & Nejmeddine Chorfi

Any spectral element discretization of the Darcy problem can be solved efficiently by applying the penalty method. This method leads to a system of equations with uncoupled unknowns. We prove a posteriori error estimates for a spectral element discretization of the Darcy problem. The proposed algorithm permits the optimization of the penalty parameter as a function of the error indicators.

The Darcy problem introduced in [1] is used to model the flow (of water, oil, …) of an incompressible and isothermal fluid in a homogeneous porous medium. The unknowns are the velocity and the pressure. Any discretization by the Galerkin method leads to a system of equations in which the velocity and the pressure are coupled. Many algorithms have been proposed in the literature to uncouple the velocity and the pressure, such as the Uzawa method [2] and the penalty method [2, 3]. The penalty method has been used extensively in finite element discretizations to solve various problems (Stokes, Darcy, Navier–Stokes, …) [4–8]. However, in spectral element discretizations [9, 10], this method has only been considered for the Stokes problem [11]. In this work, we are interested in applying the penalty method to solve the Darcy problem using a spectral element discretization, chosen for its high accuracy [3, 12]. The advantage of using the penalty method is twofold: first, it permits the decoupling of the two unknowns (velocity and pressure), and second, it guarantees the stabilization of the discrete problem [13]. Moreover, the optimization of the penalty parameter, using error indicators, considerably reduces the computation cost of solving the discrete problem [14]. In this paper, we carry out an a posteriori error analysis of the penalized spectral element discretization of the Darcy problem. We propose an algorithm, based on the developed error indicator, to optimize the value of the penalty parameter. An outline of the paper is as follows: In Sect. 2 we present the penalized continuous problem and some regularity results. Section 3 is devoted to the analysis of the penalized discrete problem. The a posteriori error analysis of the penalized discrete problem and a penalty adaptation algorithm are developed in Sect. 4.

The penalized continuous problem

Let Ω be a connected domain of \(\mathbb{R}^{d}\) (\(d=2, 3\)) with Lipschitz continuous boundary ∂Ω. We consider the following Darcy problem: $$ \begin{aligned} &\mathbf{u}+\mu \operatorname{\mathbf{grad}}p=\mathbf{f}\quad \text{in } \varOmega , \\ &\operatorname{div} \mathbf{u}=0\quad \text{in } \varOmega , \\ &\mathbf{u}.\mathbf{n}=0 \quad \text{on } \partial \varOmega , \end{aligned} $$ where the unknowns are the velocity u and the pressure p; f represents the density of forces, and μ is a positive constant equal to the quotient of the fluid viscosity by the medium permeability (\(\mu ^{-1}\) is called the porosity). In the following we take \(\mu =1\). We denote by \(\mathbf{x}=(x,y)\), respectively \(\mathbf{x}=(x,y,z)\), the generic point in \(\mathbb {R}^{2}\), respectively in \(\mathbb {R}^{3}\). Consider the Sobolev spaces \(H^{s}(\varOmega )\) and \(H_{0}^{s}(\varOmega )\), \(s\ge 0\), with associated norms \(\|\cdot\|_{H^{s}(\varOmega )}\) and \(\|\cdot\|_{H_{0}^{s}(\varOmega )}\).
Let \(L_{0}^{2}(\varOmega )\) be the space of functions in \(L^{2}(\varOmega )\) whose integral over Ω vanishes, and let \(\mathcal{D}(\varOmega )\) be the space of infinitely differentiable functions with compact support in Ω. We consider the domain \(H(\operatorname{div},\varOmega )\) of the divergence operator, $$ H(\operatorname{div},\varOmega )=\bigl\{ \boldsymbol{\varphi }\in L^{2}(\varOmega )^{d}; \operatorname{div}\boldsymbol{\varphi }\in L^{2}(\varOmega ) \bigr\} , $$ associated with the norm $$ \Vert {\boldsymbol{\varphi }} \Vert _{H(\operatorname{div},\varOmega )}= \bigl( \Vert {\boldsymbol{\varphi }} \Vert ^{2}_{L^{2}(\varOmega )^{d}}+ \Vert \operatorname{div}{\boldsymbol{\varphi }} \Vert _{L^{2}(\varOmega )}^{2} \bigr)^{1/2}. $$ The normal trace operator \(\mathbf{v}\rightarrow \mathbf{v}.\mathbf{n}\) is defined from \(H(\operatorname{div},\varOmega )\) into \(H^{-1/2}(\partial \varOmega )\) such that, for a vector field \(\boldsymbol{\varphi } \in H(\operatorname{div},\varOmega )\) and a scalar function \(\psi \in H^{1}(\varOmega )\) [2], $$ \int _{\varOmega } \operatorname{div}\boldsymbol{\varphi }(\mathbf{x}) \psi (\mathbf{x})\,d\mathbf{x}= - \int _{\varOmega }\boldsymbol{\varphi }(\mathbf{x}) . \operatorname{\mathbf{grad}}\psi (\mathbf{x})\,d\mathbf{x}+ \int _{\partial \varOmega }(\boldsymbol{\varphi }.\mathbf{n}) (\tau ) \psi (\tau )\,d\tau . $$ This leads us to introduce its kernel $$ H_{0}(\operatorname{div},\varOmega )=\bigl\{ \boldsymbol{\varphi }\in H(\operatorname{div},\varOmega ); \boldsymbol{\varphi }.\mathbf{n}=0 \text{ on } \partial \varOmega \bigr\} . $$ Problem (1) admits the following variational formulation: For \(\mathbf{f}\in (L^{2}(\varOmega ))^{d}\), find \(\mathbf{u}\in H_{0}(\operatorname{div},\varOmega )\), \(p\in L_{0}^{2}(\varOmega )\) such that, \(\forall \mathbf{v}\in H_{0}(\operatorname{div}, \varOmega )\) and \(\forall q\in L_{0}^{2}(\varOmega )\), $$ \begin{aligned} &\mathbf{a}(\mathbf{u},\mathbf{v})+b(\mathbf{v},p)=(\mathbf{f},\mathbf{v}), \\ &b(\mathbf{u},q)=0, \end{aligned} $$ where \((\cdot ,\cdot)\) is the \(L^{2}(\varOmega )\) scalar product and $$ \mathbf{a}(\mathbf{u},\mathbf{v})= \int _{\varOmega } \mathbf{u}(\mathbf{x}).\mathbf{v}(\mathbf{x})\,d\mathbf{x}\quad \text{and} \quad b(\mathbf{v},p)=- \int _{\varOmega }\operatorname{div} \mathbf{v}(\mathbf{x}) p(\mathbf{x})\,d\mathbf{x}. $$ Let V be the kernel of the bilinear form b, defined by $$\begin{aligned} \mathbf{V} =&\biggl\{ \boldsymbol{\varphi }\in H_{0}(\operatorname{div}, \varOmega ); \forall q \in L_{0}^{2}(\varOmega ), \int _{\varOmega }\operatorname{div}{\boldsymbol{\varphi }}(\mathbf{x}) q(\mathbf{x})\,d\mathbf{x}=0 \biggr\} \\ =&\bigl\{ \boldsymbol{\varphi }\in H_{0}(\operatorname{div},\varOmega ); \operatorname{div}{\boldsymbol{\varphi }}=0 \text{ in } \varOmega \bigr\} . \end{aligned}$$ The norms \(\|\cdot\|_{H(\operatorname{div},\varOmega )}\) and \(\|\cdot\|_{L^{2}(\varOmega )}\) are equivalent on V [15]. This yields the ellipticity of the bilinear form \(\mathbf{a}(\cdot ,\cdot)\) on V: there exists a positive constant \(\lambda >0\) such that $$ \forall {\boldsymbol{\varphi }} \in \mathbf{V}, \quad \mathbf{a}(\boldsymbol{\varphi }, \boldsymbol{\varphi })\ge \lambda \Vert {\boldsymbol{\varphi }} \Vert ^{2}_{H(\operatorname{div}, \varOmega )}.
$$ Moreover, the following inf-sup condition on the bilinear form \(b(\cdot ,\cdot)\) holds: there exists a positive constant \(\beta >0\) such that $$ \forall q \in L_{0}^{2}(\varOmega ), \quad \sup_{\mathbf{w}\in H(\operatorname{div},\varOmega )} {\frac{b(\mathbf{w},q)}{ \Vert \mathbf{w}\Vert _{H(\operatorname{div},\varOmega )}}} \ge \beta \Vert q \Vert _{L^{2}(\varOmega )}. $$ It is obtained by taking \(\mathbf{w}=\operatorname{\mathbf{grad}}{\boldsymbol{\varphi }}\), where φ is the solution of a Laplace equation with datum q and homogeneous Neumann boundary conditions ([2], Chap. 1, Cor. 2.4). Using the saddle-point theorem, we conclude that, for \(\mathbf{f}\in L^{2}(\varOmega )^{d}\), problem (2) has a unique solution \((\mathbf{u},p) \in H_{0}(\operatorname{div},\varOmega )\times L_{0}^{2}(\varOmega )\), verifying the following stability condition: $$ \Vert \mathbf{u}\Vert _{L^{2}(\varOmega )^{d}} + \beta \Vert p \Vert _{L^{2}(\varOmega )} \le 2 \Vert \mathbf{f}\Vert _{L^{2}(\varOmega )^{d}}. $$ Let \(H(\operatorname{\mathbf{curl}},\varOmega )\) be the domain of the curl operator, $$ H(\operatorname{\mathbf{curl}},\varOmega )=\bigl\{ \boldsymbol{\varphi } \in L^{2}(\varOmega )^{d}, \operatorname{\mathbf{curl}}\boldsymbol{\varphi } \in {L^{2}(\varOmega )}^{\frac{d(d-1)}{2}}\bigr\} . $$ We know (see [16]) that \(H_{0}(\operatorname{div},\varOmega )\cap H(\operatorname{\mathbf{curl}},\varOmega )\) is continuously embedded in \(H^{1/2}(\varOmega )^{d}\) in general, and in \(H^{1}(\varOmega )^{d}\) if Ω is convex. Further results are known (see [17, 18]): when Ω is a polygonal domain, a function \(\mathbf{u}\in H_{0}(\operatorname{div},\varOmega )\cap H(\operatorname{\mathbf{curl}},\varOmega )\) can be written as $$ \mathbf{u}=\mathbf{u}_{R} + \operatorname{\mathbf{grad}}S, $$ where \(\mathbf{u}_{R} \in H^{1}(\varOmega )^{d}\) and S is a linear combination of singular functions. We recall that each singularity in the neighborhood of a corner of the polygon with aperture ω has the form $$ r^{\pi /{\omega }} \varphi (\theta ), $$ where r is the distance to the singular corner, θ is the polar angle, and φ belongs to \(\mathcal{C}^{\infty }(]0,2\pi [,\mathbb {R})\). Then, in general, any such function u which has the further property $$ \operatorname{div} \mathbf{u}\in H^{s}(\varOmega ) \quad \text{and}\quad \operatorname{\mathbf{curl}}\mathbf{u}\in H^{s}(\varOmega )^{\frac{d(d-1)}{2}} $$ admits the expansion (4) with \(\mathbf{u}_{R}\in H^{s+1}(\varOmega )^{d}\) for \(0< s<\frac{2 \pi }{\omega }-1\). Let \(\alpha \in \mathopen{]}0,1]\) be the penalty parameter. We consider the following penalized problem: Find \((\mathbf{u}^{\alpha },p^{\alpha }) \in H_{0}(\operatorname{div},\varOmega )\times L_{0}^{2}(\varOmega )\) such that $$ \begin{aligned} & \forall {\boldsymbol{\varphi }} \in H_{0}(\operatorname{div},\varOmega ), \quad \mathbf{a}\bigl(\mathbf{u}^{\alpha }, \boldsymbol{\varphi }\bigr)+b\bigl(\boldsymbol{\varphi },p^{\alpha } \bigr)=(\mathbf{f}, \boldsymbol{\varphi }), \\ &\forall q \in L_{0}^{2}(\varOmega ), \quad b\bigl(\mathbf{u}^{\alpha },q\bigr)= \alpha \int _{\varOmega } p^{\alpha }(\mathbf{x}) q(\mathbf{x}) \,d\mathbf{x}. \end{aligned} $$ By adapting the result proved for the Stokes problem [2], we obtain the following result.
For \(\mathbf{f}\in (L^{2}(\varOmega ))^{d}\), problem (5) has a unique solution \((\mathbf{u}^{\alpha },p^{\alpha })\in H_{0}(\operatorname{div},\varOmega ) \times L_{0}^{2}(\varOmega )\). Moreover, if \((\mathbf{u},p)\) is the solution to problem (2), the following estimate holds: $$ \bigl\Vert \mathbf{u}-\mathbf{u}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}}+ \bigl\Vert p-p^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \le C \alpha \Vert \mathbf{f}\Vert _{L^{2}(\varOmega )^{d}}, $$ where C is a constant independent of α.

The penalized discrete problem

We introduce a partition of the domain Ω into non-overlapping sub-domains, $$ \overline{\varOmega }=\bigcup _{i=1}^{I} \varOmega _{i} \quad \text{and}\quad \varOmega _{i}\cap \varOmega _{j}=\varnothing , \quad 1 \le i < j \le I, $$ where the \(\varOmega _{i}\) are rectangles if \(d=2\) and rectangular parallelepipeds if \(d=3\). We suppose that the decomposition is conforming in the sense that the intersection \(\overline{\varOmega _{i}}\cap \overline{\varOmega _{j}}\), \(1 \le i < j \le I\), if it is not empty, is an entire edge or an entire face of the two sub-domains \(\overline{\varOmega _{i}}\) and \(\overline{\varOmega _{j}}\). We assume, without loss of generality, that the edges or faces of each sub-domain \(\overline{\varOmega _{i}}\) are parallel to the coordinate axes. Let \(\mathbb{P}_{nm}(\varOmega )\) be the space of the restrictions to Ω of polynomials of degree n in the x direction and m in the y direction in dimension \(d=2\), and let \(\mathbb{P}_{nms}(\varOmega )\) be the space of the restrictions to Ω of polynomials of degree n in the x direction, m in the y direction, and s in the z direction in dimension \(d=3\). Let \(N\ge 2\) be an integer. We introduce the space of discrete velocities, $$ \mathbb{D}_{N}(\varOmega )=\bigl\{ \varphi _{N} \in H_{0}(\operatorname{div},\varOmega ); \varphi _{N}/_{\varOmega _{i}} \in \mathbb{P}_{N,N-1}(\varOmega )\times \mathbb{P}_{N-1,N}(\varOmega ) \bigr\} $$ if \(d=2\), or $$ \mathbb{D}_{N}(\varOmega )=\bigl\{ \varphi _{N} \in H_{0}(\operatorname{div},\varOmega ); \varphi _{N}/_{\varOmega _{i}} \in \mathbb{P}_{N,N-1,N-1}(\varOmega )\times \mathbb{P}_{N-1,N,N-1}(\varOmega )\times \mathbb{P}_{N-1,N-1,N}(\varOmega )\bigr\} $$ if \(d=3\), and the space of discrete pressures, $$ \mathbb{M}_{N}(\varOmega )=\mathbb{P}_{N-1}(\varOmega ) \cap L_{0}^{2}(\varOmega ). $$ For this choice, \(\mathbb{M}_{N}(\varOmega )\) contains no spurious modes and the inf-sup constant of the bilinear form \(b(\cdot ,\cdot)\) does not depend on N [19]. To define the discrete problem, we recall the Gauss–Lobatto–Legendre quadrature formula on the reference interval \(]{-}1,1[\): with \(\xi _{0}=-1\) and \(\xi _{N}=1\), there exists a unique set of nodes \(\xi _{k}\), \(1\le k \le N-1\), and a unique set of weights \(\rho _{k}\), \(0\le k \le N\), such that $$ \forall \varphi \in \mathbb{P}_{2N-1}\bigl(]{-}1,1[\bigr),\quad \int _{-1}^{1} \varphi (\mathbf{x})\,d\mathbf{x}= \sum _{k=0}^{N}\varphi (\xi _{k})\rho _{k}. $$ The weights \(\rho _{k}\) are positive and we have the following property: $$ \forall \varphi _{N} \in \mathbb{P}_{N} \bigl(]{-}1,1[\bigr), \quad \Vert \varphi _{N} \Vert _{L^{2}(]{-}1,1[)}^{2} \le \sum_{k=0}^{N} \varphi _{N}^{2}(\xi _{k})\rho _{k} \le 3 \Vert \varphi _{N} \Vert _{L^{2}(]{-}1,1[)}^{2}.
$$ Let \((\xi _{k}^{i},\xi _{l}^{i})\), respectively \((\xi _{k}^{i},\xi _{l}^{i},\xi _{r}^{i})\), be the nodes in the sub-domain \(\varOmega _{i}\) deduced from \((\xi _{k},\xi _{l})\), respectively \((\xi _{k},\xi _{l},\xi _{r})\), by a bijection from the reference domain \(]{-}1,1[^{2}\), respectively \(]{-}1,1[^{3}\). The local discrete scalar product is defined, for φ and ψ two continuous functions on \(\overline{\varOmega }_{i}\), by $$ (\varphi ,\psi )_{N_{i}} = \textstyle\begin{cases} \frac{ \vert \varOmega _{i} \vert }{4}\sum_{k=0}^{N}\sum_{l=0}^{N} \varphi (\xi _{k}^{i}, \xi _{l}^{i})\psi (\xi _{k}^{i},\xi _{l}^{i}) \rho _{k} \rho _{l}& \text{if } d=2, \\ \frac{ \vert \varOmega _{i} \vert }{8}\sum_{k=0}^{N}\sum_{l=0}^{N} \sum_{r=0}^{N} \varphi (\xi _{k}^{i},\xi _{l}^{i},\xi _{r}^{i})\psi (\xi _{k}^{i},\xi _{l}^{i},\xi _{r}^{i}) \rho _{k} \rho _{l}\rho _{r}& \text{if } d=3. \end{cases} $$ Then the discrete scalar product on Ω is $$ (\varphi ,\psi )_{N} =\sum_{i=1}^{I} (\varphi ,\psi )_{N_{i}}. $$ The penalized discrete problem reads: Find \((\mathbf{u}_{N}^{\alpha },p_{N}^{\alpha })\in \mathbb{D}_{N}(\varOmega )\times \mathbb{M}_{N}(\varOmega )\) such that $$ \begin{aligned} & \forall \mathbf{v}_{N}\in \mathbb{D}_{N}(\varOmega ), \quad \mathbf{a}_{N}\bigl(\mathbf{u}_{N}^{\alpha },\mathbf{v}_{N}\bigr)+b\bigl(\mathbf{v}_{N},p_{N}^{\alpha }\bigr)=(\mathbf{f},\mathbf{v}_{N})_{N}, \\ &\forall q_{N}\in \mathbb{M}_{N}(\varOmega ), \quad b_{N}\bigl(\mathbf{u}_{N}^{\alpha },q_{N} \bigr)=\alpha \bigl(p_{N}^{\alpha },q_{N} \bigr)_{N}, \end{aligned} $$ where the two bilinear forms \(\mathbf{a}_{N}(\cdot ,\cdot)\) and \(b_{N}(\cdot ,\cdot)\) are defined by $$ \mathbf{a}_{N}(\mathbf{u}_{N},\mathbf{v}_{N})=(\mathbf{u}_{N},\mathbf{v}_{N})_{N} \quad \text{and} \quad b_{N}(\mathbf{v}_{N},q_{N})=-\bigl(\operatorname{div}(\mathbf{v}_{N}),q_{N}\bigr)_{N}. $$ According to the exactness of the quadrature formula on the space \({\mathbb{P}}_{2N-1}(\varOmega )\), the discrete bilinear form \(b_{N}(\cdot ,\cdot)\) coincides with the continuous bilinear form \(b(\cdot ,\cdot)\). We consider the orthogonal projection operator \(\varPi _{N}\) from the space \(L^{2}(\varOmega )\) onto the space \(\mathbb {M}_{N}(\varOmega )\), defined with respect to the \(L^{2}(\varOmega )\) scalar product. We prove that the penalized problem (9) is equivalent to the following uncoupled problem (see [2], Chap. 1, Sect. 4.3): Find \(\mathbf{u}_{N}^{\alpha }\in \mathbb {D}_{N}(\varOmega )\) and \(p_{N}^{\alpha } \in \mathbb {M}_{N}(\varOmega )\) such that, for all \(\mathbf{v}_{N} \in \mathbb {D}_{N}(\varOmega )\), $$\begin{aligned}& \mathbf{a}_{N}\bigl(\mathbf{u}_{N}^{\alpha }, \mathbf{v}_{N}\bigr) + \frac{1}{\alpha }\bigl(\varPi _{N} \bigl(\operatorname{div} \mathbf{u}_{N}^{\alpha }\bigr),\varPi _{N}(\operatorname{div} \mathbf{v}_{N})\bigr)_{N}=(\mathbf{f},\mathbf{v}_{N})_{N}, \end{aligned}$$ $$\begin{aligned}& p_{N}^{\alpha }=-\frac{1}{\alpha } \varPi _{N} \bigl(\operatorname{div} \mathbf{u}_{N}^{\alpha }\bigr). \end{aligned}$$ The penalty method thus uncouples problem (9): the only unknown in equation (10) is the velocity, and the pressure is then deduced from equation (11). For a continuous function f on Ω̄, problem (10)–(11) has a unique solution \((\mathbf{u}_{N}^{\alpha },p_{N}^{\alpha }) \in \mathbb {D}_{N}(\varOmega )\times \mathbb {M}_{N}(\varOmega )\).
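In matrix form, with A the matrix of \(\mathbf{a}_{N}(\cdot ,\cdot)\) in a basis of \(\mathbb {D}_{N}(\varOmega )\), B the matrix of \(b_{N}(\cdot ,\cdot)\) (which carries the minus sign of the divergence term), M the mass matrix of \(\mathbb {M}_{N}(\varOmega )\), and F the vector of the data \((\mathbf{f},\mathbf{v}_{N})_{N}\), problem (9) reads \(AU + B^{T}P = F\), \(BU = \alpha M P\), while the uncoupled form (10)–(11) reads \((A + \alpha ^{-1} B^{T} M^{-1} B) U = F\) followed by \(P = \alpha ^{-1} M^{-1} B U\). The following Python sketch checks this equivalence numerically; the small random matrices are illustrative stand-ins under the assumption that A and M are symmetric positive definite, and do not reproduce the actual spectral element assembly.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, npr = 12, 5          # toy dimensions of the velocity and pressure spaces
alpha = 1e-3             # penalty parameter

# Stand-ins for the assembled matrices (A, M symmetric positive definite)
A = rng.standard_normal((nu, nu)); A = A @ A.T + nu * np.eye(nu)
M = rng.standard_normal((npr, npr)); M = M @ M.T + npr * np.eye(npr)
B = rng.standard_normal((npr, nu))   # matrix of b_N(., .)
F = rng.standard_normal(nu)

# Coupled penalized problem (9):  A U + B^T P = F,  B U - alpha M P = 0
K = np.block([[A, B.T], [B, -alpha * M]])
UP = np.linalg.solve(K, np.concatenate([F, np.zeros(npr)]))
U_coupled, P_coupled = UP[:nu], UP[nu:]

# Uncoupled solve, Eqs. (10)-(11): velocity system first, then the pressure
Minv_B = np.linalg.solve(M, B)                              # M^{-1} B
U = np.linalg.solve(A + (1.0 / alpha) * B.T @ Minv_B, F)    # Eq. (10)
P = (1.0 / alpha) * Minv_B @ U                              # Eq. (11)

print(np.allclose(U, U_coupled), np.allclose(P, P_coupled))  # True True
```

Note that the velocity matrix \(A + \alpha ^{-1} B^{T} M^{-1} B\) is symmetric positive definite, so the uncoupled formulation replaces an indefinite saddle-point system by a single definite solve, at the price of a conditioning that degrades as α decreases.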
For \((\boldsymbol{\varphi }_{N},\boldsymbol{\psi }_{N})\in \mathbb {D}_{N}(\varOmega ) \times \mathbb {D}_{N}(\varOmega )\), we consider $$ \hat{\mathbf{a}}(\boldsymbol{\varphi }_{N},\boldsymbol{\psi }_{N})=( \boldsymbol{\varphi }_{N},\boldsymbol{\psi }_{N})_{N} + \frac{1}{\alpha }\bigl(\varPi _{N}(\operatorname{div}{\boldsymbol{\varphi }}_{N}),\varPi _{N}(\operatorname{div}{\boldsymbol{\psi }}_{N}) \bigr)_{N}. $$ We deduce, from the triangle inequality, the continuity of the operator \(\varPi _{N}\) and the continuity of the operator div on the space \(\mathbb {D}_{N}(\varOmega )\), that the bilinear form \(\hat{\mathbf{a}}(\cdot ,\cdot )\) is continuous on \(\mathbb {D}_{N}(\varOmega )\times \mathbb {D}_{N}(\varOmega )\). Using the fact that \(\hat{\mathbf{a}}(\varphi _{N},\varphi _{N})\geq (\varphi _{N}, \varphi _{N})_{N}\) together with property (8), we deduce that the bilinear form \(\hat{\mathbf{a}}(\cdot ,\cdot )\) is elliptic. The Lax–Milgram theorem then permits one to conclude that problem (10)–(11) has a unique solution \((\mathbf{u}_{N}^{\alpha },p_{N}^{\alpha }) \in \mathbb {D}_{N}(\varOmega )\times \mathbb {M}_{N}(\varOmega )\). □ We know that the discrete bilinear form \(b_{N}(\cdot ,\cdot )\) satisfies the following inf-sup condition: for any \(q_{N} \in \mathbb {M}_{N}(\varOmega )\), $$ \sup_{\mathbf{v}_{N}\in \mathbb {D}_{N}(\varOmega )}{\frac{b_{N}(\mathbf{v}_{N},q_{N})}{ \Vert \mathbf{v}_{N} \Vert _{H(\operatorname{div},\varOmega )}}}\geq \gamma \Vert q_{N} \Vert _{L^{2}(\varOmega )}, $$ where γ is a positive constant independent of N and of the penalty parameter α (see [19, 20]). We obtain the following a priori error estimate. Suppose that the data function f belongs to the space \(H^{\mu }(\varOmega )^{d}\), \(\mu \geq \frac{d}{2}\), and that the solutions \((\mathbf{u},p)\) of problem (2) and \((\mathbf{u}^{\alpha },p^{\alpha })\) of problem (5) belong to \(H^{s}(\varOmega )^{d}\times H^{s}(\varOmega )\), \(s\geq 0\); then the error between the solution \((\mathbf{u},p)\) of problem (2) and the solution \((\mathbf{u}_{N}^{\alpha },p_{N}^{\alpha })\) of problem (9) satisfies $$\begin{aligned}& \bigl\Vert \mathbf{u}-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}} + \gamma \bigl\Vert p-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \\& \quad \leq C \alpha \bigl(N^{-s}\bigl( \Vert \mathbf{u}\Vert _{H^{s}(\varOmega )^{d}} + \Vert p \Vert _{H^{s}(\varOmega )}\bigr) + N^{-\mu } \Vert \mathbf{f}\Vert _{H^{\mu }(\varOmega )^{d}} \bigr), \end{aligned}$$ where C is a positive constant independent of N and α. Using the triangle inequality we have $$ \begin{aligned} & \bigl\Vert \mathbf{u}-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}}\leq \bigl\Vert \mathbf{u}-\mathbf{u}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}} + \bigl\Vert \mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}}, \\ & \bigl\Vert p-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )}\leq \bigl\Vert p-p^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} + \bigl\Vert p^{\alpha }-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )}. \end{aligned} $$ Using problems (5) and (9), we conclude that $$ \mathbf{a}\bigl(\mathbf{u}^{\alpha }-\mathbf{u}^{\alpha }_{N}, \mathbf{v}_{N}\bigr) + b\bigl(\mathbf{v}_{N},p^{\alpha }-p^{\alpha }_{N}\bigr)=0 $$ and $$ b\bigl(\mathbf{u}^{\alpha }-\mathbf{u}^{\alpha }_{N},q_{N} \bigr)=\alpha \int _{\varOmega }p_{N}^{\alpha }(\mathbf{x})q_{N}(\mathbf{x})\,d\mathbf{x}. $$
Based on the inf-sup condition (3) and the continuity of the bilinear form \(\mathbf{a}(\cdot ,\cdot )\), there exists a positive constant C, independent of N and α, such that $$ \beta \bigl\Vert p^{\alpha }-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \leq \sup_{\mathbf{v}_{N}\in \mathbb {D}_{N}(\varOmega )} {\frac{b(\mathbf{v}_{N},p^{\alpha }-p^{\alpha }_{N})}{ \Vert \mathbf{v}_{N} \Vert _{L^{2}(\varOmega )^{d}}}} \leq C \bigl\Vert \mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}}, $$ whence $$ \bigl\Vert p^{\alpha }-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )}\leq C \beta ^{-1} \bigl\Vert \mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}}. $$ If we choose \(\mathbf{v}_{N}=\mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha }\) and \(q_{N}=p^{\alpha }-p_{N}^{\alpha }\) in (15) and (16), we have $$ \mathbf{a}\bigl(\mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha },\mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha }\bigr) \leq -\alpha \int _{\varOmega }p^{\alpha }(\mathbf{x}) \bigl(p^{\alpha }-p_{N}^{\alpha } \bigr) (\mathbf{x})\,d\mathbf{x}. $$ Using the Cauchy–Schwarz inequality and (17), we conclude that $$ \mathbf{a}\bigl(\mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha }, \mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha }\bigr) \leq \alpha C \beta ^{-1} \bigl\Vert p^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \bigl\Vert \mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}}. $$ Then, by (16), $$ {\operatorname{div}}\bigl(\mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha } \bigr)=\alpha p^{\alpha }_{N} \quad \text{in } L_{0}^{2}(\varOmega ). $$ Using (18) and (19), we find that $$ \bigl\Vert \mathbf{u}^{\alpha }-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}} \leq \alpha C \bigl\Vert p^{\alpha } \bigr\Vert _{L^{2}(\varOmega )}. $$ By combining the inequalities (14), (20), (17) and (6), we conclude (13), using the standard results of spectral approximation [12]. □ A posteriori error analysis We define an error indicator $$ i^{\alpha }=\alpha \bigl\Vert p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )}, $$ which depends only on the discrete pressure, so it is easy to compute. The error between the solution \((\mathbf{u},p)\) of problem (2) and the solution \((\mathbf{u}_{N}^{\alpha },p_{N}^{\alpha })\) of problem (9) satisfies $$ \bigl\Vert \mathbf{u}-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )^{d}} + \bigl\Vert p-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \leq C \bigl(i^{\alpha } + \alpha \bigl\Vert p^{\alpha }-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \bigr). $$ The estimate of the error indicator is $$ i^{\alpha }\leq C \bigl( \bigl\Vert \mathbf{u}-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{H(\operatorname{div},\varOmega )} + \alpha \bigl\Vert p-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \bigr), $$ where C is a positive constant independent of N and α. Taking the difference between problems (2) and (5), we find, for all \(\mathbf{v}\in H(\operatorname{div},\varOmega )\) and for all \(q\in L^{2}(\varOmega )\), $$ \begin{aligned} &\mathbf{a}\bigl(\mathbf{u}-\mathbf{u}^{\alpha },\mathbf{v}\bigr) + b\bigl(\mathbf{v},p-p^{\alpha }\bigr)=0, \\ &b\bigl(\mathbf{u}-\mathbf{u}^{\alpha },q\bigr)= - \alpha \int _{\varOmega } p^{\alpha }(\mathbf{x})q(\mathbf{x})\,d\mathbf{x}. \end{aligned} $$
Using the arguments presented in ([2], Chap. 1, Theorem 4.3), combined with the ellipticity of the bilinear form \(\mathbf{a}(\cdot ,\cdot )\) and the inf-sup condition (3), we obtain $$ \bigl\Vert \mathbf{u}-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{H(\operatorname{div},\varOmega )} + \bigl\Vert p-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \leq C \alpha \bigl\Vert p^{\alpha } \bigr\Vert _{L^{2}(\varOmega )}. $$ By the triangle inequality, $$ \bigl\Vert p^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \leq \bigl\Vert p^{\alpha }-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} + \bigl\Vert p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )}, $$ we conclude the estimate (22) with \(i^{\alpha }= \alpha \Vert p_{N}^{\alpha }\Vert _{L^{2}(\varOmega )} \). Taking \(q=p^{\alpha }\) in the second equation of (24) yields $$ \alpha \bigl\Vert p^{\alpha } \bigr\Vert _{L^{2}(\varOmega )}\leq \bigl\Vert \mathbf{u}-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{H(\operatorname{div},\varOmega )}. $$ Combining this relation with (26), we find the result (23). □ Let \(\varpi _{i}\), \(1\leq i\leq I\), be the family of error indicators related to the spectral element discretization, $$ \varpi _{i}= N^{-1} \bigl\Vert I_{N}(\mathbf{f}) - \nu \mathbf{u}_{N}^{\alpha }- \operatorname{\mathbf{grad}}p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega _{i})^{d}} + \sum_{l=1}^{L(i)} N^{-\frac{{1}}{2}} \bigl\Vert \bigl[p_{N}^{\alpha }\mathbf{n}\bigr]_{il} \bigr\Vert _{L^{2}(\varGamma _{il})} + \bigl\Vert {\operatorname{div}} \bigl(\mathbf{u}_{N}^{\alpha }\bigr) \bigr\Vert _{L^{2}(\varOmega _{i})}. $$ For each \(1\leq i\leq I\), \(\varGamma _{il}\), \(1\leq l\leq L(i)\), are the edges in dimension \(d=2\), or the faces in dimension \(d=3\), of the sub-domain \(\varOmega _{i}\) that are not included in the boundary ∂Ω, and \([p_{N}^{\alpha }\mathbf{n}]_{il}\) represents the jump across each \(\varGamma _{il}\). We denote by \(I_{N}\) the Lagrange interpolation operator at the Gauss–Lobatto nodes.
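As a small illustration of how the indicators (27) and their global combination (used by the adaptation algorithm below) can be assembled, consider the following sketch (ours; the per-subdomain norms are assumed to be supplied by a hypothetical solver).

```python
import numpy as np

def global_indicator(residual_norms, jump_norms, div_norms, N):
    """Assemble the indicators (27) and their l2 combination (illustration).

    residual_norms[i]: ||I_N(f) - nu*u_N - grad p_N||_{L2(Omega_i)}
    jump_norms[i]:     list of ||[p_N n]||_{L2(Gamma_il)} over the internal
                       edges/faces of Omega_i
    div_norms[i]:      ||div u_N||_{L2(Omega_i)}
    All inputs are assumed to be precomputed by a (hypothetical) solver.
    """
    w = np.array([residual_norms[i] / N
                  + sum(jump_norms[i]) / np.sqrt(N)
                  + div_norms[i]
                  for i in range(len(residual_norms))])
    return w, np.sqrt(np.sum(w ** 2))   # (varpi_i)_i  and  varpi_N
```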
The a posteriori error estimate between the solution \((\mathbf{u}^{\alpha },p^{\alpha })\) of problem (5) and the solution \((\mathbf{u}_{N}^{\alpha },p_{N}^{\alpha })\) of problem (9) is $$ \bigl\Vert \mathbf{u}-\mathbf{u}_{N}^{\alpha } \bigr\Vert _{H(\operatorname{div},\varOmega )} + \bigl\Vert p-p_{N}^{\alpha } \bigr\Vert _{L^{2}(\varOmega )} \leq C \Biggl(i^{\alpha } + \mu \Biggl(\sum_{i=1}^{I}\varpi _{i} \Biggr) + \bigl\Vert \mathbf{f}-I_{N}(\mathbf{f}) \bigr\Vert _{L^{2}(\varOmega )^{d}} \Biggr), $$ where C is a positive constant independent of N and α, and μ is equal to 1 if \(d=2\) or Ω is convex, and to \(N^{\frac{1}{2}}\) if \(d=3\) and Ω is not convex. To prove (28), we proceed as in ([21], Sect. 4), ([14], Sect. 3.3) and ([11], Sect. 3). Let \(\mathbf{U}=(\mathbf{u},p)\) and \(\mathbf{V}=(\mathbf{v},q)\). We define the bilinear form $$ \mathcal{A}_{\alpha }(\mathbf{U},\mathbf{V})=\mathbf{a}(\mathbf{u},\mathbf{v})+b(\mathbf{v},p)+b(\mathbf{u},q)-\alpha \int _{\varOmega }p(\mathbf{x})q(\mathbf{x})\,d\mathbf{x}. $$ The bilinear form \(\mathcal{A}_{\alpha }(\cdot ,\cdot )\) is continuous on the space \(\mathcal{K}(\varOmega )\times \mathcal{K}(\varOmega )\), where $$ \mathcal{K}(\varOmega )=L^{2}(\varOmega )^{d} \times L^{2}_{0}(\varOmega ). $$ This space is equipped with the norm $$ \bigl\Vert (\mathbf{u},p) \bigr\Vert _{\mathcal{K}(\varOmega )}= \bigl( \Vert \mathbf{u}\Vert ^{2}_{L^{2}(\varOmega )^{d}}+ \Vert p \Vert ^{2}_{L^{2}(\varOmega )} \bigr)^{\frac{1}{2}}. $$ Thanks to ([14], Lemma 3.5), the coercivity of the bilinear form \(\mathbf{a}(\cdot ,\cdot )\) and the inf-sup condition of the bilinear form \(b(\cdot ,\cdot )\), we prove an inf-sup condition on the bilinear form \(\mathcal{A}_{\alpha }(\cdot ,\cdot )\): there exists a positive constant \(\delta _{*}\), independent of α, such that $$ \sup_{\mathbf{V}\in \mathcal{K}(\varOmega )} {\frac{\mathcal{A}_{\alpha }(\mathbf{U},\mathbf{V})}{ \Vert \mathbf{V}\Vert _{\mathcal{K}(\varOmega )}}}\geq \delta _{*} \Vert \mathbf{U}\Vert _{\mathcal{K}(\varOmega )}. $$ We need to evaluate the residual term \(\mathcal{A}_{\alpha }(\mathbf{U}^{\alpha }-\mathbf{U}_{N}^{\alpha },\mathbf{V})\), where \(\mathbf{U}^{\alpha }=(\mathbf{u}^{\alpha },p^{\alpha })\) and \(\mathbf{U}^{\alpha }_{N}=(\mathbf{u}^{\alpha }_{N},p^{\alpha }_{N})\). By the exactness of the quadrature formula (7) applied to problem (9), we obtain, for \(\mathbf{V}_{N-1}=(\mathbf{v}_{N-1},0)\), \(\mathbf{v}_{N-1}\in \mathbb {D}_{N-1}\), $$ \mathcal{A}_{\alpha }\bigl(\mathbf{U}_{N}^{\alpha }, \mathbf{V}_{N-1}\bigr)= \int _{\varOmega }I_{N}(\mathbf{f}) (\mathbf{x})\cdot \mathbf{v}_{N-1}(\mathbf{x})\,d\mathbf{x}. $$ Using problems (5) and (31), we have $$ \mathcal{A}_{\alpha }\bigl(\mathbf{U}^{\alpha }-\mathbf{U}_{N}^{\alpha }, \mathbf{V}\bigr)= \mathcal{A}_{\alpha }\bigl(\mathbf{U}^{\alpha }-\mathbf{U}_{N}^{\alpha },\mathbf{V}-\mathbf{V}_{N-1}\bigr) + \int _{\varOmega } \bigl(\mathbf{f}-I_{N}(\mathbf{f})\bigr) (\mathbf{x})\cdot \mathbf{v}_{N-1}(\mathbf{x})\,d\mathbf{x}, $$ whence $$\begin{aligned} \mathcal{A}_{\alpha }\bigl(\mathbf{U}^{\alpha }-\mathbf{U}_{N}^{\alpha },\mathbf{V}\bigr) =& \int _{\varOmega } I_{N}(\mathbf{f}) (\mathbf{x})\cdot (\mathbf{v}-\mathbf{v}_{N-1}) (\mathbf{x})\,d\mathbf{x}- \mathcal{A}_{\alpha }\bigl(\mathbf{U}_{N}^{\alpha },\mathbf{V}-\mathbf{V}_{N-1}\bigr) \\ &{}+ \int _{\varOmega } \bigl(\mathbf{f}-I_{N}(\mathbf{f})\bigr) (\mathbf{x})\cdot \mathbf{v}( \mathbf{x})\,d\mathbf{x}. \end{aligned}$$ Applying an integration by parts on each sub-domain \(\varOmega _{i}\), we conclude that $$\begin{aligned}& \int _{\varOmega }I_{N}(\mathbf{f}) (\mathbf{x})\cdot (\mathbf{v}-\mathbf{v}_{N-1}) (\mathbf{x})\,d\mathbf{x}-\mathcal{A}_{\alpha }\bigl(\mathbf{U}_{N}^{\alpha },\mathbf{V}-\mathbf{V}_{N-1}\bigr) \\& \quad = \sum_{i=1}^{I} \biggl( \int _{\varOmega _{i}} \bigl(I_{N}(\mathbf{f}) - \nu \mathbf{u}_{N}^{\alpha }- \operatorname{\mathbf{grad}}p_{N}^{\alpha } \bigr) (\mathbf{x})\cdot (\mathbf{v}-\mathbf{v}_{N-1}) (\mathbf{x})\,d\mathbf{x} \\& \qquad {} + \int _{\partial \varOmega _{i}}p_{N}^{\alpha }(\zeta ) (\mathbf{v}-\mathbf{v}_{N-1}) (\zeta )\cdot \mathbf{n}\,d\zeta \\& \qquad {} + \int _{\varOmega _{i}}{\operatorname{div}} \bigl(\mathbf{u}_{N}^{\alpha }\bigr) (\mathbf{x})q(\mathbf{x}) \,d\mathbf{x}+\alpha \int _{\varOmega _{i}}p_{N}^{\alpha }(\mathbf{x})q(\mathbf{x})\,d\mathbf{x}\biggr). \end{aligned}$$ We define \(\mathcal{P}_{N}\) to be the orthogonal projection operator from the space \(H_{0}(\operatorname{div},\varOmega )\) onto the space \(\mathbb {D}_{N}\), associated with the scalar product of the space \(H_{0}(\operatorname{div},\varOmega )\). Then, for any \(\mathbf{v}\in H_{0}(\operatorname{div},\varOmega )\), we have $$ \bigl\Vert \mathbf{v}-\mathcal{P}_{N}(\mathbf{v}) \bigr\Vert _{L^{2}(\varOmega )}= \sup_{\kappa \in {L^{2}(\varOmega )}}{\frac{ \int _{\varOmega }(\mathbf{v}-\mathcal{P}_{N}(\mathbf{v}))(\mathbf{x})\kappa (\mathbf{x})\,d\mathbf{x}}{ \Vert \kappa \Vert _{L^{2}(\varOmega )}}}. $$
For \(\kappa \in L^{2}(\varOmega )\), the problem $$ \begin{aligned} &{-}\Delta \psi=\kappa \quad \text{in } \varOmega , \\ &\psi=0 \quad \text{on } \partial \varOmega , \end{aligned} $$ has a unique solution \(\psi \in H^{1}_{0}(\varOmega )\subset H_{0}(\operatorname{div}, \varOmega )\); then $$\begin{aligned} \int _{\varOmega }\bigl(\mathbf{v}-\mathcal{P}_{N}(\mathbf{v})\bigr) ( \mathbf{x})\kappa (\mathbf{x})\,d\mathbf{x} =& \int _{\varOmega }\nabla \bigl(\mathbf{v}-\mathcal{P}_{N}(\mathbf{v}) \bigr) (\mathbf{x})\nabla \psi (\mathbf{x})\,d\mathbf{x}\\ =& \int _{\varOmega }\nabla \mathbf{v}(\mathbf{x})\nabla \bigl(\psi -\mathcal{P}_{N}(\psi )\bigr) (\mathbf{x})\,d\mathbf{x}. \end{aligned}$$ Thus, we conclude that $$ \int _{\varOmega }\bigl(\mathbf{v}-\mathcal{P}_{N}(\mathbf{v})\bigr) ( \mathbf{x})\kappa (\mathbf{x})\,d\mathbf{x}\leq \Vert \mathbf{v}\Vert _{H(\operatorname{div},\varOmega )} \bigl\Vert \psi -\mathcal{P}_{N}(\psi ) \bigr\Vert _{H(\operatorname{div},\varOmega )}. $$ We deduce the following inequality from the standard interpolation results [22]: $$ \bigl\Vert \psi -\mathcal{P}_{N}(\psi ) \bigr\Vert _{H(\operatorname{div},\varOmega )}\leq C N^{-s} \Vert \psi \Vert _{H^{s}(\varOmega )}. $$ We also use the following estimate (see [23]): for any \(\phi \in H^{1}_{0}(\varOmega )\subset H_{0}(\operatorname{div}, \varOmega )\) and any sub-domain \(\varOmega _{i}\), \(1\leq i\leq I\), $$ \bigl\Vert \phi -\mathcal{P}_{N}(\phi ) \bigr\Vert _{L^{2}(\partial \varOmega _{i})}\leq C N^{-\frac{1}{2}} \Vert \phi \Vert _{H(\operatorname{div},\varOmega _{i})}. $$ We conclude the a posteriori error estimate (28) by applying (30), (32), (33), the Cauchy–Schwarz inequality, and (34) combined with (35). □ We remark that in dimension \(d=2\), and if Ω is convex, the a posteriori error estimate (28) is fully optimal and leads to an explicit upper bound for the error. However, the inverse estimate (the bound of the error indicator in terms of the error) is not optimal (see [23], Theorem 2.9), and we will not present it because we are not interested in adaptivity with respect to N. Penalty adaptation algorithm We describe in this section the strategy used to adapt the penalty parameter in order to optimize it. We suppose that the data function f is regular. Let γ be a fixed real number and \(\alpha ^{0}\) an initial value of α. For \(m=1, \ldots \), given the current value \(\alpha ^{m}\) of α:
1. Compute the solution \((\mathbf{u}_{N}^{\alpha ^{m}},p_{N}^{\alpha ^{m}})\) of problem (10)–(11).
2. Compute the associated error indicator \(i^{\alpha ^{m}}\) given in (21) and the global indicator $$ \varpi _{N}= \Biggl(\sum_{i=1}^{I} \varpi _{i}^{2} \Biggr)^{\frac{1}{2}}, $$ where \(\varpi _{i}\) is defined by (27).
3. If \(\gamma i^{\alpha ^{m}}\leq \varpi _{N}\), we retain \(\alpha ^{m}\) as the optimal value.
4. Otherwise, we choose $$ \alpha ^{m+1}= {\frac{\alpha ^{m}\varpi _{N}}{i^{\alpha ^{m}}}} $$ and we reiterate.
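A minimal Python sketch of this adaptation loop follows; `solve_uncoupled`, `local_indicators` and `p_norm` are hypothetical placeholders for a solver of problem (10)–(11), the indicators (27) and the L² norm, not functions from the paper.

```python
import numpy as np

def adapt_penalty(solve_uncoupled, local_indicators, p_norm, alpha0, gamma,
                  max_iter=50):
    """Penalty-adaptation loop; all callables are hypothetical placeholders.

    solve_uncoupled(alpha) -> (u_N, p_N)   solves problem (10)-(11);
    local_indicators(u_N, p_N) -> array of the indicators (27);
    p_norm(p_N) -> L2 norm of the discrete pressure.
    """
    alpha = alpha0
    for _ in range(max_iter):
        u_N, p_N = solve_uncoupled(alpha)                # problem (10)-(11)
        i_alpha = alpha * p_norm(p_N)                    # indicator (21)
        w_N = np.sqrt(np.sum(local_indicators(u_N, p_N) ** 2))
        if gamma * i_alpha <= w_N:                       # alpha^m is optimal
            return alpha, u_N, p_N
        alpha = alpha * w_N / i_alpha                    # update rule, reiterate
    return alpha, u_N, p_N                               # fallback after max_iter
```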
This work concerns the use of the penalty technique to solve Darcy's equations discretized by the spectral element method. This technique permits us to uncouple the two unknowns, the velocity and the pressure. The construction of error indicators by an a posteriori error analysis was presented; this made it possible to find an optimal penalty parameter, which reduces the computational cost. The numerical validation of this result will be the subject of a forthcoming work.
References
1. Darcy, H.: Les Fontaines Publiques de la Ville de Dijon. Dalmont, Paris (1856)
2. Girault, V., Raviart, P.-A.: Finite Element Methods for Navier–Stokes Equations, Theory and Algorithms. Springer, Berlin (1986)
3. Maday, Y., Meiron, D., Patera, A.T., Ronquist, E.M.: Analysis of iterative methods for the steady and unsteady Stokes problem: application to spectral element discretizations. SIAM J. Sci. Comput. 14, 310–337 (1993)
4. Bercovier, M.: Régularisation duale des problèmes variationnels mixtes: application aux éléments finis mixtes et extension à quelques problèmes non linéaires. PhD thesis, Université de Rouen (1976)
5. Bercovier, M.: Perturbation of mixed variational problems. Application to mixed finite element methods. RAIRO. Anal. Numér. 12, 211–236 (1978)
6. Carey, G.F., Krishnan, R.: Penalty approximation of Stokes flow. Comput. Methods Appl. Mech. Eng. 35, 169–206 (1982)
7. Carey, G.F., Krishnan, R.: Penalty finite element method for the Navier–Stokes equations. Comput. Methods Appl. Mech. Eng. 42, 183–224 (1984)
8. Carey, G.F., Krishnan, R.: Convergence of iterative methods in penalty finite element approximation of the Navier–Stokes equations. Comput. Methods Appl. Mech. Eng. 60, 1–29 (1987)
9. Abdelwahed, M., Chorfi, N.: The implementation of the mortar spectral element discretization of the heat equation with discontinuous diffusion coefficient. Bound. Value Probl. 2019, 80 (2019)
10. Abdelwahed, M., Al Salam, A., Chorfi, N.: Solving the singular two-dimensional fourth order problem by the mortar spectral element method. Bound. Value Probl. 2019, 39 (2019)
11. Bernardi, C., Blouza, A., Chorfi, N., Kharrat, N.: A penalty algorithm for the spectral element discretization of the Stokes problem. Math. Model. Numer. Anal. 45, 201–216 (2011)
12. Bernardi, C., Maday, Y.: Spectral methods. In: Ciarlet, P.G., Lions, J.-L. (eds.) Handbook of Numerical Analysis, pp. 209–485. North-Holland, Amsterdam (1997)
13. Malkus, D.S., Olsen, E.T.: Incompressible Finite Elements Which Fail the Discrete LBB Condition. Am. Soc. Mech. Eng., New York (1982)
14. Bernardi, C., Girault, V., Hecht, F.: A posteriori analysis of a penalty method and application to the Stokes problem. Math. Models Methods Appl. Sci. 13, 1599–1628 (2003)
15. Azaïez, M., Bernardi, C., Grundmann, M.: Spectral method applied to porous media. East-West J. Numer. Math. 2, 91–105 (1994)
16. Costabel, M.: A remark on the regularity of solutions of Maxwell equations on Lipschitz domains. Math. Methods Appl. Sci. 12, 365–368 (1990)
17. Costabel, M., Dauge, M.: Computation of resonance frequencies for Maxwell equations in non smooth domains. In: Ainsworth, M., Davies, P., Duncan, D., Martin, P., Rynne, B. (eds.) Topics in Computational Wave Propagation. Springer, Berlin (2004)
18. Dauge, M.: Neumann and mixed problems on curvilinear polyhedra. Integral Equ. Oper. Theory 15, 227–261 (1992)
19. Azaïez, M., Ben Belgacem, F., Grundmann, M., Khallouf, H.: Staggered grids hybrid dual spectral element method for second order elliptic problems. Application to high-order time splitting for Navier–Stokes equations. Comput. Methods Appl. Mech. Eng. 166, 183–199 (1998)
20. Ben Belgacem, F., Bernardi, C., Chorfi, N., Maday, Y.: Inf-sup conditions for the mortar spectral element discretization of the Stokes problem. Numer. Math. 85, 257–281 (2000)
21. Bernardi, C., Métivet, B., Verfürth, R.: Analyse numérique d'indicateurs d'erreur. In: George, P.-L. (ed.) Maillage et Adaptation. Hermès, Paris (2001)
22. Bernardi, C., Maday, Y., Rapetti, F.: Discrétisations Variationnelles de Problèmes aux Limites Elliptiques. Springer, Berlin (2004)
23. Bernardi, C.: Indicateurs d'erreur en h-N version des éléments spectraux. Modél. Math. Anal. Numér. 30, 1–38 (1996)
The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for funding this research group, No. RG-1440-061.
Department of Mathematics, College of Sciences, King Saud University, Riyadh, Saudi Arabia: Mohamed Abdelwahed & Nejmeddine Chorfi. The authors declare that the study was realized in collaboration with equal responsibility. All authors read and approved the final manuscript. Correspondence to Nejmeddine Chorfi.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Abdelwahed, M., Chorfi, N.: Penalty algorithm adapted for the spectral element discretization of the Darcy equations. Bound. Value Probl. 2019, 188 (2019). doi:10.1186/s13661-019-01305-3
Keywords: Penalty method; Darcy equations; Spectral element discretization
Newly Developed Microsatellite Markers of Mystus nemurus Tested for Cross-Species Amplification in Two Distantly Related Aquacultured Catfish Species Chan, S.C.;Tan, S.G.;Siraj, S.S.;Yusoff, K. 1513 https://doi.org/10.5713/ajas.2005.1513 The work reported here is an attempt to explore the possibility of DNA microsatellite locus transfer (cross-species amplification) to economically important aquacultured catfish species other than the source species. A total of 25 new microsatellite loci developed for the riverine catfish, Mystus nemurus, were successfully cross-amplified in two distantly related catfish species within the suborder Siluroidei. Five out of the 19 loci that successfully cross-amplified in Pangasius micronemus were polymorphic, while for Clarias batrachus, cross-amplification was successful at 17 polymorphic loci. The observed heterozygosities were high for all three catfishes. The results indicated that microsatellite loci can be as polymorphic in non-source species as in the source species.
Association Analyses with Carcass Traits in the Porcine KIAA1717 and HUMMLC2B Genes Xu, D.Q.;Xiong, Y.Z.;Liu, M.;Lan, J.;Ling, X.F.;Deng, C.Y.;Jiang, S.W. 1519 By screening a subtracted cDNA library constructed with mRNA obtained from the longissimus dorsi muscles of F1 hybrids (Landrace×Yorkshire) and their Yorkshire female parents, we isolated two partial sequences coding for the H3-K4-specific methyltransferase (KIAA1717) and skeletal muscle myosin regulatory light chain (HUMMLC2B) genes. In the present work we investigated two SNPs, one (C1354T) in the 3' untranslated region (UTR) of KIAA1717 and one (A345G) in the SINE (PRE-1) element of HUMMLC2B, in a resource population derived from crossing Chinese Meishan and Large White pigs. The selected pigs were genotyped by means of a PCR-RFLP protocol. Significant associations were observed for the KIAA1717 C1354T polymorphic site with thorax-waist backfat thickness (p<0.05), buttock backfat thickness (p<0.05), average backfat thickness (p<0.05), loin eye height (p<0.05), loin eye area (p<0.05), carcass length to 1st spondyle (p<0.01) and carcass length to 1st rib (p<0.01). HUMMLC2B A345G was significantly associated with loin eye width (p<0.05) and loin eye area (p<0.05). Further studies are needed to confirm these preliminary results.
Identification of Quantitative Trait Loci (QTL) Affecting Growth Traits in Pigs Kim, T.H.;Choi, B.H.;Lee, H.K;Park, H.S.;Lee, H.Y.;Yoon, D.H.;Lee, J.W.;Jeong, G.J.;Cheong, I.C.;Oh, S.J.;Han, J.Y. 1524 Molecular genetic markers were used to detect chromosomal regions containing loci for economically important traits such as growth, carcass, and meat quality traits in pigs. A three-generation resource population was constructed from a cross between Korean native boars and Landrace sows. A total of 240 F2 animals was produced by intercrossing the F1. Phenotypic data on 17 traits (birth weight; body weights at 3, 5, 12, and 30 weeks of age; teat number; carcass weight; backfat thickness; body fat; backbone number; muscle pH; meat color; drip loss; cooking loss; water holding capacity; shear force; and intramuscular fat content) were collected for the F2 animals. Animals including the grandparents (F0), parents (F1), and offspring (F2) were genotyped for 80 microsatellite markers covering chromosomes 1 to 10. Least squares regression interval mapping was used for quantitative trait loci (QTL) identification. Significance thresholds were determined by permutation tests.
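The permutation procedure is not spelled out in the abstract; the sketch below (ours) illustrates the Churchill–Doerge style empirical threshold commonly used for such scans, with `max_scan_statistic` a hypothetical placeholder for the maximum statistic of the interval-mapping genome scan.

```python
import numpy as np

def permutation_threshold(max_scan_statistic, phenotypes, n_perm=1000,
                          quantile=0.95, seed=0):
    """Empirical significance threshold for a QTL scan (illustration only).

    `max_scan_statistic(y)` is a hypothetical callable returning the maximum
    test statistic of the genome scan for phenotype vector y; shuffling y
    breaks any marker-trait association, giving the null distribution.
    """
    rng = np.random.default_rng(seed)
    null_maxima = [max_scan_statistic(rng.permutation(phenotypes))
                   for _ in range(n_perm)]
    return float(np.quantile(null_maxima, quantile))
```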
A total of 10 QTL were detected at the 5% chromosome-wide significance level for growth traits on SSCs 2, 4, 5, 6, and 8.
Individual-breed Assignment Analysis in Swine Populations by Using Microsatellite Markers Fan, B.;Chen, Y.Z.;Moran, C.;Zhao, S.H;Liu, B.;Yu, M.;Zhu, M.J.;Xiong, T.A.;Li, K. 1529 Individual-breed assignments were implemented in six swine populations using twenty-six microsatellites recommended by the Food and Agriculture Organization and the International Society for Animal Genetics (FAO-ISAG). Most microsatellites exhibited high polymorphism, as shown by the number of alleles and the polymorphism information content. The assignment accuracy per locus obtained by using the Bayesian method ranged from 33.33% (CGA) to 68.47% (S0068), and the accumulated assignment accuracy of the top ten loci in combination added up to 96.40%. The assignment power of microsatellites based on the Bayesian method had positive correlations with the number of alleles and the gene differentiation coefficient (Gst) per locus, while it had no relationship to genetic heterozygosity, polymorphism information content per locus, or the exclusion probabilities under case II and case III. The percentage of correct assignments was highest for the Bayesian method, followed by the gene frequency and distance-based methods. The assignment efficiency of microsatellites rose with an increase in the number of loci used, reaching 98% with a ten-locus combination. This indicated that such a set of ten microsatellites is sufficient for breed verification purposes.
Multi Trait Selection with Restriction for Cutup Carcass Value in Broiler Chicken: Genetic Relatedness of Lines Involved Based on Randomly Amplified Polymorphic DNA Khosravinia, Heshmatollah;Murthy, H.N.N.;Ramesha, K.P.;Govindaiah, M.G. 1535 Five broiler chicken lines, namely HC, BPB2, CPB2, PB2 and UM1, involved in a selection program and differing in selection intensity and genetic background, were screened for randomly amplified polymorphic DNA (RAPD) polymorphism using 10 selected decamer primers. Nine primers amplified the genomic DNA, generating fragments of 200 to 2,500 bp, and all detected polymorphism between lines. Out of 74 bands scored using these primers, 34 (50.0%) were found to be polymorphic. The number of polymorphic loci ranged from 3 to 6 with an average of 4.33. Lines differed considerably in within-population genetic similarity estimated by band frequency (WS = 93.55 to 99.25). Between-line genetic similarity estimates based on band sharing and on band frequency ranged from 71.35 to 86.45 and from 73.38 to 87.68, respectively. Lines HC and PB2 were the most closely related to each other, while BPB2 and CPB2 appeared to be more distant from each other. The between-line genetic distances based on both band sharing and band frequency revealed similar trends to the between-line genetic similarities. Based on the BS and BF criteria, the BPB2 and CPB2 as well as the PB2 and UM1 lines can be merged to launch a new genetic group for further progress on the biometrical objectives. A phylogenetic tree derived using Nei's coefficient of similarity revealed a different pattern of genetic distances between lines.
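The band-sharing (BS) statistic itself is not given in the abstract; the following sketch (ours, with made-up band positions) shows the standard Nei and Li band-sharing similarity that RAPD studies of this kind typically compute.

```python
def band_sharing(bands_a, bands_b):
    """Nei & Li (1979) band-sharing similarity between two RAPD profiles.

    bands_a, bands_b: sets of scored band positions (e.g. fragment sizes
    in bp). Returns 2 * n_shared / (n_a + n_b), a value in [0, 1].
    """
    shared = len(bands_a & bands_b)
    return 2.0 * shared / (len(bands_a) + len(bands_b))

# Hypothetical profiles, for illustration only:
print(band_sharing({200, 450, 700, 1200}, {200, 700, 900, 1200, 2500}))  # ~0.667
```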
New Evidences of Effect of Melanocortin-4 Receptor and Insulin-like Growth Factor 2 Genes on Fat Deposition and Carcass Traits in Different Pig Populations Chen, J.F.;Xiong, Y.Z.;Zuo, B.;Zheng, R.;Li, F.E.;Lei, M.G.;Li, J.L.;Deng, C.Y.;Jiang, S.W. 1542 The melanocortin-4 receptor (MC4R) and insulin-like growth factor 2 (IGF2) are two important candidate genes related to fat deposition and carcass traits. MC4R was first identified in studies of human obesity and was then studied as a candidate gene affecting food intake and fat deposition traits in mice and pigs. The IGF2 gene plays an important role in tumor cell proliferation and muscle growth; it also affects fat traits and live weight in pigs. In this paper, MC4R and IGF2 were studied as two candidate genes associated with important economic traits such as fat deposition and carcass traits in five different pig populations. Taq I and Bcn I PCR-RFLP assays were used to detect the polymorphisms of the MC4R and IGF2 genes, respectively. Different MC4R genotype frequencies were observed in four populations, and IGF2 genotype frequencies also differed in two populations. The results of the association analysis show that both the MC4R and IGF2 genes were significantly associated with fat deposition and carcass traits in about 300 pigs. This work adds new evidence of MC4R and IGF2 affecting fat deposition and carcass traits in pigs and shows that the two genes can be used as important candidate genes for marker-assisted selection (MAS) for growth and lean meat percentage in pigs.
Highly Polymorphic Bovine Leptin Gene Yoon, D.H.;Cho, B.H.;Park, B.L.;Choi, Y.H.;Cheong, H.S.;Lee, H.K.;Chung, E.R.;Cheong, I.C.;Shin, H.D. 1548 Leptin, an anti-obesity protein, is a hormone expressed and secreted mainly by adipose tissue and involved in the regulation of body weight, food intake and energy metabolism. In an effort to discover polymorphisms in genes whose variants might be implicated in phenotypic traits of growth, we sequenced the exons and their boundaries of the leptin gene, including 1,000 bp upstream of the promoter region, in twenty-four unrelated Korean cattle. Fifty-seven sequence variants were identified: fourteen in the 5' flanking region, twenty-seven in introns, eight in exons, and eight in the 3' flanking region. By pair-wise linkage analysis among polymorphisms, ten sets of SNPs were in absolute linkage disequilibrium (LD) (|D'| = 1 and $r^2$ = 1). Among the variants identified, thirty-six SNPs were newly identified, and twenty-one SNPs previously reported in other breeds were also confirmed in Korean cattle. The allele frequencies of the variants were quite different among breeds. The information from the SNPs of the bovine leptin gene could be useful for further genetic studies of this gene.
The Expression Characterization of Chicken Uncoupling Protein Gene Zhao, Jian-Guo;Li, Hui;Wang, Yu-Xiang;Meng, He 1552 The UCPs are members of the mitochondrial inner membrane transporter family. Their main function is to increase energy expenditure by diminishing ATP production from mitochondrial oxidative phosphorylation, dissipating the energy as heat instead. They are associated with the metabolism of fat and the regulation of energy expenditure, so the UCP gene can be viewed as a candidate gene for chicken fatness. In the present study, RT-PCR and Northern blot methods were developed to investigate the expression of the UCP gene in ten tissues of the chicken: heart, liver, spleen, lung, kidney, gizzard, intestine, brain, breast muscle and abdominal fat. The results of both the RT-PCR and Northern blot methods showed that the UCP gene was expressed specifically in breast muscle.
The expression levels of the UCP gene in breast muscle from egg-type and meat-type chickens at hatching and at 2, 4, 6 and 8 wk of age were detected by RT-PCR assay, and the results showed that the expression levels were related to breed. The expression level of the UCP gene in layers was higher than that in broilers at all ages except 6 wk. In broilers, UCP expression was highest at 6 wk and showed no significant differences among the other ages; in layers, the expression level showed no significant differences among ages. The experimental results also showed that insulin could increase the expression level of the UCP gene by 40% compared with the control group.
Isolation of an Oocyte Stimulatory Peptide from the Ovarian Follicular Fluid of Water Buffalo (Bubalus bubalis) Gupta, P.S.P.;Ravindra, J.P.;Nandi, S.;Raghu, H.M.;Ramesha, K.P. 1557 Ovarian follicular fluid contains both stimulatory and inhibitory agents that influence the growth and maturation of the oocyte. In the present study, an attempt was made to isolate and study the biological properties of ovarian follicular fluid peptide(s) in buffaloes. Bubaline ovarian follicular fluid was made steroid- and cell-free. A protein fraction was obtained by saturation (at the 30-35% level) of the follicular fluid with ammonium sulfate. The protein fraction was purified with Sephadex G-50 gel filtration chromatography, and a single peak was obtained in the eluate, which was lyophilized. SDS-PAGE of the lyophilized fraction revealed a single band, and the molecular weight of the peptide was 26.6 kDa. The peptide stimulated cumulus cell expansion and the in vitro maturation rate of buffalo oocytes in a dose-dependent manner when it was incorporated at different dose levels (0, 10, 25, 50, 100 and 1,000 ng $ml^{-1}$ of maturation medium). The basic culture medium consisted of TCM 199 with bovine serum albumin (0.3%). The in vitro maturation rates were comparable to those obtained with a positive control medium (TCM 199+20 ng EGF $ml^{-1}$+steer serum (20%)). Further purification and biological assays may throw more light on the nature and functions of this peptide.
Use of N-alkanes to Estimate Intake and Digestibility by Beef Steers Premaratne, S.;Fontenot, J.P.;Shanklin, R.K. 1564 The objective of the study was to evaluate the use of n-alkanes to estimate DM intake and digestibility by beef cattle. Six steers were blocked (3 blocks, 2 animals/block) according to body weight (279±19 kg) and randomly allotted within blocks to two diets (3 steers/diet). A second trial was conducted with the same animals (321±18 kg) after 36 days (d), using a switch-back design. The diets consisted of two types of chopped sun-cured hay: alfalfa (Medicago sativa L.) hay, or a fescue (Festuca arundinacea Schreb.) and alfalfa mixture, which were fed in equal amounts to the steers. Animals were dosed with $C_{32}$ and $C_{36}$ alkanes, employing an intra-ruminal controlled-release device, at the beginning of each trial. Hay intake per animal was measured from d 6 to 12, and subsamples were taken for chemical analysis. Rectal samples of feces were taken from each animal once daily from d 8 to 14, freeze-dried, and ground prior to alkane analysis. Alkanes were extracted from the ground hay and feces. Feed intake was calculated from the dose rate of the $C_{32}$ alkane and the herbage and fecal concentrations of adjacent odd ($C_{33}$ or $C_{31}$) and even ($C_{32}$) chain-length alkanes.
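The abstract does not spell out the equation; the sketch below (ours, with hypothetical numbers) shows the standard double-alkane estimator of Mayes et al. (1986), which studies of this kind typically use.

```python
def alkane_intake(dose_c32, feces_c33, feces_c32, herbage_c33, herbage_c32):
    """Estimate herbage DM intake (kg/day) by the double-alkane method.

    Standard Mayes et al. (1986) estimator, shown for illustration only.
    Concentrations are in mg/kg DM and the C32 dose rate in mg/day; equal
    fecal recovery of the adjacent C33 (natural) and C32 (dosed) alkanes
    is assumed.
    """
    ratio = feces_c33 / feces_c32
    return (ratio * dose_c32) / (herbage_c33 - ratio * herbage_c32)

# Hypothetical numbers, for illustration only:
print(alkane_intake(dose_c32=200.0, feces_c33=105.0, feces_c32=95.0,
                    herbage_c33=32.0, herbage_c32=1.5))  # kg DM/day
```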
Crude protein, NDF, ADF and ash concentrations and in vitro dry matter digestibility (IVDMD) were 17.7, 42.2, 28.4, 7.9 and 71.7% for alfalfa, and 12.4, 56.5, 30.4, 6.9 and 69.1% for the fescue/alfalfa mixture, respectively. For both diets, intake estimated from the $C_{33}$:$C_{32}$ ratio was not different from the measured intake, but intake estimated from the $C_{31}$:$C_{32}$ ratio was lower (p<0.05) than the measured intake. The average estimated forage intake from the $C_{33}$:$C_{32}$ ratio was 4.86 and 0.69% below the measured intake for the alfalfa and fescue/alfalfa mixed diets, respectively. The respective estimates with the $C_{31}$:$C_{32}$ ratio were 9.59 and 11.33% below the measured intake. According to these results, the $C_{33}$:$C_{32}$ alkane ratio is better than the $C_{31}$:$C_{32}$ ratio for the estimation of intake by beef steers.
Evaluation of Mulberry (Morus alba) as Potential Feed Supplement for Ruminants: The Effect of Plant Maturity on In situ Disappearance and In vitro Intestinal Digestibility of Plant Fractions Saddul, D.;Jelan, Z.A.;Liang, J.B.;Halim, R.A. 1569 The in situ nylon bag degradation and in vitro intestinal digestibility of the dry matter (DM) and crude protein (CP) of mulberry (Morus alba) plant fractions were studied at four harvest stages: 3 (W3), 5 (W5), 7 (W7) and 9 (W9) weeks. The degradability of the DM and CP of the whole plant and stem fractions declined significantly (p<0.01) with advancing plant maturity, in the order W3>W5 and W7>W9, and W3>W5>W7>W9, respectively. The degradation of the DM and CP of the leaf fraction was also influenced by plant maturity, but no trend was observed. The degradation of the DM and CP of the whole plant and leaves increased rapidly during the first 48 and 24 h of incubation, respectively, when maximum degradation was reached. The in vitro intestinal digestibility of CP was influenced more by the residence time in the rumen than by plant maturity. This study showed that mulberry is suitable as a supplement, particularly to low-quality roughages, providing a source of rapidly available nitrogen to the rumen microbes and hence improving roughage degradability and intake.
The Relationships between Plasma Insulin-like Growth Factor (IGF)-1 and IGF-Binding Proteins (IGFBPs) to Growth Pattern, and Characteristics of Plasma IGFBPs in Steers Lee, H.G.;Hidari, H.;Kang, S.K.;Hong, Z.S.;Xu, C.X.;Kim, S.H.;Seo, K.S.;Yoon, D.H.;Choi, Y.J. 1575 This study was conducted to determine the characteristics of IGFBPs in the plasma of steers and to profile the relationship of growth with plasma IGF-1 and IGFBPs with aging in Holstein steers. Four IGFBP bands at molecular weights of 38-43, 34, 29-32 and 24 kDa were detected by western ligand blot assay using $^{125}I-IGF-1$. On the basis of immunoblotting with anti-bovine IGFBP-2 and -3 antisera, we observed the band for IGFBP-2 at approximately 34 kDa, while the IGFBP-3 band was detected at 38-43 kDa and 34 kDa in adult steers and calves. The IGFBP-3 antiserum used on the blots exhibited significant cross-reactivity with the 34 kDa IGFBP-2. Furthermore, the 38-43 kDa IGFBP-3 bands were reduced to a 36 kDa band after deglycosylation, whereas the 34 kDa IGFBP-2 remained intact. Plasma IGF-1, IGFBP-3 and the other IGFBPs were stable throughout the day.
The change in live weight was found to be positively correlated with the plasma IGF-1 concentration (r = 0.6801, n = 64, p<0.05) and plasma IGFBP-3 (r = 0.6321, n = 64, p<0.05), and inversely correlated with plasma IGFBP-2 (r = -0.2919, n = 64, p<0.05). Furthermore, plasma IGF-1 was positively correlated with plasma IGFBP-3 (r = 0.6191, p<0.001), but was not correlated with plasma IGFBP-2. The proportion of IGFBP-2 in the total IGFBPs was higher in calves than in adult steers (p<0.05) and decreased with growth, whereas that of IGFBP-3 increased with increasing live weight (p<0.05). The ratio of IGFBP-3 to IGFBP-2 (BP-3/BP-2) increased with increasing live weight. Therefore, the changes in the plasma IGF-1 level with increasing live weight may be related to the changes in the plasma IGFBP-3 level, and IGFBP-2 may play an important role in the anabolic action of IGF-1 during body growth in calfhood in Holstein steers.
Effect of Additives on the Fermentation Quality and Residual Mono- and Disaccharides Compositions of Forage Oats (Avena sativa L.) and Italian Ryegrass (Lolium multiflorum Lam.) Silages Shao, Tao;Shimojo, M.;Wang, T.;Masuda, Y. 1582 This study aimed to evaluate the effects of silage additives on the fermentation quality and residual mono- and disaccharide composition of silages. Forage oats (Avena sativa L.) and Italian ryegrass (Lolium multiflorum Lam.) were ensiled with glucose, sorbic acid and pre-fermented juice of epiphytic lactic acid bacteria (FJLB) treatments for 30 days. In both grass silages, although the respective controls had higher contents of butyric acid (20.86 and 33.45 g $kg^{-1}$ DM in forage oats and Italian ryegrass, respectively) and ammonia-N/total nitrogen (100.07 and 114.91 g $kg^{-1}$) than the other treated silages, the fermentation was clearly dominated by lactic acid bacteria. This was well indicated by the low pH values (4.27, 4.38) and the high lactic acid/acetic acid ratios (6.53, 5.58) and lactic acid contents (61.67, 46.85 g $kg^{-1}$ DM). Glucose addition significantly (p<0.05) increased the lactic acid/acetic acid ratio and significantly (p<0.05) decreased the pH value, the ammonia-N/total nitrogen ratio, and the contents of butyric acid and volatile fatty acids compared with the control; however, it gave slightly but significantly (p<0.05) higher butyric acid and lower residual mono- and disaccharide contents than the sorbic acid and FJLB additions. Sorbic acid addition showed the lowest ethanol, acetic acid and ammonia-N/total nitrogen, and the highest contents of residual fructose, total mono- and disaccharides and dry matter, as well as a high lactic acid/acetic acid ratio and lactic acid content. FJLB addition had the lowest pH value and the highest lactic acid content, the most intensive lactic acid fermentation occurring in the FJLB-treated silages; this resulted in a faster accumulation of lactic acid and a faster pH reduction. Sorbic acid and FJLB additions depressed clostridial and other undesirable bacterial fermentations, which decreased the loss of water-soluble carbohydrates and saved fermentable substrate for the lactic acid fermentation.
Effects of Testosterone, 17β-estradiol, and Progesterone on the Differentiation of Bovine Intramuscular Adipocytes Oh, Young Sook;Cho, Sang Bum;Baek, Kyung Hoon;Choi, Chang Bon 1589 The aim of this study was to investigate the effects of testosterone, 17β-estradiol, and progesterone on the differentiation of bovine intramuscular adipocytes (BIA). Stromal-vascular (SV) cells were obtained from the M. longissimus dorsi of 20-month-old Korean (Hanwoo) steers and were cultured in DMEM containing 5% FBS.
The proliferated BIA were induced to differentiate with 0.25 μM dexamethasone, 0.5 mM 1-methyl-3-isobutyl-xanthine and 10 μg/ml insulin. During differentiation, the cells were treated with testosterone, 17β-estradiol, or progesterone at concentrations of $10^{-10}$, $10^{-9}$, or $10^{-8}$ M for 12 days. Regardless of its concentration, testosterone remarkably reduced the lipid droplets in the cytosol of the BIA. On the other hand, 17β-estradiol and progesterone increased the accumulation of lipid droplets in the BIA. Testosterone significantly (p<0.05) decreased GPDH activities in a dose-dependent pattern. 17β-Estradiol treatment of BIA during differentiation, however, increased GPDH activity, showing the highest activity (11.3 nmol/mg protein/min) at $10^{-10}$ M. Treatment of BIA with progesterone also increased (p<0.05) GPDH activity, with the highest activity (13.8 nmol/mg protein/min) at $10^{-9}$ M. In conclusion, the results of the current study suggest that testosterone inhibits the differentiation of BIA by suppressing GPDH activity, while 17β-estradiol and progesterone have the opposite effects.
Growth, Feed Efficiency, Behaviour, Carcass Characteristics and Meat Quality of Goats Fed Fermented Bagasse Feed Ramli, M.N.;Higashi, M.;Imura, Y.;Takayama, K.;Nakanishi, Y. 1594 The effects of long-term feeding of diets based on bermudagrass hay supplemented with lucerne hay cube (LH) or fermented bagasse feed (FBF) on the growth rate, feed efficiency, behaviour, gut development, carcass characteristics and meat quality of goats were investigated. Six spring-born, 8-month-old male crossbred (Japanese Saanen×Tokara native goat) bucks weighing a mean of 21.6 kg were allotted to 2 treatment groups (3 animals each), and each animal had ad libitum access to its feeds, i.e. bermudagrass hay (basal diet) plus either LH or FBF, throughout the experiment. The FBF was produced by the solid-state fermentation, with Aspergillus sojae, of substrates containing dried sugarcane bagasse mixed with wheat bran in a ratio of 1:3 (w/w DM). Live body weight, final weight and average daily gain were not different between treatments. The average basal diet intake of goats fed the FBF diet was significantly higher than that of goats fed the LH diet (p<0.05), but the average dry matter intake (DMI; g/day and g/$W^{0.75}$), feed conversion ratio, and digestible crude protein (DCP) and total digestible nutrient (TDN) intakes of the experimental diets were not significantly different between treatments. Goats fed the LH and FBF diets had similar eating, rumination, resting and drinking behaviours, and similar blood constituents except for the phosphorus content. Slaughter and carcass weights, net meat percentage [(total meat/carcass weight)×100], loin ratio [(loin/total meat)×100] and rib-eye area were not different between treatments. However, goats fed the FBF diet had a lower dressing percentage and a higher bone/muscle ratio than goats fed the LH diet (p<0.01). The empty gut and gut fill of goats fed the FBF diet were significantly greater (p<0.05 and p<0.01, respectively) than those of goats fed the LH diet. The weights of the rumen and abomasum were also significantly heavier in goats fed the FBF diet (p<0.05), but the length and density of the rumen papillae were not different between treatments.
Although the meat composition of the loin was not different between the groups, the meat of goats fed the FBF diet was superior to that of the LH diet in flavor, aroma and overall quality of the loin (p<0.01). In conclusion, the nature of the diet consumed voluntarily did not affect the subsequent growth, nutrient intake or behaviour of the goats, but it partly influenced the carcass traits and the sensory evaluation of the meat when either LH or FBF was fed with bermudagrass hay.
Relationship of Early Lactation and Bovine Somatotropin to Water Metabolism and Mammary Circulation of Crossbred Holstein Cattle Maksiri, W.;Chanpongsang, S.;Chaiyabutr, N. 1600 The study was carried out to evaluate the effect of exogenous bovine somatotropin on water metabolism in relation to mammary function in early lactation of crossbred Holstein cattle. Ten 87.5% crossbred Holstein cattle were divided into two groups of 5 animals each. At day 60 of lactation, the control group was given a placebo, while the animals in the experimental group were given recombinant bovine somatotropin (rbST) by subcutaneous injection of 500 mg of rbST (14-day prolonged-release rbST). In rbST-treated animals, milk yield increased 19.8%, which coincided with a significant increase in water intake (p<0.01), while daily DM intake was not different from that of the control animals. The water turnover rate, in absolute values, significantly increased (p<0.05), while the biological half-life of water did not change in rbST-treated animals. Total body water (TBW) and total body water space (TOH), in absolute values, significantly increased (p<0.01) in rbST-treated animals, while they decreased in the control animals. Absolute values of empty body water (EBW) markedly increased (p<0.05), which was associated with an increase in the extracellular fluid (ECF) volume. Absolute values of plasma volume and blood volume were also significantly increased (p<0.05) in rbST-treated animals. The increase in mammary blood flow in rbST-treated animals was proportionally higher than the increase in milk production. The plasma IGF-1 concentration was significantly increased (p<0.01) in rbST-treated animals compared with the control animals during the treatment period. The milk fat concentration increased during rbST treatment, while the concentrations of both protein and lactose in milk were not affected. The present results indicate that rbST exerts its effect through an increase in both TBW and EBW. The increased ECF compartment in rbST-treated animals might partly result from the decrease in fat mass during early lactation. The action of rbST on mammary blood flow might not be mediated solely by the action of IGF-1 in increasing blood flow to the mammary gland. The elevation of body fluid during rbST treatment in early lactation may be partly a result of an increase in mammary blood flow distributing milk precursors to the gland.
Effect of Dietary Lipid Sources on Growth, Enzyme Activities and Immuno-hematological Parameters in Catla catla Fingerlings Priya, K.;Pal, A.K.;Sahu, N.P.;Mukherjee, S.C. 1609 Ninety advanced Catla catla fingerlings (av. wt. 16 g) were randomly distributed into six treatment groups with three replicates each for an experimental period of 60 days to study the effect of dietary lipid source on growth, enzyme activities and immuno-hematological parameters.
Six isoprotein (40.0-41.9%) and isocaloric (4,260 kcal $kg^{-1}$) semi-purified diets were prepared with varying levels of soybean oil (SBO) and cod liver oil (CLO) within a total of 8% lipid, viz. $D_1$ (control), $D_2$ (8% SBO), $D_3$ (6% SBO and 2% CLO), $D_4$ (4% SBO and 4% CLO), $D_5$ (2% SBO and 6% CLO) and $D_6$ (8% CLO). The highest SGR was noted in the $D_5$ group (0.73±0.03), which was similar to the $D_3$ (0.71±0.02) and $D_4$ (0.69±0.01) groups. The activities of intestinal lipase, hepatic glucose-6-phosphate dehydrogenase (G6PDH) and aspartate aminotransferase (AST) in the lipid treatment groups were significantly higher (p<0.05) than in the control group. The respiratory burst activity of the phagocytes (nitroblue tetrazolium (NBT) assay) was highest in $D_2$ (1.95±0.21) followed by $D_3$ (1.19±0.15), both of which were significantly (p<0.05) higher than in the other groups. The globulin level was significantly higher in $D_3$ (1.29±0.08) than in the other groups except $D_4$. Hemoglobin content and total erythrocyte count did not show any significant differences. From this study, it is concluded that a diet containing 6% soybean oil and 2% cod liver oil ($D_3$) yields higher growth and immune response in Catla catla fingerlings and would be cost-effective.
The Effect of Complementary Access to Milk Replacer to Piglets on the Activity of Brush Border Enzymes in the Piglet Small Intestine Wang, J.F.;Lundh, T.;Westrom, B.;Lindberg, J.E. 1617 The activity of the brush border enzymes (sucrase, lactase and maltase) in the piglet small intestine was evaluated, as well as piglet performance during the weaning period. There were two treatment groups: piglets from six litters were fed dry feed plus milk replacer (group M), and piglets from six litters were fed dry pelleted feed only (group C). One piglet from each litter was sacrificed on day 3 before weaning and on days 3, 10 and 17 postweaning. Providing milk replacer resulted in an increased piglet live weight at weaning (p<0.001) and until the termination of the experiment (p<0.001). A slightly higher (p<0.16) level of protein was measured in the jejunum of group M piglets compared with group C piglets. Before weaning, the activity of lactase was high in the jejunum of group C piglets. Lactase activity was lowered in the jejunum of group C piglets and in the distal jejunum of group M piglets during the postweaning period compared with the preweaning period (p<0.05). Lowered lactase activity in the distal jejunum of piglets was found at days 10 and 17 postweaning. No treatment differences were found in the activity of lactase in the piglet jejunum, and no treatment differences were seen in the activities of maltase and sucrase either. However, weaning caused a higher activity of sucrase in the distal jejunum of group M piglets compared with the preweaning period. In conclusion, providing milk replacer to piglets resulted in improved growth performance. Feeding milk replacer did not influence the activities of lactase, maltase and sucrase in the jejunum of piglets. Weaning resulted in a markedly lowered activity of lactase, while no dramatic changes in the activity of maltase took place during the period around weaning.
Studies on the Concentrations of Cd, Pb, Hg and Cr in Dog Serum in Korea Park, S.H.;Lee, M.H.;Kim, S.K. 1623 Heavy metal pollution has become a serious health concern in recent years. Dogs are a very good indicator of the pollution load on the environment. This study estimated the heavy metal contents in the serum of dogs from domestic districts and assessed the age, sex, feeding habits, living area and breeding environment of the dogs and the smoking habits of the owners.
Dogs share people's environment and are exposed to the action of the same pollutants; the findings suggest that dogs can be used to monitor environmental heavy metal pollution. The mean concentrations of heavy metals in dog serum from 204 samples (108 male and 96 female) were 0.22±0.01 μg/ml, 0.24±0.04 μg/ml, 0.61±0.08 μg/ml, and 0.50±0.06 μg/ml (for Cd, Hg, Pb, and Cr), respectively. Concentrations of Pb, Cd, Hg, and Cr in dog serum were higher in Yeongnam (including Ulsan) and Seoul than in Chungchong and Honam; the Pb concentration in particular was significantly higher (p<0.01). Concentrations of Cd, Hg, Pb, and Cr in serum increased with age (p<0.05). When commercial pet food was provided to dogs, the Cd and Cr concentrations in dog serum were significantly higher than in dogs fed a human diet (p<0.01 for Cd and p<0.05 for Cr). Heavy metal concentrations in dogs with smoking owners were higher than in dogs with non-smoking owners, although the difference was not significant.
Effects of Various Sources and Levels of Chromium on Performance of Broilers Suksombat, Wisitiporn;Kanchanatawee, S. 1628 Three hundred and twenty-four one-day-old mixed-sex broiler chicks were assigned at random to 9 treatment groups. The experimental design was a 3×3 factorial arrangement. During the starter period (weeks 1-3), the chicks were fed ad libitum a corn-soybean meal based diet that contained 23% crude protein and 3,200 kcal/kg metabolizable energy (NRC, 1994) and was supplemented with organic or inorganic forms of chromium. Two organic chromium products, chromium yeast (Cr-Yeast, from Alltech Biotechnology Corporation Limited) and chromium picolinate (Cr-Pic), were supplemented at rates of 200, 400 and 800 ppb. One inorganic product, chromium chloride, was supplemented at rates of 200, 400 and 800 ppb. During the finishing period (weeks 4-7), the corn-soybean meal based diet contained 20% crude protein and 3,200 kcal/kg metabolizable energy (NRC, 1994), and the same levels of chromium as in the starter period were added. No significant differences were observed among the treatment groups in average daily gain, feed intake, body weight gain, feed conversion ratio or mortality. The carcass percentage of broilers receiving 200 and 400 ppb organic chromium (Cr-Yeast or Cr-Pic) was significantly increased (p<0.01). In addition, the supplementation of organic chromium reduced (p<0.05) breast meat fat content but increased breast meat protein content. The addition of chromium to the diet had no effect on boneless breast, skinless boneless breast, boneless leg or skinless boneless leg, but it reduced the percentage of sirloin muscle. Total cholesterol and triglycerides were reduced by organic Cr supplementation, and supplementation with 200 and 400 ppb of both Cr-Yeast and Cr-Pic showed the lowest total cholesterol. The effects of the type of Cr on HDL and LDL were variable; however, LDL increased with an increasing level of Cr supplementation. This trial indicates that organic chromium tended to improve growth performance and carcass composition and reduced total cholesterol and triglycerides. The optimum level of organic chromium supplementation was 200 ppb.
Effect of Non-starch Polysaccharides and Resistant Starch on Mucin Secretion and Endogenous Amino Acid Losses in Pigs Morel, Patrick C.H.;Melai, J.;Eady, S.L.;Coles, G.D. 1634 Generally, dietary fibre (DF) includes lignin, non-starch polysaccharides (NSP) and resistant starch (RS). In monogastric species, low levels of dietary fibre in the diet are associated with various diseases, while high levels reduce nutrient digestibilities. In this study, the effects of different types and levels of NSP (soluble: β-glucan; insoluble: cellulose) and of resistant starch on mucin secretion and endogenous nitrogen and amino acid losses in pigs were investigated. A total of 25 five-week-old weaner pigs (9.5±1.5 kg) were randomly allocated to five experimental diets. Different levels of a purified barley β-glucan (BG) extract (5 or 10% of $Glucagel^{(R)}$ β-glucan, providing 4 or 8% BG in the diet) and of resistant starch (8.3 or 16.6% of Hi-$Maize^{TM}$, providing 5 or 10% RS in the diet) were substituted for wheat starch in a purified diet in which enzymatically hydrolysed casein was the sole source of protein. The diets were fed for 21 days. No statistically significant differences between treatments (p>0.05) were observed for growth performance or organ weights. No difference in ileal starch digestibility was observed between pigs on the cellulose and β-glucan diets. However, as the level of resistant starch in the diet increased, the ileal starch digestibility decreased (p<0.05). The inclusion of resistant starch in the diet (5 or 10%) did not increase mucin production compared with the cellulose-only diet. However, as the level of β-glucan in the diet increased, both the crude mucin in the digesta dry matter and the crude mucin per kg dry matter intake increased (p<0.05). Pigs fed the diet containing 8% β-glucan had a higher flow of endogenous losses than those fed the diets including 5 or 10% resistant starch or 4% β-glucan. In conclusion, dietary inclusion of resistant starch increased the amount of starch reaching the large intestine without any effect on mucin secretion or on the endogenous nitrogen and amino acid losses in the small intestine, while the addition of β-glucan to a diet containing cellulose increased both mucin secretion and the endogenous amino acid and nitrogen losses in the small intestine.
The Effects of Dietary Biotite V Supplementation as an Alternative Substance to Antibiotics in Growing Pigs Chen, Y.J.;Kwon, O.S.;Min, B.J.;Son, K.S.;Cho, J.H.;Hong, J.W.;Kim, I.H. 1642 This study was conducted to investigate the effects of Biotite V supplementation on growth performance, nutrient digestibility and blood constituents, and to evaluate whether Biotite V could replace antibiotics in growing pig diets. One hundred twenty pigs with an initial body weight of 18.35±0.15 kg were used in a 28-day growth trial. The pigs were allotted to four treatments by sex and body weight in a randomized complete block design, with six replicate pens per treatment and five pigs per pen. The four dietary treatments were: 1) NC (basal diet without antibiotics), 2) PC (basal diet+0.1% CTC), 3) NCBV (NC diet+0.5% 200-mesh Biotite V) and 4) PCBV (PC diet+0.5% 200-mesh Biotite V). Through the entire experimental period, ADG tended to increase in the NCBV and PCBV treatments compared with the NC and PC treatments, respectively, but no significant differences were observed (p>0.05).
ADFI was slightly lower in the NCBV and PCBV treatments than in the NC and PC treatments, without significant differences (p>0.05). Gain/feed in the PC and PCBV treatments was improved significantly compared to the NC treatment (p<0.05). N and Ca digestibilities were higher in the PCBV treatment than in the PC treatment (p<0.05). DM and P digestibilities were not affected by the addition of Biotite V (p>0.05). RBC, HCT, Hb, lymphocytes and monocytes were increased numerically in the NCBV and PCBV treatments compared to the NC and PC treatments (p>0.05). WBC was lower in the treatment groups than in the NC treatment, but no significant differences were observed (p>0.05). In conclusion, dietary supplementation of Biotite V can improve gain/feed and some nutrient digestibilities in growing pigs, and it shows potential to replace antibiotics in swine diets.
Properties of Cholesterol-reduced Butter and Effect of Gamma Linolenic Acid Added Butter on Blood Cholesterol
Jung, Tae-Hee; Kim, Jae-Joon; Yu, Sang-Hoon; Ahn, Joungjwa; Kwak, Hae-Soo
The present study was carried out to develop cholesterol-reduced and gamma linolenic acid (GLA)-added butter and to examine the changes in chemical and sensory properties and the cholesterol-lowering effect of GLA addition. The cholesterol removal rate by β-cyclodextrin reached 93.2% in butter before GLA addition. The thiobarbituric acid (TBA) value of cholesterol-reduced and GLA-added butter increased slowly up to 4 weeks and plateaued thereafter. The TBA value was significantly increased with 2% GLA addition compared with no GLA addition. The production of short-chain free fatty acids (FFA) increased with storage in all treatments. From 4 weeks of storage, the amount of short-chain FFA in the 2% GLA-added group was significantly higher than in the other groups. Among sensory characteristics, color, greasiness and overall acceptability were most affected by GLA addition; however, the rancidity value of 2% GLA addition was significantly different from those of the control and of cholesterol-reduced butter without added GLA at 0, 6 and 8 weeks of storage. Among groups, no difference was found in texture at any storage period. The smallest increase in total blood cholesterol in rats was found in the group fed 2% GLA-added and cholesterol-reduced butter for 8 weeks, compared with controls. The present results showed the possibility of developing cholesterol-reduced and GLA-added butter without much difference in chemical, rheological and sensory properties, and indicated a slowed increase in blood total cholesterol in rats.
Fatty Acid Profiles of Various Muscles and Adipose Tissues from Fattening Horses in Comparison with Beef Cattle and Pigs
He, M.L.; Ishikawa, S.; Hidari, H.
The present studies were designed to provide new information on the fatty acid profiles of various muscles and adipose tissues of fattening horses in comparison with beef cattle and pigs. In the first study, lipids were extracted from subcutaneous and intermuscular adipose tissues and from the longissimus dorsi and biceps femoris muscles of fattening Breton horses (n = 8) with an average body weight of 1,124 kg. In the second study, lipids were extracted from subcutaneous and intermuscular adipose tissues and the longissimus dorsi muscle of fattening horses (n = 13), Japanese Black beef cattle (n = 5), Holstein steers (n = 5) and fattening pigs (n = 5). The fatty acids in the lipid samples were determined by gas chromatography after methylation by a combined base/acid methylation method.
It was found that the lipids from horse subcutaneous and intermuscular adipose tissues contained more (p<0.05) polyunsaturated fatty acids (PUFA), mainly composed of linoleic acid (C18:2) and linolenic acid (C18:3), than those in the muscles. The weight percent of conjugated linoleic acid (CLA cis-9, trans-11) in lipids from the biceps femoris muscle was 0.22%, which was higher (p<0.05) than that from the other depots. The horse lipids were higher (p<0.05) in PUFA but lower (p<0.05) in SFA and MUFA in comparison with those of the cattle and pigs. The percentages of C18:2 and C18:3 fatty acids in the horse lipids were, respectively, 2-8 fold and 5-18 fold higher (p<0.05) than those of the cattle and pigs. The percentages of CLA (cis-9, trans-11) in the horse lipids (0.14-0.16%) were very close to those of the pigs (0.18-0.19%) but much lower (p<0.05) than those of the Japanese Black beef cattle (0.55-0.94%) and Holstein steers (0.46-0.71%). The results indicated that the fatty acid profiles of lipids from different muscle and adipose tissues of fattening horses differed significantly. In comparison with the beef cattle and pigs, the horse lipids contained more C18:2 and C18:3 but less CLA.
Sex-linked Dwarf Gene for Broiler Production in Hot-humid Climates
Islam, M.A.
This review examines the sex-linked dwarf gene in broiler production in hot-humid climates. Introduction of the sex-linked dwarf gene, especially in hot, harsh tropical environments, brings a great advantage for broiler production, as the heavy broiler parent suffers under the stress of these adverse climates. Sex-linked dwarf genes reduce body weight and egg weight but are superior for adaptability under harsh tropical environments, with a lower requirement for housing and feed, better survivability and reproductive fitness, fewer defective eggs, more hatching eggs, better fertility, hatchability and feed conversion efficiency, and greater resistance to disease. Overall, the cost of chick production from dwarf hens is lower than from their normal siblings. Market weights of broilers from sex-linked dwarf dams are almost similar to those of broilers from normal dams with normal sires, but the net benefit of broiler production from sex-linked dwarf dams is found to be greater than that of broilers from normal dams. This is most important for rural communities in Bangladesh and in other countries where similar environmental and socio-economic conditions exist. Therefore, sex-linked dwarf hens might be used in broiler breeding plans as well as in broiler production in the tropics.
Comparing survival functions with interval-censored data in the presence of an intermediate clinical event
Sohee Kim1, Jinheum Kim2 and Chung Mo Nam3
BMC Medical Research Methodology 2018; 18:98
© The Author(s) 2018
Published: 1 October 2018
In the presence of an intermediate clinical event, the analysis of time-to-event survival data by conventional approaches, such as the log-rank test, can produce biased results because of length bias. In the present study, we extend the work of Finkelstein and of Nam & Zelen to propose new methods for handling interval-censored data with an intermediate clinical event using multiple imputation. The proposed methods consider two types of weights in multiple imputation: 1) the uniform weight method and 2) the weighted weight method. Extensive simulation studies were performed to compare the proposed tests with existing methods with regard to type I error and power. Our simulation results demonstrate that, for all scenarios, our proposed methods exhibit superior performance compared with the stratified log-rank and log-rank tests. Data from a randomized clinical study testing the efficacy of sorafenib/sunitinib vs. sunitinib/sorafenib in metastatic renal cell carcinoma were analyzed under the proposed methods to illustrate their performance on real data. In the absence of intensive iterations, our proposed methods show superior performance compared with the stratified log-rank and log-rank tests with regard to type I error and power.
Keywords: Intermediate clinical event; Time-to-event; Length-biased; Interval-censored; Multiple imputation
Background
In clinical trials and longitudinal studies, a subject under study may experience an intermediate clinical event (IE) before the event of interest. The occurrence of the IE may induce changes in the survival distribution. An example of a length-biased problem due to the IE is the heart transplantation study [1], in which it is necessary to know whether a heart transplant would be beneficial: the waiting time of subjects who eventually have a heart transplant must be long enough to receive treatment, whereas there is no such requirement for subjects who do not have a heart transplant. To resolve length-biased problems due to the IE, time-dependent Cox regression and landmark analyses were conducted [1, 2]. Score tests based on counterfactual variables were derived by Lefkopoulou and Zelen [3] and Nam and Zelen [4]. Moreover, when the primary outcome is interval-censored, the situation is more complicated. Interval-censored data are data for which the exact failure times are not known but are known to have occurred between certain time points. Extensive studies are available on statistical approaches for analyzing interval-censored data. A non-parametric maximum likelihood estimation (NPMLE) of the survival function using the Newton-Raphson algorithm has been proposed [5]. Alternatively, a self-consistent expectation-maximization algorithm was suggested to compute the maximum likelihood estimators [6]. Dempster et al. [7] and Finkelstein [8] used the discrete-time proportional hazards model to implement the estimation of weighted log-rank tests for interval-censored data. A log-rank-type test was studied under the logistic model by applying Turnbull's algorithm to estimate the pseudo-risk and failure sets [9]. Furthermore, Zhao and Sun [10] improved on the previous study by considering a multiple imputation (MI) technique to estimate the covariance matrix of the generalized log-rank statistics.
A similar log-rank-type test using a different covariance matrix estimator was also proposed [11]. Kim et al. [12] studied another log-rank-type test that did not require an iterative algorithm; in their uniform weights algorithm, a subject contributes uniformly to each mass point sk of the set consisting of all distinct endpoints of the observed intervals. A few methods have been suggested for left-truncated and interval-censored (LTIC) data. Turnbull's characterization was corrected to accommodate both truncation and interval-censoring time points [13] and was extended to the regression model under the proportional hazards assumption [14]. Pan and Chappell noted that the NPMLE is inconsistent at early times with LTIC data, while the conditional NPMLE is consistent [15]. The estimation of the parameters in the Cox model with LTIC data and a rank-based test of survival functions with LTIC data have also been studied [16, 17]. However, the length-biased problem was not considered in those methods. Most existing methods for interval-censored data require intensive iterative computation. To avoid this, an imputation method was considered in this study: we can obtain complete or (left-truncated and) right-censored data after imputation of the (left-truncated and) interval-censored data, and standard statistical methods can then be applied to the imputed data. For right-censored data, a semiparametric algorithm was proposed [18], motivated by the data augmentation algorithm [19]. Pan proposed MI using Cox regression for interval-censored data by adapting the previous method [20], repeating the algorithm until the coefficient β(h) converged, where h denotes the iteration number. A two-sample test with interval-censored data was studied via MI based on the approximate Bayesian bootstrap [21], and MI for interval-censored data with auxiliary variables has also been studied [22]. Zhao and Sun [10] and Kim et al. [12] used MI techniques for computing the variance of test statistics. A log-rank test via MI was proposed [11]: after estimating the NPMLE using Turnbull's algorithm, the exact time is imputed for all data points, including right-censored data, from the conditional probability of the NPMLE. The methods of MI using Cox regression have been extended to accommodate left truncation [23, 24].
The purpose of this paper is to suggest new methods for analyzing LTIC data using MI. This study is organized as follows. First, we introduce the notation and framework for interval-censored survival data. In the theoretical model and study hypotheses section, we explain a statistical procedure to compare two survival functions in the presence of the IE. We then propose our methods and evaluate their properties through extensive simulation studies. An analysis of the randomized Phase III SWITCH study is undertaken in the real data example section, and we conclude with a short discussion.
Notation and framework
The survival time of a subject who experienced the IE must exceed the waiting time for the IE. This reflects the length bias phenomenon; namely, a subject has to live long enough to experience the IE. We assume that the IE is binary and that only two treatment groups exist. Let W and T be positive real-valued random variables representing the waiting time until the occurrence of the IE and the time to the event of interest, respectively. We assume independence of the event time T and the waiting time W.
Define a binary random variable Z by Z=I{W≤T}. The random variables T0 and T1 are defined as the times to the event of interest conditional on Z=0 and Z=1, respectively; namely, T=(1−Z)T0+ZT1. The probability density functions of W, T0, and T1 are denoted g(w), q0(t), and q1(t), respectively, and the corresponding survival functions are G(w)=Pr(W>w), Q0(t)=Pr(T0>t), and Q1(t)=Pr(T1>t). The case Z=1 implies that the waiting time is observed before the failure time T; therefore, T1 is left-truncated at the waiting time W. We consider {Bi, 1≤i≤N} as the truncation sets, specifically Bi=(Wi,∞), where N is the total number of subjects. We further assume that the time to the event of interest T is interval-censored: for the ith subject, we do not observe T exactly but observe T∈Ai, where Ai=(Li,Ri] is the interval in which the event of interest occurred. If Ri=∞, we call it a right-censored observation; if Li=Ri, we call it an exact observation. Let δi=1 if the ith subject has experienced the event of interest and δi=0 otherwise. We consider the set of N independent pairs {Ai,Bi} and assume Ai⊆Bi. We now characterize the following union set \(\tilde{C}^{k}\) of all observed points, including left-truncated points, which may have positive mass, as noted by Frydman [13], where k=0,1. For the survival distribution of T0, the Li and Ri of a subject who does not experience the IE are included in the set \(\tilde{C}^{0}\). When the IE occurs (Z=1), the waiting time W is a change point of the survival distribution, so the event time beyond W can no longer be observed for T0. Therefore, the waiting time W for the IE is included in \(\tilde{C}^{0}\) as a right-censoring time for T0, but event times exceeding W are not included in \(\tilde{C}^{0}\):
$$\tilde{C}^{0} = \{0\} \cup \{L_{i};\ 1 \le i \le N,\ Z_{i}=0\} \cup \{R_{i};\ 1 \le i \le N,\ Z_{i}=0\} \cup \{W_{i};\ 1 \le i \le N,\ Z_{i}=1\} \cup \{\infty\}$$
For the survival distribution of T1, the Li and Ri of a subject who experienced the IE, together with the waiting time W as a left-truncation time, are included in the set \(\tilde{C}^{1}\); subjects who do not experience the IE are not included:
$$\tilde{C}^{1} = \{0\} \cup \{L_{i};\ 1 \le i \le N,\ Z_{i}=1\} \cup \{R_{i};\ 1 \le i \le N,\ Z_{i}=1\} \cup \{W_{i};\ 1 \le i \le N,\ Z_{i}=1\} \cup \{\infty\}$$
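For concreteness, constructing these endpoint sets from observed data takes only a few lines of code. The sketch below is our own minimal illustration, assuming the observations are held in parallel Python sequences; the function name and data layout are not from the original paper.

```python
def candidate_sets(L, R, W, Z):
    """Build the endpoint sets C~0 and C~1 described above.

    L, R -- interval endpoints (R may be float('inf') if right-censored)
    W    -- waiting times until the intermediate clinical event (IE)
    Z    -- IE indicators (1 if the IE occurred before the event of interest)
    """
    c0, c1 = {0.0, float("inf")}, {0.0, float("inf")}
    for li, ri, wi, zi in zip(L, R, W, Z):
        if zi == 0:
            # No IE: the interval endpoints inform the distribution of T0.
            c0.update((li, ri))
        else:
            # IE observed: W right-censors T0 and left-truncates T1
            # (and may carry positive mass, as noted by Frydman).
            c0.add(wi)
            c1.update((li, ri, wi))
    return sorted(c0), sorted(c1)
```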
Theoretical model and study hypotheses
Nam and Zelen [4] studied a length-biased problem with right-censored data in the presence of the IE. For a subject who does not experience the IE, the waiting time W for the IE has been right-censored; namely, f(t,z=0)=q0(t)G(t). For a subject who experiences the IE at W, the survival distribution changes at w and the event occurs at t; namely, \(f(t,w,z=1)=Q_{0}(w)g(w)q_{1}(t)/Q_{1}(w)\). The hypothesis H0: q0A(t)=q0B(t), q1A(t)=q1B(t) versus the general alternative, the complement of H0, can be considered, where A and B are the two populations. Notably, the hypotheses are independent of the waiting time distribution. Nam and Zelen derived a score test based on a proportional hazards model for comparing the two sample survival functions; the score test can be written in counting process notation. Define \(Q_{kA}(t)=Q_{kB}(t)^{\beta_{k}}\) for k=0,1, N(t)=I(T≤t, δ=1), Z(t)=I(W≤t), and R(t)=I(T≥t), where δ=1 if the observation is uncensored and 0 otherwise. Let \(s_{i} = x_{i} z_{i}(t_{i})\,dN_{i}(t_{i})\), \(n_{i}=\sum_{j=1}^{N} x_{j} R_{j}(t_{i}) z_{j}(t_{i})\), and \(N_{i} =\sum_{j=1}^{N} R_{j}(t_{i}) z_{j}(t_{i})\), where x=1 if the observation is from A and 0 otherwise. The statistic \(\hat{S}_{1}\) can be written as
$$\hat{S}_{1} = \sum_{i=1}^{N} x_{i} z_{i}(t_{i})\,dN_{i}(t_{i}) - \sum_{i=1}^{N} p_{i}\,dN_{i}(t_{i}), \quad p_{i}=n_{i}/N_{i},$$
and under the null hypothesis it has mean zero and variance \(V(\hat{S}_{1}) = \sum_{i=1}^{N} p_{i}(1-p_{i})\,dN_{i}(t_{i})\). The statistic \(\hat{S}_{0}\) can be written as
$$\hat{S}_{0} = \sum_{i=1}^{N} x_{i} (1-z_{i}(t_{i}))\,dN_{i}(t_{i})-\sum_{i=1}^{N} \pi_{i}\,dN_{i}(t_{i}), \quad \pi_{i} =m_{i}/M_{i},$$
where \(r_{i} = x_{i} (1-z_{i}(t_{i}))\,dN_{i}(t_{i})\), \(m_{i}=\sum_{j=1}^{N} x_{j} R_{j}(t_{i}) (1-z_{j}(t_{i}))\), and \(M_{i} =\sum_{j=1}^{N} R_{j}(t_{i}) (1-z_{j}(t_{i}))\). The variance is \(V(\hat{S}_{0}) = \sum_{i=1}^{N} \pi_{i} (1-\pi_{i})\,dN_{i}(t_{i})\). Hence, an appropriate chi-square statistic with 2 degrees of freedom for testing H0 is given by \(\chi_{2}^{2} = \hat{S}_{1}^{2}/V(\hat{S}_{1}) + \hat{S}_{0}^{2}/V(\hat{S}_{0})\).
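Once (left-truncated) right-censored data are in hand, these score statistics can be computed directly. The sketch below is our own illustrative implementation, not the authors' code; it takes the natural reading that events occurring after the IE contribute to S1 and events before the IE contribute to S0, each compared against the group-A share of the matching risk set.

```python
import numpy as np

def nam_zelen_scores(t, w, delta, x):
    """Nam-Zelen score statistics (S0, S1) and variances for right-censored data.

    t     -- observed time (event or censoring) for each subject
    w     -- waiting time until the IE (np.inf if the IE never occurs)
    delta -- 1 if the event of interest was observed, else 0
    x     -- 1 if the subject belongs to group A, else 0
    """
    t, w, delta, x = map(np.asarray, (t, w, delta, x))
    S, V = np.zeros(2), np.zeros(2)       # index 0 -> S0, index 1 -> S1
    for i in np.flatnonzero(delta == 1):  # loop over observed events
        ti = t[i]
        at_risk = t >= ti                 # R_j(t_i)
        z_at_ti = w <= ti                 # z_j(t_i)
        k = int(w[i] <= ti)               # state of the failing subject
        stratum = at_risk & (z_at_ti == bool(k))
        n_risk = stratum.sum()
        if n_risk == 0:
            continue
        p = x[stratum].sum() / n_risk     # expected group-A share (p_i or pi_i)
        S[k] += x[i] - p
        V[k] += p * (1 - p)
    return S, V

# chi-square with 2 df: chi2 = S[0]**2 / V[0] + S[1]**2 / V[1]
```

Note that restricting the risk set for S1 to subjects with w ≤ t_i automatically handles the left truncation of T1 at the waiting time.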
Proposed methods
Multiple imputation converts interval-censored data to right-censored data so that standard methods can be applied, which simplifies an otherwise complicated situation. We propose two methods: 1) the uniform weight method and 2) the weighted weight method. The uniform method closely follows Kim et al. [12], and the weighted method closely follows Huang et al. [11], adapted to accommodate left truncation. After imputation, the score statistic \(\chi_{2}^{2}\) is used [4].
Uniform weight method
Kim et al. [12] assumed that the true failure time of a subject is uniformly distributed over \(\{s_{j};\ L_{i}< s_{j} \leq R_{i},\ j=1,\dots,m\}\). They calculated pseudo-risk and failure sets based on uniform weights and used MI techniques to estimate the variance matrix. In this study, we use MI techniques to derive the test statistics and their variance-covariance matrix, including the imputation of a true failure time, under the same assumption. We used a moderate imputation number (M=10) [20].
Step 0. Set r=1, where r denotes the imputation number.
Step 1. Characterize the set \(\tilde{C}^{k}\) for each of Tk, k=0,1. The distinct endpoint set is \(C_{i}^{k}=\{s_{j}^{k};\ L_{i}< s_{j}^{k} \leq R_{i},\ j = 1,\dots, m_{k}\}\), in which all the time points of \(\tilde{C}^{k}\) are ordered and labeled \(0=s_{0}^{k} < s_{1}^{k} < \dots < s_{m_{k}}^{k} = \infty\), for i=1,…,N and k=0,1.
Step 2. If the ith observation is interval-censored, generate a value randomly sampled from the set \(C_{i}^{k}\). Notably, after imputing the exact time, \(T_{0}^{(r)}\) is right-censored data, while \(T_{1}^{(r)}\) is left-truncated and right-censored data. To construct \(T_{0}^{(r)}\), we censor the data at Wi if Zi=1; to construct \(T_{1}^{(r)}\), we use only the data with Zi=1:
$$T_{0i}^{(r)} = \begin{cases} L_{i} & \text{if } \delta_{i}=0,\ Z_{i}=0 \\ W_{i} & \text{if } Z_{i}=1 \\ \text{a value sampled from } \{s_{j}^{0};\ L_{i}< s_{j}^{0} \leq R_{i}\} & \text{if } \delta_{i}=1,\ Z_{i}=0 \end{cases}$$
$$T_{1i}^{(r)} = \begin{cases} L_{i} & \text{if } \delta_{i}=0,\ Z_{i}=1 \\ \text{a value sampled from } \{s_{j}^{1};\ L_{i}< s_{j}^{1} \leq R_{i}\} & \text{if } \delta_{i}=1,\ Z_{i}=1 \end{cases}$$
Step 3. Based on the rth imputed (left-truncated) right-censored data, compute Nam and Zelen's statistics and their variances, \(S_{k}^{(r)}\) and \(V(\hat{S}_{k})^{(r)}\), for k=0,1.
Step 4. Repeat Steps 2 and 3 M(>0) times to obtain M pairs \((S_{k}^{(r)}, V(\hat{S}_{k})^{(r)})\), r=1,…,M, k=0,1.
Step 5. Compute the sum of the average within-imputation variance of Sk and the between-imputation variance of Sk:
$$\bar{S}_{k} = \frac{1}{M}\sum_{r=1}^{M} S_{k}^{(r)},$$
$$V_{1}(\hat{S}_{k})_{mi} = \frac{1}{M}\sum_{r=1}^{M} \hat{V}_{S_{k}}^{(r)} + \left(1+\frac{1}{M}\right)\frac{1}{M-1} \sum_{r=1}^{M}\left(S_{k}^{(r)}-\bar{S}_{k}\right)^{2}.$$
In the present study, we applied two types of variance. The first, described above, adds the within- and between-imputation variances. The second subtracts the between-imputation variance, which works well when the rate of follow-up loss is high [11]:
$$V_{2}(\hat{S}_{k})_{mi}= \frac{1}{M}\sum_{r=1}^{M} \hat{V}_{S_{k}}^{(r)} - \frac{1}{M-1} \sum_{r=1}^{M}\left(S_{k}^{(r)}-\bar{S}_{k}\right)^{2}.$$
Thus, we can test H0 based on
$$\chi_{2}^{2} =\bar{S}_{0}^{2} / V_{l}(\hat{S}_{0})_{mi} + \bar{S}_{1}^{2} / V_{l}(\hat{S}_{1})_{mi}, \quad l=1,2,$$
which follows a chi-square distribution with 2 degrees of freedom.
Weighted weight method based on NPMLE
We also propose a weighted weight method based on the NPMLE. We estimate the NPMLE from the original data set by Turnbull's algorithm and use the NPMLE as weights for the imputation. The data are LTIC when the IE occurs; therefore, we characterize the set that may have positive mass, including truncated points, as in the method above.
Step 1. Estimate the NPMLE from the original data set.
Step 2. Using the NPMLE as the weight, impute the data conditional on \(\{L_{i} <T_{i}^{(r)} \leq R_{i}\}\):
$$T_{0i}^{(r)} = \begin{cases} L_{i} & \text{if } \delta_{i}=0,\ Z_{i}=0 \\ W_{i} & \text{if } Z_{i}=1 \\ \text{a value sampled from the NPMLE on } (L_{i},R_{i}] & \text{if } \delta_{i}=1,\ Z_{i}=0 \end{cases}$$
$$T_{1i}^{(r)} = \begin{cases} L_{i} & \text{if } \delta_{i}=0,\ Z_{i}=1 \\ \text{a value sampled from the NPMLE on } (L_{i},R_{i}] & \text{if } \delta_{i}=1,\ Z_{i}=1 \end{cases}$$
Steps 3–5. Same as in the uniform weight method. Based on the rth imputed (left-truncated) right-censored data, we can calculate the average Nam and Zelen statistics and variance under the weighted weight method.
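A compact sketch of the uniform weight procedure (Steps 0–5) follows. It reuses the hypothetical nam_zelen_scores helper from the earlier sketch and is our own illustration rather than the authors' implementation; for brevity it imputes a single event time per subject and lets the score routine split contributions by state, a simplification of the separate T0/T1 construction above. Replacing the uniform draw with draws weighted by Turnbull NPMLE masses would give the weighted variant.

```python
import numpy as np
from scipy import stats

def mi_test(L, R, w, delta, x, M=10, rng=None, subtract=False):
    """Multiple-imputation test of H0 (chi-square with 2 df), uniform weights."""
    rng = np.random.default_rng(rng)
    L, R, w, delta, x = map(np.asarray, (L, R, w, delta, x))
    # Candidate mass points: distinct endpoints of the observed event intervals.
    grid = np.unique(np.concatenate([L[delta == 1], R[delta == 1]]))
    S, V = np.zeros((M, 2)), np.zeros((M, 2))
    for r in range(M):
        t = L.astype(float)                        # censored cases keep L_i
        for i in np.flatnonzero(delta == 1):
            cand = grid[(grid > L[i]) & (grid <= R[i])]
            t[i] = rng.choice(cand) if cand.size else R[i]
        S[r], V[r] = nam_zelen_scores(t, w, delta, x)
    S_bar = S.mean(axis=0)
    within = V.mean(axis=0)
    between = S.var(axis=0, ddof=1)                # (1/(M-1)) sum (S - S_bar)^2
    V_mi = within - between if subtract else within + (1 + 1 / M) * between
    chi2 = np.sum(S_bar ** 2 / V_mi)
    return chi2, stats.chi2.sf(chi2, df=2)
```

The `subtract` flag switches between the additive variance form V1 and the subtractive form V2 described above.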
Data generation
We generated the true failure time T0 and the waiting time W from the survival distributions \(Q_{0g}(t)=e^{-\lambda_{0g} t}\) and \(G_{g}(w) = e^{-\mu_{g} w}\) for g=A,B. Note that the probability of experiencing the IE is \(\theta_{g}=\mu_{g}/(\mu_{g} + \lambda_{0g})\). If W>T0, then T=T0. If W≤T0, a random variable T1 is generated from the truncated probability density function q1g(t)/Q1g(w) with W≤T1, where \(Q_{1g}(t)=e^{-\lambda_{1g} t}\) for g=A,B. Since T1 must exceed W, we can generate \(Q_{1g}(T_{1})\sim U(0,Q_{1g}(W))\). The value of λ1g is chosen from the mean time to failure, m1g, g=A,B. In our simulations, θA=0.5, θB∈{0.3, 0.4, 0.5}, λ0A=λ0B=1, m1A=1 or 2, and m1B∈{1, 1.25, 1.5, 2}. Define a censoring indicator δ that takes values 0 or 1 and follows a Bernoulli distribution with censoring probability cp, set to 0 or 0.3. We thus obtain the data set {Ti,Wi,δi,Zi,xi}, where x=1 for observations from A and 0 otherwise.
To generate interval-censored data, we first generated (Ti,δi) as above, with Ti and δi independent. We assumed that each subject was scheduled to be examined at p different visits. The first scheduled visit time E was generated from U(0,ψ); for a subject having the IE, the first scheduled visit time was at or after the waiting time W, i.e., E∼U(W,W+ψ). The length of the time interval between two follow-up visits was a constant, ψ=0.5. The survival time Ti is observed in one of the intervals (0,E], (E,E+ψ], …, (E+pψ,∞). Let Ek denote the kth scheduled visit; at each of these time points, a subject could miss the scheduled visit. Li is then defined as the largest attended visit time Ek less than Ti, and Ri as the smallest attended visit time greater than Ti. If δi=0, the observation of Ti is right-censored: Li is kept as it is and Ri is set to infinity. If δi=1, Ti is observed in (Li,Ri]. In the present study, we did not restrict the number of follow-up visits, because a subject having the IE must survive through the waiting time and therefore has the chance of a longer follow-up. We assumed that every subject attends the first visit at time E; thereafter, a subject may miss any of the follow-up visits and is more likely to miss later visits (with probability 0.1 during the first year and 0.2 thereafter, using the Bernoulli distribution). For comparison, we included the log-rank test and the stratified log-rank test (the stratum being occurrence of the IE) along with our proposed tests; for the log-rank and stratified log-rank tests, the true failure times were used rather than the interval-censored ones. We used the two variance forms, obtained by (1) adding and (2) subtracting the within- and between-imputation variances. The sample sizes were 50, 100, and 200 per group. The results reported are based on 1,000 replications for each scenario.
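The generator below sketches this scheme for a single group, assuming exponential waiting and failure times and the visit process just described. Parameter names follow the text; the function itself, including the fixed grid of 200 scheduled visits, is our illustrative reconstruction rather than the authors' code.

```python
import numpy as np

def generate_group(n, lam0, mu, m1, cp, psi=0.5, rng=None):
    """Simulate one group: true times, IE indicators, and intervals (L, R]."""
    rng = np.random.default_rng(rng)
    t0 = rng.exponential(1 / lam0, n)            # T0 ~ Q0
    w = rng.exponential(1 / mu, n)               # waiting time until the IE
    z = (w <= t0).astype(int)                    # IE indicator Z
    lam1 = 1.0 / m1
    # For Z=1, draw T1 > W via Q1(T1) ~ U(0, Q1(W)) with Q1(t) = exp(-lam1*t).
    t1 = -np.log(rng.uniform(0, np.exp(-lam1 * w))) / lam1
    t = np.where(z == 1, t1, t0)
    delta = rng.binomial(1, 1 - cp, n)           # 1 = event observed
    L, R = np.empty(n), np.empty(n)
    for i in range(n):
        start = w[i] if z[i] else 0.0
        sched = start + rng.uniform(0, psi) + psi * np.arange(200)
        miss_p = np.where(sched <= 1.0, 0.1, 0.2)
        keep = rng.random(200) > miss_p
        keep[0] = True                           # first visit always attended
        visits = sched[keep]
        before = visits[visits < t[i]]
        L[i] = before.max() if before.size else start
        if delta[i] == 0:                        # right-censored observation
            R[i] = np.inf
        else:
            after = visits[visits >= t[i]]
            R[i] = after.min() if after.size else np.inf
    return t, w, delta, z, L, R
```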
Simulation results
The results of the simulations are summarized in Tables 1, 2 and 3. Tables 1 and 2 show the empirical size at the 5% level for each of the five tests under the null hypothesis, whereas Table 3 shows the power under the alternative hypothesis for each scenario. The proposed methods maintain the 5% significance level under all scenarios. With the additive variance form (1), the methods marginally overestimate the variance, so the empirical sizes fall below 0.05 for most scenarios; with the subtractive form (2), the methods slightly underestimate the variance.
Table 1 Empirical 5%-level tests by varying θB, m1A, and m1B with θA=0.5 when all events are observed in some intervals and there are some missed visits (probability 0.1 for the first year and 0.2 thereafter). I = log-rank, II = stratified log-rank, III = uniform weight method, IV = weighted weight method; (1) added, (2) subtracted within- and between-imputation variance.
Table 2 Empirical 5%-level tests by varying θB, m1A, and m1B with θA=0.5 when the censoring fraction is 0.3 and there are some missed visits (probability 0.1 for the first year and 0.2 thereafter). Abbreviations as in Table 1.
Table 3 Empirical power of tests by varying m1B when the censoring fraction is 0% or 30% and there are some missed visits (probability 0.1 for the first year and 0.2 thereafter).
The stratified log-rank test was unsatisfactory when the proportion experiencing the IE differed between the two groups (i.e., θA≠θB). The log-rank test satisfied the nominal significance level when the survival functions did not change after the IE, regardless of that proportion; a change in the survival distribution after the IE (i.e., m0A≠m1A) combined with a difference in the proportion of the IE rendered the log-rank test inappropriate. The comparison of the uniform and weighted weight multiple imputation methods did not show significant differences. When θA=θB=0.5, the simulation results confirmed that all tests gave the correct 5% significance level; hence, the power calculations were restricted to this case. The other parameter values were m0A=m0B=1 and m1A=2, and only the mean time to failure m1B was varied. Power increased with larger sample sizes, lower values of the censoring fraction cp, and larger differences in mean time to failure. In all cases, the proposed methods had superior power by taking advantage of the knowledge of the IE.
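Each cell of these tables can be reproduced in outline by a loop of the following shape, which estimates the rejection rate over replications. It builds on the hypothetical generate_group and mi_test sketches above and is only a schematic of the reported study.

```python
import numpy as np

def rejection_rate(n, params_A, params_B, reps=1000, alpha=0.05, seed=1):
    """Fraction of replications in which the MI chi-square test rejects H0.

    params_A, params_B -- (lam0, mu, m1, cp) tuples for groups A and B.
    Equal parameters estimate empirical size; unequal ones estimate power.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        _, wA, dA, _, LA, RA = generate_group(n, *params_A, rng=rng)
        _, wB, dB, _, LB, RB = generate_group(n, *params_B, rng=rng)
        L = np.concatenate([LA, LB]); R = np.concatenate([RA, RB])
        w = np.concatenate([wA, wB]); d = np.concatenate([dA, dB])
        x = np.concatenate([np.ones(n), np.zeros(n)])
        _, p = mi_test(L, R, w, d, x, M=10, rng=rng)
        hits += p < alpha
    return hits / reps
```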
Real data example
In this section, we illustrate the proposed method using real data from a randomized clinical trial evaluating the efficacy of the tyrosine kinase inhibitors sorafenib and sunitinib in the treatment of patients with metastatic renal cell carcinoma. The primary endpoint was total progression-free survival (PFS), defined as the interval from randomization (the start date of first-line therapy) to disease progression or death during second-line therapy. For subjects who did not switch to per-protocol second-line therapy, the first-line events were used. Subjects without tumor progression or death during second-line therapy were censored. The details of the study have been published [25].
We chose this study to illustrate our methods because it presents interesting aspects of the IE. The proportion administered second-line therapy was higher with sorafenib-sunitinib (So-Su) than with sunitinib-sorafenib (Su-So) (57% vs 42%, P value <0.01). Total PFS and first-line PFS did not show a significant difference (So-Su vs. Su-So: 12.5 mo vs. 14.9 mo, P value = 0.5, and 5.9 mo vs. 8.5 mo, P value = 0.9, respectively), whereas second-line PFS was shorter with Su-So (5.4 mo vs. 2.8 mo, P value <0.001). Receiving second-line therapy may be regarded as experiencing the IE, so that the difference in survival functions can be compared while utilizing the proportion receiving second-line therapy and the durations of first- and second-line therapy under a different-hazards assumption. Since it was difficult to obtain the raw data for this study, we extracted numerical data from the Kaplan–Meier (KM) graphs of total, first-line, and second-line PFS [25] using WebPlotDigitizer v.3.9 (http://arohatgi.info/WebPlotDigitizer/). With the reported proportions and numbers-at-risk tables, we could reconstruct the observed data {Ti,Wi,δi,Zi,xi} [26]; KM graphs similar to the published ones were obtained with the regenerated data. The interval of radiological assessment follow-up was 12 weeks. As in the simulations, we assumed several scheduled visits and loss rates of radiological assessment to construct interval-censored data (Li,Ri]. The proposed methods show a significant difference between the two arms (P value <0.01), unlike the log-rank test and the stratified log-rank test (P value >0.5). We also applied the methods based on the Cox model and obtained similar results [23, 24]. The hypothesis on (β0,β1) is separable, as noted previously [4]; therefore, we can test the difference in the distributions for each parameter, namely H0: β1=0 versus H1: β1≠0, using the one-degree-of-freedom chi-square test \(\chi^{2}_{1} = \hat{S}_{1}^{2}/V(\hat{S}_{1})\). In this case, we do not reject the null hypothesis β0=0 (P value = 0.6) but reject β1=0 (P value <0.001), which is similar to the previous study [25].
Discussion
We propose a general method for comparing two interval-censored samples in the presence of the IE. The occurrence of the IE may change the survival distribution, and the focus of the current study is to compare two survival functions while incorporating the information of the IE. In the present study, we propose non-iterative multiple imputation methods for the analysis of left-truncated and interval-censored survival data. In the uniform weight method, the true failure time of a subject is assumed to be uniformly distributed over \(\{s_{j};\ L_{i}<s_{j}\leq R_{i},\ j=1,\dots,m\}\) [12]. We used an MI technique for the derivation of the test statistics and their variance-covariance matrix, including the imputation of a true failure time, whereas Kim et al. used an MI technique only to estimate the variance matrix. The uniform weight assumption over the characterized set is convenient to implement in practice. We also propose a weighted weight method based on the NPMLE: after characterizing the set that may have positive mass, including truncated points [13], Turnbull's algorithm is used to estimate the NPMLE. The performance of the imputation procedure depends strongly on the performance of the NPMLE. In the case of left-truncated and interval-censored data, the NPMLE is not consistent, whereas the conditional NPMLE is still consistent [15]; however, the problem is limited to the early time points.
In the present study, we did not use any special correction, because our purpose was not to obtain the exact NPMLE, and the simulations did not show considerable differences compared with the uniform weight method. We applied the Cox-model-based methods to the real example, and the results were similar to those of the proposed methods [23, 24]. We applied two forms of variance, formed by addition and by subtraction. Both variance methods were efficient, but the first marginally overestimated the variance and the second slightly underestimated it. This phenomenon is the same as described by Huang et al. [11], since the follow-up loss rate at each visit was not high. We assumed that the IE was observed exactly; further studies are needed if the IE is itself considered interval-censored. To avoid the length-biased problem, we recommend incorporating the information of the IE into the analysis.
Conclusions
In the absence of intensive iterations, our proposed method exhibits superior performance compared with the stratified log-rank and the log-rank tests with regard to type I error and power.
Abbreviations
IE: Intermediate clinical event; LTIC: Left-truncated and interval-censored; NPMLE: Non-parametric maximum likelihood estimation
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
All data generated from the simulations are available upon reasonable request to SHK ([email protected]).
Authors SHK and CMN designed the study with a critical review from JHK. SHK performed the simulation study and analyzed the results under the supervision of JHK and CMN. SHK drafted the manuscript with input from JHK and CMN. All authors have read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Author affiliations: 1 Biostatistics and Computing, Yonsei University Graduate School, Seoul, Korea; 2 Department of Applied Statistics, University of Suwon, Suwon, Korea; 3 Department of Preventive Medicine/Department of Biostatistics, Yonsei University College of Medicine, Seoul, Korea
References
1. Mantel N, Byar D. Evaluation of response-time data involving transient states: an illustration using heart-transplant data. J Am Stat Assoc. 1974;69(345):81–86. https://doi.org/10.1080/01621459.1974.10480131
2. Anderson JR, Cain KC, Gelber RD. Analysis of survival by tumor response. J Clin Oncol. 1983;1(11):710–9. https://doi.org/10.1200/JCO.1983.1.11.710
3. Lefkopoulou M, Zelen M. Intermediate clinical events, surrogate markers and survival. Lifetime Data Anal. 1995;1(1):73–85. https://doi.org/10.1007/BF00985259
4. Nam CM, Zelen M. Comparing the survival of two groups with an intermediate clinical event. Lifetime Data Anal. 2001;7(1):5–19. https://doi.org/10.1023/A:1009609925212
5. Peto R. Experimental survival curves for interval-censored data. Appl Stat. 1973;22(1):86–91. https://doi.org/10.2307/2346307
6. Turnbull BW. The empirical distribution function with arbitrarily grouped, censored and truncated data. J R Stat Soc Ser B. 1976;38(3):290–5. www.jstor.org/stable/2984980
7. Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B. 1977;39(1):1–38. www.jstor.org/stable/2984875
8. Finkelstein DM. A proportional hazards model for interval-censored failure time data. Biometrics. 1986;42(4):845–54. https://doi.org/10.2307/2530698
9. Sun J. A non-parametric test for interval censored failure time data with application to AIDS studies. Stat Med. 1996;15(13):1387–95. https://doi.org/10.1002/(SICI)1097-0258(19960715)15:13<1387::AID-SIM268>3.0.CO;2-R
10. Zhao Q, Sun J. Generalized log-rank test for mixed interval-censored failure time data. Stat Med. 2004;23(10):1621–9. https://doi.org/10.1002/sim.1746
11. Huang J, Lee C, Yu Q. A generalized log-rank test for interval-censored failure time data via multiple imputation. Stat Med. 2008;27(17):3217–26. https://doi.org/10.1002/sim.3211
12. Kim J, Kang DR, Nam CM. Logrank-type tests for comparing survival curves with interval-censored data. Comput Stat Data Anal. 2006;50(11):3165–78. https://doi.org/10.1016/j.csda.2005.06.014
13. Frydman H. A note on nonparametric estimation of the distribution function from interval-censored and truncated observations. J R Stat Soc Ser B. 1994;56(1):71–74. https://www.jstor.org/stable/2346028
14. Alioum A, Commenges D. A proportional hazards model for arbitrarily censored and truncated data. Biometrics. 1996;52(2):512–24. https://doi.org/10.2307/2532891
15. Pan W, Chappell R. A note on inconsistency of NPMLE of the distribution function from left truncated and case I interval censored data. Lifetime Data Anal. 1999;5(3):281–91. https://doi.org/10.1023/A:1009632400580
16. Pan W, Chappell R. Estimation in the Cox proportional hazards model with left-truncated and interval-censored data. Biometrics. 2002;58(1):64–70. https://doi.org/10.1111/j.0006-341X.2002.00064.x
17. Shen PS. Nonparametric tests for left-truncated and interval-censored data. J Stat Comput Simul. 2015;85(8):1544–53. https://doi.org/10.1080/00949655.2014.880705
18. Wei GC, Tanner MA. Applications of multiple imputation to the analysis of censored regression data. Biometrics. 1991;47(4):1297–309. https://doi.org/10.2307/2532387
19. Tanner MA, Wong WH. The calculation of posterior distributions by data augmentation. J Am Stat Assoc. 1987;82(398):528–40. https://doi.org/10.1080/01621459.1987.10478458
20. Pan W. A multiple imputation approach to Cox regression with interval-censored data. Biometrics. 2000;56(1):199–203. https://doi.org/10.1111/j.0006-341X.2000.00199.x
21. Pan W. A two-sample test with interval censored data via multiple imputation. Stat Med. 2000;19(1):1–11. https://doi.org/10.1002/(SICI)1097-0258(20000115)19:1<1::AID-SIM296>3.0.CO;2-Q
22. Hsu CH, Taylor JMG, Murray S, Commenges D. Multiple imputation for interval censored data with auxiliary variables. Stat Med. 2007;26(4):769–81. https://doi.org/10.1002/sim.2581
23. Yu B, Saczynski JS, Launer L. Multiple imputation for estimating the risk of developing dementia and its impact on survival. Biom J. 2010;52(5):616–27. https://doi.org/10.1002/bimj.200900266
24. Shen PS. Proportional hazards regression with interval-censored and left-truncated data. J Stat Comput Simul. 2014;84(2):264–72. https://doi.org/10.1080/00949655.2012.705844
25. Eichelberg C, Vervenne WL, De Santis M, Fischer von Weikersthal L, Goebell PJ, Lerchenmüller C, Zimmermann U, Bos MMEM, Freier W, Schirrmacher-Memmel S, Staehler M, Pahernik S, Los M, Schenck M, Flörcken A, van Arkel C, Hauswald K, Indorf M, Gottstein D, Michel MS. SWITCH: A randomised, sequential, open-label study to evaluate the efficacy and safety of sorafenib-sunitinib versus sunitinib-sorafenib in the treatment of metastatic renal cell cancer. Eur Urol. 2015;68(5):837–47. https://doi.org/10.1016/j.eururo.2015.04.017
26. Williamson PR, Smith CT, Hutton JL, Marson AG. Aggregate data meta-analysis with time-to-event outcomes. Stat Med. 2002;21(22):3337–51. https://doi.org/10.1002/sim.1303
An expanded repertoire of intensity-dependent exercise-responsive plasma proteins tied to loci of human disease risk
J. Sawalla Guseh1,2, Timothy W. Churchill2, Ashish Yeri1, Claire Lo2,3, Marcel Brown2, Nicholas E. Houstis1, Krishna G. Aragam1, Daniel E. Lieberman3, Anthony Rosenzweig1 & Aaron L. Baggish2
Routine endurance exercise confers numerous health benefits, and high intensity exercise may accelerate and magnify many of these benefits. To date, explanatory molecular mechanisms and the influence of exercise intensity remain poorly understood. Circulating factors are hypothesized to transduce some of the systemic effects of exercise. We sought to examine the role of exercise and exercise intensity on the human plasma proteome. We employed an aptamer-based method to examine 1,305 plasma proteins in 12 participants before and after exercise at two physiologically defined intensities (moderate and high) to determine the proteomic response. We demonstrate that the human plasma proteome is responsive to acute exercise in an intensity-dependent manner, with enrichment analysis suggesting functional biological differences between the moderate and high intensity doses. Through integration of available genetic data, we estimate the effects of acute exercise on exercise-associated traits and find proteomic responses that may contribute to observed clinical effects on coronary artery disease and blood pressure regulation. In sum, we provide supportive evidence that moderate and high intensity exercise elicit different signaling responses, that exercise may act in part non-cell-autonomously through circulating plasma proteins, and that plasma protein dynamics can simulate some of the beneficial and adverse effects of acute exercise.
Physical activity, including structured exercise, is associated with numerous health benefits including enhanced cognition1, reduction in cardiovascular disease (CVD)2, improved cancer outcomes3, and decreased mortality4. Cardiovascular benefits from exercise training have been ascribed to improvements in lipid profiles, blood pressure, and insulin sensitivity and to reductions in inflammation, but a substantial portion of the observed cardiovascular benefit remains unexplained by conventional risk factor reductions5. Routine exercise accordingly holds a central place in guideline-directed care for the promotion of cardiovascular and neurological health6,7,8, with current physical activity guidelines proposing moderate and vigorous exercise as comparable alternatives for preventing CVD and promoting overall health7. However, mounting clinical evidence suggests that different exercise intensities may confer distinct physiologic and health benefits9, while exercise at high intensity has also been associated with discrete adverse health risks both acutely10 and over the longer term11. At present, however, the biological mechanisms by which exercise confers beneficial and adverse health effects, and the degree to which these mechanisms vary as a function of exercise intensity, remain incompletely understood12. Prior work has explored the impact of acute exercise on cardiac structure13, DNA methylation14, circulating metabolites15, and microRNAs16. Data defining the impact of exercise on the plasma-based proteome, and the degree to which the proteome responds differentially to variable exercise intensities, are comparatively limited. Several prior studies have examined protein changes in specific tissues (e.g.
cardiac or skeletal muscle) using rodent models or human skeletal muscle biopsies17,18, while characterization of circulating proteins in exercise has largely been limited to select cytokines, myokines, and lipokines and to focused studies of extracellular vesicle-bound proteins19,20. Plasma-based proteins play fundamental roles in numerous biological processes including growth, repair, and signaling in both disease and health21 and may facilitate exercise-induced cellular, metabolic, and physiologic changes12. We hypothesized that the human plasma proteome would demonstrate distinct intensity-dependent responses to a single session of exercise and that these acute changes, when integrated over time, might contribute to the beneficial and adverse effects of chronic moderate and vigorous intensity exercise. To address these hypotheses, we employed a well-validated aptamer-based proteomics platform21,22 to measure plasma concentrations of 1,305 circulating proteins before and after acute exercise at intensities chosen to approximate the moderate and vigorous options proposed by clinical guidelines. We then identified genetic loci simultaneously associated with circulating protein levels (protein quantitative trait loci, pQTLs) and with important clinical phenotypes from genome wide association studies (GWAS) to estimate the predicted effect of exercise on relevant human traits.
Results
Subjects had an average age of 21 ± 1 years, normal body mass index (22.8 ± 2 kg/m2), no known medical conditions (Table 1), and reported similar levels of habitual physical activity (4–6 days/week of exercise and 20–30 miles/week of running). Baseline cardiopulmonary exercise testing demonstrated maximal oxygen consumption of 62 ± 5 ml/kg/min at a peak achieved heart rate (HR) of 195 ± 7 beats per minute (100 ± 4% of age-predicted maximum), with a ventilatory threshold HR of 182 ± 10 beats per minute (Table 1; Fig. 1a). In a cross-over design, participants subsequently completed 5-mile treadmill runs at both moderate intensity (6 m.p.h.) and high intensity (maximal effort) on separate weeks (see study design schematic in Fig. 1a). Average heart rate over the final mile was 150 ± 16 bpm (82% of the ventilatory threshold HR) during the moderate intensity run and 187 ± 7 bpm (102% of the ventilatory threshold HR) during the high intensity run (Fig. 1b). All participants experienced a decline in plasma cortisol following moderate intensity exercise (Fig. 1c) and an increase in plasma cortisol following high intensity exercise (Fig. 1d), consistent with prior reports of discordant cortisol responses to these different intensities of exercise23.
Table 1 Baseline participant data.
Discovery of exercise-regulated plasma proteins at moderate and high intensity exercise. (a) Study design. Participants all underwent CPET with determination of individual peak VO2. Participants were then randomized to two treadmill sessions consisting of a moderate intensity (5 mile/h steady state) or high intensity (maximal effort) exercise session. Participants who underwent a moderate intensity session first later underwent a high intensity session, and those who underwent an initial high intensity session later completed a moderate intensity session. Blood was drawn before and immediately after each session. (b) Breath-by-breath cardiopulmonary exercise test data from a representative participant are shown. Moderate vs.
high exercise intensity is defined physiologically by an inflection point observed at the ventilatory anaerobic threshold (vertical line at 182 bpm), distinguishing moderate from high intensity exercise. (c,d) Post-exercise cortisol kinetics at (c) moderate (p < 0.001) and (d) high intensity (p = 0.013) exercise confirm exercise intensity. Volcano plots show proteins that rise (red) and fall (green) with (e) moderate and (f) high intensity exercise, highlighting the greater complexity of the dynamic proteome with high intensity exercise. 1,305 proteins examined; n = 12 participants. p < 0.05 was considered significant, and values were adjusted for multiple hypothesis testing (Benjamini–Hochberg). A paired t-test was performed to examine post-exercise cortisol kinetics in panels c,d.
The human plasma proteome responds to acute exercise differentially relative to exercise intensity
Plasma concentrations of 1,305 proteins (see SI Table S1 for the complete list) were measured before and immediately following moderate and high-intensity 5-mile treadmill runs. A total of 623 different proteins (48% of the measured proteome) were dynamically regulated by acute exercise. Of these, 25 and 439 proteins were uniquely responsive to moderate and high intensity exercise, respectively, while 159 changed at both exercise intensities (Supplemental Data File). Overall, 184 distinct proteins were responsive to moderate-intensity exercise (14% of the measured proteome) (Fig. 1e), while 598 proteins changed with high-intensity exercise (46% of the measured proteome) (Fig. 1f), representing a >3-fold increase in the number of exercise-responsive proteins at high intensity effort. To further evaluate the impact of exercise intensity, we focused on the 159 proteins modulated by both moderate and high intensity exercise. Comparing fold change at moderate intensity (FCM) to fold change at high intensity (FCH), we observed a range of intensity dependence, with the most intensity-dependent group (n = 22; SI Table S2) increasing by at least 25% more during high intensity than moderate intensity exercise (Fig. 2). In contrast, the least intensity-dependent group of proteins (n = 44) changed to a nearly equivalent degree at moderate and high intensity exercise (FCH within 5% of FCM). All proteins that changed significantly with both types of exercise did so concordantly: proteins that decreased during moderate intensity exercise also decreased during high intensity exercise, while proteins that increased did so after both exercise intensities. No proteins changed in opposite directions, in contrast to plasma cortisol.
Differential intensity-dependent and intensity-independent plasma protein responses to moderate and high intensity acute exercise. (a,b) The 25 proteins with the greatest positive and negative fold change at (a) moderate and (b) high intensity exercise are shown. (c) Proteins common to both moderate and high intensity exercise (n = 159) are plotted with high intensity fold change (y-axis) against moderate intensity fold change (x-axis). Relative intensity-dependence (darker blue) and intensity-independence (lighter blue) of protein species are depicted.
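The intensity-dependence classification described above reduces to a simple rule on the two fold changes. The sketch below is our own minimal illustration, assuming fold changes are stored in plain dicts restricted to the 159 proteins significant at both intensities; the thresholds mirror those stated in the text (FCH at least 25% beyond FCM for intensity-dependent, FCH within 5% of FCM for intensity-independent), while the exact fold-change scale and the function name are our assumptions.

```python
def classify_intensity_dependence(fc_moderate, fc_high):
    """Label proteins significant at both intensities by intensity dependence.

    fc_moderate, fc_high -- dicts mapping protein name -> fold change at
    moderate (FCM) and high (FCH) intensity, respectively.
    """
    labels = {}
    for protein in fc_moderate.keys() & fc_high.keys():
        fcm, fch = fc_moderate[protein], fc_high[protein]
        # Responses at the two intensities are concordant in sign,
        # so compare magnitudes.
        ratio = abs(fch) / abs(fcm) if fcm else float("inf")
        if ratio >= 1.25:
            labels[protein] = "intensity-dependent"
        elif abs(ratio - 1.0) <= 0.05:
            labels[protein] = "intensity-independent"
        else:
            labels[protein] = "intermediate"
    return labels
```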
Distinct exercise-relevant functional processes are enriched at moderate and high intensity exercise
Enrichment for gene ontology (GO) curated sets was performed to identify functional pathways altered by moderate and high intensity exercise (Table 2). At moderate intensity, the two processes with the highest enrichment were bone ossification and lipophagy. Proteins related to multiple pathways relevant to the inflammatory response were additionally enriched, including neutrophil, granulocyte, and monocyte chemotaxis and inflammatory cell migration. At high intensity, the top positively enriched protein sets notably included multiple neurologic pathways, including both canonical and non-canonical Wnt signaling and neuronal axonogenesis (collateral sprouting). Other pathways enriched with high intensity exercise included free radical generation, the inflammatory response (monocyte migration, T-cell cytokine production), and vascular smooth muscle cell migration.
Table 2 Gene ontology enrichment, biological process.
Inferred tissue contribution of the exercise-responsive human plasma proteome
To discern which tissues might be contributing to plasma proteins, we used a probabilistic model of transcriptional inference to map likely tissue sources for the set of proteins increasing in the plasma with exercise (n = 120 at moderate intensity and n = 250 at high intensity, representing 261 total unique proteins); this represents a 2.1-fold increase in dynamically elevated protein species at high as compared with moderate intensity. At both exercise intensities, proteomic contribution to the plasma involved nearly all organ systems (Fig. 3a), with the most prominent absolute donor tissues being the nervous, cardiovascular, and gastrointestinal systems. Skeletal muscle appeared to be a relatively minor tissue source of donor protein. However, when adjusted for platform representation (Fig. 3b), proteins inferred to derive from skeletal muscle were overrepresented, whereas proteins derived from the collective gastrointestinal system were relatively underrepresented. Other protein sources enriched relative to the overall platform included blood, cardiovascular, and nervous tissue.
Transcriptional inference reveals multisystem tissue contributions of proteins from the exercise plasma proteome. (a) Among proteins increased in plasma at moderate (n = 120) and high (n = 250) intensity exercise, transcriptional inference suggests systemic contribution of donor protein species into the plasma. Dominant inferred sources of protein diversity include the nervous, cardiovascular, and gastrointestinal systems at both exercise intensities. (b) Inferred tissue sources of proteins increasing in plasma with exercise are compared against the tissue expression of the entire SomaLogic platform, revealing relative enrichment during exercise for proteins with expression in blood, cardiovascular, skeletal muscle, and nervous tissue.
Exercise-regulated proteins are genetically tied to and simulate observed effects on exercise-associated traits
Of the 623 exercise-regulated proteins, the plasma abundance of 273 (44%) has previously been linked to 272 protein quantitative trait loci (pQTLs). Of the identified exercise-regulated pQTL-associated proteins, 55% were under cis genetic control, 61% under trans genetic control, and 16% under both cis and trans genetic control. The chromosomal positions of the pQTLs and associated exercise-responsive proteins (Fig. 4a) reveal widespread involvement across the entire genome. Protein-associated pQTLs were linked to a diverse group of phenotypic traits; the numbers of pQTLs and associated proteins across cardiovascular (coronary artery disease (CAD), blood pressure, and dyslipidemia), neurologic, and oncologic phenotypes are highlighted (Fig. 4b). Finally, pQTLs linked to CAD (Fig. 4c) and blood pressure (Fig.
4d) are shown along with associated exercise-responsive proteins, annotated to depict the proteomic response to exercise. Notably, the simulated impact of exercise on CAD risk loci is heterogeneous, with multiple proteins appearing to contribute in both directions, towards increasing and decreasing risk, and with 6 of 15 total proteins moving in a direction suggesting benefit. In contrast, the simulated impact on blood pressure showed that the exercise-responsive dynamics of the protein-pQTL combinations were more concordant and associated with an improved risk profile (12 of 14 proteins moving in a direction consistent with lower blood pressure).
Exercise regulates plasma proteins tied to human traits. (a) Genomic locations of pQTLs (red, cis; blue, trans). X and Y axes represent chromosomal locations of the pQTL and the associated protein, respectively. (b) Quantification of exercise-responsive plasma proteins (FDR p < 0.05) with associated pQTLs and associated phenotypic traits (p < 5 × 10−8) permits raw effect estimation. (c,d) The pQTL-linked proteins with corresponding risk pQTLs associated with coronary artery disease (c) and blood pressure (d) are plotted on forest plots and aligned so that a higher plasma protein concentration associates with either higher or lower disease-specific risk (x-axes); higher plasma protein concentrations move laterally away from the midline. A red arrow depicts the direction of the simulated impact of exercise-associated acute changes in protein concentration. Forest plots depict the GWAS β point estimate and standard error.
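The direction-of-effect simulation in Fig. 4c,d amounts to combining, for each protein, the sign of its exercise response with the sign of the pQTL-trait association. A sketch of this bookkeeping follows, under our own simplified data layout (records carrying an exercise fold change and a GWAS beta aligned to the protein-increasing allele); the class and function names are hypothetical and not from the paper.

```python
from dataclasses import dataclass

@dataclass
class ProteinTraitLink:
    protein: str
    exercise_log_fc: float   # post- vs. pre-exercise change in plasma level
    gwas_beta: float         # trait effect per unit increase in protein level

def simulated_direction(link: ProteinTraitLink) -> str:
    """Infer whether the acute exercise response pushes the trait up or down.

    If exercise raises a protein whose higher level associates with higher
    trait risk (beta > 0), the simulated acute effect is risk-increasing,
    and symmetrically for the other sign combinations.
    """
    effect = link.exercise_log_fc * link.gwas_beta
    if effect > 0:
        return "risk-increasing"
    if effect < 0:
        return "risk-decreasing"
    return "neutral"

# Example: a protein that falls with exercise while its protein-raising
# pQTL allele also raises blood pressure -> simulated benefit.
print(simulated_direction(ProteinTraitLink("exampleP", -0.4, 0.08)))
```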
Results of this effort provide several novel insights into the proteomic response to acute exercise. First, we demonstrate that the plasma proteome is responsive to a short bout of exercise, with almost half of the ~1,300 measured proteins changing significantly at one of the two studied exercise intensities. Second, we found that the proteomic response was consistently bi-directional, with both up- and down-regulation of distinct protein species, suggesting that the observed protein changes were non-random. Third, we show that the human plasma proteomic response varies in an intensity- or 'dose'-dependent manner and that circulating protein changes at high intensity exercise are greater in both number and magnitude than those observed at moderate intensity.

Distinct functional pathways are enriched with moderate and high intensity aerobic exercise

The application of established gene sets to the group of exercise-regulated proteins allows insight into biologic processes relevant to exercise and differentially impacted by exercise intensity. Two well-established effects of exercise are its ability to prevent osteoporosis24 and to improve lipid profiles25. At moderate intensity exercise, we observed that our top two enriched pathways involved the promotion of bone growth and enhanced lipophagy, the degradation and metabolism of lipids. In both cases, the exercise-induced proteomic changes are concordant with clinically observed effects of exercise training, suggesting that these pathways may represent routes through which moderate intensity exercise, when repeated over time, acts via the plasma proteome to improve bone health and to reduce lipid-associated metabolic risk.

Pathway enrichment at high intensity exercise was particularly notable for the prominent role of neurologic processes, highlighting the close interplay of exercise and the nervous system26. Several of the top enriched pathways pertained to Wnt signaling, which is known to play essential roles in the regulation of central nervous system angiogenesis27 and hippocampal neurogenesis28. An additional mechanistic link is suggested by the enrichment of pathways related to neuronal adaptation and axonogenesis (collateral sprouting), in line with clinical observations associating aerobic exercise with neurogenesis and synaptic plasticity29. Notably, the collateral sprouting gene set includes the well-studied brain-derived neurotrophic factor (BDNF). BDNF has been hypothesized to mediate the improvements in cognition and mood observed with exercise, and prior work has documented changes in circulating levels of BDNF with both acute and regular exercise30. However, those studies examined moderate intensity exercise or did not report intensity. Our data confirm the exercise responsiveness of BDNF, with levels rising significantly after both moderate and high intensity exercise, and extend the current literature to show that BDNF also appears to be intensity responsive, with high intensity exercise in our platform producing a nearly 30% increase in BDNF levels relative to moderate intensity (Fig. 2; Supplemental Data File). These findings are particularly salient given the rising prevalence of dementia, the absence of efficacious therapies, and links between sedentary behavior and memory loss31.
Cardiovascular, neurological, and muscular enrichment in the acute plasma proteome

Aerobic exercise requires integrated multi-organ system function, and we expected transcriptional inference to reveal multiple source tissues contributing to the plasma proteomic response. The cardiovascular system experiences workload-dependent increases in pressure and volume stress during exercise, and its role as a major source of circulating proteins was unsurprising. In contrast, the finding of enriched expression of exercise-responsive proteins in the nervous system was unexpected. This novel finding suggests that the nervous system, which is not classically viewed as playing a key role in endurance exercise physiology, responds to exercise and may play mechanistic roles in transducing its health benefits. Further elucidation of the precise protein sources within the nervous system (i.e., peripheral versus central neurons or glial cells), coupled with clarification of downstream effects, represents a critical area for future work.

The proteomic exercise response simulates acute exercise effects on exercise-relevant traits

A pQTL is a genetic locus strongly associated with circulating plasma levels of a given protein. The same genetic locus may also be strongly associated with a particular clinical phenotype, as documented via a GWAS. Although not proof of causality, integration of these data permits a simulation of the impact of exercise-associated protein changes on a given trait. For example, high levels of a given protein under genetic control may be associated with an adverse trait; exercise might reduce the level of this protein and thus provide therapeutic benefit (this sign-based reasoning is illustrated in the sketch below). This framework highlights how plasma proteins might integrate the influences of a given gene product with the environment and with behaviors like exercise. Although we found pQTLs across a number of clinical strata (Fig. 4a), we focused these analyses on the acute effects of exercise on CAD and systemic blood pressure given exercise's well-established and clinically relevant impacts on these traits.

Coronary artery disease, the leading cause of death worldwide, is a complex disease with polygenic inheritance whereby a large number of common genetic variants with small incremental effects additively confer risk32. Although chronic exercise is associated with beneficial cardiac adaptations and reductions in CAD risk, acute exercise paradoxically increases the risk of myocardial infarction and CAD-related death, particularly with higher intensity efforts33,34,35. Examining the acute effects of exercise on the set of plasma proteins strongly associated with CAD (p < 5 × 10−8), we made two key observations. First, most of these proteins were responsive exclusively at high and not moderate intensity exercise, consistent with clinical observations that acute exercise increases cardiovascular event risk primarily at higher intensity34,35. Second, the estimated proteomic response with respect to CAD appears biased towards increased CAD risk at high intensity exercise, with 9 of 15 proteins moving in a manner consistent with projected increased risk (Fig. 4c). This finding may provide insight into the clinical observation that while the cumulative impact of repeated exercise is to improve an individual's CAD risk profile, exercise paradoxically increases risk acutely in the short term10,36.
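To make the simulation logic concrete, the following is a minimal sketch in R (not the authors' pipeline); the protein names, fold changes, and GWAS effect sizes are hypothetical and serve only to show how a sign-based direction-of-effect estimate can be computed.

```r
# Minimal sketch (hypothetical data, not the authors' pipeline) of the
# sign-based simulation: combine the direction of the exercise-induced
# protein change with the GWAS effect (beta) of the protein-raising
# pQTL allele on the trait (e.g., CAD risk).
proteins <- data.frame(
  protein         = c("P1", "P2", "P3"),
  exercise_log2fc = c(0.8, -0.5, 0.3),    # plasma change with acute exercise
  gwas_beta       = c(0.10, 0.20, -0.15)  # trait effect per protein-raising allele
)

# +1: exercise shifts the protein in the risk-increasing direction
# -1: exercise shifts the protein in the risk-decreasing direction
proteins$simulated_direction <- sign(proteins$exercise_log2fc) *
  sign(proteins$gwas_beta)

print(proteins)
```

Under this convention, tallying proteins with a simulated direction of −1 versus +1 yields summaries analogous to the "6 of 15" (CAD) and "12 of 14" (blood pressure) counts reported above.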
Post-exercise hypotension, first described by Hill in 189737, refers to the protracted attenuation of resting blood pressure that lasts for several hours after exercise. The precise mechanism by which this occurs, and why humans evolved to have this response, remain open questions. Nevertheless, this reduction in blood pressure after a single session of exercise is a reliable post-exercise finding thought to confer some of the beneficial effects of exercise. We observe that the simulated impact of exercise-responsive proteins on blood pressure regulation is broadly concordant in a beneficial direction, suggesting that the proteomic response reproduces known post-exercise physiology, in line with well-established acute and chronic clinical observations38. Taken in sum, these data suggest that exercise may modulate both CAD risk and blood pressure in part through non-cell-autonomous mechanisms, by influencing circulating proteins tied to risk-conferring genetic loci independent of traditional risk markers. Such a proteomic basis for risk transduction raises the intriguing possibility of therapeutic targets for people with elevated polygenic risk or burdensome disease.

Several limitations of this study are noteworthy. First, while the data presented here are among the most comprehensive characterizations to date of how acute exercise perturbs the plasma proteome, we acknowledge that our use of a commercially available proteomics platform introduces bias, samples only a portion of the vast proteome, and does not represent a complete characterization of exercise's impact on circulating proteins. Second, we studied a small group of young, healthy, fit males. Future study of females, older participants of both sexes, less aerobically fit individuals, and patients with established CVD is warranted, and we acknowledge that exercise protein regulation may differ in these populations. While training status and objective fitness were similar across the study population, we could not evaluate heterogeneity in proteomic response based on these factors. Further, while our study population did include individuals from different racial backgrounds, we are limited by sample size in our ability to parse racial or ethnic differences in the proteomic effects of exercise. We additionally do not have measures of plasma volume pre- and post-exercise, so we cannot ascertain to what extent changes in plasma volume may have influenced our results; however, the bi-directional changes in protein concentration suggest that our results were not simply due to hemoconcentration. Finally, our experimental design was not linked to clinical outcomes and was limited to short-duration exercise, with the exact exercise "volume" (running distance) held constant. The majority of exercise's clinical health benefits are observed in those who transition from sedentary to moderate activity. We thus cannot exclude, and consider it probable, that some of the differences between moderate and high intensity exercise observed in longer-term epidemiologic studies stem from differences in volume; the corollary is that some of the proteomic differences described in this study may be attenuated at increased volumes of exercise. The extent to which chronic high-volume moderate-intensity exercise approximates lower-volume high-intensity exercise remains to be determined.
Future work examining longer- and varying-duration endurance exercise, as well as alternative forms of exercise including strength training, represents a logical next area of investigation.

In conclusion, we provide the first comprehensive characterization of how the human plasma proteome responds to acute moderate and high intensity aerobic exercise, and in doing so we expand the repertoire of known exercise-responsive proteins. Functional analyses suggest that distinct proteomic responses translate into the distinct biological functions that underlie numerous exercise-associated traits integral to human health and disease. Overlaying genomic data onto observed protein changes with exercise, we find explanatory congruence between estimated effects drawn from the high intensity proteomic response and clinical observations surrounding CAD risk and post-exercise hypotension. These data support the concept that exercise may confer its beneficial and adverse effects by influencing plasma proteins and signaling through a non-cell-autonomous mechanism. These findings set the stage for future work to deconstruct the specific signaling networks through which exercise transduces its benefits and exerts its harms, and to determine how the human proteome might be manipulated to promote human health.

We conducted a prospective, repeated-measures study examining the physiologic effects of varied exercise intensity. Twelve healthy adult males without known CVD (age 19–24 years) participated in treadmill running sessions at varied intensities (treadmill speed), with blood samples collected before and immediately after each exercise bout for proteomic profiling. We evaluated samples from before and after 5-mile runs at 6 m.p.h. (moderate intensity) and at maximal volitional effort (high intensity). The Institutional Review Board of Massachusetts General Hospital approved this study; accordingly, all elements of the research were performed in accordance with relevant guidelines and regulations, and informed consent was obtained from all participants.

Participant recruitment and exercise testing

Subject recruitment has previously been described16. Inclusion criteria were male sex and age 18–30 years. Six participants identified as Caucasian. Exclusion criteria included known heart, liver, or kidney disease or a viral illness within the preceding 2 weeks. Informed consent was obtained from all participants. Baseline data included demographics, medical and athletic history, and basic anthropometrics. Each participant underwent a maximal, effort-limited cardiopulmonary exercise test on a treadmill ergometer, as previously described16. Subjects were then randomly assigned to complete the variable intensity exercise sessions in varying order, with each exercise session completed 1 week apart. Participants abstained from all exercise beyond activities of daily living for a minimum of 48 h prior to each exercise session and arrived for exercise sessions following an overnight fast (excepting water). Exercise session start time (09:00), ambient room temperature (69–72 °F), and humidity (20–30%) were held constant across visits. Plasma cortisol is known to decline following low-to-moderate intensity exercise and to increase following high intensity exercise23; to confirm that the prescribed exercise intensities in this study were physiologically distinct, we measured plasma cortisol before and after both exercise bouts. Demographic and exercise data are reported as mean ± standard deviation.
Aptamer-based proteomic profiling

Profiling methods have been previously described21,22. Venous blood was collected immediately before and after treadmill running from a superficial upper extremity vein using standard phlebotomy techniques. Samples were drawn into standard anticoagulant ethylenediaminetetraacetic acid (EDTA) treated vacutainer tubes (BD, Franklin Lakes, NJ) and spun at 2,700–2,800 RCF in a Medilite centrifuge (Thermo Scientific, Waltham, MA) for 12 min to separate plasma. Plasma aliquots (400 μL) were frozen and stored at −80 °C until analysis. Quantitative protein levels in plasma samples were assayed on the SOMAscan platform (SomaLogic, Boulder, Colorado). Samples were assayed in a single batch (n = 48). A total of 1,305 proteins were assayed.

Protein source inference

Among proteins whose levels increased in the plasma after acute exercise, we hypothesized that this was unlikely to reflect de novo synthesis, given the short timeframe of exercise (approximately 30 min), and more likely represented translocation or active secretion of proteins from tissues into plasma. We sought to infer the tissue sources from which dynamically regulated proteins most likely derived. To do so, we devised a computational method of transcriptional inference to derive a probabilistic map of likely donor tissue sources. We made two a priori assumptions: (1) a given protein most likely derives from a tissue in which its messenger transcript is found, and (2) the probability that a given tissue is the source of a given protein is proportional to the relative expression of the protein's corresponding gene in that tissue. Sequencing data from the Genotype-Tissue Expression (GTEx) database39 were used to assign a tissue source probability to each protein that increased with acute exercise. For each protein, we assigned a probabilistic weight based on gene expression as reflected in RNA-seq transcript quantification, expressed in transcripts per kilobase million (TPM) and recovered from next-generation sequencing of human cadaveric tissues (n = 46 distinct human tissues). We defined this relationship as follows:

$$p(\text{Tissue } a) = k\left(\frac{TPM_{a}}{TPM_{a} + TPM_{b} + TPM_{c} + \cdots + TPM_{n}}\right)$$

Here p(Tissue a) is the probability that a given protein derives from source tissue a, where a, b, …, n index all available sites of tissue expression and k is a normalization constant. We defined these probabilities individually for each protein in question and subsequently aggregated them across all proteins, with each protein weighted equally and grouped by organ system (SI Table S3).
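As an illustration of the weighting defined above, the following minimal sketch in base R computes per-protein tissue-source probabilities and aggregates them across proteins; the gene names, tissues, and TPM values are hypothetical, and this is a sketch of the described method rather than the authors' code.

```r
# Minimal sketch (hypothetical data) of the transcriptional-inference
# weighting: each protein's tissue-source probabilities are proportional
# to the relative TPM of its gene across tissues (k = 1 here, since each
# row is normalized to sum to 1).
tpm <- matrix(c(120,  30,   0,    # geneA: predominantly heart
                  5,   5,  90,    # geneB: predominantly skeletal muscle
                 40,  40,  20),   # geneC: mixed expression
              nrow = 3, byrow = TRUE,
              dimnames = list(c("geneA", "geneB", "geneC"),
                              c("heart", "brain", "skeletal_muscle")))

# Per-protein tissue-source probabilities: p(Tissue a) = TPM_a / sum(TPM)
tissue_prob <- tpm / rowSums(tpm)

# Aggregate across proteins with equal weight to estimate each tissue's
# contribution to the exercise-responsive plasma proteome
inferred_contribution <- colMeans(tissue_prob)
print(round(inferred_contribution, 3))
```

In the study itself, the analogous computation would run over all proteins that increased with exercise and all 46 GTEx tissues, with tissues subsequently grouped by organ system.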
Functional annotation and enrichment analysis

To characterize the functional pathways enriched at moderate and high intensity exercise, we performed gene ontology (GO) analysis on the upregulated proteins from the respective intensities using an open-source tool with expanded curation of functional sets40,41. The GO database was accessed 12/01/2019, with data analyzed using the PANTHER overrepresentation test. The binomial test was used, and p values are reported after Bonferroni correction to adjust for multiple comparisons.

Trait-based protein annotation

We used previously reported protein quantitative trait loci (pQTLs)22 to map exercise-regulated proteins to strongly associated cis and trans sentinel genetic sequence variants. To estimate the anticipated effects of protein changes, we queried our sentinel variants against genome-wide association study (GWAS) data using PhenoScanner42, with GWAS results filtered for genome-wide significant variants (p < 5 × 10−8). Results were manually filtered to identify cardiovascular and neurological traits. Published beta coefficients were used to estimate the directional effect of exercise regulation.

All proteins were examined for differential abundance using the limma package in R43. Relative changes in protein abundance between resting and post-exercise samples were analyzed using a paired analysis with a Benjamini–Hochberg false-discovery rate of 5% to limit type I error under multiplicity44. Statistical analysis was performed in R 3.5 (R Foundation for Statistical Computing, Vienna, Austria). Full results are provided in the Supplemental Data File, and individual participant data may be made available upon reasonable request to the corresponding author.
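A minimal sketch of such a paired analysis with limma is shown below; the data are simulated and the object names are hypothetical, so it illustrates the approach (subject blocking for pairing, moderated statistics, Benjamini–Hochberg adjustment) rather than reproducing the authors' exact pipeline. It assumes the Bioconductor limma package is installed.

```r
# Minimal sketch (simulated data) of a paired pre/post differential
# abundance analysis with limma, blocking on subject to pair samples.
library(limma)

n_prot    <- 100                                          # proteins (toy number)
subject   <- factor(rep(paste0("S", 1:12), each = 2))     # 12 participants
timepoint <- factor(rep(c("pre", "post"), times = 12),
                    levels = c("pre", "post"))

# Simulated log-scale protein abundances (rows = proteins, cols = samples)
set.seed(1)
expr <- matrix(rnorm(n_prot * 24), nrow = n_prot,
               dimnames = list(paste0("protein", 1:n_prot), NULL))

# Including subject in the design makes the pre/post contrast a paired test
design <- model.matrix(~ subject + timepoint)

fit <- eBayes(lmFit(expr, design))        # per-protein moderated t-tests

# Benjamini-Hochberg adjustment; adj.P.Val < 0.05 defines significance
results <- topTable(fit, coef = "timepointpost",
                    number = Inf, adjust.method = "BH")
head(results)
```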
CAD: Coronary artery disease
CVD: Cardiovascular disease
GWAS: Genome-wide association study
pQTL: Protein quantitative trait locus
V̇O2: Oxygen consumption
VT: Ventilatory threshold

1. Gomez-Pinilla, F. & Hillman, C. The influence of exercise on cognitive abilities. Compr. Physiol. 3(1), 403–428 (2013).
2. Leon, A. S., Connett, J., Jacobs, D. R. Jr. & Rauramaa, R. Leisure-time physical activity levels and risk of coronary heart disease and death. The Multiple Risk Factor Intervention Trial. JAMA 258(17), 2388–2395 (1987).
3. Cormie, P., Zopf, E. M., Zhang, X. & Schmitz, K. H. The impact of exercise on cancer mortality, recurrence, and treatment-related adverse effects. Epidemiol. Rev. 39(1), 71–92 (2017).
4. Kujala, U. M., Kaprio, J., Sarna, S. & Koskenvuo, M. Relationship of leisure-time physical activity and mortality: The Finnish twin cohort. JAMA 279(6), 440–444 (1998).
5. Mora, S., Cook, N., Buring, J. E., Ridker, P. M. & Lee, I. M. Physical activity and reduced risk of cardiovascular events: Potential mediating mechanisms. Circulation 116(19), 2110–2118 (2007).
6. Eckel, R. H. et al. 2013 AHA/ACC guideline on lifestyle management to reduce cardiovascular risk: A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation 129(25 Suppl 2), S76–S99 (2014).
7. Piercy, K. L. et al. The physical activity guidelines for Americans. JAMA 320(19), 2020–2028 (2018).
8. Petersen, R. C. et al. Practice guideline update summary: Mild cognitive impairment: Report of the Guideline Development, Dissemination, and Implementation Subcommittee of the American Academy of Neurology. Neurology 90(3), 126–135 (2018).
9. Swain, D. P. & Franklin, B. A. Comparison of cardioprotective benefits of vigorous versus moderate intensity aerobic exercise. Am. J. Cardiol. 97(1), 141–147 (2006).
10. Goodman, J. M., Burr, J. F., Banks, L. & Thomas, S. G. The acute risks of exercise in apparently healthy adults and relevance for prevention of cardiovascular events. Can. J. Cardiol. 32(4), 523–532 (2016).
11. Guasch, E. et al. Atrial fibrillation promotion by endurance exercise: Demonstration and mechanistic exploration in an animal model. J. Am. Coll. Cardiol. 62(1), 68–77 (2013).
12. Neufer, P. D. et al. Understanding the cellular and molecular mechanisms of physical activity-induced health benefits. Cell Metab. 22(1), 4–11 (2015).
13. Neilan, T. G. et al. Persistent and reversible cardiac dysfunction among amateur marathon runners. Eur. Heart J. 27(9), 1079–1084 (2006).
14. Barres, R. et al. Acute exercise remodels promoter methylation in human skeletal muscle. Cell Metab. 15(3), 405–411 (2012).
15. Lewis, G. D. et al. Metabolic signatures of exercise in human plasma. Sci. Transl. Med. 2(33), 33ra37 (2010).
16. Ramos, A. E. et al. Specific circulating microRNAs display dose-dependent responses to variable intensity and duration of endurance exercise. Am. J. Physiol. Heart Circ. Physiol. 315(2), H273–H283 (2018).
17. Schild, M. et al. Basal and exercise induced label-free quantitative protein profiling of m. vastus lateralis in trained and untrained individuals. J. Proteom. 122, 119–132 (2015).
18. Ferreira, R. et al. Unraveling the exercise-related proteome signature in heart. Basic Res. Cardiol. 110(1), 454 (2015).
19. Whitham, M. & Febbraio, M. A. The ever-expanding myokinome: Discovery challenges and therapeutic implications. Nat. Rev. Drug Discov. 15(10), 719–729 (2016).
20. Whitham, M. et al. Extracellular vesicles provide a means for tissue crosstalk during exercise. Cell Metab. 27(1), 237–251 (2018).
21. Emilsson, V. et al. Co-regulatory networks of human serum proteins link genetics to disease. Science 361(6404), 769–773 (2018).
22. Sun, B. B. et al. Genomic atlas of the human plasma proteome. Nature 558(7708), 73–79 (2018).
23. Davies, C. T. & Few, J. D. Effects of exercise on adrenocortical function. J. Appl. Physiol. 35(6), 887–891 (1973).
24. Moreira, L. D. et al. Physical exercise and osteoporosis: Effects of different types of exercises on bone and physical function of postmenopausal women. Arq. Bras. Endocrinol. Metabol. 58(5), 514–522 (2014).
25. Kraus, W. E. et al. Effects of the amount and intensity of exercise on plasma lipoproteins. N. Engl. J. Med. 347(19), 1483–1492 (2002).
26. Morgan, J. A., Corrigan, F. & Baune, B. T. Effects of physical exercise on central nervous system functions: A review of brain region specific adaptations. J. Mol. Psychiatry 3(1), 3 (2015).
27. Daneman, R. et al. Wnt/beta-catenin signaling is required for CNS, but not non-CNS, angiogenesis. Proc. Natl. Acad. Sci. USA 106(2), 641–646 (2009).
28. Lie, D. C. et al. Wnt signalling regulates adult hippocampal neurogenesis. Nature 437(7063), 1370–1375 (2005).
29. Tharmaratnam, T., Civitarese, R. A., Tabobondung, T. & Tabobondung, T. A. Exercise becomes brain: Sustained aerobic exercise enhances hippocampal neurogenesis. J. Physiol. 595(1), 7–8 (2017).
30. Szuhany, K. L., Bugatti, M. & Otto, M. W. A meta-analytic review of the effects of exercise on brain-derived neurotrophic factor. J. Psychiatr. Res. 60, 56–64 (2015).
31. Fenesi, B. et al. Physical exercise moderates the relationship of apolipoprotein E (APOE) genotype and dementia risk: A population-based study. J. Alzheimers Dis. 56(1), 297–303 (2017).
32. Khera, A. V. et al. Genome-wide polygenic scores for common diseases identify individuals with risk equivalent to monogenic mutations. Nat. Genet. 50(9), 1219–1224 (2018).
33. Thompson, P. D., Funk, E. J., Carleton, R. A. & Sturner, W. Q. Incidence of death during jogging in Rhode Island from 1975 through 1980. JAMA 247(18), 2535–2538 (1982).
34. Thompson, P. D. et al. Exercise and acute cardiovascular events placing the risks into perspective: A scientific statement from the American Heart Association Council on Nutrition, Physical Activity, and Metabolism and the Council on Clinical Cardiology. Circulation 115(17), 2358–2368 (2007).
35. Siscovick, D. S., Weiss, N. S., Fletcher, R. H. & Lasky, T. The incidence of primary cardiac arrest during vigorous exercise. N. Engl. J. Med. 311(14), 874–877 (1984).
36. Rognmo, O. et al. Cardiovascular risk of high- versus moderate-intensity aerobic exercise in coronary heart disease patients. Circulation 126(12), 1436–1440 (2012).
37. Hill, L. Arterial pressure in man while sleeping, resting, working, and bathing. J. Physiol. (Lond.) 22, xxvi–xxix (1897).
38. Cornelissen, V. A. & Smart, N. A. Exercise training for blood pressure: A systematic review and meta-analysis. J. Am. Heart Assoc. 2(1), e004473 (2013).
39. GTEx Consortium. The Genotype-Tissue Expression (GTEx) project. Nat. Genet. 45(6), 580–585 (2013).
40. Mi, H., Muruganujan, A., Ebert, D., Huang, X. & Thomas, P. D. PANTHER version 14: More genomes, a new PANTHER GO-slim and improvements in enrichment analysis tools. Nucleic Acids Res. 47(D1), D419–D426 (2019).
41. Mi, H. & Thomas, P. In Protein Networks and Pathway Analysis (eds Nikolsky, Y. & Bryant, J.) 123–140 (Humana Press, Totowa, 2009).
42. Staley, J. R. et al. PhenoScanner: A database of human genotype–phenotype associations. Bioinformatics 32(20), 3207–3209 (2016).
43. Ritchie, M. E. et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 43(7), e47 (2015).
44. Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 57(1), 289–300 (1995).

J.S.G. was supported by the MGH NIH T32 Training Grant (HL007208), the John S. LaDue Memorial Fellowship, the MGH Physician Scientist Development Program, and the AHA-Harold Amos Medical Faculty Development Program. A.R. is funded by the NIH (R01AG061034) and the AHA (16SFRN31720000). A.L.B. is funded by the NIH/NHLBI (Ja).

These authors contributed equally: J. Sawalla Guseh and Timothy W. Churchill.

Cardiovascular Research Center, Division of Cardiology, Corrigan Minehan Heart Center, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, 02114-2696, USA: J. Sawalla Guseh, Ashish Yeri, Nicholas E. Houstis, Krishna G. Aragam & Anthony Rosenzweig
Cardiovascular Performance Program, Division of Cardiology, Corrigan Minehan Heart Center, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, 02114-2696, USA: J. Sawalla Guseh, Timothy W. Churchill, Claire Lo, Marcel Brown & Aaron L. Baggish
Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, 02138, USA: Claire Lo & Daniel E. Lieberman

Correspondence to Anthony Rosenzweig or Aaron L. Baggish.

Guseh, J.S., Churchill, T.W., Yeri, A. et al. An expanded repertoire of intensity-dependent exercise-responsive plasma proteins tied to loci of human disease risk. Sci. Rep. 10, 10831 (2020). https://doi.org/10.1038/s41598-020-67669-0